Google has a new AI search tool, but it is giving some strange and wrong answers. For example, it told users to put glue on pizza to make the cheese stick, and it said geologists recommend eating rocks daily. These answers appear to come from joke posts and satirical websites. A Google spokesperson said these are just a few isolated bad examples, but many people are criticizing the new AI tool for giving inaccurate information.
On social media, people are making fun of the new AI tool and its wrong answers.
But Google says the bad answers are very rare. Most of the time, the AI gives good information with links for more details.
Google says it has taken action where answers violated its policies, and it is using the mistakes to improve its systems.
Google has had AI problems before. In February, it paused the Gemini chatbot’s image generation of people after criticism that its results were historically inaccurate and too “woke.”
Gemini’s earlier version, Bard, also had a bad start.
Google started testing its new AI summaries in April for some UK users.
In May, it launched the feature for all US users.
The AI tool provides a summary instead of a long list of websites.
This is meant to make searching easier, but Google warns it is still experimental.
Many people will likely use and trust these AI summaries, because Google handles over 90% of global web searches.
Search is central to how Google makes money, so the company needs to protect and improve it.
Many experts think AI-powered search is the future. But AI uses a lot of energy, which is bad for the environment.
With AI, you don’t need to look through many websites; the AI just gives you one answer. But this only works if you can trust that answer.
Google is getting a lot of criticism for its AI’s mistakes. But this “hallucination” problem affects all AI systems built on large language models, not just Google’s.
A reporter asked Google’s AI whether gasoline could cook spaghetti faster. Strangely, the AI said you cannot, but then offered a recipe for “spicy gasoline spaghetti.”
Why AI Makes Errors
AI errors can happen for many reasons. Here are the main causes:
1. Problems with Data
- Bias in Data: If the training data is biased, the AI will learn and repeat that bias.
- Not Enough Data: AI needs a lot of data to learn well. Too little data can lead to mistakes.
- Noisy Data: Data with errors or irrelevant information can confuse the AI (see the first code sketch after this list).
2. Issues with Algorithms and Models
- Complex Models: Overly complex models can overfit, learning details of the training data that do not apply elsewhere, while overly simple models can underfit and miss important patterns (see the second code sketch after this list).
- Wrong Model Choice: Using the wrong type of model for a task leads to poor performance.
- Hyperparameter Tuning: Incorrect settings can hurt the model’s accuracy.
3. Implementation and Operational Problems
- Integration Errors: Mistakes can occur when adding AI to existing systems.
- Real-World Differences: AI trained in controlled settings might fail in unpredictable real-world conditions.
4. Human Factors
- Misinterpreting Results: Users might misunderstand AI outputs.
- Over-Reliance on AI: Trusting AI too much can lead to ignoring its errors.
5. Security Issues
- Adversarial Attacks: Specially crafted inputs can trick AI into making mistakes.
- Data Poisoning: Corrupted data during training can degrade the AI’s performance (the first code sketch after this list also shows this effect).
6. Ethical and Regulatory Issues
- Lack of Ethics: Ignoring ethical concerns can lead to harmful AI behavior.
- Regulatory Non-Compliance: Not following laws can cause AI to operate improperly.
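To make the “noisy data” point concrete, here is a minimal, purely illustrative sketch in Python. It has nothing to do with Google’s systems: it uses the scikit-learn library, a small synthetic dataset, and invented corruption rates to show how mislabeled training data lowers a simple model’s test accuracy. Because data poisoning works by deliberately corrupting training data, the same mechanism illustrates that problem too.

```python
# Illustrative sketch only: how mislabeled (noisy or poisoned) training data
# hurts a simple classifier. The dataset and corruption rates are invented
# for this example; nothing here comes from Google's AI search.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# A small synthetic binary classification problem.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def accuracy_with_corrupted_labels(flip_rate: float) -> float:
    """Mislabel a fraction of the positive class, then measure test accuracy."""
    rng = np.random.default_rng(0)
    y_noisy = y_train.copy()
    flip = (y_noisy == 1) & (rng.random(len(y_noisy)) < flip_rate)
    y_noisy[flip] = 0                                  # corrupted labels
    model = LogisticRegression(max_iter=1000).fit(X_train, y_noisy)
    return accuracy_score(y_test, model.predict(X_test))

for rate in (0.0, 0.25, 0.5):
    print(f"corrupted labels {rate:.0%}: test accuracy {accuracy_with_corrupted_labels(rate):.3f}")
```

On a typical run, test accuracy falls as the corruption rate rises; the exact numbers depend on the randomly generated data.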
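The “Complex Models” point can be shown in the same hypothetical way. The sketch below (again using scikit-learn and invented numbers) fits polynomial models of different complexity to the same noisy data: on a typical run, the simplest model has high error everywhere (underfitting), while the most flexible one fits the training data very well but does worse on new data (overfitting).

```python
# Illustrative sketch only: underfitting vs. overfitting.
# The "true" cubic curve, the noise level, and the polynomial degrees are
# invented for this example.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
y = X.ravel() ** 3 - 2 * X.ravel() + rng.normal(0, 3, size=60)   # noisy cubic
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for degree in (1, 3, 15):   # too simple, about right, too flexible
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(X_train, y_train)
    train_err = mean_squared_error(y_train, model.predict(X_train))
    test_err = mean_squared_error(y_test, model.predict(X_test))
    print(f"degree {degree:2d}: train MSE {train_err:8.2f}   test MSE {test_err:8.2f}")
```

Picking the degree by hand here plays the same role as hyperparameter tuning: the wrong setting, in either direction, hurts accuracy on new data.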
Final Thoughts
We don’t know how often Google’s AI gives good answers, because people mostly share the funny bad ones online. But AI search needs to handle all kinds of questions, even strange ones.
Other tech companies are also facing criticism over their new AI products.
In the UK, the privacy regulator is examining Microsoft’s plan for its new AI PCs to take continuous screenshots of user activity.
And the actress Scarlett Johansson criticized OpenAI for using a voice similar to hers in ChatGPT without permission.