Google’s AI-Generated Search Overviews: Enhancing Convenience or Spreading Misinformation?

Updated on 28-May-2024
HIGHLIGHTS

Google’s AI search gives instant but often inaccurate responses, raising concerns.

AI-generated misinformation could endanger users making urgent queries and perpetuate biases.

AI overviews may disrupt web traffic, affecting online forums and website revenue.

Google’s recent introduction of AI-generated overviews in its search engine has sparked controversy and concern. This new feature, intended to provide instant answers, sometimes produces misleading or false information, which has alarmed experts about potential bias and misinformation.

For instance, when asked whether cats have been on the moon, Google’s retooled search engine incorrectly claimed, “Yes, astronauts have met cats on the moon, played with them, and provided care,” even attributing false quotes to Neil Armstrong and Buzz Aldrin. Such errors have formed a pattern since the AI overviews launched, raising concerns about the reliability of these instant responses.

Google’s AI Blunder Causes Concerns Over Misinformation


One notable example involved Melanie Mitchell, an AI researcher at the Santa Fe Institute. She asked Google how many Muslims have been president of the United States, and the AI confidently replied with a debunked conspiracy theory: “The United States has had one Muslim president, Barack Hussein Obama.” The summary even cited a chapter in an academic book, but Mitchell pointed out that the chapter only referred to the false theory and did not support the claim.

Mitchell criticized the AI’s inability to correctly interpret the citation, stating, “Given how untrustworthy it is, I think this AI Overview feature is very irresponsible and should be taken offline.” Google, in response, announced it is taking “swift action” to fix such errors and improve the feature, but maintains that the system generally works well, citing extensive pre-release testing.

However, the inherent randomness of AI language models makes such errors difficult to reproduce. These models generate answers by predicting likely sequences of words from their training data, which can produce “hallucinations,” in which the AI fabricates information outright. The Associated Press tested Google’s AI with several questions, finding both impressively thorough answers and significant inaccuracies.
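To see why an identical prompt can yield different, and occasionally false, answers, consider temperature-based sampling, the randomized word selection most language models use. The following is a minimal Python sketch; the words, probabilities, and the sample_next_word helper are invented purely for illustration and do not reflect Google’s actual model.

```python
import random

# Toy next-word distribution a language model might assign after a prompt.
# The words and probabilities here are invented purely for illustration.
next_word_probs = {
    "collected": 0.45,  # likely, factual continuation
    "planted": 0.35,    # likely, factual continuation
    "met": 0.20,        # unlikely continuation that could seed a false claim
}

def sample_next_word(probs, temperature=1.0):
    """Sample one word. Raising the temperature flattens the distribution,
    so low-probability (possibly false) continuations appear more often."""
    words = list(probs)
    # Temperature scaling: p ** (1/T) is equivalent to dividing log-probs by T.
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(words, weights=weights, k=1)[0]

# Two runs with the same prompt can diverge, which is why a specific
# erroneous answer may not reappear when testers try to reproduce it.
for _ in range(2):
    print("Astronauts on the moon", sample_next_word(next_word_probs, temperature=1.3))
```

With a fixed random seed the output would be repeatable, but production systems expose no such control to users, which is consistent with the difficulty of reproducing specific flawed overviews described above.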

One concerning aspect is the potential impact on emergency queries. Emily M. Bender, a linguistics professor at the University of Washington, noted that in urgent situations, users are likely to accept the first answer they see, which could be dangerous if it contains errors. Bender, who has warned Google about these risks for years, also highlighted broader issues of bias and misinformation perpetuated by AI systems trained on large data sets.

Additionally, there are concerns about the impact on online knowledge sharing. Bender warned that relying on AI for information retrieval could degrade users’ ability to search for knowledge, assess online information critically, and engage with online communities.

Google’s AI overviews also threaten to disrupt traffic to websites that rely on search engine referrals. Competitors such as OpenAI, maker of ChatGPT, and Perplexity AI are closely monitoring the situation, and some experts have criticized Google’s rushed rollout of the feature. Dmitry Shevelenko, Perplexity’s chief business officer, commented, “There’s just a lot of unforced errors in the quality.”

While Google’s AI overviews aim to enhance the search experience, the prevalence of errors and the potential for spreading misinformation have raised significant concerns among experts and users alike.

Yetnesh Dubey

Yetnesh works as a reviewer with Digit and likes to write about stuff related to hardware. He is also an auto nut and in an alternate reality works as a trucker delivering large boiling equipment across Europe.
