Google is racing to manually disable AI Overviews for specific searches as social media buzzes with examples of the product's bizarre answers, which range from suggesting users put glue on their pizza to advising them to eat rocks. That scramble is presumably why many of the screenshotted answers seem to vanish shortly after the memes spread across social networks.

The chaotic rollout is surprising given that Google has been testing AI Overviews for a year. The feature launched in beta in May 2023 as the Search Generative Experience and, according to CEO Sundar Pichai, has handled over a billion queries since then. Pichai has also said Google cut the cost of delivering AI answers by 80 percent over that period, crediting advances in hardware, engineering, and technical breakthroughs. That optimization may have come prematurely, before the underlying technology was ready.
“A company once known for being at the cutting edge and shipping high-quality stuff is now known for low-quality output that’s getting meme’d,” an anonymous AI founder told The Verge.
Despite the criticism, Google maintains that its AI Overview product generally provides high-quality information. "Many of the examples we've seen have been uncommon queries, and we've also seen examples that were doctored or that we couldn't reproduce," Google spokesperson Meghann Farnsworth said. She confirmed that the company is taking swift action to remove AI Overviews on certain queries and is using these examples to improve its systems.
AI expert Gary Marcus, an emeritus professor at NYU, argues that getting these systems from 80 percent accuracy to 100 percent is the hard part. Closing that final gap, he says, requires reasoning and human-like fact-checking, which might demand artificial general intelligence (AGI).
Google is under significant pressure to compete with other AI-powered search engines, including Bing and potential new entrants like OpenAI. The company has ambitious plans for AI Overviews, including multistep reasoning for complex queries and AI-organized results pages. But for now, its reputation hinges on getting the basics right, and the recent mishaps suggest there is still a long way to go. "These models are constitutionally incapable of doing sanity checking on their own work, and that's what's come to bite this industry in the behind," Marcus concluded.