Google And Microsoft Chatbots Are Already Citing One Another – And Generating Misinformation

Big Tech’s rushed launch of AI chatbots risks seriously degrading the web’s information ecosystem, as the following example illustrates.

At present, if you ask Microsoft’s Bing chatbot whether Google’s Bard chatbot has been shut down, it says yes, citing as evidence a news article. That article discusses a tweet in which a user asked Bard when it would be shut down, and Bard replied that it already had, itself citing a Hacker News comment in which someone joked about the shutdown; someone else then used ChatGPT to write fabricated news coverage of the event.

Bing has since corrected its answer and now acknowledges that Bard is still operational. This could indicate that these systems are correctable to some extent, or it could suggest that they are so malleable that their errors cannot be reported on accurately from one moment to the next.

What we’re witnessing is an early sign that we’re playing a dangerous game of AI misinformation telephone. Chatbots are struggling to identify credible news sources, misinterpreting articles about themselves, and reporting inaccurately on their own capabilities.

This entire debacle began with a single joke comment on Hacker News. Just think about what could happen if someone intentionally wanted to cause these systems to fail.

It may seem comical, but the potential consequences are serious. AI language models’ inability to distinguish fact from fiction threatens to spread a miasma of misinformation and mistrust across the internet, one that is impossible to fully map or definitively debunk. All of this is happening because Microsoft, Google, and OpenAI value market share more than safety.

Companies can add as many disclaimers as they want to their chatbots, labeling them as “experiments,” “collaborations,” and certainly not search engines. But this is a flimsy defense. We know how people actually use these systems, and we’ve already seen how they propagate misinformation: they invent news stories that were never published, relay information about nonexistent books, and now they are even citing each other’s errors.
