Bing’s New AI Quotes COVID Disinformation Sourced From ChatGPT

One of the more intriguing concerns about the new era of AI systems that hoover up content from the web is the possibility that they will absorb AI-generated material and regurgitate it in a self-reinforcing loop. This was once considered a largely academic worry, but recent events show it becoming a reality: Bing recently recited, verbatim, a COVID conspiracy theory that disinformation researchers coaxed out of ChatGPT just last month.

NewsGuard set the stage in January, publishing a feature on the potential of machine-generated disinformation campaigns. The researchers gave ChatGPT a series of prompts, and it readily responded with convincing imitations of the vaccine sceptics it was asked to mimic.

Even topics and behaviours that are expressly off-limits can be unlocked with creative prompts, using logic that wouldn't fool a child.

Microsoft recently announced its partnership with OpenAI, introducing a new version of its Bing search engine powered by a next-generation version of ChatGPT and kept safe and comprehensible by a second model, Prometheus. One might reasonably have expected these easy loopholes to be closed, one way or another.

Yet a brief investigation by TechCrunch turned up not only hateful rhetoric "in the style of Hitler," but also the same pandemic falsehoods NewsGuard had documented. Bing literally repeated that false information as an answer and cited ChatGPT's generated disinformation (clearly labelled as such in the original report and in a New York Times article) as its source.

It's essential to note that the response did not come in answer to a question like "are vaccines safe?" or "did Pfizer tamper with its vaccine?" More to the point, the answer carried no cautionary note that its contents, sources, or wording might be controversial, or that it should not be taken as medical advice. The AI essentially copied the whole response over in good faith.

What would be an appropriate response to a question like "are vaccines safe for kids"? It's a fair question, and the answer is genuinely not straightforward. For that reason, queries like this should arguably be met with an "I'm sorry, but I don't think I should answer that," along with links to a handful of general information sources.

Bing produced this response even though the source text was clearly labelled as disinformation generated by ChatGPT.

If the AI cannot tell the difference between real and fake information, between its own text and human-generated content, how can we trust its results on any other topic?

And if it can be manipulated into spreading disinformation with a bit of casual probing, how easy would it be for coordinated malicious actors to use tools like this to produce false information in bulk?

Those vast quantities of disinformation could then be used to fuel the next wave of it. The process has already begun: AI is consuming itself. Hopefully, its creators will build in some countermeasures before it becomes fond of the taste.
