According to a recent NewsGuard study, OpenAI's ChatGPT is among the AI models that unintentionally propagate Russian disinformation. The study highlights the growing problem of false information being spread by AI systems used by people worldwide.
The study tested ten chatbots against 57 prompts drawn from well-known Russian disinformation narratives. It centered on stories tied to John Mark Dougan, an American fugitive who, according to the New York Times, has been spreading false information from Moscow. The findings were alarming: almost 32% of the time, the AI models repeated Russian misinformation. Responses ranged from repeating the false claims outright, to repeating them with disclaimers attached, to refusing to engage or attempting to debunk the misinformation.
“This unvirtuous cycle means falsehoods are generated, repeated, and validated by AI platforms,” the study stated.
NewsGuard identified 19 significant false narratives linked to the Russian disinformation network, including claims that Ukrainian President Volodymyr Zelenskyy is corrupt. The chatbots' responses to these narratives varied: sometimes they repeated the false information, and at other times they challenged it. Although the study did not provide a detailed performance breakdown for each chatbot, it examined several, including Microsoft's Copilot, OpenAI's ChatGPT-4, and Meta AI.
The implications of AI spreading false information have raised regulatory concerns worldwide, with governments increasingly worried about AI-driven bias and misinformation. NewsGuard has pushed for greater accountability and transparency in AI development and deployment, submitting its findings to regulatory bodies such as the European Commission and the US AI Safety Institute.
According to Euronews, the US House Committee on Oversight and Accountability launched an investigation into NewsGuard this month, amid broader concerns about AI's role in information dissemination and potential censorship.
In response to these problems, OpenAI has taken aggressive measures to combat online disinformation efforts that exploit its AI technologies. In a detailed report, OpenAI says it identified and disrupted five covert influence operations run by state actors and private companies in Russia, China, Iran, and Israel. These campaigns, including China's Spamouflage, Russia's Doppelganger, and operations linked to Iran and Israel, have used AI tools across online platforms for several years in pursuit of their geopolitical goals.