A Man Has Sued OpenAI After ChatGPT Said He Was Embezzling Money

OpenAI’s chatbot, ChatGPT, has found itself at the center of a lawsuit for allegedly providing false information. Fred Riehl, the editor-in-chief of a gun website, asked ChatGPT for information on The Second Amendment Foundation v. Robert Ferguson. To his surprise, the AI-powered chatbot generated a summary that implicated Mark Walters, a Georgia radio host, in embezzling funds from The Second Amendment Foundation (SAF).

In response, Walters filed a lawsuit against OpenAI in Gwinnett County Superior Court on June 5th, claiming negligence and the publication of libelous material. Walters vehemently denied any involvement with SAF and asserted that the details ChatGPT provided were entirely false. The real complaint in the SAF v. Ferguson case does not mention Walters or involve any financial accounting claims. This kind of fabricated output, known as a “hallucination,” is a recognized problem with language models like the one behind ChatGPT.

OpenAI acknowledges that its models hallucinate and has expressed its commitment to addressing the issue. Whether OpenAI can be held liable for damages caused by ChatGPT’s false output, however, remains to be determined. To prevail, a plaintiff must show that the misinformation actually caused them harm.

In Walters’ case, UCLA School of Law professor Eugene Volokh explained that, to succeed, the lawsuit would need to establish “actual malice” — that is, that OpenAI published the statements knowing they were false or with reckless disregard for the truth.

Whatever the outcome of Walters’ case, it highlights a larger issue that is likely to arise more often in the future. As AI language models mature, the possibility that they produce inaccurate or misleading information creates problems for both users and developers. Striking a balance between the advantages of AI technology and the obligation to deliver accurate, trustworthy information remains crucial.

Although OpenAI’s commitment to tackling these problems is admirable, continual monitoring and improvements to AI systems are needed to reduce the likelihood of hallucinations and the harms that can follow from them.
