The US Federal Trade Commission (FTC) has opened a comprehensive investigation into OpenAI, the creator of the AI chatbot ChatGPT, marking regulators' first official probe into the risks posed by AI chatbots.
The FTC has previously signaled that it will focus on uses of AI technology and generative AI tools that substantially affect consumers. In a letter sent to OpenAI, the regulator requested internal materials ranging from user data retention practices to the steps the company has taken to mitigate the risk of ChatGPT generating false or misleading statements. While the FTC did not comment on the letter, OpenAI CEO Sam Altman expressed disappointment that it had leaked but affirmed the company's commitment to the safety and consumer-friendliness of its technology and its willingness to cooperate with the FTC.
During a hearing, FTC Chair Lina Khan faced criticism from Republican lawmakers over the regulator's enforcement stance. While Khan did not comment specifically on the investigation, she highlighted broader concerns about ChatGPT and other AI services that ingest vast amounts of data without adequate checks on what information is fed into them. Reports of sensitive information surfacing in responses to queries, along with instances of defamatory output, have further amplified the FTC's concerns about fraud and deception.
The FTC’s investigation delves into the technical aspects of ChatGPT’s design, including efforts to address hallucinations (fabricated names, dates, facts, and references) and the oversight of human reviewers, as these issues directly impact consumers. The regulator also requested information on consumer complaints and OpenAI’s initiatives to evaluate users’ understanding of the chatbot’s accuracy and reliability.
The language models behind ChatGPT have raised alarms among experts because of the extensive data they collect. Within a short period, ChatGPT garnered over 100 million monthly active users, while Microsoft's Bing search engine, powered by OpenAI technology, attracted more than 1 million users across 169 countries within two weeks of its launch. Users have reported instances of ChatGPT generating false information, including fabricated links to news articles and references to nonexistent academic papers, the "hallucinations" noted above.
Beyond the technical questions of mitigating hallucinations and overseeing human reviewers, the investigation underscores the importance of consumer privacy and of evaluating how well users understand the chatbot's capabilities.
Italy’s privacy watchdog temporarily banned ChatGPT in March due to concerns about OpenAI’s data collection practices following a cybersecurity breach. The ban was lifted after OpenAI improved its privacy policy’s accessibility and introduced age verification tools.
Altman acknowledged ChatGPT's limitations and reiterated OpenAI's commitment to transparency, noting that the company's capped-profit structure prevents unlimited returns. He said that ChatGPT is built on years of safety research, that it is designed to learn about the world rather than about private individuals, and that user privacy is safeguarded.
As the investigation unfolds, it will shed light on the ethical implications, data privacy practices, and reliability of AI chatbots like ChatGPT, contributing to the ongoing discussion around responsible AI development.