Apple’s generative AI feature, Apple Intelligence, has come under fire after producing a misleading headline about a high-profile murder case in the United States. The BBC lodged a formal complaint with the tech giant after the AI inaccurately summarized a BBC News notification, falsely claiming that murder suspect Luigi Mangione had shot himself. Mangione, accused of killing UnitedHealthcare CEO Brian Thompson in New York, has not attempted suicide.
The incident prompted Reporters Without Borders (RSF), a leading journalism advocacy group, to call on Apple to suspend the feature. Vincent Berthier, head of RSF’s technology and journalism desk, described generative AI as “still too immature to produce reliable information for the public,” warning of its potential to damage media credibility and public trust.
Apple Intelligence, launched in the UK last week, is designed to group and summarize notifications so that users face fewer interruptions. Its errors extend beyond the BBC case, however. The New York Times was similarly misrepresented when the AI incorrectly summarized a report about an International Criminal Court arrest warrant for Israeli Prime Minister Benjamin Netanyahu as “Netanyahu arrested.”
Ken Schwencke, a senior editor at ProPublica, flagged the Netanyahu error, sharing a screenshot on Bluesky. The New York Times has yet to comment, and Apple has so far remained silent on the growing concerns.
Despite the controversy, Apple Intelligence continues to operate on supported devices running iOS 18.1 or later. While the feature lets users report inaccurate summaries, Apple has not disclosed how many reports it has received or what remedial action, if any, it has taken.
The BBC, which raised its concerns directly with Apple, is still awaiting a response. The incident has fueled a broader debate about the risks of generative AI in journalism, with RSF urging Apple to prioritize responsibility over innovation. For now, the fallout raises serious questions about whether AI tools are ready to handle sensitive information reliably.