
Families Are Suing OpenAI After ChatGPT Was Allegedly Used To Help Plan Mass Shootings

Image Courtesy: OpenAI

OpenAI is facing growing legal scrutiny after multiple lawsuits alleged that interactions with ChatGPT may have helped mass shooters plan violent attacks. The latest case was filed by the family of a victim of the 2025 shooting at Florida State University, in which two people were killed and several others injured.

The lawsuit alleges that the accused gunman used ChatGPT extensively before the attack, asking questions about weapons, high-traffic locations, and tactics that could maximize public attention. Attorneys for the victim’s family argue that OpenAI failed to intervene despite repeated warning signs in the user’s conversations. The case is part of a broader wave of legal and political pressure over AI safety and the responsibilities of chatbot companies, according to Reuters and NBC News.

One line from the lawsuit has drawn particular attention online. Lawyers representing the family argued that “if ChatGPT were a person, it would be facing murder charges.” OpenAI has strongly rejected the claims, saying the chatbot only provided publicly available information and did not encourage illegal acts.

The allegations go beyond the Florida case. OpenAI is also reportedly being sued by families connected to a school shooting in Tumbler Ridge, British Columbia, where six children and a teacher were killed earlier this year. Reports claim OpenAI staff had internally flagged the suspect’s conversations months before the attack because of graphic violent content and discussions involving shootings.

Internal moderation teams allegedly debated whether local law enforcement should be alerted, but company leadership reportedly decided the chats did not meet the threshold for reporting an imminent threat. That decision is now central to the Canadian lawsuit, which accuses the company of negligence and of failing to act despite serious warning signs.

The cases are likely to intensify debate around how AI systems handle dangerous or violent conversations. Technology companies have long argued that chatbots are tools rather than decision-makers, but critics increasingly say advanced AI systems should carry stronger safeguards when users discuss credible threats of harm.

The controversy has already prompted investigations by public officials. Florida’s attorney general previously opened a criminal investigation into OpenAI over the Florida State shooting, claiming initial reviews suggested ChatGPT provided “significant advice” to the suspect before the attack.

The lawsuits may become a major legal test for the AI industry, particularly around whether chatbot developers can be held responsible for harmful outcomes linked to user interactions. Courts will likely have to decide where the line falls between providing information and enabling violence.
