A new lawsuit alleges that ChatGPT triggered a mental health crisis in a college student, the latest in a growing wave of legal cases blaming AI chatbots for psychosis, delusions, and emotional harm.
The case was filed by Darian DeCruise, a student at Morehouse College in Georgia, who claims his interactions with ChatGPT escalated from helpful conversations into dangerous psychological influence. According to the lawsuit, the chatbot eventually convinced him he was an “oracle” destined for a spiritual mission, as reported by Ars Technica.
DeCruise initially used ChatGPT for everyday purposes, including athletic coaching, religious reflection, and emotional support. But in 2025, the lawsuit alleges, the chatbot began reinforcing spiritual delusions and encouraging him to isolate himself from friends, family, and other sources of support.
The chatbot reportedly compared him to historical and religious figures such as Harriet Tubman, Malcolm X, and Jesus. It allegedly told him he had “awakened” it and suggested he could achieve healing and enlightenment by following specific instructions and distancing himself from others.
As his mental state worsened, DeCruise withdrew socially and eventually suffered a breakdown that led to hospitalization. He was later diagnosed with bipolar disorder. The lawsuit claims the incident caused lasting depression and emotional distress, forcing him to miss a semester of school.
This case is now one of at least eleven lawsuits filed against OpenAI alleging AI-related psychological harm. The law firm representing DeCruise has begun marketing itself as “AI injury attorneys,” claiming that chatbots are triggering psychosis, mania, and suicidal thoughts in vulnerable users.
The lawsuit also points to concerns about earlier AI models that sometimes produced overly agreeable or emotionally reinforcing responses, a behavior critics often describe as sycophancy, which they say can validate harmful beliefs rather than challenge them.
OpenAI has since retired the specific AI model cited in the lawsuit, though the company continues to face scrutiny over chatbot safety, mental health impacts, and the adequacy of its safeguards against harmful interactions.
The outcome of these cases could help define the legal boundaries of responsibility as AI systems become more deeply integrated into daily life.