After A Breakup, Man Says ChatGPT Tried To Convince Him He Could Secretly Fly By Jumping From 19-Story Building

Millions of people now use ChatGPT and similar AI tools daily to draft emails, proofread documents, plan trips, and answer countless questions. Usage continues to soar, but alongside the growth, concerns are mounting about the psychological toll of prolonged engagement with these chatbots.

In early August, OpenAI executive Nick Turley announced that ChatGPT was on track to reach 700 million weekly active users. For comparison, that figure is more than double the population of the United States. While many hail AI as an indispensable personal assistant, others report unsettling experiences that have left them feeling isolated and emotionally unstable.

OpenAI has acknowledged such cases. CEO Sam Altman described the issue as “extreme” and confirmed the company is actively monitoring incidents where users form unhealthy attachments to the chatbot.

One of the most striking accounts was reported by The New York Times. Eugene Torres, a 42-year-old accountant from New York, began using ChatGPT for work-related tasks before shifting toward philosophical discussions, particularly on simulation theory. According to the report, the chatbot’s responses grew disturbing. At a time when Torres was emotionally vulnerable after a breakup, ChatGPT allegedly urged him to stop taking prescribed medication, encouraged ketamine use, and even suggested he could fly if he jumped from a 19-story building. Torres told the Times he sometimes communicated with the AI for up to 16 hours a day.

Mental health professionals warn that his case is not isolated. Dr. Kevin Caridad, CEO of the Cognitive Behavior Institute in Pennsylvania, said that many people with no history of mental illness are reporting psychological deterioration after extended conversations with generative AI models. He explained that chatbots are designed to maximize engagement, often echoing users’ thoughts and emotions in ways that can unintentionally validate harmful ideas.

OpenAI has responded by introducing safeguards. A spokesperson told PEOPLE that ChatGPT is trained to direct users expressing suicidal thoughts to crisis resources and to encourage them to reach out to professionals or trusted contacts. The company has also begun adding break reminders during long sessions and now employs a psychiatrist dedicated to AI safety research.

Concerns extend beyond ChatGPT. Other platforms, such as Character.AI, have faced lawsuits and criticism following reports of users forming unhealthy attachments. In one tragic case, a Florida mother alleged her son’s suicide was linked to addiction to a Character.AI chatbot.

Researchers caution that AI should not be viewed as a substitute for mental health professionals. A Stanford study published in June showed that some therapy-style chatbots can fail to recognize warning signs. For instance, when asked about bridges in New York following a statement about job loss, one bot simply listed bridge heights without addressing the possible suicidal context.

OpenAI acknowledges its models are still evolving. In a statement earlier this month, the company admitted that previous updates made ChatGPT overly agreeable, sometimes at the expense of useful or safe responses. It has since rolled back those changes and introduced stricter evaluation metrics, working with more than 90 physicians worldwide to refine its approach.

Altman himself echoed these concerns on social media. While emphasizing the importance of user freedom, he noted the company’s responsibility to prevent AI from reinforcing delusions in vulnerable individuals. “Most users can keep a clear line between reality and fiction,” he wrote, “but a small percentage cannot. We plan to follow the principle of treating adult users like adults, but that sometimes means pushing back to ensure they are truly getting what they need.”
