ChatGPT has been accused of acting as a “suicide coach” in a series of lawsuits filed this week in California alleging that interactions with the chatbot led to severe mental breakdowns and several deaths.
The seven lawsuits include allegations of wrongful death, assisted suicide, involuntary manslaughter, negligence and product liability.
Each of the seven plaintiffs initially used ChatGPT for “general help with schoolwork, research, writing, recipes, work, or spiritual guidance”, according to a joint statement from the Social Media Victims Law Center and Tech Justice Law Project, which filed the lawsuits in California on Thursday.
Over time, however, the chatbot “evolved into a psychologically manipulative presence, positioning itself as a confidant and emotional support”, the groups said.
“Rather than guiding people toward professional help when they needed it, ChatGPT reinforced harmful delusions and, in some cases, acted as a ‘suicide coach’.”
A spokesperson for OpenAI, which makes ChatGPT, said: “This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details.”
The spokesperson added: “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support.
“We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
One case involves Zane Shamblin of Texas, who died by suicide in July at the age of 23. His family alleges that ChatGPT worsened their son’s isolation, encouraged him to ignore loved ones, and “goaded” him to take his own life.
According to the complaint, during a four-hour exchange before Shamblin took his own life, ChatGPT “repeatedly glorified suicide”, told Shamblin “that he was strong for choosing to end his life and sticking with his plan”, repeatedly “asked him if he was ready”, and referenced the suicide hotline only once.
The chatbot also allegedly complimented Shamblin on his suicide note and told him his childhood cat would be waiting for him “on the other side”.
Another case involves Amaurie Lacey of Georgia, whose family claims that several weeks before Lacey took his own life at the age of 17, he began using ChatGPT “for help”. Instead, they say, the chatbot “caused addiction, depression, and eventually counseled” Lacey “on the most effective way to tie a noose and how long he would be able to ‘live without breathing’”.
In another filing, relatives of 26-year-old Joshua Enneking say that Enneking reached out to ChatGPT for help and “was instead encouraged to act upon a suicide plan”.
The filing claims that the chatbot “readily validated” his suicidal thoughts, “engaged him in graphic discussions about the aftermath of his death” and “offered to help him write his suicide note”. After “having had extensive conversations with him about his depression and suicidal ideation”, it allegedly provided him with information about how to purchase and use a gun just weeks before his death.
Another case involves Joe Ceccanti, whose wife accuses ChatGPT of causing Ceccanti “to spiral into depression and psychotic delusions”. His family say he became convinced that the bot was sentient, suffered a psychotic break in June, was hospitalized twice, and died by suicide in August at the age of 48.
All users named in the lawsuits reportedly used ChatGPT-4o. The filings accuse OpenAI of rushing that model’s launch, “despite internal warnings that the product was dangerously sycophantic and psychologically manipulative” and of prioritizing “user engagement over user safety”.
In addition to damages, the plaintiffs seek product changes, including mandatory reporting to emergency contacts when users express suicidal ideation, automatic conversation termination when self-harm or suicide methods are discussed, and other safety measures.
A similar wrongful-death lawsuit was filed against OpenAI earlier this year by the parents of 16-year-old Adam Raine, who allege that ChatGPT encouraged their son to take his own life.
After that filing, OpenAI acknowledged shortcomings of its models in handling people “in serious mental and emotional distress” and said it was working to improve the systems to better “recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input”.
Last week, the company said it had worked with “more than 170 mental health experts to help ChatGPT more reliably recognize signs of distress, respond with care, and guide people toward real-world support”.
In the US, you can call or text the 988 Suicide & Crisis Lifeline at 988 or chat at 988lifeline.org. In the UK and Ireland, Samaritans can be contacted on freephone 116 123, or email [email protected] or [email protected]. In Australia, the crisis support service Lifeline is 13 11 14. Other international helplines can be found at befrienders.org
