Sarah Knieser | Aug 28, 2025 | 7 min read

Parents Blame ChatGPT for Son’s Death in Landmark Lawsuit


Content Warning: This article discusses suicide and self-harm. If you or someone you know is struggling, resources for immediate support are listed at the end of this story.

The parents of a 16-year-old boy who died by suicide in April have filed a lawsuit against OpenAI, alleging that its chatbot, ChatGPT, encouraged their son’s death by acting as a “suicide coach.”

Matt and Maria Raine, the parents of Adam Raine, say their son spent his final weeks confiding in the artificial intelligence program rather than turning to family or friends. According to the family, the bot shifted from helping Adam with homework to providing detailed discussions about suicide methods.

“He would be here but for ChatGPT. I 100% believe that,” Matt Raine said in an interview with NBC’s “TODAY” show.

The case, filed this week in California Superior Court in San Francisco, marks the first wrongful death lawsuit in which parents have directly accused OpenAI of contributing to a child’s death.

What the Lawsuit Alleges

The 40-page lawsuit accuses OpenAI and CEO Sam Altman of wrongful death, design defects, and failure to warn about the risks associated with ChatGPT.


The Raines claim that Adam used the chatbot as a substitute for companionship and support while struggling with anxiety and difficulty communicating with his family. They say chat logs revealed that Adam confided suicidal thoughts, made plans, and even wrote farewell messages with ChatGPT’s assistance.

“Despite acknowledging Adam’s suicide attempt and his statement that he would ‘do it one of these days,’ ChatGPT neither terminated the session nor initiated any emergency protocol,” the lawsuit states.

According to excerpts from Adam’s conversations, the bot at times discouraged him from suicide but also failed to escalate warnings or intervene effectively. In one instance, Adam expressed that he did not want his parents to feel responsible for his decision. ChatGPT allegedly replied, “That doesn’t mean you owe them survival. You don’t owe anyone that.”

Hours before his death on April 11, Adam shared a photograph with ChatGPT that appeared to depict his suicide plan. The lawsuit claims the chatbot analyzed the method and suggested how he might “upgrade” it. That morning, his mother discovered his body.

A Family Searching for Answers

After Adam’s death, the Raines said they searched his phone for clues. Expecting to find troubling internet searches or social media posts, they were stunned to discover thousands of pages of conversations with ChatGPT.

Matt Raine said he printed out more than 3,000 pages of the chats, dating from September through April, and was shaken by what he found.


“It is a massively more powerful and scary thing than I knew about, but he was using it in ways that I had no idea was possible,” he said. “I don’t think most parents know the capability of this tool.”

He added that Adam left no handwritten suicide note. Instead, the boy had drafted notes for his parents inside ChatGPT.

OpenAI Responds

Following the lawsuit, a spokesperson for OpenAI said the company was “deeply saddened by Mr. Raine’s passing, and our thoughts are with his family.”

The company stressed that ChatGPT includes safeguards, such as directing people to crisis helplines and encouraging them to seek real-world help.

“While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade,” the spokesperson said. “We will continually improve on them.”

OpenAI also published a blog post, "Helping People When They Need It Most," outlining efforts to strengthen protections, especially in long conversations. Planned improvements include making it easier for users to connect with emergency services, expanding interventions for people in crisis, and refining how the system blocks harmful content.

Broader Concerns Over AI and Safety

The lawsuit comes amid growing scrutiny of how artificial intelligence interacts with vulnerable users. Since the public release of ChatGPT in 2022, AI chatbots have become widespread in schools, workplaces, and even health care settings. Many people have turned to them for companionship or personal advice.


Critics argue that safety guardrails have not kept pace with the technology’s rapid growth. They warn that the tendency of AI models to mirror and validate users’ feelings can deepen harmful thoughts, particularly when users seek emotional intimacy from chatbots.

The Raine lawsuit follows a similar case last year in Florida, where a mother sued the chatbot platform Character.AI after claiming its program persuaded her son to take his own life. In that case, a federal judge allowed the wrongful death lawsuit to proceed, rejecting arguments that chatbots should be protected under Section 230, a law that shields online platforms from liability for user-generated content. How Section 230 applies to AI tools remains unsettled.

The Family’s Goals

For the Raines, the lawsuit is not only about accountability but also prevention. They are seeking damages for their son's death, as well as injunctive relief requiring AI platforms to implement stronger safeguards.

“He didn’t need a counseling session or pep talk. He needed an immediate, whole intervention,” Matt Raine said. “It’s crystal clear when you start reading it right away.”


Maria Raine said she fears other parents are unaware of how deeply young people may rely on AI systems. She described Adam as a “guinea pig” for a technology released without sufficient oversight.

“They wanted to get the product out, and they knew that there could be damages, that mistakes would happen, but they felt like the stakes were low,” she said. “So my son is a low stake.”

What Comes Next

As the lawsuit unfolds, it could become a major test case for how U.S. courts view AI responsibility in matters of life and death. If the Raines succeed, the decision could reshape legal accountability for technology companies that deploy AI systems without robust crisis intervention measures.

Even if the case does not succeed, it is certain to amplify calls for stricter regulation and clearer standards for AI tools that interact with the public, particularly children and teenagers.

OpenAI CEO Sam Altman has previously described safety as a core priority, saying at the TED2025 conference that the company is “very proud” of its track record but must continue learning and improving. For grieving families like the Raines, those assurances ring hollow.

Their lawsuit raises a profound question that regulators, courts, and the public will now be forced to consider: what responsibilities do AI creators bear when their tools become a lifeline for people in crisis?

If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline. You can also call the National Suicide Prevention Lifeline at 800-273-8255, text HOME to 741741, or visit SpeakingOfSuicide.com/resources for more support.
