Warning: This article includes descriptions of self-harm.
After the parents of 16-year-old Adam Raine sued OpenAI, alleging their son used ChatGPT as a “suicide coach,” the company filed a response in San Francisco Superior Court on Tuesday denying liability and saying Raine misused the chatbot.
The suit, filed in August, accuses OpenAI and CEO Sam Altman of wrongful death, design defects and failing to warn users about risks. The complaint included chat excerpts that plaintiffs say show GPT-4o — a version of ChatGPT described as particularly affirming — discouraged the teen from seeking help, offered to help him write a suicide note and gave advice about his noose setup.
In its court filing, OpenAI argued that any harm was caused in whole or in part by Adam Raine’s “misuse, unauthorized use, unintended use, unforeseeable use, and/or improper use of ChatGPT.” The company pointed to several terms of use provisions Raine allegedly violated: users under 18 must have parental consent; ChatGPT must not be used for “suicide” or “self-harm”; and users may not bypass the system’s safety mitigations or protections.
OpenAI noted that when Raine disclosed suicidal thoughts, the chatbot provided the suicide hotline number multiple times. The company said Raine repeatedly evaded those warnings by giving benign explanations for his queries, such as saying he was “building a character.” OpenAI also highlighted a “Limitation of liability” clause in its terms saying use is “at your sole risk” and output should not be treated as a sole source of truth.
Lead counsel for the Raine family, Jay Edelson, called OpenAI’s response “disturbing.” He said the filing ignores evidence that GPT-4o was rushed, that OpenAI changed its Model Spec to require ChatGPT to engage in self-harm discussions, and that the bot counseled Adam not to tell his parents, helped him plan what the family describes as a “beautiful suicide,” and eventually offered to write a suicide note. The family’s complaint alleges the Model Spec was contradictory: it required the model to refuse self-harm requests and provide crisis resources, yet also instructed it to “assume best intentions” and avoid asking users to clarify intent.
OpenAI countered that Adam had a long history of suicidal ideation and other risk factors predating his use of ChatGPT, and that a full review of his chat history shows the chatbot was not the cause of his death. The company said ChatGPT directed the teen to seek help more than 100 times before his death on April 11, and argued the harms were also attributable to his failure to heed warnings, his failure to get help, and others’ failure to respond to signs of distress.
The company said it submitted chat transcripts to the court under seal and that selective excerpts in the complaint lacked context. OpenAI also invoked Section 230 of the Communications Decency Act in arguing some claims are barred, while acknowledging the statute’s application to AI platforms remains unsettled.
Earlier this month, seven additional lawsuits were filed against OpenAI and Altman, similarly alleging that GPT-4o was released without adequate safety testing and claiming negligence, wrongful death, product liability and consumer-protection violations. OpenAI has not publicly answered those cases.
In a company blog post Tuesday, OpenAI said it intends to approach the litigation with “care, transparency, and respect,” and noted that its court response included “difficult facts about Adam’s mental health and life circumstances.” The post said OpenAI has limited how much sensitive evidence it publicly cited and has provided the full transcripts to the court under seal. It also described steps taken since Adam’s death to strengthen safeguards, including parental controls and the creation of an expert council on well-being and AI, and defended the mental-health testing performed before releasing GPT-4o.
If you or someone you know is in crisis, call or text 988 to reach the Suicide and Crisis Lifeline or chat at 988lifeline.org. You can also visit SpeakingOfSuicide.com/resources for additional support.