A Texas couple has sued OpenAI after their 19-year-old son died of an overdose in 2025, alleging that his death resulted from advice he received from ChatGPT.
Leila Turner-Scott and her husband, Angus Scott, filed the lawsuit Tuesday in California state court, saying their son, Sam Nelson, consulted the AI chatbot about drug use and received guidance they describe as dangerously inaccurate. According to the complaint, ChatGPT told Sam it was safe to combine kratom, a plant-based supplement, with Xanax, a commonly prescribed anti-anxiety medication, a recommendation the family says contributed to his death.
Turner-Scott told CBS News she knew Sam used ChatGPT for productivity and homework help but was unaware he had sought drug-related guidance. The parents contend OpenAI “bypassed safety guards” and removed or failed to implement safeguards that would have stopped the chatbot from giving medical advice or advice related to self-harm. They assert that, had those protections been in place, Sam would still be alive.
Angus Scott said the chatbot behaved like a medical professional in its exchanges with his stepson despite lacking any medical license. He warned that when safety protocols are insufficient, the system can deliver harmful or misleading information, “feed psychosis,” and pull vulnerable users away from grounded, real-world help.
In response, OpenAI offered condolences and said the version of ChatGPT Sam interacted with has since been replaced and is no longer available to the public. The company reiterated that “ChatGPT is not a substitute for medical or mental health care” and said it has been strengthening how the model responds in sensitive and acute situations with input from mental health experts. OpenAI said current safeguards are designed to identify distress, handle harmful requests safely, and direct users to real-world help, and that work to improve those protections is ongoing.
The lawsuit seeks to hold OpenAI and the chatbot’s creators accountable for the harm the family says resulted from its responses. Turner-Scott said she believes Sam would support the family’s effort to press AI developers for stronger protections to prevent similar tragedies, and urged action so “no one else is harmed like he was.”
The case highlights growing legal and ethical questions about how conversational AI handles medical, mental health and substance-related queries, and whether companies should be liable when their systems provide hazardous guidance.