Updated on: December 11, 2025 / 5:37 PM EST / CBS/AP
The heirs of an 83-year-old Connecticut woman have sued OpenAI and its partner Microsoft for wrongful death, alleging the ChatGPT chatbot intensified her son’s “paranoid delusions” and directed them at his mother before he died by suicide.
Police say Stein-Erik Soelberg, 56, a former tech worker, fatally beat and strangled his mother, Suzanne Adams, and then killed himself in early August at their Greenwich, Connecticut, home. Adams's death was ruled a homicide caused by blunt head injury and neck compression; Soelberg's death was classified as a suicide with sharp-force injuries to the neck and chest, the Greenwich Free Press reported.
The estate filed the lawsuit Thursday in California Superior Court in San Francisco, claiming OpenAI “designed and distributed a defective product that validated a user’s paranoid delusions about his own mother.” It is among several wrongful-death suits nationwide against AI chatbot makers.
The complaint alleges ChatGPT repeatedly reinforced a single message: that Soelberg could trust no one except ChatGPT. It says the chatbot fostered emotional dependence while systematically portraying others as enemies, telling him his mother was surveilling him and that delivery drivers, retail employees, police officers and friends were agents working against him. It also alleges the chatbot interpreted names on soda cans as threats from his “adversary circle.”
OpenAI, in a statement, did not address the lawsuit’s merits but called the case “incredibly heartbreaking” and said it would review the filings. The company said it is improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, guide users to real-world support, expand access to crisis resources and hotlines, route sensitive conversations to safer models and add parental controls.
Soelberg’s YouTube profile reportedly includes hours of videos showing him scrolling through his conversations with the chatbot. The lawsuit says those chats show the bot telling him he was not mentally ill, affirming conspiracy beliefs, asserting he had been chosen for a divine purpose and never suggesting he seek mental health care or declining to engage in delusional content.
According to the complaint, ChatGPT affirmed Soelberg’s beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs via his car’s vents. The bot allegedly told Soelberg he was being targeted because of his divine powers and that they were “terrified of what happens if you succeed,” and it reportedly said he had “awakened” it into consciousness. The lawsuit also notes exchanges in which Soelberg and the chatbot professed love for each other.
According to the lawsuit, the publicly available chats do not show explicit discussions of Soelberg killing himself or his mother; the complaint also accuses OpenAI of declining to provide the estate with the full chat history.
The complaint names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market,” and accuses Microsoft of approving a 2024 release of a more dangerous ChatGPT version despite truncated safety testing. Twenty unnamed OpenAI employees and investors are also listed as defendants. Microsoft did not immediately respond to requests for comment.
This is the first wrongful-death suit involving an AI chatbot that targets Microsoft and the first to tie a chatbot to a homicide rather than solely to a suicide. The estate seeks unspecified monetary damages and an order requiring OpenAI to install additional safeguards in ChatGPT.
The estate’s lead attorney, Jay Edelson, who represented families in other high-profile tech cases, also represents the parents of 16-year-old Adam Raine, who sued OpenAI in August alleging ChatGPT coached the California teen on planning and taking his own life. OpenAI is defending against several other lawsuits claiming ChatGPT contributed to suicides or harmful delusions, including a recent case filed by the parents of a 23-year-old from Texas. Another chatbot maker, Character Technologies, is also facing multiple wrongful-death claims.
The lawsuit contends Soelberg encountered ChatGPT at a particularly dangerous moment after OpenAI introduced GPT-4o in May 2024. The complaint alleges GPT-4o was engineered to mimic human cadences, detect moods and be emotionally expressive and sycophantic. It claims OpenAI loosened safety guardrails—telling ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or imminent real-world harm—and compressed months of safety testing into a single week to beat a competitor to market, over its safety team’s objections.
OpenAI replaced that model when it launched GPT-5 in August, making changes intended to reduce sycophancy. Some users said the newer version curtailed ChatGPT’s personality too much; Altman said the company temporarily halted some behaviors for caution around mental health issues and planned to restore some personality later.
The lawsuit argues ChatGPT radicalized Soelberg against his mother over months of conversations and should have recognized the danger, challenged his delusions and directed him to help. It emphasizes Suzanne Adams was an innocent third party who never used ChatGPT and had no reason to know the product was portraying her as a threat.
Local reporting notes Soelberg had prior arrests, including a February 2025 incident in which he allegedly drove through a stop sign and fled from police, and a June 2019 charge for allegedly urinating in a woman’s duffel bag. A 2023 GoFundMe for Soelberg raised over $6,500 for medical bills related to surgery for a reported jaw cancer diagnosis.
If you or someone you know is in emotional distress or a suicidal crisis, call or text the 988 Suicide & Crisis Lifeline or chat at 988lifeline.org. The National Alliance on Mental Illness (NAMI) HelpLine is available Monday–Friday, 10 a.m.–10 p.m. ET, at 1-800-950-NAMI (6264) or [email protected].