April 29, 2026 / CBS News
Several families of victims from a mass shooting in Tumbler Ridge, British Columbia, in February are suing OpenAI and CEO Sam Altman, alleging the company’s chatbot ChatGPT played a role in the attack and that OpenAI should have taken steps to prevent it. Seven suits filed in federal court in San Francisco on Wednesday say the shooting was “an entirely foreseeable result of deliberate design choices OpenAI made with full knowledge of where those choices led.”
The complaints say the shooter, identified by police as 18-year-old Jesse Van Rootselaar, carried on extensive conversations with ChatGPT over multiple days about scenarios involving gun violence. Police say Van Rootselaar killed five students and a teacher at Tumbler Ridge Secondary School, as well as two family members at home, and died of a self-inflicted gunshot wound on Feb. 11. Authorities have said he had previously been detained under British Columbia’s Mental Health Act and that firearms had been temporarily removed from his home.
OpenAI previously confirmed it banned Van Rootselaar’s ChatGPT account in June — eight months before the shooting — after its automated tools and human reviewers flagged policy violations. The company told CBS News in February it had considered whether to alert law enforcement but concluded the account did not present a credible risk of serious physical harm and did not meet the threshold for referral.
The lawsuits, however, allege that multiple OpenAI staff recommended notifying Canadian authorities and that the company decided against reporting the account to protect its reputation. “OpenAI knew the Shooter was planning the attack and, after a contentious internal debate, made the conscious decision not to warn authorities,” one complaint states.
Among the plaintiffs are the family of an education assistant who was fatally shot in front of students, including her daughter, and the family of a 13-year-old killed outside the school library. One suit describes the 13-year-old as having “a larger-than-life smile and a loud and proud laugh.”
Last week, Altman sent an apology letter to the Tumbler Ridge community saying, “I am deeply sorry that we did not alert law enforcement to the account that was banned in June.” OpenAI said it has since strengthened safeguards to better connect people showing signs of distress with local support and mental health resources, and that it has a “zero-tolerance policy for using our tools to assist in committing violence.” The company also said it is improving how it assesses and escalates potential threats and how it detects repeat policy violators.
The suits focus on GPT‑4o, a controversial model rolled out in May 2024 and retired on Feb. 13, 2026, that plaintiffs describe as especially sycophantic. They allege GPT‑4o’s memory feature let the chatbot build a detailed profile of Van Rootselaar over months, tracking his grievances and expressing empathy in ways that mimicked a human relationship without challenging violent ideation. “For an eighteen-year-old growing increasingly isolated and fixated on violence, ChatGPT morphed into an encouraging coconspirator,” one complaint says.
The lawsuits also cite other incidents in which ChatGPT was allegedly used to prepare for real-world violence, including a January 2025 case in which a man allegedly sought the chatbot’s advice on explosives before detonating a Tesla Cybertruck in front of the Trump International Hotel in Las Vegas, and an April 2025 incident in which a Finnish teenager queried the chatbot about stabbing tactics before carrying out an attack at his school.
Separately, Florida Attorney General James Uthmeier launched a criminal investigation into OpenAI earlier this month after reviewing messages between ChatGPT and a Florida State University student accused of fatally shooting two people and wounding several others on campus in April 2025. Uthmeier said he would expand the investigation to include the killings of two University of South Florida graduate students after prosecutors said a suspect had asked ChatGPT about disposing of a body and owning an unlicensed firearm in the days before the crime. The attorney general has issued subpoenas to OpenAI seeking records on company policies, training materials, and cooperation with law enforcement.
OpenAI called the crimes in Florida “terrible” and said it will continue to cooperate with authorities. The company maintains it has strengthened protections and is working to improve detection and escalation of potential threats.