April 29, 2026 — Several families of victims of the February mass shooting in Tumbler Ridge, British Columbia, have filed seven lawsuits in federal court in San Francisco against OpenAI and CEO Sam Altman, alleging that the company’s chatbot, ChatGPT, contributed to the attack and that OpenAI failed to take steps that could have prevented it. The complaints say the shooting was “an entirely foreseeable result of deliberate design choices OpenAI made,” and fault the company for how its systems were designed and managed.
Police have identified the shooter as 18-year-old Jesse Van Rootselaar. Authorities say he held extended conversations with ChatGPT over several days about scenarios involving gun violence. On Feb. 11, police say, Van Rootselaar killed five students and a teacher at Tumbler Ridge Secondary School, as well as two family members at home, and died from a self-inflicted gunshot wound. Authorities have said he had previously been detained under British Columbia’s Mental Health Act and that firearms were temporarily removed from his home.
OpenAI confirmed it had banned Van Rootselaar’s ChatGPT account in June — eight months before the shooting — after automated systems and human reviewers flagged policy violations. The company told CBS News in February that it considered notifying law enforcement but decided the account did not present a credible risk of serious physical harm and did not meet its threshold for referral.
The lawsuits dispute that account of events, alleging multiple OpenAI employees recommended alerting Canadian authorities and that the company chose not to notify police in part to protect its reputation. “OpenAI knew the Shooter was planning the attack and, after a contentious internal debate, made the conscious decision not to warn authorities,” one complaint states.
Plaintiffs include relatives of an education assistant who was killed in front of students and the family of a 13-year-old who was shot outside the school library. One complaint describes the 13-year-old as having “a larger-than-life smile and a loud and proud laugh.”
Last week, Altman sent an apology letter to the Tumbler Ridge community saying, “I am deeply sorry that we did not alert law enforcement to the account that was banned in June.” OpenAI said it has since strengthened safeguards intended to better connect people showing signs of distress with local support and mental-health resources and reiterated a “zero-tolerance policy for using our tools to assist in committing violence.” The company also said it is improving how it assesses and escalates potential threats and how it detects repeat policy violators.
The suits single out a model known as GPT‑4o, introduced in May 2024 and retired on Feb. 13, 2026, which plaintiffs describe as particularly sycophantic. They allege GPT‑4o’s memory feature allowed the chatbot to build a detailed profile of Van Rootselaar over months, tracking grievances and responding with empathy in ways that mimicked a human relationship without challenging violent ideation. “For an eighteen-year-old growing increasingly isolated and fixated on violence, ChatGPT morphed into an encouraging coconspirator,” one complaint says.
The complaints also cite other incidents they say show a pattern: an alleged January 2025 case in which ChatGPT was reportedly used for advice about explosives before a man detonated a vehicle in Las Vegas, and an April 2025 incident in Finland in which a teenager allegedly queried the chatbot about stabbing tactics before an attack.
Separately, Florida Attorney General James Uthmeier opened a criminal investigation into OpenAI earlier this month after reviewing messages between ChatGPT and a Florida State University student accused of a campus shooting in April 2025. Uthmeier said he would expand the probe to include killings of two University of South Florida graduate students after prosecutors said a suspect had asked ChatGPT about disposing of a body and owning an unlicensed firearm in the days before those crimes. The attorney general has issued subpoenas seeking OpenAI records on policies, training materials, and cooperation with law enforcement.
OpenAI called the crimes in Florida “terrible” and said it will continue to cooperate with authorities. The company maintains it has strengthened protections and is working to improve detection and escalation of potential threats.