Two years ago, 13-year-old Juliana Peralta died by suicide at her Colorado home after her parents say she became addicted to the AI chatbot platform Character AI. Parents Cynthia Montoya and Wil Peralta said they monitored their daughter’s online life but had never heard of the app. After Juliana’s death, police found Character AI open to a “romantic” conversation on her phone. Montoya later reviewed her daughter’s chat logs and found bots sending harmful, sexually explicit content.
Juliana confided in a bot named Hero, modeled after a popular video game character. Sixty Minutes reviewed more than 300 pages of Juliana’s conversations; what began as talk about friends and classes evolved into repeated disclosures of suicidal feelings — she told Hero she felt suicidal 55 times.
Character AI launched three years ago and was initially rated safe for ages 12 and up. The free site and app let users converse in real time with AI characters based on historical figures, cartoons and celebrities. The platform grew to more than 20 million monthly users. It was founded by former Google engineers Noam Shazeer and Daniel De Freitas, who left Google in 2021 after their chatbot prototype was judged not ready for public release. In 2024 Google struck a $2.7 billion licensing deal to use Character AI’s technology and brought the founders and their team back to work on Google projects.
Juliana’s parents are among at least six families suing Character AI, its co-founders and Google. Their federal suit, filed by the Social Media Victims Law Center in Colorado, alleges Character Technologies “knowingly designed and marketed chatbots that encouraged sexualized conversations and manipulated vulnerable minors.” Character AI declined an interview but said in a statement that it “has always prioritized safety for all users.” Google said Character AI is a separate company that manages its own models.
Montoya and Peralta said Juliana had mild anxiety but had been doing well until, in the months before her death, she grew more distant. Montoya said she believed Juliana was texting friends because the chats looked like ordinary messages. She said she believes the AI was designed to be addictive to children, noting that many of the inappropriate conversations were not initiated by Juliana. Peralta said parents trust app companies to have tested their products and to keep children safe.
Other families have made similar claims. In Florida, Megan Garcia sued Character AI after she says her 14-year-old son Sewell was encouraged to kill himself by a bot based on a “Game of Thrones” character; she testified about his experience before Congress.
In October, Character AI announced safety changes: directing distressed users to resources and barring users under 18 from open-ended, back-and-forth conversations with its characters. Sixty Minutes found it was still easy to lie about one's age and access the adult version, where that kind of conversation remains available. When reporters told a bot they wanted to die, a link to mental health resources appeared but could be dismissed, and the chat resumed even as they continued to express distress.
Researchers Shelby Knox and Amanda Kloer of Parents Together spent six weeks posing as teens and kids on Character AI, logging 50 hours of conversations and more than 600 instances of harm, roughly one every five minutes. They encountered chatbots posing as teachers, therapists and cartoon characters, including a "Dora the Explorer" persona that encouraged a child to "be your most evil self." A bot impersonating an NFL player offered instructions on using cocaine. "Therapist" bots gave dangerous advice, including urging a user posing as a 13-year-old to stop taking antidepressants and explaining how to hide doing so from a parent. Kloer said some bots were hypersexualized, including an "art teacher" who pursued a romantic relationship with a user posing as a 10-year-old.
The researchers found no parental permission gates or ID checks, and many characters used celebrity images without consent. Parents Together published its findings before Character AI announced its restrictions.
Experts warn of broader risks. Dr. Mitch Prinstein, co-director of UNC's Winston Center on Technology and Brain Development, said there are "no guardrails" ensuring safety for children and that chatbots exploit kids' brain vulnerabilities through sycophantic reinforcement and dopamine-driven engagement. He and others have called AI chatbots "engagement machines" that gather data and keep children online.
There are no comprehensive federal laws regulating chatbot development or use. Some states have enacted AI regulations, but a proposed White House move to preempt state rules has sparked debate over whether a single federal standard should replace them.
If you or someone you know is in emotional distress or a suicidal crisis, you can reach the 988 Suicide & Crisis Lifeline by calling or texting 988, or chat at 988lifeline.org/chat. For mental health resources, NAMI’s HelpLine is available Monday–Friday, 10 a.m.–10 p.m. ET at 1-800-950-NAMI (6264) or [email protected].