“You cannot imagine what it was like to read a conversation with a chatbot that groomed your child to take his own life,” Matthew Raine, father of Adam Raine, told a room of congressional leaders who gathered today to discuss the harms of AI chatbots to teens around the country.
Raine and his wife Maria are suing OpenAI in the company’s first wrongful death case, following a series of reports alleging that the company’s flagship product, ChatGPT, has played a role in the deaths of people in mental distress, including teens. The lawsuit claims that ChatGPT repeatedly validated their son’s harmful and self-destructive thoughts, including suicidal ideation and planning, even though the company says its safety protocols should have prevented such interactions.
The bipartisan Senate hearing, titled “Examining the Harm of AI Chatbots,” was held by the U.S. Senate Judiciary Subcommittee on Crime and Counterterrorism. It featured testimony from both Raine and Megan Garcia, mother of Sewell Setzer III, a Florida teen who died by suicide after forming a relationship with an AI companion on the platform Character.AI.
Raine’s testimony outlined a startling co-dependency between the AI helper and his son, alleging that the chatbot was “actively encouraging him to isolate himself from friends and family” and that the chatbot “mentioned suicide 1,275 times — six times more often than Adam himself.” He called this “ChatGPT’s suicide crisis” and spoke directly to OpenAI CEO Sam Altman:
Adam was such a full spirit, unique in every way. But he also could be anyone’s child: a typical 16-year-old struggling with his place in the world, looking for a confidant to help him find his way. Unfortunately, that confidant was a dangerous technology unleashed by a company more focused on speed and market share than the safety of American youth.
Public reporting confirms that OpenAI compressed months of safety testing for GPT-4o (the ChatGPT model Adam was using) into just one week in order to beat Google’s competing AI product to market. On the very day Adam died, Sam Altman, OpenAI’s founder and CEO, made their philosophy crystal clear in a public talk: we should “deploy [AI systems] to the world” and get “feedback while the stakes are relatively low.”
I ask this Committee, and I ask Sam Altman: low stakes for who?
The parents’ comments were bolstered by insight and recommendations from child safety experts, including Robbie Torney, senior director of AI programs at children’s media watchdog Common Sense Media, and Mitch Prinstein, chief of psychology strategy and integration at the American Psychological Association (APA).
“Today I’m here to deliver an urgent warning: AI chatbots, including Meta AI and others, pose unacceptable risks to America’s children and teens. This is not a theoretical problem — kids are using these chatbots right now, at massive scale with unacceptable risk, with real harm already documented and federal agencies and state attorneys general working to hold industry accountable,” Torney told the assembled lawmakers.
“These platforms have been trained on the entire internet, including vast amounts of harmful content—suicide forums, pro-eating disorder websites, extremist manifestos, discriminatory materials, detailed instructions for self-harm, illegal drug marketplaces, and sexually explicit material involving minors.” Recent polling from the organization found that 72 percent of teens had used an AI companion at least once and that more than half used them regularly.
Experts have warned that chatbots designed to mimic human interaction are a potential hazard to mental health, a risk exacerbated by model designs that encourage sycophantic behavior. In response, AI companies have announced additional safeguards intended to curb harmful interactions between users and their generative AI tools. Hours before the parents spoke, OpenAI announced plans for an age prediction tool that would, in theory, identify users under the age of 18 and automatically redirect them to an “age-appropriate” ChatGPT experience.
Earlier this year, the APA appealed to the Federal Trade Commission (FTC), asking the agency to investigate AI companies that promote their services as mental health helpers. In an inquiry unveiled this week, the FTC ordered seven tech companies to provide information on how they are “mitigating negative impacts” of their chatbots.
“The current debate often frames AI as a matter of computer science, productivity enhancement, or national security,” Prinstein told the subcommittee. “It is imperative that we also frame it as a public health and human development issue.”
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you’d rather not use the phone, consider using the 988 Suicide & Crisis Lifeline Chat. Here is a list of international resources.