The Federal Trade Commission ordered seven tech companies to provide details on how they prevent their chatbots from harming children.
“The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions, to limit the product’s use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products,” the consumer-focused government agency stated in a press release on its inquiry.
The seven companies being probed by the FTC are Alphabet, Character Technologies, Instagram, Meta, OpenAI, Snap, and xAI. Anthropic, maker of the Claude chatbot, was not included on the list, and FTC spokesperson Christopher Bissex tells Mashable that he could not comment on “the inclusion or non-inclusion of any particular company.”
Asked about deadlines for the companies to provide answers, Bissex said the FTC’s letters stated: “We would like to confer by telephone with you or your designated counsel by no later than Thursday, September 25, 2025, to discuss the timing and format of your submission.”
The FTC is “interested in particular” in how chatbots and AI companions affect children, and in how the companies offering them mitigate negative impacts, restrict use by minors, and comply with the Children’s Online Privacy Protection Act (COPPA) Rule. The underlying law, enacted by Congress in 1998, regulates how children’s data is collected online and puts the FTC in charge of enforcing it.
Tech companies that offer AI-powered chatbots are under increasing governmental and legal scrutiny.
OpenAI, which operates the popular ChatGPT service, is facing a wrongful death lawsuit from the family of California teenager Adam Raine. The lawsuit alleges that Raine, who died by suicide, was able to bypass the chatbot’s guardrails and share harmful, self-destructive thoughts and suicidal ideation, which ChatGPT at times affirmed. Following the lawsuit, OpenAI announced additional mental health safeguards and new parental controls for young users.
If you’re feeling suicidal or experiencing a mental health crisis, please talk to somebody. You can call or text the 988 Suicide & Crisis Lifeline at 988, or chat at 988lifeline.org. You can reach the Trans Lifeline by calling 877-565-8860 or the Trevor Project at 866-488-7386. Text “START” to Crisis Text Line at 741-741. Contact the NAMI HelpLine at 1-800-950-NAMI, Monday through Friday from 10:00 a.m. – 10:00 p.m. ET, or email [email protected]. If you would rather not use the phone, consider using the 988 Suicide & Crisis Lifeline Chat. Here is a list of international resources.
Disclosure: Ziff Davis, Mashable’s parent company, in April filed a lawsuit against OpenAI, alleging it infringed Ziff Davis copyrights in training and operating its AI systems.