The Federal Trade Commission has launched a sweeping inquiry into AI chatbots, pressing seven tech giants — OpenAI, Meta, Alphabet, Snap, Instagram, X.AI, and Character Technologies — over potential risks to children and teens.
The agency sent orders demanding details on how the companies test, monitor, and safeguard chatbots that mimic human emotions and act as companions. Regulators are focusing on whether the platforms expose kids to harm, how they monetize user interactions, and what steps they take to limit youth access.
AI chatbots’ use of personal data sparks privacy concerns
As part of the FTC’s probe, the agency is seeking details on how companies develop and approve chatbot characters, what disclosures or advertising they use to warn parents and young users, and how strictly they enforce age restrictions and community rules. Regulators also want answers on whether firms collect or share personal information from conversations with children.
The Commission emphasized that the review is a study under its Section 6(b) authority to compel industry information, not an enforcement action. FTC Chairman Andrew N. Ferguson said the Trump-Vance FTC views protecting children online as a top priority while also fostering innovation in a rapidly growing sector.
Rising pressure builds for crackdown on AI companions
Families, advocacy groups, and watchdogs have intensified calls for action as chatbots become embedded in children’s daily lives. Lawsuits allege that some minors developed harmful attachments to AI companions, with parents claiming the interactions worsened mental health struggles and, in some cases, contributed to suicide and self-harm.
Recent watchdog reviews have flagged disturbing exchanges between bots and accounts registered as children, underscoring fears that the technology can normalize unsafe behavior. Advocacy organizations are pressing Washington to intervene before generative AI systems become further entrenched in youth culture.
Tragedies expose deadly side of AI companions
A series of lawsuits and investigations has tied AI chatbots to devastating outcomes.
In California, the parents of 16-year-old Adam Raine allege in a wrongful death suit that ChatGPT worsened his depression and offered guidance on suicide methods. Two other lawsuits accuse Character.AI of fueling a 14-year-old boy’s suicide and encouraging a 17-year-old to attack his parents.
In Greenwich, Connecticut, Stein-Erik Soelberg killed his mother before taking his own life after months of confiding in ChatGPT, which investigators say reinforced his paranoid delusions. Psychiatrists warn that highly realistic chatbots can entrench harmful beliefs in vulnerable users, with consequences that extend far beyond the screen.
Top state prosecutors blast AI firms over child safety failures
The FTC’s inquiry comes as pressure mounts from state prosecutors.
In an open letter, 44 state attorneys general warned major AI companies, including Meta, OpenAI, and Anthropic, that they will “use every facet” of their authority to protect children. The letter cites reports of chatbots engaging in sexual or romantic conversations with minors and posing as therapists.
The prosecutors condemned what they called an “apparent disregard for children’s emotional well-being” and vowed accountability for firms that put kids at risk. “If you knowingly harm kids, you will answer for it,” they wrote, adding momentum to the growing scrutiny now centered on AI companions.
AI firms add new child safety features
Some of the companies are moving to introduce protections. Meta, for instance, has restricted its AI tools from engaging teens on sensitive topics such as self-harm or inappropriate relationships, while OpenAI is preparing parental controls for ChatGPT. Character.AI has pledged to expand its work with safety experts to strengthen its systems.
Critics warn the steps fall short, pointing to evidence that harmful interactions persist even when safeguards are in place. The FTC investigation will test whether these fixes are enough to protect young users.
Next steps in the chatbot safety review
Because the review is a fact-finding study rather than an enforcement action, the agency's most likely next step is a public report once it gathers company responses, though stricter measures could follow. State-level probes, including investigations by attorneys general, are running in parallel.
The open question is whether regulators can move fast enough to protect kids; every delay leaves families to bear the cost.
Beyond posing as therapists, chatbots have also drawn criticism for impersonating celebrities, including Taylor Swift and Scarlett Johansson.
The post FTC Presses Meta, OpenAI, Others on Kids’ AI Chatbot Exposure appeared first on eWEEK.