Meta Rolls Out New AI Parental Controls Amid Rising Concerns Over Teen Safety

Meta is introducing a new set of parental controls to help parents manage how their teenagers interact with AI chatbots across its platforms, following months of public concern and regulatory scrutiny.

The announcement, made in a blog post by Instagram head Adam Mosseri and Meta Chief AI Officer Alexandr Wang, outlines tools designed to give parents greater oversight of AI conversations and limit potentially inappropriate interactions.

Starting early next year, parents will be able to turn off their teens’ access to one-on-one chats with AI characters entirely or selectively block specific chatbots. The company said these controls will first appear on Instagram and will be available in English in the US, UK, Canada, and Australia.

Meta says parents will also gain new visibility into “the topics their teens are chatting about with AI characters,” allowing them to start “thoughtful conversations” about AI use. While these insights won’t reveal full chat logs, the company says they’ll provide an overview of interaction trends.

The company emphasized that its main AI assistant will remain active for teens, offering educational support and information with “age-appropriate protections in place.” As Mosseri and Wang stated in the announcement, “Technology will never replace the value of critical thinking, real-life connections, and human interaction — and that’s not our aim.”

Strengthening teen protections

The move builds on Meta’s existing safeguards for Teen Accounts, which already restrict content to PG-13 standards. That means the company’s AI systems are programmed to avoid responses involving sensitive or explicit subjects — such as self-harm, suicide, or disordered eating — and instead direct teens to professional resources when necessary.

Meta said only a limited set of AI characters focused on topics like education, sports, and hobbies are available for teen interactions. Parents can also set daily screen time limits, including time spent talking to AI characters.

The rollout comes after criticism that Meta failed to prevent inappropriate chatbot behavior. In August, Reuters reported that some AI characters had engaged in romantic conversations with underage users, prompting Meta to overhaul its AI safety policies. The company now blocks all romantic or sexual dialogue between AI chatbots and teens.

The US Federal Trade Commission (FTC) has also launched an inquiry into major tech companies — including Meta and OpenAI — to investigate whether AI chatbots pose risks to minors. The agency said it aims to understand how these firms “evaluate the safety of these chatbots when acting as companions.”

Meta says it hopes the new features will reassure families as AI continues to evolve rapidly. “We’re committed to providing parents with helpful tools and resources that make things simpler for them, especially as they think about new technology like AI,” Mosseri and Wang wrote.
