OpenAI Updates ChatGPT, Vows to ‘Take the Safer Route’ on Teen Safety

OpenAI is adding tighter safeguards to ChatGPT for teens as it prepares new safety features, including parental controls and default protections when age is unclear. The tech company said it is building toward a long-term system to verify user ages. 

The ChatGPT update also promises tools for parents to link accounts, set rules, limit features, and enforce blackout hours.

Prioritizing safety over freedom

OpenAI is prioritizing teen safety in ChatGPT, even if that means limiting some freedoms or privacy to ensure younger users are protected.

The update adds automatic filters to block graphic sexual material and apply teen-specific responses. By the end of September, new parental controls will allow families to link accounts for teens aged 13 and up, set rules, disable memory or chat history, and enforce blackout hours.

Parents will also get notifications in rare cases of acute distress, with law enforcement contacted if guardians cannot be reached. To curb overuse, ChatGPT will introduce break reminders during long sessions. When age prediction is uncertain, OpenAI says it will “take the safer route” by defaulting users to the under-18 experience. Adults can prove their age to restore full access.

Teen safety rollout comes as regulators press AI firms for answers

ChatGPT’s new safeguards land as the Federal Trade Commission investigates how AI companies, including OpenAI, Meta, and Alphabet, test and monitor AI chatbots for risks to children. The agency is seeking details on whether these systems expose minors to harm, how data is handled, and what steps firms take to limit unsafe interactions.

Adding to federal scrutiny, 44 state attorneys general signed an open letter vowing to hold AI developers responsible if chatbots put kids at risk.

Lawsuits over high-profile deaths put pressure on ChatGPT’s safety promises

OpenAI faces legal challenges over allegations that ChatGPT worsened the mental health struggles of vulnerable users. For instance, the parents of 16-year-old Adam Raine have sued the company, claiming the AI tool gave their son personalized instructions on suicide methods before his death.

In Connecticut, investigators said a man confided in ChatGPT as it reinforced his paranoid delusions, culminating in a murder-suicide. 

Families and advocates argue such cases expose dangerous gaps in safeguards, intensifying pressure on OpenAI to deliver on its new parental controls and teen protections.

The race is on to make chatbots safer

OpenAI is moving in step with its rivals. Meta has restricted its AI from discussing sensitive topics with teens and is preparing parental controls, while Character.AI has pledged to work with safety experts to strengthen its systems.

Despite these moves, critics argue safety gaps remain, pointing to troubling interactions that still slip through. The spotlight is now on whether companies can deliver safeguards that match the scale and speed of adoption.

For all the industry’s ambition, its future reputation may rest on a simple question: can it keep kids safe?

In other chatbot news: In August, it was reported that Meta’s contractors viewed the “explicit photos” and other personal information that users sent to Meta AI.

The post OpenAI Updates ChatGPT, Vows to ‘Take the Safer Route’ on Teen Safety appeared first on eWEEK.