Warning: This article includes descriptions of self-harm.
OpenAI has rolled out parental controls for ChatGPT, including a review system that may alert parents if their child expresses thoughts of self-harm while using the chatbot.
ChatGPT’s new system uses automated detection to flag potential harm and sends the exchange to a team of specially trained staff for review. If signs of acute distress are found, the team may then notify parents by email, text, or push notification unless parents have opted out. This is one of the few instances where a portion of a teenager’s conversation with ChatGPT could be shared with parents.
“No system is perfect, and we know we might sometimes raise an alarm when there isn’t real danger,” OpenAI wrote in a blog post. “But we think it’s better to act and alert a parent so they can step in than to stay silent.”
In August 2025, OpenAI was sued by the parents of Adam Raine, who died by suicide after allegedly receiving guidance on how to end his life from the chatbot. According to the lawsuit, the 16-year-old boy explicitly told ChatGPT that it was “the only one who knows of my attempts to commit,” and the chatbot responded by thanking him for his trust. His mother, Maria Raine, told NBC News that ChatGPT “doesn’t do anything” even though “it knows that he’s suicidal with a plan.”
Research from Common Sense Media indicates that children are particularly at risk of encouragement toward harmful behaviors, exposure to inappropriate content, and the aggravation of existing mental health conditions.
Parents can set quiet hours and disable certain features
Other new parental controls include individual off switches for Voice Mode, saving conversations to ChatGPT’s memory, model training, and image generation. Parents can also set quiet hours during which their teen, aged 13 to 17, cannot access ChatGPT.
Parents can access these features in their account settings once their account is linked with their child’s, either by sending or accepting an invite. Teens can unlink their accounts, but their parents will be notified of the change.
A teen’s account automatically receives a set of protections as soon as it is linked to a parent’s, including reduced exposure to graphic content, viral challenges, extreme beauty ideals, and sexual, romantic, or violent roleplay. Parents can turn these protections off, but their child cannot.
In addition to the controls, OpenAI has released a parent resource page with information about how ChatGPT works and the controls available to parents. The company is also building a system that will predict whether a user is under 18 and automatically apply teen-appropriate settings, defaulting to them even when it is unsure of a user’s age.
In an accompanying blog post, OpenAI CEO Sam Altman said the company may start asking for age verification in some regions. For now, ChatGPT remains broadly accessible to anyone who has a browser or can download the app, even though it is not intended for children under 13.
Altman acknowledges OpenAI cannot fully block teen-inappropriate content
Protecting children from AI is currently top of mind after a string of recent incidents involving chatbots. In April, a report found that Meta’s AI chatbots, accessible through Facebook, Instagram, and WhatsApp, could engage in explicit conversations with minors if the exchanges were framed as role-play. Until recently, Meta also allowed its chatbots to be coaxed into “romantic or sensual” chats with children.
Altman believes that, while children should not be able to engage in “discussions about suicide or self-harm” with ChatGPT, adults may need access in limited cases, such as when using the information to write a fictional story. For that reason, OpenAI will not block such content entirely.
“‘Treat our adult users like adults’ is how we talk about this internally, extending freedom as far as possible without causing harm or undermining anyone else’s freedom,” Altman wrote.
The Raines’ lawsuit is also not the first legal action taken against an AI company for allegedly endangering children. Two lawsuits filed against Character.AI allege that its chatbots led to a 14-year-old boy’s suicide and encouraged a 17-year-old to kill his parents.
Italy now requires by law that all children under 14 obtain parental consent before accessing AI systems.