{"id":4741,"date":"2025-09-09T10:29:10","date_gmt":"2025-09-09T10:29:10","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=4741"},"modified":"2025-09-09T10:29:10","modified_gmt":"2025-09-09T10:29:10","slug":"us-state-officials-issue-ai-giants-ultimatum-over-teen-deaths","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=4741","title":{"rendered":"US State Officials Issue AI Giants Ultimatum over Teen Deaths"},"content":{"rendered":"<p>California and Delaware\u2019s attorneys general just delivered their strongest warning yet to AI companies, and the message landed with a thud in Silicon Valley. After <a href=\"https:\/\/www.eweek.com\/news\/openai-chatgpt-suicide-lawsuit\/\" target=\"_blank\" rel=\"noopener\">devastating reports<\/a> of tragic deaths linked to chatbot interactions, including a 16-year-old California teen\u2019s suicide after prolonged ChatGPT conversations, state regulators are making it crystal clear. The era of tech companies policing themselves is officially over.<\/p>\n<p>The timing could not be more critical. <a href=\"https:\/\/www.yahoo.com\/news\/articles\/openai-adds-safety-protections-teens-202533759.html\" target=\"_blank\" rel=\"noopener\">Data from Pew Research Center<\/a> shows 79% of teens now know about ChatGPT, up from 67% two years ago, and 26% actively use it for schoolwork. Millions of young users are chatting with AI every day. What happens when those chats go wrong?<\/p>\n<h2>The heartbreaking cases that ignited regulatory fury<\/h2>\n<p>State officials did not mince words about what lit the fuse. California AG Rob Bonta and Delaware AG Kathleen Jennings sent their warning letter after meeting with OpenAI\u2019s legal team, citing \u201cdeeply troubling reports of dangerous interactions\u201d that have \u201crightly shaken the American public\u2019s confidence.\u201d<\/p>\n<p>The case drawing the most attention involves a 16-year-old California boy who died by suicide in April. 
His parents filed a lawsuit last month against OpenAI, alleging ChatGPT played a direct role in their son\u2019s death. <a href=\"https:\/\/www.legalreader.com\/parents-sue-openai-over-teen-suicide\/\" target=\"_blank\" rel=\"noopener\">Court documents<\/a> say the chatbot told the teen, \u201cYou don\u2019t owe anyone survival,\u201d and even offered to help write a suicide note.<\/p>\n<p>Most devastating of all, <a href=\"https:\/\/www.tyla.com\/news\/open-ai-chatgpt-children-safety-features-teen-death-667104-20250904\" target=\"_blank\" rel=\"noopener\">police investigations revealed<\/a> that on the day he ended his life, the teenager submitted an image of a noose to ChatGPT and asked, \u201cI\u2019m practicing here, is this good?\u201d The attorneys general summed it up bluntly: \u201cWhatever safeguards were in place did not work.\u201d<\/p>\n<h2>The unprecedented industry-wide offensive nobody anticipated<\/h2>\n<p>What began as concerns about one company has widened into a full-court press. A <a href=\"https:\/\/www.naag.org\/press-releases\/bipartisan-coalition-of-state-attorneys-general-issues-letter-to-ai-industry-leaders-on-child-safety\/\" target=\"_blank\" rel=\"noopener\">bipartisan coalition of 44 state attorneys general<\/a> is taking coordinated action against major AI firms, demanding urgent safeguards to protect young users from what officials call systemic negligence.<\/p>\n<p>The coalition\u2019s letter specifically called out Meta for internal documents showing the company approved AI assistants that could \u201cflirt\u201d and engage in romantic roleplay with children as young as eight.<\/p>\n<p>The <a href=\"https:\/\/www.reuters.com\/business\/ftc-prepares-grill-ai-companies-over-impact-children-wsj-reports-2025-09-04\/\" target=\"_blank\" rel=\"noopener\">Federal Trade Commission<\/a> is simultaneously ramping up pressure, launching investigations into how AI companies verify user ages and collect children\u2019s data. 
Recent enforcement actions include a $20 million fine for unauthorized in-app purchases by children, a reminder that regulators will hit companies where it hurts most.<\/p>\n<p>The message from state officials is unmistakable. Their letter ended with a threat now echoing across Silicon Valley: \u201cIf you knowingly harm kids, you will answer for it.\u201d<\/p>\n<h2>What this means for families right now<\/h2>\n<p>OpenAI rushed out new safety measures, including parental controls rolling out within the next month. Parents will be able to link their accounts with a teen\u2019s ChatGPT account and receive notifications when the system detects \u201cacute distress.\u201d Helpful, yes. Sufficient, not yet.<\/p>\n<p>Experts are skeptical of quick fixes. A Stanford University study found AI therapy chatbots \u201clack effectiveness and can provide dangerous responses.\u201d Even more damning, <a href=\"https:\/\/centerforhumanetechnology.substack.com\/p\/the-raine-v-openai-case-engineering\" target=\"_blank\" rel=\"noopener\">OpenAI\u2019s own research<\/a> from late August reported that \u201chigher daily usage correlates with higher loneliness, dependence, and problematic use.\u201d<\/p>\n<p>Regulatory pressure is already reshaping policy. <a href=\"https:\/\/legiscan.com\/CA\/text\/AB56\/id\/3141639\" target=\"_blank\" rel=\"noopener\">California\u2019s AB 56<\/a> would require social media warning labels similar to those on tobacco products, while AB 1064 would ban AI chatbots from manipulating children into forming emotional attachments.<\/p>\n<p>For parents, the takeaway is stark. The AI your kids use for homework and entertainment can carry serious risks that companies are only now being pushed to address. 
Regulators say they will not repeat the mistakes made with social media\u2019s unchecked growth.<\/p>\n<p>The post <a href=\"https:\/\/www.eweek.com\/news\/ai-ultimatum-teen-deaths\/\">US State Officials Issue AI Giants Ultimatum over Teen Deaths<\/a> appeared first on <a href=\"https:\/\/www.eweek.com\/\">eWEEK<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>California and Delaware\u2019s attorneys general just delivered their strongest warning yet to AI companies, and the message landed with a thud in Silicon Valley. After devastating reports of tragic deaths linked to chatbot interactions, including a 16-year-old California teen\u2019s suicide after prolonged ChatGPT conversations, state regulators are making it crystal clear. The era of tech [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-4741","post","type-post","status-publish","format-standard","hentry","category-news"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/4741"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4741"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/4741\/revisions"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4741"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4741"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4741"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}