California Enacts First-in-the-Nation Law to Shield Children from AI Chatbots


California Governor Gavin Newsom on Monday signed a batch of bills aimed at protecting children from the potential harms of artificial intelligence chatbots and other AI tools. 

Senate Bill 243, described as the first law of its kind in the US, will require AI companies to implement protective guardrails for children who interact with chatbots. These guardrails include preventing discussion of specific topics, including suicide and self-harm, with the chatbot required to redirect users to crisis services.

Chatbot operators will also need to remind children every three hours that they are interacting with a chatbot, not a human.

“Emerging technology like chatbots and social media can inspire, educate, and connect — but without real guardrails, technology can also exploit, mislead, and endanger our kids,” Newsom said in a statement after signing the bill.

Transparency and regulation efforts

Some AI developers will also be required to publicly disclose safety and security protocols for the first time. This goes further than European Union legislation, which calls on developers to privately submit protocols to the relevant authority.

Newsom also proposed a system for companies to disclose major safety incidents that originate from chatbots and other AI tools, a step designed to increase transparency around the potential risks of AI.

“We’ve seen some truly horrific and tragic examples of young people harmed by unregulated tech, and we won’t stand by while companies continue without necessary limits and accountability… California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive,” he said. “This legislation strikes that balance.”

There have been mounting concerns over the lack of guardrails on generative AI. Several families have condemned chatbots such as ChatGPT for allegedly playing a role in their children's suicides. Critics argue that the danger extends beyond children and that the new rules may not fully address the broader risks posed by AI systems capable of influencing vulnerable individuals and encouraging them to engage in harmful behaviors.

The Federal Trade Commission (FTC) announced a review of chatbots last month to determine their effect on children, the commission's first step toward assessing whether national regulation is needed to better safeguard them.

Industry response and broader impact

California is home to many leading chatbot developers, including market leader OpenAI with ChatGPT, rival Anthropic, search giant Google, and Perplexity.

The law, spearheaded by state Sen. Scott Wiener after Newsom vetoed his broader AI safety bill amid industry backlash, has drawn close attention from regulators in the US and abroad. Some perceive it as a potential framework for a national AI law, provided it can encourage developers to enhance their safety and security protocols.

While the AI industry has not been broadly supportive of legislation, Anthropic, which has positioned itself as a safety-focused AI developer, did come out in support of the bill in the final days of the legislative session.

Sam Altman, the CEO of OpenAI, called on the US government in 2023 to regulate AI, deeming it critical. However, support for regulation among most AI industry leaders has cooled since President Trump took office.

As California moves to regulate AI for child safety, concerns about how chatbots behave continue to grow. In a recent case, Meta faced backlash after one of its chatbots impersonated Taylor Swift — raising new questions about identity, safety, and accountability in the AI era.

The post California Enacts First-in-the-Nation Law to Shield Children from AI Chatbots appeared first on eWEEK.
