California is moving ahead with a state bill that would require transparency and safety reporting for the most powerful AI models, and Anthropic is on board.
The measure, known as SB 53, is California’s Transparency in Frontier Artificial Intelligence Act. It would require large AI developers to publish safety frameworks, disclose risks, and protect whistleblowers. Anthropic called the bill a “trust but verify” approach to oversight, saying it ensures labs remain accountable while competing at the frontier.
Lawmakers push transparency in AI systems
Senator Scott Wiener is leading the effort, joined by Senator Susan Rubio. After vetoing a broader bill last year, Governor Gavin Newsom convened a working group of experts to refine the approach, producing recommendations that shaped the new legislation.
Wiener said the updated bill moves away from liability and toward transparency, making voluntary safety commitments by major AI companies legally enforceable. As Helen Toner of Georgetown University told NBC, “The need for more transparency from frontier AI developers is one of the AI policy ideas with the most consensus behind it.”
Breaking down California’s Transparency in Frontier Artificial Intelligence Act
The legislation lays out specific obligations for organizations building the most powerful AI models, from publishing safety frameworks to reporting critical incidents.
Safety frameworks
Major AI companies must publish detailed safety and security protocols. These frameworks describe how developers test their models, set thresholds for dangerous capabilities, and outline steps to mitigate catastrophic risks.
Transparency reports
Before releasing powerful new models, companies are required to post public transparency reports. These documents must summarize risk assessments, explain deployment decisions, and disclose whether third parties were involved in testing.
Reporting requirements
Developers must report critical safety incidents, such as a loss of model control or a leak of model weights, to the state within 15 days, or within 24 hours in urgent cases. They must also provide confidential summaries of catastrophic risk assessments to California’s Office of Emergency Services.
Whistleblower protections
The bill creates protections for employees who disclose safety concerns. Companies must provide anonymous reporting channels, and retaliation against whistleblowers is prohibited.
Enforcement and scope
SB 53 applies only to large AI companies with revenues above $500 million and models trained with massive computing power. Penalties range from $10,000 for minor violations up to $10 million for repeat, knowing breaches tied to catastrophic risk. The law also blocks local governments from passing conflicting AI rules.
CalCompute
The bill establishes CalCompute, a state-run public cloud cluster to be housed within the University of California system. It is intended to expand access to computing power for safe and equitable AI research.
Why Anthropic endorses California’s AI bill
According to Anthropic, the measure strikes a balance by letting companies compete while remaining transparent about the capabilities of artificial intelligence systems that could endanger public safety.
The company pointed to its Responsible Scaling Policy and system cards as examples of practices it already follows. By making those kinds of safeguards mandatory across the industry, Anthropic argued, the bill stops rivals from rolling back safety commitments to gain an edge.
Anthropic also noted it prefers federal rules, but said California’s action was necessary in the meantime. “Powerful AI advancements won’t wait for consensus in Washington,” the company said.
If passed, SB 53 would make California the first state to hardwire voluntary AI safety pledges into law.