{"id":4781,"date":"2025-09-10T18:47:56","date_gmt":"2025-09-10T18:47:56","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=4781"},"modified":"2025-09-10T18:47:56","modified_gmt":"2025-09-10T18:47:56","slug":"anthropic-backs-sb-53-californias-landmark-ai-safety-bill","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=4781","title":{"rendered":"Anthropic Backs SB 53 \u2013 California\u2019s Landmark AI Safety Bill"},"content":{"rendered":"<p>California is moving ahead with a state bill that would require transparency and safety reporting for the most powerful AI models, and Anthropic is on board.<\/p>\n<p>The measure, known as SB 53, is California\u2019s Transparency in Frontier Artificial Intelligence Act. It would mandate public safety frameworks, risk disclosures, and whistleblower protections for large AI developers. Anthropic called the bill a \u201ctrust but verify\u201d approach to oversight, saying it ensures labs remain accountable while competing at the frontier.<\/p>\n<h2 class=\"wp-block-heading\">Lawmakers push transparency in AI systems<\/h2>\n<p>Senator Scott Wiener is leading <a href=\"https:\/\/leginfo.legislature.ca.gov\/faces\/billTextClient.xhtml?bill_id=202520260SB53\" target=\"_blank\" rel=\"noopener\">the effort<\/a>, joined by Senator Susan Rubio. After vetoing a broader bill last year, Governor Gavin Newsom convened a working group of experts to refine the approach, producing recommendations that shaped the new legislation.<\/p>\n<p>Wiener said the updated bill moves away from liability and toward transparency, making voluntary safety commitments by major <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/ai-companies\/\">AI companies<\/a> legally enforceable. 
As Helen Toner of Georgetown University <a href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/anthropic-backs-californias-sb-53-ai-bill-rcna229908\">told NBC<\/a>, \u201cThe need for more transparency from frontier AI developers is one of the <a href=\"https:\/\/www.nbcnews.com\/tech\/tech-news\/anthropic-backs-californias-sb-53-ai-bill-rcna229908\" target=\"_blank\" rel=\"noopener\">AI policy<\/a> ideas with the most consensus behind it.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Breaking down California\u2019s Transparency in Frontier Artificial Intelligence Act<\/h2>\n<p>The legislation lays out specific obligations for organizations building the most powerful <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/generative-ai-model\/\">AI models<\/a>, from publishing safety frameworks to reporting critical incidents.<\/p>\n<h3 class=\"wp-block-heading\">Safety frameworks<\/h3>\n<p>Major AI companies must publish detailed safety and security protocols. These frameworks describe how developers test their models, set thresholds for dangerous capabilities, and outline steps to mitigate catastrophic risks.<\/p>\n<h3 class=\"wp-block-heading\">Transparency reports<\/h3>\n<p>Before releasing powerful new models, companies are required to post public transparency reports. These documents must summarize risk assessments, explain deployment decisions, and disclose whether third parties were involved in testing.<\/p>\n<h3 class=\"wp-block-heading\">Reporting requirements<\/h3>\n<p>Developers must report critical safety incidents, such as a loss of model control or weight leaks, to the state within 15 days, or 24 hours in urgent cases. They must also provide confidential summaries of catastrophic risk assessments to California\u2019s Office of Emergency Services.<\/p>\n<h3 class=\"wp-block-heading\">Whistleblower protections<\/h3>\n<p>The bill creates protections for employees who disclose safety concerns. 
Companies must provide anonymous reporting channels, and retaliation against whistleblowers is prohibited.<\/p>\n<h3 class=\"wp-block-heading\">Enforcement and scope<\/h3>\n<p>SB 53 applies only to large AI companies with revenues above $500 million and models trained with massive computing power. Penalties range from $10,000 for minor violations up to $10 million for repeat, knowing breaches tied to catastrophic risk. The law also blocks local governments from passing conflicting AI rules.<\/p>\n<h3 class=\"wp-block-heading\">CalCompute<\/h3>\n<p>The bill establishes CalCompute, a state-run public cloud cluster to be housed within the University of California system. It is intended to expand access to computing power for safe and equitable AI research.<\/p>\n<h2 class=\"wp-block-heading\">Why Anthropic endorses California\u2019s AI bill<\/h2>\n<p>According to Anthropic, the measure strikes a balance by letting companies compete while remaining transparent about the capabilities of <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/what-is-artificial-intelligence\/\">artificial intelligence<\/a> systems that could endanger public safety.<\/p>\n<p>The <a href=\"https:\/\/www.anthropic.com\/news\/anthropic-is-endorsing-sb-53\" target=\"_blank\" rel=\"noopener\">company pointed<\/a> to its Responsible Scaling Policy and system cards as examples of practices it already follows. By making those kinds of safeguards mandatory across the industry, Anthropic argued, the bill stops rivals from rolling back safety commitments to gain an edge.<\/p>\n<p>Anthropic also noted it prefers federal rules, but said California\u2019s action was necessary in the meantime. 
\u201cPowerful AI advancements won\u2019t wait for consensus in Washington,\u201d the company said.<\/p>\n<p>If passed, SB 53 would make California the first state to hardwire voluntary AI safety pledges into law.<\/p>\n<p>The post <a href=\"https:\/\/www.eweek.com\/news\/california-ai-safety-bill-sb-53-anthropic-endorsement\/\">Anthropic Backs SB 53 \u2013 California\u2019s Landmark AI Safety Bill<\/a> appeared first on <a href=\"https:\/\/www.eweek.com\/\">eWEEK<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>California is moving ahead with a state bill that would require transparency and safety reporting for the most powerful AI models, and Anthropic is on board. The measure, known as SB 53, is California\u2019s Transparency in Frontier Artificial Intelligence Act. 
SB 53 is designed to mandate public safety frameworks, risk disclosures, and whistleblower protections for [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-4781","post","type-post","status-publish","format-standard","hentry","category-news"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/4781"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=4781"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/4781\/revisions"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=4781"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=4781"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=4781"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}