SB 53 is the first American law requiring frontier AI developers to adopt safety policies, the industry-standard tools for disclosing how a developer manages severe risks from its models. Most frontier developers, including Anthropic, OpenAI, Google DeepMind, xAI, and Meta, already have safety policies that satisfy many of SB 53's requirements. The new law requires these frontier developers to strengthen their safety policies in some key respects. For example, they will have to be more transparent about how they manage catastrophic risks from internal use of their models. SB 53 will also ensure that new developers reaching the frontier in the future publish comprehensive safety policies.
It is already a widely accepted best practice in the AI industry that when a company releases a new frontier model, it should publish a report called a model card describing the model's capabilities and safety properties. SB 53 requires frontier AI companies to publish basic model cards, and it further requires that large frontier AI companies publish a transparency report for every new frontier model they deploy. The report must describe how the developer assessed catastrophic risks from the model, what results they found, how third parties were involved, and what else they did to comply with their own safety policy.
Safety policies (which SB 53 calls "frontier AI frameworks") and transparency reports aren't qualitatively new. Leading AI companies were already publishing them before SB 53 passed. But the law does require one new kind of disclosure. Large frontier developers must regularly assess catastrophic risks from internal use of their models and report what they find to the Office of Emergency Services. By contrast, AI companies' past catastrophic risk assessments have generally focused on risks from external deployment, and they have been tied to new model releases rather than occurring on a regular schedule.
SB 53 makes large developers civilly liable for violations of the above rules. No AI company executives will go to jail for failing to publish a safety policy or model card, but their companies can face fines of up to one million dollars per violation. This is a major change from the previous status quo. Frontier AI developers used to have no legal obligation to disclose anything about their safety and security protocols to the government, let alone to the public. Under SB 53, basic transparency about risk management is no longer optional, and the force of law now holds developers to their published safety commitments.
SB 53 creates an official channel for the California state government to collect reports of safety incidents involving AI. To mount an effective response to an ongoing safety incident, the state government needs to know that the incident is unfolding. But many incidents would likely be noticed by AI developers or by members of the public before officials became aware of them. SB 53 ensures that if this happens, there is an efficient mechanism for reports of a critical incident to reach the authorities. Further, it mandates that when a frontier developer becomes aware of an AI incident posing an imminent risk of death or serious injury, it must notify authorities within 24 hours.
Existing California law already offers whistleblower protection to AI company employees who report a violation of federal, state, or local law to public officials or to their superiors. Companies may not make rules or enforce contracts that would prevent their employees from blowing the whistle, nor can they retaliate against an employee who becomes a whistleblower. SB 53 strengthens the whistleblower rights of employees on frontier AI developers' safety teams in two ways.

Finally, SB 53 calls for California to build a publicly owned AI compute cluster called CalCompute. The cluster's purpose would be to support AI research and innovation for the public benefit. Nothing like CalCompute currently exists in California, but similar projects have been announced or are already underway in several other jurisdictions. New York has already built a compute cluster under its Empire AI initiative, the UK has given academics compute access through its AI Research Resource, and the US National Science Foundation's National AI Research Resource aims to provide the same for American researchers. SB 53 does not specify how much funding California will put behind CalCompute, nor how many AI chips it aims to acquire, so it's hard to tell how much this section of the bill will accomplish. If CalCompute is funded generously in the next state budget, it could be a big deal; if the project gets only a meager budget, it may not achieve much.