SB 53 will not impose new regulations on early-stage startups. The bill's transparency and safety requirements only apply to "frontier developers," defined in § 22757.11(h)-(i) as persons who have trained or begun training a foundation model with more than 10^26 floating point operations (FLOPs). Moreover, the strictest requirements only apply to frontier developers that have over $500 million in annual revenue. To date, only two developers are known to have met both of these conditions: xAI and OpenAI. Both are extremely well-resourced private companies. xAI was valued at $18 billion in early 2024 and is reported to be worth as much as $200 billion now. OpenAI was valued at $300 billion after a fundraising round in April 2025.
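To make the two tiers concrete, here is a minimal sketch of how the applicability rules described above fit together. The thresholds match the figures cited in this section, but the function, its labels, and the example inputs are illustrative simplifications, not statutory language.

```python
# Simplified illustration of SB 53's two applicability tiers.
# The thresholds mirror the figures discussed above; everything else
# (names, labels, example inputs) is hypothetical.

FRONTIER_FLOP_THRESHOLD = 1e26          # training-compute threshold (FLOPs)
LARGE_DEVELOPER_REVENUE_USD = 500_000_000  # annual revenue threshold

def sb53_tier(training_flops: float, annual_revenue_usd: float) -> str:
    """Classify a developer under SB 53's two tiers (simplified)."""
    if training_flops <= FRONTIER_FLOP_THRESHOLD:
        return "not covered"  # e.g. early-stage startups
    if annual_revenue_usd > LARGE_DEVELOPER_REVENUE_USD:
        return "frontier developer, strictest requirements apply"
    return "frontier developer, baseline transparency requirements apply"

print(sb53_tier(training_flops=5e24, annual_revenue_usd=2_000_000))        # not covered
print(sb53_tier(training_flops=2e26, annual_revenue_usd=10_000_000_000))   # strictest tier
```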
Only OpenAI and xAI are publicly known to have trained models with more than 10^26 FLOPs, and neither company has stated how much the training runs in question cost. But based on public information about AI GPU prices, energy prices, and datacenter construction costs, experts estimate that a 10^26 FLOP training run would cost in the high tens of millions of dollars.1 This is prohibitively expensive for early-stage startups.
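For intuition, here is a rough back-of-envelope version of that estimate. The accelerator throughput, utilization rate, and all-in cost per GPU-hour below are illustrative assumptions, not figures from the bill, the footnoted estimate, or any developer's disclosures; with these inputs, the arithmetic lands in the high tens of millions of dollars.

```python
# Back-of-envelope sketch of what a 10^26 FLOP training run might cost.
# All inputs are assumed values for illustration only.

TOTAL_FLOPS = 1e26            # SB 53's frontier-model training threshold
PEAK_FLOPS_PER_GPU = 1e15     # assumed ~1 PFLOP/s of dense 16-bit throughput per GPU
UTILIZATION = 0.4             # assumed fraction of peak throughput actually achieved
COST_PER_GPU_HOUR = 1.00      # assumed all-in dollars per GPU-hour (hardware + energy)

effective_flops_per_gpu = PEAK_FLOPS_PER_GPU * UTILIZATION
gpu_hours = TOTAL_FLOPS / effective_flops_per_gpu / 3600
estimated_cost = gpu_hours * COST_PER_GPU_HOUR

print(f"GPU-hours: {gpu_hours:,.0f}")             # ~69 million GPU-hours
print(f"Estimated cost: ${estimated_cost:,.0f}")  # ~$69 million
```

Changing any of the assumed inputs shifts the total, but plausible values keep it far beyond what an early-stage startup could spend on a single training run.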
However, SB 53 is designed to avoid placing unreasonable burdens on open source developers.
First, it only applies to the very largest and wealthiest AI developers—those training models with over 10^26 FLOPs. This high threshold means that small open source developers who would struggle to afford compliance costs aren't affected at all.
Second, the bill's incident response requirements are flexible enough that open source developers can still meet them despite having less control over their models after deployment. In particular, developers aren't penalized for failing to report safety incidents that they couldn't have known about, such as misuse of an open source model.3
In practice, this means large open source developers would need to follow internal safety protocols and be transparent about their risk assessments, but they would not be held responsible for controlling models after public release.
The California Report is an expert report on frontier AI safety and governance. It was commissioned in September 2024 by Governor Gavin Newsom and published in June 2025, shortly before SB 53 was amended. The Report surveys approaches California could take to AI governance, putting them in technical and historical context and noting their advantages and drawbacks. It does not advocate for specific policies, but it does recommend principles for future policy to follow and goals for it to aim at.
SB 53 is directly inspired by the principles outlined in the California Report. The bill's sponsor has said as much publicly, and the Report's influence is evident throughout the bill's text.
The California Report's strongest recommendation is for AI developers to make themselves more transparent to the public. "Transparency is a fundamental prerequisite of social responsibility and accountability," it says, because "without sufficient understanding of industry practices, the public cannot characterize the societal impact of digital technologies, let alone propose concrete actions to improve business practices." The Report does not recommend specific transparency mechanisms such as safety policies and model cards, but it does name key areas where transparency is most desirable. It says model developers should disclose how they assess the risks associated with their models, what steps they take to mitigate those risks, what they do to secure model weights, and how they test their models before deployment. SB 53 would require large AI developers to publicly reveal all of this information in their safety policies.4
The California Report recommends that the government establish an adverse event reporting system for incidents involving AI. It says that "to better understand the practical impact and, in particular, the incidents or harms associated with AI, we need to develop better post-deployment monitoring."5 SB 53 puts this recommendation into practice by having the California Office of Emergency Services create a system for reporting and documenting critical safety incidents involving AI.
The Report says in its list of key principles that "clear whistleblower protections…can enable increased transparency above and beyond information disclosed by foundation model developers." It goes on to survey existing whistleblower laws and finds that while all formal employees within most private sector organizations are generally already protected, "a central question in the AI context is whether protections apply to additional parties, such as contractors. Broader coverage may provide stronger accountability benefits."6 SB 53 does not deliver this broader coverage, but it does enhance whistleblower protections for frontier developer employees who are responsible for risk management. As these employees are among the closest to information about the models' safety properties, they are plausibly the most important to protect.
No, SB 53 does not create a new agency. Instead, it gives the California Attorney General and the Office of Emergency Services power to enforce the new transparency rules. The AG and OES monitor AI companies' compliance through critical safety incident reports filed by the companies themselves and tips from whistleblowers within companies.7 When these information sources lead the AG to believe a frontier developer is violating SB 53's transparency requirements, the AG can bring a civil suit against the developer. Violations carry fines up to one million dollars.
Some critics of state-level AI regulation warn that if each state sets its own AI regulation independently, the US could end up with an incoherent state-by-state patchwork of conflicting rules. Such a patchwork could harm small AI developers by driving up compliance costs beyond what they can afford. Eventually small developers would go out of business, locking in today's leading AI companies and slowing innovation.
This is a legitimate worry, but SB 53 is unlikely to create any problematic patchwork effects for two reasons. First, SB 53 does not impose any new regulations on small developers. It only targets large companies worth billions of dollars who can easily afford the cost of compliance.
Second, if we look at major AI regulations currently up for debate in other states, we see that none of SB 53's headline transparency requirements are unique to California. The RAISE Act in New York and the AI Safety and Security Transparency Act in Michigan would both require every large AI developer8 to publish and follow a safety policy. The New York bill would also require large developers to report safety incidents to the state Attorney General. And the Michigan bill would also require every large developer to release model cards for all of their frontier systems. The upshot is that if an AI company is already in compliance with NY RAISE and the Michigan Transparency Act, their marginal cost of complying with SB 53 should be minimal, as all of the major transparency requirements in SB 53 are also in one or both of those other bills.
No, SB 53 does not expand AI companies' liability for harms caused by their models. The bill only makes companies civilly liable for transparency violations, not for anything their models do. A large AI developer can be sued and fined for procedural failures—such as neglecting to publish a safety policy, breaking their own safety policy, or publishing a false or misleading model card. But as long as an AI company is being transparent, if something goes wrong and one of their systems causes a catastrophe, the company cannot be sued under SB 53. They are no more liable than they would be under existing law.
Separately, the chapter on whistleblower protections allows a whistleblower who believes they've suffered retaliation from a large AI developer to bring a civil action against that developer. If the whistleblower wins their case, the court can grant them injunctive relief from the retaliation they've suffered plus attorney's fees. All of this is standard for whistleblower protection laws. Section 1102.5 of the California Labor Code allows a whistleblower who believes they've suffered retaliation to sue their employer for relief from retaliation plus attorney's fees plus a civil penalty of up to $10,000. The AI Whistleblower Protection Act currently up for debate in the US Senate would also allow a whistleblower who alleges retaliation by their employer to bring a private action against the AI company in question unless the Department of Labor resolves the allegation on their behalf within 180 days.
No, SB 53 does not directly require any model to have a kill switch. Some AI companies might choose to build kill switches into their systems voluntarily. For instance, Anthropic's Responsible Scaling Policy states (§ 7.1) that they are developing procedures to restrict access to their models in the event of a safety emergency. But SB 53 does not force other companies to do the same.
The AI Act is an EU regulation that sets standards for AI companies operating in Europe. Among other things, the Act lays down safety, security, and transparency rules for providers of general purpose AI models, such as the large developers who would be subject to SB 53. Although the AI Act is not a US law, it still applies to American AI companies that deploy their models within the EU, and all of the leading US companies have agreed to comply with it.
Article 55 of the AI Act overlaps substantially with SB 53. The associated Code of Practice—an official guide that tells companies what they can do to follow the Act—requires every frontier AI company to write and implement a safety and security framework saying how they will assess and manage severe risks from their models. The content companies have to put in these frameworks is even more comprehensive than the content required in SB 53's safety policies. Both laws also require every frontier model a company deploys to have a model card (called a "model report" in the Code of Practice).10
But SB 53 goes beyond the EU AI Act in three important ways.

As the California Report notes, making AI companies more transparent could have drawbacks for security and for protecting trade secrets. Companies might disclose information that points hostile actors toward vulnerabilities in their models or holes in their internal security. They might also leak trade secrets or confidential IP to their competitors through public transparency disclosures.
SB 53 accounts for these concerns by allowing AI developers to redact their safety policies and model cards as needed "to protect the large developer’s trade secrets, the large developer’s cybersecurity, public safety, or the national security of the United States." These redactions are entirely up to the developer's discretion, so long as they explain in general terms what they redacted and why, and retain an unredacted copy of the document for five years.