SB 53 FAQs

How does SB 53 affect startups?

SB 53 will not impose new regulations on early-stage startups. The bill's transparency and safety requirements only apply to "frontier developers," which are defined in § 22757.11.h-i as persons who have trained or begun training a foundation model with more than 10^26 floating point operations (FLOPs). Moreover, the strictest requirements only apply to frontier developers that have over $500 million in annual revenue. To date, only two developers are known to have met both of these conditions: xAI and OpenAI. Both are extremely well-resourced private companies. xAI was valued at $18 billion in early 2024 and is reported to be worth as much as $200 billion now. OpenAI was valued at $300 billion after a fundraising round in April 2025.

Only OpenAI and xAI are publicly known to have trained models with more than 10^26 FLOPs, and neither company has stated how much the training runs in question cost. But based on public information about AI GPU prices, energy prices, and datacenter construction costs, experts estimate that a 10^26 FLOP training run would cost in the high tens of millions of dollars.1 This is prohibitively expensive for early-stage startups.
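
As a rough illustration of where estimates like that come from, here is a back-of-envelope calculation in Python. The GPU throughput, utilization, and amortized cost-per-GPU-hour figures are assumptions chosen for illustration, not numbers taken from SB 53 or from Heim and Koessler's paper; different but still-plausible assumptions move the total around without bringing it anywhere near a typical startup's budget.

```python
# Back-of-envelope estimate of what a 10^26 FLOP training run might cost.
# All numeric inputs below are illustrative assumptions, not figures from
# SB 53 or from Heim and Koessler's analysis.

TOTAL_FLOP = 1e26             # SB 53's frontier-model compute threshold
PEAK_FLOP_PER_SEC = 1e15      # ~1,000 TFLOP/s per GPU (roughly an H100 at bf16)
UTILIZATION = 0.40            # assumed fraction of peak throughput actually achieved
COST_PER_GPU_HOUR = 1.00      # assumed amortized hardware + energy cost, in USD

effective_flop_per_sec = PEAK_FLOP_PER_SEC * UTILIZATION
gpu_seconds = TOTAL_FLOP / effective_flop_per_sec
gpu_hours = gpu_seconds / 3600
cost_usd = gpu_hours * COST_PER_GPU_HOUR

print(f"{gpu_hours:,.0f} GPU-hours, roughly ${cost_usd / 1e6:,.0f} million")
# With these assumptions: ~69 million GPU-hours, roughly $69 million.
```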

How does SB 53 affect open source?

SB 53's transparency and incident reporting rules apply to all large AI developers regardless of whether they open-source their models.2 A large open source developer must publish and follow a safety policy, release a model card for every frontier model it deploys, and report critical safety incidents to the Office of Emergency Services, just as a closed source developer would.

However, SB 53 is designed to avoid placing unreasonable burdens on open source developers.

First, it only applies to the very largest and wealthiest AI developers—those training models with over 10^26 FLOPs. This high threshold means that small open source developers who would struggle to afford compliance costs aren't affected at all.

Second, the bill's incident response requirements are flexible enough that open source developers can still meet them despite having less control over their models after deployment. In particular, developers aren't penalized for failing to report safety incidents that they couldn't have known about, such as misuse of an open source model.3

In practice, this means large open source developers would need to follow internal safety protocols and be transparent about their risk assessments, but they would not be held responsible for controlling models after public release.

How does SB 53 relate to the California Report on Frontier AI Policy?

The California Report is an expert report on frontier AI safety and governance. It was commissioned in September 2024 by Governor Gavin Newsom and published in June 2025, shortly before SB 53 was amended. The Report surveys approaches California could take to AI governance, putting them in technical and historical context and noting their advantages and drawbacks. It does not advocate for specific policies, but it does recommend principles for future policy to follow and goals for it to aim at.

SB 53 is directly inspired by the principles outlined in the California Report. The bill's sponsor has publicly said as much, and the evidence is all over the bill.

The California Report's strongest recommendation is for AI developers to make themselves more transparent to the public. "Transparency is a fundamental prerequisite of social responsibility and accountability," it says, because "without sufficient understanding of industry practices, the public cannot characterize the societal impact of digital technologies, let alone propose concrete actions to improve business practices." The Report does not recommend specific transparency mechanisms such as safety policies and model cards, but it does name key areas where transparency is most desirable. It says model developers should disclose how they assess the risks associated with their models, what steps they take to mitigate those risks, what they do to secure model weights, and how they test their models before deployment. SB 53 would require large AI developers to publicly reveal all of this information in their safety policies.4

The California Report recommends that the government establish an adverse event reporting system for incidents involving AI. It says "to better understand the practical impact and, in particular, the incidents or harms associated with AI, we need to develop better post-deployment monitoring."5 SB 53 puts this recommendation into practice by having the California Office of Emergency Services create a system for reporting and documenting critical safety incidents involving AI.

The Report says in its list of key principles that "clear whistleblower protections…can enable increased transparency above and beyond information disclosed by foundation model developers." It goes on to survey existing whistleblower laws and finds that while formal employees of most private sector organizations are generally already protected, "a central question in the AI context is whether protections apply to additional parties, such as contractors. Broader coverage may provide stronger accountability benefits."6 SB 53 does not deliver this broader coverage, but it does enhance whistleblower protections for frontier developer employees who are responsible for risk management. As these employees are among the closest to information about the models' safety properties, they are plausibly the most important to protect.

Does SB 53 create a new regulatory agency?

No, SB 53 does not create a new agency. Instead, it gives the California Attorney General and the Office of Emergency Services power to enforce the new transparency rules. The AG and OES monitor AI companies' compliance through critical safety incident reports filed by the companies themselves and tips from whistleblowers within companies.7 When these information sources lead the AG to believe a frontier developer is violating SB 53's transparency requirements, the AG can bring a civil suit against the developer. Violations carry fines up to one million dollars.

Does SB 53 contribute to a regulatory patchwork?

Some critics of state-level AI regulation warn that if each state sets its own AI regulation independently, the US could end up with an incoherent state-by-state patchwork of conflicting rules. Such a patchwork could harm small AI developers by driving up compliance costs beyond what they can afford. Eventually small developers would go out of business, locking in today's leading AI companies and slowing innovation.

This is a legitimate worry, but SB 53 is unlikely to create any problematic patchwork effects for two reasons. First, SB 53 does not impose any new regulations on small developers. It only targets large companies worth billions of dollars who can easily afford the cost of compliance.

Second, if we look at major AI regulations currently up for debate in other states, we see that none of SB 53's headline transparency requirements are unique to California. The RAISE Act in New York and the AI Safety and Security Transparency Act in Michigan would both require every large AI developer8 to publish and follow a safety policy. The New York bill would also require large developers to report safety incidents to the state Attorney General. And the Michigan bill would also require every large developer to release model cards for all of their frontier systems. The upshot is that if an AI company is already in compliance with NY RAISE and the Michigan Transparency Act, their marginal cost of complying with SB 53 should be minimal, as all of the major transparency requirements in SB 53 are also in one or both of those other bills.
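
For concreteness, the sketch below encodes the three bills' coverage tests, as summarized in footnote 8, as simple predicates. The function names and numeric shorthand are illustrative, not statutory language, and each test is reduced to its headline conditions.

```python
# Simplified "large developer" tests, following the thresholds summarized in footnote 8.
# Function and parameter names are illustrative shorthand, not statutory definitions.

def large_developer_ny(single_model_spend: float, aggregate_training_spend: float) -> bool:
    """NY RAISE Act: over $5 million on one model and over $100 million in aggregate."""
    return single_model_spend > 5e6 and aggregate_training_spend > 100e6

def large_developer_mi(single_model_spend_last_12_months: float) -> bool:
    """Michigan bill: over $100 million on a single model in the last twelve months."""
    return single_model_spend_last_12_months > 100e6

def large_frontier_developer_ca(training_flop: float, prior_year_gross_revenue: float) -> bool:
    """SB 53: a training run over 10^26 FLOPs and over $500 million in prior-year gross revenue."""
    return training_flop > 1e26 and prior_year_gross_revenue > 500e6
```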

Does SB 53 introduce new liability for AI harms?

No, SB 53 does not expand AI companies' liability for harms caused by their models. The bill only makes companies civilly liable for transparency violations, not for anything their models do. A large AI developer can be sued and fined for procedural failures—such as neglecting to publish a safety policy, breaking their own safety policy, or publishing a false or misleading model card. But as long as an AI company meets its transparency obligations, it cannot be sued under SB 53 even if something goes wrong and one of its systems causes a catastrophe. The company is no more liable than it would be under existing law.

Does SB 53 permit private lawsuits against AI developers?

No, SB 53 does not give private actors standing to sue AI developers over transparency violations. The bill states very clearly that only the California Attorney General can bring a civil action against a large developer for breaking the transparency rules.9

Separately, the chapter on whistleblower protections allows a whistleblower who believes they've suffered retaliation from a large AI developer to bring a civil action against that developer. If the whistleblower wins their case, the court can grant them injunctive relief from the retaliation plus attorney's fees. All of this is standard for whistleblower protection laws. Section 1102.5 of the California Labor Code allows a whistleblower who believes they've suffered retaliation to sue their employer for relief from the retaliation, plus attorney's fees and a civil penalty of up to $10,000. The AI Whistleblower Protection Act currently up for debate in the US Senate would also allow a whistleblower who alleges retaliation by their employer to bring a private action against the AI company in question unless the Department of Labor resolves the allegation on their behalf within 180 days.

Does SB 53 require every model to have a kill switch?

No, SB 53 does not directly require any model to have a kill switch. Some AI companies might choose to build kill switches into their systems voluntarily. For instance, Anthropic's Responsible Scaling Policy states (§ 7.1) that they are developing procedures to restrict access to their models in the event of a safety emergency. But SB 53 won't force other companies to do the same.

How does SB 53 compare to the EU AI Act?

The AI Act is an EU regulation that sets standards for AI companies operating in Europe. Among other things, the Act lays down safety, security, and transparency rules for providers of general purpose AI models, such as the large developers who would be subject to SB 53. Although the AI Act is not a US law, it still applies to American AI companies that deploy their models within the EU, and all of the leading US companies have agreed to comply with it.

Article 55 of the AI Act overlaps substantially with SB 53. The associated Code of Practice—an official guide that tells companies what they can do to follow the Act—requires every frontier AI company to write and implement a safety and security framework saying how they will assess and manage severe risks from their models. The content companies have to put in these frameworks is even more comprehensive than the content required in SB 53's safety policies. Both laws also require every frontier model a company deploys to have a model card (called a "model report" in the Code of Practice).10

But SB 53 goes beyond the EU AI Act in three important ways:
  1. The AI Act does not mandate public transparency from large developers. The Code of Practice lets companies keep their full safety frameworks private, sending them only to the EU AI Office. Complete model reports likewise go just to the AI Office, not to consumers using the model. Under the Code of Practice, companies only publish summarized versions of these documents "if and insofar as is necessary." In contrast, SB 53 requires a large developer to post their safety policy and model cards prominently on their website for all to read.
  2. The AI Act does nothing for the State of California's awareness of critical safety incidents. It requires an AI company to notify the EU AI Office and their national government of any serious incidents caused by their models, but it says nothing about notifying regional governments. Under SB 53, a large developer will also be required to inform the California Office of Emergency Services when they become aware of a critical incident involving their models.
  3. The AI Act does little to protect AI whistleblowers based outside of the EU, whereas SB 53 would modestly strengthen protections for AI company whistleblowers.

How does SB 53 balance transparency with security and competitive concerns?

As the California Report notes, making AI companies more transparent could have drawbacks for security and for protecting trade secrets. Companies might disclose information that points hostile actors toward vulnerabilities in their models or holes in their internal security. They might also leak trade secrets or confidential IP to their competitors through public transparency disclosures.

SB 53 accounts for these concerns by allowing AI developers to redact their safety policies and model cards as needed "to protect the large developer’s trade secrets, the large developer’s cybersecurity, public safety, or the national security of the United States." These redactions are entirely up to the developer's discretion, so long as they explain in general terms what they redacted and why, and retain an unredacted copy of the document for five years.


  1. See § 3 of Heim and Koessler, "Training Compute Thresholds." Their precise estimate is that a 10^26 FLOP training run would have cost $70 million in mid-2024.
  2. In fact, the rules apply to a large developer even if they do not deploy their models at all. In principle, a company could initiate a 10^26 FLOP training run and make $500 million of annual revenue without deploying a single model, and they would count as a large developer.
  3. For the precise scope of the incident reporting requirement, see § 22757.13.c.
  4. See § 3 of the Report and § 22757.12.a of SB 53.
  5. See § 4 in the Report.
  6. The first quotation comes from page 4 of the Report, and the second from page 29.
  7. § 22757.13 says that OES will establish a mechanism for collecting critical safety incident reports, and that any large developer who experiences a critical safety incident will be obliged to report it promptly through the official mechanism.
  8. All three bills use qualitatively similar tests to determine who counts as a large developer. In New York, you're a large developer if you've spent over $5 million on training a single model and over $100 million in aggregate on training all your models. In Michigan, you're a large developer if you've spent over $100 million on training a single model in the last twelve months. And in California, you're a large frontier developer if you've started training a model with over 10^26 FLOPs and you made over $500 million of gross revenue in the previous calendar year.
  9. See § 22757.15.b: "A civil penalty described in this section shall be recovered in a civil action brought only by the Attorney General."
  10. For more detail on safety and security protocols, see commitment 1 in the chapter on safety and security, and for more on model reports, see commitment 7 in the same chapter.