Artificial intelligence is no longer an abstraction debated in research labs. It is already shaping what Californians read, how their medical data is interpreted, what job applications get seen by human eyes, and whether emergency calls are triaged correctly. The most powerful AI systems in the world — trained on oceans of data using staggering amounts of computing power — are built almost entirely in California. And until this week, no law in the United States required the companies building them to tell anyone what safeguards, if any, they had put in place.

Senate Bill 53, authored by Senator Scott Wiener (D-San Francisco) and officially titled the Transparency in Frontier Artificial Intelligence Act (TFAIA), changes that. The bill has cleared both chambers of the California Legislature and now heads to the Governor’s desk, where it awaits a signature. If signed into law, it takes effect January 1, 2026, and will make California the first jurisdiction in the United States with a comprehensive, enforceable transparency and safety-reporting framework for the most advanced AI models in existence.

This article explains what the law actually does, who it targets, why it matters for ordinary Californians, and how it evolved from SB 1047, the more prescriptive predecessor bill that Governor Newsom vetoed just twelve months ago.

The Problem SB-53 Is Trying to Solve

To understand SB-53, it helps to understand why lawmakers felt compelled to act. The AI systems at the cutting edge of capability — what the bill calls “frontier models” — are qualitatively different from the AI embedded in smartphone keyboards or spam filters. These are large, general-purpose systems trained on datasets that encompass much of recorded human knowledge. They can write code, reason through scientific problems, engage in nuanced conversation, and, in the most advanced cases, take autonomous actions in the digital world without human supervision at each step.

The California Legislature found, in the text of SB-53 itself, that these systems present “potential catastrophic risk” — meaning scenarios where a model could, if misaligned or maliciously used, contribute to mass casualties, cripple critical infrastructure, or assist in the creation of weapons of mass destruction. That is not scaremongering; it reflects the consensus of a blue-ribbon panel of experts — including Stanford’s Dr. Fei-Fei Li, often called the “godmother of AI,” Carnegie Endowment’s Dr. Mariano-Florentino Cuéllar, and UC Berkeley’s Dr. Jennifer Tour Chayes — convened by Governor Newsom in late 2024 to assess the risks and advise on appropriate policy.

The central paradox was this: the companies that know the most about these risks — the engineers and executives at the labs building frontier AI — were under no obligation to share what they knew with the public or with government authorities. If an AI system began exhibiting dangerous capabilities during internal testing, there was no requirement to report it. If an employee believed their employer was taking unacceptable risks, there was no legal protection that encouraged them to raise the alarm. SB-53 is designed to close both of those gaps.

"Timely reporting of critical safety incidents to the government is essential to ensure that public authorities are promptly informed of ongoing and emerging risks to public safety."

— SB-53 Legislative Findings, Section (m)

Who Does the Law Actually Cover?

SB-53 does not attempt to regulate every chatbot, every AI-powered hiring tool, or every algorithm that recommends videos. It is deliberately scoped to the narrow tip of the AI spear: the handful of companies building the most computationally intensive models in the world.

The law’s key term is “large frontier developer.” To fall under the law’s strictest requirements, a company must satisfy two conditions simultaneously. First, it must have annual gross revenues exceeding $500 million. Second, it must have trained or be training a “frontier model” — defined technically as a foundation model trained using more than 10²⁶ floating-point operations (FLOPs). To put that in perspective, that threshold is orders of magnitude above the training runs used for most current-generation models. No existing model definitively meets it today, though the largest training runs of 2025 are approaching it.

Technical Context

What is a "floating-point operation"? It is a single mathematical calculation a computer performs during AI training — multiplying or adding numbers. Training a frontier AI model requires hundreds of quadrillions of these calculations. The 10²⁶ threshold — 100 septillion FLOPs — was chosen deliberately to capture only the very largest systems at the frontier of capability: the ones that researchers believe carry the most potential for outsized impact, both positive and negative.

In practice, this means the law targets companies like OpenAI, Anthropic, Google DeepMind, and Meta AI — firms whose models could plausibly cause catastrophic outcomes if misused or misaligned.

Smaller AI startups, businesses that use AI tools without training their own foundation models, and developers of specialized or narrow AI applications are not directly subject to SB-53. The Legislature was candid that this is intentional — and also provisional: the bill’s findings explicitly note that smaller models may eventually pose risks warranting regulation, leaving the door open for future legislation.

The Four Pillars of SB-53

The law creates obligations that fall into four interconnected categories. Together, they amount to a “trust-but-verify” framework — one that does not tell companies how to build AI, but does require them to show their work, report dangerous developments in real time, and protect the employees willing to speak up when something goes wrong.

Pillar One: The Frontier AI Framework

Every large frontier developer must write, implement, and publicly publish on its website a comprehensive “frontier AI framework.” This is not a marketing document. The law specifies that the framework must describe how the company incorporates recognized national standards (such as the NIST AI Risk Management Framework), international standards (such as ISO/IEC 42001), and industry best-practice consensus into its development process. It must also address governance structures, cybersecurity measures protecting model weights, and the company’s approach to identifying and mitigating catastrophic risk.

Critically, the framework must be updated at least annually — or sooner when a material change occurs. This is not a one-time compliance filing that can be forgotten. It is a living document, subject to public scrutiny, and must reflect what the company is actually doing.

Pillar Two: Transparency Reports Before Deployment

Before releasing a new frontier model — or a substantially modified one — each covered company must publish a transparency report. The report must contain, among other things, the release date, the languages and modalities (text, image, audio, etc.) the model supports, its intended uses and restrictions, and crucially, a summary of the company’s assessment of catastrophic risks associated with that specific model.

The Legislature recognized that some of this information could be legitimately sensitive — competitive secrets or details that could themselves create security risks if disclosed in full. The law permits redaction of trade secrets and certain cybersecurity and national security information. But here is the accountability backstop: whatever is redacted must be retained in unredacted form for five years, available for review if legal proceedings arise.

Pillar Three: Critical Safety Incident Reporting

This may be the provision with the most immediate real-world significance for the California public. SB-53 creates a formal reporting mechanism through California’s Office of Emergency Services (Cal OES), through which both AI companies and ordinary citizens can report “critical safety incidents” involving frontier AI systems.

Standard reporting window: 15 days. When a covered developer discovers a critical safety incident with one of its frontier models, it must report to Cal OES within 15 days of that discovery.

Emergency reporting window: 24 hours. If the incident presents an imminent risk of death or serious injury, the developer must alert relevant authorities, including law enforcement, within just 24 hours.

A “critical safety incident” is defined as an event in which a frontier AI system’s behavior leads to — or poses a real risk of — death, serious physical injury, significant property damage, or major harm. Specific examples written into the law include a model autonomously conducting a cyberattack without human oversight, or a model engaging in behavior that would constitute murder, assault, extortion, or theft if committed by a person.
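To make the two-tier timing rule concrete, here is a minimal sketch of how a developer's compliance tooling might compute the applicable deadline. The function name and the example date are hypothetical illustrations of the rule as described above, not language from the statute.

```python
from datetime import datetime, timedelta

def reporting_deadline(discovered_at: datetime, imminent_risk: bool) -> datetime:
    """Two-tier rule described above: 24 hours if the incident poses an imminent
    risk of death or serious injury, otherwise 15 days from discovery."""
    window = timedelta(hours=24) if imminent_risk else timedelta(days=15)
    return discovered_at + window

# Hypothetical example: an incident discovered at noon on March 3, 2026 with no
# imminent risk must be reported to Cal OES by noon on March 18, 2026.
print(reporting_deadline(datetime(2026, 3, 3, 12, 0), imminent_risk=False))
```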

The public component of this mechanism is particularly notable. The law does not only create an obligation for companies to report upward. It creates an infrastructure through which any Californian who observes dangerous AI behavior can formally flag it to government authorities — bringing democratic accountability to a domain that has, until now, operated almost entirely behind closed doors.

Pillar Four: Whistleblower Protections

Perhaps the most consequential innovation in SB-53, for the long arc of AI governance, is its suite of protections for employees who raise the alarm about safety risks at the companies where they work.

The law prohibits any frontier developer — not just large ones — from adopting or enforcing policies that prevent covered employees from disclosing information to the Attorney General, a federal authority, their supervisor, or a colleague with authority to investigate, if the employee has reasonable cause to believe the company’s activities pose a specific and substantial danger to public health or safety resulting from catastrophic risk, or that the company has violated the TFAIA itself.

In addition, large frontier developers must establish an anonymous internal reporting channel — a process through which covered employees can report concerns within the company without revealing their identity. If a whistleblower brings a successful legal action for retaliation, the law authorizes the award of attorney’s fees, reducing the economic barrier to holding companies accountable. This matters enormously in an industry where non-disclosure agreements and cultural pressure to stay silent are pervasive.

CalCompute: Building a Public AI Infrastructure

SB-53 contains one additional provision that extends well beyond regulatory compliance: the creation of a consortium within the Government Operations Agency charged with developing a framework for CalCompute, a public cloud computing cluster — potentially housed at the University of California system — that would give researchers, smaller companies, and public institutions access to the kind of computational infrastructure currently monopolized by a handful of private corporations.

A governance and funding report on CalCompute is due by 2027. The long-term vision is significant: if AI’s future depends partly on who has access to the massive computing resources needed to train and run powerful models, then a public computing infrastructure could democratize that access in ways that benefit Californians across the economic spectrum — not just those working at or funded by Big Tech.

Penalties and Enforcement

The law is enforced by the California Attorney General, who is authorized to seek civil penalties for violations of the TFAIA. The maximum civil penalty is $1 million per violation. While critics on both sides of the debate have noted that this is modest relative to the revenues of the companies involved, the law’s primary enforcement mechanism is less about fines and more about the reputational, legal, and operational consequences of being documented as non-compliant — especially once an incident occurs.

The law also includes a notable mutual recognition provision: if a company is in compliance with comparable federal standards or the EU AI Act’s requirements, California will accept that compliance in lieu of separate state filings. This reflects a deliberate design choice to avoid creating a compliance burden that duplicates federal or international frameworks, and signals California’s intent to be a floor-setter, not a siloed regulator.

Key Legal Definitions

Catastrophic risk means a foreseeable and material risk that a frontier model will materially contribute to: (1) the death of, or serious injury to, more than 50 people, or more than $1 billion in property damage from a single incident; (2) expert-level assistance in creating a chemical, biological, radiological, or nuclear weapon; (3) autonomous conduct amounting to murder, assault, extortion, theft, or cyberattack without meaningful human oversight; or (4) a model evading the control of its developer or user.

Critical safety incident means an event in which a frontier model's behavior leads to, or poses a real risk of, death, serious injury, significant damage, or other major harm as specified in the law.

Covered employee means any current or former employee, contractor, or agent of a frontier developer.

How SB-53 Differs From the Vetoed SB-1047

To fully appreciate what SB-53 is, it helps to understand what it chose not to be. Its predecessor, SB 1047 — the Safe and Secure Innovation for Frontier AI Models Act — passed both chambers of the California Legislature in 2024 with strong majorities before being vetoed by Governor Newsom in September of that year. Newsom’s veto message called for an approach “informed by an empirical trajectory analysis of AI systems and capabilities” and commissioned the expert working group whose recommendations shaped SB-53.

SB 1047 was substantially more prescriptive. It would have required developers to implement safety protocols, cybersecurity protections, and — most controversially — a full “kill switch” capability before beginning training a covered model, not just before deploying it. It mandated annual independent third-party audits. It set a 72-hour window for reporting safety incidents to the Attorney General. Its penalty structure was pegged to compute costs — potentially reaching 10% of the cost of training a model for a first violation, and 30% for subsequent violations. It also imposed compliance obligations on cloud computing providers that supply infrastructure for AI training.

The comparison below summarizes the key differences:

Provision | SB 1047 (Vetoed 2024) | SB 53 (Current Bill)
Kill switch / full shutdown requirement | Required | Disclosure only
Pre-training safety protocols | Mandated | Not required
Independent third-party audits | Annual requirement | Not required
Safety incident reporting window | 72 hours | 15 days / 24 hours (imminent risk)
Civil penalties | Up to 10–30% of compute cost | Up to $1M per violation
Cloud provider compliance obligations | Yes | No
Public frontier AI framework (website) | Not required | Required
Whistleblower protections | Included | Strengthened
Anonymous internal reporting channel | Not required | Required
Public incident reporting mechanism | No | Yes, via Cal OES
CalCompute public infrastructure | Included | Included
Federal / EU mutual recognition | No | Yes
Effective stage of regulation | Pre-training | Deployment stage

The pattern is clear. SB-53 steps back from the most prescriptive engineering mandates of SB 1047 — the kill switch, the pre-training protocols, the mandatory audits — while strengthening the transparency and accountability infrastructure that SB 1047 left relatively thin. Where SB 1047 asked “did you build it safely?”, SB-53 asks “can you prove to the public what safety measures you have in place, and will you tell us immediately when something goes wrong?”

Whether this is the right tradeoff is a legitimate policy debate. But it is a coherent one, grounded in the Governor’s own stated concern that the first generation of AI regulation be “informed by an empirical trajectory analysis” rather than precautionary mandates that may or may not track actual risk.

How Californians Actually Benefit

It is reasonable to ask: if this law only directly regulates a handful of very large companies, why should the average Californian care? The answer operates on several levels.

The most direct benefit is the incident reporting mechanism. For the first time, there is a formal, government-maintained channel through which dangerous AI behavior — observed by anyone, including ordinary users — can be escalated to public authorities who are obligated to take it seriously. This is structurally similar to the way product safety recalls work: the existence of a mandatory reporting system changes company behavior before incidents happen, because the cost of concealment becomes legally and reputationally untenable.

The transparency framework benefits Californians indirectly but profoundly, because published safety frameworks create a public record against which companies can be held accountable. When a company states publicly that it has implemented a certain class of safeguards, and an incident later reveals those safeguards were inadequate or fictional, the legal and reputational exposure is dramatically higher than if no commitment had ever been made. Disclosure obligations, in this sense, are not merely administrative — they are a form of soft enforcement that changes how companies allocate internal resources toward safety.

The whistleblower protections may matter most in the long run. The people most likely to observe dangerous AI behavior before it becomes publicly visible are the engineers and researchers inside these companies. These individuals currently face structural disincentives to speak up — non-disclosure agreements, cultural norms of internal loyalty, and the practical reality that careers in a small, concentrated industry depend on not making enemies. By giving covered employees legal protection and an anonymous reporting pathway, SB-53 creates a human early-warning system that no external regulator could replicate.

Finally, the California effect is real. Just as the state’s vehicle emissions standards became the de facto national standard because auto manufacturers could not economically maintain separate product lines for California and the rest of the country, AI safety frameworks adopted here will tend to become the standards that large AI companies apply globally. The leverage California holds over the AI industry — home to 32 of the world’s top 50 AI companies and more than 15% of all U.S. AI job postings — means that what happens in Sacramento does not stay in Sacramento.

Industry Reactions and Criticisms

The bill has drawn a genuinely mixed response from the technology sector, in ways that reveal the real fault lines in AI policy. Anthropic, one of the largest frontier AI developers and a direct subject of the law, publicly endorsed SB-53, calling it a “trust-but-verify” approach consistent with its own voluntary safety practices. That endorsement from a company with deep financial incentive to resist regulation is significant.

OpenAI and Meta, by contrast, lobbied against the bill and declined to endorse it, though neither issued categorical opposition statements. Andreessen Horowitz, the prominent venture capital firm, objected most forcefully — arguing the bill imposes excessive compliance burdens and creates a problematic precedent for state-level AI regulation.

Industry critics have raised several specific technical objections worth taking seriously. The 10²⁶ FLOPs threshold, they argue, is an imperfect proxy for risk: some smaller-scale models could pose serious real-world dangers, while some very large models are relatively benign. The $1 million per-violation penalty, others note, is unlikely to deter companies generating billions in annual revenue. And some AI safety advocates — aligned with supporters, not opponents — worry that the 15-day reporting window is too lenient, and that the bill’s removal of mandatory third-party audits leaves a meaningful accountability gap that public disclosure alone cannot fill.

These are fair criticisms, and the Legislature has acknowledged them in the bill’s own text, which notes that the 10²⁶ threshold is a starting point and that the California Department of Technology is empowered to recommend definitional updates to reflect technological change. The bill is explicitly framed as a first-generation framework, not a final word.

Context: A Nation Still Waiting on Federal Action

It is worth situating SB-53 in the broader regulatory landscape. At the federal level, Congress has not passed comprehensive AI safety legislation. The Biden administration’s 2023 Executive Order on AI established voluntary commitments and directed federal agencies to develop sector-specific guidance, but those are not laws. The Trump administration, which took office in January 2025, has signaled a strong preference for light-touch federal AI policy, making comprehensive federal legislation in this Congress unlikely. Into that vacuum, states are moving — California most consequentially, but New York’s RAISE Act is also awaiting a gubernatorial signature, and Michigan and other states have introduced proposals.

Compared to the EU AI Act, which entered into force in August 2024, California’s law is narrower in scope — it addresses only the most powerful models rather than a broad category of “high-risk” applications. But on transparency, SB-53 goes further: safety frameworks must be published publicly on company websites, not merely submitted privately to regulators. That is a meaningful distinction for a democracy in which an informed public, not just government bodies, is a check on corporate power.

"California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive."

— Governor Gavin Newsom

What Happens Next

SB-53 now awaits action by Governor Newsom, who has until mid-October to sign, veto, or allow it to become law without his signature. Given that the bill was drafted substantially in response to recommendations from the expert working group he commissioned after vetoing SB 1047, and given that it was endorsed by leading voices including Dr. Fei-Fei Li and former California Supreme Court Justice Mariano-Florentino Cuéllar, a signature is widely expected. If enacted, the law takes effect January 1, 2026.

The Office of Emergency Services will then be required to stand up the incident reporting mechanism. The CalCompute consortium within the Government Operations Agency will begin work on its framework, with a report to the Legislature due in 2027. And the first cohort of covered companies — OpenAI, Anthropic, Google DeepMind, Meta AI, and any others that meet the thresholds — will need to publish their frontier AI frameworks and ensure their deployment pipelines incorporate transparency reporting before they release next-generation models.

The Legislature has also signaled, explicitly in the bill’s findings, that this is the beginning of the regulatory conversation, not the end. Future legislation — covering smaller developers, specific high-risk application domains, or more prescriptive safety engineering requirements — will follow as the evidence base matures. SB-53 is, in the words of its author, a “first step” designed to build the transparency and accountability infrastructure on which deeper regulation can eventually rest.

A Note on Balance — and Why This Framework Deserves Support

Reasonable people disagree about how aggressively to regulate AI. Those who believe the greatest risk is overly cautious regulation that pushes development to jurisdictions with fewer safeguards have a serious point. Those who believe the potential for catastrophic harm demands stronger pre-emptive controls — mandatory audits, enforceable engineering standards, real-time oversight — also have a serious point.

SB-53 occupies a carefully considered middle ground. It does not tell engineers how to build AI. It does not impose compliance costs so severe that only incumbents can absorb them. It does not require a company to prove its model is safe before deployment — a requirement that, whatever its merits, would demand a shared definition of “safe” that does not yet exist in any scientifically rigorous form. What it does do is ensure that, for the first time, the companies building the most powerful technologies in human history cannot do so in complete secrecy, cannot conceal safety failures from public authorities, and cannot silence the employees who bear witness to what happens inside these systems before the rest of us ever see the output.

That is a meaningful first step. It is the kind of first step that, once established, creates the infrastructure for more. And it is a step that California — home to the industry, home to the talent, home to the public most directly affected — is uniquely positioned to take.