On Sunday, September 29, 2024, Governor Gavin Newsom returned Senate Bill 1047 to the legislature without his signature, vetoing the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act — the most far-reaching AI safety legislation proposed by any American state. The decision ends, for now, California's best opportunity to establish legal guardrails around the most powerful AI systems being built anywhere on earth. This article explains what the Governor said, what he didn't say, and what his veto means for the people of California.

The Veto at a Glance
Date vetoed: September 29, 2024
Bill status: Returned without signature
Senate vote (May 2024): 32–1 in favor
Other AI bills signed in September: 17 separate bills

Newsom simultaneously vetoed SB 1047 and announced a new advisory panel — including Stanford's Dr. Fei-Fei Li and UC Berkeley Dean Jennifer Tour Chayes — to develop an "empirical, science-based" AI risk analysis for the legislature.

What the Governor Said

The Full Text of His Objections

Newsom’s veto message, addressed to the California State Senate, acknowledged the seriousness of the problem SB 1047 sought to solve. He stated that California “cannot afford to wait for a major catastrophe to occur before taking action to protect the public” — a concession to the bill’s supporters that carries significant weight coming from a governor who had just chosen to block it. But three substantive objections formed the core of his reasoning.

The Governor’s arguments are worth examining in detail, because each one has real policy content — and each one also has real weaknesses that supporters of the bill are right to challenge.

Governor Newsom's Three Core Objections — In His Own Words
Objection 1 · The "False Sense of Security" Argument
"By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology. Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047."
Objection 2 · The "Blunt Instrument" Argument
"SB 1047 does not take into account whether an AI system is deployed in high-risk environments, involves critical decision-making or the use of sensitive data. Instead, the bill applies stringent standards to even the most basic functions — so long as a large system deploys it."
Objection 3 · The "Empirical Basis" Argument
"We must not settle for a solution that is not informed by an empirical trajectory analysis of AI systems and capabilities. Ultimately, any framework for effectively regulating AI needs to keep pace with the technology itself."

On the surface, these objections sound reasonable. A careful reading, however, reveals that each one, while containing a kernel of legitimate policy debate, ultimately fails to justify the veto — and in some cases actively undermines the Governor’s stated commitment to AI safety.

Examining the Arguments

Objection 1: “Smaller Models Could Be Just as Dangerous”

The Governor’s first objection is that SB 1047’s focus on regulating large and expensive AI models “could give the public a false sense of security about controlling this fast-moving technology,” because “smaller, specialized models may emerge as equally or even more dangerous.” This argument has a logical surface appeal — if the bill misses some dangerous systems, does it offer false reassurance?

The problem is that this reasoning, taken seriously, would justify blocking virtually any targeted safety regulation. Building codes that set specific height thresholds for structural requirements do not give us a “false sense of security” about buildings that fall just below the threshold. Pharmaceutical regulations that impose the most rigorous trial requirements on the highest-risk drugs are not invalidated by the theoretical possibility that a lower-risk drug could also cause harm. Regulation necessarily draws lines. The question is whether the lines are drawn in the right place — and the $100 million, 10²⁶ FLOP threshold of SB 1047 was specifically calibrated to capture only the handful of systems whose capabilities could plausibly enable catastrophic misuse.

To argue that the bill should be vetoed because it doesn’t also cover smaller models is to argue for no bill at all — because no bill, no matter how broad, can anticipate every possible future risk. The appropriate response to that concern is to pass SB 1047 and then expand its scope as the technology evolves, not to block it entirely.

We believe that the most powerful AI models may soon pose severe risks. It is feasible and appropriate for frontier AI companies to test whether the most powerful models can cause severe harms and to implement reasonable safeguards.

— Geoffrey Hinton, Yoshua Bengio & current/former employees of leading AI companies, open letter to Gov. Newsom, September 2024

Objection 2: “The Bill Regulates the Model, Not Its Use”

The Governor’s second objection is more technically sophisticated. He argued that SB 1047 applied stringent standards based on a model’s size alone, without considering whether it was deployed in high-risk environments or involved critical decision-making or sensitive data. In other words, he preferred a use-based regulatory framework — regulate what AI does, not what AI is.

This is a genuinely interesting policy debate. The EU AI Act takes a broadly similar use-based approach, categorizing AI applications by the level of risk they pose in specific contexts. But there is a meaningful difference between an AI model deployed as a chatbot for recipe suggestions and the same model’s underlying capability to, say, assist in designing a pathogen. SB 1047’s safety obligations were deliberately focused on the foundational model’s capabilities — its potential for catastrophic misuse regardless of its stated purpose — precisely because a sufficiently capable model can be redirected to dangerous uses in ways that deployment-level regulation cannot anticipate or prevent.

More to the point: the bill’s definition of “critical harm” was already tightly tied to actual dangerous use cases — weapons of mass destruction, attacks on critical infrastructure, autonomous AI crimes. The bill was not requiring million-dollar safety audits for an AI that schedules calendar appointments. It was requiring safety protocols for systems capable of genuine catastrophe, regardless of how their developers currently intend to deploy them. That is not a design flaw — it is the point.

Objection 3: “We Need Empirical Evidence First”

The Governor’s third objection is perhaps the most politically revealing. He called for AI regulation to be “based on empirical evidence and science,” and simultaneously announced a new advisory partnership with three prominent academics to conduct a “science-based trajectory analysis” of AI capabilities and risks before the legislature acts.

This argument is, in effect, a request for more study before regulation — a classic delaying mechanism that has been used to postpone action on everything from tobacco regulation to climate policy. The irony is profound: the Governor agreed that the risks described in SB 1047 are real, agreed that catastrophic harms from AI are possible, agreed that California cannot wait for disaster to strike — and then chose to wait for a committee report before acting.

The engineers who build these systems have been sounding alarms in increasingly urgent terms. Geoffrey Hinton — a Nobel laureate in physics and one of the founding scientists of modern deep learning — co-signed a letter urging Newsom to sign the bill, warning that the most powerful AI models “may soon pose severe risks, such as expanded access to biological weapons and cyberattacks on critical infrastructure.” When the people who invented the technology tell us we have enough empirical evidence to act, the argument for further delay becomes very difficult to sustain.

The Political Context

Who Supported and Who Opposed the Bill

To understand the veto, it helps to understand the forces arrayed on either side. The opposition to SB 1047 was led by some of the most powerful corporations and political figures in America. Meta, OpenAI, and former House Speaker Nancy Pelosi all opposed the bill, with Pelosi arguing it would create significant unintended consequences for the U.S. AI ecosystem. OpenAI’s Chief Strategy Officer warned the bill would “threaten that growth, slow the pace of innovation, and lead California’s world-class engineers and entrepreneurs to leave the state.”

Support for the bill came from a more eclectic coalition. The Center for AI Safety, Elon Musk, the Los Angeles Times editorial board, and Anthropic all backed the bill. More than 100 Hollywood artists signed an open letter urging the Governor to sign it. Some 113 current and former employees of OpenAI, Google DeepMind, Anthropic, Meta, and xAI signed a letter in support. SAG-AFTRA, the union representing hundreds of thousands of performing artists, sent a formal letter to Newsom as well.

The pattern is instructive. The opposition was dominated by the largest AI corporations — the companies that stood to bear the most compliance costs under the bill. The support came from independent researchers, working engineers, labor unions, and public interest groups. When the debate is framed that way, the Governor’s alignment with the corporate opposition becomes harder to characterize as a neutral, evidence-based decision.

Opposed SB 1047

Meta and OpenAI, two of the largest AI developers, opposed the bill citing concerns about compliance costs and competitive disadvantage.

Former House Speaker Nancy Pelosi and eight members of California's congressional delegation argued it would harm the U.S. AI ecosystem with minimal safety benefit.

Much of California's venture capital community warned of capital flight and a "chilling effect" on startups.

Supported SB 1047

Geoffrey Hinton, Yoshua Bengio, and other pioneering AI scientists backed the bill, citing real and near-term catastrophic risks.

Some 113 current and former employees of major AI firms signed a public letter of support, citing insider knowledge of the technology's dangers.

Anthropic, SAG-AFTRA, the Center for AI Safety, and the Los Angeles Times editorial board all urged the Governor to sign.

The 17 Bills Newsom Did Sign

The same day Newsom vetoed SB 1047, he announced he had signed 17 AI-related bills into law addressing issues including deepfakes, AI watermarking, protection of children and workers, and AI-generated misinformation. The Governor has pointed to this package as evidence of his commitment to AI safety — and it is true that these bills address real, documented harms occurring today.

But there is a meaningful distinction between those 17 bills and SB 1047. The signed bills all address AI’s current, already-manifested harms — a deepfake of a politician, an actor’s voice cloned without consent, an algorithm that discriminates in hiring. SB 1047 was forward-looking, targeting the catastrophic risks that the most capable future AI systems pose. Signing one category while vetoing the other is not a comprehensive AI safety strategy — it is the equivalent of requiring seat belts in cars currently on the road while refusing to mandate them in next-generation autonomous vehicles under development.

How We Got Here

A Brief Timeline of SB 1047

Early 2024
Sen. Scott Wiener introduces SB 1047. It draws immediate attention as the most ambitious AI safety proposal in the U.S., as well as fierce opposition from major technology companies.
May 21, 2024
The California State Senate passes SB 1047 by a vote of 32 to 1, an overwhelming bipartisan endorsement.
August 2024
Sen. Wiener accepts significant amendments, including limiting the Attorney General's enforcement powers to situations where harm is imminent or has already occurred — a concession intended to address industry concerns. Anthropic, previously neutral, announces support.
August 28, 2024
The California State Assembly passes the amended SB 1047. The bill proceeds to the Governor's desk.
September 17, 2024
Newsom signals concern about the bill at a Salesforce Dreamforce conference, calling it a potentially blunt instrument — though he describes himself as undecided. He signs 17 other AI-related bills.
September 29, 2024
Newsom vetoes SB 1047 and simultaneously announces a new advisory partnership with three academics to develop an empirical AI risk analysis framework for future legislation.

What Comes Next

The Governor’s Promised Path Forward

Alongside the veto, Newsom announced a partnership with Dr. Fei-Fei Li of Stanford, Mariano-Florentino Cuéllar of the Carnegie Endowment for International Peace, and Jennifer Tour Chayes, Dean of UC Berkeley’s College of Computing, Data Science, and Society, to develop an empirical, science-based trajectory analysis of frontier AI models and their risks. He has committed to working with the legislature on future legislation informed by their findings.

This is not nothing. These are serious scientists, and a credible empirical foundation for AI regulation is genuinely valuable. But the timeline is deeply concerning. AI capabilities are advancing at a pace that academic advisory panels, however distinguished, are not designed to match. The companies building frontier AI systems are not pausing their work while California’s consultants prepare their report.

Sen. Wiener's response: The bill's author has signaled he will return with revised legislation in the 2025 session, informed by the Governor's objections and the advisory panel's findings. The coalitions built around SB 1047 — researchers, engineers, labor, and artists — are expected to remain engaged.

The Broader Stakes for Californians

As billions of dollars pour into the development of AI and as it permeates more corners of everyday life, lawmakers in Washington still have not advanced a single piece of federal legislation to protect people from its potential harms, nor to provide oversight of its rapid development. California has historically been the state that steps into these vacuums — on consumer privacy, on emissions, on workers’ rights. The veto of SB 1047 is a significant departure from that tradition at exactly the moment it was most needed.

Newsom’s veto of SB 1047 also keeps California from aligning its AI regulation with that of the European Union, which is currently implementing the EU AI Act — widely regarded as the most comprehensive AI governance framework in the world. The decision leaves California, home to 32 of the world’s 50 leading AI companies, as the only major jurisdiction where the developers of the most powerful AI systems face no legal obligation to document, test, or take responsibility for the catastrophic risks their systems may pose.

Conclusion

A Missed Opportunity — and a Clear Assignment

Governor Newsom’s veto was not made in bad faith. His veto message demonstrates genuine engagement with the policy questions, and his commitment to future AI safety legislation deserves to be taken seriously, not dismissed. His criticisms of SB 1047 — however much we believe they are ultimately insufficient to justify the veto — reflect real debates that the AI safety community must continue to grapple with: how to regulate capabilities versus use cases, how to calibrate thresholds, how to avoid creating compliance regimes that inadvertently disadvantage smaller developers.

But the veto is nonetheless a consequential mistake. It leaves the people of California — and by extension, people everywhere affected by systems built in California — without the safety guarantees that SB 1047 would have established. It rewards the lobbying power of corporations with enormous financial interests in avoiding oversight. And it delays, by at least a year, the kind of proactive, forward-looking AI governance that the technology’s own pioneers have said we urgently need.

The lesson for Californians is that this work is not finished. The bill’s supporters — engineers, scientists, artists, and ordinary citizens who understand the stakes — must now do two things. They must hold the Governor to his promise of a concrete legislative path forward. And they must ensure that the next version of SB 1047, when it arrives in 2025, is stronger, sharper, and harder to dismiss.

The technology will not wait. Neither should California.