01

The Collapse in Washington

For those who have watched the CalCompute Initiative take shape over the past year, the events that erupted in late February 2026 were not a surprise. They were a confirmation. What happened in Washington — in a sequence of events both dramatic and deeply consequential — was the real-world stress test of every assumption our initiative has challenged from the beginning: that the rights of ordinary people can be adequately protected when the most powerful AI systems in the world are owned, operated, and contracted by a handful of private companies accountable primarily to federal procurement offices and their own shareholders.

The confrontation began when Defense Secretary Pete Hegseth summoned Anthropic CEO Dario Amodei to the Pentagon and issued an ultimatum: remove contractual language prohibiting the use of Anthropic’s Claude AI for mass domestic surveillance of Americans and for fully autonomous weapons — or face commercial destruction. Anthropic refused. On February 27, 2026, President Trump declared on Truth Social that every federal agency must immediately cease all use of Anthropic’s technology. The Department of Defense designated Anthropic a “supply chain risk” — a label typically reserved for foreign adversaries — and gave agencies six months to complete the transition.

Hours later, OpenAI CEO Sam Altman announced a new agreement with the Pentagon to deploy OpenAI’s models on classified military networks. Altman claimed the deal included the same safeguards Anthropic had sought. Independent legal analysis told a different story. The contract language, experts found, does not give OpenAI an independent right to prohibit surveillance or autonomous weapons use. It simply says the Pentagon cannot use OpenAI’s tools to break laws already on the books — a distinction with enormous practical significance, because existing law permits forms of data aggregation and behavioral analysis that, when powered by frontier AI, amount to mass surveillance in every meaningful sense.

The next time a supply chain risk designation is applied to a company with actual ties to a foreign adversary, the credibility to make that case will be fundamentally diminished.

— Policy experts on the Pentagon's designation of Anthropic

To understand the full stakes, consider what was happening operationally while this political drama played out. In January 2026, Claude had already been deployed — through Anthropic’s partnership with Palantir Technologies — in the U.S. special operations raid on Caracas that resulted in the capture of Venezuelan President Nicolás Maduro and the deaths of over 80 people, including Cuban soldiers. By late February, as the ban on Anthropic was being announced by executive proclamation, Claude was simultaneously being used to help process targeting intelligence for U.S. strikes on Iran.

This is the landscape into which CalCompute was born. Not as a hypothetical. Not as a precaution against distant futures. As an urgent and necessary response to a present reality.

02

What the Federal Events Revealed About Private AI Governance

The Washington episode taught three lessons that every CalCompute supporter must internalize, because they define the structural problem our initiative must solve.

Private contracts are not constitutional guarantees.

Anthropic tried to use its terms of service as a check on government power. When a sufficiently motivated administration decided those terms were inconvenient, it threatened to destroy the company commercially and moved on to a more compliant provider. The lesson is not that Anthropic was wrong to try. The lesson is that private contractual safeguards, however well-intentioned, are structurally insufficient when the counterparty controls the regulatory environment. The moment a company’s survival depends on federal contracts, its leverage to enforce ethical terms dissolves. OpenAI, to its credit, did not pretend otherwise — its CEO explicitly told employees that operational decisions about how AI is used rest with government officials, not with the companies that build the systems.

The market for AI safety is a race to the bottom.

When Anthropic held its line, the Pentagon did not negotiate. It pivoted — to OpenAI, to Google, to Elon Musk’s xAI — explicitly seeking redundancy so that no single company’s ethical standards could ever again disrupt an active military operation. The Pentagon’s Chief Technology Officer was transparent about this: “I want all of them. I want to give them all the same exact terms because I need redundancy.” In this market structure, the company with the fewest ethical restrictions wins the most contracts. This is not a flaw in the system. It is the system functioning exactly as designed. No competitive market will voluntarily produce strong civil liberties protections when the largest buyer is actively selecting against them.

The same safeguards the Pentagon rejected when Anthropic required them — prohibitions on mass domestic surveillance and fully autonomous weapons — were nominally agreed to by OpenAI. Independent legal analysis found the OpenAI language contained a key qualifier: surveillance is prohibited only when "intentional." This creates an explicit loophole for incidental, aggregated, or algorithmically driven surveillance that falls outside any traditional legal definition of intentional targeting.

Compute concentration is the root cause, not the symptom.

None of these events would carry the weight they do if AI compute were distributed across universities, public institutions, and a diverse ecosystem of smaller developers. The reason the Pentagon could pivot seamlessly from Anthropic to OpenAI in a single afternoon is that the entire frontier AI infrastructure — training clusters, inference networks, model deployment, classified integrations — is held by three cloud providers and a small number of well-capitalized labs with Pentagon relationships already in place. CalCompute was designed to break this structural concentration. The federal events of 2026 show, with unmistakable clarity, why that concentration is not merely an economic concern but a civil liberties emergency.

3 Cloud companies that control frontier AI infrastructure
~0% Share of large-scale AI projects now run by academics, down from 60% a decade ago
1% Compute used by the largest academic model vs. the largest industry model
6 mo. Time given to phase out Anthropic — while using it for active military operations

03

The Specific Threat to California Residents

California is not insulated from federal AI procurement decisions. With 32 of the world’s top 50 AI companies headquartered here, with state and local agencies increasingly reliant on federal AI partnerships, and with the state’s 39 million residents generating more commercially available personal data than almost any jurisdiction on earth, California sits at the center of the very risks the Anthropic episode exposed.

The surveillance concern is not abstract. Under current U.S. law, federal authorities may legally purchase commercially available data from data brokers — location history, purchasing behavior, social graph data, financial patterns — and submit it to AI systems for analysis. No warrant is required. No probable cause. No judicial oversight. When frontier AI models process this data at scale, the result is, in practical terms, a comprehensive behavioral profile of every person in the data set. The OpenAI contract’s “intentional” carve-out does nothing to address this pathway. Neither does any existing federal statute.

California has the California Consumer Privacy Act and a robust tradition of privacy protection. But the CCPA governs private-sector data use, not federal intelligence activities. The gap between what California law protects and what federal agencies can do with AI-processed commercial data is where the most serious risks to California residents currently live.

No amount of intimidation or punishment will change our position on mass domestic surveillance or fully autonomous weapons. But no amount of resolve from one company will substitute for law.

— A synthesis of Anthropic's public statement and the expert consensus that followed

Both Anthropic and OpenAI, notably, have now called on Congress to pass legislation providing protections for citizens against AI-enabled mass surveillance. This is a remarkable convergence: the two companies on opposite sides of the most dramatic AI governance confrontation in American history agree that private contracts are insufficient and that law is required. Congress has not acted. California can.

04

A Framework for CalCompute: Six Pillars for California's Public AI Infrastructure

The federal events of early 2026 are not just cautionary tales. They are a blueprint for what CalCompute's governance framework must explicitly address. The following six pillars represent our recommendations to the CalCompute Consortium and to the California Legislature, ahead of the January 2027 report deadline.

I
Governance

A Statutory Prohibition on Weaponized Decommissioning

The Pentagon's designation of Anthropic as a "supply chain risk" — a tool traditionally reserved for foreign adversaries — was used as commercial coercion to remove ethical guardrails. CalCompute's enabling legislation must include explicit statutory language prohibiting any state agency from using procurement designations, contract cancellations, or vendor classifications to coerce AI providers into removing civil liberties protections from their usage terms. The integrity of CalCompute's own ethical standards must be insulated from executive pressure through codified protections that require legislative action — not an executive order — to modify. This is not a hypothetical protection. It is the direct lesson of February 27, 2026.

II
Civil Liberties

A Hard Prohibition on Mass Surveillance — Closing the "Intentional" Loophole

CalCompute's usage policy must prohibit use of its compute infrastructure for mass surveillance of California residents, with language that explicitly covers aggregated commercial data analysis, behavioral profiling, and pattern-of-life analysis — not merely "intentional" targeting. The OpenAI contract's "intentional" qualifier is precisely the kind of drafting ambiguity that surveillance agencies have historically exploited. CalCompute must further prohibit the use of its resources to assist any federal agency in conducting surveillance of California residents that would require a warrant if conducted by state or local law enforcement. This is California exercising its sovereign interest in protecting residents from federal AI-enabled surveillance programs that bypass state privacy law.
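
To make the drafting distinction concrete, here is a minimal sketch, in Python, of the difference between an intent-based rule and an effect-based rule. Every name and number in it (the field names, the threshold, the category labels) is a hypothetical illustration, not proposed statutory text.

```python
"""Illustrative sketch only: intent-based vs. effect-based usage rules.

Every field name and threshold is a hypothetical assumption; the point is
the drafting distinction, not a real CalCompute policy engine.
"""

from dataclasses import dataclass

POPULATION_SCALE_THRESHOLD = 10_000  # illustrative cutoff for "mass" analysis


@dataclass
class ComputeRequest:
    declared_purpose: str  # what the requester says the job is for
    output_type: str       # what the job actually produces
    subjects_count: int    # how many distinct people the data set covers


def violates_intent_standard(req: ComputeRequest) -> bool:
    """Federal-contract style: blocked only if surveillance is the stated intent."""
    return req.declared_purpose == "surveillance"


def violates_effect_standard(req: ComputeRequest) -> bool:
    """Effect-based style: blocked if the output is population-scale profiling,
    regardless of the declared purpose."""
    profiling_outputs = {"behavioral_profile", "pattern_of_life", "social_graph"}
    return (req.output_type in profiling_outputs
            and req.subjects_count >= POPULATION_SCALE_THRESHOLD)


# A job framed as fraud analytics that in fact profiles millions of residents:
job = ComputeRequest(declared_purpose="fraud_analytics",
                     output_type="behavioral_profile",
                     subjects_count=2_000_000)
assert not violates_intent_standard(job)  # slips through the "intentional" qualifier
assert violates_effect_standard(job)      # caught by effect-based language
```

The structural point is that the first predicate asks what the requester claims, while the second asks what the job actually produces; only the second survives creative labeling.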

III
Oversight

An Independent Civilian Oversight Board with Subpoena Authority

Private company ethics boards have no enforcement power. Federal oversight committees are slow, partisan, and frequently captured by the agencies they oversee. CalCompute must be governed by an independent civilian oversight board with real authority: the power to audit all compute allocations, review usage logs, investigate complaints, and — critically — the ability to refer violations to the California Attorney General for enforcement action. Board members must include civil liberties attorneys, academic AI researchers, privacy technologists, labor representatives, and members of communities historically subject to disproportionate surveillance. No single state agency or executive official may unilaterally override the board's determinations on usage policy. This is the institutional architecture that "trusting the government to follow the law" conspicuously lacks.

IV
Access & Equity

Compute Allocation Quotas for Public-Interest Research

The structural lesson of the federal events is that whoever controls compute controls the direction of AI development. CalCompute must enshrine in statute — not just policy — a minimum percentage of its compute capacity reserved for public-interest work: university research, nonprofit applications, small startups, and projects addressing climate adaptation, public health, rural access to services, and educational equity. This is not charity. It is the core public purpose that distinguishes CalCompute from a state-run cloud provider. If allocation quotas are left to administrative discretion, they will be eroded under pressure from revenue-generating institutional users. A legislative mandate is required.
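
To illustrate the difference between a statutory floor and an administrative target, here is a minimal allocator sketch. The 25 percent figure, the category names, and the capacity units are assumptions chosen for illustration; the actual floor would be set by the Legislature.

```python
"""Illustrative sketch only: enforcing a statutory public-interest compute floor.

All names and numbers here are hypothetical assumptions, not CalCompute policy.
The structural point: the floor is a hard constraint in the allocator itself,
not an administrative target that can be quietly eroded.
"""

from dataclasses import dataclass, field

STATUTORY_PUBLIC_FLOOR = 0.25  # assumed 25% floor; the real figure would be set in statute


@dataclass
class ComputeAllocator:
    total_capacity_gpu_hours: float
    granted: dict = field(
        default_factory=lambda: {"public_interest": 0.0, "commercial": 0.0}
    )

    def request(self, category: str, gpu_hours: float) -> bool:
        """Grant a request only if it cannot crowd out the public-interest floor."""
        if category not in self.granted:
            raise ValueError(f"unknown category: {category}")
        available = self.total_capacity_gpu_hours - sum(self.granted.values())
        if gpu_hours > available:
            return False  # out of capacity entirely
        if category != "public_interest":
            # Capacity left after this grant must still cover whatever portion
            # of the statutory floor public-interest work has not yet used.
            reserved = STATUTORY_PUBLIC_FLOOR * self.total_capacity_gpu_hours
            unused_floor = max(0.0, reserved - self.granted["public_interest"])
            if available - gpu_hours < unused_floor:
                return False  # grant would breach the floor
        self.granted[category] += gpu_hours
        return True


allocator = ComputeAllocator(total_capacity_gpu_hours=1_000_000)
assert allocator.request("commercial", 750_000)       # leaves exactly the floor
assert not allocator.request("commercial", 1)         # would breach the floor
assert allocator.request("public_interest", 200_000)  # floor capacity stays open
```

Because the floor is a hard constraint inside the grant path, eroding it requires changing the code (in the real case, the statute), not merely a policy memo.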

V
Transparency

Mandatory Use Disclosure and a Public Compute Ledger

One of the most alarming aspects of the Venezuela and Iran episodes is that the public learned of AI's role in active military operations from investigative reporting rather than from government disclosure. CalCompute must operate on a principle of radical transparency within appropriate security limits. Every allocation of CalCompute resources must be logged and subject to public records requests. An annual public report must disclose the categories of use, the institutions accessing compute, and any instances where the oversight board reviewed or restricted usage. This creates an accountability structure that private cloud providers will never voluntarily adopt — and that makes CalCompute a model for how public AI infrastructure should work everywhere.
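
As a sketch of what a public compute ledger could look like mechanically, the following illustrates an append-only, hash-chained record in which any retroactive edit to the published log is detectable. The field names and the chaining scheme are assumptions for illustration, not a proposed disclosure schema.

```python
"""Illustrative sketch only: a tamper-evident public allocation ledger.

Field names and the hash-chaining scheme are hypothetical assumptions;
the actual disclosure schema would be set by the oversight board.
"""

import hashlib
import json
import time


def append_entry(ledger, *, institution, category, gpu_hours, board_review=None):
    """Append an allocation record whose hash chains to the previous entry,
    so any retroactive edit to the published record is detectable."""
    prev_hash = ledger[-1]["entry_hash"] if ledger else "0" * 64
    body = {
        "timestamp": time.time(),
        "institution": institution,   # who received compute
        "category": category,         # e.g. "university_research"
        "gpu_hours": gpu_hours,
        "board_review": board_review,  # non-null if the oversight board acted
        "prev_hash": prev_hash,
    }
    body["entry_hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    ledger.append(body)
    return body


def verify(ledger):
    """Recompute the chain; False means the published record was altered."""
    prev = "0" * 64
    for entry in ledger:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if body["prev_hash"] != prev or recomputed != entry["entry_hash"]:
            return False
        prev = entry["entry_hash"]
    return True


ledger = []
append_entry(ledger, institution="UC Berkeley",
             category="university_research", gpu_hours=5000)
append_entry(ledger, institution="Example Health Nonprofit",
             category="public_health", gpu_hours=1200)
assert verify(ledger)
```

Publishing only the latest chain hash at regular intervals would let any member of the public verify that earlier entries had not been quietly rewritten.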

VI
Federal Sovereignty

An Explicit Federal Non-Cooperation Clause for Surveillance Requests

California has precedent for declining to extend state resources to assist federal enforcement actions it considers contrary to the rights of state residents — in immigration, in drug enforcement, and in other domains. CalCompute's enabling legislation must provide explicitly that its infrastructure and the data generated by its users may not be shared with, made accessible to, or processed on behalf of federal agencies without a California court order, and that requests for such access that do not meet this standard will be publicly reported to the Legislature within 30 days. This is a direct response to the surveillance gap identified above: it ensures that CalCompute cannot become an inadvertent on-ramp for federal AI-powered surveillance programs operating outside California's privacy framework.

05

The Argument Has Been Made For Us

There is a certain grim irony in the fact that the strongest argument for CalCompute was not made by any policy brief, academic paper, or advocacy organization. It was made by the Pentagon, by two competing AI companies, by a president’s Truth Social post, and by the operational reality of commercial AI being deployed in military strikes while the contract governing its use was actively being voided. The events of early 2026 are not edge cases. They are the natural endpoint of a system in which AI infrastructure is entirely privately held, federally dependent, and governed by contractual terms that evaporate the moment they become inconvenient for those with power.

California’s residents deserve something different. They deserve AI infrastructure that is publicly owned, transparently governed, legally insulated from political coercion, and structurally incapable of being turned against them without judicial oversight and public accountability. They deserve the ability to know — not merely hope — that the computational systems built and operated in their name are not being used to profile them, surveil them, or assist in operations that their elected representatives never authorized.

CalCompute is the mechanism by which California begins to build that. The framework recommendations above are not aspirational. They are the minimum necessary response to what we now know the alternative looks like. The Consortium’s report is due to the Legislature in January 2027. The appropriation that follows will determine whether CalCompute becomes the national model it was designed to be — or another well-intentioned law that waited too long to become infrastructure.

The argument has been made for us. Now we must build.