CNBC - Top News · Thursday, May 7, 2026 — 11:41 AM ET
Judge Blocks Pentagon Penalties Against Anthropic In First Amendment Dispute
A federal judge in California has granted Anthropic a temporary injunction blocking the Department of Defense from enforcing punitive measures against the AI company. Judge Rita Lin sided with Anthropic's argument that the government violated its First Amendment rights by designating the company a "supply chain risk" and ordering federal agencies to cease using its Claude AI model. The injunction, which lasts one week while the court hears the full case, stems from Anthropic's refusal to allow the Pentagon to use Claude for fully autonomous lethal weapons or domestic mass surveillance.
The ruling carries significant implications for national security policy and government contracting practices. Judge Lin found that the Pentagon likely overstepped its legal authority, calling the supply chain designation "arbitrary and capricious" without legitimate justification. The judge questioned why the government pursued punitive measures rather than simply discontinuing its relationship with Anthropic, suggesting the actions appeared designed to "cripple" the company. This decision affects the government's efforts to replace Claude across federal agencies, a complicated transition given how deeply embedded Anthropic's technology has become in military operations.
The dispute reflects broader tensions between private AI companies and the Trump administration over acceptable military applications of the technology. Anthropic has maintained clear restrictions on how its models may be used, while the Pentagon reportedly relies on Claude for critical functions, including target selection and analysis of military strikes. The company claims the government's actions could cost it hundreds of millions, if not billions, of dollars, framing them as unconstitutional retaliation for taking principled ethical positions on AI weaponization.