In a significant legal challenge, AI firm Anthropic has filed suit against the U.S. Department of Defense in a California federal court. The case centers on the company's recent designation as a "supply-chain risk," a move Anthropic claims is unconstitutional retaliation for its ethical policies.
The dispute began when Anthropic, a leader in advanced AI systems, set firm restrictions on how its technology could be used. The company publicly refused to allow its models to be deployed for mass domestic surveillance or in fully autonomous weapons systems. Shortly afterward, the Trump administration, which took office in 2025, applied the supply-chain-risk label. That designation, typically reserved for foreign entities deemed security threats, triggered an order for all federal agencies to stop using Anthropic's technology within six months.
In its filing, Anthropic argues that the government is punishing it for a protected viewpoint on AI safety, in violation of its First and Fifth Amendment rights. "Defendants are seeking to destroy the economic value created by one of the world's fastest-growing private companies," the lawsuit states. The action has stirred bipartisan concern in Washington, with critics asking whether policy disagreements can now trigger severe economic penalties for U.S. companies.
The fallout is already tangible. The General Services Administration terminated its OneGov contract with Anthropic, cutting off access across all three branches of the federal government. The Departments of Treasury and State are also reportedly winding down their use of the company's tools. While major commercial partners such as Microsoft are continuing their work with Anthropic, they are erecting strict firewalls to separate those projects from any Pentagon-related work.
The Pentagon has declined to comment on the ongoing litigation.
Source: The Verge