In a federal court filing this week, the Justice Department forcefully rejected Anthropic's attempt to resume its defense contracts, arguing the AI company poses an unacceptable risk to national security systems. The filing is the latest volley in a high-stakes legal battle that could cost Anthropic billions and reshape the Pentagon's use of artificial intelligence.
The dispute centers on a "supply-chain risk" designation applied to Anthropic last year, which blocks its technology from Defense Department use. Justice Department attorneys, representing the Pentagon, argued the designation was motivated by concerns about Anthropic's "potential future conduct," not by any attempt to restrict the company's speech. They contended that Defense Secretary Pete Hegseth reasonably determined Anthropic staff could "sabotage" or "subvert" critical warfighting systems if the company's self-imposed ethical limits were crossed during operations.
Anthropic, which is challenging the designation in San Francisco federal court, contends the government overstepped its authority. The company maintains that its Claude AI models are unsuited for autonomous weapons or broad surveillance and that the ban is retaliatory. A hearing on Anthropic's request for a temporary reprieve is set for next Tuesday.
With military operations ongoing, the Pentagon acknowledges it cannot immediately replace Anthropic's tools, which are integrated into systems like Palantir's software for classified work. However, the department is actively working to adopt AI from Google, OpenAI, and xAI in the coming months. The case has drawn support for Anthropic from Microsoft, AI researchers, and a federal labor union, but no groups have yet filed briefs backing the government's position. Anthropic must respond to the latest filing by Friday.
Source: Wired
