Pentagon Bars Claude AI from Defense Work, Citing 'Policy' Conflict

In a rare move, the Pentagon has formally designated AI startup Anthropic as a supply chain risk, effectively banning its Claude models from defense-related work. The decision, announced by Department of Defense Chief Technology Officer Emil Michael, centers on what he described as inherent "policy preferences" within the AI's design.

"When a model's fundamental constitution reflects a different policy stance, it cannot be part of the system that provides equipment to our personnel," Michael stated in a CNBC interview. "The integrity of that chain is paramount. We cannot risk it."

The designation, typically applied to foreign entities, mandates that defense contractors certify they do not use Claude AI. Anthropic responded this week with a lawsuit against the administration of President Donald Trump, calling the action "unprecedented and unlawful" and claiming it jeopardizes hundreds of millions of dollars in contracts.

Michael countered that the step is not punitive, noting that government work constitutes a "tiny fraction" of Anthropic's commercial business. He also denied allegations that the Defense Department is actively pressuring companies to abandon Anthropic, labeling such talk "rumors."

Founded in 2021 by former OpenAI personnel, Anthropic has seen rapid enterprise adoption, including within the Defense Department itself. The legal clash now sets a significant precedent for how the U.S. government assesses and regulates the foundational values embedded in artificial intelligence systems used for national security.

Source: CNBC