
Safety or Security? Pentagon Labels Anthropic's AI Ethics a Strategic Risk

In a rare public condemnation, the Department of Defense has declared AI firm Anthropic an "unacceptable risk to national security." The issue isn't a technical failure or a breach. According to a TechCrunch report, the Pentagon objects to the company's refusal to remove its self-imposed safety restrictions on how its Claude AI models can be used.

Anthropic, founded by former OpenAI executives Dario and Daniela Amodei, operates as a public benefit corporation. Its published "red lines" forbid applications like autonomous weapons targeting, mass surveillance, and military integrations where AI could make lethal decisions without human oversight. The company calls these restrictions fundamental to its identity.

A senior DoD official told TechCrunch these rules create "operational gaps that adversaries will not hesitate to exploit." After a year of negotiations for classified defense work, the Pentagon concluded a partnership was untenable. The department's view frames Anthropic's safety-first architecture as a strategic vulnerability, especially as nations like China advance military AI without similar public constraints.

Dario Amodei responded on X, stating the company would not abandon its mission under pressure. "The idea that safety restrictions make America less secure fundamentally misunderstands the risks we face," he wrote.

The standoff highlights a core tension. Defense planners argue that voluntary U.S. restrictions create a disadvantage against rivals who impose none. AI safety researchers counter that removing safeguards invites catastrophic failures unique to autonomous systems. The Pentagon's move appears preemptive, aiming to use access to future classified contracts as leverage to change Anthropic's policies.

Internally, Amodei told employees the pressure was anticipated. But the financial stakes are real. Secondary market trades reportedly devalued Anthropic by 12% following the announcement. With the company burning through an estimated $3 billion annually, locked doors at the Pentagon concern investors who expected government revenue.

Other AI companies are already approaching the Pentagon, offering fewer restrictions. This dynamic presents a collective action problem: one firm's ethical stand becomes a competitor's opportunity. The episode tests whether corporate-led safety measures can endure when the world's largest customer demands otherwise.

Source: Webpronews
