A week after Trump declared Anthropic's relationship with the Pentagon "kaput", internal documents tell a different story. Shortly before that declaration, the DoD had told Anthropic the two sides were "nearly aligned" on security concerns.
This contradiction sits at the heart of Anthropic's latest court filing - sworn declarations defending against allegations that the AI company poses a national security risk. The Pentagon's case, according to Anthropic, rests on technical misunderstandings rather than genuine security threats.
What Actually Happened
The timeline is revealing. Internal Pentagon communications showed productive technical discussions. Engineers were working through implementation details. Security teams were aligning on standards. Then came the public declaration - a complete severing of ties, framed as a matter of national security.
Anthropic's legal team argues the shift wasn't driven by new security findings. Instead, they point to what they call fundamental misunderstandings about how Claude's architecture works. The "sabotage concerns" the Pentagon raised, according to the filing, stem from misreading technical documentation rather than actual vulnerabilities.
The gap between private alignment and public accusation suggests something shifted that had nothing to do with the technology itself. When technical teams are nearly aligned one week and the relationship is declared dead the next, the change isn't technical - it's political.
The Technical Misunderstandings
Anthropic's declarations detail specific points where Pentagon analysts apparently misread how Claude processes information. The concern about potential backdoors, they argue, conflates Claude's constitutional AI training with runtime behaviour. These are different stages of the model's lifecycle, with different security implications.
Constitutional AI - Anthropic's method for training Claude to follow ethical guidelines - operates during training, not deployment. The Pentagon's concerns, according to the filing, treated post-training behaviour as if it were a live connection to external systems. That's like worrying that a car's manufacturing process might let the factory remotely control it while you're driving.
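To make that distinction concrete, here is a minimal sketch in Python. The function names and the toy "constitution" are hypothetical, invented for illustration rather than drawn from Anthropic's actual pipeline. The point it illustrates is the one in the filing: the constitution is consumed during training, and what gets deployed is frozen weights plus inference, with no live connection back to the training process or to any external system.

```python
# Illustrative sketch only: hypothetical names, not Anthropic's real pipeline.
# The point: the constitution is referenced at training time; the deployed
# model is frozen weights plus inference, with no runtime link back to it.

CONSTITUTION = [
    "Choose the response that is least likely to cause harm.",
    "Choose the response that is most honest and helpful.",
]

def critique_and_revise(response: str, principles: list[str]) -> str:
    """Training-time step: revise a draft response against each principle.
    (Stubbed here; in the published method the model critiques its own output.)"""
    for principle in principles:
        response = f"{response} [revised per: {principle[:30]}...]"
    return response

def train(prompts: list[str]) -> dict:
    """Produce frozen 'weights'. The constitution is only touched here."""
    revised = [critique_and_revise(f"draft answer to: {p}", CONSTITUTION) for p in prompts]
    return {"weights": hash(tuple(revised))}  # stand-in for learned parameters

def infer(model: dict, prompt: str) -> str:
    """Deployment-time step: uses only the frozen weights and the prompt.
    No constitution lookup, no callback to the training pipeline."""
    return f"answer({model['weights'] % 1000}) to: {prompt}"

model = train(["What is in this report?"])
print(infer(model, "Summarise the logistics data."))
```

The car analogy maps directly onto this split: critique_and_revise happens on the factory floor; infer is what ships.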
The sabotage concerns appear to centre on Claude's ability to decline certain requests. The Pentagon saw this as unpredictable behaviour that could compromise mission-critical systems. Anthropic's position: that's not a bug, it's the design. A model that refuses harmful instructions isn't sabotage - it's a safety feature.
What This Means for AI Procurement
The broader pattern here matters more than this single case. Defence procurement is moving into AI territory without clear technical literacy at the decision-making level. When engineers reach alignment but political leadership overrides it, you end up with contradictions like this.
For AI companies working with government, the message is uncomfortable: technical alignment isn't enough. You can satisfy every security requirement, answer every engineering question, and still find the relationship terminated based on concerns your technical team already addressed.
The sworn declarations are Anthropic's attempt to separate technical reality from political narrative. Whether that works depends less on the strength of their engineering explanations and more on whether courts treat AI security as a technical question or a political one.
The Wider Pattern
This isn't just about Anthropic. Every AI company watching this case is recalibrating their approach to government contracts. When internal alignment can be overridden by external declarations, the risk calculation changes.
The technical community will read Anthropic's filing as a defence of sound engineering practices. The policy community will read it as a company trying to preserve a lucrative contract. Both readings can be true simultaneously - and that's the problem.
The real question isn't whether Anthropic poses a security risk. It's whether AI procurement decisions will be made by people who understand the technology or by people responding to political pressure. Right now, this case suggests the answer is the latter.
For developers and businesses watching from the sidelines, the lesson is stark: building secure systems isn't enough if the people evaluating them don't understand what makes them secure. Technical excellence is necessary but not sufficient when politics enters the room.