Anthropic, the AI lab founded on principles of Constitutional AI and safety-first development, is in a standoff with the US Department of Defense. The DoD wants access to Claude for autonomous weapons systems and mass surveillance. Anthropic is saying no.
Ben Thompson's latest Stratechery analysis cuts through the surface-level "ethics versus national security" framing to ask a harder question: when strategic AI capabilities are at stake, does principle matter without power?
The Standoff Nobody Expected
Anthropic built its reputation on alignment - making AI systems that behave according to human values, not just optimising for raw capability. Constitutional AI, their flagship approach, embeds constraints into how models reason and respond. The company has talked publicly about walking away from lucrative deals that compromise safety.
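To make the mechanics concrete: in Constitutional AI's supervised phase, the model drafts a response, critiques that draft against an explicit written principle, then revises it - the constraints live in the training loop, not just in a filter bolted on afterwards. Here's a minimal sketch in Python, assuming a hypothetical ask_model call standing in for any text-generation client (the principles shown are illustrative, not Anthropic's actual constitution):

```python
# Sketch of a Constitutional AI critique-and-revise pass.
# `ask_model` is a hypothetical stand-in for a text-generation call,
# not a real Claude API; the principles below are illustrative only.

PRINCIPLES = [
    "Point out any way the response could enable violence or mass harm.",
    "Point out any way the response could facilitate unlawful surveillance.",
]

def ask_model(prompt: str) -> str:
    """Hypothetical model call; swap in a real client to run this."""
    raise NotImplementedError

def constitutional_revise(user_prompt: str, draft: str) -> str:
    """Run one critique-and-revise pass per principle over a draft response."""
    response = draft
    for principle in PRINCIPLES:
        # Ask the model to critique its own output against the principle.
        critique = ask_model(
            f"Prompt: {user_prompt}\nResponse: {response}\n"
            f"Critique the response. {principle}"
        )
        # Ask it to rewrite the response so the critique no longer applies.
        response = ask_model(
            f"Prompt: {user_prompt}\nResponse: {response}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so the critique no longer applies."
        )
    return response
```

The revised responses then become fine-tuning targets, so the finished model internalises the constraints rather than running this loop at inference time.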
Now they're facing exactly that test. The DoD isn't asking for Claude to help with logistics or procurement. They're asking for autonomous weapons targeting and large-scale surveillance infrastructure. Applications where AI decisions have immediate, irreversible consequences.
Anthropic's position: we built this to be helpful, harmless, and honest. Autonomous weapons systems are antithetical to that mission. The DoD's position: strategic AI capabilities are a national security imperative. Choose a side.
Thompson's Argument: Power Makes the Rules
Thompson's analysis doesn't take a moral stance on whether Anthropic should or shouldn't comply. Instead, he focuses on structural power. In his view, this standoff reveals a fundamental tension: AI labs can have principles, but governments have sovereignty. When those conflict, sovereignty wins.
He draws parallels to the encryption debates of the 1990s. Tech companies wanted strong encryption for consumer privacy. Governments wanted backdoors for law enforcement. The companies had principles. The governments had legal authority. Eventually, the legal authority shaped the outcome, even though the technical community disagreed.
Thompson argues we're seeing the same dynamic play out with AI, but the stakes are higher. Encryption is about protecting data. AI is about augmenting decision-making in domains where decisions have kinetic consequences - warfare, surveillance, infrastructure control. Governments won't cede that ground to private companies, no matter how well-intentioned those companies are.
The Question of Who Controls Strategic AI
Here's where Thompson's analysis gets uncomfortable. If Anthropic refuses, what happens? The DoD doesn't abandon autonomous weapons - they find another provider. Maybe a less safety-focused lab. Maybe they build it themselves. Maybe they license a foreign model with fewer constraints.
Thompson's point: Anthropic's refusal might preserve their principles, but it doesn't change the outcome. Someone builds the autonomous weapons system. Someone enables the surveillance infrastructure. The question isn't whether these capabilities get deployed - it's who builds them, under what constraints, with what oversight.
This is the alignment problem at a geopolitical scale. You can align a model to constitutional principles. You can't align geopolitical incentives. And when those incentives demand strategic AI capabilities, principle without power is just... principle.
No Easy Answers
Thompson doesn't offer a resolution, because there isn't one. Anthropic can hold their line and lose influence over how these systems get built. Or they can engage, compromise their stated mission, and try to shape the outcome from inside. Both paths are ethically fraught.
What's clear is this: the era of AI labs operating as independent, mission-driven organisations with full control over their technology's deployment is ending. Strategic AI capabilities are too valuable, too consequential, and too tied to national security for governments to leave them in private hands without strings attached.
Anthropic built Claude to be aligned with human values. Now they're discovering that human values aren't universal, and power still makes the rules. The standoff with the DoD isn't just about one contract. It's about who gets to decide what AI is for - and what happens when that decision is no longer theirs to make.