Anthropic signed a deal with Google and Broadcom for multi-gigawatt TPU capacity. Not megawatts. Gigawatts. That's the power consumption of a small city, dedicated entirely to training AI models.
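To get a feel for the scale, here's a rough electricity sketch for a single gigawatt of sustained load. The wholesale price per MWh is an assumed round number for illustration, not anything from the deal:

```python
# Back-of-envelope electricity math for 1 GW of sustained load.
# price_per_mwh is an assumed industrial rate, not a contract figure.
capacity_gw = 1.0
hours_per_year = 8760                 # 24 * 365
price_per_mwh = 50.0                  # assumed USD per MWh

mwh_per_year = capacity_gw * 1000 * hours_per_year   # GW -> MW, then MWh
annual_cost = mwh_per_year * price_per_mwh

print(f"{mwh_per_year:,.0f} MWh/yr, roughly ${annual_cost / 1e6:,.0f}M/yr per GW")
```

At these assumptions, one gigawatt running around the clock works out to hundreds of millions of dollars a year in electricity alone, before a single chip or cooling tower is paid for. "Multi-gigawatt" multiplies that.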
Ben Thompson's analysis on Stratechery breaks down what this actually means. Anthropic is now operating at a $30 billion annual run-rate. To stay competitive in frontier model development, they need infrastructure at a scale most companies can't even conceptualise. This isn't cloud compute you rent by the hour. These are long-term capacity contracts that lock in hardware, power, and cooling for years.
The numbers are hard to grasp until you compare them. OpenAI's GPT-4 training run reportedly cost over $100 million in compute alone. That was 2023. Frontier models in 2025 are larger, trained on more data, with longer context windows. The compute requirements don't scale linearly - they explode. Anthropic's gigawatt commitment is what it takes to stay in the race.
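One way to see why the requirements explode is the widely used rule of thumb that training compute is roughly 6 FLOPs per parameter per token (forward plus backward pass). The parameter and token counts below are hypothetical, chosen only to show the order of magnitude:

```python
# Rough training-compute estimate via the common ~6 * N * D rule of thumb.
# Both figures below are illustrative assumptions, not disclosed numbers.
params = 1e12           # hypothetical 1-trillion-parameter model
tokens = 10e12          # hypothetical 10-trillion-token dataset

total_flops = 6 * params * tokens     # ~6e25 FLOPs

# Assume a hypothetical accelerator sustaining 500 TFLOP/s effective throughput.
chip_flops_per_s = 500e12
chip_seconds = total_flops / chip_flops_per_s
chip_years = chip_seconds / (3600 * 24 * 365)

print(f"{total_flops:.1e} FLOPs is about {chip_years:,.0f} chip-years at 500 TFLOP/s")
```

Grow the model 10x and the dataset 10x and the compute bill grows 100x. That multiplicative scaling, not any single training run, is what a gigawatt commitment is buying headroom for.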
Why TPUs and Not GPUs?
Anthropic's deal is built around Google's Tensor Processing Units, not Nvidia GPUs. That's a strategic choice, not just a hardware preference. TPUs are optimised specifically for machine learning workloads - the dense matrix multiplications that dominate both the forward pass and backpropagation in model training. For those workloads, they can be faster and more power-efficient than general-purpose GPUs.
But there's another reason. Nvidia's GPU supply is constrained and expensive. Every frontier lab is competing for the same H100 and H200 chips. By committing to TPUs, Anthropic sidesteps that bottleneck and locks in guaranteed capacity. It's also a signal of the deeper alliance between Anthropic and Google. This isn't a customer-vendor relationship. It's an infrastructure partnership.
Google Cloud gets a massive, long-term anchor tenant. Anthropic gets the compute it needs without fighting for scraps in the GPU market. Broadcom, which designs the custom silicon, gets volume orders that justify continued R&D. Everyone wins - except the labs that can't afford to play at this scale.
The Economics of Frontier Models
Here's the uncomfortable truth: frontier AI is now a capital game. The cost to train a competitive model has crossed into territory where only a handful of companies can participate. OpenAI, Anthropic, Google, Meta, xAI - maybe a few others. Everyone else is building on top of their APIs or fine-tuning open models.
Anthropic's $30 billion run-rate isn't profit. It's revenue required to fund the compute, talent, and infrastructure needed to stay competitive. That revenue comes from enterprise contracts, API usage, and strategic partnerships. The business model is selling access to models that cost billions to create. The margins need to be extraordinary just to break even on the next training run.
This is why the battle for enterprise customers is so fierce. It's not about market share for its own sake. It's about funding the compute to build the next generation of models. Lose that revenue stream, and you can't afford to train. Can't train, and you fall behind. Fall behind, and your models become less competitive. It's a flywheel - but only if you can keep it spinning.
What This Means for Builders
If you're building on Anthropic's APIs, this is good news. Multi-gigawatt capacity means they're not slowing down. Claude will keep improving. The context window will keep expanding. The models will get faster and cheaper to run. Anthropic is committing to long-term infrastructure because they believe the market will be there to support it.
But it also crystallises the strategic reality. You're not going to out-train Anthropic. You're not going to compete on foundational model quality unless you have gigawatt-scale compute and billions in capital. The opportunity for most builders isn't in creating foundation models - it's in creating applications on top of them that solve real problems for real businesses.
The interesting question is what happens when smaller labs can't keep up. Do they become acqui-hires for the giants? Do they pivot to specialised models for narrow domains? Or do they find ways to compete on inference efficiency, edge deployment, or privacy-preserving architectures - things that don't require gigawatt training runs?
Anthropic's TPU deal is a signal about the future. Frontier AI is consolidating around a small number of heavily capitalised players. The next decade of innovation won't come from training bigger models. It'll come from figuring out what to do with them.