Amazon lost the AI training race. Google has TPUs. Microsoft has OpenAI. Amazon has... EC2 instances that anyone can rent. That positioning looked like a weakness eighteen months ago. Ben Thompson argues it's actually their structural advantage.
The thesis: AWS is neutral. It doesn't care which model you run. It just provides compute. Microsoft and Google, by contrast, are in an impossible position - they need to sell cloud services to everyone while also pushing their own frontier models. That conflict is already causing friction with customers who don't want to be locked into Gemini or GPT.
Amazon's decade-long bets on infrastructure are starting to matter. Custom chips. Power generation investment. Data centre capacity built before AI inference became the bottleneck. None of this was positioned as an AI play. It was just Amazon being Amazon - building physical infrastructure at scale while everyone else chased software.
The Inference Economics Problem
Training models is expensive. Running them at scale is what bankrupts you. Inference costs are the thing nobody solved during the GPT hype cycle. Every startup building on OpenAI's API is paying for compute it can't control, at prices that don't scale with its revenue.
AWS doesn't need to own the model to win here. They need to be the cheapest place to run inference at scale. Custom silicon matters for that: Trainium is designed for model training, Inferentia for inference - neither is general-purpose compute. They're slower to develop than buying Nvidia, but they're also cheaper to run once deployed.
Microsoft and Google can't compete on price alone because they're subsidising their own model development. They need margin from cloud services to fund OpenAI partnerships and Gemini R&D. AWS just needs to cover the cost of electricity and servers. That's a different game entirely.
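The economics above can be sketched with a toy model: pay-per-token API pricing scales linearly with usage forever, while reserved inference capacity is a step function. Every number below is a hypothetical placeholder, not a real AWS or OpenAI price; the point is the shape of the curves, not the values.

```python
import math

def monthly_cost_api(tokens_per_month: float, price_per_million: float) -> float:
    """Pay-per-token API: cost scales linearly with usage, with no ceiling."""
    return tokens_per_month / 1_000_000 * price_per_million

def monthly_cost_self_hosted(tokens_per_month: float,
                             instance_cost: float,
                             tokens_per_instance: float) -> float:
    """Reserved inference instances: cost is a step function of capacity."""
    instances = max(1, math.ceil(tokens_per_month / tokens_per_instance))
    return instances * instance_cost

# Hypothetical figures: $10 per million tokens via an API, versus a
# $5,000/month instance that can serve 2 billion tokens a month.
for tokens in (1e8, 1e9, 1e10):
    api = monthly_cost_api(tokens, price_per_million=10.0)
    hosted = monthly_cost_self_hosted(tokens, instance_cost=5_000.0,
                                      tokens_per_instance=2e9)
    print(f"{tokens:>8.0e} tokens/month: API ${api:>10,.0f}  self-hosted ${hosted:>10,.0f}")
```

At low volume the API is cheaper; past the crossover, owned capacity wins and keeps winning. That crossover is the game AWS is playing, and it's one a vendor subsidising its own model development struggles to play on price.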
Neutrality as a Moat
The strategic play is positioning AWS as Switzerland. Run Anthropic's Claude. Run Meta's Llama. Run your own fine-tuned model. AWS doesn't care. Microsoft and Google care - because every dollar spent running Claude is a dollar not spent on their own models.
This matters most for enterprises. A bank building AI tools doesn't want to be dependent on Google or Microsoft's roadmap. They want infrastructure that persists regardless of which model wins. AWS offers that. Azure and GCP are structurally conflicted.
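In code, neutrality looks like this: the calling application stays the same and only the model identifier changes. The sketch below is illustrative, not a real SDK call - the model IDs and payload field names are simplified stand-ins for the vendor-specific request schemas a neutral platform has to accommodate.

```python
import json

def build_request(model_id: str, prompt: str, max_tokens: int = 512) -> str:
    """Build a vendor-appropriate request body for a given model family.

    Hypothetical helper: each model family expects its own payload shape,
    but the application code calling it never has to change.
    """
    if model_id.startswith("anthropic."):
        body = {"messages": [{"role": "user", "content": prompt}],
                "max_tokens": max_tokens}
    elif model_id.startswith("meta."):
        body = {"prompt": prompt, "max_gen_len": max_tokens}
    else:
        # A fine-tuned in-house model: generic fallback shape.
        body = {"inputText": prompt, "maxTokens": max_tokens}
    return json.dumps(body)

# Swapping vendors is a one-line change to model_id; the infrastructure
# underneath doesn't care which model wins.
for model in ("anthropic.claude-x", "meta.llama-x", "custom.finetune-x"):
    print(model, "->", build_request(model, "Summarise this 10-K filing."))
```

That indirection is exactly what a bank wants: if a model vendor stumbles, the switch is a config change, not a platform migration. Azure and GCP can offer the same API surface, but they have an incentive to make one branch of that `if` statement better than the others.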
Thompson's argument is that Amazon's boring infrastructure work - power deals, chip development, global data centre rollout - positions them to win the AI era by being the dumbest pipes. No existential conflict. No model to defend. Just compute at scale.
The Counter-Argument Nobody's Making
AWS is slow. Their product velocity is glacial compared to Azure or GCP. Developers complain about the console, the pricing complexity, the documentation. None of that matters if you're buying raw compute at scale. It matters a lot if you're a startup trying to ship fast.
Amazon's neutrality is an advantage for enterprises. For builders, it's friction. The best developer experience is on Vercel running on AWS infrastructure. The best AI tooling is on Anthropic's Claude API. AWS captures margin but not mindshare. That might be enough. Or it might mean they win the infrastructure game but lose the platform game - which is what happened with mobile.
For business owners, the practical takeaway is this: the cloud you choose matters less than the model you choose. But if you're building something that needs to run at scale, AWS's lack of agenda is starting to look like a feature. They're not trying to sell you on a vision. They're just offering compute. That sounds boring until you realise boring is the thing that survives.
Amazon didn't win by building the best model. They won by building the infrastructure that everyone else's models need to run on. That's not the exciting story. But it might be the durable one.