Voices & Thought Leaders · Tuesday, 5 May 2026

Why Amazon Might Actually Win the AI Race by Not Trying to Win It


Amazon lost the AI training race. Google has TPUs. Microsoft has OpenAI. Amazon has... EC2 instances that anyone can rent. That positioning looked like a weakness eighteen months ago. Ben Thompson argues it's actually their structural advantage.

The thesis: AWS is neutral. It doesn't care which model you run. It just provides compute. Microsoft and Google, by contrast, are in an impossible position - they need to sell cloud services to everyone while also pushing their own frontier models. That conflict is already causing friction with customers who don't want to be locked into Gemini or GPT.

Amazon's decade-long bets on infrastructure are starting to matter. Custom chips. Power generation investment. Data centre capacity built before AI inference became the bottleneck. None of this was positioned as an AI play. It was just Amazon being Amazon - building physical infrastructure at scale while everyone else chased software.

The Inference Economics Problem

Training models is expensive. Running them at scale is ruinous. Inference cost is the problem nobody solved during the GPT hype cycle: every startup building on OpenAI's API is paying for compute it can't control, at prices that don't scale with its revenue.

AWS doesn't need to own the model to win here. It needs to be the cheapest place to run inference at scale. Custom chips matter for that: Amazon's Trainium targets model training and Inferentia targets inference, rather than general-purpose compute. Custom silicon is slower to bring to market than buying Nvidia GPUs, but cheaper to run once deployed.

Microsoft and Google can't compete on price alone because they're subsidising their own model development. They need margin from cloud services to fund OpenAI partnerships and Gemini R&D. AWS just needs to cover the cost of electricity and servers. That's a different game entirely.
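The margin argument above can be sketched as toy unit economics. Everything in this snippet is a hypothetical illustration: the function, the dollar figures, and the request volumes are assumptions chosen to show the shape of the comparison, not real AWS, Azure, or GCP pricing.

```python
# Toy unit-economics sketch. All figures are hypothetical assumptions,
# not real cloud prices or request volumes.

def breakeven_price(hardware_cost_per_hour: float,
                    power_cost_per_hour: float,
                    requests_per_hour: int,
                    margin: float) -> float:
    """Minimum price per request that covers compute cost plus a target margin."""
    cost_per_request = (hardware_cost_per_hour + power_cost_per_hour) / requests_per_hour
    return cost_per_request * (1 + margin)

# A neutral provider only needs to cover servers and electricity,
# so a thin markup is enough.
neutral = breakeven_price(hardware_cost_per_hour=2.0,
                          power_cost_per_hour=0.5,
                          requests_per_hour=10_000,
                          margin=0.15)

# A provider funding frontier-model R&D out of cloud margin needs a
# much fatter markup on the same hardware.
subsidising = breakeven_price(hardware_cost_per_hour=2.0,
                              power_cost_per_hour=0.5,
                              requests_per_hour=10_000,
                              margin=0.60)

print(f"neutral provider floor:     ${neutral:.7f} per request")
print(f"subsidising provider floor: ${subsidising:.7f} per request")
```

On identical hardware, the provider that must fund model development out of cloud margin has a strictly higher price floor; the neutral provider can undercut it all the way down to cost plus a thin markup. That is the "different game" in miniature.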

Neutrality as a Moat

The strategic play is positioning AWS as Switzerland. Run Anthropic's Claude. Run Meta's Llama. Run your own fine-tuned model. AWS doesn't care. Microsoft and Google care - because every dollar spent running Claude is a dollar not spent on their own models.

This matters most for enterprises. A bank building AI tools doesn't want to be dependent on Google's or Microsoft's roadmap. It wants infrastructure that persists regardless of which model wins. AWS offers that. Azure and GCP are structurally conflicted.

Thompson's argument is that Amazon's boring infrastructure work - power deals, chip development, global data centre rollout - positions them to win the AI era by being the dumb pipes. No existential conflict. No model to defend. Just compute at scale.

The Counter-Argument Nobody's Making

AWS is slow. Its product velocity is glacial compared to Azure's or GCP's. Developers complain about the console, the pricing complexity, the documentation. None of that matters if you're buying raw compute at scale. It matters a lot if you're a startup trying to ship fast.

Amazon's neutrality is an advantage for enterprises. For builders, it's friction. The best developer experience is on Vercel running on AWS infrastructure. The best AI tooling is on Anthropic's Claude API. AWS captures margin but not mindshare. That might be enough. Or it might mean they win the infrastructure game but lose the platform game - which is what happened with mobile.

For business owners, the practical takeaway is this: the cloud you choose matters less than the model you choose. But if you're building something that needs to run at scale, AWS's lack of agenda is starting to look like a feature. They're not trying to sell you on a vision. They're just offering compute. That sounds boring until you realise boring is the thing that survives.

Amazon didn't win by building the best model. They won by building the infrastructure that everyone else's models need to run on. That's not the exciting story. But it might be the durable one.


Today's Sources

  • Production AI Playbook: Evaluation and Monitoring (n8n Blog)
  • The AI Agent Work That Has Budget Right Now (DEV.to AI)
  • GDPR Website Audit: What Developers Should Check Beyond the Cookie Banner (DEV.to AI)
  • Single Agent vs Multi-Agent: When to Build a Multi-Agent System (Towards Data Science)
  • Inside Colin Angle's bid to build companion robots with Familiar Machines & Magic (The Robot Report)
  • ABB Robotics launches OmniVance autonomous surface finishing cell (The Robot Report)
  • "ROS 2 in a Nutshell: A Survey" is now published in ACM Computing Surveys (ROS Discourse)
  • Amazon's Durability (Ben Thompson, Stratechery)
  • [AINews] The Other vs The Utility (Latent Space)
  • The growing AI backlash (Gary Marcus)

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.

MEM Digital Ltd t/a Marbl Codes
Co. 13753194 (England & Wales)
VAT: 400325657
3-4 Brittens Court, Clifton Reynes, Olney, MK46 5LG
© 2026 MEM Digital Ltd