Voices & Thought Leaders Sunday, 8 March 2026

Azeem Azhar on AI's Fragile Supply Chain and the 2.8% Productivity Puzzle


Two things happened this week that shouldn't sit comfortably together. AI productivity gains hit 2.8% in early 2025 - a measurable, real-world bump. And then drone strikes on AWS data centers reminded everyone just how concentrated and vulnerable AI infrastructure has become.

Azeem Azhar's latest Exponential View digs into both, and the tension between them matters more than either story alone.

The 2.8% Productivity Number

Early 2025 data shows a 2.8% productivity increase in workplaces deploying AI tools at scale. That's not hype. It's measured output - tasks completed faster, fewer errors, better resource allocation. For businesses, 2.8% is significant. Compound that over a year and you're talking about real operational leverage.
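To make the compounding point concrete: the article doesn't specify the period over which the 2.8% was measured, so the quarterly cadence in this small sketch is purely an illustrative assumption, not a claim from the data.

```python
# Illustration only: what a 2.8% per-period gain becomes when compounded.
# The measurement period is an assumption (here, quarterly) for the sake
# of showing the arithmetic, not a figure from the article.
periodic_gain = 0.028
periods = 4  # assumed: four quarters in a year

compounded = (1 + periodic_gain) ** periods - 1
print(f"Compounded over {periods} periods: {compounded:.1%}")  # ~11.7%
```

Even under that generous assumption, the point stands: repeated small efficiency gains add up to material leverage, but they remain efficiency gains, not a step change.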

But Azhar's framing is important here. He calls this the chatbot phase. The gains are coming from relatively simple automation: answering customer queries faster, drafting emails, summarising documents, generating first-draft code. These are valuable, but they're not transformative. They're efficiency improvements on existing workflows, not new capabilities.

The historical comparison he draws is to early internet adoption. Businesses saw productivity gains from email and basic web presence, but those were trivial compared to what came later - e-commerce, cloud infrastructure, mobile-first business models. The first wave of gains is never the full story.

What that means for business owners is this: 2.8% is real, and you should chase it. But it's also just the starting gate. The tools that deliver 10x improvements - not 2.8% - are still being built. If you're only optimising existing processes, you're missing the bigger shift.

The Supply Chain Vulnerability Nobody Wants to Talk About

Then there's the infrastructure problem. Azhar highlights something that should be front-page news but somehow isn't: AI's supply chain is absurdly concentrated. A handful of data centers power most of the world's AI workloads. A handful of chip manufacturers supply the GPUs. A handful of cloud providers host the models.

Drone strikes on AWS data centers - whether hypothetical scenario planning or actual incidents - expose just how fragile this setup is. If a few key facilities go offline, productivity gains evaporate overnight. Not just for one company. For entire industries.

This isn't theoretical. We've seen what happens when supply chains concentrate and then break. COVID-19 and chip shortages. The Suez Canal blockage. Russia-Ukraine and energy prices. Every time, the lesson is the same: concentration equals fragility. And right now, AI infrastructure is more concentrated than almost any other critical system.

Azhar's point is that we're building productivity gains on top of a supply chain that hasn't been stress-tested at scale. The 2.8% improvement assumes the infrastructure stays online. But what's the contingency if it doesn't?

The Knowledge Commons Debate

Layered into this is a quieter but equally important question: who owns the knowledge that AI systems are trained on? Azhar explores the tension between open knowledge commons - the idea that information should be freely accessible for training and development - and the commercial interests that want to lock it down.

If AI models are trained on publicly available data, who benefits? The companies building the models, certainly. But what about the people who created that data in the first place - writers, researchers, artists, developers? The current model assumes their work is fair game for training data. That assumption is increasingly contested.

For builders, this matters because it shapes what's legal to train on and what's commercially viable. If the knowledge commons closes - if major data sources start charging for training access or blocking it outright - the cost and complexity of building new models skyrockets. That concentrates power even further into the hands of companies that already have massive datasets.

The counterargument is that open access accelerates innovation and benefits everyone. Lock down training data and you slow progress. But that assumes the benefits distribute evenly, which they haven't so far.

What This Means for Builders and Business Owners

If you're building on AI infrastructure, Azhar's analysis should make you think twice about dependencies. Relying entirely on a single cloud provider or a single model family is a risk. Not just a technical risk - a business continuity risk.

For business owners chasing that 2.8% productivity gain, the question is: what happens if the tools you're relying on become unavailable, more expensive, or legally contested? Do you have fallback plans? Can your operations survive without them?
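One concrete shape a fallback plan can take is a thin wrapper that tries providers in order and fails over when one is down. This is a minimal sketch under stated assumptions: `primary` and `backup` are hypothetical stand-ins for real model clients, not actual APIs, and real failover would add retries, timeouts, and monitoring.

```python
from typing import Callable, List


class AllProvidersFailed(Exception):
    """Raised when every configured provider has failed."""


def with_fallback(providers: List[Callable[[str], str]], prompt: str) -> str:
    """Try each provider in order; return the first successful response."""
    errors: List[Exception] = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as exc:  # broad catch: any provider outage counts
            errors.append(exc)
    raise AllProvidersFailed(errors)


# Hypothetical stand-ins for real model clients (not real APIs):
def primary(prompt: str) -> str:
    raise TimeoutError("primary region offline")


def backup(prompt: str) -> str:
    return f"backup answered: {prompt}"


print(with_fallback([primary, backup], "summarise this document"))
```

The design choice worth noting is that the fallback order is explicit configuration, not hidden logic: if the primary becomes unavailable, more expensive, or legally contested, reordering the list is the whole migration.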

The optimistic take is that concentration creates opportunity. If infrastructure is fragile, there's a market for redundancy, for decentralised alternatives, for companies that solve the supply chain problem. But the pessimistic take is that we're building productivity improvements on a foundation that hasn't been tested under real strain.

Azhar's newsletter doesn't give easy answers. It just holds up two truths and asks you to sit with the tension. AI is delivering measurable productivity gains right now. And the infrastructure enabling those gains is more fragile than anyone wants to admit.

Both things are true. And that's the problem.


Today's Sources

DEV.to AI: GPT-5.4 Just Made Computer Use a Commodity. Now What?
DEV.to AI: .NET + AI = The Perfect Combo
Towards Data Science: Understanding Context and Contextual Retrieval in RAG
Towards Data Science: The AI Bubble Has a Data Science Escape Hatch
The Robot Report: Humanoid developer Agility Robotics rebrands
Azeem Azhar: Exponential View #564: Intelligence as a target; the future of knowledge; AI, productivity & economy
Gary Marcus: BREAKING: Sam Altman's greed and dishonesty are finally catching up to him

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.


© 2026 MEM Digital Ltd t/a Marbl Codes