Azeem Azhar ran the numbers on his personal AI usage. 870 million tokens per day. Not for a company. Not for a team. For one person.
That's not a typo. That's the scale of inference consumption when AI becomes genuinely useful - when it's reading your emails, summarising reports, drafting responses, managing your calendar, researching topics, and running background tasks you didn't even know you needed.
And if that's happening at the individual level, the macro implications are staggering. Azeem's analysis explores what Jensen Huang was really saying at GTC: we're transitioning from a training economy to an inference economy. And nobody's ready for how big that economy is about to get.
The Trillion-Dollar Supply Chain Shift
Training models is a one-time cost. Expensive, yes - but bounded. You train GPT-5 once. You don't train it again every time someone asks it a question.
Inference is the opposite. Every interaction costs tokens. Every query, every API call, every agent action - it all runs on inference. And as usage scales, inference costs dwarf training costs. Jensen Huang's estimate: a trillion-dollar supply chain will emerge around inference infrastructure.
That's not hype. That's arithmetic. If personal usage is hitting 870 million tokens a day, and there are 8 billion people on the planet, and AI adoption follows anything resembling smartphone adoption curves - the compute demand for inference will exceed anything we've built infrastructure for.
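The arithmetic is easy to sanity-check. A minimal sketch, using the 870-million-token figure from above and a few illustrative adoption rates (the adoption percentages are assumptions, not forecasts):

```python
# Back-of-envelope global inference demand if heavy personal AI usage
# spreads. The adoption fractions are illustrative assumptions.

TOKENS_PER_HEAVY_USER_PER_DAY = 870_000_000  # Azeem's reported daily usage
WORLD_POPULATION = 8_000_000_000

# Even tiny adoption of this usage pattern implies enormous demand.
for adoption in (0.001, 0.01, 0.1):
    daily_tokens = TOKENS_PER_HEAVY_USER_PER_DAY * WORLD_POPULATION * adoption
    print(f"{adoption:>5.1%} adoption -> {daily_tokens:.2e} tokens/day")
```

At one-tenth of one percent adoption, that's already on the order of 10^15 tokens a day, which is the point: the curve gets silly long before adoption looks anything like smartphones.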
This is why NVIDIA announced Vera Rubin and Feynman. This is why OpenClaw matters. The companies that control inference infrastructure in 2026 will be the cloud providers of the 2030s. The economics have flipped.
OpenClaw: The Harness That Makes the Engine Work
Azeem's framing is sharp: models are the engine. OpenClaw is the harness. You can have the most powerful engine in the world, but if you can't attach it to anything useful, it's just expensive noise.
OpenClaw proposes a standard for agentic workflows - how agents declare what they can do, request actions from other agents, and coordinate autonomously. It's the missing protocol layer. Right now, every AI system is bespoke. Every integration is custom. Every workflow is fragile.
If OpenClaw gains adoption, it changes the game. Suddenly, agents built by different companies, running different models, can work together. Your scheduling agent can talk to your email agent, which talks to your research agent, which talks to your analytics agent. They coordinate. They share context. They act autonomously.
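To make the coordination pattern concrete, here's a minimal sketch of agents declaring capabilities and routing requests through a shared registry. To be clear, this is not OpenClaw's actual schema or API - the `Agent`, `Registry`, and action names are hypothetical, purely to illustrate the pattern the protocol layer would standardise:

```python
# Hypothetical illustration of the coordination model described above:
# agents declare what they can do; a registry routes action requests.
# This is NOT the OpenClaw spec - names and shapes are invented.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    capabilities: dict[str, Callable[[dict], dict]] = field(default_factory=dict)

    def declare(self, action: str, handler: Callable[[dict], dict]) -> None:
        """Announce that this agent can perform `action`."""
        self.capabilities[action] = handler

class Registry:
    """Routes an action request to whichever agent declared it."""
    def __init__(self) -> None:
        self.routes: dict[str, Agent] = {}

    def register(self, agent: Agent) -> None:
        for action in agent.capabilities:
            self.routes[action] = agent

    def request(self, action: str, payload: dict) -> dict:
        return self.routes[action].capabilities[action](payload)

# Two agents from different "vendors" coordinating via the registry.
scheduler = Agent("scheduler")
scheduler.declare("find_slot", lambda p: {"slot": "2026-03-01T10:00"})

mailer = Agent("mailer")
mailer.declare("send_invite", lambda p: {"status": "sent", "to": p["to"]})

reg = Registry()
reg.register(scheduler)
reg.register(mailer)

slot = reg.request("find_slot", {"attendees": ["a@example.com"]})
result = reg.request("send_invite", {"to": "a@example.com", "when": slot["slot"]})
print(result)
```

The point of a shared protocol is that neither agent needs to know the other exists - they only need to agree on how capabilities are declared and requests are shaped.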
That's the promise. Whether it delivers depends on adoption beyond NVIDIA's ecosystem. But the fact that it's open - not proprietary - is a strong signal. This isn't a moat play. It's an infrastructure bet.
What This Means for Builders
If inference costs are dropping and usage is spiking, the economics of AI products just changed overnight. Products that were too expensive to run six months ago are suddenly viable. Products that felt marginal are now core.
For developers, this is the green light. Build the thing you thought was too token-intensive. The cost curve is moving in your favour faster than you think. The companies that move now - while inference infrastructure is still being built out - will have a head start that's hard to close.
For businesses, the question is no longer "should we adopt AI?" It's "how fast can we adopt inference at scale?" The companies that get inference infrastructure right will have a cost advantage that compounds. The ones that don't will be paying 10x more for the same capability within two years.
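The "10x within two years" claim follows from simple compounding. A rough sketch, assuming inference price-performance improves by roughly 3x a year (an illustrative rate, not a measurement): a team locked into today's stack pays a growing multiple of the going rate.

```python
# Illustrative cost-curve arithmetic (assumed rate, not measured data):
# if inference cost per token falls ~3x per year, standing still means
# paying a compounding multiple of the optimised rate.
ANNUAL_COST_DECLINE = 3.0  # assumed yearly improvement factor

for years in (1, 2, 3):
    multiple = ANNUAL_COST_DECLINE ** years
    print(f"After {years} year(s): paying ~{multiple:.0f}x the optimised rate")
```

At an assumed 3x yearly decline, two years of standing still puts you around 9x the going rate - roughly the 10x gap the paragraph describes.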
The Personal Consumption Pattern
Azeem's 870 million tokens a day isn't an outlier. It's a preview. When AI becomes genuinely useful - not a novelty, not a toy, but a tool you rely on - consumption scales exponentially. You stop thinking about token costs. You stop rationing usage. You just use it.
That's where we're heading. Personal AI usage that rivals data centre workloads from a decade ago. And the infrastructure to support it doesn't exist yet. It's being built right now. The companies that build it will own the next layer of the stack.
Jensen Huang's message at GTC was clear: the training era made the models possible. The inference era makes them useful. And the companies that win inference will define the next decade of computing.
Azeem's analysis connects the dots. The shift is real. The numbers are staggering. And the opportunity is wide open.