Jensen Huang has a simple mental model for the AI economy. You put electrons in one end - energy - and get tokens out the other. And Nvidia sits in the middle of that pipeline.
Azeem Azhar's latest analysis breaks down Huang's worldview and why it's proving remarkably durable against threats from custom silicon. The model is elegant: every AI workload fundamentally converts compute into tokens. Nvidia's GPUs are the engine. The rest is just optimization around the edges.
What makes this interesting isn't the hardware itself. It's how Huang has positioned Nvidia as the platform, not just a component. Platform control is the prize in tech. It's why Apple owns iOS, why AWS dominates cloud infrastructure, why Microsoft held onto Windows for decades. Nvidia's CUDA software stack is that same kind of lock-in for AI workloads.
Why Custom Silicon Hasn't Killed Nvidia Yet
The custom silicon story was supposed to be simple. Google builds TPUs. Amazon builds Trainium. Microsoft builds Maia. They all reduce their dependence on Nvidia, margins compress, and the GPU monopoly cracks.
That hasn't happened. And Azhar's piece explains why. Custom chips work brilliantly for specific workloads - Google's TPUs are fantastic for training models on Google's infrastructure. But they're niche. They don't generalise well. They don't have the software ecosystem. And they require expertise most companies don't have.
Nvidia's advantage is generality. Their GPUs run PyTorch, TensorFlow, JAX, and every other framework researchers care about. Developers know CUDA. The tooling is mature. That ecosystem effect is brutal to compete against. It's not enough to build a faster chip. You have to rebuild the entire developer experience around it.
The real threat isn't custom silicon from hyperscalers. It's Huawei Ascend - and that's because Huawei is playing a different game entirely.
The Huawei Wildcard
Huawei's Ascend chips represent adjacent-market disruption. Huawei isn't competing on performance or trying to undercut Nvidia on price. It's building for markets where Nvidia can't compete - primarily China, where US export controls have created an artificial moat.
This is classic disruption theory. You don't beat the incumbent in their home market. You build for a different market entirely, get good, then expand. Huawei has a captive audience in Chinese AI labs and tech companies who can't legally buy Nvidia's latest chips. That's enough volume to fund R&D, mature the ecosystem, and eventually challenge on capability.
The question isn't whether Huawei can build chips that match Nvidia's performance. It's whether Huawei can build a software ecosystem compelling enough to fragment the market. And given enough time and enough demand, that's not impossible.
Platform Control Still Matters Most
Huang's electrons-to-tokens model works because it captures the essential truth of the AI economy right now. Compute is the bottleneck. Models are getting bigger. Inference workloads are exploding. Energy costs are climbing. And Nvidia is the company that makes all of that viable at scale.
The threat to Nvidia isn't better hardware. It's a world where compute stops being the bottleneck - where models plateau, where efficiency gains outpace scale, where the marginal value of another GPU drops to zero. That world might be coming. But it's not here yet.
Until then, Huang's model holds. Electrons go in. Tokens come out. And Nvidia prints money in between.