Azeem Azhar's latest Exponential View tackles three developments that don't immediately seem connected - until you look closer. A doomsday job displacement scenario, unprecedented AI regulation from the US, and deeper questions about how human cognition changes when AI is everywhere. Together, they paint a picture of an industry hitting real-world friction.
The Citrini scenario and compute reality
The Citrini scenario - named after the researcher who modelled it - posits rapid, widespread job displacement as AI capabilities accelerate. It's the nightmare version: millions unemployed in short order, economies struggling to adapt, social safety nets overwhelmed.
But Azhar points to something crucial: compute constraints. The scenario assumes unlimited scaling of AI capabilities. Reality is messier. Training frontier models requires enormous computational resources. Inference at scale is expensive. Energy requirements are real. These aren't temporary bottlenecks - they're physical and economic limits that moderate the pace of disruption.
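To make the inference-cost point concrete, here's a minimal back-of-envelope sketch in Python. Every figure in it - tokens per task, price per million tokens, tasks per worker-day, workers displaced - is a hypothetical placeholder, not a number from the newsletter; the shape of the calculation is what matters.

```python
# Back-of-envelope: the inference bill for replacing knowledge work
# with a frontier model at scale. All numbers are hypothetical
# placeholders, chosen only to show how the costs compound.

TOKENS_PER_TASK = 50_000        # prompt + completion tokens per task (assumed)
PRICE_PER_M_TOKENS = 10.0       # USD per million tokens (assumed)
TASKS_PER_WORKER_DAY = 40       # tasks one worker handles daily (assumed)
WORKERS_DISPLACED = 1_000_000   # scale of the doomsday scenario (assumed)

cost_per_task = TOKENS_PER_TASK / 1_000_000 * PRICE_PER_M_TOKENS
daily_cost = cost_per_task * TASKS_PER_WORKER_DAY * WORKERS_DISPLACED

print(f"Cost per task:        ${cost_per_task:.2f}")
print(f"Daily inference bill: ${daily_cost:,.0f}")
# With these placeholders: $0.50 per task, $20m per day, before
# training runs, serving infrastructure, or energy. The point isn't
# the figure; it's that inference cost scales linearly with displaced
# work, so "unlimited scaling" is never free.
```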
This doesn't mean disruption won't happen. It means it happens unevenly, sector by sector, task by task. Some jobs transform quickly. Others remain stubbornly resistant to automation because the economics don't work, the liability questions aren't resolved, or the human element genuinely matters. The Citrini scenario assumes frictionless deployment. The real world has friction everywhere.
Anthropic and supply-chain regulation
The US government classified Anthropic as a supply-chain risk. Not because of security concerns about the company itself, but because of dependencies on international compute infrastructure and training data pipelines. This is new. AI regulation has focused on outputs - bias, safety, transparency. Now it's focusing on inputs: where does compute come from, who controls training infrastructure, what happens if access is disrupted?
For builders, this matters practically. If you're designing systems that depend on frontier models, you're now thinking about regulatory risk and supply-chain resilience in ways you weren't six months ago. What happens if access to certain model providers becomes restricted? What's your fallback? These aren't hypothetical questions anymore.
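One practical response is a provider-failover layer: route requests to a primary model API and fall back to alternatives if access is cut off. The sketch below is a minimal illustration of the pattern, not a recommendation of any vendor; the provider names, the `Provider` type, and the `complete_with_fallback` helper are all hypothetical, and real SDK calls would slot in behind the stand-in functions.

```python
from dataclasses import dataclass
from typing import Callable

class ProviderUnavailable(Exception):
    """Raised when a model provider cannot serve a request."""

@dataclass
class Provider:
    name: str
    complete: Callable[[str], str]  # prompt -> completion, wrapping a real SDK call

def complete_with_fallback(prompt: str, providers: list[Provider]) -> str:
    """Try each provider in order; fail over when one is unavailable."""
    errors = []
    for provider in providers:
        try:
            return provider.complete(prompt)
        except ProviderUnavailable as exc:
            errors.append(f"{provider.name}: {exc}")
    raise RuntimeError("All providers failed: " + "; ".join(errors))

def call_primary(prompt: str) -> str:
    # Stand-in for a frontier-model API client; here it simulates the
    # regulatory scenario where access is suddenly restricted.
    raise ProviderUnavailable("access restricted")

def call_backup(prompt: str) -> str:
    # Stand-in for a self-hosted or open-weights fallback.
    return f"[fallback completion for: {prompt[:40]}]"

providers = [
    Provider("frontier-model-a", call_primary),  # hypothetical name
    Provider("open-weights-b", call_backup),     # hypothetical name
]
print(complete_with_fallback("Summarise our supply-chain exposure.", providers))
```

The design choice worth noting: the fallback list is configuration, not code, so when a provider's regulatory status changes you reorder a list rather than rewrite your application.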
Human cognition in an AI world
Azhar's deeper question: how does human cognition change when AI assistance is ubiquitous? Not just "do we get lazier?" but "what cognitive capabilities do we develop differently?"
There's a parallel with literacy. When reading and writing became widespread, human cognition changed. We externalised memory. We developed different reasoning patterns. We structured knowledge differently. AI assistance might do something similar - not replacing thinking, but changing which cognitive muscles we develop and which we let atrophy.
The risk isn't that we become dependent on AI. It's that we optimise for tasks AI can do, and undervalue tasks AI struggles with: synthesis across domains, contextual judgment, ethical reasoning, creative leaps that don't follow from training data. If education systems and workplaces reward AI-compatible thinking, we might systematically underinvest in distinctly human capabilities.
The pattern underneath
What connects these three threads? They're all about friction. The Citrini scenario assumes frictionless disruption - compute constraints provide friction. Supply-chain regulation creates friction. Cognitive adaptation takes time - more friction. None of this stops AI transformation. But it means transformation happens through negotiation with reality, not on theory's clean timeline.
For decision-makers, this is useful context. The hype cycle encourages thinking in extremes: either AI changes everything instantly, or it's overblown. The reality is probably neither. Significant change, moderated by real constraints, creating opportunities for those who understand both the capability and the friction.