Remember when AI safety advocates warned that AGI was imminent and we needed to slow down? Gary Marcus has just argued that the strategy didn't merely fail - it made everything worse.
The doomers thought raising alarms about existential risk would pump the brakes on dangerous AI development. Instead, it accelerated corporate consolidation and government overreach. This is worth unpacking, because the consequences are playing out right now.
The AGI-is-nigh narrative
The narrative went like this: prominent researchers, some from inside leading AI labs, started claiming AGI (artificial general intelligence) was 2-5 years away. Not just possible - imminent. Existential risk was around the corner.
The logic seemed sound: if you believe AGI is coming fast and could be catastrophic, you need urgent action. Regulation, safety protocols, maybe even a pause on development.
But here's what actually happened. Corporations heard "AGI in 5 years" and went all-in on the gold rush. Governments heard "existential threat" and started building control frameworks that favour incumbents. The general public heard "super-intelligent AI" and assumed current systems were far more capable than they are.
Everyone heard the alarm. Nobody hit the brakes. Instead, they floored it.
How hype became policy
Marcus points to something subtle but critical: exaggerated timelines gave cover for regulatory capture. If AGI is truly imminent, then only the biggest players with the most resources can be trusted to develop it safely, right?
So you get frameworks that impose compliance costs small labs can't afford. You get safety requirements that sound reasonable but functionally exclude open-source development. You get "responsible AI" policies written by the very companies they're meant to regulate.
The doomers wanted oversight. They got regulatory moats for OpenAI, Google, and Anthropic.
The capability gap
Meanwhile, actual AI capability is nowhere near AGI. Current systems are impressive at narrow tasks - language, image generation, code completion. But they're not intelligent in any general sense. They can't reason reliably, plan long-term, or adapt to truly novel situations.
The gap between hype and reality creates real problems. Businesses over-invest in solutions that don't work yet. Policymakers regulate imaginary threats while ignoring actual harms - algorithmic bias, misinformation, labour displacement. And genuine safety research gets drowned out by sci-fi scenarios.
What actually matters right now
This is where Marcus gets pointed. Instead of worrying about hypothetical superintelligence, we should be focused on real, present-day issues: systems that amplify misinformation, perpetuate bias, or make critical decisions without transparency.
These aren't future risks. They're happening now. But they're less sexy than AGI doom, so they get less attention and fewer resources. The doomer narrative didn't just fail to slow things down - it redirected focus away from tractable problems we could actually solve.
The trust problem
There's a credibility cost too. When you cry AGI and it doesn't arrive, people stop listening. If we do hit genuinely risky capability thresholds in future, the warnings will land on deaf ears. The boy who cried superintelligence.
Marcus isn't arguing we should ignore AI safety. He's arguing we need honest assessments of capability and risk, not inflated timelines that serve corporate interests or personal brands. Hype cuts both ways.
Where this leaves us
So what's the alternative? Marcus suggests humility about timelines, focus on near-term harms, and scepticism of regulatory frameworks that conveniently benefit incumbents.
That means calling out exaggerated claims - whether they come from boosters or doomers. It means pushing for transparency and accountability in systems being deployed today, not hypothetical ones a decade away. And it means recognising that the biggest risk right now isn't rogue AGI - it's concentrated power in the hands of a few companies with misaligned incentives.
The doomers had good intentions. But intentions don't determine outcomes. And the outcome of AGI-is-nigh rhetoric has been regulatory capture, corporate consolidation, and distraction from real, solvable problems.
Maybe it's time to focus on the AI we have, not the AI we fear.