Voices & Thought Leaders Tuesday, 3 March 2026

How AI doomsday predictions backfired spectacularly


Remember when AI safety advocates warned that AGI was imminent and we needed to slow down? Gary Marcus just argued that strategy didn't just fail - it made everything worse.

The doomers thought raising alarms about existential risk would pump the brakes on dangerous AI development. Instead, it accelerated corporate power and government overreach. This is worth unpacking, because the consequences are playing out right now.

The AGI-is-nigh narrative

The narrative went like this: prominent researchers, some from inside leading AI labs, started claiming AGI (artificial general intelligence) was 2-5 years away. Not just possible - imminent. Existential risk was around the corner.

The logic seemed sound: if you believe AGI is coming fast and could be catastrophic, you need urgent action. Regulation, safety protocols, maybe even a pause on development.

But here's what actually happened. Corporations heard "AGI in 5 years" and went all-in on the gold rush. Governments heard "existential threat" and started building control frameworks that favour incumbents. The general public heard "super-intelligent AI" and assumed current systems were far more capable than they are.

Everyone heard the alarm. Nobody hit the brakes. Instead, they floored it.

How hype became policy

Marcus points to something subtle but critical: exaggerated timelines gave cover for regulatory capture. If AGI is truly imminent, then only the biggest players with the most resources can be trusted to develop it safely, right?

So you get frameworks that impose compliance costs small labs can't afford. You get safety requirements that sound reasonable but functionally exclude open-source development. You get "responsible AI" policies written by the very companies they're meant to regulate.

The doomers wanted oversight. They got regulatory moats for OpenAI, Google, and Anthropic.

The capability gap

Meanwhile, actual AI capability is nowhere near AGI. Current systems are impressive at narrow tasks - language, image generation, code completion. But they're not intelligent in any general sense. They can't reason reliably, plan long-term, or adapt to truly novel situations.

The gap between the hype and reality creates real problems. Businesses over-invest in solutions that don't work yet. Policymakers regulate imaginary threats while ignoring actual harms - algorithmic bias, misinformation, labour displacement. And genuine safety research gets drowned out by sci-fi scenarios.

What actually matters right now

This is where Marcus gets pointed. Instead of worrying about hypothetical superintelligence, we should be focused on real, present-day issues: systems that amplify misinformation, perpetuate bias, or make critical decisions without transparency.

These aren't future risks. They're happening now. But they're less sexy than AGI doom, so they get less attention and fewer resources. The doomer narrative didn't just fail to slow things down - it redirected focus away from tractable problems we could actually solve.

The trust problem

There's a credibility cost too. When you cry AGI and it doesn't arrive, people stop listening. If we do hit genuinely risky capability thresholds in future, the warnings will land on deaf ears. The boy who cried superintelligence.

Marcus isn't arguing we should ignore AI safety. He's arguing we need honest assessments of capability and risk, not inflated timelines that serve corporate interests or personal brands. Hype cuts both ways.

Where this leaves us

So what's the alternative? Marcus suggests humility about timelines, focus on near-term harms, and scepticism of regulatory frameworks that conveniently benefit incumbents.

That means calling out exaggerated claims - whether they come from boosters or doomers. It means pushing for transparency and accountability in systems being deployed today, not hypothetical ones a decade away. And it means recognising that the biggest risk right now isn't rogue AGI - it's concentrated power in the hands of a few companies with misaligned incentives.

The doomers had good intentions. But intentions don't determine outcomes. And the outcome of AGI-is-nigh rhetoric has been regulatory capture, corporate consolidation, and distraction from real, solvable problems.

Maybe it's time to focus on the AI we have, not the AI we fear.


Today's Sources

DEV.to AI
What actually goes wrong when autonomous agents try to make money
Hacker News Best
Show HN: I built a sub-500ms latency voice agent from scratch
Hacker News Best
Ars Technica fires reporter after AI controversy involving fabricated quotes
Hacker News Best
Meta's AI smart glasses and data privacy concerns
ML Mastery
Deploying AI Agents to Production: Architecture, Infrastructure, and Implementation Roadmap
The Robot Report
6 lessons I learned watching a robotics startup die from the inside
ROS Discourse
DART upgraded from 6.13 to 6.16.6 in Linux Gazebo Jetty (gz-physics9)
ROS Discourse
The AI for Industry Challenge Toolkit is LIVE
The Robot Report
Intuitive buys European surgical robot distributors
Hackaday Robotics
Cynus Chess Robot: a Chess Board With a Robotic Arm
The Robot Report
NORD adds 112 frame size to IE5+ synchronous motor line
Gary Marcus
How AGI-is-nigh doomers own-goaled humanity
Latent Space
How to Kill the Code Review
Latent Space
[AINews] Truth in the time of Artifice
Ben Thompson Stratechery
Technological Scale and Government Control, Paramount Outbids Netflix for Warner Bros.

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.


© 2026 MEM Digital Ltd t/a Marbl Codes