Voices & Thought Leaders Monday, 9 March 2026

Gary Marcus: Every AI CEO Follows The Same Script


Gary Marcus published a sharp critique of Dario Amodei this week, and the argument isn't really about Amodei. It's about a pattern.

Marcus's point: Anthropic's CEO talks differently from OpenAI's Sam Altman, but the behaviour is identical. Both overpromise on AGI timelines. Both deploy unreliable AI agents commercially while downplaying safety concerns. Both profit from hype while claiming to prioritise responsibility.

The Pattern Marcus Spotted

Marcus points to Amodei's recent appointment to a military AI oversight committee as evidence of reputational positioning - being seen as the "responsible" AI leader. But Marcus argues the substance doesn't match the image. Anthropic releases Claude with the same reliability issues as GPT-4. They market AI agents for production use despite known hallucination rates. They forecast AGI arrival in similar timeframes to the companies they position themselves against.

The critique isn't about any single decision. It's about the gap between rhetoric and reality. Every major AI lab talks about safety. Every major AI lab ships products with known failure modes. Every major AI lab predicts transformative AI just far enough away to sound serious but close enough to justify massive funding rounds.

Why This Matters

Marcus's argument cuts through the personality-driven narratives that dominate AI coverage. We spend a lot of energy debating whether Altman or Amodei or Demis Hassabis is more trustworthy. Marcus suggests that's the wrong question. The business model - scaling compute, racing to deployment, promising capabilities that don't exist yet - creates incentives that override individual ethics.

Here's the uncomfortable part: he might be right. When unreliable AI systems are marketed as production-ready, that's not a personal failing. That's a structural problem. When safety research happens alongside aggressive commercial deployment, those aren't compatible priorities - they're PR strategies running parallel to business objectives.

The Larger Question

Marcus frames this as "there are no heroes in commercial AI", and that's worth sitting with. Not because everyone in AI is acting in bad faith, but because the incentives might make heroism impossible. If your company needs billion-dollar funding rounds, you can't be cautious about capability claims. If your competitors are shipping fast, you can't be the one who waits for safety validation. If the market rewards confidence, you can't admit uncertainty.

This doesn't mean everyone's equally reckless. Anthropic does publish safety research. OpenAI does run red-teaming exercises. But Marcus's point is that these efforts exist within a system that ultimately rewards speed and scale over caution. The question isn't whether Amodei is more ethical than Altman. The question is whether the economics of AI development allow anyone to act on their stated principles.

Whether you agree with Marcus or not, the pattern he's identified is real. Every AI CEO insists their company is different. Every product launch comes with safety disclaimers buried in documentation. Every AGI timeline gets pushed back when it doesn't materialise, then replaced with a new prediction.

Maybe there are no heroes because the industry doesn't reward heroism. It rewards shipping.


Today's Sources

DEV.to AI
Why I Added an LLM Parser on Top of Vector Search (And What It Changed)
n8n Blog
Production AI Playbook: Human Oversight
Hacker News Best
Agent Safehouse - macOS-native sandboxing for local agents
Towards Data Science
LatentVLA: Latent Reasoning Models for Autonomous Driving
DEV.to AI
How AI is Transforming Marketing in 2026: Tools, Trends, and What Actually Works
Towards Data Science
Write C Code Without Writing C: The Magic of PythoC
ROS Discourse
AI-Robotics Software Engineer - Cognitive Robotics Group (full time)
ROS Discourse
ROS 2 Kilted Kaiju is not officially supported by NVIDIA Isaac Sim
ROS Discourse
Recommended system architecture requirement for the aic development
Gary Marcus
There are no heroes in commercial AI
Ben Thompson Stratechery
MacBook Neo, The (Not-So) Thin MacBook, Apple and Memory

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.


© 2026 MEM Digital Ltd t/a Marbl Codes