Gary Marcus published a sharp critique of Dario Amodei this week, and the argument isn't really about Amodei. It's about a pattern.
Marcus's point: Anthropic's CEO talks differently from OpenAI's Sam Altman, but the behaviour is identical. Both overpromise on AGI timelines. Both deploy unreliable AI agents commercially while downplaying safety concerns. Both profit from hype while claiming to prioritise responsibility.
The Pattern Marcus Spotted
Marcus points to Amodei's recent appointment to a military AI oversight committee as evidence of reputational positioning - being seen as the "responsible" AI leader. But Marcus argues the substance doesn't match the image. Anthropic releases Claude with the same reliability issues as GPT-4. They market AI agents for production use despite known hallucination rates. They forecast AGI arrival on timelines similar to those of the companies they position themselves against.
The critique isn't about any single decision. It's about the gap between rhetoric and reality. Every major AI lab talks about safety. Every major AI lab ships products with known failure modes. Every major AI lab predicts transformative AI just far enough away to sound serious but close enough to justify massive funding rounds.
Why This Matters
Marcus's argument cuts through the personality-driven narratives that dominate AI coverage. We spend a lot of energy debating whether Altman or Amodei or Demis Hassabis is more trustworthy. Marcus suggests that's the wrong question. The business model - scaling compute, racing to deployment, promising capabilities that don't exist yet - creates incentives that override individual ethics.
Here's the uncomfortable part: he might be right. When unreliable AI systems are marketed as production-ready, that's not a personal failing. That's a structural problem. When safety research happens alongside aggressive commercial deployment, those aren't compatible priorities - they're PR strategies running parallel to business objectives.
The Larger Question
Marcus frames this as "there are no heroes in commercial AI", and that's worth sitting with. Not because everyone in AI is acting in bad faith, but because the incentives might make heroism impossible. If your company needs billion-dollar funding rounds, you can't be cautious about capability claims. If your competitors are shipping fast, you can't be the one who waits for safety validation. If the market rewards confidence, you can't admit uncertainty.
This doesn't mean everyone's equally reckless. Anthropic does publish safety research. OpenAI does run red-teaming exercises. But Marcus's point is that these efforts exist within a system that ultimately rewards speed and scale over caution. The question isn't whether Amodei is more ethical than Altman. The question is whether the economics of AI development allow anyone to act on their stated principles.
Whether you agree with Marcus or not, the pattern he's identified is real. Every AI CEO says their company will be different. Every product launch comes with safety disclaimers buried in documentation. Every AGI timeline gets pushed back when it doesn't materialise, then replaced with a new prediction.
Maybe there are no heroes because the industry doesn't reward heroism. It rewards shipping.