Voices & Thought Leaders Wednesday, 13 May 2026

Why OpenAI's Enterprise Push Looks More Like 1975 Than 2015


OpenAI isn't building SaaS tools. They're building something older and harder - mainframe-era enterprise transformation. Ben Thompson's analysis draws the parallel nobody else is making: this looks like the 1970s, not the 2010s cloud revolution.

The difference matters. SaaS companies succeeded because they left existing business processes intact. Replace the CRM system, keep the sales workflow. Swap the accounting software, maintain the same reporting structure. The technology changed. The organisation didn't need to. That's why adoption was fast and ROI was measurable. You could trial a tool, measure the impact, scale it if it worked.

AI at enterprise scale doesn't work that way. It requires restructuring how work gets done. Not automating existing tasks - redesigning entire workflows around what the technology makes possible. That's not a software deployment. That's organisational transformation. And transformation looks like the mainframe era, when companies built IT departments from scratch and rewrote their operations around what computers could do.

The Deployment Company Model

OpenAI's shift to becoming a "deployment company" signals this reality. They're not selling API access and walking away. They're embedding teams inside enterprises to rebuild workflows. That's not a product strategy. That's a consulting model with technology attached. It's expensive. It's slow. It doesn't scale like SaaS. But it's the only approach that works when the technology demands organisational change, not just process automation.

Thompson points to the evidence: enterprises that successfully deploy AI aren't the ones with the best models. They're the ones willing to restructure operations top-down. That requires executive buy-in, change management, retraining, and often redundancies. It's why most AI pilots fail - not because the technology doesn't work, but because the organisation can't absorb the change it requires.

The mainframe parallel holds. In the 1970s, successful computer adoption meant creating new roles, new departments, new reporting structures. IT became core infrastructure, not a cost centre. The companies that thrived weren't the ones that bought the best hardware - they were the ones that reorganised around what computers made possible. Fifty years later, AI is forcing the same question: are you buying tools, or rebuilding the company?

Apple, Intel, and the Capacity Crunch

Thompson's second observation cuts through the Apple-Intel chip deal speculation. This isn't about Apple diversifying suppliers. It's about TSMC running out of capacity. The world's most advanced chip manufacturer can't keep up with AI compute demand. So Apple, which has relied almost entirely on TSMC for years, is bringing Intel back into the supply chain. Not because Intel's process is better - it isn't - but because Apple needs volume and TSMC can't provide it.

The implications ripple outward. If TSMC is capacity-constrained for Apple, it's capacity-constrained for everyone. AI hardware demand isn't slowing. Training runs are getting bigger. Inference is moving on-device, multiplying chip requirements. The industry assumed manufacturing would scale to meet demand. It's not scaling fast enough. Supply chain constraints are going to shape which AI applications succeed - not because of algorithm performance, but because of silicon availability.

The Death of Finetuning

Thompson's third point is the quietest but most consequential for developers: finetuning is dead. Not technically - you can still finetune models. But strategically, it's irrelevant. Frontier models improve so fast that any finetuned version is obsolete before it's deployed. The effort doesn't pay off. Prompt engineering and retrieval-augmented generation deliver better results with less work and no lock-in.
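The retrieval-augmented alternative is worth making concrete. A minimal sketch of the pipeline, with toy keyword-overlap scoring standing in for the embedding similarity a production system would use (the corpus, queries, and stopword list here are all invented for illustration):

```python
import re

# Words too common to signal relevance; real systems use embeddings
# instead of stopword-filtered keyword overlap.
STOPWORDS = {"what", "is", "the", "for", "a", "an", "of", "to"}

def tokens(text: str) -> set[str]:
    """Lowercase alphanumeric tokens, minus stopwords."""
    return set(re.findall(r"[a-z0-9]+", text.lower())) - STOPWORDS

def score(query: str, doc: str) -> float:
    """Fraction of query terms that appear in the document."""
    q = tokens(query)
    return len(q & tokens(doc)) / max(len(q), 1)

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k highest-scoring documents for the query."""
    return sorted(corpus, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Prepend retrieved context to the prompt - no model weights change."""
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "Refund policy: customers may return items within 30 days.",
    "Shipping policy: orders dispatch within 2 business days.",
    "Warranty policy: hardware is covered for 12 months.",
]
prompt = build_prompt("What is the refund policy for returns?", corpus)
```

The point of the shape: when the frontier model improves, only the `retrieve` step's corpus needs maintaining - there is no tuned checkpoint to retrain or throw away.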

This kills an entire category of AI startups. The ones offering finetuning-as-a-service, custom model training, domain-specific variants of base models. If the approach itself is obsolete, the business model collapses. More importantly, it shifts where value accumulates. Not in model customisation - in data pipelines, retrieval systems, and workflow integration. The hard part isn't tuning the model. It's connecting it to the right information at the right time.

For enterprises, this simplifies deployment in one sense and complicates it in another. Simpler: you don't need ML expertise to customise models. More complex: you need data infrastructure that most companies don't have. Clean, accessible, well-organised information architecture. Real-time retrieval systems. Governance frameworks that determine what the model can and can't access. That's not a technology problem. That's an organisational design problem. And it brings us back to the mainframe parallel - success requires structural change, not software adoption.
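The governance piece can also be sketched: before any document reaches the model's context window, filter it against what the caller is cleared to see. The roles, classifications, and documents below are hypothetical; real frameworks add attribute-based access control, audit logging, and redaction on top of this basic gate:

```python
from dataclasses import dataclass

@dataclass
class Document:
    text: str
    classification: str  # e.g. "public", "internal", "restricted"

# Hypothetical role-to-clearance map, invented for illustration.
CLEARANCE = {
    "contractor": {"public"},
    "employee": {"public", "internal"},
    "finance": {"public", "internal", "restricted"},
}

def allowed_context(role: str, docs: list[Document]) -> list[str]:
    """Return only the text the caller's role may expose to the model."""
    allowed = CLEARANCE.get(role, set())
    return [d.text for d in docs if d.classification in allowed]

docs = [
    Document("Q3 revenue guidance draft", "restricted"),
    Document("Office opening hours", "public"),
    Document("Internal deployment runbook", "internal"),
]
```

Note where the enforcement sits: in the retrieval layer, not the model. That is the organisational-design point - the hard decisions are about who may see what, and they exist whether or not an AI is involved.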


Today's Sources

DEV.to AI
When JPMorgan Calls AI "Core Infrastructure," the Rest of the Enterprise World Should Listen
Towards Data Science
From Vibe Coding to Spec-Driven Development
DEV.to AI
Understanding Synthetic Users and Synthetic Data: The Future of AI-Powered Market Research
Robohub
Developing active and flexible microrobots
Hackaday Robotics
The Dark Side of Unitree Robot Dogs
The Robot Report
Comau and OMRON Robotics partner to offer robotics for more industries
ROS Discourse
Parallel robots in ROS
ROS Discourse
QERRA-v2 Classical - Call for Early Testers
Ben Thompson Stratechery
The Deployment Company, Back to the 70s, Apple and Intel
Latent Space
[AINews] The End of Finetuning

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.
