OpenAI isn't building SaaS tools. They're building something older and harder - mainframe-era enterprise transformation. Ben Thompson's analysis draws a parallel few others are making: this looks like the 1970s, not the 2010s cloud revolution.
The difference matters. SaaS companies succeeded because they left existing business processes intact. Replace the CRM system, keep the sales workflow. Swap the accounting software, maintain the same reporting structure. The technology changed. The organisation didn't need to. That's why adoption was fast and ROI was measurable. You could trial a tool, measure the impact, scale it if it worked.
AI at enterprise scale doesn't work that way. It requires restructuring how work gets done. Not automating existing tasks - redesigning entire workflows around what the technology makes possible. That's not a software deployment. That's organisational transformation. And transformation looks like the mainframe era, when companies built IT departments from scratch and rewrote their operations around what computers could do.
The Deployment Company Model
OpenAI's shift to becoming a "deployment company" signals this reality. They're not selling API access and walking away. They're embedding teams inside enterprises to rebuild workflows. That's not a product strategy. That's a consulting model with technology attached. It's expensive. It's slow. It doesn't scale like SaaS. But it's the only approach that works when the technology demands organisational change, not just process automation.
Thompson points to the evidence: enterprises that successfully deploy AI aren't the ones with the best models. They're the ones willing to restructure operations top-down. That requires executive buy-in, change management, retraining, and often redundancies. It's why most AI pilots fail - not because the technology doesn't work, but because the organisation can't absorb the change it requires.
The mainframe parallel holds. In the 1970s, successful computer adoption meant creating new roles, new departments, new reporting structures. IT became core infrastructure, not a cost centre. The companies that thrived weren't the ones that bought the best hardware - they were the ones that reorganised around what computers made possible. Fifty years later, AI is forcing the same question: are you buying tools, or rebuilding the company?
Apple, Intel, and the Capacity Crunch
Thompson's second observation cuts through the Apple-Intel chip deal speculation. This isn't about Apple diversifying suppliers. It's about TSMC running out of capacity. The world's most advanced chip manufacturer can't keep up with AI compute demand. So Apple, which has relied almost entirely on TSMC for years, is bringing Intel back into the supply chain. Not because Intel's process is better - it isn't - but because Apple needs volume and TSMC can't provide it.
The implications ripple outward. If TSMC is capacity-constrained for Apple, it's capacity-constrained for everyone. AI hardware demand isn't slowing. Training runs are getting bigger. Inference is moving on-device, multiplying chip requirements. The industry assumed manufacturing would scale to meet demand. It's not scaling fast enough. Supply chain constraints are going to shape which AI applications succeed - not because of algorithm performance, but because of silicon availability.
The Death of Finetuning
Thompson's third point is the quietest but most consequential for developers: finetuning is dead. Not technically - you can still finetune models. But strategically, it's irrelevant. Frontier models improve so fast that any finetuned version is obsolete before it's deployed. The effort doesn't pay off. Prompt engineering and retrieval-augmented generation deliver better results with less work and no lock-in.
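To make the trade-off concrete, here's a minimal sketch of the RAG pattern using the OpenAI Python SDK. The model names, the in-memory document store, and the retrieve helper are illustrative assumptions for this sketch, not anything Thompson prescribes:

```python
# Minimal RAG sketch: retrieve relevant context, then prompt a frontier
# model directly - no finetuning step, no custom weights to maintain.
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Stand-in for a real document store.
documents = [
    "Refund policy: refunds within 30 days of purchase.",
    "Enterprise contracts renew annually unless cancelled in writing.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Embed texts with a hosted embedding model (model name is an assumption)."""
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(documents)

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query (cosine similarity)."""
    q = embed([query])[0]
    scores = doc_vectors @ q / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q)
    )
    return [documents[i] for i in np.argsort(scores)[::-1][:k]]

def answer(query: str) -> str:
    """Prompt the current frontier model with retrieved context."""
    context = "\n".join(retrieve(query))
    resp = client.chat.completions.create(
        model="gpt-4o",  # swap for whatever the current frontier model is
        messages=[
            {"role": "system", "content": f"Answer using this context:\n{context}"},
            {"role": "user", "content": query},
        ],
    )
    return resp.choices[0].message.content

print(answer("How long do customers have to request a refund?"))
```

Note where the upgrade cost lands: when a better frontier model ships, you change one string. A finetuned checkpoint would need retraining, re-evaluation, and redeployment to catch up.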
This kills an entire category of AI startups. The ones offering finetuning-as-a-service, custom model training, domain-specific variants of base models. If the approach itself is obsolete, the business model collapses. More importantly, it shifts where value accumulates. Not in model customisation - in data pipelines, retrieval systems, and workflow integration. The hard part isn't tuning the model. It's connecting it to the right information at the right time.
For enterprises, this simplifies deployment in one sense and complicates it in another. Simpler: you don't need ML expertise to customise models. More complex: you need data infrastructure that most companies don't have. Clean, accessible, well-organised information architecture. Real-time retrieval systems. Governance frameworks that determine what the model can and can't access. That's not a technology problem. That's an organisational design problem. And it brings us back to the mainframe parallel - success requires structural change, not software adoption.
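As a rough illustration of what that governance layer might look like in code - the roles, documents, and allowed_roles field here are all hypothetical, not a real framework - retrieval gets filtered by permissions before the model ever sees the data:

```python
# Hypothetical governance filter: restrict what the retrieval layer can
# hand to the model based on who is asking. Roles and ACL fields are
# illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class Document:
    text: str
    allowed_roles: set[str] = field(default_factory=set)

STORE = [
    Document("Public pricing sheet.", {"sales", "support", "finance"}),
    Document("Unreleased M&A memo.", {"executive"}),
]

def retrieve_for(role: str, query: str) -> list[str]:
    """Return only documents this role is permitted to see.
    (A real system would combine this with similarity search.)"""
    return [d.text for d in STORE if role in d.allowed_roles]

# A support agent's query never surfaces the M&A memo,
# regardless of how relevant the model might find it.
print(retrieve_for("support", "what's our pricing?"))
```

The hard part isn't the filter itself. It's deciding who holds which roles and which documents carry which labels - which is exactly the organisational design work, not the software deployment.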