When Robots Think for Themselves, Agents Get Computers, and Finetuning Dies

Today's Overview

Three distinct stories are reshaping how technology works on the ground this week. The first: robots that think without thinking. Researchers at Leiden built microrobots, chain-like structures of flexibly connected segments 3D-printed at a 5-micrometer scale, that move without sensors, software, or external control. They swim. They navigate obstacles. When two meet, they steer away from each other. Their behaviour emerges entirely from the interaction of shape and environment. No electronics. No code. Just physics.

Enterprise AI Needs Bodies

Meanwhile, OpenAI and Anthropic are both building out deployment companies: organisations staffed with engineers embedded directly in customer teams. OpenAI acquired the consultancy Tomoro and raised $4 billion. Google is hiring hundreds of "forward deployed engineers." This isn't a model release; it's top-down enterprise transformation. Ben Thompson's framing is instructive: this mirrors the mainframe wave of the 1970s, not SaaS. The work isn't helping employees use tools better. It's restructuring entire business processes: cutting costs, replacing roles, moving decision-making to the executive level. That requires human architects on-site, which means these AI labs suddenly need to become professional services firms.

And then there's the infrastructure problem. JPMorgan reclassified AI as core infrastructure, not R&D. That signals seriousness: continuous monitoring, incident-response discipline, accountability. It also signals what's missing: most enterprises don't yet have the data pipelines, platform architecture, or governance frameworks to run AI with infrastructure-level reliability. The companies in JPMorgan's position five years from now will be those investing today in data quality and modernisation, not just in model experimentation.

The Capacity Crunch Gets Real

TSMC's bottleneck is real enough that it's pushing Apple toward Intel, a relationship that seemed economically unthinkable eighteen months ago. Apple has been capacity-constrained across multiple product lines: iPhone 17 Pro demand outpaced supply, and reallocating capacity to meet it starved the Mac line. Now the Mac mini and Studio are out of stock. TSMC's reasonable refusal to match the industry's AI-driven investment appetite means its best customer can't get what it needs. Finetuning is dead for similar reasons: with the GPU crunch acute, OpenAI deprecated its finetuning APIs this week. The modal use case, getting model X performance at model Y prices through task-specific training, was already trending toward obsolescence: long context and RAG are cheaper. Very long prompts may be all you need.
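The "long prompts instead of finetuning" pattern can be sketched in a few lines. This is a toy illustration, not any lab's actual pipeline: retrieval here is naive keyword overlap, and the function names (`score`, `retrieve`, `build_prompt`) and sample corpus are invented for the example. The assembled prompt would be sent to whatever general-purpose model you already use, with no task-specific training step.

```python
def score(query: str, doc: str) -> int:
    """Toy relevance: count words shared between query and document."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k documents with the highest keyword overlap."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Stuff retrieved context into one long prompt: in-context
    grounding in place of task-specific training."""
    context = "\n\n".join(retrieve(query, docs))
    return (
        "Use the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {query}"
    )

# Hypothetical mini-corpus standing in for an enterprise document store.
corpus = [
    "The Mac mini uses Apple silicon fabricated at TSMC.",
    "Finetuning trains a model on task-specific examples.",
    "RAG retrieves relevant documents and places them in the prompt.",
]
prompt = build_prompt("What does RAG do with documents?", corpus)
```

The design point is that all task adaptation lives in the prompt: swapping the corpus or the question requires no GPU time, which is exactly why this route stays viable when finetuning capacity does not.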

The throughline across all three: enterprises are moving from experimentation to operation. That requires infrastructure thinking, not innovation thinking. Deployment people, not research papers. Reliable data, not synthetic shortcuts. And when TSMC can't keep up, even Apple has to hedge.