Afternoon Edition

Humanoid Robots Hit Eight-Hour Shifts; Anthropic Meters the API

Today's Overview

A week in May when the physical and the commercial collide. Figure's humanoid robots completed a full eight-hour warehouse shift this week: package sorting, autonomous battery swaps, self-diagnostics, zero human intervention. Brett Adcock streamed it live. The robots reason from camera pixels, operate at human parity (three seconds per package), coordinate as a networked fleet, and fail over to maintenance when needed. This isn't a benchmark run; it's a deployed system working shifts alongside human infrastructure. The message is blunt: long-horizon autonomy in physical space just moved from research to operations.

Meanwhile, Anthropic made a commercial choice that split the developer ecosystem. Claude subscribers now get monthly API credits equal to their subscription price, but usage through Anthropic's own tools (Claude.ai, Claude Code) stays inside the subscription, while everything else (third-party harnesses, agent SDKs, OpenClaw) draws those credits down. Developers who built businesses on the subsidized pricing saw this as a rug pull, and the criticism came fast from Theo, Matt Pocock, and Jeremy Howard. Anthropic responded with a 50% increase to Claude Code's weekly limits through July, but the message was clear: it's putting its pricing muscle behind its own products. OpenAI countered the same day with two free months of Codex for enterprise switchers. This is the subsidy war in real time: no longer a race on models, but a fight over pricing, workflows, and which harnesses get the favorable terms.

Building Without the Black Box

Away from the pricing drama, builders are making practical choices. One engineer built the same document extractor twice: once with rules and pytesseract, once with Ollama and an open LLM. The LLM version was easier to write but needed more iteration to get right. The rules version was brittle but predictable. Neither won decisively. That's the real conversation now: not which is objectively better, but which trade-offs fit your constraints. A developer building a portfolio shouldn't copy a to-do list from a template; they should build something a client would actually pay for. That means solving real business problems: exam platforms with anti-cheating, learning systems with live sessions, ecommerce with a CMS that non-technical teams can use. The portfolios that get hired aren't the ones with the prettiest code. They're the ones that prove their owners can ship products that work.

On the robotics side, the smaller story with teeth is the LiDAR matrix sensor. A hobbyist added a 64-zone sensor array to a 3D-printed tank robot. Instead of measuring distance at a single point, the sensor builds a 2D map from 2 cm out to 3.5 metres. The challenge: half the data arrived as garbage. But that's the shape of the problem now; sensors are getting better faster than the code to use them. An LLM wrote most of the driver, then the builder debugged the output. That's the new pattern: sensor → LLM scaffold → human iteration → working system.

The Efficiency Play

There's a structural shift worth tracking. Chinese labs are building models that sit only 6-8 months behind the US frontier, despite running on 2-3 years' less compute. Azeem Azhar's team visited 14 labs and estimates they're extracting 4-7x more intelligence per unit of compute than raw scaling would predict. The chips are constrained (export controls are real), so efficiency becomes a competitive advantage. DeepSeek's flagship costs 11x less than Anthropic's equivalent. Qwen's open-source variants run on a MacBook. The West shipped bigger; China shipped smarter. In a market where inference costs matter, where serving millions of customers on commodity hardware matters, that's not a temporary gap. That's a structural advantage that compounds.

The week settles into a pattern: robotics moved from prototype to production. The business of AI got messier and more territorial. And builders learned that the simplest solution (rules, LLM, or hybrid) depends entirely on what problem you're actually solving and who's paying for it.