Voices & Thought Leaders Wednesday, 18 March 2026

Felix Rieseberg: Why Claude Needed Its Own Computer


Anthropic gave Claude its own virtual machine. Not as a curiosity, not as a demo. As production infrastructure called Claude Cowork. Felix Rieseberg, the engineer behind it, sat down with Latent Space to explain why sandboxing AI in its own environment isn't just about safety - it's about capability.

The Problem With Agents In Your System

When AI agents run code, they do it somewhere. Usually that somewhere is your infrastructure, your environment, with access to your filesystem and network. That works until it doesn't. A hallucinated command. An unexpected API call. An agent that decides the best way to solve your problem involves actions you didn't anticipate.

The standard solution is sandboxing - restrict what the agent can access, limit what commands it can run, monitor everything it touches. But restriction creates its own problem. An agent that can't do much isn't particularly useful. You want the capability without the risk.

Felix's insight was that the boundary itself could be the unlock. Give Claude its own isolated virtual machine - a complete computing environment where it can install tools, run code, manage files, experiment freely. The sandbox isn't a limitation. It's a workspace designed specifically for how AI agents think.

Local-First Matters More Than You Think

Claude Cowork runs locally. That detail matters more than it might seem. When an agent iterates on code, testing changes and debugging failures, every network round-trip adds friction. Local execution means the agent can work at the speed it thinks, not the speed of API calls.

But there's a deeper reason local-first architecture matters for agents. Privacy and control. If an AI agent is helping with sensitive work, you don't want that work leaving your infrastructure. The agent needs access to your context, your data, your environment. Local execution keeps that boundary tight.

Felix talks about this as a fundamental shift in how we think about AI capabilities. Not agents that request actions through APIs, but agents that perform actions in their own environment. The difference is architectural, not semantic. It changes what's possible.

The Knowledge Work Question

Where this gets interesting is knowledge work automation. Not robots replacing manual labour. Software replacing cognitive tasks. Claude Cowork represents a model where AI doesn't just generate code or answer questions - it operates a complete development environment. Installs dependencies. Runs tests. Debugs failures. Iterates independently.

That's closer to how human developers actually work than anything that came before it. We don't write code in isolated snippets. We work in environments we've configured, with tools we've chosen, iterating based on what works. Giving Claude the same capability means it can tackle problems that require that kind of iterative, environmental work.

Why Sandboxing Unlocks More Than It Restricts

Here's the counterintuitive bit. The VM boundary that keeps Claude isolated also gives it freedom. Inside its own environment, it can experiment without consequence. Install experimental packages. Test risky code. Try approaches that might fail. The worst case is contained - you spin up a fresh VM and start again.

Felix points out that this mirrors how we actually build software. Development environments exist specifically so we can break things safely. Claude Cowork brings that same safety-through-isolation to AI agents. The sandbox isn't about limiting the agent. It's about giving it room to work properly.
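The disposable-workspace lifecycle looks roughly like this. A temp directory here only illustrates the pattern; a real sandbox like Claude Cowork's VM adds an actual isolation boundary, which this sketch does not provide.

```python
import tempfile
from pathlib import Path

def run_experiment(tainting: bool) -> list[str]:
    """Run one experiment in a fresh workspace and report what it contains."""
    with tempfile.TemporaryDirectory() as d:
        ws = Path(d)
        (ws / "notes.txt").write_text("scratch work")
        if tainting:
            # A risky step corrupts state -- but only inside this workspace.
            (ws / "broken.lock").write_text("corrupted")
        return sorted(p.name for p in ws.iterdir())
    # On exit the whole workspace is deleted: nothing leaks to the host.

first = run_experiment(tainting=True)    # the risky run leaves debris behind
second = run_experiment(tainting=False)  # the next run starts completely clean
print(first, second)
```

The "worst case is contained" property falls out of the lifecycle: state never survives a workspace, so recovery is just starting a new one.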

And there's a safety dimension that matters. As AI capabilities increase, the gap between what an agent can do and what you want it to do widens. Sandboxing provides a control boundary that scales with capability. More powerful agents need stronger isolation, not weaker.

What This Means For Agent Development

The full conversation goes deeper into the technical architecture, the decisions behind Claude Cowork's design, and where Felix thinks this leads. But the core insight is clear - AI agents need their own computing environments, not borrowed access to ours.

That shift has implications beyond Claude. If agents are going to handle complex, multi-step tasks autonomously, they need workspace infrastructure designed for how they operate. Not human tools adapted for AI use. Tools built specifically for agent workflows.

Felix's work at Anthropic suggests a future where AI capabilities and AI safety aren't in tension. The same isolation that protects us from unintended agent behaviour gives agents the freedom to work effectively. The boundary enables both sides.

Listen to the full episode for Felix's complete perspective on sandboxing, local-first agents, and why giving AI its own computer changes the automation equation.


Video Sources

Theo (t3.gg)
I did not expect this...
NVIDIA Robotics
Building Hospital Automation Using NVIDIA Isaac for Healthcare
NVIDIA Robotics
How Edison's Kosmos AI Scientist Does Six Months of Research in One Day with Nemotron
NVIDIA Robotics
Olaf Takes the Stage With Jensen Huang
Matthew Berman
Every model that matters

Today's Sources

DEV.to AI
What is an agentic AI platform? How it differs from workflow automation
DEV.to AI
OpenClaw Skills: The Complete Guide to Building, Securing, and Deploying AI Agents
Towards Data Science
How to Effectively Review Claude Code Output
Towards Data Science
Self-Hosting Your First LLM
Hacker News Best
Mistral AI Releases Forge
The Robot Report
Inside the new 'Living Lab' advancing agricultural robotics
The Robot Report
Nebius and NVIDIA collaborate for physical AI cloud
ROS Discourse
Introducing the Connext Robotics Toolkit for ROS 2
Latent Space
Why Anthropic Thinks AI Should Have Its Own Computer - Felix Rieseberg on Claude Cowork
Latent Space
Claude Cowork Dispatch: Anthropic's Answer to OpenClaw
Ben Thompson Stratechery
Jensen Huang and Andy Grove, Groq LPUs and Vera CPUs, Hotel California

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.


© 2026 MEM Digital Ltd t/a Marbl Codes