OpenAI launched managed agents on AWS Bedrock this week - not a model, not an API upgrade, but agents that know who they are, what they're allowed to do, and what happened yesterday. Sam Altman and AWS CEO Matt Garman sat down with Ben Thompson to explore what changes when AI stops being a stateless text completion and starts looking like infrastructure.
The technical shift is deceptively simple: agents running inside AWS with proper identity management, permissions, and persistent memory. An agent can check an S3 bucket, query a database, send a Slack message - all while respecting the same access controls that apply to human employees. It sounds boring until you realise what it unlocks.
The Permissions Problem Nobody Talked About
Every enterprise AI pilot hits the same wall: the model is impressive in demos, then someone asks "Can it access our actual data?" and the conversation dies. Not because the data doesn't exist, but because connecting an AI to production systems without accidentally handing it the keys to everything is harder than it looks. IAM policies, role-based access, audit trails - the plumbing that makes corporate IT survivable - none of it applied to AI agents until now.
Bedrock Managed Agents solve this by making the agent a first-class AWS resource. It gets an IAM role. It inherits permissions. It logs actions. The agent isn't a magic black box with root access - it's another service in the stack, subject to the same governance as everything else. For security teams, that's the difference between "absolutely not" and "let's pilot it".
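To make "it gets an IAM role" concrete, here is a minimal sketch of the kind of least-privilege policy an agent's role might carry. The bucket name, statements, and the `allowed_actions` helper are illustrative assumptions, not taken from the launch or the actual Bedrock schema; only the IAM policy document format itself is standard AWS.

```python
import json

# Hypothetical least-privilege policy for an agent's IAM role.
# Bucket name and statements are illustrative, not from the article.
AGENT_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Read-only access to a single reports bucket, nothing else.
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::acme-reports",
                "arn:aws:s3:::acme-reports/*",
            ],
        },
        {
            # Explicit deny on writes, so a misbehaving agent cannot mutate data.
            "Effect": "Deny",
            "Action": ["s3:PutObject", "s3:DeleteObject"],
            "Resource": "arn:aws:s3:::acme-reports/*",
        },
    ],
}


def allowed_actions(policy: dict) -> set[str]:
    """Collect the actions the policy explicitly allows."""
    return {
        action
        for stmt in policy["Statement"]
        if stmt["Effect"] == "Allow"
        for action in stmt["Action"]
    }


print(json.dumps(AGENT_POLICY, indent=2))
```

The point of the sketch: the agent's blast radius is a reviewable JSON document, which is exactly the artifact security teams already know how to audit.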
Memory Without the Chaos
The other piece is memory. Most AI interactions are amnesiac - you ask a question, get an answer, then start fresh next time. Managed agents remember context across sessions. A procurement agent that processed your last three purchase orders doesn't need to be re-briefed on vendor preferences or approval workflows. It already knows.
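The cross-session behavior described above can be sketched as a keyed memory store. This is a toy model, not the Bedrock memory API - the class and method names (`AgentMemory`, `remember`, `recall`) are invented for illustration.

```python
from dataclasses import dataclass, field


# Toy sketch of persistent agent memory: facts learned in one session
# are available in the next. Names here are illustrative assumptions.
@dataclass
class AgentMemory:
    store: dict[str, dict[str, str]] = field(default_factory=dict)

    def remember(self, agent_id: str, key: str, value: str) -> None:
        """Persist a fact under this agent's namespace."""
        self.store.setdefault(agent_id, {})[key] = value

    def recall(self, agent_id: str, key: str, default=None):
        """Retrieve a previously learned fact, or a default if unknown."""
        return self.store.get(agent_id, {}).get(key, default)


memory = AgentMemory()

# Session 1: the procurement agent learns a vendor preference.
memory.remember("procurement-agent", "preferred_vendor", "Acme Supplies")

# Session 2, a fresh conversation later: no re-briefing needed.
print(memory.recall("procurement-agent", "preferred_vendor"))  # Acme Supplies
```

In a managed service the store would be durable and scoped by the agent's identity, so the same IAM boundaries that gate its actions also gate what it remembers.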
The interview gets interesting when Altman and Garman discuss what this means for work itself. If agents can carry context forward and act on it autonomously, the boundary between "tool you use" and "colleague you delegate to" starts to blur. Garman talks about agents handling routine approvals, Altman mentions agents coordinating across departments. The shape of this is still forming, but the direction is clear: less time telling AI what to do, more time reviewing what it already did.
The Integration Tax Just Dropped
What makes this launch significant isn't just the capabilities - it's the distribution. AWS has enterprise relationships OpenAI couldn't reach alone. Bedrock already sits inside thousands of corporate networks. Managed agents don't require new vendors, new contracts, or new security reviews. They're an AWS service, which means they inherit trust that would take a startup years to earn.
For developers, this changes the integration math. Building an agent that respects permissions used to mean writing custom middleware, managing session state, and hoping nothing broke when policies changed. Now it's configuration. The time from concept to deployed agent just compressed from weeks to days.
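"Now it's configuration" might look something like the following - a declarative agent definition plus a cheap preflight check. Every field name here is a hypothetical stand-in, not the actual Bedrock agent schema; the sketch only shows the shape of the shift from middleware to config.

```python
# Hypothetical agent definition. Field names are illustrative
# assumptions, not the real Bedrock Managed Agents schema.
agent_config = {
    "name": "invoice-approver",
    "role_arn": "arn:aws:iam::123456789012:role/invoice-agent-role",
    "instructions": "Review incoming invoices and flag any over $10,000.",
    "tools": ["s3:read-reports", "slack:post-message"],
    "memory": {"enabled": True, "ttl_days": 30},
}


def validate(config: dict) -> list[str]:
    """Return the required fields that are missing. When the platform
    handles identity, permissions, and state, validation like this is
    most of the integration code left to write."""
    required = ("name", "role_arn", "instructions")
    return [f for f in required if not config.get(f)]


assert validate(agent_config) == []
```

The custom middleware the paragraph mentions - permission checks, session plumbing, policy-change handling - is what disappears into the fields of that dictionary.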
The constraints are real - agents are only as good as the systems they connect to, and most enterprise software wasn't designed to be machine-readable. But the foundation is different now. Identity, permissions, memory - the boring infrastructure that makes multi-agent systems survivable in production - just became a managed service.
The question isn't whether enterprises will deploy AI agents. It's how fast they move once the permission problem is solved. Bedrock Managed Agents just removed the biggest technical excuse for waiting.