You build an AI agent. Users love it. Conversations flow. The agent remembers context, learns preferences, becomes genuinely useful. Then you want to switch models - maybe costs drop elsewhere, maybe performance improves, maybe your vendor changes terms. You can't. Not without losing everything that made your agent sticky in the first place.
The memory is locked. Not technically locked, but practically. Because the harness you chose - the framework connecting your agent to the AI model - also controls where memory lives. Switch harnesses, lose memory. Switch models, lose memory. Your users' conversation history, their preferences, the context that made your agent feel intelligent - all of it tied to a single platform.
What Agent Harnesses Actually Control
An agent harness is the infrastructure layer between your code and the AI model. It handles the boring-but-critical bits: routing requests, managing context windows, storing conversation history, orchestrating tool calls. Think of it as the operating system for your AI agent.
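Those responsibilities can be sketched in a few lines. This is a hypothetical, minimal harness — the class and function names are illustrative, not any real framework's API, and the model call is faked so the sketch runs without credentials — but it shows the key point: the conversation history accumulates *inside* the harness.

```python
def fake_model(messages):
    """Stand-in for a real model API call."""
    return f"reply to: {messages[-1]['content']}"

class Harness:
    def __init__(self, model, max_turns=20):
        self.model = model          # the model client (swappable)
        self.history = []           # conversation memory -- note who holds it
        self.max_turns = max_turns  # crude stand-in for context-window management

    def send(self, user_text):
        self.history.append({"role": "user", "content": user_text})
        # Trim oldest turns so the request fits the model's context window.
        window = self.history[-self.max_turns:]
        reply = self.model(window)
        self.history.append({"role": "assistant", "content": reply})
        return reply

h = Harness(fake_model)
h.send("hello")
print(len(h.history))  # 2: the harness now owns two messages of memory
```

In a closed harness, that `history` list lives on the provider's servers rather than in your process — which is exactly the arrangement the rest of this piece is about.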
LangChain's recent analysis highlights the problem most builders don't see coming. Closed harnesses - the ones operated by model providers themselves - make it dead simple to get started. One API call, memory handled automatically, context managed for you. The catch? That memory lives in their system. On their terms. With their pricing.
When GPT-4 was the clear winner, this didn't matter much. Everyone used OpenAI's API, memory lived there, job done. But the landscape shifted. Claude improved. Gemini got cheaper. Open models became viable. Suddenly that convenient memory layer became a cage.
The Switching Cost You Can't See
Imagine you've built a customer service agent. Six months in, it knows your product inside out. It remembers user issues, tracks ongoing conversations, routes complex queries based on history. Your NPS score climbs. Support costs drop. Everyone's happy.
Then your AI provider doubles their prices. Or a competitor launches a model that's genuinely better for your use case. Or you want to run models locally for data privacy. The technical switch is easy - change an API endpoint, update some config. But the memory? That's the product. Without it, your agent becomes dumb again. Users notice immediately.
This is the lock-in that matters. Not technical lock-in - you can always rewrite code. Memory lock-in means starting from zero while your users expect the intelligence they've come to rely on. Most businesses don't realise this until they're already stuck.
Open Harnesses Change the Equation
An open harness decouples memory from the model provider. Conversation history, user preferences, context - all of it lives in your infrastructure. Postgres, Redis, vector databases - whatever fits your architecture. The harness becomes a translation layer, not a lock.
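The decoupling is easier to see in code. Below is a hedged sketch, not a real framework: SQLite stands in for your Postgres or Redis, and `provider_a`/`provider_b` are hypothetical stand-ins for two different model clients. Because the memory store is yours, swapping the model function mid-conversation loses nothing.

```python
import sqlite3

class MemoryStore:
    """Conversation memory in your own database (SQLite as a stand-in)."""
    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS messages (user_id TEXT, role TEXT, content TEXT)"
        )

    def append(self, user_id, role, content):
        self.db.execute("INSERT INTO messages VALUES (?, ?, ?)",
                        (user_id, role, content))
        self.db.commit()

    def history(self, user_id):
        rows = self.db.execute(
            "SELECT role, content FROM messages WHERE user_id = ?", (user_id,)
        ).fetchall()
        return [{"role": r, "content": c} for r, c in rows]

def provider_a(messages):   # stand-in for, say, an OpenAI client
    return "A:" + messages[-1]["content"]

def provider_b(messages):   # stand-in for, say, an Anthropic client
    return "B:" + messages[-1]["content"]

def chat(store, model, user_id, text):
    """The harness as translation layer: read memory, call model, write memory."""
    store.append(user_id, "user", text)
    reply = model(store.history(user_id))
    store.append(user_id, "assistant", reply)
    return reply

store = MemoryStore()
chat(store, provider_a, "u1", "hello")
chat(store, provider_b, "u1", "hi again")  # switch providers...
print(len(store.history("u1")))            # ...history survives: 4 messages
```

The design choice that matters is the direction of the dependency: the model client takes memory as an argument instead of owning it, so the provider becomes the replaceable part.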
LangChain's approach (and they're not alone - LlamaIndex and others are heading in the same direction) treats memory as a first-class citizen you control. Switch from OpenAI to Anthropic? Your memory stays. Run local models for sensitive data? Memory never leaves your system. Test new models without rebuilding history? Trivial.
The trade-off is obvious: more infrastructure to manage. You're running databases, handling backups, managing context window limits yourself. For many early-stage projects, closed harnesses still make sense. Speed matters more than portability when you're validating an idea.
But the moment your agent becomes core to your product - the moment users depend on it remembering things - the calculation flips. Because memory is the moat. An agent that knows me is valuable. An agent that forgets me every time you switch providers is just a chatbot.
What This Means for Builders
If you're starting a new agent project today, the question isn't which model to use. Models will change. Performance will shift. Pricing will fluctuate. The question is: who owns the memory that makes your agent actually useful?
Closed harnesses buy you speed. You'll ship faster, onboard easier, have fewer moving parts to debug. That matters when you're figuring out if anyone wants what you're building. But know the cost. You're renting the intelligence, not building it.
Open harnesses buy you control. More setup, more infrastructure, more to maintain. But when your agent works - when users depend on it - you're not starting over because a vendor changed terms. The memory that makes your product sticky belongs to you.
Most small businesses still run on spreadsheets and email. The ones adopting AI agents are saving hours every week. That gap is going to hurt. But the businesses that build on rented memory? They're going to hurt differently - when they try to switch and realise they can't.
Choose your harness like you choose your database. Because that's what it is - a database for intelligence. And databases you don't control have a way of controlling you.