There's a pattern emerging in how we build systems that need to scale and adapt over time: instead of locking everything into a monolithic architecture, we define clear protocols that let components communicate without needing to know everything about each other. InfoQ's recent piece on agentic MLOps systems highlights this shift... and it's relevant far beyond machine learning.
The article focuses on two protocols: Agent-to-Agent (A2A) and Model Context Protocol (MCP). Both are attempts to solve the same fundamental problem: how do you build systems where capabilities can be added incrementally without rewriting the orchestration layer every time something changes?
Why Protocols Matter More Than Frameworks
Most MLOps systems start simple. A pipeline that trains a model, evaluates it, and deploys it to production. Then reality intrudes. You need to add monitoring. Then retraining triggers. Then A/B testing. Then feature stores. Then model versioning. Before long, you have a tangled mess of dependencies where changing one thing breaks three others.
The traditional solution has been to use a framework... something like Kubeflow or MLflow that provides structure. But frameworks make assumptions. They bake in opinions about how things should work. That's fine when your needs align with those opinions. Less fine when they don't. And frameworks tend to couple orchestration with execution, making it hard to swap out components without touching the core system.
Protocols take a different approach. Instead of providing a complete solution, they define how things communicate. The Agent-to-Agent protocol specifies how autonomous agents discover, negotiate, and coordinate with each other. The Model Context Protocol defines how models share context... training data, feature definitions, evaluation metrics... without needing a centralised registry.
This matters because it decouples orchestration from execution. You can add a new agent to handle drift detection without modifying the training pipeline. You can swap out one model for another as long as they both speak the same context protocol. The system becomes extensible by design, not by accident.
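To make the "same context protocol" idea concrete, here's a minimal sketch in Python. Everything in it is hypothetical... the method names, the adapter, the metric values... it's not the real MCP schema, just an illustration of an orchestrator that depends on a contract rather than a concrete class:

```python
from typing import Protocol


class ModelContext(Protocol):
    """Hypothetical context contract -- illustrative, not the actual MCP spec."""

    def feature_definitions(self) -> dict: ...
    def evaluation_metrics(self) -> dict: ...


class LegacyModelAdapter:
    """Any model wrapped to expose these two methods becomes swappable."""

    def feature_definitions(self) -> dict:
        return {"age": "int", "spend_30d": "float"}  # made-up example features

    def evaluation_metrics(self) -> dict:
        return {"auc": 0.91}  # made-up example metric


def register(model: ModelContext) -> dict:
    # The orchestrator only ever touches the protocol, never the class,
    # so swapping LegacyModelAdapter for something new needs no changes here.
    return {
        "features": model.feature_definitions(),
        "metrics": model.evaluation_metrics(),
    }
```

Replacing the model means writing a new adapter that honours the same two methods... the registration code above never changes.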
What This Looks Like in Practice
Imagine an MLOps system built on these protocols. An orchestrator agent coordinates the workflow. A training agent handles model updates. A monitoring agent watches for drift. An evaluation agent runs tests. Each agent is independent... it doesn't need to know the internal workings of the others. It just needs to understand the protocol.
When you want to add a new capability, like automated retraining when drift is detected, you add a new agent that subscribes to drift events from the monitoring agent and triggers the training agent. No changes to existing code. No redeployment of the orchestration layer. Just a new agent plugged into the system.
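That plug-in step can be sketched as a simple publish/subscribe exchange. Again, the topic names and payloads here are invented for illustration... the real A2A wire format is richer than this:

```python
class Bus:
    """Toy message bus: routes by topic, agents never reference each other."""

    def __init__(self):
        self._subs = {}

    def subscribe(self, topic, handler):
        self._subs.setdefault(topic, []).append(handler)

    def publish(self, topic, payload):
        for handler in self._subs.get(topic, []):
            handler(payload)


retrain_requests = []


def retraining_agent(event):
    # The new capability: react to drift events by requesting a retrain.
    retrain_requests.append(event["model"])


bus = Bus()
# Plugging in the new agent touches no existing code -- just a subscription.
bus.subscribe("drift.detected", retraining_agent)

# The monitoring agent keeps publishing exactly as it did before.
bus.publish("drift.detected", {"model": "churn-v3", "score": 0.42})
```

The monitoring agent doesn't know or care that a retraining agent now exists... which is precisely the decoupling the protocols are after.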
That's the theory. In practice, there are trade-offs. Protocol-based systems are more flexible but also more complex to debug. When something goes wrong, you can't just step through a single codebase. You have to trace interactions across multiple agents, each potentially running in different environments. Observability becomes critical... and harder to implement well.
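One common mitigation, sketched below under the assumption that every inter-agent message can carry metadata, is to thread a correlation ID through the whole workflow so a single request can be traced across agents. The event names and agents here are hypothetical:

```python
import uuid

trace_log = []  # stand-in for a real tracing backend


def emit(agent, event, correlation_id):
    # Every agent attaches the same correlation ID to everything it emits.
    trace_log.append({"agent": agent, "event": event, "cid": correlation_id})


cid = str(uuid.uuid4())
emit("monitoring", "drift.detected", cid)
emit("orchestrator", "retrain.requested", cid)
emit("training", "model.trained", cid)

# Reconstruct one workflow's path across three independent agents:
path = [entry["agent"] for entry in trace_log if entry["cid"] == cid]
```

It's a crude version of what distributed tracing systems do properly, but it shows why observability has to be designed into the protocol rather than bolted on afterwards.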
Beyond MLOps
Here's where this gets interesting. The same principles apply to any system that needs to evolve over time. Web applications. Data pipelines. API gateways. Anywhere you have multiple components that need to coordinate without tight coupling, protocols offer a path forward.
The Model Context Protocol, for instance, isn't just about machine learning. It's about defining how systems share context in a standardised way. That could be training data for a model, but it could also be user preferences for a recommendation engine, or configuration state for a deployment system. The protocol abstracts the specifics, making the pattern reusable.
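As a rough illustration of that reuse... the envelope shape below is invented, not the MCP spec... the same context structure can carry ML metadata, user preferences, or deployment config without the consumer caring which:

```python
from dataclasses import dataclass


@dataclass
class ContextEnvelope:
    """Hypothetical generic context: the kind varies, the shape doesn't."""

    kind: str    # e.g. "evaluation-metrics", "user-prefs", "deploy-config"
    source: str  # which component produced it
    body: dict   # domain-specific content, opaque to generic handlers


def describe(ctx: ContextEnvelope) -> str:
    # A generic consumer that works for any kind of context.
    return f"{ctx.kind} from {ctx.source} ({len(ctx.body)} fields)"


ml_ctx = ContextEnvelope("evaluation-metrics", "eval-agent", {"auc": 0.9})
web_ctx = ContextEnvelope("user-prefs", "rec-engine", {"theme": "dark"})
```

`describe` never branches on whether the context is about machine learning or a web app... that's the pattern being abstracted.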
The Agent-to-Agent protocol is even more general. It's about autonomous components discovering and negotiating with each other. That applies to microservices, to serverless functions, to IoT devices. Anywhere you want components to self-organise rather than being explicitly orchestrated.
The InfoQ article is worth reading in full because it doesn't just explain the protocols... it walks through implementation considerations. How do you handle versioning when agents expect different protocol versions? How do you secure agent-to-agent communication? How do you prevent circular dependencies when agents can call each other arbitrarily?
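On the versioning question, one widely used approach... sketched here with invented names, not taken from either protocol spec... is for each agent to advertise the protocol versions it supports and for peers to settle on the highest version in common:

```python
def negotiate(ours, theirs):
    """Pick the highest protocol version both agents support, else None."""
    common = set(ours) & set(theirs)
    if not common:
        return None  # incompatible -- refuse the connection or fall back
    return max(common)


# An agent on versions 1-3 talking to a newer agent on versions 2-4
# agrees on version 3; two agents with no overlap get None.
agreed = negotiate([1, 2, 3], [2, 3, 4])
```

Simple as it is, this is the shape of the answer: version compatibility becomes a handshake inside the protocol itself, not a deployment-time coordination problem.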
These aren't trivial questions. But they're the right questions to be asking if you're building systems meant to last. Protocols don't solve everything. But they provide a foundation that lets systems evolve without constant rewrites. In a world where requirements change faster than code can be rewritten, that's worth paying attention to.