Most companies trying to enforce tech policy on AI-generated code end up in the same place: a wiki nobody reads, a prompt template nobody follows, and a compliance rate hovering somewhere between 60 and 70 percent. The gap isn't intentional. It's structural. The model doesn't know your architecture decision records exist.
A team at a European consultancy just published an open-source solution that sidesteps the problem entirely. Instead of hoping developers remember to paste policy into their prompts, they built an MCP server that injects company rules - tech radar, security constraints, ADRs - directly into the model's context at generation time. Compliance jumped to over 90 percent. Not because the model got smarter. Because it finally knew what the rules were.
The Problem With Prompts
The usual approach is to write a detailed prompt: use React, avoid deprecated libraries, follow the internal API pattern. Then you hope developers copy-paste it correctly every time they ask the model to generate code. In practice, people skip steps. They forget clauses. They assume the model remembers context from three prompts ago. It doesn't.
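To make that concrete, here is the kind of preamble teams end up maintaining - every rule below is invented for illustration, sketched as the constant a developer is expected to prepend by hand:

```python
# Hypothetical policy preamble - the thing developers are supposed to
# paste ahead of every code-generation request, and routinely don't.
POLICY_PREAMBLE = """\
Follow company policy when generating code:
- Use React for new frontend work; Angular is 'hold' on the tech radar.
- No deprecated libraries; check the radar before adding a dependency.
- New endpoints follow the internal API pattern: versioned paths,
  standard error envelope, OpenAPI spec committed alongside the code.
- Tests use the approved framework only.
"""
```

Everything rests on a human prepending this, verbatim, every single time.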
The result is code that looks fine but violates internal policy in subtle ways. Wrong testing framework. Unapproved dependency. Outdated pattern that was replaced six months ago. Nothing catastrophic, but enough friction to slow down review cycles and erode trust in AI-generated output.
How MCP Changes the Game
The Model Context Protocol lets you define servers - lightweight services that expose resources, prompts, and tools to the model at runtime. In this case, the team built a guardrail server that reads from their internal policy repository and surfaces the rules relevant to whatever the developer is building.
Ask the model to scaffold a new API endpoint? The server injects the company's API design standards. Request a database migration? The server pulls in the approved ORM patterns and security constraints. The model sees these rules as part of its working context, not as optional guidance buried in a document.
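A minimal sketch of what such a server could look like, assuming the official MCP Python SDK's FastMCP interface; the policies/ directory layout and topic names are invented for illustration:

```python
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("policy-guardrails")

# Hypothetical layout: one markdown document per policy topic,
# e.g. policies/api-design.md, policies/database.md.
POLICY_DIR = Path("policies")

def read_policy(topic: str) -> str:
    """Load a policy document from the internal policy repository."""
    doc = POLICY_DIR / f"{topic}.md"
    return doc.read_text() if doc.exists() else f"No policy on file for '{topic}'."

@mcp.resource("policy://{topic}")
def get_policy(topic: str) -> str:
    """Expose each policy document as a resource the client can inject into context."""
    return read_policy(topic)

if __name__ == "__main__":
    mcp.run()  # stdio transport by default, which is what desktop clients expect
```

A client that resolves policy://api-design before scaffolding an endpoint puts the design standards into the context window without the developer doing anything.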
The implementation is remarkably straightforward. The server exposes policy documents as resources and uses prompts to guide the model toward compliant solutions. Developers don't need to remember what's in the tech radar. The model is already working with that information when it generates code.
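The prompt side might look like the sketch below, reusing the read_policy helper from above; the prompt name and arguments are hypothetical:

```python
@mcp.prompt()
def compliant_scaffold(task: str, topic: str) -> str:
    """Front-load the relevant policy so the model generates against it, not around it."""
    return (
        f"Company policy for {topic}:\n"
        f"{read_policy(topic)}\n\n"
        f"Task: {task}\n"
        "Generate code that complies with every rule above. "
        "If a rule cannot be satisfied, say so explicitly instead of ignoring it."
    )
```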
Real Numbers
Before the guardrail server, compliance sat between 60 and 70 percent. Reviewers flagged violations in roughly one out of every three pull requests. After deployment, the violation rate collapsed: over 90 percent of AI-generated code now meets internal standards on the first pass.
The improvement isn't just about catching mistakes earlier. It changes how developers use the tool. When they trust that the model understands company policy, they're more willing to let it generate entire modules instead of just snippets. That's where the productivity gain actually shows up.
What This Means for Teams Using AI Coding Tools
Every company building software at scale has the same problem: how do you maintain consistency when half your code is machine-generated? Documentation doesn't scale. Linters catch syntax, not architectural decisions. Code review is expensive and slow.
This approach suggests a different path. Instead of treating policy as a reference document, encode it as structured data and pipe it into the model's context. The model becomes aware of your constraints without anyone needing to manually curate every prompt.
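What "policy as structured data" could look like in practice - the rule IDs, tags, and wording below are invented, but the selection step is the point: only the rules relevant to the current task enter the context window:

```python
# Hypothetical structured rules, each tagged with the contexts it governs.
RULES = [
    {"id": "FE-001", "tags": {"frontend"},
     "text": "Use React for new frontend work; Angular is 'hold' on the tech radar."},
    {"id": "API-003", "tags": {"api"},
     "text": "New endpoints follow the internal REST pattern: versioned paths, standard error envelope."},
    {"id": "SEC-007", "tags": {"api", "db"},
     "text": "Never interpolate user input into SQL; use the approved ORM."},
]

def rules_for(task_tags: set[str]) -> str:
    """Select the rules whose tags overlap the task and render them for injection."""
    hits = [r for r in RULES if r["tags"] & task_tags]
    return "\n".join(f"[{r['id']}] {r['text']}" for r in hits)

print(rules_for({"api"}))  # API-003 and SEC-007; the frontend rule stays out
```

Scoping the injection this way keeps the context small, which matters once the policy repository outgrows what fits comfortably in a single prompt.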
The server is open source, built on the Model Context Protocol standard. That means it works with any MCP-compatible client - Claude Desktop, Continue, Cline, or your own internal tools. You point it at your policy repository, define which rules apply to which contexts, and the rest happens automatically.
The Bigger Pattern
What's interesting here isn't just the compliance boost. It's the architectural principle. The best way to control AI behaviour isn't to constrain the model itself - it's to control what the model sees.
That has implications beyond code generation. Support agents need access to up-to-date product documentation. Legal assistants need current case law. Sales teams need accurate pricing. In every case, the problem isn't that the model can't reason - it's that it's reasoning from incomplete or outdated information.
MCP gives you a clean interface to fix that. You don't need to retrain anything. You don't need to embed a year's worth of documents into every prompt. You build a server that supplies the right context at the right time, and the model does what it's good at: generating coherent output based on the information it has.
For teams already using AI coding assistants, this is immediately actionable. The server is documented and available. You can deploy it this week. The jump from roughly two-thirds compliance to over 90 percent isn't theoretical - it's what happens when the model finally knows what you expect from it.