Artificial Intelligence · Tuesday, 12 May 2026

Policy as Context: How One Team Hit 90% AI Code Compliance


Most companies trying to enforce tech policy on AI-generated code end up in the same place: a wiki nobody reads, a prompt template nobody follows, and a compliance rate hovering somewhere between 60 and 70 percent. The gap isn't intentional. It's structural. The model doesn't know your architecture decision records exist.

A team at a European consultancy just published an open-source solution that sidesteps the problem entirely. Instead of hoping developers remember to paste policy into their prompts, they built an MCP server that injects company rules - tech radar, security constraints, ADRs - directly into the model's context at generation time. Compliance jumped to over 90 percent. Not because the model got smarter. Because it finally knew what the rules were.

The Problem With Prompts

The usual approach is to write a detailed prompt: use React, avoid deprecated libraries, follow the internal API pattern. Then you hope developers copy-paste it correctly every time they ask the model to generate code. In practice, people skip steps. They forget clauses. They assume the model remembers context from three prompts ago. It doesn't.

The result is code that looks fine but violates internal policy in subtle ways. Wrong testing framework. Unapproved dependency. Outdated pattern that was replaced six months ago. Nothing catastrophic, but enough friction to slow down review cycles and erode trust in AI-generated output.

How MCP Changes the Game

The Model Context Protocol lets you define servers - lightweight services that supply structured data to the model at runtime. In this case, the team built a guardrail server that reads from their internal policy repository and surfaces relevant rules based on what the developer is building.

Ask the model to scaffold a new API endpoint? The server injects the company's API design standards. Request a database migration? The server pulls in the approved ORM patterns and security constraints. The model sees these rules as part of its working context, not as optional guidance buried in a document.

The implementation is remarkably straightforward. The server exposes policy documents as resources and uses prompts to guide the model toward compliant solutions. Developers don't need to remember what's in the tech radar. The model is already working with that information when it generates code.

Real Numbers

Before the guardrail server, compliance sat between 60 and 70 percent. Reviewers flagged violations in roughly one out of every three pull requests. After deployment, that rate flipped. Over 90 percent of AI-generated code now meets internal standards on the first pass.

The improvement isn't just about catching mistakes earlier. It changes how developers use the tool. When they trust that the model understands company policy, they're more willing to let it generate entire modules instead of just snippets. That's where the productivity gain actually shows up.

What This Means for Teams Using AI Coding Tools

Every company building software at scale has the same problem: how do you maintain consistency when half your code is machine-generated? Documentation doesn't scale. Linters catch syntax, not architectural decisions. Code review is expensive and slow.

This approach suggests a different path. Instead of treating policy as a reference document, encode it as structured data and pipe it into the model's context. The model becomes aware of your constraints without anyone needing to manually curate every prompt.
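The "which rules apply to which contexts" step can be as simple as a keyword index over the policy repository. This is a deliberately naive sketch; the topic names and keywords are assumptions, and a real deployment might use embeddings or file-path heuristics instead:

```python
# Illustrative rule matching: map what the developer is building to the
# policy documents that should be injected into the model's context.
# Topic names and keywords are assumptions for the sketch.
POLICY_INDEX = {
    "api-design": ["endpoint", "rest", "api", "route"],
    "orm-patterns": ["migration", "database", "schema", "orm"],
    "frontend-standards": ["react", "component", "ui"],
}


def relevant_policies(task: str) -> list[str]:
    """Return the policy topics whose keywords appear in the task description."""
    words = task.lower()
    return [
        topic
        for topic, keywords in POLICY_INDEX.items()
        if any(key in words for key in keywords)
    ]


print(relevant_policies("scaffold a new REST API endpoint"))
# ['api-design']
```

The point is structural, not algorithmic: once policy lives as data with a lookup key, the server, not the developer, decides what the model needs to see.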

The server is open source, built on the Model Context Protocol standard. That means it works with any MCP-compatible client - Claude Desktop, Continue, Cline, or your own internal tools. You point it at your policy repository, define which rules apply to which contexts, and the rest happens automatically.
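Wiring a server into a client is typically a few lines of configuration. For Claude Desktop, for example, an entry in `claude_desktop_config.json` looks roughly like this; the server name and script path are placeholders:

```json
{
  "mcpServers": {
    "policy-guardrail": {
      "command": "python",
      "args": ["policy_guardrail_server.py"]
    }
  }
}
```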

The Bigger Pattern

What's interesting here isn't just the compliance boost. It's the architectural principle. The best way to control AI behaviour isn't to constrain the model itself - it's to control what the model sees.

That has implications beyond code generation. Support agents need access to up-to-date product documentation. Legal assistants need current case law. Sales teams need accurate pricing. In every case, the problem isn't that the model can't reason - it's that it's reasoning from incomplete or outdated information.

MCP gives you a clean interface to fix that. You don't need to retrain anything. You don't need to embed a year's worth of documents into every prompt. You build a server that supplies the right context at the right time, and the model does what it's good at: generating coherent output based on the information it has.

For teams already using AI coding assistants, this is immediately actionable. The server is documented and available. You can deploy it this week. The jump from 70 to 90 percent compliance isn't theoretical - it's what happens when the model finally knows what you expect from it.

Source

Dev.to: How we built an MCP Guardrail to enforce Tech Policy in real-time

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.
