Web Development Tuesday, 5 May 2026

Write Down the Rules and Let AI Enforce Them


A development team cut their manual code review time by 35% in a month. The method: they wrote down their architectural rules in plain text files and let AI reviewers catch mistakes before human review started.

The approach is simple. Create an AGENTS.md file at the root of your codebase. Document your team's patterns, anti-patterns, and recurring review feedback. Add per-service documentation for anything service-specific. Point your AI code reviewer at these files. Watch it catch 90% of the problems that used to clog up pull requests.
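Assuming a typical multi-service repository (the service names here are illustrative, not from the case study), the layout might look like this:

```text
repo-root/
├── AGENTS.md            # team-wide rules the AI reviewer reads first
├── services/
│   ├── billing/
│   │   ├── AGENTS.md    # billing-specific patterns and gotchas
│   │   └── ...
│   └── orders/
│       ├── AGENTS.md
│       └── ...
```

The root file holds rules that apply everywhere; each service's file adds or overrides only what is specific to that service.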

The Pattern It Solves

Most code review bottlenecks aren't about complex logic. They're about repeating the same feedback: "Don't use this pattern here." "This API call needs error handling." "You forgot to update the docs." "This breaks our naming convention."

Human reviewers get tired of writing the same comments. Developers get tired of fixing the same mistakes. Everyone knows the rules, but they're in people's heads, not in the codebase. So they get forgotten, especially by new team members or when context-switching between projects.

The case study published on freeCodeCamp documents a tech lead who solved this by externalising the rules. Instead of relying on humans to remember and enforce patterns, they wrote them down in a format AI reviewers could parse.

What Goes in AGENTS.md

The file structure matters. It's not a general coding standards doc - those are too broad. AGENTS.md documents your team's specific decisions and patterns:

  • Architecture rules: which services talk to which, what happens at boundaries, where state lives.
  • Anti-patterns: things that broke before and shouldn't happen again.
  • Error handling standards: what gets logged, what gets reported, what gets retried.
  • Testing requirements: what needs unit tests vs integration tests, when mocks are appropriate.

The goal is specificity. Not "write good error handling" but "all external API calls must have timeout and retry logic with exponential backoff". Not "test your code" but "database mutations require integration tests against a real Postgres instance".
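An excerpt in this spirit might look like the following (the rule wording is illustrative, not taken from the case study's actual file):

```markdown
## Error handling
- All external API calls must set a timeout and retry with exponential backoff.
- Never swallow exceptions silently: log with the request ID, then re-raise or report.

## Testing
- Database mutations require integration tests against a real Postgres instance.
- Mocks are acceptable only for third-party HTTP services.

## Anti-patterns
- Do not call the billing service directly from the web tier; go through the orders service.
```

Each rule is concrete enough that a reviewer, human or AI, can decide unambiguously whether a change violates it.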

Per-service docs add another layer. Each service gets a README or AGENTS.md that covers service-specific patterns - data models, API contracts, deployment requirements, known gotchas.

How AI Reviewers Use It

When an AI reviewer processes a pull request, it reads AGENTS.md and relevant service docs first. Then it scans the code changes against those rules. If something violates a documented pattern, it flags it - with a reference to the specific rule that was broken.

The feedback quality goes up because the AI can cite chapter and verse. Instead of "this looks wrong", it says "this breaks the rule in AGENTS.md line 47 about external API timeouts".
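The core loop can be sketched in a few lines. This is a hypothetical minimal checker, not the tooling the team used: each rule pairs a pattern that signals a violation with the documented rule it cites, and the checker scans a diff's added lines against them.

```python
import re
from dataclasses import dataclass

@dataclass
class Rule:
    pattern: re.Pattern   # code smell that indicates a violation
    source: str           # the documented rule to cite in feedback

# Illustrative rules, hand-translated from an AGENTS.md-style doc.
RULES = [
    Rule(re.compile(r"requests\.(get|post)\((?!.*timeout)"),
         "AGENTS.md: external API calls must set a timeout"),
    Rule(re.compile(r"\bprint\("),
         "AGENTS.md: use the structured logger, not print"),
]

def review(added_lines):
    """Return (line number, cited rule) for each violation in a diff's added lines."""
    findings = []
    for no, line in enumerate(added_lines, start=1):
        for rule in RULES:
            if rule.pattern.search(line):
                findings.append((no, rule.source))
    return findings

diff = [
    'resp = requests.get(url)',                # no timeout: flagged
    'resp = requests.get(url, timeout=5)',     # compliant: passes
]
for no, source in review(diff):
    print(f"line {no}: {source}")
```

A real AI reviewer interprets the rules as natural language rather than regexes, which is what lets plain-text documentation work at all; but the shape of the feedback is the same: a flagged line plus a citation of the specific rule it breaks.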

Developers get clearer, more actionable feedback. Reviewers spend less time on mechanical checks and more time on actual design questions. The documented rules become the first line of defence.

The Results

The team tracked metrics before and after. Manual review time per PR dropped 35% in the first month. Time to merge decreased because fewer rounds of feedback were needed. New team members onboarded faster because the rules were explicit, not tribal knowledge.

The interesting secondary effect: the act of writing down the rules surfaced inconsistencies. Patterns that seemed obvious when implied turned out to be ambiguous or contradictory when documented. The team had to make decisions they'd been avoiding. The clarity helped everyone, not just the AI.

What This Changes

Most teams treat code review as a human bottleneck to optimise. This flips it. The bottleneck isn't human time - it's undocumented knowledge. Once the knowledge is explicit, automation can enforce it.

The same pattern applies beyond code review. Any workflow with known rules and recurring feedback can benefit. API design reviews. Security checks. Documentation standards. Database schema changes. Write down the rules. Let AI catch violations. Humans focus on the parts that actually need judgement.

The team documented their approach in detail - file formats, AI reviewer setup, metrics tracking. It's not complex. The barrier isn't technical. It's the discipline to write down what you've been keeping in your head.

For teams stuck in review cycles, this is the playbook. Document your patterns. Point AI at them. Measure the result. The rules were always there. Making them explicit is what unlocks the automation.

About the Curator

Richard Bland, Founder, Marbl Codes
27+ years in software development, curating the tech news that matters.
