A development team cut their manual code review time by 35% in a month. The method: they wrote down their architectural rules in plain text files and let AI reviewers catch mistakes before human review started.
The approach is simple. Create an AGENTS.md file at the root of your codebase. Document your team's patterns, anti-patterns, and recurring review feedback. Add per-service documentation for anything service-specific. Point your AI code reviewer at these files. Watch it catch 90% of the problems that used to clog up pull requests.
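As a rough sketch, a repository following this layout might look like the tree below. The service names are illustrative, not from the case study:

```
repo/
├── AGENTS.md              # team-wide rules: architecture, anti-patterns, review standards
├── services/
│   ├── billing/
│   │   └── AGENTS.md      # billing-specific data models, API contracts, known gotchas
│   └── notifications/
│       └── AGENTS.md
└── ...
```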
The Pattern It Solves
Most code review bottlenecks aren't about complex logic. They're about repeating the same feedback: "Don't use this pattern here." "This API call needs error handling." "You forgot to update the docs." "This breaks our naming convention."
Human reviewers get tired of writing the same comments. Developers get tired of fixing the same mistakes. Everyone knows the rules, but they're in people's heads, not in the codebase. So they get forgotten, especially by new team members or when context-switching between projects.
The case study published on freeCodeCamp documents a tech lead who solved this by externalising the rules. Instead of relying on humans to remember and enforce patterns, they wrote them down in a format AI reviewers could parse.
What Goes in AGENTS.md
The scope of the file matters. It's not a general coding standards doc - those are too broad. AGENTS.md documents your team's specific decisions and patterns:
- Architecture rules: which services talk to which, what happens at boundaries, where state lives.
- Anti-patterns: things that broke before and shouldn't happen again.
- Error handling standards: what gets logged, what gets reported, what gets retried.
- Testing requirements: what needs unit tests vs integration tests, when mocks are appropriate.
The goal is specificity. Not "write good error handling" but "all external API calls must have timeout and retry logic with exponential backoff". Not "test your code" but "database mutations require integration tests against a real Postgres instance".
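A fragment of such a file might look like this. The rules below are illustrative examples, not the team's actual document:

```markdown
## Error handling
- All external API calls must set an explicit timeout and retry with exponential backoff.
- Never swallow exceptions silently; log with the request ID and report to the error tracker.

## Anti-patterns
- Do not call the payments service directly from the web layer; go through the billing API.

## Testing
- Database mutations require integration tests against a real Postgres instance; mocks are not sufficient.
```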
Per-service docs add another layer. Each service gets a README or AGENTS.md that covers service-specific patterns - data models, API contracts, deployment requirements, known gotchas.
How AI Reviewers Use It
When an AI reviewer processes a pull request, it reads AGENTS.md and relevant service docs first. Then it scans the code changes against those rules. If something violates a documented pattern, it flags it - with a reference to the specific rule that was broken.
The feedback quality goes up because the AI can cite chapter and verse. Instead of "this looks wrong", it says "this breaks the rule in AGENTS.md line 47 about external API timeouts".
Developers get clearer, more actionable feedback. Reviewers spend less time on mechanical checks and more time on actual design questions. The documented rules become the first line of defence.
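The case study doesn't publish the reviewer code, but the mechanism is simple enough to sketch. Something like the following, assuming the OpenAI Python SDK and an AGENTS.md at the repo root; the model name, prompt wording, and diff range are placeholders, not the team's setup:

```python
import subprocess
from pathlib import Path

from openai import OpenAI


def review_pull_request(repo_root: str, base_branch: str = "main") -> str:
    """Flag changes that violate the rules documented in AGENTS.md."""
    rules = Path(repo_root, "AGENTS.md").read_text()

    # The diff of the current branch against the base branch.
    diff = subprocess.run(
        ["git", "-C", repo_root, "diff", f"{base_branch}...HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout

    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": (
                "You are a code reviewer. Flag only changes that violate the "
                "documented rules below, and cite the specific rule you applied.\n\n"
                + rules
            )},
            {"role": "user", "content": diff},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(review_pull_request("."))
```

In practice the relevant per-service docs would be concatenated in as well and the output posted as PR comments, but the core loop is just: rules plus diff in, cited violations out.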
The Results
The team tracked metrics before and after. Manual review time per PR dropped 35% in the first month. Time to merge decreased because fewer rounds of feedback were needed. New team members onboarded faster because the rules were explicit, not tribal knowledge.
The interesting secondary effect: the act of writing down the rules surfaced inconsistencies. Patterns that seemed obvious when implied turned out to be ambiguous or contradictory when documented. The team had to make decisions they'd been avoiding. The clarity helped everyone, not just the AI.
What This Changes
Most teams treat code review as a human bottleneck to optimise. This flips it. The bottleneck isn't human time - it's undocumented knowledge. Once the knowledge is explicit, automation can enforce it.
The same pattern applies beyond code review. Any workflow with known rules and recurring feedback can benefit. API design reviews. Security checks. Documentation standards. Database schema changes. Write down the rules. Let AI catch violations. Humans focus on the parts that actually need judgement.
The team documented their approach in detail - file formats, AI reviewer setup, metrics tracking. It's not complex. The barrier isn't technical. It's the discipline to write down what you've been keeping in your head.
For teams stuck in review cycles, this is the playbook. Document your patterns. Point AI at them. Measure the result. The rules were always there. Making them explicit is what unlocks the automation.