Finance Gets AI Agents; Engineers Unblock Their Review Queues
Today's Overview
Two patterns emerged this morning that tell different stories about how AI is reshaping work. The first is about acceleration: OpenAI and PwC just announced they're building AI agents to automate the CFO's office: forecasting, controls, workflow automation, all the tasks that used to require a finance ops team. The second is about friction: a tech lead published a detailed breakdown of how their team went from drowning in clean-looking pull requests to catching 90% of recurring mistakes before code review even started. One story is about AI doing the work. The other is about helping the humans who review AI-written work keep up with it.
The Vector Search Question That Refuses to Go Away
Stack Overflow hosted a conversation between a Qdrant field architect and Ryan Dahl about something developers keep getting wrong: when to use semantic search versus exact-match vector queries. The distinction matters. Semantic search, which surfaces "similar" results, works for discovery and recommendation. Exact-match queries work for logs, security events, and anything where you're looking for specific known states, not fuzzy similarity. Most teams reach for semantic search because it sounds advanced, then spend months wondering why their security alerts are firing on false positives. The practical bit: semantic search needs a data layer that understands meaning. Exact match is just pattern matching with extra steps. Know which problem you're solving.
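The distinction is easy to see in code. Below is a minimal sketch, not tied to any particular vector database: the documents, vectors, and `event` field are made up for illustration. Semantic search ranks everything by similarity and always returns *something*; exact match returns only records in a known state, which is why it's the right tool for alerting.

```python
import math

# Toy "documents": each has a vector (a stand-in for an embedding)
# plus structured metadata.
docs = [
    {"id": 1, "vec": [0.9, 0.1], "event": "login_failed"},
    {"id": 2, "vec": [0.8, 0.3], "event": "login_failed"},
    {"id": 3, "vec": [0.1, 0.9], "event": "disk_full"},
]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def semantic_search(query_vec, k=2):
    """Discovery: rank by similarity. Results are fuzzy by design."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return ranked[:k]

def exact_match(event):
    """Known-state lookup: no similarity score, no false positives."""
    return [d for d in docs if d["event"] == event]

# Semantic: "things like this vector" -- good for recommendations.
print([d["id"] for d in semantic_search([1.0, 0.0])])  # -> [1, 2]
# Exact: "exactly this security event" -- good for alerting.
print([d["id"] for d in exact_match("disk_full")])     # -> [3]
```

Note that the semantic query returns its top-k neighbours no matter how dissimilar they are; an alerting pipeline built on that behaviour will fire on near-misses, which is the false-positive pattern described above.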
The PR Review Bottleneck Has a Specific Fix
A tech lead at a mid-sized team documented something every engineering manager is living through right now: AI accelerated code writing faster than code review can keep up. Their team's feature branch throughput jumped 59% in the past year. Main branch throughput dropped 7%. The reviews are the gate. So instead of buying a generic AI code reviewer and hoping it understands their team's unwritten rules, they did something simpler: they wrote the rules down. Two files, AGENTS.md and CLAUDE.md, became the single source of truth for architectural patterns, naming conventions, and which shared utilities to reuse before building new ones. Every time a human caught something the AI missed, they added it to the rules file. After a few weeks, the AI started catching 90% of the recurring mistakes before the PR opened. The review burden dropped by a third in the first month, and it kept improving as the rules absorbed every recurring comment. The insight isn't about the tool. It's that the team's accumulated taste only works if it's written down somewhere the AI can read it.
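The source doesn't reproduce the team's actual rules files, but a hypothetical AGENTS.md excerpt gives the flavour: concrete, checkable rules phrased the way a reviewer would phrase a recurring comment. Every name below (the `lib/http` module, the `useFetch` hook) is invented for illustration.

```markdown
# AGENTS.md — rules the AI must check before opening a PR

## Reuse before building
- Check `lib/http` before writing any new fetch wrapper; we already
  have retry and timeout handling there.
- Data-fetching components use the shared `useFetch` hook, never raw
  `fetch` in a component body.

## Naming
- Boolean props start with `is`/`has` (`isOpen`, not `open`).
- Event handlers are `handleX` in components, `onX` in props.

## Architecture
- No business logic in route handlers; delegate to a service module.
- New shared utilities need a unit test in the same PR.
```

The mechanism that makes this compound is the last step in the story: every human review comment that recurs gets promoted into a rule like the ones above, so the file converges on the team's actual taste rather than a generic style guide.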
There's a broader pattern here. Finance teams are about to get a similar treatment: instead of asking a CFO's office to do forecasting and controls manually, AI agents will read the rules and patterns in your company's data, then execute them. The work doesn't disappear. It shifts from execution to definition. You get faster if you're disciplined about what the rules actually are.
Separately, from the world of fundamental science: a team at MIT used the James Webb Space Telescope to measure the atmosphere of a mini-Neptune orbiting interior to a hot Jupiter. They found water vapour, carbon dioxide, and sulphur dioxide, heavy molecules that shouldn't exist where that planet currently orbits. The leading explanation: both planets formed much farther out, beyond the star's frost line, then migrated inward together. It's the first direct evidence that mini-Neptunes can form in the cold regions of protoplanetary discs. Not immediately useful for business, but it's the kind of fundamental discovery that sometimes reshapes how you think about a whole domain.
The morning's pattern: acceleration in some areas, friction in others, and the discovery that friction often lives in the places where humans need to define the rules for machines to follow.