A developer sits down to build a feature. Five years ago, they'd spend three days writing code and half a day reviewing it. Today, they spend half a day writing a detailed specification and two days reviewing AI-generated code. The work didn't disappear. It moved.
Louis Knight-Webb runs Vibe Kanban and has watched this shift happen in real time. His observation: software engineering is becoming plan-and-review. The execution layer - the actual writing of code - is increasingly handled by AI tools. The human work is now concentrated at the edges: defining what to build and checking whether it was built correctly.
This isn't a small change. It's a fundamental reorganisation of how engineering teams operate. And most project management tools haven't caught up.
Why Traditional Kanban Broke
Kanban boards were built for a world where the bottleneck was execution. You had a backlog of tasks, a "doing" column, and a "done" column. The workflow was linear: plan, execute, ship. The goal was to keep the "doing" column moving and prevent tasks from getting stuck.
But when AI handles execution, the bottleneck shifts. The "doing" column empties out quickly. The new congestion points are in planning - getting clarity on what to build - and in review - ensuring the AI-generated code actually does what it's supposed to do.
Knight-Webb describes this as a shift from execution-limited workflows to specification-limited workflows. The constraint isn't how fast you can write code. It's how clearly you can define what the code should do, and how thoroughly you can validate that it does it.
Traditional Kanban doesn't surface these bottlenecks. A task moves from "to do" to "in progress" to "done" in hours, and the board shows green. But the actual work - the cognitive load of specifying behaviour and reviewing output - is invisible. The metrics lie.
What Teams Need Instead
Knight-Webb built Vibe Kanban specifically for this new workflow. The tool emphasises two things traditional Kanban boards don't: clarity of specification and thoroughness of review.
On the planning side, tasks require explicit success criteria before they can move into execution. Not just "build a login page" but "build a login page that handles OAuth, rate-limits failed attempts, and logs security events". The specificity forces better thinking upfront, which reduces the review burden later.
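That gating rule - no execution without explicit success criteria - could be sketched as a simple state transition. This is an illustrative sketch, not Vibe Kanban's actual data model; the field and method names are assumptions:

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A task that cannot enter execution without explicit success criteria."""
    title: str
    success_criteria: list[str] = field(default_factory=list)
    status: str = "planning"

    def start_execution(self) -> None:
        # Gate the transition: vague tasks stay in planning.
        if not self.success_criteria:
            raise ValueError(f"'{self.title}' needs success criteria first")
        self.status = "executing"

login = Task("Build a login page", success_criteria=[
    "handles OAuth",
    "rate-limits failed attempts",
    "logs security events",
])
login.start_execution()  # allowed: the criteria are explicit
```

The point of the gate is that the cost of vagueness is paid upfront, in planning, rather than later in review.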
On the review side, the tool tracks time spent validating output, not just time spent generating it. If a task takes ten minutes to specify, five minutes for AI to execute, and two hours to review and fix edge cases, that's a two-hour task. The old metrics would have called it a five-minute task. The new metrics show the real cost.
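The arithmetic behind that reframing is simple but worth making explicit. A minimal sketch using the figures from the paragraph above (the phase names are illustrative, not a specific tool's schema):

```python
# Phase durations in minutes for the example task above.
phases = {"specify": 10, "execute": 5, "review": 120}

total = sum(phases.values())                        # real cost of the task
human_time = phases["specify"] + phases["review"]   # where people spend time

print(f"total: {total} min, human: {human_time} min "
      f"({human_time / total:.0%} of the task)")
# An execution-only metric would report 5 minutes; the real cost is 135,
# and nearly all of it sits in the planning and review columns.
```

Tracking the phases separately is what makes the bottleneck visible: the board stops reporting the five-minute illusion.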
The Broader Shift: Engineering Is Editing
This isn't just about project management tools. It's about how we think about the role of a software engineer. The job is shifting from writing code to editing code - from creation to curation.
In a plan-and-review workflow, the engineer's value comes from two skills: the ability to clearly articulate intent (planning) and the ability to spot when something is subtly wrong (reviewing). Both require deep technical knowledge, but neither requires writing every line of code from scratch.
This has implications for hiring. The engineers who thrive in this environment aren't necessarily the fastest coders. They're the ones who can write clear specifications, catch edge cases in generated code, and understand system behaviour well enough to know when something looks right but isn't.
It also changes how teams onboard junior engineers. The traditional learning path - write simple code, then harder code, then complex systems - doesn't work as well when AI is writing most of the code. The new learning path might look like: review simple code, then specify simple features, then architect complex systems. The progression is different.
What This Means for Teams
For engineering managers, the takeaway is operational: your tooling and your metrics are probably misaligned with where the actual work happens. If your project management system still treats execution as the bottleneck, it's not showing you where your team is actually spending time.
For individual developers, the shift is existential but not dire. The work isn't disappearing - it's concentrating in areas where human judgement still matters. Planning and review require understanding context, spotting inconsistencies, and making trade-offs. AI tools are good at generating code to a specification. They're not good at deciding what to build or whether what they built is actually correct in context.
The risk is for teams that don't adapt. If you keep treating software development as primarily an execution problem, you'll optimise for speed in the wrong places. You'll skip planning because "AI can figure it out". You'll rush review because "the code looks fine". And you'll ship faster, but with more bugs, more technical debt, and more rework.
Watch Louis Knight-Webb discuss this shift in detail on AI Engineer.