A computer science professor with 23 years of teaching experience just made an argument that will frustrate a lot of people: you still need to learn programming the hard way. Even with AI tools that can generate code faster than you can type.
Professor Mark Mahoney, in an interview with Ania Kubów, argues that the real risk of AI-assisted coding isn't that it makes programmers obsolete. It's that it lets people build broken things quickly without understanding why they fail.
If you're learning to code right now, or teaching someone who is, this tension is unavoidable. AI tools are phenomenal productivity accelerators. They're also phenomenal at hiding gaps in understanding until something breaks in production.
The Broken Things Problem
Mahoney's concern is straightforward: AI can generate code that works on the happy path. It's much worse at anticipating edge cases, handling errors gracefully, or structuring systems that scale. If you don't understand what the generated code is doing, you can't debug it when it fails.
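As a hypothetical illustration of that happy-path gap (not from the interview), here's a function that looks correct for typical input but breaks on an edge nobody prompted for, next to a version that handles it:

```python
def average(scores):
    """Mean of a list of scores -- correct on the happy path only."""
    return sum(scores) / len(scores)  # ZeroDivisionError on an empty list


def average_safe(scores):
    """Same calculation, but the empty-input edge case is handled."""
    if not scores:
        raise ValueError("cannot average an empty list")
    return sum(scores) / len(scores)
```

`average([85, 90, 95])` works fine in a demo; `average([])` is the kind of input that only shows up in production. If you can't read the traceback and see why division by zero happened, the generated code is a black box.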
This isn't theoretical. Developers using AI assistants report shipping faster but spending more time debugging later. The code looks fine. It passes initial tests. Then it hits production and the edge cases nobody thought to test reveal themselves.
The gap between "works on my machine" and "works reliably at scale" has always existed. AI tools just make it easier to cross that gap without noticing you've done it.
What "Learning the Hard Way" Actually Means
Mahoney isn't arguing against using AI tools. He's arguing that you need to understand the fundamentals before the tools become useful. That means writing loops manually until you understand iteration. Debugging segmentation faults until you understand memory. Building data structures from scratch until you understand why they're shaped the way they are.
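A sketch of what "building data structures from scratch" might look like in practice (my example, not Mahoney's): a minimal singly linked list. Writing it by hand is what makes the costs concrete, such as why prepending is cheap and searching is not.

```python
class Node:
    """One cell of a singly linked list."""
    def __init__(self, value):
        self.value = value
        self.next = None


class LinkedList:
    """Minimal linked list: O(1) prepend, O(n) search."""
    def __init__(self):
        self.head = None

    def prepend(self, value):
        # New node points at the old head, then becomes the head.
        node = Node(value)
        node.next = self.head
        self.head = node

    def find(self, value):
        # Walk the chain until the value appears or the list ends.
        current = self.head
        while current is not None:
            if current.value == value:
                return True
            current = current.next
        return False
```

Once you've traced pointers by hand like this, the shape of every library container (and every AI-suggested one) stops being mysterious.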
The hard way isn't about suffering for its own sake. It's about building mental models that let you reason about what code is doing without running it. When an AI assistant suggests a solution, you need to be able to evaluate whether it's correct, efficient, and maintainable. That requires knowing what good code looks like.
For experienced developers, this is obvious. For people entering the field now, it's not. If your first experience of programming is prompting an AI and getting working code back, you never build the debugging instinct that comes from hours of figuring out why your loop is off by one.
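The off-by-one loop is worth seeing concretely. A toy example (mine, not from the interview): two versions of "sum the integers 1 through n," one of which silently stops one short.

```python
def sum_to(n):
    """Intended: sum of 1..n inclusive."""
    total = 0
    for i in range(1, n):  # Bug: range(1, n) excludes n itself
        total += i
    return total


def sum_to_fixed(n):
    """Correct: range(1, n + 1) actually includes n."""
    total = 0
    for i in range(1, n + 1):
        total += i
    return total
```

`sum_to(5)` returns 10 instead of 15. Nothing crashes; the answer is just wrong. Spotting that class of bug on sight is exactly the instinct that hours of manual debugging builds.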
The Speed Trap
Here's the trap: AI tools make you productive immediately. That feels good. It feels like progress. But if you're learning, immediate productivity can mask the fact that you're not actually learning the underlying concepts.
Mahoney's observation is that students who rely heavily on AI early in their learning produce more code but understand less of it. When they hit a problem the AI can't solve - or when the AI generates plausible-looking code that's subtly wrong - they don't have the foundation to fix it themselves.
This isn't unique to programming. It's the same trap you'd fall into learning any craft by only ever using the automatic mode. You get output without understanding process. That works until it doesn't.
What This Means for People Learning Now
If you're learning to code in 2025, you're navigating a question earlier cohorts didn't face: how much should you lean on AI tools before you've built foundational skills?
Mahoney's advice is clear: learn to write, read, and debug code manually first. Use AI tools once you can already do the thing the tool is helping with. The tool should accelerate work you understand, not replace understanding.
Practically, that means: write your own loops before using an AI to generate them. Build a basic web server from scratch before using a framework. Debug memory issues manually before relying on tooling that abstracts them away.
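To make "build a basic web server from scratch" concrete, here's a hedged sketch using only Python's standard `socket` module (my illustration of the idea, not a recommended production server): accept a TCP connection, parse the request line by hand, and write an HTTP response byte by byte. This is the work a framework abstracts away.

```python
import socket


def build_response(request: str) -> bytes:
    """Turn a raw HTTP request string into a minimal HTTP/1.1 response."""
    parts = request.split(" ")
    path = parts[1] if len(parts) > 1 else "/"
    body = f"You requested {path}\n".encode()
    headers = (
        "HTTP/1.1 200 OK\r\n"
        f"Content-Length: {len(body)}\r\n"
        "Content-Type: text/plain\r\n"
        "\r\n"
    ).encode()
    return headers + body


def serve_forever(host="127.0.0.1", port=8080):
    """One request per connection, handled sequentially. Illustrative only:
    no concurrency, no robust parsing, no error handling."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as server:
        server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        server.bind((host, port))
        server.listen(5)
        while True:
            conn, _addr = server.accept()
            with conn:
                request = conn.recv(4096).decode("utf-8", errors="replace")
                conn.sendall(build_response(request))
```

After writing this once, a framework's routing, middleware, and keep-alive handling read as solutions to problems you've personally hit, not magic.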
It's slower. It's frustrating. It works.
The Long Game
The argument for learning the hard way isn't about preserving tradition or gatekeeping. It's about building the kind of understanding that compounds over time.
Developers who understand what's happening under the hood can evaluate new tools faster, debug unfamiliar systems more effectively, and make architectural decisions that don't collapse under scale. That foundation is what separates people who can use AI tools productively from people who are dependent on them.
AI assistants aren't going away. They're getting better. But the developers who'll thrive with them are the ones who could build the same things without them - just slower. The tool accelerates capability. It doesn't replace it.
For anyone teaching or learning programming now, the question isn't whether to use AI. It's when. Mahoney's answer: after you've done the hard work of understanding what you're asking the AI to do.
That's not a popular message in an era of shortcuts and hacks. But it's probably the right one.