Builders & Makers Friday, 10 April 2026

Why Learning to Code the Hard Way Still Matters


A computer science professor with 23 years of teaching experience just made an argument that will frustrate a lot of people: you still need to learn programming the hard way. Even with AI tools that can generate code faster than you can type.

Professor Mark Mahoney, in an interview with Ania Kubów, argues that the real risk of AI-assisted coding isn't that it makes programmers obsolete. It's that it lets people build broken things quickly without understanding why they fail.

If you're learning to code right now, or teaching someone who is, this tension is unavoidable. AI tools are phenomenal productivity accelerators. They're also phenomenal at hiding gaps in understanding until something breaks in production.

The Broken Things Problem

Mahoney's concern is straightforward: AI can generate code that works in the happy path. It's much worse at anticipating edge cases, handling errors gracefully, or structuring systems that scale. If you don't understand what the generated code is doing, you can't debug it when it fails.
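To make that concrete, here's a tiny illustration (my own, not from the interview): a function that looks correct on the happy path but fails the moment an edge case arrives.

```python
def average(values):
    """Happy-path implementation: fine for any non-empty list."""
    return sum(values) / len(values)

# The happy path works:
# average([2, 4, 6]) -> 4.0
#
# The edge case nobody tested: an empty list raises ZeroDivisionError.
# A robust version has to decide what "average of nothing" even means.
def safe_average(values):
    if not values:
        raise ValueError("average() requires at least one value")
    return sum(values) / len(values)
```

Generated code tends to look like the first function. Knowing to ask "what happens when the list is empty?" is exactly the understanding Mahoney is talking about.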

This isn't theoretical. Developers using AI assistants report shipping faster but spending more time debugging later. The code looks fine. It passes initial tests. Then it hits production and the edge cases nobody thought to test reveal themselves.

The gap between "works on my machine" and "works reliably at scale" has always existed. AI tools just make it easier to cross that gap without noticing you've done it.

What "Learning the Hard Way" Actually Means

Mahoney isn't arguing against using AI tools. He's arguing that you need to understand the fundamentals before the tools become useful. That means writing loops manually until you understand iteration. Debugging segmentation faults until you understand memory. Building data structures from scratch until you understand why they're shaped the way they are.

The hard way isn't about suffering for its own sake. It's about building mental models that let you reason about what code is doing without running it. When an AI assistant suggests a solution, you need to be able to evaluate whether it's correct, efficient, and maintainable. That requires knowing what good code looks like.

For experienced developers, this is obvious. For people entering the field now, it's not. If your first experience of programming is prompting an AI and getting working code back, you never build the debugging instinct that comes from hours of figuring out why your loop is off by one.
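The off-by-one is worth seeing once, because it's the archetype of a bug that runs cleanly, returns a number, and is simply wrong. A minimal sketch (hypothetical function names):

```python
def sum_first_n_buggy(values, n):
    """Intended: sum of the first n elements. Contains a classic off-by-one."""
    total = 0
    for i in range(n + 1):  # bug: visits indices 0..n, i.e. n + 1 elements
        total += values[i]
    return total

def sum_first_n(values, n):
    """Correct version: range(n) yields exactly the indices 0..n-1."""
    total = 0
    for i in range(n):
        total += values[i]
    return total

# The bug only surfaces when you check against a case you can reason about:
# sum_first_n_buggy([10, 20, 30, 40], 2) returns 60, not the expected 30.
```

Catching this requires tracing the loop in your head, which is the instinct hours of manual debugging builds and prompting alone doesn't.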

The Speed Trap

Here's the trap: AI tools make you productive immediately. That feels good. It feels like progress. But if you're learning, immediate productivity can mask the fact that you're not actually learning the underlying concepts.

Mahoney's observation is that students who rely heavily on AI early in their learning produce more code but understand less of it. When they hit a problem the AI can't solve - or when the AI generates plausible-looking code that's subtly wrong - they don't have the foundation to fix it themselves.

This isn't unique to programming. It's the same trap you'd fall into learning any craft by only ever using the automatic mode. You get output without understanding process. That works until it doesn't.

What This Means for People Learning Now

If you're learning to code in 2026, you're navigating a question earlier cohorts didn't face: how much should you lean on AI tools before you've built foundational skills?

Mahoney's advice is clear: learn to write, read, and debug code manually first. Use AI tools once you can already do the thing the tool is helping with. The tool should accelerate work you understand, not replace understanding.

Practically, that means: write your own loops before using an AI to generate them. Build a basic web server from scratch before using a framework. Debug memory issues manually before relying on tooling that abstracts them away.
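As an example of what "from scratch" can mean, here's a bare-bones HTTP server built directly on sockets (a sketch of the exercise, not a production server). Writing something like this once makes visible what a framework handles for you: assembling the status line and headers, managing the connection lifecycle.

```python
import socket

def build_response(body: str) -> bytes:
    """Assemble a minimal HTTP/1.1 response by hand."""
    payload = body.encode("utf-8")
    head = (
        "HTTP/1.1 200 OK\r\n"
        f"Content-Length: {len(payload)}\r\n"
        "Content-Type: text/plain; charset=utf-8\r\n"
        "\r\n"
    )
    return head.encode("utf-8") + payload

def serve_once(host="127.0.0.1", port=8080):
    """Accept a single connection, answer it, and exit."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind((host, port))
        srv.listen(1)
        conn, _ = srv.accept()
        with conn:
            conn.recv(1024)  # read the raw request; parsing it is the next exercise
            conn.sendall(build_response("Hello from a hand-rolled server\n"))
```

Once you've written the response bytes yourself, a framework's routing and middleware stop being magic and start being conveniences you can evaluate.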

It's slower. It's frustrating. It works.

The Long Game

The argument for learning the hard way isn't about preserving tradition or gatekeeping. It's about building the kind of understanding that compounds over time.

Developers who understand what's happening under the hood can evaluate new tools faster, debug unfamiliar systems more effectively, and make architectural decisions that don't collapse under scale. That foundation is what separates people who can use AI tools productively from people who are dependent on them.

AI assistants aren't going away. They're getting better. But the developers who'll thrive with them are the ones who could build the same things without them - just slower. The tool accelerates capability. It doesn't replace it.

For anyone teaching or learning programming now, the question isn't whether to use AI. It's when. Mahoney's answer: after you've done the hard work of understanding what you're asking the AI to do.

That's not a popular message in an era of shortcuts and hacks. But it's probably the right one.


Video Sources

Ania Kubów
How to learn programming and CS in the AI hype era - interview with prof Mark Mahoney
Theo (t3.gg)
I'm scared about the future of security
NVIDIA Robotics
The 50-State Plan: Public-Private Models for AI Infrastructure and University Transformation
NVIDIA Robotics
AI Literacy at Scale: K-to-Career Access That Delivers Real Student Outcomes
AI Revolution
The Most Dangerous AI Model Ever: Mythos
Google DeepMind
Teaching the foundations of AI in the classroom
OpenAI
ChatGPT and Cancer: How a Tech Founder Rewrote His Treatment Plan


About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.


© 2026 MEM Digital Ltd t/a Marbl Codes