Builders & Makers Sunday, 5 April 2026

Linux 7.0 Just Halved PostgreSQL Performance


An AWS engineer discovered something nasty this week: upgrading to Linux kernel 7.0 cuts PostgreSQL throughput in half. Not a small regression. Not a specific workload. Across the board, database performance dropped 50% after a routine kernel update.

The problem, according to the report, isn't straightforward to fix. Which means anyone running PostgreSQL on recent kernel versions needs to know about this now.

What Actually Broke

The details matter here. This isn't a PostgreSQL bug. The database didn't change. The Linux kernel changed how it handles something fundamental - likely memory management or I/O scheduling based on the symptoms - and PostgreSQL's performance collapsed as a result.

AWS infrastructure runs millions of PostgreSQL instances. Their engineers noticed the regression because they benchmark everything obsessively. But most teams don't have AWS-level observability. They'll just see their database suddenly struggling after a kernel update and have no idea why.

The specific mechanism isn't public yet, but the pattern is familiar: kernel optimisation that helps some workloads while devastating others. The kernel developers probably had good reasons for the change. Database workloads probably weren't in their test suite.

Why This Is Hard to Fix

Kernel regressions usually get fixed quickly once identified. This one won't be, according to the AWS engineers investigating it. That's the concerning bit.

When a fix "isn't straightforward", it usually means one of two things. The kernel change that caused the regression may be intentional - it fixes a different problem or improves other workloads - so rolling it back would break something else. Or the interaction between kernel behaviour and PostgreSQL is complex enough that fixing it properly requires rethinking assumptions on both sides.

Either way, if you're running PostgreSQL in production, you can't just wait for a patch. You need to either stay on kernel 6.x, accept the performance hit, or start testing workarounds.

The Practical Impact

Halving database throughput isn't something you can ignore. If your application was handling 10,000 transactions per second, it now handles 5,000. Your database is suddenly the bottleneck where it wasn't before. Query response times double. Connection pools start filling up. User-facing latency increases.
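
To see why pools fill up, apply Little's law: the number of in-flight connections equals request rate times mean latency. A small sketch with hypothetical numbers (a steady 2,000 requests per second, latency doubling from 20 ms to 40 ms - substitute your own measurements):

```shell
# Little's law: busy connections = arrival rate x mean latency.
# The numbers below are hypothetical illustrations, not measurements.
awk 'BEGIN {
  rate   = 2000           # requests per second (demand is unchanged)
  before = rate * 0.020   # connections busy at 20 ms mean latency
  after  = rate * 0.040   # connections busy at 40 ms mean latency
  printf "busy connections: %d -> %d\n", before, after
}'
```

If the pool is capped at, say, 50 connections, the same demand that fit comfortably before now exceeds the cap, and requests start queueing behind it.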

For teams already running near capacity, this is catastrophic. You either need to provision twice the database resources, which doubles your costs, or you need to stick with an older kernel that's increasingly falling behind on security patches.

Neither option is good. Doubling database costs to maintain the same performance is a hard sell. Running outdated kernels means missing security fixes, which is a hard sell to your security team.

What You Can Do Now

First, don't blindly upgrade to Linux 7.0 if you're running PostgreSQL in production. Test thoroughly in staging first. Measure actual throughput, not just whether queries complete successfully.
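
One way to measure throughput rather than mere query success is pgbench, the benchmark tool that ships with PostgreSQL. A minimal sketch, assuming a staging database named `app_staging` (a placeholder name) and identical runs on a 6.x host and a 7.0 host; the TPS figures below are hypothetical stand-ins for your own results:

```shell
# Run the same benchmark once per kernel version, identical hardware and data:
#   pgbench -i -s 50 app_staging            # initialise a scale-50 dataset
#   pgbench -c 16 -j 4 -T 300 app_staging   # 16 clients, 4 threads, 5 minutes
# Then compare the "tps" lines from both runs. Hypothetical values:
tps_6x=10000    # measured on kernel 6.x
tps_70=5000     # measured on kernel 7.0
awk -v b="$tps_6x" -v a="$tps_70" \
  'BEGIN { printf "throughput lost: %.0f%% of baseline\n", 100*(b-a)/b }'
```

Run it long enough (minutes, not seconds) that checkpoints and caching behaviour show up in the numbers.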

If you've already upgraded and see performance issues, this is likely why. Rolling back to kernel 6.x should restore performance, but check your specific version's compatibility and security status.
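
A quick way to confirm which kernel series a host is actually running (the 7.x threshold reflects this report; adjust it if a fixed point release ships):

```shell
# Print the running kernel and flag the affected 7.x series
kernel="$(uname -r)"
major="${kernel%%.*}"
if [ "$major" -ge 7 ]; then
  echo "kernel $kernel: 7.x series - benchmark PostgreSQL before trusting it"
else
  echo "kernel $kernel: pre-7.0, not in the affected series"
fi
```
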

For teams locked into newer kernels by policy or infrastructure requirements, start investigating PostgreSQL configuration changes that might mitigate the issue. Different memory settings, I/O schedulers, or connection pool configurations might reduce the impact. It's not a fix, but it's better than nothing.
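
As a starting point for that investigation, here is a hypothetical set of candidates to A/B test. None of these is a confirmed fix; `checkpoint_completion_target`, `backend_flush_after`, and `max_parallel_workers_per_gather` are standard PostgreSQL settings chosen only because they change how hard the database leans on the kernel's I/O and scheduling paths:

```shell
# Write hypothetical mitigation candidates to a drop-in config file.
# Benchmark each change in isolation; none is a confirmed fix.
cat <<'EOF' > /tmp/pg_mitigations.conf
# Spread checkpoint writes out to reduce kernel writeback bursts
checkpoint_completion_target = 0.9
# Ask backends to flush written data sooner, in smaller batches
backend_flush_after = 256kB
# Cap per-query parallelism while the regression is investigated
max_parallel_workers_per_gather = 2
EOF
# Worth inspecting the active I/O scheduler too (bracketed entry is current):
#   cat /sys/block/nvme0n1/queue/scheduler
grep -c '=' /tmp/pg_mitigations.conf
```

Append the file's contents to postgresql.conf (or pull it in with an `include` directive), reload, and rerun the same benchmark so each change is measured against the same baseline.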

AWS will likely provide their own guidance soon, possibly with kernel patches or PostgreSQL RDS configuration changes. But if you're self-hosting, you're on your own for now.

The Bigger Pattern

This isn't the first time a kernel update has broken database performance, and it won't be the last. The Linux kernel is optimised for diverse workloads - web servers, container orchestration, desktop environments, embedded systems. Databases are just one use case among thousands.

The kernel developers can't test every possible application. They rely on maintainers of major projects - like PostgreSQL - to catch regressions during development. But PostgreSQL developers can't test every possible kernel change either.

The gap between kernel development velocity and application testing capacity creates exactly this kind of problem. A change that makes sense in isolation, tested against standard benchmarks, ships without anyone noticing it devastates a critical database workload.

For infrastructure teams, this reinforces an old lesson: test everything, assume nothing, and maintain rollback capability. The stack beneath your application is constantly changing. Most changes are fine. Some aren't. The only way to know is to test your actual workload, not trust that someone else already did.


Today's Sources

Hacker News Best
AWS engineer reports PostgreSQL perf halved by Linux 7.0, fix may not be easy
DEV.to AI
Automating Your Playtest Triage with AI
Towards Data Science
Building a Python Workflow That Catches Bugs Before Production
The Robot Report
Learn to build warehouse robots people enjoy working with at the Robotics Summit
Azeem Azhar
🔮 Exponential View #568: The labs are rationing. Did you notice?
Sebastian Raschka
Components of A Coding Agent
Jack Clark Import AI
Import AI 451: Political superintelligence; Google's society of minds, and a robot drummer

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.


© 2026 MEM Digital Ltd t/a Marbl Codes