Web Development · Wednesday, 13 May 2026

AI Wrote a Database Query That Could Never Run


A developer asked an AI coding assistant to optimise a DynamoDB query. The assistant confidently generated a query using an index that didn't exist. Not an index the developer forgot to create - an index that couldn't exist given the table's schema.

The code looked perfect. The syntax was correct. The method calls matched the AWS SDK documentation. And it would fail instantly in production because the infrastructure it assumed was physically impossible.

The Anatomy of a Confident Hallucination

The developer's post breaks down exactly what went wrong. The AI saw a table with a partition key and sort key, identified a query pattern that would be slow without an index, and generated code to use a Global Secondary Index that would solve the problem.

But DynamoDB indexes aren't magic. They're projections of data with their own partition and sort keys. You can't just declare an arbitrary index in your query - it has to exist, with a schema that supports the query pattern you're using.
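As a sketch of what "it has to exist" means in practice (table and attribute names are hypothetical, not from the original post): a Global Secondary Index is declared alongside the table with its own key schema and projection. A query can only name an index that appears in this declaration.

```python
# Hypothetical table definition in the boto3 create_table style, showing
# that a Global Secondary Index is a declared object with its own partition
# and sort keys - not a label you can invent at query time.
table_definition = {
    "TableName": "Orders",
    "KeySchema": [
        {"AttributeName": "customer_id", "KeyType": "HASH"},   # partition key
        {"AttributeName": "order_id", "KeyType": "RANGE"},     # sort key
    ],
    "AttributeDefinitions": [
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_id", "AttributeType": "S"},
        {"AttributeName": "status", "AttributeType": "S"},
        {"AttributeName": "created_at", "AttributeType": "S"},
    ],
    "GlobalSecondaryIndexes": [
        {
            # This index exists only because it is declared here, with its
            # own key schema and a projection of the base table's data.
            "IndexName": "status-created_at-index",
            "KeySchema": [
                {"AttributeName": "status", "KeyType": "HASH"},
                {"AttributeName": "created_at", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
}

# The complete set of index names a query on this table may reference:
declared_indexes = {
    gsi["IndexName"] for gsi in table_definition["GlobalSecondaryIndexes"]
}
print(declared_indexes)  # {'status-created_at-index'}
```

Any `IndexName` outside that set, however plausible it looks, is a request for infrastructure that does not exist.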

The AI-generated code referenced an index with a key structure that contradicted the base table's design. Running the query wouldn't return slow results or trigger an optimisation warning. It would throw an error immediately: the specified index does not exist.
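A minimal simulation of that failure mode (index and attribute names hypothetical): the query parameters are exactly what an SDK call would take, and the rejection happens only when the named index is checked against what the table actually defines, which is what DynamoDB does at request time with a ValidationException.

```python
# Simulated version of DynamoDB's request-time validation, assuming a table
# whose only secondary index is "status-created_at-index".

def validate_query(table_indexes: set, query_params: dict) -> None:
    """Reject a query that names an index the table does not define."""
    index = query_params.get("IndexName")
    if index is not None and index not in table_indexes:
        # Mirrors the spirit of DynamoDB's ValidationException message.
        raise ValueError("The table does not have the specified index: " + index)

table_indexes = {"status-created_at-index"}

# The AI-generated query: correct syntax, impossible index.
hallucinated_query = {
    "TableName": "Orders",
    "IndexName": "priority-index",
    "KeyConditionExpression": "priority = :p",
}

try:
    validate_query(table_indexes, hallucinated_query)
except ValueError as err:
    print(err)  # The table does not have the specified index: priority-index
```

Nothing about the query dict itself is malformed; the error is entirely about the gap between the code's assumptions and the table's real schema.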

Why This Matters More Than It Seems

The problem isn't that the AI made a mistake. The problem is that it made the mistake with total confidence, in an area where confidence is dangerous.

When AI generates text, a probabilistic guess is fine. If it predicts "the" when "a" would have been slightly better, nobody's system crashes. Language has redundancy. Context fixes errors. Readers compensate.

Infrastructure doesn't compensate. A database query either works or it doesn't. There's no partial credit for "mostly correct" syntax. Either the index exists and the query runs, or the index doesn't exist and the deployment fails.

AI coding assistants are trained on vast amounts of code, but they're fundamentally predicting probable next tokens. They're very good at recognising patterns - what a DynamoDB query usually looks like, how indexes are typically structured, where optimisation opportunities tend to appear.

What they can't do is verify that the infrastructure they're assuming actually exists in your specific environment. They'll generate the query that would be optimal if you had the right indexes. They won't warn you that you don't.

The False Confidence Problem

If the AI had flagged uncertainty - "This query assumes an index on these fields, which I can't verify exists" - the developer would have caught the issue immediately. But AI models don't reliably know when they're guessing.

The output looks polished. The code is formatted correctly. There are even helpful comments explaining the optimisation. Every signal tells the developer this is production-ready code from an expert. Except it isn't. It's a statistically plausible hallucination.

What Developers Should Actually Do

The lesson isn't "don't use AI coding assistants". The lesson is to understand where probabilistic reasoning breaks down.

AI is excellent at boilerplate. It's excellent at common patterns. It's excellent at "write me a function that does X" where X is a well-understood problem with standard solutions.

AI is terrible at anything that requires verifying external state. It can't check your database schema. It can't confirm your API keys are valid. It can't verify that the service you're calling actually exposes the endpoint it's using.

That means: use AI for the scaffolding, verify the infrastructure assumptions yourself. If the AI generates a query using an index, check that the index exists before deploying. If it calls an API endpoint, confirm the endpoint is documented. If it imports a library, verify the version is compatible.
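The index check in particular is cheap to automate. A sketch of a pre-deployment check, assuming boto3 (the response shape below matches DynamoDB's `describe_table` output; the table and index names are hypothetical):

```python
# Pre-deployment sanity check: compare the index names your code references
# against the indexes the table actually defines.

def indexes_on_table(describe_table_response: dict) -> set:
    """Collect every secondary index name from a describe_table response."""
    table = describe_table_response["Table"]
    names = set()
    for key in ("GlobalSecondaryIndexes", "LocalSecondaryIndexes"):
        for idx in table.get(key, []):
            names.add(idx["IndexName"])
    return names

# In real use this dict would come from:
#   boto3.client("dynamodb").describe_table(TableName="Orders")
response = {
    "Table": {
        "TableName": "Orders",
        "GlobalSecondaryIndexes": [{"IndexName": "status-created_at-index"}],
    }
}

assert "status-created_at-index" in indexes_on_table(response)
assert "priority-index" not in indexes_on_table(response)  # the hallucination
```

Run against the real table, the same two-line assertion catches a hallucinated index before it ever reaches production.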

The dangerous pattern is treating AI output as peer-reviewed code. It's not. It's a first draft that needs the same scrutiny you'd apply to code from a junior developer who's very fast but doesn't know your codebase.

The Broader Question

This incident is a microcosm of a bigger problem: as AI tools get better at looking right, it gets harder to notice when they're wrong. The syntax is perfect. The logic is sensible. The approach is reasonable. The only problem is that it doesn't match reality.

For infrastructure code, that gap is immediately visible - the deployment fails. For business logic, for data pipelines, for anything where correctness is harder to verify, the gap might not surface until much later. And by then, the confident-but-wrong code is deep in your system.

The fix isn't avoiding AI tools. The fix is knowing what they can and cannot verify, and building your review process accordingly. Probabilistic reasoning and deterministic systems require different validation strategies. Treating them the same is where the real mistake happens.

About the Curator

Richard Bland
Founder, Marbl Codes

27+ years in software development, curating the tech news that matters.
