A developer asked an AI coding assistant to optimise a DynamoDB query. The assistant confidently generated a query using an index that didn't exist. Not an index the developer forgot to create - an index that couldn't exist given the table's schema.
The code looked perfect. The syntax was correct. The method calls matched the AWS SDK documentation. And it would fail instantly in production because the infrastructure it assumed was structurally impossible.
The Anatomy of a Confident Hallucination
The developer's post breaks down exactly what went wrong. The AI saw a table with a partition key and sort key, identified a query pattern that would be slow without an index, and generated code to use a Global Secondary Index that would solve the problem.
But DynamoDB indexes aren't magic. They're projections of data with their own partition and sort keys. You can't just declare an arbitrary index in your query - it has to exist, with a schema that supports the query pattern you're using.
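To make that concrete, here's roughly what declaring a GSI looks like with boto3, the Python AWS SDK. The post doesn't name a language or share its schema, so the table, attribute, and index names below are hypothetical stand-ins. The point is that the index carries its own key schema, separate from the base table's:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# A GSI is declared as part of the table definition (or added later via
# update_table). It has its own partition and sort keys, independent of
# the base table's.
dynamodb.create_table(
    TableName="Orders",  # hypothetical table name
    AttributeDefinitions=[
        {"AttributeName": "customer_id", "AttributeType": "S"},
        {"AttributeName": "order_id", "AttributeType": "S"},
        {"AttributeName": "status", "AttributeType": "S"},
        {"AttributeName": "created_at", "AttributeType": "S"},
    ],
    KeySchema=[  # base table keys
        {"AttributeName": "customer_id", "KeyType": "HASH"},
        {"AttributeName": "order_id", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[
        {
            "IndexName": "status-created_at-index",
            "KeySchema": [  # the index's own keys
                {"AttributeName": "status", "KeyType": "HASH"},
                {"AttributeName": "created_at", "KeyType": "RANGE"},
            ],
            "Projection": {"ProjectionType": "ALL"},
        }
    ],
    BillingMode="PAY_PER_REQUEST",
)
```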
The AI-generated code referenced an index with a key structure that contradicted the base table's design. Running the query wouldn't return slow results or trigger an optimisation warning. It would throw an error immediately: the specified index does not exist.
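A minimal sketch of that failure mode, again with hypothetical names since the post doesn't share the actual code:

```python
import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

try:
    # The kind of query the assistant generated: syntactically valid,
    # well-formed per the SDK docs, but referencing an index that was
    # never created.
    dynamodb.query(
        TableName="Orders",               # hypothetical
        IndexName="status-region-index",  # does not exist on this table
        KeyConditionExpression="#s = :s",
        ExpressionAttributeNames={"#s": "status"},  # "status" is a reserved word
        ExpressionAttributeValues={":s": {"S": "SHIPPED"}},
    )
except ClientError as err:
    # DynamoDB rejects the request up front (a ValidationException)
    # rather than running it slowly.
    print(err.response["Error"]["Code"], "-", err.response["Error"]["Message"])
```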
Why This Matters More Than It Seems
The problem isn't that the AI made a mistake. The problem is that it made the mistake with total confidence, in an area where confidence is dangerous.
When AI generates text, a probabilistic guess is fine. If it predicts "the" when "a" would have been slightly better, nobody's system crashes. Language has redundancy. Context fixes errors. Readers compensate.
Infrastructure doesn't compensate. A database query either works or it doesn't. There's no partial credit for "mostly correct" code. Either the index exists and the query runs, or the index doesn't exist and the deployment fails.
AI coding assistants are trained on vast amounts of code, but they're fundamentally predicting probable next tokens. They're very good at recognising patterns - what a DynamoDB query usually looks like, how indexes are typically structured, where optimisation opportunities tend to appear.
What they can't do is verify that the infrastructure they're assuming actually exists in your specific environment. They'll generate the query that would be optimal if you had the right indexes. They won't warn you that you don't.
The False Confidence Problem
If the AI had flagged uncertainty - "This query assumes an index on these fields, which I can't verify exists" - the developer would have caught the issue immediately. But AI models don't reliably know when they're guessing.
The output looks polished. The code is formatted correctly. There are even helpful comments explaining the optimisation. Every signal tells the developer this is production-ready code from an expert. Except it isn't. It's a statistically plausible hallucination.
What Developers Should Actually Do
The lesson isn't "don't use AI coding assistants". The lesson is to understand where probabilistic reasoning breaks down.
AI is excellent at boilerplate. It's excellent at common patterns. It's excellent at "write me a function that does X" where X is a well-understood problem with standard solutions.
AI is terrible at anything that requires verifying external state. It can't check your database schema. It can't confirm your API keys are valid. It can't verify that the service you're calling actually exposes the endpoint it's using.
That means: use AI for the scaffolding, but verify the infrastructure assumptions yourself. If the AI generates a query using an index, check that the index exists before deploying. If it calls an API endpoint, confirm the endpoint is documented. If it imports a library, verify the version is compatible.
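For the index case, the check is cheap. Here's a sketch of gating on the live table definition, still using the hypothetical boto3 setup from above:

```python
import boto3


def index_exists(table_name: str, index_name: str) -> bool:
    """Check the live table definition for a GSI, rather than trusting
    generated code that merely references one."""
    table = boto3.client("dynamodb").describe_table(TableName=table_name)["Table"]
    gsis = table.get("GlobalSecondaryIndexes", [])  # key is absent if none exist
    return any(gsi["IndexName"] == index_name for gsi in gsis)


# Gate the deployment (or a pre-deploy test) on the assumption actually holding.
assert index_exists("Orders", "status-region-index"), (
    "Generated query references an index that does not exist"
)
```

A check like this belongs in a smoke test or deploy pipeline, where it turns a confident hallucination into a loud, early failure instead of a production incident.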
The dangerous pattern is treating AI output as peer-reviewed code. It's not. It's a first draft that needs the same scrutiny you'd apply to code from a junior developer who's very fast but doesn't know your codebase.
The Broader Question
This incident is a microcosm of a bigger problem: as AI tools get better at looking right, it gets harder to notice when they're wrong. The syntax is perfect. The logic is sensible. The approach is reasonable. The only problem is that it doesn't match reality.
For infrastructure code, that gap is immediately visible - the deployment fails. For business logic, for data pipelines, for anything where correctness is harder to verify, the gap might not surface until much later. And by then, the confident-but-wrong code is deep in your system.
The fix isn't avoiding AI tools. The fix is knowing what they can and cannot verify, and building your review process accordingly. Probabilistic reasoning and deterministic systems require different validation strategies. Treating them the same is where the real mistake happens.