Here's a supply chain attack nobody saw coming: developers trusting AI-generated code are installing malware without realising it.
The pattern is deceptively simple. An AI coding assistant hallucinates a package name - suggests something that sounds plausible but doesn't actually exist. A developer follows the recommendation. And waiting on npm or PyPI is a malicious package with that exact name, registered by an attacker who knew this would happen.
According to research published on Dev.to, one in five AI-generated code samples recommends packages that don't exist. That's not a rounding error. That's a systematic vulnerability.
The Mechanics of Slopsquatting
Traditional typosquatting relies on human error - developers mistyping package names. Slopsquatting exploits AI error - models confidently suggesting plausible-sounding packages that were never published.
The attack works because large language models don't actually know what packages exist. They predict what names sound likely based on patterns in their training data. Sometimes those predictions are spot-on. Sometimes they're entirely fictional.
Attackers monitor AI tool outputs, identify commonly hallucinated package names, and register them with malicious code. When a developer asks an AI assistant for help and follows its suggestion, they're downloading malware that the AI inadvertently advertised.
What makes this particularly insidious is trust. Developers assume AI tools are pulling from real package registries. The suggestions look legitimate - proper naming conventions, sensible functionality, confident recommendations. There's no obvious red flag.
Why This Matters Now
AI coding assistants have moved from experimental to essential for many development teams. GitHub Copilot, Cursor, Claude - these tools are writing substantial portions of production code. The assumption is that they accelerate development safely.
But if one in five suggestions includes a phantom package, and attackers are systematically exploiting this, we're looking at a new category of supply chain risk. Not from compromised existing packages, but from packages that should never have existed in the first place.
The scale compounds the problem. Thousands of developers might receive the same hallucinated recommendation. A single malicious package registration can compromise multiple codebases simultaneously.
What Developers Can Do
The immediate defence is verification. Before installing any package an AI suggests, check that it actually exists and has legitimate maintainers. Look at download counts, publication dates, and repository activity. If a package was registered yesterday and has three downloads, that's a red flag.
Tooling needs to catch up too. Package managers could flag newly registered packages that match commonly hallucinated names. AI coding assistants could verify package existence before making recommendations. Registry maintainers could monitor for suspicious registration patterns.
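One of those tooling ideas, an assistant verifying that a package exists before recommending it, could look roughly like this. The registry lookup is stubbed with a set for illustration; a real implementation would query the registry instead (on PyPI, a 404 from `https://pypi.org/pypi/<name>/json` means the name is unregistered). The package names below are made up for the example.

```python
from typing import Callable, Iterable

def filter_suggestions(
    suggestions: Iterable[str],
    exists: Callable[[str], bool],
) -> tuple[list[str], list[str]]:
    """Split AI-suggested package names into verified and phantom lists.
    `exists` abstracts the registry lookup so it can be stubbed or swapped."""
    verified, phantom = [], []
    for name in suggestions:
        (verified if exists(name) else phantom).append(name)
    return verified, phantom

# Illustrative stub standing in for a real registry query.
KNOWN_PACKAGES = {"requests", "numpy", "flask"}

verified, phantom = filter_suggestions(
    ["requests", "flask-json-schema-utils"],  # second name is fictional
    KNOWN_PACKAGES.__contains__,
)
```

Anything landing in the phantom list is exactly the gap slopsquatting exploits: a plausible name the model produced that no one has (legitimately) published.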
But the deeper issue is trust calibration. Developers need to treat AI suggestions the same way they'd treat code from a junior developer - useful starting points that require review, not authoritative answers to copy directly into production.
The pattern here is familiar. Every time we introduce a new tool that increases development velocity, we discover new attack vectors that exploit that speed. The solution isn't to abandon the tools. It's to build verification into the workflow until it becomes automatic.
Slopsquatting works because it exploits a gap between what AI appears to know and what it actually knows. Closing that gap - through better models, better tooling, or better developer habits - is now a security requirement, not a nice-to-have.