Slopsquatting — LLM-Hallucinated Package Names Create New Supply Chain Attack Vector
AI relevance: AI coding agents autonomously resolve dependencies from package registries — when an LLM hallucinates a package name and the agent runs npx or equivalent to fetch it, an attacker who pre-registered that name achieves silent code execution in the developer's environment.
- Security researcher Thomas Roccia (Eriksen) demonstrated a new attack primitive he calls "slopsquatting": register NPM packages under names that LLM-based coding agents are likely to hallucinate as dependencies.
- The attack works because coding agents autonomously search registries for components matching natural-language descriptions. When the agent invents a plausible-but-fictitious package name, the agent itself installs the attacker's code.
- Eriksen registered the react-codeshift package on NPM and immediately observed downloads, confirming that agents acting on hallucinated package names are actively installing them in the wild.
- The packages are automatically activated during agent operation when specific keywords appear in prompts, requiring no user interaction beyond the agent running.
- The original agent skill definitions containing hallucinated package names were cloned and modified by other developers, expanding the attack surface beyond the initial registration.
- The attack targets not just npx but other Node.js package installers, widening the scope of affected tooling.
- Eriksen's assessment: "The supply chain just got a new link, made of LLM dreams."
Why it matters
This is distinct from traditional typosquatting, which relies on human typing errors. Slopsquatting exploits a systematic, predictable behavior of LLM-based agents: they generate plausible package names that don't exist, then fetch and execute whatever an attacker has registered under those names. The attack surface grows with every new skill definition, every shared agent configuration, and every agent that hallucinates a dependency. Current supply-chain scanners have no detection category for packages that exist only because an LLM invented them.
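To make the failure mode concrete, here is a minimal, hypothetical sketch of the agent-side loop described above: an agent that installs whatever LLM-suggested names happen to resolve on the public registry. The function names, the installer invocation, and the example package list are illustrative assumptions, not taken from any particular agent.

```typescript
// Hypothetical agent-side dependency resolution: check only that an
// LLM-suggested name exists on the registry, then install it. This is the
// behavior slopsquatting exploits; nothing here verifies who published the
// package or why it exists.
import { execFileSync } from "node:child_process";

async function existsOnNpm(name: string): Promise<boolean> {
  // The npm registry serves package metadata at this URL and returns 404
  // for names that have never been published.
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  return res.ok;
}

async function resolveAndInstall(llmSuggestedDeps: string[]): Promise<void> {
  for (const name of llmSuggestedDeps) {
    if (await existsOnNpm(name)) {
      // If an attacker pre-registered a hallucinated name, this call runs
      // their install-time scripts in the developer's environment.
      execFileSync("npm", ["install", name], { stdio: "inherit" });
    } else {
      console.warn(`Skipping unresolved dependency: ${name}`);
    }
  }
}

// e.g. resolveAndInstall(["react", "react-codeshift"]) installs both as soon
// as the second, hallucinated name has been registered by anyone.
```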
What to do
- Audit AI coding agent configurations for any packages installed without explicit human review.
- Configure agents to require explicit approval before installing new dependencies from registries.
- Use lockfiles and integrity checks: if a package was not already in your lockfile, flag it before execution (the first sketch after this list shows such a gate).
- Monitor NPM and PyPI for newly registered packages matching names referenced in your team's agent skill definitions (see the second sketch below).
- Consider running agents in sandboxed environments with restricted network and filesystem access.
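The lockfile check above can be a small script run as a pre-install gate. This is a sketch assuming npm's lockfileVersion 2/3 package-lock.json layout, where installed packages are keyed as node_modules/<name>; the script name and the way it is invoked are illustrative.

```typescript
// Lockfile gate sketch: refuse any agent-requested package that is not
// already pinned in package-lock.json (lockfileVersion 2/3 layout assumed).
import { readFileSync } from "node:fs";

function isInLockfile(pkgName: string, lockfilePath = "package-lock.json"): boolean {
  const lock = JSON.parse(readFileSync(lockfilePath, "utf8"));
  // lockfileVersion 2/3 keys installed packages as "node_modules/<name>",
  // with nested dependencies under ".../node_modules/<name>".
  const packages: Record<string, unknown> = lock.packages ?? {};
  return Object.keys(packages).some(
    (key) => key === `node_modules/${pkgName}` || key.endsWith(`/node_modules/${pkgName}`)
  );
}

// Usage: pass the names an agent wants to install; anything not already
// pinned is blocked pending human review.
for (const name of process.argv.slice(2)) {
  if (!isInLockfile(name)) {
    console.error(`BLOCKED: ${name} is not in the lockfile; require human review.`);
    process.exitCode = 1;
  }
}
```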
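For the registry-monitoring item, a sketch along these lines can run periodically against every dependency name your skill definitions reference. It assumes the npm registry's public metadata endpoint and its time.created field; the 30-day recency threshold and the example names are illustrative, and an equivalent check against PyPI's JSON API would cover Python skills.

```typescript
// Registry monitor sketch: for each package name referenced in agent skill
// definitions, report whether it exists on npm and how recently it was
// created. Very new packages matching skill-referenced names are candidate
// slopsquats and deserve review.
const RECENT_DAYS = 30; // illustrative threshold

async function checkSkillDependency(name: string): Promise<void> {
  const res = await fetch(`https://registry.npmjs.org/${encodeURIComponent(name)}`);
  if (!res.ok) {
    console.log(`${name}: not registered (could still be squatted later)`);
    return;
  }
  const meta = (await res.json()) as { time?: { created?: string } };
  const created = new Date(meta.time?.created ?? 0);
  const ageDays = (Date.now() - created.getTime()) / 86_400_000;
  if (ageDays < RECENT_DAYS) {
    console.warn(`${name}: registered ${ageDays.toFixed(1)} days ago; review before any agent installs it.`);
  } else {
    console.log(`${name}: created ${created.toISOString()}, long-established.`);
  }
}

// e.g. run over names extracted from your skill files:
// for (const n of ["react-codeshift", "left-pad"]) await checkSkillDependency(n);
```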