Socket — SANDWORM_MODE npm worm targets AI coding tools
AI relevance: the campaign injects a malicious MCP server into AI coding assistants so prompt-injection can coerce the model into exfiltrating local secrets and cloud credentials.
- Socket ties the activity to a Shai-Hulud-style npm supply-chain worm using typosquatted packages to get initial installs.
- The operation reportedly spans at least 19 malicious packages published under multiple aliases.
- Once installed, the payload harvests developer and CI secrets and uses stolen tokens to modify downstream repositories.
- The malware also deploys a weaponized GitHub Action to extract CI secrets and propagate further.
- A dedicated module writes a hidden malicious MCP server and injects it into AI coding tool configs.
- The MCP tool list carries embedded prompt-injection text instructing the model to read SSH keys, cloud creds, and env secrets.
- Researchers observed harvesting of LLM provider API keys (OpenAI, Anthropic, Google, and others).
Why it matters
- Supply-chain attacks now target the AI tool layer, not just libraries.
- MCP server injection shows how prompt-injection can become a persistence mechanism in agent tooling.
- CI token abuse means a single developer install can cascade into org-wide compromise.
What to do
- Audit npm dependencies for typosquats and lock dependencies with vetted registries.
- Monitor MCP config files for unexpected server entries across AI assistants.
- Rotate tokens and review GitHub Actions permissions if any suspect packages were installed.
- Enforce trusted publishing + 2FA for maintainers to reduce token abuse.
- Sandbox AI tools with least-privilege file and environment access.