Socket & StepSecurity — Malicious node-ipc npm Packages Steal Claude AI, Kiro IDE Credentials
AI relevance: The compromised node-ipc package specifically targets AI development environments, harvesting Claude AI and Kiro IDE settings alongside cloud and infrastructure credentials — directly threatening the toolchains used to build and deploy AI agent systems.
Three versions of the widely used node-ipc npm package (7M+ weekly downloads) have been confirmed as malicious, carrying an obfuscated credential stealer and backdoor that fires on every require() call, without relying on npm lifecycle hooks.
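Unlike supply-chain attacks that depend on install scripts, require()-time execution means the payload lives in ordinary module code that runs the moment the package is loaded. A minimal, benign illustration of the mechanism (not the actual payload; collectEnvFingerprint is a hypothetical stand-in):

```javascript
// payload.js — illustrative only. Any code at the top level of a module's
// entry file executes as soon as the package is require()d; no "postinstall"
// script in package.json is needed, so install-time script auditing misses it.

function collectEnvFingerprint() {
  // Benign stand-in for the stealer's host-fingerprinting step.
  return { platform: process.platform, cwd: process.cwd() };
}

// Top-level call: runs on require(), not on install.
const fingerprint = collectEnvFingerprint();

module.exports = { fingerprint };
```

This is why blocking npm lifecycle scripts (e.g. `--ignore-scripts`) does not protect against this class of payload: the code runs whenever any dependency in the tree loads the module.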
- Confirmed malicious versions: node-ipc@9.1.6, 9.2.3, and 12.0.1, published by an account named "atiertant" with no prior history on the package.
- The payload fingerprints the host environment, enumerates local files, compresses the collected data, and exfiltrates it to the C2 domain sh.azurestaticprovider[.]net.
- It targets 90 categories of credentials, including AWS, GCP, Azure, SSH keys, Kubernetes tokens, GitHub CLI configs, Claude AI and Kiro IDE settings, Terraform state, database passwords, and shell history.
- Version 12.0.1 includes a SHA-256 fingerprint gate — it only activates on machines whose path matches a pre-computed hash, suggesting the attacker targeted a specific project or developer. The 9.x versions run unconditionally.
- Exfiltration uses a dual-channel approach: HTTPS POST and DNS TXT record queries with the resolver redirected to the C2 IP, bypassing corporate DNS logging entirely.
- The malware forks a detached background child process, so exfiltration continues even after the parent Node.js process terminates.
- The package had not been updated since August 2024 — a 21-month dormant period before the malicious versions appeared, consistent with maintainer account compromise or deliberate backdoor insertion.
- This is a separate campaign from the Mini Shai-Hulud attack that hit TanStack and Mistral packages earlier this month, though both target the same npm ecosystem with credential-theft payloads.
Why it matters
AI development toolchains concentrate high-value credentials in predictable locations — Claude API keys, Kiro IDE configs, vector DB connections, and model provider tokens. A single compromised dependency like node-ipc, which is pulled in transitively by many projects, can silently harvest the keys needed to access AI model APIs, agent orchestration frameworks, and production inference infrastructure.
What to do
- Audit package-lock.json and yarn.lock for any of the three malicious versions and pin to the last known-good release.
- Rotate all credentials that could have been exposed on machines where the package was installed, especially cloud provider keys, Claude AI API tokens, and Kiro IDE settings.
- Check DNS logs for queries to sh.azurestaticprovider[.]net and monitor for unexpected DNS TXT record exfiltration traffic.
- Consider deploying npm supply-chain integrity tools (Socket, Socket CI, or similar) to catch malicious package behavior before it reaches production.
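The lockfile audit step can be sketched as a small script, assuming an npm v7+ package-lock.json with a top-level "packages" map (findMaliciousNodeIpc is a hypothetical helper, not a published tool):

```javascript
// Scan a parsed package-lock.json for the three confirmed-malicious
// node-ipc versions, including transitive (nested) installs.
const MALICIOUS_VERSIONS = new Set(['9.1.6', '9.2.3', '12.0.1']);

function findMaliciousNodeIpc(lockfile) {
  const hits = [];
  // npm v7+ lockfiles list every installed package under "packages",
  // keyed by its node_modules path.
  for (const [path, meta] of Object.entries(lockfile.packages || {})) {
    if (path.endsWith('node_modules/node-ipc') &&
        MALICIOUS_VERSIONS.has(meta.version)) {
      hits.push({ path, version: meta.version });
    }
  }
  return hits;
}

// Usage:
// const fs = require('fs');
// const lock = JSON.parse(fs.readFileSync('package-lock.json', 'utf8'));
// console.log(findMaliciousNodeIpc(lock)); // [] means no match found
```

Checking the lockfile rather than package.json matters because node-ipc is usually a transitive dependency: a clean direct dependency list can still resolve to a malicious nested install.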