DPRK PromptMink — Claude Opus Used to Insert Malicious npm Dependency

AI relevance: A state-sponsored threat actor used a frontier LLM (Claude Opus) to co-author a malicious npm dependency commit, blending AI-assisted code into supply-chain attack operations targeting crypto developers.

What Happened

Researchers at ReversingLabs have uncovered a North Korean supply-chain campaign, tracked as PromptMink, in which the threat actor Famous Chollima (aka APT37 / Shifty Corsair) used Anthropic's Claude Opus model to co-author a malicious npm dependency commit.

The package @validate-sdk/v2, listed on npm as a utility SDK for hashing, validation, and encoding, was added to an autonomous trading agent in February 2026 via a commit carrying a Claude Opus co-author trailer. Behind its legitimate-looking facade, the package exfiltrates secrets and installs a remote access trojan (RAT) for persistent access and cryptocurrency theft.
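One generic way to surface packages like this before they run is to flag install-time lifecycle scripts, which npm executes automatically and which dropper-style dependencies commonly abuse. A minimal sketch; the manifest below is illustrative only, not an actual PromptMink sample, and the report does not specify which hook the real package used:

```python
import json

# npm lifecycle hooks that run automatically during `npm install`,
# a common foothold for dropper-style dependencies.
RISKY_HOOKS = {"preinstall", "install", "postinstall"}

def risky_lifecycle_scripts(package_json_text: str) -> dict:
    """Return install-time lifecycle scripts declared in a package.json."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in RISKY_HOOKS}

# Illustrative manifest only, not a real PromptMink artifact.
sample = json.dumps({
    "name": "@validate-sdk/v2",
    "version": "1.0.3",
    "scripts": {
        "postinstall": "node ./lib/setup.js",  # would run on every install
        "test": "jest",
    },
})

print(risky_lifecycle_scripts(sample))  # {'postinstall': 'node ./lib/setup.js'}
```

A hit is not proof of malice (many legitimate packages compile native code post-install), but it narrows the set of dependencies worth manual review.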

Technical Details

  • The campaign ran for over seven months, with ReversingLabs tracking 60+ packages and 300+ versions tied to the operation.
  • A two-layer strategy separates legitimate-looking Web3 utility packages (to attract adoption) from secondary dependencies that deliver the actual malware.
  • Early payloads focused on harvesting environment files and sensitive data; later iterations added directory scanning for crypto wallets, system info collection, project folder compression and exfiltration, and SSH key installation for persistent remote access.
  • The malware evolved from JavaScript-based code to compiled binaries and Rust-based payloads, improving evasion and enabling cross-platform operation (Linux and Windows).
  • Leftover LLM prompts in the code indicate that generative AI was used to shape the malicious packages so they would appeal to AI coding assistants, extending the supply-chain risk into automated development workflows.
  • The campaign is part of the broader Famous Chollima operation, known for the Contagious Interview campaign and fraudulent IT worker schemes.
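Because the report highlights AI co-author identities in commit history, one concrete review step is to mechanically flag such commits. The `Co-authored-by:` trailer format is Git's standard convention; the name patterns and the sample log below are illustrative assumptions, not confirmed PromptMink indicators:

```python
import re

# Matches standard Git "Co-authored-by:" trailers. The name hints are
# illustrative heuristics, not confirmed indicators of compromise.
TRAILER_RE = re.compile(
    r"^Co-authored-by:\s*(?P<name>[^<]+?)\s*<(?P<email>[^>]+)>", re.MULTILINE
)
AI_NAME_HINTS = ("claude", "copilot", "gpt", "gemini")

def ai_coauthored_commits(log_text: str) -> list:
    """Scan NUL-separated commit records (e.g. from
    `git log --format='%H%n%B%x00'`) and return hashes of commits whose
    trailer names an AI assistant."""
    flagged = []
    for block in log_text.split("\x00"):
        lines = block.strip().splitlines()
        if not lines:
            continue
        sha, body = lines[0], "\n".join(lines[1:])
        for m in TRAILER_RE.finditer(body):
            if any(hint in m.group("name").lower() for hint in AI_NAME_HINTS):
                flagged.append(sha)
                break
    return flagged

# Illustrative log output, not taken from the actual campaign.
sample_log = (
    "abc123\nfeat: add validation helpers\n\n"
    "Co-authored-by: Claude Opus <noreply@anthropic.com>\n\x00"
    "def456\nfix: typo\n"
)
print(ai_coauthored_commits(sample_log))  # ['abc123']
```

Flagged commits are not inherently malicious; the point is to route them into the same human review path as any other change rather than letting the AI identity stand in for scrutiny.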

Why It Matters

This is one of the first documented cases of a state-sponsored threat actor using a frontier LLM (Claude Opus) to co-author malicious code commits that blend into legitimate development workflows. The fact that the AI assistant's identity appears in the commit history provides a veneer of legitimacy, potentially bypassing code review that trusts AI-assisted commits. This demonstrates a new attack surface: AI coding assistants themselves are becoming both weapons and camouflage in supply-chain operations.

The campaign's scale (60+ packages, 300+ versions over seven months) and technical evolution (from JS to Rust) signal a sophisticated, well-resourced operation specifically targeting AI-assisted development environments.

What to Do

  • Audit npm dependencies for @validate-sdk/v2 and remove if present.
  • Review commit history for any packages co-authored by AI model identities and verify the legitimacy of those changes.
  • Rotate any exposed secrets, API keys, or crypto wallet credentials in affected development environments.
  • Implement stricter code review policies for AI-assisted commits, applying the same scrutiny as for human-authored changes.
  • Monitor for known indicators of compromise (IoCs) published by ReversingLabs (see sources).

Sources