PromptMink — North Korean-Linked Supply Chain Attack Uses Claude Opus to Plant Malware
AI relevance: Attackers used Anthropic's Claude Opus as an unwitting co-author in a commit that introduced malware into an autonomous crypto trading agent — marking a shift where state-linked threat groups weaponize AI coding assistants for supply chain infiltration.
- ReversingLabs uncovered the "PromptMink" campaign, a coordinated supply chain attack linked to the North Korean-associated threat group Famous Chollima (previously tied to the Contagious Interview campaign).
- On February 28, 2026, a commit co-authored by Claude Opus was submitted to the openpaw-graveyard npm package, an autonomous crypto trading agent.
- The commit added @solana-launchpad/sdk as a dependency, which silently pulled in @validate-sdk/v2, a malicious package masquerading as a data validation tool.
- The malware covertly harvests sensitive credentials from the host environment and exfiltrates them to attacker-controlled servers, targeting crypto wallets and funds.
- ReversingLabs had been tracking suspicious @validate-sdk/v2 versions since October 2025, indicating a long-running operation.
- The use of a legitimate AI coding assistant (Claude Opus) as a co-author adds a layer of plausible deniability and social engineering: reviewers may trust commits bearing an AI assistant's provenance.
Why it matters
This is the first documented case of a nation-state-linked actor using an AI coding assistant as part of a supply chain attack's cover. The attacker didn't compromise the AI model — they used it as a credibility prop in a commit history, banking on the growing trust in AI-assisted development. As autonomous agents gain access to deployment pipelines, the risk of AI-generated commits introducing malicious code moves from theoretical to operational.
What to do
- Audit npm dependencies for @validate-sdk/v2 and @solana-launchpad/sdk; remove if present.
- Treat AI co-authored commits with the same scrutiny as human-authored ones; AI provenance is not a trust signal.
- Implement dependency pinning and lockfile verification in CI/CD pipelines to prevent unexpected transitive dependency additions.
- Monitor npm packages for newly published versions from previously inactive or suspicious maintainers.
- Review supply chain security guidance from CISA on agentic AI deployments, especially regarding automated code review and commit trust boundaries.
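The dependency audit above can be sketched as a lockfile grep, which catches transitive dependencies (like a package pulled in silently by another) that a top-level package.json review would miss. This is a minimal illustration, assuming a standard npm package-lock.json layout; the package names come from this report, but the lockfile content below is synthetic, created only to demonstrate the check.

```shell
# Synthetic lockfile for demonstration (contents are made up;
# only the package names are taken from the report).
cat > /tmp/package-lock.json <<'EOF'
{
  "packages": {
    "node_modules/@validate-sdk/v2": { "version": "1.0.0" }
  }
}
EOF

# Grep the lockfile rather than package.json: transitive dependencies
# such as @validate-sdk/v2 (pulled in by @solana-launchpad/sdk) appear
# here even though no one added them directly.
if grep -qE '@validate-sdk/v2|@solana-launchpad/sdk' /tmp/package-lock.json; then
  echo "WARNING: suspicious package found in lockfile"
fi
```

In a real repository, run the grep against the project's own package-lock.json (or yarn.lock / pnpm-lock.yaml) and wire it into CI so a build fails when a flagged name appears.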