Google — UNC6426 weaponized LLM tool to steal credentials, escalated to AWS admin in 72h

AI relevance: The QUIETVAULT credential stealer specifically weaponizes an LLM tool already installed on the victim's machine to autonomously scan the filesystem for tokens, API keys, and environment variables — turning the developer's own AI tooling into a secret-exfiltration engine.

  • Google's Cloud Threat Horizons Report H1 2026 details how threat actor UNC6426 turned the August 2025 nx npm supply-chain breach into a full cloud compromise within 72 hours.
  • The nx packages were trojanized via a pull_request_target workflow exploit ("Pwn Request"), embedding a postinstall script that launched QUIETVAULT, a JavaScript credential stealer.
  • QUIETVAULT's novel technique: it invokes an LLM tool already present on the developer's endpoint to scan the filesystem for sensitive data — GitHub PATs, environment variables, and system info — rather than relying on hardcoded regex patterns.
  • A developer running Nx Console in a code editor triggered the trojanized update; the stolen data was then uploaded to a public GitHub repository (/s1ngularity-repository-1).
  • UNC6426 used the stolen GitHub PAT to perform recon with Nord Stream (legitimate tool for extracting CI/CD secrets), leaking a GitHub service account credential.
  • The attackers then abused GitHub-to-AWS OIDC trust to generate STS tokens for an overly-permissive CloudFormation role, deploy a new stack, and create a persistent admin IAM role.
  • Final impact: data exfiltration from S3 buckets and data destruction in production cloud environments — all starting from a single npm install.
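The "Pwn Request" entry point described above relies on a workflow anti-pattern like the following. This is an illustrative sketch, not the actual nx workflow: `pull_request_target` runs in the base repository's context with access to its secrets, so checking out and executing the PR author's code hands those secrets to untrusted input.

```yaml
# Illustrative vulnerable pattern (hypothetical, not the real nx workflow).
# pull_request_target executes with the BASE repo's secrets available,
# yet this job checks out and runs the attacker-controlled PR head.
name: pr-validate
on: pull_request_target          # elevated context, attacker-controlled trigger
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          ref: ${{ github.event.pull_request.head.sha }}   # untrusted code
      - run: npm install && npm test   # install scripts now run with secrets in scope
```

The fix is to use plain `pull_request` for untrusted code, or to never check out the PR head inside a `pull_request_target` job.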

Why it matters

  • This is a concrete, reported case of AI tools being weaponized during active intrusions — not a theoretical attack model, but a technique observed in production by Google's own incident response team.
  • Developer machines with LLM tools installed (copilots, code assistants, local inference) now represent an expanded credential attack surface that traditional secret scanners don't model.
  • The supply-chain → credential theft → OIDC abuse → cloud destruction kill chain demonstrates how AI dev tooling is becoming a pivot point in multi-stage cloud compromises.

What to do

  • Restrict OIDC trust relationships: scope GitHub Actions OIDC roles to specific repositories and workflows; never use wildcard trust policies.
  • Audit npm dependencies for postinstall scripts, especially in dev-tooling packages that interact with code editors or AI assistants.
  • Monitor for anomalous LLM tool usage — if a credential stealer invokes an LLM API or local model to scan files, it generates detectable network or process activity.
  • Apply least-privilege to CI/CD roles: the CloudFormation role in this breach was overly permissive; limit IAM capabilities to what each workflow actually needs.
  • Rotate tokens aggressively: GitHub PATs, STS tokens, and service account credentials should have short lifetimes and be monitored for unusual usage patterns.
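The first recommendation above can be made concrete with a scoped trust policy. A minimal sketch, where the account ID, org, repo, and branch are placeholders: the `sub` condition pins role assumption to one repository and ref, instead of the wildcard trust that enabled this pivot.

```json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {
      "Federated": "arn:aws:iam::111122223333:oidc-provider/token.actions.githubusercontent.com"
    },
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {
      "StringEquals": {
        "token.actions.githubusercontent.com:aud": "sts.amazonaws.com",
        "token.actions.githubusercontent.com:sub": "repo:example-org/example-repo:ref:refs/heads/main"
      }
    }
  }]
}
```

A wildcard like `repo:example-org/*` in the `sub` condition would let any repository in the org (including one an attacker can push workflows to with a stolen PAT) mint STS tokens for the role.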
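For the dependency-audit step, a small hypothetical helper (not from the report) can enumerate installed packages that declare npm install-lifecycle hooks, the mechanism QUIETVAULT rode in on:

```python
#!/usr/bin/env python3
"""List npm packages under node_modules that declare install-lifecycle scripts.

Illustrative sketch: walks a node_modules tree and reports any package whose
package.json declares a preinstall/install/postinstall hook, since those
commands run automatically on `npm install`.
"""
import json
from pathlib import Path

INSTALL_HOOKS = ("preinstall", "install", "postinstall")


def find_install_scripts(node_modules="node_modules"):
    """Return [(package_name, hook, command), ...] for every install hook found."""
    hits = []
    for manifest in sorted(Path(node_modules).rglob("package.json")):
        try:
            pkg = json.loads(manifest.read_text(encoding="utf-8"))
        except (json.JSONDecodeError, OSError, UnicodeDecodeError):
            continue  # skip unreadable or malformed manifests
        scripts = pkg.get("scripts") or {}
        for hook in INSTALL_HOOKS:
            if hook in scripts:
                hits.append((pkg.get("name", str(manifest)), hook, scripts[hook]))
    return hits


if __name__ == "__main__":
    for name, hook, cmd in find_install_scripts():
        print(f"{name}: {hook} -> {cmd}")
```

Running `npm install --ignore-scripts` (or setting `ignore-scripts=true` in `.npmrc`) blocks these hooks outright, at the cost of breaking packages that legitimately need them.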

Sources