Wiz — AI-Generated Supply Chain Campaign Targets GitHub Actions via pull_request_target
AI relevance: A threat actor used AI to generate language-aware malicious payloads targeting CI/CD pipelines, evolving from crude bash scripts to repository-specific wrappers in Go, Python, and JavaScript — demonstrating how AI lowers the barrier to sophisticated supply chain attacks.
- Wiz Research identified a six-wave supply chain campaign starting March 11, 2026, operating under the `prt-scan` banner across six GitHub accounts
- The attacker opened over 500 malicious PRs exploiting the `pull_request_target` workflow trigger, which runs CI in the base repository's context, granting access to secrets even from forked PRs
- AI-generated, repository-aware payloads adapted to each target's tech stack: Go test files for Go repos, npm scripts for JavaScript, `conftest.py` for Python, `build.rs` for Rust
- Multi-stage payloads followed a RECON/DISPATCH pattern: steal `GITHUB_TOKEN`, enumerate cloud secrets (AWS/Azure/GCP), and exfiltrate via base64-encoded workflow log markers
- At least two npm packages were compromised with malicious published versions after attackers harvested `NPM_TOKEN` from workflow secrets
- The campaign evolved across phases: from simple `os.system()` injections in Wave 1 to AI-generated wrappers with obfuscation and encoded stages by Wave 3
- Despite the sophistication, the payloads contained logical errors and misunderstandings of GitHub's threat model that limited their overall success
- Attack infrastructure included ProtonMail addresses with consistent naming patterns (`testedbefore`, `elzotebo`, `ezmtebo`) and a `python-requests/2.32.5` user agent
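The core misconfiguration can be sketched as a minimal workflow (illustrative only, not taken from any affected repository; the `NPM_TOKEN` secret name echoes the campaign's harvesting target): `pull_request_target` runs in the base repository's context with its secrets available, so a job that then checks out the fork's head ref ends up executing attacker-controlled code with those secrets in scope.

```yaml
# VULNERABLE (illustrative): pull_request_target grants base-repo secrets,
# yet the job checks out and runs code from the untrusted fork.
on: pull_request_target
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          # Attacker-controlled code from the fork
          ref: ${{ github.event.pull_request.head.sha }}
      - run: npm ci && npm test   # fork code runs with the secret in scope
        env:
          NPM_TOKEN: ${{ secrets.NPM_TOKEN }}

# SAFER: plain pull_request runs in the fork's own context,
# with no access to base-repository secrets.
# on: pull_request
```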
Why It Matters
The pull_request_target misconfiguration is a well-documented risk, yet remains widespread. This campaign shows how AI tools enable attackers to scale social engineering of CI/CD systems — generating plausible PR titles like "ci: update build configuration" and tailoring payloads to each repository's language and build system. The transition from manual script writing to AI-generated, context-aware attack code represents a step change in supply chain attack automation. AI agents and coding assistants that interact with GitHub repositories inherit this risk surface directly.
What To Do
- Audit all GitHub Actions workflows using `pull_request_target`; ensure no secrets are accessible to fork-originated PRs
- Use `pull_request` instead of `pull_request_target` unless secret access is strictly required, and in that case use environment protection rules
- Monitor for `prt-scan-` branch names and suspicious PRs with generic titles like "ci: update build configuration"
- Rotate all CI/CD secrets if you received a suspicious PR from accounts matching the `testedbefore`/`elzotebo`/`ezmtebo` pattern
- Enable required reviews for workflow changes to prevent malicious workflow modifications from merging without approval
- Treat AI-generated PRs as higher-risk; AI tools can produce convincing but malicious contributions at scale
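The audit step above can be sketched in Python (the function names and the two string heuristics are illustrative assumptions, not tooling from the Wiz write-up): it flags workflows that use `pull_request_target`, and additionally flags those that also check out the incoming PR's head ref, the combination that hands fork-controlled code the base repository's secrets.

```python
import os
import re

def audit_workflow(text: str) -> list[str]:
    """Return risk findings for one workflow file's YAML text."""
    findings = []
    # Matches either a block-form trigger key or the inline "on: ..." form
    if re.search(r"^\s*pull_request_target\s*:?", text, re.MULTILINE) or \
            "on: pull_request_target" in text:
        findings.append("uses pull_request_target (runs with base-repo secrets)")
        # Checking out the PR head ref lets fork code run with those secrets
        if re.search(r"ref:\s*\$\{\{\s*github\.event\.pull_request\.head", text):
            findings.append("checks out untrusted PR head ref")
    return findings

def audit_repo(root: str = ".") -> dict[str, list[str]]:
    """Scan .github/workflows/*.yml|yaml under root and report risky files."""
    wf_dir = os.path.join(root, ".github", "workflows")
    report = {}
    if not os.path.isdir(wf_dir):
        return report
    for name in sorted(os.listdir(wf_dir)):
        if name.endswith((".yml", ".yaml")):
            with open(os.path.join(wf_dir, name)) as f:
                findings = audit_workflow(f.read())
            if findings:
                report[name] = findings
    return report
```

This is a coarse textual heuristic, not a YAML-aware parser; a `pull_request_target` trigger without a head-ref checkout can still be risky if the workflow otherwise executes fork-controlled content.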
Sources: