Pillar Security — Gemini CLI "TrustIssues" CVSS 10 Supply-Chain Compromise

AI relevance: An automated Gemini-powered GitHub issue triage agent processed attacker-controlled input without sanitization, enabling prompt injection that escalated from a public issue to full repository supply-chain compromise.

Pillar Security researcher Dan Lisichkin disclosed a CVSS 10 vulnerability (dubbed "TrustIssues") in Google's AI-powered GitHub workflows. An external attacker with nothing more than a public GitHub issue could achieve full supply-chain compromise of the gemini-cli repository, which has over 101,000 stars.

  • Attack chain: Attacker opens a public GitHub issue with hidden prompt injection. Google's automated Gemini-powered triage agent reads the issue. The injected instructions cause the agent to extract workflow secrets and exfiltrate them to an attacker server.
  • Yolo mode risk: The Gemini agent ran in --yolo mode, auto-approving all tool calls without human confirmation. Permitted tools included gh issue edit and shell access — enough to read files and execute commands.
  • Environment leak via /proc: The prompt injection forced the agent to run cat /proc/$PPID/environ, leaking the workflow's GEMINI_API_KEY and OIDC credentials into a public issue body.
  • Persistence bypass: While the workflow explicitly blanked GITHUB_TOKEN from the environment, the token was persisted on disk by actions/checkout in .git/config. The agent read it from there, base64-encoded it, and sent it to the attacker's server.
  • Privilege escalation: The exfiltrated token had actions:write, allowing the attacker to dispatch a different workflow (smoke-test.yml) from a fork branch; that workflow ran with contents:write, granting full write access to the main branch.
  • Blast radius: The vulnerable triage workflow was found across at least eight Google repositories. Google patched within two days; the fix shipped in gemini-cli v0.39.1.
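The environment leak and the persistence bypass above can be sketched concretely. Everything below is illustrative: the token value is a placeholder, and the config file mimics the layout actions/checkout writes when persist-credentials is left at its default of true (a basic-auth "extraheader" under the github.com URL key).

```shell
# 1) The /proc leak: a process's environment is readable as a NUL-separated
#    blob, so something like
#        tr '\0' '\n' < /proc/$PPID/environ
#    is enough to dump GEMINI_API_KEY and friends as plain text.

# 2) The persistence bypass: simulate the .git/config that actions/checkout
#    leaves behind (hypothetical placeholder token, illustrative temp path).
tmp=$(mktemp -d)
token="ghs_exampletoken"                              # placeholder value
cred=$(printf 'x-access-token:%s' "$token" | base64)
cat > "$tmp/config" <<EOF
[http "https://github.com/"]
    extraheader = AUTHORIZATION: basic $cred
EOF

# What the injected instructions made the agent do: read the header back off
# disk and decode the embedded token before exfiltrating it.
leaked=$(grep -o 'basic .*' "$tmp/config" | cut -d' ' -f2 | base64 -d)
echo "$leaked"    # → x-access-token:ghs_exampletoken
```

Blanking GITHUB_TOKEN in the workflow environment does nothing against this: the credential is read from the filesystem, not from the environment.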

Why it matters

This is Simon Willison's "lethal trifecta" materialized: AI agents with access to private data (runner secrets), exposure to untrusted content (public issues), and the ability to communicate externally (shell/curl). CI/CD pipelines running AI triage agents are now a critical attack surface, and the token-on-disk persistence by actions/checkout is a subtle but widely relevant bypass pattern.

What to do

  • If you run AI-powered issue triage or PR review agents, sanitize all external input before it reaches the model prompt.
  • Set persist-credentials: false on actions/checkout when possible, or at minimum audit what tokens land on the runner filesystem.
  • Disable auto-approve/yolo modes for agents exposed to untrusted input.
  • Audit GitHub Actions workflows for the same pattern — any agent that reads public repo content and runs with write permissions needs strict prompt boundaries.
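A minimal audit sketch for the last two bullets, assuming workflows live under .github/workflows and checkout options appear literally in the YAML (file names and contents here are illustrative): flag any checkout step that leaves persist-credentials at its default of true, which is exactly what put the token on disk in this incident.

```shell
# Build two illustrative workflow fragments in a temp dir: one hardened
# with persist-credentials: false, one using checkout's defaults.
wf=$(mktemp -d)

cat > "$wf/hardened.yml" <<'EOF'
steps:
  - uses: actions/checkout@v4
    with:
      persist-credentials: false
EOF

cat > "$wf/triage.yml" <<'EOF'
steps:
  - uses: actions/checkout@v4
EOF

# Flag files that use actions/checkout but never disable credential
# persistence; in a real audit, point this at .github/workflows/.
flagged=""
for f in "$wf"/*.yml; do
  if grep -q 'actions/checkout' "$f" && \
     ! grep -q 'persist-credentials: *false' "$f"; then
    flagged="$flagged $(basename "$f")"
    echo "token will persist in .git/config: $(basename "$f")"
  fi
done
# → token will persist in .git/config: triage.yml
```

A literal grep like this misses indirection (reusable workflows, templated YAML), so treat it as a first pass, not a guarantee.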

Sources: