Cymulate — Prompt Injection Triggers Zero-Click RCE in AI CLI Tools (Cursor, Kiro, Codex, Gemini)
AI relevance: Multiple widely-used AI CLI coding assistants — Cursor CLI, AWS Kiro, Codex Desktop, and Gemini CLI — can silently execute attacker-controlled code on a developer's machine through a single prompt injection, exploiting Windows PATH resolution and IDE config file poisoning without any user interaction beyond opening a project.
Details
- Cymulate Research Labs identified two attack classes that turn everyday AI tool usage into silent system compromise: arbitrary binary execution via untrusted PATH resolution and arbitrary command execution via configuration poisoning.
- Binary execution via PATH hijacking — AI CLI tools on Windows rely on the default executable search order, which prioritizes the current working directory over trusted system paths. An attacker-controlled file placed in a workspace (via a cloned repository, an extracted archive, or a binary introduced through prompt injection) can be executed automatically without user approval, signature validation, or warning (see the resolution sketch after this list).
- Config file poisoning — If an LLM's file-write capability allows modification of execution-sensitive files such as `tasks.json` or `settings.json` without user approval, a single prompt injection can plant commands that execute at the OS level when the IDE reopens the workspace. No second request is needed (a detection sketch follows this list).
- Affected products: Cursor CLI, AWS Kiro, Codex Desktop App, and Gemini CLI. The chain works end-to-end: prompt injection → file write → config poisoning → OS-level command execution on workspace reopen.
- Vendor responses varied significantly. GitHub was the only vendor to properly triage and validate the report. One vendor issued a bounty but deprioritized remediation. Others closed reports without meaningful technical engagement — some shifting responsibility to users or dismissing the findings entirely.
- The tools with the highest impact remain vulnerable, meaning routine actions like opening a project or interacting with AI-generated content can lead to silent code execution with no visible indicators.
- Cymulate published the research as part of a two-part series, with Part 1 covering the broader problem of AI tools shipping with security left behind during rapid development cycles.
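To make the PATH-hijacking mechanic concrete, below is a minimal Python sketch (not Cymulate's proof of concept) of a Windows-style resolution walk: it searches the workspace's current directory before the PATH entries and flags any command that would resolve to a file inside the workspace. The tool names and function names are illustrative, and the exact search order of a given shell or CLI should be verified on your own hosts.

```python
import os
from pathlib import Path

def resolve_like_windows(command: str, workspace: Path) -> Path | None:
    """Return the first match for `command`, searching the workspace
    (current directory) before PATH, as legacy Windows resolution does."""
    # On Windows, PATHEXT lists the executable extensions tried in order.
    exts = os.environ.get("PATHEXT", ".EXE;.BAT;.CMD").split(";") if os.name == "nt" else [""]
    search_dirs = [workspace] + [Path(p) for p in os.environ.get("PATH", "").split(os.pathsep) if p]
    for directory in search_dirs:
        for ext in exts:
            candidate = directory / (command + ext.lower())
            if candidate.is_file():
                return candidate
    return None

if __name__ == "__main__":
    workspace = Path.cwd()
    for tool in ("git", "python", "npm"):  # tools an AI CLI might invoke
        match = resolve_like_windows(tool, workspace)
        if match and workspace in match.parents:
            print(f"WARNING: '{tool}' would resolve to workspace file {match}")
        elif match:
            print(f"ok: '{tool}' resolves to {match}")
```

Running this from a freshly cloned repository that happens to contain, say, a `git.exe` at its root surfaces the warning before any AI tool invokes the shadowed binary.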
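For the configuration-poisoning path, the sketch below scans a workspace for a VS Code-style `.vscode/tasks.json` whose tasks are marked to run automatically when the folder opens (the `runOptions.runOn: "folderOpen"` field of VS Code's task schema). Which configuration keys each affected product actually honors is an assumption to confirm against the full report.

```python
import json
from pathlib import Path

def find_autorun_tasks(workspace: Path) -> list[dict]:
    """Report tasks in .vscode/tasks.json configured to run on folder open."""
    findings = []
    tasks_file = workspace / ".vscode" / "tasks.json"
    if not tasks_file.is_file():
        return findings
    try:
        config = json.loads(tasks_file.read_text(encoding="utf-8"))
    except (json.JSONDecodeError, OSError):
        # Real tasks.json files may contain comments (JSONC) and fail strict parsing.
        return findings
    for task in config.get("tasks", []):
        if task.get("runOptions", {}).get("runOn") == "folderOpen":
            findings.append({
                "label": task.get("label"),
                "command": task.get("command"),
                "source": str(tasks_file),
            })
    return findings

if __name__ == "__main__":
    for f in find_autorun_tasks(Path.cwd()):
        print(f"auto-run task '{f['label']}' executes: {f['command']}")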
Why It Matters
This is not an isolated coding bug — it's a systemic failure in how AI agent tooling handles untrusted content flowing through the agents it runs. The same deep integration that makes these tools productive is precisely what makes them dangerous: AI CLI tools inherit the trust level of the developer's shell and workspace, yet receive no equivalent scrutiny for the content they process. When an LLM can write files to the working directory, and the shell resolves binaries from that same directory, a prompt injection becomes equivalent to handing an attacker a remote shell.
What to Do
- Audit all AI CLI tools in your development environment for file-write and binary-resolution behavior on Windows and Unix hosts.
- Enforce workspace isolation: treat every AI-assisted project directory as potentially hostile; never run AI CLI tools in directories containing sensitive credentials.
- Configure PATH ordering to prioritize system directories over the current working directory for all development tooling (a check is sketched below).
- Restrict LLM file-write permissions so that config-sensitive files (`tasks.json`, `settings.json`, anything under `.vscode/`) are read-only where possible (see the second sketch below).
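On the PATH-ordering recommendation: Windows exposes the NoDefaultCurrentDirectoryInExePath environment variable, which, when defined, removes the current directory from the legacy executable search used by cmd.exe and related APIs. A minimal sketch for checking it and optionally persisting it for the current user follows; whether each affected AI CLI tool respects the variable is an assumption you should verify in your environment.

```python
import os
import subprocess
import sys

ENV_VAR = "NoDefaultCurrentDirectoryInExePath"

def cwd_search_disabled() -> bool:
    """True if the current-directory executable search is disabled for this process."""
    return ENV_VAR in os.environ

def disable_cwd_search_for_user() -> None:
    """Persist the variable for the current user via setx (Windows only),
    so new shells no longer search the working directory for executables."""
    if os.name != "nt":
        raise RuntimeError("Only meaningful on Windows hosts")
    subprocess.run(["setx", ENV_VAR, "1"], check=True)

if __name__ == "__main__":
    if cwd_search_disabled():
        print("Current-directory executable search is already disabled.")
    else:
        print("Current-directory executable search is ENABLED; consider disabling it.")
        if "--apply" in sys.argv:
            disable_cwd_search_for_user()
```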
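For the config read-only recommendation, the sketch below clears write permissions (Unix) or sets the read-only attribute via attrib (Windows) on the execution-sensitive files named in the report. The file list is illustrative, and an agent running with the developer's full privileges can undo these bits, so treat this as a speed bump rather than a security boundary.

```python
import os
import stat
import subprocess
from pathlib import Path

# Illustrative list of execution-sensitive files named in the report.
SENSITIVE = [".vscode/tasks.json", ".vscode/settings.json"]

def mark_read_only(workspace: Path) -> None:
    """Clear write bits (Unix) or set the read-only attribute (Windows)
    on config files an injected prompt might try to rewrite."""
    for rel in SENSITIVE:
        target = workspace / rel
        if not target.is_file():
            continue
        if os.name == "nt":
            subprocess.run(["attrib", "+R", str(target)], check=True)
        else:
            mode = target.stat().st_mode
            target.chmod(mode & ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH))
        print(f"marked read-only: {target}")

if __name__ == "__main__":
    mark_read_only(Path.cwd())
```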