Novee — CVE-2026-26268 AI Coding Agent RCE in Cursor IDE

AI relevance: Cursor's AI agent autonomously executes Git operations on untrusted repositories, and embedded Git hooks — designed to run locally — become a remote code execution vector the moment the agent checks out malicious code.

Details

  • CVE-2026-26268 is a high-severity arbitrary code execution vulnerability in Cursor IDE, disclosed by Novee researcher Assaf Levkovich in coordination with Cursor.
  • The root cause is not a Cursor code bug but a feature interaction between Git hooks and autonomous AI agent behavior: hooks such as post-checkout and pre-commit are ordinary executables that Git runs automatically when the matching operation fires, so when an AI coding agent clones, checks out, or commits to an untrusted repository, those hooks execute on the developer's machine.
  • Cursor's agent performs Git operations (clone, checkout, branch switches) as part of normal workflow — without sandboxing — so a malicious repository with weaponized hooks gains code execution directly on the victim's workstation.
  • The vulnerability was published by Cursor via a GitHub Security Advisory (GHSA-8pcm-8jpx-hv8r) in February 2026; a public writeup followed in late April 2026.
  • The attack requires no special permissions — any repository the agent touches becomes a potential delivery mechanism.
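The mechanism the bullets describe can be reproduced locally in seconds. This is a minimal, harmless sketch (assuming `git` is installed; the marker file stands in for an attacker payload) of why repository-local hooks are an execution vector: any executable placed at `.git/hooks/post-checkout` runs automatically on the next checkout, with no prompt and no review step.

```shell
# Demo: a post-checkout hook fires automatically on checkout.
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git -c user.email=demo@example.com -c user.name=demo \
    commit --allow-empty -qm init

# Install a hook. In a real attack this would be a payload; here it
# just drops a marker file in the repository root.
cat > .git/hooks/post-checkout <<'EOF'
#!/bin/sh
echo hook-executed > hook-ran
EOF
chmod +x .git/hooks/post-checkout

# Any checkout (here, creating a branch) triggers the hook.
git checkout -q -b feature
```

An AI agent that switches branches as part of its normal workflow performs exactly this final step on the victim's behalf.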

Why It Matters

  • AI coding agents blur the boundary between "sandboxed analysis" and "local execution." Traditional dev-tool security assumed developers reviewed code before running it; AI agents don't.
  • Git hooks are a well-known feature, but the threat model changes completely when an autonomous agent — not a human — decides which repositories to clone and which branches to check out.
  • This is a pattern likely to repeat across every AI-powered IDE (Cursor, Windsurf, GitHub Copilot Workspace) that runs tooling on untrusted code without isolation.
  • The vulnerability underscores that "the IDE is not a safe zone" — development environments are now first-class attack surfaces in agentic workflows.

What to Do

  • Update Cursor to the latest patched version; verify the fix covers your workflow.
  • Point Git's core.hooksPath at a trusted (e.g., empty) directory so repository-local hooks in .git/hooks are never consulted, and review your safe.directory entries so repositories owned by other users aren't silently trusted.
  • For AI coding agents operating on external code, run them inside containers, sandboxes, or ephemeral environments with no access to host credentials or secrets.
  • Review which repositories your AI agent has access to — a compromised or malicious upstream repo becomes a pivot point.
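The hook-disabling mitigation above can be sketched in two commands (assuming `git`; the directory name is a hypothetical choice). With core.hooksPath set, Git looks for hooks only in that directory, so scripts shipped in a repository's .git/hooks are never executed:

```shell
# Create an empty directory and make it the only place Git looks
# for hooks, for every repository this user touches.
mkdir -p "$HOME/.git-empty-hooks"
git config --global core.hooksPath "$HOME/.git-empty-hooks"

# Verify: prints the configured path.
git config --global core.hooksPath
```

Note the trade-off: this also disables legitimate hooks (linters, commit-message checks) everywhere, so teams that depend on them may prefer per-repository overrides or a managed hooks directory instead.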

Sources