Novee — Cursor IDE CVE-2026-26268: Git Hooks Enable RCE via AI Coding Agent
AI relevance: When a Cursor AI agent autonomously opens a malicious repository, it triggers Git hooks that execute attacker code on the developer's workstation — turning an AI coding tool into the delivery mechanism for a workstation compromise.
- CVE-2026-26268 (CVSS 8.1, NVD 9.9) was disclosed by Novee's vulnerability research team and published by Cursor in February 2026.
- The root cause is a feature interaction between Git hooks and Cursor's AI agent: Git hooks (e.g., pre-commit, post-checkout) execute automatically on specific Git events, and Cursor's agent autonomously runs Git operations on repositories from untrusted sources.
- When a developer clones a crafted repository, the AI agent's automated Git actions trigger the embedded hook scripts, resulting in arbitrary code execution on the host machine.
- The vulnerability affects Cursor versions prior to 2.5, which added sandbox hardening to prevent the Git configuration writes that enable the escape.
- Crucially, the flaw is not in Cursor's core product logic: it's a consequence of an AI agent operating autonomously inside an environment where it executes Git commands on code it doesn't control, a threat model most security audits don't cover.
- Novee emphasizes that when security teams audit an application's external attack surface (APIs, authentication, user inputs), the development environment itself rarely comes under scrutiny — yet AI-powered agents are now autonomously operating inside it.
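The hook mechanism described above can be demonstrated benignly in a throwaway repository: a repo-local `post-checkout` script runs automatically the moment a checkout happens, with no prompt and no confirmation. This is a sketch; the hook body and the file name `hook-ran.txt` are illustrative, not the actual exploit payload.

```shell
# Benign demonstration: a repo-local post-checkout hook fires
# automatically on a branch switch.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git -c user.email=a@b -c user.name=a commit -q --allow-empty -m init
# Any executable at .git/hooks/post-checkout runs on checkout events.
cat > .git/hooks/post-checkout <<'EOF'
#!/bin/sh
echo "hook executed" > hook-ran.txt
EOF
chmod +x .git/hooks/post-checkout
git checkout -q -b feature   # triggers the hook, no user interaction
cat hook-ran.txt             # prints: hook executed
```

The same mechanism fires when an agent, rather than a human, performs the checkout, which is what turns a manual-interaction bug into a zero-click one.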
Why it matters
AI coding agents blur the boundary between the developer's trusted workstation and untrusted internet-sourced code. A vulnerability that requires manual developer action in a traditional IDE becomes a zero-click exploit when an AI agent performs the triggering operation autonomously. This pattern — AI agents acting as the execution vector for vulnerabilities that would otherwise require human interaction — will likely recur across the AI developer tool ecosystem.
What to do
- Update to Cursor 2.5+ immediately; the release hardens the sandbox against the Git configuration writes used in the escape.
- Review all AI coding tool configurations for autonomous Git operations on untrusted repositories.
- Consider requiring human approval gates for any AI agent action that triggers local code execution (hooks, build scripts, package installs).
- Audit the threat model of any AI tool that operates on developer machines — treat the dev environment as part of your attack surface.
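One coarse workstation-level hardening option for the recommendations above (an illustrative sketch, not Cursor-specific guidance): Git 2.9+ supports `core.hooksPath`, which can be pointed globally at an empty directory so repo-local hook scripts never execute. The directory name `~/.git-hooks-empty` is an arbitrary choice for this example.

```shell
# Neutralize repo-local hooks machine-wide (requires Git >= 2.9):
# point core.hooksPath at an empty directory the repo can't write to.
mkdir -p ~/.git-hooks-empty
git config --global core.hooksPath ~/.git-hooks-empty
git config --global core.hooksPath   # prints the configured path
```

Trade-off: this also disables hooks you rely on (e.g., pre-commit linting frameworks), so teams that depend on them may prefer per-repository allowlisting instead.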