Georgia Tech Vibe Security Radar — 74 CVEs Traced to AI Coding Tools
AI relevance: As AI coding assistants write an increasing share of production code, Georgia Tech researchers are providing the first empirical evidence that these tools are introducing real, tracked vulnerabilities into the open-source ecosystem — at an accelerating rate.
The Vibe Security Radar project, run by Hanqing Zhao at Georgia Tech's Systems Software & Security Lab (SSLab), tracks CVEs directly attributable to AI-assisted coding tools. The methodology is straightforward: pull data from public vulnerability databases, identify the commit that introduced each flaw, trace that commit back to its author, and flag commits carrying AI tool signatures (co-author tags, bot emails).
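The signature-flagging step can be sketched roughly as follows. This is an illustrative assumption about how such detection might work, not the project's actual code: Claude Code is known to add a "Co-Authored-By: Claude" trailer to commits, while the other patterns and the bot email below are hypothetical examples.

```python
import re

# Illustrative AI-tool signature patterns (assumptions, not the radar's
# real list). Claude Code leaves a "Co-Authored-By: Claude" trailer;
# the Copilot pattern and bot email are hypothetical placeholders.
AI_COAUTHOR_PATTERNS = [
    re.compile(r"co-authored-by:\s*claude", re.IGNORECASE),
    re.compile(r"co-authored-by:.*copilot", re.IGNORECASE),
]
AI_BOT_EMAILS = {
    # Hypothetical bot address an AI agent might commit under.
    "devin-ai-integration[bot]@users.noreply.github.com",
}

def flag_ai_commit(message: str, author_email: str) -> bool:
    """Return True if commit metadata carries a known AI tool signature."""
    if author_email.lower() in AI_BOT_EMAILS:
        return True
    return any(p.search(message) for p in AI_COAUTHOR_PATTERNS)
```

Note that this approach only catches tools that leave metadata behind, which is exactly the limitation the project acknowledges: authors who strip these trailers, or tools that never add them, are invisible to it.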
Key Findings
- 74 confirmed CVEs have been directly linked to AI coding tools across ~50 tracked products (Claude Code, Copilot, Cursor, Devin, Windsurf, Aider, Amazon Q, Google Jules).
- 35 CVEs in March 2026 alone, up from 15 in February and 6 in January: the monthly count is roughly doubling.
- Claude Code appears most frequently among the flagged tools, though Zhao attributes this partly to the tool's practice of leaving identifiable metadata signatures in commits.
- The real number is estimated at 400–700 cases across the open-source ecosystem — 5–10× the detected count — because many AI tool signatures are stripped by authors before commit.
- 4% of all public GitHub commits in March came from Claude Code alone, and the share is still climbing.
- Tools like Copilot's inline suggestions leave no trace, making their contribution to vulnerability introduction invisible to current detection methods.
Why It Matters
This is not a benchmark or a hypothetical. These are CVEs with NVD entries, shipped to real users. As "vibe coding" — building entire projects with minimal human review — moves from hobbyist to production practice, the pipeline of AI-introduced vulnerabilities will continue to outpace detection methods that rely on metadata authors can and do strip. The next phase of the project aims to detect AI-written code through stylistic and structural patterns rather than metadata alone.
What to Do
- For maintainers: review commits from AI tools with the same scrutiny as external contributions; consider requiring human sign-off on security-sensitive changes.
- For consumers: track the Vibe Security Radar GitHub for projects in your dependency tree that have been flagged.
- For security teams: add AI code generation to your threat model; assume that any codebase using these tools may contain AI-introduced flaws that bypass traditional review.
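The human sign-off recommendation for maintainers could be enforced mechanically, for example with a CI check that blocks merges where a commit carries an AI co-author trailer but no human review trailer. A minimal sketch, assuming trailer conventions like those above (the "Reviewed-by:" trailer name is an assumption borrowed from kernel-style workflows, not something the article prescribes):

```python
# Markers indicating an AI-assisted commit (illustrative assumptions).
AI_TRAILER_MARKERS = ("co-authored-by: claude", "co-authored-by: copilot")

def needs_human_signoff(commit_message: str) -> bool:
    """True if the commit looks AI-assisted and lacks a human
    'Reviewed-by:' trailer, so CI should block the merge."""
    lowered = commit_message.lower()
    ai_assisted = any(m in lowered for m in AI_TRAILER_MARKERS)
    reviewed = "reviewed-by:" in lowered
    return ai_assisted and not reviewed
```

A CI job would run this over each commit message in a pull request and fail the build when it returns True, forcing a named human to take responsibility for security-sensitive AI-generated changes.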