Google TIG — First Confirmed AI-Developed Zero-Day
AI relevance: Google Threat Intelligence Group (GTIG) has confirmed the first real-world case of a criminal group using an AI model to develop a zero-day exploit, one the group was preparing to use in a mass-exploitation campaign before Google intervened.
What happened
- GTIG discovered a zero-day exploit targeting a "popular open-source, web-based administration tool" that had been developed with heavy AI involvement.
- The exploit was found in a Python script designed to bypass two-factor authentication for the affected service.
- A prominent cybercrime group with a "strong record of high-profile incidents and mass exploitation" was preparing to weaponize it at scale for financial gain.
- Google alerted the vulnerable vendor; the flaw has since been patched.
AI artifacts in the code
- Forensic analysis revealed artifacts inconsistent with typical human-authored code: verbose documentation strings, heavily annotated logic, and a hallucinated (non-existent) CVSS score embedded in the source.
- Google is confident the model used was neither Gemini nor Anthropic's Mythos — indicating other commercial LLMs are already capable of zero-day exploit development.
- GTIG chief analyst John Hultquist called this "probably the tip of the iceberg" and expects "more devastating zero-day attacks" as AI capability trajectories continue to climb.
- The Google Big Sleep agent had already demonstrated AI-assisted zero-day discovery in late 2024, but this is the first confirmed case of adversarial AI-driven exploit development in the wild.
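The hallucinated CVSS score is the kind of artifact that can be flagged mechanically in triage. Below is a minimal illustrative sketch, not GTIG's actual method: it extracts CVSS-v3.x-looking vector strings from source text and checks each mandatory base metric against the values the CVSS v3.1 specification actually allows, so a fabricated or internally inconsistent vector stands out.

```python
import re

# Allowed values for each mandatory CVSS v3.1 base metric.
CVSS31_METRICS = {
    "AV": {"N", "A", "L", "P"},   # Attack Vector
    "AC": {"L", "H"},             # Attack Complexity
    "PR": {"N", "L", "H"},        # Privileges Required
    "UI": {"N", "R"},             # User Interaction
    "S":  {"U", "C"},             # Scope
    "C":  {"N", "L", "H"},        # Confidentiality
    "I":  {"N", "L", "H"},        # Integrity
    "A":  {"N", "L", "H"},        # Availability
}

# Anything that looks like a CVSS v3.0/3.1 vector string.
VECTOR_RE = re.compile(r"CVSS:3\.[01]/[A-Za-z:/.]+")

def invalid_cvss_vectors(source: str) -> list[str]:
    """Return CVSS-looking vectors in `source` that fail v3.x validation."""
    bad = []
    for vec in VECTOR_RE.findall(source):
        parts = vec.split("/")[1:]  # drop the "CVSS:3.x" prefix
        metrics = dict(p.split(":", 1) for p in parts if ":" in p)
        # Every mandatory metric must be present with a legal value.
        ok = all(metrics.get(m) in allowed
                 for m, allowed in CVSS31_METRICS.items())
        if not ok:
            bad.append(vec)
    return bad
```

A valid vector such as `CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H` passes, while a hallucinated one with an impossible metric value (e.g. `AC:X`) or a missing mandatory metric is returned for analyst review.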
Broader context
- The Guardian reports that criminal groups and state-linked actors from China, North Korea, and Russia are already using commercial models (Gemini, Claude, and OpenAI's models) to "refine and scale up attacks."
- UCL professor Steven Murdoch noted: "We have reached a stage where the old way of discovering bugs is gone, and it will now all be LLM-assisted."
- Anthropic itself delayed the Mythos rollout in April 2026 over concerns the model would be weaponized — yet attackers are achieving similar results with currently available models.
Why it matters
This isn't a hypothetical anymore. The gap between "AI can find vulnerabilities" and "attackers are using AI to build exploits" has closed. Organizations running AI agent systems need to assume that both offensive and defensive sides now operate at AI speed. Time-to-exploit was already shrinking — AI makes it effectively instantaneous.
What to do
- Patch aggressively — AI makes zero-day-to-exploit timelines much shorter.
- Monitor for AI-generated exploit artifacts in your threat intelligence feeds (hallucinated metadata, unusual code annotation patterns).
- Treat all admin interfaces and auth-bypass vectors as high-priority targets for AI-assisted attackers.
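One of the flagged artifacts, unusually dense annotation, can be turned into a rough screening metric for suspicious Python samples. The sketch below is a crude heuristic with an arbitrary placeholder threshold, not a vetted detection rule: it uses the standard-library `tokenize` module to measure what fraction of a file's tokens are comments or string literals (docstrings tokenize as strings).

```python
import io
import tokenize

# Token types that carry no content and would skew the ratio.
_STRUCTURAL = (tokenize.NL, tokenize.NEWLINE, tokenize.INDENT,
               tokenize.DEDENT, tokenize.ENDMARKER)

def annotation_ratio(source: str) -> float:
    """Fraction of content tokens that are comments or strings --
    a crude proxy for docstring/annotation density."""
    comment_like = 0
    total = 0
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type in _STRUCTURAL:
            continue
        total += 1
        if tok.type in (tokenize.COMMENT, tokenize.STRING):
            comment_like += 1
    return comment_like / total if total else 0.0

def looks_overdocumented(source: str, threshold: float = 0.4) -> bool:
    """Flag samples above an (arbitrary, tunable) annotation-density cutoff."""
    return annotation_ratio(source) > threshold
```

This would sit alongside, not replace, existing triage: a high ratio alone proves nothing, since well-documented legitimate code also scores high, but combined with signals like fabricated CVSS metadata it helps prioritize samples for human review.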