Palo Alto Networks — AI Models Drive Majority of Findings in May Patch Cycle
AI relevance: Frontier AI models are now finding more vulnerabilities than traditional scanning in production security programs — a tipping point that reshapes both defense and the threat landscape.
- Palo Alto Networks published its May 2026 Defender's Guide update, reporting that for the first time, the majority of vulnerability findings came from frontier AI model scanning rather than traditional methods.
- Models tested include Anthropic's Claude Mythos (via Project Glasswing, started April 7), Claude Opus 4.7, and OpenAI's GPT-5.5-Cyber (via the Trusted Access for Cyber program).
- The May advisory covers 26 CVEs spanning 75 distinct issues (versus the typical fewer than 5 CVEs per month) across 130+ products. All SaaS products are already patched; patches are available for customer-operated products. None of the issues are known to be exploited in the wild.
- PAN estimates a 3–5 month window before AI-driven exploitation becomes the norm, urging organizations to scan their own codebases and open-source supply chains with AI models now.
- Key operational findings: AI models are not "magic" — they need scanning harnesses, context, guardrails, and threat intelligence to be effective. A multi-model approach is necessary because models vary in what they detect, reflecting differences in their training.
- The company plans to rescan continuously and integrate these models directly into the software development lifecycle.
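The advisory doesn't describe PAN's harness internals, but the "harness plus context" point can be illustrated with a minimal sketch: source files are split into overlapping chunks so each model call sees enough surrounding context, and findings are mapped back to absolute line numbers. Everything here — the `chunk_source` helper, the chunk and overlap sizes, and the stubbed `model_scan` call — is a hypothetical illustration, not PAN's implementation.

```python
# Minimal scanning-harness sketch (hypothetical): chunk source with
# overlap for context, scan each chunk, remap findings to file lines.

def chunk_source(lines, chunk_size=80, overlap=10):
    """Yield (start_line, chunk_text) pairs with overlapping context."""
    step = chunk_size - overlap
    for start in range(0, max(len(lines), 1), step):
        chunk = lines[start:start + chunk_size]
        if not chunk:
            break
        yield start + 1, "\n".join(chunk)  # 1-indexed start line

def model_scan(chunk_text):
    """Stub standing in for a frontier-model call; a real harness would
    send the chunk plus guardrail instructions and threat-intel context
    to a model API and parse structured findings back."""
    return []  # hypothetical: list of (line_within_chunk, description)

def scan_file(source_text):
    """Scan one file chunk-by-chunk, returning (absolute_line, description)."""
    findings = []
    for start_line, chunk in chunk_source(source_text.splitlines()):
        for rel_line, desc in model_scan(chunk):
            findings.append((start_line + rel_line - 1, desc))
    return findings
```

The overlap matters: a vulnerability straddling a chunk boundary would otherwise never appear whole in any single model call.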
Why it matters
When a major security vendor reports that AI models outperform traditional scanning at scale, it signals a dual-use inflection: defenders who adopt AI-first vulnerability discovery gain a temporary advantage, but the same capabilities will soon be available to attackers. That 3–5 month estimate is the time defenders have to build institutional scanning capacity before adversarial AI exploitation becomes routine.
What to do
- Start AI-assisted vulnerability scanning on your codebase now — don't wait for adversarial AI tooling to mature.
- Use a multi-model scanning approach; no single model catches the full set of vulnerabilities, and each model surfaces findings the others miss.
- Apply AI scanning to open-source dependencies, not just proprietary code.
- Coordinate accelerated patching cycles with development teams; AI discovery without rapid remediation creates a false sense of security.
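The multi-model recommendation implies a merge step: take the union of each model's findings, deduplicate identical hits, and track which models corroborate each one. A minimal sketch, assuming a simple (path, line, CWE) finding key — the `Finding` shape, model names, and example data are all hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    path: str
    line: int
    cwe: str  # e.g. "CWE-89" (SQL injection)

def merge_findings(per_model_results):
    """Union findings across models, deduplicating identical hits and
    recording which models reported each one (for triage priority)."""
    merged = {}
    for model_name, findings in per_model_results.items():
        for f in findings:
            merged.setdefault(f, set()).add(model_name)
    return merged

# Hypothetical example: two models with partially overlapping results.
a = [Finding("app.py", 42, "CWE-89"), Finding("auth.py", 7, "CWE-287")]
b = [Finding("app.py", 42, "CWE-89"), Finding("util.py", 3, "CWE-22")]
merged = merge_findings({"model-a": a, "model-b": b})
```

Findings corroborated by multiple models are a natural place to start remediation; single-model findings still need review, since per the advisory each model's blind spots differ.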