Anthropic — Mythos Preview autonomously discovers thousands of zero-day vulnerabilities across major systems

AI relevance: Autonomous vulnerability discovery at scale fundamentally changes offensive security economics and raises critical questions about AI agent governance, access controls, and responsible disclosure frameworks.

Anthropic has announced that its Claude Mythos Preview model has autonomously discovered thousands of zero-day vulnerabilities across every major operating system and web browser, demonstrating unprecedented AI-powered offensive security capabilities that could reshape the cybersecurity landscape.

Key discoveries

  • Thousands of zero-day vulnerabilities identified across Windows, Linux, macOS, FreeBSD, Chrome, Firefox, Safari, and Edge, including critical-severity flaws in every major OS and browser
  • A 17-year-old remote code execution flaw in the FreeBSD NFS server (CVE-2026-4747) granting unauthenticated root access
  • A 16-year-old FFmpeg H.264 codec vulnerability, introduced in a 2003 commit and exposed by a 2010 refactor
  • Complete exploit development without human involvement after the initial prompt
  • Project Glasswing, an initiative to secure critical software using Mythos capabilities

Why it matters

The ability of AI models like Mythos Preview to autonomously discover and exploit vulnerabilities at scale represents a fundamental shift in offensive security capabilities. This changes the economics of vulnerability research, potentially enabling both defenders and attackers to find flaws at unprecedented rates. The technology could proliferate beyond actors committed to responsible disclosure, creating significant risks for economies, public safety, and national security.

Traditional vulnerability discovery relies on human expertise, fuzzing, and manual code review — processes that are time-consuming and resource-intensive. AI-powered autonomous discovery could accelerate this process by orders of magnitude, potentially overwhelming existing patching and disclosure frameworks.
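To make the contrast concrete, the traditional fuzzing approach mentioned above can be sketched as a minimal mutation fuzzer run against a deliberately fragile toy parser. Everything here (the parser, the mutation operators, the record format) is illustrative, not a real tool or a description of how Mythos works:

```python
import random

def parse_record(data: bytes) -> tuple[int, bytes]:
    """Toy parser: a 1-byte length prefix followed by a payload.
    Deliberately fragile -- raises on malformed input, the kind of
    flaw fuzzing is designed to surface."""
    if not data:
        raise ValueError("empty input")
    length = data[0]
    payload = data[1:]
    if len(payload) != length:
        raise ValueError("length mismatch")
    return length, payload

def mutate(seed: bytes, rng: random.Random) -> bytes:
    """Apply one random mutation: flip a bit, insert a byte, or delete a byte."""
    buf = bytearray(seed)
    op = rng.choice(("flip", "insert", "delete"))
    if op == "flip" and buf:
        i = rng.randrange(len(buf))
        buf[i] ^= 1 << rng.randrange(8)
    elif op == "insert":
        buf.insert(rng.randrange(len(buf) + 1), rng.randrange(256))
    elif op == "delete" and buf:
        del buf[rng.randrange(len(buf))]
    return bytes(buf)

def fuzz(seed: bytes, iterations: int = 1000, rng_seed: int = 0) -> list[bytes]:
    """Feed mutated inputs to the parser and collect the ones that crash it."""
    rng = random.Random(rng_seed)
    crashers = []
    for _ in range(iterations):
        candidate = mutate(seed, rng)
        try:
            parse_record(candidate)
        except ValueError:
            crashers.append(candidate)
    return crashers

if __name__ == "__main__":
    crashes = fuzz(b"\x03abc")
    print(f"{len(crashes)} crashing inputs found in 1000 iterations")
```

Even this blind approach finds crashes in a trivial parser, but it scales poorly: real fuzzing campaigns need coverage feedback, seed corpora, and weeks of compute per target, which is the cost structure that autonomous AI discovery would upend.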

What to do

  • Establish AI agent governance frameworks for responsible vulnerability discovery and disclosure
  • Develop AI-powered defensive security tools to keep pace with these offensive capabilities
  • Create access controls and licensing models for advanced AI security capabilities
  • Strengthen software supply chain security through automated patching and dependency management
  • Participate in responsible disclosure programs like Project Glasswing for critical infrastructure
  • Invest in AI security research to understand and mitigate potential misuse scenarios
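The dependency-management recommendation above can be sketched as a minimal audit pass that flags pinned packages falling below a known patched version. The package names and version floors here are purely illustrative stand-ins, not real advisories; production workflows would pull advisory data from a vulnerability database instead:

```python
def parse_pin(line: str) -> tuple[str, tuple[int, ...]]:
    """Parse a pinned requirement like 'name==1.2.3' into a name and version tuple."""
    name, _, version = line.partition("==")
    return name.strip(), tuple(int(p) for p in version.strip().split("."))

# Illustrative advisory data: minimum patched version per package (hypothetical).
ADVISORIES: dict[str, tuple[int, ...]] = {
    "examplelib": (2, 4, 1),
    "toyparser": (1, 0, 9),
}

def audit(requirements: list[str]) -> list[tuple[str, tuple[int, ...], tuple[int, ...]]]:
    """Return (package, pinned version, minimum patched version) for each
    pinned dependency that falls below its advisory floor."""
    findings = []
    for line in requirements:
        name, version = parse_pin(line)
        floor = ADVISORIES.get(name)
        if floor is not None and version < floor:  # tuple comparison is lexicographic
            findings.append((name, version, floor))
    return findings

if __name__ == "__main__":
    pins = ["examplelib==2.3.0", "toyparser==1.1.0", "otherlib==0.5.0"]
    for name, pinned, floor in audit(pins):
        print(f"{name} {pinned} is below patched version {floor}")
```

Real tooling in this space (e.g. dependency scanners wired into CI) follows the same shape: resolve pinned versions, compare against advisory floors, and fail the build or open an automated upgrade when a match is found.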

Sources