Anthropic — Claude Security Public Beta for Vulnerability Scanning

AI relevance: Claude Security is Anthropic's first commercial defensive AI product, using a frontier model (Claude Opus 4.7) to autonomously scan, validate, and patch software vulnerabilities at scale.

What happened

  • Anthropic launched Claude Security (formerly Claude Code Security) in public beta on April 30, 2026.
  • The tool is initially available to Claude Enterprise customers, with Team and Max-tier access planned for later.
  • Claude Security scans repositories, identifies vulnerabilities, validates findings to reduce false positives, and routes fixes through Claude Code for human review and approval.
  • The product is powered by Claude Opus 4.7, which Anthropic positions below the unreleased Mythos Preview model in capability but well ahead of prior generations on vulnerability discovery tasks.
  • Anthropic's broader Project Glasswing research has shown that frontier models like Mythos can discover thousands of unpatched vulnerabilities across operating systems and browsers, making defensive tools like Claude Security a competitive necessity.
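The scan, validate, and route-for-review pipeline described above can be sketched as a minimal Python stub. Every name here (Finding, scan_repo, validate, route_for_review) and all the stand-in logic are invented for illustration; Claude Security's actual interface has not been published.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    path: str
    rule: str

def scan_repo(paths):
    # Stage 1: model-driven scan produces candidate findings.
    # Stand-in logic: flag every Python file for illustration.
    return [Finding(p, "sql-injection") for p in paths if p.endswith(".py")]

def validate(findings, reproducible):
    # Stage 2: keep only findings the validator can independently confirm,
    # cutting false positives before any patch is suggested.
    return [f for f in findings if f.path in reproducible]

def route_for_review(findings):
    # Stage 3: confirmed findings become patch suggestions that wait for
    # human review and approval rather than being auto-applied.
    return [{"path": f.path, "rule": f.rule, "status": "awaiting-human-review"}
            for f in findings]

candidates = scan_repo(["app/db.py", "app/utils.py", "README.md"])
queue = route_for_review(validate(candidates, reproducible={"app/db.py"}))
```

The point of the shape, not the stand-in logic, is that validation sits between scanning and patching, so only confirmed findings ever reach a human reviewer.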

Why it matters

  • The launch signals a shift from AI-as-attack-tool to AI-as-defense-platform, with a major model provider building dedicated security products around its own models.
  • Teams that can scan their own codebases with AI can just as easily scan open-source dependencies for zero-days, a capability with both defensive and offensive implications.
  • The tool's validation step, which reduces false positives before suggesting patches, addresses one of the main friction points that have limited the adoption of AI-assisted vulnerability discovery in production.

What to do

  • If you're a Claude Enterprise customer, evaluate Claude Security against your existing SAST/DAST pipelines, particularly for AI-adjacent codebases (agent tooling, MCP servers, model-serving infrastructure).
  • Consider the dual-use implications: the same capabilities that find vulnerabilities in your code can find them in your dependencies, including widely used AI frameworks.
  • Monitor the broader AI security tooling market: competing offerings are likely as other model providers race to commercialize defensive AI.
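One concrete way to run the evaluation suggested above is to normalize findings from both pipelines into comparable keys and use plain set arithmetic to see where they agree and diverge. The tool outputs and finding tuples below are invented for illustration; real runs would normalize the export formats of whatever scanners you actually use.

```python
# Hypothetical normalized findings: (file path, rule id) tuples from an
# existing SAST tool and from an AI scanner run over the same repository.
sast_findings = {("app/db.py", "sql-injection"), ("app/auth.py", "weak-hash")}
ai_findings = {("app/db.py", "sql-injection"), ("mcp/server.py", "path-traversal")}

agreed = sast_findings & ai_findings        # both pipelines flag these
ai_only = ai_findings - sast_findings       # triage: noise or novel finds
missed_by_ai = sast_findings - ai_findings  # coverage gaps in the AI scan
```

Tracking those three buckets over a few sprints gives a rough, tool-agnostic read on whether the AI scanner is adding signal beyond your current pipeline.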

Sources