Wiz — Red Agent, AI-BOM, and Wiz Code Expand AI Application Security Platform
AI relevance: Wiz Red Agent uses AI to model attacker behavior and autonomously validate complex, logic-driven vulnerabilities across cloud infrastructure — representing a shift from static scanning to agentic security testing.
- Red Agent entered public preview — an AI system that identifies and validates complex vulnerabilities by modeling attacker behavior, analyzing application behavior, and adapting its approach in real time.
- It joins Blue Agent (now GA for defensive analysis) and Green Agent (public preview for remediation), completing Wiz's three-agent security triad.
- AI-BOM (AI Bill of Materials) builds an inventory of AI frameworks, models, and IDE extensions across an organization — covering LangChain, Gemini Code Assist, GitHub Copilot, Cursor, and others — to surface shadow AI adoption.
- Wiz Code adds three capabilities: pre-commit security guardrails for AI-generated code (including output from tools like Lovable), remediation skills for Claude Code and Cursor that pull validated findings from the Wiz Security Graph, and AI-BOM integration.
- Wiz Research found that 20% of applications built with AI-assisted coding tools contain significant security issues, including broken access controls and exposed data endpoints.
- Platform coverage now extends to AWS Bedrock Agent Studio, Gemini Enterprise Agent Platform, Microsoft Copilot Studio, Salesforce Agentforce, and Databricks.
- The Technology Intel Centre aggregates feature releases, migration changes, and end-of-life notices across cloud and AI providers, mapping them to affected customer resources.
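The dependency-inventory idea behind an AI-BOM can be illustrated with a minimal sketch. The script below assumes a repository that declares Python dependencies in `requirements*.txt` files; the package watchlist and the parsing are illustrative assumptions only, not Wiz's implementation (a real AI-BOM also covers models, SDKs in other ecosystems, and IDE extensions):

```python
from pathlib import Path

# Illustrative watchlist of AI frameworks/SDKs -- examples only, not Wiz's catalog.
AI_PACKAGES = {"langchain", "openai", "anthropic", "transformers", "llama-index"}

def inventory_ai_dependencies(repo_root: str) -> dict[str, list[str]]:
    """Map each requirements file under repo_root to the AI packages it declares."""
    found: dict[str, list[str]] = {}
    for req in Path(repo_root).rglob("requirements*.txt"):
        hits = []
        for line in req.read_text().splitlines():
            name = line.split("#")[0].strip()  # drop comments
            # Strip version specifiers like "langchain>=0.2".
            for sep in ("==", ">=", "<=", "~=", ">", "<"):
                name = name.split(sep)[0]
            if name.strip().lower() in AI_PACKAGES:
                hits.append(name.strip().lower())
        if hits:
            found[str(req)] = hits
    return found
```

Even this crude version makes the visibility argument concrete: the output is a map from file to AI dependency, which is the raw material for spotting frameworks nobody centrally approved.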
Why it matters
As AI coding tools accelerate development, they also scale insecure code production. Wiz's approach — embedding security directly into AI developer workflows, with remediation skills that let AI agents fix their own findings — is a pragmatic response to the reality that manual review cannot keep pace with the volume of AI-generated code. Red Agent's autonomous vulnerability validation shows that AI-driven offensive techniques are now being turned to defense, not just used by attackers.
What to do
- Inventory all AI coding tools in use across your organization — shadow AI is likely more prevalent than you think.
- Evaluate AI-BOM capabilities to gain visibility into which AI frameworks and models are deployed in your codebase.
- Consider implementing pre-commit guardrails for AI-generated code to catch issues before they reach source control.
- Review whether your current vulnerability scanning can handle logic-driven, multi-step attack paths — traditional scanners often miss these.
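The pre-commit guardrail recommendation can be prototyped without vendor tooling as a git hook that scans staged changes. The sketch below uses hypothetical regex rules for hardcoded credentials as a stand-in for real policy checks; the patterns and helper names are illustrative assumptions, not how Wiz Code works:

```python
import re
import subprocess
import sys

# Hypothetical credential patterns for illustration only -- not a vendor rule set.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID format
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{8,}['\"]"),
]

def find_secret_findings(text: str) -> list[str]:
    """Return every snippet in `text` that matches a secret-like pattern."""
    findings = []
    for pattern in SECRET_PATTERNS:
        findings.extend(m.group(0) for m in pattern.finditer(text))
    return findings

def scan_staged_changes() -> int:
    """Scan only the lines being added in this commit; nonzero means block it."""
    diff = subprocess.run(
        ["git", "diff", "--cached", "--unified=0"],
        capture_output=True, text=True, check=True,
    ).stdout
    # Keep added lines only, so pre-existing issues don't block the commit.
    added = "\n".join(
        line[1:] for line in diff.splitlines()
        if line.startswith("+") and not line.startswith("+++")
    )
    findings = find_secret_findings(added)
    for finding in findings:
        print(f"Possible hardcoded secret: {finding}", file=sys.stderr)
    return 1 if findings else 0
```

Installed as `.git/hooks/pre-commit` with a final `sys.exit(scan_staged_changes())`, a nonzero return aborts the commit before insecure code reaches source control — the same intervention point the commercial guardrails target, with validated findings instead of regexes.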