Cisco — Foundry Security Spec Open-Sources Agentic AI Security Architecture

AI relevance: Cisco's open-source Foundry Security Spec defines a structured architecture for securing AI agents, MCP servers, and skills — addressing the gap where "toss a report at a frontier LLM" produces unbounded, unverifiable output with no way to know what was missed.

Details

  • Cisco announced and open-sourced the Foundry Security Spec, a formal specification for securing agentic AI deployments including MCP servers, A2A agents, and AI skills/plugins.
  • Foundry defines a Detector role that consumes LLM-evaluated detection rules from Project CodeGuard — which Cisco previously donated to the Coalition for Secure AI (CoSAI) — to embed secure-by-default practices into AI coding agent workflows.
  • Three scanning approaches are available uniformly across MCP servers, A2A agents, and skills:
    • YARA Analyzer — fast pattern-based detection of known threats including SQL injection, command injection, and hardcoded credentials.
    • LLM Analyzer — AI-powered semantic analysis using frontier models via Amazon Bedrock that examines tool logic, agent behavior, and capability declarations to identify sophisticated and novel threats.
    • Cisco AI Defense Proprietary Scanners — MCP Scanner, A2A Scanner, and Skills Scanner combining threat intelligence with deep code analysis.
  • The AWS and Cisco integration deploys these scanners across AWS, Azure, and GCP environments for unified MCP and A2A security.
  • Cisco distinguished engineer Omar Santos criticized the current state of AI-assisted security review: "Every security team with access to a frontier LLM has tried the same thing at least once: toss a report at the model and ask it to 'find the bugs.' The result is usually a wall of unbounded, unverifiable output that mixes sharp insights with hallucinated findings, with no way to know what was missed or when you're actually done."
  • Foundry addresses this by defining structured scanner roles, verifiable outputs, and a clear boundary between what the AI found and what it didn't — moving from ad-hoc LLM prompting to a governed detection pipeline.
  • Cisco AI Defense on GitHub now includes enterprise governance tools to scan, enforce policy on, and audit every skill, MCP server, and plugin before it runs, with specific support for OpenClaw and NVIDIA OpenShell.
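The pattern-scanning tier described above can be illustrated with a minimal sketch. This is not Cisco's implementation: the rule names, regexes, and example tool code below are illustrative stand-ins for real compiled YARA rules, and the structured `Finding` record stands in for the spec's verifiable scanner output:

```python
import re
from dataclasses import dataclass

# Illustrative stand-ins for compiled YARA rules; a real deployment would
# use actual YARA rules (e.g. via yara-python), not ad-hoc regexes.
PATTERN_RULES = {
    "hardcoded_credential": re.compile(r"(?i)(api[_-]?key|password)\s*=\s*['\"].+?['\"]"),
    "command_injection": re.compile(r"os\.system\(|subprocess\.(call|run|Popen)\(.*shell\s*=\s*True"),
    "sql_injection": re.compile(r"(?i)execute\(\s*['\"].*%s.*['\"]\s*%"),
}

@dataclass(frozen=True)
class Finding:
    rule: str        # which rule fired
    line_no: int     # 1-based line in the scanned source
    snippet: str     # the matching line, kept for auditability

def scan_source(source: str) -> list[Finding]:
    """Run every pattern rule over each line; return structured findings."""
    findings = []
    for line_no, line in enumerate(source.splitlines(), start=1):
        for rule, pattern in PATTERN_RULES.items():
            if pattern.search(line):
                findings.append(Finding(rule, line_no, line.strip()))
    return findings

# Example: scanning a hypothetical MCP tool implementation.
tool_code = 'API_KEY = "sk-live-1234"\nimport os\nos.system(user_input)\n'
for f in scan_source(tool_code):
    print(f.rule, f.line_no)
# → hardcoded_credential 1
# → command_injection 3
```

Because each finding carries the rule and the exact line that triggered it, a reviewer can verify every result mechanically, which is the property the spec demands of its Detector role.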
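The LLM-analyzer tier's contrast with "toss a report at the model" can also be sketched. The idea, under assumptions of my own (the schema, categories, and helper names below are hypothetical, and the model call is stubbed rather than a real Bedrock invocation): constrain the model to a fixed finding schema and reject any output that does not validate, so results are bounded and machine-checkable instead of a free-form wall of text:

```python
import json

# Hypothetical finding categories; a real analyzer would take these
# from the spec's detection rules, not hardcode them.
ALLOWED_CATEGORIES = {"tool-logic", "agent-behavior", "capability-declaration"}

def build_prompt(tool_source: str) -> str:
    """A bounded prompt: fixed categories, fixed JSON schema, explicit empty case."""
    return (
        "Review the following MCP tool source. Report ONLY findings in these "
        f"categories: {sorted(ALLOWED_CATEGORIES)}. Respond with a JSON list of "
        '{"category": ..., "evidence": ...} objects, or [] if none.\n\n' + tool_source
    )

def parse_findings(model_output: str) -> list[dict]:
    """Verifiable output: anything that is not schema-valid JSON is rejected."""
    try:
        findings = json.loads(model_output)
    except json.JSONDecodeError:
        raise ValueError("model output is not valid JSON")
    if not isinstance(findings, list):
        raise ValueError("expected a JSON list of findings")
    for f in findings:
        if f.get("category") not in ALLOWED_CATEGORIES or "evidence" not in f:
            raise ValueError(f"finding violates schema: {f}")
    return findings

# Stub standing in for the frontier-model call (e.g. via Amazon Bedrock);
# the real analyzer would send build_prompt(...) to the model.
stub_output = '[{"category": "tool-logic", "evidence": "tool shells out with user input"}]'
print(parse_findings(stub_output))
```

Rejecting non-conforming output is what turns the model into one governed stage of a pipeline rather than an oracle whose answers cannot be audited.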

Why It Matters

  • Foundry provides the first open, structured specification specifically for agentic AI security — not just prompt injection detection, but a full scanning architecture covering tool logic, agent behavior, and supply-chain integrity.
  • As MCP servers become the default integration layer between AI agents and production systems (databases, SaaS, infrastructure), the need for standardized security controls at this layer is urgent.
  • Open-sourcing the spec and donating Project CodeGuard to CoSAI signals a push toward industry-standard detection rules rather than vendor-locked approaches.
  • The integration with all three major clouds (AWS, Azure, GCP) means organizations can apply consistent agent security policies regardless of deployment platform.

What to Do

  • Review the Foundry Security Spec on GitHub and evaluate whether its scanner architecture fits your AI agent deployment model.
  • If running MCP servers in production, implement at minimum YARA-based pattern scanning for known threats while planning for LLM-based semantic analysis of tool behavior.
  • Pin trusted MCP server versions and maintain an inventory of all agent skills and plugins — Foundry's governance tools assume you know what you're running.
  • For OpenClaw deployments: Cisco AI Defense's OpenClaw-specific scanners can audit skills for hidden instructions, credential theft patterns, and policy bypass before execution.
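The pinning-and-inventory step above can be sketched as a gate that blocks anything not explicitly inventoried. The inventory format, server names, and function below are hypothetical illustrations, not part of Foundry itself:

```python
import hashlib
from typing import Optional

# Hypothetical pinned inventory: the governance tools assume you know what
# you're running, so unknown servers are denied by default.
PINNED_SERVERS = {
    "github-mcp": {"version": "1.4.2", "sha256": None},  # sha256 recorded at pin time
}

def verify_server(name: str, version: str, artifact: Optional[bytes] = None) -> bool:
    """Allow an MCP server only if it matches its pinned inventory entry."""
    entry = PINNED_SERVERS.get(name)
    if entry is None:
        return False                      # not inventoried: block by default
    if entry["version"] != version:
        return False                      # version drift: block
    if entry["sha256"] and artifact is not None:
        # Optional integrity check against the pinned artifact hash.
        return hashlib.sha256(artifact).hexdigest() == entry["sha256"]
    return True

print(verify_server("github-mcp", "1.4.2"))   # → True  (matches the pin)
print(verify_server("github-mcp", "1.5.0"))   # → False (version drift)
print(verify_server("rogue-mcp", "0.1.0"))    # → False (never inventoried)
```

Deny-by-default matters here: an agent that can load arbitrary skills or servers only needs one unpinned dependency to widen its capabilities silently.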

Sources