Cisco — State of AI Security 2026 report
AI relevance: The report documents concrete risks in AI/agent deployments: prompt injection and jailbreaks, AI supply-chain weaknesses (datasets, models, tools), and new Model Context Protocol (MCP) attack surfaces that directly affect how LLM systems are built and operated.
- Cisco released its State of AI Security 2026, an annual snapshot of AI threat intelligence, policy shifts, and security research trends.
- The report flags prompt injection and jailbreak evolution as an active, growing class of attacks against deployed LLM apps and agentic systems.
- It highlights AI supply-chain fragility: datasets, open-source models, tools, and other components can introduce vulnerabilities long before production.
- The report calls out the expanding MCP/agent attack surface, where malicious or compromised tools can steer agents into unsafe actions.
- Cisco notes a gap between adoption and readiness: in its survey, 83% planned agentic AI deployments, but only 29% felt ready to deploy them securely.
- On the research/tooling side, Cisco references open-weight model vulnerability work and new open-source scanners (MCP, A2A, agentic skill files) plus a pickle-fuzzer to harden AI supply chains.
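The pickle risk the report's supply-chain work targets can be illustrated with a minimal static check (a sketch, not Cisco's fuzzer): because `pickle.load()` can execute arbitrary callables, a scanner can walk the opcode stream with the standard library's `pickletools` and flag risky opcodes without ever deserializing the payload.

```python
import io
import pickle
import pickletools

# Opcodes that can invoke arbitrary callables during pickle.load().
DANGEROUS = {"GLOBAL", "STACK_GLOBAL", "REDUCE", "INST", "OBJ", "NEWOBJ", "NEWOBJ_EX"}

def scan_pickle(data: bytes) -> list[str]:
    """Return risky opcode names found in `data`, without unpickling it."""
    findings = []
    for opcode, _arg, _pos in pickletools.genops(io.BytesIO(data)):
        if opcode.name in DANGEROUS:
            findings.append(opcode.name)
    return findings

# A plain dict serializes to harmless opcodes; a pickled callable does not.
benign = pickle.dumps({"a": 1})
suspicious = pickle.dumps(len)  # resolves a global on load
```

A real scanner (or fuzzer) would go further, e.g. resolving which globals are referenced, but even this opcode filter rejects the classic `__reduce__`-based payloads before they run.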
Why it matters
- It’s a consolidated view of where AI breaches are showing up in real deployments, not just lab research.
- Supply-chain and agent tooling risks are now first-class security concerns for any org shipping AI agents.
- Policy trends (US/EU/China) are shifting toward innovation-first while security risks keep rising; operators need compensating controls.
What to do
- Threat-model your agent stack (data → model → tools → MCP servers) and identify weak links you don’t control.
- Harden tool integrations with allowlists, least-privilege credentials, and output validation for agent actions.
- Audit open-source dependencies in AI pipelines; treat datasets and tools as code with supply-chain controls.
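The allowlist and least-privilege advice above can be sketched as a gate in front of agent tool dispatch. This is a hypothetical example (the tool names and registry are made up, not from the report): every call must name an allowlisted tool and may pass only the arguments that tool is approved for.

```python
# Hypothetical allowlist: tool name -> permitted argument names.
ALLOWED_TOOLS = {
    "search_docs": {"query"},
    "read_ticket": {"ticket_id"},
}

def dispatch(tool: str, args: dict, registry: dict):
    """Run an agent tool call only if the tool and its arguments are allowlisted."""
    if tool not in ALLOWED_TOOLS:
        raise PermissionError(f"tool not allowlisted: {tool}")
    extra = set(args) - ALLOWED_TOOLS[tool]
    if extra:
        raise ValueError(f"unexpected arguments: {sorted(extra)}")
    return registry[tool](**args)
```

The same pattern extends to output validation: wrap each registry entry so its return value is schema-checked before the agent ever sees it.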
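"Treat datasets and tools as code" can mean pinning them by content hash, the way lockfiles pin packages. A minimal sketch (the lockfile format and names here are assumptions, not a standard): record a SHA-256 per artifact, then fail the pipeline if any file on disk no longer matches its pin.

```python
import hashlib
import json
import pathlib

def sha256_file(path: pathlib.Path) -> str:
    """Stream a file through SHA-256 so large datasets don't load into memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_lockfile(lockfile: pathlib.Path) -> list[str]:
    """Return the paths whose current hash differs from the pinned one."""
    pins = json.loads(lockfile.read_text())  # {"path": "sha256-hex", ...}
    return [p for p, digest in pins.items()
            if sha256_file(pathlib.Path(p)) != digest]
```

Run the verifier in CI before training or deployment; a non-empty result means an upstream dataset or tool changed out from under you.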