Salt Security — 1H 2026 State of AI and API Security Report reveals agentic visibility crisis
AI relevance: As AI agents become operational backbones of enterprises, traditional security tools fail to monitor machine-to-machine traffic, creating dangerous blind spots in autonomous AI workflows.
Salt Security's 1H 2026 State of AI and API Security Report, based on a survey of over 300 security leaders, reveals a critical visibility crisis in enterprise AI security: nearly half of organizations are unable to monitor their autonomous AI agents.
Key findings
- 48.9% of organizations are entirely blind to machine-to-machine traffic and cannot monitor what their AI agents are doing
- 48.3% cannot differentiate legitimate AI agents from malicious bots
- Only 23.5% of security leaders find their existing security tools "Very effective" against AI agent threats
- 78.6% report increased executive scrutiny of AI security risks at the board level
- 47% of organizations have delayed production releases due to API security concerns for AI systems
- Nearly 47% report API growth of 51-100% in the past year due to AI agent adoption
Why it matters
The transition from human-centric API consumption to autonomous AI agents has created a fundamental architectural shift. APIs now serve as the "Agentic Action Layer" — the operational backbone where AI agents execute actions. Legacy security tools like Web Application Firewalls (WAFs) and basic API Gateways were designed for human developers and predictable sessions, making them architecturally incapable of parsing the unpredictable, logic-based actions generated by autonomous agents.
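The gap described above can be illustrated with a minimal sketch: a signature-style check (the WAF model) inspects request content for known-bad patterns, while an agentic-aware check compares behavior against a per-agent baseline. The agent names, endpoints, and signature list below are invented for illustration, not drawn from any real tool.

```python
# Illustrative only: why signature checks pass agent misbehavior that
# a behavioral baseline catches. All names here are hypothetical.

SIGNATURES = ["<script>", "' OR 1=1", "../"]  # classic WAF-style payload patterns

def signature_check(request: str) -> bool:
    """True if the request body matches a known-bad pattern."""
    return any(sig in request for sig in SIGNATURES)

def baseline_check(agent: str, endpoint: str, baseline: dict) -> bool:
    """True if the call falls outside the agent's learned endpoint set."""
    return endpoint not in baseline.get(agent, set())

# Baseline learned from this agent's normal traffic (assumed, for the sketch).
baseline = {"billing-agent": {"/v1/invoices", "/v1/orders"}}

request = "GET /v1/customers/export?all=true"   # syntactically clean payload
print(signature_check(request))                  # no injection pattern: WAF passes it
print(baseline_check("billing-agent", "/v1/customers/export", baseline))  # baseline flags it
```

The request carries no malicious payload, so pattern matching sees nothing; only a model of what this agent normally does surfaces the deviation.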
This creates dangerous "Shadow AI" blind spots where autonomous agents dynamically create undocumented endpoints or leverage Model Context Protocol (MCP) servers outside security teams' visibility, exposing sensitive data without formal oversight.
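One way to surface such blind spots is to diff the endpoints agents actually call against the documented API inventory. The sketch below assumes a set of documented paths (e.g., extracted from OpenAPI specs) and parsed gateway logs; all endpoint and agent names are hypothetical.

```python
# Hedged sketch: flagging "Shadow AI" endpoints by comparing observed
# agent traffic against a documented inventory. Data below is invented.

documented_endpoints = {           # e.g., paths collected from API specs
    "/v1/orders", "/v1/invoices", "/v1/customers",
}

observed_traffic = [               # e.g., parsed from gateway access logs
    {"path": "/v1/orders",        "caller": "billing-agent"},
    {"path": "/internal/mcp/run", "caller": "research-agent"},  # undocumented
    {"path": "/v1/customers",     "caller": "support-agent"},
]

def find_shadow_endpoints(traffic, inventory):
    """Return undocumented endpoints mapped to the agents that call them."""
    shadow = {}
    for call in traffic:
        if call["path"] not in inventory:
            shadow.setdefault(call["path"], set()).add(call["caller"])
    return shadow

for path, callers in find_shadow_endpoints(observed_traffic, documented_endpoints).items():
    print(f"shadow endpoint {path} used by {sorted(callers)}")
```

In practice the inventory and traffic sources would come from spec repositories and runtime telemetry; the diff itself stays this simple.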
What to do
- Adopt Agentic Security Posture Management (AG-SPM) to continuously discover and govern the agentic lifecycle from code to runtime
- Implement Agentic Detection and Response (AG-DR) that establishes agentic-aware baselines rather than relying on static signatures
- Build dynamic Agentic Security Graphs that map relationships between LLMs, MCP servers, and foundational APIs
- Establish regulatory guardrails aligned with emerging standards like the EU AI Act for traceable and auditable autonomous interactions
- Move beyond model-centric tools to secure the infrastructure where AI agent actions are actually executed
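The Agentic Security Graph recommendation above can be sketched as a small reachability problem: nodes are agents, LLMs, MCP servers, and APIs; edges mean "can invoke"; traversal yields each agent's blast radius. The node names and topology below are illustrative assumptions, not a real product schema.

```python
from collections import deque

# Hedged sketch of an agentic security graph. Edges read "can invoke";
# every node name here is hypothetical.
graph = {
    "support-agent":  ["llm-main", "mcp-crm"],
    "research-agent": ["llm-main", "mcp-files"],
    "llm-main":       [],
    "mcp-crm":        ["/v1/customers", "/v1/orders"],
    "mcp-files":      ["/internal/storage"],
}

def reachable(graph, start):
    """BFS over the graph: everything an agent can touch, directly or via tools."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    seen.discard(start)
    return seen

for agent in ("support-agent", "research-agent"):
    print(agent, "->", sorted(reachable(graph, agent)))
```

Keeping the graph dynamic, rebuilt from discovery and runtime telemetry rather than hand-maintained, is what lets it answer "which sensitive APIs can this agent reach?" as topology changes.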