Darktrace — 2026 State of AI Cybersecurity Report: 76% of Security Pros Worried About AI Agent Risk

AI relevance: The report quantifies enterprise security teams' concerns specifically about agentic AI as a new class of insider risk — autonomous systems with direct access to sensitive data and business processes that operate without human context or accountability.

  • Darktrace surveyed 1,500 cybersecurity professionals globally for its 2026 State of AI Cybersecurity Report, released February 3, 2026.
  • 76% are concerned about the security implications of AI agents integrated into their organizations. At senior levels, 47% of security executives say they are "very or extremely concerned."
  • Top risks identified: sensitive data exposure (61%), violations of data security/privacy regulations (56%), and misuse or abuse of AI tools (51%).
  • 73% say AI-powered threats are already having a significant impact on their organization — not a future concern, a present one.
  • Despite rising awareness, only 37% of organizations have a formal policy for securely deploying AI — down 8 percentage points from the previous year's report.
  • 97% of security leaders agree AI in their own security stack significantly strengthens defense capabilities, while 77% report generative AI is now embedded in their security stack.
  • Nearly half of security professionals feel unprepared to defend against AI-driven attacks, even as 92% say these threats are driving major upgrades to defenses.
  • Over a five-month observation period, Darktrace observed anomalous data uploads to generative AI services averaging 75 MB per account — equivalent to roughly 4,700 pages of text — highlighting the shadow AI data leakage problem.
  • Alongside the report, Darktrace launched Darktrace / SECURE AI, a product for discovering live agent identities, mapping MCP connections, auditing AI behavior across SaaS/cloud/network/endpoint/OT/email, and detecting prompt injection attacks.

Why it matters

  • This is the first major industry survey to quantify how security teams view agentic AI as an insider threat category — not just a theoretical risk but one actively reshaping security posture.
  • The governance gap is widening: AI adoption is accelerating while formal security policies are declining. The 8-point drop in organizations with deployment policies is a red flag.
  • The framing of AI agents as "employees without accountability" — with access to sensitive data and the ability to trigger business processes — is a useful mental model for security teams building controls around agent deployments.

What to do

  • Inventory your AI agents: Map every AI agent, copilot, and MCP-connected tool operating in your environment. You cannot secure what you cannot see.
  • Establish formal AI deployment policies: If you are in the 63% without one, start with access controls, data classification rules, and monitoring requirements for AI tools.
  • Monitor for shadow AI: Track data flows to generative AI services; 75 MB per account of anomalous uploads suggests significant uncontrolled data exposure.
  • Treat agents as identities: Apply the same IAM rigor to AI agents as to human users — least privilege, audit logging, credential rotation, and behavioral monitoring.
  • Build AI-specific incident response: Your IR playbooks should cover agent manipulation, prompt injection, and AI-mediated data exfiltration scenarios.

Sources