CrowdStrike — 90+ Organizations Hit by AI Security Tool Hijacking

AI relevance: CrowdStrike's 2026 Global Threat Report documents that adversaries used malicious prompt injection to compromise AI security tools at 90+ organizations in 2025 — and the next wave of autonomous SOC agents with write access to firewalls and IAM policies turns this from a data theft problem into an infrastructure compromise problem.

What happened

  • 90+ organizations had their legitimate AI tools compromised via malicious prompt injection in 2025, per CrowdStrike's 2026 Global Threat Report. Attackers used these to steal credentials and cryptocurrency.
  • All documented cases so far involved AI tools with read-only access — data summarization, log analysis, threat intelligence queries.
  • The threat is escalating: autonomous SOC agents now shipping to production can rewrite firewall rules, modify IAM policies, and quarantine endpoints — all through approved API calls that EDR classifies as authorized activity.
  • CrowdStrike reported AI-enabled adversaries increased operations 89% year-over-year, with AI compressing the time between intent and execution.
  • OWASP's Top 10 for Agentic Applications (Dec 2025) maps three risk categories directly to autonomous SOC agents with write access: Agent Goal Hijacking (ASI01), Tool Misuse (ASI02), and Identity and Privilege Abuse (ASI03).
  • A Saviynt/Cybersecurity Insiders survey of 235 CISOs found 47% had observed AI agents exhibiting unintended behavior, yet only 5% felt confident they could contain a compromised agent.
  • Malicious MCP server clones have already intercepted sensitive data in AI workflows by impersonating trusted services.
  • The U.K. NCSC warned that prompt injection attacks against AI applications "may never be totally mitigated."

Why it matters

  • The gap between documented attacks (read-only tools) and shipping capabilities (write-access agents) is closing fast. The architectural conditions for autonomous-agent-driven infrastructure compromise are already in place in production deployments.
  • Palo Alto Networks found an 82:1 machine-to-human identity ratio in the average enterprise — every autonomous agent added extends the attack surface.
  • Industry responses are diverging: Cisco adds inspection at the network layer (AgenticOps), while Ivanti builds governance into the platform layer (Continuous Compliance with approval gates). Neither has been proven at scale against real adversaries.

What to do

  • Inventory all AI agents and tools in your environment — map their privilege scope (read, write, execute) before deploying autonomous capabilities.
  • Implement approval gates and data-context validation for any agent that can modify infrastructure (firewall rules, IAM policies, endpoint quarantine).
  • Monitor for OWASP ASI01–ASI03 indicators: unexpected goal changes, unauthorized tool invocation, and privilege escalation patterns in agent activity logs.
  • Treat MCP server connections as high-value attack surfaces — verify server identity and integrity before allowing agents to connect.
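The approval-gate recommendation above can be sketched as a thin wrapper around agent tool dispatch: any tool whose declared scope is write/execute is blocked until a human approves. The tool names and the `gated_call` interface below are hypothetical illustrations, not any vendor's real API.

```python
# Minimal approval-gate sketch. Tools in WRITE_SCOPED_TOOLS require an
# explicit human approval flag before they execute; read-scoped tools
# pass through. All names here are illustrative assumptions.

WRITE_SCOPED_TOOLS = {
    "update_firewall_rule",
    "modify_iam_policy",
    "quarantine_endpoint",
}

class ApprovalRequired(Exception):
    """Raised when a write-scoped tool call lacks human approval."""

def gated_call(tool_name, args, approved=False):
    """Dispatch an agent tool call, enforcing approval for write scopes."""
    if tool_name in WRITE_SCOPED_TOOLS and not approved:
        raise ApprovalRequired(f"{tool_name} needs human approval: {args}")
    # In a real system this would invoke the underlying tool; here we
    # just return a record of what would run.
    return {"tool": tool_name, "args": args, "status": "executed"}
```

In practice the approval step would route through a ticketing or chat-ops flow; the point is that the gate sits between the agent's decision and the infrastructure-modifying API call.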
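Monitoring for OWASP ASI02-style tool misuse can start as simply as diffing agent activity logs against each agent's declared tool allowlist. The log field names (`agent_id`, `tool`) below are assumed for illustration, not a real log schema.

```python
# Sketch: flag log entries where an agent invoked a tool outside its
# declared allowlist -- a crude indicator for unauthorized tool
# invocation (OWASP ASI02). Field names are assumptions.

def find_unauthorized_invocations(log_entries, allowlists):
    """Return log entries whose tool is not in the agent's allowlist."""
    findings = []
    for entry in log_entries:
        allowed = allowlists.get(entry["agent_id"], set())
        if entry["tool"] not in allowed:
            findings.append(entry)
    return findings
```

Unexpected goal changes (ASI01) and privilege escalation (ASI03) need richer signals, but an allowlist diff like this catches the blunt cases and is cheap to run continuously.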
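One way to verify MCP server identity before an agent connects is to pin a digest of the server artifact (or its TLS certificate) and refuse connections on mismatch, which would have blocked the malicious server clones mentioned above. The pin-store shape below is a hypothetical sketch.

```python
# Sketch: pin MCP server integrity by SHA-256 digest. `pins` maps a
# server name to the expected hex digest of its artifact (binary or
# certificate bytes); a clone with different bytes fails the check.

import hashlib

def verify_server(name, artifact_bytes, pins):
    """Return True only if the artifact's SHA-256 matches the pinned digest."""
    digest = hashlib.sha256(artifact_bytes).hexdigest()
    return pins.get(name) == digest
```

Digest pinning is brittle across legitimate upgrades, so real deployments would pair it with signed releases or certificate pinning, but it illustrates the principle: treat the MCP connection as untrusted until the server proves its identity.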

Sources