CrowdStrike — 2026 Global Threat Report: AI-accelerated adversaries
AI relevance: CrowdStrike reports malicious prompt injection against GenAI tools, LLM-enabled malware, and abuse of AI development platforms as active, real-world tradecraft.
- AI-enabled adversary activity rose 89% year-over-year.
- Attackers injected malicious prompts into GenAI tools at 90+ organizations to generate credential-theft commands.
- Threat actors abused AI development platforms to establish persistence and deploy ransomware.
- CrowdStrike highlights LLM-enabled malware (“LAMEHUG”) used by Russia-nexus FANCY BEAR for reconnaissance and document collection.
- Average eCrime breakout time fell to 29 minutes, with the fastest observed breakout at 27 seconds.
- 42% of vulnerabilities were exploited before public disclosure, underscoring the pace of weaponization.
Why it matters
- GenAI tools are now a direct attack surface, not just a productivity layer.
- LLM-assisted malware shows that AI is moving from phishing copy to operational automation.
- Faster breakout times compress incident response windows for AI-enabled intrusions.
What to do
- Harden GenAI use: treat prompts and tool outputs as untrusted input with logging and policy gates.
- Lock down AI dev platforms: enforce least-privilege and monitor for anomalous tool usage.
- Pre-position IR: rehearse response flows for AI-assisted intrusions where dwell time is minutes.
- Patch fast: prioritize edge and identity exposures that enable rapid breakout.
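The first recommendation above can be sketched as a minimal policy gate that treats GenAI output as untrusted before anything downstream acts on it. This is an illustrative sketch, not a CrowdStrike-provided control: the `DENY_PATTERNS` list and the `gate_output` function are hypothetical names, and real deployments would use far richer detection than a regex deny-list.

```python
# Minimal sketch of a policy gate for GenAI tool output.
# Assumption: DENY_PATTERNS and gate_output are illustrative, not a real product API.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("genai-gate")

# Hypothetical deny-list: command fragments associated with credential theft.
DENY_PATTERNS = [
    re.compile(r"\bmimikatz\b", re.IGNORECASE),
    re.compile(r"reg\s+save\s+hklm\\sam", re.IGNORECASE),
    re.compile(r"/etc/shadow"),
    re.compile(r"Get-Credential", re.IGNORECASE),
]

def gate_output(output: str) -> bool:
    """Return True if the GenAI output passes the gate; log every decision."""
    for pat in DENY_PATTERNS:
        if pat.search(output):
            log.warning("blocked GenAI output matching %s", pat.pattern)
            return False
    log.info("allowed GenAI output (%d chars)", len(output))
    return True
```

The point is the shape, not the patterns: every output is inspected and every allow/block decision is logged, so injected prompts that coax a tool into emitting credential-theft commands leave an audit trail instead of executing silently.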