Microsoft — AI as Tradecraft: threat actors operationalize AI across the attack lifecycle
AI relevance: Microsoft's report documents a shift from experimental AI use to AI fully embedded in offensive operations — North Korean groups (Jasper Sleet, Coral Sleet) now use AI for malware development, phishing, deepfake-enabled social engineering, and attack infrastructure management across the full kill chain.
- Published March 6, 2026, the Microsoft Security Blog report "AI as Tradecraft" documents threat actors operationalizing AI along the full cyberattack lifecycle.
- North Korean groups Jasper Sleet and Coral Sleet are leading adoption — using AI for reconnaissance, malware development, infrastructure management, and social engineering at scale.
- Actors are abusing both intended model capabilities and jailbreak techniques to bypass safety filters, enabling tasks that would otherwise require specialized technical expertise.
- AI is being used for automated open-source intelligence gathering — scanning for new attack opportunities, credential leaks, and vulnerable infrastructure without manual effort.
- Microsoft notes that large-scale agentic AI use by threat actors hasn't been observed yet; model reliability is the primary blocker, not a lack of intent or interest.
- The report highlights AI-enabled deepfake operations for social engineering, including video and audio impersonation used in targeted spear-phishing campaigns.
- A parallel Microsoft report on Tycoon2FA documents a leading adversary-in-the-middle phishing kit that leverages AI-generated content to scale credential theft operations.
Why it matters
- AI is no longer an experimental add-on for threat actors — it's embedded operational infrastructure that lowers the skill barrier for sophisticated attacks.
- Organizations defending against state-sponsored threats must now assume AI-augmented tradecraft at every stage of the kill chain, from recon to exfiltration.
- The gap between "AI-assisted" and "AI-autonomous" attacks is narrowing; as agentic reliability improves, fully autonomous attack campaigns become increasingly plausible.
What to do
- Update threat models to include AI-augmented tradecraft — especially for social engineering, recon, and malware development scenarios.
- Deploy AI-aware detection — deepfake detection, AI-generated content fingerprinting, and behavioral analysis for automated recon patterns.
- Monitor for jailbreak patterns against your own AI deployments — internal LLMs and agents may be targeted for abuse.
- Harden identity verification — move to out-of-band confirmation for high-value transactions to counter deepfake-enabled social engineering.
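As a starting point for the behavioral-analysis recommendation above, automated recon tends to show a signature humans rarely produce: one client touching many distinct paths in a short window. The sketch below is a minimal illustration of that idea, not Microsoft's method; the window size, threshold, and log-entry field names (`ip`, `path`, `ts`) are all assumptions, and production detection would combine signals like this with richer telemetry.

```python
from collections import defaultdict
from datetime import timedelta

# Illustrative sketch: flag clients whose request spread suggests automated
# scanning. Thresholds below are hypothetical tuning values, not guidance.
WINDOW = timedelta(minutes=5)
MAX_DISTINCT_PATHS = 50  # human browsing rarely hits this many paths in 5 min

def find_recon_candidates(access_log: list[dict]) -> set[str]:
    """access_log entries look like {'ip': str, 'path': str, 'ts': datetime}.

    Returns the set of client IPs that requested more than
    MAX_DISTINCT_PATHS distinct paths within any WINDOW-sized interval.
    """
    by_ip = defaultdict(list)
    for entry in access_log:
        by_ip[entry["ip"]].append(entry)

    flagged = set()
    for ip, entries in by_ip.items():
        entries.sort(key=lambda e: e["ts"])
        start = 0
        for end in range(len(entries)):
            # Slide the window start forward until it fits within WINDOW.
            while entries[end]["ts"] - entries[start]["ts"] > WINDOW:
                start += 1
            distinct = len({e["path"] for e in entries[start : end + 1]})
            if distinct > MAX_DISTINCT_PATHS:
                flagged.add(ip)
                break
    return flagged
```

A rule this simple will miss low-and-slow scans and flag some crawlers; it is meant only to show the shape of a behavioral detector, with alert routing and allow-listing left out.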
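For the jailbreak-monitoring recommendation above, a first layer can be as simple as logging prompts to internal LLMs that match known jailbreak phrasings. The pattern list below is a hypothetical starter set for illustration only; real deployments typically pair rules like these with a trained classifier, since phrasing-based filters are easy to evade.

```python
import re

# Hypothetical starter patterns for common jailbreak phrasings. This list is
# illustrative, not a taxonomy, and matching is done on lowercased input.
JAILBREAK_PATTERNS = [
    r"ignore (all|any|previous) (prior |previous )?instructions",
    r"\bdan\b|do anything now",
    r"pretend (you are|to be) .* (without|with no) (restrictions|filters)",
    r"developer mode",
]

def flag_jailbreak_attempt(prompt: str) -> list[str]:
    """Return the patterns a prompt matches, for logging and alerting."""
    lowered = prompt.lower()
    return [p for p in JAILBREAK_PATTERNS if re.search(p, lowered)]
```

In practice you would emit each non-empty result to your SIEM with the calling user and agent identity attached, so repeated jailbreak attempts against internal agents surface as an abuse pattern rather than isolated events.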