Unit 42 — Boggy Serpens AI-enhanced malware and multi-wave espionage

AI relevance: This campaign matters to AI operations because it shows a real threat actor reportedly using AI-assisted malware development to accelerate delivery of custom implants, aimed at exactly the email, identity, and infrastructure workflows that defenders increasingly automate.

  • Unit 42 says Boggy Serpens (aka MuddyWater), attributed to Iran’s MOIS, has shifted toward trusted relationship compromise and sustained multi-wave targeting rather than broad, noisy phishing alone.
  • The report highlights a six-month campaign against a Middle East marine and energy company, with four distinct attack waves spanning engineering, finance, travel, and operations-themed lures.
  • According to Unit 42, the group is increasing its technical capability with AI-generated code and more tailored malware development, not just recycling commodity tooling.
  • The tooling described includes Rust-based BlackBeard, the Nuso HTTP backdoor, GhostBackDoor, and custom UDP-based command-and-control traffic.
  • That combination matters because it blends human-targeted social engineering with faster malware iteration and better evasion, which is exactly where AI assistance can help mature intrusion operations.
  • The report also describes a custom phishing email delivery platform, suggesting the actor is investing in operational infrastructure instead of one-off lure creation.
  • One notable detail is the use of highly specific lures, including internal-style financial spreadsheets and a personalized flight itinerary, implying prior access or strong victim intelligence.
  • This is not “attackers mention AI” fluff; the relevant signal is that a long-running espionage actor is reportedly using AI assistance in its malware development lifecycle while expanding persistence and stealth.

Why it matters

  • Security teams operating AI-heavy environments should assume adversaries are also using AI to accelerate payload variation, lure iteration, and implant development.
  • The report is a reminder that the AI-security overlap is not limited to prompt injection or model abuse; it also includes how AI can improve the economics of real intrusion operations.
  • For enterprises deploying AI agents into email, ticketing, knowledge, or workflow systems, trusted-account compromise becomes even nastier: one foothold can contaminate both human and machine-driven processes.
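The last point above has a practical design consequence: content flowing from email or ticketing into AI automations should carry provenance and be treated as untrusted after a possible account compromise. A minimal sketch of that pattern follows; all names (`TaggedContent`, `ingest_email`, `route_to_agent`) are illustrative assumptions, not anything described in the Unit 42 report.

```python
# Hypothetical sketch: provenance tagging for AI-connected email workflows.
# The idea is that mail-derived content stays untrusted by default, even when
# it comes from an internal sender, because trusted-relationship compromise
# means a legitimate mailbox can carry attacker-controlled input.
from dataclasses import dataclass


@dataclass(frozen=True)
class TaggedContent:
    text: str
    source: str   # e.g. "email", "ticket", "wiki"
    trusted: bool  # mail-derived content is never trusted by default


def ingest_email(body: str, sender_domain: str,
                 internal_domains: set[str]) -> TaggedContent:
    # Internal origin is recorded but does NOT grant trust: a compromised
    # internal account still produces untrusted content.
    return TaggedContent(text=body, source="email", trusted=False)


def route_to_agent(item: TaggedContent) -> str:
    # Untrusted content goes to a constrained path (no tool execution,
    # no credential access); only trusted content uses the full pipeline.
    return "sandboxed-agent" if not item.trusted else "full-agent"
```

The key design choice is that trust is a property set by policy, not inferred from sender domain, so a single compromised mailbox cannot promote its content into tool-wielding automations.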

What to do

  • Harden identity and mail flows: prioritize MFA, mailbox anomaly detection, and controls around internal-to-internal trust assumptions.
  • Watch for lure realism, not just malware signatures: personalized spreadsheets, itineraries, and operational documents can signal higher-quality recon and compromise.
  • Instrument post-phish behavior: detect unusual macro chains, outbound Telegram/API use, custom HTTP C2, and Rust payload execution.
  • Segment AI-connected workflows: if assistants or automations consume internal email and documents, treat those channels as potential attacker-controlled input after account compromise.
  • Plan for faster adversary iteration: shorten detection and response loops, because AI-assisted malware development reduces the time between lure changes and fresh payloads.
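One way to instrument post-phish behavior, as the list above suggests, is to hunt for beacon-like regularity in outbound connections, a common trait of custom HTTP or UDP C2 check-ins. The sketch below is a generic heuristic with illustrative thresholds; it is not derived from indicators in the Unit 42 report.

```python
# Hedged sketch: flag near-constant intervals between outbound connections
# from one host to one destination, which often indicates a C2 beacon.
# The jitter threshold is an illustrative assumption, not a published IOC.
from statistics import pstdev


def looks_like_beacon(timestamps: list[float],
                      max_jitter_s: float = 2.0) -> bool:
    """Return True when the gaps between connection timestamps (seconds)
    are nearly constant, i.e. their standard deviation is small."""
    if len(timestamps) < 4:
        return False  # too few samples to judge regularity
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return pstdev(intervals) <= max_jitter_s
```

For example, a host calling out roughly every 60 seconds with sub-second jitter would trip this check, while bursty human-driven traffic would not. In practice this belongs alongside destination filtering (e.g. unexpected Telegram API use) rather than on its own, since legitimate software also polls on timers.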

Sources