ThreatLabz — Malicious OpenClaw Skill Distributes Remcos RAT and GhostLoader
AI relevance: A malicious OpenClaw skill weaponizes the framework's instruction-driven execution model — AI agents or developers running the SKILL.md instructions silently download and install Remcos RAT (Windows) or GhostLoader stealer (macOS/Linux), bypassing user interaction entirely.
- Zscaler ThreatLabz published a detailed analysis of a March 2026 campaign that used a deceptive "DeepSeek-Claw" skill for the OpenClaw agentic AI framework as its initial access vector.
- The skill's SKILL.md instruction file embeds a PowerShell one-liner that runs msiexec /q /i to silently install a remote MSI package containing Remcos RAT — triggered either autonomously by an AI agent parsing the skill or by a developer following the instructions manually.
- The Remcos chain abuses a legitimate, digitally signed GoToMeeting binary (G2M.exe) for DLL search-order hijacking: the malicious g2m.dll is sideloaded to execute shellcode in memory.
- The in-memory loader patches ETW and AMSI dynamically, then uses Tiny Encryption Algorithm (TEA) in CBC mode to decrypt and run the final Remcos RAT payload for persistent remote access.
- A separate macOS/Linux path delivers GhostLoader — a cross-platform information stealer — via a heavily obfuscated Node.js payload designed to harvest credentials and sensitive data from developer environments.
- The attack demonstrates how agentic AI skill registries function as a dual-use distribution channel: the same instructions meant to automate legitimate setup become a silent malware delivery mechanism when weaponized.
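The final-stage decryption described above pairs TEA with CBC chaining. A minimal sketch of TEA-CBC in Python, with an encrypt helper included so the roundtrip can be verified; the key, IV, endianness, and 32-round count here are generic TEA assumptions, not the campaign's actual parameters:

```python
import struct

DELTA, MASK, ROUNDS = 0x9E3779B9, 0xFFFFFFFF, 32

def _tea_encrypt_block(block: bytes, key: bytes) -> bytes:
    """Encrypt one 8-byte block with TEA (128-bit key, big-endian words)."""
    v0, v1 = struct.unpack(">2I", block)
    k = struct.unpack(">4I", key)
    s = 0
    for _ in range(ROUNDS):
        s = (s + DELTA) & MASK
        v0 = (v0 + (((((v1 << 4) & MASK) + k[0]) & MASK)
                    ^ ((v1 + s) & MASK)
                    ^ (((v1 >> 5) + k[1]) & MASK))) & MASK
        v1 = (v1 + (((((v0 << 4) & MASK) + k[2]) & MASK)
                    ^ ((v0 + s) & MASK)
                    ^ (((v0 >> 5) + k[3]) & MASK))) & MASK
    return struct.pack(">2I", v0, v1)

def _tea_decrypt_block(block: bytes, key: bytes) -> bytes:
    """Invert the TEA rounds: run the schedule backwards from delta * 32."""
    v0, v1 = struct.unpack(">2I", block)
    k = struct.unpack(">4I", key)
    s = (DELTA * ROUNDS) & MASK
    for _ in range(ROUNDS):
        v1 = (v1 - (((((v0 << 4) & MASK) + k[2]) & MASK)
                    ^ ((v0 + s) & MASK)
                    ^ (((v0 >> 5) + k[3]) & MASK))) & MASK
        v0 = (v0 - (((((v1 << 4) & MASK) + k[0]) & MASK)
                    ^ ((v1 + s) & MASK)
                    ^ (((v1 >> 5) + k[1]) & MASK))) & MASK
        s = (s - DELTA) & MASK
    return struct.pack(">2I", v0, v1)

def tea_cbc_encrypt(plaintext: bytes, key: bytes, iv: bytes) -> bytes:
    """CBC mode: XOR each plaintext block with the previous ciphertext block.
    Input length must be a multiple of 8 (no padding scheme is applied here)."""
    out, prev = b"", iv
    for i in range(0, len(plaintext), 8):
        block = bytes(a ^ b for a, b in zip(plaintext[i:i + 8], prev))
        prev = _tea_encrypt_block(block, key)
        out += prev
    return out

def tea_cbc_decrypt(ciphertext: bytes, key: bytes, iv: bytes) -> bytes:
    """CBC decryption: decrypt each block, then XOR with the prior ciphertext."""
    out, prev = b"", iv
    for i in range(0, len(ciphertext), 8):
        block = ciphertext[i:i + 8]
        out += bytes(a ^ b for a, b in zip(_tea_decrypt_block(block, key), prev))
        prev = block
    return out
```

An analyst who recovers the embedded key and IV from the sideloaded loader could feed the encrypted payload blob through tea_cbc_decrypt to obtain the Remcos executable for further analysis.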
Why it matters
OpenClaw skills are instruction files that AI agents execute autonomously. When a skill package contains hidden malicious commands, the agent — acting in good faith — becomes the execution vector. This is distinct from traditional malware distribution because the victim's own AI system carries out the compromise. The dual-path design (Remcos for Windows via auto-execution, GhostLoader for macOS/Linux via manual install) maximizes the campaign's reach across developer environments.
What to do
- Audit all installed OpenClaw skills — especially recently added or community-sourced packages — for hidden commands in SKILL.md files before installation or execution.
- Enable skill sandboxing: run skill installation instructions in an isolated environment before trusting them on production or developer workstations.
- Monitor for the specific IOCs documented by ThreatLabz: cloudcraftshub[.]com MSI distribution, GoToMeeting DLL sideloading, and GhostLoader Node.js obfuscation patterns.
- Treat AI skill registries as a software supply-chain boundary: apply the same vetting standards you would to npm, PyPI, or Docker Hub packages.
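The audit step above can be sketched as a simple static scan of SKILL.md files before any agent is allowed to execute them. The patterns and labels below are illustrative heuristics of this author's choosing, not ThreatLabz's detection logic, and a clean scan does not prove a skill is safe:

```python
import re

# Hypothetical heuristics: command patterns commonly abused for silent
# download-and-execute chains like the one described in this campaign.
SUSPICIOUS_PATTERNS = [
    (r"msiexec\s+/q", "silent MSI install"),
    (r"powershell\b.*-(enc|encodedcommand)\b", "encoded PowerShell"),
    (r"curl\s+\S+.*\|\s*(ba|z)?sh\b", "pipe-to-shell download"),
    (r"invoke-webrequest|\biwr\b", "PowerShell web download"),
    (r"certutil\s+-urlcache", "certutil download abuse"),
]

def scan_skill(text: str) -> list[tuple[str, str]]:
    """Return (label, matched_text) pairs for suspicious commands in a skill file."""
    hits = []
    for pattern, label in SUSPICIOUS_PATTERNS:
        for match in re.finditer(pattern, text, re.IGNORECASE):
            hits.append((label, match.group(0)))
    return hits
```

Running such a scan in CI against every community-sourced skill before it reaches a developer workstation turns the manual audit into a repeatable gate, in the same way registries lint npm or PyPI packages.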