Acronis TRU: 575+ Malicious AI Skills on ClawHub and Hugging Face Deploy Malware
AI relevance: A coordinated malware campaign is abusing AI platforms — ClawHub (575+ malicious skills across 13 accounts) and Hugging Face (model/dataset repositories used as infection staging points) — to deliver trojans, cryptominers, and infostealers disguised as legitimate AI tools and agent extensions.
What Happened
- Acronis TRU identified 575 malicious OpenClaw skills published across 13 developer accounts on ClawHub. Two threat actors dominate: "hightower6eu" (334 skills, 58%) and "sakaen736jih" (199 skills, 34.6%).
- Skill files masquerade as useful tools (e.g., YouTube transcript summarizers) but embed hidden instructions that lead the user or the agent to download password-protected archives or execute base64-encoded commands.
- Windows payloads include a VMProtect-packed trojan and a second variant that uses 30-byte XOR string decryption, dynamic NT API resolution, and in-memory process injection into explorer.exe.
- macOS payloads connect to an external IP and silently download AMOS Stealer, a macOS-focused infostealer sold as MaaS on Telegram and underground forums.
- The injected code establishes AES-encrypted C2 over HTTPS, drops a cryptominer disguised as svchost.exe, persists via scheduled tasks, and evades detection by adding Windows Defender exclusion paths.
- On Hugging Face, researchers identified repositories serving as multi-stage infection chains.
- The ITHKRPAW campaign targeted Vietnamese financial organizations, using Cloudflare Workers to serve a PowerShell dropper that fetched payloads from Hugging Face datasets.
- The FAKESECURITY campaign used encoded PowerShell to download secondary scripts from Hugging Face repositories, bypassing SmartScreen by stripping the Mark-of-the-Web.
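To illustrate the string-decryption step described for the second Windows variant, here is a minimal Python sketch of a repeating 30-byte XOR routine. The key and sample string below are placeholders for illustration, not values recovered from the actual samples.

```python
# Sketch of a repeating-key XOR string decryptor, as described for the
# second Windows trojan variant. KEY and the sample string are
# hypothetical placeholders, not real IOCs.

def xor_decrypt(data: bytes, key: bytes) -> bytes:
    """XOR each byte with the key, cycling the key (symmetric operation)."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = bytes(range(1, 31))  # placeholder 30-byte key

if __name__ == "__main__":
    # XOR is its own inverse: encrypting then decrypting round-trips.
    blob = xor_decrypt(b"explorer.exe", KEY)
    print(xor_decrypt(blob, KEY).decode())  # prints "explorer.exe"
```

Because XOR is symmetric, the same routine both obfuscates strings at build time and recovers them at runtime, which is why analysts can often decode such samples once the key is extracted.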
Why It Matters
- AI agent skill ecosystems are now a proven malware distribution channel — not a theoretical risk. The indirect prompt injection technique embedded in skill files means AI agents can be turned into unwitting malware delivery vehicles.
- The scale (575+ skills, 13 accounts, multiple malware families) indicates organized, resourceful operators — not opportunistic hobbyists.
- Hugging Face's role as an infection staging point shows that AI model repositories are being weaponized as payload-hosting and distribution infrastructure, a novel supply-chain abuse pattern.
- Combined with the March 2026 LiteLLM PyPI compromise that exposed ~500,000 credentials, the AI toolchain supply chain is under sustained attack.
What to Do
- Treat AI models, datasets, and agent skills as untrusted inputs — apply the same validation rigor as third-party code.
- Audit installed OpenClaw skills for encoded commands, external download instructions, or indirect prompt injection patterns in skill definitions.
- Block known malicious indicators (91.92.242[.]30, velvet-parrot[.]com) and restrict Windows Defender exclusion path modifications via Group Policy.
- Monitor for unexpected process injection into explorer.exe and cryptominer artifacts disguised as system processes.
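The skill-audit step above can be sketched as a simple text scanner. This is a minimal sketch under stated assumptions: the skills directory layout, the `*.md` file extension, and the regex patterns are illustrative, not the actual OpenClaw skill format or a complete detection ruleset.

```python
# Hypothetical audit sketch: flag skill definition files containing
# encoded-command or external-download patterns. Paths, extensions, and
# patterns are assumptions for illustration only.
import re
from pathlib import Path

SUSPICIOUS = [
    re.compile(r"base64\s+(-d|--decode)"),                     # piped base64 decode
    re.compile(r"echo\s+[A-Za-z0-9+/=]{40,}"),                 # long base64 literal
    re.compile(r"(curl|wget|Invoke-WebRequest)\b.*https?://", re.I),
    re.compile(r"password[- ]protected archive", re.I),
]

def audit_skill_text(text: str) -> list[str]:
    """Return the suspicious patterns matched in one skill definition."""
    return [p.pattern for p in SUSPICIOUS if p.search(text)]

def audit_dir(skills_dir: str) -> dict[str, list[str]]:
    """Scan every *.md file under skills_dir; map file path -> hits."""
    hits: dict[str, list[str]] = {}
    for f in Path(skills_dir).rglob("*.md"):
        found = audit_skill_text(f.read_text(errors="ignore"))
        if found:
            hits[str(f)] = found
    return hits
```

Pattern matching like this is noisy on its own; treat hits as triage candidates for manual review rather than definitive verdicts, and extend the list with indicators from the Acronis TRU report.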