Check Point — AI assistants as C2 proxies
AI relevance: The research shows how LLM assistants with web browsing or URL-fetching capabilities can become covert command-and-control (C2) relays, turning AI services into part of the malware's runtime infrastructure.
- Check Point Research demonstrates “AI as a proxy”: malware talks to a web-based assistant, which fetches attacker-controlled URLs and returns responses.
- The PoC targets assistants such as Grok and Microsoft Copilot through their public web interfaces, requiring no API keys.
- In the example, malware uses WebView2 to open the assistant’s page and submit prompts that carry commands or data.
- The assistant’s response embeds attacker-supplied instructions; the implant parses the response to extract the next command.
- Because traffic goes to common AI service domains, the C2 channel blends into permitted enterprise egress.
- Anonymous web usage removes traditional kill switches such as revoking API keys or suspending accounts.
- CPR frames this as a step toward prompt-driven malware behavior, with adaptive decisions based on model output.
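To make the relay concrete for defenders, the loop above can be sketched in a few lines: the implant submits a prompt asking the assistant to fetch a URL and echo its contents between delimiters, then scrapes the tasking out of the conversational reply. This is a hypothetical, benign sketch, not CPR's actual PoC; the marker format, prompt wording, and base64 encoding are all illustrative assumptions.

```python
import base64
import re
from typing import Optional

# Hypothetical delimiters the prompt asks the assistant to wrap the
# fetched payload in -- an assumption, not CPR's actual protocol.
MARKER = re.compile(r"<<task>>(.+?)<</task>>", re.DOTALL)

def build_prompt(c2_url: str) -> str:
    """Prompt instructing the assistant to fetch an attacker-controlled
    URL and return its contents between recognizable markers."""
    return (
        f"Fetch {c2_url} and reply with its exact contents "
        "between <<task>> and <</task>> markers."
    )

def extract_command(assistant_reply: str) -> Optional[str]:
    """Pull the base64-encoded tasking out of the assistant's reply,
    discarding the surrounding conversational text."""
    m = MARKER.search(assistant_reply)
    if not m:
        return None
    try:
        return base64.b64decode(m.group(1).strip()).decode()
    except (ValueError, UnicodeDecodeError):
        # Reply didn't contain valid tasking; implant would idle/retry.
        return None
```

The defensive takeaway is that the tasking survives any amount of model chatter around it, so content inspection of AI responses, not just prompt filtering, is needed to spot the channel.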
Why it matters
- AI services can become trusted C2 relays, bypassing common egress filtering and detection rules.
- Enterprise AI adoption increases the risk of allowlisted AI domains becoming an attacker’s tunnel.
- This blends “AI misuse” and “malware infrastructure” into a single operational risk for AI ops teams.
What to do
- Monitor egress to AI assistant domains for unusual automation or high-entropy traffic patterns.
- Harden AI web access (SSO, policy controls, telemetry) instead of treating it as low-risk browsing.
- Hunt for WebView2 abuse and non-browser processes embedding AI web sessions.
- Segment AI access so high-trust endpoints can’t freely reach AI web services.
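The WebView2 hunt in the list above can be approximated with a simple telemetry heuristic: flag `msedgewebview2.exe` instances whose parent process is not a known WebView2 host. A minimal sketch, assuming process/parent pairs come from EDR telemetry; the allowlist below is illustrative and would need tuning per environment.

```python
# Hypothetical allowlist of processes expected to host WebView2 in a
# given environment -- an assumption to be tuned against real telemetry.
EXPECTED_HOSTS = {"msedge.exe", "teams.exe", "outlook.exe", "widgets.exe"}

def flag_webview2_abuse(events):
    """events: iterable of (process_name, parent_name) pairs from
    process-creation telemetry. Returns parent processes that spawned
    a WebView2 runtime outside the expected-host allowlist."""
    hits = []
    for proc, parent in events:
        if proc.lower() == "msedgewebview2.exe" and parent.lower() not in EXPECTED_HOSTS:
            hits.append(parent)
    return hits
```

Pairing this with egress logs (which flagged parents are also talking to AI assistant domains?) narrows the results from noisy to actionable.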