Microsoft Security Blog — malicious AI assistant extensions harvest LLM chat histories

AI relevance: These extensions target ChatGPT/DeepSeek sessions and steal LLM chat content, turning AI usage into a data‑exfiltration channel inside the browser.

  • Microsoft Defender reports malicious Chromium extensions masquerading as AI assistants to harvest LLM chat histories and browsing data.
  • Reporting indicates the extensions reached roughly 900,000 installs, with activity observed across 20,000+ enterprise tenants.
  • Collected data included full URLs plus AI chat content from platforms like ChatGPT and DeepSeek.
  • The extensions used AI‑themed branding to blend into common “sidebar assistant” workflows.
  • A misleading consent flow allowed telemetry to be re‑enabled after updates, quietly restoring data collection without clear notice to the user.
  • Because Edge can install extensions from the Chrome Web Store, a single malicious listing reached users of both browsers, expanding the distribution surface.

Why it matters

  • Browser extensions can sit inside the same session where staff paste proprietary code, roadmaps, and internal data into LLM chats.
  • AI tooling adds a new high‑value data stream to the browser, making extension governance an AI‑security control.

What to do

  • Audit extensions: inventory AI‑themed extensions and remove anything not explicitly approved.
  • Tighten policies: restrict Chrome/Edge extension installs to allowlisted IDs and enforce enterprise policies.
  • Segment AI use: consider separate browser profiles for AI tools, with reduced permissions and session controls.
  • Monitor egress: watch for extension C2 traffic and unusual telemetry patterns tied to AI chat domains.
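The audit step above can be sketched in code. This is a minimal, hypothetical heuristic — the keyword list, extension IDs, and manifest fields shown are illustrative assumptions, not part of the Microsoft report — that flags AI‑themed entries in an extension inventory for manual review:

```python
# Hypothetical audit sketch: flag AI-themed extensions in an inventory
# for manual review. The keyword list is a crude illustrative heuristic.
AI_KEYWORDS = ("ai", "gpt", "chatbot", "assistant", "copilot", "deepseek")

def flag_ai_extensions(inventory):
    """Return sorted extension IDs whose manifest name or description
    matches an AI-themed keyword.

    `inventory` maps extension ID -> manifest-like dict with optional
    "name" and "description" keys (as found in an extension's
    manifest.json).
    """
    flagged = []
    for ext_id, manifest in inventory.items():
        text = " ".join(
            str(manifest.get(key, "")) for key in ("name", "description")
        ).lower()
        if any(keyword in text for keyword in AI_KEYWORDS):
            flagged.append(ext_id)
    return sorted(flagged)

# Example inventory — IDs and names are made up for illustration.
sample = {
    "abcdefghijklmnopabcdefghijklmnop": {
        "name": "Sidebar AI Assistant",
        "description": "Chat with GPT in any tab",
    },
    "ponmlkjihgfedcbaponmlkjihgfedcba": {
        "name": "Dark Reader",
        "description": "Dark mode for every website",
    },
}
print(flag_ai_extensions(sample))
```

Anything the heuristic flags still needs a human decision against the approved list; keyword matching only narrows the review queue.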
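For the policy‑tightening step, Chrome and Edge both support the `ExtensionInstallBlocklist` and `ExtensionInstallAllowlist` enterprise policies: blocking `"*"` and then allowlisting approved IDs gives deny‑by‑default installs. A minimal sketch that builds such a policy document (the extension ID and output path are placeholders; on Windows these policies are normally deployed via GPO/registry rather than a JSON file):

```python
import json

def build_extension_policy(approved_ids):
    """Deny-by-default extension policy: block every extension ("*"),
    then allow only the explicitly approved extension IDs."""
    return {
        "ExtensionInstallBlocklist": ["*"],
        "ExtensionInstallAllowlist": sorted(approved_ids),
    }

# Placeholder ID for illustration (real Chrome/Edge extension IDs are
# 32 lowercase letters in the range a-p).
policy = build_extension_policy(["abcdefghijklmnopabcdefghijklmnop"])
print(json.dumps(policy, indent=2))
```

On Linux, Chrome reads managed-policy JSON like this from `/etc/opt/chrome/policies/managed/`; the equivalent Edge policies carry the same names.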

Sources