ClaudeBleed: Chrome Extension Flaw Allows AI Agent Takeover via Prompt Injection
AI relevance: A vulnerability in the Claude Chrome extension (dubbed ClaudeBleed) allows any Chrome extension — even one with zero permissions — to inject prompts into the Claude AI agent and abuse it for data exfiltration, email sending, and document sharing on behalf of the user.
What Happened
- LayerX security researchers discovered that the Claude extension for Chrome trusts the origin of a command (claude.ai) rather than the execution context, allowing any script running in that origin to issue privileged commands.
- An attacker can create a Chrome extension with a content script running in the Main world and send messages directly to the Claude extension — no special permissions required.
- LayerX demonstrated bypassing Claude's user-confirmation safeguards by forging approval through DOM manipulation and repeated message sending.
- The attack chain enables data exfiltration from Gmail, GitHub, and Google Drive, as well as sending emails, deleting data, and sharing documents — all without the user's knowledge.
- Anthropic deployed a partial fix adding internal security checks, but LayerX found the root cause was not addressed: switching the extension to "privileged" mode still bypasses the mitigation, and users are never notified of the mode switch.
Why It Matters
- This breaks Chrome's extension security model: a zero-permission extension inherits the full capabilities of a trusted AI assistant.
- Browser-based AI agents that interact with user data (email, drive, code repos) present a new attack surface — the extension bridge — that neither browser vendors nor AI companies have fully hardened.
- The incomplete fix illustrates a broader pattern: AI agent security patches often address symptoms (specific exploit paths) rather than the architectural trust boundary failures that enable them.
What to Do
- Review which Chrome extensions you have installed; malicious extensions can silently hijack Claude sessions.
- Organizations should audit AI agent browser extension deployments and restrict extension installation via enterprise policy.
- AI platform vendors should enforce execution-context verification (not just origin trust) for all inter-extension messaging involving AI agent commands.