Oasis Security — "Claudy Day" Prompt Injection and Data Exfiltration Chain in Claude.ai

AI relevance: Three vulnerabilities in Claude.ai chain together to create a complete attack pipeline — from delivering an invisible injected prompt via a crafted URL, to silently exfiltrating sensitive data from the user's conversation history — all against a default, out-of-the-box session with no integrations required.

Details

  • Oasis Security's research team dubbed the finding "Claudy Day" — a chain of three vulnerabilities in Claude.ai and the broader claude.com platform, responsibly disclosed to Anthropic.
  • Invisible prompt injection via URL parameters: Claude.ai allows pre-filling a new chat via claude.ai/new?q=.... Researchers found that certain HTML tags could be embedded in this parameter, invisible in the text box but fully processed by the model — enabling hidden instructions to execute without the user seeing them.
  • Data exfiltration: Combined with the injection, attackers could craft prompts that extract and leak sensitive information from the conversation history to an external endpoint.
  • Open redirect: A third vulnerability on claude.com enabled redirection attacks that could be used as a delivery vector for the injection chain.
  • The attack works against default claude.ai sessions — no MCP servers, integrations, or tools required.
  • Anthropic has fixed the prompt injection issue; the remaining vulnerabilities are being addressed.

Why It Matters

  • Millions of users trust Claude.ai with sensitive conversations, business strategy, financial data, and personal information. An attack that requires no integration setup — just a crafted URL — dramatically expands the threat surface.
  • The invisible injection mechanism (HTML tags hidden from the UI but visible to the model) highlights a fundamental challenge in AI UI security: what the user sees is not what the model receives.
  • This demonstrates how "convenience features" like URL-based chat pre-fill can become attack vectors when input rendering and model processing are not aligned.
  • The chain works end-to-end: delivery → injection → exfiltration. No user interaction beyond clicking a link is needed.

What to Do

  • Update your Claude.ai client and verify the prompt injection fix is active in your account.
  • Treat Claude.ai URLs from untrusted sources with the same caution as any executable link — do not click claude.ai/new?q= URLs from unknown senders.
  • For enterprise deployments, consider URL filtering policies that block or warn on Claude.ai pre-fill parameters from external domains.
  • Organizations should review what sensitive data is discussed in AI assistant sessions and apply data classification policies accordingly.

Sources