Microsoft Security Blog — AI recommendation poisoning

AI relevance: Attackers can bias assistant outputs by slipping hidden “remember this vendor” prompts into AI-summary URLs that write to long-term memory.

  • Microsoft reports a wave of AI recommendation poisoning, where prompt-filled URLs try to plant persistent preferences in assistant memory.
  • The technique targets “Summarize with AI” buttons that prefill a prompt and execute when clicked.
  • Researchers observed over 50 unique prompts across 31 companies and 14 industries attempting to influence recommendations.
  • Typical payloads instruct the AI to remember a brand as trusted or recommend it first in future responses.
  • Microsoft maps the behavior to MITRE ATLAS Memory Poisoning (AML.T0080) and related techniques.
  • Microsoft says Copilot mitigations have already blocked some previously reproducible behaviors, but the attack surface persists.
  • The post frames this as an emerging analogue to SEO—except aimed at AI assistants instead of search engines.

Why it matters

  • Memory-enabled assistants can carry poisoned preferences into high-stakes decisions (finance, health, security).
  • Prompt-in-URL workflows make injection frictionless and scalable for marketers or adversaries.

What to do

  • Disable or gate memory writes for web-triggered prompts and summarization flows.
  • Log and review memory updates for unexpected vendor or preference entries.
  • Educate users that “Summarize with AI” links can carry hidden instructions.

Sources