Microsoft — Copilot summarized confidential emails despite DLP labels
AI relevance: A bug in Microsoft 365 Copilot Chat let the assistant process emails carrying confidential sensitivity labels, showing how LLM-driven agents can bypass enterprise data loss prevention (DLP) policies when context boundaries fail.
- Microsoft confirmed that Copilot Chat incorrectly summarized emails with confidential sensitivity labels applied, even when DLP policies were configured to block that access.
- The issue affected the Copilot “work tab” experience, which pulled content from Outlook Drafts and Sent Items folders.
- Microsoft tracked the incident under service advisory CW1226324 and said it first detected the behavior on January 21.
- A code issue allowed labeled emails to be processed, bypassing the intended label-based exclusion for Copilot Chat.
- Microsoft began a targeted fix rollout in early February and later stated the root cause had been addressed for most customers.
- The company said access controls remained intact but acknowledged the behavior did not meet the intended Copilot experience for protected content.
Why it matters
- Sensitivity labels are a core control for AI deployment in enterprises; bypasses turn AI features into unintended data access paths.
- Copilot’s access to email content means a single misconfiguration or bug can expose highly sensitive material across workflows.
- AI copilots embedded in office suites operate as privileged internal agents, so context-scoping failures become security incidents.
What to do
- Review Copilot access: validate that sensitivity labels and DLP rules are enforced in Copilot Chat and search experiences.
- Audit high-risk folders: add monitoring for Drafts/Sent Items ingestion by AI features when confidentiality labels are set (see the first sketch after this list).
- Implement compensating controls: restrict Copilot in regulated mailboxes until policy enforcement is verified (second sketch below).
- Track advisories: ensure service messages like CW1226324 are fed into AI risk registers and incident reviews (third sketch below).
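
A minimal sketch of the folder audit, assuming an Entra app registration with application-level Mail.Read permission; `TENANT_ID`, `CLIENT_ID`, and `CLIENT_SECRET` are placeholders. It walks the Graph well-known folders `drafts` and `sentitems` and uses the message `sensitivity` property as a rough proxy for confidentiality; matching against your tenant's actual MIP label IDs is tenant-specific and would need the information protection APIs instead.

```python
import msal
import requests

# Placeholders: fill in from your Entra app registration.
TENANT_ID = "<tenant-id>"
CLIENT_ID = "<app-client-id>"
CLIENT_SECRET = "<client-secret>"

app = msal.ConfidentialClientApplication(
    CLIENT_ID,
    authority=f"https://login.microsoftonline.com/{TENANT_ID}",
    client_credential=CLIENT_SECRET,
)
token = app.acquire_token_for_client(
    scopes=["https://graph.microsoft.com/.default"]
)["access_token"]
headers = {"Authorization": f"Bearer {token}"}

def flag_labeled_mail(upn: str) -> None:
    """Print Drafts/Sent Items messages marked private or confidential."""
    for folder in ("drafts", "sentitems"):  # Graph well-known folder names
        url = (
            f"https://graph.microsoft.com/v1.0/users/{upn}"
            f"/mailFolders/{folder}/messages"
            "?$select=subject,sensitivity,sentDateTime&$top=50"
        )
        while url:  # follow @odata.nextLink paging
            resp = requests.get(url, headers=headers, timeout=30)
            resp.raise_for_status()
            page = resp.json()
            for msg in page.get("value", []):
                if msg.get("sensitivity") in ("private", "confidential"):
                    print(f"{upn} [{folder}] {msg.get('subject')!r}")
            url = page.get("@odata.nextLink")
```

Correlating these hits with Copilot interaction records in Microsoft Purview Audit gives a baseline for spotting labeled content that AI features actually touched.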
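One compensating control is to re-assign licenses for regulated mailboxes with the Copilot service plan disabled, using Graph's assignLicense action. The sketch below reuses the `headers` bearer token from the previous sketch and needs application-level User.ReadWrite.All; the SKU and service-plan GUIDs are tenant-specific placeholders you would look up via GET /subscribedSkus.

```python
import requests

# Assumes the `headers` bearer-token dict from the first sketch.

def disable_copilot_plan(user_id: str, sku_id: str, copilot_plan_id: str) -> None:
    """Re-assign a user's license with the Copilot service plan disabled.

    sku_id / copilot_plan_id are placeholder GUIDs: enumerate your tenant's
    values with GET https://graph.microsoft.com/v1.0/subscribedSkus.
    Note: disabledPlans replaces the user's existing disabled set, so merge
    in any plans that are already disabled before calling this.
    """
    body = {
        "addLicenses": [{"skuId": sku_id, "disabledPlans": [copilot_plan_id]}],
        "removeLicenses": [],
    }
    resp = requests.post(
        f"https://graph.microsoft.com/v1.0/users/{user_id}/assignLicense",
        headers={**headers, "Content-Type": "application/json"},
        json=body,
        timeout=30,
    )
    resp.raise_for_status()
```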
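For the advisory feed, Graph's service communications API exposes service health issues under /admin/serviceAnnouncement/issues. A sketch that flags Copilot-related items for the risk register, again reusing `headers`; it needs ServiceHealth.Read.All, and the CW1226324 match assumes the advisory surfaces under that ID in your tenant.

```python
import requests

# Assumes the `headers` bearer-token dict from the first sketch.
WATCHED_IDS = {"CW1226324"}  # advisory ID from this incident

def copilot_advisories() -> None:
    """List service health issues that mention Copilot or are on the watchlist."""
    url = "https://graph.microsoft.com/v1.0/admin/serviceAnnouncement/issues"
    while url:  # follow @odata.nextLink paging
        resp = requests.get(url, headers=headers, timeout=30)
        resp.raise_for_status()
        page = resp.json()
        for issue in page.get("value", []):
            text = f"{issue.get('service', '')} {issue.get('title', '')}".lower()
            if "copilot" in text or issue.get("id") in WATCHED_IDS:
                print(issue.get("id"), issue.get("status"), issue.get("title"))
        url = page.get("@odata.nextLink")
```

Running this on a schedule and diffing against the previous run turns advisory tracking into a feed your incident-review process can consume directly.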