Microsoft 365 Copilot Defect Exposes AI Summaries of Confidential Emails
Microsoft 365 Copilot Vulnerability Bypasses DLP Policies, Summarizes Confidential Emails; Bug Tracked as CW1226324
Summary:
A recently disclosed issue in Microsoft 365 Copilot caused the AI assistant to summarize confidential emails despite sensitivity labels and Data Loss Prevention (DLP) policies being configured.
The bug, tracked under CW1226324, allowed Copilot’s “Work Tab” chat feature to process and summarize emails from the Sent Items and Drafts folders, even when those emails carried confidentiality labels designed to restrict automated access.
Microsoft Findings
Microsoft’s investigation identified a code-level defect as the root cause. The flaw allowed Copilot to inadvertently pick up items stored in users’ Sent Items and Drafts folders, bypassing the confidentiality labels applied to those messages.
Although Microsoft categorized the issue as an advisory with potentially limited scope, the incident raises significant concerns regarding AI governance, trust boundaries, and enterprise data protection controls.
Vulnerability Details
The issue stemmed from an internal coding error in Microsoft 365 Copilot’s Work Tab chat feature. Because of this error, Copilot could access emails stored in the Sent Items and Drafts folders even when they were marked as confidential.
In normal conditions, sensitivity labels and DLP policies should block automated tools from processing such emails.
However, because of this flaw, Copilot treated those protected emails as regular content and created summaries from them until Microsoft began deploying a fix in February 2026.
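To see what this exposure surface looks like in practice, administrators can enumerate which messages in those two folders carry a confidential marking. Below is a minimal sketch using the Microsoft Graph API, assuming a delegated access token with the Mail.Read scope is already available in a GRAPH_TOKEN environment variable (token acquisition and error handling are omitted). Note that the message `sensitivity` property is the legacy Outlook marking; Purview sensitivity labels are checked via message headers in a later sketch.

```python
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
# Assumption: a delegated Graph token with the Mail.Read scope is supplied externally.
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

def confidential_messages(folder: str):
    """Yield (subject, sensitivity) for messages in a well-known mail folder
    whose legacy Outlook sensitivity is set to 'confidential'."""
    url = f"{GRAPH}/me/mailFolders/{folder}/messages?$select=subject,sensitivity&$top=50"
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        data = resp.json()
        for msg in data.get("value", []):
            if msg.get("sensitivity") == "confidential":
                yield msg.get("subject", "(no subject)"), msg["sensitivity"]
        url = data.get("@odata.nextLink")  # follow server-side paging

if __name__ == "__main__":
    # 'sentitems' and 'drafts' are Graph well-known folder names,
    # matching the two folders implicated in CW1226324.
    for folder in ("sentitems", "drafts"):
        for subject, sensitivity in confidential_messages(folder):
            print(f"[{folder}] {sensitivity}: {subject}")
```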

Attack Flow
| Step | Description |
| --- | --- |
| Configuration | Organization applies confidentiality labels and DLP policies to sensitive emails (see the verification sketch below the table). |
| Storage | Emails are stored in the Sent Items or Drafts folders. |
| Trigger | User interacts with Copilot “Work Tab” chat. |
| Processing | Due to the code bug, Copilot accesses labeled emails. |
| Exposure | Copilot generates summaries of confidential content, bypassing expected DLP enforcement. |

Source: 0din
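To verify the Configuration step, confirming that a Purview sensitivity label is actually stamped on a message, one can inspect the message’s internet headers: labels travel with sent mail in the msip_labels header. A minimal sketch, again assuming a Graph token in GRAPH_TOKEN; the message id shown is a placeholder, and drafts may not yet carry this header since it is stamped at send time.

```python
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
HEADERS = {"Authorization": f"Bearer {os.environ['GRAPH_TOKEN']}"}

def get_msip_labels(message_id: str) -> str | None:
    """Return the raw msip_labels header of a message, or None if unlabeled.
    Purview sensitivity labels are stamped into this header when mail is sent,
    so drafts may legitimately lack it."""
    url = f"{GRAPH}/me/messages/{message_id}?$select=internetMessageHeaders"
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    for header in resp.json().get("internetMessageHeaders", []):
        if header["name"].lower() == "msip_labels":
            return header["value"]
    return None

if __name__ == "__main__":
    # Placeholder id: substitute a real message id from a Graph list query.
    labels = get_msip_labels("AAMkAD-placeholder-id")
    print(labels or "No sensitivity label header found on this message.")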
Why It’s Effective
- DLP Control Bypass: AI processing occurred despite policy enforcement.
- Trust Boundary Violation: Copilot acted as a privileged internal processor without honoring classification restrictions.
- Compliance Risk: Potential regulatory implications under GDPR, HIPAA, ISO 27001, and industry frameworks.
- AI Governance Gap: Demonstrates that AI systems must be independently validated against traditional security controls.
Broader Implications
This issue shows that AI tools embedded in business software can sometimes bypass security rules, even when protections such as DLP and sensitivity labels are properly configured. It demonstrates that AI systems can create new risk areas that traditional security controls may not fully cover.
As more companies use AI assistants in daily work, security teams must regularly test and monitor how AI handles sensitive data. AI should be treated like a powerful internal system that needs strict oversight, not just a simple productivity feature.
Remediation:
Microsoft has initiated a fix rollout and is monitoring deployment progress. In the meantime, organizations should take proactive measures:
- Validate that sensitivity labels are now properly enforced with Copilot.
- Audit Copilot usage logs and AI interaction history (see the audit sketch after this list).
- Re-test DLP enforcement across the Sent Items and Drafts folders.
- Update AI governance documentation and risk registers.
- Conduct tabletop exercises covering AI-driven data exposure scenarios.
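For the audit item above, Copilot chat activity surfaces in the Microsoft Purview unified audit log and can be pulled programmatically through the Office 365 Management Activity API. Below is a minimal sketch, assuming an app-only token for the manage.office.com resource in MGMT_TOKEN and the tenant id in TENANT_ID; the CopilotInteraction operation name is an assumption based on Purview’s audit schema and should be verified against your tenant.

```python
import os
import requests

TENANT = os.environ["TENANT_ID"]
HEADERS = {"Authorization": f"Bearer {os.environ['MGMT_TOKEN']}"}
BASE = f"https://manage.office.com/api/v1.0/{TENANT}/activity/feed"

def copilot_audit_records(start: str, end: str):
    """Yield Copilot interaction records from the Audit.General content type.
    start/end are UTC timestamps, e.g. '2026-02-01T00:00:00Z' (max 24h window)."""
    url = (f"{BASE}/subscriptions/content"
           f"?contentType=Audit.General&startTime={start}&endTime={end}")
    resp = requests.get(url, headers=HEADERS)
    resp.raise_for_status()
    for blob in resp.json():  # list of content blob descriptors
        records = requests.get(blob["contentUri"], headers=HEADERS).json()
        for record in records:
            # Assumption: Copilot activity is logged under this operation name;
            # confirm against your tenant's audit schema in Purview.
            if record.get("Operation") == "CopilotInteraction":
                yield record

if __name__ == "__main__":
    # Requires an active Audit.General subscription
    # (POST .../subscriptions/start?contentType=Audit.General once per tenant).
    for rec in copilot_audit_records("2026-02-01T00:00:00Z", "2026-02-01T23:59:59Z"):
        print(rec.get("CreationTime"), rec.get("UserId"))
```

Records pulled this way can be cross-checked against the labeled messages found earlier to determine whether any Copilot session touched confidential content during the exposure window.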
Conclusion:
This incident highlights that AI integrations can introduce unexpected security gaps, even in well-configured enterprise environments. Organizations cannot assume that existing security controls will automatically work the same way with AI-powered features.
As AI adoption increases, companies must strengthen AI governance, continuously validate security policies, and monitor AI behavior just like any other critical system. Proactive testing and oversight are essential to prevent future data exposure risks.
DLP policy bypasses by AI assistants signal a serious security gap that must be addressed at the enterprise level; as AI tools take on a growing role in enterprise workflows, their impact on security posture cannot be overlooked.