
Microsoft 365 Copilot Defect Exposes AI Summaries of Confidential Emails

Microsoft 365 Copilot Vulnerability Bypasses DLP Policies, Summarizes Confidential Emails; Bug Tracked as CW1226324

Summary:

A recently disclosed issue in Microsoft 365 Copilot caused the AI assistant to summarize confidential emails despite sensitivity labels and Data Loss Prevention (DLP) policies being configured. 

The bug, tracked under CW1226324, allowed Copilot’s “Work Tab” chat feature to process and summarize emails from Sent Items and Draft folders, even when those emails carried confidentiality labels designed to restrict automated access. 

Microsoft's Findings

Microsoft’s investigation revealed a code-level defect as the root cause. The flaw allows Copilot to inadvertently pick up items stored in users’ Sent Items and Draft folders, bypassing the confidentiality labels applied to those messages.

Although Microsoft categorized the issue as an advisory with potentially limited scope, the incident raises significant concerns regarding AI governance, trust boundaries, and enterprise data protection controls.


Vulnerability Details 

The issue stemmed from an internal coding error in Microsoft 365 Copilot's Work Tab chat feature. Because of this error, Copilot was able to access emails stored in the Sent Items and Drafts folders, even if they were marked as confidential.

In normal conditions, sensitivity labels and DLP policies should block automated tools from processing such emails.

However, because of this flaw, Copilot treated those protected emails as regular content and created summaries from them until Microsoft began deploying a fix in February 2026. 

Attack Flow 

  • Configuration: Organization applies confidentiality labels and DLP policies to sensitive emails. 
  • Storage: Emails are stored in Sent Items or Drafts folders. 
  • Trigger: User interacts with Copilot "Work Tab" chat. 
  • Processing: Due to the code bug, Copilot accesses labeled emails. 
  • Exposure: Copilot generates summaries of confidential content, bypassing expected DLP enforcement. 

Source: 0din 

Why It’s Effective 

  • DLP Control Bypass: AI processing occurred despite policy enforcement. 
  • Trust Boundary Violation: Copilot acted as a privileged internal processor without honoring classification restrictions. 
  • Compliance Risk: Potential regulatory implications under GDPR, HIPAA, ISO 27001, and industry frameworks. 
  • AI Governance Gap: Demonstrates that AI systems must be independently validated against traditional security controls. 

Broader Implications 

This issue shows that AI tools inside business software can sometimes ignore security rules, even when protections like DLP and sensitivity labels are properly configured. It demonstrates that AI systems can create new risk areas that traditional security controls may not fully cover. 

As more companies use AI assistants in daily work, security teams must regularly test and monitor how AI handles sensitive data. AI should be treated like a powerful internal system that needs strict oversight, not just a simple productivity feature. 

Remediation

Microsoft has initiated a fix rollout and is monitoring deployment progress. However, organizations should take proactive measures: 

  • Validate that sensitivity labels are now properly enforced with Copilot. 
  • Audit Copilot usage logs and AI interaction history. 
  • Re-test DLP enforcement across Sent and Draft folders. 
  • Update AI governance documentation and risk registers. 
  • Conduct tabletop exercises covering AI-driven data exposure scenarios. 

Conclusion: 
This incident highlights that AI integrations can introduce unexpected security gaps, even in well-configured enterprise environments. Organizations cannot assume that existing security controls will automatically work the same way with AI-powered features. 

As AI adoption increases, companies must strengthen AI governance, continuously validate security policies, and monitor AI behavior just like any other critical system. Proactive testing and oversight are essential to prevent future data exposure risks. 

The bypassing of DLP policies by AI assistants signals a significant security gap that must be addressed at the enterprise level; the impact of AI tools on enterprise security posture cannot be underestimated.


Cybercriminals Exploit Azure AD Credentials Leaked in ASP.NET Core Configuration File

A critical Azure AD exposure, discovered by Resecurity researchers, allowed cybercriminals to obtain the digital keys to Azure cloud environments.

The exposure enabled unauthorized token requests against Microsoft's OAuth 2.0 endpoints, giving adversaries a direct path to Microsoft Graph and Microsoft 365 data.

Even a small cloud misconfiguration can give cyber attackers a foothold, and that is what happened here: a configuration file for ASP.NET Core applications was leaking credentials for Azure Active Directory (Azure AD).

Cloud-native applications are not merely hosted in the cloud; they are built to thrive in a cloud environment, offering scalability, resilience, and flexibility that make them a game changer.


Enterprises cannot overlook this issue: Resecurity's HUNTER team found Azure AD credentials, a ClientId and ClientSecret, exposed in an Application Settings (appsettings.json) file on the public Internet.
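For illustration, a leak of this kind in an appsettings.json might look like the following minimal sketch. This is not the actual leaked file; the "AzureAd" section follows the common ASP.NET Core convention, and all values are placeholders:

```json
{
  "AzureAd": {
    "Instance": "https://login.microsoftonline.com/",
    "TenantId": "00000000-0000-0000-0000-000000000000",
    "ClientId": "11111111-1111-1111-1111-111111111111",
    "ClientSecret": "PLACEHOLDER-not-a-real-secret"
  }
}
```

If a file like this is served from a public web path, anyone who fetches it holds everything needed for the token requests described below.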

Once the credentials land in attackers' hands, they can compromise an organization's Azure-based cloud deployment: retrieving sensitive data from SharePoint or Exchange Online, abusing the Graph API for privilege escalation or persistence, and deploying malicious applications under the organization's tenant.

Exploiting the Azure AD Flaw: The Attack Flow

To exploit the flaw, an attacker first uses the leaked ClientId and ClientSecret to authenticate against Azure AD via the OAuth2 Client Credentials flow and acquire an access token.

Once this is acquired, the attacker then can send a GET request to the Microsoft Graph API to enumerate users within the tenant.

This allows them to collect usernames and emails; build a list for password spraying or phishing; and/or identify naming conventions and internal accounts, according to the post.

Attackers can also query the Microsoft Graph API to enumerate OAuth2 permission grants within the tenant, revealing which applications have been authorized and which permissions they hold.

The acquired token also allows an attacker to use group information to identify privilege clusters and business-critical teams.
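The flow above, token acquisition followed by Graph enumeration, can be sketched in Python. This is a hedged illustration, not a working exploit: it only builds the requests rather than sending them, and all credential values are placeholders. The token endpoint and the Graph `/users` endpoint are the standard Microsoft ones.

```python
# Sketch of the attack flow described above, using leaked Azure AD app
# credentials. All credential values are placeholders, and the requests
# are constructed but deliberately never sent.

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Build the OAuth2 client-credentials token request (step 1)."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = {
        "grant_type": "client_credentials",
        "client_id": client_id,          # leaked ClientId
        "client_secret": client_secret,  # leaked ClientSecret
        "scope": "https://graph.microsoft.com/.default",
    }
    return url, body

def build_user_enumeration_request(access_token: str):
    """Build the Graph GET request used to enumerate tenant users (step 2)."""
    url = "https://graph.microsoft.com/v1.0/users?$select=displayName,mail"
    headers = {"Authorization": f"Bearer {access_token}"}
    return url, headers
```

The second request is what yields the usernames and emails used for password spraying or phishing; similar GETs against the permission-grant and group endpoints cover the later steps.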

Protecting the Enterprise from Azure Secret Exposure

When enterprises fail to practice regular scanning, penetration testing, or code reviews, exposed cloud files can remain unnoticed until attackers discover and exploit them, according to the post.

To improve security posture, enterprises can restrict file access; remove secrets from code and configuration files; rotate exposed credentials immediately; enforce least-privilege principles; and set up monitoring and alerts on credential use, according to the post.
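The "remove secrets from configuration files" recommendation is usually enforced with automated secret scanning. A minimal sketch of such a scan is below; the two regex patterns are illustrative assumptions, not an exhaustive ruleset, and dedicated secret scanners cover far more cases.

```python
import re

# Minimal secret-scanning sketch for appsettings.json-style files.
# The patterns below are illustrative assumptions, not a complete ruleset.
SECRET_PATTERNS = [
    # A populated "ClientSecret" entry (non-empty quoted value).
    re.compile(r'"ClientSecret"\s*:\s*"(?!\s*")[^"]+"'),
    # A GUID-shaped "ClientId" entry, as Azure AD app IDs are GUIDs.
    re.compile(r'"ClientId"\s*:\s*"[0-9a-fA-F-]{36}"'),
]

def scan_config(text: str) -> list[str]:
    """Return all pattern matches found in a configuration file's text."""
    findings = []
    for pattern in SECRET_PATTERNS:
        findings.extend(pattern.findall(text))
    return findings
```

Run against each configuration file in CI before deployment; any finding should block the build until the secret is moved to a vault or environment variable and rotated.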

Importance of Automation in Cloud-Native Applications

Implement continuous integration and continuous deployment (CI/CD) pipelines to automate building, deploying, and testing cloud native applications. Manage and provision cloud infrastructure using code, allowing for version control and repeatability. 

Following best practices when developing cloud-native apps yields several benefits, such as increased scalability, fewer critical failures, and higher efficiency.

Product-focused enterprises will adopt a cloud-first approach and must consider how best to go about cloud computing.

What Could Happen If the Azure AD Flaw Is Not Addressed?

The Azure Active Directory (Azure AD) vulnerability is rated as high impact.

Once authenticated, attackers can:

  • Retrieve sensitive SharePoint, OneDrive, or Exchange Online data via Graph API calls.
  • Enumerate users, groups, and roles, mapping out the tenant’s privilege model.
  • Abuse permission grants to escalate privileges or install malicious service principals.
  • Deploy rogue applications under the compromised tenant, creating persistence and backdoors.

Enterprises must perform compliance checks to ensure that application designs meet industry standards and regulatory requirements, and put robust auditing and reporting mechanisms in place to track changes and any access to sensitive data. 

Source: JSON Config File Leaks Azure AD Credentials

Critical Flaw in Azure AD Lets Attackers Steal Credentials and Install Malicious Apps
