DLP

Microsoft 365 Copilot Defect Exposes AI Summarizes of Confidential Emails

Microsoft 365 Copilot Vulnerability Bypasses DLP Policies, Summarizes Confidential Emails; Bug Tracked CW1226324

Summary :

A recently disclosed issue in Microsoft 365 Copilot caused the AI assistant to summarize confidential emails despite sensitivity labels and Data Loss Prevention (DLP) policies being configured. 

The bug, tracked under CW1226324, allowed Copilot’s “Work Tab” chat feature to process and summarize emails from Sent Items and Draft folders, even when those emails carried confidentiality labels designed to restrict automated access. 

Microsoft findings

Microsoft’s investigation revealed a code-level defect as the root cause. The flaw allows Copilot to inadvertently pick up items stored in users’ Sent Items and Draft folders, bypassing the confidentiality labels applied to those messages.

Although Microsoft categorized the issue as an advisory with potentially limited scope, the incident raises significant concerns regarding AI governance, trust boundaries, and enterprise data protection controls.

As per CSN the flaw allows Copilot to inadvertently pick up items stored in users’ Sent Items and Draft folders, ignoring the confidentiality labels applied to those messages.

Vulnerability Details 

The issue happened because of an internal coding mistake in Microsoft 365 Copilot’s Work Tab chat feature. Due to this error, Copilot was able to access emails stored in Sent and Draft folders, even if they were marked as confidential. 

In normal conditions, sensitivity labels and DLP policies should block automated tools from processing such emails.

However, because of this flaw, Copilot treated those protected emails as regular content and created summaries from them until Microsoft began deploying a fix in February 2026. 

Attack Flow 

Step Description 
Configuration Organization applies confidentiality labels and DLP policies to sensitive emails. 
Storage Emails are stored in Sent Items or Draft folders. 
Trigger User interacts with Copilot “Work Tab” Chat. 
Processing Due to the code bug, Copilot accesses labeled emails. 
Exposure Copilot generates summaries of confidential content, bypassing expected DLP enforcement. 

Source:0din 

Why It’s Effective 

  • DLP Control Bypass: AI processing occurred despite policy enforcement. 
  • Trust Boundary Violation: Copilot acted as a privileged internal processor without honoring classification restrictions. 
  • Compliance Risk: Potential regulatory implications under GDPR, HIPAA, ISO 27001, and industry frameworks. 
  • AI Governance Gap: Demonstrates that AI systems must be independently validated against traditional security controls. 

Broader Implications 

This issue shows that AI tools inside business software can sometimes ignore security rules, even when protection like DLP and sensitivity labels are properly set. It proves that AI systems can create new risk areas that traditional security controls may not fully cover. 

As more companies use AI assistants in daily work, security teams must regularly test and monitor how AI handles sensitive data. AI should be treated like a powerful internal system that needs strict oversight, not just a simple productivity feature. 

Remediation

Microsoft has initiated a fixed rollout and is monitoring deployment progress. However, organizations should take proactive measures: 

  • Validate that sensitivity labels are now properly enforced with Copilot. 
  • Audit Copilot usage logs and AI interaction history. 
  • Re-test DLP enforcement across Sent and Draft folders. 
  • Update AI governance documentation and risk registers. 
  • Conduct tabletop exercises covering AI-driven data exposure scenarios. 

Conclusion: 
This incident highlights that AI integrations can introduce unexpected security gaps, even in well-configured enterprise environments. Organizations cannot assume that existing security controls will automatically work the same way with AI-powered features. 

As AI adoption increases, companies must strengthen AI governance, continuously validate security policies, and monitor AI behavior just like any other critical system. Proactive testing and oversight are essential to prevent future data exposure risks. 

Bypassing DLP policies by AI aided assistants signals huge security gap which needs to be addressed at enterprise level as AI tool taking over enterprise security posture cannot be undermined.

References

Copilot Studio SupplyChain Attack Steals OAuth Tokens via CoPhishing

Summary 

The CoPhish attack is a sophisticated phishing technique exploiting Microsoft Copilot Studio to steal OAuth tokens by tricking users into granting attackers unauthorized access to their Microsoft Entra ID accounts.

By Copilot Studio’s customizable AI agents, attackers create chatbots hosted on legitimate Microsoft domains that wrap traditional OAuth consent attacks in an authentic-looking interface, increasing the likelihood of successful deception. 

Technical Details 

The attackers often use a trial license or compromised tenant to create the agent, backdooring the authentication workflow so that, post-consent, OAuth tokens are exfiltrated via HTTP to attacker infrastructure.

Few Demo links like copilotstudio.microsoft.com add credibility, closely mimicking official Microsoft Copilot services, and victims see familiar branding and login flows.

While Microsoft has implemented consent policy updates including blocking risky permissions by default for most users significant gaps remain: unprivileged users can still approve internal apps and privileged admins retain broad consent authority.

Tokens exfiltrated by CoPhish can be used for impersonation, data theft or sending further phishing emails, often going undetected as the traffic is routed through Microsoft infrastructure. 

malicious CopilotStudio page                                                                                                                         Source: securitylabs.datadoghq.com 

Attack Flow 

Step Description 
1. Build Malicious Copilot Agent Attackers create a customized Copilot Studio chatbot, usually on a trial license within their own or a compromised Microsoft tenant, configuring it to appear as a legitimate assistant. 
2. Backdoor Authentication Workflow The agent’s “Login” topic is modified to include an HTTP request that will exfiltrate any OAuth tokens granted by users during authentication. 
3. Share Demo Link Attackers generate and distribute demo website URL (like, copilotstudio.microsoft.com) pointing to the malicious chatbot, mimicking official Copilot Studio services and passing basic domain trust checks. 
4. Victim and Trigger Consent Victims access the link, interact with the familiar interface, and are prompted to login, beginning an OAuth consent flow that requests broad Microsoft Graph permissions. 
5. Token Exfiltration After the victim consents, the agent collects the issued OAuth token and sends it via HTTP to an attacker-controlled server, often relaying through Microsoft IP addresses to avoid detection in standard traffic logs. 
6. Abuse Granted Permissions Attackers use the stolen token to impersonate the victim, accessing emails, calendars, and files or conducting further malicious actions such as sending phishing emails or stealing sensitive data. 
7. Persist and Retarget Due to policy gaps, attackers can repeat the process targeting both internal and privileged users, tailoring requested app permissions and adapting to Microsoft’s evolving security measures. 

                             Source: securitylabs.datadoghq.com 

Why It’s Effective 

  • Leverages trusted Microsoft domains and branding with realistic AI chatbot flows, bypassing phishing detection and user suspicion. 
  • Bypasses multi-factor authentication by stealing fully privileged OAuth tokens that persist until revoked. 
  • Targets both regular users and privileged admins by adapting requested permissions, making it scalable and versatile. 

Recommendations 

Here are some recommendations below 

  • Enforce strict Microsoft Entra ID consent policies to limit user approval of app permissions, especially high-risk scopes. 
  • Restrict or disable user creation and publishing of Copilot Studio agents unless explicitly authorized by admins. 
  • Monitor Entra ID audit logs and Microsoft Purview for suspicious app consent, agent creation or modifications in Copilot workflows. 
  • Apply Azure AD Conditional Access requiring MFA and device compliance for accessing Copilot Studio and related AI services. 
  • Implement tenant-level Data Loss Prevention (DLP) and sensitivity labeling 
  • Educate users on phishing risks and regularly reviewing/revoking app permissions and tokens. 

Conclusion: 
CoPhish highlights how AI-powered low-code platforms like Microsoft Copilot Studio can be exploited for advanced phishing attacks targeting identity systems.

Despite Microsoft’s improvements to consent policies, significant risks remain, requiring organizations to enforce strict consent controls, limit app creation, and monitor Entra ID logs vigilantly. As AI-driven tools grow, proactive security measures are essential to defend against these evolving hybrid threats leveraging trusted cloud services. 

References

Hashtags 

#Infosec #CyberSecurity #Microsoft #Copilot #Vulnerabilitymanagement # Patch Management #ThreatIntel CISO #CXO #Intrucept  

New Cyberattack Methodology ‘Man in Prompt’, User’s at Risk, Target-AI Tools

AI tools like ChatGPT, Google Gemini and others being afflicted by malicious actors via injecting harmful instructions into leading GenAI tools. These were overlooked previously and attack methodology targets the browser extensions installed by various organizations.

The attack methodology named as ‘Man in Prompt’, exercise its attack with new class exploit targeting the AI tools as per LayerX’s researchers.

As per the research any browser extension, even without any special permissions, can access the prompts of both commercial and internal LLMs and inject them with prompts to steal data, exfiltrate it and cover their tracks. 

The exploit has been tested on all top commercial LLMs, with proof-of-concept demos provided for ChatGPT and Google Gemini. 

The question is how do they impact Users & organizations at large & how does the AI tools function within web browsers?

For organizations the implications can be high then expected as AI tools are most sought after and slowly organization across verticals are relying on AI tools.

The LLMs used and tested on many organizations are mostly trained ones. They carry huge data set of information which are mostly confidential and possibility of being vulnerable to such attack rises .

The attack methodology named as ‘Man in Prompt’, exercise its attack with new class exploit targeting the AI tools as per LayerX’s researchers. As per the research any browser extension, even without any special permissions, can access the prompts of both commercial and internal LLMs and inject them with prompts to steal data, exfiltrate it, and cover their tracks. 

The attack methodology named as ‘Man in Prompt’, exercise its attack with new class exploit targeting the AI tools as per LayerX’s researchers. As per the research any browser extension, even without any special permissions, can access the prompts of both commercial and internal LLMs and inject them with prompts to steal data, exfiltrate it, and cover their tracks. 

LayerX researcher termed this type of attack as ‘hacking copilots’ that are equipped to steal organizational information.

The prompts given are a part of the web page structure where input fields are known as the Document Object Model, or DOM. So virtually any browser extension with basic scripting access to the DOM can read or alter what users type into AI prompts, even without requiring special permissions.

Bad actors can use compromised extensions to carry out activities including manipulating a user’s input to the AI.

  • Perform prompt injection attacks, altering the user’s input or inserting hidden instructions.
  • Extract data directly from the prompt, response, or session.
  • Compromise model integrity, tricking the LLM into revealing sensitive information or performing unintended actions

Understanding the attack scenario

Proof-of-concept attacks against major platforms

For ChatGPT, an extension with minimal declared permissions could inject a prompt, extract the AI’s response and remove chat history from the user’s view to reduce detection.

LayerX implemented an exploit that can steal internal data from corporate environments using Google Gemini via its integration into Google Workspace.

Over the last few months, Google has rolled out new integrations of its Gemini AI into Google Workspace. Currently, this feature is available to organizations using Workspace and paying users.

Gemini integration is implemented directly within the page as added code on top of the existing page. It modifies and directly writes to the web application’s Document Object Model (DOM), giving it control and access to all functionality within the application

These platforms are vulnerable to  any exploit which Layer X researchers showcased that without any special permissions shows how practically any user is vulnerable to such an attack. 

Threat mitigation

These kind of attacks creates a blind spot for traditional security tools like endpoint Data Loss Prevention (DLP) systems or Secure Web Gateways, as they lack visibility into these DOM-level interactions. Blocking AI tools by URL alone also won’t protect internal AI deployments.

LayerX advises organisations to adjust their security strategies towards inspecting in-browser behaviour.

Key recommendations include monitoring DOM interactions within AI tools to detect suspicious activity, blocking risky extensions based on their behavior rather than just their listed permissions, and actively preventing prompt tampering and data exfiltration in real-time at the browser layer.

(Source: https://layerxsecurity.com/blog/man-in-the-prompt-top-ai-tools-vulnerable-to-injection/)

Re-release of November 2024 Exchange Server Security Updates

Microsoft users had a tough time to send or load attachments to emails when using Outlook, were unable to connect to the server, and in some cases could not log into their accounts.

Microsoft Exchange Online is a platform for business communication that has a mail server and cloud apps for email, contacts, and calendars.

Microsoft mitigated the issue after identification were able to determine the cause of the outages and is rolling out a fix for the issue. That rollout is gradual, however, as outage reports continue to come in at DownDetector.

Impact

The outage left many users unable to communicate with colleagues, particularly as it coincided with the start of the workday in Europe. Frustration quickly spread across social media, with users reporting issues accessing emails and participating in Teams calls

Re-release of November 2024 Exchange Server Security Updates 

Summary 

OEM Microsoft 
Severity High 
Date of Announcement 27/11/2024 
Product Microsoft Exchange Server 
CVE ID CVE-2024-49040 
CVSS Score 7.5 
Exploited in Wild No 
Patch/Remediation Available Yes 
Advisory Version 1.0 

Overview 

On November 27, 2024, Microsoft re-released the November 2024 Security Updates (SUs) for Exchange Server to resolve an issue introduced in the initial release on November 12, 2024. The original update (SUv1) caused Exchange Server transport rules to intermittently stop functioning, particularly in environments using transport or Data Loss Protection (DLP) rules. The updated version (SUv2) addresses this issue. 

Table of Actions for Admins: 

Scenario Action Required 
SUv1 installed manually, and transport/DLP rules are not used Install SUv2 to regain control over the X-MS-Exchange-P2FromRegexMatch header. 
SUv1 installed via Windows/Microsoft Update, no transport/DLP rules used No immediate action needed; SUv2 will be installed automatically in December 2024. 
SUv1 installed and then uninstalled due to transport rule issues Install SUv2 immediately. 
SUv1 never installed Install SUv2 immediately. 

Remediation Steps 

1. Immediate Actions 

  • Use the Health Checker script to inventory your Exchange Servers and assess update needs. 
  • Install the latest Cumulative Update (CU) followed by the November 2024 SUv2. 

2. Monitor System Performance 

  • After enabling AMSI integration for message bodies, monitor for any performance issues such as delays in mail flow or server responsiveness. 

3. Run SetupAssist Script for Issues 

  • Use the SetupAssist script to troubleshoot issues with failed installations or update issues, and check logs for specific error details. 

References

Scroll to top