Copilot Studio SupplyChain Attack Steals OAuth Tokens via CoPhishing
Summary
The CoPhish attack is a sophisticated phishing technique exploiting Microsoft Copilot Studio to steal OAuth tokens by tricking users into granting attackers unauthorized access to their Microsoft Entra ID accounts.
By Copilot Studio’s customizable AI agents, attackers create chatbots hosted on legitimate Microsoft domains that wrap traditional OAuth consent attacks in an authentic-looking interface, increasing the likelihood of successful deception.
Technical Details
The attackers often use a trial license or compromised tenant to create the agent, backdooring the authentication workflow so that, post-consent, OAuth tokens are exfiltrated via HTTP to attacker infrastructure.
Few Demo links like copilotstudio.microsoft.com add credibility, closely mimicking official Microsoft Copilot services, and victims see familiar branding and login flows.
While Microsoft has implemented consent policy updates including blocking risky permissions by default for most users significant gaps remain: unprivileged users can still approve internal apps and privileged admins retain broad consent authority.
Tokens exfiltrated by CoPhish can be used for impersonation, data theft or sending further phishing emails, often going undetected as the traffic is routed through Microsoft infrastructure.

malicious CopilotStudio page Source: securitylabs.datadoghq.com
Attack Flow
| Step | Description |
| 1. Build Malicious Copilot Agent | Attackers create a customized Copilot Studio chatbot, usually on a trial license within their own or a compromised Microsoft tenant, configuring it to appear as a legitimate assistant. |
| 2. Backdoor Authentication Workflow | The agent’s “Login” topic is modified to include an HTTP request that will exfiltrate any OAuth tokens granted by users during authentication. |
| 3. Share Demo Link | Attackers generate and distribute demo website URL (like, copilotstudio.microsoft.com) pointing to the malicious chatbot, mimicking official Copilot Studio services and passing basic domain trust checks. |
| 4. Victim and Trigger Consent | Victims access the link, interact with the familiar interface, and are prompted to login, beginning an OAuth consent flow that requests broad Microsoft Graph permissions. |
| 5. Token Exfiltration | After the victim consents, the agent collects the issued OAuth token and sends it via HTTP to an attacker-controlled server, often relaying through Microsoft IP addresses to avoid detection in standard traffic logs. |
| 6. Abuse Granted Permissions | Attackers use the stolen token to impersonate the victim, accessing emails, calendars, and files or conducting further malicious actions such as sending phishing emails or stealing sensitive data. |
| 7. Persist and Retarget | Due to policy gaps, attackers can repeat the process targeting both internal and privileged users, tailoring requested app permissions and adapting to Microsoft’s evolving security measures. |



Source: securitylabs.datadoghq.com
Why It’s Effective
- Leverages trusted Microsoft domains and branding with realistic AI chatbot flows, bypassing phishing detection and user suspicion.
- Bypasses multi-factor authentication by stealing fully privileged OAuth tokens that persist until revoked.
- Targets both regular users and privileged admins by adapting requested permissions, making it scalable and versatile.
Recommendations
Here are some recommendations below
- Enforce strict Microsoft Entra ID consent policies to limit user approval of app permissions, especially high-risk scopes.
- Restrict or disable user creation and publishing of Copilot Studio agents unless explicitly authorized by admins.
- Monitor Entra ID audit logs and Microsoft Purview for suspicious app consent, agent creation or modifications in Copilot workflows.
- Apply Azure AD Conditional Access requiring MFA and device compliance for accessing Copilot Studio and related AI services.
- Implement tenant-level Data Loss Prevention (DLP) and sensitivity labeling
- Educate users on phishing risks and regularly reviewing/revoking app permissions and tokens.
Conclusion:
CoPhish highlights how AI-powered low-code platforms like Microsoft Copilot Studio can be exploited for advanced phishing attacks targeting identity systems.
Despite Microsoft’s improvements to consent policies, significant risks remain, requiring organizations to enforce strict consent controls, limit app creation, and monitor Entra ID logs vigilantly. As AI-driven tools grow, proactive security measures are essential to defend against these evolving hybrid threats leveraging trusted cloud services.
References:
Hashtags
#Infosec #CyberSecurity #Microsoft #Copilot #Vulnerabilitymanagement # Patch Management #ThreatIntel CISO #CXO #Intrucept


