2025-06-12
Microsoft 365 Copilot Prompt Injection Attack Patched
Microsoft has addressed a "zero-click" attack chain in which Microsoft 365 (M365) Copilot's Retrieval-Augmented Generation (RAG) chatbot receives malicious instructions disguised as a typical email, causing it to "automatically exfiltrate sensitive and proprietary information from M365 Copilot context, without the user's awareness, or relying on any specific victim behavior." CVE-2025-32711, CVSS score 9.3, bypasses cross-prompt injection attack (XPIA) classifiers, external link redaction, Content-Security-Policy, and M365 Copilot's reference mentions. Aim Labs, who discovered this vulnerability (dubbed "EchoLeak"), state that while no exploitation in the wild has been observed, the relevant "general design flaws" of RAG-based AI may mean more applications are vulnerable. The researchers contend that "protecting AI applications requires introducing finer levels of granularity into current frameworks;" while the attack might be understood as Indirect Prompt Injection, Aim coins "LLM Scope Violation" to describe instructions that "make the LLM attend to trusted data in the model's context, without the user's explicit consent."
Editor's Note
Microsoft has patched the flaw, and no user action is required to address the issue. The flaw is leveraging Microsoft Graph to answer a query which is using data from their mailbox, OneDrive, SharePoint, Office Files and MS Teams. It's the Microsoft Graph interface which allows exfiltration of the otherwise organizationally private data.

Lee Neely
These retrieval bugs are really fascinating, as any GenAI that uses any type of document ingestion or web scraping feature can be prone to having issues between what is designed to be data to be ingested versus new prompt commands. These confusion bugs are common in many traditional web bugs where the system cannot determine the difference between data and code.

Moses Frost
This is another warning that AI applications are complex, immature and, from a security perspective, present serious risks to data security requiring remediation until those 'finer levels of granularity' are routinely built in.

John Pescatore
One more instance in which features, functions, and 'early-to-market' trump secure by design. We can expect it to get worse as software becomes ever more complex.

William Hugh Murray
Read more in
Microsoft: M365 Copilot Information Disclosure Vulnerability
SC Media: Microsoft 365 Copilot ‘zero-click’ vulnerability enabled data exfiltration
The Hacker News: Zero-Click AI Vulnerability Exposes Microsoft 365 Copilot Data Without User Interaction
Bleeping Computer: Zero-click AI data leak flaw uncovered in Microsoft 365 Copilot