Microsoft 365 Copilot was till just lately susceptible to an assault technique that might have been leveraged by risk actors to acquire delicate data, AI safety agency Purpose Safety reported on Wednesday.
The zero-click assault, dubbed EchoLeak and involving a vulnerability tracked as CVE-2025-32711, enabled attackers to get Copilot to mechanically exfiltrate doubtlessly useful data from a focused person or group with out requiring person interplay.
Microsoft on Wednesday revealed an advisory for the vulnerability, which it described as ‘AI command injection in M365 Copilot’ and categorised as ‘vital’, however knowledgeable prospects {that a} patch has been applied on the server aspect and no buyer motion is required.
Be taught Extra About AI Vulnerabilities at SecurityWeek’s AI Danger Summit
The Microsoft 365 Copilot is a productiveness assistant designed to boost the best way customers work together with purposes similar to Phrase, PowerPoint and Outlook. Copilot can question emails, extracting and managing data from the person’s inbox.
The EchoLeak assault entails sending a specifically crafted electronic mail to the focused person. The e-mail incorporates directions for Copilot to gather secret and private data from prior chats with the person and ship them to the attacker’s server.
The person doesn’t must open the malicious electronic mail or click on on any hyperlinks. The exploit, which Purpose Safety described as oblique immediate injection, is triggered when the sufferer asks Copilot for data referenced within the malicious electronic mail. That’s when Copilot executes the attacker’s directions to gather data beforehand supplied by the sufferer and ship it to the attacker.
For instance, the attacker’s electronic mail can reference worker onboarding processes, HR guides, or go away of absence administration guides. When the focused person asks Copilot about one in all these matters, the AI will discover the attacker’s electronic mail and execute the directions it incorporates. Commercial. Scroll to proceed studying.
To be able to execute an EchoLeak assault, the attacker has to bypass a number of safety mechanisms, together with cross-prompt injection assault (XPIA) classifiers designed to forestall immediate injection. XPIA is bypassed by phrasing the malicious electronic mail in a method that makes it appear as if it’s aimed on the recipient, with out together with any references to Copilot or different AI.
The assault additionally bypasses picture and hyperlink redaction mechanisms, in addition to Content material Safety Coverage (CSP), which ought to stop knowledge exfiltration.
“This can be a novel sensible assault on an LLM utility that may be weaponized by adversaries,” Purpose Safety defined. “The assault ends in permitting the attacker to exfiltrate essentially the most delicate knowledge from the present LLM context – and the LLM is getting used towards itself in ensuring that the MOST delicate knowledge from the LLM context is being leaked, doesn’t depend on particular person conduct, and could be executed each in single-turn conversations and multi-turn conversations.”
Purpose Safety identified that whereas it demonstrated the EchoLeak assault towards Microsoft’s Copilot, the approach may match towards different AI purposes as nicely.
Associated: The Root of AI Hallucinations: Physics Idea Digs Into the ‘Consideration’ Flaw
Associated: Going Into the Deep Finish: Social Engineering and the AI Flood