AI assistants have quickly changed the way things are done every day, making it easier for teams to handle overflowing inboxes, client communications, and incident response. Microsoft Copilot and other tools like it work right into your daily tasks, summarizing emails and meetings and pulling in information from all over the Microsoft 365 ecosystem. But this ease of use creates a new security boundary that many businesses are not yet ready to defend.

Microsoft 365 Copilot's email summarization surfaces have a serious cross-prompt injection vulnerability (XPIA) that researchers at Permiso Security have found. This vulnerability is now known as CVE-2026-26133. The flaw lets an attacker take over Copilot's output by putting attacker-controlled text in a regular email. This makes phishing content that looks real in the assistant's trusted summary interface without using attachments, macros, or regular exploit code.

This makes it possible to exfiltrate data with just one click. When the user clicks what looks like a "Verify your Identity" button, any internal context that was included in that link is sent to infrastructure controlled by the attacker without the user knowing it. This attack pattern is very similar to CVE-2025-32711 (EchoLeak), which Aim Security found.

In that case, hidden prompts in emails made Microsoft 365 Copilot send sensitive data through specially made image URLs. This shows that XPIA against AI summarization tools is a repeatable, cross-platform vulnerability class, not just one case. If your company uses Microsoft 365 Copilot, you should do the following: Right away, apply the March 2026 patch. Microsoft said on March 11, 2026, that it would be available to all affected surfaces.

Check Copilot's permissions. Limit its ability to access only what it needs to do its job, and limit cross-app access to Teams, OneDrive, and SharePoint whenever possible. Turn on Microsoft Purview sensitivity labels and DLP policies.

These settings make it harder for someone to steal data by retrieving it.