A security flaw in Microsoft 365 Copilot bypasses configured Data Loss Prevention (DLP) policies and incorrectly summarizes email messages protected by confidentiality sensitivity labels, exposing potentially sensitive organizational data to unauthorized AI processing. First reported on February 4, 2026, the problem, tracked under Microsoft reference CW1226324, remains unresolved.

According to the incident report, the Copilot "Work Tab" Chat feature actively summarizes emails carrying a confidential sensitivity label even when DLP policies are specifically configured to prevent such processing.

Technical Specifics and the Root Cause

According to Microsoft's investigation, the primary cause was a code-level flaw.

The vulnerability enables Copilot to unintentionally retrieve items from users' Sent Items and Drafts folders, bypassing the confidentiality labels applied to those messages. In principle, sensitivity labels and DLP policies should keep Copilot from viewing or processing any email marked as confidential. For the affected folders, however, the bug renders these controls inoperative, allowing the AI to surface restricted content in chat summaries.
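The behavior described above amounts to a skipped enforcement check. The sketch below illustrates, in purely hypothetical terms, the kind of label-versus-policy gate a Copilot-style assistant should apply before summarizing a mail item; none of the names here correspond to a real Microsoft API.

```python
from dataclasses import dataclass, field

# Hypothetical model of the gate an AI assistant should apply before
# summarizing a mail item. All names are illustrative, not a real API.

@dataclass
class MailItem:
    folder: str             # e.g. "Inbox", "Sent Items", "Drafts"
    sensitivity_label: str  # e.g. "Confidential", "General"

@dataclass
class DlpPolicy:
    blocked_labels: set = field(default_factory=lambda: {"Confidential"})

def may_summarize(item: MailItem, policy: DlpPolicy) -> bool:
    """Return True only if the item's label is not blocked by policy.
    The reported bug behaves as if this check were skipped for items
    retrieved from the Sent Items and Drafts folders."""
    return item.sensitivity_label not in policy.blocked_labels

policy = DlpPolicy()
inbox_item = MailItem("Inbox", "General")
draft_item = MailItem("Drafts", "Confidential")

print(may_summarize(inbox_item, policy))  # True: general mail may pass
print(may_summarize(draft_item, policy))  # False: policy must block this
```

The point of the sketch is that the check must run on every retrieval path; a gate applied to the Inbox but not to Sent Items or Drafts produces exactly the exposure reported here.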

This should especially concern organizations in regulated sectors such as healthcare, finance, and government, where email confidentiality controls are not just best practice but a compliance requirement. The NHS's internal flagging of the incident as INC46740412 shows the problem has a practical effect on public sector users who depend on Microsoft 365.

As of February 11, 2026, Microsoft has begun rolling out a patch to impacted environments and is contacting a subset of affected users to confirm remediation. The issue nevertheless remains unresolved for some organizations, as the rollout has not yet reached full saturation; Microsoft expects to provide a remediation timeline as the fix progresses.

The potential impact is wide-ranging: any organization with Microsoft 365 Copilot enabled and confidentiality labels configured on email could be affected. Administrators are encouraged to check Copilot activity logs for unusual access to labeled content and to watch for updates under reference CW1226324 in the Microsoft 365 admin center. An AI assistant circumventing DLP policies is a significant security failure in its own right.
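One way to approach that log review is to export Copilot-related audit records and filter for labeled content touched in the folders implicated here. The following is a minimal sketch assuming records were exported to JSON; the field names ("Operation", "SensitivityLabel", "Folder") are assumptions about the export format, not a documented schema, so adjust them to match your tenant's actual records.

```python
import json

# Triage sketch for exported Copilot audit records. Field names are
# assumed for illustration; align them with your real export schema.

SUSPECT_FOLDERS = {"Sent Items", "Drafts"}

def flag_suspect_records(raw_json: str) -> list:
    """Return Copilot interactions that touched labeled content in the
    folders implicated by CW1226324."""
    records = json.loads(raw_json)
    return [
        r for r in records
        if r.get("Operation") == "CopilotInteraction"
        and r.get("SensitivityLabel")        # any label present
        and r.get("Folder") in SUSPECT_FOLDERS
    ]

# Fabricated sample data purely to demonstrate the filter.
sample = json.dumps([
    {"Operation": "CopilotInteraction", "SensitivityLabel": "Confidential",
     "Folder": "Drafts", "User": "alice@example.com"},
    {"Operation": "CopilotInteraction", "SensitivityLabel": None,
     "Folder": "Inbox", "User": "bob@example.com"},
])
for rec in flag_suspect_records(sample):
    print(rec["User"], rec["Folder"])  # alice@example.com Drafts
```

Matches are only a starting point for investigation: a flagged record shows Copilot touched labeled content in a suspect folder, not that data actually left the organization.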

DLP controls are a fundamental component of enterprise data governance, and an AI tool that circumvents them, even unintentionally, compromises the integrity of an organization's information protection posture. Until the fix is fully deployed, security teams should consider temporarily restricting Copilot access in environments that handle highly sensitive email communications. Microsoft is anticipated to provide its next update by 11:00 AM UTC on February 18, 2026.
