Flag for Teams' Malicious Messages

By enabling Defender for Office 365 Plan 1 users to directly report suspicious messages, Microsoft is significantly enhancing threat detection within Microsoft Teams. This update, tracked under Roadmap ID 531760, democratizes threat intelligence gathering, a capability that was previously exclusive to higher-tier Plan 2 subscribers.
This represents a change in Microsoft's security approach. An update published on February 9, 2026, states that this rollout tackles the increasing need to secure collaboration platforms with the same vigor as email environments. Enabling end users to serve as the first line of defense has become essential to a strong security posture as the boundaries between internal communication and external threats become increasingly hazy.
Previously, only organizations with Defender for Office 365 Plan 2 could report messages in Teams, whether in direct chats, channels, or meeting logs. As a result, Plan 1 environments lacked the benefit of real-time user feedback and relied solely on automated backend protections. The latest update unifies this experience.
Once the rollout is completed in late March 2026, Plan 1 users will be able to tag messages in two categories:

- Security Risk: for content believed to be spam, malware, or phishing.
- Not a Security Risk: for legitimate messages that automated filters incorrectly flagged (false positives).

Security operations centers (SOCs) depend on these user-generated signals.
They offer immediate insight into potential security lapses and help train Microsoft's detection algorithms to better understand the subtleties of conversational attacks, such as chat-based business email compromise (BEC) attempts. Although this feature improves security, it requires administrative action to work: Microsoft has made clear that use of the reporting feature is optional.
It honors the organization's existing "User reported" configuration settings. To prepare for the launch in mid-March, security administrators should visit the Microsoft Defender portal. If the "User reported" settings are enabled, the toggles for Teams reporting will turn on automatically. User-submitted reports will then be routed to a designated mailbox set up by the IT team or to the "User reported" page in the Defender portal, enabling centralized triage and investigation.
This action comes as attackers are increasingly turning to collaboration tools instead of traditional email phishing. Employees frequently consider platforms like Teams to be "trusted spaces," which leaves them vulnerable to social engineering attacks. Organizations can react more quickly to campaigns that evade automated filters by incorporating user feedback directly into the Defender detection loop.
When the feature goes live next month, security teams are encouraged to update their internal documentation and inform staff members of these changes so they know how and when to report suspicious activity.











