Microsoft says Copilot was summarizing confidential emails without permission
A bug in Microsoft 365 Copilot has been causing the AI assistant to summarize emails explicitly labeled as confidential, according to a report from Bleeping Computer. The security bug reportedly bypassed organizations' data loss prevention (DLP) policies, which are used to protect sensitive information.
The bug specifically affected Copilot Chat. According to Microsoft's documentation, it caused emails with a confidential label to be "incorrectly processed by Microsoft 365 Copilot chat."
For context, Copilot Chat rolled out to Microsoft 365 apps like Word, Excel, Outlook, and PowerPoint for enterprise customers last fall and is pitched as a content-aware AI assistant. Tech companies like Microsoft are integrating AI assistants into virtually all of their products, creating new types of cybersecurity risks in the process. Businesses using AI assistants could be at risk from prompt injection and data compliance violations, for instance.
The Copilot Chat issue, tracked internally as CW1226324, was first detected on Jan. 21 and affects Copilot's "work tab" chat feature, Bleeping Computer reports. Per Microsoft's advisory, Copilot Chat was incorrectly pulling in and summarizing emails from users' Sent Items and Drafts folders, even when those messages had sensitivity labels designed to block automated access. In other words, emails and sensitive information that were supposed to be off-limits weren't.
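To make the failure mode concrete, here is a minimal, hypothetical sketch in Python of the kind of label check a DLP-aware retrieval layer would be expected to apply before handing email content to an assistant. All names and label values here are invented for illustration; nothing in this snippet reflects Microsoft's actual code or configuration.

```python
# Conceptual sketch only: hypothetical names, not Microsoft's implementation.
# It illustrates a retrieval layer honoring sensitivity labels before an
# assistant is allowed to summarize a message.
from dataclasses import dataclass

BLOCKED_LABELS = {"confidential", "highly confidential"}  # hypothetical label names

@dataclass
class Email:
    folder: str       # e.g. "Inbox", "Sent Items", "Drafts"
    sensitivity: str  # label applied under the organization's DLP policy
    body: str

def assistant_may_read(message: Email) -> bool:
    """Return True only if the label permits automated processing."""
    return message.sensitivity.lower() not in BLOCKED_LABELS

def gather_context(mailbox: list[Email]) -> list[str]:
    """Collect only the message bodies the assistant is allowed to summarize."""
    return [m.body for m in mailbox if assistant_may_read(m)]

if __name__ == "__main__":
    mailbox = [
        Email("Sent Items", "Confidential", "Draft acquisition terms..."),
        Email("Inbox", "General", "Lunch on Thursday?"),
    ]
    print(gather_context(mailbox))  # only the unlabeled message should appear
```

Per Bleeping Computer's report, the equivalent of that check was effectively not applied to messages in Sent Items and Drafts, which is how labeled content ended up in Copilot Chat's summaries anyway.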
Microsoft confirmed to Bleeping Computer that a code issue was responsible and said it began rolling out a fix in early February. The company is continuing to monitor the deployment and is reaching out to some affected users to verify that the patch is working. Microsoft hasn't disclosed how many organizations were affected, though it notes the scope may change as its investigation continues.