{"id":5566,"date":"2025-10-28T11:56:17","date_gmt":"2025-10-28T11:56:17","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=5566"},"modified":"2025-10-28T11:56:17","modified_gmt":"2025-10-28T11:56:17","slug":"copilot-diagrams-could-leak-corporate-emails-via-indirect-prompt-injection","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=5566","title":{"rendered":"Copilot diagrams could leak corporate emails via indirect prompt injection"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>Microsoft has patched an indirect prompt injection flaw in Microsoft 365 Copilot that could have allowed attackers to steal sensitive data using clickable Mermaid diagrams.<\/p>\n<p>According to findings published by security researcher Adam Logue, the exploit could be triggered through specially crafted Office documents containing hidden instructions. When processed by Copilot, these prompts caused the assistant to fetch recent enterprise emails, convert them into a hex-encoded string, and embed that data into a clickable diagram created with the diagramming tool Mermaid.<\/p>\n<p>When a user clicked what looked like a legitimate \u201clogin\u201d button in the diagram, the encoded data would be sent to an attacker-controlled server, Logue noted in a blog <a href=\"https:\/\/www.adamlogue.com\/microsoft-365-copilot-arbitrary-data-exfiltration-via-mermaid-diagrams-fixed\/\" target=\"_blank\" rel=\"noopener\">post<\/a>.<\/p>\n<p>Microsoft patched the flaw by removing support for interactive hyperlinks in Mermaid diagrams within Copilot chats. 
\u201cThis effectively mitigated the data exfiltration risk,\u201d Logue confirmed.<\/p>\n<h2 class=\"wp-block-heading\"><a><\/a>Diagram trick for data leak<\/h2>\n<p>Logue laid out a multi-stage attack chain starting with a seemingly benign Office document (for example, an Excel sheet) containing visible content alongside hidden white-text instructions on a second sheet. These hidden prompts redirect Microsoft 365 Copilot away from its intended summarization task and instead instruct it to call its internal tool \u201csearch-enterprise_emails\u201d to retrieve recent tenant emails.<\/p>\n<p>The retrieved content is then hex-encoded, broken into 30-character chunks (to satisfy rendering constraints), and embedded into a diagram created via Mermaid. That diagram is styled to look like a \u201clogin button\u201d and contains a hyperlink pointing to an attacker-controlled server.<\/p>\n<p>In a proof of concept, Logue created financial sheets with crafted instructions in white text; a successful exploit led the user to the attacker-controlled login page. \u201cWhen I asked M365 Copilot to summarize the document, it no longer told me it was about financial information and instead, responded with an excuse that the document contained sensitive information and couldn\u2019t be viewed without proper authorization or logging in first,\u201d Logue said.<\/p>\n<h2 class=\"wp-block-heading\"><a><\/a>The bigger threat of indirect prompt injection<\/h2>\n<p>The incident underscores that the risk goes beyond simple \u201cprompt injection,\u201d where a user types malicious instructions directly into an AI. Here, the attacker hides instructions inside document content that gets passed into the assistant without the user\u2019s awareness. 
Logue described how the hidden instructions use progressive task modification (e.g., \u201cfirst summarize, then ignore that and do X\u201d) layered across spreadsheet tabs.<\/p>\n<p>Additionally, the disclosure exposes a new attack surface where the diagram-generation feature (Mermaid output) becomes the exfiltration channel. Logue explained that clicking the diagram opened a browser link that quietly sent the encoded email data to an attacker-controlled endpoint. The transfer happened through a standard web request, making it indistinguishable from a legitimate click-through in many environments.<\/p>\n<p>\u201cOne of the interesting things about mermaid diagrams is that <a href=\"https:\/\/mermaid.js.org\/config\/directives.html\" target=\"_blank\" rel=\"noopener\">they also include support for CSS<\/a>,\u201d Logue noted. \u201cThis opens up some interesting attack vectors for data exfiltration, as M365 Copilot can generate a mermaid diagram on the fly and can include data retrieved from other tools in the diagram.\u201d<\/p>\n<p>Recent disclosures highlight a surge in indirect prompt injection attacks, where <a href=\"https:\/\/www.csoonline.com\/article\/4053107\/ai-prompt-injection-gets-real-with-macros-the-latest-hidden-threat.html\">hidden macros<\/a> in documents or <a href=\"https:\/\/www.csoonline.com\/article\/4069887\/github-copilot-prompt-injection-flaw-leaked-sensitive-data-from-private-repos.html\">embedded comments<\/a> in pull requests hijack AI-driven workflows and extract data without obvious user action. These trends show that tools like diagram generators and other visual outputs can quickly become stealthy exfiltration channels.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Microsoft has patched an indirect prompt injection flaw in Microsoft 365 Copilot that could have allowed attackers to steal sensitive data using clickable Mermaid diagrams. 
According to findings published by security researcher Adam Logue, the exploit could be triggered through specially crafted Office documents containing hidden instructions. When processed by Copilot, these prompts caused the [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":5567,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-5566","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5566"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5566"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5566\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/5567"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5566"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5566"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5566"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}