{"id":7830,"date":"2026-04-15T12:09:36","date_gmt":"2026-04-15T12:09:36","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=7830"},"modified":"2026-04-15T12:09:36","modified_gmt":"2026-04-15T12:09:36","slug":"copilot-and-agentforce-fall-to-form-based-prompt-injection-tricks","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=7830","title":{"rendered":"Copilot and Agentforce fall to form-based prompt injection tricks"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>Enterprise AI agents are supposed to streamline workflows. Instead, two fresh findings show they can just as easily streamline data exfiltration.<\/p>\n<p>Security researchers have uncovered prompt-injection vulnerabilities in both Microsoft Copilot Studio and Salesforce Agentforce that allow attackers to execute malicious instructions via seemingly harmless prompts.<\/p>\n<p>According to Capsule Security findings, SharePoint forms and public-facing lead forms within Copilot are vulnerable to attackers issuing prompts that can override system intent and trigger data exfiltration to attacker-controlled servers.<\/p>\n<p>One of these flaws has already been assigned a high-severity CVE, with another \u201ccritical\u201d one reportedly missing the bar for categorization. The flaws can allow theft of PIIs, customer\/lead records, free-text business context, and operational\/workflow data.<\/p>\n<p>In both cases, AI agents treat untrusted user input as trusted instructions, Capsule researchers noted in the <a href=\"https:\/\/www.capsulesecurity.io\/blog-post\/pipeleak-the-lead-that-stole-your-database-exploiting-salesforce-agentforce-with-indirect-prompt-injection\" target=\"_blank\" rel=\"noopener\">disclosures<\/a> shared with CSO ahead of their<a href=\"https:\/\/www.capsulesecurity.io\/blog-post\/shareleak-taking-the-wheel-of-microsofts-copilot-studio-cve-2026-21520\" target=\"_blank\" rel=\"noopener\"> publication<\/a> on Wednesday.<\/p>\n<h2 class=\"wp-block-heading\"><a><\/a>ShareLeak: SharePoint forms data leaked through Copilot<\/h2>\n<p>The Microsoft-side issue, dubbed \u201cShareLeak,\u201d is about how Copilot Studio agents process SharePoint form submissions. The attack begins with a crafted payload inserted into a standard form field, like \u201ccomments\u201d, which the agent later ingests as part of its operational context.<\/p>\n<p>Because the system concatenates user input with system prompts, the injected payload overrides the agent\u2019s original instructions. The model is thus tricked into believing the attacker\u2019s instructions are legitimate system directives. The malicious input moves from form submission to agent execution without any resistance.<\/p>\n<p>Once compromised, the agent can access connected SharePoint Lists and extract sensitive customer data, including names, addresses, phone numbers, and send it externally via email. The researchers found that even when Microsoft\u2019s safety mechanisms flagged suspicious behavior, the data was exfiltrated.<\/p>\n<p>The root cause is that there is no reliable separation between trusted system instructions and untrusted user data. In the existing setup, the AI cannot distinguish between the two, the researchers said.<\/p>\n<p>Microsoft <a href=\"https:\/\/msrc.microsoft.com\/update-guide\/vulnerability\/CVE-2026-21520\" target=\"_blank\" rel=\"noopener\">patched<\/a> the issue following disclosure, assigning <a href=\"https:\/\/nvd.nist.gov\/vuln\/detail\/CVE-2026-21520\" target=\"_blank\" rel=\"noopener\">CVE-2026-21520<\/a> to it and assessing its severity at 7.5 out of 10 on the CVSS scale. The mitigation was carried out internally, and no further action is required from the users.<\/p>\n<h2 class=\"wp-block-heading\"><a><\/a>PipeLeak: Salesforce Agentforce hijacked by a simple lead<\/h2>\n<p>In the Salesforce Agentforce case, attackers embed malicious instructions inside a public-facing lead form. When an internal user later asks the agent to review or process that lead, the agent executes the embedded instructions as if they were part of its task.<\/p>\n<p>According to a Capsule demonstration, the agent retrieves CRM data via the \u201cGetLeadsInformation\u201d function and then sends it externally via email.<\/p>\n<p>The compromise isn\u2019t limited to a single record. Researchers demonstrated that a hijacked agent could query and exfiltrate multiple lead records in bulk, effectively turning a single form submission into a database extraction pipeline.<\/p>\n<p>The researchers said Salesforce acknowledged the prompt injection issue but characterized the exfiltration vector as \u201cconfiguration-specific,\u201d pointing to optional human-in-the-loop (<a href=\"https:\/\/www.csoonline.com\/article\/4108592\/human-in-the-loop-isnt-enough-new-attack-turns-ai-safeguards-into-exploits.html\">HITL<\/a>) controls. Capsule\u2019s pushback on that framing argues that requiring manual approvals undermines the very purpose of autonomous agents.<\/p>\n<p>The deeper issue, they noted, is insecure defaults. Systems designed for automation should not allow untrusted inputs to redefine agent goals.<\/p>\n<p>Both disclosures converge on a baseline that calls for treating all external inputs as untrusted and having filters in place that separate data from instructions. This would entail enforcing input validation, least-privilege access, and strict controls on actions like outbound email.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Enterprise AI agents are supposed to streamline workflows. Instead, two fresh findings show they can just as easily streamline data exfiltration. Security researchers have uncovered prompt-injection vulnerabilities in both Microsoft Copilot Studio and Salesforce Agentforce that allow attackers to execute malicious instructions via seemingly harmless prompts. According to Capsule Security findings, SharePoint forms and public-facing [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":7831,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-7830","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/7830"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7830"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/7830\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/7831"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7830"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7830"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=7830"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}