{"id":5040,"date":"2025-09-25T13:00:00","date_gmt":"2025-09-25T13:00:00","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=5040"},"modified":"2025-09-25T13:00:00","modified_gmt":"2025-09-25T13:00:00","slug":"vulnerability-in-salesforce-ai-could-be-tricked-into-leaking-crm-data","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=5040","title":{"rendered":"Vulnerability in Salesforce AI could be tricked into leaking CRM data"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>A newly disclosed critical vulnerability in Salesforce\u2019s Agentforce platform could trick the AI agent into leaking sensitive CRM data through indirect prompt injection.<\/p>\n<p>Researchers at Noma Security, who identified the bug dubbed \u201cForcedLeak,\u201d said in a <a href=\"https:\/\/noma.security\/blog\/forcedleak-agent-risks-exposed-in-salesforce-agentforce\" target=\"_blank\" rel=\"noopener\">blog post<\/a> shared with CSO ahead of its publication on Thursday that it could be exploited by attackers inserting malicious instructions into a routine customer form.<\/p>\n<p>Salesforce patched the issue after disclosure, but Noma researchers believe the implications go well beyond one bug.<\/p>\n<h2 class=\"wp-block-heading\"><a><\/a>From innocent Form to full-on data heist<\/h2>\n<p>The attack path revealed by Noma was deceptively simple. By planting malicious text inside Salesforce\u2019s Web-to-Lead form, commonly used in marketing campaigns, researchers found that an AI agent (Agentforce) tasked with reviewing submissions could be coaxed into running instructions it was never meant to. 
The description field, with its 42,000-character limit, provided enough space to hide multi-step payloads disguised as harmless business requests.<\/p>\n<p>Once an employee interacts with the data and asks Agentforce to process it, the system obediently carries out both the real request and the attacker\u2019s hidden script. Worse, Salesforce\u2019s Content Security Policy whitelist still included an expired domain. By re-registering the domain for just $5, researchers created a trusted-looking data exfiltration channel, turning a minor oversight into a major security hole.<\/p>\n<p>\u201cIndirect Prompt Injection is basically cross-site scripting, but instead of tricking a database, attackers get inline AI to do it,\u201d said <a href=\"https:\/\/www.linkedin.com\/in\/cisoandy\/\" target=\"_blank\" rel=\"noopener\">Andy Bennett<\/a>, CISO at Apollo Information Systems. \u201cIt\u2019s like a mix of scripted attacks and social engineering. The innovation is impressive and the impacts are potentially staggering.\u201d<\/p>\n<p>Salesforce said Thursday that it had patched this CVE-pending flaw on September 8, 2025, by enforcing \u201cTrusted URL allowlists\u201d for Agentforce. While the company did not directly credit Noma for the finding, its statement included a matching description of the issue. 
\u201cOur underlying services powering Agentforce will enforce the Trusted URL allowlist to ensure no malicious links are called or generated through potential prompt injection,\u201d it said then.<\/p>\n<p>A Salesforce spokesperson said, \u201cThe security landscape for prompt injection remains a complex and evolving area, and we continue to invest in strong security controls and work closely with the research community to help protect our customers as these types of issues surface.\u201d<\/p>\n<h2 class=\"wp-block-heading\"><a><\/a>Guardrails, not just patches<\/h2>\n<p>While Salesforce responded quickly with a patch, experts agree that AI agents represent a fundamentally broader attack surface than traditional software. These systems combine memory, decision-making, and tool execution, meaning compromises can spread quickly and, as Bennett puts it, \u201cat machine speed.\u201d<\/p>\n<p>\u201cIt\u2019s advisable to secure the systems around the AI agents in use, which include APIs, forms, and middleware, so that prompt injection is harder to exploit and less harmful if it succeeds,\u201d said Chrissa Constantine, senior cybersecurity solution architect at Black Duck. She emphasized that true prevention requires not just patching but \u201cmaintaining configuration and establishing guardrails around the agent design, software supply chain, web application, and API testing.\u201d<\/p>\n<p>Noma\u2019s researchers echoed that call, urging organizations to treat AI agents like production systems: inventory every agent, validate outbound connections, sanitize inputs before they reach the model, and flag any sensitive data access or internet egress.<\/p>\n<p>Elad Luz, head of research at Oasis Security, likewise suggested sanitizing external input before the agent sees it. \u201cTreat free-text from contact forms as untrusted input. 
Use an input mediation layer to extract only expected fields, strip\/neutralize instructions, links, and markup, and prevent the model from interpreting user content as commands (prompt-injection resilience).\u201d<\/p>\n\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>A newly disclosed critical vulnerability in Salesforce\u2019s Agentforce platform could trick the AI agent into leaking sensitive CRM data through indirect prompt injection. Researchers at Noma Security, who identified the bug dubbed \u201cForcedLeak,\u201d said in a blog post shared with CSO ahead of its publication on Thursday that it could be exploited by attackers inserting [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":5035,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-5040","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5040"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5040"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5040\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/5035"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5040"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5040"}
,{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5040"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}