A newly disclosed critical vulnerability in Salesforce’s Agentforce platform could trick the AI agent into leaking sensitive CRM data through indirect prompt injection.
Researchers at Noma Security, who identified the bug dubbed “ForcedLeak,” said in a blog post shared with CSO ahead of its publication on Thursday that it could be exploited by attackers inserting malicious instructions into a routine customer form.
Salesforce patched the issue after disclosure, but Noma researchers believe the implications go well beyond one bug.
From innocent Form to full-on data heist
The attack path revealed by Noma was deceptively simple. By planting malicious text inside Salesforce’s Web-to-Lead form, commonly used in marketing campaigns, researchers found that an AI agent (Agentforce) tasked with reviewing submissions could be coaxed into running instructions it was never meant to. The description field, with its 42000-character allowance, provided enough space to hide multi-step payloads disguised as harmless business requests.
Once an employee interacts with the data and asks Agentforce to process it, the system obediently carries out both the real request and the attacker’s hidden script. Worse, Salesforce’s content security policy included an expired domain still on its whitelist. By re-registering the domain for just $5, researchers created a trusted-looking data exfiltration channel, turning a minor oversight into a major security hole.
“Indirect Prompt Injection is basically cross-site scripting, but instead of tricking a database, attackers get inline AI to do it,” said Andy Bennett, CISO at Apollo Information Systems. “It’s like a mix of scripted attacks and social engineering. The innovation is impressive and the impacts are potentially staggering.”
Salesforce said Thursday that it had patched this CVE-pending flaw on September 8, 2025, by enforcing “Trusted URL allowlists” for Agentforce. While the company did not directly credit Noma for the finding, it included a related description. “Our underlying services powering Agentforce will enforce the Trusted URL allowlist to ensure no malicious links are called or generated through potential prompt injection,” it said then.
A Salesforce spokesperson said, “The security landscape for prompt injection remains a complex and evolving area, and we continue to invest in strong security controls and work closely with the research community to help protect our customers as these types of issues surface.”
Guardrails, not just patches
While Salesforce responded quickly with a patch, experts agree that AI agents represent a fundamentally broader attack surface. These systems combine memory, decision-making, and tool execution, meaning compromises can spread quickly and, as Bennett puts it, “at machine speed.”
“It’s advisable to secure the systems around the AI agents in use, which include APIs, forms, and middleware, so that prompt injection is harder to exploit and less harmful if it succeeds,” said Chrissa Constantine, senior cybersecurity solution architect at Black Duck. She emphasized that true prevention requires not just patching but “maintaining configuration and establishing guardrails around the agent design, software supply chain, web application, and API testing.”
Noma’s researchers echoed that call, urging organizations to treat AI agents like production systems, inventorying every agent, validating outbound connections, sanitizing inputs before they reach the model, and flagging any sensitive data access or internet egress.
Sanitize external input before the agent sees it, suggested Elad Luz, head of research at Oasis Security. “Treat free-text from contact forms as untrusted input. Use an input mediation layer to extract only expected fields, strip/neutralize instructions, links, and markup, and prevent the model from interpreting user content as commands (prompt-injection resilience).”
No Responses