A important vulnerability chain in Salesforce’s Agentforce AI platform, which might have allowed exterior attackers to steal delicate CRM knowledge.
The vulnerability, dubbed ForcedLeak by Noma Labs, which found it, carries a CVSS rating of 9.4 and was executed via a complicated oblique immediate injection assault.
This discovery highlights the expanded and essentially completely different assault floor introduced by autonomous AI brokers in comparison with conventional methods.
Upon notification from Noma Labs, Salesforce promptly investigated the problem and has since deployed patches. The repair prevents Agentforce brokers from sending knowledge to untrusted URLs, addressing the speedy danger.
The analysis demonstrates how AI brokers will be compromised via malicious directions hidden inside what are usually thought of trusted knowledge sources.
ForcedLeak Assault
The assault exploited a number of weaknesses, together with inadequate context validation, overly permissive AI mannequin conduct, and a important Content material Safety Coverage (CSP) bypass.
Attackers might create a malicious Net-to-Lead submission containing unauthorized instructions. When the AI agent processed this lead, the Giant Language Mannequin (LLM) handled the malicious directions as respectable, resulting in the exfiltration of delicate knowledge.
The LLM was unable to distinguish between trusted knowledge loaded into its context and the attacker’s embedded directions.
The assault vector was an oblique immediate injection. Not like a direct injection, the place an attacker inputs instructions straight into the AI, this methodology entails embedding malicious directions in knowledge that the AI will later course of throughout a routine job.
On this case, the attacker positioned a payload within the “Description” discipline of an internet kind, which was then saved within the CRM. When an worker requested the AI agent to evaluate the lead, the agent executed the hidden instructions.
A key issue within the success of this assault was the invention of a flaw in Salesforce’s Content material Safety Coverage. The researchers discovered that the area my-salesforce-cms.com was whitelisted however had expired and was out there for buy.
By buying this area, an attacker might set up a trusted channel for knowledge exfiltration. The AI agent, following its directions, would ship delicate knowledge to this attacker-controlled area, bypassing safety controls that may usually block such actions, Noma Labs stated.
Salesforce has since re-secured the expired area and applied stricter safety controls, together with Trusted URLs Enforcement for each Agentforce and Einstein AI, to stop comparable points.
If exploited, ForcedLeak might have had extreme penalties. The vulnerability risked exposing confidential buyer contact info, gross sales pipeline knowledge, inside communications, and historic interplay information.
Any group utilizing Salesforce Agentforce with the Net-to-Lead function enabled was doubtlessly susceptible, particularly these in gross sales and advertising who frequently course of exterior lead knowledge.
Salesforce recommends that clients take the next actions:
Apply the really helpful updates to implement Trusted URLs for Agentforce and Einstein AI.
Audit current lead knowledge for any suspicious submissions containing uncommon directions.
Implement strict enter validation and sanitize all knowledge from untrusted sources.
Observe us on Google Information, LinkedIn, and X for each day cybersecurity updates. Contact us to function your tales.