Immediate injection and an expired area might have been used to focus on Salesforce’s Agentforce platform for knowledge theft.
The assault technique, dubbed ForcedLeak, was found by researchers at Noma Safety, an organization that not too long ago raised $100 million for its AI agent safety platform.
Salesforce Agentforce allows companies to construct and deploy autonomous AI brokers throughout features equivalent to gross sales, advertising and marketing, and commerce. These brokers act independently to finish multi-step duties with out fixed human intervention.
The ForcedLeak assault technique recognized by Noma researchers concerned Agentforce’s Net-to-Lead performance, which allows the creation of an internet type that exterior customers equivalent to convention attendees or people focused in a advertising and marketing marketing campaign can fill out to supply lead info. This info is saved into the shopper relationship administration (CRM) system.
The researchers found that attackers can abuse types created with the Net-to-Lead performance to submit specifically crafted info, which when processed by Agentforce brokers causes them to hold out varied actions on the attacker’s behalf.
The potential influence was demonstrated by submitting a payload that included innocent directions alongside directions asking the AI agent to gather electronic mail addresses and add them to the parameters of a request going to a distant server.
When an worker asks Agentforce to course of the lead that features the malicious payload, the immediate injection triggers and the info saved within the CRM is collected and exfiltrated to the attacker’s server.
The assault had important probabilities of remaining undetected as a result of Noma researchers found {that a} trusted Salesforce area had been left to run out. An attacker might have registered that area and used it for the server receiving the exfiltrated CRM knowledge.
After being notified, Salesforce regained management of the expired area and carried out modifications to forestall AI agent output from being despatched to untrusted domains. Commercial. Scroll to proceed studying.
Some of these assaults should not unusual. Researchers in latest months demonstrated a number of theoretical assaults the place integration between AI assistants and enterprise instruments have been abused for knowledge theft.
Associated: ChatGPT Focused in Server-Aspect Knowledge Theft Assault
Associated: ChatGPT Tricked Into Fixing CAPTCHAs
Associated: Prime 25 MCP Vulnerabilities Reveal How AI Brokers Can Be Exploited