Nov 19, 2025Ravie LakshmananAI Safety / SaaS Safety
Malicious actors can exploit default configurations in ServiceNow’s Now Help generative synthetic intelligence (AI) platform and leverage its agentic capabilities to conduct immediate injection assaults.
The second-order immediate injection, based on AppOmni, makes use of Now Help’s agent-to-agent discovery to execute unauthorized actions, enabling attackers to repeat and exfiltrate delicate company knowledge, modify information, and escalate privileges.
“This discovery is alarming as a result of it is not a bug within the AI; it is anticipated conduct as outlined by sure default configuration choices,” mentioned Aaron Costello, chief of SaaS Safety Analysis at AppOmni.
“When brokers can uncover and recruit one another, a innocent request can quietly flip into an assault, with criminals stealing delicate knowledge or gaining extra entry to inner firm techniques. These settings are straightforward to miss.”
The assault is made doable due to agent discovery and agent-to-agent collaboration capabilities inside ServiceNow’s Now Help. With Now Help providing the power to automate features akin to help-desk operations, the state of affairs opens the door to doable safety dangers.
As an illustration, a benign agent can parse specifically crafted prompts embedded into content material it is allowed entry to and recruit a stronger agent to learn or change information, copy delicate knowledge, or ship emails, even when built-in immediate injection protections are enabled.
Essentially the most important facet of this assault is that the actions unfold behind the scenes, unbeknownst to the sufferer group. At its core, the cross-agent communication is enabled by controllable configuration settings, together with the default LLM to make use of, device setup choices, and channel-specific defaults the place the brokers are deployed –
The underlying massive language mannequin (LLM) should assist agent discovery (each Azure OpenAI LLM and Now LLM, which is the default selection, assist the characteristic)
Now Help brokers are routinely grouped into the identical group by default to invoke one another
An agent is marked as being discoverable by default when revealed
Whereas these defaults will be helpful to facilitate communication between brokers, the structure will be inclined to immediate injections when an agent whose foremost process is to learn knowledge that is not inserted by the consumer invoking the agent.
“By means of second-order immediate injection, an attacker can redirect a benign process assigned to an innocuous agent into one thing way more dangerous by using the utility and performance of different brokers on its group,” AppOmni mentioned.
“Critically, Now Help brokers run with the privilege of the consumer who began the interplay except in any other case configured, and never the privilege of the consumer who created the malicious immediate and inserted it right into a discipline.”
Following accountable disclosure, ServiceNow mentioned the conduct is meant to be this fashion, however the firm has since up to date its documentation to offer extra readability on the matter. The findings display the necessity for strengthening AI agent safety, as enterprises more and more incorporate AI capabilities into their workflows.
To mitigate such immediate injection threats, it is suggested to configure supervised execution mode for privileged brokers, disable the autonomous override property (“sn_aia.enable_usecase_tool_execution_mode_override”), phase agent duties by group, and monitor AI brokers for suspicious conduct.
“If organizations utilizing Now Help’s AI brokers aren’t intently inspecting their configurations, they’re doubtless already in danger,” Costello added.
