Browser safety agency LayerX has disclosed a brand new assault technique that works towards standard gen-AI instruments. The assault includes browser extensions and it may be used for covert knowledge exfiltration.
The strategy, named Man-in-the-Immediate, has been examined towards a number of extremely standard massive language fashions (LLMs), together with ChatGPT, Gemini, Copilot, Claude and DeepSeek.
LayerX demonstrated that any browser extension, even ones that would not have particular permissions, can entry these AI instruments and inject prompts instructing them to offer delicate knowledge and exfiltrate it.
“When customers work together with an LLM-based assistant, the immediate enter discipline is often a part of the web page’s Doc Object Mannequin (DOM). Because of this any browser extension with scripting entry to the DOM can learn from, or write to, the AI immediate straight,” LayerX defined.
Study Extra About Securing AI at SecurityWeek’s AI Danger Summit – August 19-20, 2025 on the Ritz-Carlton, Half Moon Bay
The assault poses the most important risk to LLMs which might be constructed and customised by enterprises for inner use. These AI fashions usually deal with extremely delicate info corresponding to mental property, company paperwork, private info, monetary paperwork, inner communications, and HR knowledge.
A proof-of-concept (PoC) focusing on ChatGPT confirmed how a malicious extension with no permissions can open a brand new browser tab within the background, open the chatbot, and instruct it to offer info. The hacker can then exfiltrate the information to a command and management (C&C) server and erase the chat historical past to cowl their tracks.
The attacker can work together with the extension from a C&C server that may be distant or hosted regionally.Commercial. Scroll to proceed studying.
In a PoC focusing on Google’s Gemini, LayerX confirmed how an attacker might goal company knowledge by the AI’s integration with Google Workspace, together with Gmail, Docs, Meet and different purposes. This allows a malicious browser extension to work together with Gemini and inject prompts instructing it to extract emails, contacts, recordsdata and folders, and assembly invitations and summaries.
An attacker might additionally receive an inventory of the focused enterprise’s prospects, get a abstract of calls, acquire info on folks, and search for delicate info corresponding to PII and mental property.
The attacker would want to trick the focused person into putting in a malicious browser extension with a view to conduct a Man-in-the-Immediate assault, however an evaluation performed by LayerX discovered that 99% of enterprises use at the very least one browser extension and 50% have greater than ten extensions. This means that in lots of instances it won’t be too troublesome for risk actors to trick targets into putting in yet another extension.
LayerX instructed SecurityWeek that it initially reported its findings to Google, however the tech big assessed that this isn’t really a software program vulnerability, which is what different LLM builders are additionally more likely to consider.
The safety agency agrees that this isn’t really a vulnerability that may require the allocation of a CVE, however reasonably an general weak point that exploits the low degree of privileges required to work together with LLMs.
LayerX recommends monitoring DOM interactions with gen-AI instruments in the hunt for listeners and webhooks that work together with AI prompts, and blocking browser extensions primarily based on behavioral danger.
Associated: Flaw in Vibe Coding Platform Base44 Uncovered Non-public Enterprise Purposes
Associated: From Ex Machina to Exfiltration: When AI Will get Too Curious
Associated: OpenAI’s Sam Altman Warns of AI Voice Fraud Disaster in Banking