Cybersecurity researchers have demonstrated a brand new immediate injection approach known as PromptFix that tips a generative synthetic intelligence (GenAI) mannequin into finishing up supposed actions by embedding the malicious instruction inside a pretend CAPTCHA test on an online web page.
Described by Guardio Labs an “AI-era tackle the ClickFix rip-off,” the assault approach demonstrates how AI-driven browsers, similar to Perplexity’s Comet, that promise to automate mundane duties like looking for gadgets on-line or dealing with emails on behalf of customers could be deceived into interacting with phishing touchdown pages or fraudulent lookalike storefronts with out the human consumer’s data or intervention.
“With PromptFix, the method is completely different: We do not attempt to glitch the mannequin into obedience,” Guardio stated. “As a substitute, we mislead it utilizing strategies borrowed from the human social engineering playbook – interesting on to its core design purpose: to assist its human rapidly, utterly, and with out hesitation.”
This results in a brand new actuality that the corporate calls Scamlexity, a portmanteau of the phrases “rip-off” and “complexity,” the place agentic AI – programs that may autonomously pursue objectives, make selections, and take actions with minimal human supervision – takes scams to an entire new stage.
With AI-powered coding assistants like Lovable confirmed to be vulnerable to strategies like VibeScamming, an attacker can successfully trick the AI mannequin into handing over delicate data or finishing up purchases on lookalike web sites masquerading as Walmart.
All of this may be achieved by issuing an instruction so simple as “Purchase me an Apple Watch” after the human lands on the bogus web site in query by one of many a number of strategies, like social media advertisements, spam messages, or SEO (website positioning) poisoning.
Scamlexity is “a posh new period of scams, the place AI comfort collides with a brand new, invisible rip-off floor and people turn into the collateral injury,” Guardio stated.
The cybersecurity firm stated it ran the check a number of instances on Comet, with the browser solely stopping often and asking the human consumer to finish the checkout course of manually. However in a number of cases, the browser went all in, including the product to the cart and auto-filling the consumer’s saved deal with and bank card particulars with out asking for his or her affirmation on a pretend purchasing website.
In an analogous vein, it has been discovered that asking Comet to test their e mail messages for any motion gadgets is sufficient to parse spam emails purporting to be from their financial institution, robotically click on on an embedded hyperlink within the message, and enter the login credentials on the phony login web page.
“The outcome: an ideal belief chain gone rogue. By dealing with the complete interplay from e mail to web site, Comet successfully vouched for the phishing web page,” Guardio stated. “The human by no means noticed the suspicious sender deal with, by no means hovered over the hyperlink, and by no means had the possibility to query the area.”
That is not all. As immediate injections proceed to plague AI programs in methods direct and oblique, AI Browsers will even should cope with hidden prompts hid inside an online web page that is invisible to the human consumer, however could be parsed by the AI mannequin to set off unintended actions.
This so-called PromptFix assault is designed to persuade the AI mannequin to click on on invisible buttons in an online web page to bypass CAPTCHA checks and obtain malicious payloads with none involvement on the a part of the human consumer, leading to a drive-by obtain assault.
“PromptFix works solely on Comet (which actually capabilities as an AI Agent) and, for that matter, additionally on ChatGPT’s Agent Mode, the place we efficiently acquired it to click on the button or perform actions as instructed,” Guardio instructed The Hacker Information. “The distinction is that in ChatGPT’s case, the downloaded file lands inside its digital setting, in a roundabout way in your laptop, since every part nonetheless runs in a sandboxed setup.”
The findings present the necessity for AI programs to transcend reactive defenses to anticipate, detect, and neutralize these assaults by constructing sturdy guardrails for phishing detection, URL popularity checks, area spoofing, and malicious recordsdata.
The event additionally comes as adversaries are more and more leaning on GenAI platforms like web site builders and writing assistants to craft practical phishing content material, clone trusted manufacturers, and automate large-scale deployment utilizing companies like low-code website builders, per Palo Alto Networks Unit 42.
What’s extra, AI coding assistants can inadvertently expose proprietary code or delicate mental property, creating potential entry factors for focused assaults, the corporate added.
Enterprise safety agency Proofpoint stated it has noticed “quite a few campaigns leveraging Lovable companies to distribute multi-factor authentication (MFA) phishing kits like Tycoon, malware similar to cryptocurrency pockets drainers or malware loaders, and phishing kits concentrating on bank card and private data.”
The counterfeit web sites created utilizing Lovable result in CAPTCHA checks that, when solved, redirect to a Microsoft-branded credential phishing web page. Different web sites have been discovered to impersonate transport and logistics companies like UPS to dupe victims into getting into their private and monetary data, or make them pages that obtain distant entry trojans like zgRAT.
Lovable URLs have additionally been abused for funding scams and banking credential phishing, considerably decreasing the barrier to entry for cybercrime. Lovable has since taken down the websites and carried out AI-driven safety protections to stop the creation of malicious web sites.
Different campaigns have capitalized on misleading deepfaked content material distributed on YouTube and social media platforms to redirect customers to fraudulent funding websites. These AI buying and selling scams additionally depend on pretend blogs and evaluation websites, usually hosted on platforms like Medium, Blogger, and Pinterest, to create a false sense of legitimacy.
“GenAI enhances risk actors’ operations moderately than changing present assault methodologies,” CrowdStrike stated in its Menace Looking Report for 2025. “Menace actors of all motivations and ability ranges will nearly definitely enhance their use of GenAI instruments for social engineering within the near-to mid-term, notably as these instruments turn into extra out there, user-friendly, and complicated.”