Skip to content
  • Home
  • Cyber Map
  • About Us – Contact
  • Disclaimer
  • Terms and Rules
  • Privacy Policy
Cyber Web Spider Blog – News

Cyber Web Spider Blog – News

Globe Threat Map provides a real-time, interactive 3D visualization of global cyber threats. Monitor DDoS attacks, malware, and hacking attempts with geo-located arcs on a rotating globe. Stay informed with live logs and archive stats.

  • Home
  • Cyber Map
  • Cyber Security News
  • Security Week News
  • The Hacker News
  • How To?
  • Toggle search form
Someone Created First AI-Powered Ransomware Using OpenAI’s gpt-oss:20b Model

Someone Created First AI-Powered Ransomware Using OpenAI’s gpt-oss:20b Model

Posted on August 27, 2025August 27, 2025 By CWS

Cybersecurity firm ESET has disclosed that it found a synthetic intelligence (AI)-powered ransomware variant codenamed PromptLock.
Written in Golang, the newly recognized pressure makes use of the gpt-oss:20b mannequin from OpenAI regionally by way of the Ollama API to generate malicious Lua scripts in real-time. The open-weight language mannequin was launched by OpenAI earlier this month.
“PromptLock leverages Lua scripts generated from hard-coded prompts to enumerate the native filesystem, examine goal information, exfiltrate chosen knowledge, and carry out encryption,” ESET stated. “These Lua scripts are cross-platform appropriate, performing on Home windows, Linux, and macOS.”
The ransomware code additionally embeds directions to craft a customized notice primarily based on the “information affected,” and the contaminated machine is a private laptop, firm server, or an influence distribution controller. It is at the moment not identified who’s behind the malware, however ESET instructed The Hacker Information that PromptLoc artifacts have been uploaded to VirusTotal from america on August 25, 2025.

“PromptLock makes use of Lua scripts generated by AI, which signifies that indicators of compromise (IoCs) could fluctuate between executions,” the Slovak cybersecurity firm identified. “This variability introduces challenges for detection. If correctly applied, such an method may considerably complicate menace identification and make defenders’ duties tougher.”
Assessed to be a proof-of-concept (PoC) quite than a completely operational malware deployed within the wild, PromptLock makes use of the SPECK 128-bit encryption algorithm to lock information.
In addition to encryption, evaluation of the ransomware artifact means that it is also used to exfiltrate knowledge and even destroy it, though the performance to really carry out the erasure seems not but to be applied.
“PromptLock doesn’t obtain your entire mannequin, which may very well be a number of gigabytes in measurement,” ESET clarified. “As a substitute, the attacker can merely set up a proxy or tunnel from the compromised community to a server operating the Ollama API with the gpt-oss-20b mannequin.”

The emergence of PromptLock is one other signal that AI has made it simpler for cybercriminals, even those that lack technical experience, to shortly arrange new campaigns, develop malware, and create compelling phishing content material and malicious websites.
Earlier as we speak, Anthropic revealed that it banned accounts created by two totally different menace actors that used its Claude AI chatbot to commit large-scale theft and extortion of non-public knowledge concentrating on no less than 17 distinct organizations, and developed a number of variants of ransomware with superior evasion capabilities, encryption, and anti-recovery mechanisms.
The event comes as massive language fashions (LLMs) powering numerous chatbots and AI-focused developer instruments, resembling Amazon Q Developer, Anthropic Claude Code, AWS Kiro, Butterfly Impact Manus, Google Jules, Lenovo Lena, Microsoft GitHub Copilot, OpenAI ChatGPT Deep Analysis, OpenHands, Sourcegraph Amp, and Windsurf, have been discovered prone to immediate injection assaults, probably permitting data disclosure, knowledge exfiltration, and code execution.
Regardless of incorporating sturdy safety and security guardrails to keep away from undesirable behaviors, AI fashions have repeatedly fallen prey to novel variants of injections and jailbreaks, underscoring the complexity and evolving nature of the safety problem.

“Immediate injection assaults may cause AIs to delete information, steal knowledge, or make monetary transactions,” Anthropic stated. “New types of immediate injection assaults are additionally continually being developed by malicious actors.”
What’s extra, new analysis has uncovered a easy but intelligent assault known as PROMISQROUTE – quick for “Immediate-based Router Open-Mode Manipulation Induced by way of SSRF-like Queries, Reconfiguring Operations Utilizing Belief Evasion” – that abuses ChatGPT’s mannequin routing mechanism to set off a downgrade and trigger the immediate to be despatched to an older, much less safe mannequin, thus permitting the system to bypass security filters and produce unintended outcomes.
“Including phrases like ‘use compatibility mode’ or ‘quick response wanted’ bypasses hundreds of thousands of {dollars} in AI security analysis,” Adversa AI stated in a report printed final week, including the assault targets the cost-saving model-routing mechanism utilized by AI distributors.

The Hacker News Tags:AIPowered, Created, gptoss20b, Model, OpenAIs, Ransomware

Post navigation

Previous Post: Hackers Weaponize Trust with AI-Crafted Emails to Deploy ScreenConnect
Next Post: NVIDIA NeMo AI Curator Enables Code Execution and Privilege Escalation

Related Posts

CISA Warns of Active Spyware Campaigns Hijacking High-Value Signal and WhatsApp Users CISA Warns of Active Spyware Campaigns Hijacking High-Value Signal and WhatsApp Users The Hacker News
Cybercrime Group Recruits Women for IT Vishing Cybercrime Group Recruits Women for IT Vishing The Hacker News
Microsoft Fixes Entra ID Flaw Allowing Identity Takeover Microsoft Fixes Entra ID Flaw Allowing Identity Takeover The Hacker News
CISA Flags Meteobridge CVE-2025-4008 Flaw as Actively Exploited in the Wild CISA Flags Meteobridge CVE-2025-4008 Flaw as Actively Exploited in the Wild The Hacker News
New Advanced Linux VoidLink Malware Targets Cloud and container Environments New Advanced Linux VoidLink Malware Targets Cloud and container Environments The Hacker News
Malicious Rust Crates Steal Solana and Ethereum Keys — 8,424 Downloads Confirmed Malicious Rust Crates Steal Solana and Ethereum Keys — 8,424 Downloads Confirmed The Hacker News

Categories

  • Cyber Security News
  • How To?
  • Security Week News
  • The Hacker News

Recent Posts

  • Amazon Quick’s Vulnerability Exposed AI Chat to Unauthorized Users
  • Mythos Excels in Vulnerability Detection, Faces Varied Challenges
  • OpenAI Faces Lawsuit Over ChatGPT Data Sharing Practices
  • Revolutionizing Data Center Security with DPUs
  • Ghostwriter Intensifies Phishing Attacks on Ukraine

Pages

  • About Us – Contact
  • Disclaimer
  • Privacy Policy
  • Terms and Rules

Archives

  • May 2026
  • April 2026
  • March 2026
  • February 2026
  • January 2026
  • December 2025
  • November 2025
  • October 2025
  • September 2025
  • August 2025
  • July 2025
  • June 2025
  • May 2025

Recent Posts

  • Amazon Quick’s Vulnerability Exposed AI Chat to Unauthorized Users
  • Mythos Excels in Vulnerability Detection, Faces Varied Challenges
  • OpenAI Faces Lawsuit Over ChatGPT Data Sharing Practices
  • Revolutionizing Data Center Security with DPUs
  • Ghostwriter Intensifies Phishing Attacks on Ukraine

Pages

  • About Us – Contact
  • Disclaimer
  • Privacy Policy
  • Terms and Rules

Categories

  • Cyber Security News
  • How To?
  • Security Week News
  • The Hacker News

Copyright © 2026 Cyber Web Spider Blog – News.

Powered by PressBook Masonry Dark