Skip to content
  • Blog Home
  • Cyber Map
  • About Us – Contact
  • Disclaimer
  • Terms and Rules
  • Privacy Policy
Cyber Web Spider Blog – News

Cyber Web Spider Blog – News

Globe Threat Map provides a real-time, interactive 3D visualization of global cyber threats. Monitor DDoS attacks, malware, and hacking attempts with geo-located arcs on a rotating globe. Stay informed with live logs and archive stats.

  • Home
  • Cyber Map
  • Cyber Security News
  • Security Week News
  • The Hacker News
  • How To?
  • Toggle search form

Cursor AI Code Editor Vulnerability Enables RCE via Malicious MCP File Swaps Post Approval

Posted on August 5, 2025August 5, 2025 By CWS

Aug 05, 2025Ravie LakshmananAI Safety / MCP Protocol
Cybersecurity researchers have disclosed a high-severity safety flaw within the synthetic intelligence (AI)-powered code editor Cursor that might lead to distant code execution.
The vulnerability, tracked as CVE-2025-54136 (CVSS rating: 7.2), has been codenamed MCPoison by Examine Level Analysis, owing to the truth that it exploits a quirk in the best way the software program handles modifications to Mannequin Context Protocol (MCP) server configurations.
“A vulnerability in Cursor AI permits an attacker to realize distant and chronic code execution by modifying an already trusted MCP configuration file inside a shared GitHub repository or enhancing the file domestically on the goal’s machine,” Cursor stated in an advisory launched final week.
“As soon as a collaborator accepts a innocent MCP, the attacker can silently swap it for a malicious command (e.g., calc.exe) with out triggering any warning or re-prompt.”
MCP is an open-standard developed by Anthropic that permits massive language fashions (LLMs) to work together with exterior instruments, knowledge, and companies in a standardized method. It was launched by the AI firm in November 2024.

CVE-2025-54136, per Examine Level, has to do with the way it’s potential for an attacker to change the habits of an MCP configuration after a consumer has accredited it inside Cursor. Particularly, it unfolds as follows –

Add a benign-looking MCP configuration (“.cursor/guidelines/mcp.json”) to a shared repository
Look forward to the sufferer to tug the code and approve it as soon as in Cursor
Change the MCP configuration with a malicious payload, e.g., launch a script or run a backdoor
Obtain persistent code execution each time the sufferer opens the Cursor

The elemental downside right here is that after a configuration is accredited, it is trusted by Cursor indefinitely for future runs, even when it has been modified. Profitable exploitation of the vulnerability not solely exposes organizations to produce chain dangers, but in addition opens the door to knowledge and mental property theft with out their information.
Following accountable disclosure on July 16, 2025, the difficulty has been addressed by Cursor in model 1.3 launched late July 2025 by requiring consumer approval each time an entry within the MCP configuration file is modified.

“The flaw exposes a essential weak point within the belief mannequin behind AI-assisted growth environments, elevating the stakes for groups integrating LLMs and automation into their workflows,” Examine Level stated.
The event comes days after Intention Labs, Backslash Safety, and HiddenLayer uncovered a number of weaknesses within the AI device that might have been abused to acquire distant code execution and bypass its denylist-based protections. They’ve additionally been patched in model 1.3.
The findings additionally coincide with the rising adoption of AI in enterprise workflows, together with utilizing LLMs for code era, broadening the assault floor to numerous rising dangers like AI provide chain assaults, unsafe code, mannequin poisoning, immediate injection, hallucinations, inappropriate responses, and knowledge leakage –

A take a look at of over 100 LLMs for his or her capacity to write down Java, Python, C#, and JavaScript code has discovered that 45% of the generated code samples failed safety exams and launched OWASP High 10 safety vulnerabilities. Java led with a 72% safety failure price, adopted by C# (45%), JavaScript (43%), and Python (38%).
An assault known as LegalPwn has revealed that it is potential to leverage authorized disclaimers, phrases of service, or privateness insurance policies as a novel immediate injection vector, highlighting how malicious directions might be embedded inside official, however usually ignored, textual elements to set off unintended habits in LLMs, corresponding to misclassifying malicious code as secure and providing unsafe code recommendations that may execute a reverse shell on the developer’s system.
An assault known as man-in-the-prompt that employs a rogue browser extension with no particular permissions to open a brand new browser tab within the background, launch an AI chatbot, and inject them with malicious prompts to covertly extract knowledge and compromise mannequin integrity. This takes benefit of the truth that any browser add-on with scripting entry to the Doc Object Mannequin (DOM) can learn from, or write to, the AI immediate instantly.
A jailbreak approach known as Fallacy Failure that manipulates an LLM into accepting logically invalid premises and causes it to provide in any other case restricted outputs, thereby deceiving the mannequin into breaking its personal guidelines.
An assault known as MAS hijacking that manipulates the management stream of a multi-agent system (MAS) to execute arbitrary malicious code throughout domains, mediums, and topologies by weaponizing the agentic nature of AI techniques.
A method known as Poisoned GPT-Generated Unified Format (GGUF) Templates that targets the AI mannequin inference pipeline by embedding malicious directions inside the chat template information that execute throughout the inference part to compromise outputs. By positioning the assault between enter validation and mannequin output, the strategy is each sneaky and bypasses AI guardrails. With GGUF information distributed by way of companies like Hugging Face, the assault exploits the availability chain belief mannequin to set off the assault.
An attacker can goal the machine studying (ML) coaching environments like MLFlow, Amazon SageMaker, and Azure ML to compromise the confidentiality, integrity and availability of the fashions, in the end resulting in lateral motion, privilege escalation, in addition to coaching knowledge and mannequin theft and poisoning.
A examine by Anthropic has uncovered that LLMs can study hidden traits throughout distillation, a phenomenon known as subliminal studying, that causes fashions to transmit behavioral traits by way of generated knowledge that seems fully unrelated to these traits, doubtlessly resulting in misalignment and dangerous habits.

“As Massive Language Fashions grow to be deeply embedded in agent workflows, enterprise copilots, and developer instruments, the chance posed by these jailbreaks escalates considerably,” Pillar Safety’s Dor Sarig stated. “Fashionable jailbreaks can propagate by way of contextual chains, infecting one AI part and resulting in cascading logic failures throughout interconnected techniques.”
“These assaults spotlight that AI safety requires a brand new paradigm, as they bypass conventional safeguards with out counting on architectural flaws or CVEs. The vulnerability lies within the very language and reasoning the mannequin is designed to emulate.”

The Hacker News Tags:Approval, Code, Cursor, Editor, Enables, File, Malicious, MCP, Post, RCE, Swaps, Vulnerability

Post navigation

Previous Post: North Korean Hackers Weaponizing NPM Packages to Steal Cryptocurrency and Sensitive Data
Next Post: Cisco Says User Data Stolen in CRM Hack

Related Posts

ClickFix Malware Campaign Exploits CAPTCHAs to Spread Cross-Platform Infections The Hacker News
Coinbase Agents Bribed, Data of ~1% Users Leaked; $20M Extortion Attempt Fails The Hacker News
Microsoft Warns Default Helm Charts Could Leave Kubernetes Apps Exposed to Data Leaks The Hacker News
Echo Chamber Jailbreak Tricks LLMs Like OpenAI and Google into Generating Harmful Content The Hacker News
Hackers Exploit Critical CrushFTP Flaw to Gain Admin Access on Unpatched Servers The Hacker News
UNC6148 Backdoors Fully-Patched SonicWall SMA 100 Series Devices with OVERSTEP Rootkit The Hacker News

Categories

  • Cyber Security News
  • How To?
  • Security Week News
  • The Hacker News

Recent Posts

  • ScarCruft Hacker Group Launched a New Malware Attack Using Rust and PubNub
  • Microsoft 365 Direct Send Weaponized to Bypass Email Security Defenses
  • New Ghost Calls Attack Abuses Web Conferencing for Covert Command & Control
  • WhatsApp Has Taken Down 6.8 Million Accounts Linked to Malicious Activities
  • Malicious Go, npm Packages Deliver Cross-Platform Malware, Trigger Remote Data Wipes

Pages

  • About Us – Contact
  • Disclaimer
  • Privacy Policy
  • Terms and Rules

Archives

  • August 2025
  • July 2025
  • June 2025
  • May 2025

Recent Posts

  • ScarCruft Hacker Group Launched a New Malware Attack Using Rust and PubNub
  • Microsoft 365 Direct Send Weaponized to Bypass Email Security Defenses
  • New Ghost Calls Attack Abuses Web Conferencing for Covert Command & Control
  • WhatsApp Has Taken Down 6.8 Million Accounts Linked to Malicious Activities
  • Malicious Go, npm Packages Deliver Cross-Platform Malware, Trigger Remote Data Wipes

Pages

  • About Us – Contact
  • Disclaimer
  • Privacy Policy
  • Terms and Rules

Categories

  • Cyber Security News
  • How To?
  • Security Week News
  • The Hacker News