Cline is an open-source AI coding agent with 3.8 million installs and over 52,000 GitHub stars. Incorporates 4 essential safety vulnerabilities that allow attackers to execute arbitrary code and exfiltrate delicate knowledge by malicious supply code repositories.
Mindgard researchers found the failings throughout an audit of the favored VSCode extension, which helps Claude Sonnet and the free Sonic mannequin.
The vulnerabilities stem from insufficient prompt-injection protections throughout Cline’s evaluation of supply code information. Attackers can embed malicious directions in Python, Markdown, and shell scripts to override the agent’s security guardrails.
Notably, exploitation requires nothing greater than opening a compromised repository and requesting evaluation.
Mindgard experiences that every one vulnerabilities had been disclosed to the seller earlier than publication, although the group didn’t reply to repeated coordination makes an attempt.
Cline AI Coding Agent Vulnerabilities
DNS-based Information Exfiltration permits attackers to leak delicate API keys and atmosphere variables. By hiding directions in code feedback, attackers can trick Cline into working ping instructions that embed system data in DNS requests despatched to their very own servers.
.clinerules Arbitrary Code Execution exploits Cline’s customized guidelines system. Attackers place malicious Markdown information in a challenge’s .clinerules listing.
To power all execute_command operations to run with requires_approval=false, bypassing person consent mechanisms and enabling silent code execution.
The TOCTOU Vulnerability makes use of time-of-check-time-of-use logic to steadily modify shell scripts throughout a number of evaluation requests.
An attacker can first add innocent code to a script, then later change it so as to add dangerous code whereas the background process remains to be working.
Data Leakage reveals the underlying mannequin infrastructure by error messages, exposing that the Sonic mannequin is powered by grok-4.
Cline’s growth group applied mitigations in model 3.35.0, together with enhanced immediate injection detection.
Mindgard researchers notice the seller’s delayed response raises considerations concerning the velocity of LLM agent exploitation relative to safety remediation timelines.
The findings underscore that system prompts will not be innocent configuration information however core safety boundaries.
As AI brokers turn out to be integral growth instruments, securing the intersection of language, instruments, and code execution stays critically underdeveloped.
Observe us on Google Information, LinkedIn, and X for each day cybersecurity updates. Contact us to characteristic your tales.
