Recent findings by cybersecurity experts have uncovered significant security vulnerabilities in a popular AI-driven coding assistant, Claude Code, developed by Anthropic. These vulnerabilities could potentially allow for remote code execution and unauthorized extraction of API credentials, posing severe risks to users.
Configuration Exploits and Vulnerability Breakdown
The vulnerabilities were identified as being linked to several configuration mechanisms, such as Hooks, Model Context Protocol (MCP) servers, and environment variables. These flaws could be exploited to execute arbitrary shell commands or exfiltrate API keys when users clone and open repositories without sufficient checks.
Security researchers from Check Point Research highlighted three primary categories of vulnerabilities. One notable issue, without a CVE identifier but with a CVSS score of 8.7, involves a code injection vulnerability that bypasses user consent, potentially leading to unauthorized code execution. This flaw was addressed in version 1.0.87 in September 2025.
Vulnerabilities and Their Impact
Another critical flaw, CVE-2025-59536, also with a CVSS score of 8.7, allows for automatic execution of shell commands when the tool is initialized in an untrusted directory. This was rectified in version 1.0.111 in October 2025. Meanwhile, CVE-2026-21852, with a CVSS score of 5.3, pertains to information disclosure within Claude Code’s project-load process. This vulnerability could lead to exfiltration of sensitive data, including API keys, and was fixed in version 2.0.65 in January 2026.
An advisory from Anthropic explained the potential risk of starting Claude Code in a repository controlled by an attacker. If such a repository includes a settings file that alters the ANTHROPIC_BASE_URL to a malicious endpoint, API requests could be made before any trust prompts are shown, potentially leaking API keys.
Wider Implications for AI-Powered Tools
The exploitation of these vulnerabilities could enable attackers to access shared project files, modify or delete cloud-stored data, upload harmful content, and even incur unexpected API costs. The first vulnerability, in particular, could lead to covert execution on a developer’s system with minimal interaction.
According to Check Point, as AI tools increasingly gain capabilities to execute commands, initiate external integrations, and engage in network communications autonomously, configuration files become integral to the execution layer. This change in functionality significantly alters the threat landscape, extending risks beyond just running untrusted code to opening untrusted projects. In AI-centric development environments, the supply chain risk now encompasses both source code and the automation frameworks around it.
Given these vulnerabilities, it is crucial for developers and organizations utilizing Claude Code to update to the latest versions and remain vigilant about the security of their repositories.
