The discovery of malware embedded in the most downloaded skill on OpenClaw’s ClawHub marketplace has unveiled significant security vulnerabilities. This malicious software, disguised as a legitimate AI tool, highlights the risks associated with open-source platforms.
Exposing the Threat
OpenClaw, known for its open-source AI agent platform, operates ClawHub, a marketplace where developers publish skills to enhance agent capabilities. Security researcher @chiefofautism recently uncovered 1,184 malicious skills, with one actor responsible for uploading 677 of these packages. This indicates a severe supply chain vulnerability within the AI agent ecosystem.
Alarmingly, ClawHub’s verification process required only a one-week-old GitHub account, enabling attackers to upload numerous malicious skills under the guise of legitimate applications such as crypto trading tools and YouTube summarizers. These skills, complete with professional documentation, concealed harmful code that misled users.
Mechanisms of the Malware
Once activated, the malware instructed AI agents to execute commands through hidden AI prompts. On macOS, it deployed Atomic Stealer (AMOS), which extracted sensitive information like browser passwords, SSH keys, and crypto wallet credentials. On other systems, it opened a reverse shell, granting attackers remote access to compromised machines.
Cisco’s AI Defense team uncovered nine vulnerabilities in a top-ranked ClawHub skill, “What Would Elon Do?” These included critical exploits that exfiltrated user data to an attacker’s server using undetectable methods. The skill was downloaded thousands of times, exacerbating the problem.
Addressing the Security Breach
The vulnerability issue was not new; Koi Security had previously identified 341 malicious entries in ClawHub, linked to a campaign called ClawHavoc. Similarly, Snyk’s audit revealed 341 threats, with the publisher “hightower6eu” responsible for over 314 hazardous packages. These findings pointed to a common command-and-control server.
In response, OpenClaw partnered with Google’s VirusTotal to scan all uploaded skills, categorizing them as benign, suspicious, or malicious. Daily re-scans aim to detect mutations in these skills post-approval.
This incident mirrors npm supply chain attacks but with a unique twist: the malware functions within an AI agent, capable of executing commands with broad system permissions. Traditional security tools struggle to detect these natural language-encoded threats, posing a significant challenge.
Organizations using OpenClaw face heightened risks from “Shadow AI” activities, where agent actions bypass conventional monitoring and leave limited audit trails. Continuous vigilance and advanced security measures are crucial to mitigate these threats.
