OpenClaw, the viral open-source AI agent that security firms warn is “insecure by default,” has integrated VirusTotal’s malware scanning into its ClawHub skills marketplace following weeks in which security researchers documented malicious extensions and widespread unauthorized deployments in enterprises.
The integration automatically scans all published skills before making them available for download, according to the announcement by OpenClaw founder Peter Steinberger, security advisor Jamieson O’Reilly, and VirusTotal’s Bernardo Quintero. Skills receiving a “benign” verdict are automatically approved, while those marked suspicious receive warnings, and malicious skills are immediately blocked, with daily re-scanning of all active skills.
“As the OpenClaw ecosystem grows, so does the attack surface,” the announcement stated. “We’ve already seen documented cases of malicious actors attempting to exploit AI agent platforms. We’re not waiting for this to become a bigger problem.”
Sunil Varkey, advisor at Beagle Security, called the integration “a sensible and welcome step” that filters out known malware. “Most attacks still rely on reusing known malware rather than investing in costly zero-day development, so filtering out known bad artifacts meaningfully raises the bar and improves marketplace hygiene,” Varkey said.
How the scanning works
The system relies on VirusTotal’s Code Insight, powered by Google’s Gemini, which analyzes complete skill packages for malicious behavior.
“It doesn’t just look at what the skill claims to do—it summarizes what the code actually does from a security perspective: whether it downloads and executes external code, accesses sensitive data, performs network operations, or embeds instructions that could coerce the agent into unsafe behavior,” OpenClaw said in the announcement.
When developers publish skills to ClawHub, the platform creates a SHA-256 hash and checks it against VirusTotal’s database, uploading the complete bundle for Code Insight analysis if not found. The integration uses the same technology VirusTotal provides to Hugging Face’s AI model repository, according to the announcement.
What prompted the response
The scanning initiative follows a series of security incidents documented by multiple firms over the past two weeks. Koi Security’s February 1 audit of all 2,857 ClawHub skills discovered 341 malicious ones in a campaign dubbed “ClawHavoc.”
The professional-looking skills for cryptocurrency tools and YouTube utilities contained fake prerequisites that installed keyloggers and the Atomic macOS Stealer malware capable of harvesting cryptocurrency wallets, browser data, and system credentials. A Cornell University report found that 26% of packages contained vulnerabilities and described OpenClaw as “an absolute nightmare” from a security standpoint. Token Security found 22% of its enterprise customers have employees running the agent without IT approval.
Security vendor Noma reported that 53% of its enterprise customers gave OpenClaw privileged access over a single weekend, according to a January 30 Gartner analysis. Gartner characterized OpenClaw as “a powerful demonstration of autonomous AI for enterprise productivity, but it is an unacceptable cybersecurity liability” and recommended enterprises “block OpenClaw downloads and traffic immediately,” describing shadow deployments as creating “single points of failure, as compromised hosts expose API keys, OAuth tokens, and sensitive conversations to attackers.”
OpenClaw >surpassed 150,000 GitHub stars in late January, gaining viral popularity on social media. The platform, launched in November 2025 and rebranded twice due to trademark disputes, allows community-developed “skills” that run with full access to the agent’s tools and data—the architecture that ClawHavoc exploited.
Limitations of malware scanning
While the VirusTotal integration addresses known malware in the skills marketplace, OpenClaw acknowledged significant limitations in the announcement. “Let’s be clear: this is not a silver bullet,” the announcement stated. “A skill that uses natural language to instruct an agent to do something malicious won’t trigger a virus signature. A carefully crafted prompt injection payload won’t show up in a threat database.”
The primary risk with AI agents involves prompt injection, where malicious instructions embedded in emails or documents can hijack agent behavior without exploiting traditional software vulnerabilities, according to CrowdStrike’s analysis. The Moltbook social network for OpenClaw agents illustrated these risks when it exposed 1.5 million API tokens and 35,000 email addresses after a database misconfiguration.
Varkey cautioned that “threats like prompt injection, logic abuse, and misuse of legitimate tools sit outside the reach of malware scanning,” adding that the integration should be “seen as the foundation for broader governance and technical controls, not the finish line.” The VirusTotal integration is the first step in what Steinberger called a “broader security initiative,” with plans to publish a threat model, security roadmap, and audit results at trust.openclaw.ai.
No Responses