Six flaws found hiding in OpenClaw’s plumbing

Security researchers have uncovered six high-to-critical flaws affecting the open-source AI agent framework OpenClaw, popularly known as a “social media for AI agents.” The flaws were discovered by Endor Labs as its researchers ran the platform through an AI-driven static application security testing (SAST) engine designed to follow how data actually moves through the agentic AI software.

The bugs span several web security categories, including server-side request forgery (SSRF), missing webhook authentication, authentication bypasses, and path traversal, affecting the complex agentic system that combines large language models (LLMs) with tool execution and external integrations.

The researchers also published working proof-of-concept exploits for each of the flaws, confirming real-world exploitability. OpenClaw has published patches and security advisories for the issues.

Flaws included SSRF paths, auth bypass, and file escapes

Endor Labs’ disclosure characterized the six OpenClaw vulnerabilities by weakness type and individual severity rather than CVE identifiers.

Three of the issues are SSRF bugs affecting different tools: a gateway component (CVSS 7.6) that accepts user-supplied URLs to establish outbound WebSocket connections, an SSRF in Urbit Authentication (CVSS 6.5), and an Image Tool SSRF (CVSS 7.6). These SSRF paths were rated medium to high severity because, depending on deployment, they could allow access to internal services or cloud metadata endpoints.
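
A common mitigation for this class of SSRF is to resolve any user-supplied URL and refuse addresses that land in private, loopback, or link-local ranges (including the cloud metadata address 169.254.169.254). The sketch below is a generic illustration of that check, not OpenClaw's actual fix; the function name and block list are assumptions.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_url(url: str) -> bool:
    """Illustrative SSRF guard: reject URLs whose host resolves to a
    private, loopback, or link-local address (covers 169.254.169.254,
    the usual cloud metadata endpoint). Not OpenClaw's actual patch."""
    parsed = urlparse(url)
    if parsed.scheme not in ("http", "https", "ws", "wss"):
        return False
    host = parsed.hostname
    if host is None:
        return False
    try:
        # Resolve every address the hostname maps to, so a DNS entry
        # pointing at an internal IP cannot slip through.
        infos = socket.getaddrinfo(host, None)
    except socket.gaierror:
        return False
    for info in infos:
        addr = ipaddress.ip_address(info[4][0])
        if addr.is_private or addr.is_loopback or addr.is_link_local:
            return False
    return True
```

Resolving before the check matters: validating only the URL string misses hostnames that resolve to internal addresses, and a robust deployment would additionally pin the resolved address for the actual connection to avoid DNS rebinding.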

Access control failures accounted for another cluster of findings. The Telnyx webhook handler, designed to receive external events, lacked proper signature verification (CVSS 7.5), enabling forged requests from untrusted sources. Separately, an authentication bypass (CVSS 6.5) allowed unauthenticated users to invoke protected Twilio webhook functionality without valid credentials.
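
Webhook verification typically means checking a signature computed over the request body with a shared secret or provider key before processing the event. The sketch below shows a generic HMAC-SHA256 check as an illustration only; the actual providers differ in detail (Telnyx, for instance, uses public-key signatures), and the function name here is an assumption.

```python
import hashlib
import hmac

def verify_webhook(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Illustrative webhook check: recompute an HMAC-SHA256 over the raw
    request body and compare it to the signature header the sender supplied.
    Generic sketch, not the scheme of any specific provider."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    # compare_digest runs in constant time, avoiding a timing side channel
    return hmac.compare_digest(expected, signature_hex)
```

Without such a check, anyone who discovers the webhook URL can inject forged events, which is exactly the failure mode the disclosure describes.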

The disclosure also detailed a path traversal vulnerability (CVSS not assigned) in browser upload handling, where insufficient sanitization of file paths could allow writes outside intended directories.
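
The standard defense against this kind of path traversal is to resolve the final target path and confirm it stays inside the intended directory before writing. The following containment check is a minimal sketch of that idea, assuming a hypothetical upload root; it is not OpenClaw's actual patch.

```python
import os

def safe_upload_path(filename: str, root: str) -> str:
    """Illustrative traversal guard: resolve the joined path (including
    any '..' segments and symlinks) and refuse anything that escapes the
    upload root. Hypothetical helper, not OpenClaw's actual fix."""
    candidate = os.path.realpath(os.path.join(root, filename))
    root_real = os.path.realpath(root)
    # commonpath equals the root only if candidate sits underneath it
    if os.path.commonpath([candidate, root_real]) != root_real:
        raise ValueError(f"path traversal attempt: {filename!r}")
    return candidate
```

Checking the resolved path rather than the raw filename is the key point: naive filters that only strip `../` substrings are routinely bypassed with encodings or nested sequences.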

“The combination of AI-powered analysis and systematic manual validation provides a practical path forward for securing AI infrastructure,” the researchers said. “As AI agent frameworks become more prevalent in enterprise environments, security analysis must evolve to address both traditional vulnerabilities and AI-specific attack surfaces.”

Following the data revealed the danger

Traditional static analysis tools reportedly struggle with modern software stacks, where inputs pass through numerous transformations before reaching risky operations. To overcome this, Endor Labs applied an AI-driven SAST approach that, it claimed, maintains context across those transformations.

This helped the researchers understand “not only where dangerous operations exist but also whether attacker-controlled data can reach them.” The engine mapped the full journey of untrusted data, from entry points such as HTTP parameters, configuration values, or external API responses to security-sensitive “sinks” such as network requests, file operations, or command execution.
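
The source-to-sink idea can be illustrated with a toy taint-tracking sketch: values from untrusted entry points carry a taint marker, and sinks refuse them unless a sanitizing transformation has cleared it. This is a conceptual illustration only, not Endor Labs' engine, and all names below are invented for the example.

```python
class Tainted(str):
    """A string marked as attacker-controlled (the taint label)."""

def from_http_param(value: str) -> Tainted:
    """Source: data arriving from an untrusted entry point is tainted."""
    return Tainted(value)

def sanitize(value: str) -> str:
    """Transformation: validation/cleanup clears the taint.
    (str methods on a subclass return plain str, dropping the marker.)"""
    return str(value.replace("..", ""))

def open_file(path: str) -> str:
    """Sink: refuse attacker-controlled data that was never sanitized."""
    if isinstance(path, Tainted):
        raise PermissionError("untrusted data reached a file-system sink")
    return path  # real code would perform the file operation here
```

Real SAST engines do this analysis statically over whole call graphs rather than at runtime, but the question they answer is the same: can a tainted source value reach a sensitive sink without passing through a sanitizer?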

Endor Labs said it responsibly disclosed the vulnerabilities to the OpenClaw maintainers, who subsequently addressed the issues, allowing the researchers to publish technical details. The disclosure did not provide extensive mitigation guidance but noted that fixes were implemented across the affected components.
