If you thought running an AI agent locally kept it safely inside your machine’s walls, you’re in for a surprise. Researchers at Oasis Security have disclosed a flaw chain that allowed a malicious website to quietly connect to a locally running OpenClaw agent and take full control.
The issue stems from a fundamental assumption baked into developer tools: that anything coming from “localhost” can be trusted. In reality, modern browsers allow external websites to open WebSocket connections to local services.
According to Oasis’s findings, a malicious browser page can silently connect to the OpenClaw gateway, which auto-trusts localhost connections and disables rate limiting, enabling rapid password brute-forcing and unauthorized device pairing.
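The trust model Oasis describes can be sketched in a few lines. This is an illustrative reconstruction, not OpenClaw’s actual code; the function names and the loopback-exemption logic are assumptions made for clarity:

```python
# Hypothetical sketch of the flawed trust model: any loopback peer
# is treated as trusted and exempted from rate limiting, even though
# a browser tab on the same machine can originate that traffic.

LOOPBACK_ADDRS = {"127.0.0.1", "::1"}

def is_trusted_peer(remote_addr: str) -> bool:
    # Flaw: a loopback source address alone grants trust.
    return remote_addr in LOOPBACK_ADDRS

def should_rate_limit(remote_addr: str) -> bool:
    # Flaw: "trusted" peers skip throttling entirely, so a
    # malicious local page can brute-force credentials freely.
    return not is_trusted_peer(remote_addr)

print(should_rate_limit("127.0.0.1"))   # loopback peer: never throttled
print(should_rate_limit("203.0.113.7")) # external peer: throttled
```

The problem is the conflation of *where a connection terminates* with *who initiated it*: a WebSocket opened by untrusted JavaScript still arrives from 127.0.0.1.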
“The modern web browser acts as a porous membrane, permitting untrusted, external JavaScript to bridge the gap to local services via WebSockets,” said Jason Soroko, senior fellow at Sectigo. “By relying on a local IP address to grant immunity from rate-limiting and to silently auto-approve device pairings, the system abandons the core tenets of a zero-trust architecture.”
Once hijacked, the attacker can obtain the OpenClaw agent’s high privileges, including autonomous workflows, access to codebases, integrations, and credentials. Oasis researchers called the flaw ClawJacked, tracked under CVE-2026-25253, and reported full proof-of-concept (PoC) code to OpenClaw, which then promptly fixed it.
‘localhost’ became a weak link
Oasis Security’s research showed how a combination of design choices enabled the flaw. OpenClaw relied on local binding, automatic device pairing, and minimal authentication friction to streamline onboarding.
Because WebSocket connections to localhost are not constrained by traditional cross-origin protections, a hostile website could initiate communication with the agent’s local gateway. From there, weak authentication controls and implicit trust in local origins allowed the attacker to pair a device session and begin issuing commands.
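Browsers do send an Origin header on WebSocket handshakes, but it is the server’s job to check it; the browser does not enforce same-origin policy for WebSockets. A minimal allowlist check that a local gateway could apply is sketched below; the allowlist entries and port are hypothetical, and this is not OpenClaw’s implementation:

```python
# Illustrative Origin allowlist for a local gateway's WebSocket
# handshake. Rejecting handshakes whose Origin is any other site
# closes the cross-site-to-localhost bridge described above.
# The port (18789) is a made-up example.

ALLOWED_ORIGINS = {
    "http://localhost:18789",
    "http://127.0.0.1:18789",
}

def origin_allowed(handshake_headers: dict) -> bool:
    # A page served from evil.example sends "Origin: https://evil.example",
    # which fails this check even though the TCP peer is loopback.
    return handshake_headers.get("Origin") in ALLOWED_ORIGINS
```

Origin checking alone is not sufficient (non-browser clients can forge the header), but it blocks the drive-by browser attack path at issue here.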
“What stands out is that it’s clear that product usefulness improved faster than security,” said Randolph Barr, CISO at Cequence Security. “The design focused on making the developer experience as smooth as possible by using local binding, automatic device pairing, and less friction for connectivity. This made adoption faster but also made defensive controls less effective.”
Gal Moyal of Noma Security echoed this concern that agentic AI tools prioritize seamless developer experience over security. He noted that WebSocket access to localhost is a known browser behavior, “but its intersection with an unthrottled authentication endpoint and automatic device trust from localhost creates a particularly dangerous combination.”
The full attack chain involves a victim visiting a malicious website whose hidden script connects to the locally running OpenClaw gateway via WebSockets, brute-forces its password without rate limits, and silently registers as a trusted device due to implicit localhost trust. Once authenticated, the attacker gains full control of the AI agent and its accessible data and functions.
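The “without rate limits” step is what makes the chain practical. A toy loop shows why: with no throttling, a script can test candidates as fast as the endpoint answers. The 4-digit PIN space and the `check_pin` stand-in are hypothetical; the point is the unthrottled attempt count, not OpenClaw’s real credential format:

```python
import itertools
import string

def check_pin(candidate: str, secret: str = "7351") -> bool:
    # Stand-in for the gateway's authentication endpoint.
    return candidate == secret

# With no rate limit, the attacker walks the whole candidate
# space at machine speed: "0000", "0001", ... until a hit.
attempts = 0
for pin in ("".join(d) for d in itertools.product(string.digits, repeat=4)):
    attempts += 1
    if check_pin(pin):
        break

print(attempts)  # → 7352: every candidate up to the secret, tried unthrottled
```

Even a modest per-IP rate limit (or account lockout) turns this from seconds into days, which is why its absence on a localhost-exposed endpoint mattered so much.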
A larger blast radius
Unlike regular software vulnerabilities, compromised AI agents have a bigger blast radius as they hold sensitive API keys, session tokens, file system access, and the authority to execute tasks across enterprise tools.
Barr emphasized that autonomous systems “aggregate identity, credentials, and workflow authority,” meaning a failure doesn’t occur quietly. Instead, the agent executes actions “with the full authority of the user, at machine speed and machine scale.” In developer environments, that could include modifying code repositories, accessing internal systems, or triggering automated processes.
Soroko described the browser itself as the unexpected attack vector, effectively bypassing the developer’s physical perimeter and “turning a simple background tab into an effective lock-pick.” Oasis noted that the OpenClaw team responded quickly, coordinating disclosure and issuing a fix (OpenClaw v2026.2.25 or later) within 24 hours. However, experts caution that rapid patching alone may not address the broader architectural risks. Organizations deploying AI agents, they noted, should implement stronger authentication, explicit user approval for session pairing, rate limiting, credential scoping, and behavioral monitoring.