{"id":7883,"date":"2026-04-21T12:16:12","date_gmt":"2026-04-21T12:16:12","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=7883"},"modified":"2026-04-21T12:16:12","modified_gmt":"2026-04-21T12:16:12","slug":"prompt-injection-turned-googles-antigravity-file-search-into-rce","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=7883","title":{"rendered":"Prompt injection turned Google\u2019s Antigravity file search into RCE"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>Security researchers have revealed a prompt injection flaw in Google\u2019s Antigravity IDE that could be weaponized to bypass its sandbox protections and achieve remote code execution (RCE).<\/p>\n<p>The issue came from Antigravity\u2019s ability to allow AI agents to invoke native functions, like searching files, on behalf of the user. Designed to kill complexity, the feature could allow attackers to inject malicious input into a tool parameter.<\/p>\n<p>According to Pillar Security researchers, the vulnerability could bypass Antigravity\u2019s \u201cmost restrictive security configuration,\u201d Secure Mode.<\/p>\n<p>The flaw was reported to Google in January, which acknowledged and fixed the issue internally, awarding Pillar Security a bounty through its Vulnerability Reward Program (VRP) for AI-specific categories. Google did not immediately respond to CSO\u2019s request for comments.<\/p>\n<h2 class=\"wp-block-heading\"><a><\/a>File search could be turned into code execution<\/h2>\n<p>Pillar\u2019s prompt injection vector relied on Antigravity\u2019s \u201cfind_my_name\u201d tool and an \u201cfd\u201d utility within. 
find_by_name is one of Antigravity\u2019s built-in agent tools; it lets the AI search for files and directories in the project workspace using the fd command-line utility.<\/p>\n<p>The problem was that fd interpreted any string beginning with \u201c-\u201d as a flag rather than a search pattern, so a crafted pattern such as \u201c-Xsh\u201d turned a file search into execution of a shell across the matched files. \u201cThe technique exploits insufficient input sanitization of the find_by_name tool\u2019s Pattern parameter, allowing attackers to inject command-line flags into the underlying fd utility, converting a file search operation into arbitrary code execution,\u201d the researchers said in a blog <a href=\"https:\/\/www.pillar.security\/blog\/prompt-injection-leads-to-rce-and-sandbox-escape-in-antigravity\">post<\/a>.<\/p>\n<p>Essentially, instead of just locating files, \u201cfd\u201d could be tricked into executing attacker-supplied binaries via a crafted prompt that manipulated the \u201cPattern\u201d parameter. The researchers demonstrated this by placing a file containing the malicious prompt in the local directory. Antigravity picked up the file, ran its stated tasks (such as launching Calculator), and also invoked the search tool, now primed for execution by the injected \u201c-Xsh\u201d pattern.<\/p>\n<p>This could also be turned into remote code execution via <a href=\"https:\/\/www.csoonline.com\/article\/4080154\/copilot-diagrams-could-leak-corporate-emails-via-indirect-prompt-injection.html\">indirect prompt injection<\/a>. 
\u201cA user pulls a benign-looking source file from an untrusted origin, such as a public repository, containing attacker-controlled comments that instruct the agent to stage and trigger the exploit,\u201d the researchers explained.<\/p>\n<p>Worse, none of Antigravity\u2019s existing protections could stop the technique.<\/p>\n<h2 class=\"wp-block-heading\"><a><\/a>Google\u2019s sandbox never got a chance<\/h2>\n<p>Antigravity\u2019s Secure Mode, which is designed to restrict network access, prevent out-of-workspace writes, and ensure all command operations run strictly under a sandbox context, could not flag or quarantine this technique, because the find_by_name tool is invoked before Secure Mode restrictions are evaluated.<\/p>\n<p>\u201cThe agent treats it as a native tool invocation, not a shell command, so it never reaches the security boundary that Secure Mode enforces,\u201d the researchers noted.<\/p>\n<p>The researchers traced the issue to a twofold root cause. The first was \u201cno <a href=\"https:\/\/www.csoonline.com\/article\/4151814\/langchain-path-traversal-bug-adds-to-input-validation-woes-in-ai-pipelines.html\">input validation<\/a>\u201d: the Pattern parameter accepts arbitrary strings without checking for legitimate search-pattern characters. The second was \u201cno argument termination\u201d: option parsing is never terminated before the pattern, so fd cannot distinguish flags from search terms. Google has already fixed the flaw internally, and Antigravity users need not take any further action to remain protected. However, the flaw\u2019s ability to bypass Secure Mode, Pillar researchers point out, underlines that security controls focused on shell commands are insufficient. \u201cThe industry must move beyond sanitization-based controls toward execution isolation,\u201d they said. 
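The twofold root cause described above is a classic argument-injection pattern. As an illustrative sketch only (build_fd_argv is a hypothetical helper, not Antigravity's actual code), the conventional mitigation is to terminate option parsing with "--" before any user-influenced pattern reaches fd:

```python
# Illustrative sketch: shows how a user-influenced search pattern can be
# parsed as an fd flag, and how "--" argument termination prevents it.
# build_fd_argv is a hypothetical helper, not Antigravity's real code.

def build_fd_argv(pattern: str, terminate_options: bool = True) -> list[str]:
    """Return the argv that a file-search tool would pass to fd."""
    if terminate_options:
        # "--" ends option parsing: everything after it is treated as a
        # positional search pattern, even if it starts with "-".
        return ["fd", "--", pattern]
    # Vulnerable form: a pattern like "-Xsh" is parsed by fd as its
    # -X/--exec-batch flag with "sh" as the command, executing a shell
    # over the search results instead of merely listing them.
    return ["fd", pattern]

malicious_pattern = "-Xsh"

print(build_fd_argv(malicious_pattern, terminate_options=False))  # flag injection
print(build_fd_argv(malicious_pattern))                           # inert pattern
```

This assumes fd's argument parser honors the conventional "--" end-of-options marker; even so, as the researchers argue, such sanitization is a narrower defense than isolating tool execution outright.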
\u201cEvery native tool parameter that reaches a shell command is a potential injection point.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Security researchers have revealed a prompt injection flaw in Google\u2019s Antigravity IDE that could be weaponized to bypass its sandbox protections and achieve remote code execution (RCE). The issue stemmed from Antigravity\u2019s ability to let AI agents invoke native functions, such as searching files, on behalf of the user. Designed to simplify routine tasks, the feature [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":7884,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-7883","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/7883"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=7883"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/7883\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/7884"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=7883"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=7883"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&
post=7883"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}