{"id":5982,"date":"2025-11-28T01:56:57","date_gmt":"2025-11-28T01:56:57","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=5982"},"modified":"2025-11-28T01:56:57","modified_gmt":"2025-11-28T01:56:57","slug":"security-researchers-caution-app-developers-about-risks-in-using-google-antigravity","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=5982","title":{"rendered":"Security researchers caution app developers about risks in using Google Antigravity"},"content":{"rendered":"<div>\n<div class=\"grid grid--cols-10@md grid--cols-8@lg article-column\">\n<div class=\"col-12 col-10@md col-6@lg col-start-3@lg\">\n<div class=\"article-column__content\">\n<div class=\"container\"><\/div>\n<p>Google\u2019s Antigravity development tool for creating artificial intelligence agents has been out for less than 11 days and already the company has been forced to update the known issues pages after security researchers discovered what they say are vulnerabilities.<\/p>\n<p><a href=\"https:\/\/mindgard.ai\/blog\/google-antigravity-persistent-code-execution-vulnerability\" target=\"_blank\" rel=\"noopener\">According to a blog from Mindgard<\/a>, one of the first to discover problems with Antigravity, Google isn\u2019t calling the issue it found a security bug. But Mindgard says a threat actor could create a malicious rule by taking advantage of Antigravity\u2019s strict direction that any AI assistant it creates must always follow user-defined rules.<\/p>\n<p>Author Aaron Portnoy, Mindgard\u2019s head of research and innovation, says that after his blog was posted, Google replied on November 25 to say a report has been filed with the responsible product team.<\/p>\n<p>Still, until there is action, \u201cthe existence of this vulnerability means that users are at risk to backdoor attacks via compromised workspaces when using Antigravity, which can be leveraged by attackers to execute arbitrary code on their systems. 
At present there is no setting that we could identify to safeguard against this vulnerability,\u201d Portnoy wrote in his blog.<\/p>\n<p>Even in the most restrictive mode of operation, \u201cexploitation proceeds unabated and without confirmation from the user,\u201d he wrote.<\/p>\n<p>Asked for comment, a Google spokesperson said the company is aware of the issue reported by Mindgard, and is working to address it.<\/p>\n<p>In an email, Mindgard\u2019s Portnoy told <em>CSOonline<\/em> that the nature of the flaw makes it difficult to mitigate. \u201cStrong identity would not help mitigate this issue, because the actions undertaken by Antigravity are occurring with the identity of the user running the application,\u201d he said. \u201cAs far as the operating system can tell, they are indistinguishable. Access management control could possibly do so, but only if you were able to restrict access to the global configuration directory, and it may have downstream impact on Antigravity\u2019s functionality.\u201d For example, he said, this could cause Model Context Protocol (MCP), a framework for standardizing the way AI systems share data, to malfunction.<\/p>\n<p>\u201cThe attack vector is through the source code repository that is opened by the developer,\u201d he explained, \u201cand doesn\u2019t need to be triggered through a prompt.\u201d<\/p>\n<p>Other researchers have also found problems with Antigravity:<\/p>\n<p><a href=\"https:\/\/blog.deadbits.ai\/p\/indirect-prompt-injection-in-ai-ides\" target=\"_blank\" rel=\"noopener\">Adam Swanda says he discovered and disclosed<\/a> to Google an indirect prompt injection vulnerability. 
He said Google told him the particular problem is a known issue and is expected behavior of the tool.<\/p>\n<p>Another researcher, who goes by the name Wunderwuzzi, <a href=\"https:\/\/embracethered.com\/blog\/posts\/2025\/security-keeps-google-antigravity-grounded\/\" target=\"_blank\" rel=\"noopener\">blogged about discovering five holes<\/a>, including data exfiltration and remote command execution via indirect prompt injection vulnerabilities.<br \/>His blog notes that, according to Google\u2019s Antigravity Known Issues page, <a href=\"https:\/\/bughunters.google.com\/learn\/invalid-reports\/google-products\/4655949258227712\/antigravity-known-issues\" target=\"_blank\" rel=\"noopener\">the company is working on fixes for several issues<\/a>.<\/p>\n<p>Asked for comment about the issues reported in the three blogs, a Google spokesperson said, \u201cWe take security issues very seriously and encourage reporting of all vulnerabilities so we can identify and address them quickly. We will continue to post <a href=\"https:\/\/bughunters.google.com\/learn\/invalid-reports\/google-products\/4655949258227712\/antigravity-known-issues\" target=\"_blank\" rel=\"noopener\">known issues<\/a> publicly as we work to address them.\u201d<\/p>\n<h2 class=\"wp-block-heading\">What is Antigravity?<\/h2>\n<p>Antigravity is an integrated development environment (IDE), released on November 18, that leverages Google\u2019s Gemini 3 Pro model. \u201cAntigravity isn\u2019t just an editor,\u201d <a href=\"https:\/\/antigravity.google\/blog\/introducing-google-antigravity?utm_source=deepmind.google&amp;utm_medium=referral&amp;utm_campaign=gdm&amp;utm_content=\" target=\"_blank\" rel=\"noopener\">Google says<\/a>. \u201cIt\u2019s a development platform that combines a familiar, AI-powered coding experience with a new agent-first interface. 
This allows you to deploy agents that autonomously plan, execute, and verify complex tasks across your editor, terminal, and browser.\u201d<\/p>\n<p>Individuals \u2014 including threat actors \u2014 can use it for free.<\/p>\n<p>Google Antigravity streamlines workflows by offering tools for parallelization, customization, and efficient knowledge management, the company says, to help eliminate common development obstacles and repetitive tasks. Agents can be spun up to tackle routine work such as codebase research, bug fixes, and backlog tasks.<\/p>\n<h2 class=\"wp-block-heading\">Agent installs the malware<\/h2>\n<p>The problem Mindgard discovered involves Antigravity\u2019s rule that the developer has to work within a trusted workspace or the tool won\u2019t function. The threat manifests through an attacker creating a malicious source code repository, Portnoy says. \u201cThen, if a target opens it (through finding it on Github, being social engineered, or tricked into opening it) then that user\u2019s system becomes compromised persistently.\u201d<\/p>\n<p>The malicious actor doesn\u2019t have to create an agent, he explained. The agent is part of Antigravity and is backed by the Google Gemini LLM. The agent is the component that is tricked into installing the backdoor by following instructions in the malicious source code repository that was opened by the user.<\/p>\n<p>\u201cIn Antigravity,\u201d Mindgard argues, \u201c\u2018trust\u2019 is effectively the entry point to the product rather than a conferral of privileges.\u201d The problem, it pointed out, is that a compromised workspace becomes a long-term backdoor into every new session. \u201cEven after a complete uninstall and re-install of Antigravity,\u201d says Mindgard, \u201cthe backdoor remains in effect. 
Because Antigravity\u2019s core intended design requires trusted workspace access, the vulnerability translates into cross-workspace risk, meaning one tainted workspace can impact all subsequent usage of Antigravity regardless of trust settings.\u201d<\/p>\n<p>For anyone responsible for AI cybersecurity, says Mindgard, this highlights the need to treat AI development environments as sensitive infrastructure, and to closely control what content, files, and configurations are allowed into them.<\/p>\n<h2 class=\"wp-block-heading\">Process \u2018perplexing\u2019<\/h2>\n<p>In his email, Portnoy acknowledged that Google is now taking some action. \u201cGoogle is moving through their established process, although it was a bit perplexing on the stop-and-start nature. First [the reported vulnerability] was flagged as not an issue. Then it was re-opened. Then the Known Issues page was altered in stealth to be more all encompassing. It\u2019s good that the vulnerability will be reviewed by their security team to ascertain its severity, although in the meantime we would recommend all Antigravity users to seriously consider the vulnerability found and means for mitigation.\u201d<\/p>\n<p>Adam Swanda says in his blog that he was able to partially extract an Antigravity agent\u2019s system prompt, enough to identify a design weakness that could lead to indirect prompt injection.<\/p>\n<h2 class=\"wp-block-heading\">Highlights broader issues<\/h2>\n<p>The problem is that the prompt tells the AI to strictly follow special XML-style tags that carry privileged instructions in a conversation between a user and the chatbot, so the user would get no warning that special, and possibly malicious, instructions were retrieved. When the agent fetches external web content, Swanda says, it doesn\u2019t sanitize these special tags to ensure they are actually from the application itself and not untrusted input. 
An attacker can embed their own special message in a webpage, or presumably any other content, and the Antigravity agent will treat those commands as trusted system instructions.<\/p>\n<p>This type of vulnerability isn\u2019t new, he adds, but the finding highlights broader issues in large language models and agent systems:<\/p>\n<ul>\n<li>LLMs cannot distinguish between trusted and untrusted sources;<\/li>\n<li>untrusted sources can contain malicious instructions to execute tools and\/or modify responses returned to the user\/application;<\/li>\n<li>system prompts should not be considered secret or used as a security control.<\/li>\n<\/ul>\n<h2 class=\"wp-block-heading\">Advice to developers<\/h2>\n<p>Swanda recommends that app development teams building AI agents with tool-calling:<\/p>\n<ul>\n<li>assume all external content is adversarial: use strong input and output guardrails, including for tool calling, and strip any special syntax before processing;<\/li>\n<li>implement tool execution safeguards: require explicit user approval for high-risk operations, especially those triggered after handling untrusted content or other dangerous tool combinations;<\/li>\n<li>not rely on prompts for security: system prompts, for example, can be extracted and used by an attacker to influence their attack strategy.<\/li>\n<\/ul>\n<p>In addition, Portnoy recommends that developers work with their security teams to ensure that they sufficiently vet and assess the AI-assisted tools that they are introducing to their organization. \u201cThere are numerous examples of using AI-assisted tools to accelerate development pipelines to enhance operational efficiency,\u201d he said. \u201cHowever, from experience, security in bleeding-edge (recently dropped) tools is somewhat of an afterthought. 
Thinking seriously about the intended use case of the AI tool, what data sources it can access, and what it is connected to are fundamental to ensuring you remain secure.\u201d<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Google\u2019s Antigravity development tool for creating artificial intelligence agents has been out for less than two weeks, and already the company has been forced to update its known issues page after security researchers discovered what they say are vulnerabilities. According to a blog from Mindgard, one of the first to discover problems with Antigravity, Google [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":5983,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[3],"tags":[],"class_list":["post-5982","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-education"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5982"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5982"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5982\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/media\/5983"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5982"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5982"},{"taxonomy":"post_tag","embeddable":true,"href":
"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5982"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}