Security researchers caution app developers about risks in using Google Antigravity

Tags:

Google’s Antigravity development tool for creating artificial intelligence agents has been out for less than 11 days and already the company has been forced to update the known issues pages after security researchers discovered what they say are vulnerabilities.

According to a blog from Mindgard, one of the first to discover problems with Antigravity, Google isn’t calling the issue it found a security bug. But Mindgard says a threat actor could create a malicious rule by taking advantage of Antigravity’s strict direction that any AI assistant it creates must always follow user-defined rules.

Author Aaron Portnoy, Mindgard’s head of research and innovation, says that after his blog was posted, Google replied on November 25 to say a report has been filed with the responsible product team.

Still, until there is action, “the existence of this vulnerability means that users are at risk to backdoor attacks via compromised workspaces when using Antigravity, which can be leveraged by attackers to execute arbitrary code on their systems. At present there is no setting that we could identify to safeguard against this vulnerability,” Portnoy wrote in his blog.

Even in the most restrictive mode of operation, “exploitation proceeds unabated and without confirmation from the user,” he wrote.

Asked for comment, a Google spokesperson said the company is aware of the issue reported by Mindguard, and is working to address it. 

In an email, Mindgard’s Portnoy told CSOonline that the nature of the flaw makes it difficult to mitigate. “Strong identity would not help mitigate this issue, because the actions undertaken by Antigravity are occurring with the identity of the user running the application,” he said. “As far as the operating system can tell, they are indistinguishable. Access management control could possibly do so, but only if you were able to restrict access to the global configuration directory, and it may have downstream impact on Antigravity’s functionality.” For example, he said, this could cause Model Context Protocol (MCP), a framework for standardizing the way AI systems share data, to malfunction.

“The attack vector is through the source code repository that is opened by the developer,” he explained, “and doesn’t need to be triggered through a prompt.”

Other researchers also have found problems with Antigravity:

Adam Swanda says he discovered and disclosed to Google an indirect prompt injection vulnerability. He said Google told him the particular problem is a known issue and is expected behavior of the tool.

another researcher who goes by the name Wunderwuzzi blogged about discovering five holes, including data exfiltration and remote command execution via indirect prompt injection vulnerabilities. 
This blog notes that, according to Google’s Antigravity Known Issues page, the company is working on fixes for several issues.   

Asked for comment about the issues reported in the three blogs, a Google spokesperson said, “We take security issues very seriously and encourage reporting of all vulnerabilities so we can identify and address them quickly. We will continue to post known issues publicly as we work to address them.”

What is Antigravity?

Antigravity is an integrated application development environment (IDE), released on November 18, that leverages the Google Gemini 3 Pro chatbot. “Antigravity isn’t just an editor,” Google says. “It’s a development platform that combines a familiar, AI-powered coding experience with a new agent-first interface. This allows you to deploy agents that autonomously plan, execute, and verify complex tasks across your editor, terminal, and browser.”

Individuals — including threat actors — can use it for free.

Google Antigravity streamlines workflows by offering tools for parallelization, customization, and efficient knowledge management, the company says, to help eliminate common development obstacles and repetitive tasks. Agents can be spun up to tackle routine tasks such as codebase research, bug fixes, and backlog tasks.

Agent installs the malware

The problem Mindgard discovered involves Antigravity’s rule that the developer has to work within a trusted workspace or the tool won’t function. The threat manifests through an attacker creating a malicious source code repository, Portnoy says. “Then, if a target opens it (through finding it on Github, being social engineered, or tricked into opening it) then that user’s system becomes compromised persistently.” 

The malicious actor doesn’t have to create an agent, he explained. The agent is part of Antigravity and is backed by the Google Gemini LLM. The agent is the component that is tricked into installing the backdoor by following instructions in the malicious source code repository that was opened by the user.

“In Antigravity,” Mindgard argues, “’trust’ is effectively the entry point to the product rather than a conferral of privileges.” The problem, it pointed out, is that a compromised workspace becomes a long-term backdoor into every new session. “Even after a complete uninstall and re-install of Antigravity,” says Mindgard, “the backdoor remains in effect. Because Antigravity’s core intended design requires trusted workspace access, the vulnerability translates into cross-workspace risk, meaning one tainted workspace can impact all subsequent usage of Antigravity regardless of trust settings.”

For anyone responsible for AI cybersecurity, says Mindguard, this highlights the need to treat AI development environments as sensitive infrastructure, and to closely control what content, files, and configurations are allowed into them.

Process ‘perplexing’

In his email, Portnoy acknowledged that Google is now taking some action. “Google is moving through their established process, although it was a bit perplexing on the stop-and-start nature. First [the reported vulnerability] was flagged as not an issue. Then it was re-opened. Then the Known Issues page was altered in stealth to be more all encompassing. It’s good that the vulnerability will be reviewed by their security team to ascertain its severity, although in the meantime we would recommend all Antigravity users to seriously consider the vulnerability found and means for mitigation.”

Adam Swanda says in his blog that he was able to partially extract an Antigravity agent’s system prompt, enough to identify a design weakness that could lead to indirect prompt injection.

Highlights broader issues

The problem is, the prompt tells the AI to strictly follow special XML-style tags that handle privileged instructions in a conversation between a user and the chatbot, so there would be no warning to the user that special, and possibly malicious, instructions were retrieved. When the agent fetches external web content, Swanda says, it doesn’t sanitize these special tags to ensure they are actually from the application itself and not untrusted input. An attacker can embed their own special message in a webpage, or presumably any other content, and the Antigravity agent will treat those commands as trusted system instructions.

This type of vulnerability isn’t new, he adds, but the finding highlights broader issues in large language models and agent systems:

LLMs cannot distinguish between trusted and untrusted sources;

untrusted sources can contain malicious instructions to execute tools and/or modify responses returned to the user/application;

system prompts should not be considered secret or used as a security control.

Advice to developers

Swanda recommends that app development teams building AI agents with tool-calling:

assume all external content is adversarial. Use strong input and output guardrails, including tool calling; Strip any special syntax before processing;

implement tool execution safeguards. Require explicit user approval for high-risk operations, especially those triggered after handling untrusted content or other dangerous tool combinations;

not rely on prompts for security. System prompts, for example, can be extracted and used by an attacker to influence their attack strategy.

In addition, Portnoy recommends that developers work with their security teams to ensure that they sufficiently vet and assess the AI-assisted tools that they are introducing to their organization. “There are numerous examples of using AI-assisted tools to accelerate development pipelines to enhance operational efficiency,” he said. “However, from experience, security in bleeding-edge (recently dropped) tools is somewhat of an afterthought. Thinking seriously about the intended use case of the AI tool, what data sources it can access, and what it is connected to are fundamental to ensuring you remain secure.” 

Categories

No Responses

Leave a Reply

Your email address will not be published. Required fields are marked *