Gemini Trifecta: AI autonomy without guardrails opens new attack surface

Tags:

Security researchers at Tenable revealed three distinct vulnerabilities across Gemini’s cloud assist, search optimization, and browsing components.

If exploited, these flaws allow attackers to inject prompts, hijack AI logic, and quietly siphon private user data, even bypassing many of Google’s built-in safeguards. Together, the flaws have been dubbed “Gemini Trifecta.”

Itay Ravia, head of Aim Labs, the cybersecurity outfit that first documented a similar EchoLeak zero-click attack on Microsoft 365 Copilot, said, “Tenable’s Gemini Trifecta reinforces that agents themselves become the attack vehicle once they’re granted too much autonomy without sufficient guardrails. The pattern is clear: logs, search histories, and browsing tools are all active attack surfaces.”

Google has since patched the issue, but researchers emphasized that the episode is a wake-up call for the AI era.

Prompt injection in Gemini Cloud Assist and Search

Gemini Cloud Assist is a feature that helps users summarize and interpret cloud logs (particularly in Google Cloud). Tenable found that this service could be tricked by an attacker to embed specially formatted content, such as through a manipulated HTTP User-Agent header, in a log. The tweaked content then flows into the logs, which Gemini later ingests and summarizes.

In a proof-of-concept (PoC) shared in a blog post, the researchers sent malicious prompt fragments via the User Agent field to a Cloud Function endpoint. When Gemini later “explained” the log entry, it included a phish-ready link derived from the crafted input–though the full prompt was hidden behind a collapsed “Additional prompt details” section.

Because logs are pervasive and are often considered passive artifacts, this effectively turns nearly any public-facing cloud endpoint into an attack surface, researchers noted. The blog post further argued that several other Google Cloud services, including Functions, Run, App Engine, Load Balancing, etc, could be similarly abused if logs are used in AI-assisted summarization.

The second vector exploits Gemini’s search personalization. As Gemini’s Search module uses a user’s past queries as context, an attacker could use JavaScript tricks to insert malicious “search queries” into a user’s browser history. When Gemini reads that history as context, it treats those injected prompts as legitimate inputs.

“The underlying issue was the model’s inability to differentiate between legitimate user queries and injected prompts from external sources,” the researchers said. “The JavaScript trick to inject search history to victims included stopping a redirect to the Google Search API, but waiting long enough to allow it to be logged in the search history and not actually redirecting the page.”

Exfiltration via the browsing tool

Even after prompt injection, the attacker needs a way to pull data out, and that’s what the third flaw affecting the Gemini Browsing Tool allowed. Tenable researchers crafted prompts to trick Gemini to fetch external web content using the Browser Tool, embedding user data into the query string of that request. The outbound HTTP call thereby carried the user’s sensitive data to an attacker-controlled server, without relying on visibly rendered links or markdown tricks.

This finding is notable as Google already has mitigations like suppressing hyperlink rendering or filtering image markdowns. The attack bypassed those UI-level defenses by using Google Browsing Tool invocation as the exfiltration channel.

While Google did not immediately respond to CSO’s request for comment, Tenable said the cloud giant has fixed all of these issues by sanitizing link outputs in Browser Tool and bringing in more structural protections in Gemini Cloud Assist and Search.

Prompt injection attacks have been around since AI first came into play, alongside some other sophisticated ways to subvert these intelligent models, including EchoChamber, EchoLeak, and Crescendo.  “These are intrinsic weaknesses in the way today’s agents are built, and we will continue to see them resurface across different platforms until runtime protections are widely deployed,” Ravia noted.

Categories

No Responses

Leave a Reply

Your email address will not be published. Required fields are marked *