Google Gemini vulnerability enables hidden phishing attacks

Tags:

Google Gemini for Workspace can be abused to generate email summaries that appear legitimate but contain malicious instructions or warnings. The problem is that attackers can redirect their victims to phishing sites without attachments or direct links. The vulnerability was submitted to 0DIN (0Day Investigative Network), Mozilla’s GenAI bug bounty program.

Although similar indirect prompt attacks on Gemini were already reported in 2024 and security measures were taken, the technique is still viable today, according to the expert.

How the attack works

In a blog post, GenAI bug bounty technical product manager Marco Figueroa explains that the attack relies on crafted HTML / CSS inside the email body. Because the injected text is hidden the user never sees the instruction in the original message. The trigger happens when the user requests Gemini to summarize their unread emails, they receive a manipulated response that appears to be legitimate, originating from Gemini itself.

Gemini parses the invisible directive and appends the attacker’s phishing warning to its summary output. If the user follows the AI-generated notification and follows the attacker’s instructions, this leads to leaked credentials or a phone-based social engineering attack.

Current LLM guard-rails largely focus on user-visible text. HTML/CSS tricks (e.g., zero-font, white-font, off-screen) bypass those heuristics because the model still receives the raw markup, according to the post.

0DIN has accessed the issue as of moderate risk with the intent of credential harvesting and voice-phishing (vishing).

To prevent such attacks, the researcher advises security teams to follow various detection and defense methods. One option is to remove, neutralize or ignore content that is designed to be hidden in the body text.

Alternatively, it is recommended to implement a post-processing filter that scans the Gemini output for urgent messages, URLs or phone numbers and flags the message for further review.

There are also further associated risks including supply-chain risk, as Newsletters, CRM systems and automated ticketing emails can become injection vectors — turning one compromised SaaS account into thousands of phishing beacons.

“Prompt injections are the new email macros. “Phishing For Gemini” shows that trustworthy AI summaries can be subverted with a single invisible tag,” according to the post. “Until LLMs gain robust context-isolation, every piece of third-party text your model ingests is executable code. Security teams must treat AI assistants as part of the attack surface and instrument them, sandbox them, and never assume their output is benign.”

Categories

No Responses

Leave a Reply

Your email address will not be published. Required fields are marked *