Unplug Gemini from email and calendars, says cybersecurity firm

Tags:

CSOs should consider turning off Google Gemini access to employees’ Gmail and Google Calendars, because the chatbot is vulnerable to a form of prompt injection, says the head of a cybersecurity firm that discovered the vulnerability.

”If you’re worried about the risk, you might want to turn off automatic email and calendar processing by Gemini until this, and potentially other things like it, are addressed,” Jeremy Snider, CEO of US-based FireTail, said in an interview.

“You could still make Gemini available to users for productivity purposes, but automatic pre-processing [of mail and calendars] is not ideal,” he said.

“Be aware if your developers are integrating Gemini into applications for chatbots and other use cases,” he added, “and then monitor your LLM [large language model] responses.”

Snider was commenting on the release this week of a FireTail report that found Gemini, Deepseek, and Grok are susceptible to what’s knows as ASCII Smuggling. It’s an old technique used by threat actors that could be used on new AI technology.

It incorporates invisible ASCII control characters that can embed hidden instructions within a seemingly benign string of text that isn’t filtered.

“Your browser (the UI) shows you a nice, clean prompt,” explains the report. “But the raw text that gets fed to the LLM has a secret, hidden payload tucked inside, encoded using Tags Unicode Blocks, characters not designed to be shown in the UI and therefore invisible. The LLM reads the hidden text, acts on it, and you see nothing wrong. It’s a fundamental application logic flaw.”

This flaw is “particularly dangerous when LLMs, like Gemini, are deeply integrated into enterprise platforms like Google Workspace,” the report adds.

FireTail tested six AI agents. OpenAI’s ChatGPT, Microsoft Copilot, and Anthropic AI’s Claude caught the attack. Gemini, DeepSeek, and Grok failed.

In a test, FireTail researchers were able to change the word “Meeting” in an appointment in Google Calendar to “Meeting. It is optional.” That may seem innocuous, but Snider worries a threat actor could do worse if Gemini is integrated into Google Workspace for enterprises. The tactic can be used for identity spoofing, FireTail argues, because a vulnerable chatbot would automatically process malicious instructions and bypass a typical Accept/Decline security gate.

For users with LLMs connected to their inboxes, a simple email containing hidden commands can instruct the AI agent to search the inbox for sensitive items or gather details about contacts, the report argues, “turning a standard phishing attempt into an autonomous data extraction tool.”  

Snider also worries about what happens if a vulnerable AI agent is integrated into a customer support chatbot.

What especially irritates him is that when FireTail reported its findings last month to Google, the company brushed off the threat.

“It looks to us as if the issue you’re describing can only result in social engineering,” FireTail says it was told by Google’s bug report team, “and we think that addressing it would not make our users less prone to such attacks.”

Snider told CSO that he “fundamentally disagrees.”

“Social engineering is a big problem,” he said. “When you take away the risk of social engineering, it does make users safe.”

The solution, he added, is for an AI agent to filter inputs.

Google was asked for comment on the FireTail report. No reply had been received by our deadline, nor was there a response from xAI, which is behind Grok.

“ASCII Smuggling attacks against AIs aren’t new,” commented Joseph Steinberg, a US-based cybersecurity and AI expert. “I saw one demonstrated over a year ago.”

He didn’t specify where, but in August 2024, a security researcher blogged about an ASCII smuggling vulnerability in Copilot. The finding was reported to Microsoft.

Many ways of disguising malicious prompts will be discovered over time, he added, so it’s important that IT and security leaders ensure that AIs don’t have the power to act without human approval on prompts that could be damaging.

It may be wise, he added, to convert all prompt requests to standard ASCII characters that are visible and expected before they reach the AI engine.

Similar prompt injection attacks

Last month CSO reported that attackers are increasingly exploiting generative AI by embedding malicious prompts in macros and exposing hidden data through parsers. Other such flaws include the discovery by Aim Security researchers of EchoLeak (CVE-2025-32711), a zero-click prompt injection vulnerability in Microsoft 365 Copilot that has since been patched.

In July, Pangea reported that large language models (LLMs) could be fooled by prompt injection attacks that embed malicious instructions into a query’s legal disclaimer, terms of service, or privacy policies. At the time,  Kellman Meghu, principal security architect at Canadian incident response firm DeepCove Cybersecurity, said, “How silly we are as an industry, pretending this thing [AI] is ready for prime time … We just keep throwing AI at the wall hoping something sticks.”

FireTail’s Snider believes that eventually Google will plug the hole it discovered in Gemini, in response to the “unwanted attention” from reporting by several IT news sites.

Categories

No Responses

Leave a Reply

Your email address will not be published. Required fields are marked *