A newly disclosed weakness in Google’s Gemini shows how attackers could exploit routine calendar invitations to influence the model’s behavior, underscoring emerging security risks as enterprises embed generative AI into everyday productivity and decision-making workflows.
The vulnerability was identified by application security firm Miggo. In its report, Miggo’s head of research, Liad Eliyahu, said Gemini parses the full context of a user’s calendar events, including titles, times, attendees, and descriptions, allowing it to answer questions such as what a user’s schedule looks like for the day.
“The mechanism for this attack exploits that integration,” Eliyahu said. “Because Gemini automatically ingests and interprets event data to be helpful, an attacker who can influence event fields can plant natural language instructions that the model may later execute.”
Miggo’s researchers said the finding highlights a broader security challenge facing LLM-based systems, where attacks focus on manipulating meaning and context rather than exploiting clearly identifiable malicious code.
“This Gemini vulnerability isn’t just an isolated edge case,” Eliyahu said. “Rather, it is a case study in how detection is struggling to keep up with AI-native threats. Traditional AppSec assumptions (including recognizable patterns and deterministic logic) do not map clearly to systems that reason in language and intent.”
Severity vs traditional attacks
The issue is significant not because it mirrors a conventional software flaw, but because it demonstrates how AI systems can be manipulated in ways similar to social engineering attacks.
“Traditionally, a calendar invite, email, or document is treated as data only,” said Sunil Varkey, a cybersecurity analyst. “The attacker must break code logic or memory safety to make the system ‘do something’, rather than rely on data alone.”
But in this case, the ‘bug’ is less about flawed code and more about how an LLM interprets language in context, combined with the permissions it has across connected applications, said Sakshi Grover, senior research manager for IDC Asia Pacific Cybersecurity Services.
“That combination turns a normal business object, a calendar invite, into an attack payload,” Grover said. “It reveals that LLM security at major vendors is still catching up to real-world enterprise threat models, especially around indirect prompt injection, tool use, and cross-app data handling.”
Keith Prabhu, founder and CEO of Confidis, said that while the execution of this attack occurs through Google Gemini, it more closely resembles a phishing-style technique.
“Once the malicious invite is accepted by the user, Gemini considers the accepted invite as trusted and executes the prompt,” Prabhu said. “While the rest of the computing world is moving towards Zero Trust, AI tools still trust desktop components implicitly. This can be a serious flaw since AI tools can be misused to act as a ‘concierge’ to do tasks that malware cannot directly do.”
Real enterprise exposure
Analysts point out that the risk is significant in enterprise environments as organizations rapidly deploy AI copilots connected to sensitive systems.
“As internal copilots ingest data from emails, calendars, documents, and collaboration tools, a single compromised account or phishing email can quietly embed malicious instructions,” said Chandrasekhar Bilugu, CTO of SureShield. “When employees run routine queries, the model may process this manipulated context and unintentionally disclose sensitive information.”
Grover said that threats of prompt injection have moved from theoretical to operational. “In IDC’s Asia/Pacific Study conducted in August 2025, enterprises in India cited ‘LLM prompt injection, model manipulation, or jailbreaking AI assistants’ as the second most concerning AI-driven threat, right after ‘model poisoning or adversarial inputs during AI training’,” she added.
Measures to prioritize
Prabhu said that security leaders need to include AI security awareness as a part of their annual information security training for all staff. The endpoints would also need to be hardened, keeping in mind threats due to this new attack vector.
Grover said organizations should assume prompt injection attacks will occur and focus on limiting the potential blast radius rather than trying to eliminate the risk altogether. She said this requires enforcing least privilege for AI systems, tightly scoping tool permissions, restricting default data access, and validating every AI-initiated action against business rules and sensitivity policies.
“The goal is not to make the model immune to language, because no model is, but to ensure that even if it is manipulated, it cannot quietly access more data than it should or exfiltrate information through secondary channels,” Grover added.
Varkey said security leaders should also rethink how they position AI copilots within their environments, warning against treating them like simple search tools. “Apply Zero Trust principles with strong guardrails: limit data access to least privilege, ensure untrusted content can’t become trusted instruction, and require approvals for high-risk actions such as sharing, sending, or writing back into business systems,” he added.
No Responses