Enterprise Flaw ‘GeminiJack’ ZeroClick in Gemini Fixed by Google: Case of Prompt Injection Attack

Across the AI based enterprise ecosystem of Google Gemini, researchers discovered an AI injection prompt vulnerability known as ‘GeminiJack’, that allows attackers to steal sensitive Gmail, Docs, and Calendar data. The vulnerability has been fixed, but experts say it is just the beginning of AI vulnerabilities to come.

The AI flaw in Google Gemini has exploited prompt injection to manipulate AI retrieval systems, leaking years of emails, calendars, and document repositories via disguised image requests. The vulnerability exposes architectural weaknesses in enterprise LLMs, with similar injection attacks likely as companies rush to deploy LLM, without proper safeguards.

How the flaw was detected

After an employee activated Gemini Enterprise through a search, the system used its retrieval system to gather relevant content – pulling the attacker’s document into its context, interpreting the instructions as legitimate queries and executing them across all Workspace data sources it had permission to access.

More into ‘GeminiJack’

‘GeminiJack’ interacted with systems to push poisoned content and manipulate to exfiltrate sensitive information without the target’s knowledge. The defining trait of the attack was that it required no interaction from the victim. Researchers noted that exploiting Gemini zero-click behavior meant employees did not need to open links, click prompts, or override warnings.

The attack also bypassed standard enterprise security controls like clicks were not required from the targeted employees and no warning signs appeared along with no traditional security tools were triggered.

Attack Module of ‘GeminiJack’

As per Noma Security researchers, attackers could embed hidden instructions inside a shared document or message. The URL contained the exfiltrated internal data discovered during searches.

As part of the fix, Vertex AI Search has now been fully separated from Gemini Enterprise, ensuring the two systems no longer share the same LLM-driven retrieval pipelines, Google noted.

Threat detection requires comprehensive inspection of all data sources feeding the agent’s context including tool outputs, retrieval-augmented generation data and other external inputs.

With the GeminiJack zero-click, a single routine AI query could leak:

  • Years of internal emails – including customer and financial communications
  • Complete calendar histories – revealing negotiations, business relationships, and organizational behavior
  • Entire document repositories – from contracts to technical architecture

Despite the patch, security researchers warned that similar indirect prompt-injection attacks could emerge as more organizations adopt AI systems with expansive access privileges.

Case of prompt Injection Attack

Over the past few years, the advent of large language models (LLMs) has revolutionized the field of natural language processing (NLP), enabling innovative new applications, to perform a diverse range of tasks. This includes summarizing large volumes of text, creatively generating new content, performing advanced reasoning and dynamically generating execution plans to achieve complex tasks.

With time attackers evolved and attacks are targeted to restrict enterprise workflows with adverse techniques like indirect prompt injection.

Indirect prompt injection can be used against systems that leverage large language models (LLMs) to process untrusted data. Here the attacker controls or influences the output of an instruction-tuned LLM by injecting text that the LLM misinterprets as legitimate instructions.

These instructions might be hidden from the user, for example using white text on a white background or non-printing Unicode characters.

There are two basic types of prompt injection attacks:

  1. Direct prompt injection: An attacker submits adversarial prompts directly to an AI tool.
  2. Indirect prompt injection: An attacker embeds prompt injections in external content that a GenAI system may access, such as documents or emails.

(Sources: Google Fixes GeminiJack Zero-Click AI Data Leak)

(Source: Indirect Prompt Injection Attacks: Hidden AI Risks)

Scroll to top