Month: August 2025

New Cyberattack Methodology ‘Man in Prompt’, User’s at Risk, Target-AI Tools

AI tools like ChatGPT, Google Gemini and others being afflicted by malicious actors via injecting harmful instructions into leading GenAI tools. These were overlooked previously and attack methodology targets the browser extensions installed by various organizations.

The attack methodology named as ‘Man in Prompt’, exercise its attack with new class exploit targeting the AI tools as per LayerX’s researchers.

As per the research any browser extension, even without any special permissions, can access the prompts of both commercial and internal LLMs and inject them with prompts to steal data, exfiltrate it and cover their tracks. 

The exploit has been tested on all top commercial LLMs, with proof-of-concept demos provided for ChatGPT and Google Gemini. 

The question is how do they impact Users & organizations at large & how does the AI tools function within web browsers?

For organizations the implications can be high then expected as AI tools are most sought after and slowly organization across verticals are relying on AI tools.

The LLMs used and tested on many organizations are mostly trained ones. They carry huge data set of information which are mostly confidential and possibility of being vulnerable to such attack rises .

The attack methodology named as ‘Man in Prompt’, exercise its attack with new class exploit targeting the AI tools as per LayerX’s researchers. As per the research any browser extension, even without any special permissions, can access the prompts of both commercial and internal LLMs and inject them with prompts to steal data, exfiltrate it, and cover their tracks. 

The attack methodology named as ‘Man in Prompt’, exercise its attack with new class exploit targeting the AI tools as per LayerX’s researchers. As per the research any browser extension, even without any special permissions, can access the prompts of both commercial and internal LLMs and inject them with prompts to steal data, exfiltrate it, and cover their tracks. 

LayerX researcher termed this type of attack as ‘hacking copilots’ that are equipped to steal organizational information.

The prompts given are a part of the web page structure where input fields are known as the Document Object Model, or DOM. So virtually any browser extension with basic scripting access to the DOM can read or alter what users type into AI prompts, even without requiring special permissions.

Bad actors can use compromised extensions to carry out activities including manipulating a user’s input to the AI.

  • Perform prompt injection attacks, altering the user’s input or inserting hidden instructions.
  • Extract data directly from the prompt, response, or session.
  • Compromise model integrity, tricking the LLM into revealing sensitive information or performing unintended actions

Understanding the attack scenario

Proof-of-concept attacks against major platforms

For ChatGPT, an extension with minimal declared permissions could inject a prompt, extract the AI’s response and remove chat history from the user’s view to reduce detection.

LayerX implemented an exploit that can steal internal data from corporate environments using Google Gemini via its integration into Google Workspace.

Over the last few months, Google has rolled out new integrations of its Gemini AI into Google Workspace. Currently, this feature is available to organizations using Workspace and paying users.

Gemini integration is implemented directly within the page as added code on top of the existing page. It modifies and directly writes to the web application’s Document Object Model (DOM), giving it control and access to all functionality within the application

These platforms are vulnerable to  any exploit which Layer X researchers showcased that without any special permissions shows how practically any user is vulnerable to such an attack. 

Threat mitigation

These kind of attacks creates a blind spot for traditional security tools like endpoint Data Loss Prevention (DLP) systems or Secure Web Gateways, as they lack visibility into these DOM-level interactions. Blocking AI tools by URL alone also won’t protect internal AI deployments.

LayerX advises organisations to adjust their security strategies towards inspecting in-browser behaviour.

Key recommendations include monitoring DOM interactions within AI tools to detect suspicious activity, blocking risky extensions based on their behavior rather than just their listed permissions, and actively preventing prompt tampering and data exfiltration in real-time at the browser layer.

(Source: https://layerxsecurity.com/blog/man-in-the-prompt-top-ai-tools-vulnerable-to-injection/)

Analyzing the newly discovered Vulnerability in Gemini CLI; Impact on Software coding

Google’s Gemini command line interface (CLI) AI agent

Its not been one month when Google’s Gemini CLI vulnerability discovered by Tracebit researchers and found attackers could use prompt injection attacks to steal sensitive data.

Google’s Gemini CLI, an open-source AI agent for coding could allow attackers exploit to hide malicious commands, using “a toxic combination of improper validation, prompt injection and misleading UX,” as Tracebit explains.

After reports of the vulnerability surfaced, Google classified the situation as Priority 1 and Severity 1 on July 23, releasing the improved version two days later.

Those planning to use Gemini CLI should immediately upgrade to its latest version (0.1.14). Additionally, users could use the tool’s sandboxing mode for additional security and protection.

Disclosure of the vulnerability

Researchers reported on vulnerability directly to Google through its Bug Hunters programme. According to a timeline provided by Tracebit, the vulnerability was initially reported to Google’s Vulnerability Disclosure Programme (VDP) on 27 June, just two days after Gemini CLI’s public release.

Impact of the vulnerability

A detailed analysis found that in the patched version of Gemini CLI, attempts at code injection display the malicious command to users. This require explicit approval for any additional binaries to be executed. This change is intended to prevent the silent execution that the original vulnerability enabled.

Tracebit’s researchers played an important role in discovering and reporting the issue which is symbol of independent security research, particularly as AI-powered tools become central to software development workflows.

LLM integral to software development but hackers are using it too

Gemini CLI integrates Google’s LLM with traditional command line tools such as PowerShell or Bash. This allows developers to use natural language prompts to speed up tasks such as analyzing and debugging code, generating documentation, and understanding new repositories (“repos”).

As developers worldwide are using LLMs to help them develop code faster, attackers worldwide are using LLMs to help them understand and attack applications faster. 

Tracebit also discovered that malicious commands could easily be hidden in Gemini CLI This is possible by by packing the command line with blank characters, pushing the malicious commands out of the user’s sight.

More vigilance required when examining and running third-party or untrusted code, especially in tools leveraging AI to assist in software development.

Through the use of LLMs, AI excels at educating users, finding patterns and automate repetitive tasks.

Sam Cox, Tracebit’s founder, says he personally tested the exploit, which ultimately allowed him to execute any command — including destructive ones. “That’s exactly why I found this so concerning,” Cox told Ars Technica. “The same technique would work for deleting files, a fork bomb or even installing a remote shell giving the attacker remote control of the user’s machine.”

Source: https://in.mashable.com/tech/97813/if-youre-coding-with-gemini-cli-you-need-this-security-update

Scroll to top