Summary
Anthropic’s Claude Code gained traction as a powerful AI coding assistant, promising developers a safe and streamlined way to build with Claude’s capabilities. Recently, however, two high-severity vulnerabilities were discovered in the tool. These flaws allow attackers to escape its security restrictions and execute arbitrary system commands.
The AI coding assistant was meant to enforce restrictions but unknowingly revealed how to bypass them. Threat researchers from Cymulate discovered the two high-severity vulnerabilities in Claude Code, which Anthropic’s team quickly addressed.
These issues allowed researchers to escape the tool’s intended restrictions and execute unauthorized actions, all with Claude’s own help.
| Attribute | Details |
| --- | --- |
| Severity | High |
| CVSS Score | 8.7 |
| CVEs | CVE-2025-54794, CVE-2025-54795 |
| POC Available | Yes |
| Actively Exploited | No |
| Exploited in Wild | No |
| Advisory Version | 1.0 |
Overview
Notably, attackers leveraged Claude’s own feedback mechanisms to refine and optimize their payloads.
These CVEs highlight how generative AI tools can be manipulated into aiding exploitation attempts, demonstrating the risks of integrating AI into secure development workflows.
| Vulnerability Name | CVE ID | Product Affected | CVSS Score | Fixed Version |
| --- | --- | --- | --- | --- |
| Path Restriction Bypass | CVE-2025-54794 | Claude Code < v0.2.111 | 7.7 | v0.2.111 |
| Command Injection | CVE-2025-54795 | Claude Code < v1.0.20 | 8.7 | v1.0.20 |
Technical Summary
CVE-2025-54794 – Directory Restriction Bypass
Claude Code tried to keep file access safe by only allowing work in certain folders. But it used a weak method to check file paths: it simply checked whether a path string started with an allowed folder name. An attacker could create a folder with a similar name (like `/tmp/allowed_dir_malicious`) and trick Claude into treating it as safe.
This could allow attackers to reach outside the approved folder, read sensitive files, or even access system settings. Using symbolic links, attackers could also redirect access to important files that should never be touched.
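The flawed pattern can be illustrated with a short Python sketch. The function names and the `/tmp/allowed_dir` path here are hypothetical, chosen only to mirror the behavior described above; this is not Anthropic’s actual code:

```python
import os

ALLOWED_DIR = "/tmp/allowed_dir"  # hypothetical approved folder

def is_allowed_weak(path: str) -> bool:
    # Flawed pattern: raw string-prefix comparison.
    # "/tmp/allowed_dir_malicious/..." starts with
    # "/tmp/allowed_dir", so it slips through.
    return path.startswith(ALLOWED_DIR)

def is_allowed_strict(path: str) -> bool:
    # Safer pattern: resolve symlinks and ".." segments, then
    # compare whole path components rather than raw characters.
    real = os.path.realpath(path)
    allowed = os.path.realpath(ALLOWED_DIR)
    return os.path.commonpath([real, allowed]) == allowed

print(is_allowed_weak("/tmp/allowed_dir_malicious/secret"))    # True  -> bypass
print(is_allowed_strict("/tmp/allowed_dir_malicious/secret"))  # False -> blocked
```

The component-wise comparison also defeats the symlink trick mentioned above, because `os.path.realpath` resolves links before the containment check runs.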
CVE-2025-54795 – Command Injection
Claude only allows certain commands, like echo or ls, to run without approval. But there was a mistake in how it sanitized user input: attackers could hide harmful commands inside allowed ones. For example, `echo "\"; <MALICIOUS_COMMAND>; echo \""` tricks Claude into running the attacker’s command between two harmless echo invocations.
Even worse, Claude helped improve these attack attempts. When a payload failed, the attacker could ask Claude why it didn’t work; Claude explained the problem and suggested fixes, leading to successful attacks.
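A minimal Python sketch of this class of mistake (the validator below is illustrative, not Anthropic’s actual implementation): checking only the leading command name lets a chained payload through, while rejecting shell metacharacters and parsing the full string does not:

```python
import re
import shlex

ALLOWLIST = {"echo", "ls", "pwd"}  # illustrative allowlist

def naive_is_safe(command: str) -> bool:
    # Flawed: inspects only the first token, so anything chained
    # after a ';' rides along with the allowed command.
    return command.split()[0] in ALLOWLIST

def stricter_is_safe(command: str) -> bool:
    # Reject shell metacharacters outright, then verify the parsed
    # command name against the allowlist.
    if re.search(r"[;&|`$<>]", command):
        return False
    try:
        tokens = shlex.split(command)
    except ValueError:
        return False
    return bool(tokens) and tokens[0] in ALLOWLIST

payload = 'echo "\\"; open -a Calculator; echo \\""'
print(naive_is_safe(payload))     # True  -> injected chain would run in a shell
print(stricter_is_safe(payload))  # False -> rejected before reaching a shell
```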
| CVE ID | System Affected | Vulnerability Details | Impact |
| --- | --- | --- | --- |
| CVE-2025-54794 | Claude Code versions below v0.2.111 | Claude used weak prefix matching to check whether files were inside an approved folder. Attackers could create folders with similar names to bypass the check. | Attackers can escape the sandbox, access sensitive files, and potentially escalate system privileges. |
| CVE-2025-54795 | Claude Code versions below v1.0.20 | Claude allowed only safe commands, but input was not sanitized properly. Attackers could hide malicious commands inside allowed ones like echo. | Attackers can run harmful commands, open applications, and possibly install malware or backdoors. |
POC Available:
This vulnerability exploits a weakness in how Claude handles whitelisted command strings. Improper input sanitization allows attackers to inject arbitrary shell commands using echo, bypassing any user prompt or approval.
Step 1 – Try a basic payload
`echo "test"; ls -la ../restricted` (This gets flagged by Claude, which asks for user confirmation.)
Step 2 – Refined working payload:
`echo "\"; ls -la ../restricted; echo \""`
Claude executes this without a prompt, listing a directory (`../restricted`) outside the current working directory that should not be accessible.
Step 3 – Execute arbitrary system command (e.g., launch Calculator)
`echo "\"; open -a Calculator; echo \""`
This launches the Calculator app without any user approval.
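Beyond version updates, a general defense against this class of injection is to execute allowlisted commands without a shell at all, so `;` and `|` become literal argument text. A hedged Python sketch (`run_allowlisted` is a hypothetical helper, not part of Claude Code):

```python
import shlex
import subprocess

ALLOWLIST = {"echo", "ls"}  # illustrative allowlist

def run_allowlisted(command: str) -> str:
    # Parse with shell-style quoting but execute WITHOUT a shell:
    # ';', '|', and '$()' are passed through as literal arguments.
    tokens = shlex.split(command)
    if not tokens or tokens[0] not in ALLOWLIST:
        raise PermissionError(f"command not allowed: {command!r}")
    return subprocess.run(tokens, capture_output=True, text=True).stdout

# The PoC payload degrades to echo printing its argument verbatim;
# Calculator never launches because no shell interprets the ';'.
print(run_allowlisted('echo "\\"; open -a Calculator; echo \\""'))
```

Avoiding `shell=True` (and shell invocation generally) removes the injection surface rather than trying to sanitize around it.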
Remediation:
For CVE-2025-54794 → Update to v0.2.111 or later
For CVE-2025-54795 → Update to v1.0.20 or later
Conclusion:
These vulnerabilities highlight a growing concern in AI-assisted development: the AI’s ability to assist malicious users. Claude Code not only allowed abuse through technical flaws, but also helped attackers refine and improve their exploitation strategy.
Organizations leveraging AI in development pipelines must apply the same rigor used for traditional tools: enforce strict input validation, isolate environments, and assume the AI can be misled or exploited.
Anthropic’s security and engineering teams were fast, professional, and well coordinated throughout the disclosure process.
References: