Adversa — Claude Code deny rule bypass allows prompt injection of blocked commands

AI relevance: Claude Code is Anthropic's flagship agentic CLI tool that executes shell commands and tools on the user's machine, so a bypass of its permission system turns prompt injection into arbitrary command execution for developers and enterprises relying on AI coding assistants.

  • Tel Aviv security firm Adversa discovered a critical vulnerability in Claude Code's permission system on April 1, 2026.
  • The bypass occurs when command pipelines exceed 50 subcommands, causing the deny rule system to fail silently.
  • Attackers can use prompt injection to execute explicitly blocked commands like curl, wget, and nc despite deny rules.
  • The root cause is in Claude Code's bashPermissions.ts module at line 2174, where the behavior for over-threshold pipelines defaults to "ask" instead of "deny" — blocked commands silently revert to a permission prompt rather than being refused outright.
  • This represents a fundamental security boundary failure in AI agent permission systems.
  • The reported fix is a one-line change: set the fallback behavior from "ask" to "deny" in bashPermissions.ts so the check fails closed.
  • This vulnerability demonstrates how AI agent security models can fail in unexpected ways under edge cases.
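The failure pattern described above can be sketched in TypeScript. This is an illustrative reconstruction, not Anthropic's actual code: the function names, the threshold constant, and the control flow are assumptions based solely on the behavior reported in this advisory.

```typescript
// Hypothetical sketch of the reported failure mode in a permission checker.
type Behavior = "allow" | "deny" | "ask";

const MAX_SUBCOMMANDS = 50; // pipelines longer than this skip per-command checks

function evaluatePipeline(subcommands: string[], denyList: string[]): Behavior {
  if (subcommands.length > MAX_SUBCOMMANDS) {
    // BUG: falling back to "ask" means an attacker can pad an injected
    // pipeline past the threshold and dodge every deny rule.
    // Failing closed here (return "deny") eliminates the bypass.
    return "ask";
  }
  for (const cmd of subcommands) {
    if (denyList.some((blocked) => cmd.startsWith(blocked))) {
      return "deny";
    }
  }
  return "ask";
}
```

The general lesson is that security checks should fail closed: any path the checker cannot fully evaluate (too many subcommands, a parse error, an unknown construct) should resolve to "deny", not to a softer default.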

Why it matters

Agentic AI tools like Claude Code operate with significant system access, making their permission systems critical security boundaries. When deny rules can be bypassed through prompt injection, attackers gain unauthorized access to execute dangerous commands that could lead to data exfiltration, system compromise, or lateral movement within development environments. This vulnerability highlights the immature state of AI agent security and the need for rigorous testing of permission systems under adversarial conditions.

What to do

  • Update Claude Code immediately — Check for patches from Anthropic addressing this vulnerability
  • Review command pipeline complexity — Monitor for unusually long command sequences in AI agent usage
  • Implement additional security layers — Use network filtering, endpoint protection, and command monitoring
  • Audit permission configurations — Verify that deny rules are properly enforced in all scenarios
  • Monitor for anomalous behavior — Watch for unexpected network connections or file system access
  • Consider sandboxed execution — Run AI agents in isolated environments with restricted privileges
  • Educate developers — Train teams on AI agent security risks and proper configuration
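For the audit step above, deny rules for the commands named in this advisory can be declared in a Claude Code settings file (e.g. a project-level `.claude/settings.json`). The fragment below is a sketch; verify the exact rule syntax against Anthropic's current documentation, and remember that until the underlying bug is patched, these rules are exactly what the over-threshold bypass circumvents:

```json
{
  "permissions": {
    "deny": [
      "Bash(curl:*)",
      "Bash(wget:*)",
      "Bash(nc:*)"
    ]
  }
}
```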

Sources