GitHub Advisory — Langflow CSV Agent RCE (CVE-2026-27966)

AI relevance: Langflow is a common LLM workflow hub; the CSV Agent’s prompt-driven Python REPL turns a single malicious prompt into host-level RCE inside AI agent infrastructure.

  • The CSV Agent node hardcodes allow_dangerous_code=True, automatically enabling LangChain’s python_repl_ast tool.
  • Attackers can embed a tool call in model output (or prompt injection) to execute arbitrary Python and OS commands.
  • The advisory notes there is no UI toggle or environment flag to disable this behavior in affected releases.
  • Proof-of-concept shows a CSVAgent flow writing files on the server via __import__("os").system(...).
  • Fix guidance recommends defaulting to allow_dangerous_code=False or exposing an explicit opt-in toggle.
  • Any Langflow deployment that lets untrusted users influence prompts becomes a direct RCE surface.
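The mechanism behind these bullets can be made concrete with a minimal sketch. This is not Langflow's or LangChain's actual code; it only illustrates why a prompt-driven Python REPL tool is an RCE primitive: the tool executes whatever code string the model emits, so an injected tool call rides the same channel as a legitimate one.

```python
# Minimal sketch (illustrative, not Langflow's real implementation) of a
# REPL-style tool such as python_repl_ast: it simply executes the code
# string the model produced, with no sandbox or allowlist.

def python_repl_tool(code: str) -> str:
    """Execute model-supplied Python and return the 'result' variable."""
    local_vars: dict = {}
    exec(code, {}, local_vars)  # model output goes straight to exec()
    return str(local_vars.get("result", ""))

# A benign tool call the agent might emit while analyzing a CSV:
print(python_repl_tool("result = sum([1, 2, 3])"))  # → 6

# An injected tool call reaches the OS through the identical code path.
# (Kept harmless here: it only reads the process id. system() calls,
# file writes, or reverse shells would travel the same route.)
print(python_repl_tool("import os; result = os.getpid()"))
```

The point of the sketch is that there is no structural difference between the two calls: once `allow_dangerous_code=True` enables the tool, the only barrier between a prompt and the host OS is the model's compliance.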

Security impact

CSV ingestion pipelines are deceptively dangerous in agent workflows. When a tool like Langflow's CSV Agent permits code execution, it creates a path from seemingly benign data ingestion to full host compromise. If your agent accepts user-provided files, an attacker can deliver a CSV carrying injected instructions; when the agent analyzes the file, the model can be steered into emitting Python that the REPL tool then executes on the host. This makes "document processing" an RCE surface, and one that's easy to miss in threat models.

In AI deployments, CSV ingestion often happens inside data preprocessing jobs with broad access to data lakes, embeddings, or evaluation logs. Compromise here can spill training data, customer records, or internal datasets. It also allows silent tampering with data that feeds future model behavior, which is a subtle but damaging integrity risk.

Mitigation strategy

Upgrade Langflow or disable the vulnerable CSV Agent. Put CSV parsing inside a sandbox or container with no secrets mounted and minimal network access. Validate file origins and scan inputs. If you must allow user uploads, treat them as untrusted code and process them in isolated worker environments.
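One way to approximate the "isolated worker" advice in plain Python is to parse untrusted CSVs in a child process with a scrubbed environment and a hard timeout, so that even if parsing is somehow subverted, the worker holds no API keys and cannot run forever. This is a hedged sketch under stated assumptions: the inline worker script, the environment policy, and the ten-second limit are illustrative choices, not Langflow configuration.

```python
# Sketch: count rows of an untrusted CSV in an isolated child process.
# The child gets a minimal environment (no OPENAI_API_KEY or other
# secrets inherited) and runs Python in isolated mode (-I), which also
# ignores PYTHON* environment variables and user site-packages.
import os
import subprocess
import sys
import textwrap

WORKER = textwrap.dedent("""
    import csv, sys
    with open(sys.argv[1], newline="") as f:
        rows = list(csv.reader(f))
    print(len(rows))
""")

def parse_untrusted_csv(path: str, timeout: int = 10) -> int:
    """Parse an untrusted CSV in a separate, secret-free worker process."""
    clean_env = {"PATH": os.defpath}  # deliberately drop all other vars
    proc = subprocess.run(
        [sys.executable, "-I", "-c", WORKER, path],
        env=clean_env,
        capture_output=True,
        text=True,
        timeout=timeout,   # kill runaway parsers
        check=True,
    )
    return int(proc.stdout.strip())
```

In production you would go further (containers, seccomp, no network egress), but even this cheap process boundary ensures a compromised parse step cannot read the orchestrator's credentials out of its own environment.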

Why it matters

  • Langflow often runs with access to model keys and connectors; RCE turns prompt injection into full credential compromise.
  • Agent workflow hubs are commonly deployed in shared environments, so a single flow can become a lateral movement foothold.
  • The issue highlights how “tool enablement” choices in agent frameworks can silently expand the blast radius.

What to do

  • Patch: update Langflow to the fixed release referenced by the advisory.
  • Restrict inputs: isolate or gate CSV Agent flows that accept untrusted prompts or files.
  • Sandbox execution: run Langflow inside constrained containers with minimal privileges and network egress.
  • Rotate secrets: if exposure is possible, rotate API keys and audit outbound calls from Langflow hosts.
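The fix guidance above (default to `allow_dangerous_code=False`, require an explicit opt-in) can also be approximated in your own wrappers while waiting to patch. The sketch below is a naive static denylist, not a sandbox: determined attackers can bypass AST checks, so treat it strictly as defense in depth. The function names and denylists are assumptions for illustration, not Langflow APIs.

```python
# Hedged sketch: gate a REPL-style tool behind an explicit opt-in flag,
# rejecting obviously dangerous code by default. AST denylists are
# bypassable; this is defense in depth, NOT a substitute for patching.
import ast

DANGEROUS_NAMES = {"__import__", "eval", "exec", "open", "compile"}
DANGEROUS_MODULES = {"os", "subprocess", "sys", "socket", "shutil"}

def is_dangerous(code: str) -> bool:
    """Flag imports of risky modules, risky builtins, and dunder access."""
    tree = ast.parse(code)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [alias.name.split(".")[0] for alias in node.names]
            if isinstance(node, ast.ImportFrom) and node.module:
                names.append(node.module.split(".")[0])
            if set(names) & DANGEROUS_MODULES:
                return True
        if isinstance(node, ast.Name) and node.id in DANGEROUS_NAMES:
            return True
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            return True
    return False

def guarded_repl(code: str, allow_dangerous_code: bool = False) -> str:
    """Opt-in gate mirroring the advisory's proposed default-off toggle."""
    if not allow_dangerous_code and is_dangerous(code):
        raise PermissionError(
            "dangerous code blocked; set allow_dangerous_code=True to opt in"
        )
    scope: dict = {}
    exec(code, scope)
    return str(scope.get("result", ""))
```

With this shape, the advisory's PoC payload (`__import__("os").system(...)`) is refused by default, while benign analysis code still runs; the unsafe path exists only behind a flag an operator must consciously set.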

Sources