XCIT Threat Labs — Malicious LLM Routers Inject Payloads and Steal Credentials
AI relevance: Third-party LLM API routers sit at the trust boundary between AI agents and model providers, giving them the ability to silently inject malicious tool calls into agent sessions and exfiltrate API keys — a supply-chain attack vector that affects any agent architecture routing through external API services.
What happened
- XCIT Threat Labs audited 428 LLM API routers (28 paid, 400 free) used as intermediaries between AI agents and model providers like OpenAI, Anthropic, and Google.
- Because routers drop TLS sessions to process requests, they have access to all plaintext data — including API keys, secret credentials, and JSON tool-call payloads passing between agents and models.
- 9 routers actively injected malicious tool calls into AI sessions (1 paid, 8 free), inserting harmful instructions into JSON payloads that agents then executed.
- 17 routers harvested credentials, capturing researcher-owned AWS keys that passed through their infrastructure.
- One router drained a crypto wallet by modifying transactions in transit to redirect Ether from a decoy test wallet — demonstrating real financial theft capability.
- A single leaked OpenAI API key triggered automated abuse generating over 100 million tokens via GPT-5.4 models installed on compromised routers.
- Poor relay services generated over 2 billion tokens and stole 99 credentials across 440 Codex sessions, many unattended.
- Attacks include conditional and dependency-targeted injection — malicious payloads that activate only when specific criteria are met, making detection harder.
Why it matters
AI agent architectures commonly route through third-party API services for load balancing, cost optimization, or model fallback. These routers are treated as trusted infrastructure, but this research shows that benign-looking services can silently compromise entire agent sessions. For organizations using AI coding agents (Claude Code, Copilot), credential and key leakage through a compromised router cascades into broader infrastructure compromise.
What to do
- Inventory all third-party LLM API routers in your agent architecture and assess their trustworthiness — prefer self-hosted or well-audited services.
- Never pass production API keys through free or unvetted routing services; use scoped, short-lived credentials.
- Implement request/response integrity checking to detect payload modification in transit.
- Avoid sending sensitive data (private keys, seed phrases, production credentials) through AI agents that route through third-party services.