Vercel Breached via Context AI Supply Chain — Internal Database Sold for $2M

AI relevance: This breach demonstrates how AI-powered SaaS tools with OAuth access to corporate Google Workspace environments become a lateral-movement vector — a compromised AI vendor's OAuth tokens gave attackers access to a Vercel employee's Drive, cascading into a full database breach.

On April 19, 2026, Vercel announced it had been breached through the compromise of a third-party vendor, Context AI, an AI-powered knowledge tool. A Vercel employee had connected Context AI to their Vercel Enterprise Google account, granting the AI tool full read access to Google Drive. When Context AI's OAuth tokens were stolen, attackers used that access to reach Vercel's internal database, which is now being sold on BreachForums for $2M.

Key Findings

  • Attack chain: Context AI suffered a security incident where unauthorized actors gained access to OAuth tokens → those tokens provided access to Google Workspace accounts of users who had authorized Context AI → attackers accessed a Vercel employee's Drive → gained access to Vercel's internal database.
  • Context AI disclosed that a subset of users on their legacy and experimental products were affected by unauthorized OAuth token access.
  • The leaked Vercel database is being sold on BreachForums for $2M. While Vercel stated its software remained safe, the full impact of internal database access is unknown — API keys, GitHub tokens, and NPM account credentials may have been exposed.
  • OX Security recommends that Vercel and Context AI customers consider themselves breached and rotate all keys, tokens, and credentials.
  • Popular Vercel-maintained NPM packages (Next.js, Turborepo, SWR, AI SDK) should be pinned to specific versions to mitigate potential future supply-chain attacks in the event maintainer accounts were compromised.
  • As of April 20, 2026, the beta.context.ai URL was no longer accessible, suggesting Context AI took their experimental products offline.
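The OAuth-token audit implied by these findings can be scripted. In Google Workspace, admins can enumerate third-party grants per user via the Admin SDK Directory API (`tokens().list`) and match them against the compromised client ID. A minimal sketch — the sample records below stand in for the real API response; only the Context AI client ID is taken from this report:

```python
# Sketch: flagging OAuth grants issued to a known-compromised client ID.
# In production, `tokens` would come from the Google Admin SDK Directory
# API (service.tokens().list(userKey=...).execute()["items"]); here we
# filter illustrative records shaped like that response.

CONTEXT_AI_CLIENT = (
    "110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com"
)

def flag_grants(tokens: list[dict], client_id: str) -> list[dict]:
    """Return the token grants issued to the given OAuth client."""
    return [t for t in tokens if t.get("clientId") == client_id]

# Illustrative token records (not real data).
sample = [
    {"clientId": CONTEXT_AI_CLIENT, "displayText": "Context AI",
     "scopes": ["https://www.googleapis.com/auth/drive.readonly"]},
    {"clientId": "other-app.apps.googleusercontent.com",
     "displayText": "Calendar Sync", "scopes": ["openid"]},
]

hits = flag_grants(sample, CONTEXT_AI_CLIENT)
print([t["displayText"] for t in hits])
```

Any flagged grant should be revoked (the same API exposes `tokens().delete`) and the affected user's credentials rotated.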

Why It Matters

This is a textbook AI supply-chain attack: an AI tool with broad OAuth permissions becomes a lateral-movement bridge into an enterprise environment. The risk is not just the AI tool's own security — it's the cumulative blast radius when that tool has access to Google Workspace, Slack, GitHub, or other connected services. Organizations granting AI SaaS tools OAuth access should assume those tools are equivalent to giving the vendor read access to their entire connected ecosystem.
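The scope distinction is concrete: Google's `drive.readonly` scope exposes every file in a user's Drive, while the narrower `drive.file` scope limits an app to files the user explicitly opens or creates with it. A sketch of how an authorization request encodes that choice — the client ID and redirect URI are placeholders, not values from this incident:

```python
from urllib.parse import urlencode

# Google's standard OAuth 2.0 authorization endpoint.
AUTH_ENDPOINT = "https://accounts.google.com/o/oauth2/v2/auth"

def auth_url(client_id: str, redirect_uri: str, scopes: list[str]) -> str:
    """Build an OAuth 2.0 authorization URL for the given scopes."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "response_type": "code",
        "scope": " ".join(scopes),
        "access_type": "offline",
    }
    return f"{AUTH_ENDPOINT}?{urlencode(params)}"

# Broad grant (comparable to what Context AI held): read all Drive files.
broad = ["https://www.googleapis.com/auth/drive.readonly"]
# Least-privilege alternative: only files the user shares with the app.
narrow = ["https://www.googleapis.com/auth/drive.file"]

print(auth_url("example-client-id", "https://app.example/callback", narrow))
```

Had the grant been scoped to `drive.file`, a stolen token would have exposed only the documents the employee deliberately routed through Context AI, not the entire Drive.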

What to Do

  • Check Google Account third-party app connections for Context AI's app ID: 110671459871-30f1spbu0hptbs60cb4vsmv79i7bbvqj.apps.googleusercontent.com
  • Check if the Context AI Chrome Extension was installed (ID: omddlmnhcofjbnbflmjginpjjblphbgk).
  • Rotate all keys, tokens, and credentials connected to Vercel and Context AI.
  • Pin Vercel-maintained NPM packages to specific versions (see npmjs.com/~vercel-release-bot).
  • Review and restrict OAuth scopes granted to AI SaaS tools — prefer least-privilege access over blanket Drive/workspace permissions.
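The pinning step can be enforced mechanically by rewriting semver range specifiers (`^`, `~`, `>=`) in `package.json` to exact installed versions. A minimal sketch — the package names and version numbers below are illustrative, not taken from any real lockfile:

```python
import json
import re

def pin_versions(pkg: dict, installed: dict) -> dict:
    """Replace semver range specifiers with exact installed versions.

    `pkg` is a parsed package.json; `installed` maps package names to the
    versions currently resolved (e.g. read from a lockfile).
    """
    pinned = dict(pkg)
    deps = dict(pinned.get("dependencies", {}))
    for name, spec in deps.items():
        if re.match(r"^[\^~>]", spec):  # a range, not an exact pin
            deps[name] = installed.get(name, spec.lstrip("^~>="))
    pinned["dependencies"] = deps
    return pinned

# Illustrative data only.
pkg = {"dependencies": {"next": "^14.2.0", "swr": "~2.2.5", "ai": "3.0.1"}}
installed = {"next": "14.2.3", "swr": "2.2.5"}

print(json.dumps(pin_versions(pkg, installed)["dependencies"], indent=2))
```

Running `npm config set save-exact true` makes future `npm install` invocations record exact versions by default, so new dependencies stay pinned going forward.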

Sources