Lovable — BOLA Exposes AI Chat Histories and Database Credentials in Vibe Coding Platform

AI relevance: Vibe coding platforms that generate full-stack applications from natural language prompts are producing insecure code at scale — Lovable's broken authorization and exposed Supabase credentials show how AI-assisted development creates real-world data exposure without any traditional "hack."

Lovable, the $6.6 billion vibe coding platform with eight million users, is at the center of a growing security controversy after a researcher demonstrated that a broken object-level authorization (BOLA) vulnerability in its API allowed anyone with a free account to access other users' projects, source code, database credentials, and AI chat histories.

What happened

  • Security researcher @weezerOSINT reported the flaw to Lovable's HackerOne bug bounty program on March 3, 2026.
  • Lovable patched the issue for new projects but not for existing ones, marked a follow-up report as a "duplicate," and closed it, leaving the vulnerability exploitable for 48 days.
  • The researcher demonstrated that as few as five API calls from a free account could extract another user's profile, source code, and hardcoded Supabase database credentials.
  • Exposed projects included a Danish nonprofit's data with real user records (names, job titles, LinkedIn profiles, Stripe customer IDs) linked to employees at Accenture Denmark, Copenhagen Business School, and reportedly Nvidia, Microsoft, Uber, and Spotify.
  • Lovable initially denied a data breach, calling the exposure "intentional behavior" and blaming "unclear documentation." It later partially apologized but also blamed HackerOne for not escalating the report.
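The core flaw (BOLA, OWASP API Security's top-ranked API risk) is an endpoint that returns any object a caller names without checking whether the caller owns it. A minimal sketch of the broken pattern and the fix, using hypothetical names (`fetch_project_*`, `PROJECTS`) rather than Lovable's actual API:

```python
# In-memory stand-in for a projects table; purely illustrative.
PROJECTS = {
    "proj-1": {"owner": "alice", "secrets": {"SUPABASE_KEY": "sk-..."}},
    "proj-2": {"owner": "bob", "secrets": {"SUPABASE_KEY": "sk-..."}},
}

def fetch_project_vulnerable(project_id: str, requester: str) -> dict:
    """BOLA: any authenticated caller gets the record back.
    The requester is accepted but never compared to the owner."""
    return PROJECTS[project_id]

def fetch_project_fixed(project_id: str, requester: str) -> dict:
    """Object-level check: the caller must own the object it names."""
    project = PROJECTS[project_id]
    if project["owner"] != requester:
        raise PermissionError("requester does not own this project")
    return project
```

The fix is one comparison per object access, which is why BOLA is classed as a logic flaw rather than something a scanner reliably catches: the vulnerable code runs without error for every legitimate request.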

Why it matters

  • This is the third documented security incident involving Lovable. In February 2026, a researcher found 16 vulnerabilities (6 critical) in a single Lovable-hosted app, exposing 18,697 user records including student accounts from UC Berkeley and UC Davis. His support ticket was closed without response.
  • The broader "vibe coding" category shows consistent insecurity: 40–62% of AI-generated code contains vulnerabilities, and a Q1 2026 assessment found 91.5% of vibe-coded apps had at least one AI hallucination-related flaw.
  • Over 60% of vibe-coded apps expose API keys or database credentials in public repositories — Lovable's AI generates Supabase-connected apps with hardcoded secrets by default.
  • Gartner forecasts that 60% of all new code will be AI-generated by the end of 2026. If security posture doesn't improve, the attack surface grows in step with adoption.
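Hardcoded secrets of the kind described above are detectable with simple static checks. A minimal sketch of pattern-based scanning, assuming JWT-shaped keys and project URLs as the targets; the patterns here are illustrative, and production scanners such as gitleaks or trufflehog ship far broader rule sets:

```python
import re

# Illustrative patterns only. Supabase credentials hardcoded in
# generated apps typically appear as a JWT-shaped key plus a
# project URL, which these two regexes approximate.
SECRET_PATTERNS = [
    # JWT-shaped token: three base64url segments, first starting "eyJ"
    re.compile(r"eyJ[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{20,}\.[A-Za-z0-9_-]{10,}"),
    # Supabase-style project URL
    re.compile(r"https://[a-z0-9]+\.supabase\.co"),
]

def scan_for_secrets(source: str) -> list[str]:
    """Return substrings of the source that look like hardcoded credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(source))
    return hits
```

Running a check like this in CI against generated code is cheap; the harder problem is the authorization logic, which no regex will find.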

What to do

  • If you use Lovable or any vibe coding platform: audit all generated projects for hardcoded secrets, disabled row-level security, and overly permissive access controls before deploying to production.
  • Never trust AI-generated database configurations — enforce secret rotation and environment-variable-based credential management.
  • Organizations allowing employees to use AI coding tools should establish approval workflows and automated secret scanning for generated code.
  • Security teams should treat vibe-coded applications as high-risk by default and apply additional review gates before production deployment.
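The environment-variable recommendation above can be sketched as a fail-fast credential loader: the app refuses to start rather than falling back to a baked-in default. Variable names are illustrative, not a Lovable or Supabase convention:

```python
import os

def load_db_credentials() -> dict:
    """Read database credentials from the environment.

    Fails fast if any required variable is missing, instead of
    silently using a hardcoded fallback that could end up in a
    public repository.
    """
    creds = {}
    for name in ("SUPABASE_URL", "SUPABASE_SERVICE_KEY"):
        value = os.environ.get(name)
        if not value:
            raise RuntimeError(f"{name} is not set; refusing to start")
        creds[name] = value
    return creds
```

Paired with secret rotation, this keeps credentials out of source entirely, so a leaked repository exposes configuration names but no usable keys.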

Sources: