[un]prompted 2026 — Netflix Researchers on Source-to-Sink LLM Vulnerability Discovery
AI relevance: Netflix security engineers Scott Behrens (Principal Security Engineer) and Justice Cassel (Application & GenAI Security) presented a "source-to-sink" methodology for improving vulnerability discovery in LLM systems at the [un]prompted 2026 AI Security Practitioner conference.
Key points
- The talk was presented at [un]prompted 2026, an AI security practitioner conference, and published on Security Boulevard on May 2, 2026.
- The "source to sink" framing suggests an end-to-end approach to LLM vulnerability discovery — tracing how untrusted input flows through model systems to find exploitable paths, analogous to traditional source-to-sink analysis in application security.
- Both presenters work on Netflix's application and GenAI security teams, giving the research operational grounding from a company running LLM systems at streaming scale.
- The talk is part of [un]prompted 2026's AI Security Practitioner track, which focuses on hands-on security research for AI systems rather than theoretical risk frameworks.
- The publication on Security Boulevard (part of the Techstrong Group network) indicates the content was selected for the conference's Creators, Authors and Presenters program.
Why it matters
Applying "source-to-sink" dataflow analysis — a well-established technique in traditional application security — to LLM systems is a promising direction. As AI agents gain tool access and execute code, the attack surface increasingly resembles a complex application with multiple trust boundaries. A systematic methodology for tracing how malicious inputs propagate through LLM pipelines (prompt construction, context assembly, tool invocation, output handling) could provide defenders with a more rigorous approach than ad-hoc red-teaming.
What to do
- Map your LLM system's trust boundaries: where does untrusted input enter, how is it assembled into prompts, what tools can be invoked, and how are outputs consumed?
- Apply source-to-sink thinking to prompt injection risk: trace user-controlled data through every stage of the LLM request lifecycle.
- Watch for the full talk recording or slides from [un]prompted 2026 for actionable techniques.