Cloudflare — AI Security for Apps GA
AI relevance: This is a control-plane security layer for internet-facing AI endpoints, directly affecting how teams protect production LLM and agent traffic.
- Cloudflare announced general availability of AI Security for Apps with detection and mitigation workflows integrated into its WAF stack.
- The platform labels AI-powered endpoints as `cf-llm` and now offers endpoint discovery on all plans, including free tiers.
- Detection modules cover prompt injection, PII exposure, and toxic/sensitive content categories, with detection metadata usable in custom rules.
- New GA capability: custom topic detection, letting defenders define business-specific prompt topics to log or block.
- Cloudflare says detection is not purely path-based: it profiles endpoint behavior to surface non-obvious AI routes.
- The product emphasizes enforcement at the edge, so risky prompts can be blocked before they hit model infrastructure.
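As a sketch of what edge enforcement could look like, a custom WAF rule might key off detection metadata. The field names below are illustrative assumptions, not confirmed Cloudflare fields; check the Firewall for AI documentation for the actual fields available on your plan:

```
# Hypothetical custom-rule expression (field names assumed for illustration).
# Blocks requests flagged for PII exposure or high-confidence prompt injection
# before they reach model infrastructure.
(cf.llm.prompt.pii_detected) or (cf.llm.prompt.injection_score gt 80)
# Action: Block (or Log while running in monitor mode)
```

In practice the same expression can be deployed twice: first with a Log action to measure false positives, then with Block once the detections are tuned.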
Why it matters
Most AI incidents in production are not model-weight compromises; they are abuse of exposed inference and agent endpoints. If teams do not have inventory plus request-level controls, shadow AI routes become a blind spot. Cloudflare’s GA release is notable because it combines discovery and enforcement in an existing web security surface that many teams already operate.
What to do
- Inventory all internet-facing AI endpoints and verify which routes process prompt content versus standard API traffic.
- Enable prompt-injection and PII detections in monitor mode first, then progressively enforce blocking rules.
- Define custom topics tied to policy (e.g., internal project names, customer records) and alert on repeated high-risk prompts.
- Pair edge detections with app-side logging so investigations can correlate blocked prompts with user/session context.
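The last step above can be sketched as a simple join of the two log streams on a shared request identifier. A minimal sketch, assuming edge detections and app-side logs are both exported as records keyed by a `request_id` (a Cloudflare ray-ID-style value; the field names here are illustrative, not a real export schema):

```python
import json

# Hypothetical records: edge WAF detections and app-side session logs,
# both keyed by a shared request identifier (e.g. a ray ID).
waf_events = [
    {"request_id": "r1", "detection": "prompt_injection", "action": "block"},
    {"request_id": "r2", "detection": "pii", "action": "log"},
]
app_logs = [
    {"request_id": "r1", "user": "u42", "session": "s9", "route": "/chat"},
    {"request_id": "r3", "user": "u7", "session": "s2", "route": "/search"},
]

def correlate(waf_events, app_logs):
    """Attach user/session context to each edge detection by request ID."""
    by_id = {rec["request_id"]: rec for rec in app_logs}
    joined = []
    for ev in waf_events:
        ctx = by_id.get(ev["request_id"])  # None when no app log matched
        joined.append({**ev, "context": ctx})
    return joined

for row in correlate(waf_events, app_logs):
    print(json.dumps(row))
```

A detection with no matching app-side record (`context: None`) is itself a signal: it may indicate a shadow AI route that the edge sees but the application never logged.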