Radware — Agentic AI Protection Solution launch
AI relevance: Radware’s Agentic AI Protection targets autonomous AI agents at runtime, explicitly covering prompt injection, tool abuse, and unauthorized data access in agentic workflows.
- Radware announced its Agentic AI Protection Solution, positioning it as a dedicated runtime security layer for autonomous AI agents in enterprise environments.
- The company argues that static guardrails aren’t sufficient for agentic systems and is emphasizing behavioral analysis to detect malicious intent in real time.
- The solution is framed around four pillars: discovery/visibility of all agents and tools, intent-based security, deep integration with major agent platforms, and continuous posture management via a risk graph.
- Radware lists direct/indirect prompt injection, tool abuse, and unauthorized data access as key threat categories the platform addresses.
- The press release says coverage extends to both custom agents and third-party platforms (e.g., Microsoft 365 Copilot, Copilot Studio, AWS Bedrock).
- Radware ties the launch to “ZombieAgent,” a zero-click indirect prompt injection risk that can persist inside agent memory and exfiltrate data without user interaction.
Why it matters
- Agentic AI security is shifting from policy-only guardrails to runtime detection and identity-aware controls — this product pitch reflects that industry move.
- Large enterprises deploying multiple agent platforms need a consolidated view of tool access, agent identities, and cross-agent risk paths.
What to do
- Inventory all agent platforms in use (Copilot, Bedrock, internal agents) and map tool access by identity.
- Instrument for prompt injection attempts with telemetry that can detect intent shifts and multi-step tool abuse.
- Track persistence risks (agent memory, long-term context stores) since this is where “zombie” instructions can live.