Cisco Security Blog — Foundation AI’s push for agentic security systems

• Category: Security

  • What Cisco is announcing: a coordinated “agentic security systems” direction under Cisco Foundation AI (models + retrieval + multi-agent workflows), aimed at security operations.
  • Core building block: Foundation-sec-8B-Reasoning, positioned as an open-weight security reasoning model for tasks like threat modeling, attack-path analysis, and incident investigation.
  • Key claim: security teams need agentic systems that do multi-step reasoning over logs, configs, intel, and org context — not just one-shot Q&A.
  • Retrieval layer: an “AI search” framework that iteratively refines search (reflection, backtracking, query revision) so smaller models can navigate messy information spaces.
  • Productization: the PEAK Threat Hunting Assistant applies this approach to threat-hunt preparation: it gathers intel, refines hypotheses, identifies data sources, and produces structured hunt plans.
  • Governance emphasis: Cisco repeatedly stresses human oversight, explainability (reasoning traces), and user-controlled data access — i.e., “agents that assist” rather than fully autonomous responders.
  • Trend signal: vendors are converging on a stack: reasoning model + retrieval + agent orchestration + guardrails, with security workflows as a “killer app.”
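The centerpiece of the stack above is the iterative retrieval loop: search, reflect on the results, then revise the query or stop. A minimal Python sketch of that loop, with a toy keyword-overlap "reflection" step standing in for a reasoning model; every name here is illustrative, not Cisco's actual API:

```python
# Toy sketch of an iterative "AI search" loop: search, reflect, revise.
# In a real system the reflection step would ask the reasoning model;
# here it just checks keyword overlap as a stand-in.

def reflect(query, results):
    """Decide whether the results answer the query (toy heuristic)."""
    hits = [r for r in results if query.split()[0].lower() in r.lower()]
    return ("stop", hits) if hits else ("revise", query + " logs")

def ai_search(query, search_fn, max_rounds=3):
    trace = []  # retained retrieval trace, for explainability/reproducibility
    for round_no in range(max_rounds):
        results = search_fn(query)
        action, payload = reflect(query, results)
        trace.append({"round": round_no, "query": query, "results": results})
        if action == "stop":
            return payload, trace
        query = payload  # revised query; a fuller agent could also backtrack
    return [], trace
```

The trace is kept deliberately: it is what lets an analyst later reproduce why the agent landed where it did.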

Why it matters

  • Agentic systems buy analysts time: if they can automate the “research and framing” phases (triage, hunt prep), humans can spend more of their time validating and responding.
  • Retrieval is the battleground: the moment you let an agent iteratively search + summarize, you’re also creating new risks (poisoned sources, prompt injection, data leakage). The architecture you pick matters.
  • Open-weight security models: if credible, they can enable on-prem / controlled deployments for regulated environments that can’t ship logs to generic LLM APIs.
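One cheap way to narrow the poisoned-source risk above is to gate what the agent is allowed to retrieve at all. A minimal allowlist gate, sketched in Python; the hostnames are hypothetical placeholders, not a recommended list:

```python
# Illustrative source gate: the agent may only retrieve from curated
# intel hosts. Hostnames below are placeholders for this sketch.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"attack.mitre.org", "intel.example.internal"}

def allowed(url):
    """Reject any retrieval target outside the curated allowlist."""
    return urlparse(url).hostname in ALLOWED_HOSTS
```

This does nothing about prompt injection inside an allowlisted page, but it shrinks the attack surface the retrieval layer exposes.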

What to do

  1. Decide where “agentic” is acceptable: start with low-blast-radius steps (research, hypothesis generation, draft hunt queries) before letting agents take actions in production.
  2. Demand evidence trails: require citations / links for summaries, store retrieval traces, and make it easy for an analyst to reproduce why a conclusion was reached.
  3. Harden your retrieval inputs: maintain curated intel sources, add allowlists, and isolate browsing/scraping from privileged environments.
  4. Build an eval loop: measure false positives/negatives on real incidents and hunts; don’t rely on “looks smart” demos.
  5. Plan for policy enforcement: even in “assistive” mode, implement role-based access, redaction, and output constraints (what can and can’t be summarized or exported).
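Items 2 and 4 above are easy to state and easy to skip. One way to make them concrete: require every agent claim to carry allowlisted citations, and score verdicts against labeled historical incidents. A sketch under those assumptions; the record fields and verdict labels are invented for illustration:

```python
# Sketch of items 2 and 4: evidence records with required citations,
# and an eval loop scoring agent verdicts against labeled incidents.

TRUSTED = ("attack.mitre.org",)  # hypothetical curated-source list

def evidence_record(claim, source_urls):
    """A summary is only admissible with at least one trusted citation."""
    cited = [u for u in source_urls if any(h in u for h in TRUSTED)]
    if not cited:
        raise ValueError("no trusted citation for claim: " + claim)
    return {"claim": claim, "citations": cited}

def score(predictions, labels):
    """Count false positives/negatives over labeled hunt cases."""
    fp = sum(1 for p, y in zip(predictions, labels)
             if p == "malicious" and y == "benign")
    fn = sum(1 for p, y in zip(predictions, labels)
             if p == "benign" and y == "malicious")
    tp = sum(1 for p, y in zip(predictions, labels)
             if p == "malicious" and y == "malicious")
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"fp": fp, "fn": fn, "precision": precision, "recall": recall}
```

Running the scorer over a backlog of resolved incidents gives a number to argue about, which is the point: a demo that “looks smart” never does.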

Sources