CISA/NSA/FBI — Deploying AI systems securely (joint guidance)

• Category: Security

  • CISA highlights a joint Cybersecurity Information Sheet on best practices for deploying and operating externally developed AI systems.
  • The guidance is framed around classic security outcomes: confidentiality, integrity, availability — applied to AI systems and their data/services.
  • It’s explicitly about operations: protect, detect, and respond to malicious activity against AI systems.
  • It pushes teams to plan for vulnerabilities in AI systems (and the surrounding ecosystem: connectors, data, dependencies), not just model behavior.
  • It also points readers to related guidance on secure AI development and engaging with AI safely.
  • Takeaway: treat AI features as production systems with owners, inventories, patching, monitoring, and incident response.

Why it matters

“Externally developed AI” often means you’re inheriting risk from vendors, model providers, plugins/connectors, and opaque update cycles. Guidance like this is useful because it reinforces that AI security is mostly systems security; the hard part is applying those disciplines deliberately to components you don’t fully control.

What to do

  • Inventory your AI dependencies: model provider, SDKs, prompt/tooling libs, retrieval sources, and any connectors with credentials.
  • Enforce least privilege for data + tools: scope what the AI can read, and require approvals for high-impact actions.
  • Add monitoring: log model/tool requests, retrieval sources, and actions taken; alert on unusual tool usage and data access.
  • Prepare incident response: key rotation playbooks, kill-switches (disable tools/connectors), and rollback plans for model/prompt updates.
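The least-privilege, monitoring, and kill-switch bullets above can be sketched as a single tool-call gateway that sits between the model and its connectors. This is a minimal illustration, not anything prescribed by the joint guidance: the tool names, outcome strings, and log format are all hypothetical.

```python
# Hypothetical tool-call gateway for an AI feature. It enforces a
# least-privilege allowlist, requires explicit approval for high-impact
# tools, logs every request as structured JSON, and supports a
# kill-switch for disabling connectors during incident response.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-tool-gateway")


class ToolGateway:
    def __init__(self, allowlist, high_impact):
        self.allowlist = set(allowlist)      # tools the model may call at all
        self.high_impact = set(high_impact)  # tools requiring human approval
        self.disabled = set()                # kill-switch: tools turned off
        self.audit = []                      # in-memory audit trail

    def kill(self, tool):
        """Incident response: disable a tool/connector immediately."""
        self.disabled.add(tool)

    def call(self, tool, args, approved=False):
        record = {
            "ts": datetime.now(timezone.utc).isoformat(),
            "tool": tool,
            "args": args,
            "approved": approved,
        }
        if tool in self.disabled:
            record["outcome"] = "denied:killed"
        elif tool not in self.allowlist:
            record["outcome"] = "denied:not-allowlisted"
        elif tool in self.high_impact and not approved:
            record["outcome"] = "denied:needs-approval"
        else:
            record["outcome"] = "allowed"
        self.audit.append(record)
        log.info(json.dumps(record))  # feed this into your alerting pipeline
        return record["outcome"]


gw = ToolGateway(allowlist={"search_docs", "send_email"},
                 high_impact={"send_email"})
print(gw.call("search_docs", {"q": "quarterly report"}))  # allowed
print(gw.call("send_email", {"to": "x@example.com"}))     # denied:needs-approval
print(gw.call("delete_db", {}))                           # denied:not-allowlisted
gw.kill("search_docs")
print(gw.call("search_docs", {"q": "again"}))             # denied:killed
```

In a real deployment the audit trail would go to your SIEM rather than an in-memory list, and "approval" would be a workflow step rather than a boolean, but the shape is the same: every tool call passes one choke point that can deny, log, and be switched off.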

Sources