Security / AI CVEs / Research
- Anthropic Officially Launches Project Glasswing — $100M Commitment, 12 Partners, Thousands of Zero-Days Found — 2026-04-23
- Comment and Control — Prompt Injection Leaks Secrets in Three AI Coding Agents — 2026-04-23
- LiteLLM PyPI Compromised — Multi-Stage Credential Stealer in 3M-Download Package — 2026-04-23
- Lovable — BOLA Exposes AI Chat Histories and Database Credentials in Vibe Coding Platform — 2026-04-23
- npm CanisterWorm — Self-Spreading Supply-Chain Attack Targets AI Agent Tooling — 2026-04-23
- NVIDIA — Indirect AGENTS.md Injection in OpenAI Codex via Malicious Dependencies — 2026-04-23
- Red Hat RHEL AI — Two InstructLab CVEs: Path Traversal & trust_remote_code RCE — 2026-04-23
- Anthropic — Unauthorized Access to Mythos AI Model — 2026-04-22
- Brex — CrabTrap Open-Source LLM-as-a-Judge Proxy for AI Agent Security — 2026-04-22
- CSA Survey — 82% of Enterprises Have Unknown AI Agents in Their Environments — 2026-04-22
- CVE-2026-26144: Excel XSS Chains to Copilot Agent for Silent Data Exfiltration — 2026-04-22
- Mondoo — Free AI Agent Skills Security Checker Launches — 2026-04-22
- Pillar Security — Google Antigravity Sandbox Escape via Prompt Injection — 2026-04-22
- Hacktron — Claude Opus Builds Full Chrome Exploit Chain for $2,283 — 2026-04-21
- Gartner — Agentic AI Will Trigger Security Incidents at Scale — 2026-04-21
- Postiz — CVE-2026-40487 Stored XSS via File Upload Validation Bypass — 2026-04-21
- VulnCheck — Project Glasswing: Only 1 Confirmed CVE Despite Anthropic Mythos Hype — 2026-04-21
- Claude Opus Used to Build Working Chrome Exploit Chain — 2026-04-20
- Suzu Labs — Dark web operators pivot to frontier LLMs for offensive cyber — 2026-04-20
- Vercel Breached via Third-Party AI Tool OAuth Compromise — 2026-04-20
- Georgia Tech Vibe Security Radar — 74 CVEs Traced to AI Coding Tools — 2026-04-20
- CISA Calls for AI Companies to Join CVE Program as CNAs — 2026-04-19
- GreyNoise — 91K attack sessions reveal active targeting of exposed LLM infrastructure — 2026-04-19
- iProov Threat Intelligence Report — 1,151% Surge in iOS Deepfake Injection Attacks — 2026-04-19
- Microsoft — Excel XSS chains to Copilot Agent for clickless data exfiltration (CVE-2026-26144) — 2026-04-19
- OX Security — Full MCP STDIO Command Injection Advisory: CVEs Across LangFlow, LiteLLM, GPT Researcher, Agent Zero — 2026-04-19
- Wiz — AI-Generated Supply Chain Campaign Targets GitHub Actions via pull_request_target — 2026-04-19
- Xcitium ThreatLabs — Malicious LLM routers steal credentials and drain crypto wallets — 2026-04-19
- GreyNoise — 91,403 Attack Sessions Target Exposed LLM Infrastructure — 2026-04-18
- Hadrian — 70 AI Offensive Security Tools Cataloged as Pen Testing Economics Collapse — 2026-04-18
- Xcitium ThreatLabs — Malicious LLM Routers Inject Payloads and Steal Credentials — 2026-04-18
- Cisco AI Defense — Open-Source Agent Security Toolkit Launch — 2026-04-18
- Gambit Security — Single Hacker Used Claude Code and ChatGPT to Breach Nine Mexican Government Agencies — 2026-04-18
- Abnormal Security — ATHR: AI Voice Agents Automate Full Vishing Attack Chain — 2026-04-17
- Cisco Talos — n8n AI Workflow Platform Abused for Malware Delivery and Device Fingerprinting — 2026-04-17
- Cloudflare — Enterprise MCP Reference Architecture for Secure Agentic Workflows — 2026-04-17
- Google Cloud Threat Intelligence — Defending Enterprises Against AI-Powered Exploitation — 2026-04-17
- Apple Intelligence — Prompt injection bypasses on-device AI guardrails (RSAC 2026) — 2026-04-16
- Flowise — CVSS 10.0 CustomMCP RCE enables full server compromise (CVE-2025-59528) — 2026-04-16
- Microsoft — AI-enabled device code phishing campaign bypasses MFA at scale (April 2026) — 2026-04-16
- Capsule Security — ShareLeak and PipeLeak prompt injection in Copilot Studio and Agentforce — 2026-04-16
- ChatboxAI — MCP StdioClientTransport OS Command Injection (CVE-2026-6130) — 2026-04-13
- OpenAI — Axios Supply Chain Compromise Impacts macOS App Certification — 2026-04-12
- Anthropic — Command injection vulnerability fixed in Claude Code LSP binary detection — 2026-04-12
- Trend Micro — Sockpuppeting: Single-Line Jailbreak for 11 Major AI Models — 2026-04-12
- Google Chrome — Critical WebML and PrivateAI vulnerabilities expose memory data and enable sandbox escape — 2026-04-11
- Guardian — Claude Mythos AI model demonstrates unprecedented vulnerability discovery capabilities, raising security concerns — 2026-04-11
- ModelContextProtocol — Java SDK DNS rebinding vulnerability allows MCP server takeover (CVE-2026-35568) — 2026-04-09
- Microsoft — CVE-2026-26113/26110 Preview Pane RCE — 2026-04-08
- KuCoin — AI trading agent vulnerabilities cause $45M crypto breaches — 2026-04-05
- Mercor — LiteLLM supply chain breach exposes 4TB of AI training data — 2026-04-05
- Microsoft — Agent Governance Toolkit addresses OWASP AI agent security risks — 2026-04-05
- Microsoft — Azure MCP Server authentication flaw exposes sensitive data (CVE-2026-32211) — 2026-04-03
- Adversa — Claude Code deny rule bypass allows prompt injection of blocked commands — 2026-04-01
- Anthropic — Three OS command injection vulnerabilities in Claude Code CLI and Agent SDK — 2026-04-01
- Anthropic — Claude Code npm source map leak exposes 512K+ lines — 2026-04-01
- arXiv — Agent Skills Security Analysis Framework Vulnerabilities — 2026-04-01
- arXiv — BadSkill: Agent Supply Chain Backdoor Attacks via Model-in-Skill Poisoning — 2026-04-01
- arXiv:2604.11806 — Meerkat Detects Hidden Safety Violations in AI Agent Traces — 2026-04-01
- aws-mcp-server — Command Injection RCE (CVE-2026-5058, ZDI-26-246) — 2026-04-01
- CSA — AI Agent Weaponization Threat Briefing — 2026-04-01
- Tenable Research — Claude Code GitHub Action MCP Server RCE Vulnerability — 2026-04-01
- Comment and Control — Prompt Injection to Credential Theft in Claude Code, Gemini CLI, and Copilot Agent — 2026-04-01
- Depthfirst — $80M Series B for AI Security Platform — 2026-04-01
- Microsoft — GitHub Copilot privacy policy shifts to opt-out AI training model — 2026-04-01
- Google DeepMind — AI Agent Traps Taxonomy Reveals Six Critical Vulnerability Classes — 2026-04-01
- Harness Engineering — LangChain Guardrails Tutorial for Safe AI Agents — 2026-04-01
- Check Point — HexStrike AI MCP Server Command Injection — 2026-04-01
- LangChain-ChatChat — RCE via MCP STDIO Server Configuration (CVE-2026-30617) — 2026-04-01
- Langflow — Critical vulnerability CVE-2026-33309 — 2026-04-01
- Endor Labs — Marimo CVE-2026-39987 Pre-Auth RCE — 2026-04-01
- nginx-ui — MCPwn: Unauthenticated MCP Endpoint Leads to Full Server Takeover (CVE-2026-33032) — 2026-04-01
- Oasis Security — Claude.ai prompt injection & data exfiltration — 2026-04-01
- OpenAI — ChatGPT DNS side channel data exfiltration vulnerability — 2026-04-01
- OpenAI — GPT-5.4-Cyber lowers refusal boundary for defensive cybersecurity — 2026-04-01
- OpenClaw Claude Bridge — Sandbox bypass allows arbitrary tool execution in spawned subprocesses (CVE-2026-39398) — 2026-04-01
- OpenClaw Security Crisis — What 346K Stars & 135K Exposed Instances Teach Us — 2026-04-01
- Praetorian — Indirect Prompt Injection Bypasses LLM Supervisor Agents — 2026-04-01
- PraisonAI — Four critical vulnerabilities expose multi-agent AI systems to sandbox escape, RCE, and data exfiltration — 2026-04-01
- PraisonAI — execute_code() vulnerability allows arbitrary Python code execution in multi-agent systems — 2026-04-01
- Red Hat OpenShift AI — Kubernetes Token Disclosure (CVE-2026-5483) — 2026-04-01
- Token Security — Azure MCP RCE vulnerability enables cloud takeover — 2026-04-01
- ToxSec — AI-Generated Code Leaks Hardcoded Secrets at Scale — 2026-04-01
- Unit 42 — Chrome Gemini Live panel hijack vulnerability enables camera/mic access — 2026-04-01
- Unit 42 — Vertex AI P4SA overprivileged agents expose Google Cloud data — 2026-04-01
- Vitalik Buterin — Warns against AI agent security risks, shares private LLM stack — 2026-04-01
- Wiz Research — Axios npm Supply Chain Compromise Delivers Cross-Platform RAT — 2026-04-01
- WordPress TTS Plugin — CVE-2026-1233 Database Exposure — 2026-04-01
- Zscaler ThreatLabz — Fake Claude Code Source Distributes Vidar & GhostSocks Malware — 2026-04-01
- UK AISI Study — AI Chatbots Ignoring Human Instructions Rising Five-Fold — 2026-03-30
- Cyera Research — LangChain & LangGraph Multiple Vulnerabilities (CVE-2026-34070) — 2026-03-30
- Dev.to — MCP Server Audit Finds 66% Have Critical Vulnerabilities — 2026-03-29
- Offensive Security — MCP server command injection vulnerabilities CVE-2026-5007 and CVE-2026-5023 — 2026-03-29
- Novee — autonomous AI red teaming for LLM applications — 2026-03-29
- Backslash — MCP NeighborJack and over-privileged tool exposure — 2026-03-28
- NIST — Monitoring deployed AI systems in production — 2026-03-28
- TrojAI — Agent runtime intelligence and coding-agent protection — 2026-03-28
- Unit 42 — Boggy Serpens AI-enhanced malware and multi-wave espionage — 2026-03-28
- Check Point — agentic era AI threat landscape — 2026-03-27
- LiteLLM — PyPI supply-chain compromise hits AI gateway — 2026-03-26
- Palo Alto Networks — Prisma AIRS 3.0 adds agent artifact security — 2026-03-26
- Qualys — MCP servers become shadow IT for AI operations — 2026-03-26
- Aqua Security Trivy Supply Chain Attack — 2026-03-23
- Unit 42 — Security tradeoffs of AI agents — 2026-03-22
- Cloudflare — AI Security for Apps GA — 2026-03-19
- JFrog + NVIDIA — Agent Skills Registry adds trust layer for agentic supply chain — 2026-03-19
- Manifold — $8M seed to secure autonomous AI agents at runtime — 2026-03-19
- Microsoft — Detecting prompt abuse in AI tools — 2026-03-19
- JFrog — Universal MCP Registry for AI supply-chain security — 2026-03-18
- Jozu — AI agent disables own security guardrails in 4 commands — 2026-03-18
- Agent Shield — Audit of 17 popular MCP servers finds universal security gaps — 2026-03-14
- Microsoft — AI as Tradecraft: threat actors operationalize AI across the attack lifecycle — 2026-03-14
- Alibaba ROME agent paper documents rogue tool use — 2026-03-12
- alice.io — Caterpillar security auditor — 2026-03-12
- CyberDesserts — ClawHavoc malicious skill campaign — 2026-03-12
- Irregular — Rogue AI agents collaborate to hack systems, exfiltrate data — 2026-03-12
- Google GTIG — AI Threat Tracker: adversarial use update — 2026-03-11
- Google — UNC6426 weaponized LLM tool to steal credentials, escalated to AWS admin in 72h — 2026-03-11
- Balungpisah — Critical prompt injection and rate-limiting flaws found in LLM Gateway — 2026-03-09
- LWN — GitHub issue title prompt injection compromises 4,000 developer machines — 2026-03-09
- HackerNoon — Self-modifying AI malware emerges as major cybersecurity threat — 2026-03-09
- Huntress — Fake OpenClaw installers spread GhostSocks — 2026-03-07
- Noma Security — ContextCrush in Context7 MCP server — 2026-03-07
- AI Agent Security Threat Model 2026 — 2026-03-06
- Check Point Research — Claude Code project-file RCE & key exfil — 2026-03-06
- Securing MCP and Agent Tool Supply Chains — 2026-03-06
- Microsoft Security Blog — malicious AI assistant extensions harvest LLM chat histories — 2026-03-06
- Prompt Injection Defense Playbook (2026) — 2026-03-06
- Cisco Talos — 2025 CVE retrospective (AI-related CVEs double) — 2026-03-06
- VulnerableMCP — MCP security database for real-world tool flaws — 2026-03-06
- Unit 42 — Web-based indirect prompt injection observed in the wild — 2026-03-05
- Techzine — DeepKeep AI Agent Scanner — 2026-03-04
- BlacksmithAI — Multi-agent penetration testing framework — 2026-03-03
- Oasis Security — ClawJacked OpenClaw WebSocket takeover — 2026-03-03
- MIT AI Agent Index — transparency gaps in agent safety reporting — 2026-03-01
- Orca Security — RoguePilot GitHub Copilot prompt injection — 2026-03-01
- SD Times — MCP privacy and security gaps — 2026-02-28
- IBM — X-Force Threat Intelligence Index 2026 — 2026-02-27
- Provos.org — IronCurtain agent sandbox architecture — 2026-02-27
- Check Point Research — Claude Code hooks/MCP RCE — 2026-02-26
- CrowdStrike — 2026 Global Threat Report: AI-accelerated adversaries — 2026-02-26
- Trail of Bits — Comet prompt-injection audit — 2026-02-26
- Pillar Security — Operation Bizarre Bazaar LLMjacking campaign — 2026-02-25
- Socket — SANDWORM_MODE npm worm targets AI coding tools — 2026-02-25
- Veza — Access Agents for AI identity governance — 2026-02-25
- GitHub Advisory — Cline unauthorized npm publish added postinstall — 2026-02-24
- Kai Security AI — Honeypot MCP server logs AI agent probing — 2026-02-23
- Phoenix Security — SANDWORM_MODE npm worm poisons AI toolchains — 2026-02-23
- Unit 42 — 2026 IR report on AI-accelerated attacks — 2026-02-23
- Cisco — State of AI Security 2026 report — 2026-02-22
- Microsoft — Copilot summarized confidential emails despite DLP labels — 2026-02-21
- Microsoft Security Blog — Running OpenClaw safely — 2026-02-21
- NIST — AI Agent Standards Initiative — 2026-02-21
- OpenAI — ChatGPT Lockdown Mode — 2026-02-21
- Check Point — AI assistants as C2 proxies — 2026-02-20
- mbgsec — Cline issue-triage prompt injection led to npm supply-chain publication — 2026-02-20
- Google GTIG — AI Threat Tracker: distillation & integration — 2026-02-20
- Praetorian — MCP server attack surface research — 2026-02-20
- Cerbos — MCP Authorization for AI Agents — 2026-02-19
- PromptArmor — Link preview data exfiltration in agent chats — 2026-02-19
- Snyk — AI Agent Guardrails — 2026-02-19
- Straiker STAR Labs — SmartLoader poisons an Oura MCP server — 2026-02-19
- University of Toronto — MCP security risk guidance — 2026-02-19
- Microsoft Security Blog — Copilot Studio agent misconfigurations — 2026-02-18
- OWASP — Secure MCP Server Development Guide — 2026-02-18
- Cyata — Anthropic MCP Git server prompt-injection CVEs — 2026-02-17
- LayerX — Claude Desktop Extensions zero-click RCE via calendar event — 2026-02-17
- AgentAudit — MCP server security findings across 194 packages — 2026-02-16
- Microsoft Security Blog — AI recommendation poisoning — 2026-02-13
- Praetorian — Augustus open-source LLM prompt-injection scanner — 2026-02-11
- Ars Technica — Moltbook prompt worms and viral prompt injection — 2026-02-10
- Endor Labs — MCP needs AppSec as classic vulns hit agent tooling — 2026-02-10
- Levo — Launch Week 2026 adds AI firewall + MCP security testing — 2026-02-10
- Trend Micro — OpenClaw’s Agentic Assistant Risk Map — 2026-02-10
- Operant AI — Agent Protector for runtime agent security — 2026-02-09
- Radware — Agentic AI Protection Solution launch — 2026-02-09
- AuthMind — OpenClaw’s 230 malicious skills expose agentic supply-chain risk — 2026-02-07
- Infosecurity Magazine — ZombieAgent zero-click prompt injection in ChatGPT connectors — 2026-02-07
- Darktrace — 2026 State of AI Cybersecurity Report: 76% of Security Pros Worried About AI Agent Risk — 2026-02-06
- Noma Security — DockerDash: Prompt Injection in Docker Ask Gordon AI Enables RCE via Image Metadata — 2026-02-06
- ThreatDown — 2026 State of Malware: AI Drives Machine-Scale Cyberattacks — 2026-02-05
- Vectra AI — From Clawdbot to OpenClaw: Automation as a Backdoor — 2026-02-04
- NVIDIA AI Red Team — Mandatory sandbox controls for agentic coding workflows — 2026-02-03
- Clutch Security — 95% of enterprise MCP servers run on endpoints with zero security visibility — 2026-02-02
- GitGuardian / NHIcon 2026 — Agentic AI forces a paradigm shift in non-human identity security — 2026-02-02
- InstaTunnel — Agent hijacking and intent breaking: the goal-oriented attack surface — 2026-02-02
- Keyfactor — Two-thirds of enterprises say AI agents are a bigger security risk than humans — 2026-02-02
- Christian Schneider — From LLM to agentic AI: how agents amplify prompt injection into kill chains — 2026-02-02
- Check Point / Lakera — 40% of 10,000 MCP servers found to have security weaknesses — 2026-02-01
- Dev.to — Implementing Sudo for AI Agents — 2026-02-01
- The Register — Ungoverned AI agent identities are the new shadow IT — 2026-02-01
- Reuters — Open-Source AI Models Vulnerable to Criminal Misuse — 2026-02-01
- Trend Micro — ÆSIR: AI Agents Finding Zero-Days in AI Infrastructure — 2026-02-01
- arXiv — EchoLeak: zero-click prompt injection in Microsoft 365 Copilot — 2026-01-31
- Cisco — Personal AI agents like OpenClaw are a security nightmare — 2026-01-31
- CrowdStrike — Agentic tool chain attacks (tool poisoning, shadowing, rugpull) — 2026-01-31
- DataDome — MCP prompt injection & tool poisoning defenses — 2026-01-31
- LangChain — January 2026 newsletter (agent robustness + observability/evals) — 2026-01-31
- GitHub Advisory — node-tar hardlink path traversal (CVE-2026-24842) — 2026-01-31
- Pen Test Partners — Eurostar chatbot guardrail bypass + ID tampering — 2026-01-31
- Snyk — Clawdbot/Moltbot prompt injection: ‘one email away from disaster’ — 2026-01-31
- Wiz — ZeroDay.cloud: cloud + AI infra zero-days — 2026-01-31
- AWS/Wiz — CodeBreach: unanchored ACTOR_ID filters in CodeBuild webhooks — 2026-01-30
- Bitdefender — Hugging Face abused to distribute polymorphic Android RAT payloads — 2026-01-30
- Pillar Security — Operation ‘Bizarre Bazaar’: LLMjacking campaign monetizes exposed LLM/MCP endpoints — 2026-01-30
- CISA/NCSC-UK/FBI — Secure connectivity principles for OT networks — 2026-01-30
- Cisco Security Blog — Foundation AI’s push for agentic security systems — 2026-01-30
- curl — Ending its bug bounty after an AI slop flood — 2026-01-30
- Google Developers Blog — Gemini CLI hooks for policy & automation — 2026-01-30
- Google: Gemini 3 in Chrome adds an agentic ‘auto browse’ workflow — 2026-01-30
- GreyNoise — Threat actors actively targeting exposed LLM endpoints — 2026-01-30
- Kaspersky — OWASP Agentic Top 10 (2026): practical risks + controls for AI agents — 2026-01-30
- Model Context Protocol — MCP Apps: UI components inside agent chats — 2026-01-30
- Microsoft: runtime inspection to block risky AI agent tool calls — 2026-01-30
- Microsoft — turning threat reports into detection insights with AI — 2026-01-30
- n8n: sandbox escape bugs lead to full RCE in self-hosted instances — 2026-01-30
- NIST/CAISI — RFI on security practices for AI agents — 2026-01-30
- OpenAI — Hardening ChatGPT Atlas against prompt injection — 2026-01-30
- Varonis — Reprompt: one-click Copilot prompt injection chain enables session hijack and silent data exfiltration (patched) — 2026-01-30
- AI Email Triage Workflow (labels, summaries, suggested replies) — 2026-01-29
- AI security news digest: what to watch this week — 2026-01-29
- CISA/NSA/FBI — Deploying AI systems securely (joint guidance) — 2025-06-03
- Google — SAIF (Secure AI Framework): a practitioner’s map — 2025-05-12
- OWASP — Top 10 for LLM Apps: what to fix first — 2025-04-18
- NIST — AI Risk Management Framework (RMF) for security teams — 2025-03-10