Securing Autonomous AI: The OWASP Agentic Top 10 and Real-World CVE Mappings

17/04/2026
Raúl Redondo, Senior Adversarial Engineer | Andrew Heller, Marketing Manager

The enterprise integration of artificial intelligence is rapidly transitioning from static Large Language Models (LLMs) to autonomous agentic systems. Unlike legacy LLM applications that provide a single, linear, and reactive response, agentic applications are autonomous and proactive. They autonomously decompose goals into subtasks, invoke external tools, maintain persistent memory across sessions, and coordinate with peer agents.

As John Sotiropoulos, OWASP ASI Co-Lead, stated: “Once AI began taking actions, the nature of security changed forever.” Currently, 84% of developers use AI coding tools (Stack Overflow). Yet with 35% of AI incidents triggered by simple prompts, this architectural shift introduces a fundamentally new threat model.

To address this, Lares, Damovo’s adversarial and cybersecurity advisory practice, has mapped the emerging OWASP Agentic Top 10 standards to real-world Common Vulnerabilities and Exposures (CVEs) and defensive baselines.

The Threat Shift: Chatbots vs. Agents

Standard security controls designed for conversational AI are insufficient for agentic systems. The autonomous loop of planning, retrieving, and executing creates dynamic new attack vectors.

Every tool an agent can access is a potential exploit path, every memory entry can be persistently poisoned, and every inter-agent message is a potential attack vector that needs strict controls.

The OWASP Agentic Top 10: Complete Threat Mapping

To effectively threat-model agentic deployments, security teams must bridge the gap between AI-specific behaviors and traditional vulnerability management. Below is the complete mapping of the OWASP Agentic Security Initiative (ASI) risks, alongside real-world CVEs and exploit paths identified by Lares adversarial research.

Phase 1: Goals, Tools, and Identities

This category keeps teams honest about what agents can actually do and which identities they assume.

ASI01: Agent Goal Hijack

The gateway to every other vulnerability. Attackers manipulate objectives because agents cannot reliably distinguish legitimate instructions from attacker-controlled content.

  • The Exploit: Adversaries embed hidden instructions to silently override an agent’s goals, leading to zero-click data exfiltration.

  • CVE Mapping: CVE-2025-64660 (GitHub Copilot) and CVE-2025-61590 (Cursor).
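One baseline against goal hijack is to screen retrieved content for instruction-like phrases before it reaches the agent's context. The regex patterns below are illustrative assumptions, not from the OWASP mapping; a production control would pair such heuristics with a trained classifier.

```python
import re

# Hypothetical phrases that often signal embedded instructions inside
# retrieved content (web pages, documents, tickets).
INJECTION_PATTERNS = [
    r"ignore (all )?previous instructions",
    r"you are now",
    r"<\s*system\s*>",
    r"do not tell the user",
]

def flag_untrusted_content(text: str) -> bool:
    """Return True if retrieved content contains likely hidden instructions."""
    lowered = text.lower()
    return any(re.search(p, lowered) for p in INJECTION_PATTERNS)
```

Flagged content should be quarantined or passed to the agent as inert data, never as part of its instruction stream.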

ASI02: Tool Misuse & Exploitation

Agents operating within authorized privileges may apply legitimate tools in unsafe ways.

  • The Exploit: An agent chains PowerShell and cURL commands under valid credentials, allowing data exfiltration that completely bypasses EDR detection.

  • CVE Mapping: CVE-2025-8217 (Amazon Q).
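A defensive baseline for tool misuse is a policy gate that enforces an explicit allowlist with per-tool call budgets. The tool names and limits below are illustrative assumptions:

```python
# Sketch of a tool-call policy gate; each agent declares which tools it
# may invoke and how often, and everything else is denied by default.
ALLOWED_TOOLS = {
    "search_docs": {"max_calls": 20},
    "send_email": {"max_calls": 2},
}

class PolicyViolation(Exception):
    pass

def authorize_tool_call(tool: str, call_counts: dict) -> None:
    """Raise PolicyViolation unless this call is within policy."""
    policy = ALLOWED_TOOLS.get(tool)
    if policy is None:
        raise PolicyViolation(f"tool not on allowlist: {tool}")
    if call_counts.get(tool, 0) >= policy["max_calls"]:
        raise PolicyViolation(f"call budget exhausted for {tool}")
    call_counts[tool] = call_counts.get(tool, 0) + 1
```

Budgets like these would not have stopped a single malicious command, but they cap how far a chained exfiltration can run under valid credentials.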

ASI03: Identity & Privilege Abuse

An architectural mismatch between user-centric identity systems and agentic design. Agents operate in an attribution gap that makes enforcing true least privilege impossible.

  • The Exploit: Attackers exploit dynamic trust to escalate access using Non-Human Identities (NHIs).

  • CVE Mapping: CVE-2025-32711 (Microsoft 365 Copilot).
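Short-lived, scoped credentials narrow the attribution gap for Non-Human Identities. A minimal sketch, assuming a Just-In-Time token with a five-minute TTL (the token shape and TTL are illustrative):

```python
import secrets
import time

def mint_token(agent_id: str, scope: str, ttl_s: int = 300) -> dict:
    """Mint a short-lived, single-scope credential for an agent (JIT access)."""
    return {
        "agent": agent_id,
        "scope": scope,
        "token": secrets.token_urlsafe(16),
        "expires_at": time.time() + ttl_s,
    }

def is_valid(token: dict, required_scope: str) -> bool:
    """Accept the token only for its exact scope and only before expiry."""
    return token["scope"] == required_scope and time.time() < token["expires_at"]
```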

Phase 2: Supply Chain & Memory

Supply chain and memory are always in the loop. Security teams must verify everything.

ASI04: Agentic Supply Chain Vulnerabilities

Agentic ecosystems dynamically load external tools and agent personas at runtime. This creates a live supply chain that cascades vulnerabilities.

  • Incident Mapping: The Postmark MCP supply chain attack, where a malicious server impersonating a legitimate tool was loaded at runtime to secretly BCC emails to an attacker.
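A baseline defence against this kind of runtime impersonation is pin-and-verify: the digest of every dynamically loaded tool or manifest is checked against a reviewed lockfile before the agent may use it. A minimal sketch (the lockfile format is an assumption):

```python
import hashlib

def verify_tool(name: str, payload: bytes, pinned: dict) -> bool:
    """Accept a runtime-loaded tool only if its digest matches the pinned one."""
    expected = pinned.get(name)
    actual = hashlib.sha256(payload).hexdigest()
    return expected == actual
```

Anything that fails verification, including a tool the lockfile has never seen, is rejected before it can join the agent's toolset.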

ASI05: Unexpected Code Execution (RCE)

Vibe coding tools and agentic systems generate and execute code in real time.

  • The Exploit: Attackers bypass sandboxing to execute remote code. Mitigations require banning eval() functions and unsafe deserializers.

  • CVE Mapping: CVE-2025-53773 (GitHub Copilot).
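One concrete way to honor the eval() ban is to restrict model-generated strings to literal parsing, which can represent data but can never execute a call. A sketch using Python's `ast.literal_eval`:

```python
import ast

def safe_parse(model_output: str):
    """Parse model-generated data without executing it as code.

    ast.literal_eval accepts only literals (numbers, strings, lists,
    dicts, ...) and raises on anything with a function call in it.
    """
    try:
        return ast.literal_eval(model_output)
    except (ValueError, SyntaxError):
        return None
```

The same principle applies to deserializers: prefer formats like JSON over pickle or other mechanisms that can instantiate arbitrary objects.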

ASI06: Memory & Context Poisoning

Adversaries corrupt stored context, causing future reasoning to become biased.

  • The Exploit: Attackers seed malicious data. The corrupted context survives session resets, requiring cryptographic provenance tracking to detect.

  • CVE Mapping: CVE-2025-54136 (Cursor IDE).
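Cryptographic provenance tracking can be sketched as HMAC-signing each memory entry at write time and verifying the tag before the entry re-enters the context window. Key management is out of scope here; the key below is a placeholder:

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # placeholder; use a managed per-agent key in practice

def sign_entry(entry: dict, key: bytes = SECRET) -> str:
    """Produce an HMAC tag over a canonical serialization of the entry."""
    payload = json.dumps(entry, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_entry(entry: dict, tag: str, key: bytes = SECRET) -> bool:
    """Reject any memory entry whose content no longer matches its tag."""
    return hmac.compare_digest(sign_entry(entry, key), tag)
```

Because the tag is bound to the content, a poisoned entry that was modified after signing, even one that survives session resets, fails verification and can be dropped.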

Phase 3: System-Level Behavior

When agents talk to each other, failures cascade at scale.

ASI07: Insecure Inter-Agent Communication

Multi-agent systems depend on continuous communication. Without semantic validation or mutual authentication, attackers intercept and manipulate messages.

  • CVE Mapping: CVE-2025-52882 (Claude Code).
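A minimal sketch of authenticated inter-agent messaging, assuming a pre-shared key per agent pair: each message carries an HMAC over sender, sequence number, and body, and receivers reject both bad tags and replayed sequence numbers. Key distribution and transport security are out of scope here:

```python
import hashlib
import hmac

def make_msg(sender: str, seq: int, body: str, key: bytes) -> dict:
    """Build a message authenticated by an HMAC over sender|seq|body."""
    mac = hmac.new(key, f"{sender}|{seq}|{body}".encode(), hashlib.sha256).hexdigest()
    return {"sender": sender, "seq": seq, "body": body, "mac": mac}

def accept(msg: dict, last_seq: dict, key: bytes) -> bool:
    """Accept only authentic, strictly newer messages from each sender."""
    expected = hmac.new(
        key, f"{msg['sender']}|{msg['seq']}|{msg['body']}".encode(), hashlib.sha256
    ).hexdigest()
    if not hmac.compare_digest(expected, msg["mac"]):
        return False  # forged or tampered message
    if msg["seq"] <= last_seq.get(msg["sender"], -1):
        return False  # replayed or out-of-order message
    last_seq[msg["sender"]] = msg["seq"]
    return True
```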

ASI08: Cascading Failures

A single fault propagates and compounds into system-wide harm. Because agents delegate autonomously, errors bypass stepwise human checks and require digital twin replay testing to safely mitigate.

Phase 4: Humans & Governance

Putting humans and rigid policy engines back into the picture.

ASI09: Human-Agent Trust Exploitation

Agents establish strong trust through natural language fluency and anthropomorphism. Attackers exploit this automation bias to manipulate humans into performing the final audited action, making the agent’s role invisible to forensics.

ASI10: Rogue Agents

Compromised agents deviate from their intended scope, creating a containment gap for traditional rule-based monitoring systems. Mitigating rogue agents requires behavior certificates and independent watchdog agents.
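A watchdog check can be as simple as diffing an agent's observed actions against its declared behavior profile (the behavior-certificate idea above). The profile contents here are illustrative:

```python
# Hypothetical behavior profiles: the set of actions each agent was
# certified to perform at deployment time.
PROFILES = {"agent-1": {"search_docs", "summarize"}}

def out_of_scope_actions(agent: str, observed: list) -> set:
    """Return observed actions that fall outside the agent's certified scope."""
    allowed = PROFILES.get(agent, set())
    return {action for action in observed if action not in allowed}
```

An independent watchdog agent would run this check on telemetry it collects itself, so a compromised agent cannot hide its own deviations.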

The 5-Step Agentic Action Plan

To bridge the gap between rapid AI adoption and enterprise-grade security, we recommend aligning your defenses with the core principle of Zero Trust: designing with fault tolerance that assumes the failure or exploitation of any component.

Organizations should immediately implement the following 5-step action plan:

  1. Discover & Inventory: Map all AI agents, MCP servers, Non-Human Identities (NHIs), and tools.

  2. Threat Model with ASI: Use ASI01 through ASI10 as a checklist against specific agent deployments before moving to production.

  3. Enforce Least Agency: Grant only the minimum autonomy needed. Use short-lived credentials and Just-In-Time (JIT) access.

  4. Build Kill Switches: Implement circuit breakers between workflows, blast-radius caps, and emergency halt mechanisms that are tested regularly.

  5. Monitor & Respond: Establish deep observability into agent behavior using watchdog agents, distributed tracing, and AI-specific incident response playbooks.
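Step 4 above can be sketched as a circuit breaker that trips once failures or total actions exceed a blast-radius cap; the thresholds are illustrative and would be tuned per workflow:

```python
class CircuitBreaker:
    """Halt a workflow once failures or total actions exceed a cap."""

    def __init__(self, max_failures: int = 3, max_actions: int = 50):
        self.failures = 0
        self.actions = 0
        self.max_failures = max_failures
        self.max_actions = max_actions
        self.open = False

    def record(self, success: bool) -> None:
        """Record one agent action; trip the breaker if a cap is breached."""
        self.actions += 1
        if not success:
            self.failures += 1
        if self.failures >= self.max_failures or self.actions >= self.max_actions:
            self.open = True  # emergency halt: refuse further actions

    def allow(self) -> bool:
        return not self.open
```

The same mechanism doubles as a tested kill switch: setting `open` manually (or from a watchdog) halts the workflow immediately.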

Assess Your AI Attack Surface

Deploying autonomous agents requires shifting from output filtering to intent validation and strict execution boundaries. If your organization is building or testing agentic AI systems, contact the Damovo advisory team. We can help you implement least-agency governance and test kill switches that prevent cascading failures. Schedule a scoping call to identify the right testing stage for your environment.