Back to all blogs

The Agentic AI security threat landscape in 2026: what attackers are actually doing

The Agentic AI security threat landscape in 2026: what attackers are actually doing

Archisman Pal

Archisman Pal

|

Head of GTM

Head of GTM

Feb 28, 2026

|

10 min read

Blog cover image of: Malicious OpenClaw Skills Exposed: A Full Teardown
Repello tech background with grid pattern symbolizing AI security

-

The CrowdStrike 2026 Global Threat Report documents an 89% increase in AI-enabled attacks. Here's what that means for agentic AI deployments and what security teams need to do now. Slug: agentic-ai-security-threats-2026

TL;DR

  • The CrowdStrike 2026 Global Threat Report documents an 89% year-over-year increase in AI-enabled attacks, with average eCrime breakout time now at 29 minutes

  • AI is being used offensively across five attack classes: spearphishing at scale, voice cloning, LLM-assisted vulnerability discovery, automated lateral movement, and direct exploitation of AI systems via prompt injection

  • Agentic AI deployments are the highest-value target in this threat landscape because a single successful attack propagates across every chained tool call downstream

  • The 90+ organizations already compromised via AI prompt injection in 2025 are a leading indicator, not an outlier

The CrowdStrike 2026 Global Threat Report, released in February 2026, puts a number on something security teams have been watching build for two years: AI-enabled attacks surged 89% year-over-year, and the average time from initial access to lateral movement now sits at 29 minutes. The fastest observed breakout in the dataset happened in 27 seconds. In one tracked intrusion, data exfiltration began within four minutes of initial access.

These numbers matter not just as trend data but as a forcing function. At 29-minute breakout times, the window for human-in-the-loop detection and response has effectively closed. And when attackers combine that speed with agentic AI as the target, the blast radius of a single successful intrusion multiplies across every tool call the agent can make.

This piece covers what attackers are actually doing with AI in 2026, why agentic AI systems are their most valuable current target, and what the threat trajectory means for security teams responsible for these deployments.

What the 89% increase actually covers

The headline figure captures a broad category. AI-enabled attacks in the CrowdStrike data span five distinct technique classes, and understanding which ones are scaling matters more than the aggregate number.

AI-generated spearphishing is the highest-volume application. Threat actors are using large language models to produce highly personalized phishing content at industrial scale, eliminating the grammatical errors and generic phrasing that historically made phishing detectable. The CrowdStrike report notes that ChatGPT was referenced in criminal forums 550% more than any other model, reflecting its role as a commodity tool in offensive operations.

Voice cloning and vishing represent the fastest-growing social engineering vector. Deepfake voice technology has matured to the point where short audio samples are sufficient to produce convincing impersonations. CrowdStrike's dataset includes threat actors using voice cloning to bypass multi-factor authentication workflows by impersonating executives and IT helpdesk personnel in real-time calls.

LLM-assisted vulnerability discovery is being operationalized by nation-state actors. FANCY BEAR, the Russia-nexus threat actor, deployed LLM-enabled malware (LAMEHUG) to automate reconnaissance and document collection. The eCrime actor PUNK SPIDER used AI-generated scripts to accelerate credential dumping and erase forensic evidence. Both represent the shift from AI as a planning tool to AI as an active component in the kill chain.

Malware-free intrusion continues to dominate: 82% of detections in the CrowdStrike dataset were malware-free, relying on credential theft, living-off-the-land techniques, and identity-based access. AI accelerates every stage of this approach by reducing the skill floor required to move through environments without triggering traditional detection.

Direct exploitation of AI systems is the newest attack class and the one most relevant to teams deploying agentic AI. CrowdStrike documented AI tools being exploited at more than 90 organizations by injecting malicious prompts to generate commands for stealing credentials and cryptocurrency. This is prompt injection operating at enterprise scale, not in a research context.

Why agentic AI systems are the primary target

Traditional AI deployments (a model behind a chat interface, a classifier in a pipeline) have a bounded blast radius. A successful attack on a single-turn chatbot produces a single malicious output. It's bad, but it's contained.

Agentic AI deployments do not have this property. An agent with access to email, calendar, code execution, file systems, and external APIs is not one attack surface. It is all of those attack surfaces simultaneously, connected by a model that will act on whatever instructions it receives. Repello's research on security threats in agentic AI browsers documents exactly this: the same properties that make agentic AI useful (autonomous multi-step task execution, chained tool calls, persistent memory) are what make it dangerous when an attacker gets a foothold.

The math on blast radius is simple. An agent that can read email, execute shell commands, and make API calls operates with the combined permissions of every integration it holds. A single successful prompt injection that hijacks one tool call can propagate instructions to every downstream tool in the chain. Compromise the agent; compromise everything the agent touches.

The OWASP Agentic AI Top 10 for 2026 reflects this reality in its threat taxonomy. Tool call hijacking, memory poisoning, and orchestrator manipulation are all attack classes that exist because of the agentic architecture itself, not because of any individual tool's vulnerability.

The specific attack playbook targeting agentic deployments

Understanding how attackers target agentic AI systems specifically requires looking beyond general AI-enabled attack techniques to the threat classes that exploit agentic architecture properties.

Prompt injection via external content is the primary entry point. Agentic systems routinely process external data: web pages retrieved by a search tool, documents passed to a summarization tool, emails read by a calendar integration. Any of that content can carry embedded instructions that the model interprets as legitimate, overriding its original task. The CrowdStrike finding about 90+ organizations compromised via prompt injection represents the real-world maturation of this attack class. It is no longer a theoretical concern.

Memory poisoning targets agents with persistent context. Many production agentic deployments maintain memory across sessions to provide continuity. An attacker who successfully injects content into an agent's long-term memory store can influence every future session that draws on that memory. The attack does not need to be repeated; it persists until the memory is explicitly cleared or audited.

Tool definition manipulation exploits the trust model most teams use for their tool registries. If an attacker can modify a tool's description or input schema in a registry without triggering an integrity check, the model will follow the modified definition. OWASP calls this the "rug pull" vulnerability. It is closely related to the supply chain risk documented in CrowdStrike's findings around AI-assisted code generation and dependency injection.

Credential harvesting through agent context is directly evidenced in the CrowdStrike data. Agents that hold API tokens, OAuth credentials, or session cookies in their context window are carrying high-value targets. The 90+ organizations compromised through AI prompt injection were largely targeted for credential theft, not data destruction. Attackers understand that an agent's context window is a credential store.

The speed problem: what 29-minute breakout means for agentic security

The 29-minute average breakout time in the CrowdStrike report is significant for any security program, but it is especially significant for agentic AI specifically. Traditional intrusion response assumes a detection and containment window measured in hours. Most security operations centers are not staffed or tooled to operate at the speed the CrowdStrike data describes.

For agentic AI deployments, the timeline is potentially shorter. An agent that processes external content in real time can be compromised, manipulated, and used to exfiltrate data within a single task execution cycle. There is no lateral movement phase to detect; the agent's tool access means the attacker is already everywhere the agent can reach.

The NIST AI Risk Management Framework emphasizes the need to map and measure AI-specific risks across the deployment lifecycle. The speed data from CrowdStrike makes the case for why measurement cannot be periodic. By the time a quarterly security review identifies an exploitable weakness in an agentic deployment, the window for exploitation has already been open for months.

How security teams are responding to the agentic AI threat landscape

The CrowdStrike report's title, "AI Accelerates Adversaries," points directly to the response requirement: defenders need to match that acceleration, not chase it from behind.

Two capabilities are non-negotiable for agentic AI security in 2026. As Repello AI Research Team noted: "The window for detection and response has collapsed from hours to minutes. Agentic deployments require continuous monitoring that operates at machine speed, not periodic assessments."

The first is continuous automated red teaming. Point-in-time assessments cannot keep pace with an attack surface that changes every time a new tool is added, a model is updated, or a prompt template is modified. Automated red teaming engines need to run ongoing attack batteries against deployed agentic systems across the full threat taxonomy: prompt injection, tool call hijacking, memory poisoning, and credential exfiltration. Repello's ARTEMIS automated red teaming engine is built for exactly this: continuous attack coverage that surfaces exploitable weaknesses before the 29-minute clock starts.

The second is runtime monitoring at the inference layer. Even with rigorous red teaming, novel attacks will reach production. Runtime monitoring that detects behavioral anomalies in tool call patterns, flags oversized outputs, and blocks injection attempts in real time is the difference between a detected incident and a completed breach. Repello's ARGUS runtime security layer operates at this layer, providing coverage for active exploitation attempts that pre-deployment testing cannot anticipate.

The combination matters because the CrowdStrike data shows both attack categories are active. Attackers are using AI to find vulnerabilities (which red teaming detects proactively) and using AI to execute attacks at machine speed (which runtime monitoring catches in production).

Conclusion

The 89% increase in AI-enabled attacks documented in the CrowdStrike 2026 Global Threat Report is a data point, not a ceiling. The underlying drivers (lower skill floor for attackers, commodity access to capable models, AI-amplified attack speed) are not slowing down. Agentic AI deployments sit at the intersection of the highest-value target class and the least-mature security posture in most organizations.

The security teams that get ahead of this are the ones treating agentic AI security as a continuous discipline rather than a pre-deployment checklist. The 27-second breakout time in the CrowdStrike data is an extreme case today. It is a baseline case in two years.

Talk to Repello about securing your agentic AI deployment before the clock starts.

Frequently asked questions

What does the CrowdStrike 2026 Global Threat Report say about AI-enabled attacks?

The CrowdStrike 2026 Global Threat Report documents an 89% year-over-year increase in AI-enabled attacks. Key findings include: average eCrime breakout time now at 29 minutes (a 65% increase in speed from 2024), the fastest observed breakout at 27 seconds, AI tools exploited at more than 90 organizations via prompt injection to steal credentials and cryptocurrency, and ChatGPT referenced 550% more than any other model in criminal forums. The report's title, "AI Accelerates Adversaries," frames the core finding: AI is compressing the time between attacker intent and attacker execution.

Why are agentic AI systems particularly vulnerable to AI-enabled attacks?

Agentic AI systems hold the combined permissions of every tool integration they connect to, and they act autonomously on instructions. A single successful prompt injection in an agentic context does not produce one malicious output; it can propagate attacker-controlled instructions to every downstream tool call in the chain. This means an agent with access to email, files, APIs, and code execution is all of those attack surfaces simultaneously. The same architecture that makes agents useful (autonomous multi-step execution, chained tool calls) is what makes them the highest-value target in the current threat landscape.

What attack techniques are threat actors using against agentic AI?

The primary attack classes targeting agentic AI deployments are: prompt injection via external content (embedding instructions in documents, web pages, or emails that the agent processes); memory poisoning (injecting persistent instructions into the agent's long-term memory store); tool definition manipulation (modifying tool descriptions to alter model behavior without triggering integrity checks); and credential harvesting via the agent's context window, which often holds API tokens and session cookies. All of these exploit architectural properties specific to agentic systems rather than general AI vulnerabilities.

How does the 29-minute breakout time affect agentic AI security?

The 29-minute average eCrime breakout time documented by CrowdStrike closes the window for human-in-the-loop detection and response. For agentic AI specifically, the timeline is potentially shorter still: an agent processing external content in real time can be compromised and used to exfiltrate data within a single task execution cycle, with no lateral movement phase to detect. This makes periodic security assessments insufficient; agentic deployments require continuous monitoring and automated detection that can operate at machine speed.

What should security teams do to protect agentic AI deployments in 2026?

Two capabilities are essential: continuous automated red teaming to proactively identify exploitable weaknesses before attackers do, covering prompt injection, tool call hijacking, memory poisoning, and credential exfiltration; and runtime monitoring at the inference layer to detect and block active exploitation attempts that pre-deployment testing cannot anticipate. Neither is sufficient without the other. Red teaming without runtime monitoring leaves production deployments exposed to novel attacks; runtime monitoring without red teaming has no visibility into what weaknesses exist until an attack succeeds.

Share this blog

Subscribe to our newsletter

Repello tech background with grid pattern symbolizing AI security
Repello tech background with grid pattern symbolizing AI security
Repello AI logo - Footer

Sign up for Repello updates
Subscribe to our newsletter to receive the latest insights on AI security, red teaming research, and product updates in your inbox.

Subscribe to our newsletter

8 The Green, Ste A
Dover, DE 19901, United States of America

Follow us on:

LinkedIn icon
X icon, Twitter icon
Github icon
Youtube icon

© Repello Inc. All rights reserved.

Repello tech background with grid pattern symbolizing AI security
Repello AI logo - Footer

Sign up for Repello updates
Subscribe to our newsletter to receive the latest insights on AI security, red teaming research, and product updates in your inbox.

Subscribe to our newsletter

8 The Green, Ste A
Dover, DE 19901, United States of America

Follow us on:

LinkedIn icon
X icon, Twitter icon
Github icon
Youtube icon

© Repello Inc. All rights reserved.