## What is the OWASP LLM Top 10?
The OWASP LLM Top 10 is the curated list of the most critical security risks specific to applications using large language models, maintained by the Open Worldwide Application Security Project (OWASP). It is the closest thing to a canonical taxonomy in LLM security and the framework most red-team programs, vendor security questionnaires, and compliance audits map their coverage against.
## The current list (2026 edition)
| ID | Risk | Plain English |
|---|---|---|
| LLM01 | Prompt Injection | Adversarial input causes the model to follow attacker instructions |
| LLM02 | Sensitive Information Disclosure | Model leaks credentials, PII, or proprietary content |
| LLM03 | Supply Chain | Compromised models, datasets, or dependencies introduce risk |
| LLM04 | Data and Model Poisoning | Adversarial training data corrupts model behavior |
| LLM05 | Improper Output Handling | Model output isn't validated before downstream use |
| LLM06 | Excessive Agency | Model has more capability or autonomy than the task requires |
| LLM07 | System Prompt Leakage | Hidden instructions are recoverable via attacks |
| LLM08 | Vector and Embedding Weaknesses | RAG and embedding pipelines are exploitable |
| LLM09 | Misinformation | Model produces confident wrong content (hallucinations) |
| LLM10 | Unbounded Consumption | Resource exhaustion through token amplification or other unmetered resource use |
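The taxonomy above is small enough to encode directly, which makes coverage tracking mechanical. A minimal sketch in Python; the IDs and names come from the table, while `untested_categories` is a hypothetical helper, not part of any OWASP tooling:

```python
# The OWASP LLM Top 10 (2026) as an ID -> risk-name map, taken from the
# table above.
OWASP_LLM_TOP10 = {
    "LLM01": "Prompt Injection",
    "LLM02": "Sensitive Information Disclosure",
    "LLM03": "Supply Chain",
    "LLM04": "Data and Model Poisoning",
    "LLM05": "Improper Output Handling",
    "LLM06": "Excessive Agency",
    "LLM07": "System Prompt Leakage",
    "LLM08": "Vector and Embedding Weaknesses",
    "LLM09": "Misinformation",
    "LLM10": "Unbounded Consumption",
}

def untested_categories(tested_ids):
    """Return the Top 10 IDs that have no test coverage yet.

    Hypothetical helper: set difference against the full taxonomy.
    """
    return sorted(set(OWASP_LLM_TOP10) - set(tested_ids))
```

Anything the function returns is, per the framing below, unaudited attack surface.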
## How the list is maintained
OWASP runs an open community process: practitioners propose entries, evidence is reviewed (real-world incidents, research papers, exploit reproductions), and the working group votes on inclusion. The list is versioned annually; the 2024 edition differs meaningfully from the 2026 edition in both entries and prioritization.
The maintenance cadence is fast relative to traditional software security taxonomies because the LLM threat landscape is evolving quickly. Risks that had no established names in 2023 (system prompt leakage, vector and embedding weaknesses, MCP-specific tool poisoning) are now formal entries.
## How to use it
Three operational uses:
- **Coverage scaffolding for red-team engagements.** Each scoped engagement should explicitly cover all 10 categories, with measurable per-category attack-success-rate findings. Gaps in coverage are unaudited attack surface.
- **Vendor security questionnaires.** "Does your AI security platform cover OWASP LLM01-10?" is now a standard buyer question. Vendors map their detection and mitigation coverage to the categories.
- **Internal risk register.** Application security teams running enterprise AI programs use the Top 10 as the rubric for assessing every new AI deployment before release.
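The per-category attack-success-rate metric from the first use case can be computed directly from raw red-team findings. A minimal sketch, assuming each finding is a `(category_id, succeeded)` pair; the function name and data shape are illustrative:

```python
from collections import defaultdict

def attack_success_rates(findings):
    """Compute attack success rate per OWASP LLM category.

    findings: iterable of (category_id, succeeded: bool) pairs from a
    red-team run. Returns {category_id: success_rate} for tested
    categories only; absent IDs signal a coverage gap.
    """
    totals = defaultdict(int)
    successes = defaultdict(int)
    for category, succeeded in findings:
        totals[category] += 1
        if succeeded:
            successes[category] += 1
    return {cat: successes[cat] / totals[cat] for cat in totals}
```

Categories missing from the result were never attacked at all, which is the coverage gap the scaffolding use case is meant to surface.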
## Companion frameworks
OWASP has expanded into adjacent territory:
- OWASP Agentic AI Top 10 — agentic-specific risks (tool abuse, multi-agent collusion, persistence)
- OWASP Machine Learning Top 10 — pre-LLM ML risks (adversarial examples, model inversion, poisoning)
- OWASP AI Security and Privacy Guide — reference handbook covering all of the above
A complete AI security program typically maps coverage against the LLM Top 10 and the Agentic Top 10 — the two surfaces overlap but neither subsumes the other.
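Because the two surfaces overlap without either subsuming the other, coverage is naturally modeled as set operations over both taxonomies. A sketch under stated assumptions: the control names are invented, and the "AGT" IDs are placeholder labels for Agentic Top 10 entries, not official OWASP identifiers:

```python
# Hypothetical mapping: each deployed control -> the framework categories
# it addresses. "AGT" IDs are illustrative stand-ins for Agentic Top 10
# entries, not official OWASP identifiers.
CONTROLS = {
    "prompt-input-filtering": {"LLM01"},
    "tool-permission-gating": {"LLM06", "AGT01"},
    "output-sanitization": {"LLM05"},
}

def coverage_gaps(required_categories, controls):
    """Return in-scope categories that no deployed control addresses."""
    covered = set().union(*controls.values())
    return sorted(required_categories - covered)
```

Running the check against the union of both lists, rather than either alone, is what "neither subsumes the other" implies operationally.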
## See also
For Repello's full coverage of each category with reproduction examples and defense recommendations, see the OWASP LLM Top 10 2026 cornerstone. The official OWASP page is at genai.owasp.org.