OWASP LLM Top 10

What is the OWASP LLM Top 10?

The OWASP LLM Top 10 is the curated list of the most critical security risks specific to applications using large language models, maintained by the Open Worldwide Application Security Project (OWASP). It is the closest thing to a canonical taxonomy in LLM security and the framework most red-team programs, vendor security questionnaires, and compliance audits map their coverage against.

The current list (2026 edition)

| ID | Risk | Plain English |
| --- | --- | --- |
| LLM01 | Prompt Injection | Adversarial input causes the model to follow attacker instructions |
| LLM02 | Sensitive Information Disclosure | Model leaks credentials, PII, or proprietary content |
| LLM03 | Supply Chain | Compromised models, datasets, or dependencies introduce risk |
| LLM04 | Data and Model Poisoning | Adversarial training data corrupts model behavior |
| LLM05 | Improper Output Handling | Model output isn't validated before downstream use |
| LLM06 | Excessive Agency | Model has more capability or autonomy than the task requires |
| LLM07 | System Prompt Leakage | Hidden instructions are recoverable via attacks |
| LLM08 | Vector and Embedding Weaknesses | RAG and embedding pipelines are exploitable |
| LLM09 | Misinformation | Model produces confident wrong content (hallucinations) |
| LLM10 | Unbounded Consumption | Resource exhaustion through token amplification or similar attacks |
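Tooling often encodes the ten categories as a machine-readable lookup so findings can be tagged consistently. A minimal sketch in Python; the IDs and names mirror the table above, while the structure and helper function are illustrative:

```python
# OWASP LLM Top 10 (2026 edition) as a lookup table.
# IDs and names are from the table above; everything else is illustrative.
OWASP_LLM_TOP_10 = {
    "LLM01": "Prompt Injection",
    "LLM02": "Sensitive Information Disclosure",
    "LLM03": "Supply Chain",
    "LLM04": "Data and Model Poisoning",
    "LLM05": "Improper Output Handling",
    "LLM06": "Excessive Agency",
    "LLM07": "System Prompt Leakage",
    "LLM08": "Vector and Embedding Weaknesses",
    "LLM09": "Misinformation",
    "LLM10": "Unbounded Consumption",
}

def category_name(risk_id: str) -> str:
    """Resolve a risk ID like 'llm05' to its category name."""
    return OWASP_LLM_TOP_10[risk_id.upper()]
```

A table like this is what lets scanners, questionnaires, and risk registers all speak the same category IDs.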

How the list is maintained

OWASP runs an open community process: practitioners propose entries, evidence is reviewed (real-world incidents, research papers, exploit reproductions), and the working group votes on inclusion. The list is versioned annually; the 2024 version differs meaningfully from the 2026 version in both entries and prioritization.

The maintenance cadence is fast relative to traditional software security taxonomies because the LLM threat landscape is evolving quickly. Risks that didn't have established names in 2023 (system prompt leakage, vector and embedding weaknesses, MCP-specific tool poisoning) are now formal entries.

How to use it

Three operational uses:

  1. Coverage scaffolding for red-team engagements. Each scoped engagement should explicitly cover all 10 categories, with measurable per-category attack-success-rate findings. Gaps in coverage = unaudited attack surface.

  2. Vendor security questionnaires. "Does your AI security platform cover OWASP LLM01-10?" is now a standard buyer question. Vendors map their detection and mitigation coverage to the categories.

  3. Internal risk register. Application security teams running enterprise AI programs use the Top 10 as the rubric for assessing every new AI deployment before release.
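The per-category coverage idea in item 1 can be made concrete with a small script: group red-team findings by category, compute attack success rate per category, and flag any category with zero attempts as unaudited. A hedged sketch; the function and data shapes are hypothetical, not part of any OWASP artifact:

```python
from collections import defaultdict

def coverage_report(findings, categories):
    """findings: iterable of (category_id, succeeded) pairs from an engagement.
    Returns {category_id: attack success rate, or None if never attempted}."""
    attempts, successes = defaultdict(int), defaultdict(int)
    for cat, succeeded in findings:
        attempts[cat] += 1
        successes[cat] += int(succeeded)
    return {
        cat: (successes[cat] / attempts[cat]) if attempts[cat] else None
        for cat in categories
    }

cats = [f"LLM{i:02d}" for i in range(1, 11)]
findings = [("LLM01", True), ("LLM01", False), ("LLM05", False)]
report = coverage_report(findings, cats)
# report["LLM01"] == 0.5; report["LLM02"] is None -> unaudited attack surface
```

A `None` entry is exactly the "gap in coverage = unaudited attack surface" condition from item 1, and is what a scoped engagement should drive to zero.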

Companion frameworks

OWASP has expanded into adjacent territory, most notably a companion Agentic Top 10 covering agent-specific risks.

A complete AI security program typically maps coverage against the LLM Top 10 and the Agentic Top 10 — the two surfaces overlap but neither subsumes the other.

See also

For Repello's full coverage of each category with reproduction examples and defense recommendations, see the OWASP LLM Top 10 2026 cornerstone. The official OWASP page is at genai.owasp.org.