Glossary/Excessive Agency

What is Excessive Agency in AI Agents?

Excessive agency is the design failure where an AI system has access to more capabilities, broader permissions, or higher autonomy than its task actually requires. It is OWASP LLM06 in the LLM Top 10 and a recurring theme across the Agentic AI Top 10. Excessive agency doesn't cause attacks by itself — it determines how bad attacks are when they succeed.

Three dimensions of excessive agency

OWASP frames it as three overlapping failure modes:

  1. Excessive functionality. The agent has tools or capabilities it doesn't need for its job. A customer-service agent doesn't need a send_money tool. A code-review agent doesn't need browser automation. Each unneeded capability contributes nothing to the task but remains a surface an attacker can invoke.

  2. Excessive permissions. The tools the agent does need are scoped too broadly. A read_files tool that can read any file in the home directory, when the task only needs to read files in ~/projects/this-repo/. A database tool with admin credentials, when read-only would suffice.

  3. Excessive autonomy. The agent takes high-impact actions without human confirmation. A scheduling agent that can send meeting invites without asking. A finance agent that approves expense reports without a human sign-off. Wherever the model's confidence and the user's intent diverge, autonomy turns mistakes into faits accomplis.
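The permissions dimension in particular lends itself to a mechanical fix. As a minimal sketch (the `read_file` tool and the project path are hypothetical, standing in for whatever file tool the agent exposes), scoping reads to one project directory instead of the whole home directory might look like:

```python
from pathlib import Path

# Hypothetical allowed root for illustration.
ALLOWED_ROOT = Path("~/projects/this-repo").expanduser().resolve()

def read_file(requested: str) -> str:
    """File-reading tool scoped to a single project directory."""
    # Resolve symlinks and ".." segments BEFORE the containment check,
    # otherwise a request like "../../.ssh/id_rsa" would slip past a
    # naive string-prefix comparison.
    target = (ALLOWED_ROOT / requested).resolve()
    if not target.is_relative_to(ALLOWED_ROOT):  # Python 3.9+
        raise PermissionError(f"{requested!r} is outside the allowed root")
    return target.read_text()
```

The key design choice is that the boundary lives in the tool, not in the prompt: no amount of injected instruction text can widen what the function will read.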

Concrete examples
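To make the functionality dimension concrete, here is an illustrative sketch (all tool names are invented) contrasting an agent that inherits every tool the platform offers with one granted an explicit per-role allowlist:

```python
# Hypothetical platform-wide tool registry; names are invented.
ALL_TOOLS = {"search_orders", "issue_refund", "send_money",
             "read_files", "run_shell", "send_email"}

# Excessive functionality: the agent inherits every tool by default.
support_agent_tools_bad = ALL_TOOLS

# Minimal scope: only what the customer-service task actually needs.
support_agent_tools = {"search_orders", "issue_refund"}

def dispatch(tool_name: str, allowed: set[str]) -> str:
    """Refuse any tool call that falls outside the agent's allowlist."""
    if tool_name not in allowed:
        raise PermissionError(f"tool {tool_name!r} not granted to this agent")
    return f"dispatching {tool_name}"
```

With the minimal set, a prompt injection that convinces the model to emit a send_money call still fails at dispatch time, because the capability was never wired in.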

Why excessive agency is the highest-leverage thing to fix

Most AI security failures are "model produces wrong output" → "user sees wrong output." That's recoverable.

Excessive agency turns "model produces wrong output" into "real-world consequence." That's not always recoverable — sent emails, deleted files, executed code, transferred funds.

Reducing agency to the minimum necessary scope is the single most effective control because it bounds the worst case regardless of how the upstream attack happened.
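One way to see how this bounds the worst case: even if an attacker fully controls the model's output, an autonomy gate means their best outcome is a declined confirmation prompt. A minimal sketch, with a hypothetical `confirm` callback standing in for whatever human-in-the-loop channel the deployment uses:

```python
from typing import Callable

# Actions whose effects are hard or impossible to undo.
HIGH_IMPACT = {"send_email", "delete_file", "transfer_funds"}

def execute(action: str, args: dict, confirm: Callable[[str], bool]) -> str:
    """Run low-impact actions directly; gate high-impact ones on a human."""
    if action in HIGH_IMPACT and not confirm(f"{action}({args})"):
        # Worst case under a successful injection: a refused prompt.
        return "declined"
    return f"executed {action}"
```

The gate does not prevent the model from producing a bad action; it converts an irreversible consequence into a reviewable request.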

Defenses