What is an AI Agent Framework?
An AI agent framework is a library or platform that provides the orchestration plumbing for building agents — the control loop, tool-calling abstraction, memory layer, multi-agent coordination, and observability — so developers can focus on what the agent does rather than how it loops. Without a framework, every agent reinvents the same patterns.
Common agent frameworks
- LangChain / LangGraph — the most widely-deployed; LangGraph is the newer state-machine variant
- LlamaIndex — originally a RAG framework, expanded into agents
- AutoGen (Microsoft) — multi-agent conversation orchestration
- CrewAI — role-based multi-agent workflows
- OpenAI Agents SDK — official OpenAI framework, integrates with their tools and Responses API
- Anthropic Computer Use SDK — desktop-action agents
- Google ADK / Vertex AI Agent Builder — Google's first-party agent platform with A2A support
- Pydantic AI — schema-first, smaller scope
- Smolagents (Hugging Face) — minimal, code-execution-first
Most production deployments combine a framework with one or more LLM providers and a vector database for memory.
What frameworks actually handle
The control-loop plumbing every agent needs:
- Tool routing — the model emits a tool call; the framework parses, validates arguments, executes, and feeds the result back
- State management — conversation history, scratchpad, intermediate artifacts
- Memory — short-term (current session) and long-term (persisted across sessions, often via vector store)
- Streaming — incremental token output to the user as the model generates
- Multi-agent orchestration — for frameworks that support it, the protocol for delegating tasks between agents
- Observability — traces of each step (which tool was called with what args, what came back) for debugging and audit
- Error handling — what happens when a tool call fails or the model produces malformed output
Security implications
Frameworks are also the security boundary: if the framework loads tool descriptions naively, every tool the agent has access to is a potential injection vector. Three patterns to watch:
- Default-trust on tool responses. Most frameworks pass tool output back into the model context unmodified. Indirect prompt injection through tool responses works because the framework treats the tool's response as authoritative data.
- Capability accumulation. Frameworks make it cheap to add tools. Production agents often end up with 10-50 tools, each contributing to the blast radius if the agent is compromised.
- Memory poisoning persistence. Long-term memory stores are persistent — an injection attack that writes to memory on Monday can fire again on Friday. Few frameworks validate writes to memory.
- Audit-log gaps. Most frameworks log tool calls but not the full model context that triggered them. Forensic analysis of an agent compromise is hard without complete context capture.
When evaluating an agent framework, the security questions are at least as important as the developer-experience questions: Can you inspect every tool description before runtime? Can you intercept tool responses for content filtering? Can you scope tools per-user or per-session? Can you replay an agent run from logs?