What is the NIST AI Risk Management Framework?
The NIST AI Risk Management Framework (AI RMF 1.0) is a voluntary framework published by the US National Institute of Standards and Technology in January 2023 for managing risks from AI systems across their lifecycle. It is the closest thing the United States has to an authoritative AI governance standard: it is referenced by federal AI executive orders, by sector-specific regulators, and by many enterprise AI risk programs.
What the framework covers
NIST AI RMF organizes AI risk management around four functions:
- Govern — establish policies, accountability, and oversight for AI systems. Includes risk tolerance, organizational roles, supplier management, and a continuous improvement loop.
- Map — understand context: what the AI system is, what it is used for, who is affected, which legal, ethical, and business constraints apply, and what the deployment looks like end to end.
- Measure — evaluate the system against trustworthiness characteristics: accuracy, reliability, robustness, security, resilience, accountability, transparency, explainability, privacy, and fairness.
- Manage — prioritize and respond to risks based on their likelihood and impact. Includes incident response planning, ongoing monitoring, and decommissioning workflows.
These functions are not sequential — they run continuously and feed back into each other.
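As a concrete sketch, the four functions can be wired into a simple risk register: Map surfaces risks, Measure and Govern annotate them, and Manage prioritizes them by likelihood and impact. Everything below — the class name, field names, and the 1–5 scoring scale — is illustrative, not part of the framework itself.

```python
from dataclasses import dataclass

# Hypothetical sketch of a risk register entry tagged with the
# AI RMF function that surfaced it. Names and scales are illustrative.
FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskEntry:
    description: str
    function: str    # which RMF function surfaced this risk
    likelihood: int  # 1 (rare) .. 5 (near-certain)
    impact: int      # 1 (negligible) .. 5 (severe)

    def __post_init__(self):
        if self.function not in FUNCTIONS:
            raise ValueError(f"unknown function: {self.function}")

    @property
    def priority(self) -> int:
        # Simple likelihood x impact score used to rank Manage actions
        return self.likelihood * self.impact

register = [
    RiskEntry("No owner assigned for model updates", "Govern", 3, 3),
    RiskEntry("Prompt injection via user uploads", "Map", 4, 4),
]
# Manage: highest-priority risks first
register.sort(key=lambda r: r.priority, reverse=True)
print([r.description for r in register])
```

Because the functions feed back into each other, a real register would be re-scored continuously rather than ranked once.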
The seven trustworthiness characteristics
NIST defines a trustworthy AI system as one that is:
- Valid and Reliable — performs as intended within its operational scope
- Safe — does not endanger people, property, or the environment
- Secure and Resilient — robust against adversarial attacks and operational failures
- Accountable and Transparent — clear ownership, auditable decisions
- Explainable and Interpretable — outputs can be understood at the relevant level
- Privacy-Enhanced — respects user privacy
- Fair, with Harmful Bias Managed — equitable treatment across groups
Many AI security programs map their controls to these seven characteristics crossed with the four functions, producing a 7×4 matrix used as the assessment framework.
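The 7×4 matrix can be sketched as a nested dictionary keyed by characteristic and function, with each cell holding an assessment result. The maturity scale (0–3) and the gap check are assumptions for illustration, not part of the framework.

```python
# Hypothetical sketch of the 7x4 assessment matrix: rows are the seven
# trustworthiness characteristics, columns the four functions.
# Cell values are placeholder maturity scores (0 = unassessed .. 3 = mature).
CHARACTERISTICS = [
    "Valid and Reliable", "Safe", "Secure and Resilient",
    "Accountable and Transparent", "Explainable and Interpretable",
    "Privacy-Enhanced", "Fair, with Harmful Bias Managed",
]
FUNCTIONS = ["Govern", "Map", "Measure", "Manage"]

matrix = {c: {f: 0 for f in FUNCTIONS} for c in CHARACTERISTICS}

# Record one assessment result for a single cell
matrix["Privacy-Enhanced"]["Measure"] = 2

# Flag characteristics with no coverage under any function
gaps = [c for c, row in matrix.items() if all(v == 0 for v in row.values())]
print(f"{len(CHARACTERISTICS) * len(FUNCTIONS)} cells, "
      f"{len(gaps)} uncovered characteristics")
```

A gap scan like this is the usual first output of the matrix: it shows which characteristics have no controls at all before any scoring debate starts.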
The Generative AI Profile
In July 2024, NIST published a companion document — NIST AI 600-1, the Generative AI Profile — that extends the AI RMF specifically for generative AI systems, including LLMs. The profile catalogs twelve risks that are unique to or exacerbated by generative AI:
- CBRN information or capabilities; confabulation; dangerous, violent, or hateful content; data privacy; environmental impacts; harmful bias and homogenization; human-AI configuration; information integrity; information security; intellectual property; obscene, degrading, and/or abusive content; value chain and component integration.
It is among the most widely cited reference documents for GenAI risk programs in regulated US enterprises.
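In practice, a program using the profile checks that every cataloged risk category has at least one mapped control. A minimal sketch of that coverage check, using a subset of category names and made-up control names (the full authoritative list is in the profile itself):

```python
# Hypothetical coverage check: which profile risk categories still lack
# a mapped control? Risk names are a subset; control names are invented.
profile_risks = {
    "Confabulation", "Data privacy", "Information security",
    "Intellectual property", "Harmful bias",
}
controls = {
    "output-grounding checks": {"Confabulation"},
    "PII redaction filter": {"Data privacy"},
}

covered = set().union(*controls.values())
uncovered = profile_risks - covered
print(sorted(uncovered))
```

The same shape works with the full twelve categories; the point is simply that coverage, not control count, is what the profile-based assessment reports.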
How NIST AI RMF compares to other frameworks
| Framework | Origin | Force | Focus |
|---|---|---|---|
| NIST AI RMF | US (NIST) | Voluntary; referenced by EO 14110, 14179 | Whole-AI-lifecycle risk management |
| EU AI Act | EU | Mandatory in EU | Risk-tier compliance, market access |
| ISO 42001 | ISO | Voluntary, certification-based | AI management systems |
| OWASP LLM Top 10 | OWASP | Voluntary, technical | LLM-specific vulnerability classes |
| MITRE ATLAS | MITRE | Voluntary, technical | Adversarial threat tactics and techniques |
A mature AI security program typically uses NIST AI RMF as the governance scaffold and OWASP/ATLAS as the technical control catalog.
See also
The current canonical NIST AI RMF page is at nist.gov/itl/ai-risk-management-framework.