
AI Bill of Materials (AI-BOM): The Security and Compliance Guide


Archisman Pal


Head of GTM


Mar 1, 2026


8 min read


What an AI-BOM is, how it differs from SBOM, what it must contain (models, datasets, APIs, agent chains, MCP servers, tool integrations), the EU AI Act and NIST AI RMF requirements, and how Repello Inventory auto-generates a living AI-BOM.

TL;DR

  • An AI Bill of Materials (AI-BOM) is a comprehensive inventory of every component in an AI system: trained models and versions, training and fine-tuning datasets, inference APIs, agent dependency chains, MCP server connections, and external tool integrations.

  • Most security professionals know SBOM from Executive Order 14028. AI-BOM is the same concept applied to AI systems, where the dependency chain is more complex, the risks are different, and the regulatory requirements are catching up fast.

  • The EU AI Act (Article 11), NIST AI RMF, and SEC AI risk materiality guidance all create traceability requirements that an AI-BOM directly satisfies.

  • Without an AI-BOM, you cannot assess supply chain risk in your AI stack, demonstrate regulatory compliance, or respond accurately to an AI security incident.

  • Repello's AI Asset Inventory auto-generates a living AI-BOM that updates continuously as models, APIs, and integrations change.

In May 2021, Executive Order 14028 on Improving the Nation's Cybersecurity made the Software Bill of Materials a federal requirement for software suppliers. The concept was not new, but EO 14028 made it operational: any software vendor selling to the US federal government needed to know and document every component their software contained.

The logic was direct: you cannot manage supply chain risk across dependencies you have not inventoried. The Log4Shell vulnerability in late 2021 demonstrated exactly why. Organizations scrambling to assess their exposure found the exercise nearly impossible without a complete dependency map. Those with SBOM practices in place responded in hours. Those without spent weeks trying to enumerate whether they were affected.

AI deployments face an equivalent problem, with a more complex dependency chain and a less mature tooling ecosystem. An AI system is not just code. It is trained models with specific version histories, datasets used in training and fine-tuning, inference APIs from third-party providers, agent orchestration layers with their own dependency chains, MCP server connections, and external tool integrations, each with its own provenance, its own risk surface, and its own regulatory implications. An AI Bill of Materials (AI-BOM) is the structured inventory that makes this dependency chain visible, auditable, and manageable.

What is an AI Bill of Materials?

An AI Bill of Materials is a structured inventory of every component in an AI system or AI-powered application. It captures what the system is built from, where each component came from, what version is in use, and what dependencies each component introduces.

The concept directly extends SBOM. Where an SBOM tracks software packages, libraries, and their known vulnerabilities, an AI-BOM tracks the AI-specific components that traditional software dependency tooling was not designed to capture: foundation models and fine-tuned derivatives, training datasets and their provenance, inference API connections, agent orchestration chains, and the external systems those agents can reach.

The NTIA's minimum elements for an SBOM established that a useful bill of materials must identify components, establish relationships between them, and be machine-readable and updatable. The same principles apply to AI-BOM. As the Repello AI Research Team notes, a static document that captures the model in use at deployment is not a living AI-BOM. A continuously updated inventory that reflects the current state of every model, dataset, API, and integration in the environment is essential for ongoing risk management.
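To make the "machine-readable and updatable" requirement concrete, here is a minimal sketch of what an AI-BOM record could look like in code. Since no dominant AI-BOM format exists yet, every field name below is an illustrative assumption, not an established standard:

```python
# Minimal sketch of a machine-readable AI-BOM record. Field names and
# component types are illustrative assumptions; no dominant AI-BOM
# standard exists yet.
import json
from dataclasses import asdict, dataclass, field

@dataclass
class AIBOMComponent:
    component_type: str  # e.g. "model", "dataset", "inference_api", "agent", "mcp_server", "tool"
    name: str
    version: str         # model version, checkpoint, or dataset snapshot identifier
    provider: str
    depends_on: list = field(default_factory=list)  # names of other BOM components

# Hypothetical two-component inventory: a fine-tuned model and its dataset.
bom = [
    AIBOMComponent("model", "support-bot-llm", "2025-11-checkpoint", "internal",
                   depends_on=["support-tickets-dataset"]),
    AIBOMComponent("dataset", "support-tickets-dataset", "v3", "internal"),
]

# Serializing to JSON keeps the inventory machine-readable and updatable,
# in the spirit of the NTIA minimum-elements principle.
print(json.dumps([asdict(c) for c in bom], indent=2))
```

The `depends_on` field is what turns a flat list into a dependency map: it records the relationships between components, not just their existence.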

SBOM vs. AI-BOM: what changes

Security teams familiar with SBOM can apply most of the conceptual framework directly. The components tracked, the regulatory mandates in play, and the tooling required are all different, but the underlying goal is identical: visibility into your dependency surface as a prerequisite for managing its risk.



|  | SBOM | AI-BOM |
| --- | --- | --- |
| What it tracks | Software packages, libraries, open-source dependencies | AI models, training datasets, inference APIs, agent chains, MCP servers, tool integrations |
| Format standards | SPDX, CycloneDX (mature, widely supported) | Emerging: no dominant standard yet |
| Regulatory mandate | EO 14028 (US federal), EU Cyber Resilience Act | EU AI Act Art. 11, NIST AI RMF, SEC AI risk disclosures |
| Primary risk addressed | Known CVEs in dependencies, license compliance, supply chain tampering | Model provenance, data poisoning, inference API exposure, agentic dependency hijacking |
| Update trigger | Software release, dependency update | Model version change, dataset update, new API integration, new agent tool connection |
| Who generates it | Build tooling: Syft, Trivy, FOSSA, OWASP Dependency-Check | AI inventory tooling: Repello AI Inventory |
| Who needs it | Software vendors, federal contractors, regulated industries | Any organization deploying AI in production, regulated or otherwise |
| Consequence of absence | Cannot assess CVE exposure, cannot demonstrate compliance | Cannot assess AI supply chain risk, cannot respond to AI incidents, cannot demonstrate regulatory compliance |

The key structural difference is dynamic complexity. A software dependency tree is relatively stable between releases: the same packages at the same versions, updated when the software is rebuilt. An AI system's dependency surface can shift without a formal release: a foundation model provider updates a model version, a fine-tuning dataset is refreshed, a new tool integration is added to an agent. An AI-BOM that does not update continuously is outdated almost immediately.

What an AI-BOM must contain

A complete AI Bill of Materials covers six component categories. Each introduces distinct risk and distinct compliance documentation requirements.

Trained models and versions

Every AI model in use, whether a foundation model accessed via API, a fine-tuned derivative, or a model hosted in your own infrastructure. The AI-BOM should capture: model name and provider, specific model version or checkpoint, training objective and task type, deployment context (which applications or agents use it), and whether the model version is pinned or auto-updating.

Version pinning matters because AI model providers update models continuously. A model accessed as "gpt-4o" today may behave differently from "gpt-4o" in three months if the provider updates the underlying weights. If your AI-BOM does not capture version identifiers, you cannot determine whether a newly disclosed model-level vulnerability affects your deployment.
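A pinning check is straightforward once version identifiers are captured in the BOM. The sketch below flags model entries whose version field is an alias rather than a pinned identifier; the alias list and field names are illustrative assumptions:

```python
# Sketch: flag model entries in an AI-BOM whose version is not pinned.
# An alias like "latest" resolves to whatever the provider currently
# serves, so a vulnerability disclosure cannot be mapped to it.
# The alias set and dict keys are illustrative assumptions.
UNPINNED_ALIASES = {"latest", "stable", "default", ""}

def unpinned_models(bom: list[dict]) -> list[str]:
    """Names of model components whose version field is not pinned."""
    return [c["name"] for c in bom
            if c["type"] == "model"
            and c.get("version", "").lower() in UNPINNED_ALIASES]

bom = [
    {"type": "model", "name": "chat-frontend", "version": "latest"},
    {"type": "model", "name": "summarizer", "version": "gpt-4o-2024-08-06"},
]
print(unpinned_models(bom))  # → ['chat-frontend']
```

A dated snapshot identifier like the second entry is auditable; the first entry is the kind of silent drift the BOM exists to expose.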

Datasets used in training and fine-tuning

For any model you fine-tune or train internally: the dataset name and version, its source and provenance, the data collection methodology, any data licensing restrictions, and the date range of data included. This matters both for security (a poisoned training dataset is a supply chain attack) and for compliance (the EU AI Act requires documentation of training data characteristics for high-risk AI systems).

For third-party foundation models accessed via API, document what is publicly known about training data from the provider's model cards and technical reports. Where providers do not disclose training data details, note the absence as a known gap in provenance.

Inference APIs and third-party model connections

Every external API endpoint through which your applications query AI models: provider name, API endpoint, model version accessed, authentication method, data handling terms (does the provider use query data for training?), and data residency. These connections are supply chain risk surfaces: a compromised inference API can return adversarially modified outputs, and a provider's data handling policy determines whether proprietary data submitted in queries is retained.

Agentic applications that dynamically select which model to call based on task type introduce additional complexity: the AI-BOM must capture the selection logic and the full set of models that logic can route to, not just the models observed in a point-in-time audit.

Agent dependency chains

For agentic AI deployments, document the full orchestration topology: which agents exist, which models each agent uses, which other agents each agent can invoke, the trust relationships between agents (which can issue instructions to which), and the memory stores each agent reads from and writes to. This is the agentic equivalent of a software dependency tree, and it is the component most consistently missing from AI security programs.

Repello's research on MCP tool poisoning demonstrates why agent dependency chains require explicit documentation: a malicious tool definition entering through one node in the chain can propagate instructions to every downstream agent that node can reach. Without a documented dependency map, the blast radius of such an attack is impossible to assess. The OWASP Agentic Top 10 identifies undocumented agent topologies as a direct contributor to the top agentic risk categories.
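With the invocation map documented, blast-radius assessment reduces to graph reachability. The sketch below walks a hypothetical agent topology from a compromised node; the agent names and map structure are assumptions for illustration:

```python
# Sketch: blast-radius assessment over a documented agent dependency map.
# The topology and agent names are hypothetical; the invocation map is
# exactly what an AI-BOM's agent-chain section would record.
from collections import deque

def blast_radius(invokes: dict[str, list[str]], compromised: str) -> set[str]:
    """All agents reachable from a compromised node via invocation edges."""
    seen: set[str] = set()
    queue = deque([compromised])
    while queue:
        node = queue.popleft()
        for nxt in invokes.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

invokes = {
    "planner":    ["researcher", "coder"],
    "researcher": ["browser-tool-agent"],
    "coder":      [],
    "browser-tool-agent": [],
}
print(sorted(blast_radius(invokes, "planner")))
```

A compromise at the planner reaches every downstream agent; a compromise at a leaf agent reaches nothing. Without the documented map, neither answer is available during an incident.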

MCP server connections

Model Context Protocol servers extend what AI agents can do: connecting them to file systems, databases, APIs, code execution environments, and external services. Every MCP server connection in your environment should be documented in the AI-BOM: the server name and provider, the capabilities it exposes, the permissions it grants the connecting agent, and whether the server definition is from a trusted internal source or an external provider.

MCP server connections are a supply chain risk surface distinct from inference APIs: a malicious or compromised MCP server can redefine the tools available to a connected agent, injecting capabilities or redirecting existing ones without modifying the agent's model or system prompt.

External tool integrations

Every external system that an AI agent or application can call: web browsing capabilities, email and calendar access, CRM and database integrations, code execution environments, payment APIs, communication platforms. For each: the integration name, the permission scope granted, the authentication method, the data types the integration can access or transmit, and whether the integration is bidirectional (the external system can also send content back to the AI agent as input).

External tool integrations determine the impact ceiling of a successful attack. An AI-BOM that documents model versions but not tool integrations understates risk systematically: it captures what the system knows but not what it can do.
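Two attributes from the integration inventory above do most of the risk-triage work: write-capable scopes bound what the system can do, and bidirectional integrations mark where untrusted content can flow back in as input. A minimal sketch, with illustrative field names and hypothetical integrations:

```python
# Sketch: triage the tool-integration section of an AI-BOM by the two
# highest-impact attributes: write-capable permission scopes and
# bidirectional data flow. Field names and entries are illustrative.
integrations = [
    {"name": "crm-api",  "scope": "read",       "bidirectional": False},
    {"name": "email",    "scope": "read-write", "bidirectional": True},
    {"name": "payments", "scope": "write",      "bidirectional": False},
]

# What the system can do: integrations with write access.
write_capable = [i["name"] for i in integrations if "write" in i["scope"]]
# Where untrusted input can enter: integrations that return external content.
input_surfaces = [i["name"] for i in integrations if i["bidirectional"]]

print(write_capable, input_surfaces)
```

An integration appearing on both lists (email here: it can send and it receives external content) combines an injection surface with a write capability, and is the first place to tighten scope.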

Regulatory requirements driving AI-BOM adoption

Three major frameworks create explicit or implicit AI-BOM requirements that are either already in force or rapidly approaching enforcement.

EU AI Act, Article 11 and Annex IV. The EU AI Act requires providers of high-risk AI systems to maintain technical documentation covering the system's general description, design specifications, training methodology, training datasets and their characteristics, validation and testing procedures, and monitoring and maintenance procedures. Annex IV enumerates the required documentation elements in detail. This documentation must be kept up to date and made available to national competent authorities on request. An AI-BOM is the operational structure that generates and maintains this documentation continuously rather than as a point-in-time compliance exercise. The August 2026 Annex III enforcement deadline makes this an operational concern now. See the EU AI Act compliance guide for a detailed breakdown of requirements by deadline.

NIST AI RMF, Map and Measure functions. The NIST AI Risk Management Framework requires organizations to map AI risks across the full AI lifecycle, including supply chain risks from third-party models, datasets, and APIs. The Measure function requires ongoing assessment of those risks. Both functions presuppose traceability: you cannot map supply chain risks you have not inventoried, and you cannot measure risks in components you cannot identify. NIST AI 600-1, the Generative AI Profile, extends these requirements specifically to generative AI, explicitly addressing training data transparency, model provenance, and third-party dependency risk.

SEC AI risk materiality disclosures. The SEC's cybersecurity disclosure rules, in force since December 2023, require public companies to disclose material cybersecurity risks and incidents. As AI systems become material to business operations and as AI-specific incidents (model compromise, data exfiltration through AI agents, training data poisoning) increase in frequency, the question of whether an AI security incident is material becomes a board-level concern. An AI-BOM provides the foundation for accurate materiality assessment: you cannot determine whether a disclosed AI model vulnerability affects your organization if you do not know which model versions you are running.

How Repello Inventory auto-generates a living AI-BOM

The practical challenge with AI-BOM is maintenance. A manually curated inventory is outdated the moment a developer adds a new API integration, a model provider updates a version, or a new agent is deployed. The AI-BOM needs to be a living document that reflects the current state of the environment continuously, not a spreadsheet that is accurate on the day it was last edited.

Repello's AI Asset Inventory discovers AI assets across the enterprise environment automatically, capturing the components that manual audits consistently miss. This includes shadow AI deployments introduced outside formal procurement channels: AI tools, model API calls, and agentic integrations that employees adopt without security team awareness. This is the category where most enterprise AI-BOMs are systematically incomplete. In documented assessments, organizations using Repello's inventory discovery have found between three and ten times more AI integrations than their self-reported inventories contained.

The inventory feeds directly into the broader AI security posture management program. An AI-BOM without ongoing risk assessment is visibility without action. Repello connects asset discovery to continuous adversarial testing via ARTEMIS and runtime behavioral monitoring via ARGUS, closing the loop from inventory to risk assessment to production protection. The VANTAGE framework provides the structural methodology that connects all three.

Frequently asked questions

What is an AI Bill of Materials (AI-BOM)?

An AI Bill of Materials is a structured, continuously updated inventory of every component in an AI system: the trained models and versions in use, datasets used in training and fine-tuning, inference API connections, agent orchestration chains and their dependency relationships, MCP server connections, and external tool integrations. It applies the SBOM concept to the AI-specific supply chain, where the components, risks, and regulatory requirements are distinct from those addressed by traditional software dependency tooling.

How is an AI-BOM different from an SBOM?

An SBOM tracks software packages, libraries, and open-source dependencies with known CVE databases and established format standards (SPDX, CycloneDX). An AI-BOM tracks AI-specific components that software dependency tooling was not designed to capture: model versions and provenance, training data lineage, inference API connections, agentic dependency chains, MCP server connections, and external tool integrations. The primary risks also differ: SBOM surfaces known vulnerabilities in software dependencies; AI-BOM surfaces model provenance issues, data poisoning exposure, inference API risk, and agentic tool abuse surfaces.

Is an AI-BOM required by law?

The EU AI Act's Article 11 and Annex IV documentation requirements for high-risk AI systems are functionally an AI-BOM mandate: providers must maintain and update technical documentation covering training data characteristics, model specifications, and system design. NIST AI RMF requires supply chain traceability across the AI lifecycle. SEC cybersecurity disclosure rules require material AI risk assessment, which presupposes knowing which AI components are in use. The trajectory is toward AI-BOM requirements becoming broadly standard across regulated industries.

What happens if you do not have an AI-BOM?

Without an AI-BOM, you cannot determine whether a newly disclosed model-level vulnerability affects your deployment, cannot assess the blast radius of a compromised inference API, cannot produce the technical documentation required by the EU AI Act or NIST AI RMF, and cannot respond accurately to an AI security incident. The gap mirrors the SBOM gap exposed by Log4Shell: organizations without dependency inventories spent weeks determining their exposure while those with structured inventories responded in hours.

How often should an AI-BOM be updated?

An AI-BOM should update continuously, not on a release cycle. AI system components change outside formal software release processes: model providers update versions, fine-tuning datasets are refreshed, developers add tool integrations, and MCP server connections are established without security team involvement. A quarterly manual audit produces a document that is accurate for one day and increasingly inaccurate for the subsequent 89 days. Automated discovery tooling that continuously monitors the environment for new AI assets and integration changes is the only practical approach.
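The gap between audits can be made measurable by diffing BOM snapshots. The sketch below compares two snapshots keyed by component type and name; the key and value shapes are illustrative assumptions:

```python
# Sketch: diff two AI-BOM snapshots to surface drift between audits.
# Keys are (component_type, name) tuples; values are version identifiers.
# The snapshot shape is an illustrative assumption.
def bom_drift(old: dict, new: dict) -> dict:
    """Components added, removed, or version-changed between snapshots."""
    return {
        "added":   sorted(k for k in new if k not in old),
        "removed": sorted(k for k in old if k not in new),
        "changed": sorted(k for k in new if k in old and new[k] != old[k]),
    }

old = {("model", "chat"): "v1", ("tool", "crm"): "2024-01"}
new = {("model", "chat"): "v2", ("mcp", "files"): "0.3"}
print(bom_drift(old, new))
```

Every non-empty field in the drift report is a change that happened without a formal release, which is precisely why a quarterly manual audit cannot keep up.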

Conclusion

The SBOM analogy holds precisely because the problem is the same: supply chains you cannot see are risks you cannot manage. For software, EO 14028 and the Log4Shell incident together made SBOM standard practice. For AI, the combination of EU AI Act enforcement, NIST AI RMF requirements, and a rapidly expanding agentic attack surface is driving the same shift.

An AI-BOM is not a compliance artifact to be produced once and filed. It is the operational foundation for every AI security function that follows: risk assessment, adversarial testing, incident response, and governance reporting all require knowing what is in your AI environment before they can function. Organizations that build that foundation now will have a structural advantage when regulatory scrutiny intensifies and incident response timelines compress.

To see how Repello's AI Asset Inventory builds and maintains a living AI-BOM for enterprise environments, visit repello.ai/inventory or request a demo.


8 The Green, Ste A
Dover, DE 19901, United States of America


© Repello Inc. All rights reserved.
