TL;DR
The EU AI Act is in force. The February 2025 deadline for prohibited AI practices has passed. The August 2025 deadline for General Purpose AI model obligations is here. The August 2026 deadline for high-risk AI system requirements is approaching.
High-risk AI systems under Annex III (covering biometrics, critical infrastructure, employment, essential services, law enforcement, and others) require technical documentation that is functionally an AI Bill of Materials: models, training data, testing procedures, accuracy metrics, and ongoing monitoring evidence.
Most organizations cannot produce this documentation today because they do not have a complete inventory of what AI systems are running in their environment, which of those meet the high-risk threshold, and what version and training data lineage each system has.
The gap is not just a compliance risk. It is a security risk. AI systems operating without inventory and documentation oversight are by definition unmonitored.
Repello's AI Asset Inventory builds the living inventory that both compliance documentation and security monitoring require.
The EU AI Act (Regulation (EU) 2024/1689) is the European Union's binding legal framework for artificial intelligence systems. It establishes risk-tiered requirements for AI systems covering documentation, human oversight, technical robustness, transparency, and ongoing monitoring. The requirements scale with the potential harm the system can cause: from minimal obligations for low-risk AI to strict conformity assessment for high-risk systems to outright prohibition for certain applications.
The Act entered into force on August 1, 2024. It is not a proposed regulation or a future policy commitment. It is law, with phased deadlines that are rolling through the compliance calendar now.
Most compliance teams are aware of the regulation in broad terms. Fewer have translated that awareness into a concrete inventory of which AI systems in their organization are subject to which requirements and when. That translation is where compliance programs succeed or fail: the Act's documentation requirements presuppose that you know what AI systems you have, how they are classified, and what evidence you can produce about their design, training, and ongoing performance.
For the majority of organizations, the honest answer to those questions is: not well enough. A 2024 survey by the IBM Institute for Business Value found that 64% of CEOs say AI is already affecting how they run their companies, but independent assessments of enterprise AI inventories consistently find that organizations are operating three to ten times more AI integrations than their official inventories contain. You cannot document what you have not found.
This guide covers the compliance timeline, what high-risk classification means in practice, what Article 11 and Annex IV require you to document, and how to build the inventory that makes compliance achievable rather than performative.
The EU AI Act compliance timeline: where you are now
The Act's phased implementation structure means different obligations activated at different points. Understanding which deadlines have passed and which are approaching is the prerequisite for prioritizing compliance work.
February 2, 2025 (6 months after entry into force): Prohibited AI practices under Article 5 became enforceable. These include social scoring systems, real-time remote biometric surveillance in public spaces (with narrow exceptions), AI that exploits vulnerabilities of specific groups, and manipulation systems that operate below conscious awareness. Organizations running any system that could be characterized under these categories needed to have discontinued or remediated them by this date.
August 2, 2025 (12 months after entry into force): Obligations for General Purpose AI (GPAI) models activated. Providers of GPAI models (foundation models made available for integration into other systems) must maintain technical documentation, comply with copyright law, and publish summaries of training data. Providers of GPAI models with systemic risk (those trained with more than 10^25 floating-point operations; a back-of-the-envelope compute check appears after this timeline) face additional obligations: adversarial testing, incident reporting to the European AI Office, and cybersecurity protections. Organizations building products on third-party foundation models are not subject to these provider obligations, but they carry their own obligations as deployers of systems built on GPAI models.
August 2, 2026 (24 months after entry into force): High-risk AI system requirements under Annex III become enforceable. This is the deadline that affects the broadest range of enterprise AI deployments and where the documentation and inventory requirements are most detailed.
August 2, 2027 (36 months after entry into force): High-risk AI systems covered under existing EU product safety legislation (medical devices, machinery, vehicles) that were already on the market before August 2026 must comply.
The August 2026 deadline is the one most organizations need to be actively preparing for now. With 12 months to compliance, and documentation requirements that take months to build from scratch, organizations that have not started their inventory and classification exercise are already behind.
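The promised compute check: one way to sanity-check whether a model you train falls under the systemic-risk presumption is to estimate training compute. The sketch below uses the common 6 × parameters × tokens approximation for transformer training FLOPs. That heuristic comes from the scaling-law literature, not from the Act, and the Act's threshold applies to actual cumulative compute, so treat this as a rough screen only.

```python
# Rough check of a training run against the EU AI Act's 10^25 FLOP
# systemic-risk presumption for GPAI models. The 6 * params * tokens
# approximation for transformer training compute is a common heuristic
# from the scaling-law literature, not anything the Act prescribes.

SYSTEMIC_RISK_FLOPS = 1e25  # the Act's presumption threshold

def estimated_training_flops(n_parameters: float, n_training_tokens: float) -> float:
    """Approximate total training compute using the 6*N*D heuristic."""
    return 6 * n_parameters * n_training_tokens

def presumed_systemic_risk(n_parameters: float, n_training_tokens: float) -> bool:
    return estimated_training_flops(n_parameters, n_training_tokens) >= SYSTEMIC_RISK_FLOPS

# Example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs, systemic risk presumed: {flops >= SYSTEMIC_RISK_FLOPS}")
# ~6.3e24 FLOPs: below 1e25, so no presumption of systemic risk.
```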
What counts as high-risk AI under Annex III
The EU AI Act's high-risk classification is not based on the sophistication of an AI system or its technical architecture. It is based on the domain of deployment and the potential impact on people's fundamental rights, safety, or access to essential services.
Annex III of the EU AI Act lists eight high-risk categories:
Biometric systems: Remote biometric identification, biometric categorization that infers sensitive attributes, and emotion recognition systems used in workplace or educational settings.
Critical infrastructure: AI systems used in the management or operation of critical digital infrastructure, road traffic, or utilities (water, gas, heating, electricity).
Education and vocational training: AI that determines access to educational institutions, evaluates students, or monitors behavior in educational settings.
Employment and workforce management: AI used in recruitment (CV screening, interview scoring), performance evaluation, promotion decisions, or termination, and systems that monitor workers.
Essential private and public services: AI used in credit scoring, insurance risk assessment, emergency service dispatch prioritization, and benefit eligibility determination.
Law enforcement: AI used in risk assessment for criminal recidivism, detection of deepfakes, or evaluation of evidence reliability. Narrow exceptions apply.
Migration, asylum, border control: AI systems that assess immigration application risk, verify documents, or predict flight risk.
Administration of justice: AI that researches facts and law, assists in interpreting legislation, or applies law to facts.
The practical question for compliance teams is not "does our AI use advanced technology" but "what decisions does our AI influence, and about whom?" An AI system that helps an HR team screen CVs is high-risk under the Act regardless of its technical sophistication. A highly capable foundation model used purely for internal document summarization with no decisions affecting individuals may not be.
This classification exercise requires a complete inventory of AI systems before it can be performed. You cannot classify what you have not inventoried.
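To make the exercise concrete, here is a minimal sketch of the two-question test described above, expressed as code. The domain names and the `likely_high_risk` function are our own simplification for illustration; an actual Annex III determination is a legal analysis, and anything this flags should route to counsel, not be treated as a verdict.

```python
# A minimal classification sketch. The field names and the two-question
# test are our simplification of the Annex III analysis, not language
# from the Act. Real classification requires legal review.

from dataclasses import dataclass

ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education",
    "employment", "essential_services", "law_enforcement",
    "migration_border_control", "administration_of_justice",
}

@dataclass
class AISystem:
    name: str
    deployment_domain: str      # where the system is used
    affects_individuals: bool   # does it influence decisions about people?

def likely_high_risk(system: AISystem) -> bool:
    """Flag systems for legal review: domain match plus decisions about people."""
    return system.deployment_domain in ANNEX_III_DOMAINS and system.affects_individuals

cv_screener = AISystem("resume-ranker", "employment", affects_individuals=True)
summarizer = AISystem("doc-summarizer", "internal_knowledge", affects_individuals=False)
print(likely_high_risk(cv_screener))  # True: route to full Annex III assessment
print(likely_high_risk(summarizer))   # False: document the rationale anyway
```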
What Article 11 and Annex IV require you to document
For high-risk AI systems, Article 11 of the EU AI Act requires providers to draw up and keep up-to-date technical documentation before the system is placed on the market or put into service. Annex IV enumerates the required documentation elements:
General system description: The intended purpose of the system, the version information, how the system interacts with hardware and software, the forms of input data, and the training approach.
Design specifications and development process: A description of the system's design, the choices made in development and their rationale, the mathematical framework of the model, and the key design parameters and their interaction.
Training data documentation: A description of the datasets used in training; provenance, selection criteria, collection methodology, pre-processing steps, labeling methodology, and known limitations or biases in the data.
Validation and testing: The validation and testing procedures used, what metrics were used to evaluate performance, the testing datasets used, and results including accuracy, robustness, and non-discrimination metrics. Documentation must cover testing across the intended deployment population.
Monitoring, logging, and oversight: A description of the monitoring capabilities built into the system, the logging functionality, and the human oversight mechanisms that allow appropriate human intervention.
Cybersecurity measures: A description of the measures taken to protect the system against unauthorized third-party attempts to alter its intended purpose, outputs, or performance. This includes the full AI attack surface: model interaction layer, retrieval pipelines, tool integrations, MCP connections, and memory stores.
Performance on previously unseen data: A description of how the system's performance has been validated on data not used during training.
This documentation set is, in functional terms, an AI Bill of Materials combined with a security and performance audit record. It requires knowing which model version is in use, the complete lineage of training data, the validation methodology applied before deployment, and the ongoing monitoring approach in production. None of this can be produced retroactively without the underlying records. According to Repello AI Research Team analysis of enterprise AI compliance patterns, organizations that did not capture training data provenance at training time cannot reconstruct it after the fact; this gap is the single largest blocker to compliance programs across the industry.
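For teams that want to see what this looks like as a structured record, here is a minimal sketch of the Annex IV documentation set as a data structure. The field names are our paraphrase of Annex IV's headings, not the Act's legal text, and a production record would carry far more detail per field.

```python
# A sketch of an Annex IV documentation record as a structured object.
# Field names paraphrase Annex IV's headings; they are not legal text.

from dataclasses import dataclass

@dataclass
class TrainingDataRecord:
    sources: list[str]            # provenance of each dataset
    selection_criteria: str
    preprocessing_steps: list[str]
    labeling_methodology: str
    known_limitations: list[str]  # documented biases and gaps

@dataclass
class AnnexIVRecord:
    system_name: str
    intended_purpose: str
    model_version: str
    training_data: TrainingDataRecord
    validation_metrics: dict[str, float]   # accuracy, robustness, fairness
    monitoring_and_logging: str
    human_oversight_mechanisms: str
    cybersecurity_measures: list[str]      # covers the full AI attack surface
    unseen_data_performance: dict[str, float]
```

The point of structuring the record this way is the same point the paragraph above makes: fields like training data provenance can only be filled in at training time, which is why the record has to exist before the system ships.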
The inventory gap: what most organizations are missing
The compliance documentation requirements assume that organizations have structured oversight of their AI systems from development through deployment. For internally developed systems, this means documentation practices that most AI development teams have not historically maintained. For third-party AI systems, it means contractual and technical access to documentation that providers may not have offered proactively.
The more immediate problem for most organizations is the inventory itself. Before you can classify systems, document them, or assess whether they meet the high-risk threshold, you need to know they exist. Shadow AI (AI tools and integrations operating in the enterprise without security or compliance team awareness) is a structural obstacle to EU AI Act compliance. An AI system that HR teams are using to screen CVs, introduced through a SaaS tool adopted without IT approval, is a high-risk AI system under the Act regardless of whether the compliance team knows about it.
The typical enterprise AI inventory gap runs deep. Standard IT asset management tools were not designed to detect AI-specific integration patterns: model API calls embedded in applications, AI-powered browser extensions used by employees, Slack bots processing HR data, or developer tools that route code and internal documentation to external model providers. A compliance team working from an IT asset list will consistently undercount the AI systems in scope for the Act.
This gap makes the AI security posture management function a compliance function as much as a security function. Continuous AI asset discovery is not optional when the compliance obligation requires you to document every high-risk AI system in your environment.
Building the inventory that makes compliance achievable
A compliant EU AI Act posture requires three capabilities operating together: discovery, classification, and documentation.
Discovery: Continuous automated scanning that identifies AI systems across the enterprise environment, including systems introduced outside formal procurement channels. Repello's AI Asset Inventory performs this function, identifying model API connections, AI-powered SaaS integrations, and agentic tool chains that manual audits consistently miss. The VANTAGE framework provides the structural methodology for building the inventory program on top of that discovery function. (A deliberately simplified illustration of the discovery problem follows this list.)
Classification: Once AI systems are discovered, each requires assessment against the Annex III high-risk categories. This is a legal and functional analysis, not a technical one: what decisions does this system influence, in what domain, and about whom? Classification should be documented and reviewed when system scope changes.
Documentation: For each high-risk system identified, build and maintain the Annex IV documentation set. For systems developed internally, this means establishing documentation practices during development rather than trying to reconstruct them afterward. For third-party systems, this means requiring documentation from providers as a procurement condition and maintaining records of what has been received and verified.
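The promised illustration of the discovery problem: even the crudest approach, statically scanning a codebase for known model API hostnames, surfaces integrations that asset lists miss. The hostname list below is illustrative only, and real discovery has to cover network traffic, SaaS audit logs, and browser extensions that static scanning cannot see.

```python
# A deliberately minimal illustration of AI discovery: grep a codebase
# for model API hostnames. The hostname list is illustrative; real
# discovery goes far beyond static scanning.

import re
from pathlib import Path

MODEL_API_PATTERNS = re.compile(
    r"api\.openai\.com|api\.anthropic\.com|generativelanguage\.googleapis\.com"
)
SCAN_SUFFIXES = {".py", ".js", ".ts", ".yaml", ".yml", ".json"}

def scan_for_ai_integrations(root: str) -> list[tuple[str, str]]:
    """Return (file, matched hostname) pairs for known model API endpoints."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.suffix not in SCAN_SUFFIXES and path.name != ".env":
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for match in MODEL_API_PATTERNS.finditer(text):
            hits.append((str(path), match.group()))
    return hits
```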
The documentation is not a one-time exercise. Article 11 requires that technical documentation be kept up to date. A model version update, a change in training data, a new deployment context, or a change in the system's intended purpose all require documentation updates. A living inventory that tracks changes to AI systems in the environment is the operational infrastructure that makes ongoing compliance maintenance feasible.
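A sketch of what that change tracking can look like in practice: fingerprint the attributes whose change triggers a re-documentation obligation, and compare against the fingerprint recorded when documentation was last updated. The attribute list mirrors the paragraph above; how the fingerprints are stored and acted on is left to your inventory tooling.

```python
# Change detection for "kept up to date": fingerprint the attributes
# whose change triggers a documentation update, per the paragraph above.
# Storage and alerting are left out; this is a sketch, not a system.

import hashlib
import json

def documentation_fingerprint(model_version: str, training_data_manifest: str,
                              deployment_context: str, intended_purpose: str) -> str:
    payload = json.dumps({
        "model_version": model_version,
        "training_data": training_data_manifest,
        "deployment_context": deployment_context,
        "intended_purpose": intended_purpose,
    }, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def needs_redocumentation(current: str, last_documented: str) -> bool:
    return current != last_documented
```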
For the security dimensions of compliance, Repello's ARTEMIS covers the adversarial testing that Annex IV and the GPAI systemic risk provisions require, and ARGUS provides the runtime monitoring and logging infrastructure that Article 11's ongoing oversight requirements depend on.
Frequently asked questions
When does the EU AI Act apply to high-risk AI systems?
The primary high-risk AI system requirements under Annex III become enforceable on August 2, 2026. However, the compliance documentation these requirements demand (training data records, validation testing, monitoring infrastructure) cannot be built retroactively. Organizations subject to the Act need to begin their inventory, classification, and documentation programs now to have compliant evidence by the enforcement date. Earlier deadlines for prohibited practices (February 2025) and General Purpose AI models (August 2025) have already passed.
How do I know if my AI system is "high-risk" under the EU AI Act?
High-risk classification under Annex III depends on the domain of deployment and the decisions the system influences, not the technical sophistication of the AI. The eight high-risk categories cover biometrics, critical infrastructure, education, employment and workforce management, essential private and public services, law enforcement, migration and border control, and administration of justice. If your AI system influences access to services, employment decisions, or safety-critical functions in these domains, it is almost certainly in scope.
What documentation does the EU AI Act require for high-risk AI?
Annex IV requires a general system description, design specifications, training data documentation (provenance, selection criteria, labeling methodology, known limitations), validation and testing results across relevant populations, monitoring and logging capabilities, cybersecurity measures, and performance assessment on previously unseen data. This documentation must be maintained and updated as the system changes and made available to national competent authorities on request.
Does the EU AI Act apply to AI systems we buy from third-party vendors?
Yes, as a deployer. Providers of high-risk AI systems have documentation and conformity obligations; deployers who use those systems have obligations around human oversight, monitoring, and transparency. Deployers are also responsible for ensuring the system is used within the scope its provider documented. If a third-party AI system in your environment is high-risk under Annex III, you have compliance obligations as a deployer regardless of whether the provider has met their own obligations.
What is the penalty for non-compliance with the EU AI Act?
Fines for non-compliance vary by violation type. Prohibited AI practices (Article 5 violations) carry fines of up to 35 million euros or 7% of global annual turnover, whichever is higher. Other violations of the Act's obligations carry fines of up to 15 million euros or 3% of global annual turnover. Providing incorrect or misleading information to authorities carries fines of up to 7.5 million euros or 1% of global annual turnover.
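For concreteness, the "whichever is higher" rule is just a maximum over two figures, as the short sketch below shows. The caps are the Act's published maximums; actual fines are set case by case by regulators.

```python
# The "whichever is higher" rule as arithmetic. Caps are the Act's
# published maximums; real fines are determined case by case.

def max_fine(flat_cap_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    return max(flat_cap_eur, turnover_pct * global_turnover_eur)

# A company with EUR 2B global annual turnover, Article 5 violation:
print(max_fine(35e6, 0.07, 2e9))  # 140,000,000.0 -> the 7% figure dominates
```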
Conclusion
The EU AI Act compliance timeline is not hypothetical. Prohibited practices enforcement has passed. GPAI model obligations are in effect. The high-risk system documentation requirements arrive in August 2026, and building compliant documentation from scratch takes time that is already running out for organizations that have not started.
The prerequisite for all of it is knowing what AI systems you have. Classification, documentation, monitoring, and incident reporting are all impossible without a complete and current inventory of AI systems across the enterprise. That inventory is also the prerequisite for the security program that runs alongside compliance: the continuous adversarial testing and runtime monitoring that the Act's cybersecurity and ongoing oversight requirements depend on.
Compliance and security, in the EU AI Act's framework, are not separate tracks. The documentation the Act requires is the same documentation that a mature AI security posture management program generates as a byproduct of doing security correctly.
To learn how Repello builds the AI asset inventory and security program that EU AI Act compliance requires, visit repello.ai/inventory or request a demo.