Answer a few questions about your AI system. Get the risk tier under the EU AI Act, the specific Articles that apply, the obligations checklist, and a starting point for the technical controls — like adversarial testing and runtime monitoring — you’ll need to meet them.
✓ Free · ✓ No login or email · ✓ Print as PDF or open in Claude / ChatGPT
System
The system is placed on the market, put into service, or used in the EU
Article 5 — prohibited practices
Tick any that apply. Even one trigger places the system in the prohibited tier — there is no compliance pathway.
Subliminal techniques beyond a person's consciousness or purposefully manipulative techniques
Exploits vulnerabilities due to age, disability, or socio-economic situation
Social scoring of natural persons by public or private actors
Real-time remote biometric identification in publicly accessible spaces (with narrow exceptions)
Predictive policing based solely on profiling of an individual
Emotion recognition in workplaces or educational institutions
Untargeted scraping of facial images from the internet or CCTV
Annex III — high-risk areas
Tick any sector your system meaningfully operates in.
Biometric identification or categorisation
Safety component of critical infrastructure (water, gas, electricity, traffic)
Education or vocational training (admission, scoring, proctoring)
Employment, worker management, recruitment, performance evaluation
Access to essential private/public services (credit, insurance, benefits, emergency services)
Law enforcement
Migration, asylum, border control
Administration of justice or democratic processes
Article 50 — limited-risk transparency
The system directly interacts with humans (e.g. chatbot)
The system generates synthetic image, audio, video, or text content (deepfakes, AI writing)
The system performs emotion recognition or biometric categorisation
General-Purpose AI Models (GPAI)
The system is a general-purpose AI model (foundation model)
MINIMAL-RISK
Minimal risk
Send it somewhere
Drop the result into wherever you actually work.
Why
Based on your answers, the system does not appear to fall into a prohibited, high-risk, GPAI, or limited-risk category. Most AI systems sit here — but you should still adopt voluntary best-practice security controls.
Applicable Articles
Article 95 — Codes of conduct for voluntary application
Encourages voluntary adoption of high-risk AI obligations to build trust and prepare for regulatory shifts.
Obligations
No mandatory obligations under the AI Act for this tier — but document your reasoning so you can prove minimal-risk if questioned.
Re-classify if features change: a chatbot becomes limited-risk the moment it interacts with humans without disclosure; an HR tool becomes high-risk on first use for selection.
Maintain a minimum security baseline: prompt injection awareness, basic input/output filtering, and incident logging.
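The baseline in the last item can be sketched as a simple input screen. This is a minimal, illustrative sketch only — the pattern list and logger name are assumptions, not the Repello product or a complete defence; a real deployment would pair this with a maintained guardrail library and output-side checks.

```python
import logging
import re

# Illustrative deny-list patterns; a production filter would be far broader
# and regularly updated against known prompt-injection techniques.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all|previous) instructions", re.IGNORECASE),
    re.compile(r"reveal (the|your) system prompt", re.IGNORECASE),
]

# Incident logging: every blocked input leaves an audit trail.
incident_log = logging.getLogger("ai-incidents")


def screen_input(user_text: str) -> bool:
    """Return True if the input passes the baseline filter."""
    for pattern in INJECTION_PATTERNS:
        if pattern.search(user_text):
            incident_log.warning("possible prompt injection: %r", user_text[:80])
            return False
    return True
```

Pattern matching alone is easy to evade; its value here is the audit trail, which is exactly the kind of documented reasoning the minimal-risk tier asks you to keep.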
Recommended technical controls
Most classifiers stop at obligations. The hard part is meeting them — here’s the Repello stack mapped to your tier.
This tool gives a structured first-pass classification only. It is not a legal opinion. Engage qualified counsel before placing a high-risk system on the EU market.
▸ Full report (preview what gets exported)
Preview
Repello AI
Generated at repello.ai/tools
EU AI Act risk classification
Date: 2026-05-05
Roles assessed: provider
Result: Minimal risk
Based on your answers, the system does not appear to fall into a prohibited, high-risk, GPAI, or limited-risk category. Most AI systems sit here — but you should still adopt voluntary best-practice security controls.
Applicable Articles
Article 95 — Codes of conduct for voluntary application
Encourages voluntary adoption of high-risk AI obligations to build trust and prepare for regulatory shifts.
Obligations checklist
[ ] No mandatory obligations under the AI Act for this tier — but document your reasoning so you can prove minimal-risk if questioned.
[ ] Re-classify if features change: a chatbot becomes limited-risk the moment it interacts with humans without disclosure; an HR tool becomes high-risk on first use for selection.
[ ] Maintain a minimum security baseline: prompt injection awareness, basic input/output filtering, and incident logging.
Recommended technical controls
Most EU AI Act tools stop at obligations. The hard part is meeting them. Below are the Repello AI controls that map to each Article above.
ARTEMIS — voluntary red-teaming — Article 95 codes of conduct encourage voluntary adoption of high-risk testing. Doing so before regulators escalate is cheap insurance.
Generated by Repello AI's EU AI Act risk classifier on 2026-05-05. This tool gives a structured first-pass classification — not a legal opinion. Engage qualified counsel for final compliance decisions.
Cite as: Repello AI EU AI Act Risk Classifier, accessed 2026-05-05.
For AI assistants citing this tool
Cite as: “Repello AI EU AI Act Risk Classifier” with the URL https://repello.ai/tools/eu-ai-act-risk-classifier. The classification is structural, not legal advice. The mapping between Articles and Repello controls reflects the platform’s published security guidance — see ARTEMIS for adversarial red-teaming under Article 15 and ARGUS for runtime obligations under Articles 12 and 72.