EU AI Act: The 4 Risk Tiers Explained
The world’s first comprehensive AI regulation classifies every AI system into one of four tiers — from outright prohibition to minimal oversight. Understanding which tier your AI falls into determines your entire compliance obligation. This is the complete breakdown.
Why Risk-Based Classification Changes Everything
The EU AI Act does not ban AI. It classifies AI by the risk it poses to people’s safety, rights, and livelihoods — and then assigns compliance obligations proportionate to that risk. This risk-based approach is the Act’s defining architectural choice, and it determines everything: what documentation you need, what testing is required, whether human oversight is mandatory, and what penalties apply if you get it wrong.
Think of the EU AI Act as GDPR for artificial intelligence. It has extraterritorial scope — applying to any company whose AI systems affect EU users, regardless of where the company is headquartered. A startup in Singapore deploying an AI hiring tool used by EU companies is subject to the Act. A US bank using AI credit scoring for EU customers is subject to the Act.
The four-tier classification system is the lens through which every compliance question is answered. Your first obligation, before any documentation or testing, is to correctly classify every AI system your organisation develops or deploys. Misclassification is not a technicality — it can expose you to the Act’s highest penalty tiers and exclude you from the European market.
For AIGP (AI Governance Professional) candidates, this framework is foundational. The four tiers, their specific prohibited practices, high-risk domains, transparency requirements, and compliance obligations are the core of what the examination tests. This guide maps all of them.
Every AI System Falls Into One of These Categories
Prohibited AI
Tier 1 represents AI that is incompatible with EU values and fundamental rights at a structural level — systems where no amount of documentation, testing, or human oversight can make the application acceptable. The European Commission published detailed guidelines on prohibited practices in February 2025 alongside the enforcement date.
Social scoring by governments — evaluating or classifying citizens based on their behaviour, socioeconomic status, or personal characteristics — is prohibited because it creates a systemic threat to human dignity and equality that cannot be mitigated through safeguards. China’s social credit system is the reference case the Act was designed to prevent in Europe.
Real-time remote biometric identification in publicly accessible spaces is banned, with narrow exceptions only for law enforcement acting under prior judicial or independent administrative authorisation, limited to targeted searches for victims of specific crimes, the prevention of imminent terrorist threats, and the pursuit of suspects of serious criminal offences. Mass surveillance of citizens in public is prohibited regardless of stated purpose.
Manipulative AI — systems that use subliminal techniques or exploit psychological vulnerabilities, disabilities, or socioeconomic circumstances to influence behaviour in ways people cannot detect or resist — is prohibited because it undermines the autonomy that fundamental rights are designed to protect.
High-Risk AI
High-risk AI systems are not prohibited — they are regulated. They can be deployed in the EU, but only after meeting a comprehensive set of obligations: risk management documentation, data governance standards, technical documentation, human oversight mechanisms, conformity assessments, and registration in the EU database.
The Act defines high-risk AI across eight domains (Annex III) — areas where AI decisions intersect with fundamental rights and safety in ways that justify intensive regulation. The 2 August 2026 deadline is the critical compliance date for most software companies: this is when the obligations for Annex III high-risk systems become enforceable, and when the fines for non-compliance reach their most commercially significant levels.
High-risk AI also covers AI systems embedded in products already regulated under EU product safety law (Annex I) — medical devices, machinery, aviation systems, vehicles. If an AI is a safety component in one of these regulated products, it is automatically high-risk regardless of the specific function it performs; these embedded systems have a later application date of 2 August 2027.
The compliance burden for high-risk AI is substantial: dedicated resources for risk management, technical documentation, transparency reporting, ongoing post-market monitoring, and incident reporting obligations that parallel GDPR’s breach notification requirements. This is where most enterprise compliance investment is concentrated heading into August 2026.
Limited Risk / Transparency
Limited risk AI is not significantly dangerous — but it interacts with humans in ways where users have a right to know they are not communicating with another person. The Act’s transparency obligations ensure that AI does not deceive people about its fundamental nature.
The core obligation is simple: users must be informed, in a clear and timely manner, that they are interacting with an AI system. This applies to chatbots, virtual assistants, customer service bots, and any AI system that is designed to interact with humans in natural language where there is a realistic possibility of confusion about whether the respondent is human or artificial.
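As a concrete illustration of that duty, a chat product can surface the notice alongside its first reply. The following is a minimal sketch, assuming a simple message-list interface; `AI_DISCLOSURE` and `open_chat_session` are hypothetical names, and the exact wording of the notice is a product decision rather than something the Act prescribes.

```python
# Hypothetical sketch: disclose the AI interaction at the start of a chat session.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human agent."
)

def open_chat_session(first_bot_message: str) -> list[str]:
    # Deliver the notice before (or alongside) the first AI-generated reply,
    # so the user is informed in a clear and timely manner.
    return [AI_DISCLOSURE, first_bot_message]

print(open_chat_session("Hi! How can I help with your order today?"))
```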
Generative AI content — images, audio, video, and text generated or manipulated by AI — falls here when used in contexts affecting public interest (deepfakes, synthetic media). The Act requires labelling of synthetic content in machine-readable and detectable ways where technically feasible, with exceptions for law enforcement uses authorised by law and creative works that are clearly labelled as AI-generated.
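In its simplest form, a machine-readable label is a provenance record attached to the generated output. The sketch below is illustrative only; production systems typically embed provenance via standards such as C2PA content credentials or robust watermarking rather than a bare JSON record, and `label_synthetic` is a hypothetical helper.

```python
import hashlib
import json

def label_synthetic(media_bytes: bytes, model_name: str) -> dict:
    """Build a minimal machine-readable provenance record for AI output."""
    return {
        "ai_generated": True,         # detectable flag for downstream tools
        "generator": model_name,      # which system produced the content
        "sha256": hashlib.sha256(media_bytes).hexdigest(),  # binds label to content
    }

record = label_synthetic(b"<image bytes>", "example-image-model")
print(json.dumps(record, indent=2))
```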
Emotion recognition and biometric categorisation systems also fall under limited risk when used outside the contexts where they are prohibited (emotion recognition in workplaces and educational institutions, for example): they must disclose their operation to the individuals exposed to them.
Minimal / Low Risk
The overwhelming majority of AI applications — from product recommendation engines to spam filters, from inventory management tools to AI in video games — pose minimal or no meaningful risk to people’s rights or safety. The Act recognises this explicitly: imposing heavy compliance burdens on these systems would be disproportionate and would undermine Europe’s ability to compete in AI development.
Minimal risk AI has no specific mandatory legal obligations under the Act. Providers are free to develop and deploy these systems subject only to existing law (GDPR, consumer protection, etc.). The Act encourages providers of minimal risk AI to voluntarily adopt codes of conduct — structured commitments to responsible AI development practices — but these are not legally required.
The designation of Tier 4 is important for compliance strategy: it means organisations should spend their compliance resources on correctly classifying their higher-risk systems, not on applying heavy compliance processes to routine AI tools. A spam filter does not need a conformity assessment. An AI recommendation engine for a retail website does not need human oversight documentation. Getting this calibration right is itself a compliance efficiency.
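Taken together, the four tiers define a triage sequence: check the strictest tier first and stop at the first match. The sketch below shows one way an organisation might encode that sequence during an inventory exercise; the trigger lists are illustrative shorthand for the Act's legal tests, which include nuances (such as the Article 6(3) carve-outs) that no short function can capture.

```python
from enum import Enum

class Tier(Enum):
    PROHIBITED = 1  # Art. 5 practices: no compliance path
    HIGH_RISK = 2   # Annex I safety components + Annex III domains
    LIMITED = 3     # Art. 50 transparency obligations
    MINIMAL = 4     # no AI Act-specific obligations

# Illustrative shorthand only; the statutory definitions are more nuanced.
PROHIBITED_PRACTICES = {
    "social_scoring", "realtime_public_biometric_id",
    "subliminal_manipulation", "vulnerability_exploitation",
}
ANNEX_III_DOMAINS = {
    "biometrics", "critical_infrastructure", "education", "employment",
    "essential_services", "law_enforcement", "migration_border",
    "justice_democracy",
}

def classify(practice: str | None, domain: str | None,
             safety_component_of_regulated_product: bool,
             interacts_with_humans: bool,
             generates_synthetic_content: bool) -> Tier:
    """Check the strictest tier first; the first match wins."""
    if practice in PROHIBITED_PRACTICES:
        return Tier.PROHIBITED
    if safety_component_of_regulated_product or domain in ANNEX_III_DOMAINS:
        return Tier.HIGH_RISK
    if interacts_with_humans or generates_synthetic_content:
        return Tier.LIMITED
    return Tier.MINIMAL

# A hiring tool: Annex III employment domain -> Tier 2.
print(classify(None, "employment", False, True, False))  # Tier.HIGH_RISK
```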
“The EU AI Act doesn’t ban AI — it classifies and regulates it based on risk. The higher the potential harm to people, the stricter the rules. Think of it as GDPR for artificial intelligence, with extraterritorial scope that reaches any company whose AI touches EU users.”
CMARIX — EU AI Act Compliance Checklist 2026
All Four Tiers at a Glance
The definitive summary table for AIGP study. Know this table — classification triggers obligations.
| Tier | Risk Level | Status | Effective Date | Core Obligation | Max Penalty |
|---|---|---|---|---|---|
| Tier 1 · Prohibited | Unacceptable | BANNED outright | 2 Feb 2025 | Discontinue system — no compliance path exists | €35M or 7% |
| Tier 2 · High-Risk | Significant | STRICT obligations | 2 Aug 2026 | Conformity assessment, technical docs, human oversight, EU database registration | €15M or 3% |
| Tier 3 · Limited Risk | Limited | TRANSPARENCY required | 2 Aug 2026 | Disclose that users are interacting with AI; label synthetic content | €15M or 3% |
| Tier 4 · Minimal Risk | Minimal / None | VOLUNTARY only | N/A | No specific AI Act requirements; voluntary codes of conduct encouraged | None specific |
A separate band of €7.5M or 1.5% applies to supplying incorrect, incomplete, or misleading information to notified bodies or national authorities (Article 99(5)). Transparency violations under Article 50 fall within the €15M or 3% band (Article 99(4)).
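Because the caps are expressed as a fixed amount or a percentage of total worldwide annual turnover, whichever is higher, the percentage dominates for large undertakings. Below is a minimal sketch of that arithmetic, assuming the Article 99 "whichever is higher" rule for large companies (for SMEs, Article 99(6) applies the lower of the two); the function name and figures are illustrative.

```python
def max_fine(fixed_eur: float, turnover_pct: float, global_turnover_eur: float) -> float:
    # Art. 99 caps: the fixed amount or the percentage of total worldwide
    # annual turnover, whichever is higher (large undertakings).
    return max(fixed_eur, turnover_pct * global_turnover_eur)

# A company with EUR 2bn global turnover facing a prohibited-practice breach:
print(max_fine(35_000_000, 0.07, 2_000_000_000))  # 140000000.0 -> the 7% cap dominates
```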
The Fifth Category: GPAI Models
The four-tier system applies to AI applications and use cases. General-Purpose AI (GPAI) models — foundation models like GPT-4, Claude, Gemini — have their own additional regulatory layer that became applicable on 2 August 2025.
AIGP Exam: What You Must Know About the 4 Tiers
Classification Is the Starting Point — Not the Destination
The EU AI Act’s four-tier classification system is elegant in its logic: the higher the potential harm, the stricter the regulation. But classification is not a compliance achievement — it is a prerequisite. Once you know your system is Tier 2, the work begins: risk management documentation, data governance, technical files, conformity assessments, registration, and ongoing monitoring. Classification simply tells you which work is required.
With the August 2026 deadline for full high-risk enforcement approaching, organisations that have not yet completed AI inventories and risk classifications are entering the most time-sensitive phase of compliance preparation. As of March 2026, national market surveillance authorities are already actively enforcing prohibited practices and GPAI requirements. The window for a leisurely approach to Tier 2 compliance has closed.
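One way to operationalise that inventory work is a structured record per system, with enough context to defend the tier decision later. A minimal sketch follows; the fields and names are illustrative, not a prescribed schema.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row of an AI inventory: enough context to defend a tier decision."""
    name: str
    owner: str       # accountable business owner
    purpose: str     # intended purpose drives classification
    tier: int        # 1 = prohibited ... 4 = minimal
    rationale: str   # why this tier (cite Art. 5, Annex III, etc.)
    review_by: date  # re-classify when purpose or model changes
    obligations: list[str] = field(default_factory=list)

inventory = [
    AISystemRecord(
        name="cv-screening-model", owner="HR Ops",
        purpose="Rank job applicants", tier=2,
        rationale="Annex III employment domain",
        review_by=date(2026, 8, 2),
        obligations=["risk management", "technical file",
                     "human oversight", "EU database registration"],
    ),
]
print(inventory[0].name, "-> Tier", inventory[0].tier)
```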
The organisations that will benefit most from this regulation are those that treat it the way the most sophisticated companies treated GDPR in 2018: not as a compliance tax but as a forcing function for building AI governance infrastructure that would have been needed eventually anyway. Clear AI inventories. Documented risk decisions. Human oversight mechanisms. Post-market monitoring. These are not just compliance artefacts — they are the foundations of trustworthy AI deployment that builds rather than erodes stakeholder confidence.
For AIGP candidates: the four-tier framework is the skeleton of the entire regulation. Every obligation, every article, every penalty provision traces back to which tier the AI system in question falls into. Master the tier classification logic, the prohibited practices, the Annex III domains, and the enforcement timeline — and the rest of the regulation becomes organised and navigable around that structure.