EU AI Act: The 4 Risk Tiers — A Complete Visual Guide
Visual Guide · AIGP Study · Regulation (EU) 2024/1689 · In Force August 2024


The world’s first comprehensive AI regulation classifies every AI system into one of four tiers — from outright prohibition to minimal oversight. Understanding which tier your AI falls into determines your entire compliance obligation. This is the complete breakdown.

April 2026 · AI Regulation · AIGP Exam Reference · 18 min read
Tier 1 · Unacceptable Risk · BANNED
Tier 2 · High Risk · STRICT RULES
Tier 3 · Limited Risk · TRANSPARENCY
Tier 4 · Minimal Risk · VOLUNTARY

Penalty range: €7.5M – €35M, or up to 7% of global turnover.
Aug 1, 2024: EU AI Act enters into force. The clock starts for all compliance obligations.
Feb 2, 2025: Prohibitions (Tier 1) and AI literacy obligations take effect. ✅ In force now.
Aug 2, 2025: Governance rules and GPAI model obligations become applicable. ✅ In force now.
Aug 2, 2026: Full high-risk AI (Tier 2) enforcement begins. ⚠️ Approaching fast.
Aug 2, 2027: Extended deadline for high-risk AI systems embedded in regulated products.
The Framework

Why Risk-Based Classification Changes Everything

The EU AI Act does not ban AI. It classifies AI by the risk it poses to people’s safety, rights, and livelihoods — and then assigns compliance obligations proportionate to that risk. This risk-based approach is the Act’s defining architectural choice, and it determines everything: what documentation you need, what testing is required, whether human oversight is mandatory, and what penalties apply if you get it wrong.

Think of the EU AI Act as GDPR for artificial intelligence. It has extraterritorial scope — applying to any company whose AI systems affect EU users, regardless of where the company is headquartered. A startup in Singapore deploying an AI hiring tool used by EU companies is subject to the Act. A US bank using AI credit scoring for EU customers is subject to the Act.

The four-tier classification system is the lens through which every compliance question is answered. Your first obligation, before any documentation or testing, is to correctly classify every AI system your organisation develops or deploys. Misclassification is not a technicality — it can expose you to the Act’s highest penalty tiers and exclude you from the European market.
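The classification step described above follows a strict order: test the strictest tier first, because a system that triggers a prohibition is banned even if it would otherwise look like a harmless chatbot. A minimal triage sketch, using hypothetical flag names of my own and ignoring the Article 6(3) derogations, might look like this:

```python
from enum import Enum

class Tier(Enum):
    PROHIBITED = 1   # Article 5: banned outright
    HIGH_RISK = 2    # Annex I safety components / Annex III domains
    LIMITED = 3      # Article 50 transparency obligations
    MINIMAL = 4      # no specific AI Act obligations

def classify(prohibited_practice: bool,
             annex_i_safety_component: bool,
             annex_iii_domain: bool,
             interacts_or_generates: bool) -> Tier:
    # Check the strictest tier first: a chatbot that also performs
    # social scoring is prohibited, not merely "limited risk".
    if prohibited_practice:
        return Tier.PROHIBITED
    if annex_i_safety_component or annex_iii_domain:
        return Tier.HIGH_RISK
    if interacts_or_generates:
        return Tier.LIMITED
    return Tier.MINIMAL

# An automated CV screener: Annex III employment domain
print(classify(False, False, True, True).name)  # HIGH_RISK
```

This is triage only; the real exercise requires legal analysis of Article 5, Annexes I and III, and the exemptions in Article 6.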

For AIGP (AI Governance Professional) candidates, this framework is foundational. The four tiers, their specific prohibited practices, high-risk domains, transparency requirements, and compliance obligations are the core of what the examination tests. This guide maps all of them.

Penalty Structure
Tier 1: Prohibited AI
€35M or 7% turnover
Whichever is higher. The Act’s maximum penalty. No compliance path — system must stop.
Tier 2: High-Risk Violations
€15M or 3% turnover
For violations of high-risk AI system obligations — documentation, testing, human oversight.
Incorrect Information
€7.5M or 1.5% turnover
For supplying incorrect, incomplete, or misleading information to national authorities.
SMEs and startups: each fine is capped at whichever is lower of the fixed amount or the turnover percentage. Authorities consider severity, duration, intentionality, and mitigation actions when setting fines.
The Four Tiers

Every AI System Falls Into One of These Categories

Tier 1
Unacceptable Risk · Article 5

Prohibited AI

These AI systems are banned outright in the EU. There is no compliance path — if your system falls here, it must stop.
BANNED · Effective since 2 Feb 2025

Tier 1 represents AI that is incompatible with EU values and fundamental rights at a structural level — systems where no amount of documentation, testing, or human oversight can make the application acceptable. The European Commission published detailed guidelines on prohibited practices in February 2025 alongside the enforcement date.

Social scoring by governments — evaluating or classifying citizens based on their behaviour, socioeconomic status, or personal characteristics — is prohibited because it creates a systemic threat to human dignity and equality that cannot be mitigated through safeguards. China’s social credit system is the reference case the Act was designed to prevent in Europe.

Real-time biometric identification in public spaces is banned with narrow exceptions only for law enforcement with prior judicial approval, limited to targeted search of crime victims, terrorism prevention, and pursuit of serious criminals. Mass surveillance of citizens in public is prohibited regardless of stated purpose.

Manipulative AI — systems that use subliminal techniques or exploit psychological vulnerabilities, disabilities, or socioeconomic circumstances to influence behaviour in ways people cannot detect or resist — is prohibited because it undermines the autonomy that fundamental rights are designed to protect.

Prohibited Examples
🚫
Social Scoring Systems
Government AI that scores, ranks, or classifies citizens based on behaviour, socioeconomic status, or personal characteristics, with detrimental consequences to their rights or access to services.
🚫
Real-Time Biometric Surveillance
Facial recognition or other biometric identification systems operating in real time in publicly accessible spaces for law enforcement (with very narrow, court-approved exceptions only).
🚫
Manipulative AI
Systems using subliminal techniques beyond conscious perception, or exploiting vulnerabilities of specific groups (age, disability, socioeconomic situation) to impair autonomous decision-making.
🚫
Emotion Recognition at Work & School
AI that infers emotional states of individuals in workplace or educational settings (safety-purpose exceptions apply, e.g. detecting driver fatigue).
🚫
Predictive Policing on Personal Traits
AI that assesses the likelihood of a person committing a crime based on personal characteristics, not objective and verifiable facts directly linked to criminal activity.
🚫
Untargeted Facial Image Scraping
Creating or expanding facial recognition databases through mass, untargeted scraping of images from the internet or CCTV footage, regardless of purpose.
Obligation There is no compliance path for Tier 1 systems. If your AI application falls in this category, it must be discontinued immediately. Violations carry the Act’s maximum penalty: €35 million or 7% of global annual turnover. As of April 2026, national market surveillance authorities are actively enforcing these prohibitions.
Tier 2
Significant Risk · Annexes I & III

High-Risk AI

Strict obligations must be met before deployment. Most enterprise AI teams need to understand this tier deeply.
STRICT RULES · Full enforcement from 2 Aug 2026

High-risk AI systems are not prohibited — they are regulated. They can be deployed in the EU, but only after meeting a comprehensive set of obligations: risk management documentation, data governance standards, technical documentation, human oversight mechanisms, conformity assessments, and registration in the EU database.

The Act defines high-risk AI across eight domains (Annex III) — areas where AI decisions intersect with fundamental rights and safety in ways that justify intensive regulation. The August 2026 deadline is the critical compliance date for most software companies: this is when enforcement intensifies for high-risk systems, and when the fines for non-compliance reach their most commercially significant levels.

High-risk AI also covers AI systems embedded in products already regulated under EU product safety law (Annex I) — medical devices, machinery, aviation systems, vehicles. If an AI is a safety component in one of these regulated products, it is automatically high-risk regardless of the specific function it performs.

The compliance burden for high-risk AI is substantial: dedicated resources for risk management, technical documentation, transparency reporting, ongoing post-market monitoring, and incident reporting obligations that parallel GDPR’s breach notification requirements. This is where most enterprise compliance investment is concentrated heading into August 2026.

High-Risk Domains (Annex III)
⚠️
Critical Infrastructure
AI as safety components in transport, utilities, water, gas, electricity networks — where failure puts citizens’ lives at risk.
⚠️
Employment & HR
Automated CV screening, candidate ranking, performance assessment, promotion decisions, and task allocation in work settings.
⚠️
Education & Vocational Training
AI determining access to education, evaluating students, detecting cheating — decisions that shape someone’s professional future.
⚠️
Essential Services & Benefits
AI credit scoring, insurance risk assessment, loan eligibility, and access to public services, housing, and utilities.
⚠️
Law Enforcement
AI assessing reliability of evidence, crime analytics tools, polygraph-like emotion detection systems used by police.
⚠️
Migration & Border Control
Automated visa and asylum application processing, border security risk assessment, lie detection at borders.
⚠️
Justice & Democratic Processes
AI assisting in interpreting law, researching facts for court cases, or influencing elections and democratic participation.
⚠️
Biometrics
Remote biometric identification, biometric categorisation based on sensitive attributes, and emotion recognition systems used outside the prohibited workplace and education contexts.
⚠️
Safety-Critical Products (Annex I)
AI embedded in medical devices, machinery, aviation systems, vehicles — automatically high-risk as safety components in regulated products. This is a separate classification route from the Annex III domains.
Obligations Risk management system (Article 9) · Technical documentation (Article 11) · Data governance and quality (Article 10) · Transparency and instructions for use (Article 13) · Human oversight measures (Article 14) · Accuracy, robustness and cybersecurity (Article 15) · Conformity assessment before market entry · EU database registration · Post-market monitoring and incident reporting. Fines up to €15M or 3% of global turnover for non-compliance.
Tier 3
Limited Risk · Article 50 · Transparency Obligations

Limited Risk / Transparency

Users must be informed they are interacting with AI. Disclosure is the core obligation — not prohibition or intensive compliance.
TRANSPARENCY · Applicable from 2 Aug 2026

Limited risk AI is not significantly dangerous — but it interacts with humans in ways where users have a right to know they are not communicating with another person. The Act’s transparency obligations ensure that AI does not deceive people about its fundamental nature.

The core obligation is simple: users must be informed, in a clear and timely manner, that they are interacting with an AI system. This applies to chatbots, virtual assistants, customer service bots, and any AI system that is designed to interact with humans in natural language where there is a realistic possibility of confusion about whether the respondent is human or artificial.

Generative AI content — images, audio, video, and text generated or manipulated by AI — falls here when used in contexts affecting public interest (deepfakes, synthetic media). The Act requires labelling of synthetic content in machine-readable and detectable ways where technically feasible, with exceptions for law enforcement uses authorised by law and lighter-touch disclosure for evidently artistic, creative, or satirical works.

Emotion recognition systems (outside workplace/education where they are prohibited) and biometric categorisation systems also fall under limited risk when used outside the prohibited contexts — they must disclose their operation to the individuals they process.

Limited Risk Examples
💬
Chatbots & Virtual Assistants
Customer service bots, AI assistants, helpdesk chatbots — must disclose: “You are interacting with an AI” in a clear and timely manner.
🎭
Deepfakes & Synthetic Media
AI-generated images, audio, video, or text that could be mistaken for authentic human-created content, particularly in public-interest contexts.
✍️
AI-Generated Text on Public Matters
AI-produced text on matters of public interest — news articles, policy commentary, public communications — must be labelled as AI-generated.
😶
Emotion Recognition (Non-Prohibited Uses)
Emotion recognition systems used outside the prohibited workplace/education contexts must disclose their operation to individuals being assessed.
🤖
AI Voice Interfaces
AI systems that communicate with users through natural language or voice — must clearly identify themselves as AI, especially where human-AI distinction could be confused.
Obligation Mandatory disclosure to users that they are interacting with an AI system (Article 50(1)). Labelling of AI-generated or manipulated content in a machine-readable format where technically feasible (Article 50(2)). No conformity assessment required. No registration in EU database required. Primary obligation is transparency — ensuring humans know they are engaging with AI.
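Article 50(2) does not prescribe a marking format; providers choose their own machine-readable scheme (C2PA-style content credentials are one emerging approach). As a purely illustrative sketch, with hypothetical field names of my own, a provider might attach a provenance record to generated output:

```python
import json
from datetime import datetime, timezone

def label_ai_output(content: str, generator: str) -> str:
    """Wrap generated text in a hypothetical machine-readable provenance envelope."""
    envelope = {
        "content": content,
        "provenance": {
            "ai_generated": True,   # detectable flag for downstream tools
            "generator": generator,
            "labelled_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(envelope)

labelled = label_ai_output("Draft press summary ...", "example-model-v1")
print(json.loads(labelled)["provenance"]["ai_generated"])  # True
```

In practice the label would travel with the content through metadata or watermarking rather than a JSON wrapper; the point is that the disclosure must be parseable by machines, not just visible to humans.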
Tier 4
Minimal / No Risk · Voluntary Codes of Practice

Minimal / Low Risk

No specific legal obligations. Voluntary codes of conduct encouraged. The vast majority of AI applications fall here.
VOLUNTARY · No specific rules

The overwhelming majority of AI applications — from product recommendation engines to spam filters, from inventory management tools to AI in video games — pose minimal or no meaningful risk to people’s rights or safety. The Act recognises this explicitly: imposing heavy compliance burdens on these systems would be disproportionate and would undermine Europe’s ability to compete in AI development.

Minimal risk AI has no specific mandatory legal obligations under the Act. Providers are free to develop and deploy these systems subject only to existing law (GDPR, consumer protection, etc.). The Act encourages providers of minimal risk AI to voluntarily adopt codes of conduct — structured commitments to responsible AI development practices — but these are not legally required.

The designation of Tier 4 is important for compliance strategy: it means organisations should spend their compliance resources on correctly classifying their higher-risk systems, not on applying heavy compliance processes to routine AI tools. A spam filter does not need a conformity assessment. An AI recommendation engine for a retail website does not need human oversight documentation. Getting this calibration right is itself a compliance efficiency.

Minimal Risk Examples
🎮
AI-Enabled Video Games
AI characters, adaptive difficulty systems, procedural content generation in games — no specific obligations, existing consumer protection law applies.
📧
Spam Filters
Email and content filtering systems using ML classification — minimal risk, no meaningful impact on fundamental rights or safety.
📦
Inventory Management AI
AI systems forecasting demand, optimising stock levels, and automating reordering — operational AI with no significant rights implications.
🎵
Content Recommendation Systems
AI recommending music, films, products, or content based on user preferences — no specific AI Act obligations beyond existing consumer law.
🔍
Search & Navigation Tools
AI-powered search ranking, map routing, autocomplete suggestions — general-purpose tools without specific rights-affecting decisions.
🌐
Translation & Language Tools
Automated translation services, grammar assistants, language learning tools — language AI without significant risk to rights or safety.
Obligation No specific AI Act obligations. Voluntary codes of conduct are encouraged but not required. Existing EU law (GDPR, Product Liability Directive, Consumer Protection Directive) continues to apply. Organisations should still maintain an AI inventory, including Tier 4 systems, to demonstrate classification decisions are documented and deliberate.
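The inventory recommendation above amounts to keeping one documented record per system, even for Tier 4, so that classification decisions can be evidenced later. A minimal sketch of such a record (the field names are my own, not prescribed by the Act):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AISystemRecord:
    """One row in an AI inventory: the classification decision, documented."""
    name: str
    tier: int            # 1-4, per the Act's risk tiers
    rationale: str       # why this tier was assigned
    classified_on: date = field(default_factory=date.today)

inventory = [
    AISystemRecord("spam-filter", 4, "Content filtering; no rights impact"),
    AISystemRecord("cv-screener", 2, "Annex III: employment and HR"),
]
# Tier 2 systems drive the compliance workload ahead of Aug 2026
print([r.name for r in inventory if r.tier == 2])  # ['cv-screener']
```

Even this trivial structure answers the two questions an authority would ask first: what AI do you run, and why did you classify it the way you did.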

“The EU AI Act doesn’t ban AI — it classifies and regulates it based on risk. The higher the potential harm to people, the stricter the rules. Think of it as GDPR for artificial intelligence, with extraterritorial scope that reaches any company whose AI touches EU users.”

CMARIX — EU AI Act Compliance Checklist 2026
Quick Reference

All Four Tiers at a Glance

The definitive summary table for AIGP study. Know this table — classification triggers obligations.

Tier | Risk Level | Status | Effective Date | Core Obligation | Max Penalty
Tier 1 · Prohibited | Unacceptable | BANNED outright | 2 Feb 2025 | Discontinue system; no compliance path exists | €35M or 7%
Tier 2 · High-Risk | Significant | STRICT obligations | 2 Aug 2026 | Conformity assessment, technical docs, human oversight, EU database registration | €15M or 3%
Tier 3 · Limited Risk | Limited | TRANSPARENCY required | 2 Aug 2026 | Disclose that users are interacting with AI; label synthetic content | €7.5M or 1.5%*
Tier 4 · Minimal Risk | Minimal / None | VOLUNTARY only | N/A | No specific AI Act requirements; voluntary codes of conduct encouraged | None specific

*€7.5M penalty applies for supplying incorrect information to authorities — not specifically for Tier 3 transparency violations. Tier 3 penalties for non-disclosure would be assessed under national implementing law.

General-Purpose AI

The Fifth Category: GPAI Models

The four-tier system applies to AI applications and use cases. General-Purpose AI (GPAI) models — foundation models like GPT-4, Claude, Gemini — have their own additional regulatory layer that became applicable on 2 August 2025.

All GPAI Models: Transparency Requirements
All providers of GPAI models must draw up technical documentation for their models, publish a sufficiently detailed summary of the content used to train them, put in place a policy to comply with EU copyright law, and provide information enabling downstream providers who build on these models to comply with their own AI Act obligations. These transparency duties apply regardless of systemic risk classification.
Systemic Risk GPAI: Additional Obligations
GPAI models trained with over 10^25 FLOPs are presumed to carry systemic risk and face additional obligations: adversarial testing (red-teaming), incident reporting to the EU AI Office, cybersecurity measures, and energy efficiency reporting. OpenAI, Anthropic, Google DeepMind, and similar frontier model providers are in scope.
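The 10^25 FLOPs presumption is a straightforward threshold on cumulative training compute. A sketch of the check (the presumption is rebuttable, and the Commission can also designate models below the line, so this is triage only):

```python
SYSTEMIC_RISK_FLOPS = 1e25  # Article 51 presumption threshold

def presumed_systemic_risk(cumulative_training_flops: float) -> bool:
    """True if the model is presumed to carry systemic risk under the Act."""
    return cumulative_training_flops > SYSTEMIC_RISK_FLOPS

print(presumed_systemic_risk(5e25))  # True: frontier-scale training run
print(presumed_systemic_risk(1e23))  # False: well below the threshold
```

Note the strict inequality: the Act's presumption attaches to compute greater than 10^25 FLOPs, which currently captures only a handful of frontier training runs.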

AIGP Exam: What You Must Know About the 4 Tiers

The Article 5 prohibited practices (Tier 1) and why each is banned
The eight Annex III high-risk domains and example use cases in each
Tier 2 compliance obligations: Articles 9–15 (risk, data, transparency, oversight, accuracy)
Tier 3 transparency obligations: Article 50 disclosure and synthetic content labelling
The enforcement timeline: Feb 2025, Aug 2025, Aug 2026, Aug 2027
Penalty tiers: €35M/7%, €15M/3%, €7.5M/1.5% — which violation triggers which
GPAI distinction: non-systemic vs systemic risk models and their obligations
Extraterritorial scope: the Act applies if EU users are affected, regardless of company location
Classification methodology: Annex I (regulated products) vs Annex III (use-case domains)
The “no compliance path” principle for Tier 1 — versus the obligation-based path for Tier 2
The Compliance Imperative

Classification Is the Starting Point — Not the Destination

The EU AI Act’s four-tier classification system is elegant in its logic: the higher the potential harm, the stricter the regulation. But classification is not a compliance achievement — it is a prerequisite. Once you know your system is Tier 2, the work begins: risk management documentation, data governance, technical files, conformity assessments, registration, and ongoing monitoring. Classification simply tells you which work is required.

With the August 2026 deadline for full high-risk enforcement approaching, organisations that have not yet completed AI inventories and risk classifications are entering the most time-sensitive phase of compliance preparation. As of April 2026, national market surveillance authorities are already actively enforcing prohibited practices and GPAI requirements. The window for a leisurely approach to Tier 2 compliance has closed.

The organisations that will benefit most from this regulation are those that treat it the way the most sophisticated companies treated GDPR in 2018: not as a compliance tax but as a forcing function for building AI governance infrastructure that would have been needed eventually anyway. Clear AI inventories. Documented risk decisions. Human oversight mechanisms. Post-market monitoring. These are not just compliance artefacts — they are the foundations of trustworthy AI deployment that builds rather than erodes stakeholder confidence.

For AIGP candidates: the four-tier framework is the skeleton of the entire regulation. Every obligation, every article, every penalty provision traces back to which tier the AI system in question falls into. Master the tier classification logic, the prohibited practices, the Annex III domains, and the enforcement timeline — and the rest of the regulation becomes organised and navigable around that structure.

Sources: European Commission — EU AI Act Official Policy Page (digital-strategy.ec.europa.eu) · Trilateral Research — EU AI Act Compliance Timeline 2025 · CMARIX — EU AI Act Compliance Checklist 2026 · GDPR Local — AI Risk Classification Guide · Trail ML — EU AI Act Risk Classifications · LegalNodes — EU AI Act 2026 Updates · is4.ai — EU AI Act Compliance Guide 2026 · Quantamix Solutions — EU AI Act Compliance Guide Updated March 2026 · Dataiku — EU AI Act High-Risk Requirements · ModelOp — EU AI Act Summary & Compliance Requirements