Demystifying the EU AI Act: Understanding AI Risk Categories
In this blog post, we briefly summarize the key provisions of the 2024 EU AI Act concerning the risk categorization of AI systems.
Key Takeaways
- The EU AI Act defines four risk levels: unacceptable, high, limited, and minimal risk systems.
- Unacceptable systems are prohibited because they may violate fundamental rights.
- High-risk systems (e.g., healthcare, finance) are subject to strict compliance and documentation requirements.
- For limited-risk systems, transparency is the key obligation: users must be told when they are interacting with AI.
- There is less regulation for minimal-risk systems, but ethical operation is still expected.
AI Risk Categories: Minimal to Unacceptable
The European Union’s AI Act, adopted in 2024, aims to regulate AI so that it is ethical, safe, and trustworthy. It categorizes AI systems into four risk levels: unacceptable, high, limited, and minimal. Each category carries specific rules that guide developers on compliance.
1. Unacceptable Risk (Banned)
These AI systems are completely prohibited because they threaten fundamental rights or public safety.
Examples include:
- Government-led social scoring systems rating individuals’ behavior or trustworthiness.
- AI systems that manipulate or exploit individuals, especially vulnerable groups like children.
- Predictive policing tools that assess the likelihood of an individual committing a crime based solely on profiling or personality traits.
- Emotion recognition tools used in schools or workplaces to evaluate attentiveness.
- Real-time remote biometric identification in publicly accessible spaces by law enforcement (allowed only under narrow exceptions).
If your AI falls into this category, you cannot legally deploy it in the EU.
2. High Risk (Strictly Regulated)
These systems significantly impact health, safety, or fundamental rights. They require extensive compliance.
Typical high-risk AI systems:
- Healthcare: Disease diagnostics and treatment recommendations.
- Autonomous vehicles: AI systems for self-driving cars and aviation.
- Education and hiring: Automated exam scoring or job application screening.
- Essential services access: AI-based credit scoring and benefit allocation.
- Law enforcement and judiciary: Evidence evaluation or sentencing support systems.
- Biometric identification in controlled environments, such as security checks.
If your AI system is High Risk, you must:
- Undergo conformity assessments (audits/tests).
- Implement risk management procedures, including bias and safety checks.
- Use high-quality, unbiased training data.
- Maintain detailed technical documentation for transparency and auditing.
- Incorporate human oversight mechanisms to supervise or intervene in AI decisions.
- Continuously monitor and log AI operations, reporting any serious issues promptly.
This category requires significant preparation, typically including a conformity assessment and CE marking before the system can be placed on the EU market; a sketch of what logging and human oversight might look like in code follows below.
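As an illustration only (not legal guidance), here is a minimal Python sketch of how a team might wire basic decision logging and a human-in-the-loop gate into a high-risk system such as credit scoring. All names (`decide_credit_application`, `review_queue`, the model's `predict` call, the 0.8 threshold) are hypothetical and not prescribed by the Act.

```python
import json
import logging
from datetime import datetime, timezone

# Hypothetical sketch: log every automated decision and route low-confidence
# cases to a human reviewer, supporting the Act's oversight and logging duties.
logging.basicConfig(filename="ai_decisions.log", level=logging.INFO)

CONFIDENCE_THRESHOLD = 0.8  # assumed internal policy value, not set by the Act

def decide_credit_application(model, application: dict, review_queue: list) -> dict:
    score, confidence = model.predict(application)  # hypothetical model API
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "applicant_id": application["id"],
        "score": score,
        "confidence": confidence,
        "automated": confidence >= CONFIDENCE_THRESHOLD,
    }
    logging.info(json.dumps(record))  # retained for audits and incident reporting

    if confidence < CONFIDENCE_THRESHOLD:
        review_queue.append(record)   # human oversight: a person makes the final call
        record["decision"] = "pending_human_review"
    else:
        record["decision"] = "approved" if score > 0.5 else "rejected"
    return record
```

The point of the sketch is the shape of the workflow (log everything, let a human intervene), not the specific thresholds or field names, which every provider would define for their own context.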
3. Limited Risk (Transparency Required)
AI systems here aren’t inherently dangerous but can influence user decisions or emotions. Transparency is crucial.
Common examples:
- Chatbots and virtual assistants: Must clearly indicate they’re AI (“Hi, I’m an AI assistant.”).
- AI-generated content: Images, videos, or texts created by AI must be explicitly labeled to prevent misinformation.
- Deepfake and impersonation tools (voice cloning or face-swapping): users must know they are viewing or hearing synthetic media.
- Emotion recognition for non-critical applications, like interactive museum exhibits, provided users are informed.
Limited Risk AI obligations:
- Clearly disclose AI usage to users.
- Label AI-generated media clearly.
- Allow users easy access to human assistance if needed.
- Avoid manipulative designs or “dark patterns”; prioritize ethical transparency.
Transparency fosters trust, enhancing user satisfaction and compliance.
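A minimal sketch of what these transparency duties might look like in practice: a chatbot that announces itself as AI, labels its generated output, and keeps a route to a human visible. The names (`start_chat_session`, `generate_reply`, `HUMAN_SUPPORT_EMAIL`) are placeholders for illustration, not part of any official requirement or API.

```python
# Hypothetical sketch of limited-risk transparency measures: disclose the AI,
# label generated content, and offer a path to a human. Placeholder names only.
AI_DISCLOSURE = "Hi, I'm an AI assistant. You can request a human at any time."
HUMAN_SUPPORT_EMAIL = "support@example.com"  # assumed contact, not from the Act

def start_chat_session() -> list[str]:
    # The disclosure is shown before any AI-generated message.
    return [AI_DISCLOSURE]

def reply(user_message: str, generate_reply) -> str:
    # generate_reply is a stand-in for whatever model call the product uses.
    answer = generate_reply(user_message)
    # Label the output as AI-generated and keep human escalation visible.
    return f"{answer}\n\n[AI-generated response. Human help: {HUMAN_SUPPORT_EMAIL}]"
```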
4. Minimal Risk (No Specific Regulation)
Most everyday AI systems fall here, with no additional regulatory requirements under the Act.
Examples include:
- Email spam filters and basic inbox sorting.
- Gaming AI, such as non-player character behavior or dynamic difficulty adjustments.
- Recommendation algorithms for products or entertainment.
- Productivity tools like grammar checkers, translation apps, and scheduling assistants.
- Basic analytics AI that optimizes websites or operational efficiency, where errors carry no serious consequences.
Minimal-risk AI can operate without extra compliance steps, though general data privacy and ethical practices remain applicable.
Conclusion & Compliance Strategy
Understanding these risk categories is vital for AI developers and businesses targeting EU markets:
- Unacceptable Risk: Avoid entirely.
- High Risk: Requires rigorous preparation and documentation.
- Limited Risk: Prioritize transparency.
- Minimal Risk: General ethical standards suffice.
Early compliance positions businesses advantageously, ensuring products are safe, ethical, and trusted by users. Familiarity with these categories helps anticipate regulatory needs, streamlining development and market entry. For detailed guidelines, refer to the official EU AI Act (Regulation (EU) 2024/1689) available on EUR-Lex or summaries from the European Commission.
In your company, what LLMs (or any other AI tools) do you use? What are your experiences outside of OpenAI and Microsoft tools? Are there any that could be categorized as high-risk? Why not talk about it over a good cup of coffee?