Aug 19, 2025

The EU AI Act
The European Union has adopted the world’s first comprehensive AI law: the Artificial Intelligence Act (AI Act). Officially Regulation (EU) 2024/1689, this landmark legislation creates a harmonized framework for AI across all EU member states. It introduces a risk-based classification system, assigning different regulatory duties based on the potential harm AI systems pose to people’s safety and rights. This article breaks down what the AI Act means for businesses developing or deploying AI, especially in high-impact sectors like healthcare, finance, law enforcement, and HR.
Understanding the Risk Pyramid
The AI Act categorizes AI into four risk tiers:
Unacceptable Risk: Banned entirely (e.g., social scoring, real-time remote biometric identification in public spaces, manipulative AI).
High Risk: Permitted only with strict compliance measures and conformity assessments.
Limited Risk: Subject to transparency obligations (e.g., chatbots, generative AI).
Minimal Risk: Most AI systems (e.g., spam filters) face no new obligations.
This pyramid ensures proportionate regulation, targeting oversight where the stakes are highest.
What AI Practices Are Banned?
AI systems that pose an “unacceptable risk” to rights and safety are outright prohibited. These include:
Government-run social scoring systems.
Real-time biometric ID in public by law enforcement (with very narrow exceptions).
Emotion recognition in schools or workplaces.
Predictive policing based on profiling.
AI that subliminally manipulates users.
Violations can lead to fines of up to €35 million or 7% of global annual turnover, whichever is higher.
High-Risk AI: Strict Compliance Required
High-risk AI covers two broad categories:
AI as part of safety-regulated products (e.g., medical devices, vehicles).
Standalone AI in sensitive areas like:
Healthcare (e.g., diagnostics tools)
Finance (e.g., credit scoring)
Employment (e.g., recruitment software)
Law enforcement (e.g., evidence analysis)
Border control, education, public services, and justice.
Key Obligations for High-Risk AI Providers:
Risk management and continuous testing.
High-quality, bias-mitigated training data.
Detailed technical documentation.
Logging and traceability (see the sketch after this list).
Clear instructions and user transparency.
Human oversight capabilities.
Robustness, accuracy, and cybersecurity measures.
EU database registration and CE marking after conformity assessment.
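For a concrete sense of the logging obligation, here is a minimal sketch of per-decision logging for traceability, assuming a hypothetical in-house scoring model. The field names and log format are illustrative, not mandated by the Act.

```python
# Minimal traceability log: one append-only record per automated decision.
import hashlib
import json
import time

def log_prediction(log_path: str, model_version: str,
                   features: dict, prediction: object) -> None:
    """Append one auditable record for a single model decision."""
    record = {
        "timestamp": time.time(),
        "model_version": model_version,
        # Hash the inputs so the log stays auditable without storing
        # raw personal data alongside every decision.
        "input_hash": hashlib.sha256(
            json.dumps(features, sort_keys=True).encode()).hexdigest(),
        "prediction": prediction,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical credit-scoring call.
log_prediction("decisions.jsonl", "credit-scorer-1.4",
               {"income": 52000, "tenure_months": 30}, "approved")
```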
Users (deployers) of high-risk AI also have duties: monitoring performance, ensuring input data quality, and reporting serious incidents.
Limited-Risk AI: Transparency is Key
AI that interacts with people or generates content must disclose its AI nature. Examples include:
Chatbots, which must identify themselves as non-human.
AI-generated images or videos, which must be labeled as such.
Emotion recognition in non-sensitive contexts, whose use must be disclosed.
Providers of generative AI must also:
Label outputs in machine-readable form (e.g., watermarks for deepfakes), as sketched below.
Publish summaries of the content used for training and adopt a policy to comply with EU copyright law.
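As one illustration of machine-readable labeling, the sketch below embeds a provenance tag in PNG metadata using the Pillow library (assumed installed). The field names are hypothetical, and metadata alone is easy to strip, so real deployments would pair it with durable watermarking or C2PA-style content provenance.

```python
# Illustrative only: tag an AI-generated image via PNG text metadata.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_ai_label(image: Image.Image, path: str, model_name: str) -> None:
    """Save a PNG carrying machine-readable AI-generation disclosure fields."""
    meta = PngInfo()
    meta.add_text("ai_generated", "true")   # hypothetical field names
    meta.add_text("generator", model_name)
    image.save(path, pnginfo=meta)

img = Image.new("RGB", (256, 256))          # stand-in for real model output
save_with_ai_label(img, "output.png", "example-model-v1")
```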
Minimal-Risk AI: No New Rules
Most AI applications – like spam filters, movie recommendations, and grammar tools – are classified as minimal risk and are not subject to any AI Act obligations. However, developers are encouraged to follow voluntary codes of conduct and ethical best practices.
Special Rules for General-Purpose AI (GPAI)
Foundation models (such as large language models) face new obligations if placed on the EU market:
Publish technical documentation and training data summaries.
Provide safe integration instructions.
Respect EU copyright law.
If classified as GPAI with systemic risk (e.g., models trained with more than 10^25 FLOPs of cumulative compute; see the worked estimate after this list), extra safeguards apply:
Rigorous testing and adversarial evaluations.
Incident reporting to EU regulators.
Risk mitigation and cybersecurity measures.
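To make the 10^25 threshold concrete, a common rule of thumb from the scaling-law literature estimates training compute as roughly 6 × parameters × training tokens. The sketch below applies it to an invented model; an actual classification turns on the real cumulative training compute.

```python
# Rule-of-thumb training-compute estimate (~6 * params * tokens), used
# here only to illustrate the 10^25 FLOPs systemic-risk threshold.
SYSTEMIC_THRESHOLD_FLOPS = 1e25

def training_flops(n_params: float, n_tokens: float) -> float:
    return 6 * n_params * n_tokens

# Hypothetical model: 100B parameters trained on 10T tokens.
flops = training_flops(1e11, 1e13)  # 6e24 FLOPs, below the threshold
print(f"{flops:.1e} FLOPs, systemic: {flops > SYSTEMIC_THRESHOLD_FLOPS}")
```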
Business Compliance Checklist
If your company develops or deploys AI, here’s how to start preparing:
Audit your AI systems: Identify which are high-risk or fall into other regulated categories (a sketch of such an inventory follows this list).
Design for compliance: Build in risk management, transparency, and human oversight.
Prepare documentation: Maintain technical files, logs, and conformity assessments.
Label and disclose: For generative and limited-risk AI, ensure visible transparency.
Engage regulators early: Use regulatory sandboxes or consult with Notified Bodies.
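As a starting point for the audit step, here is a toy inventory that tags each system with a provisional risk tier and prints its follow-up duties. The systems and tier assignments are invented, and classifying a real system is a legal determination under the Act, not something code can decide.

```python
# Toy AI-system inventory for a first-pass compliance audit.
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    name: str
    purpose: str
    tier: RiskTier

inventory = [
    AISystem("resume-screener", "recruitment", RiskTier.HIGH),
    AISystem("support-chatbot", "customer service", RiskTier.LIMITED),
    AISystem("spam-filter", "email filtering", RiskTier.MINIMAL),
]

for system in inventory:
    if system.tier is RiskTier.HIGH:
        print(f"{system.name}: conformity assessment, EU database registration")
    elif system.tier is RiskTier.LIMITED:
        print(f"{system.name}: transparency disclosures required")
```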
When Do These Rules Apply?
Bans on unacceptable-risk AI: In effect since February 2, 2025.
General-purpose AI obligations: In effect since August 2, 2025.
High-risk AI rules: Apply from August 2, 2026 (with a longer transition, to August 2027, for AI embedded in regulated products).
Enforcement and Penalties
The AI Act is enforced by national market surveillance authorities, coordinated by the European AI Office, which also directly oversees general-purpose AI providers. Penalties include:
Up to €35 million or 7% of global annual turnover, whichever is higher, for banned practices (for a firm with €1 billion in turnover, the 7% cap is €70 million).
Up to €15 million or 3% for most other violations.
For SMEs and startups, the lower of the two amounts applies.
Final Thoughts: Trustworthy AI as Competitive Advantage
While the AI Act introduces compliance challenges, it also offers a path to building more trustworthy, robust, and competitive AI systems. Businesses that start aligning with the Act now can gain consumer trust and reduce future legal risks.
Want to prepare your AI portfolio for compliance? Book a demo with Cogrant or explore our resources on the AI compliance roadmap.