The EU AI Act (AI Act) entered into force on 1 August 2024 and is the EU’s comprehensive law on artificial intelligence. In parallel, ISO published ISO/IEC 42001 for AI management systems. Together they form a pair: the law says what you must do, the standard shows how.
This article explains what the AI Act requires, what ISO 42001 contains, and how to use the standard to meet the law’s requirements.
What is the EU AI Act?
The AI Act (Regulation (EU) 2024/1689) applies directly in all EU member states. It covers providers, deployers, importers, and distributors of AI systems, including organizations outside the EU whose systems are placed on the EU market or whose output is used in the EU.
The core of the law is a risk-based classification in four tiers:
- Unacceptable risk – AI systems that manipulate people, exploit vulnerabilities, or perform social scoring. The ban has been in effect since 2 February 2025.
- High risk – AI systems in areas such as recruitment, credit scoring, education, and critical infrastructure. These systems must meet requirements for risk management, documentation, transparency, and human oversight. The requirements apply from 2 August 2026.
- Limited risk – AI systems that interact directly with people, such as chatbots. Transparency requirements apply, meaning users must be informed they are communicating with an AI.
- Minimal risk – All other AI systems. No specific requirements, but the law encourages voluntary codes of conduct.
Violations of the rules on prohibited AI systems can result in fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
Timeline
| Date | What applies |
|---|---|
| 1 August 2024 | Law enters into force |
| 2 February 2025 | Ban on AI systems with unacceptable risk |
| 2 August 2025 | Requirements for general-purpose AI models (GPAI) |
| 2 August 2026 | Main body of the law, including high-risk AI requirements |
| 2 August 2027 | Requirements for high-risk AI embedded in products covered by EU harmonised product legislation (Annex I) |
What is ISO 42001?
ISO/IEC 42001:2023 is a certifiable standard for AI management systems (AIMS). It follows the same structure as other ISO management system standards (ISO 9001, ISO 27001, ISO 14001) with clauses 4-10.
What makes ISO 42001 unique are the AI-specific requirements:
- AI risk assessment (clause 6.1.2) – Identify and analyse risks linked to your AI systems. Assess consequences for the organization, individuals, and society.
- AI risk treatment (clause 6.1.3) – Select controls to address the risks, compare them with the controls in Annex A, and record which controls apply to your organization in a statement of applicability (SoA); a simplified example follows below.
- AI system impact assessment (clause 6.1.4) – Assess the impact of your AI systems on individuals and society, not just the organization.
Annex A contains 9 control domains with 38 controls covering everything from AI policy and data management to third-party relationships:
- Policies related to AI (A.2)
- Internal organization (A.3)
- Resources for AI systems (A.4)
- Impact assessment (A.5)
- AI system lifecycle (A.6)
- Data for AI systems (A.7)
- Information for stakeholders (A.8)
- Use of AI systems (A.9)
- Third-party relationships (A.10)
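
To make the link between clause 6.1.3 and Annex A more concrete, here is a minimal sketch of what a statement-of-applicability record could look like. It is an illustration only: the control IDs and titles come from Annex A, but the field names, the example justifications, and the Python structure are assumptions, not something prescribed by ISO/IEC 42001.

```python
from dataclasses import dataclass

@dataclass
class SoAEntry:
    """One row in a statement of applicability (illustrative structure only)."""
    control_id: str       # Annex A control domain or control, e.g. "A.5"
    title: str
    applicable: bool
    justification: str    # why the control is included or excluded
    implementation: str   # short note on how the control is met, if applicable

# Hypothetical example entries; the wording is not taken from ISO/IEC 42001.
soa = [
    SoAEntry("A.5", "Impact assessment", True,
             "We deploy a customer-facing chatbot that affects individuals.",
             "AI system impact assessment performed per clause 6.1.4."),
    SoAEntry("A.10", "Third-party relationships", True,
             "The underlying model is procured from an external supplier.",
             "Supplier requirements documented in the procurement policy."),
]

for entry in soa:
    status = "applicable" if entry.applicable else "excluded"
    print(f"{entry.control_id} ({entry.title}): {status} - {entry.justification}")
```

Whether you keep the SoA in a spreadsheet, a GRC tool, or a structure like this, the essential content is the same: which controls apply, why, and how they are implemented.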
How ISO 42001 supports AI Act compliance
The AI Act states what is required, but the law does not describe how an organization should build its internal processes. ISO 42001 does.
The connection is concrete. The AI Act requires, for example, that providers of high-risk AI systems have a risk management system (Article 9), technical documentation (Article 11), and quality management (Article 17). ISO 42001 provides the framework:
| AI Act requirement | ISO 42001 support |
|---|---|
| Risk management system (Art. 9) | AI risk assessment and risk treatment (6.1.2, 6.1.3) |
| Fundamental rights impact assessment (Art. 27) | AI system impact assessment (6.1.4, A.5) |
| Technical documentation (Art. 11) | Resource documentation and system documentation (A.4, A.6.2.3, A.6.2.7) |
| Data quality (Art. 10) | Data controls and provenance (A.7) |
| Transparency (Art. 13) | Information for stakeholders (A.8) |
| Human oversight (Art. 14) | Responsible use of AI (A.9) |
| Quality management (Art. 17) | The full management system (clauses 4-10) |
ISO 42001 certification is not a legal requirement. But it gives you a demonstrable structure showing that you work systematically with AI governance. During a regulatory inspection, it can be the difference between showing a functioning system and having to start building one.
Where to start
If your organization uses AI systems, including purchased ones, start with these steps:
1. Take inventory. Which AI systems do you use? Many organizations have more AI systems than they realize, from chatbots and analytics tools to AI features embedded in existing software.
2. Classify according to the AI Act. Assess which risk category each system falls into. Systems in recruitment, credit scoring, education, and similar areas likely fall into the high-risk category (a simplified sketch of these first two steps follows after this list).
3. Start the risk assessment. Use ISO 42001’s framework (clause 6.1.2) to assess risks and consequences. Go through the Annex A controls and determine which apply.
4. Document. The AI Act requires documentation. ISO 42001 specifies exactly what: policy, risk assessments, statement of applicability, system documentation, and event logs.
5. Build governance. Assign roles and responsibilities (A.3.2), create a reporting process for AI concerns (A.3.3), and make sure management takes ownership.
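
As a rough illustration of the first two steps, the sketch below records an AI system inventory and assigns each entry one of the AI Act’s four risk tiers. The tier names follow the categories described earlier in this article; the field names, the example systems, and the classifications themselves are hypothetical assumptions, and a real classification requires a case-by-case legal assessment.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystem:
    """One entry in an AI system inventory (illustrative fields only)."""
    name: str
    purpose: str
    vendor: str
    risk_tier: RiskTier
    notes: str = ""

# Hypothetical inventory entries to show the shape of the register.
inventory = [
    AISystem("CV screening tool", "recruitment", "External SaaS",
             RiskTier.HIGH, "Annex III area: employment. Requirements apply from 2 Aug 2026."),
    AISystem("Support chatbot", "customer service", "In-house",
             RiskTier.LIMITED, "Transparency: users must be told they are talking to an AI."),
    AISystem("Spell checker", "document editing", "Embedded in office suite",
             RiskTier.MINIMAL),
]

high_risk = [s for s in inventory if s.risk_tier is RiskTier.HIGH]
print(f"{len(inventory)} AI systems inventoried, {len(high_risk)} classified as high risk")
```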
If you already have a management system, such as ISO 9001 or ISO 27001, you have a head start. ISO 42001 follows the same underlying structure, and much of the work on context, leadership, planning, and improvement overlaps.
GDPR applies in parallel
The Swedish Authority for Privacy Protection (IMY) points out that GDPR applies when personal data is processed in connection with the development or use of AI systems. The AI Act does not replace the data protection regulation. If your AI system processes personal data, you need to comply with both frameworks.
Summary
The EU AI Act sets the rules. ISO 42001 gives you the method to follow them. Start with an inventory of your AI systems, classify them, and perform a risk assessment. If you already work with ISO standards, you already have the foundation.
AmpliFlow participated in ISO’s working group for ISO/IEC 42001 and has built support for the standard, from risk analysis and statement of applicability to AI features that help you in daily work. Read more about how AmpliFlow supports ISO 42001 or book a demo.