Organizations adopting AI are being pulled in two directions at once. On one side, they need to understand what the law requires. On the other, they need a practical way to govern AI across teams, use cases, and vendors.
Enter the EU AI Act and ISO 42001.
On the surface they both touch the same topic, but they solve different problems. The AI Act is a binding EU regulation that sets legal obligations for AI systems and general-purpose AI models in scope. ISO 42001 is a voluntary international standard for building an AI management system across the organization.
Many organizations will end up using both: the AI Act to understand what must be addressed, and ISO 42001 to build the governance model that helps them do it consistently.
What is the EU AI Act?
The EU AI Act is the EU’s legal framework for regulating AI. It sets binding rules based on AI risk and defines obligations depending on how AI is developed, provided, or used.
Formally, the AI Act is Regulation (EU) 2024/1689, the EU’s harmonized legal framework for artificial intelligence. The European Commission describes it as the first comprehensive legal framework on AI, built around a risk-based approach. Its aim is to promote trustworthy AI while protecting health, safety, and fundamental rights.
In practice, the AI Act matters because it does not treat every AI use case the same. It distinguishes between prohibited practices, high-risk AI systems, transparency obligations for certain AI uses, and obligations for providers of general-purpose AI models. That structure matters because a company’s obligations depend on what kind of AI it develops, provides, or deploys, and on the role it plays in the value chain.
It also matters that the Act is already phasing in. The Commission states that the AI Act entered into force on 1 August 2024 and will be fully applicable on 2 August 2026, with some provisions applying earlier. Prohibited AI practices and AI literacy obligations started to apply on 2 February 2025, and obligations for providers of general-purpose AI models entered into application on 2 August 2025. Organizations are therefore operating in a transition period where some obligations are already live and others continue phasing in.
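To make the phase-in concrete, the milestones above can be sketched as a simple lookup. This is an illustrative Python snippet, not a compliance tool: the dates come from the Commission’s stated timeline, but the labels are paraphrased and the helper function is hypothetical.

```python
from datetime import date

# Key EU AI Act milestones per the European Commission's timeline.
# Labels are paraphrased summaries, not official legal wording.
MILESTONES = {
    date(2024, 8, 1): "Regulation (EU) 2024/1689 enters into force",
    date(2025, 2, 2): "Prohibited AI practices and AI literacy obligations apply",
    date(2025, 8, 2): "Obligations for providers of general-purpose AI models apply",
    date(2026, 8, 2): "The AI Act becomes fully applicable",
}

def obligations_in_effect(as_of: date) -> list[str]:
    """Return milestone descriptions that already apply on a given date."""
    return [label for start, label in sorted(MILESTONES.items()) if start <= as_of]
```

For example, `obligations_in_effect(date(2025, 9, 1))` returns three entries, reflecting a transition period where some obligations are live and full applicability is still ahead.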
What is ISO/IEC 42001?
ISO/IEC 42001 is an international standard for building an AI management system. It helps organizations govern AI through defined policies, processes, roles, and continual improvement.
Published in 2023, ISO/IEC 42001 is the international standard for AI management systems. ISO says it provides requirements and guidance for organizations that develop, provide, or use AI systems, and that it is the first global standard focused on establishing, implementing, maintaining, and continually improving an AI management system.
That framing is important. ISO 42001 is not a law and it is not limited to a single AI product or one narrow legal category. It is an organization-level management system standard. Its purpose is to embed policies, objectives, processes, accountability, and continuous improvement around AI. In other words, it is about creating repeatable governance, not just documenting one-off controls.
This makes ISO 42001 especially relevant for organizations that need to coordinate AI governance across product teams, internal use of AI, procurement, leadership oversight, and audit or assurance functions.
Even where it is voluntary, it can serve as a structured operating model for responsible AI management. That reading follows from ISO’s own description of the standard as an organization-wide management system for establishing policies, objectives, and processes around the responsible development, provision, or use of AI.
Key differences between the EU AI Act and ISO 42001
Legal status
This is the biggest difference between the two. The EU AI Act is binding law for organizations and AI use cases in scope. ISO 42001 is a voluntary standard. A company may decide to adopt ISO 42001 as part of its governance program, but it cannot opt out of the AI Act if the regulation applies.
Purpose
The AI Act is meant to regulate AI risks in the EU market and protect public interests such as health, safety, and fundamental rights. ISO 42001 is meant to help organizations establish and run an AI management system with policies, processes, and continual improvement. One is regulatory. The other is managerial.
Scope
The AI Act applies obligations based on the type of AI, the level of risk, and the role of the actor, such as provider or deployer. ISO 42001 applies at the management-system level across the organization and can cover development, provision, procurement, and use of AI more broadly.
How implementation works
The AI Act asks organizations to determine whether specific systems or models fall into regulated categories and then meet the relevant obligations. ISO 42001 asks organizations to build a governance structure that defines responsibilities, processes, controls, review mechanisms, and improvement cycles.
What “good evidence” looks like
Under the AI Act, evidence is tied to legal obligations under the regulation. Under ISO 42001, evidence sits more in the management system itself: policies, roles, documented processes, internal reviews, and continual improvement records. This distinction matters because legal compliance and management-system maturity are related, but not identical, ideas.
Where they overlap
Even though they are different in status and design, the two frameworks overlap in several important ways.
- Both push organizations toward stronger governance and accountability. The AI Act does this through legal obligations and actor-specific duties. ISO 42001 does it through a structured management system that establishes policies, objectives, and processes.
- Both are concerned with risk management. The AI Act is explicitly risk-based, with different obligations depending on the nature of the AI system or model. ISO 42001 is built to help organizations manage AI-related risks through a systematic governance framework.
- Both rely on documentation, oversight, and review. The exact mechanics differ, but neither framework works in a vacuum. Organizations need defined responsibilities, records, internal processes, and a way to monitor whether AI is being managed as intended.
This overlap is exactly why many organizations will not treat them as alternatives. They will treat them as layers. The AI Act sets the external requirements. ISO 42001 can help create the internal governance architecture that supports those requirements.
How to use the EU AI Act and ISO 42001 together
For many organizations, the most useful approach is to combine the two. Start with the AI Act and then use ISO 42001 to operationalize what the organization needs to do. The AI Act tells you what to pay attention to from a legal and regulatory standpoint, while ISO 42001 helps you build the management system that makes that work repeatable.
A sensible sequence starts with legal scoping. An organization should first identify which AI systems or general-purpose AI models it develops, provides, deploys, or integrates into its operations, and then determine whether any of them are prohibited, high-risk, subject to transparency obligations, or covered by general-purpose AI obligations. This is the regulatory mapping exercise. Without it, teams may build governance processes around the wrong assumptions.
Once that legal picture is clear, ISO 42001 becomes useful as the governance layer. The standard can help the organization define policies for AI use, assign roles and responsibilities, set objectives, establish risk assessment and review processes, formalize documentation practices, and create mechanisms for monitoring and continual improvement. These are exactly the kinds of things that help an organization move from ad hoc AI oversight to a durable system.
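The governance layer described above boils down to assigning owners, linking use cases to policies, and reviewing them on a cycle. The sketch below illustrates that idea in the spirit of ISO 42001’s management-system approach; the field names, policy reference, and annual review interval are assumptions for illustration, not requirements taken from the standard’s text.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class GovernanceRecord:
    """One AI use case under management: owner, policy link, review cycle."""
    use_case: str
    owner: str
    policy_ref: str            # hypothetical internal policy identifier
    last_review: date
    review_interval_days: int = 365  # assumed annual cycle
    findings: list[str] = field(default_factory=list)

    def review_due(self, today: date) -> bool:
        """A simple continual-improvement trigger: is a review overdue?"""
        return (today - self.last_review).days >= self.review_interval_days

record = GovernanceRecord(
    use_case="support-chatbot",
    owner="ai-governance-team",
    policy_ref="AI-POL-001",
    last_review=date(2025, 1, 15),
)
```

Even a structure this simple makes oversight auditable: every use case has a named owner, a governing policy, and a date by which it must be looked at again, which is the kind of routine that turns ad hoc oversight into a durable system.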
That is why the strongest way to frame the relationship is this: the EU AI Act defines the obligations that matter in law, while ISO 42001 provides a management system that can help an organization address those obligations in a structured and repeatable way. That framing follows directly from the official purposes of the two frameworks.
Why many organizations will use both
Many organizations will find that the AI Act alone is not enough as an internal operating model. A law can tell you what obligations exist, but it does not automatically create the governance routines needed to manage AI across an enterprise. Teams still need ownership, policies, review mechanisms, training, vendor processes, and oversight. That is where ISO 42001 becomes useful.
At the same time, ISO 42001 alone is not enough for organizations with EU AI Act exposure. A management system can improve governance maturity, but it does not substitute for understanding the actual legal duties attached to high-risk AI systems, transparency obligations, prohibited practices, or general-purpose AI models.
That is why the combination is so attractive. The AI Act anchors the compliance agenda. ISO 42001 gives the organization a practical structure for running that agenda across functions and over time.
Final takeaway
The EU AI Act and ISO 42001 should not be framed as an either-or choice. One defines regulatory obligations for AI in scope, while the other helps organizations build governance that is consistent, documented, and repeatable.
For many organizations, the best path will be to use them together. Start with the AI Act to identify what obligations apply. Then use ISO 42001 to build the governance layer that helps the organization meet those obligations in practice. That is likely to be the most realistic route for teams that want both AI compliance and a workable operating model for AI governance.