ISO 42001 explained — the AI management standard your clients will soon require.
ISO 42001 is the international standard for AI management systems. Published in December 2023, it is the first globally recognised framework specifically designed for organisations that develop, provide, or use AI systems. It is already being referenced in EU AI Act guidance, appearing in enterprise procurement requirements, and becoming a de facto benchmark for regulated industries asking: can we trust your AI?
If you build or deploy AI systems for enterprise clients — particularly in regulated sectors like financial services, healthcare, or legal — ISO 42001 certification is moving from optional to expected. This guide explains what the standard covers, who needs it, how it compares to ISO 27001 and the EU AI Act, and what implementing it actually involves.
In this guide
- What ISO 42001 is and why it exists
- Who needs ISO 42001
- What the standard covers — clause by clause
- ISO 42001 versus ISO 27001 — what is the difference
- ISO 42001 and the EU AI Act — how they fit together
- The certification process
- What implementation actually involves
- How long it takes and what it costs
- Where to start
1. What ISO 42001 is and why it exists
ISO 42001 — formally ISO/IEC 42001:2023, titled "Information technology — Artificial intelligence — Management system" — is a management system standard published jointly by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) in December 2023. It follows the same high-level structure as other ISO management system standards, including ISO 27001 for information security and ISO 9001 for quality management.
A management system standard does not specify what technology to use or how to build it. Instead it specifies the management framework that an organisation must have in place to govern a particular domain — the policies, processes, roles, documentation, monitoring, and continuous improvement cycle needed to manage AI responsibly.
ISO 42001 exists because AI introduces risks and responsibilities that existing management frameworks — including ISO 27001 — do not adequately address. AI systems can produce biased, inaccurate, or harmful outputs in ways that are difficult to detect and explain. They raise questions about accountability, transparency, and human oversight that traditional software governance does not answer. ISO 42001 provides a structured framework for managing these AI-specific risks at an organisational level.
The core purpose: ISO 42001 is a framework for answering the question "how does your organisation ensure that AI is developed, deployed, and used responsibly?" It does not specify technical architecture. It specifies the management system that governs how technical decisions are made, reviewed, and improved over time.
2. Who needs ISO 42001
ISO 42001 is relevant to three types of organisations, each with different motivations for pursuing it.
AI developers and providers
Companies that build AI systems — whether as standalone products or embedded in services — will increasingly find that enterprise customers require evidence of AI governance as a condition of procurement. ISO 42001 certification provides that evidence in a standardised, auditable form. For AI companies selling into regulated industries — financial services, healthcare, legal — certification is likely to become a procurement requirement over the next few years, in much the same way that ISO 27001 became one for software vendors.
Organisations deploying AI internally
Large enterprises that use AI systems in their operations — for credit decisioning, HR screening, customer service, fraud detection — face regulatory and reputational pressure to demonstrate that they govern AI responsibly. ISO 42001 provides a framework for doing this that can be demonstrated to regulators, auditors, and boards. For organisations subject to the EU AI Act, ISO 42001 provides a structured approach to meeting the Act's governance requirements.
Professional services firms with AI-adjacent practices
Law firms, accounting firms, and consultancies that advise clients on AI governance — or that use AI tools in client engagements — are beginning to face questions from clients about how they govern AI use internally. ISO 42001 provides a credible answer.
Organisations that do not yet build or use AI systems do not need ISO 42001 today. But the trajectory is clear — AI adoption is accelerating, regulatory pressure is increasing, and the demand for evidence of responsible AI governance is growing. The organisations that implement ISO 42001 now will be ahead of the curve when it becomes a requirement rather than a differentiator.
3. What the standard covers — clause by clause
ISO 42001 follows the Annex SL high-level structure used by all modern ISO management system standards. This means it has the same clause numbering as ISO 27001, ISO 9001, and others — making integration between standards significantly easier for organisations that hold multiple certifications.
The standard has ten clauses. Clauses 1 through 3 are introductory. Clauses 4 through 10 contain the requirements.
Clause 4
Context of the organisation
The organisation must understand its internal and external context — including regulatory requirements, stakeholder expectations, and the scope of its AI activities. This includes identifying all AI systems in use or development, understanding the interests of affected parties, and defining the scope of the AI management system.
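The Clause 4 inventory lends itself to a simple structured record per AI system. The standard does not prescribe a schema, so the fields below are an illustrative sketch, not a required format:

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """One entry in a Clause 4 AI system inventory (illustrative fields)."""
    name: str
    purpose: str
    lifecycle_stage: str  # e.g. "development", "production", "retired"
    affected_parties: list[str] = field(default_factory=list)
    in_scope: bool = True  # inside the AI management system's defined scope


def inventory_summary(records: list[AISystemRecord]) -> dict[str, int]:
    """Count in-scope systems by lifecycle stage, for management reporting."""
    summary: dict[str, int] = {}
    for record in records:
        if record.in_scope:
            summary[record.lifecycle_stage] = summary.get(record.lifecycle_stage, 0) + 1
    return summary
```

Even a spreadsheet can serve the same purpose; the point is that every system, its purpose, and its affected parties are captured before scoping decisions are made.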
Clause 5
Leadership
Top management must demonstrate commitment to the AI management system — approving the AI policy, assigning roles and responsibilities, and integrating AI governance into strategic planning. Leadership accountability is not delegable — the standard requires evidence that senior management is actively engaged, not just aware.
Clause 6
Planning
The organisation must identify risks and opportunities related to its AI activities, set objectives for the AI management system, and plan how to achieve them. This includes AI-specific risk assessment — evaluating the potential impact of AI systems on individuals, society, and the organisation — and planning the actions needed to address those risks.
Clause 7
Support
The organisation must ensure it has the resources, competence, awareness, and communication processes needed to operate the AI management system. This includes AI-specific training and awareness programmes, and the documentation infrastructure needed to maintain the management system.
Clause 8
Operation
This is the most substantive clause — it covers the operational controls that govern how AI systems are developed and deployed. It requires impact assessments for AI systems, controls for the AI system lifecycle, supplier and partner management requirements, and incident management processes. Annex A of the standard provides a set of controls that organisations can select from based on their AI impact assessment — these are the specific technical and organisational measures that apply to individual AI systems.
Clause 9
Performance evaluation
The organisation must monitor, measure, analyse, and evaluate the performance of the AI management system — through internal audits and management reviews. This creates a structured review cycle that ensures the management system stays current as AI systems and the regulatory environment evolve.
Clause 10
Improvement
The organisation must address nonconformities when they occur — through corrective action — and continually improve the AI management system. This is the continuous improvement cycle that characterises all ISO management system standards.
In addition to these clauses, ISO 42001 includes annexes on AI-specific topics such as AI system impact assessment, data governance for AI, human oversight of AI systems, and responsible AI principles. Annex A, which lists the management controls, is normative — organisations must consider its controls and justify any exclusions — while the guidance annexes are informative. Both kinds are practically important for understanding how to implement the standard.
4. ISO 42001 versus ISO 27001 — what is the difference
ISO 27001 is the international standard for information security management systems. Many organisations already hold ISO 27001 certification, and a common question is: if we have ISO 27001, do we also need ISO 42001?
The short answer is: they address different risks and are complementary rather than interchangeable.
| Dimension | ISO 27001 | ISO 42001 |
|---|---|---|
| Primary focus | Protecting information assets from security threats — confidentiality, integrity, availability. | Managing the risks introduced by AI systems — bias, opacity, harmful outputs, accountability gaps. |
| What it governs | How the organisation protects information — access controls, encryption, incident response, vendor security. | How the organisation develops, deploys, and uses AI — impact assessment, human oversight, transparency, lifecycle management. |
| Source of risk | Attackers, malicious insiders, accidental data loss — external and internal security threats. | The AI system itself — the risks that emerge from how AI models work, what data they use, and how their outputs affect people. |
| Regulatory relevance | Widely referenced in data protection law, financial regulation, and procurement requirements across all industries. | Referenced in EU AI Act guidance and codes of practice. Increasingly referenced in enterprise procurement, especially in regulated sectors. |
Do you need both? Yes, if you develop or deploy AI systems that process personal data or make consequential decisions. ISO 27001 addresses how you protect the data; ISO 42001 addresses how you govern the AI. The two do not overlap significantly.
For organisations that already hold ISO 27001, the good news is that ISO 42001 follows the same high-level structure. Policies, procedures, internal audit processes, management review, and corrective action processes from ISO 27001 can be extended to cover ISO 42001 requirements rather than built from scratch. An integrated ISO 27001 and ISO 42001 management system is significantly less effort to implement than two separate systems.
5. ISO 42001 and the EU AI Act — how they fit together
ISO 42001 and the EU AI Act address overlapping concerns — responsible AI governance — but from different perspectives. The EU AI Act is a legal regulation with mandatory requirements and financial penalties for non-compliance. ISO 42001 is a voluntary management system standard that provides a structured framework for meeting those requirements.
The European Commission and the EU AI Office have referenced ISO 42001 in guidance documents as a framework that can help organisations meet EU AI Act obligations. While ISO 42001 certification does not automatically mean EU AI Act compliance — the Act has specific technical requirements that go beyond what the standard covers — implementing ISO 42001 creates the governance infrastructure that makes EU AI Act compliance significantly more manageable.
Specifically, ISO 42001 implementation helps with:
- AI system inventory and risk classification — the Clause 4 and 6 requirements create the documentation of AI systems and their risks that the EU AI Act requires.
- Technical documentation — the Annex A controls and Clause 8 impact assessment requirements produce documentation that overlaps significantly with EU AI Act technical documentation requirements.
- Human oversight — ISO 42001 requires defined human oversight processes for AI systems, which maps directly to EU AI Act Article 14 requirements.
- Continuous monitoring — the Clause 9 performance evaluation requirements create the monitoring infrastructure the EU AI Act requires for post-deployment oversight.
- Incident management — ISO 42001 requires AI incident management processes that align with EU AI Act incident reporting requirements.
For organisations facing EU AI Act obligations, implementing ISO 42001 as part of the compliance programme is a sound approach — it provides a structured framework that addresses many of the Act's requirements while also producing a certification that demonstrates AI governance capability to customers and regulators.
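A compliance programme often records this overlap as a traceability matrix. The sketch below links the obligation areas listed above to the ISO 42001 elements that support them; the pairings are a simplification for illustration, not a legal compliance matrix:

```python
# Illustrative traceability: EU AI Act obligation areas -> supporting
# ISO 42001 elements, following the mapping described in the text.
MAPPING: dict[str, list[str]] = {
    "inventory and risk classification": ["Clause 4", "Clause 6"],
    "technical documentation": ["Clause 8", "Annex A controls"],
    "human oversight": ["human oversight processes"],
    "post-deployment monitoring": ["Clause 9"],
    "incident reporting": ["Clause 8 incident management"],
}


def covered_obligations(implemented: set[str]) -> list[str]:
    """Obligation areas whose supporting elements are all implemented."""
    return [
        obligation
        for obligation, elements in MAPPING.items()
        if all(element in implemented for element in elements)
    ]
```

A matrix like this makes gaps visible: any obligation area whose supporting elements are not yet implemented is a concrete item on the compliance roadmap.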
6. The certification process
ISO 42001 certification follows the same process as other ISO management system certifications. It involves three stages.
Stage 1 audit — documentation review
A certification body — an accredited third-party organisation — reviews your AI management system documentation to assess whether it meets the requirements of the standard. This audit is primarily desk-based. The auditor reviews your AI policy, procedures, risk assessments, impact assessments, and other documentation to identify any gaps. The output is a list of areas to address before the Stage 2 audit.
Stage 2 audit — implementation assessment
The auditor visits your organisation — in person or remotely — to verify that the management system described in your documentation is actually implemented and operating effectively. They will interview staff, review evidence of processes in action, and test whether controls are working as described. If nonconformities are found, they must be addressed before certification can be awarded.
Certification and surveillance audits
If both audit stages are passed, the certification body awards ISO 42001 certification. The certificate is valid for three years, subject to annual surveillance audits that verify the management system continues to operate effectively. At the three-year mark, a full recertification audit is conducted.
Certification bodies that currently offer ISO 42001 certification include BSI, Bureau Veritas, SGS, TÜV Rheinland, and Lloyd's Register, among others. The choice of certification body matters — for organisations seeking EU recognition, choosing a body accredited by a national accreditation authority within the EU or UK is recommended.
7. What implementation actually involves
Implementing ISO 42001 is a management project, not a technology project. The work involves defining policies, designing processes, conducting assessments, producing documentation, and establishing review cycles. Here is what the main workstreams look like.
AI policy and governance structure
The first deliverable is an AI policy — a document approved by top management that sets out the organisation's commitments and principles for responsible AI. Alongside the policy, you need a governance structure: who is responsible for AI governance, how decisions are made, and how escalation works. For most organisations, this means defining an AI governance role or committee and establishing clear decision rights.
AI system inventory and impact assessment
Every AI system in scope must be documented and assessed for its potential impact on individuals, society, and the organisation. The impact assessment is the centrepiece of ISO 42001 implementation — it drives which Annex A controls apply to each system and what level of oversight is required. The assessment considers the system's purpose, the decisions it influences, the populations affected, the data it uses, and the potential for bias or harm.
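The standard does not prescribe a scoring method, but many organisations reduce the impact assessment to a simple rubric. A minimal Python sketch — the factor names, weights, and thresholds below are assumptions for illustration, not requirements of the standard:

```python
# Illustrative impact-level rubric: each factor is scored 1 (negligible)
# to 5 (severe). Factor names and thresholds are assumptions, not
# prescribed by ISO 42001.
FACTORS = (
    "decision_consequence",      # how consequential are the decisions influenced?
    "population_vulnerability",  # how vulnerable are the affected populations?
    "data_sensitivity",          # how sensitive is the data the system uses?
    "bias_potential",            # how plausible is biased or harmful output?
)


def impact_level(scores: dict[str, int]) -> str:
    """Classify a system as low/medium/high impact from 1-5 factor scores."""
    for factor in FACTORS:
        if not 1 <= scores[factor] <= 5:
            raise ValueError(f"{factor} must be scored 1-5")
    total = sum(scores[factor] for factor in FACTORS)
    # A severe decision consequence forces "high" regardless of the total.
    if total >= 16 or scores["decision_consequence"] == 5:
        return "high"
    if total >= 10:
        return "medium"
    return "low"
```

Whatever rubric is used, the assessment record — scores, rationale, and resulting impact level — is the evidence an auditor will ask to see for each system.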
Annex A control selection and implementation
Based on the impact assessment, the organisation selects the Annex A controls that apply to each AI system. Annex A covers areas including data governance for AI, human oversight mechanisms, transparency and explainability, incident management, and supplier management for AI. The controls selected must be implemented — not just documented — and evidence of implementation must be maintained.
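The selection step can be sketched as a baseline plus impact-driven additions. The control-area names below paraphrase Annex A themes rather than quoting real control identifiers; in practice, selection must be justified per system and recorded, for example in a statement of applicability:

```python
# Illustrative control selection driven by impact level. Control-area
# names are paraphrases of Annex A themes, not real identifiers.
BASELINE = {"data governance", "incident management", "supplier management"}

ADDITIONAL = {
    "medium": {"transparency and explainability"},
    "high": {"transparency and explainability", "human oversight"},
}


def select_controls(impact: str) -> set[str]:
    """Return the control areas to implement for a system at this impact level."""
    return BASELINE | ADDITIONAL.get(impact, set())
```

The structural point is that control selection is an output of the impact assessment, not an independent checkbox exercise: a higher-impact system carries more controls and more evidence obligations.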
Policies and procedures
The standard requires documented policies and procedures covering the key aspects of AI governance — how AI systems are developed, how impact assessments are conducted, how incidents are managed, how suppliers are assessed, how training is provided, and how the management system is reviewed. For organisations that already have ISO 27001, many of these procedures can be extensions of existing documents rather than new ones.
Internal audit and management review
The standard requires a programme of internal audits to verify that the management system is operating as designed, and management reviews to evaluate its effectiveness and drive improvement. These are ongoing activities — not one-time exercises — that must be embedded into the organisation's operating rhythm.
8. How long it takes and what it costs
The implementation timeline for ISO 42001 depends on the size and complexity of the organisation, the number and complexity of AI systems in scope, and whether the organisation already has ISO 27001 in place.
A typical implementation runs in five phases:
- Scoping and gap assessment — define the scope of the management system, inventory AI systems, conduct a gap assessment against the standard's requirements, and produce a project plan and implementation roadmap.
- Governance foundations — draft and approve the AI policy, define the governance structure and assign roles, and establish the documentation framework.
- Impact assessment and controls — conduct impact assessments for each AI system in scope, select Annex A controls, and implement them. This is the longest phase and depends heavily on the number and complexity of systems.
- Documentation and internal audit — complete the documentation suite, deliver awareness training, conduct the internal audit, and address any nonconformities found.
- Certification — Stage 1 documentation review, address findings, Stage 2 implementation assessment, and certification.
For a small to medium organisation with a small number of AI systems, implementation typically takes four to six months from start to certification. For larger organisations with many AI systems, six to twelve months is more realistic. Organisations that already hold ISO 27001 typically complete implementation in three to four months, since much of the management system infrastructure already exists.
On cost: Implementation costs vary significantly based on scope and whether the organisation uses external support. As a rough guide, expect implementation support costs of ₹15L to ₹25L for Indian organisations and €18,000 to €30,000 for European organisations, plus certification body fees of approximately ₹3L to ₹8L or €4,000 to €10,000 depending on the certification body and scope. These are rough estimates — actual costs depend heavily on the complexity of the AI systems in scope and the existing maturity of the organisation's governance infrastructure.
Thinking about ISO 42001 certification?
We implement ISO 42001 for regulated enterprises — building the management system, conducting impact assessments, and preparing you for certification. The scoping conversation is 30 minutes.
Book a Scoping Call →

9. Where to start
The most common mistake organisations make when approaching ISO 42001 is treating it as a documentation project — producing policies and procedures to satisfy an auditor without building a management system that actually governs AI. This produces a certificate but not capability. Auditors are increasingly skilled at identifying management systems that exist on paper but not in practice.
The right starting point is understanding what AI systems you have, what risks they introduce, and what governance gaps exist between your current state and what ISO 42001 requires. This gap assessment — done properly — typically takes one to two weeks and produces a clear implementation roadmap.
From there, the sequence is:
- Define the scope — which AI systems, which parts of the organisation, which geographies.
- Establish governance — who owns the AI management system, what their authority is, how decisions are made.
- Conduct impact assessments — for each AI system in scope, assess its potential impact and determine which controls apply.
- Implement controls — build the policies, processes, and technical measures required for each system.
- Document — produce the documentation the standard requires and that an auditor will review.
- Test — internal audit to verify the system works as designed before inviting the certification body in.
- Certify — Stage 1 and Stage 2 audit with your chosen certification body.
If you already hold ISO 27001, step one should be an integration assessment — understanding which elements of your existing management system can be extended to cover ISO 42001 requirements, and which new elements are needed. The integration opportunity is significant and can reduce implementation time and cost substantially.
The bottom line
ISO 42001 is becoming the benchmark for responsible AI governance in the same way that ISO 27001 became the benchmark for information security. The organisations that implement it now — while it is still a differentiator rather than a minimum requirement — will be better positioned to win enterprise business, satisfy regulators, and demonstrate AI governance capability to boards and auditors.
The standard is not onerous if approached correctly. It is a management framework — a structured way of governing AI that most organisations will benefit from regardless of whether they pursue certification. The certification is evidence of the framework, not the framework itself.
Start with a gap assessment. Understand where you are and where you need to get to. Everything else follows from that clarity.