
What the EU AI Act means for your business — in plain language.

March 2026 · 15-minute read · Conzept Sparx

The EU AI Act is the world's first comprehensive legal framework for artificial intelligence. It came into force in August 2024 and is now being actively enforced. If your organisation uses, develops, or deploys AI systems that affect people in the European Union — regardless of where you are based — this law applies to you.

This guide explains what the EU AI Act actually requires, in plain language. No legal jargon. No consultant-speak. Just a clear picture of what the law means, which of your AI systems it affects, what you need to do, and what happens if you do not act.

1. What the EU AI Act is and why it exists

The EU AI Act is a regulation — not a directive — which means it applies directly and uniformly across all 27 EU member states without needing to be transposed into national law. It was adopted by the European Parliament in March 2024, entered into force in August 2024, and is being phased in over the following two to three years.

The law exists because AI systems are increasingly being used in situations where they can directly affect people's lives — their access to credit, their employment prospects, their healthcare outcomes, their legal rights. The EU decided that these high-stakes uses of AI needed a regulatory framework to ensure they are safe, transparent, and accountable.

The approach is risk-based. Not all AI is treated the same. A spam filter and a medical diagnosis tool are both AI, but they carry very different risks. The EU AI Act classifies AI systems by the risk they pose and applies proportionate requirements accordingly.

The core principle: AI systems that can significantly affect people's rights, safety, or livelihoods must be demonstrably safe, transparent, and subject to human oversight — before they are deployed, not after something goes wrong.

2. Who it affects — including non-EU companies

The EU AI Act has extraterritorial reach, similar to GDPR. It applies to you if:

  • You are based in the EU and develop or deploy AI systems.
  • You are based outside the EU but your AI systems are used in the EU — for example, a SaaS product used by European customers.
  • You are based outside the EU and the outputs of your AI systems are used in the EU — for example, an AI model that generates content consumed by EU users.
  • You are an importer or distributor of AI systems into the EU market.

This means a company based in India, the US, or the UAE that sells an AI-powered product to European customers is subject to the EU AI Act for those systems. The law follows the users, not just the developer.

The regulation distinguishes between two main types of organisations:

  • Providers — organisations that develop or place AI systems on the market. If you build an AI application and sell it or make it available to others, you are a provider. Providers carry the heaviest compliance obligations.
  • Deployers — organisations that use an AI system in a professional context. If you buy an AI tool and use it in your operations, you are a deployer. Deployers have lighter obligations but are not exempt.

Many organisations are both — they build AI systems for internal use. In that case, both sets of obligations apply.

3. The four risk tiers and what they mean

The EU AI Act organises AI systems into four risk categories. Your obligations depend entirely on which category your systems fall into.

Unacceptable risk
What it covers: AI systems that pose an unacceptable threat to people's rights and safety. Examples: social scoring systems, real-time biometric surveillance in public spaces, AI that manipulates people subliminally.
Obligation: Prohibited outright. These systems cannot be placed on the EU market.

High risk
What it covers: AI systems used in critical areas where errors can significantly harm people. Examples: credit scoring, CV screening, medical diagnostics, educational assessment, law enforcement tools, critical infrastructure management.
Obligation: Must meet extensive requirements before deployment: technical documentation, human oversight, audit trails, conformity assessment, and registration in an EU database.

Limited risk
What it covers: AI systems that interact with people but pose lower risk. Examples: chatbots, AI-generated content, deepfakes.
Obligation: Transparency requirements. Users must be told they are interacting with AI, and certain AI-generated content must be labelled as such.

Minimal risk
What it covers: Most AI systems: spam filters, recommendation engines, AI-powered search, most productivity tools.
Obligation: No mandatory requirements. Voluntary codes of conduct are encouraged.

The most important thing for most organisations is determining whether any of their AI systems fall into the high-risk category. This is where the regulatory burden sits, and where most of the enforcement attention will be focused.

High-risk AI systems are defined in Annex III of the EU AI Act. The list covers eight sectors: biometric identification, critical infrastructure, education, employment, access to essential services (credit, insurance), law enforcement, migration, and administration of justice. If your AI system makes decisions or assists in decisions in any of these areas, it is almost certainly high-risk.
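To make that triage concrete, here is a rough sketch in Python. The sector and practice lists mirror the categories above; the function name and return strings are our own, and a real classification is a documented legal assessment, not a dictionary lookup.

    # Illustrative first-pass triage only; not a substitute for legal analysis.
    ANNEX_III_SECTORS = {
        "biometric identification", "critical infrastructure", "education",
        "employment", "essential services", "law enforcement",
        "migration", "administration of justice",
    }
    PROHIBITED_PRACTICES = {
        "social scoring", "subliminal manipulation",
        "real-time public biometric surveillance",
    }

    def triage(use_case: str, interacts_with_people: bool) -> str:
        if use_case in PROHIBITED_PRACTICES:
            return "unacceptable risk: prohibited outright"
        if use_case in ANNEX_III_SECTORS:
            return "high risk: full pre-deployment obligations apply"
        if interacts_with_people:
            return "limited risk: transparency obligations apply"
        return "minimal risk: voluntary codes of conduct"

    print(triage("employment", interacts_with_people=True))       # high risk
    print(triage("spam filtering", interacts_with_people=False))  # minimal risk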

4. What high-risk AI systems must do

If you operate a high-risk AI system, the EU AI Act imposes eight categories of obligation. These are not optional — they are legal requirements that must be met before deployment.

Risk management system

You must establish, implement, document, and maintain a risk management system throughout the lifecycle of the AI system. This means identifying risks before deployment, implementing measures to address them, testing residual risks, and monitoring the system after it goes live. Risk management is not a one-time exercise — it is a continuous process.

Data governance

Training, validation, and testing data must meet quality standards. You must document what data was used, how it was collected, what biases might be present, and how those biases were addressed. Data must be relevant, representative, and sufficiently free of errors for the system's intended purpose.
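As an illustration of what that documentation can look like in a lightweight internal format, here is a sketch; the record fields are our own suggestion, not a template the Act prescribes.

    from dataclasses import dataclass, field

    @dataclass
    class DatasetRecord:
        """Illustrative data-governance record; field names are ours, not the Act's."""
        name: str
        source: str                  # how the data was collected
        intended_purpose: str        # what the system uses it for
        known_biases: list[str] = field(default_factory=list)
        mitigations: list[str] = field(default_factory=list)

    cv_data = DatasetRecord(
        name="cv-corpus-2025",
        source="historical applications, consent obtained at submission",
        intended_purpose="training a CV-screening model",
        known_biases=["under-representation of career-break candidates"],
        mitigations=["re-weighting during training", "quarterly bias audit"],
    )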

Technical documentation

Before deploying a high-risk AI system, you must produce technical documentation covering the system's purpose, architecture, training methodology, performance metrics, known limitations, and the data governance measures applied. This documentation must be maintained and kept up to date throughout the system's lifecycle. It must be available to market surveillance authorities on request.

Record-keeping and audit trails

High-risk AI systems must be capable of automatically logging events relevant to identifying risks and post-market monitoring. These logs must include sufficient information to enable post hoc analysis — specifically, when the system was used, what inputs it received, what outputs it produced, and whether human oversight was applied. Logs must be retained for a minimum period defined by the regulation.
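A minimal sketch of what one such log entry might capture, assuming a simple append-only JSON format. The Act specifies what the logs must make possible, not a schema, so the field names here are illustrative.

    import json
    from datetime import datetime, timezone

    def log_decision(system_id: str, inputs: dict, output: str,
                     reviewed_by: str | None) -> str:
        """Append-only audit record: when, what went in, what came out, who oversaw it."""
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "system_id": system_id,
            "inputs": inputs,
            "output": output,
            "human_reviewer": reviewed_by,   # None means no oversight was applied
        }
        line = json.dumps(entry)
        with open("audit.log", "a") as f:    # retain per the regulation's minimum period
            f.write(line + "\n")
        return line

    log_decision("credit-scoring-v3", {"applicant_id": "A-1042"}, "declined",
                 reviewed_by="j.smith")

The point is that every field a post hoc reviewer needs is written at decision time, not reconstructed afterwards.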

Transparency and information to deployers

Providers must supply deployers with clear and adequate information about the system — its capabilities, its limitations, how it was trained, what human oversight is required, and how to interpret its outputs. This information must be in plain language, not buried in technical specifications.

Human oversight

High-risk AI systems must be designed to allow effective human oversight. This means people must be able to understand what the system is doing, intervene when necessary, and override its decisions. The system must not be designed in a way that makes it practically impossible for humans to meaningfully review its outputs. Human oversight is not just a policy requirement — it must be built into the system technically.
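In engineering terms, that usually means a gate that stops consequential outputs from taking effect until a person confirms or overrides them. A minimal sketch, with hypothetical function and parameter names:

    def apply_decision(model_output: str, human_confirmed: bool,
                       human_override: str | None = None) -> str:
        """No consequential output takes effect without human confirmation;
        the reviewer can substitute their own decision at any point."""
        if human_override is not None:
            return human_override              # the human decision wins outright
        if not human_confirmed:
            raise RuntimeError("blocked: awaiting human review")
        return model_output

    print(apply_decision("approve loan", human_confirmed=True))
    print(apply_decision("approve loan", human_confirmed=False,
                         human_override="decline, refer to underwriter"))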

Accuracy, robustness, and cybersecurity

High-risk AI systems must achieve appropriate levels of accuracy for their intended purpose. They must be resilient to attempts to manipulate their outputs — both through adversarial inputs and through attacks on their underlying infrastructure. Accuracy benchmarks must be documented and the system must be tested against them before deployment.
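A hypothetical pre-deployment gate makes the testing step concrete. The metric names and thresholds below are placeholders, not figures from the Act:

    # Compare measured metrics against the documented benchmarks; any miss blocks release.
    benchmarks = {"accuracy": 0.92, "robustness_under_perturbation": 0.85}
    measured   = {"accuracy": 0.94, "robustness_under_perturbation": 0.81}

    misses = {k: (measured[k], v) for k, v in benchmarks.items() if measured[k] < v}
    if misses:
        print(f"Deployment blocked; below benchmark: {misses}")
    else:
        print("All documented benchmarks met; proceed to conformity assessment.")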

Conformity assessment

Before placing a high-risk AI system on the EU market, providers must conduct a conformity assessment — a documented evaluation confirming that the system meets all the requirements above. For most high-risk systems this can be done internally through self-assessment. For certain categories, notably biometric identification systems, third-party assessment by a notified body is required.

The key implication for builders: All eight of these requirements must be addressed before deployment — not after. An AI system that is built without audit trails, documentation, and human oversight controls cannot simply have them added on later without significant cost and rework. Building governance-first is not just good practice — it is the only way to meet the legal deadline.

5. Enforcement deadlines you need to know

The EU AI Act is being phased in over time. The key dates are:

  • February 2025: Prohibited AI practices banned. Unacceptable-risk systems must already be off the market.
  • August 2025: GPAI (General Purpose AI) model obligations apply. If you develop or provide large foundation models, obligations around transparency and systemic risk management are already active.
  • August 2026: Core obligations for high-risk AI systems apply. This is the most significant deadline for most regulated businesses. By this date, high-risk AI systems must meet all the requirements described above — documentation, human oversight, audit trails, conformity assessment, and database registration.
  • August 2027: High-risk AI systems embedded in regulated products (medical devices, machinery, vehicles) must comply — a 12-month grace period beyond the general high-risk deadline.

August 2026 is the critical deadline for most organisations. It is close enough that organisations which have not yet started compliance work are already behind. Building the required documentation, controls, and oversight mechanisms typically takes six to twelve months — which means the planning and implementation work needs to start now.

A note on existing systems: The EU AI Act applies to new systems placed on the market after the relevant deadlines, but transitional provisions also catch systems already deployed. If you have AI systems currently in production that would be classified as high-risk, you need to assess their compliance position now — not when they are next updated.

6. How it overlaps with GDPR

The EU AI Act and GDPR are complementary and overlap significantly for AI systems that process personal data. Most high-risk AI systems will be subject to both regulations simultaneously. This is not a coincidence — the EU designed them to work together.

The key overlaps are:

  • Data Protection Impact Assessments: GDPR Article 35 requires DPIAs for high-risk processing of personal data. High-risk AI systems that process personal data will almost always require a DPIA. Under guidance from EU supervisory authorities, AI Act technical documentation and DPIA requirements can be partially integrated — but both must be produced.
  • Automated decision-making: GDPR Article 22 restricts solely automated decisions with significant effects on individuals. Many high-risk AI systems will be caught by Article 22, requiring human review, the ability to contest decisions, and clear disclosure to individuals that a decision was made by AI.
  • Data minimisation and purpose limitation: Both regulations require that personal data used in AI systems is limited to what is necessary for the specific purpose. AI systems trained on broad datasets of personal data need to be assessed carefully against both sets of requirements.
  • Lawful basis: GDPR requires a lawful basis for every use of personal data. If your AI system processes personal data in training or inference, you need a documented lawful basis for each use — consent, legitimate interest, or another ground. The AI Act does not replace this requirement — it adds to it.

In practice, this means that a CISO or compliance team dealing with a high-risk AI system needs to run both the EU AI Act compliance process and a GDPR review simultaneously. The good news is that many of the underlying activities overlap — data mapping, risk assessment, documentation — and can be done together efficiently if approached correctly.

7. What non-compliance costs

The EU AI Act includes significant financial penalties for non-compliance. The fine structure is tiered by severity:

  • Up to €35 million or 7% of global annual turnover, whichever is higher, for deploying prohibited AI systems.
  • Up to €15 million or 3% of global annual turnover, whichever is higher, for failing to meet high-risk AI system requirements.
  • Up to €7.5 million or 1% of global annual turnover, whichever is higher, for providing incorrect, incomplete, or misleading information to authorities and notified bodies.

For small and medium enterprises, the caps are more lenient — fines are capped at the lower of the percentage or the fixed amount. But for mid-market and larger organisations, the percentage-based cap is the binding constraint — and 3% of global turnover is a material number for any regulated enterprise.
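The "whichever is higher" mechanics for larger organisations, and the "whichever is lower" rule for SMEs, reduce to a few lines of arithmetic. A sketch using the high-risk tier's figures:

    def max_fine(annual_turnover_eur: float, fixed_cap_eur: float,
                 pct_cap: float, is_sme: bool = False) -> float:
        """Fine ceiling: the higher of the fixed amount and the turnover percentage,
        except for SMEs, where the lower of the two applies."""
        pct_amount = annual_turnover_eur * pct_cap
        return min(fixed_cap_eur, pct_amount) if is_sme else max(fixed_cap_eur, pct_amount)

    # High-risk non-compliance tier: EUR 15 million or 3% of global turnover.
    print(max_fine(2_000_000_000, 15_000_000, 0.03))            # 60,000,000: 3% binds
    print(max_fine(50_000_000, 15_000_000, 0.03, is_sme=True))  # 1,500,000: SME cap binds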

Beyond financial penalties, non-compliant systems can be ordered off the market. For businesses whose products or services depend on AI, a forced withdrawal is potentially more damaging than a fine. Reputational damage from a public enforcement action compounds the commercial impact further.

It is also worth noting that EU data protection authorities — the same bodies that enforce GDPR — have been given a role in EU AI Act enforcement for AI systems that process personal data. This means a single AI system could attract enforcement action from both a market surveillance authority and a DPA simultaneously.

8. Where to start

The EU AI Act is complex, but the path to compliance is not mysterious. It follows a logical sequence that most regulated organisations can follow with the right guidance.

Step one — Inventory your AI systems

You cannot comply with obligations you do not know you have. Start by creating a comprehensive inventory of every AI system your organisation uses, develops, or deploys — including AI embedded in third-party software you have procured. Many organisations are surprised by how many AI systems they have when they actually look.
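A sketch of what an inventory entry might look like in a simple structured format; the fields are our own suggestion, not a regulatory template.

    inventory = [
        {
            "system": "cv-screening-tool",
            "role": "provider",             # provider, deployer, or both
            "vendor": "built in-house",
            "affects_eu_users": True,
            "candidate_risk_tier": "high",  # to be confirmed in step two
        },
        {
            "system": "helpdesk-chatbot",
            "role": "deployer",
            "vendor": "third-party SaaS",
            "affects_eu_users": True,
            "candidate_risk_tier": "limited",
        },
    ]

    # Surface anything that needs the full high-risk workstream first.
    for s in inventory:
        if s["candidate_risk_tier"] == "high":
            print(f"{s['system']}: run the step-three gap assessment")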

Step two — Classify each system by risk tier

For each system in your inventory, determine its risk classification under the EU AI Act. The key question is: does this system fall within one of the eight high-risk sectors in Annex III? If the answer is yes or possibly yes, treat it as high-risk until you can document a clear rationale for a lower classification.

Step three — Gap assess your high-risk systems

For every high-risk system, assess its current state against the eight requirements described in this guide. Where does documentation exist? Where are audit trails in place? Where is human oversight designed in? The gaps this assessment reveals become your compliance roadmap.
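One lightweight way to track the outcome, sketched below with the eight requirement names from section 4; the pass/fail values are placeholders for your own findings.

    REQUIREMENTS = [
        "risk management system", "data governance", "technical documentation",
        "record-keeping and audit trails", "transparency to deployers",
        "human oversight", "accuracy, robustness, cybersecurity",
        "conformity assessment",
    ]

    # Placeholder status per requirement for one system; fill from your own review.
    status = {
        "risk management system": True,
        "data governance": False,
        "technical documentation": True,
        "record-keeping and audit trails": False,
        "transparency to deployers": True,
        "human oversight": False,
        "accuracy, robustness, cybersecurity": True,
        "conformity assessment": False,
    }

    gaps = [r for r in REQUIREMENTS if not status[r]]
    print(f"{len(gaps)} gaps -> compliance roadmap items: {gaps}")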

Step four — Build what is missing — or retrofit it

For systems still in development, embed the required controls into the build — governance-first development is always faster and cheaper than retrofitting. For systems already in production, assess what can be added without a full rebuild and prioritise accordingly. Some retrofitting will require architectural changes — the sooner you identify this, the more time you have to address it.

Step five — Produce the required documentation

Technical documentation, risk management records, conformity assessments, and data governance documentation must be produced and maintained. This is not a one-time exercise — it must be updated whenever the system changes materially.

Step six — Register and sustain

High-risk AI systems must be registered in the EU AI Act database before deployment. After deployment, ongoing monitoring, incident reporting processes, and regular reviews of documentation must be maintained. Compliance is not a project that ends at deployment — it is an ongoing operating state.

Not sure where your AI systems stand?

Take the free EU AI Act Readiness Assessment — 18 questions, 5 minutes, immediate scored report across 6 compliance dimensions.

Take the Free Assessment →

The bottom line

The EU AI Act is not a future concern. It is being enforced now for prohibited systems and GPAI models, and high-risk obligations apply from August 2026 — which for most organisations means compliance work needs to be underway today.

The organisations that will find this manageable are those that approach it systematically: inventory first, classify second, gap assess third, build or retrofit fourth, document fifth, sustain always. Those that wait for an enforcement action or a customer demand to trigger the work will pay significantly more — in time, in cost, and in risk.

The EU AI Act is ultimately good for regulated enterprises that take it seriously. It creates a level playing field where AI systems that can be defended and explained will be trusted — and those that cannot will be removed from the market. Building AI the right way from the start is not just a compliance requirement. It is a competitive advantage.

Ready to make your AI systems EU AI Act compliant?

We build AI applications with compliance embedded from day one — and retrofit governance into systems already in production. The scoping conversation takes 30 minutes.

Book a Scoping Call