Frameworks

The EU AI Act: What it means and how to implement it

EU Artificial Intelligence Act (AI Act) compliance guide: map roles, manage risk and governance, classify high-risk AI, meet transparency and CE marking duties.

The Artificial Intelligence Act (AI Act) is the EU's first horizontal law for AI, setting out how AI can be developed, placed on the market and used. It covers providers, importers, distributors and deployers, whether their AI systems operate within the EU or simply produce outputs that affect people there. The goal is to reduce risks to health, safety and fundamental rights while keeping the door open for trustworthy innovation.

One important distinction from most compliance frameworks: the AI Act is not primarily concerned with risks to your organisation. It regulates the risks that AI systems produce for individuals, groups and society at large. A hiring tool that discriminates, a credit model that excludes unfairly, a chatbot that misleads: these are the harms the act targets. This outward-facing risk lens shapes everything from classification to enforcement.

The act takes a risk-based approach: the greater the potential for harm, the stricter the controls. Certain AI practices are banned outright, while high-risk systems are permitted only under tight governance and technical requirements. For lower-risk uses, obligations stay light, calling for basic transparency duties rather than full compliance regimes.

Different levels of risk

High-risk systems include those deployed in sensitive domains such as law enforcement, employment, and critical infrastructure. These systems must satisfy extensive requirements covering risk management, human oversight, data governance, and technical robustness.

Limited-risk systems include chatbots, image and text generators, and basic recommendation systems. These face only transparency obligations, principally a requirement to disclose that users are interacting with AI.

Minimal-risk systems, such as spam filters, are entirely unregulated under the act, though voluntary codes of conduct are encouraged.

Because the act builds on existing EU law rather than replacing it, organisations already working with GDPR, product safety rules or consumer protection frameworks have a head start. Much of the required governance can be layered onto what is already in place, though the bar for documentation and accountability rises across the entire AI lifecycle.

Free compliance assessment: Check your AI Act compliance status

What the AI Act requires

The act is structured around risk tiers, role-based obligations and lifecycle controls. It also introduces specific rules for general-purpose AI models and targeted transparency duties for everyday AI interactions. These objectives translate into concrete instruments: bans and high-risk controls protect fundamental rights, CE marking and harmonised standards provide legal certainty, and regulatory sandboxes along with codes of practice support innovation.

It is worth noting that the act's definition of an AI system is deliberately technology neutral. It covers any software that infers from inputs and generates outputs such as predictions, recommendations or decisions, whether built on machine learning, logic-based methods or statistical approaches. Many tools that teams do not currently label as AI may still qualify, making a thorough inventory an essential first step.

Providers, deployers and the value chain

The AI Act assigns obligations according to where you sit in the value chain; the most important distinction is between providers and deployers of high-risk AI systems. Providers are the organisations that develop an AI system or place it on the market under their own name. They carry the heaviest regulatory burden: technical documentation, risk management, quality management, conformity assessment, CE marking and post-market monitoring all fall on the provider.

Deployers are the organisations that use an AI system in their own operations. Their obligations are lighter but still significant, especially for high-risk systems. Deployers of high-risk AI must ensure human oversight, use the system in line with the provider's instructions, and monitor for risks in their specific context. They are also required to carry out a fundamental rights impact assessment before putting a high-risk system into use, which assesses the potential impacts the system may have on individuals, groups and society. Importers and distributors have their own duties around verification and traceability, but for most readers the provider/deployer distinction is the one that matters.

Understanding which role you play for each AI system is the starting point for compliance. An organisation can be a provider for one system and a deployer for another, and the obligations differ substantially.

Risk-based tiers and prohibited practices

The act defines four risk bands: prohibited practices, high-risk systems, limited-risk (transparency) uses and minimal-risk uses. The regulatory burden scales directly with the risk classification.

High-risk systems require a substantial compliance programme covering the full lifecycle. 

Limited-risk systems need only disclosure and transparency measures.

Minimal-risk systems face no specific obligations, though voluntary codes of conduct are encouraged.

This means that identifying the risk level of each system is just as important as identifying your role, because the two together determine how much work compliance will require.
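
As an illustration of how these two dimensions combine, the sketch below maps a system's role and risk tier to an indicative duty list in Python. The tier names and duty summaries are simplifications of the act's text, not legal definitions.

```python
# Illustrative only: a rough mapping from role and risk tier to the scale of
# duties described above. Tier names and duty summaries are simplifications,
# not legal definitions.
from enum import Enum

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

def indicative_duties(role: Role, tier: RiskTier) -> list[str]:
    """Return a rough, non-exhaustive duty list for one AI system."""
    if tier is RiskTier.PROHIBITED:
        return ["do not place on the market or use"]
    if tier is RiskTier.HIGH:
        if role is Role.PROVIDER:
            return [
                "risk management system",
                "quality management system",
                "technical documentation and logging",
                "conformity assessment and CE marking",
                "post-market monitoring and incident reporting",
            ]
        return [
            "follow the provider's instructions for use",
            "ensure human oversight",
            "monitor context-specific risks",
            "fundamental rights impact assessment before use",
        ]
    if tier is RiskTier.LIMITED:
        return ["disclose AI interaction / label synthetic content"]
    return []  # minimal risk: voluntary codes of conduct only

print(indicative_duties(Role.DEPLOYER, RiskTier.HIGH))
```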

Prohibited practices are narrow but strict. They include:

  • AI that manipulates behaviour in ways likely to cause significant harm
  • Social scoring by public authorities that leads to unfair or harmful treatment
  • Real-time remote biometric identification in public spaces, restricted to tightly defined law enforcement exceptions
  • Several biometric categorisation uses that infer sensitive attributes

High-risk classification and obligations

The operational core of the act is the regulation of high-risk AI systems, which the law lists in two groups. The first covers AI used as a safety component in EU-regulated products such as medical devices, machinery and vehicles. The second covers stand-alone AI used in sensitive decisions, including access to education, recruitment and worker management, credit scoring, essential services, law enforcement, migration and justice.

The compliance burden for high-risk systems is substantial. Providers must:

  • Run a risk management system
  • Establish a quality management system
  • Implement data governance for training, validation and testing datasets
  • Maintain documentation, logging, accuracy, robustness, cybersecurity and human oversight
  • Operate a post-market monitoring system and incident reporting

Deployers of high-risk systems face fewer but still meaningful obligations. They must use the system according to the provider's instructions, ensure human oversight, monitor for context-specific risks, and carry out a fundamental rights impact assessment before deployment.
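
As a rough illustration, a deployer might track these pre-deployment duties per high-risk system in a simple record like the Python sketch below; the field names and readiness check are our own assumptions, not a format prescribed by the act.

```python
# Illustrative sketch of a deployer's pre-deployment record for a high-risk
# AI system. Field names are our own; the act prescribes the duties, not a format.
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class HighRiskDeploymentRecord:
    system_name: str
    provider_instructions_reviewed: bool = False
    human_oversight_owner: Optional[str] = None   # named person or function
    fria_completed_on: Optional[date] = None      # fundamental rights impact assessment
    context_risks_monitored: list[str] = field(default_factory=list)

    def ready_to_deploy(self) -> bool:
        """All pre-deployment duties named above must be in place."""
        return (
            self.provider_instructions_reviewed
            and self.human_oversight_owner is not None
            and self.fria_completed_on is not None
        )

record = HighRiskDeploymentRecord(system_name="recruitment screening tool")
record.provider_instructions_reviewed = True
record.human_oversight_owner = "HR operations lead"
record.fria_completed_on = date(2025, 6, 1)
print(record.ready_to_deploy())  # True
```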

Transparency for everyday AI interactions

Even AI uses that fall outside the high-risk category can require safeguards. The act mandates clear disclosure when people interact with AI such as chatbots, and it requires labels for synthetic or manipulated media that could be mistaken for authentic. Together, these measures reduce deception risk and help users calibrate their trust.

In practice, providers and deployers need to adjust interfaces, content pipelines and user notices accordingly. Staff must understand when AI interaction notices and synthetic content labels apply, particularly where these duties intersect with platform policies, marketing workflows and public communications.
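
A minimal sketch of what this can look like in practice is shown below: a chatbot reply wrapped with an AI interaction notice and a generated image tagged with a synthetic content label. The notice wording and metadata keys are illustrative assumptions, not text mandated by the act.

```python
# Illustrative sketch: attach an AI interaction notice to chatbot replies and a
# synthetic-media label to generated content. Wording and metadata keys are our
# own assumptions, not text mandated by the act.

AI_NOTICE = "You are chatting with an AI assistant."

def chatbot_reply(answer: str) -> dict:
    """Wrap a chatbot answer with a clear AI interaction disclosure."""
    return {"notice": AI_NOTICE, "answer": answer}

def label_generated_media(file_name: str) -> dict:
    """Tag AI-generated or manipulated media so it is not mistaken for authentic."""
    return {
        "file": file_name,
        "ai_generated": True,
        "label": "This image was generated or altered by AI.",
    }

print(chatbot_reply("Our opening hours are 9-17 on weekdays."))
print(label_generated_media("campaign_visual_v2.png"))
```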

General-purpose AI models and systemic risk

General-purpose AI models have cross-domain capabilities, which means their providers must maintain technical documentation, publish information about capabilities and limitations, and give downstream users the risk-relevant details they need. Copyright safeguards and training data summaries are part of this transparency obligation.

Models classified as high-impact face stronger duties due to their systemic risk potential, including model evaluations, adversarial testing, cybersecurity controls, incident reporting and resource transparency covering compute and energy use. Providers of these models must cooperate with regulators and support the development of harmonised standards and codes of practice.

Intermediaries that fine-tune or repackage general-purpose models inherit documentation obligations from the original provider, while downstream deployers must independently assess their own use cases and meet whichever high-risk or transparency duties apply in context.

Conformity assessment and CE marking

Before a high-risk system can be placed on the market or put into service, it must undergo conformity assessment. Some assessments rely on internal control backed by technical documentation, while others require review by a notified body. Once a system is found compliant, the provider applies CE marking and registers it in the EU AI database.

The act also enables harmonised standards and common specifications, which offer practical routes to demonstrating presumption of conformity. Providers must maintain comprehensive technical files and keep them current across versions and retraining cycles.
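
One way to keep the technical file current is to record an entry per model version, as in the hypothetical sketch below; the structure and field names are our own assumptions, not a format defined by the act or the harmonised standards.

```python
# Illustrative sketch: record one technical-file entry per model version so the
# documentation stays current across retraining cycles. The structure is our own
# assumption, not a format defined by the act or harmonised standards.
from dataclasses import dataclass
from datetime import date

@dataclass(frozen=True)
class TechnicalFileEntry:
    model_version: str
    training_data_snapshot: str    # reference to the dataset version used
    evaluation_report: str         # accuracy / robustness test results
    conformity_assessed_on: date
    ce_marked: bool

technical_file: list[TechnicalFileEntry] = []

def register_version(entry: TechnicalFileEntry) -> None:
    """Append a new entry whenever the model is retrained or substantially changed."""
    technical_file.append(entry)

register_version(TechnicalFileEntry(
    model_version="2.1.0",
    training_data_snapshot="dataset-2025-05",
    evaluation_report="eval/2025-05-report.pdf",
    conformity_assessed_on=date(2025, 6, 15),
    ce_marked=True,
))
```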

Governance, enforcement and penalties

Each member state will designate national competent authorities, and notified bodies along with market surveillance authorities will oversee high-risk systems tied to product rules. At the EU level, the AI Office and the European Artificial Intelligence Board will coordinate enforcement, guide standards development, issue guidance and handle cross-border cases.

The penalties for non-compliance are significant. Using prohibited practices can lead to fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher, with lower caps for other infringements such as breaches of high-risk requirements or supplying misleading information. The act scales these caps to reflect organisation size and the gravity of the infringement. Maintaining accurate records and reporting incidents promptly can materially reduce enforcement risk.

How to implement the AI Act in your organisation

Practical implementation works best when tied to existing governance structures, and small teams can stage work to match the phased application timelines rather than tackling everything at once.

The two questions that determine everything else are: what is your role for each AI system (provider or deployer), and what is the system's risk classification? Start by building an inventory of AI systems and features across your organisation, then map each one to a role and a risk band. A deployer using a limited-risk chatbot faces disclosure duties. A deployer using a high-risk recruitment tool needs human oversight arrangements and a fundamental rights impact assessment. A provider of that same tool faces the full compliance programme. Getting this mapping right early prevents both over-engineering and dangerous gaps.
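
A minimal sketch of that first mapping exercise is shown below; the systems and classifications are hypothetical examples, not determinations under the act.

```python
# Illustrative sketch of the first mapping exercise: one row per AI system,
# recording the organisation's role and the system's risk band. Entries are
# hypothetical examples, not classifications from the act.
inventory = [
    {"system": "customer support chatbot", "role": "deployer", "risk_band": "limited"},
    {"system": "CV screening tool", "role": "deployer", "risk_band": "high"},
    {"system": "fraud detection model sold to clients", "role": "provider", "risk_band": "high"},
    {"system": "spam filter", "role": "deployer", "risk_band": "minimal"},
]

# Flag the systems that will need the heaviest compliance work first.
for item in inventory:
    if item["risk_band"] == "high":
        print(f"{item['system']}: {item['role']} duties for a high-risk system")
```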

From there, run structured risk assessments for high-risk items covering safety, bias, robustness, data provenance and rights impact. Stand up a quality management system, update procurement contracts to incorporate AI Act obligations, and align existing DPIAs or security testing with AI risk assessments to avoid duplicate effort. Compliance does not end at launch: models drift and contexts change, so post-market monitoring and periodic reassessment need to be part of the operating rhythm.
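
To keep reassessment in the operating rhythm, a simple trigger like the sketch below can flag systems whose last review has aged past a chosen interval; the 12-month interval is our own assumption rather than a deadline set by the act.

```python
# Illustrative sketch: flag AI systems whose last assessment is older than a
# chosen review interval, so drift and context changes trigger reassessment.
# The 12-month interval is our own assumption; the act does not fix one.
from datetime import date, timedelta

REVIEW_INTERVAL = timedelta(days=365)

assessments = {
    "CV screening tool": date(2024, 3, 1),
    "customer support chatbot": date(2025, 1, 15),
}

def due_for_reassessment(today: date) -> list[str]:
    """Return the systems whose last assessment is older than the review interval."""
    return [name for name, last in assessments.items()
            if today - last > REVIEW_INTERVAL]

print(due_for_reassessment(date(2025, 7, 1)))  # ['CV screening tool']
```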

For security teams, much of this will feel familiar. Threat modelling can extend to adversarial ML risks, secure development can incorporate data lineage and evaluation protocols, and incident response can broaden to cover AI-specific events and regulator notifications. For compliance managers, the alignment opportunity is real: data governance under GDPR maps naturally to AI dataset controls, and a single well-designed control can produce evidence for multiple frameworks.

Cyberday can help centralise your AI inventory, map controls to the AI Act and align them with ISO 27001, NIS2 and GDPR, with dashboards that visualise control coverage across frameworks and provide audit-ready exports.

Get started with Cyberday

Cyberday helps teams operationalise the AI Act alongside ISO 27001, NIS2 and GDPR. You can maintain an AI asset inventory, map obligations to controls and collect evidence once for multiple frameworks. Dashboards track conformity, supplier status and incidents so you can focus effort where risk is highest.

Check your AI Act status now: AI Act Assessment

Webinar: AI Agents in ISMS work

See how AI agents can accelerate the most time-consuming parts of your compliance process, from building the ISMS foundation to risk management, audits, questionnaires, documentation, and training - without sacrificing control, consistency, or auditability.

Watch the webinar