Before exploring the specific classifications and requirements, let’s first take a closer look at how the EU AI Act defines Artificial Intelligence. According to the Act, an AI system has four main elements: it is machine-based and operates with varying levels of autonomy; it may exhibit adaptiveness after deployment; it infers from the input it receives how to generate outputs such as predictions, content, recommendations, or decisions; and those outputs can influence physical or virtual environments.
The final element is critical, as a system that has the potential to alter our surroundings poses risks to society, stressing the need for legislation to protect individuals from negative consequences.
Various AI systems can threaten individuals, but the potential risks differ. Let’s find out how the EU AI Act categorises different risk levels and what this means for specific AI systems.
AI systems classified as minimal risk, such as spam filters, are considered to have little to no impact on individuals’ rights and are essentially unregulated under the Act.

Systems that pose only limited risks to individuals, such as chatbots, are subject to certain transparency requirements when they interact directly with people. General Purpose AI systems (GPAI) also fall under this classification if there is no systemic risk involved, such as a negative impact on public health, fundamental rights, or society.
Related: The future of privacy: Examining the impacts of ChatGPT
High-risk systems are subject to the most stringent requirements and standards under the EU AI Act. These systems are often used in crucial areas such as essential commercial and governmental services, employment, safety components, biometrics, and critical infrastructure.
Typical use cases for these systems include recruiting, credit checks, and admissions. The improper use of such systems could have significant consequences for individuals, which is why the EU AI Act focuses on regulating them.
Lastly, certain systems are categorised as posing unacceptable risks to people’s safety, livelihoods, and rights and are therefore banned under the EU AI Act.
These prohibited systems include those that exploit vulnerabilities, manipulate or mislead individuals, infer people’s emotions in workplaces or educational settings, scrape facial images to build facial recognition databases, use biometric data to categorise people by sensitive attributes, or assign social scores.
Many parties are involved in the AI supply chain, including providers and deployers. In this article, we focus on the role of providers, who develop AI systems and place them on the market. Providers can be companies as well as government agencies.
Remember that much like the GDPR, the EU AI Act has an extraterritorial scope. This means that even if a provider is based outside the EU, they might be required to appoint a representative within the EU.
Watch this on-demand webinar for a deep dive: Video | The EU AI Act II: the providers awaken (dataguard.uk)
Providers of high-risk AI systems face the most stringent requirements as they develop and create the AI systems. Let’s find out which obligations you must meet as a provider.
Providers must ensure their high-risk AI systems comply with the requirements set out in Chapter III, Section 2 (Articles 8-15) of the EU AI Act.
As a provider, you need to establish, implement, document, and maintain a risk management system throughout the lifecycle of high-risk AI systems to effectively identify and mitigate risks—internally and externally.
Related: What is risk management, and how can companies identify risks?
Data sets used by the AI system must be relevant, representative, and, to the extent possible, error-free and complete. Providers must also implement data governance measures that ensure data quality and integrity and address potential biases in the data. Training AI systems on poor-quality data can have a lasting negative impact, especially in the context of high-risk systems.
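To make this more tangible, here is a minimal sketch of an automated data-quality check a provider might run before training, assuming a tabular dataset handled with pandas. The column names and the checks themselves are illustrative, not prescribed by the Act.

```python
import pandas as pd

def basic_data_quality_report(df: pd.DataFrame, group_column: str) -> dict:
    """Minimal data-governance check: completeness and group representation.

    A real pipeline would add schema validation, label-error detection,
    and domain-specific bias metrics.
    """
    return {
        # Share of missing values per column ("complete to the extent possible")
        "missing_ratio": df.isna().mean().to_dict(),
        # Exact duplicate rows can silently skew training
        "duplicate_rows": int(df.duplicated().sum()),
        # Group shares as a crude proxy for representativeness
        "group_shares": df[group_column].value_counts(normalize=True).to_dict(),
    }

# Hypothetical usage with an illustrative recruiting dataset
df = pd.DataFrame({
    "years_experience": [3, 5, None, 7],
    "gender": ["f", "m", "f", "m"],
    "hired": [1, 0, 1, 0],
})
print(basic_data_quality_report(df, group_column="gender"))
```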
Providers must prepare and maintain technical documentation before the system is placed on the market or put into service. This obligation follows the essential baseline of the EU AI Act, which views AI systems as products that need to be regulated.
Providers are required to automatically record events (logs) and maintain these logs over the system's lifetime. Specific documentation must be retained and made available to authorities for at least 10 years to trace particular moments in the system’s lifetime in case of an error. This emphasises the long-term accountability that regulators expect for certain AI systems.
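As an illustration, automatic event recording could start as simply as the sketch below. The JSON Lines format, file name, and event fields are assumptions for the example; a production system would add tamper-evident storage and retention controls to match the mandated retention period.

```python
import json
import logging
from datetime import datetime, timezone

# Append-only JSON Lines file; in production this would feed a
# tamper-evident store retained for the mandated period.
handler = logging.FileHandler("ai_system_events.jsonl")
logger = logging.getLogger("ai_system")
logger.addHandler(handler)
logger.setLevel(logging.INFO)

def record_event(event_type: str, details: dict) -> None:
    """Automatically record a timestamped system event for traceability."""
    logger.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "event_type": event_type,
        "details": details,
    }))

# Hypothetical events around a single prediction
record_event("input_received", {"request_id": "req-001"})
record_event("prediction_made", {"request_id": "req-001", "score": 0.82})
```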
Providers are required to inform users that they are interacting with an AI system and to provide clear instructions for its use. They must also supply deployers with detailed instructions on the operation, limitations, and risks associated with the AI system.
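A minimal sketch of the user-facing disclosure might look like this, assuming a chatbot-style interface; the generate_reply function is a hypothetical stand-in for the actual model call.

```python
AI_DISCLOSURE = (
    "You are interacting with an AI system. "
    "See the instructions for use for its capabilities and limitations."
)

def generate_reply(user_message: str) -> str:
    # Stand-in for the actual model; returns a canned answer here.
    return f"Echo: {user_message}"

def respond(user_message: str) -> str:
    """Wrap the model reply with the required AI disclosure."""
    return f"{AI_DISCLOSURE}\n\n{generate_reply(user_message)}"

print(respond("What are my loan options?"))
```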
Providers must establish appropriate oversight measures, including human review, to ensure that AI systems function as intended and don’t pose risks to individuals' health, safety, or fundamental rights.
This emphasises the relevance of human controls, encouraging critical scrutiny of the impact these systems can have on individuals.
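One common way to implement such oversight is a human-in-the-loop gate that routes uncertain or adverse decisions to a reviewer. The sketch below is illustrative only; the threshold and routing rules are assumptions that would be tuned per use case.

```python
from dataclasses import dataclass

REVIEW_THRESHOLD = 0.90  # illustrative cut-off, tuned per use case

@dataclass
class Decision:
    outcome: str
    score: float
    needs_human_review: bool

def decide(score: float) -> Decision:
    """Route low-confidence or adverse outcomes to a human reviewer.

    A simple human-in-the-loop gate: the system never finalises a
    rejection on its own.
    """
    outcome = "approve" if score >= 0.5 else "reject"
    needs_review = score < REVIEW_THRESHOLD or outcome == "reject"
    return Decision(outcome, score, needs_review)

print(decide(0.95))  # auto-approved
print(decide(0.40))  # rejected, so flagged for human review
```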
Additionally, providers must ensure AI systems are accurate, robust, and secure by implementing measures to mitigate cybersecurity risks throughout the system’s lifecycle.
To ensure compliance with the EU AI Act, providers must implement a documented quality management system (QMS) related to their AI systems. If your organisation already holds an ISO 27001 certification, you may leverage your existing information security management system (ISMS) to fulfil some of the QMS requirements.
Related: 12 benefits of ISO 27001: Compliance and certification
The EU AI Act recognises that processes may fail, as developing an AI system rarely works out perfectly the first time. This obligation regulates how to deal with situations where a system does not conform to the requirements.
In these cases, providers must take corrective actions, including withdrawal, disabling, or recalling non-conforming AI systems, and comply with any information or instruction given by competent authorities.
Providers are required to cooperate with competent authorities following reasoned requests and provide information and documentation related to conformity.
Providers based in third countries outside the EU are required to appoint an authorised representative in the EU to carry out certain tasks.
Distributors, importers, deployers, or other third parties may themselves be considered providers of high-risk systems if they take actions such as rebranding a system or making substantial modifications to it.
Providers must ensure that the high-risk AI system undergoes the relevant conformity assessment procedure before being placed on the market. This assessment verifies that the system meets all regulatory requirements.
This obligation is closely linked to the previous one. It requires providers to draw up and obtain an EU declaration of conformity, stating that the high-risk AI system complies with the relevant requirements of the EU AI Act. This document must be available to regulatory authorities upon request.
The CE marking is a well-known conformity mark for products in the EU. Providers are required to affix it to the high-risk AI system to indicate conformity with the regulation, demonstrating that the system complies with the EU AI Act and other relevant EU legislation.
Providers must register themselves and their AI systems in the EU database before placing a system on the market.
Once your AI system is on the market, you need to establish and document a post-market monitoring system proportional to the nature and risks of the AI technologies involved. This system must ensure continuous compliance and identify any emerging risks.
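As a simple illustration, a post-market monitoring system might start by tracking drift in model outputs. The sketch below checks a single signal, and the tolerance value is an assumption for the example; real monitoring would track many more signals, such as error rates, complaints, and incident reports.

```python
import statistics

def check_for_drift(baseline_scores, recent_scores, tolerance=0.05):
    """Flag a shift in mean model output as a signal for investigation."""
    drift = abs(statistics.mean(recent_scores) - statistics.mean(baseline_scores))
    return drift > tolerance

baseline = [0.61, 0.58, 0.63, 0.60]  # scores observed at deployment
recent = [0.72, 0.70, 0.75, 0.71]    # scores observed in production

if check_for_drift(baseline, recent):
    print("Drift detected: log the finding and assess for emerging risk")
```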
Providers must report any serious incidents to the competent supervisory authority. The reporting timeframe depends on the severity of the incident.
This obligation is similar to other regulations, like the GDPR. Many organisations may already have processes and procedures in place that can be leveraged to comply with the EU AI Act.
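To sketch how this could be operationalised, the snippet below maps incident types to indicative reporting windows. The figures are illustrative assumptions; verify the authoritative timeframes against Article 73 of the EU AI Act before relying on them.

```python
from datetime import date, timedelta

# Indicative reporting windows in days; Article 73 of the EU AI Act
# sets the authoritative timeframes per incident type.
REPORTING_WINDOW_DAYS = {
    "widespread_infringement": 2,
    "death": 10,
    "other_serious_incident": 15,
}

def reporting_deadline(incident_type: str, awareness_date: date) -> date:
    """Compute the latest date to notify the supervisory authority."""
    return awareness_date + timedelta(days=REPORTING_WINDOW_DAYS[incident_type])

print(reporting_deadline("other_serious_incident", date(2025, 3, 1)))
```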
The requirements for high-risk AI systems are the most extensive ones. Still, there are also requirements for providers of GPAI. Discover the requirements you need to meet if you develop and provide a GPAI.
As with high-risk systems, providers of GPAI must create and maintain technical documentation related to the AI system. This documentation should include information on the training and testing processes and the evaluation results.
Providers are also required to ensure transparency when AI systems interact directly with individuals, making it clear to users that they are dealing with an AI system.
Similarly to high-risk systems, GPAI providers must ensure that human oversight measures are implemented for general purpose AI models. These measures should be appropriate to the risk and context of use.
Providers must establish and document a post-market monitoring system for GPAI, similar to the requirement for high-risk systems. This system should be proportional to the nature and risks of the AI system involved.
The EU AI Act recognises that GPAI might also have systemic risks. Therefore, providers must assess and mitigate potential systemic risks that may stem from the development, market placement, or use of GPAI models. If you, as a provider, already have a risk management system in place, this isn’t an obligation that should be difficult to achieve.
As the risks associated with certain AI systems are lower, the EU AI Act imposes fewer requirements on the organisations developing them. When providing AI systems classified as limited risk, your obligations include the following.
Transparency is key under the GDPR, and the same holds for the EU AI Act. Providers of limited-risk AI systems need to give clear instructions on how to use the system, along with the information required to ensure transparency. This helps users understand the AI system’s capabilities and limitations and how their data is processed, even where no personal data is involved.
Providers are also obliged to take measures to develop the AI literacy of both their staff and individuals using AI systems on their behalf.
Providers of limited-risk AI systems are encouraged to adopt and adhere to voluntary codes of conduct to ensure the ethical and responsible use of these systems.
There are compulsory requirements because AI systems can impact our environment, rights and freedoms. Still, the EU AI Act encourages providers of any AI system to adopt best practices, balancing innovative technologies with ethical and responsible innovation.
Now is the time for your business to prepare for the upcoming obligations of the EU AI Act to stay compliant. Evaluate your business needs and strategy regarding the implementation of the AI Act.
Identify the current or planned use of AI systems in your organisation and compare it to the risk classification levels. Proactively plan the implementation of an AI governance framework to stay ahead of the regulations. Be mindful of the staggered enforcement of the Act and prioritise which requirements and risks to address first.
Do you need any additional information about the EU AI Act? Download your complete guide on the EU AI Act and get an overview of everything you need to know.