Everything you need to know about the European AI Act

The European AI Regulation (AI Act) is the world's first comprehensive AI law and sets out rules for the use of Artificial Intelligence (AI) in the European Union (EU). A provisional political agreement was reached by the European countries in late 2023, and on March 13, 2024, the AI Act was approved by the European Parliament. In this article, Joost van Dongen explains what the AI Act entails, its purpose and its implications.

#tech
#AI

Date: May 24, 2024

Modified: May 30, 2024

Written by: Joost van Dongen

Reading time: +/- 5 minutes

What is the purpose of the AI Act?

The AI Act, like the Data Act and NIS-2, is part of the European Union's digital strategy. With the AI Act, the EU aims to better control the conditions for the development and use of this innovative technology. A key principle of the AI Act is that AI systems covered by it must be secure, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should not operate fully autonomously: human oversight is required to prevent harmful effects.

What are AI systems under the AI Act?

Under the AI Act, an AI system is defined as a machine-based system designed to operate with varying levels of autonomy and that, for explicit or implicit objectives, infers from the input it receives how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments.

A distinguishing feature of an AI system, compared with conventional software, is that its output is not predetermined by a fixed algorithm.

The definition was deliberately kept broad to ensure that the AI Act will not quickly become obsolete. The AI Act will apply to the placing on the market, putting into service and use of AI systems in the Union. The Act will therefore apply not only to providers of AI systems, but also to any importer, distributor or professional user.

Some key tenets of the AI Act:

  1. The AI Act classifies AI systems based on their risk. Roughly speaking, there are three risk classifications:
    • Unacceptable risk: these are AI systems that pose unacceptable risks (think social scoring systems and manipulative systems). Such AI systems are prohibited under the AI Act;
    • High risk: most of the AI Act deals with AI systems that are high risk. The AI Act stipulates that two types of AI systems are high risk, namely:
      • AI systems used as (safety component of) a product covered by EU harmonization legislation. One can think of regulations for example in the field of civil aviation, vehicle safety, marine equipment, toys and personal protective equipment (Annex I, AI Act);
      • AI systems listed in Annex III of the AI Act, including in the areas of biometrics, critical infrastructure, education and vocational training, employment, human resource management, benefits, migration, democratic processes and law enforcement. Note that there is an exception for systems that merely perform a procedural task or enhance the result of a previously completed human activity.

        Providers of high-risk AI systems are required to meet strict conditions of security, transparency and traceability. Such systems must include a risk management system. There are also strict requirements for the quality of datasets used to train with. A high-risk AI system must additionally include automatic logging, have technical documentation available, provide instructions for use, meet various cybersecurity requirements and have a system of human oversight.
    • Low risk: only limited requirements apply to such AI systems, mainly lighter transparency obligations. End users must be made aware that they are dealing with AI.
  2. The AI Act introduces a mechanism whereby an entity that is not the original provider can nonetheless be considered a provider. For a high-risk AI system, that entity then becomes subject to the same strict obligations;
  3. Importers and distributors of AI systems must verify system compliance. Distributors must verify, among other things, that the AI system bears the required CE marking;
  4. Failure to comply with the AI Act may result in fines. As with the GDPR, a supervisory authority will be created that can impose fines for failure to comply with obligations under the AI Act;
  5. Under the AI Act, citizens can bring claims for damages against the manufacturer or provider of an AI system. Liability for damages cannot be excluded, and a reversal of the burden of proof applies.

What are the consequences of the AI Act?

The AI Act will have far-reaching implications for providers, importers, distributors and professional users of AI systems, but it does not stand alone. AI systems must also comply with pre-existing legislation. In the area of intellectual property, there is currently uncertainty as to whether AI-generated work can be copyrighted. What is clear is that AI-generated work can infringe on third-party copyrights. In addition, terms of use of AI systems may affect the use of the output. Parties working with providers of AI systems will have to make contractual agreements. Issues such as liability and accessibility play an important role here. Cybersecurity regulations (think NIS-2) also have a major effect on AI.

When will the AI Act take effect?

The Council of the European Union approved the AI Act on May 21, 2024. The AI Act will apply in full 24 months after its entry into force, but some parts will apply earlier: for example, the ban on AI systems that pose an unacceptable risk (6 months after entry into force) and the codes of conduct (9 months after entry into force).


Stay Focused

As attorneys for business owners, we understand the importance of staying ahead. Together with us, you will have all the opportunities and risks in sight. Do you have questions about the implications of the AI Act? If so, please contact Joost van Dongen or one of our other specialists on our AI team.
