AI: What are we talking about?

The AI Act was formally approved and published on July 12, meaning it enters into force on August 1, 2024. How did this process go? What can we expect? And above all, what can, or should, we do ourselves? At the EUR we are working on guidelines regarding AI and the interpretation of the AI Act to support the responsible use of AI. Guided by the question of what kind of social impact we want to create as an organization, these guidelines aim to promote a use of AI that is in line with our Erasmian values, which place the interaction between humans and AI at their heart.

Preparation

From 2018 to April 21, 2021, the European Commission (EC) worked on a proposal to regulate AI within the European Union (EU). During this process, the EC was advised by various parties. In particular, a High-Level Expert Group (HLEG) was established, which a year later presented its Ethics Guidelines for Trustworthy AI.

On February 19, 2020, the EC released a White Paper on Artificial Intelligence endorsing the ambition to become a global leader in innovation in the field of the data economy and its related applications, as also set out in 2018 in the European Data Strategy.

By then, EU leaders in the Council of the European Union were also considering the digital transition. They requested the EC to come up with a proposal to:

  • increase investments in AI for both research and innovation;
  • achieve better coordination between European research centers; and
  • provide a clear, objective definition of “high-risk AI systems”.

On April 21, 2021, the EC presented the proposal for the AI Act, accompanied by an impact assessment (i.e., an instrument created to map the risks and consequences of AI).

Legislative process

The actual legislative process only started from that moment on. What did that look like? 

  1. First, the Council of the European Union, consisting of the 27 ministers of the Member States with digitalization in their portfolio, drafted its opinion on the proposal (via the so-called “general approach”). The AI Act, after all, has direct effect in the Member States.
  2. The proposal was then analyzed by the European Parliament, consisting of approximately 700 directly elected representatives. This led to several proposed amendments, announced in June 2023.
  3. Negotiations then took place behind closed doors between all the parties mentioned above, the so-called trilogue.
  4. On December 9, 2023, a provisional political agreement was reached.
  5. On May 21, 2024, the AI Act was adopted by the Council of the European Union.

What now?

On July 12, 2024, the AI Act was officially published in the Official Journal of the EU. This means that the AI Act enters into force on August 1, 2024.

The following deadlines should be taken into consideration:

  • August 2, 2024: Obligations apply to high-risk AI systems already in use if they undergo a change in their design or intended purpose
  • November 2, 2024: All Member States must have designated a supervisory authority
  • February 2, 2025: Use of prohibited (banned) AI systems must have ceased
  • August 2, 2025: Provisions for generative AI models take effect, and all Member States must have national regulations setting penalties in place
  • February 2, 2026: EU guidelines for the assessment of high-risk AI systems must be ready
  • August 2, 2026: Most articles, including the provisions for high-risk AI, take effect
  • August 2, 2027: Requirements for high-risk AI systems in products enter into force
  • August 2, 2030: Provisions become applicable to AI systems used by government organizations that were already in use before the AI Act's entry into force

What are prohibited AI systems and what are high-risk AI systems? 

AI systems that pose an unacceptable risk are prohibited. This includes AI systems for harmful manipulation, social scoring, and emotion recognition in education. 

The AI Act further labels AI systems as high-risk if they are used in certain specific areas. These include, for example, AI systems for evaluating candidates for a vacancy or for admitting students to educational programs.

What else can we expect from Europe?  

The EC has recently established a European AI Office that supports the development and use of trustworthy AI while protecting against AI risks. This AI Office will be the center of AI expertise and will be responsible for creating a single European AI governance system.

In addition, the EC has launched an AI innovation package to support startups and SMEs in developing trustworthy AI that complies with EU values and rules. Together with the AI Office, this should contribute to new use cases and emerging applications in Europe's diverse industrial ecosystems and in the public sector.

The EC supports the acceleration of digital technologies and provides funding to back these efforts through the multiannual financial framework. This includes several instruments and funding programs, such as the Digital Europe Programme and Horizon Europe (projects on AI, data, and robotics).

What can we expect from the Netherlands? 

Since 2023, the Autoriteit Persoonsgegevens (the Dutch Data Protection Authority) has acted as the coordinating supervisor of algorithms and AI, a task carried out by its Directie Coördinatie Algoritmes* (Directorate for the Coordination of Algorithms, or DCA). At this stage, the DCA focuses on mapping algorithms and AI, strengthening collaboration between supervisors, working on guidance for the responsible use of AI and algorithms, and coordinating the preparations for supervision of the AI Act.

Developments can be followed in the DCA's half-yearly reports* (link available only in Dutch).

What does this mean for the EUR? 

We have learned from the introduction of the GDPR that waiting until the moment of entry into force is not desirable. For this reason, the AI@EUR program has been working hard on preparations for over a year.

Within the EUR, several other colleagues are also taking serious steps on guidelines and policy documents on AI. The EUR Generative AI User Guideline has already been published, and the EUR will soon have an AI Supervisor. An AI strategy for the EUR is also currently being developed, in which all relevant stakeholders are represented and have their own role. EDIS, CLI, and ECDA are coordinating this effort.

What can you do?

If you are currently using or developing an AI system, please let our team know via ai@eur.nl. The focus is on high-risk AI systems and generative AI. The AI@EUR team can support you with an AI assessment, so that you know which key aspects and steps to consider.

When it comes to purchasing AI systems or systems containing AI, two aspects are important:

  1. Drawing up purchasing conditions that comply with the AI Act, so that it can be assessed whether (potential) suppliers are able to comply with the AI Act.
  2. Making proper contractual arrangements to limit (potential) risks, for example by drawing up General Purchasing Conditions for AI systems.

Let us know if you would like to think along with us or share your insights. 

Let's join forces so that we can comply with the AI Act by the aforementioned deadlines and secure the responsible and reliable use of AI within the EUR.