The AI Regulation: Implications and requirements

On 12 July 2024, AI Regulation (EU) 2024/1689, also known as the AI Act, was published in the Official Journal of the European Union after long negotiations. It entered into force on 1 August 2024. The new regulation imposes extensive requirements on companies, which must meet the first of them as early as 2 February 2025. Many additional requirements will apply from 2 August 2026.

Aim of the AI Regulation

The AI Regulation is intended to create a legal framework for the safe and transparent use of AI systems in the EU. This includes systems that act with a certain degree of autonomy, are adaptable and can generate content, predictions, recommendations or decisions from inputs. The particularly controversial areas of application include not only chatbots such as ChatGPT or Microsoft Copilot, but also technologies such as AI-supported facial recognition, which was used at the Olympic Games in Paris, for example.

Risk-based approach

The AI Regulation takes a risk-based approach and distinguishes between prohibited AI, high-risk AI, General Purpose AI (GPAI) and low- or minimal-risk AI. The higher the risk of an AI system causing harm to society, the stricter the requirements of the AI Regulation. As early as 2 February 2025, applications with unacceptable risk, such as social scoring, are banned. At the heart of the AI Regulation are the requirements for high-risk AI, which will apply from 2 August 2026. The rules for GPAI, which also cover Large Language Models (LLMs), will apply from 2 August 2025. Among other things, the regulation provides for transparency obligations, human oversight mechanisms and detailed documentation and reporting obligations.

Legal problem areas

It is not only the question of the responsible supervisory authority that is still being hotly debated. In Germany, in addition to the data protection supervisory authorities, the Federal Network Agency or even a separate federal authority for digital affairs is under discussion. Regardless of the ongoing wrangling over competences, one thing is already clear: the GDPR remains unaffected by the AI Regulation. As soon as an AI system processes personal data, the data protection supervisory authorities are therefore responsible and the rules of data protection law must be observed. In addition to the classic challenges of data protection law, new questions arise, such as the legal assessment of Large Language Models (LLMs) or the legal basis for processing personal training data.

The rights to training data, prompts and outputs now occupy not only US courts, as in the dispute between the ‘New York Times’ and OpenAI, but also the courts in Germany. The AI Regulation obliges AI providers to implement copyright compliance strategies and establishes transparency obligations for the training of AI models. Many copyright questions remain unanswered today.

Cybersecurity is also one of the legal requirements of the AI Regulation. High-risk AI systems must have an appropriate level of accuracy, robustness and cybersecurity. In the case of products with digital elements, additional requirements of the Cyber Resilience Act (CRA) must also be met.

Implications for companies

Companies that develop, manufacture, sell or use AI systems should adapt to the new requirements now and start reviewing their systems and processes at an early stage. A comprehensive AI strategy that involves all relevant stakeholders and defines technical, legal and business requirements is recommended. It is also clear that a holistic approach is needed when identifying and implementing measures, taking into account the interactions between the various legal acts. Anyone who violates the regulation and uses prohibited AI faces a fine of up to 35 million euros or up to seven percent of global annual turnover, whichever is higher.
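The "whichever is higher" mechanism for prohibited-AI fines under Article 99(3) of the AI Act can be illustrated with a short calculation. This is a simplified sketch: the two thresholds come from the regulation, while the function name and the example turnover figures are invented for illustration only; an actual fine is set by the authorities case by case and these values are only upper bounds.

```python
def max_fine_prohibited_ai(global_annual_turnover_eur: float) -> float:
    """Upper bound of the fine for using prohibited AI under Art. 99(3) AI Act:
    up to EUR 35 million or 7% of total worldwide annual turnover,
    whichever amount is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Smaller company (EUR 100 million turnover): the EUR 35 million figure is higher.
print(max_fine_prohibited_ai(100_000_000))    # 35000000.0

# Large company (EUR 1 billion turnover): 7% = EUR 70 million is higher.
print(max_fine_prohibited_ai(1_000_000_000))  # 70000000.0
```

The point of the "whichever is higher" rule is that large groups cannot treat the EUR 35 million figure as a predictable ceiling: for any turnover above EUR 500 million, the percentage-based amount takes over.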

Conclusion

The AI Regulation presents companies with challenges, but also opportunities. By implementing the necessary measures at an early stage, companies can gain a competitive advantage with AI.

Down­load

reuschlaw Onepager AI Regulation
