Final version of the AI Act

Companies need to adapt their AI strategy

Following the long-fought political agreement of the EU institutions on the AI Act on 8 December 2023, a final compromise text of the AI Act is now in place (we last reported on this on 22 June 2023 and 15 February 2023). The AI Act represents a risk-based approach to the regulation of AI. It contains the definition of AI, a description of prohibited AI practices, and requirements for high-risk AI and general-purpose AI models (GPAI).

To be on the safe side, companies should incorporate the new legal requirements of the AI Act into their AI strategy.

Scope of application of the AI Act

Translated literally, Art. 3 (1) of the AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.

The definition corresponds to international standards, but is still very broad. Companies should therefore note that any software or machine-aided systems they offer might fall under this definition and must then fulfil the differentiated requirements of the AI Act.

The AI Act is binding for companies that place AI systems or GPAI models on the EU market, regardless of where they are established or based. Importers, distributors, manufacturers, authorised representatives and affected persons in the EU are also addressed by the AI Act.

Prohibited AI practices

The AI Act prohibits the use of AI in a way that is incompatible with the fundamental rights or values of the EU. The following prohibitions are of particular relevance to companies due to their broad definition:

  • Subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting a person’s or a group of persons’ behaviour, that cause or are likely to cause a person or group of persons significant harm;
  • AI systems that exploit any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation;
  • Biometric categorisation systems that individually categorise natural persons based on their biometric data to deduce or infer sensitive information such as their race, political opinions etc.;
  • Social scoring;
  • AI systems and services that use facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage, as well as AI systems that are used for the specific purpose of inferring emotions of a natural person in the areas of workplace and education institutions, except for medical or safety reasons (e.g. for monitoring the fatigue status of a pilot).

High-risk AI systems and comprehensive risk assessment

The risk-based approach of the AI Act, already known from the proposal for an AI regulation, is particularly evident in the classification of high-risk AI and the obligations connected with it. While high-risk AI is subject to most of the obligations stipulated by the AI Act, providers and deployers of “low-risk” AI systems only have to ensure a “sufficient level of AI literacy” among the persons involved in the operation and use of AI systems on their behalf, i.e. knowledge of the rights and obligations under the AI Act and awareness of the opportunities and risks of AI and the possible harm it can cause.

AI is categorised as high-risk AI according to the significance of the risk it poses to health, safety and the fundamental rights of the EU. The AI systems listed in Annex III of the AI Act automatically qualify as high-risk AI systems, e.g. certain critical infrastructures such as water, gas and electricity supply or medical devices (see also our article “AI-based medical devices: MDR versus AI Regulation”). In addition, an AI system is considered high-risk AI under Art. 6 (1) of the AI Act if it is integrated into a product as a safety component, or the AI system itself is a product, that falls under the New Legislative Framework (NLF) or other harmonised EU legislation listed in Annex II of the AI Act, and the product with the AI safety component or the AI system itself requires a conformity assessment by a third party before being placed on the market. This includes, amongst others, legislation on machinery, toys, marine equipment, motor vehicles, ATEX, pressure equipment and medical devices (e.g. MDR and IVDR).

However, there are exceptions: AI systems shall not be considered high-risk if they do not pose significant risks to health, safety or the fundamental rights of the EU (Art. 6 (2a) of the AI Act). A simplified sketch of this classification logic follows below.
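The classification logic described above can be summarised in a short, purely illustrative sketch in Python. The field names and the yes/no simplification are our own assumptions for illustration, not terms of the AI Act; the actual legal assessment under Art. 6 and Annexes II/III is considerably more nuanced.

    # Purely illustrative sketch of the high-risk classification logic described above.
    # Field names are our own simplification, not AI Act terminology.
    from dataclasses import dataclass

    @dataclass
    class AISystem:
        listed_in_annex_iii: bool                       # e.g. certain critical infrastructure
        safety_component_of_annex_ii_product: bool      # safety component of (or itself) an NLF/Annex II product
        third_party_conformity_assessment_required: bool
        poses_significant_risk: bool                    # to health, safety or EU fundamental rights

    def is_high_risk(system: AISystem) -> bool:
        # Exception (Art. 6 (2a)): no significant risk means no high-risk classification
        if not system.poses_significant_risk:
            return False
        # Systems listed in Annex III automatically qualify as high-risk
        if system.listed_in_annex_iii:
            return True
        # Art. 6 (1): safety component of (or itself) an Annex II/NLF product that requires
        # a third-party conformity assessment before being placed on the market
        return (system.safety_component_of_annex_ii_product
                and system.third_party_conformity_assessment_required)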

In this context, providers and deployers of high-risk AI systems must fulfil and implement important obligations, such as risk management, a fundamental rights impact assessment, a quality management system appropriate to the size of the provider’s organisation to ensure conformity, and sufficient (technical) documentation. Even if conformity assessments have already been carried out for products that fall under the harmonisation legislation, companies must take particular account of the product safety and quality requirements for the AI component in the risk analysis to be carried out. Particular attention must be paid to the newly added EU fundamental rights impact assessment in the context of high-risk AI requirements. However, this requirement is expected to be fulfilled by completing a questionnaire and concerns only providers and deployers of AI systems that use AI in bodies governed by public law, private actors providing public services, and deployers that are banking and insurance service providers using AI systems listed as high-risk in Annex III, point 5, (b) and (ca) of the AI Act.

GPAI regulations

The current compromise text distinguishes between two different types of GPAI: “GPAI models” and “GPAI models with systemic risk”. A GPAI model is deemed to pose a systemic risk (as defined by Art. 52a of the AI Act) if it has high-impact capabilities evaluated on the basis of appropriate technical tools and methodologies. If, for example, the training of the GPAI model already requires an amount of compute greater than 10^25 FLOPs, the model has high-impact capabilities and is a GPAI model with systemic risk. The providers of the first-mentioned models only have to comply with a smaller number of “minimum requirements” such as transparency and documentation obligations. Art. 52 of the AI Act sets out transparency obligations for providers of GPAI models and deployers of certain AI systems, including inter alia the disclosure of interaction with AI systems and the marking of content or output generated or manipulated by AI systems.
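As an illustration of the compute threshold mentioned above, the following minimal sketch (our own simplification, not AI Act terminology) checks whether a model’s training compute alone already exceeds the 10^25 FLOP indicator; in practice, training compute is only one of several indicators of high-impact capabilities.

    # Minimal illustration of the 10^25 FLOP training-compute indicator described above.
    # Our own simplification: compute is only one indicator of high-impact capabilities.
    SYSTEMIC_RISK_COMPUTE_THRESHOLD_FLOPS = 1e25

    def exceeds_systemic_risk_threshold(training_compute_flops: float) -> bool:
        return training_compute_flops > SYSTEMIC_RISK_COMPUTE_THRESHOLD_FLOPS

    # Example: roughly 3 x 10^25 FLOPs of training compute would exceed the threshold.
    print(exceeds_systemic_risk_threshold(3e25))   # True
    print(exceeds_systemic_risk_threshold(5e24))   # False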

The AI Act subjects GPAI with systemic risk to additional and stricter requirements set out in Art. 52d of the AI Act. The providers of such high-performance GPAI models with systemic risk will be required, among other things, to assess and mitigate systemic risks, report serious incidents, perform state-of-the-art testing and model evaluations and ensure cybersecurity. GPAI models with systemic risk might include, for example, the GPT-4 model from OpenAI.

Legislative steps to come

The adoption of the AI Act by the EU Parliament and a Council configuration is still awaited and is scheduled for the first half of the year. If it is adopted as planned, a staggered start of application is intended for individual areas, e.g. already after six months for prohibited AI practices, after one year for GPAI and after three years for high-risk AI systems falling under Art. 6 (1) of the AI Act and the corresponding regulations.

Conclusion

The adoption of the AI Act is fast approaching. Companies should already check now whether their products with integrated AI components are to be considered high-risk AI, whether their GPAI models pose a systemic risk and whether they are using potentially prohibited AI practices. It should also be noted that many specific obligations still depend on the design of the numerous implementing acts and the secondary legislation still to be adopted.
