AI Act update

Another step forward for the AI Act: preliminary political agreement reached in the European Parliament

The AI Act is designed to create a single legal framework for the use of artificial intelligence (AI) in the European Union. While there is agreement on the "big-picture" goals of the legislation, i.e. ensuring a high degree of safety when interacting with AI systems, certain aspects of the Act have long been contentious. Compromise solutions have now been found on these points.

High-risk AI systems

The AI Act defines an AI system as "software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with."

AI systems falling within the critical areas and cases listed in Annex III to the Act are automatically categorized as "high-risk" systems. A concession has now been obtained on this point, however, so that AI systems would only be classified as "high-risk" if they pose a "significant risk."

A risk is considered "significant" if it has a significant impact based on its severity, intensity, probability of occurrence and duration, and if it has the potential to harm an individual, multiple persons or a specific group of people. AI systems are also considered high-risk if they pose significant health risks.

AI-based medical devices

Accordingly, medical devices based on AI components will continue to be classified as high-risk AI systems even after these changes to the definition of "high-risk." In addition to the requirements of the MDR, they will therefore have to satisfy the special regulatory requirements for high-risk AI systems under the AI Act.

The questions relating to the treatment of self-learning AI, and whether the resulting product changes qualify as significant changes under the MDR requiring a new conformity assessment, remain unresolved. It remains to be seen whether this problem will be addressed in a timely manner. Hopefully it will be, as this would create legal certainty for the affected economic operators.

The text of the AI Act is currently being finalized. The Act is expected to take effect before the end of this calendar year (becoming binding 24 months after its entry into force).

Recommended actions

Medical device manufacturers whose products feature AI components will be subject to both the Medical Device Regulation (MDR) and the AI Act.

