Compromise: Council settles on new proposal for AI regulation

Under the Czech Presidency, the Council of the European Union (Council) has agreed on a compromise proposal for a regulation establishing harmonised rules for artificial intelligence (AI Regulation).

The draft includes the following important changes, among others:

Restricted definition of the term “AI”

The broad definition of AI has already been the subject of much debate during the legislative process, as critics feared that the broad wording would bring virtually all software within the scope of the AI Regulation, extending its strict requirements to products that do not call for such stringent regulation. The current Council draft therefore contains a narrower definition of AI, now covering only data-based systems that exhibit elements of autonomy and use machine learning methods and/or logic- and knowledge-based concepts.

Limitation of the material scope of application

According to the Council, AI systems that serve military purposes or national security are to be excluded from the material scope of application. Furthermore, research and development of AI systems is itself not to be subject to the AI Regulation. There is also to be an exception for private individuals who do not use AI professionally.

Extension of the ban on social scoring

It is also planned to extend the ban on social scoring to private actors in addition to public authorities, thereby expanding the scope of protection of the AI Regulation.

Adaptation of the list of high-risk AI systems (Annex III)

The Council also sees a need to adapt the list of high-risk AI systems in Annex III. While systems for detecting deepfakes in the context of law enforcement or crime analysis, and systems for verifying the authenticity of travel documents, will no longer be classified as high-risk AI systems, critical digital infrastructure and life and health insurance will be added to the list. In principle, future classification is to be linked more closely to the actual risk posed by AI systems rather than to their abstract risk.

Expansion of the target group

Through the introduction of a new Article 23a, the scope of the AI Regulation, which previously covered only providers of high-risk AI systems, is also to cover other actors under certain conditions.

Approval of (real) test environments

To ensure innovative strength in the EU, AI systems are to be able to be tested under real-world conditions in regulatory sandboxes (so-called real-world laboratories). To this end, simplified access to personal data, and thus a relaxation of the GDPR's purpose limitation principle, is to be stipulated. However, the prerequisite is that the AI systems being developed serve a significant public interest. In addition, under certain circumstances and subject to special safeguards, testing in real-world environments is to be permitted.

Summary

Despite the current developments and the Council's resolution, the AI Regulation is still at the draft stage. Before the AI Regulation can enter into force, the European Commission, the Council and the European Parliament must reach agreement on a final text in trilogue negotiations. It thus remains unclear when a binding regulatory framework for AI systems will finally be available in the EU.
