The future is now: the world's first regulatory framework for AI

On 21 April 2021, the European Commission presented the world's first proposal for a Regulation laying down rules on artificial intelligence (AI Regulation).

Scope

The Proposal follows a risk-based approach and establishes duties for AI providers, users, importers, distributors and operators both within and outside the EU. It defines rules for the use of AI, as well as for making it available on the market and putting it into service. Article 3 defines AI systems as software that is developed with one or more of the techniques listed in Annex I of the Regulation for a given set of human-defined objectives. The definition makes clear that AI systems can be integrated into a product or can exist as stand-alone software, but in all cases serve the purpose of automating processes. In fact, the definition includes the word "autonomy."

The techniques specified in Annex I include, for example, machine learning approaches (supervised, unsupervised and reinforcement learning), logic- and knowledge-based approaches, and statistical approaches.

Prohibited AI systems

Article 5 prohibits use of AI in certain areas and for certain purposes. In particular, it prohibits:

  • subliminal techniques of controlling behavior which could result in harm;
  • exploiting weaknesses based on age, disability, etc.;
  • social scoring; and
  • real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).

Clearly, the risk-based approach calls for intervention above all in cases where there are concerns about an impact on humans in such a way as to place key values at risk (life, health, free will, etc.).

High-risk AI systems

“High-risk AI systems” are defined by Article 6, in conjunction with Annexes II and III. An AI system is “high-risk” if it is intended to be used as a safety component of a product or is itself a product covered by EU harmonization legislation, and is required to undergo a third-party conformity assessment procedure. In addition, certain applications are designated as high-risk, e.g.:

  • critical infrastructure;
  • recruitment and training assignments;
  • credit evaluations;
  • law enforcement and criminal prosecution;
  • migration, asylum and border control;
  • legal tech applications by the courts.

Requirements for AI systems

In accordance with Article 52, it must generally be evident to consumers when they are interacting with an AI system, as e.g. in the case of chatbots. “Deep fakes,” i.e. video, image or audio files which are manipulated or generated by an AI system so that they appear authentic even though the content is not real, must be identified as such. Exceptions to this requirement apply e.g. in cases covered by the freedom of expression and the rights to freedom of the arts and sciences.

However, special requirements apply for high-risk AI systems. In short, it must be ensured that AI systems are safe for their intended and foreseeable use over their entire life cycle. Specific rules are defined for these systems in the section of the Proposal beginning with Article 8, e.g.:

  • use of non-discriminatory training data sets;
  • (technical) documentation;
  • transparency, i.e. comprehensible results;
  • resilience, i.e. system integrity and data security in the face of hacking attacks;
  • robustness, i.e. ensuring that the system continues to perform reliably despite errors, faults or inconsistencies; and
  • human oversight.

Requirements for providers

The duties established by the Regulation apply primarily to providers, whose role thus corresponds to that of the manufacturer of conventional products. Duties for high-risk AI systems include e.g.:

  • ensuring adherence to the requirements in Article 8 and in the subsequent Articles;
  • setting up a quality management system;
  • conducting a conformity assessment procedure;
  • registration of the AI system;
  • performing post-market monitoring;
  • reporting serious incidents and malfunctions to the authorities; and
  • affixing the CE marking.

There are also requirements for users, importers, distributors and operators.

Conformity assessment procedure

Depending on the type of high-risk AI system, the conformity assessment procedure can either be conducted by means of internal controls in accordance with Annex VI or must be conducted by a notified body in accordance with Annex VII. Both procedures require a quality management system and technical documentation. The assessment procedure for high-risk AI systems may be integrated into the conformity procedures provided for by other harmonization legislation.

As has been done in other product areas, harmonized rules and standards are to be created for AI systems and published in the Official Journal of the European Union, and adherence to these rules and standards will create the presumption of conformity.

Market surveillance

A special office is to be created to perform market surveillance and to ensure that conformity assessment procedures are conducted properly. Violators will be subject to fines of up to EUR 30 million or 6% of their total worldwide annual turnover, whichever is higher.

Outlook and practical relevance

The Proposal will now be considered by the European Parliament and the Council. The areas of responsibility have yet to be conclusively defined, and the same is true at the national level for the Ministries involved. The EU Commission has called for a timetable of 18 months for enactment of the Regulation, which some have called overly optimistic.

Since this document is only a Proposal, for now, there is no immediate need to respond to the proposed changes in the legal situation as they relate to making software available on the market. But manufacturers of conventional products which are controlled or driven by software should absolutely monitor developments in order to ensure that they will conform to legal requirements in the future.

We will keep you informed about the progress of the procedure.
