On 21 April 2021, the European Commission became the first legislative body in the world to propose a draft Regulation laying down harmonized rules on artificial intelligence (the “AI Regulation,” also known as the Artificial Intelligence Act).
Scope
The Proposal follows a risk-based approach and establishes duties for AI providers, users, importers, distributors and operators both within and outside the EU. It defines rules for the use of AI, as well as for making it available on the market and putting it into service. Article 3 defines AI systems as software that is developed with one or more of the techniques listed in Annex I of the Regulation for a given set of human-defined objectives. The definition makes clear that AI systems can be integrated into a product or can exist as stand-alone software, but in all cases serve the purpose of automating processes; indeed, the definition expressly refers to “autonomy.”
The techniques specified in Annex I include machine learning approaches (supervised, unsupervised and reinforcement learning), logic- and knowledge-based approaches, and statistical approaches.
Prohibited AI systems
Article 5 prohibits use of AI in certain areas and for certain purposes. In particular, it prohibits:
- subliminal techniques that materially distort a person’s behavior in a manner that could result in harm;
- exploiting the vulnerabilities of specific groups of persons due to their age or disability;
- social scoring by public authorities; and
- the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
Clearly, the risk-based approach calls for intervention above all where an impact on humans threatens to place key values at risk (life, health, free will, etc.).
High-risk AI systems
“High-risk AI systems” are defined by Article 6, in conjunction with Annexes II and III. An AI system is “high-risk” if it is intended to be used as the safety component of a product, or is itself a product, covered by the EU harmonization legislation listed in Annex II and is required to undergo a third-party conformity assessment procedure. In addition, certain stand-alone applications listed in Annex III are designated as high-risk (a purely illustrative classification sketch follows the list below), e.g.:
- critical infrastructure;
- recruitment and the assignment of work tasks;
- credit evaluations;
- law enforcement and criminal prosecution;
- migration, asylum and border control;
- legal tech applications used by the courts.
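To make the structure of the risk-based approach tangible, the following is a minimal sketch, in Python, of the two-pronged test in Article 6 and the surrounding risk tiers. It is purely illustrative: the names (RiskTier, classify_risk) and the boolean flags are our own simplifications, not terms taken from the Proposal, and the actual legal assessment turns on Annexes II and III rather than a handful of flags.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice (Article 5)"
    HIGH = "high-risk (Article 6)"
    TRANSPARENCY = "transparency obligations (Article 52)"
    MINIMAL = "minimal risk"

def classify_risk(
    uses_prohibited_practice: bool,    # e.g. social scoring, subliminal techniques
    is_safety_component: bool,         # safety component of a product under Annex II
    needs_third_party_assessment: bool,
    in_annex_iii_use_case: bool,       # e.g. credit evaluation, recruitment, border control
    interacts_with_humans: bool,       # e.g. a chatbot or deep-fake generator
) -> RiskTier:
    """Hypothetical triage of an AI system into the Proposal's risk tiers."""
    if uses_prohibited_practice:
        return RiskTier.PROHIBITED
    # First prong: safety component of a regulated product that requires
    # third-party conformity assessment. Second prong: stand-alone Annex III use case.
    if (is_safety_component and needs_third_party_assessment) or in_annex_iii_use_case:
        return RiskTier.HIGH
    if interacts_with_humans:
        return RiskTier.TRANSPARENCY
    return RiskTier.MINIMAL

# Example: a stand-alone credit-evaluation system (an Annex III use case) is high-risk.
print(classify_risk(False, False, False, True, False))  # RiskTier.HIGH
```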
Requirements for AI systems
In accordance with Article 52, it must generally be evident to consumers when they are interacting with an AI system, as e.g. in the case of chatbots. “Deep fakes,” i.e. image, audio or video content which has been generated or manipulated by an AI system so that it falsely appears to be authentic, must be identified as such. Exceptions to this requirement apply, e.g., where the content is covered by the freedom of expression or the freedom of the arts and sciences.
Special requirements apply, however, to high-risk AI systems. In short, it must be ensured that such systems are safe for their intended and reasonably foreseeable use over their entire life cycle. Specific rules for these systems are laid down in the section of the Proposal beginning with Article 8, e.g.:
- use of non-discriminatory training data sets;
- (technical) documentation;
- transparency, i.e. comprehensible results;
- resilience, i.e. the system continues to function correctly in the event of errors, faults or inconsistencies;
- robustness and cybersecurity, i.e. ensuring that the system cannot be manipulated by attackers exploiting its vulnerabilities; and
- human oversight.
Requirements for providers
The duties established by the Regulation apply primarily to providers, whose role thus corresponds to that of the manufacturer of conventional products. Duties relating to high-risk AI systems include e.g.:
- ensuring adherence to the requirements in Article 8 and in the subsequent Articles;
- setting up a quality management system;
- conducting a conformity assessment procedure;
- registration of the AI system;
- conducting post-market monitoring;
- reporting serious incidents and malfunctions to the authorities; and
- affixing the CE marking.
There are also requirements for users, importers, distributors and operators.
Conformity assessment procedure
Depending on the type of high-risk AI system, the conformity assessment procedure can either be conducted by the provider itself by means of internal controls in accordance with Annex VI, or must be carried out by a notified body in accordance with Annex VII. Both procedures require a quality management system and technical documentation. For high-risk AI systems already covered by other harmonization legislation, the assessment may be integrated into the conformity procedures provided for by that legislation.
As has been done in other product areas, harmonized rules and standards are to be created for AI systems and published in the Official Journal of the European Union, and adherence to these rules and standards will create the presumption of conformity.
Market surveillance
A special office is to be created to perform market surveillance and to ensure that conformity assessment procedures are conducted properly. Violators will be subject to fines of up to €30 million or 6% of their total worldwide annual turnover, whichever is higher.
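As a quick arithmetic illustration of this cap (the function name is ours, not the Regulation’s):

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper fine limit under the Proposal: EUR 30 million or 6% of
    total worldwide annual turnover, whichever is higher."""
    return max(30_000_000.0, 0.06 * annual_turnover_eur)

# For a company with EUR 1 billion in annual turnover, 6% (EUR 60 million)
# exceeds the EUR 30 million floor, so the higher amount applies:
print(max_fine_eur(1_000_000_000))  # 60000000.0
```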
Outlook and practical relevance
The Proposal will now be considered by the European Parliament and the Council. The allocation of responsibilities at the EU level has yet to be conclusively defined, and the same is true at the national level for the ministries involved. The EU Commission has called for a timetable of 18 months for enactment of the Regulation, which some observers consider overly optimistic.
Since the document is, for now, only a Proposal, there is no immediate need to respond to the proposed changes in the legal situation as they relate to making software available on the market. However, manufacturers of conventional products which are controlled or driven by software should closely monitor developments in order to ensure that they will comply with future legal requirements.
We will keep you informed about the progress of the procedure.