Companies need to adapt their AI strategy
Following the long-fought political agreement of the EU institutions on the AI Act on 8 December 2023, a final compromise text of the AI Act is now in place (we last reported on this on 22 June 2023 and 15 February 2023). The AI Act takes a risk-based approach to the regulation of AI. It contains a definition of AI systems, a description of prohibited AI practices, and requirements for high-risk AI and general-purpose AI models (GPAI).
To be on the safe side, companies should incorporate the new legal requirements of the AI Act into their AI strategy.
Scope of application of the AI Act
Translated literally, Art. 3 (1) of the AI Act defines an AI system as “a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments”.
The definition corresponds to international standards, but is still very broad. Companies should therefore note that any software or machine-aided systems they offer might fall under this definition and must then fulfil the differentiated requirements of the AI Act.
The AI Act is binding for companies that place AI systems or GPAI models on the EU market, regardless of where they are established or based. Importers, distributors, manufacturers, authorised representatives and affected persons in the EU are also addressed by the AI Act.
Prohibited AI practices
The AI Act prohibits the use of AI in a way that is incompatible with the fundamental rights or values of the EU. The following prohibitions are of particular relevance to companies due to their broad definition:
- Subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, with the objective or the effect of materially distorting a person’s or a group of persons’ behaviour, that cause or are likely to cause a person or group of persons significant harm;
- AI systems that exploit any of the vulnerabilities of a person or a specific group of persons due to their age, disability or a specific social or economic situation;
- Biometric categorisation systems that individually categorise natural persons based on their biometric data to deduce or infer sensitive information such as their race, political opinions, etc.;
- Social scoring;
- AI systems and services that use facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage as well as AI systems that are used for the specific purpose of inferring emotions of a natural person in the areas of workplace and education institutions, except for medical or safety reasons (e. g. for monitoring the fatigue status of a pilot).
High-risk AI systems and comprehensive risk assessment
The risk-based approach of the AI Act, already familiar from the original proposal for an AI regulation, is particularly evident in the classification of high-risk AI and the obligations attached to it. High-risk AI is subject to most of the obligations stipulated by the AI Act. In contrast, providers and deployers of “low-risk” AI systems only have to ensure a “sufficient level of AI literacy” among the persons involved in the operation and use of AI systems on their behalf, i.e. knowledge of the rights and obligations under the AI Act and awareness of the opportunities and risks of AI and the possible harm it can cause.
AI is categorised as high-risk according to the significance of the risk it poses to health, safety and the fundamental rights of the EU. The AI systems listed in Annex III of the AI Act automatically qualify as high-risk AI systems, e. g. certain critical infrastructures such as water, gas and electricity supply, or medical devices (see also our article “AI-based medical devices: MDR versus AI Regulation”). In addition, an AI system is considered high-risk under Art. 6 (1) of the AI Act if it is integrated into a product as a safety component, or is itself a product, falling under the New Legislative Framework (NLF) or other harmonised EU legislation listed in Annex II of the AI Act, and the product with the AI safety component or the AI system itself requires a conformity assessment by a third party before being placed on the market. This covers, amongst others, legislation on machinery, toys, marine equipment, motor vehicles, ATEX, pressure equipment and medical devices (e. g. MDR and IVDR).
However, there is an exception: AI systems shall not be considered high-risk if they do not pose significant risks to health, safety or the fundamental rights of the EU (Art. 6 (2a) of the AI Act).
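The classification logic described above can be sketched as a simple decision function. This is an illustrative simplification only, not a legal assessment tool; the flag names are hypothetical and a real classification always requires a case-by-case legal analysis.

```python
# Hedged sketch of the high-risk classification logic of Art. 6 and
# Annex III of the AI Act. All field names are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class AISystem:
    listed_in_annex_iii: bool               # e.g. certain critical infrastructures
    safety_component_of_nlf_product: bool   # integrated per Annex II legislation
    third_party_conformity_required: bool   # third-party conformity assessment needed
    poses_significant_risk: bool            # exception of Art. 6 (2a)


def is_high_risk(s: AISystem) -> bool:
    # Exception: no significant risk to health, safety or fundamental rights
    if not s.poses_significant_risk:
        return False
    # Systems listed in Annex III qualify automatically
    if s.listed_in_annex_iii:
        return True
    # Art. 6 (1): safety component of (or itself) a product under NLF /
    # Annex II legislation that requires third-party conformity assessment
    return s.safety_component_of_nlf_product and s.third_party_conformity_required
```

For example, a system listed in Annex III that does pose significant risks would be classified as high-risk, while the same system would fall outside the category under the Art. 6 (2a) exception if it poses no significant risk.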
In this context, providers and deployers of high-risk AI systems must fulfil and implement substantial obligations, such as risk management, a fundamental rights impact assessment, a quality management system appropriate to the size of the provider’s organisation to ensure conformity, and sufficient (technical) documentation. Even if conformity assessments have already been carried out for products falling under the harmonisation legislation, companies must take particular account of the product safety and quality requirements for the AI component in the required risk analysis. Particular attention must be paid to the newly added EU fundamental rights impact assessment in the context of the high-risk AI requirements. However, this requirement is expected to be fulfilled by completing a questionnaire and concerns only providers and deployers of AI systems used by bodies governed by public law or by private actors providing public services, as well as deployers that are banking and insurance service providers using AI systems listed as high-risk in Annex III, point 5, (b) and (ca) of the AI Act.
GPAI regulations
The current compromise text distinguishes between two types of GPAI: “GPAI models” and “GPAI models with systemic risk”. A GPAI model is deemed to pose a systemic risk (as defined by Art. 52a of the AI Act) if it has high-impact capabilities, evaluated on the basis of appropriate technical tools and methodologies. If, for example, the training of the GPAI model already requires an amount of compute greater than 10^25 FLOPs, the model is presumed to have high-impact capabilities and is therefore a GPAI model with systemic risk. Providers of GPAI models without systemic risk only have to comply with a smaller number of “minimum requirements”, such as transparency and documentation obligations. Art. 52 of the AI Act sets out transparency obligations for providers of GPAI models and deployers of certain AI systems, including inter alia the disclosure of interaction with AI systems and the marking of content or output generated or manipulated by AI systems.
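The compute threshold above can be expressed as a one-line check. This is a minimal sketch of the presumption in the compromise text, assuming the 10^25 FLOP figure as the sole criterion; the function name is hypothetical, and in practice high-impact capabilities can also be established by other technical tools and methodologies.

```python
# Illustrative sketch: the compute-based presumption for GPAI models
# with systemic risk. The 10**25 FLOP threshold is taken from the
# compromise text; the function name is a hypothetical label.
TRAINING_COMPUTE_THRESHOLD_FLOPS = 10**25


def presumed_systemic_risk(training_compute_flops: float) -> bool:
    """A GPAI model trained with more than 10^25 FLOPs is presumed to
    have high-impact capabilities and thus to pose a systemic risk."""
    return training_compute_flops > TRAINING_COMPUTE_THRESHOLD_FLOPS


print(presumed_systemic_risk(2e25))  # True: above the threshold
print(presumed_systemic_risk(1e24))  # False: an order of magnitude below
```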
The AI Act subjects GPAI with systemic risk to additional and stricter requirements set out in Art. 52d of the AI Act. The providers of such high-performance GPAI models with systemic risk will be required, among other things, to assess and mitigate systemic risks, report serious incidents, perform state-of-the-art testing and model evaluations and ensure cybersecurity. GPAI models with systemic risk might include, for example, the GPT‑4 model from OpenAI.
Legislative steps to come
The adoption of the AI Act by the EU Parliament and the Council is still awaited and is scheduled for the first half of the year. If the Act is adopted as planned, a staggered start of application is intended for the individual areas: e. g. already after six months for prohibited AI practices, after one year for GPAI and after three years for high-risk AI systems falling under Art. 6 (1) of the AI Act and the corresponding regulations.
Conclusion
The adoption of the AI Act is fast approaching. Companies should check now whether their products with integrated AI components are to be considered high-risk AI, whether their GPAI models pose a systemic risk and whether they are using potentially prohibited AI practices. It should also be noted that many specific obligations will depend on the design of the numerous implementing acts and the secondary legislation still to be adopted.