Under the Czech Presidency, the Council of the European Union (Council) has agreed on a compromise proposal for a regulation establishing harmonised rules for artificial intelligence (AI Regulation).
The draft includes the following important changes, among others:
Restricted definition of the term “AI”
The broad definition of AI has been the subject of much debate during the legislative process, as critics feared that it would bring virtually all software within the scope of the AI Regulation, extending its strict requirements to products that do not require such stringent regulation. The current Council draft therefore contains a narrower definition of AI, covering only data-based systems that exhibit elements of autonomy and use machine learning and/or logic- and knowledge-based approaches.
Limitation of the material scope of application
According to the Council, AI systems serving military purposes or national security are to be excluded from the material scope of application. Research and development of AI systems is likewise not to be subject to the AI Regulation. An exception to the scope is also planned for private individuals who use AI for non-professional purposes.
Extension of the ban on social scoring
It is also planned to extend the ban on social scoring to private actors in addition to public authorities, thereby expanding the scope of protection of the AI Regulation.
Adaptation of the list of high-risk AI systems (Annex III)
The Council also sees a need to adapt the list of high-risk AI systems in Annex III. While systems for detecting deepfakes in the context of law enforcement or crime analysis and for verifying the authenticity of travel documents will no longer be classified as high-risk AI systems, critical digital infrastructure and life and health insurance will be added to the list. In future, classification is to be tied more closely to the actual risk posed by an AI system rather than its abstract risk.
Expansion of the target group
Through the introduction of a new Article 23a, the scope of the AI Regulation, which previously covered only providers of high-risk AI systems, is to extend to other actors under certain conditions.
Approval of real-world test environments (regulatory sandboxes)
To safeguard the EU’s capacity for innovation, AI systems are to be tested under real-life conditions in regulatory sandboxes. To this end, simplified access to personal data, and thus a relaxation of the GDPR’s purpose limitation principle, is to be stipulated. The prerequisite, however, is that the AI systems under development serve a significant public interest. In addition, under certain circumstances and subject to special safeguards, testing in real-world environments is to be permitted.
Despite these developments and the Council’s resolution, the AI Regulation is still at the draft stage. Before it can enter into force, the European Commission, the Council and the European Parliament must reach agreement in trilogue negotiations and commit to a final text. It thus remains unclear when a binding regulatory framework for AI systems will finally be available in the EU.