CNIL publishes guidance for AI developers and operators
The European Union (EU) is currently preparing an AI Regulation and an AI Liability Directive. However, a large number of legal requirements already apply when developing or using AI, not least because of the processing of training data. While the requirements of the Data Act will in future be decisive for non-personal data, the General Data Protection Regulation (GDPR) must be observed for personal data. Against this background, the French data protection supervisory authority CNIL recently published a handout on the privacy-compliant use of AI, the key contents of which we present and evaluate below.
Determination of the purpose of deployment
According to the CNIL, it is essential to define a clear purpose of deployment when developing and using AI systems. Even before any personal data are processed, the exact purpose of the processing should be specified, because only against this purpose can it be measured whether, for example, the principle of data minimisation is being observed. Determining the legal basis for the processing likewise requires a defined purpose.
Deployment and development
In its guidance, the CNIL distinguishes two phases: the development phase, in which an AI system is trained with data, and the deployment phase, in which the algorithm is used. The two phases should be treated separately under data protection law, even though they are often interwoven in practice, since AI systems also use data processed during deployment to further improve the algorithm. In the CNIL's view, overall lower data protection requirements should apply to the development phase than to the deployment phase. In the deployment phase, an AI system should only process data that were proven effective in achieving the pre-defined purpose during the development phase.
The data sets
Developers can either generate the data needed to develop or train the AI themselves, or they can put existing data to a new use. In doing so, they must ensure that the compilation of the data sets complies with the law. Merging data from different sources presents a particular challenge here. According to the CNIL, an AI system developed by means of unlawful processing of personal data may not be used.
Rights of data subjects
When processing personal data by means of AI systems, the controller must in principle safeguard the rights of data subjects. It is encouraging for companies that the CNIL recognises exemptions from the information requirements under the GDPR for AI systems. For example, informing data subjects may be waived if the data were not collected directly from them and the controller can demonstrate that informing them is impossible or would require disproportionate effort.
The handout is a practical aid for data controllers. At the same time, the CNIL's extensive engagement with AI shows that data protection requirements play a key role in the development and deployment of AI and are a focus of data protection supervisory authorities. Regardless of whether data protection supervisory authorities are additionally entrusted with monitoring AI systems under the planned European AI Regulation, developers and operators of AI systems should take the applicable data protection requirements into account and ensure compliance with them via a compliance management system.