With Microsoft Copilot, Microsoft brings advanced language understanding and generation capabilities to its applications, enabling automatic text summarisation, translation and content creation. Microsoft Azure AI Services offer customers a model catalogue through which they can access more than 1,600 AI models. These include LLMs such as GPT, but also specialised models for programming and mathematics as well as for image, music and video generation. This gives Microsoft users considerable benefits and a wide range of possibilities. At the same time, however, companies face legal challenges that require careful consideration.
Data protection requirements
The special thing about Microsoft Copilot is that the LLMs used can generate company-specific and context-related responses by accessing the information of the respective customer in the Microsoft cosmos. As with the use of Microsoft Azure AI Services, personal data can be processed in the process. Although Microsoft contractually assures in the Microsoft Products and Services Data Protection Addendum (DPA) that customers retain full control over their own data and that customer data is not used for training purposes, companies must implement numerous data protection requirements. This applies in particular to the following points:
- Principles of data processing and accountability
- Lawfulness of data processing
- Rights of data subjects
- Data security and data protection by design
- Data protection impact assessment
- Transfer of data to third parties and third-country transfers
- AI-specific risks for data subjects, e.g. hallucinations
Whether AI can be used in compliance with data protection law depends on how it is deployed in the company and on the specific circumstances of the individual case.
Intellectual property
The rights to training data, prompts and outputs are now a matter not only for US courts, as in the dispute between the New York Times and OpenAI, but also for the courts in Germany. The EU AI Act obliges AI providers to implement copyright compliance strategies and establishes transparency obligations regarding the training of AI models. If copyrighted content is used to train AI systems and traces of it appear in the output, there is a risk of legal disputes. In addition, the outputs of AI applications such as Microsoft Copilot are not automatically protected by copyright under current law. Strategies to protect intellectual property and trade secrets when using AI are therefore essential for companies.
Realisation in practice
Companies should not be deterred by the existing legal challenges and should not prematurely reject the use of Microsoft Copilot or Azure AI Services. Instead, anyone wishing to use Microsoft's AI applications should take a close look at the deployment scenario and assess possible risks as part of an AI strategy. To take account of data protection requirements, it is advisable to carry out a data protection impact assessment (DPIA), which is even required by law where processing poses a high risk. To protect their intellectual property, companies should check on a case-by-case basis whether copyright protection exists and document the use of AI, the prompts used and the edits made. In most cases, such a case-by-case examination can ensure the legally compliant use of Microsoft's AI applications.