On 12 July 2024, the AI Regulation (EU) 2024/1689, also known as the AI Act, was published in the Official Journal of the European Union after lengthy negotiations. It entered into force on 1 August 2024. The new regulation imposes extensive requirements on companies, which must meet the first of them as early as 2 February 2025. Many additional requirements will apply from 2 August 2026.
Aim of the AI Regulation
The AI Regulation is intended to create a legal framework for the safe and transparent use of AI systems in the EU. This covers systems that operate with a certain degree of autonomy, are adaptable, and can generate content, predictions, recommendations or decisions from their inputs. The particularly controversial areas of application include not only chatbots such as ChatGPT or Microsoft Copilot, but also technologies such as AI-supported facial recognition, which was used at the Olympic Games in Paris, for example.
Risk-based approach
The AI Regulation takes a risk-based approach and distinguishes between prohibited AI, high-risk AI, General Purpose AI (GPAI) and AI with low or minimal risk. The higher the risk that an AI system poses to society, the stricter the requirements of the AI Regulation. Applications with unacceptable risk, such as social scoring, will be banned as early as 2 February 2025. At the heart of the AI Regulation are the requirements for high-risk AI, which will apply from 2 August 2026. The rules for GPAI, which also covers Large Language Models (LLMs), will apply from 2 August 2025. Among other things, the regulation provides for transparency obligations, human oversight mechanisms and detailed documentation and reporting obligations.
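To make the staggered timeline concrete, the following minimal Python sketch maps each risk tier to the date from which its obligations apply. The tier names, date mapping and one-line summaries are a simplified illustration of the regulation's structure, not authoritative legal text.

```python
from datetime import date
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk"
    GPAI = "general-purpose AI"
    HIGH_RISK = "high risk"
    MINIMAL = "low or minimal risk"

# Simplified, illustrative mapping of risk tiers to the dates from which
# the corresponding AI Act obligations apply.
APPLICATION_DATES = {
    RiskTier.PROHIBITED: date(2025, 2, 2),  # bans, e.g. social scoring
    RiskTier.GPAI:       date(2025, 8, 2),  # GPAI/LLM obligations
    RiskTier.HIGH_RISK:  date(2026, 8, 2),  # core high-risk requirements
    RiskTier.MINIMAL:    date(2026, 8, 2),  # general application date of the Act
}

def obligations_apply(tier: RiskTier, today: date) -> bool:
    """Return True if the obligations for the given tier already apply."""
    return today >= APPLICATION_DATES[tier]

for tier in RiskTier:
    print(f"{tier.value}: applies from {APPLICATION_DATES[tier].isoformat()}")
```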
Legal problem areas
The question of the responsible supervisory authority is not the only one still being hotly debated. In Germany, besides the data protection supervisory authorities, the Federal Network Agency and even a separate federal authority for digital affairs are under discussion. Regardless of the ongoing wrangling over competences, one thing is already clear: the GDPR remains unaffected by the AI Regulation. As soon as an AI system processes personal data, the data protection supervisory authorities are responsible and the rules of data protection law must be observed. In addition to the classic challenges of data protection law, new questions arise, such as the legal assessment of Large Language Models (LLMs) or the legal basis for processing personal data used for training.
The rights to training data, prompts and outputs now occupy not only US courts, as in the dispute between the ‘New York Times’ and OpenAI, but also German courts. The AI Regulation obliges AI providers to implement copyright compliance strategies and establishes transparency obligations for the training of AI models. Many copyright questions remain unanswered today.
Cybersecurity is also among the legal requirements of the AI Regulation: high-risk AI systems must achieve an appropriate level of accuracy, robustness and cybersecurity. For products with digital elements, the additional requirements of the Cyber Resilience Act (CRA) must be met as well.
Implications for companies
Companies that develop, manufacture, sell or use AI systems should adapt to the new requirements now and start reviewing their systems and processes at an early stage. A comprehensive AI strategy that involves all relevant stakeholders and defines technical, legal and business requirements is recommended. It is also clear that identifying and implementing measures requires a holistic approach that takes into account the interactions between the various legal acts. Anyone who violates the regulation and uses prohibited AI faces a fine of up to 35 million euros or up to seven percent of global annual turnover, whichever is higher.
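As a quick illustration of this ceiling, here is a minimal Python sketch; the function name and the turnover figure are hypothetical and serve only to show the "whichever is higher" arithmetic.

```python
def max_fine_prohibited_ai(global_annual_turnover_eur: float) -> float:
    """Upper limit of the fine for prohibited AI practices under the AI Act:
    the higher of EUR 35 million and 7% of global annual turnover."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# Hypothetical example: EUR 2 billion global annual turnover
print(max_fine_prohibited_ai(2_000_000_000))  # 140000000.0, i.e. EUR 140 million
```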
Conclusion
The AI Regulation presents companies with challenges, but also opportunities. By implementing the necessary measures at an early stage, companies can gain a competitive advantage with AI.