New obligations for companies
The AI Regulation (EU) 2024/1689 (AI Act) has been in force since 1 August 2024 (we reported). The AI Act aims to ensure the safe and transparent use of AI systems in the EU. It creates new obligations for companies and public bodies that provide, operate, place on the market, distribute or use AI. The requirements of the AI Act must, as a rule, be implemented by 2 August 2026 at the latest. Some obligations apply as early as 2 February 2025; one of these is the obligation to ensure AI literacy within the company.
AI competence according to Art. 4 AI Act
According to Art. 4 AI Act, providers and deployers of AI systems of any kind shall take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf. The obligation thus covers both providers and deployers of AI systems; importers and distributors are not caught by the wording.
The AI Act defines AI literacy as the skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause (Art. 3 No. 56 AI Act). The aim of AI literacy is to maximise the benefits of AI systems while safeguarding fundamental rights, health and safety, and enabling democratic control (Recital 20 AI Act).
Realisation in practice
The practical challenge is to determine which specific measures AI literacy requires. The AI Act does not set out a catalogue of measures in this regard. It can, however, be deduced from the definition that both technical knowledge, such as a basic understanding of AI systems and how they work, and an awareness of the opportunities and risks of AI, together with a social, ethical and legal understanding, must be ensured, taking the individual case into account. With regard to social and ethical issues, particular attention must be paid to fairness, transparency and accountability in the use of AI. On the legal side, the requirements of, among others, data protection, intellectual property, the protection of trade secrets and cybersecurity must be explained.
AI literacy can be imparted through various measures. Relevant measures include, in particular, the development of internal guidelines and standards as well as training courses. Certification programmes and the appointment of an AI officer can also contribute to AI literacy. As a rule, a broad package of measures is required.

In line with the risk-based approach of the AI Act, the specific measures must be adapted to the respective context and the existing knowledge of the users, so that in practice they will differ depending on the user, the type of AI system and its intended use. It must also be borne in mind that AI literacy has to be ensured on an ongoing basis. The measures should therefore be carried out both regularly and on an ad hoc basis within the company; only in this way can users be kept up to date (training obligation). To enforce the measures, companies should define appropriate control and enforcement powers in their internal guidelines.