AI expertise in accordance with the EU AI Act

New obligations for companies

The AI Regulation (EU) 2024/1689 (AI Act) has been in force since 1 August 2024 (we reported). The aim of the AI Act is to ensure the safe and transparent use of AI systems in the EU. This entails new obligations for companies and public bodies that offer, operate, introduce, distribute or use AI. All requirements of the AI Act must be implemented by 2 August 2026 at the latest. Some obligations will apply as early as 2 February 2025, one of which is the obligation to ensure AI expertise within the company.

AI literacy according to Art. 4 AI Act

According to Art. 4 of the AI Act, providers and operators of AI systems of any kind must take measures to ensure, to their best extent, a sufficient level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf. This obligation applies to both providers and operators of AI systems; importers and distributors are not covered by the wording.

The AI Act defines AI literacy as the skills, knowledge and understanding that allow providers, deployers and affected persons, taking into account their respective rights and obligations, to make an informed deployment of AI systems, as well as to gain awareness about the opportunities and risks of AI and possible harm it can cause (Art. 3 No. 56 AI Act). The aim of AI literacy is to maximise the benefits of AI systems while safeguarding fundamental rights, health and safety, and enabling democratic control (Recital 20 AI Act).

Realisation in practice

The challenge in practice is determining what specifically is required to establish AI literacy. The AI Act does not set out a catalogue of measures in this regard. However, it can be deduced from the definition that, depending on the individual case, both technical knowledge, such as a basic understanding of AI systems and how they work, and an awareness of the opportunities and risks of AI, including their social, ethical and legal dimensions, must be ensured. With regard to social and ethical issues, particular attention must be paid to fairness, transparency and responsibility when using AI. On the legal side, the requirements of data protection, intellectual property, the protection of trade secrets and cybersecurity, among others, must be explained.

AI literacy can be imparted through various measures. Relevant measures include, in particular, the development of internal guidelines and standards as well as training courses. Certification programmes and the appointment of an AI officer can also contribute to AI literacy. As a rule, a wide-ranging package of measures is required. In line with the risk-based approach of the AI Act, the specific measures must be adapted to the respective context and the existing knowledge of the users; in practice, they should therefore be designed differently depending on the user, the type of AI system and the intended use. In addition, AI literacy must be ensured on an ongoing basis. The measures should therefore be carried out both regularly and on an ad hoc basis within the company, as this is the only way to keep users up to date (training obligation). To enforce the measures, companies should define appropriate control and enforcement powers in their internal guidelines.
