The EU AI Liability Directive – Implications for companies

Liability regime, AI Regulation, artificial intelligence, AI, AI Liability Directive

With the AI Liability Directive, the EU Commission wants to create new liability rules for artificial intelligence (AI) and supplement the planned AI Regulation. We have a leaked version of the planned directive, which EURACTIV and the Tagesspiegel also reported on. Officially, the proposed directive is not scheduled to be published until 28 September 2022. We would like to introduce you in advance to the content and implications of the AI Liability Directive for businesses and provide you with tips on how to avoid legal risks.

What is the goal of the AI Liability Directive?

The AI Liability Directive serves to adapt existing liability rules to the digital age and to developments in the field of AI. In the case of digital products, it is not clear to what extent they are subject to the liability regime of the Product Liability Directive. In addition, the Product Liability Directive only provides for compensation for physical or material damage. However, networking and new technologies mean that data and privacy can also be harmed by insecure products. Moreover, the complexity of digital products makes it difficult for injured parties to identify the producer responsible.

The draft at a glance

The draft AI Liability Directive provides for the harmonisation of national non-contractual liability regimes for damage caused by AI. Anyone who suffers damage as a result of AI should be able to make a claim just as easily as with respect to damage that occurred without the involvement of AI. Pursuant to Article 1(2) of the AI Liability Directive, the disclosure requirements for high-risk AI systems under the AI Regulation and the burden of proof for non-contractual fault-based damage compensation claims are therefore to be harmonised. This is expressly without prejudice to European liability regulations for the transport industry.

Disclosure requirements

In accordance with Article 3(1) of the AI Liability Directive, member states should ensure that injured parties can demand disclosure of information from the operator, manufacturer or user of high-risk AI systems. Corresponding claims also exist against distributors or other third parties who are obliged by the AI Regulation. A request can be made for training and validation data, information from the technical documentation and record-keeping requirements, and information from the quality management system and about corrective actions taken. The information may only be surrendered to the extent necessary and appropriate to pursue the claim. If the request is unlawfully denied, a presumption takes effect that the requested information would have substantiated the claim. For enforcement and control, member states must establish appropriate judicial powers.

Reversal of burden of proof

To make it easier for injured parties to assert claims against companies, the AI Liability Directive provides for a shift in the burden of proof under the following conditions:

  1. The aggrieved party has set forth specific violations of the AI Regulation by the company; and
  2. the duty which the company has breached is intended precisely to protect against the damage which has occurred; and
  3. in accordance with national law, a breach of a duty to exercise due diligence has been determined to be the fault of the company; and
  4. there is a causal link between the infringement and the damage incurred.

If the reversal of the burden of proof applies, the company must prove that it is not responsible for the damage incurred.


The draft AI Liability Directive is still at an early stage. However, it is already clear that the procedural consequences of a directive in this form would be serious. In addition to a loss of intellectual property due to excessive disclosure requirements, companies will be put on the defensive from the outset by the reversal of the burden of proof. Although it may be some time before the AI Liability Directive is adopted and transposed into national law, companies that use AI or intend to do so in the future should already take measures to reduce liability and fulfil their product responsibility. For more information, see our free reuschlaw Whitepaper on updating requirements arising from product responsibility under civil and public law.

