AI and software in the context of product liability and product safety

The intensive discussions about the new AI Regulation and the AI Liability Directive[1] often cause the much more far-reaching rules of the Product Safety Regulation and the Product Liability Directive, which are currently being, or have already been, revised, to be lost from view. Especially for software manufacturers or manufacturers of products with digital elements, the AI Regulation is not always relevant, but the rules of the Product Safety Regulation and the Product Liability Directive are. This article presents the delimitation between these regimes, as well as the influence of the state of the art from product safety law and of the standards and delegated acts under the AI Regulation on liability under the Product Liability Directive and sect. 823 para. 1 (§ 823 I) German Civil Code.

I. Thematic breakdown

The legal discussions around the legislative developments of recent years related to AI have been and continue to be very intense.[2] The AI Regulation has undergone countless changes since the EU first proposed it in April 2021 and has occupied at least five Council presidencies so far. With much less discursive background noise so far, but with equally relevant content, the AI Liability Directive was published in draft form in September last year as a necessary addition to the public law regulations from the perspective of the European legislator.

The AI-specific regulations are likely to have much less influence in practice than the intensity of the public dialogue would suggest. The scope of application of the AI Regulation and of its civil law counterpart, the AI Liability Directive, is simply too narrow for a comprehensive regulation of the European software market to be assumed.

It is all the more worthwhile to deal with the new rules replacing the old Product Liability Directive 85/374/EEC and the Product Safety Directive 2001/95/EC, which are likewise in the legislative process. The latter will have a direct influence on the issues relevant here in the form of a regulation, as can already be seen in the express inclusion of software in its scope of application, discussed in more detail below. This puts an end to decades of discussion, at least among German lawyers,[3] while at the same time subjecting an entire industry to new rules. The Product Liability Directive, which is also considered in detail below and which, like the new Product Safety Regulation, explicitly declares software to be a subject of regulation, contributes to this as well. The distinction drawn by the different terms software and artificial intelligence already suggests that these last-mentioned regulations will have a much broader scope of application than the AI Regulation. The picture is rounded off – for all risks arising from networking – by the Cyber Resilience Act for products with digital elements (CRA)[4] and – specifically for machines – by the new Machinery Regulation.

II. Overview of current legal developments related to software and AI

1. AI Regulation 

As early as 2021,[5] the European Commission drafted an approach for regulating artificial intelligence corresponding to the New Legislative Framework (NLF),[6] which has been intensively discussed since then and has been subjected to manifold changes.[7] The core of the AI Regulation is a conformity assessment geared towards the placing of artificial intelligence on the market by the manufacturer, which defines minimum requirements for artificial intelligence systems.

The addressees are in particular providers of artificial intelligence, but also its commercial users as well as manufacturers of products that in turn contain artificial intelligence. As is generally the case in the product safety law of the NLF,[8] a risk-based approach is taken that

  • prohibits certain artificial intelligence practices under Art. 5,
  • places high-risk AI under the conditions of Art. 8 according to the definition of Art. 6 in conjunction with Annex III, and
  • subjects General Purpose AI as defined in Art. 3, point 1 to a few remaining requirements under Art. 4.

The definition of artificial intelligence has now been framed as an "artificial intelligence system" in Art. 3 I AI Regulation as follows:

“means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts”.

In contrast to earlier attempts at definition, this definition at least leaves untouched software that does not contain any degree of autonomy. If the regulation is adopted in this form, a deterministic system based on input – processing – output does not meet the requirements of the definition of Art. 3 I AI Regulation, which will rightly exclude numerous software manufacturers from its scope of application.
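
The cumulative structure of this definition can be made concrete in a purely illustrative sketch. The predicate names below are our own shorthand for the definitional elements quoted above; they carry no legal authority, and whether a concrete system satisfies them remains a question of legal assessment:

```python
# Illustrative only: a checklist of the definitional elements of
# Art. 3 I AI Regulation (draft wording quoted above).
# Predicate names are hypothetical shorthand, not statutory terms.

def meets_ai_definition(has_autonomy_elements: bool,
                        infers_from_data: bool,
                        produces_generated_outputs: bool) -> bool:
    """All definitional elements must be present cumulatively."""
    return (has_autonomy_elements
            and infers_from_data
            and produces_generated_outputs)

# A deterministic rule engine (fixed input -> processing -> output,
# no element of autonomy) falls outside the definition:
print(meets_ai_definition(False, False, True))   # False

# A machine-learning recommender with all elements present:
print(meets_ai_definition(True, True, True))     # True
```

The decisive filter in this sketch is the autonomy element: it is what keeps purely deterministic software outside the Regulation's scope.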

The inclusion of general purpose AI, in turn, has made the draft regulation, which is actually aimed at prohibited systems and high-risk AI, a difficult instrument to assess for those manufacturers who integrate artificial intelligence components into their software and thus fall within the scope of the AI Regulation via the general purpose definition of Art. 3 Ib. In contrast to the manufacturing industry, which has been familiar with the approach and requirements of the NLF for almost 15 years, the implementation of a conformity assessment procedure in compliance with basic requirements and the use of harmonised standards is a completely new field for software manufacturers.

2. AI Liability Directive (Proposal)

As early as 2020, the European legislator was considering a liability framework for artificial intelligence, inter alia in the White Paper on Artificial Intelligence,[9] and has subsequently continued to deepen this discussion.[10] The planned AI Liability Directive expands the existing strict liability in the EU on the basis of the Product Liability Directive 85/374/EEC – which is also being revised – to include a civil law framework for damage caused by artificial intelligence systems.

Regardless of the classification of artificial intelligence according to the AI Regulation, the planned directive covers all damage caused by artificial intelligence systems. What will also become clear in the revision of the Product Liability Directive, discussed below, also forms the core of the AI Liability Directive: the intensive intervention of the European legislator in the civil procedural position of, in particular, the claimant.[11] The starting point of the legislative considerations is the assumption, repeated like a mantra in the discussion about artificial intelligence systems,[12] that the injured party faces considerable difficulties in proving a fault in an artificial intelligence system and the causal contribution of the alleged tortfeasor, as a rule the manufacturer of the artificial intelligence. As a consequence, the AI Liability Directive introduces easements of the burden of proof unknown in previous civil procedural law, which constitute the actual core of the planned directive.

The AI Liability Directive only applies to non-contractual, fault-based claims for damages. Claims for damages resulting from the Product Liability Directive as well as the exemptions from liability and duties of care under the Digital Services Act remain unaffected. The essential definitions of terms from related legal acts of the EU, in particular the AI Regulation, are adopted.

a) Disclosure obligations of the operators and users of AI systems

According to Art. 3 AI Liability Directive, a potential claimant in the event of damage should first request the operator of the AI system, or persons assimilated to it, to disclose relevant evidence. This requirement does not apply if the claim for damages is brought before a court. In this case – or if the operator refuses disclosure – the courts are empowered to order disclosure. For this, however, the claimant must present sufficient facts and evidence to make the claim for damages plausible. The order also requires that the claimant has done everything reasonable to obtain the evidence from the respondent.

If the defendant fails to comply with such an order, the court shall presume the defendant's breach of the relevant duty of care, i.e. the fact that the requested evidence was intended to prove for the claim for damages. This presumption is rebuttable.

b) Reversal of the burden of proof

Under the following three conditions, a causal link between the defendant's fault and the damage caused by the AI system is rebuttably presumed by the court according to Art. 4 AI Liability Directive:

  • The court presumes because of non-disclosure, or the claimant proves, that the defendant breached duties of care that were directly intended to prevent the damage that occurred.
  • There is a reasonable likelihood that the breach of the duties of care has influenced the harmful output of the AI system.
  • The claimant proves that the harmful output of the AI system caused the damage.

If these conditions are met, the defendant bears the burden of proving that he is not responsible for the damage. However, the reversal of the burden of proof does not apply if the defendant proves that the claimant has sufficient evidence and expertise to prove causality.

The aforementioned easements of the burden of proof are in each case rebuttable. The standard to which the defendant must provide this counter-evidence is left open by the proposal, so that at least under German law full proof must be assumed. In practice, this is likely to lead to a considerable shift in the equality of arms in civil procedure, which in the end simply shifts the possible difficulties in classifying AI-induced damage from the claimant to the defendant.
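
On the reading that the three conditions of Art. 4 of the proposal apply cumulatively, the presumption can be restated as a boolean sketch. The predicate names are our own shorthand for the conditions listed above, not terms of the proposal:

```python
# Illustrative sketch only: the causality presumption of Art. 4
# AI Liability Directive (proposal). Predicate names are our own
# shorthand; the presumption itself remains rebuttable.

def causality_presumed(breach_shown_or_presumed: bool,
                       breach_likely_influenced_output: bool,
                       output_caused_damage_proven: bool) -> bool:
    """The causal link between fault and damage is presumed only if
    all three conditions are satisfied together."""
    return (breach_shown_or_presumed
            and breach_likely_influenced_output
            and output_caused_damage_proven)

# Without proof that the AI system's output caused the damage,
# no presumption arises:
print(causality_presumed(True, True, False))  # False
print(causality_presumed(True, True, True))   # True
```

Note what the presumption does not cover: the claimant must still prove, outside this sketch, that the output caused the damage; only the link between the breached duty of care and that output is presumed.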

The extent to which such a significant encroachment on the civil procedural law of the Member States is covered, in particular by Art. 114 TFEU, cannot be definitively assessed here. However, the above considerations undoubtedly represent a considerable encroachment to the detriment of the manufacturers of artificial intelligence systems, which can hardly be justified by the presumed special features of artificial intelligence systems, namely the black box problem. Rather, these special features can also be found in many conventional products, whose cause-and-effect relationships the injured party can likewise only make plausible with the help of expert support. In addition, the classification as high-risk AI is decisive for the relevant disclosure and easements of proof under the AI Liability Directive, and this classification can in turn be adapted, but above all extended, by the European Commission via Annex III of the AI Regulation. Artificial intelligence systems that have not been covered so far can therefore come within the scope of application and thus also become the subject of liability under the AI Liability Directive. Such a dynamisation of the law with possible retroactive effect must be viewed critically at the very least, probably also in view of the fact that, especially in the case of high-risk AI, it was not the risk of personal injury and material damage that drove the classification.

3. Product Safety Regulation

In contrast to the generically new regulatory instruments of the AI Regulation and the AI Liability Directive, the Product Safety Directive 2001/95/EC already exists as an instrument for regulating the minimum requirements for products under public law. Due to the significant changes brought about by digitalisation, the Product Safety Directive is currently being revised. Already in June 2021, the European Commission published its proposal for the revision of the Product Safety Directive 2001/95/EC,[13] recast, in keeping with the new approach, as the Product Safety Regulation. It aims to update the legal framework for the safety of non-food consumer products and to adapt it to the specific challenges of new technologies and business models. The Council of the EU has taken up the Commission's proposals and partly toned them down, as can be seen from a consolidated version from December 2022.[14] The final version was published in the Official Journal of the European Union in May 2023.[15] The new Product Safety Regulation also has considerable effects for economic operators dealing in classic non-food products,[16] but of particular interest here is the extension of the regulatory effect of general product safety law to digital properties and functions.

Central to this is the definition of product in Art. 3 I Product Safety Regulation:

“product means any item, interconnected or not to other items supplied or made available, whether for consideration or not, including in the context of providing a service – which is intended for consumers or is likely, under reasonably foreseeable conditions, to be used by consumers even if not intended for them”.

The reference to any kind of connection ("interconnected") opens the reference to all risks arising from cybersecurity and from software in general, without requiring an embodiment of the objects connected to the product. This may seem surprising at first glance given the use of the same term ("item"), but it is supported by a look at the European legislator's recitals.

Recital 25, for example, indicates the extent to which software and the changes necessary over the life of the product, e.g. through updates,[17] must already be taken into account by the responsible economic operator in the initial risk assessment of the product:

“New technologies might pose new risks to consumers’ health and safety or change the way the existing risks could materialise, such as an external intervention hacking the product or changing its characteristics. New technologies might substantially modify the original product, for instance through software updates, which should then be subject to a new risk assessment if that substantial modification were to have an impact on the safety of the product”.

Obviously, the manufacturer of a product containing software is thus forced to examine the properties of the software components he processes much more intensively than seemed necessary under the existing product safety law. This interpretation is equally supported by recital 26:

“Specific cybersecurity risks affecting the safety of consumers, as well as protocols and certifications, can be dealt with by sectoral legislation. However, it should be ensured that, in cases where such sectoral legislation does not apply, the relevant economic operators and national authorities take into consideration risks linked to new technologies, when designing the products and assessing them respectively, in order to ensure that changes introduced in the product do not jeopardise its safety”.

The Product Safety Regulation thus obliges the responsible economic operators to carry out a risk assessment of their respective product, which must take into account the influence of connected components.

As in the previous product safety legislation, deviations from these requirements give rise to obligations on the part of the responsible economic operators, both with regard to products already on the market and the distribution of further products,[18] due to the interaction of the existing Directive 2001/95/EC in its respective national transposition with the new Market Surveillance Regulation 2019/1020/EU.[19]

The risks here range from the obvious ban on distribution, to fines – greatly mitigated in the final text – to market measures with regard to products already placed on the market. However, the new Product Safety Regulation considerably expands the obligations of the responsible economic operator by prescribing mandatory minimum contents for the intensity and variants of a market measure. The central provision in this context is Art. 37 Product Safety Regulation:

“Without prejudice to other remedies that may be offered by the economic operator, it shall offer to the consumer the choice between at least two of the following remedies:

a) repair of the recalled product;

b) replacement of the recalled product with a safe one of the same type and at least the same value and quality; or

c) an adequate refund of the value of the recalled product, provided that the amount of the refund shall be at least equal to the price paid by the consumer”.

Here, quite obviously unimpressed by national contractual rules, a warranty régime of its own is being established in combination with market surveillance measures.
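
The mechanics of Art. 37 can be illustrated with a small sketch checking whether a recall offer meets the minimum: at least two of the three remedies, and any refund at least equal to the price paid. Function and parameter names are our own illustrative choices, not terms of the Regulation:

```python
# Illustrative sketch only: the Art. 37 minimum remedy choice in a
# recall under the Product Safety Regulation. Names are hypothetical.

def recall_offer_valid(remedies_offered, refund_amount, price_paid):
    """At least two of repair / replacement / refund must be offered,
    and any refund must at least equal the price paid by the consumer."""
    allowed = {"repair", "replacement", "refund"}
    offered = set(remedies_offered) & allowed
    if len(offered) < 2:
        return False
    if "refund" in offered and (refund_amount is None
                                or refund_amount < price_paid):
        return False
    return True

print(recall_offer_valid({"repair", "replacement"}, None, 499.0))  # True
print(recall_offer_valid({"refund", "repair"}, 450.0, 499.0))      # False: refund below price paid
print(recall_offer_valid({"repair"}, None, 499.0))                 # False: only one remedy
```

The last case is the one discussed in the following paragraph: offering only a software update (repair) is not enough; a second remedy variant must always be on the table.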

In relation to digital content, the wheel has come full circle insofar as product defects triggered by software or by connections to other products can be solved by an update, yet the user must still be offered another remedy variant amounting to a replacement of the existing product. Software- or connectivity-induced risks of a product thus lead to a considerable market risk that did not exist in a comparable way under the previously applicable legal situation. Indeed, the legal situation in Germany had been concretised by the Federal Court of Justice (BGH)[20] in civil law for the B2B area in the diametrically opposite direction, to the effect that a notice (warning) to the user can suffice if the limitation period for material defects has expired.

The overall scope of application of product safety law is obviously much broader than the AI Regulation and its liability law annex in the AI Liability Directive can cover. Any connection or incorporation of digital elements, including intended connectivity, must then also be assessed by the responsible economic operator with regard to its safety, regardless of whether the digital element qualifies as an artificial intelligence system under the AI Regulation. Accordingly, the defects of a product must also be assessed on the basis of these properties, and may lead to a non-compliant product and the measures presented. Furthermore, the relevant provisions of the previous Product Safety Act as the national implementation of the Product Safety Directive 2001/95/EC have so far been classified as protective laws within the meaning of § 823 II BGB, so that civil liability can already be derived from them.

4. Product Liability Directive (draft)

Certainly the broadest approach to liability risks for manufacturers of software or products with digital components can be found in the revision of the almost 40-year-old Product Liability Directive 85/374/EEC. The European Commission published a draft[21] on 28 September 2022, which will bring significant changes.

Unlike before, the definitions are no longer scattered throughout the text of the Directive, but have been brought together under one article. Overall, the draft is oriented towards the terminology of the NLF and thus necessarily parallels the approach of the NLF with a civil liability régime. For the first time, the term "related services" is defined, meaning digital services that are integrated into or inter-connected with a product and whose absence would lead to the failure of one or more of its functions. Recitals 3 and 12 of the draft make the objective clear:

“(3) Directive 85/374/EEC needs to be revised in the light of developments related to new technologies, including artificial intelligence (AI), new circular economy business models and new global supply chains, which have led to inconsistencies and legal uncertainty, in particular as regards the meaning of the term ‘product’. Experience gained in the application of Directive 85/374/EEC has also shown that injured parties face difficulties in obtaining compensation due to limitations in claiming damages and difficulties in gathering evidence to prove liability, especially in the light of increasing technical and scientific complexity. This includes claims for damages related to new technologies, including AI. The revision will therefore encourage the provision and use of such new technologies, including AI, while ensuring that claimants can benefit from the same level of protection regardless of the technology in question.”

“(12) Products in the digital age can be tangible or intangible. Software, such as operating systems, firmware, computer programs, applications or AI systems, is increasingly common on the market and plays an increasingly important role for product safety. Software is capable of being placed on the market as a standalone product and may subsequently be integrated into other products as a component, and is capable of causing damage through its execution. In the interest of legal certainty it should therefore be clarified that software is a product for the purposes of applying no-fault liability, irrespective of the mode of its supply or usage, and therefore irrespective of whether the software is stored on a device or accessed through cloud technologies […].”

As a consequence, Art. 4 I (1) of the Product Liability Directive (draft) contains a definitional framework of the concept of product that is significantly broader than the current one:

“(1) ‘product’ means all movables, even if integrated into another movable or into an immovable. ‘Product’ includes electricity, digital manufacturing files and software.”

Open source software that has not been created commercially is excluded from the scope of application; to what extent this is a relief for the multitude of projects that develop and distribute open source software seems questionable, at least in view of the lack of a definition of commerciality in the sense of a trade.

As a consequence of the inclusion of non-material products, intangible legal goods are also included in the concept of damage in Art. 4 para. 6 litera c): damage to or loss of data also counts as compensable damage. Although data in the purely professional field of application are excluded, every consumer who stores data on a carrier or device comes within the scope of liability protection. This is because the data on the defective product itself is also protected.

Likewise, Art. 6 No. 1 litera c) and No. 1 litera f) of the Product Liability Directive expand the definition of a product defect. According to this, both the ability of a product to learn after being placed on the market and the influence of cybersecurity are criteria for a product defect.

Product defects thus also include effects caused by self-learning functions as well as the effects of other products that can reasonably be expected to be used together with the product in question. This addresses the Internet of Things (IoT) and the increasing inclusion of machine learning. When using such technologies, companies must therefore also keep an eye on the results of the learning process and check, even before placing the product on the market, which interactions with other products may arise.

The inclusion of cybersecurity leads to legally difficult demarcation criteria, in particular when it comes to responsibility for a deliberately illegal intervention in the software by an external third party.[22] In this case, a very clear distinction will have to be made as to whether the product or the software corresponded to the available state of science and technology at the time it was placed on the market or at the time of the attack and the attack could nevertheless take place, or whether an attack was made possible in the first place by the omission of possible security measures.[23]

Cybersecurity also entails a further, hardly resolvable contradiction between legislative demand and factual possibility, which can also be found in the AI Regulation. Codified technical knowledge plays a decisive role in enabling economic operators to achieve the required security objectives, both in the state of the art of science and technology under the Product Liability Directive and in the harmonised standards and delegated acts of the AI Regulation. However, such norms and standards do not yet exist across the board, neither for the AI Regulation nor in the field of cybersecurity. While the NIS2 Directive[24] has at least been adopted and will enter into force by 2024 at the latest, the Cyber Resilience Act[25] is still at the draft stage. Accordingly, there are simply no relevant technical norms or standards that, on the one hand, represent a de facto aid for economic operators but, above all, form a lower limit of the necessary state of the art as an integral part of the state of science and technology relevant under product liability law. Economic operators therefore develop digital products without corresponding guard rails, while their liability in turn is to be blatantly tightened.

To facilitate the enforcement of these claims, the European Commission is resorting to the same methods as already used for the AI Liability Directive.

a) Right to disclosure

According to Art. 8 of the Product Liability Directive, if the claimant plausibly presents evidence and facts, the court may order the disclosure of relevant evidence by the defendant. Admittedly, there is a proportionality test and business secrets must be protected. Nevertheless, the procedural possibilities are clearly shifted to the disadvantage of defendant companies. Such a "disclosure" is not yet found in German law, but rather corresponds to the procedure known from US civil procedure law.

b) Legal presumption of defectiveness

To substantiate the claim, the defectiveness of the product, the damage caused and the causal link between the two must be proven, as before. Under current law, the burden of proof is on the claimant, § 1 IV Product Liability Act. In future, defectiveness will be presumed if one of the following conditions is met:

  • the defendant refuses to disclose the evidence requested by the court,
  • the claimant submits that the product does not comply with mandatory safety requirements under national or Union law which are intended precisely to protect against the risk of the harm that has occurred,
  • the claimant sets out that the damage was caused by an obvious malfunction of the product under normal or ordinary circumstances.

That the product defect also caused the damage is presumed if the claimant can show that the product has a defect and that the damage caused is typically consistent with the defect in question. Precisely because courts otherwise decide on the basis of free conviction, defendant companies will regularly have to provide evidence against these presumptions. In contrast to the past, the damages claimed under this provision are no longer capped at 85 million euros, although this limit was rarely reached in practice anyway.
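
The two presumptions just described differ in structure: the defectiveness presumption is disjunctive (any one condition suffices), the causation presumption is conjunctive. A boolean sketch, with our own illustrative predicate names, makes the contrast visible:

```python
# Illustrative sketch only: the presumptions in the draft Product
# Liability Directive. Predicate names are our own shorthand;
# both presumptions remain rebuttable.

def defectiveness_presumed(disclosure_refused: bool,
                           mandatory_safety_rules_breached: bool,
                           obvious_malfunction_in_normal_use: bool) -> bool:
    """Defect is presumed if ANY one of the three conditions holds."""
    return (disclosure_refused
            or mandatory_safety_rules_breached
            or obvious_malfunction_in_normal_use)

def causation_presumed(defect_shown: bool,
                       damage_typically_consistent_with_defect: bool) -> bool:
    """Causation is presumed only if BOTH elements are shown."""
    return defect_shown and damage_typically_consistent_with_defect

# A single refused disclosure already triggers the defect presumption:
print(defectiveness_presumed(True, False, False))  # True
# A shown defect without typical consistency does not carry causation:
print(causation_presumed(True, False))             # False
```

For defendant companies the practical consequence is the same in both cases: once a presumption is triggered, they must actively produce counter-evidence rather than merely contest the claimant's proof.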

Overall, the Product Liability Directive thus covers almost all facets of software, including AI, and also the properties of a product associated with its almost obligatory connectivity. With the new distribution of the burden of proof in civil procedure, enforcement has been considerably eased in favour of the injured party.

III. Conclusion

As described, the legal framework for manufacturers of software in general and AI in particular is changing comprehensively. In terms of liability law, neither the AI Regulation nor the AI Liability Directive seems to be as relevant as the Product Safety Regulation and, especially, the Product Liability Directive. This result is certainly diametrically opposed to the perception in the legal community and does not correlate with the amount of scholarly reception. From an attorney's perspective, the understandable lack of transparency also contributes to the way in which product liability proceedings are already resolved in favour of the injured consumer in cases of doubt. The new Product Liability Directive, which is currently being drafted, will give further impetus to this tendency of the civil courts, without the need for any special legal referral to, for example, the AI Regulation. The extent to which this risk exposure will actually lead companies in the EU to consciously invest in software in general and AI in particular seems doubtful in view of the significantly increased liability framework.

This article was published in issue 4/2023, page 152 of the journal "Recht Digital – RDi", where the complete essay can be accessed (fee required).


[1] Proposal for a Directive of the European Parliament and of the Council adapting the rules on non-contractual civil liability to artificial intelligence (AI Liability Directive), COM(2022) 496 final.
[2] Ebers/Heinze/Krügel/Steinrötter/Eichelberger, Künstliche Intelligenz und Robotik, 2020; Kaulartz/Braegelmann/Reusch, AI und Machine Learning, ch. 4; Ebers RDi 2021, 588; most recently also: Zech, Gutachten A zum 73. Deutschen Juristentag, A 93 and on this: Oechsler NJW 2022, 2713; Wiebe BB 2022, 899.
[3] Müller, Software als “Gegenstand” der Produkthaftung, pp. 176, 177 ff.; Taeger, Haftung für fehlerhafte Computerprogramme; Kaulartz/Braegelmann/Reusch, AI und Machine Learning, ch. 4; Thöne, Autonome Systeme und deliktische Haftung.
[4] Proposal for a Regulation of the European Parliament and of the Council on horizontal cybersecurity requirements for products with digital elements and amending Regulation (EU) 2019/1020, COM(2022) 454 final.
[5] Proposal for a Regulation laying down harmonised rules for Artificial Intelligence (Artificial Intelligence Act), COM(2021) 206 final.
[6] The NLF is based on Regulation 765/2008/EC, Decision 768/2008/EC and the new Market Surveillance Regulation 2019/1020/EU.
[7] An overview of the processing statuses can be found at: Digitising Europe, Documents and Timelines: The Artificial Intelligence Act (part 3), 12.10.2022, available at:
[8] See on further developments in the NLF: Felz InTeR 2022, Issue 04, Supplement, 3–6.
[9] White Paper on Artificial Intelligence – A European Approach to Excellence and Trust of 19.05.2020, COM(2020) final.
[10] Report from the Commission to the European Parliament, the Council and the European Economic and Social Committee on the safety and liability implications of Artificial Intelligence, the Internet of Things and Robotics of 19.02.2020, COM(2020) 64 final; European Parliament resolution of 20.10.2020 with recommendations to the Commission on a civil liability régime for the use of artificial intelligence (2020/2014[INL]).
[11] Reusch ZdiW 11, 2022, 429 (430); Blasek DSB 2022, 299.
[12] Executive Summary Impact Assessment, COM(2022) 496 final.
[13] COM(2021) 346 final.
[14] COM(2021) 346 – C9-0245/2021 – 2021/0170(COD).
[15] Official Journal of the European Union, L 135/1 of 23.5.2023.
[16] Schucht GewArch 10/2022, 394.
[17] Reusch BB 2019, 904.
[18] Reusch InTeR 2022, Issue 04, Supplement, 12–14; Felz InTeR 2022, Issue 04, Supplement, 3–6; Reusch BB 2021, Issue 31, Cover, I; Wiebe BB 2022, 899.
[19] Schucht, Die neue Marktüberwachungsverordnung, 2020.
[20] BGH, NJW 2009, 1080; Reusch, StoffR 6 (2009), 5.
[21] Proposal for a Directive of the European Parliament and of the Council on liability for defective products, COM(2022) 495 final – 2022/0302 (COD).
[22] Kapoor/Klindt BB 3/2023, 65 (68).
[23] Hessel/Schneider K&R 2/2022, 82.
[24] COM(2020) 0823 – C9-0422/2020 – 2020/0359(COD).
[25] Hessel/Callewaert K&R 12/2022, 798.
