The intensive discussions about the new AI Regulation and the AI Liability Directive[1] often cause the much more far-reaching rules of the Product Safety Regulation and the Product Liability Directive, which have been or are currently being revised, to be lost from view. For software manufacturers and manufacturers of products with digital elements in particular, the AI Regulation is not necessarily relevant in every case, but the rules of the Product Safety Regulation and the Product Liability Directive are. This article presents the delimitation between these instruments, as well as the influence of the state of the art under product safety law, and of the standards and delegated acts under the AI Regulation, on liability under the Product Liability Directive and sect. 823 para. 1 (§ 823 I) German Civil Code.
I. Thematic breakdown
The legal discussion around the AI-related legislative developments of recent years has been, and continues to be, very intense.[2] The AI Regulation has undergone countless changes since the European Commission first proposed it in April 2021 and has so far occupied at least five Council presidencies. With far less discursive background noise to date, but with equally relevant content, the AI Liability Directive was published in draft form in September 2022 as what the European legislator regards as a necessary civil law complement to the public law rules.
In practice, the AI-specific rules are likely to have much less influence than the intensity of the public dialogue would suggest. The scope of application of the AI Regulation, and of its civil law counterpart, the AI Liability Directive, is simply too narrow to amount to a comprehensive regulation of the European software market.
It is all the more worthwhile to examine the revisions of the old Product Liability Directive 85/374/EEC and of the Product Safety Directive 2001/95/EC, both of which are also in the legislative process. The latter, recast as a regulation, will have a direct influence on the issues relevant here, as can already be seen from the express inclusion of software in its scope of application, discussed in more detail below. This puts an end to decades of discussion, at least among German lawyers,[3] while at the same time subjecting an entire industry to new rules. The revised Product Liability Directive, which is also considered in detail below and which, like the new Product Safety Regulation, explicitly makes software a subject of regulation, contributes to the same effect. The distinction drawn between the terms software and artificial intelligence already suggests that these two instruments will have a much broader scope of application than the AI Regulation. The framework is rounded off, for all risks arising from connectivity, by the Cyber Resilience Act (CRA) on cybersecurity requirements for products with digital elements,[4] and, specifically for machinery, by the new Machinery Regulation.
II. Overview of current legal developments related to software and AI
1. AI Regulation
As early as 2021,[5] the European Commission drafted an approach to regulating artificial intelligence in line with the New Legislative Framework (NLF),[6] which has since been intensively discussed and subjected to manifold changes.[7] At the core of the AI Regulation is a conformity assessment, tied to the manufacturer's placing of artificial intelligence on the market, which defines minimum requirements for artificial intelligence systems.
The addressees are in particular providers of artificial intelligence, but also its commercial users as well as manufacturers of products that in turn contain artificial intelligence. As is generally the case in the product safety law of the NLF,[8] a risk-based approach is taken that
- prohibits certain artificial intelligence practices under Art. 5,
- places high-risk AI, as defined in Art. 6 in conjunction with Annex III, under the conditions of Art. 8, and
- subjects general purpose AI, as defined in Art. 3 point 1b, to a few remaining requirements under Art. 4.
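The tiered structure of this list can be read as a simple decision cascade. The following Python sketch is purely illustrative: the function and flag names are our own shorthand, not terms of the Regulation, and each flag stands in for a substantive legal assessment rather than a simple boolean.

```python
from enum import Enum, auto

class RiskTier(Enum):
    PROHIBITED = auto()       # practices banned under Art. 5
    HIGH_RISK = auto()        # Art. 6 in conjunction with Annex III
    GENERAL_PURPOSE = auto()  # general purpose AI, Art. 3 point 1b
    OUT_OF_SCOPE = auto()     # e.g. purely deterministic software

def triage(prohibited_practice: bool,
           annex_iii_match: bool,
           general_purpose: bool) -> RiskTier:
    """Illustrative order of the tests under the draft AI Regulation."""
    if prohibited_practice:
        return RiskTier.PROHIBITED
    if annex_iii_match:
        return RiskTier.HIGH_RISK
    if general_purpose:
        return RiskTier.GENERAL_PURPOSE
    return RiskTier.OUT_OF_SCOPE
```

The cascade also makes visible why the definition in Art. 3 matters so much: a system that never enters it, such as purely deterministic software, never reaches any of the tiers.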
Artificial intelligence is now defined as an artificial intelligence system in Art. 3 I AI Regulation as follows:
“means a system that is designed to operate with elements of autonomy and that, based on machine and/or human-provided data and inputs, infers how to achieve a given set of objectives using machine learning and/or logic- and knowledge-based approaches, and produces system-generated outputs such as content (generative AI systems), predictions, recommendations or decisions, influencing the environments with which the AI system interacts”.
In contrast to earlier attempts at a definition, this one at least leaves untouched software that does not involve any degree of autonomy. If the Regulation is adopted in this form, a deterministic system based on input, processing and output does not meet the definition of Art. 3 I AI Regulation, which will rightly exclude numerous software manufacturers from its scope of application.
The inclusion of general purpose AI, in turn, has made the draft Regulation, which is actually aimed at prohibited systems and high-risk AI, a difficult instrument to assess for those manufacturers who integrate artificial intelligence components in their software and thus fall within the scope of the AI Regulation via the general purpose definition of Art. 3 point 1b. In contrast to the manufacturing industry, which has been familiar with the approach and requirements of the NLF for almost 15 years, carrying out a conformity assessment procedure, complying with essential requirements and using harmonised standards is a completely new field for software manufacturers.
2. AI Liability Directive (Proposal)
As early as 2020, the European legislator was considering a liability framework for artificial intelligence, inter alia in the White Paper on Artificial Intelligence,[9] and has subsequently deepened this discussion.[10] The planned AI Liability Directive supplements the existing strict liability in the EU under the Product Liability Directive 85/374/EEC (which is itself being revised) with a civil law framework for damage caused by artificial intelligence systems.
Regardless of how an artificial intelligence system is classified under the AI Regulation, the planned directive covers all damage caused by such systems. What will also become apparent in the revision of the Product Liability Directive, discussed below, likewise forms the core of the AI Liability Directive: the European legislator's intensive intervention in civil procedure, in particular in favour of the plaintiff.[11] The starting point of the legislative considerations is the assumption, repeated like a mantra in the discussion about artificial intelligence systems,[12] that the injured party faces considerable difficulties in proving both a fault in an artificial intelligence and the causal contribution of the alleged tortfeasor, as a rule the manufacturer of the artificial intelligence. As a consequence, the AI Liability Directive introduces easements of the burden of proof unknown in previous civil procedural law, which constitute the actual core of the planned directive.
The AI Liability Directive applies only to non-contractual, fault-based claims for damages. Claims for damages under the Product Liability Directive, as well as the exemptions from liability and the due diligence obligations under the Digital Services Act, remain unaffected. The essential definitions are adopted from related EU legal acts, in particular the AI Regulation.
a) Disclosure obligations of the operators and users of AI systems
According to Art. 3 AI Liability Directive, a potential claimant should, in the event of damage, first request the operator of the AI system, or persons treated as equivalent to the operator, to disclose relevant evidence. This requirement does not apply once the claim for damages has been brought before a court. In that case, or if the operator refuses disclosure, the courts are empowered to order disclosure. For such an order, however, the plaintiff must present facts and evidence sufficient to make the claim for damages plausible, and must have undertaken all reasonable efforts to obtain the evidence from the defendant.
If the defendant fails to comply with such an order, the court shall presume a breach of the relevant duty of care by the defendant, that is, the very point the requested evidence was meant to prove for the claim for damages. This presumption is rebuttable.
b) Reversal of the burden of proof
Under the following three conditions, which must all be met, a causal link between the defendant’s fault and the damage caused by the AI system is rebuttably presumed by the court according to Art. 4 AI Liability Directive:
- The court presumes (because of non-disclosure), or the plaintiff proves, that the defendant breached duties of care that were directly intended to prevent the damage that occurred.
- There is a reasonable likelihood that the breach of duties of care has influenced the harmful effects of the AI system.
- The plaintiff proves that the harmful effects of the AI system caused the damage.
If these conditions are met, the defendant bears the burden of proving that he is not responsible for the damage. However, the reversal of the burden of proof does not apply if the defendant proves that the claimant has sufficient evidence and expertise to prove causality.
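The structure of Art. 4 can be condensed into a short sketch. The following Python fragment is a minimal model on the reading set out above, namely that the three conditions apply cumulatively and that the presumption is defeated where the defendant shows the claimant can prove causality itself; all parameter names are our own paraphrases, not terms of the draft.

```python
def causality_presumed(duty_breach_shown: bool,
                       influence_reasonably_likely: bool,
                       output_caused_damage_shown: bool,
                       claimant_can_prove_alone: bool) -> bool:
    """Sketch of the presumption of causality (Art. 4 AI Liability
    Directive, draft): all three conditions must hold, and the
    presumption does not apply where the defendant shows that the
    claimant has sufficient evidence and expertise."""
    if claimant_can_prove_alone:
        return False  # presumption excluded, ordinary burden of proof
    return (duty_breach_shown                # first condition
            and influence_reasonably_likely  # second condition
            and output_caused_damage_shown)  # third condition
```

Even where the function returns True, the presumption remains rebuttable; the sketch only captures whether it arises at all.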
The aforementioned easements of the burden of proof are in each case rebuttable. The proposal leaves open the standard to which the defendant must provide this rebutting evidence, so that, at least under German law, full proof must be assumed. In practice, this is likely to shift the equality of arms in civil procedure considerably, ultimately simply moving the difficulties in classifying AI-induced damage from the plaintiff to the defendant.
The extent to which such a significant encroachment on the civil procedural law of the Member States is covered by the legal bases, in particular Art. 114 TFEU, cannot be definitively assessed here. In any event, the above provisions represent a considerable encroachment to the detriment of the manufacturers of artificial intelligence systems, one that can hardly be justified by the presumed special features of such systems, namely the black box problem. These features can equally be found in many conventional products whose cause and effect relationships the injured party can only render plausible with expert support. In addition, the relevant disclosure obligations and easements of proof under the AI Liability Directive hinge on the classification as high-risk AI, which the European Commission can adapt, and above all extend, via Annex III of the AI Regulation. Artificial intelligence systems not covered so far can therefore be brought within the scope of application and thus also become subject to liability under the AI Liability Directive. Such a dynamisation of the law, with possible retroactive effect, must be viewed critically at the very least, not least because, especially in the case of high-risk AI, it was not the risk of personal injury and property damage that drove the classification.
3. Product Safety Regulation
In contrast to the generically new regulatory instruments of the AI Regulation and the AI Liability Directive, the Product Safety Directive 2001/95/EC already exists as an instrument regulating the minimum public law requirements for products. Owing to the significant changes brought about by digitalisation, it is currently being revised. As early as June 2021, the European Commission published its proposal for the revision of the Product Safety Directive 2001/95/EC,[13] this time, in keeping with the new approach, as a regulation: the Product Safety Regulation. It aims to update the legal framework for the safety of non-food consumer products and to adapt it to the specific challenges of new technologies and business models. The Council of the EU took up the Commission's proposals and partly toned them down, as can be seen from a consolidated version of December 2022.[14] The final version was published in the Official Journal of the European Union in May 2023.[15] The new Product Safety Regulation also has considerable effects for economic operators dealing in classic non-food products,[16] but of particular interest here is the extension of general product safety law to digital properties and functions.
Central to this is the definition of product in Art. 3 I Product Safety Regulation:
“product means any item, whether or not it is interconnected to other items, supplied or made available, whether for consideration or not, including in the context of providing a service, which is intended for consumers or is likely, under reasonably foreseeable conditions, to be used by consumers even if not intended for them”.
The reference to any kind of connection (interconnected) opens up the regulation to all risks arising from cybersecurity and from software in general, without requiring the objects connected to the product to have a physical embodiment. This may seem surprising at first glance given the use of the same term (item), but it is borne out by the European legislator's recitals.
Recital 25, for example, indicates the extent to which software and the changes necessary over the life of the product, e.g. through updates,[17] must already be taken into account by the responsible economic operator in the initial risk assessment of the product:
“New technologies might pose new risks to consumers’ health and safety or change the way the existing risks could materialise, such as an external intervention hacking the product or changing its characteristics. New technologies might substantially modify the original product, for instance through software updates, which should then be subject to a new risk assessment if that substantial modification were to have an impact on the safety of the product”.
Obviously, the manufacturer of a product containing software is thus forced to examine the properties of the software components it incorporates much more intensively than seemed necessary under the existing product safety law. This interpretation is equally supported by recital 26:
“Specific cybersecurity risks affecting the safety of consumers, as well as protocols and certifications, can be dealt with by sectoral legislation. However, it should be ensured that, in cases where such sectoral legislation does not apply, the relevant economic operators and national authorities take into consideration risks linked to new technologies, when designing the products and assessing them respectively, in order to ensure that changes introduced in the product do not jeopardise its safety”.
The Product Safety Regulation thus obliges the responsible economic operators to carry out a risk assessment of their respective product, which must take into account the influence of connected components.
As under the previous product safety legislation, deviations from these requirements give rise to obligations on the part of the responsible economic operators, both with regard to products already on the market and to the distribution of further products,[18] owing to the interaction of the existing Directive 2001/95/EC, in its respective national transposition, with the new Market Surveillance Regulation (EU) 2019/1020.[19]
The risks range from an obvious distribution ban, through fines (greatly mitigated in the final text), to market measures concerning products already placed on the market. The new Product Safety Regulation, however, considerably expands the obligations of the responsible economic operator by prescribing the intensity and variants of a market measure with mandatory minimum content. The central provision in this context is Art. 37 Product Safety Regulation:
“Without prejudice to other remedies that may be offered by the economic operator, it shall offer to the consumer the choice between at least two of the following remedies:
a) repair of the recalled product;
b) replacement of the recalled product with a safe one of the same type and at least the same value and quality; or
c) an adequate refund of the value of the recalled product, provided that the amount of the refund shall be at least equal to the price paid by the consumer”.
Here, quite obviously unimpressed by national contract law, a warranty regime of its own is being established in combination with market surveillance measures.
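The mandatory minimum content of Art. 37 reduces to a simple cardinality check, sketched below; the enum values merely paraphrase the three remedies and are not statutory wording.

```python
from enum import Enum

class Remedy(Enum):
    REPAIR = "repair of the recalled product"
    REPLACEMENT = "replacement with a safe product of the same type"
    REFUND = "refund of at least the price paid by the consumer"

def recall_offer_compliant(offered: set) -> bool:
    """The consumer must be able to choose between at least two
    of the three remedies of Art. 37 Product Safety Regulation."""
    return len(offered) >= 2

# A software update alone, as a form of repair, would not suffice:
assert not recall_offer_compliant({Remedy.REPAIR})
assert recall_offer_compliant({Remedy.REPAIR, Remedy.REFUND})
```

The two assertions illustrate the point made in the following paragraph: even where an update would technically cure the defect, a second remedy must be on the table.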
In relation to digital content, the wheel has thus come full circle: product defects triggered by software or by connections to other products may be curable by an update, yet the user must still be offered a second remedy, such as replacement of the existing product. Software- or connectivity-induced risks of a product therefore entail a considerable market risk that did not exist in comparable form under the previously applicable law. On the contrary, for the B2B sector the Federal Court of Justice (BGH)[20] had concretised German civil law in the diametrically opposite direction, holding that a notice (warning) to the user can suffice once the limitation period for material defects has expired.
The overall scope of application of product safety law is obviously much broader than what the AI Regulation and its liability law annex, the AI Liability Directive, can cover. Any connection or incorporation of digital elements, including intended connectivity, must therefore be assessed by the responsible economic operator with regard to safety, regardless of whether the digital element qualifies as an artificial intelligence system under the AI Regulation. Accordingly, the defectiveness of a product must also be assessed on the basis of these properties and, where present, leads to a non-compliant product and the measures presented above. Furthermore, the relevant provisions of the previous Product Safety Act, as the national implementation of the Product Safety Directive 2001/95/EC, have so far been classified as protective laws within the meaning of § 823 II BGB, so that civil liability can already be derived from them.
4. Product Liability Directive (draft)
Certainly the broadest approach to liability risks for manufacturers of software or products with digital components can be found in the revision of the almost 40-year-old Product Liability Directive 85/374/EEC. The European Commission published a draft[21] on 28 September 2022, which will bring significant changes.
Unlike before, the definitions are no longer scattered throughout the text of the directive but are brought together in a single article. Overall, the draft is oriented towards the terminology of the NLF and thus necessarily parallels the NLF's approach with a civil liability regime. For the first time, the term “related services” is defined: digital services that are integrated into, or interconnected with, a product in such a way that their absence would prevent the product from performing one or more of its functions. Recitals 3 and 12 of the draft make the objective clear:
“(3) Directive 85/374/EEC needs to be revised in the light of developments related to new technologies, including artificial intelligence (AI), new circular economy business models and new global supply chains, which have led to inconsistencies and legal uncertainty, in particular as regards the meaning of the term ‘product’. Experience gained in the application of Directive 85/374/EEC has also shown that injured parties face difficulties in obtaining compensation due to limitations in claiming damages and difficulties in gathering evidence to prove liability, especially in the light of increasing technical and scientific complexity. This includes claims for damages related to new technologies, including AI. The revision will therefore encourage the provision and use of such new technologies, including AI, while ensuring that claimants can benefit from the same level of protection regardless of the technology in question.”
“(12) Products in the digital age can be tangible or intangible. Software, such as operating systems, firmware, computer programs, applications or AI systems, is increasingly common on the market and plays an increasingly important role for product safety. Software is capable of being placed on the market as a standalone product and may subsequently be integrated into other products as a component, and is capable of causing damage through its execution. In the interest of legal certainty it should therefore be clarified that software is a product for the purposes of applying no-fault liability, irrespective of the mode of its supply or usage, and therefore irrespective of whether the software is stored on a device or accessed through cloud technologies […].”
As a consequence, Art. 4 I (1) of the draft Product Liability Directive contains a definition of the concept of product that is significantly broader than the current one:
“(1) ‘product’ means all movables, even if integrated into another movable or into an immovable. ‘Product’ includes electricity, digital manufacturing files and software.”
Open source software that has not been developed commercially is excluded from the scope of application; to what extent this relieves the multitude of projects that develop and distribute open source software seems questionable, at least given the absence of a definition of commerciality in the sense of a trade.
As a consequence of the inclusion of non-material products, intangible legal interests are also included in the concept of damage in Art. 4 para. 6 lit. c): damage to or loss of data also counts as compensable damage. Although data used purely in a professional context is excluded, every consumer who stores data on a carrier or device brings it within the scope of liability, because the data on the defective product itself is also protected.
Likewise, Art. 6 para. 1 lit. c) and lit. f) of the draft Product Liability Directive expand the definition of a product defect. According to these provisions, both the ability of a product to continue learning after being placed on the market and the influence of cybersecurity are criteria for a product defect.
Product defects thus also include effects caused by self-learning functions, as well as the effects of other products that can reasonably be expected to be used together with the product in question. This addresses the Internet of Things (IoT) and the increasing use of machine learning. When using such technologies, companies must therefore also keep an eye on the results of the learning process and check, even before placing the product on the market, which interactions with other products may arise.
The inclusion of cybersecurity leads to legally difficult criteria of demarcation, in particular where responsibility for a deliberately unlawful intervention in the software by an external third party is concerned.[22] Here a very clear distinction will have to be made between two situations: either the product or the software corresponded to the available state of science and technology at the time it was placed on the market, or at the time of the attack, and the attack nevertheless succeeded; or the attack was made possible in the first place by the omission of available security measures.[23]
Cybersecurity also exposes a further, hardly resolvable contradiction between legislative demand and factual possibility, one that can likewise be found in the AI Regulation. Codified technical knowledge plays a decisive role in enabling economic operators to achieve the required safety objectives, both for the state of science and technology under the Product Liability Directive and for the harmonised standards and delegated acts under the AI Regulation. Yet such norms and standards do not exist across the board, neither for the AI Regulation nor in the field of cybersecurity. While the NIS 2 Directive[24] has at least been adopted and must be transposed by 2024 at the latest, the Cyber Resilience Act[25] is still at the draft stage. Accordingly, there are simply no relevant technical norms or standards that would, on the one hand, offer economic operators de facto assistance and, above all, form the lower limit of the necessary state of the art as an integral part of the state of science and technology relevant under product liability law. Economic operators are thus developing digital products without corresponding guard rails, while their liability is at the same time being drastically tightened.
To facilitate the enforcement of these claims, the European Commission is resorting to the same methods as already used for the AI Liability Directive.
a) Right to disclosure
According to Art. 8 of the draft Product Liability Directive, if the claimant plausibly presents facts and evidence, the court may order the defendant to disclose relevant evidence. Admittedly, a proportionality test applies and trade secrets must be protected. Nevertheless, the procedural position is clearly shifted to the disadvantage of defendant companies. Such “disclosure” has no counterpart in German law to date; it rather corresponds to the discovery known from US civil procedure.
b) Legal presumption of defectiveness
To substantiate the claim, the claimant must, as before, prove the defectiveness of the product, the damage caused and the causal link between the two. Under current law, the burden of proof lies with the claimant, § 1 IV Product Liability Act. In the future, defectiveness will be presumed if one of the following conditions is met:
- the defendant refuses to disclose the evidence requested by the court,
- the claimant submits that the product does not comply with mandatory safety requirements under national or Union law that are intended precisely to protect against the risk of the harm that has occurred, or
- the claimant sets out that the damage was caused by an obvious malfunction of the product under normal or ordinary circumstances.
That the product defect also caused the damage is presumed if the plaintiff can show that the product is defective and that the damage caused is of a kind typically consistent with the defect in question. Precisely because courts otherwise decide on the basis of their free conviction, defendant companies will regularly have to adduce evidence against these presumptions. Unlike in the past, the damages claimed under this regime are no longer capped at 85 million euros, although this ceiling was in any event rarely at issue in practice.
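The contrasting structure of the two presumptions, disjunctive for defectiveness and conjunctive for causation, can be made explicit in a short sketch; the parameter names are our own paraphrases of the draft's conditions, and both presumptions remain rebuttable.

```python
def defectiveness_presumed(disclosure_refused: bool,
                           mandatory_requirement_breached: bool,
                           obvious_malfunction: bool) -> bool:
    """Any one of the three conditions suffices to trigger the
    (rebuttable) presumption of defectiveness."""
    return (disclosure_refused
            or mandatory_requirement_breached
            or obvious_malfunction)

def causation_presumed(defect_shown: bool,
                       damage_typically_consistent: bool) -> bool:
    """Causation is presumed where a defect is established and the
    damage is of a kind typically consistent with that defect."""
    return defect_shown and damage_typically_consistent
```

In combination, a claimant who secures the defectiveness presumption and shows a typically consistent damage pattern arrives at a presumed causal chain without full proof of either link.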
Overall, the draft Product Liability Directive thus covers almost all facets of software, including AI, as well as the properties of a product associated with its almost obligatory connectivity. With the new distribution of the burden of proof in civil procedure, enforcement has been considerably strengthened in favour of the injured party.
III. Conclusion
As described, the legal framework for manufacturers of software in general, and of AI in particular, is changing comprehensively. In terms of liability law, neither the AI Regulation nor the AI Liability Directive appears to be as relevant as the Product Safety Regulation and, above all, the Product Liability Directive. This finding is certainly diametrically opposed to the perception in the legal community and does not correlate with the volume of academic attention. From an attorney's perspective, the understandable lack of transparency of digital products also contributes to product liability proceedings already being resolved in favour of the injured consumer in cases of doubt. The new Product Liability Directive, currently being drafted, will give further impetus to this tendency of the civil courts, without any need for a special legal reference to the AI Regulation. Whether this risk exposure will actually lead companies in the EU to consciously invest in software in general, and AI in particular, seems doubtful in view of the significantly increased liability framework.
This article was published in issue 4/2023, page 152, of the journal “Recht Digital – RDi”.