8.1 Introduction
Artificial intelligence (AI) is becoming increasingly important in our daily lives, and so is academic research on its impact on various legal domains.Footnote 1 One of the fields that has attracted much attention is extra-contractual or tort liability. That is because AI will inevitably cause damage, for instance, through certain actions or decisions (e.g., an automated robot vacuum failing to recognize a human and injuring them) or by providing incorrect information that results in harm (e.g., when AI used in construction leads to the collapse of a building that injures a bystander). Reference can also be made to accidents involving autonomous vehicles.Footnote 2 The autopilot of a Tesla car, for instance, was not able to distinguish a white tractor-trailer crossing the road from the bright sky above, leading to a fatal crash.Footnote 3 A self-driving Uber car hit a pedestrian in Arizona; the woman later died in the hospital.Footnote 4 These – and many other – examples show that accidents may happen even when national and supranational safety rules for AI are optimized. This is when questions of liability become significant.Footnote 5 The importance of liability in relation to AI systems has already been highlighted in several documents issued by the European Union (EU). The White Paper on Artificial Intelligence, for instance, stresses that the main risks related to the use of AI concern the application of rules designed to protect fundamental rights as well as safety and liability-related issues.Footnote 6 Scholars have also concluded that “[l]iability certainly represents one of the most relevant and recurring themes”Footnote 7 when it comes to AI systems. Extra-contractual liability, moreover, encompasses many of the fundamental questions and problems that arise in the context of AI and liability.
Both academic researchFootnote 8 and policy initiativesFootnote 9 have already addressed many pressing issues in this legal domain. Instead of discussing the impact of AI on different (tort) liability regimes or issues of legal personality for AI systems,Footnote 10 we will touch upon some of the main challenges and proposed solutions at the EU and national level. More specifically, we will illustrate the remaining importance of national law (Section 8.2) and discuss procedural elements (Section 8.3). We will then focus on the problematic qualification and application of certain tort law concepts in an AI context (Section 8.4). The most important findings are summarized in the chapter’s conclusion (Section 8.5).Footnote 11
8.2 The Remaining Importance of National Law for AI-Related Liability
In recent years, several initiatives with regard to liability for damage involving AI have been taken or discussed at the EU level. Without going into detail, we will provide a high-level overview to give the reader the necessary background to understand some of the issues that we will discuss later.Footnote 12
The European Parliament (EP) issued its first report on civil law rules for robots in 2017. It urged the European Commission (EC) to consider a legislative instrument that would deal with liability for damage caused by autonomous systems and robots, thereby evaluating the feasibility of a strict liability or a risk management approach.Footnote 13 This was followed by a report on the “Liability for artificial intelligence and other emerging digital technologies,” issued in November 2019 by an Expert Group set up by the EC. The report explored the main liability challenges posed to current tort law by AI. It concluded that liability regimes “in force in member states ensure at least basic protection of victims whose damage is caused by the operation of such new technologies.”Footnote 14 However, the specific characteristics of AI systems, such as their complexity, self-learning abilities, opacity, and limited predictability, may make it more difficult to offer victims a claim for compensation in all cases where this seems justified. The report also stressed that the allocation of liability may be unfair or inefficient. It contains several recommendations to remedy potential gaps in EU and national liability regimes.Footnote 15 The EC subsequently issued a White Paper on AI in 2020. It had two main building blocks, namely an “ecosystem of trust” and an “ecosystem of excellence.”Footnote 16 More importantly, the White Paper was accompanied by a report on safety and liability. The report identified several points that needed further attention, such as clarifying the scope of the Product Liability Directive (PLD) or assessing procedural aspects (e.g., identifying the liable person, proving the conditions for a liability claim, or accessing the AI system to substantiate the claim).Footnote 17 In October 2020, the EP adopted a resolution with recommendations to the EC on a civil liability regime for AI. It favors strict liability for operators of high-risk AI systems and fault-based liability for operators of low-risk AI systems,Footnote 18 with a reversal of the burden of proof.Footnote 19 In April 2021, the EC issued its draft AI Act, which entered into force in August 2024 after a long legislative procedure.Footnote 20 The AI Act adheres to a risk-based approach. While certain AI systems are prohibited, several additional requirements apply for placing high-risk AI systems on the market. The AI Act also imposes obligations upon several parties, such as providers and users of high-risk AI systems.Footnote 21 Those obligations will be important when assessing the potential liability of such parties, for instance, when determining whether an operator or user committed a fault (i.e., a violation of a specific legal norm or negligence).Footnote 22 More importantly, the EC published two proposals in September 2022 that aim to adapt (tort) liability rules to the digital age, the circular economy, and the impact of the global value chain. The “AI Liability Directive” contains rules on the disclosure of information and the alleviation of the burden of proof in relation to damage caused by AI systems.Footnote 23 The “revised Product Liability Directive” substantially modifies the current product liability regime by including software within its scope, integrating new circumstances to assess the product’s defectiveness, and introducing provisions regarding presumptions of defectiveness and causation.Footnote 24
These evolutions show that much is happening at the EU level regarding liability for damage involving AI. The problem, however, is that the European liability landscape is rather heterogeneous. With the exception of the (revised) PLD and the newly proposed AI Liability Directive, contractual and extra-contractual liability frameworks are usually national. While initiatives are thus being taken at the EU level, national law remains the most important source when it comes to tort liability and AI. Several of the proposals and initiatives discussed in the previous paragraph contain provisions and concepts that refer to national law or that rely on the national courts for their interpretation.Footnote 25 According to Article 8 of the EP Resolution, for instance, the operator will not be liable if he or she can prove that the harm or damage was caused without his or her fault, relying on either of the following grounds: (a) the AI system was activated without his or her knowledge while all reasonable and necessary measures to avoid such activation outside of the operator’s control were taken, or (b) due diligence was observed by performing all the following actions: selecting a suitable AI system for the right task and skills, putting the AI system duly into operation, monitoring the activities, and maintaining the operational reliability by regularly installing all available updates.Footnote 26 The AI Liability Directive also relies on concepts that will eventually have to be explained and interpreted by judges. National courts will, for instance, need to limit the disclosure of evidence to that which is necessary and proportionate to support a potential claim or a claim for damages.Footnote 27 It also relies on national law to determine the scope and definition of “fault” and “causal link.”Footnote 28 The revised PLD likewise includes various notions that will have to be interpreted, explained, and refined by national judges according to their legal tradition. These concepts include, for instance, “reasonably foreseeable,” “substantial,” “relevant,” “proportionate,” and “necessary.”Footnote 29 The definitions provided by courts may vary from one jurisdiction to another, which gives some flexibility to Member States but may create legal fragmentation as well.Footnote 30
8.3 Procedural Elements
A “general, worldwide accepted rule”Footnote 31 in the law of evidence is that each party has to prove its claims and contentions (actori incumbit probatio).Footnote 32 The application of this procedural rule can be challenging when accidents involve AI systems. Such systems are not always easily understandable and interpretable but can take the form of “black boxes” that evolve through self-learning. Several actors are also involved in the AI life cycle (e.g., the developers of the software, the producer of the hardware, the owners of the AI product, suppliers of data, public authorities, or the users of the product). Victims are therefore confronted with the increasingly daunting task of identifying the AI system as the source of their harm and proving this.Footnote 33 Moreover, injured parties, especially if they are natural persons, do not always have the necessary knowledge of the specific AI system or access to the information needed to build a case in court.Footnote 34 Under the Product Liability Directive, the burden of proof is high as well. A victim has to prove that the product caused the damage because it is defective, implying that it did not provide the safety one is legitimately entitled to expect.Footnote 35 It is also uncertain what exactly constitutes a defect of an advanced AI system. For instance, if an AI diagnosis tool delivers a wrong diagnosis, “there is no obvious malfunctioning that could be the basis for a presumption that the algorithm was defective.”Footnote 36 It may thus be difficult and costly for consumers to prove the defect when they have no expertise in the field, especially when the computer program is complex and not readable ex post.Footnote 37 An additional hurdle is that the elements of a claim in tort law are governed by national law. An example is the requirement of causation, including procedural questions such as the standard of proof or the laws and practice of evidence.Footnote 38
In sum, persons who have suffered harm may not have effective access to the evidence that is necessary to build a case in court and may have less effective redress possibilities compared to situations in which the damage is caused by “traditional” products.Footnote 39 It is, however, important that victims of accidents involving AI systems are not left with a lower level of protection than victims of other products and services, for which they would obtain compensation under national law. Otherwise, societal acceptance of those AI systems and other emerging technologies could be hampered, and hesitance to use them could be the result.Footnote 40
To remedy this “vulnerable” or “weak” position, procedural mechanisms and solutions have been proposed and discussed in academic scholarship.Footnote 41 One can think of disclosure requirements. Article 3 of the AI Liability Directive, for instance, contains several provisions on the disclosure of evidence. A court may, upon the request of a (potential) claimant, order the disclosure of relevant evidence about a specific high-risk AI system that is suspected of having caused damage. Such requests for evidence may be addressed to, inter alia, the provider of an AI system, a person subject to the provider’s obligations, or its user.Footnote 42 Several requirements must be fulfilled by the (potential) claimant before the court can order the disclosure of evidence.Footnote 43 National courts also need to limit the disclosure of evidence to what is necessary and proportionate to support a potential claim or an actual claim for damages.Footnote 44 To that end, the legitimate interests of all parties – including providers and users – as well as the protection of confidential information should be taken into account.Footnote 45 The revised PLD contains similar provisions. Article 8 allows Member States’ courts to require the defendant to disclose to the injured person – the claimant – relevant evidence that is at its disposal. The claimant must, however, present facts and evidence that are sufficient to support the plausibility of the claim for compensation.Footnote 46 Moreover, the disclosed evidence can be limited to what is necessary and proportionate to support a claim.Footnote 47
Several policy initiatives also propose a reversal of the burden of proof. The Expert Group on Liability and New Technologies, for instance, proposes that “where the damage is of a kind that safety rules were meant to avoid, failure to comply with such safety rules, should lead to a reversal of the burden of proving (a) causation, and/or (b) fault, and/or (c) the existence of a defect.”Footnote 48 It adds that if “it is proven that an emerging digital technology caused harm, and liability therefore is conditional upon a person’s intent or negligence, the burden of proving fault should be reversed if disproportionate difficulties and costs of establishing the relevant standard of care and of proving their violation justify it.”Footnote 49 The burden of proving causation may also be alleviated in light of the challenges of emerging digital technologies if a balancing of the listed factors warrants doing so (e.g., the likelihood that the technology at least contributed to the harm or the kind and degree of harm potentially and actually caused).Footnote 50 It has already been mentioned that the Resolution issued by the EP in October 2020 also contains a reversal of the burden of proof regarding fault-based liability for operators of low-risk AI systems.Footnote 51
In addition to working with a reversal of the burden of proof, one can also rely on rebuttable presumptions. In this regard, both the AI Liability Directive and the revised PLD are important. Article 4.1 of the AI Liability Directive, for instance, introduces a rebuttable presumption of a “causal link between the fault of the defendant and the output produced by the AI system or the failure of the AI system to produce an output.” However, this presumption only applies when three conditions are met. First, the fault of the defendant has to be proven by the claimant according to the applicable EU law or national rules, or presumed by the court following Article 3.5 of the AI Liability Directive. Such a fault can be established, for example, “for non-compliance with a duty of care pursuant to the AI Act.”Footnote 52 Second, it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output. Third, the claimant needs to demonstrate that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage. The defendant, however, has the right to rebut the presumption of causality.Footnote 53 Moreover, in the case of a claim for damages concerning a high-risk AI system, the court is not required to apply the presumption when the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link.Footnote 54
The revised PLD also introduces presumptions of defectiveness and causality that apply under certain conditions. Such conditions are met, for instance, when the defendant fails to disclose relevant evidence, when the claimant provides evidence that the product does not comply with mandatory safety requirements set in EU or national law, or when the claimant establishes that the damage was caused by an “obvious malfunction” of the product during normal use or under ordinary circumstances. Article 9.3 also provides a presumption of causality when “it has been established that the product is defective and the damage caused is of a kind typically consistent with the defect in question.” In other words, Article 9 contains two specific presumptions: one of the product’s defectiveness and one related to the causal link between the defectiveness of the product and the damage. In addition, Article 9.4 contains a more general presumption. Where a national court decides that “the claimant faces excessive difficulties, due to the technical or scientific complexity, to prove the product’s defectiveness or the causal link between its defectiveness and the damage” (or both), the defectiveness of the product or the causal link between its defectiveness and the damage (or both) are presumed when certain conditions are met. The claimant must demonstrate, based on “sufficiently relevant evidence,” that the “product contributed to the damage”Footnote 55 and that it is “likely that the product was defective or that its defectiveness is a likely cause of the damage, or both.”Footnote 56 The defendant, however, has the right “to contest the existence of excessive difficulties” or the mentioned likelihood.Footnote 57 Of course, the defendant is allowed to rebut any of these presumptions as well.Footnote 58
8.4 Problematic Qualification of Certain Tort Law Concepts
The previous parts focused on more general evolutions regarding AI and liability. The application of “traditional” tort law concepts also risks becoming challenging in an AI context. Regulatory answers will need to be found to remedy the gaps that could potentially arise. We will illustrate this with two notions used in the Product Liability Directive, namely “product” (Section 8.4.1) and “defect” (Section 8.4.2). We will also show that the introduction of certain concepts in (new) supranational AI-specific liability legislation can be challenging due to the remaining importance of national law. More specifically, we will discuss the requirement of “fault” in the proposed AI Liability Directive (Section 8.4.3).
8.4.1 Software as a Product?
Article 1 of the Product Liability Directive stipulates that the producer is liable for damage caused by a defect in the product. Technology and industry, however, have evolved drastically over the last decades. The division between products and services is no longer as clear-cut as it was. Producing products and providing services are increasingly intertwined.Footnote 59 In this regard, the question arises whether software is a product or is instead provided as a service, thus falling outside the scope of the PLD.Footnote 60 Software and AI systems merit specific attention in respect of product liability. Software is essential to the functioning of a large number of products and affects their safety. It is integrated into products, but it can also be supplied separately to enable the use of the product as intended. Neither a computer nor a smartphone would be of particular use without software. The question of whether stand-alone software can be qualified as a product within the meaning of the Product Liability Directive or implementing national legislation has already attracted a lot of attention, both in academic scholarshipFootnote 61 and in policy initiatives.Footnote 62 That is because software is a collection of data and instructions that is imperceptible to the human eye.Footnote 63
Unclarity remains as to whether software is an (im)movable and/or an (in)tangible good.Footnote 64 The Belgian Product Liability Act – implementing the PLD – stipulates that the regime only concerns tangible goods.Footnote 65 Although the Belgian Court of Cassation and the European Court of Justice have not yet ruled on the matter, the revised PLD specifically qualifies software and digital manufacturing files as products.Footnote 66 The inclusion of software is rather surprising, yet essential.Footnote 67 Recital (13) of the revised PLD states that it should not apply to “free and open-source software developed or supplied outside the course of a commercial activity” in order not to hamper innovation or research. However, where software is supplied in exchange for a price or personal data is provided in the course of a commercial activity (i.e., for purposes other than exclusively improving the security, compatibility, or interoperability of the software), the Directive should apply.Footnote 68 Regardless of the qualification of software, the victim of an accident involving an AI system may have a claim against the producer of a product incorporating software, such as an autonomous vehicle, a robot used for surgery, or a household robot. Software steering the operations of a tangible product could be considered a part or component of that product.Footnote 69 This means that an autonomous vehicle or a physical robot used for surgery would be considered a product in the sense of the Product Liability Directive and can be defective if the software system it uses is not functioning properly.Footnote 70
8.4.2 “Defective” Product
Liability under the Product Liability Directive requires a “defect” in the product. A product is defective when it does not provide the safety that a person is entitled to expect, taking all circumstances into account (the so-called “consumer expectations test” as opposed to the “risk utility test”).Footnote 71 This does not refer to the expectations of a particular person but to the expectations of the general publicFootnote 72 or the target audience.Footnote 73 Several elements can be used to determine the legitimate expectations regarding the use of AI systems. These include the presentation of the product, its normal or reasonably foreseeable use, and the moment in time when the product was put into circulation.Footnote 74 This enumeration of criteria, however, is not exhaustive, as other factors may play a role as well.Footnote 75 The criterion of the presentation of the product is especially important for manufacturers of autonomous vehicles or medical robots. That is because they often tend to market their products explicitly as safer than existing alternatives. The presentation of the product may, on the other hand, also provide an opportunity for manufacturers of AI systems to reduce their liability risk through appropriate warnings and user information. Nevertheless, it remains uncertain how technically detailed or accessible such information should be.Footnote 76 The revised PLD also refers to legitimate safety expectations.Footnote 77 A product is deemed defective if it fails to “provide the safety which the public at large is entitled to expect, taking all circumstances into account.”Footnote 78 The non-exhaustive list of circumstances for assessing the product’s defectiveness is expanded and now also includes “the effect on the product of any ability to continue to learn after deployment.”Footnote 79 It should, however, be noted that a product cannot be considered defective for the sole reason that a better product, including updates or upgrades to a product, is already or subsequently placed on the market or put into service.Footnote 80
That being said, the criterion of legitimate expectations remains very vague (and problematicFootnote 81). It gives judges a wide margin of appreciation.Footnote 82 As a consequence, it is difficult to predict how this criterion will and should be applied in the context of AI systems.Footnote 83 The safety expectations will be very high for AI systems used in high-risk contexts such as healthcare or mobility.Footnote 84 At the same time, however, the concrete application of this test remains difficult for AI systems because of their novelty, the difficulty of comparing these systems with human or technological alternatives, and the characteristics of autonomy and opacity.Footnote 85 The interconnectivity of products and systems also makes it hard to identify the defect. Sophisticated AI systems with self-learning capabilities also raise the question of whether unpredictable deviations in the decision-making process can be treated as defects. Even if they constitute a defect, the state-of-the-art defenseFootnote 86 may eventually apply. The complexity and opacity of emerging digital technologies such as AI systems further reduce the victim’s chances of discovering and proving the defect and/or causation.Footnote 87 In addition, there is some uncertainty about how and to what extent the Product Liability Directive applies in the case of certain types of defects, for example, those resulting from weaknesses in the cybersecurity of the product.Footnote 88 It has already been mentioned that the revised PLD establishes a presumption of defectiveness under certain conditions to remedy these challenges.Footnote 89
8.4.3 The Concept of Fault in the AI Liability Directive
In addition to the challenging application of “traditional” existing tort law concepts in an AI context, newly introduced legislation in this field may also contain notions that are unclear. This lack of clarity could affect legal certainty, especially considering the remaining importance of national law. We will illustrate this with the requirement of “fault” as proposed in the AI Liability Directive.
It has already been mentioned that Article 4.1 of the AI Liability Directive contains a rebuttable presumption of a “causal link between the fault of the defendant and the output produced by the AI system or the failure of the AI system to produce an output.” The fault of the defendant has to be proven by the claimant according to the applicable EU law or national rules. Such a fault can be established, for example, “for non-compliance with a duty of care pursuant to the AI Act.”Footnote 90 The relationship between the notions of “fault” and “duty of care” under the AI Liability Directive, and especially in Article 4, is unclear and raises interpretation issues.Footnote 91 The AI Liability Directive uses the concept of “duty of care” on several occasions. Considering that tort law is still to a large extent national, the reliance on the concept of “duty of care” in supranational legislation is rather surprising. A “duty of care” is defined as “a required standard of conduct, set by national or Union law, in order to avoid damage to legal interests recognized at national or Union law level, including life, physical integrity, property and the protection of fundamental rights.”Footnote 92 It refers to how a reasonable person should act in a specific situation, which also “ensure[s] the safe operation of AI systems in order to prevent damage to recognized legal interests.”Footnote 93 In addition to the fact that the content of a duty of care will ultimately have to be determined by judges, a more conceptual issue arises as well. That is because the existence of a generally applicable positive duty of care has already been contested, for instance, in Belgium. Kruithof concludes that case law and scholarship commonly agree that no breach of a “pre-existing” duty is required for a fault to be established. As noted by Kruithof, what is usually referred to as the generally required level or the duty of care “is therefore more properly qualified not as a legal duty or obligation, but merely a standard of behavior serving as the yardstick for judging whether an act is negligent or not for purposes of establishing liability.”Footnote 94 However, Article 4.1 (a) seems to equate “fault” with noncompliance with a duty of care, thereby implicitly endorsing the view that the duty of care constitutes a standalone obligation. This does not necessarily fit well with some national tort law frameworks and may thus cause interpretation issues and fragmentation.Footnote 95
Article 1.3 (d) of the AI Liability Directive mentions that the Directive will not affect “how fault is defined, other than in respect of what is provided for in Articles 3 and 4.” A fault under Belgian law (and, by extension, in other jurisdictions) consists of both a subjective component and an objective component. The (currently still applicable) subjective component requires that the fault can be attributed to the free will of the person who has committed it (“imputability”), and that this person generally possesses the capacity to control and to assess the consequences of his or her conduct (“culpability”).Footnote 96 This subjective element does not, however, seem to be covered by the AI Liability Directive. This raises the question whether the notion of “fault,” as referred to in Articles 3 and 4, requires such a subjective element to be present and/or allows national law to require this. The minimal harmonization provision of Article 1.4 does not answer this question.Footnote 97 The objective component of a fault refers to the wrongful behavior in itself. Belgian law traditionally recognizes two types of wrongdoing, namely a violation of a specific legal rule of conductFootnote 98 and the breach of a standard of care.Footnote 99 Under Belgian law, a violation of a standard of care requires that it was reasonably foreseeable for the defendant that his or her conduct could result in some kind of damage.Footnote 100 This means that a provider of a high-risk AI system would commit a fault when he or she could reasonably foresee that a violation of a duty of care under provisions of the AI Act would result in damage. However, it is unclear whether the notion of a “duty of care” as relied upon in the AI Liability Directive also includes this requirement of foreseeability or, instead, whether it is left to national (case) law to determine the additional modalities under which a violation of a “duty of care” can be established.Footnote 101
8.5 Concluding Remarks and Takeaways
We focused on different challenges that arise in tort law for damage involving AI. The chapter started by illustrating the remaining importance of national law for the interpretation and application of tort law concepts in an AI context. There will be an increasing number of cases in which the role of AI systems in causing damage, and especially the interaction between humans and machines, will have to be assessed. A judge must therefore have an understanding of how AI works and the risks it entails. As such, it should be ensured that judges – especially in the field of tort law – have the required digital capacity. We also emphasized the importance of procedural elements in claims involving AI systems. Although the newly proposed EU frameworks introduce disclosure requirements and rebuttable presumptions, it remains to be seen how these will be applied in practice, especially considering the many uncertainties these proposals still contain. The significant amount of discretion that judges have in interpreting the requirements and concepts used in these new procedural solutions may result in varying and diverging applications throughout the Member States. While these different interpretations might be interesting case studies, they will not necessarily contribute to the increased legal certainty that the procedural solutions aim to achieve. We also illustrated how AI has an impact on “traditional” and newly proposed tort law concepts. From a more general perspective, we believe that interdisciplinarity – for instance, through policy prototypingFootnote 102 – will become increasingly important to remedy regulatory gaps and to devise new “rules” on AI and tort law.