I. Introduction
Artificial intelligence (AI) and recent breakthroughs in machine–human interactions and machine learning technology are affecting ever more aspects of our lives. AI technologies are not only becoming more pervasive, but are also characterised by continuous and surprising breakthroughs fostered by computational capabilities, algorithm design and communication technology.Footnote 1 AI is growing exponentially, and certain of its materialisations bring greater threats to privacy, are ethically questionable and are possibly even dangerous, posing potentially catastrophic risks.Footnote 2 Whether to pursue the creation of non-natural AIFootnote 3 able to make choices via an evaluative process is one of the most pressing questions in the world today. AIFootnote 4 is unleashing a new industrial revolution, and it is vital that lawmakers systemically address its challenges and regulate its economic and social effects without stifling innovation.
Russell, for example, argues that no one can predict exactly how the new AI technology will develop but, if autonomous machines start to far exceed our thinking capacity and we leave this issue unaddressed, AI could well be the last event in human history.Footnote 5 He suggests that poorly designed autonomous machines might pose a serious risk to humanity.Footnote 6 Moreover, Turner argues that AI is creating a growing legal vacuum in almost every domain touched by this unprecedented technological “development”.Footnote 7 Similarly, Buyers suggests that lawyers are currently flummoxed as to what should happen when a self-driving car has a software failure and hits a pedestrian, when a drone’s camera happens to catch someone skinny-dipping in a pool or taking a shower, or when a robot kills a human in self-defence.Footnote 8 Furthermore, Teubner shows that AI agents may pose three new liability risks: (1) autonomy risk, stemming from standalone “decisions” taken by AI agents; (2) association risk, arising from the close cooperation between people and AI agents; and (3) network risk, which occurs when computer systems are closely integrated with other computer systems.Footnote 9 In addition, AI might lead to serious indirect or direct harm.Footnote 10 For example, high-speed trading algorithms that could destabilise the stock market or cognitive radio systems that might interfere with emergency communications may, either alone or in combination, cause serious damage.Footnote 11
Given such developments, many lawmakers around the globe have embarked on intensive law-making activityFootnote 12 with respect to the issue of liability and the broader challenges posed by emerging digital technologies. For example, the European Commission established a special expert group on liability made up of the “Product Liability Directive” formation and the “New Technologies” formation.Footnote 13 These two expert groups have also been given the task of determining whether regulatory intervention on AI technologies is appropriate and necessary and, if so, whether such an intervention should be developed in a horizontal or sectoral way.Footnote 14 Moreover, the issues of legal liability for AI and related civil liability for damage caused by AI have produced an impressive amount of scholarly literature, turning them into subjects of major interest for lawyers.Footnote 15
By incorporating the main insights from the tort law and economics literature,Footnote 16 this paper joins this critical debate and offers an additional set of arguments on the appropriate role of the civil liability regime in regulating AI. It complements my earlier work on judgment-proof robots in two noteworthy respects.Footnote 17 First, the paper contributes to the literature by highlighting the practical and theoretical importance of the AI-related judgment-proof problem. Second, it focuses on the judgment-proofness of existing legal persons engaged with AI, who might escape responsibility for substantial harm under the current liability-related rules of tort law.
The analysis presented here is both positive and normative. The analytical approach is interdisciplinary, enriched with concepts used in the economic analysis of law.Footnote 18 Several caveats should nevertheless be issued. The paper’s limited scope covers the narrow fields of tort and product liability law and omits any analysis of consumer protection and antitrust law. It focuses solely on European Union (EU) product liability and on AI that interacts with its environment in unforeseeable ways. Moreover, the aim of the paper is not to offer the final word on the matter, but to undertake an exploratory analysis of the relationship between AI and the judgment-proof problem.
This paper is structured as follows. The next section presents the general background, introduces several definitions and provides an overview of the development and deployment of the field of AI. Section III synthesises the law and economics scholarship on torts and safety regulation and examines the judgment-proof problem in the context of AI. Section IV then provides several recommendations for lawmakers. Finally, some conclusions are presented.
II. General background and key concepts
The two-month workshop held at Dartmouth College in the summer of 1956 was to become the official birthplace of the AI field, and the decades since have seen a revolution in both the content and the methodology of work in AI.Footnote 19 Scientific progress has also enabled the return of neural networksFootnote 20 and the re-emergence of intelligent agents.Footnote 21 This re-emergence of intelligent agents has tied together previously isolated subfields of AI and has drawn AI into much closer contact with other fields, such as control theory and economics.Footnote 22
1. Setting the scene: concepts and research trends
Influential founding fathers believed that AI should put less emphasis on creating applications that are good at performing specific tasks and should instead strive for machines that think, that learn and that create.Footnote 23 Closely related is the idea of artificial general intelligence (AGI), which looks for a universal algorithm for learning and acting in any environment.Footnote 24 It is also helpful at the outset to introduce a distinction between narrow and general AI. Narrow AI, which is the focus of this paper, denotes the ability of a system to achieve a certain stipulated goal or set of goals, and the great majority of AI systems today are of this narrow type.Footnote 25 General AI, by contrast, denotes the ability to achieve an unlimited range of goals and even to set new goals independently.Footnote 26
Furthermore, one must note the distinction between data-based and semantic AI systems. Semantic AI employs formal semantics to derive meaning from disparate sets of raw data.Footnote 27 This enables a computer system to achieve human-like understanding and reasoning, using tools, methods and techniques that help categorise and process data and define the relationships between different concepts and datasets. Semantic technologies thus allow computers not only to process strings of characters, but also to store, manage and retrieve information based on meaning and logic.Footnote 28 Data-based AI systems, by contrast, are provided with a very large corpus of data (combined with learning methods), which enables the AI to learn new patterns, resulting in excellent performance.Footnote 29 In addition, it has to be emphasised that in recent years the field of AI has seen a shift from simply building systems that are intelligent to building intelligent systems that are human-aware and trustworthy.Footnote 30
2. Literature review
The issue of an appropriate civil liability regime for AI has already produced an impressive amount of legal scholarship.Footnote 31 Some scholars investigate algorithms as such,Footnote 32 whereas others explore multiple sets of issues and jurisdictions.Footnote 33 Amongst legal scholars there also appears to be a general preference for some form of strict liability for algorithms and robots, analogous to the liability for animals or movable objects (or the liability for motorised vehicles).Footnote 34 Providing a comprehensive comparative analysis, Tjong Tjin Tai suggests that there are three areas that may require change: strict liability, product liability for algorithms and extending the protected interests.Footnote 35 Wagner argues that the main focus of current liability rules, and of the legal practice developed under them, is on the users of technical appliances, not on the manufacturers.Footnote 36 Yet Wagner also suggests that the manufacturer who determines the safety features and the behaviour of the robot or Internet of Things device “clearly is the cheapest cost avoider, in fact, he is the only person in a position to take precautions at all”.Footnote 37
On the other hand, Abbott and Sarch discuss the difficulties involved in punishing AI and offer modest expansions to criminal law, including, most importantly, new negligence crimes centred around the improper design, operation and testing of AI applications, as well as possible criminal penalties for designated parties who fail to discharge statutory duties.Footnote 38 Rachum-Twaig suggests that current law and doctrine, such as product liability and negligence, cannot provide an adequate framework for these technological advancements, mainly due to the lack of personhood and agency and to the inability to predict and explain robot behaviour.Footnote 39 He argues that the inherent lack of foreseeability is challenging basic principles in tort law, which requires foresight prior to imposing liability.Footnote 40 Moreover, product liability doctrine “seems to struggle with the lack of foreseeability characterizing AI-based robots, preventing a swift application of the design defect doctrine”.Footnote 41 While exploring the deep normative structures of our societies, Eidenmüller argues that it would dehumanise the world if we were to treat machines like humans, even though machines may be smart – possibly even much smarter than humans.Footnote 42
Moreover, Borghetti shows that broad liability regimes designed to handle damage caused by humans are ill-suited to compensating harm caused by, or associated with, the use of AI.Footnote 43 However, Borghetti contends that sector-specific liability regimes or compensation mechanisms are applicable and do not require that abnormal behaviour or conduct be established.Footnote 44 Wendehorst, for example, argues that AI-driven robots in public spaces should be subject to strict liability for damage resulting from their operation and that AI manufacturers should be liable for damage caused by defects in their products, even if the defect was caused by changes made to the product under the producer’s control after it had been placed on the market.Footnote 45 However, Cabral suggests that the current EU Product Liability Directive is not up to the task of regulating AI and can neither adequately protect consumers nor foster innovation.Footnote 46
III. A synthesis of law and economics scholarship: torts and safety regulation
This section presents a set of law and economics recommendations that might shed light on the improved deterrence of hazards and the inducement of optimal precautions while simultaneously keeping dynamic efficiency – incentives to innovate – undistorted.
1. The economic function of tort law
Tort law defines the conditions under which a person is entitled to compensation for damage not based on a contractual obligation, and it encompasses all legal norms that concern the claim made by an injured party against a wrongdoer (tortfeasor). Economically speaking, any “reduction of an individual’s utility level caused by a tortious act can be regarded as damage”.Footnote 47 A thorough overview of the tort law and economics literature exceeds the limits of this paper and is available elsewhere.Footnote 48 However, it should be emphasised that this literature traditionally addresses three broad aspects of tortious liability. The first is assessing its incentives (including incentives to participate in activities and incentives to mitigate the associated risk); analytically speaking, tort law is thus an instrument that improves the flow of inducements.Footnote 49 The second concerns risk-bearing capacity and insurance, while the third relates to the necessary administrative expense, entailing the costs of legal services, the value of litigants’ time and related lost opportunities, and the costs of operating the courts.Footnote 50 The literature also shows that since the administrative and procedural costs of a tort law case can be very high, alternative legal mechanisms such as ex ante safety regulation might be more cost-effective at reducing the overall costs of accidents.Footnote 51
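These three aspects can be tied together in the standard accident model that underlies much of this literature. The following is a stylised sketch, with notation introduced here purely for illustration rather than drawn from the sources cited above: society seeks to minimise the total social costs of accidents,

\[ \min_{x}\; x + p(x)\,h + A, \]

where $x$ denotes the injurer’s expenditure on care, $p(x)$ the probability of an accident (decreasing in $x$), $h$ the harm suffered by the victim and $A$ the administrative costs of operating the liability system. The incentive aspect concerns the choice of $x$, risk-bearing and insurance concern who ultimately bears the expected loss $p(x)h$, and the administrative aspect corresponds to $A$.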
2. Liability for harm versus safety regulation
In his seminal paper on liability for harm versus the regulation of safety, Shavell paved the way to an analytical understanding of the optimal employment of tort liability and/or regulatory standards.Footnote 52 Shavell addressed the effects of liability rules and direct regulation on a rational, self-interested party’s decision-making process.Footnote 53 Liability in tort and safety regulation are two different approaches to controlling activities that create risks of harm and to inducing the optimal amount of precaution.Footnote 54 Yet Shavell stressed that major mistakes have occurred in the use of liability and safety regulation.Footnote 55 Regulation, where applied exclusively, has for various reasons often proven inadequate, whereas tort liability might, due to causation problems, also provide suboptimal deterrence incentives.Footnote 56 In addition, Rose-Ackerman suggests that regulation (statutes) should generally dominate provided that agencies are able to employ rule-making to shape policy,Footnote 57 whereas Schmitz argues that the joint use of liability and safety regulation is optimal if wealth varies among injurers.Footnote 58
3. Liability issues and the classic human-centric judgment-proof problem
In its original, narrow, human-centric meaning, the concept of being “judgment-proof” refers to the fact that human tortfeasors may be unable to pay fully for the harm they cause, giving them a greater incentive than otherwise to engage in risky activities. Shavell and Summers brought the term “judgment-proof” to prominence in their path-breaking articles on this problem, in which they showed that the very existence of the judgment-proof problem seriously undermines the deterrence and insurance goals of tort law. Shavell notes that judgment-proof parties do not have the right incentive either to prevent accidents or to purchase liability insurance.Footnote 59 In other words, the judgment-proof problem is critical because, if injurers are unable to pay in full for the harm they have caused, their incentives to participate in risky activities will be greater than otherwise. Summers also shows that judgment-proof injurers tend to take too little precaution under strict liability, since accident costs are only partly internalised.Footnote 60
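The mechanism can be illustrated with a minimal sketch in stylised notation (offered for illustration only, not as a reproduction of the cited models): under strict liability, an injurer with assets $a$ facing potential harm $h$ chooses a level of care $x$ to minimise

\[ x + p(x)\,\min\{a, h\}, \]

whereas the socially optimal level of care $x^{*}$ minimises $x + p(x)h$. Whenever $a < h$, only part of the harm is internalised, so the privately chosen level of care falls below $x^{*}$; the smaller the injurer’s assets relative to the harm, the weaker the incentive to take care.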
Moreover, one should note that strict liability provides incentives for optimal engagement in an activity if “parties’ assets are enough to cover the harm they might cause, but their incentives will be inadequate if they are unable to pay for the harm”.Footnote 61 Furthermore, Shavell argues that the same holds under the negligence rule: in situations where injurers are not induced to take optimal care (or where there are errors in the determination of negligence), the “existence of the judgment-proof problem induces injurers to engage more frequently (sub-optimally) in the activity than they normally would”.Footnote 62
In addition, when injurers are, for any reason, able to avoid paying for all of the harm they cause, for instance by virtue of complex asset ownership-shifting arrangements put in place in advance of a risky activity, this fact alone distorts their incentive to make optimal precautionary and damage mitigation decisions, and it weakens or even completely eliminates any reason to purchase liability insurance.Footnote 63 Risk-averse injurers who may not be able to pay for all of the harm they cause will tend not to purchase full liability insurance, or any at all.Footnote 64 Here, Shavell notes that “the nature and consequences of this judgement-proof’s effect depend on whether liability insurers have information about the risk and hence link premiums to that risk”.Footnote 65 Consequently, a “reduction in the purchase of liability insurance tends to undesirably increase incentives to engage in the harmful activity”.Footnote 66
4. The judgment-proof problem in the context of artificial intelligence
The classic law and economics concept of the judgment-proof problem informs us that if injurers lack sufficient assets to pay for the damage they cause, then their incentives to reduce risk will be inadequate.Footnote 67 Yet the judgment-proof problem can also be defined much more broadly to include the dilution of incentives to lower risk that emerges when an existing legal person engaged with AI is completely indifferent both to the ex ante possibility of being found legally liable for harm done to others and to potential accident liability (given that the value of the expected sanction equals zero). In other words, existing legal persons might be completely indifferent to the ex ante possibility of being found liable by the human-imposed legal system for harm caused, and hence their incentives to engage in risky activities might be socially excessive. For example, since the actions of AI agents are likely to become increasingly unforeseeable, designers or producers of AI might assume that such unforeseeable developments excuse them from any tortious liability; in such a scenario, the classic tort law mechanism might (except at a very high level of abstraction and generality) become inadequate to deal with potential harm caused by AI agents.Footnote 68 It must be stressed that this problem of diluted incentives (the broad judgment-proof definition) is distinct from what scholars and practitioners often call a “judgment-proof problem”, generally described as arising when a tortfeasor is merely financially unable to pay for all of the losses, leaving the victim without full compensation.Footnote 69 Thus, the judgment-proof characteristics of the existing legal persons engaged with AI might undermine the deterrence and insurance goals of classic tort law. The evolution of AI and its capacity to develop characteristics in a manner never envisaged by its designers or producers could undermine the effectiveness of traditional strict liability and other tort law instruments. The prospect that AI might behave in ways that designers or manufacturers did not expect challenges the prevailing assumption within tort law that courts only compensate for foreseeable injuries.
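In the stylised terms introduced above, this broader dilution can be sketched as follows (again purely for illustration): if $q$ denotes the probability that an existing legal person engaged with AI is actually found liable for AI-caused harm, its expected sanction is $q \cdot \min\{a, h\}$. Where unforeseeability arguments drive $q$ towards zero, the expected sanction approaches zero regardless of the injurer’s wealth, and the privately chosen level of care collapses towards the level that would be taken in the complete absence of liability.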
The judgment-proof characteristic also implies that AI’s activity levels will tend to be socially excessive and will contribute to excessive risk-taking by the existing legal persons associated with AI.Footnote 70 Such persons might have no liability-related incentive to mitigate risk, and their incentives to reduce risk and harm may be completely diluted. The deterrence goal might be corrupted irrespective of the liability rule, since judgment-proof legal persons associated with AIFootnote 71 might not ex ante internalise the costs of any accident such AI might cause. Hence, as the literature suggests, tortious liability might fail to provide adequate incentives to alleviate the risk.Footnote 72 The insurance goal, in turn, will be undermined to the extent that the judgment-proof tortfeasor proves unable to fully compensate their victims. Moreover, as shown by Logue, first-party insurance markets will not provide an adequate remedy either.Footnote 73
It has to be emphasised that AI is applied in many different settings, implying that different liability systems will also have to be applied in each sector. However, for most technological ecosystems there is no specific liability regime.Footnote 74 This means that product liability, general tort law rules (fault-based liability, the tort of negligence, breach of statutory duty) and possibly contractual liability occupy centre stage.Footnote 75 Generally speaking, the potential for independent development and the self-learning capacity of an AI agent might thus render it de facto immune from tort law’s deterrence and lead to the externalisation of precaution costs.
5. Legal problems, artificial intelligence and liability causing judgment-proof problems
The existing liability frameworks that could conceivably apply to AI-generated consequences can be broken down (apart from contract law) into two distinct categories: tortious liability (negligence, strict liability) and product liability under consumer protection legislation. As to the former, current laws of tortious liability rely on concepts of causality and foreseeability. In common law systems,Footnote 76 foreseeability is, as Turner suggests, employed in establishing both the range of potential claimants (was it foreseeable that this person would be harmed?) and the recoverable harm (what type of damage was foreseeable?).Footnote 77 Alfonseca et al report that new AI systems use purely unsupervised deep reinforcement learning, which does not require the provision of correct input/output pairs or any correction of suboptimal choices and which is driven by the maximisation of some notion of reward in an online fashion.Footnote 78 In principle, the representations such systems learn may be difficult for humans to understand and scrutinise.Footnote 79 AI is becoming multifaceted and therefore potentially capable of mobilising a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable or foreseeable.Footnote 80
Moreover, as Rahwan et al show, the ability of AI to “adapt using sophisticated machine learning algorithms makes it even more difficult to make assumptions about the eventual behavior of an AI”.Footnote 81 Thus, the actions of AI are likely to become increasingly unforeseeable (ie the insolvability of the program-prediction problem),Footnote 82 and this could, as Karnow argues, challenge the prevailing assumption within common tort law that courts only compensate for foreseeable injuries.Footnote 83 AI may generate solutions that even an objective, reasonable human being, no matter how experienced an observer, would not expect or may not have even considered.Footnote 84 Martin-Casals then suggests that if a particular legal system chooses to view the “experiences of some learning AI systems as so unforeseeable that it would be unfair to hold the system’s designers liable for harm that these systems cause, victims might be left with no way of obtaining compensation for their losses”.Footnote 85 However, one has to note that in such cases multiple tort law regimes apply simultaneously and that theoretical difficulties in establishing foreseeability are often solved in practice by flexible factual interpretations or evidentiary techniques.Footnote 86
On the other hand, legal systems are not entirely devoid of statutes governing extra-contractual liability. The two most developed systems of product liability are the EU’s Product Liability Directive of 1985 (Council Directive 85/374/EEC)Footnote 87 and the US Restatement (Third) of Torts on Products Liability, 1997.Footnote 88 According to the EU Product Liability Directive, a product is defective “when it does not provide the safety which a person is entitled to expect, taking all circumstances into account, including (a) the presentation of a product; (b) the use to which it could reasonably be expected that the product would be put; (c) the time when the product was put into circulation”.Footnote 89
Thus, the literature suggests that the current EU Product Liability Directive might also suffer from the same shortcomings as the classic tort law system.Footnote 90 As suggested by the Expert Group on Liability and New Technologies (New Technologies Formation), the current Directive focuses on the moment when the product was put into circulation as the key turning point for the producer’s liability, and this cuts off claims for anything the producer may subsequently add via some update or upgrade.Footnote 91 In addition, the EU Product Liability Directive does not provide for any duty to monitor products after putting them into circulation.Footnote 92 Moreover, most EU Member States adopted the so-called development risk defence, which allows the producer to avoid liability if the state of scientific and technical knowledge at the time when the product was put into circulation was not such as to enable the existence of the defect to be discovered.Footnote 93 Furthermore, product liability regimes operate on the assumption that the product does not continue to change in an unpredictable manner once it has left the production line and, as shown, autonomous AI does not follow this paradigm.Footnote 94 However, one has to note that these potential legal challenges do not fundamentally hinder the application of the current Directive to AI producers. The current high bar for the development risk defence may actually exclude the unforeseeability of AI-related damage as a potential liability exception. Moreover, one may argue that such damage might not be regarded as unforeseeable, since societies already know that AI has an autonomous potential that may cause all kinds of hazards.
IV. Towards the optimal regulatory artificial intelligence intervention: what can law and economics offer lawmakers?
The previous discussion and the application of the main findings of the law and economics literature to AI suggest that lawmakers might be facing the unprecedented challenge of simultaneously regulating possibly harmful and hazardous activity while not deterring innovation in the AI sector and associated industries. Yet, economically speaking, law is a much more resilient and robust mechanism than is often believed.Footnote 95 However, one may question whether the existing strict liability regimes are adequate to deal with the diluted incentives to reduce the risks associated with AI.Footnote 96 Thus, the classic debate on the two different means of controlling risks, namely ex post liability for harm done and ex ante safety regulation, may, due to the shortcomings of human-centred, liability-related tort law instruments, boil down to a question of efficient ex ante regulation.Footnote 97
1. Policy suggestions to ameliorate the judgment-proof artificial intelligence problem
The law and economics literature offers several potential types of policy response to mitigate the identified judgment-proof problem. The first instrument is vicarious liability.Footnote 98 Shavell, for example, suggests that if another party (the principal) has some control over the behaviour of the party whose assets are limited (the agent), the principal can be held vicariously liable for the losses caused by the agent.Footnote 99 Hence, vicarious liability (an indirect means of reducing risk), resting on a specific principal–agent relationship between the owner (a human who uses AI) and their AI agent, features as a satisfactory remedy for AI-related risks. The principal (owner) should be held vicariously liable for the losses that the agent causes. If the principal can observe the agent’s level of care, the imposition of vicarious liability will induce the principal to compel the agent to exercise optimal care. In other words, an extension of liability should indirectly lead to a reduction of risk.
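In the stylised notation used above, and under the simplifying assumption that the AI agent itself holds no assets, vicarious liability shifts the expected harm onto the principal, who then minimises

\[ x + p(x)\,\min\{a_{P}, h\}, \]

where $a_{P}$ denotes the principal’s assets. Provided that the principal is solvent ($a_{P} \geq h$) and can observe and dictate the agent’s level of care $x$, the privately optimal choice of care coincides with the socially optimal level $x^{*}$; this sketch is, again, offered for illustration only.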
How, then, would such vicarious liability be applied to an AI agent? Turner offers the example of a police force that employs patrol AI agents and that might, under such a rule, be vicariously liable where a patrolling AI agent assaults an innocent person during its patrol.Footnote 100 Moreover, the unilateral or autonomous actions of AI agents that are not foreseeable do not necessarily operate (as in the instance of product liability) so as to break the chain of causation between the person held liable and the harm.Footnote 101 Yet the literature also shows that if the principal is unable to observe and control the level of care exercised by the agent (the AI), then they will generally be unable to compel the agent to take optimal care.Footnote 102 Nevertheless, if the principal can control the AI’s level of activity (but has no capacity to observe its care), then vicarious liability will induce the principal to reduce the AI’s participation in risky activity.
However, what if the AI is truly autonomous, being capable of self-learning, developing emergent properties and adapting its behaviour to the environment? In these circumstances, the imposition of vicarious liability might prove inadequate due to the extreme judgment-proof problem. Lawmakers should then combine strict liability and vicarious liability: the strict liability of the manufacturer and the vicarious liability of the principal (any existing legal person). Moreover, the identified judgment-proof problem also implies that the option currently proposed by the EU Expert Group on Liability and New Technologies (New Technologies Formation) to introduce vicarious liability for autonomous systemsFootnote 103 in order to address the risks of emerging digital technologies might fall short of the intended deterrence and prevention goals. In order to mitigate the identified shortcomings of vicarious liability, the literatureFootnote 104 offers the following instruments.
First, lawmakers could require any principal to have a certain minimum amount of assets in order to be allowed to engage in an AI-related activity.Footnote 105 Such an asset cushion then acts as insurance for the acts of AI agents, induces principals to take precautions and serves as a mechanism to indirectly mitigate the judgment-proof problem.Footnote 106 Pitchford, for example, suggests that partial lender liability and an equivalent minimum equity requirement deliver the highest level of efficiency.Footnote 107 However, as Shavell points out, such minimum asset requirements may also undesirably prevent some individuals who ought to engage in AI-related activity from doing so.Footnote 108
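The logic of such a requirement can be put in the notation used above (again an illustrative sketch): by making participation conditional on $a_{P} \geq \underline{a}$, and by setting the threshold $\underline{a}$ close to the foreseeable harm $h$, the lawmaker ensures that $\min\{a_{P}, h\}$ approaches $h$, so that the principal internalises (almost) the full expected harm and the judgment-proof wedge between private and social incentives shrinks accordingly.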
Second, lawmakers could require the compulsory purchase of liability insurance coverage in order for any principal to be allowed to engage in autonomous AI-related activity.Footnote 109 Such insurance coverage would provide ex ante incentives for optimal precaution and for optimal decisions by principals as to whether to engage in superhuman AI-related activity at all.Footnote 110 For example, AI developers or users seeking coverage for an agent could submit it to a certification procedure and, if successful, would be quoted an insurance rate depending on the probable risks posed by the AI agent.Footnote 111 However, one has to note that liability insurance requirements tend to improve parties’ incentives to reduce risk when insurers can observe levels of care, but they dilute incentives to reduce risk when insurers cannot.Footnote 112 In the former case, if principals/users purchase full liability insurance coverage, their incentives to reduce risk would be optimal; in the latter case, compulsory liability insurance may be inferior to minimum asset requirements.Footnote 113 Moreover, if insurers indeed cannot observe AI-related risk and moral hazard exists, then mandating the purchase of liability insurance may not be desirable. In such circumstances, Shavell suggests that an opposite form of insurance regulation may be advantageous: barring the purchase of liability insurance.Footnote 114
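The observability point can be expressed in the same stylised terms (an illustrative sketch, not a formula taken from the cited sources): if the insurer can observe the level of care and price the policy accordingly, a fully insured principal minimises $x + \pi(x)$ with the premium $\pi(x) = p(x)h$, which replicates the social problem and yields optimal care; if care is unobservable, the premium is a flat $\bar{\pi}$, the principal minimises $x + \bar{\pi}$ and has no liability-related incentive to take care at all, which is precisely the moral hazard referred to above.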
Third, lawmakers could directly regulate the AI’s risk-creating behaviour ex ante. Regulatory agencies could ex ante set detailed standards for the behaviour, employment, operation and functioning of any AI.Footnote 115 For example, the idea would be simply to limit the AI’s abilities by regulation in order to prevent it from doing harm to humans.Footnote 116 Such ex ante regulations and safety pre-emptions would also significantly reduce the degree of uncertainty regarding liability risk, which in general increases research and development.Footnote 117 Furthermore, harmonising divergent, slow-moving Member State regulations could also speed up experimentation and safe AI adoption.Footnote 118
Fourth, regulatory agencies could set a detailed set of sector-specific safety standardsFootnote 119 (similar to those in the air travel or pharmaceutical industries).Footnote 120 For example, under the existing rules, AI must meet essential health and safety requirements,Footnote 121 and efforts to produce harmonised European standards for AI are ongoing.Footnote 122 Such standards could, for example, require a special driver’s licence to operate a self-driving car.Footnote 123 Similarly, doctors may be required to take a minimum number of training sessions with a robotic system before being allowed to perform certain types of procedures on patients.Footnote 124 The literature shows that such safety standards (and the related liability for their breach) may incentivise users themselves to innovate in ways that help them to take more effective precautions, and may also incentivise producer innovation, because users would demand safer and easier-to-use design features and mandatory training would favour “easier-to-teach” designs in order to reduce adoption costs.Footnote 125
However, one has to note that such safety standards alone are inadequate and should be combined with the ex ante registration of both the principal and the superhuman AI agents (ie Turing registries). Such all-encompassing registries, like those for vehicles or ships, decrease information asymmetries, enable more effective regulatory control of hazardous activities and act as efficient ex ante mechanisms to deter and prevent disastrous events.Footnote 126
Fifth, criminal liability for the principalFootnote 127 could be introduced in order to provide additional pressure to optimise the principal’s decision as to whether to engage with the AI activity at all. Namely, a principal who would not take care if only their assets were at stake might be induced to do so for fear of imprisonment.Footnote 128
Sixth, lawmakers could extend liability from the actual injurer (the AI) to the company that engages or employs such an AI agent. Such an extension of liability could be achieved by piercing the veil of incorporation, for example.
Seventh, lawmakers could introduce corrective ex ante taxes that would equal the expected harm. Such corrective taxes, while implying the ex ante internalisation of potential damages (negative externalities), would then ex ante induce the optimal level of activity and AI-related engagement. Namely, when harm is caused with a low probability, the expected harm is much less than the actual harm, and parties with limited assets may be able to pay the appropriate tax on risk-creating behaviour even though they could not pay for the harm itself.Footnote 129 For example, owners, developers or users of AI – or just certain types of AI – could pay a tax into a fund to ensure adequate compensation for victims of AI crime.Footnote 130
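A simple, purely illustrative calculation shows why such a tax can work where liability cannot: if AI-related harm of, say, EUR 10 million materialises with a probability of 0.1% per year, the expected harm, and hence the appropriate corrective tax, is EUR 10,000 per year (0.001 × 10,000,000). A principal with assets of, say, EUR 100,000 could never pay the full harm were it to occur, but can readily pay the tax, so the expected cost of the risky activity is internalised ex ante.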
Eighth, lawmakers could establish a regime of compulsory compensationFootnote 131 or a wide, publicly and privately financed insurance fund for instances of catastrophic losses. Such insurance implies a risk-sharing and risk-pooling mechanism (across the entire society) and is the optimal risk allocation in instances of unforeseeable, unpreventable catastrophic harms.Footnote 132 One should note that such insurance schemes already exist in the nuclear industry.Footnote 133 For example, the Price-Anderson Act for nuclear power establishes a pool of funds to compensate victims in the event of a nuclear incident through a chain of indemnity, regardless of who was ultimately at fault.Footnote 134
In addition, one should consider introducing the AI manufacturer’s strict liability, supplemented by the requirement that an unexcused violation of a statutory safety standard constitutes negligence per se. Moreover, compliance with the regulatory standard should not relieve the injurer’s principal from tort liability. Thus, the negligence per se rule (violation of a regulatory standard implies tort liability, including strict liability) should also be applied to AI-related torts, and the compliance defence of an AI manufacturer or its principal should not be accepted as an excuse.Footnote 135 One has to note that regulation and tort law should be applied simultaneously. Ex post liability and ex ante regulation (safety standards) are generally viewed as substitutes for correcting externalities, and the usual recommendation is to employ the policy that produces lower administrative costs. However, Schmitz shows that the joint use of liability and regulation can enhance social wealth.Footnote 136 Regulation removes problems that affect liability, while liability limits the costs of regulation.Footnote 137 That is, an ex ante regulatory standard might prevent the principal from taking low levels of precaution, and the principal might find it convenient to comply with the regulatory standard despite the judgment-proof problem.Footnote 138
2. A new special electronic legal person should not be created
In paragraph 59 of its Resolution on Civil Law Rules on Robotics,Footnote 139 the European Parliament suggests that the EU create a specific legal status for robots, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, possibly applying electronic personality to cases where robots (AI) make autonomous decisions or otherwise interact with third parties independently. Moreover, Solum,Footnote 140 Wright,Footnote 141 TeubnerFootnote 142 and Koops et alFootnote 143 also argue that AI should be given legal personality and that there is no compelling reason to restrict the attribution of action exclusively to humans and social systems. Furthermore, Allen and Widdison state that when an AI is capable of developing its own strategy, it makes sense for the AI to be held responsible for its independent actions.Footnote 144 Yet it must be emphasised that Teubner, for example, suggests that software agents should be given a carefully calibrated legal status.Footnote 145 The solution to the risk brought by autonomy would, according to Teubner, be to grant such agents the status of actants, ie actors with partial legal personhood whose autonomous decisions are made legally binding and trigger liability for damages where they cause harm.Footnote 146
Obviously, from a law and economics perspective, the establishment of a special status of electronic person for AI, with its own legal personality and responsibility for potential damages, should be avoided. The establishment of such a legal personality would actually amplify the existing judgment-proof problem. It would completely dilute the incentives of the existing legal persons engaged with AI to reduce risk, since they would be indifferent both to the ex ante possibility of being found legally liable for harm done to others (as the liability would now fall upon the AI) and to potential accident liability (where the value of the expected sanction equals zero). In other words, the establishment of such a specific electronic person might institutionalise the judgment-proofness of existing legal persons engaged with AI. Accordingly, the establishment of an unregulated human-like electronic personality is not an effective or adequate response to the identified AI-related judgment-proofness, but might instead be seen as an amplifier making the problem even more persistent. Consequently, granting legal personality to autonomous AI might open a Pandora’s box of moral hazard, create perverse incentives on the side of human principals, the AI industry, designers, users and owners, and exacerbate the AI judgment-proof problem.
V. Conclusions
This paper sought to address the role of public regulatory policy in regulating AI and the related risk and civil liability for damage caused by such AI. As argued, existing legal persons associated with AI in their daily enterprises might be completely indifferent to the ex ante possibility of being found liable by the human-imposed legal system for harm caused, and hence their incentives to engage in risky activities might be socially excessive. This judgment-proof characteristic might also induce excessive risk-taking by the existing legal persons associated with AI. Thus, the identified judgment-proof characteristics may undermine the deterrence and insurance goals of tort law. In order to mitigate the identified effects of the judgment-proofness of AI-associated companies, a specific set of ex ante regulatory interventions has been suggested.
Furthermore, the contemplated new special electronic legal personality for AI should not be introduced. As this paper attempts to show, the judgment-proofness of AI implies that any establishment of a legal personality would, while exacerbating the judgment-proof problem, bring about unexpected adverse effects. This paper also shows that, due to the identified shortcomings, the debate regarding the different ways of controlling hazardous activities may be reduced to a question of efficient ex ante safety regulation. In other words, regulatory intervention is, from the law and economics perspective, the best option for governing AI systems.