I. INTRODUCTION
Medicine is currently undergoing an Artificial Intelligence (“AI”) revolution.Footnote 1 The rise of big data and the development of sophisticated machine-learning techniques hold great potential to improve every step of the clinical process.Footnote 2 However, the increasing use of clinical AI systems raises the important question of how the law should deal with liability for harms arising from the use of these systems. Recently, IBM Watson recommended unsafe and incorrect cancer treatments, underscoring the danger that a flawed algorithm poses to patients.Footnote 3 It is imperative that tort law adapt to this technological challenge. An adequate liability regime is not only important for victims seeking compensation; a flawed regime may also compromise the expected benefits that AI technology holds for the health care system as a whole.Footnote 4
This Article proposes a theory of liability whereby physicians, manufacturers of clinical AI systems, and hospitals that employ the systems are considered to be engaged in a common enterprise for the purposes of tort liability. As members of a common enterprise, they should furthermore be held jointly and strictly liable for harms caused by clinical AI systems. The argument put forth in this Article is an extension of David Vladeck’s proposal to impose common enterprise liability (“CEL”) on component manufacturers of autonomous vehicles.Footnote 5 By appropriating Vladeck’s criterion of a “common objective” to determine who is part of a common enterprise, the proposed framework can facilitate the apportioning of responsibility among disparate actors under a single legal theory. The proposed framework thereby accounts for the dispersion of responsibility among those involved in the creation, implementation, and operation of clinical AI systems.
This proposal draws inspiration from the scholarly literature on hospital enterprise liability (“EL”). In particular, it mobilizes the insight that the hospital acts as a locus of accountability for patient safety to justify the hospital’s inclusion in the common enterprise. Medical errors are often the result of faulty systems implemented by health care organizations.Footnote 6 The introduction of new technology such as AI increases the risk of new errors, and it falls on health care organizations to implement proper procedures and surveillance systems to prevent such risks from materializing. This has long been implicit in the doctrine of hospital corporate liability, which holds that the hospital has the duty to use reasonable care in the maintenance of safe and adequate equipment.Footnote 7 Holding hospitals liable will also incentivize greater attention being paid to patient safety in the clinical use of AI technology.
This Article proceeds in three parts. Part II consists of a brief overview of AI’s potential to revolutionize medicine. In the near future, AI systems may acquire the capacity to perform a clinical diagnosis without the intervention of human clinicians.Footnote 8 Underlying AI’s potential is the application of powerful machine learning algorithms to generate accurate predictions.Footnote 9 Despite AI’s great promise, the technology faces significant limitations that can lead to inaccurate results.Footnote 10 Furthermore, the technology’s inability to explain its diagnoses or recommendations will make it difficult for physicians to evaluate an AI system’s output against their own expertise.
Part III undertakes a critical analysis of multiple liability frameworks that have been proposed for AI systems: products liability, negligence, agency law, AI personhood, EL, and CEL. The first four frameworks have received the most scholarly attention in discussions of AI liability, though all of them encounter significant problems and uncertainties in their application to clinical AI systems. While products liability has been considered a natural liability framework for harms arising from AI systems, numerous doctrinal obstacles stand in the way.Footnote 11 Moreover, the difficulty of proving a defect and the application of the learned intermediary (“LI”) principle add uncertainty regarding a plaintiff’s ability to recover from the manufacturer.Footnote 12 Negligence suffers from a similar weakness by requiring the plaintiff to prove causation and fault, which may prove burdensome given AI’s explainability problem. Both products liability and negligence are also limited in their ability to apportion liability among disparate actors. While agency law can account for multiple tortfeasors (as co-principals) and accommodate the lack of foreseeability of AI actions, the control requirement between principal and agent makes it too limited a legal theory to account for the diverse ways in which actors can contribute to AI harm. Proposals to recognize AI as persons remain contentious due to, among other reasons, the ethical implications.Footnote 13
Part IV consists of arguments for why a CEL-based approach to clinical AI systems is both reasonable and desirable. One can plausibly consider physicians, manufacturers, and hospitals to be engaged in a common enterprise as clinical AI systems are designed for use by health care professionals and organizations to provide care to patients. Therefore, there is a strong conceptual overlap in the objectives pursued by each actor. As for the advantages of a CEL-based approach, a single theory of liability for AI harms resonates with concerns about the responsibility gap engendered by clinical AI technology. Moreover, the imposition of strict liability is responsive to the realities of modern tort litigation and generates a fairer outcome given the benefits that physicians, manufacturers, and health care organizations derive from the use of clinical AI systems. This Article ends with a discussion about future legal reforms aimed at implementing a CEL-based state-level approach, with the possibility of incorporating limited AI personhood into the proposed framework.
II. BACKGROUND: THE PROMISE AND PERIL OF CLINICAL AI
The development of AI technology has led to a paradigm shift in medicine. In the long run, AI has the potential to push the boundaries of what human health providers can do and provide a tool to manage patients and medical resources.Footnote 14 Importantly, AI technology is predicted to have a significant impact on each step of the clinical process.Footnote 15 AI has the potential to improve medical prognosis by using thousands of predictor variables taken from electronic records and other sources, unlike human providers who rely on scoring tools.Footnote 16 Moreover, clinical decision-support systems based on AI algorithms will soon be able to “tailor relevant advice to specific decisions and treatment recommendations.”Footnote 17
Timothy Craig Allen predicts that the future of clinical AI systems will develop in three phases.Footnote 18 In the first (“present”) phase, the physician is “in the loop,” remaining fully in control of the AI system. Here, the AI system serves merely as “another tool for diagnosis” that renders the clinician more efficient. In the second (“near future”) phase, the clinician moves from being “in the loop” to being “on the loop,” as the AI system now has the capacity to perform a clinical diagnosis and issue a report without the clinician’s review—though the clinician may still want to exercise quality control over the results. The AI system in this phase will have drastically reduced—if not entirely eliminated—human involvement. In the third (“distant future”) phase, the AI system becomes autonomous, and the clinician is taken “out of the loop” entirely. Here, a human clinician is “entirely unnecessary to render an actionable diagnosis or to institute treatment.”Footnote 19 The second and third phases, if they come to fruition, would require a radical rethinking of existing systems of liability and regulation.
Underlying AI’s powerful potential in the field of health care is the use of sophisticated machine learning algorithms to extract insights “from a large volume of health care data, and then use the obtained insights to assist clinical practice.”Footnote 20 Clinical machine learning tools are generally based on supervised machine learning (“ML”),Footnote 21 which refers to techniques in which a model is trained on datasets consisting of a range of inputs (or features) that are associated with a known outcome.Footnote 22 When applied to new data, a well-trained algorithm will uncover patterns or structures that are implicitly present, thereby allowing for accurate predictions to be made. Supervised ML algorithms are iteratively retrained to improve their predictive accuracy using an optimization technique.Footnote 23 A special class of ML algorithms known as deep neural networks has been gaining popularity due to their ability to accurately label objects in images.Footnote 24 Neural networks hold great promise in their potential application to image-based medical subfields such as radiology, ophthalmology, dermatology, and pathology.Footnote 25
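To make the supervised learning workflow concrete, the following is a minimal, illustrative sketch using synthetic data and the open-source scikit-learn library; the features, labels, and model are hypothetical and do not correspond to any particular clinical AI product.

```python
# Illustrative sketch of supervised machine learning (synthetic data only).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical training data: each row is a patient, each column an input
# feature (e.g., a lab value); y is the known outcome label (1 = disease).
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

# Hold out data the model never sees during training.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Training iteratively adjusts the model's parameters to reduce prediction
# error on the labeled examples (the optimization step described in the text).
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# A well-trained model should then generalize to new, unseen cases.
print("AUC on held-out data:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```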
Despite their great promise, ML algorithms currently face significant limitations which can result in inaccurate clinical predictions or recommendations.Footnote 26 Among these limitations is the potential for ML algorithms to accidentally exploit an unknown and possibly unreliable confounding variable. This could negatively affect the algorithm’s applicability to new data sets. For example, unreliable confounders exploited by deep learning models have contributed to inaccurate detections of melanoma and hip fractures.Footnote 27 There are also challenges relating to the generalizability of AI findings to new populations due to factors such as technical differences between sites and variations in administrative practices.Footnote 28 Generalizability has been called the “Achilles heel” of deep learning, given that “algorithms trained on a specific data set often perform very well on similar data, but often may yield poor performance in the case of data that have not been seen in the training process.”Footnote 29 Studies have shown that even subtle differences between populations can greatly affect a clinical AI system’s predictive accuracy.Footnote 30
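To illustrate the generalizability problem described above, the following sketch (again using synthetic data and scikit-learn, with entirely hypothetical “sites”) trains a model in a setting where a site-specific artifact happens to track the outcome; the model performs well on data from its training site but degrades on data from a site where the artifact is uninformative.

```python
# Illustrative sketch of the generalizability problem (synthetic data only).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 2000

def make_site(confounded):
    signal = rng.normal(size=n)  # genuine clinical signal
    y = (signal + rng.normal(scale=0.8, size=n) > 0).astype(int)
    if confounded:
        # At the training site, an artifact (e.g., a scanner marker) happens
        # to track the outcome, so the model can learn to rely on it.
        artifact = y + rng.normal(scale=0.3, size=n)
    else:
        # At the new site, the same artifact carries no information.
        artifact = rng.normal(size=n)
    return np.column_stack([signal, artifact]), y

X_train_site, y_train_site = make_site(confounded=True)
X_new_site, y_new_site = make_site(confounded=False)

model = LogisticRegression().fit(X_train_site, y_train_site)
print("AUC at training site:", roc_auc_score(y_train_site, model.predict_proba(X_train_site)[:, 1]))
print("AUC at new site:     ", roc_auc_score(y_new_site, model.predict_proba(X_new_site)[:, 1]))
```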
Recent reports on the underperformance of IBM’s cancer AI algorithm, called Watson for Oncology (“WFO”), provide a good example of the limitations and pitfalls that future clinical AI systems are likely to encounter. In the years following Watson’s well-publicized victory over two human contestants on the game show Jeopardy!, IBM advertised and sold WFO to doctors around the world as a platform with the capacity to recommend the best cancer treatments for specific patients.Footnote 31 However, a 2018 STAT report revealed statements from company specialists and customers that WFO had generated “multiple examples of unsafe and incorrect treatment recommendations.”Footnote 32 For example, WFO had recommended bevacizumab (Avastin) to a patient with evidence of severe bleeding, despite a clear contraindication and a warning from the FDA. There were purported flaws in the methods used to train WFO, including the small number of cases used as inputs.Footnote 33 While fortunately no patients were harmed by WFO’s underperformance, this case underlines the real danger that a flawed algorithm can pose to patients.
AI algorithms’ lack of explainability will also be an obstacle to the clinical application of AI technology. The term explainability (often used interchangeably with “interpretability”) refers to that “characteristic of an AI-driven system allowing a person to reconstruct why a certain AI came up with the presented predictions.”Footnote 34 Current ML algorithms do not provide an explanation or justification of why a certain result was generated due to the opacity of ML algorithms’ complex form of mathematical representation.Footnote 35 As such, human users are confronted with what has been called the “black box” problem, defined as an “inability to fully understand an AI’s decision-making process and the inability to predict the AI’s decisions or outputs.”Footnote 36 Computer scientist Geoffrey Hinton, a pioneer in deep learning, explains why there is no simple explanation for how a neural net arrives at a specific result:
Understandably, clinicians, scientists, patients, and regulators would all prefer to have a simple explanation for how a neural net arrives at its classification of a particular case. In the example of predicting whether a patient has a disease, they would like to know what hidden factors the network is using. However, when a deep neural network is trained to make predictions on a big data set, it typically uses its layers of learned, nonlinear features to model a huge number of complicated but weak regularities in the data. It is generally infeasible to interpret these features because their meaning depends on complex interactions with uninterpreted features in other layers.Footnote 37
One consequence of this lack of explainability is that it becomes difficult for clinicians and health care organizations to evaluate product quality in the marketplace.Footnote 38 Unlike drugs and other medical technologies, algorithms are not normally conducive to verification through clinical trials.Footnote 39 Black-box models also render it difficult to identify and correct biases such as errors among patients belonging to underrepresented or marginalized groups.Footnote 40 Bias is difficult to avoid due to imperfect training data; even a theoretically fair model can in practice be biased upon interacting with the larger health care system.Footnote 41
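One routine safeguard, even when a model’s internal reasoning cannot be inspected, is to audit its error rates across patient subgroups. The following is a minimal sketch of such an audit on synthetic data; the subgroup labels, features, and model are hypothetical and are not drawn from any actual system.

```python
# Illustrative sketch of a subgroup error-rate audit (synthetic data only).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(2)
n = 3000

group = rng.integers(0, 2, size=n)           # 0 = majority, 1 = underrepresented
signal = rng.normal(size=n)
y = (signal + rng.normal(scale=0.7, size=n) > 0).astype(int)

# The minority group's feature is recorded with more noise, mimicking
# lower-quality or sparser training data for that group.
noisy_feature = signal + rng.normal(scale=np.where(group == 1, 2.0, 0.3), size=n)
X = np.column_stack([noisy_feature])

model = RandomForestClassifier(random_state=0).fit(X[: n // 2], y[: n // 2])
preds = model.predict(X[n // 2:])
y_eval, group_eval = y[n // 2:], group[n // 2:]

# Report error rates separately for each subgroup.
for g in (0, 1):
    mask = group_eval == g
    error_rate = np.mean(preds[mask] != y_eval[mask])
    print(f"group {g}: error rate {error_rate:.2f}")
```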
Moreover, it will be difficult for physicians to evaluate the soundness of an AI system’s diagnosis or recommendation against their own knowledge; both will be considered experts of sorts, yet with different training and distinct ways of reasoning.Footnote 42 Some argue that this inability of physicians to vet the quality of training labels and data is contrary to evidence-based medicine.Footnote 43 The use of black box medicine also raises questions of patient autonomy and informed consent as patients are unable to question the AI system. Without the benefit of a justification for an AI-generated recommendation that is understandable to a human user, patients may be deprived of the opportunity to engage in a dialogue with their clinicians about the underlying reasoning. This in turn places patients in a situation where they have to make important health care decisions without sufficient information.Footnote 44 For these and other reasons, there have been strong calls for explainability in clinical AI systems.Footnote 45 However, greater AI explainability may come with trade-offs in the form of having to limit an AI system’s complexity and, in doing so, its performance.Footnote 46 There are also questions regarding the inherent limitations of AI explainability techniques.Footnote 47
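As one example of the explainability techniques (and their limits) alluded to above, the following sketch applies permutation feature importance, a common post-hoc method, to a simple model trained on synthetic data. The method only indicates how much predictive performance depends on each input; it does not reconstruct the model’s internal reasoning, which is part of why such techniques are considered inherently limited.

```python
# Illustrative sketch of a post-hoc explainability technique (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 5))
y = (X[:, 0] - X[:, 2] + rng.normal(scale=0.5, size=1000) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the resulting drop in accuracy;
# a large drop suggests the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for i, score in enumerate(result.importances_mean):
    print(f"feature {i}: mean importance {score:.3f}")
```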
III. CURRENT TORT FRAMEWORKS: A CRITICAL ANALYSIS
While there has yet to be any case law involving the use of clinical AI systems or algorithms, this is likely to change as such systems come into wider use.Footnote 48 A coherent, fair, and predictable liability regime is important to patients, health professionals, and health organizations.Footnote 49 As noted in the European Commission’s Expert Group on Liability and New Technologies’ (“European Commission”) report Liability for Artificial Intelligence, inadequacies in a system of liability might “compromise the expected benefits” of such a technology.Footnote 50 Tort liability also encourages physicians and hospitals to use AI safely, and manufacturers to be diligent in the design of their systems. The following Section will critically examine some of the major liability frameworks that have been proposed for harms arising out of the operation of AI systems.
A. Products Liability
Products liability has received attention as a natural framework for AI liability.Footnote 51 The Restatement (Third) of Products Liability holds that “[o]ne engaged in the business of selling or otherwise distributing products who sells or distributes a defective product is subject to liability for harm to persons or property caused by the defect.”Footnote 52 This seems to apply rather straightforwardly to AI systems, which are, after all, “products manufactured, distributed, and sold to customers.”Footnote 53 Products liability requires the existence of a defect. Under a products liability framework, plaintiffs can recover for injuries resulting from defective design, defective manufacture, or defective warning from the manufacturer.Footnote 54 For a patient harmed by an AI system, a products liability framework presents certain advantages. Not only do manufacturers have deeper pockets than physicians, but products liability law also has a history of pro-plaintiff bias.Footnote 55 Accordingly, this framework has been proposed for AI systems such as autonomous vehicles.Footnote 56
Nevertheless, the applicability of products liability remains a contentious issue among scholars. For one, it is unclear whether an AI algorithm would fall within the scope of products liability. The law has traditionally held that only personal property in tangible form can be considered “products.”Footnote 57 Some commentators have argued that this is not necessarily an obstacle to the application of AI algorithms embedded in hardware because there is some legal basis for treating information integrated with a physical object as a product. Footnote 58 Others, in contrast, see this as a serious doctrinal impediment when it comes to applying products liability law to AI systems.Footnote 59 Added to this complexity is the fact that software has traditionally been considered by the law to be a service and thereby falls outside the reach of products liability law.Footnote 60 However, this distinction may ultimately prove to be untenable given the permanent interaction between products and services in the case of AI systems.Footnote 61
Even if products liability law could be clarified or broadened to include AI systems, there may be compelling policy reasons not to go down this path. Under a products liability framework, the plaintiff bears the burden of proving a defect, which may be difficult to establish when it comes to harms arising from the operation of AI systems.Footnote 62 Samir Chopra and Laurence White note that
[i]n the absence of a clear manufacturing defect, this will involve trying to persuade the court of a design defect or a failure to warn the user of dangers inherent in a product. In states that have adopted the Restatement (Third) of Products Liability, proving a design defect requires proof of a reasonable design alternative - a difficult challenge in a highly technical field.Footnote 63
And while the Third Restatement allows for an inference of a defect, the plaintiff still bears the considerable burden of proving that the harm suffered was of a kind that ordinarily occurs as a result of a product defect and was not solely the result of causes other than a product defect existing at the time of the product’s sale or distribution.Footnote 64 If the injury results from the AI system’s machine learning capabilities, this will greatly complicate the analysis.Footnote 65
There is also the practical difficulty of meeting products liability’s causation requirement.Footnote 66 This was evident in Mracek v. Bryn Mawr Hospital,Footnote 67 a case involving injuries caused by the da Vinci surgical robot.Footnote 68 Here, the allegation was that the robot malfunctioned during a prostatectomy procedure when it flashed ‘error’ messages. This malfunction allegedly resulted in the plaintiff suffering severe injuries. The Third Circuit upheld summary judgment against the plaintiff for failure to adduce expert evidence permitting him to prove that the defect caused the injury.Footnote 69 The plaintiff faced an uphill battle from the beginning, as the pool of independent experts with knowledge of this novel and proprietary technology was very limited.Footnote 70 While Mracek was a robotics case, one can imagine a similar fact pattern in the case of a novel AI diagnostic system where the only experts are employees or former employees of the manufacturer.
Another potential obstacle to the application of product liability law to AI is the LI doctrine. This doctrine holds that manufacturers of drugs and medical devices do not have a legal obligation to warn a patient about risks associated with their product when the manufacturer has already provided a warning to the doctor.Footnote 71 It has been applied by courts as “a blanket exemption from a duty to warn the consumer of a prescription drug of the potential dangers of a drug.”Footnote 72 The rationale behind the LI doctrine is that manufacturers should be able to rely upon physicians to pass warnings regarding medical products to their patients.Footnote 73 The extent to which the LI doctrine would apply to clinical AI systems remains an unsettled question. On one hand, some commentators have posited that the LI doctrine could shield AI manufacturers from liability for harm arising out of the use of their products.Footnote 74 On the other hand, courts have also recognized that if the physician does not play an active role with regard to the product and patient, then the manufacturer is precluded from invoking the LI doctrine as a defense against liability.Footnote 75 This suggests that whether the physician is considered a learned intermediary under product liability law may ultimately depend on the level of interaction between the physician and the AI system.Footnote 76
A further complication is that the LI doctrine is concerned with the duty to warn. AI’s unpredictability means that many AI-related risks are not foreseeable by the manufacturer, irrespective of the level of physician interaction. This differs from a case where the risk of a particular drug (e.g., oral contraception) is known to the company through the results of clinical trials. Moreover, the doctrine of the duty to warn is “premised on an unequal access to information i.e., that the manufacturer (for example) knows more about the risks than a relatively unsuspecting consumer.”Footnote 77 However, this theory breaks down where the product acts autonomously and unpredictably. A court might find it unfair to hold an AI manufacturer liable for failure to warn about risks that were not and could not have been known. As such, uncertainty remains as to whether the LI doctrine can be invoked to shield an AI manufacturer from liability.
B. Negligence
Negligence frequently features as a candidate framework for addressing AI harms, as it is the default for cases of medical malpractice.Footnote 78 For a claim in negligence to succeed, the plaintiff must establish four elements: (1) the existence of a legal duty; (2) the breach of that duty by the defendant; (3) an injury suffered by the plaintiff; and (4) causation.Footnote 79 Under traditional malpractice law, a physician or other health professional could be held liable in negligence for harmful medical errors that fall below the standard of care (“the malpractice standard”).Footnote 80 While the standard of care has traditionally been determined by reference to customary medical practice,Footnote 81 state courts have increasingly turned to the reasonable physician standard: what a physician with the same kind of technical background, training, and expertise as the defendant would have done in a similar situation.Footnote 82 The reasonable physician standard is, at least in principle, less deferential to physicians as it affords courts greater latitude in reviewing medical knowledge and custom in determining the applicable standard of care. In practice, the differences between the standards are subtle with a tendency for the two to overlap as the question of what a reasonable physician would do is often determined by reference to the customary practice of a local or national comparison group of physicians.Footnote 83
One obstacle in applying a negligence framework relates to the uncertainty surrounding the standard of care. This is partly explained by the fact that, as with any new medical device, the risk implications of clinical AI remain ambiguous. This in turn introduces ambiguity into the standard of care.Footnote 84 A related difficulty is determining whether a physician satisfied their duty of care in the absence of any principled basis for a reasonable physician to reject AI recommendations.Footnote 85 As discussed above, AI errors may be inherently unforeseeable owing to the opacity of the computational models used to generate decisions or recommendations.Footnote 86 That is, the algorithms used in a clinical AI system may be non-transparent because they rely on rules that are too complex for humans to explicitly understand. The opacity can also derive from the fact that no one, not even the programmers, knows what factors go into the machine-learning process that generates the output.Footnote 87 For example, a physician will not be able to reject an AI system’s recommendation for a personalized medical treatment on the basis of a better counterfactual treatment. The physician cannot refer to generalized medical studies as a basis for their decision since recommendations are personalized for each patient.Footnote 88 One can only know ex post facto whether it was the right treatment or not. Given that there has not been any case law on medical AI, uncertainty remains as to how a court would address a situation involving a divergence between the standard of care and an AI recommendation.Footnote 89
Many of the ambiguities and challenges surrounding the physician standard of care apply equally to negligence claims against hospitals. A hospital can be held liable pursuant to theories of corporate liability and vicarious liability.Footnote 90 Under the former, a health care organization can be found directly liable for failure to safeguard patient safety and welfare.Footnote 91 Corporate hospital liability’s scope encompasses patient injuries sustained as a result of inadequate maintenance of new medical equipment or as a consequence of inadequate policies to ensure that staff have the proper training and expertise.Footnote 92 One complication is that a negligence claim against a hospital for harm caused by a clinical AI system would invoke its own standard of care, potentially on the basis of what a reasonable hospital would do in similar circumstances.Footnote 93 Uncertainty remains around what courts would recognize as the standard of care for hospitals with respect to the proper use of clinical AI systems.Footnote 94 Plaintiffs would also face the potentially significant burden of demonstrating that the hospital had actual or constructive knowledge of the AI system’s flaw, or that it failed to train its staff to safely integrate the system into clinical care.Footnote 95 Alternatively, a hospital can be found vicariously liable for the negligent acts of its employees.Footnote 96 This liability can, in certain cases, extend to the negligent acts of staff physicians, particularly if a hospital imposes workplace rules and regulations on them.Footnote 97 That being said, a claim of vicarious liability will be successful only if the plaintiff can establish that the physicians violated their own standard of care.Footnote 98 Moreover, “[v]icarious liability claims are particularly challenging in negligence cases, given the diverse contractual relationships that exist between hospitals and physicians, and factual issues surrounding the level of control exerted by the principal over the agent.”Footnote 99
As for negligence claims against AI manufacturers, the relevant duty of care will likely be whether the manufacturer provided a faulty AI system.Footnote 100 A key element to establishing the standard of care for AI manufacturers is to determine whether there exists a custom or usage in the industry applicable to the AI system in question.Footnote 101 However, custom establishes only evidence of ordinary care, and the exact standard must be determined on a case-by-case basis.Footnote 102 A defendant manufacturer would likely counter by arguing that liability should be premised on the plaintiff establishing that a reasonable manufacturer, taking into account industry custom or usage, would have detected and corrected the defect in the system.Footnote 103 This will not be a simple task. As one author has noted:
In determining the scope of a vendor’s duty, relevant factors include the foreseeability of the harm, the connection between the [clinical decision support] recommendation and the harm, the burden on the vendor in structuring alternative recommendations, the feasibility and practicality of affording adequate protections, and the public policy desire to prevent the harm.Footnote 104
Assuming that the standard of care is ascertainable in a given situation, the plaintiff will once again encounter the evidentiary burden of proving that this standard was breached. Meeting this burden will be difficult and expensive.Footnote 105
Ultimately, the decision to use AI for a particular patient is tethered to the decision to use AI in general. The clinician is left in a position where the decision to use AI hinges on an “article of faith” with respect to whether the recommendation will work out for a particular patient.Footnote 106 In using black box clinical AI systems, physicians and hospitals “place trust not only in the equation of the model, but also in the entire database used to train it and, in the handling (e.g. labelling) of that database by the designers.”Footnote 107 Andrew Selbst has characterized this predicament as a question of whether harms caused by AI systems are sufficiently foreseeable such that a reasonable person can be held liable for their occurrence. In Selbst’s view, negligence law compensates for humans’ “bounded rationality” with the requirement of foreseeability.Footnote 108 Because many harmful consequences can be imagined, the law must have some way of deciding at what point an agent has accepted an impermissible amount of risk. Our rationality is bounded by the fact that we have neither perfect information nor the capacity to process all the variable risks involved in a given situation. This limitation means that courts must conduct an inquiry into: “1) what is it reasonable for a person to know?; and 2) [h]ow much can we expect them to be able to process?”Footnote 109 Decision-assistance AI systems, with their exponentially greater processing power, exist to compensate for our inherent limitations. However, this does not mean that we have succeeded entirely in “unbounding” our rationality. Rather, AI opacity means that we find ourselves unable to understand and, therefore, supervise or question the decision-making processes. Selbst sees this as being the real challenge to the adoption of negligence law for AI harms:
… it is in precisely the contexts where human limitations currently cause the most injuries that demand for AI will be the greatest. Thus, though the injury rates may improve overall with AI, the people who are injured—and there may still be many—will be without remedy if negligence treats AI errors as functionally unforeseeable.Footnote 110
In short, AI’s opacity will make it a complicated matter for courts to determine the harms that are reasonably foreseeable in the case of clinical AI systems. Even if the harm is somehow foreseeable, it can be burdensome for the plaintiff to prove that there was a breach of the standard of care, especially when the harm in question may have resulted from the interaction of multiple actors.Footnote 111 All of this points to negligence being a sub-optimal liability scheme for AI harms. Not only will the law fail to adequately deter potential tortfeasors, but it will also lead to an increase in the costs of adjudicating legal disputes over the standard of care.Footnote 112
C. Agency Law
More recently, scholars have proposed creative extensions of agency law to address some of the concerns raised above. Matthew Scherer, a proponent of this approach, holds that agency law does not depend on the characteristics of the agent (i.e., the agent need not be a legal person) but rather on the relationship between the principal and agent.Footnote 113 As such, an AI system could be considered an agent of a principal despite lacking legal personhood.Footnote 114 Under this framework, the principal of an AI system would be held vicariously liable for the tortious acts of an AI system when the system “acts within the scope of the agency.”Footnote 115 The advantage of an agency approach is in allowing victims to hold the principal(s) liable even where the agent cannot, itself, be held liable.Footnote 116 At the same time, agency law contemplates that agents will act autonomously and use their discretion in carrying out the principal’s tasks. Indeed, that an agent might use its autonomy to contradict the express instructions of the principal falls squarely within the contemplation of agency law.Footnote 117
Who should be considered an AI’s principal? Scherer takes a broad approach and argues that an AI system’s principals should be the “designers, manufacturers, and developers i.e., those who gave the A.I. system the ability to do legally meaningful things.”Footnote 118 This would essentially capture anyone involved in the AI system’s design, manufacturing, updating, maintenance, and use. An AI system could be considered an agent to multiple co-principals or, alternatively, a subagent of a principal. The scope of agency could be “defined in terms of the AI system’s capabilities and the precautions that the system’s upstream designers and deployers took to prevent downstream operators and users from expanding or altering those capabilities.”Footnote 119 There are, however, limits to liability. Designers should be excused from liability “if a downstream individual or entity modifies the system in a manner that makes it capable of performing tasks that go beyond even its learnable capabilities.”Footnote 120 This immunity from liability is qualified by the designer’s duty to ensure there are safeguards built into the AI system against potentially dangerous modifications.Footnote 121
It is unclear, however, whether agency law has the conceptual resources to justify holding both health care actors and the manufacturer liable for harms caused by clinical AI systems. According to the Restatement (Third) of Agency, a “principal” designates “a person who has authorized another to act on his account and subject to his control.”Footnote 122 This control does not have to be physical, but the law is clear that the principal must have the right to exercise control over the agent’s activities.Footnote 123 For instance, an airplane owner is not considered the principal of a pilot because the owner had no right to control the pilot’s performance during flights.Footnote 124 Similarly, the manufacturer of a clinical AI system lacks not only control over how the hospital or the physician actually uses the AI system, but also any right to control this use. It is the physician who directly employs the technology in diagnosing and treating patients while the healthcare institution “selects, installs, trains, and operates an AI system that its physicians may utilize.”Footnote 125 It bears noting that the underlying justification for the theory of vicarious liability is the employer’s right to control the means and methods of the employee’s work. The threat of being held vicariously liable presumably incentivizes the employer to develop and implement sound procedures to control their employees.Footnote 126 In the case of clinical AI systems, any putative line of control between the manufacturer and the AI system is disrupted by the presence and actions of health care intermediaries.Footnote 127 This remains the case unless the physician (and perhaps even the health care institution) is entirely taken out of the loop.Footnote 128 Until that happens, to include the manufacturer as a principal for the purposes of vicarious liability would conflict with the underlying justification of agency law.
Alternatively, one can take a more selective approach in determining who counts as the principal. In a recent article, Anat Lior proposed an agency law approach to AI harms where the “identity of the principal will change per instance and will heavily depend on the circumstances of the accident.”Footnote 129 Under her proposal, the owner and operator of an AI system may be considered principals in one circumstance while the manufacturer may be considered the principal in another.Footnote 130 The problem with this approach is that it will often be a contentious matter to determine which party, if any, exercised the relevant control and supervision. Despite being a case on robotics and the LI principle, the Supreme Court of Washington decision in Taylor v. Intuitive Surgical Inc. Footnote 131 is instructive in this regard. A physician disregarded the robot manufacturer’s guidelines when using the device on a patient, leading to post-surgery complications and death. On appeal, the question facing the court was whether the manufacturer had a duty to warn the hospital in addition to the physician. The decision resulted in a sharp disagreement between the majority and dissenting opinions over the duties of each party in relation to the patient. The majority held that the manufacturer owed duties to the patient—duties that could only be discharged by warning the hospital.Footnote 132 The dissent, in contrast, would not have held the manufacturer liable based on there being “several steps” between the manufacturer and the patient.Footnote 133 The case of Taylor suggests that the connection between a party’s activity and the harmful actions of the clinical AI system will often be highly attenuated, which makes it unwieldy to identify who is properly the principal.Footnote 134
The underlying problem with agency law is that the notion of control or right to control is ultimately too limited to capture many of the material ways in which an actor can be responsible for harm arising from the use of clinical AI systems. AI-induced harms are usually the product of the actions and omissions of multiple actors, with few of these actors exercising direct (or even indirect) control or supervision over the AI system.Footnote 135 While the ‘many hands’ problem has been attributed to various AI technologies (notably autonomous vehicles),Footnote 136 the problem is arguably more acute in the medical context since whether and how a clinical AI system is used will depend on the interactions of numerous actors, processes, and institutions. These include members of the care team, hospital systems, malpractice insurers, and regulators. Other potential elements include the payment structure, data providers, software components providers, and trainers.Footnote 137 Few of the interactions between these elements will involve control of the AI system in any meaningful sense of the term. In predicating responsibility exclusively on a principal’s (or principals’) supposed control or supervision of its agents and subagents, agency law is simply too limited a legal theory to account for these complex layers of interactions and relationships.Footnote 138 As such, an agency law approach may struggle to account for various actors’ contributions to making the use of clinical AI systems more prone to harmful errors in instances where these actors did not exercise the kind of control that is the touchstone of the principal-agent relation.
D. AI Personhood
Perhaps the most contentious proposed framework for AI liability consists in giving AI systems personhood. In his seminal 1992 article, Lawrence Solum famously suggested that the law could recognize a limited form of legal personhood for AI systems capable of serving as a limited-purpose trustee.Footnote 139 More recently, a 2017 report from the European Parliament opened the door to recognizing sophisticated autonomous robots as “having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.”Footnote 140 The authors of the report also recommended the establishment of a compulsory insurance scheme whereby producers or owners of robots would be required to take out insurance to compensate for damages caused by the robots, as well as a compensation fund for damages not covered by the insurance scheme.Footnote 141 The proposal was met with vociferous opposition by a group of AI experts who argued that adopting such a status would be ethically and legally inappropriate.Footnote 142 This opposition was echoed in a report by a European Commission expert group tasked with examining the question of liability for artificial intelligence.Footnote 143
Despite this high-profile opposition, there remains a lively debate among scholars over the merits of AI personhood. Some of those who support the idea focus on autonomous AI systems as being analogous to natural persons.Footnote 144 There have been arguments, for instance, for IBM’s WFO to be considered legally analogous to a consulting physicianFootnote 145 or medical student.Footnote 146 Some commentators have even suggested granting fully autonomous AI systems state licensure for the practice of medicine.Footnote 147 Others emphasize the idea of AI personhood as an instrument to achieve socially useful ends.Footnote 148 Along these lines, some have argued for the legitimacy of robot personhood on the basis that corporations have long been accorded personhood status despite lacking key features of natural persons.Footnote 149
While it is beyond the scope of this Article to resolve the complex question of AI personhood, certain points are worth bearing in mind. First, it seems risky in the near term to accord AI full-blown legal personhood with rights and duties on par with natural persons given the host of ethical concerns that have yet to be addressed.Footnote 150 Second, while the recognition of limited AI personhood (e.g., analogous to corporate personhood) may not be ethically problematic to the same degree as full-blown personhood, careful consideration should be given to the specific purpose that such recognition would serve and what advantage it has, if any, over other legal solutions.Footnote 151 Third, even limited AI personhood will require robust safeguards such as having funds or assets assigned to the AI person.Footnote 152 Fourth, assuming that such safeguards are in place, it is conceivable that recognizing limited AI personhood could serve useful cost-spreading and accountability functions.Footnote 153 On this point, careful consideration will also have to be given to how AI personhood fits into and reinforces existing liability frameworks.
E. Theories of Enterprise Liability
EL holds that losses caused by an enterprise should be borne by the enterprise or the activity itself.Footnote 154 Unlike agency law, a direct relationship of control is not a prerequisite to a finding of liability; the focus is instead on distributing the enterprise’s accident costs broadly among its members. EL is premised on the principle that “[t]he costs of an injury should be shared by those who profit from the activity responsible for the injury; they should not be concentrated on the injured party or be dispersed across unrelated activities.”Footnote 155 Similar to products liability, fault is not an element of EL.Footnote 156
The idea of applying some form of EL to health care has been the subject of intense legal scholarship going back decades.Footnote 157 In 1991, the American Law Institute (“ALI”) published a report that proposed shifting the locus of liability from individual physicians to the health care institution.Footnote 158 Under this proposal, physicians are exculpated from liability (eliminating their need to purchase liability insurance) on the condition that the health care institution takes out insurance to allow the patient to recover for injuries caused by the physician.Footnote 159 Physicians affiliated with a hospital would be treated as members of a single enterprise engaged in delivering health care to patients.Footnote 160 This approach was predicated on the belief that the health care organization is in the best position to identify and address the inadvertent mishaps of individual physicians.Footnote 161 Paul Weiler, one of the lead authors of the ALI report, advocated for hospital EL on the basis that EL would be a more sensible compensation scheme, more economically efficient, and more effective in preventing injuries to patients.Footnote 162 However, the idea of imposing hospital EL never took off due to worries on the part of health care organizations and lawmakers about the potential costs of a no-fault liability scheme and concerns from physicians about the loss of professional autonomy.Footnote 163 There have nonetheless been notable examples of voluntary EL.Footnote 164
Other scholars have proposed imposing a system of EL on managed care organizations (“MCOs”).Footnote 165 William Sage argued that doing so would result in improved quality, improved compensation for negligent injury, and lower administrative costs.Footnote 166 Shifting some accountability away from physicians to MCOs, Sage argued, would track the increasing control that organizations exercise over the provision of health care (e.g., in managing the flow of information between the organization and patients).Footnote 167 In contrast to traditional theories of medical malpractice, “enterprise liability explicitly acknowledge[s] that health care has become more an institutional process than a series of discrete interactions between patients and individual physicians.”Footnote 168 Importantly, he points to “increasing evidence that most errors in health care delivery, while human in proximate cause, are ultimately the result of faulty institutional processes.”Footnote 169 Along a similar line of reasoning, another author has argued that Health Maintenance Organizations (“HMOs”) dictate the parameters of medical care and are locked in a single enterprise affecting patient care.Footnote 170 EL has also been suggested for Accountable Care Organizations that were formed in response to the Affordable Care Act.Footnote 171
There have moreover been proposals to extend EL to harms caused by novel medical technology. Thomas McLean has advocated for a single health care provider, specifically the medical service payor, to be liable for negligent acts that occur in the process of performing cybersurgery.Footnote 172 The idea is that litigation would be simplified if only one party is held responsible for providing all compensation.Footnote 173 As a result, EL would avoid finger-pointing, facilitate litigation, and generally decrease the transaction costs associated with adverse cybersurgical events. EL would also facilitate patient safety by financially incentivizing medical service providers to choose only the best technology and conduit service providers.Footnote 174 More recently, EL has been suggested as a liability framework for harm caused by medical AI systems, though there have been few attempts to spell out such a framework in any great detail.Footnote 175
A further development of EL, and one that has been recently proposed for AI harms, is CEL. Under the settled or classical version of the CEL doctrine, “entities within a set of interrelated companies may be held jointly and severally liable for the actions of other entities that are part of the group.”Footnote 176 David Vladeck has proposed a variation of classical CEL as a response to the question of who should bear the cost of harms caused by autonomous vehicles under a strict liability regime.Footnote 177 Here, he draws inspiration from a line of federal cases involving deceptive marketing practices where the theory was used to hold a group of corporate entities liable for harm directly caused by only one member of the group. In the leading CEL case of FTC v. Tax Club Inc., the U.S. District Court for the Southern District of New York affirmed CEL as an exception to the rule against group pleading, i.e., lumping defendants together in a way that does not distinguish the misconduct of each.Footnote 178 The CEL exception has been applied to cases of corporate misconduct where defendants strategically formed and used various corporate entities to violate consumer protection law.Footnote 179 These corporations were considered to be functioning jointly as a common enterprise for the purposes of liability.Footnote 180
In Vladeck’s variation of CEL, it suffices that legal entities work toward a common end.Footnote 181 Applying this principle to the situation of AI, component manufacturers are considered to be engaged in the common objective of designing, programming, and manufacturing a vehicle despite not functioning jointly. Vladeck’s reasoning for why CEL is appropriate in the context of AI is as follows:
A common enterprise theory permits the law to impose joint liability without having to lay bare and grapple with the details of assigning every aspect of wrongdoing to one party or another; it is enough that in pursuit of a common aim the parties engaged in wrongdoing. That principle could be engrafted onto a new, strict liability regime to address the harms that may be visited on humans by intelligent autonomous machines when it is impossible or impracticable to assign fault to a specific person.Footnote 182
For Vladeck, the common enterprise is between the manufacturer of the autonomous vehicle and sub-component manufacturers (e.g., of radar and laser sensors, computers). He provides two reasons why the manufacturer should not absorb the full cost of accidents, as is often the case under prevailing products liability law: (1) the component may be the root cause of the accident, and (2) insulating the component manufacturers from liability would remove their incentive to innovate and improve their products.Footnote 183
Vladeck’s modified CEL also differs from versions of EL where the cost of accidents is spread among companies engaged in the same hazardous industry.Footnote 184 Whereas this kind of industry-wide cost spreading might be appropriate for small and highly concentrated industries, it is less so for expansive industries with many players using different technology and manufacturing processes.Footnote 185 Notwithstanding key conceptual differences, there is a strong “family resemblance” between EL, classical CEL, and Vladeck’s modified CEL. All of these theories shift away from negligence’s tendency to treat accidents as resulting from the misconduct/omission of a single defendant. As such, they allow courts to overcome the finger-pointing problem that arises in situations where responsibility may be dispersed among various actors.Footnote 186
IV. COMMON ENTERPRISE LIABILITY: A LIABILITY FRAMEWORK FOR HEALTH AI
Of all the competing liability frameworks, a version of EL is the most appropriate to address harms arising from the use of clinical AI systems. EL’s shift of liability’s focus from acts to activities resonates with the dispersion of responsibility among the various actors (and networks of actors) involved in the operation of a clinical AI system. To this end, Vladeck’s CEL-based theory of liability can be fruitfully applied to the case of health AI systems with one major variation: the law should recognize a common enterprise among the physician, the manufacturer, and the hospital. By appropriating the criterion of a “common objective” to determine who is part of a common enterprise, the proposed framework can facilitate the apportioning of responsibilities and liability among disparate actors under a single legal theory.Footnote 187
A. CEL and the Responsibility Gap
As discussed above, injuries caused by health technology are usually the result of the actions and omissions of multiple actors who relate to and influence each other in complex ways. The numerous stakeholders involved in the implementation and operation of a clinical AI system obscure the attribution of fault and therefore responsibility. As Taddeo and Floridi explain:
The effects of decisions or actions based on AI are often the result of countless interactions among many actors, including designers, developers, users, software and hardware. This is known as distributed agency. With distributed agency comes distributed responsibility. Existing ethical frameworks address individual, human responsibility, with the goal of allocating punishment or reward based on the actions and intentions of an individual. They were not developed to deal with distributed responsibility.Footnote 188
Distributed agency is particularly acute in the medical space. While a clinical AI system might generate a wrong treatment recommendation based on a faulty algorithm developed by the manufacturer, it is the physician who makes the final decision. Moreover, the actions of other actors may bear on the physician’s decision to endorse the AI recommendation, such as a hospital pressuring its physicians to rely on the AI system’s outputs. This culminates in “a situation where each of the stakeholders involved have contributed to medical treatment, with [none] of them being fully to blame.”Footnote 189
The result is a responsibility gap whereby it is difficult to assign blame to any one party.Footnote 190 This difficulty has prompted calls for a more distributed or collective conception of responsibility with respect to harm caused by clinical AI.Footnote 191 Vladeck’s CEL is a promising solution to the problem of distributed agency in that it allows the allocation of legal responsibility among multiple actors without having to parse out the contributions, interactions, and wrongdoing of each individual actor. This avoids having to trace the “long causal chain of human agency” that characterizes the use and development of AI technology—a task that would invariably be challenging and resource intensive.Footnote 192
Agency law is, admittedly, not entirely bereft of conceptual resources to apportion liability to multiple actors on the basis of those actors being co-principals. The difficulty is that, to qualify as an AI principal and thereby be liable, an actor must have the ability or right to control the AI system.Footnote 193 This would likely shield the manufacturer, and possibly the hospital, from liability given the gaps in control between these parties and the operations of the AI system.Footnote 194 For a victim to recover from the enterprise under the proposed CEL-based approach, it need only be demonstrated that the actors worked toward a common end; there is no requirement that one party had the ability or right to control the actions of the AI system. This approach is, therefore, more responsive to the problem of the responsibility gap and will make it easier for victims to seek compensation.
B. Common Enterprise Among Physicians, AI Manufacturers, and Hospitals
While somewhat removed from Vladeck’s initial intentions, the idea that the physician, manufacturer, and hospital work toward a common objective is a logical application and extension of CEL theory. It bears noting that clinical AI is conceived and designed to be used by health care professionals in health care organizations for the purpose of providing care to patients.Footnote 195 The pursuit of this common aim is also underlined by AI systems’ increasing embeddedness in crucial aspects of health care operations. AI systems have the “potential to facilitate diagnostics, decision-making, big-data analytics, and administration.”Footnote 196 As such, AI systems are beginning to assume roles that have traditionally been occupied by health care professionalsFootnote 197 and are already being envisioned as future replacements for some of these positions.Footnote 198
That clinical AI systems are designed to duplicate, complement, or (in certain instances) take over certain defined activities within health care sets them apart from AI systems with more open-ended applications. Clinical AI systems have been designed to address deficiencies that are specific to the practice of medicine and to the operations of health care organizations, deficiencies that have long evaded resolution by human intelligence alone. Indeed, this understanding of health AI as fulfilling compensatory and enhancement functions was explicitly adopted by the American Medical Association (“AMA”) in a policy statement expressing preference for the term “augmented intelligence” over artificial intelligence.Footnote 199 This terminology was intended to reflect “the enhanced capabilities of human clinical decision making when coupled with these computational methods and systems [for data analysis].”Footnote 200 While other medical technologies also fulfill compensatory and enhancement functions to a certain extent, clinical AI systems are distinguished by, among other things, their potential to be ubiquitous in medical interactions and to issue treatment recommendations.Footnote 201
Indeed, Eric Topol observes a "convergence of human and artificial intelligence" at various levels of medicine.Footnote 202 At the clinical level, Topol predicts that at some point in the future every clinician will use AI technology involving deep neural networks to help "interpret medical scans, pathology slides, skin lesions, retinal images, electrocardiograms, endoscopy, faces, and vital signs."Footnote 203 The point here is not so much that AI has fully lived up to its promise or will in the future; rather, it is that manufacturers design health AI systems to augment, enhance, or compensate for the capacities of health care professionals in pursuit of the same objectives.Footnote 204 The goal, as Topol notes, is not to develop fully automated AI with no backup clinicians but to achieve "synergy, offsetting functions that machines do best combined with those that are best suited for clinicians."Footnote 205 And while it is physicians who typically operate these systems, they do so in a tightly integrated ecosystem that includes the hospital and AI device manufacturers.Footnote 206 All of this suggests that physicians, AI manufacturers, and hospitals are engaged in what can be broadly characterized as a common enterprise.
C. Policy Reasons for Inclusion in the Common Enterprise
There are compelling policy reasons for including the foregoing actors in the common enterprise and thereby holding them liable for harms caused by clinical AI systems. First, the inclusion of the physician is warranted given the physician’s role in operating the AI system, interpreting the output, and endorsing or rejecting the AI system’s recommendation. In a straightforward sense, the physician is the actor most closely implicated in the harm. This seems to be a prima facie reason to include the physician as part of the common enterprise, notwithstanding any good faith reliance on the AI’s recommendation or the involvement of other actors. As Maliha and colleagues have observed, courts have allowed malpractice suits to proceed against health professionals even when there were mistakes in medical literature given to patients or when a pharmaceutical company had provided inadequate warning of a therapy’s adverse effects. Moreover, courts have held physicians liable for malpractice based on errors made by system technicians or manufacturers.Footnote 207
The issue is complicated by the "black box" nature of AI systems: a physician will be unable to understand, let alone challenge, the underlying reasoning of an AI recommendation. That being said, it is the physician who ultimately makes an independent judgment about whether to follow an AI's recommendation in a particular case.Footnote 208 If a physician becomes aware of a malfunction or defect in the AI system, for instance, the physician arguably has a duty to cease use of the equipment and report the problem to the hospital and possibly the manufacturer.Footnote 209 It would therefore be questionable policy to exclude the physician from the common enterprise and thereby allow physicians to rely on (or disregard) AI recommendations with impunity.Footnote 210
Second, the manufacturer is an equally natural candidate for inclusion in the common enterprise despite having little control over the operation of the clinical AI system. Manufacturers have intimate knowledge of the characteristics and features of their products. They are the party that typically exercises control over the product's design and programming.Footnote 211 Accordingly, they are in a unique position to invest in and implement ways to make the AI system safer for end-users such as health care organizations and physicians. As expressed by the European Commission, "it is the producer [of AI systems] who is the cheapest cost avoider and who is primarily in a position to control the risk of accidents."Footnote 212 Moreover, making manufacturers bear financial responsibility for injuries caused by their products will incentivize them to research ways to avoid losses that are currently unavoidable.Footnote 213
Finally, the choice to include the hospital may be less obvious given that the hospital neither designs, manufactures, nor directly operates the clinical AI system. As such, one might argue that the hospital is somewhat removed from the harm caused by clinical AI systems. Its inclusion is nonetheless warranted on the basis that hospitals constitute the "major institutional bodies responsible for the quality of health care."Footnote 214 The hospital is uniquely situated to guard against the omnipresent risk of dangerous error using the collective wisdom and experience of its members.Footnote 215 As the Institute of Medicine observed in its seminal report To Err is Human over 20 years ago, medical errors are often the result of faulty systems within health care organizations.Footnote 216 This report was prescient in anticipating the human-machine interface as an important focus of preventative efforts.Footnote 217 The introduction of new technology invariably raises the possibility of new errors. It falls on health care organizations to take preventative measures, as "safe equipment design and use depend on a chain of involvement and commitment that begins with the manufacture and continues with careful attention to the vulnerabilities of a new device or system."Footnote 218 Among health care actors, hospitals hold arguably the most influence over the kind of technology that is used, how it is used, and by whom it is used.Footnote 219
Including the hospital in the common enterprise should not be taken as an ascription of omniscience or omnipotence to the hospital as an institution. Hospitals are ultimately run by human administrators who can no more predict or prevent any specific AI-induced harm than physicians or (in some instances) AI engineers. Nonetheless, a hospital is well-situated to assess whether a physician has the requisite training, experience, and safety record to treat patients within that hospital's premises.Footnote 220 Where there have been issues of misconduct or malpractice on the part of a certain physician, the hospital is in a unique position to ensure that the physician practices in a way that minimizes risk to patient safety.Footnote 221 Similarly, the hospital is well-situated to implement structures and systems to minimize the risks associated with the use of clinical AI systems.Footnote 222 Cases involving cybersurgery misadventures such as Taylor have revealed the extent to which the introduction of advanced technology has intensified the institutional and systems-based character of modern medical practice. Assigning liability to health care organizations could facilitate systemwide improvements in the safe use of medical technology.
A hospital's responsibility to the patient vis-à-vis supervision of staff, accreditation, and equipment has long been recognized in cases of corporate liability. In Darling v. Charleston Community Memorial Hospital, Footnote 223 the judicial decision often regarded as the origin of corporate liability, the Illinois Supreme Court was faced with the question of whether the hospital owed responsibilities directly to the patient. The Court ruled that such responsibilities were indeed desirable and feasible based on accreditation standards, state licensing requirements, and the hospital's own bylaws, as well as the expectations of the medical profession and other responsible authorities.Footnote 224 The court in Darling recognized for the first time that hospitals may incur liability for the negligent selection and monitoring of physicians who commit malpractice notwithstanding physicians' status as independent contractors, thereby establishing a legal duty on the part of hospitals to adequately credential physicians.Footnote 225 Along this line of reasoning, some have proposed that hospitals should be held liable for negligent credentialing when they fail to adequately vet clinical AI systems prior to clinical implementation.Footnote 226
It should be noted that corporate liability, as distinct from strict liability coupled with CEL, is a fault-based regime that requires plaintiffs to prove hospital negligence as a prerequisite for recovery.Footnote 227 Such a fault-based approach poses significant obstacles to plaintiff recovery.Footnote 228 That being said, the case law on corporate negligence is instructive in affirming the role of the modern hospital as the patient’s health care coordinator and, correspondingly, an important source of responsibility for harms that occur within its premises.Footnote 229 This generates responsibilities regarding the maintenance of equipment and establishment of adequate rules and policies.Footnote 230 In short, the idea that the hospital is particularly well-situated to minimize the risks that technology such as AI poses to patients is firmly established by the judicial recognition of corporate liability as a legal basis for holding hospitals liable to patients. This points toward the adoption of some form of EL for hospitals.Footnote 231
D. Strict vs. Fault-Based Liability
The preceding discussion has suggested that fault-based liability regimes would place asymmetrical and unfairly onerous burdens on plaintiffs who seek to recover for harm arising from the use of clinical AI systems. While this unfairness and asymmetry favor a shift away from fault-based liability, they leave open the question of what kind of no-fault or strict liability standard would be appropriate in these circumstances. There exist multiple instantiations of strict liability in American law, some more fault-like than others. On one end of the spectrum is modern products liability, which in some states resembles negligence in requiring plaintiffs to demonstrate "fault-infused" elements such as reasonableness, foreseeability, and causation.Footnote 232 On the other end of the spectrum is a Rylands v. Fletcher-styleFootnote 233 regime that imposes strict liability for harm resulting from "abnormally dangerous" activity.Footnote 234 Under this common law tort, liability attaches even if the actor takes appropriate precautions to prevent the risk from materializing. The Rylands rule was eventually incorporated into American law and today applies most commonly to damage resulting from activities such as the use of explosives and the transportation of nuclear materials.Footnote 235
There are strong normative reasons to adopt some form of what Vladeck calls "true strict liability," which dispenses with the legal tests found under products liability law and negligence.Footnote 236 Most crucially, providing victims redress for injuries sustained through no fault of their own, even if they cannot demonstrate elements such as foreseeability and causation, is consistent with "basic notions of fairness, compensatory justice, and the apportionment of risk in society."Footnote 237 This applies especially in the medical space given patients' lack of bargaining power in opting for the use of AI systems in the first place. Patients are rarely in a position to "negotiate all aspects of treatment, where, for example, they may consent to procedure without full comprehension of the procedure and its risks."Footnote 238 Nor do patients have the ability to validate the soundness of clinical AI algorithms. On this point, it bears noting that strict liability has long been justified on normative grounds as a means of addressing the power imbalance between manufacturers and consumers.Footnote 239 In dispensing with the requirement that the tortfeasor and victim be linked together by the elements of fault or defect, strict liability also finds a parallel in theories of distributed morality, espoused by scholars such as Luciano Floridi, where good or evil outcomes can be the "result of otherwise morally-neutral or at least morally negligible [] interactions among agents, constituting a multiagent-system, which might be human, artificial, or hybrid."Footnote 240 Under such a theory, responsibility for good and evil outcomes (the system's outputs) can be backpropagated to all of the system's nodes and agents for the purposes of improving the outcome.Footnote 241
At the same time, it may not be appropriate to treat the use of clinical AI systems as a Rylands-style ultrahazardous or abnormally dangerous activity. The Third Restatement qualifies an activity as "abnormally dangerous" if "(1) the activity creates a foreseeable and highly significant risk of physical harm even when reasonable care is exercised by all actors; and (2) the activity is not one of common usage."Footnote 242 While it is debatable whether the use of clinical AI systems creates a "highly significant risk of physical harm" unsusceptible to mitigation through reasonable care, the criterion of non-common usage will become increasingly difficult to meet as this technology becomes more prevalent. If clinical AI systems advance to a stage where they match or even exceed physician performance, then it is possible that the combination of human and AI will become the standard of care.Footnote 243 At that point, to characterize medical AI as abnormally dangerous would amount to saying that the practice of medicine itself is abnormally dangerous.
A shift away from fault-based liability would ultimately facilitate recovery from manufacturers and health care institutions for harmful results that are unpredictable and, to a large extent, unavoidable. The result would be to shift the cost of AI accidents to these enterprises. In the case of CEL, the costs would then be spread among the members of the enterprise, i.e., the actors most relevant to the creation, operation, and surveillance of the clinical AI system.Footnote 244 On this point, courts have long recognized the distributive goals that can be realized through a strict liability regime.Footnote 245 Moreover, a strict liability regime is more predictable and, as such, may be more conducive to innovation than a liability regime premised on the "quixotic search for, and then assignment, of fault."Footnote 246 Like products liability, a CEL-based approach coupled with strict liability would recognize that decisions resulting in alleged harms can ultimately be traced upstream to choices made by manufacturing companies. By shifting at least some of the blame to the manufacturer without imposing on the plaintiff the burden of proving a defect or fault, the proposed approach would provide a strong incentive to take care in the design, programming, and manufacturing of clinical AI.Footnote 247
The distributive objectives that strict liability seeks to further are not, contrary to what some commentators might think, entirely divorced from the corrective notion of justice privileged by fault-based torts such as negligence. Corrective justice seeks to "restore a pre-existing relationship between two parties, one that was unjustly disturbed by one party's misconduct and the resulting injury to the other."Footnote 248 It is commonly on this basis that commentators reject a strict liability standard for AI harms; the critique is that it would be unfair to hold the defendant liable for harms over which they had no control and which were, as such, unavoidable.Footnote 249 This criticism is misguided, however, because it assumes that holding defendants liable for unavoidable harms punishes a blameless party. Under the internal morality of harm-based strict liability, the wrong done is not the harm itself (which might very well have been unavoidable) but the failure to repair the harm done.Footnote 250 As Gregory Keating observes, "[t]he primary duty that harm-based strict liability institutes is not a duty not to harm; it is a duty to harm only through reasonable, justified conduct, and to make reparation for any harm done even though due care has been exercised."Footnote 251 In this sense, strict liability does fulfill a corrective role. By holding the defendant liable for a failure to make reparation while benefiting from the activity that gave rise to the harm, strict liability restores the pre-existing relationship between the two parties.Footnote 252
At the core of strict liability's internal morality is a distributive idea of justice whereby it is wrong to inflict harm on another party, and to benefit from that harm, without making reparations.Footnote 253 EL theories (including their CEL variants) embody this principle of fairness most fully in their focus on activities over individual actions. These are otherwise legitimate activities carried out by firms that, by their nature, expose consumers, employees, or the public to generalized and systemic risk. Activities in this sense stand in contrast to the conduct of separate, unrelated individuals acting independently of one another, which is the traditional concern of negligence law. The introduction of these kinds of risks does not constitute the activity itself but is, even for the most responsible enterprises, a necessary by-product that renders the activity possible or economically profitable. EL assumes that even if all reasonable precautions were followed, the enterprise bears a moral responsibility to make reparations for accidents due to the benefit accrued from the activity. It is this activity that serves as the basic unit of responsibility.Footnote 254
Strict liability is fair to victims who, even if they are participants in the enterprise, do not benefit from the enterprise in proportion to the harm suffered as a result of the materialized risk. Strict liability is also fair to tortfeasors because it forces them to bear the cost of their choice to introduce these risks, which was presumably done in pursuit of their advantage.Footnote 255 This internal morality of EL is particularly germane to the medical context. If AI systems turn out to be more accurate than human physicians, their widespread use will likely drive down medical costs (including malpractice insurance) in the long term—a clear financial benefit for health care providers and organizations.Footnote 256 While patients will no doubt benefit from the use of health AI, this benefit is disproportionate to the detriment they suffer when physically harmed by an incorrect diagnosis or treatment recommendation.Footnote 257 The clear organizational and financial value derived by health care actors justifies a shift in who bears the cost of accidents.
E. Future Legal Reforms
Reforms to implement the proposed approach will likely have to take place at the state level given that tort actions, including those against hospitals, physicians, and manufacturers, have traditionally been the domain of state courts. Given the current stage of AI technology and the lack of litigation involving clinical AI systems, it may be premature for state legislatures to mandate the replacement of products liability and medical malpractice law with CEL coupled with strict liability for harms arising out of the use of clinical AI systems. Instead, legislatures can pass statutes explicitly permitting hospitals and hospital systems to experiment with a CEL-based framework.Footnote 258 By comparing outcomes at a hospital that assumes CEL for AI harms with those at a similar organization that does not, we can identify the effects of a CEL framework on metrics such as patient safety and the number of lawsuits.Footnote 259 The results of these experiments can form the basis for future statutory reforms making CEL the exclusive remedy for harms arising out of the use of clinical AI systems.
Under the proposed approach, members of the common enterprise might want to set out the precise division of liability through contractual indemnification clauses.Footnote 260 How liability is distributed in a given common enterprise will depend on a variety of factors, such as the relative bargaining powers of the parties and the desire to promote innovation, though each party will need to retain some liability to preserve safety incentives.Footnote 261 In practice, each actor's share of a claim will likely be paid by its respective insurer. Physicians and hospitals have access to coverage through the commercial insurance market, self-insurance, and the use of captive insurers.Footnote 262 While the kinds of insurance available to technology manufacturers typically exclude coverage for bodily injury, the insurance market is starting to close this gap.Footnote 263 Not only does the involvement of multiple insurers perform a useful loss-spreading function, but it may also promote patient safety in that insurers have a financial incentive to mandate AI safety requirements such as testing.Footnote 264 Admittedly, the involvement of a loss-absorbing entity in the form of the insurer appears to be in tension with the proposition that losses caused by an enterprise's activity ought to be borne by that enterprise.Footnote 265 This tension is mitigated, however, by the fact that medical liability insurance premiums and access to coverage have to a significant extent become risk-based. Liability insurers adjust premiums based on loss histories and may even refuse to insure high-risk medical providers.Footnote 266 As such, the introduction of liability insurance does not break the connection between liability and financial loss, even if the members of the common enterprise would not bear the entirety of the loss.
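For concreteness, the following is a minimal, purely illustrative sketch of how a negotiated indemnification arrangement might translate a single damages award into payments by the members of a common enterprise. The party labels, share percentages, and the retained minimum share are assumptions introduced solely for illustration; they are not drawn from this Article's proposal, from any statute, or from any actual indemnification agreement.

```python
# Hypothetical illustration only: apportioning a single damages award among
# members of a common enterprise under negotiated indemnification shares.
# All names and figures below are assumed for demonstration purposes.

from dataclasses import dataclass


@dataclass
class Member:
    name: str
    share: float  # contractually negotiated fraction of the residual award


def apportion(award: float, members: list[Member], floor: float = 0.05) -> dict[str, float]:
    """Split an award among enterprise members.

    Each member retains at least `floor` of the award (a stand-in for the
    idea that every party must bear some liability to preserve safety
    incentives); the remainder is divided according to the negotiated shares.
    """
    if abs(sum(m.share for m in members) - 1.0) > 1e-9:
        raise ValueError("Negotiated shares must sum to 1")
    reserved = floor * len(members) * award          # minimum retained by each member
    residual = award - reserved                      # portion divided by negotiated shares
    return {m.name: round(floor * award + m.share * residual, 2) for m in members}


# Hypothetical enterprise: physician (via insurer), hospital, AI manufacturer.
enterprise = [Member("physician", 0.20), Member("hospital", 0.30), Member("manufacturer", 0.50)]
print(apportion(1_000_000, enterprise))
# e.g. {'physician': 220000.0, 'hospital': 305000.0, 'manufacturer': 475000.0}
```

In practice, as noted above, each member's payment would typically be routed through its insurer, and the negotiated shares would reflect bargaining power and innovation policy rather than any fixed formula.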
At a certain point, the AI system may reach a level of autonomy and unpredictability such that we will have to reconsider the manufacturer’s place in the common enterprise. While not essential to the proposed framework, recognizing a limited form of AI personhood would help deliver on some of the benefits of the CEL framework.Footnote 267 The AI “person” could be considered a participant in the common enterprise for the purposes of liability—taking the place of the manufacturer and component suppliers. Gerhard Wagner has noted that robot personhood can serve to “bundle” responsibility and allow liability to be attributed to a single entity to which the victim may turn for compensation.Footnote 268 Like corporate personality, AI personality is one way of ensuring accountability where the harm can be traced to the activities of a group but not to any single individual. It would also avoid the complication of having one common enterprise (among the AI manufacturer and subcomponent manufacturers as envisioned by Vladeck) form part of another larger common enterprise (among the physician, manufacturer, and the hospital).
Many of the concerns about AI personhood can be allayed with mandatory liability insurance, assets backing the AI person, or both.Footnote 269 Karnow has proposed the creation of a Turing Registry that would certify the risk level of an AI system, charge the developer a commensurate premium for liability coverage, and pay compensation for harms without any inquiry into fault or causation.Footnote 270 Similarly, the European Parliament has recommended the creation of obligatory AI insurance, along with a supplemental insurance fund, as a corollary to its call for AI personhood.Footnote 271 A more limited version of this proposal could be adapted for health AI systems, with members of the common enterprise as the payors. The AI person would therefore function as little more than a conduit to channel the costs of insurance directly to certain actors.Footnote 272
V. CONCLUSION
While WFO’s recent misadventures should not scare us off from exploiting the clinical promise of AI technology, the legal system would be wise to prepare for novel legal disputes involving AI-related harms in the health care space. Due to their complexity, opacity, and lack of foreseeability, AI systems are not easily accommodated by traditional liability frameworks. As such, frameworks based on fault or defects will make it difficult for victims of AI harms to obtain compensation. Agency law, on the other hand, is too limited of a legal theory to account for the dispersion of responsibility that characterizes the operation of clinical AI systems. A key insight of this Article is that clinical AI is deeply intertwined with the operation, mission, and expertise of health care organizations. Given this background, applying CEL would be both fitting and desirable. By recognizing a common enterprise among physicians, AI manufacturers, and hospitals, the law can address the threat of a responsibility gap and leverage the hospital’s unique influence over the safe use of health technology. The proposed framework’s shift away from fault-based liability serves a deterrent function while favoring victim compensation. Notwithstanding unresolved issues, including whether AI personhood could help deliver on some of the benefits of the proposed approach, a move towards CEL would likely facilitate the adoption of clinical AI technology while ensuring fair compensation for harms arising out of the technology’s use.