
We, the Robots? Regulating Artificial Intelligence and the Limits of the Law by CHESTERMAN Simon. Cambridge: Cambridge University Press, 2021. xx + 290 pp. Hardcover: $39.99; available as eBook. doi: 10.1017/9781009047081

Published online by Cambridge University Press:  03 December 2021

Hitoshi NASU*
Affiliation:
United States Military Academy at West Point, New York, United States

Type: Book Review

Copyright © The Author(s), 2021. Published by Cambridge University Press

In his usual astute style, Simon Chesterman tackles the structural problems that artificial intelligence poses to meaningful regulation in the form of public control. Building on the author's past work on public authority in times of crisis, his latest book explores the law's potential to regulate emerging technology against three sets of challenges – speed, autonomy, and opacity. As discussed in Part I of the book, these challenges are not entirely legal or normative in nature. Many of the questions raised and discussed in the literature on artificial intelligence and law involve practical difficulties, moral questions, or the perception of legitimacy. Chesterman judiciously sifts through various types of concerns through these lenses, which distinguishes this book from its peers, as an unproven wariness that artificial intelligence may cause harm with impunity runs deep in the psyche of those who advocate, and overstate, the need for new laws. This scholarly analysis makes a welcome contribution to a better understanding of the law's potential and limits as a regulatory tool to manage human interaction with technological challenges.

Part II of the book examines various legal tools to address potential accountability gaps in the regulation of artificial intelligence. Chesterman employs a wide range of legal concepts across the field: from product liability to command responsibility, from the notion of “agency” in civil liability to the “inventor” in patent law, and from algorithmic impact assessments to a novel “right to explanation”. The breadth of legal doctrine covered in this book is a testament to his broad conversance with the discipline of law. However, as Chesterman emphasizes, “understanding how to regulate may be less important than understanding why” (p. 86). The book thus casts a critical eye on the various guides, frameworks, and principles focused on artificial intelligence that have proliferated over the last few years.

Chesterman develops his vision for the regulation of artificial intelligence in Part III, based on the view that regulation is necessary on the grounds of morality and legitimacy. While adopting the view (with which I would entirely agree) that existing state-based institutions and rules are capable of regulating most applications of artificial intelligence, Chesterman proposes the establishment of an International Artificial Intelligence Agency, drawing on lessons from the experience of the International Atomic Energy Agency (IAEA) in the area of nuclear regulation. Readers may dispute the desirability or feasibility of such an institution, as well as the meaning of key terms such as “human control” and “explainability”. However, this does not discount the value of the analytical framework that the book offers to facilitate an informed debate without yielding to speculation, or relying on failures at an experimental stage to illustrate the technology's inability to operate within the bounds of the law, as many commentators do. It is hoped that this book gives readers pause for thought before engaging in this debate, prompting them to ask, as Chesterman himself does: “are we asking the right questions?”

Footnotes

This article has been updated since its original publication, and the error has been rectified in the online PDF and HTML versions. A notice detailing the changes has also been published at https://doi.org/10.1017/S2044251322000078.