
Part I - AI, Ethics and Philosophy

Published online by Cambridge University Press:  06 February 2025

Nathalie A. Smuha, KU Leuven
Publisher: Cambridge University Press
Print publication year: 2025
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY 4.0 https://creativecommons.org/cclicenses/

1 Artificial Intelligence: A Perspective from the Field

Wannes Meert, Tinne De Laet, and Luc De Raedt
1.1 Introduction

Since the early days of computers and programming, humankind has been fascinated by the question of whether machines can be intelligent. This is the domain of artificial intelligence (AI),Footnote 1 a term first coined by John McCarthy when he organized the now legendary first summer research project at Dartmouth in 1956. The field of AI seeks to answer this question by developing actual machines (robots or computers) that exhibit some kind of intelligent behavior.

Because intelligence encompasses many distinct aspects, some more complicated than others, research toward AI is typically focused on one or only a few of these aspects. There exist many opinions and lengthy debates about how (artificial) intelligence should be defined. However, a recurring insight is that the capabilities of learning and reasoning are essential to achieve intelligence. While most practical AI systems rely on both learning and reasoning techniques, each of these families of techniques has developed rather independently. One of the grand challenges of AI is achieving a truly integrated learning and reasoning mechanism.Footnote 2 The difference between the two can be thought of in terms of “System I” and “System II” thinking, as coined in cognitive psychology.Footnote 3 System I thinking concerns our instincts, reflexes, or fast thinking. In AI we can relate this to the subdomain of machine learning, which aims to develop machines that learn patterns from data (e.g., do I see a traffic light?). System II thinking concerns our more deliberate, multistep, logical, slow thinking. It relates to the subdomain of reasoning and focuses on knowledge and (logical or probabilistic) inference (e.g., do I need to stop in this traffic situation?). In this chapter, we dive deeper into both machine learning and machine reasoning and describe why they matter and how they function.

1.2 What Is Machine Learning?

To answer the question whether machines can learn and reason, we first need to define what is meant by a “machine” that can “learn” and “reason.” For “machine learning” we turn to Tom Mitchell’s definition, which is generally accepted within the domain: a machine is said to learn if its performance at a specific task improves with experience.Footnote 4 The term machine herein refers to a robot, a computer, or even a computer program. The machine needs to perform a given task, typically a task with a narrow scope such that performance can be measured numerically. The more the machine performs the task and receives feedback on its performance, the more experience it accumulates and the better its performance becomes. A more informal definition comes from Arthur Samuel, an American computer scientist:Footnote 5 machine learning concerns “computers that have the ability to learn without being explicitly programmed.”Footnote 6

One of the original, but still fascinating, examples of machine learning is a computer program (the machine) developed by Arthur Samuel to play checkers (the task). After playing multiple games (the experience), the program became a stronger player. This was measured by counting the number of games won or the ranking the program achieved in tournaments (the performance). This computer program was developed in the 1950s and 1960s and was one of the first demonstrations of AI. Already then, the program succeeded in winning against one of the best US checkers players. By the early 1990s, the checkers program Chinook, developed at the University of Alberta, outperformed all human players.Footnote 7 Nowadays, checkers is a “solved” game. This means that a computer program can play optimally, and the best result an opponent, human or machine, can achieve is a draw. Since then, we have observed AI conquer increasingly complicated games. Human-level chess play was reached when Deep Blue won against world chess champion Garry Kasparov in 1997. The game of Go, for which playing strategies were considered too difficult to even represent in computer memory, was conquered when the program AlphaGo won against Lee Sedol in 2016.Footnote 8 More recently, games in which not all information is available to a player can also be played by AI at the level of top human players, such as Stratego, where DeepNash reached human expert level in 2022.Footnote 9

Another ubiquitous example of learning machines is the mail filter (the machine) that automatically removes unwanted emails, categorizes mails into folders, or automatically forwards a mail to the relevant person within an organization (the task). Since email is customized to individuals and dependent on one’s context, mail handling should also differ from person to person and from organization to organization. Therefore, mail filters ought to be adaptive, so that they can adjust to the needs and contexts of individual users. A user can correct undesired behavior or confirm desired behavior by moving and sorting emails manually, thereby indicating (a lack of) performance. This feedback (the experiences) is used as examples from which the computer program can learn. Based on certain properties of those emails, such as sender, style, or word choice, the mail filter can learn to predict whether a new email is spam, needs to be deleted, moved, forwarded, or kept as is. Moreover, by analyzing the text and recognizing a question and intention, the mail filter can also learn to forward the mail to the person who previously answered a similar question successfully. The more examples or demonstrations are provided to the system, the more its performance improves.

A third example is a recommender system (the machine), which is used by shops to recommend certain products to their customers (the task). If, for example, it is observed that many of the customers who have watched Pulp Fiction by Quentin Tarantino also liked Kill Bill, this information can be used to recommend Kill Bill to customers who have watched Pulp Fiction. Here, the experience is the list of movies that customers have viewed (or rated), and the performance is measured by the company’s revenue, customer retention, or customer satisfaction.

These examples illustrate how machines need to process (digital) data to learn and thus perform machine learning. By analyzing previous experiences (e.g., games played, emails moved, and movies purchased), the system can extract relevant patterns and build models to improve the execution of its task according to the performance metric used. This also illustrates the inherent statistical nature of machine learning: it analyzes large datasets to identify patterns and then makes predictions, recommendations, or decisions based on those patterns. In that way, machine learning is also closely related to data science. Data science is a form of intelligent data analysis that allows us to reformat and merge data in order to extract novel and useful knowledge from large and possibly complex collections of data. Machine learning provides tools to conduct this analysis in a more intelligent and autonomous way. Machine learning allows machines to learn complicated tasks based on (large) datasets. While high performance is often achieved, it is not always easy to understand how the machine learning algorithm actually works and to provide explanations for its output. This is what is referred to as a “black box.”

1.3 What Is Machine Reasoning?

Machine learning has powered systems that identify spam emails, play advanced games, provide personalized recommendations, and chat like a human; yet the question remains whether these systems truly understand the concepts and the domain they are operating in. AI chatbots, for instance, generate dialogues that are human-like, but at the same time have been reported to invent facts and to lack “reasoning” and “understanding.” ChatGPTFootnote 10 will, when asked to provide a route description between two addresses, confidently construct a route that includes a turn from street A to street B without these streets even being connected in reality, or propose a route that is far from the fastest or safest. The model underlying current versions of ChatGPT does not “understand” the concept of streets and connections between streets, nor what makes a route fast or safe. Similarly, a recommender engine could recommend a book on Ethics in AI based on the books that friends in my social network have bought, without “understanding” the concepts of Ethics and AI and how they are related to my interests. The statistical patterns exploited in machine learning can be perceived as showing some form of reasoning because these patterns originate from (human) reasoning processes. Sentences generated with ChatGPT look realistic because the underlying large language models are learned from a huge dataset of real sentences, and driving directions can be correct because the guidebooks used as training data contain these directions. A slightly different question may, however, cause ChatGPT to provide a wrong answer, because directions for a changed or previously unseen situation cannot always be constructed from linguistic patterns alone.

This is where reasoning comes into the picture. Léon Bottou put forward a plausible definition of reasoning in 2011: “[the] algebraic manipulation of previously acquired knowledge in order to answer a new question.”Footnote 11 Just like in Mitchell’s definition, we can distinguish three elements. There is knowledge about the world that is represented; that knowledge can be used to answer (multiple) questions; and answering questions requires the manipulation of the available knowledge, a process that is often termed inference. Furthermore, Bottou argues that his definition covers both logical and probabilistic reasoning, the two main paradigms in AI for representing and reasoning about knowledge.

Logical knowledge of a specific domain can be represented symbolically using rules, constraints, and facts. Subsequently, an inference engine can use deductive, abductive, or inductive inference to derive answers to questions about that domain. The logical approach to machine reasoning is well suited for solving complex problems that require thorough, multistep reasoning over the knowledge base. It is of particular interest for domains where understanding is crucial and the stakes are high, as deductive reasoning will lead to sound conclusions, that is, conclusions that logically follow from the knowledge base. For example, to explore and predict optimal payroll policies, one needs to reason over the clauses or rules present in the tax legislation.Footnote 12

Probabilistic knowledge is often represented in graphical models.Footnote 13 These are graphical representations that capture not only the variables of interest but also the (in)dependencies between these variables. The variables are the nodes of the graph, and direct dependencies are specified by the edges (or arcs). Graphical models can be used to query the probability of some variables given that the values of other variables are known.

Numerous contemporary expert systems are represented as graphical models. Expert systems are computer programs that mimic the decision-making ability of a human expert in a specific domain. Consider, for example, diagnosis in a medical domain such as predicting the preterm birth risk of pregnant womenFootnote 14 or the impact of combining medication (see Figure 1.1).Footnote 15 The variables would then include the symptoms, the possible tests that can be carried out, and the diseases that the patient could suffer from. Probabilistic inference then corresponds to computing the answers to questions such as what is the probability that the patient has pneumonia, given a positive X-ray and coughing. Probabilistic inference can reason from causes to effects (here: diseases to symptoms) and from effects to causes (diagnostic reasoning) or, in general, draw conclusions about the probability of variables given that the outcome of other variables is known. Furthermore, one can use the (in)dependencies modeled in the graphical model to infer which tests are relevant in the light of what is already known about the patient. Or in the domain of robotics, machine reasoning is used to determine the optimal sequence of actions to complete a manipulation or manufacturing task. An example is CRAM (Cognitive Robot Abstract Machine), equipping autonomous robots performing everyday manipulation with lightweight reasoning mechanisms that can automatically infer control decisions rather than requiring the decisions to be preprogrammed.Footnote 16

Figure 1.1 A (simple) Bayesian network to reason over the (joint) effects of two different medications that are commonly administered to patients suffering from epigastric pains because of pyrosis.

Logical and probabilistic knowledge can be created by knowledge experts encoding the domain knowledge elicited from domain experts, textbooks, and so on, but it can also be learned from data, thereby connecting the domain of reasoning to machine learning. Machine reasoning is, in contrast to machine learning, considered to be knowledge driven rather than data driven. It is also important to remark that logical and probabilistic inference naturally provides explanations for the answers it produces; machine reasoning is therefore inherently explainable AI.

1.4 Why Machine Learning and Reasoning?

The interest in machine learning and reasoning can be explained from different perspectives. First, the domain of AI has a general interest in developing intelligent systems, and it is this interest that spurred the development of machine learning and reasoning. Second, it is hoped that a better understanding of machine learning and reasoning can provide novel insights into human behavior and intelligence more generally. Third, from a computer science point of view, it is very useful to have machines that learn and reason autonomously as not everything can be explicitly programmed or as the task may require answering questions that are hard to anticipate.

In this chapter, we focus on the third perspective. Our world is rapidly digitizing, and programming machines manually is at best a tedious task and at worst a nearly impossible endeavor. Data analysis requires a lot of laborious effort, as it is nowadays far easier to generate data than it is to interpret it, as reflected by the popular phrase: “We are drowning in data but starving for knowledge.” As a result, machine learning and data mining are by now elementary tools in domains that deal with large amounts of data such as bio- and chem-informatics, medicine, computational linguistics, or prognostics. Increasingly, they are also finding their way into the analysis of data from the social and economic sciences. Machine learning is also very useful for developing complex software that cannot be implemented manually. The mail filter mentioned earlier is a good example of this. It is impossible to write a custom computer program for each user or to write a new program every time a new type of message appears. We thus need computer programs that adapt automatically to their environment or user. Likewise, for complex control systems, such as autonomous cars or industrial machines, machine learning is essential. Whether it is to translate pixels into objects or a route into steering actions, it is not feasible to program all the subtleties that are required to successfully achieve this task. However, it is easy to provide ample examples of how this task can be carried out by gathering data while driving a car, or by annotating parts of the data.

In 2005, for the first time, five teams succeeded in developing cars that could autonomously drive an entire predefined route over dirt roads.Footnote 17 Translating all the measurements gathered from cameras, lasers, and sensors into steering would not have been possible if developers had to write down all computer code explicitly themselves. While there is still significant work ahead to achieve a fully autonomous vehicle that operates safely in all possible environments and conditions, assisted driving and autonomous vehicles in constrained environments are nowadays operated daily thanks to advances in machine learning.

Machine reasoning is increasingly needed to support reasoning in complex domains, especially when the stakes are high, such as in health and robotics. When knowledge about a particular domain is available and can be used to flexibly answer multiple types of questions, it is much easier to infer the answer using a general-purpose reasoning technique than to write programs for every type of question. Machine reasoning thus allows us to reuse the same knowledge for multiple tasks. At the same time, when knowledge is already available, it does not make sense to still try to learn it from data. Consider applying the taxation rules of a particular country: we can directly encode this knowledge, so it does not make sense to try to learn these rules from tax declarations.

1.5 How Do Machine Learning and Reasoning Work?

The examples in the introduction illustrate that the goal of machine learning is to make machines more intelligent, thus allowing them to achieve a higher performance in executing their tasks by learning from experiences. To this end, they typically use input data (e.g., pixels, measurements, or descriptions) and produce an output (e.g., a move, a classification, or a prediction). Translating the input to the output is typically achieved by learning a mathematical function, also referred to as the machine learning model. For the game of checkers, this is a function that connects every possible game situation to a move. For mail filters, this is a function that takes an email and its metadata and outputs the categorization (spam or not). For recommender systems, the function links purchases of customers to other products.

Within the domain of machine learning, we can distinguish different learning problems along two dimensions: (1) the type of function that needs to be learned and (2) the type of feedback or experiences that are available. While machine learning techniques typically cover multiple aspects of these dimensions, no technique covers all possible types of functions and feedback. Different methods exploit and sacrifice different properties or make different assumptions, resulting in a wide variety of machine learning techniques. Mapping the right technique to the right problem is already a challenge in itself.Footnote 18

1.5.1 Type of Function

Before explaining how different types of functions used in machine learning differ, it is useful to first point out what they all have in common. As indicated earlier, machine learning requires the machine learning function, or model, to improve when given feedback, often in the form of examples or experiences. This requires a mechanism that can adapt our model based on the performance of the model output for a new example or experience. If, for instance, the prediction of an algorithm differs from what is observed by the human (e.g., the prediction is a cat, while the picture shows a dog), the predictive model should be corrected. Correcting the model means that we need to be able to compute how we should change the function to better map the input to the output for the available examples, that is, to better fit the available observations. Computing an output such as a prediction from an input is referred to as forward inference, while computing how our function should be changed is referred to as backward inference. All types of functions have in common that a technique exists that allows us to perform backward inference. We can relate this to human intelligence by means of philosopher Søren Kierkegaard’s observation that “Life must be lived forward but can only be understood backwards.”

We will provide more details for three commonly used types of functions: Symbolic functions, Bayesian functions, and Deep functions. For each of these functions, the domain of machine learning studies how to efficiently learn the function (e.g., how much data is required), which classes of functions can be learned tractably (thus in a reasonable time), whether the function can represent the problem domain sufficiently accurately (e.g., a linear function cannot represent an ellipse), and whether the learned function can be interpreted or adheres to certain properties (e.g., feature importance and fairness constraints). We explain these types based on the supervised learning setting that will be introduced later. For now, it suffices to know that our feedback consists of observations that include a (target) label or an expected outcome (e.g., pictures with the label “cat” or “dog”).

1.5.1.1 Deep Functions

With deep functions we refer to neural network architectures which, in their simplest form, are combinations of many small (nonlinear or piecewise linear) functions. We can represent this combination of small functions as a graph where each node is one function that takes as input the output of previous nodes. The nodes are organized in layers, where nodes in one layer use the outputs of the nodes in the previous layer as input and send their outputs to the nodes in the next layer. The term “deep” refers to the use of many consecutive layers. Individually, these small functions cannot accurately represent the desired function. Together, however, they can approximate any continuous function, and the resulting function can fit the data very closely. This is depicted in Figure 1.2, where each simple function can only linearly separate two halves of a flat plane, while the combination of two such functions already provides a more complicated separation.

Figure 1.2 Each simple sigmoid function expresses a linear separation; together they form a more complicated function of two hyperbolas.

One way to think about this architecture is that nodes and layers introduce additional dimensions to look at the data and express chains of continuous transformations. Suppose we have a sheet of paper with two sets of points as depicted in Figure 1.3, and we want to learn a function that separates these two sets of points. We can now lift this piece of paper and introduce further dimensions in which we can rotate, stretch, or twist the piece of paper.Footnote 19 This allows us to represent the data differently, ideally such that points of the same group lie close to each other and far away from the other group, so that they are easier to distinguish (e.g., by a simple straight line). The analogy with a piece of paper does not hold completely when dealing with many layers, but we can intuitively still view it as stretching and twisting this paper until we find a combination of transformations for which the points of each class are close to each other but far apart from the other class. If we find such a set of transformations, we have learned a function that can now be used to classify any point that we would draw on this piece of paper. In Figure 1.3, one can observe that all points close to the (dark gray) circles would be labeled as a circle (like the question mark) and similarly for the (light gray) squares.

Figure 1.3 A geometric interpretation of adding layers and nodes to a neural network.

Computing the outcome of this function from the inputs is called forward inference. To update the parameters that define which functions we combine and how (e.g., the amount of rotation, stretching, or folding of the paper, and which combinations of transformations), we need to perform backward inference to decide in what direction we should slightly alter the parameters based on the observed performance (e.g., a wrong prediction). This algorithm is often a variation of what is called the backpropagation algorithm. It refers to propagating results backward through the functions and adapting the parameters slightly to compensate for errors and reinforce correct results, in order to improve performance on the task. In our example of the classification of squares and circles (Figure 1.3), the observed wrong classification of a point as a square instead of a circle will require us to adapt the parameters of the neural network.
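To make forward and backward inference concrete, the following is a minimal, illustrative sketch (not code from this chapter) of a tiny neural network with one hidden layer of two sigmoid units, trained with backpropagation; the toy dataset, the architecture, and all hyperparameters are assumptions chosen purely for illustration.

```python
# A minimal sketch of forward and backward inference in a tiny neural network.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy dataset: points labelled 1 if they lie in the upper-right quadrant.
X = rng.uniform(-1, 1, size=(200, 2))
y = ((X[:, 0] > 0) & (X[:, 1] > 0)).astype(float).reshape(-1, 1)

# Randomly initialised parameters of the two layers.
W1, b1 = rng.normal(size=(2, 2)), np.zeros((1, 2))
W2, b2 = rng.normal(size=(2, 1)), np.zeros((1, 1))

lr = 1.0
for step in range(2000):
    # Forward inference: compute predictions from the inputs.
    h = sigmoid(X @ W1 + b1)          # hidden layer
    p = sigmoid(h @ W2 + b2)          # output layer (predicted probability)

    # Backward inference (backpropagation): propagate the error back through
    # the layers and nudge each parameter to reduce it.
    grad_out = (p - y) / len(X)                      # error at the output
    grad_W2 = h.T @ grad_out
    grad_b2 = grad_out.sum(axis=0, keepdims=True)
    grad_h = grad_out @ W2.T * h * (1 - h)           # error at the hidden layer
    grad_W1 = X.T @ grad_h
    grad_b1 = grad_h.sum(axis=0, keepdims=True)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("training accuracy:", ((p > 0.5) == y).mean())
```

Each training step performs one forward pass and one backward pass, which is exactly the two directions of inference described above.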

1.5.1.2 Symbolic Functions

Symbolic functions that are used in machine learning are in line with logic-based reasoning. The advantage is that the learned symbolic functions are typically tractable and that rigorous proof techniques can be used to learn and analyze the function. The disadvantage is that they cannot easily cope with uncertainty or fit numerical data. While classical logic is based on deductive inference, machine learning uses inductive inference. For deductive inference, one starts from a set of premises from which conclusions are derived. If the premises are true, then this guarantees that the conclusions are also true. For example, IF we know that all swans are white, and we know there is a swan, THEN we know this swan will also be white. For inductive reasoning, we start from specific observations and derive generic rules. For example, if we see two white swans, then we can derive a rule that all swans are white. Inductive inference does not guarantee, in contrast to classical deductive inference, that all conclusions are true if the premises are true. It is possible that the next swan we observe, in contrast to the two observed earlier and our inductively inferred symbolic rule, is a black swan. This means that inductive inference does not necessarily return universally true rules. Therefore, inductively inferred rules are often combined with statistical interpretations. In our example, the rule that all swans are white would only be true with a certain probability.

Another form of inference that is sometimes used is abductive reasoning. In this case, possible explanations for observations (or experiments) are generated. For example, if we know the following rule: “IF stung by mosquito AND mosquito carries malaria THEN malaria is transferred” and we know that someone has malaria, then there is a possible explanation, namely that the person was stung by a mosquito carrying malaria. There might also be other explanations, for example, that the person received a blood transfusion with infected blood. Thus, abductive inference also does not offer guarantees about the correctness of the conclusion. But we can again associate probabilities with the possible explanations. This form of inference is important when building tests of theories and has been used by systems such as the Robot Scientist to select the most relevant experiments.Footnote 20 The goal of the Robot Scientist is to automate parts of the scientific method, notably the incremental design of a theory and the testing of hypotheses to (dis)prove this theory based on experiments. In the case of the Robot Scientist, an actual robot was built that operates in a microbiology laboratory. The robot starts from known theories about biological pathways for yeast. These known theories were altered on purpose to be incorrect, and the experiment was to verify whether the robot could recover the correct theories by autonomously designing experiments and executing them in practice. When machine learning not only learns from observations but also suggests novel observations and asks for labels, this is called active learning.

1.5.1.3 Bayesian Functions

Some behaviors cannot be captured by logical if-then statements or by fitting a function because they are stochastic (e.g., rolling dice); that is, the output or behavior of the system is uncertain. When learning the behavior of such systems, we need a function that can express and quantify stochasticity (e.g., the probability of each side of a die coming up after a throw is 1/6). This can be expressed by a function using probability distributions. When dealing with multiple distributions that influence each other, one often uses Bayesian networks that model how different variables relate to each other probabilistically (Figure 1.1 shows a Bayesian network). These functions have the additional advantage that they allow us to easily incorporate domain knowledge and allow for insightful models (e.g., which variables influence or have a causal effect on another variable). For this type of function, we also need to perform forward and backward inference. In the forward direction these are conditional probabilities. In the spam example, forward inference entails calculating the probability that a mail spells your name correctly (correct) given that it is a spam email (spam): P(correct | spam). For the backward direction we can use the rule of Bayes – hence the name Bayesian networks – which tells us how to invert the reasoning: P(spam | correct) = P(correct | spam) P(spam) / P(correct). For the example, if we know P(correct | spam), that is, the probability that a spam email spells your name correctly, we can use Bayes’ rule to calculate P(spam | correct), that is, the probability that a new mail with your name spelled correctly is a spam email.
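As a small numeric illustration of this inversion, the sketch below plugs invented probabilities (assumptions, not values from the chapter) into Bayes’ rule for the spam example.

```python
# Bayes-rule inversion for the spam example, with made-up probabilities.
p_spam = 0.4                     # prior probability that a mail is spam
p_correct_given_spam = 0.1       # spam rarely spells your name correctly
p_correct_given_ham = 0.9        # legitimate mail usually does

# Total probability that a mail spells your name correctly.
p_correct = (p_correct_given_spam * p_spam
             + p_correct_given_ham * (1 - p_spam))

# Backward direction: probability that a correctly addressed mail is spam.
p_spam_given_correct = p_correct_given_spam * p_spam / p_correct
print(f"P(spam | correct) = {p_spam_given_correct:.3f}")   # ~0.069
```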

Bayesian functions are most closely related to traditional statistics, where one assumes that the type of distribution from which the data is generated is known (e.g., a Gaussian or normal distribution) and then tries to identify the parameters of the distribution to fit the data. In machine learning, one can also start from the data, assume nothing is known about the distribution, and hence learn the distribution as part of the machine learning. Furthermore, machine learning does not require a generative view of the model – the model does not need to explain everything we observe. It suffices if it generates accurate predictions for our variable(s) of interest. However, in both cases finding this function is achieved by applying the laws of probability. Bayesian functions additionally suffer from limited expressive power: not all interactions between variables can be modeled with probability distributions alone.

1.5.2 Type of Feedback

The second dimension along which machine learning settings can be distinguished is the type of feedback that is available and used. The type of feedback is related to what kind of experience, examples, or observations we have access to. If the observation directly includes the complete feedback we are interested in, we refer to this as supervised learning. For example, supervised learning can take place if we have a set of pictures where each picture is already labeled as either “cat” or “dog,” which is the information we ultimately want to retrieve when examining new unclassified pictures. For the spam example, it means we have a set of emails already classified as spam or not spam, supplemented with the information regarding the correct spelling of our name in these emails. This is also the case when data exists about checkers or chess game situations that are labeled by grandmasters to indicate which next moves are good or bad. A second type of feedback is to learn from (delayed) rewards, also called reinforcement learning. This is the case, for example, when we want to learn which moves are good or bad in a game of checkers by actually playing the game. We only know at the end of a game whether it was won or lost and need to derive from that information which moves throughout the game were good or bad. A third type of feedback concerns the situation when no direct feedback is available, which is referred to as unsupervised learning. For example, when we are seeking to identify good recommendations for movies, no direct labels of good or bad recommendations are available. Instead, we try to find patterns in the observations themselves. In this case, these are observations about which people watch which combinations of movies, based on which we can then identify sound recommendations for the future.

1.6 Supervised Learning

As an example of a supervised learning technique, we discuss how decision trees can be derived from examples. Decision trees are useful for classification problems, which appear in numerous applications. The goal of supervised learning is to learn a function from a dataset with examples that are already categorized (or labeled), such that we can apply this function to predict the class of new, not yet classified examples. A good example concerns the classification of emails as spam or not spam.

Closely related with classification problems are regression problems where we want to predict a numerical value instead of a class. This is, for example, the case when the system is learning how to drive a car, and we want to predict the angle of the steering wheel and the desired speed that the car should maintain.

There is a vast literature on supervised classification and regression tasks, as these are the most studied problems in machine learning. The techniques cover all possible types of functions we have introduced before and combinations thereof. Here we use a simple but popular classification technique that uses decision trees. In this example, we start from a table of examples, where each row is an example and each column is an attribute (or feature) of an example. The class of each example can be found in a special column in that table. Take for example the table in Figure 1.4 containing (simplified) data about comic books. Each example expresses the properties of a comic book series: language, length, genre, and whether it is historical. The goal is to predict whether a given customer would buy an album from a particular series. A decision tree is a tree-shaped structure where each node in the tree represents a decision made based on the value of a particular attribute. The branches emerging from a node represent the possible outcomes based on the values of that attribute. The leaves of this tree represent the predicted classification, in this example buy or not buy. When a new example is given, we traverse the tree following the branch that corresponds to the value that the attribute has in the example. Suppose there exists a series that the customer has not yet bought, with attribute values (NL/FR, Strip, Humor, Historical). When we traverse the tree, we follow the left branch (Strip) for the top node that splits on length. The next node selects based on language and we follow the left branch (NL/FR), ending in the prediction that the customer will buy this new series.

Figure 1.4 Table representing the dataset and the resulting decision tree

The algorithm to learn a decision tree works as follows: we start with a single node and all available examples. Next, we estimate by means of a heuristic which attribute differentiates best between the different classes. A simple heuristic is to choose the attribute for which splitting the examples on its possible values yields groups that are most similar to a split of the examples based on their class values (buy or not buy). Once the attribute is decided, we create a branch and a new node per value of that attribute. The examples are split over the branches according to their value for that attribute. In each new node we check whether it contains (almost) only one class. If this is the case, we stop and make the node a leaf whose class is the majority class. If not, we repeat the procedure on this smaller set of examples.
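For illustration, the sketch below learns a decision tree with scikit-learn on a small, invented table in the spirit of the comic-book example; the rows and attribute values are assumptions, not the actual dataset of Figure 1.4.

```python
# A minimal decision-tree sketch on a hypothetical comic-book-style table.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

data = pd.DataFrame({
    "language":   ["NL/FR", "NL/FR", "EN", "EN", "NL/FR", "EN"],
    "length":     ["Strip", "Strip", "Album", "Album", "Strip", "Strip"],
    "genre":      ["Humor", "Adventure", "Humor", "Adventure", "Humor", "Adventure"],
    "historical": ["Yes", "No", "No", "Yes", "Yes", "No"],
    "buy":        ["buy", "buy", "not buy", "not buy", "buy", "not buy"],
})

# One-hot encode the categorical attributes so the tree can split on them.
X = pd.get_dummies(data.drop(columns="buy"))
y = data["buy"]

tree = DecisionTreeClassifier(criterion="entropy", random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# Classify a new, not-yet-bought series (NL/FR, Strip, Humor, Historical).
new_series = pd.DataFrame([{"language": "NL/FR", "length": "Strip",
                            "genre": "Humor", "historical": "Yes"}])
new_encoded = pd.get_dummies(new_series).reindex(columns=X.columns, fill_value=0)
print(tree.predict(new_encoded))   # with this toy table: ['buy']
```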

An advantage of decision trees is that they are easy and fast to learn and that they often deliver accurate predictions, especially if multiple trees are learned in an ensemble where each tree of the ensemble “votes” for a particular classification outcome. The accuracy of the predictions can be estimated from the data and is crucial for a user to decide whether the model is good enough to be used. Furthermore, decision trees are interpretable by users, which increases the user’s trust in the model. In general, their accuracy increases when more data is available and when the quality of this data increases. Defining what good attributes for an observation are, and being able to measure these, is one of the practical challenges that applies not only to decision trees but to machine learning in general. Also, the heuristic that is used to decide which attribute to use first is central to the success of the method. Ideally, trees are compact by using the most informative attributes. Trying to achieve the most compact or simple tree aligns with the principle of parsimony from the fourteenth-century philosopher William of Ockham. This is known as Ockham’s Razor and states that when multiple, alternative theories all explain a given set of observations, the best theory is the simplest theory, the one that makes the smallest number of assumptions. Empirical research in machine learning has shown that applying this principle often leads to more accurate decision trees that generalize better to unseen data. This principle has also led to concrete mathematical theories such as the minimum description length principle used in machine learning.

1.7 Reinforcement Learning

Learning from rewards is used to decide which actions a system best takes in a given situation. This technique was first developed by Arthur Samuel and has been further refined since. We illustrate it using the Menace program developed by Donald Michie in 1961 to play the game of Tic-Tac-Toe. While we illustrate learning from rewards using a game, these techniques are widely applied in industrial and scientific contexts (e.g., control strategies for elevators, robots, complex industrial processes, autonomous driving). Advances in this field are often showcased in games (e.g., checkers, chess, Go, Stratego) because these are controlled environments where performance is easily and objectively measured. Furthermore, it is a setting where human and machine performance are easily compared.

Tic-Tac-Toe is played on a board with three-by-three squares (see Figure 1.5). There are two players, X and O, who play in turns. Player X can only put an X in an open square, and player O an O. The player who first completes a row, column, or diagonal containing three identical letters wins the game. The task of the learning system is to decide which move to perform in any given situation on the board. The only feedback that is available is whether the game is eventually won or lost, not whether a particular move is good or bad. For other strategy games such as checkers or chess, we can also devise rewards or penalties for winning or losing pieces on the board. Learning from rewards differs significantly from supervised learning for classification and regression problems because for every example, here a move, the category is not known. When learning from rewards, deriving whether an example (thus an individual move) is good or bad is part of the learning problem, as it must first be understood how credit is best assigned. This explains why learning from rewards is more difficult than supervised learning.

Figure 1.5 The Menace program playing Tic-Tac-Toe

Donald Michie developed Menace from the observation that there are only 287 relevant positions for the game of Tic-Tac-Toe if one considers the symmetry of the board. Because Donald Michie did not have access to computers as we have now, he developed the “hardware” himself. This consisted of 287 match boxes, one for each possible situation on the board. To represent each of the nine possible moves of player X – one for each open position on the board – he had many marbles in nine different colors. Each color represents one of the nine possible squares. These marbles were then divided equally over the match boxes, only excluding colors in those boxes representing a board situation where the move is not possible. Menace then decided on the move as follows:

1. Take the match box that represents the current situation on the board.

2. Randomly take a marble from the match box.

3. Play the move that corresponds to the color of the marble.

The Menace program thus represents a function that, for every board situation and possible next move, returns a probability that this move should be played from this position. The probabilities are given by the relative number of marbles of a certain color in the corresponding match box. The learning then happens as follows. If the game is won by X, then for every match box from which a marble was taken, two marbles of that color are added back to that box. If X loses the game, then no marbles are returned. The consequence of these actions is that the probability of winning moves in the relevant boxes (and thus board situations) is increased and that of losing moves is decreased. The more games are played, the better the probabilities represent a good policy to follow to win a game. The rewards from which Menace learns are thus the won and lost games, where lost games are negative rewards or penalties.

When learning from rewards, it is important to find a good balance between exploration and exploitation. Exploration is important to explore the space of all possible strategies thoroughly, while exploitation is responsible for using the gained knowledge to improve performance. In the case of Menace, a stochastic strategy is used where a move is decided by randomly selecting a marble. Initially, every possible move in a particular situation is about equally likely, as there is roughly an equal number of marbles of each (possible) color in each box; this is important for exploration. After a while, the strategy converges to a good one, as the boxes accumulate more marbles of the colors that represent good moves; this is important for exploitation.

Today, learning from rewards no longer uses matchboxes but still follows the same mathematical principles. These principles have been formalized as Markov Decision Processes, and often a so-called Q-function Q(s, a) is learned. Here Q(s, a) represents the reward that is expected when an action a is taken in a state s. In the Tic-Tac-Toe example, the action is the next move a player takes and the state s is the current situation on the board. The Q-function is learned by using the famous Bellman equation, Q(s, a) = R(s, a) + γ max_a′ Q(s′, a′), where R(s, a) is the immediate reward received after taking action a in situation s, γ is a number between 0 and 1 that indicates how future rewards relate to the immediate reward (rewards obtained in the future are less valuable than an equal immediate reward), and s′ is the state that is reached after taking action a in situation s. The Q-function is also used to select the actions. The best action in a situation s is the action a for which Q(s, a) is maximal. To illustrate Q-learning, consider again the Menace program. Each box can be considered as a state, and each color as an action that can be executed in that state. The Q-function then corresponds to the probability of selecting a marble of that color in that box, and the best action is the one with the maximum probability (i.e., the color that occurs most in that box).
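To connect the Bellman equation to working code, here is a minimal, illustrative sketch of tabular Q-learning on an invented five-state “corridor” world (deliberately simpler than Tic-Tac-Toe and not the Menace setup itself); the states, rewards, and hyperparameters are assumptions chosen for clarity.

```python
# Tabular Q-learning on a tiny corridor: states 0..4, reward 1 for reaching state 4.
import random

n_states, actions = 5, ["left", "right"]
Q = {(s, a): 0.0 for s in range(n_states) for a in actions}
alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration rate

def step(s, a):
    s_next = max(0, s - 1) if a == "left" else min(n_states - 1, s + 1)
    reward = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, reward

random.seed(0)
for episode in range(200):
    s = 0
    while s != n_states - 1:
        # Exploration vs. exploitation: mostly greedy, sometimes a random move.
        if random.random() < epsilon:
            a = random.choice(actions)
        else:
            a = max(actions, key=lambda act: Q[(s, act)])
        s_next, r = step(s, a)
        # Update derived from the Bellman equation.
        best_next = max(Q[(s_next, act)] for act in actions)
        Q[(s, a)] += alpha * (r + gamma * best_next - Q[(s, a)])
        s = s_next

print([max(actions, key=lambda act: Q[(s, act)]) for s in range(n_states - 1)])
# Expected greedy policy after training: ['right', 'right', 'right', 'right']
```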

1.8 Unsupervised Learning

For the third type of feedback, we look at learning associations. Here we have no labels or direct feedback available. This technique became popular as part of recommender systems used by online shops such as Amazon and streaming platforms such as Netflix. Such companies sell products such as books or movies and advise their customers by recommending products they might like. These recommendations are often based on previous consumption behavior (e.g., products bought or movies watched). Such associations can be expressed as rules like:

IF X and Y are consumed, THEN Z will also be consumed.

X, Y, and Z represent specific items such as books or movies. For example, X = Pulp Fiction, Y = Kill Bill, and Z = Django Unchained. Such associations are derived from transaction data gathered about customers. From this data, frequently occurring subsets of items are derived. This is expressed as the frequency with which a combination of items occurs together. A collection of items is considered frequent if its frequency is at least x%, that is, if it occurs in at least x% of all purchases. From these frequent collections, the associations are derived. Take for example a collection of items {X, Y, Z} that is frequent since it appears in 15% of all purchases. In that case, we know that the collection {X, Y} is also frequent and has a frequency of at least 15%. Say that the frequency of {X, Y} is 20%; then we can assign some form of probability and build an association rule. The probability that Z will be consumed, given that X and Y have been consumed, is then:

frequency{X, Y, Z} / frequency{X, Y} = 0.15 / 0.20 = 75%

The information on frequent collections and associations allows us to recommend products (e.g., books or movies). If we want to suggest products that fit with products X and Y, we can simply look at all frequent collections {X, Y, Z} and recommend the products Z, ranked by the frequency of the collections {X, Y, Z}.
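The sketch below illustrates this computation on a handful of invented transactions (the titles and the 40% frequency threshold are assumptions for illustration); with this toy data the rule happens to reach the same 75% confidence as in the example above.

```python
# Frequent item sets and one association rule from hypothetical transactions.
from itertools import combinations

transactions = [
    {"Pulp Fiction", "Kill Bill", "Django Unchained"},
    {"Pulp Fiction", "Kill Bill"},
    {"Pulp Fiction", "Kill Bill", "Django Unchained", "Jackie Brown"},
    {"Kill Bill", "Jackie Brown"},
    {"Pulp Fiction", "Kill Bill", "Django Unchained"},
]

def frequency(itemset):
    """Fraction of transactions that contain every item of the set."""
    return sum(itemset <= t for t in transactions) / len(transactions)

# All item sets of size 1-3 occurring in at least 40% of the purchases.
items = sorted(set().union(*transactions))
frequent = [set(c) for size in (1, 2, 3)
            for c in combinations(items, size) if frequency(set(c)) >= 0.4]
print(len(frequent), "frequent item sets")

# Confidence of the rule {Pulp Fiction, Kill Bill} -> Django Unchained.
body = {"Pulp Fiction", "Kill Bill"}
confidence = frequency(body | {"Django Unchained"}) / frequency(body)
print(f"confidence = {confidence:.2f}")   # 0.6 / 0.8 = 0.75
```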

Learning associations is useful in various situations, for instance, when analyzing customer information in a grocery store. When products X and Y are often bought together with product Z, the store owner can strategically position product Z in the store. The owner can put the products close to each other to make it easy for customers to buy this combination or to be reminded of also buying this product. Or the owner can put them far apart and hope the customer picks up some additional products when walking from one end of the store to the other.

Another form of unsupervised learning is clustering. For clustering, one inspects the properties of the given set of items and tries to group them such that similar items are in the same group and dissimilar items are in other groups. Once a set of clusters is found, one can recommend items based on the nearest group. For example, in a database of legal documents, clustering related documents can be used to simplify locating similar or more relevant documents.
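As a brief illustration of clustering, the sketch below groups a few invented document snippets (stand-ins for legal documents) using TF-IDF features and k-means from scikit-learn; the texts and the choice of two clusters are assumptions made for the example.

```python
# Clustering toy "legal" snippets with TF-IDF features and k-means.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "lease agreement between landlord and tenant",
    "tenant obligations under the rental contract",
    "merger agreement between two corporations",
    "shareholder approval of the corporate merger",
]
X = TfidfVectorizer().fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print(labels)   # e.g. [0 0 1 1]: tenancy documents vs. merger documents
```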

1.9 Reasoning

When considering reasoning, we often refer to knowledge as input to the system, as opposed to data for machine learning. Knowledge can be expressed in many ways, but logic and constraints are popular choices. We have already seen how logic can be used to express a function that is learned, but more deliberate, multistep types of inference can be used when considering reasoning. As an example, consider a satisfiability problem, also known as a SAT problem. The goal of a SAT problem is to find, given a set of constraints, a solution that satisfies all these constraints. This type of problem is one of the most fundamental ones in computer science and AI. It is the prototypical hard computational problem, and many other problems can be reduced to it. You also encounter SAT problems daily (e.g., suggesting a route to drive, deciding which packages to pick up and when to deliver them, configuring a car). Say we want to choose a restaurant with a group of people, and we know that Ann prefers Asian or Indian and is Vegan; Bob likes Italian or Asian, and if it is Vegan then he prefers Indian; Carry likes Vegan or Indian but does not like Italian. We also know that Asian food includes Indian. We can express this knowledge using logic constraints:

(Asian ∨ Indian) ∧ Vegan ∧ (Italian ∨ Asian) ∧ (Vegan → Indian) ∧ (Vegan ∨ Indian) ∧ ¬Italian ∧ (Indian → Asian)

Observe that ∨ stands for OR (disjunction), ∧ for AND (conjunction), and ¬ for NOT (negation). Furthermore, A → B stands for IF A THEN B (implication).

When feeding these constraints to a solver, the computer will tell you the solution is to choose Vegan.Footnote 21 Actually, the solution that the solver would find is Vegan, Indian, not Italian, and Asian. It is easy to see that, starting from Vegan, we can derive Indian (from Vegan → Indian), and from Indian we can derive Asian (from Indian → Asian). Furthermore, the conjunction also specifies that not Italian should be true. With these, all elements of the conjunction are satisfied, and thus this provides a solution to the SAT problem.
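The following sketch checks the restaurant constraints by brute force over all truth assignments, which is feasible here because there are only four propositions; a real application would use a dedicated SAT solver. The code is illustrative and not from the chapter.

```python
# Brute-force check of the restaurant constraints over all truth assignments.
from itertools import product

def implies(a, b):
    return (not a) or b

solutions = []
for asian, indian, italian, vegan in product([False, True], repeat=4):
    if ((asian or indian) and vegan                        # Ann
            and (italian or asian) and implies(vegan, indian)   # Bob
            and (vegan or indian) and not italian           # Carry
            and implies(indian, asian)):                    # Indian food is Asian
        solutions.append({"Asian": asian, "Indian": indian,
                          "Italian": italian, "Vegan": vegan})

print(solutions)
# [{'Asian': True, 'Indian': True, 'Italian': False, 'Vegan': True}]
```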

While we presented an example that required pure reasoning, the integration of learning and reasoning is required in practice. For the previous example, this is the case when we also want to learn preferences. Similarly, when choosing a route to drive, we want to consider learned patterns of traffic jams; or when supplying stores, we want to consider learned customer buying patterns.

1.10 Trustworthy AI

Models learned by machine learning techniques are typically evaluated based on their predictive performance (e.g., accuracy, F1-score, AUC, squared error) on a test set – a held-aside portion of the data that was not used for learning the model. A good value on these performance metrics indicates that the learned model can also predict other unseen (i.e., not used for learning) examples accurately. While such an evaluation is crucial, in practice it is not sufficient. We illustrate this with three examples. (1) If a model achieves 99% accuracy, what do we know about the 1% that is not predicted accurately? If our training data is biased, the mistakes might not be distributed equally over our population. A well-known example is facial recognition, where the training data contained fewer examples of people of color, causing more mistakes to be made on this subpopulation.Footnote 22 (2) If groups of examples in our population are not covered by our training data, will the model still predict accurately? If you train a medical prediction model on adults – because consent is easier to obtain – the model cannot be trusted for children because their physiology is different.Footnote 23 More subtly, instead of incorrect predictions, this might lead to bias. If part of the population is not covered, say buildings in poor areas that are not yet digitized, should we then ignore such buildings in policies based on AI models? (3) Does our model conform to a set of given requirements? These can be legal requirements such as the prohibition to drive on the sidewalk, or ethical requirements such as fairness constraints.Footnote 24

These questions are being tackled in the domain of trustworthy AI.Footnote 25 AI researchers have been trying to answer questions about the trustworthiness and interpretability of their models since the early days of AI, especially when systems were deployed in production, as with the expert systems of the 1980s. But the recent explosion of deployed machine learning and reasoning systems, together with the introduction of legislation such as the General Data Protection Regulation (GDPR) and the upcoming AI Act of the European Union, has led to a renewed and much larger interest in all aspects related to trustworthy AI. Unfortunately, it is technically much more challenging to answer these questions, as forward and backward inference alone do not suffice. The field of trustworthy AI encompasses a few different questions that we will now discuss.

1.11 Explainable AI (XAI)

When an AI model, that is, a function, translates input information into an output (e.g., a prediction or recommendation), knowing only the output may not be acceptable for all persons or in all situations. When making a decision based on machine learning output, it is important to understand at least the crucial factors that led to the output. This is important to achieve appropriate trust in the model when these decisions impact humans or, for instance, the yield or efficiency of a production process. This is also reflected in the motivation behind legislation such as the GDPR.Footnote 26 Often the need for explainability is driven by the realization that machine learning and reasoning models are prone to errors or bias. The training data might contain errors or bias that are replicated by the model, the model itself might have limitations in what it can express and induce errors or bias, inaccurate or even incorrect assumptions might have been made when modeling the problem, or there might simply be a programming error. On top of the mere output of a machine learning or reasoning algorithm, we thus need techniques to explain these outputs.

One can approach explaining AI models in two ways: only allowing white box models that can be inspected by looking at the model (e.g., a decision tree), or using and developing mechanisms to inspect black box models (e.g., neural networks). While the former is easier, there is also a trade-off with respect to accuracy.Footnote 27 We thus need to be able to obtain explainability of black box models. However, full interpretability of the internal mechanisms of the algorithms, or all the way down to the sensory inputs, might not be required. We also do not need to explain how our eyes and brain exactly translate light beams into objects and shapes such as a traffic light in order to explain that we stopped because the traffic light is red. Explainability could in these cases focus on generating a global understanding of how outputs follow from particular inputs (e.g., in the most relevant or most prominent cases that occur). For particular cases, though, full explainability or a white box model might be a requirement, for example, when applying legislation to a situation where we need to explain which clauses are used, where, and why.

There have been great advances in explaining black box models. Model-specific explainers work only on a particular type of black box model, such as explainers for neural networks. As these explainers are developed for particular models, the known underlying function can be reverse-engineered to explain model outputs for individual examples. Model-agnostic explainers (e.g., LIMEFootnote 28 and SHAPFootnote 29), on the other hand, can be applied to any black box model and therefore cannot rely on the internal structure of the model. Their broad applicability often comes at the cost of precision: they can only rely on the black box model’s behavior between input and output and, in contrast to the model-specific explainers, cannot inspect the underlying function. Local explainers try to approximate the black box function around a given example and thereby generate so-called “local explanations,” that is, explanations of the behavior of the black box model in the neighborhood of the given example. One possibility is to use feature importance as an explanation, as it indicates which features are most important for the output (e.g., for deciding whether a loan gets approved, the model based its decision for similar clients primarily on the family income and secondarily on the family’s health). Another way to explain decisions is to search for counterfactual examplesFootnote 30 that give us, for example, the most similar example that would have received a different categorization (e.g., what should I minimally change to get my loan approved?). Besides local explanations, one could ideally also provide global explanations that hold for all instances, including those not yet covered by the training data. Global explanations are in general more difficult to obtain.
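To give a flavor of model-agnostic explanation, the sketch below computes a simple permutation feature importance for a black box classifier: it only queries the model between input and output, without inspecting its internals. The loan-like dataset, the feature names, and the model choice are invented assumptions, and this is a generic technique rather than the LIME or SHAP algorithm itself.

```python
# Permutation feature importance: a simple model-agnostic explanation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
income = rng.normal(50, 15, n)          # hypothetical "family income"
health = rng.normal(0, 1, n)            # hypothetical "family health" score
noise = rng.normal(0, 1, n)             # irrelevant feature
X = np.column_stack([income, health, noise])
y = (income + 10 * health > 55).astype(int)   # loan approved or not

model = RandomForestClassifier(random_state=0).fit(X, y)

def permutation_importance(model, X, y, feature):
    """Drop in accuracy when one feature column is shuffled."""
    baseline = model.score(X, y)
    X_shuffled = X.copy()
    X_shuffled[:, feature] = rng.permutation(X_shuffled[:, feature])
    return baseline - model.score(X_shuffled, y)

for i, name in enumerate(["income", "health", "noise"]):
    print(name, round(permutation_importance(model, X, y, i), 3))
# Expect large drops for income and health and a near-zero drop for noise.
```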

1.12 Robustness

Robustness is an assessment of whether our learned function meets the expected specifications. Its scope is broader than explanations in that it also requires certain guarantees to be met. A first aspect of robustness is to verify whether an adversarial example exists. An adversarial example is like a counterfactual example that flips the category, but one that is designed to deceive the model, in that a human would not observe a difference between the normal and the adversarial example (see Figure 1.6). For example, by changing a few pixels in an image – changes that are meaningless to a human observer – the learned function can be convinced to change the predicted category (e.g., a picture that is clearly a stop sign to a human observer but that deceives the model into classifying it as a speed limit sign). A second popular analysis concerns data privacy: does the learned function leak information about individual data examples (e.g., a patient)? A final aspect is that of fairness, sometimes also considered separately from robustness. It is vaguer by nature as it can differ for cultural or generational reasons; in general, it concerns an unjust advantage or disadvantage for one group. Remember the facial recognition example where the algorithm’s goal of optimizing accuracy disadvantages people of color because they are a minority group in the data. Another example of fairness can be found in reinforcement learning, where actions should not block something or somebody. A traffic light that never allows one to pass or an elevator that never stops on the third floor (because in our training data nobody was ever on the third floor) is considered unfair and to be avoided.

Figure 1.6 Adversarial examples for digits.Footnote 31
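As an illustration of how easily such adversarial examples can be constructed, the sketch below implements the widely used fast gradient sign method (FGSM) in PyTorch; the classifier, inputs, and epsilon value are placeholders, and this is a minimal sketch of the general technique rather than the procedure behind Figure 1.6:

    import torch
    import torch.nn.functional as F

    def fgsm_adversarial(model, image, label, epsilon=0.03):
        """Fast gradient sign method: nudge every pixel a tiny step in the
        direction that increases the loss, so that the prediction may flip
        while the image still looks unchanged to a human observer."""
        image = image.clone().detach().requires_grad_(True)
        loss = F.cross_entropy(model(image), label)
        loss.backward()
        perturbed = image + epsilon * image.grad.sign()
        return perturbed.clamp(0, 1).detach()

    # Hypothetical usage, assuming `classifier` is a trained image classifier
    # and (x, y) is a correctly classified image with its label:
    # x_adv = fgsm_adversarial(classifier, x.unsqueeze(0), y.unsqueeze(0))
    # classifier(x_adv).argmax() may now differ from y.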

Robustness thus entails testing strategies to verify whether the AI system does what is expected under stress, when being deceived, and when confronted with anomalous or rare situations. This is also mentioned in the White Paper on Artificial Intelligence: A European approach to excellence and trust.Footnote 32 Offering such guarantees, however, remains the subject of much ongoing research, since proving that the function adheres to certain statements or constraints is in many cases computationally intractable and only possible approximately.

1.13 Conclusions

Machine learning and machine reasoning are domains within the larger field of AI and computer science that are still growing and evolving rapidly. AI studies how one can develop a machine that can learn from observations and what fundamental laws guide this process. There is consensus about the nature of machine learning, in that it can be formalized as the learning of functions. There is also consensus that machine reasoning enables the exploitation of knowledge to infer answers to a wide range of queries. However, for now, there is neither a known set of universal laws that govern all AI and machine learning and reasoning, nor do we understand how machine learning and reasoning can be fully integrated. Therefore, many different approaches and techniques exist that push forward our insights and available technology. Despite the work ahead, there are already many practical learning and reasoning systems and exciting applications that are being deployed and that influence our daily lives.

2 Philosophy of AI A Structured Overview

Vincent C. Müller
2.1 Topic and Method
2.1.1 Artificial Intelligence

The term Artificial Intelligence became popular after the 1956 “Dartmouth Summer Research Project on Artificial Intelligence,” which stated its aims as follows:

The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.Footnote 1

This is the ambitious research program that human intelligence or cognition can be understood or modeled as rule-based computation over symbolic representation, so these models can be tested by running them on different (artificial) computational hardware. If successful, the computers running those models would display artificial intelligence. Artificial intelligence and cognitive science are two sides of the same coin. This program is usually called Classical AI:Footnote 2

  (a) AI is a research program to create computer-based agents that have intelligence.

The terms Strong AI and Weak AI, as introduced by John Searle, stand in the same tradition. Strong AI refers to the idea that “the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.” Weak AI means that AI merely simulates mental states. In this weak sense, “the principal value of the computer in the study of the mind is that it gives us a very powerful tool.”Footnote 3

On the other hand, the term “AI” is often used in computer science in a sense that I would like to call Technical AI:

  (b) AI is a set of computer-science methods for perception, modelling, planning, and action (search, logic programming, probabilistic reasoning, expert systems, optimization, control engineering, neuromorphic engineering, machine learning (ML), etc.).Footnote 4

There is also a minority in AI that calls for the discipline to focus on the ambitions of (a), while maintaining current methodology under (b), usually under the name of Artificial General Intelligence (AGI).Footnote 5

The existence of these two traditions (classical and technical) occasionally leads to suggestions that we should not use the term “AI,” because it implies strong claims that stem from the research program (a) but have very little to do with the actual work under (b). Perhaps we should rather talk about “ML” or “decision-support machines,” or just “automation” (as the 1973 Lighthill Report suggested).Footnote 6 In the following, we will clarify the notion of “intelligence,” and it will emerge that there is a reasonably coherent research program of AI that unifies the two traditions: the creation of intelligent behavior through computing machines.

These two traditions now require a footnote: both were largely developed under the notion of classical AI, so what has changed with the move to ML? Machine learning is a traditional computational (connectionist) method based on neural networks that does not use symbolic representations.Footnote 7 Since ca. 2015, with the advent of massive computing power and massive data for deep neural networks, the performance of ML systems in areas such as translation, text production, speech recognition, games, visual recognition, and autonomous driving has improved dramatically, so that it is superior to humans in some cases. Machine learning is now the standard method in AI. What does this change mean for the future of the discipline? The honest answer is: we do not know yet. Just like any method, ML has its limits, but these limits are less restrictive than was thought for many years because the systems exhibit nonlinear improvement: with more data they may suddenly improve significantly. Its weaknesses (e.g., overfitting, weak causal reasoning, reliability, relevance, and black-box opacity) may be quite close to those of human rational choice, especially if “predictive processing” is the correct theory of the human mind (Sections 2.4 and 2.6).

2.1.2 Philosophy of AI and Philosophy

One way to understand the philosophy of AI is that it mainly deals with three Kantian questions: What is AI? What can AI do? What should AI be? One major part of the philosophy of AI is the ethics of AI but we will not discuss this field here, because there is a separate entry on “Ethics of AI” in the present CUP handbook.Footnote 8

Traditionally, the philosophy of AI deals with a few selected points where philosophers have found something to say about AI, for example, about the thesis that cognition is computation, or that computers can have meaningful symbols.Footnote 9 Reviewing these points and the relevant authors (Turing, Wiener, Dreyfus, Dennett, Searle, …) would result in a fragmented discussion that never achieves a picture of the overall project. It would be like writing an old-style human history through a few “heroes.” Also, in this perspective, the philosophy of AI is separated from its cousin, the philosophy of cognitive science, which in turn is closely connected to the philosophy of mind.Footnote 10

In this chapter we use a different approach: We look at components of an intelligent system, as they present themselves in philosophy, cognitive science, and AI. One way to consider such components is that there are relatively simple animals that can do relatively simple things, and then we can move “up” to more complicated animals that can do those simple things, and more. As a schematic example, a fly will continue to bump into the glass many times to get to the light; a cobra will understand that there is an obstacle here and try to avoid it; a cat might remember that there was an obstacle there the last time and take another path right away; a chimpanzee might realize that the glass can be broken with a stone; a human might find the key and unlock the glass door … or else take the window to get out. To engage in the philosophy of AI properly, we will thus need a wide range of philosophy: philosophy of mind, epistemology, language, value, culture, society, …

Furthermore, in our approach, the philosophy of AI is not just “applied philosophy”: it is not that we have a solution ready in the philosopher’s toolbox and “apply” it to solve issues in AI. The philosophical understanding itself changes when looking at the case of AI: it becomes less anthropocentric, less focused on our own human case. A deeper look at concepts must be normatively guided by the function these concepts serve, and that function can be understood better when we consider both the natural cases and the case of actual and possible AI. This chapter is thus also a “proof of concept” for doing philosophy through the conceptual analysis of AI: I call this AI philosophy.

I thus propose to turn the question from its head onto its feet, as Marx would have said: If we want to understand AI, we have to understand ourselves; and if we want to understand ourselves, we have to understand AI!

2.2 Intelligence
2.2.1 The Turing Test

“I propose to consider the question ‘Can Machines Think?’” Alan Turing wrote at the outset of his paper in the leading philosophical journal Mind.Footnote 11 This was 1950, Turing was one of the founding fathers of computers, and many readers of the paper would not even have heard of such machines, since there were only half a dozen universal computers in the world (Z3, Z4, ENIAC, SSEM, Harvard Mark III, and Manchester Mark I).Footnote 12 Turing moves swiftly to declare that searching for a definition of “thinking” would be futile and proposes to replace his initial question by the question whether a machine could successfully play an “imitation game.” This game has come to be known as the “Turing Test”: A human interrogator is connected to another human and a machine via “teleprinting,” and if the interrogator cannot tell the machine from the human by holding a conversation, then we shall say the machine is “thinking.” At the end of the paper he returns to the issue of whether machines can think and says: “I believe that at the end of the century the use of words and general educated opinion will have altered so much that one will be able to speak of machines thinking without expecting to be contradicted.”Footnote 13 So, Turing proposes to replace our everyday term of “thinking” by an operationally defined term, a term for which we can test with some procedure that has a measurable outcome.

Turing’s proposal to replace the definition of thinking by an operational definition that relies exclusively on behavior fits with the intellectual climate of the time where behaviorism became a dominant force: In psychology, behaviorism is a methodological proposal that psychology should become a proper scientific discipline by relying on testable observation and experiment, rather than on subjective introspection. Given that the mind of others is a “black box,” psychology should become the science of stimulus and behavioral response, of an input–output relation. Early analytic philosophy led to reductionist behaviorism; so if the meaning of a term is its “verification conditions,” then a mental term such as “pain” just means the person is disposed to behaving a certain way.

Is the Turing Test via observable behavior a useful definition of intelligence? Can it “replace” our talk of intelligence? It is clear that there will be intelligent beings that will not pass this test, for example, humans or animals that cannot type. So I think it is fair to say that Turing very likely only intended the passing of the test as being sufficient for having intelligence and not as necessary. So, if a system passes that test, does it have to be intelligent? This depends on whether you think intelligence is just intelligent behavior, or whether you think for the attribution of intelligence we also need to look at internal structure.

2.2.2 What Is Intelligence?

Intuitively, intelligence is an ability that underlies intelligent action. Which action is intelligent depends on the goals that are pursued, and on the success in achieving them – think of the animal cases mentioned earlier. Success will depend not only on the agent but also on the conditions in which it operates, so a system with fewer options how to achieve a goal (e.g., find food) is less intelligent. In this vein, a classical definition is: “Intelligence measures an agent’s ability to achieve goals in a wide range of environments.”Footnote 14 Here intelligence is the ability to flexibly pursue goals, where flexibility is explained with the help of different environments. This notion of intelligence from AI is an instrumental and normative notion of intelligence, in the tradition of classical decision theory, which says that a rational agent should always try to maximize expected utility (see Section 2.6).Footnote 15

If AI philosophy understands intelligence as relative to an environment, then to achieve more intelligence, one can change the agent or change the environment. Humans have done both on a huge scale through what is known as “culture”: not only have we generated a sophisticated learning system for humans (to change the agent), we have also physically shaped the world such that we can pursue our goals in it; for example, to travel, we have generated roads, cars with steering wheels, maps, road signs, digital route planning, and AI systems. We now do the same for AI systems: we build learning systems for them, and we change the environment for them (cars with computer interfaces, GPS, etc.). By changing the environment, we will also change our cognition and our lives, perhaps in ways that turn out to be to our detriment.

In Sections 2.4–2.9, we will look at the main components of an intelligent system; but before that, we discuss the mechanism used in AI: computation.

2.3 Computation
2.3.1 The Notion of Computation

The machines on which AI systems run are “computers,” so it will be important for our task to find out what a computer is and what it can do, in principle. A related question is whether human intelligence is wholly or partially due to computation – if it is wholly due to computation, as classical AI had assumed, then it appears possible to recreate this computation on an artificial computing device.

In order to understand what a computer is, it is useful to remind ourselves of the history of computing machines. I say “machines” because before ca. 1945, the word “computer” was a term for a human with a certain profession, for someone who does computations. These computations, for example, the multiplication of two large numbers, are done through a mechanical step-by-step procedure that will lead to a result once carried out completely. Such procedures are called “algorithms.” In 1936, in response to Hilbert’s challenge of the “Entscheidungsproblem,” Alan Turing suggested that the notion of “computing something” could be explained by “what a certain type of machine can do” (just like he proposed to operationalize the notion of intelligence in the “Turing Test”). Turing sketched what such a machine would look like, with an infinitely long tape for memory and a head that can read symbols from and write symbols to that tape. The states on the tape are always specific discrete states, such that each state is of a type from a finite list (symbols, numbers, …), so, for example, it either is the letter “V” or the letter “C,” not a bit of each. In other words, the machine is “digital” (not analog).Footnote 16 Then there is one crucial addition: in the “universal” version of the machine, one can change what the computer does through further input. In other words, the machine is programmable to perform a certain algorithm, and it stores that program in its memory.Footnote 17 Such a computer is a universal computer, that is, it can compute any algorithm. It should be mentioned that wider notions of computation have been suggested, for example, analog computing and hypercomputing.Footnote 18
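Purely as an editorial illustration of the formal model just described (none of this code appears in the original text), a Turing machine can be written down in a few lines: a tape, a head position, a current state, and a transition table; because the table is itself just data, the same machine can be reprogrammed to run a different algorithm:

    from collections import defaultdict

    def run_turing_machine(program, tape, state="start", max_steps=1000):
        """program maps (state, symbol) -> (new_symbol, move, new_state);
        because the program is ordinary data, the same machine can be
        reprogrammed to compute something else."""
        cells = defaultdict(lambda: "_", enumerate(tape))   # "_" is the blank symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            new_symbol, move, state = program[(state, cells[head])]
            cells[head] = new_symbol
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # A tiny program that inverts a string of bits and halts at the first blank:
    flip = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }
    print(run_turing_machine(flip, "1010110"))               # prints 0101001_

The “flip” program is a toy example; the point is only that changing the program data changes what the machine computes, without changing the machine.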

There is also the question whether computation is a real property of physical systems, or whether it is rather a useful way of describing these. Searle has said: “The electrical state transitions are intrinsic to the machine, but the computation is in the eye of the beholder.”Footnote 19 If we take an anti-realist account of computation, then the situation changes radically.

The exact same computation can be performed on different physical computers, and it can have a different semantics. There are thus three levels of description that are particularly relevant for a given computer: (a) The physical level of the actual “realization” of the computer, (b) the syntactic level of the algorithm computed, and (c) the symbolic level of content, of what is computed.

Physically, a computing machine can be built out of anything and use any kind of property of the physical world (cogs and wheels, relays, DNA, quantum states, etc.). This can be seen as using a physical system to encode a formal system.Footnote 20 Actually, all universal computers have been made with large sets of switches. A switch has two states (open/closed), so the resulting computing machines work on two states (on/off, 0/1), they are binary – this is a design decision. Binary switches can easily be combined to form “logic gates” that operate on input in the form of the logical connectives in Boolean logic (which is also two-valued): NOT, AND, OR, and so on. If such switches are in a state that can be syntactically understood as 1010110, then semantically, this could (on current ASCII/ANSI conventions) represent the letter “V,” the number “86,” a shade of light gray, a shade of green, and so on.
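To see how the same syntactic state takes on different meanings only through a convention of interpretation, consider this small Python check (an editorial illustration):

    bits = "1010110"
    value = int(bits, 2)           # read the seven switches as a binary number
    print(value)                   # 86
    print(chr(value))              # "V" under the ASCII convention
    print((value, value, value))   # read three times, the same value could encode an RGB gray level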

2.3.2 Computationalism

As we have seen, the notion that computation is the cause of intelligence in natural systems, for example, humans, and can be used to model and reproduce this intelligence is a basic assumption of classical AI. This view is often coupled with (and motivated by) the view that human mental states are functional states and that these functional states are those of a computer: “machine functionalism.” This thesis is often assumed as a matter of course in the cognitive sciences and neuroscience, but it has also been the subject of significant criticism in recent decades.Footnote 21 The main sources for this view are an enthusiasm for the universal technology of digital computation and early neuroscientific evidence indicating that human neurons (in the brain and body) are also somewhat binary, that is, either they send a signal to other neurons, they “fire,” or they don’t. Some authors defend the Physical Symbol System Hypothesis, which is computationalism plus the contention that only computers can be intelligent.Footnote 22

2.4 Perception and Action
2.4.1 Passive Perception

You may be surprised to find that the heading of this section combines perception and action in one. We can learn from AI and cognitive science that the main function of perception is to allow action; indeed, that perception is a kind of action. The traditional understanding of perception in philosophy is passive perception, watching ourselves watching the world in what Dan Dennett has called the Cartesian Theatre: it is as though I had a little human sitting inside my head, listening to the outside world through my ears, and watching the outside world through my eyes.Footnote 23 That notion is absurd, particularly because it would require there to be yet another little human sitting in the head of that little human. And yet, a good deal of the discussion of human perception in the philosophical literature really does treat perception as though it were something that simply happens to me, inside.

For example, there is the 2D–3D problem in vision, the problem of how I can generate the visual experience of a 3D world through a 2D sensing system (the retina, a 2D sheet that covers our eyeballs from the inside). There must, it seems, be a way of processing the visual information in the retina, the optic nerve, and the visual processing centers of the brain that generates this 3D experience. Not really.Footnote 24

2.4.2 Active Perception

Actually, the 3D impression is generated by an interaction between me and the world (in the case of vision it involves movement of my eyes and my body). It is better to think of perception along the lines of the sense of touch: touching is something that I do, so that I can find out the softness of an object, the texture of its surface, its temperature, its weight, its flexibility, and so on. I do this by acting and then perceiving the change in sensory input. This is called a perception-action loop: I do something, that changes the world, and that changes the perception that I have.

It will be useful to stress that this occurs with perception of my own body as well. I only know that I have a hand because my visual sensation of the hand, the proprioception, and the sense of touch are in agreement. When that is not the case it is fairly easy to make me feel that a rubber hand is my own hand – this is known as the “rubber hand illusion.” Also, if a prosthetic hand is suitably connected to the nervous system of a human, then the perception-action-loop can be closed again, and the human will feel this as their own hand.

2.4.3 Predictive Processing and Embodiment

This view of perception has recently led to a theory of the “predictive brain”: the brain does not passively wait for input, but is always on, actively participating in the action-perception loop. It generates predictions of what the sensory input will be, given my actions, and then it matches the predictions with the actual sensory input. The difference between the two is something that we try to minimize; this is captured by the “free energy principle.”Footnote 25

In this tradition, the perception of a natural agent or AI system is something that is intimately connected to the physical interaction of the body of the agent with the environment; perception is thus a component of embodied cognition. A useful slogan in this context is “4E cognition,” which says that cognition is embodied; it is embedded in an environment with other agents; it is enactive rather than passive; and it is extended, that is, not just inside the head.Footnote 26 One aspect that is closely connected to 4E cognition is the question whether cognition in humans is fundamentally representational, and whether cognition in AI has to be representational (see Section 2.5).

Embodied cognition is sometimes presented as an empirical thesis about actual cognition (especially in humans) or as a thesis on the suitable design of AI systems, and sometimes as an analysis of what cognition is and has to be. In the latter understanding, non-embodied AI would necessarily miss certain features of cognition.Footnote 27

2.5 Meaning and Representation
2.5.1 The Chinese Room Argument

As we saw earlier, classical AI was founded on the assumption that the appropriately programmed computer really is a mind; this is what John Searle called strong AI. In his famous paper “Minds, Brains and Programs,” Searle presented the thought experiment of the “Chinese Room.”Footnote 28 The Chinese Room is a computer, constructed as follows: there is a closed room in which John Searle sits with a large book that provides him with a computer program, with algorithms, on how to process the input and provide output. Unknown to him, the input that he gets is Chinese writing, and the output that he provides consists of sensible answers or comments about that linguistic input. This output, so the assumption goes, is indistinguishable from the output of a competent Chinese speaker. And yet Searle in the room understands no Chinese and will learn no Chinese from the input that he gets. Therefore, Searle concludes, computation is not sufficient for understanding. There can be no strong AI.

In the course of his discussion of the Chinese room argument, Searle looks at several replies: The systems reply accepts that Searle has shown that no amount of simple manipulation of the person in the room will enable that person to understand Chinese, but objects that perhaps symbol manipulation will enable the wider system, of which the person is a component, to understand Chinese. So perhaps there is a part-whole fallacy here? This reply raises the question, why one might think that the whole system has properties that the algorithmic processor does not have.

One way to answer this challenge, and change the system, is the robot reply, which grants that the whole system, as described, will not understand Chinese because it is missing something that Chinese speakers have, namely a causal connection between the words and the world. So, we would need to add sensors and actuators to this computer, that would take care of the necessary causal connection. Searle responds to this suggestion by saying input from sensors would be “just more Chinese” to Searle in the room; it would not provide any further understanding, in fact Searle would have no idea that the input is from a sensor.Footnote 29

2.5.2 Reconstruction

I think it is best to view the core of the Chinese room argument as an extension of Searle’s remark:

No one would suppose that we could produce milk and sugar by running a computer simulation of the formal sequences in lactation and photosynthesis, but where the mind is concerned many people are willing to believe in such a miracle.Footnote 30

Accordingly, the argument that remains can be reconstructed as:

  1. If a system does only syntactical manipulation, it will not acquire meaning.

  2. A computer does only syntactical manipulation.

  3. A computer will not acquire meaning.

In Searle’s terminology, a computer has only syntax and no semantics; the symbols in a computer lack the intentionality (directedness) that human language use has. He summarizes his position at the end of the paper:

“Could a machine think?” The answer is, obviously, yes. We are precisely such machines. […] But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? […] the answer is no.Footnote 31

2.5.3 Computing, Syntax, and Causal Powers

Reconstructing the argument in this way, the question is whether the premises are true. Several people have argued that premise 2 is false, because one can only understand what a computer does by taking it to respond to the program as meaningful.Footnote 32 I happen to think that this is a mistake: the computer does not follow these rules; it is just constructed in such a way that it acts according to these rules, if its states are suitably interpreted by an observer.Footnote 33 Having said that, any actual computer, any physical realization of an abstract algorithm processor, does have causal powers; it does more than syntactic manipulation. For example, it may be able to turn the lights on or off.

The Chinese room argument has moved the attention in the philosophy of language away from convention and logic toward the conditions for a speaker to mean what they say (speakers’ meaning), or to mean anything at all (have intentionality); in particular, it left us with the discussion on the role of representation in cognition, and the role of computation over representations.Footnote 34

2.6 Rational Choice
2.6.1 Normative Decision Theory: MEU

A rational agent will perceive the environment, find out which options for action exist, and then take the best decision. This is what decision theory is about. It is a normative theory on how a rational agent should act, given the knowledge they have – not a descriptive theory of how rational agents will actually act.

So how should a rational agent decide which is the best possible action? They evaluate the possible outcomes of each choice and then select the one that is best, meaning the one that has the highest subjective utility, that is, utility as seen by the particular agent. It should be noted that rational choice in this sense is not necessarily egoistic: it could well be that the agent puts a high utility on the happiness of someone else, and thus rationally chooses a course of action that maximizes overall utility through the happiness of that other person. In actual situations, the agent typically does not know what the outcomes of particular choices will be, so they act under uncertainty. To overcome this problem, the rational agent selects the action with maximum expected utility (MEU), where the expected utility of a choice is the sum, over its possible outcomes, of the utility of each outcome multiplied by the probability of that outcome occurring. This thought can be explained with the expected utility of certain gambles or lotteries. In more complicated decision cases, the rationality of a certain choice depends on the subsequent choices of other agents. These kinds of cases are often described with the help of “games” played with other agents. In such games it is often a successful strategy to cooperate with other agents in order to maximize subjective utility.
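As a minimal, editorial illustration of the MEU rule (the actions, probabilities, and utilities below are invented), expected utility sums probability-weighted utilities over an action’s possible outcomes, and the rational agent picks the action with the highest sum:

    def expected_utility(action, outcomes):
        """outcomes[action] is a list of (probability, utility) pairs."""
        return sum(p * u for p, u in outcomes[action])

    # A made-up lottery: keep 50 for sure, or gamble on an uncertain prize.
    outcomes = {
        "keep":   [(1.0, 50)],
        "gamble": [(0.6, 100), (0.4, -20)],
    }
    best = max(outcomes, key=lambda a: expected_utility(a, outcomes))
    print(best, expected_utility(best, outcomes))   # gamble 52.0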

In artificial intelligence it is common to perceive of AI agents as rational agents in the sense described. For example, Stuart Russell says: “In short, a rational agent acts so as to maximise expected utility. It’s hard to over-state the importance of this conclusion. In many ways, artificial intelligence has been mainly about working out the details of how to build rational machines.”Footnote 35

2.6.2 Resources and Rational Agency

It is not the case that a rational agent will always choose the perfect option. The main reason is that such an agent must deal with the fact that their resources are limited, in particular data storage and time (most choices are time-critical). The question is thus not only what the best choice is, but how many resources I should spend on optimizing my choice; when should I stop optimizing and start acting? This phenomenon is called bounded rationality or bounded optimality, and in cognitive science it calls for resource-rational analysis.Footnote 36 Furthermore, there is usually no ready-made set of discrete options from which to choose, and a rational agent needs to reflect on the goals to pursue (see Section 2.9).

The point that agents (natural or artificial) will have to deal with limited resources when making choices has tremendous importance for the understanding of cognition. It is often not fully appreciated in philosophy: even the literature about the limits of rational choice seems to assume that there is something “wrong” with using heuristics that are biased, with being “nudged” by the environment, or with using the environment for “extended” or “situated” cognition.Footnote 37 But it would be irrational to aim for perfect cognitive procedures, not to mention cognitive procedures that would not be influenced by the environment.

2.6.3 The Frame Problem(s)

The original frame problem for classical AI was how to update a belief system after an action, without stating all the things that have not changed; this requires a logic where conclusions can change if a premise is added – a non-monotonic logic.Footnote 38 Beyond this more technical problem, there is a philosophical problem of updating beliefs after action, popularized by Dennett, which asks how to find out what is relevant, how wide the frame should be cast for relevance. As Shanahan says “relevance is holistic, open-ended, and context-sensitive” but logical inference is not.Footnote 39

There is a very general version of the frame problem, expressed by Jerry Fodor, who says that the frame problem really is “Hamlet’s problem: when to stop thinking.” He continues by saying that “modular cognitive processing is ipso facto irrational […] by attending to less than all the evidence that is relevant and available.”Footnote 40 Fodor sets the challenge that in order to perform a logical inference, especially an abduction, one needs to have decided what is relevant. However, he seems to underestimate that one cannot attend to all that is relevant and available (rationality is bounded). It is currently unclear whether the frame problem can be formulated without dubious assumptions about rationality. Similar concerns apply to the claims that Gödel has shown deep limitations of AI systems.Footnote 41 Overall, there may be more to intelligence than instrumental rationality.

2.6.4 Creativity

Choices that involve creativity are often invoked as something special, not merely mechanical, and thus inaccessible to a mere machine. The notion of “creation” has a significant impact on our societal practice, particularly when that creation is protected by intellectual property rights, and AI systems have created or cocreated music, paintings, and text. It is not clear that there is a notion of creativity that would provide an argument against machine creativity. Such a notion would have to combine two aspects that seem to be in tension: on the one hand, creativity seems to imply causation that includes acquiring knowledge and techniques (think of J. S. Bach composing a new cantata); on the other hand, creativity is supposed to be a non-caused, non-predictable spark of insight. It appears unclear whether such a notion of creativity can, or indeed should, be formulated.Footnote 42 Perhaps a plausible account is that creativity involves moving between different spaces of relevance, as in the frame problem.

2.7 Free Will and Creativity
2.7.1 Determinism, Compatibilism

The problem that usually goes under the heading of “free will” is how physical beings like humans or AI systems can have something like free will. The traditional division for possible positions in the space of free will can be put in terms of a decision tree. The first choice is whether determinism is true, that is, the thesis that all events are caused. The second choice is whether incompatibilism is true, that is, the thesis that if determinism is true, then there is no free will.

The position known as hard determinism says that determinism is indeed true, and if determinism is true then there is no such thing as free will – this is the conclusion that most of its opponents try to avoid. The position known as libertarianism (not the political view) agrees that incompatibilism is true, but adds that determinism is not, so we are free. The position known as compatibilism says that determinism and free will are compatible and thus it may well be that determinism is true and humans have free will (and it usually adds that this is actually the case).

This results in a little matrix of positions:

                   Incompatibilism      Compatibilism
Determinism        Hard Determinism     Optimistic/Pessimistic Compatibilism
Non-Determinism    Libertarianism       [Not a popular option]
2.7.2 Compatibilism and Responsibility in AI

As a first approximation, when I say I did something freely, it means that it was up to me, that I was in control. That notion of control can be cashed out by saying I could have done otherwise than I did; specifically, I could have done otherwise if I had decided otherwise. To this we could add that I would have decided otherwise if I had had other preferences or knowledge (e.g., I would not have eaten those meatballs if I had had a preference against eating pork, and if I had known that they contained pork). Such a notion of freedom thus involves an epistemic condition and a control condition.

So, I act freely if I do as I choose, according to the preferences that I have (my subjective utility). But why do I have these preferences? As Aristotle already knew, they are not under my voluntary control: I could not just decide to have other preferences and then have them. However, as Harry Frankfurt has pointed out, I can have second-order preferences or desires, that is, I can prefer to have other preferences than the ones I actually have (I could want not to have a preference for those meatballs, for example). The notion that I can overrule my preferences with rational thought is what Frankfurt calls the will, and it is his condition for being a person. As a first approximation one can thus say: to act freely is to act as I choose, to choose as I will, and to will as I rationally decide to prefer.Footnote 43

The upshot of this debate is that the function of a notion of free will for agency in AI or humans is to allow personal responsibility, not to determine causation. The real question is: what are the conditions under which an agent is responsible for their actions and deserves to be praised or blamed for them? This is independent of freedom from causal determination; that kind of freedom we do not get, and we do not need.Footnote 44

There is a further debate between “optimists” and “pessimists” whether humans actually do fulfil those conditions (in particular whether they can truly cause their preferences) and can thus properly be said to be responsible for their actions and deserve praise or blame – and accordingly whether reward or punishment should have mainly forward-looking reasons.Footnote 45 In the AI case, an absence of responsibility has relevance for their status as moral agents, for the existence of “responsibility gaps,” and for what kinds of decisions we should leave to systems that cannot be held responsible.Footnote 46

2.8 Consciousness
2.8.1 Awareness and Phenomenal Consciousness

As a first approximation, it is useful to distinguish two types of consciousness: awareness and phenomenal consciousness. Awareness is the notion that a system has cognitive states on a base level (e.g., it senses heat) and, on a meta level, states in which it is aware of the states on the base level. This awareness, or access, involves the ability to remember and use the cognitive states on the base level. This is the notion of “conscious” that is opposed to “unconscious” or “subconscious,” and it appears feasible for a multi-layered AI system.

Awareness is often, but not necessarily, connected to a specific way that the cognitive state at the base level feels to the subject – this is what philosophers call phenomenal consciousness, or how things seem to me (Greek phaínetai). This notion of consciousness is probably best explained with the help of two classical philosophical thought experiments: the bat, and the color scientist.

If you and I go out to have the same ice cream, then I can still not know what the ice cream tastes like to you, and I would not know that even if I knew everything about the ice cream, you, your brain, and your taste buds. Somehow, what it is like for you is something epistemically inaccessible to me, I can never know it, even if I know everything about the physical world. In the same way, I can never know what it is like to be a bat.Footnote 47

A similar point about what we cannot know in principle is made by Frank Jackson in the article “What Mary didn’t know.”Footnote 48 In his thought experiment, Mary is supposed to be a person who has never seen anything with color in her life, and yet she is a perfect color scientist, she knows everything there is to know about color. One day, she gets out of her black and white environment and sees color for the first time. It appears that she learns something new at that point.

The argument that is suggested here seems to support a mental-physical dualism of substances, or at least of properties: I can know all the physics, and I cannot know all the phenomenal experience; therefore, phenomenal experience is not part of physics. If dualism is true, then it may appear that we cannot hope to generate phenomenal consciousness with the right physical technology, such as AI. In the form of substance dualism, as Descartes and much of religious thought had assumed, dualism is now unpopular, since most philosophers assume physicalism, the view that “everything is physical.”

Various arguments against the reduction of mental to physical properties have been brought out, so it is probably fair to say that property dualism has a substantial following. This is often combined with substance monism in some version of “supervenience of the mental on the physical,” that is, the thesis that two entities with the same physical properties must have the same mental properties. Some philosophers have challenged this relation between property dualism and the possibility of artificial consciousness. David Chalmers has argued that “the physical structure of the world – the exact distribution of particles, fields, and forces in spacetime – is logically consistent with the absence of consciousness, so the presence of consciousness is a further fact about our world.” Despite this remark, he supports computationalism: “… strong artificial intelligence is true: there is a class of programs such that any implementation of a program in that class is conscious.”Footnote 49

What matters for the function of consciousness in AI or natural agents is not the discussion about dualisms, but rather why phenomenal consciousness in humans is the way it is, how one could tell whether a system is conscious, and whether there could be a human who is physically just like me, but without consciousness (a “philosophical zombie”).Footnote 50

2.8.2 The Self

Personal identity in humans is mainly relevant because it is a condition for allocating responsibility (see Section 2.7): In order to allocate blame or praise, there has to be a sense in which I am the same person as the one who performed the action in question. We have a sense that there is a life in the past that is mine, and only mine – how this is possible is known as the “persistence question.” The standard criteria for me being the same person as that little boy in the photograph are my memory of being that boy, and the continuity of my body over time. Humans tend to think that memory or conscious experience, or mental content are the criteria for personal identity, which is why we think we can imagine surviving our death, or living in a different body.Footnote 51

So, what is a “part” of that persistent self? Philosophical fantasies and neurological raritiesFootnote 52 aside, there is normally no doubt about what is “part of me” and what is not. I continuously work on maintaining that personal identity by checking that the various senses are in agreement: for example, I try to reach for the door handle, I see my hand touching the handle, I can feel it … and then I can see the door opening and feel my hand going forward. This is very different from a computer: the components of the standard von Neumann architecture (input system, storage, random-access memory, processor, output system) can be in the same box or miles apart; they can even be split into more components (e.g., some off-board processing of intensive tasks) or stored in spaces such as the “cloud” that are not defined through physical location. And that is only the hardware; the software faces similar issues, so a persistent and delineated self is not an easy task for an AI system. It is not clear that there is a function for a self in AI, which would have repercussions for attributing moral agency and even patiency.

2.9 Normativity

Let us return briefly to the issues of rational choice and responsibility. Stuart Russell said that “AI has adopted the standard model: we build optimising machines, we feed objectives into them, and off they go.”Footnote 53 On that understanding, AI is a tool, and we need to provide the objectives or goals for it. Artificial intelligence has only instrumental intelligence about how to reach given goals. However, general intelligence also involves a metacognitive reflection on which goals are relevant to my action now (food or shelter?) and a reflection on which goals one should pursue.Footnote 54 One of the open questions is whether a nonliving system can have “real goals” in the sense required for choice and responsibility, for example, goals that have subjective value to the system, and that the system recognizes as important after reflection. Without such reflection on goals, AI systems would not be moral agents and there could be no “machine ethics” that deserves the name. Similar considerations apply to other forms of normative reflection, for example, in aesthetics and politics. This discussion in AI philosophy seems to show that there is a function for normative reflection in humans or AI as an elementary part of the cognitive system.

3 Ethics of AI Toward a “Design for Values” Approach

Stefan Buijsman , Michael Klenk , and Jeroen van den Hoven
3.1 Introduction

Artificial intelligence can (help) make decisions and can steer actions of (autonomous) agents. Now that it gets better and better at performing these tasks, in large part due to breakthroughs in deep learning (see Chapter 1 of this Handbook), there is an increasing adoption of the technology in society. AI is used to support fraud detection, credit risk assessments, education, healthcare diagnostics, recruitment, autonomous driving, and much more. Actions and decisions in these areas have a high impact on individuals, and therefore AI becomes more and more impactful every day. Fraud detection supported by AI has already led to a national scandal in the Netherlands, where widespread discrimination (partly by an AI system) led to the fall of the government.Footnote 1 Similarly, healthcare insurance companies using AI to estimate the severity of people’s illness seriously discriminated against black patients. A correlation between race and healthcare spending in the data caused the AI system to give lower risk scores to black patients, leading to lower reimbursements for black patients even when their condition was worse.Footnote 2 The use of AI systems to conduct first-round interviews in recruitment has led to more opacity in the process, harming job seekers’ autonomy.Footnote 3 Self-driving cars can be hard to keep under meaningful human control,Footnote 4 leading to situations where the driver cannot effectively intervene and even situations where nobody may be accountable for accidents.Footnote 5, Footnote 6 In all of these cases, AI is part of a socio-technical system where the new technologies interact with social elements (operators, affected persons, managers, and more). As we will see, ethical challenges emerge both at the level of technology and at the level of the new socio-technical systems. This wide range of ethical challenges associated with the adoption of AI is discussed further in Section 3.2.

At the same time, many of these issues are already well known. They come up in the context of AI because it gets integrated into high-impact processes, but the processes were in many cases already present without AI. For instance, discrimination has been studied extensively, as have complementary notions of justice and fairness. Autonomy, control, and responsibility have likewise received extensive philosophical attention. We also should not forget about the long tradition of normative ethical theories, such as virtue ethics, deontology, and consequentialism, which have all reflected on what makes an action the right one to take. AI, and the attention it gets, puts a new spotlight on moral issues, some of which are genuinely novel and have not been encountered by humanity before, and some of which are new instances of perennial, familiar problems. We discuss the main normative ethical accounts that may apply to AI in Section 3.3, along with their applicability to the ethical challenges raised earlier.

As we argue, the general ethical theories of the past are helpful, but at the same time they often lack the specificity needed to tackle the issues raised by new technologies. Instead of applying highly abstract traditional ethical theories, such as Aristotle’s account of virtue, Mill’s principle of utility, or Kant’s Categorical Imperative, straightforwardly to particular AI issues, it is often more helpful to utilize mid-level normative ethical theories, which are less abstract, more testable, and which focus on technology, interactions between people, organizations, and institutions. Examples of mid-level ethical theories are Rawls’ theory of justice,Footnote 7 Pettit’s account of freedom in terms of non-domination,Footnote 8 or Klenk’s account of manipulation,Footnote 9 which could be construed as broadly Kantian; Amartya Sen and Martha Nussbaum’s capability approach,Footnote 10 which can be construed as broadly Aristotelian; and Posner’s economic theory of law,Footnote 11 which is broadly utilitarian. These theories already address a specific set of moral questions in their social, psychological, or economic context. They also point to the empirical research that needs to be done in order to apply the theory sensibly. A meticulous understanding of the field to which ethical theory is being applied is essential and part of (applied) ethics itself. We need to know what the properties of artificially intelligent agents are and how they differ from human agents; we need to establish what the meaning and scope of a notion such as “personal data” is, and what the morally relevant properties of virtual reality are. These are all examples of preparing the ground conceptually before we can start to apply normative ethical considerations.

We then need to ensure that normative ethical theories, and the considerations to which they give rise, are recognized and incorporated in technology design. This is where design approaches to ethics come in (Value-sensitive design,Footnote 12 Design for Values,Footnote 13 and others). Ethics needs to be present when and where it can make a difference, and in a form that increases the chances of making a difference. We discuss these approaches in Section 3.4, along with the way in which they relate to the ethical theories from Section 3.3. These new methods are needed to realize the responsible development and use of artificial intelligence, and they require close cooperation between philosophy and other disciplines.

3.2 Prominent Ethical Challenges

Artificial intelligence differs from other technologies in at least two ways. First, AI systems can have a greater degree of agency than other technologies.Footnote 14 AI systems can, in principle, make decisions on their own and act in a dynamic fashion, responding to the environment they find themselves in. Whether they can truly act and make decisions is a matter of dispute, but what we can say in any case is that they can initiate courses of events that would not have occurred without them. A self-driving car is thus very different from a typical car, even though both are technological artifacts. While a conventional car can automatically perform certain actions (e.g., prevent the brakes from locking when the car has to stop abruptly), such systems lack the more advanced agency that a self-driving car has when it takes us from A to B without further instructions from the driver.

Second, AI systems have a higher degree of epistemic opacity than other technical systems.Footnote 15 While most people may not understand how a car engine works, there are engineers who can explain exactly why the engine behaves the way it does. They are also able to provide clear explanations of why an engine fails under certain conditions and can to a great extent anticipate these situations. In the case of AI systems – and in particular for deep learning systems – we do not know why the systems give us these individual outputs rather than other ones.Footnote 16, Footnote 17 Computer scientists do understand how these systems work generally speaking and can explain general features of their behavior such as why convolutional neural networks are well suited for computer vision tasks, whereas recurrent neural networks are better for natural language processing. However, for individual outputs of a specific AI system, we do not have explanations available as to why the AI generates this specific output (e.g., why it classifies someone as a fraudster, or rejects a job candidate). Likewise, it is difficult to anticipate the output of AI systems on new inputs,Footnote 18 which is exacerbated by the fact that small changes to the input of a system can have big effects on the output.Footnote 19

These two features of AI systems make it difficult to develop, deploy, and use them responsibly. They have more agency than other technologies, which exacerbates the challenge, though we should be clear that AI systems do not have moral agency (and, for example, developments of artificial moral agents are still far from achieving this goalFootnote 20), and thus should not be anthropomorphized and cannot bear responsibility for the results of their outputs.Footnote 21 In addition, even their developers struggle to anticipate (due to the opacity) what an AI system will output and why. As a result, familiar ethical problems that arise out of irresponsible or misaligned action are repeated and exacerbated by the speed, scale, and opacity that come with AI systems. This makes it difficult to work with them responsibly in the wider socio-technical system in which AI is embedded, and it also complicates efforts to ensure that AI systems realize ethical values,Footnote 22 as we cannot easily verify whether their behavior is aligned with these values (also known as the alignment problemFootnote 23). It is a pressing issue to find ways to embed these values despite the difficulties that AI systems present us with.

This brings us to the ethical challenges that we face when developing and using AI systems. There have already been a number of attempts to systematize these in the literature. Mittelstadt et al.Footnote 24 group them into epistemic concerns (inconclusive evidence, inscrutable evidence and misguided evidence) and normative concerns (unfair outcomes and transformative effects) in addition to issues of traceability/responsibility. Floridi et al.Footnote 25 use categories from bioethics to group AI ethics principles into five categories. There are principles of beneficence (promoting well-being and sustainability), nonmaleficence (encompassing privacy and security), autonomy, justice, and explicability. The inclusion of explicability as an ethical principle is contested,Footnote 26 but is not unusual in such overviews. For example, Kazim and KoshiyamaFootnote 27 use the headings human well-being, safety, privacy, transparency, fairness, and accountability, which again include opacity as an ethical challenge. Huang et al.,Footnote 28 in an even more extensive overview, again include it as an ethical challenge at the societal level (together with, for example, fairness and controllability), as opposed to challenges at the individual (autonomy, privacy, and safety) and environmental (sustainability) level. In addition to these, there are myriad ethics guidelines and principles from organizations and states, such as the statement of the European Group on Ethics (European Group on Ethics in Science and New Technologies, “Artificial Intelligence, Robotics and ‘Autonomous’ Systems”) and EU High-Level Expert Group’s guidelines that mention human oversight, technical robustness and safety, privacy, transparency, diversity and fairness, societal, and environmental well-being and accountability. Recent work suggests that all these guidelines do converge on similar terminology (transparency, justice and fairness, non-maleficence, responsibility, and privacy) on a higher level but that at the same time there are very different interpretations of these terms once you look at the details.Footnote 29

Given these different interpretations, it helps to look in a little more detail at the different ethical challenges posed by AI. Such an examination will show that, while overviews are certainly helpful starting points, they can also obscure the relevance of socio-technical systems to, and context-specificity of, the ethical challenges that AI systems can raise. Consider, first of all, the case of generative natural language processing of which ChatGPT is a recent and famous example. Algorithms such as ChatGPT can generate text based on prompts, such as to compose an email, generate ideas for marketing slogans, or even summarize research papers.Footnote 30 Along with many (potential) benefits, such systems also raise ethical questions because of the content that they generate.

There are prominent issues of bias, as the text that such algorithms generate is often discriminatory.Footnote 31 Privacy can be a challenge, as these algorithms can also remember personal information that they have seen as part of the training data and – at least under certain conditions – can as a result output social security numbers, bank details, and other personal information.Footnote 32 Sustainability is also an issue, as ChatGPT and other Large Language Models require massive amounts of energy to be trained.Footnote 33 But in addition to all of these ethical challenges that are naturally derived from the overviews there are more specific issues. ChatGPT and other generative algorithms may produce outputs that heavily draw on the work of specific individuals without giving credit to them, raising questions of plagiarism.Footnote 34 The possibility to use such algorithms to help write essays or formulate answers to exam questions has also been raised, as ChatGPT already performs reasonably well on a range of university exams.Footnote 35, Footnote 36 One may also wonder how such algorithms end up being used in corporate settings, and whether this will replace part of the writing staff that we have. Issues about the future of workFootnote 37 are thus quickly connected to the rapidly improving language models. Finally, large language models can produce highly personalized influence at a massive scale and their outputs can be used to mediate communication between people (augmented many-to-many communicationFootnote 38); they raise a peculiar risk of manipulation at scale. The ethical issues surrounding manipulation are certainly related to issues of autonomy. For example, manipulation may be of ethical relevance insofar as it negatively impacts people’s autonomy and well-being.Footnote 39 At the same time, manipulation does not necessarily impact autonomy, but instead raises ethical issues all on its own; issues that may well be aggravated in their scope and importance by the use of large language models.Footnote 40, Footnote 41 This illustrates our main point in this section, namely that general frameworks offer a good start, but that they are insufficient as comprehensive accounts of the ethical issues of AI.

A second and very different example is that of credit scoring algorithms that help to decide whether someone qualifies for a bank loan. A recent review shows that the more complex deep learning systems are more accurate at this task than simpler statistical models,Footnote 42 so we can expect AI to be used more and more by banks for credit scoring. While this may lead to a larger number of loans being granted, because the risk per loan is lower (as a result of more accurate risk assessments), there are of course also a number of ethical considerations to take into account that stem from the function of distributing finance to individuals. Starting off again with bias, there is a good chance of unfairness in the distribution of loans. AI systems may offer proportionally fewer loans to minoritiesFootnote 43 and are often also less accurate for these groups.Footnote 44 This can be a case of discrimination, and a range of statistical fairness metricsFootnote 45 has been developed to capture this. This particular case brings with it different challenges, as fairness measures rely on access to group membership (e.g., race or gender) in order to work, raising privacy issues.Footnote 46 Optimizing for fairness can also drastically reduce the accuracy of an AI system, leading to conflicts with its reliability.Footnote 47 From a more socio-technical perspective, there are questions about how bank personnel will interact with these models and rely on them, raising questions of meaningful human control, responsibility, and trust in these systems. The decisions made can also have serious impacts on decision subjects, requiring close attention to their contestabilityFootnote 48 and institutional mechanisms to correct mistakes.
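To make the notion of a statistical fairness metric more concrete, the sketch below computes two commonly used group metrics for a hypothetical set of loan decisions: the demographic parity difference (the gap in approval rates between two groups) and the equal opportunity difference (the gap in approval rates among applicants who would in fact repay). The data, the group coding, and the use of NumPy are illustrative assumptions on our part, not taken from the review cited above.

```python
# Minimal sketch of two statistical fairness metrics for a binary credit decision.
# All data below is hypothetical and only meant to illustrate the calculations.
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Difference in approval rates between group 1 and group 0."""
    rate_1 = y_pred[group == 1].mean()
    rate_0 = y_pred[group == 0].mean()
    return rate_1 - rate_0

def equal_opportunity_difference(y_true, y_pred, group):
    """Difference in approval rates among genuinely creditworthy applicants (true positive rates)."""
    tpr_1 = y_pred[(group == 1) & (y_true == 1)].mean()
    tpr_0 = y_pred[(group == 0) & (y_true == 1)].mean()
    return tpr_1 - tpr_0

# Hypothetical outcomes: 1 = creditworthy / approved, 0 = not creditworthy / rejected.
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 0, 1, 1, 1, 0, 0])
group  = np.array([0, 0, 0, 0, 0, 1, 1, 1, 1, 1])  # e.g., a protected attribute

print(demographic_parity_difference(y_pred, group))
print(equal_opportunity_difference(y_true, y_pred, group))
```

Note that both functions take the group attribute as input, which mirrors the point made above: measuring fairness presupposes access to exactly the kind of sensitive information that privacy considerations would have us minimize.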

Third, and lastly, we can consider an AI system that the government uses to detect fraud among social benefits applications. Anomaly detection is an important subfield of artificial intelligence.Footnote 49 Along with other AI techniques, it can be used to more accurately find deviant cases. Yeung describes how New Public Management in the Public Sector is being replaced by what she calls New Public Analytics.Footnote 50 Such decisions by government agencies have a major impact on potentially very vulnerable parts of the population, and so come with a host of ethical challenges. There is, again, bias that might arise in the decision-making where a system may disproportionately (and unjustifiably) classify individuals from one group as fraudsters – as actually happened in the Dutch childcare allowance affair.Footnote 51 Decisions about biases here are likely to be made differently than in the bank case, because we consider individuals to have a right to social benefits if they need them, whereas there is no such right to a bank loan. Some other challenges, such as those to privacy and reliability, are similar, though again different choices will likely be made due to the different decisions resulting from the socio-technical system. At the same time, new challenges arise around the legitimacy of the decision being made. As the distribution of social benefits is a decision that hinges on political power, it is subject to the acceptability of how that power is exercised. In an extreme case, as with the social benefits affair, mistakes here can lead to the resignation of the government.Footnote 52 Standards of justice and transparency, like other standards such as those of contestability/algorithmic recourse,Footnote 53 are thus different depending on the context.

What we hope to show with these three examples is that the different classifications of ethical challenges and taxonomies of moral values in the literature are certainly valid. They show up throughout the different applications of AI systems and to some extent they present overarching problems that may have solutions that apply across domains. We already saw this for bias across the different cases. Another example comes from innovations in synthetic data, which present general solutions to the trade-off between privacy and (statistical) fairness by generating datasets with the attributes needed to test for fairness, but for fake people.Footnote 54 However, even when the solution is domain-general, the task of determining when such a synthetic dataset is relevantly similar to the real world is a highly context-specific issue. It needs to capture the relevant patterns in the world. For social benefits, this includes correlations of gender, nationality, and race with one’s job situation and job application behavior, whereas for a bank, patterns related to people’s financial position and payment behavior are crucial. This means that synthetic datasets cannot easily be reused, and care must be taken to include the context. Even then, recent criticisms have raised doubts about whether synthetic data fully preserve privacy,Footnote 55 suggesting they may not be the innovative solution that we hope for. Overviews are therefore helpful to remind ourselves of commonly occurring ethical challenges, but they should not be taken as definitive lists, nor should they tempt us into easily transferring answers to ethical questions from one domain to another.
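As a toy illustration of why “relevantly similar” is context-specific, the sketch below (assuming pandas; the column names and values are hypothetical) contrasts a small real dataset with a naive synthetic one obtained by shuffling each column independently: the column-wise distributions are preserved exactly, but the joint pattern between group membership and employment status, which a fairness audit would need, is in expectation destroyed.

```python
# Minimal sketch contrasting naive synthetic data (independently shuffled columns)
# with the need to preserve context-relevant correlations. Column names are hypothetical.
import pandas as pd

real = pd.DataFrame({
    "gender":   ["f", "f", "m", "m", "f", "m"],
    "employed": [0,   0,   1,   1,   0,   1],   # correlated with gender in this toy data
})

# Naive approach: permute each column independently -> marginals match exactly,
# but the pairing between columns (and thus their correlation) is scrambled.
naive = pd.DataFrame({
    col: real[col].sample(frac=1.0, random_state=seed).to_numpy()
    for seed, col in enumerate(real.columns)
})

# The association a fairness audit would need is preserved in `real`
# but (in expectation) destroyed in `naive`.
print(pd.crosstab(real["gender"], real["employed"]))
print(pd.crosstab(naive["gender"], naive["employed"]))
```

More sophisticated generators try to preserve such joint patterns, but deciding which patterns must be preserved is precisely the context-specific judgment discussed above.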

Finally, we have already pointed to the socio-technical nature of many of the ethical challenges. This deserves a little more discussion, as the overviews of ethical challenges can often seem to focus more narrowly on the technical aspects of AI systems themselves,Footnote 56 leaving out the many people who interact with them and the institutions of which they are a part. Bias can come back into the decision-making if operators can overrule an AI system, and reliability may suffer if operators do not appropriately rely on AI systems.Footnote 57 Values such as safety and security are likewise just as dependent on the people and regulations surrounding AI systems as they are on the technologies themselves. Without appropriate design of these surroundings, we may also end up with a situation where operators lack meaningful human control, leading to gaps in accountability.Footnote 58 The list goes on, as contestability, manipulation, and legitimacy also in many ways depend on the interplay of socio-technical elements rather than on the AI models themselves. Responsible AI thus often involves changes to the socio-technical system in which AI is embedded. In short, even though the field is called “AI ethics,” it should concern itself with more than just the AI models in a strict sense. It is just as much about the people interacting with AI and the institutions and norms in which AI is employed. With that said, the next question is how we can deal with the challenges that AI presents us with.

3.3 Main Ethical Theories and Their Application to AI

The first place to look when one wants to tackle these ethical challenges is the vast philosophical literature centered around the main ethical theories. We have millennia of thinking on the grounds of right and wrong action. Therefore, since the problems that AI raises typically involve familiar ethical values, it would be wise to benefit from these traditions. To start with, the most influential types of normative ethical theories are virtue ethics, deontology, and consequentialism. Normative ethical theories are attempts to formulate and justify general principles – normative laws or principles if you willFootnote 59 – about the grounds of right and wrong (there are, of course, exceptions to this way of seeing normative ethicsFootnote 60). Insofar as the development, deployment, and use of AI systems involve actions just like any other human activity, the use of AI falls under the scope of ethical theories: it can be done in right or wrong fashion, and normative ethical theories are supposed to tell just why what was done was right or wrong. In the context of AI, however, the goal is often not understanding (why is something right or wrong?) but action-guidance: what should be done, in a specific context? Partly for that reason, normative ethical theories may be understood or used as decision aids that should resolve concrete decision problems or imply clear design guidelines. When normative ethical theories are (mis)understood in that way and construed as a decision algorithm (for example, when scholars aim to derive ethical precepts for self-driving cars from normative theories and different takes on the trolley problem), it is unsurprising that the result is disappointment in a rich, real-world setting. At the same time, there is a pressing need to find concrete and justifiable answers to the problems posed by AI, and we can use all the help we can get. We therefore aim not only to highlight the three main ethical theories that the history of ethics has handed down to us, but also to point to the many additional discussions in ethics and philosophy that promise insights that are more readily applicable to practice and that can be integrated into responsible policymaking, professional reflection, and societal debates. Here the ethical traditions in normative ethical theory function as “sensitizing concepts”Footnote 61 that draw our attention to particular aspects of complex situations. Following Thomas Nagel, we could say that these theoretical perspectives each champion one particular type of value at the expense of other types. Some take agent-relative perspectives into account, while others disregard the individual’s perspective and consider the agent’s place in a social network or champion a universalistic perspective.

The focus of virtue ethics is on the character traits of agents. Virtue ethicists seek to answer the question of “how ought one to live” by describing the positive character traits – virtues – that one ought to cultivate. Virtue ethicists have no problem talking about right or wrong actions, however, for the right action is the action that a virtuous person would take. How this is worked out precisely differs. In modern contexts, one can see a difference between, for example, Slote, who holds that one’s actual motivations and dispositions matter and that if those are good/virtuous then the action was good,Footnote 62 and Zagzebski, who thinks that one’s actual motives are irrelevant and that what matters is whether the action matches that of a hypothetical/ideal virtuous person.Footnote 63 In yet another version, Swanton holds that virtues have a target at which they aimFootnote 64: for example, courage aims to handle danger and generosity aims to share resources. An action is good if it contributes to the targets of these virtues (either strictly by being the best action to promote the different targets, or less strictly as one that does so well enough). In each case, virtues or “excellences” are the central point of analysis and the right action in a certain situation depends somehow on how it relates to the relevant virtues, linking what is right to do to what someone is motivated to do.

This is quite different from consequentialism, though consequentialists can also talk about virtues in the sense that a virtue is a disposition that often leads to outcomes that maximize well-being. Virtues can be acknowledged, but are subsumed under the guiding principle that the right action is the one that maximizes (some understanding of) well-being.Footnote 65 There are then differences on whether the consequences that matter are the actual consequences or the consequences that were foreseeable/intended,Footnote 66 whether one focuses on individual acts or rules,Footnote 67 and on what consequences matter (e.g., pleasure, preference satisfaction, or a pluralist notion of well-beingFootnote 68). Whichever version of consequentialism one picks, however, it is consequences that matter and there will be a principle that the right action leads to the best consequences.

The third general view on ethics, namely deontology, looks at norms instead. So, rather than grounding right action in its consequences, what is most important for these theories is whether actions meet moral norms or principles.Footnote 69 A guiding idea here is that we cannot predict the consequences of our actions, but we can make sure that we ourselves act in ways that satisfy the moral laws. There are, again, many different ways in which this core tenet has been developed. Agent-centered theories focus on the obligations and permissions that agents have when performing actions.Footnote 70 There may be an obligation to tell the truth, for example, or an obligation not to kill another human being. Conversely, patient-centered theories look not at the obligations of the agent but at the rights of everyone else.Footnote 71, Footnote 72 There is a right not to be killed that limits the purview of morally permissible actions. Closer to the topic of this chapter, we may also think of, for example, a right to privacy that should be respected unless someone chooses to give up that right in a specific situation.

All three accounts can be used to contribute to AI ethics, though it is important to remember that they are conflicting and thus cannot be used interchangeably (though they can be complementary). A philosophically informed perspective on AI ethics will need to take a stand on how these theories are understood, but here we will merely highlight some of the ways in which they might be applied. First, we can look at the practices and character of the developers and deployers of artificial intelligence through the lens of virtue ethics. What virtues should be instilled in those who develop and use AI? How can the education of engineers contribute to this, to instill core virtues such as awareness of the social context of technology and a commitment to the public goodFootnote 73 and sensitivity to the needs of others? It can also help us to look at the decision procedure that led to the implemented AI system. Was this conducted in a virtuous way? Did a range of stakeholders have a meaningful say in critical design choices, as would be in line with value-sensitive design and participatory design approaches?Footnote 74 While it is typically difficult to determine what a fully virtuous agent would do, and virtue ethics may not help us to guide specific trade-offs that have to be made, looking at the motivations and goals of the people involved in realizing an AI system can nevertheless help.

The same goes for consequentialism. It is important to consider the consequences of developing an AI system, just as it is important for those involved in the operation of the system to consider the consequences of the individual decisions made once the AI is up and running. Important as it is, it is also difficult to anticipate consequences beforehand, and often the more we can still shape the workings of a technology (in the early design stages), the less we know about the impacts it will have.Footnote 75 There are, of course, options to redesign technologies and make changes as the impacts start to emerge, and consequentialism rightly draws our attention to the consequences of using AI. The point we want to make here, rather, is that in practice the overall motto of optimizing the impact of an AI system is often not enough to help steer design during the development phase.

Deontology is no different in this respect. It can help to look at our obligations as well as at the rights of those who are impacted by AI systems, but deontology as it is found in the literature is too coarse-grained to be of practical assistance. On these theories, we often do not know exactly what our moral obligations are, or how to weigh prima facie duties and rights to arrive at what we should do, all things considered. The right to privacy of one person might be overruled by someone else’s right not to be killed, for example, and deontological theories typically do not give the detailed guidance needed to decide to what extent one right may be waived in favor of another. In short, we need to supplement the main ethical theories with more detailed accounts that apply to the more specific concerns raised by emerging technologies.

These are readily available for a wide range of values. When we start with questions of bias and fairness, there is a vast debate on distributive justice, with, for example, Rawls’ Justice as FairnessFootnote 76 as a substantive theory of how benefits and harms should be distributed.Footnote 77 Currently, these philosophical theories are largely disconnected from the fairness debate in the computer science/AI ethics literature,Footnote 78 but there are some first attempts to develop connections between the two.Footnote 79 The same goes for other values, where for example the philosophical work on (scientific) explanation can be used to better understand and perhaps improve the explainability of machine learning systems.Footnote 80, Footnote 81 Philosophical views on responsibility and control have also already been developed in the context of AI, specifically linked to the concept of meaningful human control over autonomous technology.Footnote 82 More attention has also been paid to the ethics of influence, notably the nature and ethics of manipulation, which can inform the design and deployment of AI-mediated influence, such as (hyper-)nudges.Footnote 83, Footnote 84 None of these are general theories of ethics, but the more detailed understanding of important (ethical) values that they provide is nevertheless useful when trying to responsibly design and use AI systems. Even then, however, we need an idea of how we go from the philosophical, conceptual analysis to the design of a specific AI system. For that, the (relatively recent) design approaches to (AI) ethics are crucial. They require input from all the different parts of philosophy mentioned in this section, but add to that a methodology to make these ethical reflections actionable in the design and use of AI.

3.4 Design-Approaches to AI Ethics

In response to these challenges, the ethics of technology has shifted, since the 1980s,Footnote 85 to a constructive approach that integrates ethical aspects already in the design stage of technology. Frameworks such as value-sensitive designFootnote 86 and design for values,Footnote 87 coupled with methods such as participatory design,Footnote 88 have led the way in doing precisely this. Here we will highlight the design for values approach, but note that there are close ties with other design approaches to the ethics of technology and that design for values is not privileged among these. It shares with other frameworks the starting point that technologies are not value neutral, but instead embed or embody particular values.Footnote 89 For example, biases can be (intentionally or unintentionally) replicated in technologies, whether in the design of park benches with middle armrests to make sleeping on them impossible or in biased AI systems. The same holds for other values: the design of an engine strikes a balance between cost-effectiveness and sustainability, and content moderation at a social media platform realizes the values of its decision-makers. The challenge is to ensure that the relevant values are embedded in AI systems and the socio-technical systems of which they are a part. This entails three different challenges: identifying the relevant values, embedding them in systems, and assessing whether these efforts were successful.

When identifying values, it is commonly held to be important to consider the values of all stakeholders, both those directly interacting with the AI system and those indirectly affected by its use.Footnote 90 This requires the active involvement of (representatives of) different stakeholder groups, to elicit the different values that are important to them. At the same time, it comes with a challenge. Design approaches to AI ethics require that the values of a technology’s stakeholders (bottom-up) are weighed against values derived from theoretical and normative frameworks (top-down). Just because people think that, for example, autonomy is valuable does not imply that it is valuable. To go from the empirical work identifying stakeholders’ values to a normative take on technologies requires a justification that will likely make recourse to one of the normative ethical approaches discussed earlier. Engaging stakeholders is thus important, because it often highlights aspects of technologies that one would otherwise miss, but it is not sufficient. The fact that a solution or application would be de facto accepted by stakeholders does not imply that it would therefore also be morally acceptable. Moral acceptability needs to be independently established; a good understanding of the arguments and reasons that all directly and indirectly affected parties bring to the table is a good starting point, but not the end of the story. We should aim at a situation where technology is accepted because it is morally acceptable, and where technologies that are not accepted are rejected because they are not acceptable.

Here the ethical and more broadly philosophical theories touched upon in the previous section can help. They are needed for two reasons: first, to justify and ground the elicited values in a normative framework, the way, for example, accounts of fairness, responsibility, and even normative takes on the value of explainabilityFootnote 91 can justify the relevance of certain values. Here, it also helps to consider the main ethical theories as championing specific values (per Nagel), be they agent-relative, focused on social relations or universalistic. For these sets of values, these theories help to justify their relevance. Second, they help in the follow-up from the identification of values to their implementation. Saying that an AI system should respect autonomy is not enough, as we need to know what that entails for the concrete system at issue.

As different conceptualizations of these values often lead to different designs of technologies, it is necessary both to assess existing conceptions and to develop new ones. This work can be fruitfully linked to the methods of conceptual engineeringFootnote 92 and can often draw on the existing conceptions in extant philosophical accounts. Whether those are used or new conceptions are developed, one needs to make the steps from values to norms, and then from norms to design requirements.Footnote 93 To give a concrete example, one may start from the value of privacy. There are various aspects to privacy, which can be captured in the conceptual engineering step toward norms. Mitigating risks of personal harm, preventing biased decision-making, and protecting people’s freedom to choose are all aspects that emerge from a philosophical analysis of privacyFootnote 94 and can act as norms in the current framework. These norms, in turn, can be linked to specific design requirements. When mitigating risks, one can look at specific techniques such as coarse grainingFootnote 95 or differential privacyFootnote 96 that aim to minimize how identifiable individuals are, thus reducing their risk of personal harm. Likewise, socio-technical measures against mass surveillance can support the norm of protecting people’s freedom to choose, by preventing a situation where their choices are impacted by the knowledge that every action is stored somewhere.
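To illustrate how such norms can be turned into concrete, verifiable design requirements, the sketch below shows two of the techniques just mentioned in their simplest form: coarse graining (reporting age bands instead of exact ages) and the Laplace mechanism commonly used for ε-differential privacy on a count query. The parameter values (band width, ε) and the NumPy-based implementation are illustrative assumptions, not recommendations.

```python
# Minimal sketch of two privacy-oriented design requirements mentioned above:
# coarse graining and the Laplace mechanism behind epsilon-differential privacy.
# Epsilon and the band width are illustrative choices, not recommendations.
import numpy as np

rng = np.random.default_rng(42)

ages = np.array([23, 31, 37, 44, 52, 58, 61, 67])

def coarse_grain(values, width=10):
    """Replace exact values by the lower bound of their band (e.g., 37 -> 30)."""
    return (values // width) * width

def dp_count(values, predicate, epsilon=1.0):
    """Count the values satisfying `predicate`, adding Laplace noise calibrated to
    sensitivity 1 (adding or removing one person changes the count by at most 1)."""
    true_count = int(np.sum(predicate(values)))
    noise = rng.laplace(loc=0.0, scale=1.0 / epsilon)
    return true_count + noise

print(coarse_grain(ages))                 # [20 30 30 40 50 50 60 60]
print(dp_count(ages, lambda v: v >= 50))  # noisy count of people aged 50 or over
```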

For the actual implementation of values, there are a number of additional challenges to consider. Most prominent is the fact that conflicts can occur between different design requirements, more often referred to as value conflicts or trade-offs.Footnote 97 These already came up in passing in the cases discussed in Section 3.2, such as conflicts between accuracy and fairness or between privacy and fairness. If we want to use statistical fairness measures to promote equal treatment of, for example, men and women, then we need datasets labeled with gender, thus reducing privacy. Likewise, it turns out that when an AI system is optimized for conformity with a statistical fairness measure, its accuracy is (greatly) reduced.Footnote 98 Such conflicts can be approached in a number of waysFootnote 99: (1) maximizing the score among alternative solutions to the conflict, assuming that there is a way to rank them; (2) satisficing among alternatives, finding one that is good enough on all the different values; (3) respecifying design requirements to ones that still fit the relevant norms but no longer conflict; and (4) innovating, as with synthetic data and the privacy/fairness conflict, to allow for a way to meet all the original design requirements. All of these are easier said than done, but they highlight different strategies for dealing with the fact that we often have to balance competing prima facie (ethical) requirements on AI systems.
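The contrast between the first two strategies can be made concrete with a small sketch. The candidate models, their scores, the weighting, and the thresholds below are purely hypothetical; the point is only to show that maximizing presupposes a single ranking across values, whereas satisficing only asks whether every value is met to a sufficient degree (and may leave us empty-handed, pushing us toward respecification or innovation).

```python
# Minimal sketch of two strategies for handling a value conflict between accuracy
# and fairness: maximizing a stipulated ranking versus satisficing on thresholds.
# The candidate scores, weights, and thresholds are purely illustrative.

candidates = {
    # name: (accuracy, fairness score, e.g., 1 - demographic parity gap)
    "model_a": (0.92, 0.70),
    "model_b": (0.88, 0.85),
    "model_c": (0.80, 0.95),
}

def maximize(cands, weight_accuracy=0.5):
    """Pick the candidate with the highest weighted sum; presupposes that the two
    values can be traded off on a single scale."""
    def score(pair):
        accuracy, fairness = pair
        return weight_accuracy * accuracy + (1 - weight_accuracy) * fairness
    return max(cands, key=lambda name: score(cands[name]))

def satisfice(cands, min_accuracy=0.85, min_fairness=0.80):
    """Return all candidates that meet every threshold; the list may be empty, in
    which case one has to respecify the requirements or innovate."""
    return [name for name, (accuracy, fairness) in cands.items()
            if accuracy >= min_accuracy and fairness >= min_fairness]

print(maximize(candidates))   # a single "best" trade-off under the chosen weights
print(satisfice(candidates))  # e.g., ['model_b']
```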

Another problem is that recent work has drawn attention to the possibility of changing values. Perceptions of values certainly change over time: people’s interpretation of what it means for a technology to adhere to or embody a value such as sustainability may shift, and people may begin to value things that they did not value before; sustainability is itself a case in point. That means that, even if people’s perceptions of values are correctly identified at the beginning of a design project, they may change, and insofar as people’s perceptions of values matter (see above), the possibility of value change represents another methodological challenge for design for values approaches. Actively designing for this scenario, by including adaptability, flexibility, and robustness,Footnote 100 is thus a good practice. We may not be able to anticipate value changes, just as it is hard to predict more generally the impact of an AI system before it is used, but that is no reason not to try to do everything in our power to realize systems that are as responsible as possible.

Because we cannot predict everything, and because values may change over time, it is also important to assess AI systems once they are in use – and to keep doing so over time. Did the envisaged design requirements indeed manage to realize the identified values? Were values missed during the design phase that now emerge as relevant – the way Uber found out that surge pricing during emergencies is ethically wrong (because it privileges the rich who can then still afford to flee the site of an attack) only after this first happened in 2014?Footnote 101 And are there unintended effects that we failed to predict? All of these questions are important, and first attempts to raise them systematically can be found in the emerging frameworks for ethics-based auditingFootnote 102 as well as in the EU AI Act’s call for continuous monitoring of AI systems. In these cases, too, the translation from values to design requirements can help. Design requirements should be sufficiently concrete to be both implementable and verifiable, specifying for example a degree of privacy in terms of k-anonymity (how many people share the same attributes in an anonymized dataset) or fairness in terms of a statistical measure. These can then guide the assessment afterward, though we have to be mindful that the initial specification of the values may be wrong. Optimizing for the wrong fairness measure can, for example, have serious negative long-term consequences for vulnerable groups,Footnote 103 and these should not be missed due to an exclusive focus on the earlier chosen fairness measure during the assessment.
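As an illustration of what a verifiable design requirement can look like, the sketch below checks whether a (hypothetical) anonymized dataset satisfies k-anonymity for a given set of quasi-identifiers. The column names, the choice of quasi-identifiers, the value k = 3, and the use of pandas are all assumptions made for the sake of the example.

```python
# Minimal sketch of k-anonymity as a verifiable design requirement: every
# combination of quasi-identifier values must be shared by at least k records.
# The quasi-identifiers and k = 3 are illustrative choices.
import pandas as pd

def is_k_anonymous(df, quasi_identifiers, k=3):
    """True if every combination of quasi-identifier values occurs at least k times."""
    group_sizes = df.groupby(quasi_identifiers).size()
    return bool((group_sizes >= k).all())

records = pd.DataFrame({
    "age_band":  ["20-29", "20-29", "20-29", "30-39", "30-39"],
    "postcode":  ["3000",  "3000",  "3000",  "3001",  "3001"],
    "diagnosis": ["a", "b", "a", "c", "a"],  # sensitive attribute, not a quasi-identifier
})

# False: the ("30-39", "3001") group contains only two records, so the requirement fails.
print(is_k_anonymous(records, ["age_band", "postcode"], k=3))
```

Such a check can be run repeatedly after deployment, which is exactly the kind of continuous, requirement-level assessment the auditing frameworks mentioned above call for.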

In all three stages (identification, implementation, and assessment), we should not forget the observations from Section 3.2: we should design more than just the technical AI systems, and what implications values have will differ from context to context. The problem of Uber’s algorithm raising prices whenever demand increases, regardless of the cause of that demand, was ultimately not solved in the AI system, but by adding a control room where a human operator can turn off the algorithm in emergencies. Response times were an issue initially,Footnote 104 but it shows that solutions need not be purely technical. Likewise, an insurance company in New Zealand automated its claims processing and massively improved efficiency while maintaining explainability where it counts, by automatically paying out every claim that the AI approved but sending any potential rejection to humans for a full manual review.Footnote 105 In this case, around 95% of applications are accepted almost instantaneously, while every rejected application still comes with a clear motivation and an easily identifiable person who is accountable should a mistake have been made. A combination that would be hard to achieve using AI alone is instead managed through the design of the wider socio-technical system. Of course, this will not work in every context. Crucial to this case is that the organization knew that fraudulent claims are relatively rare and that the costs of false positives are thus manageable compared to the savings in manpower and evaluation time. In other situations, or in other sectors such as healthcare (imagine automatically giving someone a diagnosis and only manually checking when the AI system indicates that you do not have a certain illness), different designs will be needed.
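The routing pattern in the insurance example can be sketched in a few lines. Everything here (names, the score threshold, the use of a dataclass) is hypothetical and only meant to show the shape of the design choice: the system is allowed to automate only the decisions whose error costs were judged acceptable, while every potential rejection is escalated to an accountable human reviewer.

```python
# Minimal sketch of the socio-technical routing pattern described above: positive
# decisions are automated, potential rejections go to a named human reviewer.
# Function and field names and the threshold are hypothetical.
from dataclasses import dataclass

@dataclass
class Decision:
    claim_id: str
    outcome: str       # "approved" or "needs_human_review"
    handled_by: str    # "system" or, eventually, a named reviewer (accountability)
    motivation: str

def route_claim(claim_id: str, approval_score: float, threshold: float = 0.9) -> Decision:
    if approval_score >= threshold:
        return Decision(claim_id, "approved", "system",
                        "score above auto-approval threshold")
    # Potential rejections are never issued automatically.
    return Decision(claim_id, "needs_human_review", "unassigned",
                    "pending full manual review and motivation")

print(route_claim("C-001", 0.97))
print(route_claim("C-002", 0.42))
```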

To sum up, design approaches to AI ethics focus on the identification of values, the translation of these values into design requirements, and the assessment of technologies in light of those values. This leads to a proactive approach to ethics, ideally engaging the designers of these systems in the ethical deliberation and guiding important choices underlying the resulting systems. It is an approach that aims to fill the oft-noted gap between ethical principles and practical development.Footnote 106 With the increasing adoption of AI, it becomes ever more pressing to fill this gap, and thus to work on the translation from ethical values to design requirements. Principles are not enoughFootnote 107 and ethics should find its way into design. Not only are designs value-laden, as we discussed earlier, but values are also design-consequential. In times when everything is designed, commitment to particular values implies that one is bent on exploring opportunities to realize these values – when and where appropriate – in technology and design that can make a difference. We therefore think that we can only tackle the challenges of AI ethics by combining normative ethical theories, and detailed philosophical accounts of different values, with a design approach.Footnote 108 Such an approach additionally requires a range of interdisciplinary, and often transdisciplinary, collaborations. Philosophy alone cannot solve the problems of AI ethics, but it has an important role to play.

3.5 Conclusion

Artificial intelligence poses a host of ethical challenges. These come from the use of AI systems to take actions and support decision-making and are exacerbated by our limited ability to steer and predict the outputs of AI systems (at least the machine learning kind). AI thus raises familiar problems, of bias, privacy, autonomy, accountability, and more, in a new setting. This can be both a challenge, as we have to find new ways of ensuring the ethical design of decision-making procedures, and an opportunity to create even more responsible (socio-technical) systems. Thanks to the developments of AI we now have fairness metrics that can be used just as easily outside of the AI context, though we have to be careful in light of their limitations (see also Chapter 4 of this Handbook).Footnote 109 Ethics can be made more actionable, but this requires renewed efforts in philosophy as well as strong interdisciplinary collaborations.

Existing philosophical theories, starting with the main ethical theories of virtue ethics, consequentialism and deontology, are a good starting point. They can provide the normative framework needed to determine which values are relevant and what requirements are normatively justified. More detailed accounts, such as those of privacy, responsibility, distributive justice, and explanation, are also needed to take the first step from values that have been identified to conceptualizations of them in terms of norms and policies or business rules. Often, we cannot get started on bridging the gap from values and norms to (concrete) design requirements before we have done the conceptual engineering work that yields a first specification of these values. After that design approaches to AI ethics kick in, helping guide us through the process of identifying values for a specific case, and then specifying them in requirements that can finally be used to assess AI systems and the broader socio-technical system in which they have been embedded.

While we have highlighted these steps here from a philosophical perspective, they require strong interdisciplinary collaborations. Identifying values in practical contexts is best done in collaboration with empirical sciences, determining not only people’s preferences but also potential impacts of AI systems. Formulating design requirements requires a close interaction with the actual designers of these systems (both technical and socio-technical), relating the conceptions of values to technological, legal, and institutional possibilities and innovations. Finally, assessment again relies heavily on an empirical understanding of the actual effects of socio-technical (and AI) systems. To responsibly develop and use AI, we have to be proactive in integrating ethics into the design of these systems.

4 Fairness and Artificial Intelligence

Laurens Naudts and Anton Vedder
4.1 Introduction

Within the increasing corpus of ethics codes regarding the responsible use of AI, the notion of fairness is often heralded as one of the leading principles. Although omnipresent within the AI governance debate, fairness remains an elusive concept. Often left unspecified and undefined, it is typically grouped together with the notion of justice. Following a mapping of AI policy documents commissioned by the Council of Europe, researchers found that the notions of justice and fairness show “the least variation, hence the highest degree of cross-geographical and cross-cultural stability.”Footnote 1 Yet, once we attempt to interpret these notions concretely, we soon find that they are perhaps best referred to as essentially contested concepts: over the years, they have sparked constant debate among scholars and policymakers regarding their appropriate usage and position.Footnote 2 Even when some shared understanding concerning their meaning can be found on an abstract level, people may still disagree on their actual relation and realization. For instance, fairness and justice are often interpreted as demanding some type of equality. Yet equality, too, has been the subject of extensive discussions.

In this chapter, we aim to clear up some of the uncertainties surrounding these three concepts. Our goal, however, is not to put forward an exhaustive overview of the literature, nor to promote a decisive view of what these concepts should entail. Instead, we want to increase scholars’ sensibilities as to the role these concepts can perform in the debate on AI and the (normative) considerations that come with that role. Taking one particular interpretation of fairness as our point of departure (fairness as nonarbitrariness), we first investigate the distinction and relationship between procedural and substantive conceptions of fairness (Section 4.2). We build upon this distinction to further analyze the relationship between fairness, justice, and equality (Section 4.3). We start with an exploration of Rawls’ conception of justice as fairness, a theoretical framework that is both procedural and substantively egalitarian in nature. This analysis forms a stepping stone for the discussion of two distinct approaches toward justice and fairness. In particular, Rawls’ outcome-oriented or distributive approach is critiqued from a relational perspective. In parallel, throughout both sections, we pay attention to the challenges these conceptions may face in light of technological innovations. In the final step, we consider the limitations of techno-solutionism and attempts to formalize fairness by design in particular (Section 4.4), before concluding (Section 4.5).

4.2 Conceptions of Fairness: Procedural and Substantive

In our digital society, public and private actors increasingly rely on AI systems for the purposes of knowledge creation and application. In this function, data-driven technologies guide, streamline, and/or automate a host of decision-making processes. Given their ubiquity, these systems actively co-mediate people’s living environment. Unsurprisingly, then, these systems are expected to operate in correspondence with people’s sense of social justice, which we understand here as their views on how a society should be structured, including the treatment, as well as the social and economic affordances, that citizens are owed.

Regarding the rules and normative concepts used to reflect upon the ideal structuring of society, a distinction can generally be made between procedural notions or rules and substantive ones. Though this distinction may be confusing and is equally subject to debate, substantive notions and rules directly refer to a particular political or normative goal or outcome that a judgment or decision should effectuate.Footnote 3 Conversely, procedural concepts and rules describe how judgments and decisions in society should be made rather than prescribing what those judgments and decisions should ultimately be. Procedural notions thus appear normatively empty: they simply call for certain procedural constraints on making a policy, judgment, or decision, such as the consistency or the impartial application of a rule. In the following sections, we elaborate on the position fairness typically holds in these discussions. First, we discuss fairness understood as a purely procedural constraint (Section 4.2.1), and second, how perceptions of fairness are often informed by a particular substantive, normative outlook (Section 4.2.2). Finally, we illustrate how procedural constraints that are often claimed to be neutral nonetheless tend to reflect a specific normative position as well (Section 4.2.3).

4.2.1 Fairness as a Procedural Constraint

Fairness can be viewed as a property or set of properties of processes, that is, particular standards that a decision-making procedure or structure should meet.Footnote 4 Suppose a government and a company want to explore the virtues of automation: the government wants to streamline the distribution of welfare benefits, and the company seeks the same for its hiring process. Understood as a procedural value, fairness should teach us something about the conditions under which (a) the initial welfare or hiring policy was decided upon and (b) that policy is translated and applied to individuals by means of an automated procedure. A common approach to fairness in this regard is to view it as a demand for nonarbitrariness: a procedure is unfair when it arbitrarily favors or advantages one person, group, or situation over others, or arbitrarily favors the claims of some over those of others.Footnote 5 In their analysis of AI-driven decision-making procedures, Creel and Hellman evaluate three different, yet overlapping, understandings that could be given to the notion of arbitrariness, which we will also use here as a springboard for our discussion.Footnote 6

First, one could argue that a decision is arbitrary when it is unpredictable. Under this view, AI-driven procedures would be fair only when their outcome is reasonably foreseeable and predictable for decision subjects. Yet, even if a hiring or welfare algorithm were rendered explicable and its outcomes reasonably foreseeable, would we still call it fair when its reasoning process placed underrepresented and marginalized communities at a disproportionate disadvantage?

Second, the arbitrariness of a process may lie in the fact that it was “unconstrained by ex-ante rules.”Footnote 7 An automated system should not have the capacity to set aside the predefined rules it was designed to operate under. Likewise, government case workers or HR personnel acting as a human in the loop should not use their discretionary power to discard automated decisions to favor unemployed family members. Instead, they should maintain impartiality. Once a given ruleset has been put in place, it creates the legitimate expectation among individuals that those rules will be consistently applied. Without consistency, the system would also become unpredictable. Yet, when seen in isolation, most AI-driven applications operate on some predefined ruleset or instructions.Footnote 8 Even in the case of neural networks, unless some form of randomization is involved, there is some method to their madness. In fact, one of AI’s boons is its ability to streamline the application of decision-making procedures uniformly and consistently. However, the same observation would apply: would we consider decisions fair when they are applied in a consistent, rule-bound, and reproducible manner, even when they place certain people or groups at a disproportionate social or economic disadvantage?

Finally, one could argue that arbitrariness is synonymous with irrationality.Footnote 9 Fairness as rationality partly corresponds to the principle of formal equal treatment found within the law.Footnote 10 It requires decision-makers to provide a rational and reasonable justification or motivation for the decisions they make. Historically, the principle of equal treatment was applied as a similar procedural and institutional benchmark toward good governance: whenever a policy, decision, or action created a distinction between (groups of) people or situations, that differentiation had to be reasonably justified. Without such justification, a differentiating measure was seen as being in violation of the procedural postulate that “like situations should be treated alike.”Footnote 11 This precept could be read as the instruction to apply rules consistently and predictably. However, where a differentiating measure is concerned, the like-cases axiom is often used to question not only the application of a rule but also that rule’s content: did the decision-maker consider the differences between individuals, groups, or situations that were relevant or pertinent?Footnote 12 Yet, this conception might be too easily satisfied by AI-driven decisions. Indeed, is it not often the entire purpose of AI-driven analytics to find relevant points of distinction that can guide a decision? As observed by Wachter: “Since data science mainly focuses on correlation and not causation […] it can seemingly make any data point or attribute appear relevant.”Footnote 13 However, those correlations can generate significant exclusionary harm: they can make the difference between a person’s eligibility or disqualification for a welfare benefit or job position. Moreover, due to the scale and uniformity at which AI can be rolled out, such decisions do not affect single individuals but large groups of people. Perhaps, then, we should also be guided by the disadvantage a system will likely produce and not only by whether the differences relied upon to guide a procedure appear rational or nonarbitrary.Footnote 14

Through our analysis of the notion of nonarbitrariness, a series of standards have been identified that could affect the fairness of a given decision-making procedure. In particular, fairness can refer to the need to motivate or justify a particular policy, rule, or decision, and to ensure the predictable and consistent application of a rule, that is, without partiality and favoritism. In principle, those standards can also be imposed on the rules governing the decision-making process itself. For example, when a law is designed or agreed upon, it should be informed by a plurality of voices rather than be the expression of a dominant majority. In other words, it should not arbitrarily exclude certain groups from having their say regarding a particular policy, judgment, or decision. Likewise, it was shown how those standards could also be rephrased as being an expression of the procedural axiom that “like cases ought to be treated alike.” Given this definition, we might also understand why fairness is linked to other institutional safeguards, such as transparency, participation, and contestability. These procedural mechanisms enable citizens to gauge whether or not a given procedure was followed in a correct and consistent fashion and whether the justification provided took stock of those elements of the case deemed pertinent.

4.2.2 Toward a Substantive Understanding of Fairness

As the above analysis hints, certain standards imposed by a purely procedural understanding of fairness could be easily met where AI is relied upon to justify, guide, and apply decision-making rules. As any decision-making procedure can be seemingly justified on the basis of AI analytics, should we then deem every decision fair?

In the AI governance debate, the notion of fairness is seldom used purely procedurally. The presence of procedural safeguards, like a motivation, is typically considered a necessary but often an insufficient condition for fairness. When we criticize a decision and its underlying procedure, we usually look beyond its procedural components. People’s fairness judgments might draw from their views on social justice: they consider the context in which a decision is made, the goals it aims to materialize and the (likely) disadvantage it may cause for those involved. In this context, Hart has argued that justice and fairness seemingly comprise two parts: “a uniform or constant feature, summarized in the precept ‘Treat like cases alike’ and a shifting or varying criterion used in determining when, for any given purpose, cases are alike or different.”Footnote 15 This varying criterion entails a particular political or moral outlook, a standard we use to evaluate whether a specific policy or rule contributes to the desired structuring of society.

For example, we could invoke a substantive notion of equality that a procedure should maintain or achieve. We might say that AI-driven procedures should not bar oppressed social groups from meaningfully engaging with their social environment or exercising meaningful control and agency over the conditions that govern their lives.Footnote 16 In so doing, we could also consider the exclusionary harm algorithms might introduce. Hiring and welfare programs, for instance, affect what Creel and Hellman refer to as “realms of opportunities”: the outcomes of these decisions give people more choices and access to alternative life paths.Footnote 17 In deciding upon eligibility criteria for a welfare benefit or job opportunity, we should then carefully consider whether the chosen parameters risk reflecting or perpetuating histories of disadvantage. From a data protection perspective, fairness might represent decision-makers’ obligation to collect and process all data they use transparently.Footnote 18 Needless to say, articulating one’s normative outlook is one thing. Translating those views into the making, structuring, and application of a rule is another. While a normative perspective might support us in the initial design of a decision-making procedure, the latter’s ability to realize a set of predefined goals will often only show in practice. In that regard, the normative standard relied upon, and its procedural implementation, should remain subject to correction and criticism.Footnote 19

Of course, purely procedural constraints could maintain their value regardless of one’s particular moral outlook: whether a society is structured along utilitarian or egalitarian principles, in both cases the consistency and predictability of a rule’s application benefit and respect people’s legitimate expectations. Given this intrinsic value, we might not want to discard the application of an established procedure outright as soon as the outcomes it produces conflict with our normative goals and ambitions.Footnote 20 The point, however, is that once a substantive or normative position has been taken, it can be used to scrutinize existing procedures where they fail to meet the desired normative outcome. Or, positively put, procedural constraints can now be modeled to better enable the realization of the specific substantive goals we wanted to realize. For example, we may argue that the more an AI application threatens to interfere with people’s life choices, the more institutional safeguards we need to facilitate our review and evaluation of the techniques and procedures AI decision-makers employ and the normative values they have incorporated into their systems.Footnote 21 The relationship between procedural and substantive fairness mechanisms is, therefore, a reciprocal one.

4.2.3 The Myth of Impartiality

Earlier we said that procedural fairness notions appear normatively empty. For example, the belief that a given rule should not arbitrarily favor one group over others might be seen as a call toward impartiality. If a decision-making process must be impartial to be fair, does this not exclude the decision-making process from being informed by a substantive, and hence partial, normative outlook? Even though the opposite may sometimes be claimed, efforts to remain impartial are not as neutral as they appear at first sight.Footnote 22 First, suppose an algorithmic system automates the imposition of traffic fines for speeding. Following a simple rule of logic, any person driving over the speed limit allocated to a given area must be handed the same fine. The system is impartial in the sense that, without exception, it will consistently apply the rules as they were written regardless of who is at the wheel. It will not act more favorably toward speeding politicians than toward ordinary citizens, for instance. At the same time, impartiality thus understood excludes the system from taking into account contextual factors that could favor leniency, as might be the case when a person violates the speed limit because they are rushing to the hospital to visit a sick relative. Second, in decision-making contexts relating to the distribution of publicly prized goods, such as job and welfare allocation, certain traits, such as a person’s gender or ethnicity, are often identified as arbitrary. Consequently, any disadvantageous treatment on the basis of those characteristics is judged to be unfair. The designation of these characteristics as arbitrary, however, is not neutral either: it represents a so-called color-blind approach toward policy and decision-making. Such an approach might intuitively appear as a useful strategy in the pursuit of socially egalitarian goals, and it can be. For instance, in a hiring context, there is typically no reason to assume that a person’s social background, ethnicity, or gender will affect their ability to perform a given job. At the same time, this color-blind mode of thinking can be critiqued for its tendency to favor merit-based criteria as the most appropriate differentiating metric instead. Under this view, criteria reflecting merit are (wrongfully) believed to be most objective and least biased.Footnote 23 In automating a hiring decision, designers may need to define what a “good employee” is, and they will look for technical definitions and classifications that further specify who such an employee may be.
As observed by Young, such specifications are not scientifically objective, nor neutrally determined, but instead “they concern whether the person evaluated supports and internalizes specific values, follows implicit or explicit social rules of behavior, supports social purposes, or exhibits specific traits or character, behavior, or temperament that the [decision-maker] finds desirable.”Footnote 24 Moreover, a person’s social context and culture have a considerable influence on the way they discover, experience, and develop their talents, motivations, and preferences.Footnote 25 Where a person has had fewer opportunities to attain or develop a talent or skill due to their specific social condition, their chances of success are more limited than those of people who could.Footnote 26 A mechanical interpretation of fairness as impartiality obscures the differences that exist between people and their relationship with social context and group affinities: individual identities are discarded and rendered abstract in favor of “impartial” or “universal” criteria. The blind approach risks decontextualizing the disadvantage certain groups face due to their possession of, or association with, a given characteristic. Though neutral at first glance, the criteria chosen might therefore ultimately favor the dominant majority, disadvantaging the very minorities a color-blind approach was supposed to protect in the first place. At the same time, it also underestimates how certain characteristics are often a valuable component of one’s identity.Footnote 27 Rather than rendering differences between people, such as their gender or ethnicity, invisible, we could instead accommodate and harness those differences to eliminate the (social and distributive) disadvantage attached to them.Footnote 28 For example, a person’s gender or ethnicity may become a relevant and nonarbitrary criterion if we want to redress the historical disadvantage faced by certain social groups by imposing positive or affirmative action measures on AI developers.

4.3 Justice, Fairness, and Equality

In the previous section, we illustrated how a procedural understanding of fairness is often combined with a more substantive political or normative outlook. This outlook we might find in political philosophy, and theories of social justice in particular. In developing a theory of social justice, one investigates the relationship between the structure of society and the interests of its citizens.Footnote 29 The interplay and alignment between the legal, economic, and civil aspects of social life determine the social position as well as the burdens and benefits that the members of a given society will carry. A position will be taken as to how society can be structured so it best accommodates the interests of its citizens. Of course, different structures will affect people in different ways, and scholars have long theorized as to what structure would suit society the best. Egalitarian theories, for instance, denote the idea that people should enjoy (substantive) equality of some sort.Footnote 30 This may include the recognition of individuals as social equals in the relationships they maintain, or their ability to enjoy equal opportunities in their access to certain benefits. In order to explain the intricate relationship that exists between the notions of justice, fairness, and equality as a normative and political outlook, the work of John Rawls is a good place to start.

4.3.1 Justice as Fairness

In his book A Theory of Justice, Rawls defines justice as fairness.Footnote 31 For Rawls, the subject of justice is the basic structure of society. This structure comprises society’s main institutions: the political constitution and the principal economic and social arrangements. These determine people’s life prospects: their duties and rights, and the burdens and benefits they carry. In our digital society, AI applications are technological artifacts that co-mediate the basic structure of society: they affect the options we are presented with (e.g., recommender systems), the relationships we enter into (e.g., AI-driven social media), and/or the opportunities we have access to (e.g., hiring and welfare algorithms).Footnote 32 While AI-driven applications must adhere to the demands of justice, the concept of fairness is nevertheless fundamental to arriving at a proper conception of justice.Footnote 33 More specifically, Rawls argues that the principles of justice can only arise out of an agreement made under fair conditions: “A practice will strike the parties as fair if none feels that, by participating in it, they or any of the others are taken advantage of, or forced to give in to claims which they do not regard as legitimate.”Footnote 34 It is this position of initial equality, where free and rational persons choose what course of action best suits the structure of society, from which principles of justice may arise.Footnote 35 Put differently, fairness does not directly inform the regulation, design, and development of AI; the principles of justice do so, but these principles are chosen from a fair bargaining position. While fairness could thus be perceived as a procedural decision-making constraint, the principles that follow from this position are substantive. And as the principles of justice are substantive in nature, Rawls argues, justice as fairness is not procedurally neutral either.

One major concern Rawls had was the deep inequalities that arise between people due to the different social positions they are born into, the differences in their natural talents and abilities, and the differences in the luck they have over the course of their life.Footnote 36 The basic structure of society favors certain starting positions over others, and the principles of justice should correct as much as possible for the inequalities people may incur as a result thereof. Rawls’ intuitive understanding regarding the emergence of entrenched social inequality, which AI applications tend to reinforce, could therefore function as a solid basis for AI governance.Footnote 37

In his A Theory of Justice, Rawls proposes (among other principles) the difference principle, which stipulates that once a society has been able to realize basic equal liberties for all and fair equality of opportunity in social and economic areas of life, social and economic inequalities can only be justified when they are to the benefit of those least advantaged within society. As AI applications not only reproduce social inequality but also have a tendency to reinforce and perpetuate the historical disadvantage faced by marginalized or otherwise oppressed communities, the difference principle could encourage regulators and decision-makers, when a comparison is made between alternative regulatory and design options, to opt for those policy or design options that are most likely to benefit the least advantaged within society. In this context, one could contend that justice should not only mitigate and avoid the replication of social and economic injustice but also pursue more ambitious transformative goals.Footnote 38 AI should be positively harnessed to break down institutional barriers that bar those least advantaged from participating in social and economic life.Footnote 39

4.3.2 Distributive Accounts of Fairness

Like conceptions of fairness, people's understanding of what justice is, and requires, is subject to dispute. Rawls' understanding of justice, for instance, is distributive in nature. His principles of justice govern the distribution of the so-called primary goods: basic rights and liberties; freedom of movement and free choice of occupation against a background of diverse opportunities; powers and prerogatives of offices and positions of authority and responsibility; income and wealth; and the social bases of self-respect.Footnote 40 These primary goods are what "free and equal persons need as citizens."Footnote 41 A distributive approach toward fairness may also be found in the work of Hart, who considered fairness to be a notion relevant (among other things) to the way classes of people are treated when some burden or disadvantage must be distributed among them. In this regard, unfairness is a property not only of a procedure but also of the shares produced by that procedure.Footnote 42 Characteristic of the distributive paradigm is that it formulates questions of justice as questions of distribution. In general terms, purely distributive theories argue that any advantage and disadvantage within society can be explained in terms of people's possession of, or access to, certain material (e.g., wealth and income) or nonmaterial goods (e.g., opportunities and social positions).Footnote 43 Likewise, social and economic inequalities can be evaluated in light of the theory's proposed or desired distribution of those goods it has identified as "justice-relevant."Footnote 44 Inequality between people can be justified as long as it contributes to the desired state of affairs. If it does not, however, mechanisms of redistribution must be introduced to accommodate unjustified disadvantages.Footnote 45

Distributive notions of fairness have an intuitive appeal, as AI-driven decision-making is often deployed in areas that can constrain people in their access to publicly prized goods, such as education, credit, or welfare benefits.Footnote 46 Hence, when fairness is to become integrated into technological applications, the tendency may be for design solutions to focus on the distributive shares algorithms produce and, conversely, to correct AI applications when they fail to provide the desired outcome.Footnote 47

4.3.3 Relational Accounts of Fairness

Though issues of distribution are important, relational scholars have critiqued the dominance of the distributive paradigm as the normative lens through which questions of injustice are framed.Footnote 48 They believe additional emphasis must be placed on the relationships people hold and how people ought to treat one another as part of the relationships they maintain with others, such as their peers, institutions, and corporations. Distributive views on fairness might be concerned with transforming social structures, institutions, and relations, but their reason for doing so lies in the outcomes these changes would produce.Footnote 49 Moreover, as Young explains, certain phenomena such as rights, opportunities, and power are better explained as a function of social processes, rather than thing-like items that are subject to distribution.Footnote 50 Likewise, inequality cannot solely be explained or evaluated in terms of people’s access to certain goods. Instead, inequality arises and exists, and hence is formed, within the various relationships people maintain. For example, people cannot participate as social equals and have an equal say in political decision-making processes when prejudicial world views negatively stereotype them. They might have “equal political liberties” on paper, but not in practice.

When fairness not only mandates "impartial treatment" in relation to distributive ideals but also requires a specific type of relational treatment, the concept's normative reach goes even further.Footnote 51 AI applications are inherently relational. On the one hand, decision-makers hold a position of power over decision-subjects, and hence, relational fairness could constrain the type of actions and behaviors AI developers may impose on decision-subjects. On the other hand, data-driven applications, when applied to people, divide the population into broad, but nonetheless consequential, categories based upon generalized statements concerning similarities people allegedly share.Footnote 52 Relational approaches toward fairness will specify the conditions under which people should be treated as part of and within AI procedures.

Take for instance the relational injustice of cultural imperialism. According to Young, cultural imperialism involves the social practice in which a (dominant) group's experience and culture is universalized and established as the norm.Footnote 53 A group or actor is able to universalize their world views when they have access to the most important "means of interpretation and communication."Footnote 54 The process of cultural imperialism stereotypes and marks out the perspectives and lived experiences of those who do not belong to the universal or dominant group as an "Other."Footnote 55 Because AI-applications constitute a modern means of interpretation and communication in our digital society, they in turn afford power to those who hold control over AI: AI-driven technologies can discover and/or apply (new) knowledge and give those with access to them the opportunity to interpret and structure society. They give those in power the capacity to shape the world in accordance with their perspective, experiences, and meanings and to encode and naturalize a specific ordering of the world.Footnote 56 For example, in the field of computer vision, methods are developed to understand the visual world via recognition systems. To do so, AI must be trained on vast amounts of images or other pictorial material. To be of any use, however, these images must be classified as to what they contain. Though certain classification acts seemingly appear devoid of risk (e.g., does a picture contain a motorbike), others do not.Footnote 57 Computer vision systems that look to define and classify socially constructed categories, such as gender, race, and sexuality, tend to wrongfully present these categories as universal and detectable, often to the detriment of those not captured by the universal rule.Footnote 58 Facial recognition systems and body scanners at airports that have been built based on the gender binary risk treating trans-, non-binary, and gender nonconforming persons as nonrecognizable human beings.Footnote 59 In a similar vein, algorithmic systems may incorporate stereotyped beliefs concerning a given group. This was the case in the Netherlands, where certain risk scoring algorithms used during the evaluation of childcare benefit applications operated on the prejudicial assumption that ethnic minorities and people living in poverty were more likely to commit fraud.Footnote 60 The same holds true for highly subjective target variables, such as the specification of the "ideal employee" in hiring algorithms. As aforementioned, technical specifications may gain an aura of objectivity once they become incorporated within a decision-making chain and larger social ecosystem.Footnote 61

Under a relational view, these acts, regardless of the outcomes they may produce, are unjust because they impose representational harms onto people: they generalize, misrepresent, and deindividualize persons. From a relational perspective, these decisions may be unjustified because they interfere with people's capacity to learn, develop, exercise, and express skills, capacities, and experiences in socially meaningful and recognized ways (self-development) and their capacity to exercise control over, and participate in determining, their own options, choices, and the conditions of their actions (self-determination).Footnote 62 They do so, however, not by depriving people of a particular good, but by rendering the experiences and voices of certain (groups of) people invisible and unheard. Unlike outcome-focused definitions of justice, whose violation may appear as more immediate and apparent, these representational or relational harms are less observable due to the opacity and complexity of AI.Footnote 63

If we also focus on the way AI-developers treat people as part of AI procedures, a relational understanding of fairness will give additional guidance as to the way these applications can be structured. For instance, procedural safeguards could be implemented to facilitate people's ability to exercise self-control and self-development when they are likely to be affected by AI. This may be achieved by promoting diversity and inclusion within the development, deployment, and monitoring of decision-making systems so as to ensure AI-developers are confronted by a plurality of views and the lived experiences of others, rather than socially dominant conventions.Footnote 64 Given the power they hold, AI-developers should carefully consider their normative assumptions.Footnote 65 Procedural safeguards may attempt to equalize power asymmetries within the digital environment and help those affected by AI to regain, or have increased, control over those structures that govern and shape their choices and options in socially meaningful and recognized ways. The relational lens may contribute to the democratization of modern means of interpretation and communication to realize the transformative potential of technologies.

4.4 Limitations of Techno Solutionism

From a technical perspective, computer scientists have explored more formalized approaches toward fairness. These efforts attempt to abstract and embed a given fairness notion into the design of a computational procedure. The goal is to develop "reasoning" and "learning" processes that will operate in such a way that the ultimate outcome of these systems corresponds to what was defined beforehand as fair.Footnote 66 While these approaches are laudable, it is also important to understand their limitations. Hence, they should not be seen as the only solution toward the realization of fairness in the AI-environment.

4.4.1 Choosing Fairness

During the development of AI systems, a choice must be made as to the fairness metric that will be incorporated. Since fairness is a concept subject to debate, there has been an influx of various fairness metrics.Footnote 67 Yet, as should be clear from previous sections, defining fairness is a value-laden and consequential exercise. And even though there is room for certain fairness conceptions to complement or enrich one another, others might conflict. In other words, trade-offs will need to be made in deciding what type of fairness will be integrated, assuming its technical and mathematical formalization is possible in the first place.Footnote 68
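To make the potential for conflict concrete, consider the following minimal sketch (in Python, with illustrative numbers of our own choosing rather than figures drawn from the literature). It reflects a well-known tension: when two groups differ in the base rate of the predicted outcome, a classifier with identical error rates across groups ("equalized odds") will generally not have identical precision across groups ("predictive parity"), so at least one of these criteria has to give.

```python
# Minimal sketch with made-up numbers: why two fairness criteria can conflict.
# If both groups get the same true-positive rate (TPR) and false-positive rate
# (FPR) -- i.e., "equalized odds" holds -- but their base rates differ, the
# precision (positive predictive value) must differ between the groups, so
# "predictive parity" is violated.

def precision(base_rate: float, tpr: float, fpr: float) -> float:
    """Precision implied by a group's base rate and the classifier's error rates."""
    true_positives = base_rate * tpr
    false_positives = (1 - base_rate) * fpr
    return true_positives / (true_positives + false_positives)

TPR, FPR = 0.80, 0.10                  # identical for both groups: equalized odds holds
BASE_RATE_A, BASE_RATE_B = 0.50, 0.20  # but the groups differ in base rate

print(round(precision(BASE_RATE_A, TPR, FPR), 2))  # 0.89 for group A
print(round(precision(BASE_RATE_B, TPR, FPR), 2))  # 0.67 for group B: predictive parity fails
```

Deciding which of these properties should be preserved, and which may be sacrificed, is exactly the kind of value-laden trade-off referred to above.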

Wachter and others distinguish between bias-preserving and bias-transforming metrics and support the latter to achieve substantive equality, such as fair equality of opportunity and the ability to redress disadvantage faced by historically oppressed social groups.Footnote 69 Bias-preserving metrics tend to lock in historical bias present within society and cannot effectuate social change.Footnote 70 In related research, Abu-Elyounes suggested that different fairness metrics can be linked to different legal mechanisms.Footnote 71 Roughly speaking, she makes a distinction between individual fairness, group fairness, and causal reasoning fairness metrics. The first aims to achieve fairness toward the individual regardless of their group affiliation and is closely associated with the ideal of generating equal opportunity. Group fairness notions aim to achieve fairness to the group an individual belongs to, which is more likely to be considered as positive or affirmative action. Finally, due process may be realized through causal reasoning notions that emphasize the close relationship between attributes of relevance and outcomes.Footnote 72 This correspondence between fairness metrics and the law could affect system developers' and policymakers' design choices.Footnote 73 For example, affirmative action measures can be politically divisive. The law might mandate decision-makers to implement positive action measures but limit their obligation to do so only for specific social groups and within areas such as employment or education because they are deemed critical for people's social and economic participation. Thus, the law might (indirectly) specify which fairness metrics are technologically fit for purpose in which policy domains.
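As a rough illustration of the difference between the metric families just described, the sketch below (with hypothetical applicants, scores, and a similarity threshold we picked arbitrarily) contrasts a simple group-level check, comparing average model scores per group, with an individual-level check in the spirit of "similar individuals should receive similar treatment." Neither check is the legal standard for any of the mechanisms mentioned above; both are simplified stand-ins for whole families of metrics.

```python
# Toy contrast (hypothetical data) between a group fairness check and an
# individual fairness check. Each applicant: (feature vector, group, model score).
import itertools

applicants = [
    ((0.9, 0.7), "A", 0.85),
    ((0.9, 0.7), "B", 0.60),  # same features as the first applicant, lower score
    ((0.2, 0.3), "A", 0.20),
    ((0.3, 0.2), "B", 0.25),
]

# Group fairness (group-level aggregate): compare mean scores per group.
def mean_score(group: str) -> float:
    scores = [score for _, g, score in applicants if g == group]
    return sum(scores) / len(scores)

print("mean score A:", mean_score("A"), "mean score B:", mean_score("B"))

# Individual fairness (Lipschitz-style): similar feature vectors should receive
# similar scores, regardless of group membership.
def distance(x, y):
    return sum(abs(a - b) for a, b in zip(x, y))

SIMILARITY_FACTOR = 1.0  # arbitrary illustrative constant
for (x1, _, s1), (x2, _, s2) in itertools.combinations(applicants, 2):
    if abs(s1 - s2) > SIMILARITY_FACTOR * distance(x1, x2):
        print("individual fairness violated for applicants with features", x1, "and", x2)
```

The two checks probe different things: one compares aggregates across groups, the other compares the treatment of comparable individuals, which is why they can support different conclusions about the same system.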

Regardless of technical and legal constraints, formalized approaches may still be too narrowly construed in terms of their inspiration. For instance, Kasirzadeh has observed that "most mathematical metrics of algorithmic fairness are inherently rooted in a distributive conception of justice."Footnote 74 More specifically, "theories or principles of social justice are often translated into the distribution of material (such as employment opportunities) or computational (such as predictive performance) goods across the different social groups or individuals known to be affected by algorithmic outputs."Footnote 75 In other words, when outcome-based approaches are given too much reverence, we may discard the relational aspects of AI-systems. In addition, and historically speaking, fairness efforts in machine learning arose out of researchers' attempts to realize discrimination-aware data mining or machine learning.Footnote 76 In this regard, the notion of fairness has often been closely entwined with more substantive interpretations of equality and nondiscrimination law. This often results in the identification of certain "sensitive attributes" or "protected characteristics," such as gender or ethnicity. The underlying idea would be that fairness and equality are realized as soon as the outcome of a given AI-system does not disproportionately disadvantage individuals because of their membership of a socially salient group. For instance, one could design a hiring process so that the success rate of the application procedure is (roughly) the same for men and women who share the same qualifications. Even though these approaches aspire to mitigate disadvantage experienced by underrepresented groups, they may do so following a (distributive), single-axis, and difference-based nondiscrimination paradigm. This could be problematic for two reasons. First, intersectional theorists have convincingly demonstrated the limitations of nondiscrimination laws' single-attribute focus.Footnote 77 Following an intersectional approach, discrimination must also be evaluated considering the complexity of people's identities, whereby particular attention must be paid to the struggles and lived experiences of those who carry multiple burdens. For instance, Buolamwini and Gebru demonstrated that the misclassification rate in commercial gender classification systems is highest for darker-skinned females.Footnote 78 Second, the relational and distributive harms generated by AI-driven applications are not only faced by socially salient groups. For instance, suppose a credit scoring algorithm links an applicant's trustworthiness to a person's keystrokes during their online file application. Suppose our goal is to achieve fair equality of opportunity or equal social standing for all. Should we not scrutinize any interference therewith, and not only when the interference is based upon people's membership of socially salient groups?Footnote 79
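The intersectional point can also be made concrete. The counts below are fabricated purely for illustration (they are not Buolamwini and Gebru's data); they merely show how evaluating error rates along one attribute at a time can understate the disparity experienced at the intersection of two attributes.

```python
# Fabricated toy counts: (gender, skin type, number evaluated, number misclassified).
from collections import defaultdict

toy_results = [
    ("female", "lighter", 100, 2),
    ("female", "darker",  100, 30),
    ("male",   "lighter", 100, 1),
    ("male",   "darker",  100, 6),
]

def error_rates(attribute_positions):
    """Misclassification rates aggregated over the chosen attribute position(s)."""
    totals, errors = defaultdict(int), defaultdict(int)
    for gender, skin, evaluated, misclassified in toy_results:
        key = tuple((gender, skin)[i] for i in attribute_positions)
        totals[key] += evaluated
        errors[key] += misclassified
    return {key: errors[key] / totals[key] for key in totals}

print(error_rates([0]))     # per gender only: female 0.16, male 0.035
print(error_rates([1]))     # per skin type only: lighter 0.015, darker 0.18
print(error_rates([0, 1]))  # intersection: darker-skinned females at 0.30
```

A single-axis audit of this (made-up) system would report its worst marginal error rate as 0.18, while the subgroup at the intersection faces an error rate of 0.30; this is the kind of disparity an intersectional evaluation is meant to surface.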

Yet, in our attempt to articulate and formalize fairness, Birhane and others rightfully point out that we should be wary of overly and uncontestedly relying on white, Western ontologies to the detriment and exclusion of marginalized philosophies and systems of ethics.Footnote 80 More specifically, attention should also be paid to streams of philosophy that are grounded “in down-to-earth problems and […] strive to challenge underlying oppressive social structures and uneven power dynamics,” such as Black Feminism, Critical Theory, and Care Ethics and other non-Western and feminist philosophies.Footnote 81 Hence, questions regarding fairness and justice of AI systems must be informed by the lived experiences of those they affect, rather than rendered into a purely abstract theoretical exercise of reflection or technological incorporation.

4.4.2 Disadvantages of Abstraction

When fairness metrics are constructed toward the realization of a given outcome by design, they run the risk of oversimplifying the demands of fairness as found within theories of justice or the law. Fairness should not be turned into a simplified procedural notion the realization of which can be achieved solely via the technological procedures that underlie decision-making systems. While fairness can be used to specify the technical components underlying a decision-making process and their impact, it could also offer broader guidance regarding the procedural, substantive, and contextual questions that surround their deployment. Suppose a system must be rendered explicable. Though technology can help us in doing so, individual mechanisms of redress via personal interaction may enable people to better understand the concrete impact AI has had on their life. Moreover, when fairness is seen as a technical notion that governs the functioning of one individual or isolated AI-system only, the evaluation of that functioning may become decontextualized from the social environment in which these systems are embedded and from which they draw, as well as from their interconnection with other AI-applications.Footnote 82 Taking a relational perspective as a normative point of departure, the wider social structures in which these systems are developed, embedded, and deployed become an essential component for their overall evaluation. For example, fairness metrics are often seen as a strategy to counter systemic bias within data sets.Footnote 83 Large datasets, such as CommonCrawl, used for training high-profile AI applications are built from information mined from the world wide web. Once incorporated into technology, subtle forms of racism and sexism, as well as more overt toxic and hateful opinions shared by people on bulletin boards and fora, risk being further normalized by these systems. As Birhane correctly notes: "Although datasets are often part of the problem, this commonly held belief relegates deeply rooted societal and historical injustices, nuanced power asymmetries, and structural inequalities to mere datasets. The implication is that if one can 'fix' a certain dataset, the deeper problems disappear."Footnote 84 Computational approaches might wrongfully assume complex (social) issues can be formulated in terms of problem/solution. Yet this, she believes, paints an overly simplistic picture of the matter at hand: "Not only are subjects of study that do not lend themselves to this formulation discarded, but also, this tradition rests on a misconception that injustice, ethics, and bias are relatively static things that we can solve once and for all."Footnote 85 As AI systems operate under background conditions of structural injustice, efforts to render AI fairer are fruitless if not accompanied by genuine efforts to dismantle existing social and representational injustice.Footnote 86 Fairness thus requires us to view the bigger picture, where people's relationships and codependencies become part of the discussion. Such efforts should equally extend to the labor conditions that make the development and deployment of AI systems possible. For instance, in early January 2023, reports emerged that OpenAI, the company behind ChatGPT, had outsourced to Kenyan data workers the labeling of harmful content as part of its efforts to reduce users' exposure to toxic generated content. For little money, these data workers have to expose themselves to sexually graphic, violent, and hateful imagery under taxing labor conditions.Footnote 87 This raises the question: can we truly call a system fair once it has been rid of its internal biases, knowing this was achieved through exploitative labor structures which, rather than being the exception, appear to be standard practice?Footnote 88

Finally, one should be careful as to which actors are given the discretionary authority to decide how fairness should be given shape along the AI value chain. For example, the EU AI Act, which governs the use of (high-risk) AI systems, affords considerable power to the providers of those systems as well as (opaque) standardization bodies.Footnote 89 Without the public at large, including civil society and academia, having access to meaningful procedural mechanisms, such as the ability to contest, control, or exert influence over the normative assumptions and technical metrics that will be incorporated into AI-systems, the power to choose and define what is fair will rest predominantly with industry actors. This discretion may, in the words of Barocas, lead to situations "in which the work done by socially conscious computer scientists working in the service of traditional civil rights goals, which was really meant to be empowering, suddenly becomes something that potentially fits in quite nicely with the existing interests of companies."Footnote 90 In other words, it could give those in control of AI the ability to pursue economic interests under the veneer of fairness.Footnote 91 In this regard, Sax has argued that the regulation of AI, and the choices made therein, may not only draw inspiration from liberal and deliberative approaches to democracy, but could also consider a more agonistic perspective. While the former try to look for rational consensus amid political and ideological conflict through rational and procedural means, agonism questions the ability to solve such conflicts: "from an agonistic perspective, pluralism should be respected and promoted not by designing procedures that help generate consensus, but by always and continuously accommodating spaces and means for the contestation of consensus(-like) positions, actors, and procedures."Footnote 92

4.5 Conclusion

The notion of fairness is deep and complex, and this chapter could only scratch the surface. It demonstrated how a purely procedural conceptualization of fairness, completely detached from the political and normative ideals a society wishes to achieve, is difficult to maintain. In this regard, the moral aspirations a society may have regarding the responsible design and development of AI-systems, and the values AI-developers should respect and incorporate, should be clearly articulated first. When we have succeeded in doing so, we can then start investigating how we could best translate those ideals into procedural principles, policies, and concrete rules that can facilitate the realization of those goals.Footnote 93 In this context, we argued that as part of this articulation process, we should not only be focused on how AI-systems interfere with the distributive shares or outcomes people hold. In addition, we should also pay attention to the relational dynamics AI systems impose and their interference with social processes, structures, and relationships. Moreover, in so doing, we should be informed by the lived experiences of the people whom those AI systems threaten to affect the most.

Seeking fairness is an exercise that cannot be performed within, or as part of, the design phase only. Technology may assist in mitigating the societal risks AI systems threaten to impose, but it is not a panacea thereto. The realization of fair AI requires a holistic response; one that incorporates the knowledge of various disciplines, including computer and social sciences, political philosophy, ethics, and the law, and where value-laden decisions are meaningfully informed and open to contestation by a plurality of voices and experiences.

5 Moral Responsibility and Autonomous Technologies Does AI Face a Responsibility Gap?

Lode Lauwaert and Ann-Katrien Oimann
5.1 Introduction

There are several ethical conundrums associated with the development and use of AI. Questions around the avoidance of bias, the protection of privacy, and the risks associated with opacity are three examples, which are discussed in several chapters of this book. However, society's increased reliance on autonomous systems also raises questions around responsibility, and more specifically the question whether a so-called responsibility gap exists. When autonomous systems make a mistake, is it unjustified to hold anyone responsible for it?Footnote 1 In recent years, several philosophers have answered in the affirmative – most notably Andreas Matthias and Robert Sparrow. If, for example, a self-driving car hits someone, in their opinion, no one can be held responsible. The argument we put forward in this chapter is twofold. First, there does not necessarily exist a responsibility gap in the context of AI systems and, second, even if there were one, this would not necessarily be a problem.

We proceed as follows. First, we provide some conceptual background by discussing respectively what autonomous systems are, how the notion of responsibility can be understood, and what the responsibility gap is about. Second, we explore to what extent it could make sense to assign responsibility to artificial systems. Third, we argue that the use of autonomous systems does not necessarily lead to a responsibility gap. In the fourth and last section of this chapter, we set out why the responsibility gap is not necessarily problematic and provide some concluding remarks.

5.2 Conceptual Clarifications

In this section, we first discuss what autonomous systems are. Next, we explain the concept of responsibility and what the responsibility gap is about. Finally, we describe how the responsibility gap differs from related issues.

5.2.1 Autonomous Systems

Before we turn to responsibility, let us begin with a brief exploration of AI systems, which are discussed in more detail in the second chapter of this book. One of the most controversial examples is autonomous weapons systems, or so-called "killer robots,"Footnote 2 designed to kill without human intervention. It is to date unclear to what extent such technology already exists in fully autonomous form, yet the use of AI in warfare (which is also discussed in Chapter 20 of this book) is on the rise. For instance, a report by the UN Panel of Experts on Libya of 2020 mentions the system Kargu-2, a drone which may have hunted down and attacked retreating soldiers without any data connectivity between the operator and the system.Footnote 3 Unsurprisingly, the propensity toward ever greater autonomy in weapon systems is also accompanied by much speculation, debate, and protest.

For another example of an AI system, one can think of Sony's 1999 robot dog AIBO, a toy capable of learning that can act as a substitute for a pet. The robot dog learns to respond to specific phrases of its "owner," or learns to adapt its originally programmed walking motion to the specific shape of the owner's house. AI systems are, however, not necessarily embedded in hardware. Consider, for instance, a software-based AI system that is capable of detecting lung cancer based on a pattern analysis of radiographic images, which can be especially useful in poorer regions where there are not enough radiologists. Amazon's Mechanical Turk platform is also a good example, as the software autonomously allocates tasks to suitable workers who have subscribed to the platform, and subsequently handles their payment once it has – autonomously – verified that the task was adequately carried out. The uptake of AI systems is on the rise in all societal domains, which also means that questions around responsibility arise in various contexts.

5.2.2 Notions of Responsibility

The term “responsibility” can be interpreted in several ways. When we say “I am responsible,” we can mean more than one thing by it. In general, a distinction can be made between three meanings: causal responsibility, moral responsibility, and role responsibility.Footnote 4 We will discuss each in turn.

Suppose a scientist works in a laboratory and uses a glass tube that contains toxic substances that if released would result in the death of many colleagues. Normally the scientist is careful, but a fly in the eye causes her to stumble. The result is that the glass tube breaks and the toxins are released, causing deaths. Asked who is responsible for the havoc, some will answer that it is the scientist. They then understand “responsibility” in a well-defined sense, namely in a causal sense. They mean that the scientist is (causally) responsible because she plays a role in the course of events leading to the undesirable result.

Let us make a slight modification. Say the same scientist works in exactly the same context with exactly the same toxic substances, but now also belongs to a terrorist group and wants the colleagues to die, and therefore deliberately drops the glass tube, resulting in several people dying. We will again hold the scientist responsible, but the content of this responsibility is clearly different from the first kind of responsibility. Without the scientist’s morally wrong act, the colleagues would still be alive, and so the scientist is the cause of the colleagues’ deaths. So, while the scientist is certainly causally responsible, in this case she will also be morally responsible.

Moral responsibility usually refers to one person, although it can also be about a group or organization. That person is then held responsible for something. Often this “something” is undesirable, such as death, but you can also be held responsible for good things, such as saving people. If a person is morally responsible, it means that others can respond to that person in a certain way: praise or reward when it comes to desirable things; disapproval or punishment when it comes to bad things. In addition, if one were to decide to punish or to reward, it would also mean that it is morally right to punish or reward that person. In other words, there would be good reasons to punish or reward that particular person, and not someone else. Note that moral responsibility does not necessarily involve punishment or reward. It only means that someone is the rightful candidate for such a response, that punishment or reward may follow. So, I may be responsible for something undesirable, but what happened was not so bad that I should be punished.

The third form, role responsibility, refers to the duties that come with a role or position. Parents are responsible in this sense because they must ensure their children grow up in a safe environment, just as it is the role responsibility of a teacher to ensure a safe learning environment for students. When revisiting the earlier example of the scientist, we can also discuss her responsibility without referring to her role in a chain of events (causal responsibility) or to the practice of punishment and reward (moral responsibility). Those who believe that the scientist is responsible may in fact refer to her duty to watch over the safety of the building, to ensure that the room is properly sealed, or to verify that the glass tubes she uses do not have any cracks.

These three types of responsibility are related. The preceding paragraphs make it clear that a person can be causally responsible without being responsible in a moral sense. We typically do not condemn the scientist who trips over a shoelace. Conversely, though, moral responsibility always rests on causal responsibility. We do not hold someone morally responsible if they are in no way part of the process that led to the (un)desired result. That causal involvement, by the way, should be interpreted in a broad sense. Suppose the scientist is following an order. The person who gave the order is then not only causally but also morally responsible, despite not having carried out the act themselves. Finally, role responsibility is always accompanied by moral responsibility. If, for example, as a scientist it is your duty to ensure that the laboratory is safe, it follows at least that you are a candidate for moral disapproval or punishment if it turns out that you have not done your duty adequately, or that you can be praised or rewarded if you have met the expectations that come with your role.

5.2.3 Responsibility Gap

Autonomous systems lead to a responsibility gap, some claim.Footnote 5 But what does one understand by "responsibility" here? Clearly, one is not talking about causal responsibility in this context. AI systems are normally created by humans (we say "normally" because there already exist AI systems that design other AI systems). Therefore, if one were to claim that no humans are involved in the creation of AI systems, this would amount to a problematic view of technology.

The responsibility gap is also not about the third form of responsibility, namely role responsibility. Role responsibility here refers to the duty of engineers, not so much to create more sustainability or well-being, but to make things that have as little undesirable effect on moral values as possible, and thus to think about such possible effects in advance. Since there is no reason why this should not apply to the developers of autonomous systems, the responsibility gap does not mean that developers and users of AI systems have no special duties attached to them. On the contrary, such technology precisely affirms the importance of moral duties. Because the decision-making power is being transferred to that technology, and because it is often impossible to predict exactly what decision will be made, the developers of AI systems must think even more carefully than other tech designers about the possible undesirable effects that may result from the algorithms' decisions in, for example, the legal or medical world.Footnote 6

The thesis of the so-called responsibility gap is thus concerned with moral responsibility. It can be clarified as follows: in the case of mistakes made by autonomous AI, despite a possible spontaneous tendency to punish someone, that tendency has no suitable target, as there is no candidate for punishment.

5.2.4 Related but Different Issues

Before we examine whether the thesis of the responsibility gap holds water, it is useful to briefly touch upon the difference between the alleged problem of the responsibility gap and two other problems. The first problem is reminiscent of a particular view of God and the second is the so-called problem of many hands.

Imagine strolling through the city on a sunny afternoon and stepping in chewing gum. You feel it immediately: with every step, your shoe sticks a little to the ground. Your mood changes, the sunny afternoon is gone (at least for a while), and you are looking for a culprit. However, the person who left the gum on the ground is long gone. There is definitely someone causally responsible here: someone dropped the gum at some point. And the causally responsible person is also morally responsible. You're not supposed to drop gum on the ground, and if you do it anyway, then you're ignoring your civic duty and may justifiably be reprimanded. However, the annoying thing about the situation is that it is not possible to detect the morally responsible person.

The problem in this example is that you do not know who the morally responsible person is, even though there is a responsible person. This is reminiscent of the relationship between man and God as described in the Old Testament. God created the world, but has subsequently distanced Himself so far from His creation that it is impossible for man to perceive Him. In the case of the responsibility gap the problem is of a different nature. Here it is not an epistemic problem, but an ontological problem. The difficulty is not that I do not know who is responsible; the problem is that there is no one morally responsible for the errors caused by an autonomous system, so the lack of knowledge cannot be the problem here.

The second problem that deviates from the responsibility gap is the problem of many hands.Footnote 7 This term is used to describe situations where many actors have contributed to an action that has caused harm and it is unclear how responsibility should be allocated. It is often used with respect to new technologies such as AI systems because a large number of actors are involved in their development and use, but the problem also occurs in nontechnical areas such as climate change.

To illustrate this problem, we turn to the disaster of the Herald of Free Enterprise, the boat that capsized on March 6, 1987, resulting in the deaths of nearly 200 people. An investigation revealed that water had flowed into the boat. As a result, the already unstable cargo began to shift to one side. This displacement eventually caused the ferry to disappear under the waves just outside the port of Zeebrugge in Belgium. This fatal outcome was not the result of just one cause. Several things led to the boat capsizing. Doors had been left open, the ship was not stable in the first place, the bulkheads that had been placed on the car deck were not watertight, there were no lights in the captain’s cabin, and so on. Needless to say, this implies that several people were involved: the assistant boatman who had gone to sleep and left the doors open; the person who had not checked whether the doors were closed; and finally, the designers of the boat who had not fitted it with lights.

There are so many people involved in this case that no single person alone can be held responsible. But this differs from saying that no one is responsible. The case is not an example of an ontological problem; there is no lack of moral responsibility in the case of the capsized ferry. Indeed, there are multiple individuals who are morally responsible. There is, however, an epistemic problem. The problem is that there are so many hands involved that it is very difficult (if not impossible) to know exactly who is responsible for what and to what extent each person involved is responsible. In the case of the Herald of Free Enterprise, many knots had to be untangled in terms of moral responsibility, but that is different from claiming that the use of a technology is associated with a responsibility gap.

5.3 Can AI Be Morally Responsible?

Is it true that there is no moral responsibility for mistakes made by an AI system? There is an answer to that question that is often either not taken seriously or overlooked, namely the possibility of AI systems being responsible themselves.Footnote 8 To be clear, we refer here to moral responsibility and not the causal type of responsibility. After all, autonomous technologies very often play a causal role in a chain of events with an (un)desirable outcome. Our question is: is it utter nonsense to see an AI system as the object of punishment and reward, praise, and indignation?

One of the sub-domains of philosophy is philosophical anthropology. A central question in that domain is whether there are properties that separate humans from, say, plants and nonhuman animals, as well as from artificial entities. In that context, one can think, for instance, of the ability to play and communicate, to suffer psychologically or to get gray hair. However, it is almost impossible not to consider responsibility here. After all, today, we only attribute that moral responsibility to human beings. Sure, there are some exceptions to this rule. For instance, we do not hold people with mental disabilities or disorders responsible for a number of things. And we also punish and reward animals that are not humans. But moral responsibility is something we currently reserve exclusively for humans, and thus do not attribute to artifacts.

Part of the reason we do not hold artificial entities responsible has to do with what responsibility entails. We recall that a morally responsible person is the justifiable target of moral reactions such as punishment and reward, anger and indignation. Those reactions do not necessarily follow, but if they follow then the responsible person is the one who is justifiably the subject of such a reaction. But that presupposes the possibility of some form of sensation, the ability to be affected in the broad sense, whether on a mental or physical level. There is no point in designating someone as responsible if that person cannot be affected by the moral reactions of others. But where do we draw the line? Of course, the ability to experience pain or pleasure in the broad sense of the word is not sufficient to be morally responsible. This is evident from our dealings with nonhuman animals: dogs can experience pain and pleasure, but we do not hold them responsible when they tip a vase with their tail. However, the ability to be physically or mentally affected by another’s reaction is a necessary condition. And since artifacts such as autonomous technologies do not currently have that ability, it would be downright absurd to hold them responsible for what they do.

On the other hand, moral practices are not necessarily fixed forever. They can change over the course of history. Think about the allocation of legal rights. At the end of the eighteenth century, people were still arguing against women’s rights based on the following argument: if we grant rights to women, then we must also grant rights to animals. The concealed assumption was that animal rights are unthinkable. Meanwhile, it is completely immoral to deny women rights that are equal to those of men. One can also think of the example of the robot Sophia who was granted citizenship of Saudi Arabia in 2017. If throughout history more and more people have been granted rights, and if other moral practices have changed over time, why couldn’t there be a change when it comes to moral responsibility? At the time of writing, we cannot hold artifacts responsible; but might it be possible in the future?

That question only makes sense if it is not excluded that robots in the future may be affected on a physical or mental level, and may come to experience pain or pleasure in some way. If that ability can never exist, then it is out of the question that our moral attitudes will change, and we will never hold AI systems morally responsible. And exactly that, some say, is the most realistic scenario: we will never praise technology because it will never be capable of sensation on a physical or mental level. Much can be said about that assertion. We will limit ourselves to a brief response to the following thought experiment that is sometimes given to support that claim.

Suppose a robot that looks like a human falls down the stairs and reacts as humans normally do by providing the output that usually follows the feeling of pain: yelling, crying, and so on. Is the robot in pain? Someone may react to the robot’s fall, for example, because it is a human reflex to react to signs of pain. However, one is unlikely to respond because the robot is in pain. Although the robot does show signs of pain, there is no pain, just as computer programs such as Google Translate and DeepL do not really understand the sentence that they can nevertheless translate perfectly.

AI can produce things that indicate pain in humans, but those signals, in the case of the software, are not in themselves a sufficient reason to conclude that the technology is in pain. However, we cannot conclude at this point that AI systems will never be able to experience pain, nor can we exclude that machines will one day be able to be affected mentally. Indeed, next to software, technologies usually consist of hardware as well, and the latter might be a reason not to immediately cast aside the possibility of pain.Footnote 9 Why?

Like all physiological systems of a human body, the nervous system is made up of cells, mainly neurons, which constantly interact. This causal link ensures that incoming signals lead to the sensation of pain. Now, suppose that you are beaten up, and that for 60 minutes, you are actually in pain, but that science has advanced to the point where the neurons can be replaced by a prosthesis, microchips, for example, without it making any difference otherwise. The chips are made on a slice of silicon – but other than that, those artificial entities do exactly the same thing as the neurons: they send signals to other cells and provide sensation. Well, imagine that, during one month and step by step, a scientist replaces every cell with a microchip so that your body is no longer only made up of cells but also of chips. Is it still utter nonsense to claim that robots might one day be able to feel pain?

To avoid confusion, we would like to stress the following: we are not claiming that intelligent systems will one day be able to feel pain, or that robots will one day resemble us humans in terms of sensation. At most, the last thought experiment was meant to indicate that it is perhaps a bit short-sighted to simply brush this option aside as nonsense. Furthermore, if it does turn out that AI systems can experience pain, we will not automatically hold them morally responsible for the things they do. The reason is that the ability to feel pain is not enough to be held responsible. Our relationships with nonhuman animals, for example, demonstrate this, as we pointed out earlier. Suppose, however, that all conditions are met (we explain these conditions in the next section): would that immediately imply that we will see AI systems as candidates for punishment and reward? Attributing responsibility exclusively to humans is an age-old moral practice, which is why this may not change any time soon. At the same time, history shows that moral practices are not necessarily eternal, and that the time-honored practice of attributing rights only to humans is only gradually changing in favor of animals that are not humans. That alone is a reason to suspect that ascribing moral responsibility to robots may not become a reality in the near future, even if robots could be affected physically or mentally by reward or punishment.

5.4 There Is No Responsibility Gap

So we must return to the central question: do AI systems create a responsibility gap? Technologies themselves cannot be held morally responsible today, but does the same apply to the people behind the technology?

There is reason to suspect that you can hold people morally responsible for mistakes made by an AI system. Consider an army officer who deploys a child soldier. The child is given a weapon to fight the enemy. But in the end, the child kills innocent civilians, thus committing a war crime. Perhaps not many people would say that the child is responsible for the civilian casualties, but in all likelihood we would believe that at least someone is responsible, that is, the officer. However, is there a difference between this case and the use of, for example, autonomous weapons? If so, is that difference relevant? Of course, child soldiers are human beings; robots are not. In both cases, however, a person undertakes an action knowing that undesirable situations may follow and that one can no longer control them. If the officer is morally responsible, why shouldn't the same apply to those who decide to use autonomous AI systems? Are autonomous weapons and other autonomous AI systems something exceptional in that regard?

At the same time, there is also reason to be skeptical about the possibility of assigning moral responsibility. Suppose you are a soldier and kill a terror suspect. If you used a classic weapon that functions as it should, a 9-mm pistol for example, then without a doubt you are entirely – or at least to a large extent – responsible for the death of the suspect. Suppose, however, that you want to kill the same person, and you only have a semiautomatic drone. You are in a room far away from the war zone where the suspect is, and you give the drone all the information about the person you are looking for. The drone is able to scout the area itself, and when the technology indicates that the search process is over, you can assess the result of the search and then decide whether or not the drone should fire. Based on the information you gathered, you give the order to fire. But what actually happens? The person killed is not the terror suspect and was therefore killed by mistake. That mistake has everything to do with a manufacturing error, which led to a defect in the drone's operation. Of course, that does not imply that you are in no way morally responsible for this person's death. So, there is no responsibility gap, but probably most people would feel justified in saying that you are less responsible than if you had used a 9-mm pistol. This has to do with the fact that the decision to fire is based on information that comes not from yourself but from the drone, information that incidentally happens to be incorrect.

For many, the decrease in the soldier's causal role through technology is accompanied by a decrease in responsibility. The siphoning off of an activity – the acquisition of information – implies not that humans are not responsible, but that they are responsible to a lesser degree. This fuels the suspicion that devolving all decisions onto AI systems leads to the so-called responsibility gap. But is that suspicion correct? If not, why? These questions bring us to the heart of the analysis of the issue of moral responsibility and AI.

5.4.1 Conditions for Responsibility

Our thesis is that reliance on autonomous technologies does not imply that we can never hold anyone responsible for their mistakes. To argue this, we must consider whether the classical conditions for responsibility are also met. We already referred to the capacity for sensation in the broad sense of the word, but what other conditions must be fulfilled for someone to be held responsible? Classically, three conditions are regarded as jointly sufficient for moral responsibility: causal responsibility, autonomy, and knowledge.

It goes without saying that moral responsibility presupposes causal responsibility. Someone who is not involved at all in the creation of the (undesirable) result of an action cannot be held morally responsible for that result. In the context of AI systems, several people meet this condition: the programmer, the manufacturer, and the user. However, this does not mean that we have undermined the responsibility gap thesis. Not every (causal) involvement is associated with moral responsibility. Recall the example of the scientist in the laboratory we discussed earlier: we do hold the scientist responsible, but only in a causal sense.

Thus, more is needed. Moral responsibility also requires autonomy. This concept can be understood in at least two ways. First, "autonomy" can be interpreted in a negative way. In that case, it means that the one who is autonomous in that respect can function completely independently, without human intervention. For our reasoning, only the second, positive form is relevant. This variant means that you can weigh things against each other, and that you can make your own decision based on that. However, the fact that you are able to deliberate and decide is not sufficient to be held morally responsible. For example, you may make the deliberate decision to kill the king, but when the king is killed, you are not necessarily responsible for it, for example, because someone else does it just before you pull the trigger and independently of your decision. You are only responsible if your deliberate decision is at the root of the murder, that is, if there is a causal link between your autonomous decision and the act.

Knowledge is the final condition. You can only be held morally responsible if you have the necessary relevant knowledge.Footnote 10 One who does not know that an action is wrong cannot be responsible for it. Furthermore, if the consequences of an act are unforeseeable, then you cannot be punished either. Note that the absence of knowledge does not necessarily exonerate you. If you do not know certain things that you should have known, and the lack of knowledge leads to an undesirable result, then you are still morally responsible for that result. For example, if a driver runs a red light and causes an accident as a result, then the driver is still responsible for the accident, even if it turns out that she was unaware of the prohibition against running a red light. After all, it is your duty as a citizen and car driver – read: your role responsibility – to be aware of that rule.Footnote 11

5.4.2 Control as Requirement

So, someone who is involved in the use of a technology, who makes the well-considered decision to use that technology, and who is aware of the relevant possible consequences of that technology, can be held morally responsible for what goes wrong with the technology. At least that is what the classical analysis of responsibility implies. So why do authors such as Matthias and Sparrow nevertheless conclude that there are responsibility gaps?

They point to an additional condition that must be met. Once an action or a certain course of events has been set in motion, they believe you must have control over it. So even if you are causally involved – for example, because you made the decision that the action or course of events should take place – it would, on their view, be unfair to punish you for an undesirable outcome if you could do nothing more about it once it was initiated. They argue that, since AI systems can function completely independently, in such a way that you cannot influence their decisions due to their high degree of autonomy and capacity for self-learning, you cannot hold anyone responsible for the consequences.

If you are held responsible for an action, it usually means that you have control. As CEO, I am responsible for my company’s poor numbers because I could have made different decisions that benefited the company more. Conversely, I have no control over a large number of factors and thus bear no responsibility for them. For example, I have no control over the weather conditions, nor do I bear any responsibility for the consequences of good or bad weather. Thus, responsibility is often accompanied by control, just as the absence of control is usually accompanied by the absence of responsibility. Yet we argue that it is false to say that you must have control over an initiated action or course of events to be held responsible, and that not having control takes away your responsibility. This is demonstrated by the following.

Imagine you are driving and after a few minutes, you have an epileptic seizure that causes you to lose control of the wheel and to seriously injure a cyclist. It is not certain that you will be punished, let alone receive a severe sentence, but few, if any, will deny that you are responsible for the cyclist's injury, in spite of your lack of control over the car's steering wheel. This is mainly the case because you possess all the relevant knowledge. You do not know that a seizure will occur within a few minutes, but as someone with epilepsy you do know that there is a risk of a seizure and that it may be accompanied by an accident. Furthermore, you are autonomous (in a positive sense). You are able to weigh up the desire to drive somewhere by yourself against the risk of an attack, and to decide on that basis. Finally, you purposefully get in the car. As a result, you are causally connected to the undesirable consequence in a way that sufficiently grounds moral responsibility. After all, if you decide knowing that it may lead to undesirable consequences, then you are justified in considering yourself a candidate for punishment at the time the undesirable consequence actually occurs. Again, it is not certain that punishment will follow, but those who take a risk are responsible for that risk, and thus can be punished when it turns out that the undesirable consequence actually occurs.

We can conclude from the above that not having control does not absolve one of moral responsibility. Therefore, we do not believe that AI systems are associated with a responsibility gap due to a lack of control over the technology. However, we cannot conclude from the foregoing that the idea of a responsibility gap in the case of autonomous AI is incorrect and that in all cases someone is responsible for the errors caused by that technology. After all, situations might occur in which the other conditions for moral responsibility are not met, which would still lead us to conclude that the use of autonomous AI goes hand in hand with a responsibility gap.

5.4.3 Is Someone Responsible?

To prove that it is not true that no one can ever be held responsible, we invoke some previously cited examples: a civilian is killed by an autonomous weapon and a self-driving car hits a cyclist.

To begin with, it is important to note that both dramatic accidents are the result of a long chain of events that stretches from the demand for production, through the search for funding, to the programming and use. If we are looking for a culprit, we might be able to identify several people – one could think of the designer or producer, for example – but the most obvious culprit is the user: the commander who decides to deploy an autonomous weapon during a conflict, or the occupant of the autonomous car. Just as the epilepsy patient is responsible for the cyclist’s injury, it is justified to put them forward as candidates for punishment, for the following reasons.

First of all, both are aware of the context of use and of the possible undesirable consequences. They do not know whether or not an accident will happen, let alone where and when exactly. After all, autonomous cars and weapons are (mainly) based on machine learning, which means that it is not (always) possible to predict what decision will be made. But the kinds of accidents that can happen are not unlimited. Killing civilians and destroying their homes (autonomous weapons) and hitting a cyclist or crashing into a group of people (self-driving car) are dramatic but foreseeable; as a user, you know such things can happen. And if you do not know, that is a failure on your part: you should know. It is your duty, your role responsibility, to consider the possible negative consequences of the things you use.

Second, both commander and owner are sufficiently autonomous. They are able to weigh the advantages against the disadvantages: the chance of fewer deaths in their own ranks against the risk of war crimes (autonomous weapons), and the chance of being able to work while on the move against the risk of traffic casualties (self-driving car).

Third, if, based on these considerations, the decision is made to actually pursue the use of autonomous cars and weapons, while knowing that this may bring undesirable consequences, then it is justifiable to hold both the commander and the owner responsible for deliberately allowing the anticipated undesirable consequences to occur. Those who take risks accept responsibility for that risk; they accept that they may be penalized in the event that the unwanted, anticipated consequence actually occurs.

Thus, in terms of responsibility, the use of AI systems is consistent with an existing moral practice. Just as you can hold people responsible for using nonautonomous technologies, people are also responsible for things over which they have no control but with which they are connected in a relevant way. So the autonomy of the technology does not erase the user’s role responsibility, nor does it absolve the user of moral responsibility. The path the system takes to reach a decision may be completely opaque to the user, but the system does not create a responsibility gap.

Those who disagree must either demonstrate what is wrong with the existing moral practice in which we ascribe responsibility to people, or demonstrate the relevant difference between moral responsibility in the case of autonomous systems and everyday moral practice. Of course, there are differences between using an autonomous system on the one hand and driving a car as an epilepsy patient on the other. The question, however, is whether those differences matter when it comes to moral responsibility.

To be clear, our claim here is only that the absence of control does not necessarily lead to a gap. The thesis we put forward is not that there can never be a gap in the case of AI. The reason is that the third, epistemic condition must be met. There is no gap if the consequences are and should be foreseen (and if there is autonomy and a causal link). In contrast, there may be a gap in case the consequences are unforeseeable (or in case one of the other conditions is not met).

5.5 Is a Responsibility Gap Problematic?

We think there are good reasons to believe that at least someone is responsible when autonomous AI makes mistakes – maybe there is even collective responsibilityFootnote 12 – since it is sufficient to identify one responsible person to undermine the thesis of a responsibility gap (assuming the other conditions are met). However, suppose that our analysis goes wrong in several places, and that you really cannot hold anyone responsible for the damage caused by the toy robot AIBO, Google’s self-driving car, Amazon’s recruitment system, or the autonomous weapon system. Would that support the conclusion that ethics is being disrupted by autonomous systems? In other words, would this gap also be morally problematic? To answer that question, we look at two explanations for the existence of the practice of responsibility. The first has to do with prevention; the second points to the symbolic meaning of punishment.

Someone robs a bank, a soldier kills a civilian, and a car driver ignores a red light: these are all examples of undesirable situations that we, as a society, do not want to happen. To prevent this, to ensure that the norm is not infringed again later, something like the imputation of responsibility was created, a moral practice based on the psychological mechanism of classical conditioning. After a violation, a person is held responsible and is a candidate for unpleasant treatment, with the goal of preventing the violation from happening again in the future.

That goal, prevention, must obviously be pursued, and it is clear that the means – punishing the responsible party – is often sufficient to achieve it. Yet prevention is not necessarily tied to punishment; punishing the person responsible is not necessary for the purpose of prevention. There are ways other than punishment to ensure that the same mistake is not made again. You can teach people to follow the rules, for example, by giving them extra explanations and setting a good example. Undesirable situations can thus be prevented from recurring without anyone being held morally responsible. This appears to be exactly the case in the context of AI.

Take an algorithm that ignores all women’s cover letters, or the Amazon Mechanical Turk platform that wrongfully blocks your account, preventing you from accepting jobs. To prevent such a morally problematic event from occurring again in the future, it is natural that the AI system is tinkered with by someone with sufficient technical knowledge, such as the programmer. It is quite possible that the system has so many layers that the designer cannot see the problem and therefore cannot fix it. But it is also possible that the programmer can successfully intervene, so that the AI system will not make that mistake in the future. In that case, the technical work is sufficient to prevent the problem and, for the purpose of prevention, no one needs to be a candidate for punishment – recall that being a candidate for punishment is precisely how we defined moral responsibility. In other words, if the goal is purely preventive in nature, then the solely technical intervention of the designer can suffice, and the alleged absence of moral responsibility is not a problem.

There is another purpose that is often cited to justify the imputation of responsibility. That purpose has a symbolic character. Namely, it is about respecting the dignity of a human being. Is that goal, too, related to the designation of a candidate for punishment? In light of that objective, would a responsibility gap be a problem?

In a liberal democracy, everyone has moral standing. Whatever your characteristics and regardless of what you do, you have moral standing due to the mere fact of being human, and that holds for everyone. That standing is only substantial insofar as legal rights are attached to it. The principle that every human being has moral value implies that you have rights and that others have duties toward you. Among other things, you have the right to education and employment, and others may not intentionally hurt or insult you without good reason. An employer is permitted to decide not to hire you on the basis of relevant criteria, but it flagrantly violates your status as a being with moral standing if they belittle or ridicule you during a job interview without good reason.

Imagine the latter happens. This is a problem, because it is a denial of the fact that you have moral standing. The practice of imputing moral responsibility is at least in part a response to such a problem. Something undesirable takes place – a person’s dignity is violated – and in response someone is punished, or at least designated as a candidate for punishment. Punishment here means that a person is hurt and experiences an unpleasant sensation, something that you do not wish for. The purpose of that punishment, that unpleasant experience, is to underscore that the violation of dignity was a moral wrong, and thus to affirm the dignity of the victim. The punishment does not heal the wound or undo the error, but it has symbolic importance. It cuts through the denial of moral status that was inherent in the crime.

The affirmation of moral value is clearly a good, and a goal that can be realized by means of punishment. However, it is questionable whether that goal can be achieved exclusively by those means. Suppose an autonomous weapon kills a soldier. Suppose, moreover, that it is true, contrary to what we have just argued, that no one can be held responsible for this death. Does that mean that the moral value of the soldier can no longer be emphasized? It is true that assigning responsibility expresses the idea that the value of the soldier is taken seriously. Moreover, it is undoubtedly desirable that, out of respect for the value of the individual, someone should be designated as a candidate for punishment. However, the claim that responsibility is necessary for the recognition of dignity is false. One can also do justice to the deceased without holding anyone responsible. Perhaps the most obvious example of this is a funeral. After all, the significance of this ritual lies primarily in the fact that it underscores that the deceased has intrinsic value.

To be clear, we are not claiming that ascribing moral responsibility is a meaningless practice. Nor do we mean to say that, if the use of AI led to a gap, the impossibility of holding someone responsible would never be a problem. Our point is that prevention and respect are not in themselves sufficient reasons to conclude that a responsibility gap in the context of AI is a moral tragedy.Footnote 13

5.6 Conclusion

AI offers many opportunities, but also comes with (potential) problems – many of which are discussed in the various chapters of this handbook. In this contribution, we focused on the relationship between AI and moral responsibility and made two arguments. First, the use of autonomous AI does not necessarily involve a responsibility gap. Second, even if it did, we argued why that would not necessarily be morally problematic.

6 Artificial Intelligence, Power and SustainabilityFootnote *

Gry Hasselbalch and Aimee Van Wynsberghe
6.1 Introduction

Artificial intelligence (AI) has the potential to address several issues related to sustainable development. It can be used to predict the environmental impact of certain actions, to optimize resource use, and to streamline production processes. However, AI is also unsustainable in numerous ways, both environmentally and socially. From an environmental perspective, both the training of AI algorithms and the processing and storing of the data used to train AI systems result in heavy carbon emissions, not to mention the mineral extraction and the water and land usage associated with the technology’s development. From a social perspective, AI has to date maintained discriminatory impacts on minorities and vulnerable demographics, resulting from nonrepresentative and biased training data sets. It has also been used to carry out invisible surveillance practices or to influence democratic elections through microtargeting. These issues highlight the need to address the long-term sustainability of AI, and to avoid getting caught up in the hype, power dynamics, and competition surrounding this technology.

In this chapter we outline the ethical dilemma of sustainable AI, centering on AI as a technology that can help tackle some of the biggest challenges of an evolving global sustainable development agenda, while at the same time in and by itself may adversely impact our social, personal, and natural environments now and for future generations.

In the first part of the chapter, AI is discussed against the background of the global sustainable development agenda. We then continue to discuss AI for sustainability and the sustainability of AI,Footnote 1 which includes a view on the physical infrastructure of AI and what this means in terms of the exploitation of people and the planet. Here, we also use the example of “data pollution” to examine the sustainability of AI from multiple angles.Footnote 2 In the last part of the chapter, we explore the ethical implications of AI on sustainability. Here, we apply a “data ethics of power”Footnote 3 as an analytical tool that can help further explore the power dynamics that shape the ethical implications of AI for the sustainable development agenda and its goals.

6.2 AI and the Global Sustainable Development Agenda

Public and policy discourse around AI is often characterized by hype and technological determinism. Companies are increasingly marketing their big data initiatives as “AI” projectsFootnote 4 and AI has gained significant strategic importance in geopolitics as a symbol of regions’ and countries’ competitive advantages in the world. However, in all of this, it is important to remember that AI is a human technology with far-reaching consequences for our environment and future societies. Consequently, the ethical implications of AI must be considered integral to the ongoing global public and policy agenda on sustainable development. Here, the socio-technical constitution of AI necessitates reflection on its sustainability in our present and a new narrative about the role it plays in our common futures.Footnote 5 The “sustainable” approach is one that is inclusive in both time and space; where the past, present, and future of human societies, the planet, and environment are considered equally important to protect and secure, as is the integration of all countries in economic and social change.Footnote 6 Furthermore, our use of the concept “sustainable” demands we ask what practices in the current development and use of AI we want to maintain and alternatively what practices we want to repair and/or change.

AI technologies are today widely recognized as having the potential to help achieve sustainability goals such as those outlined in the EU’s Green DealFootnote 7 and the UN’s Sustainable Development goals.Footnote 8 Indeed, AI can be deployed for climate action by turning raw data into actionable information. For example, AI systems can analyze satellite images and identify deforestation or help improve predictions with forecasts of solar power generation to balance electrical grids. In cities, AI can be used for smart waste management, to measure air pollution, or to reduce energy use in city lighting.Footnote 9

However, the ethical implications of AI are also intertwined with the sustainability of our social, personal, and natural environments. As described before, AI’s impacts on those environments come in many shapes and forms, such as carbon footprints,Footnote 10 biased or “oppressive” search algorithms,Footnote 11 or the use of AI systems for microtargeting voters on social media.Footnote 12 It is hence becoming increasingly evident that – if AI is in and by itself an unsustainable technology – it cannot help us reach the sustainable development goals that have been defined and refined over decades by multiple stakeholders.

Awareness of the double edge of technological progress and of the role of humans in the environment has long been a central part of the global political agenda of collaborative sustainable action. The United Nations Conference on the Human Environment, held in Stockholm in 1972, was the first global conference to recognize the impact of science and technology on the environment and to emphasize the need for global collaboration and action. As the report from the conference states:

In the long and tortuous evolution of the human race on this planet, a stage has been reached when, through the rapid acceleration of science and technology, man has acquired the power to transform his environment in countless ways and on an unprecedented scale.Footnote 13

This report also coined the term “Environmentally Sound Technologies” (ESTs) to refer to technologies or technological systems that can help reduce environmental pollution while being sustainable in their design, implementation, and adoption.

The Brundtland report Our Common Future,Footnote 14 published in 1987 by the United Nations, further developed the direction for the sustainable development agenda. It drew attention to the fact that global environmental problems are primarily the result of the poverty of the Global South and the unsustainable consumption and production in the Global North. Thus, the report emphasized that while risks of cross-border technology use are shared globally, the activities that give rise to the risks as well as the benefits received from the use of these technologies are concentrated in a few countries.

At the United Nations Conference on Environment and Development (UNCED) held in Brazil in 1992, also known as the Earth Summit, the “Agenda 21 Action Plan” was created calling on governments and other influential stakeholders to implement a variety of strategies to achieve sustainable development in the twenty-first century. The plan reiterated the importance of developing and transferring ESTs: “Environmentally sound technologies protect the environment, are less polluting, use all resources in a more sustainable manner, recycle more of their wastes and products, and handle residual wastes in a more acceptable manner than the technologies for which they were substitutes.”Footnote 15

In a subsequent step, the United Nations Member States adopted the 17 Sustainable Development Goals (SDGs) in 2015 as part of the UN 2030 Agenda for Sustainable Development. The goals aim to achieve a balance between economic, social, and environmental sustainability and address issues such as climate change, healthcare and education, inequality, and economic growth.Footnote 16 The agenda also emphasized the need for ESTs to achieve these goals and stressed the importance of adopting environmentally sound development strategies and technologies.Footnote 17

If we look at how the global policy agenda on AI and sustainability has developed in tandem with the sustainable development agenda, the intersection of AI and sustainability becomes clear. HasselbalchFootnote 18 has illustrated how the focus on AI and sustainability results from a recognition of the ethical and social implications of AI, combined with a long-standing focus on the environmental impact of science and technology in a global and increasingly inclusive sustainable development agenda. In this context, the growing awareness of AI’s potential to support sustainable development goals is reflected in several AI policies, strategies, research efforts, and investments in green transitions and circular economies around the world.Footnote 19

In this regard, the European Union (EU) has taken a particularly prominent role in establishing policies and regulations for the responsible and sustainable development of AI. In 2018, the European Commission for instance established the High-Level Expert Group on AI (HLEG),Footnote 20 as part of its European AI Strategy, tasked with developing ethics guidelines as well as policy and investment recommendations for AI within the EU. The group was composed of 52 individual experts and representatives from various stakeholder groups. The HLEG developed seven key requirements that AI systems should meet in order to be considered trustworthy. One of these requirements specifically emphasized “societal and environmental well-being”:

AI systems should benefit all human beings, including future generations. It must hence be ensured that they are sustainable and environmentally friendly. Moreover, they should take into account the environment, including other living beings, and their social and societal impact should be carefully considered.Footnote 21

The establishment of the HLEG on AI and the publication of its ethics guidelines and requirements illustrate a growing awareness in the EU of the impact of AI on society and the natural environment. The EU’s Green Deal, presented in 2019, highlighted several environmental considerations related to AI and emphasized that the principles of sustainability must be a fundamental starting point not only for the development of AI technologies but also for the creation of a digital society.

Furthermore, the European Commission’s Communication on Fostering a European approach to artificial intelligenceFootnote 22 and its revised Coordinated Plan on AI emphasized the European Green Deal’s encouragement to use AI to achieve its objectives and to establish leadership in sectors related to the environment and climate change. This includes activities aimed at developing trustworthy (values-based, with a “culture by design” approachFootnote 23) AI systems, as well as an environmentally sound AI socio-technical infrastructure for the EU. For example, the European Commission’s proposal for the world’s first comprehensive AI legislation lays down a uniform legal framework for the development, marketing, and use of AI in accordance with Union values, based on a categorization of the risks that AI systems pose to the fundamental rights and safety of citizens. In early 2023, the European Parliament suggested adding further transparency requirements on AI’s environmental impact to the proposal. Moreover, the Coordinated Plan on AI also focuses on creating a “green deal data space” and seeks to incorporate environmental concerns in international coordination and cooperation on AI.

6.3 AI for Sustainability and the Sustainability of AI

In 2019, van Wynsberghe argued that the field of AI ethics has neglected the value of sustainability in its discourse on AI. Instead, at the time, this field was concentrated on case studies and particular applications that allowed industry and academics to ignore the larger systemic issues related to the design, development, and use of AI. Sustainable AI, as van Wynsberghe proposed, forces one to take a step back from individual applications and to see the bigger picture, including the physical infrastructure of AI and what this means in terms of the exploitation of people and the planet. Van Wynsberghe defines Sustainable AI as “a movement to foster change in the entire lifecycle of AI products (i.e. idea generation, training, re-tuning, implementation, governance) towards greater ecological integrity and social justice.”Footnote 24 She also outlines two branches of sustainable AI: “AI for sustainability” (for achieving the global sustainable agenda) and the “sustainability of AI” (measuring the environmental impact of making and using AI). There are numerous examples of the former, as AI is increasingly used to accelerate efforts to mitigate the climate crisis (think, for instance, of initiatives around “AI for Good,” and “AI for the sustainable development goals”). However, relatively little is done for the latter, namely, to measure and decrease the environmental impact of making and using AI. To be sure, the sustainability of AI is not just a technical problem and cannot be reduced to measuring the carbon emissions from training AI algorithms. Rather, it is about fostering a deeper understanding of AI as exacerbating and reinforcing patterns of discrimination across borders. Those working in the mines to extract minerals and metals that are used to develop AI are voiceless in the AI discourse. Those whose backyards are filled with mountains of electronic waste from the disposal of the physical infrastructure underpinning AI are also voiceless in the AI debate. Sustainable AI is meant to be a lens through which to uncover ethical problems and power asymmetries that one can only see when one begins from a discussion of environmental consequences. Thus, sustainable AI is meant to bring the hidden, vulnerable demographics who bear the burden of the cost of making and using AI to the fore and to show that the environmental consequences of AI also shed light on systemic social injustices that demand immediate attention.

The environmental and social injustices resulting from the making and use of AI inevitably raise the question: what is it that we, as a society, want to sustain? If sustainability carries a connotation of maintaining and continuing something, is sustainable AI then just about maintaining the environmental practices that give rise to such social injustices? Or does sustainable AI also carry the possibility of opening a dialogue on how to repair and transform those injustices?Footnote 25

6.3.1 Examining the Sustainability of AI: Data Pollution

Taking an interest in sustainability and AI is simultaneously a tangible and an intangible endeavor. As SætraFootnote 26 has emphasized, many of AI’s ethical implications as well as impacts on society and nature (positive and negative) are intangible and potential, meaning that they cannot be empirically verified or observed. At the same time, many of its impacts are also visible, tangible, and even measurable. Understanding the ethical implications of AI in the context of a global sustainability agenda should hence involve both philosophical and ethical analysis of its intangible and potential impacts and their role in our personal, social, and natural environments, and sociological and technological analyses of the tangible impacts of AI’s very concrete technology design, adoption, and development.

One way of examining the sustainability of AI from multiple angles is to explore the sustainability of the data of AI, often associated with concerns around “data pollution,” as discussed further below.Footnote 27 Since the mid-1990s, societies have transformed through processes of “datafication,”Footnote 28 converting everything into data configurations. This process has enabled a wide range of new technological capabilities and applications, including the currently most practical application of the idea of AI (conceptualized as a machine that mimics human intelligence in one form or another), namely machine learning (ML). ML is a method used to autonomously or semiautonomously make sense of big data generated in areas such as health care, transportation, finance, and communication. As datafication continues to expand and evolve as the fuel of AI/ML models, its ethical implications become more apparent as well. HasselbalchFootnote 29 has argued that AI can be seen as an extension of “Big Data Socio-Technical Infrastructures” (BDSTIs) that are institutionalized in IT practices and regulatory frameworks. “Artificial Intelligence Socio-Technical Infrastructures” (AISTIs) are then an evolution of BDSTIs, with added components that allow for real-time sensing, learning, and autonomy.

In turn, the term “data pollution” can be considered a discursive response to the implications of BDSTIs and AISTIs in society. It is used as a catch-all metaphor to describe the adverse impacts that the generation, storing, handling, and processing of digital data have on our natural environment, social environment, and personal environment.Footnote 30 If data pollution is understood as the unsustainable handling, distribution, and generation of data resources,Footnote 31 then data pollution due diligence in a business setting, for example, will imply managing the adverse effects and risks of what could be described as the data exhaust of big data.

Firstly, the data pollution of AI has been understood as a tangible impact, that is, as “data-driven unsustainability”Footnote 32 with effects on the natural environment. For example, a famous study by Strubell et al. found that training (including tuning and experimentation) a large AI model for natural language processing, such as machine translation, emits seven times more carbon than an average human does in one year.Footnote 33 The environmental impact of digital technologies such as AI is not limited to the data they use, but also includes the disposal of information and communication technology and other effects that may be harder to identify (such as consumers’ energy consumption when making use of digital services).Footnote 34
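To make the composition of such estimates more concrete, the sketch below (in Python) shows how training-related emissions are commonly approximated: the energy consumed (accelerator hours multiplied by average power draw and by the data center’s power usage effectiveness) is multiplied by the carbon intensity of the local electricity grid. This is a minimal illustration only; the function and all numbers are hypothetical placeholders and are not taken from Strubell et al. or from this chapter.

def training_co2_kg(gpu_hours: float,
                    avg_gpu_power_kw: float,
                    pue: float,
                    grid_kg_co2_per_kwh: float) -> float:
    """Rough estimate of CO2-equivalent emissions (kg) for one training run.

    energy (kWh) = GPU-hours x average power draw (kW) x data-center PUE
    emissions    = energy x carbon intensity of the electricity grid
    """
    energy_kwh = gpu_hours * avg_gpu_power_kw * pue
    return energy_kwh * grid_kg_co2_per_kwh

# Hypothetical example: 10,000 GPU-hours at 0.3 kW per GPU, a PUE of 1.5,
# on a grid emitting 0.4 kg CO2e per kWh (roughly 1,800 kg CO2e in total).
print(f"{training_co2_kg(10_000, 0.3, 1.5, 0.4):,.0f} kg CO2e")

Such back-of-the-envelope figures also make clear why the same training run can have a very different footprint depending on where and when it is executed: the carbon intensity of the grid varies considerably by region and over time.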

Secondly, data pollution is also described as the more intangible impacts of big data on our social and personal environments. Originally, the term was used to illustrate mainly the privacy implications for citizens of the big data economy and the datafication of individual lives and societies. Schneier has emphasized the effects of the massive collection and processing of big data by companies and governments alike on people’s right to privacy by stating that “this tidal wave of data is the pollution problem of the information age. All information processes produce it.”Footnote 35 Furthermore, Hirsch and King have deployed the term “data pollution” as analogous to the “negative externalities” of big data as used in business management.Footnote 36 They argue that when managing negative impacts of big data, such as data spills, privacy violations, and discrimination, businesses can learn from the strategies adopted to mitigate traditional forms of pollution and environmental impacts. Conversely, Ben-ShaharFootnote 37 has introduced data pollution in the legal field as a way to “rethink the harms of the data economy”Footnote 38 and to manage the negative externalities of big data with an “environmental law for data protection.”Footnote 39 He, however, also recognizes that harmful data exhaust is not only disrupting the privacy and data protection rights of individuals but that it adversely affects an entire digital ecosystem of social institutions and public interests.Footnote 40 The scope of “data pollution” hence evolved over time and expanded into a more holistic approach to the adverse effects of the big data economy. In this way, the term is also a testimony to the rising awareness of what is at stake in a big data society, including a disruption of the power balances in society, across multiple environments. As argued by Hasselbalch and Tranberg in their 2016 book on data ethics: “The effects of data practices without ethics can be manifold – unjust treatment, discrimination and unequal opportunities. But privacy is at its core. It’s the needle on the gauge of society’s power balance.”Footnote 41

6.3.2 AI as Infrastructure

Let us be clear that we are not speaking of isolated events when we discuss AI, ML, and the data practices necessary to train and use these algorithms. Rather, we are talking about a massive infrastructure of algorithms that underpins the business models of large tech companies and powers startups and the like. And this infrastructure has internalized the exploitation of people and the planet. A key issue here is that the material constitution of AI and data is often ignored, or we are oblivious to it. The idea that data is “stored in the cloud,” for example, invokes a symbolic reference to the data being stored “somewhere out there” rather than in massive data centers around the world requiring large amounts of land and water.

AI not only uses existing infrastructures to function, such as power grids and water supply chains, but is also used to enhance existing infrastructures. Google famously used an algorithm created by DeepMind to conserve electricity in its data centers. In addition, Robbins and van Wynsberghe have shown how AI itself ought to be conceptualized as an infrastructure insofar as it is embedded, transparent, visible upon breakdown, and modular.Footnote 42

Understanding AI as infrastructure demands that we question the building blocks of said infrastructure and the practices in place that maintain the functioning of said infrastructure. Without careful consideration, we run the risk of lock-in, not only in the sense of carbon emissions, but also in the sense of the power asymmetries that are maintained, the kinds of discrimination that run through our society, the forms of data collection underpinning the development and use of algorithms, and so on. In other words, “…the choices we make now regarding our new AI-augmented infrastructure not only relate to the carbon emissions that it will have; but also relate to the creation of constraints that will prevent us from changing course if that infrastructure is found to be unsustainable.”Footnote 43

As raised earlier, the domain of sustainable AI aims not only to address the unsustainable environmental practices at the root of AI production, but also to ask what we, as a society, wish to maintain. What practices of data collection and of data sovereignty do we want to pass on to future generations? Alternatively, what practices, both environmental and social, require a transformative kind of repair to better align with our societal values?

6.4 Analyzing AI and Sustainability with a Data Ethics of Power

Exploring AI’s sustainability implies understanding AI in context; that is, a conception of AI as a socio-technical infrastructure created and directed by humans in social, economic, political, and historical contexts, with impacts in the present as well as for future generations. Thus, AISTIs, as explored by Hasselbalch,Footnote 44 also represent power dynamics among various actors at the local, regional, and global levels. This is because they are human-made spaces evolving from the very negotiation and tension between different societal interests and aspirations.Footnote 45 An ethical analysis of AI and sustainability therefore necessitates an exploration of these power dynamics that are transformed, impacted, and even produced by AI in natural, social, and personal environments. We can here consider AISTIs as “socio-technical infrastructures of power,”Footnote 46 infrastructures of empowerment and disempowerment, and ask questions such as: whose or what interests and values does the core infrastructure serve? For example, which “data interests”Footnote 47 are embedded in the data design? Which interests and values conflict with each other, and how are these conflicts resolved in, for example, AI policies or standards?

Hasselbalch’s “data ethics of power” is an applied ethics approach concerned with making the power dynamics of the big data society and the conditions of their negotiation visible in order to point to design, business, policy, and social and cultural processes that support a human(-centric) distribution of power.Footnote 48 When taking a “data ethics of power” approach, the ethical challenges of AI and sustainability are considered from the point of view of power dynamics, with the aim of making these power dynamics visible and imagining alternative realities in design, culture, policy, and regulation. The assumption is that the ethical implications of AI are linked with architectures of powers. Thus, the identification of – and our response to – these ethical implications are simultaneously enabled and inhibited by structural power dynamics.

A comprehensive understanding of the power dynamics that shape and are shaped by AISTIs of power and their effect on sustainable development requires a multi-level examination of a “data ethics of power” that takes into account perspectives on the micro, meso, and macro levels.Footnote 49 This means, as Misa describes it, that we take into consideration different levels in the interaction between humans, technology, and the social and material world we live in.Footnote 50 In addition, as Edwards describes it, we should also consider “scales of time”Footnote 51 when grasping larger patterns of technological systems’ development and adoption in society on a historical scale, while also looking at their specific life cycles.Footnote 52 This approach allows for a more holistic understanding of the complex design, political, organizational, and cultural contexts of power of these technological developments. The objective of this approach is to avoid reductive analyses of complex socio-technical developments either focusing on the ethical implications of designers and engineers’ choices in micro contexts of interaction with technology or, on the other hand, reducing ethical implications to outcomes of larger macroeconomic or ideological patterns only. A narrow focus on ethical dilemmas in the micro contexts of design will steal attention from the wider social conditions and power dynamics, while an analysis constrained to macro structural power dynamics will fail to grasp individual nuances and factors by making sense of them only in terms of these larger societal dynamics. A “multi-level analysis”Footnote 53 hence has an interest in the micro, meso, and macro levels of social organization and space, which also includes looking beyond the here and now into the future, so as to ensure intergenerational justice.

The three levels of analysis of power dynamics (micro, meso, and macro) in time and space are, as argued by Hasselbalch,Footnote 54 central to the delineation of the ethical implications of AI and its sustainability. Let us concretize how these lenses can foster our understanding of what is at stake.

First, on the micro level, ethical implications are identified in the contexts and power dynamics of the very design of an AI system. Ethical dilemmas pertaining to issues of sustainability can be identified in the design of AI, and a core component of a sustainable approach to AI would be to design AI systems differently. What are the barriers and enablers on the micro design level for achieving sustainable AI? Think, for example, of an AI systems developer in Argentina who depends on cloud infrastructure from one of the big cloud providers, such as Amazon or Microsoft, which locks in her choices.

Second, on the meso level, we have institutions, companies, governments, and intergovernmental organizations that implement institutionalized requirements, such as international standards and laws on, for example, data protection. While doing so, interests, values, and cultural contexts (such as specific cultures of innovation) are negotiated, and some interests will take precedence in the implementation of these requirements. What are the barriers and enablers on the institutional, organizational, and governmental levels for tackling ethical implications and achieving sustainable AI? Think, for example, of a social media company in Silicon Valley with a big data business model implementing the requirements of the EU General Data Protection Regulation for European users of the platform.

Lastly, socio-technical systems such as AISTIs need what Hughes has famously referred to as a “technological momentum”Footnote 55 in society in order to evolve and consolidate. A technological momentum will most often be preceded by socio-technical change that takes the form of negotiations of interests. A macro-level analysis could therefore consider the increasing awareness of the sustainability of AI on the geopolitical agenda and how different societal interests are negotiated and expressed in cultures, norms, and histories on macro scales of time. This analysis would thus seek to understand the power dynamics of the geopolitical battle between different approaches to data and AI. What are the barriers and enablers on a historical and geopolitical scale for achieving sustainable AI? Think, for example, of the conflicts between different legal systems, or between different political and business “narratives” that shape the development of global shared governance frameworks between UN member states.

6.5 Conclusion

The public and policy discourse surrounding AI is frequently marked by excessive optimism and technological determinism. Most big data business endeavors are today promoted as “AI,” and AI has acquired a crucial significance in geopolitics as a representation of nations’ and regions’ superiority in the global arena. However, it is crucial to acknowledge that AI is a human-created technology with significant effects on our environment and on future societies. The field of sustainable AI is focused on addressing the unsustainable environmental practices in AI development, but not only that. It also asks us to consider the societal goals for AI’s role in future societies. This involves examining and shaping the design and use of AI, as well as the policy practices that we want to pass down to future generations.

In this chapter, we brought together the concept of sustainable AI with a “data ethics of power.” The public discourse on AI is increasingly recognizing the importance of both frameworks, and yet not enough is being done to systematically mitigate the concerns they identify. We therefore addressed the ethical quandary of sustainable AI: the technology presents opportunities for addressing sustainable development challenges while at the same time risking harm to the environment and society. By discussing the concept of AI for sustainability within the context of a global sustainable development agenda, we aimed to shed light on the power dynamics that shape AI and its impact on sustainable development goals. We argued that exploring the powers that shape the “data pollution” of AI can help to make the social and ethical implications of AI more tangible. It is our hope that, by considering AI through a more holistic lens, its adverse effects both in the present and in the future can be more effectively mitigated.

Footnotes

1 Artificial Intelligence A Perspective from the Field

1 Luc De Raedt, “Over machines die leren” in Pieter d’Hoine and Bart Pattyn (eds), Wetenschap in een veranderende wereld, Lessen voor de eenentwintigste eeuw (Leuven University Press, 2020); Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach, 4th ed (Pearson, 2020).

2 Luc De Raedt, Robin Manhaeve, Sebastijan Dumancic, Thomas Demeester, and Angelika Kimmig, “Neuro-symbolic = neural + logical + probabilistic” (2019) Proceedings of the International Workshop on Neural-Symbolic Learning and Reasoning at IJCAI.

3 Daniel Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux, 2013).

4 Tom Mitchell, Machine Learning (McGraw Hill, 1997).

5 Arthur Samuel, “Some studies in machine learning using the game of checkers” (1959) IBM Journal of Research and Development, 3(3): 210–229.

6 Note that the “machine” requires programming to be created. The “without programming” refers to the machine adapting to a task it has not seen before and is thus not explicitly programmed for.

7 Jonathan Schaeffer, One Jump Ahead: Challenging Human Supremacy in Checkers (Springer, 1997).

8 David Silver et al., “Mastering the game of Go with deep neural networks and tree search” (2016) Nature, 529: 484–489.

9 Julien Perolat et al., “Mastering the game of Stratego with model-free multiagent reinforcement learning” (2022) Science, 378(6623): 990–996.

11 Léon Bottou. “From machine learning to machine reasoning: An essay” (2014) Machine learning, 94: 133–149.

12 Sebastijan Dumancic, Wannes Meert, Stijn Goethals, Tim Stuyckens, Jelle Huygen, and Koen Denies. “Automated reasoning and learning for automated payroll management” (2021) In Proceedings of the AAAI Conference on Artificial Intelligence, 35(17): 15107–15116.

13 Daphne Koller and Nir Friedman, Probabilistic Graphical Models: Principles and Techniques (The MIT Press, 2009).

14 Linda Woolery and Jerzy Grzymala-Busse, “Machine learning for an expert system to predict preterm birth risk” (1994) Journal of the American Medical Informatics Association, 1(6): 439–446.

15 Steven Woudenberg, Linda van der Gaag, and Carin Rademaker, “An intercausal cancellation model for Bayesian-network engineering” (2015) International Journal of Approximate Reasoning, 63: 32–47.

16 Michael Beetz, Lorenz Mösenlechner, and Moritz Tenorth, “CRAM – A Cognitive Robot Abstract Machine for everyday manipulation in human environments,” (2010) In Proceedings of IEEE/RSJ International Conference on Intelligent Robots and Systems.

17 Sebastian Thrun et al. “Stanley: The Robot That Won the DARPA Grand Challenge” (2007) Springer Tracts in Advanced Robotics, vol 36. Springer.

18 When one tries to solve a machine learning problem using machine learning, this is referred to as meta-learning. See Luc De Raedt et al., “Elements of an automatic data scientist” (2018) In Proceedings of Advances in Intelligent Data Analysis XVII, Springer.

19 Note that all methods allow only a certain set of operations to allow for backward inference and thus not all possible operations. In the case of neural nets, for example, ripping the paper is an operation that is not supported.

20 Ross D. King, Jem Rowland, Wayne Aubrey, Maria Liakata, Magdalena Markham, Larisa N. Soldatova, Ken E. Whelan et al. “The robot scientist Adam.” (2009) Computer, 42(8): 46–54.

21 A game based on SAT that illustrates the hardness of the problem can be found online: www.cril.univ-artois.fr/~roussel/satgame/satgame.php?level=3&lang=eng

22 Joy Buolamwini and Timnit Gebru, “Gender shades: Intersectional accuracy disparities in commercial gender classification” (2018) Proceedings of Machine Learning Research, 81: 1–15.

23 Dries Van der Plas et al., “A reject option for automated sleep stage scoring.” (2021) In Proceedings of the Workshop on Interpretable ML in Healthcare at the International Conference on Machine Learning (ICML).

24 Laurens Devos, Wannes Meert, and Jesse Davis, “Versatile verification of tree ensembles” (2021) In the Proceedings of the 38th International Conference on Machine Learning (ICML).

25 The TAILOR Handbook of Trustworthy AI, https://tailor-network.eu/handbook/

26 While explanations are mentioned in legislation such as the GDPR, there is no clear legal norm. Therefore, it is not clear to what extent an explanation is required, and opinions differ. See Andrew D. Selbst and Julia Powles, “Meaningful information and the right to explanation” (2017) International Data Privacy Law, 7(4); Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, “Why a right to explanation of automated decision-making does not exist in the general data protection regulation” (2017) International Data Privacy Law, 7(2); https://iapp.org/news/a/is-there-a-right-to-explanation-for-machine-learning-in-the-gdpr/

27 Note that this trade-off is not always accurately portrayed and (mis)used as an excuse to avoid responsibility. See https://hdsr.mitpress.mit.edu/pub/f9kuryi8/release/8

28 Marco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. “Why should I trust you?”: Explaining the predictions of any classifier (2016). In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD ’16). Association for Computing Machinery, New York, NY, USA, 1135–1144. https://doi.org/10.1145/2939672.2939778

29 Scott M. Lundberg and Su-In Lee. “A unified approach to interpreting model predictions.” (2017) In Advances in Neural Information Processing Systems.

30 Riccardo Guidotti, “Counterfactual explanations and how to find them: Literature review and benchmarking” (2022) Data Mining and Knowledge Discovery. https://doi.org/10.1007/s10618-022-00831-6

31 Laurens Devos, Wannes Meert, and Jesse Davis, “Versatile verification of tree ensembles.” (2021) International Conference on Machine Learning (ICML).

32 European Commission, “White Paper on Artificial Intelligence: A European approach to excellence and trust” (2020), https://ec.europa.eu/info/publications/white-paper-artificial-intelligence-european-approach-excellence-and-trust_en.

2 Philosophy of AI A Structured Overview

1 John McCarthy et al., “A proposal for the Dartmouth Summer Research Project on Artificial Intelligence” (1955), www-formal.stanford.edu/jmc/history/dartmouth/dartmouth.html, accessed October 1, 2006.

2 As a sample: Eric Dietrich, “Philosophy of artificial intelligence” The Encyclopedia of Cognitive Science 203. The classic historical survey is Margaret A. Boden, Mind as Machine: A History of Cognitive Science (Oxford University Press, 2006).

3 John R. Searle, “Minds, brains and programs” (1980) Behavioral and Brain Sciences, 3(3): 417–424.

4 Stuart Russell, Human Compatible: Artificial Intelligence and the Problem of Control (Viking, 2019); Stuart Russell and Peter Norvig, Artificial Intelligence: A Modern Approach (4th ed., Prentice Hall, 2020); Günther Görz, Ute Schmid, and Tanya Braun, Handbuch der Künstlichen Intelligenz (5th ed., De Gruyter, 2020); Judea Pearl and Dana Mackenzie, The Book of Why: The New Science of Cause and Effect (Basic Books, 2018).

5 AGI conferences have been organized since 2008.

6 James Lighthill, Artificial Intelligence: A General Survey (Science Research Council, 1973).

7 Frank Rosenblatt, “The Perceptron: a perceiving and recognizing automaton (Project PARA)” Vol. 85, Issues 460–461 of Report (Cornell Aeronautical Laboratory, 1957); Yann LeCun, Yoshua Bengio, and Geoffrey Hinton, “Deep learning” (2015) Nature, 521: 436–444; James Garson and Cameron Buckner, “Connectionism” in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy (CSLI, Stanford, 2019), https://plato.stanford.edu/entries/connectionism/; Cameron J. Buckner, From Deep Learning to Rational Machines: What the History of Philosophy Can Teach Us about the Future of Artificial Intelligence (Oxford University Press, 2024).

8 See Chapter 3 of this book. See also: Vincent C. Müller, “Ethics of artificial intelligence and robotics” in Edward N. Zalta (ed.), Stanford Encyclopedia of Philosophy, (CSLI, Stanford University, 2020), https://plato.stanford.edu/entries/ethics-ai/; Vincent C. Müller, Can Machines Think? Fundamental Problems of Artificial Intelligence (Oxford University Press, forthcoming).

9 There are very few surveys and no recent ones. See Jack B. Copeland, Artificial Intelligence: A Philosophical Introduction (Blackwell, 1993); Matt Carter, Minds and Computers: An Introduction to the Philosophy of Artificial Intelligence (Edinburgh University Press, 2007); Luciano Floridi (ed), The Blackwell Guide to the Philosophy of Computing and Information (Blackwell, 2003); Luciano Floridi, The Philosophy of Information (Oxford University Press, 2011). Some of what philosophers had to say can be seen as undermining the project of AI; compare Eric Dietrich et al., Great Philosophical Objections to Artificial Intelligence: The History and Legacy of the AI Wars (Bloomsbury Academic, 2021).

10 Eric Margolis, Richard Samuels, and Stephen Stich (eds.), The Oxford Handbook of Philosophy of Cognitive Science (Oxford University Press, 2012).

11 Alan Turing, “Computing machinery and intelligence” LIX Mind 433.

12 Anonymous, “Digital computing newsletter” (1950) 2 Office of Naval Research, Mathematical Sciences Division, Washington DC 1.

13 Turing 442.

14 Shane Legg and Marcus Hutter, “Universal intelligence: A definition of machine intelligence” (2007) Minds and Machines, 17(4): 391–444, 402.

15 See, for example, Herbert A. Simon, “A behavioral model of rational choice” (1955) Quarterly Journal of Economics, 69(1): 99–118; Johanna Thoma, “Decision theory” in Richard Pettigrew and Jonathan Weisberg (eds), The Open Handbook of Formal Epistemology (PhilPapers, 2019), see also the neo-behaviorist proposal in Dimitri Coelho Mollo, “Intelligent Behaviour” [2022] Erkenntnis.

16 Nicholas Negroponte, Being digital (Vintage, 1995); see also John Haugeland, Artificial intelligence: The very idea (MIT Press, 1985) 57; Vincent C. Müller, “What is a digital state?” in Mark J. Bishop and Yasemin J. Erden (eds), The Scandal of Computation – What Is Computation? – AISB Convention 2013 (AISB, 2013), www.aisb.org.uk/asibpublications/convention-proceedings.

17 Alan Turing, “On computable numbers, with an application to the Entscheidungsproblem” 2 Proceedings of the London Mathematical Society 230; Kurt Gödel, “Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I” 38 Monatshefte für Mathematik und Physik 173. The original program was outlined in David Hilbert, “Mathematische Probleme” [Springer] Nachrichten von der Königlichen Gesellschaft der Wissenschaften zu Göttingen, Math-Phys Klasse 253. See, for example, Jack B. Copeland, Carl J Posy, and Oron Shagrir, Computability: Turing, Gödel, Church, and Beyond (MIT Press, 2013).

18 Hava T. Siegelmann, “Computation beyond the Turing limit” Science 545; Neural Networks and Analog Computation: Beyond the Turing Limit (Birkhäuser, 1997); Oron Shagrir, The Nature of Physical Computation (Oxford University Press, 2022); Gualtiero Piccinini, “Computation in physical systems” (2010) Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/computation-physicalsystems/.

19 John R. Searle, Mind: A Brief Introduction (Oxford University Press, 2004); Gordana Dodig-Crnkovic and Vincent C. Müller, “A dialogue concerning two world systems: Info-computational vs. mechanistic” in Gordana Dodig-Crnkovic and Mark Burgin (eds), Information and Computation: Essays on Scientific and Philosophical Understanding of Foundations of Information and Computation (World Scientific, 2011), https://worldscientific.com/worldscibooks/10.1142/7637#t=aboutBook.

20 Clare Horsman et al., “When does a physical system compute?” (2014) Proceedings of the Royal Society A: Mathematical, Physical and Engineering Science, 470(2169): 1–25.

21 Marcin Miłkowski, “Objections to computationalism: A survey” Roczniki Filozoficzne, LXVI: 1; Shimon Edelman, Computing the Mind: How the Mind Really Works (Oxford University Press, 2008); for the discussion, Stevan Harnad, “The symbol grounding problem” Physica D, 42: 335; Matthias Scheutz (ed), Computationalism: New Directions (Cambridge University Press, 2002); Oron Shagrir, “Two dogmas of computationalism” Minds and Machines, 7: 321; Francisco J. Varela, Evan Thompson, and Eleanor Rosch, The Embodied Mind: Cognitive Science and Human Experience (MIT Press, 1991).

22 Allen Newell and Herbert A. Simon, “Computer science as empirical inquiry: Symbols and search” Communications of the ACM, 19(3): 113–126, 116; cf. Boden 1419ff.

23 Daniel C. Dennett, Consciousness Explained (Little, Brown & Co., 1991), 107.

24 For an introduction to vision, see Kevin J. O’Regan, Why Red Doesn’t Sound Like a Bell: Understanding the Feel of Consciousness (Oxford University Press, 2011), chapters 1–5.

25 Andy Clark, “Whatever next? Predictive brains, situated agents, and the future of cognitive science” Behavioral and Brain Sciences, 36: 181; Andy Clark, Surfing Uncertainty: Prediction, Action, and the Embodied Mind (Oxford University Press, 2016); Karl J. Friston, “The free-energy principle: A unified brain theory?” Nature Reviews Neuroscience, 11: 127.

26 Andy Clark, Natural Born Cyborgs: Minds, Technologies, and the Future of Human Intelligence (Oxford University Press, 2003); Andy Clark and David J. Chalmers, “The extended mind” Analysis, 58: 7; Albert Newen, Shaun Gallagher, and Leon De Bruin, “4E Cognition: Historical roots, key concepts, and central issues” in Albert Newen, Leon De Bruin, and Shaun Gallagher (eds), The Oxford Handbook of 4E Cognition (Oxford Academic, Oxford University Press, 2018), pp. 3–16, https://doi.org/10.1093/oxfordhb/9780198735410.013.1, accessed June 29, 2023.

27 Hubert L. Dreyfus, What Computers Still Can’t Do: A Critique of Artificial Reason (2nd ed, MIT Press, 1992, 1972); Rolf Pfeifer and Josh Bongard, How the Body Shapes the Way We Think: A New View of Intelligence (MIT Press, 2007).

28 Searle, “Minds, brains and programs.”

29 David Cole, "The Chinese room argument" (2020), http://plato.stanford.edu/entries/chinese-room/; John Preston and Mark Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence (Oxford University Press, 2002).

30 Searle, "Minds, brains and programs" 424.

32 John McCarthy, “John Searle’s Chinese room argument” (2007), www-formal.stanford.edu/jmc/chinese.html, accessed June 10, 2007; John Haugeland, “Syntax, semantics, physics” in John Preston and Mark Bishop (eds), Views into the Chinese Room: New Essays on Searle and Artificial Intelligence (Oxford University Press, 2002) 385; Margaret A. Boden, Computer Models of the Mind: Computational Approaches in Theoretical Psychology (Cambridge University Press, 1988) 97.

33 Ludwig Wittgenstein, “Philosophische Untersuchungen” in Schriften I (Suhrkamp, 1980, 1953) §82.

34 John R. Searle, “Intentionality and its place in nature” in Consciousness and Language (Cambridge University Press, 2002, 1984); Searle, Mind: A brief introduction.

35 Russell 23.

36 Simon 99; Gregory Wheeler, "Bounded rationality" in Edward N. Zalta (ed), The Stanford Encyclopedia of Philosophy, vol Fall 2020 Edition (CSLI, 2020), https://plato.stanford.edu/archives/fall2020/entries/bounded-rationality/; Stuart Russell, "Rationality and intelligence: A brief update" in Vincent C. Müller (ed), Fundamental Issues of Artificial Intelligence (Springer, 2016) 16ff; Falk Lieder and Thomas L. Griffiths, "Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources" (2020) Behavioral and Brain Sciences, 43: e1 (Cambridge University Press).

37 Daniel Kahneman and Amos Tversky, "Prospect theory: An analysis of decision under risk" Econometrica, 47: 263; Daniel Kahneman, Thinking, Fast and Slow (Macmillan, 2011); Richard H. Thaler and Cass Sunstein, Nudge: Improving Decisions about Health, Wealth and Happiness (Penguin, 2008) vs. David Kirsh, "Problem solving and situated cognition" in P. Robbins and M. Aydede (eds), The Cambridge Handbook of Situated Cognition (Cambridge University Press, 2009).

38 Murray Shanahan, “The frame problem” in Edward N. Zalta (ed), Stanford Encyclopedia of Philosophy, vol Spring 2016 edition (CSLI, Stanford University, 2016), https://plato.stanford.edu/archives/spr2016/entries/frame-problem/.

39 Daniel C. Dennett, “Cognitive wheels: The frame problem of AI” in Christopher Hookway (ed), Minds, Machines, and Evolution: Philosophical Studies (Cambridge University Press, 1984).

40 Jerry A. Fodor, "Modules, frames, fridgeons, sleeping dogs, and the music of the spheres" in J. L. Garfield (ed), Modularity in Knowledge Representation and Natural-Language Understanding (The MIT Press, 1987) 140f; Dan Sperber and Deirdre Wilson, "Fodor's frame problem and relevance theory" Behavioral and Brain Sciences, 19: 530.

41 J. R. Lucas, "Minds, machines and Gödel: A retrospect" in Peter J. R. Millican and Andy Clark (eds), Machines and Thought (Oxford University Press, 1996); Peter Koellner, "On the question of whether the mind can be mechanized, I: From Gödel to Penrose" Journal of Philosophy, 115: 337; Peter Koellner, "On the question of whether the mind can be mechanized, II: Penrose's new argument" Journal of Philosophy, 115: 453.

42 Margaret A. Boden, "Creativity and artificial intelligence: A contradiction in terms?" in Elliot Samuel Paul and Scott Barry Kaufman (eds), The Philosophy of Creativity: New Essays (Oxford University Press, 2014), 224–244, https://philpapers.org/archive/PAUTPO-3.pdf; Simon Colton and Geraint A. Wiggins, Computational Creativity: The Final Frontier? (Montpellier, 2012); Marta Halina, "Insightful artificial intelligence" Mind and Language, 36: 315.

43 Harry Frankfurt, "Freedom of the will and the concept of a person" The Journal of Philosophy, LXVIII: 5; Daniel C. Dennett, Elbow Room: The Varieties of Free Will Worth Wanting (Cambridge, MA: MIT Press, 1984).

44 Chapter 6 of this book by Lode Lauwaert and Ann-Katrien Oimann delves further into the subject of AI and responsibility.

45 Galen Strawson, “Free will” (2011) Routledge Encyclopedia of Philosophy, Taylor and Francis, doi:10.4324/9780415249126-V014-2, www.rep.routledge.com/articles/thematic/free-will/; Thomas Pink, Free Will: A Very Short Introduction (Oxford University Press, 2004); Alfred R. Mele, Free Will and Luck (Oxford University Press, 2006); Daniel C. Dennett and Gregg D. Caruso, “Just deserts” Aeon.

46 Thomas W. Simpson and Vincent C. Müller, "Just war and robots' killings" The Philosophical Quarterly, 66: 302; Rob Sparrow, "Killer robots" Journal of Applied Philosophy, 24: 62; Vincent C. Müller, "Is it time for robot rights? Moral status in artificial entities" Ethics & Information Technology, 23: 579.

47 Thomas Nagel, "What is it like to be a bat?" Philosophical Review, 83: 435; Thomas Nagel, What Does It All Mean? A Very Short Introduction to Philosophy (Oxford University Press, 1987), chapter 3.

48 Frank Jackson, "What Mary didn't know" Journal of Philosophy, 83: 291.

49 David J. Chalmers and John R. Searle, “‘Consciousness and the philosophers’: An exchange” (1997) The New York Review of Books, www.nybooks.com/articles/1997/05/15/consciousness-and-the-philosophers-an-exchange/; David J. Chalmers, “Précis of the Conscious Mind” (1999) Philosophy and Phenomenological Research, LIX(2): 435–438, 436; Donald Davidson, “Mental events” in L. Foster and J. Swanson (eds), Experience and Theory (Amherst, MA: University of Massachusetts Press, 1970).

50 O’Regan.

51 Thomas Metzinger, The Ego Tunnel: The Science of the Mind and the Myth of the Self (Basic Books, 2009); Eric Olsen, “Personal identity” (2023) Stanford Encyclopedia of Philosophy, https://plato.stanford.edu/entries/identity-personal/.

52 See, for example, “The man who fell out of bed” in Oliver Sacks, The Man Who Mistook His Wife for a Hat, and Other Clinical Tales (New York: Summit Books, 1985) or the view of humans as superorganisms, based on the human microbiome.

53 Russell, Human Compatible: Artificial Intelligence and the Problem of Control, 172.

54 Vincent C. Müller and Michael Cannon, "Existential risk from AI and orthogonality: Can we have it both ways?" Ratio, 35: 25.

3 Ethics of AI Toward a “Design for Values” Approach

1 Heikkilä, M. “Dutch scandal serves as a warning for Europe over risks of using algorithms” (2022) Politico, March 29. www.politico.eu/article/dutch-scandal-serves-as-a-warning-for-europe-over-risks-of-using-algorithms/

2 Ledford, H.Millions of black people affected by racial bias in health-care algorithms” (2019) Nature, 574(7780): 608–609.

3 Aizenberg, E. and Van Den Hoven, J.Designing for human rights in AI.” (2020) Big Data & Society, 7(2): 2053951720949566.

4 Heikoop, D. D., Hagenzieker, M., Mecacci, G., Calvert, S., Santoni De Sio, F., and van Arem, B.Human behaviour with automated driving systems: A quantitative framework for meaningful human control” (2019) Theoretical Issues in Ergonomics Science, 20(6): 711–730.

5 Santoni de Sio, F. and Mecacci, G.Four responsibility gaps with artificial intelligence: Why they matter and how to address them” (2021) Philosophy & Technology, 34: 1057–1084.

6 On the so-called responsibility gap, see also Chapter 6 of this book.

7 Rawls, J., Justice as Fairness: A Restatement (Harvard University Press, 2001).

8 Pettit, P. A Theory of Freedom: From the Psychology to the Politics of Agency (Oxford University Press, 2001).

9 Klenk, M.Digital well-being and manipulation online” in C. Burr and L. Floridi (eds.), Ethics of Digital Well-Being: A Multidisciplinary Perspective (Cham: Springer, 2020), pp. 81–100. https://doi.org/10.1007/978-3-030-50585-1_4

10 Robeyns, I.The capability approach: A theoretical survey” (2005) Journal of Human Development, 6(1): 93–117.

11 Posner, R. A. Economic Analysis of Law (Aspen Publishing, 2014).

12 Umbrello, S. and De Bellis, A. F. "A value-sensitive design approach to intelligent agents" in Roman V. Yampolskiy (ed.), Artificial Intelligence Safety and Security (Chapman & Hall/CRC, 2018), 395–409.

13 Van den Hoven, J., Vermaas, P., and van de Poel, I. (eds.), Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains (Dordrecht: Springer Netherlands, 2015).

14 List, C.Group agency and artificial intelligence” (2021) Philosophy & Technology, 34(4): 1213–1242.

15 Durán, J. M. and Jongsma, K. R.Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI” (2021) Journal of Medical Ethics, 47(5): 329–335.

16 Arrieta, A. B., Díaz-Rodríguez, N., Del Ser, J., Bennetot, A., Tabik, S., Barbado, A., … and Herrera, F.Explainable artificial intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI” (2020) Information Fusion, 58: 82–115.

17 Buijsman, S.Defining explanation and explanatory depth in XAI” (2022) Minds and Machines, 32(3): 563–584.

18 van der Waa, J., Nieuwburg, E., Cremers, A., and Neerincx, M.Evaluating XAI: A comparison of rule-based and example-based explanations” (2021) Artificial Intelligence, 291: 103404.

19 Akhtar, N. and Mian, A. “Threat of adversarial attacks on deep learning in computer vision: A survey” (2018) IEEE Access, 6: 14410–14430.

20 Cervantes, J. A., López, S., Rodríguez, L. F., Cervantes, S., Cervantes, F., and Ramos, F.Artificial moral agents: A survey of the current status” (2020) Science and Engineering Ethics, 26: 501–532.

21 In this regard, see also Chapter 6 of this book.

22 Van de Poel, I.Embedding values in artificial intelligence (AI) systems” (2020) Minds and Machines, 30(3): 385–409.

23 Gabriel, I.Artificial intelligence, values, and alignment” (2020) Minds and Machines, 30(3): 411–437.

24 Mittelstadt, B. D., Allo, P., Taddeo, M., Wachter, S., and Floridi, L.The ethics of algorithms: Mapping the debate” (2016) Big Data & Society, 3(2): 2053951716679679.

25 Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … and Vayena, E.AI4People – An ethical framework for a good AI society: Opportunities, risks, principles, and recommendations” (2018) Minds and Machines, 28: 689–707.

26 Cortese, J. F. N. B., Cozman, F. G., Lucca-Silveira, M. P., and Bechara, A. F. “Should explainability be a fifth ethical principle in AI ethics?” (2022) AI and Ethics, 1–12.

27 Kazim, E. and Koshiyama, A. S.A high-level overview of AI ethics” (2021) Patterns, 2(9): 100314.

28 Huang, C., Zhang, Z., Mao, B. and Yao, X. “An overview of artificial intelligence ethics” (2022) IEEE Transactions on Artificial Intelligence, 4(4): 799–819.

29 Jobin, A., Ienca, M., and Vayena, E.The global landscape of AI ethics guidelines” (2019) Nature Machine Intelligence, 1(9): 389–399.

31 Hovy, D. and Prabhumoye, S.Five sources of bias in natural language processing” (2021) Language and Linguistics Compass, 15(8): e12432.

32 Carlini, N., Liu, C., Erlingsson, Ú., Kos, J., and Song, D. “The secret sharer: Evaluating and testing unintended memorization in neural networks” (2019, August) in USENIX Security Symposium (Vol. 267).

33 Bender, E. M., Gebru, T., McMillan-Major, A., and Mitchell, S. “On the dangers of stochastic parrots: Can language models be too big?” (2021, March) in Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (pp. 610–623).

34 Lee, J., Le, T., Chen, J., and Lee, D. “Do language models plagiarize?” (2022) arXiv preprint arXiv:2203.07618.

35 Choi, J. H., Hickman, K. E., Monahan, A., and Schwarcz, D. "ChatGPT goes to law school" (2023) Available at SSRN: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4335905

36 Gilson, A., Safranek, C., Huang, T., Socrates, V., Chi, L., Taylor, R. A., and Chartash, D. "How well does ChatGPT do when taking the medical licensing exams? The implications of large language models for medical education and knowledge assessment" (2022) medRxiv, 2022–12, https://doi.org/10.1101/2022.12.23.22283901

37 Wang, W. and Siau, K.Artificial intelligence, machine learning, automation, robotics, future of work and future of humanity: A review and research agenda” (2019) Journal of Database Management (JDM), 30(1): 61–79.

38 Cappuccio, M. L., Sandis, C., and Wyatt, A.Online manipulation and agential risk” in M. Klenk and F. Jongepier (eds.), The Philosophy of Online Manipulation (New York, NY: Routledge, 2022), pp. 72–90.

39 Klenk, 2020.

40 Klenk, M. and Hancock, J. “Autonomy and online manipulation” (2019) Internet Policy Review. Retrieved from https://policyreview.info/articles/news/autonomy-and-online-manipulation/1431

41 Klenk, M. and Jongepier, F. (eds.). The Philosophy of Online Manipulation (New York, NY: Routledge, 2022).

42 Dastile, X., Celik, T., and Potsane, M.Statistical and machine learning models in credit scoring: A systematic literature survey” (2020) Applied Soft Computing, 91: 106263.

43 Zou, L. and Khern-am-nuai, W. “AI and housing discrimination: The case of mortgage applications” (2022) AI and Ethics, 1–11.

44 Liu, L. T., Dean, S., Rolf, E., Simchowitz, M., and Hardt, M. “Delayed impact of fair machine learning.” (2018, July) in International Conference on Machine Learning, pp. 3150–3158. PMLR.

45 Mitchell, S., Potash, E., Barocas, S., D’Amour, A., and Lum, K. “Algorithmic fairness: Choices, assumptions, and definitions” (2021) Annual Review of Statistics and Its Application, 8: 141–163.

46 Alves, G., Bernier, F., Couceiro, M., Makhlouf, K., Palamidessi, C., and Zhioua, S. “Survey on fairness notions and related tensions” (2022) EURO Journal on Decision Processes, 11, 100033, arXiv preprint arXiv:2209.13012.

47 Wang, Y., Wang, X., Beutel, A., Prost, F., Chen, J., and Chi, E. H. “Understanding and improving fairness-accuracy trade-offs in multi-task learning” (2021, August) in Proceedings of the 27th ACM SIGKDD Conference on Knowledge Discovery & Data Mining, pp. 1748–1757.

48 Henin, C. and Le Métayer, D. “Beyond explainability: justifiability and contestability of algorithmic decision systems” (2021) AI & SOCIETY, 1–14.

49 Pang, G., Shen, C., Cao, L., and Hengel, A. V. D.Deep learning for anomaly detection: A review” (2021) ACM Computing Surveys (CSUR), 54(2): 1–38.

50 Yeung, K.The new public analytics as an emerging paradigm in public sector administration” (2023) Tilburg Law Review, 27(2): 1–32.

51 Heikkilä, 2022.

52 Ten Seldam, B. and Brenninkmeijer, A. “The Dutch benefits scandal: A cautionary tale for algorithmic enforcement” (2021) EU Law Enforcement, April 30, 2021, https://eulawenforcement.com/?p=7941.

53 Venkatasubramanian, S. and Alfano, M. “The philosophical basis of algorithmic recourse” (2020) in Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, 284–293.

54 Nikolenko, S. I. Synthetic Data for Deep Learning (Springer Nature, 2021), Springer Optimization and Its Applications (SOIA), Vol. 174.

55 Stadler, T., Oprisanu, B., and Troncoso, C. “Synthetic data–anonymisation groundhog day” (2022) in 31st USENIX Security Symposium (USENIX Security 22), pp. 1451–1468.

56 Selbst, A. D., Boyd, D., Friedler, S. A., Venkatasubramanian, S., and Vertesi, J. “Fairness and abstraction in sociotechnical systems” (2019, January) in Proceedings of the Conference on Fairness, Accountability, and Transparency, pp. 59–68.

57 Schemmer, M., Hemmer, P., Kühl, N., Benz, C., and Satzger, G. “Should I follow AI-based advice? Measuring appropriate reliance in human-AI decision-making” (2022) arXiv preprint arXiv:2204.06916.

58 Santoni de Sio and Mecacci, 2021.

59 Berker, S. "The explanatory ambitions of moral principles" (2019) Noûs, 53: 904–936.

60 Dancy, J. "Moral particularism" in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2017 Edition), https://plato.stanford.edu/archives/win2017/entries/moral-particularism/.

61 Zerubavel, E. "Toward a concept-driven sociology: Sensitizing concepts and the prepared mind" in Wayne H. Brekhus, Thomas DeGloma, and William Ryan Force (eds), The Oxford Handbook of Symbolic Interactionism (online ed., Oxford Academic, April 14, 2021), https://doi.org/10.1093/oxfordhb/9780190082161.013.10

62 Slote, M. Morals from Motives (Oxford University Press, 2001).

63 Zagzebski, L. Divine Motivation Theory (New York: Cambridge University Press, 2004).

64 Swanton, C. Virtue Ethics: A Pluralistic View (Oxford University Press, 2003).

65 Sinnott-Armstrong, W. “Consequentialism” in Edward N. Zalta and Uri Nodelman (eds.), The Stanford Encyclopedia of Philosophy (2022) https://plato.stanford.edu/archives/win2022/entries/consequentialism/.

66 Feldman, F.Actual utility, the objection from impracticality, and the move to expected utility” (2006) Philosophical Studies, 129: 49–79.

67 Emmons, D. C.Act vs. rule-utilitarianism” (1973) Mind, 82(326): 226–33.

68 Mulgan, T. Understanding Utilitarianism (Routledge, 2014).

69 Alexander, L. and Moore, M. “Deontological ethics” in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Winter 2021 Edition), https://plato.stanford.edu/archives/win2021/entries/ethics-deontological/.

70 Kamm, F. M. Intricate Ethics: Rights, Responsibilities, and Permissible Harm (Oxford University Press, 2007).

71 Nozick, R. Anarchy, State and Utopia (New York: Basic Books, 1974).

72 Vallentyne, P. and Steiner, H. (eds.). Left-Libertarianism and Its Critics (Houndmills: Palgrave, 2000).

73 Harris, C. E.The good engineer: Giving virtue its due in engineering ethics” (2008) Science and Engineering Ethics, 14: 153–164.

74 Liao, Q. V. and Muller, M. “Enabling value sensitive AI systems through participatory design fictions” (2019) arXiv preprint arXiv:1912.07381.

75 Genus, A. and Stirling, A.Collingridge and the dilemma of control: Towards responsible and accountable innovation” (2018) Research Policy, 47(1): 61–69.

76 Rawls, 2001.

77 See also the discussion of Rawls in Chapter 5 of this book.

78 Kuppler, M., Kern, C., Bach, R. L., and Kreuter, F. “Distributive justice and fairness metrics in automated decision-making: How much overlap is there?” (2021) arXiv preprint arXiv:2105.01441.

79 Barsotti, F. and Koçer, R. G. “MinMax fairness: From Rawlsian Theory of Justice to solution for algorithmic bias” (2022) AI & SOCIETY, 1–14.

80 Beisbart, C. and Räz, T.Philosophy of science at sea: Clarifying the interpretability of machine learning” (2022) Philosophy Compass, 17(6): e12830.

81 Buijsman, 2022.

82 Santoni de Sio, F. and Van den Hoven, J. “Meaningful human control over autonomous systems: A philosophical account” (2018) Frontiers in Robotics and AI, 5: 15.

83 Klenk and Jongepier, 2022.

84 Yeung, K.‘Hypernudge’: Big Data as a mode of regulation by design” (2017) Information, Communication & Society, 20(1): 118–136.

85 Friedman, B., Kahn, P., and Borning, A.Value sensitive design: Theory and methods” (2002) University of Washington Technical Report, 2: 12.

86 Umbrello and De Bellis, 2018.

87 van den Hoven et al., 2015.

88 Spinuzzi, C.The methodology of participatory design” (2005) Technical Communication, 52(2): 163–174.

89 Van de Poel, 2020.

90 Friedman, B., Hendry, D. G., and Borning, A.A survey of value sensitive design methods” (2017) Foundations and Trends® in Human–Computer Interaction, 11(2): 63–125.

91 Cortese et al., 2022.

92 Veluwenkamp, H. and van den Hoven, J.Design for values and conceptual engineering” (2023) Ethics and Information Technology, 25(1): 1–12.

93 Van de Poel, I. “Translating values into design requirements” (2013) Philosophy and Engineering: Reflections on Practice, Principles and Process, 253–66.

94 Moore, A. D.Privacy: its meaning and value” (2003) American Philosophical Quarterly, 40(3): 215–27.

95 Gedik, B. and Liu, L.Protecting location privacy with personalized k-anonymity: Architecture and algorithms” (2007) IEEE Transactions on Mobile Computing, 7(1): 1–18.

96 Dwork, C. “Differential privacy” in Automata, Languages and Programming: 33rd International Colloquium, ICALP 2006, Venice, Italy, July 10–14, 2006, Proceedings, Part II 33 (pp. 1–12). Springer Berlin Heidelberg.

97 Van de Poel, I. "Conflicting values in design for values" in Handbook of Ethics, Values, and Technological Design: Sources, Theory, Values and Application Domains (2015), 89–116.

98 Kozodoi, N., Jacob, J. and Lessmann, S.Fairness in credit scoring: Assessment, implementation and profit implications” (2022) European Journal of Operational Research, 297(3): 1083–1094.

99 Van de Poel, 2015.

100 Van de Poel, I.Design for value change” (2021) Ethics and Information Technology, 23(1): 27–31.

101 Sullivan, W. “Uber backtracks after jacking up prices during Sydney hostage crisis” (2014) Washington Post, December 15, 2014, www.washingtonpost.com/news/morning-mix/wp/2014/12/15/uber-backtracks-after-jacking-up-prices-during-syndey-hostage-crisis/

102 Mökander, J. and Floridi, L.Ethics-based auditing to develop trustworthy AI” (2021) Minds and Machines, 31(2): 323–27.

103 Liu et al., 2018.

104 Cox, J. “London terror attack: Uber slammed for being slow to turn off ‘surge pricing’ after rampage” (2017) Independent, June 4, 2017, www.independent.co.uk/news/uk/home-news/london-terror-attack-uber-criticised-surge-pricing-after-london-bridge-black-cab-a7772246.html

105 Zerilli, J., Knott, A., Maclaurin, J., and Gavaghan, C.Algorithmic decision-making and the control problem” (2019) Minds and Machines, 29: 555–78.

106 Georgieva, I., Lazo, C., Timan, T., and van Veenstra, A. F.From AI ethics principles to data science practice: A reflection and a gap analysis based on recent frameworks and practical experience” (2022) AI and Ethics, 2(4): 697–711.

107 Mittelstadt, B. "Principles alone cannot guarantee ethical AI" (2019) Nature Machine Intelligence, 1(11): 501–507.

108 van den Hoven, J., Miller, S., and Pogge, T. (eds.). Designing in Ethics (Cambridge University Press, 2017). doi:10.1017/9780511844317.

109 Carey, A. N. and Wu, X.The statistical fairness field guide: Perspectives from social and formal sciences” (2023) AI and Ethics, 3(1): 1–23.

4 Fairness and Artificial Intelligence

1 In addition to the notion of privacy. Isaac Ben-Israel et al., “Towards regulation of AI systems: Global perspectives on the development of a legal framework on artificial intelligence systems based on the Council of Europe’s Standards on Human Rights, Democracy and the Rule of Law” (Council of Europe, 2020) 2020/16 50, https://rm.coe.int/prems-107320-gbr-2018-compli-cahai-couv-texte-a4-bat-web/1680a0c17a.

2 W. B. Gallie, “IX.—Essentially contested concepts” (1956) Proceedings of the Aristotelian Society, 56: 167.

3 See, for instance: Christine M. Korsgaard, “Self‐constitution in the ethics of Plato and Kant” in Christine M. Korsgaard (ed.), The Constitution of Agency: Essays on Practical Reason and Moral Psychology (Oxford University Press, 2008), 106–107, https://doi.org/10.1093/acprof:oso/9780199552733.003.0004, accessed February 15, 2023.

4 T. M. Scanlon, “Rights, Goals, and Fairness” 85.

5 See, for example: Scanlon (n 4); Jonathan Wolff, “Fairness, respect, and the egalitarian ethos” (1998) Philosophy & Public Affairs, 27: 97; Christopher McMahon, Reasonableness and Fairness: A Historical Theory (1st ed., Cambridge University Press, 2016), www.cambridge.org/core/product/identifier/9781316819340/type/book, accessed January 31, 2023.

6 Creel and Hellman do not necessarily position these three understandings as the sole interpretations that could be given to the notion of arbitrariness. Kathleen Creel and Deborah Hellman, "The algorithmic leviathan: Arbitrariness, fairness, and opportunity in algorithmic decision-making systems" (2022) Canadian Journal of Philosophy, 52: 26, 34, 37–38. For their analysis of these definitions, and their limitations in light of AI-driven decision-making, reference can be made to the aforementioned work.

7 Creel and Hellman (n 6).

8 Ibid., 28–29.

10 H. L. A. Hart, The Concept of Law (Clarendon Press, 1961); Stefan Sottiaux, "Het Gelijkheidsbeginsel: Langs Oude Paden En Nieuwe Wegen" (2008) Rechtskundig Weekblad, 72: 690.

11 See among others: Stefan Sottiaux, "Het Gelijkheidsbeginsel: Langs Oude Paden En Nieuwe Wegen" (2008) Rechtskundig Weekblad, 72: 690; Christopher McCrudden and Haris Kountouros, "Human Rights and European Equality Law" in Helen Meenan (ed), Equality Law in an Enlarged European Union: Understanding the Article 13 Directives (Cambridge University Press, 2007), 73–116, https://doi.org/10.1017/CBO9780511493898.004.

12 Creel and Hellman (n 6).

13 Sandra Wachter, “The theory of artificial immutability: protecting algorithmic groups under anti-discrimination law” (2022) SSRN Electronic Journal, 20, www.ssrn.com/abstract=4099100, accessed May 27, 2022.

14 Of course, differences will play a role in our evaluation of decision-making procedures. We need to assess whether characteristics were reasonable or sensible in light of the task at hand. The point made, however, is that it might not be the only thing that should take up our attention. Wachter, for instance, argues that AI-guided decisions and procedures should be based on empirically coherent information that has a proven connection or an intuitive link to the procedure at hand. Wachter (n 13). See also: Sandra Wachter and Brent Mittelstadt, "A right to reasonable inferences: Re-thinking data protection law in the age of big data and AI" (2019) Columbia Business Law Review, 2019: 494.

15 See also: Peter Westen, “The empty idea of equality” (1982) Harvard Law Review, 95: 537; Hart (n 10) 159–160.

16 Iris Marion Young, Justice and the Politics of Difference (Princeton University Press, 1990); Laurens Naudts, Fair or Unfair Differentiation? Reconsidering the Concept of Equality for the Regulation of Algorithmically Guided Decision-Making (Doctoral Dissertation, 2023).

17 Creel and Hellman (n 6) 22.

18 Damian Clifford and Jef Ausloos, “Data protection and the role of fairness” (2018) Yearbook of European Law, 37: 130.

19 See also: Westen (n 15); Hart (n 10) 159–160.

20 On this point, see also: Christine M. Korsgaard, “Self‐Constitution in the Ethics of Plato and Kant” in Christine M. Korsgaard (ed), The Constitution of Agency: Essays on Practical Reason and Moral Psychology (Oxford University Press, 2008) 106–108, https://doi.org/10.1093/acprof:oso/9780199552733.003.0004, accessed 15 February 2023.

21 Creel and Hellman (n 6). See also: Jeremy Waldron, One Another’s Equals: The Basis of Human Equality (Belknap Press: Harvard University Press, 2017).

22 See also: Takis Tridimas, The General Principles of EU Law (2nd ed., Oxford University Press, 2006), p. 62.

23 Young (n 16) 201. See also: Michael J. Sandel, The Tyranny of Merit: What’s Become of the Common Good? (Penguin Books, 2021).

24 Young (n 16) 204.

25 In this sense, Rawls observes, the principle of fair opportunity can only be imperfectly carried out: "the extent to which natural capacities develop and reach fruition is affected by all kinds of social conditions and class attitudes." John Rawls, A Theory of Justice (Belknap Press: Harvard University Press, 1971), p. 74.

26 Richard J. Arneson, “Against Rawlsian equality of opportunity” (1999) Philosophical Studies: An International Journal for Philosophy in the Analytic Tradition, 93: 77.

27 See also: Sandra Fredman, “Substantive equality revisited” (2016) International Journal of Constitutional Law, 14: 712; Sandra Fredman, “Providing equality: Substantive equality and the positive duty to provide” (2005) South African Journal on Human Rights, 21: 163.

28 This is also a criticism that can be leveled against “fairness-by-unawareness” design metrics. These metrics construct fairness as achieved when certain characteristics are not explicitly used in a prediction process. Fredman, “Substantive equality revisited” (n 27) 720. See also: Naudts (n 16).

29 Philip Pettit, Judging Justice. An Introduction to Contemporary Political Philosophy (Routledge & Kegan Paul, 1980).

30 See also Richard Arneson, “Egalitarianism” in Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy (Summer 2013, Metaphysics Research Lab, Stanford University, 2013), https://plato.stanford.edu/archives/sum2013/entries/egalitarianism/, accessed February 8, 2023.

31 Rawls, A Theory of Justice (n 25).

32 Iason Gabriel, “Towards a theory of justice for artificial intelligence” (2022) Daedalus, 151: 12.

33 John Rawls, “Justice as fairness” (1958) The Philosophical Review, 67: 164, 178.

34 Ibid. Fairness is guaranteed as a result of the fair conditions under which people are able to reach an agreement regarding the principles of justice. They are the outcome of an original agreement in a suitably defined initial situation. The participants of this initial situation – or the original position – decide upon the principles that will govern their association. While the participants are rational and in the pursuit of their own interests, they are also each other's equals. They view themselves and each other as a source of legitimate claims. In addition, the parties that partake in this hypothetical original position are situated behind a veil of ignorance. No participant knows their place in society or their natural talents; they do not know the details of their life. From this position, they are to derive the appropriate principles of justice. Rawls, A Theory of Justice (n 25) chapter 3, The Original Position, and p. 119.

35 Rawls, A Theory of Justice (n 25) 11.

36 Rawls, A Theory of Justice (n 25).

37 See also: Gabriel (n 32) 10; Jamie Grace, “‘AI theory of justice’: Using Rawlsian approaches to better legislate on machine learning in government” (2020) SSRN Electronic Journal, www.ssrn.com/abstract=3588256, accessed August 10, 2022.

38 Gabriel (n 32) 9–10. See also: Virginia Eubanks, Automating Inequality: How High-Tech Tools Profile, Police and Punish the Poor (Macmillan Publishers, 2018); Caroline Criado Perez, Invisible Women: Exposing Data Bias in a World Designed for Men (1st ed., Chatto & Windus, 2019); Safiya Umoja Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (New York University Press, 2018), www.degruyter.com/document/doi/10.18574/9781479833641/html, accessed December 8, 2021.

39 See also: Gabriel (n 32) 9–10.

40 John Rawls, Justice as Fairness: A Restatement (Erin Kelly ed., Belknap Press: Harvard University Press, 2001), pp. 58–59.

41 John Rawls, "The basic liberties and their priority" in Sterling M. McMurrin (ed.), The Tanner Lectures on Human Values (1981), p. 89; Rawls, Justice as Fairness: A Restatement (n 40) 60.

42 An additional area of social life where fairness is mandated is the situation where a person has been done some injury and must be compensated. Hart (n 10) 159.

43 Young (n 16) 8.

44 Thomas Pogge, "Relational conceptions of justice: Responsibilities for health outcomes" in Sudhir Anand, Fabienne Peter, and Amartya Sen (eds), Public Health, Ethics, and Equity (Oxford University Press, 2004), p. 147; Christian Schemmel, "Distributive and relational equality" (2011) Politics, Philosophy & Economics 127, http://journals.sagepub.com/doi/10.1177/1470594X11416774, accessed August 4, 2020.

45 Thomas W. Pogge, “Three problems with Contractarian-Consequentialist ways of assessing social institutions” (1995) Social Philosophy and Policy, 12: 241. Young (n 16) 24–25.

46 Reuben Binns, “Fairness in machine learning: Lessons from political philosophy” (2018) Proceedings of Machine Learning Research.

47 Atoosa Kasirzadeh, “Algorithmic fairness and structural injustice: Insights from feminist political philosophy” (2022) Proceedings of the 2022 AAAI/ACM Conference on AI, Ethics, and Society, http://arxiv.org/abs/2206.00945, accessed February 3, 2023; Pratik Gajane and Mykola Pechenizkiy, “On formalizing fairness in prediction with machine learning” (2017) arXiv:1710.03184 [cs, stat], http://arxiv.org/abs/1710.03184, accessed July 23, 2018; presented during the 5th Workshop on Fairness, Accountability, and Transparency in Machine Learning (Stockholm, 2018).

48 Young (n 16); Carina Fourie, Fabian Schuppert, and Ivo Walliman-Helmer (eds), Social Equality: On What It Means to Be Equals (Oxford University Press, 2015). For a relational perspective on AI, see: Abeba Birhane, “Algorithmic injustice: A relational ethics approach” (2021) Patterns, 2: 100205; Salomé Viljoen, “A relational theory of data governance” (2021) The Yale Law Journal, 82; Kasirzadeh (n 47); Virginia Dignum, “Responsible Artificial Intelligence: Recommendations and Lessons Learned,” in Responsible AI in Africa: Challenges and Opportunities, ed. Damian Okaibedi Eke, Kutoma Wakunuma, and Simisola Akintoye (Cham: Springer International Publishing, 2023), 195–214, https://doi.org/10.1007/978-3-031-08215-3_9; Virginia Dignum, “Relational artificial intelligence” (2022) arXiv:2202.07446 [cs], http://arxiv.org/abs/2202.07446, accessed February 17, 2022; Naudts (n 16); Laurens Naudts, “The Digital Faces of Oppression and Domination: A Relational and Egalitarian Perspective on the Data-driven Society and its Regulation.” In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). Association for Computing Machinery, New York, NY, USA (2024), 701–12. https://doi.org/10.1145/3630106.3658934.

49 Schemmel (n 44); Pogge (n 44).

50 Young (n 16) 25–31.

51 See, for example: Schemmel (n 44); John Baker, “Conceptions and dimensions of social equality” in Carina Fourie, Fabian Schuppert, and Ivo Walliman-Helmer (eds), Social Equality: On What It Means to Be Equals (Oxford University Press, 2015); Marie Garrau and Cécile Laborde, “Relational equality, non-domination, and vulnerability” in Carina Fourie, Fabian Schuppert, and Ivo Walliman-Helmer (eds), Social Equality: On What It Means to Be Equals (Oxford University Press, 2015).

52 See also: Viljoen (n 48).

53 Young (n 16) 59. See also: María C. Lugones and Elizabeth V. Spelman, “Have we got a theory for you! Feminist theory, cultural imperialism and the demand for ‘the woman’s voice’” (1983) Women’s Studies International Forum, 6: 573; For a more in-depth application of this notion onto AI, as well as Young’s other “faces of oppression,” see also: Laurens Naudts, The Digital Faces of Oppression and Domination: A Relational and Egalitarian Perspective on the Data-driven Society and its Regulation. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT ’24). Association for Computing Machinery, New York, NY, USA (2024), 701–12. https://doi.org/10.1145/3630106.3658934.

54 Nancy Fraser, “Talking about needs: Interpretive contests as political conflicts in welfare-state societies” (1989) Ethics, 99: 291; Nancy Fraser, “Toward a discourse ethic of solidarity” (1985) PRAXIS International, 5: 425.

55 Young (n 16) 59. See also: W. E. B. Du Bois, The Souls of Black Folk (Oxford University Press, 2007).

56 The notion of classification is used here in a broad sense. It refers not only to the ways in which an algorithmic decision-making process may group together individuals, but also to instances where data are classified or labelled in a training set. Kate Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (Yale University Press, 2021) 128 and 139, https://doi.org/10.12987/9780300252392; Kate Crawford and Trevor Paglen, "Excavating AI: The politics of images in machine learning training sets" (Excavating AI, September 19, 2019), www.excavating.ai, accessed February 7, 2020.

57 On the role of power in image data sets, see also the work by Milagros Miceli et al.: Milagros Miceli et al., “Documenting computer vision datasets: An invitation to reflexive data practices,” Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency (ACM, 2021), https://dl.acm.org/doi/10.1145/3442188.3445880, accessed March 10, 2021; Milagros Miceli, Julian Posada, and Tianling Yang, “Studying up machine learning data: Why talk about bias when we mean power?” (2021) arXiv:2109.08131 [cs], http://arxiv.org/abs/2109.08131, accessed October 4, 2021; Milagros Miceli, “AI’s symbolic power: Classification in the age of automation” (2019); Milagros Miceli and Julian Posada, “A question of power: How task instructions shape training data” (2020) Symposium on Biases in Human Computation and Crowdsourcing (BHCC2020, November 11, 2020), https://sites.google.com/sheffield.ac.uk/bhcc-2020/program?authuser=0; Milagros Miceli, Martin Schuessler, and Tianling Yang, “Between subjectivity and imposition: Power dynamics in data annotation for computer vision” (2020) Proceedings of the ACM on Human-Computer Interaction, 4: 1.

58 Crawford (n 56) 144. See also: Miceli et al. (n 57); Miceli and Posada, “A question of power: How task instructions shape training data” (n 57); Milagros Miceli and Julian Posada, “Wisdom for the crowd: Discoursive power in annotation instructions for computer vision” (arXiv, May 23, 2021), http://arxiv.org/abs/2105.10990, accessed August 29, 2022.

59 Lucas Waldron and Brenda Medina, "TSA's body scanners are gender binary. Humans are not." (ProPublica), www.propublica.org/article/tsa-transgender-travelers-scanners-invasive-searches-often-wait-on-the-other-side?token=7bjY-MRzWk5Ed4DCZRvFVYwt2HBrAFXd, accessed February 7, 2022. See also: Os Keyes, "The misgendering machines: Trans/HCI implications of automatic gender recognition" (2018) Proceedings of the ACM on Human-Computer Interaction, 2: 1.

60 “Xenophobic machines: Discrimination through unregulated use of algorithms in the Dutch childcare benefits scandal.” (Amnesty International, 2021), www.amnesty.nl/content/uploads/2021/10/20211014_FINAL_Xenophobic-Machines.pdf?x42580; “Dutch childcare benefit scandal an urgent wake-up call to ban racist algorithms” (Amnesty International, October 25, 2021), www.amnesty.org/en/latest/news/2021/10/xenophobic-machines-dutch-child-benefit-scandal/, accessed August 15, 2022; Jan Kleinnijenhuis, “Hoe de Belastingdienst lage inkomens profileerde in de jacht op fraude” (Trouw, November 22, 2021), www.trouw.nl/gs-bbb66add, accessed November 23, 2021.

61 Crawford (n 56) chapter 4, Classification.

62 For an application of both notions onto AI, see also Naudts (n 16 and 48), drawing from the work of Young (n 16).

63 Solon Barocas, Moritz Hardt, and Arvind Narayanan, Fairness and Machine Learning, 253, Introduction chapter.

64 See also Naudts (n 48).

65 See, for instance: Miceli et al. (n 57).

66 Laurens Naudts, “Towards accountability: The articulation and formalization of fairness in machine learning” (2018) SSRN Electronic Journal, www.ssrn.com/abstract=3298847, accessed July 30, 2020.

67 Gajane and Pechenizkiy (n 47); Doaa Abu Elyounes, “Contextual fairness: A legal and policy analysis of algorithmic fairness” (September 1, 2019), https://papers.ssrn.com/abstract=3478296, accessed February 5, 2023; Sandra Wachter, Brent Mittelstadt, and Chris Russell, “Bias preservation in machine learning: The legality of fairness metrics under EU non-discrimination law” (Social Science Research Network, 2021) SSRN Scholarly Paper 3792772, https://papers.ssrn.com/abstract=3792772, accessed April 28, 2022.

68 Jon Kleinberg, Sendhil Mullainathan, and Manish Raghavan, “Inherent trade-offs in the fair determination of risk scores” (2016) arXiv:1609.05807 [cs, stat], http://arxiv.org/abs/1609.05807, accessed October 11, 2020. See also Section 1.4.2 The Limitations of Abstraction.

69 Wachter, Mittelstadt, and Russell (n 67).

71 Doaa Abu-Elyounes, “Contextual fairness: A legal and policy analysis of algorithmic fairness” (2020) University of Illinois Journal of Law, Technology & Policy, 2020: 1, 5.

72 Abu Elyounes (n 67).

73 Ibid. See also: Agathe Balayn and Seda Gurses, "Beyond debiasing: Regulating AI and its inequalities" (EDRi, 2021).

74 Kasirzadeh (n 47) 4.

76 Binns (n 46); Bettina Berendt and Sören Preibusch, "Better decision support through exploratory discrimination-aware data mining: Foundations and empirical evidence" (2014) Artificial Intelligence and Law, 22: 175; Bettina Berendt and Sören Preibusch, "Toward accountable discrimination-aware data mining: The importance of keeping the human in the loop—and under the looking glass" (2017) Big Data, 5: 135; Dino Pedreschi, Salvatore Ruggieri, and Franco Turini, "Discrimination-aware data mining," 9.

77 Kimberle Crenshaw, “Demarginalizing the intersection of race and sex: A black feminist critique of antidiscrimination doctrine, feminist theory and antiracist politics” (1989) University of Chicago Legal Forum, 1989: 31; Kimberle Crenshaw, “Mapping the margins: Intersectionality, identity politics, and violence against women of color” (1991) Stanford Law Review, 43: 1241.

78 Joy Buolamwini and Timnit Gebru, “Gender shades: Intersectional accuracy disparities in commercial gender classification,” 15.

79 That is not to say that our approach in tackling the harms faced by socially oppressed and non-oppressed groups should be identical. Indeed, in our attempt to protect the interests of both groups, we may need to differentiate the protective measures we envisage so as to accommodate their respective needs and struggles. Naudts (n 16). See also: Wachter (n 13).

80 Abeba Birhane et al., “The forgotten margins of AI ethics,” 2022 ACM Conference on Fairness, Accountability, and Transparency (ACM, 2022), https://dl.acm.org/doi/10.1145/3531146.3533157, accessed February 2, 2023.

81 Ibid., 949–50.

82 See, for instance: Andrew D. Selbst et al., “Fairness and abstraction in sociotechnical systems,” Proceedings of the Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery 2019), https://doi.org/10.1145/3287560.3287598, accessed February 2, 2023.

83 Balayn and Gurses (n 73).

84 Birhane (n 48) 6.

86 See also: Annette Zimmermann and Chad Lee-Stronach, “Proceed with caution” (2021) Canadian Journal of Philosophy, 1.

87 Billy Perrigo, “The $2 per hour workers who made ChatGPT safer” (2023) Time, January 18, https://time.com/6247678/openai-chatgpt-kenya-workers/, accessed July 5, 2023.

88 Milagros Miceli and Julian Posada, "The data-production dispositif" (2022) Proceedings of the ACM on Human-Computer Interaction, 6(CSCW2): Article 460, https://doi.org/10.1145/3555561.

89 See among others, Article 16 (Obligations of Providers of High-Risk AI Systems), as well as Article 40 (Harmonised Standards and Standardisation Deliverables), read in conjunction with Section 2 (Requirements for High-Risk Systems) of the AI Act. Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), text with EEA relevance. See also: Nathalie A Smuha et al., "How the EU can achieve legally trustworthy AI: A response to the European Commission's Proposal for an Artificial Intelligence Act" (August 5, 2021), https://papers.ssrn.com/abstract=3899991, accessed July 21, 2023; Johann Laux, Sandra Wachter, and Brent Mittelstadt, "Three pathways for standardisation and ethical disclosure by default under the European Union Artificial Intelligence Act" (2024) Computer Law & Security Review, 53: 105957, https://doi.org/10.1016/j.clsr.2024.105957.

90 Solon Barocas, “Machine learning is a co-opting machine” (Public Books, June 18, 2019), www.publicbooks.org/machine-learning-is-a-co-opting-machine/, accessed February 15, 2023.

91 Ben Wagner, “Ethics as an escape from regulation. From ‘ethics-washing’ to ethics-shopping?” in Emre Bayamlioglu et al. (eds), BEING PROFILED (Amsterdam University Press, 2019), www.degruyter.com/view/books/9789048550180/9789048550180-016/9789048550180-016.xml, accessed August 26, 2020; Luciano Floridi, “Translating principles into practices of digital ethics: Five risks of being unethical” (2019) Philosophy & Technology, 32: 185; Elettra Bietti, “From ethics washing to ethics bashing: A view on tech ethics from within moral philosophy,” Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency (Association for Computing Machinery, 2020), https://doi.org/10.1145/3351095.3372860.

92 Marijn Sax, “Algorithmic news diversity and democratic theory: Adding agonism to the mix” (2022) Digital Journalism, 10: 1650, 1651. In it, the author draws on the work of political theorist Chantal Mouffe.

93 See also: Wibren van der Burg, “The morality of aspiration: A neglected dimension of law and morality” in Willem Witteveen J. and Wibren van der Burg (eds), Rediscovering Fuller: Essays on Implicit Law and Institutional Design (Amsterdam University Press, 2009).

5 Moral Responsibility and Autonomous Technologies Does AI Face a Responsibility Gap?

1 By “AI” in this text, we mean “autonomous AI.”

2 See among others: the 2017 video Slaughterbots by the Future of Life Institute and AI expert Stuart Russell; the open letters of 2015 and 2017 by renowned technology experts to raise awareness among the general public of the dangers associated with the technology; and The Campaign to Stop Killer Robots, which calls for a new international treaty.

3 UN Security Council, Final report of the Panel of Experts on Libya established pursuant to Security Council resolution 1973 (2011), S/2021/229, 8 March 2021, https://documents.un.org/doc/undoc/gen/n21/037/72/pdf/n2103772.pdf?token=DtEs8GLF0OLY8vGG39&fe=true

4 These terms already make it clear that we are not concerned here with the domain of the law and are therefore not talking about liability or legal responsibility. For a good overview of the different kinds of responsibility, see Nicole A. Vincent, "A Structured Taxonomy of Responsibility Concepts" in Nicole Vincent, Ibo van de Poel, and Jeroen van den Hoven (eds), Moral Responsibility. Library of Ethics and Applied Philosophy (Dordrecht: Springer, 2011), who develops a taxonomy of responsibility concepts inspired by H. L. A. Hart's illustration of the drunken ship captain.

5 See among others: Roos De Jong, “The retribution-gap and responsibility-loci related to robots and automated technologies: A reply to Nyholm” (2020) Science and Engineering Ethics, 26; Robert Sparrow, “Killer robots” (2007) Journal of Applied Philosophy, 24; Andreas Matthias, “The responsibility gap: Ascribing responsibility for the actions of learning automata” (2004) Ethics and Information Technology, 6. The term was first used by Andreas Matthias with respect to autonomous machines (2004) and was later applied to autonomous weapon systems by Robert Sparrow (2007). For a recent overview of the discussion on autonomous weapons, see: Ann-Katrien Oimann, “The responsibility gap and LAWS: A critical mapping of the debate” (2023) Philosophy & Technology, 36.

6 See in this regard also Hans Jonas' study Das Prinzip Verantwortung (1979), which is one of the first major works on the ethics of technology. Jonas suggested that in the modern world, the effects of technology are now so uncertain that designers need to think about the consequences even more than before.

7 The expression “many hands” was reportedly first used by Dennis Thompson, “Moral responsibility and public officials: The problem of many hands” (1980) American Political Science Review, 74 and later applied to computer technology by Helen Nissenbaum, “Accountability in a computerized society” (1996) Science and Engineering Ethics, 2.

8 An author like Joanna Bryson explicitly rejects this option, emphasizing that autonomous systems are essentially nonexistent and should be viewed as nothing more than tools: Joanna Bryson, “Robots should be slaves” in Yorick Wilks (ed), Close Engagements with Artificial Companions: Key Social, Psychological, Ethical and Design Issues (John Benjamins Publishing Company, 2010).

9 Debates about embodied information have been going on for many years in the philosophy of mind. In this regard, see, among others: Daniel Dennett, Consciousness Explained (Little Brown, 1992); John Rogers Searle, "Minds, brains, and programs" (1980) The Behavioral and Brain Sciences, 3: 417–57; Hubert Dreyfus, What Computers Can't Do: The Limits of Artificial Intelligence (Harper & Row, 1972); Alan Turing, "Computing Machinery and Intelligence" in Edward A. Feigenbaum and Julian Feldman (eds), Computers and Thought (McGraw-Hill, 1963).

10 According to an ordinary conception of responsibility attribution, it is only fitting to hold someone responsible if the agent can foresee that the device will or is likely to create a certain kind of outcome. This is usually termed the epistemic condition and many philosophers agree that such a requirement is a necessary condition for moral responsibility. See among others: John Martin Fischer and Neal A. Tognazzini, “The truth about tracing” (2009) Noûs, 43; John Martin Fischer and Mark Ravizza, Responsibility and Control: A Theory of Moral Responsibility (Cambridge University Press, 2000); Michael J. Zimmerman, “Moral responsibility and ignorance” (1997) Ethics, 107.

11 The epistemic condition often relies on a tracing strategy and plays an important role in many theories of responsibility. It is used in cases where an agent is blameworthy for the harm caused on the ground that her responsibility can be traced back to previous acts of the agent, performed when she did meet the conditions for moral responsibility. See for example: John Martin Fischer and Neal A. Tognazzini, "The truth about tracing" (2009) Noûs, 43.

12 It is debated whether collective entities can be qualified as group agents that can be held morally responsible. See: Neta C. Crawford, "Organizational responsibility" in Accountability for Killing: Moral Responsibility for Collateral Damage in America's Post-9/11 Wars (Oxford University Press, 2013); Christian List, "Group agency and artificial intelligence" (2021) Philosophy & Technology, 34.

13 This manuscript is based partly on: Lode Lauwaert, Wij robots: Een filosofische blik op technologie en artificiële intelligentie (LannooCampus, 2021); Lode Lauwaert, “Artificial intelligence and responsibility” (2021) AI & Society; Lode Lauwaert, “Artificiële intelligentie en normatieve ethiek: Wie is verantwoordelijk voor de misdaden van LAWS?” (2019) Algemeen Nederlands tijdschrift voor wijsbegeerte.

6 Artificial Intelligence, Power and Sustainability*

* Thank you to the Data Pollution & Power Group at the Bonn Sustainable AI Lab for empowering discussions: Carolina Aguerre, Larissa Bolte, Jenny Brennan, Signe Daugbjerg, Lynn H. Kaack, Federica Lucivero, Pak-Hang Wong, and Sebnem Yardimci-Geyikci. A portion of this work has been funded through the Alexander von Humboldt Foundation through the professorship of Prof. Dr. Aimee van Wynsberghe.

1 Aimee van Wynsberghe, “Sustainable AI: AI for sustainability and the sustainability of AI” (2021) AI and Ethics, 1: 213, 218.

2 Gry Hasselbalch, “Data pollution & power: A white paper for a global sustainable development agenda on AI” (2022), www.datapollution.eu/, accessed June 27, 2023.

3 Gry Hasselbalch, Data Ethics of Power. A Human Approach in the Big Data and AI Era (Edward Elgar, 2021).

4 Madeleine C Elish and Danah Boyd, “Situating methods in the magic of big data and artificial intelligence” (2018) Communication Monographs, 85: 57.

5 Francesco Lapenta, Our Common AI Future (JCU Future and Innovation, 2021).

6 As outlined throughout the “Brundtland report”/World Commission on Environment and Development, “Our Common Future,” https://sustainabledevelopment.un.org/content/documents/5987our-common-future.pdf, accessed June 6, 2023.

7 Council of the European Union, European Green Deal, accessed July 19, 2024: www.consilium.europa.eu/en/policies/green-deal/

8 United Nations, “The future we want outcome document” (United Nations Conference on Sustainable Development, Rio de Janeiro, Brazil, 2012).

9 Global Partnership on AI Report, “Climate change and AI. Recommendations for government action” (November 2021), GPAI, www.gpai.ai/projects/climate-change-and-ai.pdf, accessed June 6, 2023, 18–19.

10 Alan Winfield, “Energy and exploitation: AIs dirty secrets” (June 28, 2019), https://alanwinfield.blogspot.com/2019/06/energy-and-exploitation-ais-dirty.html, accessed June 27, 2023.

Lynn H Kaack et al., “Aligning artificial intelligence with climate change mitigation” (2022) Nature Climate Change, 12: 518.

11 As described in Safiya U Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (NYU Press, 2018).

Tolga Bolukbasi et al., “Man is to computer programmer as woman is to homemaker? Debiasing word embeddings” (30th Conference on Neural Information Processing Systems, Barcelona, 2016).

12 As described in Christopher Wylie, Mindf*ck: Cambridge Analytica and the Plot to Break America (Random House, 2019).

13 Report of the United Nations Conference on the Human Environment, Stockholm, June 5–16, 1972, 3. Quoted in Gry Hasselbalch, “Data pollution & power: A white paper for a global sustainable development agenda on AI” (2022), www.datapollution.eu/, accessed June 27, 2023.

14 World Commission on Environment and Development, Our Common Future (1987), https://sustainabledevelopment.un.org/content/documents/5987our-common-future.pdf, accessed June 27, 2023.

15 United Nations, “Agenda 21” (United Nations Conference on Environment & Development, Rio de Janeiro, June 3–14, 1992).

16 United Nations, “Transforming our world: The 2030 agenda for sustainable development” (March 22–24, 2023), https://sdgs.un.org/2030agenda, accessed June 6, 2023.

17 UN Environment Programme, “Environmentally sound technologies,” www.unep.org/regions/asia-and-pacific/regional-initiatives/supporting-resource-efficiency/environmentally-sound, accessed June 6, 2023. Quoted in Gry Hasselbalch, “Data pollution & power: A white paper for a global sustainable development agenda on AI” (2022), www.datapollution.eu/, accessed June 27, 2023.

18 Ibid., Hasselbalch, 2022, 36–49.

19 See, for example, the EU's Green Deal (Ibid., Council of the European Union) and AI Strategy (https://digital-strategy.ec.europa.eu/en/policies/european-approach-artificial-intelligence, accessed July 19, 2024), and AI for SDGs Canada, www.ai4sdgs.org/, accessed June 6, 2023;

United Nations, “The United Nation’s Resource Guide on Artificial Intelligence (AI) Strategies, June 2021,” https://sdgs.un.org/sites/default/files/2021-06/Resource%20Guide%20on%20AI%20Strategies_June%202021.pdf, accessed June 6, 2023.

20 The authors of this chapter were members of the HLEG.

21 European Commission, “Ethics guidelines for trustworthy AI” (HLEG A, 2019), https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai, accessed June 6, 2023.

22 European Commission, “Communication on fostering a European approach to artificial intelligence” (April 21, 2021), https://digital-strategy.ec.europa.eu/en/library/communication-fostering-european-approach-artificial-intelligence, accessed June 6, 2023.

23 Gry Hasselbalch, “Culture by design: A data interest analysis of the European AI policy agenda” (2020) First Monday, 25, https://firstmonday.org/ojs/index.php/fm/article/view/10861/10010, accessed June 27, 2023.

24 Aimee van Wynsberghe, “Sustainable AI: AI for sustainability and the sustainability of AI” (2021) AI and Ethics, 1: 213.

25 Taylor Stone and Aimee van Wynsberghe, “Repairing AI” in Mark Young and Mark Coeckelbergh (eds), Maintenance and Philosophy of Technology: Keeping Things Going (1st ed, Routledge, 2024).

26 Henrik S Sætra, AI for the Sustainable Development Goals (1st ed, CRC Press, 2022), 5.

27 This approach has been taken by Hasselbalch and the University of Bonn’s Data Pollution and Power Group: www.datapollution.eu. See also Gry Hasselbalch, “Data pollution & power: A white paper for a global sustainable development agenda on AI” (2022), www.datapollution.eu/, accessed June 27, 2023.

28 Viktor Mayer-Schönberger and Kenneth Cukier, Big Data: A Revolution that Will Transform How We Live, Work and Think (John Murray, 2013), 15.

29 Ibid., Hasselbalch, 2021.

30 Gry Hasselbalch “Data pollution & power: A white paper for a global sustainable development agenda on AI” (2022), www.datapollution.eu/, accessed June 27, 2023.

31 Dennis D Hirsch and Jonathan King, “Big data sustainability: An environmental management systems analogy” (2016) Washington and Lee Law Review, 72, http://dx.doi.org/10.2139/ssrn.2716785, accessed June 27, 2023; Omri Ben-Shahar, “Data pollution” (2019) Journal of Legal Analysis, 11: 104.

32 Federica Lucivero et al., “Data-driven unsustainability? An interdisciplinary perspective on governing the environmental impacts of a data-driven society” (2020), Available at SSRN, http://dx.doi.org/10.2139/ssrn.3631331, accessed June 27, 2023.

33 Alan Winfield, “Energy and exploitation: AIs dirty secrets” (June 28, 2019), https://alanwinfield.blogspot.com/2019/06/energy-and-exploitation-ais-dirty.html, accessed June 27, 2023;

Emma Strubell, Ananya Ganesh, and Andrew McCallum, “Energy and policy considerations for deep learning in NLP” (2019) Cornell University, https://arxiv.org/abs/1906.02243, accessed June 27, 2023.

34 Federica Lucivero, “Big data, big waste? A reflection on the environmental sustainability of big data Initiatives” (2019) Science and Engineering Ethics, 26: 1009.

35 Bruce Schneier, “The future of privacy” (2006), www.schneier.com/blog/archives/2006/03/the_future_of_p.html, accessed June 27, 2023.

36 Dennis D Hirsch and Jonathan H King, “Big data sustainability: An environmental management systems analogy” (2016) Washington and Lee Law Review Online, 72(3): 406–419. https://scholarlycommons.law.wlu.edu/wlulr-online/vol72/iss3/4/, accessed June 27, 2023.

37 Omri Ben-Shahar, “Data pollution” (2019) Journal of Legal Analysis, 11: 104.

38 Ibid., 104.

39 Ibid., 104.

40 Ibid., 105.

41 Gry Hasselbalch and Pernille Tranberg, Data Ethics. The New Competitive Advantage (1st ed, Publishare, 2016), 183.

42 Scott Robbins and Aimee van Wynsberghe, “Our new artificial intelligence infrastructure: Becoming locked into an unsustainable future” (2022) Sustainability, 14(8): 4829, www.mdpi.com/2071-1050/14/8/4829, accessed June 27, 2023.

44 Gry Hasselbalch, Data Ethics of Power. A Human Approach in the Big Data and AI Era (Edward Elgar, 2021).

45 Susan L Star and Geoffrey C Bowker, “How to infrastructure?” in Leah A Lievrouw and Sonia Livingstone (eds), Handbook of New Media. Social Shaping and Social Consequences of ICTs (SAGE Publications, 2006);

Geoffrey C Bowker, “Toward information infrastructure studies: Ways of knowing in a networked environment” in Jeremy Hunsinger et al. (eds), International Handbook of Internet Research (Springer Netherlands, 2010);

Penelope Harvey, Casper Jensen, and Asturo Morita (eds), Infrastructures and Social Complexity: A Companion (Routledge, 2017).

46 Gry Hasselbalch, Data Ethics of Power. A Human Approach in the Big Data and AI Era (Edward Elgar, 2021), 11.

47 Gry Hasselbalch, “A framework for a data interest analysis of artificial intelligence” (2021) First Monday, 26, https://doi.org/10.5210/fm.v26i7.11091, accessed August 21, 2023.

48 Gry Hasselbalch, “Making sense of data ethics. The powers behind the data ethics debate in European policymaking” (2019) Internet Policy Review, 8(2) https://policyreview.info/articles/analysis/making-sense-data-ethics-powers-behind-data-ethics-debate-european-policymaking, accessed June 27, 2023.

49 Thomas J Misa, “How machines make history, and how historians (and others) help them to do so” (1988) Science, Technology, and Human Values, 13: 308;

Thomas J Misa, “Theories of technological change: Parameters and purposes” (1992) Science, Technology and Human Values, 17: 3; Thomas J Misa, “Findings follow framings: Navigating the empirical turn” (2009) Synthese, 168: 357.

50 Thomas J Misa, “How machines make history, and how historians (and others) help them to do so” (1988) Science, Technology, and Human Values, 13: 308.

51 Paul N Edwards, “Infrastructure and modernity: Scales of force, time, and social organization in the history of sociotechnical systems” in Thomas J Misa, Philip Brey, and Andrew Feenberg (eds), Modernity and Technology (MIT Press, 2002).

53 Misa, “Findings follow framings.”

54 Gry Hasselbalch, “Data pollution & power: A white paper for a global sustainable development agenda on AI” (2022), www.datapollution.eu/, accessed June 27, 2023.

55 Thomas P Hughes, “The evolution of large technological systems” in Wiebe E Bijker, Thomas P Hughes, and Trevor Pinch (eds), The Social Construction of Technological Systems (MIT Press, 1987).

Figure 1.1 A (simple) Bayesian network to reason over the (joint) effects of two different medications that are commonly administered to patients suffering from epigastric pains because of pyrosis.

Figure 1.2 Each simple sigmoid function expresses a linear separation; together they form a more complicated function of two hyperbolas.

Figure 1.3 A geometric interpretation of adding layers and nodes to a neural network.

Figure 1.4 Table representing the dataset and the resulting decision tree.

Figure 1.5 The Menace program playing Tic-Tac-Toe.

Figure 1.6 Adversarial examples for digits.31
