
Part II - AI, Law and Policy

This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY 4.0 https://creativecommons.org/cclicenses/

7 AI Meets the GDPR – Navigating the Impact of Data Protection on AI Systems

Pierre Dewitte
7.1 Introduction

To state that artificial intelligence (“AI”) has seen drastic improvements since the age of expert systems is rather an understatement at a time when language models have become so powerful they could have authored this piece – hint, they didn’t. If, conceptually speaking, AI systems refer to the ability of software to mimic the features of human-like reasoning, most are used to draw predictions from data through the use of a trained model, that is, an algorithm able to detect patterns in data it has never encountered before. When such models are used to derive information relating to individuals, personal data are likely involved somewhere in the process, whether at the training or deployment stage. This can certainly result in many benefits for those individuals. However, as abundantly illustrated throughout this book, the link between personal information and natural persons also exposes them to real-life adverse consequences such as social exclusion, discrimination, identity theft or reputational damage, all the while directly contributing to the opacity of the decision-making processes that impact their daily lives. For all these reasons, specific legal guarantees have been adopted at various levels to minimize these risks by regulating the processing of personal data and equipping individuals with the appropriate tools to understand and challenge the output of AI systems.

In Europe, the General Data Protection Regulation (“GDPR”)Footnote 1 is the flagship piece of legislation in that regard, designed to ensure both the protection of natural persons and the free movement of personal data. Reconciling the intrinsic characteristics of AI systems with the principles and rules contained therein is a delicate exercise, though, for two reasons. First, the GDPR has been conceived as a technology-neutral instrument composed of deliberately open-ended provisions meant to carry their normative values regardless of the technological environment they are applied in.Footnote 2 Such is the tradeoff necessary to ensure resilience and future-proofing when technological progress has largely outpaced the capacity of regulators to keep up with the unbridled rhythm of innovation.Footnote 3 In turn, navigating that ecosystem comprised of multiple layers of regulation designed to reconcile flexibility and legal certainty can prove particularly daunting. Second, AI systems have grown more and more complex, to the point where the opacity of their reasoning process has become a common ground for concern.Footnote 4 This reinforces the need for interdisciplinary collaboration, as a proper understanding of their functioning is essential for the correct application of the law. In short, regulating the processing of personal data in AI systems requires interpreting and applying a malleable regulatory framework to increasingly complex technological constructs. This, in itself, is a balancing act between protecting individuals’ fundamental rights and guaranteeing a healthy environment for innovation to thrive.

The purpose of this chapter is not to provide a comprehensive overview of the implications of the GDPR for AI systems. Nor is it to propose concrete solutions to specific problems arising in that context.Footnote 5 Rather, it aims to walk the reader through the core concepts of EU data protection law, and highlight the main tensions between its principles and the functioning of AI systems. With that goal in mind, Section 7.2 first sketches the broader picture of the European privacy and data protection regulatory framework, and clarifies the focus for the remainder of this chapter. Section 7.3 then proceeds to delineate the scope of application of the GDPR and its relevance for AI systems. Finally, Section 7.4 breaks down the main friction points between the former and the latter and illustrates each of these with examples of concrete data protection challenges raised by AI systems in practice.

7.2 Setting the Scene – The Sources of Privacy and Data Protection Law in Europe

While the GDPR is the usual suspect when discussing European data protection law, it is but one piece of a broader regulatory puzzle. Before delving into its content, it is therefore crucial to understand its position and role within that larger ecosystem. Not only will this help clarify the different sources of privacy and data protection law, but it will also equip the reader with keys to understand the interaction between these texts. The goal of this section is hence to contextualize the GDPR in order to highlight its position within the hierarchy of legal norms.

In Europe, two coexisting legal systems regulate the processing of personal data.Footnote 6 First, that of the Council of Europe (“CoE”) through Article 8 of the European Convention on Human Rights (“ECHR”)Footnote 7 as interpreted by the European Court of Human Rights (“ECtHR”).Footnote 8 Second, that of the European Union (“EU”) through Articles 7 and 8 of the Charter of Fundamental Rights of the European Union (“CFREU”)Footnote 9 as interpreted by the Court of Justice of the European Union (“CJEU”).Footnote 10 While these systems differ in scope and functioning, the protection afforded to personal data is largely aligned, as the case law of both Courts influences each other.Footnote 11 National legislation constitutes an extra layer of privacy and data protection law, bringing the number of regulatory silos up to three (see Figure 7.1).

Figure 7.1 A fundamental rights perspective on the sources of privacy and data protection law

For the purpose of this chapter, let’s zoom in on the EU legal order, comprised of primary and secondary legislation. While the former sets the foundational principles and objectives of the EU, the latter breaks them down into actionable rules that can then be directly applied or transposed by Member States into national law. This is further supplemented by “soft law” instruments issued by a wide variety of bodies to help interpret the provisions of EU law. While these are not strictly binding, they often have quasi-legislative authority.Footnote 12 As illustrated in Figure 7.2, the GDPR is only a piece of secondary EU law meant to protect all data subjects’ fundamental rights – including but not limited to privacy and data protection – when it comes to the processing of their personal data. As illustrated in the following sections, the Guidelines issued by the Article 29 Working Party (“WP29”) and its successor the European Data Protection Board (“EDPB”) are particularly helpful when fleshing out the scope and substance of the rules contained in the GDPR.Footnote 13 While all three of the silos detailed above impact – to a certain extent – the processing of personal data by AI systems, the remainder of this chapter focuses exclusively on the EU legal order, more specifically on the GDPR and its accompanying soft law instruments.

Figure 7.2 The EU legal order – general and data protection specific

7.3 Of Personal Data, Controllers and Processors – The Applicability of the GDPR to AI Systems

As hinted at earlier, the GDPR is likely to come into play when AI systems are trained and used to make predictions about natural persons. Turning that intuition into a certainty nonetheless requires a careful analysis of its precise scope of application. In fact, this is the very first reflex anyone should adopt when confronted with any piece of legislation, as it typically only regulates certain types of activities (i.e., its “material scope”) by imposing rules on certain categories of actors (i.e., its “personal scope”). Should the situation at hand fall outside the remit of the law, there is simply no need to delve into its content. Before discussing the concrete impact of the GDPR on AI systems in Section 7.4, it is therefore crucial to clarify whether it applies (Section 7.3.1) and to whom (Section 7.3.2).

7.3.1 Material Scope of Application – The Processing of Personal Data
7.3.1.1 The Notion of Personal Data and the Legal Test of Identifiability

Article 2(1) GDPR limits the applicability of the Regulation “to the processing of personal data wholly or partly by automated means.” Equally important, Article 4(1) defines the concept of personal data as “any information relating to an identified or identifiable natural person.” The reference to “any information” implies that the qualification as personal data is nature-, content-, and format-agnostic,Footnote 14 while “relating to” must be read as “linked to a particular person.”Footnote 15 As such, the notion of personal data is not restricted to “information that is sensitive or private, but encompasses all kinds of information, not only objective but also subjective, in the form of opinions or assessments.”Footnote 16 The term “natural persons,” then, refers to human beings, thereby excluding information relating to legal entities, deceased persons, and unborn children from the scope of protection of the Regulation.Footnote 17

The pivotal – and most controversial – element of that definition is the notion of “identified or identifiable.” According to the WP29’s Opinion 4/2007, a person is “identified” when “within a group of persons, he or she is ‘distinguished’ from all other members of the group.” This can be the case when that piece of information is associated with a name, but any other indirect identifier or combination thereof, such as a telephone number or a social security number, might also lead to the identification of that individual. A person is “identifiable” when, “although he or she has not been identified yet, it is possible to do so.”Footnote 18 “To determine whether a natural person is identifiable,” states Recital 26 GDPR, “account should be taken of all the means reasonably likely to be used, such as singling out, either by the controller or by another person to identify the natural person directly or indirectly.” In turn, “to ascertain whether means are reasonably likely to be used to identify the natural person, account should be taken of all objective factors, such as the costs of and the amount of time required for identification, taking into consideration the available technology at the time of the processing and technological developments.” This makes the qualification of “personal data” a dynamic, context-sensitive assessment that calls for a case-by-case analysis of the reidentification potential.

Such an assessment was conducted by the CJEU in the Breyer case,Footnote 19 in which it held that a dynamic IP address collected by a content provider was to be considered as a piece of personal data, even though that provider was not able, by itself, to link the IP address back to a particular individual. German law indeed allowed content providers, in the context of criminal proceedings following cyberattacks for instance, to obtain from the internet service provider the information necessary to turn that dynamic IP address back to its static form, and therefore link it to an individual user. That means of reidentification was considered “reasonably likely” to be used, thereby falling under the scope of Article 4(1) read in combination with Recital 26 GDPR. On the contrary, that likelihood test would not have been met if such reidentification was “prohibited by law or practically impossible on account of the fact that it requires disproportionate efforts in terms of time, cost, and workforce, so that the risk of identification appears in reality to be insignificant.”Footnote 20 By investigating the actual means of reidentification at the disposal of the content provider to reidentify the data subject to whom the dynamic IP address belonged, the Court embraced a “risk-based” approach to the notion of personal data, as widely supported in legal literature and discussed in Section 7.4.3.Footnote 21

Data for which the likelihood of reidentification falls below that “reasonable” threshold are considered “anonymous” and are not subject to the GDPR. Lowering the risk of reidentification to meet the GDPR standard of anonymity is no small feat, however, and depends on multiple factors such as the size and diversity of the dataset, the categories of information it contains, and the effectiveness of the techniques applied to reduce the chances of reidentification.Footnote 22 For instance, swapping names for randomly generated number-based identifiers might not be sufficient to reasonably exclude the risk of reidentification if the dataset at stake is limited to the employees of a company paired with specific categories of data such as hobbies, gender, or device fingerprints. In that case, singling someone out, linking two records, or deducing the value of an attribute based on other attributes – in this example, the name of a person based on a unique combination of gender and hobbies – remains possible. For the same reason, hashing the license plate of a car entering a parking facility before storing it in the payment system, even when the hash function used is strictly nonreversible, might not reasonably shield the driver from reidentification if the hash value is stored alongside other information such as the time of arrival or departure, which might later be combined with unblurred CCTV footage to retrieve the actual plate number.Footnote 23 These techniques are therefore considered as “pseudonymization” rather than “anonymization,”Footnote 24 with the resulting “pseudonymized data” falling under the scope of the GDPR in the same way as regular personal data. As detailed in Section 7.4.3, pseudonymization techniques nonetheless play a critical role as mitigation strategies in the risk-based ecosystem of the Regulation.Footnote 25
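To make the parking example concrete, the following minimal Python sketch (with hypothetical plate numbers and timestamps) illustrates why hashing alone amounts to pseudonymization rather than anonymization: the hash cannot be reversed in isolation, but anyone who obtains a candidate plate number from another source can simply recompute the hash and link the stored record back to the driver.

```python
import hashlib

# Hypothetical parking records: the plate is hashed (non-reversible on its own),
# but arrival and departure times are stored in the clear alongside the hash.
def pseudonymize(plate: str) -> str:
    return hashlib.sha256(plate.encode()).hexdigest()

records = [
    {"plate_hash": pseudonymize("1-ABC-123"), "arrival": "09:02", "departure": "17:41"},
    {"plate_hash": pseudonymize("1-XYZ-987"), "arrival": "10:15", "departure": "12:03"},
]

# An observer who learns a candidate plate number (e.g., from unblurred CCTV
# footage) can recompute the hash and look for a match: the data remain
# pseudonymized, not anonymized, because reidentification is still feasible.
candidate = "1-ABC-123"
matches = [r for r in records if r["plate_hash"] == pseudonymize(candidate)]
print(matches)  # the driver's full parking session is recovered
```

The weakness is not the hash function itself but the determinism of the scheme combined with the auxiliary attributes stored next to it, which is precisely why such data remain within the scope of the Regulation.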

7.3.1.2 The Processing of Personal Data in AI Systems

AI systems, and more specifically machine learning algorithms, process data at different stages, each of which is likely to involve information that qualifies as personal data. The first of these is the training stage, if the target and predictor variables are sufficiently granular to allow a third party to reidentify the individuals included in the training dataset.Footnote 26 This could be the case, for instance, when training a model to detect tax fraud based on taxpayers’ basic demographic data, current occupation, life history, income, or previous tax returns, the intimate nature of which increases the risk of reidentification. Anonymization – or pseudonymization, depending on the residual risk – techniques can be used to randomize variables by adding noise (e.g., replacing the exact income of each taxpayer by a different yet comparable amount) or permuting some of them (e.g., randomly swapping the occupation of two taxpayers).Footnote 27 Generalization techniques such as k-anonymity (i.e., ensuring that the dataset contains at least k-records of taxpayers with identical predictors by decreasing their granularity, such as replacing the exact age with a range) or l-diversity (i.e., extending k-anonymity to make sure that the variables in each set of k-records have at least l-different values) are also widely used in practice. Synthetic data, namely artificial data that do not relate to real individuals but are produced using generative modeling, can serve as an alternative to actual, real-life personal data to train machine learning models.Footnote 28 Yet, doing so is only a workaround, as the underlying generative model also needs to be trained on personal data. Moreover, the generated data might reveal information about the natural persons who were included in the training dataset in cases where one or more specific variables stand out.
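The generalization logic behind k-anonymity can be illustrated with a short Python sketch using hypothetical taxpayer records: exact ages are coarsened into ranges, and the dataset is then checked to ensure that every combination of quasi-identifiers appears at least k times.

```python
from collections import Counter

# Hypothetical taxpayer records with quasi-identifiers (age, occupation) and a
# sensitive attribute (income) that is not part of the quasi-identifier set.
records = [
    {"age": 34, "occupation": "nurse",   "income": 41_000},
    {"age": 37, "occupation": "nurse",   "income": 39_500},
    {"age": 36, "occupation": "nurse",   "income": 43_200},
    {"age": 52, "occupation": "teacher", "income": 48_000},
    {"age": 55, "occupation": "teacher", "income": 50_100},
    {"age": 58, "occupation": "teacher", "income": 47_300},
]

def generalize(record, bucket=10):
    """Replace the exact age with a coarser range (generalization)."""
    low = (record["age"] // bucket) * bucket
    return {"age_range": f"{low}-{low + bucket - 1}", "occupation": record["occupation"]}

def is_k_anonymous(records, k):
    """Check that every combination of generalized quasi-identifiers occurs at least k times."""
    groups = Counter(tuple(generalize(r).items()) for r in records)
    return all(count >= k for count in groups.values())

print(is_k_anonymous(records, k=3))  # True: each (age range, occupation) group holds 3 records
```

This is only a toy illustration of the generalization step; whether the result qualifies as anonymous under the GDPR still depends on the context-sensitive assessment described in Section 7.3.1.1.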

Second, a trained machine learning model might leak some of the personal data included in the training dataset. Some models are susceptible to model inversion or membership inference attacks: the former allow an entity that already knows some of the characteristics of the individuals included in the training dataset to infer the value of other variables simply by observing the functioning of the model, while the latter allow it to deduce whether a specific individual was part of that training dataset.Footnote 29 Other models might leak by design.Footnote 30 The qualification of trained models as personal – even if pseudonymized – data means that the GDPR will regulate their use, as the mere sharing of these models with third parties, for instance, will be considered as a “processing” of personal data within the meaning of Article 4(2) GDPR.
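A membership inference attack can be sketched in a few lines of Python. The example below is a deliberately simplified illustration using synthetic, hypothetical records and an unconstrained decision tree: the attacker's signal is the gap between the model's confidence on records it was trained on and its confidence on records it has never seen.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def make_data(n):
    """Hypothetical records standing in for personal data: 5 features, 1 noisy binary label."""
    X = rng.normal(size=(n, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=n) > 0).astype(int)
    return X, y

X_train, y_train = make_data(50)
X_out, y_out = make_data(50)  # records the model has never seen

# An unconstrained tree memorizes its training set, a common precondition for leakage.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

def confidence_in_true_label(X, y):
    """Probability the model assigns to each candidate record's actual label."""
    proba = model.predict_proba(X)
    return proba[np.arange(len(y)), y]

# Membership inference in its simplest form: candidate records on which the model
# is near-certain of the true label are guessed to have been training members.
threshold = 0.99
members_flagged = (confidence_in_true_label(X_train, y_train) >= threshold).mean()
outsiders_flagged = (confidence_in_true_label(X_out, y_out) >= threshold).mean()
print(f"flagged as members: {members_flagged:.0%} of true members vs {outsiders_flagged:.0%} of outsiders")
```

Real attacks are more sophisticated, but the underlying idea is the same: the more a model overfits its training data, the more its behavior betrays who was in the training set.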

As detailed in Section 7.3.1.1, the criteria used for the identifiability test of Article 4(1) lead to a broad understanding of the notion of personal data; so much so that the GDPR has been dubbed the “law of everything.”Footnote 31 This is especially true when it comes to the role of “the available technology” in assessing the risk of reidentification, the progress of which increases the possibility that a technique considered as proper anonymization at time t is reversed and downgraded to a mere pseudonymization method at time t + 1.Footnote 32 Many allegedly anonymous datasets have already been reidentified using data that were not available at the time of their release, or by more powerful computational means.Footnote 33 This mostly happens through linkage attacks, which consist in linking an anonymous dataset with auxiliary information readily available from other sources, and looking for matches between the variables contained in both datasets. AI makes these types of attacks much easier to perform, and paves the way for even more efficient reidentification techniques.Footnote 34
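The mechanics of a linkage attack are straightforward, as the following hypothetical Python sketch shows: an “anonymous” dataset stripped of names is joined with publicly available auxiliary data on shared quasi-identifiers, and unique matches reattach identities to the supposedly anonymous records. All names and values are invented for illustration.

```python
# Hypothetical "anonymous" health records: names removed, quasi-identifiers kept.
anonymous_records = [
    {"zip": "3000", "birth_year": 1987, "gender": "F", "diagnosis": "asthma"},
    {"zip": "3001", "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
]

# Publicly available auxiliary data (e.g., a voter roll or social media profiles).
public_profiles = [
    {"name": "Alice Peeters", "zip": "3000", "birth_year": 1987, "gender": "F"},
    {"name": "Bart Janssens", "zip": "3001", "birth_year": 1990, "gender": "M"},
]

# The linkage attack: join both sources on the shared quasi-identifiers and keep
# unique matches, thereby re-attaching names to "anonymous" diagnoses.
quasi_identifiers = ("zip", "birth_year", "gender")
for record in anonymous_records:
    key = tuple(record[q] for q in quasi_identifiers)
    matches = [p for p in public_profiles
               if tuple(p[q] for q in quasi_identifiers) == key]
    if len(matches) == 1:
        print(matches[0]["name"], "->", record["diagnosis"])
```

The attack requires no special computational power; what matters is the availability of auxiliary data, which is exactly the factor that the “means reasonably likely to be used” test asks controllers to anticipate.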

7.3.2 Personal Scope of Application – Controllers and Processors
7.3.2.1 The Controller–Processor Dichotomy and the Notion of Joint Control

Now that Section 7.3.1 has clarified what the GDPR applies to, it is crucial to determine who bears the burden of compliance.Footnote 35 “Controllers” are the primary addressees of the Regulation, and are responsible for complying with virtually all the principles and rules it contains. Article 4(7) defines the controller as “the natural or legal person that, alone or jointly with others, determines the purposes and means of the processing of personal data.” The EDPB provides much-needed clarifications on how to interpret these notions.Footnote 36 First, the reference to “natural or legal person” – in contrast with a mere reference to the former in Article 4(1) GDPR – implies that both individuals and legal entities can qualify as controllers. The capacity to “determine” then refers to “the controller’s influence over the processing, by virtue of an exercise of decision making power.” That influence can either stem from a legal designation, such as when national law specifically appoints a tax authority as the controller for the processing of the personal data necessary to calculate citizens’ tax returns, or follow from a factual analysis. In the latter case, the EDPB emphasizes that the notion of controller is a “functional concept” meant to “allocate responsibilities according to the actual roles of the parties.” It is therefore necessary to look past any existing formal designation – in a contract, for instance – and to analyze the factual elements or circumstances indicating a decisive influence over the processing.

Next, the “purposes” and “means” relate, respectively, to the “why’s” and “how’s” of the processing. An entity must exert influence over both those elements to qualify as a controller, although there is some leeway to delegate certain “non-essential means” without shifting the burden of control. This would be the case, for instance, for the “practical aspects of implementation.” For example, a company that decides to store a backup copy of its customers’ data on a cloud platform remains the controller for that processing even though it does not determine the type of hardware used for the storage, nor the transfer protocol, the security measures or the redundancy settings. On the contrary, decisions pertaining to the type of personal data processed, their retention period, the potential recipients to whom they will be disclosed, and the categories of data subjects they concern typically fall within the exclusive remit of the controller; any delegation of these aspects to another actor would turn that entity into a (joint) controller in its own right.

Finally, the wording “alone or jointly with others” hints at the possibility for two or more entities to be considered as joint controllers. According to the EDPB, the overarching criterion for joint controllership to exist is “the joint participation of two or more entities in the determination of the purposes and means of a processing operation.” This is the case when the entities at stake adopt “common” or “converging” decisions. Common decisions, on the one hand, involve “a common intention.” Converging decisions, on the other, “complement each other and are necessary for the processing to take place in such a manner that they have a tangible impact on the determination of the purposes and the means of the processing.” Another indication is “whether the processing would not be possible without both parties’ participation in the sense that the processing by each party is inseparable, i.e. inextricably linked.” The CJEU has, for instance, recognized a situation of joint controllership between a religious community and its members for the processing of the personal data collected in the course of door-to-door preaching, as the former “organized, coordinated and encouraged” those activities even though the latter were actually in charge of the processing.Footnote 37 The Court followed a similar reasoning with regard to Facebook and the administrator of a fan page, as creating such a page “gives Facebook the opportunity” to place cookies on visitors’ computers that can be used to both “improve its system of advertising” and to “enable the fan page administrator to obtain statistics from the visit of the page.”Footnote 38 Lastly, the Court also considered Facebook and Fashion ID, an online clothing retailer that had embedded Facebook’s “Like” plugin on its page, as joint controllers for the collection and transmission of the visitors’ IP address and unique browser string, since both entities benefitted from that processing. Facebook, because it could use the collected data for its own commercial purposes. And Fashion ID, because the presence of a “Like” button would contribute to increasing the publicity of its goods.Footnote 39

Next to “controllers,” “processors” also fall within the scope of the GDPR. These are entities distinct from the controller that process personal data on its behalf (Article 4(8) GDPR). This is typically the case for, say, a call center that processes prospects’ phone numbers in the context of a telemarketing campaign organized by another company. The requirement to be a separate entity implies that internal departments, or employees acting under the direct authority of their employer, will – at least in the vast majority of cases – not qualify as processors. Besides, processors can only process personal data upon the documented instructions and for the benefit of the controller. Should a processor go beyond the boundaries set by the controller and process personal data for its own benefit, it will be considered as a separate controller for the portion of the processing that oversteps the original controller’s instructions. If the said call center decides, for instance, to reuse the phone numbers it has obtained from the controller to conduct its own marketing campaign or to sell them to third parties, it will be considered as a controller for those activities. Compared to controllers, processors must only comply with a subset of the rules listed in the GDPR, such as the obligation to keep a record of processing activities (Article 30(2)), to cooperate with national supervisory authorities (Article 31), to ensure adequate security (Article 32), to notify data breaches to controllers (Article 33(2)), and to appoint a Data Protection Officer (DPO) when certain conditions are met (Article 37).

7.3.2.2 The Allocation of Responsibilities in AI Systems

The CJEU has repeatedly emphasized the importance of ensuring, through a broad definition of the concept of controller, the “effective and complete protection of data subjects.”Footnote 40 The same goes for the notion of joint control, which the Court now seems to have extended to any actor that has made the processing possible by contributing to it.Footnote 41 In the context of complex processing operations involving multiple actors intervening at different stages of the processing chain, such as the ones at stake in AI systems, an overly broad interpretation of the notion of joint control might lead to situations where everyone is considered as a joint controller.Footnote 42 Properly allocating responsibilities is therefore essential, as the qualification of each party will drastically impact the scope of their compliance duties. Doing so requires the adoption of a “phase-oriented” approach, by slicing complex sets of processing operations into smaller bundles that pursue an identical overarching purpose before proceeding with the qualification of the actors involved.Footnote 43 Machine learning models, for instance, are the products of different activities ranging from the gathering and cleaning of training datasets, to the actual training of the model and its later use to make inferences in concrete scenarios. The actors involved do not necessarily exert the same degree of influence over all these aspects. As a result, their qualification might differ depending on the processing operation at stake. This makes it particularly important to circumscribe the relevant processing activities before applying the criteria detailed in Section 7.3.2.1.Footnote 44

Let’s illustrate the above by breaking down the processing operations typically involved in machine learning, starting with the collection and further use of the training datasets. Company X might specialize in the in-house development and commercialization of trained machine learning models. When doing so, it determines why the training datasets are processed (i.e., to train their model with a view to monetizing it) as well as the essential and nonessential means of the processing (e.g., which personal data are included in the training dataset and the technical implementation of the training process). It will therefore be considered as the sole controller for the processing of the training datasets. Company X might also decide to collaborate with Company Y, the latter providing the training dataset in exchange for the right to use the model once trained. This could be considered as converging decisions leading to a situation of joint controllership between Companies X and Y. Looking at the inference stage, then, Company X might decide to offer its trained model to Company Z, a bank, that will use it to predict the risk of default before granting loans. By doing so, Company Z determines the purposes for which it processes its clients’ personal data (i.e., calculating the risk of default), as well as the essential means of the processing (e.g., the granularity of the data fed to the model). As a result, Company Z will be considered as the sole controller for the processing of its customers’ data, regardless of whether Company X retains a degree of influence over how the algorithm works under the hood. Company X could also be considered as a processor if it computes the risk score on behalf of Company Z using its own hardware and software infrastructure. This is a common scenario in the context of software- or platform-as-a-service cloud-based solutions.

7.4 AI Systems Meet the GDPR – Overview and Friction Points

Controllers – and, to a certain extent, processors – that process personal data in the context of the development and/or use of AI systems must comply with the foundational principles detailed in Article 5 GDPR, namely lawfulness, fairness, transparency, purpose limitation, data minimization, accuracy, storage limitation, integrity and confidentiality, and accountability. These are the pillars around which the rest of the Regulation is articulated. While AI systems are not, per se, incompatible with the GDPR, reconciling their functioning with the rules of the Regulation is somewhat of a balancing act. The following sections aim at flagging the most pressing tensions by contrasting some of the characteristics of AI systems against the guarantees laid down in Article 5 GDPR.

7.4.1 The Versatility of AI Systems v. the Necessity and Compatibility Tests
7.4.1.1 Lawfulness and Purpose Limitation at the Heart of the GDPR

In order to prevent function creep, Article 5(1)a introduces the principle of “lawfulness,” which requires controllers to justify their processing operations using one of the six lawful grounds listed in Article 6. These include not only the consent of the data subject – often erroneously perceived as the only option – but also alternatives such as the “performance of a contract” or the “legitimate interests of the controller.” Relying on any of these lawful grounds (except for consent) requires the controller to assess and demonstrate that the processing at stake is “objectively necessary” to achieve the substance of that lawful ground. In other words, there is no other, less-intrusive way to meet that objective. As recently illustrated by the Irish regulator’s decision in the Meta Ireland case,Footnote 45 the processing of Facebook and Instagram users’ personal data for the purpose of delivering targeted advertising is not, for instance, objectively necessary to fulfil the essence of the contractual relationship between these platforms and their users.Footnote 46 As a result, the processing cannot be based on Article 6(1)b, and it has to rely on another lawful ground. Consent, on the other hand, must be “freely given, specific, informed and unambiguous,” thereby undermining its validity when obtained in a scenario that involves unbalanced power or information asymmetries, such as when given by an employee to their employer.Footnote 47

With that same objective in mind, Article 5(1)b lays down the principle of “purpose limitation,” according to which personal data shall be “collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes.”Footnote 48 In practice, this requires controllers to, first, determine the exact reasons why personal data are collected and, then, assess the compatibility of every subsequent processing activity in light of the purposes that were specified at the collection stage. Doing so requires taking into account various criteria such as, for instance, the context in which the personal data have been collected and the reasonable expectations of the data subjects.Footnote 49 While compatible further processing can rely on the same lawful ground used to justify the collection, incompatible processing must specify a new legal basis. Reusing, for marketing purposes, a postal address originally collected to deliver goods purchased online is a straightforward example of incompatible further processing. The purposes specified during the collection also serve as the basis to assess the amount of personal data collected (i.e., “data minimization”), the steps that must be taken to ensure their correctness (i.e., “accuracy”) and their retention period (i.e., “storage limitation”).

Lawfulness and purpose limitation are strongly interconnected, as the purposes specified for the collection will influence the outcome of both the necessity test required when selecting the appropriate lawful ground – with the exception of consent, for which the purposes delimit what can and cannot be done with the data – and the compatibility assessment that must be conducted prior to each further processing. Ensuring compliance with these principles therefore calls for a separate analysis of each “personal data – purpose(s) – lawful ground” triad, acting as a single, indissociable whole (see Figure 7.3).

Figure 7.3 Lawfulness and purpose limitation, combined

Severing the link between these three elements would empty Articles 5(1)a and 5(1)b of their substance and render any necessity or compatibility assessment meaningless. Whether a webshop can rely on its legitimate interests (Article 6(1)f) to profile its users and offer targeted recommendations, for instance, heavily depends on the actual personal data used to tailor their experience, and therefore on the intrusiveness of the processing.Footnote 50

7.4.1.2 Necessity and Compatibility in AI Systems

While complying with the principles of lawfulness and purpose limitation is already a challenge in itself, the very nature of AI systems spices it up even more. The training of machine learning models, for example, often involves the reuse, as training datasets, of personal data originally collected for completely unrelated purposes. While it is still unclear whether scraping publicly accessible personal data should be regarded as a further processing activity subject to the compatibility assessment pursuant to Articles 5(1)b and 6(4) GDPR, or as a new collection for which the scraping entity would automatically need to rely on a different lawful ground than the one used to legitimize the original collection, this raises the issue of function creep and loss of control over one’s personal data. The case of Clearview AI is a particularly telling example. Back in 2020, the company started to scrape the internet, including social media platforms, to gather images and videos to train its facial recognition software and offer its clients – including law enforcement authorities – a search engine designed to look up individuals on the basis of another picture. After multiple complaints and a surge in media attention, Clearview was fined by the Italian,Footnote 51 Greek,Footnote 52 French,Footnote 53 and UKFootnote 54 regulators for having processed these images without a valid lawful ground. The Austrian regulator issued a similar decision, albeit without imposing a fine.Footnote 55 As detailed in Section 7.4.1.1, the fact that these images are publicly accessible does not, indeed, mean that they are freely reusable for any purpose. All five authorities noted the particularly intrusive nature of the processing at stake, the number of individuals included in the database, and the absence of any relationship between Clearview AI and the data subjects, who could therefore not reasonably expect their biometric data to be repurposed for the training of a facial recognition algorithm.

The training of Large Language Models (“LLMs”) such as OpenAI’s GPT-4 or EleutherAI’s GPT-J raises similar concerns, which the Garante recently flagged in its decision to temporarily banFootnote 56 – then conditionally reauthorize –Footnote 57 ChatGPT on Italian territory.Footnote 58 This even prompted the EDPB to set up a dedicated task force to “foster cooperation and to exchange information on possible enforcement actions conducted by data protection authorities.”Footnote 59 Along the same lines, but looking at the inference rather than the training phase, relying on algorithmic systems to draw predictions might not always be proportionate – or even necessary – to achieve a certain objective. Think about an obligation to wear a smart watch to dynamically adjust a health insurance premium, for instance.

As hinted at earlier, the principle of “data minimization” requires limiting the amount of personal data processed to what is objectively necessary to achieve the purposes that have been specified at the collection stage (Article 5(1)c GDPR). At first glance, this seems to clash with the vast amount of data often used to train and tap into the potential of AI systems. It is therefore essential to reverse the “collect first, think after” mindset by laying down the objectives that the AI system is supposed to achieve before harvesting the data used to train or fuel its predictive capabilities. Doing so, however, is not always realistic when such systems are designed outside any concrete application area and are meant to evolve over time. Certain techniques can nonetheless help reduce their impact on individuals’ privacy. At the training stage, pseudonymization methods such as generalization and randomization – both discussed in Section 7.3.1.2 – remain pertinent. Standard feature selection methods can also assist controllers in pruning from their training datasets variables that add little value to the development of their model.Footnote 60 In addition, federated machine learning, which relies on the training, sharing and aggregation of “local” models, is a viable alternative to the centralization of training datasets in the hands of a single entity, and reduces the risks associated with their duplication.Footnote 61 At the inference stage, running the machine learning model on the device itself rather than hosting it on the cloud is also an option to reduce the need to share personal data with a central entity.Footnote 62
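The intuition behind federated learning can be illustrated with a minimal Python sketch, assuming a hypothetical scenario with three clients training a simple linear model: the raw data never leave each client, and the coordinator only averages the locally updated parameters (the federated averaging scheme).

```python
import numpy as np

rng = np.random.default_rng(1)

def local_update(global_weights, X, y, lr=0.1, epochs=20):
    """One round of local training of a linear model (least squares, gradient descent)."""
    w = global_weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# Simulated local datasets; in a real deployment these would stay on each client's premises.
true_w = np.array([1.0, -2.0, 0.5])
clients = []
for _ in range(3):
    X = rng.normal(size=(100, 3))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    clients.append((X, y))

# Federated averaging: the coordinator only ever sees weight vectors, never raw records.
global_w = np.zeros(3)
for _ in range(10):
    local_weights = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_weights, axis=0)

print("recovered weights:", np.round(global_w, 2))  # close to [1.0, -2.0, 0.5]
```

Note that shared model parameters can themselves leak information (see Section 7.3.1.2), so federated learning is a mitigation measure rather than a guarantee of anonymity.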

7.4.2 The Complexity of AI Systems v. Transparency and Explainability
7.4.2.1 Ex-ante and Ex-post Transparency Mechanisms

As a general principle, transparency percolates through the entire Regulation and plays a critical role in an increasingly datafied society. As noted in Recital 39 GDPR, “it should be transparent to natural persons that personal data concerning them are collected, used, consulted or otherwise processed and to what extent the personal data are or will be processed.” To meet that objective, Articles 13 and 14 detail the full list of information controllers must provide to data subjects. It includes, among others, the contact details of the controller and its representative, the purposes and legal basis of the processing, the categories of personal data concerned, any recipient, and information on how to exercise their rights.Footnote 63 Article 12 then obliges controllers to communicate that information in a “concise, transparent, intelligible and easily accessible way, using clear and plain language,” in particular for information addressed to children. This requires them to tailor the way they substantiate transparency to their audience by adapting the tone and language to the targeted group. Beyond making complex environments observable, this form of ex-ante transparency also pursues an instrumental goal by enabling other prerogatives.Footnote 64 As pointed out in literature, “neither rectification or erasure […] nor blocking or objecting to the processing of personal data seems easy or even possible unless the data subject knows exactly what data [are being processed] and how.”Footnote 65 Articles 13 and 14 therefore ensure that data subjects are equipped with the necessary information to later exercise their rights.

In this regard, Articles 15 to 22 complement Articles 13 and 14 by granting data subjects an arsenal of prerogatives they can use to regain control or balance information asymmetries. These include the right to access, to rectify, to erase, restrict, and move one’s data, as well as the right to challenge and to object to certain types of automated decision-making processes. More specifically, Article 15 grants data subjects the right to request a confirmation that personal data concerning them are being processed, more information on the relevant processing operations and a copy of the personal data involved. As a form of ex-post transparency mechanism, it allows data subjects to look beyond what is provided in a typical privacy policy and obtain an additional, individualized layer of transparency. Compared to the information provided in the context of Articles 13 and 14, controllers should, when answering an access request, tailor the information provided to the data subject’s specific situation. This would involve sharing the recipients to whom their personal data have actually been disclosed, or the sources from which these have actually been obtained – a point of information that might not always be clear at the time the privacy policy is drafted.Footnote 66 By allowing data subjects to verify controllers’ practices, Article 15 paves the way for further remedial actions, should it be necessary. It is therefore regarded as one of the cornerstones of data protection law, and is one of the few guarantees explicitly acknowledged in Article 8 CFREU.

7.4.2.2 Algorithmic Transparency – And Explainability?

AI systems are increasingly used to make or support decisions concerning individuals based on their personal data. Fields of application range from predictive policing to hiring strategies and healthcare, but all share a certain degree of opacity as well as the potential to adversely affect the data subjects concerned. The GDPR seeks to address these risks through a patchwork of provisions regulating what Article 22(1) defines as “decisions based solely on automated processing, including profiling, which produce legal effects concerning [the data subject] or similarly significantly affect him or her.” This would typically include, according to Recital 71, the “automatic refusal of an online credit application” or “e-recruiting practices without any form of human intervention.” “Based solely,” in this context, does not require the complete absence of human involvement for a decision to fall within the scope of Article 22(1). The routine usage of a predictive system by a person who is not in a position to exercise any influence or meaningful oversight over its outcome would, for instance, also fall under Article 22(1).Footnote 67 While fabricating human involvement is certainly not a viable way out, national data protection authorities are still refining the precise contours of that notion.Footnote 68

Controllers that rely on such automated decision-making must inform data subjects about their existence, and provide them with “meaningful information about the logic involved,” as well as their “significance and the envisaged consequences.” This results from the combined reading of Articles 13(2)f, 14(2)g, and 15(1)h. Additionally, Article 22(3) and Recital 71 grant data subjects the right to obtain human intervention, express their point of view, contest the decision and – allegedly – obtain an explanation of the decision reached. Over the last few years, these provisions have fueled a lively debate as to the existence of a so-called “right to explanation” that would allow data subjects to enquire about how a specific decision was reached rather than only about the overall functioning of the underlying system.Footnote 69 Regardless of these controversies, it is commonly agreed that controllers should avoid “complex mathematical explanations” and rather focus on concrete elements such as “the categories of data that have been or will be used in the profiling or decision-making process; why these categories are considered pertinent; how the profile is built, including any statistics used in the analysis; why this profile is relevant and how it is used for a decision concerning the data subject.”Footnote 70 The “right” explanation will therefore strongly depend on the sector and audience at stake.Footnote 71 A media outlet that decides to offer users a personalized news feed might, for instance, need to explain the actual characteristics taken into account by its recommender system, as well as their weight in the decision-making process and how past behavior has led the system to take a specific editorial decision.Footnote 72
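What such an explanation might look like in practice can be sketched with a hypothetical linear scoring model in Python: each characteristic's contribution to the final recommendation score (weight times value) is reported to the data subject, ordered by importance. This is only an illustration of the kind of per-decision transparency discussed above, not a format mandated by the GDPR, and the feature names and weights are invented.

```python
# Hypothetical recommender weights and a hypothetical user profile.
weights = {"articles_read_politics": 0.8, "articles_read_sports": 0.3,
           "time_on_page_avg": 0.5, "shares_last_week": 1.2}

user_profile = {"articles_read_politics": 12, "articles_read_sports": 1,
                "time_on_page_avg": 3.4, "shares_last_week": 2}

# Each feature's contribution to the score is simply weight * observed value.
contributions = {feature: weights[feature] * value
                 for feature, value in user_profile.items()}
score = sum(contributions.values())

print(f"recommendation score: {score:.1f}")
for feature, contribution in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
    print(f"  {feature}: {contribution:+.1f}")
```

For genuinely opaque models, such per-decision breakdowns typically rely on post-hoc explanation techniques developed in the XAI community rather than on the model's own parameters.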

7.4.3 The Dynamicity of AI v. the Risk-Based Approach
7.4.3.1 Accountability, Responsibility, Data Protection by Design and DPIAs

Compared to its predecessor,Footnote 73 one of the main objectives of the GDPR was to move away from compliance as a mere ticking-the-box exercise – or window dressingFootnote 74 – by incentivizing controllers to take up a more proactive role in the implementation of appropriate measures to protect individuals’ rights and freedoms. This led to the abolition of the antiquated, paternalistic obligation for controllers to notify their processing operations to national regulators in favor of a more flexible approach articulated around the obligation to maintain a record of processing activities (Article 30), to notify data breaches to competent authorities and the affected data subjects (Articles 33 and 34) and to consult the former in cases where a data protection impact assessment (“DPIA”) indicates that the processing would result in a high risk in the absence of measures taken by the controller to mitigate the risk (Article 36). The underlying idea was to responsibilize controllers by shifting the burden of analyzing and mitigating the risks to data subjects’ rights and freedoms onto them. Known as the “risk-based approach,” it ensures both the flexibility and scalability needed for the underlying rules to remain pertinent in a wide variety of scenarios. As noted in legal literature, the risk-based approach “provides a way to carry out the shift to accountability that underlies much of the data protection reform, using the notion of risk as a reference point in light of which we can assess whether the organisational and technical measures taken by the controller offer a sufficient level of protection.”Footnote 75

The combined reading of Articles 5(2) (“accountability”), 24(1) (“responsibility”), and 25(1) (“data protection by design”) now requires controllers to take into account the state of the art, the cost of implementation, and the nature, scope, context, and purposes as well as the risks posed by the processing. They should implement, both at the time of determination of the means for processing and at the time of the processing itself, appropriate technical and organizational measures to ensure and demonstrate compliance with the Regulation. In other words, they must act responsibly as of the design stage, and throughout the entire data processing lifecycle. Data protection-specific risks are usually addressed in a DPIA, which should at least provide a detailed description of the relevant processing activities, an assessment of their necessity and proportionality, as well as an inventory of the risks and corresponding mitigation strategies (see Figure 7.4).Footnote 76 While Article 35(1) obliges controllers to conduct a DPIA for processing activities that are “likely to result in a high risk for rights and freedoms of natural persons,” such an exercise, even if succinct, is also considered as best practice for all controllers regardless of the level of risk.Footnote 77

Figure 7.4 Overview of the main steps of a Data Protection Impact Assessment

7.4.3.2 From DPIAs to AIAs, and the Rise of Algorithmic Governance

The development and use of AI systems are often considered as processing likely to result in a “high risk,” for which a DPIA is therefore mandatory. In fact, Article 35(3) GDPR, read in combination with the Guidelines from the WP29 on the matter,Footnote 78 extends that obligation to any processing that involves, among others, the evaluation, scoring or systematic monitoring of individuals, the processing of data on a large scale, the matching or combining of datasets or the innovative use or application of new technological or organizational solutions. All these attributes are, in most cases, inherent to AI systems and therefore exacerbate the risks for individuals’ fundamental rights and freedoms. Among these is, for instance, the right not to be discriminated against. This is best illustrated by the Dutch “Toeslagenaffaire,” following which the national regulator fined the Tax Administration for having unlawfully created erroneous risk profiles using a machine learning algorithm in an attempt to detect and prevent child care benefits fraud, which led to the exclusion of thousands of alleged fraudsters from social protection.Footnote 79 Recent research has also uncovered the risk of bias in predictive policing and offensive speech detection systems, both vulnerable to imbalanced training datasets, and liable to reflect past discrimination.Footnote 80

Addressing these risks requires more than just complying with the principles of lawfulness, purpose limitation, and data minimization. It also goes beyond the provision of explanations, however accessible and accurate these may be. In fact, that issue largely exceeds the boundaries of the GDPR itself which, as hinted in Section 7.3, is but one regulatory angle among many others. The AI Act is, for instance, a case in point.Footnote 81 More generally, this book is a testimony to the diversity of the regulatory frameworks applicable to AI systems. This calls for a drastic rethinking of how AI systems are designed and deployed to mitigate their adverse impact on society. This has led to the development of Algorithmic – rather than Data Protection – Impact Assessments (“AIAs”), conceived as broader risk management approaches that integrate but are not limited to data protection concerns.Footnote 82 While these assessments can assist controllers in developing their own technology, they are also relevant for controllers relying on off-the-shelf AI solutions offered by third parties, who are increasingly resorting to auditing and regular testing to ensure that these products comply with all applicable legislation. All in all, the recent surge in awareness of AI’s risks has laid the groundwork for the rise of a form of algorithmic accountability.Footnote 83 Far from an isolated legal task, however, identifying and mitigating the risks associated with the use of AI systems is, by nature, an interdisciplinary exercise. Likewise, proper solutions will mostly follow from the research conducted in fora that bridge the gap between these different domains, such as the explainable AI (“XAI”) and human–computer interaction (“HCI”) communities.

7.5 Conclusion

As pointed out from the get-go, this chapter serves as an entry point into the intersection of AI and data protection law, and strives to orient the reader toward the most authoritative sources on each of the subjects it touches upon. It is hence but a curated selection of the most relevant data protection principles and rules articulated around the most salient characteristics of AI systems. Certain important issues therefore had to be left out, including the obligation to ensure a level of security appropriate to the risks at stake, the rules applicable to special categories of personal data, the exercise of data subjects’ rights, the role of certification mechanisms and codes of conduct, and the safeguards surrounding the transfers of personal data to third countries. Specific sources on these issues are, however, plentiful.

There is no doubt that AI systems, and the large-scale processing of personal data that is often associated with their development and use, have put a strain on individuals’ fundamental rights and freedoms. The goal of this chapter was to highlight the role of the GDPR in mitigating these risks by clarifying its position and function within the broader EU regulatory ecosystem. It also aimed to equip the reader with the main concepts necessary to decipher the complexity of its material and personal scope of application. More importantly, it sought to debunk the myth according to which the applicability of the GDPR to AI systems would inevitably curtail their deployment, or curb innovation altogether. As illustrated throughout this contribution, tensions do exist. But the open-ended nature of Article 5, paired with the interpretation power granted to European and national supervisory authorities, provides the flexibility needed to adapt the GDPR to a wide range of scenarios. As with all legislation that aims to balance competing interests, the key mostly – if not entirely – lies in ensuring the necessity and proportionality of the interferences with the rights at stake. For that to happen, it is crucial that all stakeholders are aware of both the risks raised by AI systems for the fundamental rights to privacy and data protection, and of the solutions that can be deployed to mitigate these concerns and hence guarantee an appropriate level of protection for all the individuals involved.

8 Tort Liability and Artificial Intelligence – Some Challenges and (Regulatory) Responses

Jan De Bruyne and Wannes Ooms
8.1 Introduction

Artificial intelligence (AI) is becoming increasingly important in our daily lives and so is academic research on its impact on various legal domains.Footnote 1 One of the fields that has attracted much attention is extra-contractual or tort liability. That is because AI will inevitably cause damage, for instance, following certain actions/decisions (e.g., an automated robot vacuum not recognizing a human and eventually harming them) or when it provides incorrect information that results in harm (e.g., when AI used in construction leads to the collapse of a building that hurts a bystander). Reference can also be made to accidents involving autonomous vehicles.Footnote 2 The autopilot of a Tesla car, for instance, was not able to distinguish a white tractor-trailer crossing the road from the bright sky above, leading to a fatal crash.Footnote 3 A self-driving Uber car hit a pedestrian in Arizona. The woman later died in the hospital.Footnote 4 These – and many other – examples show that accidents may happen despite optimizing national and supranational safety rules for AI. This is when questions of liability become significant.Footnote 5 The importance of liability for AI systems has already been highlighted in several documents issued by the European Union (EU). The White Paper on Artificial Intelligence, for instance, stresses that the main risks related to the use of AI concern the application of rules designed to protect fundamental rights as well as safety and liability-related issues.Footnote 6 Scholars have also concluded that “[l]iability certainly represents one of the most relevant and recurring themes”Footnote 7 when it comes to AI systems. Extra-contractual liability also encompasses many of the fundamental questions and problems that arise in the context of AI.

Both academic researchFootnote 8 and policy initiativesFootnote 9 have already addressed many pressing issues in this legal domain. Instead of discussing the impact of AI on different (tort) liability regimes or issues of legal personality for AI systems,Footnote 10 we will touch upon some of the main challenges and proposed solutions at the EU and national level. More specifically, we will illustrate the remaining importance of national law (Section 8.2) and procedural elements (Section 8.3). We will then focus on the problematic qualification and application of certain tort law concepts in an AI-context (Section 8.4). The most important findings are summarized in the chapter’s conclusion (Section 8.5).Footnote 11

8.2 The Remaining Importance of National Law for AI-Related Liability

In recent years, several initiatives with regard to liability for damage involving AI have been taken or discussed at the EU level. Without going into detail, we will provide a high-level overview to give the reader the necessary background to understand some of the issues that we will discuss later.Footnote 12

The European Parliament (EP) issued its first report on civil law rules for robots in 2017. It urged the European Commission (EC) to consider a legislative instrument that would deal with the liability for damage caused by autonomous systems and robots, thereby evaluating the feasibility of a strict liability or a risk management approach.Footnote 13 This was followed by a report issued by an Expert Group set up by the EC on the “Liability for artificial intelligence and other emerging digital technologies” in November 2019. The report explored the main liability challenges posed to current tort law by AI. It concluded that liability regimes “in force in member states ensure at least basic protection of victims whose damage is caused by the operation of such new technologies.”Footnote 14 However, the specific characteristics of AI systems, such as their complexity, self-learning abilities, opacity, and limited predictability, may make it more difficult to offer victims a claim for compensation in all cases where this seems justified. The report also stressed that the allocation of liability may be unfair or inefficient. It contains several recommendations to remedy potential gaps in EU and national liability regimes.Footnote 15 The EC subsequently issued a White Paper on AI in 2020. It had two main building blocks, namely an “ecosystem of trust” and an “ecosystem of excellence.”Footnote 16 More importantly, the White Paper was accompanied by a report on safety and liability. The report identified several points that needed further attention, such as clarifying the scope of the product liability directive (PLD) or assessing procedural aspects (e.g., identifying the liable person, proving the conditions for a liability claim or accessing the AI system to substantiate the claim).Footnote 17 In October 2020, the EP adopted a resolution with recommendations to the EC on a civil liability regime for AI. It favors strict liability for operators of high-risk AI systems and fault-based liability for operators of low-risk AI systems,Footnote 18 with a reversal of the burden of proof.Footnote 19 In April 2021, the EC issued its draft AI Act, which entered into force in August 2024 after a long legislative procedure.Footnote 20 The AI Act adheres to a risk-based approach. While certain AI systems are prohibited, several additional requirements apply for placing high-risk AI systems on the market. The AI Act also imposes obligations upon several parties, such as providers and users of high-risk AI systems.Footnote 21 Those obligations will be important to assess the potential liability of such parties, for instance, when determining whether an operator or user committed a fault (i.e., violation of a specific legal norm or negligence).Footnote 22 More importantly, the EC published two proposals in September 2022 that aim to adapt (tort) liability rules to the digital age, the circular economy, and the impact of the global value chain. The “AI Liability Directive” contains rules on the disclosure of information and the alleviation of the burden of proof in relation to damage caused by AI systems.Footnote 23 The “revised Product Liability Directive” substantially modifies the current product liability regime by including software within its scope, integrating new circumstances to assess the product’s defectiveness and introducing provisions regarding presumptions of defectiveness and causation.Footnote 24

These evolutions show that much is happening at the EU level regarding liability for damage involving AI. The problem, however, is that the European liability landscape is rather heterogeneous. With the exception of the (revised) PLD and the newly proposed AI Liability Directive, contractual and extra-contractual liability frameworks are usually national. While initiatives are thus taken at the EU level, national law remains the most important source when it comes to tort liability and AI. Several of these proposals and initiatives discussed in the previous paragraph contain provisions and concepts that refer to national law or that rely on the national courts for their interpretation.Footnote 25 According to Article 8 of the EP Resolution, for instance, the operator will not be liable if he or she can prove that the harm or damage was caused without his or her fault, relying on either of the following grounds: (a) the AI system was activated without his or her knowledge while all reasonable and necessary measures to avoid such activation outside of the operator’s control were taken or (b) due diligence was observed by performing all the following actions: selecting a suitable AI system for the right task and skills, putting the AI system duly into operation, monitoring the activities, and maintaining the operational reliability by regularly installing all available updates.Footnote 26 The AI Liability Directive also relies on concepts that will eventually have to be explained and interpreted by judges. National courts will, for instance, need to limit the disclosure of evidence to that which is necessary and proportionate to support a potential claim or a claim for damages.Footnote 27 It also relies on national law to determine the scope and definition of “fault” and “causal link.”Footnote 28 The revised PLD includes different notions that will have to be interpreted, explained, and refined by national judges as well according to their legal tradition. These concepts, for instance, include “reasonably foreseeable,” “substantial,” “relevant,” “proportionate,” and “necessary.”Footnote 29 The definitions provided by courts may vary from one jurisdiction to another, which does give some flexibility to Member States, but may create legal fragmentation as well.Footnote 30

8.3 Procedural Elements

A “general, worldwide accepted rule”Footnote 31 in the law of evidence is that each party has to prove its claims and contentions (actori incumbit probatio).Footnote 32 The application of this procedural rule can be challenging when accidents involve AI systems. Such systems are not always easily understandable and interpretable but can take the form of “black boxes” that evolve through self-learning. Several actors are also involved in the AI life cycle (e.g., the developers of the software, the producer of the hardware, owners of the AI product, suppliers of data, public authorities, or the users of the product). Victims are therefore confronted with the increasingly daunting task of identifying the AI system as the source of their harm and proving this in court.Footnote 33 Moreover, injured parties, especially if they are natural persons, do not always have the necessary knowledge of the specific AI system or access to the information needed to build a case in court.Footnote 34 Under the Product Liability Directive, the burden of proof is high as well. A victim has to prove that the product caused the damage because it is defective, implying that it did not provide the safety one is legitimately entitled to expect.Footnote 35 It is also uncertain what exactly constitutes a defect of an advanced AI system. For instance, if an AI diagnosis tool delivers a wrong diagnosis, “there is no obvious malfunctioning that could be the basis for a presumption that the algorithm was defective.”Footnote 36 It may thus be difficult and costly for consumers to prove the defect when they have no expertise in the field, especially when the computer program is complex and not readable ex post.Footnote 37 An additional hurdle is that the elements of a claim in tort law are governed by national law. An example is the requirement of causation, including procedural questions such as the standard of proof or the laws and practice of evidence.Footnote 38

In sum, persons who have suffered harm may not have effective access to the evidence that is necessary to build a case in court and may have less effective redress possibilities compared to situations in which the damage is caused by “traditional” products.Footnote 39 It is, however, important that victims of accidents involving AI systems are not confronted with a lower level of protection than victims of other products and services for which they would receive compensation under national law. Otherwise, societal acceptance of those AI systems and other emerging technologies could be hampered and users could become hesitant to adopt them.Footnote 40

To remedy this “vulnerable” or “weak” position, procedural mechanisms and solutions have been proposed and discussed in academic scholarship.Footnote 41 One can think of disclosure requirements. Article 3 of the AI Liability Directive, for instance, contains several provisions on the disclosure of evidence. A court may, upon the request of a (potential) claimant, order the disclosure of relevant evidence about a specific high-risk AI system that is suspected of having caused damage. Such requests for evidence may be addressed to, inter alia, the provider of an AI system, a person subject to the provider’s obligations or its user.Footnote 42 Several requirements must be fulfilled by the (potential) claimant before the court can order the disclosure of evidence.Footnote 43 National courts also need to limit the disclosure of evidence to what is necessary and proportionate to support a potential claim or an actual claim for damages.Footnote 44 To that end, the legitimate interests of all parties – including providers and users – as well as the protection of confidential information should be taken into account.Footnote 45 The revised PLD contains similar provisions. Article 8 allows Member States’ courts to require the defendant to disclose to the injured person – the claimant – relevant evidence that is at its disposal. The claimant must, however, present facts and evidence that are sufficient to support the plausibility of the claim for compensation.Footnote 46 Moreover, the disclosed evidence can be limited to what is necessary and proportionate to support a claim.Footnote 47

Several policy initiatives also propose a reversal of the burden of proof. The Expert Group on Liability and New Technologies, for instance, proposes that “where the damage is of a kind that safety rules were meant to avoid, failure to comply with such safety rules, should lead to a reversal of the burden of proving (a) causation, and/or (b) fault, and/or (c) the existence of a defect.”Footnote 48 It adds that if “it is proven that an emerging digital technology caused harm, and liability therefore is conditional upon a person’s intent or negligence, the burden of proving fault should be reversed if disproportionate difficulties and costs of establishing the relevant standard of care and of proving their violation justify it.”Footnote 49 The burden of proving causation may also be alleviated in light of the challenges of emerging digital technologies if a balancing of the listed factors warrants doing so (e.g., the likelihood that the technology at least contributed to the harm or the kind and degree of harm potentially and actually caused).Footnote 50 It has already been mentioned that the Resolution issued by the EP in October 2020 also contains a reversal of the burden of proof regarding fault-based liability for operators of low-risk AI systems.Footnote 51

In addition to working with a reversal of the burden of proof, one can also rely on rebuttable presumptions. In this regard, both the AI Liability Directive and the revised PLD are important. Article 4.1 of the AI Liability Directive, for instance, introduces a rebuttable presumption of a “causal link between the fault of the defendant and the output produced by the AI system or the failure of the AI system to produce an output.” However, this presumption only applies when three conditions are met. First, the fault of the defendant has to be proven by the claimant according to the applicable EU law or national rules, or presumed by the court following Article 3.5 of the AI Liability Directive. Such a fault can be established, for example, “for non-compliance with a duty of care pursuant to the AI Act.”Footnote 52 Second, it can be considered reasonably likely, based on the circumstances of the case, that the fault has influenced the output produced by the AI system or the failure of the AI system to produce an output. Third, the claimant needs to demonstrate that the output produced by the AI system or the failure of the AI system to produce an output gave rise to the damage. The defendant, however, has the right to rebut the presumption of causality.Footnote 53 Moreover, in the case of a claim for damages concerning a high-risk AI system, the court is not required to apply the presumption when the defendant demonstrates that sufficient evidence and expertise is reasonably accessible for the claimant to prove the causal link.Footnote 54

The revised PLD also introduces presumptions of defectiveness and causality that apply under certain conditions. Such conditions include the defendant’s failure to disclose relevant evidence, the claimant providing evidence that the product does not comply with mandatory safety requirements set in EU or national law, or the claimant establishing that the damage was caused by an “obvious malfunction” of the product during normal use or under ordinary circumstances. Article 9.3 also provides a presumption of causality when “it has been established that the product is defective and the damage caused is of a kind typically consistent with the defect in question.” In other words, Article 9 contains two specific presumptions, one of the product’s defectiveness and one related to the causal link between the defectiveness of the product and the damage. In addition, Article 9.4 contains a more general presumption. Where a national court decides that “the claimant faces excessive difficulties, due to the technical or scientific complexity, to prove the product’s defectiveness or the causal link between its defectiveness and the damage” (or both), the defectiveness of the product or causal link between its defectiveness and the damage (or both) are presumed when certain conditions are met. The claimant must demonstrate, based on “sufficiently relevant evidence,” that the “product contributed to the damage”Footnote 55 and that it is “likely that the product was defective or that its defectiveness is a likely cause of the damage, or both.”Footnote 56 The defendant, however, has the right “to contest the existence of excessive difficulties” or the mentioned likelihood.Footnote 57 Of course, the defendant is allowed to rebut any of these presumptions as well.Footnote 58

8.4 Problematic Qualification of Certain Tort Law Concepts

The previous parts focused on more general evolutions regarding AI and liability. The application of “traditional” tort law concepts also risks becoming challenging in an AI context. Regulatory answers will need to be found to remedy the gaps that could potentially arise. We will illustrate this with two notions used in the Product Liability Directive, namely “product” (part 8.4.1) and “defect” (part 8.4.2). We will also show that the introduction of certain concepts in (new) supranational AI-specific liability legislation can be challenging due to the remaining importance of national law. More specifically, we will discuss the requirement of “fault” in the proposed AI Liability Directive (part 8.4.3).

8.4.1 Software as a Product?

Article 1 of the Product Liability Directive stipulates that the producer is liable for damage caused by a defect in the product. Technology and industry, however, have evolved drastically over the last decades. The division between products and services is no longer as clear-cut as it was. Producing products and providing services are increasingly intertwined.Footnote 59 In this regard, the question arises whether software is a product or is instead provided as a service, thus falling outside the scope of the PLD.Footnote 60 Software and AI systems merit specific attention in respect of product liability. Software is essential to the functioning of a large number of products and affects their safety. It is integrated into products, but it can also be supplied separately to enable the use of the product as intended. Neither a computer nor a smartphone would be of particular use without software. The question whether stand-alone software can be qualified as a product within the meaning of the Product Liability Directive or implementing national legislation has already attracted a lot of attention, both in academic scholarshipFootnote 61 and in policy initiatives.Footnote 62 That is because software is a collection of data and instructions that is imperceptible to the human eye.Footnote 63

Unclarity remains as to whether software is (im)movable and/or an (in)tangible good.Footnote 64 The Belgian Product Liability Act – implementing the PLD – stipulates that the regime only concerns tangible goods.Footnote 65 Although neither the Belgian Court of Cassation nor the European Court of Justice has yet ruled on the matter, the revised PLD specifically qualifies software and digital manufacturing files as products.Footnote 66 The inclusion of software is rather surprising, yet essential.Footnote 67 Recital (13) of the revised PLD states that it should not apply to “free and open-source software developed or supplied outside the course of a commercial activity” in order not to hamper innovation or research. However, where software is supplied in exchange for a price or personal data is provided in the course of a commercial activity (i.e., for other purposes than exclusively improving the security, compatibility or interoperability of the software), the Directive should apply.Footnote 68 Regardless of the qualification of software, the victim of an accident involving an AI system may have a claim against the producer of a product incorporating software, such as an autonomous vehicle, a robot used for surgery or a household robot. Software steering the operations of a tangible product could be considered a part or component of that product.Footnote 69 This means that an autonomous vehicle or a physical robot used for surgery would be considered a product in the sense of the Product Liability Directive and can be defective if the software system it uses is not functioning properly.Footnote 70

8.4.2 “Defective” Product

Liability under the Product Liability Directive requires a “defect” in the product. A product is defective when it does not provide the safety that a person is entitled to expect, taking all circumstances into account (the so-called “consumer expectations test” as opposed to the “risk utility test”).Footnote 71 This does not refer to the expectations of a particular person but to the expectations of the general publicFootnote 72 or the target audience.Footnote 73 Several elements can be used to determine the legitimate expectations regarding the use of AI systems. These include the presentation of the product, the normal or reasonably foreseeable use of it and the moment in time when the product was put into circulation.Footnote 74 This enumeration of criteria, however, is not exhaustive as other factors may play a role as well.Footnote 75 The criterion of the presentation of the product is especially important for manufacturers of autonomous vehicles or medical robots, as they often market their products explicitly as safer than existing alternatives. On the other hand, the presentation of the product may also provide an opportunity for manufacturers of AI systems to reduce their liability risk through appropriate warnings and user information. Nevertheless, it remains uncertain how technically detailed or accessible such information should be.Footnote 76 The revised PLD also refers to the legitimate safety expectations.Footnote 77 A product is deemed defective if it fails to “provide the safety which the public at large is entitled to expect, taking all circumstances into account.”Footnote 78 The non-exhaustive list of circumstances used to assess the product’s defectiveness is expanded and also includes “the effect on the product of any ability to continue to learn after deployment.”Footnote 79 It should, however, be noted that the product cannot be considered defective for the sole reason that a better product, including updates or upgrades to a product, is already or subsequently placed on the market or put into service.Footnote 80

That being said, the criterion of legitimate expectations remains very vague (and problematicFootnote 81). It gives judges a wide margin of appreciation.Footnote 82 As a consequence, it is difficult to predict how this criterion will and should be applied in the context of AI systems.Footnote 83 The safety expectations will be very high for AI systems used in high-risk contexts such as healthcare or mobility.Footnote 84 At the same time, however, the concrete application of this test remains difficult for AI systems because of their novelty, the difficulty of comparing these systems with human or technological alternatives and the characteristics of autonomy and opacity.Footnote 85 The interconnectivity of products and systems also makes it hard to identify the defect. Sophisticated AI systems with self-learning capabilities also raise the question of whether unpredictable deviations in the decision-making process can be treated as defects. Even if they constitute a defect, the state-of-the-art defenseFootnote 86 may eventually apply. The complexity and the opacity of emerging digital technologies such as AI systems make it even harder for the victim to discover and prove the defect and/or causation.Footnote 87 In addition, there is some uncertainty about how and to what extent the Product Liability Directive applies in the case of certain types of defects, for example, those resulting from weaknesses in the cybersecurity of the product.Footnote 88 It has already been mentioned that the revised PLD establishes a presumption of defectiveness under certain conditions to remedy these challenges.Footnote 89

8.4.3 The Concept of Fault in the AI Liability Directive

In addition to the challenging application of “traditional” tort law concepts in an AI context, newly introduced legislation in this field may also rely on notions that are unclear. This unclarity could affect legal certainty, especially considering the remaining importance of national law. We will illustrate this with the requirement of “fault” as proposed in the AI Liability Directive.

It has already been mentioned that Article 4.1 of the AI Liability Directive contains a rebuttable presumption of a “causal link between the fault of the defendant and the output produced by the AI system or the failure of the AI system to produce an output.” The fault of the defendant has to be proven by the claimant according to the applicable EU law or national rules. Such a fault can be established, for example, “for non-compliance with a duty of care pursuant to the AI Act.”Footnote 90 The relationship between the notions of “fault” and “duty of care” under the AI Liability Directive, and especially in Article 4, is unclear and raises interpretation issues.Footnote 91 The AI Liability Directive uses the concept of “duty of care” on several occasions. Considering that tort law is still to a large extent national, the reliance on the concept of “duty of care” in supranational legislation is rather surprising. A “duty of care” is defined as “a required standard of conduct, set by national or Union law, in order to avoid damage to legal interests recognized at national or Union law level, including life, physical integrity, property and the protection of fundamental rights.”Footnote 92 It refers to how a reasonable person should act in a specific situation, which also “ensure[s] the safe operation of AI systems in order to prevent damage to recognized legal interests.”Footnote 93 In addition to the fact that the content of a duty of care will ultimately have to be determined by judges, a more conceptual issue arises as well. That is because the existence of a generally applicable positive duty of care has already been contested, for instance, in Belgium. Kruithof concludes that case law and scholarship commonly agree that no breach of a “pre-existing” duty is required for a fault to be established. As noted by Kruithof, what is usually referred to as the generally required level or the duty of care “is therefore more properly qualified not as a legal duty or obligation, but merely a standard of behavior serving as the yardstick for judging whether an act is negligent or not for purposes of establishing liability.”Footnote 94 However, Article 4.1 (a) seems to equate “fault” with noncompliance with a duty of care, thereby implicitly endorsing the view that the duty of care constitutes a standalone obligation. This does not necessarily fit well in some national tort law frameworks, and may thus cause interpretation issues and fragmentation.Footnote 95

Article 1.3 (d) of the AI Liability Directive mentions that the Directive will not affect “how fault is defined, other than in respect of what is provided for in Articles 3 and 4.” A fault under Belgian law (and by extension other jurisdictions) consists of both a subjective component and an objective component. The (currently still applicable) subjective component requires that the fault can be attributed to the free will of the person who has committed it (“imputability”), and that this person generally possesses the capacity to control and to assess the consequences of his or her conduct (“culpability”).Footnote 96 This subjective element does not, however, seem to be covered by the AI Liability Directive. This raises the question whether the notion of “fault,” as referred to in Articles 3 and 4, requires such a subjective element to be present and/or allows for national law to require this. The minimal harmonization provision of Article 1.4 does not answer this question.Footnote 97 The objective component of a fault refers to the wrongful behavior in itself. Belgian law traditionally recognizes two types of wrongdoing, namely a violation of a specific legal rule of conductFootnote 98 and the breach of a standard of care.Footnote 99 Under Belgian law, a violation of a standard of care requires that it was reasonably foreseeable for the defendant that his or her conduct could result in some kind of damage.Footnote 100 This means that a provider of a high-risk AI system would commit a fault when he or she could reasonably foresee that a violation of a duty of care under the provisions of the AI Act would result in damage. However, it is unclear whether the notion of a “duty of care” as relied upon in the AI Liability Directive also includes this requirement of foreseeability or, instead, whether it is left to national (case) law to determine the additional modalities under which a violation of a “duty of care” can be established.Footnote 101

8.5 Concluding Remarks and Takeaways

We focused on different challenges that arise in tort law for damage involving AI. The chapter started by illustrating the remaining importance of national law for the interpretation and application of tort law concepts in an AI context. There will be an increasing number of cases in which the role of AI systems in causing damage, and especially the interaction between humans and machines, will have to be assessed. Therefore, a judge must have an understanding of how AI works and the risks it entails. As such, it should be ensured that judges – especially in the field of tort law – have the required digital capacity. We also emphasized the importance of procedural elements in claims involving AI systems. Although the newly proposed EU frameworks introduce disclosure requirements and rebuttable presumptions, it remains to be seen how these will be applied in practice, especially considering the many unclarities these proposals still entail. The significant amount of discretion that judges have in interpreting the requirements and concepts used in these new procedural solutions may result in diverging applications throughout the Member States. While these different interpretations might be interesting case studies, they will not necessarily contribute to the increased legal certainty that the procedural solutions aim to achieve. We also illustrated how AI has an impact on “traditional” and newly proposed tort law concepts. From a more general perspective, we believe that interdisciplinarity – for instance through policy prototypingFootnote 102 – will become increasingly important to remedy regulatory gaps and to devise new “rules” on AI and tort law.

9 Artificial Intelligence and Competition Law

Friso Bostoen
9.1 Introduction

Algorithmic competition issues have been in the public eye for some time.Footnote 1 In 2017, for example, The Economist warned: “Price-bots can collude against consumers.”Footnote 2 Press attention was fueled by Ezrachi and Stucke’s Virtual Competition, a well-received book on the perils of the algorithm-driven economy.Footnote 3 For quite some time, however, academic and press interest outpaced the reality on the ground.Footnote 4 Price algorithms had been used to fix prices, but the collusive schemes were relatively low-tech (overseen by sellers themselves) and the consumer harm seemingly limited (some buyers of Justin Bieber posters overpaid).Footnote 5 As such, the AI and competition law literature was called “the closest ever our field came to science-fiction.”Footnote 6 More recently, that has started to change – with an increase in science, and a decrease in fiction. New economic models show that sellers need not merely use pricing algorithms as tools to collude – the algorithms themselves can supplant human decision-makers and learn to charge supracompetitive prices autonomously.Footnote 7 Meanwhile, in the real world, pricing algorithms have become even more common and potentially pernicious, affecting markets as essential as real estate.Footnote 8

The topic of AI and competition law is thus ripe for reexamination, for which this chapter lays the groundwork. The chapter only deals with substantive competition law (and related areas of law), not with more institutional questions like enforcement, which deserve a separate treatment. Section 9.2 starts with the end-goal of competition law, that is, consumer welfare, and how algorithms and the increasing availability of data may affect that welfare. Section 9.3 dives into the main algorithmic competition issues, starting with restrictive agreements, both horizontal and vertical (Section 9.3.1), and moving on to abuse of dominance, both exclusionary and exploitative (Section 9.3.2). The guiding question is whether EU competition rules are up to the task of remedying these issues. Section 9.4 concludes with an agenda for future research.

Before we jump in, a note on terminology. The careful reader will have noticed that, despite the “AI” in the title, I generally refer to “algorithms.” An algorithm is simply a set of steps to be carried out in a specific way.Footnote 9 This “specific way” can be pen and paper, but algorithms truly show their potential when executed by computers that are programmed to do so. At that point, we enter the “computational” realm, but when can we refer to AI? The problem is that AI is somewhat of a nebulous concept. In the oft-quoted words of the late Larry Tesler: “AI is whatever hasn’t been done yet” (the so-called “AI Effect”).Footnote 10 Machine learning (ML) is a more useful term, referring to situations where the computer (machine) itself extracts the algorithm for the task from the underlying data.Footnote 11 Thus, with ML, “it is not the programmers anymore but the data itself that defines what to do next.”Footnote 12 In what follows, I continue to refer to algorithms to capture their various uses and manifestations. For a more extensive discussion of the technological aspects of AI, see Chapter 1 of this book.

9.2 Consumer Welfare, Data, and Algorithms

The goal of EU competition law has always been to prevent distortions of competition, in other words, to protect competition.Footnote 13 But protecting competition is a means to an end. As the General Court put it: “the ultimate purpose of the rules that seek to ensure that competition is not distorted in the internal market is to increase the well-being of consumers.”Footnote 14 Competition, and thus consumer welfare, has different parameters, in particular price, choice, quality or innovation.Footnote 15 A practice’s impact on those parameters often determines its (il)legality.

Algorithms can affect each of these parameters of competition. At the outset, though, it is important to understand that algorithms need input – that is, data – which they transform into output. When it comes to competition, the most relevant type of data is price data. Such data used to be hidden from view, requiring effort to collect (e.g., frequenting competitors’ stores). Nowadays, price transparency has become the norm, at least in business-to-consumer (B2C) settings, that is, at the retail level.Footnote 16 Prices tend to be available online (e.g., on the seller’s website). And digital platforms, including price comparison websites (PCWs), aggregate prices of different sellers in one place.

The effects of price transparency are ambiguous, as the European Commission (EC) found in its E-Commerce Sector Inquiry.Footnote 17 The fact that consumers can easily compare prices online leads to increased price competition between sellers.Footnote 18 At the same time, price transparency also allows firms to monitor each other’s prices, often algorithmically.Footnote 19 In a vertical relation between supplier and distributor, the supplier can more easily spot deviations from the retail price it recommended – and perhaps ask retailers for adjustment. In a horizontal relation between competitors, it has become common for firms to automatically adjust their prices to those of competitors.Footnote 20 In this case, the effects can go two ways. As EU Commissioner Vestager noted: “the effect of an algorithm depends very much on how you set it up.”Footnote 21 You can use an algorithm to undercut your rivals, which is a boon for consumers. Or you can use algorithms to increase prices, which harms consumers.

Both types of algorithms (undercutting and increasing) feature in the story of The Making of a Fly, a book that ended up being priced at over $23 million on Amazon. What happened? Two sellers of the book relied on pricing algorithms, with one systematically undercutting the other (but only just), and the other systematically charging a price 27% higher than the first. An upward price spiral ensued, resulting in the book’s absurd price. In many other instances, however, the effects are less absurd and more harmful. Various studies have examined petrol prices, which are increasingly transparent.Footnote 22 In Chile, the government even obliged petrol station owners to post their prices on a public website. After the website’s introduction in 2012, coordination by petrol station owners increased their margins by 9%, at the expense of consumers.Footnote 23 A similar result can be reached in the absence of such radical transparency. A study of German petrol stations found that adoption of algorithmic pricing also increased their margins by 9%.Footnote 24 Companies such as A2i specialize in providing such pricing software.Footnote 25
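
For intuition, the spiral can be reproduced in a few lines of code. The repricing factors below (a 0.2% undercut and a 27% markup) and the starting price are illustrative assumptions based only on the description above, not the sellers’ actual parameters.

```python
# Illustrative simulation of the repricing spiral described above. The undercut
# factor (0.998), markup factor (1.27) and starting price are assumptions made
# for the sake of the example, not the sellers' actual settings.

price_b = 50.00                          # hypothetical starting price for seller B

for day in range(1, 57):
    price_a = 0.998 * price_b            # seller A undercuts seller B, but only just
    price_b = 1.270 * price_a            # seller B charges ~27% more than seller A
    if day % 7 == 0:
        print(f"week {day // 7}: A = ${price_a:,.2f}, B = ${price_b:,.2f}")

# Each round multiplies both prices by 0.998 * 1.27 (roughly 1.27), so the
# listing grows exponentially until a human notices and resets the price.
```

With these assumed factors, the listing crosses the tens-of-millions mark after roughly eight weeks of daily repricing – the same order of magnitude as the $23 million actually observed.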

Algorithms can create competition issues beyond coordination on a supracompetitive price point. They can also be at the basis of unilateral conduct, of which two types are worth highlighting. First, algorithms allow for personalized pricing.Footnote 26 The input here is not pricing data from competitors but rather personal data from consumers. If personal data allows the seller to infer the consumers’ exact willingness to pay, they can perfectly price discriminate, although this scenario is theoretical for now. The impact of price discrimination is not straightforward: while some consumers pay more than they otherwise would, it can also allow firms to serve consumers they otherwise would not.Footnote 27 Second, algorithms are widely used for non-pricing purposes, in particular for ranking.Footnote 28 Indeed, digital platforms have sprung up to bring order to the boundless internet (e.g., Google Search for websites, Amazon Marketplace for products). Given the platforms’ power over consumer choice, a tweak of their ranking algorithm can marginalize one firm while bringing fortune to another. As long as tweaks are made in the interests of consumers, they are not problematic. But if tweaks are made simply to give prominence to the platform’s own products (“self-preferencing”), consumers may suffer the consequences.

9.3 Algorithmic Competition Issues

Competition law protects competition, thus guaranteeing consumer welfare, via specific rules. I focus on two provisions: the prohibitions of restrictive agreements (Article 101 TFEU) and of abuse of dominance (Article 102 TFEU).Footnote 29 The next sections examine these prohibitions, and the extent to which they substantively cover algorithmic competition issues.

9.3.1 Restrictive Agreements

Restrictive agreements come in two types: they are horizontal when entered into between competitors (“collusion”) and vertical when entered into between firms at different levels of the supply chain (e.g., supplier and distributor). An agreement does not require a contract; more informal types of understanding between parties (“concerted practices”) also fall under Article 101 TFEU.Footnote 30 To be illegal, the common understanding must have the object or effect of restricting competition. According to the case law, “by object” restrictions are those types of coordination that “can be regarded, by their very nature, as being harmful to the proper functioning of normal competition.”Footnote 31 Given that such coordination reveals, in itself, a sufficient degree of harm to competition, it is not necessary to assess its effects.Footnote 32 “By effect” restrictions do require such an assessment. In general, horizontal agreements are more likely to fall into the “by object” category (price-fixing being the typical example), while vertical agreements are more likely to be categorized as “by effect” (e.g., recommending retail prices). Let us look at horizontal and vertical agreements in turn.

9.3.1.1 Horizontal Agreements

There are two crucial aspects to every horizontal price-fixing agreement or “cartel”: the moment of their formation and their period of stability (i.e., when no cartelist deviates from the arrangement). In the physical world, cartel formation and stability face challenges.Footnote 33 It can be difficult for cartelists to reach a common understanding on the terms of the cartel (in particular the price charged), and coordination in any case requires contact (e.g., meeting in a hotel in Hawaii). Once an agreement is reached, the cartelists have to abide by it even while having an incentive to cheat (deviating from the agreement, e.g., by charging a lower price). Such cheating returns a payoff: in the period before detection, the cheating firm can win market/profit share from its co-cartelists (after detection, all cartelists revert to the competitive price level). The longer the period before detection, the greater the payoff and thus the incentive to cheat.

In a digital world, cartel formation and stability may face fewer difficulties.Footnote 34 Cartel formation does not require contact when algorithms themselves reach a collusive equilibrium. When given the objective to maximize profits (in itself not objectionable), an ML algorithm may figure out that charging a supracompetitive price, together with other firms deploying similar algorithms, satisfies that objective. And whether or not there is still an agreement at the basis of the cartel, subsequent stability is greater. Price transparency and monitoring algorithms allow for quicker detection of deviations from the cartel agreement.Footnote 35 As a result, the expected payoff from cheating is lower, meaning there is less of an incentive to do so.Footnote 36 When a third party algorithmically sets prices for different sellers (e.g., Uber for its drivers), deviation even becomes impossible. In these different ways, algorithmic pricing makes cartels more robust. Moreover, competition authorities may have more trouble detecting cartels, given that there is not necessarily a paper trail.
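
The incentive logic can be made concrete with a deliberately crude calculation: the gain from cheating is roughly the extra profit earned per period multiplied by the number of periods before the co-cartelists detect the deviation and retaliate. The figures below are invented purely for illustration.

```python
# Purely illustrative: faster detection shrinks the payoff from cheating on a
# cartel. The profit figure and detection lags are invented numbers.

def deviation_payoff(extra_profit_per_period: float, periods_before_detection: int) -> float:
    """Extra profit a cheater pockets before co-cartelists detect and retaliate."""
    return extra_profit_per_period * periods_before_detection

slow_manual_monitoring = deviation_payoff(10_000, periods_before_detection=30)
realtime_algorithmic_monitoring = deviation_payoff(10_000, periods_before_detection=1)

print(slow_manual_monitoring)            # 300000: cheating is tempting
print(realtime_algorithmic_monitoring)   # 10000: cheating barely pays
```

A fuller model would also discount the future losses from punishment, but the basic point stands: the shorter the pre-detection window, the weaker the incentive to deviate and the more robust the cartel.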

In short, digitization – in particular price transparency and the widespread use of algorithms to monitor/set prices – does not make cartels less likely or less durable; if anything, the opposite. Taking a closer look at algorithmically assisted price coordination, it is useful to distinguish three scenarios.Footnote 37 First, firms may explicitly agree on prices and use algorithms to (help) implement that agreement. Second, firms may use the same pricing algorithm provided by a third party, which results in price coordination without explicit agreement between them. Third, firms may instruct distinct pricing algorithms to maximize profits, which results in a collusive equilibrium/supracompetitive prices. With each subsequent scenario, the existence of an agreement becomes less clear; in the absence of an agreement, Article 101 TFEU does not apply. Let us test each scenario against the legal framework.

The first scenario, in which sellers algorithmically implement a prior agreement, does not raise difficult questions. The Posters case, referenced in the introduction, offers a model.Footnote 38 Two British sellers of posters, Trod and GB, agreed to stop undercutting each other on Amazon Marketplace. Given the difficulty of manually adjusting prices on a daily basis, the sellers implemented their cartel agreement via re-pricing software (widely available from third parties).Footnote 39 In practice, GB programmed its software to undercut other sellers but match the price charged by Trod if there were no cheaper competing offers. Trod configured its software with “compete rules” but put GB on an “ignore list” so that the rules it had programmed to undercut competitors did not apply to GB. Humans were still very much in the loop, as evidenced by emails in which employees complained about apparent noncompliance with the arrangement, in particular when the software did not seem to be working properly.Footnote 40 The UK Competition and Markets Authority had no trouble establishing an agreement, which fixed prices and was thus restrictive “by object.”
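
The configuration described in the decision can be paraphrased in a few lines of pseudo-logic. The sketch below is a hypothetical reconstruction based only on the description above; the actual third-party repricing software and its settings are not reproduced here.

```python
# Hypothetical reconstruction of the repricing rules described in the Posters
# case; function names, prices and structure are illustrative only.

def gb_reprice(trod_price: float, other_offers: list[float]) -> float:
    """GB's rule: undercut cheaper competing offers, but match Trod if there are none."""
    cheaper_offers = [p for p in other_offers if p < trod_price]
    if cheaper_offers:
        return min(cheaper_offers) - 0.01    # undercut the cheapest rival
    return trod_price                        # otherwise match Trod (the cartel term)

def trod_reprice(competitor: str, competitor_price: float, own_price: float,
                 ignore_list: set[str]) -> float:
    """Trod's rule: apply 'compete rules' to everyone except sellers on the ignore list."""
    if competitor in ignore_list:
        return own_price                     # GB is ignored, so never undercut
    return competitor_price - 0.01           # undercut any other seller

print(gb_reprice(trod_price=9.99, other_offers=[12.50, 10.40]))                       # 9.99
print(trod_reprice("GB", competitor_price=9.99, own_price=9.99, ignore_list={"GB"}))  # 9.99
```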

In this first scenario, the use of technology does not expose a legal vacuum; competition law is up to the task. But what if there was no preexisting price-fixing agreement? In that case, the sellers would simply be using repricing software to undercut other sellers and each other. At first sight, that situation appears perfectly competitive: undercutting competitors is the essence of competition – if that happens effectively and rapidly, all the better. The reality is more complex. Brown has studied the economics of pricing algorithms, finding that they change the nature of the pricing game.Footnote 41 The logic is this: once a firm commits to respond to whatever price its competitors charge, those competitors internalize that expected reaction, which conditions their pricing (they are more reluctant to decrease prices in the first place).Footnote 42 In short, even relatively simple pricing algorithms can soften competition. This is in line with the aforementioned study of algorithmic petrol station pricing in Germany.Footnote 43

The second scenario, in which sellers rely on a common algorithm to set their prices, becomes more difficult but not impossible to fit within Article 101 TFEU. There are two sub-scenarios to distinguish. First, the sellers may be suppliers via an online platform that algorithmically sets the price for them. This setting is not common, as platforms generally leave their suppliers free to set a price, but Uber, which sets prices for all of its drivers, provides an example.Footnote 44 Second, sellers may use the same “off-the-shelf” pricing software offered by a third party. The U.S. firm RealPage, for example, offers its YieldStar pricing software to a large number of landlords.Footnote 45 It relies not on public information (e.g., real estate listings) but on private information (actual rent charged) and even promotes communication between landlords through groups.Footnote 46 In either sub-scenario, there is not necessarily communication between the different sellers, be they Uber drivers or landlords. Rather, the coordination originates from a third party, the pricing algorithm provider. Such scenarios can be classified as “hub-and-spoke” cartels, where the hub refers to the algorithm provider and the spokes are the sellers following its pricing guidance.Footnote 47

The guiding EU case on this second scenario is Eturas.Footnote 48 The case concerned the Lithuanian firm Eturas, operator of the travel booking platform E-TURAS. At one point, Eturas messaged the travel agencies using its platforms that discounts would be automatically reduced to 3% “to normalise the conditions of competition.”Footnote 49 In a preliminary reference, the European Court of Justice (ECJ) was asked whether the use of a “common computerized information system” to set prices could constitute a concerted practice between travel agencies under Article 101 TFEU.Footnote 50 The ECJ started from the foundation of cartel law, namely that every economic operator must independently determine their conduct on the market, which precludes any direct or indirect contact between operators so as to influence each other’s conduct.Footnote 51 Even passive modes of participation can infringe Article 101 TFEU.Footnote 52 But the burden of proof is on the competition authority, and the presumption of innocence precludes the authority from inferring from the mere dispatch of a message that travel agencies were also aware of that message.Footnote 53 Other objective and consistent indicia may justify a rebuttable presumption that the travel agencies were aware of the message.Footnote 54 In that case, the authority can conclude the travel agencies tacitly assented to a common anticompetitive practice.Footnote 55 That presumption too must be rebuttable, including by (i) public distancing, or a clear and express objection to Eturas; (ii) reporting to the administrative authorities; or (iii) systematic application of a discount exceeding the cap.Footnote 56

With this legal framework in mind, we can return to the case studies introduced earlier. With regard to RealPage’s YieldStar, it bears mentioning that the algorithm does not impose but suggests a price, which landlords can deviate from (although very few do). Nevertheless, the U.S. Department of Justice (DOJ) has opened an investigation.Footnote 57 The fact that RealPage also brings landlords into direct contact with each other may help the DOJ’s case. Uber has been subject to investigations around the globe, including in the U.S. and Brazil, although no infringement was finally established.Footnote 58 In the EU, there has not been a case, although Eturas could support a finding of infringement: drivers are aware of Uber’s common price-setting system and can thus be presumed to participate in a concerted practice.Footnote 59 That is not the end of it, though, as infringements of Article 101(1) TFEU can be justified under Article 101(3) TFEU if they come with countervailing efficiencies, allow consumers a fair share of the benefit, are proportional, and do not eliminate competition.Footnote 60 Uber might meet those criteria: its control over pricing is indispensable to the functioning of its efficient ride-hailing system (which reduces empty cars and waiting times), and that system comes with significant consumer benefits (such as convenience and lower prices). In its Webtaxi decision on a platform that operates like Uber, the Luxembourgish competition authority exempted the use of a common pricing algorithm based on this reasoning.Footnote 61

To conclude, this second scenario of sellers relying on a common price-setting algorithm, provided by either a platform or a third party, can still be addressed by EU competition law, even though it sits at the boundary of it. And if a common pricing algorithm is essential to a business model that benefits consumers, it may be justified.

The third scenario, in which sellers’ use of distinct pricing algorithms results in a collusive equilibrium, may escape the grasp of Article 101 TFEU. The mechanism is the following: sellers instruct their ML algorithms to maximize profits, after which the algorithms figure out that coordination on a supracompetitive price best attains that objective. These algorithms tend to use “reinforcement learning” and more specifically “Q-learning”: the algorithms interact with their environment (including the algorithms of competing sellers) and, through trial and error, learn the optimal pricing policy.Footnote 62 Modeling by Salcedo showed “how pricing algorithms not only facilitate collusion but inevitably lead to it,” albeit under very strong assumptions.Footnote 63 More recently, Calvano et al. took an experimental approach, letting pricing algorithms interact in a simulated marketplace.Footnote 64 These Q-learning algorithms systematically learned to adopt collusive strategies, including the punishment of deviations from the collusive equilibrium. That collusive equilibrium was typically below the monopoly level but substantially above the competitive level. In the end, while these theoretical and experimental results are cause for concern, it remains an open question to what extent autonomous price coordination can arise in real market conditions.Footnote 65
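
To give a flavor of the setup used in this literature, the toy below trains a single Q-learning pricing agent against a rival that simply mirrors its last price. The price grid, demand function, learning parameters and the mirroring rival are all assumptions made for illustration; the actual experiments pit several independent learning agents against one another over millions of periods.

```python
# Toy Q-learning pricing sketch, loosely in the spirit of the simulated-marketplace
# experiments discussed above. All numbers and the "mirroring" rival are invented.
import random

PRICES = [1.0, 1.5, 2.0, 2.5, 3.0]          # discrete price grid (assumption)
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1      # learning rate, discount factor, exploration

def profit(own: float, rival: float) -> float:
    """Stylized demand: sell more when cheap relative to the rival (invented)."""
    return own * max(0.0, 2.0 - own + 0.5 * rival)

# State = the rival's last observed price; action = our own price this period.
q = {(s, a): 0.0 for s in PRICES for a in PRICES}
rival_last = random.choice(PRICES)

for _ in range(50_000):
    if random.random() < EPSILON:                           # epsilon-greedy exploration
        own = random.choice(PRICES)
    else:
        own = max(PRICES, key=lambda a: q[(rival_last, a)])

    reward = profit(own, rival_last)                        # rival still charges our previous price
    next_state = own                                        # next period the rival mirrors 'own'
    best_next = max(q[(next_state, a)] for a in PRICES)
    q[(rival_last, own)] += ALPHA * (reward + GAMMA * best_next - q[(rival_last, own)])
    rival_last = own

for s in PRICES:                                            # learned reply to each rival price
    print(s, "->", max(PRICES, key=lambda a: q[(s, a)]))
```

With a patient agent (GAMMA close to 1), the learned policy on this toy grid tends to settle on the joint-profit-maximizing price rather than the lower static best response – the flavor of outcome the experimental studies report, although settings with several independent learners are far noisier.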

Nevertheless, it is worth asking whether EU competition law is up to the task if/when the third scenario of autonomously coordinating pricing algorithms materializes. The problem is in fact an old one.Footnote 66 In oligopolistic markets (with few players), there is no need for explicit collusion to set prices at a supracompetitive level; high interdependence and mutual awareness may suffice to reach that result. Such tacit collusion, while societally harmful, is beyond the reach of competition law (the so-called “oligopoly problem”). Tacit collusion is thought to occur rarely given the specific market conditions it requires but some worry that, through the use of algorithms, it “could become sustainable in a wider range of circumstances possibly expanding the oligopoly problem to non-oligopolistic market structures.”Footnote 67 To understand the scope of the problem, let us take a closer look at the EU case law.

In case of autonomous algorithmic collusion, there is no agreement. Might there be a concerted practice? The ECJ has defined a concerted practice as “a form of coordination between undertakings by which, without it having reached the stage where an agreement properly so called has been concluded, practical cooperation between them is knowingly substituted for the risks of competition.”Footnote 68 This goes back to the requirement that economic operators independently determine their conduct on the market.Footnote 69 The difficulty is that, while this requirement strictly precludes direct or indirect contact between economic operators so as to influence each other’s conduct, it “does not deprive economic operators of the right to adapt themselves intelligently to the existing and anticipated conduct of their competitors.”Footnote 70 Therefore, conscious parallelism – even though potentially as harmful as a cartel – does not meet the concertation threshold of Article 101 TFEU. Indeed, “parallel conduct cannot be regarded as furnishing proof of concertation unless concertation constitutes the only plausible explanation for such conduct.”Footnote 71 Discarding every other plausible explanation for parallelism is a Herculean task with little chance of success. The furthest the EC has taken the concept of concertation is in Container Shipping.Footnote 72 The case concerned shipping companies that regularly announced their intended future price increases, doing so 3–5 weeks beforehand, which allowed for customer testing and competitor alignment. According to the EC, this could be “a strategy for reaching a common understanding about the terms of coordination” and thus a concerted practice.Footnote 73

Truly autonomous collusion can escape the legal framework just as tacit collusion has always done. In this sense, it is a twist on the unsolved oligopoly problem. Even the price signaling theory of Container Shipping, already at the outer boundary of Article 101 TFEU, hardly seems to capture autonomous collusion. If/when autonomous pricing agents are widely deployed, however, it may pose a bigger problem than the oligopoly one we know. Scholars have made suggestions on how to adapt the legal framework to fill the regulatory gap, but few of the proposed rules are legally, economically and technologically sound and administrable by competition authorities and judges.Footnote 74

9.3.1.2 Vertical Agreements

When discussing horizontal agreements, I only referenced the nature of the restrictions in passing, given that price-fixing is the quintessential “by object” restriction. Vertical agreements require more careful examination. An important distinction exists between recommended resale prices, which are presumptively legal, and fixed resale prices (“resale price maintenance” or RPM), which are presumptively illegal as “by object” restrictions.Footnote 75 The difference between the two can be small, especially when a supplier uses carrots (e.g., reimbursing promotional costs) or sticks (e.g., withholding supply) to turn a recommendation into more of an obligation. Algorithmic monitoring/pricing can play a role in this process. It can even exacerbate the anticompetitive effects of RPM.

In the wake of its E-Commerce Sector Inquiry, the EC started a number of investigations into online RPM. In four decisions, the EC imposed more than €110 million in fines on consumer electronics suppliers Asus, Denon & Marantz, Philips, and Pioneer.Footnote 76 These suppliers restricted the ability of online retailers to set their own retail prices for kitchen appliances, notebooks, hi-fi products, and so on. Although the prices were often “recommendations” in name, the suppliers intervened in case of deviation, including through threats or sanctions. The online context held dual relevance. First, suppliers used monitoring software to effectively detect deviations by retailers and to intervene swiftly when prices decreased. Second, many retailers used algorithms to automatically adjust their prices to those of other retailers. Given that automatic adjustment, the restrictions that suppliers imposed on low-pricing retailers had a wider impact on overall prices than they would have had in an offline context.
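
The propagation effect can be illustrated with a stylized example: if one aggressive discounter is pushed back up toward the “recommended” price, every retailer whose software tracks the lowest visible price moves up with it. The prices and matching rule below are invented and do not reproduce any retailer’s actual configuration.

```python
# Illustrative only: how intervening against a single low-pricing retailer raises
# prices across the board when other retailers auto-match the lowest visible
# price. All figures are invented.

recommended = 499.00
own_price = {"discounter": 449.00, "matcher_B": 479.00, "matcher_C": 489.00}

def displayed_prices(own: dict[str, float]) -> dict[str, float]:
    """B and C run matching software tracking the lowest visible price;
    the discounter simply posts its own price."""
    floor = min(own.values())
    return {
        "discounter": own["discounter"],
        "matcher_B": min(own["matcher_B"], floor),
        "matcher_C": min(own["matcher_C"], floor),
    }

print(displayed_prices(own_price))        # everyone ends up at 449.00
own_price["discounter"] = recommended     # supplier pressures the discounter upward
print(displayed_prices(own_price))        # the floor for the matchers rises to 479.00
```

Even though the supplier targets only one retailer, every displayed price rises – the wider online impact described above.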

There is also renewed interest in RPM at the national level. The Dutch Authority for Consumers & Markets (ACM) fined Samsung some €40 million for RPM of television sets.Footnote 77 Samsung took advantage of the greater transparency offered by web shops and PCWs to monitor prices through so-called “spider software,”Footnote 78 and confronted retailers that deviated from its price “recommendations.” Retailers also used “spiders” to adjust their prices (often downward) to those of competitors. Samsung regularly asked retailers to disable their spiders so that their prices would not automatically follow lower online prices. The ACM, like the EC, classified these practices as anticompetitive “by object.”

9.3.2 Abuse of Dominance

Abusive conduct comes in two types: it is exclusionary when it indirectly harms consumers by foreclosing competitors from the market and exploitative when it directly harms consumers, for example, by charging excessive prices. I discuss the main algorithmic concern under each category of abuse, that is, discriminatory ranking and personalized pricing, respectively. While I focus on abusive conduct, remember that such conduct only infringes Article 102 TFEU if the firm in question is also in a dominant position.

9.3.2.1 Exclusion

Given the abundance of online options (of goods, videos, webpages, etc.), curation is key. The role of curator is assumed by platforms, which rank the options for consumers; think, for example, of Amazon Marketplace, TikTok, and Google Search. Consumers trust that a platform has their best interests in mind, which is generally the case, and thus tend to rely on their ranking without much further thought. This gives the platform significant power over consumer choice, which can be abused. A risk of skewed rankings exists particularly when the platform does not only intermediate between suppliers and consumers, but also offers its own options. In that case, the platform may want to favor its own offering through choice architecture (“self-preferencing”).Footnote 79
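
As a minimal sketch of the mechanism – not any platform’s actual algorithm – consider a ranking score to which a “self-preference” boost is added for the platform’s own offering. Names, relevance scores and the boost factor are invented.

```python
# Illustrative only: a small ranking tweak that implements self-preferencing.
# The scoring formula and boost are invented and do not describe any real platform.

results = [
    {"name": "rival_service_1", "relevance": 0.92, "own_product": False},
    {"name": "rival_service_2", "relevance": 0.88, "own_product": False},
    {"name": "platform_service", "relevance": 0.75, "own_product": True},
]

def score(item: dict, self_preference_boost: float = 0.0) -> float:
    return item["relevance"] + (self_preference_boost if item["own_product"] else 0.0)

neutral = sorted(results, key=score, reverse=True)
boosted = sorted(results, key=lambda i: score(i, self_preference_boost=0.25), reverse=True)

print([i["name"] for i in neutral])   # ranked purely on relevance
print([i["name"] for i in boosted])   # the platform's own service now tops the list
```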

The landmark case in this area is Google Search (Shopping).Footnote 80 At the heart of the abusive conduct was Google’s Panda algorithm, which demoted third-party comparison shopping services (CSS) in the search results, while Google’s own CSS was displayed prominently on top. Even the most highly ranked non-Google CSS appeared on average only on page four of the search results. This had a significant impact on visibility, given that users tend to focus on the first 3–5 results, with the first 10 results accounting for 95% of user clicks.Footnote 81 Skewed rankings distort the competitive process by excluding competitors and can harm consumers, especially when the promoted results are not those of the highest quality.Footnote 82

Google was only the first of many cases of algorithmic exclusion.Footnote 83 Amazon has also been on the radar of competition authorities, with a variety of cases regarding the way it ranks products (and in particular, selects the winner of its “Buy Box”).Footnote 84 It is also under investigation for its “algorithmic control of price setting by third-party sellers,” which “can make it difficult for end customers to find offers by sellers or even lead to these offers being no longer visible at all.”Footnote 85

EU legislators considered the issue of discriminatory ranking serious enough to justify the adoption of ex ante regulation to complement ex post competition law. The Digital Markets Act (DMA) prohibits “gatekeepers” from self-preferencing in ranking, obliging them to apply “transparent, fair and non-discriminatory conditions to such ranking.”Footnote 86 Earlier instruments, like the Consumer Rights Directive (CRD)Footnote 87 and the Platform-to-Business (P2B) Regulation,Footnote 88 already mandated transparency in ranking.Footnote 89

9.3.2.2 Exploitation

Price discrimination, and more specifically personalized pricing, is of particular concern in algorithmically driven markets. Dynamic pricing, that is, firms adapting prices to market conditions (essentially, supply and demand), has long existed. Think, for example, of airlines changing prices over time (as captured by the saying that “the best way to ruin your flight is to ask your neighbor what they paid”). With personalized pricing, prices are tailored to the characteristics of the consumers in question (e.g., location and previous purchase behavior) so as to approach their willingness to pay. Authorities have put limits on such personalized pricing. Following action by the ACM, for example, the e-commerce platform Wish decided to stop using personalized pricing.Footnote 90
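
The general shape of such a personalization rule can be sketched as follows; the features, weights and cap are entirely hypothetical and do not describe any firm’s actual logic.

```python
# Purely hypothetical sketch of personalized pricing: estimate willingness to
# pay (WTP) from observed consumer characteristics and price up to it.
# Features, weights and the cap are invented for illustration.

def estimated_wtp(base_price: float, affluent_postcode: bool,
                  past_premium_purchases: int, uses_price_comparison_sites: bool) -> float:
    estimate = base_price
    if affluent_postcode:
        estimate *= 1.10                                      # location signal
    estimate *= 1.0 + 0.02 * min(past_premium_purchases, 5)   # purchase-history signal
    if uses_price_comparison_sites:
        estimate *= 0.95                                      # price-sensitive shopper signal
    return estimate

def personalized_price(list_price: float, wtp: float) -> float:
    return min(wtp, 1.15 * list_price)    # cap the markup to limit reputational risk

wtp = estimated_wtp(base_price=100.0, affluent_postcode=True,
                    past_premium_purchases=3, uses_price_comparison_sites=False)
print(personalized_price(list_price=100.0, wtp=wtp))          # 115.0 rather than 100.0
```

The same estimate could just as well be used to offer a discount to a price-sensitive shopper, which is why the welfare effects of price discrimination are ambiguous, as noted earlier.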

The ACM did not intervene based on competition law.Footnote 91 Article 102(a) TFEU prohibits excessive prices, but personalized prices are not necessarily excessive as such, and competition authorities are in any case reluctant to intervene directly in price-setting. Price discrimination, explicitly prohibited by Article 102(c) TFEU, may seem like a more fitting option, but that provision is targeted at discrimination between firms rather than between consumers.Footnote 92 Another limitation is that Article 102 TFEU requires dominance, and most firms engaged in personalized pricing do not have market power. While competition law is not an effective tool to deal with personalized pricing, other branches of law have more to say on the matter.Footnote 93

First, personalization is based on data, and the General Data Protection Regulation (GDPR) regulates the collection and processing of such data.Footnote 94 The DMA adds further limits for gatekeepers.Footnote 95 Various other laws – including the Unfair Commercial Practices Directive (UCPD),Footnote 96 the CRD,Footnote 97 and the P2B RegulationFootnote 98 – also apply to personalized pricing but are largely restricted to transparency obligations. The recent Digital Services Act (DSA)Footnote 99 and AI ActFootnote 100 go a step further with provisions targeted at algorithms, although their applicability to personalized pricing is yet to be determined.

Despite various anecdotes about personalized pricing (e.g., by Uber), there is no empirical evidence that the practice is widespread.Footnote 101 One limiting factor may be the reputational cost a firm incurs when its personalized pricing is publicized, given that consumers tend to view such practices as unfair. In addition, the technological capability to effectively personalize prices is sometimes overstated.Footnote 102 It would nevertheless be good to have a clear view of the fragmented regulatory framework for when the day of widespread personalized pricing does arrive.

9.4 Conclusion

Rather than revisiting interim conclusions, I end with a research agenda. This chapter has set out the state of the art on AI and competition, at least on the substantive side. Algorithms also pose risks – and opportunities – on the institutional (enforcement) side. Competition authority heads have vowed that they “will not tolerate anticompetitive conduct, whether it occurs in a smoke-filled room or over the Internet using complex pricing algorithms.”Footnote 103 While this elegant one-liner is a common-sense policy statement, the difficult question is “how?”. Substantive issues aside, algorithmic anticompetitive conduct can be more difficult to detect and deter. Compliance by design is key. Just like the ML models that have become world-class at playing Go and Texas Hold’em have the rules of those games baked in, firms deploying algorithms should think about programming them with the rules of economic rivalry, that is, competition law. At the same time, competition authorities will have to build out their algorithmic detection capabilities.Footnote 104 They may even want to go a step further and intervene algorithmically – or, in the words of the Economist article this chapter started with: “Trustbusters might have to fight algorithms with algorithms.”Footnote 105
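
To give the “compliance by design” idea some texture, the sketch below shows one stylized way a firm might wrap its pricing logic in hard-coded guardrails and an audit log. The pricing function, thresholds, and rules are hypothetical placeholders invented for illustration; they are not legal tests and not a description of any authority’s or firm’s actual approach.

```python
# A stylized sketch of "compliance by design": a pricing routine whose
# output is filtered through guardrails before publication.
# Thresholds and rules are hypothetical placeholders, not legal standards.

def proposed_price(cost, demand_signal, rival_price):
    """Whatever the firm's (possibly ML-driven) pricing logic suggests."""
    return cost * (1 + 0.2 * demand_signal) + 0.5 * (rival_price - cost)

def compliance_filter(price, cost, rival_price, history):
    """Reject or adjust prices that resemble prohibited conduct."""
    # Guardrail 1: no pricing below cost (a rough predation screen).
    price = max(price, cost)
    # Guardrail 2: do not mechanically converge on the rival's price,
    # a pattern that could help sustain tacit (algorithmic) collusion.
    if abs(price - rival_price) / rival_price < 0.01:
        price = rival_price * 0.97
    # Guardrail 3: log every decision so the firm (and, if need be, an
    # authority) can audit the algorithm's behavior afterwards.
    history.append({"rival": rival_price, "published": round(price, 2)})
    return round(price, 2)

audit_log = []
p = proposed_price(cost=10, demand_signal=0.8, rival_price=15)
print(compliance_filter(p, cost=10, rival_price=15, history=audit_log))
print(audit_log)
```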

Returning to substantive questions, the following would benefit from further research:

  • Theoretical and experimental research shows that autonomous algorithmic collusion is a possibility. To what extent are those results transferable to real market conditions? Do new developments in AI increase the possibility of algorithmic collusion?

  • Autonomous algorithmic collusion presents a regulatory gap, at least if such collusion exits the lab and enters the outside world. Which rule(s) would optimally address this gap, meaning they are legally, economically, and technologically sound and administrable by competition authorities and judges?

  • Algorithmic exclusion (ranking) and algorithmic exploitation (personalized pricing) are regulated to varying degrees by different instruments, including competition law, the DMA, the DSA, the P2B Regulation, the CRD, the UCPD and the AI Act. How do these instruments fit together – do they exhibit overlap? A lot of instruments are centered around transparency – is that approach effective given the bounded rationality of consumers?

The enforcement questions (relating, e.g., to compliance by design) are no less pressing and difficult. Even more so than the substantive questions, they will require collaboration between lawyers and computer scientists.

10 AI and Consumer Protection An Introduction

Evelyne Terryn and Sylvia Martos Marquez
10.1 Introduction

AI brings risks but also opportunities for consumers. For instance, AI can help consumers to optimize their energy use, detect fraud with their credit cards, simplify or select relevant information, or translate. Risks do however also exist, for instance in the form of biased or erroneous information and advice, or manipulation into choices that do not serve consumers’ best interests. The increased use of AI also poses major challenges for consumer law, which traditionally focuses on protecting consumers’ autonomy and self-determination; those challenges are the focal point of this chapter.

We start by setting out how AI systems can affect consumers in both positive and negative ways (Section 10.2). Next, we explain how the fundamental underpinnings and basic concepts of consumer law are challenged by AI’s ubiquity, and we caution against a silo approach to the application of this legal domain in the context of AI (Section 10.3). Subsequently, we provide a brief overview of some of the most relevant consumer protection instruments in the EU and discuss how they apply to AI systems (Section 10.4). Finally, we illustrate the shortcomings of the current consumer protection law framework more concretely by taking dark patterns as a case study (Section 10.5). We conclude that additional regulation is needed to protect consumers against AI’s risks (Section 10.6).

10.2 Challenges and Opportunities of AI for Consumers

The combination of AI and data offers traders a vast range of new opportunities in their relationship with consumers. Economic operators may use, among other techniques, machine learning algorithms, a subdiscipline of AI, to analyze large datasets. These algorithms process extensive examples of relevant behavior, known as the “training data,” to generate machine-readable, data-learned knowledge, which can then be used to optimize various processes.Footnote 1 The (personal) data of consumers thus becomes a valuable source of information for companies.Footnote 2 Moreover, with the increasing adoption of the Internet of Things and advances in Big Data, the accuracy and amount of information obtained about individual consumers and their behavior is only expected to increase.Footnote 3 In an ideal situation, consumers would know which input (data set) the market operator employed to train the algorithm, which learning algorithm was applied, and for which task the machine was trained.Footnote 4 However, market operators using AI often fail to disclose this information to consumers.Footnote 5 In addition, consumers often face the so-called “black box” or “inexplicability” problem with data-driven AI: the exact reasoning that led to the output (the final decision as presented to humans) remains unknown.Footnote 6 Collectively, this contributes to an asymmetry of information between businesses and consumers, with market players collecting a huge amount of personal data on consumers.Footnote 7 In addition, consumers often remain unaware that pricing or advertising has been tailored to their supposed preferences, creating an enormous potential to exploit the inherent weaknesses in consumers’ ability to recognize that they are being persuaded.Footnote 8 Another major challenge, next to the consumer’s inability to understand business behavior, is that algorithmic decision-making can lead to biased or discriminatory results, as the training data may not be neutral (being selected by a human and thus perpetuating human biases) and may contain outdated data, data reflecting consumers’ behavioral biases, or existing social biases against a minority.Footnote 9 This can lead directly to consumers receiving biased and erroneous advice and information.
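
The training-data-to-prediction pipeline described above can be summarized in a minimal sketch. The features, labels, and numbers below are invented for illustration (real systems use far richer behavioral data), and the example assumes the widely used scikit-learn library; it is meant only to show how labelled examples become a model that scores consumers it has never seen.

```python
# Minimal sketch of the supervised-learning pipeline described above.
# Features, labels and values are invented; assumes scikit-learn is installed.
from sklearn.linear_model import LogisticRegression

# Each row: [hours_on_platform_per_week, past_purchases, clicked_last_ad]
X_train = [[2, 0, 0], [10, 3, 1], [1, 0, 0], [15, 6, 1], [7, 2, 1], [3, 1, 0]]
y_train = [0, 1, 0, 1, 1, 0]   # label: did the consumer buy a subscription?

model = LogisticRegression().fit(X_train, y_train)

new_consumer = [[8, 2, 1]]
print(model.predict_proba(new_consumer)[0][1])  # predicted purchase probability
# The coefficients of this simple model are still inspectable, but for deep
# models the mapping from input to output becomes opaque: the "black box"
# problem referred to above.
```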

In addition, AI brings significant risks of influencing consumers into making choices that do not serve their best interests.Footnote 10 The ability to predict consumers’ reactions allows businesses to trigger the behavior they desire, potentially making use of consumer biases,Footnote 11 for instance through choice architecture. From the color of the “buy” button in online shops to the position of a default payment method, design choices can be based on algorithms that define how options are presented to consumers in order to influence them.Footnote 12

Economic operators may furthermore influence or manipulate consumers, for purely economic ends, by restricting the information or offers they can access and thus their options.Footnote 13 Clustering techniques are used to analyze consumer behavior, classify consumers into meaningful categories, and treat them differently.Footnote 14 This personalization can occur in different forms, including the “choice architecture,” the offers that are presented to consumers, or different prices for the same product for different categories of consumers.Footnote 15 AI systems may also be used to determine consumers’ reserve price – the highest price they are able or willing to pay for a good or service – and to offer them precisely that price.Footnote 16
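
The clustering step can be illustrated with a short sketch. The consumer features, segment count, and segment “treatments” below are hypothetical and chosen purely for illustration; the example assumes scikit-learn and shows only the mechanism of grouping consumers on behavioral features and then differentiating the offer per segment.

```python
# Illustrative sketch of consumer segmentation via clustering.
# Data, features and segment treatments are hypothetical; assumes scikit-learn.
from sklearn.cluster import KMeans

# Each row: [average_basket_value, visits_per_month, price_sensitivity_score]
consumers = [[20, 2, 0.9], [25, 3, 0.8], [120, 8, 0.2],
             [110, 10, 0.3], [60, 5, 0.5], [65, 6, 0.6]]

segments = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(consumers)

offers = {0: "10% discount voucher", 1: "premium bundle upsell", 2: "free shipping"}
for consumer, segment in zip(consumers, segments):
    print(consumer, "-> segment", segment, "->", offers[segment])
# The same mechanics can be pushed further, e.g. estimating each segment's
# (or each individual's) reserve price and pricing just below it.
```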

Although AI entails risks, it also provides opportunities for consumers in various sectors. Think of AI applications in healthcare (e.g., through mental health chatbots, diagnosticsFootnote 17), legal services (e.g., cheaper legal advice), finance and insurance services (e.g., fraud prevention), information services (e.g., machine translation, selection of more relevant content), and energy services (e.g., optimization of energy use through “smart homes”), to name but a few.Footnote 18 Personalized offers by traders and vendors could (at least in theory) also assist consumers to overcome undesirable information overload. An example of a consumer-empowering technology in the legal sector is CLAUDETTE, an online system that detects potentially unfair clauses in online contracts and privacy policies in order to empower the weaker contract party.Footnote 19

10.3 Challenges of AI for Consumer Law

Section 10.2 illustrated how AI systems can both positively and negatively affect consumers. However, the digital transformation in general, and AI specifically, also raises challenges for consumer law. The fundamental underpinnings and concepts of consumer law are increasingly put under pressure, and these new technologies also pose enormous challenges in terms of enforcement. Furthermore, given the different types of concerns that AI systems raise in this context, it is clear that consumer law cannot be seen or enforced in isolation from data protection or competition law. These aspects are briefly discussed in Sections 10.3.1–10.3.3.

10.3.1 Challenges to the Fundamental Underpinnings of Consumer Law

Historically, the emergence of consumer law is linked to the development of a consumer society. In fact, this legal domain has been referred to as a “reflection of the consumer society in the legal sphere.”Footnote 20 The need for legal rules to protect those who consume was indeed felt more urgently when consumption, above the level of basic needs, became an important aspect of life in society.Footnote 21 The trend to attach increasing importance to consumption had been ongoing for several centuries,Footnote 22 but increasing affluence, the changing nature of the way business was conducted, and the massification of consumption all contributed to a body of consumer protection rules being adopted, mainly from the 1950s onwards.Footnote 23 More consumption was thought to equal more consumer welfare and more happiness. Consumer protection law in Europe first emerged at the national level.Footnote 24 It was only from the 1970s on that European institutions started to develop an interest in consumer protection and that the first consumer protection programs followed.Footnote 25 The first binding instruments were adopted in the 1980s and consisted mostly of minimum harmonization instruments, meaning that member states are allowed to maintain or adopt more protective provisions as long as the minimum standards imposed by the harmonization instrument are respected. From 2000 onwards, the shift to maximum harmonization in European consumer protection instruments reduced the scope for a national consumer (protection) policy.

While originally the protection of a weaker consumer was central in many national regimes, the focus in European consumer law came to be on the rational consumer whose right to self-determination (private autonomy) on a market must be guaranteed.Footnote 26 This right to self-determination can be understood as the right to make choices in the (internal) market according to one’s own preferencesFootnote 27 thereby furthering the realization of the internal market.Footnote 28 This focus on self-determination presupposes a consumer capable of making choices and enjoying the widest possible options to choose from.Footnote 29 EU consumer law could thus be described as the guardian of the economic rights of the nonprofessional player in the (internal) market. Private autonomy and contractual freedom should in principle suffice to protect these economic rights and to guarantee a bargain in accordance with one’s own preferences, but consumer law acknowledges that the preconditions for such a bargain might be absent, especially due to information asymmetries between professional and nonprofessional players.Footnote 30 Information was and is therefore used as the main corrective mechanism in EU consumer law.Footnote 31 Further reaching intervention – for example, by regulating the content of contracts – implies a greater intrusion into private autonomy and is therefore only a subsidiary protection mechanism.Footnote 32

AI, and the far-reaching possibilities of personalization and manipulation it entails, especially when used in combination with personal data, now challenges the assumption of the rational consumer with their “own” preferences even more fundamentally. The effectiveness of information as a means of protection had already been questioned before the advent of new technologies,Footnote 33 but the additional complexity of AI leaves no doubt that the mere provision of information will not be a solution to the ever-increasing information asymmetry and risk of manipulation. The emergence of an “attention economy,” whereby companies strive to retain consumers’ attention in order to generate revenue based on advertising and data gathering, furthermore makes clear that “more consumption is more consumer welfare” is an illusion.Footnote 34 The traditional underpinnings of consumer law therefore need revisiting.

10.3.2 Challenges to the Basic Concepts of Consumer Law

European consumer law uses the abstract concept of the “average” consumer as a benchmark.Footnote 35 This is a “reasonably well informed and reasonably observant and circumspect” consumer;Footnote 36 a person who is “reasonably critical […], conscious and circumspect in his or her market behaviour.”Footnote 37 This benchmark, as interpreted by the Court of Justice of the European Union, has been criticized for not taking into account consumers’ cognitive biases and limitations and for allowing companies to engage in exploitative behavior.Footnote 38 AI now creates exponential possibilities to exploit these cognitive biases, making the need to realign the consumer benchmark with the realities of consumer behavior even more urgent. There is furthermore some, but only limited, attention to the vulnerable consumer in EU consumer law.Footnote 39 The Unfair Commercial Practices Directive, for example, allows a practice to be assessed from the perspective of the average member of a group of vulnerable consumers even if the practice was directed at a wider group, provided the trader could reasonably foresee that the practice would distort the behavior of vulnerable consumers.Footnote 40 The characteristics the UCPD identifies to define vulnerability (such as mental or physical infirmity, age, or credulity) are, however, neither particularly helpful nor exhaustive in a digital context. Interestingly, the Commission Guidance does stress that vulnerability is not a static concept but a dynamic and situational one,Footnote 41 and that the characteristics mentioned in the directive are indicative and non-exhaustive.Footnote 42 The literature has, however, rightly argued that a reinterpretation of the concept of vulnerability will not be sufficient to better protect consumers in a digital context. It is submitted that in digital marketplaces most, if not all, consumers are potentially vulnerable: digitally vulnerable and susceptible “to (the exploitation of) power imbalances that are the result of increasing automation of commerce, datafied consumer-seller relations and the very architecture of digital marketplaces.”Footnote 43 AI and digitalization thus create a structural vulnerability that requires a further-reaching intervention than a mere reinterpretation of vulnerability.Footnote 44 More attention to tackling the sources of digital vulnerability and to the architecture of digital marketplaces is therefore definitely necessary.Footnote 45

10.3.3 Challenges to the Silo Approach to Consumer Law

Consumer law has developed in parallel with competition law and data protection law but, certainly in digital markets, it is artificial – also in terms of enforcement – to strictly separate these areas of the law.Footnote 46 The use of AI often involves the use of (personal) consumer data, and concentration in digital markets creates a risk of abuses of personal data to the detriment of consumers. Indeed, there are numerous and frequent instances where the same conduct will be covered simultaneously by consumer law, competition law, and data protection law.Footnote 47 The German Facebook case of the BundesgerichtshofFootnote 48 is just one example where competition law (abuse of dominant position) was successfully invoked also to guarantee consumers’ choice in the data they want to share and in the level of personalization of the services provided.Footnote 49 There is certainly a need for more convergence and a complementary application of these legal domains, rather than artificially dividing them, especially when it comes to enforcement. The case law allowing consumer protection organizations to bring representative actions on the basis of consumer law (namely unfair practices or unfair contract terms), also for infringements of data protection legislation, is therefore certainly to be welcomed.Footnote 50

10.4 Overview of Relevant Consumer Protection Instruments

The challenges mentioned above do not, of course, imply that AI currently operates in a legal vacuum or that there is no protection in place. The existing consumer law instruments provide some safeguards, both when AI is used in advertising or at a precontractual stage and when it is the actual subject matter of a consumer contract (e.g., as part of a smart product). The current instruments are, however, not well adapted to AI, as the brief overview of the most relevant instruments below illustrates.Footnote 51 An exercise is ongoing to potentially adapt several of these instrumentsFootnote 52 and make them fit for the digital age.Footnote 53 In addition, several new acts have been adopted or proposed in the digital sphere that also have an impact on consumer protection and AI.

10.4.1 The Unfair Commercial Practices Directive

The UCPD is a maximum harmonization instrument that regulates unfair commercial practices occurring before, during and after a B2C transaction. It has a broad scope of application and the combination of open norms and a blacklist of practices that are prohibited in all circumstances allows it to tackle a wide range of unfair business practices, also when these practices result from the use of AI.Footnote 54 Practices are unfair, according to the general norm, if they are contrary to the requirements of “professional diligence and are likely to materially distort the economic behaviour of the average consumer.”Footnote 55 The UCPD furthermore prohibits misleading and aggressive practices. Misleading practices are actions or omissions that deceive or are likely to deceive and cause the average consumer to make a transactional decision they would not have taken otherwise.Footnote 56 Aggressive practices are practices that entail the use of coercion or undue influence which significantly impairs the average consumer’s freedom of choice and causes them to make a transactional decision they would not have taken otherwise.Footnote 57

The open norms definitely offer some potential to combat the use of AI to manipulate consumers, whether through the general norm or the prohibition of misleading or aggressive practices.Footnote 58 However, the exact application and interpretation of these open norms make the outcome of such cases uncertain.Footnote 59 When exactly does the use of AI amount to “undue influence”? How is the concept of the “average consumer” to be applied in a digital context? When exactly does personalized advertising become misleading? We make these problems more concrete in our analysis of dark patterns below (Section 10.5). More guidance on the application of these open norms could make their application to AI-based practices easier.Footnote 60 Additional blacklisted practices could also provide more legal certainty.

10.4.2 Consumer Rights Directive

The CRD – also a maximum harmonization directiveFootnote 61 – regulates the information traders must provide to consumers when contracting, both for on-premises contracts and for distance and doorstep contracts. In addition, it regulates the right of withdrawal from the contract. The precontractual information requirements are extensive and include an obligation to provide information about the main characteristics and total price of goods or services, about the functionality and interoperability of digital content and digital services, and about the duration and conditions for termination of the contract.Footnote 62 However, as Ebers notes, these obligations are formulated quite generally, making it difficult to concretize their application to AI systems.Footnote 63 The Modernization DirectiveFootnote 64 – adopted to “modernize” a number of EU consumer protection directives in view of the development of digital toolsFootnote 65 – introduced a new information obligation for personalized pricing.Footnote 66 Article 6(1)(ea) of the modernized CRD now requires the consumer to be informed that the price was personalized on the basis of automated decision-making. There is, however, no obligation to reveal the algorithm used or its methodology, nor to reveal how the price was adjusted for a particular consumer.Footnote 67 This additional information obligation has therefore been criticized for being too narrow, as it hinders the detection of price discrimination.Footnote 68

10.4.3 Unfair Contract Terms Directive

The UCTD in essence requires contract terms to be drafted in plain, intelligible language; the terms must not cause a significant imbalance in the parties’ rights and obligations to the detriment of the consumer.Footnote 69 Contract terms that do not comply with these requirements can be declared unfair and therefore nonbinding.Footnote 70 The directive has a very broad scope of application and applies to (not individually negotiated) clauses in contracts between sellers/suppliers and consumers “in all sectors of economic activity.”Footnote 71 It does not require that the consumer provide monetary consideration for a good or service: contracts whereby the consumer “pays” with personal data, or whereby the consideration consists in consumer-generated content and profiling, are also covered.Footnote 72 It is furthermore a minimum harmonization directive, so stricter national rules can still apply.Footnote 73

The UCTD can help consumers to combat unfair clauses (e.g., exoneration clauses, terms on conflict resolution, terms on personalization of the service, terms contradicting the GDPR)Footnote 74 in contracts with businesses that use AI. It could also be used to combat non-transparent personalized pricing in which AI is used. In principle, the UCTD does not allow judges to review the unfairness of core contract terms (clauses that determine the main subject matter of the contract), nor does it allow them to check the adequacy of the price and remuneration.Footnote 75 This is, however, only the case if these clauses are transparent.Footnote 76 The UCTD could furthermore be invoked if AI has been used to personalize contract terms without disclosure to the consumer.Footnote 77 Unfair terms do not bind the consumer and may even lead to the whole contract being void if the contract cannot continue to exist without the unfair term.Footnote 78

10.4.4 Consumer Sales Directive and Digital Content and Services Directive

When AI is the subject matter of the contract, the new Consumer Sales Directive 2019/771 (“CSD”) and Digital Content and Services Directive 2019/770 (“DCSD”) provide the consumer with remedies in case the AI application fails. The CSD applies when the digital element provided under the sales contract is incorporated in or interconnected with the good in such a way that the absence of the digital element would prevent the good from performing its function.Footnote 79 If this is not the case, the DCSD applies. Both directives provide for a similar – but not identical – regime that determines the requirements for conformity and the remedies in case of nonconformity. These remedies include specific performance (repair or replacement in the case of a good with digital elements), price reduction, and termination. Damages caused by a defect in an AI application continue to be governed by national law. The directives also provide for an update obligation (including security updates) for the seller of goods with digital elements and for the trader providing digital content or services.Footnote 80

10.4.5 Digital Markets Act and Digital Services Act

The Digital Markets Act (“DMA”), which applies as of May 2, 2023,Footnote 81 aims to maintain an open and fair online environment for business users and end users by regulating the behavior of large online platforms, known as “gatekeepers,” which have significant influence in the digital market and act as intermediaries between businesses and customers.Footnote 82 Examples of such gatekeepers are Google, Meta, and Amazon. The regulation has only an indirect impact on the use of AI, as it aims to prevent these gatekeepers from engaging in unfair practices that give them significant power and control over access to content and services.Footnote 83 Such practices may involve the use of biased or discriminatory AI algorithms. The regulation imposes obligations on gatekeepers, such as providing the ability for users to uninstall default software applications on the operating system of the gatekeeper,Footnote 84 a ban on self-preferencing,Footnote 85 and the obligation to provide data on advertising performance and ad pricing.Footnote 86 The DMA certainly provides additional consumer protection, but it does so indirectly, mainly by regulating the relationship between platforms and business users and by creating more transparency. Consumer rights are not central in the DMA, which is also apparent from the lack of involvement of consumers and consumer organizations in the DMA’s enforcement.Footnote 87

The Digital Services Act (“DSA”),Footnote 88 which applies as of February 17, 2024,Footnote 89 establishes a harmonized set of rules on the provision of online intermediary services and aims to ensure a safe, predictable, and trustworthy online environment.Footnote 90 The regulation mainly affects online intermediaries (including online platforms), such as online marketplaces, online social networks, online travel and accommodation platforms, content-sharing platforms, and app stores.Footnote 91 It introduces additional transparency obligations, including advertising transparency requirements for online platforms,Footnote 92 a ban on targeted advertising to minors based on profiling,Footnote 93 and a ban on targeted advertising based on profiling using special categories of personal data, such as religious belief or sexual orientation.Footnote 94 It also introduces recommender system transparency for providers of online platforms.Footnote 95 The regulation furthermore obliges very large online platforms to carry out a risk assessment of their services and systems, including their algorithmic systems.Footnote 96

10.4.6 Artificial Intelligence Act

The Artificial Intelligence Act (“AI Act”), adopted on June 13, 2024, provides harmonized rules for “the placing on the market, the putting into service and the use of AI systems in the Union.”Footnote 97 It uses a risk-based methodology to classify certain uses of AI systems as entailing a low, high, or unacceptable risk.Footnote 98 AI practices that pose an unacceptable risk are prohibited, including subliminal techniques that distort behavior and cause significant harm.Footnote 99 The regulation foresees penalties for noncomplianceFootnote 100 and establishes a cooperation mechanism at the European level (the so-called European Artificial Intelligence Board), composed of representatives from the Member States and the Commission, to ensure enforcement of the provisions of the AI Act across Europe.Footnote 101 Concerns have been expressed as to whether the AI Act is adequate to also tackle consumer protection concerns. It has been argued that the list of “high-risk” applications and the list of forbidden AI practices do not cover all AI applications or practices that are problematic for consumers.Footnote 102 Furthermore, the sole focus on public enforcement, and the lack of appropriate individual rights for consumers and collective rights for consumer organizations to ensure effective enforcement, have been criticized.Footnote 103

10.5 Dark Patterns as a Case Study
10.5.1 The Concept of Dark Patterns

The OECD Committee on Consumer Policy uses the following working definition of dark patterns:

business practices employing elements of digital choice architecture, in particular in online user interfaces, that subvert or impair consumer autonomy, decision-making or choice. They often deceive, coerce or manipulate consumers and are likely to cause direct or indirect consumer detriment in various ways, though it may be difficult or impossible to measure such detriment in many instances.Footnote 104

A universally accepted definition is lacking, but dark patterns share common features: the use of hidden, subtle, and often manipulative design or marketing tactics that exploit consumer biases, vulnerabilities, and preferences to the benefit of the business or provider of intermediary services presenting the information, in ways that may not align with the consumer’s own preferences or best interests.Footnote 105 Examples of such practices include (i) false hierarchy (the button for the business’s desired outcome is more prominent or visually appealing than the others),Footnote 106 (ii) hidden information,Footnote 107 (iii) creating a false sense of urgency,Footnote 108 and (iv) forced continuity or “roach motel” (making it significantly more difficult for consumers to cancel their subscription than it was to sign up, automatically renewing the service without the user’s express consent, or repeatedly asking consumers to reconsider their choice).Footnote 109 All of these are practices closely related to the concepts of choice architecture and hyper-personalization discussed in Section 10.2, as they present choices in a non-neutral way.

Dark patterns may involve the use of consumers’ personal data and the use of AI.Footnote 110 AI is an asset for refining dark patterns so that they influence consumer behavior more strongly yet more subtly. It allows business operators to examine which dark patterns work best, especially when personal data are involved, and to adapt dark patterns accordingly. Examples of the power of the combination of dark patterns and AI can be found in platforms encouraging consumers to become paying members by presenting this option in different ways and over different time periods.Footnote 111 Machine learning applications can analyze personal data to optimize dark patterns and find ever more effective ways to convince consumers to buy a subscription. They can examine how many hours a day are spent watching videos, how many advertisements are skipped, and whether the app is closed when an ad is shown.Footnote 112 The number of ads played may be increased if the consumer refuses to become a paying member.Footnote 113 Such a process can be stretched over quite a long time, making consumers believe that subscribing is their own decision, without them feeling tricked.Footnote 114 In essence, the combination of AI, personal data, and dark patterns results in an increased ability to manipulate consumers.
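
A deliberately simplified sketch of this feedback loop is set out below. All signals, thresholds, and increments are invented for illustration; the code does not describe any platform’s documented behavior. It shows only the mechanism: tracked engagement is used to gradually raise ad pressure on users who decline to subscribe, while backing off before the user churns.

```python
# Simplified sketch of an engagement-driven ad-pressure loop.
# All signals, thresholds and increments are hypothetical.

def next_ad_load(user, current_ads_per_hour):
    """Decide how many ads to show next, based on tracked engagement."""
    engaged = user["hours_per_day"] > 2 and user["ads_skipped_ratio"] < 0.5
    if user["declined_subscription"] and engaged and not user["closes_app_on_ads"]:
        # The user tolerates ads and keeps coming back: increase pressure slowly,
        # staying below an estimated tolerance learned from past users.
        return min(current_ads_per_hour + 1, user["estimated_tolerance"])
    if user["closes_app_on_ads"]:
        # Too much pressure risks losing the user entirely: back off.
        return max(current_ads_per_hour - 1, 1)
    return current_ads_per_hour

user = {"hours_per_day": 3.5, "ads_skipped_ratio": 0.3,
        "declined_subscription": True, "closes_app_on_ads": False,
        "estimated_tolerance": 8}

ads = 4
for week in range(5):
    ads = next_ad_load(user, ads)
    print(f"week {week + 1}: {ads} ads per hour")
# Stretched over weeks, the escalation stays subtle enough that subscribing
# eventually feels like the consumer's own idea.
```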

10.5.2 Overview of the Relevant Instruments of Consumer Protection against Dark Patterns

The UCPD is a first instrument that offers a number of possible avenues to combat dark patterns. As mentioned, it covers a wide range of prohibited practices in a business-to-consumer context.Footnote 115 First, the general prohibition of unfair commercial practices in Article 5 UCPD, which functions as a residual control mechanism, can be invoked. It prohibits all practices that violate a trader’s professional diligence obligation and that cause the average consumer to make a transactional decision they would not otherwise have made.Footnote 116 This includes not only the decision to purchase or not purchase a product but also related decisions, such as visiting a website or viewing content.Footnote 117 As mentioned, the standard of the “average” consumer (of the target group) is a normative standard that has (so far) been applied rather strictly, as rational behavior is the point of departure in the assessment.Footnote 118 The fact that the benchmark can be modulated to the target group does, however, offer some possibilities for a less strict standard in the case of personalization, as the practice could then even be assessed from the perspective of a single targeted person.Footnote 119

Article 5(3) UCPD furthermore creates some possibilities to assess a practice from the perspective of a vulnerable consumer, but the narrow definition of vulnerability as mental or physical infirmity, age, or credulity is, as mentioned, not suitable for the digital age. Indeed, any consumer can be temporarily vulnerable due to contextual and psychological factors.Footnote 120 According to the European Commission, the UCPD provides a non-exhaustive list of characteristics that make a consumer “particularly susceptible,” and the concept of vulnerability should therefore include context-dependent vulnerabilities, such as interests, preferences, psychological profile, and even mood.Footnote 121 It will indeed be important to adopt such a broader interpretation to take into account the fact that all consumers can be potentially vulnerable in a digital context. The open norms of the UCPD might indeed be sufficiently flexible for such an interpretation,Footnote 122 but a clearer text in the directive itself – and not only (nonbinding) Commission guidance – would be useful.

The more specific open norms prohibiting misleading and especially aggressive practices (arts. 6–9 UCPD) can also be invoked. But it is again uncertain how open concepts such as “undue influence” (art. 8 UCPD) must be interpreted in an AI context and to what extent the benchmark of the average consumer can be individualized. At what point does increased exposure to advertising, tailored to past behavior in order to convince a consumer to “choose” a paid subscription, amount to undue influence? More guidance on the interpretation of these open norms would be welcome.Footnote 123

The blacklist in Annex I of the UCPD avoids the whole discussion on the interpretation of these benchmarks. That list prohibits specific practices that are considered unfair in all circumstancesFootnote 124 and does not require an analysis of the potential effect on the average (or, exceptionally, vulnerable) consumer. Nor do these practices require proof that the trader breached his professional diligence duty.Footnote 125 The list prohibits several online practices, including disguised ads,Footnote 126 false urgency (e.g., fake countdown timers),Footnote 127 bait and switch,Footnote 128 and direct exhortations to children.Footnote 129 However, these practices were not specifically formulated with an AI context in mind, and interpretational problems therefore also occur when applying the current list to dark patterns. For instance, the Commission guidance mentions that “making repeated intrusions during normal interactions in order to get the consumer to do or accept something (i.e., nagging) could amount to a persistent and unwanted solicitation.”Footnote 130 The same interpretational problem then arises: how much intrusion and pressure exactly is needed to make a practice a “persistent and unwanted solicitation”? Additional blacklisted (AI) practices would increase legal certainty and facilitate enforcement.

Finally, the recently added Article 7(4a) UCPD requires traders to provide consumers with general information about the main parameters that determine the ranking of search results and their relative importance. The effectiveness of this article in protecting consumers by informing them can be questioned, as transparency about practices generated by an AI system collides with the black box problem. Sharing information about the input phase, such as the data set and learning algorithm that were used, may to some extent mitigate the information asymmetry, but it will not suffice as a means of protection.

While the UCPD has broad coverage of most types of unfair commercial practices, its case-by-case approach does not allow all forms of deceptive techniques known as “dark patterns” to be addressed effectively. For example, BEUC’s 2022 report highlights the lack of consumer protection against practices that use language and emotion to influence consumers to make choices or take specific actions, often through tactics such as shaming (also referred to as “confirmshaming”).Footnote 131 In addition, there is uncertainty about the responsibilities of traders under the professional diligence duty and about whether certain practices are explicitly prohibited.Footnote 132 Insufficient enforcement by both public and private parties further weakens this instrument.Footnote 133

A second piece of legislation that provides some protection against dark patterns is the DSA. The regulation refers to dark patterns as practices “that materially distort or impair, either purposefully or in effect, the ability of recipients of the service to make autonomous and informed choices or decisions.”Footnote 134 The DSA prohibits online platforms from designing, organizing, or operating their interfaces in a way that “deceives, manipulates, or otherwise materially distorts or impacts the ability of recipients of their services to make free and informed decisions,”Footnote 135 in so far as those practices are not covered by the UCPD and the GDPR.Footnote 136 Note that this important exception largely erodes consumer protection: where the UCPD applies, and that includes all B2C practices, the vague standards of the UCPD will apply rather than the more specific prohibition of dark patterns in the DSA. A cumulative application would have been preferable. The DSA inter alia targets exploitative design choices and practices such as “forced continuity,” which make it unreasonably difficult to discontinue purchases or to sign out from services.Footnote 137

The AI Act contains two specific prohibitions on manipulation practices carried out through the use of AI systems that may cover dark patterns.Footnote 138 These bans prohibit the use of subliminal techniques to materially distort a person’s behavior in a manner that causes or is likely to cause significant harm, and the exploitation of vulnerabilities of specific groups of people to materially distort their behavior in a manner that causes or is likely to cause significant harm.Footnote 139 These prohibitions are similar to those in the UCPD, except that they are limited to practices carried out through the use of AI systems.Footnote 140 They also have some limitations. The ban relating to the abuse of vulnerabilities only applies to certain explicitly listed vulnerabilities, such as age, disability, or a specific social or economic situation, so the problem of digital vulnerability mentioned above is not tackled. A further major limitation was fortunately omitted from the final text of the AI Act: whereas under the AI proposal these provisions only applied in case of physical and mental harm – which will often not be present and may be difficult to proveFootnote 141 – the prohibitions in the final AI Act also apply to (significant) economic harm.

The AI Act is complementary to other existing regulations, including data protection, consumer protection, and digital service legislation.Footnote 142 Finally, taking into account the fact that this Regulation strongly focuses on high-risk AI and that there are not many private services that qualify as high risk, the additional protection for consumers from this regulation seems limited.

The Consumer Rights Directive, with its transparency requirements for precontractual informationFootnote 143 and its prohibition on the use of pre-ticked boxes implying additional payments, might also provide some help.Footnote 144 However, the prohibition on pre-ticked boxes does not apply to certain sectors that are excluded from the directive, such as financial services.Footnote 145 The UCPD could nevertheless also be invoked to combat charging for additional services through default interface settings, and that directive does apply to the financial sector.Footnote 146 The CRD does not regulate the conditions for contract termination, except for the right of withdrawal. An obligation for traders to insert a “withdrawal function” or “cancellation button” in contracts concluded by means of an online interface has recently been added to the CRD.Footnote 147 This function is meant to make it easier for consumers to terminate distance contracts, particularly subscriptions, during the withdrawal period. This could be a useful tool to combat subscription traps.

10.6 Conclusion

AI poses major challenges to consumers and to consumer law, and the traditional consumer law instruments are not well adapted to tackle these challenges. The mere provision of information on how AI operates will definitely not suffice to adequately protect consumers. The current instruments do allow some of the most blatant detrimental practices to be tackled, but the application of open norms in a digital context creates uncertainty and hinders effective enforcement, as our case study of dark patterns has shown. The use of AI in a business context creates a structural vulnerability for all consumers. This requires additional regulation to provide better protection, as well as additional efforts to raise awareness of the risks AI entails.

11 Artificial Intelligence and Intellectual Property Law

Jozefien Vanherpe
11.1 Introduction

This chapter reflects on the interaction between AI and Intellectual Property (IP) law. IP rights are exclusive rights vested in intangible assets that grant their owner a temporary monopoly as to the use thereof in a given territory. IP rights may be divided into industrial property and literary and artistic property. Industrial property rights protect creations that play a largely economic role and primarily include patents, trademarks, and design rights. The concept of literary and artistic property rights refers to copyright and related rights. Copyright offers the author(s) protection for literary and artistic works, while the three main related rights are granted to performing artists, producers, and broadcasting organizations.

The interface of AI and IP law has been the subject of much research already.Footnote 1 This chapter analyzes some of the relevant legal issues from a primarily civil law perspective, with a focus on the European Union (EU), and with the caveat that its limited length leaves little leeway for the nuance that this intricate, multifaceted topic demands. Section 11.2 treats the avenues open to innovators who seek to protect AI technology. Section 11.3 examines whether AI systems qualify as an author or inventor and who “owns” AI-powered content. Section 11.4 briefly notes the issues surrounding IP infringement by AI systems, the potential impact of AI on certain key concepts of IP law and the growing use of AI in IP practice.

11.2 Protection of AI Technology

Companies may protect innovation relating to AI technology through patent law and/or copyright law. Both avenues are treated in turn below.

11.2.1 Protection under Patent Law

Patent law seeks to reward investment into research and development in order to spur future innovation. It does so by providing patentees with a temporary right to exclude others from using a certain “invention,” a technological improvement that takes the form of a product or a process (or both). This monopoly right is limited to 20 years following the patent application, subject to payment of the applicable annual fees.Footnote 2 It is also limited in scope: while patentees can bring both direct and indirect infringements of their patent(s) to an end, they must accept certain exceptions as a defense to their claims, including use for experimental purposes and noncommercial use.Footnote 3 In order to be eligible for a patent, the invention must satisfy a number of conditions.

First, certain exclusions apply. The list of excluded subject matter under the European Patent Convention (EPC)Footnote 4 includes ideas that are deemed too abstract, such as computer programs as such, methods for performing mental acts and mathematical methods.Footnote 5 Pure abstract algorithms, which are essential to AI systems, qualify as a mathematical method, and are thus ineligible for patent protection as such.Footnote 6 However, this does not exclude patent protection for computer-implemented inventions such as technology related to AI algorithms, especially given the lenient interpretation of the “as such” proviso in practice. If the invention has a technical effect beyond its implementation on a computer – a connection to a material object in the “real” world – patentability may yet arise.Footnote 7 This will for example be the case for a neural network used “in a heart monitoring apparatus for the purpose of identifying irregular heartbeats,” as well as – in certain circumstances – methods for training AI systems.Footnote 8

Further, a patentable invention must satisfy a number of substantive conditions: it must be novel and inventive as well as industrially applicable.Footnote 9 The novelty requirement implies that the invention may not form part of what was already available to the public at the date of filing of the patent application (the “state of the art”).Footnote 10 The condition of inventive step requires the invention not to have been obvious to a theoretical person skilled in the art (PSA) on the basis of this state of the art.Footnote 11 Finally, the invention must be susceptible to use in an industrial context.Footnote 12 Neither the novelty requirement nor the industrial applicability requirement appears to pose challenges specific to AI-related innovation.Footnote 13 However, the inventiveness analysis only takes account of the patent claim features that contribute to the “technical character” of the invention, that is, to the solution of a technical problem. Conversely, nontechnical features (such as the abstract algorithm) are removed from the equation.Footnote 14

The “patent bargain” between patentee and issuing government may lead to another obstacle. This implies that a prospective patentee must disclose their invention in a way that is sufficiently clear and complete for it to be carried out by a PSA, in return for patent protection.Footnote 15 This requirement of disclosure may be at odds with the apparent “black box” nature of many forms of AI technology, particularly in a deep learning context. This refers to a situation where we know which data were provided to the system (input A) and which result is reached (output B), but where it is unclear what exactly makes the AI system go from A to B.Footnote 16 Arguably, certain AI-related inventions cannot be explained in a sufficiently clear and complete manner, excluding the procurement of a patent therefor. However, experts will generally be able to disclose the AI system’s structure, the applicable parameters and the basic principles to which it adheres.Footnote 17 It is plausible that patent offices will deem this to be sufficient. The risk of being excluded from patent protection constitutes an additional incentive to invest in so-called “explainable” and transparent AI.Footnote 18 The transparency requirements established by the EU AI Act also play a role in this context.Footnote 19 Simultaneously, an overly strict assessment of the requirement of disclosure may push innovators toward trade secrets as an alternative way to protect AI-related innovation.Footnote 20

It is often difficult to predict the outcome of the patenting process of AI-related innovation. This uncertainty does not seem to deter prospective patentees, as evidenced by the rising number of AI-related patent applications.Footnote 21 Since the 1950s, over 300,000 AI-related patent applications have been filed worldwide, with a sharp increase in the past decade: in 2019, it was already noted that more than half of these applications had been published since 2013.Footnote 22 It is to be expected that more recent numbers will confirm this evolving trend.

11.2.2 Protection under Copyright Law

AI-related innovation may also enjoy copyright protection. Copyright protection is generated automatically upon the creation of a literary and artistic work that constitutes a concrete and original expression by the author(s).Footnote 23 It offers exclusive exploitation rights as to protected works, such as the right of reproduction and the right of communication to the public (subject to a number of exceptions), as well as certain moral rights.Footnote 24 Copyright protection lasts until a minimum period of 50 years has passed following the death of the longest living author, a period that has been extended to 70 years in, for example, the EU Member States.Footnote 25

The validity conditions for copyright are the requirement of concrete form and the requirement of originality. First, copyright protection is not available to mere abstract ideas and principles; these must be expressed in a concrete way.Footnote 26 Second, the condition of originality implies that the work must be an intellectual creation of the author(s), reflecting their personality and expressing free and creative choices.Footnote 27 Applied to AI-related works in particular, the functional algorithm in its purest sense does not satisfy the first condition and is therefore not susceptible to copyright protection.Footnote 28 However, the object and source code of the computer program expressing this idea are sufficiently concrete, allowing for copyright protection once the condition of originality is fulfilled.Footnote 29 Given the low threshold set for originality in practice, software that implements AI technology is likely to receive automatic protection as a computer program under copyright law upon its creation.Footnote 30

11.3 Protection of AI-Assisted and AI-Generated Output

This section analyzes whether AI systems could – and, if not, should – claim authorship and/or inventorship in their output.Footnote 31 It then focuses on IP ownership as to such output.

11.3.1 AI Authorship

Can AI systems ever claim authorship? To answer this question, we must first ascertain whether “creative” machines already exist. Second, we discuss whether an AI system can be considered an author and, if not, whether it should be.

Certain AI systems available today can be used as a tool to create works that would satisfy the conditions for copyright protection if they had been solely created by humans. Many examples can be found in the music sector.Footnote 32 You may be reading this chapter with AI-generated music playing, such as piano music by Google’s “DeepMind” AI,Footnote 33 an album released by the “Auxuman”Footnote 34 algorithm, a soundscape created by the “Endel”Footnote 35 app or one of the unfinished symphonies of Franz Schubert or Ludwig van Beethoven as completed with the aid of an AI system.Footnote 36 If you would rather create music yourself, Sony’s “Flow Machines” project may offer assistance by augmenting your creativity through its AI algorithm.Footnote 37 If you are bored with this text, which was written (solely) by a human author, you may instead start a conversation with “ChatGPT 4,”Footnote 38 read a novelFootnote 39 drafted by an AI algorithm or translate it using “DeepL.”Footnote 40 AI-generated artwork is also available.Footnote 41 Most famously, Rembrandt van Rijn’s paintings were fed to an AI algorithm that went on to create a 3D-printed painting in Rembrandt’s style in 2016.Footnote 42 Since then, the use of AI in artwork has skyrocketed, with AI-powered image-generating applications such as “DALL-E 3”Footnote 43 and “Midjourney”Footnote 44 gaining exponential popularity.Footnote 45

In most cases, there is still some human intervention, be it by a programmer, a person training the AI system through data input or somebody who modifies and/or selects output deemed “worthy” to disclose.Footnote 46 If such human(s) were to have created the work(s) without the intervention of an AI system, copyright protection would likely be available.

Copyright law requires the work at issue to show authorship; the personal stamp of the author. The author is considered to be a physical person, especially in the civil law tradition, where copyright protection is viewed as a natural right, granted to the author to protect emanations of their personality.Footnote 47 Creativity is viewed as a quintessentially human faculty, whereby a sentient being expresses their personality by making free, deliberate choices.Footnote 48 This tenet pervades all aspects of copyright law. First, copyright laws grant initial ownership of copyright in a certain work to its author.Footnote 49 Further, the term of protection is calculated from the author’s death. Also, certain provisions expressly seek to protect the author, such as those included in copyright contract law as well as the resale right applicable to original works of art. Moreover, particular copyright exceptions only apply if the author is acknowledged and/or if an equitable remuneration is paid to the author, such as the exception for private copies. The focus on the human author also explains the importance of the author’s moral rights to disclosure, integrity, and attribution.Footnote 50 Such a system leaves no room for the authorship of a nonhuman entity.Footnote 51 If there is insufficient human input in the form of free and creative choices on the part of an author, if the AI crosses a certain threshold of autonomy, copyright protection is unavailable.Footnote 52 This anthropocentric view is unsurprising, since IP laws were largely drafted at a time when the concept of nonhuman “creators” belonged squarely in the realm of fiction.

However, the core of the issue is whether the abstract idea of originality should be held to include the creating behavior of an AI system. Account must hereby be taken of the broad range of potential AI activity and the ensuing distinction between AI-assisted and truly AI-generated content. At the one end of the spectrum, we may find AI systems that function as a tool to assist and/or enhance human creativity, where the AI itself acts as a mere executer.Footnote 53 We can compare this to the quill used by William Shakespeare.Footnote 54 Further down the line, there are many forms of AI-exhibited creativity that still result from creative choices made by a human, where the output flows directly from previously set parameters.Footnote 55 Such AI activity may still be viewed as pure execution. In such cases, copyright should be reserved to the human actor behind the machine.

At the far end of the spectrum, we could find a hypothetical, more autonomous, “creative” AI, having independently created a work that exhibits the requisite creativity, which experts and nonexperts alike cannot distinguish from a work generated by a human. Even in such a case, it may be argued that there is no real act of “conception” in the AI system, given that every piece of AI-generated output is the result of prior human input.Footnote 56 Arguably, precisely this act, the process of creation, is the essence of creativity. As long as the human thought process cannot be formulated as an algorithm that may be implemented by a computer, this process will remain human, thus excluding AI authorship. However, the “prior input” argument also applies mutatis mutandis to humans, who create literary and artistic works while “standing on the shoulders of giants.”Footnote 57 This could render the “act of conception” argument against AI authorship moot, as could choosing the end result and thus the originality of the output as a (functionalist) focal point instead of the creative process.Footnote 58 Additionally, it is argued that granting AI systems authorship may stimulate further creative efforts on the part of AI systems. This appears to be in line with the economic, utilitarian rationale of copyright.Footnote 59 However, copyright seeks to incentivize human creators, not AI systems.Footnote 60 Moreover, it is difficult to see how AI systems may respond to incentives in the absence of human consciousness.Footnote 61 Without convincing economic evidence, caution is advised against tearing down one of the fundamental principles of copyright law. The mere fact that we can create certain incentives does not in itself imply that we should. Further, if we were to allow AI authorship, we must be prepared for an upsurge in algorithmic creations, as well as the effects on human artistic freedom that this would entail.Footnote 62

The risk of extending authorship to AI systems could be mitigated by instead establishing a related or sui generis right in AI-generated works that provides a limited degree of exclusivity in order to protect investments and incentivize research in this area. Such a right could be modelled in a similar way to the database right established by the EU in 1996.Footnote 63 The latter requires a substantial investment for protection to be available.Footnote 64

11.3.2 AI Inventorship

We now turn to AI inventorship. By analogy to the previous section, the first question is whether “inventive” machines already exist. Such systems are much scarcer than AI systems engaged in creative endeavors.Footnote 65 However, progress on this front is undeniable.Footnote 66 The AI sector’s primary allegedly inventive champion is “DABUS,”Footnote 67 labelled the “Creativity Machine” by its inventor, physicist Dr Stephen Thaler.Footnote 68 DABUS is a neural network-based system meant to generate “useful information” autonomously, thereby “simulating human creativity.”Footnote 69 In 2018, a number of patent applications were filed for two of DABUS’ inventions.Footnote 70 The prosecution files indicate DABUS as the inventor and clarify that Dr Thaler obtained the right to the inventions as its successor in title.Footnote 71 These patent applications offer a test case for the topic of AI inventorship.

Patent law requires inventors to be human. While the relevant legislative provisions do not contain any explicit requirement in this sense, the requirement that the inventor be a natural person is implied in the law.Footnote 72 While the focus on the human inventor is much less pronounced than it is on the human author, a number of provisions would make no sense if we were to accept AI inventorship. First, many patent laws stipulate that the "inventor" is the first owner of an invention, except in an employment context, where the employer is deemed to be the first owner under the laws of some countries.Footnote 73 Since AI systems do not have legal personality (as of yet), they cannot have ownership rights, nor can they be employees as such.Footnote 74 Given that those are the only two available options, AI systems cannot be considered "inventors" as the law currently stands, as confirmed in the DABUS case not only by the Boards of Appeal of the European Patent Office, but also by the UK Supreme Court and the German Federal Supreme Court.Footnote 75 Another argument against AI inventorship may be drawn from the inventor's right of attribution. Every inventor has the right to be mentioned as such and all patent applications must designate the inventor.Footnote 76 This moral right, which is meant to incentivize the inventor to innovate further, may become meaningless upon the extension of the concept of inventorship to AI systems.Footnote 77

The second aspect of the discussion is whether there should be room for AI inventorship. The main argument in favor of this is that it would incentivize research and development in the field of AI.Footnote 78 However, in the absence of compelling empirical evidence, the incentive argument is not convincing, especially since AI systems as such are not susceptible to incentives and the cost of AI invention will likely decrease over time.Footnote 79 Another reason to accept AI inventorship would be to avoid humans incorrectly claiming inventorship. However, the as-yet instrumental nature of AI systems provides a counterargument.Footnote 80 Further, there is no AI-generated output without some form of prior human input. The resulting absence of an act of "conception," of the process of invention, excludes any extension of the scope of inventorship to nonhuman actors such as AI systems.Footnote 81 Again, however, the "prior input" argument also applies mutatis mutandis to humans. In patent law, too, therefore, the "act of conception" argument against AI inventorship is susceptible to counterarguments.Footnote 82 A final aspect is that allowing AI inventorship would entail an increased risk both of overlapping sets of patents, known as "patent thickets," and of so-called "patent trolls," that is, nonpracticing entities that maintain an aggressive patent enforcement strategy while not exploiting the patent(s) at issue themselves.Footnote 83

11.3.3 Ownership

The next question is how ownership rights in AI-powered creations should be allocated.Footnote 84 As explained earlier, IP law does not allow AI systems to be recognized as either an author or an inventor. This raises the question of whether the intervention of a creative and/or inventive AI excludes any kind of human authorship or inventorship (and thus ownership) as to the output at issue. It is submitted that it does not, as long as there is a physical person who commands the AI system and maintains the requisite level of control over its output.Footnote 85 In such a case, IP rights may fulfil their role of protecting the interests of creators as well as provide an indirect incentive for future creation and/or innovation.Footnote 86 However, if there is no sufficient causal relationship between the (in)actions of a human and the eventual end result, the argument in favor of a human author and/or inventor becomes untenable. What exactly constitutes "sufficient" control is difficult to establish. A further layer of complexity is added by the black box nature of some AI systems: How can we determine whether a sufficient causal link exists between the human and the output, if it is impossible to find out exactly why this output was reached?Footnote 87 However, both copyright and patent protection may be available to works and/or inventions that result from coincidence or even sheer luck.Footnote 88 If we take a step back, both AI systems and serendipity may be considered as factors outside the scope of human control. Given that Jackson Pollock may claim protection in his action paintings and given the role that chance plays in Pollock's creation process, can we really deny such protection to the person(s) behind "the next Rembrandt"?

In copyright jargon, we could say that for a human to be able to claim copyright in a work created through the intervention of AI, their “personal stamp” must be discernible in the end result. If we continue the above analogy, Pollock’s paintings clearly reflect his personal choices as an artist. In patent law terms, human inventorship may arise in case of a contribution that transcends the purely financial, abstract or administrative and that is aimed at conceiving the claimed invention – be it through input or output selection, algorithm design, or otherwise.Footnote 89 In an AI context, different categories of people may stake a claim in this regard.

First in line are the programmer(s),Footnote 90 designer(s),Footnote 91 and/or producer(s) of the AI system (hereinafter collectively referred to as “AI creators”). By creating the AI system itself, these actors play a substantive role in the production of AI-generated output.Footnote 92 However, the allocation of rights to the creator sits uneasily with the unpredictable nature of AI-generated output.Footnote 93 While the AI creator’s choices define the AI system, they do not define the final form of the output.Footnote 94 This argument gains in strength the more autonomous the AI algorithm becomes.Footnote 95 Then again, a programmer who is somehow dissatisfied with the AI’s initial output may tweak the AI’s algorithm, thus manipulating and shaping further output, as well as curate the AI output based on their personal choices.Footnote 96 However, an economic argument against granting the AI creator rights in AI-generated output is that this may lead to “double-dipping.” This would be the case if the creator also holds rights in patents granted as to the AI system or the copyright therein, or if the AI system is acquired by a third party for a fee and the output at issue postdates this transfer.Footnote 97 In both cases, the creator would obtain two separate sources of income for essentially the same thing. Moreover, enforcing the AI creator’s ownership rights would be problematic if the AI system generates the output at issue after a third party has started using it. Indeed, knowing that ownership rights would be allocated to the creator, the user would have strong incentives not to report back on the (modalities of) creation of output.Footnote 98

A claim similar to that of the AI system's creator may be made by the AI's trainer, who feeds input to the AI system.Footnote 99 Alternatively, the user who has contributed substantially to the output at issue may claim ownership.Footnote 100 The list of stakeholders continues with the investor, the owner of the AI system and/or the data used to train the algorithm, the publisher of the work, the general public, and even the government. Moreover, some form of joint ownership may be envisaged.Footnote 101 However, this would entail other issues, such as an unnecessary fragmentation of ownership rights and difficulties in proving (the extent of) ownership claims.Footnote 102 It could even be argued that, in view of the ever-rising number of players involved, no individual entity can rightfully claim to have made a significant contribution "worthy" of IP ownership.Footnote 103

As of yet, no solution to the ownership conundrum appears to be wholly satisfactory. The void left by this lingering uncertainty will likely be filled with contractual solutions.Footnote 104 Given unequal bargaining power, instances of unfair ownership and licensing arrangements are to be expected.Footnote 105 A preferable solution could be to not allocate ownership in AI-generated output to anyone at all and instead allot such output to the public domain. Stakeholders could sufficiently protect their investment in AI-related innovation by relying on patent protection for the AI system itself, first-mover advantage, trade secret law, contractual arrangements, and technological protection measures, as well as general civil liability and the law of unfair competition.Footnote 106 However, there is a very pragmatic reason not to consign AI-generated output to the public domain, namely that it is increasingly difficult to distinguish output in the creation of which AI played a certain role from creations that were made solely by a human author.Footnote 107 This could be remedied by requiring aspiring IP owners to disclose the intervention of an AI-powered system in the creation and/or innovation process. However, the practical application of such a requirement remains problematic at present. The prospect of having a work consigned to the public domain would provide stakeholders seeking a return on investment with strong incentives to keep quiet on this point. This could invite misleading statements on authorship and/or inventorship of AI-generated output in the future.Footnote 108 Transparency obligations, such as the watermarking requirement imposed on providers of certain AI systems (including general-purpose AI models) under the EU AI Act, may bring us closer to a solution in this regard, likely combined with a "General-Purpose AI Code of Practice" that is to be drafted under the auspices of the AI Office at the EU level.Footnote 109

11.4 Miscellaneous Topics

In addition to the above, the interface between AI and IP has many other dimensions. Without any claim to exhaustiveness, this section briefly treats some of them, namely the issues surrounding IP infringement by AI systems, the potential impact of AI on certain key concepts of IP law, and the growing use of AI in IP practice.

11.4.1 IP Infringement

First, in order to train an AI algorithm, a significant amount of data is often required. If (part of) the relevant training data is subject to IP protection, the reproduction and/or communication to the public thereof in principle requires authorization by the owner, subject to the applicability of relevant exceptions and limitations to copyright. The question thus arises whether actively scraping the internet for artists’ work to reuse in the context of, for example, generative AI art tools constitutes an infringement. At the time of writing, several legal proceedings are pending on this question across the globe.Footnote 110 Importantly, the EU AI Act (1) confirms the applicability of text and data mining exceptions to the training of general-purpose AI models, subject to a potential opt-out on the part of rightholders; and (2) mandates the drawing up and public availability of “a sufficiently detailed summary about the content used for training of the general-purpose AI model.”Footnote 111 Further, in order to ensure that authors, performers and other rightholders receive fair and appropriate remuneration for the use of their content as training data, contractual solutions may be envisaged.Footnote 112
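
To give a concrete (and deliberately simplified) sense of what a machine-readable reservation of rights can look like in practice, the sketch below checks a website's robots.txt file before collecting content for a hypothetical training-data crawler. This is only an illustration under stated assumptions: robots.txt is one widely used signal of crawling preferences, but whether it suffices as an opt-out for text and data mining purposes is itself contested, and the crawler name and URLs used here are invented. The example relies solely on Python's standard library.

from urllib.robotparser import RobotFileParser

# Hypothetical crawler identifier and target page, for illustration only.
CRAWLER_USER_AGENT = "ExampleTrainingDataBot"
TARGET_URL = "https://example.com/artworks/gallery.html"

# Fetch and parse the site's robots.txt file (one possible machine-readable signal).
parser = RobotFileParser("https://example.com/robots.txt")
parser.read()

if parser.can_fetch(CRAWLER_USER_AGENT, TARGET_URL):
    print("robots.txt does not disallow this crawler for this URL.")
else:
    print("The site operator has signalled an opt-out; the page should be skipped.")

In practice, rightholders may also express reservations through other mechanisms, such as metadata embedded in files or contractual terms, so a diligent provider would need to take several of these signals into account.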

AI systems may also infringe IP rights after the training process. By way of example, an AI program could create a song containing original elements of a preexisting work, thus infringing the reproduction right of the owner of the copyright in the musical work at issue. An inventive machine may develop a process and/or product that infringes a patent, devise a sign that is confusingly similar to a registered trademark, or design a product that falls within the scope of a protected (un)registered design. This in turn leads to further contentious matters, such as whether or not relevant exceptions and/or limitations (should) apply and whether fundamental rights such as freedom of expression may still play a role.Footnote 113

11.4.2 Impact of AI on Key Concepts of IP Law

Next, the rise of AI may significantly affect a number of key concepts of IP law that are clearly tailored to humans, in addition to the concepts of "authorship" and "inventorship." First in line in this regard is the inventiveness standard under patent law, which centers around the so-called "person skilled in the art" (PSA).Footnote 114 This is a hypothetical person (or team) whose level of knowledge and skill depends on the field of technology.Footnote 115 If it is found that the PSA would have arrived at the invention, the invention will be deemed obvious and not patentable. If the use of inventive machines becomes commonplace in certain sectors of technology, the PSA standard will evolve into a PSA using such an inventive machine – and maybe even an inventive machine as such.Footnote 116 This would raise the bar for inventive step and ensuing patentability, since such a machine would be able to innovate based on the entirety of available prior art.Footnote 117 Taken to its logical extreme, this argument could shake the foundations of our patent system. Indeed, if the "artificially superintelligent" PSA is capable of an inventive step, everything becomes obvious, leaving no more room for patentable inventions.Footnote 118 We therefore need to start thinking about alternatives and/or supplements to the current nonobviousness analysis – and maybe even to the patent regime as a way to incentivize innovation.Footnote 119

Questions also arise in a trademark law context, such as how the increased intervention of AI in the online product suggestion and purchasing process may be reconciled with the anthropocentric conception of trademark law, as apparent from the use of criteria such as the "average consumer," "confusion," and "imperfect recollection" – all of which have a built-in margin for human error.Footnote 120

11.4.3 Use of AI in IP Practice

Finally, the clear hesitancy of the IP community toward catering for additional incentive creation in the AI sphere by amending existing IP laws may be contrasted with apparent enthusiasm as to the use of AI in IP practice. Indeed, the increased (and still increasing) use of AI systems as a tool in the IP sector is striking. The ability of AI systems to process and analyze vast amounts of data quickly and efficiently offers a broad range of opportunities. First, the World Intellectual Property Organization (WIPO) has been mining the possibilities offered by AI with regard to the automatic categorization of patents and trademarks as well as prior art searches, machine translations, and formality checks.Footnote 121 Other IP offices are following suit.Footnote 122 Second, AI technology may be applied to the benefit of registrants. On a formal level, AI technology may be used to suggest relevant classes of goods and services for trademarks and/or designs. On a substantive level, AI technology may be used to aid in patent drafting and to screen registers for existing registrations to minimize risk. AI technology may assist in determining the similarity of trademarks and/or designs and even in evaluating prior art relating to patents.Footnote 123 AI-based IP analytics and management software is also available.Footnote 124 Finally, AI-powered applications are used in the fight against counterfeit products.Footnote 125

11.5 Conclusion

The analysis of the interface between AI and IP reveals a field of law and technology of increasing intricacy. As the term suggests, “intellectual” property law has traditionally catered for creations of the human mind. Technological evolutions in the field of AI have prompted challenges to this anthropocentric view. The most contentious questions are whether authorship and inventorship should be extended to AI systems and who, if anybody, should acquire ownership rights as to AI-generated content. Valid points may be raised on all sides of the argument. However, we should not unreservedly start tearing down the foundations of IP law for the mere sake of additional incentive creation.

In any case, regardless of the eventual (legislative) outcome, the cross-border exploitation of AI-assisted or -generated output and the pressing need for transparency of the legal framework require a harmonized solution based on a multi-stakeholder conversation, preferably on a global scale. Who knows, maybe one day an artificially super-intelligent computer will be able to find this solution in our stead. Awaiting such further hypothetical technological evolutions, however, the role of WIPO as a key interlocutor on AI and IP remains paramount, in tandem with the newly established AI Office at the EU level.Footnote 126

12 The European Union’s AI Act Beyond Motherhood and Apple Pie?

Nathalie A. Smuha and Karen Yeung Footnote *
12.1 Introduction

In spring 2024, the European Union formally adopted the "AI Act,"Footnote 1 purporting to create a comprehensive EU legal regime to regulate AI systems across sectors. In so doing, it signaled its commitment to protecting core EU values against AI's adverse effects, to maintaining a harmonized single market for AI in Europe, and to benefiting from a first-mover advantage (the so-called "Brussels effect")Footnote 2 to establish itself as a leading global standard-setter for AI regulation. The AI Act reflects the EU's recognition that, left to its own devices, the market alone cannot protect the fundamental values upon which the European project is founded from unregulated AI applications.Footnote 3 Will the AI Act's implementation succeed in translating its noble aspirations into meaningful and effective protection of people whose everyday lives are already directly affected by these increasingly powerful systems? In this chapter, we critically examine the conceptual vehicles and regulatory architecture upon which the AI Act relies to argue that there are good reasons for skepticism. Despite its laudable intentions, the Act may deliver far less than it promises in terms of safeguarding fundamental rights, democracy, and the rule of law. Although the Act appears to provide meaningful safeguards, many of its key operative provisions delegate critical regulatory tasks largely to AI providers themselves, without adequate oversight or effective mechanisms for redress.

We begin in Section 12.2 with a brief history of the AI Act, including the influential documents that preceded and inspired it. Section 12.3 outlines the Act’s core features, including its scope, its “risk-based” regulatory approach, and the corollary classification of AI systems into risk-categories. In Section 12.4, we critically assess the AI Act’s enforcement architecture, including the role played by standardization organizations, before concluding in Section 12.5.

12.2 A Brief History of the AI Act

Today, AI routinely attracts hyperbolic claims about its power and importance, with one EU institution even likening it to a “fifth element after air, earth, water and fire.”Footnote 4 Although AI is not new,Footnote 5 its capabilities have radically improved in recent years, enhancing its potential to effect major societal transformation. For many years, regulators and policymakers largely regarded the technology as either wholly beneficial or at least benign. However, in 2015, the so-called “Tech Lash” marked a change in tone, as public anxiety about AI’s potential adverse impacts grew.Footnote 6 The Cambridge Analytica scandal, involving the alleged manipulation of voters via political microtargeting, with troubling implications for democracy, was particularly important in galvanizing these concerns.Footnote 7 From then on, policy initiatives within the EU and elsewhere began to take a “harder” shape: eschewing reliance on industry self-regulation in the form of non-binding “ethics codes” and culminating in the EU’s “legal turn,” marked by the passage of the AI Act. To understand the Act, it is helpful to briefly trace its historical origins.

12.2.1 The European AI Strategy

The European Commission published a European strategy for AI in 2018, setting in train Europe's AI policyFootnote 8 to promote and increase AI investment and uptake across Europe in pursuit of its ambition to become a global AI powerhouse.Footnote 9 This strategy was formulated against a larger geopolitical backdrop in which the US and China were widely regarded as frontrunners, battling it out for first place in the "AI race," with Europe lagging significantly behind. Yet the growing Tech Lash made it politically untenable for European policymakers to ignore public concerns. How, then, could they help European firms compete more effectively on the global stage while assuaging growing concerns that more needed to be done to protect democracy and the broader public interest? The response was to turn a perceived weakness into an opportunity by making a virtue of Europe's political ideals and creating a unique "brand" of AI infused with "European values" – charting a "third way," distinct from both the Chinese state-driven approach and the US's laissez-faire approach to AI governance.Footnote 10

At that time, the Commission resisted calls for the introduction of new laws. In particular, in 2018 the long-awaited General Data Protection Regulation (GDPR) finally took effect,Footnote 11 introducing more stringent legal requirements for collecting and processing personal data. Not only did EU policymakers believe these would guard against AI-generated risks, but it was also politically unacceptable to position this new legal measure as outdated even as it was just starting to bite. By then, the digital tech industry was seizing the initiative, attempting to assuage rising anxieties about AI's adverse impacts by voluntarily promulgating a wide range of "Ethical Codes of Conduct" that they proudly proclaimed they would uphold. This coincided with, and concurrently nurtured, a burgeoning academic interest by humanities and social science scholars in the social implications of AI, often proceeding under the broad rubric of "AI Ethics." Heeding industry's stern warning that legal regulation would stifle innovation and push Europe even further behind, the Commission decided to convene a High-Level Expert Group on AI (AI HLEG) to develop a set of harmonized Ethics Guidelines based on European values that would serve as "best practice" in Europe, for which compliance was entirely voluntary.

12.2.2 The High-Level Expert Group on AI

This 52-member group was duly convened, to much fanfare, selected through open competition and composed of approximately 50% industry representatives, with the remaining 50% drawn from academia and civil society organizations.Footnote 12 Following a public consultation, the group published its Ethics Guidelines for Trustworthy AI in April 2019,Footnote 13 coining "Trustworthy AI" as its overarching objective.Footnote 14 The Guidelines' core consists of seven requirements that AI practitioners should take into account throughout an AI system's lifecycle: (1) human agency and oversight (including the need for a fundamental rights impact assessment); (2) technical robustness and safety (including resilience to attack and security mechanisms, general safety, as well as accuracy, reliability and reproducibility requirements); (3) privacy and data governance (including not only respect for privacy, but also ensuring the quality and integrity of training and testing data); (4) transparency (including traceability, explainability, and clear communication); (5) diversity, nondiscrimination and fairness (including the avoidance of unfair bias, considerations of accessibility and universal design, and stakeholder participation); (6) societal and environmental wellbeing (including sustainability and fostering the "environmental friendliness" of AI systems, and considering their impact on society and democracy); and finally (7) accountability (including auditability, minimization, and reporting of negative impact, trade-offs, and redress mechanisms).Footnote 15

The group was also mandated to deliver Policy Recommendations which were published in June 2019,Footnote 16 oriented toward Member States and EU Institutions.Footnote 17 While attracting considerably less attention than the Ethics Guidelines, the Recommendations called for the adoption of new legal safeguards, recommending “a risk-based approach to AI policy-making,” taking into account “both individual and societal risks,”Footnote 18 to be complemented by “a precautionary principle-based approach” for “AI applications that generate ‘unacceptable’ risks or pose threats of harm that are substantial.”Footnote 19 For the use of AI in the public sector, the group stated that adherence to the Guidelines should be mandatory.Footnote 20 For the private sector, the group asked the Commission to consider introducing obligations to conduct a “trustworthy AI” assessment (including a fundamental rights impact assessment) and stakeholder consultations; to comply with traceability, auditability, and ex-ante oversight requirements; and to ensure effective redress.Footnote 21 These Recommendations reflected a belief that nonbinding “ethics” guidelines were insufficient to ensure respect for fundamental rights, democracy, and the rule of law, and that legal reform was needed. Whether a catalyst or not, we will never know, for a few weeks later, the then President-elect of the Commission, Ursula von der Leyen, announced that she would “put forward legislation for a coordinated European approach on the human and ethical implications of Artificial Intelligence.”Footnote 22

12.2.3 The White Paper on AI

In February 2020, the Commission issued a White Paper on AI,Footnote 23 setting out a blueprint for new legislation to regulate AI "based on European values"Footnote 24 and identifying several legal gaps that needed to be addressed. Although it sought to adopt a risk-based approach to regulating AI, it identified only two categories of AI systems: high-risk and not-high-risk, with only the former being subjected to new obligations inspired by the Guidelines' seven requirements for Trustworthy AI. The AI HLEG's recommendations to protect fundamental rights as well as democracy and the rule of law were largely overlooked, and its suggestion to adopt a precautionary approach in relation to "unacceptable harm" was ignored altogether.

On enforcement, the White Paper remained rather vague. It did, however, suggest that high-risk systems should be subjected to a prior conformity assessment by providers of AI systems, analogous to existing EU conformity assessment procedures for products governed by the New Legislative Framework (discussed later).Footnote 25 In this way, AI systems were to be regulated in a similar fashion to other stand-alone products, including toys, measuring instruments, radio equipment, low-voltage electrical equipment, medical devices, and fertilizers, rather than as technologies embedded within complex and inherently socio-technical systems that may be infrastructural in nature. Accordingly, the basic thrust of the proposal appeared animated primarily by a light-touch market-based orientation aimed at establishing a harmonized and competitive European AI market, in which the protection of fundamental rights, democracy, and the rule of law were secondary concerns.

12.2.4 The Proposal for an AI Act

Despite extensive criticism, this approach formed the foundation of the Commission's subsequent proposal for an AI Act published in April 2021.Footnote 26 Building on the White Paper, it adopted a "horizontal" approach, regulating "AI systems" in general rather than pursuing a sector-specific approach. The risk-categorization of AI systems was more refined (unacceptable risk, high risk, medium risk, and low risk), although criticisms persisted, given that various highly problematic applications were omitted from the lists of "high-risk" and "unacceptable" systems or made subject to unwarranted exceptions.Footnote 27 The conformity (self)assessment scheme was retained, firmly entrenching a product-safety approach to AI regulation, yet failing to confer any rights whatsoever on those subjected to AI systems; it only included obligations imposed on AI providers and (to a lesser extent) deployers.Footnote 28

In December 2022, the Council of the European Union adopted its "general approach" on the Commission's proposal.Footnote 29 It sought to limit the regulation's scope by narrowing the definition of AI and introducing more exceptions (for example, for national security and research); sought stronger EU coordination for the Act's enforcement; and proposed that AI systems listed as "high-risk" would not be automatically subjected to the Act's requirements. Instead, providers could self-assess whether their system is truly high-risk based on a number of criteria – thereby further diluting the already limited protection the proposal afforded. Finally, the Council took into account the popularization of Large Language Models (LLMs) and generative AI applications such as ChatGPT, which at that time were drawing considerable public and political attention, and included modest provisions on General-Purpose AI models (GPAI).Footnote 30

By the time the European Parliament formulated its own negotiating position in June 2023, generative AI was booming, and the Parliament called for more demanding restrictions. Additional requirements for the GPAI models that underpin generative AI were thus introduced, including risk assessments and transparency obligations.Footnote 31 Contrary to the Council, the Parliament sought to widen some of the risk-categories; restore a broader definition of AI; strengthen transparency measures; introduce remedies for those subjected to AI systems; include stakeholder participation; and introduce mandatory fundamental rights impact assessments for high-risk systems. Yet it retained the Council's proposal to allow AI providers to self-assess whether their "high-risk" system could be excluded from that category, and hence from the legal duties that would otherwise apply.Footnote 32 It also sprinkled the Act with references to the "rule of law" and "democracy," yet these were little more than rhetorical flourishes given that it retained the underlying foundations of the original proposal's market-oriented product-safety approach.

12.3 Substantive Features of the AI Act

The adoption of the AI Act in spring 2024 marked the culmination of a series of initiatives that reflected significant policy choices which determined its form, content and contours. We now provide an overview of the Act’s core features, which – for better or for worse – will shape the future of AI systems in Europe.

12.3.1 Scope

The AI Act aims to harmonize Member States' national legislation, to eliminate potential obstacles to trade on the internal AI market, and to protect citizens and society against AI's adverse effects, in that order of priority. Its main legal basis is Article 114 of the Treaty on the Functioning of the European Union (TFEU), which enables the adoption of measures for the establishment and functioning of the internal market. The inherent single-market orientation of this article limits the Act's scope and justification.Footnote 33 For this reason, certain provisions on the use of AI-enabled biometric data processing by law enforcement are also based on Article 16 TFEU, which provides a legal basis to regulate matters related to the right to data protection.Footnote 34 Whether these legal bases are sufficient to regulate AI practices within the public sector or to achieve nonmarket-related aims remains uncertain, and could render the Act vulnerable to (partial) challenges for annulment on competence-related grounds.Footnote 35 In terms of scope, the regulation applies to providers who place on the market or put into service AI systems (or general-purpose AI models) in the EU, regardless of where they are established; to deployers of AI systems that have their place of establishment or location in the EU; and to providers and deployers of AI systems that are established or located outside the EU, where the output produced by their AI system is used in the EU.Footnote 36

The definition of AI for the purpose of the regulation has been a significant battleground,Footnote 37 with every EU institution proposing different definitions, each attracting criticism. Ultimately, the Commission's initial proposal to combine a broad AI definition in the regulation's main text with an amendable Annex that exhaustively enumerates the AI techniques covered by the Act was rejected. Instead, the legislators opted for a definition of AI modelled on that of the OECD, to promote international alignment: "a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments."Footnote 38

AI systems used exclusively for military or defense purposes are excluded from the Act, as are systems used for “nonprofessional” purposes. So too are AI systems “solely” used for research and innovation, which leaves open a substantive gap in protection given the many problematic research projects that can adversely affect individuals yet do not fall within the remit of university ethics committees. The AI Act also foresees that Member States’ competences in national security remain untouched, thus risking very weak protection of individuals in one of the potentially most intrusive areas for which AI might be used.Footnote 39 Finally, the legislators also included certain exemptions for open-source AI models and systems,Footnote 40 and derogations for microenterprises.Footnote 41

12.3.2 A Risk-based Approach

The AI Act adopts what the Commission describes as a “risk-based” approach: AI systems and/or practices are classified into a series of graded “tiers,” with proportionately more demanding legal obligations that vary in accordance with the EU’s perceptions of the severity of the risks they pose.Footnote 42 “Risks” are defined rather narrowly in terms of risks to “health, safety or fundamental rights.” The Act’s final risk categorization consists of five tiers: (1) systems that pose an “unacceptable” risk are prohibited; (2) systems deemed to pose a “high risk” are subjected to requirements akin to those listed in the Ethics Guidelines; (3) GPAI models are subjected to obligations that primarily focus on transparency, intellectual property protection, and the mitigation of “systemic risks”; (4) systems posing a limited risk must meet specified transparency requirements; and (5) systems that are not considered as posing significant risks do not attract new legal requirements.

12.3.2.1 Prohibited Practices

Article 5 of the AI Act prohibits several "AI practices," reflecting a view that they pose an unacceptable risk. These include the use of AI to manipulate human behavior in order to circumvent a person's free willFootnote 43 and to exploit the vulnerability of natural persons in light of their age, disability, or their social or economic situation.Footnote 44 They also include the use of AI systems to make criminal risk assessments and predictions of natural persons without human involvement,Footnote 45 or to evaluate or classify people based on their social behavior or personal characteristics (social scoring), though only if this leads to detrimental or unfavorable treatment that either occurs in social contexts unrelated to the contexts in which the data was originally collected, or that is unjustified or disproportionate.Footnote 46 Also prohibited is the use of emotion recognition in the workplace and educational institutions,Footnote 47 thus permitting the use of such systems in other domains despite their deeply problematic nature.Footnote 48 The untargeted scraping of facial images from the internet or from CCTV footage to create facial recognition databases is likewise prohibited.Footnote 49 Furthermore, biometric categorization is not legally permissible to infer sensitive characteristics, such as political, religious, or philosophical beliefs, sexual orientation or race.Footnote 50

Whether to prohibit the use of real-time remote biometric identification by law enforcement in public places was a lightning-rod for controversy. It was prohibited in the Commission’s original proposal, but subject to three exceptions. The Parliament sought to make the prohibition unconditional, yet the exceptions were reinstated during the trilogue. The AI Act therefore allows law enforcement to use live facial recognition in public places, but only if a number of conditions are met: prior authorization must be obtained from a judicial authority or an independent administrative authority; and it is used either to conduct a targeted search of victims, to prevent a specific and imminent (terrorist) threat, or to localize or identify a person who is convicted or (even merely) suspected of having committed a specified serious crime.Footnote 51 These exceptions have been heavily criticized, despite the Act’s safeguards. In particular, they pave the way for Member States to install and equip public places with facial recognition cameras which can then be configured for the purposes of remote biometric identification if the exceptional circumstances are met, thus expanding the possibility of function creep and the abuse of law enforcement authority.

12.3.2.2 High-Risk Systems

The Act identifies two categories of high-risk AI systems: (1) those that are (safety components of) products that are already subject to an existing ex ante conformity assessment (in light of exhaustively listed EU harmonizing legislation on health and safety in Annex I, for example, for toys, aviation, cars, medical devices or lifts) and (2) stand-alone high-risk AI systems, which are mainly of concern due to their adverse fundamental rights implications and are exhaustively listed in Annex III, which refers to eight domains in which AI systems can be used. These stand-alone high-risk systems are arguably the most important category of systems regulated under the AI Act (since those in Annex I are already regulated by specific legislation), and will hence be our main focus.

Only the AI applications that are explicitly listed under one of those eight domain headings are deemed high-risk (see Table 12.1). While the list of applications under each domain can be updated over time by the European Commission, the domain headings themselves cannot.Footnote 52 The domains include biometrics; critical infrastructure; educational and vocational training; employment, workers management and access to self-employment; access to and enjoyment of essential private services and essential public services and benefits; law enforcement; migration, asylum and border control management; and the administration of justice and democratic processes. Even if their system is listed in Annex III, AI providers can self-assess whether their system truly poses a significant risk of harm to "health, safety or fundamental rights," and only if it does are they subjected to the high-risk requirements.Footnote 53

Table 12.1 High-risk AI systems listed in Annex III

1. Biometric AI systems
  • remote biometric identification systems (excluding biometric verification the sole purpose of which is to confirm that a specific natural person is the person he or she claims to be);

  • biometric categorisation according to sensitive or protected attributes or characteristics based on the inference of those attributes or characteristics;

  • emotion recognition systems.

2. Critical infrastructure – AI systems intended to be used as safety components in the management and operation of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.
3. Education and vocational training – AI systems intended to be used:
  • to determine access or admission or to assign natural persons to educational and vocational training institutions at all levels

  • to evaluate learning outcomes, including when those outcomes are used to steer the learning process of natural persons in educational and vocational training institutions at all levels;

  • for the purpose of assessing the appropriate level of education that an individual will receive or will be able to access, in the context of or within educational and vocational training institutions at all levels;

  • for monitoring and detecting prohibited behaviour of students during tests in the context of or within educational and vocational training institutions at all levels.

4. Employment, workers management and access to self-employment – AI systems intended to be used:
  • for the recruitment or selection of natural persons, in particular to place targeted job advertisements, to analyse and filter job applications, and to evaluate candidates;

  • to make decisions affecting terms of work-related relationships, the promotion or termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics or to monitor and evaluate the performance and behaviour of persons in such relationships.

5. Access to and enjoyment of essential private services and essential public services and benefits – AI systems intended to be used:
  • by public authorities or on behalf of public authorities to evaluate the eligibility of natural persons for essential public assistance benefits and services, including healthcare services, as well as to grant, reduce, revoke, or reclaim such benefits and services;

  • to evaluate the creditworthiness of natural persons or establish their credit score, with the exception of AI systems used for the purpose of detecting financial fraud;

  • for risk assessment and pricing in relation to natural persons in the case of life and health insurance;

  • to evaluate and classify emergency calls by natural persons or to be used to dispatch, or to establish priority in the dispatching of, emergency first response services, including by police, firefighters and medical aid, as well as of emergency healthcare patient triage systems.

6. Law enforcement, in so far as their use is permitted under relevant Union or national law – AI systems intended to be used by or on behalf of law enforcement authorities, or by Union institutions, bodies, offices or agencies in support of law enforcement authorities or on their behalf:
  • to assess the risk of a natural person becoming the victim of criminal offences;

  • as polygraphs or similar tools;

  • to evaluate the reliability of evidence in the course of the investigation or prosecution of criminal offences;

  • for assessing the risk of a natural person offending or re-offending not solely on the basis of the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680, or to assess personality traits and characteristics or past criminal behaviour of natural persons or groups;

  • for the profiling of natural persons as referred to in Article 3(4) of Directive (EU) 2016/680 in the course of the detection, investigation or prosecution of criminal offences.

7. Migration, asylum and border control management, in so far as their use is permitted under relevant Union or national law – AI systems intended to be used by or on behalf of competent public authorities or by Union institutions, bodies, offices or agencies:
  • to assess a risk, including a security risk, a risk of irregular migration, or a health risk, posed by a natural person who intends to enter or who has entered into the territory of a Member State;

  • to assist competent public authorities for the examination of applications for asylum, visa or residence permits and for associated complaints with regard to the eligibility of the natural persons applying for a status, including related assessments of the reliability of evidence;

  • in the context of migration, asylum or border control management, for the purpose of detecting, recognising or identifying natural persons, with the exception of the verification of travel documents.

8. Administration of justice and democratic processes – AI systems intended to be used:
  • by a judicial authority or on their behalf to assist a judicial authority in researching and interpreting facts and the law and in applying the law to a concrete set of facts, or to be used in a similar way in alternative dispute resolution;

  • for influencing the outcome of an election or referendum or the voting behaviour of natural persons in the exercise of their vote in elections or referenda. This does not include AI systems to the output of which natural persons are not directly exposed, such as tools used to organise, optimise or structure political campaigns from an administrative or logistical point of view.

High-risk systems must comply with “essential requirements” set out in Articles 8 to 15 of the AI Act (Chapter III, Section 2). These requirements pertain, inter alia, to:

  • the establishment, implementation, documentation and maintenance of a risk-management system pursuant to Article 9;

  • data quality and data governance measures regarding the datasets used for training, validation, and testing; ensuring the suitability, correctness and representativeness of data; and monitoring for bias pursuant to Article 10;

  • technical documentation and (automated) logging capabilities for record-keeping, to help overcome the inherent opacity of software, pursuant to Articles 11 and 12 (see the illustrative sketch after this list);

  • transparency provisions, focusing on information provided to enable deployers to interpret system output and use it appropriately as instructed through disclosure of, for example, the system’s intended purpose, capabilities, and limitations, pursuant to Article 13;

  • human oversight provisions requiring that the system can be effectively overseen by natural persons (e.g., through appropriate human–machine interface tools) so as to minimize risks, pursuant to Article 14;

  • the need to ensure an appropriate level of accuracy, robustness, and cybersecurity and to ensure that the systems perform consistently in those respects throughout their lifecycle, pursuant to Article 15.
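
By way of illustration of the kind of automated record-keeping that Article 12 contemplates, the sketch below appends timestamped, structured records of a system's events to an append-only log file. It is a minimal example using only Python's standard library; the AI Act does not prescribe any particular format, and the event types and field names shown here are invented.

import json
import time
import uuid

def log_event(logfile: str, event_type: str, payload: dict) -> None:
    """Append one timestamped, structured record to an append-only log file."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "event_type": event_type,
        "payload": payload,
    }
    with open(logfile, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

# Hypothetical usage: record each prediction made by a high-risk system.
log_event(
    "ai_system_events.jsonl",
    "inference",
    {"model_version": "v2.3", "input_reference": "application-1234", "output_score": 0.87},
)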

Finally, Articles 16 and 17 require that high-risk AI providersFootnote 54 establish a "quality management system" that must include, among other things, the aforementioned risk management system imposed by Article 9 and a strategy for regulatory compliance, including compliance with conformity assessment procedures and procedures for the management of modifications to the high-risk AI system. These two systems – the risk management system and the quality management system – can be understood as the AI Act's pièce de résistance. While providers have the more general obligation to demonstrably ensure compliance with the "essential requirements," most of these requirements are concerned with technical functionality, and are expected to offer assurance that AI systems will function as stated and intended, that the software's functional performance will be reliable, consistent, "without bias," and in accordance with what providers claim about system design and performance metrics. To the extent that consistent software performance is a prerequisite for facilitating its "safe" and "rights-compliant" use, these are welcome requirements. They are, however, not primarily concerned, in a direct and unmediated manner, with guarding against the dangers ("risks") that the AI Act specifically states it is intended to protect against, notably potential dangers to health, safety and fundamental rights.

This is where the AI Act’s characterization of the relevant “risks,” which the Article 9 risk management system must identify, estimate and evaluate, is of importance. Article 9(2) refers to “the known and reasonably foreseeable risks that the high-risk AI system can pose to health, safety or fundamental rights” when used in accordance with its intended purpose and an estimate and evaluation of risks that may emerge under conditions of “reasonably foreseeable misuse.”Footnote 55 Risk management measures must be implemented such that any “residual risk associated with each hazard” and the “relevant residual risk of the high-risk AI system” is judged “acceptable.”Footnote 56 High-risk AI systems must be tested prior to being placed on the market to identify the “most appropriate” risk management measures and to ensure the systems “perform consistently for their intended purposes,” in compliance with the requirements of Section 2 and in accordance with “appropriate” preliminarily defined metrics and probabilistic thresholds – all of which are to be further specified.

While, generally speaking, the imposition of new obligations is a positive development, their likely effectiveness is a matter of substantial concern. We wonder, for instance, whether it is at all acceptable to delegate the identification of risks and their evaluation as "acceptable" to AI providers, particularly given that their assessments might differ very significantly from those of the relevant risk-bearers, who are most likely to suffer adverse consequences if those risks ripen into harm or rights-violations. Furthermore, Article 9(3) is ambiguous, purporting to limit the risks that must be considered as part of the risk management system to "those which may be reasonably mitigated or eliminated through the development or design of the high-risk AI system, or the provision of adequate technical information."Footnote 57 As observed elsewhere, this could be interpreted to mean that risks that cannot be mitigated through the high-risk system's development and design or by the provision of information can be ignored altogether,Footnote 58 although the underlying legislative intent, as stated in Article 2, suggests an alternative reading such that if those "unmitigatable risks" are unacceptable, the AI system cannot be lawfully placed on the market or put into service.Footnote 59

Although the list-based approach to the classification of high-risk systems was intended to provide legal certainty, critics pointed out that it is inherently prone to problems of under- and over-inclusiveness.Footnote 60 As a result, problematic AI systems that are not included in the list are bound to appear on the market, and might not be added to the Commission's future list-updates. In addition, allowing AI providers to self-assess whether their system actually poses a significant risk or not undermines the legal certainty allegedly offered by the Act's list-based approach.Footnote 61 Furthermore, under pressure from the European Parliament, high-risk AI deployers that are bodies governed by public law, or are private entities providing public services, must also carry out a "fundamental rights impact assessment" before the system is put into use.Footnote 62 However, the fact that an "automated tool" will be provided to facilitate compliance with this obligation "in a simplified manner" suggests that the regulation of these risks is likely to descend into a formalistic box-ticking exercise in which formal documentation takes precedence over its substantive content and real-world effects.Footnote 63 While some companies might adopt a more prudent approach, the effectiveness of the AI Act's protection mechanisms will ultimately depend on how its oversight and enforcement mechanisms will operate on the ground, which we believe, for reasons set out below, are unlikely to provide a muscular response.

12.3.2.3 General-Purpose AI Models

The AI Act defines a general-purpose AI (GPAI) model as one that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market, and can be integrated into a variety of downstream systems or applications (GPAI systems).Footnote 64 The prime examples of GPAI models are Large Language Models (LLMs) that converse in natural language and generate text (and which, for instance, form the basis of OpenAI's ChatGPT or Google's Bard), yet there are also models that can generate images, videos, music or some combination thereof.

The primary obligations of GPAI model-providers are to draw up and maintain technical documentation, comply with EU copyright law and disseminate “sufficiently detailed” summaries about the content used for training models before they are placed on the market.Footnote 65 These minimum standards apply to all models, yet GPAI models that are classified as posing a “systemic risk” due to their “high impact capabilities” are subject to additional obligations. Those include duties to conduct model evaluations, adversarial testing, assess and mitigate systemic risks, report on serious incidents, and ensure an adequate level of cybersecurity.Footnote 66 Note, however, that providers of (systemic risk) GPAI models can conduct their own audits and evaluations, rather than rely on external independent third party audits. Nor is any public licensing scheme required.

More problematically, while the criteria to qualify GPAI models as posing a "systemic risk" are meant to capture their "significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain,"Footnote 67 the legislator opted to express these criteria in terms of a threshold pertaining to the cumulative amount of computation used to train the model. Models trained using more than 10²⁵ floating-point operations reach this threshold and are presumed to qualify as posing a systemic risk.Footnote 68 This threshold, though amendable, is rather arbitrary, as many existing models do not cross that threshold but are nevertheless capable of posing systemic risks. More generally, limiting "systemic risks" to those arising from GPAI models is difficult to justify, given that even traditional rule-based AI systems with far more limited capabilities can pose systemic risks.Footnote 69 Moreover, as Hacker has observed,Footnote 70 the industry is moving toward smaller yet more potent models, which means many more influential GPAI models may fall outside the Act, shifting the regulatory burden "to the downstream deployers."Footnote 71 Although these provisions can, in theory, be updated over time, their effectiveness and durability are open to doubt.Footnote 72
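
To make the order of magnitude of this threshold tangible, the sketch below uses the commonly cited rule of thumb that training a dense transformer model consumes roughly six floating-point operations per parameter per training token. This heuristic, and the model configurations used, are illustrative assumptions on our part; they are not part of the AI Act, which simply refers to the cumulative training compute.

THRESHOLD_FLOPS = 1e25  # presumption threshold for "systemic risk" GPAI models

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough estimate of total training compute using the ~6 * N * D heuristic."""
    return 6 * parameters * training_tokens

# Hypothetical model configurations, for illustration only.
examples = [
    ("70 billion parameters, 15 trillion tokens", 70e9, 15e12),
    ("180 billion parameters, 15 trillion tokens", 180e9, 15e12),
]

for label, params, tokens in examples:
    flops = estimated_training_flops(params, tokens)
    presumed_systemic = flops > THRESHOLD_FLOPS
    print(f"{label}: ~{flops:.2e} FLOPs -> presumed systemic risk: {presumed_systemic}")

On this rough estimate, the first configuration stays below the threshold (about 6.3 × 10²⁴ FLOPs) while the second exceeds it (about 1.6 × 10²⁵ FLOPs), illustrating how the presumption turns on training scale rather than on any assessment of a model's actual societal impact.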

12.3.2.4 Systems Requiring Additional Transparency

For a subset of AI applications, the EU legislator acknowledged that specific risks can arise, such as impersonation or deception, which are distinct from those addressed by the high-risk category. Pursuant to Article 50 of the AI Act, these applications are subjected to additional transparency obligations, yet they might also fall within the high-risk designation. Four types of AI systems fall into this category. The first are systems intended to interact with natural persons, such as chatbots. To avoid people mistakenly believing they are interacting with a fellow human being, these systems must be developed in such a way that the natural person who is exposed to the system is informed thereof, in a timely, clear and intelligible manner (unless this is obvious from the circumstances and context of the use). An exception is made for AI systems authorized by law to detect, prevent, investigate, and prosecute criminal offences.

A similar obligation to provide transparency exists when people are subjected either to an emotion recognition system or a biometric categorization system (to the extent it is not prohibited by Article 5 of the AI Act). Deployers must inform people subjected to those systems of the system’s operation and must, pursuant to data protection law, obtain their consent prior to the processing of their biometric and other personal data. Again, an exception is made for emotion recognition systems and biometric categorization systems that are permitted by law to detect, prevent, and investigate criminal offences.

Finally, providers of AI systems that generate synthetic audio, image, video or text content must ensure that the system’s outputs are marked in a machine-readable format and are detectable as artificially generated or manipulated.Footnote 73 Deployers of such systems should disclose that the content has been artificially generated or manipulated.Footnote 74 This provision was already present in the Commission’s initial AI Act proposal, but it became far more relevant with the boom of generative AI, which “democratized” the creation of deep fakes, enabling them to be easily created by those without specialist skills. As regards AI systems that generate or manipulate text that is published with “the purpose of informing the public on matters of public interest,” deployers must disclose that the text was artificially generated or manipulated, unless the AI-generated content underwent a process of human review or editorial control with editorial responsibility for its publication.Footnote 75 Here, too, exceptions exist. In each case, the disclosure measures must take into account the generally acknowledged state of the art, whereby the AI Act also refers to relevant harmonized standards,Footnote 76 to which we will return later.
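By way of illustration only, a “machine-readable” marking could be as simple as a provenance flag embedded in an image’s metadata. The sketch below assumes the Pillow imaging library and uses hypothetical field names of our own invention; it is not a marking scheme endorsed by the AI Act or by any harmonized standard, and in practice providers are expected to rely on more robust techniques such as watermarking or cryptographic provenance metadata.

```python
# Minimal sketch of a machine-readable "AI-generated" marker embedded in PNG
# metadata. The field names "ai_generated" and "generator" are hypothetical;
# real deployments would rely on standardized provenance or watermarking schemes.
from PIL import Image
from PIL.PngImagePlugin import PngInfo

image = Image.new("RGB", (256, 256), color="grey")  # stand-in for a generated output

metadata = PngInfo()
metadata.add_text("ai_generated", "true")
metadata.add_text("generator", "example-model")  # hypothetical identifier

image.save("synthetic_output.png", pnginfo=metadata)

# A downstream deployer (or a regulator) could then read the marker back:
with Image.open("synthetic_output.png") as img:
    print(img.text.get("ai_generated"))  # -> "true"
```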

12.3.2.5 Non-High-Risk Systems

All other AI systems that do not fall under one of the aforementioned risk categories are effectively branded as “no risk” and do not attract new legal obligations. To the extent they fall under existing legal frameworks – for instance, when they process personal data – they must still comply with those frameworks. In addition, the AI Act provides that the European Commission, Member States and the AI Office (a supervisory entity that we discuss in the next section) should encourage and facilitate the drawing up of codes of conduct that are intended to foster the voluntary application of the high-risk requirements to those no-risk AI systems.Footnote 77

12.3.3 Supporting Innovation

The White Paper on AI focused not only on the adoption of rules to limit AI-related risks, but also on a range of measures and policies to boost AI innovation in the EU. Clearly, the AI Act is a tool aimed primarily at achieving the former, but the EU still found it important to also emphasize its “pro-innovation” stance. Chapter VI of the AI Act therefore lists “measures in support of innovation,” which fits into the EU’s broader policy narrative that regulation can facilitate innovation, and even provide a “competitive advantage” in the AI “race.”Footnote 78 These measures mainly concernFootnote 79 the introduction of AI regulatory sandboxes, which are intended to offer a safe and controlled environment for AI providers to develop, test, and validate AI systems, including the facilitation of “real-world testing.” National authorities must oversee these sandboxes and help ensure that appropriate safeguards are in place, and that experimentation occurs in compliance with the law. The AI Act mandates each Member State to establish at least one regulatory sandbox, which can also be established jointly with other Member States.Footnote 80 To avoid fragmentation, the AI Act further provides for the development of common rules for the sandboxes’ implementation and a framework for cooperation between the relevant authorities that supervise them, to ensure their uniform implementation across the EU.Footnote 81

Sandboxes must be made accessible especially to Small and Medium-sized Enterprises (SMEs), thereby ensuring that they receive additional support and guidance to achieve regulatory compliance while retaining the ability to innovate. In fact, the AI Act explicitly recognizes the need to take into account the interests of “small-scale providers” and deployers of AI systems, particularly as regards costs.Footnote 82 National authorities that oversee sandboxes are hence given various tasks, including raising awareness of the regulation, promoting AI literacy, offering information and communication services to SMEs, start-ups, and deployers, and helping them identify methods that lower their compliance costs. Collectively, these measures aim to offset the fact that smaller companies will likely face heavier compliance and implementation burdens, especially compared to large tech companies that can afford an army of lawyers and consultants to implement the AI Act. It is also hoped that the sandboxes will help national authorities to improve their supervisory methods, develop better guidance, and identify possible future improvements of the legal framework.

12.4 Monitoring and Enforcement

Our discussion has hitherto focused on the substantive dimensions of the Act. However, whether these provide effective protection of health, safety and fundamental rights will depend critically on the strength and operation of its monitoring and enforcement architecture, to which we now turn. We have already noted that the proposed regulatory enforcement framework underpinning the Commission’s April 2021 blueprint was significantly flawed, yet these flaws remain unaltered in the final Act. As we shall see, the AI Act allocates considerable interpretative discretion to the industry itself, through a model which has been described by regulatory theorists as “meta-regulation.” We also discuss the Act’s approach to technical standards and the institutional framework for evaluating whether high-risk AI systems are in compliance with the Act, to argue that the regime as a whole fails to offer adequate protection against the adverse effects that it purports to counter.

12.4.1 Legal Rules and Interpretative Discretion

Many of the AI Act’s core provisions are written in broad, open-ended language, leaving the meaning of key terms uncertain and unresolved. It is here that the rubber will hit the road, for it is through the interpretation and application of the Act’s operative provisions that it will be given meaning and be translated into on-the-ground practice.

For example, when seeking to apply the essential requirements applicable to high-risk systems, three terms used in Chapter III, Section 2 play a crucial role. First, the concept of “risk.” Article 3 defines risk as “the combination of the probability of an occurrence of harm and the severity of that harm,” reflecting conventional statistical risk assessment terminology. Although risk to health and safety is a relatively familiar and established concept in legal parlance and regulatory regimes, the Annex III high-risk systems are more likely to interfere with fundamental rights and may adversely affect democracy and the rule of law. But what, precisely, is meant by “risk to fundamental rights,” and how should those risks be identified, evaluated and assessed? Second, even assuming that fundamental rights-related risks can be meaningfully assessed, how is a software firm to adequately evaluate what constitutes a level of residual risk judged “acceptable”? And third, what constitutes a “risk management system” that meets the requirements of Article 9?
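For readers unfamiliar with the risk-assessment vocabulary the Act borrows, the toy sketch below shows what a conventional probability-times-severity scoring exercise looks like. The scales, the example hazards and the “acceptability” cut-off are entirely hypothetical and have no basis in the Act or in any standard; indeed, the fact that such numbers must simply be invented is part of what makes the translation of fundamental-rights interferences into this kind of metric so questionable.

```python
# Toy illustration of conventional risk scoring ("probability of harm x
# severity of harm"). All scales, hazards and the acceptability threshold
# are hypothetical and carry no basis in the AI Act or any harmonized standard.

def risk_score(probability: float, severity: int) -> float:
    """Probability in [0, 1]; severity on an invented 1-5 ordinal scale."""
    return probability * severity


ACCEPTABLE_RESIDUAL_RISK = 1.0  # invented cut-off

hazards = {
    "sensor failure causing unsafe output": risk_score(0.05, 5),
    "interference with non-discrimination right": risk_score(0.30, 4),  # how would one even measure this?
}

for hazard, score in hazards.items():
    verdict = "acceptable" if score <= ACCEPTABLE_RESIDUAL_RISK else "mitigation required"
    print(f"{hazard}: score {score:.2f} -> {verdict}")
```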

The problem of interpretative discretion is not unique to the AI Act. All rules which take linguistic form, whether legally mandated or otherwise, must be interpreted before they can be applied to specific real-world circumstances. Yet how this discretion is exercised, and by whom, will be a product of the larger regulatory architecture in which those rules are embedded. The GDPR, for instance, contains a number of broadly defined “principles” which those who collect and process personal data must comply with. Both the European Data Protection Board (EDPB) and national-level data protection authorities – as public regulators – issue “guidance” documents setting out their interpretation of what the law requires. Following this guidance (often called “soft law”) does not guarantee compliance with the law – for it does not bind courts when interpreting the law – but it nevertheless offers valuable and reasonably authoritative assistance to those seeking to comply with their legal obligations. This kind of guidance is open, published, transparent, and conventionally issued in draft form beforehand so that stakeholders and the public can provide feedback before it is issued in final form.Footnote 83

In the AI Act, similar interpretative decisions will need to be made and, in theory, the Commission has a mandate to issue guidelines on the AI Act’s practical implementation.Footnote 84 However, in contrast with the GDPR, the Act’s adoption of the “New Approach” to product safety means that, in practice, providers of high-risk AI systems will likely adhere to technical standards produced by European Standardization Organizations at the request of the Commission, which are expected to acquire the status of “harmonized standards” by publication of their titles in the EU’s Official Journal.Footnote 85 As we explain below, the processes through which these standards are developed are difficult to characterize as democratic, transparent or based on open public participation.

12.4.2 The AI Act as a Form of “Meta-Regulation”

At first glance, the AI Act appears to adopt a public enforcement framework with both national and European public authorities playing a significant role. Each EU Member State must designate a national supervisory authorityFootnote 86 to act as “market surveillance authority.”Footnote 87 These authorities can investigate suspected incidents and infringements of the AI Act’s requirements, and initiate recalls or withdrawals of AI systems from the market for non-compliance.Footnote 88 National authorities exchange best practices through a European AI Board comprised of Member States’ representatives. The European Commission has also set up an AI Office to coordinate enforcement at the EU level.Footnote 89 Its main task is to monitor and enforce the requirements relating to GPAI models,Footnote 90 yet it also undertakes several other roles, including (a) guiding the evaluation and review of the AI Act over time,Footnote 91 (b) offering coordination support for joint investigations between the Commission and Member States when a high-risk system presents a serious risk across multiple Member States,Footnote 92 and (c) facilitating the drawing up of voluntary codes of conduct for systems that are not classified as high-risk.Footnote 93

The AI Office will be advised by a scientific panel of independent experts to help it develop methodologies to evaluate the capabilities of GPAI models, to designate GPAI models as posing a systemic risk, and to monitor material safety risks that such models pose. An advisory forum of stakeholders (to counter earlier criticism that stakeholders were allocated no role whatsoever in the regulation) is also established under the Act, to provide both the Board and the Commission with technical expertise and advice. Finally, the Commission is tasked with establishing a public EU-wide database where providers (and a limited set of deployers) of stand-alone high-risk AI systems must register their systems to enhance transparency.Footnote 94

In practice, however, these public authorities are twice-removed from where much of the real-world compliance activity and evaluation takes place. The AI Act’s regulatory enforcement framework delegates many crucial functions (and thus considerable discretionary power) to the very actors whom the regime purports to regulate, and to other tech industry experts. The entire architecture of the AI Act is based on what regulatory governance scholars sometimes refer to as “meta-regulation” or “enforced self-regulation.”Footnote 95 This is a regulatory technique in which legally binding obligations are imposed on regulated organizations, requiring them to establish and maintain internal control systems that meet broadly specified, outcome-based, binding legal objectives.

Meta-regulatory strategies rest on the basic idea that one size does not fit all, and that firms themselves are best placed to understand their own operations and systems and take the necessary action to avoid risks and dangers. The primary safeguards through which the AI Act is intended to work rely on the quality and risk management systems within the regulated organizations, in which these organizations retain considerable discretion to establish and maintain their own internal standards of control, provided that the Act’s legally mandated objectives are met. The supervisory authorities oversee adherence to those internal standards, but they only play a secondary and reactive role, which is triggered if there are grounds to suspect that regulated organizations are failing to discharge their legal obligations. While natural and legal persons have the right to lodge a complaint when they have grounds to consider that the AI Act was infringed,Footnote 96 supervisory authorities do not have any proactive role to ensure the requirements are met before high-risk AI systems are placed on the market or deployed.

This compliance architecture flows from the underlying foundations of the Act, which are rooted in the EU’s “New Legislative Framework,” adopted in 2008. Its aim was to improve the internal market for goods and strengthen the conditions for placing a wide range of products on the EU market.Footnote 97

The AI Act largely leaves it to Annex III high-risk AI providers and deployers to self-assess their conformity with the AI Act’s requirements (including, as discussed earlier, the judgment of what is deemed an “acceptable” residual risk). There is no routine or regular inspection and approval or licensing by a public authority. Instead, if they declare that they have self-assessed their AI system as compliant and duly lodge a declaration of conformity, providers can put their AI systems into service without any independent party verifying whether their assessment is indeed adequate (except for certain biometric systems).Footnote 98 Providers are, however, required to put in place a post-market monitoring system, which is intended to ensure that the possible risks emerging from AI systems that continue to “learn” or evolve once placed on the market or put into service can be better identified and addressed.Footnote 99 The role of public regulators is therefore largely one of ex post oversight, unlike the European regulation of pharmaceuticals, making the regulatory regime permissive rather than precautionary. This embodies the basic regulatory philosophy underpinning the New Legislative Framework, which builds on the “New Approach” to technical standardization. Together, these are concerned first and foremost with strengthening single market integration, and hence with ensuring a single EU market for AI.

12.4.3 The New Approach to Technical Standardization

Under the EU’s “Old Approach” to product safety standards, national authorities drew up detailed technical legislation, which was often unwieldy and usually motivated by a lack of confidence in the rigour of economic operators on issues of public health and safety. However, the “New Approach” framework introduced in 1985 sought instead to restrict the content of legislation to “essential requirements,” leaving technical details to European Harmonized Standards,Footnote 100 thereby laying the foundation for technical standards produced by European Standardization Organizations (ESOs) in support of Union harmonization legislation.Footnote 101

The animating purpose of the “New Approach” to standardization was to open up European markets in industrial products without threatening the safety of European consumers, by allowing those products to circulate across European markets if and only if they meet the “essential [safety] requirements” set out in sector-specific European legislation, as operationalized in technical standards developed by one of the three ESOs: the European Committee for Standardization (CEN), the European Committee for Electrotechnical Standardization (CENELEC) and the European Telecommunications Standards Institute (ETSI).Footnote 102

Under this approach, producers can choose either to interpret the relevant EU Directive themselves or to rely on “harmonized (European) standards” drawn up by one of the ESOs. This meta-regulatory approach combines compulsory regulation (under EU secondary legislation) and “voluntary” standards, made by ESOs. Central to this approach is that conformity of products with the “essential safety requirements” is checked and certified by producers themselves, who make a declaration of conformity and affix the CE mark to their products to indicate this, thereby allowing the product to be marketed and sold across the whole of the EU. However, for some “sensitive products,” conformity assessments must be carried out by an independent third-party “notified body,” which verifies and certifies conformity. This approach was taken by the Commission in its initial AI Act proposal, and neither the Parliament nor the Council has sought to depart from it. By virtue of its reliance on the “New Approach,” the AI Act places tremendous power in the hands of private, technical bodies who are entrusted with the task of setting technical standards intended to operationalize the “essential requirements” stipulated in the AI Act.Footnote 103

In particular, providers of Annex III high-risk AI systems that fall under the AI Act’s requirements have three options. First, they can self-assess the compliance of their AI systems with the essential requirements (which the AI Act refers to as the conformity assessment procedure based on internal control, set out in Annex VI). Under this option, whenever the requirements are vague, organizations need to use their own judgment and discretion to interpret and apply them, which – given considerable uncertainty about what they require in practice – exposes them to potential legal risks (including substantial penalties) if they fail to meet the requirements.

Second, organizations can commission a “notified body”Footnote 104 to undertake the conformity assessment. These bodies are independent yet nevertheless “private” organizations that verify the conformity of AI systems based on an assessment of the quality management system and the technical documentation (a procedure set out in Annex VII). AI providers pay for these certification services, with a flourishing “market for certification” emerging in response. To carry out the tasks of a notified body, an organization must meet the requirements of Article 31 of the AI Act, which are mainly concerned with ensuring that it possesses the necessary competences and a high degree of professional integrity, and that it is independent from, and impartial toward, the organizations it assesses so as to avoid conflicts of interest. Pursuant to the AI Act, only providers of biometric identification systems must currently undergo an assessment by a notified body. All others can choose the first option (though in the future, other sensitive systems may also be obliged to obtain approval via third-party conformity assessment).

Third, AI providers can choose to follow voluntary standards currently under development by CEN/CENELEC following acceptance of the Commission’s standardization request, which are intended, once drafted, to become “harmonized standards” upon citation in the Official Journal of the European Union. This would mean that AI providers and deployers could choose to follow these harmonized standards and thereby benefit from a legal presumption of conformity with the AI Act’s requirements. Although the presumption of compliance is rebuttable, it places the burden of proving non-compliance on those claiming that the AI Act’s requirements were not met, thus considerably reducing the risk that the AI provider will be found to be in breach of the Act’s essential requirements. If no harmonized standards are forthcoming, the Commission can adopt “common specifications” in respect of the requirements for high-risk systems and GPAI models, which will likewise confer a presumption of conformity.Footnote 105

Thus, although harmonized standards produced by ESOs are formally voluntary, providers are strongly incentivized to follow them (or, in their absence, to follow the common specifications) rather than carry the burden of demonstrating that their own specifications meet the law’s essential requirements. This means that harmonized standards are likely to become binding de facto, and will therefore in practice determine the nature and level of protection provided under the AI Act. The overwhelming majority of providers of Annex III high-risk systems can self-assess their own internal controls, sign and lodge a conformity assessment declaration, affix a CE mark to their software, and then register their system in the Commission’s public database.

12.4.4 Why Technical Standardization Falls Short in the AI Act’s Context

Importantly, however, several studies have found that products that have been self-certified by producers are considerably more likely to fail to meet the certified standard. For example, Larson and JordanFootnote 106 compared toy safety recalls under the US regime, which requires independent third-party verification, with those under the EU’s self-certification regime, which relies on self-assessment, and found stark differences. Over a two-year period, toy safety recalls in the EU were 9 to 20 times more frequent than those in the US. Their findings align with earlier policy studies which found that self-assessment models consistently produce substantially higher rates of worker injury compared with those involving independent third-party evaluation. Based on these studies, Larson and Jordan conclude that transnational product safety regulatory systems that rely on the self-assessment of conformity with safety standards fail to keep non-compliant products off the market.

What is more, even third-party certification under the EU’s New Approach has shown itself to be weak and ineffective, as evidenced by the failure of the EU’s Medical Device regime that prevailed before its more recent reform. This was vividly illustrated by the PIP breast implants scandal, in which approximately 40,000 women in France, and possibly ten times more in Europe and worldwide, were implanted with breast implants filled with industrial-grade silicone, rather than the medical-grade silicone that is compulsory under EU law.Footnote 107 This occurred despite the fact that the implants had been certified as “CE compliant” by a reputable German notified body, which was possible because, under the relevant directive,Footnote 108 breast implant producers could choose between different methods of inspection. PIP had chosen the “full quality assurance system,” whereby the certifiers’ job was to audit PIP’s quality management system without having to inspect the breast implants themselves. In short, the New Approach has succeeded in fostering flourishing markets for certification services – but evidence suggests that it cannot be relied on systematically to deliver trustworthy products and services that protect individuals from harm to their health and safety.

Particularly troubling is the New Approach’s reliance on testing the quality of internal document-keeping and management systems, rather than an inspection and evaluation of the service or product itself.Footnote 109 As critical accounting scholar Mike Power has observed, the process of “rendering auditable” through measurable procedures and performance is a test of “the quality of internal systems rather than the quality of the product or service itself specified in standards.”Footnote 110 As Hopkins emphasizes in his analysis of the core features that a robust “safety case” approach must meet, “without scrutiny by an independent regulator, a safety case may not be worth the paper it is written on.”Footnote 111 The AI Act, however, does not impose any external auditing requirements. For Annex III high-risk AI systems, the compliance evaluation remains primarily limited to verification that the requisite documentation is in place. Accordingly, we are skeptical of the effectiveness of the CE marking regime for delivering meaningful and effective protections for those affected by rights-critical products and services regulated under the Act.Footnote 112

What, then, are the prospects that the technical standards which the Commission has tasked CEN/CENELEC to produce will translate into practice the Act’s noble aspirations to protect fundamental rights, health and safety, and uphold the rule of law? We believe there are several reasons to worry. Technical standardization processes may appear “neutral” as they focus on mundane technical tasks, conducted in a highly specialized vernacular, yet these activities are in fact highly political. As Lawrence Busch puts it: “Standards are intimately associated with power.”Footnote 113 Moreover, these standards will not be publicly available. Rather, they are protected by copyright and thus only available upon payment.Footnote 114 If an AI provider self-certifies its compliance with an ESO-produced harmonized standard, that will constitute “deemed compliance” with the Act. But if, in fact, that provider has made no attempt to comply with the standard, no one will be any the wiser unless and until action is taken by a market surveillance authority to evaluate that AI system for compliance, which it cannot do unless it has “sufficient reasons to consider an AI system to present a risk.”Footnote 115

In addition, technical standardization bodies have conventionally been dominated by private sector actors who have both the capacity to develop particular technologies and the market share to advocate for the standardization of those technologies in line with their own products and organizational processes. Standards committees tend to be stacked with people from large corporations with vested interests and extensive resources. As Joanna Bryson has pithily put it, “even when technical standards for software are useful they are ripe for regulatory capture.”Footnote 116 Nor are they subject to the democratic mechanisms of public oversight and accountability that apply to conventional law-making bodies. Neither the Parliament nor the Member States have a binding veto over harmonized standards, and even the Commission has only limited power to influence their content, namely at the point of determining whether the standard produced in response to its request meets the essential requirements set out in the Act; otherwise, the standard is essentially immune from judicial review.Footnote 117

Criticism of the lack of democratic legitimacy of these organizations has led to moves to open up their standard-setting processes to “multi-stakeholder” dialogue, with civil society organizations seeking to get more involved.Footnote 118 In practice, however, these moves are deeply inadequate, as civil society organizations struggle to obtain technical parity with their better-resourced counterparts from the business and technology communities. Stakeholder organizations also face various de facto obstacles to using the CEN/CENELEC participatory mechanisms effectively. Most NGOs have no experience in standardization and many lack EU-level representation. Moreover, active participation is costly and highly time-consuming.Footnote 119

Equally, if not more, worrying is the fact that these “technical” standard-setting bodies are populated by experts primarily from engineering and computer science, who typically have little knowledge or expertise in matters related to fundamental rights, democracy, and the rule of law. Nor are they likely to be familiar with the analytical reasoning, well established in human rights jurisprudence, used to determine what constitutes an interference with a fundamental right and whether it may be justified as necessary in a democratic society.Footnote 120 Unless a significant cadre of human rights lawyers assists them, we are deeply skeptical of the competence and ability of ESOs to translate the notion of “risks to fundamental rights” into tractable technical standards that can be relied upon to facilitate the protection of fundamental rights.Footnote 121

Furthermore, unlike risks to safety generated by chemicals, machinery, or industrial waste, all of which can be materially observed and measured, fundamental rights are, in effect, political constructs. These rights are accorded special legal protection, such that an evaluation of alleged interference requires close attention to the nature and scope of the relevant right and the specific, localized context in which a particular right is allegedly infringed. We therefore seriously doubt whether fundamental rights can ever be translated into generalized technical standards that can be precisely measured in quantitative terms, and in a manner that faithfully reflects what they are and how they have been interpreted under the EU Charter of Fundamental Rights and the European Convention on Human Rights.

Moreover, the CENELEC rules state that any harmonized standard must contain “objectively verifiable requirements and test methods,”Footnote 122 which does not alleviate our difficulties in trying to conceive of how “risks to fundamental rights” can be subject to quantitative “metrics” and translated into technical standards such that the “residual risk” can be assessed as “acceptable.” Taken together, this leaves us rather pessimistic about the capacity and prospects of ESOs (even assuming a well-intentioned technical committee) to produce technical standards that will, if duly followed, provide the high level of protection of European values that the Act claims to aspire to, and which will constitute “deemed compliance” with the regulation. And if, as expected, providers of high-risk AI systems choose to be guided by the technical standards produced by ESOs, this means that the “real” standard-setting for high-risk systems will take place within those organizations, with little public scrutiny or independent evaluation.

12.5 Conclusion

In this chapter, we have recounted the European Union’s path toward a new legal framework to regulate AI systems, beginning in 2018 with the European AI strategy and the establishment of a High-Level Expert Group on AI, and culminating in the AI Act of 2024. Since most of the AI Act’s provisions will only apply two years after its entry into force,Footnote 123 we will not be in a position to acquire evidence of its effectiveness until the end of 2026. By then, both those regulated by the Act and the supervisory actors at national and EU level will need to have ramped up their compliance, oversight and monitoring capabilities. However, by that time, new AI applications may have found their way to the EU market which – due to the AI Act’s list-based approach – will not fall within the Act, or which the Act may fail to guard against. In addition, since the AI Act aspires to maximum market harmonization for AI systems across Member States, any gaps are in principle not addressable through national legislation.

We believe that Europe can rightfully be proud of its acknowledgement that the development and use of AI systems require mandatory legal obligations, given the individual, collective and societal harms they can engender,Footnote 124 and we applaud its aspiration to offer a protective legal framework. What remains to be seen is whether the AI Act will in practice deliver on its laudable objectives, or whether it provides a veneer of legal protection without delivering meaningful safeguards. This depends, crucially, on how its noble aspirations are operationalized on the ground, particularly through the institutional mechanisms and concepts through which the Act is intended to work.

Based on our analysis, it is difficult to conclude that the AI Act offers much more than “motherhood and apple pie.” In other words, although it purports to champion noble principles that command widespread consensus, notably “European values” including the protection of democracy, fundamental rights, and the rule of law, whether it succeeds in giving concrete expression to those principles in its implementation and operation remains to be seen. In our view, given the regulatory approach and enforcement architecture through which it is intended to operate, these principles are likely to remain primarily aspirational.

What we do expect to see, however, is the emergence of flourishing new markets for service providers across Europe offering various “solutions” intended to satisfy the Act’s requirements (including the need for high-risk AI system providers and deployers to establish and maintain a suitable “risk management system” and “quality management system” that purport to comply with the technical standards developed by CEN/CENELEC). Accordingly, we believe it is likely that existing legal frameworks – such as the General Data Protection Regulation, the EU Charter of Fundamental Rights, and the European Convention on Human Rights – will prove ever more important and instrumental in seeking to address the erosion of, and interference with, foundational European values as ever more tasks are delegated to AI systems.

Footnotes

7 AI Meets the GDPR Navigating the Impact of Data Protection on AI Systems

1 Regulation 2016/679 of the European Parliament and of the Council of April 27, 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation) [2016] OJ L119/1 ELI: http://data.europa.eu/eli/reg/2016/679/oj.

2 This is recalled in Recital 15 GDPR: “In order to prevent creating a serious risk of circumvention, the protection of natural persons should be technologically neutral and should not depend on the techniques used.”

3 This is most commonly referred to as the “pacing problem” of the law. See Roger Brownsword, Rights, Regulation, and the Technological Revolution (Oxford University Press, 2008); Larry Downes, The Laws of Disruption: Harnessing the New Forces That Govern Life and Business in the Digital Age (Basic Books, 2009); Gary E Marchant, “The growing gap between emerging technologies and the law” in Gary E Marchant, Braden R Allenby, and Joseph R Herkert (eds), The Growing Gap Between Emerging Technologies and Legal-Ethical Oversight, vol. 7 (Springer Netherlands, 2011) 20–22, http://link.springer.com/10.1007/978-94-007-1356-7_2, accessed December 4, 2019.

4 For instance, in the context of predictive policing, where algorithms are used to assess the likelihood of defendants becoming recidivists. See ProPublica’s analysis of the COMPAS algorithm used by US courts: Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, “Machine Bias – There’s Software Used Across the Country to Predict Future Criminals. And It’s Biased against Blacks” ProPublica (May 23, 2016), www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing, accessed January 14, 2023. Their calculation is also available on GitHub at the following address: https://github.com/propublica/compas-analysis.

5 For that, I redirect the reader to dedicated reference manuscripts and studies such as, among many others: Dara Hallinan, Ronald Leenes, and Paul De Hert (eds), Data Protection and Privacy: Data Protection and Artificial Intelligence (Hart Publishing, 2021); Giovanni Sartor and Francesca Lagioia, “The impact of the general data protection regulation (GDPR) on artificial intelligence” (European Parliamentary Research Service, 2020) Think Tank: European Parliament, Study, www.europarl.europa.eu/thinktank/en/document/EPRS_STU(2020)641530, accessed January 11, 2023.

6 Juliane Kokott and Christoph Sobotta, “The distinction between privacy and data protection in the jurisprudence of the CJEU and the ECtHR” (2013) International Data Privacy Law, 3: 222, 222–223. See, for further information on these two systems: European Union Agency for Fundamental Rights and Council of Europe, Handbook on European Data Protection Law – 2018 Edition (2018), https://fra.europa.eu/en/publication/2018/handbook-european-data-protection-law-2018-edition, accessed January 16, 2023.

7 Convention for the Protection of Human Rights and Fundamental Freedoms (European Convention on Human Rights, as amended by Protocols n° 11, 14, and 15 and supplemented by Protocols n° 6, 7, 12, 13, and 16).

8 An overview of the jurisprudence of the ECtHR on Article 8 is available here: Registry of the European Court of Human Rights, “Guide on Article 8 of the European Convention on Human Rights. Right to respect for private and family life, home and correspondence” (April 9, 2024), https://ks.echr.coe.int/documents/d/echr-ks/guide_art_8_eng, accessed July 30, 2024.

9 Charter of Fundamental Rights of the European Union, O.J.E.U., December 18, 2000, C 364/01.

10 See, for an overview of the main relevant cases: Research and Documentation Directorate, “Fact Sheet: Protection of Personal Data” (Court of Justice of the European Union, 2021), https://curia.europa.eu/jcms/upload/docs/application/pdf/2018-10/fiche_thematique_-_donnees_personnelles_-_en.pdf, accessed January 16, 2023.

11 More specifically, Article 52(3) CFREU states that “in so far as this Charter contains rights which correspond to rights guaranteed by the Convention for the Protection of Human Rights and Fundamental Freedoms, the meaning and scope of those rights shall be the same as those laid down by the said Convention.”

12 See, for an overview of the GDPR soft law ecosystem and its limitations: Athena Christofi, Pierre Dewitte, and Charlotte Ducuing, “Erosion by standardisation: Is ISO/IEC 29134:2017 on privacy impact assessment up to (GDPR) standard?” in Maria Tzanou (ed), Personal Data Protection and Legal Developments in the European Union (IGI Global, 2020) 145–148, http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/978-1-5225-9489-5, accessed January 16, 2023.

13 The Article 29 Working Party (WP29) and its successor the European Data Protection Board (EDPB) are independent EU bodies composed of representatives from national supervisory authorities tasked with ensuring the consistent interpretation of the GDPR throughout the Union. More specifically, the Board now plays a central role in the cooperation and consistency mechanism outlined in Chapter VII GDPR by issuing so-called “binding decisions” in cases where national supervisory authorities disagree on the substance of a draft decision (Article 65(1)a GDPR). The duties of the Board are detailed in Article 70 GDPR.

14 See the examples in: Lee A Bygrave and Luca Tosoni, “Article 4(1). Personal data” in Christopher Kuner et al. (eds), The EU General Data Protection Regulation (GDPR): A Commentary (Oxford University Press, 2020) 109–110, https://doi.org/10.1093/oso/9780198826491.003.0007, accessed January 17, 2023.

15 C-434/16 Nowak [2017] ECLI:EU:C:2017:994, para 35.

16 Footnote Ibid., para 34. In that case, the CJEU held that the written answers submitted by a candidate at a professional examination as well as any comments made by an examiner with respect to those answers constitute personal data, within the meaning of Article 4(1) GDPR.

17 On post-mortem privacy, see: Edina Harbinja, “Post-mortem privacy 2.0: Theory, law, and technology” (2017) International Review of Law, Computers & Technology, 31: 26. The author offers a deeper analysis of these issues in her doctoral thesis: Edina Harbinja, “Legal Aspects of Transmission of Digital Assets on Death” (University of Strathclyde, Law School, 2017), https://scholar.archive.org/work/owjux2fhlbbjnkiar2tfiowkki/access/wayback/https://stax.strath.ac.uk/downloads/pz50gw38v, accessed May 16, 2023.

18 Article 29 Working Party, “Opinion 4/2007 on the concept of personal data” 12, https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2007/wp136_en.pdf, accessed January 16, 2023.

19 C-582/14, Patrick Breyer v Bundesrepublik Deutschland [2016] ECLI:EU:C:2016:779, para 49.

20 Footnote Ibid., para 46.

21 Michèle Finck and Frank Pallas, “They who must not be identified – distinguishing personal from nonpersonal data under the GDPR” (2020) International Data Privacy Law, 10(11): 34–36; Daniel Groos and Evert-Ben van Veen, “Anonymised data and the rule of law” (2020) European Data Protection Law Review, 6(498): 5; Sophie Stalla-Bourdillon, “Anonymising personal data: Where do we stand now?” (2019) Privacy & Data Protection, 19(3): 3–5.

22 For examples of anonymization techniques and their robustness, see Article 29 Working Party, “Opinion 05/2014 on Anonymisation Techniques,” 11–19, https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2014/wp216_en.pdf, accessed January 16, 2023. It is worth noting that these guidelines, which have been abundantly criticized in legal literature for their extremely strict understanding of anonymization, are being revised at the time of writing. See Finck and Pallas (n 21) 15; Sophie Stalla-Bourdillon, “Anonymous data v. personal data – false debate: An EU perspective on anonymization, pseudonymization and personal data” (2016) Wisconsin International Law Journal, 34(384): 306–320.

23 Agencia Española de Protección de Datos and European Data Protection Supervisor, “Introduction to the hash function as a personal data pseudonymisation technique” (October 2019), https://edps.europa.eu/sites/default/files/publication/19-10-30_aepd-edps_paper_hash_final_en.pdf, accessed January 16, 2023.

24 Defined in Article 4(5) GDPR as “the processing of personal data in such a manner that the personal data can no longer be attributed to a specific data subject without the use of additional information, provided that such additional information is kept separately and is subject to technical and organizational measures to ensure that the personal data are not attributed to an identified or identifiable natural person.”

25 For an overview of the state of the art on pseudonymization, see European Union Agency for Cybersecurity, “Data pseudonymisation: Advanced techniques and use cases,” www.enisa.europa.eu/publications/data-pseudonymisation-advanced-techniques-and-use-cases, accessed January 16, 2023.

26 The target variable being the variable that the model, once trained, will be able to predict, and the predictor variables being the information on the basis of which the model will ground its prediction. For a simplified overview of the functioning of supervised and unsupervised machine learning, see Datatilsynet, “Artificial intelligence and privacy,” 7–14, www.datatilsynet.no/globalassets/global/english/ai-and-privacy.pdf, accessed January 11, 2023.

27 The Information Commissioner’s Office, UK’s supervisory authority, provides a solid introduction to anonymization techniques in: Information Commissioner’s Office, “Anonymisation: Managing data protection risk code of practice.” See also: Information Commissioner’s Office, “Big data, artificial intelligence, machine learning and data protection,” paras 130–138, https://ico.org.uk/media/for-organisations/documents/2013559/big-data-ai-ml-and-data-protection.pdf, accessed January 18, 2023.

28 For an overview of generative (adversarial) modeling, see Fida K Dankar and Mahmoud Ibrahim, “Fake it till you make it: Guidelines for effective synthetic data generation” (2021) Applied Sciences, 11(2158): 3–5. For a real-life example of a generative adversarial network, check the website, https://thispersondoesnotexist.com/.

29 Michael Veale, Reuben Binns, and Lilian Edwards, “Algorithms that remember: Model inversion attacks and data protection law” (2018) Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, 376: 20180083.

30 Such as support vector machines and k-nearest neighbors algorithms, as mentioned and explained in: Information Commissioner’s Office, “Guidance on AI and Data Protection,” 58, https://ico.org.uk/for-organisations/guide-to-data-protection/key-dp-themes/guidance-on-artificial-intelligence-and-data-protection/, accessed January 11, 2023.

31 Nadezhda Purtova, “The law of everything. Broad concept of personal data and future of EU data protection law” (2018) Law, Innovation and Technology, 10: 40.

32 Authors have even suggested that the current technological progress implies that 99.98% of Americans would be correctly reidentified in any dataset using 15 demographic attributes. See: Luc Rocher, Julien M Hendrickx, and Yves-Alexandre de Montjoye, “Estimating the success of re-identifications in incomplete datasets using generative models” (2019) Nature Communications, 10: 1.

33 Two examples are worth a mention. First, the linkage attack performed on mobility data that suggests that four spatiotemporal points are enough to uniquely identify 95% of individuals. See: Yves-Alexandre de Montjoye et al., “Unique in the crowd: The privacy bounds of human mobility” (2013) Scientific Reports, 3(1): 2. Second, the reidentification attack performed on Netflix’s user ratings dataset that uncovered that six ratings are sufficient to reidentify 84% of individuals. See: Arvind Narayanan and Vitaly Shmatikov, “How to break anonymity of the Netflix Prize dataset” (arXiv, November 22, 2007) 12, http://arxiv.org/abs/cs/0610105, accessed January 18, 2023.

34 See, for instance: Stefan Vamosi, Thomas Reutterer, and Michael Platzer, “A deep recurrent neural network approach to learn sequence similarities for user-identification” (2022) Decision Support Systems, 155: 113718.

35 See, for a more detailed overview of the allocation of responsibilities under the GDPR, the seminal work of Brendan Van Alsenoy, Data Protection Law in the EU: Roles, Responsibilities and Liability, vol 6 (Intersentia, 2019), www.larcier-intersentia.com/en/data-protection-law-the-eu-roles-responsibilities-liability-9781780688282.html, accessed January 16, 2023.

36 European Data Protection Board, “Guidelines 07/2020 on the concepts of controller and processor in the GDPR” (July 2021), https://edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-072020-concepts-controller-and-processor-gdpr_en, accessed January 17, 2023. For the remainder of Section 3.2.1, reference is made to these guidelines. The notion of controller is covered in paras 15–45, that of joint control in paras 46–72 and that of processor in paras 73–84.

37 Case C-25/17 Tietosuojavaltuutettu [2018] ECLI:EU:C:2018:551, paras 70–75.

38 Case C-210/16 Wirtschaftsakademie Schleswig-Holstein GmbH [2018] ECLI:EU:C:2018:388, paras 25–44.

39 Case C-40-17 Fashion ID [2019] ECLI:EU:C:2019:629, paras 64–85.

40 Case C-131/12 Google Spain [2014] ECLI:EU:C:2014:317, para 34; Case C-210-16 (n 38), para 28; Case C-25/17 (n 37), para 21; Footnote ibid., para 66.

41 See, on that note, the remark of Advocate General Bobek in his Opinion on the Fashion ID case. Case C-40/17 (n 39), Opinion of Advocate General Bobek, ECLI:EU:C:2018:1039, para 74.

42 Concerns have been voiced by, for instance: Jiahong Chen et al., “Who is responsible for data processing in smart homes? Reconsidering joint controllership and the household exemption” (2020) International Data Privacy Law, 10: 279; Christopher Millard, “At this rate, everyone will be a [joint] controller of personal data!” (2019) International Data Privacy Law, 9: 217.

43 René Mahieu and Joris van Hoboken, “Fashion-ID: Introducing a phase-oriented approach to data protection?” (European Law Blog, September 30, 2019), https://europeanlawblog.eu/2019/09/30/fashion-id-introducing-a-phase-oriented-approach-to-data-protection/, accessed January 19, 2023.

44 See, for more examples, the ICO Guidance on AI and data protection, more specifically under the section “How should we understand controller/processor relationships in AI?” Information Commissioner’s Office, “Guidance on AI and Data Protection” (n 30) 23–27.

46 See, for other examples: European Data Protection Board, “Guidelines 2/2019 on the processing of personal data under Article 6(1)(b) GDPR in the context of the provision of online services to data subjects,” paras 23–29, https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines-art_6-1-b-adopted_after_public_consultation_en.pdf, accessed January 17, 2023.

47 European Data Protection Board, “Guidelines 05/2020 on Consent under Regulation 2016/679,” paras 13–54, https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_202005_consent_en.pdf, accessed January 15, 2023.

48 For a thorough overview of that principle, see: Article 29 Working Party, “Opinion 03/2013 on purpose limitation,” https://ec.europa.eu/justice/article-29/documentation/opinion-recommendation/files/2013/wp203_en.pdf, accessed January 16, 2023.

49 Recital 50 GDPR also highlights the relevance of other criteria such as “the nature of the personal data, the consequences of the intended further processing for data subjects, and the existence of appropriate safeguards in both the original and intended further processing operations.”

50 More examples can be found in Annex 2 of: Article 29 Working Party, “Opinion 06/2014 on the notion of legitimate interests of the data controller under Article 7 of Directive 95/46/EC,” www.dataprotection.ro/servlet/ViewDocument?id=1086, accessed January 14, 2023.

51 Garante per la protezione dei dati personali, Ordinanza ingiunzione nei confronti di Clearview AI [2022], www.gpdp.it/web/guest/home/docweb/-/docweb-display/docweb/9751362, accessed January 24, 2023.

52 Αρχή προστασίας δεδομένων προσωπικού χαρακτήρα, Επιβολή προστίμου στην εταιρεία Clearview AI, Inc [2022], www.dpa.gr/el/enimerwtiko/prakseisArxis/epiboli-prostimoy-stin-etaireia-clearview-ai-inc, accessed January 24, 2023.

53 Commission nationale de l’informatique et des libertés, Délibération de la formation restreinte n° SAN-2022-019 du 17 octobre 2022 concernant la société Clearview AI [2022], www.legifrance.gouv.fr/cnil/id/CNILTEXT000046444859, accessed January 24, 2023. See also, more recently, the EUR 5.2 million penalty payment issued by the CNIL against Clearview AI for non-compliance with the above-mentioned injunction: Commission nationale de l’informatique et des libertés, Délibération de la formation restreinte n° SAN-2023-005 du 17 avril 2023 concernant la société Clearview AI [2023], www.legifrance.gouv.fr/cnil/id/CNILTEXT000047527412, accessed June 15, 2023.

54 Information Commissioner’s Office, Monetary Penalty Notice to Clearview AI Inc of May 26, 2022 [2022], https://ico.org.uk/media/action-weve-taken/mpns/4020436/clearview-ai-inc-mpn-20220518.pdf, accessed June 15, 2023; see also, for the order to stop obtaining and using the personal data of UK residents that is publicly available on the internet, and to delete the data of UK residents from its systems: Information Commissioner’s Office, Enforcement Notice to Clearview AI Inc. of May 26, 2022 [2022], https://ico.org.uk/media/action-weve-taken/enforcement-notices/4020437/clearview-ai-inc-en-20220518.pdf, accessed June 15, 2023.

55 Datenschutzbehörde, Decision of May 9, 2023 against Clearview AI [2023], https://noyb.eu/sites/default/files/2023-05/Clearview%20Decision%20Redacted.pdf.

56 Garante per la protezione dei dati personali, Provvedimento del 30 marzo 2023 [9870832] [2023], www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9870832, accessed June 15, 2023. An earlier decision issued against Luka Inc., the company behind Replika, also questioned the lawful ground applicable in the context of companion chatbots. See: Garante per la protezione dei dati personali, Provvedimento del 2 febbraio 2023 [9852214] [2023], www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/docweb/9852214, accessed June 15, 2023.

57 Garante per la protezione dei dati personali, ChatGPT: Garante privacy, limitazione provvisoria sospesa se OpenAI adotterà le misure richieste. L’Autorità ha dato tempo alla società fino al 30 aprile per mettersi in regola [2023], www.garanteprivacy.it/home/docweb/-/docweb-display/docweb/9874751, accessed June 15, 2023; ChatGPT: OpenAI riapre la piattaforma in Italia garantendo più trasparenza e più diritti a utenti e non utenti europei, www.gpdp.it/home/docweb/-/docweb-display/docweb/9881490. For an overview of the new controls added by ChatGPT following the Garante’s ban, see the dedicated Help Centre Article on OpenAI’s website: https://help.openai.com/en/articles/7842364-how-chatgpt-and-our-language-models-are-developed, accessed June 15, 2023. Yet, OpenAI did not offer any solution to remedy the unlawfulness of the processing of the personal data contained in the dataset used to train ChatGPT.

58 It is also worth noting that OpenAI now faces a class action in California for a breach of both data protection and copyright law. See: Gerrit De Vynck, “ChatGPT maker OpenAI faces a lawsuit over how it used people’s data” (2023) Washington Post (June 28), www.washingtonpost.com/technology/2023/06/28/openai-chatgpt-lawsuit-class-action/, accessed July 4, 2023.

59 The EDPB announced the creation of the task force back in April 2023. See: www.edpb.europa.eu/news/news/2023/edpb-resolves-dispute-transfers-meta-and-creates-task-force-chat-gpt. In May 2024, it published a meager interim report documenting the results of the said taskforce that “reflect[s] the common denominator agreed by the Supervisory Authorities in their interpretation of the applicable provisions of the GDPR in relation to the matters that are within the scope of their investigation.” See: European Data Protection Board, “Report of the Work Undertaken by the ChatGPT Taskforce,” www.edpb.europa.eu/system/files/2024-05/edpb_20240523_report_chatgpt_taskforce_en.pdf. Looking beyond the EU, ChatGPT is also on the radar of the Office of the Privacy Commissioner of Canada. See: Office of the Privacy Commissioner of Canada, Announcement of April 4, 2023, www.priv.gc.ca/en/opc-news/news-and-announcements/2023/an_230404/, accessed June 15, 2023.

60 For an overview of these methods: Jason Brownlee, “How to choose a feature selection method for machine learning” (MachineLearningMastery.com, November 26, 2019), https://machinelearningmastery.com/feature-selection-with-real-and-categorical-data/, accessed January 25, 2023.

61 Stephanie Rossello, Luis Muñoz-González, and Roberto Díaz Morales, “Data protection by design in AI? The case of federated learning” (2021) Computerrecht: Tijdschrift voor Informatica, Telecommunicatie en Recht, 3: 273.

62 For other relevant examples of minimization techniques that can be deployed at the inference stage, see: Information Commissioner’s Office, “Guidance on AI and Data Protection” (n 30) 66–68.

63 For a detailed overview of Articles 12, 13, and 14 GDPR, see: Article 29 Working Party, “Guidelines on Transparency under Regulation 2016/679,” https://ec.europa.eu/newsroom/article29/redirection/document/51025, accessed January 16, 2023.

64 Laurens Naudts, Pierre Dewitte, and Jef Ausloos, “Meaningful transparency through data rights: A multidimensional analysis” (2022) Research Handbook on EU Data Protection Law 530, 540.

65 Jef Ausloos and Pierre Dewitte, “Shattering one-way mirrors – data subject access rights in practice” (2018) International Data Privacy Law, 8: 7, https://academic.oup.com/idpl/advance-article/doi/10.1093/idpl/ipy001/4922871, accessed May 16, 2023. See also the many references therein.

66 The fact that the elements listed in Article 15 partially overlap with the ones listed in Articles 13 and 14 does not mean that the controller can always answer an access request by recycling elements from its privacy policy or record of processing. See: European Data Protection Board, “Guidelines 01/2022 on data subject rights – right of access,” para 111, https://edpb.europa.eu/our-work-tools/our-documents/guidelines/guidelines-012022-data-subject-rights-right-access_en, accessed January 16, 2023.

67 Article 29 Working Party, “Guidelines on data protection impact assessment (DPIA) and determining whether processing is ‘likely to result in a high risk’ for the purposes of regulation 2016/679” 21, https://ec.europa.eu/newsroom/document.cfm?doc_id=47711, accessed January 25, 2022.

68 See, for the interpretation proposed by national supervisory authorities across Europe: Sebastião Barros Vale and Gabriela Zanfir-Fortuna, “Automated decision-making under the GDPR: Practical cases from courts and data protection authorities” (Future of Privacy Forum, 2022), https://fpf.org/wp-content/uploads/2022/05/FPF-ADM-Report-R2-singles.pdf, accessed January 11, 2023.

69 See, among others: Bryce Goodman and Seth Flaxman, “European Union Regulations on algorithmic decision-making and a ‘right to explanation’” (2017) AI Magazine, 38, http://arxiv.org/abs/1606.08813; Sandra Wachter, Brent Mittelstadt, and Luciano Floridi, “Why a right to explanation of automated decision-making does not exist in the general data protection regulation” (2017) International Data Privacy Law, 7: 76; Gianclaudio Malgieri and Giovanni Comandé, “Why a right to legibility of automated decision-making exists in the general data protection regulation” (2017) International Data Privacy Law, 7: 243.

70 See Annex 1 of Article 29 Working Party, "WP29, Guidelines on DPIA" (n 67) 31.

71 The British regulator has provided a solid overview of the different types of explanations controllers could provide. See, more specifically, the Section “What goes into an explanation” from the Information Commissioner’s Office and Alan Turing Institute, “Explaining decisions made with AI,” https://ico.org.uk/media/for-organisations/guide-to-data-protection/key-dp-themes/explaining-decisions-made-with-artificial-intelligence-1-0.pdf, accessed January 25, 2023.

72 Max van Drunen, Natali Helberger, and Mariella Bastian, “Know your algorithm: What media organizations need to explain to their users about news personalization” (2019) International Data Privacy Law, 9: 220.

73 Directive 95/46/EC of the European Parliament and of the Council of 24 October 1995 on the protection of individuals with regard to the processing of personal data and on the free movement of such data [1995] OJ L281/31 ELI: http://data.europa.eu/eli/dir/1995/46/oj.

74 The EDPS indeed noted that “in the past, privacy and data protection have been perceived by many organisations as an issue mainly related to legal compliance, often confined to the mere formal process of issuing long privacy policies covering any potential eventuality and reacting to incidents in order to minimise the damage to their own interests.” See: European Data Protection Supervisor, “Opinion 5/2018 – Preliminary Opinion on Privacy by Design,” para 13, https://edps.europa.eu/sites/edp/files/publication/18-05-31_preliminary_opinion_on_privacy_by_design_en_0.pdf, accessed January 15, 2023.

75 Claudia Quelle, "Enhancing compliance under the General Data Protection Regulation: The risky upshot of the accountability- and risk-based approach" (2018) European Journal of Risk Regulation, 9: 502, 505.

76 See, for a detailed overview of the steps involved in a DPIA: Article 35(7) GDPR and Annex 2 of the Article 29 Working Party, “WP29, Guidelines on DPIA” (n 67).

77 European Data Protection Board, “Guidelines 4/2019 on Article 25 Data Protection by Design and by Default,” para 32, https://edpb.europa.eu/sites/edpb/files/files/file1/edpb_guidelines_201904_dataprotection_by_design_and_by_default_v2.0_en.pdf, accessed May 3, 2022.

78 Article 29 Working Party, “WP29, Guidelines on DPIA” (n 67) 9–12.

79 Autoriteit Persoonsgegevens, “Boete Belastingdienst voor zwarte lijst FSV” April 12, 2022, https://autoriteitpersoonsgegevens.nl/nl/nieuws/boete-belastingdienst-voor-zwarte-lijst-fsv, accessed January 25, 2023.

80 Competition and Markets Authority and others, "Auditing algorithms: The existing landscape, role of regulators and future outlook" (Digital Regulation Cooperation Forum) Findings from the DRCF Algorithmic Processing workstream – Spring 2022, www.gov.uk/government/publications/findings-from-the-drcf-algorithmic-processing-workstream-spring-2022/auditing-algorithms-the-existing-landscape-role-of-regulators-and-future-outlook, accessed January 26, 2023.

81 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act) (Text with EEA relevance) [2024] OJ L, 2024/1689 ELI: http://data.europa.eu/eli/reg/2024/1689/oj.

82 See, for a use case in the healthcare sector: Lara Groves, “Algorithmic impact assessment: A case study in healthcare” (Ada Lovelace Institute, 2022), www.adalovelaceinstitute.org/report/algorithmic-impact-assessment-case-study-healthcare/, accessed January 26, 2023.

83 Christian Katzenbach and Lena Ulbricht, “Algorithmic governance” (2019) Internet Policy Review, 8(4), https://policyreview.info/concepts/algorithmic-governance, accessed January 26, 2023.

8 Tort Liability and Artificial Intelligence Some Challenges and (Regulatory) Responses

1 See, for example, Ronald Leenes et al., “Regulatory challenges of robotics: some guidelines for addressing legal and ethical issues” (2017) Law, Innovation and Technology, 9(1): 2; Marcelo Corrales, Mark Fenwick, and Nikolaus Forgó, Robotics, AI and the Future of Law (Springer, 2018); Jacob Turner, Robot Rules: Regulating Artificial Intelligence (Springer, 2018); Martin Ebers and Susana Navas (eds), Algorithms and Law (Cambridge University Press, 2020); Matt Hervey and Matthew Lavy, The Law of Artificial Intelligence (Sweet & Maxwell, 2021); Jan De Bruyne and Cedric Vanleenhove, Artificial Intelligence and the Law (Intersentia, 2023).

2 See, for example, Jan De Bruyne and Jochen Tanghe, “Liability for damage caused by autonomous vehicles: a Belgian perspective” (2017) Journal of European Tort Law, 8(3): 324.

3 The Tesla Team, “A Tragic Loss” (June 30, 2016) Tesla.com, www.teslamotors.com/blog/tragic-loss, accessed February 16, 2023.

4 Sam Levin and Julia Carrie Wong, "Self-driving Uber kills Arizona woman in first fatal crash involving pedestrian" The Guardian (March 19, 2018), www.theguardian.com/technology/2018/mar/19/uber-self-driving-car-kills-woman-arizona-tempe, accessed February 16, 2023.

5 European Commission, “Report on the safety and liability implications of Artificial Intelligence, the Internet of Things and robotics” COM(2020) 64 final.

6 European Commission, “White Paper on Artificial Intelligence – A European approach to excellence and trust” COM(2020) 65 final.

7 E. Palmerini et al., “RoboLaw: Towards a European framework for robotics regulation” (2016) Robotics and Autonomous Systems, 86: 78–85, 83.

8 See, for example, the many contributions in Sebastian Lohsse, Reiner Schulze, and Dirk Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Hart Publishing, 2019); Mihailis Diamantis, "Vicarious liability for AI" (2021) U Iowa Legal Studies Research Paper No. 27; Anna Beckers and Gunther Teubner, Three Liability Regimes for Artificial Intelligence: Algorithmic Actants, Hybrids, Crowds (Bloomsbury Publishing, 2021); Jan De Bruyne, Elias Van Gool and Thomas Gils, "Tort law and damage caused by AI systems" in Jan De Bruyne and Cedric Vanleenhove (eds), Artificial Intelligence and the Law (Intersentia, 2023); Mark A. Geistfeld et al., Civil Liability for Artificial Intelligence and Software (Walter de Gruyter GmbH & Co KG, 2022); Philipp Hacker, "The European AI liability directives – Critique of a half-hearted approach and lessons for the future" (2023) Computer Law & Security Review, 51: 1–17; Jan De Bruyne, Orian Dheu and Charlotte Ducuing, "The European Commission's approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive" (2023) Computer Law & Security Review, 51: 1–19; Orian Dheu and Jan De Bruyne, "Artificial Intelligence and Tort Law: A 'Multi-faceted' Reality" European Review of Private Law, 31: 261–298 with further references. It should be noted that research has also been done on the contractual liability of AI (e.g., Hervé Jacquemin and Jean-Benoit Hubin, "Aspects contractuels et de responsabilité civile en matière d'intelligence artificielle" in Hervé Jacquemin and Alexandre De Streel (eds), L'intelligence artificielle et le droit (Larcier, 2017) 77; Martin Ebers, Cristina Poncibo, and Mimi Zou (eds), Contracting and Contract Law in the Age of Artificial Intelligence (Bloomsbury Publishing, 2021); Jan De Bruyne and Maarten Herbosch, "Artificiële intelligentie, aansprakelijkheid en contractenrecht. Enkele aandachtspunten voor bedrijfsjuristen" in IBJ, Artificiële intelligentie door de ogen van de bedrijfsjurist / L'intelligence artificielle à travers les yeux des juristes d'entreprise (Larcier, 2022) 45).

9 See, for example, European Parliament, “Report with recommendations to the Commission on Civil Law Rules on Robotics” (2017) 2015/2103(INL); European Parliament, “Report with recommendations to the Commission on a civil liability regime for artificial intelligence” (2020) 2020/2014(INL); Expert Group on Liability and New Technologies – New Technologies Formation, “Liability for artificial intelligence and other emerging digital technologies” (Publications Office of the European Union, 2019); COM(2020) 64 final (n 5). The European Commission adopted two proposals containing liability rules for AI and providing some guidance on many of these issues. One proposal revises the Product Liability Directive (see n 24) and another one introduces an extra-contractual civil liability regime for AI systems (see n 23).

10 See on this topic, for example, Joanna J. Bryson, Mihailis E. Diamantis, and Thomas D. Grant, “Of, for, and by the people: The legal lacuna of synthetic persons” (2017) Artificial Intelligence and Law, 25: 273; Mark Fenwick and Stefan Wrbka, “AI and legal personhood” in Larry A. DiMatteo, Cristina Poncibò, and Michael Cannarsa (eds.) The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics (Cambridge University Press, 2022) 288–303.

11 It should be noted that this chapter is based on a presentation given at the KU Leuven Summer School on the Law, Ethics and Policy of AI from 2021 to 2024. As such, it aims to be introductory and understandable to readers with a nonlegal background as well. This chapter also builds upon previous work. See, for example, De Bruyne, Van Gool, and Gils, "Tort law and damage" (n 8); Jan De Bruyne, Elias Van Gool, and Amber Boes, "Wat bracht 2022 en wat brengt de toekomst op het vlak van artificiële intelligentie en buitencontractuele aansprakelijkheid?" in Thierry Vansweevelt and Britt Weyts (eds), Recente ontwikkelingen in het aansprakelijkheids- en verzekeringsrecht (Intersentia, 2022); Jan De Bruyne and Orian Dheu, "Liability for damage caused by artificial intelligence – Some food for thought and current proposals" in Phillip Morgan (ed.), Tort Liability and Autonomous Systems Accidents: Common and Civil Law Perspectives (Edward Elgar Publishing, 2024); Dheu and De Bruyne, "Artificial Intelligence and Tort Law: A 'Multi-faceted' Reality" (n 8); De Bruyne, Dheu and Ducuing, "The European Commission's approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive" (n 8).

12 See extensively: De Bruyne and Dheu, “Liability for damage caused by artificial intelligence – Some food for thought and current proposals” (n 11).

13 European Parliament, “Civil Law Rules on Robotics” (n 9). Note that several reports have also been published upon request by European institutions (e.g., Andrea Bertolini, “Artificial intelligence and civil liability” (Report for the European Parliament JURI Committee, 2020)).

14 Expert Group on Liability and New Technologies – New Technologies Formation, “Liability for artificial intelligence” (n 9).

15 Footnote Ibid.; Andrea Bertolini and Francesca Episcopo, “The Expert Group’s Report on Liability for Artificial Intelligence and Other Emerging Digital Technologies: A critical assessment,” (2021) European Journal of Risk Regulation, 12(3): 644.

16 COM(2020) 65 final (n 6).

17 COM(2020) 64 final (n 5).

18 Under the law of evidence, the default rule is that each party has to prove its claims and contentions (actori incumbit probatio). The claimant/victim would thus have to prove that a fault of the operator or provider caused the damage they suffered. In some cases, however, this burden can be reversed to other parties, such as the operator, producer, or provider of the AI system. See extensively Section 8.3.

19 European Parliament, “European Parliament resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence” (2020) 2020/2014(INL) art 4 (1).

20 The AI Act is extensively discussed in Chapter 12 of this book authored by Nathalie A. Smuha and Karen Yeung, “The European Union’s AI Act: beyond motherhood and apple pie?” For the original proposal of the AI Act, see Commission, “Proposal for a regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts” COM(2021) 206 final.

21 Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 art 16–27.

22 De Bruyne, Van Gool, and Gils, "Tort law and damage" (n 8) 407–408.

23 Commission, “Proposal for a Directive of the European Parliament and of the Council on adapting non-contractual civil liability rules to artificial intelligence” COM(2022) 496 final (hereafter referred to as “AI Liability Directive”).

24 Commission, “Proposal for a Directive of the European Parliament and of the Council on liability for defective products” COM(2022) 495 final (hereafter referred to as “revised PLD”).

25 De Bruyne and Dheu, “Liability for damage caused by artificial intelligence – Some food for thought and current proposals” (n 11) referring to the “tort law dilemma.”

26 European Parliament, “Recommendations on a civil liability regime for artificial intelligence” (n 19).

27 AI Liability Directive, art 3.4. See for an extensive analysis: Hacker, “The European AI liability directives – Critique of a half-hearted approach and lessons for the future” (n 8).

28 AI Liability Directive, art 4.1.

29 Dheu, De Bruyne, and Ducuing, "The European Commission's approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive" (n 8) 7.

31 Ivo Giesen, "The burden of proof and other procedural devices in tort law" in Helmut Koziol and Barbara C. Steininger (eds), European Tort Law 2008 (Springer, 2009) 50.

32 Mojtaba Kazazi, Burden of Proof and Related Issues: A Study on Evidence Before International Tribunals (Martinus Nijhoff Publishers, 1996). See, for example, art 8.4, para 1, Civil Code (Wet 13 April 2019 tot invoering van een Burgerlijk Wetboek en tot invoeging van boek 8 ‘Bewijs’ in dat Wetboek, BS May 14, 2019, 46353.); art 870 Judicial Code.

33 Expert Group on Liability and New Technologies – New Technologies Formation, "Liability for artificial intelligence" (n 9) 32–33. Also see: AI Liability Directive, recitals (3)–(7); Dheu and De Bruyne, "Artificial Intelligence and Tort Law: A 'Multi-faceted' Reality" (n 8).

34 COM(2020) 65 final (n 6) 13; Expert Group on Liability and New Technologies – New Technologies Formation, “Liability for artificial intelligence” (n 9) 35 and 51.

35 Council Directive 85/374/EEC of July 25, 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products [1985] OJ L 210 (further referred to as the “PLD”). See in general: Bernhard Koch et al., “Response of the European Law Institute to the Public Consultation on Civil Liability – Adapting Liability Rules to the Digital Age and Artificial Intelligence” (2022) Journal of European Tort Law, 13: 43–46.

36 Jean-Sébastien Borghetti, "How can artificial intelligence be defective?" in Sebastian Lohsse, Reiner Schulze, and Dirk Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Hart Publishing, 2019) 67 (as referred to in Miriam Buiten, Alexandre de Streel, and Martin Peitz, "EU liability rules for the age of artificial intelligence" (2021) SSRN, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3817520, accessed February 22, 2023, 34–35).

37 Also see revised PLD, recitals (30)–(31) ("Injured persons are, however, often at a significant disadvantage compared to manufacturers in terms of access to, and understanding of, information on how a product was produced and how it operates. This asymmetry of information can undermine the fair apportionment of risk, in particular in cases involving technical or scientific complexity").

38 Koch et al., "Response of the European Law Institute" (n 35) 44–46 and 57–58. Similarly, in the context of the PLD: Daily Wuyts, "The product liability directive – more than two decades of defective products in Europe" (2014) Journal of European Tort Law, 5(1): 1–34.

39 COM(2020) 65 final (n 6) 13. Also see: Buiten, de Streel, and Peitz, "EU liability rules" (n 36) 24–38.

40 COM(2020) 64 final (n 5) 13; De Bruyne, Van Gool, and Gils, “Tort Law and Damage” (n 8) 396–397.

41 See, for example, Gerhard Wagner, “Robot Liability” in Sebastian Lohsse, Reiner Schulze, and Dirk Staudenmayer (eds), Liability for Artificial Intelligence and the Internet of Things (Hart Publishing, 2019) 47; Charlotte de Meeus, “The product liability directive at the age of the digital industrial revolution: Fit for innovation?” (2019) Journal of European Consumer and Market Law, 8(4): 149–154, 152; Christian Twigg-Flesner, “Guiding principles for updating the product liability directive for the digital age (Pilot ELI Innovation Paper)” (2021) ELI Innovation Paper Series, SSRN, 9–10, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3770796, accessed February 22, 2023; Koch et al., “Response of the European law institute” (n 35) 44. See extensively with further references: Dheu and De Bruyne, “Artificial Intelligence and Tort Law: A ‘Multi-faceted’ Reality” (n 8).

42 AI Liability Directive, art 3.1, first para.

43 Footnote Ibid. art 3.1 and 3.2.

44 Footnote Ibid. art 3.4, first para.

45 Footnote Ibid. art 3.4, second para.

46 Revised PLD, art 8.1.

47 Footnote Ibid. art 8.2.

48 Expert Group on Liability and New Technologies – New Technologies Formation, “Liability for artificial intelligence” (n 9) 7 and 48.

49 Footnote Ibid. 8 and 52.

50 Footnote Ibid. 8 and 49–50.

51 European Parliament, “recommendations to the Commission on a civil liability regime for artificial intelligence” (n 9) art 8.

52 AI Liability Directive, 13 and art 4.1 (a).

53 Footnote Ibid. art 4.4.

54 Footnote Ibid. art 4.5. See for an extensive analysis: Jan De Bruyne, Orian Dheu and Charlotte Ducuing, “The European Commission’s approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive” (n 8).

55 Revised PLD, art 9.4 (a).

56 Footnote Ibid. art 9.4 (b).

57 Footnote Ibid. art 9.4, second para.

58 Footnote Ibid. art 9.5. See for an extensive analysis: De Bruyne, Dheu and Ducuing, “The European Commission’s approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive” (n 8).

59 See, for example, Bert Keirsbilck, Evelyne Terryn, and Elias Van Gool, “Consumentenbescherming bij servitisation en product-dienst-systemen (PDS)” (2019) Tijdschrift voor Privaatrecht 817; De Bruyne, Van Gool, and Gils, “Tort law and damage” (n 8) 417.

60 See, for example, Bertolini, “Artificial intelligence and civil liability” (n 13) 57.

61 See, for example, Duncan Fairgrieve and Eleonora Rajneri, “Is software a product under the product liability directive?” (2019) Zeitschrift für Internationales Wirtschaftsrecht, 24; Koch et al., “Response of the European Law Institute” (n 35) 34–36.

62 Previously, several EU policy documents already favored a broad interpretation of the notion of a product (e.g., Expert Group on Liability and New Technologies – New Technologies Formation, “Liability for artificial intelligence” (n 9) 42–43; COM(2020) 64 final (n 5) 14).

63 De Bruyne, Van Gool, and Gils, “Tort law and damage” (n 8) 418.

64 See extensively: De Bruyne, Van Gool, and Gils, “Tort law and damage” (n 8) 417–421 with further references.

65 Art. 2 Act 25 February 1991 concerning liability for defective products, BS 22 March 1991. Also see Dimitri Verhoeven, “Productveiligheid en productaansprakelijkheid: krachtlijnen en toekomstperspectieven” in Reinhard Steennot and Gert Straetmans (eds), Wetboek economisch recht en de bescherming van de consument (Intersentia, 2015) 198; Jacquemin and Hubin, “Aspects contractuels” (n 8) 129–130.

66 Revised PLD, art 4 (1).

67 See, for example, Jochen Tanghe and Jan De Bruyne, “Software aan het stuur. Aansprakelijkheid voor schade veroorzaakt door autonome motorrijtuigen” in Thierry Vansweevelt and Britt Weyts (eds), Nieuwe risico’s in het aansprakelijkheids- en verzekeringsrecht (Intersentia, 2018) 56–57; Buiten, de Streel, and Peitz, “EU liability rules” (n 36) 51; Twigg-Flesner, “Guiding principles” (n 41) 5; Koch et al., “Response of the European Law Institute” (n 35) 34–36.

68 AI Liability Directive, recital 13. See extensively De Bruyne, Dheu and Ducuing, “The European Commission’s approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive” (n 8) 11–13.

69 COM(2020) 64 final (n 5) 13–14.

70 De Bruyne and Tanghe, “Liability for damage caused by autonomous vehicles” (n 2) 357.

71 Product Liability Directive, art 6.

72 Product Liability Directive, recital 6. Bocken argues that it concerns the consumer as part of a group (Hubert Bocken, "Buitencontractuele aansprakelijkheid voor gebrekkige producten" in Hubert Bocken et al. (eds), Bijzondere overeenkomsten (Postuniversitaire cyclus Willy Delva 34, Wolters Kluwer, 2008–2009) 367).

73 Cass 26 September 2003 Arr.Cass. 2003 1765 RW 2004–05 22 annotation by Britt Weyts; Court of Appeal Antwerp 13 April 2005 RW 2008–09 803; Court of Appeal Antwerp 28 October 2009 TBBR 2011 381 annotation by Dimitri Verhoeven; Hubert Bocken and Ingrid Boone with cooperation by Marc Kruithof, Inleiding tot het schadevergoedingsrecht: buitencontractueel aansprakelijkheidsrecht en andere schadevergoedingsstelsels (Die Keure, 2014) 196; Jacquemin and Hubin, “Aspects contractuels” (n 8) 131.

74 Product Liability Directive, art 6, first para.

75 Bocken and Boone, Inleiding tot het schadevergoedingsrecht (n 73) 196; Marc Kruithof, “Wie is aansprakelijk voor schade veroorzaakt door onveilige producten?: de toepassing van de artikelen 1382, 1384 lid 1, en 1645 BW herbekeken in het licht van het – door het Hof van Justitie sterk beperkte – aanvullend karakter voorzien in artikel 13 Wet Productaansprakelijkheid” in Ignace Claeys and Reinhard Steennot (eds), Aansprakelijkheid, veiligheid en kwaliteit (Postuniversitaire cyclus Willy Delva 40, Wolters Kluwer, 2015) 148, fn 18.

76 De Bruyne, Van Gool, and Gils, “Tort law and damage” (n 8) 422 with further references.

77 De Bruyne, Dheu, and Ducuing, “The European Commission’s approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive” (n 8) 13–14.

78 Revised PLD, art 6.1.

80 Revised PLD, art 6.2.

81 Bertolini, “Artificial intelligence and civil liability” (n 13) 57.

82 Bocken, “Buitencontractuele aansprakelijkheid” (n 72) 368; Thierry Vansweevelt and Britt Weyts, Handboek Buitencontractueel Aansprakelijkheidsrecht (Intersentia, 2009) 515.

83 See extensively: Borghetti, “How can artificial intelligence” (n 36) 63–76.

84 De Bruyne and Tanghe, “Liability for damage caused by autonomous vehicles” (n 2) 362. See also: Thomas Malengreau, “Automatisation de la conduite: quelles responsabilités en droit belge? (Première partie)” (2019) RGAR, 5: 15578, no 27.

85 See: Borghetti, “How can artificial intelligence” (n 36) 68–69; De Bruyne and Tanghe, “Liability for damage caused by autonomous vehicles” (n 2) 358–362.

86 Under this defense, the producer will not be held liable if he or she proves that the state of scientific and technical knowledge at the time when he or she put the product into circulation was not such as to enable the existence of the defect to be discovered (Product Liability Directive, art 7, e).

87 Expert Group on Liability and New Technologies – New Technologies Formation, “Liability for artificial intelligence” (n 9) 28.

88 COM(2020) 65 final (n 6) 13.

89 See the discussion supra in Section 8.3.

90 AI Liability Directive, 13 and art 4.1 (a).

91 See extensively: De Bruyne, Dheu, and Ducuing, “The European Commission’s approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive” (n 8) 7–9.

92 AI Liability Directive, art 2 (9).

93 AI Liability Directive, recital 24.

94 Marc Kruithof, Tort Law in Belgium (Kluwer Law International, 2018) 47 with references.

95 See extensively: De Bruyne, Dheu, and Ducuing, “The European Commission’s approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive” (n 8) 8–9.

96 See, for example, Court of Cassation 3 October 1994 (1994) Arr.Cass. 807; (1996–1997) RW 1227; Geert Jocqué, "Bewustzijn en subjectieve verwijtbaarheid" in Hubert Bocken, XXXIIIste Postuniversitaire cyclus Willy Delva 2006–2007 (Intersentia, 2007) 1–101; Vansweevelt and Weyts, Handboek (n 82) 147–148; Kruithof, Tort Law (n 94) 53–56.

97 De Bruyne, Dheu, and Ducuing, “The European Commission’s approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive” (n 8) 9.

98 See, for example, Cass 3 October 1994 Arr.Cass. 1994 807; Cass 10 April 2014 Arr.Cass. 2014 962.

99 See, for example, Cass 25 November 2002 Arr.Cass. 2002 2543; Bocken and Boone, Inleiding tot het schadevergoedingsrecht (n 73) 90–92.

100 Kruithof, Tort Law (n 94) 49 with references; Vansweevelt and Weyts, Handboek (n 82) 134–137.

101 De Bruyne, Dheu, and Ducuing, “The European Commission’s approach to extra-contractual liability and AI – An evaluation of the AI liability directive and the revised product liability directive” (n 8) 9.

102 See, for example, Thomas Gils, Frederic Heymans, and Wannes Ooms (Knowledge Centre Data & Society), “From Policy To Practice: Prototyping The EU AI Act’s Transparency Requirements,” January 2024, https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4714345, accessed August 2, 2024.

9 Artificial Intelligence and Competition Law

1 Generative AI applications fall outside the scope of this chapter, which covers developments only up to January 31, 2023. For more recent developments on the intersection of competition law and generative AI, see Friso Bostoen and Anouk van der Veer, "Regulating competition in generative AI: A matter of trajectory, timing and tools" (2024) Concurrences, 2-2024: 27–33.

2 “Price-bots can collude against consumers” The Economist (May 6, 2017), www.economist.com/finance-and-economics/2017/05/06/price-bots-can-collude-against-consumers.

3 Ariel Ezrachi and Maurice Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press, 2016). For an update, see Ariel Ezrachi and Maurice Stucke, “Sustainable and unchallenged algorithmic tacit collusion” (2020) Northwestern Journal of Technology and Intellectual Property, 17: 217.

4 See Thibault Schrepel, “Here’s why algorithms are NOT (really) a thing” Concurrentialiste (May 2017) www.networklawreview.org/algorithms-based-practices-antitrust/.

5 The case often referred to concerned Amazon sellers fixing the price of celebrity posters, which sparked enforcement in the US and the UK. See Competition and Markets Authority (CMA), Case 50223, Online sales of posters and frames, August 12, 2016.

6 Nicolas Petit, “Antitrust and artificial intelligence: A research agenda” (2017) Journal of European Competition Law & Practice, 8(6): 361–362, 361.

7 Emilio Calvano et al., “Artificial intelligence, algorithmic pricing, and collusion” (2020) American Economic Review, 110: 3267.

8 Heather Vogell, “Rent going up? One company’s algorithm could be why” ProPublica (October 15, 2022), www.propublica.org/article/yieldstar-rent-increase-realpage-rent.

9 Panos Louridas, Algorithms (MIT Press, 2020), Chapter 1.

10 On his CV (Section “Adages & Coinages”), Larry Tesler corrects the record: “What I actually said was: ‘Intelligence is whatever machines haven’t done yet,’” see, www.nomodes.com/Larry_Tesler_Consulting/Adages_and_Coinages.html.

11 Ethem Alpaydin, Machine Learning (MIT Press, 2021) 17–18. Alpaydin argues that ML is a requirement for AI (see 18–22) and defines AI as computers doing things, which – if done by humans – would be said to require intelligence (while stressing the problem that AI definitions tend to be human-centric).

12 Footnote Ibid., 12.

13 See the various references to “distort[ions of] normal competition” in the Treaty establishing the European Coal and Steel Community (1951); more recently, see the Consolidated version of the Treaty on European Union – Protocol (No 27) on the internal market and competition [2008] OJ C115/309 (“the internal market as set out in Article 3 [TFEU] includes a system ensuring that competition is not distorted”).

14 Joined Cases T-213/01 and T-214/01 Österreichische Postsparkasse v Commission EU:T:2006:151, para 115.

15 Case C-413/14 P Intel v Commission EU:C:2017:632, para 134.

16 In business-to-business (B2B) settings, prices are often individually negotiated, or in any case not made public.

17 EC, "Final report on the E-Commerce Sector Inquiry" (Staff Working Document) COM(2017) 229.

18 Footnote Ibid., para 12. The EC adds, however, that increased price competition may negatively affect competition on parameters other than price, such as quality and innovation.

19 Footnote Ibid., para 13.

20 Footnote Ibid. (“Two thirds of [retailers] use automatic software programmes that adjust their own prices based on the observed prices of competitors.”).

21 Margrethe Vestager, “Algorithms and competition” (Bundeskartellamt 18th Conference on Competition, Berlin, March 16, 2017).

22 Petrol prices are displayed prominently, so even in the past, they could be collected by driving by petrol stations. Meanwhile, specific apps have sprung up to compare petrol prices. Navigation apps such as Google’s Waze also provide information on the prices charged by petrol stations.

23 Fernando Luco, “Who benefits from information disclosure? The case of retail gasoline” (2019) American Economic Journal: Microeconomics, 11: 277 (due to differences in search behavior, low-income consumers were more affected than high-income consumers).

24 Stephanie Assad et al., “Algorithmic pricing and competition: Empirical evidence from the German retail gasoline market” (2020) CESifo Working Paper No. 8521 (the 9% increase was found in non-monopoly markets; in duopoly markets, the authors found that margins do not change when only one of the two stations adopts, but increase by 28% when both do).

25 See Sam Schechner, “Why do gas station prices constantly change? Blame the algorithm” The Wall Street Journal (May 8, 2017), www.wsj.com/articles/why-do-gas-station-prices-constantly-change-blame-the-algorithm-1494262674.

26 CMA, “Algorithms: How they can reduce competition and harm consumers” (Report) 2021, 2.9–2.20.

27 An important question is whether total output increases, see Hal Varian, “Price discrimination” in Richard Schmalensee and Robert Willig (eds), Handbook of Industrial Organization – Volume I (Elsevier, 1989) 597.

28 See Michael Schrage, Recommendation Engines (MIT Press, 2020).

29 The merger control regime is also important, but algorithmic competition issues have not played an important role there yet. For a primer, see Ai Deng and Cristián Hernández, “Algorithmic pricing in horizontal merger review: An initial assessment” (2022) Antitrust, 36(2): 36–41.

30 See Case C-8/08 T-Mobile Netherlands v Nederlandse Mededingingsautoriteit EU:C:2009:343, para 23 (“the definitions of ‘agreement’ … and ‘concerted practice’ are intended, from a subjective point of view, to catch forms of collusion having the same nature which are distinguishable from each other only by their intensity and the forms in which they manifest themselves”).

31 Case C-345/14 Maxima Latvija v Konkurences padome EU:C:2015:784, para 18.

32 Footnote Ibid., para 20.

33 This has been well documented in the case of the lysine cartel, where an executive from one of the firms served as FBI informant, making up to 300 audio and video recordings of cartel-related meetings. The picture that emerges is one of constant distrust between the cartelists. See John Connor, “‘Our customers are our enemies’: The Lysine Cartel of 1992–1995” (2001) Review of Industrial Organization, 18: 5.

34 Salil Mehra, “Antitrust and the robo-seller: Competition in the time of algorithms” (2016) Minnesota Law Review, 100: 1323–1375, 1348–49.

35 Note that quicker detection of deviations only works at the retail (B2C) level, where prices tend to be transparent. In addition to quicker detection of deviations, the use of algorithms also reduces the chance of errors and accidental deviations. See CMA, “Pricing algorithms” (Economic Working Paper) 2018, paras 5.7–5.11.

36 E-commerce Sector Inquiry (n 17), para 33.

37 These three scenarios are in line with Autorité de la concurrence and Bundeskartellamt, “Algorithms and competition” (Report) 2019, 26–60 and Autoridade da Concorrência, “Digital ecosystems, big data and algorithms” (Issues Paper) 2019, paras 243–275.

38 CMA, Posters (n 5). For the equivalent U.S. case, see U.S. District Court for the Northern District of California, Case 3:15-cr-00419-WHO, United States v Daniel Aston, August 11, 2016. The U.S. Department of Justice (DOJ) pursued a similar case earlier, see U.S. District Court for the Northern District of California, Case 3:15-cr-00201-WHO, United States v David Topkins, April 30, 2015. Both U.S. cases ended with a plea agreement.

39 On the availability and operation of such software, see Autoridade da Concorrência, “Digital ecosystems” (n 37), paras 208–221.

40 See, for example, CMA, Posters (n 5), para 3.83, quoting a message from a Trod employee to a GB employee: “nearly all posters you are undercutting, so presume your software is broken, so had to remove you from ignore list. Let me know when repaired.”

41 Zach Brown, “Competition in pricing algorithms” (2021) NBER Working Paper 28860, including both formal and empirical analysis. See also Autorité de la concurrence and Bundeskartellamt, “Algorithms” (n 37), 43–44.

42 The commitment needs to be credible. Brown argues that investments of a high-technology firm in the frequency and automation of its price-setting make its commitment credible. Note that the logic is similar to that of price-matching guarantees.

43 The mechanism is similar but not equal to that of the German petrol stations studied in Assad et al., “Algorithmic pricing and competition” (n 24). In a duopoly setting, Assad et al. find evidence for price effects only when both firms adopt superior pricing technology, which suggests that the mechanism in their setting is collusion or symmetric commitment.

44 On Uber’s pricing, see www.uber.com/us/en/marketplace/pricing/. Note that other platforms do offer pricing tools: Airbnb, for example, offers “Smart Pricing,” which automatically adapts hosts’ nightly prices to demand, see, www.airbnb.co.uk/help/article/1168.

45 Vogell, “Rent going up?” (n 8).

46 For a similar example, see Daniel Mândrescu, “When algorithmic pricing meets concerted practices – the case of Partneo” CoRe Blog (June 7, 2018), www.lexxion.eu/coreblogpost/when-algorithmic-pricing-meets-concerted-practices-the-case-of-partneo/ (on a pricing algorithm for auto parts, including allegations of clandestine meetings between certain auto makers).

47 Advocate General Szpunar already suggested the hub-and-spoke qualification for Uber in Case C-434/15 Asociación Profesional Elite Taxi v Uber Systems Spain EU:C:2017:364, para 62 and footnote 23. Another potential qualification is that of cartel facilitator, as in Case C-194/14 P AC-Treuhand v Commission EU:C:2015:717, but that qualification appears more suited to firms (such as consultancies) that operate on a completely different market.

48 Case C-74/14 Eturas v Lietuvos Respublikos konkurencijos taryba EU:C:2016:42. Similar cases have been pursued at the national level, see, for example, Comisión Nacional de los Mercados y la Competencia, "The CNMC fines several companies EUR 1.25 million for imposing minimum commissions in the real estate brokerage market" (press release, December 9, 2021), www.cnmc.es/expedientes/s000320 (concerning a real estate platform that imposed minimum commissions of 4% on agencies).

49 Case C-74/14 Eturas (n 48), para 10.

50 Footnote Ibid., para 25.

51 Footnote Ibid., para 27, referencing Case C-8/08 T-Mobile (n 30), paras 32–33.

52 Case C-74/14 Eturas (n 48), para 28.

53 Footnote Ibid., para 39.

54 Footnote Ibid., paras 40–41. Travel agencies can rebut the presumption “for example by proving that they did not receive that message or that they did not look at the section in question or did not look at it until some time had passed since that dispatch.”

55 Footnote Ibid., paras 42 and 44. Note that an illegal concerted practice requires not only concertation but also “subsequent conduct on the market and a relationship of cause and effect between the two,” see C-286/13 P Dole Food v Commission EU:C:2015:184, para 126.

56 Case C-74/14 Eturas (n 48), paras 46–49.

57 Heather Vogell, “Department of Justice opens investigation into real estate tech company accused of collusion with landlords” ProPublica (November 23, 2022), www.propublica.org/article/yieldstar-realpage-rent-doj-investigation-antitrust.

58 U.S. District Court for the Southern District of New York, Case 15 Civ. 9796, Spencer Meyer v Travis Kalanick, March 31, 2016 (the judge believed there to be a hub-and-spoke cartel but Uber managed to move the case to arbitration). CADE, Technical Note No. 26/2018/CGAA4/SGA1/CADE, Public Ministry of the State of São Paulo v Uber do Brasil Tecnologia (the authority did not find sufficient concertation between drivers; simply accepting Uber’s terms and conditions did not suffice).

59 In addition to concertation, there is also subsequent conduct, that is, drivers follow Uber’s pricing (they cannot deviate from it).

60 See further EC, Guidelines on the application of Article 81(3) of the Treaty (Communication) OJ C101/97.

61 Conseil de la Concurrence Grand-Duché de Luxembourg, Case 2018-FO-01, Webtaxi, June 7, 2018. The authority found the pricing restriction proportional given that it was indispensable to realize the efficiencies and there was no less restrictive way of doing so. Competition was not eliminated because Webtaxi represented only a quarter of Luxembourg cabs.

62 On reinforcement and Q-learning in a pricing context, see Ashwin Ittoo and Nicolas Petit, “Algorithmic pricing agents and tacit collusion: A technological perspective” in Hervé Jacquemin and Alexandre de Streel (eds), L’Intelligence Artificielle et le Droit (Larcier, 2017) 247–256.

63 Bruno Salcedo, “Pricing algorithms and collusion” (2015), available at https://brunosalcedo.com/docs/collusion.pdf.

64 Calvano et al., “Artificial intelligence” (n 7).

65 See Autorité de la concurrence and Bundeskartellamt, “Algorithms” (n 37), 45–52 for a discussion of the assumptions underlying the research of Calvano et al. and other experimental studies.

66 See Richard Posner, “Oligopoly and the antitrust laws: A suggested approach” (1968) Stanford Law Review, 21: 1562.

67 Organisation for Economic Co-operation and Development (OECD), “Algorithms and collusion: Competition policy in the digital age” (Background Note) 2017, 35–36.

68 Case 48–69 Imperial Chemical Industries (ICI) v Commission EU:C:1972:70, para 64.

69 Case C-74/14 Eturas (n 48), para 27; Case C-8/08 T-Mobile (n 30), paras 32–33.

70 Joined cases 40–48, 50, 54–56, 111, 113 and 114–73 Suiker Unie v Commission EU:C:1975:174, para 174.

71 Joined cases C-89/85, C-104/85, C-114/85, C-116/85, C-117/85 and C-125/85 to C-129/85 A. Ahlström Osakeyhtiö v Commission (‘Wood Pulp II’) EU:C:1993:120, para 71. Earlier case law was less strict, see Case 48–69 ICI (n 68), para 66 (“Although parallel behaviour may not by itself be identified with a concerted practice, it may however amount to strong evidence of such a practice if it leads to conditions of competition which do not correspond to the normal conditions of the market”).

72 Container Shipping (Case AT.39850) Commission Decision of 7 July 2016. Note that the case ended with commitments so there is no final decision, let alone a judgment confirming it.

73 Footnote Ibid., paras 45–47.

74 For a well-considered proposal, situated in the U.S. context, see Joseph Harrington, ‘Developing competition law for collusion by autonomous artificial agents’ (2018) Journal of Competition Law & Economics, 14: 331, in particular Section 6.

75 Case 243/83 Binon EU:C:1985:284, para 44 and Case 27/87 Louis Erauw-Jacquery v La Hesbignonne EU:C:1988:183, para 15. RPM constitutes a “hardcore” restriction under Commission Regulation (EU) 2022/720 on the application of Article 101(3) of the Treaty on the Functioning of the European Union to categories of vertical agreements and concerted practices [2022] OJ L134/4, art 4(a). See further EC, “Guidelines on vertical restraints” (Communication) OJ C248/1, paras 185–201. Note that maximum prices are treated similarly to recommended resale prices and minimum resale prices similarly to RPM.

76 Asus (Case AT.40465) Commission Decision of 24 July 2018, Denon & Marantz (Case AT.40469) Commission Decision of 24 July 2018, Philips (Case AT.40181) Commission Decision of 24 July 2018, and Pioneer (Case AT.40182) Commission Decision of 24 July 2018. For an overview, see EC, “Commission fines four consumer electronics manufacturers for fixing online resale prices” (press release, July 24, 2018) IP/18/4601.

77 Authority for Consumers & Markets (ACM), Case ACM/20/040569, Samsung, September 14, 2021.

78 Spider software crawls the web to collect price data from different sources.
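To make the mechanics concrete, the following is a minimal, purely illustrative sketch (in Python) of what such a price-crawling spider does: it fetches a set of product pages and extracts the displayed price from each. The URLs, the CSS selector, and the function name are hypothetical placeholders rather than a description of any specific tool discussed in this chapter.

```python
# Minimal, illustrative price-crawling "spider" (hypothetical URLs and selector).
from typing import Optional

import requests
from bs4 import BeautifulSoup

COMPETITOR_PAGES = [
    "https://example.com/shop/product-123",  # hypothetical competitor listing
    "https://example.org/store/item-123",    # hypothetical competitor listing
]


def fetch_price(url: str) -> Optional[str]:
    """Download a product page and return the text of its price element, if any."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")
    price_tag = soup.select_one(".price")  # hypothetical CSS selector for the price element
    return price_tag.get_text(strip=True) if price_tag else None


if __name__ == "__main__":
    for page in COMPETITOR_PAGES:
        print(page, fetch_price(page))
```

Commercial repricing tools of the kind referred to in this chapter typically layer scheduling, data storage and automated price adjustments on top of this basic collection step.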

79 On choice architecture, see CMA, “Online choice architecture: How digital design can harm competition and consumers” (Discussion Paper) 2022. Ranking (paras 4.35–4.41) is only one aspect of choice architecture, defaults (paras 4.27–4.34) are another powerful tool.

80 Google Search (Shopping) (Case AT.39740) Commission Decision of 27 June 2017, confirmed in Case T-612/17 Google and Alphabet v Commission EU:T:2021:763. For a discussion, see Friso Bostoen, “The General Court’s Google Shopping judgment: Finetuning the legal qualifications and tests for platform abuse” (2022) Journal of European Competition Law & Practice, 13: 75.

81 Google Search (Shopping) (n 80), paras 454–461. See also CMA, “Online search: Consumer and firm behaviour” (Literature Review) 2017.

82 This appeared to be the case. By way of illustration, one Google executive wrote that Froogle (then the name of Google’s CSS) “simply doesn’t work,” see Google Search (Shopping) (n 80), para 490.

83 Ranking is but one method of algorithmic exclusion. For a discussion of other methods (including defaults), see Thomas Cheng and Julian Nowag, “Algorithmic predation and exclusion” (2023) University of Pennsylvania Journal of Business Law, 25: 41.

84 Amazon does not only promote its own products but also those of third-party sellers that use its “Fulfilled by Amazon” logistics service. See EC, “Commission accepts commitments by Amazon barring it from using marketplace seller data, and ensuring equal access to Buy Box and Prime” (press release, December 20, 2022) IP/22/7777 and AGCM, “Amazon fined over € 1,128 billion for abusing its dominant position” (press release, December 9, 2021), https://en.agcm.it/en/media/press-releases/2021/12/A528.

85 Bundeskartellamt, “Extension of ongoing proceedings against Amazon to also include an examination pursuant to Section 19a of the German Competition Act” (press release, November 14, 2022), www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2022/14_11_2022_Amazon_19a.html.

86 Regulation (EU) 2022/1925 of the European Parliament and of the Council on contestable and fair markets in the digital sector (Digital Markets Act) [2022] OJ L265/1, art 6(5). Other provisions are also relevant from a choice architecture point of view, see, for example, arts 6(3)–(4) on defaults. “Gatekeepers” are defined in art 3.

87 Directive 2011/83/EU of the European Parliament and of the Council on consumer rights [2011] OJ L304/64 (as amended by Directive (EU) 2019/2161 of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules [2019] OJ L328/7), art 6a(1)(a). See further EC, “Guidance on the interpretation and application of Directive 2011/83/EU” (Notice) [2021] OJ C525/1, Section 3.4.1.

88 Regulation (EU) 2019/1150 of the European Parliament and of the Council on promoting fairness and transparency for business users of online intermediation services [2019] OJ L186/57, art 5. See further EC, “Guidelines on ranking transparency pursuant to Regulation (EU) 2019/1150” (Notice) [2020] OJ C424/1.

89 Regulation (EU) 2022/2065 of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) [2022] OJ L277/1 also regulates recommender systems, see, for example, art 27 on transparency.

90 ACM, “Following ACM actions, Wish bans fake discounts and blocks personalized pricing” (press release, July 26, 2022), www.acm.nl/en/publications/following-acm-actions-wish-bans-fake-discounts-and-blocks-personalized-pricing.

91 Rather, the ACM referenced the CRD (n 87), discussed further infra.

92 Article 102(c) TFEU prohibits “applying dissimilar conditions to equivalent transactions with other trading parties, thereby placing them at a competitive disadvantage.” Given that the list of potential abuses is non-exhaustive, this framing of price discrimination is not necessarily limiting.

93 See OECD, “Personalised pricing in the digital era” (note by the European Union) DAF/COMP/WD(2018)128, 9–12.

94 Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) OJ L119/1. See further Richard Steppe, “Online price discrimination and personal data: A general data protection regulation perspective” (2017) Computer Law & Security Review, 33: 768.

95 Digital Markets Act (n 86), art 6(a) on data collection, combination and cross-use.

96 Directive 2005/29/EC of the European Parliament and of the Council concerning unfair business-to-consumer commercial practices in the internal market (Unfair Commercial Practices Directive) [2005] OJ L149/22. A personalized price may, for example, be “aggressive” or an exertion of “undue influence” under arts 8–9, see further EC, “Guidance on the interpretation and application of Directive 2005/29/EC” (Notice) OJ C526/1, Section 4.2.8.

97 CRD (n 87), art 6(1)(ea). See further CRD Guidance (n 87), Section 3.3.1.

98 P2B Regulation (n 87), arts 7 and 9.

99 DSA (n 89).

100 Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).

101 CMA, “Pricing algorithms” (n 35), paras 2.13–2.20.

102 See Axel Gautier, Ashwin Ittoo, and Pieter Van Cleynenbreugel, “AI algorithms, price discrimination and collusion: A technological, economic and legal perspective” (2020) European Journal of Law and Economics, 50: 405.

103 DOJ, “Former E-Commerce executive charged with price fixing in the antitrust division’s first online marketplace prosecution” (press release, April 6, 2015), www.justice.gov/opa/pr/former-e-commerce-executive-charged-price-fixing-antitrust-divisions-first-online-marketplace. See similarly Vestager, “Algorithms and competition” (n 21) (“companies can’t escape responsibility for collusion by hiding behind a computer program”).

104 See Joseph Harrington and David Imhof, “Cartel screening and machine learning” (2022) Stanford Computational Antitrust, 2: 133.

105 “Price-bots can collude against consumers” (n 2).

10 AI and Consumer Protection An Introduction

1 Agnieszka Jabłonowska, Anna Maria Nowak, Giovanni Sartor, Hans-W. Micklitz, Maciej Kuziemski, and Przemysław Pałka (EUI working papers), "Consumer law and artificial intelligence: Challenges to the EU consumer law and policy stemming from the business' use of artificial intelligence – final report of the ARTSY project" (2018), https://ssrn.com/abstract=3228051, accessed December 23, 2022, 7; Martin Ebers, "Liability for AI & consumer law" (2021) JIPITEC, 12: 206.

2 Jabłonowska et al., "Consumer law and AI" 5 and 36.

3 Jabłonowska et al., "Consumer law and AI" 49.

4 Jabłonowska et al., "Consumer law and AI" 5.

5 CMA, "Online platforms and digital advertising: Market study final report" (July 1, 2020), www.gov.uk/cma-cases/online-platforms-and-digital-advertising-market-study#final-report, accessed December 23, 2022, 16; Jabłonowska et al., "Consumer law and AI" 5.

6 Ebers, "Liability for AI & consumer law" 208; Giovanni Sartor, "Artificial intelligence: Challenges for EU citizens and consumers" (January 2019), www.europarl.europa.eu/RegData/etudes/BRIE/2019/631043/IPOL_BRI(2019)631043_EN.pdf, accessed December 23, 2022, 5.

7 European Commission, DG Justice and Consumers, Francisco Lupiáñez-Villanueva, Alba Boluda, Francesco Bogliacino et al., "Behavioural study on unfair commercial practices in the digital environment: Dark patterns and manipulative personalisation: final report" (2022) Publications Office of the European Union, https://data.europa.eu/doi/10.2838/859030, 73; Ebers, "Liability for AI & consumer law" 208.

8 EC, “Behavioural study” 103.

9 Ebers, “Liability for AI & consumer law” 212; CMA, “Digital advertising” 64; Brent Mittelstadt, Johann Laux, and Sandra Wachter, “Neutralizing online behavioural advertising: Algorithmic targeting with market power as an unfair commercial practice” (2021) Common Market Law Review, 58: 719.

10 OECD, "Dark commercial patterns" (2022) OECD Digital Economy Papers No. 336, OECD Publishing, 9.

11 Sartor, "AI: Challenges for EU citizens and consumers" 14.

12 OECD, “Dark commercial patterns” 12; CMA, ‘Algorithms: how they can reduce competition and harm consumers’ (January 19, 2021), www.gov.uk/government/publications/algorithms-how-they-can-reduce-competition-and-harm-consumers/algorithms-how-they-can-reduce-competition-and-harm-consumers, accessed December 23, 2022.

13 Sartor, “AI: challenges for EU citizens and consumers” 3.

14 CMA, "Algorithms – how they can harm consumers"; Iqbal H. Sarker, "Machine learning: Algorithms, real-world applications and research directions" (2021) SN Computer Science, 2: 160.

15 CMA, “Algorithms – how they can harm consumers.”

16 Sartor, “AI: Challenges for EU citizens and consumers” 18.

17 Abhimanyu S. Ahuja, "The impact of artificial intelligence in medicine on the future role of the physician" (2019) PeerJ 12; Louise I. T. Lee, Radha S. Ayyalaraju, Rakesh Ganatra, and Senthooran Kanthasamy, "The current state of artificial intelligence in medical imaging and nuclear medicine" (2019) BJR Open 5.

18 For more examples, see Jabłonowska et al., "Consumer law and AI" 19 et seq.

19 Jabłonowska et al., "Consumer law and AI" 33.

20 Geraint Howells, Ian Ramsay, and Thomas Wilhelmsson, “Consumer law in its international dimension” in G. Howells and T. Wilhelmsson (eds), Handbook of Research in International Consumer Law, 2nd ed (Edward Elgar Publishing, 2018), 4.

21 Howells, Ramsay, and Wilhelmsson, “Consumer law in its international dimension” 4.

22 Frank Trentmann, Empire of Things: How We Became a World of Consumers, from the Fifteenth Century to the Twenty-First (HarperCollins, 2016).

23 Howells, Ramsay, and Wilhelmsson, “Consumer law in its international dimension,” 4–6.

24 On the emergence of consumer law in the EU, see more elaborately H.-W. Micklitz et al. (eds), The Fathers and Mothers of Consumer Law and Policy in Europe: The Foundational Years 1950–1980 (2019), EUI, https://cadmus.eui.eu/handle/1814/63766, accessed February 22, 2023.

25 Council Resolution of 14 April 1975 on a preliminary programme of the European Economic Community for a consumer protection and information policy [1975] OJ C 92/1; Council Resolution of 19 May 1981 on a second programme of the European Economic Community for a consumer protection and information policy [1981] OJ C 133/1; see, in more detail, Ludwig Krämer, "European Commission" in Micklitz, The Fathers and Mothers of Consumer Law, 26 ff.

26 See also H.-W. Micklitz, “Squaring the circle? Reconciling consumer law and the circular economy” (2019) EuCML 229, pointing out that the protective element faded into the background when the EU took over consumer policy in the aftermath of the Single European Act.

27 On the omnipresent risk of manipulation of such interests and preferences, see Cass Sunstein, "Fifty shades of manipulation" (2016) Journal of Marketing Behavior, 1: 213.

28 Most EU consumer legislation indeed tends to be based on internal market justifications, see Howells, Ramsay, and Wilhelmsson, “Consumer law in its international dimension,” 9. See also the legal basis used for most consumer protective directives: Art 114 TFEU rather than Art 169 TFEU.

29 Howells, Ramsay, and Wilhelmsson, “Consumer law in its international dimension,” 35.

30 Ugo Mattei and Alessandra Quarta, The Turning Point in Private Law (Edward Elgar Publishing, 2019) 95.

31 On the information paradigm that plays a central role in EU consumer policy, see among others: Norbert Reich and H.-W. Micklitz, “Economic law, consumer interests and EU integration” in Norbert Reich et al. (eds), European Consumer Law (Intersentia, 2014) 1, 21; Steven Weatherill, EU Consumer Law and Policy (Edward Elgar Publishing, 2013) ch 4.

32 In this sense, see Josef Drexl, Die wirtschaftliche Selbstbestimmung des Verbrauchers (Mohr Siebeck, 1998).

33 See, among others, for insights from behavioral sciences: Geneviève Helleringer and Anne-Lise Sibony, "European consumer protection through the behavioral lens" (2017) Columbia Journal of European Law, 23(3): 607–646.

34 The same remark can be made from a sustainability perspective.

35 Most prominently in the UCPD, see arts. 5–9 and Recital 18 UCPD. See, however, also the case law with regard to the UCTD, where the benchmark of the average consumer is invoked to determine the transparency of contract terms, for example, Case C-348/14 Bucura, para. 66; Case C-26/13 Kásler and Káslerné Rábai, para. 73–74.

36 Recital 18 UCPD and see Case C-210/96 Gut Springenheide and Tusky [1998] ECR I-4657, para 3.

37 Commission Notice – Guidance on the interpretation and application of Directive 2005/29/EC of the European Parliament and of the Council concerning unfair business-to-consumer commercial practices in the internal market (“Guidance UCPD”), C/2021/9320, point 2.5.

38 See, for example, Jason Cohen, "Bringing down the average: The case for a less sophisticated reasonableness standard in US and EU consumer law" (2019) Loyola Consumer Law Review, 32: 1, 2; Rossella Incardona and Cristina Poncibò, "The average consumer, the unfair commercial practices directive, and the cognitive revolution" (2007) Journal of Consumer Policy, 30: 36.

39 See, for criticism on this point, among others, Martijn Hesselink, "EU private law injustices" (2022) Yearbook of European Law, 1: 22–23.

40 Art. 5(3) UCPD. The concrete application of these benchmarks is discussed in more detail below (Section 10.5, Dark patterns).

41 So a consumer can be vulnerable in one situation but not in another, see Guidance UCPD, points 2.6 and 4.2.7.

42 Guidance UCPD, points 2.6 and 4.2.7.

43 Natali Helberger, Orla Lynskey, H.-W. Micklitz, Peter Rott, Marijn Sax, and Joanna Strycharz, “EU Consumer Protection 2.0. Structural asymmetries in digital consumer markets,” (March 2021), www.beuc.eu/sites/default/files/publications/beuc-x-2021-018_eu_consumer_protection_2.0.pdf, p. 5.

44 For recommendations on further-reaching interventions, among others in the form of additional prohibited practices, a reversal of the burden of proof for the fairness of data exploitation strategies, and the concretization of legal benchmarks, see Helberger et al., “Structural asymmetries” 79.

45 See in the same sense Helberger et al., “Structural asymmetries.”

46 For a plea to move away from a silo approach, see Christof Koolen, “Consumer protection in the age of artificial intelligence: Breaking down the silo mentality between consumer, competition and data,” to be published in ERPL 2023; similarly: Wolfgang Kerber, “Digital markets, data, and privacy: Competition law, consumer law and data protection” (2016) Journal of Intellectual Property Law & Practice, 865–866.

47 Opinion of Advocate General J. Richard de la Tour, Case C-319/20 Meta Platforms Ireland, para. 81.

48 Decision of BGH of 23 June 2020, KVR 69/19.

49 The case involved the use of data collected on and off Facebook to provide Facebook consumers with personalized services. It was held that consumers had no real possibility to refuse such personalized services and the collection of off-Facebook data, as refusing was only possible by completely giving up access to Facebook services. See, for a more detailed analysis, Marco Loos and Joasia Luzak, Study of the European Parliament. Update the unfair contract terms directive for digital services (2021), www.europarl.europa.eu/RegData/etudes/STUD/2021/676006/IPOL_STU(2021)676006_EN.pdf, 31–32.

50 Case C-319/20 Meta Platforms Ireland.

51 Extra-contractual liability is not covered in this contribution, and we refer to the contribution of Jan De Bruyne and Wannes Ooms in Chapter 8 of this book.

52 Concretely: The Unfair Commercial Practices Directive 2005/29/EC (“UCPD”); the Consumer Rights Directive 2011/83/EU (“CRD”); the Unfair Contract Terms Directive 93/13/EEC (“UCTD”).

54 Giovanni Sartor, IMCO committee study, “New aspects and challenges in consumer protection: Digital services and artificial intelligence,” 2020, pp. 36–37; Guidance UCPD, point 4.2.7.

55 Art. 5 (2) UCPD. See for the (limited) possibilities to take the vulnerable consumer as a benchmark, above point 10.3.3 and below point 10.5.2.

56 Arts. 6–7 UCPD.

57 Art. 8 UCPD.

58 See, for example, the analysis of Johann Laux, Brent Mittelstadt, and Sandra Wachter, “Neutralizing online behavioural advertising: Algorithmic targeting with market power as an unfair commercial practice” (2021) Common Market Law Review, 58.

59 See also the conclusion of the European Commission, DG for Justice and Consumers, Francisco Lupiáñez-Villanueva, Alba Boluda, Francesco Bogliacino et al., “Behavioural study on unfair commercial practices in the digital environment: Dark patterns and manipulative personalisation: final report,” Publications Office of the European Union, 2022, https://data.europa.eu/doi/10.2838/859030.

60 Sartor, “Digital services and artificial intelligence,” 2020, 36–37.

61 With limited exceptions, inter alia with regard to information obligations for on-premises contracts, see art. 5 CRD.

62 See arts. 5 and 6 CRD, as amended by the Modernization Directive.

63 Ebers, “Liability for AI & consumer law,” 210.

64 Directive (EU) 2019/2161 of the European Parliament and of the Council of 27 November 2019 amending Council Directive 93/13/EEC and Directives 98/6/EC, 2005/29/EC and 2011/83/EU of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules, OJ L 328, 18.12.2019.

65 Recital 17 Modernization Directive.

66 The directive had to be implemented by November 28, 2021. The implementing provisions had to be applied from May 28, 2022 (art. 7 Modernization Directive).

67 Loos and Luzak, “Unfair contract terms for digital services,” 30.

68 Ibid.; see also, critically, Agustin Reyna, “The price is (not) right: The perils of personalisation in the digital economy,” InformaConnect, January 4, 2019, https://informaconnect.com/the-price-is-not-right-the-perils-of-personalisation-in-the-digital-economy/.

69 Art. 3 (1) UCTD.

70 Art. 6 UCTD.

71 Cases C-74/15 Dumitru Tarcău and C-534/15 Dumitraş.

72 Commission notice – Guidance on the interpretation and application of Council Directive 93/13/EEC on unfair terms in consumer contracts, OJ C 323, 27.9.2019, pp. 4–92, point 1.2.1.2.

73 Art. 8 UCTD.

74 For a detailed analysis on the possibilities and shortcomings of the UCTD in a digital context, see: Loos and Luzak, “Unfair contract terms for digital services.”

75 Art. 4 (2) UCTD.

76 Art. 4(2) UCTD.

77 See Loos and Luzak, “Unfair contract terms for digital services,” 31. The authors propose to introduce a presumption of unfairness, implying that personalized prices and terms are discriminatory and therefore unfair.

78 Art. 6(1) UCTD.

79 Art. 2(5) and art. 3(3) CSD.

80 For a detailed analysis, see Piia Kalamees, “Goods with digital elements and the seller’s updating obligation” (2021) JIPITEC, 12: 131; Hugh Beale, “Digital content directive and rules for contracts on continuous supply” (2021) JIPITEC, 12: 96.

81 Art. 54 Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act) [2022] OJ L265/1. Note that Article 3(6) and (7) and Articles 40, 46, 47, 48, 49, and 50 shall apply from November 1, 2022 and Articles 42 and 43 shall apply from June 25, 2023.

82 Recitals 2, 4, and 34 DMA.

83 Recitals 6 and 15 DMA.

84 Art. 6 (3) DMA.

85 Art. 6(5) DMA.

86 Art. 5 (9) and art. 6(8) DMA.

87 Rupprecht Podszun, “The Digital Markets Act: What’s in it for consumers?” (2022) EuCML, 3–5.

88 Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market for Digital Services and amending Directive 2000/31/EC (Digital Services Act) [2022] OJ L277/1.

89 Article 93 DSA. However, Article 24(2), (3), and (6), Article 33(3) to (6), Article 37(7), Article 40(13), Article 43 and Sections 4, 5, and 6 of Chapter IV shall apply from November 16, 2022.

90 Art. 1 DSA.

91 European Commission, “The Digital Services Act package” (November 24, 2022), https://digital-strategy.ec.europa.eu/en/policies/digital-services-act-package, accessed on December 24, 2022.

92 Art. 26 DSA; see also art. 39 DSA for additional transparency obligations for very large online platforms.

93 Art. 28(2) DSA.

94 Art. 26(3) DSA.

95 Art. 3(s) and art. 27 DSA.

96 Art. 34 DSA. For a discussion of this risk assessment requirement, see also Chapter 14 of this book on AI and Media by Lidia Dutkiewicz, Noémie Krack, Aleksandra Kuczerawy, and Peggy Valcke.

97 Art. 1(a) Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) (“AI Act”).

98 Explanatory memorandum, AI Act Proposal COM (2021) 206 final, 12; Recital 26 AI Act.

99 Art. 5(1) (a) AI Act.

100 Art. 99 AI Act.

101 Art. 65 AI Act.

102 See BEUC, Position Paper on the AI Act. Regulating AI to protect the consumer, www.beuc.eu/sites/default/files/publications/beuc-x-2021-088_regulating_ai_to_protect_the_consumer.pdf. See in this regard also Nathalie A. Smuha, Emma Ahmed-Rengers, Adam Harkens, Wenlong Li, James MacLaren, Riccardo Piselli, and Karen Yeung, “How the EU can achieve legally trustworthy AI: A response to the European Commission’s proposal for an Artificial Intelligence Act,” http://dx.doi.org/10.2139/ssrn.3899991.

103 Natali Helberger, Hans-W. Micklitz, and Peter Rott, The Regulatory Gap: Consumer Protection in the Digital Economy, 2021, p. 36, www.beuc.eu/sites/default/files/publications/beuc-x-2021-116_the_regulatory_gap-consumer_protection_in_the_digital_economy.pdf.

104 OECD, “Dark commercial patterns” (2022) OECD Digital Economy Papers, No. 336, OECD Publishing, 8.

105 Guidance UCPD 101; European Commission, Directorate-General for Justice and Consumers, Francesco Bogliacino, Alba Boluda, Francisco Lupiáñez-Villanueva et al., “Behavioural study on unfair commercial practices in the digital environment: dark patterns and manipulative personalization: final report” (2022) Publications Office of the European Union, https://data.europa.eu/doi/10.2838/859030, 6; Jamie Luguri and Lior Strahilevitz, “Shining a light on dark patterns” (2021) Journal of Legal Analysis, 44.

106 Luguri and Strahilevitz, “Dark patterns” 55 and 58; Lupiáñez-Villanueva et al., “Behavioural study” 64.

107 Luguri and Strahilevitz, “Dark patterns” 47; Lupiáñez-Villanueva et al., “Behavioural study” 105.

108 For example, by claiming that a product or service is only available for a limited time, or by communicating that the offer will soon lapse, in order to pressure the consumer to make a purchase, Guidance UCPD, 101; Luguri and Strahilevitz, “Dark patterns” 53 and 100.

109 Luguri and Strahilevitz, “Dark patterns” 53, 55, and 58.

110 OECD, “Dark commercial patterns” 9.

111 See, for example, referring to YouTube: Zakary Kinnaird, “Dark patterns powered by machine learning: An intelligent combination” (October 13, 2020) https://uxdesign.cc/dark-patterns-powered-by-machine-learning-an-intelligent-combination-f2804ed028ce, accessed February 3, 2023.

115 Article 2(d) UCPD refers to “any act, omission, course of conduct or representation, commercial communication including marketing, by a trader, directly connected with the promotion, sale or supply of a product to consumers.”

116 Art. 5 UCPD, Guidance UCPD 46.

117 Guidance UCPD 31.

118 See above, Section 3.3.

119 See in any event in this sense, Guidance UCPD, point 4.2.7.

120 Lupiáñez-Villanueva et al., “Behavioural study” 72.

121 Guidance UCPD, points 2.6, 35.

122 Lupiáñez-Villanueva et al., “Behavioural study” 72.

123 Sartor, “Digital services and artificial intelligence,” 36–37.

124 Annex I UCPD; currently 35 practices are listed.

125 Case C-435/11 CHS Tour Services GmbH v Team4 Travel GmbH [2013] ECR I-00057, §45.

126 Practice 11 Annex I UCPD.

127 Practice 7 Annex I UCPD, Commission guidance, point 4.2.7.

128 Practice 5 (bait) and 6 (bait and switch) Annex I UCPD. The provisions in essence prohibit making offers when the trader knows that he will probably not be able to meet the demand (bait advertising) or making offers at a specified price and then refusing to deliver the product (on time) with the intention of promoting a different product (bait and switch).

129 Practice 28 Annex I UCPD.

130 Practice 26 Annex I UCPD. Commission guidance, point 4.2.7.

131 BEUC, “Dark Patterns and the EU consumer law acquis: Recommendations for better enforcement and reform” (February 7, 2022), www.beuc.eu/sites/default/files/publications/beuc-x-2022-013_dark_patters_paper.pdf, accessed December 23, 2022, 9; Lupiáñez-Villanueva et al., “Behavioural study” 66.

132 Lupiáñez-Villanueva et al., “Behavioural study” 122.

133 Ibid., 122.

134 Recital 67 DSA: “Practices that materially distort or impair, either purposefully or in effect, the ability of recipients of the service to make autonomous and informed choices or decisions.”

135 Art. 25(1) DSA.

136 Recital 67 DSA.

137 Recital 67 DSA.

138 Art. 5 AI Act.

139 Art. 5(1)(a) and (b) AI Act.

140 Lupiáñez-Villanueva et al., “Behavioural study” 83; Catalina Goanta, “Regulatory siblings: The Unfair Commercial Practices Directive roots of the AI Act” in I. Graef and B. van der Sloot (eds), The Legal Consistency of Technology Regulation in Europe (Hart Publishing, 2024), pp. 71–88.

141 See in this regard Rostam Josef Neuwirth, The EU Artificial Intelligence Act Regulating Subliminal AI Systems (Routledge, 2023).

142 Recital 9 AI Act.

143 Information provided to consumers before the conclusion of a contract in distance contracts must be presented in a clear and understandable manner, pursuant to Art. 8 (1) CRD; see also BEUC, “Dark Patterns,” 9.

144 Art. 33 CRD.

145 Art. 3(3) (d) CRD.

146 Guidance UCPD, point 4.2.7.

147 The CRD was amended by Directive (EU) 2023/2673 of 22 November 2023 amending Directive 2011/83/EU as regards financial services contracts concluded at a distance and repealing Directive 2002/65/EC, OJ L, 2023/2673, 28.11.2023. This new article 11a must be transposed by 19 December 2025 and applied from 19 June 2026.

11 Artificial Intelligence and Intellectual Property Law

1 See, for example, Christian Hartmann, Jacqueline Allan, P. Bernt Hugenholtz, João Pedro Quintais, and Daniel Gervais, “Trends and developments in artificial intelligence. Challenges to the intellectual property rights framework,” Brussels, 2020, https://bit.ly/3XgBPPa; Reto Hilty, Jyh-An Lee, and Kung-Chung Liu, Artificial Intelligence and Intellectual Property (Oxford University Press, 2021); Ryan Abbott (ed), Research Handbook on Intellectual Property and Artificial Intelligence (Edward Elgar Publishing, 2022); Larry A DiMatteo, Cristina Poncibò, and Michel Cannarsa (eds), The Cambridge Handbook of Artificial Intelligence. Global Perspectives on Law and Ethics (Cambridge University Press, 2022), pp. 87–160; Jozefien Vanherpe, “AI and IP: Great Expectations” in Jan De Bruyne and Cedric Vanleenhove (eds), Artificial Intelligence and the Law (2nd ed, Intersentia, 2023) pp. 233–267; Anke Moerland, “Intellectual property law and AI” in Ernest Lim and Phillip Morgan (eds), The Cambridge Handbook of Private Law and Artificial Intelligence (Cambridge University Press, 2024), 362–83.

2 Article 63 European Patent Convention (EPC).

3 The definition of “infringement” is left to national law, see Article 64(3) EPC.

4 See from a US perspective, https://tinyurl.com/37a763c3, accessed August 14, 2024.

5 Articles 52–53 EPC.

6 EPO, “Guidelines for Examination, Part G, Chapter II, 3.3.1,” https://bit.ly/3SNGMyG, accessed August 14, 2024.

7 EBA Decision 10 March 2021 re patent application 03793825.5, G 0001/19, https://bit.ly/31o8x9g, accessed August 14, 2024.

8 EPO, “Guidelines for Examination, Part G, Chapter II, 3.3.1,” 2018, https://bit.ly/3BQb8W9, accessed August 14, 2024.

9 Articles 52 juncto 54–57 EPC.

10 Articles 54–55 EPC. In case priority is claimed, the relevant date is the priority date.

11 Article 56 EPC. In determining whether a certain invention involves inventive step (and is therefore not “obvious”), the EPO applies the so-called “problem-solution approach.” This approach involves (1) determining the so-called “closest prior art,” (2) establishing the “objective technical problem” in the state of the art, and (3) considering whether or not the claimed invention, starting from the closest prior art and the objective technical problem, would have been obvious to the skilled person (“could-would approach,” see in more detail EPO, “Guidelines for Examination, Part G, Chapter VII.5,” https://bit.ly/3GQL5ln, accessed August 14, 2024).

12 Article 57 EPC.

13 See however in relation to patent protection of AI-generated output below, Section 11.3.2.

14 EBA Decision 10 March 2021 re patent application 03793825.5, G 0001/19, in particular paras 106–138; Timo Minssen and Mateo Aboy, “The patentability of computer-implemented simulations and implications for computer-implemented inventions (CIIs)” (2021) JIPLP, 16: 633, 633–35.

15 Article 83 EPC.

16 Mizuki Hashiguchi, “The global artificial intelligence revolution challenges patent eligibility laws” (2017) J Bus & Tech L, 13: 1, 29–30.

17 Brian Higgins, “The role of explainable artificial intelligence in patent law” (2019) Intell Prop & Tech LJ, 31: 3, 7.

18 See, for example, Wojciech Samek et al. (eds), Explainable AI: Interpreting, Explaining and Visualizing Deep Learning, vol 11700 (Lecture Notes in Computer Science, Springer International Publishing, 2019).

19 See Articles 11, 53 and Annexes IV, XI Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 [2024] OJ L1689/1 (hereinafter the “AI Act”). See on this topic, for example, Balint Gyevnar, Nick Ferguson and Burkhard Schafer, “Bridging the transparency gap: what can explainable AI learn from the AI Act?” (2023) DOI: 10.3233/FAIA230367. See also Thomas Gils, Frederic Heymans and Wannes Ooms, “Report: from policy to practice: prototyping the EU AI Act’s transparency requirements” (2024), https://tinyurl.com/2s3w8jhp, accessed August 14, 2024.

20 Cf. Katarina Foss-Solbrekk, “Three Routes to Protecting AI Systems and Their Algorithms under IP Law: The Good, the Bad and the Ugly” (2021) 16 JIPLP 247, 256–58.

21 “WIPO Technology Trends 2019 – Artificial Intelligence,” 2019, 14, https://bit.ly/3wlRQH5, accessed August 14, 2024.

22 WIPO Technology Trends 2019, p. 13; “WIPO Technology Trends 2021, Assistive Technology,” 2022, https://bit.ly/3EO8T7z, accessed August 14, 2024.

23 Articles 2 and 5(2) Berne Convention.

24 See, for example, Articles 6bis-14ter Berne Convention; Articles 2–4 Directive 2001/29/EC on the harmonisation of certain aspects of copyright and related rights in the information society [2001] OJ L167/10 (InfoSoc Directive). See on this topic Christophe Geiger, Franciska Schönherr, Irini Stamatoudi, Paul Torremans, and Stavroula Karapapa, “Chapter 11: the Information Society Directive,” in Irini Stamatoudi and Paul Torremans (eds), EU Copyright Law. A Commentary (Edward Elgar Publishing, 2021), 279–380.

25 Article 7 Berne Convention; Article 12 TRIPS Agreement; Article 1 Directive 2006/116/EC on the term of protection of copyright and certain related rights (codified version) [2006] OJ L 372/12 (Term Directive).

26 Article 9(2) TRIPS Agreement. A common example of this requirement is that styles (such as Cubism) are not susceptible to copyright protection, while concrete expressions of such styles (such as a specific painting by Picasso in the Cubist style) may qualify for copyright protection, subject to the fulfilment of the condition of originality.

27 C-5/08 Infopaq [2009] ECLI:EU:C:2009:465; C-393/09 BSA [2010] ECLI:EU:C:2010:816; C-145/10 Painer [2011] ECLI:EU:C:2011:798.

28 C-406/10 SAS Institute [2012] ECLI:EU:C:2012:259.

29 C-393/09 BSA [2010] ECLI:EU:C:2010:816.

30 See Foss-Solbrekk, “Three Routes,” pp. 249–253; Begoña Gonzalez Otero, “Machine learning models under the copyright microscope: Is EU copyright fit for purpose?” (2021) GRUR International, 70: 1043, 1–13.

31 See re design law: Hasan Yılmaztekin, Artificial Intelligence, Design Law and Fashion (Routledge, 2023).

32 See in detail Oleksandr Bulayenko, João Pedro Quintais, Daniel Gervais, and Joost Poort, “AI music outputs: Challenges to the copyright legal framework,” 2022, https://ssrn.com/abstract=4072806, accessed August 14, 2024.

33 Video available at youtu.be/Y8UawLT4it0 accessed August 14, 2024.

34 See www.auxuman.space accessed August 14, 2024.

35 See https://endel.io accessed August 14, 2024.

36 See https://bit.ly/3whiQHy and https://bit.ly/3wrGrFO accessed August 14, 2024.

37 See www.flow-machines.com accessed August 14, 2024.

38 See https://chat.openai.com accessed August 14, 2024.

39 See, for example, Thomas Hornigold, “The first novel written by AI is here – and it’s as weird as you’d expect it to be,” Singularity Hub (October 25, 2018), https://bit.ly/3mOs4rP, accessed August 14, 2024. See however Gary Smith, “The Great American Novel will not be written by a computer,” Mind Matters (June 30, 2021), https://bit.ly/3HOUQRy.

40 See www.deepl.com, accessed August 14, 2024.

41 See, for example, https://aiartists.org/ai-timeline-art, accessed August 14, 2024.

42 See www.nextrembrandt.com, accessed August 14, 2024.

43 See https://openai.com/dall-e-3, accessed August 14, 2024.

44 See www.midjourney.com, accessed August 14, 2024.

45 See Pesala Bandara, “The best AI image generators in 2023” (PetaPixel, January 3, 2023), https://bit.ly/3Xxj1ej, accessed August 14, 2024; see also, for example, https://aiartists.org; www.artaigallery.com.

46 By way of example, users provide the DeepL translation app with relevant input and may manually modify the translated text.

47 See however also under US law, for example: US Copyright Office, “Copyright Registration Guidance: Works Containing Material Generated by Artificial Intelligence,” 2023, 37 CFR Part 202, https://bit.ly/4dxQIEQ; US District Court for the District of Columbia 18 August 2023, 22-1564, https://bit.ly/4ckMr6l.

48 Cf. Annemarie Bridy, “Coding creativity: Copyright and the artificially intelligent author” [2012] STLR 28, 4.

49 See, for example, Article 2(6) Berne Convention, which conceptualizes copyright as a form of protection for the author and their successors in title. An AI system as such is not a legal entity, which implies that it cannot be endowed with rights of any kind, including ownership rights. Notably, continental EU law does not have a rule similar to the “work-made-for-hire” doctrine that applies in the United States, which allows employers to be treated as the author of a work created by a human employee.

50 Annemarie Bridy, “The evolution of authorship: Work made by code” (2016) Colum JL & Arts, 39: 9, 401.

51 See, for example, Andres Guadamuz, “Do Androids dream of electric copyright? Comparative analysis of originality in artificial intelligence generated works” [2017] IPQ 169: 173–74.

52 Daniel Gervais, “The machine as author” (2020) Iowa L Rev, 105: 2053, 2062, 2098–101, 2106.

53 James Grimmelmann, “There’s no such thing as a computer-authored work—And it’s a good thing, too” (2016) Colum JL & Arts, 39: 403, 403, 406–08; Erica Fraser, “Computers as inventors – legal and policy implications of artificial intelligence on patent law” (2016) SCRIPTed, 13: 305, 305, 306; Samantha Fink Hedrick, “I ‘Think,’ therefore I create: Claiming copyright in the outputs of algorithms” (2019) NYU Journal of Intell Prop & Ent Law, 8: 324, 329.

54 Cf. Margot E Kaminski, “Authorship, disrupted: AI authors in copyright and first amendment law” (2017) UCD L Rev, 51: 589, 595.

55 Hedrick, “I ‘Think,’ therefore I create,” 353, 358–60.

56 See also Noam Shemtov, “A study on inventorship in inventions involving AI activity” (European Patent Office, 2019) 6, 20, 35. See for a more recent example Rhiannon Williams, “What happened when 20 comedians got AI to write their routines” (MIT Technology Review 17 June 2024), https://tinyurl.com/yxad3bse, accessed August 14, 2024.

57 “If I have seen further, it is by standing upon the shoulders of Giants.” – Sir Isaac Newton (1675).

58 Cf. in relation to patent law Shemtov, “A study on inventorship in inventions involving AI activity,” pp. 28–29; Ryan Abbott, “I think, therefore I invent: Creative computers and the future of patent law” (2016) BC L Rev, 57: 1079, 1082, 1099, 1108–11.

59 Peter Blok, “The inventor’s new tool: Artificial intelligence – how does it fit in the European patent system?” (2017) EIPR, 39: 69, 69, 72.

60 Kaminski, “Authorship, disrupted,” pp. 589, 599; Shlomit Yanisky-Ravid and Xiaoqiong (Jackie) Liu, “When artificial intelligence systems produce inventions: An alternative model for patent law at the 3A era” (2018) Cardozo L Rev, 39: 2215, 2243–46.

61 Pamela Samuelson, “Allocating ownership rights in computer-generated works” (1986) U Pitt L Rev, 47: 1185, 1199; Hedrick, “I ‘Think,’ therefore I create,” 334–336; Yanisky-Ravid and Liu, “When artificial intelligence systems produce inventions,” pp. 2239–41; Garry A Gabison, “Who holds the right to exclude for machine work products?” [2020] IPQ 20, 20, 37.

62 Cf. Gervais, “The machine as author,” pp. 2060–2061.

63 Directive 96/9/EC on the legal protection of databases [1996] OJ L77/20. See on this topic Estelle Derclaye, “Chapter 9: The database directive,” in Irini Stamatoudi and Paul Torremans (eds), EU Copyright Law. A commentary (Edward Elgar Publishing, 2021), pp. 216–254.

64 Article 7 Database Directive.

65 See Dan Burk, “AI patents and the self-assembling machine” (2021) Minn Law Rev Headnotes, 105: 301; Daria Kim et al., “Ten assumptions about artificial intelligence that can mislead patent law analysis” [2021] SSRN Electronic Journal.

66 See, for example, Robert Plotkin, The Genie in the Machine: How Computer-Automated Inventing Is Revolutionizing Law and Business (Stanford University Press, 2009).

67 An acronym for “Device for the Autonomous Bootstrapping of Unified Sentience.”

68 See https://bit.ly/3qgbWSd; https://bit.ly/3CQNf26, accessed August 14, 2024.

69 Dr Thaler has obtained several patents in relation to the technology behind DABUS. See Abbott, “I think, therefore I invent,” 1083–1086.

70 EP application with number 18275163.6 (EP 3 564 144 A1), filed on October 17, 2018 and EP application with number 18275174.3 (EP 3 563 896 A1), filed on November 7, 2018.

71 See Legal Board of Appeal Decision December 21, 2021 re EP applications 18275163.6 and 18275174.3, J 0008/20, paras I–III, https://bit.ly/3WzzdNb, accessed August 14, 2024.

72 See, for example, with regard to the priority right to a patent Article 4(A) Paris Convention for the Protection of Industrial Property, 20 March 1883, as amended. See also Yanisky-Ravid and Liu, “When artificial intelligence systems produce inventions,” p. 2230; Eva Stanková, “Human inventorship in European patent law” (2021) The Cambridge Law Journal, 80: 338.

73 See Article 60 EPC.

74 Shemtov, “A study on inventorship in inventions involving AI activity,” pp. 10–11, 20; Blok, “The inventor’s new tool,” pp. 71–72.

75 Legal Board of Appeal Decision 21 December 2021 re patent applications 18275163.6 and 18275174.3, Sections 4.14.4; UK Supreme Court 20 December 2023, UKSC 49, https://bit.ly/3YMYBBV, accessed August 14, 2024; German Federal Supreme Court 11 June 2024, case number X ZB 5/22, https://bit.ly/3YFkT8N, accessed August 14, 2024.

76 See Article 4ter Paris Convention. See also respectively Articles 62 and 81 jo. 90 and Rule 19.1 EPC. See also Shemtov, “A study on inventorship in inventions involving AI activity,” p. 8.

77 Shemtov, “A study on inventorship in inventions involving AI activity,” pp. 5, 23–25, 27.

78 Abbott, “I think, therefore I invent,” pp. 1081–82, 1098–99, 1104; Alexandra George and Toby Walsh, “Artificial intelligence is breaking patent law” (2022) Nature, 605: 7911, 616. See, however, Rose Hughes, “Artificial intelligence is not breaking patent law: EPO publishes DABUS decision (J 8/20)” (The IPKat, July 11, 2022), https://bit.ly/3H8YMy6, accessed August 14, 2024.

79 Yanisky-Ravid and Liu, “When artificial intelligence systems produce inventions,” p. 2239.

80 Blok, “The inventor’s new tool,” p. 73; Shemtov, “A study on inventorship in inventions involving AI activity,” pp. 5, 17, 19.

81 Shemtov, “A study on inventorship in inventions involving AI activity,” pp. 6, 20, 35.

82 Cf. Shemtov, “A study on inventorship in inventions involving AI activity,” pp. 28–29; Abbott, “I think, therefore I invent,” pp. 1082, 1099, 1108–11.

83 See Blok, “The inventor’s new tool,” p. 73.

84 IP ownership is (as of yet) primarily a matter of national law.

85 Cf. Bridy, “Coding creativity,” p. 20; Shemtov, “A study on inventorship in inventions involving AI activity,” pp. 12–13, 19–20; Hedrick, “I ‘think,’ therefore I create,” pp. 328–29, 332, 352. See, however, Tim W. Dornis, “Of ‘authorless works’ and ‘inventions without inventor’ – the muddy waters of ‘AI autonomy’ in intellectual property doctrine” (2021) EIPR, 43: 570.

86 Hedrick, “I ‘think,’ therefore I create,” pp. 337, 440.

87 Cf. Hedrick, “I ‘think,’ therefore I create,” pp. 367, 371–374.

88 Grimmelmann, “There’s no such thing as a computer-authored work,” p. 413; Blok, “The inventor’s new tool,” p. 73; Shemtov, “A study on inventorship in inventions involving AI activity,” p. 20. Patent protection is not available to “discoveries” as such (Article 52 EPC).

89 Shemtov, “A study on inventorship in inventions involving AI activity,” pp. 19–21, 31; AIPPI resolution on inventorship of inventions made using artificial intelligence, October 14, 2020, https://bit.ly/3DRMOoN, accessed August 14, 2024.

90 Cf. Paul Sawers, “Chinese court rules AI-written article is protected by copyright,” Venture Beat (January 10, 2020), https://bit.ly/3DW5SlD, accessed August 14, 2024.

91 See Mark Summerfield, “The impact of machine learning on patent law, Part 3: Who is the inventor of a machine-assisted invention?,” Patentology (February 4, 2018), https://bit.ly/3xlHNlM, accessed August 14, 2024.

92 Samuelson, “Allocating ownership rights in computer-generated works,” p. 1205; Shemtov, “A study on inventorship in inventions involving AI activity,” p. 22; Gabison, “Who holds the right to exclude for machine work products?,” p. 23.

93 Samuelson, “Allocating ownership rights in computer-generated works,” p. 1209; Yanisky-Ravid and Liu, “When artificial intelligence systems produce inventions,” pp. 2231–2232.

94 Bridy, “Coding creativity,” p. 25.

95 Hedrick, “I ‘think,’ therefore I create,” pp. 354, 362.

96 Cf. Hedrick, “I ‘think,’ therefore I create,” pp. 338–339, 343, 354.

97 Samuelson, “Allocating ownership rights in computer-generated works,” pp. 1207–1208, 1225; Yanisky-Ravid and Liu, “When artificial intelligence systems produce inventions”, p. 2233; Shemtov, “A study on inventorship in inventions involving AI activity,” p. 31.

98 Samuelson, “Allocating ownership rights in computer-generated works,” p. 1208.

99 Shemtov, “A study on inventorship in inventions involving AI activity,” p. 31.

100 Samuelson, “Allocating ownership rights in computer-generated works,” pp. 1201–04; Hedrick, “I ‘think,’ therefore I create,” p. 344; Gabison, “Who holds the right to exclude for machine work products?,” p. 35; Tim Dornis, “Artificial intelligence and innovation: The end of patent law as we know it” (2020) Yale J L & Tech, 23: 97, 154–57.

101 Shemtov, “A study on inventorship in inventions involving AI activity,” pp. 6, 30.

102 Samuelson, “Allocating ownership rights in computer-generated works,” pp. 1221–24; Hedrick, “I ‘think,’ therefore I create,” p. 348. See extensively Paulien Wymeersch, “Terms of use on the commercialisation of AI-produced images and copyright protection” (2024) EIPR, pp. 374–381.

103 Cf. Yanisky-Ravid and Liu, “When artificial intelligence systems produce inventions,” p. 2235.

104 Hedrick, “I ‘Think,’ therefore I create,” p. 348.

105 Cf. Abbott, “I think, therefore I invent,” p. 1117; Hedrick, “I ‘Think,’ therefore I create,” p. 347.

106 Yanisky-Ravid and Liu, “When artificial intelligence systems produce inventions,” pp. 2222, 2252–2256; Shemtov, “A study on inventorship in inventions involving AI activity,” p. 24; Gabison, “Who holds the right to exclude for machine work products?,” pp. 32–33, 39; Gervais, “The machine as author,” p. 2060.

107 See, for example, Jamie Grierson, “Photographer admits prize-winning image was AI-generated” (The Guardian April 17, 2023), https://bit.ly/4cq4xEd, accessed August 14, 2024.

108 Abbott, “I think, therefore I invent,” pp. 1097–98; Higgins, “The role of explainable artificial intelligence in patent law,” p. 29.

109 See Article 50 AI Act; Thomas Gils, “A detailed analysis of Article 50 of the EU’s Artificial Intelligence Act” (2024), https://ssrn.com/abstract=4865427, accessed August 14, 2024. See also https://tinyurl.com/m3bhr5a5, accessed August 14, 2024. For an extensive discussion of the AI Act, see also Chapter 12 of this book, authored by Nathalie A. Smuha and Karen Yeung, “The European Union’s AI Act: beyond motherhood and apple pie?”, 228–258.

110 See for an overview https://tinyurl.com/j6wvr7ez, accessed August 14, 2024.

111 Article 53(1)(c)–(d) AI Act. See also in particular Recitals 105, 107 AI Act. A template for such a “sufficiently detailed summary” is to be provided by the AI Office. See for a valiant attempt at operationalization of this requirement, https://tinyurl.com/yeu723r5, accessed August 14, 2024. See however extensively Tim W. Dornis and Sebastian Stober, “Urheberrecht und Training generativer KI-Modelle - technologische und juristische Grundlagen” (August 2024), https://ssrn.com/abstract_id=4946214, accessed 20 September 2024.

112 See, for example, Martin Senftleben, “Generative AI and author remuneration” (2023) IIC, 54, 1535–60; Martin Senftleben, “AI Act and author remuneration – A model for other regions?” (2024), https://ssrn.com/abstract=4740268, accessed August 14, 2024.

113 Camille Vermosen, “Copyright, liability and artificial intelligence: Who is responsible when an artificial intelligence system infringes copyright in the context of the EU?” (KU Leuven, 2017); Bridget Watson, “A mind of its own – direct infringement by users of artificial intelligence systems” (2017) IDEA, 58: 31; Alina Škiljić, “When art meets technology or vice versa: Key challenges at the crossroads of AI-generated artworks and copyright law” (2021) IIC, 52: 1338.

114 Dornis, “Artificial intelligence and innovation,” pp. 104, 124–134.

115 EPO, “Guidelines for examination, Part G, Chapter VII.3,” https://bit.ly/3xBzu5H, accessed August 14, 2024.

116 Blok, “The inventor’s new Tool,” p. 72; Ryan Abbott, “Everything is obvious” (2018) 66 UCLA L Rev 2, 2, 5–6, 17, 34–37.

117 Yanisky-Ravid and Liu, “When artificial intelligence systems produce inventions,” pp. 2248–49.

118 Abbott, “Everything is obvious,” pp. 8–9, 31, 34, 37–38.

119 Abbott, “Everything is obvious,” pp. 48–50, 52.

120 See Michael Grynberg, “AI and the ‘death of trademark’” (2019) Ky L J, 108: 199–238; Anke Moerland and Conrado Freitas, “Artificial intelligence and trademark assessment” in Reto Hilty, Jyh-An Lee, and Kung-Chung Liu, Artificial Intelligence and Intellectual Property (Oxford University Press, 2021), 266–291; Marie-Christine Janssens and Viltė Kristina Dessers, “The artificially intelligent consumer in EU trademark law” in Veronika Fischer, Georg Nolte, Martin Senftleben, and Louisa Specht-Riemenschneider, Gestaltung der Informationsordnung. Festschrift Commemorating the 65th Anniversary of Professor Thomas Dreier (CH Beck, 2022), 143–160.

121 See, for example, the tools and applications listed at https://bit.ly/3YPSlJV, including WIPO’s Vienna Classification Assistant https://bit.ly/3WQmCqj, accessed August 14, 2024.

122 See, for example, the EUIPO https://bit.ly/30XlRRJ and the UKIPO https://bit.ly/3DOiNWX, accessed August 14, 2024.

123 Re trademarks, see, for example, Brandstock https://bit.ly/30Voofc; CompuMark https://clarivate.com/compumark; Rocketeer https://bit.ly/311euZX; TrademarkNow www.trademarknow.com; and Corsearch https://corsearch.com, all accessed August 14, 2024. Re patents, see, for example, Rowan Patents https://rowanpatents.com, accessed August 14, 2024.

124 See, for example, Cipher https://cipher.ai; elementary IP https://elementaryip.com; IP Check-Up https://bit.ly/3E3Dxdr; Octimine www.octimine.com; and SHIP Global IP https://shipglobalip.com, all accessed August 14, 2024.

125 See, for example, Visual-AI https://bit.ly/3HQtt9B, accessed August 14, 2024.

126 The WIPO consultation process on AI and IP garnered over 250 substantive submissions, while the virtual WIPO seminars on AI and IP that WIPO has organized since 2019 attracted almost 9000 participants from all over the world. The submissions to the consultation process are available online at https://bit.ly/3GU9M09, accessed August 14, 2024. More information on the so-called ‘WIPO Conversation on Intellectual Property and Frontier Technologies’ is available online at https://bit.ly/3WO0s8f, accessed August 14, 2024.

12 The European Union’s AI Act: Beyond Motherhood and Apple Pie?

* Smuha primarily contributed to Sections 12.2 and 12.3 (drawing on Nathalie A. Smuha, Algorithmic Rule by Law: How Algorithmic Regulation in the Public Sector Erodes the Rule of Law (Cambridge University Press, 2025), Chapter 5.4), while Yeung contributed primarily to Section 12.4 (drawing extensively on a keynote speech delivered on September 12, 2022, ADM+S Centre Symposium, Automated Societies, RMIT, Melbourne, Australia; a recording is available at https://podcasters.spotify.com/pod/show/adms-centre/episodes/2022-ADMS-Symposium-Keynote-by-Professor-Karen-Yeung-e1nmp1r/a-a8gu1ph (accessed August 2, 2024)).

1 Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act), OJ L, 2024/1689, July 12, 2024.

2 Anu Bradford, The Brussels Effect: How the European Union Rules the World (Oxford University Press, 2020). See in this regard also Nathalie A. Smuha, “From a ‘race to AI’ to a ‘race to AI regulation’: regulatory competition for artificial intelligence,” (2021) Law, Innovation and Technology, 13(1): 57–84.

3 See Karen Yeung, Andrew Howes, and Ganna Pogrebna, “AI governance by human rights–centered design, deliberation, and oversight: An end to ethics washing,” in Markus D. Dubber, Frank Pasquale, and Sunit Das (eds), The Oxford Handbook of Ethics of AI (Oxford University Press, 2020), pp. 76–106.

4 Statement by the European Parliament’s Special Committee on Artificial Intelligence in a Digital Age (AIDA), “Draft report on artificial intelligence in a digital age” (European Parliament, 2021) (2020/2266(INI)) 9.

5 See in this regard also Chapter 1 of this book by Wannes Meert, Tinne De Laet, and Luc De Raedt.

6 The first use of this term is ascribed to Adrian Wooldridge in his The Economist article titled “The coming tech-lash,” November 2013.

7 See, for example, Jim Isaak and Mina J Hanna, “User data privacy: Facebook, Cambridge Analytica, and privacy protection” (2018) Computer, 51(8): 56-59.

8 European Commission, Artificial Intelligence for Europe, COM (2018) 237 final, Brussels, April 25, 2018.

9 Nathalie A. Smuha, “The EU approach to ethics guidelines for trustworthy artificial intelligence” (2019) Computer Law Review International, 20(4): 98.

10 See also Anu Bradford, Digital Empires: The Global Battle to Regulate Technology (Oxford University Press, 2023).

11 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, OJ L 119, May 4, 2016, pp. 1–88.

12 Both the composition and the mandate of the AI HLEG were criticized, mostly due to the larger representation of industry and the fact that the Commission tasked the group with drafting voluntary guidelines rather than asking for its input on new binding rules. Yeung was one of the group’s members. Smuha served as the group’s coordinator from its initial formation until July 2019.

13 High-Level Expert Group on AI, “Ethics Guidelines for Trustworthy AI,” Brussels, April 8, 2019. The Guidelines were endorsed by the Commission in a Communication that was published the same day, encouraging AI developers and deployers to implement them in their organization. See European Commission, Building Trust in Human-Centric Artificial Intelligence, COM (2019) 168 final, Brussels, April 8, 2019.

14 Trustworthy AI was defined as: (1) lawful, or complying with all applicable laws and regulations; (2) ethical, or ensuring adherence to ethical principles and values; and (3) robust since, even with good intentions, AI systems can still lead to unintentional harm. The AI HLEG was however careful in stating that the Guidelines only offered guidance on complying with the two latter components (ethical and robust AI), indicating the need for the EU to take additional steps to ensure that AI systems were also lawful. See in this regard also Nathalie A. Smuha, Emma Ahmed-Rengers, Adam Harkens, Wenlong Li, James MacLaren, Riccardo Piselli, and Karen Yeung, “How the EU Can Achieve Legally Trustworthy AI: A Response to the European Commission’s Proposal for an Artificial Intelligence Act,” Social Science Research Network, 2021, https://ssrn.com/abstract=3899991.

15 The Guidelines also included an assessment list to operationalize these requirements in practice, and a list of critical concerns raised by AI systems that should be carefully considered (including, for example, the use of AI systems to identify and track individuals, covert AI systems, AI-enabled citizen scoring, lethal autonomous weapons, and longer-term concerns, covering what is today often referred to as “existential risks”).

16 High-Level Expert Group on AI, ‘Policy and Investment Recommendations for Trustworthy AI’ (European Commission, June 26, 2019), https://digital-strategy.ec.europa.eu/en/library/policy-and-investment-recommendations-trustworthy-artificial-intelligence.

17 In addition, the group was also mandated to support the Commission with outreach through the European AI Alliance, a multi-stakeholder online platform seeking broader input on Europe’s AI policy. See European Commission, Call for Applications for the Selection of Members of the High-Level Expert Group on Artificial Intelligence, March 9, 2018, https://digital-strategy.ec.europa.eu/en/news/call-high-level-expert-group-artificial-intelligence.

18 Policy and Investment Recommendations for Trustworthy AI (n 16), 26.

19 Ibid., 38.

20 Ibid., 20.

21 Ibid., 40.

22 Ibid., 13.

23 European Commission, White Paper on Artificial Intelligence – A European approach to excellence and trust, Brussels, February 19, 2020, COM (2020) 65 final.

24 See also the Explanatory Memorandum of the White Paper.

25 The White Paper provides the examples of Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products, and repealing Council Decision 93/465/EEC, and of Regulation (EU) 2019/881 of the European Parliament and of the Council of 17 April 2019 on ENISA and on information and communications technology cybersecurity certification (the Cybersecurity Act).

26 Proposal for a Regulation laying down harmonized rules on artificial intelligence (Artificial Intelligence Act), COM (2021) 206 final, Brussels, April 21, 2021.

27 See also Smuha et al. (n 14), 28.

28 See ibid., 50.

29 Council of the European Union, General Approach, 2021/0106(COD) Brussels, 25 November 2022 (adopted December 6, 2022).

30 Essentially, it provided that GPAI systems used for high-risk purposes should be treated as such. However, instead of directly applying the high-risk requirements to such systems, the Council proposed that the Commission should adopt an implementing act to specify how they should be applied, based on a consultation and detailed impact assessment and taking into account their specific characteristics.

31 European Parliament, Amendments adopted by the European Parliament on 14 June 2023 on the proposal for an Artificial Intelligence Act, COM (2021)0206 – C9-0146/2021 – 2021/0106(COD), Amendment 168.

32 See also infra (n 61).

33 See also Stephen Weatherill, “The limits of legislative harmonization ten years after tobacco advertising: How the court’s case law has become a ‘drafting guide’” (2011) German Law Journal, 12(3): 827–864.

34 See Recital 3 of the AI Act.

35 See in this regard also Nathalie A. Smuha, “The paramountcy of data protection law in the age of AI (Acts),” in Brendan Van Alsenoy, Julia Hodder, Fenneke Buskermolen, Miriam Čakurdová, Ilektra Makraki and Estelle Burgot (eds), Twenty Years of Data Protection. What Next? – EDPS 20th Anniversary, Luxembourg (2024), Publications Office of the European Union, 226–39.

36 See in more detail Article 2(1) of the AI Act.

37 For a discussion of the importance of AI definitions, see also Bilel Benbouzid, Yannick Meneceur and Nathalie A. Smuha, “Four shades of AI regulation. A cartography of normative and definitional conflicts” (2022) Réseaux, 232–33(2–3), 29–64.

38 Article 3(1) of the AI Act. The definition’s emphasis on the system making inferences seems to exclude more traditional or rule-based AI systems from its scope, despite their significant potential for harm. Ultimately, it will be up to the courts to decide how this definition must be interpreted in case of a dispute.

39 More generally, and less unusually, the legislator also carved out from the AI Act all areas that fall outside the scope of EU law.

40 Article 2 of the AI Act provides that “this Regulation does not apply to AI systems released under free and open-source licences, unless they are placed on the market or put into service as high-risk AI systems or as an AI system that falls under Article 5 or 50” (covering respectively prohibited AI practices and systems requiring additional transparency measures). Moreover, Article 53 of the AI Act excludes providers of AI models that are released under a free and open-source licence from certain transparency requirements if the license “allows for the access, usage, modification, and distribution of the model” and if certain information (about the parameters including the weights, model architecture, and model usage) is made publicly available. The exclusion does not apply to general-purpose AI models with “systemic risks” though, which shall be discussed further below.

41 For instance, Article 63 of the AI Act states that microenterprises can comply with certain elements of the quality management system required by Article 17 in “a simplified manner,” for which “the Commission shall develop guidelines.”

42 See in this regard Karen Yeung and Sofia Ranchordas, An Introduction to Law and Regulation, 2nd ed. (Cambridge University Press, 2025), especially Chapter 9, Section 9.9.2.

43 Article 5(1)(a) of the AI Act.

44 Article 5(1)(b) of the AI Act.

45 Article 5(1)(d) of the AI Act.

46 Article 5(1)(c) of the AI Act.

47 Article 5(1)(f) of the AI Act.

48 See also Smuha et al. (n 14) 27.

49 Article 5(1)(e) of the AI Act.

50 Article 5(1)(g) of the AI Act. The four latter practices were introduced by the European Parliament in its June 2023 negotiating mandate (along with other spurious practices that, unfortunately, did not survive the trilogue with the Commission and the Council).

51 Article 5(1)(h) of the AI Act.

52 Article 7 of the AI Act establishes a procedure for the Commission to amend Annex III through delegated acts. The domain headings can only be adapted by the EU legislator through a revision of the regulation itself.

53 Article 6(3) of the AI Act. To avoid misuse of this provision, the AI Act states that such providers must justify why, despite being included in Annex III, their system does not pose a significant risk. Article 6 establishes a procedure for the European Commission to challenge their justification and to impose the high-risk requirements in case the justification is flawed.

54 Articles 23 to 27 also set out some obligations for importers, distributors and deployers of high-risk AI systems.

55 Article 9(2)(a) and (b) of the AI Act.

56 Article 9(5) of the AI Act.

57 Article 9(3) of the AI Act.

58 See Nathalie A. Smuha, Algorithmic Rule by Law: How Algorithmic Regulation in the Public Sector Erodes the Rule of Law (Cambridge University Press, 2025), Chapter 5.4.

59 Article 26(5) also states that: “where deployers have reason to consider that the use of the high-risk AI system in accordance with the instructions may result in that AI system presenting a risk within the meaning of Article 79(1), they shall, without undue delay, inform the provider or distributor and the relevant market surveillance authority, and shall suspend the use of that system.”

60 See Karen Yeung, “Response to European Commission White Paper,” Social Science Research Network, 2020, https://ssrn.com/abstract=3626915; Nathalie A. Smuha et al., n (14).

61 That said, as noted in n (53), AI providers who self-assess their high-risk system as excluded from the Act’s requirements will still need to justify their assessment and register their system in a newly established database, managed by the Commission. See Article 49(2) of the AI Act.

62 Article 27 of the AI Act.

63 Article 27(5) of the AI Act.

64 Article 3(63) of the AI Act. It does exclude AI models used for research, development or prototyping activities before their placement on the market.

65 Article 53(1) of the AI Act.

66 Article 55(1) of the AI Act.

67 Article 3(65) of the AI Act.

68 Article 51(2) of the AI Act.

69 See in this regard also Smuha, n (58), Chapter 5.4.

70 See Philipp Hacker, “What’s missing from the EU AI Act: Addressing the four key challenges of large language models,” VerfassungsBlog, December 13, 2023, https://verfassungsblog.de/whats-missing-from-the-eu-ai-act/.

71 If a GPAI system is deployed for the purpose of one of the high-risk applications listed in Annex III – and if it is self-assessed as posing a significant risk – it will need to comply with the standard requirements for high-risk systems as listed in Chapter III, Section 2.

72 It should however be noted that the European Commission can also designate certain GPAI models as posing a systemic risk through a decision, either ex officio or based on a qualified alert by a scientific panel that the AI Act will set up for this purpose. It is also able to amend the thresholds through delegated acts. Moreover, at least in theory, also systems that do not fall under the specified threshold can be considered as posing a systemic risk if they show high impact capabilities evaluated on the basis of “appropriate technical tools and methodologies, including indicators and benchmarks,” which the Commission can supplement over time.

73 Article 50(2) of the AI Act.

74 Article 50(4) of the AI Act.

76 Article 50(2) of the AI Act.

77 Articles 95 and following of the AI Act.

78 See European Commission, n (8), 2.

79 One could argue that the abovementioned derogations for open-source AI systems can likewise be seen as an innovation-boosting measure. See supra, n (41).

80 Article 57(1) of the AI Act.

81 Article 58 of the AI Act.

82 See, for example, Article 34(2) of the AI Act.

83 See Yeung and Ranchordas, n (42), Chapter 8.

84 Article 96 of the AI Act. When issuing such guidelines, the Commission “shall take due account of the generally acknowledged state of the art on AI, as well as of relevant harmonised standards and common specifications that are referred to in Articles 40 and 41, or of those harmonised standards or technical specifications that are set out pursuant to Union harmonisation law.”

85 See Articles 40 and 41 of the AI Act. A harmonized standard is a European standard developed by a recognized European Standardization Organization and its creation is requested by the European Commission. The references of harmonized standards must be published in the Official Journal of the EU. See https://single-market-economy.ec.europa.eu/single-market/european-standards/harmonised-standards_en, accessed June 20, 2024.

86 Member States are free to establish a new entity for this purpose, or they can designate an existing authority. They can also assign this task to several existing authorities, as long as they designate one of those authorities as the main authority and contact point for practical purposes. See Article 70 of the AI Act.

87 Under the New Legislative Framework for product safety legislation, (national) market surveillance authorities have the task to monitor the market and, in case of doubt, to verify ex post whether the conformity assessment has correctly been carried out, and the CE mark duly affixed. This market surveillance authority can be a separate entity, or it can be the same authority that is also responsible for the supervision of the implementation of a regulation. As regards the regime of the AI Act, for all stand-alone high-risk systems, it provides that the national supervisory authority is also the market surveillance authority. For high-risk systems that are already covered by legal acts listed in Annex I (and that are hence already subject to a monitoring system, such as toys or medical devices), the competent authorities under those legal acts will remain the lead market surveillance authority, though cooperation is encouraged.

88 The supervisory authorities should act independently and impartially in performing their tasks and exercising their powers. These powers include, for example, requesting the technical documentation and records that providers of high-risk systems must create and, if all other reasonable ways to verify the system’s conformity have been exhausted, requesting access to the system’s training, validation and testing datasets, as well as to the trained and training model of the high-risk AI system, including its relevant model parameters. Pursuant to Article 74(13) of the AI Act, national supervisory authorities can exceptionally also obtain access to the source code of a high-risk AI system, upon a reasoned request. Any information must be treated confidentially and with due regard for intellectual property rights and trade secrets.

89 The establishment of the AI Office reflects the desire of both the European Parliament and the Council to have a stronger involvement at the EU level when it comes to implementing and enforcing the AI Act. Over time, the AI Office could become a full-fledged European AI Agency.

90 Articles 53 and following of the AI Act. For those models, the AI Office will also contribute to fostering standards and testing practices and enforcing common rules in all member states.

91 Especially for those provisions that the Commission cannot adapt through a delegated act, but that can only be amended by the legislators (such as the domain headings under Annex III or the prohibited AI practices). See Article 112(11) of the AI Act.

92 Article 74(11) of the AI Act.

93 Article 95 of the AI Act.

94 Article 71 of the AI Act.

95 See Yeung and Ranchordas, n (42), Chapter 7 and literature cited therein.

96 Article 85 of the AI Act. Article 86 also grants affected persons who are subjected to (most) high-risk AI systems listed in Annex III the ‘right to an explanation’, covering the “right to obtain from the deployer clear and meaningful explanations of the role of the AI system in the decision-making procedure and the main elements of the decision taken.” This right however only applies if the decision “produces legal effects or similarly significantly affects that person in a way that they consider to have an adverse impact on their health, safety or fundamental rights,” and national or Union law can provide exceptions to this right.

97 It refers to a package of measures intended to: improve market surveillance; establish a framework of rules for product safety; enhance the quality of and confidence in the conformity assessment of products through stronger and clearer rules on notification requirements of conformity assessment bodies; and clarify the meaning of CE markings to enhance their credibility. This package of measures consists of Regulation (EC) 765/2008, which sets out the requirements for accreditation and the market surveillance of products; Decision No 768/2008/EC on a common framework for the marketing of products, which is effectively a template for future product harmonisation legislation; and Regulation (EU) 2019/1020 on market surveillance and compliance of products, which aims to govern the role of various economic operators (manufacturers, authorised representatives, importers) and to standardize their tasks with regard to the placing of products on the market.

98 See Article 43 of the AI Act.

99 High-risk AI providers and deployers must also have a system in place to report to the relevant authorities any serious incidents or breaches of national and Union law, and take appropriate corrective actions.

100 The decision of the Court of Justice of the EU (CJEU) in Cassis de Dijon in 1979 was highly significant. The Court ruled that products lawfully manufactured or marketed in one Member State should in principle move freely throughout the Union where such products meet equivalent levels of protection to those imposed by the Member State of destination, and that barriers to free movement which result from differences in national legislation may only be accepted under specific circumstances, namely (1) the national measures are necessary to satisfy mandatory requirements (such as health, safety, consumer protection and environmental protection), (2) they serve a legitimate purpose which justifies overriding the principle of free movement of goods, and (3) they can be justified with regard to the legitimate purpose and are proportionate with the aims. See Case 120/78 Cassis de Dijon [1979] ECR 649 (Rewe-Zentral v Bundesmonopolverwaltung für Branntwein).

101 Yet in practice, the framework did not create the necessary level of trust between Member States. Therefore, in 1989 and 1990, the “Global Approach” was adopted, which established general guidelines and detailed procedures for conformity assessment to cover a wide range of industrial and commercial products.

102 See in this regard Jean-Pierre Galland, “Big Third-Party Certifiers and the Construction of Transnational Regulation” (2017) The ANNALS of the American Academy of Political and Social Science, 670(1), 263–279. This New Legislative Framework consists of a tripartite package of EU measures: (1) Regulation (EC) No 765/2008 on accreditation and market surveillance; (2) Decision No 768/2008/EC establishing a common framework for the marketing of products; and (3) Regulation (EC) No 764/2008 to strengthen the internal market for a wide range of other products not subject to EU harmonisation.

103 See Commission Implementing Decision of 22 May 2023 on a standardisation request to the European Committee for Standardisation and the European Committee for Electrotechnical Standardisation in support of Union policy on artificial intelligence, Brussels, 22 May 2023, C(2023) 3215 final.

104 This is because an organization that seeks to act as an independent third-party certifier first needs to receive accreditation from a national notifying authority, which evaluates and monitors whether these third-party certifiers meet certain quality and independence standards.

105 Article 41 of the AI Act.

106 Derek B. Larson and Sara R. Jordan, “Playing it safe: toy safety and conformity assessment in Europe and the US” (2018) International Review of Administrative Sciences, 85(4), 763–79.

107 See in this regard also Victoria Martindale and Andre Menache, “The PIP scandal: an analysis of the process of quality control that failed to safeguard women from the health risks” (2013) Journal of the Royal Society of Medicine, 106(5), 173–77.

108 Council Directive 93/42/EEC of 14 June 1993 concerning medical devices, OJ L 169, July 12, 1993, 1–43.

109 This is borne out in Laura Silva-Castañeda, “A forest of evidence: Third-party certification and multiple forms of proof – a case study on oil palm plantations in Indonesia” (2012) Agriculture and Human Values, 29(3), 361–70. In her study, she found that in practice, auditors regard the company’s documents as the ultimate form of evidence. Villagers who disagree with the company may point to localized and personalized markers but not to documents, which the auditors regard as a “lack of evidence.” Hence, in contrast to the company’s documentary arsenal, the auditors’ unwillingness to recognize the validity of evidence in anything other than documentary form, thereby disregarding the knowledge of local communities, exacerbated the power imbalance between them.

110 See Michael Power, The Audit Society: Rituals of Verification (Oxford University Press, 1997), p. 84.

111 As Hopkins clarifies, under a safety case regime, when regulators make site visits, “rather than inspecting to ensure that hardware is working, or that documents are up to date, they must audit against the safety case, to ensure that the specified controls are functioning as intended.” See Andrew Hopkins, “Explaining the ‘safety case,’” Working Paper 87, Australian National University, April 2012, p. 6.

112 The EU is currently struggling to implement a wide-ranging change in how medical devices are regulated – from the 1993 Medical Device Directive (MDD) to the 2017 Medical Device Regulation (MDR). The phased introduction of the MDR was due to be completed by May 2020, but was extended until this year due to COVID-19 pressures. This new regulatory framework is designed to ensure more thorough testing of devices before they can be used on patients, requiring clinical investigation and more rigorous monitoring of the performance of devices once they are on the market. The MDR’s implementation, however, has not gone smoothly.

113 Lawrence Busch, Standards: Recipes for Reality (The MIT Press, 2011), p. 13.

114 However, in Public.Resource.Org, Inc. and Right to Know CLG v European Commission (C-588/21 P), the CJEU ruled that the Commission must indeed grant access to the four requested harmonized standards, on the basis that harmonized standards form part of EU law and that the rule of law requires that access to them be freely available without charge. There is thus an overriding public interest in free access to harmonized standards.

115 See Article 79(2) of the AI Act. Supervisory authorities (in their capacity as market surveillance authorities, “MSAs”) are empowered to access documentation, datasets and code upon reasoned request, together with other “appropriate technical means and tools enabling remote access.” However, only if that documentation is “insufficient to ascertain whether a breach of obligations under EU law intended to protect fundamental rights has occurred” can the MSA organize the testing of the high-risk AI system through technical means (see Article 77(3) of the AI Act).

116 Joanna J. Bryson, “Belgian and Flemish policy makers’ guide to AI regulation,” KCDS-CiTiP Fellow Lectures Series: Towards an AI Regulator?, Leuven, October 11, 2022.

117 Although the CJEU decided in the James Elliott case that it has jurisdiction to interpret harmonized standards in preliminary ruling procedures, according to Ebers (2022), it is unlikely that the Court would be willing to rule on the validity of a harmonized standard, either in an annulment action (per Article 263 TFEU) or a preliminary ruling procedure (per Article 267 TFEU). And even if it were, the CJEU is unlikely to review and invalidate its substantive content – its jurisdiction would be limited to reviewing whether the Commission made an error in deciding to publish a harmonized standard in the Official Journal. See Martin Ebers, “Standardizing AI: The case of the European Commission’s proposal for an ‘Artificial Intelligence Act,’” in L. A. DiMatteo, C. Poncibò, and M. Cannarsa (eds.), The Cambridge Handbook of Artificial Intelligence: Global Perspectives on Law and Ethics (Cambridge University Press, 2022), pp. 321–344.

118 See for example the ANEC and BEUC standardization project: https://anec.eu/projects/ai-standards, accessed June 20, 2024.

119 CENELEC/CEN standardization committees are dispersed across all corners of Europe, yet most of the meetings now tend to take place online.

120 Our experiences when piloting the AI HLEG’s Trustworthy AI Assessment List showed an across-the-board lack of understanding of what a fundamental rights impact assessment entails, with the majority of respondents mystified by the requirement to consider the impact of their AI system on fundamental rights in the first place.

121 But see recent efforts by Equinet, “Equality-compliant artificial intelligence: Equinet’s plans for 2024”, available at https://equineteurope.org/latest-developments-in-ai-equality/ (accessed June 20, 2024).

122 See in this regard the CENELEC Internal Regulations, Part 3.

123 See Article 113 of the AI Act, which also lists some exceptions.

124 See also Karen Yeung, “Responsibility and AI – A Study of the Implications of Advanced Digital Technologies (Including AI Systems) for the Concept of Responsibility within a Human Rights Framework,” Council of Europe, 2019, DGI (2019)05; Nathalie A. Smuha, “Beyond the individual: governing AI’s societal harm,” Internet Policy Review, 10(3), 2021.

Figure 7.1 A fundamental rights perspective on the sources of privacy and data protection law

Figure 7.2 The EU legal order – general and data protection specific

Figure 7.3 Lawfulness and purpose limitation, combined

Figure 7.4 Overview of the main steps of a Data Protection Impact Assessment

Table 12.1 High-risk AI systems listed in Annex III
