10.1 Introduction
AI brings both risks and opportunities for consumers. For instance, AI can help consumers optimize their energy use, detect credit card fraud, simplify or select relevant information, or translate. Risks exist as well, for instance in the form of biased or erroneous information and advice, or manipulation into choices that do not serve consumers’ best interests. The increased use of AI also poses major challenges for consumer law, which traditionally focuses on protecting consumers’ autonomy and self-determination; these challenges are the focal point of this chapter.
We start by setting out how AI systems can affect consumers in both positive and negative ways (Section 10.2). Next, we explain how the fundamental underpinnings and basic concepts of consumer law are challenged by AI’s ubiquity, and we caution against a silo approach to the application of this legal domain in the context of AI (Section 10.3). Subsequently, we provide a brief overview of some of the most relevant consumer protection instruments in the EU and discuss how they apply to AI systems (Section 10.4). Finally, we illustrate the shortcomings of the current consumer protection law framework more concretely by taking dark patterns as a case study (Section 10.5). We conclude that additional regulation is needed to protect consumers against AI’s risks (Section 10.6).
10.2 Challenges and Opportunities of AI for Consumers
The combination of AI and data offers traders a vast range of new opportunities in their relationship with consumers. Economic operators may use, among other techniques, machine learning algorithms, a subdiscipline of AI, to analyze large datasets. These algorithms process extensive examples of desired or interesting behavior, known as the “training data,” to generate knowledge learned from the data in computer-readable form. This knowledge can then be used to optimize various processes.Footnote 1 The (personal) data of consumers thus become a valuable source of information for companies.Footnote 2 Moreover, with the increasing adoption of the Internet of Things and advances in Big Data, the accuracy and amount of information obtained about individual consumers and their behavior are only expected to increase.Footnote 3 In an ideal situation, consumers would know which input (dataset) the market operator employed to train the algorithm, which learning algorithm was applied, and which task the machine was trained for.Footnote 4 However, market operators using AI often fail to disclose this information to consumers.Footnote 5 In addition, consumers often face the so-called “black box” or “inexplicability” problem of data-driven AI: the exact reasoning that led to the output, the final decision as presented to humans, remains unknown.Footnote 6 Collectively, this contributes to an asymmetry of information between businesses and consumers, with market players collecting huge amounts of personal data on consumers.Footnote 7 Consumers moreover often remain unaware that pricing or advertising has been tailored to their supposed preferences, creating an enormous potential to exploit the inherent weaknesses in consumers’ ability to understand that they are being persuaded.Footnote 8 Another major challenge, next to the consumer’s inability to understand business behavior, is that automated algorithmic decision-making can lead to biased or discriminatory results, as the training data may not be neutral (it is selected by a human and may thus perpetuate human biases) and may contain outdated data, data reflecting consumers’ behavioral biases, or data reflecting existing social biases against a minority.Footnote 9 This can lead directly to consumers receiving biased and erroneous advice and information.
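To make the training pipeline just described more concrete, the following minimal sketch (in Python, with entirely synthetic data and hypothetical behavioral features) shows a trader fitting a model on past consumer behavior and scoring a new visitor; the closing comment points to the “black box” problem: the score is usable while the reasoning behind it remains opaque.

```python
# Minimal, purely illustrative sketch: a trader fits a model on past consumer
# behavior ("training data") and uses it to predict how a new visitor will
# behave. All data here is synthetic; the features are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training data: one row per consumer
# (pages viewed, minutes on site, past purchases), with a label recording
# whether that consumer went on to buy.
X_train = rng.random((1000, 3))
y_train = X_train @ np.array([0.5, 1.0, 2.0]) + rng.normal(0, 0.3, 1000) > 1.7

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# The trader can now score a new visitor...
new_visitor = np.array([[0.2, 0.9, 0.7]])
print(model.predict_proba(new_visitor))
# ...but the hundreds of decision trees behind this score form a "black box":
# the exact reasoning that produced the output is neither disclosed to the
# consumer nor easy to reconstruct.
```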
In addition, AI brings significant risks of influencing consumers into making choices that do not serve their best interests.Footnote 10 The ability to predict consumers’ reactions allows businesses to trigger the desired behavior, potentially making use of consumer biases,Footnote 11 for instance through choice architecture. From the color of the “buy” button in an online shop to the position of a default payment method, design choices can be based on algorithms that define how options are presented to consumers in order to influence them.Footnote 12
Economic operators may furthermore influence or manipulate consumers, for purely economic goals, by restricting the information or offers they can access and thus their options.Footnote 13 Clustering techniques are used to analyze consumer behavior, classify consumers into meaningful categories, and treat them differently.Footnote 14 This personalization can occur in different forms, including the “choice architecture,” the offers that are presented to consumers, or different prices for the same product for different categories of consumers.Footnote 15 AI systems may also be used to determine and offer consumers the reserve price – the highest price they are able or willing to pay for a good or service.Footnote 16
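The clustering step described above can be sketched as follows; the features, segment count, and price table are purely hypothetical and serve only to illustrate how behavioral segmentation can feed personalized pricing.

```python
# Illustrative sketch (synthetic data, hypothetical features and prices) of
# clustering-based personalization: consumers are grouped by observed behavior
# and each segment is then quoted a different price for the same product.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(1)

# Hypothetical behavioral features per consumer:
# (average basket value, price-comparison visits, response to past discounts)
behavior = rng.random((500, 3))

segments = KMeans(n_clusters=3, n_init=10, random_state=1).fit_predict(behavior)

# Hypothetical price table: one product, three prices. Consumers estimated to
# be least price-sensitive are quoted a price closer to their reserve price;
# no consumer sees the offers made to the other segments.
segment_price = {0: 19.99, 1: 24.99, 2: 29.99}
prices_shown = [segment_price[s] for s in segments]
print(prices_shown[:10])
```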
Although AI entails risks, it also provides opportunities for consumers in various sectors. Think of AI applications in healthcare (e.g., mental health chatbots, diagnosticsFootnote 17), legal services (e.g., cheaper legal advice), finance and insurance services (e.g., fraud prevention), information services (e.g., machine translation, selection of more relevant content), and energy services (e.g., optimization of energy use through “smart homes”), to name but a few.Footnote 18 Personalized offers by traders and vendors could (at least in theory) also help consumers overcome undesirable information overload. An example of a consumer-empowering technology in the legal sector is CLAUDETTE, an online system that detects potentially unfair clauses in online contracts and privacy policies in order to empower the weaker contract party.Footnote 19
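By way of illustration only, the core idea behind clause-detection tools such as CLAUDETTE can be reduced to a text classifier trained on clauses labeled as (un)fair; CLAUDETTE’s actual models are considerably more sophisticated, and the clauses and labels below are invented.

```python
# Toy sketch of the idea behind clause-detection tools such as CLAUDETTE:
# a classifier trained on contract clauses labeled fair/unfair flags suspect
# terms. CLAUDETTE's real models are more sophisticated; the clauses and
# labels below are invented for illustration only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

clauses = [
    "The provider may terminate the service at any time without notice.",
    "The consumer may cancel the contract within fourteen days.",
    "The provider excludes all liability for any damage whatsoever.",
    "Refunds are issued within thirty days of a valid request.",
]
labels = [1, 0, 1, 0]  # 1 = potentially unfair, 0 = fair (toy labels)

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(clauses, labels)

# Flag a new clause for human review if the model deems it potentially unfair.
print(clf.predict(["The provider may change the price at its sole discretion."]))
```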
10.3 Challenges of AI for Consumer Law
Section 10.2 illustrated how AI systems can both positively and negatively affect consumers. The digital transformation in general, and AI specifically, however, also raises challenges for consumer law itself. The fundamental underpinnings and concepts of consumer law are increasingly put under pressure, and these new technologies pose enormous challenges in terms of enforcement. Furthermore, given the different types of concerns that AI systems raise in this context, consumer law cannot be seen or enforced in isolation from data protection or competition law. These aspects are briefly discussed in Sections 10.3.1–10.3.3.
10.3.1 Challenges to the Fundamental Underpinnings of Consumer Law
Historically, the emergence of consumer law is linked to the development of a consumer society. In fact, this legal domain has been referred to as a “reflection of the consumer society in the legal sphere.”Footnote 20 The need for legal rules to protect those who consume was indeed felt more urgently when consumption, above the level of basic needs, became an important aspect of life in society.Footnote 21 The trend of attaching increasing importance to consumption had been ongoing for several centuries,Footnote 22 but increasing affluence, the changing nature of the way business was conducted, and the massification of consumption all contributed to a body of consumer protection rules being adopted, mainly from the 1950s onwards.Footnote 23 More consumption was thought to equal more consumer welfare and more happiness. Consumer protection law in Europe first emerged at the national level.Footnote 24 It was only from the 1970s on that the European institutions started to develop an interest in consumer protection and that the first consumer protection programs followed.Footnote 25 The first binding instruments were adopted in the 1980s and consisted mostly of minimum harmonization instruments, meaning that member states were allowed to maintain or adopt more protective provisions as long as the minimum standards imposed by the harmonization instrument were respected. From 2000 onwards, the shift to maximum harmonization in European consumer protection instruments reduced the scope for a national consumer (protection) policy.
While originally the protection of a weaker consumer was central in many national regimes, the focus in European consumer law came to lie on the rational consumer whose right to self-determination (private autonomy) on a market must be guaranteed.Footnote 26 This right to self-determination can be understood as the right to make choices in the (internal) market according to one’s own preferences,Footnote 27 thereby furthering the realization of the internal market.Footnote 28 This focus on self-determination presupposes a consumer capable of making choices and enjoying the widest possible range of options to choose from.Footnote 29 EU consumer law could thus be described as the guardian of the economic rights of the nonprofessional player in the (internal) market. Private autonomy and contractual freedom should in principle suffice to protect these economic rights and to guarantee a bargain in accordance with one’s own preferences, but consumer law acknowledges that the preconditions for such a bargain may be absent, especially due to information asymmetries between professional and nonprofessional players.Footnote 30 Information was and is therefore used as the main corrective mechanism in EU consumer law.Footnote 31 Further-reaching intervention – for example, by regulating the content of contracts – implies a greater intrusion into private autonomy and is therefore only a subsidiary protection mechanism.Footnote 32
AI, with the far-reaching possibilities of personalization and manipulation it entails, especially when used in combination with personal data, now challenges the assumption of the rational consumer with his or her “own” preferences even more fundamentally. The effectiveness of information as a means of protection had already been questioned before the advent of new technologies,Footnote 33 but the additional complexity of AI leaves no doubt that the mere provision of information will not be a solution to the ever-increasing information asymmetry and risk of manipulation. The emergence of an “attention economy,” whereby companies strive to retain consumers’ attention in order to generate revenue based on advertising and data gathering, furthermore makes clear that “more consumption is more consumer welfare” is an illusion.Footnote 34 The traditional underpinnings of consumer law therefore need revisiting.
10.3.2 Challenges to the Basic Concepts of Consumer Law
European consumer law uses the abstract concept of the “average” consumer as a benchmark.Footnote 35 This is a “reasonably well informed and reasonably observant and circumspect” consumer;Footnote 36 a person who is “reasonably critical […], conscious and circumspect in his or her market behaviour.”Footnote 37 This benchmark, as interpreted by the Court of Justice of the European Union, has been criticized for not taking into account consumers’ cognitive biases and limitations and for allowing companies to engage in exploitative behavior.Footnote 38 AI now creates exponentially greater possibilities to exploit these cognitive biases, making the need to realign the consumer benchmark with the realities of consumer behavior even more urgent. There is furthermore some, but only limited, attention to the vulnerable consumer in EU consumer law.Footnote 39 The Unfair Commercial Practices Directive, for example, allows a practice to be assessed from the perspective of the average member of a group of vulnerable consumers even if the practice was directed at a wider group, provided the trader could reasonably foresee that the practice would distort the behavior of vulnerable consumers.Footnote 40 The characteristics the UCPD identifies to define vulnerability (such as mental or physical infirmity, age, or credulity) are, however, neither particularly helpful nor exhaustive in a digital context. Interestingly, the Commission Guidance does stress that vulnerability is not a static concept but a dynamic and situational one,Footnote 41 and that the characteristics mentioned in the directive are indicative and non-exhaustive.Footnote 42 The literature has, however, rightly argued that a reinterpretation of the concept of vulnerability will not be sufficient to better protect consumers in a digital context. It is submitted that in digital marketplaces most, if not all, consumers are potentially vulnerable: digitally vulnerable and susceptible “to (the exploitation of) power imbalances that are the result of increasing automation of commerce, datafied consumer-seller relations and the very architecture of digital marketplaces.”Footnote 43 AI and digitalization thus create a structural vulnerability that requires a further-reaching intervention than a mere reinterpretation of vulnerability.Footnote 44 More attention to tackling the sources of digital vulnerability and to the architecture of digital marketplaces is hence necessary.Footnote 45
10.3.3 Challenges to the Silo Approach to Consumer Law
Consumer law has developed in parallel with competition law and data protection law but, certainly in digital markets, it is artificial – also in terms of enforcement – to strictly separate these areas of the law.Footnote 46 The use of AI often involves the use of (personal) consumer data, and concentration in digital markets creates a risk of abuses of personal data to the detriment of consumers. Indeed, there are numerous and frequent instances where the same conduct is covered simultaneously by consumer law, competition law, and data protection law.Footnote 47 The German Facebook case of the BundesgerichtshofFootnote 48 is just one example where competition law (abuse of a dominant position) was successfully invoked also to guarantee consumers’ choice in the data they want to share and in the level of personalization of the services provided.Footnote 49 There is certainly a need for more convergence and a complementary application of these legal domains, rather than artificially dividing them, especially when it comes to enforcement. The case law allowing consumer protection organizations to bring representative actions on the basis of consumer law (namely unfair practices or unfair contract terms), also for infringements of data protection legislation, is therefore certainly to be welcomed.Footnote 50
10.4 Overview of Relevant Consumer Protection Instruments
These challenges do not, of course, imply that AI currently operates in a legal vacuum and that no protection is in place. The existing consumer law instruments provide some safeguards, both when AI is used in advertising or in a precontractual stage and when it is the actual subject matter of a consumer contract (e.g., as part of a smart product). The current instruments are, however, not well adapted to AI, as the brief overview of the most relevant instruments below illustrates.Footnote 51 An exercise is ongoing to potentially adapt several of these instrumentsFootnote 52 and make them fit for the digital age.Footnote 53 In addition, several new acts have been adopted or proposed in the digital sphere that also have an impact on consumer protection and AI.
10.4.1 The Unfair Commercial Practices Directive
The UCPD is a maximum harmonization instrument that regulates unfair commercial practices occurring before, during, and after a B2C transaction. It has a broad scope of application, and the combination of open norms and a blacklist of practices that are prohibited in all circumstances allows it to tackle a wide range of unfair business practices, including those resulting from the use of AI.Footnote 54 Practices are unfair, according to the general norm, if they are contrary to the requirements of “professional diligence and are likely to materially distort the economic behaviour of the average consumer.”Footnote 55 The UCPD furthermore prohibits misleading and aggressive practices. Misleading practices are actions or omissions that deceive or are likely to deceive the average consumer and cause them to make a transactional decision they would not have taken otherwise.Footnote 56 Aggressive practices entail the use of coercion or undue influence that significantly impairs the average consumer’s freedom of choice and causes them to make a transactional decision they would not have taken otherwise.Footnote 57
The open norms definitely offer some potential to combat the use of AI to manipulate consumers, whether through the general norm or through the prohibition of misleading or aggressive practices.Footnote 58 However, the exact application and interpretation of these open norms make the outcome of such cases uncertain.Footnote 59 When exactly does the use of AI amount to “undue influence”? How is the concept of the “average consumer” to be applied in a digital context? When exactly does personalized advertising become misleading? We make these problems more concrete in our analysis of dark patterns below (Section 10.5). More guidance on the application of these open norms could make their application to AI-based practices easier.Footnote 60 Additional blacklisted practices could also provide more legal certainty.
10.4.2 Consumer Rights Directive
The CRD – also a maximum harmonization directiveFootnote 61 – regulates the information traders must provide to consumers when contracting, both for on-premises contracts and for distance and doorstep contracts. In addition, it regulates the right of withdrawal from the contract. The precontractual information requirements are extensive: they include obligations to provide information about the main characteristics and total price of goods or services, about the functionality and interoperability of digital content and digital services, and about the duration and conditions for termination of the contract.Footnote 62 However, as Ebers notes, these obligations are formulated quite generally, making it difficult to concretize their application to AI systems.Footnote 63 The Modernization DirectiveFootnote 64 – adopted to “modernize” a number of EU consumer protection directives in view of the development of digital toolsFootnote 65 – introduced a new information obligation for personalized pricing.Footnote 66 Art. 6(1)(ea) of the modernized CRD now requires the consumer to be informed that the price was personalized on the basis of automated decision-making. There is, however, no obligation to reveal the algorithm used or its methodology, nor to reveal how the price was adjusted for a particular consumer.Footnote 67 This additional information obligation has therefore been criticized as too narrow, since it hinders the detection of price discrimination.Footnote 68
10.4.3 Unfair Contract Terms Directive
The UCTD in essence requires contract terms to be drafted in plain, intelligible language and not to cause a significant imbalance in the parties’ rights and obligations to the detriment of the consumer.Footnote 69 Contract terms that do not comply with these requirements can be declared unfair and therefore nonbinding.Footnote 70 The directive has a very broad scope of application and applies to (not individually negotiated) clauses in contracts between sellers/suppliers and consumers “in all sectors of economic activity.”Footnote 71 It does not require that the consumer provide monetary consideration for a good or service: contracts whereby the consumer “pays” with personal data, or whereby the consideration consists in consumer-generated content and profiling, are also covered.Footnote 72 It is furthermore a minimum harmonization directive, so stricter national rules can still apply.Footnote 73
The UCTD can help consumers combat unfair clauses (e.g., exoneration clauses, terms on conflict resolution, terms on personalization of the service, terms contradicting the GDPR)Footnote 74 in contracts with businesses that use AI. It could also be used to combat nontransparent personalized pricing involving AI. In principle, the UCTD does not allow judges to review the fairness of core contract terms (clauses that determine the main subject matter of the contract), nor the adequacy of the price and remuneration.Footnote 75 This exclusion, however, only applies if these clauses are transparent.Footnote 76 The UCTD could furthermore be invoked if AI has been used to personalize contract terms without disclosure to the consumer.Footnote 77 Unfair terms do not bind the consumer and may even lead to the whole contract being void if the contract cannot continue to exist without the unfair term.Footnote 78
10.4.4 Consumer Sales Directive and Digital Content and Services Directive
When AI is the subject matter of the contract, the new Consumer Sales Directive 2019/771 (“CSD”) and Digital Content and Services Directive 2019/770 (“DCSD”) provide the consumer with remedies in case the AI application fails. The CSD applies when the digital element provided under the sales contract is incorporated in or interconnected with the good in such a way that the absence of the digital element would prevent the good from performing its functions.Footnote 79 If this is not the case, the DCSD applies. Both directives provide for a similar – but not identical – regime that determines the requirements for conformity and the remedies in case of nonconformity. These remedies include specific performance (repair or replacement in the case of a good with digital elements), price reduction, and termination. Damages caused by a defect in an AI application continue to be governed by national law. The directives also provide for an update obligation (including security updates) for the seller of goods with digital elements and for the trader providing digital content or services.Footnote 80
10.4.5 Digital Markets Act and Digital Services Act
The Digital Markets Act (“DMA”), which applies as of May 2, 2023,Footnote 81 aims to maintain an open and fair online environment for business users and end users by regulating the behavior of large online platforms, known as “gatekeepers,” which have significant influence in the digital market and act as intermediaries between businesses and customers.Footnote 82 Examples of such gatekeepers are Google, Meta, and Amazon. The regulation has only an indirect impact on the use of AI, as it aims to prevent these gatekeepers from engaging in unfair practices that give them significant power and control over access to content and services.Footnote 83 Such practices may involve the use of biased or discriminatory AI algorithms. The regulation imposes obligations on gatekeepers such as enabling users to uninstall default software applications on the gatekeeper’s operating system,Footnote 84 a ban on self-preferencing,Footnote 85 and the obligation to provide data on advertising performance and ad pricing.Footnote 86 The DMA certainly provides additional consumer protection, but it does so indirectly, mainly by regulating the relationship between platforms and business users and by creating more transparency. Consumer rights are not central to the DMA, as is also apparent from the lack of involvement of consumers and consumer organizations in the DMA’s enforcement.Footnote 87
The Digital Services Act (“DSA”),Footnote 88 which applies as of February 17, 2024,Footnote 89 establishes a harmonized set of rules on the provision of online intermediary services and aims to ensure a safe, predictable, and trustworthy online environment.Footnote 90 The regulation mainly affects online intermediaries (including online platforms), such as online marketplaces, online social networks, online travel and accommodation platforms, content-sharing platforms, and app stores.Footnote 91 It introduces additional transparency obligations, including advertising transparency requirements for online platformsFootnote 92 and a ban on targeted advertising to minors based on profiling,Footnote 93 as well as a ban on targeted advertising based on profiling using special categories of personal data, such as religious belief or sexual orientation.Footnote 94 It also introduces recommender system transparency for providers of online platforms.Footnote 95 The regulation furthermore obliges very large online platforms to carry out a risk assessment of their services and systems, including their algorithmic systems.Footnote 96
10.4.6 Artificial Intelligence Act
The Artificial Intelligence Act (“AI Act”), adopted on June 13, 2024, provides harmonized rules for “the placing on the market, the putting into service and the use of AI systems in the Union.”Footnote 97 It uses a risk-based methodology to classify certain uses of AI systems as entailing a low, high, or unacceptable risk.Footnote 98 AI practices that pose an unacceptable risk are prohibited, including subliminal techniques that distort behavior and cause significant harm.Footnote 99 The regulation provides for penalties for noncomplianceFootnote 100 and establishes a cooperation mechanism at the European level (the European Artificial Intelligence Board), composed of representatives of the Member States and the Commission, to ensure enforcement of the provisions of the AI Act across Europe.Footnote 101 Concerns have been expressed as to whether the AI Act adequately addresses consumer protection concerns. It has been argued that the lists of “high-risk” applications and of forbidden AI practices do not cover all AI applications or practices that are problematic for consumers.Footnote 102 Furthermore, the sole focus on public enforcement and the lack of appropriate individual rights for consumers and collective rights for consumer organizations to ensure effective enforcement have been criticized.Footnote 103
10.5 Dark Patterns as a Case Study
10.5.1 The Concept of Dark Patterns
The OECD Committee on Consumer Policy uses the following working definition of dark patterns:
business practices employing elements of digital choice architecture, in particular in online user interfaces, that subvert or impair consumer autonomy, decision-making or choice. They often deceive, coerce or manipulate consumers and are likely to cause direct or indirect consumer detriment in various ways, though it may be difficult or impossible to measure such detriment in many instances.Footnote 104
A universally accepted definition is lacking, but dark patterns can be described by their common features: the use of hidden, subtle, and often manipulative designs or marketing tactics that exploit consumer biases, vulnerabilities, and preferences to the benefit of the business or the provider of intermediary services presenting the information, in ways that may not align with the consumer’s own preferences or best interests.Footnote 105 Examples of such practices include (i) false hierarchy (the button for the business’s desired outcome is more prominent or visually appealing than the others),Footnote 106 (ii) hidden information,Footnote 107 (iii) creating a false sense of urgency (illustrated in the sketch below),Footnote 108 and (iv) forced continuity or “roach motel” (making it significantly more difficult for consumers to cancel their subscription than it was to sign up, automatically renewing the service without the user’s express consent, or repeatedly asking consumers to reconsider their choice).Footnote 109 All of these practices are closely related to the concepts of choice architecture and hyperpersonalization discussed in Section 10.2, as they present choices in a non-neutral way.
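The “false urgency” pattern in particular is easy to make concrete. In the following sketch (all parameters hypothetical), the countdown banner’s deadline is computed from the visitor’s session clock rather than from any real expiry of the offer, so the displayed scarcity is fabricated and silently restarts.

```python
# Sketch of a "false urgency" countdown (hypothetical parameters): the banner's
# deadline is derived from the visitor's session clock, not from any genuine
# expiry of the offer, and it silently wraps around and restarts.
import time

COUNTDOWN_SECONDS = 15 * 60  # banner text: "Offer ends in 15:00!"

def remaining_seconds(session_start: float) -> int:
    """Seconds shown on the banner. Because the value wraps around, every
    visitor - and every repeat visit - faces an equally 'urgent' deadline."""
    elapsed = int(time.time() - session_start)
    return COUNTDOWN_SECONDS - (elapsed % COUNTDOWN_SECONDS)

print(remaining_seconds(time.time()))  # ~900, regardless of who visits or when
```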
Dark patterns may involve the use of consumers’ personal data and the use of AI.Footnote 110 AI is an asset for refining dark patterns so that they affect consumer behavior more strongly yet subtly: it allows business operators to examine which dark patterns work best, especially when personal data is involved, and to adapt the dark patterns accordingly. An example of the power of the combination of dark patterns and AI can be found in platforms encouraging consumers to become paying members by presenting this option in different ways and over different time periods.Footnote 111 Machine learning applications can analyze personal data to optimize dark patterns and find ever more innovative ways to convince consumers to buy a subscription. They can examine how many hours a day are spent watching videos, how many advertisements are skipped, and whether the app is closed when an ad is shown.Footnote 112 The number of ads played may then be increased if the consumer refuses to become a paying member.Footnote 113 Such a process can be stretched over quite a long time, making consumers believe that subscribing is their own decision, without them feeling tricked.Footnote 114 In essence, the combination of AI, personal data, and dark patterns results in an increased ability to manipulate consumers.
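A minimal sketch of the feedback loop just described might look as follows; the signals, thresholds, and increments are entirely hypothetical and merely illustrate how engagement data can drive an escalating ad load for consumers who keep declining a subscription.

```python
# Illustrative sketch (hypothetical signals, thresholds, and increments) of
# the feedback loop described above: engagement data drives an escalating ad
# load for consumers who keep declining the paid subscription.
from dataclasses import dataclass

@dataclass
class EngagementSignals:
    hours_watched_per_day: float
    ads_skipped_ratio: float        # share of ads the user skips
    closes_app_on_ads: bool         # does the user leave when an ad plays?
    declined_subscription_offers: int

def ads_per_hour(s: EngagementSignals, base_rate: float = 4.0) -> float:
    """Toy policy: heavy users who resist ads and subscription offers are
    shown more ads, making the paid tier gradually more attractive."""
    rate = base_rate
    if s.hours_watched_per_day > 2:
        rate += 2                   # heavy users tolerate more friction
    rate += 1.5 * s.declined_subscription_offers  # escalate after each refusal
    if s.ads_skipped_ratio > 0.8 and not s.closes_app_on_ads:
        rate += 2                   # skips ads but keeps watching: push harder
    return rate

# A loyal viewer who skips most ads and has twice declined the subscription:
print(ads_per_hour(EngagementSignals(3.5, 0.9, False, 2)))  # -> 11.0
```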
10.5.2 Overview of the Relevant Instruments of Consumer Protection against Dark Patterns
The UCPD is a first instrument that offers a number of possible avenues to combat dark patterns. As mentioned, it covers a wide range of prohibited practices in a business-to-consumer context.Footnote 115 First, the general prohibition of unfair commercial practices in art. 5 UCPD, which functions as a residual control mechanism, can be invoked. It prohibits all practices that violate a trader’s professional diligence obligation and that cause the average consumer to make a transactional decision they would not otherwise have made.Footnote 116 This includes not only the decision to purchase or not to purchase a product but also related decisions, such as visiting a website or viewing content.Footnote 117 As mentioned, the standard of the “average” consumer (of the target group) is a normative standard that has (so far) been applied rather strictly, with rational behavior as the point of departure in the assessment.Footnote 118 The fact that the benchmark can be modulated to the target group does, however, offer some possibilities for a less strict standard in cases of personalization, as the practice could then even be assessed from the perspective of a single targeted person.Footnote 119
Article 5(3) UCPD furthermore creates some possibilities to assess a practice from the perspective of a vulnerable consumer, but the narrow definition of vulnerability as mental or physical infirmity, age, or credulity is – as mentioned – not suitable for the digital age. Indeed, any consumer can be temporarily vulnerable due to contextual and psychological factors.Footnote 120 According to the European Commission, the UCPD’s list of characteristics that make a consumer “particularly susceptible” is non-exhaustive, and the concept of vulnerability should therefore include context-dependent vulnerabilities, such as interests, preferences, psychological profile, and even mood.Footnote 121 It will indeed be important to adopt such a broader interpretation, to take into account the fact that all consumers can be potentially vulnerable in a digital context. The open norms of the UCPD might be sufficiently flexible for such an interpretation,Footnote 122 but a clearer text in the directive itself – and not only (nonbinding) Commission guidance – would be useful.
The more specific open norms prohibiting misleading and especially aggressive practices (arts. 6–9 UCPD) can also be invoked. But it is again uncertain how open concepts such as “undue influence” (art. 8 UCPD) are to be interpreted in an AI context and to what extent the benchmark of the average consumer can be individualized. At what point does increased exposure to advertising, tailored to past behavior in order to convince a consumer to “choose” a paid subscription, amount to undue influence? More guidance on the interpretation of these open norms would be welcome.Footnote 123
The blacklist in Annex I of the UCPD avoids this whole discussion on the interpretation of benchmarks. The list prohibits specific practices that are considered unfair in all circumstancesFootnote 124 and does not require an analysis of the potential effect on the average (or – exceptionally – vulnerable) consumer. Nor do the listed practices require proof that the trader breached his professional diligence duty.Footnote 125 The list prohibits several online practices, including disguised ads,Footnote 126 false urgency (e.g., fake countdown timers),Footnote 127 bait and switch,Footnote 128 and direct exhortations to children.Footnote 129 However, these practices were not specifically formulated with an AI context in mind, and interpretational problems therefore also arise when applying the current list to dark patterns. The Commission guidance, for instance, mentions that “making repeated intrusions during normal interactions in order to get the consumer to do or accept something (i.e., nagging) could amount to a persistent and unwanted solicitation.”Footnote 130 The same interpretational problem then arises: exactly how much intrusion and pressure is needed to make a practice a “persistent and unwanted solicitation”? Additional blacklisted (AI) practices would increase legal certainty and facilitate enforcement.
Finally, the recently added Article 7(4a) UCPD requires traders to provide consumers with general information about the main parameters that determine the ranking of search results and their relative importance. The effectiveness of this article in protecting consumers by informing them can be questioned, as transparency about the practices generated by an AI system collides with the black box problem. Sharing information about the input phase, such as the dataset and learning algorithm that were used, may to some extent mitigate the information asymmetry, but it will not suffice as a means of protection.
While the UCPD has broad coverage of most types of unfair commercial practices, its case-by-case approach does not effectively address all forms of the deceptive techniques known as “dark patterns.” For example, BEUC’s 2022 report highlights the lack of consumer protection against practices that use language and emotion to influence consumers to make choices or take specific actions, often through tactics such as shaming, also referred to as confirmshaming.Footnote 131 In addition, there is uncertainty about the responsibilities of traders under the professional diligence duty and about whether certain practices are explicitly prohibited.Footnote 132 Insufficient enforcement by both public and private parties further weakens this instrument.Footnote 133
A second piece of legislation that provides some protection against dark patterns is the DSA. The regulation refers to dark patterns as practices “that materially distort or impair, either purposefully or in effect, the ability of recipients of the service to make autonomous and informed choices or decisions.”Footnote 134 The DSA prohibits online platforms from designing, organizing, or operating their interfaces in a way that “deceives, manipulates, or otherwise materially distorts or impacts the ability of recipients of their services to make free and informed decisions”Footnote 135 insofar as those practices are not covered by the UCPD or the GDPR.Footnote 136 This important exception, however, largely erodes the added consumer protection: where the UCPD applies – and that includes all B2C practices – its vague standards will govern rather than the more specific prohibition of dark patterns in the DSA. A cumulative application would have been preferable. The DSA inter alia targets exploitative design choices and practices such as “forced continuity,” which make it unreasonably difficult to discontinue purchases or to sign out from services.Footnote 137
The AI Act contains two specific prohibitions on manipulation practices carried out through the use of AI systems that may cover dark patterns.Footnote 138 They prohibit the use of subliminal techniques to materially distort a person’s behavior in a manner that causes or is likely to cause significant harm, and the exploitation of the vulnerabilities of specific groups of people to materially distort their behavior in a manner that causes or is likely to cause significant harm.Footnote 139 These prohibitions are similar to those in the UCPD, except that they are limited to practices carried out through the use of AI systems.Footnote 140 They furthermore have some limitations. The ban relating to the abuse of vulnerabilities only applies to certain explicitly listed vulnerabilities, such as age, disability, or a specific social or economic situation; the aforementioned problem of digital vulnerability is thus not tackled. A further major limitation was fortunately omitted from the final text of the AI Act: whereas under the AI Act proposal these provisions only applied in case of physical or mental harm – which will often not be present and may be difficult to proveFootnote 141 – the prohibitions of the final AI Act also apply to (significant) economic harm.
The AI Act is complementary to other existing regulations, including data protection, consumer protection, and digital services legislation.Footnote 142 Finally, given that the regulation focuses strongly on high-risk AI and that few private services qualify as high risk, the additional protection it offers consumers seems limited.
The Consumer Rights Directive, with its transparency requirements for precontractual informationFootnote 143 and its prohibition on the use of pre-ticked boxes implying additional payments, might also provide some help.Footnote 144 However, the prohibition on pre-ticked boxes does not apply to certain sectors that are excluded from the directive, such as financial services.Footnote 145 The UCPD could, however, also be invoked to combat charging for additional services through default interface settings, and that directive does apply to the financial sector.Footnote 146 The CRD does not regulate the conditions for contract termination, except for the right of withdrawal. An obligation for traders to provide a “withdrawal function” or “cancellation button” in contracts concluded by means of an online interface has recently been added to the CRD.Footnote 147 This function is meant to make it easier for consumers to terminate distance contracts, particularly subscriptions, during the withdrawal period. It could be a useful tool to combat subscription traps.
10.6 Conclusion
AI poses major challenges to consumers and to consumer law, and the traditional consumer law instruments are not well adapted to tackle these challenges. The mere provision of information on how AI operates will definitely not suffice to adequately protect consumers. The current instruments do allow some of the most blatantly detrimental practices to be tackled, but the application of open norms in a digital context creates uncertainty and hinders effective enforcement, as our case study of dark patterns has shown. The use of AI in a business context creates a structural vulnerability for all consumers. This requires additional regulation to provide better protection, as well as additional efforts to raise awareness of the risks AI entails.