9.1 Introduction
Algorithmic competition issues have been in the public eye for some time.Footnote 1 In 2017, for example, The Economist warned: “Price-bots can collude against consumers.”Footnote 2 Press attention was fueled by Ezrachi and Stucke’s Virtual Competition, a well-received book on the perils of the algorithm-driven economy.Footnote 3 For quite some time, however, academic and press interest outpaced the reality on the ground.Footnote 4 Price algorithms had been used to fix prices, but the collusive schemes were relatively low-tech (overseen by sellers themselves) and the consumer harm seemingly limited (some buyers of Justin Bieber posters overpaid).Footnote 5 As such, the AI and competition law literature was called “the closest ever our field came to science-fiction.”Footnote 6 More recently, that has started to change – with an increase in science, and a decrease in fiction. New economic models show that sellers can do more than use pricing algorithms to collude: algorithms can supplant human decision-makers and learn to charge supracompetitive prices autonomously.Footnote 7 Meanwhile, in the real world, pricing algorithms have become even more common and potentially pernicious, affecting markets as essential as real estate.Footnote 8
The topic of AI and competition law is thus ripe for reexamination, for which this chapter lays the groundwork. The chapter only deals with substantive competition law (and related areas of law), not with more institutional questions like enforcement, which deserve a separate treatment. Section 9.2 starts with the end-goal of competition law, that is, consumer welfare, and how algorithms and the increasing availability of data may affect that welfare. Section 9.3 dives into the main algorithmic competition issues, starting with restrictive agreements, both horizontal and vertical (Section 9.3.1), and moving on to abuse of dominance, both exclusionary and exploitative (Section 9.3.2). The guiding question is whether EU competition rules are up to the task of remedying these issues. Section 9.4 concludes with an agenda for future research.
Before we jump in, a note on terminology. The careful reader will have noticed that, despite the “AI” in the title, I generally refer to “algorithms.” An algorithm is simply a set of steps to be carried out in a specific way.Footnote 9 This “specific way” can be pen and paper, but algorithms truly show their potential when executed by computers that are programmed to do so. At that point, we enter the “computational” realm, but when can we refer to AI? The problem is that AI is somewhat of a nebulous concept. In the oft-quoted words of the late Larry Tesler: “AI is whatever hasn’t been done yet” (the so-called “AI Effect”).Footnote 10 Machine learning (ML) is a more useful term, referring to situations where the computer (machine) itself extracts the algorithm for the task that underlies the data.Footnote 11 Thus, with ML, “it is not the programmers anymore but the data itself that defines what to do next.”Footnote 12 In what follows, I continue to refer to “algorithms” to capture their various uses and manifestations. For a more extensive discussion of the technological aspects of AI, see Chapter 1 of this book.
9.2 Consumer Welfare, Data, and Algorithms
The goal of EU competition law has always been to prevent distortions of competition, in other words, to protect competition.Footnote 13 But protecting competition is a means to an end. As the General Court put it: “the ultimate purpose of the rules that seek to ensure that competition is not distorted in the internal market is to increase the well-being of consumers.”Footnote 14 Competition, and thus consumer welfare, has different parameters, in particular price, choice, quality or innovation.Footnote 15 A practice’s impact on those parameters often determines its (il)legality.
Algorithmic competition can affect the parameters of competition. At the outset, though, it is important to understand that algorithms need input – that is, data – to produce output. When it comes to competition, the most relevant type of data is price data. Such data used to be hidden from view, requiring effort to collect (e.g., frequenting competitors’ stores). Nowadays, price transparency has become the norm, at least in business-to-consumer (B2C) settings, that is, at the retail level.Footnote 16 Prices tend to be available online (e.g., on the seller’s website). And digital platforms, including price comparison websites (PCWs), aggregate prices of different sellers in one place.
The effects of price transparency are ambiguous, as the European Commission (EC) found in its E-Commerce Sector Inquiry.Footnote 17 The fact that consumers can easily compare prices online leads to increased price competition between sellers.Footnote 18 At the same time, price transparency also allows firms to monitor each other’s prices, often algorithmically.Footnote 19 In a vertical relation between supplier and distributor, the supplier can more easily spot deviations from the retail price it recommended – and perhaps ask retailers for adjustment. In a horizontal relation between competitors, it has become common for firms to automatically adjust their prices to those of competitors.Footnote 20 In this case, the effects can go two ways. As EU Commissioner Vestager noted: “the effect of an algorithm depends very much on how you set it up.”Footnote 21 You can use an algorithm to undercut your rivals, which is a boon for consumers. Or you can use algorithms to increase prices, which harms consumers.
Both types of algorithms (undercutting and increasing) feature in the story of The Making of a Fly, a book that ended up being priced at over $23 million on Amazon. What happened? Two sellers of the book relied on pricing algorithms, with one systematically undercutting the other (but only just), and the other systematically charging 27% more than the first. An upward price spiral ensued, resulting in the book’s absurd price. In many other instances, however, the effects are less absurd and more harmful. Various studies have examined petrol prices, which are increasingly transparent.Footnote 22 In Chile, the government even obliged petrol station owners to post their prices on a public website. After the website’s introduction in 2012, coordination by petrol station owners increased their margins by 9%, at the expense of consumers.Footnote 23 A similar result can be reached in the absence of such radical transparency. A study of German petrol stations found that adoption of algorithmic pricing also increased margins by 9%.Footnote 24 Companies such as A2i specialize in providing such pricing software.Footnote 25
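The arithmetic of the spiral is easy to reproduce. In the minimal sketch below, the multipliers are assumptions based on the reported behavior (one seller pricing just below its rival, the other 27% above); the feedback loop between the two rules compounds geometrically:

```python
# Illustrative reconstruction of the Making-of-a-Fly price spiral.
# The multipliers are assumptions, not the sellers' actual settings:
UNDERCUT = 0.998   # hypothetical "only just" undercutting factor
MARKUP = 1.27      # the reported ~27% premium

def simulate_spiral(start_price: float, rounds: int) -> float:
    """Return the higher-priced seller's price after `rounds` of mutual repricing."""
    price_a = price_b = start_price
    for _ in range(rounds):
        price_a = UNDERCUT * price_b   # A undercuts B (but only just)
        price_b = MARKUP * price_a     # B prices 27% above A
    return price_b
```

Each round multiplies the price by roughly 0.998 × 1.27 ≈ 1.267, so a $40 book passes the $23 million mark in fewer than 60 rounds of daily repricing.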
Algorithms can create competition issues beyond coordination on a supracompetitive price point. They can also underpin unilateral conduct, of which two types are worth highlighting. First, algorithms allow for personalized pricing.Footnote 26 The input here is not pricing data from competitors but rather personal data from consumers. If personal data allows the seller to infer a consumer’s exact willingness to pay, it can perfectly price discriminate, although this scenario is theoretical for now. The impact of price discrimination is not straightforward: while some consumers pay more than they otherwise would, it can also allow firms to serve consumers they otherwise would not.Footnote 27 Second, algorithms are widely used for non-pricing purposes, in particular for ranking.Footnote 28 Indeed, digital platforms have sprung up to bring order to the boundless internet (e.g., Google Search for websites, Amazon Marketplace for products). Given the platforms’ power over consumer choice, a tweak of their ranking algorithm can marginalize one firm while bringing fortune to another. As long as tweaks are made in the interests of consumers, they are not problematic. But if tweaks are made simply to give prominence to the platform’s own products (“self-preferencing”), consumers may suffer the consequences.
9.3 Algorithmic Competition Issues
Competition law protects competition, thus guaranteeing consumer welfare, via specific rules. I focus on two provisions: the prohibitions of restrictive agreements (Article 101 TFEU) and of abuse of dominance (Article 102 TFEU).Footnote 29 The next sections examine these prohibitions, and the extent to which they substantively cover algorithmic competition issues.
9.3.1 Restrictive Agreements
Restrictive agreements come in two types: they are horizontal when entered into between competitors (“collusion”) and vertical when entered into between firms at different levels of the supply chain (e.g., supplier and distributor). An agreement does not require a contract; more informal types of understanding between parties (“concerted practices”) also fall under Article 101 TFEU.Footnote 30 To be illegal, the common understanding must have the object or effect of restricting competition. According to the case law, “by object” restrictions are those types of coordination that “can be regarded, by their very nature, as being harmful to the proper functioning of normal competition.”Footnote 31 Given that such coordination reveals, in itself, a sufficient degree of harm to competition, it is not necessary to assess its effects.Footnote 32 “By effect” restrictions do require such an assessment. In general, horizontal agreements are more likely to fall into the “by object” category (price-fixing being the typical example), while vertical agreements are more likely to be categorized as “by effect” (e.g., recommending retail prices). Let us look at horizontal and vertical agreements in turn.
9.3.1.1 Horizontal Agreements
There are two crucial aspects to every horizontal price-fixing agreement or “cartel”: the moment of its formation and its period of stability (i.e., when no cartelist deviates from the arrangement). In the physical world, cartel formation and stability face challenges.Footnote 33 It can be difficult for cartelists to reach a common understanding on the terms of the cartel (in particular the price charged), and coordination in any case requires contact (e.g., meeting in a hotel in Hawaii). Once an agreement is reached, the cartelists have to abide by it even while having an incentive to cheat (deviating from the agreement, e.g., by charging a lower price). Such cheating returns a payoff: in the period before detection, the cheating firm can win market/profit share from its co-cartelists (after detection, all cartelists revert to the competitive price level). The longer the period before detection, the greater the payoff and thus the incentive to cheat.
In a digital world, cartel formation and stability may face fewer difficulties.Footnote 34 Cartel formation does not require contact when algorithms themselves reach a collusive equilibrium. When given the objective to maximize profits (in itself not objectionable), an ML algorithm may figure out that charging a supracompetitive price, together with other firms deploying similar algorithms, satisfies that objective. And whether or not there is still an agreement at the basis of the cartel, subsequent stability is greater. Price transparency and monitoring algorithms allow for quicker detection of deviations from the cartel agreement.Footnote 35 As a result, the expected payoff from cheating is lower, meaning there is less of an incentive to do so.Footnote 36 When a third party algorithmically sets prices for different sellers (e.g., Uber for its drivers), deviation even becomes impossible. In these different ways, algorithmic pricing makes cartels more robust. Moreover, competition authorities may have more trouble detecting cartels, given that there is not necessarily a paper trail.
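The stability logic can be made concrete with a standard repeated-game calculation. In the sketch below, all profit figures and the discount factor are hypothetical: cheating pays when detection takes ten periods, but not when monitoring algorithms cut the lag to one period.

```python
# Back-of-envelope repeated-game sketch (all numbers hypothetical):
# a deviator earns pi_deviate per period until detected after `lag`
# periods, after which everyone reverts to the competitive profit.
def present_value_of_cheating(delta, lag, pi_deviate, pi_comp):
    pre = pi_deviate * (1 - delta**lag) / (1 - delta)   # before detection
    post = pi_comp * delta**lag / (1 - delta)           # after reversion
    return pre + post

def present_value_of_complying(delta, pi_cartel):
    return pi_cartel / (1 - delta)

delta, pi_cartel, pi_deviate, pi_comp = 0.9, 10, 18, 5
for lag in (1, 10):  # detection after 1 vs. 10 periods
    cheat = present_value_of_cheating(delta, lag, pi_deviate, pi_comp)
    comply = present_value_of_complying(delta, pi_cartel)
    print(f"lag={lag:2d}: cheat={cheat:6.1f} vs comply={comply:6.1f} "
          f"-> cheating pays: {cheat > comply}")
```

With these parameters, a one-period detection lag makes deviation unprofitable (63 vs. 100 in present-value terms), while a ten-period lag makes it profitable (roughly 135 vs. 100). Algorithmic monitoring shortens the lag and thus stabilizes the cartel.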
In short, digitization – in particular price transparency and the widespread use of algorithms to monitor/set prices – makes cartels, if anything, more likely and more durable. Taking a closer look at algorithmically assisted price coordination, it is useful to distinguish three scenarios.Footnote 37 First, firms may explicitly agree on prices and use algorithms to (help) implement that agreement. Second, firms may use the same pricing algorithm provided by a third party, which results in price coordination without explicit agreement between them. Third, firms may instruct distinct pricing algorithms to maximize profits, which results in a collusive equilibrium/supracompetitive prices. With each subsequent scenario, the existence of an agreement becomes less clear; in its absence, Article 101 TFEU does not apply. Let us test each scenario against the legal framework.
The first scenario, in which sellers algorithmically implement a prior agreement, does not raise difficult questions. The Posters case, referenced in the introduction, offers a model.Footnote 38 Two British sellers of posters, Trod and GB, agreed to stop undercutting each other on Amazon Marketplace. Given the difficulty of manually adjusting prices on a daily basis, the sellers implemented their cartel agreement via re-pricing software (widely available from third parties).Footnote 39 In practice, GB programmed its software to undercut other sellers but match the price charged by Trod if there were no cheaper competing offers. Trod configured its software with “compete rules” but put GB on an “ignore list” so that the rules it had programmed to undercut competitors did not apply to GB. Humans were still very much in the loop, as evidenced by emails in which employees complained about apparent noncompliance with the arrangement, in particular when the software did not seem to be working properly.Footnote 40 The UK Competition and Markets Authority had no trouble establishing an agreement, which fixed prices and was thus restrictive “by object.”
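The described configurations can be sketched as follows. The function names and data shapes are purely illustrative, not the actual repricing software’s interface; the point is that each rule looks like ordinary undercutting software, with the cartel expressed as a single carve-out for the co-cartelist:

```python
# Hedged sketch of the repricing rules described in the Posters case.
# `offers` maps seller names to their current prices and is assumed to
# contain at least one rival offer.

def gb_price(offers: dict[str, float], own_cost: float) -> float:
    """Undercut all rivals, but match Trod if Trod is the cheapest rival."""
    rivals = {s: p for s, p in offers.items() if s != "GB"}
    cheapest = min(rivals, key=rivals.get)
    if cheapest == "Trod":
        return rivals["Trod"]                          # match the co-cartelist
    return max(own_cost, rivals[cheapest] - 0.01)      # undercut everyone else

def trod_price(offers: dict[str, float], own_cost: float) -> float:
    """Apply 'compete rules' to everyone except GB (the 'ignore list')."""
    rivals = {s: p for s, p in offers.items() if s not in ("Trod", "GB")}
    if not rivals:
        return offers.get("Trod", own_cost)            # no one left to compete with
    return max(own_cost, min(rivals.values()) - 0.01)  # undercut non-GB rivals
```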
In this first scenario, the use of technology does not expose a legal vacuum; competition law is up to the task. But what if there was no preexisting price-fixing agreement? In that case, the sellers would simply be using repricing software to undercut other sellers and each other. At first sight, that situation appears perfectly competitive: undercutting competitors is the essence of competition – if that happens effectively and rapidly, all the better. The reality is more complex. Brown has studied the economics of pricing algorithms, finding that they change the nature of the pricing game.Footnote 41 The logic is this: once a firm commits to respond to whatever price its competitors charge, those competitors internalize that expected reaction, which conditions their pricing (they are more reluctant to decrease prices in the first place).Footnote 42 In short, even relatively simple pricing algorithms can soften competition. This is in line with the aforementioned study of algorithmic petrol station pricing in Germany.Footnote 43
The second scenario, in which sellers rely on a common algorithm to set their prices, is more difficult but not impossible to fit within Article 101 TFEU. There are two sub-scenarios to distinguish. First, the sellers may sell via an online platform that algorithmically sets the price for them. This setting is not common, as platforms generally leave their suppliers free to set a price, but Uber, which sets prices for all of its drivers, provides an example.Footnote 44 Second, sellers may use the same “off-the-shelf” pricing software offered by a third party. The U.S. firm RealPage, for example, offers its YieldShare pricing software to a large number of landlords.Footnote 45 The software relies not on public information (e.g., real estate listings) but on private information (actual rent charged), and RealPage even promotes communication between landlords through user groups.Footnote 46 In either sub-scenario, there is not necessarily communication between the different sellers, be they Uber drivers or landlords. Rather, the coordination originates from a third party, the pricing algorithm provider. Such scenarios can be classified as “hub-and-spoke” cartels, where the hub refers to the algorithm provider and the spokes are the sellers following its pricing guidance.Footnote 47
The guiding EU case on this second scenario is Eturas.Footnote 48 The case concerned the Lithuanian firm Eturas, operator of the travel booking platform E-TURAS. At one point, Eturas messaged the travel agencies using its platform that discounts would be automatically capped at 3% “to normalise the conditions of competition.”Footnote 49 In a preliminary reference, the European Court of Justice (ECJ) was asked whether the use of a “common computerized information system” to set prices could constitute a concerted practice between travel agencies under Article 101 TFEU.Footnote 50 The ECJ started from the foundation of cartel law, namely that every economic operator must independently determine its conduct on the market, which precludes any direct or indirect contact between operators so as to influence each other’s conduct.Footnote 51 Even passive modes of participation can infringe Article 101 TFEU.Footnote 52 But the burden of proof is on the competition authority, and the presumption of innocence precludes the authority from inferring, from the mere dispatch of the message, that the travel agencies were aware of its content.Footnote 53 Other objective and consistent indicia may, however, justify a rebuttable presumption that the travel agencies were aware of the message.Footnote 54 In that case, the authority can conclude that the travel agencies tacitly assented to a common anticompetitive practice.Footnote 55 That presumption too must be rebuttable, including by (i) public distancing, or a clear and express objection to Eturas; (ii) reporting to the administrative authorities; or (iii) systematic application of a discount exceeding the cap.Footnote 56
With this legal framework in mind, we can return to the case studies introduced earlier. With regard to RealPage’s YieldShare, it bears mentioning that the algorithm does not impose but merely suggests a price, from which landlords can deviate (although very few do). Nevertheless, the U.S. Department of Justice (DOJ) has opened an investigation.Footnote 57 The fact that RealPage also brings landlords into direct contact with each other may help the DOJ’s case. Uber has been subject to investigations around the globe, including in the U.S. and Brazil, although no infringement was ultimately established.Footnote 58 In the EU, there has not been a case, although Eturas could support a finding of infringement: drivers are aware of Uber’s common price-setting system and can thus be presumed to participate in a concerted practice.Footnote 59 That is not the end of the matter, though: infringements of Article 101(1) TFEU can be justified under Article 101(3) TFEU if they come with countervailing efficiencies, allow consumers a fair share of the benefit, are proportionate, and do not eliminate competition.Footnote 60 Uber might meet those criteria: its control over pricing is indispensable to the functioning of its efficient ride-hailing system (which reduces empty cars and waiting times), and that system comes with significant consumer benefits (such as convenience and lower prices). In its Webtaxi decision on a platform that operates like Uber, the Luxembourgish competition authority exempted the use of a common pricing algorithm based on this reasoning.Footnote 61
To conclude, this second scenario of sellers relying on a common price-setting algorithm, provided by either a platform or a third party, can still be addressed by EU competition law, even though it sits at the boundary of it. And if a common pricing algorithm is essential to a business model that benefits consumers, it may be justified.
The third scenario, in which sellers’ use of distinct pricing algorithms results in a collusive equilibrium, may escape the grasp of Article 101 TFEU. The mechanism is the following: sellers instruct their ML algorithms to maximize profits, after which the algorithms figure out that coordination on a supracompetitive price best attains that objective. These algorithms tend to use “reinforcement learning” and more specifically “Q-learning”: the algorithms interact with their environment (including the algorithms of competing sellers) and, through trial and error, learn the optimal pricing policy.Footnote 62 Modeling by Salcedo showed “how pricing algorithms not only facilitate collusion but inevitably lead to it,” albeit under very strong assumptions.Footnote 63 More recently, Calvano et al. took an experimental approach, letting pricing algorithms interact in a simulated marketplace.Footnote 64 These Q-learning algorithms systematically learned to adopt collusive strategies, including the punishment of deviations from the collusive equilibrium. That collusive equilibrium was typically below the monopoly level but substantially above the competitive level. In the end, while these theoretical and experimental results are cause for concern, it remains an open question to what extent autonomous price coordination can arise in real market conditions.Footnote 65
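To give a flavor of these experimental setups, the sketch below implements a minimal Q-learning duopoly. The price grid, winner-takes-all demand system, and learning parameters are illustrative choices, not those of any cited study; whether such a toy model converges to supracompetitive prices depends heavily on parameters and run length, which is precisely the open question.

```python
import random

# Minimal Q-learning duopoly sketch (all parameters illustrative).
PRICES = [1.0, 1.5, 2.0]          # 1.0 ~ competitive, 2.0 ~ monopoly level
COST = 0.5
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1

def profits(p1: float, p2: float) -> tuple[float, float]:
    """Cheaper firm takes the whole market; a tie splits it."""
    if p1 < p2:
        return (p1 - COST, 0.0)
    if p2 < p1:
        return (0.0, p2 - COST)
    return ((p1 - COST) / 2, (p2 - COST) / 2)

def train(periods: int, seed: int = 0):
    """Run two epsilon-greedy Q-learners against each other."""
    rng = random.Random(seed)
    n = len(PRICES)
    # Q[firm][i][j][a]: value of action a when last period's prices were (i, j).
    q = [[[[0.0] * n for _ in range(n)] for _ in range(n)] for _ in range(2)]
    state = (0, 0)
    for _ in range(periods):
        acts = []
        for firm in (0, 1):
            row = q[firm][state[0]][state[1]]
            if rng.random() < EPSILON:
                acts.append(rng.randrange(n))        # explore
            else:
                acts.append(row.index(max(row)))     # exploit
        rewards = profits(PRICES[acts[0]], PRICES[acts[1]])
        for firm in (0, 1):
            row = q[firm][state[0]][state[1]]
            next_row = q[firm][acts[0]][acts[1]]
            # Standard Q-learning update toward reward plus discounted value.
            row[acts[firm]] += ALPHA * (
                rewards[firm] + GAMMA * max(next_row) - row[acts[firm]])
        state = (acts[0], acts[1])
    return q, state
```

The key feature is that each agent conditions on the rivals’ past prices, which is what allows reward-punishment strategies (and thus collusive equilibria) to be learned rather than programmed.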
Nevertheless, it is worth asking whether EU competition law is up to the task if/when the third scenario of autonomously coordinating pricing algorithms materializes. The problem is in fact an old one.Footnote 66 In oligopolistic markets (with few players), there is no need for explicit collusion to set prices at a supracompetitive level; high interdependence and mutual awareness may suffice to reach that result. Such tacit collusion, while societally harmful, is beyond the reach of competition law (the so-called “oligopoly problem”). Tacit collusion is thought to occur rarely given the specific market conditions it requires, but some worry that, through the use of algorithms, it “could become sustainable in a wider range of circumstances possibly expanding the oligopoly problem to non-oligopolistic market structures.”Footnote 67 To understand the scope of the problem, let us take a closer look at the EU case law.
In case of autonomous algorithmic collusion, there is no agreement. Might there be a concerted practice? The ECJ has defined a concerted practice as “a form of coordination between undertakings by which, without it having reached the stage where an agreement properly so called has been concluded, practical cooperation between them is knowingly substituted for the risks of competition.”Footnote 68 This goes back to the requirement that economic operators independently determine their conduct on the market.Footnote 69 The difficulty is that, while this requirement strictly precludes direct or indirect contact between economic operators so as to influence each other’s conduct, it “does not deprive economic operators of the right to adapt themselves intelligently to the existing and anticipated conduct of their competitors.”Footnote 70 Therefore, conscious parallelism – even though potentially as harmful as a cartel – does not meet the concertation threshold of Article 101 TFEU. Indeed, “parallel conduct cannot be regarded as furnishing proof of concertation unless concertation constitutes the only plausible explanation for such conduct.”Footnote 71 Discarding every other plausible explanation for parallelism is a Herculean task with little chance of success. The furthest the EC has taken the concept of concertation is in Container Shipping.Footnote 72 The case concerned shipping companies that regularly announced their intended future price increases, doing so 3–5 weeks beforehand, which allowed them to test customer reactions and gave competitors time to align. According to the EC, this could be “a strategy for reaching a common understanding about the terms of coordination” and thus a concerted practice.Footnote 73
Truly autonomous collusion can escape the legal framework in the way that tacit collusion always has. In this sense, it is a twist on the unsolved oligopoly problem. Even the price signaling theory of Container Shipping, already at the outer boundary of Article 101 TFEU, hardly seems to capture autonomous collusion. If/when autonomous pricing agents are widely deployed, however, the problem may become bigger than the oligopoly problem we know. Scholars have made suggestions on how to adapt the legal framework to fill the regulatory gap, but few of the proposed rules are legally, economically, and technologically sound as well as administrable by competition authorities and judges.Footnote 74
9.3.1.2 Vertical Agreements
When discussing horizontal agreements, I only referenced the nature of the restrictions in passing, given that price-fixing is the quintessential “by object” restriction. Vertical agreements require more careful examination. An important distinction exists between recommended resale prices, which are presumptively legal, and fixed resale prices (“resale price maintenance” or RPM), which are presumptively illegal as “by object” restrictions.Footnote 75 The difference between the two can be small, especially when a supplier uses carrots (e.g., reimbursing promotional costs) or sticks (e.g., withholding supply) to turn a recommendation into more of an obligation. Algorithmic monitoring/pricing can play a role in this process. It can even exacerbate the anticompetitive effects of RPM.
In the wake of its E-Commerce Sector Inquiry, the EC started a number of investigations into online RPM. In four decisions, the EC imposed more than €110 million in fines on consumer electronics suppliers Asus, Denon & Marantz, Philips, and Pioneer.Footnote 76 These suppliers restricted the ability of online retailers to price kitchen appliances, notebooks, hi-fi products, and so on. Although the prices were often “recommendations” in name, the suppliers intervened in case of deviation, including through threats or sanctions. The online context was relevant in two ways. First, suppliers used monitoring software to effectively detect deviations by retailers and to intervene swiftly when prices decreased. Second, many retailers used algorithms to automatically adjust their prices to those of other retailers. Given that automatic adjustment, the restrictions that suppliers imposed on low-pricing retailers had a wider impact on overall prices than they would have had in an offline context.
There is also renewed interest in RPM at the national level. The Dutch Authority for Consumers & Markets (ACM) fined Samsung some €40 million for RPM of television sets.Footnote 77 Samsung took advantage of the greater transparency offered by web shops and PCWs to monitor prices through so-called “spider software,”Footnote 78 and confronted retailers that deviated from its price “recommendations.” Retailers also used “spiders” to adjust their prices (often downward) to those of competitors. Samsung regularly asked retailers to disable their spiders so that they would not automatically follow lower online prices. The ACM, like the EC, classified these practices as anticompetitive “by object.” Thus, while the methods of RPM may evolve, the traditional legal analysis remains applicable.
9.3.2 Abuse of Dominance
Abusive conduct comes in two types: it is exclusionary when it indirectly harms consumers by foreclosing competitors from the market and exploitative when it directly harms consumers, for example, by charging excessive prices. I discuss the main algorithmic concern under each category of abuse, that is, discriminatory ranking and personalized pricing, respectively. While I focus on abusive conduct, remember that such conduct only infringes Article 102 TFEU if the firm in question is also in a dominant position.
9.3.2.1 Exclusion
Given the abundance of online options (of goods, videos, webpages, etc.), curation is key. The role of curator is assumed by platforms, which rank the options for consumers; think, for example, of Amazon Marketplace, TikTok, and Google Search. Consumers trust that a platform has their best interests in mind, which is generally the case, and thus tend to rely on its ranking without much further thought. This gives the platform significant power over consumer choice, which can be abused. A risk of skewed rankings exists particularly when the platform does not only intermediate between suppliers and consumers, but also offers its own options. In that case, the platform may want to favor its own offering through choice architecture (“self-preferencing”).Footnote 79
The landmark case in this area is Google Search (Shopping).Footnote 80 At the heart of the abusive conduct was Google’s Panda algorithm, which demoted third-party comparison shopping services (CSS) in the search results, while Google’s own CSS was displayed prominently on top. Even the most highly ranked non-Google CSS appeared on average only on page four of the search results. This had a significant impact on visibility, given that users tend to focus on the first 3–5 results, with the first 10 results accounting for 95% of user clicks.Footnote 81 Skewed rankings distort the competitive process by excluding competitors and can harm consumers, especially when the promoted results are not the highest-quality ones.Footnote 82
Google was only the first of many cases of algorithmic exclusion.Footnote 83 Amazon has also been on the radar of competition authorities, with a variety of cases regarding the way it ranks products (and in particular, selects the winner of its “Buy Box”).Footnote 84 It is also under investigation for its “algorithmic control of price setting by third-party sellers,” which “can make it difficult for end customers to find offers by sellers or even lead to these offers being no longer visible at all.”Footnote 85
EU legislators considered the issue of discriminatory ranking serious enough to justify the adoption of ex ante regulation to complement ex post competition law. The Digital Markets Act (DMA) prohibits “gatekeepers” from self-preferencing in ranking, obliging them to apply “transparent, fair and non-discriminatory conditions to such ranking.”Footnote 86 Earlier instruments, like the Consumer Rights Directive (CRD)Footnote 87 and the Platform-to-Business (P2B) Regulation,Footnote 88 already mandated transparency in ranking.Footnote 89
9.3.2.2 Exploitation
Price discrimination, and more specifically personalized pricing, is of particular concern in algorithmically driven markets. Dynamic pricing, that is, firms adapting prices to market conditions (essentially, supply and demand), has long existed. Think, for example, of airlines changing prices over time (as captured by the saying that “the best way to ruin your flight is to ask your neighbor what they paid”). With personalized pricing, prices are tailored to the characteristics of the consumers in question (e.g., location and previous purchase behavior) so as to approach their willingness to pay. Authorities have put limits on such personalized pricing. Following action by the ACM, for example, the e-commerce platform Wish decided to stop using personalized pricing.Footnote 90
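The distinction can be captured in a few lines; both functions and all numbers are purely illustrative, and the willingness-to-pay estimate is the hypothetical output of whatever profiling the seller performs:

```python
# Illustrative contrast between dynamic and personalized pricing.
def dynamic_price(base: float, demand_index: float) -> float:
    """Same price for every buyer; varies only with market conditions."""
    return round(base * demand_index, 2)

def personalized_price(base: float, est_willingness_to_pay: float) -> float:
    """Tailored per buyer; set just below the estimated WTP (capped at 2x base)."""
    return round(min(est_willingness_to_pay * 0.95, base * 2), 2)

dynamic_price(100.0, 1.3)          # every buyer sees the same surge price
personalized_price(100.0, 180.0)   # this buyer sees a price near their WTP
```

Dynamic pricing reacts to the market as a whole; personalized pricing reacts to the individual, which is why the latter raises discrimination and data-protection concerns that the former does not.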
The ACM did not intervene based on competition law.Footnote 91 Article 102(a) TFEU prohibits excessive prices, but personalized prices are not necessarily excessive as such, and competition authorities are in any case reluctant to intervene directly in price-setting. Price discrimination, explicitly prohibited by Article 102(c) TFEU, may seem like a more fitting option, but that provision is targeted at discrimination between firms rather than between consumers.Footnote 92 Another limitation is that Article 102 TFEU requires dominance, and most firms engaged in personalized pricing do not have market power. While competition law is not an effective tool to deal with personalized pricing, other branches of law have more to say on the matter.Footnote 93
First, personalization is based on data, and the General Data Protection Regulation (GDPR) regulates the collection and processing of such data.Footnote 94 The DMA adds further limits for gatekeepers.Footnote 95 Various other laws – including the Unfair Commercial Practices Directive (UCPD),Footnote 96 the CRD,Footnote 97 and the P2B RegulationFootnote 98 – also apply to personalized pricing but are largely restricted to transparency obligations. The recent Digital Services Act (DSA)Footnote 99 and AI ActFootnote 100 go a step further with provisions targeted at algorithms, although their applicability to personalized pricing is yet to be determined.
Despite various anecdotes about personalized pricing (e.g., by Uber), there is no empirical evidence that the practice is widespread.Footnote 101 One limiting factor may be the reputational cost a firm incurs when its personalized pricing is publicized, given that consumers tend to view such practices as unfair. In addition, the technological capability to effectively personalize prices is sometimes overstated.Footnote 102 It would be good, however, to have a clear view of the fragmented regulatory framework for when the day of widespread personalized pricing does arrive.
9.4 Conclusion
Rather than revisiting interim conclusions, I end with a research agenda. This chapter has set out the state of the art on AI and competition, at least on the substantive side. Algorithms also pose risks – and opportunities – on the institutional (enforcement) side. Competition authority heads have vowed that they “will not tolerate anticompetitive conduct, whether it occurs in a smoke-filled room or over the Internet using complex pricing algorithms.”Footnote 103 While this elegant one-liner is a common-sense policy statement, the difficult question is “how?”. Substantive issues aside, algorithmic anticompetitive conduct can be more difficult to detect and deter. Compliance by design is key. Just like the ML models that have become world-class at playing Go and Texas Hold’em have the rules of those games baked in, firms deploying algorithms should think about programming them with the rules of economic rivalry, that is, competition law. At the same time, competition authorities will have to build out their algorithmic detection capabilities.Footnote 104 They may even want to go a step further and intervene algorithmically – or, in the words of the Economist article this chapter started with: “Trustbusters might have to fight algorithms with algorithms.”Footnote 105
Returning to substantive questions, the following would benefit from further research:
Theoretical and experimental research shows that autonomous algorithmic collusion is a possibility. To what extent are those results transferable to real market conditions? Do new developments in AI increase the possibility of algorithmic collusion?
Autonomous algorithmic collusion presents a regulatory gap, at least if such collusion exits the lab and enters the outside world. Which rule(s) would optimally address this gap, meaning they are legally, economically, and technologically sound and administrable by competition authorities and judges?
Algorithmic exclusion (ranking) and algorithmic exploitation (personalized pricing) are regulated to varying degrees by different instruments, including competition law, the DMA, the DSA, the P2B Regulation, the CRD, the UCPD, and the AI Act. How do these instruments fit together – do they exhibit overlap? Many of these instruments are centered on transparency – is that approach effective given the bounded rationality of consumers?
The enforcement questions (relating, e.g., to compliance by design) are no less pressing and difficult. Even more so than the substantive questions, they will require collaboration between lawyers and computer scientists.