
9 - Artificial Intelligence and Competition Law

from Part II - AI, Law and Policy

Published online by Cambridge University Press:  06 February 2025

Nathalie A. Smuha
Affiliation: KU Leuven

Summary

Firms use algorithms for important decisions in areas from pricing strategy to product design. Increased price transparency and the availability of personal data, combined with ever more sophisticated machine learning algorithms, have turbocharged their use. Algorithms can be a procompetitive force, such as when used to undercut competitors or to improve recommendations. But algorithms can also distort competition, as when firms use them to collude or to exclude competitors. EU competition law, in particular its provisions on restrictive agreements and abuse of dominance (Articles 101–102 TFEU), prohibits such practices, but novel anticompetitive practices – when algorithms collude autonomously, for example – may escape its grasp. This chapter assesses to what extent anticompetitive algorithmic practices are covered by EU competition law, examining horizontal agreements (collusion), vertical agreements (resale price maintenance), exclusionary conduct (ranking), and exploitative conduct (personalized pricing).

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2025
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC-BY 4.0 https://creativecommons.org/cclicenses/

9.1 Introduction

Algorithmic competition issues have been in the public eye for some time.Footnote 1 In 2017, for example, The Economist warned: “Price-bots can collude against consumers.”Footnote 2 Press attention was fueled by Ezrachi and Stucke’s Virtual Competition, a well-received book on the perils of the algorithm-driven economy.Footnote 3 For quite some time, however, academic and press interest outpaced the reality on the ground.Footnote 4 Price algorithms had been used to fix prices, but the collusive schemes were relatively low-tech (overseen by sellers themselves) and the consumer harm seemingly limited (some buyers of Justin Bieber posters overpaid).Footnote 5 As such, the AI and competition law literature was called “the closest ever our field came to science-fiction.”Footnote 6 More recently, that has started to change – with an increase in science, and a decrease in fiction. New economic models show that sellers need not merely use pricing algorithms to implement collusion – algorithms can supplant human decision-makers altogether and learn to charge supracompetitive prices autonomously.Footnote 7 Meanwhile, in the real world, pricing algorithms have become even more common and potentially more pernicious, affecting markets as essential as real estate.Footnote 8

The topic of AI and competition law is thus ripe for reexamination, for which this chapter lays the groundwork. The chapter only deals with substantive competition law (and related areas of law), not with more institutional questions like enforcement, which deserve a separate treatment. Section 9.2 starts with the end-goal of competition law, that is, consumer welfare, and how algorithms and the increasing availability of data may affect that welfare. Section 9.3 dives into the main algorithmic competition issues, starting with restrictive agreements, both horizontal and vertical (Section 9.3.1), and moving on to abuse of dominance, both exclusionary and exploitative (Section 9.3.2). The guiding question is whether EU competition rules are up to the task of remedying these issues. Section 9.4 concludes with an agenda for future research.

Before we jump in, a note on terminology. The careful reader will have noticed that, despite the “AI” in the title, I generally refer to “algorithms.” An algorithm is simply a set of steps to be carried out in a specific way.Footnote 9 This “specific way” can be pen and paper, but algorithms truly show their potential when executed by computers that are programmed to do so. At that point, we enter the “computational” realm, but when can we refer to AI? The problem is that AI is a somewhat nebulous concept. In the oft-quoted words of the late Larry Tesler: “AI is whatever hasn’t been done yet” (the so-called “AI Effect”).Footnote 10 Machine learning (ML) is a more useful term, referring to situations where the computer (machine) itself extracts the algorithm for a task from the data.Footnote 11 Thus, with ML, “it is not the programmers anymore but the data itself that defines what to do next.”Footnote 12 In what follows, I continue to refer to algorithms so as to capture their various uses and manifestations. For a more extensive discussion of the technological aspects of AI, see Chapter 1 of this book.
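The distinction can be made concrete with a small, purely illustrative sketch (the pricing rule, data, and numbers below are assumptions, not examples from this chapter): in the first function, the programmer writes out the pricing rule step by step; in the second, the rule is extracted from historical data.

```python
import numpy as np

# Hand-coded algorithm: the programmer specifies every step of the rule.
def rule_based_price(competitor_price):
    return competitor_price - 0.01  # always undercut the rival by one cent

# Machine learning: the mapping from input to price is fitted to (made-up) data.
competitor_prices = np.array([10.0, 12.0, 15.0, 20.0])
prices_that_sold = np.array([9.5, 11.4, 14.3, 19.0])
slope, intercept = np.polyfit(competitor_prices, prices_that_sold, 1)

def learned_price(competitor_price):
    return slope * competitor_price + intercept  # rule extracted from the data

print(rule_based_price(13.0), learned_price(13.0))
```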

9.2 Consumer Welfare, Data, and Algorithms

The goal of EU competition law has always been to prevent distortions of competition, in other words, to protect competition.Footnote 13 But protecting competition is a means to an end. As the General Court put it: “the ultimate purpose of the rules that seek to ensure that competition is not distorted in the internal market is to increase the well-being of consumers.”Footnote 14 Competition, and thus consumer welfare, has different parameters, in particular price, choice, quality or innovation.Footnote 15 A practice’s impact on those parameters often determines its (il)legality.

Algorithmic competition can affect the parameters of competition. At the outset, though, it is important to understand that algorithms need input – that is, data – to produce output. When it comes to competition, the most relevant type of data is price data. Such data used to be hidden from view, requiring effort to collect (e.g., frequenting competitors’ stores). Nowadays, price transparency has become the norm, at least in business-to-consumer (B2C) settings, that is, at the retail level.Footnote 16 Prices tend to be available online (e.g., on the seller’s website). And digital platforms, including price comparison websites (PCWs), aggregate prices of different sellers in one place.

The effects of price transparency are ambiguous, as the European Commission (EC) found in its E-Commerce Sector Inquiry.Footnote 17 The fact that consumers can easily compare prices online leads to increased price competition between sellers.Footnote 18 At the same time, price transparency also allows firms to monitor each other’s prices, often algorithmically.Footnote 19 In a vertical relation between supplier and distributor, the supplier can more easily spot deviations from the retail price it recommended – and perhaps ask retailers for adjustment. In a horizontal relation between competitors, it has become common for firms to automatically adjust their prices to those of competitors.Footnote 20 In this case, the effects can go two ways. As EU Commissioner Vestager noted: “the effect of an algorithm depends very much on how you set it up.”Footnote 21 You can use an algorithm to undercut your rivals, which is a boon for consumers. Or you can use algorithms to increase prices, which harms consumers.

Both types of algorithms (undercutting and increasing) feature in the story of The Making of a Fly, a book that ended up being priced at over $23 million on Amazon. What happened? Two sellers of the book relied on pricing algorithms, with one systematically undercutting the other (but only just), and the other systematically charging a price 27% higher than its rival. An upward price spiral ensued, resulting in the book’s absurd price. In many other instances, however, the effects are less absurd and more harmful. Various studies have examined petrol prices, which are increasingly transparent.Footnote 22 In Chile, the government even obliged petrol station owners to post their prices on a public website. After the website’s introduction in 2012, coordination by petrol station owners increased their margins by 9%, at the expense of consumers.Footnote 23 A similar result can be reached in the absence of such radical transparency. A study of German petrol stations found that adoption of algorithmic pricing also increased their margins by 9%.Footnote 24 Companies such as A2i specialize in providing such pricing software.Footnote 25
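The feedback loop behind The Making of a Fly incident is easy to reproduce in a few lines. The sketch below is not the sellers’ actual software; the undercutting and markup multipliers are assumptions chosen to mirror the dynamic described above (one seller pricing just below its rival, the other pricing 27% above).

```python
# Two interacting pricing rules: one undercuts slightly, one marks up by 27%.
# Because each reacts to the other, prices ratchet upward every cycle.
def simulate_price_spiral(start_price=20.0, cycles=30):
    undercutter, marker_up = start_price, start_price
    for cycle in range(1, cycles + 1):
        undercutter = 0.9983 * marker_up   # price just below the rival
        marker_up = 1.27 * undercutter     # price 27% above the rival
        print(f"cycle {cycle:2d}: {undercutter:>12,.2f} vs {marker_up:>12,.2f}")

simulate_price_spiral()  # after 30 cycles the book already costs tens of thousands of dollars
```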

Algorithms can create competition issues beyond coordination on a supracompetitive price point. They can also underlie unilateral conduct, of which two types are worth highlighting. First, algorithms allow for personalized pricing.Footnote 26 The input here is not pricing data from competitors but rather personal data from consumers. If personal data allows the seller to infer a consumer’s exact willingness to pay, it can price discriminate perfectly, although this scenario is theoretical for now. The impact of price discrimination is not straightforward: while some consumers pay more than they otherwise would, it can also allow firms to serve consumers they otherwise would not.Footnote 27 Second, algorithms are widely used for non-pricing purposes, in particular for ranking.Footnote 28 Indeed, digital platforms have sprung up to bring order to the boundless internet (e.g., Google Search for websites, Amazon Marketplace for products). Given the platforms’ power over consumer choice, a tweak of their ranking algorithm can marginalize one firm while bringing fortune to another. As long as tweaks are made in the interests of consumers, they are not problematic. But if tweaks are made simply to give prominence to the platform’s own products (“self-preferencing”), consumers may suffer the consequences.

9.3 Algorithmic Competition Issues

Competition law protects competition, thus guaranteeing consumer welfare, via specific rules. I focus on two provisions: the prohibitions of restrictive agreements (Article 101 TFEU) and of abuse of dominance (Article 102 TFEU).Footnote 29 The next sections examine these prohibitions, and the extent to which they substantively cover algorithmic competition issues.

9.3.1 Restrictive Agreements

Restrictive agreements come in two types: they are horizontal when entered into between competitors (“collusion”) and vertical when entered into between firms at different levels of the supply chain (e.g., supplier and distributor). An agreement does not require a contract; more informal types of understanding between parties (“concerted practices”) also fall under Article 101 TFEU.Footnote 30 To be illegal, the common understanding must have the object or effect of restricting competition. According to the case law, “by object” restrictions are those types of coordination that “can be regarded, by their very nature, as being harmful to the proper functioning of normal competition.”Footnote 31 Given that such coordination reveals, in itself, a sufficient degree of harm to competition, it is not necessary to assess its effects.Footnote 32 “By effect” restrictions do require such an assessment. In general, horizontal agreements are more likely to fall into the “by object” category (price-fixing being the typical example), while vertical agreements are more likely to be categorized as “by effect” (e.g., recommending retail prices). Let us look at horizontal and vertical agreements in turn.

9.3.1.1 Horizontal Agreements

There are two crucial aspects to every horizontal price-fixing agreement or “cartel”: the moment of its formation and its period of stability (i.e., when no cartelist deviates from the arrangement). In the physical world, cartel formation and stability face challenges.Footnote 33 It can be difficult for cartelists to reach a common understanding on the terms of the cartel (in particular the price charged), and coordination in any case requires contact (e.g., meeting in a hotel in Hawaii). Once an agreement is reached, the cartelists have to abide by it even while having an incentive to cheat (deviating from the agreement, e.g., by charging a lower price). Such cheating returns a payoff: in the period before detection, the cheating firm can win market/profit share from its co-cartelists (after detection, all cartelists revert to the competitive price level). The longer the period before detection, the greater the payoff and thus the incentive to cheat.

In a digital world, cartel formation and stability may face fewer difficulties.Footnote 34 Cartel formation does not require contact when algorithms themselves reach a collusive equilibrium. When given the objective to maximize profits (in itself not objectionable), an ML algorithm may figure out that charging a supracompetitive price, together with other firms deploying similar algorithms, satisfies that objective. And whether or not an agreement still underlies the cartel, subsequent stability is greater. Price transparency and monitoring algorithms allow for quicker detection of deviations from the cartel agreement.Footnote 35 As a result, the expected payoff from cheating is lower, meaning there is less of an incentive to do so.Footnote 36 When a third party algorithmically sets prices for different sellers (e.g., Uber for its drivers), deviation even becomes impossible. In these different ways, algorithmic pricing makes cartels more robust. Moreover, competition authorities may have more trouble detecting cartels, given that there is not necessarily a paper trail.
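The link between detection speed and the incentive to cheat can be illustrated with a stylized calculation (all profit figures and period counts below are assumed for illustration): the cheater pockets a higher deviation profit only until the deviation is detected, after which everyone reverts to competitive pricing.

```python
# Compare the total payoff from cheating with the payoff from sticking to the
# cartel, for a given number of periods until the deviation is detected.
def gain_from_cheating(profit_cartel, profit_deviation, profit_competitive,
                       periods_until_detection, horizon=20):
    cheat = (profit_deviation * periods_until_detection
             + profit_competitive * (horizon - periods_until_detection))
    stay = profit_cartel * horizon
    return cheat - stay

# Manual monitoring: the deviation goes unnoticed for 15 of 20 periods.
print(gain_from_cheating(10, 15, 5, periods_until_detection=15))  # +50: cheating pays
# Algorithmic monitoring: the deviation is detected after a single period.
print(gain_from_cheating(10, 15, 5, periods_until_detection=1))   # -90: cheating does not pay
```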

In short, digitization – in particular price transparency and the widespread use of algorithms to monitor/set prices – makes cartels more, rather than less, likely and durable. Taking a closer look at algorithmically assisted price coordination, it is useful to distinguish three scenarios.Footnote 37 First, firms may explicitly agree on prices and use algorithms to (help) implement that agreement. Second, firms may use the same pricing algorithm provided by a third party, which results in price coordination without explicit agreement between them. Third, firms may instruct distinct pricing algorithms to maximize profits, which results in a collusive equilibrium/supracompetitive prices. With each subsequent scenario, the existence of an agreement becomes less clear; in its absence, Article 101 TFEU does not apply. Let us test each scenario against the legal framework.

The first scenario, in which sellers algorithmically implement a prior agreement, does not raise difficult questions. The Posters case, referenced in the introduction, offers a model.Footnote 38 Two British sellers of posters, Trod and GB, agreed to stop undercutting each other on Amazon Marketplace. Given the difficulty of manually adjusting prices on a daily basis, the sellers implemented their cartel agreement via re-pricing software (widely available from third parties).Footnote 39 In practice, GB programmed its software to undercut other sellers but to match the price charged by Trod if there were no cheaper competing offers. Trod configured its software with “compete rules” but put GB on an “ignore list” so that the rules it had programmed to undercut competitors did not apply to GB. Humans were still very much in the loop, as evidenced by emails in which employees complained about apparent noncompliance with the arrangement, in particular when the software did not seem to be working properly.Footnote 40 The UK Competition and Markets Authority had no trouble establishing an agreement, which fixed prices and was thus restrictive “by object.”
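In pseudocode terms, the rules described above amount to something like the following sketch. It is a hypothetical reconstruction for illustration only, not the software used in the case; the function, parameters, and prices are assumptions.

```python
# Undercut genuine competitors, but never the co-cartelist on the ignore list;
# if no competing offer is cheaper than the partner's, match the partner instead.
def reprice(rival_offers, cartel_partner, undercut=0.01):
    partner_price = rival_offers.get(cartel_partner)
    competing = [price for seller, price in rival_offers.items() if seller != cartel_partner]
    cheaper_rivals = [p for p in competing if partner_price is None or p < partner_price]
    if cheaper_rivals:
        return round(min(cheaper_rivals) - undercut, 2)  # undercut the cheapest rival
    return partner_price  # only the partner is cheapest: match, do not undercut

print(reprice({"Trod": 9.99, "OtherSeller": 12.50}, cartel_partner="Trod"))  # 9.99 (match)
print(reprice({"Trod": 9.99, "OtherSeller": 8.00}, cartel_partner="Trod"))   # 7.99 (undercut)
```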

In this first scenario, the use of technology does not expose a legal vacuum; competition law is up to the task. But what if there was no preexisting price-fixing agreement? In that case, the sellers would simply be using repricing software to undercut other sellers and each other. At first sight, that situation appears perfectly competitive: undercutting competitors is the essence of competition – if that happens effectively and rapidly, all the better. The reality is more complex. Brown has studied the economics of pricing algorithms, finding that they change the nature of the pricing game.Footnote 41 The logic is this: once a firm commits to respond to whatever price its competitors charge, those competitors internalize that expected reaction, which conditions their pricing (they are more reluctant to decrease prices in the first place).Footnote 42 In short, even relatively simple pricing algorithms can soften competition. This is in line with the aforementioned study of algorithmic petrol station pricing in Germany.Footnote 43
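A stylized example shows why a credible commitment to respond softens competition. The demand and cost figures below are assumptions; the point is that once firm A is known to match any price cut instantly, firm B no longer gains by undercutting.

```python
# Firm B's profit given its own price, firm A's price, and a simple rule:
# the cheaper firm takes the whole (assumed) market, equal prices split it.
def profit_B(price_B, price_A, cost=2.0, market_size=100):
    if price_B < price_A:
        share = 1.0
    elif price_B == price_A:
        share = 0.5
    else:
        share = 0.0
    return (price_B - cost) * share * market_size

# Without a matching commitment: A sits at 10.00, B undercuts and takes the market.
print(profit_B(9.99, 10.00))                           # 799.0
# With a matching commitment: A matches instantly, so B's cut only splits a lower price.
print(profit_B(9.99, 9.99), profit_B(10.00, 10.00))    # 399.5 vs 400.0 -> no gain from cutting
```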

The second scenario, in which sellers rely on a common algorithm to set their prices, becomes more difficult but not impossible to fit within Article 101 TFEU. There are two sub-scenarios to distinguish. First, the sellers may supply via an online platform that algorithmically sets the price for them. This setting is not common, as platforms generally leave their suppliers free to set a price, but Uber, which sets prices for all of its drivers, provides an example.Footnote 44 Second, sellers may use the same “off-the-shelf” pricing software offered by a third party. The U.S. firm RealPage, for example, offers its YieldStar pricing software to a large number of landlords.Footnote 45 It relies not on public information (e.g., real estate listings) but on private information (actual rent charged) and even promotes communication between landlords through groups.Footnote 46 In either sub-scenario, there is not necessarily communication between the different sellers, be they Uber drivers or landlords. Rather, the coordination originates from a third party, the pricing algorithm provider. Such scenarios can be classified as “hub-and-spoke” cartels, where the hub refers to the algorithm provider and the spokes are the sellers following its pricing guidance.Footnote 47

The guiding EU case on this second scenario is Eturas.Footnote 48 The case concerned the Lithuanian firm Eturas, operator of the travel booking platform E-TURAS. At one point, Eturas messaged the travel agencies using its platform that discounts would be automatically reduced to 3% “to normalise the conditions of competition.”Footnote 49 In a preliminary reference, the European Court of Justice (ECJ) was asked whether the use of a “common computerized information system” to set prices could constitute a concerted practice between travel agencies under Article 101 TFEU.Footnote 50 The ECJ started from the foundation of cartel law, namely that every economic operator must independently determine their conduct on the market, which precludes any direct or indirect contact between operators so as to influence each other’s conduct.Footnote 51 Even passive modes of participation can infringe Article 101 TFEU.Footnote 52 But the burden of proof is on the competition authority, and the presumption of innocence precludes the authority from inferring from the mere dispatch of a message that travel agencies were also aware of that message.Footnote 53 Other objective and consistent indicia may justify a rebuttable presumption that the travel agencies were aware of the message.Footnote 54 In that case, the authority can conclude that the travel agencies tacitly assented to a common anticompetitive practice.Footnote 55 That presumption too must be rebuttable, including by (i) public distancing, or a clear and express objection to Eturas; (ii) reporting to the administrative authorities; or (iii) systematic application of a discount exceeding the cap.Footnote 56

With this legal framework in mind, we can return to the case studies introduced earlier. With regard to RealPage’s YieldStar, it bears mentioning that the algorithm does not impose but merely suggests a price, from which landlords can deviate (although very few do). Nevertheless, the U.S. Department of Justice (DOJ) has opened an investigation.Footnote 57 The fact that RealPage also brings landlords into direct contact with each other may help the DOJ’s case. Uber has been subject to investigations around the globe, including in the U.S. and Brazil, although no infringement was ultimately established.Footnote 58 In the EU, there has not been a case, although Eturas could support a finding of infringement: drivers are aware of Uber’s common price-setting system and can thus be presumed to participate in a concerted practice.Footnote 59 That is not the end of the matter, though, as infringements of Article 101(1) TFEU can be justified under Article 101(3) TFEU if they come with countervailing efficiencies, allow consumers a fair share of the benefit, are proportional, and do not eliminate competition.Footnote 60 Uber might meet those criteria: its control over pricing is arguably indispensable to the functioning of its efficient ride-hailing system (which reduces empty cars and waiting times), and that system comes with significant consumer benefits (such as convenience and lower prices). In its Webtaxi decision on a platform that operates like Uber, the Luxembourgish competition authority exempted the use of a common pricing algorithm based on this reasoning.Footnote 61

To conclude, this second scenario of sellers relying on a common price-setting algorithm, provided by either a platform or a third party, can still be addressed by EU competition law, even though it sits at the boundary of it. And if a common pricing algorithm is essential to a business model that benefits consumers, it may be justified.

The third scenario, in which sellers’ use of distinct pricing algorithms results in a collusive equilibrium, may escape the grasp of Article 101 TFEU. The mechanism is the following: sellers instruct their ML algorithms to maximize profits, after which the algorithms figure out that coordination on a supracompetitive price best attains that objective. These algorithms tend to use “reinforcement learning” and more specifically “Q-learning”: the algorithms interact with their environment (including the algorithms of competing sellers) and, through trial and error, learn the optimal pricing policy.Footnote 62 Modeling by Salcedo showed “how pricing algorithms not only facilitate collusion but inevitably lead to it,” albeit under very strong assumptions.Footnote 63 More recently, Calvano et al. took an experimental approach, letting pricing algorithms interact in a simulated marketplace.Footnote 64 These Q-learning algorithms systematically learned to adopt collusive strategies, including the punishment of deviations from the collusive equilibrium. That collusive equilibrium was typically below the monopoly level but substantially above the competitive level. In the end, while these theoretical and experimental results are cause for concern, it remains an open question to what extent autonomous price coordination can arise in real market conditions.Footnote 65
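To make the mechanism concrete, the sketch below implements a drastically simplified version of the kind of experiment Calvano et al. run (it is not their code): two Q-learning agents repeatedly choose prices from a small grid, observe profits from a toy demand function, and update their Q-values, with the state defined by the previous period’s prices. The price grid, parameters, and demand function are all illustrative assumptions; whether such a toy setup converges to supracompetitive prices depends on those choices, and Calvano et al. use a richer logit-demand environment.

```python
import itertools, random

PRICES = [1.0, 1.5, 2.0]             # assumed price grid, from low to near-monopoly
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate
COST = 0.5                           # assumed marginal cost

def profit(own, rival):
    # Toy linear demand: a firm sells more when its price is low relative to the rival's.
    demand = max(0.0, 2.0 - own + 0.5 * rival)
    return (own - COST) * demand

states = list(itertools.product(range(len(PRICES)), repeat=2))  # joint state: last period's prices
Q = [{s: [0.0] * len(PRICES) for s in states} for _ in range(2)]

state = (0, 0)
for _ in range(200_000):
    actions = []
    for i in range(2):
        if random.random() < EPS:
            actions.append(random.randrange(len(PRICES)))                          # explore
        else:
            actions.append(max(range(len(PRICES)), key=lambda a: Q[i][state][a]))  # exploit
    next_state = tuple(actions)
    for i in range(2):
        reward = profit(PRICES[actions[i]], PRICES[actions[1 - i]])
        target = reward + GAMMA * max(Q[i][next_state])
        Q[i][state][actions[i]] += ALPHA * (target - Q[i][state][actions[i]])
    state = next_state

# Greedy prices the two agents have learned to charge from a symmetric starting state.
print([PRICES[max(range(len(PRICES)), key=lambda a: Q[i][(0, 0)][a])] for i in range(2)])
```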

Nevertheless, it is worth asking whether EU competition law is up to the task if/when the third scenario of autonomously coordinating pricing algorithms materializes. The problem is in fact an old one.Footnote 66 In oligopolistic markets (with few players), there is no need for explicit collusion to set prices at a supracompetitive level; high interdependence and mutual awareness may suffice to reach that result. Such tacit collusion, while societally harmful, is beyond the reach of competition law (the so-called “oligopoly problem”). Tacit collusion is thought to occur rarely given the specific market conditions it requires but some worry that, through the use of algorithms, it “could become sustainable in a wider range of circumstances possibly expanding the oligopoly problem to non-oligopolistic market structures.”Footnote 67 To understand the scope of the problem, let us take a closer look at the EU case law.

In case of autonomous algorithmic collusion, there is no agreement. Might there be a concerted practice? The ECJ has defined a concerted practice as “a form of coordination between undertakings by which, without it having reached the stage where an agreement properly so called has been concluded, practical cooperation between them is knowingly substituted for the risks of competition.”Footnote 68 This goes back to the requirement that economic operators independently determine their conduct on the market.Footnote 69 The difficulty is that, while this requirement strictly precludes direct or indirect contact between economic operators so as to influence each other’s conduct, it “does not deprive economic operators of the right to adapt themselves intelligently to the existing and anticipated conduct of their competitors.”Footnote 70 Therefore, conscious parallelism – even though potentially as harmful as a cartel – does not meet the concertation threshold of Article 101 TFEU. Indeed, “parallel conduct cannot be regarded as furnishing proof of concertation unless concertation constitutes the only plausible explanation for such conduct.”Footnote 71 Discarding every other plausible explanation for parallelism is a Herculean task with little chance of success. The furthest the EC has taken the concept of concertation is in Container Shipping.Footnote 72 The case concerned shipping companies that regularly announced their intended future price increases, doing so 3–5 weeks beforehand, which allowed for customer testing and competitor alignment. According to the EC, this could be “a strategy for reaching a common understanding about the terms of coordination” and thus a concerted practice.Footnote 73

Truly autonomous collusion can escape the legal framework in the way that tacit collusion has always done. In this sense, it is a twist on the unsolved oligopoly problem. Even the price signaling theory of Container Shipping, already at the outer boundary of Article 101 TFEU, hardly seems to capture autonomous collusion. If/when autonomous pricing agents are widely deployed, however, they may pose a bigger problem than the oligopoly one we know. Scholars have made suggestions on how to adapt the legal framework to fill the regulatory gap, but few of the proposed rules are legally, economically, and technologically sound and administrable by competition authorities and judges.Footnote 74

9.3.1.2 Vertical Agreements

When discussing horizontal agreements, I only referenced the nature of the restrictions in passing, given that price-fixing is the quintessential “by object” restriction. Vertical agreements require more careful examination. An important distinction exists between recommended resale prices, which are presumptively legal, and fixed resale prices (“resale price maintenance” or RPM), which are presumptively illegal as “by object” restrictions.Footnote 75 The difference between the two can be small, especially when a supplier uses carrots (e.g., reimbursing promotional costs) or sticks (e.g., withholding supply) to turn a recommendation into more of an obligation. Algorithmic monitoring/pricing can play a role in this process. It can even exacerbate the anticompetitive effects of RPM.

In the wake of its E-Commerce Sector Inquiry, the EC started a number of investigations into online RPM. In four decisions, the EC imposed more than €110 million in fines on consumer electronics suppliers Asus, Denon & Marantz, Philips, and Pioneer.Footnote 76 These suppliers restricted the ability of online retailers to set their own prices for kitchen appliances, notebooks, hi-fi products, and so on. Although the prices were often “recommendations” in name, the suppliers intervened in case of deviation, including through threats or sanctions. The online context held dual relevance. First, suppliers used monitoring software to effectively detect deviations by retailers and to intervene swiftly when prices decreased. Second, many retailers used algorithms to automatically adjust their prices to those of other retailers. Given that automatic adjustment, the restrictions that suppliers imposed on low-pricing retailers had a wider impact on overall prices than they would have had in an offline context.

There is also renewed interest in RPM at the national level. The Dutch Authority for Consumers & Markets (ACM) fined Samsung some €40 million for RPM of television sets.Footnote 77 Samsung took advantage of the greater transparency offered by web shops and PCWs to monitor prices through so-called “spider software,”Footnote 78 and confronted retailers that deviated from its price “recommendations.” Retailers also used “spiders” to adjust their prices (often downward) to those of competitors. Samsung regularly asked retailers to disable their spiders so that they would not automatically follow competitors down to lower online prices. The ACM, like the EC, classified these practices as anticompetitive “by object.” Thus, while the methods of RPM may evolve, the traditional legal analysis remains applicable.

9.3.2 Abuse of Dominance

Abusive conduct comes in two types: it is exclusionary when it indirectly harms consumers by foreclosing competitors from the market and exploitative when it directly harms consumers, for example, by charging excessive prices. I discuss the main algorithmic concern under each category of abuse, that is, discriminatory ranking and personalized pricing, respectively. While I focus on abusive conduct, remember that such conduct only infringes Article 102 TFEU if the firm in question is also in a dominant position.

9.3.2.1 Exclusion

Given the abundance of online options (goods, videos, webpages, etc.), curation is key. The role of curator is assumed by platforms, which rank the options for consumers; think, for example, of Amazon Marketplace, TikTok, and Google Search. Consumers trust that a platform has their best interests in mind, which is generally the case, and thus tend to rely on its ranking without much further thought. This gives the platform significant power over consumer choice, which can be abused. A risk of skewed rankings exists particularly when the platform does not only intermediate between suppliers and consumers, but also offers its own options. In that case, the platform may want to favor its own offering through choice architecture (“self-preferencing”).Footnote 79

The landmark case in this area is Google Search (Shopping).Footnote 80 At the heart of the abusive conduct was Google’s Panda algorithm, which demoted third-party comparison shopping services (CSS) in the search results, while Google’s own CSS was displayed prominently on top. Even the most highly ranked non-Google CSS appeared on average only on page four of the search results. This had a significant impact on visibility, given that users tend to focus on the first 3–5 results, with the first 10 results accounting for 95% of user clicks.Footnote 81 Skewed rankings distort the competitive process by excluding competitors and can harm consumers, especially when the promoted results are not the highest-quality ones.Footnote 82

Google Shopping was only the first of many cases of algorithmic exclusion.Footnote 83 Amazon has also been on the radar of competition authorities, with a variety of cases regarding the way it ranks products (and in particular, selects the winner of its “Buy Box”).Footnote 84 It is also under investigation for its “algorithmic control of price setting by third-party sellers,” which “can make it difficult for end customers to find offers by sellers or even lead to these offers being no longer visible at all.”Footnote 85

EU legislators considered the issue of discriminatory ranking serious enough to justify the adoption of ex ante regulation to complement ex post competition law. The Digital Markets Act (DMA) prohibits “gatekeepers” from self-preferencing in ranking, obliging them to apply “transparent, fair and non-discriminatory conditions to such ranking.”Footnote 86 Earlier instruments, like the Consumer Rights Directive (CRD)Footnote 87 and the Platform-to-Business (P2B) Regulation,Footnote 88 already mandated transparency in ranking.Footnote 89

9.3.2.2 Exploitation

Price discrimination, and more specifically personalized pricing, is of particular concern in algorithmically driven markets. Dynamic pricing, that is, firms adapting prices to market conditions (essentially, supply and demand), has long existed. Think for example of airlines changing prices over time (as captured by the saying that “the best way to ruin your flight is to ask your neighbor what they paid”). With personalized pricing, prices are tailored to the characteristics of the consumers in question (e.g., location and previous purchase behavior) so as to approach their willingness to pay. Authorities have put limits on such personalized pricing. Following action by the ACM, for example, the e-commerce platform Wish decided to stop using personalized pricing.Footnote 90
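In its simplest form, personalized pricing works along the lines of the hypothetical sketch below: a model estimates a consumer’s willingness to pay from observed characteristics, and the price is set just below that estimate. The features, weights, and margin are invented for illustration; they are not drawn from any of the cases discussed here.

```python
# Estimate willingness to pay from (assumed) consumer features, then price just below it.
def estimated_willingness_to_pay(consumer):
    wtp = 50.0                                             # assumed baseline
    wtp += 15.0 if consumer["high_income_postcode"] else 0.0
    wtp += 2.0 * consumer["past_purchases"]                # loyal buyers tolerate higher prices
    wtp += 10.0 if consumer["device"] == "iphone" else 0.0
    return wtp

def personalized_price(consumer, fraction_of_wtp=0.95):
    return round(fraction_of_wtp * estimated_willingness_to_pay(consumer), 2)

print(personalized_price({"high_income_postcode": True, "past_purchases": 3, "device": "iphone"}))    # 76.95
print(personalized_price({"high_income_postcode": False, "past_purchases": 0, "device": "android"}))  # 47.5
```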

The ACM did not intervene based on competition law.Footnote 91 Article 102(a) TFEU prohibits excessive prices, but personalized prices are not necessarily excessive as such, and competition authorities are in any case reluctant to intervene directly in price-setting. Price discrimination, explicitly prohibited by Article 102(c) TFEU, may seem like a more fitting option, but that provision is targeted at discrimination between firms rather than between consumers.Footnote 92 Another limitation is that Article 102 TFEU requires dominance, and most firms engaged in personalized pricing do not have market power. While competition law is not an effective tool to deal with personalized pricing, other branches of law have more to say on the matter.Footnote 93

First, personalization is based on data, and the General Data Protection Regulation (GDPR) regulates the collection and processing of such data.Footnote 94 The DMA adds further limits for gatekeepers.Footnote 95 Various other laws – including the Unfair Commercial Practices Directive (UCPD),Footnote 96 the CRD,Footnote 97 and the P2B RegulationFootnote 98 – also apply to personalized pricing but are largely restricted to transparency obligations. The recent Digital Services Act (DSA)Footnote 99 and AI ActFootnote 100 go a step further with provisions targeted at algorithms, although their applicability to personalized pricing is yet to be determined.

Despite various anecdotes about personalized pricing (e.g., by Uber), there is no empirical evidence that the practice is widespread.Footnote 101 One limiting factor may be the reputational costs a firm incurs when its personalized pricing is publicized, given that consumers tend to view such practices as unfair. In addition, the technological capability to effectively personalize prices is sometimes overstated.Footnote 102 It would be good, however, to have a clear view of the fragmented regulatory framework for when the day of widespread personalized pricing does arrive.

9.4 Conclusion

Rather than revisiting interim conclusions, I end with a research agenda. This chapter has set out the state of the art on AI and competition, at least on the substantive side. Algorithms also pose risks – and opportunities – on the institutional (enforcement) side. Competition authority heads have vowed that they “will not tolerate anticompetitive conduct, whether it occurs in a smoke-filled room or over the Internet using complex pricing algorithms.”Footnote 103 While this elegant one-liner is a common-sense policy statement, the difficult question is “how?”. Substantive issues aside, algorithmic anticompetitive conduct can be more difficult to detect and deter. Compliance by design is key. Just like the ML models that have become world-class at playing Go and Texas Hold’em have the rules of those games baked in, firms deploying algorithms should think about programming them with the rules of economic rivalry, that is, competition law. At the same time, competition authorities will have to build out their algorithmic detection capabilities.Footnote 104 They may even want to go a step further and intervene algorithmically – or, in the words of the Economist article this chapter started with: “Trustbusters might have to fight algorithms with algorithms.”Footnote 105

Returning to substantive questions, the following would benefit from further research:

  • Theoretical and experimental research shows that autonomous algorithmic collusion is a possibility. To what extent are those results transferable to real market conditions? Do new developments in AI increase the possibility of algorithmic collusion?

  • Autonomous algorithmic collusion presents a regulatory gap, at least if such collusion exits the lab and enters the outside world. Which rule(s) would optimally address this gap, meaning they are legally, economically, and technologically sound and administrable by competition authorities and judges?

  • Algorithmic exclusion (ranking) and algorithmic exploitation (personalized pricing) are regulated to varying degrees by different instruments, including competition law, the DMA, the DSA, the P2B Regulation, the CRD, the UCPD and the AI Act. How do these instruments fit together – do they exhibit overlap? A lot of instruments are centered around transparency – is that approach effective given the bounded rationality of consumers?

The enforcement questions (relating, e.g., to compliance by design) are no less pressing and difficult. Even more so than the substantive questions, they will require collaboration between lawyers and computer scientists.

Footnotes

1 Generative AI applications fall outside the scope of this chapter, as it is updated until January 31, 2023. For more recent developments on the intersection of competition law and generative AI, see Friso Bostoen and Anouk van der Veer, “Regulating competition in generative AI: A matter of trajectory, timing and tools” (2024) Concurrences, 2-2024: 27–33.

2 “Price-bots can collude against consumers” The Economist (May 6, 2017), www.economist.com/finance-and-economics/2017/05/06/price-bots-can-collude-against-consumers.

3 Ariel Ezrachi and Maurice Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (Harvard University Press, 2016). For an update, see Ariel Ezrachi and Maurice Stucke, “Sustainable and unchallenged algorithmic tacit collusion” (2020) Northwestern Journal of Technology and Intellectual Property, 17: 217.

4 See Thibault Schrepel, “Here’s why algorithms are NOT (really) a thing” Concurrentialiste (May 2017) www.networklawreview.org/algorithms-based-practices-antitrust/.

5 The case often referred to concerned Amazon sellers fixing the price of celebrity posters, which sparked enforcement in the US and the UK. See Competition and Markets Authority (CMA), Case 50223, Online sales of posters and frames, August 12, 2016.

6 Nicolas Petit, “Antitrust and artificial intelligence: A research agenda” (2017) Journal of European Competition Law & Practice, 8(6): 361–362, 361.

7 Emilio Calvano et al., “Artificial intelligence, algorithmic pricing, and collusion” (2020) American Economic Review, 110: 3267.

8 Heather Vogell, “Rent going up? One company’s algorithm could be why” ProPublica (October 15, 2022), www.propublica.org/article/yieldstar-rent-increase-realpage-rent.

9 Panos Louridas, Algorithms (MIT Press, 2020), Chapter 1.

10 On his CV (Section “Adages & Coinages”), Larry Tesler corrects the record: “What I actually said was: ‘Intelligence is whatever machines haven’t done yet,’” see, www.nomodes.com/Larry_Tesler_Consulting/Adages_and_Coinages.html.

11 Ethem Alpaydin, Machine Learning (MIT Press, 2021) 17–18. Alpaydin argues that ML is a requirement for AI (see 18–22) and defines AI as computers doing things, which – if done by humans – would be said to require intelligence (while stressing the problem that AI definitions tend to be human-centric).

12 Ibid., 12.

13 See the various references to “distort[ions of] normal competition” in the Treaty establishing the European Coal and Steel Community (1951); more recently, see the Consolidated version of the Treaty on European Union – Protocol (No 27) on the internal market and competition [2008] OJ C115/309 (“the internal market as set out in Article 3 [TFEU] includes a system ensuring that competition is not distorted”).

14 Joined Cases T-213/01 and T-214/01 Österreichische Postsparkasse v Commission EU:T:2006:151, para 115.

15 Case C-413/14 P Intel v Commission EU:C:2017:632, para 134.

16 In business-to-business (B2B) settings, prices are often individually negotiated, or in any case not made public.

17 EC, “Final report on the E-Commerce Sector Inquiry” (Staff Working Document) COM (2017) 229.

18 Ibid., para 12. The EC adds, however, that increased price competition may negatively affect competition on parameters other than price, such as quality and innovation.

19 Ibid., para 13.

20 Ibid. (“Two thirds of [retailers] use automatic software programmes that adjust their own prices based on the observed prices of competitors.”).

21 Margrethe Vestager, “Algorithms and competition” (Bundeskartellamt 18th Conference on Competition, Berlin, March 16, 2017).

22 Petrol prices are displayed prominently, so even in the past, they could be collected by driving by petrol stations. Meanwhile, specific apps have sprung up to compare petrol prices. Navigation apps such as Google’s Waze also provide information on the prices charged by petrol stations.

23 Fernando Luco, “Who benefits from information disclosure? The case of retail gasoline” (2019) American Economic Journal: Microeconomics, 11: 277 (due to differences in search behavior, low-income consumers were more affected than high-income consumers).

24 Stephanie Assad et al., “Algorithmic pricing and competition: Empirical evidence from the German retail gasoline market” (2020) CESifo Working Paper No. 8521 (the 9% increase was found in non-monopoly markets; in duopoly markets, the authors found that margins do not change when only one of the two stations adopts, but increase by 28% when both do).

25 See Sam Schechner, “Why do gas station prices constantly change? Blame the algorithm” The Wall Street Journal (May 8, 2017), www.wsj.com/articles/why-do-gas-station-prices-constantly-change-blame-the-algorithm-1494262674.

26 CMA, “Algorithms: How they can reduce competition and harm consumers” (Report) 2021, 2.9–2.20.

27 An important question is whether total output increases, see Hal Varian, “Price discrimination” in Richard Schmalensee and Robert Willig (eds), Handbook of Industrial Organization – Volume I (Elsevier, 1989) 597.

28 See Michael Schrage, Recommendation Engines (MIT Press, 2020).

29 The merger control regime is also important, but algorithmic competition issues have not played an important role there yet. For a primer, see Ai Deng and Cristián Hernández, “Algorithmic pricing in horizontal merger review: An initial assessment” (2022) Antitrust, 36(2): 36–41.

30 See Case C-8/08 T-Mobile Netherlands v Nederlandse Mededingingsautoriteit EU:C:2009:343, para 23 (“the definitions of ‘agreement’ … and ‘concerted practice’ are intended, from a subjective point of view, to catch forms of collusion having the same nature which are distinguishable from each other only by their intensity and the forms in which they manifest themselves”).

31 Case C-345/14 Maxima Latvija v Konkurences padome EU:C:2015:784, para 18.

32 Ibid., para 20.

33 This has been well documented in the case of the lysine cartel, where an executive from one of the firms served as FBI informant, making up to 300 audio and video recordings of cartel-related meetings. The picture that emerges is one of constant distrust between the cartelists. See John Connor, “‘Our customers are our enemies’: The Lysine Cartel of 1992–1995” (2001) Review of Industrial Organization, 18: 5.

34 Salil Mehra, “Antitrust and the robo-seller: Competition in the time of algorithms” (2016) Minnesota Law Review, 100: 1323–1375, 1348–49.

35 Note that quicker detection of deviations only works at the retail (B2C) level, where prices tend to be transparent. In addition to quicker detection of deviations, the use of algorithms also reduces the chance of errors and accidental deviations. See CMA, “Pricing algorithms” (Economic Working Paper) 2018, paras 5.7–5.11.

36 E-commerce Sector Inquiry (n 17), para 33.

37 These three scenarios are in line with Autorité de la concurrence and Bundeskartellamt, “Algorithms and competition” (Report) 2019, 26–60 and Autoridade da Concorrência, “Digital ecosystems, big data and algorithms” (Issues Paper) 2019, paras 243–275.

38 CMA, Posters (n 5). For the equivalent U.S. case, see U.S. District Court for the Northern District of California, Case 3:15-cr-00419-WHO, United States v Daniel Aston, August 11, 2016. The U.S. Department of Justice (DOJ) pursued a similar case earlier, see U.S. District Court for the Northern District of California, Case 3:15-cr-00201-WHO, United States v David Topkins, April 30, 2015. Both U.S. cases ended with a plea agreement.

39 On the availability and operation of such software, see Autoridade da Concorrência, “Digital ecosystems” (n 37), paras 208–221.

40 See, for example, CMA, Posters (n 5), para 3.83, quoting a message from a Trod employee to a GB employee: “nearly all posters you are undercutting, so presume your software is broken, so had to remove you from ignore list. Let me know when repaired.”

41 Zach Brown, “Competition in pricing algorithms” (2021) NBER Working Paper 28860, including both formal and empirical analysis. See also Autorité de la concurrence and Bundeskartellamt, “Algorithms” (n 37), 43–44.

42 The commitment needs to be credible. Brown argues that investments of a high-technology firm in the frequency and automation of its price-setting make its commitment credible. Note that the logic is similar to that of price-matching guarantees.

43 The mechanism is similar but not equal to that of the German petrol stations studied in Assad et al., “Algorithmic pricing and competition” (n 24). In a duopoly setting, Assad et al. find evidence for price effects only when both firms adopt superior pricing technology, which suggests that the mechanism in their setting is collusion or symmetric commitment.

44 On Uber’s pricing, see www.uber.com/us/en/marketplace/pricing/. Note that other platforms do offer pricing tools: Airbnb, for example, offers “Smart Pricing,” which automatically adapts hosts’ nightly prices to demand, see, www.airbnb.co.uk/help/article/1168.

45 Vogell, “Rent going up?” (n 8).

46 For a similar example, see Daniel Mândrescu, “When algorithmic pricing meets concerted practices – the case of Partneo” CoRe Blog (June 7, 2018), www.lexxion.eu/coreblogpost/when-algorithmic-pricing-meets-concerted-practices-the-case-of-partneo/ (on a pricing algorithm for auto parts, including allegations of clandestine meetings between certain auto makers).

47 Advocate General Szpunar already suggested the hub-and-spoke qualification for Uber in Case C-434/15 Asociación Profesional Elite Taxi v Uber Systems Spain EU:C:2017:364, para 62 and footnote 23. Another potential qualification is that of cartel facilitator, as in Case C-194/14 P AC-Treuhand v Commission EU:C:2015:717, but that qualification appears more suited to firms (such as consultancies) that operate on a completely different market.

48 Case C-74/14 Eturas v Lietuvos Respublikos konkurencijos taryba EU:C:2016:42. Similar cases have been pursued at the national level, see, for example, Comisión Nacional de los Mercados y la Competencia, “The CNMC fines several companies EUR 1.25 million for imposing minimum commissions in the real estate brokerage market” (press release, December 9, 2021), www.cnmc.es/expedientes/s000320 (concerning a real estate platform that imposed minimum commissions of 4% on agencies).

49 Case C-74/14 Eturas (n 48), para 10.

50 Ibid., para 25.

51 Ibid., para 27, referencing Case C-8/08 T-Mobile (n 30), paras 32–33.

52 Case C-74/14 Eturas (n 48), para 28.

53 Ibid., para 39.

54 Ibid., paras 40–41. Travel agencies can rebut the presumption “for example by proving that they did not receive that message or that they did not look at the section in question or did not look at it until some time had passed since that dispatch.”

55 Ibid., paras 42 and 44. Note that an illegal concerted practice requires not only concertation but also “subsequent conduct on the market and a relationship of cause and effect between the two,” see C-286/13 P Dole Food v Commission EU:C:2015:184, para 126.

56 Case C-74/14 Eturas (n 48), paras 46–49.

57 Heather Vogell, “Department of Justice opens investigation into real estate tech company accused of collusion with landlords” ProPublica (November 23, 2022), www.propublica.org/article/yieldstar-realpage-rent-doj-investigation-antitrust.

58 U.S. District Court for the Southern District of New York, Case 15 Civ. 9796, Spencer Meyer v Travis Kalanick, March 31, 2016 (the judge believed there to be a hub-and-spoke cartel but Uber managed to move the case to arbitration). CADE, Technical Note No. 26/2018/CGAA4/SGA1/CADE, Public Ministry of the State of São Paulo v Uber do Brasil Tecnologia (the authority did not find sufficient concertation between drivers; simply accepting Uber’s terms and conditions did not suffice).

59 In addition to concertation, there is also subsequent conduct, that is, drivers follow Uber’s pricing (they cannot deviate from it).

60 See further EC, Guidelines on the application of Article 81(3) of the Treaty (Communication) OJ C101/97.

61 Conseil de la Concurrence Grand-Duché de Luxembourg, Case 2018-FO-01, Webtaxi, June 7, 2018. The authority found the pricing restriction proportional given that it was indispensable to realize the efficiencies and there was no less restrictive way of doing so. Competition was not eliminated because Webtaxi represented only a quarter of Luxembourg cabs.

62 On reinforcement and Q-learning in a pricing context, see Ashwin Ittoo and Nicolas Petit, “Algorithmic pricing agents and tacit collusion: A technological perspective” in Hervé Jacquemin and Alexandre de Streel (eds), L’Intelligence Artificielle et le Droit (Larcier, 2017) 247–256.

63 Bruno Salcedo, “Pricing algorithms and collusion” (2015), available at https://brunosalcedo.com/docs/collusion.pdf.

64 Calvano et al., “Artificial intelligence” (n 7).

65 See Autorité de la concurrence and Bundeskartellamt, “Algorithms” (n 37), 45–52 for a discussion of the assumptions underlying the research of Calvano et al. and other experimental studies.

66 See Richard Posner, “Oligopoly and the antitrust laws: A suggested approach” (1968) Stanford Law Review, 21: 1562.

67 Organisation for Economic Co-operation and Development (OECD), “Algorithms and collusion: Competition policy in the digital age” (Background Note) 2017, 35–36.

68 Case 48–69 Imperial Chemical Industries (ICI) v Commission EU:C:1972:70, para 64.

69 Case C-74/14 Eturas (n 48), para 27; Case C-8/08 T-Mobile (n 30), paras 32–33.

70 Joined cases 40–48, 50, 54–56, 111, 113 and 114–73 Suiker Unie v Commission EU:C:1975:174, para 174.

71 Joined cases C-89/85, C-104/85, C-114/85, C-116/85, C-117/85 and C-125/85 to C-129/85 A. Ahlström Osakeyhtiö v Commission (‘Wood Pulp II’) EU:C:1993:120, para 71. Earlier case law was less strict, see Case 48–69 ICI (n 68), para 66 (“Although parallel behaviour may not by itself be identified with a concerted practice, it may however amount to strong evidence of such a practice if it leads to conditions of competition which do not correspond to the normal conditions of the market”).

72 Container Shipping (Case AT.39850) Commission Decision of 7 July 2016. Note that the case ended with commitments so there is no final decision, let alone a judgment confirming it.

73 Ibid., paras 45–47.

74 For a well-considered proposal, situated in the U.S. context, see Joseph Harrington, ‘Developing competition law for collusion by autonomous artificial agents’ (2018) Journal of Competition Law & Economics, 14: 331, in particular Section 6.

75 Case 243/83 Binon EU:C:1985:284, para 44 and Case 27/87 Louis Erauw-Jacquery v La Hesbignonne EU:C:1988:183, para 15. RPM constitutes a “hardcore” restriction under Commission Regulation (EU) 2022/720 on the application of Article 101(3) of the Treaty on the Functioning of the European Union to categories of vertical agreements and concerted practices [2022] OJ L134/4, art 4(a). See further EC, “Guidelines on vertical restraints” (Communication) OJ C248/1, paras 185–201. Note that maximum prices are treated similarly to recommended resale prices and minimum resale prices similarly to RPM.

76 Asus (Case AT.40465) Commission Decision of 24 July 2018, Denon & Marantz (Case AT.40469) Commission Decision of 24 July 2018, Philips (Case AT.40181) Commission Decision of 24 July 2018, and Pioneer (Case AT.40182) Commission Decision of 24 July 2018. For an overview, see EC, “Commission fines four consumer electronics manufacturers for fixing online resale prices” (press release, July 24, 2018) IP/18/4601.

77 Authority for Consumers & Markets (ACM), Case ACM/20/040569, Samsung, September 14, 2021.

78 Spider software crawls the web to collect price data from different sources.

79 On choice architecture, see CMA, “Online choice architecture: How digital design can harm competition and consumers” (Discussion Paper) 2022. Ranking (paras 4.35–4.41) is only one aspect of choice architecture, defaults (paras 4.27–4.34) are another powerful tool.

80 Google Search (Shopping) (Case AT.39740) Commission Decision of 27 June 2017, confirmed in Case T-612/17 Google and Alphabet v Commission EU:T:2021:763. For a discussion, see Friso Bostoen, “The General Court’s Google Shopping judgment: Finetuning the legal qualifications and tests for platform abuse” (2022) Journal of European Competition Law & Practice, 13: 75.

81 Google Search (Shopping) (n 80), paras 454–461. See also CMA, “Online search: Consumer and firm behaviour” (Literature Review) 2017.

82 This appeared to be the case. By way of illustration, one Google executive wrote that Froogle (then the name of Google’s CSS) “simply doesn’t work,” see Google Search (Shopping) (n 80), para 490.

83 Ranking is but one method of algorithmic exclusion. For a discussion of other methods (including defaults), see Thomas Cheng and Julian Nowag, “Algorithmic predation and exclusion” (2023) University of Pennsylvania Journal of Business Law, 25: 41.

84 Amazon does not only promote its own products but also those of third-party sellers that use its “Fulfilled by Amazon” logistics service. See EC, “Commission accepts commitments by Amazon barring it from using marketplace seller data, and ensuring equal access to Buy Box and Prime” (press release, December 20, 2022) IP/22/7777 and AGCM, “Amazon fined over € 1,128 billion for abusing its dominant position” (press release, December 9, 2021), https://en.agcm.it/en/media/press-releases/2021/12/A528.

85 Bundeskartellamt, “Extension of ongoing proceedings against Amazon to also include an examination pursuant to Section 19a of the German Competition Act” (press release, November 14, 2022), www.bundeskartellamt.de/SharedDocs/Meldung/EN/Pressemitteilungen/2022/14_11_2022_Amazon_19a.html.

86 Regulation (EU) 2022/1925 of the European Parliament and of the Council on contestable and fair markets in the digital sector (Digital Markets Act) [2022] OJ L265/1, art 6(5). Other provisions are also relevant from a choice architecture point of view, see, for example, arts 6(3)–(4) on defaults. “Gatekeepers” are defined in art 3.

87 Directive 2011/83/EU of the European Parliament and of the Council on consumer rights [2011] OJ L304/64 (as amended by Directive (EU) 2019/2161 of the European Parliament and of the Council as regards the better enforcement and modernisation of Union consumer protection rules [2019] OJ L328/7), art 6a(1)(a). See further EC, “Guidance on the interpretation and application of Directive 2011/83/EU” (Notice) [2021] OJ C525/1, Section 3.4.1.

88 Regulation (EU) 2019/1150 of the European Parliament and of the Council on promoting fairness and transparency for business users of online intermediation services [2019] OJ L186/57, art 5. See further EC, “Guidelines on ranking transparency pursuant to Regulation (EU) 2019/1150” (Notice) [2020] OJ C424/1.

89 Regulation (EU) 2022/2065 of the European Parliament and of the Council on a Single Market for Digital Services (Digital Services Act) [2022] OJ L277/1 also regulates recommender systems, see, for example, art 27 on transparency.

90 ACM, “Following ACM actions, Wish bans fake discounts and blocks personalized pricing” (press release, July 26, 2022), www.acm.nl/en/publications/following-acm-actions-wish-bans-fake-discounts-and-blocks-personalized-pricing.

91 Rather, the ACM referenced the CRD (n 87), discussed further infra.

92 Article 102(c) TFEU prohibits “applying dissimilar conditions to equivalent transactions with other trading parties, thereby placing them at a competitive disadvantage.” Given that the list of potential abuses is non-exhaustive, this framing of price discrimination is not necessarily limiting.

93 See OECD, “Personalised pricing in the digital era” (note by the European Union) DAF/COMP/WD(2018)128, 9–12.

94 Regulation (EU) 2016/679 of the European Parliament and of the Council on the protection of natural persons with regard to the processing of personal data and on the free movement of such data (General Data Protection Regulation) OJ L119/1. See further Richard Steppe, “Online price discrimination and personal data: A general data protection regulation perspective” (2017) Computer Law & Security Review, 33: 768.

95 Digital Markets Act (n 86), art 6(a) on data collection, combination and cross-use.

96 Directive 2005/29/EC of the European Parliament and of the Council concerning unfair business-to-consumer commercial practices in the internal market (Unfair Commercial Practices Directive) [2005] OJ L149/22. A personalized price may, for example, be “aggressive” or an exertion of “undue influence” under arts 8–9, see further EC, “Guidance on the interpretation and application of Directive 2005/29/EC” (Notice) OJ C526/1, Section 4.2.8.

97 CRD (n 87), art 6(1)(ea). See further CRD Guidance (n 87), Section 3.3.1.

98 P2B Regulation (n 87), arts 7 and 9.

99 DSA (n 89).

100 Regulation (EU) 2024/1689 of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act).

101 CMA, “Pricing algorithms” (n 35), paras 2.13–2.20.

102 See Axel Gautier, Ashwin Ittoo, and Pieter Van Cleynenbreugel, “AI algorithms, price discrimination and collusion: A technological, economic and legal perspective” (2020) European Journal of Law and Economics, 50: 405.

103 DOJ, “Former E-Commerce executive charged with price fixing in the antitrust division’s first online marketplace prosecution” (press release, April 6, 2015), www.justice.gov/opa/pr/former-e-commerce-executive-charged-price-fixing-antitrust-divisions-first-online-marketplace. See similarly Vestager, “Algorithms and competition” (n 21) (“companies can’t escape responsibility for collusion by hiding behind a computer program”).

104 See Joseph Harrington and David Imhof, “Cartel screening and machine learning” (2022) Stanford Computational Antitrust, 2: 133.

105 “Price-bots can collude against consumers” (n 2).
