The explosion of the internet and the commensurate increase in citizens’ ability to create, disseminate, and access media present states with a host of regulatory challenges that did not exist before the rise of widespread digital networks. Early observers of internet expansion believed that it fundamentally interfered with governments’ ability to regulate political and economic activity (Castells 1996) by encouraging politicians to reduce regulation to placate increasingly mobile firms (Tonnelson 2000), by introducing complexities that undermined the clarity and legitimacy of legal jurisdictions (Post and Johnson 1996), and by frustrating geographically restricted authorities’ attempts to police the flow of information across fluid, multi-national, digital networks (Barlow 1996; Haufler 2001; Rosenau and Singh 2002). This logic was summed up by John Gilmore’s oft-cited comment that “… the Net interprets censorship as damage and routes around it” (Elmer-DeWitt 1993).
But more recent scholarship notes the surprising flexibility with which governments adapted to the rapid expansion of digital interconnectivity and the fundamental role that states played in shaping the legal boundaries of information technology (Reidenberg 1998; Geist 2003; Newman and Zysman 2005; Goldsmith and Wu 2006). While multiple authors highlight the role that private actors—and the communications protocols and infrastructure that they develop and control—play in regulating digital content (Reidenberg 1998; Lessig 1999; DeNardis 2009; DeNardis 2012), others emphasize the tendency of governments to use these private actors—notably internet service providers (ISPs) and online content providers (OCPs)—as “points of control” to regulate the digital behavior of a variety of other private actors (Zittrain 2003; Farrell 2006; Adler 2011; MacKinnon 2012). Thus, by focusing their regulatory efforts on large content and connectivity providers that cannot easily uproot their businesses to avoid state oversight, governments developed an internet regulation strategy that takes advantage of the structure of digital networks, partially overcoming the fluidity of the content that resides on those networks.
Fundamentally, governments—especially in liberal democracies—now substantially rely on private actors to regulate public internet access. And crucially, while the decentralized structure of the internet is often seen as an important bulwark against state interference, scholars routinely worry that regulation by proxy provides states a route around formal speech and privacy protections, and that—at least in states with strong rule of law—privately provided limits on internet content are subject to fewer legal constraints than outright government censorship, undermining civil liberties and consumer protections (Boyle 1997; Adler 2011; Marsden 2011). Nonetheless, because studies of internet censorship tend to focus on firewalls and filtering tactics in authoritarian regimes (see e.g., Deibert et al. 2008; King, Pan and Roberts 2013), we have only a limited understanding of, and even less comparative empirical evidence about, the circumstances under which authorities in democratic states censor digital sources (Breindl 2013). In particular, we do not know when and why democratic governments take advantage of private points of control (PPCs) and ask service providers to remove content from their networks. Exploring this question is fundamental to building a comprehensive empirical understanding of the politics of internet regulation, and there are strong practical and normative reasons to examine the conditions under which democratic states pressure ISPs and OCPs to remove content from the internet. We observe vast differences in this form of digital censorship across democracies. Why do some democracies make extensive use of such censorship mechanisms while others rarely bother service providers with content removal requests?
Crucially, we argue that even democratic states seek to curtail content dissemination in response to demands to restrict speech, whether to influence public opinion, reduce criticism of public officials, and limit citizens’ access to media and other sources of information, or, more benevolently, to bolster national security and protect individuals’ reputations or privacy. In particular, using both large-N panel data and a synthetic case-control design, we show that democratic governments respond to internal opposition, criticism, or unrest by stepping up their digital censorship activities. While our understanding of authoritarian repression hinges on how regimes control information to limit dissent, this work shows that, by leaning on PPCs, even democracies circumscribe digital speech when they face internal unrest.
While our empirical focus is on the relationship between internal unrest and digital censorship, we also highlight two other factors that encourage the construction and maintenance of digital content regulation regimes. First, regulators may seek to protect the intellectual property (IP) of their citizens. States that house firms that hold extensive IP portfolios have an incentive to protect knowledge industries, and should use PPCs to cater to IP-producing firms. Second, demands for content removal often pit the preferences of concentrated actors—politicians, individuals or firms with reputations to protect, and IP producers—against broad societal and consumer interests in unfettered access to media, open information flows, and freedom of speech. Thus, we expect governments operating under political institutions that provide comparative advantages to narrow interests to make more intensive use of PPCs. Building on the trade protection literature (Rickard 2012, for example), we argue that governments with less proportional electoral systems will regulate content more aggressively.
This paper represents one of the first attempts to systematically examine internet censorship cross-nationally. We use data on government requests for content removal, furnished by the Google Corporation, to test our arguments. Google provides a common set of services—search, YouTube, Google+, and so forth—to consumers around the world. These data represent a unique opportunity for scholars of censorship, because it is difficult to directly observe most forms of censorship activity by governments. It is rarer still for data on censorship activities to be collected comprehensively across countries. In this context, however, governments rely on a single third party to execute their wishes. Indeed, since Google provides similar products to individuals within multiple countries, these data provide an ideal opportunity to test the determinants of censorship from a comparative perspective. They provide a consistent, cross-national window into when—and how often—governments use PPCs to regulate their citizens’ access to digital content.
Defining Internet Censorship
We define internet censorship as actions taken by a government to remove or obscure internet content from citizens, or to limit the digital transmission of information to a broad audience. Our conceptualization is intentionally devoid of any of the normative content sometimes associated with censorship, adhering to the dictionary definition of a censor: “a person who examines books, movies, letters, etc., and removes things that are considered to be offensive, immoral, harmful to society, etc.” (Merriam-Webster 2014).
While the mechanisms that produce internet takedown requests may vary in their levels of institutionalization and legitimacy, we argue that it is inappropriate to make a priori distinctions about content removal requests here. First, we simply lack the tools and information to measure validity and process, and thus cannot discriminate between requests objectively. More importantly, we do not find such distinctions helpful when predicting censorship activities cross-nationally. Most people agree that muzzling domestic criticism of government is both a form of censorship and normatively unappealing. But activities that some would consider simple applications of law—such as curbing defamation and aggressively protecting IP rights—remain contested and intensely political. For example, while limiting defamation serves a useful societal purpose, it also restricts freedom of speech, and the value that society places on these two goals varies both across and within countries. Similarly, the appropriate scope of IP protection is a hotly debated question (Electronic Frontier Foundation 2010); reasonable people disagree about how to balance consumers’ interests in the free flow of information, or low-cost access to medicine and technology, against producer protections designed to ensure just compensation for investment in research and creative activity, and to spur innovation. Some people consider IP protection, at least in certain forms, “censorship,” while others do not. Moreover, defamation and IP law create winners and losers, and the application and scope of these laws are political decisions that vary over time and space (Baldwin 2014). Thus, because common standards for what constitutes censorship are inherently normative, vary broadly, and are shaped by political conflict, we opt for an inclusive definition: for our purposes, censorship is simply the act of restricting public access to content.
Political Incentives and Internet Censorship
Internet censorship is a political activity. That is, while a variety of incentives—ranging from information and speech control, to the maintenance of privacy, copyright and IP protection, and national security concerns—may all drive governments to remove digital content, these activities are filtered through political lenses. We therefore identify and test three political determinants of censorship through PPCs: the need to muzzle opposition from internal challengers, the demand for censorship emanating from IP interests, and democratic institutions that encourage political responsiveness to concentrated interests.
Internal Unrest
Individual citizens, businesses, and politicians can all generate demands for government censorship that are based on personal, or political, motivations. Content on the internet may defame individuals, criticize politicians, impinge on personal privacy, or violate national security statutes. Attempts to limit the availability of such content generate removal requests.
However, incentives to restrict contested, controversial, or politically sensitive speech vary across democracies. While this variation is driven by a number of factors, we focus on a particular issue, internal unrest, because of its political salience and importance to the quality of democratic governance. A democracy with passive internal rivals and strong internal stability generates limited demand for censorship from the regime itself. Political motivations to censor are most prominent when a country is shaken by internal dissent. Because free speech is fundamental to democratic politics, democratic governments will face legal obstacles to direct assaults on political speech. Thus, they are likely to appeal to concerns such as national security or defamation when attempting to silence rivals. Riots, protests, terrorism, and other forms of large-scale, or violent, anti-regime activity may incentivize even democratic politicians to muzzle opposition; they also provide a rationale for—or even legitimize—speech curtailment. And, in reasonably democratic states, especially those with strong rule of law, PPCs may provide an especially attractive venue for quashing political criticism, specifically because private actors have substantial legal leeway to govern the content that resides on their networks. Indeed, it will often be financially and politically expedient for ISPs and OCPs to submit to government pressure in circumstances under which end users might appeal to legal speech protections. Finally, governments may react to internal opposition not only by requesting more censorship, but also by creating legal and regulatory frameworks to facilitate digital censorship.
Hypothesis 1: Democracies will lodge more digital content removal requests when they face higher levels of internal unrest.
IP
Many governments aggressively protect the IP of their citizens and businesses. Indeed, the United States has an “IP Tsar” (formally, the Intellectual Property Enforcement Coordinator) tasked with developing and evaluating IP protection policy (Espinel 2012). The US case illustrates a key point: states that generate significant IP, relative to consumption, have stronger incentives to protect that property than do other regimes (Sell 1998). Public efforts to reduce IP theft are a subsidy to knowledge industries. They benefit specific industries and firms, at a cost to taxpayers.
The development and enforcement of IP law are political acts. Indeed, Baldwin (2014) extensively describes the inherent political conflict surrounding copyright, noting how IP laws in the United States and Europe changed over time, varying with political and economic conditions. Initially a net importer of creative works, and an inveterate intellectual pirate state, the United States refused to recognize foreign copyright until the late 1800s. Now the United States is a champion of IP protection worldwide. American politicians’ appetite for IP protection has grown in tandem with domestic creative industries. And continental Europe, traditionally more copyright friendly than the anglophone world, now houses multiple political parties representing content “pirates.”
In sum, the size of local knowledge industries should play an important role in how aggressively states regulate IP. Furthermore, the political influence wielded by IP producers should also facilitate increased non-IP-related censorship, by encouraging alterations to legal institutions that ease the production of takedown requests.
Hypothesis 2: Democracies housing firms that produce substantial IP will pursue more digital content removal requests than those with small knowledge industries.
Domestic Political Institutions
Democracies sporting institutions that empower diffuse interests, like consumers, will generate fewer takedown requests than those with institutions that are particularly responsive to concentrated interests, such as IP-producing businesses, defamation targets, and politicians themselves. We highlight one such institution: the electoral system. We describe two mechanisms for generating content removal requests that should be modulated by district magnitude. The key actors supporting both mechanisms are elected politicians; while both firms and individuals play a role in the processes that we describe, our theoretical focus is on how politicians translate personal incentives, and pressures placed on them by firms and individuals, into behavior that should predict government use of PPCs.
First, because politicians’ reputations become more important as district magnitude shrinks, small district magnitudes should enhance politicians’ appetites to quash personal criticism, generating a direct incentive for actors in government to restrict digital speech through tools such as defamation claims. Many of the vignettes that Google provides as part of its transparency report, some of which we describe in the third and fifth sections, reflect such behavior, which ranges from demands to remove content issued by executive or legislative organs, to defamation charges filed with the courts (Google Incorporated 2013). While large-magnitude systems can generate incentives for personal vote cultivation when co-partisans compete for preference votes in open lists (Carey and Shugart 1995), candidate name recognition shrinks with district magnitude. Even in open-list systems, incumbents in large-magnitude districts have substantially lower name recognition advantages vis-à-vis challengers than those that compete in single member districts (Samuels 2001). Similarly, in low-magnitude systems, incumbents all have an interest in protecting themselves from lesser-known challengers and in colluding in the provision of tools for reputation protection, while incumbents in larger magnitude elections that do encourage personal vote cultivation (e.g., open list) may worry that their counterparts will use speech limitations to restrict their ability to criticize one another. Therefore, political demand for personality-driven speech regulation should decrease as district magnitude grows.
The second mechanism that we describe is less direct. Mirroring a long-standing thread of the trade literature, we argue that electoral systems determine how effectively politicians translate the preferences of narrow interests—which here will tend to favor increased digital regulation—into policy. Politicians in low-magnitude systems will be more receptive to such lobbies and will tend to pass legislation, and engage in bureaucratic oversight, that facilitates protection of firms’ IP and digital content regulation more broadly. Indeed, many authors argue that plurality-based, or low district magnitude, elections encourage politicians to cater to concentrated interests and overlook the diffuse preferences of the majority (see e.g., Magee, Brock and Young 1989; Rogowski 1989; Grossman and Helpman 2005; Persson and Tabellini 2005).
Hypothesis 3: States that use high district magnitude elections will generate fewer digital content removal requests than low district magnitude systems.
Data and Methods
Dependent Variable: Google Takedown Requests
We draw our dependent variable from Google’s online “Transparency Report.” Google publishes counts of takedown and user data requests lodged by governments worldwide (Google Incorporated 2013). Google began reporting these data in the second half of 2009 and issues them in the form of half-year summaries, by country, of censorship attempts by government sources. Google’s reports currently list the number of content takedown requests issued by 58 different governments. These removal requests can relate to any of Google’s many services (Search, YouTube, Gmail, Google+, etc.). Each individual request by a government may identify one or more pieces of digital content for takedown (e.g., multiple related defamatory images), but represents a single instance of attempted government censorship. Multiple attempts to censor the same item are counted as multiple requests. Thus, we measure censorship attempts in terms of government contacts rather than in terms of individual pieces of content. Takedown request counts omit activity that Google performs on its own initiative, regardless of local law, particularly removal of child pornography. Finally, the data contain requests related to IP when they take the form of successful court proceedings or actions taken by government agencies, but they do not contain copyright requests that firms and other rights holders (e.g., the Recording Industry Association of America) issue directly to Google, which Google fields using a different system. The data contain any takedowns stemming from court orders, executive branch interventions, and other direct government activity.
Google now categorizes takedown requests, as we show in Table 1. These breakdowns are only available after July 2010, making it difficult to use this information in over-time analyses, such as those presented in this paper. Nevertheless, the descriptive data provide a picture of why governments censor. Takedown requests run the gamut from reasonably clear speech-suppressing activity, like government criticism, to more regulatory activities like copyright or trademark violations. The largest categories are “defamation,” “privacy and security,” and “other,” which contains a mix of anti-speech and regulatory activities. Furthermore, while many of the censorship attempts that fall into these three categories would be generally regarded as legitimate governance, they provide rationales for speech limitation that governments could potentially misuse, and that worry critics of the rise in state leverage over PPCs. Unfortunately, Google does not provide full details for all individual takedown requests. It does, however, provide descriptive vignettes for selected requests. In general, Google appears to prefer to highlight non-regulatory, politically motivated, takedown requests by governments. Consider this example of defamation from Italy in early 2011: “We received a request from the Central Police in Italy to remove a YouTube video that satirized Prime Minister Silvio Berlusconi’s lifestyle. We did not remove content in response to this request.” Many requests are, in fact, more mundane, regarding economic regulation or legal requirements: “[In Norway] two requests resulted in the removal of 1814 items from AdWords for violating Norwegian marketing laws.” It is also worth noting that governments sometimes attempt to censor political criticism under the guise of regulatory activity; for example, Google determined that a blog post that the Bolivian legislative assembly requested be removed because it “infringed copyright” actually contained political speech.
Table 1 Google Takedown Request Categories
Note: Takedown percentage is the percent of the total for the July 2010 to July 2013 period, omitting reasons introduced more recently, in 2012 (18 percent), calculated from Google-provided data (Google Incorporated 2016).
While Google clearly fields takedown requests that are politically motivated, our inability to disaggregate requests is a limitation of the current analysis, because we cannot distinguish between “good” and “bad” censorship. It is also possible that the content of takedown requests co-varies with our key predictors. Nonetheless, we argue that understanding broad trends in digital takedown requests contributes significantly to our understanding of how governments use PPCs.
Independent Variables
We measure internal unrest in two ways. First, we use the logged total number of terrorist incidents (National Consortium for the Study of Terrorism and Responses to Terrorism 2012). Terrorist attacks represent an explicit indicator of violent, anti-government activity. Second, we performed tests using a non-events indicator, the Worldwide Governance Indicators’ (WGI) Political Stability and Absence of Violence index. The WGI index intends to capture “the likelihood the government will be destabilized or overthrown by unconstitutional or violent means” (Kaufmann, Kraay and Mastruzzi 2012). Compared with terrorist events, however, this index is a less tangible and precise measure of internal opposition and stability, reflecting “hundreds” of individual underlying variables (Kaufmann, Kraay and Mastruzzi 2010).
We contend that states that produce IP at high rates are likely to house firms that put pressure on politicians to protect IP. We use patent applications as a proxy for IP production and measure cross-national patent production using the World Intellectual Property Organization’s IP database (World Intellectual Property Organization 2013), which runs through 2012. We adjust the number of patents for population using the World Development Indicators (WDI) data set (The World Bank 2013), creating a patents per capita variable. Finally, because the distribution of patent applications is highly skewed, we log the resulting indicator.
We operationalize the proportionality of the electoral system using a measure of average lower house district magnitude provided by the Database of Political Institutions (DPI) (Beck et al. 2001), logging observations to correct for substantial positive skew. We draw these data from the Quality of Government (QoG) data set. The QoG data contain yearly observations of district magnitude covering our entire 2009–2012 observation period.
Turning to control variables, we use a measure of internet users per capita, drawn from the WDI, to account for the size of the digital information environment. We control for economic development—and government capacity—using logged gross domestic product (GDP) per capita, in 2005 US dollars, again drawing from the WDI. The WDI also record the average time it takes to start a business, including licensing delays and other red tape, which we use as a proxy for bureaucratic activity and intrusiveness. Finally, to control for the effects of Google’s market share on takedowns, we use Google’s percentage of internet search to proxy for market penetration (StatCounter 2013).
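To make these transformations concrete, the sketch below shows how the key predictors might be constructed in Python. The column names are hypothetical, and using log(1 + x) for event counts (to accommodate zeros) is an illustrative choice rather than a description of our exact coding.

```python
import numpy as np
import pandas as pd

# Hypothetical input: one row per country half-year with raw measures.
panel = pd.DataFrame({
    "country": ["KOR", "KOR", "BIH", "BIH"],
    "half_year": [1, 2, 1, 2],
    "terror_events": [3, 50, 0, 1],
    "patent_apps": [180000, 190000, 60, 55],
    "population": [50e6, 50e6, 3.8e6, 3.8e6],
    "avg_district_magnitude": [2.4, 2.4, 6.0, 6.0],
})

# Logged terrorist incidents; log(1 + x) guards against zero counts.
panel["ln_terror"] = np.log1p(panel["terror_events"])

# Patents per capita, then logged to correct for heavy right skew.
panel["patents_pc"] = panel["patent_apps"] / panel["population"]
panel["ln_patents_pc"] = np.log(panel["patents_pc"])

# Logged average lower-house district magnitude.
panel["ln_district_mag"] = np.log(panel["avg_district_magnitude"])
```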
Our analysis focuses on democracies lodging at least one takedown request between July 2009 and June 2012. The Online Appendix discusses our sampling decisions, missing data, and also presents a zero-inflated negative binomial regression that models the selection process that determines if states lodge takedown requests with Google; this robustness check reinforces the results that we present in the main text. Furthermore, because many takedown requests are defamation based, legal institutions and culture may play an important role. In the Online Appendix we examine the robustness of our empirical models to the inclusion of a measure of legal tradition. The Online Appendix also provides descriptive statistics for the variables that we use in our analyses.
Estimation Strategy
Our dependent variable is a count measure that exhibits overdispersion; countries that rarely make content removal requests coexist with states that use this content regulation mechanism regularly. Therefore, we model takedown requests using negative binomial regression (see e.g., Cameron and Trivedi 2005). We include random intercepts for countries in all tests, to model baseline variation in takedown requests across countries. While we would like to use country-specific fixed intercepts to control for cross-national variability in our models, such an approach is not possible here. Because our measures of institutions are largely static, we cannot include institutional factors and fixed unit effects in the same model. Also, given our necessarily short observation period, even our demand measures—patent filings and terrorist incidents—are slowly evolving, rendering fixed effects altogether impractical. We do include fixed effects for time period (half year) in all specifications, to account for differences in the incentive to send requests over time.
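As a rough guide to the general approach, the following sketch fits a comparable model with statsmodels. Because standard GLM routines do not estimate mixed-effects negative binomials, this sketch substitutes country-clustered standard errors for our random country intercepts; the dispersion parameter and variable names are placeholders, not our fitted values.

```python
import statsmodels.api as sm
import statsmodels.formula.api as smf

# `panel` is assumed to hold the full estimation sample, with a
# `requests` column counting takedown requests per country half-year.
model = smf.glm(
    "requests ~ ln_terror + ln_patents_pc + ln_district_mag + C(half_year)",
    data=panel,
    family=sm.families.NegativeBinomial(alpha=1.0),  # placeholder dispersion
)
# Cluster by country as a rough stand-in for random country intercepts.
result = model.fit(cov_type="cluster", cov_kwds={"groups": panel["country"]})
print(result.summary())
```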
Predicting Google Takedown Requests
Table 2 shows the results of negative binomial regression models that predict content takedown requests. In general, the results are consistent with our theoretical expectations about the determinants of government censorship activity. Model 1 includes only demand factors and model 2 adds electoral institutions, while models 3 and 4 replicate models 1 and 2 but include controls. Model 4, therefore, tests all of our hypotheses while controlling for a variety of plausible determinants of censorship activity. Finally, model 5 tests the robustness of our domestic stability findings by substituting the WGI stability/violence measure for terrorist incidents. Figure 1 illustrates the substantive effects of key variables from model 4, predicting the number of takedowns in a six-month period across independent variables’ observed ranges. The top panels of the figure display predicted takedowns while the bottom panels show the predicted probabilities of observing at least 5, 10, 15, and 20 requests. Because observed request counts are skewed toward 0 (see the Online Appendix), the point estimates for predicted counts are often quite low, and given the functional form of the model, confidence intervals grow quickly as counts increase. Nonetheless, the predicted probabilities in the lower panels show that the substantive effects of the predictor variables are often quite substantial. For example, an average country with few terrorist incidents is extremely unlikely to lodge more than five takedown requests in any six-month period, while the most terror-plagued states in our data set have around 97, 76, 51, and 32 percent chances of lodging at least 5, 10, 15, and 20 requests, respectively, in a single period.
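Such exceedance probabilities follow mechanically from the negative binomial distribution: given a predicted mean and dispersion, P(Y >= k) is the survival function evaluated at k - 1. A brief illustration, using made-up parameter values rather than the fitted estimates behind Figure 1:

```python
from scipy import stats

def prob_at_least(k, mu, alpha):
    """P(Y >= k) for a negative binomial with mean mu and dispersion
    alpha (NB2 parameterization: Var = mu + alpha * mu**2)."""
    n = 1.0 / alpha          # scipy's size parameter
    p = n / (n + mu)         # scipy's success probability
    return stats.nbinom.sf(k - 1, n, p)

# Illustrative values only: a predicted mean of 4 requests per period
# with dispersion 1.5.
for k in (5, 10, 15, 20):
    print(k, round(prob_at_least(k, mu=4.0, alpha=1.5), 2))
```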
Fig. 1 Model predictions
Table 2 Predicting Google Content Removal Requests
Note: The dependent variable is content requests issued. The observation level is country half-year. We estimated all models using negative binomial regression with random intercepts for country and dummies for half-year time period (not shown). We used chained multiple imputation for missing data; N=322.
WGI=Worldwide Governance Indicators; GDP=gross domestic product.
*p<0.05.
All models support a demand-based explanation for digital censorship activity grounded in both regulatory and information-suppressive perspectives. We find a strong relationship between instability and political violence and takedown activity by governments. The coefficient for our measure of political violence—terrorist incidents—is statistically significant in all specifications. Countries experiencing more terrorist violence produce more takedown requests. The model predicts around four takedown requests per period for an average case. Such a case experiences around three terrorist incidents per period. The model predicts that substantial increases in terrorism would generate commensurate jumps in takedown activity. In particular, if our average case were to experience a 2 SD increase in terrorism—about 47 events—the model would expect takedowns to jump to around nine per period. The highest observed terrorism rate—373 events—corresponds to a predicted 19 additional takedown requests per six-month interval. The second column of Figure 1 illustrates the substantive influence of terrorism more generally. Furthermore, model 5 indicates that our results are robust to our choice of measure of stability; the WGI stability index is negatively correlated with censorship.
The estimated effect of the private sector’s demand for censorship, proxied by the log of patent applications per capita, is also consistently statistically significant and substantively meaningful. High-patent countries send more requests. A comparison between South Korea—which produced the data set’s highest number of patents per capita in the most recent time period—and Bosnia and Herzegovina—which in the same period produced the lowest number of patents per person—is instructive. During the final observation period South Korea lodged 33 takedown requests while Bosnia and Herzegovina sent only one. Model 3 generates predictions of 45 and 1 for these two cases, and both true values fall within the 95 percent credible intervals for the predictions. Furthermore, the model allows us to explore counterfactual questions. In particular, the model predicts that Korea would have sent around four requests if it produced patents at the Bosnian rate, while it predicts that Bosnia would have requested 25 takedowns if it housed knowledge industries as productive as South Korea’s. As Figure 1 illustrates, countries that produce numerous patents are most likely to also lodge many takedown requests. Thus, our results are consistent with the hypothesis that the presence of many IP producers in a state motivates governments to make intensive use of PPCs. Note that only a small percentage of the requests that we measure—see Table 1—directly address issues of copyright and trademark infringement. Thus, while our evidence is indirect, the strong relationship that we observe between IP production and government takedown requests implies that regulatory structures built to service IP producers may allow governments to influence digital information flows more broadly.
Next we examine our argument that government use of PPCs varies across electoral systems. Log of district magnitude taps underlying electoral system responsiveness to concentrated or narrow interests: the lower the magnitude, the more censorship we expect. Our theory suggests that regulatory and speech censorship makes a majority of the population worse off, while benefiting small groups of constituents (e.g., IP producers or aggrieved or defamed citizens), and that low district magnitude systems should be especially responsive to concentrated interests. The results appear to strongly support this supposition; the relationship between district magnitude and takedown requests is substantively and statistically significant across specifications. While the average effect of district magnitude depicted in Figure 1 is quite modest, the figure—especially the bottom panel of the third column—illustrates that the tendency to make takedown requests drops off precipitously as district magnitude increases. Furthermore, because the model is non-linear, the role of district magnitude can be magnified in states that are otherwise predisposed toward extensive internet regulation. Take Israel, which has an especially large district magnitude (120) and asked Google to take down five or fewer items in each period in our sample. When we use our model to examine the counterfactual question of how many requests Israel would have lodged if it operated under a single member district system, it predicts request counts ranging between 16 and 45 for the time periods in the sample. While this counterfactual exercise is admittedly speculative, it starkly illustrates the strength of the relationship between electoral system and governmental use of PPCs.
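Mechanically, counterfactuals of this sort amount to predicting from the fitted model after replacing the covariate of interest. A minimal sketch, reusing the hypothetical `panel` and `result` objects from the sketches above (the country code and data layout are assumptions):

```python
import numpy as np

# Israel under a hypothetical single-member-district system:
# district magnitude 1, so its log is 0.
cf = panel[panel["country"] == "ISR"].copy()
cf["ln_district_mag"] = np.log(1.0)

# Expected takedown requests per half-year under the counterfactual.
print(result.predict(cf))
```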
The statistically significant coefficients for time to start a business in Table 2 provide further evidence that institutions matter for internet censorship request activity. How heavily the hand of regulation falls on the brick-and-mortar economy correlates with the extent of internet regulation: countries with more red tape censor the internet more actively.
Finally, the coefficients for our remaining control variables in Table 2 are statistically insignificant. Once we account for unit effects with random intercepts and control for key demand factors, countries with higher GDP per capita tend to send takedown requests at about the same rate as poorer countries. Google’s search market share also has no clear relationship to requests, with a coefficient close to 0. Google is the dominant search engine across our sample—its average market share is 93 percent, and while the minimum share is 33 percent and the SD is 9, the first quartile is 92—so this non-result may be attributable to a lack of meaningful variation on this dimension. And while the estimated relationship between internet usage and content removal requests is large and positive, it is not statistically significant in our models.
Mechanism Tracing: The Turkish Case
The previous section corroborates our arguments, but cannot conclusively establish the causal mechanisms that underlie our theory. Takedown requests are a recent phenomenon and the empirical record of the use of PPCs is limited. While we can present results on how electoral institutions correlate with digital censorship, we observe no within-country variation in electoral district magnitude during our observation period—we can only establish that states with majoritarian electoral institutions have used takedown requests more aggressively than their counterparts, after accounting for a series of plausible drivers of digital regulatory activity. Similarly, while there is some variation in patent production within countries across the waves of our panel, these shifts are progressive rather than revolutionary, and, even where we observe sizeable changes in patent production, we would not expect politicians, bureaucrats, and legal actors to routinely translate evolving demands from firms into policy in the time frame that we examine here. Nonetheless, these results provide value, both because they describe an empirical landscape that has yet to be explored (see Gerring 2012 for an argument about the importance of “mere description”) and because they serve as baseline tests of the theoretical framework that we develop in the second section, providing a road map for future research on this topic. Yet one of our hypothesized determinants of internet censorship—internal unrest—does exhibit within-unit variability in our sample. In particular, Turkey represents an exceptional example of recent volatility in terrorism, protests, and other indicators of political instability. We therefore present a short study of this case to help establish the plausibility of a causal relationship between internal unrest and digital speech suppression through PPCs.
Turkey is a state where our theoretical framework indicates strong potential for PPC use. First, while Turkey holds closed-list proportional elections, it has a strong tradition of personalism, average district magnitude is quite low (seven), and high electoral thresholds amplify majoritarian tendencies. And while internet penetration is relatively low, at 41 percent, Turkey’s IP-producing sector is also sufficiently developed—Turkish patent generation is within 1 SD of the sample mean—to generate private-sector demand for digital content policing. Finally, the Turkish state has long faced challenges to internal stability from the separatist Kurdistan Workers’ Party (PKK), which has waged an armed insurgency—punctuated by numerous cease-fires—since 1984.
Turning to recent events, the PKK ordered a cease-fire in April of 2009 (Jenkins 2009), which held until May of 2010 (PKK Announces Ceasefire in Turkey 2010). Internal conflict with the PKK remained relatively low during this period, primarily induced by the government’s promises of reforms and political and cultural opening (US Department of State 2012, 85). PKK attacks, however, increased in the second half of 2010, and there was a “spike” in attacks and kidnappings in the run-up to, and aftermath of, national elections in June of 2011 (National Counterterrorism Center 2012, 9). Indeed, the Global Terrorism Database lists fewer than five terrorist events in Turkey in the second half of 2009 and the first half of 2010, but 18 in the second half of 2010, 21 in the first half of 2011, and 32 in the second half of that year.
As the period of relative quiet ended, the Justice and Development Party (AKP) consolidated its control of government, winning its third straight general election and almost 60 percent of the seats in the National Assembly, forming a single party majority government. The empowered Turkish government responded to renewed PKK activity, in part, by ratcheting up its censorship activity, altering media content, and imprisoning journalists using “overly broad and aggressively applied anti-terrorism laws, combined with a judicial system that too often sees its role as protecting the state, rather than the individual” (Corke et al. 2014, 14). This speech curtailment included attempts to regulate internet content through PPCs. Indeed, Google’s transparency report includes vignettes from this period that reflect both the AKP’s strict enforcement of laws prohibiting “criticism of [Mustafa Kemal] Atatürk, the government or national identity or values,” a broad tool for quashing politically subversive speech, and several more specific examples of attempts to suppress information related to Kurdish activism and independence, including “two requests from a government agency to remove a blog that contains information about the Kurdish Party and Kurdish activities as well as a Google+ picture showing a map of Kurdistan” and requests “to remove blogs for discussing minority independence” (Google Incorporated 2015).
Currently available Google transparency reports begin in July 2009 and extend through the end of 2013, covering nine half-year periods. We use these reports to trace the above-described process quantitatively, and use synthetic case-control methods (Abadie, Diamond and Hainmueller 2010) to determine whether post-election takedown requests by the Turkish regime are indicative of a policy intervention, or simply an artifact of ongoing cross-national trends in digital content regulation. We use tools described by Abadie, Diamond and Hainmueller (2011) to construct a synthetic control case for Turkey—a weighted amalgam of the other cases in our data set designed to match Turkey as closely as possible, during the pre-intervention period, with respect to a set of covariates that should predict takedown requests.
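At its core, the weight-selection step solves a constrained least-squares problem: choose non-negative donor weights that sum to one so that the weighted donor pool matches the treated unit’s pre-intervention characteristics. The sketch below implements that step, and the placebo (permutation) loop, on toy data; it omits the covariate-importance matrix that the full Abadie, Diamond and Hainmueller procedure also optimizes.

```python
import numpy as np
from scipy.optimize import minimize

def synth_weights(x1, X0):
    """Donor weights w (w >= 0, sum(w) = 1) minimizing ||x1 - X0 @ w||^2,
    where x1 holds the treated unit's pre-period predictors and X0 has
    one column of the same predictors per donor country."""
    k = X0.shape[1]
    res = minimize(
        lambda w: np.sum((x1 - X0 @ w) ** 2),
        x0=np.full(k, 1.0 / k),
        bounds=[(0.0, 1.0)] * k,
        constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0},
        method="SLSQP",
    )
    return res.x

# Toy data: 5 pre-period predictors for 10 donor countries.
rng = np.random.default_rng(0)
X0 = rng.normal(size=(5, 10))
x1 = X0 @ rng.dirichlet(np.ones(10))  # treated unit as a donor blend
w = synth_weights(x1, X0)

# Placebo test: pretend each donor is the treated unit, rebuild its
# synthetic control from the remaining donors, and record the gap.
gaps = []
for j in range(X0.shape[1]):
    donors = np.delete(X0, j, axis=1)
    wj = synth_weights(X0[:, j], donors)
    gaps.append(np.abs(X0[:, j] - donors @ wj).sum())
```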
Figure 2 presents the results of our synthetic control study. The left-hand panel compares pre- and post-election takedown requests in Turkey to a weighted average of requests produced by the cases that contribute to the synthetic control. Turkish requests jump substantially in the post-election period before skyrocketing in 2013, when the AKP vastly expanded its censorship activities, likely in light of the Gezi Park protests, but also because of the intensifying conflict with the PKK associated with the onset of the Syrian civil war (Dombey and Fielding-Smith 2012). The conflict only abated in March 2013 (the end of half-year 8), when the group declared a cease-fire. The synthetic control case maintains a trajectory of slow growth—the scale of the Turkish expansion of takedown requests obscures this modest upward trend—in both the pre- and post-election periods, implying that the jump in Turkish requests reflects a case-specific policy intervention. Corroborating this interpretation, the right-hand panel of Figure 2 displays the result of a permutation test in which we iteratively constructed a synthetic control for each country in our sample, and then plotted the gap between each case’s takedown requests and its matched synthetic control across the observation period. The permutation test indicates that the likelihood of seeing an intervention effect of the size that we observe in Turkey is small. Indeed, only two cases show gaps that approach the magnitude of the Turkish case, and one of these cases is poorly matched to its synthetic control in the pre-intervention period. While this synthetic control study cannot definitively establish a causal relationship between internal unrest—initially sparked by the end of the PKK cease-fire, but then fueled by popular protest in response to perceived AKP overreach and a corruption scandal—and takedown requests, it helps to rule out the possibility that the expansion of the use of PPCs in Turkey was simply the result of wider trends in the use of such regulatory tools. This quantitative exercise also lends credence to our interpretation of the qualitative evidence. More broadly, the mechanism tracing that we do here buttresses the large-N analysis presented in the previous section, at least with respect to the relationship between internal unrest and takedown requests.
Fig. 2 Synthetic control study results Note: The left-hand panel compares Turkey’s takedown request trend to that of a matched synthetic control case; the right-hand panel plots gaps between cases and matched synthetic controls for Turkey and each country in the potential control sample.
Finally, it is worth noting that, while many current accounts of strong Turkish government pressure on PPCs focus exclusively on the government’s handling of the Gezi Park protests in May 2013, “the tools used to pressure and control media outlets and individual journalists existed before the AK Party came to power. But the party, with its extraordinary political dominance, has used them unapologetically and with increasing frequency and force” (Corke et al. 2014, 8). The case of Turkey demonstrates the relative fungibility of legal frameworks and institutions that facilitate censorship. In particular, much of the legal framework that would eventually be applied to terrorist organizations in 2011 and then to urban protesters in 2013 has its basis in a legacy of the Turkish military’s control of and influence on the media, which dates back to before the AKP took power (Corke et al. 2014, 6–7). The events of 2011 represent the confluence of a renewal of hostilities with internal PKK forces and a consolidation of power for the AKP in the aftermath of highly successful elections, but they also reflect the role that pre-existing mechanisms for control play in facilitating censorship by democratically elected regimes.
Politics on the Internet
The primary contribution of this paper is to provide an initial answer to an outstanding, substantively important, question: how do democracies censor the internet, and why (Breindl 2013, 41)? We provide a theoretical framework that points to a pair of factors—political instability and violence, and IP production—that generate demand for digital censorship, and political institutions—namely electoral system design—that translate those demands into government activity. We argue that the answer to the question of when and why democracies censor the internet is, at least partially, political. Furthermore, while previous work has argued that PPCs represent a key content management tool for democracies, this paper provides, to our knowledge, the first large-sample comparative empirical study of what factors drive, and which institutions modulate, this form of digital censorship, showing that patterns in government censorship activity vary systematically with political demand factors and institutions. Finally, we overcome a key obstacle to the comparative study of digital censorship—the lack of cross-nationally comparable measures of this activity—by focusing on Google content removal requests. In sum, this study can provide a theoretical and empirical foundation for subsequent work on this topic. Nonetheless, we do not wish to overstate the strength of our evidence. In particular, data availability limits our ability to subject our causal arguments to strong tests. Our analysis is observational and we currently only have a short panel to work with. We argue only that our theory is consistent with the empirical record to date.
Our study has important implications for facilitating the free flow of information in modern democracies. One key finding is, ironically, that countries that are most invested in the information economy—those with large knowledge-producing sectors—are also those that most actively restrict their citizens’ access to information. While this activity helps incentivize IP production, spurring both economic growth and knowledge accumulation, it is important to realize that only a small percentage of the government censorship that we measure in this study actually pertains to IP protection (see Table 1). In fact, the bulk of content removal requests in our data set fall under the rubrics of defamation and privacy and security. Indeed, Google has a system in place to field IP infringement requests directly from private actors, heading off many such IP challenges before the government gets involved. The fact that the marginal relationship between IP production and digital censorship through PPCs is so strong implies that regulatory structures built to satisfy economically motivated constituencies are being leveraged for other purposes. Of course, this paper does not directly trace the mechanisms underlying this argument. Yet our study highlights an empirical regularity that raises a question that warrants further investigation: do IP-protecting institutions that were designed to promote business interests and to spur innovation allow states to more widely interfere in the free flow of digital information? Digital rights activists, including the Electronic Frontier Foundation (2010), have long argued that IP-oriented legislation like the Digital Millennium Copyright Act (DMCA) can have unintended consequences. If, as others have argued (e.g., Adler 2011), censorship through PPCs is subject to less oversight and accountability than traditional censorship methods, then citizens of information-rich societies should find our analysis disturbing, especially given that we find that governments also use PPCs more as political instability and violence increase.
Our institutional results contribute to an ongoing debate in political economy about how electoral institutions affect the balance of power between diffuse and concentrated interests. Furthermore, while our empirical focus is on digital content management, our findings imply that electoral institutions may influence the degree of media freedom in society more broadly. We find evidence to support the claim that, because electoral rules modulate politicians’ incentives—both to cater to focused interests and to protect their own reputations—we should see more government interference in digital information transmission in low district magnitude systems. But this argument is not tethered to the details of internet regulation and may be applicable beyond the digital domain.
From a policy-oriented perspective, our results imply that citizens in lower magnitude electoral systems face an uphill battle when it comes to protecting their digital rights and thus must work hard to organize to protect consumer interests in free information access. Politicians in low-magnitude systems have incentives to make it easy to remove damaging information from the internet and knowledge-producing firms are ideally situated to obtain protection when elections are low magnitude. It is instructive that one of the biggest wins for consumer advocates of digital rights in the United States—the defeat of Stop Online Piracy Act/PROTECT IP Act (SOPA/PIPA)—was largely organized by content providers who, while substantial IP holders themselves, were concerned that protections that would benefit other knowledge producers would hurt them. Similarly, civil society organizations partnered with content providers to play a critical role in organizing support for Brazil’s Civil Rights Framework for the internet. Thus, because organizing average consumers to effectively lobby government is notoriously difficult, one of the most effective strategies that consumer groups in low-magnitude states may have to limit digital censorship is to take advantage of fault lines across knowledge industries. When no fault lines exist, consumer protections in low-magnitude systems are likely to suffer.
A number of intriguing questions for future research emerge from our theory. One of the most timely is: can patterns in content removal requests tell us anything about levels of democracy or democratic survival? Could spikes in takedowns be an early warning sign of future autocratic tendencies? On one hand, the answer would appear to be no—many consolidated democracies use these tools extensively. On the other hand, our findings in Turkey provide an intriguing counter-example. As we describe, prior to the events of 2013 in Gezi Park and the subsequent crackdown, when many outside observers identified authoritarian trends in the AKP, Turkey progressively moved from a low of between one and ten requests in the second half of 2009 to a remarkably high figure of 501 contacts in the first half of 2012. Perhaps major changes in the extent to which states leverage PPCs are an early warning sign for the erosion of democracy?
Finally, this analysis focuses solely on censorship requests, ignoring compliance. Google provides information on its compliance rate that we hope to investigate in future work. Moreover, other firms—notably Twitter and Microsoft—have recently begun releasing transparency reports of their own. Eventually, takedown data from multiple firms will provide a powerful tool for examining the robustness of our findings, and cross-firm variations in compliance should help to shed light on when governments are best able to compel private actors to do their digital content regulation for them.