During the Trump presidency in the United States of America, the social media network Twitter (now known as X) became a new, unofficial media channel through which the former president issued many political statements and informed the public about planned activities and new decisions. At the same time, he continued to use the venue for more personal messages, most often connected in some way to his office, for example boasting, after a news report, that his ‘nuclear button’ was bigger than the North Korean leader’s. Until then, this type of communication was unknown as a general communication strategy, at least among most public officials. Press conferences and bulletins were the typical means of informing the public and professionally interested parties about the government’s positions, actions and plans. Government information was also typically delivered in a neutral, down-to-earth tone and was carefully drafted and revised, rather than consisting of spur-of-the-moment ideas that dismissed opposing views in direct, sometimes offensive language. The statements of the president of a leading nation and one of the world’s most prominent democracies will obviously attract attention. The Twitter postings of the Trump presidency, however, attracted more attention than usual: Trump’s tweets reached millions of followers and generated countless clicks. The criminal proceedings and the impeachment process that followed the storming of the Capitol in January 2021 rested on the recognition of the impact of those communicative acts on Trump’s followers.
Today is a time of retrogression in sustaining rights-protecting democracies, and of high levels of distrust in institutions. Of particular concern are threats to the institutions, including universities and the press, that help provide the information base for successful democracies. Attacks on universities and university faculties are rising. In Poland, over the last four years, a world-renowned constitutional law theorist, Wojciech Sadurski, has been subject to civil and criminal prosecutions for defamation of the governing party. In Hungary, the Central European University (CEU) was forced out by the government and had to relocate in part to Vienna, and other attacks on academic freedom followed. Faculty members in a number of countries have had to relocate abroad for their own safety. Governments attack what subjects can be taught: Hungary banned gender studies, and a Polish government minister called for a ban on gender studies and ‘LGBT ideology’. Attacks on academics and universities, through government restrictions and public or private violence, are not limited to Poland and Hungary; they are of concern in Brazil, India, Turkey and a range of other countries. Attacks on journalists are similarly rising. These developments are deeply concerning. The proliferation of ‘fake news’, doctored photos and false claims on social media has been widely documented. Constitutional democracy cannot long be sustained in an ‘age of lies’, where truth and knowledge no longer matter.
In United States v. Alvarez, the US Supreme Court ruled that an official of a water district who introduced himself to his constituents by falsely stating in a public meeting that he had earned the Congressional Medal of Honor had a First Amendment right to make that demonstrably untrue claim. Audience members misled by the statement might well be considered to have a First Amendment interest in not being directly and knowingly lied to in that way. Other members of the community might be thought to have a First Amendment interest in public officials such as Xavier Alvarez telling the truth about their credentials and experiences. Nevertheless, as both the plurality and the concurring justices who together formed the majority in Alvarez viewed the case, it was the liar’s interest in saying what he wished that carried the day. Why is that? Crucial to answering this question is whether ‘the freedom of speech’ that the First Amendment tolerates ‘no law abridging’ is understood to be primarily speaker-centered, audience-centered, or society-centered.
In today's digital age, the spread of dis- and misinformation across traditional and social media poses a significant threat to democracy. Yet repressing political speech in the name of truth can also undermine democratic values. This volume brings together prominent legal scholars from democracies worldwide to explore and evaluate different regulatory approaches for addressing this complex problem – all taking into account that the cure must not be worse than the disease. Using a comparative lens, the book offers important and novel insights into methods ranging from national regulation of politicians' speech to empowering civil-society groups that are well-positioned to blunt the effects of disinformation and misinformation. The book also provides solutions-oriented recommendations for policymakers, judges, legal practitioners, and scholars seeking to promote democratic values by encouraging free political speech while combatting disinformation and misinformation. This title is also available as Open Access on Cambridge Core.
This chapter launches the contemporary section of the book. The overarching argument is that despite the binaries leveraged by leaders and analysts alike, political contestation in the twenty-first century, as in the nineteenth and twentieth, is not reducible to an “Islamist vs. secularist” cleavage. Instead, contestation and key outcomes are driven by shifting coalitions for and against pluralism, notably, an Islamo-liberal/secular liberal coalition that marked the sixth major, pluralizing alignment since the Tanzimat reforms. It would transform state and society, even though the coalition itself proved short-lived as democratization stalled against a backdrop of debates over Islamophobia, the headscarf, minority rights, freedom of expression, media freedoms, and sweeping show trials.
The digital revolution has transformed the dissemination of messages and the construction of public debate. This article examines the disintermediation and fragmentation of the public sphere by digital platforms. Disinformation campaigns, which seek to assume the power of defining a truth alternative to reality, highlight the need to supplement the traditional view of freedom of expression as a negative freedom with an institutional perspective. The article argues that freedom of expression should be seen as an institution of freedom: an organizational space leading to a normative theory of public discourse. This theory legitimizes democratic systems and requires proactive regulation to enforce its values.
Viewing freedom of expression as an institution changes the role of public power: it is no longer limited to abstention but instead bears a positive obligation to regulate the spaces where communicative interactions occur. The article discusses how this regulatory need led to the European adoption of the Digital Services Act (DSA), which seeks to correct digital platforms through procedural constraints. Despite some criticisms, the DSA establishes a foundation for a transnational European public discourse aligned with the Charter of Fundamental Rights and member states’ constitutional traditions.
The “Danish cartoons controversy” has often been cast as a paradigm case of the blindness of liberal language ideologies to anything beyond the communication of referential meaning. This article returns to the case from a different angle and draws a different conclusion. Following recent anthropological interest in the way legal speech grounds the force of law, the article takes as its ethnographic object a 2007 ruling by the French Chamber of the Press and of Public Liberties. This much-trumpeted document ruled that the Charlie Hebdo magazine’s republication of the cartoons did not constitute a hate speech offense. The article examines the form as well as the content of the ruling itself and situates it within the entangled histories of French press law, revolutionary antinomianism, and the surprisingly persistent legal concern with matters of honor. The outcome of the case (the acquittal of Charlie Hebdo) may seem to substantiate a view of liberal language ideology as incapable of attending to the performative effects of signs. Yet, a closer look challenges this now familiar image of Euro-American “representationalism,” and suggests some broader avenues of investigation for a comparative anthropology of liberalism and free speech.
Violence and time are elements shaping the lives of children. For children, time lies largely in the future, while for adults it lies largely in the past; yet it is within this time that violence is directed toward children because they are children, often with the purpose of shaping their personhood and controlling them. Being able to speak freely about how time and violence socially construct self-identity as a child is an important act of resistance against the violence that constructs childhood, as well as an important form of protection. To fight violence, the child rights discourse must move beyond the child’s right to be heard and also take seriously the child’s right to freedom of speech.
Germany’s content moderation law, NetzDG, is often the target of criticism in English-language scholarship as antithetical to Western notions of free speech and the First Amendment. The purpose of this Article is to encourage those engaged in the analysis of transatlantic content moderation schemes to consider how Germany’s self-ideation influences policy decisions. By considering what international relations scholars term ontological security, Germany’s aggressive forays into the content moderation space are better understood as an externalization of Germany’s ideation of itself, which rests upon an absolutist domestic moral and constitutional hierarchy based on the primacy of human dignity. Ultimately, this Article implores American scholars and lawmakers to consider the impact of this subconscious ideation when engaging with Germany and the European Union in an increasingly multi-polar cyberspace.
Dean John Wade, who replaced the great torts scholar William Prosser on the Restatement (Second) of Torts, put the finishing touches on the defamation sections in 1977.1 Apple Computer had been founded a year before, and Microsoft two, but relatively few people owned computers yet. The twenty-four-hour news cycle was not yet a thing, and most Americans still trusted the press.2
The term “content moderation,” a holdover from the days of small bulletin-board discussion groups, is quite a bland way to describe an immensely powerful and consequential aspect of social governance. Today’s largest platforms make judgments on millions of pieces of content a day, with world-shaping consequences. And in the United States, they do so mostly unconstrained by legal requirements. One senses that “content moderation” – the preferred term in industry and in the policy community – is something of a euphemism for content regulation, a way to cope with the unease that attends the knowledge (1) that so much unchecked power has been vested in so few hands and (2) that the alternatives to this arrangement are so hard to glimpse.
This chapter addresses an underappreciated source of epistemic dysfunction in today’s media environment: true-but-unrepresentative information. Because media organizations are under tremendous competitive pressure to craft news that is in harmony with their audience’s preexisting beliefs, they have an incentive to accurately report on events and incidents that are selected, consciously or not, to support an impression that is exaggerated or ideologically convenient. Moreover, these organizations have to engage in this practice in order to survive in a hypercompetitive news environment.1
What is the role of “trusted communicators” in disseminating knowledge to the public? The trigger for this question, which is the topic of this set of chapters, is the widely shared belief that one of the most notable, and noted, consequences of the spread of the internet and social media is the collapse of sources of information that are broadly trusted across society, because the internet has eliminated the power of the traditional gatekeepers1 who identified and created trusted communicators for the public. Many commentators argue this is a troubling development because trusted communicators are needed for our society to create and maintain a common base of facts, accepted by the broader public, that is essential to a system of democratic self-governance. Absent such a common base or factual consensus, democratic politics will tend to collapse into polarized camps that cannot accept the possibility of electoral defeat (as they arguably have in recent years in the United States). I aim here to examine recent proposals to resurrect a set of trusted communicators and the gatekeeper function, and to critique them from both practical and theoretical perspectives. But before we can discuss possible “solutions” to the lack of gatekeepers and trusted communicators in the modern era, it is important to understand how those functions arose in the pre-internet era.
The laws of defamation and privacy are at once similar and dissimilar. Falsity is the hallmark of defamation – the sharing of untrue information that tends to harm the subject’s standing in their community. Truth is the hallmark of privacy – the disclosure of facts about an individual who would prefer those facts to be private. Publication of true information cannot be defamatory; spreading of false information cannot violate an individual’s privacy. Scholars of either field could surely add epicycles to that characterization – but it does useful work as a starting point of comparison.
The commercial market for local news in the United States has collapsed. Many communities lack a local paper. These “news deserts,” comprising about two-thirds of the country, have lost a range of benefits that local newspapers once provided. Foremost among these benefits was investigative reporting – local newspapers at one time played a primary role in investigating local government and commerce and then reporting the facts to the public. It is rare for someone else to pick up the slack when the newspaper disappears.
An entity – a landlord, a manufacturer, a phone company, a credit card company, an internet platform, a self-driving-car manufacturer – is making money off its customers’ activities. Some of those customers are using the entity’s services in ways that are criminal, tortious, or otherwise reprehensible. Should the entity be held responsible, legally or morally, for its role (however unintentional) in facilitating its customers’ activities? This question has famously been at the center of the debates about platform content moderation,1 but it can come up in other contexts as well.2
Coordinated campaigns of falsehoods are poisoning public discourse.1 Amidst a torrent of social-media conspiracy theories and lies – on topics as central to the nation’s wellbeing as elections and public health – scholars and jurists are turning their attention to the causes of this disinformation crisis and the potential solutions to it.
Current approaches to content moderation generally assume the continued dominance of “walled gardens”: social-media platforms that control who can use their services and how. Whether the discussion is about self-regulation, quasi-public regulation (e.g., Facebook’s Oversight Board), government regulation, tort law (including changes to Section 230), or antitrust enforcement, the assumption is that the future of social media will remain a matter of incrementally reforming a small group of giant, closed platforms. But, viewed from the perspective of the broader history of the internet, the dominance of closed platforms is an aberration. The internet initially grew around a set of open, decentralized applications, many of which remain central to its functioning today.
Political scientist and ethicist Russell Hardin observed that “trust depends on two quite different dimensions: the motivation of the potentially trusted person to attend to the truster’s interests and his or her competence to do so.”1 Our willingness to trust an actor thus generally turns on inductive reasoning: our perceptions of that actor’s motives and competence, based on our own experiences with that actor.2 Trust and distrust are also both episodic and comparative concepts, as whether we trust a particular actor depends in part on when we are asked – and to whom we are comparing them.3 And depending on our experience, distrust is sometimes wise: “[D]istrust is sometimes the only credible implication of the evidence. Indeed, distrust is sometimes not merely a rational assessment but it is also benign, in that it protects against harms rather than causing them.”4