The study of dis/misinformation is currently in vogue, yet there is much ambiguity about what the problem precisely is and much confusion about the key concepts brought to bear on it. My aim in this paper is twofold. First, I will attempt to precisify the (dis/mis)information problem, roughly construing it as anything that undermines the “epistemic aim of information.” Second, I will use this precisification to provide a new, grounded account of dis/misinformation. To achieve the latter, I will critically engage with three of the more popular accounts of dis/misinformation: (a) harm-based, (b) misleading-based, and (c) ignorance-based accounts. Each engagement will lead to further refinement of these key concepts, ultimately paving the way for my own account. Finally, I offer my own information hazard-based account, which distinguishes between misinformation as content, misinformation as activity, and disinformation as activity. By introducing this distinction between content and activity, it will be shown that my account rests on firmer conceptual/ontological grounds, overcoming many of the difficulties that have plagued previous accounts, especially the problem of the proper place of intentionality in understanding dis/misinformation. This promises to add clarity to dis/misinformation research and to prove more useful in practice.
Who should decide what passes for disinformation in a liberal democracy? During the COVID-19 pandemic, a committee set up by the Dutch Ministry of Health actively blocked disinformation. The committee comprised civil servants, communication experts, public health experts, and representatives of commercial online platforms such as Facebook, Twitter, and LinkedIn. To a large extent, vaccine hesitancy was attributed to disinformation, defined as misinformation (or misinterpreted data) spread with harmful intent. This study answers the opening question by reflecting on what is needed for us to honor public reason: reasonableness, the willingness to engage properly in public discourse, and trust in the institutions of liberal democracy.
Despite its transformative and progressive 2010 Constitution, Kenya is still grappling with a hybrid democracy, displaying both authoritarian and democratic traits. Scholars attribute this status to several factors, a prominent one being the domination of the political order and the wielding of political power by a few individuals and families with historical ties to patronage networks and informal power structures. Persistent electoral fraud, widespread corruption, media harassment, a weak rule of law and governance challenges further contribute to the hybrid democracy status. While the 2010 Constitution aims to restructure the state and strengthen democratic institutions, the transition is considered incomplete, not least because the judiciary, through its power of judicial review, faces the difficult task of countering democratic regression. Moreover, critical institutions such as the Independent Electoral and Boundaries Commission (IEBC) have drawn criticism over corruption scandals and perceptions of partisanship, eroding public trust in their ability to oversee fair elections effectively.
It is frequently argued that false and misleading claims, spread primarily on social media, are a serious problem in need of urgent response. Current strategies to address the problem – relying on fact-checks, source labeling, limits on the visibility of certain claims, and, ultimately, content removals – face two serious shortcomings: they are ineffective and biased. Consequently, it is reasonable to want to seek alternatives. This paper provides one: to address the problems with misinformation, social media platforms should abandon third-party fact-checks and rely instead on user-driven prediction markets. This solution is likely less biased and more effective than currently implemented alternatives and, therefore, constitutes a superior way of tackling misinformation.
The structure of society is heavily dependent upon its means of producing and distributing information. As its methods of communication change, so does a society. In Europe, for example, the invention of the printing press created what we now call the public sphere. The public sphere, in turn, facilitated the appearance of ‘public opinion’, which made possible wholly new forms of politics and governance, including the democracies we treasure today. Society is presently in the midst of an information revolution. It is shifting from analogue to digital information, and it has invented the Internet as a nearly universal means for distributing digital information. Taken together, these two changes are profoundly affecting the organization of our society. With frightening rapidity, these innovations have created a wholly new digital public sphere that is both virtual and pervasive.
State responses to the recent ‘crisis’ caused by misinformation in social media have mainly aimed to impose liability on those who facilitate its dissemination. Internet companies, especially large platforms, have deployed numerous techniques, measures and instruments to address the phenomenon. However, little has been done to assess the importance of who originates disinformation and, in particular, whether some originators of misinformation are acting contrary to their preexisting obligations to the public. My view is that it would be wrong to attribute only to social media a central or exclusive role in the new disinformation crisis that impacts the information ecosystem. I also believe that disinformation has different effects depending on who promotes it – particularly whether it is promoted by a person with a public role. Importantly, the law of many countries already reflects this distinction – across a variety of contexts, public officials are obligated both to affirmatively provide certain types of information, and to take steps to ensure that information is true. In contrast, private individuals rarely bear analogous obligations; instead, law often protects their misstatements, in order to prevent censorship and promote public discourse.
The 2024 presidential election in the USA demonstrates, with unmistakable clarity, that disinformation (intentionally false information) and misinformation (unintentionally false information disseminated in good faith) pose a real and growing existential threat to democratic self-government in the United States – and elsewhere too. Powered by social media outlets like Facebook (Meta) and Twitter (X), it is now possible to propagate empirically false information to a vast potential audience at virtually no cost. Coupled with the use of highly sophisticated algorithms that carefully target the recipients of disinformation and misinformation, voter manipulation is easier to accomplish than ever before – and frighteningly effective to boot.
The issue of mass disinformation on the Internet is a long-standing concern for policymakers, legislators, academics and the wider public. Disinformation is believed to have had a significant impact on the outcome of the 2016 US presidential election. Concern about the threat of foreign – mainly Russian – interference in the democratic process is also growing. The COVID-19 pandemic, which reached global proportions in 2020, gave new impetus to the spread of disinformation, which even put lives at risk. The problem is real and serious enough to force all parties concerned to reassess the previous European understanding of the proper regulation of freedom of expression.
The ‘marketplace of ideas’ metaphor tends to dominate US discourse about the First Amendment and free speech more generally. The metaphor is often deployed to argue that the remedy for harmful speech ought to be counterspeech, not censorship; listeners are to be trusted to sort the wheat from the chaff. This deep skepticism about the regulation of even harmful speech in the USA raises several follow-on questions, including: How will trustworthy sources of information fare in the marketplace of ideas? And how will participants know whom to trust? Both questions implicate non-regulatory, civil-society responses to mis- and disinformation. This chapter takes on these questions, considering groups and institutions that deal with information and misinformation. Civil society groups cannot stop the creation of misinformation – but they can decrease its potential to proliferate and to do harm. For example, advocacy groups might be directly involved with fact-checking and debunking misinformation, or with advancing truthful or properly contextualized counter-narratives. And civil society groups can also help strengthen social solidarity and reduce the social divisions that often serve as fodder for and drivers of misinformation.
In April 2023, the Government of India amended a set of regulations called the Information Technology Rules, which primarily dealt with issues around online intermediary liability and safe harbour. Until 2023, these rules required online intermediaries to take all reasonable efforts to ensure that ‘fake, false or misleading’ information was not published on their platforms. Previous iterations of these rules had already been challenged before the Indian courts for imposing a disproportionate burden on intermediaries, and having the effect of chilling online speech. Now, the 2023 Amendment went even further: it introduced an entity called a ‘Fact Check Unit’, to be created by the government. This government-created unit would flag information that – in its view – was ‘fake, false or misleading’ with respect to ‘the business of the central government’. Online intermediaries were then obligated to make reasonable efforts to ensure that any such flagged information would not be on their platforms. In practical terms, what this meant was that if intermediaries did not take down flagged speech, they risked losing their safe harbour (guaranteed under the Information Technology Act).
Misinformation can be broadly defined as information that is inaccurate or false according to the best available evidence, or information whose validity cannot be verified. It is created and spread with or without clear intent to cause harm. There is well-documented evidence that misinformation persists despite fact-checking and the presentation of corrective information, often traveling faster and deeper than facts in the online environment. Drawing on the frameworks of social judgment theory, cognitive dissonance theory, and motivated information processing, the authors conceptualize corrective information as a generic type of counter-attitudinal message and misinformation as attitude-congruent messages. They then examine the persistence of misinformation through the lens of biased responses to attitude-inconsistent versus -consistent information. Psychological inoculation is proposed as a strategy to mitigate misinformation.
In today's digital age, the spread of dis- and misinformation across traditional and social media poses a significant threat to democracy. Yet repressing political speech in the name of truth can also undermine democratic values. This volume brings together prominent legal scholars from democracies worldwide to explore and evaluate different regulatory approaches for addressing this complex problem – all taking into account that the cure must not be worse than the disease. Using a comparative lens, the book offers important and novel insights into methods ranging from national regulation of politicians' speech to empowering civil-society groups that are well-positioned to blunt the effects of disinformation and misinformation. The book also provides solutions-oriented recommendations for policymakers, judges, legal practitioners, and scholars seeking to promote democratic values by encouraging free political speech while combatting disinformation and misinformation. This title is also available as Open Access on Cambridge Core.
The attentive public widely believes a false proposition, namely, that the race Implicit Association Test (“IAT”) measures unconscious bias within individuals that causes discriminatory behavior. We document how prominent social psychologists created this misconception and the field helped perpetuate it for years, while skeptics were portrayed as a small group of non-experts with questionable motives. When a group highly values a goal and leaders of the group reward commitment to that goal while marginalizing dissent, the group will often go too far before it realizes that it has gone too far. To avoid the sort of groupthink that produced the mismatch between what science now knows about the race IAT and what the public believes, social psychologists need to self-consciously embrace skepticism when evaluating claims consistent with their beliefs and values, and governing bodies need to put in place mechanisms that ensure that official pronouncements on policy issues, such as white papers and amicus briefs, are the product of rigorous and balanced reviews of the scientific evidence and its limitations.
An extension to a product testing model is described that accounts for misinformation among subjects. A misinformed subject is one who associates the taste of product A with product B and vice versa; such a subject would therefore tend to perform incorrectly on pick-1-of-2 tests. A likelihood ratio test for the presence of misinformation is described. The model is applied to a data set, and misinformation is found to exist. Biases due to model misspecification and other implications for product testing are discussed.
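A minimal numerical sketch may help make the idea concrete. The snippet below is not the authors' model; it assumes a simplified setup in which each subject completes k pick-1-of-2 trials and responses come from a mixture of guessers (success probability 0.5), informed discriminators, and misinformed discriminators who systematically reverse the two products. A likelihood ratio statistic then compares this mixture against one without the misinformed component; all function and parameter names are illustrative.

```python
# Illustrative sketch only: a binomial-mixture stand-in for a product testing
# model with a misinformed segment, plus a likelihood ratio test for that segment.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom, chi2

def mixture_loglik(params, correct, k, with_misinformed=True):
    """Log-likelihood of per-subject correct counts under a binomial mixture."""
    if with_misinformed:
        a, b, logit_p1 = params
        w = np.exp([a, b, 0.0]); w /= w.sum()      # guessers, informed, misinformed
        p1 = 1 / (1 + np.exp(-logit_p1))
        comps = [0.5, p1, 1 - p1]                  # misinformed reverse the products
    else:
        a, logit_p1 = params
        w = np.exp([a, 0.0]); w /= w.sum()         # guessers, informed only
        p1 = 1 / (1 + np.exp(-logit_p1))
        comps = [0.5, p1]
    lik = sum(wi * binom.pmf(correct, k, pi) for wi, pi in zip(w, comps))
    return np.sum(np.log(lik + 1e-300))

def lr_test(correct, k):
    """Likelihood ratio test for the presence of a misinformed segment."""
    full = minimize(lambda t: -mixture_loglik(t, correct, k, True),
                    x0=[0.0, 0.0, 1.0], method="Nelder-Mead")
    restricted = minimize(lambda t: -mixture_loglik(t, correct, k, False),
                          x0=[0.0, 1.0], method="Nelder-Mead")
    lr = max(2 * (restricted.fun - full.fun), 0.0)
    # one extra mixing weight; the boundary issue is ignored in this sketch
    return lr, chi2.sf(lr, df=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    k = 10
    # Simulated panel: 60 guessers, 30 informed, 10 misinformed subjects.
    correct = np.concatenate([rng.binomial(k, 0.5, 60),
                              rng.binomial(k, 0.9, 30),
                              rng.binomial(k, 0.1, 10)])
    lr, p = lr_test(correct, k)
    print(f"LR = {lr:.2f}, approximate p-value = {p:.4f}")
```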
Despite the attention paid to the issue of misinformation in recent years, few studies have examined citizens' support for measures aimed at addressing it. Using data collected during the 2022 Québec election and recursive block models, this article shows that support for interventions against misinformation is high overall, but that individuals with a right-wing ideology, who support the Parti conservateur du Québec, or who lack trust in the media and scientists are more likely to oppose them. Those who are not concerned about the issue, who prioritize protecting freedom of expression, or who endorse false information are also less supportive. The results suggest that depoliticizing the issue of misinformation and working to strengthen trust in institutions could increase the perceived legitimacy and effectiveness of our response to misinformation.
This review article explores the role of land-grant Extension amidst an escalating epistemic crisis, where misinformation and the contestation of knowledge severely impact public trust and policymaking. We delve into the historical mission of land-grant institutions to democratize education and extend knowledge through Cooperative Extension Services, highlighting their unique position to address contemporary challenges of information disorder and declining public confidence in higher education. Land-grant universities can reaffirm their relevance and leadership in disseminating reliable information by reasserting their foundational principles of unbiased, objective scholarship and deep engagement with diverse stakeholders. This reaffirmation comes at a critical time when societal trust in science and academia is waning, necessitating a recommitment to community engagement and producing knowledge for the public good. The article underscores the necessity for these institutions to adapt to the changing information landscape by fostering stakeholder-engaged scholarship and enhancing accessibility, thus reinforcing their vital role in upholding the integrity of public discourse and policy.
This chapter introduces the reader to the topic studied in the book, factual misinformation and its appeal in war. It poses the main research question of who believes in wartime misinformation and how people know what is happening in war. It then outlines the book’s central argument about the role of proximity and exposure to the fighting in constraining public misperceptions in conflict, and the methods and types of evidence used to test it. After clarifying some key concepts used in the book, it finally closes with a sketch of the manuscript’s main implications and an outline of its structure and contents.
This chapter concludes the book and considers its major theoretical and practical implications. It begins by exploring how the book pushes us to think about fake news and factual misperceptions as an important “layer” of war – a layer that has been largely neglected despite the burgeoning attention to these issues in other domains. This final chapter then examines what the book’s findings tell us about such topics as the psychology and behavior of civilian populations, the duration of armed conflicts, the feasibility of prevailing counterinsurgency models, and the depths and limits of misperceptions more broadly in social and political life. It also engages with the practical implications of the book for policymakers, journalists, activists, and ordinary politically engaged citizens in greater depth, exploring how the problems outlined in the research might also be their own solutions. Ultimately, this chapter shows how the book has something to offer to anyone who is interested in the dynamics of truth and falsehood in violent conflicts (and beyond) – and perhaps the beginnings of a framework for those who would like to cultivate more truth.
This chapter examines issues of factual misinformation and misperception in the case of the US drone campaign in Pakistan. It first shows that, while the drone campaign is empirically quite precise and targeted, it is largely seen as indiscriminate throughout Pakistani society. In other words, there is a pervasive factual misperception about the nature of the drone strikes in Pakistan. Second, the chapter shows that this misperception is consequential. Notably, it shows that Pakistani perceptions of the inflated civilian casualties associated with the strikes are among the strongest drivers of opposition to them in the country. It also provides evidence suggesting that this anti-drone backlash fuels broader political alienation and violence in Pakistan. Finally, the chapter shows that these misbeliefs about drones (and the reactions they inspire) are not shared by local civilians living within the tribal areas where the incidents occur. In sum, the chapter demonstrates that factual misperceptions about US drone strikes in Northwest Pakistan are generally widespread and consequential in the country, but not in the areas that actually experience the violence.
Text classification methods have been widely investigated as a way to detect content of low credibility: fake news, social media bots, propaganda, etc. Quite accurate models (typically based on deep neural networks) help in moderating public electronic platforms and often cause content creators to face rejection of their submissions or removal of already published texts. Having an incentive to evade further detection, content creators try to come up with a slightly modified version of the text (known as an attack with an adversarial example) that exploits the weaknesses of classifiers and results in a different output. Here we systematically test the robustness of common text classifiers against available attacking techniques and discover that, indeed, meaning-preserving changes in input text can mislead the models. The approaches we test focus on finding vulnerable spans in text and replacing individual characters or words, taking into account the similarity between the original and replacement content. We also introduce BODEGA: a benchmark for testing both victim models and attack methods on four misinformation detection tasks in an evaluation framework designed to simulate real use cases of content moderation. The attacked tasks include (1) fact checking and detection of (2) hyperpartisan news, (3) propaganda, and (4) rumours. Our experimental results show that modern large language models are often more vulnerable to attacks than previous, smaller solutions, e.g. attacks on GEMMA being up to 27% more successful than those on BERT. Finally, we manually analyse a subset of adversarial examples and check what kinds of modifications are used in successful attacks.
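To illustrate the kind of attack being evaluated (this is not the BODEGA code itself), the sketch below trains a toy TF-IDF character n-gram classifier as a stand-in victim and perturbs a claim word by word with Cyrillic homoglyph character swaps, keeping changes until the predicted label flips. The toy data, the victim pipeline, and the homoglyph map are all assumptions for illustration.

```python
# Sketch of a character-level adversarial attack on a toy misinformation classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: 1 = low-credibility claim, 0 = neutral statement.
texts = ["miracle cure suppressed by doctors", "vaccine causes instant death",
         "city council approves new budget", "weather forecast predicts rain",
         "secret elites control the media", "local school wins science award"]
labels = [1, 1, 0, 0, 1, 0]

# Character n-grams make the victim sensitive to spelling-level perturbations.
victim = make_pipeline(TfidfVectorizer(analyzer="char_wb", ngram_range=(3, 5)),
                       LogisticRegression())
victim.fit(texts, labels)

HOMOGLYPHS = {"a": "а", "e": "е", "o": "о", "i": "і"}  # Cyrillic look-alikes

def attack(text, victim):
    """Perturb words left to right with homoglyph swaps, keeping each change,
    and stop as soon as the victim's predicted label differs from the original."""
    original = victim.predict([text])[0]
    words = text.split()
    for idx, word in enumerate(words):
        perturbed = "".join(HOMOGLYPHS.get(c, c) for c in word)
        if perturbed == word:
            continue
        words[idx] = perturbed
        candidate = " ".join(words)
        if victim.predict([candidate])[0] != original:
            return candidate, True
    return " ".join(words), False

adversarial, flipped = attack("miracle cure suppressed by doctors", victim)
print(adversarial, "| label flipped:", flipped)
```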