
Democracy’s Continuing Dilemma: How to Build Credibility in Chaotic Times

Published online by Cambridge University Press:  08 July 2019

Arthur Lupia, University of Michigan and National Science Foundation
Mathew D. McCubbins, Duke University

Symposium: Can Citizens Learn What They Need to Know? Reflections on The Democratic Dilemma
Copyright © American Political Science Association 2019 

A world in chaos. Democracy in decline. Political institutions facing existential threats. All of these headlines reflect common characterizations of the present day.

Change is certainly in the air. Fast-evolving electronic communication technologies are transforming social interaction. They give people new ways to express themselves. They offer new ways for people to learn about others.

These technologies bring new opportunities. We build deep economic, social, scientific, cultural, and many other types of relationships with people—most of whom we will never meet in person. We participate in vast communication networks and have free access to many different types of expertise on a large array of topics. Today, information and expertise are more plentiful—and easier to access—than at any point in human history. Many good things come from interconnectedness.

These changes also prompt uncertainty and fear. Many concerns arise from humanity’s collective lack of experience with new modes of communication. We are uncertain about how massive amounts of information affect our safety and security. We are concerned that others spin attention-grabbing conspiracy theories about critical matters, under cover of the internet’s vast capacity for anonymity.

At the root of so many of our current opportunities and challenges are deep questions about how humans process information. Many of these questions, although expressed in new words and referring to new contexts, share foundations with fundamental questions that have transfixed citizens since the beginning of recorded time: Whom can we trust? Which claims should we believe? The answers to these questions are critical for every person’s quality of life. How individuals, social organizations, and institutions filter the information that they use from the information that they ignore or reject affects decisions in contexts running the gamut from families to factories. Information processing affects the efficiency and effectiveness of every element of the private and public sectors. As electronic communication technologies enable new types of expression and information exchange, there are many concerns about their effect on human actions.

This article focuses on informational effects in the context of democratic decision making. To provide a focal point for evaluating the effect of information on decisions in this context, a key question is: What information is necessary or sufficient to make good decisions?

In political contexts, it often is difficult to identify universally accepted definitions of “good.” Conceptions of “good” vary across cultures, across time, and even across individuals when they are in different situations. Yet, what many decision makers have in common is a desire to reconcile a set of goals that they can imagine or articulate with the best available information.

Now-common questions about how people learn in today’s fast-evolving social media contexts imply that agreement on the common good or the truth has become more difficult to achieve in recent years. One source of difficulty is an expansion in the number of people who present themselves as “experts.” Unlike most of human history, we now can gain easy access to advice on all matters of science, morality, and culture. In many cases, this advice is offered in as few as 280 characters. In other cases, it is delivered with a six-second film clip and a caption. Populating the same informational venues are numerous actors with unprecedented capacity to produce and distribute “fake news.”


When we wrote The Democratic Dilemma in the mid-1990s, the internet, hyper-partisan 24-hour news networks, and seemingly all-encompassing social media did not exist. Whereas the intervening decades have transformed the types of information to which people have easy access, other critical factors have remained constant. Human attentive capacity has remained constant. Basic neural foundations of information processing have remained constant. People retain the ability to use language strategically and to use knowledge or beliefs about such abilities to condition their responses to what others claim. The power of motivated reasoning and magical thinking persists.

In The Democratic Dilemma, we sought to address fundamental questions about credibility, learning, and decision making using some of the most rigorous and advanced methods available at the time. Our toolbox included a tightly integrated mix of formal models, laboratory experiments, and survey-based experiments—all of which were motivated by our interactions with scholars and ideas from philosophy, mathematics, the neurosciences, and the social sciences.

We are grateful to this symposium’s contributors for their engagement with the original work. They have perceived the work in some ways that we hoped for and in other ways that we never imagined. From a corpus built using the original work and our subsequent experiences, we use this opportunity to convey our views about what the methods and findings of The Democratic Dilemma imply for democratic decision making today. The first section of this article reviews the original work’s core themes and conclusions to provide a context for what follows. The second section describes scholarly extensions of our academic work. The third section describes how this work influenced efforts to improve science communication. Concluding remarks are in the final section.

THE CORE QUESTIONS AND RESULTS OF THE DEMOCRATIC DILEMMA

The subtitle of our 1998 book asked, “Can Citizens Learn What They Need to Know?” That is, can citizens learn enough to perform their duties within their democracy? Citizens are asked to be overseers of democracy. They are asked to serve as voters, jurors, board and council members, and legislators. A principal attack on democracy, especially in America, is that citizens cannot attain the knowledge they need to perform these duties and make democracy work.

In The Democratic Dilemma (Lupia and McCubbins 1998), we sought to clarify conditions under which citizens who had limited information could make the same choices they would have made with more information. The logical foundation of our work was a series of extensions of Crawford and Sobel’s (1982) classic “cheap-talk” communication model. We used these extensions to derive conditions for trust, persuasion, and effective learning in a range of decision-making contexts.

Our work revealed four sufficient conditions for trust in these contexts. Specifically, when attempting to derive an accurate inference from what someone else says, a person must believe that (1) she shares with the speaker common interests in the outcome of their communicative interaction; or (2) the speaker paid a sufficiently large cost just to speak with her; or (3) the speaker’s statements are subject to sufficient threat of external (third-party) verification; or (4) a speaker is subject to sufficiently large (third-party) penalties for lying. Various combinations of these factors also are sufficient. The listener will trust what the speaker says when any of these conditions are upheld.
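To make the logic concrete, the following sketch (our construction, not the book’s; all names and thresholds are hypothetical) encodes the four sufficient conditions as a simple predicate over a listener’s beliefs about a speaker.

```python
# Illustrative sketch, not from the book: the four sufficient conditions for
# trust, expressed as a predicate over a listener's beliefs about a speaker.
# All field names and thresholds are hypothetical.
from dataclasses import dataclass

@dataclass
class SpeakerBeliefs:
    common_interests: bool       # listener believes she and the speaker want the same outcome
    cost_to_speak: float         # cost the listener believes the speaker paid just to speak
    verification_threat: float   # perceived probability of external (third-party) verification
    penalty_for_lying: float     # perceived third-party penalty if the speaker lies

def listener_trusts(b: SpeakerBeliefs,
                    min_cost: float = 1.0,
                    min_verification: float = 0.5,
                    min_penalty: float = 1.0) -> bool:
    """True when at least one condition is 'sufficiently large' for trust."""
    return (b.common_interests
            or b.cost_to_speak >= min_cost
            or b.verification_threat >= min_verification
            or b.penalty_for_lying >= min_penalty)

# A speaker with no shared interests can still be trusted if, for example,
# her statements face a sufficiently high chance of verification.
print(listener_trusts(SpeakerBeliefs(False, 0.0, 0.8, 0.0)))  # True
print(listener_trusts(SpeakerBeliefs(False, 0.0, 0.1, 0.0)))  # False
```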

If the listener also correctly believes the speaker’s statement to contain relevant expertise that she does not already hold, then the consequence of the communicative act will be a persuasive statement that may lead the listener to have more accurate beliefs about the decision in question (depending on whether the speaker has the needed expertise). If this increase in accuracy is sufficiently large, then we can say that the communicative act increased the listener’s competence in making the decision.

From this logic, we argued that reasoned choice (i.e., the choice that a listener would make if she had the best available information) does not require actually holding encyclopedic knowledge about a decision. Rather, it merely requires that a person be able to identify reliable sources of relatively simple pieces of information that lead her to the same decisions that she would have made if she knew more.
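A minimal simulation, using accuracy figures that are purely illustrative, shows the sense in which a credible cue can substitute for encyclopedic knowledge: a listener who follows a cue from a source satisfying the conditions above usually makes the same binary choice she would make if fully informed.

```python
# Illustrative simulation with made-up accuracy figures: a low-information
# listener who follows a credible cue usually ends up at the same binary
# choice a fully informed listener makes.
import random

def fully_informed_choice(true_best: int) -> int:
    return true_best  # an informed listener simply picks the better option

def cue_based_choice(true_best: int, cue_accuracy: float) -> int:
    # The cue points to the better option with probability `cue_accuracy`.
    if random.random() < cue_accuracy:
        return true_best
    return 1 - true_best

random.seed(0)
trials, matches = 10_000, 0
for _ in range(trials):
    true_best = random.randint(0, 1)
    if cue_based_choice(true_best, cue_accuracy=0.9) == fully_informed_choice(true_best):
        matches += 1
print(f"cue-based and fully informed choices agree in {matches / trials:.0%} of trials")
```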

An implication of these findings is that institutions—such as rules of evidence in courts and agencies as well as legislative rules and the competition between parties and candidates for office—can help people make better decisions about which information to trust and which to ignore. This outcome occurs when institutions generate one or more of the conditions described previously (something that Facebook and other social media platforms still struggle to create).

The results on trust and persuasion also allowed us to model problems of delegation. Delegation is ubiquitous. We can think abstractly about all delegations as an “agency problem.” The principal is the person who requires a task to be performed; the agent is the person to whom the principal delegates authority to complete that task. Redelegation also is ubiquitous. However, there are always perils from delegating, in the form of agency losses and agency costs. The former, agency losses, are the principal’s welfare losses due to the agent’s choices when those choices are suboptimal from the principal’s perspective. The latter, agency costs, are the costs of managing and overseeing an agent’s actions (e.g., the agent’s salary).
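The trade-off can be expressed in a few lines of arithmetic. The numbers below are hypothetical; the point is only that delegation is attractive when the value an expert agent creates, net of agency losses and agency costs, exceeds what the principal could achieve acting alone.

```python
# Hypothetical arithmetic, not a result from the book: delegation pays off
# when the value produced through the agent, net of agency losses and agency
# costs, exceeds the value the principal could produce by acting alone.

def net_value_of_delegation(value_with_faithful_agent: float,
                            agency_loss: float,
                            agency_cost: float) -> float:
    return value_with_faithful_agent - agency_loss - agency_cost

value_acting_alone = 4.0          # the principal lacks the agent's expertise
value_with_faithful_agent = 10.0  # what a perfectly faithful expert would deliver
agency_loss = 3.0                 # welfare lost to the agent's self-interested choices
agency_cost = 2.0                 # salary plus the cost of monitoring and oversight

delegated = net_value_of_delegation(value_with_faithful_agent, agency_loss, agency_cost)
print(f"net value of delegating: {delegated}")                   # 5.0
print(f"delegation pays off: {delegated > value_acting_alone}")  # True
```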


Three necessary conditions give rise to agency losses and, thus, the delegation dilemma. The first condition is that the agent must have agenda control. That is, the principal delegates to the agent the authority to take action without requiring the principal’s consent in advance. This puts the principal in the position of having to respond to the action ex post rather than being able to veto it ex ante. The second condition is that a conflict of interest exists between the principal and the agent, which is almost always the case. If the two have the same interests, or if they share at least some common goals, then the agent likely will choose an outcome that the principal finds satisfactory. When interests do conflict, the third condition comes into play: the principal must be unable to effectively check the agent’s actions. Conventionally, the lack of an effective check is attributed to the agent’s expertise—that is, the agent may be chosen because of this expertise in the first place, but the indirect consequence is that the principal may be unable to evaluate the outcome of the agent’s choice. In The Democratic Dilemma, we presented the necessary and sufficient conditions for solving the delegation dilemma, showing when principals can delegate and be made better off by their agent’s actions.

Bureaucratic expertise relative to members of Congress, for example, is an often-discussed reason that delegation to the bureaucracy becomes unaccountable and unsuccessful. However, the problem is not that legislators lack information or that bureaucrats monopolize information; rather, it is how principals are to assess the accuracy of the information they receive. This might seem to imply that to ascertain whether an agency is doing its job, political leaders must engage in proactive oversight: members of Congress, senators, and the president must gather sufficient information, before the agency’s actions, to assess whether it is pursuing its mission in a way that improves the welfare of political leaders (see footnote 1). To do this, political leaders collect and correlate enough information to make accurate inferences and then reach reasonable conclusions about whether an agency is serving their interests. In chapter 5 of The Democratic Dilemma, we identified the conditions under which conflicting interests and information asymmetry between the principal and agent cause delegation to succeed or fail. We proved that if a principal has access to the testimony of others and either the ability to identify testimony from a knowledgeable and trustworthy source or access to institutions that give the principal this ability, then delegation can succeed even if the dilemmas of delegation are present.
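The following sketch is a stylized rendering of that result, not the chapter 5 model itself: when the principal can obtain testimony from a source she correctly believes to be knowledgeable and trustworthy, her assessment tracks what the agent actually did; without such testimony, shirking can go undetected.

```python
# Stylized sketch of the idea, not the book's chapter 5 model: credible
# third-party testimony lets a principal who cannot evaluate the agent's work
# directly learn whether the agent served her interests.

def principal_detects_shirking(agent_served_principal: bool,
                               witness_is_knowledgeable: bool,
                               witness_is_trustworthy: bool) -> bool:
    if witness_is_knowledgeable and witness_is_trustworthy:
        # A knowledgeable, trustworthy witness reports truthfully, so the
        # principal learns whether the agent actually served her interests.
        truthful_report = agent_served_principal
        return not truthful_report
    # Without credible testimony, the principal cannot tell and detects nothing.
    return False

# The agent shirked: a credible witness exposes it; a non-credible one does not.
print(principal_detects_shirking(False, True, True))   # True
print(principal_detects_shirking(False, False, True))  # False
```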

A final contribution of The Democratic Dilemma pertains to attention. Our work demonstrated that a person’s expectations of learning the truth, as opposed to learning a falsehood, influence attention-related incentives. In turn, conditions for trust influence how a person distributes her limited attentive capacity.

SUBSEQUENT INNOVATIONS

As a whole, The Democratic Dilemma clarified how people and institutions can—and cannot—reconcile democratic responsibilities with cognitive limitations. This work also generated a series of notable inquiries into how the presence of multiple information sources affects trust and learning outcomes. For example, Boudreau and McCubbins (2008) built on our experimental design to show that competition between experts is not sufficient to generate conditions for trust, persuasion, and learning. Rather, competition can have this effect only when conditions for trust and persuasion are present in sufficient quantities to generate truthful statements and associated knowledge acquisition. Boudreau and McCubbins (2010) tested whether a “wisdom-of-crowds” effect can substitute for the conditions for trust and persuasion. In these experiments, Boudreau and McCubbins offered experimental subjects a choice between a poll of the choices that all subjects recommended and a clear statement made by speakers who were known to listeners to satisfy the conditions for trust and persuasion. Their results showed the power of the wisdom of the crowd as well as its folly. In their work, unsophisticated subjects were more likely to rely on polls than on trustworthy experts when the choices were difficult. The folly arises from the fact that, when the problems are most difficult, a poll across a group that includes many nonexperts will almost always be uninformative or wrong. Sophisticated subjects, by contrast, are more likely to solve the problem on their own; hence, they rely less on polls and are rarely misled. Together, these experiments—along with complementary efforts by Boudreau and McCubbins (2009) and Boudreau, McCubbins, and Coulson (2008)—reinforced and extended our (1998) insights about how contextual changes can facilitate reasoned choice.
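A back-of-the-envelope simulation (our construction, with hypothetical accuracy figures rather than Boudreau and McCubbins’s data) illustrates the folly: when each nonexpert is right less than half the time on a hard problem, a majority poll is wrong far more often than a single trustworthy expert.

```python
# Back-of-the-envelope simulation with hypothetical accuracy figures (not
# Boudreau and McCubbins's data): on a hard problem where each nonexpert is
# right less than half the time, a majority poll is wrong far more often than
# a single trustworthy expert.
import random

def majority_is_correct(n_voters: int, p_correct: float) -> bool:
    correct_votes = sum(random.random() < p_correct for _ in range(n_voters))
    return correct_votes > n_voters / 2

random.seed(1)
trials = 5_000
poll_rate = sum(majority_is_correct(101, p_correct=0.45) for _ in range(trials)) / trials
expert_rate = sum(random.random() < 0.90 for _ in range(trials)) / trials
print(f"majority poll of 101 nonexperts correct: {poll_rate:.1%}")    # well under 50%
print(f"single trustworthy expert correct:       {expert_rate:.1%}")  # about 90%
```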

Other work focused on how attributes of communication networks facilitate trust and reasoned choice. McCubbins, Paturi, and Weller (2009) and Enemark, McCubbins, and Weller (2014) experimentally examined the role of different types of connections within communicative networks. These authors argued that the primary function of connections in a network is to disseminate information. They showed how giving an individual more information about a network’s structure can have the same types of effects as adding network connections. They demonstrated that if the information produces the conditions for trust described previously, it can help subjects coordinate and accomplish tasks more effectively.

Thinking further about the types of information that can facilitate reasoned choice, Burnett and McCubbins (2013) and Burnett, Garrett, and McCubbins (2010) extended Lupia’s (1994) emphasis that voters can use endorsements to make reasoned choices in initiatives and referenda. In each study, they conducted surveys during these types of elections. They showed that endorsements have salubrious effects only when Lupia and McCubbins’s conditions are satisfied—that is, when a voter believes an endorser is knowledgeable and perceives the endorser as trustworthy.

Following publication of The Democratic Dilemma, the results that we described were replicated and extended into more specified models of learning, voting, and decision making. Taken together, this literature has made important contributions to how scholars and practitioners understand what information is and is not sufficient for reasoned choice.


A SURPRISING SUBSTANTIVE IMPLICATION

The world has changed in important ways since publication of The Democratic Dilemma. One change is the amount of information available on a wide range of topics. When the book was written, it was fashionable to conjecture that a lack of information was the main problem facing voters. Today, the main concern has a different character. Misinformation and fake news, combined with a better understanding of motivated reasoning, have prompted most present-day observers to be concerned not about a lack of information but rather about how citizens will sort through and use all of the information that is available.

Across the world, the lowered cost of accessing information and the proliferation of different kinds of information have produced crises of credibility. Formerly trusted sources of information ranging from the mass media to government are now questioned more frequently—and, in many cases, more fervently—than ever before. Science has not been immune from these changes. On high-profile issues such as climate change, genetically modified organisms, and vaccination, substantial questions about the validity and value of scientific research have been raised.

In response to these challenges, many science organizations have pursued ways to communicate more effectively. This is a significant change. When The Democratic Dilemma was written, and for decades prior to its publication, professional incentives for most scientists did not reward the ability to communicate effectively with broad audiences. Instead, the main currencies of the scholarly ecosystem that fueled academic careers were publishing in academic journals, presenting papers at academic conferences, and obtaining funding from science-focused organizations. To achieve these goals, the communicative skill needed was to convey insight and value to other scholars who often shared detailed substantive and methodological affinities. For many scholars, the “ideal reviewer” was one who appreciated the same jargon and conceptual frameworks as they did.

Today, most people obtain information about science through electronic devices—the same devices that show funny movies, live sporting events, memes, video games, social network connections, and so much more. Hence, for scientists to communicate effectively, they must compete in a fierce battle for attention (Lupia 2013; 2016). The Democratic Dilemma’s model of attention laid out fundamental parameters of this battle. The model’s initial premises drew from literatures in psychology and neuroscience that, at the time, documented extraordinary limits on human attentive capacity. These limits produce strong incentives to direct attention to high-consequence stimuli. Subsequent research in brain-related fields showed how human cognitive systems actively seek to hide attention limits from consciousness during real-time information processing (see, e.g., the review in Peterson and Posner 2012). An implication of these findings is that we would not expect untrained science communicators to be aware of these limits in themselves or in others. The Democratic Dilemma provided an early theoretical template for thinking about communication strategies in such circumstances.

Processing scientific information, however, requires more than obtaining attention. It requires keeping attention for a long enough time to form new memories. To understand these phenomena, and to help a broader range of researchers and science organizations produce content from which people will want to learn, a science of science communication has emerged (see, e.g., Jamieson, Kahan, and Scheufele 2017; Suhay and Druckman 2015). Key insights in this literature relate to how to obtain trust and credibility (for reviews of foundational literatures see, e.g., Chong and Druckman 2007; Druckman and Lupia 2000; 2016). The Democratic Dilemma’s emphasis on conditions for trust provides the explicit theoretical foundations for this work. At a time when many scientists struggle with the premise that their credibility is not preordained but rather the result of knowable theoretical conditions, The Democratic Dilemma clarifies ways in which science communicators can effectively offer important information.

Collectively, this scholarship has had a substantial effect on practice. Organizations such as Climate Central—whose content is used by news organizations around the world—and the National Academies of Sciences, Engineering, and Medicine’s “From Research to Reward” educational series draw explicitly on the lessons of this research. Whereas many hurdles to effective science communication exist, the literature that built from The Democratic Dilemma’s foundation is enabling more scholars to help more citizens make reasoned choices about more topics than ever before.

CONCLUSION

When we wrote The Democratic Dilemma, we were fortunate to have the support of scholars and students whose expertise spanned many academic domains. We appreciated that they cared enough about us and the project to inspire and challenge us throughout its creation. Two decades later, we see so many of their influences in these pages. We hope that they are pleased with the impact that the book has had. We are forever in their debt.

The answers to deep questions about the viability of democracy, civil society, and socially beneficial interactions can hinge on foundational questions about how humans process information and, in particular, how they choose whom and what to believe. It is easy to be cynical about the fact that citizens—and those with whom we disagree—are ignorant of many facts; however, cynicism that does not lead to adaptations with real prospects for improving human decisions and outcomes is not worth much. We are grateful to the thousands of scholars who have taken The Democratic Dilemma’s central challenge to do better; to attack the problem constructively; and to work diligently to identify, refine, question, and then improve our understanding of the conditions under which we can help people make better decisions. Each contributor to this symposium has made important contributions to that effort—an outcome for which we are most grateful.

Footnotes

1. McCubbins and Schwartz (1984) distinguished between two types of oversight, labeling them “police-patrol” and “fire-alarm.” In the former, members of Congress actively seek evidence of misbehavior by agencies: they look for trouble as a method of control, much like a prowling patrol car. In the latter, members of Congress wait for signs that agencies are improperly executing policy, relying on complaints from concerned groups to alert them that an agency is misbehaving.


REFERENCES

Boudreau, Cheryl, and Mathew D. McCubbins. 2008. “Nothing but the Truth? Experiments on Adversarial Competition, Expert Testimony, and Decision Making.” Journal of Empirical Legal Studies 5: 751–89.
Boudreau, Cheryl, and Mathew D. McCubbins. 2009. “Competition in the Courtroom: When Does Expert Testimony Improve Jurors’ Decisions?” Journal of Empirical Legal Studies 6: 793–817.
Boudreau, Cheryl, and Mathew D. McCubbins. 2010. “The Blind Leading the Blind: Who Gets Polling Information and Does It Improve Decisions?” Journal of Politics 72: 513–27.
Boudreau, Cheryl, Mathew D. McCubbins, and Seana Coulson. 2008. “Knowing When to Trust Others: An ERP Study of Decision Making after Receiving Information from Unknown People.” Social Cognitive and Affective Neuroscience 4: 23–34.
Burnett, Craig M., Elizabeth Garrett, and Mathew D. McCubbins. 2010. “The Dilemma of Direct Democracy.” Election Law Journal 9: 305–24.
Burnett, Craig M., and Mathew D. McCubbins. 2013. “When Common Wisdom Is Neither Common nor Wisdom: Exploring Voters’ Limited Use of Endorsements on Three Ballot Measures.” Minnesota Law Review 97: 1557–95.
Chong, Dennis, and James N. Druckman. 2007. “Framing Theory.” Annual Review of Political Science 10: 103–26.
Crawford, Vincent, and Joel Sobel. 1982. “Strategic Information Transmission.” Econometrica 50: 1431–51.
Druckman, James N., and Arthur Lupia. 2000. “Preference Formation.” Annual Review of Political Science 3: 1–24.
Druckman, James N., and Arthur Lupia. 2016. “Preference Change in Competitive Political Environments.” Annual Review of Political Science 19: 13–31.
Enemark, Daniel, Mathew D. McCubbins, and Nicholas Weller. 2014. “Knowledge and Networks: An Experimental Test of How Network Knowledge Affects Coordination.” Social Networks 36: 122–33.
Jamieson, Kathleen Hall, Dan Kahan, and Dietram A. Scheufele. 2017. The Oxford Handbook of the Science of Science Communication. New York: Oxford University Press.
Lupia, Arthur. 1994. “Shortcuts Versus Encyclopedias: Information and Voting Behavior in California Insurance-Reform Elections.” American Political Science Review 88: 63–76.
Lupia, Arthur. 2013. “Communicating Science in Politicized Environments.” Proceedings of the National Academy of Sciences 110: 14048–54.
Lupia, Arthur. 2016. Uninformed: Why People Know So Little about Politics and What We Can Do about It. New York: Oxford University Press.
Lupia, Arthur, and Mathew D. McCubbins. 1998. The Democratic Dilemma: Can Citizens Learn What They Need to Know? New York: Cambridge University Press.
McCubbins, Mathew D., Ramamohan Paturi, and Nicholas Weller. 2009. “Connected Coordination: Network Structure and Group Coordination.” American Politics Research 37: 899–920.
McCubbins, Mathew D., and Thomas Schwartz. 1984. “Congressional Oversight Overlooked: Police Patrols versus Fire Alarms.” American Journal of Political Science 28 (1): 165–79.
Peterson, Steven E., and Michael I. Posner. 2012. “The Attention System of the Human Brain: 20 Years After.” Annual Review of Neuroscience 35: 73–89.
Suhay, Elizabeth, and James N. Druckman (eds.). 2015. “The Politics of Science: Political Values and the Production, Communication, and Reception of Scientific Knowledge.” The ANNALS of the American Academy of Political and Social Science 658. Available at https://doi.org/10.1177%2F0002716214559004.