
More Accurate, But No Less Polarized: Comparing the Factual Beliefs of Government Officials and the Public

Published online by Cambridge University Press:  24 November 2020

Nathan Lee
Affiliation:
Stanford University, Palo Alto, California, USA
Brendan Nyhan*
Affiliation:
Dartmouth College, New Hampshire, USA
Jason Reifler
Affiliation:
University of Exeter, UK
D. J. Flynn
Affiliation:
IE School of Global and Public Affairs, Madrid, Spain
*Corresponding author. E-mail: nyhan@dartmouth.edu

Abstract

Studies of the American public demonstrate that partisans often diverge not only on questions of opinion but also on matters of fact. However, little is known about partisan divergence in factual beliefs among the government officials who make real policy decisions, or how it compares to belief polarization among the public. This letter describes the first systematic comparison of factual belief polarization between the public and government officials, which we conducted using a paired survey approach. The results indicate that political elites are consistently more accurately informed than the public across a wide range of politically contentious facts. However, this increase in accuracy does not translate into reduced factual belief polarization. These findings demonstrate that a more informed political elite does not necessarily mitigate partisan factual disagreement in policy making.

Type
Letter
Copyright
Copyright © The Author(s), 2020. Published by Cambridge University Press

Contemporary polarization in American politics is often marked by disagreement over not just matters of opinion, but matters of fact as well (for example, Frankovic 2016; Frankovic 2018; Roush and Sood n.d.). Polarized factual beliefs can undermine debates on issues ranging from climate change to end-of-life care (Ding et al. 2011; Nyhan 2010). However, prior studies of partisan disagreement over factual matters have largely consisted of surveys of the mass public, neglecting the beliefs of the government officials who make policy decisions.

This study reports the results of parallel surveys that provide the first direct comparison of factual belief polarization between government officials and the public. Our surveys measured beliefs about six controversial policy issues (including the prevalence of voter fraud, the tax burden on the rich, and the safety of genetically modified foods) and two salient population quantities (the number of unemployed and foreign-born residents). For each issue, we consider both overall accuracy and the degree of belief polarization.Footnote 1

We find that government officials are consistently more accurately informed than the mass public across a range of politically contentious facts. However, this increase in accuracy does not reduce the overall degree of factual belief polarization. These findings suggest that government officials' factual beliefs may reflect both the incentives they face to be accurately informed about policy-relevant facts and the pressures to hold beliefs that align with their partisan preferences. Taken together, these results demonstrate that a more informed political elite does not necessarily mitigate partisan factual disagreement in policy making.

Theoretical Approach

Partisans frequently diverge in their factual beliefs as well as their policy preferences. We call the former phenomenon factual belief polarization. Though partisan knowledge gaps are smaller than many assume (Roush and Sood n.d.), misperceptions are widespread among the public, especially among people for whom the claims are politically congenial (for example, Frankovic 2016; Frankovic 2018). As a result, partisan differences in factual beliefs are larger for misinformation-related topics than for other issues (Roush and Sood n.d.).

In this study, we extend research on factual belief polarization to also consider U.S. political elites – people in positions of influence and power. Elites might be thought to hold less polarized beliefs than the public for three reasons. First, they tend to be more knowledgeable, a characteristic that is associated with greater belief accuracy (Gottfried et al. 2013). Secondly, elites possess domain expertise in politics and public policy that could reduce the influence of cognitive biases. Kahan et al. (2015) find, for instance, that judges display less bias in legal reasoning than law students or the mass public, which they attribute to ‘legal training and experience’. Finally, elites potentially face external scrutiny from the media and other political actors if they espouse inaccurate beliefs (Nyhan and Reifler 2015).

However, based on prior research, we evaluated the preregistered hypothesis that factual belief polarization would be greater among elites than the public.Footnote 2 First, studies typically find that elites have more polarized preferences than the public (for example, Bafumi and Herron 2010) and higher levels of issue constraint (for example, Lupton, Myers and Thornton 2015). Secondly, higher levels of education or knowledge, which we expect to observe among elites, are associated with higher levels of attitude-consistent factual beliefs in many partisan factual controversies (for example, Kahan et al. 2017). Finally, the available evidence suggests that political incentives do not effectively constrain these tendencies: recent research documents systematic elite error in perceptions of public opinion that is correlated with their own preferences (Broockman and Skovron 2018).

In addition, we examine whether elites hold more accurate factual beliefs than the public. While this outcome measure was not preregistered, considering accuracy differences clarifies our preregistered findings. Specifically, our findings challenge the intuitive and seemingly widespread expectation that belief accuracy and belief polarization are inversely related. For instance, some studies show that interventions which increase belief accuracy also reduce partisan or ideological belief polarization, while others find that exposure to misinformation can not only decrease belief accuracy but also widen belief polarization (for example, Guess et al. 2020; van der Linden, Leiserowitz and Maibach 2018). This expectation was also evident in media coverage of Bullock et al.'s (2015) finding that offering financial incentives for accurate answers decreased partisan polarization on factual survey questions. One headline stated, for instance, ‘Attention fact-checkers: Dangle a buck in front of partisans and they'll come closer to the truth’ (Benton 2013). But as Bullock and Lenz (2019, 337) write:

[a]lthough accuracy incentives used by Bullock et al. (2015) and Prior et al. (2015) clearly reduce partisan differences in survey responses, it is not as clear that they cause respondents to answer more accurately. Bullock et al. do not examine the effects of incentives on accuracy. Prior et al. (2015, especially p. 503) do examine the effects of incentives on accuracy, although these effects are not their focus; they find that incentives increase accuracy in their first study but not in their second.

In fact, belief polarization can be unchanged or widen when belief accuracy increases. For instance, Jerit and Barabas (2012) show that belief accuracy increases for issues with greater media coverage, but these increases are concentrated among partisans for whom those facts are politically congenial, creating greater belief polarization alongside greater belief accuracy.

The present study provides important descriptive evidence about the relative belief polarization of government officials and the mass public. While we find that government officials possess more accurate factual beliefs than the mass public, they are just as polarized.

Data and Methods

We analyze survey data from national samples of government officials and the American public.Footnote 3 Our government official data come from a national online survey of local and state government officials conducted by Nathan Lee through CivicPulse.Footnote 4 CivicPulse is a nonprofit organization that maintains a comprehensive list of elected policy makers, legislative staffers, and top administrative positions in local and state government in the United States. From this list, a random sample of officials was invited to participate in a confidential survey between 23 February and 28 April 2017. A total of 743 officials from all fifty states participated in the survey. Below we refer to this sample of survey participants as ‘government officials’, though we note that they specifically represent officials in state and local government who are involved in making or administering policy. Specifically, 75 per cent of the sample consists of elected policy makers; the other 25 per cent represents legislative staffers and top-level bureaucrats (for example, city managers). The public data were collected from 7–19 April 2017 by Ipsos-MORI among 2,000 respondents in their opt-in Internet panel. Quotas for gender, age and region were applied.Footnote 5

We asked respondents in both surveys factual belief questions about four controversial issues about which misperceptions are common: voter fraud, climate change, federal spending and taxes paid by the wealthy, which we refer to as ‘issue beliefs’ (see Appendix A for question wording). These issues were chosen because they are salient, controversial and expected to be balanced with respect to partisan congeniality. The voter fraud question asks respondents how many votes were cast in the 2016 presidential election by people who should have been ineligible or who voted more than once. Respondents used a five-point scale ranging from ‘less than a thousand’ to ‘millions’. The climate change question asks whether the world's temperature has been increasing over the last 100 years. The two remaining issue questions asked whether the federal government spends more on health care or the military and for respondents' estimates of the share of federal income tax paid by the top 1 per cent of earners. In addition to these questions about beliefs on controversial issues, we also asked respondents for their beliefs about the number of unemployed and foreign-born Americans, two salient population quantities that people frequently overestimate (for example, Horsley 2017; Kessler 2019).Footnote 6 Specifically, we asked respondents to estimate both how many people out of 100 were ‘born outside the United States’ and how many ‘are currently unemployed’ among those ‘who have a job or are actively looking for a job’. We call these ‘population beliefs’.Footnote 7

To expand the pool of data we analyze, we also provide non-preregistered analyses of data collected from government officials and the public on two additional issues – beliefs in the misperceptions that needle exchanges increase drug use (which we expected would be more common among Republicans) and that genetically modified (GMO) foods are unsafe for human consumption (which we expected would be more common among Democrats). While the elite data for these two issues come from a random subset of our CivicPulse sample of government officials, the public survey data come from a nationally representative survey administered by YouGov in December 2016 rather than the Ipsos survey described above.Footnote 8

As specified in our preregistration, each closed-ended belief accuracy measure is coded on a 0–1 scale where higher values indicate greater accuracy. More formally, 0 is the least accurate response, 1 is the most accurate response, and intermediate responses take values of i/n for response options i ∈ {2, …, n − 1}.Footnote 9 For our open-ended population beliefs, the measure is calculated as 1 − (|e − y|/100), where e is the respondent's estimate of the quantity in question on a 0–100 scale and y is the true population value.Footnote 10
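As a concrete illustration, the two coding rules can be sketched in a few lines of Python. The function names and example values below are ours, not from the study; only the formulas come from the text above.

```python
def closed_ended_accuracy(rank, n):
    """Accuracy for a closed-ended item with n response options.

    `rank` orders the options from least accurate (1) to most accurate (n).
    The least accurate option scores 0, the most accurate scores 1, and
    intermediate options i in {2, ..., n - 1} score i/n, following the
    coding described in the text.
    """
    if rank == 1:
        return 0.0
    if rank == n:
        return 1.0
    return rank / n

def population_accuracy(estimate, truth):
    """Accuracy for an open-ended population estimate on a 0-100 scale:
    1 - |e - y| / 100, so a perfect estimate scores 1."""
    return 1 - abs(estimate - truth) / 100

# Hypothetical examples: a five-point item and a foreign-born estimate
# of 25 per 100 against an assumed true value of 14 per 100.
print(closed_ended_accuracy(5, 5))   # most accurate response -> 1.0
print(population_accuracy(25, 14))   # -> 0.89
```

Both measures land on the same 0–1 scale, which is what allows accuracy to be compared and pooled across closed-ended and open-ended items.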

Results

We first plot factual belief polarization among the public and government officials on eight controversial issues and population quantities. Figure 1 plots average belief accuracy for government officials (represented as solid squares) and members of the public (represented as hollow triangles) by party. (Independents are not plotted separately for visual clarity but are included in the other tables and figures reported below.)

Figure 1. Belief polarization among the public and government officials

Note: figure shows differences in belief accuracy between government officials and members of the public. Beliefs are measured on a 0–1 scale where 1 represents the most accurate response (see Appendix A for question wording). The vertical line represents mean belief accuracy by issue for members of the public (including independents). Public partisanship was measured using self-placement on a seven-point measure (with leaners treated as partisans). We code government officials as partisans if they ran for office as a partisan or identified as Democrats or Republicans (including leaners).

As the figure indicates, only one issue (voter fraud) matches the preregistered expectation of greater elite belief polarization (that is, partisan government officials are more polarized than their public counterparts). However, we consistently observe higher levels of accuracy among government officials. On the other seven issues, mean accuracy levels are uniformly higher among officials in both parties than among their co-partisan counterparts in the public. Moreover, three issues (GMO safety, the foreign-born population and the unemployment rate) display a pattern in which officials in both parties hold more accurate beliefs than members of the public in either party.

The systematic differences we observe in belief accuracy between government officials and the public are summarized more clearly in Figure 2, which presents the mean differences in belief accuracy between government officials (pooled across parties) and members of the public for all eight outcome measures. As the figure illustrates, differences in mean accuracy by issue range from 0.09 (for climate change and health care spending) to 0.16 (for GMO safety). In each case, we can reject the null hypothesis of no difference between groups (p < 0.01): elites are always more accurate than the public, on average.
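The comparison underlying Figure 2 is a difference in mean accuracy between the pooled officials and the public. A minimal sketch of such a test, using invented accuracy scores rather than the actual survey data (the study's own estimates come from OLS models with robust standard errors and covariate controls), might look like:

```python
import math

def mean(xs):
    return sum(xs) / len(xs)

def welch_z(sample_a, sample_b):
    """Difference in means with a normal-approximation two-sided p-value.

    A simplified stand-in for a group comparison; it omits the covariate
    adjustment and robust standard errors used in the actual analysis.
    """
    ma, mb = mean(sample_a), mean(sample_b)
    # Unbiased sample variances for each group
    va = sum((x - ma) ** 2 for x in sample_a) / (len(sample_a) - 1)
    vb = sum((x - mb) ** 2 for x in sample_b) / (len(sample_b) - 1)
    # Standard error of the difference in means (Welch)
    se = math.sqrt(va / len(sample_a) + vb / len(sample_b))
    z = (ma - mb) / se
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return ma - mb, p

# Hypothetical 0-1 accuracy scores for the two groups
officials = [0.8, 0.9, 1.0, 0.7, 0.85, 0.95, 0.9, 0.8]
public = [0.6, 0.7, 0.5, 0.8, 0.65, 0.7, 0.55, 0.6]
diff, p = welch_z(officials, public)  # positive diff: officials more accurate
```

A positive `diff` with a small `p` corresponds to the pattern in Figure 2: officials are more accurate on average, and the difference is statistically distinguishable from zero.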

Figure 2. Accuracy differences between the public and government officials

Note: figure shows differences in belief accuracy between government officials and members of the public with 95 per cent confidence intervals. Beliefs are measured on a 0–1 scale where 1 represents the most accurate response (see Appendix A for question wording). The vertical line represents mean belief accuracy by issue for all respondents in the public sample.

Ordinary least squares (OLS) models in Table 1 formally test for differences in belief polarization and accuracy between government officials and members of the public, controlling for a series of preregistered covariates.Footnote 11 We observe relatively little evidence of greater belief polarization among government officials compared to the public. When we compute the relevant quantities of interest from the results in the table, we only observe evidence of greater elite polarization at the p < 0.05 level for voter fraud (see Appendix Table C2 for details).Footnote 12 Government officials provide more accurate responses than the public on five of the eight issues tested (p < 0.01 for two issues; p < 0.05 for three).

Table 1. Issue belief accuracy by partisanship and elite status

Note: cell entries are OLS coefficients with robust standard errors in parentheses. Dependent variables are measured on a 0–1 scale, where 1 is the most accurate response. Control variables are indicators for sex, college degree, non-white and age ranges 30–44, 45–64, and 65 and older. Partisanship was measured using self-placement on a seven-point party ID measure (with leaners treated as partisans) for the public. We code government officials as partisans if they reported running for office as a partisan or identified as Democrats or Republicans (including leaners). * p < 0.10; ** p < 0.05; *** p < 0.01

Conclusion

We provide the first comparison of factual belief polarization between political elites and the public. Using a paired survey approach, we find that government officials hold more accurate beliefs than the public across a range of politically contentious issues. However, the greater accuracy we observe among officials is not associated with reduced belief polarization.

These results challenge the assumption that belief accuracy and belief polarization are inversely related; increased factual accuracy among political elites does not necessarily translate into greater factual agreement across partisan lines. These findings may reflect the competing motivations government officials face to hold accurate beliefs about policy-relevant facts and to adopt beliefs that support their partisan preferences.

Future research should address three limitations of the present study. First, it would be valuable to explore cross-issue variation in factual belief polarization between the public and elites. Our findings are largely consistent, but we do observe notable heterogeneity across issues, especially on voter fraud and GMO safety. Secondly, scholars should more closely examine the direction and nature of the relationship between factual belief polarization and opinion polarization. Finally, researchers should evaluate the extent to which factual belief polarization affects the policy-making process.

For now, however, these results provide important new evidence of the extent of polarization – especially among government officials – and how it relates to factual beliefs. Learning more about the sources of the partisan divide over facts and its consequences for both elites and the public will be essential for understanding American democracy in this polarized age.

Supplementary material

Data replication sets are available in Harvard Dataverse at: https://doi.org/10.7910/DVN/RNPR9U and online appendices are available at https://doi.org/10.1017/S000712342000037X.

Acknowledgements

We remember Aaron Rapport and thank him for his assistance with the Ipsos-MORI survey. We also thank Mia Costa, Rasmus Tue Pedersen, Miguel Pereira, and participants at the annual meeting of the American Political Science Association for helpful comments. The conclusions and any errors are, of course, our own.

Data availability statement

Data replication sets are available at https://dataverse.harvard.edu/dataverse/BJPolS.

Financial support

This project received funding from the European Research Council under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 682758). Lee gratefully acknowledges funding support from Stanford University's Laboratory for the Study of American Values.

Footnotes

1 Our preregistered hypotheses and analyses focus exclusively on factual belief polarization. We hypothesized that, because elites tend to be more polarized than the public on policy preferences, they would also be more polarized on matters of fact. As we discuss below, however, our results are more easily understood when considering factual accuracy directly.

2 We preregistered two other hypotheses that were not supported. See Appendix D for details.

3 Our study was preregistered and can be found online at https://osf.io/qap3c. The preregistration discusses additional analyses that would pair elite data from the National Candidate Survey (NCS) with public data from the American National Election Studies. We lack access to the NCS data, so these analyses are omitted. Other deviations are noted below.

5 See Appendix B for further details on both samples. As we show there, the Ipsos sample over-represents college graduates, but the results are similar in our YouGov sample (see below). Also, per our preregistration, we do not use survey weights because we pool the public and government official data.

6 We also measured perceptions of the local unemployed and foreign-born population (see Appendix D for an analysis of these results).

7 This approach is consistent with previous research (Sides and Citrin 2007). Our unemployment question wording is meant to mirror the official definition of the term. As with any survey, we must rely on accurate self-reporting of respondent beliefs.

8 The YouGov data were collected as part of a survey experiment, so we only use data from the control condition. See Appendix A for descriptive statistics.

9 ‘Don't know’ responses, which were offered as an explicit option for the needle exchange and GMO questions but not the other issue measures, are treated as missing. See Appendix A for the exact wording and details on how we coded the responses.

10 This coding represents a deviation from our preregistration that we have chosen so that higher values consistently indicate greater accuracy across all outcome measures. (The preregistered coding was that higher values would indicate more unemployed or foreign-born residents. However, the results are equivalent using the preregistered coding; see Appendix E.)

11 These findings are robust to using ordered probit instead (if appropriate); see Appendix C. We also note two deviations from our preregistration. First, we omit a planned pooled model of responses to the questions that do not concern population quantities due to the addition of new issues and ambiguity about how to pool across issues given the varying relationship between partisanship and accuracy by issue. We also exclude control variables for appointed and elected officials so we can estimate an overall coefficient for government officials.

12 In addition, we observe significant differences in belief polarization on GMO safety due to a sign reversal: GOP elites have more accurate beliefs than Democratic elites, but the opposite is true for the public. This finding does not clearly support our expectations of greater elite polarization.

References

Bafumi, J and Herron, MC (2010) Leapfrog representation and extremism: a study of American voters and their members in Congress. American Political Science Review 104(3), 124.
Benton, J (2013) Attention fact-checkers: dangle a buck in front of partisans and they'll come closer to the truth. Nieman Journalism Lab, 3 June. Downloaded 27 January 2019. Available from http://archive.is/0o0lp.
Broockman, DE and Skovron, C (2018) Bias in perceptions of public opinion among political elites. American Political Science Review 112(3), 542–563.
Bullock, JG and Lenz, G (2019) Partisan bias in surveys. Annual Review of Political Science 22, 325–342.
Bullock, JG et al. (2015) Partisan bias in factual beliefs about politics. Quarterly Journal of Political Science 10(4), 519–578.
Ding, D et al. (2011) Support for climate policy and societal action are linked to perceptions about scientific agreement. Nature Climate Change 1(9), 462–466.
Frankovic, K (2016) Belief in conspiracies largely depends on political identity. YouGov, 27 December. Downloaded 4 April 2018. Available from http://archive.fo/Txz8x.
Frankovic, K (2018) Russia's impact on the election seen through partisan eyes. YouGov, 9 March. Downloaded 4 April 2018. Available from http://archive.fo/4H8bO.
Gottfried, JA et al. (2013) Did fact checking matter in the 2012 presidential campaign? American Behavioral Scientist 57(11), 1558–1567.
Guess, AM et al. (2020) ‘Fake news’ may have limited effects beyond increasing beliefs in false claims. Harvard Kennedy School Misinformation Review 1(1).
Horsley, S (2017) Ahead of Trump's first jobs report, a look at his remarks on the numbers. National Public Radio, 29 January. Downloaded 4 March 2020. Available from https://www.npr.org/2017/01/29/511493685/ahead-of-trumps-first-jobs-report-a-look-at-his-remarks-on-the-numbers.
Jerit, J and Barabas, J (2012) Partisan perceptual bias and the information environment. Journal of Politics 74(3), 672–684.
Kahan, DM et al. (2015) Ideology or situation sense: an experimental investigation of motivated reasoning and professional judgment. University of Pennsylvania Law Review 164, 349–439.
Kahan, DM et al. (2017) Motivated numeracy and enlightened self-government. Behavioural Public Policy 1(1), 54–86.
Kessler, G (2019) President Trump tweets nonsensical figures on illegal immigration. Washington Post, 29 January. Downloaded 4 March 2020. Available from https://www.washingtonpost.com/politics/2019/01/29/president-trump-tweets-nonsensical-figures-illegal-immigration/.
Lee, N, Nyhan, B, Reifler, J and Flynn, DJ (2020) Replication data for: More accurate, but no less polarized: comparing the factual beliefs of government officials and the public. Harvard Dataverse, V1. https://doi.org/10.7910/DVN/RNPR9U.
Lupton, R, Myers, W and Thornton, J (2015) Political sophistication and the dimensionality of elite and mass attitudes, 1980–2004. Journal of Politics 77(2), 368–380.
Nyhan, B (2010) Why the ‘death panel’ myth won't die: misinformation in the health care reform debate. The Forum 8(1).
Nyhan, B and Reifler, J (2015) The effect of fact-checking on elites: a field experiment on U.S. state legislators. American Journal of Political Science 59(3), 628–640.
Prior, M, Sood, G and Khanna, K (2015) You cannot be serious: the impact of accuracy incentives on partisan bias in reports of economic perceptions. Quarterly Journal of Political Science 10(4), 489–518.
Roush, CE and Sood, G (n.d.) A gap in our understanding? Reconsidering the evidence for partisan knowledge gaps. Unpublished manuscript. Downloaded 26 February 2020. Available from http://www.gsood.com/research/papers/partisan_gap.pdf.
Sides, J and Citrin, J (2007) European opinion about immigration: the role of identities, interests and information. British Journal of Political Science 37(3), 477–504.
van der Linden, S, Leiserowitz, A and Maibach, E (2018) Scientific agreement can neutralize politicization of facts. Nature Human Behaviour 2(1), 2–3.