
Experimental Measurement of Misperception in Political Beliefs

Published online by Cambridge University Press:  10 March 2021

Taylor N. Carlson
Affiliation:
Department of Political Science, Washington University in Saint Louis, One Brookings Drive, St. Louis, MO 63130, USA
Seth J. Hill*
Affiliation:
Department of Political Science, University of California, San Diego, 9500 Gilman Drive #0521, La Jolla, CA 92093-0521, USA
*Corresponding author. Email: sjhill@ucsd.edu

Abstract

Recent research suggests widespread misperception about the political views of others. Measuring perceptions often relies on instruments that do not separate uncertainty from inaccuracy. We present new experimental measures of second-order political beliefs. To carefully measure political (mis)perceptions, we have subjects report beliefs as probabilities. To encourage accuracy, we provide micro-incentives for each response. To measure learning, we provide information sequentially about the perception of interest. We illustrate our method by applying it to perceptions of vote choice in the 2016 presidential election. Subjects made inferences about randomly selected American National Election Studies (ANES) respondents. Before and after receiving information about the other, subjects reported a probabilistic belief about the other’s vote. We find that perceptions are less biased than in previous work on second-order beliefs. Accuracy increased most with the delivery of party identification and report of a most important problem. We also find evidence of modest egocentric and different-trait bias.

Type
Research Article
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of The Experimental Research Section of the American Political Science Association

Scholars have long been interested in properly measuring and understanding individual political opinions, but only recently has research turned to perceptions about the beliefs and opinions of others, called “second-order beliefs.” This work suggests that individuals and elites systematically misperceive the beliefs and opinions of others (Ahler and Sood 2018; Broockman and Skovron 2018; Hertel-Fernandez, Mildenberger, and Stokes 2019; Levendusky and Malhotra 2015; Mildenberger and Tingley 2017). For example, Mildenberger and Tingley (2017) find that individuals tend to underestimate the extent to which others believe climate change is a problem.

Perceptions of how other actors think about political issues or candidates can have important consequences for political behavior. These perceptions are used to process political information (Ahn, Huckfeldt, and Ryan 2014), to prepare for potential disagreement in a discussion (Carlson and Settle, n.d.), and to make decisions about whether to turn out and for whom to vote (Feddersen and Pesendorfer 1996; Huckfeldt and Sprague 1995).

Research on second-order beliefs has taken a methodological approach of directly asking subjects to report beliefs about a perception of interest. Scholars have queried beliefs about the percentage of partisans that come from different demographic groups or the percentage of constituents in a legislator’s district that hold a policy opinion. Drawing inferences about bias in these second-order beliefs, however, requires care in separating bias in beliefs from bias induced by instrument or method. While researchers know the target quantity they aim to elicit, subjects often lack incentives to devote effort to evaluating difficult quantities, might not fully understand what the researcher asks, or might misinterpret the context or information upon which they are asked to draw; the instrument itself might not allow subjects to express their uncertainty. Moreover, recent work shows that incentives, clarification, and information can all change subject engagement with and the effectiveness of the instrument, and thus the values elicited from subjects (e.g. Hill and Huber 2019; Prior and Lupia 2008).

Given the complexity of second-order beliefs, we propose a new research design to elicit beliefs with more nuance. Our experimental approach includes three extensions to existing measures. First, we ask participants to report beliefs as probabilities. This allows researchers to capture uncertainty in participants’ beliefs and provides more nuanced measures of bias. Second, we provide micro-incentives for accurate reports of subject beliefs to two-thirds of our subjects with the goal of reducing the impact of expressive responding or shirking. Third, our design integrates exposure to information and repeated measures of the quantity of interest. This allows researchers to examine how perceptions and bias change in response to new information. Researchers can use this feature to systematically evaluate factors that might exacerbate or mitigate biased perceptions.

To introduce the method, we focus on perceptions of vote choice in the 2016 presidential election.[1] Our subjects evaluated the reported vote choice of randomly selected respondents to the 2016 American National Election Studies (ANES). Our main substantive result is that citizens have only small biases in perceptions – less than 5 percentage points, on average – about the 2016 vote choice of others. This magnitude of bias is notably smaller than in other studies, which report bias on the order of 20 points. Most of our results indicate a level of political information or sophistication that contrasts with research suggesting dramatic misperceptions. Given the importance of and scholarly interest in political misperceptions, it is crucial to critically evaluate the methodology used to measure bias and second-order beliefs. We hope this design stimulates further methodological development on measurement and experiments surrounding political beliefs.

Design to elicit second-order beliefs

In this section, we provide a general overview of the approach and in the following section, we describe our application. Participants are invited to play what we present as a game in which they are asked to provide beliefs about target quantities of interest. Participants report beliefs as probabilities, which provides a measure of respondent uncertainty about their beliefs that is richer than a standard “don’t know” response option. In addition to reporting probabilistic beliefs, we also sequentially provide information to respondents so that we can observe how they revise their beliefs in response to each informational stimulus. Changes in probability provide estimates of the informational value of each stimulus.

While not a requirement for our proposed method, one benefit is that we can provide incentives for accurate reporting of beliefs. When the target quantity is verifiable, we can use scoring methods to give subjects an incentive to accurately report their probabilistic beliefs about the politics of others. We use the crossover scoring method, which is robust to risk preferences, but other scoring methods might also be applied.[2] Thus, our design adds three features to existing approaches. We elicit beliefs as probabilities, provide a method for sequential delivery of information, and provide incentives for accuracy.

Application: perceived vote choice in the 2016 election

The method is best illustrated by an example. We fielded an experiment to understand perceptions of vote choice in the 2016 presidential election. We matched subjects at random to individual respondents from the 2016 ANES. We refer to the target ANES respondent as the “other.” We elicited subjects’ probabilistic beliefs that their matched other reported voting for Hillary Clinton or Donald Trump in the postelection survey. Participants were randomly paired with four others; for each, they reported five times the probability that the other voted for Trump or Clinton. They first reported this probability without any information about him or her. Participants were then sequentially presented with four pieces of information about the other, reporting potentially revised beliefs after each piece.

We used the survey platform Lucid to recruit a nationally representative sample of 3,253 US adults.[3] We selected others from the set of ANES respondents who had both reported voting for either Clinton or Trump and had been validated to have turned out to vote in the November 2016 election (Enamorado, Fifield, and Imai 2018). The four pieces of information presented about each ANES other were randomly selected from race, gender, income, state, party identification, and free-response report of the most important problem facing the nation.[4] Table 1 presents a summary of the inputs we used, examples, and how they were presented to subjects in our experiment. Figure 1 shows an example of what participants would see and how they reported their probabilities.

Table 1. Informational Inputs

Figure 1. Example of probability elicitation.

NOTES: Taken from the example round participants saw before starting the game. Full instructions and additional examples can be found in Appendix Section H.

We incentivized participants to report beliefs accurately with bonuses for “winning” the game. Participants could earn $0.10 for each of 20 beliefs via the crossover scoring method. The crossover scoring method asks participants to report, for a true/false statement, the probability p at which they would be indifferent between being paid if and only if the statement is true and entering a lottery that pays with probability p.[5]
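
To make the mechanism concrete, below is a minimal sketch of one standard way to resolve a crossover payout, written in R (the language of our replication materials). The function name, the uniform crossover draw on 0–100, and the payout resolution are our illustration of the general mechanism, not the study’s exact code.

```r
# Minimal sketch of a crossover scoring payout. A random crossover
# point q is drawn; at or below the reported p, the subject is paid on
# the truth of the statement, above it, via a lottery that wins with
# probability q. (Illustrative assumptions, not the study's code.)
crossover_payout <- function(p, statement_true, bonus = 0.10) {
  q <- runif(1, 0, 100)                      # random crossover point
  if (q <= p) {
    if (statement_true) bonus else 0         # paid iff statement true
  } else {
    if (runif(1, 0, 100) < q) bonus else 0   # lottery paying at rate q
  }
}

set.seed(1)
crossover_payout(p = 70, statement_true = TRUE)
```

Because misreporting p only ever trades the payment branch the subject prefers for the one they do not, reporting one’s true belief maximizes the chance of the bonus regardless of risk preferences.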

Altogether, we elicited more than 50,000 probabilities. With this sample, we are able to average across thousands of elicitations spanning 14,312 unique combinations of characteristics. While we view the central contribution of this paper as methodological, the richness of the data allows us to explore important substantive questions about the mechanisms of bias using intersections between characteristics of our subjects and characteristics of the others.

Example analysis

We turn now to a brief analysis of the experiment to highlight the scientific benefits of the method. We describe three quantities of interest to show that elicited probabilities can be flexibly adapted to answer a host of research questions. We evaluate subject accuracy, the impact of information delivered (informativeness), and bias. Our variable of interest is subject beliefs about others as elicited in Figure 1.[6]

Results: Subject accuracy

Our first results evaluate subject accuracy. We measure accuracy by comparing the subject’s probabilistic belief that the other voted for Trump to the observed proportion of ANES respondents, with the characteristics so far delivered to the subject, who voted for Trump. For example, a subject might be in the third elicitation and have learned that the other is a female from Texas.

Figure 2 presents this result, averaging across all characteristic combinations with the same rounded Trump share (x-axis), compared to the average probability given by all subjects evaluating a characteristic combination with that vote share (y-axis). Continuing the example of a female from Texas, the x-axis value would be the observed proportion of the 52 females from Texas in our ANES sample who reported voting for Trump (71.5%), and the y-axis value would be the average probability of voting for Trump elicited knowing that the person was a female from Texas.[7]

Figure 2. Subjective probabilities on ANES vote share given other’s characteristics.

NOTES: Each point is the average probability elicited from subjects (y-axis) against ANES vote share given characteristics subject had observed up to that elicitation (x-axis), grouped by vote share.
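
As an illustration of how the two axes in Figure 2 are constructed, here is a minimal sketch. The data frames `anes` and `elic` and their column names are hypothetical stand-ins for the replication data, and the sketch uses unweighted means for clarity, whereas the paper’s ANES proportions account for the complex survey design (see footnote 7).

```r
# Hypothetical stand-ins for the replication data:
#   anes: one row per ANES respondent, with columns gender, state,
#         and voted_trump (0/1)
#   elic: one row per subject elicitation, with the characteristics
#         seen so far and the elicited probability `belief` (0-100)

# x-axis: observed Trump share among ANES respondents matching the
# characteristics delivered so far (e.g., females from Texas)
x <- 100 * mean(anes$voted_trump[anes$gender == "female" & anes$state == "TX"])

# y-axis: average probability elicited from subjects who had seen
# exactly those characteristics
y <- mean(elic$belief[elic$gender == "female" & elic$state == "TX"])

c(anes_share = x, avg_belief = y)
```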

Most points do not fall on the 45-degree line, which would indicate perfectly informed subjects. However, the data present a positive linear relationship with little variability about the line. When Trump’s share in the ANES subset is near 100%, subjects on average return a probability around 70. For ANES characteristic combinations with Trump share near 0, subjects return an average probability of about 30. In Appendix Section D, we examine how accuracy changes with the presentation of new information. We also consider additional measures of accuracy, including whether the respondent’s reported probability that the other voted for Trump falls within the 95% confidence interval that an ANES respondent with the given characteristics voted for Trump and, as an alternative definition of accuracy, whether the subject placed greater than 50% probability on the candidate the other actually voted for. The probabilities elicited in our approach allow researchers to use different measures of accuracy.

Results: Informativeness of characteristics and traits

In this section, we consider the informativeness of the information delivered to subjects. We measure informativeness by examining how far subjects’ elicited probabilities move toward the truth with the delivery of each class of information. We find that partisanship leads to the greatest increase in accuracy. Those whose prior beliefs were between 0 and 10 (very inaccurate) moved about one-third of the scale toward truth when they were informed of the other’s partisanship. Those who began with an uncertain prior around 50 became about 12 points more accurate after learning the other’s partisanship. The second most informative characteristic was the other’s report of the most important problem, followed by race. See Appendix Section E and Figure A2.
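
A minimal sketch of this movement-toward-truth measure, assuming a data frame `d` with one row per elicitation and hypothetical columns `prior` and `posterior` (beliefs on 0–100), `truth` (the benchmark ANES Trump share on 0–100), and `info_class` (the class of information delivered at that elicitation):

```r
# Movement toward truth: the reduction in absolute distance to the
# ANES benchmark from the prior to the posterior belief.
d$gain <- abs(d$prior - d$truth) - abs(d$posterior - d$truth)

# Average movement toward truth by class of information delivered
sort(tapply(d$gain, d$info_class, mean), decreasing = TRUE)
```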

To make inferences about informativeness while accounting for floor and ceiling effects, we estimate a regression model motivated by Bayesian learning. Bayes’ rule states that posterior beliefs are a combination of prior beliefs and new information. While one can estimate a structural model of Bayesian learning with log-odds beliefs and log-likelihood ratios (see Hill, Reference Hill2017), we present here a reduced form version of a Bayesian learning model. We run an OLS regression of posterior beliefs on prior beliefs and indicators for the type of information delivered at that elicitation

(1) $$y_{ijt} = \beta\, y_{ij,t-1} + X_{ijt}\,\gamma + \varepsilon_{ijt},$$

where $y_{ijt}$ is the probability given by subject i in round j, elicitation t; β is a lag coefficient on the prior belief at $t-1$; X is a design matrix with one row per subject-round-elicitation and one column per characteristic value, whose row $X_{ijt}$ indicates which characteristic value was presented to subject i in round j, elicitation t; γ is a vector of coefficients, one per characteristic value, measuring average learning for each characteristic; and ε is an idiosyncratic disturbance. For example, if a subject returns a probability of Trump vote of 60 after learning in elicitation three that the other’s gender was female, and the elicitation-two probability was 55, then $y_{ijt} = 60$, $y_{ij,t-1} = 55$, and the row of X for elicitation ijt has a one in the column for gender female and zeros in all others.[8]
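
Estimating equation (1) in reduced form requires only an OLS fit of posterior beliefs on the lagged prior and characteristic-value indicators. A minimal sketch under the same hypothetical data frame `d`, with a factor `info_value` recording the characteristic value delivered at each elicitation (column names are ours, not the replication code):

```r
# Equation (1): posterior belief on the lagged prior plus indicators
# for the characteristic value shown. The "- 1" drops the intercept so
# that every characteristic value receives its own learning coefficient.
fit <- lm(posterior ~ prior + info_value - 1, data = d)

beta_hat  <- coef(fit)["prior"]                      # weight on prior
gamma_hat <- coef(fit)[names(coef(fit)) != "prior"]  # learning by value
head(sort(gamma_hat, decreasing = TRUE))             # most informative
```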

Because this regression model generates 59 informativeness coefficients, we plot the estimates in Figure 3 sorted by magnitude into two frames. Each point is the coefficient estimate and lines extend to 95% confidence intervals. Estimates in Figure 3 roughly break into three groups by magnitude of informativeness. At the top in the left frame are the most informative characteristic values: four values of partisanship and race Black. These five pieces of information increased accuracy on average by 25 points or more conditional on prior beliefs. The second group of values increases accuracy by 20–25 points and includes race/ethnicity Hispanic, the other’s report of the most important problem facing the country, and residence in the District of Columbia, Alabama, Kentucky, or Oregon. The remaining characteristic values increase accuracy by between about 10 and 20 points. Overall, there is substantial variation in informativeness across the characteristics in this study, but subject beliefs on average move toward truth with each piece of information.

Figure 3. Estimated informativeness of each characteristic value.

NOTES: Points are OLS point estimates of the average treatment effect of information on the subjective probability of target vote correct, lines extending to 95% confidence interval.

Results: Bias and misperception

Finally, we consider bias in subject beliefs. To measure bias, we calculate the average difference between the probability elicited from subjects given a profile of characteristics (e.g. female from Texas) and the actual vote rate of ANES respondents with that profile of characteristics. That is, does the average probability returned systematically over- or understate the actual vote rate for one of the two candidates?
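
A minimal sketch of this bias measure, again using the hypothetical data frame `d` (columns `posterior` and `truth` on 0–100): signed differences above zero indicate overstatement of Trump’s support.

```r
# Signed bias toward Trump: elicited probability minus the actual ANES
# Trump vote rate for the profile of characteristics delivered so far.
d$bias <- d$posterior - d$truth

c(mean = mean(d$bias), median = median(d$bias))
```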

Overall, subjects overestimated Trump’s support by an average of 2.3 and a median of 0.9 percentage points. This bias is of notably lower magnitude than misperceptions of other target quantities in existing published work. One natural concern is that average bias might mask large underlying bias within relevant subgroups. First, we show that the magnitude of bias does not vary by prior beliefs (see Figure A3 in Appendix F). Second, researchers focused on misperceptions of second-order beliefs have suggested at least two mechanisms for misperceptions: egocentric bias and different-trait bias.

Egocentric bias implies that subject beliefs are biased toward the candidate the subject themselves supported. To measure the extent of egocentric bias, we tabulate average and median bias by the subjects’ 2016 vote, which we queried of all subjects. Clinton voters had an average bias of 3.5 points toward Clinton (median 1.8); Trump voters had an average bias of 8.6 points toward Trump (median 5.4); and voters for other candidates had an average bias of 0.25 points toward Trump (median 0.1). This evidence is consistent with egocentric bias: subject beliefs tend toward their own vote choice.

The magnitudes show that our finding of small overall bias is not simply a function of Clinton- and Trump-supporter biases of magnitudes similar to other studies canceling out. However, these Clinton- and Trump-supporter magnitudes are larger than the average and could be consequential in some settings. For example, a bias of 8.6 points on Trump’s support of 48% is a relative error of 18%.

Different-trait bias is grounded in social identity theory and suggests individuals are likely to assume that out-group members are more homogeneous than in-group members. Bias in perception should thus increase with the number of traits on which subjects differ from the other.

The top frame of Figure 4 presents average bias toward Trump (y-axis) across elicitation numbers and the number of shared characteristics between the subject and the other (x-axis). For example, the leftmost point shows that subjects in elicitation two who did not share the single characteristic so far delivered about the other were biased toward Trump on average by around 5 points. The next point, however, shows that subjects in elicitation two who did share the characteristic were biased toward Trump by less than 1 point. Across elicitations, sharing at least one characteristic with the other led subjects to have less biased perceptions of the other’s vote choice, consistent with a different-trait mechanism.

Figure 4. Average distance between the estimate and ANES vote share by elicitation number and number of shared characteristics, and by subject 2016 vote.

NOTES: Each point is the average distance between elicited probability and actual ANES vote share given the characteristics so far delivered to the subject with error bars to 95% confidence intervals. Gray lines are each characteristic separately. Top frame by elicitation number, bottom by subject vote.


The bottom frame of Figure 4 presents average bias toward Trump (y-axis) across subject 2016 vote (Clinton, Trump, or Other, excluding those who did not vote in 2016) and the number of shared characteristics between the subject and the other (x-axis). For subjects who voted for Clinton or Trump, increasing the number of characteristics shared with the other decreases misperception; for Other voters, the relationship is notably flat. Egocentric bias for Clinton voters declines from 5 to 0 points moving from zero shared traits to two shared traits. Egocentric bias for Trump voters declines from 13 to 1 point across the same range, though it appears to move toward a Clinton bias of around 4 points with three or four shared traits.
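
Both frames of Figure 4 reduce to cross-tabulations of the same signed bias. A minimal sketch, assuming hypothetical columns `elicitation`, `shared_n` (the number of delivered characteristics the subject shares with the other), and `own_vote` (the subject’s 2016 vote) in the data frame `d`:

```r
# Top frame: average bias toward Trump by elicitation number and by
# the number of characteristics shared with the other
round(tapply(d$bias, list(shared = d$shared_n,
                          elicitation = d$elicitation), mean), 1)

# Bottom frame: the same summary by the subject's own 2016 vote
round(tapply(d$bias, list(shared = d$shared_n,
                          vote = d$own_vote), mean), 1)
```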

Figure 4 highlights additional scientific understanding about beliefs that our method allows. We are able to move from overall measures of bias, measured on a scientifically relevant probability scale, to evaluate theoretical mechanisms of bias. We find that bias is greater when the other differs from the subject and that beliefs lean toward subjects’ own actions.

Discussion

We have presented a new method to measure beliefs about the politics of others and applied this method to understand beliefs about the 2016 American presidential vote. Our results suggest that beliefs are (1) responsive to information about others, (2) biased to a small degree, and (3) somewhat more accurate when the subject and the other share characteristics. We find magnitudes of misperception smaller than those found in existing studies on misperceptions about attitudes or group attachments. We find that bias decreased with knowledge of the characteristics of others and that subjects’ perceptions of vote probability were within 3 points of true vote share, on average.

Our measurement strategy might be applied to address many questions of second-order political beliefs. For example, political discussion scholars have long been interested in the accuracy of beliefs about discussion partners’ views. Eveland et al. (2019) suggest that the way in which we measure accuracy might conflate inaccuracy with uncertainty. Our approach could be extended to ask subjects to report beliefs about the policy preferences, vote choices, or partisanship of those in their social network. This continuous measure would provide more concrete estimates of uncertainty, helping to distinguish it from inaccuracy. Research on misperceptions might use our approach to allow for more nuanced measurement of beliefs and uncertainty than directly asking subjects to report the quantity of interest.

We believe that our design is flexible to meet researchers’ unique questions. In addition to the political discussion network example above, our approach could be used to answer questions about second-order political actions. Researchers interested in social influence on turning out to vote might consider measuring how individuals perceive the probability that another person will turn out to vote. Researchers could use voter file data to generate the observed turnout rates for characteristic combinations and ask survey respondents to guess the probability that an individual with given characteristics turned out to vote. Future research could also consider the ways in which electoral contexts shape the accuracy of second-order beliefs about vote choice. For example, we might expect accuracy to vary over time or political system as social sorting into political parties varies. If our study were replicated in an electoral context in which vote choice was less strongly related to partisan identity and there was more demographic heterogeneity within the parties, we might expect second-order perceptions of vote choice to be noisier and less accurate.
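
For the turnout extension, the benchmark quantities are straightforward to construct. A minimal sketch, assuming a hypothetical voter-file extract `vf` with columns `age_group`, `party`, and `voted` (0/1):

```r
# Observed turnout rates (percent) by characteristic combination;
# these would serve as the benchmark against which elicited turnout
# probabilities are scored.
with(vf, round(100 * tapply(voted, list(age_group, party), mean), 1))
```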

Despite the potential benefits of this design for several research questions, the design requires choices that weigh budget, respondent fatigue, and research ethics. We chose to provide four pieces of information – out of six total options – about each ANES other. While the informational characteristics we chose to present are theoretically important, investigating only six characteristics limits our ability to make global statements about bias or informativeness.[9] Moreover, our design assumes that participants have some intuitive understanding of probability. While our approach of making inferences about one person at a time might be less cognitively taxing than estimating population proportions, it could still depend on numeracy. Using a standard measure of numeracy,[10] we found that numeracy did not significantly alter our results. Figure A5 (in Appendix G) also shows that our respondents provided more guesses expressing uncertainty (probabilities around 50) than would be merited by the observed rates of Trump voting given characteristics in the ANES data. However, the probabilities elicited in our study otherwise map onto the observed probabilities.

As with any experimental method, there are external validity concerns that researchers should consider. For example, offering incentives for accuracy might not map onto real-world inferences, especially if individuals are motivated by directional incentives rather than accuracy. One-third of our respondents were randomly assigned to receive a flat-rate bonus at the end of the game, regardless of their performance in each round. We show in Table A1 (in Appendix G) that our results are substantively similar for respondents who did and did not receive incentives for accuracy, although those without accuracy payments updated their beliefs by smaller magnitudes.

We conclude by noting that political beliefs and perceptions are difficult to evaluate. Subjects in research studies are often asked to respond to questions phrased in ways they might or might not understand on topics they might have rarely, if ever, considered. Classifying magnitudes of accuracy or bias in responses to such queries is challenging because responses proxy, rather than reveal, underlying beliefs. Whether these proxies allow for comparison to external benchmarks requires careful consideration. We hope our study motivates new efforts to measure political beliefs.

Supplementary material

To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2021.2.

Footnotes

The experiments presented here were approved by the UCSD Human Research Protections Program. We thank David Broockman, Dan Butler, Jamie Druckman, Anthony Fowler, James Fowler, Federica Izzo, and Shiro Kuriwaki for their helpful discussion. The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: doi:10.7910/DVN/OJ3HJE (Hill and Carlson, 2021). The authors declare no conflicts of interest. Support for this research was provided by the University of California San Diego Academic Senate.

1 See Appendix Section B for a brief discussion about why perceptions of others’ vote choices is important.

2 Please see Appendix Section C for more details on our scoring method.

3 See Coppock and McClellan (2019) for a detailed analysis validating the Lucid platform. 27.4% of our respondents had some college but no degree and 23.0% had a Bachelor’s degree; 78.9% of our sample was White, 11.7% Black, and 3.4% Asian; 10.5% was Hispanic; 41.5% was male and 58.5% female. According to the 2019 American Community Survey, the US population was 20.0% some college but no degree and 20.3% Bachelor’s degree; 74.2% White, 12.6% Black, and 4.8% Asian; 16.4% Hispanic or Latino; and 49.2% male and 50.8% female.

4 We manually removed responses that included profanity, explicit racism, or were incomprehensible. We did not correct grammatical or spelling errors.

5 Approximately one-third of our respondents were randomly assigned to receive a flat-rate bonus of $1.50 instead of $0.10 per probability estimate per round. These participants were instead told that they could win “points” using the same crossover scoring method. We pool these groups together for analysis as we found only small differences in behavior.

6 We will call the response a “probability” in our prose even though it is elicited on (0,100) rather than (0,1).

7 ANES proportions and confidence intervals account for the complex survey design with the “survey” package in R (Lumley 2004; R Core Team 2020). Please see Appendix Section H for the instrument of the application. Please see the replication materials for examples of how to implement these comparisons.

8 Most important problem remains a single column in the design matrix because the open-ended text responses are unique. We also collapsed 14 states that were presented to subjects fewer than 50 times each into one “small state” column.

9 After they had given their final belief in round four, we asked subjects if there was anything else they would like to know about the other to improve accuracy. Among those who wanted more information, 41.3% wanted one of our six characteristics (16.7% party identification, 6.8% income, 6% gender, 5.5% race, 3.6% most important problem, and 2.7% state), while 15% wanted to know the other’s policy preferences, most often about immigration, border security, and abortion. Another 10% wanted the other’s age. Other subjects requested religion, past voting behavior, and whether the other lived in an urban or rural environment.

10 We asked “Suppose you had $100 in a savings account and the interest rate was 2% per year. After 5 years, how much do you think you would have in the account if you left the money to grow?” The response options were: “more than $102,” “exactly $102,” and “less than $102.”

References

Ahler, Douglas J. and Sood, Gaurav. 2018. The Parties in Our Heads: Misperceptions about Party Composition and Their Consequences. Journal of Politics 80(3): 964–81.
Ahn, T. K., Huckfeldt, Robert, and Ryan, John Barry. 2014. Experts, Activists, and Democratic Politics: Are Electorates Self-Educating? Cambridge: Cambridge University Press.
Broockman, David E. and Skovron, Christopher. 2018. Bias in Perceptions of Public Opinion among Political Elites. American Political Science Review 112(3): 542–63.
Carlson, Taylor N. and Settle, Jaime E. n.d. What Goes Without Saying: Navigating Political Discussion in America. Working Manuscript.
Coppock, Alexander and McClellan, Oliver A. 2019. Validating the Demographic, Political, Psychological, and Experimental Results Obtained from a New Source of Online Survey Respondents. Research & Politics 6(1). https://journals.sagepub.com/doi/full/10.1177/2053168018822174
Enamorado, Ted, Fifield, Benjamin, and Imai, Kosuke. 2018. User’s Guide and Codebook for the ANES 2016 Time Series Voter Validation Supplemental Data. Technical report, American National Election Studies.
Eveland, William P., Song, Hyunjin, Hutchens, Myiah J., and Levitan, Lindsey Clark. 2019. Not Being Accurate Is Not Quite the Same as Being Inaccurate: Variations in Reported (In)Accuracy of Perceptions of Political Views of Network Members Due to Uncertainty. Communication Methods and Measures 13(4): 305–11.
Feddersen, Timothy J. and Pesendorfer, Wolfgang. 1996. The Swing Voter’s Curse. American Economic Review 86(3): 408–24.
Hertel-Fernandez, Alexander, Mildenberger, Matto, and Stokes, Leah C. 2019. Legislative Staff and Representation in Congress. American Political Science Review 113(1): 1–18.
Hill, Seth J. 2017. Learning Together Slowly: Bayesian Learning About Political Facts. Journal of Politics 79(4): 1403–18.
Hill, Seth J. and Carlson, Taylor N. 2021. Replication Data for: Experimental Measurement of Misperception in Political Beliefs. Harvard Dataverse. doi: 10.7910/DVN/OJ3HJE
Hill, Seth J. and Huber, Gregory A. 2019. On the Meaning of Survey Reports of Roll Call ‘Votes’. American Journal of Political Science 63(3): 611–25.
Huckfeldt, Robert and Sprague, John. 1995. Citizens, Politics, and Social Communication: Information and Influence in an Election Campaign. Cambridge: Cambridge University Press.
Levendusky, Matthew S. and Malhotra, Neil. 2015. (Mis)Perceptions of Partisan Polarization in the American Public. Public Opinion Quarterly 80(S1): 378–91.
Lumley, Thomas. 2004. Analysis of Complex Survey Samples. Journal of Statistical Software 9(1): 1–19.
Mildenberger, Matto and Tingley, Dustin. 2017. Beliefs about Climate Beliefs: The Importance of Second-Order Opinions for Climate Politics. British Journal of Political Science 49(4): 1279–307.
Prior, Markus and Lupia, Arthur. 2008. Money, Time, and Political Knowledge: Distinguishing Quick Recall and Political Learning Skills. American Journal of Political Science 52(1): 169–83.
R Core Team. 2020. R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.
