
Correcting Bias in Perceptions of Public Opinion Among American Elected Officials: Results from Two Field Experiments

Published online by Cambridge University Press: 03 March 2020

Joshua L. Kalla (Department of Political Science and Department of Statistics and Data Science, Yale University, USA)
Ethan Porter (School of Media and Public Affairs, George Washington University, USA)

Corresponding author. E-mail: josh.kalla@yale.edu

Abstract

While concerns about the public's receptivity to factual information are widespread, much less attention has been paid to the factual receptivity, or lack thereof, of elected officials. Recent survey research has made clear that US legislators and legislative staff systematically misperceive their constituents' opinions on salient public policies. This study reports the results of two field experiments designed to correct sitting US legislators' misperceptions. The legislators (n = 2,346) were invited to access a dashboard of constituent opinion generated using the 2016 Cooperative Congressional Election Study. Despite extensive outreach efforts, only 11 per cent accessed the information in Study 1 and only 2.3 per cent did so in Study 2. More troubling for democratic norms, legislators who accessed the constituent opinion data were no more accurate in perceiving their constituents' opinions. These findings underscore the challenges confronting efforts to improve the accuracy of elected officials' perceptions and suggest that elected officials may indeed resist factual information.

Type: Letter
Copyright: © Cambridge University Press 2020

Concerns about the public's receptivity to factually accurate political information are widespread (Berinsky 2017; Grinberg et al. 2019; Guess, Nagler and Tucker 2019; Iyengar and Massey 2019; Lazer et al. 2018; Pennycook and Rand 2019; Porter and Wood 2019). However, with some notable exceptions (for example, Butler and Nickerson 2011; Nyhan and Reifler 2014), little is known about whether elected officials are receptive to factual information and update their views upon learning new information. This is unfortunate. Many theories of representative democracy depend on politicians having accurate information about what their constituents believe (for example, Mansbridge 2003). Regardless of whether legislators should behave as ‘delegates’ who mirror the public's will or ‘trustees’ who use their own judgment, constituent preferences are often assumed to play some role in informing legislative outcomes (Burke 1986; Pitkin 1967). Yet recent research in the United States has made clear that legislators (Broockman and Skovron 2018) and their staffers (Hertel-Fernandez, Mildenberger and Stokes 2019) systematically misperceive constituent opinion, believing that public opinion is significantly more conservative than it actually is. This may be responsible for the rightward shift in American policy in recent decades (Hacker and Pierson 2010).

In this letter, we report the results of two field experiments designed to improve the accuracy of elected officials' perceptions of their constituents' preferences. Relying on data from the 64,600-respondent 2016 Cooperative Congressional Election Study (CCES) (Ansolabehere and Schaffner 2017), in both experiments we gave sitting US state legislators (n = 2,346) the opportunity to view granular information about the policy attitudes of their constituents. Under the auspices of an ad hoc organization we created called District Pulse, legislators were assigned to receive either an invitation to a password-protected website containing information about the policy preferences of their constituents or, as a placebo, information about those in their broader Census regions, of which there are four nationally. Afterwards, a researcher with no visible ties to District Pulse surveyed the legislators who had accessed the information. We anticipated that providing legislators with district-specific information about their constituents' preferences would increase the accuracy of their perceptions of those preferences and, potentially, cause their legislative decision making to better represent those preferences.

Despite the normative and electoral incentives for legislators to learn their constituents' opinions, the vast majority of legislators in our study failed to access the information we provided about their constituents' preferences. Moreover, the post-treatment surveys make clear that even the legislators who did access the information were unaffected by it. Not only were most legislators in these experiments uninterested in what their constituents believe; even those who accessed such information were made no more accurate as a result.

Legislators’ resistance to factually accurate political information may contrast with the behavior of those they represent. Researchers have shown that, in some settings, presenting the public with factual information can improve the accuracy of their beliefs about political matters (Guess and Coppock 2018; Hill 2017; Kraus, Rucker and Richeson 2017; Wood and Porter 2019).[1] However, across the two field experiments described here, we find that legislators from both parties do not come to perceive their constituents' preferences more accurately. They resist becoming more factually accurate about their constituents even though they likely face electoral incentives to do so (Mayhew 1974). Indeed, we fielded one of the experiments in the run-up to the 2018 US midterm election, a time when legislators are potentially most incentivized to accurately gauge their constituents' beliefs. Yet the proximity of the election made legislators in these experiments no more responsive to, or interested in, accurate information about their constituents' preferences.

Experimental design

Our evidence comes from pre-registered randomized field experiments conducted in 2017 and 2018.[2] Both experiments adhered to the same design and proceeded in five steps. First, using data from the 2016 CCES and multi-level regression and post-stratification (Gelman and Little 1997; Lax and Phillips 2009; Park et al. 2004; Warshaw and Rodden 2012), we estimated district-level public opinion in 2,346 state House and Senate districts on eight issues: immigration, mandatory minimum sentencing, renewable portfolio standards to increase the production of renewable energy, background checks for gun purchases, the minimum wage, highway funding, abortion and repealing the Affordable Care Act (the full wording of the policy areas is presented in the Appendix). Then, we randomly assigned each state legislator to receive access to public opinion estimates on a random set of four of the eight issues. Next, we randomly assigned legislators to receive, for those four issues, either polling estimates specific to their district or polling estimates covering the four broad US Census regions. The former, which provided legislators with information specific to their own constituents, acted as our treatment of interest; the latter was our placebo. Because Census regions are large, we anticipated that providing regional polling data about all four regions would be uninformative for a state legislator trying to understand her specific constituents' opinions. (On average, the district-specific and Census region polling estimates differed by 8.1 percentage points across issue items.) The control condition consisted of the random set of four issues on which legislators received no polling information.
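To make the estimation step concrete, the sketch below illustrates the general MRP logic in Python. It is a simplified illustration only, not the procedure used in this study: it fits a linear probability model rather than the hierarchical logistic regression of the works cited above, and every file and column name (cces_2016.csv, district_cells.csv, support, age_bin and so on) is a hypothetical stand-in.

```python
# A minimal sketch of multi-level regression and post-stratification (MRP).
# All file and column names are hypothetical; the study's actual procedure
# used hierarchical logistic regression on the 2016 CCES (see the Appendix).
import pandas as pd
import statsmodels.formula.api as smf

cces = pd.read_csv("cces_2016.csv")        # respondent-level survey data
frame = pd.read_csv("district_cells.csv")  # district x demographic cell counts

# Step 1: multilevel model of item support with a random intercept per
# district (a linear probability model here, purely for simplicity).
model = smf.mixedlm("support ~ C(age_bin) + C(educ) + C(race)",
                    data=cces, groups=cces["district"]).fit()

# Step 2: predict each cell's support. MixedLM's predict() returns the
# fixed-effects part only, so the district random intercepts are added back.
fixed = model.predict(frame)
re = pd.Series({g: v.iloc[0] for g, v in model.random_effects.items()})
frame["p_hat"] = fixed + frame["district"].map(re).fillna(0.0)

# Step 3: post-stratify -- average cell predictions within each district,
# weighted by the Census count 'n' of each demographic cell.
district_opinion = frame.groupby("district").apply(
    lambda d: (d["n"] * d["p_hat"]).sum() / d["n"].sum()
)
print(district_opinion.head())
```

The key idea is that the survey informs a model of opinion given demographics and district, while Census counts of each demographic cell within each district supply the weights for aggregation.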

To increase the credibility of the information we provided to the legislators, we partnered with Chi/Donahoe, a digital creative consulting firm with expertise in political data visualizations, to create District Pulse. After receiving an invitation, state legislators could log onto District Pulse to access their public opinion polling data. To further ensure that state legislators took this dashboard seriously, we pre-tested it with several current and former legislators and staffers. The results of this pre-testing are reported in the Appendix. In addition, our invitation noted that the polling data came from a large National Science Foundation-funded study and that the information was not being provided by either a partisan or special interest group. We received no replies from legislators or their staff questioning the credibility or legitimacy of District Pulse.

Figure 1 shows what District Pulse looked like for a state legislator randomly assigned to receive district-specific polling. Had this legislator been assigned to the placebo condition, the web page would have looked identical except that the sentence reading ‘In your legislative district…’ would have been removed, as would the district-specific data.[3]

Figure 1. Sample district-specific polling treatment

Note: In the placebo condition, the web page would have looked identical except that the sentence reading ‘In your…’ would have been removed, as would the district-specific data.

We provided access to District Pulse by emailing custom URLs and passwords to legislators. Under the auspices of District Pulse, we sent three rounds of email invitations and made one round of phone calls reminding legislators to access the information. In all of our communications with legislators, we noted the large sample size of the polling, the National Science Foundation support, and the non-partisan, academic source. The text of the invitation emails and phone calls is included in the Appendix. Because legislators had to log into District Pulse, we were able to track which legislators accessed the polling data, when they did so and how many times.

Two weeks after the final invitation to access District Pulse, an unaffiliated academic invited the legislators to complete a survey. For each of the eight policy proposals, we collected three distinct outcome measures: the legislator's perception of constituent support, which was our primary outcome of interest, as well as the legislator's personal policy position and expected voting behavior if the policy were to come before them (the wording of the outcome measures is included in the Appendix). By comparing responses on these three outcome measures across the treatment, placebo and control conditions, we can test our hypothesis that providing legislators with district-specific polling information about their constituents' preferences should increase the accuracy of their perceptions and, potentially, change their expected legislative behavior.

Following the procedure for analyzing field experiments with survey outcomes (Broockman, Kalla and Sekhon 2017), we analyzed the data by limiting the post-treatment survey responses to the compliers – legislators who logged in to access District Pulse (n = 300 across both studies). A total of 14 per cent of compliers responded to the post-treatment survey and answered our primary outcome measure (n = 43 across both studies). This response rate is comparable to the rates found in recent surveys of political elites (Broockman and Skovron 2018; Broockman et al. 2019; Teele, Kalla and Rosenbluth 2018). Fully 85 per cent of respondents to our survey identified themselves as legislators. As we note in the Appendix, these respondents are broadly representative in terms of baseline constituent opinion on the eight issues, Trump vote share and median household income. Furthermore, there is no evidence of differential attrition in who responded to these surveys.

Our primary analysis examined whether providing district-specific information reduces the degree to which elected officials misperceive their constituents' preferences (Broockman and Skovron 2018; Hertel-Fernandez, Mildenberger and Stokes 2019). Following our pre-analysis plan, we coded misperception as the absolute value of the legislator's response to a survey question asking what percentage of constituents they believe support each policy, minus the district support estimated using the CCES data; the latter was the information directly conveyed to the treatment group. We tested our other two pre-registered outcomes with survey questions on legislators' policy preferences for each of the eight policy areas and questions regarding intended voting behavior. That is, we asked legislators their level of agreement with each policy and how they would vote if each policy came before them (full question wordings are in the Appendix). We then generated a ‘long’ dataset in which each row is a legislator-issue (meaning each legislator who was a complier and responded to the post-treatment survey appears eight times) and the columns are the three outcome measures. We analyzed the data by regressing each dependent variable on the treatment indicator using ordinary least squares and calculating cluster-robust standard errors at the state legislator level. In accordance with our pre-analysis plan, our primary model includes pre-treatment covariates for the legislator's party, whether the legislator serves in an upper or lower chamber, Trump's 2016 vote share in the district and state fixed effects.[4]
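As a concrete illustration of this specification, the following minimal sketch implements the same estimator in Python with statsmodels. The file and column names (legislator_issue_long.csv, perceived_support, cces_support, treated, legislator_id and so on) are hypothetical stand-ins, not the replication archive's actual variables.

```python
# A minimal sketch of the pre-registered model: misperception is the absolute
# difference between perceived and CCES-estimated support, regressed on the
# treatment indicator with the pre-specified covariates, state fixed effects
# and legislator-clustered standard errors. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

# 'long' data: one row per legislator-issue (eight rows per respondent)
df = pd.read_csv("legislator_issue_long.csv").dropna()
df["misperception"] = (df["perceived_support"] - df["cces_support"]).abs()

fit = smf.ols(
    "misperception ~ treated + C(party) + upper_chamber"
    " + trump_vote_share + C(state)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["legislator_id"]})

print(fit.params["treated"], fit.bse["treated"])
```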

We took several steps to maximize statistical power. First, by randomly assigning each legislator to receive polling information on four of eight issues but collecting outcome measures for all eight, we were able to conduct a within-subject analysis and increase the effective sample size (see Green, Wilke and Cooper (2017) for a similar design). Secondly, by randomly assigning subjects to receive either district-specific or placebo regional polling aggregates and tracking whether a legislator accessed their polling information, we were able to use the placebo group to estimate a treatment-on-treated effect that is robust to our low compliance rate (Broockman, Kalla and Sekhon 2017; Nickerson 2005). With these design features and the observed compliance and post-treatment survey response rates, our experiment had 80 per cent power to detect a 7-percentage-point reduction in misperceptions from an average of 18 percentage points in the control group. The Appendix includes additional details on the multi-level regression and post-stratification procedure, the placebo, balance checks, implementation and robustness checks under alternative model specifications.
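The logic of the placebo design can also be sketched briefly: because logging in to District Pulse is recorded in both the treatment and placebo arms, compliers are identified on both sides, and the treatment-on-treated effect reduces to a simple complier-versus-complier contrast. The column names below are again hypothetical.

```python
# A minimal sketch of the placebo-controlled estimate of the effect of
# treatment on the treated (Nickerson 2005): compliers are identified in
# both arms by the login records, so the estimate is a direct difference in
# means among compliers. Assumes the 'misperception' column constructed in
# the previous sketch; all column names are hypothetical.
import pandas as pd

df = pd.read_csv("legislator_issue_long.csv")
compliers = df[df["logged_in"] == 1]  # legislators who accessed the dashboard

tot = (
    compliers.loc[compliers["arm"] == "district", "misperception"].mean()
    - compliers.loc[compliers["arm"] == "placebo", "misperception"].mean()
)
print(f"Estimated treatment-on-treated effect: {tot:.1f} percentage points")
```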

The first experiment was conducted in the fall of 2017; the second in the fall of 2018, in advance of the 2018 US midterm election. Prior studies have found that elites respond to informational treatments that allude to their re-election chances (Nyhan and Reifler 2014). In 2018, we randomly assigned 917 of the legislators who had not accessed District Pulse in 2017 to receive the same email invitation used in the original study, and another 917 such legislators to receive an electoral-salience invitation. The latter invitation noted in the subject line that legislators could ‘Access detailed polling before the elections’ and referred to the elections in the main body (see the Appendix for the full emails).

Results

Despite administering a statistically well-powered experiment in which we offered state legislators a credible source of information about their constituents' preferences, we were unable to improve the accuracy of legislators' perceptions. Legislators in the control group, who received no polling information, misperceived their constituents' policy preferences by 18 percentage points on average, similar to previous estimates (Broockman and Skovron 2018; Hertel-Fernandez, Mildenberger and Stokes 2019). Legislators provided with the placebo regional polling information and the treatment district-specific information misperceived constituent policy preferences by 16 and 18 percentage points, respectively. In sum, legislators remained roughly equally uninformed about their constituents' preferences regardless of whether they were provided with information about their specific constituents, received information about the four US Census regions or received no information at all.

We present the main results in Figure 2, which regresses misperception on a treatment indicator and covariates for the legislator's party, whether the legislator serves in an upper or lower chamber, Trump's 2016 vote share in the district and state fixed effects. We calculate cluster-robust standard errors at the state legislator level, since each observation in this regression is a legislator-issue (all pre-specified in our pre-analysis plan). In the left-hand panel, we limit our analysis to observations randomly assigned to receive district-specific polling on that issue or no polling on that issue (control). In the right-hand panel, we limit our analysis to observations receiving either the district-specific polling or the Census region polling (placebo). Each panel then includes four treatment effect estimates: (1) using all policy issues and limited to our 2017 study (the sample comprises 20 legislators and 80 legislator-responses in treatment, 39 and 156 in control, and 19 and 76 in placebo);[5] (2) using only those policy domains (mandatory minimum sentences, renewable portfolio mandates, gun purchase background checks, infrastructure spending and banning abortions) over which state legislators have direct legislative authority, limited to our 2017 study (20 legislators and 50 legislator-responses in treatment, 39 and 102 in control, and 19 and 43 in placebo); (3) using all issues and including both the 2017 and 2018 studies (22 legislators and 88 legislator-responses in treatment, 43 and 171 in control, and 21 and 84 in placebo); and (4) using just the state-level issues and both the 2017 and 2018 studies (22 legislators and 55 legislator-responses in treatment, 43 and 110 in control, and 21 and 50 in placebo).[6]

Figure 2. Treatment effect on misperceptions

Note: Higher values indicate greater misperceptions. All models include the pre-treatment covariates and state fixed effects specified in our pre-analysis plan. Lines denote 95 per cent confidence intervals, clustered at the state legislator level. A total of 312 legislator-responses (39 legislators) come from the 2017 data and 31 legislator-responses (4 legislators) from the 2018 data.

Figure 2 demonstrates that district-specific polling has no effect on reducing legislators' misperceptions when compared to either the control or the placebo. In fact, compared to the placebo, the point estimate has the opposite sign to that expected. The results are robust when examining only those policy domains over which state legislators have direct legislative authority and when including the 2018 replication. As an additional robustness test, we examine the accuracy of the legislators who recalled receiving the polling information from District Pulse, as measured at the end of the post-treatment survey. Despite any bias potentially introduced by conditioning on this post-treatment variable (Montgomery, Nyhan and Torres 2018), we continue to find no treatment effects in this group. Legislators who received the treatment information misperceived public opinion by an average of 17 percentage points, compared to 16 and 17 percentage points for the placebo and control groups, respectively. Even within the subset of the treatment group that recalled receiving polling information from District Pulse, the degree of inaccuracy remains substantively large and no smaller than in the placebo and control groups. In the Appendix, we present additional robustness results and interaction effects with electoral safety, which consistently fail to find evidence of either statistically or substantively significant effects of providing polling information to legislators. However, given our limited sample size, these interaction tests may be underpowered. Turning to the other outcomes, we also fail to find an effect of providing polling information on legislators' policy preferences or their voting intentions. We applied the same robustness checks described above for misperceptions to these two outcomes, and the results remain unchanged. Complete results are presented in the Appendix.

Discussion

Our results are sobering. While previous research (Broockman and Skovron 2018; Hertel-Fernandez, Mildenberger and Stokes 2019) has established that elected officials systematically misperceive what their constituents want, the evidence presented here portrays such officials as immune to our non-partisan efforts to correct those misperceptions, possibly opening the door to influence from others, such as likely voters, co-partisans, political activists, lobbyists or donors (Barber 2016; Butler and Dynes 2016; Kalla and Broockman 2016; Leighley and Oser 2018; Miler 2010). In our experiments, legislators did not update their attitudes to match those of their constituents on salient political issues, even when the requisite information was quite literally at their fingertips.

We can think of three potential explanations for our results. First, legislators may not have received or trusted the information provided to them. Had the polling come to legislators via alternative channels or sources (for example, Butler and Nickerson 2011), it may have had a greater impact, suggesting the need for further replication. Our results, while stark, are limited to these particular legislators and this particular intervention; other interventions tested on other samples may produce different findings. Secondly, legislators may discount aggregate measures of their constituents' attitudes and instead focus on the preferences of political elites, lobbyists, donors, co-partisans and other policy demanders whom they view as more central to their electoral prospects (Barber 2016; Bawn et al. 2012; Gilens 2012; Miler 2010). Knowing what the average constituent believes on policy matters may be less important to elected officials than knowing what these others believe. Thirdly, given the increasing levels of polarization in state legislatures (Shor and McCarty 2011) and the ‘nationalization’ of US politics (Hopkins 2018), legislators may think of themselves not as delegates for their specific constituents but as participants in national partisan debates. Knowing what their constituents believe on policy matters may be less important than understanding the positions of their national party – and sticking to them.

Our experiment expands upon an innovative prior experiment by Butler and Nickerson (2011). During a 2008 New Mexico special legislative session, the authors sent 35 state legislators polling from their districts on support for a one-time tax rebate. They find that, in this particular circumstance, legislators who received their district-specific polling were much more likely to vote with their constituents than the control group. Since that research was conducted, American political elites and state legislators have become increasingly polarized (Shor and McCarty 2011). Our results may differ from those of Butler and Nickerson (2011) because the current study included a more diverse set of contentious political issues across a much broader swath of legislators at a time of heightened polarization. Other differences between the settings of the two studies, such as the fact that Butler and Nickerson (2011) provided polling information directly tied to an upcoming legislative vote and in conjunction with a local media source, may also explain why the legislators in their study were more responsive to constituent opinion than the legislators studied here. Our results underscore the importance of replication in the social sciences, especially as the external political world changes.

The two experiments reported here should not be the last word on the topic of constituency influence in American politics. Though we have shown that it is difficult to improve the accuracy of legislators' perceptions of constituent preferences, that does not mean it is impossible. A different treatment might have yielded different results. And even though we went to great lengths to maximize our statistical power, a design with still greater power might have been able to detect smaller effects. In addition, we could not control who accessed District Pulse and who took the post-treatment survey; there may have been systematic differences between those who took the survey and those who did not, preventing us from observing effects. Finally, it is possible that the set of legislators willing to respond to a survey is somehow different from, and less treatment responsive than, the majority of legislators who chose not to respond. These limitations are especially important given how our results diverge from some prior work (Butler and Nickerson 2011). With all this in mind, future research should investigate ways to spur legislators to obtain more accurate impressions of what their constituents believe, and to update their own impressions and attitudes accordingly.

Supplementary material

Data replication sets are available in Harvard Dataverse at: https://doi.org/10.7910/DVN/0A1AM3 and online appendices at: https://doi.org/10.1017/S0007123419000711.

Acknowledgements

We thank Daniel Butler, Peter Aronow, Avi Feller, Donald Green, Kim Gross, Steven Klein, Gabe Lenz, Winston Lin, Eric Schickler, Jasjeet Sekhon, John Sides and Lynn Vavreck for helpful feedback. Mark McKibbin and participants in UC Berkeley's Undergraduate Research Apprentice Program provided invaluable research assistance in contacting state legislators. We also thank Frank Chi and Will Donahoe for website design. All remaining errors are our own. This research was approved by the George Washington University Committee for Protection of Human Subjects (IRB#071742). Full replication materials and pre-analysis plans are available in the Appendix. The data, replication instructions and the data's codebook can be found at https://doi.org/10.7910/DVN/0A1AM3.

Footnotes

[1] Future work should directly compare the accuracy of perceptions, and responsiveness to corrections, of legislators and citizens (e.g., Lee et al. 2019). We do not make such direct comparisons in the present article.

[2] Institutional Review Board approval was obtained before conducting the experiments. Full replication materials, pre-analysis plans, data and code are in the Appendix and available at Kalla and Porter (2020).

[3] The treatment did not provide any polling information disaggregated by likely voters, partisans or other constituent characteristics. Future research should consider how legislators respond to these types of polling results.

[4] Note that, contrary to our pre-analysis plan, no baseline survey was conducted, so no baseline responses are included as covariates. The results are the same with and without covariates (see Appendix Tables 11–13).

[5] Recall that each legislator received polling information (whether Census region or district-specific) on four issues and no polling information on the other four. Hence, each legislator appears twice in these counts. A total of 39 and 4 legislators completed the 2017 and 2018 experiments, respectively.

[6] Two diagrams in the Appendix illustrate the design and sample size at each stage.

References

Ansolabehere, S and Schaffner, B (2017) CCES Common Content, 2016. Available from http://dx.doi.org/10.7910/DVN/GDF6Z0.
Barber, MJ (2016) Representing the preferences of donors, partisans, and voters in the US Senate. Public Opinion Quarterly 80(S1), 225–249.
Bawn, K et al. (2012) A theory of political parties: groups, policy demands and nominations in American politics. Perspectives on Politics 10(3), 571–597.
Berinsky, A (2017) Rumors and health care reform: experiments in political misinformation. British Journal of Political Science 47(2), 241–262.
Broockman, D, Kalla, J and Sekhon, JS (2017) The design of field experiments with survey outcomes: a framework for selecting more efficient, robust, and ethical designs. Political Analysis 25(4), 435–464.
Broockman, D et al. (2019) Why local party leaders don't support nominating centrists. British Journal of Political Science. Doi: 10.1017/S0007123419000309.
Broockman, DE and Skovron, C (2018) Bias in perceptions of public opinion among political elites. American Political Science Review 112(3), 542–563.
Burke, E (1986) Speech to the Electors of Bristol. Available from http://press-pubs.uchicago.edu/founders/documents/v1ch13s7.html.
Butler, D and Nickerson, D (2011) Can learning constituency opinion affect how legislators vote? Results from a field experiment. Quarterly Journal of Political Science 6(1), 55–83.
Butler, DM and Dynes, AM (2016) How politicians discount the opinions of constituents with whom they disagree. American Journal of Political Science 60(4), 975–989.
Gelman, A and Little, T (1997) Poststratification into many categories using hierarchical logistic regression. Survey Methodology 23(2), 127–135.
Gilens, M (2012) Affluence and Influence: Economic Inequality and Political Power in America. Princeton, NJ: Princeton University Press.
Green, D, Wilke, A and Cooper, J (2017) Reducing Intimate Partner Violence through Informal Social Control: A Mass Media Experiment in Rural Uganda. Technical report. New York: Columbia University.
Grinberg, N et al. (2019) Fake news on Twitter during the 2016 US presidential election. Science 363(6425), 374–378.
Guess, A and Coppock, A (2018) Does counter-attitudinal information cause backlash? Results from three large survey experiments. British Journal of Political Science. Doi: 10.1017/S0007123418000327.
Guess, A, Nagler, J and Tucker, J (2019) Less than you think: prevalence and predictors of fake news dissemination on Facebook. Science Advances 5(1), 1–8.
Hacker, JS and Pierson, P (2010) Winner-take-all Politics: How Washington Made the Rich Richer – and Turned Its Back on the Middle Class. New York: Simon and Schuster.
Hertel-Fernandez, A, Mildenberger, M and Stokes, LC (2019) Legislative staff and representation in Congress. American Political Science Review 113(1), 1–18.
Hill, SJ (2017) Learning together slowly: Bayesian learning about political facts. The Journal of Politics 79(4), 1403–1418.
Hopkins, D (2018) The Increasingly United States: How and Why American Political Behavior Nationalized. Chicago, IL: University of Chicago Press.
Iyengar, S and Massey, DS (2019) Scientific communication in a post-truth society. Proceedings of the National Academy of Sciences 116(16), 7656–7661.
Kalla, JL and Broockman, DE (2016) Campaign contributions facilitate access to congressional officials: a randomized field experiment. American Journal of Political Science 60(3), 545–558.
Kalla, J and Porter, E (2020) Replication Data for: Correcting Bias in Perceptions of Public Opinion Among American Elected Officials: Results from Two Field Experiments. https://doi.org/10.7910/DVN/0A1AM3, Harvard Dataverse, V1, UNF:6:W8vf8R+Y0WH9O1IAqv5x1g== [fileUNF].
Kraus, MW, Rucker, JM and Richeson, JA (2017) Americans misperceive racial economic equality. Proceedings of the National Academy of Sciences 114(39), 10324–10331.
Lax, J and Phillips, J (2009) How should we estimate public opinion in the states? American Journal of Political Science 53(1), 107–121.
Lazer, DMJ et al. (2018) The science of fake news. Science 359(6380), 1094–1096.
Lee, N et al. (2019) More Accurate Yet More Polarized? Comparing the Factual Beliefs of Government Officials and the Public. Working paper. Palo Alto, CA: Stanford University.
Leighley, JE and Oser, J (2018) Representation in an era of political and economic inequality: how and when citizen engagement matters. Perspectives on Politics 16(2), 328–344.
Mansbridge, J (2003) Rethinking representation. American Political Science Review 97(4), 515–528.
Mayhew, DR (1974) Congress: The Electoral Connection. New Haven, CT: Yale University Press.
Miler, K (2010) Constituency Representation in Congress: The View From Capitol Hill. Cambridge: Cambridge University Press.
Montgomery, JM, Nyhan, B and Torres, M (2018) How conditioning on posttreatment variables can ruin your experiment and what to do about it. American Journal of Political Science 62(3), 760–775.
Nickerson, D (2005) Scalable protocols offer efficient design for field experiments. Political Analysis 13(3), 233–252.
Nyhan, B and Reifler, J (2014) The effect of fact-checking on elites: a field experiment on US state legislators. American Journal of Political Science 59(3), 628–640.
Park, D et al. (2004) Bayesian multilevel estimation with poststratification: state-level estimates from national polls. Political Analysis 12(4), 375–385.
Pennycook, G and Rand, DG (2019) Fighting misinformation on social media using crowdsourced judgments of news source quality. Proceedings of the National Academy of Sciences 116(7), 2521–2526.
Pitkin, HF (1967) The Concept of Representation. Berkeley: University of California Press.
Porter, E and Wood, TJ (2019) False Alarm: The Truth About Political Mistruths in the Trump Era. Cambridge: Cambridge University Press.
Shor, B and McCarty, N (2011) The ideological mapping of American legislatures. American Political Science Review 105(3), 530–551.
Teele, DL, Kalla, J and Rosenbluth, F (2018) The ties that double bind: social roles and women's underrepresentation in politics. American Political Science Review 112(3), 525–541.
Warshaw, C and Rodden, J (2012) How should we measure district-level public opinion on individual issues? The Journal of Politics 74(1), 203–219.