Concerns about the public's receptivity to factually accurate political information are widespread (Berinsky Reference Berinsky2017; Grinberg et al. Reference Grinberg2019; Guess, Nagler and Tucker Reference Guess, Nagler and Tucker2019; Iyengar and Massey Reference Iyengar and Massey2019; Lazer et al. Reference Lazer2018; Pennycook and Rand Reference Pennycook and Rand2019; Porter and Wood Reference Porter and Wood2019). However, little is known about whether elected officials are receptive to factual information and update their views upon learning new information, with some notable exceptions (for example, Butler and Nickerson Reference Butler and Nickerson2011; Nyhan and Reifler Reference Nyhan and Reifler2014). This is unfortunate. Many theories of representative democracy depend on politicians having accurate information about what their constituents believe (for example, Mansbridge Reference Mansbridge2003). Regardless of whether legislators should behave as ‘delegates’ who mirror the public's will or ‘trustees’ who use their own judgment, constituent preferences are often assumed to play some role in informing legislative outcomes (Burke Reference Burke1986; Pitkin Reference Pitkin1967). Yet recent research in the United States has made clear that legislators (Broockman and Skovron Reference Broockman and Skovron2018) and their staffers (Hertel-Fernandez, Mildenberger and Stokes Reference Hertel-Fernandez, Mildenberger and Stokes2019) systematically misperceive constituent opinion, believing that public opinion is significantly more conservative than it actually is. This may be responsible for the rightward shift in American policy in recent decades (Hacker and Pierson Reference Hacker and Pierson2010).
In this letter, we report the results of two field experiments designed to improve the accuracy of elected officials' perceptions of their constituents' preferences. Relying on data from the 64,600-respondent 2016 Cooperative Congressional Election Study (CCES) (Ansolabehere and Schaffner Reference Ansolabehere and Schaffner2017), in both experiments we provided sitting US state legislators (n = 2,346) the opportunity to view granular information about the policy attitudes of their constituents. Under the auspices of an ad hoc organization we created called District Pulse, legislators were assigned to receive either an invitation to a password-protected website that contained information about the policy preferences of their constituents or, as a placebo, information about those in their broader Census regions, of which there are four nationally. Afterwards, a researcher with no visible ties to District Pulse surveyed the legislators who accessed the information. We anticipated that providing legislators with district-specific information about their constituents' preferences would increase the accuracy of their perceptions of those preferences and, potentially, cause their legislative decision making to better represent those preferences.
Despite the normative and electoral incentives for legislators to learn their constituents' opinions, the vast majority of legislators in our study failed to access the information we provided them about their constituents' preferences. Moreover, the post-treatment surveys make clear that even the legislators who accessed the information about their constituents were unaffected by it. Not only were most legislators in these experiments uninterested in what their constituents believe; even those who accessed such information were made no more accurate as a result.
Legislators’ resistance to factually accurate political information may contrast with the behavior of those they represent. Researchers have shown that in some settings presenting the public with factual information can improve their accuracy about political matters (Guess and Coppock Reference Guess and Coppock2018; Hill Reference Hill2017; Kraus, Rucker and Richeson Reference Kraus, Rucker and Richeson2017; Wood and Porter Reference Wood and Porter2019).Footnote 1 However, across the two field experiments described here, we find that legislators from both parties are unwilling to more accurately perceive their constituents' preferences. They resist becoming more factually accurate about their constituents even though they likely face electoral incentives to do so (Mayhew Reference Mayhew1974). Indeed, we fielded one of the experiments in the run-up to the 2018 midterm US election, a time when legislators are potentially most incentivized to accurately gauge their constituents' beliefs. Yet the proximity of the election made legislators in these experiments no more responsive to or interested in accurate information about their constituents' preferences.
Experimental design
Our evidence comes from pre-registered randomized field experiments we conducted in 2017 and 2018.Footnote 2 Both experiments adhered to the same design. They proceeded in five steps. First, using data from the 2016 CCES and multi-level regression and post-stratification (Gelman and Little Reference Gelman and Little1997; Lax and Phillips Reference Lax and Phillips2009; Park et al. Reference Park2004; Warshaw and Rodden Reference Warshaw and Rodden2012), we estimated district-level public opinion in 2,346 state House and Senate districts on eight issues. Specifically, we estimated district-level public opinion about immigration, mandatory minimum sentencing, renewable portfolio standards to increase the production of renewable energy, background checks for gun purchases, the minimum wage, highway funding, abortion and repealing the Affordable Care Act (full wording of the policy areas is presented in the Appendix). Then, we randomly assigned each state legislator to receive access to public opinion estimates on a random set of four of the eight issues. Next, we randomly assigned legislators to receive either polling estimates specific to their district or polling estimates covering the four broad US Census regions for the four randomly selected issues. The former, which provided legislators with information specific to their own constituents, acted as our treatment of interest; the latter was our placebo. Because Census regions are large, we anticipated that providing regional polling data about all four regions would be uninformative for a state legislator trying to understand her specific constituents' opinions. (On average, the district-specific and Census region polling estimates differed by 8.1 percentage points across issue items.) The control group consisted of the four issues for which each legislator received no polling information.
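To make the two-stage randomization concrete, the following is a minimal sketch of how such an assignment could be implemented: each legislator is shown polling on four of the eight issues, drawn from either district-specific estimates (treatment) or Census-region estimates (placebo), and the withheld four issues serve as within-legislator controls. The issue labels, legislator identifiers and random seed are our own illustrative assumptions, not the study's replication code.

```python
# Illustrative sketch of the two-stage assignment; not the authors' code.
import random

ISSUES = [
    "immigration", "mandatory_minimums", "renewable_portfolio_standards",
    "gun_background_checks", "minimum_wage", "highway_funding",
    "abortion", "aca_repeal",
]

def assign(rng: random.Random) -> dict:
    shown = rng.sample(ISSUES, 4)                    # issues with polling shown
    control = [i for i in ISSUES if i not in shown]  # issues with no polling (control)
    arm = rng.choice(["district", "census_region"])  # treatment vs. placebo polling
    return {"shown": shown, "control": control, "arm": arm}

rng = random.Random(20170901)  # hypothetical seed
assignments = {legislator_id: assign(rng) for legislator_id in range(2346)}
print(assignments[0])
```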
To increase the credibility of the information we provided to the legislators, we partnered with Chi/Donahoe, a digital creative consulting firm with expertise in political data visualizations, to create District Pulse. After receiving an invitation, state legislators could log onto District Pulse to access their public opinion polling data. To further ensure that state legislators took this dashboard seriously, we pre-tested it with several current and former legislators and staffers. The results of this pre-testing are reported in the Appendix. In addition, our invitation noted that the polling data came from a large National Science Foundation-funded study and that the information was not being provided by either a partisan or special interest group. We received no replies from legislators or their staff questioning the credibility or legitimacy of District Pulse.
Figure 1 shows what District Pulse looked like for a state legislator randomly assigned to receive district-specific polling. Had this legislator been randomly assigned to the placebo condition, the web page would have looked identical except that the sentence reading ‘In your legislative district…’ would have been removed, as would have the district-specific data.Footnote 3
Figure 1. Sample district-specific polling treatment
Note: in the placebo, the web page would have looked identical except that the sentence reading ‘In your…’ would have been removed, as would have the district-specific data.
We provided access to District Pulse by emailing custom URLs and passwords to legislators. Under the auspices of District Pulse, we sent three rounds of email invitations. We also made one set of phone calls to legislators to remind them to access the information. In all of our communications with legislators, we noted the large sample size of the polling, the National Science Foundation support, and the non-partisan, academic source. The text of the invitation emails and phone call scripts is included in the Appendix. We were able to track which legislators accessed the polling data, when they did so, and how many times, by requiring them to log into District Pulse.
Two weeks after the final invitation to access District Pulse, an unaffiliated academic invited the legislators to complete a survey. For each of the eight policy proposals, we collected three distinct outcome measures: the legislator's perception of constituent support, which was our primary outcome of interest, as well as the legislator's personal policy position and expected voting behavior if the policy were to come before them (wording of the outcome measures is included in the Appendix). By comparing responses on these three outcome measures across the treatment, placebo and control conditions, we can test our hypothesis that providing legislators with district-specific polling information about their constituents' preferences increases the accuracy of their perceptions and, potentially, shifts their expected legislative behavior toward those preferences.
Following the procedure for analyzing field experiments with survey outcomes (Broockman, Kalla and Sekhon Reference Broockman, Kalla and Sekhon2017), we analyzed the data by limiting the post-treatment survey responses to the compliers – legislators who logged in to access District Pulse (n = 300 across both studies). A total of 14 per cent of compliers responded to the post-treatment survey and answered our primary outcome measure (n = 43 across both studies). This response rate is comparable to the rates found in recent surveys of political elites (Broockman and Skovron Reference Broockman and Skovron2018; Broockman et al. Reference Broockman2019; Teele, Kalla and Rosenbluth Reference Teele, Kalla and Rosenbluth2018). A full 85 per cent of respondents to our survey identified themselves as legislators. As we note in the Appendix, these respondents are broadly representative in terms of baseline constituent opinion on the eight issues, Trump vote share and median household income. Furthermore, we find no evidence of differential attrition across conditions in who responded to these surveys.
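To illustrate the logic of this complier restriction, the sketch below filters a toy dataset to legislators who logged in and contrasts the district-specific (treatment) and regional (placebo) arms; because logging in occurs before a legislator can see which type of polling they were assigned, this comparison recovers a treatment-on-treated contrast. The column names and values are hypothetical, and the actual analysis uses the regression specification described below.

```python
# Hypothetical data: one row per legislator, with assignment arm, whether they
# logged in to District Pulse, and a post-treatment misperception measure
# (missing for non-compliers, who are excluded from the analysis).
import pandas as pd

df = pd.DataFrame({
    "arm": ["district", "district", "region", "region", "district", "region"],
    "logged_in": [True, False, True, True, True, False],
    "misperception": [18.0, float("nan"), 15.0, 17.0, 19.0, float("nan")],
})

# Restrict to compliers: legislators who accessed District Pulse.
compliers = df[df["logged_in"]]

# Difference in mean misperception between the treatment and placebo arms
# among compliers (the treatment-on-treated contrast).
tot = (compliers.loc[compliers["arm"] == "district", "misperception"].mean()
       - compliers.loc[compliers["arm"] == "region", "misperception"].mean())
print(f"treatment-on-treated contrast: {tot:+.1f} points")
```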
Our primary analysis examines whether providing district-specific information reduces the degree to which elected officials misperceive their constituents' preferences (Broockman and Skovron Reference Broockman and Skovron2018; Hertel-Fernandez, Mildenberger and Stokes Reference Hertel-Fernandez, Mildenberger and Stokes2019). Following our pre-analysis plan, we coded misperception as the absolute difference between the legislator's survey response indicating what percentage of constituents they believe support each policy area and the district support estimated using the CCES data; the latter was the information directly conveyed to the treatment group. We tested our other two pre-registered outcomes with survey questions on legislators' policy preferences for each of the eight policy areas and questions regarding intended voting behavior. That is, we asked legislators their level of agreement with each policy and how they would vote if each policy came before them. (Full question wordings are in the Appendix.) We then generated a ‘long’ dataset where each row is a legislator-issue (meaning each legislator who was a complier and responded to the post-treatment survey appears eight times) and columns are the three outcome measures. We analyzed the data by regressing each dependent variable on the treatment indicator using ordinary least squares and calculating cluster-robust standard errors at the state legislator level. In accordance with our pre-analysis plan, our primary model includes pre-treatment covariates for the legislator's party, whether the legislator serves in an upper or lower chamber, Trump's 2016 vote share in the district and state fixed effects.Footnote 4
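Because the paragraph above fully specifies the estimator, a brief sketch may help fix ideas. The snippet below builds a synthetic 'long' legislator-issue dataset and fits the pre-registered specification: OLS of misperception on a treatment indicator plus party, chamber, Trump vote share and state fixed effects, with standard errors clustered by legislator. The variable names and data are illustrative assumptions rather than the authors' replication code.

```python
# Illustrative estimation sketch; rows stand in for complying legislators who
# answered the post-treatment survey, one row per legislator-issue.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_leg, n_issues = 60, 8
df = pd.DataFrame({
    "legislator_id": np.repeat(np.arange(n_leg), n_issues),
    "state": np.repeat(rng.choice(["CA", "TX", "OH"], n_leg), n_issues),
    "party": np.repeat(rng.choice(["D", "R"], n_leg), n_issues),
    "upper_chamber": np.repeat(rng.integers(0, 2, n_leg), n_issues),
    "trump_share": np.repeat(rng.uniform(0.3, 0.7, n_leg), n_issues),
    "treated": rng.integers(0, 2, n_leg * n_issues),
    "perceived_support": rng.uniform(0, 100, n_leg * n_issues),
    "mrp_support": rng.uniform(20, 80, n_leg * n_issues),
})

# Misperception: absolute gap between perceived and estimated district support.
df["misperception"] = (df["perceived_support"] - df["mrp_support"]).abs()

# OLS with pre-treatment covariates, state fixed effects, and cluster-robust
# standard errors at the legislator level.
model = smf.ols(
    "misperception ~ treated + C(party) + upper_chamber + trump_share + C(state)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["legislator_id"]})
print(model.summary().tables[1])
```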
We took several steps to maximize statistical power. First, by randomly assigning each legislator to receive polling information on four of eight issues but collecting outcome measures for all eight, we were able to conduct a within-subject analysis and increase the effective sample size (see Green, Wilke and Cooper (Reference Green, Wilke and Cooper2017) for a similar design). Second, by randomly assigning subjects to receive either district-specific or placebo regional polling aggregates and tracking whether a legislator accessed their polling information, we were able to create a placebo group and obtain an estimate of the treatment-on-treated effect that is robust to our low compliance rate (Broockman, Kalla and Sekhon Reference Broockman, Kalla and Sekhon2017; Nickerson Reference Nickerson2005). With these design features and the observed compliance and post-treatment survey response rates, our experiment had 80 per cent power to detect a 7-percentage-point reduction in misperceptions from an average of 18 percentage points in the control group. The Appendix includes many additional details on the multi-level regression and post-stratification procedure, the placebo, balance checks, implementation and robustness checks under alternative model specifications.
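As a rough cross-check of the stated power figure, one can run a back-of-the-envelope calculation. The sketch below treats legislator-issue observations as independent and assumes an outcome standard deviation of roughly 14 percentage points; both are simplifying assumptions we introduce for illustration (the authors' calculation accounts for clustering and the within-subject design), so the number it produces is only indicative.

```python
# Back-of-the-envelope power check under assumed independence; the SD and the
# pooled legislator-issue counts (88 treatment, 171 control) are assumptions.
from statsmodels.stats.power import TTestIndPower

assumed_sd = 14.0
effect_size = 7 / assumed_sd  # a 7-point reduction, standardized

power = TTestIndPower().power(
    effect_size=effect_size, nobs1=88, ratio=171 / 88,
    alpha=0.05, alternative="two-sided",
)
print(f"approximate power to detect a 7-point reduction: {power:.2f}")
```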
The first experiment was conducted in the fall of 2017. The second experiment was conducted in the fall of 2018, in advance of the 2018 US midterm election. Prior studies have found that elites respond to informational treatments that allude to their re-election chances (Nyhan and Reifler Reference Nyhan and Reifler2014). In 2018, we randomly assigned 917 of the legislators who did not access District Pulse in 2017 to receive the same email invitation used in the original study, and another 917 legislators to receive an electoral-salience invitation. The latter invitation noted in the subject line that legislators could ‘Access detailed polling before the elections’ and referred to the elections in the main body (see the Appendix for the full emails).
Results
Despite administering a statistically well-powered experiment in which we offered state legislators a credible source of information about their constituents' preferences, we were unable to improve the accuracy of legislators' perceptions. Legislators in the control group who received no polling information misperceived their constituents' policy preferences by 18 percentage points on average, similar to previous estimates (Broockman and Skovron Reference Broockman and Skovron2018; Hertel-Fernandez, Mildenberger and Stokes Reference Hertel-Fernandez, Mildenberger and Stokes2019). Legislators provided with the placebo regional polling information and treatment district-specific information misperceived constituent policy preferences by 16 and 18 percentage points, respectively. In sum, legislators remained similarly uninformed about their constituents' preferences regardless of whether they were provided with information about their specific constituents, received information about the four US Census regions or received no information at all.
We present the main results in Figure 2, which plots estimates from regressions of the misperception outcome on a treatment indicator and covariates for the legislator's party, whether the legislator serves in an upper or lower chamber, Trump's 2016 vote share in the district and state fixed effects. We calculate cluster-robust standard errors at the state legislator level, since each observation in this regression is a legislator-issue (all pre-specified in our pre-analysis plan). In the left-hand panel, we limit our analysis to observations randomly assigned to receive district-specific polling on that issue or no polling on that issue (control). In the right-hand panel, we limit our analysis to observations receiving either the district-specific polling or the Census region polling (placebo). Each panel then includes four treatment effect estimates: (1) using all policy issues and limited to our 2017 study (sample size is 20 legislators and 80 legislator-responses in treatment, 39 and 156 in control, and 19 and 76 in placebo);Footnote 5 (2) using only those policy domains (mandatory minimum sentences, renewable portfolio mandates, gun purchase background checks, infrastructure spending and banning abortions) over which state legislators have direct legislative authority and limited to our 2017 study (20 legislators and 50 legislator-responses in treatment, 39 and 102 in control, and 19 and 43 in placebo); (3) using all issues and including both the 2017 and 2018 studies (22 legislators and 88 legislator-responses in treatment, 43 and 171 in control, and 21 and 84 in placebo); and (4) using just the state-level issues and both the 2017 and 2018 studies (22 legislators and 55 legislator-responses in treatment, 43 and 110 in control, and 21 and 50 in placebo).Footnote 6
Figure 2. Treatment effect on misperceptions
Note: higher values indicate greater misperceptions. All models include pre-treatment covariates and state fixed effects that were specified in our pre-analysis plan. Lines denote 95 per cent confidence intervals based on standard errors clustered at the state legislator level. A total of 312 legislator-responses (39 legislators) were included from the 2017 study and 31 legislator-responses (4 legislators) from 2018.
Figure 2 demonstrates that district-specific polling does not reduce legislators' misperceptions when compared to either the control or the placebo. In fact, compared to the placebo, the sign of the estimate points in the opposite direction from that expected. The results are robust when examining only those policy domains over which state legislators have direct legislative authority and when including the 2018 replication. As an additional robustness test, we examine the accuracy of the legislators who recalled receiving the polling information from District Pulse, as measured at the end of the post-treatment survey. Despite any bias potentially introduced by conditioning on this post-treatment variable (Montgomery, Nyhan and Torres Reference Montgomery, Nyhan and Torres2018), we continue to find no treatment effects in this group. Legislators who received the treatment information misperceived public opinion by an average of 17 percentage points compared to 16 and 17 percentage points for the placebo and control groups, respectively. Even the subset of the treatment group that recalled receiving polling information from District Pulse remained substantially inaccurate and no more accurate than the placebo and control groups. In the Appendix, we present additional robustness results and interaction effects with electoral safety, which consistently show no evidence of either statistically or substantively significant effects of providing polling information to legislators. However, given our limited sample size, these interaction effects may be underpowered. Turning to the other outcomes, we also fail to find an effect of providing polling information on legislators' policy preferences or their voting intentions. We applied the same robustness checks described above for misperceptions to these two outcomes, and the results remain unchanged. Complete results are presented in the Appendix.
Discussion
Our results are sobering. While previous research (Broockman and Skovron Reference Broockman and Skovron2018; Hertel-Fernandez, Mildenberger and Stokes Reference Hertel-Fernandez, Mildenberger and Stokes2019) has established that elected officials systematically misperceive what their constituents want, the evidence presented here portrays such officials as immune to our non-partisan efforts to correct those misperceptions, possibly opening the door to influence from others, such as likely voters, co-partisans, political activists, lobbyists or donors (Barber Reference Barber2016; Butler and Dynes Reference Butler and Dynes2016; Kalla and Broockman Reference Kalla and Broockman2016; Leighley and Oser Reference Leighley and Oser2018; Miler Reference Miler2010). In our experiments, legislators did not update their attitudes to match those of their constituents on salient political issues, even when the requisite information was quite literally at their fingertips.
We can think of three potential explanations for our results. First, legislators may not have received or trusted the information provided to them. Had the polling come to legislators via alternative channels or sources (for example, Butler and Nickerson Reference Butler and Nickerson2011), it may have had a greater impact, suggesting the need for further replication. Our results, while stark, are limited to these particular legislators and this particular intervention; other interventions tested on other samples may result in different findings. Second, legislators may discount aggregate measures of their constituents' attitudes, and instead focus on the preferences of political elites, lobbyists, donors, co-partisans and other policy demanders whom they view as more central to their electoral prospects (Barber Reference Barber2016; Bawn et al. Reference Bawn2012; Gilens Reference Gilens2012; Miler Reference Miler2010). Knowing what the average constituent believes on policy matters may be less important to elected officials than knowing what these others believe. Third, given the increasing levels of polarization in state legislatures (Shor and McCarty Reference Shor and McCarty2011) and the ‘nationalization’ of US politics (Hopkins Reference Hopkins2018), legislators may think of themselves not as delegates for their specific constituents, but as participants in national partisan debates. Knowing what their constituents believe on policy matters may be less important than understanding the positions of their national party – and sticking to them.
Our experiments expand upon an innovative prior experiment by Butler and Nickerson (Reference Butler and Nickerson2011). During a 2008 New Mexico special legislative session, the authors sent 35 state legislators polling from their districts on support for a one-time tax rebate. They find that, in this particular circumstance, legislators who received their district-specific polling were much more likely to vote with their constituents than the control group. Since that research was conducted, American political elites and state legislators have become increasingly polarized (Shor and McCarty Reference Shor and McCarty2011). Our results may differ from those of Butler and Nickerson (Reference Butler and Nickerson2011) because the current study included a more diverse set of contentious political issues across a much broader swath of legislators at a time of heightened polarization. Other differences between the settings of the studies, such as the fact that Butler and Nickerson (Reference Butler and Nickerson2011) provided polling information directly tied to an upcoming legislative vote and in conjunction with a local media source, may also explain why the legislators in their study were more responsive to constituent opinion than the legislators studied here. Our results underscore the importance of replication in the social sciences, especially as the external political world changes.
The two experiments reported here should not be the last word on the topic of constituency influence in American politics. Though we have shown that it is difficult to improve the accuracy of legislators' perceptions of constituent preferences, that does not mean it is impossible. A different treatment might have yielded different results. And even though we went to great lengths to maximize our statistical power, a design with even greater power might have been able to detect smaller effects. In addition, we could not control who accessed District Pulse and who took the post-treatment survey; there may have been systematic differences between those who took the survey and those who did not, thereby preventing us from observing effects. Finally, it is possible that the set of legislators willing to respond to a survey is somehow different and less responsive to treatment than the majority of legislators who chose not to respond. These limitations are especially important given how our results diverge from some prior work (Butler and Nickerson Reference Butler and Nickerson2011). With all this in mind, future research should investigate ways to spur legislators to obtain more accurate impressions of what their constituents believe, and to update their own impressions and attitudes accordingly.
Supplementary material
Data replication sets are available in Harvard Dataverse at: https://doi.org/10.7910/DVN/0A1AM3 and online appendices at: https://doi.org/10.1017/S0007123419000711.
Acknowledgements
We thank Daniel Butler, Peter Aronow, Avi Feller, Donald Green, Kim Gross, Steven Klein, Gabe Lenz, Winston Lin, Eric Schickler, Jasjeet Sekhon, John Sides and Lynn Vavreck for helpful feedback. Mark McKibbin and participants in UC Berkeley's Undergraduate Research Apprentice Program provided invaluable research assistance in contacting state legislators. We also thank Frank Chi and Will Donahoe for website design. All remaining errors are our own. This research was approved by the George Washington University Committee for Protection of Human Subjects (IRB#071742). Full replication materials and pre-analysis plans are available in the Appendix. The data, replication instructions and the data's codebook can be found at https://doi.org/10.7910/DVN/0A1AM3.