
Do Policy Makers Listen to Experts? Evidence from a National Survey of Local and State Policy Makers

Published online by Cambridge University Press:  28 September 2021

NATHAN LEE*
Affiliation: Rochester Institute of Technology, United States
*Nathan R. Lee, Assistant Professor, Department of Public Policy, Rochester Institute of Technology, United States, and Managing Director, CivicPulse, United States, nrlcla@rit.edu.

Abstract

Do elected officials update their policy positions in response to expert evidence? A large literature in political behavior demonstrates a range of biases that individuals may manifest in evaluating information. However, elected officials may be motivated to accurately incorporate information when it could affect the welfare of their constituents. I investigate these competing predictions through a national survey of local and state policy makers in which I present respondents with established expert findings concerning three subnational policy debates, debates that vary as to whether Republicans or Democrats are more likely to see the findings as confirmatory or challenging. Using both cross-subject and within-subject designs, I find policy makers update their beliefs and preferences in the direction of the evidence irrespective of the valence of the information. These findings have implications for the application of mass political behavior theories to politicians as well as the prospects for evidence-based policy making.

Type
Research Article
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of the American Political Science Association

Introduction

The United States enjoys an unparalleled network of experts and scientists who produce findings intended to support effective policy making. But when these findings challenge policy makers’ preexisting preferences, do they listen? A large body of literature in the study of mass political behavior has found that individuals often reject or discount information that they dislike or find challenging (Bolsen and Palm 2019; Kahan 2013; Lewandowsky and Oberauer 2016; Taber and Lodge 2006). Scholars describe this biased form of information processing as directional-motivated reasoning (Footnote 1). Many of these studies find that the individuals most prone to such biases tend to be those who are most partisan or hold the strongest opinions about politics. Accordingly, an elected official—the archetype of the opinionated partisan—should be especially likely to manifest such biases.

On the other hand, policy makers may face instrumental or intrinsic motivations to counteract such biases and evaluate new information in an unbiased fashion, a process described as accuracy-motivated reasoning. After all, policy makers have to make real decisions that affect the livelihoods of those they represent. Consequently, they may expect voters to hold them accountable for the effects of such decisions (Fiorina 1981) or expect the news media to criticize them if they misrepresent facts (Nyhan and Reifler 2015). Other factors may promote accuracy-motivated reasoning as well, such as the deliberative process associated with policy making (Quirk, Bendix, and Bächtiger 2018) or an individual policy maker’s intrinsic sense of civic duty (Mullinix 2018).

To test these competing predictions, I implemented a national online survey of local and state policy makers to assess responsiveness to expert evidence concerning three policy debates: needle exchanges, GMO bans, and rent-control ordinances. These policies were selected for three reasons. First, they are politically contested, yet there is a high degree of agreement among subject-matter experts about whether each is effective. Second, they are primarily debated and legislated at the state and local levels. Third, they vary with respect to which party would be more likely to be at odds with the stance of the expert community. For each of these policies, I examine responsiveness to expert evidence through a cross-subject experiment and a within-subject pre–post design. In the cross-subject experiment, I find that respondents update their beliefs and preferences in the direction of the evidence irrespective of party affiliation. In the within-subject design, I explore how updating varies as a function of the distance between the pretreatment belief or preference and the content of the message. I find that those who are most likely to find the information challenging—that is, those with the most distant priors—update the most.

Taken together, these findings demonstrate that, in contrast to the tenor of most journalistic and scholarly accounts, directional-motivated reasoning does not dominate how policy makers evaluate information. On the contrary, this study provides both cross-issue and cross-party evidence of accuracy-motivated reasoning among policy makers. More generally, this study suggests that partisan policy makers are not merely partisans, in a psychological sense: further research should consider the possible ways by which electoral incentives, institutional norms, and self-selection may complicate the application of mass political behavior theories to public officials. On a practical note, this study suggests reasons for optimism concerning the possibilities of evidence-based policy making, at least in local and state government.

Competing Expectations of How Policy Makers Process Evidence

Existing psychological theories of information processing are predicated on the idea that an individual might have one of two types of motivations when processing information: a desire to know the world as it truly is (“accuracy motivation”) or a desire to make new information fit with preexisting attitudes (“directional motivation”) (Kunda 1990). Directional motivations can encompass a range of distinct goals, including defending prior opinions, protecting one’s self-concept or identity, or managing others’ impressions of oneself (Druckman 2012). Two prominent mechanisms through which such biases manifest are confirmation bias and disconfirmation bias: being more or less likely to accept new information depending on the congeniality of that information. The congeniality of the information is, in turn, defined by the relationship between the information and one’s prior attitude, identity, or social group.

There is a large body of literature in the study of mass political behavior that illustrates a polity riddled with such biases, particularly when it comes to partisans or ideologues (Chan et al. 2017; Flynn, Nyhan, and Reifler 2017; Kahan 2010; 2013; Kraft, Lodge, and Taber 2015; Lewandowsky and Oberauer 2016; Lodge and Taber 2013; Thorson 2016). However, a smaller number of studies have found that, despite large differences in baseline attitudes, partisan groups tend to update their views in response to new information in a way that is consistent with unbiased accuracy-motivated reasoning (Gerber and Green 1999; Guess and Coppock 2020). Finally, a third line of literature has focused on the factors that change the relative likelihood that information processing in politics is guided by directional versus accuracy goals (for a review, see Bolsen and Palm 2019). These factors include both (a) individual-level characteristics such as strength of prior attitude (Chong and Druckman 2010), personal experience (McCabe 2016), or the presence of intersecting identities (Klar 2013; Mullinix 2018; Peterson 2019) and (b) contextual factors such as the presence of party conflict (Druckman, Peterson, and Slothuus 2013), rhetorical competition (Druckman, Fein, and Leeper 2012), or incentives (Bullock et al. 2015; Khanna and Sood 2018; Prior, Sood, and Khanna 2015).

Compared with the mass public, which behavioral pattern should we expect to observe among policy makers? On the one hand, policy makers tend to be more politically polarized than the mass public (Bafumi and Herron 2010), and thus we might expect directional-motivated reasoning to be heightened. They are also more likely to hold strong opinions on a variety of policy debates, which further increases the likelihood of directional-motivated reasoning. On the other hand, the anticipation that voters might hold them accountable for the effects of their decisions could heighten accuracy-motivated reasoning (Fiorina 1981). Likewise, the deliberative process associated with policy making, or even an intrinsic sense of civic duty, could provide other reasons for accuracy-motivated reasoning to dominate (Quirk, Bendix, and Bächtiger 2018; Mullinix 2018).

These two perspectives lead to competing predictions about how policy makers will respond to expert evidence. To the extent that policy makers are directionally motivated, responsiveness to the evidence will depend on its perceived congeniality (i.e., how well it corresponds with their preexisting attitudes). To the extent that policy makers are accuracy motivated, they will update their beliefs and preferences in the direction of the evidence irrespective of its congeniality.

These hypotheses can be stated as follows:

Directional Motivation Hypothesis. Policy makers will be less likely to update their beliefs or preferences in the direction of the evidence when the evidence is uncongenial.

Accuracy Motivation Hypothesis. Policy makers will update their beliefs or preferences in the direction of the expert evidence irrespective of the congeniality of the evidence (Footnote 2).

While there are far fewer behavioral studies examining directional versus accuracy motivations among policy makers, those that exist mirror the distribution of evidence in the mass political behavior literature. The majority of studies suggest that policy makers are directionally biased in their processing of information, with respect to both expert evidence (Baekgaard et al. 2019; Bolsen, Druckman, and Cook 2015; Jerit et al. 2006; Nyhan 2010; Vivalt and Coville 2020) and perceptions of their own voters (Broockman and Skovron 2018; Butler and Dynes 2016; Hertel-Fernandez, Mildenberger, and Stokes 2019), and that they engage in a variety of other problematic cognitive biases (Sheffer et al. 2018). For example, Baekgaard et al. (2019) carry out a survey experiment in which Danish politicians are asked to compare the performance of two schools based on a summary report. By randomizing the inclusion or exclusion of identifying information about the schools, they show that politicians are less likely to correctly identify which school performed best when the information presented contradicts their prior attitudes.

On the other hand, a few studies have suggested that policy makers accurately incorporate evidence (see Butler and Nickerson [2011] for constituent opinion and Zelizer [2018] and Hjort et al. [2019] for expert evidence). In particular, Hjort et al. (2019) conduct a field experiment in Brazil in which they provide municipal officials with evidence from randomized controlled trials showing that sending a tax payment reminder letter to local residents increases tax compliance. They find that policy makers who are given the opportunity to learn about this research are more likely to implement the reminder letter than those who are not. Notably, they find no evidence that this effect is contingent on prior attitudes toward the policy.

Research Design

To evaluate policy makers’ responsiveness to expert evidence, I conducted a national online survey of municipal, county, and state elected officials and their staffers. In this survey, I test responsiveness to expert findings across three politically charged but subnationally relevant policy issues. Responsiveness is measured through both a cross-subject experimental design and a within-subject pre–post design. This section briefly describes and justifies the choices made in each of the key elements of the research design: the subnational policy maker survey, the criteria for issue selection, and the two measures of expert responsiveness.

Sampling Frame

This study leverages a national online survey of local and state public officials. Why study subnational government officials? There are both substantive and practical reasons. Substantively, the vast majority of elected officials in the United States preside over local- or state-level jurisdictions: while there are 535 members of Congress, there are 7,383 state legislators, over 20,000 local elected executives, and over 95,000 local elected legislators (Footnote 3). Consequently, despite the increasing nationalization of political discourse, the vast majority of day-to-day policy decisions continue to occur at the local and state levels. Practically, the far greater number of local and state elected officials makes it easier to generate a survey sample with sufficient statistical power.

To carry out this survey, I partnered with CivicPulse, a nonprofit research organization that conducts multicollaborator surveys of local public officials to generate shared data and research (Footnote 4). CivicPulse combines data from multiple private vendors to construct and maintain a comprehensive contact list of the elected and appointed officials in all townships, municipalities, counties, and state legislative districts in the United States with populations of 1,000 or more (98% coverage). From this contact list, a random sample of policy makers was invited to participate in a confidential online survey in March 2018. A total of 690 policy makers from all 50 states completed the survey. The sample comprises approximately 40% elected municipal officials, 24% elected county officials, 12% state legislators, and 24% state legislative staffers (Table 1). Further information about the demographic composition of the sample can be found in the appendix (Table A5).

To characterize the sample’s representativeness, I geocoded all respondents using the FIPS system and subsequently matched them to Census-area demographic data. In general, the survey modestly oversamples municipalities, counties, and districts that are more populated, more urban, more liberal, and more educated. Further information on representativeness is provided in the appendix, along with evidence that the main results are insensitive to these geographic biases and to the inclusion or exclusion of survey weights based on Census-area characteristics.
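To make this matching-and-weighting step concrete, the following is a minimal sketch of how respondent FIPS codes might be merged with Census-area characteristics and used to build simple reweighting factors. It is illustrative only: the file names and column names ("fips", "urban", "weight") are hypothetical stand-ins, not taken from the study's replication materials.

```python
# Hypothetical sketch: merge geocoded respondents with Census-area data and
# construct simple reweighting factors for overrepresented area types.
import pandas as pd

respondents = pd.read_csv("survey_responses.csv")  # one row per policy maker
census = pd.read_csv("census_area_data.csv")       # one row per FIPS area

# Attach area-level demographics to each respondent via the FIPS code
merged = respondents.merge(census, on="fips", how="left")

# Reweight so that overrepresented area types (e.g., more urban, more
# populated places) count proportionally less in the analysis
sample_share = merged["urban"].value_counts(normalize=True)
population_share = census["urban"].value_counts(normalize=True)
merged["weight"] = merged["urban"].map(population_share / sample_share)
```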

Issue Selection

The three policy debates for this study were selected based on three key criteria. First, each debate had to be one that a significant proportion of local and state policy makers across the United States would plausibly be in a position to make decisions about. Second, each debate had to have a fairly high degree of consensus among domain-relevant experts, such that it would not be misleading to summarize an expert community as having a singular view. Third, the three debates had to collectively span the range of potential relationships between the political parties and the expert community: at least one in which the expert view was more congenial for Democrats and at least one in which it was more congenial for Republicans.

The three issues selected to fit these criteria were needle exchanges, GMO bans, and rent-control ordinances. Needle exchange programs have become a popular response to the rise of needle-borne diseases associated with the opioid crisis. Despite unsubstantiated claims that needle exchanges increase drug use, the vast majority of public health experts who have studied this intervention have found that needle exchanges are effective in reducing the spread of such diseases and do not increase drug use (CDC 2017). Genetically modified organisms (GMOs) were chosen for the second issue. Despite common claims about the dangers of GMOs, the vast majority of scientists who have studied this topic have concluded that GMOs are safe for human consumption (National Academies of Sciences, Engineering, and Medicine 2016). Finally, the third issue selected was the debate around rent control. Despite its popularity in some areas, the vast majority of economists agree that this intervention fails to achieve its purported goal—namely, increasing the quantity of affordable housing (Glaeser and Luttmer 2003; Jenkins 2009).

Importantly, these three issues span the range of potential relationships between the political parties in the United States and the views of the relevant expert communities: Democrats are much more likely to hold the expert-congruent policy stance than Republicans concerning needle exchanges, Republicans are modestly more likely to hold the expert-congruent position with respect to GMOs, and Republicans are much more likely to hold the expert-congruent policy stance concerning rent-control ordinances. These patterns are illustrated in Figure 1. In addition, I further show in the appendix (Figure A3) that the partisan orientation of these issues is recognized by the respondents themselves.

Table 1. Respondents by Level of Government and Position

Note: Elected municipal and county officials include both executives (e.g., mayor) and legislators.

Figure 1. Expert Congruence by Party Prior to Treatment

Note: Policy preferences are based on the question, “Do you support or oppose [policy]? {Strongly support, Moderately support, Slightly support, Neither support nor oppose, Slightly oppose, Moderately oppose, Strongly oppose}.” For needle exchanges, all “support” responses are coded as expert-congruent. For GMO bans and rent-control ordinances, all “oppose” responses are coded as expert-congruent. Data is taken from the control group for each policy. The three policies span the range of potential alignments between the political parties and the expert communities. A far greater percentage of Democrats are aligned with public health experts concerning needle exchanges, whereas a far greater percentage of Republicans are aligned with economists concerning rent-control ordinances. For GMO bans, alignment with scientists is higher among Republicans, though this difference is less pronounced.

Survey Design and Outcome Measures

This study employs two different approaches to measuring policy makers’ responsiveness to expert evidence. The first is a cross-subject informational experiment. The control group receives a “balanced” summary of the controversy. The treatment group receives this information plus an additional paragraph summarizing the findings of a study by a group of experts. Afterwards, respondents in both the treatment and control groups are asked about their beliefs regarding the degree of consensus in the expert community and about their own policy preferences. Each respondent is randomly assigned to two of the three policy modules, and the order of the policies presented is also randomized. By way of example, the text from the GMO module is provided below (the full text of each module can be found in the appendix).

Genetically modified organisms (GMOs) are crops that have had changes made in their DNA to improve resistance to disease or herbicides. However, some people think they are unsafe to eat.

[treatment condition only] In fact, scientists from several universities have examined this issue and concluded that GMO foods are safe to eat.

The second approach leverages the fact that the respondents assigned to the control condition for each policy can subsequently be treated with the same information without compromising the validity of the pretreatment outcome measures. Thus, by exposing respondents in the control condition to the message and then asking the same outcome questions again, I am able to construct a within-subject pre–post measure of updating in response to expert evidence. Of course, this entails a trade-off: the pre–post design sacrifices the clean causal identification available in the experimental setup, but it makes it possible to directly estimate whether and how much each individual updates their beliefs and preferences (Footnote 5).
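The assignment logic described above can be summarized in code form. The following sketch is purely illustrative: the function and step names are hypothetical labels, not drawn from the survey instrument itself.

```python
# Illustrative sketch of the module assignment and question flow; all names
# are hypothetical stand-ins for the actual survey instrument.
import random

POLICIES = ["needle_exchange", "gmo_ban", "rent_control"]

def assign_modules(rng: random.Random) -> list[dict]:
    """Assign a respondent two of the three policy modules in random order,
    each independently randomized to treatment or control."""
    plan = []
    for policy in rng.sample(POLICIES, k=2):      # two modules, order random
        arm = rng.choice(["treatment", "control"])
        steps = ["balanced_summary"]
        if arm == "treatment":
            steps.append("expert_finding")        # extra expert paragraph
        steps += ["belief_question", "preference_question"]
        if arm == "control":
            # within-subject leg: controls see the expert message afterwards
            # and answer the same outcome questions a second time
            steps += ["expert_finding", "belief_question", "preference_question"]
        plan.append({"policy": policy, "arm": arm, "steps": steps})
    return plan

print(assign_modules(random.Random(42)))
```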

The two outcome measures are respondents’ subjective beliefs about the degree of consensus among the expert community and the respondent’s own personal policy preferences. These questions are as follows (continuing the GMO example):

Beliefs about Experts. If you were to guess, what percentage of scientists who conduct research related to GMOs believe that GMOs are safe to eat? {Less than 20%, 20–40%, 40–60%, 60–80%, More than 80%}

Policy Preferences. Do you support or oppose a ban on GMO foods? {Strongly support a ban, Moderately support a ban, Slightly support a ban, Neither support nor oppose a ban, Slightly oppose a ban, Moderately oppose a ban, Strongly oppose a ban}

The belief question was chosen to measure initial information uptake—namely, to what extent the specific information in the message moves respondents’ subjective perceptions about the degree of consensus in the expert community. Importantly, the information treatment is qualitative, so respondents could differentially interpret the proportion of experts who hold the view espoused in the message. The policy preference question—which is always asked after the belief question—was chosen to assess the extent to which this information uptake translates into a change in a respondent’s opinion about the policy.

These measures are translated into belief accuracy (accuracy of perceived consensus) and preference congruence (congruence with the implied policy stance of the expert community), respectively. The survey is designed so that, for all three policies, the most accurate answer to the belief question is always the top end of the five-point Likert-type scale (“more than 80% of experts”). For preference congruence, the “strongly support” endpoint of the seven-point scale is the most congruent response for needle exchanges, while the “strongly oppose” endpoint is the most congruent response for rent control and GMO bans. The full five- and seven-point Likert-type scales are used as the outcome measures in the analyses that follow, rescaled from zero to one and oriented such that higher values are always more accurate or more congruent. In the appendix, I explore the sensitivity of the results to using alternative binary outcome measures (Tables A15–A16).
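As a concrete illustration of this rescaling, the sketch below maps Likert responses onto the unit interval and flips orientation where “oppose” is the expert-congruent direction. The scale labels and function name are hypothetical, chosen for illustration only.

```python
# Hypothetical sketch of the 0-1 rescaling and orientation described above.
BELIEF_SCALE = ["<20%", "20-40%", "40-60%", "60-80%", ">80%"]  # five points

def rescale(response: str, scale: list[str], reverse: bool = False) -> float:
    """Map a Likert response to [0, 1]; reverse=True flips orientation
    (e.g., for GMO bans and rent control, where 'oppose' is the
    expert-congruent endpoint)."""
    score = scale.index(response) / (len(scale) - 1)
    return 1 - score if reverse else score

assert rescale(">80%", BELIEF_SCALE) == 1.0    # most accurate belief
assert rescale("40-60%", BELIEF_SCALE) == 0.5  # midpoint bin
```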

Estimation and Hypothesis-Testing

In the context of the cross-subject experimental design, the test for the accuracy motivation hypothesis is whether the treatment effects on beliefs and preferences are significant and in the implied direction of the message. The test for the directional motivation hypothesis is whether these treatment effects are moderated by congeniality, wherein congeniality is defined by one’s party affiliation (measured using the standard four-category party ID question; Footnote 6): for needle exchanges, Republicans should be less likely to move in the direction of the message than Democrats; for rent control, the inverse should be true.

In the within-subject design, as in the cross-subject design, the test for the accuracy motivation hypothesis is whether respondents significantly update their beliefs and preferences in the expected direction. The test for the directional motivation hypothesis is again whether updating is moderated by congeniality, but this time congeniality is defined by the pretreatment outcome measure. Specifically, the pretreatment belief and preference measures are collapsed into three-factor categorical variables: beliefs are converted into least, somewhat, and most accurate. Policy preferences are converted into least, somewhat, and most congruent. Belief accuracy is assigned in the following way: answers under 40% are labeled “least,” greater than 60% are labeled “most,” and answers at the midpoint (40–60%) are labeled “somewhat.” Assignment for preference congruence for needle exchanges is as follows: all “support” answers are labeled “most,” all “oppose” answers are labeled “least,” and all “neither support nor oppose” answers are labeled “somewhat.” For preference congruence for GMO bans and rent control, this labeling is reversed, where “opposed” answers are the most congruent.
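The three-way collapsing rule just described can be written out explicitly. The following sketch follows the stated thresholds and labels; the function names themselves are illustrative.

```python
# Sketch of the least/somewhat/most coding rules stated above.
def collapse_belief(bin_label: str) -> str:
    """Collapse the five belief bins into least/somewhat/most accurate."""
    if bin_label in ("<20%", "20-40%"):
        return "least"      # answers under 40%
    if bin_label == "40-60%":
        return "somewhat"   # midpoint bin
    return "most"           # answers over 60%

def collapse_preference(response: str, oppose_is_congruent: bool) -> str:
    """Collapse the seven-point preference scale into least/somewhat/most
    congruent. For GMO bans and rent control, 'oppose' answers are the
    expert-congruent side; for needle exchanges, 'support' answers are."""
    if "Neither" in response:
        return "somewhat"
    is_oppose = "oppose" in response.lower()
    return "most" if is_oppose == oppose_is_congruent else "least"

assert collapse_preference("Strongly oppose", oppose_is_congruent=True) == "most"
assert collapse_preference("Slightly support", oppose_is_congruent=True) == "least"
```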

I then examine how updating compares across each of these groups. By construction, the most accurate (or most congruent) respondents cannot update any further in the implied direction of the message. However, the comparison between the respondents who are the least accurate (or least congruent) and respondents who are somewhat accurate (or somewhat congruent) provides a direct test of the directional motivation hypothesis. If respondents are motivated by maintaining their prior beliefs, we should expect that the least accurate respondents will update their beliefs less than the somewhat accurate respondents. Likewise, if respondents are motivated by maintaining their prior preferences, we should expect that the least congruent respondents will update their preferences less than the somewhat congruent respondents. (Table A19 in the appendix explores the potential confounding role of ceiling effects in these tests.)

One way of viewing these two approaches is that the cross-subject design provides an opportunity to test accuracy-motivated reasoning against partisan-motivated reasoning while the within-subject design provides an opportunity to test accuracy-motivated reasoning against prior-attitude-motivated reasoning. Taken together, the following equations provide the regression estimation functions for the cross-subject (1–2) and within-subject (3–4) designs:

(1) $$ {\mathrm{B}}_{ij} = {\beta}_0 + {\beta}_1 \cdot \mathrm{republican} + {\beta}_2 \cdot \mathrm{treatment} + {\beta}_3 \cdot \mathrm{treatment} \times \mathrm{republican} + {\varepsilon}_{ij} $$
(2) $$ {\mathrm{P}}_{ij} = {\beta}_0 + {\beta}_1 \cdot \mathrm{republican} + {\beta}_2 \cdot \mathrm{treatment} + {\beta}_3 \cdot \mathrm{treatment} \times \mathrm{republican} + {\varepsilon}_{ij} $$
(3) $$ {\mathrm{B}}_{ij}^{\mathrm{posttreatment}} - {\mathrm{B}}_{ij}^{\mathrm{pretreatment}} = {\beta}_4 + {\beta}_5 \cdot \mathrm{least\_accurate}_{ij} + {\beta}_6 \cdot \mathrm{most\_accurate}_{ij} + {\varepsilon}_{ij} $$
(4) $$ {\mathrm{P}}_{ij}^{\mathrm{posttreatment}} - {\mathrm{P}}_{ij}^{\mathrm{pretreatment}} = {\beta}_4 + {\beta}_5 \cdot \mathrm{least\_congruent}_{ij} + {\beta}_6 \cdot \mathrm{most\_congruent}_{ij} + {\varepsilon}_{ij} $$

where $ {\mathrm{B}}_{ij} $ and $ {\mathrm{P}}_{ij} $ are the belief and preference outcomes for respondent $ i $ and policy $ j $, respectively. In the cross-subject model, the reference category is Democrat (independents are excluded). In the within-subject model, the reference category is somewhat (somewhat accurate beliefs or somewhat congruent preferences). Because the outcome variable in the within-subject model is the difference between the posttreatment and pretreatment measures, $ {\beta}_4 $ is conceptually similar to $ {\beta}_2 $ in the cross-subject model. The hypothesis tests for both designs are listed in Table 2.
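As an illustration of how these models could be fit, the following sketch estimates equations (1) and (3) with OLS. The data file and column names ("belief", "treatment", "republican", "prior_accuracy", and so on) are hypothetical stand-ins for the replication data, not the study's actual variable names.

```python
# Hypothetical sketch of estimating equations (1) and (3) via OLS.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_data.csv")  # one row per respondent-policy pair

# Equation (1), cross-subject: belief accuracy on treatment, party, and
# their interaction (beta_2 = treatment effect for Democrats, the reference
# category; beta_3 = the directional-motivation moderation term)
m1 = smf.ols("belief ~ treatment * republican", data=df).fit()

# Equation (3), within-subject: belief updating among control respondents,
# with 'somewhat accurate' as the omitted reference category
controls = df[df["treatment"] == 0].copy()
controls["belief_change"] = controls["belief_post"] - controls["belief_pre"]
m3 = smf.ols(
    "belief_change ~ C(prior_accuracy, Treatment(reference='somewhat'))",
    data=controls,
).fit()

print(m1.summary())
print(m3.summary())
```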

Results

Figure 2 displays the point estimates of the effect of treatment on belief accuracy (with respect to the degree of consensus in the expert community) and preference congruence (with respect to the implied stance of the expert community) (Footnote 7). Across policies, exposure to the expert findings significantly increases both belief accuracy and preference congruence. The effects on belief accuracy range from eight percentage points for GMOs, to 10 percentage points for needle exchanges, to 14 percentage points for rent control, all significant at the 95% level (Footnote 8). The relative sizes of these effects are consistent with the fact that respondents are the least likely to be aware of the consensus among economists concerning rent control (47% in the control group), whereas a far greater percentage of respondents are accurately informed about the consensus among scientists concerning GMOs (69%) and among public health experts concerning needle exchanges (67%). Consequently, respondents have further to update with respect to rent control, on average, than with respect to the other two issues.

Table 2. Key Hypothesis Tests

Figure 2. Overall Treatment Effects from Cross-Subject Design

Note: The left-hand side of the plot displays the effect of treatment on belief accuracy, and the right-hand side displays the effect of treatment on preference congruence (in relation to the implied stance of the expert community). Across policies, exposure to expert findings causes an increase in both outcomes.

The effects on preference congruence range from four percentage points for needle exchanges, to eight percentage points for GMOs, to nine percentage points for rent control (the effect for needle exchanges is not statistically significant, but the effects for the other two issues are). Again, this pattern is consistent with the preexisting attitudes in the control group: 65% of respondents already hold the expert-congruent preference with respect to needle exchanges, 56% with respect to GMOs, and 49% with respect to rent control. Therefore, respondents have the furthest to update on their preferences concerning rent control and the least far to update concerning needle exchanges.

Taken together, the increase in belief accuracy and preference congruence across issues suggests a willingness among policy makers to incorporate new evidence. While such behavior is clearly consistent with accuracy-motivated reasoning, it is impossible to rule out the confounding role of directional motivations from this result alone. For example, the updating could primarily be occurring among the respondents for whom the expert findings are congenial. Therefore, to adjudicate between directional-motivated and accuracy-motivated updating, I turn to the first of the two tests designed for this purpose, namely, a comparison of the estimated treatment effects by party.

Figure 3 plots the average treatment effect of the message by party affiliation for each policy. The directional motivation hypothesis predicts that the effect of the treatment should be lower for Republicans for needle exchanges and higher for GMOs and rent control, while the accuracy motivation hypothesis predicts that all of these point estimates should be similar. While there are modest differences by party consistent with the directional motivation hypothesis, these differences are not significant. On the other hand, the evidence in favor of the accuracy motivation hypothesis is clear: all 12 point estimates are positive, and nine are statistically significant at the 95% confidence level (see Tables A8–A9 for formal regression results across a variety of specifications).

Figure 3. Treatment Effects by Party from Cross-Subject Design

Note: The average treatment effects are plotted by party for each policy. Both beliefs and preferences are rescaled from zero to one. The directional motivation hypothesis predicts that the effect of treatment should be conditional on party, whereas the accuracy motivation hypothesis predicts that updating should occur irrespective of party. Although there are modest differences by party consistent with the directional motivation hypothesis, these differences are not significant. On the other hand, the evidence in favor of the accuracy motivation hypothesis is clear: all 12 point estimates are positive, and nine are statistically significant at the 95% confidence level.

It is important to note that, while these results clearly demonstrate the role of accuracy-motivated reasoning, they do not rule out the possible simultaneous role of directional-motivated reasoning. Specifically, the absence of significant differences by party could be driven by differential ceiling effects, which could bias against observing differences between parties that would otherwise exist. Table A18 in the appendix demonstrates this possibility. More generally, it is difficult to draw strong conclusions from moderation analyses of experimental effects, due to limited statistical power. Finally, while party affiliation is one mechanism by which directional-motivated reasoning might occur, it may not always be the relevant one. For example, it is possible that not all policy makers perceive needle exchanges, GMOs, and rent control as oriented along party lines in the way that the above test for directional motivation presupposes. For these reasons, I turn to the within-subject pre–post study design to approach this question in a different way.

In the within-subject design, I again examine how updating differs across levels of congeniality, but in this case congeniality is defined by the degree to which the pretreatment response aligns with the expert community. Figure 4 displays average belief updating at different levels of pretreatment belief accuracy and average preference updating at different levels of pretreatment preference congruence. Pretreatment beliefs are collapsed into least/somewhat/most accurate and pretreatment preferences into least/somewhat/most congruent. Updating is close to zero for respondents who already hold the most accurate beliefs or most congruent preferences, by construction. However, comparing average updating between the “least” and “somewhat” accurate (or congruent) groups provides a test of the directional motivation hypothesis: because the information should be more uncongenial for the “least” groups, this hypothesis predicts that these respondents should be less likely to update. However, it is these very respondents who update the most. Table A19 in the appendix further shows that this finding is not confounded by differential ceiling effects. Consequently, this result offers evidence in favor of accuracy-motivated reasoning and against directional-motivated reasoning (Footnote 9).

Figure 4. Updating by Prior Attitude from Within-Subject Design

Note: Pretreatment beliefs and preferences are collapsed into least/somewhat/most accurate and least/somewhat/most congruent, respectively. The outcome is the difference between the pre- and posttreatment measures of the five-point belief and the seven-point preference Likert-type scale, with each difference rescaled from -1 to 1. Updating is close to zero for respondents who already hold accurate beliefs (or congruent preferences), as they cannot update further in the direction of the information. However, both “somewhat” and “least” accurate and congruent respondent groups significantly update. Furthermore, the least accurate and least congruent groups update the most, a finding that is more consistent with accuracy-motivated reasoning than directional-motivated reasoning.

External Validity

In this section, I briefly raise and address a range of concerns that might arise when considering the external validity of these findings. Before doing so, it is important to recognize that although these findings demonstrate that subnational policy makers are motivated to accurately incorporate expert evidence into their policy positions, real decision making may be sensitive to a range of other factors: from the preferences of constituents, to the preferences of party leaders, to legislative bargaining, to policy makers’ personal ideological commitments. That said, it is affirming to note that the findings from the two field experiments to date that have investigated the effect of expert evidence on actual policy decisions are consistent with the findings presented here (Hjort et al. 2019; Zelizer 2018).

The first concern is that the information treatment in this study may not reflect how policy makers typically encounter expert evidence in the real world. Most notably, the treatment in this study was quite weak: the messages were single sentences with no citations, no methodological descriptions, and no arguments. Certainly, in the real world, experts giving testimony, for example, would have more than a sentence to convey their points, and policy makers would be able to interrogate them with further questions. However, the weakness of the treatments in this case biases against the findings: one might expect partisan policy makers to give themselves permission to reject uncongenial evidence if it were seen as weak. And yet, even with these weak information treatments, I demonstrate that policy makers update in a pattern consistent with accuracy-motivated reasoning.

A second concern is that the respondents’ motivations in the survey may not reflect their motivations in the real world. For example, a common concern in survey research is that responses are merely “expressive”—that is, respondents are not accurately representing their true beliefs or preferences but, rather, are signaling something else. One possibility is that policy makers were using the survey questionnaire as an opportunity to express their partisan credentials. This would be consistent with the “party cheerleading” phenomenon demonstrated in some survey experiments with the public (Bullock et al. 2013; Prior, Sood, and Khanna 2015; Schaffner and Luks 2018). However, this, too, would bias against the findings in this study. Conversely, respondents might have inferred something about the intent of the survey and, consequently, wished to demonstrate that they were, in fact, responsive to expertise. This would be a more problematic concern, as it would bias in favor of the findings. Nonetheless, recent work on randomization of “experimenter intent” has suggested that these concerns may be overstated (Mummolo and Peterson 2018).

A third concern is whether the survey sample differs from the sampling frame from which it is drawn (i.e., municipal, county, and state policy makers across the United States) in ways that are correlated with the propensity to be responsive to expert evidence. While I cannot rule out this possibility, I provide careful consideration of multiple sources of potential bias in the appendix. Notably, the sample modestly overrepresents policy makers from higher-population areas, which also tend to be more urban, more college-educated, and more liberal (Footnote 10). However, I demonstrate in the appendix, both through a variety of moderation analyses and through the use of survey weights, that the findings in this study are not sensitive to these geographic biases.

Fourth, one might consider whether the findings would change if the degree of politicization of the issues were different. For example, would the findings be different if the study were limited to states or localities where the respondents had already taken positions on specific legislation concerning the issues at hand? Or would the findings have been different if, instead of needle exchanges, GMOs, and rent control, the issues had been abortion, climate change, and the minimum wage? The answer to both questions is almost certainly yes, but not only because greater politicization would stoke stronger directional motivations. Greater politicization also means greater salience, so policy makers would likely already be more familiar with what experts have to say on those topics. Consequently, the effect of further exposure to expert evidence would likely be attenuated regardless of the relative weight of accuracy and directional motivations.

Finally, how would these findings extend to national policy makers? On the one hand, the greater influence of their decisions may mean their motivation to accurately evaluate evidence is further heightened. On the other hand, if they are more polarized, then their directional motivations would also be heightened. Nonetheless, one clear difference between subnational and national policy making is that the former is often a part-time job, whereas the latter is almost always a full-time job. Moreover, national policy makers benefit from a much larger support staff than most local and state legislators. For both these reasons, national policy makers are more likely to already have been exposed to the expert findings relevant to whatever policy decisions they are making. Consequently, the marginal effect of sending information about expert findings to a national policy maker is likely to be smaller, even if the relative weight of accuracy and directional motivations is the same.

Conclusion

This study investigates policy maker responsiveness to expert evidence using a national sample of municipal, county, and state elected officials. Across three locally contested policies, I demonstrate that briefly exposing policy makers to a short message summarizing the established findings of domain-relevant experts increases belief accuracy by 8 to 16 percentage points and increases preference congruence by 4 to 16 percentage points, irrespective of party affiliation. Taken together, these findings offer important evidence against the view that directional biases dominate how policy makers process information. Instead, they reveal that both Republican and Democratic lawmakers seek to accurately evaluate and incorporate expert evidence that is presented to them.

These findings are not incompatible with existing evidence showing that policy makers are sometimes misinformed about what experts think (Bolsen, Druckman, and Cook 2015; Jerit et al. 2006; Nyhan 2010) or what their voters think (Broockman and Skovron 2018; Butler and Dynes 2016; Hertel-Fernandez, Mildenberger, and Stokes 2019). First, this study does not rule out that directional and accuracy motivations might coexist in policy makers (Lee et al. 2021), nor does it rule out the presence of other cognitive biases (Sheffer et al. 2018). Second, policy makers updated in the direction of the evidence but did not abandon their prior beliefs altogether. Consequently, misperceptions could persist for a long period even if policy makers are accuracy-motivated “on the margin.” The origins of such beliefs must therefore be considered alongside responsiveness to new information before drawing broader conclusions about psychological motivations.

More generally, this study raises questions about how scholars should apply theories of political behavior developed from studies of the mass public to policy makers. Making real policy decisions that will have wide-ranging effects on thousands of people is a very different motivational environment than the one that everyday citizens tend to face when reading or hearing politically charged information. For most of the public, politics feels more like a “spectator sport,” so indulging one’s directional motivations is, or at least feels, less costly (Hersh 2017). In contrast, policy makers may perceive that they will face repercussions if they do not consider all information carefully and objectively, or they may simply feel a sense of civic duty to do so (Mullinix 2018). Further research should consider the possible ways by which electoral incentives, institutional norms, and self-selection may distinguish how policy makers and everyday citizens behave.

With respect to evidence-based policy making, these findings suggest some reasons for optimism and indicate that more pessimistic accounts of the role of outside expertise in contemporary policy making may be overstated (Nichols 2017). In particular, this study suggests that further efforts to promote evidence-based policy making might be particularly effective within the scope conditions examined here: local and state policy making on issues with a high degree of expert consensus.

Supplementary Materials

To view supplementary material for this article, please visit http://doi.org/10.1017/S0003055421000800.

DATA AVAILABILITY STATEMENT

Replication materials for this study are available at the American Political Science Review Dataverse: https://doi.org/10.7910/DVN/S2SNOT.

ACKNOWLEDGMENTS

The author would like to express his gratitude to Hakeem Jefferson, Adam Bonica, Jens Hainmueller, Michael Tomz, Jonathan Rodden, Brendan Nyhan, Dan Thompson, Marc Grinberg, Hannes Malmberg, Mark Lee, and Dylan Boynton for their insightful comments and suggestions. The author also wishes to thank the team at CivicPulse for facilitating the data collection.

CONFLICT OF INTEREST

The author declares no ethical issues or conflicts of interest in this research.

ETHICAL STANDARDS

Approval for the online survey was received from Stanford’s Institutional Review Board prior to implementation (Protocol #39690).

Footnotes

1 Directional-motivated information processing is often referred to generically as “motivated reasoning.” Because all reasoning is motivated or “goal-driven” (Kunda 1990), I avoid that term and instead follow the precedent set in this literature of distinguishing between directional and accuracy motivations.

2 This hypothesis is related to conventional uses of the notion of “Bayesian updating.” However, I do not measure respondents’ confidence in their prior attitudes, so I cannot formally test this. This hypothesis is also related to the “deficit model” used in the science communication literature (Brossard and Lewenstein 2010).

5 Fortunately, the within-subject estimates of updating from exposure to expert findings (see Tables A10–A11) approximately correspond to the experimentally estimated treatment effects (Tables A8–A9), lending some credibility to the design. Figure A17 graphically compares the point estimates from these two designs.

6 Generally speaking, do you usually think of yourself as a {Democrat, Republican, Independent, Other party}?

7 Replication materials for this study can be found at the Harvard Dataverse (Lee 2021).

8 Throughout the results section, I evaluate treatment estimates in terms of percentage points, which refer to shifts along the five-point belief scale or the seven-point preference scale relative to the full range of each outcome measure. For example, a shift from one endpoint to the other of either outcome measure corresponds to one hundred percentage points, and a one-category shift on the five-point belief scale corresponds to one quarter of the range, or 25 percentage points. This form of exposition can mask the role of differential ceiling effects, which is explored in the appendix (Tables A18–A19).

9 This finding is also consistent with what would be expected if respondents were behaving like Bayesians, assuming the “somewhat” and “least” respondent groups were similarly confident in their pretreatment responses.

10 As of 2015, the median population size of all subcounty polities in the United States was 3,527 people. The median population size of the subcounty polities represented in the survey was 53,638. In this sense, the survey oversamples higher-population polities. However, another way of considering the population of interest is to consider the characteristics of the polity that the median constituent in the United States would experience. In this sense, the sample approximates the population of interest well: the population size of the median constituent’s polity in 2015 was 58,993.

References

Baekgaard, Martin, Christensen, Julian, Dahlmann, Casper Mondrup, Mathiasen, Asbjørn, and Petersen, Niels Bjørn Grund. 2019. “The Role of Evidence in Politics: Motivated Reasoning and Persuasion among Politicians.” British Journal of Political Science 49 (3): 1117–40.
Bafumi, Joseph, and Herron, Michael C. 2010. “Leapfrog Representation and Extremism: A Study of American Voters and Their Members in Congress.” American Political Science Review 104 (3): 519–42.
Bolsen, Toby, Druckman, James N., and Cook, Fay Lomax. 2015. “Citizens’, Scientists’, and Policy Advisors’ Beliefs about Global Warming.” The ANNALS of the American Academy of Political and Social Science 658 (March): 271–95.
Bolsen, Toby, and Palm, Risa. 2019. “Motivated Reasoning and Political Decision Making.” In Oxford Research Encyclopedia of Politics. https://doi.org/10.1093/acrefore/9780190228637.013.923.
Broockman, David E., and Skovron, Christopher. 2018. “Bias in Perceptions of Public Opinion among Political Elites.” American Political Science Review 112 (3): 542–63.
Brossard, Dominique, and Lewenstein, Bruce. 2010. “A Critical Appraisal of Models of Public Understanding of Science.” Chap. 1 in Communicating Science: New Agendas in Communication, eds. Kahlor, Lee Ann and Stout, Patricia A. New York: Routledge.
Bullock, John G., Gerber, Alan S., Hill, Seth J., and Huber, Gregory A. 2015. “Partisan Bias in Factual Beliefs about Politics.” Quarterly Journal of Political Science 10 (4): 519–78.
Bullock, John G., Gerber, Alan S., Hill, Seth J., and Huber, Gregory A. 2013. “Partisan Bias in Factual Beliefs about Politics.” NBER Working Paper.
Butler, Daniel M., and Dynes, Adam M. 2016. “How Politicians Discount the Opinions of Constituents with Whom They Disagree.” American Journal of Political Science 60 (4): 975–89.
Butler, Daniel M., and Nickerson, David W. 2011. “Can Learning Constituency Opinion Affect How Legislators Vote? Results from a Field Experiment.” Quarterly Journal of Political Science 6 (1): 55–83.
Centers for Disease Control (CDC). 2017. “Injection Drug Use | HIV Risk and Prevention.” https://www.cdc.gov/hiv/basics/hiv-transmission/injection-drug-use.html.
Chan, Man-pui Sally, Jones, Christopher R., Jamieson, Kathleen Hall, and Albarracín, Dolores. 2017. “Debunking: A Meta-Analysis of the Psychological Efficacy of Messages Countering Misinformation.” Psychological Science 28 (11): 1531–46.
Chong, Dennis, and Druckman, James N. 2010. “Dynamic Public Opinion: Communication Effects Over Time.” American Political Science Review 104 (4): 663–80.
Druckman, James N. 2012. “The Politics of Motivation.” Critical Review 24 (2): 199–216.
Druckman, James N., Fein, Jordan, and Leeper, Thomas J. 2012. “A Source of Bias in Public Opinion Stability.” American Political Science Review 106 (2): 430–54.
Druckman, James N., Peterson, Erik, and Slothuus, Rune. 2013. “How Elite Partisan Polarization Affects Public Opinion Formation.” American Political Science Review 107 (1): 57–79.
Fiorina, Morris. 1981. Retrospective Voting in American National Elections. New Haven, CT: Yale University Press.
Flynn, D. J., Nyhan, Brendan, and Reifler, Jason. 2017. “The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs about Politics.” Political Psychology 38: 127–50.
Gerber, Alan, and Green, Donald. 1999. “Misperceptions about Perceptual Bias.” Annual Review of Political Science 2: 189–210.
Glaeser, Edward L., and Luttmer, Erzo F. P. 2003. “The Misallocation of Housing under Rent Control.” American Economic Review 93 (4): 1027–46.
Guess, Andrew, and Coppock, Alexander. 2020. “Does Counter-Attitudinal Information Cause Backlash? Results from Three Large Survey Experiments.” British Journal of Political Science 50 (4): 1497–515.
Hersh, Eitan. 2017. “Political Hobbyism: A Theory of Mass Behavior.” Working Paper. https://www.eitanhersh.com/uploads/7/9/7/5/7975685/hersh_theory_of_hobbyism_v2.0.pdf.
Hertel-Fernandez, Alexander, Mildenberger, Matto, and Stokes, Leah C. 2019. “Legislative Staff and Representation in Congress.” American Political Science Review 113 (1): 1–18.
Hjort, Jonas, Moreira, Diana, Rao, Gautam, and Santini, Juan. 2019. “How Research Affects Policy: Experimental Evidence from 2,150 Brazilian Municipalities.” National Bureau of Economic Research Working Paper. https://www.nber.org/papers/w25941.
Jenkins, Blair. 2009. “Rent Control: Do Economists Agree?” Econ Journal Watch 6 (1): 73–112.
Jerit, Jennifer, Barabas, Jason, Altman, Micah, Berinsky, Adam, Best, Jonathan, Campbell, Andrea, Claibourn, Michele, et al. 2006. “Bankrupt Rhetoric: How Misleading Information Affects Knowledge About Social Security.” Public Opinion Quarterly 70 (3): 278–303.
Kahan, Dan. 2010. “Fixing the Communications Failure.” Nature 463 (7279): 296–97.
Kahan, Dan M. 2013. “Ideology, Motivated Reasoning, and Cognitive Reflection.” Judgment and Decision Making 8 (4): 407–24.
Khanna, Kabir, and Sood, Gaurav. 2018. “Motivated Responding in Studies of Factual Learning.” Political Behavior 40: 79–101.
Klar, Samara. 2013. “The Influence of Competing Identity Primes on Political Preferences.” Journal of Politics 75 (4): 1108–24.
Kraft, Patrick W., Lodge, Milton, and Taber, Charles S. 2015. “Why People ‘Don’t Trust the Evidence’: Motivated Reasoning and Scientific Beliefs.” The ANNALS of the American Academy of Political and Social Science 658 (March): 121–33.
Kunda, Ziva. 1990. “The Case for Motivated Reasoning.” Psychological Bulletin 108 (3): 480–98.
Lee, Nathan. 2021. “Replication Data for: Do Policy Makers Listen to Experts? Evidence from a National Survey of Local and State Policy Makers.” Harvard Dataverse. Dataset. https://doi.org/10.7910/DVN/S2SNOT.
Lee, Nathan, Nyhan, Brendan, Reifler, Jason, and Flynn, D. J. 2021. “More Accurate, but No Less Polarized: Comparing the Factual Beliefs of Government Officials and the Public.” British Journal of Political Science 51 (3): 1315–22.
Lewandowsky, Stephan, and Oberauer, Klaus. 2016. “Motivated Rejection of Science.” Current Directions in Psychological Science 25 (4): 217–22.
Lodge, Milton, and Taber, Charles S. 2013. The Rationalizing Voter. Cambridge: Cambridge University Press.
McCabe, Katherine T. 2016. “Attitude Responsiveness and Partisan Bias: Direct Experience with the Affordable Care Act.” Political Behavior 38 (4): 861–82.
Mullinix, Kevin J. 2018. “Civic Duty and Political Preference Formation.” Political Research Quarterly 71 (1): 199–214.
Mummolo, Jonathan, and Peterson, Erik. 2018. “Demand Effects in Survey Experiments: An Empirical Assessment.” American Political Science Review 113 (2): 517–29.
National Academies of Sciences, Engineering, and Medicine. 2016. Genetically Engineered Crops: Experiences and Prospects. Washington, DC: The National Academies Press.
Nichols, Thomas M. 2017. The Death of Expertise: The Campaign against Established Knowledge and Why It Matters. Oxford: Oxford University Press.
Nyhan, Brendan. 2010. “Why the ‘Death Panel’ Myth Wouldn’t Die: Misinformation in the Health Care Reform Debate.” The Forum 8 (1): Article 5.
Nyhan, Brendan, and Reifler, Jason. 2015. “The Effect of Fact-Checking on Elites: A Field Experiment on US State Legislators.” American Journal of Political Science 59 (3): 628–40.
Peterson, Erik. 2019. “The Scope of Partisan Influence on Policy Opinion.” Political Psychology 40 (2): 335–53.
Prior, Markus, Sood, Gaurav, and Khanna, Kabir. 2015. “You Cannot Be Serious: The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions.” Quarterly Journal of Political Science 10 (4): 489–518.
Quirk, Paul J., Bendix, William, and Bächtiger, Andre. 2018. “Institutional Deliberation.” In The Oxford Handbook of Deliberative Democracy, eds. Andre Bächtiger, John S. Dryzek, Jane Mansbridge, and Mark Warren, 273–99. Oxford: Oxford University Press.
Schaffner, Brian F., and Luks, Samantha. 2018. “Misinformation or Expressive Responding? What an Inauguration Crowd Can Tell Us about the Source of Political Misinformation in Surveys.” Public Opinion Quarterly 82 (1): 135–47.
Sheffer, Lior, Loewen, Peter, Soroka, Stuart, Walgrave, Stefaan, and Sheafer, Tamir. 2018. “Nonrepresentative Representatives: An Experimental Study of the Decision Making of Elected Politicians.” American Political Science Review 112 (2): 302–21.
Taber, Charles S., and Lodge, Milton. 2006. “Motivated Skepticism in the Evaluation of Political Beliefs.” American Journal of Political Science 50 (3): 755–69.
Thorson, Emily. 2016. “Belief Echoes: The Persistent Effects of Corrected Misinformation.” Political Communication 33 (3): 460–80.
Vivalt, Eva, and Coville, Aidan. 2020. “How Do Policymakers Update Their Beliefs?” Working Paper. http://evavivalt.com/wp-content/uploads/How-Do-Policymakers-Update.pdf.
Zelizer, Adam. 2018. “The Effects of Informational Lobbying by a Legislative Caucus: Evidence from a Field Experiment.” Legislative Studies Quarterly 43 (4): 595–618.