In recent years, Americans' attitudes toward scientists and other experts have become more negative. Trust in the scientific community, for example, has declined steadily on the ideological right since the mid-1990s (Gauchat 2012) and has remained only moderately positive on the ideological left (Mullin 2017). This increased negativity has important implications for American political life by shaping citizens' preferences for anti-science political candidates and encouraging disbelief in scientific consensus (Motta 2017).
An important line of scholarly research focuses on individual-level factors that might explain why some Americans hold negative attitudes toward scientists. For example, several studies investigated the effects of citizens' knowledge about science, ideological conservatism, and the interaction between the two on attitudes toward scientists and science more broadly (Blank and Shaw 2015; Bolsen, Druckman, and Cook 2015; Gauchat 2012; Gauchat, O'Brien, and Mirosa 2017; Hofstadter 1963; Kahan et al. 2012; McCright et al. 2013; Sturgis and Allum 2004). Americans' religious preferences, perceptions of scientific consensus and understanding, and attitudes toward modernization also have been linked to their attitudes toward scientists (Gauchat 2008; 2012; Hofstadter 1963; McCright, Dunlap, and Xiao 2013; Nichols 2017).
Much less attention, however, has been given to scientists and experts themselves, especially regarding their involvement in politics (see Cofnas, Carl, and Woodley of Menie 2017 for a review). This is a notable shortcoming in the literature because President Trump's skepticism toward science and interference with scientific research (Tobias 2017) have led scientists to organize on behalf of their political interests.
This raises an important question: When scientists organize politically, and visibly, do their actions influence public opinion? The March for Science events that took place across the country in late April 2017 offered a unique opportunity to answer this question.
Leveraging online panel data from three days before and two days after the events, I found that liberals' and conservatives' attitudes about scientists and experts polarized immediately following the March for Science. Liberals and conservatives were divided in their attitudes toward scientists and experts before the March, and the March appears to have exacerbated these differences. Interestingly, although liberals and conservatives also were divided in their attitudes toward scientific research before the March, the events did not appear to polarize these attitudes. The results suggest that, in this case, "mobilized science" can have polarizing effects on the public's affect toward scientists and experts but does not necessarily affect attitudes toward the research that these individuals produce.
MOBILIZED SCIENCE AND PUBLIC OPINION
Although scholars have made important strides in understanding how individual-level factors affect attitudes toward science, fewer works consider how the political actions of scientists themselves might shape public opinion (however, see Brulle's 2018 critical reflection on the effectiveness of the March for Science). I refer to the public efforts of scientists, academics, and experts more broadly to advance their collective political interests as mobilized science. I conceptualize mobilized science as a general term to describe the efforts of these groups to draw attention to or take action on matters relevant to their shared goals.
The March for Science, which took place in April 2017, can be considered an example of mobilized science. It was organized by dozens of scientists and academics (March for Science 2017) in partnership with several preexisting interest groups devoted to the advancement of scientific interests (e.g., American Association for the Advancement of Science). Through an extensive social media campaign (March for Science 2017), the group organized 610 semi-autonomous "satellite" marches across the country (and the world). Today, the organization continues to operate by soliciting donations, supporting community-organizing efforts, and creating platforms by which interested visitors to their website can contact policy makers.
Critically, the marches received substantial attention in the popular press. The flagship March for Science in Washington, DC, had several celebrity hosts and guests (Gibson 2017) and even received Twitter attention from President Trump. High levels of popular attention to the March raise the possibility—at least in theory—that it may have had an impact on public opinion.
This article explores the possibility that the March for Science may have influenced the public’s attitudes about science, research, and expertise. I suspect that it may have polarized opinion along ideological lines, potentially taking one of the following forms.
The first possibility is the Affective Polarization Hypothesis. Fundamentally, the "public face" of the March for Science was the people participating in it. Although they gathered in support of several common goals—some of which concerned academic and scientific research (e.g., federal funding for research and hiring practices)—media coverage of the March itself focused primarily on who was doing the marching (Nyhan 2017; Smith 2017). Consistent with this view, some scientists voiced concern (before the March) that the events might encourage the public to view scientists as a "liberal constituency" (Mullin 2017). Under this hypothesis, then, the March should have polarized liberals' and conservatives' affect toward scientists and experts themselves, without necessarily changing their attitudes toward scientific research.
A second possibility is the Generalized Polarization Hypothesis. According to this model, the March for Science was a broadly polarizing event, encouraging conservatives (or liberals) to view both scientists and their research more negatively (or positively). Like the Affective Polarization Hypothesis, this view recognizes that the March may have polarized public opinion about scientists. In addition, however, consistent with recent insight on how citizens formulate political judgments (Lodge and Taber 2013), it holds that negative feelings toward these individuals might subsequently spill over to shape citizens' attitudes about related concepts (e.g., scientific research).
Although these expectations are exploratory, I suspect that the Affective Polarization Hypothesis is a particularly good candidate for explaining potential change in public opinion following the March for Science. Given the substantial media attention the March received in an increasingly polarized political landscape (Abramowitz 2010), its personal focus on those doing the protesting creates a clear possibility for polarization on the basis of affect toward scientists and experts.
Three additional notes bear mentioning. First, this study concerns the polarization of attitudes about scientists as a group. Although it is certainly possible that liberals (or conservatives) evaluate some types of scientists differently than others (McCright et al. 2013), recent survey research found that conservatives tend to be more distrusting of scientists writ large than liberals (Blank and Shaw 2015). Second, this study focuses on ideological polarization in an effort to speak directly to extant literature on the subject (Gauchat 2008; 2012). Given the strong correspondence between ideological self-placement and partisan identification, however, I consider whether the March polarized partisans on these issues in the supplementary materials. Third, an important caveat is that this study is only a first step in understanding how mobilized science shapes public opinion. Future research should explore the dynamics of elite polarization on mobilized science and how media coverage of it might influence opinion formation about scientists and their research (for more on this general phenomenon, see Bolsen and Druckman 2015 and Druckman, Peterson, and Slothuus 2013).
THE PANEL STUDY
To test these hypotheses, I fielded a two-wave panel study measuring public support for scientists, experts, and their research immediately before and after the March for Science. My purpose was to exploit the change in the salience of the March to observe how opinions about scientists and research shifted for the same individuals.
This design can best be thought of as quasi-experimental. In a true natural experiment, respondents would be assigned to naturally occurring treatment and control groups. Here, instead, I used what Shadish, Cook, and Campbell (2002) referred to as a "one-group (within-participants) pretest-posttest design," which means that all panel participants had the opportunity to be "treated by" (i.e., exposed to information about) the March. Consequently, I used tests designed to assess not the raw treatment effects of the March but rather the conditional treatment effects across ideological subgroups (identified before the treatment took place).
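To make the logic of this conditional test concrete, the specification can be sketched as a simple interaction model. The do-file fragment below is a minimal illustration of the setup rather than the study's actual code; the variable names (sci_therm, conservative, post, and resp_id) are assumptions introduced here for exposition.

```
* Minimal sketch of the conditional (within-participants) pretest-posttest
* test, assuming a long-format panel with one row per respondent-wave.
* post = 0 for Wave 1 (pre-March), 1 for Wave 2 (post-March);
* conservative = 1 for self-identified conservatives, 0 for liberals
* (moderates excluded, as in the main analysis).

regress sci_therm i.conservative##i.post, vce(cluster resp_id)

* The coefficient on 1.conservative#1.post is the difference-in-difference
* estimate: the pre-to-post change among conservatives minus the
* pre-to-post change among liberals.
```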
DATA
To construct a pre–post-March panel, I first surveyed 428 workers on Amazon’s Mechanical Turk (MTurk) on April 19, 2017, exiting the field two (full) days before the March for Science on April 22. I then recontacted all 428 individuals (using Turk Prime’s recontact feature) and invited them to participate in a second survey, taking place from 10 a.m. (CST) on April 24 to 10 a.m. on April 25. The second wave of the study produced a recontact rate of 83% and a completion rate of 82%, with a final N of 350.
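As a concrete sketch of how such a panel can be assembled, the fragment below matches the two waves on a shared worker identifier. The file and variable names (wave1.dta, wave2.dta, worker_id) are assumptions for illustration; the article does not describe the actual data files.

```
* Sketch of constructing the two-wave panel, assuming each wave is saved
* with the MTurk worker ID as the merge key (names are illustrative).
use wave1, clear
merge 1:1 worker_id using wave2, keep(match) nogenerate

* keep(match) restricts the data to respondents completing both waves,
* yielding the final panel (N = 350 in the study reported here).
save panel_matched, replace
```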
I fielded the study on these dates to assess respondents' opinions at a time when media coverage of the March for Science was low (Wave 1), followed by a time when coverage was high (Wave 2). Figure 1 demonstrates that the selection of these dates was well justified. News coverage of the March for Science was comparatively low when the study began fielding (i.e., N = 30 articles on April 19). Coverage grew rapidly after the first wave exited the field later that day, producing about 250 articles between April 22 and April 23, between the two waves.
Of course, the MTurk workers surveyed do not constitute a nationally representative sample (see table S1 for specific details about the sample's demographics). The raw proportions described in the results therefore may not generalize to the American population, an important caveat to bear in mind. Still, the movement in attitudes observed across ideological groups over time likely is valid for at least two reasons.
First, and critically, differences in opinion across waves are not necessarily biased by sample composition. As long as MTurk workers do not process or react to the March's increased salience differently than the rest of the public, change across waves is less likely to be biased. This appears to be a reasonable assumption because liberals and conservatives on MTurk have been shown to have psychological profiles similar to those surveyed in representative samples, making the site a valid outlet for research on political ideology (Clifford, Jewell, and Waggoner 2015).
Second, to the extent that MTurk and nationally representative samples differ, cross-sample discrepancies can be dramatically reduced with the inclusion of simple demographic controls in multivariate modeling (Levay, Freese, and Druckman 2016). For example, Levay and colleagues found that 93% of the difference in climate-change attitudes across MTurk and representative sampling can be accounted for with the addition of simple demographic controls (e.g., race, age, gender, and education).
MEASURES
There are two key groups of outcome variables in this analysis. The first concerns attitudes toward scientists and experts, and it is measured using five different variables. The first three variables are standard 101-point “feeling thermometers” toward “scientists,” “college professors,” and “intellectuals.” The remaining two variables ask respondents whether they agree or disagree (i.e., using a five-point Likert scale ranging from “Strongly Disagree” to “Strongly Agree”) with the following statements:
(1) “Scientists care less about solving important problems than their own personal gain.”
(2) “Most experts are untrustworthy.”
The second group contains two variables measuring citizens’ attitudes toward scientific research. Respondents again were asked whether they agreed or disagreed with the following two statements:
(1) “Most scientific research is politically motivated.”
(2) “You simply can’t trust most scientific research.”
The key independent variable in this study is respondents’ ideological self-identification. This was measured using a standard seven-point self-placement scale, ranging from “extremely liberal” to “extremely conservative.” At times, in the analyses that follow, I recoded this variable into a trichotomous indicator of whether individuals identified as liberals (i.e., all scores below the scale’s midpoint), moderates (i.e., the midpoint), or conservatives (i.e., all scores above the midpoint).
I controlled for respondents’ age, education, race (i.e., black and Hispanic indicators), income, gender (0 = male, 1 = female), and interest in politics in certain multivariate models. All controls were scaled to range from 0 to 1. Full wording of the questions for these variables is in the supplementary materials.
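The recodes described in this section are straightforward; the fragment below sketches them under the assumption of a 7-point ideology item scored 1 ("extremely liberal") through 7 ("extremely conservative"). The variable names (ideo7, age, age01) are illustrative, not taken from the study's codebook.

```
* Trichotomous ideology indicators from the 7-point self-placement scale
generate byte liberal      = (ideo7 < 4)  if !missing(ideo7)
generate byte moderate     = (ideo7 == 4) if !missing(ideo7)
generate byte conservative = (ideo7 > 4)  if !missing(ideo7)

* Example of rescaling a control variable to the 0-1 range used in the models
summarize age
generate age01 = (age - r(min)) / (r(max) - r(min))
```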
RESULTS
To test my theoretical expectations, I constructed several multivariate difference-in-difference tests. I chose this analytical design because it directly estimates the growth in polarization from before to after the March for Science and provides a statistical test of whether this movement differs significantly from what we would ordinarily expect by chance. Typically, this design is used to compare naturally occurring treatment and control groups (Ashenfelter and Card 1985). However, the quasi-experimental design described previously calls for a test of conditional treatment effects. Consequently, the treatment and control groups are pretreatment indicators of whether respondents self-identified as liberals or conservatives, respectively.
Four additional methodological points warrant mentioning. First, the difference-in-difference analyses were restricted to individuals completing both waves of the survey, with moderates excluded (the potential for polarization among moderates is discussed shortly). Second, due to the well-known tradeoffs of including covariates in quasi-experiments (Mutz 2011), I estimated difference-in-difference effects both with and without the covariates listed in the previous section. The results, discussed shortly, are quite similar across specification strategies. Third, I clustered standard errors at the respondent level in both sets of models, as often is recommended (Imbens and Wooldridge 2007). Fourth, in addition to presenting item-specific difference-in-difference tests, I guarded against the possibility of random measurement error by averaging each group of items into corresponding indices (Ansolabehere, Rodden, and Snyder 2008).
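Although the estimates reported in table 1 were produced with the DIFF package in Stata 13 (see the table notes), the same quantities can be obtained from ordinary interaction models. The fragment below sketches the index construction and the two specifications; all item and control names are assumptions introduced for illustration.

```
* Build the scientist/expert affect index by averaging its items
* (the thermometer index reported in table 1 has alpha = 0.85)
alpha sci_therm intel_therm prof_therm
egen affect_index = rowmean(sci_therm intel_therm prof_therm)

* Difference-in-difference without controls, clustered by respondent
regress affect_index i.conservative##i.post, vce(cluster resp_id)

* Re-estimated with the demographic and political-interest controls,
* each scaled to range from 0 to 1
regress affect_index i.conservative##i.post ///
    age01 educ01 black hispanic income01 female interest01, ///
    vce(cluster resp_id)
```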
The results, presented in table 1, are consistent with the Affective Polarization Hypothesis. In rows 1–7, which pertain to affect toward scientists and experts, I found significant increases in the difference between liberals and conservatives before and after the March for Science in six of seven models. Without controls, five produced estimates that were significant at the p<0.05 level (two-tailed); one approached conventional levels at the p<0.10 level. These results were similar when adding controls, except that the "experts are untrustworthy" item fell short of significance even at the p<0.10 level. In addition to being statistically significant, these effects were substantively large, ranging from a 4% change across ideological subgroups (i.e., college-professor affect in both specifications) to 11% (i.e., belief that scientists are motivated by personal gains; 12% change in the covariate specification).
Table 1 notes: N = 271 (moderates excluded; liberal N = 182, conservative N = 89). Multivariate difference-in-difference tests were calculated using the DIFF package in Stata 13. Models were run first without controls and then re-estimated controlling for respondents' gender, race, age, income, interest in politics, and educational attainment. Rows 1–3 are "feeling thermometers" toward "scientists," "intellectuals," and "college professors," respectively (row 4 index α = 0.85). Rows 5–6 ask whether respondents agree or disagree (1 = Strongly Disagree, 5 = Strongly Agree) with the following statements: (1) "Scientists care less about solving important problems than their own personal gain," and (2) "Most experts are untrustworthy" (row 7 index α = 0.76). Rows 8–9 ask respondents whether they agree or disagree with the following statements: (1) "Most scientific research is politically motivated," and (2) "You simply can't trust most scientific research" (row 10 index α = 0.80). All variables were scaled to range from 0 to 1.
Furthermore, the results do not provide any evidence that the March for Science polarized citizens’ attitudes toward scientific research—even when the items were averaged together to reduce measurement error. In all cases (rows 8–10) and across both specifications, I found small but statistically insignificant increases in polarization across waves. This is consistent with the idea that the March polarized liberals’ and conservatives’ attitudes about scientists and experts but not their research.
Finally, although I lacked a clear a priori expectation about how moderates might respond to the March for Science relative to either liberals or conservatives, follow-up tests suggested that they tended to follow conservative opinion after the March. Specifically, I re-ran the models in table 1, swapping self-identified conservatives for self-identified moderates (N = 77 for those taking both waves). The results in table S3 show that moderates, relative to liberals, did in fact become significantly more negative toward scientists following the March for Science.
ADDRESSING POTENTIAL CONFOUNDS
Before concluding, it is important to address three potential concerns with the results presented so far. First is the possibility of differential attrition. Theoretically, individuals who opted to take both waves of the study may have differed in their attitudes about science from those who were lost to attrition. Table S4 in the supplementary materials tested this possibility and revealed no significant differences between these two groups.
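A check of this kind can be run entirely on the Wave 1 data. The fragment below sketches one version of it, assuming an indicator for panel completion (completed_w2) and illustrative Wave 1 outcome names; none of these names come from the study itself.

```
* Differential-attrition check: compare Wave 1 attitudes between
* respondents who completed Wave 2 and those who attrited.
ttest sci_therm,      by(completed_w2)
ttest research_trust, by(completed_w2)

* Nonsignificant differences across the two groups would mirror the
* pattern reported in table S4.
```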
Second, a common issue with quasi-experimental designs is disentangling treatment effects from broader time trends. Although this study was conducted over the course of six days, it (theoretically) could be the case that the passage of time itself—and not the March for Science—increased polarization.
To test whether this was the case, I added a nonequivalent dependent-variable component to the difference-in-difference tests in table 1, as Shadish, Cook, and Campbell (2002) recommended. This is akin to a placebo test, in which the goal is to run the same models in the table using outcome variables that also should be polarized across ideological lines but whose polarization should not be expected to grow over the six-day span. Failing to observe significant difference-in-difference estimates on the nonequivalent dependent variables would provide added confidence that the March for Science—and not the passage of time more broadly—led to affective polarization.
I did this by swapping respondents’ attitudes toward Muslims and immigrants for the variables listed in table 1. The results presented in table S5 in the supplementary materials reveal no significant difference-in-difference estimates between liberals and conservatives across waves. This provided added assurance that the quasi-experimental design was not confounded by the passage of time.
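Operationally, the placebo test reuses the specification from the main analysis with the substitute outcomes swapped in. The fragment below is a sketch under the same assumed variable-naming conventions as before; muslim_therm and immig_therm are illustrative names, not the study's own.

```
* Nonequivalent dependent-variable (placebo) tests: same interaction
* model, with attitudes toward Muslims and immigrants as outcomes.
regress muslim_therm i.conservative##i.post, vce(cluster resp_id)
regress immig_therm  i.conservative##i.post, vce(cluster resp_id)

* Null interaction terms here suggest that the growth in polarization is
* specific to the scientist/expert items, not a general time trend.
```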
Third, given the high correspondence between partisanship and ideology (Bafumi and Shapiro 2009), I re-ran all difference-in-difference models using indicators of whether respondents self-identified as Democrats or Republicans. The results in table S2 of the supplementary materials show that the effects were similar.
DISCUSSION
These results provide a unique look into the polarizing effects of the March for Science on public opinion. Although liberals and conservatives held differing opinions toward scientists, experts, and scientific research before the March, its aftermath appears to have exacerbated those differences. Consistent with the Affective Polarization Hypothesis, I observed these effects only with respect to citizens' attitudes toward scientists and experts themselves, not the research they produce.
Of course, these analyses are not without limitations. As discussed previously, I drew these conclusions from a non-representative sample of Americans. Although the amount of change observed across ideological groups may not differ in more-representative samples, the raw estimates of where liberals and conservatives stand on each item should be interpreted with caution. Furthermore, I studied polarization in response to only one naturally occurring instance of mobilized science. Studying future instances of mobilized science can provide further validation of these results.
Overall, this study advances our understanding of how mobilized science influences public opinion about scientists on two fronts. First, it offers novel insights into an understudied topic in political science: how scientists' political actions shape public opinion about scientists themselves. Second, it identifies an important practical tradeoff for those involved in the mobilization of science. As scientists organize to combat skepticism and interference from the Trump administration, they may indeed win support from those most congenial to their cause. However, they risk losing support among those who are less sympathetic. Whether this tradeoff is worth the cost is a question that should surround future mobilized-science efforts.
SUPPLEMENTARY MATERIAL
To view supplementary material for this article, please visit https://doi.org/10.1017/S1049096518000938