Public opinion shifts are hard to study because we cannot anticipate events likely to move opinion, such as political scandals and natural disasters, and thus must use data collected after an event to make inferences. We can provide new information to respondents through experiments—but is the information in an experimental treatment truly new? Experimental studies have been criticized for assuming respondents are “clean slates” (Gaines et al. 2007, 17) without prior information “shap[ing] attitudes . . . and condition[ing] responses to the experimental stimuli” (Druckman and Leeper 2012, 876). We can only properly interpret survey experiments, according to Sniderman (2011, 109–110), if we understand how pre-treatment affects the responses we receive. Designing experiments around important real-world events makes pre-treatment problems especially likely (Druckman and Leeper 2012, 875, 888; Gaines et al. 2007, 12).
This paper makes two important contributions to the literatures on experiments, public opinion, and political communication. First, we find that pre-treatment can lead not only to underestimation of message effects, as prior work theorizes, but also to overestimation. Second, we explore how pre-treatment effects vary with the nature of prior media communications, focusing on message clarity.
THEORETICAL CONTRIBUTION
A basic pre-treatment problem occurs when an experiment repeats information respondents have received in the real world. If someone exposed to a message during an experiment has already encountered this information and does not further change opinion during the study, we would conclude the message had no effect. Instead, say Chong and Druckman (2010, 664), the more accurate conclusion is that the message had “no effect in the study, not that it had no impact in reality,” as additional message “doses” may not move opinion (e.g., Cacioppo and Petty 1979, 105; Iyengar et al. 1984, 781; Malhotra and Krosnick 2007, 269).
A second pre-treatment problem emerges if a subject encounters a real-world positive frame favoring a position and is then exposed to a negative experimental frame against this position; her net opinion shift depends on the frames’ comparative strength (see, e.g., Chong and Druckman 2007). If we are not aware of the real-world pre-treatment, we might overestimate the negative frame’s impact on the position in question. Yet we know very little about how real-world media messages affect pre-treatment; it is difficult to carefully measure the information respondents received through the media, and “most published work on media effects does not include measures of media content” (Barabas and Jerit 2009, 74).
Only a few studies examine how prior exposure to real-world news shapes experimental results. Druckman and Leeper (2012) demonstrate that hearing about a casino proposal reduces the impact of experimental treatments on opinion change, with some examination of news coverage of the proposal. Other studies treat media exposure in a more limited way: Fowler and Gollust (2015, 164) suggest that an experiment’s null results could stem from prior exposure to similar local media frames, and Slothuus (2015, 18–19) suggests experiments might yield underestimates if the information in the treatment is not surprising. To our knowledge, ours is the first study to link respondents’ news consumption to content analysis of that news, allowing us to see how coverage shapes pre-treatment.
To study the full effects of pre-treatment, we designed two experiments around events receiving extensive national coverage—the 2012 Supreme Court rulings on the Affordable Care Act and on Arizona’s immigration law. Supreme Court rulings are ideal for studying political communication in real-world settings. First, major Court decisions attract extensive press coverage, letting us examine aspects of the coverage, such as message clarity. Second, the topic and approximate date of major decisions can be ascertained in advance, allowing us to field a before/after survey with precise estimates of opinion change (Mutz 2011, 93). Third, the direction and scope of Court decisions often come as a surprise; even highly informed respondents can receive new information.
Although scholars have started incorporating measures of pre-treatment, we argue that “pre-treatment” does not always imply a single effect. We test whether pre-treatment can mask true framing effects, as prior studies hypothesize, and whether pre-treatment can also lead experimenters to overestimate message effects.
Imagine a researcher repeats information about the Supreme Court’s endorsement of Obamacare: Because of pre-treatment effects, he might see no increase in support in his experiment, even though support increased in real life right before the experiment, as prior studies have cautioned. We raise a new worry here: Imagine the researcher repeats the information first conveyed through the media, while also informing respondents the decision was contested, with some justices claiming the government should not be able to force people to buy health care. He might observe a decrease in support for Obamacare following a Supreme Court endorsement, if respondents react negatively to this new information (even though the media bumped up support before the experiment in both the treatment and the control group).
Complex events mean the potential effects of pre-treatment are less clear. A researcher might (a) repeat information respondents have already heard or (b) present information with a different (though perhaps more accurate) valence than the media’s. Pre-treatment could, therefore, lead us to under- or overestimate true framing effects, a concern in an era with tremendous partisan media consumption.
Hypothesis 1: When researchers repeat information respondents have heard from the media, pre-treatment effects can mask true opinion change, leading researchers to underestimate effects.
Hypothesis 2: When respondents hear information from the media, and then hear new, opposing information from researchers, pre-treatment effects can cause researchers to overestimate the new information’s impact.
Our study is also the first to examine how message content can influence pre-treatment, focusing on clarity. Unclear messages present a case of “hard” learning (e.g., Zaller 1992, 125–16), in which muddled coverage makes receiving a message difficult, preventing opinion change. Scholars have found that clear, unambiguous messages can yield larger opinion shifts than ambiguous messages (e.g., Wallace 2013, 128–129). Thus, when media messages are clear, pre-treatment effects are likely to be larger than when messages are unclear and less real-world learning occurs.
Hypothesis 3: Clarity can moderate pre-treatment, with clear messages leading to larger pre-treatment effects than unclear messages.
By studying how six evening news programs—on ABC, CBS, NBC, CNN, Fox News, and MSNBC—covered each Court decision, we identify significant variation in the clarity of the real-world messages respondents received. We combine content analysis of these programs with our opinion data. In the following sections, we describe the studies’ background and design, the pre-treatment environment surrounding the real-world events, and the results of the experiments we placed in those environments. We illustrate how measuring only aggregate effects can both miss true opinion change and overestimate informational effects, and we show how message clarity can moderate pre-treatment effects.
STUDY DESIGN
Our studies—the health care study involving NFIB v. Sebelius (2012) and the immigration study involving Arizona v. United States (2012)—use two-wave, nationally representative samples. The waves were fielded in the weeks before and the days after the June 2012 rulings. Respondents were asked about their support for or opposition to the challenged provision (either the individual mandate or Arizona’s “show your papers” law) in each wave of the survey.
Prior to being asked for their opinion in wave 2, respondents received one of four possible treatment assignments: no additional information about the ruling, or a reminder consisting of (a) a summary of the decision (R1), (b) a summary plus an argument in favor, drawn from the majority opinion (R2), or (c) a summary plus both the argument in favor and an additional argument drawn from the dissent or concurrence (R3).
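For concreteness, the assignment of wave-2 conditions could be sketched as follows. This is our illustrative code, not the survey platform actually used; the condition labels mirror the text, while the function and variable names are ours.

```python
import numpy as np
import pandas as pd

# Hypothetical sketch of the wave-2 assignment: each respondent is randomly
# placed in the control condition (no additional information) or one of the
# three reminder conditions described above.
CONDITIONS = ["control", "R1_summary", "R2_summary_plus_majority",
              "R3_summary_plus_majority_and_dissent"]

def assign_conditions(respondent_ids, seed=2012):
    """Return a DataFrame pairing each respondent with a randomly drawn condition."""
    rng = np.random.default_rng(seed)
    return pd.DataFrame({
        "respondent_id": list(respondent_ids),
        "condition": rng.choice(CONDITIONS, size=len(respondent_ids)),
    })
```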
Following prior work on Court rulings and public opinion (e.g., Egan and Citrin 2011), our dependent variable takes three values: 0 if respondents did not change their opinion between the two waves, 1 if they increased their support, and −1 if they reduced their support. Mean opinion in both studies shifted from wave 1 to wave 2, with respondents becoming significantly more supportive of the provisions.
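A minimal sketch of this coding, assuming wave-1 and wave-2 support are measured on the same scale (the function and argument names are ours):

```python
import numpy as np

def opinion_change(support_w1, support_w2):
    """Code opinion change as +1 (more supportive in wave 2), -1 (less
    supportive), or 0 (no change), matching the dependent variable above."""
    return np.sign(np.asarray(support_w2) - np.asarray(support_w1)).astype(int)
```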
We use an indirect measure of exposure, rather than a measure of knowledge, to classify pre-treated individuals; respondents indicated whether they had heard various political and non-political headlines from the week of the Court ruling. A knowledge measure (such as whether respondents could correctly identify whether the Court upheld a law) can lead to bias when information is unclear (Druckman and Leeper 2012). In our immigration experiment, which involved unclear real-world information, supporters of the immigration restriction were far more likely than opponents to believe it had been upheld.
Respondents were classified as pre-treated (or not) based on whether they indicated having seen recent news headlines. The analyses split the full samples into pre-treated and not-pre-treated groups. We show consistent alternative results in the Supplemental appendix, including models without control variables and models using interaction terms rather than split samples.
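The classification and split-sample logic can be sketched as below; the headline threshold is study-specific (it is described for each study in the sections that follow), and the data-frame and column names are hypothetical:

```python
import pandas as pd

def classify_pretreated(df, headline_cols, min_headlines):
    """Flag a respondent as pre-treated if they report having seen at least
    `min_headlines` of the headlines in the news attentiveness question.
    Assumes `headline_cols` are 0/1 indicators for each headline."""
    return df[headline_cols].sum(axis=1) >= min_headlines

def split_samples(df, pretreated):
    """Return the pre-treated and not-pre-treated subsamples, which are then
    analyzed separately."""
    return df[pretreated], df[~pretreated]
```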
STUDY 1: HEALTH CARE
Study 1 concerns the Supreme Court’s upholding of the Affordable Care Act’s individual mandate in the 2012 NFIB v. Sebelius decision. A powerful dissent emphasized that Americans should not be forced to buy a product they do not want.
Pre-Treatment Environment
Coders assessed whether a viewer of each program would correctly understand that the provision we studied—the individual mandate provision of the health care law—had been upheld. Both coders agreed that all six networks clearly presented the Court’s ruling on the individual mandate. Media coverage of the health care ruling was thus widespread and clear, and we expected extensive and strong pre-treatment effects.
We felt a low threshold for pre-treatment—classifying most individuals as “pre-treated”—was appropriate, given the broad and clear coverage of the ruling. About 90% of the sample reported seeing at least one of the seven news headlines in our attentiveness question, and we classified these respondents as pre-treated. Our results are robust to alternative ways of distinguishing pre-treated individuals.
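Under the hypothetical classify_pretreated helper sketched earlier, and with hypothetical data and column names, this Study 1 rule would amount to:

```python
# Study 1 rule described above: seeing at least one of the seven headlines
# counts as pre-treated (roughly 90% of the health care sample).
health_care_df["pretreated"] = classify_pretreated(
    health_care_df, HEADLINE_COLS, min_headlines=1)
```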
Results
A naive analysis that does not account for pre-treatment would lead us to conclude, falsely, that the Court’s upholding of the individual mandate reduced support for the mandate. Separating out pre-treated respondents yields more theoretically plausible findings. Table 1 presents three sets of results: the overall effect, the effect among people receiving the information for the first time (not pre-treated), and the effect among people who had likely already received the information from the media (pre-treated). The “overall effect” column suggests that experimental information indicating the Court had upheld the individual mandate either made no difference (R1 and R2) or reduced support for the mandate (R3). When we separate out the pre-treated group, more plausible patterns emerge. People unlikely to have already heard about the decision significantly increased their support for the mandate after learning the Court had upheld it, consistent with Hypothesis 1: Pre-treatment leads us to underestimate true effects, and we would miss the significant shifts in support for the individual mandate if we did not consider the not-pre-treated group separately.
Table 1 Opinion Change, Health Care
Note. See Table A13 in the Supplemental appendix for the full models.
***p < 0.01, **p < 0.05, *p < 0.10.
When we add an argument from the dissent, we find that not-pre-treated respondents did not change their original views. These results are consistent with the literature on the Court and public opinion, and on framing effects: Endorsements from trusted actors can increase support for a position, but these effects are mitigated when respondents are also exposed to competing frames (e.g., Chong and Druckman 2007).
Among the group likely to have been exposed to the Court’s decision, hearing about the dissent led some respondents to reduce their support for the mandate. Even relatively well-informed respondents had likely not encountered the dissent, which received less than 2% of the six networks’ Court coverage (see the appendix, Section 3). This supports Hypothesis 2: Pre-treatment can lead to overestimation if we present pre-treated individuals with a different take on information to which they have already been exposed. These findings are also consistent with the literature on message repetition: Respondents do not continue to shift their views when information is repeated through different sources—the media and then the experiment.
STUDY 2: IMMIGRATION
In June 2012, the Supreme Court also ruled on Arizona v. United States, which concerned Arizona’s restrictive immigration law. The most controversial provision of the law—the “show your papers” provision, allowing police to check the immigration status of anyone they stop if they believe the person is in the country illegally—was upheld as constitutional, while three other provisions were struck down. The decision to uphold this provision was unanimous but featured a concurrence, in which some justices, though voting to uphold the “papers” provision, noted its potential for civil rights violations.
Pre-Treatment Environment
Media coverage of the immigration ruling was less widespread and much less clear than the health care coverage. Importantly, the programs varied greatly in whether they focused on the most controversial provision of the immigration law—the “show your papers” provision—or on the other provisions that were struck down. Four evening news programs emphasized that key portions of the law had been upheld (ABC, CBS, CNN, and Fox News), while the other two (NBC and MSNBC) instead highlighted that key portions of the law had been struck down.
Our coders concluded that NBC and MSNBC had presented the ruling in an unclear manner, meaning viewers could have paid attention and still not correctly understood that the Court upheld the “papers” provision. In our study, 23% of respondents thought they understood the Court ruling but incorrectly believed the “papers” provision had been struck down. Given the more limited and confusing coverage of the immigration ruling, relative to health care, fewer individuals warranted classification as “pre-treated” than in Study 1. For the immigration study, we classified as “pre-treated” the group that indicated having seen at least five of the seven headlines included in our news attentiveness question.
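With the same hypothetical helper and column names as before, this Study 2 rule would be:

```python
# Study 2 rule described above: a higher bar, given the thinner and less
# clear coverage -- at least five of the seven headlines.
immigration_df["pretreated"] = classify_pretreated(
    immigration_df, HEADLINE_COLS, min_headlines=5)
```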
Results
As a result of this classification, the overall effect more closely reflects the effect among the not-pre-treated group. Table 2 presents these results, using OLS models with standard demographic control variables that could affect opinion change on this issue. Presenting information beyond the outcome of the decision (either the majority argument, or the majority argument plus the concurrence) significantly moved opinions toward support for the provision. Among the pre-treated group, the coefficients for all three experimental reminders were positive, but none reached conventional levels of significance.
Table 2 Opinion Change, Immigration
Note. See Table A14 in the Supplemental appendix for the full models.
***p < 0.01, **p < 0.05, *p < 0.10, +p = 0.10.
The similarity in effect size between the pre-treated and not pre-treated groups in the immigration study, however—particularly when contrasted with the large differences between these groups in the health care study (Table 1)—suggests pre-treatment effects may be more complex than expected.
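As an illustration of the kind of split-sample OLS models reported in Table 2, a sketch along the following lines could be used; the formula, variable names, and set of demographic controls are hypothetical stand-ins, not the exact specification (which is reported in the Supplemental appendix):

```python
import statsmodels.formula.api as smf

# Hypothetical specification: opinion change regressed on the experimental
# condition plus generic demographic controls.
FORMULA = ("change ~ C(condition, Treatment(reference='control')) "
           "+ age + female + education + party_id")

def fit_split_models(df, pretreated_col="pretreated"):
    """Fit the same OLS model on the full sample and on the pre-treated and
    not-pre-treated subsamples, mirroring the three sets of results."""
    return {
        "overall": smf.ols(FORMULA, data=df).fit(),
        "not_pretreated": smf.ols(FORMULA, data=df.loc[~df[pretreated_col]]).fit(),
        "pretreated": smf.ols(FORMULA, data=df.loc[df[pretreated_col]]).fit(),
    }
```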
Clarity as a Moderator of Pre-Treatment Effects
We know that individuals’ characteristics can moderate pre-treatment effects (e.g., Druckman and Leeper 2012). Characteristics of the message itself could also shape these effects, though this possibility has not yet received attention in the pre-treatment literature. To investigate how message clarity moderates pre-treatment effects, we divided the pre-treated respondents in the immigration study according to whether they received clear or unclear information from the evening news programs they typically watch. Even an attentive respondent could reasonably have misunderstood the ruling when exposed to unclear information.
We also separated the six news programs by whether their coverage was supportive or critical of the Court ruling, as the media’s chosen frames could affect our results. Programs are classified as one-sided if a majority of their coverage used frames supportive of or neutral toward the Court ruling (Fox News and CBS), and as two-sided if that threshold was not met (ABC, CNN, MSNBC, and NBC).
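These program-level codes can be expressed as a simple lookup and merged onto the pre-treated respondents by the program they report typically watching; the coding follows the text, while the merge and the column names are a hypothetical sketch:

```python
import pandas as pd

# Program-level codes from the content analysis described above: NBC and
# MSNBC were coded as unclear; Fox News and CBS as one-sided (a majority of
# coverage supportive of or neutral toward the ruling).
PROGRAM_CODING = pd.DataFrame({
    "program":   ["ABC", "CBS", "NBC", "CNN", "Fox News", "MSNBC"],
    "clear":     [True,  True,  False, True,  True,       False],
    "one_sided": [False, True,  False, False, True,       False],
})

def add_program_coding(pretreated_df, program_col="usual_program"):
    """Attach clarity and frame codes to pre-treated respondents based on the
    evening news program they report typically watching."""
    return pretreated_df.merge(PROGRAM_CODING,
                               left_on=program_col, right_on="program",
                               how="left")
```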
Table 3 suggests that people who received unclear information from the media shifted their opinions in response to the experimental treatments, supporting Hypothesis 3. Opinion shifts were smaller—and pre-treatment effects larger—for people who received clear and one-sided information. Our results on message clarity should be interpreted as first findings rather than definitive conclusions.
Table 3 Opinion Change Among Pre-treated Group by Information Clarity and Frames, Immigration
***p < 0.01, **p < 0.05, *p < 0.10.
CONCLUSION
Even if we can predict that an event will happen, and can measure opinion before and after it happens, we still must appreciate the ways pre-treatment can bias our results. Through a study of opinion change around two major events, we demonstrated that pre-treatment can cause us both to underestimate and to overestimate framing effects. Further, we showed that message characteristics can moderate the pre-treatment process: Clarity affects the degree to which respondents receiving real-world information will actually be pre-treated.
Crucially, we have shown that “controlling for” pre-treatment may bias results in unexpected directions. We suggest researchers develop measures of the relevant media environments for their studies, and of the prior information respondents may have received. To address the possibility of pre-treatment, it is necessary to understand not only the volume but also the nature of the information (its framing and clarity) to which respondents were previously exposed. Pre-treatment problems are likely to be most severe when real-world information is extensive, clear, and one-sided; researchers must account for this or risk misinterpreting their experimental results.
Further work is still needed to expand on our findings and to understand other dimensions of pre-treatment. We study the dissemination of information at a discrete moment in time and show how experiments that closely follow this dissemination should be interpreted. In real life, information is often repeated, extended, or contested, and it is critical to study how cumulative exposure to prior information influences subsequent experimental results.
SUPPLEMENTARY MATERIALS
To view supplementary material for this article, please visit https://doi.org/10.1017/XPS.2017.29