Introduction
Fundamental to democratic governance is the willingness and ability of citizens to hold elected officials responsible for their actions and decisions. Electoral accountability requires that citizens be responsive to new information and appropriately update their opinions on the basis of this information so as to correctly reward and punish elected leaders (Key 1966; Dahl 1989). This task is not without its challenges. Not only must citizens pay attention to new information, but they must also be able to interpret the meaning and relevance of such information with respect to their existing beliefs and be willing to update those beliefs in light of this new information.
Public policy reforms are increasingly mandating the reporting of objective performance information with the expectation that greater information means more accountability and superior outcomes (James 2011). In practice, however, the connection between information and opinions may be a tenuous one. Bayesian models of political learning demonstrate that, depending on the content, strength and stability of prior beliefs and other factors, new information can produce either no learning or biased opinion change (Zechman 1979; Achen 1992; Gerber and Green 1999; Achen and Bartels 2006; Bullock 2009). For example, new information consistent with one's prior beliefs will not lead to opinion updating (Hutchings 2003), and information inconsistent with prior beliefs may lead to little or no updating for recipients whose prior beliefs are strong – as may be the case for strong partisans whose opinions are based largely on ideological considerations (e.g. Berelson et al. 1954; Campbell et al. 1960) or in cases where there are disagreements (including partisan or ideological ones) about the interpretation of the information (e.g. Bartels 2002; Lenz 2009). Still other citizens may fail to appropriately update their beliefs in light of new information because they are either uninterested or do not understand the importance of the information (MacKuen 1984). It is also unclear whether information itself may change attitudes given the importance of local conditions and personal experiences (see e.g. Erikson and Stoker 2011; Egan and Mullin 2012); individual experiences may be more influential than policy-relevant information.
Testing the connection between information and beliefs or opinions is critically important for evaluating the health of a representative democracy, but it is also a difficult empirical puzzle. The difficulty arises because, in everyday life, individuals selectively expose themselves to information (e.g. Prior 2007). Even if they choose to consume news, they also choose which type of information to consume (e.g. Stroud 2009).
As a result, there is a robust debate regarding the nature of accountability and whether the public holds public officials responsible for outcomes in seemingly irrational (e.g. Achen and Bartels 2002; Healy et al. 2010) or rational (e.g. Malhotra and Kuo 2008) ways (but see Ashworth 2012). We contribute to the critical task of assessing the prospects for democratic accountability by testing whether information about policy outcomes impacts citizens' evaluations of public officials and policy proposals. Given the importance of past performance for future assessments (Woon 2012), we determine whether and how citizens update their initial beliefs about policy in response to objective information about recent performance outcomes for an issue that is salient, important and consequential for the functioning of democracy: public education (Dewey 1916).
Although most of the literature on government accountability focuses on voters' responses to economic performance (e.g. Fiorina 1981; Hibbing and Alford 1981; Stein 1990; Markus 1992; Rudolph 2003), the presumed linkage between information and accountability is perhaps clearest in public education reforms in the United States. During the so-called "standards-based accountability movement" of the 1990s, many states began testing students against a set of standards for each grade and subject on an annual basis to create ratings for school performance (Hanushek and Raymond 2005). By 2001, 45 states had created and published "report cards" on schools based on student test performance, and 27 of them used an explicit rating system to identify low performers (Figlio and Ladd 2008). The enactment of No Child Left Behind (NCLB) in 2002 applied such a test-based rating system to every school district in the nation.
The primary rationale for publicising school performance is that such information empowers parents (or the community) to pressure relevant decision-makers – including school staff, local school board members, state officials and others – to increase the performance of less effective schools by finding new resources or using existing resources more efficiently (Dorn 1998; Loeb and Strunk 2007). This "bottom-up" pressure may take the form of informal communication, moving one's child elsewhere or, in the case of elected officials, voting for new representation (Berry and Howell 2007). The implicit theory of action assumes that citizens absorb and act upon school performance information.
Education is an appropriate and important focus for such an investigation for many reasons. Not only is it an area of policy where reforms have focused on increasing data collection and dissemination to promote bottom-up accountability, but the primary policy objective in education – increasing student learning – is also clear and uncontested. There are certainly disagreements over how best to achieve increased student performance or the optimal role for the federal government, and some may care about the policy more than others [e.g. parents with children or homeowners for whom education quality is capitalised into housing values (Black 1999)], but when it comes to evaluating the performance of state governments on statewide educational performance for a given expenditure level, citizens are unlikely to disagree about the need to maximise student performance. The policy outputs of education policy are also directly measurable, comparable and widely available because of the standardised testing regime. State laws and education policies are also made by officials who are either directly elected (in the case of local school boards) or who are overseen by elected officials (in the case of the Department of Education) and who are therefore, in principle, responsive to voters (Berkman and Plutzer 2005). Finally, public education is an important and essential public good. Understanding the foundations of education policy is an important undertaking for political scientists given the close linkages among education, citizenship and democracy (Gutmann 1987).
Education is also an area in which a few existing studies have found connections between the availability of performance information and citizens' views (e.g. Schneider et al. 2002). For example, parents give higher survey ratings to schools in New York City that have higher test score performance, attendance rates and scores on district quality reviews (Favero and Meier 2013). Chingos et al. (2012) show that survey respondents rate their schools higher when student proficiency rates are greater and that this correlation is higher among parents. They also present mixed evidence that higher state-assigned accountability grades lead to higher citizen ratings, even for schools that perform very similarly, which suggests that respondents respond to information provision. Henderson et al. (2014) similarly find that survey respondents given information about how their local districts compare to state or national averages rate their local schools less favourably, even when their district outperforms those averages. They also find that these newly informed respondents express stronger support for universal (though not targeted) voucher programs and charter schools, suggesting that information on performance can inform policy preferences.
Utilising a similar approach to Henderson et al. (2014), we explore the potential for citizen-driven accountability in education policy by examining the impact of information on citizens' evaluations of institutional performance and their education policy opinions. Given the methodological difficulties of estimating the effects of information when citizens selectively expose themselves to information, we conduct a survey experiment of 1,500 randomly selected Tennesseans. We measure their prior beliefs about statewide educational performance and the connection between their beliefs and their opinions about both public officials and proposed reforms. We then investigate whether and how they update those opinions in response to objective, non-partisan performance information that, in many cases, challenges their prior beliefs. We then characterise whether the effect of information on opinion formation differs by respondent characteristics and the type of performance information that is provided. We show that, while the information does appear to result in citizens updating their beliefs about the institutions of government responsible for education policy in meaningful ways, there is no evidence that the information also affected support for the various policy reforms that have been proposed to boost student performance and decrease the performance gap between races.
The role of information in updating beliefs
Understanding how objective and verifiable information affects the evaluation of public officials and opinions about public policies is critical for evaluating the prospects for democratic accountability. A necessary condition for “bottom-up” accountability is that the opinions of enough citizens must be responsive to new information and experiences to create electoral incentives for elected officials. If citizens ignore new information, or if new information is interpreted in accordance with existing partisan or ideological leanings, then there may be no independent effect of information on opinion formation.
Deriving hypothesised effects requires modelling how individuals process new information and adjust their opinions. Our interest is in assessing how new information affects citizens’ support for policies and institutions and the subsequent implications for democratic accountability. Although there are many models of cognitive processing, given our interests, we seek a suitably broad framework that can accommodate the possibility of both null and differential effects.
To this end, we use a Bayesian model of political learning (Zechman 1979; Achen 1992; Gerber and Green 1999; Bullock 2009). By reinterpreting how strong prior beliefs might be and what such beliefs entail, it is possible to accommodate the possibility of partisan bias (e.g. Campbell et al. 1960; Rahn 1993; Bartels 2002; Lenz 2009) or spur of the moment processing based upon primed considerations (e.g. McGuire 1969; Zaller 1992). A model of Bayesian learning can generate predictions ranging from no learning to biased updating depending on the model's parameters (Achen and Bartels 2006; Bullock 2009). To be clear, our intention is not to "test" this model, but rather to use it to motivate the empirical investigation that follows and to illustrate how a variety of differential effects of information may arise even while holding constant a hypothesised model of citizen learning.
We want to characterise how new information (x) changes individuals' beliefs. Suppose that individual i's opinion about an issue or public official is denoted by μ (for clarity, we drop the individual subscript unless the between-individual variation is relevant). Suppose further that prior beliefs can be thought of as being normally distributed with mean μ_P and variance σ_P². That is, absent new information, asking i about her opinion on an issue will result in responses centred at μ_P, but there may be variation due to transient effects [e.g. priming, the ambiguities of how the question is interpreted or other reasons why the survey response may contain error (Zaller and Feldman 1992)]. Opinions may be extremely stable (i.e. σ_P² is small), as might be the case if the individual is a highly educated partisan with very strong beliefs (e.g. Popkin 1994), or they may be extremely variable (i.e. σ_P² is large), as might be the case if the individual has no political attitudes and has never thought about the issue before being asked about it in the survey.
The effect of new information is simply the change in opinion that results. If the effect of new information can be thought of as being normally distributed with a mean of μ_I and a variance of σ_I², Bayesian updating requires that the opinion of individual i is a combination of prior beliefs and the new data, with the impact of each determined by the relative strength of each. Mathematically, the new (posterior) opinion is
μ = (μ_P/σ_P² + μ_I/σ_I²) / (1/σ_P² + 1/σ_I²)
with the precision of the new belief being given by 1/σ_P² + 1/σ_I². The effect of the new information is the difference μ − μ_P.
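The precision-weighted update above can be sketched in a few lines of Python. This is an illustration of the model, not the authors' estimation code, and the prior and signal values are hypothetical:

```python
def bayes_update(mu_p, var_p, mu_i, var_i):
    """Combine a prior belief (mu_p, var_p) with new information
    (mu_i, var_i) by precision weighting, as in the formula above."""
    precision = 1.0 / var_p + 1.0 / var_i          # posterior precision
    mu_post = (mu_p / var_p + mu_i / var_i) / precision
    return mu_post, 1.0 / precision                # posterior mean, variance

# A weakly held prior (large variance) moves nearly all the way to the
# signal (here, a hypothetical true proficiency rate of 34%)...
loose, _ = bayes_update(70.0, 100.0, 34.0, 1.0)    # -> about 34.4

# ...while a strongly held prior (small variance) barely moves at all.
tight, _ = bayes_update(70.0, 1.0, 34.0, 100.0)    # -> about 69.6
```

The two calls show how the same piece of information can produce either substantial or negligible updating, depending only on σ_P² and σ_I².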
This seemingly sparse model reveals several possible effects of new information. One possibility is that there is no effect: μ − μ_P = 0. A null effect is possible for several reasons. First, the information that is provided may already be known or be consistent with existing opinions [e.g. individuals for whom the issue is relevant may possess correct beliefs (Hutchings 2003)]. If so, μ_I = μ_P, and we would obviously expect no difference. Second, existing beliefs may be so strong as to make the new information irrelevant. If 1/σ_P² > 1/σ_I² and the difference is large, opinions may be unchanged even if μ_I ≠ μ_P and the difference is dramatic. This might be the case if the individual's belief is based on considerations that are unchanged by the new information or if the new information is thought to be untrustworthy. For example, parents with school-aged children may have strong beliefs about schools based largely on personal experiences. If so, new performance information may not change their opinions. Similarly, strong partisans may be less responsive to new information because their opinions are based purely on partisan considerations (e.g. Berelson et al. 1954; Campbell et al. 1960).
Information updating occurs if the new information differs from their existing beliefs and individuals are sufficiently motivated to update their existing beliefs (MacKuen 1984; Kuklinski et al. 2001). Individuals are receptive to new information if their existing beliefs are sufficiently imprecise (i.e. σ_P² is large) or if the new information is precise in its implications (i.e. σ_I² is small). For a given piece of information, if there are no differential perceptions of the clarity of the new information (i.e. σ_I² is constant across individuals), differential effects between individuals can emerge if there is variation in the strength of existing priors (σ_P²) or in how distant the prior belief is from the new information; those with stronger prior beliefs and more accurate perceptions are less sensitive to new information.
Differences in whether and how individuals update their beliefs in response to new information may result from both individual level differences (e.g. how important and interested an individual is in education policy), as well as partisan differences. For example, the importance of statewide examinations, graduation rates and other seemingly objective measures may differ by political orientation. This is because of how partisans interpret such data in light of the real or imagined political orientations of public officials responsible for designing and implementing education policy. Accordingly, the contribution of the new information will differ between partisans even if they share a common goal of increased student performance given current expenditure levels.
While most education policies at least implicitly assume that there is a strong relationship between information and opinions, we do not actually know how providing objective, outcome-based information affects citizens’ assessment of those institutions that are responsible for designing and implementing education policy at the state level. We also do not know how such information influences the support for various policies aimed at increasing student performance. While we may hope that assessments are responsive to information in a way that brings about the convergence of citizens’ beliefs, there are also reasons to think that there may be either no effect or effects that depend on existing beliefs in ways that may prevent citizens from agreeing, even on issues such as education where everyone presumably concurs on the desirability of increasing student performance.
Experimental design
We test the straightforward hypothesis that providing citizens with objective information about the public education system's recent performance will change their opinions about (a) the performance of educationally relevant government institutions and (b) education policies and reforms, at least among the population(s) for whom the objective information is inconsistent with their prior beliefs about system performance. While others have looked at how mistaken beliefs correlate with opinions (e.g. Sides and Citrin 2007), we employ an experimental design that identifies the effect of providing information, controlling for existing beliefs. This detail is important, because it avoids the complications that may result from differences in individuals' abilities to form accurate initial beliefs.
We utilise two pieces of objective performance information: (1) student achievement on standardised math tests and (2) the extent to which student achievement on these tests varies by the race of the student (the racial "achievement gap"). Under NCLB, standardised test scores in math and reading form the basis of school accountability in every school district in the United States. Within each state, common tests cover the same material for each grade level, so scores have the same meaning across schools and districts. NCLB requires schools to publicly report proficiency levels from these tests both for the school as a whole and for racial and ethnic subgroups, in part based on the assumption that parents, communities and other stakeholders can utilise these data to pressure schools to better serve the needs of students (see Figlio and Loeb 2011).
We embedded a survey experiment within a larger Random Digit Dial survey of 1,500 citizens of Tennessee. Tennessee provides a useful laboratory for this experiment, because its long history with school performance data means that Tennesseans are among the most familiar and, presumably, comfortable with their usage. Tennessee was a relatively early adopter of the school accountability policies that predated NCLB (Hanushek and Raymond 2005), and, prior to the mandates of NCLB, the state based its accountability policy almost purely on making information available to the public rather than on using student test data for the kinds of administrative interventions favoured in other "consequentialist" accountability states (Carnoy and Loeb 2002). The use of student data for school improvement has maintained a high profile in the state in recent years; for example, in 2011, a law passed as part of Race to the Top reforms mandated that 50% of a teacher's annual evaluation must come from the standardised test performance of his or her students (Sher 2011). In the wake of Race to the Top – with Tennessee as one of two first-round awardees in the federal grants competition, receiving more than $500 million – the state also began increasing the presence of charter schools, creating a statewide Achievement School District to turn around its lowest-performing schools, moving towards implementation of the Common Core State Standards and investing heavily in innovation in science and mathematics instruction, among other reforms, ensuring the salience of education reform among the public at the time of the study.
The survey experiment contained five randomly assigned conditions comprising two experiments. As Table 1 summarises, in four of the conditions (conditions 2–5), respondents' beliefs about school performance in Tennessee were measured by asking about student performance on standardised math tests. Half were asked a single question about student performance on end-of-year math exams (the "performance experiment" in conditions 2 and 3), and half were asked this question and a question about the race-related gap in student performance on these tests (the "achievement gap experiment" in conditions 4 and 5). In two of the four conditions, respondents received the treatment of being told the correct answer(s) after they expressed their beliefs concerning performance (condition 3) or both overall performance and the achievement gap between black and white students (condition 5). To identify the impact of the information itself rather than the impact of the framing of student performance (see, e.g. Chong and Druckman 2007), the actual performance was reported without commentary.
Table 1 Experimental design
Condition 1: No performance questions; evaluation battery only (control)
Condition 2: Asked to estimate overall student performance; no correct answer provided
Condition 3: Asked to estimate overall student performance; told the correct answer
Condition 4: Asked to estimate overall performance and the racial achievement gap; no correct answers provided
Condition 5: Asked to estimate overall performance and the racial achievement gap; told the correct answers
After the intervention, respondents in conditions 2 through 5 were asked (1) to rate the performance of various public institutions involved in setting education policy and (2) what they thought about various educational reforms that have been proposed. Respondents in the remaining group (condition 1) were asked the same battery of evaluations, but they were not primed to consider performance issues beforehand. This group lies outside both the performance and achievement gap experiments.
Our design identifies several effects of interest. Because asking citizens about student performance may prime considerations that are not commonly used when citizens articulate preferences for education policies or the public officials responsible for education policy (see, e.g. the theories of McGuire 1969; Zaller 1992; Zaller and Feldman 1992), the experimental manipulation may itself affect the evaluation by priming the respondent to think in terms of performance or equity considerations when answering the questions. We can identify the possible priming effect by comparing the responses of condition 1 to condition 2 (and also conditions 1 to 4). To identify how the performance information we provide affects the opinions of otherwise similar individuals, we compare individuals' responses in conditions 2 and 3. The difference in evaluations and opinions reveals whether individuals with otherwise identical characteristics and beliefs about statewide student performance differ as a result of being exposed to the objective performance information. Because we can condition on prior beliefs, we can identify the effect of the information we provide holding initial beliefs fixed. We also examine if the effect varies depending on how important educational issues are to the respondent, because the importance of an issue is presumably related to the strength of prior beliefs or the motivation to update beliefs. Comparing the differences in conditions 2 and 4 reveals how additionally priming racial gap considerations – and, more specifically, the racial disparity in educational performance – affects opinions. Do opinions change if respondents are thinking not only in terms of overall performance, but also in terms of the relative performance of students by race?
Replicating the comparison for conditions 2 and 3 using conditions 4 and 5 reveals how providing information about student performance and student performance by race affects evaluations. Not only is the comparison between the corrected and uncorrected individuals of interest, but it is also of interest to see how the overall effect of providing these two pieces of information compares with the effect revealed when comparing conditions 2 and 3.
The accuracy of prior beliefs about student performance and racial gaps
The effect of information presumably depends on both the accuracy and strength of existing beliefs. The first task in identifying the effect of information on citizens' evaluations of public officials and public policy proposals therefore involves assessing the strength and accuracy of existing beliefs (Delli Carpini and Keeter 1996). Figure 1 graphs the distribution of beliefs regarding the percentage of elementary and middle school students who are performing at grade level or better on Tennessee's end-of-grade math tests (left) and the difference between the percentage of white students and black students performing at grade level or better on Tennessee's end-of-year math tests (right).
Figure 1 Distribution of citizens’ beliefs. Note: The figures provide the distribution of responses using the 1,328 respondents in conditions 2–5 who were asked the overall performance question (left) and the 650 respondents in conditions 4–5 who were asked about possible racial disparities in student performance (right). The vertical line denotes the true percentage in each instance.
Figure 1 facilitates several conclusions. First, as is the case for other issues (e.g. Gilens 2001; Kuklinski et al. 2001), very few citizens hold accurate beliefs. Despite the amount of attention paid to the issue and the number of policies in Tennessee that use student performance data, only 20% chose the response category containing the true level of student performance (34%), and only 8% chose the category containing the true gap in student performance (22%). In fact, the nearly uniform distribution of responses to the achievement question suggests that the 71% of respondents who chose a response other than "Don't know" were simply guessing. Second, Figure 1 reveals that citizens are less likely to possess correct beliefs about the race-related gap.
Third, citizens' beliefs about statewide overall performance are too optimistic; 54% of the respondents think student performance is better than it actually is. For the racial gap in student performance, errors run in both directions, although more respondents think the gap is larger than it actually is than think it is smaller (36% and 27%, respectively). The inaccuracy of beliefs does not vary according to the importance of education to the respondent. In an additional analysis (not shown), we measured salience – and, presumably, strength of prior beliefs regarding education issues – by using whether the respondent believes education should be the top priority of the Tennessee government, whether the individual has children that attend public school and whether the respondent owns their home or has a mortgage (see Figlio and Lucas 2004). Proportions with correct performance information were similar and statistically indistinguishable across these conditions.
The fact that many citizens are unaware of actual student performance despite the fact that it is a centrepiece of accountability-based educational reforms is consistent with the pervasive lack of information that the public has routinely exhibited on political issues (e.g. Campbell et al. 1960; Delli Carpini and Keeter 1996). However, what matters are the consequences of the misinformation (e.g. Bartels 1996) and whether citizens are willing to update these beliefs and the opinions for which these beliefs are relevant. It is to this analysis that we now turn.
Estimating the effect of objective information about performance outcomes
Having shown that citizens often misperceive – by large margins – the performance of the public education system on commonly used metrics, we now assess: (1) whether misperceptions about performance and opinions are indeed linked, (2) whether correcting citizens’ misperceptions via the provision of performance information leads to changes in policy opinions and (3) whether opinions are differentially responsive to different types of performance information (i.e. overall performance versus black–white gaps). We investigate these questions for opinions based on the performance of three education institutions (Tennessee schools as a whole, the Tennessee Department of Education and the local school board) and by which education reforms should be pursued.
The effect of performance information on evaluations of educational institutions
We begin by assessing whether the act of simply asking about student performance primes considerations and influences evaluations, even in the absence of new information. There is no evidence that priming affects institutional evaluations; given our sample size, we have sufficient power to detect differences of 0.2 on the five-point scale we use. Comparing the evaluations for respondents in conditions 1 and 2 via t-tests reveals that the smallest p-value that was obtained was 0.55. Comparing the average responses for conditions 1 and 4 also produces null results even though the respondents in condition 4 are asked to think about both overall performance and race-related differences.
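A condition-wise comparison of this kind is a standard two-sample t-test. The sketch below is illustrative only: it uses simulated evaluations (GPA-coded 0–4) in place of the actual survey responses, with hypothetical group sizes of roughly 300 per condition:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical institution grades (0 = F, ..., 4 = A) for two conditions
# drawn from the same distribution, mimicking a true null effect.
cond1 = rng.integers(0, 5, size=300).astype(float)  # no priming question
cond2 = rng.integers(0, 5, size=300).astype(float)  # primed, no information

# Two-sample t-test of the difference in mean evaluations
t_stat, p_value = stats.ttest_ind(cond1, cond2)
# Under the null, p-values are uniform on [0, 1], so a large p-value
# (like the 0.55 reported in the text) is exactly what we would expect.
```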
To better test the association between performance information and evaluations of education institutions, we estimate a series of ordered probits of the form:
Grade*_ig = B_i β₁ + β₂ T_i + X_i β₃ + ε_i,  ε_i ∼ N(0, 1)  (1)

where B_i is the vector of indicators for respondent i's prior beliefs, and the observed grade is determined by the interval between estimated cutpoints into which the latent evaluation Grade*_ig falls.
The dependent variable in equation (1) is the grade respondent i gives to institution g, using a survey question that asks every respondent to assign a grade of A, B, C, D or F to “public schools in Tennessee,” “the Tennessee Department of Education” (TDOE) and “your local public school board.” While the assumptions of the ordered probit are most appropriate given the ordinal nature of the evaluations, the supplemental appendix shows that translating the grades into their GPA equivalents (i.e. A=4, B=3, C=2, D=1, F=0) and using OLS produces identical substantive results.
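The GPA-and-OLS robustness check is straightforward to sketch. The code below is a hypothetical illustration with simulated letter grades and a simulated treatment indicator, not the authors' data or estimation code:

```python
import numpy as np

# Letter-grade to GPA translation used in the robustness check
GPA = {"A": 4, "B": 3, "C": 2, "D": 1, "F": 0}

def to_gpa(grades):
    """Convert a sequence of letter grades to their GPA equivalents."""
    return np.array([GPA[g] for g in grades], dtype=float)

# Simulated responses: 200 letter grades and a 0/1 treatment indicator
rng = np.random.default_rng(1)
letters = rng.choice(list(GPA), size=200)
y = to_gpa(letters)
T = rng.integers(0, 2, size=200)

# OLS of the GPA-coded grade on an intercept and the treatment dummy;
# beta[1] is the analogue of the ordered probit's treatment coefficient.
X = np.column_stack([np.ones_like(y), T])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```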
We first compare conditions 2 and 3. Subjects in these two conditions were asked to assess the performance of Tennessee schools after providing their estimate of the percentage of elementary and middle school students performing at grade level or better according to state standardised tests using a series of 20 percentage point ranges (i.e. 0–19%, 20–39% and so forth). We use these responses to measure prior beliefs using a series of indicator variables (i.e. the interval containing subject i’s estimate, including “I don’t know,” is set to 1, and all other intervals are set to 0, with the 0–19% interval being the omitted category).
Because all respondents in condition 3 were told the correct answer, the treatment variable T_i is set to 1 if subject i is in condition 3 and 0 if in condition 2. The row vector X_i for individual i contains the control variables used to improve the precision of the estimates. These include indicators for: female, black, Democrat, Republican, having a college education, having children in school and owning a home, as well as a linear (three-item) ideology scale, age, age squared, the number of years residing in Tennessee and a six-category measure of respondent income. In equation (1), β_2 estimates the average effect of being told the true performance level, and the coefficient vector β_1 measures the association between the prior beliefs of subject i and the grade i assigns to institution g.
Table 2 presents the results. The odd-numbered columns display the results of estimating equation (1).Footnote 16 Several important conclusions are evident. First, as expected, respondents’ evaluations of both Tennessee schools in general and TDOE are increasing in their beliefs about statewide student math performance; the better an individual thinks student performance is, the higher the grade given. Those citizens who overestimated student performance the most (i.e. a performance guess of 80–100% performing at grade level) also gave the highest average grade to the educational institution. This pattern is weakest, however, for evaluations of the local school board (Model 5). Consistent with the murkier connection between statewide performance and the efficacy of one’s local board, beliefs about statewide performance are largely uncorrelated with local board evaluations.
Table 2 The effect of prior beliefs about performance on evaluations of education institutions
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170128224649-18707-mediumThumb-S0143814X14000312_tab2.jpg?pub-status=live)
Ordered probit coefficients shown.
Models also condition on control variables.
Standard errors in parentheses.
*p<0.10, **p<0.05, ***p<0.01.
Second, the average effect of receiving the informational update containing the true student performance level is negative and statistically distinguishable from zero at conventional levels for all three institutions. Moreover, the magnitude of the effect is sensibly ordered: the effects are largest for the evaluation of Tennessee schools (Model 1) and the TDOE (Model 3), which are most responsible and relevant for statewide performance, but there is little effect of learning about statewide performance on citizens’ evaluations of local school boards (Model 5).Footnote 17
These results assume that the effect of information does not depend on prior beliefs. If the performance update treatment affects institutional evaluations via the adjustment of respondents’ posterior beliefs, however, the treatment effect should be the greatest among those respondents who most overestimate student performance. To test this hypothesis, we test for a possible interaction between prior beliefs and treatment status using the specification of equation (2):
$$\mathrm{Grade}^{\ast}_{ig} \;=\; \mathbf{B}_{i}\,\beta_{1} \;+\; \beta_{2}\,T_{i} \;+\; (\mathbf{B}_{i}\times T_{i})\,\beta_{3} \;+\; \mathbf{X}_{i}\,\gamma \;+\; \varepsilon_{ig} \qquad (2)$$

where $\mathbf{B}_{i}\times T_{i}$ denotes the interactions between the prior-belief indicators and treatment status, and the coefficient vector $\beta_{3}$ captures how the effect of the information update varies with prior beliefs.
The even-numbered columns of Table 2 report the results of estimating equation (2) by ordered probit. Several important refinements emerge. First, columns 2 and 4 reveal that the negative effect of the performance update is driven by the substantially lower performance evaluations given to Tennessee schools and TDOE by those respondents who most overestimate student performance. Figure 2 graphs the substantive magnitude of this effect on the probability of assigning a grade of A. Margins are shown separately for those in the treatment and control groups, with the vertical bracketed lines corresponding to 95% confidence intervals. (The effects are substantively similar using the probability of assigning a B or higher, and the supplemental appendix replicates the results using OLS.)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170128224649-06860-mediumThumb-S0143814X14000312_fig2g.jpg?pub-status=live)
Figure 2 The effect of prior beliefs on the effect of information. (a) Probability of assigning Tennessee schools a grade of A; (b) probability of assigning the Tennessee Department of Education a grade of A; (c) probability of assigning the Local School Board a grade of A.
Figures 2a and 2b reveal that the provision of objective performance information similarly affects citizens’ evaluations of Tennessee schools and TDOE. In both cases, the line denoting the opinions of the control group shows that, in the absence of information, the respondents who most severely overestimate student performance are also much more inclined to assign a high grade to institutional performance. In contrast, those given the information update in the treatment group have a roughly equally low probability of assigning the highest grade regardless of their prior beliefs. Because treated respondents are equally likely to assign a grade of “A” regardless of their initial beliefs, this pattern is consistent with the evaluations being based on the provided information rather than on prior beliefs or individual experience.
Figure 2c provides an important contrast by showing that the information has almost no effect on citizens’ evaluations of local school boards. This null effect is consistent with the observation that evaluations of local school boards should not be affected by the information provided by the experimental condition, because information about statewide performance is irrelevant for assessing the performance of the school board.
We also tested whether the effect of information on evaluations varied with the likely importance of the issue of education to the respondent. We did so using measures of issue salience based on having a child in school, owning a home and naming education as the top priority for state government. We might expect salience to matter, for example, if it predicts stronger prior beliefs that are more difficult to update. For instance, parents can access numerous sources of information about school performance on a variety of dimensions – including their day-to-day interactions with their children’s schools – which may make them less responsive to test score information. We re-estimated equation (1) including interactions between each measure of issue salience and both the performance assessment indicators and the treatment indicator. Table A.1 in the supplemental appendix online reveals no evidence that the treatment effect differs by the salience of the issue.Footnote 18
The effect of performance information on support for policy reforms
Ostensibly, citizens’ support for a policy initiative is driven in part by perceptions that it can address deficiencies in the status quo. If so, does learning about the status quo change opinions about specific public policies? More specifically, does learning that student performance is lower than expected increase citizens’ support for education reforms?
We answer this question for six policies common to current education reform debates: test-based performance pay for teachers, NCLB, governmental provision of pre-kindergarten programs, public vouchers for private school attendance, charter schooling and differential pay to incentivise teachers to work in low-income schools. Following the three institutional evaluation questions, each respondent was asked whether they support each of these six policies using the questions listed in supplemental appendix B. We identify the effect of information by re-estimating equations (1) and (2) as probit models predicting support for each policy separately.
Figure 3 summarises the effect of information on policy opinions by graphing the predicted probabilities of supporting each reform for otherwise typical and identical individuals in the treatment and control groups and allowing the effect of information to vary by prior beliefs about student performance [equation (2)].Footnote 19 The six panels reveal that the effect of information is quite different from the effects evident in prior sections.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170128224649-83950-mediumThumb-S0143814X14000312_fig3g.jpg?pub-status=live)
Figure 3 Beliefs about student performance and support for education policy reform with and without informational updating. (a) Teacher performance pay; (b) No Child Left Behind; (c) state-provided pre-kindergarten; (d) private school vouchers; (e) charter schools; (f) higher pay for teachers in low-income schools.
First, there is no monotonic association between a respondent’s prior beliefs about student performance in the status quo – shown on the x-axis – and the likelihood of supporting any of the examined reforms. There is slight evidence of a U-shaped relationship for three of the six policies (performance pay, vouchers and charter schools) – respondents in the 40–59% and 60–79% categories have significantly lower likelihoods of support than those in the end categories – but the reason for this relationship is unclear.
Second, providing correct information about the status quo has no effect on the probability that a citizen supports any of the policies. Regardless of prior beliefs, receiving information about actual student performance has no discernible impact on support for any of the reforms that have been proposed to increase it.
What may explain the pervasive null effect of information on the support for proposed reforms? One possibility is simply that respondents do not believe that any of these reform strategies are likely to produce changes in student achievement, so they do not update their beliefs in response to this information. Alternatively, the Bayesian learning model described in the first section reveals that performance information and its updating will not substantially affect opinion formation if prior beliefs about policies are strong. Although citizens possess inaccurate beliefs about student performance and are seemingly willing to update them when called upon to evaluate educational institutions, it is possible that opinions about public policies are more strongly held because they are closely tied to the individuals’ ideology and partisanship. If partisan or ideological beliefs drive policy preferences, or if there are partisan and ideological disagreements about the efficacy of the various policies (and their costs), correcting mistaken beliefs about student performance may be insufficient to change opinions about the policies themselves (Rahn Reference Rahn1993). In other words, these results would be consistent with citizens’ policy evaluations being primarily driven by party and ideology (Campbell et al. Reference Campbell, Converse, Miller and Stokes1960; Jacoby Reference Jacoby1988; Green et al. Reference Green, Palmquist and Schickler2002; Highton and Kam Reference Highton and Kam2011).
Examining the covariates in these models reveals evidence consistent with this explanation. Two factors – political ideology and race – are the only consistent predictors of respondents’ policy opinions across the various specifications. The joint null hypothesis that the ideology and party coefficients are zero can be rejected at the 0.05 level in four of the six models (performance pay and NCLB are the exceptions). Perhaps because of the racial gap in student performance, black respondents are more supportive of five of the six proposed reforms (differentiated pay for teachers in low-income schools is the exception).
The effect of information about the racial achievement gap
So far, we have considered the effect of providing performance information about the overall level of student performance. In reality, citizens may also be responsive to other types of performance information, and different citizens may respond differently depending on the type of information. In education policy, for example, a significant amount of research and public debate focuses on the achievement gaps between students with different backgrounds, particularly with respect to race (Hochschild and Shen Reference Hochschild and Shen2012). Closing the achievement gap between white students and black students – estimated to be a standard deviation or more on standardised tests (Fryer and Levitt Reference Fryer and Levitt2004) – is a demonstrably important goal in education and a central aim of many education reform efforts. In evaluating the performance of education institutions or making decisions about their support for particular education policy changes, do citizens’ perceptions about the relative performance of white and black students inform their evaluations?
To characterise the effect of the information, we use conditions 4 and 5 in our survey experiment (see Table 1). Subjects in these conditions were asked to estimate the percentage of elementary and middle school students testing at grade level in math (performance prompt) and to estimate the difference in this percentage for white and black students (achievement gap prompt). Respondents in condition 4 serve as the control group. In condition 5 (treatment), respondents were given an information update containing the true percentages for overall performance (34%) and the percentage gap between white and black students (22 percentage points).
Table 3 reports the associations between institutional evaluations and prior beliefs about both overall performance and race-related performance differences. The coefficients for prior beliefs about overall performance reported in the top half of Table 3 reveal that institutional evaluations increase as respondents overestimate overall student performance. This pattern is consistent with the results of Table 2. However, in contrast to the prior results, the coefficients for prior beliefs regarding the race-related achievement gap show no clear pattern, and most are indistinguishable from the reference group of those who believe that there is no gap.Footnote 20 All else equal, individuals who think that there is no difference in student achievement and those who think that the performance gap exceeds 35 percentage points assign the same grade to educational institutions. This finding suggests that, while evaluations of education officials depend on beliefs about overall student performance, they do not depend on beliefs regarding race-related differences in student performance.
Table 3 Prior beliefs about achievement gaps do not predict institutional evaluations
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170128224649-04637-mediumThumb-S0143814X14000312_tab3.jpg?pub-status=live)
Ordered probit coefficients shown.
‘No difference’ is the omitted category for the equity guess.
Models also condition on control variables.
Standard errors in parentheses.
*p<0.10, **p<0.05, ***p<0.01.
To identify whether priming or updating information about race-based achievement differences affects citizens’ opinions, we pool data from all subjects in conditions 2 through 5 and examine their institutional evaluations. We estimate a version of equation (1) that controls for prior beliefs about student performance and includes indicators for the condition to which the respondent was assigned. An indicator’s coefficient tells us the average response change that is attributable to random assignment to that condition relative to the excluded category (condition 2, the control group for the performance experiment).Footnote 21
There are several comparisons of interest. A significant coefficient on condition 4 (the control group for the equity experiment) would suggest that receiving the equity prime (i.e. the question about race-related differences, but not the update) in addition to the overall performance prime affects opinions, because the only difference between conditions 2 and 4 is that condition 4 respondents were also asked to think about the black–white test score gap (but neither group was given updated information). A significant difference between the coefficients for conditions 4 and 5 would suggest that receiving the actual updated information about student performance changes one’s opinion relative to simply being asked about performance and equity (without being told the actual performance). Lastly, a significant difference between conditions 3 and 5 (the two treatment groups) would suggest that receiving information about the achievement gap in addition to receiving information about overall performance changes the average response.
Table 4 summarises the main results (Table A.2 in the supplemental appendix contains the full results). Interestingly, respondents in condition 4, who were primed to think about the achievement gap (but not given an update), gave more negative institutional evaluations in two of the three models; in the third, the coefficient is negative but not significant at conventional levels. Comparing the coefficients for conditions 4 and 5 reveals that they are statistically distinguishable only for evaluations of all Tennessee schools; the effect of being informed about actual student performance thus matters only for evaluations of Tennessee schools. This result is surprising given the effect of condition 3 in Table 4, which shows that receiving only the overall performance update negatively affects institutional evaluations.
Table 4 Equity prime, equity information update and institutional evaluations
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20160921050624593-0186:S0143814X14000312:S0143814X14000312_tab4.gif?pub-status=live)
Ordered probit coefficients shown.
Models run on pooled sample from Conditions 2 through 5.
Models also condition on performance guess and control variables.
Standard errors in parentheses.
*p<0.10, **p<0.05, ***p<0.01.
Moreover, in only one model – again for all Tennessee schools – does receiving an informational update about overall performance and equity result in a different response from being updated about overall performance alone. Somewhat surprisingly, the coefficient for condition 5 is smaller, meaning that receiving additional information about the race-related achievement gap reduces the impact of being updated about overall performance.
We replicated this analysis for the six policy reforms. Supplemental appendix Table A.3 shows that (1) prior beliefs about the achievement gap are not clearly related to support for policy reforms and (2) receiving updated information has no effect.Footnote 22 Instead, opinions about educational reforms are best explained by partisan self-identification.
Overall, the effect of being updated on both overall student performance and the racial achievement gap is largely consistent with the effects of being informed only about overall performance. We cannot determine from these data why we find no effect for achievement gap information. One possibility is that citizens’ opinions regarding education policy are not influenced by achievement gap concerns. Another is that receiving information on race-based gaps primes respondents to consider aspects of student achievement that are heavily correlated with family characteristics and thus beyond the control of schools. Alternatively, providing two pieces of information may simply have prevented respondents from fully processing the second piece.
Discussion and conclusion
Assessing citizens’ responsiveness to new information is critical for determining the prospects for democratic accountability. Unless citizens change their opinions and beliefs in response to new information, it is hard to imagine how votes cast at the ballot box could reflect an informed assessment of public officials’ performance and create the correct incentives for elected officials (Achen and Bartels Reference Achen and Bartels2002). Forming accurate beliefs, however, requires that citizens be able to appropriately update their existing beliefs in response to new information. If citizens update their beliefs in biased ways based on prior beliefs and partisan leanings, or if they fail to update their beliefs, the prospects for democratic accountability may be dim. This is a particularly disconcerting possibility in areas like education where the stakes are high and many reforms either implicitly or explicitly depend on public pressure to improve performance.
Our results suggest mixed implications from the perspective of democratic accountability. Despite Tennessee’s long-standing emphasis on reporting educational outcomes and the large number of existing policies utilising such assessment information – which make it unlike many other states – most citizens overestimate student performance and therefore hold overly favourable assessments of the institutions responsible for education policy. However, consistent with prior work (e.g. James Reference James2011; Chingos et al. Reference Chingos, Henderson and West2012), we also find that citizens’ assessments of educational institutions respond in seemingly rational ways to performance-related information. Not only are assessments driven by the level of student performance rather than ideological predispositions, but the institutions that are most closely associated with statewide performance are most affected by information about statewide performance.
To be clear, the fact that citizens’ evaluations of educational institutions are responsive to performance-based information is not necessarily evidence of democratic accountability in action, because this is only the first step of what is required for policy performance-based accountability. Our experimental design allows us to cleanly identify the effect of learning about various dimensions of student performance and the possibility of priming, but we cannot determine whether the effects are transient or long lasting (but see, e.g. Chong and Druckman Reference Chong and Druckman2010). Moreover, even if citizens are completely informed and unsatisfied with educational performance, they may be reluctant to punish elected officials and create required electoral consequences for a lack of performance in secondary education if they are content with other issues (James Reference James2011). Creating the incentives necessary for democratic accountability in education policy may be difficult given the many possible dimensions of interest to citizens. Future research might link experimental data like ours to voter files or later survey responses about voting behaviour to assess whether information provision affects subsequent turnout or voting choices as a means of delving further into these processes. Of course, voting behaviour is but one mechanism through which citizens can express dissatisfaction with public institutions, and future research might examine the connections between performance information and other mechanisms, such as advocacy or residential or school mobility, as well.
The fact that citizens’ opinions about particular policies designed to improve educational performance are unresponsive to learning about student performance under the status quo is also potentially sobering for the prospects of citizen-led policy change. In contrast to the results in Henderson et al. (Reference Henderson, Howell and Peterson2014), learning that educational performance is worse than expected does not change support for the various policies aimed at increasing student performance, perhaps reflecting the conflicting state of both the evidence base and the policy discourse surrounding the effectiveness of these reform initiatives. Instead, citizens’ opinions about education policies are primarily driven by ideological and partisan affiliations. These affiliations may represent prior beliefs that are too strong to be affected by the information we provide, or perhaps citizens base their policy opinions on beliefs or information unrelated to average student performance. In either case, consistent with the conclusions of other work highlighting the importance of elite messaging for public opinion change (e.g. Ladd and Lenz Reference Ladd and Lenz2009; Noel Reference Noel2013), changing the public’s support for particular reforms would appear to depend on the actions of partisan and ideological leaders. Clearly, investigation of the factors underlying education policy opinion formation would be another fruitful avenue for research.
Acknowledgements
The authors wish to thank seminar participants at the Center for the Study of Democratic Institutions at Vanderbilt University for reactions to an earlier draft. Any mistakes and oversights are the fault of the authors alone.
Supplementary material
To view supplementary material for this article, please visit http://dx.doi.org/10.1017/S0143814X14000312