In an era of devolution, state governments have increasingly taken on the responsibility of implementing programs that address the physical, economic, and social well-being of their residents. Supporters of this decentralized approach to policy making argue that states can more nimbly innovate and iterate with social policies. Experimenting implies occasional (if not frequent) failure, and, left unaddressed, failures can exacerbate the very inequalities social policies are trying to remedy. If states are to fulfill their promise as laboratories of democracy, they must have systems for recognizing and responding to the policy failures that are all but certain to ensue.
Legislators, bureaucrats, watchdogs, and advocates have long been interested in assessing the outcomes of public policies (e.g., Bissell Reference Bissell1979; Green Reference Green1984), and the advent of big data has promised faster, more accurate insight into public policy outcomes. Building on this promise, some states and research organizations have even established formal “research practice partnerships” to facilitate the accurate evaluation of policies and the dissemination of policy findings (Coburn and Penuel Reference Coburn and Penuel2016). Yet existing explanations for policy learning and policy change rely on more traditional dynamics such as partisanship (e.g., Suhay and Druckman Reference Suhay and Druckman2015), geography (e.g., Walker Reference Walker1969), and public opinion (e.g., Erikson et al. Reference Erikson, Wright, Wright and McIver1993)—none of which necessarily reflect the reality of a policy’s impacts—to explain changes in policies and policy preferences. Theories that do acknowledge the role of data tend either to lump together a state’s resources and habits for collecting and analyzing data (Sanderson Reference Sanderson2002) or to study how states collect information about their residents and infrastructure, rather than about policy outcomes (Brambor et al. Reference Brambor, Goenaga, Lindvall and Teorell2020).
If advances in data analysis are to enhance responsive policy making, we must better understand when and how public officials incorporate information into their policy preferences. Existing studies either focus almost entirely on the individual, usually lamenting the central role that ideology and values play in filtering findings, or they explore practices for generating new information about policies without regard to how that information might influence public officials’ behavior and preferences (Blank and Shaw Reference Blank and Shaw2015). As with most complex processes—of which acknowledging and addressing policy failure is one—breaking them down into their constituent parts is critical to fully understanding how they work. This paper takes a first step toward better understanding responses to policy failure by asking: when are public officials willing to acknowledge that a policy has failed? I leverage a comparative case study of the policy trajectories of two particularly punitive state-level truancy policies, Failure to Attend School (FTAS) in Texas and Becca’s Bill in Washington state, to develop new theoretical distinctions between a state’s capacity for gathering data and its capacity for analyzing it. These distinctions, in turn, expand our explanation for when public officials are likely to acknowledge policy failure.
In 1995, during the first legislative session after Texas passed its school accountability plan, which rewarded and punished schools based on academic and attendance benchmarks, the legislature added the FTAS clause to the Texas Education Code with little discussion and substantial bipartisan support. FTAS made Texas the first state in the country to criminalize truancy by allowing schools to charge students with Class C misdemeanors in adult criminal courts for missing more than 10 days of school per year.
The same year Texas policy makers enacted FTAS, Washington state passed its own truancy bill in response to the gruesome murder of a runaway and chronically truant student, Rebecca Hedman. Becca’s Bill, as the policy was called, allowed judges to detain truant students (along with runaways) for several days if they failed to comply with a court order to attend school. While these detentions were supposed to occur in therapeutic settings, in practice they rarely did, and detained young people were sometimes even exposed to juvenile and adult criminals as part of their truancy detention. In both states, graduation rates eventually stalled, and rates of juvenile detention (in Washington) and ticketing (in Texas) skyrocketed, particularly among students of color, suggesting the harsher consequences for truancy were not encouraging the desired outcome.
By 2015, all three branches of the Texas government had acknowledged FTAS’ failure—as measured by public statements and votes supporting as much—and the legislature passed, and the governor signed, a revision bringing the policy in line with research-backed best practice by requiring preventive measures. Washington state, on the other hand, continued to enforce Becca’s Bill, even actively defeating a measure to ban detention for truants in 2015.
Traditional explanations for policy learning and change suggest that Washington officials should have acknowledged their truancy policy’s failure first. Given Texas’ reputation as a conservative state with a tough-on-crime approach to criminal justice, how did its public officials come to re-evaluate and revise their truancy policy before Washington? In addition to Washington’s more progressive leanings, the state also has a robust research bureaucracy charged with regularly evaluating policy outcomes, including Becca’s Bill, making it even more puzzling that Washington public officials were not the first to re-evaluate and revise their truancy policy. This project explores why Texas acknowledged failure faster, despite the politics-as-usual logic that Washington should have led the way. I find that robust and universal data collection efforts—as distinct from data analysis—are essential for a data-driven response to policy outcomes.
A note on the organization of this article is necessary before moving forward. The discovery process in this project was inductive; Texas’ leading the way on acknowledging the failure of its punitive truancy policy, compared with Washington’s lagging, inspired a deep dive into the policy trajectories and political contexts of each state. While I did expect that available research played an important role in explaining the acknowledgment of policy failure, I did not have an a priori expectation about the distinction between collection and analytical capacity or the unique role that state investment plays in each. For clarity, however, I first describe the theoretical contribution I developed through the comparison of state policy trajectories, followed by empirical evidence from the case studies. Thus, the article proceeds as follows. First, I conceptualize policy failure, which remains poorly defined in existing literature. I then turn briefly to existing explanations for policy learning and change and identify unanswered questions about the political response to policy failure. Next, I offer an alternative scheme that distinguishes data collection capacity from analytical capacity and prioritizes collection capacity as the necessary feature for facilitating failure acknowledgment. Following the description of my theoretical contribution, I explore the evidence for my theory in the case studies. For each state policy trajectory, I describe how existing scholarship on policy change fails to explain the pattern of acknowledgment I observe. I show that the interaction between data collection capacity and analytical capacity better explains the patterns of acknowledgment in Texas and Washington. The article concludes with a consideration of additional factors that must be studied to fully understand the conditions under which public officials acknowledge (and eventually respond to) policy failure.
Conceptualizing Policy Failure
A challenge (and contribution) of this study is developing a definition of policy failure that both separates policy failure from political failure (e.g., Walsh Reference Walsh2000) and is not politicized itself. While it is difficult to scrub any definition of policy failure of politics completely, I aim to do so as much as possible. I define a policy as having failed if it fits the following two criteria:
1. The original policy has an explicitly advertised intent in the original legislation.Footnote 1

2. There is consistent, reliable, scholarly research demonstrating that the intent of the law is either not being met or is being undermined by unexpected consequences.
Following Boswell (Reference Boswell2009), I define research as the digestible information produced by individuals and institutions with recognized qualifications to implement logically coherent methodologies to produce knowledge that meets “certain standards of theoretical and conceptual coherence” (Boswell Reference Boswell2009, 56). In other words, research is the information uncovered through systematic methods of a given discipline and connected to existing knowledge, concepts, and mechanisms.
While any policy can “fail” against a myriad of criteria developed post hoc, I am interested in explaining the process of public officials reacting to failure in outcome-oriented public policies. These are the policies for which we might most expect learning to occur, given that there was a practical incentive for the policy in the first place. Therefore, I narrow my scope to policies for which there was an expressed intent to produce a particular outcome at the time of passage. This is not to say a policy cannot fail to produce a latent intent held by its original supporters; however, I follow the practice of many state courts by interpreting the plain meaning of the language in the original statute (California Courts and Use of Legislative Intent Materials 2019).
Policy Learning and Recognizing Failure
In a perfect world, we might imagine the ideal policy implementation and evaluation sequence going something like the following: a policy is put in place, information is collected on how the policy is working, the data are analyzed to determine whether the policy is producing the desired outcomes, public officials acknowledge what the data suggest, and the policy is revised as necessary. However, this optimistic view of policy making and policy learning feels far from possible, particularly in our polarized, anti-intellectual, and anti-science policy climate. Instead, politics interjects at every point along the way.
Even casual observers of American politics know that, unlike the technocratic process described above, ideologically driven failure labels are all too common. Public officials, particularly elected ones, rarely acknowledge policy failure simply because evidence of the outcomes contradicts the policy’s original intention. Acknowledging this type of failure often requires admitting a prior mistake—a risky proposition if one hopes to get re-elected in the future (e.g., Bardach Reference Bardach1976; Volden Reference Volden2016) or to maintain one’s reputation as a competent bureaucrat (Carpenter Reference Carpenter2001). Furthermore, even if a policy maker wants to change a policy, there is uncertainty about whether the replacement will actually be an improvement on the status quo. Predicting the impacts of the many facets of a new policy is nearly impossible, and politicians are hesitant to bring on unknown challenges that may be worse than the challenges they already face (Patashnik Reference Patashnik2008, 6).
Beyond their aversion to admitting mistakes and taking on uncertainty, elected and appointed officials also have agenda-setting power and the platform from which to frame issues as either problematic or acceptable (Bachrach and Baratz Reference Bachrach and Baratz1962; Schattschneider Reference Schattschneider1975). Generally, a policy must be labeled as a problem before public officials are willing to address it (Gamble and Stone Reference Gamble and Stone2006). Research on agenda setting suggests that we should not expect to see any acknowledgment of policy failure without some clear electoral incentive to do so, such as a salient shift in public opinion, especially among a public official’s own constituents (Erikson et al. Reference Erikson, Wright, Wright and McIver1993).
Policy makers do occasionally respond to new information about policy context—a phenomenon scholars have broadly labeled “policy learning” (e.g., Dobbin, Simmons, and Garrett Reference Dobbin, Simmons and Garrett2007; Heclo Reference Heclo1974). Observing policies in other states can induce learning, a process known as policy diffusion (see Karch (Reference Karch2007) for an excellent overview), just as personal experiences (Dagan and Teles Reference Dagan and Teles2015) can change a policy maker’s perspective. Lest we think that learning is an apolitical process, it, too, is shaped by party politics as usual. Ideology, values, and prior beliefs can be a powerful filter for when and how policymakers take evidence into account when making decisions (Gilardi Reference Gilardi2010, 651). But when, if ever, can a divergence between the documented impacts of a policy and its original intention sway public officials’ support for a policy, despite pre-existing preferences? In the section that follows, I build on existing research on state capacity to argue that high data collection capacity is necessary for public officials’ recognition of policy failure.
The Critical Importance of Collection Capacity
I am not the first to suggest that research and research organizations may impact public officials’ decision making. Existing work suggests that the more investment in policy research, the more likely public officials will be able to recognize, acknowledge, and maybe even respond to, undesirable policy outcomes (Bennett and Howlett Reference Bennett and Howlett1992; Heclo Reference Heclo1974). The importance of a state’s capacity to produce knowledge by gathering data and effectively analyzing it is in line with Heclo’s (Reference Heclo1974) argument that “the administrative research capacities of administrators influence the degree to which they inform and shape the development of policy itself” (Heclo Reference Heclo1974, 302). However, Heclo goes on to conflate research capacity with data availability, describing Sweden’s “strong bureaucracy” and lamenting Britain’s haphazard system that relied on “multiple nondata-oriented sources” (Heclo Reference Heclo1974, 302). In their work on research expertise, Heintz and Jenkins-Smith emphasize the importance of analytical traceability, which obtains when “the issue under debate has well developed theory; is well conceptualized and operationalized, and adequate data exists” (Reference Heintz and Jenkins-Smith1988, 269). More recent studies also tend to lump data collection and analysis together under the label of “research” (Reckhow, Galey, and Tompkins-Stange Reference Reckhow, Galey and Tompkins-Stange2018). While existing work may have accurately described the politics surrounding policy analysis in the 20th century, the explosion in data availability, the popularity of big data, and new analytical tools and strategies warrant a revision of this feature of a state’s research capacity.
My theory advances studies of policy learning and state capacity by separating out two specific characteristics of “research capacities”—data collection and analytical capacity—that influence a state’s ability to evaluate its policies. I propose that to fully understand the reactions to policy failure, we need to examine the capacity to collect data as distinct from a state’s capacity to analyze it.
I define collection capacity as a state’s available resources and motivation to gather relevant and usable data on a specific policy or policy area’s outcomes. Clear definitions and state-orchestrated, centralized, and pre-emptive data collection plans characterize high collection capacity for a given state policy. Analytical capacity, on the other hand, is the state’s ability to draw scientifically valid and reliable inferences from the data. This includes having the human capital and the technological and financial resources to conduct accurate statistical tests and develop meaningful models using collected data. High analytical capacity may stem from state-sponsored research institutions, professionalized researchers, and established reporting schedules.
Figure 1 outlines the paths from policy failure to the likelihood of acknowledgment. At its core, collection capacity is the necessary condition for widespread, evidence-based acknowledgment of policy failure. First, it is logically impossible to have evidence-based acknowledgment of anything without evidence itself, and collecting data provides the foundation for usable evidence. Second, gathering specific and accurate information about the realities of a policy’s outcomes lends credibility to claims calling for its revision, in the case that it is failing, or its protection, in the case that it is effective. The absence of clear evidence creates opportunities for a policy’s supporters to cherry-pick data points and sources, to strategically frame findings, and to discourage public officials from acknowledging policy failure.
While necessary for evidence-based recognition of policy failure, data on a policy’s outcomes are far from sufficient. The mere existence of data on a policy’s outcomes can do little to inform elected officials, bureaucrats, and the public about the policy’s impacts, particularly given the atomization of some state bureaucracies (Smith Reference Smith2013). If an agency meticulously collects data on policy recipients and relevant outcomes but lacks the motivation, knowledge, or technology to meaningfully analyze the data, it will have just as much impact on policy as not having collected any data at all. Thoughtful and scientifically valid analysis of the information is necessary to draw useful conclusions from existing policy outcome data. Thus, capacity for analyzing data is also a necessary, but not sufficient, condition for evidence-driven acknowledgment of policy failure.
Distinguishing between collection and analytical capacities and the importance of each in laying the foundation for the acknowledgment of policy failure yields a series of testable expectations (see Figure 1). High collection capacity paired with high analytical capacity is most likely to lead to failure acknowledgment, given the clarity of findings that are likely to result from a highly centralized and expertly trained policy evaluation process.
On the other extreme, we should not expect any evidence-driven acknowledgment of failure for state policies with low collection and analytical capacity, given that there is no information for public officials to learn from. This is not to say public officials will never argue that a policy without clear outcome evidence has failed, but rather that we should expect such claims to occur when it is ideologically expedient to do so, rather than in response to evidence.
This article examines what happens when collection capacity and analytical capacity diverge. Without investment in data collection, widespread acknowledgment of policy failure should be unlikely. The rationale for this is straightforward: without the collection of analyzable data, researchers cannot systematically evaluate policy outcomes. While methods of causal inference have improved enormously in the past two decades, researchers are still beholden to the existence and quality of the information available for analysis. Bureaucratic obstacles and interests, politics, and analytical challenges can and will influence whether the data get analyzed, but convincing an agency to release data for analysis is at least possible, whereas turning back the clock to collect high-quality information about policy outcomes is not. Thus, without a systematic and centralized data collection plan for a given policy area, we should not expect widespread acknowledgment of policy failure.
The state’s investment in collection capacity is essential because, as the designer, implementer, and monitor of public policy, the state is best suited to efficiently observe widespread policy outcomes. Furthermore, the state likely has access to more private information about its residents that may be essential to evaluating policy outcomes, but unethical or impossible for a nonstate organization to collect.
Collecting data creates the possibility that researchers and policy makers can learn about policy outcomes, but data collection is only the first step and far from guarantees acknowledgment of policy failure. Data that have been collected but sit unanalyzed are not usable information, especially for busy public officials. Furthermore, cleaning, analyzing, and interpreting longitudinal data from thousands, if not millions, of observations require substantial expertise. Thus, analytical capacity is also necessary for the acknowledgment of policy failure. Unlike collection capacity, however, nonstate actors may credibly supplement a state’s analytical capacity. Professional researchers from established research organizations with reputations for credible, nonpartisan analysis can evaluate state-collected data. If nonstate research organizations can also convincingly report on clear findings, then public officials may well take notice of policy failure.
Researching Policy Trajectories in Texas and Washington
Much of the existing research on policy learning focuses on individuals and how their experiences (Dagan and Teles Reference Dagan and Teles2015), ideology (Gilardi Reference Gilardi2010; Volden Reference Volden2016), and electoral incentives (Erikson et al. Reference Erikson, Wright, Wright and McIver1993) impact their policy perspectives. In this study, I instead focus on state-level policy trajectories to understand the policy design features that make policies more susceptible to learning and reconsideration among public officials.
I examine the trajectories of two state policies that established punitive consequences for truancy. All states have mandatory school attendance laws, and in the 1990s, states increased the consequences for truancy to encourage better attendance, in hopes of meeting the expectations of new accountability laws. Most states aimed the punitive consequences at parents of truant students, which aligns with research showing that teenagers are notoriously short-term in their decision making (Halpern-Felsher and Cauffman Reference Halpern-Felsher and Cauffman2001) and that students’ home environment is a key driver of attendance (Teasley Reference Teasley2004). However, Texas and Washington passed policies that encouraged court involvement, jail time,Footnote 2 and, in Texas’ case, an adult criminal record for the students themselves. These policies, combined with the frequency with which they were implemented—an average of 10,000 students per year in Texas and just under 15,000 students per year in Washington—led to the states’ reputations as the most punitive environments for truant students.
In both Texas and Washington, the legislatures explicitly named decreasing truancy and increasing graduation rates as key goals for their respective truancy policies (see Table 1). While a punitive response to truancy aligned with the in-vogue approach to managing young people’s undesirable behavior in the 1990s, this method quickly proved to be both ineffective and discriminatory, disproportionately impacting students of color and students with disabilities. A plethora of subsequent scholarly evidence from developmental psychology, sociology, and education research suggests that detention and criminalization for truancy should not result in greater attendance or high school graduation. First, punitive policies do not effectively alter adolescent behavior (Defoe et al. Reference Defoe, Dubas, Figner and Marcel2015; Halpern-Felsher and Cauffman Reference Halpern-Felsher and Cauffman2001). Second, studies on the causes of truancy identify several contextual factors, rather than adolescent decision making, as predictive of absenteeism (McCluskey, Bynum, and Patchin Reference McCluskey, Bynum and Patchin2004).
Sources: Becca’s Bill (1995) language comes from SB 5439 Ch 312, 54th Legislature, Regular Session (Washington 1995). This bill can be accessed through the Washington State Legislature at https://app.leg.wa.gov/billsummary?BillNumber=5439&Year=1995&Initiative=false. The second purpose statement is taken from the Washington State Institute for Public Policy’s first report on Becca’s Bill, “Truancy: Preliminary Findings on Washington’s 1995 Truancy Law” (Webster Reference Webster1996, 5). The report can be accessed at http://www.wsipp.wa.gov/ReportFile/1217/Wsipp_TRUANCY-Preliminary-Findings-on-Washingtons-1995-Law_Full-Report.pdf. FTAS language comes from SB 1 Section 4.001, 74th Regular Session (Texas 1995). This bill can be accessed through the Texas state legislative archives at https://capitol.texas.gov/billlookup/text.aspx?LegSess=74R&Bill=SB1.
Note: FTAS = Failure to Attend School in Texas.
Relying on a most similar case design, I chose Texas’ FTAS and Washington’s Becca’s Bill based on the policies’ similarities in aim (see Table 1) and their shared divergence from research-backed best practice, but their disparate outcomes in terms of public officials acknowledging failure.Footnote 3 Different regional affiliations and ideological leanings also make Texas and Washington useful foils. Texas’ conservative and law-and-order tendencies suggest that FTAS should have retained support, while the more progressive northwestern policy makers should have acknowledged Becca’s Bill’s failure; in fact, we observe the opposite. In what follows, I delineate my methods for data collection and analysis, followed by a review of each state’s policy trajectory and its alignment with my theory.
Meticulously constructed timelines of each policy’s trajectory and state political context provide the empirical basis for this study. Interviews with political elites, activists, and researchers involved in each case offered insight into the more subtle political dynamics and personal motivations at work in each case. I examined each state’s legislative archives, using keyword searches for legislation related to each policy, in order to understand the policy history, sequence of policy changes, and policy intent. While these records do not document private conversations among lawmakers, they do capture all proposed legislation and show details about how bills changed from introduction to passage (or defeat). Witness lists, which include organizational affiliations, delineate the groups and individuals that acknowledged failure and supported (or opposed) policy revision. Over 500 hours of audio and video recordings of public testimony in legislative committees offered invaluable insight into the rationales leveraged by supporters and opponents of the given policy. The back-and-forth discussion between legislators and the public that occurs in committee meetings also shed light on public officials’ priorities, biases, and logic as they processed information about policy outcomes. Primary source materials from both state and nonstate research organizations allowed me to trace the sequencing of the availability of information on policy outcomes. Furthermore, these reports and related press releases often describe each organization’s perspective on the quality of the information available to public officials.
The policy trajectories and evaluation timeline of Becca’s Bill in Washington and FTAS in Texas highlight the different political dynamics that data collection capacity and analytical capacity create for public officials. In the sections that follow, I describe how the paucity of data on Becca’s Bill in Washington required public officials to debate the need for data collection efforts rather than dissecting the merits of the policy and possible solutions. Texas, on the other hand, demonstrates the power of high-quality data to motivate public officials to acknowledge policy failure, presuming there are researchers available to analyze it. In other words, the anemic collection capacity in Washington undermined its comparatively robust research institutions’ ability to describe and advertise the failure of Becca’s Bill, while the thorough data collection efforts in Texas established a treasure trove of data for nonstate analysts to examine, eventually highlighting the contraindicated strategy of punishing truant students.
Becca’s Bill: Without Clear Data, Uncertainty and Strategic Framing Dominate
In 1993, 13-year-old Rebecca Hedman, who had a history of substance abuse and truancy, ran away from the substance abuse center in which she was enrolled and was brutally raped and murdered. Hedman’s parents argued publicly that they had sought help from the state to better manage and support their daughter, but that their efforts had been stymied by legal barriers regarding detention and information sharing among schools, the police, and parents. As a result, Washington state passed Becca’s Bill—named after Hedman—in 1995. In addition to giving parents additional rights in the case of runaways or substance-abusing children, the law required schools to file a truancy petition with juvenile court after 7 absences in a month or 10 in a single school year. Judges hearing truancy petitions conducted a fact-finding hearing and then ordered students to return to school; if a student violated this court order, she could be placed in a juvenile detention center for contempt of court (Burley and Harding Reference Burley and Harding1998). Becca supporters—as proponents of the bill are known in Washington—argued that detention can be necessary to protect youth from themselves and can give parents and the school a chance to communicate with and provide support for the young person.
Washington’s state research bureaucracy is substantial, especially compared to Texas’ (see Table 2). Collectively, these organizations published 12 reports between 1995 and 2017 on the outcomes of Becca’s Bill (see Table 3). However, uncoordinated data efforts precluded clear findings in these reports and therefore undermined the possibility of widespread acknowledgment of Becca’s Bill’s failure. The absence of reliable data afforded skeptics the opportunity to reasonably dismiss suggestive evidence as the result of low-quality data, rather than having to reckon with clear evidence that the policy was failing to reduce truancy and increase student achievement. This, in turn, led to, at best, partial acknowledgment of policy failure, as we see with Becca’s Bill, in which a handful of legislators led a charge for policy revision but failed to advance their cause past the Human Services and Corrections Committee. Becca’s Bill also offers insight into how analytical capacity alone is insufficient to create conditions for widespread acknowledgment of policy failure.
a List reflects independent state agencies whose mission or About Us description contains the terms “research,” “data,” or “studies.” The Texas list of state agencies can be found at https://www.tsl.texas.gov/apps/lrs/agencies/index.html and the Washington state list can be found at https://access.wa.gov/agency.html.
b Texas list omits Texas AgriLife Research because it is no longer a separate agency, but part of the Texas A&M University system.
c The Texas State Auditor’s Office describes its work as ensuring “accountability,” but it is included here to be conservative and commensurate with including the same office for Washington, which does describe its mission as researching policy outcomes.
d Their first publicly available report is from 1995, suggesting the organization was up and running by 1994 at the latest.
Sources: WSIPP reports can be found at https://www.wsipp.wa.gov/. WSCCR reports can be found at http://www.courts.wa.gov/index.cfm?fa=home.sub&org=wsccr&page=welcome&layout=&parent=committee&tab=Welcome.
Note: + = outcomes in line with policy intent; − = outcomes not in line with policy intent; mixed identifies reports with an equal number of positive and negative conclusions; inconclusive outcomes reflect either a descriptive report or unclear findings given poor data quality or insufficient analytical methods.
Source: TPPRI report can be found at https://csgjusticecenter.org/publications/breaking-schools-rules/. The Texas Appleseed report can be found at https://www.texasappleseed.org/sites/default/files/TruancyReport_ExecSummary_FINAL_SinglePages.pdf.
Note: + = outcomes in line with policy intent; − = outcomes not in line with policy intent; TPPRI = Texas Public Policy Research Institute.
In keeping with Washington’s decentralized judicial system, the original legislation allowed counties to decide which local officials took the lead on implementing Becca’s Bill (Webster Reference Webster1996, 12). This varied approach to implementation, combined with vague definitions of terms critical to the implementation of the law, precluded accurate and useful data collection on the policy’s outcomes. According to the first Washington State Institute for Public Policy (WSIPP) report on the policy in 1996, “In each county, different actors took the lead” on data collection (Webster Reference Webster1996, 12). The report goes on to identify that many officials were initially confused about exactly which ages of students were subject to the policy and what type of legal representation students were entitled to for their first truancy hearing. Perhaps most importantly, the original legislation did not explicitly define “truancy” or “unexcused absence,” leaving it to the counties to determine the threshold for each condition. The 1998 WSIPP report on Becca’s Bill outlines a number of flaws in the data collection systems that influenced the validity of the policy investigation, including the finding that up to “two-thirds of the students with excessive absences were not marked as ‘truant’ by their schools” (Burley and Harding Reference Burley and Harding1998, 17). The lack of a consistent definition for these key concepts precluded gathering reliable and interpretable data from all 39 counties.
Without analyzable statewide data, WSIPP initially relied on case studies of particular counties that were able and willing to provide data on their truancy outcomes. A 1998 10-county study suggested that the truancy petitions may have been “changing behavior patterns,” and also pointed out that these effects seemed to only apply to students “experimenting” with truancy, as opposed to those who had established patterns of truancy (Burley and Harding Reference Burley and Harding1998, ii). Two years later, WSIPP conducted an in-depth case study of the truancy detention practices in Seattle, finding that filing a truancy petition did not increase the chances that the student would stay in school. They suggested that awareness of the truancy process may have had some deterrence effect among students that had not yet been truant. However, across these case studies, the authors were quick to acknowledge that given the particulars of the counties that were included in the study, the findings were not generalizable to the remainder of the state. The 2008 WSIPP study most explicitly identified the data challenges it faced, stating, “Most programs are not evaluated and those that are evaluated generally use research designs and methodologies that do not permit us to draw conclusions about causality” (Kilma, Miller, and Nunlist Reference Kilma, Miller and Nunlist2009, 5). As shown in Table 3, inconclusive findings continued to dominate the reports through 2010.
Insufficient Data Create Opportunities for Strategic Framing
In the case of Becca’s Bill in Washington, there were knowledgeable and willing researchers with a deep understanding of statistical inference ready to evaluate the truancy policy’s impacts, but a paucity of data renders even the best-trained researchers unable to generate meaningful conclusions. The Washington state researchers articulated the absence of clear findings, which not only precluded widespread acknowledgment of policy failure but also created the opportunity for ideologically driven narratives, since the official state research organizations had declared that they were unsure about the impacts of the policy.
Washington had also established a regular cadence for policy evaluation. These consistent reporting timelines meant that researchers were continually revising their findings and prior statements about the effectiveness of a policy. In several cases with Becca’s Bill, the research agencies met their reporting requirements but openly acknowledged contradictory findings and flaws in prior statistical methods. For example, in their 2002 report, WSIPP observed, “the bill seems to be achieving one of its intended outcomes: helping to keep youth enrolled in high school; indicative but not causal evidence that truancy petitions are associated with lower juvenile arrest rates” (Aos Reference Aos2002, 20). However, the 2010 WSIPP report retracted their statement about the effects of the policy:
An earlier Institute report found that the increase in petitions following enactment of the Becca Bill appeared to increase high school enrollment in Washington. However, an update of that analysis, using a longer time period and an improved statistical method, no longer shows a statistically significant relationship between petition filling and enrollment. (Miller, Kilma, and Nunlist Reference Miller, Kilma and Nunlist2010)
Although it accurately reflected the best available findings on the policy’s outcomes, this back-and-forth provided evidence to both the policy’s opponents and its supporters. And, despite increasingly firm evidence that the policy was not producing the intended outcomes, the original sponsor of Becca’s Bill, Senator Jim Hargrove (D), remained a vocal advocate for the importance of courts retaining the power to detain young people for their own safety and well-being. Hargrove, as the chair of the Human Services and Corrections Committee, would often start hearings related to Becca’s Bill with a reminder of why he supported the bill and the evidence he believed showed its effectiveness:
As the truancy rates went up, the juvenile arrest rates dropped dramatically, and it may not all be connected to that, but I mean I would say that is probably accurate…I think it has had quite a bit of success in this state and in general though I know that we have some holes in this system that I’d like to repair, and I hope our work this session will allow us to repair. (Public Testimony on HB 5651 and HB 5745 2015)
Without clear data on the graduation rates, attendance patterns, and postgraduate outcomes for students that had been detained, there was no compelling alternative narrative about the outcomes of the policy. Legislators, students, and professionals who disagreed with Hargrove could only point to national studies on criminal justice and recidivism to argue that the policy needed revising.
For example, when one witness pushed back on Hargrove’s correlational interpretation, pointing out that the counties that detained the highest number of students were not the same counties that had the highest graduation rates, Hargrove retreated to anecdotal evidence about the importance of continuing Becca’s Bill, replying, “We have some judges up from Clark County…asked them to not take it away because they use it as the stick…I would like to hear what the judges have to say” (Public Testimony on HB 5651 and HB 5745 2015). Judges, too, offered anecdotal evidence of successes from their own courtrooms to argue that Becca’s Bill should remain in place. Without clear evidence to the contrary, these political elites could frame their views as authoritative interpretations of policy outcomes.
Notably, Becca Hedman’s parents, originally supporters of Becca’s Bill, eventually retracted their support (Miller, Kilma, and Nunlist Reference Miller, Kilma and Nunlist2010). In 2015, Democratic Senator Jeannie Darneille introduced legislation to eliminate detention as an option for juveniles, but it failed to advance beyond the Human Services and Mental Health Committee (Santos Reference Santos2015). Interestingly, this legislation was based on recommendations from a minority report from the Becca Task Force—a team of educators, judges, legislators, and social workers that met regularly to discuss issues related to Becca’s Bill. The majority of the Task Force had voted against revising Becca’s Bill, and, in strong disagreement with this decision, the minority had cooperated with Senator Darneille to bring their proposals to the committee.
The original Becca’s Bill legislation established some expectation of data collection, institutionalizing some semblance of collection capacity. However, the decentralized and unstructured nature of data collection yielded information that was easily subjected to opportunistic framing, rather than a coherent policy discussion. Uncoordinated data collection made data integrity, rather than the effectiveness of the policy, a key point of debate, making limited acknowledgment, at best, the most likely outcome. Even when the state received a grant from the MacArthur Foundation to improve its juvenile justice practices, the added resources and expertise focused on improving data collection as the first step to revising the policy (Public Testimony on HB 5651 and HB 5745 2015).
Although Becca’s Bill benefitted from established state-backed research organizations and trained researchers examining the policy’s outcomes, the lack of reliable data resulted in conflicting findings regarding the efficacy of the policy, with earlier reports advertising its success and later reports suggesting its failure. This shows that high analytical capacity alone is not sufficient to inspire public officials to acknowledge policy failure. High analytical capacity must be paired with high collection capacity; otherwise, analytical capacity remains a hollow tool for documenting policy outcomes, leaving public officials to frame findings to meet their political needs.
Failure to Attend School: Where There are Data, There is a Way
Texas incorporated the FTAS provision into the state’s education code in 1995 (Texas Education Code Section 25.094). The policy required schools to report students who had surpassed 10 absences in a school year and allowed schools to charge students with an adult Class C misdemeanor for truancy.Footnote 4 Unlike Washington, Texas did not build any studies of FTAS’ outcomes into the original legislation. The legislature was not expecting to receive any information or updates on how the policy performed, and, when it did, the only two key reports came from nonstate entities. And, yet, by 2013, the Texas legislature expressed bipartisan acknowledgment of its policy’s failure, including among many who had supported the original bill. This case highlights the critical importance of data collection capacity for creating the possibility of evidence-based acknowledgment of policy failure. When credible nonstate researchers can access and analyze existing high-quality data, the chances of most public officials acknowledging failure increase substantially.
High Data Collection Capacity
The original FTAS legislation did not require any specific data collection to assess its effectiveness, nor did it establish any schedule for evaluating the policy; however, in the throes of establishing accountability through standardized testing in the state, the Texas Education Agency (TEA) happened to be collecting a treasure trove of data that eventually facilitated the evaluation of FTAS. Given that the goals of FTAS were to decrease truancy and increase graduation rates and that its implementation and long-term effects involved courts and jails, data from both the educational system and the criminal justice system offered the most accurate insight into the effectiveness of the policy.
Between 2000 and 2003, the TEA began collecting longitudinal data on the educational experiences and outcomes of the universe of students that entered 7th grade in the state between 2000 and 2002 (Carmichael et al. Reference Carmichael, Marchbanks, Booth, Fabelo, Thompson and Platkin2011). These data were uniquely suited to evaluating the long-term outcomes of various education policies, as they were centrally mandated and managed and reflected information on each individual student. According to Breaking Schools’ Rules, the first report that evaluated the impacts of school-based discipline using the data:
[Texas] is highly unusual in its maintenance of individual electronic records, rich with information about each public-school student…What further distinguished Texas from every other state at the start of this study in 2009 was the opportunity to study at least six years’ worth of state student level education and juvenile justice electronic records and to benefit from broad bipartisan support for this research. (Carmichael et al. Reference Carmichael, Marchbanks, Booth, Fabelo, Thompson and Platkin2011, 11–12)
Interviews with researchers involved in this analysis confirmed the rarity of such a complete dataset on student educational experiences and outcomes, and the possibilities it created for studying the effects of punitive school policies. A researcher from the project acknowledged that, “the impetus [to study school discipline] came from my understanding of the power of the data.”
A notable feature of the longitudinal Texas dataset is that it was centrally managed and not collected in response to a single policy outcome. The more general nature of the data collection may have provided some political cover: neither schools nor the Texas Education Agency itself knew exactly how the information was going to be used, and they therefore had fewer clear incentives to shirk reporting responsibilities. Eventually, in 2011, after the publication of Breaking Schools’ Rules and after the longitudinal data collection on middle schoolers, the legislature required schools to document the number of FTAS truancy charges submitted each year. While schools, in theory, would receive a lowered state rating for failing to report their truancy charges, public records requests for the data revealed significant missingness in the FTAS-specific reporting. Texas Appleseed’s Class Not Court report describes the substantial missingness it encountered in truancy data from the TEA:
The information received from TEA included data from less than half of all Texas school districts. According to TEA’s 2012–13 report, there are 1,026 school districts in Texas, but the information that TEA provided to Texas Appleseed only included data from 446 districts. (Fowler et al. Reference Fowler, Mergler, Johnson and Craven2015, 50)
However, Texas Appleseed was eventually able to supplement these data with additional records obtained through Freedom of Information Act requests to the Office of Court Administration. This suggests that state collection capacity creates the possibility of high-quality data; however, the incentives of the agencies or individuals tasked with collecting and reporting data can influence the quality of the final information. Centralized collection plans—evidence of high collection capacity—may produce more usable data that result in analysis more resilient to partisan reframing.
Outside Actors Can Supplement Analytical Capacity
Even though Washington received assistance from the MacArthur Foundation to develop better data collection systems beginning in 2014, the state could only improve its data going forward, meaning it had to wait several years to accumulate enough information to draw meaningful conclusions. Texan public officials, on the other hand, had a treasure trove of data just waiting to be analyzed, one that could provide insight into years’ worth of FTAS outcomes.
Given the high-quality data collection occurring in the TEA, there was an opportunity for policy makers to learn about the outcomes for youth who received punitive and severe consequences for their behavior in schools, and the results did not align with the goals of educating all Texas children and ensuring that they graduated from high school. Just because an agency has information on its outcomes, however, does not guarantee that it will want the information analyzed. According to several researchers at the Texas Public Policy Research Institute (TPPRI)—the research organization at Texas A&M that first analyzed the TEA data for Breaking Schools’ Rules—getting access to the information from TEA was extremely challenging. TEA was not actively offering the data for analysis, especially to outside researchers. However, given recent changes in guidelines and funding requirements from the national Office of Juvenile Justice and Delinquency Prevention, the governor, legislators, and court officials were interested in documenting trends in school discipline and juvenile justice outcomes and therefore exerted pressure on TEA to cooperate with researchers (Carmichael et al. Reference Carmichael, Marchbanks, Booth, Fabelo, Thompson and Platkin2011, 4–5). The Breaking Schools’ Rules report describes the extensive political pressure required from a bipartisan group of state legislatorsFootnote 5 to get the data from TEA to the expert researchers for analysis. The report further acknowledged the “strong support” of the state’s juvenile justice system and the executive and judicial branches in securing TPPRI’s access to the necessary data (Carmichael et al. Reference Carmichael, Marchbanks, Booth, Fabelo, Thompson and Platkin2011, 4–5).
While it was certainly unique that TEA and the Texas Juvenile Justice Department had collected comprehensive data, the data sat unused for many years. It was the analysis of the data that shed light on the deleterious impacts of criminalizing school-based behaviors. In Breaking Schools’ Rules, TPPRI at Texas A&M showed that zero-tolerance policies and even limited exposure to the criminal justice system dramatically decreased graduation rates. Texas Appleseed’s analysis of available (albeit incomplete) truancy-specific data in Class Not Court showed the jaw-dropping number of students charged with truancy each year—upwards of 100,000—and the substantial overrepresentation of students of color and students of low socio-economic status—groups already at high risk of dropping out—among those who became court-involved for truancy.
In Texas’ case, the capacity for data analysis lay not with the state agencies but with the higher education research system and the robust group of policy organizations in the state. According to interviews with the researchers who worked with politicians to get access to the data, the TEA was both understaffed and ill-informed on how to leverage the treasure trove of data it collected. Unlike with Becca’s Bill, there were no agencies explicitly charged with analyzing data, but there was a plethora of trained researchers in and around Texas to take up the analytical task.
In the Hands of Capable Researchers, High Quality Data Paints a Clear Picture of Failure
The findings from TPPRI and Texas Appleseed, in turn, caught the attention of several public officials and policy organizations (see Table 4). Led by its Chief Justice, Wallace Jefferson, the Texas Supreme Court’s policy arm took on FTAS and related criminalization policies as a key policy priority in 2013. According to interviews with some of the lead policy experts in the Texas Judicial Council, the policy arm of the Texas Supreme Court, it was the reliability and completeness of the research published by TPPRI, Texas Appleseed, and other researchers in the Texas university system that caught their attention. The policy experts in the Texas Judicial Council also suggested that while the research coming out of the think tanks was compelling, the university-driven research was most helpful in that policy makers viewed it as more objective than information put out by more blatantly partisan think tanks. In a blog post he wrote for the National Center for State Courts, Chief Justice Jefferson articulated the value of TPPRI research and described how the clear findings moved the policy discussion beyond debating individual anecdotes about the policy’s impact and inspired the focus of his 2011 and 2013 State of the Judiciary speeches:
These invaluable studies add important numbers to anecdotal evidence of needed reforms. Texas officials are confronting this troubling data head on…To bring light to the issue and the need for improvement in school disciplinary policies, I pled for action during my 2011 State of the Judiciary speech. (Jefferson Reference Jefferson2012)
Nonstate actors researching the impacts of FTAS played a pivotal role in providing convincing analyses that eventually resulted in widespread bipartisan support for decriminalizing truancy. With access to a longitudinal data set on the universe of the population in question (in this case students)—the gold standard of an observational dataset—research organizations were able to conduct credible analyses; therefore, unlike in the case of Becca’s Bill, both liberal and conservative public officials and organizations eventually agreed that the documented outcomes were problematic, although for different reasons. Conservatives framed the issue as a waste of state resources and a perpetuation of big government, while more liberal leaning organizations, like Texas Appleseed itself, focused on the racial and class disparities that the policy perpetuated. The increasing national awareness of the financial and equity consequences of mass incarceration no doubt also accelerated opposition to the effects of FTAS.
In a true demonstration of learning, one of the original supporters of FTAS, Senator John Whitmire (D), Chair of the Criminal Justice Committee, led the legislative effort to amend FTAS. Many judges in Texas, as in Washington, opposed the change to the truancy law. However, unlike in Washington, Whitmire had specific evidence from his own state to point to after judges testified in public hearings about the anecdotal success of FTAS in their own courtrooms. In response to public testimony supporting FTAS, Whitmire would often remind the committee that over 100,000 students had been charged in the past year, and that the county with the highest FTAS charge rates was not the county with the highest graduation rates. By 2013, there was bipartisan support for decriminalizing status offenses generally, and FTAS specifically, and a full revision passed in 2015 (see Figure 2).
The Texas acknowledgment emerged from greater collection capacity than was available for Becca’s Bill. In Washington, despite a robust state network of agencies and researchers dedicated to policy analysis, the paucity of reliable and objective data on Becca’s Bill yielded conflicting reports about the policy’s outcomes and offered both critics and supporters a credible opportunity to claim undesirable outcomes were the result of politically driven data mining, rather than a reflection of reality. The case of FTAS in particular demonstrates the power of thoroughly collected data on policy outcomes. However, the Texas case also highlights the role that nonstate research organizations can play in setting the stage for public officials to acknowledge failure. The juxtaposition between Texas and Washington suggests that states can vary independently in their capacity for collecting and analyzing data, and that the interaction between the two capacities influences the likelihood of public officials acknowledging policy failure.
Moving Beyond Politics-as-Usual
Traditional accounts of policy change and public officials’ policy preferences point to ideology, changing electoral incentives, and public pressure as common explanations for public officials shifting their stance on public policy. However, the dynamics of learning and failure acknowledgment in the two truancy policy trajectories in Texas and Washington suggest that these explanations are not sufficient.
Public officials, particularly elected ones, rarely acknowledge policy failure, whatever their ideological preferences, simply because evidence of the outcomes contradicts the policy’s stated intention. Yet, in traditionally conservative Texas, we see widespread and bipartisan acknowledgment of FTAS’ failure, including from original supporters of FTAS. In fact, Senator Whitmire, who led efforts to revise FTAS, had voted for FTAS in 1995. The more progressive Washington state, on the other hand, had a single Democratic senator warning of Becca’s Bill’s deleterious consequences, while the policy maintained bipartisan support, including from its original Democratic sponsor, Senator Jim Hargrove. Even after Hargrove had decided to retire from the legislature, he continued to staunchly protect Becca’s Bill.
Salient popular pressure was remarkably absent from both cases. Teachers, principals, and their associated unions and professional organizations largely stayed out of the policy debate. No doubt more than occupied with the day-to-day challenges of running classrooms and schools, they were not leaders in highlighting the failures of each state's truancy policy. The professional associations for teachers and principals did testify on the truancy bills, but only to remind the legislatures that they could not tolerate any additional unfunded mandates and to request that lawmakers consider the financial implications of whatever they decided to do.
A handful of students and families did testify about their experiences with the truancy policies in both Washington and Texas. While most of these testimonies outlined the failures of the punitive approach to truancy, at least one parent or student in each state testified in opposition to revision, arguing that appearing in court emphasized the severity of chronic truancy and inspired attendance.
Service providers (i.e., social workers and youth defense attorneys) familiar with the realities of Becca’s Bill also testified before the Senate Committee that the policy was exacerbating truancy, rather than ameliorating it. Perhaps most compellingly, the Mockingbird Society, an organization that organizes and trains members of the homeless and foster care community to self-advocate with state-level politicians, lobbied the legislature to revise the policy. Becca’s Bill, however, remained in place.
Texas legislators were also privy to a myriad of perspectives from practitioners, families, and young people who had experienced the reach of FTAS. In 2015, students from an afterschool support group run by a local law student, a much less institutionalized group and lobbying effort than the Mockingbird Society, appeared before the Senate Committee on Criminal Justice. However, two of the three students testifying became so shy that they were unable to do any more than introduce themselves and say that they opposed the bill. While endearing, this performance did not provide Texas legislators with much information on the broader effects of FTAS on young people. And, yet, by 2015, all three branches of Texas government acknowledged the failure of FTAS and voted to revise the policy.
Ideology, changing electoral incentives, and public pressure certainly have influenced, and will continue to influence, public officials' perspectives on which policies are effective. However, the comparison between Texas and Washington public officials' responses to their truancy policies suggests that these factors do not offer a complete explanation, particularly of when public officials are willing to acknowledge failure.
Parsing out capacity for data collection from capacity for data analysis, on the other hand, offers insight into why Texas public officials were more willing to acknowledge policy failure, despite prior support for FTAS or an ideological commitment to being tough on crime. Figure 3 summarizes the degree of acknowledgment we should expect among public officials based on the capacity for data collection and analysis for a given state policy. Texas represents the treasure trove case, in which the state had invested in a centralized and large-scale data collection effort to evaluate a range of educational outcomes. While the state itself had not invested heavily in analytical capacity, nonstate actors were able to step in and conduct clear and meaningful analyses. These analyses, in turn, produced convincing findings that swayed a range of public officials to acknowledge failure. Washington state, on the other hand, represents the hollow case, in which trained researchers were unable to compensate for inconsistent and uncoordinated data collection.
Conclusion
Public officials, pundits, and citizen groups often discuss evidence-based policy making as a panacea for ineffective and inefficient governance. While this examination of truancy policy in Texas and Washington from 1995 to 2017 does not wholly contradict this notion, it does complicate the connection between evidence and identifying policy failures.
In the mid-1990s, Texas and Washington implemented unusually punitive policies that fed young people directly into the states' juvenile and criminal justice systems. Subsequent long-term outcomes suggested that these experiences likely further traumatized young people, leading to more truancy and risk behavior, the exact opposite of the policies' goals. These policies also disproportionately targeted Black, Brown, and low-income students, all of whom already face more obstacles between them and high school graduation. Texas public officials recognized the failure of their policy within two decades of its passage, while Washington's did not. This study probes why Texas public officials were willing to acknowledge failure, despite their traditionally conservative and tough-on-crime policies.
First, I suggest that state capacity to engage in evidence-based acknowledgment of policy failure actually relies on two separate components: data collection capacity and analytical capacity. The interaction of these two capacities influences the likelihood of widespread acknowledgment of policy failure. Collection capacity is necessary for evidence-based acknowledgment of policy failure, whereas analytical capacity can be supplemented by nonstate actors. No amount of money or expertise can go back in time and collect data on past policy outcomes, which highlights nonstate actors' inability to compensate for missing data collection. Washington's robust state research bureaucracy charged with studying policies could not make up for unreliable and missing data on the impacts of Becca's Bill. Texas, on the other hand, had collected excellent data on educational experiences, criminal justice involvement, and life outcomes, and, when supplemented by analytical capacity from local research organizations, these data yielded clear and convincing evidence about the failure of FTAS. The quality of the data and subsequent analysis, in turn, constrained public officials' ability to develop partisan-driven narratives of the policy's outcomes.
My findings about the ability of nonstate actors to supplement state collection and analytical capacity have three implications for designing and implementing evidence-driven policy making. First, states partnering with outside institutions through research practice partnerships (RPPs), an increasingly popular approach, particularly in education, in which researchers and practitioners cooperate to develop, implement, and evaluate policy innovations (e.g., Coburn and Penuel Reference Coburn and Penuel2016; Israel et al. Reference Israel, Schulz, Parker and Becker1998), should consider leveraging the structure and resources of the agencies implementing policy to provide usable information for research partners. To the extent that RPPs involve nonstate organizations in collecting data, centralizing the data collection and clearly delineating definitions and metrics promise to make the data more useful. Second, nonstate actors may be especially valuable in supplementing a state's ability to analyze data. Public officials should be strategic in deciding when, where, and how nonstate institutions can supplement gaps in their state's capacity to conduct robust data analysis. Third, and most broadly, this study suggests that states can effectively act as responsible policy laboratories, experimenting, evaluating, and responding to policy successes and failures. To do so, however, states must intentionally invest in centralizing data collection efforts and identifying capable analytical partners.
This project is a first step in dissecting state capacity in the era of big data and, thus, generates even more questions than it answers. Given their critical role in establishing the conditions for acknowledging failure, an important next step is documenting how frequently statements of policy intent and mandated policy evaluations are included in state-level social policies. The decision to constrain their future selves by requiring data collection and policy evaluation is itself a political calculation for public officials. What motivates lawmakers to establish strong collection practices up front versus later in the life of a policy? How might the status (i.e., race, class, and projected financial and political resources) of the policy target influence this decision? The larger research agenda, of which this article is a part, goes a step further to examine how acknowledgment of policy failure can turn into concrete policy revisions. Future work should also examine the political dynamics that lead to the development of state-based research infrastructure in the first place. Finally, future work should consider how the effects of collection and analytical capacity may change in our current political climate, characterized by increased anti-intellectualism and antiscience sentiment among prominent public officials.
With states increasingly owning the design and administration of policies that directly and dramatically impact racial, economic, health, and educational disparities, understanding when policy learning and responsible experimenting can occur is essential for addressing ever-increasing inequality. Even the best-intentioned public officials will make mistakes, but scholars of American politics and public policy have an opportunity to offer clearer insight into the design elements of state institutions, policies, and bureaucracies that encourage learning and mitigate the chances of exacerbating inequality while innovating.
Data Availability Statement
Replication materials are available on UNC Dataverse at https://doi.org/10.1017/spq.2021.11.
Funding Statement
This research was supported by funding from the Harvard Multidisciplinary Program in Inequality and Social Policy.
Conflict of Interest
The author declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Author Biography
Sarah James is a doctoral candidate in Government and Social Policy at Harvard University. She studies state politics and inequality.