In this age of fiscal stress for governments, interest is growing in using benefit-cost analysis (BCA) to help guide budget decisions. States, in particular, have faced tough spending choices in recent years as they sought to overcome cumulative shortfalls that exceeded $500 billion between 2007 and 2012 (National Conference of State Legislatures, 2012). Although states’ revenue has begun to expand, so have the costs of critical programs such as Medicaid, which climbed 20% in 2011 and 15% in 2012 (Centers for Medicare & Medicaid Services, 2011; The Pew Charitable Trusts, 2013, 2014). Further, states’ long-term budget outlooks have been clouded by federal actions to address the federal budget deficit, such as the 2013 sequester, which cut federal grants to states by approximately $5 billion (Federal Funds Information for States, 2013). As a result, states are likely to continue confronting fiscal pressures for many years to come (Haggerty, 2013; National Conference of State Legislatures, 2011, 2012).
Benefit-cost analysis, with its focus on identifying the most cost-effective means of achieving policy goals, holds the promise of helping states direct resources to programs that provide the greatest return on the investment of taxpayer dollars (Graham, 2008). Although federal use of BCA has been long-standing and widely studied, little information is available on states’ use of the technique. This article presents a nationwide study that begins to address this gap by answering four critical questions: How frequently do states (including the District of Columbia) conduct BCAs? What are the characteristics of these analyses? Are the results influential in state policy and budget decisions? What challenges do states face in conducting and using BCAs? The article also establishes a baseline of descriptive information on states’ production of BCA studies, the characteristics of these studies, and their reported use by policymakers. We apply scholarly concepts discussed in the Literature Review and Findings sections to explore and distinguish among types of BCA use. We then use interview data to identify barriers to BCA production and use and discuss how these barriers align with those identified by other researchers. Although this study does not attempt to formulate new theory related to BCAs, it provides original data that other researchers can expand upon to develop theories of BCA production and use.
Literature review
Benefit-cost analysis seeks to project and assign dollar values to the predicted outcomes of one or more policy choices, ideally including all direct and indirect effects throughout society. These dollar values are expressed as discounted net present values (NPVs) to reflect the fact that many benefits and costs accrue over time. The summed predicted benefits and costs of each option are then compared to determine whether the option would generate a net positive benefit to society, with the results typically reported as a benefit-cost ratio – a 5:1 ratio indicates that the choice would generate $5 of benefits for every $1 invested. These findings can give decision makers insight into whether their policy choices will generate social benefits that outweigh the costs, and can be used to select the most efficient alternative among competing options as measured by return on investment (Boardman, Greenberg, Vining & Weimer, 2011; Shaffer, 2010; Lee et al., 2012; Vining & Weimer, 2010; Viscusi, 1996).
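In formula terms (a standard textbook formulation rather than the method prescribed by any particular state), for an option with predicted benefits B_t and costs C_t in year t, a time horizon of T years, and discount rate r:

$$\mathrm{NPV} = \sum_{t=0}^{T} \frac{B_t - C_t}{(1+r)^t}, \qquad \text{benefit-cost ratio} = \frac{\sum_{t=0}^{T} B_t/(1+r)^t}{\sum_{t=0}^{T} C_t/(1+r)^t}.$$

Under this convention, a 5:1 ratio corresponds to $5 of discounted benefits, and hence $4 of net benefit, per $1 of discounted cost.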
When a BCA is not feasible due to data, research, or time limitations, related techniques such as cost-effectiveness analysis, which follows a similar logic in comparing alternatives, can be applied (Harrington, Heinzerling & Morgenstern, 2009). Cost-effectiveness analysis uses an economic approach to analyzing alternatives but typically does not assign dollar values to benefits; instead it compares the costs of achieving an outcome, such as the average per-person cost of completing an alternative drug treatment program (Boardman et al., 2011; Brent, 2006).
Federal use of BCA
Federal use of BCA is long-standing and well documented, particularly as a tool for regulatory decision-making (Hahn, 1998a; Hahn & Dudley, 2007; Hahn & Litan, 2004; Harrington et al., 2009; Masur & Posner, 2011; Organisation for Economic Co-operation and Development, 1997; Shapiro & Morrall, 2012; Sunstein, 2002). In 1920, Congress updated the Rivers and Harbors Act to require the U.S. Army Corps of Engineers to recommend only water projects that would produce benefits exceeding their costs. The Corps subsequently rejected more than half of the proposed water projects it assessed using this approach after the analyses found that the projects’ benefits did not justify their costs (Porter, 1995). By 1936, project approval based on BCA had become standard Corps practice, and Congress passed the Flood Control Act, which required evaluation of all benefits and costs of water resource projects and restricted Congressional authorization to those that had been approved by the Corps (Zerbe, Davis, Garland & Scott, 2010).
Federal use of BCA substantially expanded in the 1960s when the Department of Defense mandated the studies as part of its Planning, Programming, and Budgeting System initiative, which was later extended throughout the executive branch (Fuchs & Anderson, 1987). Although that effort was unsuccessful and subsequently abandoned, the Ford administration incorporated BCA into its review process for federal regulations (Organisation for Economic Co-operation and Development, 1997).
In 1981, President Reagan’s Executive Order No. 12291 broadened the mandate, directing agencies to demonstrate that regulations would generate benefits to society in excess of their potential costs to society. During President George H.W. Bush’s administration, the U.S. Office of Management and Budget (OMB) issued Circular A-94 (Revised) (1992), which provided guidelines and discount rates for BCA of federal programs to promote well-informed decision-making. Then in 1993, President Clinton’s Executive Order No. 12866 similarly required agencies to assess the benefits and costs of regulations, using both quantitative and qualitative methods. Both Executive Orders 12291 and 12866 were far-reaching, requiring agencies to assess even those benefits and costs that could not be quantified (Hahn & Dudley, 2007), and Circular A-4 later standardized this process and prescribed methods for federal agencies to use when conducting BCAs (OMB, 2003).
The Obama administration has also embraced BCA and requires federal agencies to demonstrate that proposed funding priorities are based on credible empirical evidence (OMB, 2012). The OMB requires that regulatory impact analyses, including BCAs, be produced for major rules and has emphasized the technique in its instructions to agencies about performance and management (OMB, 2012; Zerbe et al., 2010).
Benefit-cost analysis at the state level
Although the federal government’s use of BCA is well documented, little is known about the extent to which states use this approach, particularly outside of grant- and rule-making. An evaluation of the 2009 round of the federal Transportation Investment Generating Economic Recovery program, part of the American Recovery and Reinvestment Act, assessed the BCAs that states provided to the U.S. Department of Transportation to support their grant applications (Homan, Adams & Marach, 2014). The study reported that, generally, the BCAs were barely above a marginal level of usefulness and were not considered significant factors in project selection during the first program round. A follow-up study on four subsequent program rounds, however, showed a small but significant increase in BCA quality and found that highly rated projects (based on BCA quality and the likelihood of positive net benefits) were significantly more likely to receive awards (Homan, 2014). Hahn (1998b) assessed the application of economic analysis in state regulatory decision-making and found that few states perform comprehensive economic impact assessments of most rules, and the majority avoid more rigorous analyses such as BCA. Another study, from the Institute for Policy Integrity, evaluated state-level regulatory decision-making and concluded that most states lacked the resources to assess the basic costs of regulation and did not conduct any rigorous analysis of benefits or alternative policy choices (Schwartz, 2010).
Use of BCA in policymaking
Scholarship on the use of research in the policy process customarily divides “use” into three distinct categories: instrumental – the direct use of findings by policymakers to shape their decisions; conceptual – the use of findings to enhance the understanding of an issue; and symbolic – policymakers’ strategic reference to research (though not necessarily specific findings) as justifications for decisions that they may have made for other purposes (Alkin, Daillak & White, 1979; Caplan, 1977; Henry, 2000; Leviton & Hughes, 1981). More recently, authors have expanded the concept of use to include intangible and indirect influences, such as a policymaker’s motivation to think more deeply about an issue based on information from an evaluation (Mark & Henry, 2004; Henry & Mark, 2003; Kirkhart, 2000).
Although the literature shows a general consensus that BCA has played an increasing role at the federal level (Baron & Dunoff, 1996; Hahn & Sunstein, 2002; Hahn & Tetlock, 2008; Posner & Adler, 1999; Sunstein, 2002), less has been written about the use of BCA results to inform policy decisions, and such use has been heavily disputed (Johnston, 2002; Shapiro & Morrall, 2012; Sunstein, 2002; Viscusi, 1995). Shapiro and Morrall (2012) found that individual case studies have been the most common approach to evaluating impact, but that these studies have reached widely disparate conclusions – ranging from findings that BCA had played a key role in reducing regulatory costs and increasing regulatory benefits to findings that BCAs had very minimal impact – making it difficult to draw generalizations about the impact of BCAs on policy and budget decisions. Even scarcer is research on BCA use at the state level, though several studies have assessed the use of other forms of research such as program evaluations and policy analyses (which in some cases incorporate BCA), and concerns about their limited impact are also long-standing (Bogenschneider & Corbett, 2010; Hird, 2005; Oliver, Innvar, Lorenc, Woodman & Thomas, 2014; VanLandingham, 2011).
Methodology
This study uses several methods to address our research questions. First, we performed a Westlaw search of the 50 states and the District of Columbia to identify statutory mandates to conduct BCAs (as of December 2012) using the terms “cost benefit analysis,” “benefit cost analysis,” and “cost effectiveness analysis.” We also searched CQ StateTrack for 2011 and 2012 legislative bills that created new statutory mandates using the same search terms. We initially expanded our search to capture potential alternative terminology, such as “economic impact analyses,” but decided to narrow our scope to those that used the above three terms to minimize the variance in how states define this method. Although in some cases states may have used executive orders, administrative codes, and other rules to require BCA, we limited our search to statutes due to the complexity of identifying these types of requirements and the fact that statutes, by their nature, suggest high-level support of this approach.
We then used a two-step process to identify benefit-cost reports produced by states. First, we sent an electronic survey to the heads of 1,518 executive and legislative offices and nonprofit and private policy institutions in the 50 states and the District of Columbia that we considered likely to sponsor or produce state-level BCAs. These included audit, budget/fiscal, corrections, economic development, education, environment and natural resources, evaluation, health, revenue, social services, and transportation units. The survey asked whether the entities had conducted or were aware of BCAs produced between January 2008 and December 2011, and requested electronic copies of or links to these studies. For entities that did not respond to the initial survey, follow-up electronic surveys were sent four to six weeks later. This effort generated an overall 31% response rate, ranging from more than 50% in Minnesota and West Virginia to less than 25% in 16 states; 23 states had response rates of at least one third. The highest response rates were from public safety and education entities – 50% and 45%, respectively. Among executive, legislative, and nongovernment/academic entities, to which the majority of surveys were sent, response rates were roughly similar – 30%, 37%, and 32%, respectively. Second, we conducted a comprehensive scan of each of these units’ websites to identify additional reports that appeared to contain benefit-cost or related analyses. This enabled us to control for potential recency effects and to locate reports that were not identified by survey respondents.
These steps identified an initial sample of more than 1,000 state reports. Although these efforts likely did not identify all BCAs performed during the four-year period (such as those that may have been conducted but never documented in a written report), they provided a reasonably comprehensive assessment of the quantity and quality of studies conducted by states. After initial screening, we eliminated from consideration studies that were not prepared for, written by, or evidently used by state governments or legislatures; did not focus on state policy; lacked systematic cost or outcome analysis; or were produced outside of the 2008–2011 time period. This process reduced our sample to 507 reports that we examined thoroughly in a second screening.
We examined each report in detail to determine whether it included eight technical elements essential to BCA rigor and reliability, which we identified through the relevant literature and the advice of an external review panel of benefit-cost and policy experts. These technical elements were as follows:
∙ Measurement of direct costs – costs specifically associated with the program/policy being evaluated, such as program wages and material costs.
∙ Measurement of indirect costs – costs not solely associated with the program/policy being evaluated, such as administrative costs for workers who provide administrative support for an entire office.
∙ Monetization of tangible benefits – benefits that have a dollar value, such as health care costs saved from reduced doctor visits.
∙ Monetization of intangible benefits – benefits that typically do not have a dollar value, such as the value of time saved.
∙ Comparison of measured program benefits and costs against a baseline or alternatives.
∙ Discounting of future benefits and costs to NPVs.
∙ Disclosure of key assumptions used in calculations.
∙ Disclosure of sensitivity analysis to test how results would vary if key assumptions were changed.
Appendix A provides examples of each element taken from state BCAs.
We classified reports as “full” BCAs if they contained all eight elements and as “partial” BCAs if they contained a measurement of direct costs and outcomes (monetized or nonmonetized) and at least one other of the eight key elements. We excluded reports that did not meet these criteria. For quality assurance purposes, external reviewers verified the classification of a representative selection of reports. This review produced our final sample of 348 studies, including 36 full BCAs and 312 partial BCAs.
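The screening logic can be summarized in a short sketch (illustrative only; the element names are shorthand introduced here, and the actual coding was performed manually and verified by external reviewers):

```python
# The eight technical elements listed above, in shorthand form.
FULL_ELEMENTS = {
    "direct_costs", "indirect_costs", "tangible_benefits",
    "intangible_benefits", "baseline_comparison", "discounting",
    "assumptions_disclosed", "sensitivity_analysis",
}

def classify_report(elements_present, measures_outcomes):
    """Classify a report as a full BCA, a partial BCA, or excluded,
    following the screening rule described above."""
    present = set(elements_present)
    if FULL_ELEMENTS <= present:            # all eight elements present
        return "full BCA"
    # Partial BCAs measure direct costs and outcomes (monetized or not)
    # plus at least one other of the eight key elements.
    has_other_element = bool(present - {"direct_costs"})
    if "direct_costs" in present and measures_outcomes and has_other_element:
        return "partial BCA"
    return "excluded"

print(classify_report({"direct_costs", "discounting"}, measures_outcomes=True))  # partial BCA
```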
To assess the use of these reports in state policy and budget decision-making and the challenges states face in producing and utilizing these studies, we conducted approximately six to 10 semistructured phone interviews in each of the 50 states and the District of Columbia with executive and legislative officials, relevant nongovernment public policy experts, and report authors; in total, our interview sample included 360 people. The number of interviews conducted per state varied somewhat based on the quantity of reports in each state and interview candidates’ availability and willingness to participate. This effort included multiple e-mails and phone calls to schedule interviews, inquire about alternative officials to contact, and conduct follow-up interviews. The interviews addressed the respondents’ knowledge about the use of each report in the state’s policy process, their perceptions of the successes of and barriers to performing and utilizing BCAs, and whether they knew of other states’ BCA studies (approximately 20 additional reports were identified through interviews). If interviewees indicated that a BCA was used in the state policy process, we required them to provide relevant documentation such as media clips, meeting minutes, testimony transcripts, or legislation. Many interviewees, for example, provided links to bills that incorporated a benefit-cost report’s recommendations.
Based on the interview responses, we coded each report into one of three categories of effect: reported “direct impact” – findings were adopted into or influenced legislative or regulatory action, including decisions to increase, decrease, or sustain appropriations; reported “indirect impact” – findings entered public discussion by receiving media attention or through presentations to and discussions with key policymakers; or no known impact – interviewees had no knowledge of an effect. Our definition of reported “direct impact” derives from Henry and Mark’s (2003) first level of influence, which they labeled as “those cases when evaluation processes or findings directly cause some change in the thoughts or actions of one or more individuals,” and such “instrumental” use of research in decision-making has been extensively studied by the field (Alkin, 2005; Leviton & Hughes, 1981; Shulha & Cousins, 1997).
Our definition of reported “indirect impact” derives from a rich literature on conceptual and symbolic utilization of research that addresses situations in which a study contributes to policy debates even if no decision is reached or a proposed solution that incorporated the research findings is ultimately rejected (Carlsson, Eriksson-Baaz, Fallenius & Lövgren, 1999; Cummings, 2002; Henry & Mark, 2003; Kirkhart, 2000; Patton, 2008).
Although the documents supporting impact claims helped to verify their occurrence and accuracy, we recognize the inherent difficulty of measuring the influence of BCAs given the many factors that feed into policymakers’ decisions. These include policymakers’ a priori judgments about a topic before reading benefit-cost studies, analysts’ ability to generate interest in incorporating benefit-cost data into the decision-making process, policymakers’ existing relationships with researchers, whether the findings aligned with policymakers’ values, and other considerations, such as political factors, that may have influenced the eventual policy choices made by the state. However, because the documentation received was in almost all cases consistent with the type of impact reported by the interviewees, we believe our data were reliable for identifying and appropriately categorizing the impact of the BCAs.
Upon completing data collection, we conducted descriptive analyses on the statutory mandates, BCA production, technical elements, and reported impact of the studies across the 50 states and the District of Columbia. We then reviewed our interview data to identify key challenges that interviewees recognized as impeding BCA production and use in their states and strategies that can be used to address these barriers.
Our study has several limitations. It was intended to provide a descriptive analysis of states’ production and use of BCA, and its focus on a four-year time period limits our ability to fully identify and substantiate long-term trends. Although we surveyed state offices and conducted a comprehensive review of their websites to identify BCAs, the survey’s low response rate (31%) and the fact that some BCAs may not have been posted on a website may have prevented us from identifying some studies. Also, although our interviewees offered generally consistent feedback about the BCAs they were aware of, it is possible that other state officials could have provided differing perspectives on BCA use in the policy process. Further, it was beyond the scope of our study to build a theoretical model that would test or extend theory relating to states’ production and use of BCAs.
Findings
Our research shows that state governments are increasingly mandating and conducting BCAs. Although most of the studies lacked at least some of the desired technical aspects of BCAs, a notable proportion of studies discussed in our interviews had a reported impact on state policy and budget processes. States reported facing several challenges in conducting and using these studies, however, including resource and data limitations, timing considerations, and difficulty in gaining policymakers’ attention and confidence. These problems are similar to those confronting other types of social science research, and the prescriptions of scholars who have examined research utilization – including outreach to stakeholders, enhanced training and communication, a standardized BCA approach, and resource sharing – are similar to those identified by our interviewees.
State statutory mandates and BCA production
Our research found that states and the District of Columbia are increasingly mandating that BCAs be conducted to address policy issues, and the number of reports they generated grew over time. As of December 2012, 48 states had passed 252 statutory mandates requiring BCAs; only Wyoming and South Dakota had no such statutes. The number of statutory mandates varied across the states – Washington had 16 specific statutes requiring studies, and California, Connecticut, Florida, and Texas each had 11 or more. In contrast, 29 states had fewer than five.
These statutory mandates covered a wide range of policy areas. As shown in Figure 1, BCAs were most frequently required for economic development initiatives, health care programs, procurement, and communications and information technology policies. Only 13 statutes required BCAs for regulations.
Of the existing mandates, 88 (35%) were enacted after December 2007, and an increasing number of statutes were passed each year during the 2008–2011 study period, growing overall by 111%. During their 2011 and 2012 legislative sessions, 45 states introduced 297 bills to create new statutory mandates for benefit-cost and/or cost-effectiveness analyses.
The number of BCAs produced by states showed a similar growth pattern. All states and the District of Columbia conducted BCAs between 2008 and 2011, and more studies were published each year over the 2008–2011 period. Despite this overall growth, individual state activity levels varied greatly. As Table 1 shows, half of all BCAs were conducted by 11 states, with the most studies in California, Kansas, Missouri, North Carolina, Ohio, and Washington. By contrast, half the states conducted fewer than five BCAs. Four states – Alabama, Arizona, Kentucky, and North Dakota – performed only one BCA over the 2008–2011 period.
Although states increasingly mandated BCAs, most of the studies we identified were conducted for reasons other than meeting these statutory requirements. Only 25% (87) of the studies were produced in response to a statutory mandate, while 2% (7) were conducted in response to an executive rule and 6% (22) were written as part of regular practice (e.g., BCAs completed as part of annual economic impact reports). Further, although 72% of the BCA statutes required systematic studies (i.e., BCAs conducted annually or in conjunction with specific events, such as Arizona Revised Statute 36-694, which requires a BCA when a designated committee recommends adding a test to the state’s newborn screening program), almost two thirds of the actual studies (over 63%) were done as one-time studies, typically at the direction of state leaders, suggesting that the increase in statutory mandates is not a primary driver of BCA production.
Although BCAs were conducted by both the executive and legislative branches, almost two thirds were conducted by the former (see Figure 2). Legislative offices and universities/nongovernment organizations each conducted approximately one fifth of the reports. This pattern may reflect the executive branch’s relatively greater resources to conduct the studies.
State BCAs also covered a range of policy areas, but as shown in Figure 3, health and social services, the environment and natural resources, transportation, and economic development were the most common.
Technical characteristics of BCAs
Most of the BCAs produced by states lacked one or more of the technical elements recommended by the literature – just 36 of the 348 state reports included all eight elements. As Figure 4 shows, though all reports assessed direct costs, only 36% assessed indirect costs. While 92% monetized tangible benefits, just 19% did so for intangible ones. And though 67% disclosed the assumptions they applied, only 21% performed a sensitivity analysis to determine how outcomes might differ under an alternative set of assumptions.
Reported impact of BCAs
The key state stakeholders we interviewed indicated that BCAs had a relatively high degree of reported impact on policymaking in their states. Recognizing the complexity of the policy process, we did not expect stakeholders to report that any single study was the sole factor in a decision (and none did). However, of the 190 BCAs that we discussed with state officials, more than half (52%) had a reported effect on state policymaking. Of these, most (65%) had a reported direct impact, including influencing decisions to redesign, eliminate, or modify funding of the studied programs, and the remainder (35%) had a reported indirect impact, such as spurring debate or helping to put an issue on the agenda. Only 25% of the BCAs with a reported impact were a product of a statutory requirement, suggesting that such mandates were not a primary factor in policymakers’ decisions to use BCA evidence.
Full analyses had both a higher rate of reported impact (67%) than partial ones (50%) and a higher rate of reported direct impact (81% versus 64%). Also, of the BCAs that had a reported direct impact, 94% calculated both direct costs and tangible benefits, and 74% identified the assumptions used in their analysis. In contrast, only 31% of the BCAs with reported use included a sensitivity analysis to test the assumptions, and only 20% calculated both indirect costs and monetized intangible benefits. The BCAs with a reported indirect impact showed similar trends. These findings suggest that methodological quality may affect policymakers’ willingness to use BCA results, but the limitations of our sample precluded a full test of this relationship.
Another factor worthy of further study is the inclusion of clear BCA conclusions or recommendations. As Figure 5 shows, 57% of the 92 studies that provided recommendations had a reported impact, compared to 48% of the BCAs that lacked recommendations. However, more than half (52%) of the BCAs in our sample did not recommend specific actions based on the results, and many (33%) did not even provide clear conclusions about the return on investment of the option(s) studied. Future research could explore whether and how the provision of various types of summary information (return on investment ratios, etc.) and recommendations promotes policymaker buy-in and use of BCA results.
Challenges to conducting and using BCAs
Our interviews revealed three primary challenges that limited states’ production and use of BCA – resource and data limitations, timing problems, and difficulty in gaining policymaker attention and confidence in the methodology and findings. These challenges, reported by interviewees in 46 states (92%), are similar to those cited by scholars who have studied research utilization (Bogenschneider & Corbett, 2010; Cousins & Leithwood, 1986; Hird, 2005; Lindblom & Cohen, 1979; Patton, 2008; Thoenig, 2000; Zajano & Lochtefeld, 1999).
Interviewees in 35 states (69%) reported resource and data limitations as a key challenge to conducting and using BCAs, including the necessary commitment of time, money, data, and staff with specialized expertise. For example, interviewees in several states reported that their state’s BCAs took up to one year to complete at a cost of up to $1 million. Interviewees also reported challenges in aggregating and analyzing the data needed to complete BCAs, noting that their state accounting systems often did not track costs by program or activity, making it difficult to compute marginal and total costs. The interviewees also observed that their states often lacked robust systems to track program outcomes, which are needed to predict and monetize program benefits. Additionally, BCAs require technical skills that can be in short supply among state personnel, and states may lack the funds to hire such staff or to contract out these studies. These resource requirements, essential to evaluation quality and credibility (Cousins & Leithwood, 1986; Patton, 2008), can paradoxically limit states’ ability to conduct rigorous BCAs when they are most needed to inform decisions during times of budget shortfall.
Interviewees in 18 states (35%) reported timing constraints as a barrier to producing studies. State policy processes operate in highly compressed periods (many legislative sessions are limited to 60 to 90 days), requiring policymakers to make a large number of decisions in a short time frame. However, rigorous BCAs typically require lengthy time periods to conduct – often a year or more. Social science research results that are not available when key decisions must be made are less likely to have any material impact (Bardach, 2003; Bogenschneider & Corbett, 2010; Kothari, MacLean & Edwards, 2009; Patton, 2008; Zajano & Lochtefeld, 1999).
A related timing challenge of critical relevance to BCAs (and reported by states) is the frequent disconnect between the long-term focus of the studies and the shorter-term focus of policymakers. BCAs typically assess and monetize program benefits that accrue over multiple years. In contrast, policymakers tend to focus on the current fiscal year or the next election cycle. As a result, elected officials may be reluctant to fund programs whose benefits will not accrue (and yield political credit) until after their terms expire, even if BCAs find a high return on investment (Bogenschneider & Corbett, 2010; Garri, 2010; Rogoff, 1990).
Further, it is widely recognized that political values can trump data (Bogenschneider & Corbett, 2010; Jennings & Hall, 2012), and interviewees in 53% of states reported that BCAs face a particular challenge in this regard because of their use (and, at times, manipulation) by advocates (McCubbins, Noll & Weingast, 1989; Tiller, 2002). Policymakers must balance many priorities, and their own political or ideological considerations can overwhelm benefit-cost findings. In addition, while social science research often faces challenges in gaining the attention of policymakers, this difficulty can be magnified for BCAs because of their highly technical nature, which compounds the difficulty of communicating results to leaders with limited understanding of the program area or analytical process (Harrington et al., 2009). Also, BCAs are typically presented as written products in a policymaking environment that tends to prioritize oral communication, anecdotal stories, and personal relationships (Bogenschneider & Corbett, 2010; Weiss, 1989; Whiteman, 1995).
Strategies for strengthening BCA
A number of widely recognized steps to promote research utilization were also reported by interviewees in 61% of states as applicable to enhancing the use of BCAs. These include stakeholder participation; enhanced training, assistance, and communication of findings; a more standardized approach for valuing benefits and costs; and improved information sharing.
Scholarship has identified the importance of seeking stakeholder participation in studies from the start, both to ensure that information needs are understood and to create communication channels that facilitate ongoing exchange between analysts and policymakers (Cousins & Shulha, 2006; Johnson et al., 2009; Marra, 2000; Mooney, 1991; Patton, 2008; Preskill & Torres, 1999; VanLandingham, 2011).
Benefit-cost analysis is a complicated, technical field, and, as discussed above, many studies conducted at the state level lacked important elements such as assessments of indirect costs and sensitivity analysis. Increased training, such as that offered by the Center for Benefit-Cost Analysis at the University of Washington and the professional development workshops held before the annual meetings of the Society for Benefit-Cost Analysis, can boost states’ capacity to conduct studies, help them identify the appropriate type of studies to perform under various conditions, and enhance policymaker confidence in the reports produced. On a related note, the academic literature emphasizes the importance of translating technical research, such as BCAs, into clear and concise bottom-line summaries that policymakers can easily comprehend (Bogenschneider & Corbett, 2010; Court & Young, 2006; Herk, 2010). Interviewees in seven states reported that their policymakers were more amenable to reviewing BCA results when they were presented as only one part of the discussion. Ergas (2009) noted that BCA’s effectiveness lies in its capacity to provide a better understanding of the consequences of proposed public programs, and that these studies should be understood as one means of aiding policymakers in making the best decisions for the public good.
More consistent methodologies could improve the accuracy and credibility of BCAs, as well as make it easier to compare results and build on past research. As noted above, we found significant variation among state BCA reports in the technical elements included in their analyses, indicating that research methodologies vary both across and within states. Researchers and academics are beginning to establish guidelines for conducting BCAs, which could improve the quality of studies being performed (Farrow & Zerbe, 2013). For example, the National Academies of Sciences held a workshop in 2013 to explore the feasibility of establishing standards for the practice of BCA (National Academies of Sciences, 2014). The Pew-MacArthur Results First Initiative (Results First), a project of The Pew Charitable Trusts and the John D. and Catherine T. MacArthur Foundation that works with states to implement a benefit-cost analysis approach, is also addressing this limitation by providing technical assistance that currently enables 17 states and 4 California counties to implement and customize the rigorous BCA model developed and successfully used by the Washington State Institute for Public Policy for more than 15 years (Pew-MacArthur Results First Initiative, 2014). This model uses a standardized approach to compute the potential return on state investment of a range of programs across multiple policy areas (Lee et al., 2012).
Additionally, states could address their resource limitations through mechanisms that enable them to share benefit-cost methodologies, standards, techniques, studies, and results. Results First uses several strategies to build a learning community among its participating states and counties, including convening annual meetings and sponsoring web portals that enable them to exchange information. Universities and think tanks could establish similar consortia to enable practitioners to share information and to offer expert technical support that helps ensure the soundness of research findings. For example, the Society for Benefit-Cost Analysis provides two main forums to facilitate the dissemination of critical benefit-cost information to the field: an annual conference that brings together scholars and practitioners from around the globe to present and critique new BCA-related research, and its peer-reviewed journal, the Journal of Benefit-Cost Analysis, which helps promote the continual integration of the best concepts and methods applied in BCA. The society’s support for the continual presentation and publication of research is a crucial element in the use and development of BCA in the states.
Conclusion
This study establishes the first comprehensive baseline of descriptive information on BCA production and reported impact in the states. Our findings show how frequently states conducted BCAs over a four-year period, as well as the key characteristics of these studies and their level of reported impact. The study also identifies key challenges that impede states’ production and use of these studies. This information can provide the basis for future studies that develop theories on BCA use and refine existing theories on evaluation utilization.
Given the severity of budget challenges facing states and widespread skepticism about the merits of many public programs, the time is ripe for expanding research on BCA production and utilization in state policymaking. This research shows that, overall, states increasingly (though unevenly) used this rigorous method over a four-year period to test whether program or policy benefits justify their costs; more research is needed to help ensure that these efforts are successful and generate net benefits to policymakers that exceed the resources needed to conduct the BCAs. We encourage scholars to extend this research and further examine how differences in the technical characteristics of BCAs, the types of information included in analysis reports, and the methods that researchers use to communicate results to policymakers affect the type and level of utilization of BCA in the policy process.
Appendix A. Eight technical elements of benefit-cost analysis – examples from state benefit-cost analyses
Benefit-cost analysis seeks to answer whether the monetized benefits of providing a service outweigh its costs. It does this by identifying and placing dollar values on all program costs and benefits and then subtracting the monetized costs from the monetized benefits to obtain the net benefit. Although the methods for producing these estimates are not standardized, the academic literature identifies eight baseline technical components required to ensure an accurate measurement. Each technical element is discussed below, with an example from one of the full BCAs identified in this study (also listed in Appendix B).
Direct costs. Direct costs are those that are specifically associated with the service being evaluated, such as program wages and material costs. In their 2012 report, Return on Investment: Evidence-Based Options to Improve Statewide Outcomes, the Washington State Institute for Public Policy included a comprehensive list of costs in their portfolio analysis of juvenile justice intervention programs. To measure the costs of Functional Family Therapy, for example, the analysis included a range of costs such as interpreter services, therapists’ transportation and hourly rate, and therapist training materials, which came to a total cost of $3,262.
Indirect costs. Indirect costs are costs that are not solely associated with the service being evaluated, such as administrative costs for workers who are providing administrative support across multiple projects, not just the program evaluated in the BCA. In their report Cost-Benefit Analysis of a Prescription Drug Monitoring Program in Wisconsin, the Wisconsin Department of Regulation and Licensing performed a BCA of three different program models to combat prescription drug abuse. To generate a more accurate picture of the program costs, they calculated the share of indirect costs from office supplies ($13,000), which fed into the total program costs of $278,700 to $317,900 across three program options.
Monetization of tangible benefits. Tangible benefits are those that have a dollar value, such as health care costs saved from reduced doctor visits. In Extending the Foster Care Age to 21: Measuring Costs and Benefits in Washington State, the authors calculated the monetary value of benefits predicted from extending the age at which a youth in the state of Washington could remain in foster care from 18 to 21. These include the lifetime benefits from reducing crime, such as avoided criminal justice costs, and the lifetime benefits of college attendance, such as increased lifetime earnings, which yielded estimated benefits of $2,726 and $35,431 per participant, respectively.
Monetization of intangible benefits. Intangible benefits, such as the value of time saved or an increased sense of work satisfaction, typically do not have a dollar value; monetizing them provides a simplified way of comparing their value to tangible benefits. The North Carolina Governor’s Crime Commission report, A Study of the Impact of Expanding the Jurisdiction of the Department of Juvenile Justice and Delinquency Prevention, assessed the intangible benefits of reduced crime victimization costs such as diminished quality of life (pain and suffering). To do this, the authors applied estimates from three rigorous evaluations of victim costs and calculated annual present and discounted costs of $6,055 and $5,460, respectively.
Comparison of measured program benefits and costs against a baseline or alternatives. Comparing program benefits and costs against a baseline or alternatives is critical to understanding which options yield the highest net benefits and/or the costs of not investing in the alternatives (e.g., cost savings forgone). In its report, Return on Investment: Evidence-Based Options to Improve Statewide Outcomes, the Washington State Institute for Public Policy compared the benefits and costs of a portfolio of program alternatives, enabling readers to more easily evaluate which program(s) would provide the best return on taxpayer dollars. For example, the report compared the benefits and costs of 11 juvenile justice programs in a consumer-report-style format that allows the reader to easily see that Aggression Replacement Therapy and Multisystemic Therapy would provide estimated net benefits of $65,481 and $22,096, respectively, while Scared Straight would provide an estimated negative net benefit of $6,095.
Discounting of future benefits and costs to NPVs. Benefits and costs can occur over an extended period of time, making cross-time comparisons difficult. Applying a discount rate enables analysts to express the sum of future benefits and costs as an NPV and compare them more easily. In The New Mexico PreK Evaluation: Results From the Initial Four Years of a New State Preschool Initiative, the authors applied a 3% discount rate to future societal benefits and costs to compare the long-term outcomes of quality pre-K programs in the state. They found that the NPV to society of a one-year high-quality pre-K program in the state is an estimated $15,307.
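As a simple illustration of the mechanics (the figures here are hypothetical and are not drawn from the New Mexico report), a benefit of $10,000 accruing 10 years from now, discounted at a 3% annual rate, has a present value of

$$\mathrm{PV} = \frac{\$10{,}000}{(1.03)^{10}} \approx \$7{,}441,$$

so benefits realized further in the future contribute progressively less to the NPV.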
Disclosure of key assumptions used in calculations. Identifying the assumptions behind benefit and cost estimates is critical to helping readers understand the limitations of the benefit-cost ratio. In 2009, the Texas Public Utility Commission released a BCA of storm hardening programs that aim to minimize hurricane damage. The study lists several assumptions throughout the report to assist the reader in understanding the basis and limitations of the benefit and cost estimates. For example, in calculating the benefits of building a substation as part of a storm hardening program, the authors count only damage avoidance and repair costs as benefits. For all programs, the authors disclose their assumed costs of hurricane damage to Texas investor-owned utilities (IOUs), such as the $480 million assumed for the IOU Entergy, which differs from Entergy’s own estimated range of $435 million to $510 million.
Disclosure of sensitivity analysis. Because predicting the future impacts of a service, and the value of those impacts, always entails uncertainty, it is critical to test how results would vary if key assumptions were changed. In their report, Treatment Alternatives and Diversion (TAD) Program: Advancing Effective Diversion in Wisconsin, the Wisconsin Population Health Institute employed Monte Carlo simulation, a computational technique that estimates the range of possible outcomes when assumptions are varied, to test the uncertainty of their benefit and cost estimates (e.g., treatment costs and benefits from prison days avoided) for an alternatives-to-incarceration program. The results, based on 10,000 simulations, demonstrated that the program would produce positive net benefits 78% of the time.
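As a rough sketch of how such a simulation can be structured (a generic illustration only, not the Wisconsin model; all distributions and figures below are hypothetical assumptions), an analyst might draw uncertain benefit and cost inputs repeatedly and report the share of draws with positive net benefits:

```python
import random

def simulate_net_benefits(n_draws=10_000, seed=1):
    """Hypothetical Monte Carlo sketch: vary assumed benefit and cost
    inputs and report how often net benefits are positive."""
    random.seed(seed)
    positive = 0
    for _ in range(n_draws):
        # Hypothetical assumptions: per-participant treatment cost and
        # benefit from prison days avoided, each with uncertainty.
        treatment_cost = random.triangular(4_000, 9_000, 6_000)
        days_avoided = random.triangular(20, 120, 60)
        cost_per_prison_day = random.triangular(80, 140, 100)
        net_benefit = days_avoided * cost_per_prison_day - treatment_cost
        if net_benefit > 0:
            positive += 1
    return positive / n_draws

share_positive = simulate_net_benefits()
print(f"Share of simulations with positive net benefits: {share_positive:.0%}")
```

In practice, the input distributions would be derived from program data and the evaluation literature rather than the illustrative ranges used here.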
Appendix B. List of 36 full BCAs released by states between 2008 and 2011