1 Introduction
The U.S. federal government and other governments around the world have developed significant systems of prospective ex ante impact assessment, seeking to foresee the environmental, economic, and other impacts of new policies and projects (Craik, 2008; Wiener, 2013; Wiener & Ribeiro, 2016a, b; OECD, 2018). But ex ante impact assessments are inevitably imperfect, and policies may perform differently over time than had been foreseen (Harrington et al., 2000; Harrington, 2006; Greenstone, 2009; Morgenstern, 2015). So, in addition to ex ante impact assessment, there have been calls for ex post retrospective impact assessment by numerous scholars (Greenstone, 2009; Coglianese, 2012, 2013; Dudley, 2013; Aldy, 2014; Sunstein, 2014; Wiener & Ribeiro, 2016a; Cropper et al., 2017), and, as we detail below, by virtually every U.S. President, in both political parties, since the 1970s, as well as occasionally by Congress. Ex post impact assessment seeks to evaluate how well existing policies have performed, and to inform potential policy revisions.
Despite these longstanding and bipartisan calls to ramp up retrospective regulatory analysis, the results have been limited so far (Coglianese, 2013; Dudley, 2013; Lutter, 2013; Aldy, 2014; Sunstein, 2014; Wiener & Ribeiro, 2016b; Cropper et al., 2017; OECD, 2018). Given the robust adoption and implementation of ex ante impact analysis, over several decades and across different political parties, the much less vigorous adoption and implementation of ex post analysis presents a puzzle. The answer to this puzzle is surely multidimensional – many different factors combine to hinder the widespread implementation of ex post impact analysis, including a lack of political demand for reviews, a lack of guidance and expertise for reviews, and a reluctance to criticize prior actions, among others. In this article, we focus on two further factors that may be holding back successful retrospective analysis: a narrow set of goals and tasks, and a narrow institutional framework. We argue that a more comprehensive vision of goals and tasks, a wider array of institutional designs, and better matching of goals/tasks to institutions, would lead to more effective retrospective reviews.
We begin with a theoretical articulation of three broad kinds of goals for retrospective analysis and eight different tasks involved in retrospective reviews. We then examine the text of the Presidential Executive Orders and major Congressional legislation addressing retrospective review, and document which goals were targeted and which institutions were used to conduct the reviews. We find that the U.S. federal government has almost always sought review of one rule at a time, conducted by the agency that issued or promulgated the rule. And such single-rule, single-agency review has typically focused primarily (often exclusively) on two goals: (i) assessing whether the rule is still relevant or is obsolete, and (ii) assessing the costs of the rule and how those costs may be reduced. We argue that this institutional framework for retrospective review – one rule, assessed by the promulgating agency, focused on relevance and cost – is only well-suited to meet a subset of the full goals of retrospective review. In particular, this narrow institutional framework cannot address several other goals of retrospective review, elements of broader “regulatory learning,” including: (iii) analyzing the cumulative impacts of multiple rules on an industry; (iv) evaluating not only costs but also benefits, and ancillary impacts (countervailing harms and co-benefits); (v) learning to improve the validity and accuracy of methodologies for ex ante forecasting of regulatory impacts, including on the benefits side of the ledger; (vi) learning about impacts, methodologies, and policy designs that span beyond the domain of a single regulatory agency; and more.
To move beyond the conventionally narrow approach, we suggest consideration of a broader set of institutional options for retrospective regulatory analysis, notably some form of working group, board, or commission able to pursue retrospective analysis of multiple rules and impact assessments, in order to match the broader goals of retrospective analysis with the institutional framework. This choice among institutions involves matching goals and tasks with capabilities and incentives (Breyer, 1982). It is an exercise in learning about institutional design in order to enhance regulatory learning (Farber, 1993; Greenstone, 2009; Gubler, 2014; Pidot, 2015; Dunlop & Radaelli, 2018; Bennear & Wiener, 2019b).
2 Goals and tasks of retrospective regulatory review
The institutional options for retrospective review may have received insufficient attention and clarification in the past because the goals of retrospective review have been narrow or unclear. Retrospective review, and similar mechanisms known as ex post regulatory impact assessment, post-implementation policy evaluation, and the like, may serve a variety of goals. When a single goal is assumed, retrospective review is often viewed as a single analytic task, while in reality there are multiple tasks associated with the multiple goals of retrospective review. This section articulates the goals and tasks of retrospective review.
2.1 Goals of retrospective review
Sometimes the goal of retrospective review is to “clean up the books” by identifying rules that are outdated, redundant, or obsolete – no longer applicable, or lacking statutory authority – and removing them. We refer to this goal as the rule relevance goal.
Another frequent goal of retrospective review is to improve the outcomes of regulation – in particular, to revise each rule, taken one at a time, to improve its performance. We refer to this goal as the rule improvement goal. In practice, this has often meant identifying specific past rules that have turned out to pose high costs and seeking to reduce those costs through revisions.
When pursuing the rule improvement goal, there are choices as to which outcomes to analyze and the scope of impacts to assess. Retrospective regulatory analyses can in principle assess the performance of a rule against several outcome criteria, potentially including: costs, benefits, rule effectiveness, cost-effectiveness, ancillary impacts, social well-being, and distributional equity.
Broadening the scope of analysis to cover more impacts will likely raise the costs of data collection and analysis, but will often also raise the value of the information for decision making and thus result in greater improvements in social outcomes from the review (Wiener, 1998).
The above objectives focus on evaluation and revision of single rules. A different objective of retrospective analyses may be to learn from multiple past rules and analyses, in order to improve future rules and analyses. We refer to this goal as the regulatory learning goal. Retrospective review can contribute to broader learning about regulation in several ways.
First, retrospective review can improve our understanding of the performance of alternative policy designs or instruments, to evaluate how well they actually work in practice compared to predictions in theory. For example, retrospective review has enabled better understanding of the relative performance of technology standards, performance standards, information disclosure, taxes, tradable permits, and other policy instruments.
Second, retrospective review could help improve the accuracy of methods used to conduct ex ante regulatory impact analyses (RIAs). Retrospective reviews can be used to compare regulatory forecasts and counterfactual scenarios with actual outcomes over time, and this information could then be used to improve forecasting methods.
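To make this kind of accuracy check concrete, here is a minimal sketch of the comparison a retrospective review might run over a sample of rules. The rule names and cost figures are entirely hypothetical, and the predicted/realized ratio is just one simple metric among many that could be used.

```python
# Illustrative only: compare hypothetical ex ante cost forecasts with
# ex post estimates for a sample of rules, as a retrospective review of
# RIA accuracy might. All names and figures are invented for this sketch.

rules = [
    # (rule id, ex ante annual cost forecast $M, ex post estimated cost $M)
    ("rule_A", 120.0, 85.0),
    ("rule_B", 40.0, 55.0),
    ("rule_C", 300.0, 210.0),
]

for rule_id, predicted, realized in rules:
    ratio = predicted / realized  # >1 means costs were overestimated ex ante
    print(f"{rule_id}: predicted/realized cost ratio = {ratio:.2f}")

# A systematic pattern (e.g., ratios persistently above 1) would suggest a
# bias in the forecasting methodology rather than rule-specific error.
mean_ratio = sum(p / r for _, p, r in rules) / len(rules)
print(f"mean ratio across sample: {mean_ratio:.2f}")
```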
Finally, retrospective review could lead to improved understanding of the interaction effects of multiple regulations. For example, retrospective review could assess the cumulative impacts of multiple rules on an industry. And retrospective review could be used to understand a broad range of interaction effects among rules.
Such efforts at regulatory learning could involve not just one lookback exercise to conduct retrospective review and improve the rule, but an ongoing and planned process of monitoring, data collection, periodic reviews and adaptive updating (McCray et al., 2010; Bennear & Wiener, 2019a). And – whether or not they are repeated in an ongoing planned adaptive process – such efforts at regulatory learning would involve retrospectively assessing not only one rule at a time, but larger representative samples of multiple rules and their ex post RIAs, compared to their ex ante RIAs, with variation across policy designs in order to test their comparative performance, and/or with variation across forecasting methodologies in order to test and improve the accuracy of the methodologies (Greenstone, 2009; Office of Information and Regulatory Affairs, 2016, p. 6; Wiener & Ribeiro, 2016a). Such a broader multi-rule learning process may go beyond the domain of each individual agency, to include multi-agency comparisons. It amplifies the tradeoff of greater costs of information versus greater value of information for policy improvement. The more a regulatory system relies on ex ante RIA to design and approve new rules, the more it can gain from ex post RIA to improve the design of those rules and the accuracy of forecasting their impacts (Coglianese, 2012, p. 65; Aldy, 2014, pp. 22–26). Adam White argues that “retrospective review’s greatest virtue actually has nothing to do with repealing regulations. Rather, retrospective review’s greatest value is forward-looking … to confront how accurate or inaccurate the agencies’ own projections were in forecasting the rules’ impacts in the first place” (White, 2016).
While retrospective review has benefits, it also has costs (Bennear & Wiener, 2021). Agencies must spend time and resources engaging in reviews. In addition to the direct resource costs of these reviews, there is an opportunity cost in staff and resources not being allocated to other activities. Agencies may be reluctant to review prior decisions because there may be reputational costs associated with revising prior rules – in essence, revision may be viewed as an admission of agency error, weakening credibility with stakeholders. While revising rules may not represent error in the original rulemaking – circumstances can and do change – these reputation costs may still be real (Bull, 2015; Wiener & Ribeiro, 2016a; Bennear & Wiener, 2021). Retrospective review may also impose costs on society. Conventional wisdom may be that regulated actors would want to see rules changed if they prove overly costly, and certainly that can be true. But it can also be true that having spent the money to comply with a regulation, industry does not benefit from revisiting it, and repeated changes to rules can yield instability that is costly both to regulated actors and to those benefitting from regulatory protections, when they rely on the stability of past rules (Bennear & Wiener, 2019b).
There may also appear to be tradeoffs among different types of regulatory analysis, for example, whether agencies should spend more time and effort on improving prospective analysis or on improving retrospective analysis. This tradeoff may be illusory, as the two types of analysis can be complements. As articulated above, the regulatory learning goal uses retrospective analysis in large part to improve future prospective analyses and rulemakings. But for more typical retrospective analyses that focus on rule relevance or rule improvement, there may well be tradeoffs between time spent on retrospective analysis and time spent improving prospective analysis.
Recognizing both the benefits and costs of retrospective review, we argue that reviews should be targeted where the net benefits of review are the highest. For the rule relevance and rule improvement goals, these are likely to be rules applying to industries, technologies, or scientific understanding that are changing more quickly. For those cases, the static rulemaking process may result in rules that grow increasingly mismatched to circumstances over time (Bennear & Wiener, 2019a, b). For the regulatory learning goal, this is likely to involve rulemakings that raise methodological issues or policy choices that apply to many rules and many agencies and are likely to result in significant improvements in future rulemakings.
2.2 Tasks of retrospective review
Retrospective review is often discussed as a single analytic process. But in reality, retrospective review consists of at least eight tasks, described below, and one could imagine different institutions having different roles in each task.
(i) Issue instructions to do retrospective analysis. This instruction could come from within the promulgating agency itself or it could come from an outside institution such as Congress, the White House, or the Courts. Historically this task has been handled by the President (through executive orders) or by Congress. However, this instruction has been fairly broad, with discretion left to the promulgating agency, perhaps with input from stakeholders or OIRA, to determine which rules will be selected for review.
(ii) Implement selection criteria for which rules to review. Examples of criteria may include: (a) rules with high opportunity to reduce cost, (b) rules with high opportunity to increase net benefits, or (c) rules with high opportunity to learn (variation across rules, RIA methodologies). Ideally, these selection criteria are directly related to the goals of the retrospective review. If the goal is to determine whether rules remain relevant, the rules selected should be those at risk of irrelevance. If the goal is to reduce costs (or increase net benefits), then rules should be selected that have high potential for cost reduction (or for increasing net benefits). If the goal is to learn, then samples of multiple rules and RIAs should be selected that offer variation across observations, in order to gain insights into policy designs, forecasting methodologies, or other features.
(iii) Set the scope of analysis. In theory, every review could examine the full set of criteria outlined in the prior section – relevance, costs, benefits, cost-effectiveness, ancillary impacts, social well-being, and distributional equity. In practice, some of these criteria may not be relevant for particular rules, and hence analysis of those criteria is not a worthwhile expenditure of effort and resources. More generally, a key task in the review process is to balance the decision costs of doing retrospective analysis against the value of information for policy improvement, and determine which aspects should be examined (Wiener, 1998). This calls for a scoping analysis, looking at all the criteria, with a process for selecting criteria and evidence for more detailed analysis based on the results of this scoping process. A further scoping task concerns the determination of the counterfactual – what would have happened in the absence of the rule(s) (Cropper et al., 2017); a stylized sketch of one common estimation strategy appears after this list. This may also include decisions about what the relevant set of policy alternatives could have been.
(iv) Acquire data for analysis. Ideally, the need for these data would have been anticipated during the initial rulemaking and the data would already be collected. However, this is not always the case, and data acquisition and cleaning often consume large amounts of time and resources during a review.
(v) Conduct the analysis. Someone must use the data to analyze the chosen rules given the scope selected.
(vi) Outside review of the analysis. The analysis should be reviewed by a body other than the one that conducted it. In current practice, this role is frequently fulfilled by OIRA.
(vii) Make recommended policy changes. Based on the review, changes may be recommended to revise the past regulation. Actually revising the rule is a task that probably has to be done by the regulatory agency with the statutory authority (hence the agency that promulgated the initial rule), or by Congress.
(viii) Publish and archive all aspects of the review. Retrospective reviews need to be published and archived for the purposes of transparency and learning. Prior analysis found that much of the retrospective review work that has been conducted in the USA is not published, recorded, or archived in ways that the public and scholars can find (Wiener & Ribeiro, 2016b).
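As flagged in task (iii), constructing the counterfactual is central to several of these tasks. Below is a stylized sketch of one common estimation strategy, difference-in-differences; the outcome values are hypothetical, and the key identifying assumption (parallel trends) is noted in the comments.

```python
# A stylized difference-in-differences estimate of a rule's effect, one
# common way to construct the counterfactual for task (iii). Data are
# hypothetical: mean outcomes (e.g., emissions) for regulated and
# unregulated facilities, before and after the rule took effect.

treated_before, treated_after = 100.0, 70.0   # regulated group
control_before, control_after = 100.0, 90.0   # comparison group

# The control group's change proxies what would have happened to the
# treated group absent the rule (the parallel-trends assumption).
counterfactual_change = control_after - control_before  # -10
observed_change = treated_after - treated_before        # -30

rule_effect = observed_change - counterfactual_change   # -20
print(f"Estimated effect attributable to the rule: {rule_effect:+.1f}")
```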
The next section explains our methodology for examining institutions for retrospective review in light of the goals and tasks framework we just established.
3 Text analysis of prior federal efforts at retrospective review
3.1 Methods and data
We conducted text analysis of all Presidential Executive Orders (EOs) that pertain to regulatory review since President Gerald Ford (more detail on these presidential efforts can be found in Wiener & Ribeiro, 2016a), as well as major relevant legislation enacted by Congress. While some U.S. States and other countries, notably the EU, have also undertaken efforts at retrospective review (Golberg, 2018; Radaelli, 2020), we focus here on efforts at the U.S. federal level. The list of prior efforts analyzed, along with a brief summary of each, can be found in Table 1.
We examined each EO and statute along two dimensions. First, we determined which of the three goals – rule relevance, rule improvement, and regulatory learning – were addressed by the EO/statute. Note that a single EO/statute may target more than one of these goals. Second, we identified what institutional mechanism was required for the review, and grouped those mechanisms into two broad categories – within the same agency that promulgated the rule, or across multiple agencies.
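Our categorization was based on close reading of each document, with text-based justification in the Supplementary Material. Purely for illustration, a first-pass keyword screen of the kind sketched below (the keyword lists are hypothetical) could be used to flag candidate passages for such manual coding.

```python
# Illustrative first-pass screen for coding EO/statute text by goal.
# The keyword lists are hypothetical; in practice each document was
# categorized from close reading, with justification in the
# Supplementary Material.

GOAL_KEYWORDS = {
    "rule relevance": ["obsolete", "outdated", "redundant", "repeal"],
    "rule improvement": ["burden", "cost", "streamline", "modify"],
    "regulatory learning": ["accuracy", "evaluate methods", "cumulative"],
}

def flag_goals(text: str) -> list[str]:
    """Return the goals whose keywords appear in the document text."""
    text = text.lower()
    return [goal for goal, words in GOAL_KEYWORDS.items()
            if any(w in text for w in words)]

sample = "Each agency shall identify rules that are obsolete or unduly burdensome."
print(flag_goals(sample))  # ['rule relevance', 'rule improvement']
```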
3.2 Findings
Table 2 summarizes the past executive branch efforts at retrospective review. Each executive action was categorized based on which goal(s) were targeted and, within each goal, which specific criteria were required to be analyzed. Each action was further categorized based on what institution was required to conduct the retrospective review. “Within Agency” means that the agency that had promulgated the initial rule was required to conduct the review. “Across Agencies” means the executive order assigned or established an institution spanning regulatory agencies that was tasked with at least part of the regulatory review. Detailed text-based justification of our categorization of each EO/statute is available in the Supplementary Material.
Table 2 highlights several common features of prior executive action on retrospective review:
(i) Actions have largely focused on the rule relevance and rule improvement goals. No executive order has specifically required that reviews address the regulatory learning goal. (EO 12291, section 6(a)(5), addressed “duplicative, overlapping and conflicting rules.” EO 12866, section 5(c), authorized the Vice President to call for review of “groups of regulations of more than one agency.” EO 13563, section 3, called for “coordination across agencies” to address “redundant, inconsistent, or overlapping” rules. But none of these called for retrospective review of multiple rules or RIAs in order to learn to improve policy design or accuracy in forecasting methods.)
(ii) Within the rule improvement goal, past efforts have mainly focused on the criterion of reducing the costs of each regulation, with less attention to the other criteria noted above, such as benefits, ancillary impacts, social well-being, and distribution.
(iii) Past efforts have focused on the agency that issued the rule, asking that agency to undertake the retrospective analysis, rather than exploring other institutional options.
So far, efforts at retrospective review have been mostly through episodic lookbacks, or occasional agency efforts to update rules (Wagner et al., 2017), with few instances of advance planning in initial rules themselves to collect data over time and then conduct a planned retrospective analysis at a future date.
4 The role of institutional frameworks
Despite the broad and longstanding support for retrospective review, government measures to require retrospective review have yielded only limited results, with only occasional episodes of effort to analyze past policies (Coglianese, 2013; Dudley, 2013; Lutter, 2013; Aldy, 2014; Wiener & Ribeiro, 2016b). There are several potential explanations for why retrospective review has not achieved greater uptake, and there are almost certainly multiple factors at play, but we argue that the limited institutional framework for retrospective review captured in Table 2 is at least partially responsible for the dearth of success in this area.
There are several reasons why ad hoc lookbacks conducted rule-by-rule by the promulgating agency are unlikely to yield desired results. While the agency that issued the original rule may have the data, expertise, and legal authority to revise the rule, it likely faces time and resource constraints in conducting retrospective reviews, and inhibitions in critiquing its own past work. Moreover, in contrast to ex ante analyses of proposed new regulations, there may be less pressure on the agency to conduct retrospective analyses of past regulations, to the extent that ex ante analysis is necessary for the agency to obtain approval for a new rule to go forward, whereas ex post analysis may not be necessary for the agency to advance its mission (and may be perceived as diverting resources from its mission-critical work).
Political leadership of the agency is often interested in getting new things done, consistent with the current administration’s goals. Spending time and money on retrospective review of prior rules may distract from that, unless, as we saw in the Trump administration, repealing prior rules is itself consistent with the current administration’s goals. Finally, the Paperwork Reduction Act limits the ability of agencies to collect data from regulated entities that may be necessary to conduct retrospective review (Paperwork Reduction Act of 1995, n.d.).
The regulated entities themselves may not be particularly interested in revising prior rules, as they have already made investments to comply with the existing rule. The controversy within the automobile manufacturing industry regarding the potential revision of the fuel economy standards is illustrative of this tension (Davenport, 2019; Davenport & Tabuchi, 2019).
4.1 Alternative institutional options
While prior efforts have largely placed responsibility for retrospective review with the agency that issued the initial regulation, there are actually numerous alternative institutional options for retrospective analyses. Some of these options currently exist and have been previously used, albeit sparingly, while others would be new institutions requiring executive (or legislative) action to create.
To overcome the obstacles to retrospective review within an agency, it is often proposed that retrospective regulatory analyses be conducted, or at least overseen, by a central administrative body. Examples given for this institutional structure include the Office of Information and Regulatory Affairs (OIRA) in the USA, and the Regulatory Scrutiny Board (RSB) in the EU. Indeed, in the USA, OIRA does have oversight responsibility for retrospective reviews. This responsibility could be extended to include development of detailed methodological guidelines for agency review (Aldy, 2014). An advantage of this hybrid institutional approach is that the expertise in review methodology is likely to be stronger at OIRA, while the topical expertise and data on each rule are likely to be stronger at the promulgating agency. So providing detailed methodological guidance centrally, while continuing to rely on agencies to conduct the review, may overcome one of the downsides of the agency-only approach. However, this approach still relies on the promulgating agency to conduct the analysis, and does not overcome that approach’s limitations with respect to comparative analysis or analysis of cumulative impacts. A further extension of responsibility could have OIRA conduct the analyses itself, at least in some cases, such as where its ability to compare rules across agencies would be helpful. But OIRA currently does not have the budget and staff to take on this task (indeed, OIRA’s staff has declined in size over the past several decades), and expanding its role would require executive and legislative action to increase its staffing and analytic capabilities.
There are several additional institutional options that could help overcome the major obstacles to the promulgating agency conducting retrospective analyses – such as a lack of resources or expertise in evaluation methods, an inability to compare multiple rules or across agencies, and inhibitions against criticizing the agency’s own activities.
(i) Other expert government agencies (outside of OIRA). There are other public agencies that already conduct regulatory reviews, including the Government Accountability Office (GAO). While the GAO currently conducts some retrospective reviews, it does not have broader authority to select regulations for review and to conduct those reviews. In theory, GAO could be given such authority, although the enabling mechanism would likely require Congressional action. While an advantage of such institutions is that they can convene the necessary expertise and can examine rules across multiple agencies, a downside is that if regulatory changes are recommended, these institutions cannot actually adopt those changes. If the promulgating agencies or Congress do not agree, the recommendations may not be implemented.
(ii) Outside experts from academia, nonprofits, or think tanks. Instead of relying on existing government agencies, retrospective review could be conducted by private entities, presumably under contract with the government. For example, the National Academy of Sciences (NAS) is a private nonprofit organization that is frequently asked to conduct reviews of government regulatory processes by convening panels of experts. Academics and think tanks frequently engage in regulatory analysis, occasionally under contract and sometimes for other reasons (e.g., intellectual interest). An advantage of this institutional framework is the ability to draw on expertise outside of government. This institutional framework might be particularly well-suited to the more difficult types of retrospective review, including those that focus on cumulative impacts and interactive effects. However, this institution would encounter the same potential limitation that any recommendations would have to be enacted by the promulgating agency or legislature.
(iii) One-time commission of inquiry. This is a slight variant on the outside-expert institutional framework in which, rather than relying on nonprofit contracts, the government establishes the authority internally to convene an expert panel to conduct regulatory review, as needed. The commission of inquiry is widely used to investigate disasters and can be established either by Congress or by the President (Balleisen et al., 2017). A similar framework could be used to establish a panel of experts for particular regulatory reviews.
(iv) New standing commission or review board. Another concept borrowed from the institutions for disaster review is the standing review board, modeled after the National Transportation Safety Board (NTSB). The advantage of a permanent review board (as opposed to a one-time commission of inquiry) is the ability of the board to develop deep expertise in review methods that span policy domains (Balleisen et al., 2017). This option shares the concern about agency buy-in with the three previous options. Going further, some observers have proposed creation of a new standing commission (a “regulatory improvement commission,” RIC) to review the stock of existing regulations and their cumulative impacts, which could be advisory to the promulgating agencies or to Congress, or which could potentially be imbued by Congress with the authority to adopt its own revisions to agencies’ past rules, or to propose groups of legislative changes to past rules that Congress would then vote up or down as a slate (Mandel & Carew, 2013). A standing commission would be particularly well-suited to identify issues that come up repeatedly in multiple rules across multiple agencies.
(v) Interagency working group. An alternative to the completely independent review institutions discussed above would be to establish an interagency working group tasked with conducting regulatory review. The working group could have representatives from each of the major regulatory agencies as well as representatives from government organizations focused on review methodology (e.g., OIRA or the RSB). This working group would preserve the benefits of having a specialized group focused on evaluations that is at least partially removed from the promulgating agency (and hence can be more critical); but because each agency has representation on the working group, there is the possibility that the working group will have more data, expertise, and buy-in from the issuing agencies, which may lead to higher uptake of the working group’s recommendations. One concern about this approach might be that members of the working group engage in logrolling – exchanging positive reviews of one another’s rules.
In discussing the above institutions, we have focused on the analyses themselves. Whichever institution is selected, there needs to be a well-defined role for soliciting stakeholder and public input into the review process. This may include the ability to nominate rules for review, either through a formal comment process or by way of citizen suits (Bull, 2015, p. 96), to supply relevant data and expertise for review, and to comment on reviews and recommendations. Congress, as the representative of these various stakeholder and public groups, may also play a role by requesting or mandating reviews, potentially through the use of so-called “sunset provisions,” whereby statutory authority for a rule expires if the rule is not reviewed within a particular amount of time (Ranchordás, 2015).
4.2 Mapping objectives and tasks to institutional options
The objectives of retrospective analysis may influence the choice of institutional approach. Government retrospective review has often aimed at individual regulatory policies, with a view to revising those specific policies, often to reduce their costs (Aldy, 2014). Broader retrospective analysis would assess the full scope of important impacts of each regulation (not only costs, but also benefits and ancillary impacts, with a view not only to reducing costs, but to increasing net benefits). But this may take too much time for any one agency’s staff and/or require expertise from multiple agencies. Similarly, a learning focus may require input from multiple agencies as well as deep expertise in analytic methods that may be better found outside the original rule-promulgating agency. In both of these cases, reliance on the promulgating agency to conduct the review is likely to limit the ability of the analysis to meet the objectives.
Further, the different tasks for retrospective review need not all be done by the same institution. Some tasks may be best performed by the promulgating agency, at least for rules with certain objectives, while others may be better performed by an alternative institution. Just as one example, the USA has focused on asking the agency that issued the rule to select rules for review and to conduct the retrospective analysis. That agency may have the most data and expertise, and the authority to revise the rule. But it may also face high opportunity costs in staff time diverted from other priorities. Whereas agencies may be motivated to submit ex ante RIAs in order to have their new rules pass OMB/OIRA review and be promulgated, there may not be as strong a motivation for agencies to conduct ex post RIAs when the retrospective review does not have practical rewards or only threatens to change the agency’s past work. And the issuing agency may face inhibitions from publishing a candid retrospective analysis that criticizes its own past rule or analysis (Wiener & Ribeiro, 2016b).
Table 3 shows the range of tasks and objectives in the rows and the set of institutional options in the columns, and offers some examples of the mapping from objectives and tasks to institutional options. Institutional assignments of objectives and tasks that currently exist are shown in bold, while those we suggest may work better in the future are in italics.
Under past EOs, OMB/OIRA has asked each agency to select the rules to be reviewed. The main selection criterion seems to have been cost – the opportunity to reduce the costs of each rule. A suggested complement is to invite stakeholders to nominate rules to be selected for review, such as through public comments in response to an OMB/OIRA or agency call for nominations (as occurred in several administrations), or through a petition process to each agency (Bull, 2015), or through a public input process to a commission (Mandel & Carew, 2013, pp. 14–19). That may draw on the practical experience of stakeholders, but may also tend to focus on the parochial interests of those stakeholders. The agency could still have the final choice of whether to select the nominated rules. Another approach would be to have a broader selection process examine many candidates and select which rules deserve review – such as by an interagency working group, an expert board, or a commission established for this purpose (Mandel & Carew, 2013). This broader selection exercise could be better able to identify the rules most in need of review, especially if the agency faces inhibitions, or if the criteria include broader impacts (beyond cost and target benefits, to include ancillary impacts).
The analysis itself could be undertaken by the agency, and historically this is what has happened in most cases. Presumably the agency has the best data and expertise on the topic and understands the complex operation of the rule. But it may be that agencies have not been collecting data on their rules – hence the interest in requiring a plan for such data collection from the time the rule is proposed and adopted (Miller, 2015; Cropper et al., 2017; Dudley & Katzen, 2019). Further, the agency may face opportunity costs and inhibitions. Thus it may also be useful in some contexts to have the retrospective analysis undertaken by another institution, such as an interagency working group, an expert board, or a commission (Dudley & Mannix, 2018, pp. 16–17). Researchers at universities and think tanks often are the ones who undertake such retrospective analyses, either on their own or as contractors to agencies.
Whether the agency or another institution undertakes the analysis, it is most likely only the agency that has the legal authority to promulgate revisions to the rule, through the steps of proposed rule, notice and comment, and final rule, as provided under the Administrative Procedure Act. OMB/OIRA would ordinarily exercise oversight of such revisions.
A commission could be established to assess multiple rules, including the combined and interacting effects of accumulated multiple rules (Mandel & Carew, 2013). It could be created as a one-time exercise (perhaps lasting several years) to take stock of the accumulated body of regulation and recommend revisions. This could reflect the experience of the Defense Base Closure Commission, which identified military bases for closing or repurposing, somewhat insulated from the politics of Congress (Mandel & Carew, 2013). And it would be analogous to the one-time commissions of inquiry created after major disasters such as the 9/11 terrorist attacks in 2001 and the BP Deepwater Horizon oil spill in 2010.
But rather than a one-time exercise, there are also advantages to establishing such a commission as a standing body, similar to the NTSB, which would have a greater depth of experience and expert staff to inform its ongoing analyses (Balleisen et al., 2017). Establishing a standing board allows for the accumulation of expertise in evaluation methods that can be applied to multiple rules (or collections of rules) over time. It also separates the review activity from direct regulatory responsibility. This level of independence enables more candid reviews of potential errors or missteps in the regulatory process. One concern about the use of a nonregulatory body to conduct reviews is that the independent reviews may lead to recommendations that the implementing agency must then take seriously and promulgate, and the implementing agency may not have much incentive to do so. While this is a possibility, analysis of the recommendations made by the NTSB (which has no authority to implement the recommendations it makes to regulatory agencies and Congress) suggests that the vast majority of these recommendations are eventually adopted (Balleisen et al., 2017, p. 507; National Transportation Safety Board, 2017). Respect for the quality of the analysis and the objectivity of the NTSB has proven effective in persuading the regulatory agencies and Congress to take its recommendations seriously.
The objectives and tasks of learning from multiple rules – to test the actual performance of policy designs (in light of predictions), and to test and improve the accuracy of ex ante forecasting methods – seem to go beyond what one agency could undertake, unless the agency were analyzing multiple rules within its own portfolio (as it might to test the performance of differing policy designs in one sector). The broader learning objectives, especially testing and improving the accuracy of ex ante forecasting methods across their variations, might be best served by an interagency working group, an expert board, or a commission, with the breadth and staff expertise to compare across multiple rules.
The influence of retrospective reviews on regulatory rules could be advisory, or could carry more legal authority. The outputs of analyses of individual rules, or of broader multirule and multimethods analyses, could be presented to the relevant agencies as recommendations for agency action, and to OMB/OIRA as recommendations for new guidance on methods of ex ante and ex post impact assessment. Or they could be presented to Congress as recommendations for legislative enactment. Greater authority could be conferred if Congress were to enact legislation creating a commission and delegating to that commission some authority to adopt rule changes itself, but such an approach could sacrifice the value of agency expertise and could potentially conflict with principles of administrative law. One proposal is to have Congress create a regulatory review commission which would then propose a set of numerous changes to Congress, which Congress would vote up or down as a package without amendments (Mandel & Carew, 2013, p. 14).
In the past, retrospective review has often been seen as a one-time follow-up evaluation – a second look back (after ex ante RIA). Achieving more regular and effective application of retrospective analysis would be an important advance. Going further, some rules or agency programs may warrant not just one look back, but ongoing periodic reviews (e.g., every 2, 5, or 10 years), toward a continuous process of adaptive updating (McCray et al., 2010; Ribeiro, 2018; Bennear & Wiener, 2019b). Some current laws call for such periodic reviews, such as the reviews every 5 years of the National Ambient Air Quality Standards (NAAQS) under the Clean Air Act, and of policies under the Lautenberg Chemical Safety for the 21st Century Act.
5 Conclusions and recommendations
Despite extensive use of prospective regulatory impact assessment, and a long history of calls for retrospective regulatory review from Presidents, Congress, and others, there has been limited implementation of systematic retrospective reviews of regulation. There are many factors that collectively contribute to this pattern; this article highlights the role of institutions. Prior efforts have primarily called for retrospective reviews to be conducted by the promulgating agency, one rule at a time. We argue that retrospective review has at least three different goals – rule relevance, rule improvement, and regulatory learning – and several different tasks. We show that most retrospective reviews to date have focused on the rule relevance goal and to some extent on the rule improvement goal, but with a narrow focus on costs. And we argue that the regulatory learning goal could be better advanced by assessing a broader set of impacts, and by employing broader institutional options to examine multiple rules and RIAs across multiple agencies.
Efforts at retrospective analysis of regulation could be strengthened by matching the different goals and tasks of retrospective review to different policy institutions. Not all retrospective analysis needs to take place at the promulgating agency. Our specific recommendations for how to improve this institutional match include:
Recommendation 1: Consistent guidance on retrospective reviews should be issued by the Office of Management and Budget/Office of Information and Regulatory Affairs.
Note that, as with ex ante RIAs, some agencies have developed their own guidance for conducting retrospective analysis. For example, the U.S. Department of Health and Human Services included a chapter on retrospective analysis in its 2016 Guidance on Regulatory Impact Analysis (Office of the Assistant Secretary for Planning and Evaluation, U.S. Department of Health and Human Services, 2016). The HHS guidance focuses on evaluation of policy effectiveness, one of the potential criteria under the rule improvement goal. Rule relevance, other criteria for rule improvement, and regulatory learning are not addressed in this guidance.
To assist agencies and to ensure comparability across their analyses, consistent criteria for retrospective reviews applicable to all federal agencies should be developed by OMB/OIRA. OMB/OIRA has issued this type of unifying guidance for prospective RIAs in its “Circular A-4.” A counterpart to this Circular should be developed to help agencies, commissions, or other analysts conduct retrospective reviews. Key issues to cover in this unifying guidance include: rule selection, establishing baselines and counterfactuals, the scope of impacts to assess, appropriate statistical and other methods of inference, data collection and archival requirements, and so forth (Cropper et al., 2017, p. 1376).
Recommendation 2: Selection of rules for review should be transparent and focused on potential to improve net benefits. There should be a role for stakeholders in selecting rules for review. Additional funding should be provided for conducting such reviews.
Several prior efforts have required agencies to develop multiyear plans for retrospective review. Most recently, pursuant to the Foundations for Evidence-Based Policymaking Act of 2018, agencies must develop plans for evidence collection to help answer their identified policy questions. These efforts are laudable, but better guidelines are needed for developing these plans. In developing plans for retrospective analyses of their own rules, agencies should be directed to: (i) select rules not only for their high costs, but for their expected opportunity to improve net benefits; (ii) establish invitations for public input on the selection and analysis of rules, with the determination by the agency of which rules to review (Bull, 2015); and (iii) evenhandedly assess not only costs but also the full portfolio of relevant impacts, including benefits, ancillary impacts (co-benefits and countervailing risks), net benefits, and distributional equity. A preliminary screening or scoping stage could initially take a broad view of impacts and thereby identify which specific impacts are of most importance for the analysis and improvement of each rule.
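To illustrate criterion (i), a back-of-the-envelope screen might rank candidate rules by the expected gain in net benefits from review, net of the cost of conducting the review itself. All figures in the sketch below are hypothetical.

```python
# Back-of-the-envelope screen for criterion (i): rank candidate rules by
# expected gain in net benefits from review, net of the cost of conducting
# the review itself. All figures are hypothetical.

candidates = [
    # (rule id, prob. review finds an improvement, gain if found $M/yr, review cost $M)
    ("rule_X", 0.6, 50.0, 2.0),
    ("rule_Y", 0.3, 200.0, 5.0),
    ("rule_Z", 0.8, 10.0, 1.0),
]

scored = [(rid, p * gain - cost) for rid, p, gain, cost in candidates]
for rid, net in sorted(scored, key=lambda t: t[1], reverse=True):
    print(f"{rid}: expected net gain from review = {net:+.1f} $M")
# rule_Y (0.3*200 - 5 = 55) outranks rule_X (0.6*50 - 2 = 28)
# and rule_Z (0.8*10 - 1 = 7), despite rule_Z's higher probability.
```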
Furthermore, most past efforts have required agencies to engage in this activity without any additional funding. This creates a disincentive for agencies to conduct retrospective reviews, given the opportunity cost of shifting resources from other priorities. It is unrealistic to expect high-quality reviews that lead to significant regulatory improvements without allocating additional funds to facilitate retrospective analyses.
Recommendation 3: Agencies’ prospective plans for retrospective review should include details on data collection and monitoring, as well as plans for periodic reviews at specified time intervals.
Agencies should include in each major new rule a plan for prospective data collection (monitoring) of relevant impacts and a scheduled retrospective analysis at a future time (Miller, 2015; Cropper et al., 2017; Dudley & Katzen, 2019). Planning for these data in advance can overcome some of the challenges presented by the Paperwork Reduction Act, which limits agencies’ collection of additional data after the regulation is promulgated. In order to plan ahead for retrospective review of policy designs and their performance, in appropriate cases (which may not be possible in all rules), agency rules could include control groups or alternative treatment groups, such as stages of early and later implementation over time, or variation in policy parameters across states or regions or actors (Cropper et al., 2017, p. 1376).
Where appropriate, agencies should develop Planned Adaptive Regulation (PAR) with regular data collection and ongoing periodic reviews, at time intervals (e.g., 2 years, 5 years, 10 years) that balance the expected gains from learning (value of new information for improved policy) with the expected costs of review (monitoring, analysis, adjustment) (McCray et al., 2010; Bennear & Wiener, 2019b).
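As a stylized illustration of this balancing (not a calibrated model), one can compare the average annual cost of rule-circumstance mismatch against the amortized cost of review across candidate intervals. All parameters below are hypothetical.

```python
# Stylized choice of review interval for planned adaptive regulation.
# Assume the annual harm from a rule drifting out of date grows linearly
# at `drift` $M per year until a review resets it, and each review costs
# `review_cost` $M. All parameters are hypothetical.

def avg_annual_cost(interval_years: float, drift: float, review_cost: float) -> float:
    # Average mismatch cost over a cycle is drift * interval / 2,
    # plus the review cost amortized over the interval.
    return drift * interval_years / 2 + review_cost / interval_years

for interval in (2, 5, 10):
    total = avg_annual_cost(interval, drift=1.0, review_cost=10.0)
    print(f"review every {interval:>2} years: avg annual cost = {total:.1f} $M")
# With these parameters, 5-year reviews (2.5 + 2.0 = 4.5) beat both
# 2-year (1.0 + 5.0 = 6.0) and 10-year (5.0 + 1.0 = 6.0) cycles.
```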
Recommendation 4: An interagency working group or commission should be formed and tasked with identifying areas of regulatory learning that would improve outcomes across agencies and conducting cross-agency reviews.
An interagency working group, commission, or board – for example, the GAO, the Administrative Conference of the United States (ACUS), an NAS panel, an NTSB-like board, or a new RIC (Mandel & Carew, 2013; Dudley & Mannix, 2018, p. 16) – should select and assess sets of multiple rules (from multiple agencies), in order to (i) compare and learn from variation in policy designs, (ii) learn from cumulative and interactive impacts, and (iii) test and improve the accuracy of ex ante RIA methods. Such a body would likely need data from the relevant agencies. It would need expert staff and funding. It would offer its findings and recommendations, at least as advisory inputs to subsequent agency actions, but it may not have the legal authority to implement changes to regulatory policies. (Nonetheless, an independent standing expert analysis body may be influential in such an advisory role, assisting agencies and oversight bodies with timely analyses and recommendations, and overcoming some of the inhibitions faced by agencies regarding staffing, time, and self-criticism [Balleisen et al., 2017].) The advisory committee required under the Foundations for Evidence-Based Policymaking Act of 2018 is a good start toward developing such a working group. To succeed, such a body must have expert membership, meaningful authority to identify cross-cutting issues, access to key data, and the capacity to evaluate regulatory performance across rules and agencies, to evaluate forecasting accuracy across methodologies, to make recommendations, and to follow up on the implementation of its recommendations.
Acknowledgments
The authors are grateful for helpful comments on prior drafts by participants in the RFF Advisory Committee meeting on Retrospective Analysis (June 2019) and the RFF Workshop on Retrospective Analysis (September 2019), including chairs R. Morgenstern and A. Fraas, paper discussants S. Katzen, S. Dudley, and J. Holmstead, and other colleagues including R. Bull. The authors also thank A. Shan and C. Gerbode for excellent research assistance.
Supplementary Materials
To view supplementary material for this article, please visit http://dx.doi.org/10.1017/bca.2021.10.