
Regulatory Process, Regulatory Reform, and the Quality of Regulatory Impact Analysis[1]

Published online by Cambridge University Press:  07 November 2016

Jerry Ellig*
Affiliation:
Mercatus Center at George Mason University, 3434 Washington Blvd., 4th Floor, Arlington, VA 22201, USA, Phone: 703-375-9410, e-mail: jellig@mercatus.gmu.edu
Rosemarie Fike
Affiliation:
Department of Economics, Texas Christian University, TCU Box 298510, Fort Worth, TX 76129, USA, e-mail: Rosemarie.Fike@tcu.edu

Abstract

Numerous regulatory reform proposals would require federal agencies to conduct more thorough economic analysis of proposed regulations or expand the resources and influence of the Office of Information and Regulatory Affairs (OIRA), which currently reviews executive branch regulations. Such reforms are intended to improve the quality of economic analysis agencies produce when they issue major regulations. We employ newly gathered data on variation in current administrative procedures to assess the likely effects of proposed regulatory process reforms on the quality of agencies’ regulatory impact analyses (RIAs). Our results suggest that greater use of advance notices of proposed rulemakings for major regulations, advance consultation with regulated entities, use of advisory committees, and expansion of OIRA’s resources and role would improve the quality of RIAs. They also suggest pre-proposal public meetings with stakeholders are associated with lower quality analysis.

© Society for Benefit-Cost Analysis 2016

1 Introduction

Regulatory impact analysis (RIA) has become a key element of the regulatory process in developed and developing nations alike. A thorough RIA identifies the potential market failure or other systemic problem a regulation is intended to solve, develops a variety of alternative solutions, and estimates the benefits and costs of those alternatives. Governments have outlined RIA requirements in official documents, such as Executive Order 12866 (Clinton, 1993) and Office of Management and Budget (OMB) Circular A-4 (2003) in the United States and the Impact Assessment Guidelines in the European Union (European Commission, 2009). More recently, President Obama's Executive Order 13563 reaffirmed Executive Order 12866 and noted some additional values agencies could consider, such as fairness and human dignity (Obama, 2011).

Yet across the globe, evaluations of regulatory impact analysis have found that government agencies' practice often falls far short of the principles outlined in scholarly research and governments' own directives. Many RIAs in the United States lack basic information, such as monetized benefits and meaningful alternatives (Hahn et al., 2000; Hahn & Dudley, 2007; Fraas & Lutter, 2011a; Shapiro & Morrall, 2012). Related analyses find that European Commission impact assessments have similar weaknesses (Renda, 2006; Cecot et al., 2008; Hahn & Litan, 2005). Case studies also find that RIAs have significant deficiencies (Harrington, Heinzerling & Morgenstern, 2009; Graham, 2008; Morgenstern, 1997; McGarity, 1991; Fraas, 1991). Some commentators have characterized individual RIAs as "litigation support documents" (Wagner, 2009) or documents drafted to justify decisions already made for other reasons (Dudley, 2011, 126; Keohane, 2009). Interviews with agency economists indicate that this happens frequently (Williams, 2008).

In the United States, perceived deficiencies in the quality of regulatory analysis have led to calls for significant reforms of the regulatory process to motivate higher quality analysis (House Judiciary Committee, 2013; President's Jobs Council, 2011; Harrington, Heinzerling & Morgenstern, 2009). One proposal would require agencies to publish an advance notice of proposed rulemaking (ANPRM) for all "major" regulations – typically, regulations with economic effects exceeding $100 million annually. Under current practice, an ANPRM may consist largely of a preliminary regulatory proposal or a request for information; reformers suggest that it should also include a preliminary RIA. Proponents believe that expanded use of ANPRMs to solicit comments on a preliminary RIA could improve the quality of regulatory analysis for three reasons. First, public comment on a preliminary analysis provides the agency with more information; it allows the agency to benefit from critiques, feedback, and other public input (President's Jobs Council, 2011, 43). Second, requiring an agency to produce a preliminary analysis before it writes a proposed regulation helps counter the tendency of agencies to make regulatory decisions first and then task economists or other analysts with producing an analysis that simply supports the decision (Williams, 2008; House Judiciary Committee, 2013, 6–7). Third, public disclosure of a preliminary analysis alters incentives by "crowdsourcing" regulatory review, instead of leaving the review function solely to the Office of Information and Regulatory Affairs (OIRA) (Belzer, 2009).

Another proposal would require agencies to consult with affected private sector parties as early as possible, before proposing regulations that would impose significant mandates on them. The Small Business Regulatory Enforcement Fairness Act already requires the Environmental Protection Agency and the Occupational Safety and Health Administration to create panels to advise them on how to craft certain types of regulations in a manner that minimizes impacts on small businesses. The Unfunded Mandates Reform Act requires agencies to consult with state, local, and tribal governments before writing regulations that affect them. Legislation that passed the House during the past several Congresses would extend this consultation requirement to the private sector as well (House Committee on Oversight and Government Reform, 2015, 4). Proposed legislation would also require trial-like, formal rulemaking hearings for "high impact" regulations – generally, those that impose costs or other burdens exceeding $1 billion annually (House Judiciary Committee, 2013).

Some reforms would augment the resources and role of OIRA, the office within the OMB that reviews regulations and their accompanying RIAs for compliance with Executive Order 12866. Commentators have called for an expansion of OIRA's staff (Shapiro & Morrall, 2013; President's Jobs Council, 2011, 45) and for subjecting regulations from independent regulatory commissions to RIA requirements and OIRA review (Hahn & Sunstein, 2002, 1531–37; House Judiciary Committee, 2013, 24–26; President's Jobs Council, 2011, 45; Tozzi, 2011, 68; Katzen, 2011, 109; Fraas & Lutter, 2011b). Shapiro and Morrall (2013) find that RIAs that underwent lengthier OIRA review contain more thorough analysis. Based on this finding, they suggest that increasing OIRA's staff to allow for more thorough review could improve the quality of analysis.

In a sense, the reform proposals represent a continuation of a trend toward greater uniformity in administrative procedures that began with the Administrative Procedure Act (APA) of 1946. The APA instituted uniform procedures and established minimum standards for information gathering and disclosure across agencies (McCubbins, Noll & Weingast, 1987, 256). The RIA requirements in executive orders raised these standards by enunciating a series of substantive questions that all executive branch regulatory agencies are supposed to address.[2] The proposed reforms would further standardize agency procedures for developing regulations and RIAs and apply these standards to independent agencies as well.

Recent regulatory history provides a rich database of experience that can be used to test the prospective impact of proposed reforms. Many of the proposed reforms are similar to actions that agencies sometimes undertake voluntarily or are currently required by law for certain regulations. OIRA already subjects some regulations to a lengthier or more thorough review than others (McLaughlin & Ellig, 2011). If more extensive effort by agencies and OIRA is correlated with higher quality analysis, then requiring such effort could improve the quality of regulatory analysis.

This paper combines newly gathered data on the variation in regulatory processes with an extensive set of expert scores that evaluate the quality of regulatory impact analysis for proposed federal regulations to assess whether RIA quality varies systematically with the type of effort expended by agencies and OIRA. We find that many types of agency effort, such as a pre-proposal notice requesting comment from the public, consultation with state governments, and use of advisory committees, are associated with higher quality RIAs. The quality of regulatory analysis is positively correlated with the length of OIRA review time, and quality is lower when OIRA is headed by an acting administrator rather than a presidential appointee. For most of the explanatory variables, similar results occur using ordered logit, ordinary least squares (OLS), or a three-stage least squares estimator that includes instrumental variables. The results suggest that regulatory reforms designed to expand agency analytical activity and augment OIRA’s influence and resources could improve the quality of regulatory impact analysis.

2 Theoretical considerations

Elected leaders delegate significant decision-making authority to regulatory agencies. This makes accountability more difficult, due to the asymmetry of information between the agencies and the elected leaders. As a result, elected leaders may not get the amount or type of regulation they would have written if they were privy to the agency's expert knowledge (McCubbins, 1985; Abbott, 1987, 180).

From the perspective of elected policymakers, agencies may be over- or under-zealous about adopting new regulations. Issuing new regulations requires effort, which is costly (McCubbins, Noll & Weingast, 1987, 247). Hence, bureaucratic inertia may lead to fewer regulatory initiatives than elected leaders desire (Kagan, 2001). Antiregulatory interests are also often well organized and well funded, and they may influence agencies to under-regulate (Bagley & Revesz, 2006, 1282–304). A president can counter these incentives by appointing regulatory enthusiasts who will seek out information about new opportunities to regulate (Bubb & Warren, 2014).

The most obvious reason that regulators might choose an inefficiently high level of regulation is that some statutes instruct them to make decisions based on factors other than efficiency. In those cases, regulators would be following elected leaders' wishes. However, several other incentives may prompt agencies to engage in more regulation, or more intensive regulation, than elected leaders may prefer. Most agency officials benefit from growth and expansion. Regulatory agency success is usually defined as success in creating regulations intended to achieve the agency's specific mission, such as workplace safety (OSHA) or clean water (EPA), rather than thoroughly investigating the opportunity costs of alternative uses of social resources (DeMuth & Ginsburg, 1986; Dudley, 2011). In addition, the typical agency's position as a monopoly supplier that exchanges a bundle of outputs for a budget can lead to levels of output and expenditures that exceed the levels elected leaders would desire if monitoring of the agency were costless (Niskanen, 1994). Even if regulators are primarily concerned with the public interest, they may genuinely believe that the most effective way to advance the public interest is to advance their agency's specific mission (Downs, 1967, 102–52; Wilson, 1989, 260–62).

By adopting procedural requirements that compel agencies to publicize regulatory proposals in advance and disclose their likely consequences, Congress and the president mitigate information asymmetries and make it easier for affected constituencies to monitor and alert them about regulatory initiatives of concern (McCubbins, Noll & Weingast, 1987). As Horn and Shepsle (1989) note, this can increase the value of the legislative "deal" generating the regulation if constituents can monitor the effects of proposed regulations at lower cost than elected leaders can.

Executive orders requiring agencies to conduct and publish RIAs and clear regulations through the OMB are examples of presidential initiatives that seek to reduce information asymmetries (Bubb & Warren, 2014, 116).[3] Posner (2001) argues that elected leaders should find RIA requirements useful even when their goal is something other than economic efficiency, because the RIA is supposed to provide a structured and systematic way of identifying the regulation's likely consequences. As if to confirm Posner's hypothesis, seminal articles by DeMuth and Ginsburg (1986) and Kagan (2001) portray centralized regulatory review and RIAs as important tools for ensuring agency accountability under presidents Reagan and Clinton – the two US presidents who did the most to shape the current requirements and review process in the executive branch, despite their divergent attitudes toward regulation.[4]

As a first approximation, we expect that regulatory reforms aimed at increasing agencies’ analytical activity and OIRA’s influence would lead to more thorough RIAs. After all, it is logical to expect that greater effort will produce more thorough analysis. Several complicating factors, however, could lead to different predictions under specific circumstances.

2.1 Agency effort

Increased agency activity may not always improve the quality of the RIA. Agencies can also devote analytical effort to increasing information asymmetries by making the RIA more complex but less informative. Some RIAs spend an inordinate amount of time on less important benefit or cost calculations while missing more substantial issues, such as significant alternatives (Keohane, 2009; Wagner, 2009). Or an RIA may exhibit what Sinden (2015) terms "false formality," providing an extensive presentation of quantified benefits and costs that is used to justify decisions, while ignoring important unquantified benefits or costs. If pre-proposal effort merely promotes complexity, it may not improve the quality of the analysis.

Extra procedural steps could also reduce the quality or use of RIAs by giving interest groups greater influence over the regulatory process. Public meetings or other forums that gather stakeholders together may facilitate collusion among stakeholders at the expense of the general public, even if the purpose of the meeting is merely information gathering. To the extent that the agency is guided by agreement among stakeholders rather than the results of analysis, the RIA may be used less extensively. If analysts expect this to occur, they will likely put less effort into creating a high-quality analysis. Of course, greater responsiveness to stakeholders may be precisely the result elected leaders intend; nevertheless, it could lead to lower quality analysis. Even if stakeholders wield no inappropriate influence, public meetings or other extensive discussions may lead agencies to document the analysis or its effects less extensively in the notice of proposed rulemaking (NPRM) or RIA, since major stakeholders already heard this discussion in meetings where many topics relevant to regulatory analysis were aired.

2.2 OIRA influence and resources

Executive Order 12866 explicitly gives OIRA two distinct functions, which sometimes conflict (Arbuckle, 2011; Dudley, 2011): ensuring that regulations embody the regulatory analysis principles enunciated in the executive order, and ensuring that they reflect the president's policy views. If OIRA primarily enforces the principles of Executive Order 12866, then we would expect greater effort on OIRA's part to improve the quality and use of RIAs. If OIRA primarily enforces the president's policy views on agencies, then OIRA's efforts may have ambiguous effects on the quality of RIAs, depending on how much the administration's policy views diverge from the principles in the executive order.

Prior research on OIRA's effectiveness often finds that regulations do undergo change during the review process. A 2003 Government Accountability Office report found numerous instances in which OIRA review affected the content of an agency's regulatory analysis or the agency's explanation of how the analysis was related to the regulation (USGAO, 2003). Haeder and Yackee (2015) find that the length of OIRA review time is positively correlated with the amount of change to final rules, and that more change occurs when interest groups have met with OIRA staff to discuss the regulation. Croley (2003) found that at both the proposed and final stages, more rules are changed during the OIRA review process than are left unchanged, and the likelihood of change is greater if interest groups have met with OIRA staff. Other research, however, has concluded that OIRA has had little systematic impact on the cost-effectiveness of regulations (Hahn, 2000; Farrow, 2006) – a result not inconsistent with findings of interest group influence.

The effect of OIRA review on RIAs, rather than on the regulations themselves, has been studied less extensively. One recent study finds that the length of OIRA review is positively correlated with the amount of information in RIAs (Shapiro & Morrall, 2013). This approach is more consistent with testing Posner's (2001) hypothesis that elected leaders utilize RIAs to curb information asymmetries, and thus we adopt a similar approach in our empirical analysis.

3 Data and variables of interest

3.1 Dependent variable

Our dependent variable measuring the quality and use of regulatory impact analysis consists of qualitative scores awarded by the Mercatus Center at George Mason University's Regulatory Report Card, which assessed the quality of RIAs for proposed, economically significant, prescriptive regulations.[5] "Economically significant" regulations are those that have costs or other economic effects exceeding $100 million annually or that meet other criteria specified in section 3(f)(1) of Executive Order 12866 (Clinton, 1993). "Prescriptive" regulations mandate or prohibit activities. The other major type of regulation is budget regulations, which implement federal spending or revenue collection programs (Posner, 2003).

Ellig and McLaughlin (2012) list the Report Card evaluation criteria and explain how they mirror elements in the OMB's Regulatory Impact Analysis Checklist (OMB, 2010). The Mercatus Report Card is the most in-depth evaluation we know of that covers numerous federal regulations. Evaluators assessed each criterion on a Likert (0–5) scale, where 0 indicates no relevant content and 5 indicates reasonably complete analysis with one or more best practices. The first eight Report Card criteria evaluate the quality of the RIA and other analysis accompanying the regulation. Appendix A lists these eight criteria, along with sub-questions employed to guide scoring of some of the more technical criteria.
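In compact form (our notation, consistent with the scoring scheme just described and with the 40-point maximum reported below):

$$\text{Quality}_r \;=\; \sum_{i=1}^{8} s_{ir}, \qquad s_{ir} \in \{0,1,\ldots,5\}, \qquad 0 \le \text{Quality}_r \le 40,$$

where $s_{ir}$ is the score regulation $r$ receives on criterion $i$.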

One might reasonably ask whether this evaluation method genuinely assesses the quality of the RIA, or if it merely assesses the extent of quantification or formalism (Sinden, 2015, 33). We believe the Report Card is a valid assessment of quality, for several reasons. First, the criteria are derived from OMB guidance documents on regulatory analysis – most notably Circular A-4, which underwent external peer review by multiple academic experts in benefit-cost and regulatory impact analysis. Second, many of the evaluation questions listed in Appendix A address topics that are clearly important regardless of the RIA's degree of quantification or formalism. For example, the evaluators considered whether the RIA conceptually identified relevant benefits and costs, and whether the cause-and-effect analysis of the underlying problem, benefits, and costs is well supported with coherent theory and systematic evidence. These factors are equally critical to ensure that the type of "conceptual" or "Ben Franklin" style analysis that Sinden advocates is grounded in reality rather than speculation. Third, the evaluation criteria assess readability and whether the underlying information is well documented – antidotes to the type of strategic formalism that obscures rather than enlightens. Fourth, the 0–5 Likert scale allowed reviewers to award partial credit for qualitative information, even when a criterion might seem to focus on quantitative information. Thus, the Report Card reflects established practice and addresses qualitative as well as quantitative factors in the analysis.

We use the score data for all of the prescriptive (i.e., non-budget) economically significant regulations that cleared OIRA review in 2008–10, the same time period covered by Ellig et al.'s (2013) study that compares the quality and use of RIAs during the Bush and Obama administrations. This lets us determine whether their results hold after controlling for the regulatory process variables that are the primary focus of our analysis. During these years, 71 proposed prescriptive regulations cleared OIRA review.[6]

In keeping with the current debate surrounding regulatory reform, we focus on prescriptive regulations for several reasons. First, prescriptive regulations fill the conventional role of regulations: they mandate or prohibit certain activities (Posner, 2003). This distinguishes them from budget regulations, which implement federal spending programs or revenue collection measures. Second, empirical evidence shows that budget regulations have much lower quality analysis (Posner, 2003; McLaughlin & Ellig, 2011). By focusing on prescriptive regulations, we hope to identify which aspects of the regulatory process are conducive to higher quality analysis. Finally, OIRA review of prescriptive regulations tends to focus on the major elements of regulatory impact analysis as articulated in Executive Order 12866; review of budget regulations focuses mostly on whether the regulations' implications for the federal budget are accurately estimated (McLaughlin & Ellig, 2011). Since one of the pre-proposal process factors we examine is OIRA review, it seems logical to examine the type of regulation for which OIRA tries hardest to enforce the provisions of the executive order.

Table 1 shows summary statistics for the dependent variable – Quality of Analysis – and the seven regulatory process variables we consider in this study. The mean score for Quality of Analysis is 24.15 out of a maximum possible 40 points. The highest-scoring regulation received 33 out of a possible 40 points, or 82.5 percent.

Table 1 Summary statistics, dependent variable and regulatory process variables, N = 71.

Note: Quality of analysis is the dependent variable. All regulatory process variables are 0–1 dummy variables except for OIRA review time, which is measured in days.

Source: Authors’ calculations using data downloaded from www.mercatus.org/reportcard.

3.2 Regulatory process variables

A major contribution of this paper is a new dataset of observable indicators denoting the type of activity the agencies and OIRA devoted to the production and review of RIAs for each proposed rule. The authors and graduate research assistants read through the NPRMs, RIAs, and other supporting documents, searching for key words and concepts. The data were then coded as dummy variables to capture the types of actions accompanying each proposed regulation.[7] Unless otherwise indicated in the text, the dummy variable takes a value of "1" if the activity mentioned occurred, and "0" otherwise.
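A minimal sketch of this coding step is below. It is illustrative only: the DataFrame `docs`, its columns, and the keyword lists are hypothetical, and the actual coding was done by human readers, with keyword searches serving as an aid rather than a substitute for judgment.

```python
import pandas as pd

# Hypothetical keyword lists used as a screening aid; final coding
# decisions were made by human readers of the NPRMs and RIAs.
KEYWORDS = {
    "any_prior_notice": ["advance notice of proposed rulemaking",
                         "request for information",
                         "notice of data availability"],
    "public_meeting": ["public meeting", "public hearing", "listening session"],
}

def flag(text, phrases):
    """Return 1 if any phrase appears in the document text, else 0."""
    lowered = text.lower()
    return int(any(p in lowered for p in phrases))

# `docs` is assumed to have one row per proposed rule, with columns
# `rule_id` and `text` (the concatenated NPRM/RIA text).
for var, phrases in KEYWORDS.items():
    docs[var] = docs["text"].apply(lambda t, p=phrases: flag(t, p))
```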

Any Prior Notice: indicates whether the NPRM was preceded by an ANPRM, a prior NPRM, a Request for Information, or a Notice of Data Availability. Under current practice, an ANPRM need not include any preliminary analysis or discussion of data, but some do. In a robustness test below, we consider the effects of using a prior notice dummy variable that excludes ANPRMs.

State Consultation: indicates whether the agency consulted with representatives from state, local, or tribal governments when drafting the proposed rule. Consultation may be in private or in public.

Public Meeting: indicates whether the agency provided a public forum to receive comments from interested parties before publishing the proposed regulation.

Advisory Committee: indicates whether the regulatory agency created and consulted with an advisory committee on the particular regulation proposed in the NPRM. Advisory committees include scientific advisory committees, stakeholder advisory committees, committees formed for a negotiated rulemaking, and panels created under the Small Business Regulatory Enforcement Fairness Act to advise the agency on how to craft the regulation to reduce impacts on small businesses.

Future Public Meeting: indicates whether the NPRM explicitly committed the agency to a public discussion forum in the future to receive feedback on the proposed rule. The information that an agency receives at a future public meeting cannot affect the quality and use of analysis for an NPRM that is published before the meeting. However, the prospect of a future public meeting may augment the agency’s incentive to conduct careful analysis, because the agency will have to defend its proposed rule in a public forum.

Acting OIRA Administrator: indicates whether OIRA concluded its review of the regulation during the interregnum period at the beginning of the Obama administration when the OIRA administrator was an acting career civil servant rather than a Senate-confirmed presidential appointee. This variable may identify a period when OIRA has less political clout in the administration. It may also control more generally for any tendency for the quality of analysis to fall during the transition to new political appointees in a new administration.

OIRA Review Time: To measure the extent of OIRA effort expended on each NPRM and RIA, we use the number of days OIRA spent reviewing the documents before their publication. Reginfo.gov, a federal regulatory portal, records the dates when OIRA review begins and concludes. Like many kinds of effort, review time may be subject to diminishing marginal returns. We therefore include a second variable, the square of review time (OIRA Review Time²), to test for diminishing marginal returns.
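In regression notation, the two review-time terms imply (our notation; the quadratic form is inferred from the variables just described):

$$\text{Quality}_r = \cdots + \beta_1\,\text{ReviewTime}_r + \beta_2\,\text{ReviewTime}_r^{2} + \cdots, \qquad \frac{\partial\,\text{Quality}_r}{\partial\,\text{ReviewTime}_r} = \beta_1 + 2\beta_2\,\text{ReviewTime}_r,$$

so diminishing marginal returns correspond to $\beta_2 < 0$, and the marginal effect of additional review time turns negative once review time exceeds $t^{*} = -\beta_1/(2\beta_2)$.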

3.3 Control variables

Ellig et al. (2013) provide the most extensive published analysis of factors correlated with Report Card scores. To ensure that our results do not stem from the omission of important variables identified in their research, our control variables include all of their explanatory variables, plus some additional ones.

Obama Administration. This variable is intended to indicate whether there is any systematic difference in the quality and use of regulatory analysis in different presidential administrations.

Post-June 1 Midnight Regulation. "Midnight regulation" refers to the well-documented surge of regulations that tends to occur at the end of presidential terms, between Election Day and Inauguration Day (Arbuckle, 2011; Brito & de Rugy, 2009; Howell & Mayer, 2005). This variable equals "1" for Bush administration regulations when OIRA review of the proposed regulation concluded after June 1, 2008 and the regulation was finalized between Election Day 2008 and Inauguration Day 2009. The Bush administration set the June 1 deadline in an explicit attempt to limit midnight regulations. These regulations might be expected to have lower scores for three reasons: they were put together in a hurry, political considerations may have led the administration to place a lower priority on conducting high-quality analysis, and the surge of midnight regulations may overwhelm OIRA's review capacity (Brito & de Rugy, 2009; McLaughlin, 2011).

Pre-June 1 Midnight Regulation. This variable equals "1" for Bush administration regulations when OIRA review of the proposed regulation concluded prior to June 1, 2008 and the regulation was finalized between Election Day 2008 and Inauguration Day 2009. These midnight regulations have not been included in prior studies that find midnight regulations have lower quality analysis (Ellig et al., 2013; McLaughlin & Ellig, 2011).

Post-June 1 Leftover. This variable equals “1” for Bush administration regulations when OIRA review of the proposed regulation concluded after June 1, 2008, but the regulation was left for the Obama administration to finalize. These regulations might have lower quality of analysis because they were supposed to be midnight regulations but were not completed in time, or because they were lower-priority regulations passed on to the next administration.

Pre-June 1 Leftover. These are Bush administration regulations whose OIRA review concluded prior to June 1, 2008, but were left for the Obama administration to finalize.

Obama Potential Midnight. These are Obama administration regulations proposed but not finalized before Election Day 2012. There is usually a smaller surge of midnight regulations at the end of a president's first term, even if he is re-elected (Cochran, 2001; De Rugy & Davies, 2009). In addition, at the time these regulations were developed and proposed, the outcome of the 2012 election was unknown. Only one of President Obama's potential midnight regulations became final in the midnight period following his 2012 victory. However, we cannot know which of the regulations would have become midnight regulations if President Obama had lost the election. For this reason, we do not attempt to distinguish between midnight and leftover regulations in the Obama administration.

Agency policy preferences. Posner's model (2001, 1184–85) predicts that the greater the ideological distance between the president and the agency, the more likely the president is to require an agency to conduct regulatory impact analysis. Clinton and Lewis (2008) use expert elicitation to develop numerical scores measuring agency policy preferences on a "conservative–liberal" spectrum. Ellig et al. (2013) find results consistent with Posner's hypothesis: regulations from agencies with more "conservative" policy preferences tend to have lower Report Card scores during the Bush administration, and regulations from agencies with more "liberal" policy preferences tend to have lower scores during the Obama administration. To test for this kind of reversal, our agency policy preference variable takes a value equal to the proposing agency's Clinton–Lewis score during the Bush administration and the negative of the Clinton–Lewis score in the Obama administration.

Public Comments. Regulations.gov tracks the number of public comments submitted on each proposed regulation. We also include the square of Public Comments, which controls for the possibility of a nonlinear relationship.

Shapiro and Morrall (2012) employ the number of comments as an indicator of a regulation's political salience; the more comments on a regulation, the more likely it is politically salient. They find that the more public comments on a regulation, the lower are its net benefits, suggesting that the federal government is less likely to try to maximize net benefits when significant political considerations get in the way. If this is true, we might also expect that regulations with more public comments would have lower scores for quality of regulatory analysis. On the other hand, McCubbins, Noll and Weingast (1987) posit that the requirements of the APA help ensure that the most politically controversial regulations generate the most complete information on the public record; this implies that regulations with more public comments should have higher quality analysis.

These variables are solely indicators of political salience. They do not purport to measure the effect of public comments on the proposed regulation, since the dependent variable is the quality of analysis accompanying the proposed regulation, conducted before any comments were filed. They are not the only indicators of political salience in the regressions; midnight regulations may be politically controversial, and regulations with large impacts (below) may also be more politically visible.

Effects Exceed $1 Billion. This variable equals "1" if the agency indicates that either the benefits or the costs of the regulation exceed $1 billion annually. OMB Circular A-4 directs agencies to undertake a formal quantitative analysis of uncertainty for regulations with economic effects exceeding $1 billion annually (OMB, 2003, 41). These regulations may have a higher score for quality of analysis because the Report Card explicitly awards points for uncertainty analysis and because the research required to develop the uncertainty analysis may also generate additional information that improves other aspects of the RIA. They may also have a higher score if agencies simply conduct more thorough analysis for regulations that have larger impacts.

4 Econometric methods and results

4.1 Econometric estimators

Because the scores are qualitative evaluations, we primarily rely on an ordered logit regression model to assess whether the scores are correlated with any of the regulatory process variables (Maddala, 1983; Greene, 2003). Appendix B outlines the derivation of our model.
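As a rough sketch of this baseline specification (assuming a pandas DataFrame `df` holding the Report Card scores and regressors; the variable names are illustrative stand-ins, not the authors' own):

```python
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

# Regulatory process variables and controls described in Section 3;
# names here are illustrative.
process_vars = ["any_prior_notice", "state_consultation", "public_meeting",
                "advisory_committee", "future_public_meeting",
                "acting_oira_admin", "oira_review_time", "oira_review_time_sq"]
controls = ["obama_admin", "post_june1_midnight", "public_comments",
            "public_comments_sq", "effects_exceed_1bn"]

# Ordered logit: no constant is included because the model's threshold
# parameters play that role.
model = OrderedModel(df["quality_of_analysis"],
                     df[process_vars + controls],
                     distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.summary())
```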

Ordered logit is the most appropriate method to use with ordinal score data. Unfortunately, ordered logit with fixed effects dummy variables may not yield a consistent estimator when the number of observations in each group is small (Chamberlain, 1980). Three of the 13 agencies in our sample – EPA, DOT, and DOL – each issued more than 10 regulations, but seven agencies each issued fewer than four regulations. Thus, consistency may be a problem with the ordered logit estimator.

Several alternative estimators for fixed effects ordered logit have been suggested in the literature, but there is little consensus on the best estimator. To test the robustness of our results, we use the fixed effects ordered logit estimator published by Baetschmann, Staub and Winkelmann (2015). They demonstrate that their estimator is consistent, reasonably efficient, and unbiased even in small samples. They refer to their method as "blow up and cluster" (BUC). The sample is "blown up" by creating K-1 copies of each observation, where K is the number of possible values the dependent variable can take. Each copy is dichotomized at a different possible value of the dependent variable. Standard errors are clustered by observation, since the K-1 copies of each observation are mechanically related to one another. Conditional maximum likelihood is then applied to the entire blown-up set of observations.
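A minimal sketch of the BUC procedure as described above (assuming `df` holds the sample with an ordinal `score`, an `agency` group identifier, a `rule_id`, and a regressor list `xvars`; all names are illustrative):

```python
import numpy as np
import pandas as pd
from statsmodels.discrete.conditional_models import ConditionalLogit

# "Blow up": one copy of the sample per dichotomization point. With K
# distinct score values there are K-1 cutpoints.
cuts = np.sort(df["score"].unique())[1:]
blown_up = pd.concat(
    [df.assign(y=(df["score"] >= k).astype(int), cut=k) for k in cuts],
    ignore_index=True,
)

# Conditional maximum likelihood (fixed-effects logit), with the agency
# fixed effects conditioned out rather than estimated directly.
res = ConditionalLogit(blown_up["y"],
                       blown_up[xvars],
                       groups=blown_up["agency"]).fit()

# "Cluster": standard errors should be clustered on rule_id, since the
# K-1 copies of each rule are mechanically related; in practice this can
# be done with a cluster bootstrap over rules (omitted from this sketch).
print(res.summary())
```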

We also use an OLS fixed effects estimator. A potential advantage of OLS is that the linear regression model using group dummy variables is a consistent estimator (Chamberlain, 1980, 225). A potential disadvantage is that OLS can yield biased coefficient estimates when the dependent variable is ordinal rather than cardinal (Baetschmann, Staub & Winkelmann, 2015, 702). The Report Card data suggest that a cardinal interpretation of the scores might be plausible. The histogram in Appendix C shows that scores range from 11 to 33, and the number of regulations with each score is somewhat dispersed rather than clustered around a few values. Even if a cardinal interpretation of the scores is not strictly accurate, if the OLS results are similar to the ordered logit results, we can be more confident that the results were not determined by our choice of estimator.

The possibility of using least squares also allows us to test for endogeneity and simultaneity by using three-stage least squares.[8] The quality of analysis could be endogenous if agencies are more willing to submit their analysis to critical examination when they know the analysis is more thorough. In other words, the agency might choose to undertake additional rounds of public comment, consultations, and meetings if it knows its analysis is more likely to stand up to criticism. In this case, more thorough analysis might be correlated with more procedural effort because more thorough analysis causes agencies to engage in more process – not because more process produces more thorough analysis. This possibility is less likely for Any Prior Notice, State Consultation, Public Meeting, and Advisory Committee, since these procedural efforts occur before the agency produces the RIA. The agency's decision to commit to a future meeting to receive feedback, however, could plausibly be influenced by the quality of the analysis.

Simultaneity could be a problem if some unobserved variable causes agencies both to engage in more process and to produce more thorough analysis. In this case, any observed correlation between the quality of analysis and agency process could occur because of this unobserved factor, not because greater agency effort produces more thorough analysis. The conventional econometric remedy is to use a two-stage approach that first predicts the value of the explanatory variable(s) using instrumental variables that are not correlated with the dependent variable. When we employed a two-stage least squares estimator, it dropped most of the variables in the second stage. We can, however, use instrumental variables to predict the values of the agency process variables using three-stage least squares. The agency process variables are dichotomous, but Angrist (2001) argues that least squares methods can produce accurate causal inferences even when the endogenous variables are 0–1 dummy variables.
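The flavor of the three-stage setup can be sketched with the linearmodels package (a two-equation illustration with one endogenous process dummy, assuming a DataFrame `data`; the variable and instrument names are hypothetical stand-ins for those described below and in Table 3):

```python
from linearmodels.system import IV3SLS

# Each equation's endogenous regressors are marked with the
# [endog ~ instruments] notation used by linearmodels system models.
equations = {
    "quality":
        "quality ~ 1 + oira_review_time + oira_review_time_sq"
        " + acting_oira_admin + obama_admin"
        " + [state_consultation ~ total_words + reg_flex_analysis"
        "    + statutory_deadline]",
    "state_consultation":
        "state_consultation ~ 1 + total_words + reg_flex_analysis"
        " + statutory_deadline"
        " + [quality ~ oira_review_time + acting_oira_admin]",
}
fit = IV3SLS.from_formula(equations, data).fit()
print(fit.summary)
```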

Some of the regulatory process variables we consider might not be independent of each other. For example, for particularly important or controversial regulations, agencies might take several of the pre-proposal process steps we consider, and OIRA's review time might be especially long. We tested for multicollinearity by examining correlation coefficients (Farrar & Glauber, 1967), variance inflation factors, and the condition index (Belsley, Kuh & Welsch, 1980). None of these indicators suggested that the process variables suffer from multicollinearity.
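These checks are straightforward to reproduce; a minimal sketch, assuming `X` is a pandas DataFrame containing the regulatory process variables:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

print(X.corr())  # pairwise correlation coefficients

# Variance inflation factors (computed with a constant in the design
# matrix but reported only for the substantive columns).
Xc = sm.add_constant(X).to_numpy(dtype=float)
vifs = {col: variance_inflation_factor(Xc, i + 1)
        for i, col in enumerate(X.columns)}
print(vifs)  # common rule of thumb: VIF > 10 signals trouble

# Condition index (Belsley, Kuh & Welsch): ratio of the largest to the
# smallest singular value of the column-normalized design matrix;
# values above about 30 suggest harmful collinearity.
Xs = Xc / np.sqrt((Xc ** 2).sum(axis=0))
singular_values = np.linalg.svd(Xs, compute_uv=False)
print(singular_values.max() / singular_values.min())
```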

Table 2 Most process variables are correlated with quality of analysis.

Note: The omitted agency category is Treasury. With the exception of Advisory Committee and Future Public Meeting, all regulatory process variables are statistically significant regardless of the estimation technique. Absolute values of z-statistics or t-statistics in parentheses. Statistical significance: *10 percent, **5 percent, ***1 percent.

Robust standard errors clustered by department.

Source: Authors’ estimates using score data downloaded from www.mercatus.org/reportcards and explanatory variables coded by authors, as described in text.

4.2 Regression results

Table 2 reports regression results using the four different estimators. The first two agency process variables – Any Prior Notice and State Consultation – are positively correlated with the quality of analysis and statistically significant regardless of the estimator used.[9] Public Meeting is negative and statistically significant regardless of estimator. Advisory Committee is positively correlated with the quality of analysis in the ordered logit regression, marginally significant in the BUC ordered logit fixed effects and the OLS regressions, but not significant in the three-stage least squares regression. Future Public Meeting has a positive sign, but it is never statistically significant.

The OIRA variables show a great deal of consistency across estimators. Acting OIRA Administrator is negative and significant in all four regressions.[10] OIRA Review Time is positively correlated with the quality of analysis and statistically significant in all four regressions; OIRA Review Time² has a negative sign in all four regressions and is statistically significant in three of them. The coefficients indicate that the net effect of OIRA review time is positive until review time exceeds 200–225 days. One regulation in the sample was reviewed for 200 days, and no other regulation was reviewed for more than 151 days, so OIRA review is associated with higher quality analysis for all but one regulation.

Comparing the two ordered logit regressions, the coefficients on the process variables are quite similar, but somewhat smaller in the BUC fixed effects regression. This suggests that the regular ordered logit regression produces fairly reliable estimates of the coefficients. (No departmental dummy variable coefficients are reported for the BUC fixed effects estimator because the method does not produce coefficients for the departmental dummy variables.)

In the three-stage regressions, some of the coefficients on the process variables are larger than those produced with OLS, and some are smaller. Similarly, the statistical significance of the process variables is sometimes greater in the three-stage equation and sometimes greater in the OLS equation. These results suggest that neither endogeneity nor simultaneity systematically biases the size or significance of the process variable coefficients.

Table 3 Equations predicting endogenous agency process variables for 3SLS.

Note: Quality of Analysis does not predict the regulatory process dummy variables, so simultaneity is not a problem. Agency dummy variables are omitted to conserve space. Absolute values of t-statistics in parentheses. Statistical significance: *10 percent, **5 percent, ***1 percent.

Source: Authors’ estimates using score data downloaded from www.mercatus.org/reportcards and explanatory variables coded by authors, as described in text.

Table 3 reports the results of the regressions used to predict the agency process variables in the three-stage model. The coefficients on Quality of Analysis support the claim that endogeneity is not an issue; higher quality analysis does not appear to be associated with more extensive process efforts. In fact, the only regression showing even a marginally significant correlation between the quality of analysis and a process variable is the regression for State Consultation – but the sign on Quality of Analysis suggests that agencies are less likely to consult states when they produce a higher quality analysis. Thus, if there is any endogeneity for State Consultation, the endogeneity is not responsible for the positive correlation between Quality of Analysis and State Consultation in Table 2.

Table 3 includes 12 instrumental variables that were used to predict the agency process variables. The instrumental variables are not highly correlated with the quality of analysis after taking the other control variables into account.

The first two instrumental variables measure the complexity of the regulation's topic: the total number of words in the NPRM and RIA, and whether the RIA includes a Regulatory Flexibility Act analysis.[11] A Regulatory Flexibility Act analysis, required by law under certain circumstances, assesses whether the regulation disproportionately burdens small businesses and, if so, whether there are regulatory alternatives that might lessen this impact. The next four variables indicate the type of regulation: civil rights, environment, security, or health/safety. The omitted category is economic regulation. The remaining variables indicate legal constraints that may affect the agency's ability or willingness to engage in additional process efforts: whether the regulation is issued in response to a petition from an external party; whether there is a statutory or judicial deadline; whether the statute requires the agency to issue a new regulation (i.e., "no new regulatory action" is not an option); whether the statute prescribes the form, stringency, or coverage of the regulation; and whether the regulation establishes a national ambient air quality standard (for which the EPA is prohibited from considering costs). In general, each agency process variable is correlated with different instrumental variables, and the process variables usually have low correlations with the other control variables that are correlated with the quality of analysis in Table 2. It appears that the complexity of the regulation, the type of regulation, and legal constraints explain agency decisions to engage in additional process efforts reasonably well.

We treat the OIRA variables as exogenous. Whether the OIRA administrator is an acting administrator or a confirmed presidential appointee is clearly determined by timing issues related to administration selection and Senate confirmation of appointees. OIRA review time could perhaps be affected by some unobserved variable that also affects the quality of analysis, but we could identify no convincing instruments for OIRA review time.

A lengthy OIRA review period could indicate a problem with either the analysis or the substance of the regulation that is taking a long time to resolve. We deal with this complication by omitting regulations that have unusually long review times. When we run the regressions omitting regulations with review times longer than the normal 90-day deadline, or 90 days plus a 30-day extension, the results are essentially the same as when we use the entire sample.[12]

It is possible that review time is influenced by the quality of the analysis the agency submits to OIRA; a more thorough analysis may take longer to review (Shapiro & Morrall, 2013). In this case, we cannot presume based on the regression results alone that lengthier OIRA review causes higher quality analysis.

Some of our control variables also produce insightful results. Bush administration midnight or leftover regulations that cleared OIRA after June 1 have lower quality analysis, but midnight or leftover regulations that cleared OIRA prior to June 1 did not usually have lower quality analysis.[13] We also find that potential midnight regulations in the Obama administration have lower quality analysis – consistent with research cited above that finds the midnight regulation phenomenon occurs in administrations of both parties. Political salience, measured by the number of public comments, is correlated with the quality of analysis in most of the regressions. The negative sign on the squared term suggests that hyper-salient regulations that generate postcard and e-mail campaigns may have slightly worse analysis than regulations that are just highly salient. Finally, regulations with economic effects exceeding $1 billion tend to have more thorough analysis.

4.3 Quantitative impact

Many of the regulatory process variables have a statistically significant correlation with the quality of regulatory analysis. We can assess the size of this correlation most straightforwardly by examining the coefficients for the least squares estimators. Coefficients on Any Prior Notice, State Consultation, and Public Meeting range from 3.74 to 4.84 points when they are statistically significant. The net effect of the two OIRA review time variables, evaluated at the mean review time of 62 days, is 3.42 points based on the OLS fixed effects regression and 4.04 points based on the three-stage least squares regression. The standard deviation of Quality of Analysis is 4.66 points. Thus, most of the process variables are associated with a change in score equal to at least three-quarters of a standard deviation. Acting OIRA Administrator has smaller coefficients, equal to about three-fifths of a standard deviation. Advisory Committee and Future Public Meeting also have smaller coefficients, and the latter is never statistically significant.
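A quick check of these magnitudes, using only the numbers reported above:

$$\frac{3.74}{4.66} \approx 0.80, \qquad \frac{4.84}{4.66} \approx 1.04, \qquad \frac{3.42}{4.66} \approx 0.73, \qquad \frac{4.04}{4.66} \approx 0.87,$$

confirming that the statistically significant process coefficients and the net OIRA review-time effect each amount to roughly three-quarters of a standard deviation or more.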

5 Examples

The regression results reported in the previous section establish that there is a correlation between many types of pre-proposal effort and the quality of regulatory impact analysis. The three-stage least squares estimate makes a claim of causality more plausible, because it helps rule out reverse causation and simultaneous causation by unobserved factors. To further explore the plausibility of the claim that greater pre-proposal effort produces better analysis, this section presents several examples that show how information garnered through pre-proposal processes affected the RIA. We focus on the pre-proposal processes whose coefficients are positive and most consistently statistically significant in the regressions in Table 2: Any Prior Notice, State Consultation, Advisory Committee, and OIRA Review Time.

5.1 Any prior notice

The Department of Energy (DOE) has a policy of issuing an ANPRM with 75 days for public comment when it develops an appliance energy efficiency standard under the Energy Policy and Conservation Act. Six of the 18 regulations in our sample with ANPRMs are DOE energy efficiency standards. DOE guidelines explicitly state that the ANPRM will identify candidate standard levels and provide a preliminary analysis of energy savings, consumer costs, and "national net present value," but the ANPRM will not propose a particular standard (10 CFR Ch. II, Part 430, Subpart C, Appendix A).

The most recent energy efficiency regulation in our sample applied to residential refrigerators and freezers. Charts in the NPRM list changes in DOE's analysis that occurred between the ANPRM and the NPRM; the text indicates that several occurred in response to comments on the ANPRM. One table lists changes in the engineering analysis to reflect technological and manufacturing possibilities, such as the percentage of surfaces that could be constructed with vacuum insulation panels without sacrificing structural integrity, estimates of the panels' effectiveness based on manufacturers' actual experience, and an assumption that high-efficiency heat exchangers can be installed only in products that have sufficient space for them (DOE, 2010, 59,501–02). Commenters also questioned DOE's use of household survey data to infer appliances' electrical usage. In response, DOE obtained data on individual appliances' actual metered electricity use (DOE, 2010, 59,509–11).

Other changes dealt directly with economic analysis. The preliminary analysis released with the ANPRM did not include repair costs associated with energy-saving features, because DOE did not have any data on repair costs. Manufacturers commented that appliances with greater energy efficiency typically have more components that require repair, and these components are often more costly. In the revised analysis, DOE estimated repair rates for some components based on data from a previous rulemaking on commercial refrigeration equipment and obtained repair data for standard refrigerator–freezers from Consumer Reports magazine. Repair costs were estimated based on data from Best Buy and DOE's own engineering analysis (DOE, 2010, 59,514).

The DOE ANPRM also demonstrates that even when parts of an agency's analysis do not change in response to comments, the ANPRM performs a useful vetting function. In the preliminary analysis, DOE assumed without evidence that the size of the retailer markup on the additional cost of high-efficiency appliances would depend on the retailer's variable cost associated with carrying the high-efficiency appliances. In response to critical comments, DOE acknowledged that it had no data on retailers' actual markup practices. However, it reasoned that in competitive markets, retailers are unlikely to be able to charge markups on high-efficiency appliances that increase retail profit margins. For this reason, DOE declined to change the way it calculates retail markups but explicitly asked for public comment on retailers' actual markup practices (DOE, 2010, 59,509).

As the DOE example illustrates, an ANPRM can be valuable for analytical purposes when it offers some preliminary analysis for comment; DOE’s ANPRMs for energy efficiency regulations do not actually include a proposed regulation. An agency can accomplish similar goals with a public request for information.

On December 1, 2006, the Department of Labor (DOL) published a request for information (RFI) in the Federal Register (DOL, 2006) seeking comments about experiences with the department's administration of the Family and Medical Leave Act (FMLA) regulations. In response, the DOL received more than 15,000 comments from workers, employers, academics, health care professionals, and many other parties. In June 2007, the DOL published a report summarizing the comments received (DOL, 2007), many of which affected the proposed rule and accompanying RIA published in the Federal Register on February 11, 2008 (DOL, 2008).

Many of the comments received helped the DOL identify a significant problem that motivated revision of the regulations: portions of the policy were vague and likely to cause confusion. For example, the set of ailments that could be classified as a “serious health condition” was vague and open to significant interpretation. A “serious” illness was defined as one that required continuing treatment by a health care provider (two or more visits, or one visit with a continuing treatment regimen) and resulted in a period of incapacity of more than 3 consecutive days. Employers commented that a serious cold or flu technically fit this definition, even though the policy was not intended to cover such ailments. Employees and labor groups, on the other hand, commented that this criterion was a clear test that served its intended purpose. While the DOL did not identify an alternative definition that would satisfy all parties, it revised the definition to clarify which types of conditions are covered (DOL, 2008, 7886–7887).

In addition, the RFI revealed that individuals were confused about whether an employee's previous years of employment would count toward the 12-month eligibility requirement if the employee had a break in service of several years. The National Partnership for Women & Families argued that arbitrary time limits on how long employees could have a break in service would disproportionately affect women who stay at home to raise children for several years before returning to work. On the other hand, employers argued that there would be a high administrative burden if all previous service had to be counted with no limit on the number of years between employment periods. As a result of these and many similar comments, the DOL clarified the rule to state that while the 12 months of employment need not be consecutive, employment prior to a continuous break of 5 years or more need not be counted.

The DOL also requested information that would help it better estimate the costs and benefits of FMLA. The department asked specifically for information about the impact of FMLA on employee morale, productivity, and employee turnover costs, as well as the relative burden FMLA places on small businesses versus large ones. Overall, comments indicated that many of the benefits of FMLA resulted from retaining employees with highly valued human capital and reducing long-run health care costs. The Center for WorkLife Law estimated that productivity losses due to "presenteeism" (employees attending work when they are seriously ill) were significantly higher than productivity losses due to absenteeism. Employees with severe and contagious illnesses reduce the productivity of their coworkers if they attend work while ill; one of the key benefits of FMLA is therefore that it reduces the negative externalities of presenteeism. Commenters also brought to the DOL's attention various costs associated with unscheduled family leave. Industries involving assembly line manufacturing, seasonal peaks in demand, transportation operations, and public health and safety operations were most susceptible to costs associated with unplanned intermittent leave. Another benefit not immediately obvious to the DOL is that having parents available to care for sick children shortens recovery times and leads to improved health and education outcomes (DOL, 2007, 35629).

Finally, the responses to the RFI provided the basis for the DOL’s estimates about the number of covered and eligible employees and leave taken under FMLA in 2005 (DOL, 2008, 7940, Table 1); the percentage of covered and eligible employees taking leave under the FMLA in 2005 (DOL, 2008, 7941, Table 2); and the estimated number of FMLA eligible workers and FMLA leave usage by industry in 2005 (DOL, 2008, 7944, Table 5). In addition, the comments in response to the RFI indicated that the use of FMLA leave increased substantially in the 5–10 years preceding the RFI (DOL, 2007, 35623).

In these ways, formal requests for information give major stakeholders an opportunity to help identify the problem a regulation is supposed to solve and to voice their concerns about potential unintended consequences. Information stakeholders provide in response to an RFI also helps the agency refine cost and benefit calculations and identify policy alternatives it had not previously considered. Because these stakeholders have better local knowledge about how a proposed policy will affect their lives and businesses, giving them an opportunity to participate in the regulatory conversation before the RIA is completed can improve the quality of the analysis.

5.2 State consultation

When NPRMs indicate that the federal regulatory agency consulted state governments, they often simply assert that consultation took place without explaining what was discussed. Where more description is provided, the consultation often involves alternative solutions that the agency might not have developed on its own.Footnote 14

In 2008, for example, the Forest Service proposed a rule governing management of roadless Forest Service lands in Colorado. The state requested the regulation and was officially named a "cooperating agency" in the rulemaking. The proposed Colorado rule retained many of the requirements of the 2001 national rule governing roadless areas but allowed some exemptions to address state and local concerns, such as control of fire hazards; treatment for insects and diseases; construction of electric and water facilities; ski areas; and development of oil, gas, and coal resources. The RIA then evaluated the effects of the proposed rule against two alternatives: the 2001 roadless rule and Forest Service land management plans, which could take effect if ongoing litigation overturned the 2001 rule. In general, the proposed rule permitted more of these activities than the 2001 rule but fewer than the Forest Service land management plans (USDA, 2008).

A 2008 regulation requiring local gas distribution companies to develop “integrity management” systems for their pipelines provides another example. DOT’s Pipeline and Hazardous Materials Safety Administration created four stakeholder groups (excavation damage, data, risk control, and strategic operations) that included state regulators. The stakeholders considered numerous alternatives, including model state legislation, national guidelines or consensus standards, regulatory guidance documents to be adopted by states, prescriptive federal regulation, performance-based federal regulation, development of new safety technology, and application of existing integrity management regulations for interstate gas transmission pipelines to local gas distribution pipelines. They rejected prescriptive federal regulation and suggested that performance-based regulation combined with guidance would likely be the most effective solution (PHMSA, 2005). The RIA considered five of these alternatives (apply existing transmission regulations to distribution pipelines, prescriptive federal regulation, performance-based federal regulation, model state legislation, federal guidance documents) plus the “no action” baseline. Comparison of the alternatives is largely qualitative (PHMSA, 2008, 9–14). The RIA’s assessment of the pros and cons of the first three alternatives largely mirrors the stakeholder assessments reported in PHMSA (2005).

State consultation can also provide critical research results. The NPRM for a 2009 EPA regulation limiting emissions from marine engines indicates that the EPA consulted with the California Air Resources Board (CARB). The RIA cites a CARB study estimating that 10 percent of diesel particulate matter emissions in the California South Coast Air Basin came from oceangoing vessels in California coastal waters, and that 96 percent of these emissions came from vessels in the offshore shipping lanes leading to the ports of Los Angeles and Long Beach. In a study of 45 ports, EPA estimated that 6.5 million people are exposed to elevated particulate matter emissions from the marine engines subject to the regulation (Environmental Protection Agency, 2009, 2-17 to 2-22). Such information helps determine whether oceangoing vessels create externalities for inland populations.

5.3 Advisory committee

In 2010, the DOL proposed revisions to standards regulating miners' occupational exposure to respirable coal mine dust (DOL, 2010). The department consulted reports issued by the Mine Safety and Health Administration's (MSHA) Respirable Dust Task Group and the Secretary of Labor's Advisory Committee on the Elimination of Pneumoconiosis Among Coal Mine Workers (Dust Advisory Committee). These reports identified several margins along which the existing dust program could be altered to reduce the health risks miners face.

One recommendation of the Dust Advisory Committee was that dust samples should be taken when the mine is operating close to its normal productive capacity, defined as 90 percent of the average production of the last 30 production shifts. MSHA’s Dust Task Group acknowledged limitations to this definition, as it might encourage intentionally reduced production leading up to a sampling period. In response, the MSHA proposed that a normal production shift be redefined as “(1) a production shift during which the amount of material produced by an MMU [mechanized mining unit] is at least equal to the average production recorded for the most recent 30 production shifts or (2) if fewer than 30 shifts of production data are available, a production shift during which the amount of material produced by an MMU is at least equal to the average production recorded by the operator for all of the MMU’s production shifts” (DOL, 2010, 64417).

Another recommendation by the Dust Advisory Committee was to allow a phase-in period for any rule changes to provide the mining community with an opportunity to identify ways to effectively meet these new standards without significantly disrupting production or dramatically increasing costs. Consistent with this recommendation, the MSHA proposed a 24-month phase-in period so that members of the mining industry could develop and implement new controls as well as train employees and management to use new technologies and abide by new standards.

In addition, the Dust Advisory Committee unanimously recommended that Continuous Personal Dust Monitor (CPDM) technology be the primary means to collect the data used to assess compliance with the DOL’s respiratory exposure standards. In response, the MSHA published a request for information (DOL, 2009) regarding the use of CPDM as a primary sampling device. All commenters agreed that requiring the use of a CPDM would increase the protection of miners’ health. The proposed rule required mine operators to adopt this technology, with an 18-month phase-in period to provide firms with the time to adjust to this change.

5.4 OIRA review

Executive Order 12866 requires agencies to identify publicly the substantive changes in "regulatory actions" made during the OIRA review process (Clinton, 1993, 51,742). It is rare, however, for agencies to post in the docket a marked-up copy showing changes made in the RIA for the proposed regulation. One exception in our sample is the Food and Drug Administration's 2010 regulation requiring graphic warning labels on cigarette packages. The redlined version shows two major additions during the OIRA review process that noticeably improved the quality of the RIA. First, the FDA added an extensive uncertainty analysis that presented a range of possible benefits. This analysis acknowledged that the benefits might be zero, because the FDA's "effectiveness estimates are in general not statistically distinguishable from zero" (FDA, 2010, 83). This proved to be a material addition: the D.C. Circuit Court of Appeals quoted this language when it overturned the regulation, holding that the FDA had not presented evidence strong enough to show that the regulation advanced the government's interest in preventing smoking sufficiently to justify the restriction on cigarette manufacturers' First Amendment rights (Reynolds v. FDA, 696 F.3d 1205, 1220). Second, the FDA added a table that calculated the incremental cost-effectiveness of alternative versions of the regulation (FDA, 2010, 103). There is no guarantee that these additions occurred at OIRA's request, but they are consistent with OMB guidance to agencies on RIAs (see OMB, 2003, 11, 38–41).

6 Conclusion

This paper provides the most comprehensive assessment to date of the potential linkage between pre-proposal activity by government agencies and the quality of regulatory impact analysis. In ordered logit regressions, several types of agency effort, such as a prior notice containing preliminary analysis or soliciting information from the public, consultation with state governments, and use of advisory committees, are associated with higher quality RIAs. Public meetings are associated with lower quality analysis. The quality of regulatory analysis is positively correlated with the length of OIRA review time and with the presence of a presidentially appointed administrator rather than an acting administrator. With one exception (advisory committees), these results persist when we use OLS and three-stage least squares estimators. While correlation need not imply causation, the examples we provide demonstrate how pre-proposal efforts have improved the quality of specific RIAs. Thus, our results give some cause for optimism about the likely effects of regulatory process reforms that would require agencies to expend greater analytical effort and give OIRA more resources and authority.

Nevertheless, this paper does not purport to be a complete benefit-cost analysis of any of these regulatory reform proposals. A complete benefit-cost analysis would need to consider at least two additional questions. First, how much would improving the quality of analysis increase the net benefits of regulations? Second, would the increase in net benefits outweigh any costs associated with delays introduced by new procedural requirements? These questions are beyond the scope of this paper, but we have taken the crucial first step by identifying features of the regulatory process that are associated with higher quality analysis.

Our findings have implications beyond the contemporary regulatory reform debate. For readers curious about the effects of the current regulatory process, our analysis suggests that the agencies’ and OIRA’s efforts are not futile or merely symbolic. Many types of activity are positively correlated with the quality of regulatory impact analysis. Moreover, the signs and significance of our control variables are largely consistent with theory and findings in previously published research. The new control variables we employ suggest that regulations with impacts exceeding $1 billion tend to have higher quality analysis and that midnight regulations are associated with lower quality analysis regardless of presidential administration. Most broadly, this paper demonstrates how data from qualitative evaluations of RIAs can be used to generate substantial information about the effects of administrative processes.

Appendix A. Report card questions assessing the quality of regulatory impact analysis

Appendix B. Derivation of the ordered logit model

In an ideal situation, we would estimate the following latent model:

(B1) $$y_{i}^{\ast}=\beta_{0}+\beta_{1}x_{i,1}+\beta_{2}x_{i,2}+\cdots+\beta_{z}x_{i,z}+\varepsilon_{i}.$$

The variable $y_{i}^{\ast}$ is the ideal measure of the true quality and use of regulatory analysis. The subscript $i$ denotes a particular observation in our sample of 71 regulations, and the subscript $z$ indexes the independent variables used in this study and their corresponding coefficients. In reality, $y_{i}^{\ast}$ is unobservable, but we can observe a proxy for it: expert subjective assessments of the quality and use of regulatory analysis for each individual rule.

The expert assessment does not provide $y_{i}^{\ast}$ itself, but rather a censoring of $y_{i}^{\ast}$ into categories based on subjective thresholds. The observed value, $y_{i}$, depends on whether the quality and use of regulatory analysis crosses these thresholds. Using the Report Card data, we estimate the following model:

(B2) $$y_{i}=\beta_{0}+\beta_{1}x_{i,1}+\beta_{2}x_{i,2}+\cdots+\beta_{z}x_{i,z}+\varepsilon_{i}.$$

These scores are ordinal. There are 41 possible values for the dependent variable. The possible values for $y_{i}$ range from no or very poor regulatory analysis quality and use (0) to very thorough regulatory analysis quality and use (40). Thus,

$$\begin{array}{ll}y_{i}=0 & \text{if }y_{i}^{\ast}\leq 0,\\ y_{i}=1 & \text{if }0<y_{i}^{\ast}\leq \mu_{1},\\ y_{i}=2 & \text{if }\mu_{1}<y_{i}^{\ast}\leq \mu_{2},\\ \vdots & \quad \vdots \\ y_{i}=40 & \text{if }\mu_{39}<y_{i}^{\ast}.\end{array}$$

In the actual dataset, the Report Card scores for quality of analysis range from 11 to 33.

The $\mu$ values are unknown threshold parameters estimated along with the coefficients $\beta$. Essentially, the $\mu$ values are the subjective thresholds the expert evaluators have in mind when determining a regulation's Report Card score. That is, if the expert assesses a particular regulation and determines that the true value of $y_{i}^{\ast}$ falls between thresholds $\mu_{30}$ and $\mu_{31}$, that regulation would receive a score of 31. The specific score a regulation receives depends on measurable factors, the independent variables denoted by $x_{i,j}$.Footnote 15

One of the major assumptions of the ordered logit model is that the error term $\varepsilon_{i}$ follows a logistic distribution, with density

$$f(\varepsilon_{i})=\frac{\exp(\varepsilon_{i})}{[1+\exp(\varepsilon_{i})]^{2}}.$$

Thus, the probabilities associated with the observed outcomes can be written as

(B3) $$\text{Prob}[y_{i}=j\mid \boldsymbol{x}_{i}]=\text{Prob}[\varepsilon_{i}\leq \mu_{j}-\boldsymbol{x}_{i}^{\prime}\boldsymbol{\beta}]-\text{Prob}[\varepsilon_{i}\leq \mu_{j-1}-\boldsymbol{x}_{i}^{\prime}\boldsymbol{\beta}],\qquad j=0,1,\ldots,40.$$

The alternative assumption that the error term follows a standard normal distribution would lead us to estimate an ordered probit model. The results of the two estimators are typically similar, but ordered logit coefficients can be given a straightforward quantitative interpretation. The dependent variable in an ordered logit regression is the log of the odds that the score falls above rather than below a designated value; the coefficients estimate how each explanatory variable affects this odds ratio.

Appendix C. Histogram of quality of analysis scores

Footnotes

Note: These questions parallel the topics listed in the Office of Management and Budget's (OMB, 2010) Regulatory Analysis Checklist for agencies; the source below contains a crosswalk table.

Source: Jerry Ellig and Patrick A. McLaughlin. "The Quality and Use of Regulatory Analysis in 2008." Risk Analysis 32 (2012): 869–871.

2 Unlike the APA, however, the executive orders on regulatory analysis are not judicially enforceable. Each one contains a sentence to that effect. See, for example, Clinton (1993), Section 10.

3 Dunlop et al. (2012) offer an explanation of this use of RIAs in the European context.

4 President Reagan was the first president to subject agency regulations to OIRA review. President Clinton and his staff actively directed agencies to issue regulations and continued OIRA oversight.

5 Regulatory Report Card score data can be downloaded from www.mercatus.org/reportcards.

6 Ellig et al.'s (2013) study includes 72 prescriptive regulations and 39 budget regulations. We reclassified one regulation they labeled prescriptive – dealing with abandoned mine lands – as a budget regulation, because it specifies conditions attached to federal grants for the restoration of abandoned mine lands.

7 In a spreadsheet, we compiled an extensive record of exactly where in the text of the NPRM and supporting documents the information used in our coding can be found. Some of the variables required careful reading of the regulation and some subjective interpretation of what type of power the agency has.

8 The statistical theory underlying two- or three-stage ordered logit estimators has not yet been developed, and development of that theory is clearly outside the scope of this paper.

9 Under current practice, an ANPRM need not include a preliminary RIA. However, when we removed ANPRMs from the Any Prior Notice dummy variable, we obtained regression results very similar to those reported in Table 2 for the regulatory process variables.

10 When we remove Acting OIRA Administrator, the coefficient on Obama becomes negative and highly significant, and the R-squared falls. We take this to indicate that it is only the regulations that cleared OIRA review during the interregnum prior to Cass Sunstein’s confirmation – not all Obama administration regulations – that had systematically lower quality analysis.

11 We could not use word counts for the NPRM and RIA separately because agencies sometimes produce the RIA as a separate document, sometimes publish the RIA as a separate section of the NPRM, and sometimes intersperse RIA content at various places in the NPRM as part of the agency’s justification for the regulation.

12 These regressions are omitted for brevity but available from the authors.

13 Omitting the dummy variables for regulations that cleared OIRA prior to June 1 does not alter the signs or significance for the other variables of interest. A pair of combined dummy variables for all midnight and all leftover regulations is not statistically significant.

14 Indeed, ordered logit regressions that use the regulation’s score for analysis of alternatives as the dependent variable find that state consultation is the only pre-proposal process variable that has a statistically significant correlation with the score for analysis of alternatives. These regressions are not reported here to conserve space but are available from the authors.

15 Another assumption we make is that the expert assessment is made in a similar way across all regulations; that is, the error component is similar for all regulations. Ellig and McLaughlin (2012) and Ellig et al. (2013) report the results of inter-rater reliability analysis that demonstrates that the rating system produces consistent results across evaluators.

References

Abbott, Alden F. (1987). The Case Against Federal Statutory and Judicial Deadlines: A Cost-Benefit Appraisal. Administrative Law Review, 39, 171–204.
Angrist, Joshua D. (2001). Estimation of Limited Dependent Variable Models with Dummy Endogenous Regressors: Simple Strategies for Empirical Practice. Journal of Business & Economic Statistics, 19(1), 2–16.
Arbuckle, Donald R. (2011). The Role of Analysis on the 17 Most Political Acres on the Face of the Earth. Risk Analysis, 31(6), 884–892.
Baetschmann, Gregori, Staub, Kevin E. & Winkelmann, Rainer (2015). Consistent Estimation of the Fixed Effects Ordered Logit Model. Journal of the Royal Statistical Society A, 178(3), 685–703.
Bagley, Nicholas & Revesz, Richard L. (2006). Centralized Oversight of the Regulatory State. Columbia Law Review, 106(6), 1260–1329.
Belsley, David A., Kuh, Edwin & Welsch, Roy E. (1980). Regression Diagnostics: Identifying Influential Data and Sources of Collinearity. New York: John Wiley & Sons.
Belzer, Richard B. (2009). Principles for an Effective Regulatory Impact Analysis Challenge Function. PRI Horizons, 10(3) (May), http://www.rbbelzer.com/uploads/7/1/7/4/7174353/belzer_2009_principles_for_an_effective_ria_challenge_function.pdf.
Brito, Jerry & de Rugy, Veronique (2009). Midnight Regulations and Regulatory Review. Administrative Law Review, 61(1), 163–196.
Bubb, Ryan & Warren, Patrick L. (2014). Optimal Agency Bias and Regulatory Review. Journal of Legal Studies, 43(1), 95–135.
Carter, Jimmy (1978). Executive Order 12044: Improving Government Regulations. Federal Register, 43(March 24), 12,661–12,663.
Cecot, Caroline et al. (2008). An Evaluation of the Quality of Impact Assessment in the European Union with Lessons for the US and the EU. Regulation & Governance, 2, 405–424.
Chamberlain, Gary (1980). Analysis of Covariance with Qualitative Data. Review of Economic Studies, 47(1), 225–238.
Clinton, Joshua D. & Lewis, David E. (2008). Expert Opinion, Agency Characteristics, and Agency Preferences. Political Analysis, 16, 3–20.
Clinton, William J. (1993). Executive Order 12866. Federal Register, 58(190), 51,735–51,744.
Cochran, Jay (2001). The Cinderella Constraint: Why Regulations Increase Significantly During Post-Election Quarters. Unpublished paper, Mercatus Center at George Mason University, http://mercatus.org/sites/default/files/publication/The_Cinderella_Constraint(1).pdf.
Croley, Steven (2003). White House Review of Agency Rulemaking: An Empirical Investigation. University of Chicago Law Review, 70(6), 821–885.
DeMuth, Christopher C. & Ginsburg, Douglas H. (1986). White House Review of Agency Rulemaking. Harvard Law Review, 99(5), 1075–1088.
Department of Agriculture (2008). Special Areas; Roadless Area Conservation; Applicability to the National Forests in Colorado; Proposed Rule. Federal Register, 73(144), 43,544–43,565.
Department of Energy (DOE) (2016). Appendix A to Subpart C of Part 430 – Procedures, Interpretations, and Policies for Consideration of New or Revised Energy Conservation Standards for Consumer Products. 10 Code of Federal Regulations, Ch. II, Part 430, Subpart C, Appendix A.
Department of Energy (DOE) (2010). Energy Conservation Program: Energy Conservation Standards for Residential Refrigerators, Refrigerator-Freezers, and Freezers; Proposed Rule. Federal Register, 75, 59,470–59,577.
Department of Labor (DOL) (2010). Lowering Miners' Exposure to Respirable Coal Mine Dust, Including Continuous Personal Dust Monitors; Proposed Rule. Federal Register, 75(201), 64,412–64,506.
Department of Labor (DOL) (2009). Respirable Coal Mine Dust: Continuous Personal Dust Monitor (CPDM); Request for Information. Federal Register, 74(197), 52,708–52,712.
Department of Labor (DOL) (2008). The Family and Medical Leave Act of 1993; Proposed Rule. Federal Register, 73(28), 7876–8001.
Department of Labor (DOL) (2007). Family and Medical Leave Act Regulations: A Report on the Department of Labor's Request for Information; Proposed Rule. Federal Register, 72(124), 35,550–35,638.
Department of Labor (DOL) (2006). Request for Information on the Family and Medical Leave Act of 1993. Federal Register, 71(231), 69,504–69,514.
De Rugy, Veronique & Davies, Antony (2009). Midnight Regulations and the Cinderella Effect. Journal of Socio-Economics, 38(6), 886–890.
Downs, Anthony (1967). Inside Bureaucracy. Boston: Little, Brown.
Dudley, Susan E. (2011). Observations on OIRA's Thirtieth Anniversary. Administrative Law Review, 63, 113–130.
Dunlop, Claire A. et al. (2012). The Many Uses of Regulatory Impact Assessment: A Meta-Analysis of EU and UK Cases. Regulation & Governance, 6(1), 23–45.
Ellig, Jerry & McLaughlin, Patrick A. (2012). The Quality and Use of Regulatory Analysis in 2008. Risk Analysis, 32, 855–880.
Ellig, Jerry, McLaughlin, Patrick A. & Morrall, John F. III (2013). Continuity, Change, and Priorities: The Quality and Use of Regulatory Analysis across U.S. Administrations. Regulation & Governance, 7, 153–173.
Environmental Protection Agency (2009). Draft Regulatory Impact Analysis: Control of Emissions of Air Pollution from Category 3 Diesel Engines (June).
Farrar, Donald E. & Glauber, Robert R. (1967). Multicollinearity in Regression Analysis: The Problem Revisited. Review of Economics and Statistics, 49(1), 92–107.
Farrow, Scott (2006). Evaluating Central Regulatory Institutions with an Application to the US Office of Information and Regulatory Affairs. Paper presented at University of Pennsylvania Law School conference "White House Review of Regulation: Looking Back, Looking Forward" (Dec. 26). https://www.law.upenn.edu/institutes/regulation/conferences/whitehouse.html.
Food and Drug Administration (2010). "Required Warnings for Cigarette Packages and Advertisements: Proposed Rule," redlined pre-proposal draft accompanying memo from Scott Chesemore dated September 12, 2011, in docket at www.regulations.gov.
Fraas, Art (1991). The Role of Economic Analysis in Shaping Regulatory Policy. Law and Contemporary Problems, 54(4), 113–125.
Fraas, Art & Lutter, Randall (2011a). The Challenges of Improving the Economic Analysis of Pending Regulations: The Experience of OMB Circular A-4. Annual Review of Resource Economics, 3(1), 71–85.
Fraas, Art & Lutter, Randall (2011b). On the Economic Analysis of Regulations at Independent Regulatory Commissions. Administrative Law Review, 63, 213–241.
Graham, John D. (2008). Saving Lives through Administrative Law and Economics. University of Pennsylvania Law Review, 157, 395–540.
Greene, William H. (2003). Econometric Analysis (5th ed.). Upper Saddle River, NJ: Pearson Education Inc.
Haeder, Simon F. & Yackee, Susan Webb (2015). Influence and Administrative Process: Lobbying the U.S. President's Office of Management and Budget. American Political Science Review, 109(3), 507–522.
Hahn, Robert W. (2000). Reviving Regulatory Reform. Washington, DC: American Enterprise Institute Press.
Hahn, Robert W. & Dudley, Patrick (2007). How Well Does the Government Do Cost-Benefit Analysis? Review of Environmental Economics and Policy, 1(2), 192–211.
Hahn, Robert W. & Litan, Robert (2005). Counting Regulatory Benefits and Costs: Lessons for the U.S. and Europe. Journal of International Economic Law, 8(2), 473–508.
Hahn, Robert W. & Sunstein, Cass (2002). A New Executive Order for Improving Federal Regulation? Deeper and Wider Cost-Benefit Analysis. University of Pennsylvania Law Review, 150(5), 1489–1552.
Hahn, Robert W. et al. (2000). Assessing Regulatory Impact Analyses: The Failure of Agencies to Comply with Executive Order 12,866. Harvard Journal of Law and Public Policy, 23(3), 859–871.
Harrington, Winston, Heinzerling, Lisa & Morgenstern, Richard (2009). Reforming Regulatory Impact Analysis. Washington, DC: Resources for the Future Press.
Horn, Murray J. & Shepsle, Kenneth A. (1989). Administrative Process and Organizational Form as Legislative Responses to Agency Costs. Virginia Law Review, 75(2), 499–508.
House Committee on Oversight and Government Reform (2015). Unfunded Mandates Information and Transparency Act of 2015, Report No. 14-001. 114th Congress, 1st Session (Feb. 2).
House Judiciary Committee (2013). Regulatory Accountability Act of 2013, Report 113-237. 113th Congress, 1st Session (Sept. 28).
Howell, William G. & Mayer, Kenneth R. (2005). The Last One Hundred Days. Presidential Studies Quarterly, 35(3), 533–553.
Kagan, Elena (2001). Presidential Administration. Harvard Law Review, 114, 2245–2385.
Katzen, Sally (2011). OIRA at Thirty: Reflections and Recommendations. Administrative Law Review, 63, 103–112.
Keohane, Nathaniel O. (2009). The Technocratic and Democratic Functions of the CAIR Regulatory Analysis. In Harrington, Winston et al. (Eds.), Reforming Regulatory Impact Analysis (pp. 33–55). Washington, DC: Resources for the Future Press.
Lutter, Randall (2012). The Role of Retrospective Analysis and Review in Regulatory Policy. Working Paper No. 12-14, Mercatus Center at George Mason University, April, http://mercatus.org/sites/default/files/Lutter_Retrospective_v1-2.pdf.
Maddala, G. S. (1983). Limited-Dependent and Qualitative Variables in Econometrics. Cambridge: Cambridge University Press.
McCubbins, Mathew D. (1985). The Legislative Design of Regulatory Structure. American Journal of Political Science, 29(4), 721–748.
McCubbins, Mathew D., Noll, Roger G. & Weingast, Barry R. (1987). Administrative Procedures as Instruments of Political Control. Journal of Law, Economics, & Organization, 3(2), 243–277.
McGarity, Thomas O. (1991). Reinventing Rationality. Cambridge: Cambridge University Press.
McLaughlin, Patrick A. (2011). The Consequences of Midnight Regulations and Other Surges in Regulatory Activity. Public Choice, 147(3), 395–412.
McLaughlin, Patrick A. & Ellig, Jerry (2011). Does OIRA Review Improve the Quality of Regulatory Impact Analysis? Evidence from the Final Year of the Bush II Administration. Administrative Law Review, 63, 179–202.
von Mises, Ludwig (1983). Bureaucracy. Cedar Falls, IA: Center for Futures Education.
Morgenstern, Richard D. (1997). Economic Analysis at EPA: Assessing Regulatory Impact. Washington, DC: Resources for the Future Press.
Niskanen, William A. Jr. (1994). Bureaucracy and Public Economics. Brookfield, VT: Edward Elgar.
Obama, Barack (2011). Executive Order 13563. Federal Register, 76(January 21), 3821–3823.
Office of Management and Budget (OMB) (2010). Agency Checklist: Regulatory Impact Analysis. Nov. 3. http://www.whitehouse.gov/sites/default/files/omb/inforeg/regpol/RIA_Checklist.pdf.
Office of Management and Budget (OMB) (2003). Circular A-4. September 17. http://www.whitehouse.gov/omb/circulars_a004_a-4.
Pipeline and Hazardous Materials Safety Administration (2008). Preliminary Regulatory Impact Analysis: Pipeline Safety: Integrity Management Program for Gas Distribution Pipelines (June 17).
Pipeline and Hazardous Materials Safety Administration (2005). Assuring the Integrity of Gas Distribution Pipeline Systems: A Report to Congress (May).
Posner, Eric (2001). Controlling Agencies with Cost-Benefit Analysis: A Positive Political Theory Perspective. University of Chicago Law Review, 68(4), 1137–1199.
Posner, Eric (2003). Transfer Regulations and Cost-Effectiveness Analysis. Duke Law Journal, 53(3), 1067–1110.
President's Jobs Council (2011). Road Map to Renewal: 2011 Year-End Report.
Renda, Andrea (2006). Impact Assessment in the EU: The State of the Art and the Art of the State. Brussels: Centre for European Policy Studies.
Reynolds v. FDA, 696 F.3d 1205 (D.C. Cir. 2012).
Shapiro, Stuart & Morrall, John F. III (2012). The Triumph of Regulatory Politics: Benefit-Cost Analysis and Political Salience. Regulation & Governance, 6, 189–206.
Shapiro, Stuart & Morrall, John F. III (2013). Does Haste Make Waste? How Long Does It Take to Do a Good Regulatory Impact Analysis? Administration & Society, 20(1).
Sinden, Amy (2015). Formality and Informality in Cost-Benefit Analysis. Utah Law Review, 2015(1), 93–172.
Theil, Henri (1971). Principles of Econometrics. New York: John Wiley & Sons.
Tozzi, Jim (2011). OIRA's Formative Years: The Historical Record of Centralized Regulatory Review Preceding OIRA's Founding. Administrative Law Review, 63, 37–70.
US Government Accountability Office (GAO) (2007). Re-examining Regulations: Opportunities Exist to Improve Effectiveness and Transparency of Retrospective Reviews. GAO-07-791.
US Government Accountability Office (GAO) (2003). Rulemaking: OMB's Role in Reviews of Agencies' Draft Rules and the Transparency of Those Reviews. GAO-03-929.
Wagner, Wendy E. (2009). The CAIR RIA: Advocacy Dressed Up as Policy Analysis. In Harrington, Winston et al. (Eds.), Reforming Regulatory Impact Analysis. Washington, DC: Resources for the Future Press.
Williams, Richard (2008). The Influence of Regulatory Economists in Federal Health and Safety Agencies. Working Paper No. 08-15, Mercatus Center at George Mason University, Arlington, VA, July, http://mercatus.org/sites/default/files/publication/WP0815_Regulatory%20Economists.pdf.
Wilson, James Q. (1989). Bureaucracy: What Government Agencies Do and Why They Do It. New York: Basic Books.
Table 1 Summary statistics, dependent variable and regulatory process variables, $N=71$.

Table 2 Most process variables are correlated with quality of analysis.

Table 3 Equations predicting endogenous agency process variables for 3SLS.