Consumer demand for information, including data regarding healthcare-associated infections (HAIs), has increased over the past decade. Publicly available websites, such as the Centers for Medicare and Medicaid Services (CMS) Hospital Compare website,[1] present standardized incidence ratios (SIRs) calculated using indirect standardization to evaluate HAIs and other metrics of hospital quality. Some argue that indirect standardization is suboptimal for interhospital comparison because it fails to account for case mix and may be distorted by variations in population size,[2] and that direct standardization should be used for comparing hospitals, especially when ranking them.[3] Others argue that direct standardization is inappropriate for interhospital comparisons because of small outcome frequencies.[4] Thus, there is no consensus on whether indirect or direct standardization is preferable for HAI risk adjustment, and the best approach for standardizing and presenting HAI data remains controversial.[2-5] Beyond a better understanding of the proper epidemiologic uses of each method, an evaluation of their practical implications is needed. Specifically, in an era of financial penalties based on HAI rate-based “ranks,” the effect of each method on the resulting hospital ranking is unknown.
The objective of this study was to assess whether employing direct standardization instead of indirect standardization changed hospital ranks for a single HAI.
METHODS
We examined publicly reported central-line–associated bloodstream infection (CLABSI) data from all intensive care units (ICUs) in 45 acute-care hospitals in Maryland from 2012 to 2014. Only hospitals that reported at least 1 CLABSI per year were included in the analysis: 29 hospitals in 2012, 28 hospitals in 2013, and 28 hospitals in 2014. For each year, we ranked hospital CLABSI performance using both methods, and we compared changes between the rankings.
Standardized incidence ratios for CLABSI using indirect standardization were calculated as posted on Hospital Compare, by comparing the reported number of CLABSIs with the number predicted from 2006–2008 National Healthcare Safety Network (NHSN) data for each hospital.[6] More specifically, the individual hospital provided the central-line days (CLDs; ie, the weights), and the NHSN standard population provided the ICU-specific CLABSI rates based on the national experience.
For direct standardization, the individual hospitals provided the CLABSI rates and the standard population provided the standard CLDs by type of ICU (ie, the weights).[7] Standard population estimates and CLDs by type of ICU were obtained from the 2006–2008 NHSN report.[8] For each hospital, we multiplied the observed ICU-specific CLABSI rate for each unit by the standard CLDs for that unit type, summed these values across the hospital, and divided by the sum of the standard CLDs across units. In summary, to generate the indirectly standardized rate ratio, we used CLDs (the weights) from the individual Maryland acute-care hospitals (ie, the study population) and CLABSI rates from the NHSN data (ie, the standard population). To generate the directly standardized rate, we used CLABSI rates from the individual Maryland acute-care hospitals (ie, the study population) and CLDs (the weights) from the NHSN data (ie, the standard population).
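The two calculations described above can be sketched as follows. All ICU types, rates, and CLD values below are hypothetical and chosen for illustration only; they are not the actual Maryland or NHSN figures.

```python
# Sketch of indirect vs. direct standardization for CLABSI data.
# All numbers are hypothetical, for illustration only.

# Study hospital: per ICU type, observed CLABSIs and central-line days (CLDs).
hospital = {
    "medical_icu":  {"clabsis": 3, "cld": 2000},
    "surgical_icu": {"clabsis": 1, "cld": 1500},
}

# Standard (NHSN-style) population: per ICU type, a CLABSI rate per 1,000 CLDs
# and total CLDs (the weights used for direct standardization).
standard = {
    "medical_icu":  {"rate_per_1000": 1.8, "cld": 500_000},
    "surgical_icu": {"rate_per_1000": 1.2, "cld": 300_000},
}

def indirect_sir(hospital, standard):
    """SIR = observed CLABSIs / CLABSIs predicted by applying the standard
    population's ICU-specific rates to the hospital's own CLDs (the weights)."""
    observed = sum(u["clabsis"] for u in hospital.values())
    predicted = sum(
        standard[icu]["rate_per_1000"] / 1000 * u["cld"]
        for icu, u in hospital.items()
    )
    return observed / predicted

def direct_rate(hospital, standard):
    """Directly standardized rate per 1,000 CLDs: the hospital's own
    ICU-specific rates, weighted by the standard population's CLDs."""
    weighted = sum(
        (u["clabsis"] / u["cld"]) * standard[icu]["cld"]
        for icu, u in hospital.items()
    )
    total_standard_cld = sum(standard[icu]["cld"] for icu in hospital)
    return weighted / total_standard_cld * 1000

print(f"SIR (indirect): {indirect_sir(hospital, standard):.2f}")         # 0.74
print(f"Direct rate per 1,000 CLDs: {direct_rate(hospital, standard):.2f}")  # 1.19
```

Note how the roles reverse: indirect standardization takes the weights from the study hospital and the rates from the standard population, whereas direct standardization takes the rates from the study hospital and the weights from the standard population.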
Hospitals were ranked from lowest SIR (ie, the best performing) to highest SIR (ie, worst performing) using both indirect and direct standardization. The changes between ranks derived from both methods were used to generate slope graphs for each year to assess differences between rankings. We also evaluated shifts in the observed quartile for each hospital based on the ranks derived from each method, with interest in the composition of hospitals with highest CLABSI rates in the bottom quartile.
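The ranking comparison can be sketched as follows; the hospital labels and metric values are invented for illustration and do not correspond to any hospital in the study.

```python
# Sketch of the rank comparison: order hospitals by each metric
# (lowest value = rank 1 = best performance), then measure how far
# each hospital moved. All values are invented.
indirect = {"A": 0.4, "B": 1.3, "C": 0.9, "D": 1.1}
direct   = {"A": 0.6, "B": 1.0, "C": 1.2, "D": 0.8}

def ranks(metric):
    """Map each hospital to its rank, 1 = lowest (best) value."""
    ordered = sorted(metric, key=metric.get)
    return {hosp: i + 1 for i, hosp in enumerate(ordered)}

r_ind, r_dir = ranks(indirect), ranks(direct)

# Positive shift = worse rank under direct standardization.
shifts = {h: r_dir[h] - r_ind[h] for h in indirect}
print(shifts)
```

In this toy example, hospital C drops 2 positions when direct standardization is used, illustrating the kind of rank movement tallied in the slope graphs; the quartile analysis is the same idea applied to rank quartiles rather than individual positions.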

RESULTS
Table 1 shows the value and associated rank using indirect and direct standardization, along with the reported number of CLABSIs, for each hospital in each study year. In 2012, only 6 of 29 hospitals (21%) reported ≥5 CLABSIs; in 2013, only 6 of 28 hospitals (21%) did so; and in 2014, only 5 of 28 hospitals (18%) did so. The direction and magnitude of the change in rank between indirect and direct standardization varied. As indicated by the thick lines in Figure 1, 10 hospital ranks (34.5%) changed by ≥3 positions in 2012; 7 (25%) changed by ≥3 positions in 2013; and 10 (36%) changed by ≥3 positions in 2014. In addition, 6 hospitals (21%) changed quartiles in 2012 when direct standardization was employed instead of indirect standardization; similarly, 6 hospitals (21%) changed quartiles in each of 2013 and 2014. In 2012, 2 hospitals moved from the third quartile to the fourth, and 2 moved from the fourth to the third. In 2013, 1 hospital moved into the fourth quartile and 1 moved out. In 2014, 1 hospital moved out of the fourth quartile and 1 moved in.

FIGURE 1 Direction and magnitude of Maryland central-line–associated bloodstream infection (CLABSI) rank change by standardization method (2012–2014). *Hospital 1=lowest CLABSI rate=best performance. **Hospital numbers do not carry over between years (ie, Hospital 1 in 2012 is not necessarily Hospital 1 in 2013 or 2014).
TABLE 1 Maryland CLABSI Rates by Standardization Method and Year

DISCUSSION
When direct standardization methods were used to adjust CLABSI rates of Maryland hospitals instead of indirect methods (as currently used by the CMS), many hospitals moved ≥3 rank positions in all study years. Moving from one quartile to another also occurred frequently. In all study years, some hospitals moved in and out of the fourth quartile. This finding is particularly noteworthy because hospitals in the fourth quartile are subjected to financial penalties for poor performance.
Major limitations are associated with both standardization methods. Comparing hospitals using indirect standardization is not recommended[2,3,5] because each hospital’s indirectly standardized ratio is based on its own set of weights (ie, its central-line utilization). Critics of indirect standardization therefore argue that each hospital’s CLABSI SIR should be compared only to the NHSN US benchmark rates (the standard population).[8] In addition to the methodological complexities and limitations associated with indirect standardization, many consumers do not understand how to correctly interpret SIRs.[9] Nevertheless, indirectly standardized metrics are frequently used to make interhospital comparisons, including when hospitals are ranked for reimbursement. Direct standardization is appropriate for comparing rates in ≥2 groups; however, it suffers from instability[10] and is highly susceptible to random variation when event counts are small (<20),[4] as was the case for the CLABSIs reported in most Maryland hospitals.
Despite the ongoing controversy surrounding HAI metrics, the CMS recently began cutting payments to hospitals in the worst-performing quartile based on indirectly standardized, risk-adjusted quality measures, including CLABSI. Given these financial repercussions, it is even more critical that the methods used to generate HAI metrics, and the way this information is presented to the public, be carefully considered. Interhospital comparisons, made by the public using tools such as Hospital Compare and used for reimbursement purposes, are meaningless if risk adjustment for CLABSIs and other HAIs is not done correctly. Moreover, the number of CLABSIs reported at most hospitals in Maryland was very small. A metric based on such infrequent events may not paint a comprehensive picture of institutional performance, and the appropriateness of continuing to use CLABSI as a hospital quality metric should be carefully considered.
This example using Maryland CLABSI data confirms that indirect and direct standardization methods generate different results, with payment implications. Each method has its own limitations, and the evidence presented here is not strong enough to promote a change in methods. However, both methods standardize CLABSI rates only by type of ICU, leaving them prone to residual confounding and illustrating the need for improved risk adjustment for specific HAIs. When publicly reporting HAI data, and when selecting metrics for value-based purchasing, standardized data should be presented and interpreted with caution.
ACKNOWLEDGMENTS
The authors thank Theressa Lee and the Maryland Department of Health and Mental Hygiene for providing access to the data used for this study.
Financial support: L.M.O. is the recipient of a Banting Postdoctoral Fellowship administered by the Government of Canada. A.D.H. received grant support from the National Institutes of Health (K24 award). This study was not funded by industry and no manufacturer played a role in the gathering or preparation of data or in the writing of the manuscript.
Potential conflicts of interest: All authors report no conflicts of interest relevant to this article.