(See the article by Jackson et al4 on pages 1019–1024.)
Central vascular catheters (CVCs) are commonly used in hospitalized patients, and infections of these devices, known as central line-associated bloodstream infections (CLABSIs), can cause significant morbidity and mortality. A variety of factors can increase a patient’s risk of developing a CLABSI, including procedural and device-related factors (eg, anatomic placement, CVC type, use of sterile and aseptic insertion and maintenance practices, and CVC duration). In 2006, Pronovost et al1 illustrated that much of this risk can be mitigated by following standardized bundles of evidence-based practices folded into a culture of accountability and safety.
In 2011, the Centers for Medicare and Medicaid Services (CMS) began to require acute-care facilities to report performance on key healthcare-associated infection (HAI) outcome metrics, with CLABSIs within intensive care units (ICUs) being the first HAI designated for reporting. This was soon followed by other important HAIs, including CLABSIs in units outside of ICUs. Initially, hospitals were required only to report performance, but over the next few years facility-specific reimbursement was tied to the level of performance, with hospitals that had the lowest infection rates receiving the highest monetary incentives.
Publicly reported infection performance is defined using surveillance definitions from the Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) and is expressed as a standardized infection ratio (SIR), the ratio of the observed number of infection events to the predicted number of events.2 Historically for CLABSI, pooled mean infection rates (adjusted by unit type) and the total number of device days were used to calculate the predicted number of events. In 2017, new CLABSI logistic regression risk models were introduced, but these weight only unit and facility type, bed size, and medical school affiliation to determine the predicted number of events. Patient-specific risk factors are not included in the risk models currently in use.
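As a minimal illustration of the calculation just described (the notation below is ours, not taken from NHSN documentation), the SIR and the historical, pre-2017 predicted count can be written as:

```latex
\mathrm{SIR} = \frac{O}{E},
\qquad
E_{\text{pre-2017}} = \sum_{u} \lambda_{u} \, \frac{d_{u}}{1000}
```

where O is the observed number of CLABSIs, E is the predicted number, λ_u is the pooled mean CLABSI rate for unit type u (infections per 1,000 central-line days), and d_u is the facility’s central-line days in that unit type. An SIR of 1.0 indicates performance equal to prediction, and values below 1.0 indicate fewer infections than predicted.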
The impact of public reporting of HAI performance has been substantial and largely positive, likely contributing to many important changes surrounding infection prevention, including increased resources, improved awareness among frontline healthcare personnel of the impact and potential preventability of HAIs, and accountability of operational leaders for reduction in HAI-related patient harm. Financial incentives and penalties related to this performance are common and carry increasing consequences. Most importantly, rates of several HAIs have declined,3 with a number of facilities reporting sustained reductions and even elimination of these events. Despite these important gains, the current reporting metrics do not fully adjust for important risk differences, resulting in an uneven playing field for interfacility comparisons. One could argue that many, if not all, of the “low” and “middle-hanging fruits” pertaining to CLABSI prevention have been culled. As such, we may be approaching a floor, where variations in performance no longer reflect largely preventable events but instead reflect variations in immitigable patient risk factors between facilities.
In this context, we examine the important study by Jackson et al4 that investigates the effect of adding patient comorbidities to the current NHSN CLABSI risk model. Using ICU-related CLABSI data from 22 hospitals, the authors included patient case-mix variables, determined from International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) diagnostic codes, in a modified risk model that was then compared to one similar to the pre-2017 NHSN risk model. The selected comorbid conditions were based on earlier work that used expert opinion to classify the 35 comorbid conditions in the Charlson and Elixhauser comorbidity indices.5 For both models, predicted probabilities of CLABSI development were estimated and used to generate each model’s C statistic, a measure of predictive ability. The authors also assessed the change in risk-adjusted rates and facility rankings within the cohort when the enhanced model was used.
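The analytic comparison described above can be outlined as follows. This is an illustrative sketch only, not the authors’ code or the NHSN model specification: the file name, column names, and comorbidity flags are hypothetical, and an in-sample C statistic is used purely for simplicity.

```python
# Illustrative sketch: compare a unit-type-only CLABSI risk model with a
# case-mix-enhanced model by C statistic (area under the ROC curve).
# All data and column names below are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Hypothetical patient-level cohort: one row per ICU admission with a central line.
df = pd.read_csv("icu_clabsi_cohort.csv")
y = df["clabsi"]  # 1 = CLABSI during the admission, 0 = no CLABSI

# Baseline model: ICU type only (roughly analogous to the pre-2017 NHSN approach).
X_base = pd.get_dummies(df[["icu_type"]], drop_first=True)

# Enhanced model: ICU type plus ICD-coded comorbidity indicators (case mix).
comorbidity_cols = [c for c in df.columns if c.startswith("cm_")]  # eg, cm_chf, cm_ckd
X_enhanced = pd.concat([X_base, df[comorbidity_cols]], axis=1)

def c_statistic(X, y):
    """Fit a logistic regression model and return its in-sample C statistic (AUC)."""
    model = LogisticRegression(max_iter=1000).fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

print("ICU-type-only model C statistic:    ", round(c_statistic(X_base, y), 3))
print("Case-mix-enhanced model C statistic:", round(c_statistic(X_enhanced, y), 3))
```

A higher C statistic for the enhanced model would indicate better discrimination between patients who did and did not develop a CLABSI, which is the comparison the authors report.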
The proportion of patients in the study who acquired a CLABSI was a strikingly low 0.2%. The case-mix model had significantly better predictive ability for CLABSI than the ICU-type-only model. Even in this small cohort, hospital rankings were affected by the use of the case-mix-enhanced model. Overall, 10 hospitals changed rank, some with relatively pronounced changes in the calculated SIR (between 11% and 16% among those moving more than 1 ranking spot). For example, one facility improved from an SIR of 1.53 to 1.29, resulting in a rise of 5 ranking spots in this limited cohort. If this facility were compared within the much larger general CMS population, the ranking change would be larger, and if its performance were moved from “worse than the national benchmark” to “no different than the national benchmark,” designations often used by external groups to allot financial incentives, that difference could have substantial consequences.
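For the facility cited above, the relative change in the reported SIR works out to roughly 16%, consistent with the upper end of the range noted among hospitals that moved more than 1 ranking spot:

```latex
\frac{1.53 - 1.29}{1.53} \approx 0.157
```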
A major challenge facing risk adjustment of publicly reported HAI data is that any targeted risk variables must be defined in a standardized fashion and readily collected from a variety of patient medical record systems. Any risk variable that relies upon manual chart abstraction is unlikely to be captured uniformly and effectively. Jackson et al note these requirements in their selection of coded patient comorbidities that are already submitted to CMS for other uses. While using diagnostic codes for HAI surveillance does have noted limitations,6 using them to capture patient risk factors allows for an efficient use of resources and synergizes HAI risk adjustment with other required processes.
As the authors note, there are some limitations to their study. The hospitals were predominantly large urban centers, and half were academically affiliated. This may raise questions regarding generalizability, but these facilities may also be disproportionately affected by publicly reported quality metrics.7 The study period occurred prior to the transition to ICD-10-CM coding, although one could reasonably assume that the effect noted with ICD-9-CM codes would not be markedly altered by this switch. Differences in coded diagnoses may reflect the adequacy of documentation at the different facilities rather than true population differences between hospitals. Finally, although the addition of the comorbid factors improved the CLABSI risk model, the C statistic is still moderate and suggests that stronger predictive models may be possible.
Nonetheless, this study, as well as a similar study by the same authors examining surgical site infections,8 adds a great deal to the discussion surrounding improved risk adjustment of publicly reported HAI data. With the growing consequences tied to HAI performance, the need to level the playing field in an equitable manner is imperative. Validation of reported data must be enhanced to assess for potential gaming (including underdiagnosis) and to ensure the correct application of surveillance definitions. Finally, given the unquestionable role of patient-specific factors that can lead to nonpreventable HAI events, the CDC and CMS must expand the current reporting models to better adjust for these important factors.
This path we tread is delicate. “Our patients are sicker” is a common refrain heard by infection prevention personnel working on HAI reduction efforts. Pushing colleagues past this concern to focus on standardized practices surrounding device insertion and maintenance has resulted in dramatic decreases in CLABSI rates across the country. We must acknowledge, however, that as these events have been largely prevented, those that remain may be of a very different character, no longer due to lapses in infection prevention practices but possibly due to unalterable patient risk factors. The work of Jackson et al provides an important blueprint for enhanced risk adjustment that would better level the playing field for HAI public reporting.
ACKNOWLEDGMENTS
Financial support: Only institutional funds provided support for this article.
Potential conflicts of interest: The author reports no conflicts of interest relevant to this article.