
Variability in determining sepsis time zero and bundle compliance rates for the Centers for Medicare and Medicaid Services SEP-1 measure

Published online by Cambridge University Press:  22 June 2018

Chanu Rhee*
Affiliation:
Department of Population Medicine, Harvard Medical School/Harvard Pilgrim Health Care Institute, Boston, Massachusetts Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
Sarah R. Brown
Affiliation:
Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
Travis M. Jones
Affiliation:
Duke Center for Antimicrobial Stewardship and Infection Prevention, Duke University School of Medicine, Durham, North Carolina
Cara O’Brien
Affiliation:
Department of Medicine, Duke University Medical Center, Durham, North Carolina
Anupam Pande
Affiliation:
Department of Medicine, Washington University School of Medicine, St Louis, Missouri
Yasir Hamad
Affiliation:
Department of Medicine, Washington University School of Medicine, St Louis, Missouri
Amy L. Bulger
Affiliation:
Department of Quality and Safety, Brigham and Women’s Hospital, Boston, Massachusetts
Kathleen A. Tobin
Affiliation:
Lawrence Center for Quality and Safety, Massachusetts General Hospital, Boston, Massachusetts
Anthony F. Massaro
Affiliation:
Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
Deverick J. Anderson
Affiliation:
Duke Center for Antimicrobial Stewardship and Infection Prevention, Duke University School of Medicine, Durham, North Carolina
David K. Warren
Affiliation:
Department of Medicine, Washington University School of Medicine, St Louis, Missouri
Michael Klompas
Affiliation:
Department of Population Medicine, Harvard Medical School/Harvard Pilgrim Health Care Institute, Boston, Massachusetts Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts
for the CDC Prevention Epicenters Program
Affiliation:
Department of Population Medicine, Harvard Medical School/Harvard Pilgrim Health Care Institute, Boston, Massachusetts Department of Medicine, Brigham and Women’s Hospital, Boston, Massachusetts Duke Center for Antimicrobial Stewardship and Infection Prevention, Duke University School of Medicine, Durham, North Carolina Department of Medicine, Duke University Medical Center, Durham, North Carolina Department of Medicine, Washington University School of Medicine, St Louis, Missouri Department of Quality and Safety, Brigham and Women’s Hospital, Boston, Massachusetts Lawrence Center for Quality and Safety, Massachusetts General Hospital, Boston, Massachusetts
*Author for correspondence: Chanu Rhee, MD, MPH, Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, 401 Park Drive, Suite 401, Boston, MA 02215. E-mail: crhee@bwh.harvard.edu

Abstract

We compared sepsis “time zero” and Centers for Medicare and Medicaid Services (CMS) SEP-1 pass rates among 3 abstractors in 3 hospitals. Abstractors agreed on time zero in 29 of 80 (36%) cases. Perceived pass rates ranged from 9 of 80 cases (11%) to 19 of 80 cases (23%). Variability in time zero and perceived pass rates limits the utility of SEP-1 for measuring quality.

Type
Concise Communication
Copyright
© 2018 by The Society for Healthcare Epidemiology of America. All rights reserved. 

In October 2015, the Centers for Medicare and Medicaid Services (CMS) implemented the “SEP-1” sepsis core measure requiring US hospitals to report compliance with 3- and 6-hour bundles of care for patients with severe sepsis or septic shock.1 Hospitals are now devoting substantial resources to measuring and improving SEP-1 adherence, which requires that all bundle components be met in order to “pass.”2,3

SEP-1 bundle adherence is measured relative to sepsis “time zero,” defined as the first point at which there is documentation of suspected or confirmed infection, ≥2 systemic inflammatory response syndrome criteria, and ≥1 organ dysfunction criterion within a 6-hour window.1 Time zero is also triggered if a clinician explicitly documents severe sepsis or septic shock. Given this complex definition, different abstractors may reach different conclusions about time zero, which, in turn, could lead to different impressions of whether cases passed or failed SEP-1.4,5
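To make this definition concrete, the sketch below derives a time zero from timestamped chart events. It is a simplified, hypothetical model, not the CMS abstraction tool: the function and variable names are ours, each criterion is reduced to a single timestamp, and the alternative trigger (explicit provider documentation of severe sepsis or septic shock) and the chronic organ dysfunction exclusion are omitted.

```python
from datetime import datetime, timedelta
from itertools import combinations

def find_time_zero(infection_times, sirs_times, organ_dysfunction_times,
                   window=timedelta(hours=6)):
    """Return the earliest candidate time zero, or None if criteria never co-occur.

    Severe sepsis criteria are met when documentation of suspected infection,
    >=2 SIRS criteria, and >=1 organ dysfunction all fall within a 6-hour
    window; time zero is the time the last of those signs is noted.
    """
    candidates = []
    for infection in infection_times:
        for organ in organ_dysfunction_times:
            for sirs_pair in combinations(sirs_times, 2):
                signs = [infection, organ, *sirs_pair]
                if max(signs) - min(signs) <= window:
                    candidates.append(max(signs))  # last qualifying sign in the window
    return min(candidates) if candidates else None

# Example: infection documented at 10:00, SIRS criteria at 09:30 and 10:15,
# lactate >2 mmol/L at 11:00 -> time zero is 11:00, the last qualifying sign.
print(find_time_zero(
    infection_times=[datetime(2016, 9, 1, 10, 0)],
    sirs_times=[datetime(2016, 9, 1, 9, 30), datetime(2016, 9, 1, 10, 15)],
    organ_dysfunction_times=[datetime(2016, 9, 1, 11, 0)],
))
```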

We compared time-zero determinations and SEP-1 pass rates among hospital abstractors and clinicians in 3 US hospitals. We also examined clinical factors associated with lower agreement rates.

Methods

We randomly selected 80 SEP-1 cases discharged between July 1 and December 31, 2016, at 3 US tertiary-care hospitals: Brigham and Women’s Hospital in Boston, Massachusetts; Barnes-Jewish Hospital in St Louis, Missouri; and Duke University Hospital in Durham, North Carolina. Each case was reviewed by the hospital’s official SEP-1 abstractor and by 2 clinicians at the same hospital (either internal medicine physicians or clinical pharmacists) for all SEP-1 components, including time zero (Table 1) and whether the case passed. Abstractors were blinded to one another’s determinations. Clinician reviewers underwent 1 hour of training on SEP-1 abstraction by the lead investigator (C.R.) to encourage standardization using the CMS specification in place during the study period.1 We applied CMS exclusion criteria prior to selecting cases for review (ie, outside hospital transfer, severe sepsis criteria not met on chart review, goals-of-care limitations, and antibiotic administration more than 24 hours before time zero).1

Table 1 SEP-1 Criteria for Severe Sepsis “Time Zero”

Note. aPTT, activated partial thromboplastin time.

a Time zero is the time at which the last sign of severe sepsis (documentation of suspected infection, ≥2 systemic inflammatory response syndrome criteria, and organ dysfunction) within that 6-hour window is noted. Alternatively, severe sepsis criteria are met if provider documentation of suspected or confirmed severe sepsis or septic shock is present.

b Excludes organ dysfunction explicitly documented as chronic.

We compared agreement on time zero and on SEP-1 pass-versus-fail determinations among the 3 abstractors at each site. Time zero was considered concordant between abstractors if their determinations were within ±1 minute of each other. We also conducted sensitivity analyses allowing for agreement if time-zero determinations were within 1 hour or 3 hours of each other.
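A minimal sketch of this concordance check, assuming each abstractor’s determination is stored as a Python datetime (the values below are invented) and reading “within ±1 minute of each other” as a maximum pairwise difference of 1 minute:

```python
from datetime import datetime, timedelta

def concordant(times, tolerance):
    """True if all abstractors' time-zero determinations fall within the tolerance."""
    return max(times) - min(times) <= tolerance

abstractor_times = [
    datetime(2016, 9, 1, 14, 2),   # hospital quality abstractor (hypothetical)
    datetime(2016, 9, 1, 14, 2),   # clinician abstractor 1
    datetime(2016, 9, 1, 15, 40),  # clinician abstractor 2
]

for label, tol in [("primary (1 minute)", timedelta(minutes=1)),
                   ("sensitivity (1 hour)", timedelta(hours=1)),
                   ("sensitivity (3 hours)", timedelta(hours=3))]:
    print(label, concordant(abstractor_times, tol))
```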

We calculated interobserver variability on whether cases passed or failed SEP-1 using the Fleiss κ for 3-abstractor comparisons and the Cohen κ for 2-abstractor comparisons.6 We considered κ>0.75 to be strong agreement, κ=0.40–0.75 to be moderate agreement, and κ<0.40 to be poor agreement.7
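For illustration only, the sketch below computes both statistics with scikit-learn and statsmodels on a simulated pass/fail matrix; the data are random, not the study’s, and the library choice is ours.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score                              # 2-rater comparisons
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa   # 3-rater comparisons

# Simulated pass/fail calls (1 = pass, 0 = fail) for 80 cases by 3 abstractors.
rng = np.random.default_rng(0)
calls = rng.integers(0, 2, size=(80, 3))    # rows = cases, columns = abstractors

# Fleiss kappa across all 3 abstractors
table, _ = aggregate_raters(calls)          # per-case counts of each category
print("Fleiss kappa:", fleiss_kappa(table, method="fleiss"))

# Cohen kappa for a 2-abstractor comparison (eg, the 2 clinician abstractors)
print("Cohen kappa:", cohen_kappa_score(calls[:, 1], calls[:, 2]))

# Interpretation per the thresholds above: >0.75 strong, 0.40-0.75 moderate, <0.40 poor.
```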

We conducted a multivariate analysis to identify factors associated with disagreement on time zero. Covariates included age >65 years, sex, hospital length of stay >7 days, sepsis time zero occurring after hospital admission (per the hospital abstractor), and which organ dysfunction criterion triggered time zero per the hospital abstractor (ie, hypotension, lactate >2.0 mmol/L, provider documentation of severe sepsis/septic shock, or other organ dysfunction).
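The results report odds ratios, consistent with a logistic regression; a sketch of a model of this kind on a simulated case-level table is shown below (the variable names and data are hypothetical, not the study’s dataset).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated case-level data; 1 = characteristic present, 0 = absent.
rng = np.random.default_rng(1)
n = 80
df = pd.DataFrame({
    "disagreement":   rng.integers(0, 2, n),  # >=1 abstractor disagreed on time zero
    "age_over_65":    rng.integers(0, 2, n),
    "male":           rng.integers(0, 2, n),
    "los_over_7d":    rng.integers(0, 2, n),
    "hospital_onset": rng.integers(0, 2, n),  # time zero after hospital admission
    "trigger": rng.choice(["hypotension", "lactate", "provider_doc", "other"], n),
})

model = smf.logit(
    "disagreement ~ age_over_65 + male + los_over_7d + hospital_onset + C(trigger)",
    data=df,
).fit(disp=False)

# Odds ratios with 95% confidence intervals
estimates = pd.concat([np.exp(model.params), np.exp(model.conf_int())], axis=1)
estimates.columns = ["OR", "2.5%", "97.5%"]
print(estimates)
```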

Statistical analyses were performed using SAS version 9.3 software (SAS Institute, Cary, NC) and an online software package for interrater reliability calculations.8 The study was approved by the institutional review boards at Harvard Pilgrim Health Care Institute, Partners HealthCare, Washington University School of Medicine, and Duke University Health System.

Results

Of the 80 study cases, all 3 abstractors agreed on time zero in 29 cases (36.3%). Agreement rates by hospital are shown in Table 2. Among the 51 cases for which there was a discrepancy, the median time-zero difference between clinician abstractors and hospital abstractors was 40 minutes (interquartile range [IQR], 0–70 minutes; range, 0 minutes to 11.6 days). Agreement on time zero was better but still marginal when the window for concordance was expanded to 1 hour (47 of 80 cases, 58.8%) or 3 hours (54 of 80 cases, 67.5%).

Table 2 Agreement in Determining Sepsis Time Zero and SEP-1 Compliance by Hospital a

Note. IQR, interquartile range.

a Data for each hospital are presented in no specific order.

b Represents the median difference in time zero determined by both clinician abstractors compared to each hospital’s official quality abstractor. Only cases where there was at least 1 disagreement were included in the analysis.

c The timing of sepsis onset was determined using the official hospital abstractor’s time zero.

d Interrater reliability calculations included all 3 abstractors at each hospital.

Hospital abstractors identified time zero as occurring in the emergency department or on the day of admission in 55 of 80 cases (68.8%). Agreement among the 3 abstractors was better in these cases (25 of 55 cases, 45.5%) than in cases in which time zero occurred after hospital admission (4 of 25 cases, 16%; P=.01). On multivariate analysis, hospital onset of sepsis was independently associated with at least 1 abstractor disagreeing on time zero (odds ratio [OR], 8.2; 95% confidence interval [CI], 1.6–40.7; P=.01), whereas age, sex, length of stay >7 days, and organ dysfunction criteria were not.

Overall, hospital abstractors identified 19 of 80 cases (23.8%) as passing SEP-1. Among the clinician abstractors, 9 cases (11.3%) passed when using the abstractor with the strictest assessments at each hospital, and 15 cases (18.8%) passed when using the clinician abstractor with the higher pass rate at each hospital. Interrater reliability among the 3 abstractors for determining SEP-1 compliance was poor (Fleiss κ, 0.39).

When agreement was defined as at least 1 clinician abstractor identifying the same time zero as the hospital abstractor, agreement increased to 56 of 80 cases (70.0%), and interrater reliability for determining SEP-1 compliance was better but still only moderate (Cohen κ, 0.67). When examining agreement only between the 2 clinician abstractors at each hospital, agreement on time zero occurred in 34 of 80 cases (42.5%), and interrater reliability for SEP-1 compliance was poor (Cohen κ, 0.28).

Discussion

We found poor agreement between abstractors for identifying sepsis time zero and whether cases passed the CMS SEP-1 measure. Agreement improved to only a moderate rating when requiring just 1 of 2 clinician abstractors to agree with a hospital’s official abstractor. Sepsis onset after hospital admission was associated with lower agreement rates compared to sepsis present on admission.

The SEP-1 measure relies on determining sepsis time zero to calculate 3- and 6-hour bundle compliance rates, but this determination is subject to several potential sources of error and subjectivity. Abstractors need to assess many different parts of the chart (eg, vital signs, laboratory tests, clinical notes, and medication administration records) to determine time zero and overall SEP-1 compliance. Abstractors must exercise judgment to decide whether clinicians suspect infection, whether organ dysfunction is present, and whether organ dysfunction is new or chronic. Reviewers may also need to read dozens of progress notes, including multiple versions of the same note that have been copied and pasted, to find the first documentation of suspected infection, particularly when sepsis occurs after hospital admission.

More broadly, sepsis is an elusive entity to define and identify. There is no gold standard for sepsis, and even expert clinicians using common definitions often disagree on whether sepsis is present or absent.9,10

Our study has several limitations. Clinicians may be less adept at abstracting data for quality measures than trained hospital abstractors. We focused on agreement for sepsis time zero and overall SEP-1 pass rates, but variability in abstracting individual bundle components could also contribute to disagreements in perceived pass rates. Our study was conducted in academic hospitals and may not be generalizable to community hospitals, where sepsis cases may differ in their level of complexity. Finally, the CMS specification for SEP-1 continues to change over time, and we were unable to evaluate the impact of recent changes on interrater reliability.

In conclusion, there is considerable variability among abstractors in determining severe sepsis time zero and SEP-1 compliance rates. These findings underscore the importance of ensuring adequate standardization of quality measures, especially complex ones like SEP-1 that require substantial judgment to implement.

Acknowledgments

The content is solely the responsibility of the authors and does not necessarily represent the official views of the Centers for Disease Control and Prevention or the Agency for Healthcare Research and Quality.

Financial support

This study was funded by the Prevention Epicenters Program of the Centers for Disease Control and Prevention (grant no. U54CK000484). C.R. received support from the Agency for Healthcare Research and Quality (grant no. K08HS025008).

Potential conflicts of interest

None of the authors have any conflicts to disclose.

References

1. Centers for Medicare and Medicaid Services. QualityNet—Inpatient Hospitals Specifications Manual. QualityNet website. https://www.qualitynet.org. Accessed March 19, 2018.
2. Venkatesh AK, Slesinger T, Whittle J, et al. Preliminary performance on the new CMS Sepsis-1 national quality measure: early insights from the emergency quality network (E-QUAL). Ann Emerg Med 2018;71:10–15.
3. Barbash IJ, Rak KJ, Kuza CC, Kahn JM. Hospital perceptions of Medicare’s sepsis quality reporting initiative. J Hosp Med 2017;12:963–968.
4. Klompas M, Rhee C. The CMS sepsis mandate: right disease, wrong measure. Ann Intern Med 2016;165:517–518.
5. Aaronson EL, Filbin MR, Brown DF, Tobin K, Mort EA. New mandated Centers for Medicare and Medicaid Services requirements for sepsis reporting: caution from the field. J Emerg Med 2017;52:109–116.
6. McHugh ML. Interrater reliability: the kappa statistic. Biochem Med (Zagreb) 2012;22:276–282.
7. Stevens JP, Kachniarz B, Wright SB, et al. When policy gets it right: variability in US hospitals’ diagnosis of ventilator-associated pneumonia. Crit Care Med 2014;42:497–503.
8. ReCal3: Reliability for 3+ coders. dfreelon.org website. http://dfreelon.org/utils/recalfront/recal3/. Accessed March 19, 2018.
9. Rhee C, Kadri SS, Danner RL, et al. Diagnosing sepsis is subjective and highly variable: a survey of intensivists using case vignettes. Crit Care 2016;20:89.
10. Angus DC, Seymour CW, Coopersmith CM, et al. A framework for the development and interpretation of different sepsis definitions and clinical criteria. Crit Care Med 2016;44:e113–e121.