
Accuracy and reliability of electronic versus CDC surveillance criteria for non-ventilator hospital-acquired pneumonia

Published online by Cambridge University Press:  10 December 2019

Haiyan Ramirez Batlle
Affiliation: Department of Medicine, Brigham and Women’s Hospital, Boston, MA, USA

Michael Klompas
Affiliations: Department of Medicine, Brigham and Women’s Hospital, Boston, MA, USA; Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA

Author for correspondence: Michael Klompas, E-mail: klompas@bwh.harvard.edu

Abstract

Nonventilator hospital-acquired pneumonia (NV-HAP) is one of the most common healthcare-associated infections, but most hospitals do not track it. We created a pilot electronic definition for NV-HAP and compared its accuracy to Centers for Disease Control and Prevention (CDC) criteria. Kappa values for the electronic definition and CDC criteria versus “true” pneumonia were similar: 0.40 and 0.47, respectively.

Type: Concise Communication
Copyright: © 2019 by The Society for Healthcare Epidemiology of America. All rights reserved.

Nonventilator hospital-acquired pneumonia (NV-HAP) is one of the most common and morbid healthcare-associated infections, but most hospitals do not routinely conduct surveillance for NV-HAP.1–3 A key reason for this is that the Centers for Disease Control and Prevention (CDC) traditional surveillance criteria for pneumonia are complicated, subjective, onerous to abstract, and correspond poorly with histological pneumonia.4–10 Administrative codes do not provide a credible alternative because they are neither sensitive nor specific relative to clinical or surveillance criteria.11,12

In light of these shortcomings, we explored the feasibility and accuracy of detecting NV-HAP using objective clinical data routinely found in electronic health record systems, on the rationale that this might facilitate automated electronic surveillance.13 We created a candidate electronic surveillance definition modeled after the CDC pneumonia (PNU) and ventilator-associated event (VAE) definitions: worsening oxygenation, fever or leukocytosis, acquisition of chest imaging or pulmonary cultures, and ≥3 days of new antibiotics. We previously reported fair correlation between this definition and clinically diagnosed pneumonia.13 We now extend this analysis by comparing case finding and interrater reliability for the candidate electronic definition versus CDC PNU criteria and expert classification of pneumonia.

Methods

We randomly selected 120 charts of adult patients hospitalized ≥3 days in Brigham and Women’s Hospital, Boston, between July 2015 and June 2017. We selected patients with ≥2 days of worsening oxygenation on or after hospital day 3, defined as a drop in the daily minimum oxygen saturation from ≥95% in a patient on ambient air to <95% on ambient air, or initiation of supplemental oxygen, or escalation of supplemental oxygen. We then electronically flagged patients who met the candidate electronic NV-HAP surveillance definition on the basis of (1) ≥2 days of worsening oxygenation as defined above, (2) fever (≥38°C) or abnormal white blood cell count (<4,000 or ≥12,000 cells/mm³), (3) performance of chest imaging per procedure codes for chest x-rays or computed tomography, and/or culture of sputum or bronchoalveolar lavage fluid, and (4) ≥3 days of new antibiotics, defined as agents that had not been administered in the preceding 2 days. All criteria were required to be present on the first or second day of worsening oxygenation.
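To make the four criteria concrete, the following is a minimal sketch of how such a definition could be evaluated against per-patient daily records. The record layout and field names (e.g., min_spo2, o2_lpm, new_abx_course_days) are hypothetical illustrations for exposition, not the implementation used in this study.

```python
# Hypothetical sketch of the candidate electronic NV-HAP definition.
# Each patient is a list of dicts, one per hospital day (index 0 = day 1).
# Field names are illustrative assumptions, not the authors' actual schema.

def worsening_oxygenation(day, prev_day):
    """Worsening oxygenation: daily minimum SpO2 drops from >=95% on
    ambient air to <95% on ambient air, or supplemental O2 is started
    or escalated relative to the prior day."""
    drop_on_air = (prev_day["o2_lpm"] == 0 and day["o2_lpm"] == 0
                   and prev_day["min_spo2"] >= 95 and day["min_spo2"] < 95)
    new_or_escalated_o2 = day["o2_lpm"] > prev_day["o2_lpm"]
    return drop_on_air or new_or_escalated_o2

def meets_nvhap_definition(days):
    """All four criteria must be present on the first or second day of a
    >=2-day run of worsening oxygenation starting on or after hospital
    day 3 (list index 2)."""
    for i in range(2, len(days) - 1):
        # Criterion 1: two consecutive days of worsening oxygenation.
        if not all(worsening_oxygenation(days[j], days[j - 1])
                   for j in (i, i + 1)):
            continue
        window = days[i:i + 2]  # first or second day of deterioration
        # Criterion 2: fever or abnormal white blood cell count.
        fever_or_wbc = any(d["tmax_c"] >= 38.0 or d["wbc"] < 4000
                           or d["wbc"] >= 12000 for d in window)
        # Criterion 3: chest imaging or pulmonary culture obtained.
        imaging_or_culture = any(d["chest_imaging"] or d["pulm_culture"]
                                 for d in window)
        # Criterion 4: start of >=3 days of antibiotics not given in the
        # preceding 2 days (assumed precomputed per day).
        new_abx = any(d["new_abx_course_days"] >= 3 for d in window)
        if fever_or_wbc and imaging_or_culture and new_abx:
            return True
    return False
```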

A physician blinded to the chart selection criteria and the electronic definition reviewed all charts for (1) clinical documentation of suspected pneumonia by treating clinicians (“clinical pneumonia”), (2) CDC PNU criteria for hospital-acquired pneumonia (“CDC criteria”), and (3) the likelihood of “true” pneumonia, taking into account patients’ presenting signs, radiographic findings including chest computed tomography when available, culture results, duration of treatment, clinical response to antibiotics, and potential alternative diagnoses. Patients were deemed to have “true” pneumonia if they had compatible signs and symptoms, persistent radiographic infiltrates, treatment with at least 5 days of antibiotics, clinical response to appropriate antibiotics, and no alternative diagnosis to explain their pulmonary syndrome. A second physician independently reviewed 10% of charts to confirm consistency and accuracy.

We evaluated the sensitivity, specificity, positive predictive value, and κ coefficient of the electronic definition versus the treating team’s clinical diagnoses, CDC criteria, and presumed “true” pneumonias. We also assessed the accuracy of clinicians’ working diagnoses and CDC criteria versus one another and presumed “true” pneumonias to provide context for interpreting the comparison between the electronic definition versus CDC criteria. Lastly, we collated reasons for false positives and negatives for each definition. All calculations were performed using JMP Pro 14 software (SAS Institute, Cary, NC).
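For reference, the accuracy and agreement statistics reported below follow standard formulas and can be computed from paired binary classifications, as in the sketch below. This Python version is illustrative only; the study's own calculations were performed in JMP.

```python
# Standard 2x2 accuracy statistics and Cohen's kappa for two paired
# binary raters (e.g., electronic definition vs. "true" pneumonia).

def two_by_two(test, ref):
    """Return (TP, FP, FN, TN) counts for paired boolean labels."""
    tp = sum(t and r for t, r in zip(test, ref))
    fp = sum(t and not r for t, r in zip(test, ref))
    fn = sum(not t and r for t, r in zip(test, ref))
    tn = sum(not t and not r for t, r in zip(test, ref))
    return tp, fp, fn, tn

def accuracy_stats(test, ref):
    tp, fp, fn, tn = two_by_two(test, ref)
    n = tp + fp + fn + tn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)  # positive predictive value
    # Cohen's kappa: observed agreement corrected for chance agreement.
    p_obs = (tp + tn) / n
    p_exp = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    kappa = (p_obs - p_exp) / (1 - p_exp)
    return sensitivity, specificity, ppv, kappa
```

For example, accuracy_stats(electronic_flags, true_pneumonia) would return the four values reported for that comparison in Table 1.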

Results

Electronic surveillance criteria flagged 42 of 120 cases (35%), CDC criteria flagged 29 of 120 cases (24%), clinicians diagnosed pneumonia in 45 of 120 patients (38%), and 28 of 120 patients (23%) were deemed to have a “true” pneumonia. Sensitivity, specificity, positive predictive value, and κ values for each definition relative to the other definitions are presented in Table 1.

Table 1. Comparisons of the Candidate Electronic Surveillance Definition for Pneumonia Versus CDC Criteria, Treating Teams’ Clinical Diagnoses, and “True” Pneumonia

Note. CDC, Centers for Disease Control and Prevention; SE, standard error.

Agreement between the electronic definition and the other definitions was fair to moderate (κ range, 0.27–0.40). The electronic definition had a sensitivity and positive predictive value of 56% and 60% relative to treating teams’ working diagnoses (κ, 0.33), 59% and 41% relative to CDC criteria (κ, 0.27), and 71% and 48% relative to “true” pneumonia (κ, 0.40). These values were inferior to the agreement between treating teams’ diagnoses and CDC criteria (sensitivity, 86%; positive predictive value, 56%; κ, 0.54) but similar to that between CDC criteria and “true” pneumonia (sensitivity, 61%; positive predictive value, 59%; κ, 0.47).

Sources of false positives and negatives for each definition versus “true” pneumonia are presented in Table 2. The 3 most common reasons for false positives with the electronic definition were pulmonary edema (6 cases, 27%), periprocedural care (5 cases, 23%), and atelectasis (4 cases, 18%). Of the 8 cases missed by the electronic definition, 5 (63%) were missed because ≥1 criterion occurred 2–5 days after the onset of respiratory decline, 2 (25%) were immunocompromised hosts without fever or abnormal white blood cell counts, and 1 (13%) was a patient already receiving antibiotics who therefore did not start new agents.

Table 2. Reasons for False Positives and False Negatives for the Electronic Definition and CDC Definitions Versus “True” Pneumonias

False positives for CDC criteria relative to “true” pneumonia included pulmonary edema (3 cases, 25%), transient aspiration pneumonitis (2 cases, 17%), and periprocedural care (2 cases, 17%). All 11 “true” pneumonias missed by CDC criteria (false negatives) occurred in immunocompromised patients who lacked documentation of the clinical criteria required for PNU1 and lacked the microbiological or histological confirmation of infection required for PNU2 and PNU3.

Discussion

We proposed a candidate electronic definition for NV-HAP and assessed its accuracy relative to clinical teams’ diagnoses, CDC criteria, and an expert reviewer’s retrospective determination of presumed “true” pneumonia. The electronic definition proved slightly more sensitive but less specific than traditional CDC criteria relative to presumed “true” pneumonia. We found similar levels of agreement, however, between the electronic definition versus “true” pneumonia and CDC criteria versus “true” pneumonia. These findings suggest that a potentially automatable, objective, electronic definition for NV-HAP may provide a credible complement or alternative to traditional CDC criteria to track NV-HAP rates.

Our study has important limitations. Determining truth in pneumonia surveillance is challenging. We tried to identify “true” pneumonias by parsing clinical trajectories, serial imaging (including computed tomography), microbiology, and response to antibiotics in addition to presenting signs, but we could not verify whether our determinations were accurate. Notably, no consensus reference standard for hospital-acquired pneumonia has been established, and even quantitative bronchoalveolar lavage cultures and histological specimens are subject to intersample and interobserver variability, which precludes these studies from serving as perfect reference standards.9,10 We have no basis to claim that the proposed electronic definition is more accurate than either clinical diagnoses or CDC criteria, but it has the advantage of being suitable for automated surveillance using routine electronic health record data, which in turn could increase the objectivity, reproducibility, and efficiency of surveillance. Other limitations include the small sample size and the single-center setting, which limit the generalizability of our findings.

In conclusion, a candidate electronic definition for NV-HAP yielded test characteristics and agreement values similar to those of CDC criteria for identifying possible cases of “true” pneumonia. Future studies are needed to prospectively validate the proposed definition, to further assess its correlation with clinical events, and to determine the utility of such a definition for informing comprehensive surveillance and prevention programs for NV-HAP.

Acknowledgments

The authors would like to thank Caroline McKenna for identifying the patients to include in this study and for applying the electronic NV-HAP definition.

Financial support

This study was funded by the US Centers for Disease Control and Prevention.

Conflicts of interest

All authors report no conflicts of interest relevant to this article.

References

1. Magill SS, O’Leary E, Janelle SJ, et al. Changes in prevalence of health care-associated infections in US hospitals. N Engl J Med 2018;379:1732–1744.
2. Corrado RE, Lee D, Lucero DE, Varma JK, Vora NM. Burden of adult community-acquired, health-care-associated, hospital-acquired, and ventilator-associated pneumonia: New York City, 2010 to 2014. Chest 2017;152:930–942.
3. Micek ST, Chew B, Hampton N, Kollef MH. A case-control study assessing the impact of nonventilated hospital-acquired pneumonia on patient outcomes. Chest 2016;150:1008–1014.
4. Horan TC, Andrus M, Dudeck MA. CDC/NHSN surveillance definition of health care-associated infection and criteria for specific types of infections in the acute care setting. Am J Infect Control 2008;36:309–332.
5. Roulson J, Benbow EW, Hasleton PS. Discrepancies between clinical and autopsy diagnosis and the value of post mortem histology: a meta-analysis and review. Histopathology 2005;47:551–559.
6. Tejerina E, Esteban A, Fernandez-Segoviano P, et al. Accuracy of clinical definitions of ventilator-associated pneumonia: comparison with autopsy findings. J Crit Care 2010;25:62–68.
7. Schurink CA, Van Nieuwenhoven CA, Jacobs JA, et al. Clinical pulmonary infection score for ventilator-associated pneumonia: accuracy and inter-observer variability. Intensive Care Med 2004;30:217–224.
8. Klompas M. Interobserver variability in ventilator-associated pneumonia surveillance. Am J Infect Control 2010;38:237–239.
9. Stevens JP, Kachniarz B, Wright SB, et al. When policy gets it right: variability in US hospitals’ diagnosis of ventilator-associated pneumonia. Crit Care Med 2014;42:497–503.
10. Kerlin MP, Trick WE, Anderson DJ, et al. Interrater reliability of surveillance for ventilator-associated events and pneumonia. Infect Control Hosp Epidemiol 2017;38:172–178.
11. van Mourik MS, van Duijn PJ, Moons KG, Bonten MJ, Lee GM. Accuracy of administrative data for surveillance of healthcare-associated infections: a systematic review. BMJ Open 2015;5:e008424.
12. Wolfensberger A, Meier AH, Kuster SP, Mehra T, Meier MT, Sax H. Should International Classification of Diseases codes be used to survey hospital-acquired pneumonia? J Hosp Infect 2018;99:81–84.
13. Ji W, McKenna C, Ochoa A, et al. Development and assessment of objective surveillance definitions for nonventilator hospital-acquired pneumonia. JAMA Netw Open 2019;2:e1913674.