The collection and publication of hospital-acquired infection (HAI) data, as part of an effort to publicly report hospital quality data, are key elements of the Affordable Care Act and other recent healthcare reform legislation in the United States.1,2 These data are made available to the public via the Centers for Medicare and Medicaid Services (CMS) Hospital Compare website3 and a number of state-specific websites. The reasons for making these data accessible to the public include improving hospital quality, increasing transparency in the healthcare system, and giving patients the information they need to make informed decisions about their healthcare.
HAI data may be more difficult to interpret than other hospital quality metrics (eg, patient satisfaction) because these data involve infection rates and risk adjustment. The objective of this study was to determine whether the general public could compare hospitals using HAI data as they are currently presented on CMS Hospital Compare.
METHODS
We conducted a cross-sectional survey among patients admitted to the University of Maryland Medical Center, a 760-bed tertiary referral hospital in Baltimore, Maryland, on 37 days between June 17, 2014, and September 22, 2014. Patients were selected at random from a list of those admitted within the prior 24 hours to mitigate selection bias against short admissions. Units where patients were unlikely to be capable of completing a survey (eg, intensive care units) or where conducting the survey would be disruptive to care (eg, obstetrics, psychiatry) were not included. Patients were not approached to participate if they were unavailable 2 or more times (eg, not in room), were discharged before enrollment, were physically or mentally unable to participate, or were on airborne or enhanced contact precautions. Patients were excluded if they could not speak or read English. Patients were not provided an incentive for completing the survey. This study was approved by the University of Maryland Institutional Review Board.
The survey was conducted using an iPad in a waterproof case that was sanitized with disinfectant wipes (Oxivir) after each interview. After eligibility was assessed and participants were enrolled, they were presented with several screens of written instructions, followed by introductory information on catheter-associated urinary tract infections (CAUTI) (see online supplement) and a series of 12 questions that required review of contrived data presented in the same format used by the CMS Hospital Compare website for CAUTI (see Figure 1 for example questions). Participants were also asked a series of questions about demographic characteristics, degree of healthcare experience (eg, whether the participant was a healthcare worker or had been hospitalized frequently), and prior experience with online tools for comparing hospitals. The survey was developed by a panel of experts in hospital epidemiology (A.D.H. and D.J.M.) and survey methodology (J.P.B.) and was pretested with both nonpatients and patients. The survey instrument is available in its entirety as an online supplement.
The 12 hospital comparison questions were divided into 4 sets of 3 questions. Each set of 3 questions, referred to below as a “task,” assessed understanding of a specific HAI data presentation format and set of characteristics of the data. All 12 questions presented data about 2 hospitals, referred to as Hospital 1 and Hospital 2. The questions had the same structure and response options, which asked the participant to determine whether (1) Hospital 1 performed better than Hospital 2, (2) the hospitals performed the same, or (3) Hospital 2 performed better than Hospital 1 (Figure 1). In addition, (4) “Not enough information” and (5) “Don’t know” were available as response options. Please refer to the online supplement for the scoring of each question. Questions were displayed in the same order for all participants. Figures 1 and 2 provide more information about the structure of these questions and data presentation formats, and each task is summarized below. The tables used by CMS Hospital Compare and shown in these figures use standardized infection ratios (SIRs), which are calculated by dividing the infection rate in the hospital by a predicted infection rate based on national data.3,4 The written description (column 5 in Figure 2) is based on the 95% CI of the SIR; for example, if the lower bound of the 95% CI is greater than 1, the hospital is described as “worse than the US National Benchmark” for CAUTI. SIRs of less than 1 indicate fewer reported infections than predicted; lower SIRs indicate better performance.
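As a minimal illustration of this calculation (with hypothetical numbers, not drawn from CMS Hospital Compare or our survey), the SIR can equivalently be computed by dividing the number of reported infections by the number of infections predicted from national baseline data:

```python
# Hypothetical illustration of an SIR calculation; the numbers are invented
# for this sketch and do not come from CMS Hospital Compare or the survey.
reported_infections = 8       # CAUTIs observed at the hospital
predicted_infections = 12.5   # CAUTIs predicted from the national baseline

# Dividing the hospital's infection rate by the predicted rate is equivalent
# to dividing observed by predicted counts, because both rates share the
# same denominator (catheter-days).
sir = reported_infections / predicted_infections
print(f"SIR = {sir:.2f}")     # 0.64: fewer infections than predicted (better)
```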
Task 1 (written SIR description [ie, “Evaluation”] only)
Questions 1–3 assessed interpretation of the default HAI data presentation table format on CMS Hospital Compare (as of January 2015). This consisted of only a written description of the hospital’s CAUTI performance as compared with a national benchmark. For example, Hospital 1 might be “Better than the US National Benchmark” and Hospital 2 might be “No different than the US National Benchmark”; in this case, the correct answer is that Hospital 1 performs better than Hospital 2.
Task 2 (written SIR description with numbers)
Questions 4–6 assessed interpretation of another HAI data presentation table format also available on CMS Hospital Compare under “View More Details.” This table included the same written description of performance against a national benchmark as in Task 1, as well as the number of reported infections, the number of catheter-days, the predicted number of infections, and the SIR.
Task 3 (identical SIR descriptions with numbers)
Questions 7–9 used the same HAI data presentation table format as Task 2. In this task, the number of infections differed substantially between the 2 hospitals while the denominators were similar, resulting in large differences in infection rates and SIRs between the hospitals. The SIR 95% CIs of both hospitals were on the same side of 1, so that answering correctly required interpreting the numerical data rather than relying on the “Evaluation” column.
Task 4 (numbers only, without written SIR description)
Questions 10–12 used an HAI data presentation table format based on the “details format” but with the “Evaluation” column removed. This format does not appear on CMS Hospital Compare; it was created for the purposes of this study.
We designed these tasks a priori to ascend in level of difficulty. Task 1 was the most straightforward of the 4 tasks, as it required only comparing the written descriptions of the SIRs (column 5 in Figure 2) for the 2 hospitals. Task 2 could also be completed correctly using the written SIR descriptions alone or by interpreting the numerical CAUTI data. Task 3 necessitated using the numerical data to correctly compare the 2 hospitals because the “Evaluation” cells were the same for both hospitals despite large differences in the numerical data. For this task, simply comparing the number of reported infections (column 1) would result in correct comparisons because the catheter-days (column 2 [denominator for infection rate]) and predicted number of infections (column 3) were similar. Task 4 forced participants to interpret numerical data because the “Evaluation” column was omitted. Additionally, in this task the catheter-days (column 2) and the predicted number of infections (column 3) differed substantially between the 2 hospitals. Thus, correct comparisons required calculating an infection rate for each hospital (dividing column 1 by column 2) and comparing the 2 rates, comparing the number of infections (column 1) with the predicted number of infections (column 3), or using the SIR (column 4) to approximate this comparison.
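To make the arithmetic behind Task 4 concrete, the following sketch (with hypothetical numbers, not the survey’s actual values) shows the comparison a participant would need to perform, either by computing infection rates or by comparing observed with predicted infections via the SIR:

```python
# Sketch of a Task 4-style comparison; the figures are hypothetical and are
# not taken from the survey tables or CMS Hospital Compare.
def infection_rate(infections, catheter_days):
    """CAUTIs per 1,000 catheter-days."""
    return infections / catheter_days * 1000

hospital_1 = {"infections": 6, "catheter_days": 4000, "predicted": 5.0}
hospital_2 = {"infections": 9, "catheter_days": 12000, "predicted": 15.0}

for name, h in (("Hospital 1", hospital_1), ("Hospital 2", hospital_2)):
    rate = infection_rate(h["infections"], h["catheter_days"])
    sir = h["infections"] / h["predicted"]
    print(f"{name}: {rate:.2f} CAUTIs per 1,000 catheter-days, SIR = {sir:.2f}")

# Hospital 1: 1.50 per 1,000 catheter-days, SIR = 1.20
# Hospital 2: 0.75 per 1,000 catheter-days, SIR = 0.60
# Comparing raw counts (6 vs 9) favors Hospital 1, but both the rate and the
# SIR show that Hospital 2 performs better.
```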
We report the mean percentage correct for each task with 95% confidence intervals. Each task contains 3 questions, so a mean percentage correct of 67% for a task corresponds to 2 of 3 correct answers on average for that task. We also report descriptive statistics for the demographic characteristics of our participants and their healthcare experience. Additionally, we report the mean percentage correct for each task in subgroups of participants with at least 67% correct for Task 1, college-educated participants, and participants with some healthcare experience; our goal was to explore whether these factors affected participant scores. Finally, we report the responses to questions on the past usage and perceived utility of websites for comparing hospitals. Data were analyzed using Stata, version 12.1 (StataCorp).
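The analysis itself was performed in Stata; the sketch below (in Python, with hypothetical scores and a normal-approximation confidence interval, which may differ from the method used in the actual analysis) only illustrates the summary statistic being reported:

```python
# Illustration of the reported summary statistic; hypothetical scores and a
# normal-approximation 95% CI, not the authors' Stata code.
import math
import statistics

def mean_pct_correct(task_scores, questions_per_task=3):
    """Mean percentage correct for a task, with an approximate 95% CI."""
    pcts = [100 * s / questions_per_task for s in task_scores]
    mean = statistics.mean(pcts)
    se = statistics.stdev(pcts) / math.sqrt(len(pcts))
    return mean, (mean - 1.96 * se, mean + 1.96 * se)

# Hypothetical data: number correct (0-3) on one task for 8 participants.
scores = [3, 2, 2, 1, 3, 2, 0, 2]
mean, (lower, upper) = mean_pct_correct(scores)
print(f"Mean, {mean:.0f}% correct (95% CI, {lower:.0f}%-{upper:.0f}%)")
```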
RESULTS
A total of 110 patients completed the survey between June 17, 2014, and September 22, 2014; an additional 75 patients were approached but declined to participate. See Table 1 for details on participant demographic characteristics and degree of healthcare experience.
[Table 1 appears here. NOTE. Data are no. (%) of participants unless otherwise specified. Percentages may not total 100% owing to rounding.]
The mean (95% CI) percentage correct was 72% (66%–79%) for Task 1; 60% (55%–66%) for Task 2; 50% (42%–58%) for Task 3; and 38% (31%–45%) for Task 4 (Figure 3A). This pattern of descending mean percentage correct across successive tasks was also observed in the subgroup of participants who answered at least 2 of 3 Task 1 questions correctly (n=78; Figure 3B) and in the subgroup of healthcare workers or caregivers for a frequently hospitalized person (n=57; Figure 3D). A similar pattern was observed among college graduates (n=33; Figure 3C), who had the highest mean percentage correct for Task 1 and the lowest for Task 4. Percentages correct for individual questions within each task were similar (data not shown) except for the third question (no. 6) in Task 2, which was answered correctly only 35% of the time. In this question, Hospitals 1 and 2 had nearly identical CAUTI rates and SIRs, but Hospital 1 had twice as many catheter-days (denominator for infection rate) as Hospital 2. When only the first 2 questions (nos. 4 and 5) in Task 2 were included (in these questions both hospitals had similar numbers of catheter-days), the mean (95% CI) percentage correct was 73% (66%–80%).
Participants were asked whether they had ever used a website for comparing hospitals; 6 (6%) indicated that they had. A larger number of participants (39 [36%]) indicated that a hospital comparison website would have been helpful in their choice of hospital for their current admission (Table 1).
DISCUSSION
In our study, we asked participants to compare 2 hypothetical hospitals using CAUTI data displayed in the same formats currently used on CMS Hospital Compare and other hospital comparison sites intended for the general public. When presented with the relatively simple table providing only written summaries of the SIR (eg, “Better than US National Benchmark”), participants were able to correctly assess hospital performance 72% of the time, on average. As the complexity of the data and their interpretation increased, participants answered correctly less often (see Figure 1, Tasks 2–4; Figure 3A). These results indicate that the current tabular methods for presenting hospital-level HAI data to the general public may not allow patients to make informed comparisons among hospitals.
In general, we found that participants in our study were less accurate at comparing hospitals when correct comparisons required interpreting numerical data (Tasks 3 and 4) than when written descriptions of SIRs were available and informative (Tasks 1 and 2). Examining our results in greater detail suggests more specific reasons why participants had difficulty properly interpreting HAI data presented in these tabular formats. For example, in Task 3 the hypothetical hospitals had identical written SIR descriptions (indicating that the 95% CIs for the SIRs of the 2 hospitals were either both on the same side of 1 or both overlapped 1). However, the CAUTI rates were very different between the 2 hospitals; one hospital clearly performed better than the other if the CAUTI data were interpreted correctly (see the Task 3 example table in Figure 1). For questions in this task, participants answered correctly only 50% of the time on average.
For the first 2 questions (nos. 4 and 5) in Task 2, both hospitals had similar catheter-days (the denominator of the infection rate), whereas the hospitals in the third question (no. 6) had substantially different catheter-days. Participants answered the first 2 questions correctly 73% of the time, compared with 35% for the third question. This suggests that participants were comparing the raw number of infections rather than calculating an infection rate. Note that participants could have ignored the numerical data in all of the Task 2 questions and instead used the written SIR descriptions to correctly compare the hospitals, as they did in Task 1. This indicates that providing written SIR descriptions alongside numerical data is not necessarily beneficial (and the descriptions may even be misleading, as they appeared to be in Task 3, described above).
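The pitfall in question no. 6 can be illustrated with a small example (hypothetical numbers, not the values used in the survey): when catheter-days differ, the raw infection counts suggest a difference between hospitals even though the infection rates do not.

```python
# Hypothetical illustration of the question-6 pitfall; these values are
# invented and are not the survey's actual data.
hospital_1 = {"infections": 12, "catheter_days": 8000}
hospital_2 = {"infections": 6, "catheter_days": 4000}

for name, h in (("Hospital 1", hospital_1), ("Hospital 2", hospital_2)):
    rate = h["infections"] / h["catheter_days"] * 1000
    print(f"{name}: {h['infections']} infections, {rate:.1f} per 1,000 catheter-days")

# Comparing raw counts (12 vs 6) suggests Hospital 2 is better, but both
# hospitals have the same rate (1.5 per 1,000 catheter-days), so the correct
# answer is that they perform the same.
```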
Real-world data are generally much more complex than the contrived CAUTI data in our survey (eg, they include more than 2 hospitals, denominators may differ substantially among hospitals), which would lead to even lower rates of correct hospital comparisons. Thus, on the basis of the results of this study we expect that members of the general public in many cases may reach incorrect conclusions when comparing hospitals using the current HAI reporting format.
To our knowledge, this is the first quantitative study rigorously assessing the interpretability of CMS Hospital Compare’s format for presenting hospital-level HAI data to the general public. We are aware of only one other publication addressing the usability of publicly available HAI data,5 but that study did not quantitatively assess understanding of the data.
Strengths of our study include the large sample size (N=110), random selection of participants, and the method of presenting HAI data to participants on an iPad, which ensured that the formatting of the HAI data was identical to that on CMS Hospital Compare. The primary limitation of this study is that our inpatient population may not represent members of the general public because they were hospitalized. However, our sample was racially diverse, spanned a range of ages, and had a variety of educational attainment and household income levels. Additionally, a subanalysis of only participants with a 4-year college degree demonstrated a pattern of decreasing performance from Task 1 to Task 4 similar to the primary analysis, suggesting that our results hold across levels of educational attainment. Similar results were seen in a subanalysis of participants with greater healthcare experience, suggesting that our results are independent of familiarity with healthcare. Finally, performance on Tasks 2–4 among participants who got at least 2 questions correct on Task 1 was nearly identical to that of the overall sample, indicating that our sample did not consist of patients too sick to complete the survey.
In summary, this study found that the current tabular methods used by CMS Hospital Compare and other hospital comparison websites are inadequate for presenting HAI data to the general public. However, 36% of participants in our survey indicated that a website for comparing hospitals would have been useful in choosing a hospital for their current hospitalization, indicating that there is public demand for a usable form of these data. Further research is necessary to identify methods for improving the way these data are presented. This research should include rigorous quantitative assessment of public understanding using realistic data.
ACKNOWLEDGMENTS
Financial support. Agency for Healthcare Research and Quality (HS18111 to D.J.M.), the Veterans Administration Health Services Research and Development (CRE 12-289 to D.J.M.), the Baltimore Veterans Affairs Medical Center Geriatrics Research, Education, and Clinical Center (J.D.S.), the National Institute on Aging at the National Institutes of Health (NIA 5 P30 AG028747 to J.D.S.), the National Institute of Diabetes and Digestive and Kidney Diseases at the National Institutes of Health (NIDDK 5 P30 DK072488 to J.D.S.), and the National Institutes of Health (5 K24AI079040-05 to A.D.H.).
Potential conflicts of interest. All authors report no conflicts of interest relevant to this article.
SUPPLEMENTARY MATERIAL
To view supplementary material for this article, please visit http://dx.doi.org/10.1017/ice.2015.260