
Structure, Process, and Outcome Quality of Surgical Site Infection Surveillance in Switzerland

Published online by Cambridge University Press:  22 August 2017

Stefan P. Kuster*
Affiliation:
Swissnoso - National Center for Infection Control, Bern, Switzerland; Division of Infectious Diseases and Hospital Epidemiology, University and University Hospital of Zurich, Zurich, Switzerland
Marie-Christine Eisenring
Affiliation:
Swissnoso - National Center for Infection Control, Bern, Switzerland; Service of Infectious Diseases, Central Institute, Valais Hospital, Sion, Switzerland
Hugo Sax
Affiliation:
Swissnoso - National Center for Infection Control, Bern, Switzerland; Division of Infectious Diseases and Hospital Epidemiology, University and University Hospital of Zurich, Zurich, Switzerland
Nicolas Troillet
Affiliation:
Swissnoso - National Center for Infection Control, Bern, Switzerland; Service of Infectious Diseases, Central Institute, Valais Hospital, Sion, Switzerland; Services of Infectious Diseases and Preventive Medicine, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
Address correspondence to Stefan Kuster, MD, MSc, Division of Infectious Diseases and Hospital Epidemiology, University Hospital Zurich, Raemistrasse 100 / HAL14 D6, 8091 Zürich, Switzerland (stefan.kuster@usz.ch).

Abstract

OBJECTIVE

To assess the structure and quality of surveillance activities and to validate outcome detection in the Swiss national surgical site infection (SSI) surveillance program.

DESIGN

Countrywide survey of SSI surveillance quality.

SETTING

147 hospitals or hospital units with surgical activities in Switzerland.

METHODS

Site visits were conducted with on-site structured interviews and review of a random sample of 15 patient records per hospital: 10 from the entire data set and 5 from a subset of patients with originally reported infection. Process and structure were rated in 9 domains with a weighted overall validation score, and sensitivity, specificity, positive predictive value, and negative predictive value were calculated for the identification of SSI.

RESULTS

Of 50 possible points, the median validation score was 35.5 (range, 16.25–48.5). Public hospitals (P<.001), hospitals in the Italian-speaking region of Switzerland (P=.021), and hospitals with longer participation in the surveillance (P=.018) had higher scores than others. Domains that contributed most to lower scores were quality of chart review and quality of data extraction. Of 49 infections, 15 (30.6%) had been overlooked in a random sample of 1,110 patient records, accounting for a sensitivity of 69.4% (95% confidence interval [CI], 54.6%–81.7%), a specificity of 99.9% (95% CI, 99.5%–100%), a positive predictive value of 97.1% (95% CI, 85.1%–99.9%), and a negative predictive value of 98.6% (95% CI, 97.7%–99.2%).

CONCLUSIONS

Despite a well-defined surveillance methodology, there is wide variation in SSI surveillance quality. The quality of chart review and the accuracy of data collection are the main areas for improvement.

Infect Control Hosp Epidemiol 2017;38:1172–1181

Type
Original Articles
Copyright
© 2017 by The Society for Healthcare Epidemiology of America. All rights reserved 

Surgical site infections (SSIs) are the most common hospital-acquired infections; they are associated with increased morbidity and mortality, prolonged length of hospital stay, and increased cost.1-6 Infection surveillance with feedback has been shown to reduce SSI rates.7 Nationwide SSI surveillance has been performed in Switzerland since 2011.8 In line with a broader international trend, the SSI rates of each participating hospital have been made publicly available since 2014, reinforcing the need for valid data collection.

Surveillance methods should be standardized to ensure the quality and reliability of surveillance data.9 The accuracy of the data depends on the experience, qualifications, training, and awareness of the surveillance staff.10,11 Validation is the only independent means to determine the accuracy of surveillance data; thus, validation is essential in determining the reliability of an SSI surveillance network in which data are aggregated from multiple data collectors and are used for comparisons among hospitals.12,13

Validation measures are designed to detect potential sources of bias. With regard to validation of SSI surveillance, selection bias (methods of patient inclusion), information and detection bias (completeness of required medical information), and assessment bias (correct interpretation of the study outcome) particularly need to be considered. Although the best means of validating an SSI surveillance module is still unknown, methods of calculating sensitivity, specificity, positive predictive values, and negative predictive values (with or without structured interviews for structure and process validation) have been widely acknowledged.14-18

To assess the quality of the Swissnoso SSI surveillance program, structure and process for SSI surveillance were reviewed at all participating hospitals using audits and structured interviews with all persons involved in surveillance. SSI outcome data were validated by reviewing a random sample from each hospital of 10 patient records (with or without infection) and 5 additional randomly selected records of patients with infection.

MATERIALS AND METHODS

SSI Surveillance Method

In Switzerland, the first multicenter surveillance system for SSI was developed in the mid-1990s. The system was developed according to the principles of the US National Nosocomial Infections Surveillance (NNIS) system, currently known as the National Healthcare Safety Network (NHSN),19-23 and is described in detail in a previous publication.8 Full documentation of the surveillance methodology is available to participating hospitals on the Swissnoso website.14

Since 2014, and starting with the 2011 data, when participation in the program became mandatory, the Swiss National Association for the Development of Quality in Hospitals and Clinics (ANQ) has openly published the surveillance results by hospital, including hospital names, NNIS/NHSN-adjusted SSI rates, and the quality of surveillance as rated during on-site visits.24

Validation of Participating Hospitals

Beginning October 1, 2012, we validated the structure and process of SSI surveillance, as well as SSI outcome data, during dedicated validation visits to all hospitals required to participate in the nationwide program, using a standardized data collection form. Hospitals were visited on site by 1 of 3 specifically trained investigators (2 registered nurses and 1 physician) with in-depth knowledge of the Swissnoso SSI surveillance methodology.

Surveillance Structure and Process Assessment

On-site structured interviews and observations of the surveillance process were performed with all persons involved in SSI surveillance, regardless of education, background, or the percentage of full-time equivalents assigned to surveillance.

A weighted score was attributed according to a structured questionnaire that was developed based on existing literature and expert consensus. The questionnaire covered training of persons performing the surveillance, work environment (including understaffing), potential conflicts of interest, data sources for patient selection, completeness of inclusion, completeness of required medical information (for diagnosis of SSI, during hospitalization and after discharge), quality of postdischarge surveillance, presence and type of medical supervision, and losses to follow-up (ie, attrition bias) (Table 1 and Supplemental Tables S1, S2, and S3). When different teams performed SSI surveillance at different sites or units of a hospital (eg, pediatric surgery, abdominal surgery, or cardiac surgery), each team was assessed separately and a score was attributed to each team. A minimal sketch of how such a weighted score can be aggregated is shown below.
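The following sketch illustrates how domain ratings can be aggregated into a weighted overall score scaled to a 50-point maximum. The domain names echo the questionnaire topics above, but the ratings and weights are hypothetical; the actual 9 domains and their weights are given in Table 2.

```python
# Illustrative aggregation of a weighted validation score (hypothetical
# ratings and weights; the real domains and weights appear in Table 2).
domain_ratings = {               # raw rating per domain, here on a 0-4 scale
    "training": 4, "work_environment": 3, "conflicts_of_interest": 4,
    "patient_selection": 3, "completeness_of_inclusion": 4,
    "follow_up_during_hospitalization": 2, "postdischarge_surveillance": 3,
    "medical_supervision": 3, "ecrf_data_quality": 2,
}
# Equal weights chosen so that a perfect rating in all 9 domains yields 50.
weight = 50 / (4 * len(domain_ratings))
overall_score = sum(rating * weight for rating in domain_ratings.values())
print(f"overall validation score: {overall_score:.2f} / 50")  # 38.89 / 50
```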

TABLE 1 Surgical Interventions Included in the Validation and Followed by the 147 Audited Surveillance Teams, and Team Characteristics, October 1, 2012, to June 26, 2016

SSI Outcome Validation

Patient records of electronic case report forms (eCRFs) submitted between January 1, 2009, and October 31, 2015, were eligible for review. A random sample of 10 patient records was drawn from all cases and all types of surgeries that were submitted by each respective hospital, irrespective of the presence or absence of SSI (Dataset A). In addition, for each hospital, 5 records of patients with SSI were randomly selected from all cases and all types of surgeries with originally reported infections that were submitted by the respective hospital (Dataset B).
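As a minimal sketch, the per-hospital sampling could be implemented as follows, assuming a pandas DataFrame named ecrf with hypothetical columns hospital and ssi_reported; the actual selection procedure used by the validators may have differed in detail.

```python
# Hypothetical sketch of drawing the two per-hospital validation samples.
import pandas as pd

def draw_validation_samples(ecrf: pd.DataFrame, hospital: str, seed: int = 1):
    cases = ecrf[ecrf["hospital"] == hospital]
    # Dataset A: 10 records, irrespective of reported SSI status.
    dataset_a = cases.sample(n=min(10, len(cases)), random_state=seed)
    # Dataset B: 5 records among cases with an originally reported SSI.
    with_ssi = cases[cases["ssi_reported"]]
    dataset_b = with_ssi.sample(n=min(5, len(with_ssi)), random_state=seed)
    return dataset_a, dataset_b
```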

Patient records were reviewed by the validators with assistance from on-site participants and were checked against eCRFs and paper CRFs. The outcome determination by the independent investigator was regarded as the gold standard. All cases with infection, all misclassifications (false positive and false negative), and all questionable cases needing further clarification were reviewed and resolved by consensus with 1 or 2 additional senior investigators (M.C.E. and N.T.).

Statistical Analysis

Descriptive statistics were used to outline the surveillance structures and processes of participating hospitals. Differences between groups were assessed in univariate analyses using the χ2 test, Fisher's exact test, the Wilcoxon rank-sum test, or the Student t test, as appropriate. Multivariate linear regression analysis was used to evaluate associations between surveillance parameters (ie, language region, hospital size, hospital status [private vs public], number of hospital beds, full-time equivalents dedicated to surveillance, and duration of participation in surveillance) and validation scores, both overall and within individual domains. Multivariate analyses of the associations between surveillance structure and process parameters (ie, language region, hospital size, hospital status [private vs public], number of procedures included per year, full-time equivalents dedicated to surveillance, understaffing, and the overall validation score and scores of individual domains, respectively) and misclassification of infection status and type of infection, respectively, were performed using generalized estimating equations (GEE; logit link models with binomial distribution of the dependent variable and an exchangeable within-group correlation structure). This method accounted for cluster effects at the hospital level, because several cases per hospital shared the same surveillance structure and process parameters. A minimal sketch of such a GEE model follows.
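The article reports that all analyses were run in Stata 14.2; purely as an illustration of the model described above (logit link, binomial outcome, exchangeable within-hospital correlation), a sketch in Python's statsmodels, with synthetic data and hypothetical variable names, might look as follows.

```python
# Illustrative sketch only (the article's analyses used Stata 14.2):
# a logit-link GEE with an exchangeable working correlation, clustering
# validated cases within hospitals. Data and variable names are synthetic.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "misclassified": rng.integers(0, 2, n),        # 1 = case misclassified
    "hospital": rng.integers(1, 41, n),            # cluster identifier
    "validation_score": rng.uniform(16, 49, n),    # overall score (max 50)
    "private_hospital": rng.integers(0, 2, n),     # 1 = private status
    "fte_surveillance": rng.uniform(0.1, 2.0, n),  # FTEs for surveillance
})

model = smf.gee(
    "misclassified ~ validation_score + private_hospital + fte_surveillance",
    groups="hospital",                         # cluster effects per hospital
    data=df,
    family=sm.families.Binomial(),             # binary outcome, logit link
    cov_struct=sm.cov_struct.Exchangeable(),   # exchangeable within-group
)
print(model.fit().summary())
```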

For the assessment of outcome reporting quality, the dataset comprising all randomly drawn cases (ie, cases with and without infection, as classified by the hospital) from each visited hospital yielded cases that fell into 4 categories: (1) cases reported by the hospital and confirmed as SSI cases by Swissnoso validation staff (true positives); (2) cases not reported by the hospital and ruled out as SSI cases by Swissnoso validation staff (true negatives); (3) cases reported by the hospital but ruled out as SSI cases by Swissnoso validation staff (false positives); and (4) cases not reported by the hospital but identified as SSI cases by Swissnoso validation staff (false negatives). From these counts, sensitivity, specificity, positive predictive value, and negative predictive value with 95% confidence intervals were calculated for the overall dataset, with the exception of cases with incomplete information at the time of the validation visit.
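In terms of these 4 categories (TP, TN, FP, and FN), the standard definitions used here are: sensitivity = TP / (TP + FN); specificity = TN / (TN + FP); positive predictive value = TP / (TP + FP); and negative predictive value = TN / (TN + FN).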

All statistical analyses were performed using Stata 14.2 software (StataCorp, College Station, TX), and 2-sided P values <.05 were considered statistically significant.

Sample Size Estimation

In a preceding sample-size estimation, a random sample of 913 patient records was considered necessary to achieve a precision of 5% for the 95% confidence interval of sensitivity when an overall SSI prevalence of 8% and a sensitivity of 95% were assumed.25
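For reference, this figure follows from Buderer's formula, assuming its conventional form with Z = 1.96 for a 95% confidence level, maximum acceptable marginal error d = 0.05, sensitivity Sens = 0.95, and prevalence P = 0.08: n = Z² × Sens × (1 − Sens) / (d² × P) = 1.96² × 0.95 × 0.05 / (0.05² × 0.08) ≈ 913.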

RESULTS

Between October 1, 2012, and June 26, 2016, all 147 hospitals or hospital units that participated in the surveillance and had submitted cases by October 31, 2015, were visited and audited in 25 of 26 Swiss cantons. Overall, 107 hospitals (72.8%) were from the German-speaking region of Switzerland; 31 (21.1%) were from the French-speaking region; and 9 (6.1%) were from the Italian-speaking region. In total, 96 (65.3%) were public hospitals or hospital units, and 9 (6.1%) were university affiliated. Furthermore, 87 hospitals (59.2%) had participated in the surveillance for more than 3 years at the time of validation (median time of participation, 3.4 years; range, 0.8-15.8 years).

Structure and Process Validation

The characteristics of the 147 surveillance teams are shown in Table 1. The 2 most frequently monitored surgical procedures were colon surgery (followed by 70.8% of validated hospitals) and hip prosthesis surgery (69.4%). Understaffing was noted in 34.7% of the surveillance teams, and 35.4% of medical supervisors had not undergone the required structured training in the surveillance methodology. Conflicts of interest (ie, surveillance supervised by a member of the surgical team) were detected among 11.6% of medical supervisors.

Table 2 depicts the 9 domains assessed in the process validation score, their individual scores and weights, and the unweighted mean score per domain among the 147 hospitals or hospital units. The overall mean score was 34.85 points (standard deviation [SD], 6.95 points), with a median of 35.5 points (range, 16.25-48.5 points) of a possible maximum of 50 points (Figure 1). The 2 domains that contributed most to lower scores were 'follow-up during hospitalization' (weighted mean difference from maximum score, 3.97 points; SD, 2.30 points) and 'data quality of eCRF compared to original data' (weighted mean difference from maximum score, 3.22 points; SD, 1.64 points).

FIGURE 1 Distribution of scores in 147 participating surveillance teams audited between October 1, 2012, and June 26, 2016.

TABLE 2 Domains, Scores, Weights, and Mean Scores per Domain in 147 Surveillance Teams

NOTE. SD, standard deviation; eCRF, electronic case report form.

The associations of hospital status, language region, duration of participation in the surveillance program, and hospital size with scores within individual domains are depicted in Table 3. In multivariate linear regression analysis, public hospital status (P<.001), Italian-speaking region (P=.021), and duration of participation in the surveillance program (P=.018) were associated with higher validation scores, whereas hospital size was not. The number of full-time equivalents dedicated to surveillance was associated neither with the overall score nor with scores of individual domains.

TABLE 3 Factors Associated With Higher Scores Within Individual Domains in 147 Surveillance Teams

NOTE. ECRF, electronic case report form; FTE, full-time equivalents.

a Bold values indicate statistical significance in multivariate models.

SSI Outcome Validation

A total of 1,110 randomly selected clinical cases (Dataset A, ie, irrespective of the presence or absence of SSI) with complete follow-up were reviewed between October 1, 2012, and June 26, 2016. The overall infection rate, as determined by the validators, was 4.4% (95% confidence interval [CI], 3.3%-5.8%). The characteristics of these cases are shown in Table 4. Overall, 15 cases (1.4%) were incorrectly classified as no infection, and 1 case (0.09%) was misclassified as an infection, accounting for a specificity of the surveillance of 99.9% (95% CI, 99.5%-100%), a sensitivity of 69.4% (95% CI, 54.6%-81.7%), a positive predictive value of 97.1% (95% CI, 85.1%-99.9%), and a negative predictive value of 98.6% (95% CI, 97.7%-99.2%). Of the 15 false negatives, 9 occurred in colon surgery cases, 3 in hip prosthesis cases, 2 in caesarean section cases, and 1 in an appendectomy case. Of these 15 cases, 8 (53.3%) were superficial incisional infections and 7 (46.7%) were organ-space infections. The 15 cases with missed infections were from 15 different hospitals; 4 (26.7%) were missed in private hospitals and 10 (66.7%) in non-university-affiliated public hospitals. Furthermore, 1 (6.7%) occurred in a university-affiliated hospital, corresponding to the distribution of reviewed cases among these hospital categories.
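As a cross-check, the reported accuracy measures can be reproduced from the counts implied by the text (34 true positives, 15 false negatives, 1 false positive, and 1,060 true negatives among 1,110 cases), here using exact (Clopper-Pearson) confidence intervals; this is an illustrative sketch, not the authors' original computation.

```python
# Cross-check of the reported diagnostic accuracy, assuming the counts
# implied by the text: TP=34, FN=15, FP=1, TN = 1,110 - 49 - 1 = 1,060.
from statsmodels.stats.proportion import proportion_confint

tp, fn, fp, tn = 34, 15, 1, 1060

def report(name: str, count: int, nobs: int) -> None:
    # method="beta" gives the exact (Clopper-Pearson) binomial interval
    lo, hi = proportion_confint(count, nobs, method="beta")
    print(f"{name}: {count / nobs:.1%} (95% CI, {lo:.1%}-{hi:.1%})")

report("sensitivity", tp, tp + fn)  # reported: 69.4% (54.6%-81.7%)
report("specificity", tn, tn + fp)  # reported: 99.9% (99.5%-100%)
report("PPV",         tp, tp + fp)  # reported: 97.1% (85.1%-99.9%)
report("NPV",         tn, tn + fn)  # reported: 98.6% (97.7%-99.2%)
```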

TABLE 4 Characteristics of 1,110 Randomly Selected Clinical Cases With Complete Follow-Up Reviewed Between October 1, 2012, and June 26, 2016

NOTE. PPV, positive predictive value; NPV, negative predictive value. NA, not applicable.

In univariate GEE analysis, misclassification of infection status was associated with lower quality of supervision of suspected cases by a medical supervisor (P=.009); unweighted mean (standard deviation) scores were 2.1 (0.93) in cases with false-negative classification compared to 2.61 (0.72) in cases without (Table 2, Domain 7). However, misclassification of infection status was not associated with the overall validation score or other domains of the score, hospital size, private hospital status, number of operations followed, medical training of the medical supervisor, full-time equivalents dedicated to surveillance, or understaffing (Supplemental Table S4).

In total, 486 cases with infections were randomly selected from 128 of 147 (87.1%) hospitals or hospital units (Dataset B, ie, randomly selected cases among those with SSI, as classified by the hospitals). The remaining hospitals had no cases of infection at the time of the validation visit. Among these 486 cases, 204 (42.2%) were superficial incisional infections, 52 (10.8%) were deep incisional infections, and 226 (46.8%) were organ-space infections (Table 5). Misclassifications occurred in 46 cases (9.5%): 11 superficial incisional infections were incorrectly classified as deep incisional (n=7) or organ-space (n=4) infections; 4 deep incisional infections were incorrectly classified as superficial incisional infections; and 31 organ-space infections were incorrectly classified as superficial incisional (n=7) or deep incisional (n=24) infections. Classifications were incorrect in 9.4% of infections after colon surgery, 21.2% after hip arthroplasty, and 18.2% after knee arthroplasty, the latter two mainly owing to incorrect classification of organ-space infections as deep incisional infections. These 46 misclassifications occurred in 34 hospitals or hospital units; of these, 8 (23.5%) had 2 misclassified cases and 2 (5.9%) had 3. In univariate GEE analysis, misclassification of the type of infection was associated with lower overall validation scores (P<.001), a higher number of operations performed (P=.021), lower adequacy of follow-up during hospitalization (P=.015), lower adequacy of documentation of cases with infection (P<.001), lower quality of supervision of suspected cases by the medical supervisor (P=.003, Domain 7; see Table 2), lower infectious-diseases-related expertise of the medical supervisor (medical supervisor's background; P=.007), and lower participation in mandatory training sessions (P=.009). In multivariate GEE analysis, misclassification was independently associated with lower adequacy of documentation of cases with infection (P=.023) (Supplemental Table S5).

TABLE 5 Characteristics of 483 Randomly Selected Cases With Originally Reported Infection at the Main Surgical Site and Complete Documentation and Follow-Up Reviewed Between October 1, 2012, and June 26, 2016

DISCUSSION

Using on-site, full-day visits to all hospitals participating in SSI surveillance in Switzerland, we have demonstrated wide variation in surveillance quality, with overall quality scores ranging from 16.25 to 48.5 of a possible 50 points. Room for improvement was detected in the important domains of chart review and quality of data extraction from patient charts. Overall, 15 infections were not reported, accounting for 1.4% of all cases classified as having no SSI by the hospitals and 30.6% of all SSIs in the validation sample.

The association between the Italian-speaking region and the overall score was possibly related to the involvement of the same study personnel in the surveillance across several hospitals, guaranteeing better homogeneity in surveillance methodology. Public hospitals maintain more extensive medical documentation to ensure high treatment quality across different treatment teams and thus reached higher validation scores. In private hospitals, there may be less variation in the medical personnel involved in patient care; thus, thorough documentation may not be considered equally important. Last, the expertise that accumulates over the years likely explains the association between overall validation scores and duration of participation in the surveillance program. Our dataset showed an association between misclassification of infection status (ie, SSI present vs absent) and the quality of supervision by a medical supervisor, but not with the total validation score, hospital size, private hospital status, number of operations followed, training of the medical supervisor, full-time equivalents dedicated to surveillance, or understaffing. Misclassification of the type of infection (ie, superficial incisional, deep incisional, or organ-space infection) was independently associated with lower adequacy of documentation.

Taken together, our findings highlight the importance of high-quality data for interfacility comparisons and, more importantly, for public reporting of healthcare-associated infection surveillance data. Interpretive variation despite uniform surveillance definitions has been shown previously.26,27 Furthermore, public reporting of HAI surveillance data in a system with a strong disincentive to report unfavorable outcome data may result in exclusion or reclassification of events rather than prevention of actual negative outcomes.28 Therefore, apart from a standardized methodology, validation of surveillance data, surveillance methods, and operations within participating facilities by an independent party is key for quality assurance under such circumstances.

As mentioned previously, the best means to validate an SSI surveillance module is still unknown. Therefore, various approaches have been proposed in the scientific literature or are available together with the surveillance methodologies, such as the validation toolkits provided by the NHSN.29 The methods applied to validate surgical site infection surveillance (SSIS) in the Netherlands were published in 2007 by Mannien et al:30 process validation by means of a structured interview was combined with a prevalence study, and overall positive and negative predictive values were calculated.

Similarly, validation of SSIS data was performed in Scotland by McCoubrey et al.12 Validation addressed structure (ie, trained personnel and systems for SSIS, systems to ensure complete inclusion, and checks to confirm the number of operations) and process (ie, a phone interview to identify the systems for SSIS data collection and management at the local level). Outcome validation was conducted by calculating sensitivity, specificity, positive predictive value, and negative predictive value for the last 15 cases of SSI and 60 further randomly selected cases.

Gastmeier et al10 compared 2 validation methods in a prevalence survey on nosocomial infections (Nosokomiale Infektionen in Deutschland Erfassung und Prävention, NIDEP). First, as in other previous studies,11,31-33 bedside validation of the 4 physician investigators was performed using 2 supervisors as the gold standard, and sensitivity and specificity were calculated. In addition, the investigators were validated using case studies.33

However, these approaches, including ours, have limitations. First, it remains unclear whether and how the results of validating structure and process by structured interviews translate into the validity of infection outcomes. Second, validation by case studies allows assessment of knowledge among the persons performing surveillance, but extrapolating from case-study results to SSIS performance is problematic with regard to potential conflicts of interest, because people may behave differently in case studies than in real-life situations in their own hospital. Third, there is no consensus on the sensitivity required to consider surveillance results valid. Last, given the low prevalence of SSI, large numbers of patient charts must be reviewed to achieve an adequate level of precision.25,34

In conclusion, validation of the process and structure of SSI surveillance and of outcome data helps identify areas for improvement and estimate the extent of SSI underreporting. In Switzerland, validation results are reported openly together with SSI rates to help the public appraise the SSI rates of individual hospitals. However, the effort and cost of validation are substantial; therefore, more sensitive and efficient methods for the detection of false-negative outcome measures are urgently needed. Future research should focus on the association between poor performance in process and structure measurement and reported SSI rates.

ACKNOWLEDGMENTS

We would like to thank Marylaure Dubouloz, Katja Di Salvo, and all participating hospitals for data collection and collaboration. These data were collected in collaboration with the Swiss National Association for the Development of Quality in Hospitals and Clinics (ANQ).

Financial support: No financial support was provided relevant to this article.

Potential conflicts of interest: All authors report no conflicts of interest relevant to this article.

SUPPLEMENTARY MATERIAL

To view supplementary material for this article, please visit https://doi.org/10.1017/ice.2017.169.

Footnotes

PREVIOUS PRESENTATION: These data were presented in part at the Fourth International Conference on Prevention & Infection Control (ICPIC) on June 23, 2017, in Geneva, Switzerland.

a. Authors of equal contribution.

b. Members of Swissnoso are (in alphabetical order): Carlo Balmelli, MD, Lugano; Marie-Christine Eisenring, RN, ICP, CNS, Sion; Stephan Harbarth, MD, MS, Geneva; Stefan P. Kuster, MD, MSc, Zurich; Jonas Marschall, MD, MSc, Bern; Virginie Masserey Spicher, MD, Bern; Didier Pittet, MD, MS, Geneva; Christian Ruef, MD, Zurich; Hugo Sax, MD, Zurich; Matthias Schlegel, MD, St. Gallen; Alexander Schweiger, MD, Basel; Nicolas Troillet, MD, MSc, Sion; Andreas F. Widmer, MD, MSc, Basel; Giorgio Zanetti, MD, MSc, Lausanne.

REFERENCES

1. Kirkland KB, Briggs JP, Trivette SL, Wilkinson WE, Sexton DJ. The impact of surgical-site infections in the 1990s: attributable mortality, excess length of hospitalization, and extra costs. Infect Control Hosp Epidemiol 1999;20:725-730.
2. Perencevich EN, Sands KE, Cosgrove SE, Guadagnoli E, Meara E, Platt R. Health and economic impact of surgical site infections diagnosed after hospital discharge. Emerg Infect Dis 2003;9:196-203.
3. Wenzel RP. The Lowbury Lecture. The economics of nosocomial infections. J Hosp Infect 1995;31:79-87.
4. Badia JM, Casey AL, Petrosillo N, Hudson PM, Mitchell SA, Crosby C. Impact of surgical site infection on healthcare costs and patient outcomes: a systematic review in six European countries. J Hosp Infect 2017;96:1-15.
5. Jenks PJ, Laurent M, McQuarry S, Watkins R. Clinical and economic burden of surgical site infection (SSI) and predicted financial consequences of elimination of SSI from an English hospital. J Hosp Infect 2014;86:24-33.
6. Weber WP, Zwahlen M, Reck S, et al. Economic burden of surgical site infections at a European university hospital. Infect Control Hosp Epidemiol 2008;29:623-629.
7. Haley RW, Culver DH, White JW, et al. The efficacy of infection surveillance and control programs in preventing nosocomial infections in US hospitals. Am J Epidemiol 1985;121:182-205.
8. Troillet N, Aghayev E, Eisenring MC, Widmer AF, Swissnoso. First results of the Swiss National Surgical Site Infection Surveillance Program: who seeks shall find. Infect Control Hosp Epidemiol 2017;38:697-704.
9. Bruce J, Russell EM, Mollison J, Krukowski ZH. The quality of measurement of surgical wound infection as the basis for monitoring: a systematic review. J Hosp Infect 2001;49:99-108.
10. Gastmeier P, Kampf G, Hauer T, et al. Experience with two validation methods in a prevalence survey on nosocomial infections. Infect Control Hosp Epidemiol 1998;19:668-673.
11. Wenzel RP, Osterman CA, Townsend TR, et al. Development of a statewide program for surveillance and reporting of hospital-acquired infections. J Infect Dis 1979;140:741-746.
12. McCoubrey J, Reilly J, Mullings A, Pollock KG, Johnston F. Validation of surgical site infection surveillance data in Scotland. J Hosp Infect 2005;61:194-200.
13. Haley VB, Van Antwerpen C, Tserenpuntsag B, et al. Use of administrative data in efficient auditing of hospital-acquired surgical site infections, New York State 2009-2010. Infect Control Hosp Epidemiol 2012;33:565-571.
14. Swissnoso website. https://www.swissnoso.ch/module/ssi-surveillance/material/handbuch-formulare/. Updated 2017. Accessed July 25, 2017.
15. Friedman ND, Russo PL, Bull AL, Richards MJ, Kelly H. Validation of coronary artery bypass graft surgical site infection surveillance data from a statewide surveillance system in Australia. Infect Control Hosp Epidemiol 2007;28:812-817.
16. Huotari K, Agthe N, Lyytikainen O. Validation of surgical site infection surveillance in orthopedic procedures. Am J Infect Control 2007;35:216-221.
17. Masia MD, Barchitta M, Liperi G, et al. Validation of intensive care unit-acquired infection surveillance in the Italian SPIN-UTI network. J Hosp Infect 2010;76:139-142.
18. Zuschneid I, Geffers C, Sohr D, et al. Validation of surveillance in the intensive care unit component of the German nosocomial infections surveillance system. Infect Control Hosp Epidemiol 2007;28:496-499.
19. Culver DH, Horan TC, Gaynes RP, et al. Surgical wound infection rates by wound class, operative procedure, and patient risk index. National Nosocomial Infections Surveillance System. Am J Med 1991;91:152S-157S.
20. Emori TG, Culver DH, Horan TC, et al. National nosocomial infections surveillance system (NNIS): description of surveillance methods. Am J Infect Control 1991;19:19-35.
21. Hubner M, Diana M, Zanetti G, Eisenring MC, Demartines N, Troillet N. Surgical site infections in colon surgery: the patient, the procedure, the hospital, and the surgeon. Arch Surg 2011;146:1240-1245.
22. Romy S, Eisenring MC, Bettschart V, Petignat C, Francioli P, Troillet N. Laparoscope use and surgical site infections in digestive surgery. Ann Surg 2008;247:627-632.
23. Staszewicz W, Eisenring MC, Bettschart V, Harbarth S, Troillet N. Thirteen years of surgical site infection surveillance in Swiss hospitals. J Hosp Infect 2014;88:40-47.
24. National Association for Quality Development in Hospitals and Clinics (ANQ) website. http://www.anq.ch/akutsomatik/akutsomatik-anq-hplus/. Updated 2017. Accessed July 24, 2017.
25. Buderer NM. Statistical methodology: I. Incorporating the prevalence of disease into the sample size calculation for sensitivity and specificity. Acad Emerg Med 1996;3:895-900.
26. Birgand G, Lepelletier D, Baron G, et al. Agreement among healthcare professionals in ten European countries in diagnosing case-vignettes of surgical-site infections. PLoS One 2013;8:e68618.
27. Nosocomial infection rates for interhospital comparison: limitations and possible solutions. A report from the National Nosocomial Infections Surveillance (NNIS) System. Infect Control Hosp Epidemiol 1991;12:609-621.
28. Talbot TR, Bratzler DW, Carrico RM, et al. Public reporting of health care-associated surveillance data: recommendations from the Healthcare Infection Control Practices Advisory Committee. Ann Intern Med 2013;159:631-635.
29. National Healthcare Safety Network (NHSN) External Validation Guidance and Toolkit 2016. Centers for Disease Control and Prevention website. https://www.cdc.gov/nhsn/pdfs/validation/2016/2016-nhsn-ev-guidance.pdf. Published 2016. Accessed July 24, 2017.
30. Mannien J, van der Zeeuw AE, Wille JC, van den Hof S. Validation of surgical site infection surveillance in the Netherlands. Infect Control Hosp Epidemiol 2007;28:36-41.
31. Broderick A, Mori M, Nettleman MD, Streed SA, Wenzel RP. Nosocomial infections: validation of surveillance and computer modeling to identify patients at risk. Am J Epidemiol 1990;131:734-742.
32. Cardo DM, Falk PS, Mayhall CG. Validation of surgical wound surveillance. Infect Control Hosp Epidemiol 1993;14:211-215.
33. Larson E, Horan T, Cooper B, Kotilainen HR, Landry S, Terry B. Study of the definition of nosocomial infections (SDNI). Research Committee of the Association for Practitioners in Infection Control. Am J Infect Control 1991;19:259-267.
34. Carley S, Dosman S, Jones SR, Harrison M. Simple nomograms to calculate sample size in diagnostic studies. Emerg Med J 2005;22:180-181.