
Communicable Disease Surveillance Systems in Disasters: Application of the Input, Process, Product, and Outcome Framework for Performance Assessment

Published online by Cambridge University Press:  02 April 2018

Javad Babaie*
Affiliation:
Health Services Management, School of Management and Medical Informatics, Tabriz University of Medical Sciences, Tabriz, East Azerbaijan, Iran Iranian Center of Excellence in Health Management, School of Management and Medical Informatics, Tabriz University of Medical Sciences, Tabriz, East Azerbaijan, Iran Tabriz Health Services Management Research Center, Tabriz University of Medical Sciences, Tabriz, East Azerbaijan, Iran
Ali Ardalan
Affiliation:
Department of Disaster Public Health, School of Public Health, Tehran University of Medical Sciences, Tehran, Iran Department of Disaster and Emergency Health, National Institute of Health Research, Tehran University of Medical Sciences, Tehran, Iran
Hasan Vatandoost
Affiliation:
Medical Entomology and Vector Control, Tehran University of Medical Sciences, Tehran, Tehran, Iran
Mohammad Mahdi Goya
Affiliation:
Communicable Diseases Management Center, Ministry of Health, Tehran, Iran
Ali Akbarisari
Affiliation:
Health Management and Economics, Tehran University of Medical Sciences, Tehran, Tehran, Iran
*
Correspondence and reprint requests to Javad Babaie, Health Services Management, School of Management and Medical Informatics, Tabriz University of Medical Sciences, Tabriz, East Azerbaijan, Iran (e-mail: javad1403@yahoo.com).

Abstract

Objective

One of the most important measures following disasters is setting up a communicable disease surveillance system (CDSS). This study aimed to develop indicators to assess the performance of CDSSs in disasters.

Method

In this 3-phase study, a qualitative study was first conducted through in-depth, semistructured interviews with experts on health in disasters and emergencies, health services managers, and communicable diseases center specialists. The interviews were analyzed, and CDSS performance assessment (PA) indicators were extracted. The appropriateness of these indicators was then examined through a questionnaire administered to experts and heads of communicable diseases departments of medical sciences universities. Finally, the designed indicators were weighted using the analytic hierarchy process and Expert Choice software.

Results

In this study, 51 indicators were designed, of which 10 were related to the input (19.61%), 17 to the process (33.33%), 13 to the product (25.49%), and 11 to the outcome (21.57%). In weighting, the maximum score was that of input (49.1), and the scores of the process, product, and outcome were 31.4, 12.7, and 6.8, respectively.

Conclusion

Through 3 different phases, PA indicators for 4 phases of a chain of results were developed. The authors believe that these PA indicators can assess the system’s performance and its achievements in response to disasters. (Disaster Med Public Health Preparedness. 2019;13:158–164)

Type
Original Research
Copyright
Copyright © Society for Disaster Medicine and Public Health, Inc. 2018 

When hazards occur in vulnerable communities, they destroy infrastructure,1 which leads to the loss of health care facilities and structures2 or the disruption of their performance.3 Disruption of the health system worsens health conditions in the disaster-affected areas. In addition to causing mortality and injuries, disasters also disrupt access to health services.4 As a result, these events pave the way for epidemics and outbreaks of communicable diseases.5-7 Many people and media outlets believe that there is a high risk of a communicable disease epidemic after a disaster,8,9 although credible sources do not support this claim.8 Nevertheless, rumors and stories about outbreaks of communicable diseases in disasters frighten the disaster-affected population. The cholera epidemic after the 2010 Haiti earthquake, which affected more than 604,634 people and led to the hospitalization of 329,697 and the death of 7,463, is evidence in this regard.10

Therefore, it is critical that health systems set up a communicable disease surveillance system (CDSS) immediately after a disaster for an appropriate and effective response.11-14 "A surveillance system is a systematic process for collecting, summarizing, analyzing, and publishing data, and reporting its findings to stakeholders for the development of interventions."15

Surveillance is considered so important that the Centers for Disease Control and Prevention (CDC) in the United States formed a disaster surveillance work group for this purpose in 2006.16 The Texas Department of State Health Services also developed a surveillance system (SS) for this purpose, named Disaster-Related Mortality Surveillance, which was activated for the first time during Hurricane Ike.17

Like all programs, the performance of these systems should be assessed by experts, and their strengths and weaknesses identified for the continuous development of the system. Performance assessment (PA) is a systematic process that monitors and assesses the achievement of the goals of different parts of an organization or program and provides stakeholders with the results.18,19 However, despite numerous experiences in setting up an SS for communicable diseases after disasters,20,21 no PA indicators exist for such systems. There is only a PA framework, which does not cover indicators and discusses only the features of a PA. Although the CDC guideline is designed for health care systems in general, not specifically for communicable diseases and disasters, it is the one mainly used in similar studies.15

Therefore, considering the importance of PA for all programs, the problems caused by the lack of PA, and the knowledge gap in this field, this study was conducted to design PA indicators in 4 areas (input, process, product, and outcome) for CDSSs in disasters.

METHODS

A 3-phase, mixed-method (quantitative/qualitative) design was used to develop PA indicators for SSs in response to disasters. Before starting this study, the researchers conducted a systematic literature review to identify existing PA indicators for SSs.22 A qualitative approach was then used to select and develop potentially relevant indicators. A semistructured questionnaire was used to conduct focus group discussions and interviews, and purposeful sampling was used to select the participants. The selection criteria were experience with the Communicable Diseases Management program in previous disasters and willingness to participate. The participants were experts in the field of health in disasters and emergencies, health services managers, and officials of the Communicable Diseases Management Departments of the Islamic Republic of Iran Ministry of Health. Oral consent was obtained from the participants. The focus group discussions and interviews started with the main question, "What indicators do you think can be used to assess the performance of a CDSS in a disaster?" Follow-up questions based on the answers were asked to obtain more detailed and richer data. The discussions of each session were transcribed and analyzed immediately, and gaps in the data were addressed in subsequent meetings. The meetings continued until data saturation was achieved.

The content of the discussions was analyzed manually using the Strauss and Corbin (1998) model,23 because this study was part of a larger study conducted with the grounded theory method. First, the texts of the interviews were divided into meaning units, and codes were extracted from these units. Similar codes were grouped into subcategories, and the subcategories were finally merged into main categories. The outcome of this phase was a set of draft SS PA indicators.

After the indicators were developed, their appropriateness was investigated. A Likert-scale questionnaire was designed, and its validity and reliability were tested. Each indicator was rated on a scale from 1 to 5 (ie, completely appropriate to completely inappropriate). The questionnaire was distributed among the heads of Communicable Disease Management Departments of medical universities during a country-level seminar. The completed questionnaires were collected, and the results were analyzed using the Statistical Package for the Social Sciences version 20 (SPSS 20). Indicators considered appropriate by more than 70% of the participants were retained; the rest were excluded from the study.
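The retention rule described above can be sketched as follows. This is an illustrative assumption, not the authors' analysis code: the mapping of scale points 1-2 to "appropriate" and the sample ratings are invented for the example.

```python
def retained(ratings, threshold=0.70):
    """Keep an indicator if the share of 'appropriate' ratings exceeds threshold."""
    # Assumption: ratings of 1 or 2 on the 1 = completely appropriate ...
    # 5 = completely inappropriate scale count as "appropriate"
    appropriate = sum(1 for r in ratings if r <= 2)
    return appropriate / len(ratings) > threshold

# Hypothetical ratings from 5 respondents for two candidate indicators
print(retained([1, 2, 1, 2, 5]))  # 4/5 = 80% appropriate -> True
print(retained([1, 3, 4, 5, 2]))  # 2/5 = 40% appropriate -> False
```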

Next, a panel of experts from the Ministry of Health and health care administration experts screened and finalized the set of PA indicators. Nine interested individuals were invited to form the panel, of whom 6 were present at the meeting (participation rate, 66.66%).

To determine the importance of the indicators and to weight them, officers of the Communicable Diseases Management Center in the East Azerbaijan Province of Iran were randomly selected. They were asked to compare the elements of the model for assessing the performance of CDSSs in response to disasters using the analytic hierarchy process introduced by Thomas L. Saaty. The analytic hierarchy process, one of the best-known and most widely used multicriteria decision-making techniques, is based on paired comparisons among options and criteria.24 Weighting was done in 2 phases. In the first phase, 5 criteria that an indicator must meet (clarity, relevance, economic impact, adequacy, and ability to monitor) were weighted.

In the second phase, the participants used these criteria to compare all the indicators in pairs and weight them. A questionnaire was designed with the software and distributed among the participants, who were taught how to complete it and asked to do so carefully and patiently. The completed questionnaires were collected within 3 weeks, and the data were entered into the software.

Results were analyzed using Expert Choice version 11, software designed for fuzzy computing and multiple-criteria techniques that implements the analytic hierarchy process.
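The analytic hierarchy process computation carried out by Expert Choice can be sketched as below. This is a minimal illustration, not the authors' workflow: the pairwise judgments are hypothetical, and the 0.1 consistency-ratio cutoff is the standard threshold the study also uses.

```python
import numpy as np

# Saaty's random consistency index for matrix orders 1..5
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}

def ahp_weights(matrix):
    """Return (weights, consistency_ratio) for a reciprocal pairwise matrix."""
    a = np.asarray(matrix, dtype=float)
    n = a.shape[0]
    eigvals, eigvecs = np.linalg.eig(a)
    k = np.argmax(eigvals.real)           # principal eigenvalue
    w = np.abs(eigvecs[:, k].real)        # principal eigenvector -> priorities
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)  # consistency index
    ri = RANDOM_INDEX[n]
    cr = ci / ri if ri else 0.0           # CR <= 0.1 is considered acceptable
    return w, cr

# Hypothetical 1-9 scale judgments among 3 criteria
# (say, clarity vs relevance vs ability to monitor)
m = [[1, 3, 5],
     [1/3, 1, 2],
     [1/5, 1/2, 1]]
weights, cr = ahp_weights(m)
print(weights.round(3), round(cr, 3))
```

If the consistency ratio exceeded 0.1, the participants would be asked to revise their judgments, mirroring the revision loop described in the Results.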

RESULTS

A total of 21 people were interviewed to extract the PA indicators of CDSSs, of whom 10 (47.62%) were male and 11 (52.38%) were female. Five (23.81%) of the interview participants were specialists or subspecialists, 10 (47.62%) were general practitioners, and 6 (28.57%) were experts or top experts. Two (9.52%) had less than 10 years of job experience, 4 (19.05%) had 10 to 15 years, 9 (42.86%) had 16 to 20 years, 3 (14.29%) had 21 to 25 years, and 3 (14.29%) had over 25 years. The mean length of the interviews was 53 minutes.

Results of Interview Analysis

A total of 363 codes were extracted, of which 91 repeated codes were excluded. The remaining codes were classified into 40 subcategories and 4 main categories: input, process, product, and outcome. Of the 40 subcategories, 13 (32.5%) were inputs, 11 (27.5%) were processes, 8 (20%) were products, and 8 (20%) were outcomes.

Results of the Second Phase

This phase was carried out with the participation of 49 heads of communicable disease management departments at the medical universities and schools throughout the country.

A total of 67 questionnaires were distributed, of which 39 were completed and returned (response rate, 58.21%). In addition, 19 questionnaires were distributed among Communicable Diseases Management officers, of which 10 were returned to the researchers after repeated follow-up (response rate, 52.63%). Of the 49 people who participated in this phase of the study, 1 (2.04%) was less than 30 years old, 12 (24.49%) were between 30 and 40, 20 (40.82%) were between 40 and 50, 13 (26.53%) were older than 50, and 3 (6.12%) did not complete this section, so their ages were unknown. Thirty-four (69.39%) of the participants were male and the rest were female.

In terms of job experience, 8 (16.33%) of the participants had between 5 and 10 years, 6 (12.24%) had between 10 and 15 years, 5 (10.2%) had between 15 and 20 years, 18 (36.73%) had between 20 and 25 years, and 9 (18.37%) had over 25 years. The job experience of 3 participants was unknown.

Of the 111 indicators proposed in this phase, 33 (29.73%) did not meet the desired criteria from the participants' viewpoint and were excluded from the study. Finally, 77 of the indicators were approved by at least 70% of the participants. Because of the high number of indicators and the difficulty of assessing all of them, the proposed indicators were revised again by an expert panel of researchers and specialists, and 26 (33.77%) more were excluded. The remaining 51 indicators entered the next phase (weighting): 10 related to input (19.61%), 17 to process (33.33%), 13 to product (25.49%), and 11 to outcome (21.57%).

Determining the Criteria for Weighting

The criteria were compared two by two to determine the weight. For example, the “economic” criterion was compared to the “clear” criterion to determine which was more important.

The calculation results are shown in Table 1.

Table 1 Importance of the Criteria

Inconsistency: 0.03

It should be noted that, in cases where the inconsistency was higher than the standard (0.1), the participants were asked to revise their scoring. This continued until the acceptable inconsistency was achieved.

Weighting of the Indicators

At this phase, the priority of each indicator was judged against the criteria, using Saaty's 9-point scale. The results of this calculation were registered in the paired comparison matrix with the criteria, and the inconsistency of the items was calculated through normalization of the row and column averages. The question answered in this section was, "Among the criteria of clarity, relevance, economic impact, adequacy, and ability to monitor, which one is preferred, and to what extent?"

Determining the Final Weights of the Indicators

At the final stage of the weighting process, the results of the 2 previous phases were integrated and the final weights of the items were calculated. The detailed results are presented in Table 2. The highest weight, 49.1 out of 100, belonged to the input indicators; the weights of the process, product, and outcome indicators were 31.4, 12.7, and 6.8, respectively.
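The integration step above can be illustrated with a small sketch: in AHP synthesis, an indicator's global weight is the sum, over the criteria, of the criterion weight multiplied by the indicator's local weight under that criterion. All numbers below are hypothetical, not values from Table 2.

```python
def synthesize(criteria_weights, local_weights):
    """Global weight of one indicator: sum over criteria of w_c * local weight."""
    return sum(wc * lw for wc, lw in zip(criteria_weights, local_weights))

# Hypothetical first-phase criterion weights
# (clarity, relevance, economic impact, adequacy, ability to monitor)
criteria = [0.35, 0.25, 0.20, 0.12, 0.08]

# Hypothetical second-phase local weights of one indicator under each criterion
indicator_a = [0.40, 0.30, 0.20, 0.50, 0.10]

print(round(synthesize(criteria, indicator_a), 3))  # 0.323
```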

Table 2 Weights of Indicators

Finally, the developed indicators for the PA of CDSSs in disasters, together with their definitions, are shown in Table 3.

Table 3 Final List of Performance Assessment Indicators for Communicable Disease Surveillance Systems Designed for Use in Natural Disasters

DISCUSSION

The present study was designed and conducted to develop PA indicators based on the input, process, product, and outcome framework for CDSSs in response to natural disasters. The researchers interviewed 21 experts in related fields in 2 phases and extracted 51 indicators in the 4 areas of input, process, product, and outcome. The indicators were then weighted using the analytic hierarchy process.

The absence of such indicators was noted in previous years by many experts.25-27 The frameworks and indicators proposed and applied in previously published work were designed for usual conditions, not disasters. The conditions following a disaster obviously differ from normal ones, so a CDSS designed for disasters, and its associated processes, will differ from one designed for normal conditions.28 Thus, the PA indicators of such systems should also differ from those appropriate under usual conditions. Although some of the indicators from previous studies and frameworks were used in this study, these are, to the authors' knowledge, the first indicators developed specifically for the PA of CDSSs in disasters.

For example, all 9 attributes listed in the CDC guidelines,15 as well as 31 of the indicators proposed by the World Health Organization (WHO) in Communicable Diseases Surveillance and Response: A Guide to Monitoring and Evaluating28 and 7 of the indicators proposed in the "Surveillance System" chapter of Communicable Diseases Control in Emergencies: A Field Manual,29 were included in the PA indicator list.

In this study, performance indicators for each of the 4 areas (input, process, product, and outcome) are presented. In Communicable Diseases Control in Emergencies, the CDSS comprises 6 processes: diagnosis, reporting, examining, verification, analysis, and feedback. Although some indicators have been formulated for each of these 6 processes, they cannot be used without supporting activities such as inputs, education, communication, and supervision; any shortcoming in these areas will affect performance. Thus, such indicators are included in the present PA.

These areas have, however, been considered in previous PA studies in fields other than disasters. In various studies, indicators have been developed for the PA of these areas and applied in practice,30-32 and the 4 indicators provided by the WHO for assessment have also been noted.29

Other important outputs of this study are the weights of the indicators and their relative importance in the PA. These indicators can be used not only to assess the performance of SSs designed in response to disasters, but also to rank them by their indicator scores. In this study, the highest weight (49.1 out of 100) is devoted to the input indicators. Although the WHO does not weight its proposed indicators, 41 of its 95 indicators fall in the input area.29 The results are similar for the outcomes: the outcome indicators in this study have the lowest weight in the PA (6.8 out of 100), and among the indicators proposed by the WHO, only one is devoted to outcome. The indicators help identify the strengths and weaknesses of CDSSs in disasters; extracting these is the responsibility of every manager and could improve and enhance CDSSs in the future.

CONCLUSION

Natural hazards, and the disasters they create in communities, have always been an inevitable part of human life, and they will continue to occur in the future. Among their common effects is the destruction of infrastructure, including health facilities. This paves the way for outbreaks of communicable diseases, which may intensify the effects of a disaster and become a secondary disaster in themselves. To manage these conditions, the first and most important step is to design and set up a CDSS. The performance of an SS, like that of any other program, should be assessed for efficiency and effectiveness. The existing literature notes the lack of such indicators, but few practical steps have been taken. To address this deficiency, this study recommends 51 indicators for the PA of CDSSs. The researchers believe that these indicators will be effective and useful in the PA of SSs. Although there may be shortcomings in these indicators, it is hoped that researchers around the world will overcome these weaknesses by testing the indicators in the field.

Acknowledgements

This study was part of a PhD thesis/dissertation supported by Tehran University of Medical Sciences.

Funding

This study has been funded and supported by I.R. Iran's National Institute of Health Research (NIHR), Tehran University of Medical Sciences.

REFERENCES

1. The United Nations Office for Disaster Risk Reduction (UNISDR). UNISDR terminology on disaster risk reduction. 2009. UNISDR website. http://www.unisdr.org/we/inform/publications/7817. Accessed October 18, 2014.
2. Ardalan, A, Mowafi, H, Khoshgsabeghe, HY. Impacts of natural hazards on primary health care facilities of Iran: a 10-year retrospective survey. PLoS Curr. 2013:5. doi: pii: ecurrents.dis.ccdbd870f5d1697e4edee5.
3. International Strategy for Disaster Reduction (ISDR). Hospitals safe from disasters. ISDR website. http://www.unisdr.org/2009/campaign/pdf/wdrc-2008-2009-information-kit.pdf. Accessed December 21, 2014.
4. Djalali, A, Hosseinijenab, V, Hasani, A, et al. A fundamental, national, disaster management plan: an education based model. Prehosp Disaster Med. 2009;24(6):565-569.
5. Thomas, TL, Hsu, EB, Kim, HK, et al. The incident command system in disasters: evaluation methods for a hospital-based exercise. Prehosp Disaster Med. 2005;20(1):14-23.
6. Myint, NW, Kaewkungwal, J, Singhasivanon, P, et al. Are there any changes in burden and management of communicable diseases in areas affected by Cyclone Nargis? Confl Health. 2011;5(1):9.
7. Tohma, K, Suzuki, A, Otani, K, et al. Monitoring of influenza virus in the aftermath of the Great East Japan earthquake. Jpn J Infect Dis. 2012;65:542-544.
8. Yan, G, Mei, X. Mobile device-based reporting system for Sichuan earthquake-affected areas infectious diseases reporting in China. Biomed Environ Sci. 2012;25(6):724-729.
9. Schneider, MC, Tirado, MC, Rereddy, S, et al. Natural disasters and communicable diseases in the Americas: contribution of veterinary public health. Veterinaria Italiana. 2012;48(2):193-218.
10. Brazilay, EJ, Schaad, N, Magloire, R, et al. Cholera surveillance during the Haiti epidemic - the first two years. N Engl J Med. 2013;368(7):599-609.
11. Nelli, G, Kakar, SR, Rahim Khan, M, et al. Early warning disease surveillance after a flood emergency - Pakistan, 2010. MMWR Morb Mortal Wkly Rep. 2012;61(49):1002-1007.
12. Topran, A, Ratard, R, Bourgeois, SS, et al. Surveillance in hurricane evacuation centers - Louisiana, September-October 2005. MMWR Morb Mortal Wkly Rep. 2006;55(02):32-35.
13. Williams, W, Guariso, J, Guillot, K, et al. Surveillance for illness and injury after Hurricane Katrina - New Orleans, Louisiana, September 8-25, 2005. MMWR Morb Mortal Wkly Rep. 2005;54(40):1018-1020.
14. Polonsky, J, Luquero, F, Francois, G, et al. Public health surveillance after the 2010 Haiti earthquake: the experience of Médecins Sans Frontières. PLoS Curr. 2013;7:5.
15. Centers for Disease Control and Prevention. Updated guidelines for evaluating public health surveillance systems. MMWR Morb Mortal Wkly Rep. 2001;50(RR-13):1-51.
16. Scnall, AH, Wolkin, AF, Noe, R, et al. Evaluation of a standardized morbidity surveillance form for use during disasters caused by natural hazards. Prehosp Disaster Med. 2011;26(2):90-98.
17. Choudhary, E, Zane, DF, Beasley, C, et al. Evaluation of active mortality surveillance system data for monitoring hurricane-related deaths - Texas, 2008. Prehosp Disaster Med. 2012;27(4):392.
18. Smith, PC, Mossialos, E, Papanicolas, I. Performance measurement for health system improvement: experiences, challenges and prospects. World Health Organization Regional Office for Europe website. http://www.euro.who.int/__data/assets/pdf_file/0003/84360/E93697.pdf. Accessed April 11, 2014.
19. Tashobya, CK, Da Silveira, VC, Ssengooba, F, et al. Health systems performance assessment in low-income countries: learning from international experiences. Global Health. 2014;10:5. doi: 10.1186/1744-8603-10-5.
20. Magloire, R, Mung, K, Harris, S, et al. Launching a national surveillance system after an earthquake - Haiti, 2010. MMWR Morb Mortal Wkly Rep. 2010;59(30):933-935.
21. Sabatinalli, G, Kakar, SR, Rahim Khan, M, et al. Early warning disease surveillance after a flood emergency - Pakistan, 2010. MMWR Morb Mortal Wkly Rep. 2012;61(49):1002-1007.
22. Babaie, J, Ardalan, A, Vatandoost, H, et al. Performance assessment of communicable disease surveillance in disasters: a systematic review. PLoS Curr. February 24 2015; doi: 10.1371/currents.dis.c 72864d9c7ee99ffbe9ea707fe4465.
23. Strauss, AL, Corbin, JM. Basics of Qualitative Research: Techniques and Procedures for Developing Grounded Theory. 2nd ed. Sage Publications; 1998.
24. Giri, S, Nejadhashem, AP. Application of analytical hierarchy process for effective selection of agricultural best management practices. J Environ Manag. 2014;132:165-177. doi: 10.1016/j.jenvman.2013.10.021.
25. Kouadio, IK, Koffi, AK, Attoh-Toure, H, et al. Outbreak of measles and rubella in refugee transit camps. Epidemiol Infect. 2009;137(11):1593-1601.
26. Altevogt, BM, Pope, AM, Hill, MN, et al. Research priorities in emergency preparedness and response for public health systems. The National Academies Press website. http://www.nap.edu/catalog/12136.html. Accessed April 05, 2013.
27. Osman, IH, Berbary, LN, Sidani, Y, et al. Data envelopment analysis model for the appraisal and relative performance evaluation of nurses at an intensive care unit. J Med Syst. 2011;35:1039-1062.
28. Communicable Diseases Surveillance and Response Systems: A Guide to Monitoring and Evaluating. World Health Organization; 2006. http://www.who.int/csr/resources/publications/surveillance/WHO_CDS_EPR_LYO_2006_2/en/.
29. Connolly MA, ed. Communicable Diseases Control in Emergencies: A Field Manual. World Health Organization; 2005. http://www.who.int/diseasecontrol_emergencies/publications/9241546166/en/.
30. Veillard, J, Champagne, F, Klazinga, N, et al. A performance assessment framework for hospitals: the WHO regional office for Europe PATH project. Int J Qual Health Care. 2005;17(6):487-499.
31. Murray, CJ, Frenk, J. A framework for assessing the performance of health systems. Bull World Health Organ. 2000;78(6):717-731.
32. Ghana Ministry of Health. Holistic assessment of the health sector, programme of work 2012. Ghana Ministry of Health website. http://www.moh-ghana.org. Accessed July 14, 2013.