Household emergency preparedness (HEP) includes the actions, or empirical referents, that represent preparation for a disaster. Increased levels of HEP could save lives, prevent worsening of chronic medical conditions, and decrease the likelihood of responders having to brave dangerous situations to assist those in need.[1-10] Currently, there is no 'gold standard' HEP instrument.[1,8,11-17] Household emergency preparedness can be considered a mature concept in that it has commonly accepted recommendations, distinct characteristics, and defined boundaries. Therefore, the concept of HEP is ready to be operationalized into a valid and reliable instrument appropriate for measuring HEP levels in developed and developing nations. Instrument development starts with defining the construct and developing the questions. The instrument should then be administered to a representative sample, and psychometric testing should be performed on the results. The purpose of this study was to generate a consensus on the concept definition of HEP from experts representing multiple disciplines and countries, along with community stakeholders, in order to develop a valid, reliable, all-hazards HEP instrument.
METHODS
Instrument questions were generated via 3 methods: a literature search, existing instruments, and expert panels. A criterion-referenced measurement framework was used because the goal of the instrument is to determine whether a respondent has acquired a predetermined set of target behaviors.[18]
Literature Search
In 2016, Heagele documented the lack of a valid and reliable HEP instrument in the public health and emergency management research community.[1] For the current study, a review of the existing literature was conducted again to determine if a valid and reliable instrument had emerged since that publication. The search was delimited to peer-reviewed academic journal articles published between January 2015 and January 2019, with abstracts available, and in the English language. Simultaneous searches of the MEDLINE Complete (EBSCO, Ipswich, MA), CINAHL Complete (EBSCO, Ipswich, MA), APA PsycINFO (APA, Washington, DC), Social Sciences Full Text (EBSCO, Ipswich, MA), and Health and Psychosocial Instruments (EBSCO, Ipswich, MA) databases were conducted in December 2018. Eligibility criteria included original research studies measuring HEP of individuals or households (i.e., not communities, responders, or health care organizations) with a survey, tool, scale, instrument, or checklist. The following key word combinations were used: "household OR citizen OR emergency OR disaster AND preparedness," "individual AND preparedness," "individual AND disaster OR emergency AND preparedness OR planning," "disaster OR emergency AND supplies OR kit," and "hazard AND preparedness OR readiness." After removing duplicates, 38 of the 1587 initial results met eligibility criteria. Another 14 articles were identified either through citation trails or ResearchGate, a social networking website for researchers.
Of the 52 articles identified, only 22 provided brief information on instrument question development.[2-4,7,8,15-17,19-32] However, none of these articles specifically focused on instrument development. A few researchers analyzed data from nationally representative surveys that included questions on HEP, such as the Behavioral Risk Factor Surveillance System Questionnaire,[6] Health and Retirement Study,[5,12] General Social Survey,[11] and the Public Readiness Index's Readiness Quotient.[33,34] For the researchers who created their own instruments, the Federal Emergency Management Agency (FEMA) recommendations were the most popular definition of HEP used to inspire the instrument questions. The authors of 21 articles provided their instrument questions in their articles.[2,5,7,12,14-17,19,20,24,25,28,29,33,35-40] Some authors asked broad questions about HEP, such as "did you assemble a disaster supply kit?," whereas others asked about the presence of specific supply kit items such as flashlights, radios, food, and water. Only 8 articles contained information on pilot-testing of the instrument.[2,4,8,22,32,34,41,42] A total of 15 articles included instrument reliability data,[5,8,11,12,16,20,23,26,27,34,36-38,41,43] and 4 articles contained validity data.[8,12,36,44] The remainder of the articles provided no instrument development data or reliability and validity information.
Researchers accrue evidence for validity by examining the scores resulting from an instrument that is used for a specific purpose, with a specified group of respondents, under a certain set of conditions.[18] Validity must be assessed each time an instrument is used to determine whether validity generalizations can be made for the various populations under study.[18]
Rigorous research designs need to start with instruments that are psychometrically sound; information about the psychometric properties should be obtained and evaluated before an instrument is selected for use.[18,45] Without access to publications focusing on instrument development, it is impossible for researchers to evaluate the psychometric properties of current HEP instruments. Unfortunately, no official publications detailing the original development of any HEP instrument were available for review. As such, it was impossible to discern the conceptual basis that guided the instrument development, critique the methods used to generate questions, or compare the questions to an original concept definition.
After the literature search and review of the evidence, the investigators agreed that a valid and reliable HEP instrument had not emerged. The decision to proceed with the development of a HEP instrument via a Delphi study was made.
Existing Instruments: Criterion Validity
Criterion validity is established when a newly developed instrument has an empirical association with a commonly used, benchmark instrument.[46] Because no benchmark instrument exists, criterion validity could not be established for HEP measurement; however, many of the existing instruments were inspired by the FEMA recommendations, which lends support for criterion validity of these HEP instruments. Data were collected for 2 studies with iterative versions of the Preparedness Assessment (PA), an instrument based on the "Ready"[47] and the Texas "Ready or Not?"[48] campaign materials.[7,19] The initial PA was a dichotomous survey in which respondents were asked to answer 'yes' if they had an item or 'no' if they did not. Examples of questions posed to participants include, "Have you ever had any emergency preparedness education?" and "Do you have a first aid kit?" Participants reporting they possessed the item scored 1 (yes), while those reporting they did not scored 0 (no). These question response scores were summed to create a preparedness index with a possible range of 0 to 28.[19]
The PA questions were compared to questions of 22 other instruments from the literature search; the similarities and differences between what was included on the instruments and how questions were worded were examined. The investigators, including the developers of the PA, agreed to adapt the PA for the Delphi study. This new instrument was named the Household Emergency Preparedness Instrument (HEPI). The goal was to create an all-hazards, comprehensive, easily understandable HEP instrument to present to the disaster research community.
Expert Panels: The Delphi Technique
The Delphi technique was used to establish face and content validity and to evaluate cultural bias of the HEPI. The online Delphi technique is a widely accepted survey method used to generate group consensus and develop measurement instruments from geographically dispersed expert participants spanning a wide range of disciplines and roles.[18,49,50]
There is no consensus on what constitutes an expert, but Delphi participants should be primary stakeholders with various interests related to the target issue, have somewhat related backgrounds and experiences, and include representation from all relevant social and cultural groups.[18,49,50] While there is also no general agreement on the number of participants required for a consensus study, Delphi studies are commonly completed with under 100 participants.[49,50] The Delphi technique is recommended when: (a) the participants do not have a history of adequate communication; (b) input is needed from more individuals than can effectively interact in a face-to-face exchange; (c) time and cost make frequent group meetings infeasible; and (d) participant anonymity is needed to reduce the effects of dominant individuals (i.e., the bandwagon effect).[18,49,50] In addition to being fast, inexpensive, and versatile, the strengths of the Delphi technique include the following: opinions of many experts can be condensed into a few precise statements; participants can respond at their own convenience; and participant anonymity limits the potential pressure from other experts to conform to social norms, organizational culture, or standing within a profession.[18,49,50]
Recruitment
Delphi participants are purposefully selected for their expertise.[49] Interdisciplinary colleagues of the investigators with disaster response or disaster research experience, along with the corresponding authors of the articles found in the literature search, were e-mailed an IRB-approved recruitment and consent script and the link to the survey. The snowball sampling technique, in which participants suggest other potential participants, was also utilized. In addition, the World Association for Disaster and Emergency Medicine (WADEM) sent the recruitment message via e-mail to members on the investigators' behalf. Inclusion criteria were English-reading adults aged 21 years or older, reflecting the highest age of consent worldwide.[51]
Participants came from 36 low-, middle-, and high-income countries on 5 continents (Figure 1) and represented disaster response experts from the disciplines of public health, emergency management, medicine, nursing, pharmacology, firefighting, emergency medical services, social work, psychology, sociology, epidemiology, bio-ethics, hospitality, national security, environmental management, geography, public administration, humanitarian relief, education, and business. Participants were asked to self-identify as either an expert or a community stakeholder. Table 1 describes the sample demographic characteristics.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220610110822815-0347:S193578932000292X:S193578932000292X_fig1.png?pub-status=live)
FIGURE 1
TABLE 1 Sample Demographics
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220610110822815-0347:S193578932000292X:S193578932000292X_tab1.png?pub-status=live)
Data Collection
While there are no universal guidelines for how to carry out a Delphi study,[50] common data collection procedures include asking participants to complete multiple iterations of a structured, formal, electronic questionnaire designed to elicit opinions and exchange feedback. All rounds of the Delphi were exchanged via Qualtrics online survey software, version August 2019 (Qualtrics, Provo, UT, USA).
Typically, 3 rounds are needed to reach a consensus.[49] Through their endorsement of the degree to which specific HEP actions and items are essential, participants compared their concept definition of HEP to the HEPI questions. They also evaluated question clarity and conciseness and pointed out ways of measuring the phenomenon that may have been excluded.[46] Qualitative and statistical analyses were used by the investigators to modify subsequent iterations of the questionnaire until consensus was reached.[18,49] Once the responses were tabulated and summarized, they were returned to the participants. Additional feedback was sought on the questions that did not achieve consensus. Participants were given 2 weeks to participate in each round. Subsequent rounds were limited to the group of participants who responded to the questionnaire in the first round.
Data Analysis
Descriptive statistics were used to summarize the collective judgements of the participants (using the Statistical Package for the Social Sciences for Windows, version 25.0 software; International Business Machines Corporation, Armonk, NY, USA).[18,49] In the first round, consensus on a HEPI question was achieved when 80% of the participants' votes fell within 2 categories of a 5-point scale.[49] A HEPI question was kept when 80% or more of the participants marked the question as important or essential. Likewise, questions were discarded if 80% of the responses fell in the 2 categories on the lower end of the rating scale.
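As an illustration only (not the authors' actual analysis code, which was run in SPSS), the first-round consensus rule described above — 80% of valid responses falling in the top two or bottom two categories of the 5-point scale — could be sketched as:

```python
# Hypothetical sketch of the Round 1 consensus rule described above.
# `ratings` holds one respondent's 1-5 rating per entry; None marks missing data.

def consensus_status(ratings, threshold=0.80):
    """Classify a question as 'retain', 'discard', or 'no consensus'.

    Valid percent: missing responses are excluded from the denominator.
    Retain if >= 80% of valid ratings are 4 (important) or 5 (essential);
    discard if >= 80% are 1 (unessential) or 2 (a little important).
    """
    valid = [r for r in ratings if r is not None]
    if not valid:
        return "no consensus"
    top_two = sum(1 for r in valid if r >= 4) / len(valid)
    bottom_two = sum(1 for r in valid if r <= 2) / len(valid)
    if top_two >= threshold:
        return "retain"
    if bottom_two >= threshold:
        return "discard"
    return "no consensus"

# Example: 9 of 10 valid ratings are 'important' or 'essential' -> retained.
print(consensus_status([5, 4, 5, 4, 4, 5, 3, 5, 4, 4, None]))
```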
The open-ended responses were analyzed with qualitative content analysis via NVivo software for Mac, version 11.4.1 (QSR International Pty Ltd, Doncaster, Victoria, Australia). Content analysis aims to attain a condensed description of a phenomenon by categorizing written data so that it can be counted.[52] Participants' responses were coded and placed into categories. These data were primarily used to edit existing HEPI questions.
RESULTS
A total of 3 rounds of the Delphi study were required to ascertain consensus from the large panel of participants. Highlights of each round are described below and detailed in Table 2.
TABLE 2 HEPI Questions Retained and Discarded by Round
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220610110822815-0347:S193578932000292X:S193578932000292X_tab2.png?pub-status=live)
a New or revised question based on panel recommendations from Round 1.
First Round
The Delphi study started with a 106-question HEPI: 50 questions from the PA plus 56 new questions developed by the investigators from an extensive review of the literature. Participants (n = 154) were asked to rate each question on the HEPI from 1 to 5 (1 = unessential, 2 = a little important, 3 = neutral, 4 = important, and 5 = essential). In addition, 8 questions were included to determine how to score the HEPI; the quantity of food, water, and medication needed; and whether there should be child, pet, and access and functional needs subscales. Two open-ended questions were added to give the Delphi participants the opportunity to describe their HEPI experience in greater detail and served as a virtual focus group.[53,54] Participants were asked if any additional questions should be included on the HEPI, if supply kit items were named something different in their community, and if any questions were difficult to understand. Eight demographic questions were also included to describe characteristics of the Delphi participants.
A total of 35 questions achieved consensus for inclusion on the HEPI. Valid percent responses were utilized for each question, meaning that percentages were calculated after excluding missing data. Participants agreed that an access and functional needs sub-scale should be offered (n = 125, 94%) for respondents with a health issue or disability, a pet preparedness sub-scale should be offered (n = 112, 84.2%) for respondents with pets, and a child preparedness sub-scale should be offered (n = 129, 97%) for respondents with a child.
At the completion of Round 1, it remained undecided how respondents should answer questions about preparedness kit items and how the final instrument should be scored. The amount of extra food, water, prescription medications, and medical supplies a household should have in order to be considered prepared for a disaster did not achieve consensus and went to Round 2 of the Delphi study. None of the survey questions achieved the consensus threshold for discarding (80% or higher on the lower end of the rating scale).
A specific theme that emerged from the qualitative data, via 23 comments, was skepticism of a global, all-hazards instrument. Delphi participants felt that the HEPI may have to be tailored to specific communities, according to the disaster scenarios expected and the capability of responders. For example, a snow shovel was a supply kit item in Round 1; participants commented that this item was not relevant for respondents living in tropical climates. Participants struggled to settle on a specific time frame for the quantity of water, food, and medical supplies needed to be considered prepared, because how quickly these supplies would be brought in post-disaster was felt to vary greatly from country to country, and even from community to community within the same country. Participants also acknowledged that acquisition of an adequate supply of medications prior to predicted disasters is a systemic problem rather than a matter of personal responsibility, due to the policies of insurance companies.
A second theme that emerged was potential response burden of the instrument. A total of 14 new questions could have been added to the HEPI from the qualitative data. However, due to comments on the length of the HEPI and concern for response burden, only 6 of the 14 questions were added to the HEPI because these actions and items were mentioned more than once. The questions proposed by the participants that were not added to the HEPI can be found in Table 3.
TABLE 3 Proposed Questions Created from the Qualitative Data – Not Included in the Delphi Study
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220610110822815-0347:S193578932000292X:S193578932000292X_tab3.png?pub-status=live)
Second Round
After the first round retained 35 questions, the remaining 71 questions were re-evaluated in Round 2, along with 5 new questions about preparedness actions and 1 new question about a disaster supply kit item created from the first round's qualitative data. In the second round, participants were given only 2 response options for the HEPI questions (1 = unessential, 2 = essential), and no open-ended questions were included.
At the conclusion of Round 2 (n = 85), 14 more questions achieved consensus to stay on the HEPI. In addition, 2 of the 6 questions added from the first round's qualitative recommendations were considered "essential" by 80% or more of the participants and were retained. Discarded HEPI questions (those items deemed unessential) can be found in Table 2. Panel recommendations for quantification of select supply items failed to reach consensus, so the decision was made to take the top choice for each of the requested recommendations:
- Food and water: 1 week (n = 34, 43%)
- Prescription medication: 2 weeks (n = 38, 48.1%)
- Medical supplies: 2 weeks (n = 33, 42.31%)
The way in which respondents should answer questions about disaster supply kit items did not achieve consensus, but a scaled response (I do not have this item, I have this item in my home, or I have this item in my disaster supply kit) received more votes (n = 44, 56.41%). The approach to scoring the final instrument met consensus (n = 62, 79.5%), with the result that respondents’ choices would be weighted in terms of the importance of an item or action.
Third Round
The investigators assembled a 51-question HEPI incorporating the recommendations from Round 2 for quantification and scaling. The domains represented in the final HEPI are: preparedness actions (11 questions), communication planning (3 questions), evacuation planning (12 questions), disaster supplies (16 questions), and access or functional needs (9 questions). To decide which questions would be weighted higher when scoring the HEPI, Delphi participants (n = 79) were asked to weight questions from 1 = least essential to 5 = most essential. See Table 2 for the questions included and the corresponding mean weight recommended by the panelists.
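A minimal sketch of how such importance-weighted scoring could work is shown below; the item names, weights, and partial-credit values here are hypothetical illustrations, not the panel's actual recommendations:

```python
# Hypothetical weighted-scoring sketch: each HEPI question carries a panel-
# assigned mean weight (1 = least essential, 5 = most essential), and a
# respondent's credit for each question is multiplied by that weight.

weights = {              # hypothetical mean weights from a Delphi panel
    "flashlight": 4.6,
    "extra_water": 4.9,
    "evacuation_route": 4.2,
}

# Hypothetical respondent credit per question, e.g. 0 = does not have the
# item, 0.5 = has it at home, 1 = has it in the disaster supply kit.
responses = {
    "flashlight": 1.0,
    "extra_water": 0.5,
    "evacuation_route": 0.0,
}

def weighted_score(responses, weights):
    """Return the respondent's score as a percentage of the maximum."""
    earned = sum(weights[q] * credit for q, credit in responses.items())
    maximum = sum(weights.values())
    return 100 * earned / maximum

print(round(weighted_score(responses, weights), 1))  # 51.5
```

Weighting in this way means that leaving out a highly essential item (e.g., water) costs a respondent more points than leaving out a less essential one.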
At the conclusion of Round 3, the questions were assessed for reliability. The retained questions had a total Cronbach’s alpha of 0.96. Cronbach’s alpha for the domain scales ranged from α = 0.74 (communication plans) to α = 0.92 (disaster supplies). This version of the HEPI will be pilot tested.
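Cronbach's alpha, reported above for the total scale and each domain, can be computed from a respondents-by-items score matrix. A sketch with made-up data (not the study's data) using NumPy:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores),
    where k is the number of items and variances are sample variances.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)          # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)      # variance of row totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Made-up responses: 5 respondents x 4 items rated 1-5.
demo = [[4, 5, 4, 5],
        [3, 3, 3, 4],
        [5, 5, 4, 5],
        [2, 2, 3, 2],
        [4, 4, 4, 4]]
print(round(cronbach_alpha(demo), 2))  # 0.95
```

Values near 1 indicate that the items vary together across respondents, which is the sense in which the HEPI domains above are internally consistent.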
DISCUSSION
Delphi participants came to a consensus that adequate HEP is defined as the completion of several preparedness actions and the assembly of a disaster supply kit that can be taken in a precipitous evacuation (Table 4). Content validity of the HEPI is now supported because the participants agreed that the instrument questions adequately captured the content domains of the phenomenon of interest.[45]
TABLE 4 Concept Definition of Household Emergency Preparedness
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20220610110822815-0347:S193578932000292X:S193578932000292X_tab4.png?pub-status=live)
The HEPI is an all-hazards, comprehensive survey used to ascertain whether a respondent is prepared for the common conditions that disasters create (i.e., living without power, limitations on drinking water, and being unable to leave the home to acquire additional supplies for a few days). As discussed previously, there is support for criterion validity of this instrument. The HEPI questions are objective and ask about what the respondent presently owns or does in a multiple-choice format, allowing the participants little latitude in constructing their responses.[18] For the questions that ask about child, pet, and access/functional needs preparedness, a "this does not apply to me" response option is provided. Due to the dynamic conceptualization of HEP, the present-time context and format of the instrument questions are appropriate for this phenomenon. The investigators intend to make the HEPI free for non-commercial use, so long as proper credit is afforded to the developers of the HEPI in publications. Completing the HEPI should take about 15 minutes, and the questions do not ask about sensitive information; both respondent and researcher burden should therefore be low.
After the instrument was developed, field testing (n = 23) was used to determine if any questions were difficult to respond to, unclear, or in need of revision. There were no recommended HEPI revisions after the field testing. This version of the HEPI has a Flesch-Kincaid reading level of 6.3. The instrument will be pre-tested on a sample of respondents under conditions that approximate, as nearly as possible, the conditions expected to exist when it is employed.[18] The first internet-based pilot test of the HEPI will be conducted on a convenience sample of faculty, staff, and students from one of the most diverse universities in the United States, the City University of New York (CUNY). The globally representative sample obtained from CUNY will be ideal for evaluating cultural bias of the HEPI. After the initial pool of HEPI questions is developed, scrutinized, and administered to an appropriately large and representative sample, the authors of the HEPI will evaluate the performance of each question to determine which questions to keep on the final instrument.[46] Some of the discarded questions may be added back to the HEPI after the pilot test if the majority of the variance is not explained by the current version.
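The Flesch-Kincaid grade level reported above is a standard readability measure combining average sentence length and average syllables per word; the word, sentence, and syllable counts in this sketch are hypothetical, not those of the HEPI:

```python
def flesch_kincaid_grade(total_words, total_sentences, total_syllables):
    """Flesch-Kincaid grade level:
    0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    """
    return (0.39 * total_words / total_sentences
            + 11.8 * total_syllables / total_words
            - 15.59)

# Hypothetical counts for a short survey: 300 words, 25 sentences, 400 syllables.
print(round(flesch_kincaid_grade(300, 25, 400), 1))  # 4.8
```

A result of 6.3, as reported for the HEPI, means the text should be readable by someone with roughly a sixth-grade education.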
Delphi participants will be provided with the HEPI and encouraged to utilize the instrument with diverse populations in their own communities, especially as a measurement of change in pre- and post-intervention studies and in longitudinal studies evaluating the outcomes of adequate HEP. Researchers may translate the HEPI into languages other than English and make modifications to tailor the instrument to specific populations of interest, so long as they disclose these changes and provide psychometric data for the instrument in publications. These data may inform future modifications of the HEPI.
Preparedness recommendations are not centered on income,[47] and individuals affected by disasters commonly experience basic post-disaster needs related to food, water, shelter, safety, and health.[55] Household emergency preparedness may be tailored to individuals' specific needs and based on contextual considerations such as culture, environment, setting, and the types of disasters for which the household is most at risk.[55] However, the investigators were able to develop an instrument that assesses preparedness for the conditions that all disasters create, such as power outages, limitations on drinking water, and the inability to acquire additional supplies.
Limitations
"Critics of the Delphi method assert that results represent the opinions of experts and may or may not be consistent with reality,"[18] which is why community stakeholders were included as participants. The requirement for English reading skills may have limited participation and impeded attempts at global representation. As expected, there was overrepresentation from the US, Australia, and Canada. There was no representation from South America or Russia. Inclusion of WADEM and snowball sampling in the survey dissemination was an attempt to expand survey outreach and increase representation in this non-random sample.
Pet preparedness was considered an essential element by the majority of the participants; however, there was no representation from the field of veterinary medicine, which might have limited the number and type of questions included on the final HEPI. Data regarding sub-specialties within fields were not collected; therefore, it is unknown whether the input of Delphi participants with expertise in pediatrics, geriatrics, and access and functional needs issues was included, although participants with this expertise were targeted during recruitment. Finally, the impact of attrition due to the decrease in sample size from 154 in the first round to 79 in the third round is unknown. The size of the original and final panels may have provided sufficient protection related to inclusivity.
CONCLUSION
It is anticipated that this instrument will benefit society. Once the instrument is adequately pilot tested, it can be used to determine whether there is an association between being prepared for a disaster and surviving the disaster without the need for rescue or outside assistance. For medically frail community members, it can be determined whether there is an association between being prepared for a disaster and surviving the disaster without an acute exacerbation of a chronic illness and with no change in baseline functional status. This instrument can also be used in experimental studies to build evidence for promising individual and community HEP interventions. Researchers are encouraged to use the HEPI to provide additional validity and reliability data, which may inform future modifications of the instrument.
Acknowledgements
We would like to acknowledge the work of our research assistants Asha Ewrse, BSN, RN, Wenpin Hu, BSN, RN, Kamil Krekora, BSN, RN, and Soon-Hee Shimizu, BSN, RN. You contributed meaningfully to this important research while maintaining academic success in a rigorous Bachelor of Science in Nursing program. That is an amazing accomplishment.
Ethical considerations
This international Delphi study received institutional review board (IRB) approval from Hunter College, The City University of New York (protocol #2019-0121), Texas Christian University (protocol #M-1904-141-1905), and The University of Texas at Tyler (protocol #Spring2019-101). Participants’ consent was implied by completing the questionnaire.
Conflicts of Interest Statement
This research received no specific grant from any funding agency in the public, commercial, or not-for-profit sectors. The authors have no financial or personal relationships with other people or organizations that could inappropriately influence or bias their work. We have no commercial associations that could pose a conflict of interest or financial bias.