Introduction
As the world experiences larger, more frequent, and less predictable emergencies due to changing demographics, climate change, and urbanization, there is an increasing demand for humanitarian action and a greater need for trained and experienced providers who possess the professional skills and competencies required of a humanitarian worker. A recent survey by the network Enhancing Learning and Research for Humanitarian Action (ELRHA) indicated that existing global staffing levels of roughly 210,800 humanitarian workers in the field are rising at an average annual rate of six percent. While the survey showed that 92% of field-based humanitarian workers are national staff from the countries in which the humanitarian emergencies and/or disasters occur, there are very few training or professional development courses that target their needs.[1] Furthermore, the ignominious responses to the 2004 Indian Ocean tsunami and the 2010 Haitian earthquake revealed “unacceptable practices in the delivery of emergency medical assistance calling for greater accountability, quality control, more stringent oversight and coordination” and provoked a strong call for an international registry of deployable provider organizations and their providers.[2] These issues have accelerated the professionalization of the humanitarian workforce.
Training programs affiliated with academic institutions, based primarily in the European Union, the United Kingdom, and North America, along with programs in a number of humanitarian organizations, currently educate and train a cadre of professionals certified in humanitarian action.[3-9] The professionalization of the humanitarian sector has been described by Walker and colleagues and mimics processes long recognized in medicine and law.[10] Essential to this process are clearly defined competencies that can be tested to determine whether workers are able to apply them effectively prior to deployment. Disaster simulations and drills have been used in training programs to test responders' preparation for diverse scenarios and are considered effective tools for planning for and mitigating the effects of disasters.[11] Simulation followed by timely evaluation is one way to mimic the field deployment process, test core competencies, and ensure that a competent workforce is proactively built to manage the inevitable emergencies and crises it will face. The inclusion of an evaluation component provides an additional level of accountability that training and simulation alone cannot guarantee.
In November 2011, the World Health Organization (WHO) collaborated with the Humanitarian Training Initiative (HTI) to create the Public Health Pre-Deployment Course simulation exercise (the 2011 WHO-HTI SimEx), conducted in Tunisia. This simulation was the first to apply and evaluate a competency-based framework through simulation using an evidence-based approach. The primary objective was to evaluate the workforce during a humanitarian crisis simulation and provide real-time feedback to both the participants and the WHO, to help determine whether participants were immediately deployable in a crisis response and, if not, in which competency-specific areas they would need additional training. Secondary objectives were to create and test the applicability of an evaluation tool that would be competency-based and incorporate the skills and behaviors required for accountability-based crisis management and response.
Methods
Competency-Based Framework
The Consortium of British Humanitarian Agencies (CBHA) was the first humanitarian organization to build a humanitarian competency framework that includes a learning and evaluation framework specific to general humanitarian training and field craft. The Core Humanitarian Competencies Framework developed by the CBHA consists of 16 core competencies distributed among six categories and divided into two main sections: core behaviors for all staff and additional behaviors for first-level line managers.[12] Most humanitarian stakeholders now recognize this framework as the standard for categorizing competencies. While the framework is currently used largely for self-assessment, hiring, and performance benchmarking, it has not yet been used to measure performance in an academic or training environment.
Competency-Based Evaluation Tool
Linking the CBHA competency framework to learning objectives that can be measured and evaluated in the classroom and in simulation-based training is the next step (Figure 1). The SimEx facilitators created the tool by first listing the sub-competencies under each core competency heading in order of importance. The most important sub-competency under each competency was translated into a learning objective and linked to a measurable indicator. These indicators provided six measures of competency that were evaluated by a joint team of facilitators. Each participant could be evaluated throughout the SimEx period by WHO and HTI facilitators using the tool, which identified the participant by team color, role on the team, skill station, time, and date. The six competencies evaluated during every interaction were:
Figure 1 Evaluation Tool
1. negotiate effectively with persons inside and outside the WHO, including national and local authorities;
2. effectively apply the principles of humanitarian reform;
3. work productively in an environment in which clear information or direction is not always available;
4. quickly reallocate resources and reset priorities in response to unexpected events;
5. work collaboratively with team members to achieve results;
6. reduce vulnerability by complying with safety and security protocols.
The six competencies were scored on a five-point Likert scale, in which a score of 1 reflected “poor” performance and a score of 5 reflected “excellent” performance. A “not relevant” (NR) option was also included so that facilitators could refrain from evaluating a competency not relevant to the skill station or interaction. Each evaluation form also allowed for qualitative comments to provide more in-depth feedback, including what the participant did well, suggested areas for improvement, and other comments. Each response team member had a specific role, designated by a number that was consistent across teams; for example, team member number one was the team leader.
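The paper does not publish the form's data layout; purely as an illustrative sketch, one completed evaluation form could be represented as a record like the following. All field and competency names here are hypothetical, not taken from the actual instrument.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, List, Optional

# Shorthand labels for the six competencies listed above (labels are ours).
COMPETENCIES: List[str] = [
    "negotiation", "humanitarian_reform", "ambiguity",
    "reprioritization", "teamwork", "security",
]

@dataclass
class EvaluationRecord:
    """One facilitator's evaluation of one participant at one interaction."""
    team_color: str      # team identifier
    role_number: int     # e.g., 1 = team leader (consistent across teams)
    skill_station: str   # station/scenario where the interaction occurred
    timestamp: datetime  # date and time of the evaluation
    # competency -> Likert score 1-5, or None where the facilitator marked NR
    scores: Dict[str, Optional[int]] = field(default_factory=dict)
    did_well: str = ""       # qualitative: what the participant did well
    to_improve: str = ""     # qualitative: suggested areas for improvement
    other_comments: str = ""
```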
The competency-based evaluation tool was designed for this SimEx and applied to 31 participants over a three-day simulation outside Tunis, Tunisia. Information was collected on the relative performance of these individuals. The tool was created to be used by WHO leadership as an evidence-based way to critically and constructively evaluate participants prior to field deployment.
All 31 participants identified themselves as humanitarian professionals, and all attended the two-week Public Health Pre-Deployment Course, now in its seventh iteration.[13] Sixty-one percent (19/31) of participants were based in WHO or WHO-related missions, and 39% (12/31) were from non-governmental humanitarian agencies. Twelve facilitators (six each from HTI and WHO) evaluated and led participants during the simulation. To standardize the tool's application, each facilitator was trained for approximately 30 minutes on the use of the tool shown in Figure 1; each had extensive familiarity with the CBHA competencies.[14,15] Each facilitator was to evaluate each team at least once. Data were entered into a Microsoft Excel spreadsheet (Excel Mac 2008, Microsoft Corporation, Redmond, Washington USA) and analyzed using SPSS statistical software (IBM SPSS Statistics 20, IBM Corporation, Armonk, New York USA).
The analysis produced individual average Likert scores for all six competencies, allowing an evidence-based evaluation of each participant's performance during the simulation. Graphs generated for each participant (Figures 2 through 4) helped WHO facilitators formulate individual recommendations ranging from “immediately deployable” to “recommend further instruction.” Overall average Likert scores were also calculated for all six competencies, providing a way to compare individual participants with overall group performance by competency and giving facilitators an overview of the collective performance of all participants.
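The authors performed this arithmetic in Excel and SPSS; the sketch below shows the same averaging logic in Python, building on the hypothetical record structure above. NR entries are excluded from both the numerator and the denominator of each average.

```python
from collections import defaultdict
from typing import Dict, List, Tuple

Participant = Tuple[str, int]  # (team_color, role_number)

def average_scores(records: List["EvaluationRecord"]) -> Dict[Participant, Dict[str, float]]:
    """Per-participant mean Likert score for each competency, skipping NR (None)."""
    sums = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(lambda: defaultdict(int))
    for rec in records:
        participant = (rec.team_color, rec.role_number)
        for competency, score in rec.scores.items():
            if score is None:  # NR: facilitator declined to rate this competency
                continue
            sums[participant][competency] += score
            counts[participant][competency] += 1
    return {
        p: {c: sums[p][c] / counts[p][c] for c in counts[p]}
        for p in counts
    }

def class_averages(individual_means: Dict[Participant, Dict[str, float]]) -> Dict[str, float]:
    """Class-wide mean per competency, averaged over participants' means."""
    totals, n = defaultdict(float), defaultdict(int)
    for means in individual_means.values():
        for competency, mean in means.items():
            totals[competency] += mean
            n[competency] += 1
    return {c: totals[c] / n[c] for c in totals}
```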
Figure 2 Total Number of Evaluations Per Competency
Follow-up interviews of eight participants were conducted six months after the simulation using Skype (Version 6.0.0.2968, Skype Communications, Luxembourg) and incorporated into a video used to explain and promote the SimEx as a humanitarian educational tool. Participants were asked whether the course was beneficial to their work and whether they would recommend it to others.
Institutional Review Board approval was not required, as the program evaluation was requested by the World Health Organization, Geneva, Switzerland.
Results
Figure 2 lists the total number of evaluations per competency. The smallest number of scores collected for a single competency was 155 (for “Effectively [applies] the principles of UN humanitarian reform”) and the largest was 281 (for “[Works] collaboratively with team members to achieve results”). These counts were tallied in real time during the simulation, and facilitators were then instructed to collect more evaluations where needed.
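In code terms, this real-time monitoring amounts to counting non-NR scores per competency and flagging any competency that falls short of a target count. A minimal sketch follows; the threshold value is illustrative, not from the paper.

```python
from collections import Counter
from typing import List

def evaluation_counts(records: List["EvaluationRecord"]) -> Counter:
    """Tally how many scored (non-NR) evaluations exist per competency."""
    counts = Counter()
    for rec in records:
        for competency, score in rec.scores.items():
            if score is not None:
                counts[competency] += 1
    return counts

def under_sampled(counts: Counter, minimum: int = 150) -> List[str]:
    """Competencies still needing more evaluations (threshold is hypothetical)."""
    return [c for c, n in counts.items() if n < minimum]
```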
Figure 3 lists one team leader's individual results across all six competencies. This scoring chart is an example of the one available to facilitators for giving feedback to the individual participant. In this example, the participant scored lowest (3.2) in “[Negotiates] effectively with persons inside and outside the WHO, e.g., national and local authorities” and highest (4.0) in “[Reduces] vulnerability by complying with safety and security protocols” and “Effectively [applies] the principles of UN humanitarian reform.” Using this graph, the facilitator could give the individual feedback and incorporate qualitative comments recorded on individual forms. Additionally, comparing this graph with Figure 4 places the individual's performance alongside that of the rest of the class. Lastly, with this individual grid, the facilitator could make recommendations for improvement in specific competencies.
Figure 3 An Individual's Results Across All Six Competencies
Figure 4 Class Evaluation by Competency
Figure 4 shows how the entire class performed in each competency. The highest scores were achieved for “[Works] collaboratively with team members to achieve results” and “Effectively [applies] the principles of UN humanitarian reform,” and the lowest for “[Reduces] vulnerability by complying with safety and security protocols.” During the simulation, real-time data collection showed poor compliance with this last competency, prompting the facilitators to step out of their evaluation role and provide additional feedback and instruction regarding security.
Follow-up interviews of eight of the 31 participants were conducted via Skype six months post-training. One hundred percent (8/8) agreed that the course and simulation were beneficial and that they would recommend them to others.
Discussion
Simulation exercises are an ideal method for evaluating competence prior to deployment to a real humanitarian setting. They provide a safe way to introduce and practice competencies in a setting that does no harm and benefits both the participant and the employer. Simulation has been used in many settings, notably in medicine and aeronautics, to train providers who will ultimately be responsible in life-and-death situations.[16,17] “See one, do one, teach one” was the mantra before simulation became popular. Now, simulation exercises allow providers to gain the “experience” of intricate operations or of flying large cargo planes before ever putting someone at risk.
In this exercise, performance assessments were standardized, dynamic, and immediate. For example, during a planned security event, participants were evaluated immediately after the scenario on how well they demonstrated the six CBHA competencies during that scenario. Also, if there were enough “poor” scores in a critical area, facilitators would give direct, out-of-role feedback to the group. This proved operationally critical (since security is a growing threat to the humanitarian space[18]) and emphasized the need to ensure participants met this competency above all others. The process was unique because evaluations not only occurred throughout the simulation but were also provided by facilitators with a broad range of expertise. All participants were evaluated repeatedly with a standardized tool. Integrating competency metrics and dynamic quantitative and qualitative measurement tools with real-time evaluation made for an innovative approach to participant evaluation in humanitarian training.
While the evaluation forms for each participant reflected different skills and scenarios, the composite of all evaluations captured each participant's core competencies. Every participant received an average score for each competency, and the standardized format allowed comparison between individuals. Each person received a “roadmap” of areas to work on, which facilitators used to guide their recommendations about next steps. This novel tool also allowed WHO leadership to assess selected participants' suitability for deployment using an evidence-based approach.
This feedback evaluation tool will help determine the core competencies of humanitarian aid personnel. With it, one can teach and evaluate to defined competencies, helping to close the gap between the workforce presently available and one with the competencies the job requires. This marks a critical accountability step on the pathway to the professionalization of humanitarian workers who possess not only discipline-specific knowledge, but also the operational skills and attitudes necessary in crisis situations.
Limitations
There are several limitations to the use of this tool. The cost and logistical burden of conducting a course and simulation are substantial and currently can be borne only by a limited number of organizations. There is not yet a recognized standard for curriculum or competencies, nor any agreed accountability mechanism for verifying competency. It is also difficult to assess inter-rater reliability in such a subjective evaluation. However, the tool can be adapted to other training situations to validate and standardize evaluations. Standard deviations for individual scores would not be appropriate or helpful because the frequencies are too low; reporting the range alongside each individual average score provides more useful detail. For example, Team 1 Member 2 averaged 3.4 on competency 1 (negotiate effectively), but the nine scores collected in that competency ranged from 1 to 4.
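A minimal sketch of this mean-plus-range summary follows. The example scores are hypothetical, chosen only to be consistent with the worked example above (nine ratings averaging roughly 3.4 and spanning 1 to 4); the actual nine scores are not published.

```python
from typing import List, Optional, Tuple

def mean_with_range(scores: List[Optional[int]]) -> Optional[Tuple[float, Tuple[int, int]]]:
    """Summarize one participant's scores in one competency as mean plus (min, max).

    A min-max range is reported instead of a standard deviation because the
    number of observations per competency is too small for SD to be meaningful.
    """
    valid = [s for s in scores if s is not None]  # drop NR entries
    if not valid:
        return None
    return round(sum(valid) / len(valid), 1), (min(valid), max(valid))

# Hypothetical ratings consistent with the Team 1 Member 2 example:
print(mean_with_range([1, 3, 3, 4, 4, 4, 4, 4, 4]))  # -> (3.4, (1, 4))
```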
After six months, only eight participants could be located. These participants were understandably hard to contact for in-depth feedback, as they were often based in crisis settings, primarily in the Middle East and North Africa. A more intensive follow-up evaluation is planned to determine whether the knowledge, skills, and attitudes gained from the course and simulation persisted, improved, or declined. This would help predict how frequently trainers would need to refresh participants' competencies. Increasing the number and frequency of data collected would show day-to-day improvements and direct facilitators in real time to intensify instruction in weak areas. Paper forms were used to collect the information, and data were then entered manually into a spreadsheet; electronic data collection may decrease the time and personnel required for data entry and analysis.
Conclusions
This study describes the development of a CBHA competency-based evaluation tool to measure participant performance in real time during the 2011 WHO-HTI SimEx. Individuals were assessed on their ability to meet each competency. This represents the first time that a tool evaluating competency-based performance has been applied in real time and incorporated into facilitator decision making at the individual level during a simulation exercise. The authors believe this evaluation process provides an additional, critical level of personal and organizational accountability not attained by educational training and simulation exercise instruction alone.[19-22] Creating an objective evaluation tool for courses and simulations based on internationally recognized competencies supports the development of the professionalization pathway and promotes accountability.[23] This instrument is a training and education option available to all humanitarian organizations. Further work is needed to validate, generalize, and standardize this tool in future training courses.