
Requirements for Independent Community-Based Quality Assessment and Accountability Practices in Humanitarian Assistance and Disaster Relief Activities

Published online by Cambridge University Press:  13 June 2012

Thomas D. Kirsch*
Affiliation:
Center for Refugee and Disaster Response, The Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland USA Johns Hopkins School of Medicine, Johns Hopkins University Medical Institutions, Baltimore, Maryland USA
Paul Perrin
Affiliation:
Center for Refugee and Disaster Response, The Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland USA
Frederick M. Burkle
Affiliation:
Harvard Humanitarian Initiative, Harvard School of Public Health, Cambridge, Massachusetts USA
William Canny
Affiliation:
Catholic Relief Services, Baltimore, Maryland USA
Susan Purdin
Affiliation:
International Rescue Committee, New York, New York USA
William Lin
Affiliation:
Johnson and Johnson Services, Inc., New Brunswick, New Jersey USA
Lauren Sauer
Affiliation:
Johns Hopkins School of Medicine, Johns Hopkins University Medical Institutions, Baltimore, Maryland USA
*
Corresponding Author: Thomas D. Kirsch, MD, MPH, 5801 Smith Avenue, Davis Bldg., Suite 3220, Baltimore, MD 21209 USA; E-mail: tkirsch1@jhmi.edu

Abstract

During responses to disasters, the credibility of humanitarian agencies can be threatened by perceptions of poor quality of the responses. Many initiatives have been introduced over the last two decades to help address these issues and enhance the overall quality of humanitarian response, often with limited success. There remain important gaps and deficiencies in quality assurance efforts, including potential conflicts of interest. While many definitions for quality exist, a common component is that meeting the needs of the “beneficiary” or “client” is the ultimate determinant of quality. This paper examines the current status of assessment and accountability practices in the humanitarian response community, identifies gaps, and recommends timely, concise, and population-based assessments to elicit the perspective of quality performance and accountability to the affected populations. Direct and independent surveys of the disaster-affected population will help to redirect ongoing aid efforts, and generate more effective and comparable methods for assessing the quality of humanitarian practices and assistance activities.

Kirsch TD, Perrin P, Burkle FM Jr, Canny W, Purdin S, Lin W, Sauer L. Requirements for independent community-based quality assessment and accountability practices in humanitarian assistance and disaster relief activities. Prehosp Disaster Med. 2012;27(3):1-6.

Type
Special Report
Copyright
Copyright © World Association for Disaster and Emergency Medicine 2012

“Quality is doing the right thing, right, the first time, and doing it better the next time, with the resource constraints and to the satisfaction of the community.” 1

Defining Quality and Accountability in Humanitarian Activities

Many definitions exist for “quality,” including some specifically related to humanitarian responses. The component most common to these definitions is that meeting the needs of the “beneficiary” or “client” is the ultimate determinant of quality. 2 Researchers suggest that “quality of care” is a concept that is most meaningful when applied to the individual end user. 3,4 While definitions for quality may seem simple and straightforward at the theoretical level, quality is quite difficult to measure and operationalize. Quality assurance processes define, measure, and improve quality. The perspective from which quality is defined plays a critical role in determining the quality assessment measures and methods that are used.

What constitutes quality can vary widely among donors, humanitarian staff, and affected populations. 5 Many definitions of quality stress the importance of the end users, yet despite the explicit statements of the Red Cross/Non-Governmental Organization (NGO) Code of Conduct, their perspective often is not taken into account. In itself, end-user satisfaction is an important outcome. 3 Poor satisfaction represents an immediately measurable outcome; and, over time, the treatment or lack of it may lead to further disability that may or may not be amenable to conventional measures. Slim has linked accountability and legitimacy in defining accountability as “the process by which an NGO holds itself openly responsible for what it believes, what it does and what it does not do in a way which shows it involving all concerned parties and actively responding to what it learns.” 6

While efforts to increase quality, effectiveness, and impact may mean less rapid disbursement of support and higher humanitarian overheads in the short term, 7 they will help ensure higher returns for the humanitarian investment in the long term. 8 Currently, there is a recognition of a lack of good information on impact and quality in disaster assistance. 7,9 In addition to the challenges that emergency response contexts bring to the rigorous study design and data collection required for the assessment of impact and quality, there is recognition by donors that rigorous evaluations may not be the highest priority for every project implementation. Donors look to implementing partners to find appropriate ways in which to evaluate the effectiveness and quality of interventions. The very nature of donor assistance mechanisms limits the degree to which donors can dictate to implementing partners how program evaluations are to be carried out, and this gap has been interpreted by some as an unwillingness on the part of the donor to provide incentives for improving quality and/or disincentives for low-quality responses. 8,10

Operational Consequences

Whereas the medical profession is highly regulated and must ensure quality control in all countries in normal times, such requirements may be waived or ignored in the field during disaster situations. 11 However, disaster and humanitarian response agencies, inclusively referred to here as NGOs, may not always be sufficiently accountable to the recipients of their assistance, whether due to a focus on donor and government reporting requirements 12 or to a lack of capacity to fully engage assistance recipients in evaluations of humanitarian assistance. The International Federation of Red Cross and Red Crescent Societies (IFRC) refers to disaster relief as “the world's largest unregulated industry.” 13 Such lack of accountability contradicts the voluntary IFRC/NGO Code of Conduct, which asserts that the humanitarian workforce should be accountable to “both those we seek to assist and to those from whom we accept resources.” 14 It also strikes at the very heart of the Humanitarian Charter (as expressed by the Sphere Project), which states “we expect to be held accountable… [and] we acknowledge that our fundamental accountability must be to those we seek to assist.” 15

Disaster relief is a large industry. Between 1975 and 2005, a total of 6,367 disasters due to natural hazards killed more than 2 million people, displaced 182 million, affected several billion more, and caused US $1.4 trillion worth of damages worldwide. 16 Nearly one-third of the population of the planet was affected by disasters in the 1990s alone. 17 As a result of these disasters, annual spending on global disaster responses rose continuously between 2000 and 2005, peaking at an estimated US $18 billion in 2005—a year involving responses to an unusually high number of large-scale disasters. 17 In 2007, global disaster responses were valued at an estimated US $14.2 billion, approximately US $9.2 billion of which represented total official humanitarian funding. 17 The remainder of the funding comprised private and foundation donations to NGOs, direct remittance funding to affected communities, and local government funding.

There are an estimated 3,000-4,000 internationally-operating NGOs based in the western world, approximately 260 of which are strictly humanitarian response organizations. 18 In 2004-2005, an estimated 300 international NGOs responded to the 2004 Indian Ocean earthquake and tsunami, with approximately 2,000 foreigners working in Aceh Province alone. 19 In spite of, or perhaps as a result of, the scope of humanitarian assistance, there is criticism of disaster and humanitarian responses in general, and of the work of NGOs more specifically. During past disasters, the credibility of humanitarian responses often was threatened by perceptions of lack of impact, poor coordination, waste and inefficiency, corruption and fraud, political motivation for assistance, and a lack of professionalism, 2 which ultimately was interpreted as leaving the needs of recipients unmet.

Aid agencies are held accountable for their services, but not necessarily in the same way as are private sector businesses. For-profit businesses rely on financial measures (e.g., return on assets or profit margin) to measure market satisfaction and quality of products and services, whereas nonprofit organizations tend to be assessed in relation to their mission or services, which are more intangible and difficult to measure. 5 While NGOs have made progress in quality assurance over the past decade, they still place far more emphasis on reporting back to donors than they do on evaluating their impact on beneficiaries. 20 Unlike private businesses competing in a free-market system, humanitarian agencies do not necessarily risk losing market share when the end-user perceives a low-quality product or response. However, they do run the risk of losing business if the donor is not satisfied. Donors focus on input and output indicators, as well as efficiency and capacity measures, rather than demonstration of impact, quality, or learning. 5,21 As such, annual reports of disaster-response agencies tend to provide statistics on the amount of money distributed and the number of people served. However, accountability to donors should not be mistaken for quality assurance, nor should it be assumed that it necessarily equates to accountability to the affected populations. 22

Moreover, there is a fundamental difference between the end-users of the product in both cases; consumers can use their pocketbooks to influence the product and decision-making of private-sector businesses, whereas the end-users of NGOs command no such influence, which often leaves them beholden to the influence of the donors’ well-intentioned funds.

Critical findings of the Rwanda joint evaluation in 1994 indicated serious deficiencies in the humanitarian responses to the genocide, and thus challenged humanitarian actors to ensure higher quality post-disaster services and to restore public trust. 3,14,18 This challenge created a momentum that resulted in the drafting and adoption of a code of conduct in 1995, the creation of the Active Learning Network for Accountability and Performance in Humanitarian Action (ALNAP) and the International Sphere Standards in 1997, and the Humanitarian Accountability Partnership principles in 2003 3 and standards in 2007, 2 along with several other accountability initiatives. These are delineated and described in Appendix A. However, these accountability initiatives have not provided a methodology to assess truly the impact of aid on the affected population. Additionally, experiences within the management of both the 2010 earthquake in Haiti and the Indian Ocean tsunami of 2004 revealed “unacceptable practices in the delivery of international emergency medical assistance. Serious questions have been raised about the clinical competencies and practices of some foreign medical teams. It is now recognized that there needs to be greater accountability, more stringent oversight and better coordination of their work.” 23 Unfortunately, this long-standing dilemma can create a competitive environment that further compromises the delivery and quality of care services. 24

Existing Quality Performance and Accountability Activities

The Sphere Project, arguably the most widely-recognized initiative promoting quality in disaster and humanitarian responses, has made important contributions to improving the quality of assistance and the accountability of humanitarian agencies. Sphere represents a great step forward in setting standards against which quality can be measured; yet, establishing standards is only one part of a quality assurance system for disaster relief. Numerous criticisms have been leveled at the Sphere standards, 15,22,25-29 including claims that they lack flexibility and prioritization among the hundreds of standards. 17 Essentially, the key message is that the Sphere standards are useful and necessary, but adherence to the standards—which may not even be possible to assess in a given context—does not necessarily guarantee quality.

There have been other initiatives, each filling a unique niche, that represent additional steps to ensure the quality of disaster responses. These address specific aspects of quality, such as funding efficiency (Charity Navigator), staff professionalism (People In Aid), participation (Humanitarian Accountability Project, Coordination SUD (Solidarité Urgence Développement)), and technical performance (Sphere, Code of Conduct, Quality COMPAS). One initiative has even sought to increase the accountability of donors to promote quality (“Good Humanitarian Donorship”). These initiatives clearly reflect that members of the humanitarian sector are sincere in their quest to improve both the accountability and the quality of the disaster response. 22 However, there remain some important gaps and deficiencies in these efforts. Moreover, there is a potential conflict of interest with some of these initiatives, as they are attempts at self-regulation, and are not mandatory or routinely enforced. Non-governmental organizations often compete with one another for a finite pool of resources; therefore, it is not ideal to have some NGOs in control of the accreditation process for “competing” NGOs. Some humanitarian providers feel that strict rules and accountability will reduce their flexibility in emergency response. 13 Many NGOs tend to respond based on available funding, which may be a logical reflex. Additionally, many funding decisions are based on an NGO's perceived capacity to respond, which often is equated to that NGO's expenditures during previous emergencies. This leads to the dilemma that responding to actual needs rather than maximally expending total available or allocated funds may reduce the perceived capacity of the organization, which, in turn, could reduce the amount of funding in the future. 22 Ensuring a high degree of quality and accountability within this context actually might affect adversely a humanitarian organization's “market share.”

An important deficiency in quality assessments lies in the fact that existing initiatives largely focus on ensuring the quality of humanitarian institutions and systems rather than on the impact of the response, the experience of disaster-affected individuals, or the quality of the end product. As previously noted, most definitions of quality stress the centrality of the end user; thus, measuring the met and unmet needs and expectations of the end client is a key form of quality assessment, 1,30-34 and a way to improve project impact. 3 Improvements in end-user participation at the onset of programming notwithstanding, quality assessment initiatives in the humanitarian field still largely overlook the end users’ perspectives. Even where these end-user measures exist, they have been piecemeal and disjointed, 9,35 which makes it difficult to compare quality results not only between disasters, but among agencies responding to the same disaster.

Another difficulty is manifest in the data collection process. Currently, data, especially assessment data, are gathered by many different organizations, using different methods, forms, and levels of expertise and training. Individual NGOs may have limited expertise and staff to undertake these complicated surveys. There is a need to build further the capacity of humanitarian actors so that more rigorous and representative data can be collected and made available for analysis. Neither implementers nor donors, for the most part, have taken advantage of the full range of survey and data collection methodologies. Increasingly, evaluation specialists are engaging in a dialogue with NGOs and donors, and providing suggestions for survey research appropriate to emergency response contexts. 36

The WHO Health Cluster program operates with NGO partners who work together at both the global/regional and country levels to “improve the effectiveness, predictability, and accountability of humanitarian health action.” 23 At the country level, health partners work to jointly assess and analyze information, prioritize interventions, build an evidence-based strategy and action plan, monitor the health situation and the health sector response, adapt and re-plan as necessary, mobilize resources, and advocate for humanitarian health action. 23 While the independent surveys discussed here are autonomous with respect to any individual NGO, they would become interdependent at the Health Cluster level, where separate system coordination and accountability decisions must occur for the common good.

Blueprint for Improved Practice and Outcomes

All humanitarian agencies would agree that serving affected populations is central to their mission, yet there is much less end-user feedback than would be expected. 7 One method to engage the affected population in quality assurance is to evaluate the quality of disaster relief programs through the use of population-based qualitative and quantitative surveys. Surveys of a population's needs, both ongoing and already met, can serve to redirect aid activities and to judge ultimately the overall quality of a response. Some researchers have posited that the inadequacy of information on the quality and impact of disaster responses is due to a lack of suitable tools and standardized, rigorous methodologies. 9,21 The time is ripe for a methodology and application tool for disaster response quality assessment from the perspective of affected populations.

Direct assessments of the affected population require additional resources, time, and efforts as compared to other, more readily obtainable, quality assessment methods (e.g., structure and process). 20 A major constraint is that emergencies demand speed in difficult and challenging circumstances, and it may be unrealistic to expect aid agencies to undertake objective outcome assessments at the same time as mobilizing a crisis response. 8 In the chaotic aftermath of a disaster-producing event, personnel justifiably may feel that they do not have the time or resources to conduct end-user “satisfaction” surveys. Despite this, it is imperative to obtain feedback on whether the recipients’ needs are met and how satisfied the recipients are with the assistance provided. This not only serves to identify ongoing areas of need, but also serves to better direct the use of limited resources in the current emergency as well as to improve future humanitarian practices.

As time elapses, the needs of the impacted population change and response activities move from one phase into another. Government agencies and response organizations typically extrapolate the needs based on gross data collected from the major agencies and in the most hard-hit areas. This approach would be most useful if the needs of the entire affected population transformed from one stage to another in unison. But this rarely is the case; individual needs and ability to recover depend on a range of coping mechanisms, as well as the socioeconomic and infrastructure conditions before the onset of the disaster. Quality assessments designed to capture data based on a broad and representative sampling of impacted communities recovering at various rates will provide humanitarian responders with better information and ultimately will mean end-users will be better served.

While some agencies may undertake client satisfaction studies, an agency-based approach may produce biased information, as people may not want to complain for fear of losing assistance. Many NGOs have noted that anonymous, indirect complaints mechanisms or umbrella complaints mechanisms set up across agencies, NGOs, and the UN, often work more effectively than direct mechanisms. 20 In addition, response agency-specific surveys cannot satisfactorily assess certain aspects of quality, such as access or participation, as they more likely would survey users of their services, rather than the larger, general population, some of whom may not have received any services.

Such realities and concerns call for an independent, population-based assessment of humanitarian response quality using a standardized assessment tool to quickly and relatively simply elicit the perspective of affected populations. A consistent, independent, population-based approach has a number of benefits. Removing the assessment onus from the response agencies, whose primary mission is to save lives, allows—and using an independent group would ensure—appropriate methodological rigor and a directed effort to collect data. A broader, population-based (rather than agency-specific) methodology would allow for an anonymous and proactive assessment that mixes effectiveness items (i.e., impact) with satisfaction items, and may overcome dependent beneficiaries’ reluctance to complain. An independent quality assessment also could examine quality issues that arise between agencies and programs that may not be discovered when examined at the agency level. Finally, a population-based assessment that can capture information regarding the humanitarian response as a whole, and not simply about individual agencies, would be more generalizable within the disaster and comparable to other disasters.

Many initiatives have been introduced to enhance the quality of humanitarian responses, but concerns and documented problems with the quality of the responses continue. One key missing component is the centrality of the end-user in identifying quality programs. The affected population must have a voice in quality assessment through direct-participation quality assessment methodologies, especially those at the Health Cluster level. Direct and independent surveys of the disaster-affected population will help to direct and redirect ongoing aid efforts, and can be an effective and comparable method for assessing the quality of humanitarian and disaster responses and services to the Health Cluster.

Why might there be more success today in establishing improved quality performance and accountability than in years past? First, there are increasing numbers of well-trained individuals who claim careers as humanitarian professionals; this number has doubled during the last decade. 37 Second, the debacle in coordination, quality performance, and accountability among some foreign medical teams, as witnessed in Haiti, once again underscored the desperate need to remediate what has become a chronic and seemingly unsolvable problem. A WHO/PAHO post-earthquake meeting in Cuba in December 2010 explored potential options, including the development of an international registry of foreign medical provider organizations. 11 If implemented, this would complement similar initiatives by both international NGOs and academic institutions to develop core competency-based accreditation for all providers. These initiatives would be a vital first step in reaching standards leading to quality performance and accountability. The global health movement is catalyzing nation-states to recognize that many current crises, and those anticipated in the future, are beyond a single nation's capacity to respond. 38 Ninety-two percent of NGOs now agree with the professionalization of humanitarian work, a movement that will better ensure the viability of setting standards among global health initiatives, including aid activities and the processes leading to those objectives. 37

Lastly, there are analogies to the accomplishments of the International Health Regulations, revised after the 2003 SARS (severe acute respiratory syndrome) epidemic, which have shown that there can be global cooperation when public health emergencies are declared. This treaty requires standards in capacity building and unprecedented global cooperation and surveillance accountability, yet retains the responsibility of individual nation-states to comply and build that capacity. 39 Clearly, a similar global authority eventually will be necessary for large-scale health-related crises 40 to bring about universal standards, 41 the ultimate hallmark for quality performance and accountability.

Abbreviations

CRS:

Catholic Relief Services

IFRC:

International Federation of Red Cross and Red Crescent Societies

NGO:

non-governmental organization

OECD:

Organisation for Economic Co-operation and Development

Appendix A: International Initiatives to Promote Quality in Disaster Relief

Abbreviations: ALNAP, Active Learning Network for Accountability and Performance in Humanitarian Action; CRS, Catholic Relief Services; HAP, Humanitarian Accountability Partnership International; IFRC, International Federation of Red Cross and Red Crescent Societies; NGOs, non-governmental organizations; OECD, Organisation for Economic Co-operation and Development; SUD, Solidarité Urgence Développement; URD, Urgence Rehabilitation Développement.

References

1. Franco, LM, Silimperi, DR, van Zanten, TV, et al. . Sustaining Quality of Healthcare: Institutionalization of Quality Assurance. Bethesda, MD: Center for Human Services; 2002. http://www.chs-urc.org/pdf/monographinstitQA.pdf. Accessed May 15, 2012.Google Scholar
2. Humanitarian Accountability Partnership Editorial Steering Committee. HAP 2007 Standard in Humanitarian Accountability and Quality Management. Geneva, Switzerland: HAP International; 2007. http://www.hapinternational.org/pool/files/hap-2007-standard(1).pdf. Accessed May 15, 2012.Google Scholar
3. Emergency Capacity Building Project. Impact Measurement and Accountability in Emergencies: The Good Enough Guide. Cowley, Oxford, UK: Oxfam GB; 2007. http://www.ecbproject.org/inside-the-guide/view-the-good-enough-guide. Accessed May 25, 2012.Google Scholar
4. Campbell, SM, Roland, MO, Buetow, SA. Defining quality of care. Soc Sci Med. 2000;51(11):1611-1625.CrossRefGoogle ScholarPubMed
5. Moss Kanter, R, Summers, DV. Doing well while doing good: dilemmas of performance measurement. In: Powell WW (ed). On Profit Organizations and the Need for a Multiple Constituency Approach in the Non Profit Sector: A Research Handbook. New Haven and London: Yale University Press; 1987:154-166.Google Scholar
6. Slim H. International Council on Human Rights Policy. By what authority? The legitimacy and accountability of non-governmental organisations. 2002. Journal of Humanitarian Assistance. http://www.ichrp.org/files/papers/65/118_Legitimacy_Accountability_Nongovernmental_Organisations_Slim_Hugo_2002.pdf. Accessed May 25, 2012.Google Scholar
7. Watson C. Impact Assessment of Humanitarian Response: A Review of the Literature. Medford, MA: Feinstein International Center of Tufts University; 2008. https://wikis.uit.tufts.edu/confluence/download/attachments/19271809/Impact_10_07_08.pdf?version=1. Accessed May 15, 2012.Google Scholar
8. Center for Global Development. When Will We Ever Learn? Improving Lives Through Impact Evaluation. Washington, DC: Center for Global Development; 2006. http://www.3ieimpact.org/doc/WillWeEverLearn.pdf. Accessed May 15, 2012.Google Scholar
9. Oakley, P, Pratt, B, Clayton, A. Outcomes and Impact: Evaluating Change in Social Development. Oxford, UK: International NGO Training and Research Centre; 1998.Google Scholar
10. Fritz Institute. Evidence of Impact: Challenges and New Directions—The 2006 Impact Conference Proceedings; May 19-20, 2006; Sebastopol, CA. http://www.fritzinstitute.org/prgHI-Conference2006.htm. Accessed May 15, 2012.Google Scholar
11. Pan American Health Organization. Proceedings of the WHO/PAHO Technical Consultation of Foreign Medical Teams Post Sudden Onset Disasters; December 7-9, 2010; Havana, Cuba. http://new.paho.org/disasters/index.php?option=com_docman&task=doc_download&gid=1761&Itemid. Accessed May 15, 2012.Google Scholar
12. Easterly, WR. The White Man's Burden: Why the West's Efforts to Aid the Rest Have Done So Much Ill and So Little Good. New York, NY: The Penguin Press; 2006.Google Scholar
13. International Federation of Red Cross and Red Crescent Societies. World Disasters Report 2005: Focus on information in disasters. Bloomfield, CT: Kumarian Press; 2005.Google Scholar
14. Code of Conduct for the International Red Cross and Red Crescent Movement and NGOs in Disaster Relief, 1995. International Federation of Red Cross and Red Crescent Societies Web site. http://www.ifrc.org/en/publications-and-reports/code-of-conduct/. Accessed May 25, 2012.
15. Sphere Project. Humanitarian Charter and Minimum Standards in Disaster Response. Geneva, Switzerland: Sphere Project; 2004.
16. Scheuren JM, le Polain O, Below R, et al. Annual Disaster Statistical Review: The Numbers and Trends 2007. Brussels, Belgium: Center for Research on the Epidemiology of Disasters (CRED). http://www.cred.be/sites/default/files/ADSR_2007.pdf. Accessed May 15, 2012.
17. Development Initiatives. Global Humanitarian Assistance 2007/2008. Somerset, UK: Development Initiatives; 2008. http://www.globalhumanitarianassistance.org/wp-content/uploads/2010/07/2007-GHA-report.pdf. Accessed May 15, 2012.
18. Stoddard A. Humanitarian NGOs: Challenges and Trends. London, UK: Overseas Development Institute; 2003. HPG Briefing 12. http://www.odi.org.uk/resources/download/272.pdf. Accessed May 15, 2012.
19. Canny B. A review of NGO coordination in Aceh post-earthquake/tsunami. Study sponsored by the International Council of Voluntary Agencies (ICVA). 2005. http://reliefweb.int/sites/reliefweb.int/files/resources/FEA7B9C91F77119949257021001CFEC0-icva-idn-8apr.pdf. Accessed May 15, 2012.
20. United Nations Office for the Coordination of Humanitarian Affairs. Beneficiary feedback: "Thanks but no thanks"? http://www.irinnews.org/Report.aspx?ReportId=78640. Published June 9, 2008. Accessed May 15, 2012.
21. Hoffman CA; Humanitarian Policy Group, Overseas Development Institute. Measuring the impact of humanitarian aid: a review of current practice. Humanitarian Policy Group Research Report 15. London, UK. http://www.odi.org.uk/resources/docs/343.pdf. Published June 2004. Accessed May 15, 2012.
22. Larose L, Adams J. Accountability and quality: uncomfortable bedfellows? Humanitarian Exchange. 2002;21:19-21.
23. Office for the Coordination of Humanitarian Affairs (OCHA). Strategic plan. In: OCHA in 2011: Annual Plan and Budget—Responding in a Changing World. Geneva, Switzerland: OCHA; 2011:8-10. http://ochaonline.un.org/ocha2011/OCHA2011_jpg2000_200dpi.pdf. Accessed May 15, 2012.
24. Subbarao I, Wynia MK, Burkle FM Jr. The elephant in the room: collaboration and competition among relief organizations during high-profile disasters. J Clin Ethics. 2010;21(4):328-334.
25. Hilhorst D. Being good at doing good? Review of debates and initiatives concerning the quality of humanitarian assistance. Paper presented at: Enhancing the Quality of Humanitarian Assistance international working conference; October 12, 2001; Netherlands Ministry of Foreign Affairs. http://reliefweb.int/sites/reliefweb.int/files/resources/9CCA704F81272F5FC1256CD3003C409D-neth-good-oct01.pdf. Accessed May 15, 2012.
26. Walker P, Purdin S. Birthing Sphere. Disasters. 2004;28(2):100-111.
27. Griekspoor A, Collins S. Raising standards in emergency relief: how useful are Sphere minimum standards for humanitarian assistance? BMJ. 2001;323(7315):740-742.
28. Dufour C, Geoffroy V, Maury H, Grünewald F. Rights, standards and quality in a complex humanitarian space: is Sphere the right tool? Disasters. 2004;28(2):124-141.
29. Tong J. Questionable accountability: MSF and Sphere in 2003. Disasters. 2004;28(2):176-189.
30. Hilhorst D. Being good at doing good? Quality and accountability of humanitarian NGOs. Disasters. 2002;26(3):193-212.
31. Aharony L, Strasser S. Patient satisfaction: what we know about and what we still need to explore. Med Care Rev. 1993;50(1):49-79.
32. Campbell J. How consumers/survivors are evaluating the quality of psychiatric care. Eval Rev. 1997;21(3):357-363.
33. Hansson L, Björkman T, Berglund I. What is important in psychiatric inpatient care? Quality of care from the patient's perspective. Qual Assur Health Care. 1993;5(1):41-47.
34. Rutta E, Williams H, Mwansasu A, et al. Refugee perceptions of the quality of healthcare: findings from a participatory assessment in Ngara, Tanzania. Disasters. 2005;29(4):291-309.
35. Active Learning Network for Accountability and Performance (ALNAP). Humanitarian action: learning from evaluation. In: ALNAP Annual Review. London, UK: ALNAP; 2001. http://reliefweb.int/sites/reliefweb.int/files/resources/98A220E8FF3F4EEAC1256C24005D5378-ar2001_all.pdf. Accessed May 15, 2012.
36. Spence PR, Lachlan KA. Disasters, crises, and unique populations: suggestions for survey research. New Directions for Evaluation. 2010;126:95-106.
37. Walker P, Hein K, Russ C, et al. A blueprint for professionalizing humanitarian assistance. Health Aff (Millwood). 2010;29(12):2223-2230.
38. Burkle FM Jr. Future humanitarian crises: challenges for practice, policy, and public health. Prehosp Disaster Med. 2010;25(3):191-199.
39. World Health Organization. Implementation of the International Health Regulations (2005): report of the Review Committee on the Functioning of the International Health Regulations (2005) in relation to Pandemic (H1N1) 2009. Sixty-fourth World Health Assembly, provisional agenda item 13.2; May 5, 2011. http://apps.who.int/gb/ebwha/pdf_files/WHA64/A64_10-en.pdf. Accessed May 15, 2012.
40. Burkle FM Jr, Redmond AD, McArdle DF. An authority for crisis coordination and accountability [published online ahead of print October 18, 2011]. Lancet. doi:10.1016/S0140-6736(11)60979-3.
41. Institute of Medicine of the National Academies. Guidance for Establishing Crisis Standards of Care for Use in Disaster Situations: A Letter Report. Washington, DC: The National Academies Press; 2009. http://books.nap.edu/openbook.php?record_id=12749&page=R1. Accessed May 15, 2012.
42. Coordination SUD. The French national platform of international solidarity. http://www.coordinationsud.org. Accessed May 15, 2012.
43. People In Aid. Code of Good Practice in the Management and Support of Aid Personnel. London, UK: People In Aid; 2003. http://www.peopleinaid.org/pool/files/code/code-en.pdf. Accessed May 15, 2012.
44. How do we rate charities' financial health? Charity Navigator Web site. http://www.charitynavigator.org/index.cfm?bay=content.view&cpid=35. Accessed May 15, 2012.
45. COMPAS method: a quality assurance method for humanitarian aid. Quality COMPAS Web site. http://www.compasqualite.org/en/compas-method/presentation-compas-method.php. Accessed May 15, 2012.
46. Principles and good practice of humanitarian donorship, 2003. Good Humanitarian Donorship Web site. http://www.goodhumanitariandonorship.org/gns/about-us/about-ghd.aspx. Accessed May 15, 2012.