
Applied Political Science and Evidence-Based Foreign Assistance in Democracy, Human Rights, and Governance

Published online by Cambridge University Press:  21 June 2018

Aaron J. Abbarno
Affiliation:
Democracy International
Nicole Bonoff
Affiliation:
University of Wisconsin–Madison

Symposium: Whose Research Is It? Notable Ways Political Scientists Impact the Communities We Study

Copyright © American Political Science Association 2018

Government and international organization bureaucracies increasingly call for policy and programs that are informed by evidence. But their institutional cultures, regulations, time constraints, and professional incentives create an unfamiliar environment that can be difficult for researchers to navigate and may seem unreceptive to social science. Political scientists who want to work with practitioners or share their research to inform policy decisions should understand these challenges and be equipped to handle them.

We observed and cultivated many successful scholar–practitioner partnerships as academic fellows embedded within the US Agency for International Development (USAID) between 2013 and 2017, where our goal was to promote uptake of evidence and evaluation rigor in “Democracy, Human Rights, and Governance” (DRG) interventions. Our experience revealed some lessons about how academics can promote their work, improve its usefulness, and apply their research skills within bureaucratic confines.

This essay characterizes certain hurdles scholars may encounter at government organizations like USAID and provides practical guidance on how to overcome them. In particular, we highlight strategies for effective collaboration and ways to make existing research more accessible to policy makers.

CHALLENGES OF EVIDENCE-BASED FOREIGN ASSISTANCE

“Evidence-based programming” and what it entails are familiar enough to policy makers: relinquish anecdotes about what “works,” leverage theory in program design, let the data speak, and build evidence into future interventions. This is widely embraced in principle but unevenly adopted in practice. Intellectual and institutional barriers—including qualified interest in social science, staff that is largely unfamiliar with political science research, weak incentives for change, and rigid procurement regulations—limit whether and how well practitioners access evidence and apply it to development program and policy design.

Bounded Interest in Social Science

Interest in social scientific evidence among practitioners at USAID is not new or insincere. Finkel, Pérez-Liñan, and Seligson's (2007) longitudinal study of foreign democracy assistance is widely read and is still used to justify funding for DRG programs. In 2012, the Center of Excellence on Democracy, Human Rights and Governance (DRG Center) established a specialized Learning Division to elevate rigorous research in response to a National Research Council report (Goldstone et al. 2008) that urged USAID to prioritize evidence-based decision making. The Learning Division has since sponsored 13 nationwide surveys and 20 cross-disciplinary literature reviews and working papers, and it has worked with political scientists and economists to design and carry out 33 randomized controlled trial (RCT) and quasi-experimental "impact evaluations" of DRG projects in 26 countries around the developing world. This work is a bright spot in USAID evidence-based programming; it informs strategic planning, helps challenge assumptions underlying deadwood programs, and can push the boundaries of what we expect will work in the field. However, genuine interest in the social scientific method at USAID remains in short supply. The National Research Council report jumpstarted certain initiatives, but despite its recommendations (more RCT impact evaluations, better measurement, case studies for theory building, and comprehensive knowledge management), support for more rigorous evidence, or skepticism of traditional forms of less rigorous evidence, has not become widespread.

Low Familiarity with Political Science Research

Political science can offer abundant data and evidence relevant to DRG programs, but accessibility is a problem. Cumbersome details of research design and methodology distract and may even distress lay readers. Academic articles feature disciplinary jargon and often omit contextual information that policy makers deem important for evidence-based decisions (Shah and Gerson 2015). Practitioners generally lack advanced training in research design and analysis, which limits their ability to interpret data (Callen et al. 2016) and to use political science research to inform their work meaningfully. Perhaps partly for this reason, development professionals tend to seek intellectual guidance outside of peer-reviewed journals. These alternative outlets often publish thoughtful, accessible analyses and contribute to high-level policy debates that frame practitioners' work. But they generally do not scrutinize assumptions relevant to program design, provide analyses that inform evaluation strategies, or draw the sort of inferences that shape project implementation in the field.

Weak Incentives for Science

Science demands skepticism. Progress is slow and nuanced, and failure is essential for advancement. By contrast, international development demands success. One-off "success stories" are more critical for justifying congressional funding than "empirical regularities." Negative and null findings may be liabilities for USAID missions' budgets and for the reputations of the nonprofit and private organizations that implement USAID programs. The consequent tendency to define evidence down (Lester 2016) means that defensible but undesirable results can be downplayed and positive anecdotal results misconstrued as generalizable beyond what the data can support. Officially, evidentiary standards remain open to interpretation. USAID's Evaluation Policy (2016) requires RCTs or quasi-experimental impact evaluations for "any new or untested approach that is anticipated to be expanded in scale or scope." However, the policy states that it is "a matter of professional judgment" whether an approach is tested or untested—a judgment that is inherently difficult for USAID staff to make given the constraints outlined above.


Procurement Rules and Relationship Management

The relationships required to conduct impact evaluations and other research activities effectively in the field are complex and difficult to manage; USAID procurement rules, project timelines, and staff turnover make them harder still. Project evaluations are most often contracted independently from implementation to prevent potential conflicts of interest. Academic principal investigators, who formally work for the evaluation contractor, can create new conflicts when they influence program design choices, such as withholding treatments from control groups, carefully managing rollouts around data collection activities, and questioning the underlying assumptions or causal logic of programming. Rigid contractual independence often precludes researchers from close involvement in project and evaluation design, limiting evaluators' ability to collaborate effectively with the implementer. USAID must therefore help orchestrate these relationships. Sometimes it does so very effectively, as with the DRG Center's annual Impact Evaluation Clinic, where academics, mission staff, and evaluation partners confer to design development interventions that are amenable to evaluation with RCTs. But frequent turnover of Foreign Service officer (i.e., managerial) staff at field missions and in Washington also means that priorities may abruptly shift away from research and evaluation.

APPLIED POLITICAL SCIENCE: PRACTICAL GUIDANCE FOR SUCCESS

The bureaucratic working environment is often unfamiliar to academics and can at times be unreceptive to social science. Political scientists can overcome these challenges and encourage improvements in evidence-based programming and rigorous project evaluation through the teaching and research they already do. Based on our experience at USAID, we suggest specific ways that scholars can make existing research more accessible to practitioners, and we recommend strategies for effective collaboration in project evaluation and research.

Making Existing Research More Accessible to Practitioners

Practitioners are unfamiliar with political science research. Many believe the language in political science publications is inaccessible, consider the research methodologies unintelligible, and prefer newspapers and periodicals over peer-reviewed academic journals.

Whenever possible, we encourage academics to expand their publication targets to include outlets that policy makers are likely to read. Academics whose work is profiled in the Washington Post, for instance, reach this audience. Absent the time or incentive to publish in policy outlets, we recommend that academics produce policy briefs as supplements to their research publications. These should be no longer than two pages and include clear interpretations and visualizations of the findings, along with explicit explanations of the level of confidence with which policy makers should regard them. Low social science literacy among practitioners means it is important to sell your subject first and your methodology second. When you share your work, include a three-sentence blurb: the first sentence on the main findings of the research, the second on their implications for policy or programs, and the third on the methods or relevant contextual information. Policy briefs can be submitted directly to USAID field missions or the DRG Center, which publishes a monthly newsletter and often reserves space to profile new and relevant work (see footnote 1).

Excellent academic consortia that advocate for the uptake of evidence and improved evaluation also exist. Groups like the Abdul Latif Jameel Poverty Action Lab (J-PAL) and Evidence in Governance and Politics (EGAP) use outreach, research, and academic-project matchmaking to encourage evidence-based programming and quality impact evaluations. We encourage political scientists to engage with these networks not only to increase access to USAID and other donor organizations, but also to gain familiarity with pressing questions and practitioner needs. Academics can also have success reaching out to field missions or the DRG Center directly, offering to analyze survey or other data to help inform future program design. This is especially useful for graduate students hoping to build professional profiles.

When packaged and targeted appropriately, academic products do inform policy and programmatic decisions. As an example, one literature review addressing questions about human rights awareness campaigns highlighted the benefits of collaboration with media organizations, pre-tested message design, and careful consideration of unintended consequences; it is now part of an official training for USAID staff. Similarly, original research on decentralization and its consequences will be incorporated into the DRG Center's official Democratic Decentralization Programming Handbook.

Suggestions for Effective Collaboration

Through academic consortia, partnerships with USAID's DRG Center, or unsolicited research proposals to USAID field missions, academics can also become directly involved in field research and program evaluations. Here, we list ways that scholars can facilitate success in field research and evaluations, especially RCT and quasi-experimental impact evaluations.

Engage Early

It is important to engage in program evaluations as early as procurement rules permit. Attend events like the impact evaluation "clinics" sponsored by USAID's DRG Center or the World Bank. These events allow researchers to help structure the scope of the project and sometimes to tweak the intervention itself. Following one DRG Center clinic, USAID/Nicaragua incorporated an impact evaluation design into its call for proposals, essentially guaranteeing that the winning implementing partner organization would design and roll out its project in a manner amenable to an RCT.

Be Flexible

Despite the best planning, things change frequently, so be prepared to think creatively about difficult research design problems. For example, a security issue may arise in the communities where you planned to work, forcing you to redo the sampling. Or procurement may take so long that you miss an important event, such as an election, and must rethink the outcome of interest. An academic's flexibility in this environment comes from having multiple contingency plans; the ability to act on those plans depends on how well the academic has prepared the organization and the implementer for these worst-case scenarios.
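As a purely hypothetical illustration (the community names, sample sizes, and seed below are assumptions, not details from any USAID evaluation), the Python sketch that follows shows one way to build such a contingency into a sampling plan: a reserve pool is drawn alongside the primary sample using a fixed random seed, so that inaccessible communities can be replaced by a documented rule rather than an ad hoc decision.

```python
import random

# Hypothetical sampling frame: community IDs eligible for the evaluation.
frame = [f"community_{i:03d}" for i in range(1, 201)]

SEED = 2018          # fixed seed so the draw is reproducible and auditable
SAMPLE_SIZE = 40     # target number of study communities
RESERVE_SIZE = 10    # pre-specified replacement pool

rng = random.Random(SEED)
drawn = rng.sample(frame, SAMPLE_SIZE + RESERVE_SIZE)
primary, reserve = drawn[:SAMPLE_SIZE], drawn[SAMPLE_SIZE:]

def apply_contingency(primary, reserve, inaccessible):
    """Replace inaccessible communities with reserve communities, in order."""
    replacements = iter(reserve)
    final = []
    for community in primary:
        if community in inaccessible:
            final.append(next(replacements))  # documented, rule-based swap
        else:
            final.append(community)
    return final

# Example: two communities dropped for security reasons after the draw.
final_sample = apply_contingency(primary, reserve, {"community_007", "community_123"})
print(len(final_sample), "communities in the final sample")
```

Writing the replacement rule down before fieldwork makes it easier to explain to the mission and the implementer why the final sample changed, and to defend that change later.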


Be Prepared to Compromise

Establishing a balance between methodological rigor and programming considerations is a negotiation. Feel empowered to push back on poorly thought-out ideas from an implementing partner or funding organization, or if you are concerned that an intervention may do harm. If the organization has gone to the trouble of finding an academic to design an RCT, do not feel obliged to be flexible about bad ideas. Come up with options and alternatives, but speak up if you do not think there is a good opportunity for learning by policy makers or the wider academic community. Still, be sure to avoid tactless comments—"that idea won't get me a publication," among other real examples—that erode trust and only damage scholar–practitioner partnerships. Many implementers presume that academics care more about publication potential than about good development programs or their beneficiaries; they worry that research will drive programming rather than the other way around. Be prepared to compromise on intervention scope, sampling, rollout, and level of randomization.
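To make the "level of randomization" compromise concrete, the hypothetical Python sketch below contrasts individual-level assignment with cluster-level assignment, in which whole communities are treated or not; the community and participant identifiers are invented for the example.

```python
import random

# Hypothetical setup: 30 communities, each with 50 invented participant IDs.
rng = random.Random(7)
communities = {
    f"community_{c:02d}": [f"person_{c:02d}_{i:02d}" for i in range(50)]
    for c in range(1, 31)
}

# Individual-level randomization: each person is assigned independently.
individual_assignment = {
    person: rng.choice(["treatment", "control"])
    for people in communities.values()
    for person in people
}

# Cluster-level randomization: whole communities are assigned together, which
# is often the compromise an implementer can actually deliver because
# programming tends to run community-wide.
cluster_ids = list(communities)
rng.shuffle(cluster_ids)
treated_clusters = set(cluster_ids[: len(cluster_ids) // 2])
cluster_assignment = {
    person: "treatment" if community in treated_clusters else "control"
    for community, people in communities.items()
    for person in people
}

print(sum(v == "treatment" for v in cluster_assignment.values()),
      "people live in treated communities")
```

Cluster-level designs are frequently all an implementer can deliver, but because outcomes within a community tend to be correlated, they generally require more units to reach the same statistical power, a cost worth explaining during the negotiation.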

Understand the Project

Spend enough time with the implementer to know the specifics of their program. This may seem obvious, but in our civic education RCT in Georgia, confusion about the target student population emerged unexpectedly and required a full redesign that delayed implementation by over a year. It is also necessary to grasp the project's policy value. In Georgia, we had to advocate for the RCT among high-level audiences: we briefed the Deputy Minister of Education and Science on the potential outcomes framework and on the evaluation design and rollout, and we developed "utilization workshops" to help the ministry use evaluation findings in its programmatic choices.
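For readers outside the discipline, the potential outcomes framework mentioned above can be summarized in a few lines of standard notation (this is the textbook formulation, not notation specific to the Georgia evaluation):

```latex
% Potential outcomes framework (standard notation, not specific to any USAID evaluation)
\begin{align*}
  Y_i(1),\; Y_i(0) &\quad \text{potential outcomes for unit } i \text{ with and without treatment} \\
  \tau_i &= Y_i(1) - Y_i(0) \quad \text{(unit-level effect, never observed directly)} \\
  \text{ATE} &= \mathbb{E}\big[Y_i(1) - Y_i(0)\big] \\
  \widehat{\text{ATE}} &= \bar{Y}_{\text{treated}} - \bar{Y}_{\text{control}}
  \quad \text{(unbiased under random assignment)}
\end{align*}
```

The practical point for policy audiences is that only one potential outcome is ever observed for each unit, which is why a randomly assigned comparison group, rather than a simple before-and-after comparison, is needed to estimate a program's effect.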

Do Not Rely Only on USAID

Use time in the field to talk with government officials, the implementing partner, and other organizations about data that USAID may not be prepared to produce. Sometimes you might stumble upon a quick additional research project that the organization can fund. In Peru, smaller add-on projects were developed and pursued as a result of this groundwork because they had the potential to be quick wins for USAID. In Zambia, a close working relationship with the government GIS office was instrumental in determining the sampling for a health-related evaluation.

Manage Your Time

Carefully consider how much time you can commit to this work. Academics who enter these relationships are often surprised by the amount of time they must dedicate to dealing with bureaucracy. Procurement actions and contracts, especially with government agencies, take a very long time; an academic might wait months or a full year for a contract to be signed between an implementer and an agency. Use that time for contingency planning, because once the contract is signed, those parties will want to begin work immediately. Similarly, USAID may invite concept papers for grants, review them for six months, and then give only one month to submit a full technical proposal, followed by another month to incorporate comments, cost clarifications, and suggested revisions. Impact evaluations require frequent field visits and implementer oversight. Advocate for a research assistant who can handle some of this work, as well as topline report writing, and include this assistance as a budget line item.

Communicate Regularly

In one successful impact evaluation in Ghana, all actors communicated and worked together toward the common goal of conducting the most rigorous evaluation possible. The evaluation team comprised an outside academic and an internal researcher from USAID, which eased communication among all parties. Mission buy-in meant the evaluation team could begin their work years in advance of the RCT itself, co-designing the experiment with the Mission before federal procurement and identifying necessary compromises. The evaluation team also worked with the implementing partner after procurement to adapt the design to on-the-ground realities. That responsive communication and flexibility cultivated implementer support, as the organization did not see its vision altered for purely academic pursuits.

There are many success stories from the DRG Center’s evaluation work, but in all cases academic partners embraced these strategies to facilitate collaboration. Still, even if these conditions are satisfied, unforeseen obstacles can undo good work.

SHOULD POLITICAL SCIENTISTS ENGAGE?

The fundamental challenges of working with large government entities may discourage academic engagement. Publication potential is low when contingent on clearing all of the obstacles noted above, and working on applied development projects does little for professional advancement. But such engagement need not be understood only as a public good; these collaborations carry several important advantages. Working on government projects may lay the foundation for funding large-scale research. Learning the bureaucratic landscape is certainly useful for submitting unsolicited proposals and grant applications. Collaboration also provides access to organizations and government ministries that implement far more projects than the ones funded by USAID. Opportunities for joint work with policy makers multiply as they are taken, and this access and potential for future funding will likely outweigh the difficulties in the long run.

What lingers in the background is how researchers can maintain their standards of scholarly and methodological rigor in the face of the obstacles described above. We encourage academics collaborating with practitioners to go into these projects with eyes wide open. Understand the bureaucracy and why challenges exist. Use your knowledge to improve the evidentiary standards in programming and evaluation, but also determine which battles you cannot win. Professional scholars must stay engaged for government and international organizations to internalize the value of thoughtful research designs, measurement strategies, and evidentiary standards for scientifically defensible results.

Footnotes

1. Academics and graduate students are invited to join the DRG Center Listserv by filling out this form (https://goo.gl/forms/ddcTywxwvzej9jkf2) and can contact Danielle Spinard of the DRG Learning Division to disseminate their policy briefs.

REFERENCES

Callen, Michael, Khan, Adnan, Khwaja, Asim I., Liaqat, Asad, and Myers, Emily. 2016. "These 3 Barriers Make It Hard for Policymakers to Use the Evidence That Development Researchers Produce." The Washington Post, August 13.
Finkel, Steven E., Pérez-Liñan, Aníbal, and Seligson, Mitchell A. 2007. "The Effects of U.S. Foreign Assistance on Democracy Building, 1990–2003." World Politics 59 (3): 404–39.
Goldstone, Jack A., Garber, Larry, Gerring, John, Gibson, Clark C., Seligson, Mitchell A., and Weinstein, Jeremy. 2008. Improving Democracy Assistance: Building Knowledge through Evaluations and Research. Washington, DC: The National Academies Press.
Lester, Patrick. 2016. "Defining Evidence Down." Stanford Social Innovation Review. https://ssir.org/articles/entry/defining_evidence_down.
Shah, Raj, and Gerson, Michael. 2015. "Foreign Assistance and the Revolution of Rigor." In Moneyball for Government, 2nd edition, eds. Nussle, Jim and Orszag, Peter. Washington, DC: Results for America.