
Organizational Identity and Positionality in Randomized Control Trials: Considerations and Advice for Collaborative Research Teams

Published online by Cambridge University Press: 27 May 2022

Nicholas Haas, Aarhus University, Denmark
Katherine Haenschen, Northeastern University, USA
Tanu Kumar, William & Mary, USA
Costas Panagopoulos, Northeastern University, USA
Kyle Peyton, Australian Catholic University, Australia
Nico Ravanilla, University of California–San Diego, USA
Michael Sierra-Arévalo, University of Texas at Austin, USA

Type: Field Experiments: Thinking Through Identity and Positionality
Copyright: © The Author(s), 2022. Published by Cambridge University Press on behalf of the American Political Science Association

Social scientists are increasingly conducting field experiments with a variety of external partners, including nonprofit, nongovernmental, and political organizations; government bureaucracies; survey research firms; and technology companies such as Facebook and Twitter. Field experimentalists interested in voter mobilization, for example, have collaborated with partisan organizational partners (OPs) that aim to garner support for their candidates (Panagopoulos 2013; Panagopoulos and Bailey 2020); with nonpartisan OPs focused on mobilizing underrepresented minorities or young voters (Bedolla and Michelson 2012; Nickerson 2007); with governments that can deliver registration or mobilization information directly to voters (Hopkins et al. 2021; Malhotra, Michelson, and Valenzuela 2012); with news media organizations that disseminate treatments intended to inform voters (Haenschen and Jennings 2019); and with institutions of higher education that deliver information to student populations (Bennion and Nickerson 2016). Between 2000 and 2017, approximately 62% of field experiments published in American Political Science Review, American Journal of Political Science, or Journal of Politics involved some type of external partnership (Butler 2019).

Working with external organizations offers benefits as well as challenges (Davis and Michelitch 2022). Partnerships can be extensive: in addition to implementing an intervention, partners can have an active role at every stage of the research process, from funding support to data collection. Such wide-ranging relationships can facilitate research on critical questions; however, inattention to the power and social structures in which organizations are embedded also can create practical challenges for research design, implementation, and ethics.

This article focuses on field experiments and the role of “organizational positionality”—that is, an OP’s position in relation to the social and political context in which it operates. We draw on a diverse set of experiences across different research contexts to describe how an organization’s funding sources, political allegiances, scope of operations and mandate, legal status, identities, and reputation shape its goals and incentives, as well as how it is perceived by and interacts with others involved in the research process (Davis and Michelitch 2022; Kapiszewski, MacLean, and Read 2015). We also highlight how a failure to consider organizational positionality can undermine research credibility and produce negative outcomes. The article concludes by offering suggestions for how researchers can successfully navigate organizational positionality.

Prior work based on qualitative field research (Dean et al. 2018) has focused primarily on aspects of individual identity, particularly the role of researcher identity and positionality (Foote and Bartell 2011; Rowe 2014; Savin-Baden and Major 2013), as opposed to organizational positionality. Similarly, existing scholarship on field experimental ethics traditionally revolves around compliance with the Belmont Report principles; however, issues stemming from the positionality of OPs do not necessarily overlap with Institutional Review Board considerations (Gueron 2002; Humphreys 2015).

We extend these ongoing debates by focusing on how divergence between an OP’s goals and incentives and those of researchers can create practical and ethical challenges. In doing so, we also address the recent call by Soedirgo and Glas (2020) for “active reflexivity”—that is, the continuous interrogation of positionality at all stages of the research process—which builds on the notion that positionality can affect “the totality of the research process” (Holmes 2020, 3).

DEFINING ORGANIZATIONAL POSITIONALITY

Paralleling existing definitions of individual positionality, we define “organizational positionality” as an OP’s social and political position within the context in which it operates. Just as individual positionality can vary with aspects of researchers’ identities (Coghlan and Brydon-Miller 2014; Fujii 2017; Marsh and Furlong 2017; McCorkel and Myers 2003; Sikes 2004), we argue that organizational positionality in the research context can vary with OP features that affect its goals, experiences, and broader incentives.


Rather than focusing on how positionality arising from individual identity can affect the research process (see Harrison and Michelson 2022; Hartman et al. 2022; and Kim et al. 2022 in this symposium), we home in on features of an OP as a whole that may shape its interactions with researchers and participants and, in turn, the research process. These features may include an OP’s funding source(s); its mission; where it is based and whether it is international or local; its political allegiances (or lack thereof); whether it is a government agency or a nongovernmental organization; and its role in the study. When these features lead an OP’s goals and incentives to diverge from those of researchers or participants, practical and ethical challenges arise in the research process.

DIVERGENCE IN IDENTITY, GOALS, AND INCENTIVES

First, the political and/or ideological convictions of an OP can constrain the scope of feasible research questions. For example, partisan electoral organizations might seek to leverage a researcher’s position as a technical expert to conduct experiments that yield high internal returns to the organization (or the “movement”) at the cost of low returns for researchers and the broader community. Nonpartisan organizations similarly might seek to leverage a researcher’s reputation and institutional affiliation to conduct a program evaluation to satisfy external funders. Failure to identify these realities early can create future conflict among the ideologies, goals, and motivations of the researchers and their OPs at crucial stages of the research process.

Second, the role of the OP in the study—often one in which researchers are technical experts and OPs are implementers with local knowledge—can lead to disagreements about the design and implementation of field experiments. For example, Peyton, Sierra-Arévalo, and Rand’s (2019) community policing experiment in New Haven, Connecticut, was well received when it was proposed to the New Haven Police Department (NHPD). This was, in part, because “community policing” had long been at the core of NHPD’s operations and of its organizational position relative to other city agencies, the New Haven public, and the wider sociopolitical environment of US policing. As a result, NHPD leadership embraced the use of technical experts in an attempt to scientifically validate community policing and its effect on public attitudes. However, the researchers still met initial resistance to conducting the intervention across the entire city: they had to jettison a more complex design that randomly assigned interactions based on officer race and gender. Building trust was central to NHPD’s relationships with local residents. Therefore, NHPD argued that it was more appropriate to target specific districts where it believed trust was lowest and that it made little sense to have officers “randomly” interact with residents outside of their patrol district. Ultimately, a compromise was reached: rather than taking officers out of their local district to randomize officer race and gender during police–civilian interactions (i.e., “treatments”), interactions would be randomly assigned to households across the entire city.

Third, a mismatch between an OP’s long-term mission and researchers’ goals and incentives also can present challenges. For example, prestigious publication outlets do not incentivize the reporting of “null results,” and researchers generally aim to maximize the likelihood of finding effects when designing and implementing studies with OPs. These short-term incentives may not align well with the longer-term mission of implementing organizations. Consider Kumar, Post, and Ray’s (2018) impact evaluation of NextDrop, a social enterprise in Bangalore’s water sector that relied on water-utility employees for program implementation. Although the authors hypothesized that providing employees with incentives would increase the probability of programmatic success, NextDrop was reluctant because it did not anticipate being able to continue payments after the study concluded. NextDrop had little desire to implement monetary incentives if they adversely affected the organization’s mission of developing a sustainable solution in Bangalore’s water sector—even if those incentives were likely to generate short-term program success and a positive experimental effect. Similarly, NHPD’s initial reluctance to deliver treatments in both high- and low-trust areas was not intended to frustrate causal identification but rather to pursue its organizational goal of efficient resource allocation.

Fourth, when an OP’s goals and incentives diverge from those of the study population, ethical challenges can result as well. In particular, researchers and OPs with goals unrelated to those of the study population may promote interventions that seek to change participants’ behavior in ways that participants may not have requested or wanted. For example, Bryan, Choi, and Karlan (2021) partnered with International Care Ministries, an evangelical Protestant anti-poverty organization, to measure the effects of a religious-education program on “ultrapoor” Filipino households. They found that the intervention, in addition to generating positive effects on income, changed the religious identity of members of the study population. In their discussion of ethics, the authors justified the study on the grounds that it randomized and evaluated an intervention that would have occurred anyway—and that, furthermore, occurs frequently in the context of interest.

An OP’s goals and incentives relative to the study population are particularly important to consider when the OP is in a position of power vis-à-vis study participants. In this situation, the OP’s relative power can pose a barrier to the elicitation of informed and uncoerced consent if respondents feel compelled to participate in the study for fear of reprisal or of losing access to material benefits. This was an important concern for Haim, Ravanilla, and Sexton (2021) when they partnered with the Philippine National Police to evaluate a program that offered leaders in conflict-affected villages access to government programs. Leaders concerned about their personal safety may have felt compelled to participate for fear that they otherwise would lose protection from the government and leave their communities exposed to rebel attacks. Perceptions about an OP’s identity or reputation also have the potential to influence outcome measures that rely on participants’ self-reports or other behaviors. For example, Cilliers, Dube, and Siddiqi (2015) found that the presence of a white foreigner changed individuals’ behavior in an experimental game designed to measure other-regarding preferences. Participants from villages that had received more development aid perceived the game as a test of aid suitability and attempted to signal their need.

Partnerships with researchers can even alter an OP’s goals and incentives, potentially raising other ethical challenges. Coville et al.’s (2020) controversial study, conducted in partnership with the Nairobi City Water and Sewerage Company (NCWSC), provides a useful example. At the researchers’ direction, NCWSC randomly enforced water shutoffs in very poor communities in which it typically exercised forbearance when customers were unable to pay their water bills. As a result, communities lost water access that they might not have lost had the experiment not been conducted. When the goals and incentives of researchers and OPs are prioritized over those of the population under study, high-impact interventions may come at the expense of the needs and desires of participants.

Fifth, when studies potentially impact or are conducted in collaboration with multiple actors beyond the researchers and the OP, competing goals and incentives can raise ethical questions. For instance, when an intervention aims to redistribute power among actors such as the state, customary leaders, and organized armed groups, or between individuals of different identity categories (e.g., men and women), experimental researchers must carefully consider the role that their own involvement plays—even when it is limited to evaluation. For example, Haas and Khadka (2020) evaluated a United Nations program in Somalia that ultimately aimed to increase women’s demand for state and customary forums for justice. This came at the expense of courts run by the militant group Al-Shabaab—courts that some parties believed performed better under the status quo in several areas (e.g., the enforcement of verdicts). Whereas scholars sometimes frame such interventions as “state-building” or “countering violent extremism,” they rarely acknowledge that their involvement risks being perceived as an endorsement of a contested evaluation of “good” and “bad” actors.

SUGGESTED PRACTICES

Actively reflecting on positionality in relation to both OPs and those under study at every stage of the research process encourages researchers to proactively identify and address the practical and ethical challenges that may arise when collaborating across organizations with different identities, goals, and incentives. We suggest strategies and “lessons learned” for the design, implementation, and analysis stages of field experiments conducted with OPs (Humphreys and Weinstein 2009; Levine 2021; List 2011).

We first recommend that any field experiment conducted with an OP be designed to be mutually beneficial for the researcher and the OP. At the design stage, for example, researchers and OPs can have an open discussion anchored to a simple question: Is the experiment worth doing, even if the estimated treatment effect is indistinguishable from zero? If the answer is no, then we recommend reconsidering whether it is worth investing limited time and resources in the subsequent stages of the project. Experiments that are conducted without establishing a mutual understanding about the potential costs and limitations for all parties involved raise important problems that fall outside of the scope of standard ethical guidelines (see Gueron 2002 and Humphreys 2015 for further discussion).
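One concrete way to anchor that design-stage conversation is a quick statistical power check: if the smallest effect the OP would still act on cannot be detected with the sample the OP can deliver, the answer to the question above may already be no. The following is a minimal sketch in Python using the statsmodels library; the effect size, sample size, and significance level are illustrative placeholders, not values from any study discussed in this article.

```python
# Minimal design-stage power check using statsmodels.
# All numbers are illustrative placeholders, not values from any
# study discussed in this article.
from statsmodels.stats.power import tt_ind_solve_power

# Smallest standardized effect (Cohen's d) the partner would still act on.
smallest_effect_of_interest = 0.10

# Sample size the OP can realistically deliver per arm.
n_per_arm = 1000

# Solve for power, holding effect size, sample size, and alpha fixed.
power = tt_ind_solve_power(
    effect_size=smallest_effect_of_interest,
    nobs1=n_per_arm,
    alpha=0.05,
    ratio=1.0,  # equal-sized treatment and control groups
)
print(f"Power to detect d = {smallest_effect_of_interest}: {power:.2f}")

# Or invert the question: what is the minimum detectable effect at 80% power?
mde = tt_ind_solve_power(nobs1=n_per_arm, alpha=0.05, power=0.80, ratio=1.0)
print(f"Minimum detectable effect at 80% power: d = {mde:.2f}")
```

If the minimum detectable effect is well above anything the OP considers meaningful, both parties learn this before any resources are committed.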


Second, immersive fieldwork and pilot studies can both inform the experimental design and alert researchers to potential issues that might arise from differences in the identities and positionalities of OPs, participants, and other actors involved in the research process. Peyton, Sierra-Arévalo, and Rand (2019), for example, used insights that emerged from qualitative fieldwork (i.e., participant observation and officer interviews) to inform their experimental design and outcome measurement, and they continued this work during the intervention by observing police–civilian interactions (i.e., “treatments”) delivered in the field. Similarly, Haim, Ravanilla, and Sexton (2021) spent a year conducting multiple pilot studies and holding group discussions with target respondents in conflict-affected communities before implementing their intervention. Relationships and insights from years of experience working in Somalia, as well as qualitative data collected throughout the research process and extensive piloting before it began, helped Haas and Khadka (2020) to implement an assessment program that did not impose their subjective evaluations onto participants. This also ensured a constant line of communication among study participants, members of the greater society, researchers, and OPs.

Third, taking time to explain the logic behind random assignment is essential, and researchers should not assume that concerns raised by OPs about experimental designs stem from an aversion to random assignment (Peyton 2013). In the community-policing experiment, for example, Peyton, Sierra-Arévalo, and Rand (2019) had to abandon their initial plan to randomly assign officers based on their race and gender. This was not because the OP was inherently opposed to randomization but rather because allowing officers to operate within their own district aligned with professional incentives to build trust and rapport with residents who later might cooperate with police. After the researchers amended their design so that officers could stay in their home district, NHPD leadership became a key advocate for randomization of treatment across the entire city. Researchers also should consider following Haim, Ravanilla, and Sexton (2021) by running smaller pilot studies with OPs that establish the groundwork for more significant projects and also build mutual trust among parties (see also List 2011).
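To make the NHPD compromise concrete: the amended design amounts to block randomization, in which households are randomly assigned to treatment within each patrol district while officers never leave their home district. The sketch below illustrates that kind of assignment in Python; the district names, counts, and 50/50 allocation are hypothetical, not details of the actual study.

```python
# Illustrative sketch of block (district-level) random assignment in which
# households, not officers, are randomized. District names and counts are
# hypothetical, not taken from the actual NHPD study.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)  # fixed seed so the assignment is reproducible

# Hypothetical sampling frame: households nested within patrol districts.
households = pd.DataFrame({
    "household_id": range(600),
    "district": np.repeat(["Downtown", "Hill", "Fair Haven"], 200),
})

def assign_within_block(block: pd.DataFrame) -> pd.DataFrame:
    """Randomly assign half of each district's households to treatment."""
    n = len(block)
    treat = np.zeros(n, dtype=int)
    treat[: n // 2] = 1
    block = block.copy()
    block["treated"] = rng.permutation(treat)
    return block

assigned = (
    households.groupby("district", group_keys=False)
    .apply(assign_within_block)
)

# Officers stay in their home district: treatment visits within a district
# are delivered by that district's own officers.
print(assigned.groupby("district")["treated"].mean())  # 0.5 in every block
```

Blocking by district preserves the OP’s operational constraint while still delivering randomized treatment across the entire city.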

Concerns about randomization also can be alleviated by waitlisting participants for a control group. For example, in a randomized trial of a legal-aid program in Liberia (Sandefur and Siddiqi 2013), the implementing OP and researchers believed it would be unethical to deny individuals access to paralegal services. As a solution, participants randomly assigned to the control group were guaranteed paralegal access after a three-month period.
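In design terms, a waitlist is ordinary random assignment plus a scheduled delivery date for the control arm. A brief sketch under stated assumptions: only the three-month delay mirrors the Liberia study; the IDs and dates are invented.

```python
# Sketch of a waitlist ("phase-in") design: everyone eventually receives
# the program, and the randomized delay creates the control comparison.
# IDs and dates are hypothetical; only the three-month delay mirrors the
# Liberia legal-aid study described above.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)

participants = pd.DataFrame({"participant_id": range(200)})
participants["arm"] = rng.permutation(
    ["immediate"] * 100 + ["waitlist"] * 100
)

study_start = pd.Timestamp("2024-01-01")  # hypothetical start date
participants["service_start"] = np.where(
    participants["arm"] == "immediate",
    study_start,
    study_start + pd.DateOffset(months=3),  # waitlist arm phased in later
)

# Outcomes are measured during the first three months, when only the
# immediate arm has access; afterward, the waitlist arm is served too.
print(participants.groupby("arm")["service_start"].first())
```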

Fourth, researchers should consider enlisting local collaborators or working with OPs already embedded in the local community. They also should discourage OPs from participating in any activities that cannot be maintained as part of a longer-term strategy. Shutting off water in slum areas, for example, was not part of NCWSC’s long-term strategy for a sustained presence in Nairobi; recognizing this fact early in the research process could have attenuated the ethical concerns raised by the study. Working with embedded organizations can present practical challenges, such as an aversion to randomization and imperfect program implementation. However, Usmani, Jeuland, and Pattanayak (2018) provided evidence that embedded organizations have superior local information and better program implementation than non-embedded organizations. When evaluating an existing intervention in which the OP’s goals and incentives diverge from those of the researchers and/or study population, researchers also may consider whether the study provides the OP with resources or legitimacy it otherwise would not have.

Fifth, drafting a pre-analysis plan (PAP) in collaboration with an OP provides several potential benefits. A PAP can be used to create a shared understanding of the hypotheses to be tested and a timeline for the intervention, as well as early opportunities to resolve misalignment of goals between researchers and the OP. This process promotes norms of transparency that build trust between the OP and the researchers and allows researchers to incorporate the positionally situated expertise of their OP. Perhaps most important, the PAP forces the parties to articulate exactly which analyses will be performed and why. This “ties the hands” of both parties, insulating the data-analysis process from pressure to publish results that cast a positive light on the OP or help the researcher land a publication in a top journal. In one example, in addition to posting the document in a public repository (i.e., standard practice), Yokum, Ravishankar, and Coppock (2019) held several events in which they shared and discussed their PAP with stakeholders and the public.
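Part of what gives a PAP its hand-tying force is that the estimation code itself can be written, and even smoke-tested on simulated data, before any outcomes exist. The following is a hedged sketch of what a pre-specified primary analysis might look like; the variable names, covariate, and estimator choices are placeholders rather than the contents of any cited PAP.

```python
# Sketch of a pre-specified primary analysis, written before data collection
# so that neither party can steer the results afterward. Variable names and
# the covariate list are placeholders, not drawn from any cited PAP.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def primary_analysis(df: pd.DataFrame):
    """Pre-registered estimator: OLS of the primary outcome on treatment,
    adjusting for the pre-specified baseline covariate, with robust SEs."""
    model = smf.ols("outcome ~ treated + baseline_outcome", data=df)
    return model.fit(cov_type="HC2")  # heteroskedasticity-robust SEs

if __name__ == "__main__":
    # Smoke test on simulated data before launch; the same script is then
    # rerun, unchanged, on the real data after the endline survey.
    rng = np.random.default_rng(0)
    fake = pd.DataFrame({
        "treated": rng.integers(0, 2, 500),
        "baseline_outcome": rng.normal(size=500),
    })
    fake["outcome"] = (0.2 * fake["treated"] + fake["baseline_outcome"]
                       + rng.normal(size=500))
    print(primary_analysis(fake).summary().tables[1])
```

Freezing the script alongside the registered document makes deviations visible to both the OP and the research community.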

In cases in which OPs do not have the resources to engage with a detailed and often methodologically advanced PAP, a memorandum of understanding (MOU) may be a more reasonable approach that serves many of the same purposes. Although planned analyses are not fully specified in an MOU, researchers can use MOUs to establish shared expectations by committing to run analyses that the OP cares about, even when those analyses are unrelated to plans for academic publication.

Perhaps the most impactful (and, simultaneously, most difficult) way to minimize practical and ethical problems arising from the misalignment of incentives and goals among researchers, OPs, and study populations is to broaden the scope of researchers’ incentives. These efforts include editorial willingness to publish null results that nevertheless teach us something about important political, social, and economic processes (Duflo 2017). Researchers also can take care to embed questions of program implementation in a study, which may further improve the publication prospects of a null result.


CONCLUSION

This article highlights some of the challenges that may result from differences in the identities and positionalities of researchers, their OPs, and research participants. With a greater awareness of how different organizationally grounded assumptions, pressures, and biases can influence the research process, researchers are better equipped to avoid potential pitfalls. Furthermore, we urge researchers to consider our suggestions for how they might encourage successful collaborations. In particular, we advise researchers to engage in active reflexivity in all stages of the research process to incorporate the unique identities and positions of OPs and study participants.

REFERENCES

Bedolla, Lisa Garcia, and Melissa R. Michelson. 2012. Mobilizing Inclusion: Transforming the Electorate through Get-Out-the-Vote Campaigns. New Haven, CT: Yale University Press.
Bennion, Elizabeth A., and David W. Nickerson. 2016. “I Will Register and Vote, If You Teach Me How: A Field Experiment Testing Voter Registration in College Classrooms.” PS: Political Science & Politics 49 (4): 867–71.
Bryan, Gharad, James J. Choi, and Dean Karlan. 2021. “Randomizing Religion: The Impact of Protestant Evangelism on Economic Outcomes.” Quarterly Journal of Economics 136 (1): 293–380.
Butler, Daniel M. 2019. “Facilitating Field Experiments at the Subnational Level.” Journal of Politics 81 (1): 371–76.
Cilliers, Jacobus, Oeindrila Dube, and Bilal Siddiqi. 2015. “The White-Man Effect: How Foreigner Presence Affects Behavior in Experiments.” Journal of Economic Behavior & Organization 118: 397–414.
Coghlan, David, and Mary Brydon-Miller (eds.). 2014. The SAGE Encyclopedia of Action Research. Los Angeles: SAGE Publications.
Coville, Aidan, Sebastian Galiani, Paul Gertler, and Susumu Yoshida. 2020. “Enforcing Payment for Water and Sanitation Services in Nairobi’s Slums.” Unpublished manuscript, last modified July 2021.
Davis, Justine, and Kristin Michelitch. 2022. “Field Experiments: Thinking Through Identity and Positionality.” PS: Political Science & Politics. DOI: 10.1017/S1049096522000671.
Dean, Jon, Penny Furness, Diarmuid Verrier, Henry Lennon, Cinnamon Bennett, and Stephen Spencer. 2018. “Desert Island Data: An Investigation into Researcher Positionality.” Qualitative Research 18 (3): 273–89.
Duflo, Esther. 2017. “Ely Lecture: The Economist as Plumber.” American Economic Review 107 (5): 1–26.
Foote, Mary Q., and Tonya Gau Bartell. 2011. “Pathways to Equity in Mathematics Education: How Life Experiences Impact Researcher Positionality.” Educational Studies in Mathematics 78: 45–68.
Fujii, Lee A. 2017. Interviewing in Social Science Research: A Relational Approach. New York: Routledge.
Gueron, Judith M. 2002. “The Politics of Random Assignment: Implementing Studies and Affecting Policy.” In Evidence Matters: Randomized Trials in Education Research, ed. Frederick Mosteller and Robert Boruch, 15–49. Washington, DC: Brookings Institution Press.
Haas, Nicholas, and Prabin B. Khadka. 2020. “Increasing Women’s Access to Justice in Weak States: Experimental Evidence from Somalia.” Unpublished manuscript, last modified November 20, 2020.
Haenschen, Katherine, and Jay Jennings. 2019. “Mobilizing Millennial Voters with Targeted Internet Advertisements: A Field Experiment.” Political Communication 36 (3): 357–75.
Haim, Dotan, Nico Ravanilla, and Renard Sexton. 2021. “Sustained Government Engagement Improves Subsequent Pandemic Risk Reporting in Conflict Zones.” American Political Science Review 115 (2): 717–24.
Harrison, Brian F., and Melissa R. Michelson. 2022. “LGBTQ Scholarship: Researcher Identity, RCTs, & Ingroup Positionality.” PS: Political Science & Politics. DOI: 10.1017/S1049096522000038.
Hartman, Alexandra, Ali Cheema, Sabrina Karim, Milli Lake, Asad Liaqat, and Shandana Khan Mohmand. 2022. “Field Experiments on Gender: Where the Personal and Political Collide.” PS: Political Science & Politics. DOI: 10.1017/S1049096522000415.
Holmes, Andrew G. D. 2020. “Researcher Positionality: A Consideration of Its Influence and Place in Qualitative Research—A New Researcher Guide.” International Journal of Education 8 (4): 1–10.
Hopkins, Daniel J., Marc Meredith, Anjali Chainani, Nathaniel Olin, and Tiffany Tse. 2021. “Results from a 2020 Field Experiment Encouraging Voting by Mail.” Proceedings of the National Academy of Sciences 118 (4): e2021022118. DOI: 10.1073/pnas.2021022118.
Humphreys, Macartan. 2015. “Reflections on the Ethics of Social Experimentation.” Journal of Globalization and Development 6 (1): 87–112.
Humphreys, Macartan, and Jeremy M. Weinstein. 2009. “Field Experiments and the Political Economy of Development.” Annual Review of Political Science 12: 367–78.
Kapiszewski, Diana, Lauren M. MacLean, and Benjamin L. Read. 2015. Field Research in Political Science: Practices and Principles. Cambridge: Cambridge University Press.
Kim, Eunji, Sumitra Badrinathan, Donghyun Danny Choi, Sabrina Karim, and Yang-Yang Zhou. 2022. “Navigating ‘Insider’ and ‘Outsider’ Status as Researchers Conducting Field Experiments.” PS: Political Science & Politics. DOI: 10.1017/S1049096522000208.
Kumar, Tanu, Alison E. Post, and Isha Ray. 2018. “Flows, Leaks, and Blockages in Informational Interventions: A Field Experimental Study of Bangalore’s Water Sector.” World Development 106: 149–60.
Levine, Adam S. 2021. “How to Form Organizational Partnerships to Run Experiments.” In Advances in Experimental Political Science, ed. James N. Druckman and Donald P. Green, 199–216. Cambridge: Cambridge University Press.
List, John A. 2011. “Why Economists Should Conduct Field Experiments and 14 Tips for Pulling One Off.” Journal of Economic Perspectives 25 (3): 3–16.
Malhotra, Neil, Melissa R. Michelson, and Ali A. Valenzuela. 2012. “Emails from Official Sources Can Increase Turnout.” Quarterly Journal of Political Science 7 (3): 321–32.
Marsh, David, and Paul Furlong. 2017. “A Skin Not a Sweater: Ontology and Epistemology in Political Science.” In Theory and Methods in Political Science, ed. Vivien Lowndes, David Marsh, and Gerry Stoker, 17–41. London: Palgrave Macmillan Education.
McCorkel, Jill A., and Kristen Myers. 2003. “What Difference Does Difference Make? Position and Privilege in the Field.” Qualitative Sociology 26 (2): 199–231.
Nickerson, David W. 2007. “Quality Is Job One: Professional and Volunteer Voter Mobilization Calls.” American Journal of Political Science 51 (2): 269–82.
Panagopoulos, Costas. 2013. “Positive Social Pressure and Prosocial Motivation.” Political Psychology 34: 265–75.
Panagopoulos, Costas, and Kendall Bailey. 2020. “‘Friends-and-Neighbors’ Mobilization: A Field Experimental Replication and Extension.” Journal of Experimental Political Science 7 (1): 13–26.
Peyton, Kyle. 2013. “Ethics and Politics in Field Experiments.” Experimental Political Scientist 3: 20–36.
Peyton, Kyle, Michael Sierra-Arévalo, and David G. Rand. 2019. “A Field Experiment on Community Policing and Police Legitimacy.” Proceedings of the National Academy of Sciences 116 (40): 19894–98.
Rowe, Wendy E. 2014. “Positionality.” In The SAGE Encyclopedia of Action Research, ed. David Coghlan and Mary Brydon-Miller, 628. Los Angeles: SAGE Publications.
Sandefur, Justin, and Bilal Siddiqi. 2013. “Delivering Justice to the Poor: Theory and Experimental Evidence from Liberia.” Unpublished manuscript, last modified November 15, 2013.
Savin-Baden, Maggi, and Claire H. Major. 2013. Qualitative Research: The Essential Guide to Theory and Practice. Abingdon-on-Thames, Oxfordshire, UK: Routledge.
Sikes, Pat. 2004. “Methodology, Procedures, and Ethical Concerns.” In Doing Educational Research: A Guide for First-Time Researchers, ed. Clive Opie, 15–33. London: SAGE Publications.
Soedirgo, Jessica, and Aarie Glas. 2020. “Toward Active Reflexivity: Positionality and Practice in the Production of Knowledge.” PS: Political Science & Politics 53 (3): 527–31.
Usmani, Faraz, Marc Jeuland, and Subhrendu K. Pattanayak. 2018. “NGOs and the Effectiveness of Interventions.” Unpublished manuscript, last modified May 2018.
Yokum, David, Anita Ravishankar, and Alexander Coppock. 2019. “A Randomized Control Trial Evaluating the Effects of Police Body-Worn Cameras.” Proceedings of the National Academy of Sciences 116 (21): 10329–32.