What, Where, Who, and Why? An Empirical Investigation of Positionality in Political Science Field Experiments

Published online by Cambridge University Press: 27 May 2022

Cristina Corduneanu-Huci
Affiliation:
Central European University, Austria
Michael T. Dorsch
Affiliation:
Central European University, Austria
Paul Maarek
Affiliation:
Université Paris-Panthéon-Assas, France
© The Author(s), 2022. Published by Cambridge University Press on behalf of the American Political Science Association

Political scientists’ positionality (i.e., their own identities, beliefs, and assumptions about the context of a study) often receives implicit recognition in publications but rarely is explicitly addressed (Davis 2020; Davis and Michelitch 2022; Soedirgo and Glas 2020). Field experiments, as real-life social laboratories and the current gold-standard technology of policy expertise, occupy a unique place. To a much larger extent than other methods of social science research, “real-world” experimentation entails actual economic or political stakes, complex ethical dimensions, demanding logistical costs and infrastructure, and—in an increasing number of cases—a direct link to decision makers. This article provides a first empirical basis for discussions of positionality because data on experimenter characteristics currently are unavailable.

To provide a scientometric analysis,1 we compiled an original dataset that pools all field experiments by political scientists preregistered between 2014 and 2019 across three main social science registries: primarily the Evidence in Governance and Politics (EGAP) registry and, marginally, the American Economic Association (AEA) registry for randomized controlled trials (RCTs) and the Registry for International Development Impact Evaluations (RIDIE). Table 1 provides a basic overview of our data. In the absence of other sources, preregistration allows us to capture the universe of completed, ongoing, and planned studies.

Table 1 Preregistered Experiments in the Three Registries

Notes: Preregistered experiment data collected from 2014 to 2019. Discipline is reported for the principal investigator. Field experiments exclude survey and lab experiments.

We are aware of potential biases inherent in our data-collection strategy. Since preregistration emerged in the international development community (i.e., AEA and EGAP), field experiments conducted in academic fields or cultures without preregistration norms might be captured only partially. This omission raises valid sample-bias concerns. Therefore, for robustness, we also compiled and coded a parallel dataset on all field experiments published in top political science journals.2

Among the many potential dimensions relevant to positionality, we explored three aspects empirically: (1) the geographical distribution of field experiments and related time trends; (2) the clustering of field experiments by institution, author, and topic; and (3) the type of partners involved in experimentation. The following sections discuss each dimension and present descriptive trends.

GEOGRAPHICAL DISTRIBUTION OF FIELD EXPERIMENTS IN POLITICAL SCIENCE

Three main sets of issues are explicitly associated with the location of field experiments: (1) the often-unacknowledged power relationships between experimenters and subjects of research; (2) relatedly, the North–South hierarchies embedded in the knowledge-production process that are exacerbated by the high costs associated with field experiments; and (3) site-selection bias and geographical clustering of field experimentation. The first issue usually emphasizes that randomization “in the tropics” conducted by Western researchers from wealthy institutions on poor subjects may not always meet full ethical standards (Cronin-Furman and Lake 2018; Herman et al. 2022; McDermott and Hatemi 2020). A Nobel Prize–winning social scientist noted “…nearly all RCTs on the welfare system are done by better-heeled, better-educated, and paler people on lower-income, less-educated, and darker people” (Deaton 2020, 21).

When we examined the geographical distribution of the current wave of field experiments, we found that, indeed, the majority of researchers are located in the Global North, with the United States accounting for most of the experiments, as shown in the bottom-left panel of figure 1. However, the top-left panel of figure 1 shows that between 2014 and 2019, the concentration of the country locations of field research, computed as a Hirschman–Herfindahl index, declined steadily. A concentration index of 1 means that all experiments take place in a single country, whereas the index approaches 0 as experiments are spread across many different countries.3 The concentration of experiments originating in the developing world has been low and constant over time. The distribution of the data on published field experiments is even more skewed and relatively stable over time: among 16 countries of origin, 84.39% of 205 coauthors were affiliated with US institutions between 2014 and 2019. Less than 4% of published RCTs in political science originated from three emerging economies (i.e., Brazil, China, and Russia) and none from low-income countries. Scholars therefore must grapple with a tradeoff: as experimental evidence becomes more standard, the research of political scientists trained at and employed by North American institutions, although contributing significantly to knowledge, entails potential positionality biases.
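To make the concentration measure concrete, the following minimal Python sketch computes a yearly Hirschman–Herfindahl index from (year, country) records. It is a hypothetical illustration of the computation (the record layout and example data are invented), not our replication code.

```python
from collections import Counter

def hhi_by_year(records):
    """For each year, compute a Hirschman-Herfindahl index of
    country concentration from (year, country) experiment records."""
    by_year = {}
    for year, country in records:
        by_year.setdefault(year, []).append(country)

    hhi = {}
    for year, countries in by_year.items():
        total = len(countries)
        # HHI_t = sum of squared country shares: 1 if all experiments
        # are in one country; approaches 0 as they spread across many.
        hhi[year] = sum((n / total) ** 2 for n in Counter(countries).values())
    return hhi

# Hypothetical example data, not from the registries.
records = [
    (2014, "USA"), (2014, "USA"), (2014, "GHA"),
    (2015, "USA"), (2015, "KEN"), (2015, "LBR"), (2015, "USA"),
]
print(hhi_by_year(records))  # {2014: ~0.556, 2015: 0.375}
```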

Figure 1 Geographic Clustering of Field Experiments

The top-left and top-right panels plot Hirschman–Herfindahl indexes of concentration for the countries of preregistering institutions and the study-location countries, respectively. The bottom-left and bottom-right panels plot the corresponding country shares of preregistered experiments.

It is notable, however, that despite the recent notoriety gained by the randomista movement in development, field experimentation has much deeper historical roots in advanced industrial democracies. The earliest field experiments in political science studied “get-out-the-vote” mailings during the 1924 US presidential election; followed by voter mobilization in Ann Arbor, Michigan, in the early 1950s (Gerber 2011); and several US federal and local government program evaluations during the 1960s and 1970s (Ogden 2017). Assuaging the randomization “in-the-tropics” concern to some extent, field experiments in political science have developed a bimodal geographical distribution over time: a first wave of studies of voting behavior mainly focused on the United States, followed by a second wave of experiments on non-Western countries in the subfields of comparative politics and the political economy of development. Empirically, what is striking in our data in terms of study location is the shift from non-advanced industrialized countries to the United States and Europe during the period we studied (2014–2019). Figure 2 demonstrates the similarly increasing trend in “RCT domestication” (i.e., the researcher country also being the site of experimentation) in the top-left panel and the relative decline of the share of “North–South” experiments (i.e., an advanced industrialized country fielding an experiment in a developing country) in the top-right panel. “Domestication” refers to the country of study corresponding to the country of the lead researcher’s institution.4
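As a sketch of how the measures in the top panels of figure 2 can be computed, the snippet below derives yearly “domestication” and “North–South” shares from per-experiment records. The record layout and the AID country set are hypothetical stand-ins, not our actual codebook.

```python
from collections import defaultdict

# Hypothetical set of advanced industrialized democracies (AID);
# a real analysis would use a standard country classification.
AID = {"USA", "GBR", "FRA", "DEU", "CAN"}

def geographic_shares(records):
    """records: iterable of (year, institution_country, study_country).
    Returns {year: (domestication_share, north_south_share)}."""
    totals = defaultdict(int)
    domestic = defaultdict(int)
    north_south = defaultdict(int)
    for year, inst, study in records:
        totals[year] += 1
        if inst == study:
            domestic[year] += 1     # "RCT domestication"
        if inst in AID and study not in AID:
            north_south[year] += 1  # "North-South" experiment
    return {y: (domestic[y] / totals[y], north_south[y] / totals[y])
            for y in totals}

# Hypothetical example: one domestic and one North-South experiment.
print(geographic_shares([(2016, "USA", "USA"), (2016, "USA", "GHA")]))
# {2016: (0.5, 0.5)}
```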

Figure 2 Geographic Positionality of Field Experiments

The top-left panel plots the share of preregistered experiments taking place in the same country as the preregistering institution. The top-right panel plots the share of experiments preregistered by an institution in an advanced industrialized democratic (AID) country and taking place in a non-AID country. The bottom-left panel plots the share of preregistered experiments taking place outside the preregistering institution’s country in locations where English is an official language. The bottom-right panel shows the shares of the top nine country locations between 2014 and 2019.

In parallel, there is an opposite concern about the geographical coverage of experimentation from a knowledge-gain perspective—namely, that the geographical focus of political science traditionally has been Western-centric rather than global. This epistemic concern rests on a long-standing geographical imbalance in political science research: wealthier democratic countries with English, Spanish, or French as their main spoken language were more likely to be studied (Wilson and Knutsen 2020). An emerging literature also demonstrates that field experiments are more likely to occur in certain political, geographical, and institutional contexts (Blair, Iyengar, and Shapiro 2013; Corduneanu-Huci, Dorsch, and Maarek 2021; Das 2020). Our data optimistically show that the sites of experimentation diversified to more than 100 countries between 2014 and 2019.

The share of nondomestic experiments taking place in a country where English is an official language—a potential source of site-selection bias—also decreased markedly during the period. Moreover, several low- and middle-income countries from Sub-Saharan Africa, East Asia, and the Middle East (e.g., Ghana, Liberia, Mali, Uganda, Lebanon, and Vietnam) entered the top-10 ranking of RCT geographical coverage because they were featured in multiple studies published in leading political science journals (Figure A5 in the Online Appendix). These countries previously were epistemically marginal in the leading political science literature. Liberia, for instance, was studied in only 14 of a total of 27,689 articles appearing in top political science journals over more than a century. However, it became a top-10 site for field experiments, accounting for approximately 6% of all field RCTs published in the same journals in only five recent years.

Nevertheless, despite clear evidence of geographical diversification and epistemic gains in previously neglected contexts, experimental sites remained highly concentrated, with only five countries accounting for approximately 50% of all preregistered field experiments. If anything, the geography of RCTs became more concentrated during the period that we studied. The cautious diversification of experimental sites according to geographical and topical evidence-gap maps, coupled with stricter ethical criteria for field experimentation (Phillips 2021), may address critiques of North–South epistemic power divides and may ensure that the topical interests of field experimenters more closely match local developmental priorities.

INSTITUTION AND TOPIC CLUSTERING AMONG EXPERIMENTAL RESEARCHERS

Mirroring geographical patterns, the skewed distribution of knowledge production is not unique to field experiments; it affects most academic and policy research. Nevertheless, field experimentation stands out in this respect because of its high costs and complex logistical requirements compared to other methods of inquiry. This increases the likelihood that the top producers are influential institutions and authors with access to resources as well as policy and research networks. Therefore, the concern is that the ensuing institutional and geographical imbalances are even more pronounced for field experiments. A comprehensive bibliometric review of 25 development journals (2000–2019) revealed that, indeed, even though non-African authors write the overwhelming majority (87%) of articles on Africa, the share of experimental research on an African country by an African author is 2.5 times lower than the equivalent share for observational research (Panin 2020). Similar scientometric studies of other world regions also highlighted the relative diversity of observational studies (Cansun and Arik 2018; Codato, Madeira, and Bittencourt 2020). We expanded this analysis by examining the affiliations of the lead principal investigators among all preregistered experiments in political science.

Figure 3 (left panel) plots the share of experiments by institutions that have political science departments ranking among the world’s top 10, according to the Shanghai world rankings (Shanghai Ranking 2019). We found that, indeed, the pool of experimenters was highly concentrated institutionally, with the top 10 academic institutions accounting for 20% to 40% of all experiments between 2014 and 2019. There was a clear downward trend during this period, however, which suggests a diffusion of the methodology that may be driven by its strength in causal identification, decreasing costs, and economies of scale.5
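A minimal sketch of this coding step, assuming a hypothetical record layout (year plus a list of author affiliations) and a placeholder institution set rather than the actual Shanghai top 10:

```python
from collections import defaultdict

# Placeholder stand-in for the top-10 ranked departments.
TOP10 = {"University A", "University B", "University C"}

def top10_share_by_year(records):
    """records: iterable of (year, [author_institutions]).
    Returns the yearly share of preregistrations with at least
    one author affiliated with a top-10 institution."""
    totals, flagged = defaultdict(int), defaultdict(int)
    for year, institutions in records:
        totals[year] += 1
        if any(inst in TOP10 for inst in institutions):
            flagged[year] += 1
    return {y: flagged[y] / totals[y] for y in totals}
```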

Figure 3 Clustering Around Pioneering Institutions

Share of preregistered experiments that include authors from the top 10 universities in political science according to the Shanghai world rankings of universities, and their numbers on the right. Note that eight of these 10 institutions have endowments greater than $10 billion USD, placing them among the top 15 endowed universities in the world. All of them except one have endowments greater than $4 billion USD.

We also found a relatively high author concentration around influential researchers and associated networks. Figure 4 shows that the share of experiments with pioneering authors (i.e., at least 10 experiments in the left panel and at least five experiments in the right panel) was in the same range as the share from top institutions. In our supplementary data on published work in leading political science journals, 71% of all field experiments had at least one author from a top-20 institution and 40% of all authors were associated with a top-10 program.

Figure 4 The Impact of Pioneering Researchers

Share of preregistered experiments by political scientists with at least 10 (five) experiments on the left-hand (right-hand) side.

Influence clustering entails both costs and benefits. Preexisting experimental infrastructure lowers transaction costs and generates economies of scale in knowledge production. Conversely, barriers to entry for a wider pool of researchers increase the overall costs of innovation. Grants and other funding schemes that privilege researchers from broader institutional circles, as well as more systematic inclusion and crediting of Global South contributors, could diversify the institutional pool in field experimentation. During the past two decades, pioneering policy labs and research networks (e.g., the Abdul Latif Jameel Poverty Action Lab, the Center for Effective Global Action, the Working Group in African Political Economy, and Evidence in Governance and Politics) made substantial investments in fostering collaborations between researchers from the Global North and the Global South. Knowledge transfers and local capacity building in experimentation generate significant positive externalities that may further address epistemic inequalities.

Experimental topics also entail positionality implications. We text-mined experiment titles across our registries to generate several theoretically informed and occasionally overlapping categories of field experiments in political science: electoral learning, governance and accountability, minority representation, and postconflict recovery. Figure 5 (left panel) shows that governance, as a conceptual-umbrella category, accounted for approximately 50% of preregistered experiments. We also coded a residual, highly heterogeneous “policy-learning” category. Although our coding scheme is imperfect, the high share of this category may reflect broader disciplinary debates about whether experimentation should address underlying theoretical mechanisms in political science (Humphreys and Weinstein 2009) or adopt the more technical “mindset of plumbers” advocated in economics (Duflo 2017).6
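The snippet below illustrates the kind of keyword-based tagging this text-mining step involves. The keyword lists are deliberately small hypothetical examples, not our actual dictionary, and real titles would require more careful handling (e.g., stemming and multilingual titles).

```python
import re

# Hypothetical keyword dictionary; the categories follow the article,
# but the specific keywords are illustrative only.
TOPIC_KEYWORDS = {
    "electoral learning": ["vote", "voter", "election", "turnout"],
    "governance and accountability": ["governance", "accountability",
                                      "corruption", "audit"],
    "minority representation": ["minority", "gender", "ethnic"],
    "postconflict recovery": ["conflict", "reconciliation", "refugee"],
}

def tag_title(title):
    """Return all (possibly overlapping) topic tags for a title;
    untagged titles fall into the residual policy-learning bin."""
    lowered = title.lower()
    tags = [topic for topic, words in TOPIC_KEYWORDS.items()
            if any(re.search(rf"\b{w}", lowered) for w in words)]
    return tags or ["policy learning"]

print(tag_title("Voter Turnout and Ethnic Quotas in Local Elections"))
# ['electoral learning', 'minority representation']
```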

Figure 5 Field Experiments by Topic and Partners

The left-hand panel breaks down the share of preregistered experiments by text-mined keywords. The right-hand panel plots the share of preregistered experiments with public and private partners, respectively.

Monitoring topical trends is important because each topic raises its own unique set of ethical and research-design issues; therefore, a “one-size-fits-all” normative formula may not be optimal. For instance, policy evaluations are characterized by less-regulated ethical norms than pure research, and any steps toward codifying these norms would be crucial. In the case of postconflict or poverty-alleviation interventions, the experimental design itself may increase these ethical stakes given that a control group of vulnerable recipients does not receive the treatment (Evans 2021). This is not the case in field experimentation on voter turnout or informational treatments. Moreover, social scientists in other disciplines increasingly have begun to conduct field experiments on similar topics. Fieldwork consolidation to prevent excessive location clustering and participant fatigue may benefit from further formal or informal cross-disciplinary coordination mechanisms.

EXPERIMENTAL PARTNERS

For experimentation on political topics, the three key actors with stakes in the experimental design and implementation are the subjects/citizens, the researchers, and the implementers (i.e., research firms, donors, political parties, nongovernmental organizations, government agencies, and others) (Haas et al. 2022). Of the articles with field experiments published between 2000 and 2017 in three top political science journals, 62% entailed a partnership (Levine 2021). Figure 5 (right panel) shows the shares and evolution of partnership relationships over time. Political scientists rely more on non-state partners than on governments, and the share of experiments with private partners increased during our sample period. We suspect that this reflects both practicality and topic-related circumspection.

On a practical level, private or nongovernmental organizations (NGOs) often are preferred as main implementing partners because they are more flexible and able to randomize, whereas many governments—for legal or political reasons—face difficulties in designing control groups and excluding recipients from treatment. Moreover, governments traditionally have been more reluctant to engage with academic researchers. In development, field experimentation initially focused on working with NGOs and adopting a gradualist and practical approach with respect to partnerships, “…not by knocking at the minister’s door but by marginal successes that create credibility for the movement, gains for policy makers” (Duflo quoted in Ogden 2017, 25). In general, design adjustments tailored to the constraints of various non-state partners, ranging from NGOs to political campaigns, have proven fruitful for sustained researcher–implementer collaboration in political science (Green, Calfano, and Aronow 2014).

With respect to topics, in some cases and contexts the study of sensitive political phenomena—to a larger extent than the study of economic development—also precludes direct partnerships with governments or political leaders, which may explain the pattern in our data. The search for and use of evidence by decision makers often entail instrumental considerations, and awareness of this is important for managing the relationship with an implementation partner. For instance, for conditional-cash-transfer experiments taking place in Africa, researchers, in collaboration with international NGOs, often chose not to work with local politicians because of clientelistic concerns (Ouma 2020).

Partner selection is paramount both for normative questions—ethical parameters and project legitimacy in the field (Haas et al. 2022; Ouma 2020)—and for scientific outcomes. From a normative standpoint, if the government is the main partner in a field experiment that relies on cluster randomization, its “right to rule” in certain policy areas (e.g., public schools and public health clinics) alleviates some ethical concerns regarding the lack of informed consent (Evans 2021). Scientifically, the interaction and basic trust between the community and the partner organization are essential for both experimental compliance and findings. There is evidence that the type of partner—government or NGO—involved in implementing identical field experiments conducted on the same site can lead to divergent treatment effects (Allcott 2015; Bold et al. 2018; Vivalt 2020). For example, a study of educational reforms in Kenya by Bold et al. (2018) found that the treatment significantly raised learning outcomes when implemented by an international NGO, whereas an identical intervention had no impact when implemented by the Kenyan government. The explicit acknowledgment of the political-economy stakes that partners have is a crucial positional aspect for both normative stances and scientific value.

CONCLUSION

This article introduces a new dataset on the incidence of field experiments in political science, as measured through experiment preregistrations and corroborated by systematic data on field experiments published in top political science journals. This contribution can be useful for analyzing where, with whom, and why this research methodology is used. Because of the significant overlap across the social sciences, monitoring basic trends across disciplines is likely to benefit both our understanding of positionality in field experimentation and cross-disciplinary coordination strategies. We documented several empirical trends centered on the interaction among researchers, experimental subjects, and implementation partners. In conclusion, although we are strong advocates of the discipline’s methodological shift toward credible causal identification, for which field experiments are the gold standard, our analysis—and this symposium more broadly—raises positionality issues. Political scientists should consider the geographical, institutional, topical, and relational identities at play in field experimentation.

ACKNOWLEDGMENTS

We thank Viktoriia Poltoraskaya for her excellent research assistance. Part of this research was carried out while Michael Dorsch was a fellow in residence at the Aix-Marseille School of Economics, which he thanks for its hospitality and financial support.

DATA AVAILABILITY STATEMENT

Research documentation and data that support the findings of this study are openly available at the PS: Political Science & Politics Harvard Dataverse at https://doi.org/10.7910/DVN/JVW4NF.

Supplementary Materials

To view supplementary material for this article, please visit http://doi.org/10.1017/S104909652200066X.

Footnotes

1. As defined by the academic journal Scientometrics, “Scientometrics is (…) concerned with the quantitative features and characteristics of science” (Scientometrics 2021).

2. This analysis used the web-scraping tool developed by Wilson and Knutsen (2020), which surveyed 27,689 articles published in American Political Science Review, American Journal of Political Science, British Journal of Political Science, Comparative Politics, Comparative Political Studies, Journal of Politics, International Organization, and World Politics between 2000 and 2019. See the online appendix.

3. Technically, we computed a Hirschman–Herfindahl index as the sum of the squared country shares of experiments for a given year: for year $t$, the index is ${HHI}_t=\sum_{i=1}^{N} s_{i,t}^2$, where $s_{i,t}$ is the share of experiments taking place in country $i$ in year $t$. Higher index values indicate greater concentration. The Hirschman–Herfindahl index was developed to measure the extent of market power in economic contexts, with a concentration index of 1 corresponding to a monopolized market.

4. One caveat is in order: our data do not systematically capture the researcher’s country of origin, which may have important implications for insider–outsider status (Kim et al. 2022). We instead emphasize institutional affiliation as a proxy for access to resources.

5. Costs for RCTs on social welfare and development, for instance, ranged between $50,000 and $1,000,000 USD over several years (Shah et al. 2015). Earlier social-welfare experiments in the United States had budgets of up to $40 million USD (Ogden 2017). Online surveys are significantly less costly than field experiments (Dupuis, Endicott-Popovsky, and Crossler 2013).

6. Our investigation of the data on published field experiments corroborates that governance and accountability leads across topics, followed by minority representation and voting behavior.

REFERENCES

Allcott, Hunt. 2015. “Site-Selection Bias in Program Evaluation.” Quarterly Journal of Economics 130 (3): 1117–65.
Blair, Graeme, Radha K. Iyengar, and Jacob N. Shapiro. 2013. “Where Policy Experiments Are Conducted in Economics and Political Science: The Missing Autocracies.” Princeton, NJ: Princeton University. Working Paper.
Bold, Tessa, Mwangi Kimenyi, Germano Mwabu, Justin Sandefur, et al. 2018. “Experimental Evidence on Scaling Up Education Reforms in Kenya.” Journal of Public Economics 168:1–20.
Cansun, Sebnem, and Engin Arik. 2018. “Political Science Publications about Turkey.” Scientometrics 115 (1): 169–88.
Codato, Adriano, Rafael Madeira, and Maiane Bittencourt. 2020. “Political Science in Latin America: A Scientometric Analysis.” Brazilian Political Science Review 14 (3): e0007. DOI:10.1590/1981-3821202000030005.
Corduneanu-Huci, Cristina, Michael T. Dorsch, and Paul Maarek. 2021. “The Politics of Experimentation: Political Competition and Randomized Controlled Trials.” Journal of Comparative Economics 49 (1): 1–21.
Cronin-Furman, Kate, and Milli Lake. 2018. “Ethics Abroad: Fieldwork in Fragile and Violent Contexts.” PS: Political Science & Politics 51 (3): 607–14.
Das, Sabyasachi. 2020. “(Don’t) Leave Politics Out of It: Reflections on Public Policies, Experiments, and Interventions.” World Development 127:104792.
Davis, Justine M. 2020. “Manipulating Africa? Perspectives on the Experimental Method in the Study of African Politics.” African Affairs 119 (476): 452–67.
Davis, Justine M., and Kristin Michelitch. 2022. “Field Experiments: Thinking through Identity and Positionality.” PS: Political Science & Politics. DOI:10.1017/S1049096522000671.
Deaton, Angus. 2020. “Randomization in the Tropics Revisited: A Theme and Eleven Variations.” Cambridge, MA: National Bureau of Economic Research. Technical Report.
Duflo, Esther. 2017. “Richard T. Ely Lecture: The Economist as Plumber.” American Economic Review 107 (5): 1–26.
Dupuis, Marc, Barbara Endicott-Popovsky, and Robert Crossler. 2013. “An Analysis of the Use of Amazon’s Mechanical Turk for Survey Research in the Cloud.” In Proceedings of the International Conference on Cloud Security Management: ICCSM 2013, 10–18. Seattle: University of Washington.
Evans, David K. 2021. “Towards Improved and More Transparent Ethics in Randomised Controlled Trials in Development Social Science.” Washington, DC: Center for Global Development.
Gerber, Alan. 2011. “Field Experiments in Political Science.” In Cambridge Handbook of Experimental Political Science, ed. James N. Druckman, Donald P. Green, James H. Kuklinski, and Arthur Lupia, 115–38. Cambridge: Cambridge University Press. DOI:10.1017/CBO9780511921452.009.
Green, Donald P., Brian R. Calfano, and Peter M. Aronow. 2014. “Field Experimental Designs for the Study of Media Effects.” Political Communication 31 (1): 168–80.
Haas, Nicholas, Katherine Haenschen, Tanu Kumar, Costas Panagopoulos, Kyle Peyton, Nico Ravanilla, and Michael Sierra-Arévalo. 2022. “Organizational Identity and Positionality in Randomized Control Trials: Considerations and Advice for Collaborative Research Teams.” PS: Political Science & Politics. DOI:10.1017/S1049096522000026.
Herman, Biz, Amma Panin, Nicholas Owlsley, Graeme Blair, Alex Dyzenhaus, Elizabeth Wellman, Allison Grossman, Ken Opalo, Anisha Singh, Hannah Alarian, Lindsay Pruett, and Yvonne Tan. 2022. “Field Experiments in the Global South: Assessing Risks, Localizing Benefits, and Addressing Positionality.” PS: Political Science & Politics. Forthcoming.
Humphreys, Macartan, and Jeremy M. Weinstein. 2009. “Field Experiments and the Political Economy of Development.” Annual Review of Political Science 12:367–78.
Kim, Eunji, Sumitra Badrinathan, Donghyun Danny Choi, Sabrina Karim, and Yang-Yang Zhou. 2022. “Navigating ‘Insider’ and ‘Outsider’ Status as Researchers Conducting Field Experiments.” PS: Political Science & Politics. DOI:10.1017/S1049096522000208.
Levine, Adam Seth. 2021. “How to Form Organizational Partnerships to Run Experiments.” In Advances in Experimental Political Science, ed. James N. Druckman and Donald P. Green, 199–216. Cambridge: Cambridge University Press. DOI:10.1017/9781108777919.014.
McDermott, Rose, and Peter K. Hatemi. 2020. “Ethics in Field Experimentation: A Call to Establish New Standards to Protect the Public from Unwanted Manipulation and Real Harms.” Proceedings of the National Academy of Sciences 117 (48): 30014–21.
Ogden, Timothy. 2017. Experimental Conversations: Perspectives on Randomized Trials in Development Economics. Cambridge, MA: MIT Press.
Ouma, Marion. 2020. “Trust, Legitimacy, and Community Perceptions on Randomisation of Cash Transfers.” CODESRIA Bulletin 1:25–28.
Panin, Amma. 2020. “Economics Experiments in Africa: How Many and by Whom.” CODESRIA Bulletin 4:23–29.
Phillips, Trisha. 2021. “Ethics of Field Experiments.” Annual Review of Political Science 24:277–300.
Scientometrics. 2021. “Aims and Scope.” https://www.springer.com/journal/11192/aims-and-scope (accessed November 4, 2021).
Shah, Neil Buddy, Paul Wang, Andrew Fraker, and Daniel Gastfriend. 2015. “Evaluations with Impact: Decision-Focused Impact Evaluation as a Practical Policymaking Tool.” Washington, DC: International Initiative for Impact Evaluation. Working Paper 25.
Shanghai Ranking. 2019. https://www.shanghairanking.com/rankings/gras/2019/RS0504 (accessed November 4, 2021).
Soedirgo, Jessica, and Aarie Glas. 2020. “Toward Active Reflexivity: Positionality and Practice in the Production of Knowledge.” PS: Political Science & Politics 53 (3): 527–31.
Vivalt, Eva. 2020. “How Much Can We Generalize from Impact Evaluations?” Journal of the European Economic Association 18 (6): 3045–89.
Wilson, Matthew Charles, and Carl Henrik Knutsen. 2020. “Geographical Coverage in Political Science Research.” Perspectives on Politics: 1–16.