
Case Studies and Analytic Transparency in Causal-Oriented Mixed-Methods Research

Published online by Cambridge University Press:  10 October 2017

Jeb Barnes
Affiliation:
University of Southern California
Nicholas Weller
Affiliation:
University of California, Riverside

Type
Symposium: The Road Less Traveled: An Agenda for Mixed-Methods Research
Copyright
Copyright © American Political Science Association 2017 

There is a growing call for greater “analytic transparency” in empirical research, that is, clarity about how researchers draw conclusions from their data (APSA Ethical Guidelines 2012). Translating this general call into specific practices, however, is deceptively complex. Part of the challenge is that a “one-size-fits-all” approach does not exist. In addition, analytic transparency must be balanced with other concerns, especially the protection of human subjects, the relevance of which will depend on the type of research project. Given the importance of context to analytic transparency, the most that we can hope for are frameworks—or common sets of questions—for facilitating the development of best practices within particular research traditions.

With this goal in mind, this article explores analytic transparency in one type of mixed-methods research: causal-oriented mixed-methods research (C-MMR), which is research that combines large-N quantitative research and case studies to investigate a hypothesized causal relationship between an explanatory variable, X, and an outcome, Y, across cases. We focus on C-MMR and analytic transparency—as opposed to “production transparency” that concerns clarity about how data are collected or generated—for several reasons. One is that C-MMR is increasingly common and the subject of a burgeoning literature as well as multiple short courses at the American Political Science Association Annual Meeting and sessions at the Institute for Qualitative and Multi-Method Research (Beach and Pedersen 2016; Brady and Collier 2010; Dunning 2012; Lieberman 2005; Seawright 2016; Weller and Barnes 2014; 2016). Another reason is that C-MMR raises thorny analytic transparency issues because each component of the research features distinct inferential goals and contributes differently to the empirical analysis, depending on the type of mixed-methods research.

We begin by identifying core questions related to analytic transparency in C-MMR and then consider the role of case studies in light of these questions in connection with two subcategories of C-MMR: triangulation-based and integration-based C-MMR (Seawright 2016). The article offers practical guidance that adds to the broader conversation about analytic transparency in mixed-methods research.

FRAMING ANALYTIC TRANSPARENCY IN CAUSAL-ORIENTED MIXED-METHODS RESEARCH

Analytic transparency in C-MMR is daunting because it is multilayered and the layers are interdependent. Although we recognize that each project presents its own challenges, we believe that scholars using C-MMR should consider the following three basic questions in thinking about analytic transparency:

  1. What is the causal relationship being explored?

  2. What are the intended analytic contributions of each method?

  3. What is the empirical contribution of each method?

The nature of the causal relationship being studied is highly project-specific. However, the last two questions raise general issues regarding the role of case studies in triangulation- and integration-based C-MMR, and the varying role of case studies in these approaches suggests different concerns for analytic transparency.

CASE STUDIES IN TRIANGULATION-BASED CAUSAL-ORIENTED MIXED-METHODS RESEARCH

Triangulation-based C-MMR uses large-N and small-N data to study the same theoretical relationship independently.[1] The basic idea is that separate analytic cuts at understanding the same phenomenon will yield more valid results. The goal is convergence among different methods so that using methods with distinct ontological and epistemological assumptions is a strength; indeed, the more divergent the methods, the better. Perhaps the most common example uses large-N analysis to establish a robust relationship between X and Y and detailed process-tracing case studies to probe the X/Y relationship in specific settings. Here, the quantitative and qualitative analyses typically focus on different types of observable implications of the X/Y relationship. The large-N work centers on outcomes and the small-N work focuses on processes, but the goal in both types of research is to make claims about X as a cause of Y.

The extensive literature on process tracing clarifies the role of case studies in establishing a causal relationship (Bennett 2008; 2010; Bennett and Checkel 2015; Collier 2011; Mahoney 2010; 2012). Table 1 re-creates a well-known typology of four empirical tests used in process tracing (Bennett 2010; Collier 2011; Van Evera 1997). Doubly decisive tests provide the strongest evidence: passing one both confirms the hypothesized X/Y relationship in the case and eliminates rival hypotheses about that case. The weakest evidence comes from straw-in-the-wind tests, in which passing the test only confirms the relevance of the hypothesis but does not eliminate the rivals. Hoop tests and smoking-gun tests fall between these extremes. Passing a hoop test affirms the relevance of the hypothesized X/Y relationship in the case, whereas failing it eliminates the hypothesis. Passing a smoking-gun test confirms the relationship but does not eliminate its rivals. These tests are not mutually exclusive; the art of process tracing is combining different types of tests to build a persuasive account of causal processes in a single case.[2] By using this language, scholars can clarify the intended contributions of case studies in triangulation-based C-MMR and how these insights differ from (or clarify) the findings of the large-N component of their work.

Table 1 Process Tracing for Causal Inference: Necessary and Sufficient X/Y Relationships

                                      Sufficient to Affirm Causal Inference
                                      No                      Yes
Necessary to Affirm      No           Straw-in-the-Wind       Smoking Gun
Causal Inference         Yes          Hoop                    Doubly Decisive

Note: Reproduced from Collier 2011.

Analytic transparency requires identifying not only the intended contributions of each method but also the value added of each component to the empirical findings. Although table 1 frames these as “tests” for causal inference, they concern causation within a single case, not a causal X/Y relationship across cases. In this regard, it is telling that when scholars explain these tests, they commonly resort to analogies of detectives (often Sherlock Holmes) trying to solve a specific crime with multiple suspects (Collier 2011), without concern for making claims about other crimes. By contrast, in C-MMR, scholars are interested not only in whether X causes Y in a specific case (the “whodunit”) but also in the X/Y relationship across cases (Weller and Barnes 2014). Whether we can make a general inference about X as a cause of Y based on process-tracing evidence from a single case (or set of cases) is a separate question.

The empirical value added of case studies in triangulation-based C-MMR significantly depends on the nature of the underlying causal claim. If we expect that X is necessary for Y, then a case that demonstrates that Y occurs without X can significantly enhance our findings. If X is hypothesized to be sufficient for Y, finding X without Y in a single case is similarly telling. However, if the posited X/Y relationship is probabilistic, then assessing the empirical contributions of case studies is trickier. With probabilistic relationships, we expect to find individual cases in which X does not cause Y. Therefore, it is consistent with the hypothesized relationship to find cases in which Y emerges in the absence of X and those in which we observe X without Y. As a result, even if we had doubly decisive process-tracing evidence in specific cases, generalization is difficult, and case-study evidence alone reveals little about the X/Y relationships implied by a probabilistic relationship across cases. Humphreys and Jacobs (2015) demonstrated this problem in a formal Bayesian approach by showing that our belief in the truth of a given hypothesis could increase, decrease, or remain unchanged depending on the number and type of cases chosen for the small-N analysis.
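This difficulty with probabilistic relationships can be made concrete with a small simulation (a hypothetical sketch in Python; the probabilities 0.7 and 0.3 are illustrative assumptions, not values from any study). Even when X genuinely raises the probability of Y, individual cases of X without Y and of Y without X appear routinely, so finding a single case of either type cannot, by itself, refute the hypothesis:

```python
import random

random.seed(0)

# Hypothetical probabilistic relationship: X raises the chance of Y
# from 0.3 to 0.7 (illustrative numbers, not from the article).
def draw_case(x: int) -> int:
    p_y = 0.7 if x == 1 else 0.3
    return 1 if random.random() < p_y else 0

# Simulate 1,000 cases, half with X present and half without.
cases = [(x, draw_case(x)) for x in [0, 1] * 500]

# Even though X genuinely causes Y on average, both "anomalous"
# case types occur many times among the simulated cases:
x_without_y = sum(1 for x, y in cases if x == 1 and y == 0)
y_without_x = sum(1 for x, y in cases if x == 0 and y == 1)
print(x_without_y, y_without_x)  # both counts are well above zero
```

Selecting one of these discordant cases for process tracing would tell us little about the population-level X/Y relationship, which is the point Humphreys and Jacobs formalize.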

Case selection, of course, also is important. For example, if we believe that a particular case is “most likely” for finding a causal relationship, and we find doubly decisive evidence that X does not cause Y in that case, then the case study weakens our belief in the hypothesis to some degree. (However, a small number of case studies cannot eliminate the hypothesis that X causes Y in a probabilistic sense.) The more general point seems obvious but bears emphasis: analytic transparency requires attention to how cases are selected and how inferences from a case study affect our understanding of the general causal relationship (see Humphreys and Jacobs 2015 for one approach).

CASE STUDIES AND INTEGRATION-BASED CAUSAL-ORIENTED MIXED-METHODS RESEARCH

Integration-based C-MMR also uses multiple methods to identify a causal effect, but it leverages large-N and small-N work differently (Dunning 2012; Harding and Seefeldt 2013; Seawright 2016). Instead of using each method to provide separate evidence of the X/Y relationship, it envisages a division of labor among methods in which the large-N work estimates causal effects whereas the smaller-N work probes the validity of the large-N work’s underlying assumptions. Here, combining methods improves our confidence in the underlying X/Y relationship not because the findings of the large-N and small-N analyses converge but rather because the small-N work demonstrates that the large-N work satisfies the requisites for causal inference.

The potential-outcomes model provides language for fleshing out this approach’s underlying intuitions and highlighting key contributions of case studies (Imbens and Rubin 2015). The crux of the potential-outcomes model is that causal inference requires solving a missing-data problem. The causal effect is the difference between the outcome in the treatment condition (Ytreatment) and the outcome in the control condition (Ycontrol). For any case or unit of analysis, it is only possible to observe the case in either the treatment or the control condition. To identify the causal effect, we must estimate what the outcome would have been for treated cases had they not received the treatment and for nontreatment cases had they received it. However, we cannot observe both states of the world. Solving the missing-data problem typically requires identifying and comparing a treatment case and a control (or nontreatment) case. The comparison rests on two assumptions: Strongly Ignorable Treatment Assignment (SITA) and the Stable Unit Treatment Value Assumption (SUTVA).[3]
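The missing-data problem can be illustrated with a short simulation (a hypothetical sketch; the data-generating process and the unit-level effect of 2.0 are assumptions for illustration, not from the article). Each simulated unit has both potential outcomes, but the analyst observes only one of them:

```python
import random

random.seed(1)

# Each unit has two potential outcomes, but only one is ever observed.
# Illustrative data-generating process: the treatment adds exactly 2.0.
units = []
for _ in range(1000):
    y_control = random.gauss(0, 1)
    y_treatment = y_control + 2.0        # true unit-level effect = 2.0
    units.append((y_control, y_treatment))

# The true average treatment effect uses BOTH potential outcomes per unit...
true_ate = sum(yt - yc for yc, yt in units) / len(units)

# ...but real data reveal only one outcome per unit. If assignment is
# unrelated to the potential outcomes (here, simple alternation), the
# difference in observed group means recovers the effect.
treated = [yt for i, (yc, yt) in enumerate(units) if i % 2 == 0]
control = [yc for i, (yc, yt) in enumerate(units) if i % 2 == 1]
estimate = sum(treated) / len(treated) - sum(control) / len(control)
print(round(true_ate, 2), round(estimate, 2))  # true_ate is exactly 2.0
```

The simulation makes the comparison’s logic visible: the group-means estimator substitutes the control group’s observed outcomes for the treated group’s missing counterfactuals, which is valid only under the assumptions discussed next.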

SITA allows us to “ignore” how units were assigned to treatment because the assignment mechanism is unrelated to any factors that would be correlated with both the treatment and the potential outcome (but not the observed outcome, because the essence of a causal effect is that the treatment will affect the observed outcome).[4] There are three basic ways to satisfy SITA: (1) using randomization via experimental manipulation; (2) taking advantage of a natural or artificial intervention (i.e., a natural experiment or a regression discontinuity); and (3) modeling the assignment mechanism (i.e., creating “as if” randomization or conditional independence). Regardless of which method is used, the credibility of causal inference depends on the assumption’s plausibility.
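A toy simulation can illustrate why the assumption’s plausibility matters (a hypothetical sketch; the confounder u and all numeric values are illustrative assumptions). When assignment depends on a factor that also drives the outcome, a naive comparison of group means finds an “effect” where none exists; when assignment ignores that factor, SITA holds and the comparison is approximately unbiased:

```python
import random

random.seed(2)

# Illustrative confounding: an unobserved factor u raises both the chance
# of treatment and the outcome; the treatment itself has NO true effect.
def difference_in_means(randomized: bool) -> float:
    treated, control = [], []
    for _ in range(20000):
        u = random.random()                  # unobserved confounder
        if randomized:
            x = random.random() < 0.5        # assignment ignores u (SITA holds)
        else:
            x = random.random() < u          # assignment depends on u (SITA fails)
        y = u + random.gauss(0, 0.1)         # outcome driven by u, not by x
        (treated if x else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)

confounded_est = difference_in_means(randomized=False)  # spurious positive "effect"
randomized_est = difference_in_means(randomized=True)   # roughly zero
print(round(confounded_est, 2), round(randomized_est, 2))
```

In integration-based C-MMR, case studies probe exactly this question: whether some unmeasured u-like factor plausibly shaped both assignment and outcomes in the large-N data.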

The second major assumption of the potential-outcomes approach—SUTVA—encompasses two assumptions needed to make a causal inference based on a relatively simple comparison between the treatment and control groups. The first aspect of SUTVA is noninterference between units, which requires that the treatment received by the treatment group does not affect units in the control group. The second aspect is that the treatment is consistent within groups, which means that subjects and/or units grouped together in the analysis received equivalent treatments. Under this framework, the role of case studies in integration-based C-MMR is providing evidence related to SITA and SUTVA in a particular project.

As a practical matter, the large-N or experimental work in integration-based C-MMR must already provide a reasonable basis for drawing causal inferences because if it is accepted that the large-N work fails SITA or SUTVA, then there is little reason to conduct any follow-up work. If the assumptions of causal inference seem plausible on their face, then case studies can provide further evidence regarding whether the assumptions are satisfied. This also implies that, in the language of table 1, researchers will be most interested in hoop tests, which, if failed, reject the hypothesis that a particular assumption is met in a given case (or set of cases).

Particularly promising avenues of inquiry for small-N analysis in integration-based C-MMR center on treatment assignment, treatment spillovers, and treatment consistency. Regarding treatment assignment, case studies can probe the presence or absence of confounders associated with both treatment assignment and potential outcomes. For this purpose, researchers will want to select cases intentionally knowing the values of both treatment and outcome and to look for unmeasured variables correlated with both treatment assignment and the observed outcome (Dunning 2012).

Case studies also can help researchers understand whether SUTVA has been met by focusing on two crucial issues: noninterference between units and treatment consistency. Noninterference can be violated even if the treatment is assigned randomly, and case studies can be useful in addressing this concern (Miguel and Kremer 2004; Nickerson 2008). Regarding treatment spillover, researchers must identify cases in which the treatment applied to one (or more) of the cases also would have affected the cases that did not receive the treatment. Treatment consistency relates to variation in the treatment among those assigned to the same experimental group. For example, in an experiment studying the effect of aspirin on headaches, SUTVA is violated if some subjects take different quantities of aspirin even though they were all assigned to take the same amount. For studying treatment consistency, researchers should select multiple cases that are expected to have received similar treatments and then investigate whether the units, in fact, received the same treatment.
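The spillover problem can be sketched in a few lines (a hypothetical example; the direct effect of 1.0 and the spillover of 0.5 are illustrative assumptions, not values from any study). Here, every control unit sits next to treated units and absorbs their spillover, so the naive treatment-control comparison is biased toward zero even though the treatment’s direct effect is large:

```python
import random

random.seed(3)

# Illustrative spillover: treatment raises a treated unit's outcome by 1.0,
# but each treated unit also "leaks" 0.5 onto adjacent control units.
n = 10000
assignment = [i % 2 == 0 for i in range(n)]    # alternating treated/control

def outcome(i: int) -> float:
    y = 1.0 if assignment[i] else 0.0          # direct effect of treatment
    neighbors = [j for j in (i - 1, i + 1) if 0 <= j < n]
    y += 0.5 * sum(assignment[j] for j in neighbors)   # spillover (SUTVA fails)
    return y + random.gauss(0, 0.1)

treated_mean = sum(outcome(i) for i in range(n) if assignment[i]) / (n // 2)
control_mean = sum(outcome(i) for i in range(n) if not assignment[i]) / (n // 2)
print(round(treated_mean - control_mean, 2))   # near 0.0 despite a true effect of 1.0
```

This is the kind of pattern a well-chosen case study can surface: fieldwork in a few adjacent treated and control units could reveal the transmission channel that the large-N comparison silently averages away.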

In summary, case studies can play a range of roles in integration-based C-MMR. Each role implies different case-selection strategies and is quite distinct from how case studies are used in triangulation-based C-MMR. For researchers addressing analytic transparency, it is crucial to locate the logic of the case studies and case-selection strategies within the broader logic of the underlying type of C-MMR being used.

CONCLUSION

Improving analytic transparency requires a common framework that can facilitate best practices for specific types of research. Although C-MMR often is messier in practice than it appears in abstract discussions and analytic transparency raises many project-specific issues, we identified three general questions for considering analytic transparency in C-MMR. In addressing these questions, researchers must relate the role of case studies to the type of C-MMR being used because they promise different contributions to triangulation- and integration-based C-MMR. Even if addressing these questions does not produce precise estimates of the contribution of each component of a C-MMR analysis, doing so clarifies how scholars are trying to build causal arguments brick by brick—which is, after all, the goal of analytic transparency.

Footnotes

1. Our use of the term triangulation is common among political scientists but differs somewhat from how scholars in other fields use it (Greene, Caracelli, and Graham 1989). What we refer to as “triangulation” is closest to what they consider a “complementary” research design.

2. Fairfield (2013; 2015) discussed how specifying these tests can enhance analytic transparency in a qualitative project centered on process tracing.

3. Case studies in integration-based C-MMR can focus on other issues, including measurement, description of a phenomenon, understanding scope conditions, and unpacking causal mechanisms.

4. Stated another way: if cases changed from the treatment group to the comparison group (or vice versa), we must believe that their actual, observed outcome also would change in a manner consistent with the estimated treatment effect.

REFERENCES

APSA Committee on Professional Ethics, Rights and Freedoms. 2012. A Guide to Professional Ethics in Political Science, 2nd edition. Washington, DC: American Political Science Association. Available at: http://www.apsanet.org/portals/54/Files/Publications/APSAEthicsGuide2012.pdf
Beach, Derek, and Rasmus Pedersen. 2016. Causal Case-Study Methods: Foundations and Guidelines for Comparing, Matching and Tracing. Ann Arbor: University of Michigan Press.
Bennett, Andrew. 2008. “Process Tracing: A Bayesian Perspective.” In The Oxford Handbook of Political Methodology, ed. Janet Box-Steffensmeier, Henry Brady, and David Collier, 702–21. New York: Oxford University Press.
Bennett, Andrew. 2010. “Process Tracing and Causal Inference.” In Rethinking Social Inquiry: Diverse Tools, Shared Standards, ed. Henry Brady and David Collier, 207–19. Lanham, MD: Rowman & Littlefield.
Bennett, Andrew, and Jeffrey Checkel (eds.). 2015. Process Tracing in the Social Sciences: From Metaphor to Analytic Tool. New York: Cambridge University Press.
Brady, Henry, and David Collier. 2010. Rethinking Social Inquiry: Diverse Tools, Shared Standards. Lanham, MD: Rowman & Littlefield.
Collier, David. 2011. “Understanding Process Tracing.” PS: Political Science and Politics 44 (4): 823–30.
Dunning, Thad. 2012. Natural Experiments in the Social Sciences: A Design-Based Approach. Cambridge: Cambridge University Press.
Fairfield, Tasha. 2013. “Going Where the Money Is: Strategies for Taxing Elites in Unequal Democracies.” World Development 47 (July): 42–57.
Fairfield, Tasha. 2015. “Reflections on Analytic Transparency in Process-Tracing Research.” Newsletter of the American Political Science Association Organized Section for Qualitative and Multi-Method Research 13 (1): 47–51.
Greene, Jennifer, Valerie Caracelli, and Wendy Graham. 1989. “Toward a Conceptual Framework for Mixed-Method Evaluation Designs.” Educational Evaluation and Policy Analysis 11 (3): 235–74.
Harding, David J., and Kristin S. Seefeldt. 2013. “Mixed Methods and Causal Analysis.” In Handbook of Causal Analysis for Social Research, ed. Stephen Morgan, 91–110. Dordrecht: Springer.
Humphreys, Macartan, and Alan Jacobs. 2015. “Mixing Methods: A Bayesian Approach.” American Political Science Review 109 (4): 653–73.
Imbens, Guido, and Donald B. Rubin. 2015. Causal Inference for Statistics, Social, and Biomedical Sciences: An Introduction. Cambridge: Cambridge University Press.
Lieberman, Evan. 2005. “Nested Analysis as a Mixed-Method Strategy for Comparative Research.” American Political Science Review 99 (3): 435–52.
Mahoney, James. 2010. “After KKV: The New Methodology of Qualitative Research.” World Politics 62 (1): 120–47.
Mahoney, James. 2012. “The Logic of Process Tracing in the Social Sciences.” Sociological Methods and Research 41 (4): 570–97.
Miguel, Edward, and Michael Kremer. 2004. “Worms: Identifying Impacts on Education and Health in the Presence of Treatment Externalities.” Econometrica 72 (1): 159–217.
Nickerson, David W. 2008. “Is Voting Contagious? Evidence from Two Field Experiments.” American Political Science Review 102 (2): 49–57.
Seawright, Jason. 2016. Multi-Method Social Science. Cambridge: Cambridge University Press.
Van Evera, Stephen. 1997. Guide to Methods for Students of Political Science. Ithaca, NY: Cornell University Press.
Weller, Nicholas, and Jeb Barnes. 2014. Finding Pathways. New York: Cambridge University Press.
Weller, Nicholas, and Jeb Barnes. 2016. “Pathway Analysis and the Search for Causal Mechanisms.” Sociological Methods and Research 45 (3): 424–57.