
A simple solution to a complex problem: Manipulate the mediator!

Published online by Cambridge University Press:  14 December 2021

Scott Highhouse* and Margaret E. Brooks
Bowling Green State University
*Corresponding author. Email: shighho@bgsu.edu

Type: Commentaries
Copyright: © The Author(s), 2021. Published by Cambridge University Press on behalf of the Society for Industrial and Organizational Psychology

Murphy (2021) argues that the interest value of industrial-organizational (I-O) science is being sacrificed in favor of fancy methods and analyses. Dissatisfaction with methodological and analytical complexity is also a theme in survey responses of Society for Industrial and Organizational Psychology (SIOP) scholars (Highhouse & Schmitt, 2013; Highhouse et al., 2020). The reasons for this apparent overemphasis on methodology and statistics could fill a separate commentary, but any explanation should probably include a discussion of scarcity effects on value (i.e., psychometric complexity is overvalued because it is novel), the influence of complex management theory on I-O psychology (i.e., multiple contingencies and levels of contingencies must be modeled), and the behavioral patterns of methodological gatekeepers (see Whitaker & Guest, 2020). The focus of our commentary, however, is narrower: testing mediation.

Mediation and causation

Murphy (2021) notes that tests of mediation are notoriously difficult to interpret. He provides many reasons for this and discusses ways to avoid misinterpreting what Spencer et al. (2005) call measurement-of-mediation designs. Implicit in Murphy's discussion of mediation is that, in the vast majority of I-O studies of mediation, the relation between the mediating and dependent variables is examined via passive observation (Stone-Romero & Rosopa, 2008). Mediation, however, implies causation. In simple mediation models, the independent variable causes the mediator (X→M) and the mediator causes the dependent variable (M→Y). Shadish et al. (2002) noted that there are three requirements for inferring causality. We list these as they apply specifically to mediation: (a) M precedes Y in time, (b) M and Y vary together, and (c) there are no plausible alternative explanations for the relationship between M and Y. Only randomized experiments satisfy all three requirements.
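To make the simple mediation model concrete, here is a minimal simulation sketch. The coefficients and variable names are our own illustrative assumptions, not values from any study cited here: a randomized X causes M (path a), and M causes Y (path b), so requirement (b), that M and Y vary together, shows up directly in the data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Illustrative path coefficients for the X -> M -> Y chain.
a, b = 0.5, 0.4

X = rng.binomial(1, 0.5, n)        # randomized independent variable
M = a * X + rng.normal(0, 1, n)    # mediator, caused by X
Y = b * M + rng.normal(0, 1, n)    # outcome, caused only by M

# Requirement (b): M and Y covary because M causes Y.
print(np.corrcoef(M, Y)[0, 1])     # positive, roughly 0.38 here
```

Of course, observing this covariation alone cannot distinguish the causal chain from confounded alternatives; that is the point of the requirements listed above.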

Measurement-of-mediation designs are therefore vulnerable to misinterpretation. To believe the results of studies using nonexperimental designs, the reader must assume that the researcher has correctly specified the causal ordering of variables and has controlled all important confounds through design features or statistical means (Stone-Romero & Rosopa, 2008). The assumption that no omitted variables influence the X→M→Y causal chain is referred to as the no-omitted-variable assumption or the no-confounding assumption (Pirlott & MacKinnon, 2016). This assumption is rarely justified and often violated. One way to provide evidence for it is to follow up a study that used measurement of mediation with one that demonstrates the X→M→Y relationship experimentally (Podsakoff & Podsakoff, 2019).
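The stakes of the no-confounding assumption can be illustrated with a small simulation (the coefficients and the confounder U are our own illustrative assumptions): even when X is randomized, an omitted variable that drives both M and Y produces a substantial apparent M→Y effect when the true effect is zero.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# X is randomized, but an unmeasured variable U causes both M and Y.
# The true causal effect of M on Y is zero in this simulation.
X = rng.binomial(1, 0.5, n)
U = rng.normal(0, 1, n)                     # omitted confounder
M = 0.5 * X + 0.8 * U + rng.normal(0, 1, n)
Y = 0.8 * U + rng.normal(0, 1, n)           # Y does NOT depend on M

# The usual measurement-of-mediation step: regress Y on X and M.
design = np.column_stack([np.ones(n), X, M])
beta, *_ = np.linalg.lstsq(design, Y, rcond=None)
print(beta[2])   # nonzero "effect" of M, produced entirely by omitting U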

Manipulate the mediator

In the field of social psychology, attention to the overreliance on measurement-of-mediation designs led Spencer et al. (2005) to call for researchers to test mediation by establishing a causal chain through experimentation. This involves conducting two sequential studies: in study 1, X is manipulated and M and Y are measured; in study 2, M is manipulated and Y is measured. Pirlott and MacKinnon (2016) distinguished between directly manipulating the presence or absence of the mediator and encouraging or discouraging the mediator in manipulation-of-mediator designs. We discuss the direct approach first.

An example of directly manipulating the mediator is a study by Meyer and Gellatly (1988). They examined the mediating effect of perceived performance norms on the relationship between assigned goals and performance. In the first experiment, the authors manipulated the assigned goal (X; easy, difficult, impossible) and measured its effect on perceived performance norms (M) and performance (Y). In the second study, the researchers experimentally manipulated performance norms (M; low, high, no norm) to examine their effect on performance (Y). More recently, Nolan and Highhouse (2014) examined the mediating effect of perceived autonomy on the relationship between standardized hiring practices and user resistance. In their first experiment, the authors manipulated the type of hiring procedure (X; highly standardized, unstandardized) to examine its effects on perceived autonomy (M) and user resistance (Y). In the second experiment, the researchers manipulated the amount of autonomy (M) in the standardized hiring procedure to examine its effect on user resistance (Y).1

Imai et al. (2013) point out that directly manipulating the mediator is not always possible and that, even when it is, the researcher must make a strong argument that the measured and manipulated versions of the mediator are consistent (i.e., they are the same construct and would have the same effect on the outcome). In these situations, they recommend employing "designs with imperfect manipulation" (p. 18), referred to as encouragement designs. These involve manipulations directed at increasing (encouraging) or decreasing (discouraging) the value of the mediator. Pirlott and MacKinnon (2016) point to a study by Li et al. (2012), who manipulated their mediator, "belief in a soul," by randomly assigning people to write an essay suggesting either that souls do exist (encouragement condition) or that they do not (discouragement condition). This approach strengthens or weakens the mediator rather than fully changing it.

Although manipulating the mediator is surely an improvement in terms of our ability to identify a causal mechanism, some experimentalists suggest that even establishing a causal chain falls short of demonstrating how an independent variable affects an outcome through the mediator (e.g., Imai et al., 2013). These authors suggest that parallel and crossover designs are better equipped to identify causal mechanisms.

In an experimental mediation study using a parallel design, a sample is randomly split in two and two randomized studies are conducted in parallel: one study manipulates X and measures M and Y; the other simultaneously manipulates X and M and measures Y. This is essentially a combination of a measurement-of-mediation design and a manipulation-of-mediator design. In a version of the crossover design suited to testing mediation, participants are exposed to both conditions of the independent variable in randomized order. At stage 1, M and Y are measured in each condition. At stage 2, participants are switched to the opposite condition of X while M is held at its stage 1 value, and Y is measured again. An important assumption of crossover designs is that there are no carryover effects. Both parallel and crossover designs can be used with direct or encouragement manipulations.

Anticipated objections to manipulating the mediator

Objection 1: My model is too complex

This may be true. The approaches addressed in this commentary are largely limited to investigations of a single mediator. More complex models are not amenable to experimentation in the way we have described. We would suggest, though, that before testing complex models with multiple mediators, researchers should start with rigorous tests of simpler mediation relationships, establishing a solid base on which more complex models can be built.

Objection 2: Most organizational constructs cannot be manipulated

We believe that many of the organizational constructs that researchers consider not amenable to manipulation can, with a little creativity, be manipulated. If religion researchers can manipulate belief in a soul, then surely organizational researchers can manipulate things like burnout, turnover intentions, and job ambiguity (see Breaugh & Colihan, 1994; Podsakoff & Podsakoff, 2019). Encouragement designs can allow us to use experimental designs when direct manipulation is difficult. However, we recognize that it is not always possible, either practically or ethically, to manipulate mediators.

Objection 3: I can’t statistically estimate the indirect effect of the mediator

Podsakoff and Podsakoff (2019) noted that, using these designs, it is not possible to statistically estimate the indirect effect of the mediator or to calculate how much of the effect of X on Y is attributable to M. This is a limitation of relying strictly on a manipulation-of-mediator approach. Despite this, we argue that creative experimental design not only allows stronger establishment of causality but also provides more nuanced information about the relationships among the variables.
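For contrast, a brief sketch of what the measurement-of-mediation approach does provide. With X, M, and Y all measured on the same participants, the familiar product-of-coefficients estimate of the indirect effect can be computed; this quantity is unavailable when M is manipulated in a separate study. The data here are simulated under our own illustrative coefficients, with a true indirect effect of a × b = 0.5 × 0.4 = 0.20.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50_000

# Illustrative mediation data: X -> M (path a), M -> Y (path b),
# plus a direct X -> Y effect (c').
a, b, c_prime = 0.5, 0.4, 0.3
X = rng.binomial(1, 0.5, n)
M = a * X + rng.normal(0, 1, n)
Y = c_prime * X + b * M + rng.normal(0, 1, n)

# Product-of-coefficients estimate of the indirect effect: requires
# X, M, and Y on the same participants.
a_hat = np.polyfit(X, M, 1)[0]                     # slope of M on X
XM = np.column_stack([np.ones(n), X, M])
b_hat = np.linalg.lstsq(XM, Y, rcond=None)[0][2]   # slope of Y on M, given X
print(a_hat * b_hat)                               # close to the true 0.20
```

The simulation also makes the trade-off plain: this estimate is only as credible as the no-confounding assumption discussed earlier, which the simulation satisfies by construction and real passive-observation data may not.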

Final thoughts

Although there are a number of limitations to experimental approaches that manipulate the mediator, many of these can be addressed by including within the same research program a complementary study using a measurement-of-mediation design. Each approach to establishing a causal chain can offset the limitations of the other. Like Podsakoff and Podsakoff (2019), we believe that the benefits of experimental approaches to mediation outweigh the limitations. Researchers should consider this underused approach a simple tool that can provide powerful evidence. Like Murphy (2021), we encourage I-O researchers to go back to the basics and consider whether statistically simpler solutions may be appropriate.

Footnotes

1 Podsakoff and Podsakoff (2019) provide other published examples of manipulating mediators.

References

Breaugh, J. A., & Colihan, J. P. (1994). Measuring facets of job ambiguity: Construct validity evidence. Journal of Applied Psychology, 79(2), 191–202.
Highhouse, S., & Schmitt, N. (2013). A snapshot in time: Industrial and organizational psychology today. In N. Schmitt & S. Highhouse (Eds.), Handbook of psychology (Vol. 12: Industrial and organizational psychology, pp. 3–13). Wiley.
Highhouse, S., Zickar, M., & Melick, S. (2020). Prestige and relevance of the scholarly journals: Impressions of SIOP members. Industrial and Organizational Psychology: Perspectives on Science and Practice, 13(3), 273–290.
Imai, K., Tingley, D., & Yamamoto, T. (2013). Experimental designs for identifying causal mechanisms. Journal of the Royal Statistical Society: Series A (Statistics in Society), 176(1), 5–51.
Li, Y. J., Johnson, K. A., Cohen, A. B., Williams, M. J., Knowles, E. D., & Chen, Z. (2012). Fundamental(ist) attribution error: Protestants are dispositionally focused. Journal of Personality and Social Psychology, 102(2), 281–290. https://doi.org/10.1037/a0026294
Meyer, J. P., & Gellatly, I. R. (1988). Perceived performance norm as a mediator in the effect of assigned goal on personal goal and task performance. Journal of Applied Psychology, 73(3), 410–430.
Murphy, K. R. (2021). In praise of Table 1: The importance of making better use of descriptive statistics. Industrial and Organizational Psychology: Perspectives on Science and Practice, 14(4), 461–477.
Nolan, K. P., & Highhouse, S. (2014). Need for autonomy and resistance to standardized employee selection practices. Human Performance, 27(4), 328–346.
Pirlott, A. G., & MacKinnon, D. P. (2016). Design approaches to experimental mediation. Journal of Experimental Social Psychology, 66, 29–38. https://doi.org/10.1016/j.jesp.2015.09.012
Podsakoff, P. M., & Podsakoff, N. P. (2019). Experimental designs in management and leadership research: Strengths, limitations, and recommendations for improving publishability. Leadership Quarterly, 30(1), 11–33. https://doi.org/10.1016/j.leaqua.2018.11.002
Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and quasi-experimental designs for generalized causal inference. Houghton Mifflin.
Spencer, S. J., Zanna, M. P., & Fong, G. T. (2005). Establishing a causal chain: Why experiments are often more effective than mediational analyses in examining psychological processes. Journal of Personality and Social Psychology, 89(6), 845–851. https://doi.org/10.1037/0022-3514.89.6.845
Stone-Romero, E. F., & Rosopa, P. J. (2008). The relative validity of inferences about mediation as a function of research design characteristics. Organizational Research Methods, 11(2), 326–352. https://doi.org/10.1177/1094428107300342
Whitaker, K., & Guest, O. (2020). #bropenscience is broken science: Kirstie Whitaker and Olivia Guest ask how open “open science” really is. Psychologist, 33, 34–37.