
Is ego depletion too incredible? Evidence for the overestimation of the depletion effect

Published online by Cambridge University Press:  04 December 2013

Evan C. Carter
Affiliation:
Department of Psychology, University of Miami, Coral Gables, FL 33124-0751. evan.c.carter@gmail.com
Michael E. McCullough
Affiliation:
Department of Psychology, University of Miami, Coral Gables, FL 33124-0751. mikem@miami.edu

Abstract

The depletion effect, a decreased capacity for self-control following previous acts of self-control, is thought to result from a lack of necessary psychological/physical resources (i.e., "ego depletion"). Kurzban et al. present an alternative explanation for depletion, but, based on statistical techniques that evaluate and adjust for publication bias, we question whether depletion is a real phenomenon in need of explanation.

Type
Open Peer Commentary
Copyright
Copyright © Cambridge University Press 2013 

Much of Kurzban et al.'s discussion centers on the so-called depletion effect (i.e., the reduction in performance on a self-control task that follows an earlier self-control task; Baumeister et al. 1998). For example, in sections 3.1 and 3.2 of the target article the authors argue that currently popular theoretical accounts of the depletion effect (i.e., that it is due to the depletion of some necessary resource) are inadequate and that an opportunity cost model is more appropriate. Assuming the depletion effect is a real phenomenon, we believe that the authors' account is indeed preferable to other explanations that have been proffered. However, based on the meta-analytic methods that Hagger et al. (2010a) used to evaluate the depletion effect, there is license for doubting that depletion really occurs. If one wishes to believe it is real (which may also be licensed), then it could be meaningfully weaker than Hagger et al. concluded.

Hagger et al. estimated that the overall size of the depletion effect was d = .62 (95% CI [confidence interval] = .57, .67). However, a meta-analytic estimate of an overall effect size is biased to the extent that the sample of experiments used to derive that estimate misrepresents the population of experiments that have been conducted on the effect. Samples of experiments can easily become unrepresentative if the probability that an experiment is included in a meta-analytic sample is influenced by the results of the experiment, a phenomenon known as publication bias (e.g., if findings confirming a particular idea are more easily published and, consequently, more easily identified and included in the meta-analysis). Importantly, Hagger et al.'s meta-analytic estimate resulted from a sample of experiments that was drawn exclusively from the published literature. Their neglect of the relevant unpublished results leaves open the possibility that the estimate is therefore inflated. Here, we summarize some results from our work that was prompted by this possibility (Carter & McCullough, submitted).

Based on Ioannidis and Trikalinos (2007), Schimmack (2012) proposed the "incredibility index" (IC-index) as an estimate of the probability that a set of studies contains fewer statistically non-significant findings than would be credible under unbiased sampling (i.e., the number of significant findings is "incredible"). The IC-index, which takes values from 0 to 1 (where higher values suggest greater incredibility), is calculated through a binomial test on the observed number of significant results (151 of the 198 experiments analyzed by Hagger et al. were significant), given the probability that a single experiment will be significant (estimated as the average statistical power of the set of experiments). Based on post-hoc power calculations for each experiment in the Hagger et al. dataset, in which we assumed the true effect size was d = .62, average power was estimated to be .55, which resulted in an IC-index greater than .999 (for the binomial test, p = 3.72 × 10⁻¹⁰). Therefore, it is extremely likely that more non-significant findings exist than are included in Hagger et al.'s meta-analysis, because the probability of drawing a set of 198 experiments in which 47 or fewer are non-significant is roughly 3.7 in one billion.
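The arithmetic behind this calculation can be reproduced in a few lines of Python (a minimal sketch using SciPy; the counts and the assumed average power of .55 are taken from the figures reported above):

    from scipy.stats import binom

    n_experiments = 198    # experiments in Hagger et al.'s (2010a) meta-analysis
    n_significant = 151    # experiments reporting a statistically significant effect
    assumed_power = 0.55   # mean post-hoc power, assuming the true effect is d = .62

    # Probability of observing 151 or more significant results (equivalently,
    # 47 or fewer non-significant results) if each experiment is significant
    # with probability equal to the assumed average power.
    p_binomial = binom.sf(n_significant - 1, n_experiments, assumed_power)
    ic_index = 1 - p_binomial  # higher values indicate a less credible set of results

    print(f"binomial p = {p_binomial:.2e}; IC-index = {ic_index:.6f}")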

Hagger et al. addressed the possibility of publication bias in their dataset by calculating the fail-safe N (Rosenberg 2005), but this method for assessing the robustness of a meta-analytic conclusion to publication bias is considered far from adequate (Sutton 2009). Alternatively, regression-based methods can both assess and correct for publication bias in a sample of experiments (Stanley 2008). In a weighted least squares regression model in which effect sizes are regressed on the standard errors (SEs) of those effect sizes, effect size and SE should be unrelated. However, if publication bias exists, effect sizes will be positively associated with their SEs, because smaller (noisier) studies must yield larger effects to reach significance and be published (Egger et al. 1997). Additionally, one can think of the intercept in this model as an estimate of the effect size of a hypothetical, infinitely large study (i.e., one with zero sampling error variance; Moreno et al. 2011; Stanley 2008). Simulation studies suggest that such regression-based extrapolation yields accurate estimates of true effect sizes in the face of publication bias (Moreno et al. 2009; Stanley 2008).
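To make the logic concrete, the following sketch (Python with statsmodels, run on simulated data rather than on Hagger et al.'s dataset, whose per-experiment values are not reproduced here) generates a literature in which the true effect is zero but only positive, significant results are "published," and then fits the weighted least squares model described above:

    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(0)

    # Simulated, publication-biased literature: the true effect is d = 0, but
    # only positive, statistically significant results enter the sample.
    se = rng.uniform(0.15, 0.45, size=5000)   # per-experiment standard errors
    d_obs = rng.normal(0.0, se)               # observed effect sizes under d = 0
    keep = (d_obs > 0) & (d_obs / se > 1.96)  # crude publication filter
    d_pub, se_pub = d_obs[keep], se[keep]

    # Weighted least squares of effect size on SE, with weights 1/SE^2.
    # A significant slope indicates publication bias; the intercept estimates
    # the effect expected from a hypothetical infinitely large (SE = 0) study.
    # The SE^2 variant discussed below is obtained by passing se_pub**2 instead.
    X = sm.add_constant(se_pub)
    fit = sm.WLS(d_pub, X, weights=1.0 / se_pub**2).fit()
    print(f"slope t = {fit.tvalues[1]:.2f}; corrected d (intercept) = {fit.params[0]:.2f}")

In this simulation the published effects average well above zero, whereas the intercept should fall near the true value of zero; this is the sense in which the regression "corrects" for publication bias.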

We applied two regression models to Hagger et al.'s dataset: one in which the predictor was SE, and an alternative model in which the predictor was SE-squared (SE²; Moreno et al. 2009). In both models, the regression coefficient for the predictor was significant (t = 11.87 for SE; t = 11.99 for SE²; ps < .001), which is consistent with the presence of publication bias. The model-based estimates of the true underlying effect differed, however. Using SE² as the predictor, the corrected effect size was d = .25 (95% CI [.18, .32]). Using SE as the predictor, the corrected effect size was a non-significant d = −.10 (95% CI [−.23, .02]). So, based on these methods, ego depletion could be a small effect, less than half the size of that estimated by Hagger et al., but it could also be a non-existent effect for which belief has been kept alive through the neglect of null findings. If the true effect size is close to d = .25, then the set of experiments Hagger et al. analyzed was extremely underpowered (mean power = .15, 95th percentile = .24). And even these less skeptical results counsel caution: Assuming the mean effect size is d = .25, researchers hoping to study depletion by comparing two means with 80% power should be prepared to collect a sample with N > 460, not N = 84 (as implied by Hagger et al.'s estimate of d = .62).
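The sample-size implication in the last sentence can be checked with a standard power calculation (a sketch using statsmodels; a two-tailed alpha of .05 and equal group sizes are assumed, under which the required total sample is roughly 84 at d = .62 and exceeds 460 at d = .25):

    from statsmodels.stats.power import TTestIndPower

    power_analysis = TTestIndPower()
    for d in (0.62, 0.25):
        # Per-group n for a two-sample comparison of means at 80% power,
        # two-tailed alpha = .05.
        n_per_group = power_analysis.solve_power(effect_size=d, power=0.80, alpha=0.05)
        print(f"d = {d}: about {2 * round(n_per_group)} participants in total")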

The great pity here is that editorial vigilance could have obviated these concerns: Editors and reviewers of meta-analyses should insist on rigorous efforts to track down the hard-to-find (i.e., unpublished) results. As things stand, we believe that the highest priority for research on the depletion effect should not be arriving at a better theoretical account, but rather, determining with greater certainty whether an effect to be explained exists at all.

References

Baumeister, R. F., Bratslavsky, E., Muraven, M. & Tice, D. M. (1998) Ego depletion: Is the active self a limited resource? Journal of Personality and Social Psychology 74(5):1252–65. doi:10.1037/0022-3514.74.5.1252.
Carter, E. C. & McCullough, M. E. (submitted) Publication bias and the limited strength model of self-control: Has the evidence for ego depletion been overestimated?
Egger, M., Davey Smith, G., Schneider, M. & Minder, C. (1997) Bias in meta-analysis detected by a simple, graphical test. British Medical Journal 315:629–34.
Hagger, M. S., Wood, C., Stiff, C. & Chatzisarantis, N. L. D. (2010a) Ego depletion and the strength model of self-control: A meta-analysis. Psychological Bulletin 136(4):495–525. doi:10.1037/a0019486.
Ioannidis, J. P. A. & Trikalinos, T. A. (2007) An exploratory test for an excess of significant findings. Clinical Trials 4:245–53.
Moreno, S. G., Sutton, A. J., Ades, A. E., Stanley, T. D., Abrams, K. R., Peters, J. L. & Cooper, N. J. (2009) Assessment of regression-based methods to adjust for publication bias through a comprehensive simulation study. BMC Medical Research Methodology 9:117.
Moreno, S. G., Sutton, A. J., Thompson, J. R., Ades, A. E., Abrams, K. R. & Cooper, N. J. (2011) A generalized weighting regression-derived meta-analysis estimator robust to small-study effects and heterogeneity. Statistics in Medicine 31:1407–17.
Rosenberg, M. S. (2005) The file-drawer problem revisited: A general weighted method for calculating fail-safe numbers in meta-analysis. Evolution 59:464–68.
Schimmack, U. (2012) The ironic effect of significant results on the credibility of multiple-study articles. Psychological Methods 17(4):551–66. doi:10.1037/a0029487.
Stanley, T. D. (2008) Meta-regression methods for detecting and estimating empirical effects in the presence of publication selection. Oxford Bulletin of Economics and Statistics 70:103–27.
Sutton, A. J. (2009) Publication bias. In: The handbook of research synthesis and meta-analysis, ed. Cooper, H., Hedges, L. & Valentine, J., pp. 435–52. Russell Sage Foundation.