Much of Kurzban et al.'s discussion centers on the so-called depletion effect (i.e., the reduction in task performance across consecutive self-control tasks; Baumeister et al. 1998). For example, in sections 3.1 and 3.2 of the target article the authors argue that currently popular theoretical accounts of the depletion effect (i.e., that it is due to the depletion of some necessary resource) are inadequate and that an opportunity cost model is more appropriate. Assuming the depletion effect is a real phenomenon, we believe that the authors' account is indeed preferable to other explanations that have been proffered. However, based on the meta-analytic methods that Hagger et al. (2010a) used to evaluate the depletion effect, there is license for doubting that depletion really occurs. If one wishes to believe it is real (which may also be licensed), then it could be meaningfully weaker than Hagger et al. concluded.
Hagger et al. estimated the overall size of the depletion effect to be d = .62 (95% confidence interval [CI] = [.57, .67]). However, a meta-analytic estimate of an overall effect size is biased to the extent that the sample of experiments used to derive it misrepresents the population of experiments that have been conducted on the effect. Samples of experiments can easily become unrepresentative if the probability that an experiment is included in a meta-analytic sample depends on the experiment's results, a phenomenon known as publication bias (e.g., findings confirming a particular idea are more easily published and, consequently, more easily identified and included in a meta-analysis). Importantly, Hagger et al.'s meta-analytic estimate was derived from a sample of experiments drawn exclusively from the published literature. This neglect of relevant unpublished results leaves open the possibility that the estimate is inflated. Here, we summarize some results from our work that was prompted by this possibility (Carter & McCullough, submitted).
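To make the mechanism concrete, the following minimal simulation (ours, not part of the Hagger et al. analysis; all parameter values are illustrative) shows how averaging only the significant, positive results from a set of small two-group experiments inflates the apparent effect size well above its true value:

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    true_d, n_per_group, n_studies = 0.25, 20, 5000  # illustrative values

    # Simulate two-group experiments; record each study's observed d and p-value.
    d_obs, p_vals = [], []
    for _ in range(n_studies):
        treat = rng.normal(true_d, 1.0, n_per_group)
        ctrl = rng.normal(0.0, 1.0, n_per_group)
        _, p = stats.ttest_ind(treat, ctrl)
        pooled_sd = np.sqrt((treat.var(ddof=1) + ctrl.var(ddof=1)) / 2)
        d_obs.append((treat.mean() - ctrl.mean()) / pooled_sd)
        p_vals.append(p)

    d_obs, p_vals = np.array(d_obs), np.array(p_vals)
    # Directional selection: only significant, positive results get "published."
    published = (p_vals < .05) & (d_obs > 0)
    print("mean d, all studies:      ", d_obs.mean())            # close to 0.25
    print("mean d, 'published' only: ", d_obs[published].mean())  # markedly inflated

Because only the studies that happened to overestimate the effect reach p < .05 at this sample size, the "published" average lands far above the true d.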
Based on Ioannidis and Trikalinos (2007), Schimmack (2012) proposed the "incredibility index" (IC-index) as an estimate of the probability that a set of studies contains fewer statistically non-significant findings than would be credible under unbiased sampling (i.e., the number of significant findings is "incredible"). The IC-index, which takes values from 0 to 1 (higher values suggest greater incredibility), is calculated through a binomial test on the observed number of significant results (151 of the 198 experiments analyzed by Hagger et al. were significant), given the probability that a single experiment will be significant (estimated as the average statistical power of the set of experiments). Based on post hoc power calculations for each experiment in the Hagger et al. dataset, in which we assumed the true effect size was d = .62, average power was estimated to be .55, which yielded an IC-index greater than .999 (binomial test, p = 3.72 × 10⁻¹⁰). Therefore, it is extremely likely that more non-significant findings exist than are included in Hagger et al.'s meta-analysis: the probability of drawing a set of 198 experiments in which 47 or fewer were non-significant is roughly 3.7 in 10 billion.
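A sketch of this calculation using the figures just reported (151 significant results out of 198 experiments; average power .55), with the IC-index taken, per the description above, as the complement of the binomial p-value:

    from scipy import stats

    n_experiments, n_significant, avg_power = 198, 151, 0.55

    # Probability of observing 151 or more significant results if each experiment
    # independently had a .55 chance of reaching significance (unbiased sampling).
    p = stats.binom.sf(n_significant - 1, n_experiments, avg_power)
    ic_index = 1 - p  # higher values indicate a less credible set of results
    print(f"binomial p = {p:.2e}; IC-index = {ic_index:.6f}")  # p is on the order of 1e-10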
Hagger et al. addressed the possibility of publication bias in their dataset by calculating the fail-safe N (Rosenberg 2005), but this method for assessing the robustness of a meta-analytic conclusion to publication bias is considered far from adequate (Sutton 2009). Alternatively, regression-based methods can both assess and correct for publication bias in a sample of experiments (Stanley 2008). In a weighted least squares regression model in which effect sizes are regressed on the standard errors (SEs) of those effect sizes, effect size and SE should be unrelated. If publication bias exists, however, SEs will be positively associated with effect sizes, because smaller (higher-SE) studies must observe larger effects to reach significance (Egger et al. 1997). Additionally, the intercept in this model can be thought of as an estimate of the effect size of a hypothetical, infinitely large study (i.e., one with zero sampling error variance; Moreno et al. 2011; Stanley 2008). Simulation studies suggest that such regression-based extrapolation yields accurate estimates of true effect sizes in the face of publication bias (Moreno et al. 2009; Stanley 2008).
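A minimal sketch of this regression approach, using hypothetical per-study effect sizes and SEs (the analysis reported below uses the 198 effects from Hagger et al.'s dataset):

    import numpy as np
    import statsmodels.api as sm

    # Hypothetical per-study standardized effects (d) and their standard errors.
    d = np.array([0.90, 0.75, 0.60, 0.48, 0.35, 0.30, 0.22])
    se = np.array([0.40, 0.33, 0.27, 0.22, 0.16, 0.12, 0.08])

    X = sm.add_constant(se)  # model: d = b0 + b1 * SE
    fit = sm.WLS(d, X, weights=1.0 / se**2).fit()

    print(fit.params)   # params[0]: estimated effect for a study with SE = 0
    print(fit.pvalues)  # a significant SE coefficient is consistent with publication bias

The intercept, the fitted value at SE = 0, plays the role of the bias-corrected effect size estimate described above.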
We applied two regression models to Hagger et al.'s dataset: one in which the predictor was SE, and an alternative model in which the predictor was SE squared (SE²; Moreno et al. 2009). In both models, the regression coefficient for the predictor was significant (t = 11.87 for SE; t = 11.99 for SE²; ps < .001), which is consistent with the presence of publication bias. The model-based estimates of the true underlying effect differed, however. Using SE² as the predictor, the corrected effect size was d = .25 (95% CI = [.18, .32]). Using SE as the predictor, the corrected effect size was a non-significant d = −.10 (95% CI = [−.23, .02]). So, based on these methods, ego depletion could be a small effect, less than half the size estimated by Hagger et al.; but it could also be a non-existent effect, belief in which has been kept alive through the neglect of null findings. If the true effect size is close to d = .25, then the set of experiments Hagger et al. analyzed was extremely underpowered (mean power = .15; 95th percentile = .24). Even these less skeptical results counsel caution: assuming the true effect size is d = .25, researchers hoping to detect depletion by comparing two means with 80% power should be prepared to collect a sample of N > 460, not N = 84 (as implied by Hagger et al.'s estimate of d = .62).
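For readers planning studies, a sketch of the underlying power calculation, assuming a two-sided, two-sample t-test with equal group sizes and alpha = .05:

    import statsmodels.stats.power as smp

    analysis = smp.TTestIndPower()
    for d in (0.62, 0.25):
        n_per_group = analysis.solve_power(effect_size=d, power=0.80, alpha=0.05)
        print(f"d = {d}: about {2 * n_per_group:.0f} participants in total")

Under these assumptions the totals come out to roughly 84 participants for d = .62 and just over 500 for d = .25, in line with the figures above.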
The great pity here is that editorial vigilance could have obviated these concerns: Editors and reviewers of meta-analyses should insist on rigorous efforts to track down the hard-to-find (i.e., unpublished) results. As things stand, we believe that the highest priority for research on the depletion effect should not be arriving at a better theoretical account, but rather, determining with greater certainty whether an effect to be explained exists at all.