
Replications can cause distorted belief in scientific progress

Published online by Cambridge University Press:  27 July 2018

Michał Białek*
Affiliation:
Department of Psychology, University of Waterloo, Waterloo, ON N2L 3G1, Canada. mbialek@uwaterloo.ca; http://mbialek.com.pl
Centre for Economic Psychology and Decision Sciences, Kozminski University, 03-301 Warsaw, Poland.

Abstract

If we want psychological science to have a meaningful real-world impact, it has to be trusted by the public. Scientific progress is noisy; accordingly, replications sometimes fail even for true findings. We need to communicate the acceptability of uncertainty to the public and our peers, to prevent psychology from being perceived as having nothing to say about reality.

Type: Open Peer Commentary
Copyright © Cambridge University Press 2018

Zwaan et al. extensively discuss six concerns related to making replication mainstream. I raise a different one: the distorted perception of science by the public and, perhaps, also by peer scientists.

Among the public there is a “myth of science”: an implicit assumption that scientific findings report true effects and that, once a study is conducted, all scientists agree on its results (Pitt 1990). For example, when a mathematician presents a proof of a theorem, everybody acknowledges its validity. Similarly, in logic or philosophy, finding a counterexample falsifies the whole theory. People can hold similar expectations toward empirical sciences such as psychology. Anticipating these expectations, mass media present people with reports of scientific advancements, rarely mentioning any associated uncertainty (Dudo et al. 2011). In this context, it is not surprising that presenting people with information about the level of scientific consensus on a particular finding, even an unusually high one (e.g., 98%), sometimes backfires. People interpret the less-than-100% consensus as a degree of uncertainty they did not expect and, as a result, reduce their belief in the finding (Aklin & Urpelainen 2014). In short, people (including my past self) expect scientific findings to be certain, and failing to meet this expectation may lead to disbelief in the reported findings, in entire scientific domains, or even in science as a whole.

This is relevant to the effort to make replications mainstream because the replication movement necessarily introduces a substantial degree of uncertainty into science. For example, the widely cited Open Science Collaboration (2015) was expected to replicate only 65.5% of the tested studies even under the assumption that every original study reported a true effect (Gilbert et al. 2016). Yet only 47% of the original studies were successfully replicated, which became a vivid illustration of the “replication crisis.” Whether the Open Science Collaboration successfully replicated half or two-thirds of the investigated studies, both numbers are substantially lower than what the lay audience expects, namely, that all original studies should replicate. In fact, the replication crisis would likely have arisen even if the Open Science Collaboration had replicated a much higher proportion of original studies, even more than the expected 65.5%.
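To make concrete how a set of exclusively true findings can still fall well short of a 100% replication rate, consider a minimal simulation sketch. It assumes, purely for illustration, that each true finding replicates with a probability equal to the replication study's statistical power, set here to 0.655 to echo the figure above; the mechanism and numbers are simplified assumptions of mine, not the analysis of Gilbert et al. (2016).

import numpy as np

# Illustrative assumption: 100 original studies, ALL reporting true effects,
# where each replication attempt succeeds with probability equal to its
# assumed statistical power (0.655, echoing the figure cited above).
rng = np.random.default_rng(seed=2015)
n_studies = 100
assumed_power = 0.655

replicated = rng.random(n_studies) < assumed_power
print(f"{replicated.sum()} of {n_studies} true findings replicated "
      f"({replicated.mean():.0%}) -- well below the 100% a lay audience expects.")

Under these assumptions the simulation replicates roughly two-thirds of the studies, even though every original effect is, by construction, real.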

Among scientists, consensus on an issue is likely to depend on the congruity of data. Given that replications can fail even for true findings, making replication mainstream may backfire: even experts might come to doubt true findings and struggle to distinguish between true and false ones. This, in turn, will magnify the public's doubt and drive the real-world application of scientific discoveries toward zero. Empirical evidence indicates that casting any doubt on scientific evidence decreases support for the implementation of public policy based on that evidence (Koehler 2016). Emphasizing scientific uncertainty is also sometimes used eristically: it “serves nothing but defeating or postponing new regulations, allowing profitable but potentially risky activities to continue unabated” (Freudenburg et al. 2008).

To be clear, the present argument is not that scientists should stop replicating studies because doing so makes them look bad in the eyes of the public. Rather, we need to actively work on communicating to the public (and to our peers) that the uncertainty associated with scientific findings is acceptable. We, as scientists, simply do not want to be perceived as people who know nothing and are therefore not worth listening to. Quite the opposite: we want to communicate the noisy but steady progress of science in general, and of psychology in particular. We also want our findings to be implemented in public policy, so that we contribute to making the world a better place. To accomplish that, we need to ensure that the public understands how science works and that uncertainty is natural in science, not a sign of junk science. The implementation of public policies informed by scientific evidence should be treated like judicial verdicts: based on evidence beyond “reasonable doubt,” not on absolute certainty.

One thing we could do is keep in mind how things look to the public and emphasize the importance of replications not in terms of weeding out “bad science,” but as the normal self-correction that is the very basis of scientific discovery. How well we communicate the uncertainty associated with scientific progress will determine whether mass replications have predominantly positive or negative effects.

References

Aklin, M. & Urpelainen, J. (2014) Perceptions of scientific dissent undermine public support for environmental policy. Environmental Science and Policy 38:173–77. Available at: http://doi.org/10.1016/j.envsci.2013.10.006.
Dudo, A., Dunwoody, S. & Scheufele, D. A. (2011) The emergence of nano news: Tracking thematic trends and changes in US newspaper coverage of nanotechnology. Journalism and Mass Communication Quarterly 88:55–75. Available at: http://doi.org/10.1177/107769901108800104.
Freudenburg, W. R., Gramling, R. & Davidson, D. J. (2008) Scientific certainty argumentation methods (SCAMs): Science and the politics of doubt. Sociological Inquiry 78:2–38. Available at: http://doi.org/10.1111/j.1475-682X.2008.00219.x.
Gilbert, D. T., King, G., Pettigrew, S. & Wilson, T. D. (2016) Comment on “Estimating the reproducibility of psychological science.” Science 351(6277):1037. Available at: http://doi.org/10.1126/science.aad7243.
Koehler, D. J. (2016) Can journalistic “false balance” distort public perception of consensus in expert opinion? Journal of Experimental Psychology: Applied 22(1):24–38. Available at: http://doi.org/10.1037/xap0000073.
Open Science Collaboration (2015) Estimating the reproducibility of psychological science. Science 349(6251):aac4716. Available at: http://doi.org/10.1126/science.aac4716.
Pitt, J. C. (1990) The myth of science education. Studies in Philosophy and Education 10:7–17. Available at: http://doi.org/10.1007/BF00367684.