Zwaan et al. extensively discuss six concerns related to making replication mainstream. I raise a different one: the distorted perception of science by the public and, perhaps also, by fellow scientists.
Among the public there is a “myth of science”: an implicit assumption that scientific findings report true effects and that, once a study is conducted, all scientists agree on its results (Pitt 1990). For example, when a mathematician presents a proof of a theorem, everyone acknowledges its validity. Similarly, in logic or philosophy, a single counterexample falsifies a whole theory. People can hold similar expectations toward empirical sciences such as psychology. Catering to these expectations, mass media present people with reports of scientific advancements while rarely mentioning any associated uncertainty (Dudo et al. 2011). In this context, it is not surprising that presenting people with information about the level of scientific consensus on a particular finding, even an improbably high one (e.g., 98%), sometimes backfires. People interpret the less-than-100% consensus as a degree of uncertainty they did not expect and, as a result, reduce their belief in the finding (Aklin & Urpelainen 2014). In short, people (including my past self) expect scientific findings to be certain, and failing to meet this expectation may lead to disbelief in reported findings, in scientific domains, or even in science as a whole.
This is relevant to the effort to make replications mainstream because the replication movement necessarily introduces a substantial degree of uncertainty into science. For example, the well-cited Open Science Collaboration (2015) was expected to replicate only 65.5% of the tested studies even under the assumption that every original study reported a true effect (Gilbert et al. 2016). Yet only 47% of the original studies were successfully replicated, which became a vivid illustration of the “replication crisis.” Regardless of whether the Open Science Collaboration successfully replicated half or two-thirds of the investigated studies, both figures are substantially lower than what the lay audience expects, namely, that all original studies should replicate. In fact, the replication crisis would likely have arisen even if the Open Science Collaboration had replicated a much higher proportion of the original studies, even one exceeding the expected 65.5%.
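To make the statistical point concrete, here is a minimal simulation sketch of why exact replications of true effects still fail. It is my own illustration, not the Open Science Collaboration’s analysis, and all parameter values (a true standardized effect of d = 0.4, 50 participants per group, a two-sided independent-samples t test at alpha = .05) are illustrative assumptions: with these numbers, a replication has only about 50% power.

```python
# Illustrative simulation (a sketch under assumed parameters, not the Open
# Science Collaboration's method): how often does an exact replication of a
# TRUE effect reach p < .05?
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(2015)
d, n, alpha, trials = 0.4, 50, 0.05, 10_000  # assumed effect, group size, etc.

successes = 0
for _ in range(trials):
    control = rng.normal(0.0, 1.0, n)     # control group scores
    treatment = rng.normal(d, 1.0, n)     # treatment group shifted by true effect d
    _, p = ttest_ind(treatment, control)  # independent-samples t test
    successes += p < alpha                # count "successful" replications

print(f"Replications of a true effect that 'succeed': {successes / trials:.1%}")
# Prints roughly 50%, even though the effect is real in every single run.
```

Under these assumed numbers, a field in which every original finding were true would still see only about half of its exact replications “succeed,” precisely the kind of gap that a lay audience expecting 100% is liable to misread as a crisis.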
Among scientists, consensus on an issue is likely to depend on the congruity of the data. Given that replications can fail even for true findings, making replication mainstream may backfire: even experts might come to doubt true findings and might struggle to distinguish true findings from false ones. This, in turn, will magnify the doubt of the public and push the real-world application of scientific discoveries toward zero. Empirical evidence indicates that casting any doubt on scientific evidence decreases support for the implementation of public policy based on that evidence (Koehler 2016). Underlining scientific uncertainty is sometimes even used eristically: it “serves nothing but defeating or postponing new regulations, allowing profitable but potentially risky activities to continue unabated” (Freudenburg et al. 2008).
To be clear, the present argument is not that scientists should stop replicating studies because replications make them look bad in the eyes of the public. Rather, we need to actively communicate to the public (and to our peers) that the uncertainty associated with scientific findings is acceptable. We, as scientists, simply do not want to be perceived as people who know nothing and are therefore not worth listening to. Quite the opposite: we want to communicate the noisy but steady progress of science in general, and of psychology in particular. We also want our findings to be implemented in public policy, so that we contribute to making the world a better place. To accomplish that, we need to ensure that the public understands how science works and that uncertainty is natural in science, not a sign of junk science. The implementation of public policies informed by scientific evidence should be treated like judicial verdicts: based on evidence beyond “reasonable doubt,” not on absolute certainty.
One thing we could do is keep in mind how things look to the public and emphasize the importance of replications not in terms of weeding out “bad science,” but in terms of the normal self-correction that is the very basis of scientific discovery. How well we communicate the uncertainty associated with scientific progress will determine whether massive replications have predominantly positive or negative effects.