Yarkoni's sobering description of the state of psychological science should make us all uncomfortable. While Yarkoni acknowledges that the ongoing problems undermining the validity of research are structural and reflect the norms of the field, his proposed courses of action focus nearly exclusively on individual researchers and how they can improve the quality of their own research. However, there are pivotal players in shaping psychological science whose influence should likewise be addressed: the gatekeepers (i.e., journals, editors, publishers, and society board members). Not only do they bear much of the responsibility for the current state of affairs, but much of the change that Yarkoni (and we) desire would follow swiftly if a small group of gatekeepers decided to make it a priority. Moreover, unless the gatekeepers change, changes made by individual scientists will not be sustainable. Thus, targeting gatekeepers when calling for reform is not only more just but also a more practical avenue for achieving long-term change.
Relying on individual researchers to take the harder road on their own initiative is not enough. Some motivated researchers will indeed heed the call, take pains to improve the generalizability of their findings, and rein in their conclusions to better correspond with their evidence. Ideally, these individuals would be rewarded for doing so. In reality, they may not be – some journals, editors, funders, and committee members may find such calibrated claims underwhelming. Maybe the field improves slightly, but most likely incentive structures remain unchanged, and consequently many of those motivated researchers on the job market may find themselves passed over in favor of candidates who followed the traditional road of sweeping generalizations.
Gatekeepers such as journal editors, society board members, and publishers, on the other hand, are in a safe position to disrupt the status quo and change the standards for what counts as excellent research. These decision-makers set policies that determine what factors are weighed in journal acceptance – the currency by which researchers are evaluated. If the leading journals decided to raise their standards on a particular dimension, such as generalizability, researchers would be motivated to meet this new standard. Journals would have nothing to lose, unless they fear that their reputations depend on maintaining low standards.
Yarkoni suggests that we not judge anyone too harshly for choosing the road of business as usual, given the norms that have long been embraced in psychology. We acknowledge the unfairness in holding individual researchers to higher standards than those applied in the field at large. However, “ignor[ing] the bad news” should not be a viable option for those who wish to publish in our top journals – and certainly not for those who run them. Rather, journals (and the editors leading them) should be judged harshly if they repeatedly demonstrate a lack of concern about the generalizability of the findings they publish, and continue to reward novelty over rigor. If we cannot expect (or even demand) this from the very gatekeepers who shape incentives in the field, presumably valued for their ability to identify high-quality work, what can we expect from them?
We agree wholeheartedly with Yarkoni that “Researchers must be willing to look critically at previous studies and flatly reject – on logical and statistical, rather than empirical, grounds – assertions that were never supported by the data in the first place, even under the most charitable methodological assumptions” (sect. 5, para. 3). We simply believe that the onus is primarily on those who control the biggest rewards – acceptance into the field's top journals – to put these practices into action. If we are going to ask this of researchers, we should not hesitate to expect the same (and arguably more) of editors, such as not letting those studies through peer review in the first place.
There are several paths journals can take moving forward. First, they could “raise the bar” and require authors to improve the quality of their research to support the sorts of claims that the field has traditionally enjoyed making. Many journals are already quite selective, but do not place sufficient weight on the validity of the methods and inferences when making editorial decisions. This could be addressed by paying methodologists and statisticians to serve as expert reviewers and encouraging registered reports (to provide feedback and identify problems prior to data collection, when they can still be addressed).
Second, journals could require claims to be limited to only what the research supports and accept that discussion sections will be far less spectacular. Statements on limitations and constraints on generality (Simons, Shoda, & Lindsay, 2017) should not be treated as confessionals to be buried within discussion sections and otherwise never spoken of again. Rather, claims made throughout an article should be expected to align with these statements. Press releases should similarly be written in ways that are compatible with the strength of the evidence in the paper, even if that means they will garner less attention from the press, policymakers, and the public. Although some may mourn the decline in attention and influence, the credibility that this would afford the field would be well worth the cost. If journals want to allow authors space to speculate beyond what they have evidence for, these speculations should be relegated to a clearly marked section and should not find their way into abstracts, conclusions, or press releases.
We cannot force journals to take any of these steps. Although we can urge them to do so, they may choose to simply carry on with business as usual – enforcing haphazard standards, rewarding novelty, and maintaining the “kind of collective self-deception” Yarkoni described. In that case, however, they should admit that what they are doing is not science (Campbell, 1984). Moreover, editors who continue to give stamps of approval to articles riddled with unsubstantiated claims should strongly consider Yarkoni's first proposed course of action: doing something else.
Acknowledgements
None.
Financial support
This work was supported by an NSF Graduate Research Fellowship (#1247392) to Sarah R. Schiavone.
Conflict of interest
None.