Perhaps you have heard of Brian Wansink, or if not the man himself then certainly his results. Wansink was a nutritional scientist at Cornell who specialised in the psychology of eating. He had authored ‘studies suggesting people who grocery shop hungry buy more calories; that preordering lunch can help you choose healthier food; and that serving people out of large bowls encourage them to serve themselves larger portions’, and on the basis of his expertise on these phenomena he had been feted by the press and called on to help Google and the US Army run programmes designed to encourage healthy eating.Footnote 1 The problem, however, was that he had not in fact produced evidence for these claims. On some occasions he had misreported his own data – that is to say, lied about what he had found. And on other occasions he had more or less tortured his data with no regard for proper statistical method so as to give himself something, anything, to publish. He knowingly entered falsehoods into the information stream of science, which is to say – he committed fraud (Bright 2017, p. 291).
Brian Wansink misled scientific inquiry. An academic investigating Wansink's misconduct found that, to the best of his knowledge (as of 6 April 2017), ‘there are currently 42 publications from Wansink which are alleged to contain minor to very serious issues, which have been cited over 3,700 times, are published in over 25 different journals, and in eight books, spanning over 20 years of research’.Footnote 2 When one considers that academic papers are typically reviewed by multiple scientists before publication, that many academics will reorient their research programmes to respond to well-cited previously published literature, and that scientists then had to spend considerable effort uncovering Wansink's misconduct – this amounts to countless hours of time and effort from PhD researchers who could have been doing more valuable things. And it is not just this opportunity cost we have to deplore – the practical interventions founded on Wansink's research were not free! For instance, the US government implemented a policy to redesign school cafeterias based on Wansink's ‘findings’ that cost nearly $20 million.Footnote 3 Scientific fraud can have serious consequences, and it's worth grappling with.
Philosophers have a bad habit of only concerning themselves with science done well, but when you think about it there is something puzzling about scientific fraud. Why bother to carry out scientific research if you are just going to lie about your results? Shouldn't scientists want to know what's true, and won't lying get in the way of that?
It can all seem more puzzling when one thinks about the sort of person who becomes an academic scientist. The average starting salary for a PhD physicist in the USA is $88,670Footnote 4, whereas someone with their skills has the option of going into the finance industry wherein ‘entry-level quants with PhDs from top universities [may earn] base salaries as high as $125k and hedge funds offer up to $175k base salary’Footnote 5. This is to say, to become an academic physicist it seems one must actively reject base pecuniary motives when making one's life choices. One might naturally think that scientists are therefore instead motivated by the sort of truth-seeking curiosity that makes fraud such a puzzle. Indeed, the scientific attitude has been defined as one of searching for truth and being willing to follow the evidence (Douglas 2014; McIntyre 2019). Yet the physics world has seen its share of fraud scandals all the same (for an engrossing deep dive into one such story see Reich 2009). To state plainly the philosophical question that shall motivate us in the rest of this essay: what gives?
Well, sometimes the answer is clear enough. There are large industries quite uninterested in what is true except so far as it relates to their ability to sell certain products. In some cases their interests might be harmed if the truth about those products were known. Prominent examples have been cigarette companies seeking to undermine public confidence in results showing that smoking caused cancer, or coal and oil companies seeking to discredit research demonstrating the reality and harms of human-caused climate change (Oreskes & Conway 2011). These industries sometimes pay scientists to lie, which can evidently have pernicious effects (Holman & Bruner 2015), and they have other more subtle and nefarious means of promoting untruth in science (Weatherall et al. 2020). The extent to which capitalist allocation of scientific resources is consistent with scientific honesty is dubious at best, and perhaps this whole way of arranging things needs to be rethought. But let us set aside these cases of fraud in profit-seeking industry science. While the case of Wansink – academic at Cornell but also consultant at Google – shows that there is no strict division between academia and industry, on the whole commercial science comes with its own set of philosophical and ethical concerns (Pinto 2015). Sufficient unto the day are the problems of academic science.
I said before that one might naturally think that if scientists are not in it for the money then they do what they do for curiosity's sake. When a philosopher says ‘one might naturally think’ something, this is a prelude to telling you why it would be quite mistaken to do as much. And indeed the most common explanation for scientific fraud relies on a third option we haven't considered yet for what might motivate working scientists. Over the course of the twentieth century sociologists, historians, and economists collectively developed a theory of scientific motivation that they thought accounted for the prevalence of fraud. Coming into the twenty-first century, philosophers have started getting in on the act too, and we now have a fairly robust received view about why fraud occurs – and one which is suggestive of what may be done to stop it. Our first task shall thus be to spell out this theory: the theory of the credit economy of science, and credit-motivated fraud.
Being a philosopher, I can't help but relate everything back to Plato. It will all be relevant in a moment, I promise! Plato famously divided the soul into three parts. There was the acquisitive element of the soul, epithumia, concerned with material comfort and the pleasures of the body. There was the spirited element of the soul, thumos, concerned with honour and esteem. And there was reason, nous, concerned with finding truth and the creation of harmony. Consequently, in his Republic, Plato suggested there were to be three castes of citizen in the ideal city corresponding to these elements of the soul. There were the many, primarily motivated by their acquisitive souls, carrying out the manual labour society needs to reproduce itself. There were the guardians, soldiers primarily governed by the sense of honour and esteem, who are charged with protecting the city and its laws. And there were the philosopher-rulers, those elite few primarily moved by their sense of reason, naturally put in charge of the affairs of the city. One suspects that with this taxonomy in mind Plato would immediately have seen the trick behind that ‘one might naturally think’ – in setting aside the idea that scientists are an acquisitive bunch, we moved straight to the hypothesis that they were truth-seekers, something like the reason-governed philosopher-rulers. This then made fraud puzzling. But we missed a taxon! Between epithumia and nous there is a third option, thumos. Might the scientists be more akin to the guardians, governed by thumos above all?
So sociologists of science tell us! Starting with the pioneering work of Robert Merton in the 1960s (Merton 1968; 1969) it has become increasingly common to talk of scientists as ‘credit seekers’. That is to say, people who seek the honour, esteem, and glory of being well regarded in their field. To be a scientific credit seeker is to be someone who seeks status within a community, the community of peers in a scientific discipline. While there are still puzzles about how precisely to understand what counts as the relevant credit-conferring community (cf. Lee 2018), there is broad consensus that, roughly put, this is something that many scientists are in fact driven by. This can be at the level of individual motivation – science comes with a steep prestige and status hierarchy (Zuckerman 1970; Cole & Cole 1974) and scientists tend to want to climb that hierarchy. But even if they are not personally concerned with this, the way we have institutionally arranged science can essentially mandate acting as if one were such an honour, or credit, seeker.
We hand out resources through grants and prizes, we decide who gets a job and where, we decide which students will be placed with which academics – we do all of these tasks and many more in academia – primarily by deciding how ‘impressive’ we find candidates (Partha & David 1994; Stephan 1996). The informal judgements scientists routinely make about the quality of one another's work, and the potential or intellectual capacity of the scientists doing the work, are not just the stuff of idle gossip, but an essential element of how we in fact allocate the resources necessary to carry out scientific research. Hence even if you do not intrinsically care about your standing in the scientific community, if you want access to the resources necessary to play the game at all, you have to in fact gain scientific acclaim (Latour & Woolgar 1979, ch. 5). Everyone must therefore to some extent act as if they were a credit-seeker.
Scientists win credit by establishing priority on new claims (Merton 1957; Strevens 2003). What this means is that scientists have to publish an adequate defence of some novel claim in some recognised venue (typically but not always a peer-reviewed scientific journal) before anyone else has done so. What counts as ‘an adequate defence’ of a novel claim will vary from field to field – mathematicians may be required to produce a logically valid proof of a theorem, psychologists experimental support for a generalisation about human behaviour, philosophy professors an argumentative essay defending a key claimFootnote 6. But the point is that one needs to be the first to get such a thing out in one of the field-acknowledged venues for placing interesting work. If one does this, it counts to one's credit – in the long run, the more interesting people in the field find the claim (or your defence of it) the more credit accrues from having published it, and in the short run the more prestigious the venue you publish in the more credit accrues to you (for more on the long-run/short-run credit distinction see Heesen & Bright 2020, §3.5). Scientists are both intrinsically motivated, and de facto incentivised, to seek credit through establishing priority in this manner.
We are now in a position to state and understand the standard theory of scientific fraud. Scientists want and need credit for new results. To ensure their results are new, i.e. novel enough to have a plausible claim to establishing priority, scientists have to do their work sufficiently quickly that nobody else beats them to the punch (often referred to as being ‘scooped’ on a claim). And it is here that problems arise. First, there are all sorts of points in the typical research process where scientists have relatively unconstrained choices to make about how to proceed, and an unscrupulous actor can mask questionable research practices by strategic choices (Simmons et al. 2011). In such a scenario the need for speed can produce temptations to cut corners and work to a lower standard (Heesen 2018). This sort of perfidy seems to have been involved in Wansink's case, for instance. Second, even more brazen types of sheer data fabrication can be induced, in order to make results seem stronger and more interesting, sufficiently well supported to be worth publishing, or even just to wholesale invent phenomena (for a classic study of how credit seeking can incentivise this sort of brazen fraud see Broad & Wade 1982). Third and finally, the emphasis on novelty disincentivises the sort of replication and checking work that would make catching fraudulent or unsupported claims more likely, removing or greatly reducing the threat of punishment that might otherwise deter fraud (Romero 2016). Even when such checking work does occur, individual responsibility, and thus punishment, may be nigh impossible to assign (Huebner & Bright 2020).
And the mere fact that something has been shown false does not immediately prevent academics from accruing credit from having published it (LaCroix et al. forthcoming), so detection may not be sufficiently timely to actually discourage fraud if one can benefit from one's ill-gotten credit gains in the meantime (Zollman 2019; Heesen 2020). The race for credit encourages fraud and generally low epistemic standards, and discourages the sort of work that might catch cheats out.
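The incentive structure just described can be made vivid with a toy expected-credit calculation. This is a hypothetical sketch with invented numbers, not any published model: the function names and every parameter value below are my own illustrative assumptions. The point it illustrates is the essay's: when rushed work is much more likely to win the priority race and errors are rarely caught, cutting corners maximises expected credit, and only reliable detection reverses the ordering.

```python
# Toy expected-credit model of the speed/quality trade-off.
# All numbers and names are illustrative assumptions, not empirical
# estimates or any published model.

def expected_credit(p_first, p_flawed, p_caught, reward=100, penalty=300):
    """Expected credit from one project.

    p_first  : chance of winning the priority race (publishing first)
    p_flawed : chance the published work contains a serious error
    p_caught : chance such an error is ever detected and punished
    """
    return p_first * reward - p_first * p_flawed * p_caught * penalty

def careful(p_caught):
    # Slower, so less likely to win priority, but rarely flawed.
    return expected_credit(p_first=0.3, p_flawed=0.05, p_caught=p_caught)

def rushed(p_caught):
    # Faster, so likelier to win priority, but often flawed.
    return expected_credit(p_first=0.6, p_flawed=0.40, p_caught=p_caught)

# With little replication work, errors are seldom caught and rushing pays;
# with reliable detection, careful work pays instead.
print(f"low detection:  careful={careful(0.2):.1f}, rushed={rushed(0.2):.1f}")
print(f"high detection: careful={careful(1.0):.1f}, rushed={rushed(1.0):.1f}")
```

On these made-up numbers the rushed strategy wins handily when detection is rare and loses badly when detection is certain – a crude way of seeing why the undersupply of replication work matters so much to the standard theory.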
This analysis of what produces fraud is suggestive of a solution: replace thumos with nous! If the problem with scientists is that their desire for honour and esteem, credit, is tempting them to cut corners and commit fraud, we should try to discourage such glory-seeking behaviour. This can happen through motive modification (advocated in Du Bois 1898; see Bright 2018). In this sort of scheme one tries, when recruiting junior researchers, to filter for scientists who are more concerned with the quest for truth than with glory-seeking. And one also engages in a kind of soul crafting (Appiah 2005, ch. 5), using the educational and cultural institutions of science to mould researchers’ desires and sense of identity to be focussed much more on truth-seeking than worldly acclaim. This should also be paired with institutional redesign, so that people are not forced to become de facto credit seekers (Nosek et al. 2012), and replication and checking work is properly encouraged (Bruner 2013; Romero 2018). Credit-seeking caused fraud – removing credit-seeking thus removes the cause of fraud. Not, of course, that soul-making and institutional design are trivial or simple matters! But we at least know roughly what we need to do and where we should be heading, do we not?
Well, maybe. (This is often the answer to philosophical questions, for what it is worth.) The problem is that I have not told you even half the story so far about thumos in science. We'll set aside the question of whether having scientists driven by nous rather than thumos would actually decrease fraud (on which see Bright 2017Footnote 7). Let us admit that, yes, it is indeed plausible that credit seeking causes scientific fraud. Researchers from multiple different fields using multiple different methods have tended to arrive at that conclusion; it's about as well-validated a causal claim as one is going to get in the social sciences. But the problem for this anti-fraud plan is that causing fraud is not all credit-seeking does!
In fact, much of the attention philosophers have given to the credit economy is precisely because at the close of the twentieth century it came to be appreciated that in many interesting ways credit seeking actually promotes surprisingly helpful behaviours for communities whose collective purpose is to seek the truth (Kitcher 1990; Goldman 1999, ch. 8). I will give a couple of examples here. First, there is the phenomenon of scientific sharing. As has long been noted, scientists see themselves as bound by what is called the ‘communist norm’ (cf. Merton 1942). According to this norm, it is required of scientists that they share their research with others, to make sure that whoever wants to have access to their research results and (in the ideal case) data may do so. Polling finds that scientists continue to support this norm (Anderson et al. 2010) and the recent growth of what is called the Open Science MovementFootnote 8 suggests the communist ideal still has the ability to inspire scientists to take action.
And communism is a good thing indeed! Sharing research speeds up the process of discovery by ensuring that people have access to the ideas and data they need to spur their own research forward. It fosters teamwork and the checking of one's own research: by easily comparing what you and your lab are seeing with what is being found elsewhere, you can correct mistakes and identify points of shared interest or possibilities for mutually advantageous collaboration. Finally, it makes fundamental research available to technologists and engineers, who can produce socially useful innovations by building on scientific work. How then does credit relate to this communist norm?
Well, it has been argued that it is exactly the quest for credit which can produce and sustain the communist norm (Heesen 2017). Recall how it was that the quest for scientific credit encourages fraud – by encouraging people to rush their work and cut corners to get results before someone else could establish priority. Well, in the good case, the very same thumos which drives such things also drives scientists to get their work out there and share it as soon as possible. To take a very famous example, Darwin apparently had evidence for his groundbreaking theories long before he published them. Historians debate precisely why he sat on the results so long (van Wyhe 2007), but it is evident that the risk of being scooped by the evolutionary theorising of Alfred Wallace played at least some role in prompting him to action. He didn't want to lose out on the possibility of gaining credit for his hard work, and this helped motivate him to get round to actually publishing his theory. Clearly we want such work to be shared for the good of science! And the nice thing about the credit motive, given the priority rule, is that it ensures that scientists have every reason to want other people to see what they have got as soon as it is presentable, so as to gain the credit reward. What is more, they should want this work to be as widely accessible as possible, so that no one may be in any doubt as to who deserves priority, and so that as wide an audience as possible may credit them with the discovery. If this is right then it is not just credit seeking in general, but the exact feature of credit seeking which brings about fraud, which is at the same time responsible for this benefit!
For another example, consider the benefits of scientific pluralism. In general, we do not want every scientist to address the same problems via the same methods (Zollman 2010). We do, of course, want some replication and double-checking, and it is a problem that the credit incentive does not lead to enough such work. But we don't want everybody to be just doing the same thing as everyone else. For one thing, this would mean we simply fail to explore possibilities that might be fruitful if tapped for new discoveries. For another thing, this can mean errors go undetected – for the best way to expose some systematic flaw with a given scientific method is to show that approaching the same topic from a different angle consistently leads to divergent results. When we find that, we tend to spend some time examining our methods, and it is then that problems are uncovered and addressed. To take an example familiar from the lives of everyone on earth at the time I publish this, it is a very good thing indeed that multiple lines of inquiry were tried out simultaneously in developing vaccines for COVID-19. Because of this we have different vaccines which can act as an insurance against the possible failures of the others, and which may have different benefits for different populations. This is not unique to vaccines, but reflective of the aforementioned general features of science. So for science to function properly we need scientists to spread out in logical space, to explore various possible theories that might account for the data, and various possible methods for testing claims and generating and analysing data.
But this variety does not come easy! Indeed, from a certain angle, it can look rather puzzling for scientists to adopt different methods to address the same problems. If scientists all wanted to get at the truth, and were sharing their information about how effective various methods are, shouldn't they all use whatever is communally agreed to be the most effective truth-seeking method? Likewise and for similar reasons, should they not analyse data in light of what is communally agreed to be the most plausible theory? It is credit seeking that can help us avoid this rush to methodological conformism.
For credit can encourage scientists to adopt different methods or seek out novel problems (Kitcher 1990; Zollman 2018). A junior scientist approaching a new problem may be well aware that the field-validated methods and theories offer the most promising route to the truth on a given question. However, if they take the path more travelled they shall find themselves working in the wake of all the best minds of the previous generation, trying to say something novel where they have not. No easy task! At some point it becomes better to take your chance on a perhaps less reliable or less well-validated method – it may at least allow you to say something novel, and in this way gain priority on a claim, even if it is more likely to turn out to be mistaken in the long run. At the communal level, ensuring there is a regular supply of such people trying out new things (even if, individually, it often ends up in failure) is how science keeps itself fresh and avoids the stagnant conformity mooted above. It is the pursuit of credit in the scientific community that presently provides an avenue for such diversity to be incentivised.
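The explore/exploit logic here can be sketched with another toy calculation. Again this is a hypothetical illustration with invented numbers, not the actual formalism of the Kitcher or Zollman models: I simply assume each approach succeeds with a fixed probability and that the credit from a success is split equally among everyone who pursued that approach.

```python
# Toy model of credit-driven division of labour. Assumptions (all
# hypothetical): an approach succeeds with a fixed probability, and the
# credit for its success is split equally among everyone who pursued it.

def expected_share(p_success, n_workers):
    """Expected credit per scientist pursuing a given approach."""
    return p_success / n_workers

p_orthodox, n_orthodox = 0.8, 10   # well validated, but crowded
p_longshot = 0.3                   # less reliable, as yet untried

join_crowd = expected_share(p_orthodox, n_orthodox + 1)  # become the 11th
go_alone = expected_share(p_longshot, 1)                 # sole pursuer

print(f"join the orthodox approach: {join_crowd:.3f}")
print(f"try the long shot:          {go_alone:.3f}")
```

On these made-up numbers a pure credit-seeker does roughly four times better striking out alone, even though the orthodox approach is individually far more likely to succeed – a crude rendering of how thumos can underwrite the pluralism the community needs.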
We are now in a position to review. We were puzzled and concerned by the phenomenon of scientific fraud in academia. Puzzled because it did not seem like academic scientists should have the sort of base pecuniary motives that one might suspect lead to fraud. Concerned because, as in the case of Wansink, fraud can have serious intellectual and social consequences if not detected in time. So to explain why this fraud occurred we looked at the theory of the scientific credit economy. We saw that understanding scientists as governed by thumos, as engaged in the spirited pursuit of honour and esteem from their peers via claiming priority on new results, allowed us to explain how it is that scientific fraud occurred. What is even better, it is paired with a natural theory as to how to discourage fraud. But immediately upon realising that, we were forced to concede that thumos has its positive sides too, and can encourage behaviours which are useful for the scientific community's ability to carry out reliable inquiry, and which may be hard to motivate for truth seekers. So where does that leave us, and what should we do?
I don't know. Philosophy is hard.
That may seem somewhat despondent. This is, in part, just a reflection of a sad reality. We do not yet really know what to do; such is the state of our knowledge. Not that we have made no progress on the question. We have a good understanding of various of the effects of credit – we know how it can encourage fraud, and we also know how it encourages pluralism and the sharing of one's work. So far so good. What we are sorely lacking, however, is any means of integrating our models or theories in this domain. That is to say, we do not have a sufficiently general and well-confirmed theory of science as a social phenomenon that we can confidently predict and assess the overall effects of modifying our culture or institutional structure so as to decrease the significance of scientific thumos. Without that, it is very hard to say with any measure of confidence what we ought to do.
Of course there is one bit of advice that would seem to fall out of this: study the problem more! And indeed my hope is that those who read this essay will see in it a call to arms. In the study of scientific fraud we have a puzzle that has it all. The scope and influence of science in contemporary society gives the problem an obvious and immediate practical importance – it's worth putting in the effort to do it right. By exploring the puzzle of scientific fraud we are led to a dilemma wherein it seems that the most natural method of reducing fraud would also reduce the incidence of positive phenomena. Are there other options we should seek, or if not how can we weigh the value of reducing fraud against the disvalue of reducing sharing and methodological diversity? These are deep questions about what is possible and what is desirable for us, touching upon matters in both ethics and epistemology. At all times we must be aware that science is a social enterprise, and to understand both how it operates and the likely effects of any intervention one proposes, one must understand how its cultures and institutions corral, inhibit, and generate possibilities for, the millions of scientists at work today. A keen social sensibility and understanding of social mechanisms is thus necessary here too.
The study of scientific fraud and how it may be reduced thus offers the student of philosophy a chance to practice that integrative skill for which our discipline aspires to be known. It is said that philosophers wish to know how things, in the broadest possible sense, hang together, in the broadest possible sense. I hope to have persuaded you that in considering those places where science is coming apart at the seams, one may gain valuable practice in just this sort of broad perspective taking, and do so in a way that addresses an urgent problem of our times.
It's easy to judge Wansink as a bad apple who cared too much for his career and not enough about the truth. But if the standard theory of fraud is even roughly correct, he was in some sense simply responding to the culture and institutions we have set up in academia. If you find people systematically breaking the rules the option is always available to you to shake your fists at them for their wicked ways and hope that sufficient moral condemnation will stem the tide of bad behaviour. But another option is to carefully study the social system giving rise to this behaviour, and with sleeves rolled up and an experimental attitude, get to work creating a better world.