1 Introduction
A number of recent articles have called into question the reproducibility of published research and have placed some of the blame for irreproducibility on a lack of integrity. This is why messages and articles by, for example, John Ioannidis, or those published in The Economist, are as alarming to the European Commission as they are to the scientific community.
John Ioannidis, called by the BMJ the scourge of sloppy science, 1 and the author of a 2005 paper entitled ‘Why most published research findings are false’, estimated in 2014 that 85% of research resources are wasted. 2 The Economist on 19 October 2013, in an article entitled ‘How Science goes wrong’, said that:
Last year researchers at one biotech firm, Amgen, found they could reproduce just six of 53 ‘landmark’ studies in cancer research. Earlier, a group at Bayer, a drug company, managed to repeat just a quarter of 67 similarly important papers. A leading computer scientist frets that three-quarters of papers in his subfield are bunk. In 2010 roughly 80,000 patients took part in clinical trials based on research that was later retracted because of mistakes or improprieties.
The European Commission has a dual interest in ensuring that research results are of high quality, credible and reliable, as it is both a funder of research and a user of its outputs. This is key to evidence- and science-based policy, as much as it is key to the effective use of financial resources and the true advancement of scientific knowledge.
As an evidence-based policy maker, the role of the European Commission is to ‘promote the general interest of the Union and [to] take appropriate initiatives to that end’. Notably, ‘Union legislative acts may only be adopted on the basis of a Commission proposal’. 3 In order to propose Union legislative acts, the Commission has to keep in mind the interest of all members of the European Union and of all of its citizens, which requires, for any proposal, thorough evidence-based analysis, documented in an accompanying impact assessment. More on this issue can be found on the regulation site of the European Commission. 4
To support better regulation and evidence-based policies, the European Commission deploys substantial efforts to have at its disposal a multifaceted system of science advice. It maintains an in-house research institute, the Joint Research Centre, which draws on over 50 years of scientific experience and continually builds its expertise. Located across five different countries, the JRC hosts specialist laboratories and unique research facilities and is home to thousands of scientists working to support EU policy.
In addition, the European Commission systematically consults and relies on more than ten specialised scientific EU agencies, such as the European Food Safety Authority (EFSA), the European Centre for Disease Prevention and Control (ECDC) or the Fundamental Rights Agency (FRA). Expert groups and committees also play a key role in the provision of science advice to the Commission; these include, for example, the Scientific Committees on Consumer Safety and on Health, Environmental and Emerging Risks.
In 2012, the European Commission added another element to its science advice tool box: Professor Anne Glover was appointed the first-ever Chief Science Advisor to the European Commission President, a post she held until 2014. In 2015, the Chief Science Advisor function was replaced by a better resourced Scientific Advice Mechanism (SAM). 5 The mechanism has three components: a seven-member High Level Group of Scientific Advisors; a structured relationship with five European science academy networks; and a support unit provided by the European Commission’s Research & Innovation Directorate-General. The High Level Group is mandated, first, to provide science advice whenever this is critical to European policy making; second, to suggest topics to the Commission for which science advice would be useful; and, third, to advise the Commission on the interfaces between science and policy making.
As a funder, the European Commission has been running Research & Innovation Programmes for more than 30 years. In 1986, the Single European Act included for the first time a specific chapter on research, which put the emphasis on applied research aimed at supporting the competitiveness of European industry. In 2007, the European Research Council (ERC) was launched; it represents 17% of the €80 billion budget of the current Horizon 2020 Framework Programme and supports fundamental research carried out by individual teams. The results of research from Horizon 2020 and from previous Framework Programmes are yet another source to inform policy development, and the Research Framework Programme, which has become the world’s largest publicly funded civil research programme, now also funds research whose sole purpose may be the increase of knowledge. This is why the Commission is taking a series of actions to support research integrity and reproducibility: research of the highest quality requires the application of the highest integrity standards, which will ultimately also lead to higher reproducibility.
2 Research Integrity and Reproducibility as Part of Open Science
2.1. Defining Reproducibility
One session at the 2016 Academia Europaea annual conference was entitled ‘reproducibility of published research’. Such a session could usefully have been preceded by a discussion of what reproducibility actually means. Even a superficial look at articles or blogs on reproducibility shows that at least three expressions are used without a clear and recognised definition: reproducibility, replicability and repeatability. This discussion is most intense in medicine and related fields (such as biochemistry and biology). In these areas, most articles seem to agree that reproducibility refers to the ability to duplicate (i.e. to reproduce) an entire analysis, either by the same researcher or by someone else working independently, whereas reproducing a single experiment is more commonly called replication.
However, Goodman et al. 6 point out that although
the importance of multiple studies corroborating a given result is acknowledged in virtually all of the sciences, the modern use of ‘reproducible research’ was originally applied not to corroboration, but to transparency, with application in the computational sciences. Computer scientist Jon Claerbout coined the term and associated it with a software platform and set of procedures which allows the reader of a paper to see the entire processing trail from the raw data and code to figures and tables.
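To make Claerbout’s idea concrete, the following minimal sketch (in Python; the file name ‘raw_measurements.csv’ and its ‘value’ column are hypothetical, introduced only for illustration) shows what a single-script processing trail from raw data to a published summary table might look like:

```python
# Minimal sketch of a Claerbout-style reproducible processing trail:
# one script carries the reader from archived raw data to the table
# that appears in the paper. File and column names are hypothetical.

import csv
import statistics

RAW_DATA = "raw_measurements.csv"   # raw data archived alongside the paper
RESULTS = "table1_summary.csv"      # the summary table reported in the paper

def load_measurements(path):
    """Read the raw data exactly as archived; no manual editing steps."""
    with open(path, newline="") as f:
        return [float(row["value"]) for row in csv.DictReader(f)]

def summarise(values):
    """Compute the summary statistics reported in the paper."""
    return {
        "n": len(values),
        "mean": round(statistics.mean(values), 3),
        "sd": round(statistics.stdev(values), 3),
    }

if __name__ == "__main__":
    summary = summarise(load_measurements(RAW_DATA))
    with open(RESULTS, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=summary.keys())
        writer.writeheader()
        writer.writerow(summary)
    print(f"Wrote {RESULTS}: {summary}")
```

Because every step from the archived raw file to the final table is encoded in one script, a reader can re-run the analysis and obtain the identical table, which is precisely the transparency Claerbout associated with ‘reproducible research’.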
Still others define repeatability as a researcher repeating her or his own experiments and testing how accurately they can be repeated, whereas reproducibility refers to other researchers trying to reproduce the results.
A constructive dialogue across all disciplines would certainly be facilitated by some agreement on terminology. This would contribute to clarifying where different methodologies are used by different sciences and where, therefore, different requirements exist in terms of reproducibility.
Notwithstanding terminology, the following cases can be distinguished in most sciences (the term reproducibility is always used, although other authors might use other terms):
(1) Obtaining the same results if an identical experimental set-up is used by different researchers.
(2) Obtaining the same results when identical data sets are used by different researchers.
(3) Obtaining the same results using different methodologies and/or different data sets.
The first case can be imagined, for example, in biology, chemistry or physics, where another researcher should be able to reproduce an experiment and its results exactly, provided all information on the experiment is available. If this is not possible, one may suspect sloppy science, insufficiently described methods or, in the worst case, data fabrication and falsification. However, in the social sciences or humanities, an identical experimental set-up frequently leads to different results when used with different research subjects, without any suspicion of wrongdoing.
The second case refers to proper data analysis and the proper use of statistical methods. If an identical data set yields different results for the same question in the hands of different researchers, it is worth examining the statistical methods used, as some may have been incorrectly applied.
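As a hedged illustration of this second case, the sketch below (with invented numbers) shows how two researchers analysing the identical data set can arrive at different p-values simply by making different statistical assumptions, here Student’s versus Welch’s t-test:

```python
# Two analyses of the *identical* data set can disagree when different
# statistical assumptions are made. The numbers below are illustrative only.

from scipy import stats

group_a = [4.1, 5.0, 6.2, 5.8, 4.9, 5.5]          # identical data set ...
group_b = [5.9, 7.8, 9.4, 6.1, 10.3, 8.8, 7.2]    # ... seen by both researchers

# Researcher 1 assumes equal variances (classical Student's t-test).
t1, p1 = stats.ttest_ind(group_a, group_b, equal_var=True)

# Researcher 2 drops that assumption (Welch's t-test).
t2, p2 = stats.ttest_ind(group_a, group_b, equal_var=False)

print(f"Student's t-test: t = {t1:.3f}, p = {p1:.4f}")
print(f"Welch's t-test:   t = {t2:.3f}, p = {p2:.4f}")
# The p-values differ even though the data are identical; full reporting
# of the method used is what makes such an analysis reproducible.
```

When such discrepancies appear, the remedy is not new data but a careful look at, and full reporting of, the statistical method applied.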
The third case refers to a classical component of the scientific method, namely the corroboration of research results by other researchers through the application of various approaches and methods. If a research result cannot be reproduced with other methods, this does not necessarily indicate a problem; if it can, the result is usually considered more robust.
As these few examples show, problems of reproducibility can have many different causes: some relate to the complete availability of raw data or method descriptions, others to the methodological knowledge of researchers (including knowledge of statistics), and some may point to problems of research integrity.
2.2. Commission Activities to Support Reproducibility
The European Commission conceives reproducibility of research both as an element of its Research Integrity policy and as part of its Open Science policy: 7
Open Science represents a new approach to the scientific process based on cooperative work and new ways of diffusing knowledge by using digital technologies and new collaborative tools. […] Open Science is as important and disruptive a shift as e-commerce has been for retail. Just like e-commerce, it affects the whole ‘business cycle’ of doing science and research – from the selection of research subjects, to the carrying out of research and to its use and re-use – as well as all the actors and actions involved up front.
The impact of all these trends is already visible, and already affecting some of the most burning issues in how research is carried out, such as the slowness of the publication process, the increasing criticism of the existing peer review system, and the challenge of reproducing reliable research results – all of which Open Science has the potential to strengthen and enhance by facilitating more transparency, openness, networking and collaboration, and by fostering interdisciplinary research. In being open, science will be fully accountable for its use of public resources.
Open Science can transform science into ‘better’ science. Better science means making science:
∙ good: by making science more credible and replicable; for example, by addressing governance and scientific integrity;
∙ efficient: by avoiding duplication of resources and optimising the re-usability of data; and,
∙ open: by improving the accessibility of data and knowledge at all stages of the research cycle, and enabling text and data mining by ensuring the appropriate conditions within copyright law.
The growing scrutiny of research integrity is a key driver of Open Science. With evidence coming to light of cases where research results appear not to be reproducible, the re-use of data can help foster the reproducibility of studies.
3 Fostering Research Integrity
Research integrity, which can be defined as ‘the performance of research to the highest standards of professionalism and rigour, in an ethically robust manner’, is important to science because it creates trust, and trust is at the heart of the research process. Researchers must be able to trust and rely on each other’s work and ‘they must also be trusted by society since they provide scientific expertise that may impact people’s lives’. Thus, ‘research integrity has the potential to increase the quality of research in the European research ecosystem, thereby increasing its overall effectiveness and impact into the future’. 8 Research integrity, considered by the Commission as a prerequisite to scientific excellence, will support Open Science in particular by promoting behaviours leading to a better access to and sharing of available data. Research integrity can also build trust between science and wider society, optimise returns on investment, and protect the EU and its interests. It therefore constitutes one of the priorities of European research policy. The European Commission is developing a policy on research integrity comprising two main pillars.
(a) Minimising breaches of research integrity in activities funded by Horizon 2020
Horizon 2020 requires participants to meet the highest standards of research integrity, as set out in the European Code of Conduct for Research Integrity. 9 Various elements safeguard adherence to these principles and enable the detection of research misconduct, including different tools to detect cases of misconduct during the evaluation process and the technical review of project proposals.
(b) Increasing adherence to the highest standards of research integrity in the research and innovation system, in the EU and internationally
The Commission intends to increase awareness of the importance of actively seeking a high level of integrity, to make available a tool kit to support organisations in building or adapting their integrity system, and to contribute to the availability of effective training material. It is also financing projects to identify the root causes of research misconduct and suitable responses. Several actions have been launched to promote higher levels of research integrity in the EU and beyond, including cooperation with stakeholders to review the European Code of Conduct for Research Integrity (ALLEA code); the creation of a European Research Integrity research community; the promotion of a research integrity culture through capacity building, awareness and skills; and efforts to increase reproducibility, exchange of best practices and international cooperation.
Other organisations and actors involved in research have, in recent years, also stepped up efforts to improve reproducibility and to raise awareness of reproducibility and research integrity. The scientific journals Science and Nature both decided, in 2013 and 2014 respectively, to add statistical expertise to the peer review of articles submitted for publication. They recognised that in many scientific publications results are misrepresented owing to the incorrect use of statistical methods (very few scientists have thorough training in statistics). The assessment of the proper use of statistics has thus become a major criterion for most bodies dealing with the assessment of scientific evidence.
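By way of illustration only, the short simulation below (all parameters are illustrative) shows why such statistical scrutiny matters: running many significance tests on pure noise still produces a steady stream of ‘significant’ findings at the conventional 5% threshold, and selectively reporting only those findings misrepresents the evidence:

```python
# Why statistical review matters: testing many hypotheses on pure noise
# still yields 'significant' findings at the usual 5% threshold.
# All parameters below are illustrative.

import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=42)   # fixed seed for reproducibility
n_tests, n_samples, alpha = 1000, 30, 0.05

false_positives = 0
for _ in range(n_tests):
    # Two samples drawn from the SAME distribution: no real effect exists.
    a = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    b = rng.normal(loc=0.0, scale=1.0, size=n_samples)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"{false_positives} of {n_tests} null comparisons were 'significant'")
# Expect roughly alpha * n_tests, i.e. about 50 spurious findings; selectively
# publishing only these would misrepresent the underlying (null) reality.
```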
4 Conclusion
The European Commission attaches great importance to research integrity and open science. When open science becomes normal scientific practice, the availability and publication of research data will be common. This, combined with the highest standards of research integrity, will lead to more research results becoming reproducible, which will contribute to increasing trust in science.
The role of Academies and Learned Societies can be very important, both for research integrity in general and for questions of reproducibility in particular. Many science academies represent a broad range of scientific disciplines and are thus a very appropriate forum in which, for example, to agree on questions of terminology regarding reproducibility. More importantly, they should be able to identify the types of research for which results can reasonably be expected to be reproducible in order to be recognised.
Johannes Klumpers holds a PhD in Forest Sciences and is currently Head of the Scientific Advice Mechanism Unit of the European Commission. Previously, he led several other units in the Commission's Directorate-General for Research & Innovation, dealing with gender equality, humanities research and budget. In addition to the European Group on Ethics in Science and New Technologies (https://ec.europa.eu/research/ege/index.cfm), the Scientific Advice Mechanism Unit supports the Commission's High Level Group of Scientific Advisors (https://ec.europa.eu/research/sam) and interacts with the SAPEA Consortium (Science Advice for Policies by the European Academies) of the five European science academy networks Academia Europaea, ALLEA, Euro-CASE, EASAC and FEAM. The Unit is also responsible for policies in relation to research integrity and research ethics.