The scientific process relies heavily on peer review, which plays a key role in controlling scientific quality, improving research practice, and conferring credibility. A lack of scientific transparency leads to a peer-review process that is less than ideal and often results in irreproducible science, with potentially unpredictable consequences for public health and for socioeconomic and political decision-making. To choose just one example, a 2012 report showed that the results of 47 of 53 published peer-reviewed cancer studies could not be reproduced (Begley and Ioannidis, 2015). Consequently, the last decade has seen growing awareness and recognition of the weaknesses that have infiltrated the biomedical research peer-review system, and it is now widely acknowledged that this system, in its current implementation, has significant limitations.

Among journals, the number of signatories to the Transparency and Openness Promotion (TOP) guidelines had reached nearly 5000 as of March 2019, with an average increase of approximately 120 journals per month over the preceding three years.

Several impactful studies have detailed the reasons behind the current crisis (Begley and Ioannidis, 2015; Couchman, 2014; Drubin, 2015; Frye et al., 2015; Iorns and Chong, 2014) and identified a spectrum of causes, including inappropriate study design, lack of reagent validation, inadequate documentation of methods and datasets, and insufficient sharing of data and methods, all of which are essential for detailed analysis and replication.

Consequently, the replication crisis calls for significant improvements to current practices (McNutt, 2014) that are ultimately meant to restore confidence and facilitate the peer-review process. This implies that data must be readily available in a comprehensive format that allows the rapid testing of new hypotheses, meaningful meta-analysis, and the development of robust methods. The following measures offer an excellent starting point (Freedman et al., 2017):

  • Introduction of blinded analyses to mitigate subconscious biases;
  • Repetition of experiments whenever possible;
  • Validation of reagents (including cell lines and antibodies);
  • Careful determination of appropriate data analysis procedures and statistical tests;
  • Sharing of all results, including negative and positive controls.

Beyond experiments and reporting, adequate disclosure of potential conflicts of interest (COI) and proper credit to the researchers involved are two critical components of scientific transparency. Although it is often met with skepticism, we believe that sponsored research must always be properly identified as such.

Study design principles

There is no shortage of publications suggesting that industry funding increases the likelihood of pro-industry conclusions (Babor and Miller, 2014; Pisinger et al., 2019), and this literature has certainly helped to identify many publication biases and cases of misconduct in both industry- and non-industry-funded research (Allison and Cope, 2010; Boutron et al., 2010; Cope and Allison, 2010; Golder and Loke, 2008). However numerous, these publications do not justify industry-wide generalizations. In 2011, a study found that, over the preceding few decades, only 3.8% of the misconduct cases that led to the retraction of medical and scientific publications were associated with industry support (Woolley et al., 2011). In a perspective article, Barton and colleagues reported that they could find no data showing that financial conflicts of interest lead to a drop in scientific quality (Barton et al., 2014). In fact, industry-funded clinical trials have, on average, been found to be superior in their reporting and adherence to regulations compared with non-industry-funded clinical trials run by academic institutions (Del Parigi, 2012).

It is worth noting that scientific transparency is never a finished process; there will always be adjustments to make and more we can do to facilitate innovation and cooperation. We are only at the beginning of an era of big data, high-throughput experiments, and massively parallel computation, and our scientific tools are being reshaped for this new reality. Initiatives like INTERVALS spearhead this transformation and encourage the transparent sharing of data to allow easy review and understanding, thereby facilitating the objective evaluation of the evidence (Carlo et al., 1992).