Evaluating preprints



I am hugely enthusiastic about communicating research via preprints. So naturally, I am happy to see the president and strategic advisers of one of the most elite funding institutes embrace preprints:

For centuries, publishing a scientific article was simply about sharing results. More recently, publishing research articles in a journal has served two distinct functions: (i) public disclosure and (ii) partial validation by peer review (Vale & Hyman, 2016). The partial validation is sometimes followed by strong validation: (iii) independent reproduction of, and building upon, the published work.

Preprints clearly can serve the first function, public disclosure. It has been less clear to me how to validate and curate the highly heterogeneous research published as preprints. I think this question remains open, though I have seen signs that some preprints are strongly validated (independently reproduced and built upon) even before the more conventional partial validation by peer review.

For example, the methods and ideas underlying Single Cell ProtEomics by Mass Spectrometry (SCoPE-MS) were independently validated by multiple laboratories. Some presented their results at conferences before our preprint was peer-reviewed:

Several groups published their results after our preprint appeared in a peer-reviewed journal, crediting the preprint for the ideas:

More (that I know of) are underway. All inspired by a preprint. I see this as a data point that preprints can receive strong validation even outside the boundaries of the peer-review system that has dominated our field for the last few decades. It’s not a complete solution for evaluating all preprints, but I find it very encouraging evidence that preprints can be strongly validated even before the partial validation of peer review!
