Shepherding preprints through a pandemic
BMJ 2020; 371 doi: https://doi.org/10.1136/bmj.m4703 (Published 15 December 2020)
Cite this as: BMJ 2020;371:m4703
People tend to have opinions on preprints and whether they help or hinder progress in research. I’m an unabashed preprint advocate. Of course, some preprints are more important and interesting than others, and some prove to be plain wrong, just like journal articles. And I declare an interest: last year BMJ joined forces with Cold Spring Harbor Laboratory and Yale University to launch a preprint server for clinical medicine, medRxiv (pronounced “med-archive”),1 to enable quicker exchange of research ideas.2 In its first six months medRxiv handled a few hundred articles. In 2020 so far it has posted 12 000, mostly on one topic: coronavirus.3
Before the launch we decided what types of papers to post, how to screen them quickly while limiting risk to patients and populations, and what requirements to place on authors.4 The pandemic changed none of these criteria, but they were all tested repeatedly through discussion channels and video meetings—and our concerns and processes have evolved with each phase of the pandemic.
For example, medRxiv aims to post only research articles (including systematic reviews) and protocols, not opinions or commentaries. But what counts as research? Some blog posts and newspaper articles contain more data and analysis than many preprint submissions. We decided that a preprint describing publicly available data should include research methods, contain more than just graphs and discussion, and discuss the research presented, rather than using a small amount of data to justify an extensive opinion.
Aiming to reduce health related public panic, in January and February we considered declining any extreme predictions of R0 or of rising cases in different countries. But, as case numbers surged, this seemed naively optimistic. We also decided—unlike most journals—to post articles that might be out of date in a couple of months because the data were changing quickly.
Coping with volume
In March and April, as our teams in the northeastern US and the UK entered lockdown, we all worked long hours (from home), dealing with medRxiv submissions that increased 10-fold in three months. Our “day jobs” were also expanding—the Yale clinicians were seeing patients, while at BMJ and Cold Spring Harbor we learnt to produce journals with entirely remote teams. In April and May it became clear that, no matter how hard everyone worked, we had to scale up to deal with the volume of submissions: first, training more screeners who establish that a submission is within scope and meets reporting requirements; next, recruiting more affiliates—clinical researchers who confirm that a submission describes clinical research and would not be dangerous if posted; and, finally, securing help for the single clinical adviser who looked at every paper before posting.
Studies with human participants are posted on medRxiv only if they’ve had ethical oversight. We thought this a straightforward requirement, but we heard from many clinicians who were certain that they didn’t need oversight because they had “the answer” to treating the most seriously ill people, sometimes with treatments that were banned or reported to be dangerous. When we saw President Trump promoting unproven treatments and branding a (preprinted) study that countered his view an “enemy statement,”5 this upped the ante on how we handled treatment claims, and we ensured that all had clinician input.
Preprint v press release
Another area of debate was studies of small numbers of people. Early on we saw reports of loss of taste and smell in handfuls of patients in Italy: was this important or just an interesting curiosity? Ensuring that patients aren’t identifiable became a focus as we received increasing numbers of submissions from clinicians that read like a referral letter to a close colleague—describing patients’ home life, work, family relationships, dates of illness, and comorbidities. Some countries’ governments also disregarded patient confidentiality in the drive to control viral spread.6
And now, with a light appearing at the end of the pandemic tunnel7 and our workloads and workforces stabilised, discussions focus on claims about treatments and vaccines. Should we post studies of ill defined herbal remedies, for example, even within a registered clinical trial? Are there particular requirements for vaccine studies? Discussions continue.
One example of when we had no doubt about posting a preprint as quickly as possible was the Recovery trial’s preliminary result on dexamethasone.8 This was submitted a few days after the UK chief medical officer recommended the treatment and a few weeks before the peer reviewed article appeared in the New England Journal of Medicine.9 How many lives were saved by improved treatment in those weeks mid-year? And how much better was it that physicians could read the full preprint before prescribing, rather than relying on a press release?
Such examples drive us to carry on the work of preprint shepherding—despite occasional accusations of censorship and bias from authors whose preprints we decline to post, and despite the view of some eminent journal editors that, when we focus on speed, we devalue accuracy or reliability. Of course, sharing research openly and at pace has risks, the worst of which medRxiv aims to mitigate, but aren’t the risks of opacity and delay even greater? If posting preprints weren’t an option, how much of this research would remain behind closed doors today, and how much less informed would be the global discussions of policy and treatment options?
We launched medRxiv as a service to the clinical research community, and we look forward to hearing your ideas and contributions as its development continues.
Founding and management of medRxiv
medRxiv’s founders and management committee are John Inglis and Richard Sever (Cold Spring Harbor Laboratory); Harlan Krumholz and Joseph Ross (Yale University School of Medicine); and Claire Rawlinson and Theodora Bloom (BMJ).
Head of content (and screening) is Samantha Hindle, supported by Dinar Yunusov (both at Cold Spring Harbor Laboratory), and the lead clinical and editorial consultant is John Fletcher (BMJ), who has been supported in the covid era by Ginny Barbour (Queensland University of Technology).
For more information see www.medrxiv.org/about-biorxiv.
Competing interests: TB is executive editor of The BMJ and a cofounder of medRxiv. A full declaration of interests is online at www.bmj.com/about-bmj/editorial-staff/theodora-bloom.
Provenance and peer review: Commissioned, not peer reviewed.