Research

Prediction models for diagnosis and prognosis of covid-19: systematic review and critical appraisal

BMJ 2020; 369 doi: https://doi.org/10.1136/bmj.m1328 (Published 07 April 2020) Cite this as: BMJ 2020;369:m1328

Linked Editorial

Prediction models for diagnosis and prognosis in covid-19


This article has an addendum.

  1. Laure Wynants, assistant professor1 2,
  2. Ben Van Calster, associate professor2 3,
  3. Gary S Collins, professor4 5,
  4. Richard D Riley, professor6,
  5. Georg Heinze, associate professor7,
  6. Ewoud Schuit, assistant professor8 9,
  7. Marc M J Bonten, professor8 10,
  8. Johanna A A Damen, assistant professor8 9,
  9. Thomas P A Debray, assistant professor8 9,
  10. Maarten De Vos, associate professor2 11,
  11. Paula Dhiman, research fellow4 5,
  12. Maria C Haller, medical doctor7 12,
  13. Michael O Harhay, assistant professor13 14,
  14. Liesbet Henckaerts, assistant professor15 16,
  15. Nina Kreuzberger, research associate17,
  16. Anna Lohmann, researcher in training18,
  17. Kim Luijken, doctoral candidate18,
  18. Jie Ma, medical statistician5,
  19. Constanza L Andaur Navarro, doctoral student8 9,
  20. Johannes B Reitsma, associate professor8 9,
  21. Jamie C Sergeant, senior lecturer19 20,
  22. Chunhu Shi, research associate21,
  23. Nicole Skoetz, medical doctor17,
  24. Luc J M Smits, professor1,
  25. Kym I E Snell, lecturer6,
  26. Matthew Sperrin, senior lecturer22,
  27. René Spijker, information specialist8 9,
  28. Ewout W Steyerberg, professor3,
  29. Toshihiko Takada, assistant professor4,
  30. Sander M J van Kuijk, research fellow23,
  31. Florien S van Royen, research fellow8,
  32. Christine Wallisch, research fellow7 24 25,
  33. Lotty Hooft, associate professor8 9,
  34. Karel G M Moons, professor8 9,
  35. Maarten van Smeden, assistant professor8
  1. Department of Epidemiology, CAPHRI Care and Public Health Research Institute, Maastricht University, Peter Debyeplein 1, 6229 HA Maastricht, Netherlands
  2. Department of Development and Regeneration, KU Leuven, Leuven, Belgium
  3. Department of Biomedical Data Sciences, Leiden University Medical Centre, Leiden, Netherlands
  4. Centre for Statistics in Medicine, Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
  5. NIHR Oxford Biomedical Research Centre, John Radcliffe Hospital, Oxford, UK
  6. Centre for Prognosis Research, School of Primary, Community and Social Care, Keele University, Keele, UK
  7. Section for Clinical Biometrics, Centre for Medical Statistics, Informatics and Intelligent Systems, Medical University of Vienna, Vienna, Austria
  8. Julius Center for Health Sciences and Primary Care, University Medical Centre Utrecht, Utrecht University, Utrecht, Netherlands
  9. Cochrane Netherlands, University Medical Centre Utrecht, Utrecht University, Utrecht, Netherlands
  10. Department of Medical Microbiology, University Medical Centre Utrecht, Utrecht, Netherlands
  11. Department of Electrical Engineering, ESAT Stadius, KU Leuven, Leuven, Belgium
  12. Ordensklinikum Linz, Hospital Elisabethinen, Department of Nephrology, Linz, Austria
  13. Department of Biostatistics, Epidemiology and Informatics, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  14. Palliative and Advanced Illness Research (PAIR) Center and Division of Pulmonary and Critical Care Medicine, Department of Medicine, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
  15. Department of Microbiology, Immunology and Transplantation, KU Leuven-University of Leuven, Leuven, Belgium
  16. Department of General Internal Medicine, KU Leuven-University Hospitals Leuven, Leuven, Belgium
  17. Evidence-Based Oncology, Department I of Internal Medicine and Center for Integrated Oncology Aachen Bonn Cologne Dusseldorf, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
  18. Department of Clinical Epidemiology, Leiden University Medical Centre, Leiden, Netherlands
  19. Centre for Biostatistics, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
  20. Centre for Epidemiology Versus Arthritis, Centre for Musculoskeletal Research, University of Manchester, Manchester Academic Health Science Centre, Manchester, UK
  21. Division of Nursing, Midwifery and Social Work, School of Health Sciences, University of Manchester, Manchester, UK
  22. Faculty of Biology, Medicine and Health, University of Manchester, Manchester, UK
  23. Department of Clinical Epidemiology and Medical Technology Assessment, Maastricht University Medical Centre+, Maastricht, Netherlands
  24. Charité Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Berlin, Germany
  25. Berlin Institute of Health, Berlin, Germany
  Correspondence to: L Wynants laure.wynants@maastrichtuniversity.nl
  • Accepted 31 March 2020
  • Final version accepted 4 May 2020

Abstract

Objective To review and critically appraise published and preprint reports of prediction models for diagnosing coronavirus disease 2019 (covid-19) in patients with suspected infection, for prognosis of patients with covid-19, and for detecting people in the general population at increased risk of becoming infected with covid-19 or being admitted to hospital with the disease.

Design Living systematic review and critical appraisal.

Data sources PubMed and Embase through Ovid, Arxiv, medRxiv, and bioRxiv up to 7 April 2020.

Study selection Studies that developed or validated a multivariable covid-19 related prediction model.

Data extraction At least two authors independently extracted data using the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist; risk of bias was assessed using PROBAST (prediction model risk of bias assessment tool).

Results 4909 titles were screened, and 51 studies describing 66 prediction models were included. The review identified three models for predicting hospital admission from pneumonia and other events (as proxy outcomes for covid-19 pneumonia) in the general population; 47 diagnostic models for detecting covid-19 (34 were based on medical imaging); and 16 prognostic models for predicting mortality risk, progression to severe disease, or length of hospital stay. The most frequently reported predictors of presence of covid-19 included age, body temperature, signs and symptoms, sex, blood pressure, and creatinine. The most frequently reported predictors of severe prognosis in patients with covid-19 included age and features derived from computed tomography scans. C index estimates ranged from 0.73 to 0.81 in prediction models for the general population, from 0.65 to more than 0.99 in diagnostic models, and from 0.85 to 0.99 in prognostic models. All models were rated at high or unclear risk of bias, mostly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, high risk of model overfitting, and vague reporting. Most reports did not include any description of the study population or intended use of the models, and calibration of the model predictions was rarely assessed.

Conclusion Prediction models for covid-19 are quickly entering the academic literature to support medical decision making at a time when they are urgently needed. This review indicates that the proposed models are poorly reported and at high risk of bias, and that their reported performance is probably optimistic. Hence, we do not recommend that any of these reported prediction models be used in current practice. Immediate sharing of well documented individual participant data from covid-19 studies and collaboration are urgently needed to develop more rigorous prediction models and to validate promising ones. The predictors identified in included models should be considered as candidate predictors for new models. Methodological guidance should be followed because unreliable predictions could cause more harm than benefit in guiding clinical decisions. Finally, studies should adhere to the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) reporting guideline.

Systematic review registration Protocol https://osf.io/ehc47/, registration https://osf.io/wy245.

Readers’ note This article is a living systematic review that will be updated to reflect emerging evidence. Updates may occur for up to two years from the date of original publication. This version is update 1 of the original article published on 7 April 2020 (BMJ 2020;369:m1328), and previous updates can be found as data supplements (https://www.bmj.com/content/369/bmj.m1328/related#datasupp).

Introduction

The novel coronavirus disease 2019 (covid-19) presents an important and urgent threat to global health. Since the outbreak in early December 2019 in the Hubei province of the People’s Republic of China, the number of patients confirmed to have the disease has exceeded 3 231 701 in more than 180 countries, and the number of people infected is probably much higher. More than 220 000 people have died from covid-19 (up to 30 April 2020).1 Despite public health responses aimed at containing the disease and delaying the spread, several countries have been confronted with a critical care crisis, and more countries could follow.234 Outbreaks lead to substantial increases in the demand for hospital beds and shortages of medical equipment, while medical staff themselves could also become infected.

To mitigate the burden on the healthcare system, while also providing the best possible care for patients, efficient diagnosis and information on the prognosis of the disease are needed. Prediction models that combine several variables or features to estimate the risk of people being infected or experiencing a poor outcome from the infection could assist medical staff in triaging patients when allocating limited healthcare resources. Models ranging from rule based scoring systems to advanced machine learning models (deep learning) have been proposed and published in response to a call to share relevant covid-19 research findings rapidly and openly to inform the public health response and help save lives.5 Many of these prediction models are published in open access repositories, ahead of peer review.

We aimed to systematically review and critically appraise all currently available prediction models for covid-19, in particular models to predict the risk of developing covid-19 or being admitted to hospital with covid-19, models to predict the presence of covid-19 in patients with suspected infection, and models to predict the prognosis or course of infection in patients with covid-19. We include model development and external validation studies. This living systematic review, with periodic updates, is being conducted in collaboration with the Cochrane Prognosis Methods Group.

Methods

We searched PubMed and Embase through Ovid, bioRxiv, medRxiv, and arXiv for research on covid-19 published after 3 January 2020. We used the publicly available publication list of the covid-19 living systematic review.6 This list contains studies on covid-19 published on PubMed and Embase through Ovid, bioRxiv, and medRxiv, and is continuously updated. We validated whether the list was fit for purpose by comparing it with relevant hits from bioRxiv and medRxiv obtained by combining covid-19 search terms (covid-19, sars-cov-2, novel corona, 2019-ncov) with methodological search terms (diagnostic, prognostic, prediction model, machine learning, artificial intelligence, algorithm, score, deep learning, regression). All relevant hits were found on the living systematic review list.6 We supplemented this list with hits from PubMed by searching for “covid-19” because when we performed our initial search this term was not included in the reported living systematic review6 search terms for PubMed. We further supplemented the list with studies on covid-19 retrieved from arXiv. The online supplementary material presents the search strings. Additionally, we contacted authors for studies that were not publicly available at the time of the search,78 and included studies that were publicly available but not on the living systematic review6 list at the time of our search.9101112

We searched databases on 13 March 2020 and 24 March 2020 (for the first version of the review), and 7 April 2020 (for the first update of the review). All studies were considered, regardless of language or publication status (preprint or peer reviewed articles; updates of preprints will only be included and reassessed in future updates after publication in a peer reviewed journal). We included studies if they developed or validated a multivariable model or scoring system, based on individual participant level data, to predict any covid-19 related outcome. These models included three types of prediction models: diagnostic models for predicting the presence of covid-19 in patients with suspected infection; prognostic models for predicting the course of infection in patients with covid-19; and prediction models to identify people at increased risk of developing covid-19 in the general population. No restrictions were made on the setting (eg, inpatients, outpatients, or general population), prediction horizon (how far ahead the model predicts), included predictors, or outcomes. Epidemiological studies that aimed to model disease transmission or fatality rates, diagnostic test accuracy, and predictor finding studies were excluded. Titles, abstracts, and full texts were screened in duplicate for eligibility by independent reviewers (two from LW, BVC, and MvS), and discrepancies were resolved through discussion.

Data extraction of included articles was done by two independent reviewers (from LW, BVC, GSC, TPAD, MCH, GH, KGMM, RDR, ES, LJMS, EWS, KIES, CW, AL, JM, TT, JAAD, KL, JBR, LH, CS, MS, MCH, NS, NK, SMJvK, JCS, PD, CLAN, and MvS). Reviewers used a standardised data extraction form based on the CHARMS (critical appraisal and data extraction for systematic reviews of prediction modelling studies) checklist13 and PROBAST (prediction model risk of bias assessment tool) for assessing the reported prediction models.14 We sought to extract each model’s predictive performance by using whatever measures were presented. These measures included any summaries of discrimination (the extent to which predicted risks discriminate between participants with and without the outcome), and calibration (the extent to which predicted risks correspond to observed risks) as recommended in the TRIPOD (transparent reporting of a multivariable prediction model for individual prognosis or diagnosis) statement.15 Discrimination is often quantified by the C index (C index=1 if the model discriminates perfectly; C index=0.5 if discrimination is no better than chance). Calibration is often quantified by the calibration intercept (which is zero when the risks are not systematically overestimated or underestimated) and calibration slope (which is one if the predicted risks are not too extreme or too moderate).16 We focused on performance statistics as estimated from the strongest available form of validation (in order of strength: external (evaluation in an independent database), internal (bootstrap validation, cross validation, random training test splits, temporal splits), apparent (evaluation by using exactly the same data used for development)). Any discrepancies in data extraction were discussed between the reviewers, followed by conflict resolution by LW and MvS if needed. The online supplementary material provides details on data extraction. We considered aspects of PRISMA (preferred reporting items for systematic reviews and meta-analyses)17 and TRIPOD15 in reporting our article.
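
As an illustration of these measures (a toy example of our own, not data from any included study), the following sketch computes a C index and a logistic recalibration intercept and slope for a binary outcome; for binary outcomes the C index equals the area under the receiver operating characteristic curve.

```python
# Toy illustration of the extracted performance measures; the data are invented.
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

y = np.array([0, 1, 0, 1, 0, 0, 1, 1])                  # observed outcomes
p = np.array([0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9])  # predicted risks

# Discrimination: for a binary outcome the C index equals the ROC AUC
# (1 = perfect discrimination, 0.5 = no better than chance).
c_index = roc_auc_score(y, p)

# Calibration: one common approach regresses the outcome on the log odds
# of the predicted risks; ideal values are intercept 0 and slope 1.
logit_p = np.log(p / (1 - p))
fit = sm.Logit(y, sm.add_constant(logit_p)).fit(disp=0)
intercept, slope = fit.params
print(f"C index={c_index:.2f}, intercept={intercept:.2f}, slope={slope:.2f}")
```

A calibration slope below one indicates predicted risks that are too extreme, the typical signature of an overfitted model.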

Patient and public involvement

It was neither appropriate nor possible to involve patients or the public in the design, conduct, or reporting of our research. The study protocol and preliminary results are publicly available on https://osf.io/ehc47/ and medRxiv.

Results

We retrieved 4903 titles through our systematic search (fig 1; 1916 on 13 March 2020 and 774 on 24 March 2020, included in the first version of the review; and 2213 on 7 April 2020, included in the first update). Two additional unpublished studies were made available on request (after a call on social media). We included a further four studies that were publicly available but were not detected by our search. Of 4909 titles, 199 studies were retained for abstract and full text screening (85 in the first version of the review; 114 were added in the first update). Fifty one studies describing 66 prediction models met the inclusion criteria (31 models in 27 papers included in the first version of the review; 35 models in 24 papers added in the first update).789101112181920212223242526272829303132333435363738394041424344454647484950515253545556575859606162 These studies were selected for data extraction and critical appraisal (table 1 and table 2).

Fig 1

PRISMA (preferred reporting items for systematic reviews and meta-analyses) flowchart of study inclusions and exclusions. CT=computed tomography

Table 1

Overview of prediction models for diagnosis and prognosis of covid-19

Table 2

Risk of bias assessment (using PROBAST) based on four domains across 51 studies that created prediction models for coronavirus disease 2019


Primary datasets

Thirty two studies used data on patients with covid-19 from China, two studies used data on patients from Italy,3139 and one study used data on patients from Singapore40 (supplementary table 1). Ten studies used international data (supplementary table 1) and two studies used simulated data.3541 One study used US Medicare claims data from 2015 to 2016 to estimate vulnerability to covid-19.8 Three studies were not clear on the origin of covid-19 data (supplementary table 1).

Based on 26 of the 51 studies that reported study dates, data were collected between 8 December 2019 and 15 March 2020. The duration of follow-up was unclear in most studies. Two studies reported median follow-up time (8.4 and 15 days),1937 while another study reported a follow-up of at least five days.42 Some centres provided data to multiple studies and several studies used open GitHub63 or Kaggle64 data repositories (version or date of access often unclear), and so it was unclear how much these datasets overlapped across our 51 identified studies (supplementary table 1). One study24 developed prediction models for use in paediatric patients. The median age in studies on adults varied (from 34 to 65 years; see supplementary table 1), as did the proportion of men (from 41% to 67%), although this information was often not reported at all.

Among the six studies that developed prognostic models to predict mortality risk in people with confirmed or suspected infection, the percentage of deaths varied between 8% and 59% (table 1). This wide variation is partly because of severe sampling bias caused by studies excluding participants who still had the disease at the end of the study period (that is, they had neither recovered nor died).720212244 Additionally, length of follow-up could have varied between studies (but was rarely reported), and there might be local and temporal variation in how people were diagnosed as having covid-19 or were admitted to the hospital (and therefore recruited for the studies). Among the diagnostic model studies, only five reported on prevalence of covid-19 and used a cross sectional or cohort design; the prevalence varied between 17% and 79% (see table 1). Because 31 diagnostic studies used either case-control sampling or an unclear method of data collection, the prevalence in these diagnostic studies might not have been representative of their target population.

Table 1 gives an overview of the 66 prediction models reported in the 51 identified studies. Supplementary table 2 provides modelling details and box 1 discusses the availability of models in a format for use in clinical practice.

Box 1

Availability of models in format for use in clinical practice

Sixteen studies presented their models in a format for use in clinical practice. However, because all models were at high risk of bias, we do not recommend their routine use before they are properly externally validated.

Models to predict risk of developing coronavirus disease 2019 (covid-19) or of hospital admission for covid-19 in general population

The “COVID-19 Vulnerability Index” to detect hospital admission for covid-19 pneumonia from other respiratory infections (eg, pneumonia, influenza) is available as an online tool.865

Diagnostic models

The “COVID-19 diagnosis aid APP” is available on iOS and Android devices to diagnose covid-19 in asymptomatic patients and those with suspected disease.12 The “suspected COVID-19 pneumonia Diagnosis Aid System” is available as an online tool.1066 The “COVID-19 early warning score” to detect covid-19 in adults is available as a score chart in an article.30 A nomogram (a graphical aid to calculate risk) is available to diagnose covid-19 pneumonia based on imaging features, epidemiological history, and white blood cell count.43 A decision tree to detect severe disease for paediatric patients with confirmed covid-19 is also available in an article.24 Additionally, an online tool is available for diagnosis based on routine blood examination data.45

Diagnostic models based on images

Three artificial intelligence models to assist with diagnosis based on medical images are available through web applications.232629676869 One model is deployed in 16 hospitals, but the authors do not provide any usable tools in their study.33 One paper includes a “total severity score” to classify patients based on images.54

Prognostic models

To assist in the prognosis of mortality, a nomogram,7 a decision tree,21 and a computed tomography based scoring rule are available in the articles.22 Additionally, a nomogram exists to predict progression to severe covid-19.32 A model equation to predict disease progression was made available in one paper.60

Overall, seven studies made their source code available on GitHub.8113435384755 Thirty one studies did not include any usable equation, format, or reference for use or validation of their prediction model.


Models to predict risk of developing covid-19 or of hospital admission for covid-19 in general population

We identified three models that predicted risk of hospital admission for covid-19 pneumonia in the general population, but used admission for non-tuberculosis pneumonia, influenza, acute bronchitis, or upper respiratory tract infections as outcomes in a dataset without any patients with covid-19 (table 1).8 Among the predictors were age, sex, previous hospital admissions, comorbidity data, and social determinants of health. The study estimated C indices of 0.73, 0.81, and 0.81 for the three models.

Diagnostic models to detect covid-19 in patients with suspected infection

Nine studies developed 13 multivariable models to diagnose covid-19. Most models target patients with suspected covid-19. Reported C index values ranged between 0.85 and 0.99, except for one model with a C index of 0.65. Two studies aimed to diagnose severe disease in patients with confirmed covid-19: one in adults with confirmed covid-19 with a reported C index value of 0.88,46 and one in paediatric patients with reported perfect performance.24 Several diagnostic predictors were used in more than one model: age (five models); body temperature or fever (three models); signs and symptoms (such as shortness of breath, headache, shiver, sore throat, and fatigue; three models); sex (three models); blood pressure (three models); creatinine (three models); epidemiological contact history, pneumonia signs on computed tomography scan, basophils, neutrophils, lymphocytes, alanine transaminase, albumin, platelets, eosinophils, calcium, and bilirubin (each in two models; table 1).

Thirty four prediction models were proposed to support the diagnosis of covid-19 or covid-19 pneumonia (and monitor progression) based on images. Most studies used computed tomography images. Other image sources were chest radiographs39474849555658 and spectrograms of cough sounds.53 The predictive performance varied widely, with estimated C index values ranging from 0.81 to 0.998.

Prognostic models for patients with diagnosis of covid-19

We identified 16 prognostic models (table 1) for patients with a diagnosis of covid-19. Of these models, eight estimated mortality risk in patients with suspected or confirmed covid-19 (table 1). The intended use of these models (that is, when to use them, in whom to use them, and the prediction horizon, eg, mortality by what time) was not clearly described. Five models aimed to predict progression to a severe or critical state, and three aimed to predict length of hospital stay (table 1). Predictors (for any outcome) included age (seven models), features derived from computed tomography scoring (seven models), lactate dehydrogenase (four models), sex (three models), C reactive protein (three models), comorbidity (including hypertension, diabetes, cardiovascular disease, respiratory disease; three models), and lymphocyte count (three models; table 1).

Four studies that predicted mortality reported a C index between 0.90 and 0.98. One study also evaluated calibration.7 When applied to new patients, their model yielded probabilities of mortality that were too high for low risk patients and too low for high risk patients (calibration slope >1), despite excellent discrimination.7 One study developed two models to predict a hospital stay of more than 10 days and estimated C indices of 0.92 and 0.96.20 The other study predicting length of hospital stay did not report a C index. Neither study predicting length of hospital stay reported calibration. The five studies that developed models to predict progression to a severe or critical state reported C indices between 0.85 and 0.99. One of these studies also reported perfect calibration, but it was unclear how this was evaluated.32

Risk of bias

All models were at high risk of bias according to assessment with PROBAST (table 1), which suggests that their predictive performance when used in practice is probably lower than that reported. Therefore, we have cause for concern that the predictions of these models are unreliable when used in other people. Box 2 gives details on common causes for risk of bias for each type of model.

Box 2

Common causes of risk of bias in the reported prediction models

Models to predict risk of developing coronavirus disease 2019 (covid-19) or of hospital admission for covid-19 in general population

These models were based on Medicare claims data, and used proxy outcomes to predict hospital admission for covid-19 pneumonia, in the absence of patients with covid-19.8

Diagnostic models

Controls are probably not representative of the target population for a diagnostic model (eg, controls for a screening model had viral pneumonia).124145 The test used to determine the outcome varied between participants,1241 or one of the predictors (eg, fever) was part of the outcome definition.10

Diagnostic models based on medical imaging

Generally, studies did not clearly report which patients had imaging during clinical routine, and it was unclear whether the selection of controls was made from the target population (that is, patients with suspected covid-19). Often studies did not clearly report how regions of interest were annotated. Images were sometimes annotated by only one scorer without quality control.2527475255 Careful description of model specification and subsequent estimation was lacking, challenging the transparency and reproducibility of the models. Every study used a different deep learning architecture, some established and others designed specifically for the study, without benchmarking the chosen architecture against alternatives.

Prognostic models

Study participants were often excluded because they had not developed the outcome by the end of the study period and were still in follow-up (that is, they were in hospital but had not yet recovered or died), yielding a highly selected study sample.720212244 Additionally, only three studies accounted for censoring by using Cox regression1942 or competing risk models.62 One study used the last available predictor measurement from electronic health records (rather than measuring the predictor value at the time when the model was intended to be used).21


Twenty four of the 51 studies had a high risk of bias for the participants domain (table 2), which indicates that the participants enrolled in the studies might not be representative of the models’ targeted populations. Unclear reporting on the inclusion of participants prohibited a risk of bias assessment in 13 studies. Six of the 51 studies had a high risk of bias for the predictor domain, which indicates that predictors were not available at the models’ intended time of use, were not clearly defined, or were influenced by the outcome measurement. The diagnostic model studies that used artificial intelligence on medical images were all scored as unclear on the predictor domain. One diagnostic imaging study used a simple scoring rule and was scored at low risk of bias for the predictor domain. The publications often lacked clear information on preprocessing steps (eg, cropping of images). Moreover, complex machine learning algorithms transform images into predictors in a complex way, which makes it challenging to fully apply the PROBAST predictors section to such imaging studies. Most studies used outcomes that are easy to assess (eg, death, presence of covid-19 by laboratory confirmation). Nonetheless, there was reason to be concerned about bias induced by the outcome measurement in 18 studies, among other reasons because of the use of subjective or proxy outcomes (eg, severe respiratory infections other than covid-19).

All but one study were at high risk of bias for the analysis domain (table 2). Many studies had small sample sizes (table 1), which led to an increased risk of overfitting, particularly if complex modelling strategies were used. Three studies did not report the predictive performance of the developed model, and three studies reported only the apparent performance (the performance with exactly the same data used to develop the model, without adjustment for optimism owing to potential overfitting). Only five studies assessed calibration,712324350 but the method to check calibration was probably suboptimal in two studies.1232
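
The optimism adjustment that these studies omitted can be illustrated with Harrell’s bootstrap procedure; the sketch below (simulated data, our own example rather than any reviewed model) contrasts the apparent C index with its optimism corrected counterpart.

```python
# Toy illustration of bootstrap optimism correction (Harrell); data are simulated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
X = rng.normal(size=(n, 5))                        # five toy predictors
y = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))    # outcome driven by the first

model = LogisticRegression().fit(X, y)
apparent = roc_auc_score(y, model.predict_proba(X)[:, 1])  # flattering estimate

optimism = []
for _ in range(200):                               # bootstrap replicates
    idx = rng.integers(0, n, n)
    boot = LogisticRegression().fit(X[idx], y[idx])
    auc_boot = roc_auc_score(y[idx], boot.predict_proba(X[idx])[:, 1])
    auc_orig = roc_auc_score(y, boot.predict_proba(X)[:, 1])
    optimism.append(auc_boot - auc_orig)           # performance drop on the full data

corrected = apparent - np.mean(optimism)           # optimism corrected C index
print(f"apparent C={apparent:.3f}, corrected C={corrected:.3f}")
```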

Nine models were developed and externally validated in the same study (in an independent dataset, excluding random training test splits and temporal splits).71225324243515259 However, in six of these models, the datasets used for the external validation were not representative of the target population.712254259 Consequently, predictive performance could differ if the models are applied in the targeted population. In one study, commonly used performance statistics for prognosis (discrimination, calibration) were not reported.42 Gozes and colleagues52 and Fu and colleagues51 reported satisfactory predictive performance on an external validation set, but it is unclear how the data for the external validation were collected and whether they are representative. Gong and colleagues32 and Wang and colleagues43 obtained satisfactory discrimination on probably unbiased but small external validation datasets.

One study presented a small external validation (27 participants) that reported satisfactory predictive performance of a model originally developed for avian influenza H7N9 pneumonia. However, patients who had not recovered at the end of the study period were excluded, which again led to a selection bias.22 Another study was a small scale external validation study (78 participants) of an existing severity score for lung computed tomography images with satisfactory reported discrimination.54

Discussion

In this systematic review of all prediction models related to the covid-19 pandemic, we identified and critically appraised 51 studies that described 66 models. These prediction models can be divided into three categories: models for the general population to predict the risk of developing covid-19 or being admitted to hospital for covid-19; models to support the diagnosis of covid-19 in patients with suspected infection; and models to support the prognostication of patients with covid-19. All models reported good to excellent predictive performance, but all were appraised to have high risk of bias owing to a combination of poor reporting and poor methodological conduct for participant selection, predictor description, and statistical methods used. As expected, in these early covid-19 related prediction model studies, clinical data from patients with covid-19 are still scarce and limited to data from China, Italy, and international registries. With few exceptions, the available sample sizes and number of events for the outcomes of interest were limited. This is a well known problem when building prediction models and increases the risk of overfitting the model.70 A high risk of bias implies that the performance of these models in new samples will probably be worse than that reported by the researchers. Therefore, the estimated C indices, often close to 1 and indicating near perfect discrimination, are probably optimistic. Eleven studies carried out an external validation,712222532424351525459 and calibration was rarely assessed.

We reviewed 33 studies that used advanced machine learning methodology on medical images to diagnose covid-19, covid-19 related pneumonia, or to assist in segmentation of lung images. The predictive performance measures showed a high to almost perfect ability to identify covid-19, although these models and their evaluations also had a high risk of bias, notably because of poor reporting and an artificial mix of patients with and without covid-19. Therefore, we do not recommend any of the 66 identified prediction models to be used in practice.

Challenges and opportunities

The main aim of prediction models is to support medical decision making. Therefore it is vital to identify a target population in which predictions serve a clinical need, and a representative dataset (preferably comprising consecutive patients) on which the prediction model can be developed and validated. This target population must also be carefully described so that the performance of the developed or validated model can be appraised in context, and users know which people the model applies to when making predictions. Unfortunately, the included studies in our systematic review often lacked an adequate description of the study population, which leaves users of these models in doubt about the models’ applicability. Although we recognise that all studies were done under severe time constraints caused by urgency, we recommend that any studies currently in preprint and all future studies should adhere to the TRIPOD reporting guideline15 to improve the description of their study population and their modelling choices. TRIPOD translations (eg, in Chinese and Japanese) are also available at https://www.tripod-statement.org.

A better description of the study population could also help us understand the observed variability in the reported outcomes across studies, such as covid-19 related mortality. The variability in the relative frequencies of the predicted outcomes presents an important challenge to the prediction modeller. A prediction model applied in a setting with a different relative frequency of the outcome might produce predictions that are miscalibrated71 and might need to be updated before it can safely be applied in that new setting.16 Such an update might often be required when prediction models are transported to different healthcare systems, which requires data from patients with covid-19 to be available from that system.
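
The simplest such update re-estimates only the model intercept so that predicted risks match the outcome frequency in the new setting, keeping the original coefficients fixed. A minimal sketch follows (toy numbers; our illustration of this standard recalibration step, not code from any included study):

```python
# Toy illustration of intercept recalibration for a new setting; data are invented.
import numpy as np
import statsmodels.api as sm

p_old = np.array([0.05, 0.20, 0.40, 0.10, 0.70, 0.30, 0.60, 0.15])  # original model's risks
y_new = np.array([0, 1, 1, 0, 1, 0, 1, 0])                          # outcomes in new setting

lp = np.log(p_old / (1 - p_old))             # original linear predictor
# Intercept-only logistic model with the linear predictor entered as an offset,
# so the slope stays fixed at 1 (a calibration-in-the-large update).
fit = sm.GLM(y_new, np.ones((len(y_new), 1)),
             family=sm.families.Binomial(), offset=lp).fit()
a = fit.params[0]                            # estimated intercept correction
p_updated = 1 / (1 + np.exp(-(a + lp)))      # recalibrated risks for the new setting
print(f"intercept update={a:.2f}")
```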

Covid-19 prediction problems will often not present as a simple binary classification task. Complexities in the data should be handled appropriately. For example, a prediction horizon should be specified for prognostic outcomes (eg, 30 day mortality). If study participants have neither recovered nor died within that time period, their data should not be excluded from the analysis, as was done in most reviewed studies. Instead, an appropriate time to event analysis should be considered to allow for administrative censoring.16 Censoring for other reasons, for instance because of quick recovery and loss to follow-up of patients who are no longer at risk of death from covid-19, could necessitate analysis in a competing risk framework.72
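
A minimal sketch of this recommendation, using simulated data and the lifelines library (an illustration under assumed toy data, not any reviewed study’s analysis): patients still in follow-up at day 30 are administratively censored rather than excluded.

```python
# Toy illustration: time to event analysis with administrative censoring at day 30.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 100
age = rng.integers(30, 90, n)
time = rng.exponential(20 / (1 + (age - 30) / 60), n)  # older patients die sooner
df = pd.DataFrame({
    "age": age,
    "days": np.minimum(time, 30),      # follow-up capped at the 30 day horizon
    "died": (time < 30).astype(int),   # 0 = still in follow-up, kept as censored
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days", event_col="died")  # censoring handled, not discarded
cph.print_summary()                                  # hazard ratio for age
```

A competing risk analysis (eg, a Fine and Gray model) would go one step further by treating recovery and death as distinct event types rather than censoring one of them.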

Instead of developing and updating predictions in each local setting, the use of individual participant data from multiple countries and healthcare systems might allow better understanding of the generalisability and implementation of prediction models across different settings and populations. This approach could greatly improve the applicability and robustness of prediction models in routine care.7374757677

The evidence base for the development and validation of prediction models related to covid-19 will quickly increase over the coming months. Together with the increasing evidence from predictor finding studies78798081828384 and open peer review initiatives for covid-19 related publications,85 data registries6364868788 are being set up. To maximise the new opportunities and to facilitate individual participant data meta-analyses, the World Health Organization has recently released a new data platform to encourage sharing of anonymised covid-19 clinical data.89 To leverage the full potential of these evolutions, international and interdisciplinary collaboration in terms of data acquisition and model building is crucial.

Study limitations

With new publications on covid-19 related prediction models rapidly entering the medical literature, this systematic review cannot be viewed as an up-to-date list of all currently available covid-19 related prediction models. Also, 45 of the studies we reviewed were only available as preprints. These studies might improve after peer review, when they enter the official medical literature; we will reassess these peer reviewed publications in future updates. We also found other prediction models that are currently being used in clinical practice but without scientific publications,90 and web risk calculators launched for use while the scientific manuscript is still under review.91 These unpublished models naturally fall outside the scope of this review of the literature.

Implications for practice

All 66 reviewed prediction models were found to have a high risk of bias, and evidence from independent external validation of the newly developed models is currently lacking. However, the urgency of diagnostic and prognostic models to assist in quick and efficient triage of patients in the covid-19 pandemic might encourage clinicians to implement prediction models without sufficient documentation and validation. Although we cannot let perfect be the enemy of good, earlier studies have shown that models were of limited use in the context of a pandemic,92 and they could even cause more harm than good.93 Therefore, we cannot recommend any model for use in practice at this point.

We anticipate that more covid-19 data at the individual participant level will soon become available. These data could be used to validate and update currently available prediction models.16 For example, one model predicted progression to severe covid-19 within 15 days of admission to hospital with promising discrimination when validated externally on two small but unselected cohorts.32 A second model to diagnose covid-19 pneumonia showed promising discrimination at external validation.43 A third model that used computed tomography based total severity scores showed good discrimination between patients with mild, common, and severe-critical disease.54 Because reporting in these studies was insufficiently detailed and the validation was in small Chinese datasets, validation in larger, international datasets is needed. Owing to differences between healthcare systems (eg, Chinese and European) on when patients are admitted to and discharged from hospital, and testing criteria for patients with covid-19, we anticipate most existing models will need to be updated (that is, adjusted to the local setting).

When creating a new prediction model, we recommend building on previous literature and expert opinion to select predictors, rather than selecting predictors in a purely data driven way16; this is especially important for datasets with limited sample size.94 Based on the predictors included in multiple models identified by our review, we encourage researchers to consider incorporating several candidate predictors: for diagnostic models, these include age, body temperature or fever, signs and symptoms (such as shortness of breath, headache, shiver, sore throat, and fatigue), sex, blood pressure, creatinine, basophils, neutrophils, lymphocytes, alanine transaminase, albumin, platelets, eosinophils, calcium, bilirubin, epidemiological contact history, and potentially features derived from lung imaging. For prognostic models, these predictors include age, features derived from computed tomography scoring, lactate dehydrogenase, sex, C reactive protein, comorbidity (including hypertension, diabetes, cardiovascular disease, respiratory disease), and lymphocyte count. By pointing to the most important methodological challenges and issues in design and reporting of the currently available models, we hope to have provided a useful starting point for further studies aiming to develop new models, or to validate and update existing ones.
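
To make this concrete, the sketch below (simulated toy data with hypothetical variable names, not a model we endorse) fixes the candidate predictors in advance and shrinks their coefficients with a ridge penalty instead of selecting predictors in a data driven way:

```python
# Toy illustration: pre-specified predictors with ridge penalisation; data simulated.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(2)
n = 150                                            # deliberately small sample
df = pd.DataFrame({
    "age":         rng.integers(30, 90, n),
    "ldh":         rng.normal(300, 80, n),         # lactate dehydrogenase
    "crp":         rng.exponential(40, n),         # C reactive protein
    "lymphocytes": rng.normal(1.0, 0.4, n),
})
risk = 1 / (1 + np.exp(-(0.05 * (df["age"] - 60) + 0.01 * (df["ldh"] - 300))))
df["severe"] = rng.binomial(1, risk)               # simulated outcome

X, y = df[["age", "ldh", "crp", "lymphocytes"]], df["severe"]  # fixed a priori
model = make_pipeline(
    StandardScaler(),                              # put predictors on one scale
    LogisticRegression(C=0.5),                     # L2 (ridge) shrinkage
).fit(X, y)
```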

This living systematic review and first update has been conducted in collaboration with the Cochrane Prognosis Methods Group. We will update this review and appraisal continuously to provide up-to-date information for healthcare decision makers and professionals as more international research emerges over time.

Conclusion

Several diagnostic and prognostic models for covid-19 are currently available and they all report good to excellent discriminative performance. However, these models are all at high risk of bias, mainly because of non-representative selection of control patients, exclusion of patients who had not experienced the event of interest by the end of the study, and model overfitting. Therefore, their performance estimates are probably optimistic and misleading. We do not recommend any of the current prediction models to be used in practice. Future studies aimed at developing and validating diagnostic or prognostic models for covid-19 should explicitly address the concerns raised. Sharing data and expertise for development, validation, and updating of covid-19 related prediction models is urgently needed.

What is already known on this topic

  • The sharp recent increase in coronavirus disease 2019 (covid-19) incidence has put a strain on healthcare systems worldwide; an urgent need exists for efficient early detection of covid-19 in the general population, for diagnosis of covid-19 in patients with suspected disease, and for prognosis of covid-19 in patients with confirmed disease

  • Viral nucleic acid testing and chest computed tomography imaging are standard methods for diagnosing covid-19, but are time consuming

  • Earlier reports suggest that elderly patients, patients with comorbidities (chronic obstructive pulmonary disease, cardiovascular disease, hypertension), and patients presenting with dyspnoea are vulnerable to more severe morbidity and mortality after infection

What this study adds

  • Three models were identified that predict hospital admission from pneumonia and other events (as proxy outcomes for covid-19 pneumonia) in the general population

  • Forty seven diagnostic models were identified for detecting covid-19 (34 based on medical images), and 16 prognostic models for predicting mortality risk, progression to severe disease, or length of hospital stay

  • Proposed models are poorly reported and at high risk of bias, raising concern that their predictions could be unreliable when applied in daily practice

Acknowledgments

We thank the authors who made their work available by posting it on public registries or sharing it confidentially. A preprint version of the study is publicly available on medRxiv.

Footnotes

  • Contributors: LW conceived the study. LW and MvS designed the study. LW, MvS, and BVC screened titles and abstracts for inclusion. LW, BVC, GSC, TPAD, MCH, GH, KGMM, RDR, ES, LJMS, EWS, KIES, CW, JAAD, PD, MCH, NK, AL, KL, JM, CLAN, JBR, JCS, CS, NS, MS, RS, TT, SMJvK, FSvR, LH, and MvS extracted and analysed data. MDV helped interpret the findings on deep learning studies and MMJB, LH, and MCH assisted in the interpretation from a clinical viewpoint. RS and FSvR offered technical and administrative support. LW and MvS wrote the first draft, which all authors revised for critical content. All authors approved the final manuscript. LW and MvS are the guarantors. The corresponding author attests that all listed authors meet authorship criteria and that no others meeting the criteria have been omitted.

  • Funding: LW is a postdoctoral fellow of Research Foundation–Flanders (FWO). BVC received support from FWO (grant G0B4716N) and Internal Funds KU Leuven (grant C24/15/037). TPAD acknowledges financial support from the Netherlands Organisation for Health Research and Development (grant No 91617050). KGMM and JAAD gratefully acknowledge financial support from Cochrane Collaboration (SMF 2018). KIES is funded by the National Institute for Health Research School for Primary Care Research (NIHR SPCR). The views expressed are those of the author(s) and not necessarily those of the NHS, the NIHR, or the Department of Health and Social Care. GSC was supported by the NIHR Biomedical Research Centre, Oxford, and Cancer Research UK (programme grant C49297/A27294). JM was supported by the Cancer Research UK (programme grant C49297/A27294). PD was supported by the NIHR Biomedical Research Centre, Oxford. MOH is supported by the National Heart, Lung, and Blood Institute of the United States National Institutes of Health (grant No R00 HL141678). The funders played no role in study design, data collection, data analysis, data interpretation, or reporting. The guarantors had full access to all the data in the study, take responsibility for the integrity of the data and the accuracy of the data analysis, and had final responsibility for the decision to submit for publication.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf and declare: no support from any organisation for the submitted work; no competing interests with regards to the submitted work; LW discloses support from Research Foundation–Flanders (FWO); RDR reports personal fees as a statistics editor for The BMJ (since 2009), consultancy fees for Roche for giving meta-analysis teaching and advice in October 2018, and personal fees for delivering in-house training courses at Barts and The London School of Medicine and Dentistry, and also the Universities of Aberdeen, Exeter, and Leeds, all outside the submitted work; MS coauthored the editorial on the original article.

  • Ethical approval: Not required.

  • Data sharing: The study protocol is available online at https://osf.io/ehc47/. Most included studies are publicly available. Additional data are available upon reasonable request.

  • The lead authors affirm that the manuscript is an honest, accurate, and transparent account of the study being reported; that no important aspects of the study have been omitted; and that any discrepancies from the study as planned have been explained.

  • Dissemination to participants and related patient and public communities: The study protocol is available online at https://osf.io/ehc47/.


This is an Open Access article distributed in accordance with the terms of the Creative Commons Attribution (CC BY 4.0) license, which permits others to distribute, remix, adapt and build upon this work, for commercial use, provided the original work is properly cited. See: http://creativecommons.org/licenses/by/4.0/.
