
  1. Laure Wynants, assistant professor1 2 3,
  2. Luc J M Smits, professor1,
  3. Ben Van Calster, associate professor2 3 4

  1. Department of Epidemiology, CAPHRI Care and Public Health Research Institute, Maastricht University, Maastricht, Netherlands
  2. Department of Development and Regeneration, KU Leuven, Leuven, Belgium
  3. EPI-Centre, KU Leuven, Leuven, Belgium
  4. Department of Biomedical Data Sciences, Leiden University Medical Centre, Leiden, Netherlands

Correspondence to: L Wynants laure.wynants{at}

Well conducted and transparently reported trials would be an excellent start

In academia and society at large, artificial intelligence (AI) in healthcare attracts tremendous attention. Although many researchers and commentators claim that AI improves screening, diagnosis, and prognostication, those who delve deeper will notice a scarcity of external validation studies and randomised controlled trials evaluating the true impact of AI on healthcare.[1,2,3] Findings from the few published randomised controlled trials are mixed. In one trial, endoscopy assisted by an automatic AI detection system found more colorectal adenomas than did unassisted endoscopy.[4] In another, an AI platform for diagnosing childhood cataracts was less accurate than a senior consultant.[5] To gauge the quality of such evidence, readers need a detailed account of study methods and results. Systematic reviews, however, show that studies on AI are often poorly reported.[2,6]

Reporting guidelines

New extensions of the SPIRIT (Standard Protocol Items: Recommendations for Interventional Trials) (doi:10.1136/bmj.m3210) and CONSORT (Consolidated Standards of Reporting Trials) (doi:10.1136/bmj.m3164) reporting guidelines, published in The BMJ, encourage authors to be transparent and comprehensive when …
