

Clinical applications of machine learning algorithms: beyond the black box

BMJ 2019;364:l886 (Published 12 March 2019)
  1. David S Watson, doctoral student [1,2,3],
  2. Jenny Krutzinna, postdoctoral researcher [1],
  3. Ian N Bruce, professor of rheumatology and director [4,5],
  4. Christopher EM Griffiths, foundation professor of dermatology [5,6],
  5. Iain B McInnes, Muirhead professor of medicine [7],
  6. Michael R Barnes, reader of bioinformatics [2,3],
  7. Luciano Floridi, professor of philosophy and ethics of information and director of the digital ethics lab [1,3]
  1. Oxford Internet Institute, University of Oxford, 1 St Giles’, Oxford OX1 3JS, UK
  2. Centre for Translational Bioinformatics, William Harvey Research Institute, Queen Mary University of London, London, UK
  3. The Alan Turing Institute, London, UK
  4. Arthritis Research UK Centre for Epidemiology, Centre for Musculoskeletal Research, Faculty of Biology, Medicine and Health, The University of Manchester, Manchester, UK
  5. NIHR Manchester Biomedical Research Centre, Manchester University Hospitals NHS Foundation Trust, Manchester M13 9WL, UK
  6. The Dermatology Centre, Salford Royal NHS Foundation Trust, The University of Manchester, Salford, UK
  7. Institute of Infection, Immunity and Inflammation, University of Glasgow, Glasgow, UK
  Correspondence to: D Watson david.watson{at}

To maximise the clinical benefits of machine learning algorithms, we need to rethink our approach to explanation, argue David Watson and colleagues

Key messages

  • Machine learning algorithms may radically improve our ability to diagnose and treat disease

  • For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models

  • Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers

Machine learning algorithms are an application of artificial intelligence designed to automatically detect patterns in data without being explicitly programmed. They promise to change the way we detect and treat disease and will likely have a major impact on clinical decision making. The long term success of these powerful new methods hinges on the ability of both patients and doctors to understand and explain their predictions, especially in complicated cases with major healthcare consequences. This will promote greater trust in computational techniques and ensure informed consent to algorithmically designed treatment plans.

Unfortunately, many popular machine learning algorithms are essentially black boxes—oracular inference engines that render verdicts without any accompanying justification. This problem has become especially pressing with the passage of the European Union’s General Data Protection Regulation (GDPR), which some scholars argue provides citizens with a “right to explanation.” Any institution engaged in algorithmic decision making is now legally required to justify those decisions, on request, to any person whose data it holds—a challenge that most are ill equipped to meet. We urge clinicians to collaborate with patients, data scientists, and policy makers to ensure the successful clinical implementation of machine learning (fig 1). We outline important goals and limitations that we hope will inform future research.
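To make the distinction between a black box and an explanation concrete, the following sketch (not from the article; all data and names are hypothetical) trains a deliberately opaque classifier on synthetic patient records, then applies permutation importance—one common post-hoc explanation technique—to reveal which feature actually drives its predictions:

```python
# Illustrative sketch: a "black box" classifier and a post-hoc explanation.
# The data are synthetic; feature 0 (a hypothetical biomarker) determines
# the outcome, while feature 1 is pure noise.
import random

random.seed(0)

def make_data(n=400):
    X, y = [], []
    for _ in range(n):
        signal = random.gauss(0, 1)
        noise = random.gauss(0, 1)
        X.append([signal, noise])
        y.append(1 if signal > 0 else 0)
    return X, y

class BlackBox:
    """A deliberately opaque model: a 5-nearest-neighbour majority vote.
    It gives predictions but no human-readable justification."""
    def fit(self, X, y):
        self.X, self.y = X, y
        return self
    def predict(self, X):
        preds = []
        for row in X:
            nearest = sorted(
                range(len(self.X)),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(row, self.X[i])),
            )[:5]
            votes = sum(self.y[i] for i in nearest)
            preds.append(1 if votes >= 3 else 0)
        return preds

def accuracy(model, X, y):
    return sum(p == t for p, t in zip(model.predict(X), y)) / len(y)

def permutation_importance(model, X, y, feature):
    """Accuracy drop when one feature is shuffled, severing its link
    to the outcome: a model-agnostic measure of that feature's weight."""
    shuffled = [row[:] for row in X]
    col = [row[feature] for row in shuffled]
    random.shuffle(col)
    for row, v in zip(shuffled, col):
        row[feature] = v
    return accuracy(model, X, y) - accuracy(model, shuffled, y)

X, y = make_data()
model = BlackBox().fit(X, y)
imp_signal = permutation_importance(model, X, y, 0)
imp_noise = permutation_importance(model, X, y, 1)
print(imp_signal > imp_noise)  # the informative feature matters more
```

The model itself offers no rationale for any individual verdict; the permutation test only recovers, after the fact, which input it relies on. This gap between accurate prediction and accountable justification is precisely the problem the authors raise.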

Fig 1 Overview of the opportunities and challenges associated with black box models in …
