
News Analysis

Row over Babylon’s chatbot shows lack of regulation

BMJ 2020; 368 doi: https://doi.org/10.1136/bmj.m815 (Published 28 February 2020) Cite this as: BMJ 2020;368:m815
Gareth Iacobucci, The BMJ

Four regulators oversee AI in healthcare, but there is still a lack of accountability, finds Gareth Iacobucci, after one doctor’s concerns about a triage system are dismissed

The debate over regulation of artificial intelligence in healthcare was reignited this week when a doctor who had repeatedly questioned the safety of Babylon Health’s “chatbot” triage service (box 1) went public with his concerns. David Watkins, a consultant oncologist at the Royal Marsden NHS Foundation Trust, has regularly tweeted videos (under his Twitter name @DrMurphy1) of Babylon’s chatbot triage to show what he says is the bot missing potentially significant red flag symptoms.

Box 1

How do patients use Babylon’s chatbot?

Babylon Health’s symptom checker works through a chatbot that patients can use on a smartphone app or website. Users enter their symptoms, and after asking follow-up questions the checker identifies possible causes and gives advice, such as “book a consultation with a GP” or “go to hospital.” Since 2017 NHS patients have been able to access it if they sign up to Babylon’s GP at Hand digital service. A growing number of NHS trusts are looking to partner with Babylon to make the technology available to patients in secondary care too.


Watkins made his identity public for the first time on 24 February when he spoke at a debate on artificial intelligence in healthcare at the Royal Society of Medicine.1 He presented an example of a Babylon chatbot triage he had carried out as a test, where he posed as a 67 year old white man who was a 20 …
