Artificial intelligence versus clinicians
BMJ 2020; 369 doi: https://doi.org/10.1136/bmj.m1326 (Published 03 April 2020) Cite this as: BMJ 2020;369:m1326
Dear Editor,
Rampton discusses the impact of artificial intelligence (AI) in healthcare (1).
Imagine a world where your doctor is made in a factory, not a medical school. The new “foundation doctor” no longer exists because you are now seen by a robot. A little far-fetched? Perhaps not.
Imagine a doctor that never tires and never forgets, a healthcare system that no longer needs to apologise for “human error” because, quite frankly, it doesn’t exist. Imagine never having to wait for a GP appointment or, even better, an NHS bed, because you could have a programmed robot at the touch of a button. Is this the care we’re heading towards and, more pertinently, is this the care we want?
Doctors have a duty to inform patients to the best of their abilities - to hold their hands during the difficult times and to listen to their concerns. A synthetic alternative would hardly suffice. We live in a world in which all the information we could ever want is readily available, but it is the act of exploring our fears with a healthcare worker that truly provides the most reassurance. A robot cannot be used to counsel individuals, to offer compassion or to provide the ‘holistic’ care that is central to our practice. When it comes to breaking bad news - a relaxed environment, a private room and a human’s empathy are hard to compete with.
However, AI or ‘machines’ can assist us with our daily tasks so that we can focus on using our scientific knowledge to optimise patient care. AI has enabled us to collect large samples of data within trusts, use clinical coding to improve operational efficiency and accelerate the process of genomic interpretation. These processes are carefully controlled by expert clinicians and technicians alike. Yet, on my usual commute on the London Underground, the overhead panels are plastered with advertisements for phone apps - apps that promise to reach a diagnosis within two clicks. Can we trust these apps? In years to come, health apps will dominate the industry, and it is our responsibility as clinicians to understand their basis and provide validation. Without sufficient knowledge, it will be impossible to advise patients on the merits and pitfalls of online apps when they eagerly ask our opinion.
As future clinicians, we need to evolve alongside AI. We must understand its risks to avoid them and appreciate its benefits to maximise its use. As medical students, we have attended ‘Healthcare Hackathons’. These events allow clinicians, software developers and graphic designers to collaborate and find solutions to simple problems using technology. By attending such events, we have been inspired by the proactive roles clinicians can play in designing and implementing apps to enhance patient care. We are equipped with a strong scientific basis and know what our patients desire, whereas developers are experts in programming and assessing feasibility. It is time for clinicians to drive innovation in healthcare and improve communication with developers to ensure AI fully meets its purpose.
The future of medicine is likely to be diverse, and we live in exciting times where anything is possible. There is no doubt that AI may radically change how care is provided for the better; however, we must draw a line to ensure that care isn’t compromised. Clinicians must accept the implementation of AI into their practice and not shy away from or avoid its presence. We must put it to the test and be in a position to critique it accurately.
AI may be efficacious, but it is up to us to judge its true effectiveness in the real world.
References:
1. Rampton V. Artificial intelligence versus clinicians. BMJ 2020;369:m1326.
Competing interests: No competing interests
Machines vs. Humans: A tug of war in healthcare
Dear Editor,
It is often asked: could machines replace doctors? The introduction of new technologies such as Artificial Intelligence (AI) into healthcare has heated up this debate in recent times. We believe that there are several pros and cons to the use of these technologies in the management of patients.
The pros of Artificial Intelligence:
In recent years, AI has been increasingly incorporated throughout healthcare delivery systems. We agree with the author that deep learning AI systems, which learn by themselves from large sets of examples, continually integrate new knowledge and perfect themselves at a speed that humans cannot match (1). Across the globe there is an acute shortage of qualified doctors and other healthcare professionals, especially in remote areas, putting patients at a disadvantage. AI may help to alleviate some of these shortages, and the stresses of burnout in doctors, by sharing some of their work. Automating some of the routine tasks that take up a doctor’s time, such as documentation, administrative reporting, or even triaging images, can free doctors to focus on the more important challenges of managing their patients (2). AI can further offer diagnoses and treatments, issue reminders for medication, create precise analytics for pathology and imaging, and predict overall health based on electronic health records and personal history, again easing some of the burden placed on doctors (3).
Strong advocates of AI emphasise that most symptoms of illness have physical causes and that the devotion and concern simulated by machines sufficiently replicates human forms of communication. Machines can now provide mental health assistance via chatbot, monitor patient health, and even predict cardiac arrest, seizures, or sepsis (1).
The cons of Artificial Intelligence:
AI-powered predictive analytics can surely identify potential ailments faster than human doctors, but when it comes to decision-making, AI cannot yet fully and safely take over from human physicians. In other industries, machine learning bots can often correct themselves quickly after making a mistake, with or without human intervention, and with little to no harm done. However, there is no room for trial and error when it comes to patient health, wellbeing and safety (3). Sceptics of AI argue that it is overhyped, profit driven and not always in patients’ best interests, and that AI may not outperform doctors (1). Moreover, human doctors can relate to their patient as a fellow mortal and vulnerable being, and can gain holistic knowledge of the patient’s illness as it relates to his or her life. Thus doctors and other healthcare workers often establish a genuinely intimate and empathetic connection with their patients. This requires knowledge of social relationships and norms that is not easily accessible to machines (1). In addition, AI presents challenges of data privacy, security and medical error, which are compounded by the fact that most algorithms need access to massive datasets for training and validation (2).
Balancing Human factors and Machines:
The majority of AI experts believe that a blend of human experience and digital augmentation will be the natural settling point for AI in healthcare. Each type of intelligence will bring something good for healthcare, and both should work together to improve the delivery of care (2). The increased comfort with AI technologies should not represent a decrease in the value patients place on a face-to-face interaction with an empathetic, informed, and attentive human physician. Technological advancements are rapidly changing the face of healthcare, offering a range of benefits but also some serious drawbacks. As we move further into the fourth industrial revolution, patients and practitioners alike will be keeping an eye on the latest innovations and advancements (3).
But centuries of caring for patients have taught us that “doctor-patient relationships can have a therapeutic effect, regardless of the treatment prescribed”, and hence machines can never replace humans in their totality; they are only tools for assisting them and augmenting their efficiency.
References:
1. Rampton V. Artificial intelligence versus clinicians. BMJ 2020;369:m1326. doi: https://doi.org/10.1136/bmj.m1326 (Published 03 April 2020)
2. Bresnick J. Arguing the Pros and Cons of Artificial Intelligence in Healthcare. HealthITAnalytics. 17 September 2018. https://healthitanalytics.com/news/arguing-the-pros-and-cons-of-artifici....
3. The Dangers of AI in the Healthcare Industry [Report]. Thomas. 7 May 2019. https://www.thomasnet.com/insights/the-challenges-and-dangers-of-ai-in-t...
Competing interests: No competing interests