Patient commentary: Stop hyping artificial intelligence—patients will always need human doctors
BMJ 2018;363:k4669. doi: https://doi.org/10.1136/bmj.k4669 (Published 07 November 2018)
- Michael Mittelman, executive director1,
- Sarah Markham, visiting researcher2,
- Mark Taylor, head of impact3
- 1American Living Organ Donor Fund, Philadelphia, PA, USA
- 2Institute of Psychiatry, Psychology and Neuroscience, King’s College London, London, UK
- 3National Institute for Health Research Central Commissioning Facility, London, UK
- Correspondence to: M Mittelman
We regularly hear that artificial intelligence (AI) is transforming the world, including healthcare, and how this disruptive innovation could spell the end for many professions as we currently know them, including lawyers and doctors.1 2
In healthcare, however, it is crucial that technology company executives, researchers, hospital managers, and academics ask the right questions about AI’s impact on patients.
Between the three of us, we have over 61 years of patient history in health systems worldwide. We have different needs, conditions, and comorbidities, including multiple sclerosis, end stage renal disease, mental illness, epilepsy, and Crohn’s disease. None of us can envisage our relationships with our many human doctors changing because of artificial intelligence.
Patients haven’t always benefited from the promises of technology. The implementation of electronic health records, which has made little difference to patients to date, is but one example. Hospitals and health systems have spent exorbitant amounts of money only to find that records are inaccurate and inaccessible.
AI is sure to proceed further into the hospital, examination room, and primary care, but patients should not have to bear the brunt of any developments that have not been proved beneficial for their current condition.
This isn’t Luddism. Technology companies have given patients few reasons to trust them with all their medical data. We continue to see systems breached and patient information exposed. What happened to “First do no harm”? It’s not clear that medical ethics and responsibility for patients currently outweigh the motive of corporate profit.
Picture the raw emotion
Imagine a mother and father being told that their 3 year old son will lose his kidneys from a rare disease. Picture the raw emotion on their faces. Next, picture yourself as a new college graduate, seeing a new doctor in a new city. That 22 year old has a brand new job and is excited about her future. She waits in the examination room only to be told that she has an invisible illness, a mental health condition.
Now imagine those same scenarios with no healthcare professional in the room. The only interaction is with an artificial form of intelligence. A machine. It is unthinkable.
We can’t be reduced to data
Could patients ever rely on a machine to manage their entire care pathway? Decisions about technology continue to be implemented without true partnership with patients, without knowing what patients want or need. There is much more to medicine than analysing data.
More fundamentally, AI lacks the ability to sense, process, interpret, replicate, or deploy some very human forms of communication.
Sensing that your doctor truly cares about what you are going through, and really does want to help, makes a profound difference to patients’ experience of, and ability to manage, their health. This shared enterprise is innately human and requires a genuinely intimate and empathetic connection between two beings of the same kind.
The therapeutic power of the patient-clinician encounter depends on a relationship between two humans who can both fully contextualise and appreciate the patient’s values, wishes, and preferences.
Ill and vulnerable
Patients need to be cared for by people, especially when we are ill and at our most vulnerable. A machine will never be able to show us true comfort. The ability to understand fully the “human condition” will always be essential to health management.
AI may have the potential to become a useful and innovative aide in healthcare, but we hope there will always be room for humanity—human healthcare professionals. Ultimately, no one wants to be told he or she is dying by an entity that can have no understanding of what that means. We see AI as the servant rather than the director of our medical care.
Competing interests: We have read and understood BMJ policy on declaration of interests and have no relevant interests to declare.
Provenance and peer review: Commissioned; not externally peer reviewed.