Equity in medical devices: trainers and educators play a vital role
BMJ 2024; 385 doi: https://doi.org/10.1136/bmj.q1091 (Published 15 May 2024) Cite this as: BMJ 2024;385:q1091
- Margaret Whitehead, professor emerita, Department of Public Health, Policy and Systems1,
- Enitan Carrol, professor, Department of Clinical Infections, Microbiology and Immunology1,
- Frank Kee, professor, Centre for Public Health2,
- Raghib Ali, professor, MRC Epidemiology Unit3,
- Chris Holmes, professor, Department of Statistics4
- Correspondence to: M Whitehead mmw{at}liverpool.ac.uk
The impact of racial bias on the performance of medical devices used in the NHS gained public and political attention during the covid-19 pandemic, with the recognition that patients with darker skin tones may be put at increased risk of hidden hypoxaemia.123 Further inequities in medical device performance are coming to light, particularly with the advent of AI enabled devices in medicine. This was the impetus for the appointment of the UK Independent Review of Equity in Medical Devices, of which we are members.4
Our report, published in March 2024, found that inequities can be introduced at every stage in the device lifecycle, with the potential to harm patients. While tackling these inequities requires systemwide solutions, there’s a central role for education and training of healthcare professionals, as well as others in the digital technology sector.4
Skin tone diversity training
Knowledge of the signs and symptoms of diseases across skin tones in the diverse UK population is essential for accurate assessment and diagnosis, yet skin tone diversity is often missing from clinical education.5 Healthcare professionals need to be trained in the use of medical devices to examine skin lesions, as clinical signs may differ according to skin tone, and their training should include images of skin lesions in all skin tones.4 The British Association of Dermatologists’ comprehensive review of training images regarding skin diversity is an exemplar of what can be done immediately to support and improve knowledge among healthcare professionals.5
Beyond dermatology, there are many non-invasive devices that take measurements through a patient’s skin using light related technologies, in which results may vary by skin tone. The potential for racial bias in the performance of such devices is substantial yet not widely known.4
The education, training, and continuing professional development of healthcare professionals need to be expanded to build awareness of, and the capacity to deal with, inequities arising from medical devices whose performance varies by skin tone. Clinicians must be able to identify such sources of bias, much as they are currently trained to recognise potential safeguarding issues.
The relevant academic bodies could start by reviewing how the curriculum requirements for medical education and continuing professional development cover the equity issues that arise when using medical devices in general, and skin tone diversity issues in particular. Appropriate training materials should then be developed in response, including materials raising awareness of the need to consider inequity when reporting adverse events to regulators.4
Potential biases in AI enabled devices
The use of AI in screening and clinical decision making tools is increasing in many areas of healthcare.6 Yet there’s a lack of understanding among many frontline professionals of the potential for biases in device development and deployment and of the need to consider equity at every stage.
A widespread problem is that women, ethnic minorities, and low income groups are under-represented in the datasets used to “train” the AI algorithms, including clinical trials and device calibration. Images of lesions in skin cancer datasets, for instance, are overwhelmingly from light skinned people and carry the potential for underdiagnosis of skin cancers in patients with darker skin tones.7
Flawed assumptions can also be built into the algorithmic modelling. For example, the assumption that past healthcare costs (driven by insurance claims) could proxy for healthcare need was a source of bias against low income and Black patients in a widely used prediction tool in the US.8
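To make the mechanism concrete, a toy simulation (all figures invented for illustration, not taken from the study) shows how ranking patients by past cost rather than by need systematically deprioritises a group whose barriers to access suppress spending:

```python
# Hypothetical illustration of proxy bias: allocating extra support by
# past healthcare *cost* instead of healthcare *need*. All numbers are
# invented for teaching purposes.
import random

random.seed(0)

# Two groups with the same distribution of need (0-10 scale), but group B
# incurs lower cost per unit of need (e.g. because of poorer access to
# care), so cost understates B's need.
patients = []
for group, cost_per_need in [("A", 1000), ("B", 800)]:
    for _ in range(500):
        need = random.uniform(0, 10)
        patients.append({"group": group, "need": need,
                         "cost": need * cost_per_need})

# An algorithm that targets the top 20% by predicted cost selects far
# fewer group B patients, despite identical need.
patients.sort(key=lambda p: p["cost"], reverse=True)
share_b_by_cost = sum(p["group"] == "B" for p in patients[:200]) / 200

# Ranking by need itself selects the two groups roughly equally.
patients.sort(key=lambda p: p["need"], reverse=True)
share_b_by_need = sum(p["group"] == "B" for p in patients[:200]) / 200
```

Under the invented figures, roughly half of the highest need patients belong to group B, but well under half of those ranked highest by cost do, so the cost proxy quietly diverts resources away from the disadvantaged group.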
Bias may arise from misguided beliefs about the role of physiological differences among subgroups. For example, the incorrect assumption that Black patients have higher muscle mass led to the addition of a race coefficient intended to improve estimated glomerular filtration rate (eGFR), but this artefactually raised eGFR in Black patients.9 This inhibited the early identification of kidney disease and potentially delayed referral to nephrologists. Such evidence triggered the removal of the race coefficient from the eGFR estimation equation used in clinical practice, at least in the US.9 Comparable problems arising with race coefficients have been observed in the prediction of vaginal birth, childhood urinary tract infection, and fracture risk, in each case underestimating the risk.9
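As an illustration of the arithmetic, the 2009 CKD-EPI creatinine equation multiplied its result by a race coefficient of 1.159 for Black patients. A minimal sketch using the published 2009 coefficients (a teaching illustration only, not a clinical calculator) shows how the coefficient can lift an otherwise identical result above a common referral threshold of 60 mL/min/1.73 m²:

```python
# Sketch of the 2009 CKD-EPI creatinine equation, which applied a 1.159
# race coefficient for Black patients (removed in the 2021 refit).
# For illustration only -- not for clinical use.

def ckd_epi_2009(scr_mg_dl: float, age: int, female: bool, black: bool) -> float:
    """Estimated GFR (mL/min/1.73 m^2) under the 2009 CKD-EPI equation."""
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    egfr = (141
            * min(scr_mg_dl / kappa, 1) ** alpha
            * max(scr_mg_dl / kappa, 1) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159  # the race coefficient at issue
    return egfr

# Identical creatinine, age, and sex: the coefficient raises eGFR by
# 15.9%, moving this patient from below to above the 60 threshold that
# commonly flags chronic kidney disease and prompts referral.
without_coeff = ckd_epi_2009(1.3, 60, female=False, black=False)
with_coeff = ckd_epi_2009(1.3, 60, female=False, black=True)
```

With these inputs the estimate without the coefficient falls just under 60, while the coefficient pushes it to roughly 69, so the same laboratory result would flag kidney disease in one patient and not the other.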
Electronic patient records may already contain systemic inequities: for example, women can present with cardiac disease differently from men and have it consistently underdiagnosed.10 When patient records are used as a source of AI training data this bias against women could be amplified.11 Thus, training data used in such AI applications require careful human curation and checking for inherent biases. Medical data from patient records are often unstandardised, thereby limiting the clinical validity and robustness of the AI algorithms.12
Educating for evidence based, AI enabled medicine
Our review recommends systemwide education and training to improve health equity literacy among AI developers on the one hand, with reciprocal AI literacy of healthcare professionals, service commissioners, and the public on the other.4
It’s essential that healthcare professionals are equipped with the right skills to deploy and monitor AI safely and effectively.12 There’s an important role for UK academic and professional bodies in developing and providing training, education, and guidance in identifying and managing equity issues in AI devices.
In parallel, we need to educate healthcare professionals about the importance of recruiting a more representative range of participants in developmental testing of devices and in clinical trials, as well as the continual monitoring of new device deployment. Finally, all involved in healthcare have a responsibility to improve the quality of health data on ethnicity and the social determinants of health—for example, through standardised and more granular and complete recording of ethnicity.
We foresee broader educational challenges in the practice and evaluation of the next generation of evidence based, AI enabled medicine, when clinicians and algorithms “work together.” This will have medical, ethical, legal, and governance implications, which undergraduate and postgraduate educators alike will have to consider to avoid further perpetuating bias in medical devices.
Footnotes
Competing interests: The authors were appointed by the secretary of state for health and social care to carry out the Independent Review of Equity in Medical Devices with the following roles: Margaret Whitehead as chair, with panel members Raghib Ali, Enitan Carrol, Chris Holmes, and Frank Kee. Raghib Ali is also chief executive and chief medical officer of Our Future Health.
Provenance and peer review: Not commissioned, not externally peer reviewed.