Can we trust AI not to further embed racial bias and prejudice?

BMJ 2020; 368 doi: https://doi.org/10.1136/bmj.m363 (Published 12 February 2020) Cite this as: BMJ 2020;368:m363
- Poppy Noor, journalist
- The Guardian, New York
When Adewole Adamson received a desperate call at his Texas surgery one afternoon in January 2018, he knew something was up. The call was not from a patient but from someone in Maryland who wanted to speak to the dermatologist and assistant professor in internal medicine at Dell Medical School at the University of Texas about black people and skin cancer.
Over the next few weeks, in a series of phone calls, Adamson would learn a lot about the caller. Avery Smith is a software developer in his 30s whose wife, LaToya, had died the year before from melanoma. Smith had spent years researching how black people with melanoma were being underserved in healthcare, so when he said that the algorithms driving new cancer software were racist, Adamson was listening.
Melanoma is most common in white skin. Black people are less likely to get it, but they are more likely to die from it.1 Smith said that the images used to train algorithms to detect melanoma were predominantly, if not entirely, of white skin. He explained that this meant artificial intelligence would struggle to detect cancerous moles in darker skin, and therefore that the praise from some quarters for AI that had proved as good as the best clinicians at detecting skin cancers in white people meant little for ethnic minority communities.
Of course, none of this is new. Mole checking guidance and the images used to train doctors to recognise skin cancers are also predominantly of white skin—possibly explaining the late detection of melanoma in black people. But the scale and speed at which AI could embed …