Can we trust AI? Depends on the data.
As a lay patient advocate who worked in IT before retiring, I have taken up opportunities to contribute to studies of AI in imaging, studies searching for connections in multimorbidity, and others. Preparing for this work included familiarising myself with the AI environment, reading papers, and looking at the work of the British Computer Society, which is exploring ways to reduce or remove bias in AI.
If AI uses data, then that data must be truly representative of the population it is intended to serve. A second consideration is that development and programming are tasks mostly undertaken by young (often white) males, who may not appreciate the subtle considerations necessary to be fair and appropriate with regard to age, sex, ethnicity and other basic attributes of human beings. These attributes affect symptoms of disease, the effectiveness of drugs and treatments, outcomes, and side effects. Past failures in AI have resulted from too little attention to these basic matters. Development teams need to be mixed in terms of ethnicity and sex to try to reduce unconscious bias. This was evident in the DeepMind kidney injury app, where only 6.32% of the data concerned female patients; as a result, the app did not work as effectively for them.
To help reduce data bias, the British Computer Society is recommending greater involvement of women and minorities in undergraduate and postgraduate courses in computing and data science.
Data companies and bodies such as DATA-CAN, the cancer data hub, can contribute. DATA-CAN is hosting a Black Interns Initiative providing young Black people with the opportunity to experience a range of careers in health data science.
Rigorous equality impact assessments should be implemented in all AI work, and in the creation and programming of apps, just as they should be used in clinical studies.
Competing interests: No competing interests