US hospital algorithm discriminates against black patients, study finds
BMJ 2019; 367 doi: https://doi.org/10.1136/bmj.l6232 (Published 28 October 2019) Cite this as: BMJ 2019;367:l6232
A computer algorithm used by many US hospitals to predict risk and allocate follow-up resources directs significantly more extra care to white patients than to black patients with the same disease burden, an analysis published in Science has found.1
The researchers did not name the particular algorithm but called it “one of the largest and most typical examples of a class of commercial risk-prediction tools that, by industry estimates, are applied to roughly 200 million people in the United States each year.”
The algorithm is undermined by its use of previous health expenditure as the metric by which to judge patients’ health needs, the study found. It has long been known that black patients’ health expenditure is lower on average than that of equally sick white patients, owing to various causes such as poverty, mistrust engendered by past deliberate discrimination, and bias among doctors. By making past expenditure a guide to future need, the algorithm perpetuates and deepens this gap, the researchers said.
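The mechanism the researchers describe can be illustrated with a toy simulation. Everything below is hypothetical (group labels, distributions, and the size of the spending gap are invented for illustration, not taken from the study): two groups are equally sick on average, but one spends less on care for the same level of illness, so ranking patients by past cost under-selects that group even though ranking by actual illness would not.

```python
import random

random.seed(0)

def simulate_patient(group):
    # Illness burden: both groups are equally sick on average (hypothetical numbers).
    illness = random.gauss(2.0, 1.0)
    # Spending: for the same illness, group B spends less on average,
    # standing in for the access gap the study describes.
    spending_factor = 0.7 if group == "B" else 1.0
    cost = max(0.0, illness * spending_factor + random.gauss(0, 0.3))
    return {"group": group, "illness": illness, "cost": cost}

patients = ([simulate_patient("A") for _ in range(5000)]
            + [simulate_patient("B") for _ in range(5000)])

def top_share_b(patients, key, top_n=1000):
    # Share of group B among the top 10% of patients under a given ranking.
    top = sorted(patients, key=key, reverse=True)[:top_n]
    return sum(p["group"] == "B" for p in top) / top_n

# Rank by cost (the biased proxy) versus by illness (the actual need).
by_cost = top_share_b(patients, key=lambda p: p["cost"])
by_need = top_share_b(patients, key=lambda p: p["illness"])

print(f"Group B share of top 10% ranked by cost:    {by_cost:.0%}")
print(f"Group B share of top 10% ranked by illness: {by_need:.0%}")
```

Ranking by illness gives group B roughly half the high-risk slots, as expected for equally sick groups, while ranking by cost gives it far fewer; this is the same pattern, in miniature, as the study's 46.5% versus 17.7% comparison.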
Risk prediction algorithms have proliferated since President Barack Obama’s 2010 Affordable Care Act offered financial incentives to health systems for keeping patients healthier through preventive care. They aim to sort patients by risk and direct those who stand to benefit most towards extra services such as screening and physical therapy.
Working with an unnamed “large academic hospital,” the researchers studied all white and black primary care patients enrolled in risk-based care programmes from 2013 to 2015. They excluded Asian and Hispanic patients from the analysis. The sample population numbered 49 618, of whom 6079 (12.3%) were black.
The black patients were significantly less healthy, with higher average blood pressure and blood sugar concentration, and an average of 1.9 chronic diseases each (white patients had 1.2).
Patients allocated to the hospital’s care management programme in a simulation of its algorithm were 17.7% black. But when the researchers simulated an algorithm with “no predictive gap between Blacks and Whites,” it allocated 46.5% of these places to black patients.
The Washington Post later named the algorithm’s manufacturer as Optum, a pharmacy benefit manager and a division of UnitedHealth Group. Other algorithms performing similar tasks are made by Johns Hopkins University, the University of California at San Diego, 3M Health Information Systems, the Centers for Medicare and Medicaid Services, and IBM.
The algorithm’s maker replicated the study’s findings on its own patient database and is now working to remove the bias.
“Predictive algorithms that power these tools should be continually reviewed and refined, and supplemented by information such as socio-economic data, to help clinicians make the best-informed care decisions for each patient,” Optum spokesman Tyler Mason told the Washington Post.2 “As we advise our customers, these tools should never be viewed as a substitute for a doctor’s expertise and knowledge of their patients’ individual needs.”
Commenting on the study in an accompanying article in Science, Ruha Benjamin of Princeton University wrote, “Whereas in a previous era, the intention to deepen racial inequities was more explicit, today coded inequity is perpetuated precisely because those who design and adopt such tools are not thinking carefully about systemic racism.”3
Even in an algorithm that deliberately excluded race as a factor, she noted, “a seemingly benign choice of label (that is, health cost) initiates a process with potentially life-threatening results.”
Algorithms offer the promise of neutrality, wrote Benjamin, but if the human assumptions behind them are faulty they have “the power to unjustly discriminate at a much larger scale than biased individuals.”