Misaligned AI constitutes a growing public health threat
BMJ 2023; 381 doi: https://doi.org/10.1136/bmj.p1340 (Published 12 June 2023)
Cite this as: BMJ 2023;381:p1340

Advances in artificial intelligence (AI) have the potential to transform medicine by generating novel cures, improving diagnostics, making care more accessible, reducing costs, and alleviating the workload of clinicians.1 These technologies could help people live longer, healthier lives, yet, as many physicians and AI researchers have noted, AI also poses health risks.2 3 It is difficult to ensure that algorithms reliably “capture our norms and values, understand what we mean or intend, and, above all, do what we want,” a challenge that has been referred to as the alignment problem.4 The risks related to misaligned AI (when systems’ behaviours do not match the objectives or principles of their human creators or users) constitute a growing public health threat that the medical community can and should respond to.
Misaligned algorithms have already jeopardised the health of millions of people. For instance, the stated goal of one commercial algorithm used throughout the US healthcare system was to identify patients who would benefit from additional care. Yet the algorithm used healthcare costs as a proxy for healthcare need. Because black patients often faced greater barriers to accessing care, they accrued lower costs than white patients despite being sicker, and so the algorithm deprioritised them.5 6 Lapses like this have led diverse stakeholders in the medical community to raise concerns.7
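The mechanism is easy to state in code. The following is a minimal sketch with invented numbers, not the commercial algorithm itself: it illustrates how ranking patients by a cost proxy can invert the intended ranking by need whenever barriers to care suppress spending.

```python
# Minimal sketch (hypothetical data, not the commercial algorithm's code):
# ranking patients by past *cost*, a proxy, diverges from ranking by *need*,
# the stated goal, whenever barriers to care suppress spending.

patients = [
    # (id, true_need_score, historical_cost) -- all values are invented
    ("A", 9, 3000),   # very sick; barriers to care kept past spending low
    ("B", 4, 9000),   # moderately sick; high past utilisation
    ("C", 7, 2500),   # sick; low past utilisation
    ("D", 3, 7000),   # mildly sick; high past utilisation
]

# Misaligned objective: prioritise patients by historical cost.
by_cost = [p[0] for p in sorted(patients, key=lambda p: p[2], reverse=True)]

# Intended objective: prioritise patients by actual need.
by_need = [p[0] for p in sorted(patients, key=lambda p: p[1], reverse=True)]

print("By cost (proxy):", by_cost)  # ['B', 'D', 'A', 'C']
print("By need (goal): ", by_need)  # ['A', 'C', 'B', 'D']
```

In the sketch, the two orderings disagree precisely for the patients whose low spending reflects barriers to care rather than good health, which is the pattern the cited audit found.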
Outside healthcare systems, algorithms designed to promote health have also risked doing the opposite. In 2021, Mark Zuckerberg announced Facebook’s goal of facilitating the rollout of covid-19 vaccines, in part by promoting vaccine related content from bodies such as the World Health Organization. Yet Facebook posts from official sources were often flooded with critical comments, including conspiracy theories and misinformation. In its efforts to promote pro-vaccine content, Facebook’s engagement driven algorithm reportedly ended up showing anti-vaccine comments to users 775 million times a day, potentially undermining vaccine uptake.8
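A similar sketch, again with invented data rather than Facebook’s actual system, shows how a comment ranker that optimises for predicted engagement can surface misinformation above official sources.

```python
# Minimal sketch (hypothetical scores, not Facebook's ranking system):
# ordering comments by predicted engagement can push provocative
# anti-vaccine comments above official pro-vaccine replies.

comments = [
    # (label, predicted_engagement, is_misinformation) -- invented values
    ("conspiracy theory",     480, True),   # outrage attracts clicks and replies
    ("WHO guidance link",     120, False),
    ("anti-vaccine anecdote", 350, True),
    ("clinician's reply",      90, False),
]

# Objective the ranker optimises: engagement, not accuracy.
ranked = sorted(comments, key=lambda c: c[1], reverse=True)
print([c[0] for c in ranked])
# -> ['conspiracy theory', 'anti-vaccine anecdote',
#     'WHO guidance link', "clinician's reply"]
```

Nothing in the objective penalises misinformation, so amplifying it is not a malfunction of the ranker; it is the ranker doing exactly what it was asked to do.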
AI has already harmed people’s health in concrete ways, but as algorithms become more powerful, complex, and broader in scope and uptake, experts are also raising more speculative concerns. A recent statement from the Center for AI Safety, which reads “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” was signed by AI pioneers and hundreds of AI researchers, chief executives of leading AI companies, and government officials, as well as global health leaders, medical school professors, and bioethicists.9
AI development is advancing rapidly, with billions of dollars being poured into companies that aim to develop artificial general intelligence (AGI), or machine intelligence that surpasses human abilities.10 11 12 13 14 For the private actors at the cutting edge of AGI research, public health is often, at best, a secondary consideration. Many experts believe AGI is possible, but even if it is not, the algorithms these companies develop along the way may be capable of causing considerable harm.15
The role of the medical community
The growing risk of misaligned AI is relevant to the medical community for two reasons. Firstly, the potential health benefits of AI are often used to justify certain avenues of development. Some of those striving to build AGI have, for example, suggested that their technologies might improve access to healthcare.16 Yet these benefits come with risks, and the medical community is well placed to weigh them up as AI capabilities advance. Indeed, the medical community may have an especially important part to play, as computer scientists have at times overhyped the medical benefits of AI advances. For instance, a preeminent computer scientist argued in 2016 that we should stop training radiologists because they would soon be replaced by AI, a prediction that has aged poorly.17 18
Secondly, any matter that could harm the health of millions of people—and that experts fear could be catastrophic—is definitionally a public health risk. Although it is difficult to specify the nature of a theoretical future threat, AI already has the power to cause great harm, even in the hands of benevolent agents: researchers recently used an AI model designed to aid drug discovery to instead generate 40 000 potential chemical warfare agents in six hours.19 A powerful algorithm misaligned with users’ values—or in the hands of nefarious agents—could pose even greater risks.
As others have also recently noted, physicians have a successful history of shaping discourse around transformative technologies that pose public health risks.20 Physicians played a vital role in forming Nobel Peace Prize winning organisations such as the International Physicians for the Prevention of Nuclear War (IPPNW) and the International Campaign to Abolish Nuclear Weapons, both of which have been pivotal in global efforts to ban nuclear testing and establish international treaties for nuclear disarmament.21 The parallels between achieving nuclear capability and advanced AI are clear: both are dual use technologies capable of unleashing considerable benefits and harms; both are important, century defining technologies; and both risk escalating geopolitical tensions.22 IPPNW had data from the Hiroshima and Nagasaki bombings to support its advocacy work, but it would be a mistake to await similar catastrophes when AI experts are already sounding the alarm.
To reduce the health risks of AI, the medical community should advocate for regulations that are endorsed by AI experts and consistent with those that typically govern science, medicine, and public health. Such regulations might include safety standards covering pre-deployment reporting, public incident reporting, external review processes, licensing requirements, and clear liability rules.23 These measures could help ensure that AI systems are predictable and interpretable.24 25
More generally, when governments are considering policy proposals pertaining to AI, the medical community should ensure that any risks created by AI misalignment are viewed through a public health lens. Pushes to use AI in a new capacity or to develop its capabilities should always be appropriately tempered by an understanding of its repercussions for population health and health equity.
Finally, just as physicians would only prescribe drugs that regulatory agencies have deemed safe, healthcare systems, scientific laboratories, and other health actors should only adopt AI models that have been evaluated and deemed adequately robust to misalignment and misuse.26 In doing so, we can help create ethical norms and market forces that incentivise adherence to safety standards.
The potential of AI in medicine is undeniable, but so are the dangers of misaligned AI. By advocating for policies that tackle the growing risk posed by misaligned AI, the medical community can help ensure these transformative technologies are used to build a healthier, safer world.
Footnotes
Competing interests: LP has received a research grant from Effective Altruism Funds, which has received support from Open Philanthropy. BT is a global health researcher for a non-profit organisation, Rethink Priorities, which receives support from Open Philanthropy, though the opinions expressed in this manuscript are the authors’ own. Open Philanthropy separately funds work related to AI safety, though neither LP nor BT receives any funding related to this.
Provenance: not commissioned; not externally peer reviewed.
Acknowledgement: We would like to thank Emma Bluemke, Alaina Bever, Ryan Carey, Holly Elmore, Alex Lintz, and Quique Toloza for their helpful comments on an early draft.