Analysis

Health information for all: do large language models bridge or widen the digital divide?

BMJ 2024; 387 doi: https://doi.org/10.1136/bmj-2024-080208 (Published 11 October 2024) Cite this as: BMJ 2024;387:e080208

Linked Editorial

Artificial intelligence and global health equity

  1. Arthur Tang, senior lecturer1,
  2. Neo Tung, undergraduate student2,
  3. Huy Quang Nguyen, medical doctor3,
  4. Kin On Kwok, associate professor4,
  5. Stanley Luong, senior lecturer1,
  6. Nhat Bui, undergraduate student1,
  7. Giang Nguyen, undergraduate student1,
  8. Wilson Tam, associate professor and deputy head (research)5
  1. School of Science, Engineering and Technology, RMIT University, Vietnam
  2. School of Mathematics and Statistics, University of Melbourne, Australia
  3. Oxford University Clinical Research Unit, Vietnam
  4. JC School of Public Health and Primary Care, Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
  5. Alice Lee Centre for Nursing Studies, National University of Singapore, Singapore

  Correspondence to: K O Kwok kkokwok@cuhk.edu.hk

Large language models have the potential to enhance equitable access to health information, but their poor performance in some languages could exacerbate the digital divide in healthcare, say Arthur Tang and colleagues

Key messages

  • Large language models (LLMs) like ChatGPT could have a role in narrowing the health information digital divide, democratising access to healthcare

  • But evidence indicates that LLMs might exacerbate the digital disparity in health information access in low and middle income countries

  • Most LLMs perform poorly in low resource languages such as Vietnamese, resulting in the dissemination of inaccurate health information and posing potential public health risks

  • Coordinated effort from policy makers, research funding agencies, big technology corporations, the research community, healthcare practitioners, and linguistically underrepresented communities is crucial to bridge the gap in AI language inclusivity

Imagine asking a health information chatbot for advice on atrial fibrillation symptoms and receiving information on Parkinson’s disease—a completely unrelated condition. This is not a fictional scenario; it is what currently happens when you inquire about medical information in the Vietnamese language using OpenAI’s GPT-3.5 (through ChatGPT). This mix-up, far from a simple error, illustrates a critical problem with artificial intelligence (AI) driven healthcare communication in languages like Vietnamese. We explore the complex interplay between AI advancements and equitable access to accurate health information in low resource languages—a term used in computational linguistics to describe languages with limited digital resources available for computational model development.1

Large language model technology in medical communication

Large language models (LLMs) like OpenAI’s GPT-3.5 and GPT-4 and Google’s Gemini Pro have sparked interest among medical researchers and practitioners with their ability to generate human-like conversations across various medical domains. Their capacity to draw information from massive training datasets enables them to generate comprehensive and seemingly accurate responses. The robustness of LLMs and the natural interactions they provide could transform digital health communication and education, …
