Health information for all: do large language models bridge or widen the digital divide?
BMJ 2024;387 doi: https://doi.org/10.1136/bmj-2024-080208 (Published 11 October 2024). Cite this as: BMJ 2024;387:e080208
Linked editorial: Artificial intelligence and global health equity
- Arthur Tang, senior lecturer1,
- Neo Tung, undergraduate student2,
- Huy Quang Nguyen, medical doctor3,
- Kin On Kwok, associate professor4,
- Stanley Luong, senior lecturer1,
- Nhat Bui, undergraduate student1,
- Giang Nguyen, undergraduate student1,
- Wilson Tam, associate professor and deputy head (research)5
- 1School of Science, Engineering and Technology, RMIT University, Vietnam
- 2School of Mathematics and Statistics, University of Melbourne, Australia
- 3Oxford University Clinical Research Unit, Vietnam
- 4JC School of Public Health and Primary Care, Chinese University of Hong Kong, Hong Kong Special Administrative Region, China
- 5Alice Lee Centre for Nursing Studies, National University of Singapore, Singapore
- Correspondence to: K O Kwok kkokwok@cuhk.edu.hk
Key messages
Large language models (LLMs) such as ChatGPT could have a role in narrowing the health information digital divide, democratising access to healthcare
But evidence indicates that LLMs might exacerbate the digital disparity in health information access in low and middle income countries
Most LLMs perform poorly in low resource languages such as Vietnamese, resulting in the dissemination of inaccurate health information and posing potential public health risks
Coordinated effort from policy makers, research funding agencies, big technology corporations, the research community, healthcare practitioners, and linguistically underrepresented communities is crucial to bridge the gap in AI language inclusivity
Imagine asking a health information chatbot for advice on atrial fibrillation symptoms and receiving information on Parkinson’s disease—a completely unrelated condition. This is not a fictional scenario; it is what currently happens when you inquire about medical information in the Vietnamese language using OpenAI’s GPT-3.5 (through ChatGPT). This mix-up, far from a simple error, illustrates a critical problem with artificial intelligence (AI) driven healthcare communication in languages like Vietnamese. We explore the complex interplay between AI advancements and equitable access to accurate health information in low resource languages—a term used in computational linguistics to describe languages with limited digital resources available for computational model development.1
Large language model technology in medical communication
Large language models (LLMs) like OpenAI’s GPT-3.5 and GPT-4 and Google’s Gemini Pro have sparked interest among medical researchers and practitioners with their ability to generate human-like conversations across various medical domains. Their capacity to draw information from massive training datasets enables them to generate comprehensive and seemingly accurate responses. The robustness of LLMs and the natural interactions they provide could transform digital health communication and education, …