
Opinion: How Are Social Media Influencing Vaccination?

Generative artificial intelligence can have a role in combating vaccine hesitancy

BMJ 2024; 384 doi: https://doi.org/10.1136/bmj.q69 (Published 16 January 2024) Cite this as: BMJ 2024;384:q69


  1. Heidi J Larson, professor (affiliations 1, 2)
  2. Leesa Lin, assistant professor (affiliations 1, 3)

Affiliations:

  1. London School of Hygiene and Tropical Medicine, London, UK
  2. Institute for Health Metrics and Evaluation, University of Washington, Seattle, USA
  3. Laboratory of Data Discovery for Health, Science Park, Hong Kong

Artificial intelligence has potential to counter vaccine hesitancy while building trust in vaccines, but it must be deployed ethically and responsibly, argue Heidi Larson and Leesa Lin

Given the sluggish pace of traditional scientific approaches, artificial intelligence (AI), particularly generative AI, has emerged as a significant opportunity to tackle complex health challenges, including those in public health.1 Against this backdrop, interest has focused on whether AI has a role in bolstering public trust in vaccines and helping to minimise vaccine hesitancy, which the World Health Organization named as one of the top 10 global health threats.2

Understanding vaccine hesitancy and misinformation

Vaccine hesitancy is a state of indecision before accepting or refusing a vaccination.3 It is a dynamic and context specific challenge that varies across time, place, and vaccine type. It is influenced by a range of factors, including sociocultural and political dynamics, as well as individual and group psychology. Its multifaceted and temporal nature makes it a moving target, difficult to predict and harder still to tackle. Additionally, the emergence of misinformation in public health, notably during crises such as the covid-19 pandemic, calls for rapid, data driven responses.4

Traditional public health approaches often struggle to keep pace with the swift dissemination of misinformation. Despite initiatives to counter misinformation through fact checking, such misinformation still retains a substantial influence over people’s beliefs, trust, and decision making processes.56 This underscores the need for innovative strategies that not only counteract misinformation but also delve into the psychological factors that render misinformation more compelling than factual information.

Another factor is that specific demographic groups are known to be particularly susceptible to misinformation. People with conservative ideologies, younger people, those of lower socioeconomic status, and people with specific mental health conditions exhibit heightened susceptibility to, and propagation of, misinformation,78 which affects their adherence to health advice and is linked to lower vaccine uptake.

AI’s role in addressing vaccine hesitancy

AI identifies misinformation by analysing text against a well developed knowledge base—a collection of verified facts and data created by experts. Large language models (LLMs), a subset of generative AI, provide a valuable tool to help discern patterns, sentiments, and pivotal factors that influence vaccine acceptance or reluctance. AI systems, including LLMs, detect emotionally charged language through natural language processing (NLP), which allows them to analyse and understand human language. AI uses algorithms to identify keywords and phrases that are typically associated with emotional tones, such as joy, anger, or sadness. LLMs’ unique analytical capabilities enable precise and context specific insights, thereby facilitating the development of targeted intervention strategies.9 Generative AI, combined with methods such as sentiment tracking and topic modelling, can interpret and generate content, including text and images, and resolve complex data analysis challenges rapidly.10 These approaches allow real time understanding of hesitancy topics and trends,11 which is essential to inform interventions such as data driven chatbots that provide or contextualise health information.12
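To make the keyword-and-phrase idea concrete, a minimal, purely illustrative sketch of lexicon-based emotion tagging is shown below. The lexicon, function name, and example post are hypothetical inventions for illustration; production NLP systems use far richer models than keyword matching.

```python
# Illustrative sketch: tag a social media post with emotion keyword counts.
# The lexicon here is a tiny hypothetical example, not a validated resource.
EMOTION_LEXICON = {
    "anger": {"outrage", "furious", "scandal"},
    "fear": {"dangerous", "risk", "harm"},
    "joy": {"relieved", "grateful", "protected"},
}

def tag_emotions(text: str) -> dict:
    """Count lexicon hits per emotion in a lowercased, tokenised post."""
    tokens = [token.strip(".,!?") for token in text.lower().split()]
    return {
        emotion: sum(token in words for token in tokens)
        for emotion, words in EMOTION_LEXICON.items()
    }

post = "Furious parents say this vaccine is dangerous, an outrage!"
print(tag_emotions(post))  # {'anger': 2, 'fear': 1, 'joy': 0}
```

A real pipeline would replace the keyword lookup with a trained sentiment model and feed the resulting emotion signals into downstream analyses such as topic modelling, but the basic shape—map language to emotional tone, then aggregate—is the same.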

Nevertheless, such tools, if not employed judiciously, have the potential to generate misinformation intentionally or unintentionally by referencing inaccurate information, influencing public opinion, and discouraging vaccination.13 Effectively addressing these challenges demands continuous vigilance and educational initiatives.14

Through the analysis of discourse on social media, health forums, and news articles, LLMs can empower public health experts to identify the root causes of vaccine hesitancy, tailor communication strategies, and debunk misinformation with data driven precision and context specific insights.

The deployment of LLMs in health communication, particularly their ability to detect public concerns early and inform communication, is an important opportunity.15 AI’s precision and capacity to replicate the desired tone position it as a pivotal player in public health messaging; it can deliver clear, concise, and contextually relevant information. Some research has found that patients sometimes prefer AI generated messages because they have a more empathetic tone than those written by doctors,16 although one systematic review found that public opinion on AI in healthcare was mixed.17

AI’s adaptability in health communication extends beyond message generation. It can customise messages according to specific demographics, amplifying their relevance and influence. This capability is particularly useful in tackling vaccine hesitancy, where the underlying reasons for reluctance can diverge substantially among distinct populations, each characterised by unique cultural and emotional landscapes.

Risks around AI

While AI offers substantial benefits to public health, it also has risks. AI’s capacity to replicate human-like content risks reproducing biases and amplifying misinformation, especially around sensitive issues such as vaccine acceptance. There are also risks beyond misinformation: AI can amplify the emotional drivers of vaccine hesitancy, especially anger and fear, by generating emotionally stimulating messages that are highly contagious and more likely to be believed and spread.181920 Moreover, the current trend in AI development is towards ease of use for everyone, not just engineers, much as smartphones made computing more accessible; this broadening access also widens the pool of people able to generate persuasive content at scale.

This phenomenon highlights the urgency of enhancing the capabilities of AI tools, enabling them not only to provide more accurate information in responses but also to identify and counter emotionally charged yet false content that can discourage vaccine acceptance. It calls for a nuanced approach that balances the development, monitoring, and regulation of AI, coupled with educating the public about its risks and benefits.

As AI continues to evolve, integration into public health programmes requires a commitment to ethical principles, transparency, and a dedication to augmenting human expertise rather than replacing it. Only through this holistic approach can we fully unlock AI’s capacity to navigate and tackle the complexities of emotions and misinformation and build vaccine confidence to advance public health.

Acknowledgments

HL and LL acknowledge financial support from the AIR@InnoHK administered by the Innovation and Technology Commission of the Government of the Hong Kong Special Administrative Region. The funder had no role in the study design, data collection, data analysis, data interpretation, or writing of the manuscript.

Footnotes

  • Competing interests: HL and LL are part of the research group of the Vaccine Confidence Project at LSHTM, which has received research grants from GlaxoSmithKline (GSK) and Merck Investigator Initiated Studies not associated with this project.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

  • The article is part of a collection that was proposed by the Advancing Health Online Initiative (AHO), a consortium of partners including Meta and MSD, and several non-profit collaborators (https://www.bmj.com/social-media-influencing-vaccination). Non-research articles were independently commissioned by The BMJ with advice from Sander van der Linden, Alison Buttenheim, Briony Swire-Thompson, and Charles Shey Wiysonge. Peer review, editing, and decisions to publish articles were carried out by the respective BMJ journals. Emma Veitch was the editor for this collection.


This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 4.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/4.0/.

References