Use of generative artificial intelligence in medical research

BMJ 2024;384 (Published 31 January 2024). Cite this as: BMJ 2024;384:q119

Linked Research

Publishers’ and journals’ instructions to authors on use of generative artificial intelligence in academic and scientific publishing

Nazrul Islam, associate professor,1 Mihaela van der Schaar, professor2

1Faculty of Medicine, University of Southampton, Southampton, UK
2Cambridge Centre for AI in Medicine, Cambridge, UK

Correspondence to: N Islam Nazrul.Islam{at}

Policies must be standardised to ensure accountability and maintain public trust

Among the most groundbreaking advancements in artificial intelligence (AI) is the emergence of generative AI. These models, including tools such as ChatGPT and Bard, harness deep learning techniques to interpret and replicate complex patterns found in vast datasets. Their primary function is to generate original text or images, demonstrating an understanding of intricate associations, such as the context and semantics of language. For instance, chatbots such as ChatGPT interpret user inputs, typically sentences or paragraphs, to craft unique, contextually relevant responses. This capability stems from training on diverse and extensive textual and imaging data, enabling the model to predict and generate language and images with high accuracy. ChatGPT had 100 million monthly users within the first two months of its introduction, showing the considerable impact and appeal of generative AI.1

As expected, the use of generative AI tools in academic research and writing has been increasing.1 In response, many academic publishers and journals, including The BMJ, have published or updated their policies on the responsible use of generative AI in academic research …
