Is The BMJ the right journal for my research article?

What kind of research does The BMJ publish?

The BMJ gives priority to articles reporting original, robust research studies that can improve decision making in medical practice, policy, education, or future research and will be important to general medical readers internationally.

The BMJ welcomes studies that will aid the translation of knowledge and implementation of evidence into practice and policy, and is particularly interested in evaluations of the comparative effectiveness of interventions. This knowledge may be most relevant to the day to day decisions doctors make with patients, to public health, or to policy decisions about healthcare.

We are pleased to consider a wide range of study types, as long as the right design has been used to answer a relevant, important, and sufficiently original research question. These include:

  • randomised controlled trials of the effectiveness and safety of treatments and other clinical or healthcare interventions for patients with common diseases;
  • studies of diagnostic tests;
  • clinical and epidemiological observational studies (particularly on aetiology, prognosis, risk, and safety);
  • evaluations of educational and quality improvement initiatives;
  • qualitative studies that help to explain why and how doctors and patients do things; and
  • systematic reviews of all of these study types.

The BMJ is also interested in original studies on research methodology, research reporting, peer review, and evidence based medicine. The same criteria apply to these as to all the other types of research we consider. We will give priority to studies that will be relevant and interesting to enough of our readers (not only to editors, statisticians, and other experts on methodology) and will help them make better decisions when conducting research; searching for evidence; or using research evidence in their practice, their teaching, or their learning. We also publish essays about designing, conducting, and reporting research, in our research methods and reporting section.

We hope that authors seeking open access publication for their research will always consider The BMJ, as we fully comply with the open access mandates of all major funders. If you do not succeed at The BMJ, you may wish to try submitting your article to our sister journal BMJ Open next.

The doctors we aim to reach with these articles work in many different settings and countries; most are specialists in hospitals, community units, and clinics or family doctors in primary care. The BMJ is still the only high impact general medical journal that publishes original research from and about primary care every week. The team also includes editors working in clinical practice and research in the United States and the rest of Europe as well as the UK.

Why does The BMJ reject so many papers?

We receive many more research articles than we can publish, and send fewer than half for external peer review. Our rejection rate for research is currently around 93%.

Our decisions are based mainly on the suitability of the specific research question and the study design: indeed, we will often publish an article reporting a study with “negative” results if its research question was sufficiently important and well answered. By the same token we may reject an article where the overall subject is relevant, topical, and important but the study didn’t ask a research question that added enough.

We appreciate that authors do not want to waste time by sending their research articles to the wrong journal, and it isn’t always possible for us to answer every emailed presubmission inquiry.

Another resource, the Authors' Submission Toolkit: A practical guide to getting your research published, summarises general tips and best practices to increase awareness of journals' editorial requirements, how to choose the right journal, submission processes, publication ethics, peer review, and effective communication with editors - much of which has traditionally been seen as mysterious to authors.

We have produced the checklist below to help you decide whether The BMJ is the right journal for your work. If your work does not seem to fit in The BMJ you may prefer to try another journal with a more specialist or local readership or a higher acceptance rate. You may also want to consider submitting to BMJ Open, our online only sister journal which also publishes all research open access. BMJ Open authors are asked to pay article-processing charges on acceptance.

Things that make publication in The BMJ impossible or unlikely

Overall lack of suitability

You have searched thebmj.com, PubMed, Google Scholar, or other databases and found that The BMJ never or very rarely publishes research on this topic.

Types of research The BMJ never publishes

  • Unethical research;
  • Pure laboratory based research;
  • Animal research; and
  • Physiological, pharmacological, sociological, or other studies conducted with healthy volunteers rather than with patients or whoever the research question is really aimed at.

The research questions in laboratory, animal, and volunteer studies are too preliminary for most readers of The BMJ.

Types of research The BMJ does not usually publish (even if well conducted) owing to lack of usefulness to readers

  • Cost of illness studies: in isolation these are of limited usefulness;
  • Surveys of self-reported practice, rather than observed practice;
  • Simple ("open loop") audits without intervention and reaudit;
  • Placebo controlled trials of drugs or devices: these are usually much less useful to readers of the journal than trials comparing new interventions/regimens head to head against current best practice/intervention; and
  • Economic evaluations of single clinical trials, when The BMJ has not also published the main trial outcomes. We do not feel it is sufficiently useful to our readers to publish such an economic evaluation without the trial.

Insufficiently important or clear research question

  • The study lacked a clear research question before you began collecting data.
  • The study began by looking at routinely collected data, with a research question and method devised retrospectively.
  • The research question and answer (conclusions) are unlikely to improve real decisions in clinical practice, public health, health policy, health services delivery, or future research.

Insufficiently original, relevant, or important overall message

  • The findings confirm but do not add to previous research.
  • This article is too similar to one you have submitted or published elsewhere.
  • More robust studies with similar findings have been published.
  • Other researchers and authors are unlikely to use and cite these findings.
  • The message has little relevance or usefulness to readers beyond the setting of the study.
  • The study is about one or more rare conditions usually seen only by specialists.

Our online only sister journal BMJ Open (research articles only) may be a suitable publication for this kind of manuscript. BMJ Open peer reviews for sound methods and reporting; BMJ Open authors are asked to pay article-processing charges on acceptance.

Inappropriate study designs

The study design was not appropriate for the research question.

Research question                  Appropriate study design for that question
Does this treatment work?          Systematic review, RCT
How good is a diagnostic test?     (Prospective) cohort study
Should we screen?                  RCT
What causes this disease?          RCT, prospective cohort study, case control study (for rare diseases)
What did people think or do?       Cohort study, cross sectional survey, qualitative study

Suboptimal study designs

  • Case series with no (or inadequate) control group. We will consider such an article, however, if it is sufficiently informative and important for clinical and public health practice or policy and is compelling, well described, and topical, eg describing the early management of a major threat to public health.
  • Retrospective study using case notes, charts, or other routinely collected records from only one or a few hospitals, general practices, or doctors’ offices.
  • Intervention study with no control group. We may consider such an article, however, if it reports a large scale (eg regional or national) public health or health services intervention, or a "journalology"/peer review research study where it was impossible to set up a formal control group.
  • Non-randomised trial of a comparison or intervention. We may consider such an article, however, if it reports the evaluation of a quality improvement initiative where the rationale and process evaluation may have been more important than the outcomes; a large scale public health or health services intervention; or a "journalology"/peer review research study where randomisation was impossible.

Internal validity/robustness of the study

  • It had insufficient statistical power, making interpretation difficult.
  • It used unvalidated research instrument(s).
  • It was a trial with any of these problems:

    • Inappropriate control group(s) or no controls;
    • Inadequate randomisation;
    • Inadequate allocation concealment;
    • Inadequate blinding/masking;
    • Important and invalid deviation(s) from the trial protocol;
    • No power calculation;
    • Lack of statistical power;
    • Analyses that were not preplanned;
    • No analysis of harms.
  • It was a systematic review for which the search:
    • Used terms that were insufficiently defined, limiting the relevance of the findings to clinical practice, policy, or future research;
    • Was inappropriately limited to recent studies or to studies published in English.
  • It was a systematic review in which only one researcher appraised the studies.
  • It was an observational study with important confounding and bias (owing to absent or incomplete measurement of, or adjustment for, important factors).
  • It was a survey with a low response rate (<65%), particularly where:
    • a high response rate should have been achievable (eg the questions were not particularly sensitive or personal and potential participants were not hard to reach);
    • there was important non-response bias; or
    • there was no valid analysis of non-response bias.
  • It was a qualitative study in which:

    • There was no theoretical framework.
    • The sampling strategy was not clearly described.
    • The sampling was driven by convenience rather than theory.
    • Procedures for data analysis were not clearly described and theoretically justified.
    • The data analysis did not relate to the original research question(s).
    • The data were purely descriptive.
    • The methods did not yield any more insightful results than a simple quantitative study could provide.

External validity/generalisability of the study

  • The inclusion/exclusion criteria were not clearly defined.
  • The inclusion/exclusion criteria and/or sampling method yielded participants who were unrepresentative of most patients with the condition.
  • The study was a trial of a drug or device with limited relevance to general readers because it compared a new intervention or new regimen/indication against placebo, rather than head to head against best current treatment(s).
  • In a randomised trial the doses/administration/other parameters did not reflect normal practice in all study arms, introducing bias.
  • The study’s research question and findings are out of date because too much time has passed since the study was completed (this will vary with the field of research and the rate of innovation within it).

Trial registration

The article reports a randomised trial that should have been registered (see our page on the research article type for more information), but was not.

The manuscript:

  • Has a poorly written abstract that does not clearly state the research question or study design - this might prompt rapid rejection, as we initially screen by abstract.
  • Does not state the research question in the article sufficiently clearly for readers, editors, and reviewers to understand why you did the study.
  • Gives conclusions that are not directly supported by the results.
  • Does not follow The BMJ's detailed advice about research articles (see the page about the research article type for more information).

These points about the manuscript are all fixable, and we would not usually reject an article just because it is not well presented. Nor do we expect the grammar to be perfect when the authors do not have English as their first language. However, very poor presentation or use of a format that looks nothing like an article in The BMJ may deter busy editors, given the large volume of articles submitted.

Ethics problems, research misconduct, and publication misconduct

  • The study’s conduct/reporting did not comply fully with the World Medical Association’s Declaration of Helsinki on Ethical Principles for Medical Research Involving Human Subjects. In particular:

    • Participants did not give informed consent.
    • The study lacked necessary approval by, or a waiver from, a research ethics committee/institutional review board.
  • Individual participants are potentially identifiable in the article but did not give written consent for publication.
  • The study involved possible research misconduct or publication misconduct (including plagiarism, falsification of data, improprieties of or disputes over authorship, and failure to comply with legislative and regulatory requirements affecting research). The BMJ editors usually ask for clarification and explanation rather than simply rejecting such work, and may refer such articles to The BMJ ethics committee (see The BMJ ethics committee page), to the Committee on Publication Ethics (COPE), or to the authors' institution(s), funder(s), or licensing body(ies) before reaching a decision on the manuscript.