
Views & Reviews Personal View

Research papers should omit their authors’ affiliations

BMJ 2014; 349 doi: https://doi.org/10.1136/bmj.g6439 (Published 03 November 2014) Cite this as: BMJ 2014;349:g6439
Matthew Harris, Commonwealth Fund Harkness fellow in healthcare policy and practice, New York University, New York, USA
Correspondence to: mjh599{at}nyu.edu

Omitting the provenance of research from published reports might reduce bias when readers assess whether to use it, thinks Matthew Harris

We pay a lot of attention to the internal validity of research. Was the research well designed? Were there biases? Were confounders appropriately adjusted for? Were the methods adequately described? We do not, however, pay a lot of attention to how we consume that research. All things being equal, would you pay more attention to a study from Harvard University in the United States or one from the University of Abuja in Nigeria?

If you chose Harvard University, you are not alone. Analyses of submissions to Gastroenterology and Cardiovascular Research have shown that reviewers judge research articles from their own country more favourably than those from other countries.1 2 In one controversial experiment, published scientific articles were resubmitted, with fictitious names and institutions, to the prestigious journals that had published them 18 months earlier. Eight of the nine articles that made it through the review process were rejected: the research was identical, but where it had been conducted mattered.3

Selective perception bias

We may pay more attention to research from one context than another, but our judgment of what constitutes a credible, reliable, or comparable source is shaped by selective perception bias.4 Such biases cause us to judge research according to our prior view of where it was conducted and by whom, rather than on its merit alone. This process may be subconscious and hard to recognise.

If you chose Harvard University you might have thought that research from the US is more immediately relevant to your own context. There may be similarities in terms of language, culture, national wealth, regional location, geopolitical ties, heritage, legal or political systems, ethnic and religious ties, or the country’s intellectual and research capacity. You might also have thought that research from a reputable institution is more likely to be well conducted. However, at the level of implementation none of these factors necessarily determines whether that research can be generalised outside its specific context.5 Given that all the specifics of a study (including treatment conditions, location, treatment administration, investigator, timing, and scope and extent of measurement) potentially limit its ability to be generalised, there is no intrinsic reason why research from the US should be considered more applicable to the UK than research from Nigeria. Bias in favour of Harvard means that research from less well known institutions will be unduly discounted.

The emerging discipline of “reverse innovation” in healthcare—the idea that healthcare solutions developed in poor countries can be adopted in richer countries—is a good example of why selective perception bias should not be allowed to limit the spread of ideas. Despite many lean innovations arising from poorer countries,6 7 there are few, if any, examples of their adoption in rich countries.8 When was the last time that the UK implemented any innovation or research that had been developed and trialled in Nigeria?

The lack of diffusion of policies and innovations from poorer to richer countries cannot be attributed entirely to our cognitive biases. We may also be less exposed to innovations from low income settings.9 For example, in psychiatry only 6% of the literature is published from regions that represent 90% of the global population,10 in HIV/AIDS only 6.8% of research originates from Africa,11 and in cardiovascular research 8% of the literature originates from developing countries, even though they account for 90% of the disease burden.12

Equally, the characteristics of the innovation (whether it is compatible, testable in a trial, and observable, and whether it has advantages over existing technologies) and the characteristics of the adopter’s context (openness to change, local leadership, available resources) will all have a role.13 Yet cognitive bias does exist. We need to understand how much it affects our interpretation of research evidence from poorer countries.

Use of decision support tools such as DECIDE (developing and evaluating communication strategies to support informed decisions and practice based on evidence)14 can promote unbiased assessment of the causal mechanisms underlying a link between an exposure and an outcome in a single context and of whether such mechanisms can be generalised to other contexts. Such a tool can help to lessen the effect of our cognitive biases; however, it is not widely available.

Perhaps journals should have a policy of publishing research without the authors’ affiliations, to reduce bias. In the meantime, the next time you sneak a peek at an author’s affiliations, ask yourself whether they mattered to you.


Footnotes

  • Competing interests: I have read and understood BMJ policy on declaration of interests and declare that I have no competing interests.

  • Provenance and peer review: Not commissioned; not externally peer reviewed.

References
