BMJ joins campaign to put “science into assessment of research,” as its impact factor rises
(Published 03 July 2013)
Cite this as: BMJ 2013;347:f4327
The BMJ’s impact factor has risen from 14 in 2011 to 17 in 2012, placing it fourth among the world’s top general medical journals and first among open access general medical journals, latest figures show.
The impact factor is a measure of the average number of citations to articles that a journal has recently published. The 2012 impact factor was calculated from the number of times articles published in a journal during 2010 and 2011 were cited by articles in other indexed journals during 2012. This figure was divided by the total number of “citable items” (mainly research and clinical review articles) published in 2010-11.
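The calculation described above can be sketched in a few lines of Python. The citation and article counts below are made-up figures chosen only to illustrate the arithmetic; they are not the BMJ’s actual data.

```python
# Hypothetical counts, for illustration only (not the BMJ's real figures).
citations_2012_to_2010_articles = 5000  # 2012 citations to articles published in 2010
citations_2012_to_2011_articles = 4500  # 2012 citations to articles published in 2011
citable_items_2010 = 280                # "citable items" published in 2010
citable_items_2011 = 279                # "citable items" published in 2011

# 2012 impact factor: citations in 2012 to the two preceding years' articles,
# divided by the number of citable items published in those two years.
impact_factor_2012 = (
    (citations_2012_to_2010_articles + citations_2012_to_2011_articles)
    / (citable_items_2010 + citable_items_2011)
)
print(round(impact_factor_2012, 1))  # with these invented numbers: 17.0
```

Note that only “citable items” (mainly research and review articles) enter the denominator, while citations to any article type can count in the numerator, which is one reason the metric can be inconsistent across journals.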
The BMJ’s editor in chief, Fiona Godlee, said, “The increase in the BMJ’s impact factor suggests that we are publishing research that is more highly cited than we used to. We have continued our policy of seeking to publish research that is useful to clinicians and policymakers and will help them make better decisions.”
Meanwhile, the BMJ has signed up to an international campaign that seeks to encourage alternatives to the impact factor. The San Francisco Declaration on Research Assessment (DORA; http://am.ascb.org/dora) was drawn up by a group of editors and publishers of scholarly journals who wanted to improve the way the outputs of scientific research are evaluated and used. DORA recommends that journal-based metrics, such as the impact factor, should not be used when research funding, appointments, and promotions are considered. The aim of the declaration’s authors was to put “science into the assessment of research.”
The declaration states, “Funding agencies, institutions that employ scientists, and scientists themselves all have a desire, and need, to assess the quality and impact of scientific outputs. It is thus imperative that scientific output is measured accurately and evaluated wisely.”
It calls for the use of methods that assess research on its own merits rather than on the basis of the journal in which it is published. It also recommends capitalising on the opportunities provided by online publication, such as relaxing unnecessary limits on the numbers of words, figures, and references in articles and exploring new indicators of significance and impact.
Explaining the limitations of the impact factor, Godlee said, “The impact factor is used as a measure of how good a journal is compared with others and then extrapolated beyond that to indicate how good individual articles are according to the journal in which they have been published. So it is both blunt and indirect—articles and their authors are judged by the company they keep.”
She added, “The impact factor also favours some types of research more than others. Research that is clinically relevant—the sort that the BMJ wants to publish—may not be as highly cited as a highly specialist study reporting a research technique.”
Godlee also cautioned that the way the impact factor was calculated was neither transparent nor always consistent across journals.
Noting that DORA asks signatories to refrain from routinely publicising their impact factors, Godlee said, “It’s hard. We know the impact factor is flawed, and we want a better measure for everyone’s sake. But the journal is doing well, and we’re pleased for our authors, and of course we hope this will mean more people will want to send us their best work.”