Let's dump impact factors

BMJ 2004;329 doi: https://doi.org/10.1136/bmj.329.7471.0-h (Published 14 October 2004). Cite this as: BMJ 2004;329:0-h
- Kamran Abbasi, acting editor
A passionate exchange about academic medicine is possible—as shown by our current online discussion (http://bmj.bmjjournals.com/misc/webchat.shtml)—and it reveals disillusionment. Emphasis on where research is published—relying on impact factors to reward academic work with funding or promotion—is ripping the soul out of academia. “Publications (sic) become more important than teaching and the actual research itself,” said one discussant.
I asked an author why his paper, which fitted naturally in the BMJ, had been submitted to another journal. The response was pained, a touch embarrassed, but honest: the dean of his institution had instructed researchers to publish in journals with the highest possible impact factor to help with the research assessment exercise. This was a major consideration.
In South Asia, job promotion often depends largely on the number of research papers published, and some doctors go to unreasonable lengths to “persuade” editors to publish their work. The quality of clinical work or even of the research itself is less important than the length of a citation list on a curriculum vitae. China, too, offers promotion according to the number of research papers.
There are other systems. Germany, for example, has an intense hierarchy, where the chief specialist is one notch below God—or one notch above—with junior staff promoted on a whim or shunted to a dead-end post in a flash of irritation. What value have research or academic excellence in such an environment?
Japan, from where I write this week, has managed to marry these arbitrary approaches. In the country's fierce hierarchy, promotion is aided by applicants listing journal impact factors beside references in their citation list. Candidates boast individual impact factors of, for example, over 100, somewhere in the 30s, or a miserable 0.3. Japan's fascination with genomics and impact factors is hindering advancement in academia for good clinicians with little basic science research experience.
Professor Takeo Nakayama, a public health doctor from Kyoto University, and his team studied the likelihood of papers from high impact factor journals being cited in US evidence based guidelines (JAMA 2004;290:755-6). Although a correlation existed, “journals with low impact factors were also cited frequently as providing important evidence.” The effect on readers' knowledge or clinical practice remains unmeasured, they conclude, and clinical and preventive research is undervalued.
Impact factors have much to answer for, as do deans, sponsors, government agencies, and employment panels who use them as a convenient—but flawed—performance measure. How can a score count for so much when it is understood by so few and its value is so uncertain? In defence, worshippers of impact factors say we have no better alternative. Isn't it time for the scientific community to find one?