How bibliometrics and school rankings reward unreliable science
BMJ 2023; 382 doi: https://doi.org/10.1136/bmj.p1887 (Published 17 August 2023) Cite this as: BMJ 2023;382:p1887

How much is a citation worth? $3? $6? $100 000?
Any of those answers is correct, according to back-of-the-envelope calculations over the past few decades.123 The spread between these numbers suggests that none of them is accurate, but it’s inarguable that citations are the coin of the realm in academia.
Bibliometrics and school rankings are largely based on publications and citations. Take the Times Higher Education rankings, for example, in which citations and papers account for more than a third of the total score.4 Or the Shanghai Ranking, 60% of which is determined by publications and highly cited researchers.5 The QS Rankings weight citations per faculty at a relatively low 20%,6 but the US News Best Global Universities ranking gives publication- and citation-related metrics a combined weight of 60%.7
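To make the arithmetic behind such composite scores concrete, here is a minimal sketch of a weighted sum. The 60/40 split loosely mirrors the bibliometric share in the Shanghai and US News examples above; the indicator scores themselves are invented for illustration and are not drawn from any real ranking's methodology.

```python
# Toy illustration of how a composite ranking score weights bibliometrics.
# The 60/40 split loosely mirrors the examples above; the indicator
# scores are hypothetical and not taken from any real ranking.

indicators = {
    "publications_and_citations": 72.0,  # normalised bibliometric score (0-100)
    "everything_else": 65.0,             # teaching, reputation, etc., lumped together
}

weights = {
    "publications_and_citations": 0.60,
    "everything_else": 0.40,
}

composite = sum(indicators[k] * weights[k] for k in indicators)
print(f"Composite score: {composite:.1f}")  # 0.60*72 + 0.40*65 = 69.2
```

Under these assumed weights, a strong bibliometric showing does most of the work: more than 60% of the composite score here comes from publications and citations alone.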
These rankings are not, to borrow a phrase, merely academic matters. Funding agencies, including many governments, use them to decide where to award grants. Citations are the currency of academic success, and their value also draws money and resources to institutions and individual academics.
Scoring well on such metrics can reap huge rewards but, unfortunately, the metrics are also simple to game. And so, following Goodhart’s law (“When a measure becomes a target, it ceases to be a good measure”), citations are gamed,8 in increasingly cunning ways. Authors and editors create citation rings and cartels.9 Companies pounce on expired domains to hijack indexed journals10 and take their names, fooling unsuspecting researchers. Or researchers who are well aware of the game exploit this vulnerability to publish papers that cite their own work.
Universities pay cash bonuses to faculty members who publish papers in highly ranked journals.11 Some institutions have reportedly even schemed to hire prominent academics who either add an affiliation to their papers or move employers outright.12 This means that those researchers’ papers—and citations—count toward the universities’ rankings. Researchers cite themselves, a lot.13 Journals have been found to encourage, or even require, authors to cite other work in the same periodical,14 and they fight over papers they think will be highly cited to win the impact factor arms race.15
Paper mills, which sell everything from authorship to complete articles, have proliferated,16 and while they’re not a new phenomenon, they have industrialised in recent years.17 They have figured out ways to ensure that authors peer review their own papers.18 In the United States, the “newest college admissions ploy” is “paying to make your teen a ‘peer-reviewed’ author.”19
Following the money
Faced with criticism, which they see as an existential threat to their careers, some researchers have resorted to the courts,20 suing critics21 and journals to prevent them22 from publishing critiques or expressions of concern. Although neither journals nor authors “lose” citations for retracted papers when impact factors or h-indices are calculated, the appearance of a retraction on a researcher’s CV is typically seen as a career death knell, despite evidence to the contrary.23
All of that leads to retractions that are “slow, opaque and inconsistent”24 when they happen at all. The UK House of Commons’ Science, Innovation and Technology Committee recently recommended that corrections and retractions should take no more than two months.25 In practice, we’re a long way from this goal, with retractions typically taking years.26
Imagine if all of this effort were directed at coming up with more robust experiments, better treatments for sick people, or ways to make those treatments cheaper and more equitable. Instead, publishers, institutions, and academics are stuck in the cycle of following the money. Publishers respond to demand by creating an astronomical number of “special issues,”27 and paper mills target those vulnerable issues. More and more junk is published, drowning out the better science in a sea of noisy nonsense.28
The world has begun to catch on, probably as the result of increased public attention and pressure in the media and elsewhere. Journals seem to have become increasingly willing to retract papers over the years, including thousands suspected to be the products of paper mills.29 Others have been delisted by Clarivate’s Web of Science platform, losing their impact factors and putting their futures in jeopardy.30
But all of this is a game of whack-a-mole. No approach to this problem can succeed without tackling the incentives themselves. A good place to start is to deflate the importance of citations in the promotion, funding, and hiring of scientists. The hope is that this effort would dovetail with publishers distancing themselves from models that require more and more volume to grow profits. At the same time, if we must replace bad metrics with better ones (which is not necessarily the case, since any metric can be gamed), universities and funders could find ways to reward behaviour such as data sharing and correcting the record.
A proposal to change the UK’s Research Excellence Framework would reduce the weight given to publications in assessment, although only by 10%. The Declaration on Research Assessment (DORA)31 and the Leiden Manifesto for research metrics32 recommend against considering impact factors when conducting such assessments, and while thousands of institutions have signed on, very few walk the walk.33 Meanwhile, some US graduate schools are declining to participate in the US News rankings.34
These nascent developments are important. If we want science with impact, we need to reward behaviour that is consistent with good research practices, not impact factors.
Footnotes
This article developed from a talk Ivan Oransky gave at Stanford University in May 2023.
Competing interests: None declared.
Provenance and peer review: Commissioned; not externally peer reviewed.