News

Academics should leave research credentials behind when they move, Stern report says

BMJ 2016; 354 doi: https://doi.org/10.1136/bmj.i4260 (Published 02 August 2016) Cite this as: BMJ 2016;354:i4260
Nigel Hawkes
London

Some methods used by universities to burnish their research credentials should no longer be permitted within the Research Excellence Framework, a committee chaired by Lord Stern has recommended.1

Under existing rules universities can recruit high-ranking academics at the last minute, rather like Premier League football clubs buying top players in the transfer window. These star recruits, sometimes on fractional contracts that involve little real commitment, bring their research excellence scores with them, adding lustre to the universities they join.

In addition, universities can list as many or as few of their staff as they wish, controlling the denominator used in calculating research excellence. By listing only a small number of research high fliers, they can make their research productivity look better than universities that include all or most of their academic staff.

Under Stern’s proposals neither of these devices would be available in future assessments. The incentive to import star recruits would be reduced because their past research scores would no longer be portable, remaining at the university where the research was done. Stern adds that, in the future, all academic staff—not just a hand-picked selection—should be included in entries.

Research excellence assessments have been carried out at irregular intervals since 1986, most recently in 2014, and they play an increasingly important role in allocating £2bn (€2.37bn; $2.65bn) a year in grants from the funding councils. They use a variety of tools to measure the quality of research in UK universities but have come under increasing pressure from critics who say that they favour “safe” research that can be guaranteed to produce measurable outputs within the time frame, discouraging risk taking. The whole process is burdensome and costly: the Stern report estimates that the 2014 exercise cost £246m, most of it spent by universities preparing their returns.

Some academics feared that Stern, a professor at the London School of Economics and president of the British Academy, would seek to cut costs by placing more emphasis on metrics—the measurement of research impact by, for example, the number of citations a paper gets and the prestige of the journals that publish it. But his report reiterates that peer review should be the principal measure of quality, backed up by bibliometric data where appropriate.

James Wilsdon, of Sheffield University, who chaired an independent review of the role of metrics, said that he was relieved by this. “Such an outcome was far from guaranteed,” he wrote in the Guardian.2 “Towards the end of last year both government and commercial players like Elsevier were pushing for a metrics-based Research Excellence Framework. Lord Stern deserves credit for holding out against such pressures.”

Wilsdon was less happy about Stern’s proposed ban on portability, saying that it might make it harder for young researchers at the start of their careers to get new jobs. He suggested exempting this group from the rule, so that it would apply only to those with permanent jobs. Stern should have anticipated this problem and indicated how it might be avoided, Wilsdon said.

This view is widely shared by individual academics, judging by those who wrote to Stern with their views. The loss of portability got no support at all from individual respondents and only moderate support from universities. Furthermore, opinions among individuals about the value of the whole exercise differed sharply from Stern’s: asked whether it had negative influences, individual respondents agreed strongly that it did.

The other changes proposed by Stern relate to measurement of the social and economic effects of research. He calls it “one of the success stories” of the 2014 exercise, but universities complain that this is tricky and time consuming, having cost them £55m. Stern recommends more flexibility in the requirements for impact case studies, no increase in their number, and widening them to include effects on government policy, public engagement, cultural life, and teaching.

The next step will be for the UK government and funding councils to turn the report into formal proposals for the next Research Excellence Framework exercise, which is due in 2021. This will require testing to see whether the proposals allow scope for further gaming, as well as modelling or piloting new ideas. The results will be published in 2017, in time for universities to prepare for submissions in 2020, with the final outcome published at the end of 2021.
