Feature: Academic Medicine

Beyond the impact factor

BMJ 2009;338 (Published 13 February 2009) Cite this as: BMJ 2009;338:b553
  Geoff Watts, freelance journalist, London

    Details of how the new research excellence framework will assess UK research are expected later this year. Geoff Watts looks at the possibilities for fairer evaluation of applied medical research

    The RAE is dead; long live the REF.1 We’re talking research quality: what’s good and what’s not; what’s worth doing and what isn’t. But how is good quality to be defined? And to what extent should that judgment take account not only of the intellectual excellence of a research programme but also, in the case of medicine, its value to clinical practice? The question is a live one, and set to become livelier. One answer may lie in a still embryonic form of appraisal: the “social impact” of research.

    For the benefit of readers who don’t follow the vicissitudes of the research community, the more familiar of these three-letter abbreviations, RAE, stands for research assessment exercise: the process by which, until last year, the Higher Education Funding Council for England surveyed the quality of research carried out in British universities. The REF, the research excellence framework,2 is its replacement. In a nutshell, the old system was based principally on peer review of an academic department’s research, while the new one will rely far more on metrics: a raft of statistical measures of the research under scrutiny. These could include a department’s total non-government research income, the number of its postgraduates, and—worrisome to some—a bibliometric analysis of its research output.

    Universities take understandable pride in being able to quote good research ratings—but there’s more at stake here than academic egos. The assessment figures are used to determine how research funds are allocated.

    What is good research?

    One of the pitfalls in any judgment of research is the “apples and pears” problem: like is not being compared with like. Richard Smith, a previous BMJ editor, illustrated it by contrasting research on apoptosis, or programmed cell death, with a study of the cost effectiveness of different incontinence pads.3 Scientists, he pointed out, would rate the original work on apoptosis as of high quality even though, 30 years after its discovery, it had had no measurable effect on health. By contrast, work on incontinence pads would be far less likely to score points, although its social benefit could be immediate and important.

    Roger Jones of the department of general practice at King’s College School of Medicine in London is one of many who fear for the future of primary care research if the research excellence framework were to be dominated by unreconstructed bibliometrics. “We would suffer if journal impact factors became dominant, not because our research is any worse, but because the impact factors of the journals that the applied specialties publish in are not in the same range as most of those used by the biomedical people. Nature and Cell and Science and all those where you put your work if you’re a lab scientist wouldn’t be interested in our stuff. But they’ve got very high impact factors. The best we can do in primary care or health services research is probably the New England Journal of Medicine. But it’s still unusual to see primary care in there.”

    Martin Roland, director of the National Primary Care Research and Development Centre in Manchester, agrees. If bibliometrics are going to be used, he says, they’ll need to be more subtle than a simple comparison of traditional impact factors. You’d need to ask where a particular journal ranks among others in its field. “Only with a rating like this might you get as many brownie points for Social Science and Medicine as you would for the New England Journal of Medicine.”

    In fact, a close reading of the funding council’s website shows that although it intends to exploit bibliometrics, it does not see “any role for journal impact factors.”4 Instead it wants to rely on citation measures: the number of citations received by each paper in a defined period. Even this, however, might not entirely satisfy Professor Jones, who points out that citations to research in topics such as public and family health tend to be spread out over a longer period than is usually the case in basic science. And there is also the fact that many important findings are published in outlets other than learned journals, such as official reports.

    Over the years, organisers of the research assessment exercise became aware of some of the pitfalls of assessment and devised ways of taking account of them, although opinions differ on their adequacy. The new system will start the agonising all over again. The funding council has said that it intends to have a single framework covering all disciplines. And although bibliometrics will be the core of the system, other measurements will be used “where this is necessary to produce a robust assessment that fully reflects the range of research approaches and outputs characteristic of each subject group.”5 These other methods will include further quantitative indicators and, where appropriate, qualitative information.

    Reassuring? Perhaps. Either way, to judge by this and the council’s other statements, the detail is yet to be decided and still up for grabs. If so, researchers fearful that any new arrangement may not favour their particular enterprise have much to play for. And the discussion has already begun.

    Practical impact

    Appreciating that bibliometrics are not always the best measure of the usefulness of research, the council has begun contemplating how the framework should deal with what it calls “user valued research.” One approach that’s begun to generate interest among people working in primary care and other areas of predominantly applied research is the measurement of its practical benefits: its societal impact. But how to do it? Philip Hannaford, head of the Institute of Applied Health Sciences at Aberdeen, talks of one research project by his university that was disseminated via the internet. “It’s been downloaded from the website around 200 000 times. It’s having an impact. But how it’s being used, and where, is impossible to tease out.” Professor Roland points to another difficulty. One measure of the value of policy research must be the extent to which politicians refer to it and use it. “But quite often we find ourselves demonstrating that things the government’s done haven’t worked,” he says. “You don’t get credit for that. We rarely have secretaries of state standing up in parliament saying ‘Martin Roland’s just shown that my wheeze is a complete flop.’”

    Undeterred by these and a score of other hurdles, several enthusiasts are already trying to move assessment of societal impact from a worthy goal to a practical reality. One of the pioneering countries has been the Netherlands. A working group of the Royal Netherlands Academy of Arts and Sciences6 compiled a list of the kind of evidence that applied researchers might use to show the societal impact of their work. The list is lengthy and diverse. It ranges from the predictable (policy documents, treatment guidelines, clinically useful products, etc) to the less obvious (such as presentations to non-scientists, membership of advisory committees, and publicity through the media). Chris van Weel, the current president of WONCA, the world organisation of family doctors, and a professor at the University Medical Centre in Nijmegen, admits that trying to capture all this on a single numerical scale would be difficult. “It’s much more about building a narrative that is based on facts, and not just on individual reviewers’ judgments.”

    Any such scheme would have to be exceptionally flexible. For example, publication in an international rather than a purely national journal normally earns more kudos. But if whatever is being reported is particularly relevant to circumstances in the Netherlands, publication in a Dutch language journal might be more appropriate and should be reflected in the assessment. The practicability of the Dutch approach will remain uncertain until pilot studies have been completed.

    One of the British pioneers of assessing societal impact is Martin Buxton of Brunel University’s Health Economics Research Group. He and his colleagues have developed a “payback” model in which they seek to capture the economic returns of a research enterprise. They recognise several categories of return: knowledge, benefits to future research, health sector benefits, political and administrative benefits, and broader economic benefits.

    One of the group’s earlier tests of their new method comprised a retrospective evaluation of the research and development funded by the North Thames region of the NHS.7 The method seemed to work, and a similar approach has since been tested on other research programmes, including that of the Arthritis Research Campaign. But whether a labour-intensive Rolls-Royce assessment of this kind could realistically be applied to the research excellence framework, which covers all the applied health research output of UK universities, is debatable.

    In passing it’s worth noting that purely bibliometric measures did not correlate well with Professor Buxton’s benefit ratings in the NHS example. “Almost half the projects that did not have any journal publications had a benefit score, indicating that they appeared to have made some impact on policy or practice through dissemination by means other than journal publication.”

    At the Medical University of Vienna yet another group is pressing ahead with societal impact assessment—and, to judge by the comments of Manfred Maier, the university’s professor of general practice, with good reason. In Austria, he says, the scientific impact factor is all-important. “Almost nothing else counts. This is the situation that forced some of us to think about a meaningful addition.” He has nothing against scientific impact factors, he adds, but would like to see the societal impact of a piece of work quoted as well.

    The Viennese project is still a work in progress. But it relies on scoring several aspects of the research, including its aims and the extent to which its findings might be, or have been, translated into tangible benefits. Bravely, its authors have even designed it to yield a numerical score. The scheme’s checklist of relevant items has already undergone preliminary testing and refinement, and Professor Maier hopes to garner further opinion in the near future.

    Whether the REF will eventually incorporate anything of this kind is anyone’s guess. Professor Jones sees the allure, in principle, of including a societal impact assessment but will withhold his blessing until the bones of the schemes so far mooted have had a little more flesh applied to them. Professor Hannaford is concerned by the complexity of some of the ideas on the table and admits that, anyway, he wasn’t too unhappy with the old system. His view of the future is pragmatic. “Universities are full of bright, ambitious people. If staff can get extra money for societal impact they will find ways of manipulating the system to demonstrate it.”

    So, the RAE is dead; long live the REF . . . until it too is found wanting and added to the heap of defunct evaluation schemes already piled up in the dustier store cupboards of academic managers and policymakers.


    • Competing interests: None declared.
