How should we rate research?
BMJ 2006; 332 doi: https://doi.org/10.1136/bmj.332.7548.983 (Published 27 April 2006) Cite this as: BMJ 2006;332:983
What on earth possessed the BMJ to commission an editorial on the RAE
from two people who form part of the machinery for implementing and
perpetuating it? This editorial has led to widespread dissent in the
corridors of university departments. But since these people are to be the
ones marking our homework, why would we want to upset them by disagreeing
with them in print?
Richard Smith once wrote an editorial on death and declared as a
conflict of interest "I was recently very upset by the death of my pet
rabbit". I'm confident that, if she put her mind (and perhaps a working
group) to it, Fiona Godlee could take the BMJ's conflict of interest policy
to a higher level of sophistication.
Competing interests: Will be submitting to the RAE as part of the UCL return.
What do we really mean by assessing the ‘quality’ of health research?
[1] We can give the editors of the Lancet and New England Journal of
Medicine the power to shape UK research funding for the coming decade
(because wide circulation generic publications produce the biggest impact
in the US-focused Web of Knowledge) if we base funding on past citations.
But is the quality of a ‘fashionable’ author who published on the
molecular biology of heart disease inherently ‘better’ than that of an
ear, nose and throat surgeon who published in a smaller, specialist field
like neuroma? We can concentrate future funding even more on high-cost (and
high-risk) projects, but is the quality of research using expensive
isotopes and scanners inherently ‘better’ than research in primary care on
breastfeeding? The Higher Education Funding Council for England (HEFCE)
has no interest in improving clinical care in the National Health Service,
nor in the public well-being of society as a whole. [2] HEFCE will always go
for the concentration of money by the most direct way into their favoured
institutions (consider the way the results of the 2001 Research Assessment
Exercise were fiddled, post-hoc) because it makes their life simple to
deal with a few big players rather than stimulate innovative or world-
changing research. Metrics focused on past citations and grants will
ensure this cosy concentration on the same old friends of HEFCE: the only
future change in the game might be higher education mergers (as in
Manchester) into even bigger players. No wonder the Greater Manchester
Research Alliance pioneered the ‘Research Passport’!
The far-sighted director of the Economic and Social Research Council
(Ian Diamond) gave the Academy for the Social Sciences a fascinating
briefing on metrics in 2005: use of existing citation databases like the
US Web of Knowledge profoundly understates the strengths of small
disciplines, especially when published materials originate in the European
Union. A key concept to which the ESRC introduced me was ‘esteem’, by
which they meant something dynamic and cumulative (not just how many
public committees or inquiries someone sat on). Subsequently I have spent a
year pondering what qualities are admirable or judged excellent in
scientific esteem, and so far have identified two dimensions that could be
measured. Both proposals stemmed from encountering the health scientist
I had most ‘esteemed’, Peter Medawar, although the same qualities were
also readily detectable on meeting other Nobel laureates who had worked in
the UK, such as Nikolaas Tinbergen, Edgar Adrian, John Eccles or Frederick Sanger.
One is a pedigree of ideas (seeking the ‘origin’ in originality), and one
is a pedigree of influence (‘shaping’ the shapers).
What are the seminal ideas that gave paternity to a widely growing
family tree of research (like Medawar’s immunology)? This is not at all
the same as the most highly cited papers, which may actually be derivative
and technical. It needs a mapping exercise (sometimes called data mining
in areas like patents) to identify the seminal work, and in many areas of
health these family trees are not hard to map (consider diagnostic
magnetic resonance worldwide: the trees are only about 30 years old, with
perhaps three original seeds). With apologies for my sexist language,
I imagine Dorothy Hodgkin or Marie Curie showed a similar pattern of
‘paternity’ in their disciplines.
One common feature of all the laureates, above, is that they nurtured
teams and networks, from which new research leaders emerged. The Fields
Medal is the mathematics equivalent of the Nobel prize for medicine.
Mathematicians seem more aware that working with one paradigm-shifter can
help shape other trail-blazers. In the early decades of the Institute of
Psychiatry, Aubrey Lewis had an extraordinary influence on promoting
academic careers and charting the challenges for mental health research.
[3] This influence lasted at least 60 years (while at the Maudsley
Hospital in the 1990s one could still trace Lewis’ progeny) and spanned
the British Commonwealth. The fecundity of Lewis can be measured by the
amazing quantity and variety of research still produced by his one small
Institute, but even more so by measuring the international collaborations
that have nurtured research leaders far beyond the UK. Now that is a
legacy of ‘quality’.
1 Hobbs FDR, Stewart PM. How should we rate research? BMJ 2006;332:983-4.
2 Caan W. Inequalities and research need to be balanced. BMJ 2002;324:51-2.
3 Angel K, Jones E, Neve M, eds. European psychiatry on the eve of war:
Aubrey Lewis, the Maudsley Hospital and the Rockefeller Foundation in the
1930s. Medical History, Supplement No 22. London: Wellcome Trust Centre
for the History of Medicine at University College London, 2003.
[With many apologies for such a long 'letter' - this has become more
like a 'personal opinion'.]
Competing interests: Once, briefly, held an honorary lectureship at the IoP. Was once a member of the Foresight forum of the AcSS.
Simply counting the number of publications may be a measure of research
performance, but does the number of publications also relate to clinical
relevance and importance? A search under zinc deficiency at
www.pubmed.gov brings up 6420 publications. It is inexplicable that the
great clinical importance of accurately diagnosing and repleting zinc
deficiency for normal immune function has not yet been recognised by most
practising clinical biochemists or clinicians.
In contrast, there are 16224 references to hormone replacement
therapy although use of progestins and oestrogens has been proved to
increase the risk of breast and endometrial cancers, strokes, thromboses
and myocardial infarctions.
The number of publications may reflect funding motivation rather than
genuine long-lasting scientific merit.
Competing interests: None declared.
There seems to be much confusion in the debate concerning how to
replace the UK Research Assessment Exercise. For example, it should be
noted that numbers of publications and citations are research output
measures; while research council funding and numbers of academic and
research staff are input measures.
The big problem of the RAE is its non-transparency and lack of
objectivity due to the unmeasurable role of subjective opinion - the same
criticism applies to research council decisions. Therefore, it would be a
mistake for future funding systems to try and replicate the results of the
current RAE: we can do better.
A system of research evaluation based on fundamental output metrics
such as publications and/or citations is (like any simple measure)
inevitably incomplete and imperfect - but has the great advantage that
these deficiencies are not concealed.
A league table of English and Scottish university publications and
citations accumulated in the ISI Web of Science for 2000-4 gives
plausible and consistent results.
Rank by publications (rank by citations):
1 (1) Cambridge
2 (2) Oxford
3 (4) Imperial College, London
4 (3) UCL
5 (6) Manchester
6 (5) Edinburgh
7 (7) Bristol
8 (8) Birmingham
9 (9) Glasgow
10 (10) Kings College, London
11 (12) Leeds
12 (11) Sheffield
Since publication and citations rank order is so similar, this
suggests that measurement of each university's annual publications may be
exactly the simple, sensitive, objective and transparent research
performance measure which is needed – and one that is superior to the
complex, slow, subjective and methodologically obscure evaluations of the
RAE.
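How similar the two rank orders are can be quantified. A minimal sketch in Python, using only the ranks from the table above, computes Spearman's rank correlation between the publication and citation orderings:

```python
# Publication ranks 1-12 (the table's row order) and the corresponding
# citation ranks, taken from the league table above.
pub_ranks = list(range(1, 13))
cit_ranks = [1, 2, 4, 3, 6, 5, 7, 8, 9, 10, 12, 11]

# Spearman's rho for untied ranks: rho = 1 - 6*sum(d^2) / (n*(n^2 - 1))
n = len(pub_ranks)
d_squared = sum((p - c) ** 2 for p, c in zip(pub_ranks, cit_ranks))
rho = 1 - 6 * d_squared / (n * (n * n - 1))
print(round(rho, 3))  # 0.979
```

A rho of about 0.98 bears out the claim that the two measures rank these universities almost identically.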
Competing interests: None declared.
The best way to rate research is by the outcomes of that research and
its cost-effectiveness.
Competing interests: None declared.
To simplify the funding of academic research it is planned to introduce
assessment based mainly on metrics as performance indicators. These may
include research income, publications, citations, and numbers of research
students. There seems to be little doubt that publication and citation
scores reflect scientific output. However, research income or number of
students as such do not really reflect the efficiency of research.
Measures such as publications and citations divided by research income
and number of students could give a much better judgement of the
efficiency of a research group. Indeed, it is of no real help to the
community when researchers achieve high research income but not an
adequate output.
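The proposed normalisation is simple arithmetic. A minimal sketch, with purely hypothetical group names and figures, of publications per unit of research income:

```python
# Hypothetical groups: (publications over the period, research income in £m).
# Names and numbers are illustrative only, not real data.
groups = {
    "Group A": (120, 10.0),
    "Group B": (45, 1.5),
}

# Efficiency as proposed: output (publications) divided by input (income).
for name, (pubs, income_m) in groups.items():
    print(f"{name}: {pubs / income_m:.1f} publications per £m")
```

On these invented numbers the smaller group (30.0 per £m) is far more efficient than the larger one (12.0 per £m), which is exactly the distinction that raw income alone would miss.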
Competing interests: None declared.
Academic Medicine, not all is by all means rosy
Editor: In their article “How should we rate research?” Hobbs and
Stewart present a rather rosy view of what has been achieved by the
Research Assessment Exercise (RAE) and are encouraged by the Government’s
intention to simplify the procedure (1). This would be based essentially
on a metrics-based assessment rather than on peer review.
The review of the literature in the Editorial was somewhat selective,
e.g. some papers (2) and letters (3) expressing a contrary view were not
cited, even if only to criticise their viewpoints. It is surprising that
there was scarcely any mention of teaching. It is encouraging, however,
that the MRC and the DH’s R&D will now hold a single fund for health
related research and this should do more to encourage opportunities for
translational research. Its success will be dependent on collaboration
between Heads of Medical Schools and NHS Chief Executives, not always
known for their collaborative approach, as well as such Research
Institutes as the MRC. The new initiative must be protected from being
absorbed into making good budgetary deficits, from any Government
proposals to set unrealistic targets for NHS Trusts, and from NHS
research which is not of the highest quality.
The major question, however, which needs to be addressed relates to
the recruitment and retention of clinical academic staff. Despite a 40%
increase in Medical School intake since 2000, there were 84 vacant Chairs
in medical specialities by 2005 (4). Between 2002 and 2004, clinical
lecturer grades were reduced by 42%, thereby removing the seed corn of
future academic posts. Pathology has experienced a marked reduction and,
worryingly, in some schools, this discipline has only a marginal role in
medical undergraduate education. Consequently, those now qualifying may
have little understanding of the mechanisms by which disease is produced
and which clinical laboratory investigations should be selected for
diagnosis. Particularly in such craft specialities as surgery and
obstetrics and gynaecology, lecturers no longer view academic medicine as
an attractive career option, despite opportunities for research in such
areas as vascular (including coronary artery) surgery, endoscopic
techniques and joint replacement. Research and teaching assessments,
clinical appraisals by the GMC and locally, as well as clinical
responsibilities deter the young from an academic career.
The BMJ Editorial has not provided any reassurance that the new
system will address the blight, much of which must be laid at the feet of
the RAE (2). The way forward must be to give every encouragement for
ensuring that high standards of basic medical sciences, clinical practice
and teaching are supported in close proximity. Contrary to views
sometimes expressed, basic scientists have an important role in teaching
and not only for intercalated BScs. Teams who assess by the newly
established criteria must have vision and experience and take into account
clinical practice, research and teaching, giving appropriate weight to
each, as must be the case for Schools of Medicine. A previous review
conducted by the Wessex Institute concluded that, in medicine, compared
with other disciplines, assessors judged their peers unduly harshly.
“Quis custodiet ipsos custodes?” Juvenal, Satire VI, 347-8.
(1) Hobbs FDR, Stewart PM. How should we rate research? BMJ 2006;332:983-4.
(2) Banatvala J, Bell P, Symonds M. The Research Assessment Exercise is
bad for UK medicine. Lancet 2005;365:458-60.
(3) Symonds EM, Bell P, Banatvala J. The future of academic medicine
looks bleak. BMJ 351:694.
(4) Clinical academic staffing levels in UK medical and dental schools:
data update 2004. Council of Heads of Medical Schools, June 2005.
Professor J.E. Banatvala CBE,
Emeritus Professor of Clinical Virology, University of London
Professor Sir Peter Bell, Emeritus Professor of Surgery, University
of Leicester
Professor E.M. Symonds, Emeritus Professor of Obstetrics &
Gynaecology, University of Nottingham
Competing interests: None declared.