
Feature

Should we ditch impact factors?

BMJ 2007; 334 doi: https://doi.org/10.1136/bmj.39146.545752.BE (Published 15 March 2007) Cite this as: BMJ 2007;334:569
Richard Hobbs, head of primary care
Department of Primary Care and General Practice, University of Birmingham, Birmingham B15 2TT
f.d.r.hobbs@bham.ac.uk

Even advocates of impact factors admit that they are a flawed measure of quality. Gareth Williams believes we should get rid of them, whereas Richard Hobbs thinks refinement is the answer

From 2008, state funding for academic research in the UK will be calculated differently. The research assessment exercise, which is based substantially on (high-intensity, high-cost) peer review, will transfer to bibliometric scoring.1 Such metrics include journal impact factors, published annually in Journal Citation Reports. The impact factor is based on the average number of citations of individual "source" articles (original papers, research reports, and reviews) published in that journal during the preceding two years.
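As a sketch of the calculation, assuming the standard two-year definition used in Journal Citation Reports (the precise rules for which items count as citable differ slightly), a journal's impact factor for, say, 2006 would be:

\[
\text{IF}_{2006} \;=\; \frac{\text{citations received in 2006 by items the journal published in 2004 and 2005}}{\text{number of source articles the journal published in 2004 and 2005}}
\]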

    Of course, as with most routinely collected data, there are problems. Some of them are basic—only 2.5% of journals are tracked2 and not all disciplines routinely cite others' work. Numerous other non-quality factors drive article citations and therefore the impact: reviews cite other articles the most3; basic sciences routinely cite many more references than …
