Letters

Quality of impact factors of general medical journals

BMJ 2003; 326 doi: https://doi.org/10.1136/bmj.326.7395.931 (Published 26 April 2003) Cite this as: BMJ 2003;326:931

Quality matters—and the choice of indicator matters too

  1. Miquel Porta (mporta{at}imim.es), head, Clinical and Molecular Epidemiology of Cancer Unit
  1. Institut Municipal d'Investigació Mèdica, Carrer del Dr Aiguader, 80, E-08003 Barcelona, Spain

    EDITOR—The results of Joseph's analysis add new data and a remarkable twist to existing knowledge on weaknesses in the accuracy of the data that the Institute for Scientific Information (now part of the Thomson company) uses.1 Although the institute has long struggled to avoid mistakes, the vast amount of data needed to create its products underlines the need for more stringent quality checks.2 Such controls cannot be performed by users of the institute's indices and databases, since most users do not have access to the original, raw data: for example, data on which articles were counted as part of the denominator of the bibliographical "impact factor".2

    [Figure: see p 283 (1 February) for the complete figure]

    The results offered by Joseph are a reminder that the impact factor is often not the scientometric indicator of choice. If you want to know the bibliographical "impact" of a journal, first consider the total number of citations it has received (not just those received over the previous two years).2 3 In doing so you are also likely to avoid the pitfall uncovered by Joseph: the total number of citations received is not much influenced by the "number of items" chosen to compute the bibliographical impact factor.

    With increasing access via the internet to the data of the Institute for Scientific Information, more attention is being devoted to the number of citations received by each individual article. This will not be a magic bullet,3 but it should further help to avoid another intrinsic "weakness" of bibliographical impact factors, for which no one is to blame: they are simply the average of a highly skewed distribution; often, 85% of the citations received "by a journal" are actually received by about 15% of the articles it published.4 Much of the appeal of impact factors stems precisely from the fact that an average is such a simple measure.2 3 But as scientists we surely can go beyond that.

    Footnotes

    • Competing interests None declared.


    Research quality can be assessed by using a combination of approaches

    1. Joseph L Y Liu, research fellow (joseph.liu{at}cancer.org.uk)
    1. Centre for Statistics in Medicine, Institute of Health Sciences, University of Oxford, Oxford OX3 7LF

      EDITOR—Although Porta's suggestion [above] to use the number of citations received by each article is an improvement over the journal impact factor as a measure of a publication's research quality,1 it does not solve the problem of differing citation practices between disciplines, which are largely unrelated to quality.2

      For example, when I conducted a title search for 1995 articles in the Institute for Scientific Information's Web of Science database on 10 February 2003, I found that the numbers of citations for the top 10 papers in the BMJ, Lancet, New England Journal of Medicine, JAMA, and Annals of Internal Medicine on malaria (an average of 64) and diarrhoea (31) were substantially lower than those for coronary heart disease (435) and breast cancer (289). As I and my co-authors have suggested,2 the citation counts of individual papers should be adjusted by discipline to improve on an imperfect but widely used indicator of research quality. However, the inherent limitations of a single numerical summary measure and the lack of empirical evidence on the effectiveness of traditional peer review indicate the need to assess research quality with a combination of approaches, including post-publication peer review, indicators of the social impact of research, and evaluation of research performance by independent expert panels using transparent and evidence based criteria.3-5

      Footnotes

      • Competing interests None declared.

