- Eugene Garfield, chairman emeritus (email@example.com)^a
- ^a Institute for Scientific Information, 3600 Market Street, Suite 450, Philadelphia, PA 19104, USA
- Accepted 17 May 1996
Impact factors are widely used to rank and evaluate journals. They are also often used inappropriately as surrogates in evaluation exercises. The inventor of the Science Citation Index warns against the indiscriminate use of these data. Fourteen-year cumulative impact data for 10 leading medical journals provide a quantitative indicator of their long-term influence. In the final analysis, impact simply reflects the ability of journals and editors to attract the best papers available.
Counting references to rank the use of scientific journals was reported as early as 1927 by Gross and Gross.1 In 1955 I suggested that reference counting could measure “impact,”2 but the term “impact factor” was not used until the publication of the 1961 Science Citation Index (SCI) in 1963. This led to a byproduct, Journal Citation Reports (JCR), and a burgeoning literature using bibliometric measures. From 1975 to 1989, JCR appeared as supplementary volumes to the annual SCI. From 1990 to 1994 it appeared on microfiche, and in 1995 a CD ROM edition was launched.
Calculation of current impact factors
The most widely used data in the JCR are impact factors—ratios obtained by dividing the citations received in one year by the number of papers published in the two previous years. Thus, the 1995 impact factor counts the citations in 1995 journal issues to “items” published in 1993 and 1994. I say “items” advisedly. There are a dozen major categories of editorial matter. JCR's impact calculations are based on original research and review articles, as well as notes. Letters of the type published in the BMJ and the Lancet are not included in the publication count. The vast majority of research journals do not have such extensive correspondence sections. The effects of these differences in calculating journal impact …
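The two-year ratio described above can be sketched in a few lines of code. This is an illustration only: the function name and the figures are hypothetical, not JCR data, and the real calculation depends on which “items” count as citable.

```python
def impact_factor(citations: int, citable_items: int) -> float:
    """Impact factor for year Y: citations received in Y to items
    published in Y-1 and Y-2, divided by the number of citable items
    (articles, reviews, notes) published in Y-1 and Y-2."""
    return citations / citable_items

# Hypothetical 1995 example: 6000 citations in 1995 to items
# published in 1993-4, against 1500 citable items in those years.
print(impact_factor(6000, 1500))  # → 4.0
```

Note that the numerator counts citations to everything the journal published, while the denominator counts only “citable” items; as the text explains, journals with large correspondence sections can therefore gain citations without enlarging the denominator.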