Research output on primary care in Australia, Canada, Germany, the Netherlands, the United Kingdom, and the United States: bibliometric analysis. BMJ 2011; 342 doi: https://doi.org/10.1136/bmj.d1028 (Published 08 March 2011) Cite this as: BMJ 2011;342:d1028
- Julie Glanville, project director1,
- Tony Kendrick, head of primary care2 6,
- Rosalind McNally, outreach librarian3,
- John Campbell, head of primary care4,
- FD Richard Hobbs, head of primary care5 6
- 1York Health Economics Consortium, University of York
- 2Department of Primary Care, University of Southampton
- 3National Primary Care Research and Development Centre, University of Manchester
- 4Department of Primary Care, Peninsula Medical School
- 5Department of Primary Care Clinical Sciences, University of Birmingham
- 6NIHR National School for Primary Care Research
- Correspondence to: F D R Hobbs, Primary Care Clinical Sciences Building, University of Birmingham, Edgbaston, Birmingham B15 2TT
- Accepted 2 January 2011
Objective To compare the volume and quality of original research in primary care published by researchers from primary care in the United Kingdom against five countries with well established academic primary care.
Design Bibliometric analysis.
Setting United Kingdom, United States, Australia, Canada, Germany, and the Netherlands.
Studies reviewed Research publications relevant to comprehensive primary care and authored by researchers from primary care, recorded in Medline and Embase, with publication dates 2001-7 inclusive.
Main outcome measures Volume of published activity of generalist primary care researchers and the quality of the research output by those publishing the most using citation metrics: numbers of cited papers, proportion of cited papers, and mean citation scores.
Results 82 169 papers published between 2001 and 2007 in the six countries were classified as research on primary care. In a 15% pragmatic random sample of these records, 40% of research on primary care from the United Kingdom and 46% from the Netherlands was authored by researchers employed in a primary care setting or employed in academic departments of primary care. The 141 researchers with the highest volume of publications reporting research findings published between 2001 and 2007 (inclusive) authored or part authored 8.3% of the total sample of papers. For authors with the highest proportion of publications cited at least five times, the best performers came from the United States (n=5), United Kingdom (n=4), and the Netherlands (n=2). In the top 10 of authors with the highest proportions of publications achieving 20 or more citations, six were from the United Kingdom and four from the United States. The mean Hirsch index (measure of a researcher’s productivity and impact of the published work) was 14 for the Netherlands, 13 for the United Kingdom, 12 for the United States, 7 for Canada, 4 for Australia, and 3 for Germany.
Conclusion This international comparison of the volume and citation rates of papers by researchers from primary care consistently placed UK researchers among the best performers internationally.
The UK Research Assessment Exercise reported in late 2008.1 This was the sixth such national peer review evaluation of the quality of research carried out by higher education institutions, providing a quality rating for defined areas of research. This rating, converted to a multiplication factor on volume measures such as numbers of staff and research students and amount of grant expenditure on peer review, determines the quality adjusted research funding to institutions. In 2009-10, the Higher Education Funding Council for England allocated £1.6bn (€1.9bn; $2.6bn) in quality adjusted research to UK institutions2 that had collectively attracted £3.7bn in total research grants and contracts (2007-8)3 and £658m in income from charitable grants (2009-10).2
In addition to providing an objective rationale for varying the allocation of resources to support research in institutions, the results of the Research Assessment Exercise are used for many secondary purposes, including rating the research environment for future external research awards. The Research Assessment Exercise therefore generates a cycle of quality rating, being applied prospectively both directly through the block allocation for quality adjusted research and indirectly through its “kudos” score. This may have had the effect of further concentrating research investment in the United Kingdom—the proportion of total public funding to the top 10th of researchers in the United Kingdom increased from 47% in 1980-1 to 57% in 1997-8, compared with a decline in the United States from 47% to 43%.4
In the 2008 Research Assessment Exercise, 75% of the total score in any given subject area was derived from the assessment of publication outputs. Lesser weighting was given to the research environment and evidence of “esteem.” Units of assessment were aligned to major research themes, such as cardiovascular disease, but also recognised the methodological diversity of the more applied clinical research areas of clinical epidemiology, health services research, and primary care. The 2008 scoring system placed more emphasis on research of “international quality,” evolving from the 2001 Research Assessment Exercise 0-5 star rating to an overall quality profile assessment of below national standard (“unclassified”), nationally recognised research (one star), internationally recognised research (two stars), internationally excellent research (three stars), and world leading research (four stars). The Research Assessment Exercise therefore provided a qualitative assessment (and comparison by institution), ranked by international importance, of research within the United Kingdom.
The reliability of these international ratings of research importance is, however, unclear. Assessors categorised international research in the absence of any published criteria on what defines “recognised” compared with “world beating” status. In terms of crude comparisons, overall UK research tops the world league for papers per dollar expended and citations per dollar expended and is ranked fourth for papers per researcher.5 Furthermore, the United Kingdom delivers 9% of the world’s research effort for 4.5% of the world’s research expenditure (and in a high cost economy).4 But how does a relatively young academic discipline such as primary care compare on quality internationally? Subpanels encompassing more applied clinical research, including primary care, have historically been rated below laboratory and hospital based research in Research Assessment Exercises. Furthermore, commentators have questioned the viability of primary care research.6
Despite limitations, the United Kingdom may transfer future state support for academic research from the substantially peer led Research Assessment Exercise review (high intensity, high cost) to assessments in the Research Excellence Framework, potentially informed by bibliometric indicators and expert review.7 Subpanels for the Research Excellence Framework may use data on citations to inform their assessments. To internationally benchmark UK research on primary care, we carried out volume and quality analyses of publications by primary care researchers from six countries with well established primary care research before the publication of the 2008 Research Assessment Exercise.
We undertook the project in two stages. Firstly, we assessed the volume of research on primary care by country adjusted for national differences, where we defined a primary care researcher as a researcher based wholly or in part in a university investigating primary care issues, or a primary care professional (such as a general practitioner or family doctor) engaged in research. Secondly, we assessed the publication profile of the researchers with the highest volume of output in each country, where we defined the highest volume as researchers from primary care with the highest number of publications reporting research findings published between 2001 and 2007 (inclusive) and identified by literature searches of Medline and Embase.
For the purposes of this study we defined primary care as care that is characterised by first contact, accessibility, longitudinality, and comprehensiveness,8 where clinicians provide integrated, accessible healthcare services and are accountable for tackling a large majority of personal healthcare needs, developing a sustained partnership with patients and practising in the context of family and community.9
These criteria of “comprehensive” and “large majority of personal healthcare needs” define generalist led primary care, such as general practice or family medicine, and exclude community care by subspecialists, such as that provided by internal medicine or community paediatrics. The lack of generalist primary care is perceived by the World Health Organization as substantially contributing to the widely varied health outcomes and healthcare costs observed across countries.10
To further guide the search and selection of records, we assumed the concepts of primary care and a primary care researcher to have various dimensions: the people who deliver primary care, the activities typical of comprehensive generalist primary care, and the level of care or setting as an entry point to the healthcare system. A researcher from primary care could be an academic researcher based in a department of primary care, a primary care professional engaged in research, or a researcher investigating primary care issues. Scoping for the project also identified service areas sometimes labelled as primary care but that were not included in this research—namely, subspecialty community care (such as internal medicine or community paediatrics and obstetrics), dental care, optical care, emergency care, military primary care, prisoner services, research into the history of primary care, and publications such as news, letters, or commentaries, which largely do not report original data.
Identifying records of primary care research
We developed and tested sensitive search strategies to identify publications in primary care. The strategies used many synonyms to capture the relevant dimensions. To maximise the sensitivity of the strategy we looked for the search terms not only in the title, abstract, and indexing of records but also in the author affiliation fields and in journal names. Only English terms were used. We limited records to those relevant to six countries: the United Kingdom, the United States, Germany, the Netherlands, Canada, and Australia. These countries were selected as representing those with well established records for primary care research in both service and academic settings and for which we would be able to obtain standardised data on spending on research and researcher numbers using data from the Organisation for Economic Cooperation and Development (OECD). We limited the searches to records with publication dates from 2001 to 2007.
Records were retrieved by searching for the six countries in the title, abstract, indexing, author affiliation, and country of publication fields. We searched Medline and Embase using the Ovid interface in November 2007 (see web extra for full search strategies). The search strategy was initially piloted by comparing the completeness and accuracy of the search strategy against a manually derived list of all publications provided by a pilot sample of six UK researchers from primary care. Minor adjustments of the strategy were retested in a second sample until the searches yielded 90% of the actual reported outputs.
The records retrieved were loaded into EndNote bibliographic software and deduplicated using multiple approaches. Each country was loaded into a separate EndNote database, and on entry into the database was assigned a sequential EndNote record number. From the result set we removed records relating to countries other than the six of interest. We extracted a pragmatic 15% random sample of the remaining records for each of the six survey years and for each country. Random numbers were generated using the Random.org integer generator facility (www.random.org/). We requested a set of numbers between 1 and the total records in an EndNote database. The numbers returned were then matched to EndNote record numbers.
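The sampling step above can be sketched in code. This is an illustrative reconstruction only: the study used the Random.org integer generator, for which Python's pseudorandom generator stands in here, and the function name and parameters are assumptions, not the authors' implementation.

```python
import random

def draw_sample(total_records, fraction=0.15, seed=None):
    """Draw a pragmatic random sample of EndNote record numbers.

    The study requested random integers between 1 and the total
    number of records in each country's EndNote database, then
    matched them to sequential EndNote record numbers. Python's
    PRNG stands in for Random.org here (an assumption).
    """
    rng = random.Random(seed)
    k = round(total_records * fraction)
    # Record numbers are sequential starting at 1; sample without replacement
    return sorted(rng.sample(range(1, total_records + 1), k))

sample = draw_sample(1000, seed=42)
print(len(sample))  # 150
```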
Using the data available in the title, abstract, indexing, and affiliation fields, we categorised the records according to the following questions: Did the publication relate to primary care broadly defined? Did the publication report research findings rather than comment, opinion, or informal review? Was at least one of the authors involved in primary care either as a healthcare professional or as a researcher? We allocated records to a country based on information from an author affiliation field, setting information in the abstract, and subject indexing. Where authors from more than one country were identified we assigned the records to each country. Selection was made by one individual from a team of selectors and queries discussed with a lead selector (JG).
The samples were assessed for the current Research Assessment Exercise census period: whole years for 2001 to 2006 and 10 months for 2007. As the 2007 data did not represent a complete year, we excluded them from the calculation of means. This also avoided biases in favour of higher impact journals and English language journals, which may be indexed by databases more quickly than journals with lower impact factors and those in languages other than English.
Adjusting for national differences
To facilitate comparisons across the six countries the research output by primary care researchers or practitioners was adjusted for national differences. We calculated annual estimates based on the 15% sample and then weighted them by the gross domestic product spent on all research and development (gross expenditure on research and development) and the number of full time equivalent researchers (all disciplines). Data were required for a seven year period and the best source of such comparative statistics was the OECD series. We downloaded statistics from OECD.Stat on 25 February 2008.11 As the statistics had not been published for 2007 at the time of the analysis, we used 2006 values for 2007 calculations.
Analysis by journal impact factors
One researcher (JG) collected the journal titles of the records that formed the 15% sample. We obtained the impact factors of those journals for each year from 2001 to 2006 from the ISI Web of Science Journal Citation Reports.12 As the impact factors for 2007 had not been published at the time of the analysis, we gave the 2007 publications the impact factor value for 2006. Although impact factors for all journals in the sample were obtained where available, for this report we included only the records reporting research methods or findings by researchers from primary care. To show the distribution of publications in journals of different impact factors we grouped the journals by where their impact factor fell, within ranges 0 (no impact factor), less than 2, 2-4.99, 5-9.99, 10 and above.
Analysis of author output
After we had assessed the 15% sample, we gathered information on the research publication performance of researchers from primary care for the complete dataset. Rather than search on specific named academic departments, the selection was led by the data. We retrieved the records with at least one primary care author by searching the affiliation fields of all records using a range of English, German, and Dutch terms (including “primary” and “family practice”) to capture primary care organisations and academic departments (see web extra). This generated a list of researchers in academic primary care departments. Because author affiliation data are incomplete in some records, we then searched the dataset by authors’ names and tabulated the number of publications in the database for each author.
Information on affiliation was incomplete in some records. Once we had identified authors with the highest output volume, we searched all retrieved records again for additional records using the name rather than just relying on information about affiliation. This identified further records that were included in the authors’ publications lists. With this more complete information we selected the 30 highest volume authors for the United States and the United Kingdom. Because of limitations in funding we selected only 20 authors identified as having the largest number of publications for each of Australia, Canada, Germany, and the Netherlands. Australia had two authors tied on 16 publications so we included 21 authors for Australia. The choice of these cut-offs was pragmatic, determined by the resources available to the project.
We obtained the citation rates for the publications by the highest volume authors from all three citation indexes provided through ISI Web of Science.12 Data were obtained from 19 May to 6 June 2008. Variation in authors’ names was taken into account by searching for both first initial and second initial—for example, “davis r OR davis rb”. We used variants to search for authors with compound surnames—for example, “Barrett-Connor e OR barrettconnor e”.
Using Excel we recorded the citation scores for each publication for each author. The citation performance did not represent all publications by authors for 2001-7, because although it was possible to identify most authors uniquely within Web of Science, a few with common names were more difficult to identify. This may have affected around 15% of the authors in the list of 141. For those authors it is uncertain whether all their publications have been identified, although knowing the journal and date of many of their publications improved the identification rates of their data.
We used several measures to assess the publication performance of each high volume author: total number of an author’s publications identified by the search strategy; percentage of an author’s identified publications that received at least one citation; percentage of an author’s identified publications that received at least five citations; percentage of an author’s cited publications that achieved specific citation score ranges; mean citation score per publication, calculated by totalling all citation scores and dividing by the number of publications (counting publications with and without a citation score); and mean citation score per cited publication, calculated by totalling all citation scores and dividing by the number of cited publications. We also calculated the Hirsch index for each researcher. The h index (h) means that h of an author’s publications have at least h citations each and the author’s other papers have at most h citations each.13 This index was calculated from the citation scores collected from the ISI Citation Index data for this project using an Excel spreadsheet.
Identifying records and sample selection
Overall, 121 701 records on primary care or by practitioners or researchers from primary care were downloaded from Medline and Embase. After the removal of duplicate and irrelevant records 82 169 remained (table 1⇓). The 15% random sample comprised 12 421 papers. Figure 1⇓ presents the number of papers assessed at different stages during the study.
Of the 15% sample of papers, 3627 (29.2%) had at least one author who was a researcher from primary care. Germany had the lowest proportion of primary care papers authored by researchers from primary care (annual mean 11.8% for 2001-7). The United Kingdom and the Netherlands had the highest annual mean proportions of papers authored by researchers from primary care (39.9% and 46.7% for 2001-7). Table 2⇓ shows the annual breakdown of the 3627 papers in the 15% sample categorised as being authored by researchers from primary care. The United States and United Kingdom produced the highest volume of research by primary care researchers in the sample: 1606 records and 1204 records, respectively.
The analysis of the 3627 papers for the proportion reporting research findings showed that on average 81.9% (n=226) of papers from the Netherlands in the sample years 2001-6 reported research findings. This compared with 66.9% (701) for the United Kingdom and 57.2% (804) for the United States. Canada and Germany had low volumes of research publications, but high proportions of those countries’ publications reported research (73.6% (142) and 67.2% (41), respectively).
Adjusting for national differences
Analyses of publication output weighted by national factors such as gross domestic product, gross expenditure on research and development (all disciplines), and number of researchers (all disciplines) provided information on the relative productivity of the six countries. The total annual numbers of publications reporting research were estimated from the sample. The Netherlands was the highest volume publisher per 10 000 researchers, followed by the United Kingdom (table 3⇓). In comparison, the United States and Germany seemed to show a consistently lower performance based on the number of researchers employed. Using the estimated output figure, the Netherlands seemed to rapidly increase its productivity in primary care research, quadrupling the number of primary care research reports per 10 000 researchers during 2001-6. The output of the United States increased but more slowly than that of the Netherlands: the estimated output per 10 000 researchers had doubled from 2001 to 2006. In this estimate the United Kingdom seemed to have a steady rate of growth in published research by authors from primary care, but these estimates showed that productivity had not yet doubled from the 2001 level.
The estimated number of research publications being produced annually by researchers or practitioners from primary care, weighted by the gross expenditure on research and development, showed a similar picture. The Netherlands had a high rate of productivity and a high rate of growth. The United Kingdom had a similar rate of productivity and slightly slower rate of growth. Australia and Canada showed slow signs of growth, and the United States and Germany had low rates of productivity and few signs of growth (table 4⇓).
Analysis by journal impact factors
The 15% sample was analysed by journal impact factors. The number of research publications (2001-7) in journals with impact factors is shown in total and as a percentage of all publications in the sample (table 5⇓). Authors from primary care in both the United States and the United Kingdom published the largest numbers of research findings in journals with an impact factor. When research output in journals with an impact factor was weighted by gross expenditure on research and development, the Netherlands had the highest numbers of publications in primary care research per billion dollars of gross expenditure on research and development over the period, followed by the United Kingdom (fig 2⇓).
On the assumption that publication in journals with higher impact factors is generally seen as desirable and one surrogate for quality, the analysis explored publication patterns in journals with different ranges of impact factors. In terms of the mean number of research reports by researchers from primary care (2001-6, 15% sample) per billion dollars of gross expenditure on research and development, the Netherlands achieved the highest mean numbers of publications in journals with an impact factor of 5 and above, and the United Kingdom achieved the highest mean numbers of publications in journals with an impact factor below 5 (fig 3⇓). The United States and Germany had much lower mean numbers of publications in all impact factor categories.
In terms of an international picture of publication of research findings in primary care, the analysis of the 15% sample by impact factor seemed to show an overall increase in the number of publications appearing in journals with an impact factor, and an increase in publications in journals with a higher impact factor (fig 4⇓).
Analysis of author output
Comparison of the publication and citation performances of researchers from primary care with the highest output volume, identified from the full search results (82 169 records) for this study, showed that 141 researchers were authors or coauthors of 6813 (8.3%) of the retrieved records. The number of individual publications across these 141 researchers ranged from 249 to 12. Forty authors had at least 80% of their publications in 2001-7 cited at least once, led by the United States with 16 authors, the United Kingdom with 11, and the Netherlands with 10. For the 22 authors who had at least 90% of their papers cited at least once, the United States had eight authors, the United Kingdom had seven, the Netherlands had five, and Canada had two. A UK academic had the highest percentage of papers cited more than 40 times (21.3% of papers), followed by a UK and an American academic, each with 14.6%.
For authors who had the highest proportion of publications (cited and uncited) cited at least five times, the best performers came from the United States (five), United Kingdom (four), and the Netherlands (two). In the top 10 of authors with the highest proportions of publications achieving 20 or more citations, six were from the United Kingdom and four from the United States. An American academic achieved the highest mean citation score across all papers (cited and uncited), with 47 citations per publication (70 per cited paper). A UK researcher achieved the next highest citation score, with 26 citations per publication (30 per cited paper).
The mean h index was 14 for the Netherlands, 13 for the United Kingdom, 12 for the United States, 7 for Canada, 4 for Australia, and 3 for Germany. A regression on the mean h index showed the mean h index was similar in the Netherlands, the United Kingdom, and the United States. The mean h index of the other three countries was significantly different (P<0.001) from that of the United Kingdom: Germany (95% confidence interval −11.69 to −8.48), Canada (−8.33 to −5.14), and Australia (−10.50 to −7.21).
On the basis of this comparison between six countries, UK researchers from primary care with a high volume of publications are internationally competitive. In every comparison the United Kingdom was in the top two countries for output volume and measures of performance, such as numbers of publications per 10 000 researchers or per billion gross expenditure on research and development. When we standardised the output measures to allow national comparisons of productivity of the authors, the United Kingdom and the Netherlands were top among the six countries. The sample also showed that the United Kingdom’s research output from primary care was increasing, although at a slower rate than that of the Netherlands and the United States.
The average quality of high volume UK researchers from primary care was also higher than comparators. UK researchers were in the top 10 for all performance measures, and their main competitors were American and Dutch researchers. Furthermore, the UK researchers tended to score better for citations and mean number of citations. The United Kingdom also had six researchers in the top 10 for achieving high proportions of publications with 20 or more citations.
These data show that UK primary care research competes successfully with selected high spending research economies and, based on an initial volume screen, can be equated with “world leading” status. The literature on the research output of different specialties within countries is large. However, international comparisons are rare. One of the few papers reporting research volume focuses on respiratory disease but does not address quality.14 Where comparisons were available they tended to measure different things and to have different objectives, such as how to measure better the productivity between institutions of differing sizes.15
This analysis presents performance data using abstracts of research papers retrieved by a sensitive search strategy, using two major bibliographic databases, limited to a specific period, and for six countries, achieved largely by using the data available within the records that were downloaded. These elements affect the reliability of the performance measures in several ways.
Generic limitations of bibliometric retrieval
The search strategy was designed to retrieve a high number of records relevant to primary care and produced by researchers from primary care. The performance of the strategy is, however, likely to be affected by a range of factors. Firstly, the terminology used to describe primary care, primary care professionals, and primary care activities is diverse and non-standardised, which can hamper efficient retrieval. Not all primary care research is helpfully signposted as such in the title and abstract or in indexing applied by database producers. Secondly, during development, we tested the strategy’s performance against a reference standard of UK publications and it may run the risk of over-performing in retrieving records relevant to the UK context. Despite being tested against UK records, the strategy was designed to be sensitive in its retrieval of non-UK publications by using the search features available in Medline and Embase. Thus the strategy includes a variety of non-UK terms for primary care, the appropriate Medical Subject Headings (MeSH) and EMTREE index terms for family practice and primary care (which should be used to index records about primary care research no matter the country of origin of the record), as well as search terms to identify journals published in the six countries of interest.
Details of author affiliations (if present) may not always indicate the primary care departments to which authors belong, making comprehensive but efficient retrieval difficult. Evidence from our analysis of authors shows that the search strategy missed publications by authors from primary care. When gathering citation information from the ISI citation indexes, we could see additional publications (some with ambiguous authorship) in the citation indexes. Many of these seemed to be relevant publications by the authors being checked but had not been revealed by the specific searches. These factors and others mean that the records retrieved by the search may not have captured all those that might be called “primary care research” (even by our broad definition), and the strategy will not have identified all the available publications by researchers from primary care to be found in the databases. The impact of any missing publications on the performance indicators, such as citation averages, is unknown. The limitations of the search strategy are, however, likely to be equally applicable in each country so should not alter the main comparisons. However, our study does illustrate the difficulties of achieving bibliographic comparisons across subject topics that are difficult to capture.
This analysis represents a snapshot of research output, covering papers published from 2001 to October 2007, recorded in two major databases, and available to be retrieved on a specific search date. It does not capture the total historical performance of a researcher through his or her lifetime or the research community in a country. Nor does it necessarily capture all the citations available to be analysed for the countries and authors of interest.
Data limitations in database records
The data available for searching and for categorisation, in particular those on affiliation, were as supplied by Medline and Embase. In any record, data on affiliation may vary, with no data, data for one author, or data for all authors. The affiliation data may give information at the organisational or departmental level. For the author analysis phase of this research, we identified authors with a high volume of output by a frequency analysis of records where affiliation could be clearly identified. Once those authors had been identified we researched all the records downloaded from the initial search by author name to collect additional records that had poor or no information on affiliation. This ensured high sensitivity for the author citation analysis because the analysis was informed by additional searches for known authors. However, variation in the affiliation data available for searching in Medline (usually one author) and Embase (usually all authors), compounded by the use of abbreviations and acronyms in affiliation names, meant that some records by researchers from primary care may not have been retrieved by the original search.
This is a key aspect of retrieval that needs to inform comparative organisational analysis using bibliographic records. Better authority control of authors’ names and institutional details would improve retrieval and analysis. Embase and ISI databases may be the preferred first sources for the purposes of bibliometric analysis rather than Medline as they offer affiliation details for more than the first author and so potentially enhance retrieval. As clinical research becomes ever more collaborative, as with the development of the National Institute for Health Research School for Primary Care Research, being able to attribute publications beyond the lead author alone becomes increasingly important. In addition, improved facilities for authors to collect their own records on publications and promote them may assist in improving access to publication patterns and metrics.16 Efficient linking of publications to authors and organisations is, however, likely to be delayed by frequent name changes and regroupings of departments within organisations, combined with the movement of researchers between organisations.
The records by researchers and practitioners were categorised according to whether they reported the findings of research. The judgments about whether research was being reported were based on identifying a description of research methods or results in the title or abstract. The categorisation is based on the data available in the record and is heavily reliant on both the presence of an abstract and clear reporting in the abstract. It is highly likely that, on examination of the full text of the journal articles, some records would change category (in either direction). This limitation also means that more specific judgments on the quality of individual research papers were impossible from the abstracts alone.
No single generally accepted measure of performance exists, so assessments of performance tend to form a complex picture. Choosing several performance measures provides different views on the output of high volume researchers. We used bibliometric analysis for the study, which is the commonest methodology used to assess the quality of research outside the United Kingdom. The ISI citation indexes were used in this research because they are probably the largest resource available and the most widely used for performance analyses. ISI’s Science Citation Index Expanded, used for this research, fully indexes over 6650 major journals across 150 scientific disciplines. However, several aspects of the ISI indexes might affect the results. Impact factors change over time; this was incorporated into the analysis by using the relevant data for 2001 to 2006. Without the data on impact factors for 2007, the calculations for 2007 should be treated with caution.
The choice of grouping for the impact factors (<2, 2-4.99, 5-9.99, and ≥10) is pragmatic. Few of the journals in the sample had an impact factor greater than 10, so the finer grained bands were set below 10. The analysis using these impact factor groupings has shown a trend and changes in publication pattern. Other groupings could be tested to see if the trends persist.
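The pragmatic banding described above amounts to a simple threshold assignment; a minimal sketch is shown below (the `impact_band` function is illustrative, not taken from the study):

```python
def impact_band(impact_factor):
    """Assign a journal impact factor to the pragmatic bands used
    in the analysis: <2, 2-4.99, 5-9.99, and >=10."""
    if impact_factor < 2:
        return "<2"
    if impact_factor < 5:
        return "2-4.99"
    if impact_factor < 10:
        return "5-9.99"
    return ">=10"

# Band boundaries sit just below the next whole number, so 4.99
# falls in the 2-4.99 band and 5.0 in the 5-9.99 band.
print([impact_band(f) for f in [1.3, 4.99, 5.0, 23.1]])
# → ['<2', '2-4.99', '5-9.99', '>=10']
```

Testing alternative groupings, as the text suggests, would only require changing these thresholds and re-running the tabulation.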
Many primary care journals in the sample had no ISI impact factor during the period of analysis and so do not feature in the analysis—for example, BMC Family Practice, Clinics in Family Practice, Education for General Practice, Education for Primary Care, and European Journal of General Practice. In addition, ISI does not index large numbers of German and Dutch journals—for example, many German researchers with a high output volume published in the non-indexed Zeitschrift fur Allgemeinmedizin, Zeitschrift fur Arztliche Fortbildung und Qualitatssicherung, Internistische Praxis, and Chirurgische Praxis. This means that researchers from Germany and the Netherlands may be under-represented compared with those from the United Kingdom, United States, Australia, and Canada, who publish largely in English language journals. This did not seem to adversely affect the Dutch authors in this sample, who were competing in the “top 10” on many performance measures, perhaps because of their strong tradition of also publishing in English. Researchers do exhibit a range of biases in how they cite papers, such as preferentially citing national studies, which will affect citation analysis.17 The impact factor of individual journals does not, however, guarantee the quality of the research of individual researchers publishing in the journal, so it may be more informative to look at the citation history of the individual researcher.18
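The Hirsch index reported for each country's researchers summarises exactly this kind of individual citation history. A minimal sketch of its standard definition follows (the `h_index` function is illustrative, not the study's code):

```python
def h_index(citations):
    """Hirsch index: the largest h such that the researcher has at
    least h papers each cited at least h times."""
    cites = sorted(citations, reverse=True)
    h = 0
    for rank, count in enumerate(cites, start=1):
        if count >= rank:  # the paper at this rank still clears the threshold
            h = rank
        else:
            break
    return h

# A researcher whose papers are cited 25, 8, 5, 3, 3, and 1 times has
# h = 3: three papers each cited at least three times.
print(h_index([25, 8, 5, 3, 3, 1]))  # → 3
```

Because the index depends only on an author's own citation counts, it sidesteps journal level impact factors, though it still inherits any gaps in the underlying citation database.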
Study specific limitations
Primary care definition
This analysis is based on comparisons in generalist primary care and of practitioners who provide comprehensive rather than subspecialty care in the community. For most countries this represents general practice and, for the United States, family practice.
The analysis depends on the consistency of the coding. Within the available resources it was only possible to use one coder; ideally there should have been at least two independent coders. Without independent double coding, errors are likely, the direction and number of which are unknown.19
Six country comparison
The countries selected for comparison with the United Kingdom were chosen because of their potential to be competitive for volume and quality of primary care research and the availability of data to allow comparisons on spending for research and numbers of researchers. The choice of the five comparator countries was pragmatic, although supported by international comparisons of output in science and engineering research: American authors produced 205 000 articles in 2005, accounting for 29% of the world’s total.20 The United States was followed by Japan with 8% and the United Kingdom, Germany, and China with 6% each. Chinese publications increased at an average annual rate of 16% between 1995 and 2005, surpassing France in 2003 and nearly equalling Germany and the United Kingdom in 2005.20 A recent review of international research publications reported that the six countries we selected were among the 10 OECD countries publishing the highest volume of research in 2005.21 However, future analyses could be expanded to cover France and China.
Weighting volume of outputs
Ideally the research productivity measures would have been weighted by national funding levels of primary care research and numbers of researchers, or, at a higher level, funding of health and medical research and researchers. However, achieving such comparable data for six countries was not possible, so the more generic values of total gross expenditure on research and development (all disciplines) and total researchers in all disciplines had to be used in this analysis.
Selection of authors for individual analysis of citation rates
The analysis of individual researchers was limited to those with the highest output volume, rather than being predetermined by the investigators’ knowledge of high quality academics, since this would have inevitably biased the study towards our greater knowledge of UK researchers. This initial selection by volume standardised the identification of publications for the individual citation analysis across the six countries. However, the citation rates of researchers with fewer publications may compete with those of high volume researchers; indeed, although the search strategy did identify high quality UK researchers from highly rated departments in the Research Assessment Exercise 2008, it did not select all of the most highly rated UK researchers from primary care. Some of those with the highest citation rates were not selected because their publication rate was slightly below the cut-off for the initial volume measure. However, this limitation applied to all six countries, and although the mean quality scores for primary care research as a whole may be underestimated, it is unlikely to have distorted the comparison between countries.
Since resources only allowed analysis of the outputs of 141 individual researchers, the performance of other researchers from primary care within this collection of records remains unknown. Authorship volume, and the results of analysis by author volume, may also be affected by different academic and research cultures where authorship conventions differ, for example in deciding who should be the first author and who should be included in the author list.
The search strategy may not be reliable in distinguishing primary care research actually done by researchers from primary care. For example, three of the top 10 researchers with high output volume in the United Kingdom were epidemiologists working in large academic centres that include primary care. These researchers met our criteria but may not consider themselves primary care researchers. However, these same limitations relate to all countries and are unlikely to have distorted the findings, except for possibly the United States where the ratio of epidemiologists to primary care academics is higher than that for the other five countries (D Mant, personal communication).
This international comparison of the volume and citation rates of UK primary care research accords with the Research Assessment Exercise 2008 primary care panel’s judgment that more than half of the outputs submitted to them were “internationally excellent or world leading,” a profile “compatible with UK primary care research frequently being among the best internationally.”22
What is already known on this topic
The UK Research Assessment Exercise in 2008 rated 50% of UK primary care research as world class or internationally excellent, but no direct international comparisons exist
What this study adds
In six countries with strong primary care, the United Kingdom and the Netherlands produce the most cited primary care research led by primary care researchers
Identifying research on primary care that is carried out by primary care researchers is difficult using routine bibliometric methods
Only 29% of research papers on primary care had at least one primary care researcher as author
Cite this as: BMJ 2011;342:d1028
We thank the following primary care academics who, in addition to the authors, compared the search generated publication lists with their own records in two rounds of validation checks: Martin Roland, Helen Smith, Brendan Delaney, Paul Little, Paul Wallace, and Bob McKinlay; the input of Kath Wright, Kate Light, Lindsey Myers, Su Golder, Steven Duffy, Kate Misso, David Fox, and Lisa Stirk from the Centre for Reviews and Dissemination at the University of York who assisted with the development of the search strategy and the categorisation of primary care research records; statistical assistance from Andrew Jones. Part of this research was carried out while JG was employed at the Centre for Reviews and Dissemination, University of York, and completed while employed at the York Health Economics Consortium.
Contributors: All authors contributed to the study design. JG led the bibliometric reviewing and preliminary analysis. FDRH and JG led the preliminary interpretation, to which all authors contributed. FDRH and JG drafted the paper and are guarantors. All authors edited the final paper.
Funding: This study was jointly funded by the Society for Academic Primary Care and the National Institute for Health Research (NIHR) National School for Primary Care Research.
Competing interests: All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; most of the authors (TK, RM, JC, FDRH) are UK primary care researchers, and TK and FDRH competed in the Research Assessment Exercise 2008. FDRH is director of the National Institute for Health Research School for Primary Care Research.
Ethical approval: Not required.
Data sharing: No additional data available.
This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.