
Feature Christmas 2010: Reading between the Lines

On the impossibility of being expert

BMJ 2010; 341 doi: https://doi.org/10.1136/bmj.c6815 (Published 14 December 2010) Cite this as: BMJ 2010;341:c6815
Alan G Fraser, reader in cardiology,1 Frank D Dunstan, professor of medical statistics2

1 Wales Heart Research Institute, School of Medicine, Cardiff University, Cardiff CF14 4XN, UK
2 Department of Primary Care and Public Health, School of Medicine, Cardiff University, Cardiff, UK

Correspondence to: AG Fraser fraserag{at}cf.ac.uk

More scientific and medical papers are being published now than ever before. Alan G Fraser and Frank D Dunstan think that new strategies are needed to deal with this avalanche of information

Every doctor has an ethical duty to keep up to date. Is this just getting more difficult or has it already become impossible? Since Alvin Toffler coined the phrase “information overload” in 1970,1 the growth of scientific and medical information has been inexorable. There are now 25 400 journals in science, technology, and medicine, and their number is increasing by 3.5% a year2; in 2009, they published 1.5 million articles.2 PubMed now cites more than 20 million papers.

One response of the medical profession to the increasing scientific basis and clinical capacity of medicine has been to increase subspecialisation. This may restrict the breadth of knowledge of the ultraspecialist, but can such subspecialists still maintain their depth of expertise? Taking one medical subspecialty as an example, we have examined the gap between information and human capacity, and we explore the implications for any doctor who wants to practise evidence based medicine.

Methods

We searched the database of the US National Library of Medicine (PubMed) on 12 September 2010 for references relating to diagnostic imaging in cardiology. Table 1 shows the search terms used.

Table 1

Search strategies for identifying articles on diagnostic imaging in cardiology


We searched first for citations with any reference to echocardiography (the mainstay of diagnosis), and then narrowed the strategy to echocardiography as a main topic and restricted it to controlled clinical trials (strategies 1-4, table 1). Because it is recommended that junior colleagues should be trained in several imaging modalities,3 we performed further searches for the concept of “multimodality imaging” in cardiology. These covered single photon emission computed tomography (SPECT), positron emission tomography (PET), magnetic resonance imaging, computed tomography (CT), and coronary arteriography, as well as cardiovascular ultrasound (strategies 5 and 6, table 1).
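As an aside for readers who wish to reproduce such counts, the sketch below shows how one strategy might be run against PubMed programmatically, using Biopython’s Entrez interface to the NCBI E-utilities. It is a minimal illustration, not the authors’ code: the query string stands in for the exact strategies given in table 1, and the email address is a placeholder.

    # A minimal sketch, not the authors' code: counting PubMed citations
    # for one illustrative search strategy, year by year, via Biopython.
    from Bio import Entrez

    Entrez.email = "you@example.org"  # placeholder; NCBI asks for a contact address

    def count_citations(term, year):
        """Return the number of PubMed citations matching term in one year."""
        handle = Entrez.esearch(
            db="pubmed",
            term=term,
            datetype="pdat",   # restrict by publication date
            mindate=str(year),
            maxdate=str(year),
            retmax=0,          # only the count is needed
        )
        record = Entrez.read(handle)
        handle.close()
        return int(record["Count"])

    # For example, echocardiography as a medical subject heading (cf strategy 2)
    for year in range(1966, 2010):
        print(year, count_citations("echocardiography[MeSH Terms]", year))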

All searches were performed for each year from 1966 (the year before ultrasonics was introduced as a search term in PubMed; echocardiography was added in 1973) to 2009. Trends in papers on echocardiography were then modelled: a cubic model containing time, the square of time, and the cube of time gave a good fit and was used to predict the numbers of publications to the end of 2010 and annually to 2015.
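The cubic trend model can be illustrated with a short calculation. The annual counts below are synthetic placeholders, since the study used the real totals from the searches above; only the fitting and extrapolation steps are of interest.

    # A minimal sketch of the cubic trend model: annual counts are regressed
    # on time, time squared, and time cubed, then extrapolated to 2015.
    import numpy as np

    years = np.arange(1966, 2010)
    t = years - 1966  # centred time variable keeps the powers manageable

    # Synthetic annual totals for illustration only (the study used the
    # real counts from the PubMed searches)
    rng = np.random.default_rng(1)
    counts = 10 + 0.015 * t**3 + rng.normal(0, 30, t.size)

    coeffs = np.polyfit(t, counts, deg=3)  # cubic least squares fit
    model = np.poly1d(coeffs)

    for year in range(2010, 2016):
        print(year, round(float(model(year - 1966))))  # projected annual totals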

Results

Table 2 shows the publications retrieved by each search, along with totals for the last full calendar year. Figures 1 and 2 show annual totals to 2009 and predictions from 2010 until 2015.


Fig 1 Trends in numbers of papers listed each year in PubMed, according to search strategies 1-4 in table 1. The numbers to the right of the vertical line, after 2009, are projected totals. MeSH=medical subject heading; Majr=as a main topic; CTs=controlled clinical trials


Fig 2 Trends in numbers of papers cited each year in PubMed, according to search strategies 5 and 6 in table 1

Table 2

 Results of search strategies


Search 5 without the cardiovascular system gave 700 011 citations. A search for “diagnostic imaging” [Mesh] and “cardiovascular system” [Mesh] gave 195 106 papers, or 159 661 if limited to human studies, core clinical journals, and Medline.

To estimate the time that it might take a new entrant to the subspecialty to read all the previous literature, we assumed that he or she could read five papers an hour (one every 10 minutes, with a 10 minute break each hour) for eight hours a day, five days a week, and 50 weeks a year; this gives a capacity of 10 000 papers in one year. Reading all papers referring to echocardiography (search 1) would take 11 years and 124 days, by which time at least 82 142 more papers would have been added, accounting for another eight years and 78 days. Before our recruit could catch up and start to read new manuscripts published the same day, he or she would, if still alive and even remotely interested, have read 408 049 papers and devoted (or served a sentence of) 40 years and 295 days. On the positive side, our recruit would finish just in time to retire.
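The catch-up arithmetic can be checked with a back-of-envelope calculation. The backlog and publication rate below are approximations back-calculated from the figures above (the study itself used the projected annual totals), so the result only roughly matches.

    # A back-of-envelope sketch of the catch-up calculation, assuming a
    # constant publication rate rather than the growing one modelled above.
    CAPACITY = 5 * 8 * 5 * 50  # papers/hour x hours/day x days/week x weeks = 10 000/year

    def years_to_catch_up(backlog, new_per_year):
        """Years until the backlog, plus everything published meanwhile, is read."""
        if new_per_year >= CAPACITY:
            return float("inf")  # the literature outruns the reader
        return backlog / (CAPACITY - new_per_year)

    # Approximate figures for search 1: ~113 400 papers already published
    # (11 years 124 days at 10 000 a year) and ~7 200 new papers a year
    print(years_to_catch_up(113_400, 7_200))  # about 40.5 years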

Reading only the major studies would need more than four years for strategy 3 and more than five years for strategy 6. Alternatively, if only one year was allocated for study, then for strategy 3 our recruit would need to read 95 papers every single day. If our recruit kept to the European Working Time Directive and read only on working days, he or she would have to read 138 papers a day for strategy 3, or 162 papers a day for strategy 6, at a rate of one every three minutes.
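These daily rates follow from simple division, as the sketch below shows. The annual totals are approximations back-calculated from the rates quoted above, since table 2 is not reproduced here.

    # Daily reading rates implied by clearing a one-year backlog, assuming
    # roughly 250 working days a year; totals approximated from the text.
    for strategy, total in [("strategy 3", 34_600), ("strategy 6", 40_500)]:
        per_calendar_day = total / 365
        per_working_day = total / 250
        minutes_per_paper = 8 * 60 / per_working_day  # in an 8 hour reading day
        print(f"{strategy}: {per_calendar_day:.0f} a day, "
              f"{per_working_day:.0f} a working day, "
              f"one every {minutes_per_paper:.1f} minutes")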

To keep up to date, the cardiac imaging specialist needs to read 30 papers a week on echocardiography or 43 a week on multimodality imaging (about 2 225 papers a year). If limited to one paper every working day (estimated total 250 a year), then the chance that he or she will read any particular paper is 1 in 8.9. The chance that a colleague on the opposite side of the world will read that same paper in the same year is 1 in 79. If each reads a random selection of 250 papers a year, then the median number that both will read can be estimated at 28 (range 16-40), or 1.3% of the total.
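The overlap estimate follows from the hypergeometric distribution: the expected number of papers read by both specialists is 250 × 250 / 2 225 ≈ 28. A short sketch, using the annual total of about 2 225 papers implied by the figures above, illustrates this.

    # A sketch of the overlap calculation: two readers each choose 250 papers
    # at random from the ~2 225 multimodality imaging papers in a year.
    import random

    N, K = 2_225, 250  # papers published; papers each specialist reads

    print("Chance of reading a given paper:", K / N)   # ~1 in 8.9
    print("Chance that both read it:", (K / N) ** 2)   # ~1 in 79
    print("Expected overlap:", K * K / N)              # ~28 papers, 1.3% of N

    # Simulation check of the median overlap between two random readers
    overlaps = sorted(
        len(set(random.sample(range(N), K)) & set(random.sample(range(N), K)))
        for _ in range(10_000)
    )
    print("Simulated median overlap:", overlaps[len(overlaps) // 2])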

A search strategy restricted to evidence on outcomes would not work for this subspecialty. The average number of controlled clinical trials that were published during the decade 2000-9 (search 4) was 26 each year. This number is not increasing, and it represents only 2% of the publications with echocardiography as a main topic (search 3) or 0.5% of all papers referring to echocardiography (search 1) during the same period.

Discussion

The gap between what we can learn and what is known is increasing all the time. We now know less and less about more and more, so being expert means knowing and publicly acknowledging the limits of your ignorance. Our analysis of one subspecialty showed no evidence whatsoever that the problem is abating, and remarkably similar patterns were reported recently for clinical trials and systematic reviews.4 Keeping up with the literature has already become a Sisyphean task. We are even engulfed by information overload about “information overload”5; searching this term on Google gives about 980 000 hits.

We assumed that all papers have equal value, which is clearly untrue; on the other hand, we allowed no time in our calculations for obtaining papers, and we ignored the projected increases in publication. Impact factors are concentrated in a few journals,6 and citation errors are common,7 so trying to use either to select what to read would be unreliable for a topic such as medical imaging. Misconceptions are promulgated when authors do not check primary sources,8 9 and persist even when findings are contradicted by randomised controlled trials.10 Not all systematic reviews are being kept up to date.11 Thus, delegating the selection of reading material to others might be unrewarding.

If anything, we probably underestimated calls on the doctor’s time for reading. Accurate diagnosis is a prerequisite for evidence based medicine, but experts in multimodality imaging should also know something about clinical and other aspects of their specialty. The fearsome (but thankfully purely hypothetical) strategy of getting a recruit to read 10 000 papers a year would produce a splendidly dull and narrow specialist who has never read the Christmas edition of the BMJ or treated a patient. Doctors need time to read outside medicine too; they can choose from 133 224 books published in the United Kingdom in 2009,12 or 122 462 982 active domains on the internet.13

Faced by the deluge of data, it is tempting to be nihilistic. After all, being ignorant of 100% of the literature in your field is not significantly different from being ignorant of “only” 98%. On the other hand, reading even 2% is more than reading nothing at all. The average specialist reads 322 papers a year,14 and a few brave, exceptional, and overcommitted people might accept the challenge of reading more. For most ordinary mortals, this is impossible, and for clinicians who are not also researchers it may no longer be sensible. Reading the literature has become a collective rather than an individual pursuit, and each of us must change our behaviour to reflect this.

Medical students and doctors in training can be taught to search databases effectively.15 Doctors can use new information technologies to gain prompt and efficient access via the internet to the most relevant new data for their specialty.16 The Cochrane Collaboration is admirable, but its programme of systematic reviews is not comprehensive, so all academic institutions and medical professional associations should contribute to collective efforts to summarise medical evidence and build trustworthy, interactive repositories of knowledge on the internet. Authors of clinical guidelines have a particular responsibility to ensure that their recommendations are based on rigorous and re-testable meta-analyses of randomised controlled trials, health technology assessments, and systematic reviews. Appropriate resources must be allocated for researchers and experienced senior colleagues to dedicate time to these activities. More could also be done to develop decision support tools.17

In fields such as diagnostic imaging the plethora of publications hides the fact that we still lack sufficient evidence for rational practice. The small proportion of papers that were cited as controlled clinical trials (search 4) reinforces this notion. The best way to assess any diagnostic strategy is a controlled trial in which investigators randomise patients to diagnostic strategies and measure mortality, morbidity, and quality of life,18 but only 2.4% of diagnostic recommendations in the guidelines from the American Heart Association and the American College of Cardiology are supported by this level of evidence.19

It is time for the profession to be more imaginative. How can we reduce the number and increase the quality of publications?20 Can we remove the responsibility of all researchers to publish all their results? Could they contribute instead to wikis? Can we construct open access internet resources that allow data mining? Only initiatives such as these will overcome our worrying finding that colleagues in the same medical discipline may inhabit intellectual worlds with little overlap. Happy reading!

Notes


Footnotes

  • Contributors: AGF conceived the study, performed the literature searches, and drafted the manuscript. FDD performed the modelling and revised the manuscript. AGF is guarantor.

  • Funding: None received.

  • Competing interests: The authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; FDD does occasional epidemiological consultancy for a law firm (Jacob, Medinger, & Finnegan) and has grants for unrelated work. AGF has no other relationships or activities that could appear to have influenced the submitted work.

  • Provenance and peer review: Not commissioned; externally peer reviewed.

References