General Practice

Evidence based general practice: a retrospective study of interventions in one training practice

BMJ 1996;312:819 (Published 30 March 1996)
  1. P Gill, research tutor,a
  2. A C Dowell, director,a
  3. R D Neal, research fellow,a
  4. N Smith, research fellow,a
  5. P Heywood, deputy director,a
  6. A E Wilson, lecturer,a
  1. a Centre for Research in Primary Care, Leeds University, Leeds LS2 9LN
  1. Correspondence to: Dr Dowell.
  • Accepted 19 February 1996


Objectives: To estimate the proportion of interventions in general practice that are based on evidence from clinical trials and to assess the appropriateness of such an evaluation.

Design: Retrospective review of case notes.

Setting: One suburban training general practice.

Subjects: 122 consecutive doctor-patient consultations over two days.

Main outcome measures: Proportions of interventions based on randomised controlled trials (from literature search with Medline, pharmaceutical databases, and standard textbooks), on convincing non-experimental evidence, and without substantial evidence.

Results: Of the 122 consultations recorded, 21 were excluded because the patient was referred to hospital, sent for investigation, or had insufficient data recorded. Of the remaining 101 interventions, 31 were based on evidence from randomised controlled trials and 51 on convincing non-experimental evidence. Hence 82/101 (81%) of interventions were based on evidence meeting our criteria.

Conclusions: Most interventions within general practice are based on evidence from clinical trials, but the methods used in such trials may not be the most appropriate to apply to this setting.

Key messages


  • 81% of interventions in this general practice could be described as evidence based by this method of assessment

  • Evidence derived from different methodologies may be important for the assessment of the evidence base of general practice


The recent enthusiasm for developing evidence based medical practice has been the subject of debate.1 The establishment of the Cochrane Collaboration2 and the publication of a new journal, Evidence-Based Medicine,3 highlight the fact that the research and scientific basis of medicine is currently subject to close scrutiny.

Last year the Lancet published a paper4 that challenged previously held beliefs that less than 20% of medical practice was based on scientific evidence.5 The authors assessed the evidence base for the treatment of 109 medical patients in hospital. Their findings, that up to 80% of acute hospital interventions had a scientific rationale, produced much comment6-8 and a challenge from the authors to repeat the study in other clinical settings. By applying the same methodology we investigated the degree to which general practice is evidence based.


Methods

Consecutive consultations over two days were reviewed by retrospective analysis of case notes from a suburban training general practice. For each consultation, two of the authors independently recorded the primary diagnosis and intervention before reaching consensus. The primary diagnosis was defined as the first problem recorded for the consultation and the primary intervention as “the treatment or manoeuvre that represented the practitioner's attempt to cure, alleviate, or care for the patient in respect of the primary diagnosis.”4 The evidence for the interventions was then searched for in Medline (1966-95), standard textbooks, and pharmaceutical companies' databases.

We classified the interventions as did Ellis et al: (i) intervention based on evidence from randomised controlled trials; (ii) intervention based on convincing non-experimental evidence; (iii) intervention without substantial evidence, meeting neither criterion (i) nor (ii). To assess interventions not supported by randomised controlled trial evidence we held a consensus meeting of our academic team of five general practitioners and a non-medical arbiter. Only interventions with the unanimous consensus of the team were allocated to group (ii) or (iii).


Results

Of the 122 consultations recorded, 21 were excluded: six patients were referred to hospital, five were sent for investigation, and the remaining 10 had insufficient data recorded. This left a study sample of 101 diagnosis-intervention pairs.

Primary interventions were classified as “evidence based” if they fulfilled the criteria for category (i) or (ii); by this standard 81% of patients (82/101) had received evidence based interventions (tables 1 and 2).9-33 The remaining 19% (table 3) were judged to have received treatment without substantial supporting evidence from our search.

Table 1 Interventions (n=31) substantiated by evidence from randomised controlled trials

Table 2 Interventions (n=51) substantiated by convincing non-experimental evidence

Table 3 Interventions (n=19) without substantial evidence


Discussion

This pilot study has shown that the majority of interventions within general practice are based on evidence. This is comparable with the findings of the study set in an acute unit in Oxford.4

Our study has some limitations. As it is a retrospective study within one training practice, the results cannot be generalised. By limiting our search to databases such as Medline, which we acknowledge are not comprehensive,34 we may have failed to find all the available evidence. Nor did we attempt to assess the methodological quality of the trials identified. Nevertheless, we believe that our study raises some points worthy of debate.


General practice is characterised by patients presenting with multiple and ill defined problems; a specific diagnosis may not be reached within a single consultation. We accepted the primary definition of the clinical problem by the general practitioner at face value. In using the primary diagnosis as denominator we have not only reduced the complexity of general practice but also lost some of its reality.6 We recognise that it may be difficult to allocate a specific diagnosis to a symptom, such as “painful tongue,” in the same way as, in a hospital setting, “non-cardiac chest pain” may have various causes.

General practice consultations may be triggered by a variety of circumstances (certification or external pressure, for example), whereas a more critical, often clinical, event usually precipitates a hospital admission. Clinical problems have many facets, hence diagnoses and interventions are often multiple, particularly when physical, psychological, and social elements are considered. The diagnosis-treatment pair “depression/counselling” can tell only a fraction of the story of a complex interaction. There are patients in whom the disease is neither clear nor relevant to the patient's problem,35 and the presence of any disease is not always proved by investigations. Also, there is pressure to record a medical diagnosis to justify treatment.36 On the other hand, secondary diagnoses and social problems may not be recorded.37 Nevertheless, we agree with Bridges-Webb et al that doctors' diagnoses remain relevant, if not absolutely valid, since doctors are likely to base their recommendations on the labels they report.38 This difficulty in separating out primary events is not unique to general practice; in all specialties, assigning an appropriate diagnostic label or labels must be considered an integral part of evidence based practice.


The study raises several questions concerning the appropriateness and quality of randomised controlled trial evidence. Firstly, randomised controlled trials may not necessarily indicate the most cost effective current treatment for general practice. For example, use of a third generation cephalosporin for a urinary tract infection may be substantiated by a randomised controlled trial, but in an uncomplicated case trimethoprim may be just as effective, and certainly cheaper.

Further questions that arose during our literature search concern how endpoints in randomised controlled trials should be measured. There is clear evidence that angiotensin converting enzyme inhibitors and calcium channel blockers reduce blood pressure, but none that they reduce cardiovascular morbidity and mortality. Should this be regarded as “good” evidence? How should we deal with randomised controlled trials that are apparently outdated by evidence that another treatment is available (for example, oral ketoconazole rather than topical clotrimazole for Candida albicans), or by later evidence from other sources that a treatment can be harmful (temazepam capsules, for example)? Do randomised controlled trials have to be compared against placebo, or is it acceptable to compare with the currently accepted “standard” treatment, even though this may not itself have been tested against placebo in a randomised controlled trial?

Evidence based practice has to accept the possibility that evidence from randomised controlled trials does not necessarily have the value of a “gold standard” but rather that of a coffee future, likely to be altered by tomorrow's experience. Furthermore, some interventions were originally assessed within secondary care although their main use is in the community.


Despite the healthy debate resulting from the Oxford study,6-8 both patients and policy makers might want doctors to try to base as many of their interventions as possible on evidence from clinical trials. There may be a temptation to produce a league table of specialties and settings: for example, inpatient medicine might be found to be “better than” general practice by 1%.

We could question whether it is feasible or even desirable to pursue the goal of 100% evidence based practice. Much of the work within general practice, as well as in other settings, consists of medicine that combines science with art, sociology, mythology, and pastoral care. These aspects of care must be incorporated into an appropriate paradigm of evidence based practice rather than one determined solely by clinical trials.

Linked to this is the search for appropriate methods of providing the evidence. We believe that for general practice, and possibly in other settings too, the most important evidence may come from developing alternative methodologies that complement the conclusions of randomised controlled trials.

We thank staff at The Street Lane Practice for their help.


  • Funding None.

  • Conflict of interest None.

