Problem based learning in continuing medical education: a review of controlled evaluation studies
BMJ 2002;324 doi: https://doi.org/10.1136/bmj.324.7330.153 (Published 19 January 2002)
Cite this as: BMJ 2002;324:153
- P B A Smits, occupational physiciana,
- J H A M Verbeek, PhD, occupational physicianb,
- C D de Buisonjé, educational researchera
- a Netherlands School of Occupational Health, PO Box 2557, 1000 CN Amsterdam, Netherlands,
- b Coronel Institute, Academic Medical Centre/University of Amsterdam, PO Box 22660, 1100 DD Amsterdam, Netherlands
- Correspondence to: P B A Smits
- Accepted 20 September 2001
Problem based learning is one of the best described methods of interactive learning, and many claim that it is more effective than traditional methods in developing lifelong learning skills, and that it is more fun.1 In the early 1990s, four systematic reviews of undergraduate medical education cautiously supported the short term and long term outcomes of problem based learning compared with traditional learning.2–5 Since then, many medical curricula have changed to problem based learning, but a recent review has questioned the value of problem based learning in undergraduate medical education.6
Postgraduate and continuing medical education differ from undergraduate education in that they go beyond increasing knowledge and skills to improving physician competence and performance in practice, ultimately leading to better patient health.7 Problem based learning may also be effective in this context.8 There is some evidence that interactive sessions can change professional practice, but there have been few well conducted trials.9 10
We could find no reviews of the effectiveness of problem based learning in continuing medical education. Controlled evaluation studies provide the best evidence of effectiveness of educational methods, in line with the movement of best evidence medical education.11 We therefore conducted a systematic review of the literature to find out if there is evidence that problem based learning in continuing medical education is effective.
Summary points
- Reviews of undergraduate medical education cautiously support the short term and long term outcomes of problem based learning compared with traditional learning
- The effectiveness of problem based learning in continuing medical education, however, has not been reviewed
- This review of controlled evaluation studies found limited evidence that problem based learning in continuing medical education increased participants' knowledge and performance and patients' health
- There was moderate evidence that doctors are more satisfied with problem based learning
Search strategy
We searched the databases Medline, Embase, Psyclit, the Educational Resources Information Centre (ERIC), the Cochrane Library, and the Research and Development Resource Base in CME on the Internet (RDRBWEB) from 1974 (the year Neufeld and Barrows published their new approach to medical education) to August 2000. We searched for studies with the keywords “problem-based (PBL),” “practice-based,” “self-directed,” “learner centred,” and “active learning.” We combined the search results with another search using the keywords “continuing medical education (CME),” “continuing professional development (CPD),” “post-professional,” “postgraduate,” and “adult learning.” Finally, we conducted a manual search of relevant references in the included studies.
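The two keyword sets were combined with a boolean AND. As a minimal sketch of that combination (assuming a generic PubMed-style boolean syntax; the exact syntax differs per database, and the function name is illustrative only):

```python
# Hypothetical sketch of the review's two-part search strategy:
# one OR-block of problem-based-learning terms ANDed with one
# OR-block of continuing-medical-education terms.
pbl_terms = ["problem-based", "PBL", "practice-based",
             "self-directed", "learner centred", "active learning"]
cme_terms = ["continuing medical education", "CME",
             "continuing professional development", "CPD",
             "post-professional", "postgraduate", "adult learning"]

def or_block(terms):
    """Join quoted terms into a parenthesised OR expression."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

query = or_block(pbl_terms) + " AND " + or_block(cme_terms)
print(query)
```

In practice each database would need its own field tags and truncation symbols; the sketch only shows the logical structure of combining the two searches.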
We included studies in which the author(s) had indicated that the educational intervention was problem based and in which the learning process in essence resembled the methods used at McMaster University or the University of Maastricht.12 13 These methods consist of a tutor facilitated, problem based learning session in which a small, self directed group starts by brainstorming about a problem that challenges its knowledge and experience. Learning goals are formulated by consensus, new information is learnt by self directed study, and the session ends with a group discussion and evaluation.
For this review, we chose keywords to capture relevant educational articles across the entire domain of postgraduate and continuing medical education and continuing professional development. We scanned all the studies collected for controlled trials with a pretest/post-test design. Because of the small number of randomised trials, we did not exclude other types of controlled trial. With this strategy, we hoped to find all relevant controlled studies on problem based learning in continuing medical education.
Review method of selected studies
Two reviewers (PBAS and JHAMV) independently assessed the quality of the studies using five quality criteria. Each criterion was allotted a maximum of 10 points, making a maximum possible score of 50 points (see appendix on bmj.com for more details). We discarded a sixth possible criterion, that groups should be treated equally, with the exception of experimental education.14 Many factors may influence the outcome of education (tutor, educational materials, lecture rooms, etc), and it was not possible to extract this information from the studies: we therefore could not assess equal treatment of groups.
Studies with a total score of ≥25 points were considered to be of high quality, and those with <25 points were of low quality. We distinguished two different categories of study by the control groups: in one category problem based learning was compared with a more lecture based programme, while in the other it was compared with no educational intervention.
For each study, we looked for four outcome variables—participants' knowledge, performance, and satisfaction and patients' health—and assessed the level of evidence on these. We graded the evidence for the effectiveness of problem based learning as strong if there was a positive outcome in two high quality studies, as moderate if there was a positive outcome in one high quality and one low quality study, as limited if there was a positive outcome in one high quality study or one or more low quality studies, and none if there was a contradictory outcome or no outcome.
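The quality threshold and the evidence-grading rules together form a small decision procedure. A sketch in Python (function names are hypothetical; contradictory outcomes, which the review also grades as "none", are not modelled here):

```python
def study_quality(scores):
    """Classify a study from its five criterion scores (0-10 each,
    maximum total 50); a total of >= 25 counts as high quality."""
    assert len(scores) == 5 and all(0 <= s <= 10 for s in scores)
    return "high" if sum(scores) >= 25 else "low"

def grade_evidence(positive_studies):
    """Grade the evidence for one outcome variable, given the quality
    labels ("high"/"low") of the studies with a positive outcome on it.
    Note: contradictory outcomes (also graded "none" in the review)
    are not represented in this simplified sketch."""
    high = positive_studies.count("high")
    low = positive_studies.count("low")
    if high >= 2:
        return "strong"       # positive outcome in two high quality studies
    if high == 1 and low >= 1:
        return "moderate"     # one high quality plus one low quality study
    if high == 1 or low >= 1:
        return "limited"      # one high quality, or only low quality studies
    return "none"             # no positive outcome
```

For example, a single high quality study with a positive outcome yields only "limited" evidence, which matches how the satisfaction and knowledge findings are graded later in the review.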
Results of literature search
Six controlled trials met our inclusion criteria.15–20 A manual search of references from these studies did not yield any new trials that met our criteria. Five of the studies assessed the effect of continuing medical education on general practitioners. Four studies included more than 50 participants,16–18 20 and two had fewer than 20.15 19
Table 1 shows the results of our quality assessment of the six studies. Two studies were of “high” quality,15 17 and the others were “low.” Two of the trials were randomised trials.15 16 In Heale et al's study, however, the group of randomly allocated doctors was combined with a group who did not want to participate in the entire study.16 Whether the randomisation is valid in terms of equality of groups is unclear.
Results of studies
Table 2 shows the results of the six studies. Outcome measurement was often restricted to only one variable. No study measured both the preferred outcome variables—participants' performance and patients' health. In one of the high quality studies—problem based learning via email versus use of internet resources—neither educational programme increased participants' knowledge, but group size was small.15 The other high quality study—problem based versus lecture based learning—showed positive results for problem based learning in terms of participants' knowledge, clinical reasoning, and satisfaction.17 It is unclear whether these effects can be attributed to the problem based learning format, however, because the periods of educational exposure differed.
Table 3 shows the level of evidence we found for the outcome variables. With the three studies that compared problem based learning with another educational format, we found no evidence that problem based learning affected participants' knowledge and performance and moderate evidence that it increased participants' satisfaction. None of the studies measured patients' health. The other three studies compared problem based learning with no educational intervention and were of low quality. They show limited evidence that problem based learning was effective in improving participants' knowledge and performance and patients' health (table 3). Differing degrees of satisfaction cannot be compared in this study design.
Discussion
We found few relevant studies, of varying quality. We found no consistent evidence that problem based learning in continuing medical education is superior to other educational strategies in increasing doctors' knowledge and performance, but moderate evidence that it leads to higher satisfaction. There is limited evidence that problem based learning increases doctors' knowledge and performance and patients' health more than no educational intervention at all.
However, the studies in which the control group received no educational intervention can give information only on the effects of receiving education, not on the specific educational method. For studies that compare problem based learning with another method, the content, process, and influencing variables of both interventions must be clearly described before one can conclude that one intervention is more effective. The information on the educational interventions given in the three such studies can be rated as completely absent,16 poor,15 and reasonable.17
In studies not restricted to problem based learning, there is some evidence that interactive educational methods in continuing medical education are more effective in changing doctors' performance and patients' health.9 The results of this literature study on problem based learning in continuing medical education seem to be comparable with those on problem based learning in undergraduate medical education.2–4
Studying the effectiveness of education is complex,21 22 but we should be able to perform studies of higher quality than those reviewed here, especially when comparing educational methods. As our review shows, it is possible to randomise participants to different educational methods. We have to do better in defining an educational method and in controlling what actually happens in educational practice. We also have to clarify the aims of our education. Is our objective to increase knowledge, change attitudes, or improve health care? Outcome variables should correspond with our objectives, and preferably several different variables should be measured, including participants' performance and patients' health.9 There seems to be agreement that even a small significant effect counts as evidence of effectiveness.21 23 Evaluation is further complicated by professional and social context, as is shown in research on implementation of guidelines.24 This calls for randomisation, because observational studies can easily be biased by these factors.
Contributors: PBAS coordinated the systematic review, formulated the study hypothesis, performed the searches, assessed the quality of studies, and wrote the article. JHAMV initiated the review, helped formulate the study hypothesis, supervised the systematic approach of the review, reported on the results, and helped write the article. CDdB helped formulate the study hypothesis, performed the searches, and made a first review and draft of the study results. PBAS and JHAMV are guarantors for this article.
Editorials by Prideaux and Goldbeck-Wood and Peile
Funding This study was supported by the Netherlands Organisation of Scientific Research (NWO) as part of the priority programme on fatigue and work and by the Netherlands School of Occupational Health.
Competing interests None declared.
Details of criteria used to assess quality of studies appear on bmj.com