Editorials

Researching the outcomes of educational interventions: a matter of design

BMJ 2002; 324 doi: https://doi.org/10.1136/bmj.324.7330.126 (Published 19 January 2002) Cite this as: BMJ 2002;324:126

RCTs have important limitations in evaluating educational interventions

  David Prideaux, professor of medical education (David.Prideaux@flinders.edu.au)
  Office of Education, School of Medicine, Flinders University, Box 2100 GPO, Adelaide, South Australia 5001

    Learning in practice p 153

    Problem based learning, an educational intervention characterised by small group and self directed learning, is one of medical education's more recent success stories, at least in terms of its ubiquity. From its beginnings at McMaster University in the 1960s it has been adopted in undergraduate medical courses worldwide. It is also being used in postgraduate and continuing medical education.

    Problem based learning has been the subject of at least four much quoted reviews, three published in the early 1990s and one more recently.1-4 Such attention is not surprising. What might be surprising is that the effects of such a popular educational approach are seemingly small, except in the area of student satisfaction. According to the reviews the extent of knowledge gained, as measured by performance in licensing examinations, is at best unclear. Participants in problem based learning, however, can expect small gains in clinical reasoning.

    The paper by Smits and colleagues in this issue provides a review of problem based learning in postgraduate and continuing education (p 153).5 It is, however, based on only six studies which met the authors' inclusion criteria for controlled study designs. The conclusions of the paper are similar to those of the major reviews. There is limited evidence that use of problem based learning in postgraduate and continuing medical education increases knowledge, doctor performance, and patient outcomes. There is moderate evidence for increased satisfaction of participants.

    The debate on systematic reviews of problem based learning was taken to a new level with the publication of two articles in Medical Education in September 2000.6 7 They focused on the potential effects of research design on the findings of reviews. Albanese concentrated on effect size, while Norman and Schmidt argued for a theory based approach to the study of educational interventions. Taking the debate to this level is timely given the recent interest in the nature of evidence in medical education research, particularly through the work of the best evidence medical education movement. Smits and colleagues claim that controlled evaluation studies provide the best evidence of educational effectiveness. Despite claims in the paper to the contrary, this is not necessarily supported by the advocates of best evidence medical education, who have moved away from grading studies against the gold standard of the randomised controlled trial to a scheme based on criteria such as quality, utility, and strength of evidence.8 Norman and Schmidt provide a critique of the randomised controlled trial approach to researching curriculum interventions, suggesting that such studies are doomed to fail. This is familiar to educational researchers outside medicine, who some time ago abandoned the supremacy of randomised designs to embrace a range of quasi-experimental and qualitative designs.

    Three of the limitations of randomised controlled studies for studying educational interventions are highlighted by the paper. The first is randomisation. While randomisation is theoretically possible in educational research, it is often neither feasible nor justifiable. Is it justifiable to enrol medical professionals in postgraduate and continuing education programmes in which they are given no choice over the learning methods they will engage in? Furthermore, as Norman and Schmidt point out, randomisation relies on the maintenance of blind allocation.7 Maintaining blinding is rarely possible in research on educational interventions.

    The second issue is control of variables. At the very least the intervention itself may be variable: there are many variants of problem based learning. The process of education depends on its context. A myriad of factors, including facilities and resources, teacher and student motivation, individual expectations, and institutional ethos, affect the process. Again, it is theoretically possible to control for such variables, but in doing so the key factors that determine the success or failure of the intervention may be removed.

    The third issue concerns the choice of appropriate outcome measures. There is much interest in defining clear outcomes for medical education, and hence for medical education research.9 10 But the outcomes must be appropriate to the intervention. For example, is improved patient health an appropriate measure of educational effectiveness in continuing medical education? After all, it is influenced by a whole range of factors within and outside a doctor's control.

    Education is a discipline rich in theory. One of the functions of educational theory is to make predictions about outcomes and their relationships that can be tested through empirical work. Yet much research in medical education proceeds devoid of theory. More, not less, theory based research is needed7 so that researchers will focus on significant outcomes that are amenable to intervention.

    There is a clear imperative to research the effects of educational interventions at all levels of medical education and training. The research, however, must be designed so that the findings can be truly ascribed to the intervention rather than being an artefact of the methods used.
