The need for outcome measures in medical education

BMJ 2005; 331 doi: https://doi.org/10.1136/bmj.331.7523.977 (Published 27 October 2005) Cite this as: BMJ 2005;331:977
  1. Lambert Schuwirth, associate professor (l.schuwirth{at}educ.unimaas.nl),
  2. Peter Cantillon, senior lecturer
  1. Department of Educational Development and Research, Maastricht University, Netherlands
  2. Department of General Practice, National University of Ireland, Galway, Republic of Ireland

    Complex educational interventions demand complex and appropriate evaluations

    How can we ever be sure that educational approaches such as problem based learning are better than traditional ones? Change merely for the sake of change is futile. Changes in medical education should lead to better outcomes, but what is the best way to show cause and effect?

    For simple research questions straightforward methods suffice, but more complex questions require more complicated study designs. A question such as “Is drug A more effective than a placebo?” is highly relevant, and the methods needed to answer it may be relatively straightforward. However, the question “Why does drug A lead to a better outcome than a placebo?” is more complicated, and “Does using drug A lead to better health for the population?” even more so. Answering more complicated questions often requires a programme of research rather than a single study.

    Some authors would say that a randomised controlled trial is the best way to answer a question such as “Is problem based learning more likely than traditional education to produce good doctors?”1—and some would even say that anything less was unethical.2 Others argue that randomised controlled trials of large scale educational interventions are doomed to failure and should not be tried.3

    In this week's BMJ Tamblyn and colleagues report how they have taken up the gauntlet. They did not do a trial, however: they compared the quality—using a range of outcome measures—of the doctors who graduated before and after the introduction of a problem based learning curriculum.4 They found interesting differences between the groups, thus providing important material for debate and further research. The design of the study also provides food for thought.

    Deciding whether problem based learning produces better doctors requires, at least, clear consensus on what constitutes a better doctor. In trials of therapeutic interventions the outcome of each patient's management is a product of the interaction between multiple variables. These include the patient's personal characteristics such as age, sex, social status, type of disease, and concordance with treatment, as well as healthcare issues such as travelling distance to hospital and availability of diagnostic facilities and support staff. Furthermore, societal factors such as litigation and rationing may limit doctors' options. Comparing two cohorts while controlling for all these confounding variables is a tall order.

    In addition, there are many factors in doctors' lives other than the formal educational system that may influence their performance. These encompass not only personal preferences but also the time lag between education and starting practice and the influence of further specialist training.

    Lastly, the authors' selection of outcome measures may prove controversial. For example, a doctor's rate of carrying out breast cancer screening, even if it is an indicator of other preventive work, may not necessarily be a good indicator of overall medical competence and performance.

    Does this mean that changes in competence and performance are not measurable and that evaluation is pointless? We think not. It is essential to collect such data, not only to seek evidence for the notion that some broad changes in education are for the better, but also to gain more insight into exactly which elements of education work best. A single large scale study is unlikely to achieve all of this.5 Nor will research that looks only at one dimension, uses oversimplified outcome measures,6 or describes no more than convictions or beliefs. Evaluating a complex educational intervention such as a new curriculum demands a complete programme of research.6 7 Studies such as that by Tamblyn and colleagues add pieces to the puzzle rather than provide definitive answers.


    • Learning in practice p 1002

    • Competing interests None declared.
