Learning In Practice

Can poorly performing doctors blame their assessment tools?

BMJ 2005; 330 doi: https://doi.org/10.1136/bmj.38469.406597.0B (Published 26 May 2005) Cite this as: BMJ 2005;330:1254
  Richard Baker, professor (rb14{at}le.ac.uk)
  Department of Health Sciences, University of Leicester, Leicester General Hospital, Leicester LE5 4PW

    In the health services of the Western world, a search is under way for effective and practical methods of assessing doctors' performance. Regulators want to identify doctors whose performance is unacceptable, educators want methods to guide their various interventions, and managers want information for monitoring and appraisal. Multisource feedback (sometimes called 360° feedback) is relatively new in health care, although the commercial world has used it for a decade or more. The paediatricians in Sheffield who used SPRAT (the Sheffield peer review assessment tool) are being followed by doctors in other disciplines using similar instruments.1 In the United Kingdom, several local groups are already exploring the potential of multisource feedback in the annual appraisal of doctors. Will this lead to a proliferation of confusing—and pescatorial—acronyms? Before boarding the multisource feedback fishing boat, however, we need to be sure that the instruments adopted are satisfactory and that the costs of their use are justified by the benefits.

    The assessment instruments must be reliable and valid. Achieving reliability is not usually an insurmountable problem, but achieving validity is challenging. Indeed, a single instrument that can assess all relevant aspects of clinical performance in a given medical discipline is almost certainly impossible to develop; clinical practice is simply too complex. Nevertheless, a mix of instruments could well meet the assessment needs of regulators, educators, managers, and doctors themselves.2 When a mix of instruments is used, doctors whose performance is assessed as unsatisfactory are unlikely to be able to blame the assessment tools. Methods are available to confirm the reliability and validity of assessment instruments,3 but evidence relevant to multisource feedback instruments is limited. For example, in a recent review of physician peer assessment instruments, only three met the inclusion criteria of having some data on how the instrument was developed and of having been validated using psychometric methods.4 SPRAT is therefore a welcome addition.
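    To illustrate what "psychometric validation" of such an instrument involves, the sketch below computes Cronbach's alpha, one common index of internal-consistency reliability, on entirely hypothetical multisource feedback ratings (the instrument items, rating scale, and scores are invented for illustration and are not taken from SPRAT or the studies cited here).

```python
# Illustrative only: Cronbach's alpha, a standard internal-consistency
# reliability coefficient, applied to hypothetical multisource feedback
# ratings. Rows are doctors being rated; columns are instrument items.
def cronbach_alpha(ratings):
    k = len(ratings[0])   # number of items on the instrument
    totals = [sum(row) for row in ratings]  # each doctor's total score

    def variance(xs):     # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in ratings]) for i in range(k)]
    return k / (k - 1) * (1 - sum(item_vars) / variance(totals))

# Hypothetical ratings on a 1-6 scale: four doctors, three items.
scores = [[5, 6, 5], [4, 4, 5], [6, 6, 6], [3, 4, 3]]
print(round(cronbach_alpha(scores), 2))  # prints 0.94
```

    A high alpha shows only that the items hang together as a scale; it says nothing about validity, which is why the editorial distinguishes the two requirements.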

    To assess the benefits of multisource feedback, we need to know whether it promotes improvement in performance when used in educational or management contexts and whether it safely identifies underperformance in the context of regulation. The role of patients, and the need to include peer assessors who have directly observed the doctor consulting with patients, must also be established.5 Only then can the place of multisource feedback in the assessment of doctors be decided. In the studies done so far, it is notable how many peers were willing to rate their colleagues, and those who were assessed also seem to have taken part reasonably readily. This is encouraging, but whether the same degree of cooperation would hold in a formal scheme is unclear; the purpose of the scheme—education, performance management, or regulation—would probably influence willingness to take part. Nevertheless, if multisource feedback lives up to its early promise, it will have contributed to the development of reliable and valid assessments of doctors' performance, and that is something to be welcomed.

    References
