Career Focus

Quality control in postgraduate training

BMJ 1998; 316 doi: https://doi.org/10.1136/bmj.316.7142.2 (Published 09 May 1998) Cite this as: BMJ 1998;316:S2-7142
  Khaldoun Sharif, Lecturer, and Masoud Afnan, Consultant
  Department of Obstetrics and Gynaecology, Birmingham Women's Hospital, Birmingham B15 2TG

    Measuring the pass rates of candidates in each training programme would sharpen the sensibilities of all involved, say Khaldoun Sharif and Masoud Afnan

    Membership or fellowship of a royal college is an essential qualification for a specialist medical career in the United Kingdom. One of the main aims of postgraduate medical training is to prepare trainees for a specialist career. This inherently includes training for the relevant membership or fellowship examination. But how good are individual training programmes in doing this job? We simply do not know.

    You might reasonably expect that professional pride would ensure that postgraduate deans (who pay for the training), the royal colleges (which recognise particular hospitals' suitability to do the training), and the training organisations publicly demonstrated the effectiveness of the training. However, none do: pass rates for trainees in each training programme are not published and probably not even collected. This is despite a clear recommendation from the Standing Committee on Postgraduate Medical and Dental Education (SCOPME) that all organisations involved in postgraduate medical education should monitor it in order to evaluate its effectiveness.(1)

    SCOPME also recommends that this monitoring should be an integral part of the educational process and should include, among other things, examination pass rates.(1) This information could certainly be collected. Directors of training programmes know (or should know) which candidates are sitting which exams. The results are available from the royal colleges, and some - such as those of the Royal College of Obstetricians and Gynaecologists (www.rcog.org.uk) - are published on the internet. With some organisation and minimal clerical work, it is possible to know how many candidates sat each exam diet and how many passed.

    Potential benefits of publishing rates

    Of course, the fact that something can be measured accurately is not in itself a sufficient reason to measure it. There has to be a real educational benefit from the exercise. In our opinion, all interested parties would benefit from the publication of pass rates for individual training programmes.

    Firstly, postgraduate deans, who pay for the training, would surely want to know if it is effective. If they find that a certain training programme is persistently underperforming, then they may wish to look into why that is. Improving the training skills of the trainers (such as by attending “Training the Trainers” courses) may be necessary.(2)

    Secondly, prospective trainees might want to know the pass rates for individual programmes before applying for training posts. Would you have applied to a particular medical school if you knew that it persistently had a low pass rate? Probably not, and the same might apply to postgraduate training. The fact that there are now many applicants and only a few training posts available is not a valid reason for not monitoring the effectiveness of training.

    Thirdly, the trainers themselves may benefit from knowing the effectiveness of their training. In almost all endeavours there is room for improvement, and monitoring the outcome of any process is the first step to recognising the need for improvement and implementing it. If all this leads to better training of doctors, then it will ultimately lead to better patient care, which is surely our main aim in medicine.

    Fear the league table?

    Publishing exam pass rates for individual training programmes would risk creating an educational league table. Recent experience with league tables for schools, hospital waiting lists, and in vitro fertilisation clinics suggests that the idea would not be welcomed by everybody. The main objections have been that league tables did not monitor a valid outcome, did not monitor it accurately, or did not take account of confounding variables that might affect the outcome. The same objections are likely to be raised against a league table of pass rates, and they should be addressed and discussed before the idea is implemented.

    Postgraduate exams are not perfect, but they are essential for achieving medical specialist status. If we accept them as such (and in the United Kingdom, de facto if not de jure, we do), then we must accept them as a valid method (among others) of assessing the quality of postgraduate specialist training. Monitoring the local pass rate accurately is also readily practicable. Therefore, we can accurately measure a valid outcome of postgraduate training. In fact, this was done annually until recently through the General Medical Council: under the Medical Act 1983 the royal colleges provided information about pass rates to the GMC's education committee. The committee, however, ceased monitoring the outcome of postgraduate qualifications after the GMC decided in November 1996 not to register postgraduate qualifications of any kind. There is therefore even more need now for individual training organisations to publish their trainees' pass rates.

    Correcting for confounding variables

    The true challenge, of course, is to control for confounding variables so that pass rates are as representative as possible of the quality of postgraduate training. Firstly, most training rotations include several hospitals, with trainees spending about a year in each. As all these units contribute to the training, pass rates should be collected per rotation rather than per unit.

    Secondly, there are often unexpected short term peaks and troughs in any measured variable, and pass rates are no exception. A long term view should be taken: it may be better to report a two or three year average of pass rates rather than six monthly or yearly rates.

    Thirdly, different groups of trainees have different needs and different pass rates, and it would be more accurate to report the pass rates for those groups separately. For example, it has long been recognised that overseas graduates have lower pass rates than British graduates. In 1994-5 the pass rate in the final examinations held by the medical royal colleges and faculties was 27.1% (1,819/6,722) for overseas graduates compared with 62.1% (3,687/5,940) for British graduates. This difference has hardly changed since the 1986-7 results.(3) Therefore, it might be a good idea to report pass rates for overseas and British graduates separately in each training programme.

    Other adjustments for confounding variables that would make the results more representative could be agreed on separately in each specialty. This should be done by consultation between all interested parties - postgraduate deans, training programme directors, the royal colleges, and the trainees.
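
    The kind of adjustment described above - collecting results per rotation, averaging over two or three years, and reporting overseas and British graduates separately - amounts to little more than grouping and counting exam records, and could be produced with minimal clerical work or a short script. The sketch below is purely illustrative and is not part of the original proposal; the rotation names, the records, and the rolling_pass_rates function are all hypothetical.

from collections import defaultdict

# Hypothetical exam attempt records: (rotation, exam year, graduate group, passed).
attempts = [
    ("Rotation A", 1995, "British", True),
    ("Rotation A", 1995, "overseas", False),
    ("Rotation A", 1996, "British", True),
    ("Rotation A", 1997, "overseas", True),
    ("Rotation B", 1996, "British", False),
    ("Rotation B", 1997, "overseas", True),
]

def rolling_pass_rates(records, end_year, window=3):
    """Pass rate per (rotation, graduate group) over the `window` years ending at end_year."""
    sat = defaultdict(int)     # attempts per (rotation, group)
    passed = defaultdict(int)  # passes per (rotation, group)
    for rotation, year, group, result in records:
        if end_year - window < year <= end_year:
            sat[(rotation, group)] += 1
            if result:
                passed[(rotation, group)] += 1
    return {key: passed[key] / sat[key] for key in sat}

# Three year average ending in 1997, reported separately for each graduate group.
for (rotation, group), rate in sorted(rolling_pass_rates(attempts, 1997).items()):
    print(f"{rotation}, {group} graduates: {rate:.0%}")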

    The idea of publishing exam pass rates for individual training programmes is not new, and it has been called for repeatedly.(4,7) More recently, SCOPME has endorsed the idea and recommended that both the royal colleges and individual training programmes should collect and publish their pass rates as an integral part of the education and training process.(1,8) As the new “Calman” training is starting in many specialties, perhaps now is the right time to implement it.

    References
