GMC's proposals for revalidation would not be accurate, economical, or fair
BMJ 2000; 321 doi: https://doi.org/10.1136/bmj.321.7270.1220 (Published 11 November 2000) Cite this as: BMJ 2000;321:1220
I have been swamped with media and medical responses to my letter on
the GMC’s re-validation proposals. Having a day job to do (University
Staff Development Officer at Cambridge University), I have not the time to
deal with all of these adequately. But the following is an attempt to
respond to these and explain my thinking.
Annual appraisal is an excellent idea – talking confidentially with a
colleague about your successes and failures, and translating the outcome
into an action plan for your professional development over the next year
cannot but enhance the medical profession’s competence overall. But if
these annual appraisals are to be included in a five-year portfolio of
evidence which will be used to determine whether you can continue in
practice or not, would you be so candid as to your problems, mistakes and
needs? It seems unlikely to me.
The method of assessment proposed – “portfolio-based assessment” – is
a relatively new approach, now frequently used in formative assessment,
such as appraisal. But because, inevitably for a large number of
candidates, lots of assessors are needed, it is likely to impact unfairly
on candidates: examiners set different standards. A recent study showed
that individual medical portfolio assessors disagreed substantially in
their judgements as to the adequacy of portfolios. And each candidate for
re-validation will have his or her unique trio of assessors, adding to the
likely inaccuracies.
Of course, the assessors could be trained: but for the numbers
concerned (some 20,000 doctors will need to be revalidated each year),
substantial training seems improbable, and its impact is impossible to
estimate. Its costs would add greatly to those already suggested.
For a fraction of the true costs of the quinquennial review as
currently proposed, two options – either or both of which might be used -
are open to the Council:
- a system of formal written and/or computer-based testing could be
developed by a professional testing body (cf. the National Board of
Medical Examiners in the USA). To make a robust estimate of a doctor’s
likely competence in practice would take no more than a day, using modern
testing procedures. An added benefit would be that an analysis of the
results of such examinations could be used to target postgraduate medical
education into areas of particular need.
- a much cheaper system of confidential peer review could be
developed, where a group of colleagues who know a doctor’s practice are
asked to provide a number of ratings of her/him on a standardised form.
These ratings have been shown to reflect actual levels of performance
well. (For comments/details, contact Professor David Newble at the
University of Sheffield Medical School – d.newble@sheffield.ac.uk)
All this emphasis on re-validation implies that the initial
validation of doctors is satisfactory. In fact it is quite a shambles.
Each University with a medical school issues its own medical
qualification, and there is no effective mechanism for moderating
standards. What is done is for the GMC to periodically send round a bunch
of people (mainly from medical schools) to “inspect” the examinations,
which are then deemed to be “sufficient”. There is no attempt to equate
standards across these universities, other than by relying on them using
external examiners – a system whose adequacy is unsupported by any
scientific literature. And we have evidence that the quality of medical
graduates can vary greatly: a published study of the performance of
thousands of candidates for the MRCGP showed huge differences in
performance as between the graduates of the various medical schools.
What is needed is a national qualifying examination, run by a
professional assessment body, which is taken by all medical students, as
in the United States. This would ensure that all medical students,
qualifying from whatever medical school, were all above a set standard.
Part of the GMC’s current motto is “protecting patients”: this would help
to assure that.
Richard Wakeford
Convenor, the Cambridge Conferences on Medical Education
Educational Adviser, Cambridge University School of Clinical Medicine
Hughes Hall,
Cambridge
Further information on the Cambridge Conferences is posted on the
Conference website, www.cambridgeconferences.org.
Competing interests: No competing interests
Richard Wakeford is obviously (to me) correct in stating that the GMC's
proposals for revalidation are unworkable and far too expensive in time
and money. This is the first support I have seen in print for my position.
I have written in the same vein to Hospital Doctor, the BMA News Review,
the CMO and the GMC (formally, in response to requests for comment) itself
since February 1999. If we are to be ‘revalidated’ it must be by the
equitable, workable and least expensive process which is clearly an
examination. Discussion can take place about the form of the examination,
but examination it must be.
The ‘Authorities’ in the form of the Colleges and so on have been
strangely silent on this matter, in my view because those who view
themselves as the ‘leaders of the profession’ fear they might have to go
through the examination themselves. This would be in stark contrast to the
rather more agreeable process of touring the countryside revalidating
people they either know or who are clearly "one of us". That version
of revalidation would do nothing to avert the disasters associated with
Wisheart, Ledward, Neale and others.
Competing interests: No competing interests
Annual appraisal can of course be helpful, but GMC and NHS proposals
are seen as controversial by many GPs.
Doctors have been told we will be appraised but there are too many
unanswered questions, for example, who will appraise? Although it has been
widely assumed that GP tutors will appraise GPs, no official statement has
been made to this effect. Tutors I have spoken with are willing to
consider appraising colleagues in a supportive, non-threatening, formative
context (given acceptable training, resources and ground rules) but do
not wish to be Department of Health inquisitors searching for "bad"
doctors.
Many GPs fear that appraisal could be used as a managerial rather
than an educational and supportive tool. Educationalists hope to see
appraisal develop as something which will produce happier, better trained
GPs and identify those doctors who are in difficulty at a remedial stage.
This process will occasionally identify a doctor whose practice is
unacceptable and who may need to be guided towards another career, but to
design the whole process around this tiny minority would have a
destructive effect outweighing any benefits.
There are many good GPs in their 50s whose mortgage is paid off,
whose spouse has agreeable and well paid employment, and whose children
are through university. If appraisal and revalidation are introduced in
the wrong way and without adequate resources, some of these doctors will
retire earlier than they would have done. If this happens on any scale, it
would undo all the good that might be done by identifying a couple of
hundred underperforming doctors.
Annual appraisal can be used as a tool to promote quality of care for
patients and also to support doctors, but at present many GPs see it as
more of a threat than an opportunity. Questions about the selection,
training, remit and funding of appraisers must be answered satisfactorily
and quickly if GPs are to have faith in the process.
Stephen Hayes
Conflict of interest: the Wessex GP tutor group, including myself,
has indicated a willingness to take on an appraisal role if satisfactorily
resourced.
Competing interests: No competing interests
Dear Editor, I agree with Wakeford that revalidation by the GMC is
unlikely to be accurate or fair. In particular, failure of a doctor to
meet revalidation criteria may mean that his hospital does not grant
sufficient grants or time for CME and audit.
I would be in favour of written examinations, peer review, and
provision of evidence of audit and professional development, provided
these activities are properly funded with regular protected CPD sessions.
However, I don't believe that current Government economics allow NHS
Trusts the substantial and sustained finance and time involved in CPD of
doctors. Unfortunately, the Government's current economic ideas revolve
around how to get the most out of doctors for free.
Dr Ramin Baghai MBBS, MRCP
SHO in Intensive Care
University Hospital Lewisham
Competing interests: No competing interests
Revalidation
Editor - Richard Wakeford, in the BMJ of 11 November 2000, made a
number of interesting points about the development of the revalidation
process. These were helpful in continuing the debate.
To begin with, however, it is important to recall the objective of
revalidation. This is not only to detect poor performance, it is also to
encourage all doctors to develop and to improve their practice.
Revalidation is therefore more than a sieve and the GMC's proposals
recognise this.
One of the foundations of the GMC's proposals is the need to ensure
fairness. The GMC need to be as sure as they can that the same outcome
would occur after consideration of a folder by any of the revalidation
groups - wherever they are established. This is why the GMC have stressed
the need for training those who will undertake the five-year assessments.
They have, of course, much experience of this already in the training of
assessors for their Performance Procedures. This is why the GMC are
developing robust external quality assurance mechanisms to look both at
the process and the outcomes of the assessments. This is why the GMC are
working with the Royal Colleges and others to develop explicit standards
for each area of medical practice.
The GMC are also concerned - of course - with reliability. Clear
guidance
and effective training are important to achieve this. The GMC are also
aware that they need to test their assumptions. This they are doing
through the establishment of a piloting process. Professor Pauline McAvoy
is leading this.
The piloting will also contribute to the determination of the costs
(and
benefits) of revalidation. These will provide the GMC with information on
which to make a decision. They will take into account the GMC's decision
to build on existing arrangements - particularly for information
collection. The GMC are working closely with both the public and
independent sectors so that they build on arrangements that are being
developed for the implementation of Clinical Governance.
There are a number of problems with the suggestion that rather than
ask
doctors to use their actual practice to demonstrate that they should be
revalidated, the GMC should set up a series of tests.
First, it is important to ensure that the blueprint that we all work
to
is Good Medical Practice. This is the basis for registration and the
blueprint for all the GMC's regulatory work including, for example, the
Performance Procedures. The importance of working to a blueprint is
accepted practice. A system based on tests could only partly cover the
blueprint. Tests, for example, are not good ways to capture information
about communication skills and team working (both of which are central to
a doctor's effective practice).
Second, such tests would not reflect what doctors do. This point was
made
clearly by Cees Van der Vleuten:
… there is overwhelming evidence that the reliability of measurements of
clinical competence is hampered by the fact that competence is content
specific. Achieving competence in one area (for example in one clinical
case) is not a good predictor of competence in another, even if the areas
are closely connected.
What is true of undergraduate examinations must be equally true in the
postgraduate sector.
Third, an examination-based approach would send wrong messages to the
profession about how to prepare for 'the test'. 'Consequential validity'
(the problems of assessment driven learning) is important. We believe that
it would be far better for doctors to prepare for revalidation by
addressing their actual practice than by spending time in libraries.
The GMC are ahead of much - if not all - of the rest of the world in
developing an effective system of revalidation. One that looks at how well
doctors actually practise, at the quality of patients' experience, not how
well doctors jump through artificial hoops.
Professor Brian Jolly
Professor of Medical Education
Department of Medical Education,
University of Sheffield,
Coleridge House,
Northern General Hospital,
Herries Road,
Sheffield S5 7AU
Professor Pauline McAvoy
Chief Executive
Gateshead and South Tyneside Health Authority,
Ingham House,
Horsley,
Hill Road,
South Shields,
NE33 3BN
Professor Dame Lesley Southgate
Professor of Primary Care and Medical Education
CHIME (Centre for Health Informatics and Multiprofessional Education),
4th Floor, Holborn Union Building,
Archway Campus,
Highgate Hill,
London N19 3UA
Competing interests: No competing interests