Are the new junior doctor selection processes value for money?
BMJ 2010; 340 doi: https://doi.org/10.1136/bmj.c3042 (Published 16 June 2010) Cite this as: BMJ 2010;340:c3042
- Namita Kumar, consultant physician and rheumatologist and honorary senior clinical lecturer1,
- Janet Grant, director2
- 1University Hospital of North Durham
- 2Open University Centre for Education in Medicine, Faculty of Health and Social Care, Milton Keynes
Namita Kumar and Janet Grant take a look at how doctors are selected
Modernising Medical Careers (MMC) is the current programme by which postgraduate medical training is delivered in the United Kingdom. In February 2003, the four UK health departments published a policy statement on MMC setting out the principles underpinning this major reorganisation of postgraduate medical education and training. Reform was widely believed to be long overdue and was “driven by the need for care based in more effective teamwork, a multi-disciplinary approach, and more flexible training pathways tailored to meet service and personal development needs.”1 In 2004, the next steps were published, which took a radical look at the way doctors are trained, the speed and quality with which they are trained, and the end product of that process. They also examined the opportunities for streamlining training and increasing flexibility.1 Along with this came the development of structured, transparent, and robust selection processes. The apparent intention of this, along with the two year foundation programme that gives wider experience of specialties before doctors have to opt for one, was to place trainees in a programme that would suit them, thus allowing efficient production of the certificated specialist. The system would engender the streamlined production of fit for purpose doctors who would smoothly enter a training scheme and leave at the end with a certificate of completion of training. This could also, theoretically, enable political and social accountability, but has this happened?
The medical training application service
MMC was fully implemented in 2007. All doctors were to start their programmes at the same time, so a suitably efficient system was required to process the applications of all junior doctors applying for specialty training. Thus the medical training application service (MTAS) was born. In May 2007, the dramatic MTAS fiasco resulted in a huge outcry from the profession after security breaches in the application service, doubts about the ability of the content to do justice to applicants, and other concerns. As a result, Sir John Tooke was charged with examining not only MTAS but MMC in its entirety. Many recommendations resulted from this inquiry,2 not least that basic specialty training, after foundation training, for certain specialties such as medicine and surgery should be “uncoupled” from higher specialty training. This means that a doctor needs to be selected at several stages after medical school: typically, into foundation, then for basic specialty training, then for higher specialty training, and perhaps even for subspecialty training. This broadly reverts to a process similar to that of the pre-MMC era.
These events have seen the importation of industrial job selection processes into professional training, resulting in selection centres and a headlong stampede into multiple station assessment centres. These have led to the ranking of applicants so that they may be offered posts on the basis of their scores. Selection is thus now seen as a high stakes assessment,3 with the associated assessment burden and, doubtless, assessment anxiety. Yet these selection processes do not seem to be conducted with the same rigour as, for example, medical royal college professional examinations.
Although the purpose of these new selection processes has not been clearly stated, it could be that the applicant is shown to be well matched to the specialty they wish to train in or it could be that this person has reached a certain standard or threshold to allow appointment to a training programme. However, the selection processes do not, in general, seem to be subject to robust standard setting procedures that define the pass mark that would be obligatory in similar professional examinations.
Areas for concern
As the new approach is being rolled out, concerns have been growing. Indeed this has led the Postgraduate Medical Education and Training Board (PMETB) to set up a working group to explore the issues further.
There are several areas that might be of concern. In the absence of a clear outline of the problem that the new selection approach was designed to tackle, we must judge the development on criteria that would be applied to assessment, selection, and medical training in general.
Firstly, effectiveness. The purpose of selection is to allocate candidates to the positions being applied for in a way that reflects the qualities required to enter and then successfully progress through that stage. NHS management have concerns over the extent to which the selection process provides trainees to deliver a safe service, enhance the reputation of the organisation and commit to its enhancement, and guarantee future staffing.
Finally, the system should stand up to cost-benefit analysis, since it carries the opportunity costs of consultant time, financial costs to the deanery, and a loss of income to the trust from cancelled clinical activity.
The case in practice
In relation to effectiveness, industrial models tend to match candidates to jobs, whereas in our case candidates are to be matched to a training programme. The selection process must therefore predict not that a candidate can perform a job (as industrial selection must) but that the candidate will be trainable within a work context to perform a future, as yet unspecified, job some career stages hence. The selection and assessment process does not use a psychometric career matching instrument such as Sci59,4 so it seems not to be matching candidate to specialty but simply determining whether the candidate is appointable to the specialty of their choice. So perhaps the threshold purpose is the main aim—even if cut-off scores are not robustly determined.
Currently, different specialties employ a variety of methods to “assess” applicants. The difference in approaches is not in itself a bad thing, and indeed acknowledges the fact that different specialties may require different skills. Methods being used mimic those that are already used in candidates’ workplace based assessments and medical royal college examinations, such as multiple stations (in some cases rather like an objective structured clinical examination), case based discussions, directly observed performances, clinical scenarios, and brief structured interviews, possibly based on portfolios.
Several papers have stated that these processes, when properly developed and applied, can be statistically valid and reliable, although assessment experts are cautious about the use of workplace based assessment methods, such as case based discussions, for high stakes decisions and promote them instead as formative educational instruments.5 Further, reliability and validity tend to be based on wide sampling of performance rather than on a single incident.6 Most assessment systems are based on knowing what they want to measure. However, although the available literature can give an illusion of reliability and validity, it is often unclear what specific traits are actually being assessed, particularly in machine marked tests,7 but also in some other stations. Furthermore, there is no differentiation between stable and trainable traits, which should perhaps be considered when selecting people for further training. Thus, the blueprinting of current processes seems inadequate in relation to the guarantees the service might require. Other data suggest that a wide range of attributes beyond clinical knowledge and academic achievement need to be considered to ensure doctors train and work within a specialty for which they have a particular aptitude,8 but that work again does not differentiate between stable and trainable characteristics.
Given this, we cannot always be sure of the rationale for the selection criteria being applied. The point to be remembered is that we are not selecting fully trained consultants or specialists, but those early on in their careers who have the potential to be trained into the specialists of the future. Skills such as leadership, innovation, and dealing with uncertainty are all needed for such senior roles but may well develop during training. Although professional skills are mentioned on job descriptions, they are not always specifically tested or assessed.
To resolve these concerns, the literature on both assessment and career progression is helpful. The assessment literature shows that wide sampling of performance is required to reach acceptable levels of reliability and validity6 because performance tends to be case specific.9 This might suggest that a fully valid and reliable system of assessment for selection would be both prohibitively expensive and possibly redundant if the candidate has taken similar assessments during foundation or basic specialty training. At the same time, the literature suggests that global professional judgment is an accurate predictor of performance and that atomised methods of assessment that break down integrated performance into its component parts lack relation to the reality of practice.10 In addition, the literature shows that the only reliable, if incomplete, predictors of success in medicine are academic attainment and motivation, although such factors are highly contaminated and might be stable or trainable traits, making prediction on the basis of a range of discrete variables a risky strategy.11 12 This would suggest that a more integrated, less atomised approach to selection that focuses on achievements (perhaps through discussion of a portfolio) and on motivation for the specialty in question (perhaps by examining evidence of interest, experience, and understanding of the specialty) would be a secure approach.
Much is made of “choice” within the current NHS. It is clear that hospitals have different demographics, serve different populations, and involve themselves to varying degrees with research and education. Yet organisations themselves have no choice over who they feel is most appropriate to work within them. Similarly, teamwork was one of the underpinning tenets of the new arrangements for postgraduate training, yet teams can no longer select their own members. This is particularly frustrating if a unit releases large numbers of consultants to staff the stations at a regional selection process and the candidates are then appointed elsewhere, leaving vacancies on the rotation and a continuous quest for locums in a poor workforce market. These costs of the new selection processes to individuals, to the NHS and patient care, and to the deaneries have not yet been calculated. A robust cost-benefit analysis is urgently required.
The content of job descriptions for post-foundation training13 shows clearly that the selection process addresses areas that should be covered by the foundation programme, evidence of achievement of which should be available in the portfolio. For most UK graduates, therefore, there should be no need to reassess these areas, as they are clearly documented. An example is communication skills, already one of the most highly and frequently taught and assessed generic skills. Doctors who have not completed a UK foundation programme do need assessment, but this could be applied to them alone.
The current multiple station model is resource intensive as it usually requires two assessors at each of a number of stations, perhaps over two or three days. There is simultaneously a view that lay input should be used; however, this may not add value.14 An options appraisal and cost-benefit analysis—for example, looking at the added value of multiple stations and the use of selection centres—needs to be performed, even while we are changing practice wholesale.
In human resources terms, the new processes are more transparent. Good human resources practice is clearly visible, with more interviewers having undergone appropriate training, such as in equality and diversity, which has clearly been an issue in the past in doctor recruitment and career progression.15 However, these benefits could be attained in other ways.
The way forward
There is, of course, the need for a fair selection process, especially if the future holds careers that do not offer a pathway to a consultant or general practitioner principal post. The profession, service, and patients need to be reassured that the best are being selected to appropriate specialties in a robust and appropriate manner. However, we need to ensure that we are not creating a process or industry that is itself not evidence based and is far in excess of what is required and that could actually be performed using fewer resources, particularly in these leaner times for the NHS. There is no evidence that applicants are selected more appropriately; they are simply selected differently. The literature suggests that global professional judgment, a review of achievements, and seeking indications of motivation for the specialty would make a feasible, cost effective, professionally relevant, and manageable approach.
Competing interests: Both authors are board members of the Postgraduate Medical Education and Training Board.