
Learning In Practice

A new selection system to recruit general practice registrars: preliminary findings from a validation study

BMJ 2005;330:711 (Published 24 March 2005)
Fiona Patterson (f.patterson{at}), professor1, Eamonn Ferguson, professor2, Tim Norfolk, research associate1, Pat Lane, director3

1 Department of Psychology, City University, London EC1V 0HB
2 School of Psychology, University of Nottingham, Nottingham NG7 2RD
3 Postgraduate General Practice Education, South Yorkshire and South Humberside Deanery

Correspondence to: F Patterson

Accepted 4 January 2005


Objective To design and validate a new competency based selection system to recruit general practice registrars, comprising a competency based application form, referees' reports, and an assessment centre.

Design Longitudinal predictive validity study and a matched case comparison.

Setting South Yorkshire and East Midlands region, United Kingdom, comprising three deaneries.

Participants 46 of 167 doctors recruited through the new process and followed up after three months in practice, and a comparison group of 20 trainees selected by traditional recruitment methods.

Main outcome measures Trainer ratings of trainee performance in practice on targeted competencies.

Results Performance ratings of targeted competencies at the assessment centre predicted trainer ratings of performance in the job. Furthermore, those trainees recruited through the new competency based process performed significantly better in the job than those recruited through traditional recruitment processes.

Conclusion A new competency based selection process using assessment centres improves the validity of selection of general practice registrars compared with traditional selection techniques.


The chief medical officer for England argues that “Reform must take account of… weak selection and appointment procedures: these are not standardised and are frequently not informed by core competencies.” Assessment centres are one selection technique that conforms to these principles.1-8 We report on the reliability and validity of a new selection process for recruiting into general practice training.9 10

The reliability and validity of a selection system can be improved by combining several different competency based methods.11-17 The selection process described here is based on a validated competency model10 and comprises three stages: an application form, referees' reports, and an assessment centre.

Eleven competencies have been described for general practice.9 10 Of these, six were targeted for the selection process: empathy and sensitivity, communication skills, clinical expertise, problem solving, professional integrity, and coping with pressure.9 10 The remaining five competencies were judged to be more appropriate for training (for example, awareness of legal and ethical issues).

The validity of this process is addressed by two research questions. Firstly, does performance at an assessment centre predict performance in a job (predictive validity) and, secondly, do doctors recruited through this process perform better in a job than those recruited through traditional selection processes?


Methods

We used a longitudinal design and a matched case comparison to address the two research questions. Assessors were independent and blinded at all stages of selection and at follow up three months into training.


Intervention group

Table 1 details the demographics and sample sizes at each stage of the selection process. From an initial sample of 354 doctors applying for a training programme within three deaneries of the former Trent region during 2001, 188 were shortlisted on the basis of the application form and attended an assessment centre. Of these, 56% (n = 105) were white, 20% (38) Indian, 8% (15) Pakistani, 6% (11) black African, and 10% (19) of other ethnic origin. Overall, 140 of the 167 doctors successful at the centre accepted training places. Seventy one trainees were in a general practice registrar post at the time of the study (the others were still in hospital based posts and had not yet taken up their post). For those in post, supervisor assessments of performance three months into practice were obtained for 46 trainees (65% response rate).

Table 1

Performance and criterion measures in new system for selecting general practice registrars


Comparison group

The comparison group, identified in another UK geographical region, comprised trainees selected using traditional methods. Sixty four trainers of general practice registrars were invited to participate, rating their trainee on the 48 item competency inventory. Twenty responded (response rate 31%). As the response rate was relatively low and the demographic profile of this group did not closely match that of the initial intervention sample (table 2), we selected a matched sample from the intervention group, matched case by case with the comparison sample for sex and as closely as possible for age and ethnicity.

Table 2

Mean supervisor competency rating (on seven point scale) of job performance in intervention and comparison groups after three months in practice


The intervention (the new selection procedure)

The three methods designed for selection were the application form, referees' reports, and an assessment centre. Ratings for each method were based on a four point scale (from 1 for poor to 4 for outstanding). Assessments were made independently and assessors were blind to all other ratings at each stage in the process.

Application form

Candidates completed a paper based application form requesting biographical information (for example, medical qualifications), six structured competency questions (experience relating to each competency), and personal statements (for example, reasons why they wished to pursue a career as a general practitioner). Forms were rated independently by three assessors (experienced trainers or course organisers). Each assessor attended a four hour training session to enhance reliability.

Referees' reports

Referees provided performance ratings on each of the six competencies. Two referees (the applicant's current employer and a previous employer chosen by the applicant), who were blind to scores on the application forms, rated each candidate independently.

Assessment centre

Shortlisted doctors spent a day at an assessment centre, comprising a series of work related simulation exercises, each lasting between 20 and 40 minutes. These allowed the assessors to observe the behaviour of candidates on the competencies. The assessors attended a one day training session in behavioural observation and rating before attending the assessment centre. Exercises were devised using a multi-trait, multi-method approach, so that three competencies were assessed during each exercise. Exercises included a simulated consultation, where candidates took the role of doctor and a medical actor played a patient in a given scenario; a group exercise, where four candidates were asked to resolve a work related issue; a written exercise, in which candidates were asked to prioritise a set of impending work related issues, justifying the order chosen; a competency based structured interview, where candidates were asked to provide evidence of their suitability for the role, based on previous experience; and a medical interview, comprising technical questions relating to clinical practice. After the assessment centre, data were collated for each candidate and the assessors discussed each applicant's performance to reach recruitment decisions. An overall performance rating was calculated by summing the competency ratings from all exercises (maximum score 72).

Outcome measures

The main outcome measure was supervisor ratings of trainee job performance three months into their work, assessed using a 48 item inventory. The inventory consisted of behavioural indicators associated with each of the six competencies (for example, for coping with pressure, is clear and rational when dealing with difficult issues or situations). Ratings were based on a seven point Likert type scale (from 1 for needing significant development to 7 for clearly demonstrated). This inventory showed good internal reliability (Cronbach's α 0.83). Trainers in both the intervention and comparison group samples completed the inventory in an identical way, for formative purposes only.
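The internal reliability statistic reported above can be computed directly from the matrix of inventory ratings. A minimal sketch in Python (the function and the tiny rating matrix are illustrative assumptions, not data from the study):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for an (n_respondents x n_items) rating matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of totals),
    where k is the number of items.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)   # per-item sample variance
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of row totals
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Three hypothetical respondents scoring two perfectly consistent items
# on the 7-point scale: perfectly consistent items give alpha = 1.0.
alpha = cronbach_alpha([[1, 1], [4, 4], [7, 7]])
```

In practice the inventory would supply a 46 x 48 matrix of supervisor ratings; an α of 0.83, as reported, indicates the 48 items hang together well as a scale.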

Data analysis

We carried out the analysis using Pearson correlation coefficients and independent samples t tests. Normality of each dependent variable was assessed by testing whether its skew and kurtosis differed significantly from normal.18 All variables were normal. To check that the distributional qualities of the variables were appropriate for the t tests, we applied Levene's test for equality of variances; where variances were not equal, we report the appropriate adjusted t test.

Reliability of the selection process was assessed through correlations between the competency ratings at each stage. The predictive validity of the assessment centre was examined in three ways: correlations between competency ratings at the assessment centre and supervisors' job ratings; a comparison of supervisors' competency ratings of job performance between high performers (group 1) and lower performers (group 2) at the assessment centre; and a matched case comparison of supervisors' competency ratings.
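The analytical steps described above (correlation, median split, variance check, then the appropriate t test) can be sketched as follows. The data here are simulated purely for illustration and do not reproduce the study's figures:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2005)

# Hypothetical data: 46 trainees, each with an assessment centre total
# (maximum 72) and a supervisor job rating on the 7-point scale.
centre_score = rng.uniform(30, 70, size=46)
job_rating = 0.05 * centre_score + rng.normal(2.0, 0.6, size=46)

# Predictive validity: Pearson correlation between the two measures.
r, p_r = stats.pearsonr(centre_score, job_rating)

# Median split into higher and lower performers at the centre.
median = np.median(centre_score)
high = job_rating[centre_score > median]
low = job_rating[centre_score <= median]

# Levene's test decides whether the t test may assume equal variances;
# equal_var=False requests the adjusted (Welch) t test.
_, p_levene = stats.levene(high, low)
t, p_t = stats.ttest_ind(high, low, equal_var=p_levene > 0.05)
```

This mirrors the reported workflow: a single correlation for predictive validity, then a group comparison of job ratings between the median-split groups, with the variance check selecting the standard or adjusted t statistic.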


Results

We found good internal reliability for the new selection system for recruiting general practice registrars: the ratings from the three selection methods (application form, referees' reports, and observer ratings at the assessment centre) were significantly correlated.

We found a significant positive correlation between total competency rating at the assessment centre and overall rating of job performance (r = 0.35). Using a median split on the overall assessment centre score, 24 participants were identified as higher performers (group 1) and 22 as lower performers (group 2). Group 1 were rated more highly by supervisors across all competency domains, and overall performance in practice was rated significantly higher for trainees identified as high performers at the assessment centre (t = 2.3; table 3). Those achieving higher scores at the centre on the individual competencies performed significantly better in practice on three competencies (empathy, problem solving, and coping with pressure) and marginally better on two further competencies (clinical expertise and communication skills). These results provide initial evidence of the predictive validity of the assessment centre.

Table 3

Mean supervisor competency rating between high and lower performers at assessment centre after three months in practice


Performance ratings in the intervention group were significantly higher than their matched counterparts for overall performance and for the competencies of clinical expertise, problem solving, and communication skills (table 2). Trainees recruited using the competency based selection system therefore performed better on blinded supervisor assessments.

What is already known on this topic

Selection practices in medicine are relatively unsystematic and inconsistently applied

Assessment centres are one of the most reliable and valid selection methods but have not been applied in the medical domain

Many selection methods and recruitment systems are not informed by core competencies and are not standardised

What this study adds

A new competency based selection system based on a thorough job analysis improved selection of general practice registrars

An assessment centre improved the validity and reliability of selection


Discussion

A competency based selection system for recruiting general practice registrars, on the basis of an application form, referees' reports, and an assessment centre, improved the reliability and validity of selection compared with traditional methods. The significant positive correlations between the competencies assessed through the application form, references, and the assessment centre provide good evidence of reliability.

The validity of the assessment centre was shown by a significant positive association between performance at the centre and job performance three months into practice, and by recruited trainees being rated more highly by trainers than a matched group in another region. The reliability and validity (predictive and case comparison) findings are encouraging but preliminary, given the small sample size. Furthermore, as the comparison and matched groups were not demographically equivalent to the initial intervention sample (354 applicants), caution is needed in generalising the results. Our findings do, however, indicate that these evidence based methods are worthy of further investigation.

The assessment centre method is also worth pursuing in other medical specialties and in undergraduate selection, provided the design is based on a thorough role analysis.10 Such analysis is essential for achieving good reliability and validity and for a legally defensible system.19 The current assessment centre system for general practice registrars operates on an assessor to candidate ratio of 1:3, whereas traditional methods have a ratio of 1:1.5, an obvious saving in resources and time.


  • Contributors FP, EF, and PL conceived the overall study; TN commented on the design and nature of the study, managed the daily running of the project, and collected and analysed the data; FP and EF analysed and interpreted the data; FP wrote the paper; EF, TN, and PL commented on early drafts of the paper; EF and PL helped to write the final draft of the paper. FP is guarantor.

  • Funding FP and EF have received research funding through the NHS Executive. Researchers are independent of the funders and the sponsor supports publication.

  • Competing interests None declared.

  • Ethical approval Research ethics committee of Sheffield University.

