Intellectual aptitude tests and A levels for selecting UK school leaver entrants for medical school

BMJ 2005; 331 doi: (Published 8 September 2005)
Cite this as: BMJ 2005;331:555
  1. I C McManus, professor of psychology and medical education (i.mcmanus{at},
  2. David A Powis, professor of health professional education2,
  3. Richard Wakeford, educational adviser3,
  4. Eamonn Ferguson, professor of health psychology4,
  5. David James, professor of fetomaternal medicine5,
  6. Peter Richards, president6
  1. 1 Department of Psychology, University College London, London WC1E 6BT
  2. 2 University of Newcastle, New South Wales 2308, Australia
  3. 3 Cambridge University School of Clinical Medicine, Addenbrooke's Hospital, Cambridge CB2 2QQ
  4. 4 School of Psychology, University of Nottingham, Nottingham NG7 2RD
  5. 5 Queen's Medical Centre, University of Nottingham, Nottingham NG7 2UH
  6. 6 Hughes Hall, University of Cambridge, Cambridge CB1 2EW
  1. Correspondence to: I C McManus
  • Accepted 6 June 2005

An extension of A level grades is the most promising alternative to intellectual aptitude tests for selecting students for medical school

How to make the selection of medical students effective, fair, and open has been contentious for many years.w1 A levels are a major component of selection for entry of school-leavers into UK universities and medical schools,w2 but intellectual aptitude tests for the selection of medical students are burgeoning—they include the Oxford medicine admissions test1 and the Australian graduate medical school admissions test2 (table). Tests such as the thinking skills assessmentw3 are being promoted for student selection generally. The reasons include a political climate in which government ministers are advocating alternatives to A levels, some support for them in the Schwartz report on admissions to higher education,3 lobbying from organisations such as the Sutton Trust,w4 and the difficulty of distinguishing between the growing numbers of students achieving three A grades at A level. We examine the problems that intellectual aptitude tests are addressing, their drawbacks, any evidence that they are helpful, and alternatives.


Aptitude tests currently used in the United Kingdom by medical schools and other university courses

Medical schools need selection procedures that are evidence based and legally defensible. We therefore explored a series of questions around these developments.

Many UK school leavers apply to university with other educational qualifications, including the International Baccalaureate and Scottish highers. Medical schools are increasingly admitting entrants other than directly from school. These different mechanisms of entry require separate study. We discuss the main school-leaver route through A levels.

Are A levels predictive of outcome?

Many beliefs are strongly held about undergraduate student selection but without any “visible means of support”4: one is that A levels are not predictive of outcome at university. The opposite is true. A study of 79 005 18 year old students entering university in 1997-8 and followed through until 2000-1 shows a clear relation between A level grades and university outcome (fig 1). The result is compatible with many other studies of students in general,5 w5 w6 and of medical students in particular.5-8 w7-w10 Small studies of individual students in individual years at individual institutions are unlikely to find such correlations—the reasons being statistical, including lack of power, restriction of range, and attenuation of correlations caused by unreliability of outcome measures (see supplementary material).

Fig 1

Outcome of students in relation to A level grades in all subjects. Grades are based on best three. A=10, B=8, C=6, D=4, and E=2 points

Summary points

So many applicants are achieving top grades at A level that it is increasingly impractical to select for medical schools primarily on such achievement

Schools are introducing tests of intellectual aptitude without evidence of appropriateness, accuracy, or added value, making them open to legal challenge

Since the 1970s, university achievement has been shown to be predicted by A levels but not by intelligence tests

The discriminative ability of A levels might be restored by introducing A+ and A++ grades, as recommended in the Tomlinson Report

If additional grades at A level cannot be introduced, medical schools could collectively commission and validate a new test of high grade scientific knowledge and understanding

An argument exists for also developing and validating tests of non-cognitive variables in selection, including interpersonal communication skills, motivation, and probity

Why are A levels predictive?

The three broad reasons why A levels may predict outcome in medicine are: cognitive ability—A levels are indirect measures of intelligence, and intelligence correlates with many biological and social outcomes9; substantive content—A levels provide students with a broad array of facts, ideas, and theories about disciplines such as biology and chemistry, which provide a necessary conceptual underpinning for medicine; and motivation and personality—achieving high grades at A level requires appropriate motivation, commitment, personality, and attitudes, traits that are also beneficial at medical school and for lifelong learning.

Cognitive ability alone cannot be the main basis of the predictive ability of A levels, because measures of intelligence and intellectual aptitude alone are poor predictors of performance at university. This is not surprising, as an old axiom of psychology says that “the best predictor of future behaviour is past behaviour,” here meaning that future progress in passing medical school examinations will best be predicted by performance in past examinations. Of course, success in medicine and being a good doctor are not identical, nor is either of these the same as passing medical school examinations; but those who fail medical school examinations and have to leave medical school never become doctors of any sort. The predictive value of A levels most likely results either from their substantive content, their surrogate assessment of motivation to succeed, or both.

Separating the substantive and motivational components of A levels is straightforward in principle. If the substantive content of A levels is important for prediction then there will be a better prediction of outcome from disciplines underpinning medical science, such as biology and chemistry, than there will be from other subjects—for example, music, French, or economics. Alternatively if motivational factors are the main basis for the predictive power of A levels, indicating pertinent personality traits and attitudes such as commitment, the particular subject taken will be less relevant and an A grade in music, French, or economics will be found to predict performance at medical school as well as an A grade in biology or chemistry. Few analyses, however, differentiate between these factors. But evidence is increasing that A level chemistry is a particularly good predictor of performance in basic medical science examinations6 10 (although not all studies find an effectw7 w11), and A level biology also seems to be important.6 11 As almost all medical school entrants take at least two science A levels, however, one of which is chemistry, this leaves little variance to partition. It should also be remembered that there are probably true differences in the difficulty of A levels, with A grades easier to achieve in subjects such as photography, art, Italian, and business studies, than in chemistry, physics, Latin, French, mathematics, biology, and German.12 A clear test of the need for substantive content will occur should medical schools choose to admit students without A level chemistry, as has been suggested,w12 or perhaps without any science subjects. If students with only arts A levels perform as well as those with science A levels, then the substantive content of A levels is unimportant—and there are suggestions that arts and humanities subjects independently predict outcome.13w13

What do intellectual aptitude tests do?

Aptitude has many meanings,14 but the glossary to the Schwartz report says that aptitude tests are “designed to measure intellectual capabilities for thinking and reasoning, particularly logical and analytical reasoning abilities.”3 Aptitude, however, also refers to non-cognitive abilities, such as personality. We therefore talk here of “intellectual aptitude,” both in the sense described by Schwartz and in the meaning used in the United States for what used to be known as scholastic aptitude tests (SATs; but now the “A” stands for assessment) and which are largely assessments of intellectual ability.w14

Most intellectual aptitude tests assess a mixture of what psychologists call fluid intelligence (logic and critical reasoning, or intelligence as process15) and crystallised intelligence (or intelligence as knowledge, consisting of general culturally acquired knowledge of vocabulary, geography, and so on). We thus consider intelligence tests and intellectual aptitude tests as broadly equivalent, but distinguish them from achievement tests, such as A levels, which assess knowledge of the content and ideas of academic subjects such as chemistry and mathematics.

Although purveyors of tests such as the biomedical admissions test (table) argue that they are not measures of intelligence but of “critical thinking,” there is little agreement on what critical thinking means.w15-w17 Critical thinking is related more to aspects of normal personality than it is to IQ and reflects a mixture of cognitive abilities (for example, deductive reasoning) and dispositional attitudes to learning (for example, open mindedness).16 Evidence also shows that critical thinking skills, not dispositions, predict success in examinations17 and that critical thinking may lack temporal stability.18 The content and the timed nature of the biomedical admissions test suggest it will correlate highly with conventional IQ tests.

What do intellectual aptitude tests predict?

We know of three studies that have compared intellectual aptitude tests with A levels (see supplementary material).

The investigation into supplementary predictive information for university admissions (ISPIUA) project5 studied 27 315 sixth formers in 1967, who were given a three hour test of academic aptitude (TAA); 7080 entered university in 1968 and were followed up until 1971. The results were clear: “TAA appears to add little predictive information to that already provided by GCE results [A levels and O levels] and school assessment in general.”

The Westminster study14 followed up 511 entrants to the Westminster Medical School between 1975 and 1982 who had been given a timed IQ test, the AH5.w18 Intellectual aptitude did not predict outcomes measured in 2002, whereas A level grades were predictive of both academic and career outcomes.

The 1991 cohort study looked at 6901 applicants to UK medical schools in 1990, of whom 3333 were admittedw19 w20 and followed up.w21 w22 An abbreviated version of the timed IQ test was given to 786 interviewees.w18 A levels were predictive of performance in basic medical science examinations, in final clinical examinations, and in part 1 of a postgraduate examination, whereas the intellectual aptitude test was not predictive (see supplementary material).

In the United States (presently outside the hegemony of the A level system) a recent study of dental school admissions19 evaluated an aptitude selection test, carefully founded in the psychology of cognitive abilities and skill acquisition. The scores were of no predictive value for clinical achievement at the end of the course.

Do aptitude tests add anything to what A levels already tell us?

In interpreting the validity of aptitude tests it should be acknowledged that some aptitude tests are not content free and do assess substantive components. For instance, the biomedical admissions test seeks to test aptitude and skills (problem solving, understanding argument, data analysis, and inference) in section 1 and scientific knowledge and applications in section 2. Section 2 contains questions on biology, chemistry, physics, and mathematics, at the level of key stage 4 of the UK national curriculum. Because most applicants for medical school are studying many of those subjects for A level, section 2 may be a better predictor of university outcome than is section 1, because it is indirectly assessing breadth of scientific background (and in all likelihood it will be found to correlate with A level grades, but not necessarily to add any predictive value).

On balance, A levels predict university achievement mainly because they measure the knowledge and ideas that provide the conceptual scaffolding necessary for building the more advanced study of medicine.20w23 As with building a house, the scaffolding will later be taken down and its existence forgotten, but it will nevertheless have played a key part in construction. Motivation and particular personality traits may also be necessary, just as a house is built better and faster by an efficient, conscientious builder. However, intellectual aptitude tests assess neither the fundamental scientific knowledge needed to study medicine nor the motivation and personality traits. Pure intellectual aptitude tests only assess fluid intelligence, and empirically that is a weak predictor of university performance. Intelligence in a builder, although highly desirable, is no substitute for a professional knowledge of the craft of house building.

What are the problems of using A levels for selection?

Reported problems in using A levels in selection are threefold: the increasing numbers of candidates with three A grades at A level; social exclusion; and type of schooling.

A continuing concern of the UK government is that entry to medical school is socially exclusive.21 The class distribution of entrants has been unchanged for over half a century, with a preponderance of applicants and entrants from social class I.w24 It is not clear whether the entrant profile reflects bias by selectors,w25 an active choice not to apply for medicine by those from social classes IV and V,w26 or an underlying distribution of ability by social class.w27

Although most children in the United Kingdom attend state schools, the minority attending private (independent) schools are over-represented among university entrants, possibly because as a group they achieve higher A level scores.

Can these problems be fixed?

The increasing numbers of candidates with three grade As at A level

Intellectual aptitude tests are seen as a way to stretch the range, continuing to differentiate when A level grades are at their ceiling. The problem is that although these tests undoubtedly provide variance (as indeed would random numbers), it is not useful variance since it does not seem to predict performance at university.

The simplest solution to the ceiling problem is that suggested in the Tomlinson Report22 of introducing A+ and A++ grades at A level, so that A levels continue to do what they do well, rather than being abandoned and replaced by tests with unproved validity. An appropriate alternative strategy would be to commission a new test of high level scientific knowledge and understanding that measures above the top end of A levels, but it may be better to stay with what we already know.

Fig 2

Predictive value for gaining first class or upper second class degree of A levels obtained at different types of school (see supplementary material for definition of school types)

Social exclusion

It has been suggested that a pool of talented individuals capable of becoming good doctors is excluded by current admission methods. Even if that were so (and we know of no evidence to support it) there is no basis for believing that intellectual aptitude tests are capable of identifying them.

Type of schooling

To tackle possibly unfair over-representation of entrants from independent schools, a case has been made for university selectors taking into account type of school and the relative performance of the school: achieving three grade Cs at A level in a school where most pupils gain three grade Es may predict university achievement better than gaining three grade Cs in a school where most pupils gain three grade Bs. Detailed analyses by the Higher Education Funding Council for England, however, show that after taking into account the grades achieved by an individual student, the aggregate achievement of the school from which the student has come provides no additional prediction of university outcome (see supplementary material). The funding council has good evidence to show that on aggregate, pupils from independent schools under-perform at university compared with those with the same grades from state schools (fig 2).

Intellectual aptitude tests are not a solution to this problem. A solution might be to upgrade the A level grades of applicants from state schools so that, say, one A grade and two B grades are treated as equivalent to two A grades and one B grade from an independent school applicant (that is, increasing by 20 points on the new tariff of the Universities and Colleges Admissions Service (, or by 2 points on the older scheme shown in figure 2). Any system should also take into account that many pupils are at independent schools until age 16 and then transfer to (state) sixth form colleges for their A levels. A proper, holistic assessment of each student will, however, require more information than is readily available on the present admissions form.
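The arithmetic of such an uplift can be sketched as follows. This is an illustration only, using the older tariff given in the caption to fig 1 (A=10, B=8, C=6, D=4, E=2); the function names and the flat 2-point uplift are hypothetical, not a worked-out policy:

```python
# Older A level tariff from fig 1: points awarded per grade.
OLD_TARIFF = {"A": 10, "B": 8, "C": 6, "D": 4, "E": 2}

def a_level_points(grades):
    """Sum tariff points for the best three of a candidate's grades."""
    best_three = sorted(grades, key=OLD_TARIFF.get, reverse=True)[:3]
    return sum(OLD_TARIFF[g] for g in best_three)

def adjusted_points(grades, state_school):
    """Apply a hypothetical flat 2-point uplift for state school applicants."""
    return a_level_points(grades) + (2 if state_school else 0)
```

On this scheme a state school applicant with ABB (26 + 2 = 28 points) would rank level with an independent school applicant with AAB (28 points), which is the equivalence suggested above.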

What is the potential value of non-cognitive aptitude tests?

The aptitude tests we have considered are those that assess cognitive skills. Other skills, however, are needed by doctors, such as the ability to communicate and to empathise, having appropriate motor and sensory skills, and having the appropriate attitudes and ethical standards necessary for professionalism. None of these is disputed, and there are strong arguments that selectors can and should take such measures into account at the same time as they are assessing the intellectual skills necessary for coping with the course.w28 w29 We have not considered such aptitudes in detail here because few UK medical schools are as yet using them in selection (as opposed to research and validation), although initiatives are in progress, both in the United Kingdom and elsewhere.w30 w31 Situational selection tests have been used in five Flemish medical and dental schools and were found to predict performance in first year final examinations better than tests of cognitive ability.23

It is also the case, however, that if selection is to be made on the basis of several independent characteristics, then the extent of selection on each is inevitably lower than if there is selection only on any one of them,24 until eventually the selective power of each is so reduced that “if you select on everything you are actually selecting on nothing.”w32 An attractive argument is that if most students with three grade As at A level (or indeed even lower grades) can cope with a medical course, then instead of looking for selection using A+ and A++ grades, medical schools should be selecting from the existing pool more systematically on non-cognitive measures. That will require validation of the measures, but it might result in a cohort of doctors who are not only academically able but also well suited to medicine because of temperament and attitude. Whether the slight potential decrease in the academic qualifications of entrants will be offset by their increased suitability on non-cognitive measures will depend on a precise knowledge of the relation between academic ability, suitability, and examination performance, and in particular whether these are linear or non-linear and show thresholds. It is an important question that must be answered by empirical study.
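The dilution argument can be illustrated with a small simulation (ours, not from the cited studies): selecting the same overall fraction of applicants jointly on several independent traits forces a much lower cutoff on each trait, so the selected group is far less exceptional on any one of them than a group selected on that trait alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
traits = rng.standard_normal((n, 3))  # standardised, independent applicant traits

# (a) Select the top 10% on trait 0 alone.
single = traits[traits[:, 0] >= np.quantile(traits[:, 0], 0.90)]

# (b) Select ~10% overall by requiring all three traits above the same
# per-trait percentile: (1 - q) ** 3 = 0.10, so q ~ 0.54 — barely above median.
q = 1 - 0.10 ** (1 / 3)
cut = np.quantile(traits, q, axis=0)
joint = traits[(traits >= cut).all(axis=1)]

print(single[:, 0].mean())  # mean trait-0 score under single-trait selection
print(joint[:, 0].mean())   # mean trait-0 score under joint selection (lower)
```

The per-trait cutoff in case (b) falls to roughly the 54th percentile, so the mean score on any one trait among those selected is much lower than in case (a): selecting on everything weakens selection on each thing.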


Schwartz urged universities to use “tests and approaches that have already been shown to predict undergraduate success” and to assess applicants holistically. We conclude that A levels, with a more finely differentiated marking system at the top end (A+ and A++ grades, for example), offer the greatest potential for improving selection by medical schools' admissions staff: such grades will be maximally robust, given the testing time (and coursework) involved.

We understand why the new intellectual aptitude tests are being introduced, but are concerned that they are being introduced uncritically and without published evidence on their reliability and validity. Typically, they involve only an hour or two of testing time and are thus unlikely to have high reliability or generalisability (particularly owing to content specificity), although no data have been published. Their validity can be doubted for good reason, as published studies have found that intellectual aptitude compares poorly with A levels in predicting the outcome of university and medical school, and it has not been shown to add value to the selection process.

The appropriate alternative to refining A level grades would be for the medical schools to commission a new test that reliably assesses high grade scientific knowledge and understanding. At the same time, more research into the value of non-cognitive tests is clearly required.

We accept that our criticism of intellectual aptitude tests could be shown to be misplaced when the medical schools using them publish their evidence on predictive validity and reliability. Currently the tests are being justified, not by means of any reported data but by general assertions of organisational quality, unspecified relations between scores and university examinations, and by the observation that admissions staff are using them.25 Without evidence, medical schools using these tests are vulnerable to legal challenge.


We thank the referees and William James and Chris Whetton for help with preparing this paper.


  • Supplementary information and figures are available online

  • Contributors DAP and ICM conceived the article. ICM wrote the first draft. DAP, RW, EF, DJ, and PR provided input. RW revised the draft, following referees' comments. ICM is guarantor.

  • Funding None.

  • Competing interests DAP is a member of the team developing the personal qualities assessment.

