UKCAT among the pigeons
BMJ 2008; 336 doi: https://doi.org/10.1136/bmj.39519.621111.59 (Published 27 March 2008) Cite this as: BMJ 2008;336:691
Beyond the technical difficulties of the UKCAT system, there are wider areas for concern if UKCAT scores are widely used to screen medical school applicants.
There is a long-standing shortage of medical students interested in pursuing a career in psychiatry, and a chronic shortage of consultant psychiatrists (see: http://www.rcpsych.ac.uk/researchandtrainingunit/centreforappliedresearc...).
The UKCAT screen is essentially a test of technical reasoning skills, and its use as a discriminating tool will surely distort the strengths shown by medical students, so that we end up attempting to train a group of junior doctors selected for exceptional but narrow skills and aptitudes.
Medicine requires a broad range of human abilities, some extremely difficult to assess in any computerised test. Valuing what is measurable (reliably) rather than measuring (imperfectly) what is valuable is a catastrophic error. We must question whether we are drifting towards medical schools filled with potential surgeons and physicians and empty of general practitioners and psychiatrists.
Would you want to be cared for by Dr House?
Competing interests: None declared
As a student who hails from a low socio-economic area in South Wales and who knows first-hand of the fears that students hold regarding student debt, I feel that the UKCAT is nothing more than a divisive sieving exercise that will further increase the divide between those who can and cannot afford to study medicine – a degree already overburdened with student debt and now facing uncertainty over postgraduate prospects and job security!
The comments of Professor Johnson of Nottingham University are worrying and imply that it is acceptable to use students as guinea pigs in trialling this hit-and-miss screening tool. To claim that the test is cheaper than other entry examinations and almost “value for money” is like justifying robbing the rich because it is not as bad as robbing the poor!
The UKCAT is merely an experiment being carried out on unsuspecting medical students-to-be. Forcing them to pay the fee to take the test so that universities can validate its accuracy is unethical and unfair. You would not expect patients to pay to take part in a clinical trial, surely?
The fee is deterring those from poorer backgrounds from even considering applying to medical school courses. Within a few years, revision courses will have been developed on “how to do well at the UKCAT” – yet again probably increasing the divide between the affluent and less affluent, between public schools and state schools, and between those in the know and the majority of school leavers.
Surely gaining entrance to medical school should be based on your talents, academic ability and overall thirst to learn, not on how much money is in the bank to pay for entrance examinations and the like. Those who argue that the UKCAT will serve to differentiate between similarly outstanding candidates should perhaps reconsider the benefit of interviews. Face-to-face discussions are the best way to tease out the personality traits considered fundamental for doctors; they allow personal statement claims to be validated and give the medical school an overall impression of the applicant’s attitude, personality and ability. With communication skills being emphasised more and more in undergraduate curricula, surely the interview serves as the perfect way to kick-start that process.
I only hope that the UKCAT disappears as quickly as it surfaced, and that the hard work and successes of widening access schemes into medicine achieved over recent years will not be undermined by the fear held by many students that medicine as a career is a financial burden beyond their reach.
Competing interests: BMA Medical Students Committee Deputy Chairperson
The article necessarily focuses on the UKCAT, with only cursory mention of the BMAT. Both remain unproven and should be interpreted with extreme caution.
The evidence for the BMAT is also limited, derived from partial assessment at Cambridge (247/300 medical and 66/103 veterinary students) and hindered by the compensatory nature of the selection process. There is a correlation between scores achieved and performance in first-year exams, but it is strong only in veterinary medicine, and there is no correlation between medical students’ clinical skills ratings and BMAT performance (1). Neither test can claim to predict the core non-academic qualities of a doctor.
Recommendations of the Quality Assurance Agency for Higher Education (QAA) are not always adhered to. The QAA requires an exercise of judgement in the selection process, underpinned by reference to transparent and justifiable criteria. Rejection based on just one 30-minute part of the BMAT, without evidence of validity, is unjustifiable. Cambridge Assessment recommends that all available evidence be taken into account in decision-making.
Institutions contribute to an already complex admissions process by not being transparent about the BMAT and its cut-off scores in their published criteria. Applicants are unable to make informed choices and may jeopardise their chance of admission. Of the few institutions using the BMAT, one attaches importance to section 3 and another to sections 1 and 2. Imperial selects for interview only those who score highly in all sections, without first making this clear. Consequently it sifts through 2,700 applications, possibly disadvantaging deserving students through an inability to carry out thorough assessments. Institutions are using individual sections as hurdles to progression to interview, which was never intended. Conversely, Cambridge interviews almost all of its applicants, and its assessment correctly encompasses all criteria.
The article is timely, and there is an urgent need for consensus on the interpretation and application of aptitude tests and any other hidden factors, to ensure fairness.
Recent studies (2,3) from UCL and Imperial on the effect of gender and ethnicity on exam performance are also of concern if integrated into selection processes, through bias or otherwise. This is relevant, as applications are not anonymised (4) by UCAS. The work is questionable, since just over half of students agreed to disclose their ethnic origin, possibly without ‘express consent’. The ethnicity of the others was, surprisingly, guessed from photographs, names and hearsay. Small numbers, together with inconsistent teaching attachments and examination programmes in dissimilar institutions, preclude meaningful analysis. Religion and socio-economic factors were not considered, and census characteristics were not adequately detailed. The studies may imply that learning styles and ability differences are genetically determined. Data protection and ethical considerations are paramount when the emphasis is on diversity and equal opportunities in education.
1. Emery JL. A report on the predictive validity of the BMAT (2005) for 1st year examination performance on the Medicine and Veterinary Medicine courses at the University of Cambridge. 26 September 2007.
2. Haq I, Higham J, Morris R, Dacre J. Effect of ethnicity and gender on performance in undergraduate medical examinations. Med Ed 2005;39:1126-1128.
3. Woolf K, Haq I, McManus IC, Higham J, Dacre J. Exploring the underperformance of male and minority ethnic medical students in first year clinical examinations. Adv Health Sci Educ Theory Pract. DOI 10.1007/s10459-007-9067-1.
4. McManus IC, Richards P, Winder BC, Sproston KA, Styles V. Medical school applicants from ethnic minority groups: identifying if and when they are disadvantaged. BMJ 1995;310:496-500.
Competing interests: Parent
More Research is Needed
The author makes some interesting points regarding the UKCAT in this article. Understandably, there are currently some frustrations associated with the UKCAT. However, I believe there are some other points to consider before we throw the baby out with the bathwater.
Most applicants to medicine now have three A’s, leaving admissions departments to make decisions, usually based on personal statements and interviews. Neither of these methods has been proven to have predictive validity, and unfortunately they tend to favour students who have access to coaching. [1] To make matters worse, every year when A level results come out, we see that students from independent schools are more likely to achieve three A’s than students from state schools. [1] The proposals to split the A grade into A*, A**, and A*** will only exacerbate the problem of disadvantage due to school type.
The UKCAT is intended to be an exam that cannot be coached and does not depend on previous schooling. Furthermore, it would allow universities to compensate for lower A level scores with a high UKCAT score and vice versa. While the often-cited McManus articles [2, 3] are valuable, they should not rule out the concept of ever using aptitude tests in the UK. Their conclusions are based on A level grades from the 1960s and 1970s; clearly things have changed with respect to grade inflation. [4] Furthermore, the aptitude tests used may differ substantially from the UKCAT, making them unfair comparisons from which to draw conclusions.
Research conducted in other areas and disciplines disagrees with some of the conclusions made by McManus et al. For example, using high school grades along with the SAT Reasoning Test has been shown to yield a better prediction of first year university grades than using high school grades alone. [5] Furthermore, a study which piloted the use of the SAT Reasoning Test in the UK revealed that A levels and the SAT “assess somewhat distinct constructs, suggesting that the SAT, or a test like it, may be of value in predicting university performance.” [6]
With regard to the criticisms made about the MCAT, it is important to note that the structure of this exam differs from that of the UKCAT. Medicine is a graduate course in the US, and as such the MCAT draws on acquired scientific knowledge (i.e. theoretically, one can study for the MCAT). MCAT results are paired with university grades (not high school grades akin to A level grades) in an attempt to predict future clinical performance. Therefore, criticisms made about the MCAT cannot be extended to the UKCAT, because they are distinctly different exams.
In conclusion, while there have been teething problems for the UKCAT, it is only in its first two years of existence. Given the time and money already invested into this exam, both by universities and the individuals paying to take it, would it not be better to wait until firm evidence is produced based on rigorous research before we decide the fate of the UKCAT?
1. The Sutton Trust, University admissions by individual schools. 2007.
2. McManus, I., et al., Intellectual aptitude tests and A levels for selecting UK school leaver entrants for medical school. BMJ, 2005. 331: p. 555-559.
3. McManus, I., et al., A levels and intelligence as predictors of medical careers in UK doctors: 20 year prospective study. BMJ, 2003. 327: p. 139-142.
4. Woolcock, N., Second-chance exam 'dumbs down' GCSE, in The Times. 2008.
5. Zwick, R., Fair Game? The Use of Standardized Admissions Tests in Higher Education. 2002, London: Routledge Falmer. 228.
6. McDonald, A., P. Newton, and C. Whetton, A Pilot of Aptitude Testing for University Entrance. 2001, National Foundation for Educational Research.
Competing interests: None declared