Series: Assessment of clinical competence
Blueprinting
If students focus on learning only what is assessed, assessment in medical education must validate the objectives set by the curriculum. Test content should be carefully planned against learning objectives, a process known as blueprinting.2 For undergraduate curricula, where the definition of core content is now becoming a requirement,3 this process may be easier than for postgraduate examinations, in which curriculum content remains more broadly defined. However, conceptual frameworks
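The blueprinting step described above can be sketched as a simple coverage check: tag each planned item with the objective it samples, then look for objectives no item covers. The objective names and items below are invented for illustration.

```python
# Hypothetical blueprinting sketch: cross-tabulate planned test items
# against curriculum objectives to spot coverage gaps before the paper
# is assembled. All names here are invented, not from the article.
from collections import Counter

objectives = ["history taking", "physical examination",
              "data interpretation", "management"]

# Each planned item is tagged with the objective it samples.
items = [
    ("Q1", "history taking"),
    ("Q2", "data interpretation"),
    ("Q3", "history taking"),
    ("Q4", "management"),
]

coverage = Counter(obj for _, obj in items)
gaps = [obj for obj in objectives if coverage[obj] == 0]

print(dict(coverage))
print("uncovered objectives:", gaps)  # gaps == ['physical examination']
```

A real blueprint would also weight objectives by importance and check the balance of item formats, but the core idea is this two-way grid of content against items.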
Standard setting
Inferences about students' performance in tests are essential to any assessment of competence. When assessment is used for summative purposes, the score at which a student will pass or fail also has to be defined. Norm referencing, comparing one student with others, is frequently used in examination procedures when a specified number of candidates are required to pass, as in some college membership examinations. Performance is described relative to the positions of other candidates. As such,
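The contrast between norm-referenced and criterion-referenced standard setting can be sketched as two different pass rules over the same scores. The scores, quota, and cut score below are invented for illustration.

```python
# Minimal sketch (invented data) contrasting two standard-setting rules:
# norm-referenced passes a fixed quota of top candidates, whereas
# criterion-referenced passes everyone meeting a predefined cut score.

scores = {"A": 72, "B": 55, "C": 81, "D": 63, "E": 48}

def norm_referenced_pass(scores, quota):
    """Pass the top `quota` candidates, whatever their absolute scores."""
    ranked = sorted(scores, key=scores.get, reverse=True)
    return set(ranked[:quota])

def criterion_referenced_pass(scores, cut_score):
    """Pass every candidate at or above a predefined cut score."""
    return {c for c, s in scores.items() if s >= cut_score}

print(norm_referenced_pass(scores, quota=2))          # top two, regardless of marks
print(criterion_referenced_pass(scores, cut_score=60))  # all who clear the standard
```

Note the practical difference: under the norm-referenced rule the pass list changes with the cohort even if every candidate performs identically to last year, whereas the criterion-referenced pass list depends only on the predefined standard.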
Validity versus reliability
Just as summative and formative elements of assessment need careful attention when planning clinical competence testing, so do the issues of reliability and validity.
Reliability is a measure of the reproducibility or consistency of a test, and is affected by many factors such as examiner judgments, cases used, candidate nervousness, and test conditions. Two aspects of reliability have been well researched: inter-rater and inter-case (candidate) reliability. Inter-rater reliability measures the
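One widely used index of the inter-rater reliability mentioned above is Cohen's kappa, which corrects the raw agreement between two examiners for the agreement expected by chance. A minimal sketch, with invented pass/fail ratings:

```python
# Illustrative computation of Cohen's kappa, a chance-corrected index of
# agreement between two raters. The ratings below are invented examples.

def cohens_kappa(rater1, rater2):
    """Kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    categories = set(rater1) | set(rater2)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement if each rater judged independently at their own base rates.
    expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n) for c in categories
    )
    return (observed - expected) / (1 - expected)

r1 = ["pass", "pass", "fail", "pass", "fail", "pass"]
r2 = ["pass", "fail", "fail", "pass", "fail", "pass"]
print(round(cohens_kappa(r1, r2), 2))  # 0.67
```

Here the examiners agree on 5 of 6 candidates (83%), but because both pass most candidates, much of that agreement is expected by chance; kappa discounts it to 0.67. Inter-case reliability is usually summarised differently, for example with generalisability coefficients across cases.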
Assessment of “knows” and “knows how”
The assessment of medical undergraduates has tended to focus on the pyramid base: "knows"—ie, straight factual recall of knowledge, and "knows how"—ie, the application of knowledge to problem-solving and decision-making. This focus might be appropriate in the early stages of the medical curriculum, but, as skill teaching becomes more vertically integrated, careful planning of assessment formats becomes crucial. Various test formats for factual recall are available, which are easy to devise and
Traditional long and short cases
Although abandoned many years ago in North America, the use of unstandardised real patients in long and short cases to assess clinical competence remains a feature of both undergraduate and postgraduate assessment in the UK. Such examinations are increasingly challenged on the grounds of authenticity and unreliability. Long cases are often unobserved; the assessment relies on the candidate's presentation and therefore tests "knows how" rather than "shows how". Generally, only one
Assessment of “does”
The real challenge lies in the assessment of a student's actual performance on the wards or in the consulting room. Increasing attention is being paid to this type of assessment in postgraduate training, because revalidation of a clinician's fitness to practise and the identification of badly performing doctors are areas of public concern. Any attempt at assessment of performance has to balance the issues of validity and reliability, and there has been little research into possible approaches
References (34)
- et al. Assessing clinical competence, vol 7 (1985)
- Determining the content of certification examinations
- Tomorrow's doctors: recommendations on undergraduate medical education (1993)
- et al. Longitudinal reliability of the Royal Australian College of General Practitioners certification examination. Med Educ (1995)
- Standard setting in medical education. Acad Med (1996)
- A measurement framework for performance based tests
- et al. Performance-based assessment: lessons learnt from the health professions. Educ Res (1995)
- et al. Reliability, validity and efficiency of multiple choice questions and patient management problem items formats in the assessment of physician competence. Med Educ (1985)
- et al. The feasibility, acceptability and reliability of open-ended questions in a problem based learning curriculum
- Wass V, Jones R, van der Vleuten CPM. Standardised or real patients to test clinical competence? The long case...
- Psychometric characteristics of the objective structured clinical examination. Med Educ
- The assessment of clinical skills/competence/performance. Acad Med
- Extended matching items: a practical alternative to free response questions. Teach Learn Med
- Constructing written test questions for the basic and clinical sciences
- The effect of structure in scoring methods on the reproducibility of tests using open ended questions
- The assessment of professional competence: developments, research and practical implications. Adv Health Sci Educ
- Improving oral examinations: selecting, training and monitoring examiners for the MRCGP. BMJ