Criteria, competencies, and confidence tricks
BMJ 2006; 332 doi: https://doi.org/10.1136/bmj.332.7535.233 (Published 26 January 2006) Cite this as: BMJ 2006;332:233
- Richard Wakeford, vocational training scheme organiser
- Accepted 14 October 2005
Baker proposes that focus groups of professionals, managers, and the public should be convened to detail the minimum standards expected of doctors regarding their fitness to practise.1 Although the intention of this proposal is laudable, it is unrealistic and unworkable for several reasons: the medical profession is too complex and changeable to be characterised in this way; standards, criteria, and thresholds that are concise will prove contentious and unacceptable (see examples below), while those detailed enough to be precise will be too lengthy to allow “ownership” by the profession; and long educational checklists have not proved helpful in practice.
Firstly, the work of doctors is highly complex and case specific (box). It develops along with the doctor's own professional expertise, and it cannot be analysed by simple lists and straightforward criteria.2 This may frustrate people like Baker, who argues that if there is a lack of clarity about what is unacceptable, “How can patients decide when a doctor should be reported for investigation, and what confidence should they place in medical regulation?” But this does not make the task feasible. Indeed, what confidence would patients have in a profession whose members' skills and standards of practice could be reduced to a few sets of tick boxes? And would they really think this is possible?
The examples in Baker's article illustrate the second difficulty: they are not precise. Consider the statement “the doctor does not misuse drugs.” What is the meaning of “does not”?—never? in private? where it might be legal? What does “misuse” mean?—the author probably means the dishonest prescribing of drugs for the doctor's own use, but other interpretations are possible. What does “drugs” mean?—which drugs? is alcohol included? how will the list change as the law is modified? Because the work done by doctors is so complex, guidelines, thresholds, and criteria would need to be detailed and lengthy to achieve precision, and the profession could not then support or implement them.
Thirdly, detailed lists are not new. “Instructional learning objectives” (ILOs) were proposed as a general solution in education in the late 1950s, and behavioural psychologists promised that they would lead to systematic, effective teaching and assessment. A detailed classification and hierarchy of learning objectives was accompanied by a thinner, popularising treatise.3 4 The message was simple: precise instructional learning objectives for all learning tasks would solve our teaching and assessment problems. Teaching would be delivered in bite sized chunks, by programmed books or teaching machines. If the objectives were framed in behavioural terms they would be testable, thereby solving the problem of competency testing. It was an attractive idea that used persuasive language and seemed to make sense. Academics, educationalists, teachers, and testing agencies signed up. The books sold well. Instructional learning objectives were written and refined, specified, and swapped. Associations were established and conferences held, but of course it did not work. Everybody got bored—educators with writing the teaching materials and learners with the tedious results—and the assessments were not at all straightforward. It was all forgotten in just over a decade.
Box: Doctors' work is complex and case specific
Doctors' work routinely presents them with problems for analysis, solution, and advice that are complex and multifaceted, requiring knowledge and skills extending from bioscience through statistics and behavioural science to counselling. These problems are frequently multidimensional and not amenable to simple algorithmic solutions; they require case specific investigation before their nature can be properly defined and understood. Medicine's content and practices are broad and ill defined, and subject to constant updating by a large, live research literature, requiring a continuing, lifelong understanding of the research mentality and the changing theories it produces. Doctors thus encounter challenges and problems for which standard solutions are not available and for which novel resolutions will routinely be needed, requiring a developed understanding of risk and probability and highly sophisticated judgments based on experience and embedded in the live academic literature and professional practice (D Good, personal communication, 2004).
Baker's proposals follow the same pattern as these instructional learning objectives. The weaknesses of current medical regulation need to be addressed, but medicine—certainly general practice—is rarely straightforward. There are few absolutes, factual or moral. Doctors serve many interests and stakeholders, and sometimes multiple masters. Decision making, like clinical problem solving, is often situation specific, and what is or is not acceptable probably cannot be defined by straightforward standards, criteria, and thresholds. Policy makers should avoid detailed checklists and concentrate on the bigger picture.
RW is an educationalist and specialist in assessing medical competence. He works with the MRCGP examination and experiences the difficulties of agreeing learning objectives for vocational training. He also organises courses for a GP vocational training scheme in Huntingdon, so is in a good position to triangulate and evaluate proposals and to introduce a historical perspective.
Competing interests: None declared.