Letters

General practitioners' self assessment of knowledge

BMJ 1998;316:1609 (Published 23 May 1998). doi: https://doi.org/10.1136/bmj.316.7144.1609

The vast range of clinical conditions means that doctors cannot know everything

  1. Adrian Edwards, Lecturer,
  2. Michael Robling, Research officer,
  3. Sarah Matthews, Clinical fellow,
  4. Helen Houston, Senior lecturer,
  5. Clare Wilkinson, Senior lecturer
  1. Division of General Practice, University of Wales College of Medicine, Cardiff CF3 7PN

    EDITOR—Factual recall is only one element of a needs assessment that should be taken into account in developing continuing medical education programmes for general practitioners. A poor correlation of self assessed knowledge with performance in a knowledge test, as shown by Tracey et al,1 does not invalidate the rationale for a needs assessment. The knowledge tests used by Tracey et al do not reflect the range of practical issues relevant to patient management. Knowledge of practical management that is relevant to the concerns and questions likely to be raised by patients, and confidence and experience in dealing with specific conditions, are the needs that should be assessed in planning continuing medical education programmes.

    In two studies evaluating guidelines in primary care, one set for the use of magnetic resonance imaging for common orthopaedic conditions (182 general practitioners)2 and the other for the management of breast conditions in primary care (113 general practitioners),3 we undertook needs assessments with the participating practices. In each study doctors rated their confidence in using magnetic resonance imaging or managing breast conditions respectively and identified problems from recent clinical practice, and knowledge of practically relevant issues was assessed. In the first study the doctors also indicated how often they used magnetic resonance imaging. Good knowledge of the contraindications to magnetic resonance imaging was associated with higher self assessed knowledge (P<0.05), confidence (P<0.01), and frequency of use (P<0.05) in practice. Knowledge of the lifetime risk of breast cancer was associated with confidence in dealing with breast conditions (P<0.05). Knowledge was also associated with membership of the Royal College of General Practitioners (P<0.01).

    General practitioners can correctly assess their knowledge and needs if the relevant aspects are explored. Identifying needs from personal experience is perhaps more valid than general rating scales of knowledge; the knowledge that is assessed should be relevant to the discussions that take place with patients and the management issues that arise, in combination with up to date factual information. The range of clinical conditions encountered in primary care is too vast for high levels of personal knowledge to be maintained in all these areas. It is more important that general practitioners can access information when required than that they carry it all in their heads. The nature of general practice is also characterised by high levels of uncertainty in dealing with many patients' conditions, and general practitioners have to be able to cope with this in everyday practice.4 Assessing knowledge of practical management, experience, and confidence can then inform the content and method of an educational programme.


    Knowledge gaps were identified by general practitioners

    1. Mike Crilly, Clinical research fellow
    1. Department of General Practice, University of Manchester, Manchester M14 5NP

      EDITOR—If Tracey et al are correct and clinicians cannot identify their own educational needs, then their paper raises important issues about the direction of continuing medical education for general practitioners both in New Zealand and elsewhere.1 However, the data they present on the correlation between individual general practitioners' self assessment scores and their actual scores on a multiple choice questionnaire are open to a more optimistic interpretation.

      Reconstructing the data from their original figure (concerning thyroid disorders) into a two by two format (table) suggests that general practitioners can identify gaps in their own knowledge. General practitioners who recorded a self assessment score of 4 or less on the nine point semantic scale indicated that they perceived a gap in their knowledge. If a multiple choice questionnaire score of less than 70% (the group average was 68%) is presumed to confirm a knowledge gap then, while the sensitivity of self assessment is a disappointing 28%, the specificity of self assessment by clinicians is a more encouraging 91%. Although the number of individuals is small, this suggests that when clinicians identify gaps in their own knowledge they are usually right but that their judgment is less reliable when they consider their knowledge to be comprehensive.
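      In this two by two framing, self assessment acts as a screening test for a knowledge gap, with the multiple choice questionnaire as the reference standard, so the standard definitions apply: sensitivity = TP/(TP + FN) and specificity = TN/(TN + FP), where a true positive is a doctor who both rated their own knowledge as 4 or less and scored below 70% on the questionnaire.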

      General practitioners' self assessment scores and test scores for gaps in their knowledge of thyroid disorders*


      One of the multiple choice questions relates to the risk of a patient with toxic nodular goitre developing hypothyroidism after radioiodine treatment. I did not know the correct response and identified a gap in my own knowledge. The authoritative textbook I consulted contained no answer,2 while a simple search of Medline identified three recent studies.3-5

      Rather than attempting to overwhelm clinicians with information in the hope that they will recall it when needed, continuing medical education needs to focus increasingly on helping doctors to identify important gaps in their knowledge, ideally as they occur in clinical practice. Clinicians should also be equipped with the skills to track down and apply the information when they (and their patients) need it. Far from undermining self directed learning in continuing medical education, this research adds support to such an approach.


      Testing can be valid only if questions are relevant to those tested

      1. Phil Taylor, General practice clinical tutor
        1. Exeter Postgraduate Medical Centre, Exeter EX2 5DW

        EDITOR—Tracey et al draw attention to the necessity for professional development programmes for general practitioners to be appropriate to the educational needs of those clinicians.1 Unfortunately, their paper makes the assumption that a written true-false test of 50 questions on thyroid disorders is a valid test of general practitioners' educational needs, which can then be compared with their own assessments of that need. This can be the case only if all of the questions in the test are directly relevant to the practice of the general practitioners being tested.

        It is fortunate that they give examples of the questions in the true-false test. As a primary care educationalist with an interest in general medicine, I would assess my knowledge and skills in the management of thyroid disorders as being adequate to practise as a general practitioner. I would, however, assess four of the questions as being not relevant to general practice. The question that asks whether an elderly patient with hyperthyroidism may only have symptoms and signs of general debility is valid since ignorance of this fact may result in general practitioners not testing thyroid function in such cases. Knowing whether most patients with toxic nodular goitre become hypothyroid after radioiodine treatment is not relevant, but knowing that some patients may become hypothyroid after radioiodine treatment is relevant and could have been reflected in a valid question.

        I therefore dispute the conclusion that “general practitioners' insight into their educational needs is poor.” The paper does, however, highlight the need for professional development activities to be based on an objective assessment of needs. More research is needed, but it must not contain invalid assumptions.


        Author's reply

        1. Jocelyn Tracey, Assistant director
        1. Goodfellow Unit, Department of General Practice, University of Auckland, Private Bag 92019, Auckland, New Zealand

          EDITOR—In response to Edwards et al, we were not suggesting that needs assessment is unimportant; rather, we were making a case for it to be done more rigorously. In New Zealand most general practitioners select their continuing education on the basis of their self perceived needs, and this was reflected in our initial self assessment questionnaire. What our study showed was that this form of needs assessment is inadequate and that a more rigorous assessment, such as the one reported by Edwards et al, is necessary.

          Both Edwards et al and Crilly question how well the knowledge we tested related to practical management in general practice. Our definitive test of knowledge did include case scenarios with choices of management. Also, during the test's development we presented 75 possible true-false questions to a group of eight non-academic general practitioners and asked them to remove all questions they thought were not relevant to their practice. We retained only those questions rated as relevant.

          Crilly suggests using a two by two table to analyse the data. Such an analysis of the actual data shows a sensitivity of 24% and a specificity of 93% (these differ from Crilly's figures because some of the plots on the figure represented two data points). Where general practitioners had a self assessment score of less than 5, a knowledge gap on true-false testing was usually confirmed: of the 11 who scored themselves this low, nine also scored low on the true-false test. A total of 56, however, scored themselves 5 or more on the self assessment, and yet 52% of these had a knowledge gap on testing. A sensitivity of 24% suggests to us that such a form of self assessment is not useful.
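          The arithmetic behind these figures can be reconstructed as a minimal sketch (given here in Python for concreteness; treating 52% of the 56 high self assessors as 29 doctors is an assumption based on rounding):

              # Reconstruct the two by two table from the figures given above.
              # A self assessment score below 5 is treated as a positive screen
              # for a knowledge gap; a low true-false score is the reference standard.
              low_self = 11                       # rated own knowledge below 5
              low_self_gap = 9                    # of these, scored low on the true-false test
              high_self = 56                      # rated own knowledge 5 or more
              high_self_gap = round(0.52 * high_self)  # 52% had a gap: 29 doctors (assumed rounding)

              tp = low_self_gap                   # gap present, self assessment flagged it
              fn = high_self_gap                  # gap present, self assessment missed it
              fp = low_self - low_self_gap        # no gap, but self assessment flagged one
              tn = high_self - high_self_gap      # no gap, self assessment agreed

              sensitivity = tp / (tp + fn)        # 9/38 = 0.237
              specificity = tn / (tn + fp)        # 27/29 = 0.931
              print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")

          This reproduces the reported sensitivity of 24% and specificity of 93%.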

          We agree with both Crilly and Taylor that the range of clinical knowledge in general practice is too vast to maintain and that access to information is important. We do, however, have concerns that some general practitioners in our study perceived themselves to have adequate knowledge in areas where their objectively measured knowledge base was poor. They are therefore unlikely to access the available information. We would promote the availability of banks of objective questions that general practitioners could use for self assessment to plan their portfolio of learning each year.
