Letters

Electronic patient records in general practice

BMJ 2001; 323 doi: https://doi.org/10.1136/bmj.323.7322.1184a (Published 17 November 2001) Cite this as: BMJ 2001;323:1184

Published methods of measuring the accuracy of electronic records do exist

  Philip J Bayliss Brown, honorary senior lecturer in medical informatics (Phil@hicomm.demon.co.uk)
  Department of Diabetes and Endocrinology, St Thomas's Hospital, King's College, London SE1 7EH

    EDITOR—Hassey et al have highlighted the importance of ensuring that electronic records are accurate.1 In their study they explored a method of measuring the validity and utility of electronic records in general practice, including whether the coding of 15 marker diagnoses was a true reflection of the actual prevalence.

    They are, however, wrong in their assertion that no published accounts of measuring the validity of electronic record contents exist. Hogan and Wagner performed a literature review and compared 20 articles that met certain quality criteria.2 They recommended (as did Hassey et al) that measures of completeness (sensitivity or detection rate) and correctness (positive predictive value) were valuable. These measures have also been shown to be valuable in measuring the quality of data retrieval.3
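    By way of illustration, completeness and correctness can be read directly from a comparison of the electronic record against a gold standard. The sketch below uses invented counts, not data from either study, purely to show the arithmetic.

```python
# Hypothetical counts from comparing electronic record coding with a gold standard
true_positives = 92    # condition present and correctly coded
false_negatives = 8    # condition present but not coded
false_positives = 5    # coded but condition not actually present

# Completeness (sensitivity / detection rate): coded cases among all true cases
completeness = true_positives / (true_positives + false_negatives)

# Correctness (positive predictive value): true cases among all coded cases
correctness = true_positives / (true_positives + false_positives)

print(f"completeness = {completeness:.2f}, correctness = {correctness:.2f}")
```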

    Other measures derived from 2×2 contingency tables are less likely to be helpful, because the large total number of records means that true negatives dominate the table. To compensate for this, Hassey et al propose two new descriptive statistics. Previous reports have instead used Cohen's κ,4 a measure of the strength of agreement between the observed retrieval and the gold standard over and above the agreement that would be expected by chance. Cohen's κ has the advantage of being a single, well validated index and has been shown to be useful for measuring data retrieval from electronic records, where values of >0.9 can be achieved.3
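    For readers unfamiliar with the statistic, κ compares the observed agreement with the agreement expected by chance given the marginal totals of the table. The sketch below, again with invented counts rather than figures from either study, shows the calculation for a single 2×2 table; note that the large number of true negatives enters the chance correction rather than inflating the result.

```python
# Hypothetical 2x2 table: electronic record coding versus gold standard
a = 92      # coded and present (true positives)
b = 5       # coded but absent (false positives)
c = 8       # not coded but present (false negatives)
d = 895     # not coded and absent (true negatives)
n = a + b + c + d

observed_agreement = (a + d) / n
# Agreement expected by chance, from the marginal totals of the table
expected_agreement = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2

kappa = (observed_agreement - expected_agreement) / (1 - expected_agreement)
print(f"kappa = {kappa:.2f}")  # about 0.93 with these invented counts
```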

    When Cohen's κ is applied to the data by Hassey et al, it highlights similar priority areas of …
