The electronic patient record in primary care—regression or progression? A cross sectional study
BMJ 2003; 326 doi: https://doi.org/10.1136/bmj.326.7404.1439 (Published 26 June 2003)
Cite this as: BMJ 2003;326:1439
- Julia Hippisley-Cox (julia.hippisley-cox{at}nottingham.ac.uk), senior lecturer in general practice1,
- Mike Pringle, professor in general practice1,
- Ruth Cater, researcher1,
- Alison Wynn, researcher1,
- Vicky Hammersley, Trent Focus Research Network coordinator1,
- Carol Coupland, senior lecturer in medical statistics1,
- Rhydian Hapgood, MRC training fellow in health service research2,
- Peter Horsfield, clinical director, PRIMIS1,
- Sheila Teasdale, service director, PRIMIS1,
- Christine Johnson, lecturer in general practice1
- 1 Division of General Practice, Nottingham University, Nottingham NG7 2RD
- 2 Sheffield Centre for Integrated Genetics, Section of Public Health, ScHARR, University of Sheffield, Sheffield S1 4DA
- Correspondence to: J Hippisley-Cox
- Accepted 17 March 2003
Abstract
Objectives To determine whether paperless medical records contained less information than paper based medical records and whether that information was harder to retrieve.
Design Cross sectional study with review of medical records and interviews with general practitioners.
Setting 25 general practices in Trent region.
Participants 53 British general practitioners (25 using paperless records and 28 using paper based records) who each provided records of 10 consultations.
Main outcome measures Content of a sample of records and doctor recall of consultations for which paperless or paper based records had been made.
Results Compared with paper based records, more paperless records were fully understandable (89.2% v 69.9%, P=0.0001) and fully legible (100% v 64.3%, P < 0.0001). Paperless records were significantly more likely to have at least one diagnosis recorded (48.2% v 33.2%, P=0.05), to record that advice had been given (23.7% v 10.7%, P=0.017), and, when a referral had been made, were more likely to contain details of the specialty (77.4% v 59.5%, P=0.03). When a prescription had been issued, paperless records were more likely to specify the drug dose (86.6% v 66.2%, P=0.005). Paperless records contained significantly more words, abbreviations, and symbols (P < 0.01 for all). At doctor interview, there was no difference between the groups in the proportion of patients or consultations that could be recalled. Doctors using paperless records were able to recall more advice given to patients (38.6% v 26.8%, P=0.03).
Conclusion We found no evidence to support our hypotheses that paperless records would be truncated and contain more local abbreviations, and that the absence of writing would decrease subsequent recall. Conversely, we found that the paperless records compared favourably with manual records.
Introduction
The NHS information strategy,1 the national service frameworks,2 and the NHS plan3 all promote the use of electronic patient records. The national specification for the integrated care records service4 aims to develop clinical records that are designed around the patient, integrated across all health and social care settings, and capable of supporting the implementation of care pathways within the national service frameworks. Good quality electronic records can be used to prompt better care,5–9 improve coordination of care between primary and secondary care,10 monitor the health of populations,11–13 and undertake primary care based research.14 15
Of the two main concerns about the use of electronic patient records for clinical care, one—regarding the reliability of hardware and the confidentiality and legality of the electronic patient record—has been resolved.16 17 The other concern is that electronic patient records could sacrifice some of the richness of data inherent in the written medical record. This could result from inadequate computer training or skills or from limitations in the software. Written records may also, through pattern recognition, help doctors to recall more about the patient, the particular consultation, and the intended management plan than can be gleaned from a computer screen.
We found only one study comparing the completeness of electronic and paper medical records, based in a US hospital.18 We therefore undertook a study to test the hypotheses that primary care doctors may (a) enter less information overall and less detail in the computer record and (b) recall less about the consultation from screen records than from paper records. In other words, paperless records may contain less information, and that information may be harder to retrieve than information from paper records. We compared (a) the content and quality of a sample of records and (b) doctor recall of consultations in which paperless or paper based records were used.
Participants and methods
Practice recruitment, subjects and setting
We sent a postal invitation to all the general practitioners in the 377 practices in Nottinghamshire, Leicestershire, and Lincolnshire asking them to categorise themselves by the type of medical records they maintained. We initially differentiated between manual records (all records kept on paper) and combination records (part electronic and part paper), but piloting made it clear that the appropriate comparison was between paperless records, where all patients' clinical notes were entered on to and stored on computer, and paper based records, where either a combination of manual and electronic records or only manual records were kept. We used random number tables to select 26 general practices with 56 general practitioners from the 105 practices (226 practitioners) that expressed an interest.
Practice visits and data collection
A research assistant (RC or AW) arranged to visit each practice on a day on which each consenting partner agreed to an interview lasting up to 60 minutes (usually about 30 minutes). The researcher identified a consulting session up to six weeks before the interview date by randomly selecting one of 10 possible sessions, from "Monday morning" through to "Friday afternoon or evening," using random number tables. If the doctor concerned was not consulting in that session, the nearest earlier session in which the doctor was consulting was used. If the doctor was on holiday, the randomly selected session in the previous week was chosen. The practice receptionist was asked to retrieve the records of 10 consecutive consultations from the start of the selected session, in whatever form they were recorded. Appointments where patients did not attend were ignored.
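The fallback rules for choosing a session can be made concrete with a short sketch (a minimal illustration under assumed names; the `consulting` set and session labels are hypothetical, and stepping back past Monday morning stands in for moving to the previous week):

```python
import random

# The 10 possible weekly sessions, "Monday morning" to "Friday afternoon/evening".
SESSIONS = [f"{day} {slot}"
            for day in ["Monday", "Tuesday", "Wednesday", "Thursday", "Friday"]
            for slot in ["morning", "afternoon/evening"]]

def select_session(consulting, rng):
    """Randomly pick one of the 10 sessions; if the doctor was not
    consulting in it, step back to the nearest earlier session in which
    he or she was consulting."""
    start = rng.randrange(len(SESSIONS))
    for offset in range(len(SESSIONS)):
        candidate = SESSIONS[(start - offset) % len(SESSIONS)]
        if candidate in consulting:
            return candidate
    return None  # doctor held no sessions at all

# Usage: select_session({"Monday morning", "Thursday afternoon/evening"},
#                       random.Random(1))
```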
For each consultation the record, in whatever form, was printed or photocopied by the receptionist with the patient's name and address removed in order to ensure that it was completely anonymous. The receptionist attached the following information for each patient—age, sex, and computer number; number of consultations with the general practitioner being interviewed in the 12 months before the index consultation date and number of consultations up to six weeks after the consultation date; and number of consultations with other doctors in the practice over the 12 months before the index consultation date and up to six weeks after. The receptionist or practice manager also filled in a form declaring that the records they had retrieved had been anonymised and that the researcher had not had access to the identity of patients whose records had been assessed.
Interviews with general practitioners
The researcher obtained informed consent from the general practitioners for recorded interviews. During these the general practitioners had access to all their usual manual or computer records for their 10 selected consecutive consultations. The researcher asked, "Please tell me about this consultation," for each of the 10 consultations and recorded the replies. The interviews were all transcribed and imported into NUD*IST qualitative analysis software. A coding frame was developed by RC and discussed with the project team after review of a one in 10 sample of consultations. The modified coding scheme was then applied to 50 consultations selected at random. A second coding was undertaken independently by JHC, the results were directly compared, and discrepancies were resolved. The final categorisation was then applied to the whole sample by one author (RC). The final categories (see table 3) included spontaneous recall of the patient or consultation; recall of the reason for encounter or diagnosis; evidence of the patient's state of mind, expectations, and beliefs; and clinical details. We used spontaneous recall of the patient or the consultation rather than a direct question to minimise bias. For example, we coded that a general practitioner recalled a patient where he or she explicitly said "This is a patient I know well," "I remember this particular patient," or a similar phrase.
Medical record scoring system
We developed a scoring system for assessing the printed computer records and copied manual records, based on the record scoring system developed and validated by the General Medical Council.19 We scored consultations using the terms "legible," "medically understandable," and "medically appropriate." "Legible" refers to whether the words or characters recorded could be read in full, in part, or not at all by another doctor. "Medically understandable" refers to whether the clinical content of the record could be understood or followed in full, in part, or not at all by another doctor. A record containing non-standard abbreviations that were legible but could not be interpreted would not be understandable. "Medically appropriate" refers to whether the clinical content was deemed appropriate. An inappropriate record would be one with an unexpected decision given the reason for attendance and history, or one that omitted an aspect of history, examination, diagnosis, or management to an extent likely to significantly affect patient care. An example of a record with an unexpected outcome was one in which a patient with raised blood pressure had bendrofluazide increased to 5 mg (when 2.5 mg is the maximum recommended dose for hypertension). Records that were not legible or understandable could not be assessed for appropriateness.
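For illustration only, the three-level scores and the dependency between them could be represented as follows (a minimal sketch with assumed names; it is not the authors' instrument):

```python
# Illustrative representation of the three-level record scores; all
# names here are assumptions, not part of the published scoring form.
from dataclasses import dataclass
from enum import Enum
from typing import Optional

class Level(Enum):
    FULL = "in full"
    PART = "in part"
    NONE = "not at all"

@dataclass
class RecordScore:
    legible: Level
    understandable: Optional[Level]  # None when the record is wholly illegible
    appropriate: Optional[bool]      # None when appropriateness could not be judged

    def assessable_for_appropriateness(self) -> bool:
        # Records that were not legible or understandable could not be
        # assessed for appropriateness.
        return (self.legible is not Level.NONE
                and self.understandable is not None
                and self.understandable is not Level.NONE)
```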
Further data extracted included diagnosis, reason for consultation, symptoms, medical history, family history, social history, lifestyle (smoking, alcohol intake, etc), patient views, physical examination, clinical values (blood pressure, weight, height, peak flow, etc), issue of a medical sick note, investigations, referrals (and their specialty), advice, and details of prescriptions (drug name, dose, frequency). We also counted the number of words, abbreviations, symbols, numbers, and values in each record. The definitions used are listed in the appendix on bmj.com.
Inter-rater and intra-rater reliability
Three general practitioners (MP, RH, and CJ) independently scored a 10% sample of the medical records, and we assessed inter-rater reliability using the κ statistic for each outcome. The median of all the κ values was 0.73 for MP v RH and 0.92 for MP v CJ. MP then scored all the records. We assessed intra-rater reliability on a further random 10% sample of records and found the median κ to be 0.82.
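As a minimal sketch of this reliability check (assuming pandas and scikit-learn; the outcome column names are illustrative), the per-outcome κ values and their median could be computed like this:

```python
# Cohen's kappa per scored outcome between two raters, then the median,
# as in the inter-rater reliability check above. Each DataFrame holds
# one column per outcome (e.g. "legible", "understandable") and one row
# per record in the 10% sample.
import pandas as pd
from sklearn.metrics import cohen_kappa_score

def median_kappa(rater_a: pd.DataFrame, rater_b: pd.DataFrame) -> float:
    kappas = [cohen_kappa_score(rater_a[col], rater_b[col])
              for col in rater_a.columns]  # one kappa per outcome
    return float(pd.Series(kappas).median())
```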
Statistical methods
Our main comparison was between paperless and paper based records of consultations. Our main hypothesis was that paperless records would contain less detail than paper based records, and so our main outcome was the number of words recorded in the medical record. Other outcomes included the scores for the content and quality of the medical record and the degree of recall by general practitioners of their consultations.
We undertook an analysis at the level of the consultation and accounted for clustering by general practitioner by using the survey methods of analysis in Stata (version 7.0) and specifying general practitioner as the primary sampling unit. For binary outcome data, we compared paperless practices with other practices using modified χ2 tests. For continuous outcomes, we have presented means and 95% confidence intervals and made statistical comparisons of the means of the log transformed data, taking account of clustering by general practitioner. We compared the characteristics of the general practitioners using χ2 tests or Mann-Whitney tests as appropriate. We used a significance level of 0.05 (two tailed).
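The authors used Stata's survey commands; a present-day re-expression of the same idea in Python might look like the sketch below (an assumption-laden illustration: the data file, variable names, and the +1 in the log transform are all hypothetical):

```python
# Cluster-adjusted comparisons with the general practitioner as the
# clustering unit; hypothetical variable names throughout.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("consultations.csv")      # hypothetical data file
df["log_words"] = np.log(df["words"] + 1)  # log transform; +1 guards against empty records

# Continuous outcome: compare mean log word counts, with standard errors
# clustered by general practitioner.
cont = smf.ols("log_words ~ paperless", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["gp_id"]})

# Binary outcome (e.g. fully understandable): GEE with an exchangeable
# working correlation gives a cluster-adjusted analogue of the modified
# chi-squared test.
binm = smf.gee("understandable ~ paperless", groups="gp_id", data=df,
               family=sm.families.Binomial(),
               cov_struct=sm.cov_struct.Exchangeable()).fit()

print(cont.summary())
print(binm.summary())
```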
Power calculation—A retrospective calculation showed that the study had 89% power, at a two sided 5% significance level, to detect a twofold difference in the number of words in paperless records compared with paper based records, given 10 consultations per general practitioner and an intracluster correlation coefficient of 0.45.
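The clustering enters this calculation through the standard design effect. Using only the quantities stated above (the quoted 89% also depends on the standard deviation of the log word counts, which is not restated here):

```latex
\[
\mathrm{DEFF} = 1 + (m-1)\rho = 1 + (10-1)\times 0.45 = 5.05,
\qquad
n_{\mathrm{eff}} = \frac{n}{\mathrm{DEFF}}:\quad
\frac{249}{5.05}\approx 49 \text{ paperless},\quad
\frac{280}{5.05}\approx 55 \text{ paper based}.
\]
```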
Results
Characteristics of the general practitioners
One practice dropped out after recruitment, and we therefore interviewed 53 general practitioners from 25 practices. Of these general practitioners, 25 (47%) were classified as using paperless records and 28 (53%) as using paper based records.
To determine whether the practices we recruited were representative of other practices in Trent region, we compared their characteristics with those of the remaining practices in Nottinghamshire, Leicestershire, and Lincolnshire, using the relevant subset of a database of practice characteristics and performance indicators for all practices in Trent region.20 Recruited practices were statistically similar to the other practices in the proportion in rural locations, deprivation scores, total list size, list size per whole time equivalent general practitioner, and dispensing status, and they had similar hospital admission rates for a range of common conditions. Practices with paperless records were also similar to paper based practices in characteristics and performance indicators.
The general practitioners using paperless records had similar characteristics to those of the doctors using paper based records in terms of age (median 43 years v 41.5 years, P=0.82), the proportion who were male (17/25 (68%) v 24/28 (86%), P=0.12), length of time at the practice (median 6.0 years v 7.5 years, P=0.84), list size per whole time equivalent (median 1890 v 1959, P=0.56), size of practice (median total list size 6859 v 8705, P=0.53), number of whole time equivalent general practitioners at the practice (median 3.5 v 5.0, P=0.41), and Townsend (deprivation) score of the practice (median −0.42 v −0.13, P=0.31). Of the 25 general practitioners using paperless records, 12 (48%) had 10 minute appointments, compared with 13 (46%) of the 28 doctors using paper based records (P=0.91). There was also no difference between those using paperless records and those using paper based records in the proportions who were from a rural location (P=0.57), who were part of a regional research network (13% v 10%, P=0.80), who were training general practitioners (P=0.13), or who had met higher targets for immunising children under 1 year old, preschool boosters, and cytology (P > 0.05 for all). We found no difference between the two groups for a range of performance and health outcome measures used in an earlier study.20
Thirty seven of the 53 general practitioners used EMIS computer systems, with no significant difference between those using paperless records and those using paper based records in the proportions using EMIS (80% v 61%, P=0.13). The other systems used were Torex (five doctors), Micromedic (six), Microtest (two), GCS LK Global (two), and an in-house system developed by the practice itself (one).
Patient and consultation characteristics
We had only nine consultations to score for one of the general practitioners using paperless records compared with 10 for all the others, giving 249 consultations with paperless records and 280 with paper based records. The characteristics of the patients from the consultations with paperless records were similar to those of the patients with paper based records in terms of age (median 45 v 48 years, P=0.16), sex (39% v 40% male, P=0.16), proportion who had consulted a general practitioner in the previous year (90% v 91%, P=0.71), and proportion who had consulted the same general practitioner in the six weeks after the index consultation (34% v 42%, P=0.12). However, the patients with paperless records were less likely to have seen the same general practitioner in the preceding year than the patients from the paper based practices (65% v 79%, P=0.009), although this was no longer significant after adjustment for the number of whole time equivalent doctors at each practice (P=0.06).
Table 1 shows that the paperless patient records contained significantly more words, more abbreviations, and more symbols than the paper based records (P < 0.01 for all). We found no drawings in paperless records (as expected). There were seven drawings in the paper based records, of which two were diagrams indicating the site of a breast lump, and five were diagrams of abdomens (two indicating the site of tenderness).
Table 2 shows that the paperless and paper based records did not differ in the proportions that contained an entry, that were medically appropriate, or that recorded a reason for encounter. All 249 paperless records were fully legible, whereas 16 (6%) of the 280 paper based records were totally illegible and 84 (30%) were partially legible (P < 0.0001). The paperless records were also significantly more likely than the paper based records to be fully understandable (89% v 69%, P=0.0001); to have at least one diagnosis recorded (48% v 34%, P=0.05); to record that advice had been given (24% v 11%, P=0.017); when a referral had been made, to contain details of the specialty referred to (77% v 60%, P=0.03); and, when a prescription had been issued, to specify the drug dose (87% v 66%, P=0.005). Twice as many paperless records contained the patient's family history as did paper based records, but the significance was borderline (4% v 2%, P=0.09).
Table 3 shows the results of the interviews with the general practitioners. The proportion of patients spontaneously recalled by the general practitioners did not differ between the two groups (40 (16%) of the patients with paperless records v 49 (18%) of patients with paper based records, P=0.78). Similarly, the proportion of consultations recalled was similar between the two groups (1% v 4%, P=0.10). Recall of other items associated with the consultations was similar, except that recall of advice in the paperless consultations was greater than for paper based consultations (39% v 27%, P=0.03). An analysis of additional information recalled in the interview compared with that recorded in the computer record showed no significant differences between the two groups.
Discussion
This is the first study in primary care to report on the clinical content and quality of paperless medical records compared with paper based records. We found no evidence to support our hypotheses that paperless records would be truncated and contain more local abbreviations, and that the absence of writing would decrease subsequent recall. Conversely, we found that the paperless clinical records compared favourably with manual records and that doctor recall of patients and consultations did not differ. Electronic records were easier to understand and were more likely to have at least one diagnosis recorded, to record that advice had been given, and to record the specialty of referrals and the doses of prescribed drugs. Electronic records also contained significantly more words, abbreviations, and symbols. At interview, the general practitioners who used paperless records were able to recall more advice given to patients but were no more or less likely to recall a specific patient or particular consultation. This implies that the type of record used is unlikely to be detrimental to continuity of care.
Limitations of study
This study is descriptive. The practices from which we recruited were representative of those in the Trent region in terms of basic characteristics, although they tended to be larger. The cross sectional nature of the study does not allow us to determine whether the general practices with paperless records had inherently different clinical management from those with paper based records. However, we found no detectable differences in practice characteristics and standard performance indicators. Given the nature of the technology being evaluated, a randomised trial was not possible, and this study design was the best available to us.
Although the general practitioners who volunteered had usually agreed to take part in the study by the time of the consultations that were selected for research six weeks later, they did not know the interval or the identity of the session to be used. We would not expect that agreement to take part in the study would have altered their recording behaviour or their subsequent recall of the consultations.
Conclusions
Our hypothesis was that the constraints of computer entry (such as keyboard skills) might lead to an impoverishment of clinical records in paperless practices (the anecdotal impression gained by the researchers). We found no evidence to support this hypothesis. We cannot, of course, assume that the higher quality records in paperless practices imply better clinical care or outcomes, but good records are an essential medicolegal protection and a first step to good clinical decision making.
We were also surprised by the low level of specific reference to a patient or consultation, suggesting that few general practitioners in either group remembered the specific consultation or the patient. We did not ask specifically whether they recalled particular consultations or patients, but general practitioners who were not simply reading from the medical record commonly referred to specific extra information concerning the patient or the consultation. In the textual analysis we detected such specific references for fewer than one in five patients and one in 20 consultations, a finding that suggests that the doctor-patient relationship may not be as personal as many suppose.
What is already known on this topic
Good quality electronic medical records can enhance patient registration and appointment systems and repeat prescribing; can be of value in monitoring the health of populations; and are used for research
Little is known about the content and quality of electronic records compared with manual records.
What this study adds
Paperless electronic records compare favourably with paper based records.
Paperless electronic records contain significantly more words and abbreviations. They are more legible and easier to understand. They contain more diagnoses and details of referrals and of medication.
Use of electronic records does not change the doctor's recall of patients or their consultations
Footnotes
- Definitions used in assessing medical records are listed in the appendix on bmj.com.
- We thank the participating general practitioners and their staff for their help in conducting this study, and Ms Jane Allen for helping process the grant application.
- Contributors: MP had the original idea for the study. MP and JHC designed and wrote the protocol submitted for funding. JHC scored a sample of the interviews, supervised the data collection, and undertook the data manipulation, analysis, and primary interpretation. MP scored all the medical records. JHC and MP co-drafted the paper. RC undertook some of the general practitioner interviews, collected and entered data, helped administer the project, devised the coding scheme for the interviews, coded all the interviews, and contributed to the interpretation of the findings. AW also undertook general practitioner interviews and contributed to the medical record scoring, the background literature review, and project meetings. VH contributed to practice recruitment, study design, development of the scoring forms, and interpretation of the findings. CC advised on the study design and data analysis. RH and CJ contributed to the development of the medical record scoring form and to the interpretation. ST and PH were on the steering group and contributed to the design and interpretation of the findings. JHC is guarantor; she accepts full responsibility for the study, had access to the data, and controlled the decision to publish.
- Funding: NHS ICT Research and Development Programme.
- Competing interests: None declared.
- Ethical approval: Granted by the local research ethics committees in Nottinghamshire, Leicestershire, and Lincolnshire.