Lessons from the central Hampshire electronic health record pilot project: evaluation of the electronic health record for supporting patient care and secondary analysis
BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.328.7444.875 (Published 08 April 2004) Cite this as: BMJ 2004;328:875
- Hugh Sanderson, consultant in public health1,
- Trina Adams, clinical systems programme manager2,
- Martin Budden, Surrey LIS programme manager3,
- Chris Hoare, chief information officer2
- 1Winchester and Eastleigh Healthcare Trust, Winchester, Southampton SO22 5DG
- 2Hampshire and Isle of Wight Strategic Health Authority, Southampton SO16 4GX
- 3East Surrey Health Informatics Service, Surrey and Sussex Strategic Health Authority, West Park Hospital, Epsom, Surrey K19 8PB
- Correspondence to: H Sanderson
- Accepted 19 January 2004
The central Hampshire electronic health record (CHEHR) was constructed by linking several electronic patient records.1 Its two main objectives were to test the clinical usefulness of the electronic record in supporting emergency and out of hours care and to determine whether clinical data could be extracted and used to assess patient care.
Supporting emergency care
A clinical committee, on which all stakeholders were represented, was established to determine access protocols for staff. Access to the central Hampshire electronic health record was granted to several groups of staff (box 1), subject to training in its use and consent conditions.
The pilot ran from January to March 2003. Staff who used the system were asked to complete an evaluation form at the end of their shift. Overall, 148 forms were returned, mainly from eight staff (two senior house officers, two nurse advisers, two practice managers, and two general practitioners) who between them had accessed the system on more than 260 occasions. The senior house officers used the system most often. They had been partially funded by the project for the purpose of evaluation so they became expert, and fellow clinicians used them to access patient details.
For social services access was restricted to patients' personal details, their registered general practitioner, and the social service record. Health service staff were allowed more access to the social services record and could access details of patients and residential or non-residential care. The social care plan is not held electronically and was therefore not available for the pilot.
As only three practices participated in the pilot, there were numerous occasions when no general practice record was available for a patient in hospital. Patient records were found on only 47% of attempts. Even when records were found, they did not always contain useful information. Only 20% of forms reported finding the information being sought, but the information was generally thought reliable. Only in a few cases relating to social services records was it necessary to verify the information from other sources.
Box 1: Staff using the central Hampshire electronic health record
31 clinical staff from three general practices
Winchester Hospital's accident and emergency department
Winchester Hospital's emergency medical assessment unit
Hampshire NHS Direct
Hampshire out of hours social services
Summary points
The central Hampshire electronic health record pilot project was valued by clinical staff for supporting emergency care
As the pilot did not cover all providers, however, its clinical value was limited
The electronic record was used to create a database to analyse processes of care across organisational boundaries
Identified issues were coding systems, data quality, and the availability of analytical skills
These issues need to be addressed locally and nationally
If electronic records are to replace data collection processes within the NHS, significant investment and clinical leadership are required
Clinical staff were asked to rate the usefulness of records for decision making on a scale of 1-5 (no use to very useful). Records were found in 38% of cases and their usefulness recorded: 54% scored ≥ 2 and 20% scored ≥ 4. Even when useful information was not found, over 90% of staff said they would still use the system.
Comments were recorded. These were uniformly positive, while recognising the limited coverage of the pilot (box 2). On several occasions the information was thought to be helpful, but in many instances other sources or more detail would have been of benefit. Greater familiarity with the system led to realistic expectations of what information was available and how it could best be used.
The evaluation showed that linking records from different systems was possible and could be clinically useful. Although the electronic health record could not be relied on to contain relevant clinical information, when it did, this was appreciated and on occasion was thought to be helpful. The lack of free text in the general practice record was, however, problematic, as only Read coded data were available to the central Hampshire electronic health record. Accuracy and completeness of coding depend on how data are entered, and information may be missing or only partially coded.
The pilot also assessed the potential of the electronic health record to supply data for analysis of the processes of care. Data feeds were sent to a database, and the information was then assessed for usefulness.
Anonymisation and coding
By the end of the pilot the database held some information on 96 578 people and contained 4.5 million discrete records. Not all patients had records from each source (table 1).
To ensure that records could be linked but that the identity of patients was untraceable, NHS numbers were encrypted to a unique identifier. The encryption algorithm was maintained securely by the database administrator.
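The paper does not describe the pilot's actual encryption scheme. As a minimal sketch only, a keyed hash (HMAC) is one common way to derive a stable but untraceable identifier; the key and NHS numbers below are invented, and the key stands in for the algorithm held securely by the database administrator:

```python
import hashlib
import hmac

# Hypothetical secret key, standing in for the algorithm maintained
# securely by the database administrator
SECRET_KEY = b"held-only-by-the-database-administrator"

def pseudonymise(nhs_number: str) -> str:
    """Derive a stable pseudonym from an NHS number.

    The same NHS number always yields the same identifier, so records
    from different feeder systems can still be linked, but without the
    key the mapping cannot be reversed to identify the patient.
    """
    return hmac.new(SECRET_KEY, nhs_number.encode(), hashlib.sha256).hexdigest()
```

Because the function is deterministic, the same patient's hospital, practice, and social services records all carry the same pseudonym, which is what makes linkage possible without exposing identity.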
Coded data were available from several sources. To ease analysis, these data were translated into a single coding system wherever possible. As the general practice data were coded in Read version 2, this was initially chosen as the standard. Translation tables were set up to map the codes used by the hospital pathology and pharmacy systems and the drug codes used by the EMIS system (table 2). However, hospital diagnoses and procedures use codes from the international classification of diseases, 10th revision (ICD-10) and Office of Population Censuses and Surveys 4 (OPCS4), which have fewer codes than the equivalent areas in Read version 2. To reduce the possibility of error, Read diagnosis and procedure codes were therefore mapped to ICD-10 and OPCS4 rather than the reverse.
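The translation tables themselves are not reproduced here. The sketch below, with invented codes rather than real Read or ICD-10 terms, illustrates why the direction of mapping matters: going from the finer-grained system to the coarser one is many-to-one, so no code has to be guessed.

```python
# Hypothetical many-to-one translation table; the codes are invented
# for illustration and are not real Read v2 or ICD-10 codes
READ_TO_ICD10 = {
    "ReadA1": "ICD-X1",
    "ReadA2": "ICD-X1",  # two finer Read codes collapse to one ICD-10 code
    "ReadB1": "ICD-X2",
}

def to_icd10(read_code: str) -> str:
    # Surface unmapped codes for review rather than passing them
    # through silently or guessing a target code
    return READ_TO_ICD10.get(read_code, "UNMAPPED")
```

Mapping in the opposite direction (one coarse ICD-10 code to several candidate Read codes) would force an arbitrary choice at each step, which is the possibility of error the pilot sought to reduce.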
Box 2: Comments by users of central Hampshire electronic health record
[The electronic record] “was used to check whether a patient's LBBB was new or old. Records (a discharge summary) showed previous LBBB was noted and the decision not to give streptokinase was made ‘very useful!’”
“Was able to use system to support history given by patient, including important negatives”
“Old A&E discharge summary was helpful, although formal radiology reports would have been useful here”
“Mainly minor seen on shift which can be managed without needing EHR. All in all can be very useful—especially if more sources allow more information on system. In some instances, even now, its use affected immediate management of potentially very ill patients”
“Social services documents were often present but have no info (other than details). Why is the patient known to SS?”
“Tried to find patient that a colleague was seeing who had an extensive past medical history. Found patient, unfortunately no details were on system”
“Free text to support GP notes would be useful”
“Currently, starting to use less—selecting use for more appropriate patients—usually ‘majors’/incoherent patients. If more people/departments put more info, then a greater proportion of patients will be on system”
Examples of analyses
Analysis of the processes and outcomes of care was undertaken within and between organisations. The results were fed back to teams of clinicians and the clinical governance processes within each organisation. Because the completeness of the data could not be guaranteed, the details of the analyses were kept confidential to the organisations.
The analyses of these linked datasets can be used for purposes such as clinical governance (examining the quality of control of patients receiving long term treatment or outcomes after treatment), performance management (comparing the use of health service resources by general practices and hospitals), and supporting needs based commissioning (comparing the needs of the population with the availability of health services).
As the data from primary and secondary care were linked, it was possible to explore the outcomes of hospital care after discharge—for example, postoperative wound infections recorded by general practitioners of patients discharged after elective surgery (figure). Fourteen such infections were noted by one practice; eight in the 260 patients who had had surgery at Winchester and Eastleigh Healthcare Trust. Only one of these infections was noted in the trust's data, which is not surprising as most of these infections appear after discharge.
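As a minimal sketch of this kind of cross-boundary analysis (the patient identifiers below are invented), linking the two datasets on the shared pseudonymous identifier reveals outcomes that neither source shows on its own:

```python
# Hypothetical pseudonymous identifiers from two feeder systems
trust_elective_surgery = {"p01", "p02", "p03", "p04"}  # hospital discharge records
gp_wound_infection = {"p02", "p04", "p09"}             # general practice records

# Post-discharge wound infections after elective surgery become
# visible only once the two sources are linked on the identifier
post_op_infections = trust_elective_surgery & gp_wound_infection
```

The hospital dataset alone would show almost none of these infections, matching the pilot's finding that only one of the eight infections appeared in the trust's own data.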
The linked database also enabled the exploration of rates of use of health service resources by the patients registered with the three practices (table 3). Because of uncertainties about data coverage it was not possible to draw conclusions. In principle, differences in practices' use of resources should help primary care trusts to identify how to manage resources in both primary and secondary care.
Constraints on analysis of electronic data
Other analyses were carried out and discussed with clinical staff (box 3). These show the potential of the electronic health record to support clinical governance and performance management analyses, particularly across organisational boundaries. However, this was a complex exercise with major shortcomings.
Anonymised records are a barrier to the validation and application of analyses. To identify the origin of anomalous results it was necessary to go back to the source dataset. This had to be done by the custodians of each element of the linked record and, as the patient identifiers had been encrypted, required laborious work. Good practice in data analysis would include random checks against source data to ensure that no errors have crept in during processing; again, this is difficult with anonymised data.
A valuable use of the data was to provide staff with an opportunity to review cases and to learn from the causes of a particular process or outcome of care, but this is more difficult with anonymised records. For example, several patients who rang NHS Direct were admitted to hospital within 24 hours, but a significant number of these had not received advice to seek urgent medical help. Without the ability to track back to patient records, the educational feedback to staff is lost.
Box 3: Analyses carried out and discussed with clinical staff
Contacts of patients with health service after a call to NHS Direct, comparing advice received and actions taken
Monitoring control of patients with diabetes from general practice, hospital, and laboratory records
Compliance with NSF drug recommendations for patients discharged after a myocardial infarction
Electrolyte levels in patients receiving diuretics, both as inpatients and in the community
Compliance with drug level monitoring and renal function monitoring in patients receiving gentamicin
International normalised ratios (INR) for patients receiving long term warfarin
General practitioner consultation rates for patients attending new outpatient appointments
General practitioners in the pilot were willing to share the coded part of the record with clinical staff but not the free text, which they thought potentially sensitive and confidential. This caused difficulties for both clinical use and analysis. In some of the general practice systems, blood pressure measurements were recorded as Read codes, but the values were recorded in free text, so the data were unavailable for clinical and statistical use. Similarly, there is no Read code for not being pregnant; several general practitioners use the Read code for pregnant and record "not" in free text, which creates difficulties when the free text is separated from the code. Paradoxically, the development of a more complete set of clinical terms, SNOMED CT (Systematised Nomenclature of Medicine: Clinical Terms), which will enable more complete coding, may make this more difficult because sensitive and confidential material may be included in the coded part of the record. General practitioners may become concerned about the transfer of data unless there is a way of identifying sensitive information and placing it securely.
Complexity of records
Clinical records can contain important qualifying information. For example, the drug records of some inpatients were annotated to show when doses had not been given, and the samples for some pathology tests were noted as unsuitable. Analysts need to be aware of the totality of the clinical record.
The database contains multiple tables with multiple events for each patient. Linkage of events into an episodic sequence is useful for comparing actual and expected processes and outcomes of care. Since episodes of illness and care may be consecutive or concurrent and of variable length, there are many ways in which events can be linked, and these variations will affect the analysis. This is a complex task, and few places have the skills to undertake it adequately. Episode groupers have been used in other health systems, and if the NHS is to benefit, these need to be made widely available.2
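A minimal sketch of one way events might be linked into episodes, not the method of any particular episode grouper: start a new episode whenever the gap between successive events exceeds a threshold. The 28-day cut-off is an illustrative assumption; real groupers apply much richer clinical rules, which is one source of the variation noted above.

```python
from datetime import date, timedelta

def group_episodes(event_dates, max_gap=timedelta(days=28)):
    """Group a patient's event dates into episodes of care.

    Events are sorted, and a gap longer than max_gap between
    successive events starts a new episode. The threshold is an
    illustrative assumption, not a clinical standard.
    """
    episodes = []
    for d in sorted(event_dates):
        if episodes and d - episodes[-1][-1] <= max_gap:
            episodes[-1].append(d)  # continue the current episode
        else:
            episodes.append([d])    # start a new episode
    return episodes

# Two contacts in January form one episode; a contact in March starts another
events = [date(2003, 1, 2), date(2003, 1, 10), date(2003, 3, 15)]
```

Changing `max_gap` changes the episode boundaries and hence every downstream count, which is precisely why inconsistent local grouping rules would undermine comparability.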
Problems with data
Multiple coding systems entail a significant amount of work to make data homogeneous. This task would be wasteful if undertaken individually and risks the development of inconsistent code maps and analyses. In addition, as coding schemes improve more sophisticated and complex mapping tables will be needed.3 For consistency, this needs to be undertaken nationally and will require dissemination of training and a responsive service for dealing with analysts' queries.
Although the records contained more clinical detail than is normally available in NHS datasets, many of the data items in the datasets for cancers and the National Service Frameworks were not present or were inconsistently captured. An analysis of the potential for capture of the cancer minimum dataset, for example, showed that less than one third of the data items would be available from the central Hampshire electronic health record.4 Without a process for enabling clinicians to complete structured clinical records (with standard definitions) for patients, monitoring of national standards will not be possible. Read codes for diabetes have shown wide variations.5 Although most general practice computer systems standardise the collection of clinical data, this is not always so for chronic diseases.6 Standardising data collection is also possible in many hospital computer systems, but most are used for retrospective data entry and not clinical records. Although the facility to collect structured data is envisaged by the NHS Care Record Service, its use depends on a major change in clinical practice by most hospital clinicians.7
Data from overlapping sources may be of varied usefulness to organisations in a pilot exercise. For the general practices, only data from Winchester Hospital were available. For Winchester Hospital, the general practice record was only available for about 15% of hospital activity. To obtain the best potential use of primary and secondary care analyses requires a comprehensive electronic health record for all patients within a large population. Even then, unless there is access to the records of patients cared for in regional and national centres, some records may have gaps, and it is not clear if these will be covered by the proposals within the national spine of the NHS Care Record Service.
Problems with scaling up the electronic health record
Live clinical record systems are dynamic. Not only can software upgrades change the data structures, but data quality initiatives, training, and changes in staff alter the quantity and quality of information recorded, resulting in the structure and meaning of data changing over time. This requires adaptation of the interface software whenever feeder systems are modified and close collaboration with data quality coordinators and clinical staff to interpret whether changes in analyses are due to recording artefacts or clinical behaviour. This was possible within a small pilot where the analyses could be understood by the local health community. However, the interpretation of data over a wider area is likely to be more difficult. The development of a uniform standard of training across trusts and general practices and maintaining a log of changes to the meaning of the data is not impossible, but it will require a high level of clinical leadership and commitment of significant resources.
The central Hampshire electronic health record pilot project successfully brought together clinical records from different systems to support clinical care and to enable analysis of data from clinical records. The main limitation of the pilot was the lack of total coverage, which would be resolved by a wider implementation.
From the perspective of the analysis of the data, there are more concerns and difficulties. These will need to be considered and properly resourced to ensure reliable and complete information.8
This ambition is still being pursued through the National Patient Record Analysis Service, which proposes the provision of existing and future central returns by extraction from the NHS Care Record Service. The central Hampshire electronic health record pilot project highlights the difficulties of many of the issues raised in the National Patient Record Analysis Service strategic outline case, and it will be important for that project to ensure that the necessary work is undertaken, both centrally and locally.9 This relates in particular to the implementation of systems that encourage consistent and complete clinical records; the training, development, and incentivisation of clinical staff to enable them to record consistently in these systems; the development of analysts with a good understanding of the clinical process; and provision of analytical tools that enable the management of complex episodic data in a reliable and reproducible way.
For their participation and support in this project we thank Paul Kelly; Clive Davis; Roger Greenwood; the partners of Charlton Hill, Watercress, and Stokewood practices; Winchester and Eastleigh Healthcare Trust's Information Management and Technology department, accident and emergency department, and Emergency Medical Assessment Unit; Hampshire NHS Direct; Hampshire Social Services; and Hampshire Ambulance Trust. Kash Patel, Rob Waugh-Bacchus, and Gina Pelletier provided project management and information technology support. EMIS drug codes to Read version 3 tables were supplied by EMIS.
Contributors HS undertook the analyses of the clinical governance database and wrote the paper; he will act as guarantor for the paper. The guarantor accepts full responsibility for the conduct of the study, had access to the data, and controlled the decision to publish. TA undertook the evaluation of the clinical use of the central Hampshire electronic health record and wrote the project reports. MB programme managed the pilot project. CH chaired the project board and steered the development of the project.
Funding The project was funded by the electronic records development and implementation programme (ERDIP) of the NHS Information Authority.
Competing interests None declared.
Ethical approval Ethical approval was provided by the Caldicott guardians of the participating organisations and from discussions with the General Medical Council, the Local Medical Committee, and the British Medical Association.