Effectiveness of appropriately trained nurses in preoperative assessment: randomised controlled equivalence/non-inferiority trial
BMJ 2002;325:1323 doi: https://doi.org/10.1136/bmj.325.7376.1323 (Published 7 December 2002)
- Helen Kinley, researchera,
- Carolyn Czoski-Murray, researcherd,
- Steve George, reader in public healthb,
- Chris McCabe, senior lecturer in health economicsd,
- John Primrose, professor of surgerya,
- Charles Reilly, professor of anaesthesiae,
- Richard Wood, professor of surgeryf,
- Paula Nicolson, senior lecturer in psychologyd,
- Caroline Healy, psychologistd,
- Susan Read, senior lecturer in nursing researchg,
- John Norman, professor of anaesthesiac,
- Ellen Janke, specialist registrar in anaesthesiac,
- Hameed Alhameed, specialist registrar in anaesthesiac,
- Nick Fernandes, specialist registrar in anaesthesiae,
- Eileen Thomas, executive director of nursingh
- on behalf of the OpCheck Study Group
- aUniversity Surgery, University of Southampton School of Medicine, Southampton General Hospital, Southampton SO16 6YD
- bHealth Care Research Unit, University of Southampton School of Medicine
- cShackleton Department of Anaesthesia, University of Southampton School of Medicine
- dUniversity of Sheffield School for Health and Related Research, Sheffield S1 4DA
- eUniversity of Sheffield School of Medicine, Division of Clinical Sciences, Department of Anaesthesia, Royal Hallamshire Hospital, Sheffield S10 2RX
- fUniversity of Sheffield School of Medicine, Division of Clinical Sciences, Section of Surgery, Northern General Hospital, Sheffield S5 7AU
- gUniversity of Sheffield School of Nursing and Midwifery, Sheffield S3 7ND
- hPortsmouth Health Care NHS Trust, Portsmouth PO3 8LD
- Correspondence to: J Primrose
- Accepted 15 August 2002
Objective: To determine whether preoperative assessments carried out by appropriately trained nurses are inferior in quality to those carried out by preregistration house officers.
Design: Randomised controlled equivalence/non-inferiority trial.
Setting: Four NHS hospitals in three trusts. Three of the four were teaching hospitals.
Participants: All patients attending for assessment before general anaesthesia for general, vascular, urological, or breast surgery between April 1998 and March 1999.
Intervention: Assessment by one of three appropriately trained nurses or by one of several preregistration house officers.
Main outcome measures: History taken, physical examination, and investigations ordered. Measures evaluated by a specialist registrar in anaesthetics and placed in four categories: correct, overassessment, underassessment not affecting management, and underassessment possibly affecting management (primary outcome).
Results: 1907 patients were randomised, and 1874 completed the study; 926 were assessed by house officers and 948 by nurses. Overall, 121/948 (13%) assessments carried out by nurses were judged to include underassessment possibly affecting management, compared with 138/926 (15%) of those performed by house officers. Nurses were judged to be non-inferior to house officers in assessment, although there was variation among them in the quality of history taking. The house officers ordered considerably more unnecessary tests than the nurses (218/926 (24%) v 129/948 (14%)).
Conclusions: There is no reason to inhibit the development of nurse led preoperative assessment provided that the nurses involved receive adequate training. However, house officers will continue to require experience in preoperative assessment.
Reform of postgraduate medical training and the UK junior doctors' hours initiative have significantly reduced the amount of junior doctor time available for servicing the requirements of the NHS.1–3 Together with the drive for efficiency savings, these changes have increased the pressure to substitute non-medical staff for preregistration house officers.4
Studies of the performance of nurses in preoperative assessment have been limited in size and scope. The only trial in general surgery was not designed to show equivalence and had only 100 participants.5–9 We carried out a randomised controlled equivalence/non-inferiority trial of the effectiveness of appropriately trained nurses and preregistration house officers carrying out assessment in preoperative assessment clinics.
The trial was performed on four hospital sites in three NHS trusts. Patients were recruited from all those attending for assessment before general anaesthesia for general, vascular, urological, or breast surgery. We compared the competence of appropriately trained nurses and preregistration house officers in history taking, physical examination, and ordering of tests. Performance in each was scored as "correct," "overassessment," "underassessment not affecting perioperative management," or "underassessment possibly affecting perioperative management." In the case of tests ordered, both underassessment and overassessment could occur in the same patient.
One of two specialist registrars in anaesthesia examined each patient after the nurse or house officer. They carried out the initial assessment of performance by comparing their own assessment with that of the nurse or house officer. All assessments evaluated as underassessment that could affect management and an equal number of assessments sampled from the other three categories were reviewed by one of two consultant panels to decide on the fairness of the decision of the specialist registrar.
The study was approved by the three relevant local ethics committees.
Three nurses were involved in this study, one at each of the study sites (one nurse covered two hospitals). They undertook the anatomy, physical examination, and test ordering modules of taught masters courses in advanced practice or equivalent, a level of training comparable with that of nurses undertaking this role at various sites across the United Kingdom, although most such nurses do not undertake physical examination. We did not assess the appropriateness of the level of instruction. Nurses were also supervised by a mentor, who approved a learning logbook at the completion of the learning process. A one month pilot recruitment phase identified logistical problems and established a basic level of experience of assessment in the clinic setting. Preregistration house officers received no training in preoperative assessment beyond that received at medical school.
Recruitment and randomisation
This was a block randomised study (four patients to each block) with separate randomisation at each of the three centres. Blocks of four cards were produced, each block containing two cards marked "nurse" and two marked "house officer." Each card was sealed in an opaque envelope, and each block was shuffled and placed in a box. This process was repeated until more than the required number of envelopes for each centre had been prepared. The box was then given to a third party at each centre, who, blind to the researcher, removed 1, 2, 3, or 4 envelopes and placed them at the end of the sequence. Finally, the envelopes were numbered consecutively before use.
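The allocation procedure described above can be sketched in code as follows (a minimal illustration; the function and variable names are our own, and the trial itself used physical cards and sealed envelopes rather than software):

```python
import random

def make_allocation_sequence(n_blocks, seed=None):
    """Blocked randomisation: each block of four holds two 'nurse' and
    two 'house officer' allocations, shuffled independently per block."""
    rng = random.Random(seed)
    sequence = []
    for _ in range(n_blocks):
        block = ["nurse", "nurse", "house officer", "house officer"]
        rng.shuffle(block)
        sequence.extend(block)
    # Third-party step: move a random 1-4 envelopes from the front of the
    # box to the end, so the researcher cannot infer block boundaries.
    offset = rng.randint(1, 4)
    return sequence[offset:] + sequence[:offset]
```

Moving a random number of envelopes to the end of the sequence preserves the overall balance while concealing where each block of four begins.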
Patients received an information leaflet with their clinic appointment letter. They were invited to participate and, if they agreed, to consent to randomisation at the assessment clinic. At the clinic the next consecutively numbered envelope was opened and assessment proceeded as appropriate. Preregistration house officers involved in this study greatly outnumbered the nurses, and patients randomised to that arm could be processed faster. To avoid excessive delays we halted the recruitment and randomisation process when more than two patients were waiting to see a nurse.
The main objective of an equivalence trial is to show that the response to two or more treatments differs only by an amount that is clinically unimportant. This is usually demonstrated by showing that the true treatment difference is likely to lie between a lower and an upper equivalence level of clinically acceptable differences, often specified as 80% and 125% of a control value.10 A non-inferiority trial is a modification of an equivalence trial with the primary objective of showing that the response to an intervention is not clinically inferior to that of a comparator.
Our trial examined whether nurses performed worse than house officers and is thus a non-inferiority trial. A clinically important difference in performance was defined as 25% more than the control value, in this case the event rate for underassessment possibly affecting perioperative management among house officers. If the 95% confidence interval around the observed difference in event rates between the house officers and the nurses lay completely above the clinically important difference, then the performance of the nurse was judged to be inferior to that of the house officer; if it lay entirely below the clinically important difference it was judged to be non-inferior; if it straddled the clinically important difference, the result was judged uncertain. We calculated the confidence intervals around differences using the confidence interval analysis programme.11 We first compared numbers of cases in which any of history taking, examination, or test ordering were judged potentially to affect perioperative management in a case. Subsequently analyses were undertaken separately for history taking, examination, and test ordering.
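The three-way decision rule above can be expressed as a short function (a sketch using a simple Wald interval; the trial used the confidence interval analysis program, so its exact limits may differ slightly, and the function name is our own):

```python
from math import sqrt

def noninferiority_verdict(events_new, n_new, events_ctrl, n_ctrl, margin, z=1.96):
    """Compare the event-rate difference (new minus control) with a
    non-inferiority margin using a Wald 95% confidence interval.
    A positive difference means the new arm has more adverse events."""
    p_new, p_ctrl = events_new / n_new, events_ctrl / n_ctrl
    diff = p_new - p_ctrl
    se = sqrt(p_new * (1 - p_new) / n_new + p_ctrl * (1 - p_ctrl) / n_ctrl)
    lower, upper = diff - z * se, diff + z * se
    if lower > margin:
        return "inferior"      # whole interval above the margin
    if upper < margin:
        return "non-inferior"  # whole interval below the margin
    return "uncertain"         # interval straddles the margin
```

Applied to the trial's primary outcome (121/948 nurse v 138/926 house officer assessments, margin 3.7%), this returns "non-inferior", in line with the reported conclusion.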
We compared individual problems that could have affected perioperative management and unnecessary test ordering between the two trial arms by calculating relative risks and 95% confidence intervals.
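A relative risk and its 95% confidence interval can be computed on the log scale (a standard Wald approach, shown here for illustration; the paper does not state which method its software used):

```python
from math import sqrt, log, exp

def relative_risk_ci(a, n1, b, n2, z=1.96):
    """Relative risk (arm 1 v arm 2) with a log-scale Wald 95% CI.
    Note the null value for a relative risk is 1, not 0."""
    rr = (a / n1) / (b / n2)
    se_log = sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    return rr, rr * exp(-z * se_log), rr * exp(z * se_log)
```

For example, for the primary outcome counts (121/948 v 138/926) the interval spans 1, consistent with no significant difference between the arms.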
We established the expected event rate in the control arm during the pilot phase. Forty patients at each of two sites (Southampton and Sheffield) were assessed by preregistration house officers. At each site six out of 40 (15%) were judged to have been underassessed to an extent that might affect perioperative management. We specified that the nurses should not exceed this 15% by more than 25% of its value (3.75%). As we were concerned primarily with whether the nurses would prove to be inferior to the house officers we used a one sided calculation.10 We specified α=0.05 (equivalent to 0.1 in a two sided calculation) for 80% power (β=0.2). We calculated that we required 2250 patients (1125 in each arm of the trial).12
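The reported sample size is consistent with a standard normal-approximation calculation for comparing two proportions under the assumption of equal true event rates (a sketch with our own naming; the paper cites its own reference for the method actually used):

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(p_ctrl, margin, alpha_one_sided=0.05, power=0.80):
    """Patients per arm needed to rule out an excess of `margin` over
    the control event rate, assuming equal true rates in both arms."""
    z_a = NormalDist().inv_cdf(1 - alpha_one_sided)  # one sided alpha
    z_b = NormalDist().inv_cdf(power)
    variance = 2 * p_ctrl * (1 - p_ctrl)
    return ceil((z_a + z_b) ** 2 * variance / margin ** 2)
```

With a control rate of 0.15 and a margin of 0.0375 this gives roughly 1120 per arm, which the trial rounded to 1125 (2250 patients in total).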
Approximately 31 000 elective surgical procedures are carried out annually across the three trial centres. We sampled 354 clinics and approached 2070 patients. One hundred and fifty five refused to participate, and eight were excluded because they were unable to understand the trial information, leaving 1907 patients randomised, of whom we could evaluate 1874 (fig 1). Of these, 926 patients were allocated to assessment by house officers and 948 to assessment by nurses. Baseline characteristics were similar in both groups (table 1), although the case mix differed somewhat between centres. Among patients who could be evaluated, 1011 were recruited from Southampton, 627 from Sheffield, and 236 from Doncaster.
History, examination, and test ordering
In 259 cases history taking, examination, or test ordering was judged as underassessment possibly affecting management (table 2): 121/948 (12.8%) assessments by nurses and 138/926 (14.9%) assessments by house officers. The upper 95% confidence limit for the observed difference (1.1%) was less than the clinically important difference (3.7%), implying that appropriately trained nurses are no worse overall than preregistration house officers in assessing patients.
Assessment by consultants
The consultants reviewed all 259 cases in which assessment had been judged as underassessment possibly affecting management and an equal sample of other cases. Most judgments were conservative, cases being returned to the "correct assessment" category more often than not. We found no bias in judgments between the two groups. Changes brought about by the consultants' judgments made no difference to the trial results.
Separate analyses of outcome measures
Table 2 summarises the cases in which underassessment may have affected perioperative management. Non-inferiority in history taking was uncertain, the upper 95% confidence limit for the observed difference (3.2%) being more than the clinically important difference (1.4%). There was some heterogeneity between the nurses at the three centres, however: underassessment possibly affected management in 26/511 (5.1%) cases in Southampton, 18/319 (5.6%) in Sheffield, and 20/118 (16.9%) in Doncaster. No such differences were noted in examination and test ordering.
Over-ordering of tests
Table 2 also shows that house officers ordered nearly twice as many unnecessary tests as nurses. The upper 95% confidence limit for the observed difference is far less than the clinically important difference and in fact lies below an analogous limit based on a lower practical equivalence bound ((0.80 × house officer %) − house officer % = −4.7%).
Figure 2 shows accrual of patients by nurses for each month of recruitment. As Doncaster provided fewer patients than the other sites, the poor history taking there could be due to limited experience.
Table 3 shows problems missed during history taking and examination. Although there was a tendency for nurses to detect fewer cardiorespiratory problems, most of the confidence intervals for the corresponding relative risks encompass one (the null value for a relative risk). Nurses were significantly better at picking up non-cardiorespiratory problems at examination.
Table 4 details those tests that were not ordered and that might have affected perioperative management. There were no differences except for clotting function, though this may be a chance finding. Table 4 also details the tests that were ordered unnecessarily. House officers ordered significantly more tests, particularly ones that might be regarded as "routine" (urea and electrolytes, liver function tests, and haematology) but also echocardiography and clotting function tests.
We have shown that appropriately trained nurses are no worse than preregistration house officers in assessing patients preoperatively, although it might be argued that neither group performed particularly well. Patients face a one in seven chance of a house officer failing to detect something that might affect perioperative management and a one in eight chance of an appropriately trained nurse doing the same. Anaesthetists obviously cannot give up their role of being the final arbiter of a patient's fitness for anaesthesia.13
Although this trial did not reach its planned size, the likely effect of under-recruitment would have been to introduce wider confidence intervals around percentage differences between nurses and house officers, leading to uncertainty in terms of non-inferiority. This happened only in terms of history taking. Although the specialist registrars could not be blinded as to whether assessments were being performed by a nurse or a house officer, our expert panels could find no evidence of systematic bias. Low recruitment at one study site (only 236 of the 1874 patients in the study) meant that we did not achieve our desired sample size, but this does not explain the uncertainty. Rather, there was clear variation in the ability of appropriately trained nurses to take patient histories, one site being clearly different from the other two. The extent of this variation could be because we evaluated only three nurses, and ideally we would have used more. Our nurses differed from those providing most preoperative assessment in the United Kingdom, however, as they had been trained to examine the patient and not merely to take a history and order investigations. Training costs precluded using more nurses to fulfil this role. Low recruitment might still provide the explanation because it led to lack of practice in history taking. The nurse at Doncaster saw only 118 patients compared with 319 and 511 in the two other sites. This finding has implications as there will need to be not only specific training for this extended role but also an assessment of competence before a nurse takes up independent practice.
House officers ordered significantly more unnecessary investigations than appropriately trained nurses. Preoperative investigations in all the study centres were largely determined by protocol, and appropriately trained nurses adhered to protocol more than house officers. This has clear economic implications.
What is known already on this topic
Reform of postgraduate medical training and junior doctors' hours have reduced the amount of junior doctor time available for the requirements of the NHS
In many hospitals preoperative assessment has been taken over by non-medical staff, usually by appropriately trained nurses
What this study adds
Appropriately trained nurses perform no worse than preregistration house officers in the process of preoperative assessment
Variations in performance in nurses were similar to those commonly observed between preregistration house officers
House officers order substantially more unnecessary tests than nurses
With appropriate training nurses have the skills necessary to undertake preoperative assessment to the same level as preregistration house officers, though neither group performed particularly well in this study
For most hospitals in the United Kingdom the question is not whether it is optimal for doctors to perform preoperative assessment but how the gap left by their unavailability may be filled.4 There will not be enough house officers to carry out preoperative assessment, but some experience is necessary for their training. It is clear that they cannot be replaced entirely by nurses, even if this is seen as a role within which nurses could develop a career.14
We thank all those who undertook the expert panel assessments for their untiring enthusiasm. We also thank all those patients who agreed to take part.
Contributors: JP, SG, CMcC, CC-M, CR, RW, PN, ET, JN, and SR obtained funding for the study and agreed overall design. HK and CCM undertook routine coordination of the study. EJ, HA, and NF assessed the performance of house officers and nurses. CH coordinated expert panels. SG and JP undertook the analysis with assistance from CMcC, HK, and CC-M. All authors contributed to the writing of the paper. SG, JP, and CMcC wrote the final version with assistance from all the authors. JP, SG, and CMcC are guarantors.
Funding: National Coordinating Centre for Health Technology Assessment (NCCHTA). The views expressed are those of the authors alone.
Competing interests: None declared.