Commentary: measures of early postoperative mortality

BMJ 1994; 309 doi: https://doi.org/10.1136/bmj.309.6951.365 (Published 06 August 1994) Cite this as: BMJ 1994;309:365
- M McKee,
- C Sanderson
The traditional image of the paternalistic surgeon failing to offer patients an informed choice about treatment based on knowledge of the risks and benefits is disappearing rapidly. Driven by growing consumerism and political pressure, there is an increasing volume of information to help patients make choices that are appropriate for them. Examples range from patient information leaflets, through provision of specialist advisers, to interactive video disks.1
If patients are to be enabled to make truly informed choices the information that they receive must be accurate. Unfortunately, much of what is published about the risk of postoperative death may be misleading. Textbooks frequently quote case series published by those working in centres of excellence and relating to patients who may be quite atypical.2 Furthermore, published papers may be subject to considerable bias, ranging from the decision to report a series to acceptance by a journal.3
There are two major problems facing those who wish to measure population based postoperative death rates in the United Kingdom. The first is the absence of data on what happens to patients after they leave hospital. The second is that those who undergo elective surgery may not be typical of the general population, even after allowing for age. For example, the requirement, common to many procedures, that the patient be fit for anaesthesia may exclude patients with comorbidity.
Assumptions and limitations
Various solutions to the first problem have been proposed, such as the use of 30 day postoperative death rates4 or disease specific time windows.5 However, these do not solve the second problem. Seagroatt and Goldacre have proposed an approach in which they examine the monthly profile of death over the year after surgery.6
Though this approach represents a clear improvement on in hospital fatality rates, it still suffers from some limitations. The results depend on two crucial assumptions. The first is that the level of excess surgical deaths - that is, those that would not have occurred in the absence of surgery - may be indicated by the rate for an initial period of 30 days. There are two problems with this. Firstly, the results may be highly sensitive to the length of the period chosen. This will be true if, for example, the rate of surgical deaths is highest in the first week or two of the postoperative period and then tails off. Secondly, if the period chosen is not long enough the number of surgical deaths on which the calculation is based will be an underestimate. The aim would be to choose the shortest period that does not exclude material numbers of surgical deaths. In this case the period was set at 30 days on the basis of an inspection of the data, but there is at least a suggestion that fatality rates after some interventions continue to decline in the 90-364 day reference period. This could be validated by inspecting death certificates and, if necessary, medical records.
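The sensitivity to the cut-off can be illustrated with a sketch. The daily death counts below are entirely hypothetical (not from the paper): they assume surgical deaths cluster in the first fortnight and then tail off, with a flat background thereafter. The excess is estimated, as described above, by subtracting the deaths expected from the 90-364 day reference rate.

```python
# Hypothetical daily death counts for one year after surgery:
# 10/day in week 1, 4/day in week 2, 2/day to day 29, then 1/day.
daily_deaths = [10] * 7 + [4] * 7 + [2] * 16 + [1] * 335

# Background rate estimated from the 90-364 day reference period.
background_daily = sum(daily_deaths[90:365]) / 275

# Excess deaths = observed deaths in the window minus those expected
# at the background rate, for several candidate cut-offs.
for window in (14, 30, 60):
    observed = sum(daily_deaths[:window])
    excess = observed - background_daily * window
    print(window, round(excess, 1))  # 14 -> 84.0; 30 and 60 -> 100.0
```

In this artificial example a 14 day window misses part of the tail and understates the excess, while 30 and 60 day windows agree, which is the kind of inspection that would justify a 30 day cut-off.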
The second assumption is that, in the absence of surgery, surgical patients would have experienced the death rate observed during the 90-364 day reference period. In practice, surgery could precipitate a death within 90 days that would otherwise have occurred later in the year. If so, the reference rate will underestimate the true background rate. This would be expected when deaths due to an intervention are fairly common, as with some emergency surgery.
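The arithmetic behind both assumptions can be made explicit. The function and figures below are hypothetical, not taken from the paper; they simply show how an excess-death estimate follows once the 90-364 day period is taken to reflect the background rate.

```python
def excess_deaths(deaths_0_30, deaths_90_364, persons_at_risk):
    """Estimate excess surgical deaths in the first 30 days.

    Assumes (as the method described above does) that the 275 day
    reference period, days 90-364, reflects the death rate the cohort
    would have experienced without surgery.
    """
    # Daily background death rate per person from the reference period
    daily_background = deaths_90_364 / (persons_at_risk * 275)
    # Deaths expected in the first 30 days at the background rate
    expected_0_30 = daily_background * 30 * persons_at_risk
    return deaths_0_30 - expected_0_30

# Hypothetical cohort: 10 000 patients, 60 deaths in the first 30 days,
# 110 deaths during days 90-364.
print(excess_deaths(60, 110, 10000))  # 60 - 110 * 30/275 = 48.0
```

If surgery merely brought forward deaths that would otherwise have fallen in the reference period, the 110 would be too low, the expected figure too small, and the estimated excess of 48 too large, which is precisely the bias described above.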
Value of measures
What this method can do is show where there is no excess risk. If its assumptions are valid it provides an indicator of how risky some procedures are overall compared with others. What it does not do is answer the patient's question of “to what extent is the operation likely to alter my chances of surviving the next five years?” This is partly because the mortality ratio proposed does not take the form of an indicator of relative risk. And it is partly because it is unclear how far an overall figure is relevant to people of different ages and degrees of comorbidity,7 a common problem in attempts to inform decision making.8
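The distinction can be sketched numerically. The figures below are hypothetical; the ratio computed is of the kind discussed above, the early death rate per day relative to the reference rate per day, and it is evidently not the change in five year survival that a patient would want.

```python
def mortality_ratio(deaths_0_30, deaths_90_364):
    """Ratio of the 0-30 day death rate to the 90-364 day reference
    rate, per person-day; the cohort denominator cancels out."""
    rate_early = deaths_0_30 / 30        # deaths per day, days 0-29
    rate_reference = deaths_90_364 / 275  # deaths per day, days 90-364
    return rate_early / rate_reference

# Hypothetical figures: 60 deaths in the first 30 days, 110 during
# days 90-364. A ratio of 5 says early mortality is fivefold the
# background rate; it says nothing about five year survival with
# versus without the operation.
print(mortality_ratio(60, 110))  # (60/30) / (110/275) = 5.0
```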
Seagroatt and Goldacre's paper provides further evidence of the difficulty of attributing mortality to a particular intervention by using routine statistics. This type of analysis can be performed only where there is a well managed, high quality record linkage scheme and coverage of a large enough population to yield stable figures. Incomplete follow up of patients can give very misleading results.9
The demise of regions and the lack of any clear vision about the information function in the new regional offices raise important questions about the extent to which this kind of work will be possible in future. Indeed, the departure of key information staff from regions because of uncertainty about the future is already making access to regional data more difficult.
Finally, this approach compares death rates from different procedures, not from different hospitals. The intention is to use data on death rates to inform patient choice about whether or not to undergo a procedure. This is entirely different from the publication of crude league tables of death rates by hospital or surgeon that, by providing an incentive to avoid treating high risk patients or to collect data in a way that gives favourable results, would probably mislead.10