Commentary: Better data needed for analysis
BMJ 1994;309:34 doi: https://doi.org/10.1136/bmj.309.6946.34 (Published 02 July 1994)
Much has been written about fundholding, but little is based on good research or proper analysis. It is therefore important to look at substantial pieces of work which might be more than a string of anecdotes or selected statistics.1,2 The work from North West Thames region is a brave and imaginative attempt to make the most of insufficient and inadequate data, but because the conclusions are based on so many tenuous assumptions it is open to criticism.
Assumptions on costs and prices
Many health authorities do have separate costs for some fundholding procedures that are purchased on a cost per case contract rather than a block contract, so North West Thames starts at a disadvantage in having only rudimentary data available for analysis. Even so, specialty bed costs are not an appropriate measure of the costs of fundholding procedures, which tend to incur higher costs, such as theatre use, in the first few days of admission than other types of admission do. Patients admitted for fundholding procedures also tend to have shorter average lengths of stay and are often treated as day cases. The average cost per day of a fundholding activity will therefore be higher than the average cost per day for all episodes of care, and the cost of fundholding activities to health agencies has consequently been underestimated. The costs of outpatient fundholding activity are also likely to differ from the average.
An essential test of the robustness of the assumptions and methods in articles of this type is to see whether the data on patient numbers and the sum of the costs calculated agree with the total picture described for each hospital in the audited annual cost returns. Because of the sparseness of reliable non-fundholding data, the authors have been unable to apply this acid test to their work.
The paper suffers from one general problem: throughout, the authors have used one set of assumptions for non-fundholders and another for fundholders. This is because of the different ways in which the budgets for fundholders and health agencies are set. Unfortunately, two wrongs do not make a right, and sensitivity analysis becomes mandatory when so many assumptions are used.
The wide variation in results for each health agency raises serious questions about the validity of the data. The authors have made no attempt to compare the historical patterns of care of fundholders and non-fundholders; such differences, if real, could explain their results. Despite these reservations, it must be right that a fairer distribution of funds is more likely if both fundholder and non-fundholder budgets are set using the same formula and the same sources of data.
How did fundholding get into this mess?
One of the key decisions politicians must have faced when choosing to introduce fundholding was whether to bring the scheme in before a reliable information system was available. Presumably the political advantage of launching fundholding at the same time as the other NHS reforms was felt to outweigh the risk of the scheme foundering because of inadequate information. The paper highlights this inadequacy and shows that little has been done to improve the situation over four years.
If it is confirmed that the scheme produces an inequity of the size described in this paper, there is a real possibility that the scheme will founder. If the overfunding is corrected, fundholding general practitioners may leave the scheme; if it is not corrected, inequity will become institutionalised, and inequity within the NHS seems to be something the public will not tolerate.3 As a recent article in the Economist pointed out, “Not only is the income gap between rich and poor widening, so is the health gap. A National Health Service was supposed to make this unlikely, if not impossible. What has gone wrong? And what can be done about it?”4