Seeing through the alcohol statistics haze. BMJ 2012;344 doi: https://doi.org/10.1136/bmj.e1273 (Published 22 February 2012). Cite this as: BMJ 2012;344:e1273
All rapid responses
Nigel Hawkes' article on alcohol statistics is a useful contribution to the debate on how we should measure the harm from alcohol consumption.
We need a reliable, stable measure of alcohol harm if we are to track changes over time and if we are accurately to measure the impact of policies designed to reduce the amount people drink.
Unfortunately, the alcohol-related hospital admissions figure, and the associated age standardised rate, as presently calculated from the hospital admissions data set, are entirely unsuitable for either of these purposes. The weightings attached to each admission to determine its contribution to the overall alcohol-related admissions statistic are derived from case control studies and other epidemiological research from a wide variety of settings. While this is an extremely clever manipulation of the available data set, it has proved totally unsatisfactory in measuring progress in reducing alcohol harm.
Hawkes mentions the vexed problem of coding creep; but even before we get to that issue there is the more fundamental problem that the statistic is very dependent on the propensity to admit. Thus it is very hard to be certain that variations in the statistic across regions and between local authorities cannot be explained by varying admission policies.
Thus, for example, everything else remaining constant, if a hospital developed an effective way of managing heart failure in the community and avoided a proportion of hospital admissions for this diagnosis, then there would be an apparent drop in the alcohol-related hospital admission rate for that area, even if the real problem of alcohol consumption remained unchanged or worsened. Conversely, if a district managed to solve the problem of binge drinking and avoided the weekly short-term admissions from this cause, the consequent availability of beds might lower the threshold for admission of other conditions such as heart failure, with no net change in the alcohol-related hospital admission rate.
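The arithmetic behind this can be sketched numerically. The fractions below are purely illustrative, not the actual weightings used in the official statistic, but they show how a fall in heart failure admissions alone shrinks the "alcohol-related" count while drinking is unchanged:

```python
# Hypothetical sketch of how alcohol-attributable fractions turn a set of
# hospital admissions into an "alcohol-related admissions" statistic.
# The fraction values are illustrative only, not the official weightings.
alcohol_attributable_fraction = {
    "alcoholic liver disease": 1.0,   # wholly attributable to alcohol
    "heart failure": 0.15,            # partially attributable (illustrative)
}

def alcohol_related_admissions(admissions):
    """Sum each admission's attributable fraction to get the headline count."""
    return sum(alcohol_attributable_fraction[dx] for dx in admissions)

# Year 1: 5 liver disease and 10 heart failure admissions
year1 = ["alcoholic liver disease"] * 5 + ["heart failure"] * 10
# Year 2: community management avoids 6 heart failure admissions;
# actual alcohol consumption is unchanged
year2 = ["alcoholic liver disease"] * 5 + ["heart failure"] * 4

print(round(alcohol_related_admissions(year1), 2))  # 6.5
print(round(alcohol_related_admissions(year2), 2))  # 5.6
```

The headline statistic falls from 6.5 to 5.6 "alcohol-related" admissions purely because of a change in heart failure management, exactly the confounding described above.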
It is precisely because this is such a weak and imprecise indicator of alcohol harm that it lends itself to such abuses as coding creep by those who want to make the problem look worse, or counting only primary diagnoses, by those who want to play down the problem.
We need a more robust measure of alcohol harm. By taking a data set that was collected for an entirely different purpose and analysing it cleverly enough to come up with a synthetic metric, we do ourselves and public health no favours. Instead of starting with a data set we should first examine what it is we are trying to measure and then set about defining an indicator and collecting the data with which to calculate it with a known degree of precision.
Competing interests: The author served as the alcohol policy lead for the Department of Health in the West Midlands until 2011
In his interesting and well-argued article about the questionable statistics for alcohol-related hospital admissions, Nigel Hawkes raises a particular issue that deserves widespread discussion. 
Coding creep – the progressive increase in the number of medical conditions recorded for each patient admitted to hospital – is a serious threat to the validity of data collected in order to establish the safety of treatments and procedures. [2,3] This has been recognised for more than twenty years and is on the increase. [2,3]
Nowadays, there is a requirement to demonstrate that medical and surgical practices are safe. As part of this process, mortality data are collected and the results compared between different hospitals – and between different clinicians – for the purpose of identifying poor performance and, in turn, improving the standard of health care. But this is the world of statistics...
The simplest way would be to compare the crude mortality rates – in other words, the number of deaths divided by the number of patients admitted – but a hospital found to have increased mortality would object that this measure was unfair because the particular circumstances of their patients increased their risk of dying. And so, adjusted mortality rates which take into account various risk factors – of which one of the most important is the presence of other diseases – are preferred.
But where there’s statistics, there’s data manipulation.  Nowhere is this better seen than in the case of coding creep. Eager to remove themselves from the status of outliers with poor performance, hospitals increase the number of disease labels attached to each patient. This leads to an increase in the expected number of deaths and, as a result, the difference between the observed and expected mortality falls. In other words, the adjusted mortality rate is no longer an outlier but instead lies snugly in the mid-range or better.
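The mechanism can be made concrete with a toy calculation (the numbers are invented for illustration, not real hospital data): the adjusted figure is essentially a ratio of observed deaths to the deaths a risk model expects, so inflating the coded comorbidities raises the denominator and flatters the ratio.

```python
# Hypothetical illustration of how coding creep lowers a standardised
# (adjusted) mortality ratio without any change in actual outcomes.

def standardised_mortality_ratio(observed_deaths, expected_deaths):
    """SMR: observed deaths divided by the risk model's expected deaths."""
    return observed_deaths / expected_deaths

observed = 120  # actual deaths, unchanged from one year to the next

# Year 1: patients coded with few comorbidities; model expects 100 deaths
smr_before = standardised_mortality_ratio(observed, expected_deaths=100)

# Year 2: same patients, same outcomes, but extra diagnoses (mild
# hypertension, borderline diabetes, "COPD" in smokers) inflate the
# risk model's expectation to 150 deaths
smr_after = standardised_mortality_ratio(observed, expected_deaths=150)

print(smr_before)  # 1.2 -> an apparent outlier
print(smr_after)   # 0.8 -> apparently better than average
```

Nothing about the care delivered has changed; only the coding has, yet the hospital moves from outlier to mid-range or better.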
Coding creep has been associated with some surprising observations. [2,3] While the number of patients dying annually in hospitals in England has remained more or less stable in recent years, the adjusted mortality rate has fallen steeply. In line with this trend, the average number of diagnoses attached to patients has increased, often by startling amounts – so much so that it is difficult to believe anything other than that the data are flawed. After all, it is easy to see how additional diagnoses may be concocted: a minor cough or a transient wheeze in a smoker may be sufficient for a label of chronic obstructive airways disease; a record of mildly increased blood pressure in the past is surely enough to tick the hypertension box; and trivially raised blood sugar controlled with diet soon becomes diabetes. Who really cares whether these are genuine, or whether the disease is mild, moderate or severe? All that matters is that the mortality rate is reduced – regardless of whether the change reflects any improvement in the outcome of patients.
Data used to create league tables relating to performance may be criticised on many other fronts, and coding creep is but one source of error. Yet it is of great importance because it is so readily open to corruption. Competition invites survival tactics, many of which are unacceptable. Not only that, but they bring the data into disrepute. What do we really know about hospital mortality? What, for that matter, do we really know about most health statistics?
1. Hawkes N. Seeing through the alcohol statistics haze. BMJ 2012;344:e1273.
2. Shahian DM, Normand S-L, Torchiana DF, et al. Cardiac surgery report cards: comprehensive review and statistical critique. Ann Thorac Surg 2001;72:2155-68.
3. Mohammed MA, Deeks JJ, Girling A, et al. Evidence of methodological bias in hospital standardised mortality ratios: retrospective database study of English hospitals. BMJ 2009;338:b780.
4. Penston J. Stats.con – How we’ve been fooled by statistics-based research in medicine. London: The London Press, 2010.
5. Pitches D, Burls A, Fry-Smith A. Snakes, ladders, and spin: how to make a silk purse from a sow’s ear – a comprehensive review of strategies to optimise data for corrupt managers and incompetent clinicians. BMJ 2003;327:1436-9.
Competing interests: No competing interests
Nigel Hawkes says that minimum pricing of alcohol as proposed in Scotland "would not be much of a deal" because it increases profit for the alcohol industry and reduces excise duty for the Government. (Though this would be compensated in part by an increase in VAT income.)
However, this isn't the point. The main test for readers of the BMJ should be whether minimum alcohol price would benefit public health. No credible health body doubts that it would. The latest of a series of analyses by the University of Sheffield estimates a reduction in consumption of 11.1%, a reduction in acute admissions of 18% and a reduction in mortality of 33% per annum over a 10 year period from a minimum price of 60p per unit.
These are the benefit estimates which BMJ readers should use to weigh up the value of the policy. If there's a price mechanism which can achieve these gains while generating income exclusively for the Government, many would find this an additional attraction. There is no such mechanism available to the Scottish Government under its present powers.
Minimum price is a policy advocated by Scottish doctors and then supported by the Scottish Government because of its health benefits and it should be judged on that criterion, not on whether it raises tax revenue.
Competing interests: Peter Rice is a member of Scottish Health Action on Alcohol Problems, a group formed by the Royal Colleges and Faculties in Scotland, and previously a Board Member of Alcohol Focus Scotland, a campaigning charity. There is no financial remuneration for either post.