
Editor's Choice

Encouraging improvement

BMJ 2009; 339 (Published 12 November 2009) Cite this as: BMJ 2009;339:b4696
Fiona Godlee, editor, BMJ

    We have two encouraging reports in this week’s BMJ to ward off economic or seasonal gloom. The first is about adult critical care in England. Nearly ten years after a major modernisation and funding initiative led to a 35% increase in staffed beds, Andrew Hutchings and colleagues (doi:10.1136/bmj.b4353) report substantially better processes and lower mortality. They say the initiative has been highly cost effective.

    We’re going to need good critical care as we in the Northern hemisphere enter our flu season. We’re also going to need a good H1N1 vaccine, which brings me to the second encouraging report. In July, the UK’s National Institute for Health Research called for proposals to evaluate new variant H1N1 vaccines in children. Instead of the familiar researchers’ complaint of bureaucratic delay and inefficiency, we have from Andrew Pollard and colleagues (doi:10.1136/bmj.b4652) a story of speed and cooperation. Four weeks after seeking ethical and regulatory approval, the trialists vaccinated their first participant, and they expect to enrol almost 1000 children in the subsequent four weeks. They think we can learn from these exceptional processes, stimulated by pandemic urgency, in ways that could improve timelines for all clinical trials.

    I’m also cheered by the success of our Research Methods and Reporting section. Launched just over a year ago (BMJ 2008;337:a2201), it has already published groundbreaking work and looks set to fulfil its aim of helping to improve the validity and integrity of medical research. This week the section comes of age with a paper from Martin Bland, to my mind the co-godfather (along with Doug Altman, the BMJ’s chief statistical editor) of medical statistics.

    Bland’s article looks at sample size (doi:10.1136/bmj.b3985). It’s a personal and highly readable account that makes an important recommendation. He says we should ditch power calculations, based as they are on significance tests (P values), and instead decide how big a study should be using the width of the confidence interval for a set of outcome measures. It’s clear that power calculations have helped drive the size of studies—from an average of just over 30 participants per study in the BMJ and Lancet in 1972 to over 3000 in 2007. But Bland says they have had their day. Radically, his proposal would remove the need for trialists to specify a primary outcome, something that, as Bland says, is widely abused.
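To make the contrast concrete, here is a minimal sketch (not from Bland's paper; the normal-approximation formulas and all numbers are illustrative assumptions) of the two approaches for a continuous outcome: the conventional power calculation sizes a trial to detect a specified difference, whereas the confidence-interval approach sizes it so that the 95% confidence interval for a mean is no wider than desired.

```python
from math import ceil

def n_for_ci_half_width(sigma, half_width, z=1.96):
    """Sample size so the 95% CI for a mean (sd = sigma) has the
    desired half-width: half_width = z * sigma / sqrt(n)."""
    return ceil((z * sigma / half_width) ** 2)

def n_per_group_for_power(sigma, delta, z_alpha=1.96, z_beta=0.8416):
    """Conventional per-group size to detect a difference delta with
    80% power at two-sided alpha = 0.05 (normal approximation)."""
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Illustrative inputs: outcome sd of 10 units.
print(n_for_ci_half_width(sigma=10, half_width=2))   # estimate mean to +/- 2
print(n_per_group_for_power(sigma=10, delta=5))      # detect a 5-unit difference
```

The first function needs no primary outcome or hypothesised effect size, only a judgment about how precise the estimate should be, which is the practical appeal of Bland's proposal.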

    We pay tribute this week to another father of research methodology, Jerry Morris, who has died at the age of 99 (doi:10.1136/bmj.b4679). His careful comparison of bus conductors and bus drivers showed that exercise prevented heart disease. He was a great proponent of evidence based health policy before the phrase was in common use, and indeed misuse. Two articles this week about the sacking of the UK government’s senior adviser on the misuse of drugs (doi:10.1136/bmj.b4662, doi:10.1136/bmj.b4678) make clear that some politicians would so much prefer to have policy based evidence.

