David Oliver: Seven day services and soundbites
BMJ 2016; 355 doi: https://doi.org/10.1136/bmj.i6043 (Published 11 November 2016) Cite this as: BMJ 2016;355:i6043
“Our plans for seven day services are simple,” the health secretary, Jeremy Hunt, clarified in his party’s conference speech in October.1 The Conservatives’ 2015 election manifesto pledged that hospitals would be “properly staffed, so the quality of care is the same every day of the week.”2 But detail was sketchy.
In 2014 NHS England’s Five Year Forward View had presaged this, saying, “To reduce variations in how patients receive care, we will develop a framework for how seven day services can be implemented affordably and sustainably.”3
Analysts at the Department of Health have warned of the risks and costs of seven day services, however.4 Reports galore have highlighted workforce gaps and declining performance, even with the current service offering.5 6 7 The government has been urged to clarify the specification of seven day services and whether these would include planned, not just urgent, care.8
Well before politicians intervened, the NHS’s medical director, Bruce Keogh, had promoted the cause of seven day services,9 and the Academy of Medical Royal Colleges had published 10 operational standards.10 Given sufficient staffing and money, the standards seem reasonable long term ambitions.
Concern over the feasibility and affordability of implementing all 10 standards has led to a focus on the following priorities for earlier implementation: at least 90% of new patients should be reviewed by a consultant within 14 hours; a consultant should review high dependency inpatients twice a day and others daily; and nine common interventions directed by consultants, and 14 common diagnostic services, should be available on weekends as well as weekdays.
In his speech Hunt explicitly cited these standards as the goal. He picked two, to illustrate problems needing action: consultant reviews of new patients and of those in high dependency wards. He said that, “when we checked,” these were happening in “only one in 20” or “one in 10” hospitals.11 Really? What did “when we checked” refer to?
I emailed the Department of Health and was told that it referred to NHS Improving Quality’s 2014 self assessment exercise, which involved reviews of small numbers of case notes coupled with questionnaires. Big on numbers, rigorous, or peer reviewed, it ain’t.
The raw data are published,12 and NHS Improving Quality (now the Sustainable Improvement Team) has released highlights.13 In reality, the “one in 20” and “one in 10” statistics are misleading because they refer to the proportion of hospitals meeting a 90% target across all patients, including those without acute need. Independent statisticians concluded that 79% of patients were seen by a consultant within 14 hours of admission, however acute their need.14
We need rigorous appraisal of robust data with appropriate context—not “facts” spun like candyfloss and just as insubstantial.
Competing interests: See www.bmj.com/about-bmj/freelance-contributors/david-oliver.
Provenance and peer review: Commissioned; not externally peer reviewed.