Preventing bad reporting on health research
BMJ 2014; 349 doi: https://doi.org/10.1136/bmj.g7465 (Published 10 December 2014) Cite this as: BMJ 2014;349:g7465

All rapid responses
On reading this excellent article my thoughts turned immediately to a recent press release from NICE concerning its draft guidance on cancer referral (published 20th November 2014).
This crucial update was framed by the following statement in the press release from NICE: "...not enough is currently being done in England to identify cancer and treat it at an early stage. Up to 10,000 people in England could be dying each year due to late diagnoses, according to research."
What is not immediately obvious from the text of the press release is that this sobering statistic is based on data from the late 1990s, which predates widespread use of the existing NICE guidance on cancer referral, 2 week wait referrals for suspected cancer and UK Government targets on time from referral to treatment for cancer.
Organisations that seek to improve quality in healthcare should be concerned first with representing medical evidence properly and only second with producing sensationalist press statements. The mainstream media are, I suspect, more than adept at using out-of-date statistics to embellish healthcare reporting without outside help from institutions such as NICE.
Competing interests: I am a GP Partner working in UK primary care
An excellent critique of a murky area - of which I was previously unaware.
Each year I have to provide evidence to my employer that my actions as a clinician since my last appraisal have been satisfactory, and that my professional probity is assured. It is not easy to prove a negative - i.e. one's 'innocence' - in this way, but one does one's best. If there is prima facie evidence of wrongdoing, surely one's revalidation should be suspended until absence of wrongdoing can be objectively proven beyond reasonable doubt?
An academic is judged by her/his publications (and concomitant press releases), just as I am judged by my clinical activity, outcomes, and behaviour.
How can any academic be revalidated who has demonstrably provided, or been involved in (however indirectly), an over-optimistic, i.e. misleading, press release? Accurate communication of research is surely part of their core business.
Increasingly, clinicians are being held to account for their (and their teams') outcomes [1]; why not academics, too?
Ref: 1. My NHS, Consultant outcome data. http://www.nhs.uk/service-search/performance/Consultants#view-the-data (accessed 26 Dec 2014)
Competing interests: Like all UK doctors, I have to submit a declaration of professional probity each year as part of my appraisal, which informs my five-yearly revalidation requirement
This is a very helpful commentary, in particular the focus on positive things which can be done to improve the quality of academic press releases. One thing which, as a society, we could perhaps give more thought to is the broader causes of exaggeration in academic press releases.
My experience as a researcher, which is mirrored in my experience as an environmental campaigner, is that there is a lot of pressure on academics to spin the results of a study so that it speaks to the priorities of the media cycle - hence the overstatement of certainty (journalists don't report "maybes"), the inference to humans (journalists don't write about rats; they write about what makes the rats relevant to their human readers), the focus on controversy or novelty ("if it bleeds [or cures!] it leads"), and so on.
This pressure comes in part from the increasing weight placed on column inches and "impact" in judging academics' work, and the role this plays in securing funding. So long as impact-and-inches is a metric by which academic performance is judged, there will be that much more pressure to spin results in as media-friendly a way as possible (in addition to all the other motivators for exaggeration). Given that this is abetted by university PR departments, which are also judged on these terms, it seems inevitable that exaggerated claims will appear in the press.
Transparency, accountability and naming-and-shaming will help (as academics we have to be responsible for what we write and how it is presented to the media), but they don't address the underlying systemic issues: funding shortfalls, novel demands on academics to be noticed as well as nerdy, the interplay between academic publishing and the media cycle, the problems with judging academics by the same standards as PR departments, and everything else that is causing this whole ghastly and regrettable mess.
This is a complex problem, and Dr Goldacre's article, along with the referenced research, is a valuable contribution to getting these issues out in the open. The mistake would be to think the problem is entirely down to malpractice by individuals, rather than also an emergent problem arising from the modern academic context.
Competing interests: I receive money for assisting environmental charities with science communication.
Re: Preventing bad reporting on health research
Regarding your "mischievous" idea, I've done an aggregation and basic analysis at http://www.jacobsilterra.com/2014/12/29/exaggeration-of-science/
In general the practice was widespread; there were no obvious "worst" offenders. This isn't surprising, though, given the limited sample size.
A more comprehensive study over a longer time period would be wonderful; this would also have the benefit of tracking trends over time. A media watchdog group (e.g. factcheck.org) may be best suited for that task.
Competing interests: No competing interests