Key opinion leaders’ guide to spinning a disappointing clinical trial result
BMJ 2018; 363 doi: https://doi.org/10.1136/bmj.k5207 (Published 13 December 2018) Cite this as: BMJ 2018;363:k5207
All rapid responses
Yeh and colleagues satirise the concept of a randomised controlled trial in order to show its limitations [1]. Hartley and colleagues show how excuses can be made to explain away trials that yield undesired results [2]. At least in medicine we accept the need to test ideas, and especially the need to evaluate new treatments properly before they become standard of care, and indeed afterwards. This is very different from the way policies for organising health services are introduced. Economists seem oblivious to the fact expressed by Hartley et al: "Most wonderful ideas tend not to work." So we have seen concepts such as the internal market, independent sector treatment centres and private finance initiatives introduced without evidence of their efficacy or harms.
Doctors have the concept of evaluating the effect of a change and of trying to apply the methodology of clinical research to it. A favourite reorganisation is to centralise a service in order to maximise the expertise available to patients; this has the downside of making the service less accessible to many, especially residents of deprived localities distant from the specialist facility. Several examples of such evaluations of which I am aware seem to show that researchers, and indeed referees, feel that so long as a benefit of centralisation is shown, the validity of the methods is not important [3-5].
References
[1] Yeh RW, Kazi D, Valsdottir LR, Nallamothu BK. We jumped from aircraft without parachutes (and lived to tell the tale). BMJ 2018;363:465.
[2] Hartley A, Shah M, Nowbar AN, et al. Key opinion leaders' guide to spinning a disappointing clinical trial result. BMJ 2018;363:k5207.
[3] Crawford SM, Hutson RC. Improvements in survival of gynaecological cancer in the Anglia region of England: are these all real? BJOG: An International Journal of Obstetrics & Gynaecology 2012;119:768-769.
[4] Crawford SM. A centralised multidisciplinary clinic approach for germ cell tumours. Ann Oncol 2018; mdy402. https://doi.org/10.1093/annonc/mdy402
[5] Crawford SM. Changing the system: major trauma patients and their outcomes in the NHS (England) 2008-17. https://doi.org/10.1016/j.eclinm.2018.11.001
Competing interests: No competing interests
The playbook misses a key ingredient.
As any spin doctor, political or clinical, would attest, the art of spinning involves looking for 'crumbs of positives' in the 'sea of negatives'.
If you have spent a significant amount of time and money and end up with a negative trial, you ask the statistician to perform as many subgroup analyses as possible until you inevitably end up with some positive results.
Investigators: trust yourselves. The cup is never half empty. It is always half full.
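The "inevitability" behind the satire is simple multiplicity: under the null hypothesis, each subgroup test's p value is roughly uniform on (0, 1), so the chance of at least one spuriously "significant" subgroup grows quickly with the number of analyses. A minimal simulation sketch (illustrative code, not from the correspondence; the function name and numbers are invented):

```python
import random

random.seed(42)

def chance_of_spurious_positive(n_subgroups, alpha=0.05, n_sims=20000):
    """Estimate the probability that at least one of n_subgroups truly
    null tests comes out 'significant' at the given alpha level.
    Under the null, each test's p value is ~Uniform(0, 1)."""
    hits = 0
    for _ in range(n_sims):
        if any(random.random() < alpha for _ in range(n_subgroups)):
            hits += 1
    return hits / n_sims

# Roughly 1 - 0.95**k: about 5% for one test, over 60% for twenty.
for k in (1, 5, 10, 20):
    print(k, round(chance_of_spurious_positive(k), 3))
```

With twenty subgroup analyses of a truly ineffective treatment, a "positive result" is more likely than not.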
Competing interests: Significant amount of trial work and advisory work with various pharmaceutical companies
Re: Key opinion leaders’ guide to spinning a disappointing clinical trial result
Hartley and colleagues identified "negative" trials (which the reader presumes to mean trials whose results were statistically non-significant) and accuse opinion leaders of "spinning" results when these findings are not interpreted as showing that a treatment does not work.
The authors are misguided and wrong. Statistically non-significant findings rarely demonstrate that a treatment does not work; most commonly they represent uncertainty [Gewandter 2017]. The authors further seem to suggest that reports of non-significant findings should recalculate the required sample size. This too is incorrect and unnecessary, as the information needed to interpret a trial's findings is contained in the confidence interval [Moher 2012].
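The point about confidence intervals can be illustrated numerically: a non-significant trial whose interval spans both zero and a clinically meaningful benefit conveys uncertainty, not "no effect". A sketch using a simple Wald interval for a risk difference (the function and the event counts below are invented for illustration, not taken from any cited trial):

```python
import math

def risk_difference_ci(events_t, n_t, events_c, n_c, z=1.96):
    """95% Wald confidence interval for the difference in event
    proportions between a treatment and a control arm."""
    p_t, p_c = events_t / n_t, events_c / n_c
    diff = p_t - p_c
    se = math.sqrt(p_t * (1 - p_t) / n_t + p_c * (1 - p_c) / n_c)
    return diff, (diff - z * se, diff + z * se)

# Hypothetical trial: 40/200 events on treatment vs 50/200 on control.
diff, (lo, hi) = risk_difference_ci(40, 200, 50, 200)
# The interval spans zero, so the result is compatible with both a
# meaningful benefit and no effect -- uncertainty, not proof of no effect.
print(round(diff, 3), round(lo, 3), round(hi, 3))
```

The interval itself tells the reader what effect sizes remain compatible with the data, which is why post hoc sample size recalculation adds nothing.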
[Gewandter 2017] Gewandter JS, McDermott MP, Kitt RA, Chaudari J, Koch JG, Evans SR, Gross RA, Markman JD, Turk DC, Dworkin RH. Interpretation of CIs in clinical trials with non-significant results: systematic review and recommendations. BMJ Open. 2017 Jul 18;7(7):e017288. doi: 10.1136/bmjopen-2017-017288. Review. PubMed PMID: 28720618; PubMed Central PMCID: PMC5726092.
[Moher 2012] Moher D, Hopewell S, Schulz KF, Montori V, Gøtzsche PC, Devereaux PJ, Elbourne D, Egger M, Altman DG; CONSORT. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. Int J Surg. 2012;10(1):28-55.
Competing interests: No competing interests