Which clinical studies provide the best evidence?
BMJ 2000;321 doi: https://doi.org/10.1136/bmj.321.7256.255 (Published 29 July 2000). Cite this as: BMJ 2000;321:255.
Mr. Barton is incorrect in thinking that in law the "best evidence
rule" refers to some hierarchy of evidence. This is precisely what it does
not mean. The rule is a misnomer and simply states a preference for
original documents under some circumstances.
I was pleased to read that the profession is belatedly coming to
realize the truth of what some of us have been saying for years. [1] But I
was disappointed that Mr. Barton did not catalogue the inherent flaws of
randomized trials, the most serious of which is their lack of external
validity. One way to improve "observational" studies is to make
use of what I have called the quasi-randomized trial. [2] When I first
suggested this approach, it met with contumely. Now that the profession is
waking up to the simple truth that the differences between statistically
well controlled "observational" studies and randomized clinical trials are
not that great when quite a lot is known about a disease and the non-
treatment factors that influence its outcome, it may deign to take another
look at a suggestion that would be particularly apposite in the
collaborative, multicenter trials so typical of oncology.
Nicholas Kadar, MD, JD.
References
[1] Kadar N. Randomized trials for laparoscopic surgery: valid
research strategy or academic gimmick? (Editorial) Gynaecol Endosc
1994;3:69-74.
[2] Kadar N. The quasi-randomized trial: a technique for eliminating
bias resulting from treatment preferences and the inability to "blind".
Gynaecol Endosc 1997;4:197.
Competing interests: No competing interests
EDITOR - The two papers on randomised vs non-randomised studies,
published this year in the New England Journal of Medicine and cited in
Stuart Barton's editorial [1], show first of all that randomised
controlled trials (RCTs) are not the only valuable source of information
about the efficacy and effectiveness of treatments. Given the choice
between considering only RCTs and also including well controlled but
non-randomised trials in systematic reviews, the latter should always be
favoured. Simply dropping all non-randomised studies from meta-analyses
is too easy a shortcut; such studies may reveal insights into
discrepancies that would not be found with RCTs alone.
In medicine, at least on average, non-randomised studies have long
been regarded as overestimating treatment effects [2]. In psychology,
which has a long tradition (e.g. [3]) of working with non-randomised but
properly controlled studies (so-called quasi-experimental designs), the
tendency seems to be the other way round: meta-analyses in psychotherapy
research found lower effect sizes for well-controlled non-randomised
trials than for corresponding RCTs [4,5]. This may be at least partially
explained by the fact that some quasi-experiments used self-selected
treatment clients who were more distressed than available controls [5].
Perhaps the above-mentioned findings in medicine may be explained by
better control of confounding variables in the newer studies; just
matching for age and sex is not enough. I must admit that I trust
observational studies more when RCTs on the same topic yield similar
results. Nevertheless, such findings encourage researchers working in
fields where RCTs are often not feasible, and this kind of research
offers insights into topics not yet well studied with RCTs, such as
inter-individual differences in treatment response.
Peter Schuck, MD, PhD
FBK Research Institute,
Lindenstr. 5,
D-08645 Bad Elster, Germany.
References:
1. Barton S. Which clinical studies provide the best evidence? The
best RCT still trumps the best observational study. BMJ 2000;321:255-6.
2. Kunz R, Oxman AD. The unpredictability paradox: review of
empirical comparisons of randomised and non-randomised clinical trials.
BMJ 1998;317:1185-90.
3. Campbell DT. Factors relevant to the validity of experiments in
social settings. Psychol Bull 1957;54:297-312.
4. Shadish WR, Ragsdale K. Random versus nonrandom assignment in
controlled experiments: do you get the same answer? J Consult Clin Psychol
1996;64:1290-305.
5. Shadish WR, Matt GE, Navarro AM, Phillips G. The effects of
psychological therapies under clinically representative conditions: a meta
-analysis. Psychol Bull 2000;126:512-29.
Competing interests: No competing interests
The editorial by Stuart Barton begins to redress the balance between
randomised controlled trials (RCTs) and other forms of research [1] but
does not go far enough. While quality RCTs undoubtedly produce good
evidence on which practice should be based, the knowledge generated from
RCTs needs to be tempered with the wisdom of clinical experience and
other forms of evidence, such as observational studies.
The history of the development of treatments for diabetic
retinopathy is a leading example of the application of evidence-based medicine
to clinical practice. However, the evolution of the indications for treatment
of diabetic maculopathy also reveals the limitations of strictly applying
the results of RCTs in clinical practice. While we have treated clinically
significant macular oedema in eyes with good vision for more than a decade
on the basis of evidence provided by the ETDRS reports [2], many clinicians
have felt that this was not warranted. Recent reanalysis of the ETDRS data
has given respectability to this 'non-evidence based' view; on the basis of
this revised assessment it is now acceptable to closely follow eyes with
good vision and clinically significant macular oedema, performing
photocoagulation only when the retinal thickening affects or imminently
threatens the centre of the macula [3].
The rise of the RCT as the 'gold standard' for clinical research has
eclipsed other forms of evidence, such as that arising from observational
studies, to the extent that it has acquired a cult status [4]. This can
sometimes lead to inappropriate clinical practice. It is time to restore
the respectability of other kinds of research.
Somdutt Prasad, Clinical Lecturer in Ophthalmology
Department of Ophthalmology and Orthoptics, University of Sheffield
Floor O, Royal Hallamshire Hospital
Glossop Road, Sheffield S10 2JF
References
1. Barton S. Which clinical studies provide the best evidence? BMJ
2000;321:255-6.
2. Early Treatment Diabetic Retinopathy Study Research Group.
Photocoagulation for diabetic macular edema. ETDRS Report Number 4. Int
Ophthalmol Clin 1987;27:265-72.
3. Ferris FL III, Davis MD. Treating 20/20 eyes with diabetic macular
edema (Editorial). Arch Ophthalmol 1999;117:675-6.
4. Ellis SJ, Adams RF. The cult of the double-blind placebo-controlled
trial. Br J Clin Pract 1997;51(1):36-9.
Competing interests: No competing interests
Enrolment of cancer patients in clinical trials
Sir - We read with interest the editorial by Barton (1) and agree that
well conducted randomised controlled trials remain the gold standard for
evidence of efficacy.
Progress in cancer treatment remains slow, and a significant reason
for this is that fewer than 10% of all cancer patients are entered into
clinical trials (2). For most busy clinicians, the infrastructure to
support trial recruitment remains the main problem: the processes of
informed consent, randomisation, data collection and long-term follow up
remain the major stumbling blocks.
In Coventry, we had no funded research support for clinical trials
until 1997, when our trust provided funding for a half-time clinical
research nurse.
This appointment had an immediate and major impact. Prior to 1997, an
average of 90 patients was entered into studies per annum. This has risen
steadily with 291 patients entered in 1999. Having generated momentum and
funding from trial entry and particularly from pharmaceutical studies, we
have now employed two further nurses and a clinical trial secretary. We
are now top recruiters in a number of national studies.
Fifty percent of our referrals originate from our peripheral cancer
units, where there is no research nursing support. However, 85% of trial
patients come from the cancer centre in Coventry, with only 15% from the
peripheral units. As part of the Calman-Hine initiative cancer units are
charged with the responsibility to take part in clinical trials but there
is no central funding for this. Such a structure does not provide equity
of opportunity for patients and diminishes patient choice.
It is recognised that to provide the best care for our patients we
need good quality research. With the government now putting more
resources into the NHS, we must not miss this opportunity to fund such
activity. The MRC and CRC have historically provided little funding to
centres and cancer units outside the major regional cancer centres.
Indeed, support for clinical trials has generally been funded more
generously by industry, with central government funding lagging a poor
second.
We cannot produce good quality research without time and resource.
Our own experience demonstrates that minimal funding can prove very cost
effective and if this were translated across cancer units, the recruitment
into national studies would inevitably improve.
If agencies such as NICE are going to be effective we must be able to
provide the evidence from properly conducted rapidly recruiting trials. To
achieve this in cancer medicine would mean supporting research across the
whole of the NHS including cancer units where a significant proportion of
patients are cared for and not just in a few very major cancer centres.
1 Barton S. Which clinical studies provide the best evidence? BMJ
2000;321:255-6 (29 July).
2 Friedman MA, Cain DF. National Cancer Institute sponsored
co-operative clinical trials. Cancer 1990;65(10 Suppl):2376-82.
Competing interests: No competing interests