Monitoring clinical trials—interim data should be publicly available
BMJ 2001; 323 doi: https://doi.org/10.1136/bmj.323.7310.441 (Published 25 August 2001) Cite this as: BMJ 2001;323:441
EDITOR - The opinions expressed by Lilford and colleagues1 regarding
monitoring of clinical trials are very relevant to a current UK trial of
radiotherapy for breast cancer. The international standard fractionation
regimen consists of 50 Gray in 2 Gray fractions on 5 days per week, but
the evidence base for this is tenuous. Many years ago it was suggested
from reviews of treatment of advanced disease and skin metastases that
fewer larger fractions may be more effective against breast cancer2, but
radiotherapists have been reluctant to do this because of fear of
increased late morbidity.
In 1986 two oncology centres embarked on a randomised trial comparing the
standard 50 Gray with two schedules treating 5 times per fortnight, 39
Gray and 42.9 Gray, both in 13 fractions. Interim data were presented in
1994, showing a significantly lower morbidity for the 39/13 schedule:
there was no difference in the rates of local recurrence of carcinoma, but
there had been very few recurrences at that time3. The trial closed in
1998 with 1400 patients enrolled. The schedules were continued as one of
the arms of the multicentre START trial4, in order to obtain larger
numbers and therefore more conclusive results. The trial data were taken
over by the START data-monitoring committee and are being kept secret on
the basis that their publication would prejudice recruitment to START.
The final results of START will not be known for several years; meanwhile
25 fractions remains the standard.
The median follow-up of these 1400 patients is now 8 years. If there is
still no evidence of a significantly higher risk of recurrence from the
13-fraction schedule and the data were made publicly available,
radiotherapists could reasonably offer women with breast cancer a course
of treatment involving fewer hospital attendances and fewer side effects,
rather than continuing to give the standard 25 fractions while
awaiting results of START. Other cancer patients would also benefit from
the consequent reduction in the workload and therefore shorter waiting
lists in our hard-pressed radiotherapy departments. The points raised by
Lilford and colleagues make a good case that it is now time to publicise
the latest data from this trial.
1. Lilford RJ, Braunholtz D, Edwards S, Stevens A. Monitoring
clinical trials - interim data should be publicly available. BMJ
2001;323:441-2 (25 August).
2. Cohen L. Radiation response and recovery. In: Schwarz EE, ed. The
Biological Basis of Radiation Therapy. London: Pitman, 1966.
3. Yarnold JR, Bliss JM, Regan J, Broad B, Davison J, Harrington G,
Ebbs SR. Randomised comparison of a 13-fraction schedule with a
conventional 25-fraction schedule of radiotherapy after local excision of
early breast cancer: preliminary analysis. Radiother Oncol 1994;32 suppl
1:S101.
4. START trial management group. Standardization of breast
radiotherapy (START) trial. Clin Oncol 1999;11:145-7.
Competing interests: No competing interests
Interpretation of trial data even at trial closure can sometimes be
controversial and difficult. Whilst there may be arguments in some cases
for releasing interim data,[1] depending on the type of trial and the
strength of findings, there are surely also strong arguments for patient
participants and trialists keeping their nerve until all results are gathered in? Why
else do we employ statisticians to undertake power calculations and
provide recommendations of cohort size needed to produce reliable data?
Visualisation of what recruitment by randomisation means has been
facilitated by likening it to the uneven and random distribution of
raindrops on a surface prior to complete coverage. It has also been my
understanding that both earlier trials in a series under review, and interim
results within a trial, can be misleading as to the apparent direction of
the findings.
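The role of the power calculations mentioned above can be sketched with the standard normal-approximation formula for comparing two proportions; the event rates below are hypothetical illustrations, not figures from any trial discussed here.

```python
from math import ceil
from statistics import NormalDist

def per_arm_sample_size(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients needed per arm to detect a difference between
    two event proportions (two-sided test, normal approximation)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the test
    z_b = NormalDist().inv_cdf(power)          # critical value for power
    p_bar = (p1 + p2) / 2
    num = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
           + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(num / (p1 - p2) ** 2)

# Hypothetical example: detecting a fall in an event rate from 10% to 7%
n = per_arm_sample_size(0.10, 0.07)
```

Smaller true differences demand far larger cohorts, which is the statistician's reason for urging trialists to stay the course until the planned numbers are reached.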
Marketing pressure can also lead to premature stoppage of trials,
where profit rather than patients may be the prime motivation for invoking
them, depriving not only "far term" but also "near term" participants of
the satisfaction of finding out the long term benefits and the overall
health benefits of an intervention already fully administered to many
"near term" participants. This was the case in the controversial stoppage
of the U.S. trial of Tamoxifen for the prevention of breast cancer. [2] It
is interesting that the European Prevention Trials did not follow suit,
perhaps because of a more convergent and sensitive motivation of trialist
with participant, and joint determination to obtain long-term data in
spite of apparent U.S. interim findings. [3]
If profession and patients collaborate in designing trials with
agreed long and short term objectives and aims, agreed stopping rules and
procedures for rapid and thorough dissemination of results, including both
professional and lay interpretations, we might all derive full benefit and
satisfaction from staying the course until the end. If patients are equal
partners in devising such contracts with professionals, by being on trial
steering committees and data monitoring committees, they will have equal
opportunity to make decisions about whether or not to abandon the trial,
or release interim data.
A recent survey of cancer patients found that many participants join
trials for altruistic reasons (23.1%) or out of trust in the doctor (21.1%),
not for any benefit they hope to gain from it.[4] Why then adopt a shortcut
process that reduces the amount of learning, understanding and benefit
obtained from trials with participants who will have received an
intervention that cannot be undone?
Hazel Thornton.
Independent advocate for quality in research and healthcare.
References:
[1] Lilford RJ, Braunholtz D, Edwards S, Stevens A. Monitoring clinical
trials - interim data should be publicly available. BMJ 2001;323:441-2.
[2] Fisher B. Current controversies in cancer. Tamoxifen for the
prevention of breast cancer: pro. Eur J Cancer 2000;36:142-50.
[3] Thornton H. Relationship of trial design to value of data for
patient? Eur J Cancer 2000;36:1585-6.
[4] Jenkins V, Fallowfield L. Reasons for accepting or declining to
participate in randomised clinical trials for cancer therapy. Br J Cancer
2000;82(11):1783-8.
Competing interests: No competing interests
Lilford et al[1] make a number of important points about the need
to release interim results from clinical trials. In discussing this need,
however, they write of "interim data" without clarifying the distinction
between making the results of the interim analyses known and releasing the
actual interim data.
There is an enormous reluctance on the part of researchers to release
their actual data even after they have published the results of their
research.[2] Given this, it seems highly unlikely that they would ever
consider releasing the data that were the basis of any interim analyses.
The release of the actual data should, in the long term, however, be
considered at least as important as the release of interim analyses.
REFERENCES
1. Lilford RJ, Braunholtz D, Edwards S, Stevens A. Monitoring
clinical trials - interim data should be publicly available. BMJ 2001;
323:441-442.
2. Reidpath DD, Allotey PA. Data sharing in medical research: an
empirical investigation. Bioethics 2001; 15: 125-134.
Competing interests: No competing interests
Dear Sir,
I commend the authors' call for increased transparency in the reporting of
trial results.
I would like to call for this openness to be extended to the
registration of all pharmaceuticals. In brief, pharmaceutical companies
submit a confidential dossier of data on a drug's purported efficacy,
safety and mechanism of action. These data are independently scrutinised
and a decision is then made as to whether, and under what circumstances, a
drug is to become available to the public. Making these data available,
perhaps after a drug has been approved, would allow for public,
academic and commercial scrutiny.
Thus there would be an additional layer of review, which should allow
for earlier detection of any flaws in the provided data.
Competing interests: No competing interests
I have four objections to this proposal.
Firstly, the philosophy. Anyone involved as a subject in a study is
already being used as a means not an end. The fact that they have
volunteered for such a role is admirable, and puts them in a position like
that of firefighters and soldiers, who risk their lives more than other
members of society in a way that benefits society (and is rewarded).
Concealing which treatment a person is receiving is perfectly acceptable
and commonplace - therefore perfect disclosure is not an absolute
requirement for a morally justifiable trial. Trials are justified on the
basis that there is insufficient VALID information to allow an unbiased
observer to make a decision. Trials are shown to be valid when a well
designed, well conducted study is published in an appropriate forum. Data
release effectively replaces the last stage with one that is less
rigorous. This is not acceptable because (apart from the danger of
mistaken conclusions) it is not what the researchers and subjects agreed
to.
Secondly, practical aspects. Some studies do not have data monitoring
committees, and those that do may have very different ideas. Thus the
scene is set for confusion on a grand scale. Those who follow Physics will
remember the "cold fusion" debacle. The authors do suggest that there
could be guidelines. These are likely to be a great deal more complicated
than the present situation. This proposal will also greatly reduce the
weight of properly published evidence.
Thirdly, "Moral Hazard". If I am a researcher planning a trial where it
will be difficult to recruit sufficient subjects to gain certainty, then I
may be tempted to go through the motions of applying for funding, ethical
approval etc. based on an unrealistically rapid recruitment schedule, in
the knowledge that the data will get released anyway before conclusion,
and a good trend will gain approval for the treatment.
Fourthly, misunderstanding of the role of data monitoring committees. I
believe their role is to a) stop trials that are causing undue adverse
events, b) halt trials where the treatment effect has been underestimated
i.e. where the number of subjects needed to show the effect is really much
lower than initially thought. Inconclusive results at an interim stage are
exactly what a well-designed study should expect.
Competing interests: No competing interests
Monitoring clinical trials - interim data should not be publicly available
Richard Lilford and colleagues show little understanding of the
uncertainties involved in the assessments of treatment effects.1 Few
people are aware of how much point estimates wander about as both more
patients and longer follow-up accrue in a randomised clinical trial. This
means that choices made by patients on the basis of interim analyses are
unlikely to be 'rational'. The trouble is that when results go in a
particular direction, the natural instinct is to assume that they will
continue that way. This is why phrases appear in papers such as 'there was
a trend of 5% in favour of treatment A, but this is not yet significant',
implying that with more data it will become so. Of course, it is just as
likely that future data will add up in the other direction so that the
final result may be against treatment A.
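The wandering of interim point estimates described above can be illustrated with a small simulation; the trial below is hypothetical, with identical true event rates in both arms, so any apparent difference at an interim look is pure noise.

```python
import random

def interim_risk_differences(true_rate=0.10, n_per_arm=1000,
                             looks=(50, 100, 200, 500, 1000), seed=1):
    """Simulate a two-arm trial with NO true treatment effect and return
    the observed risk difference (arm A minus arm B) at each interim look."""
    rng = random.Random(seed)
    arm_a = [rng.random() < true_rate for _ in range(n_per_arm)]
    arm_b = [rng.random() < true_rate for _ in range(n_per_arm)]
    return [sum(arm_a[:n]) / n - sum(arm_b[:n]) / n for n in looks]

# Apparent 'trends' at early looks tend to erode as more patients accrue.
diffs = interim_risk_differences()
```

Running this over many seeds shows the early looks scattering far more widely than the final analysis, which is the statistical reason an interim 'trend' should not be read as the direction of the final result.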
Data monitoring committees have been implemented for good reasons
which have been well discussed in the past. The issues involved in
assessing results are not simple, and involve not only statistical
uncertainty but issues such as length of follow-up, internal consistency,
baseline comparability, compliance, adjustment for repeated tests, etc.2
The publication of interim results of ISIS2 was for a particular subgroup
where benefit was clear.3 There are obviously circumstances where report
of certain results will increase recruitment (for example, where sceptical
clinicians may decide to start entering patients due to an apparently
positive effect), and others where they will reduce it (for example, a
slightly positive result might make clinicians stop recruiting, with the
sceptical ones all stopping use of the treatment and optimistic ones all
deciding to use it).
Rather than going backwards and repeating the mistakes of the past,
where almost all trials were too small to give definite answers - and even
some of those that apparently did, later turned out to be misleading4 - we
should be concentrating on the problem of lack of reporting of final
results for all randomised trials done. At present the information
available on a question is often a biased subset due to this lack of
publication.5,6
1. Lilford RJ, Braunholtz D, Edwards S, Stevens A. Monitoring
clinical trials - interim data should be publicly available. BMJ 2001;
323:441-2.
2. DeMets DL. Data monitoring and sequential analysis - an academic
perspective. J Acquired Immune Deficiency Syndromes 1990; 3(Suppl 2):S123
-S133
3. ISIS-2 Steering Group. Intravenous streptokinase given within 0-4 hours
of onset of myocardial infarction reduced mortality in ISIS-2. Lancet
1987; i:502
4. Collins R, Peto R, Gray R, Parish S. Large-scale randomized evidence:
trials and overviews. In: Weatherall DJ, Ledingham JGG, Warrell DA, eds.
Oxford Textbook of Medicine. 1996:21-32.
5. Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR. Publication bias in
clinical research. Lancet 1991;337:867-72
6. Dickersin K, Min YI, Meinert CL. Factors influencing publication of
research results: follow-up of applications submitted to two institutional
review boards. JAMA 1992;267:374-8.
Competing interests: No competing interests