Letters

Monitoring clinical trials

BMJ 2001; 323 doi: https://doi.org/10.1136/bmj.323.7326.1424 (Published 15 December 2001) Cite this as: BMJ 2001;323:1424

Dissemination of decisions on interim analyses needs wider debate

  1. Sheila M Bird, senior statistician (sheila.bird{at}mrc-bsu.cam.ac.uk)
  1. MRC Biostatistics Unit, Cambridge CB2 2SR

    EDITOR—Lilford et al make a case that interim analyses from randomised trials should be shared with trial participants and with doctors and patients.1 These analyses should be shared for the sake of freedom of information and properly informed consent, as a counterweight to paternalism, for the better public understanding of uncertainty, and regardless of drug regulatory or financial considerations.1 But the authors stop short of suggesting how the design of randomised controlled trials might evolve so that, in the light of emerging information, future patients and their doctors have more choice than simply between 50:50 randomisation to treatments A and B and self determination to receive treatment A or B.

    The consumer principle of randomisation, which to my knowledge has not been implemented since its enunciation in 1994,2 offers doctors and patients the option of choosing one of three randomisation ratios, such as 30:70 (uncertain or idiosyncratic preference for B, yet willing to be randomised if allocation is weighted in favour of B), 50:50 (absolute uncertainty or complete altruism), or 70:30 (uncertain or idiosyncratic preference for A, yet willing to be randomised if allocation is weighted in favour of A).
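The consumer principle described above amounts to letting each patient choose an allocation ratio and then randomising within that chosen stratum, which is recorded as a covariate. A minimal sketch, in which the stratum names and the representation of ratios as probabilities are illustrative assumptions rather than anything from the published proposal:

```python
import random

# Illustrative sketch of the "consumer principle" of randomisation: the
# patient chooses one of three allocation ratios, treatment is drawn at
# random within that stratum, and the chosen stratum is kept as a covariate.
# Stratum names and structure are hypothetical, for illustration only.
STRATA = {
    "prefer_B": 0.30,     # 30:70 — willing to be randomised if weighted towards B
    "indifferent": 0.50,  # 50:50 — absolute uncertainty or complete altruism
    "prefer_A": 0.70,     # 70:30 — willing to be randomised if weighted towards A
}

def allocate(chosen_stratum, rng=random.random):
    """Return the allocation record: chosen stratum plus drawn treatment."""
    p_a = STRATA[chosen_stratum]  # probability of receiving treatment A
    treatment = "A" if rng() < p_a else "B"
    return {"stratum": chosen_stratum, "treatment": treatment}
```

Because randomisation is preserved within each stratum, treatment comparisons remain unbiased within strata, exactly as the letter argues.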

    Importantly, the choice of randomisation stratum is an additional patient covariate that was not previously available; comparison between treatments is unbiased within the chosen randomisation stratum; and how the choice of randomisation stratum drifts after disclosure of interim data is a measure of how those data were assimilated by future patients and their doctors. Data monitoring committees might even decide to close one of the randomisation strata, such as closing down 30:70 randomisation if the interim data pointed moderately convincingly in favour of A.

    Not only should there be wider debate about the dissemination to doctors and patients of data monitoring committees' decisions on interim analyses but there should be wider debate about how those decisions might affect both patient information sheets and randomisation ratios.

    Interim data should not be publicly available

    1. S M Richards, statistician
    1. CTSU, Radcliffe Infirmary, Oxford OX2 6HE

      EDITOR—Lilford et al show little understanding of the uncertainties involved in the assessments of treatment effects.1 Few people are aware of how much point estimates wander about as both more patients and longer follow up accrue in a randomised clinical trial. This means that choices made by patients on the basis of interim analyses are unlikely to be “rational.” The trouble is that when results go in a particular direction, the natural instinct is to assume that they will continue that way. This is why phrases appear in papers such as, “there was a trend of 5% in favour of treatment A, but this is not yet significant,” implying that with more data it will become so. Of course, it is just as likely that future data will add up in the other direction so that the final result may be against treatment A.
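The point about wandering point estimates is easy to demonstrate by simulation. The sketch below is illustrative and not from the letter: two arms share the same true response rate, yet the observed difference in response rates at each interim look can drift well away from zero early on before settling as patients accrue.

```python
import random

# Illustrative simulation: two treatment arms with the SAME true response
# rate. We record the interim difference in observed response rates after
# each patient; early "trends" are volatile and often reverse with more data.
def interim_differences(n_patients=1000, true_rate=0.5, seed=1):
    rng = random.Random(seed)
    successes = {"A": 0, "B": 0}
    counts = {"A": 0, "B": 0}
    diffs = []
    for i in range(n_patients):
        arm = "A" if i % 2 == 0 else "B"  # alternate allocation for simplicity
        counts[arm] += 1
        if rng.random() < true_rate:
            successes[arm] += 1
        if counts["A"] and counts["B"]:
            diffs.append(successes["A"] / counts["A"] - successes["B"] / counts["B"])
    return diffs
```

Plotting such a trajectory shows why a "trend of 5% in favour of treatment A" at an interim look carries no guarantee of persisting to the final analysis.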

      Data monitoring committees have been implemented for good reasons that have been well discussed in the past. The issues involved in assessing results are not simple, and involve not only statistical uncertainty but issues such as length of follow up, internal consistency, baseline comparability, compliance, adjustment for repeated tests, etc.2 The publication of interim results of ISIS-2 was for a particular subgroup where benefit was clear.3 There are obviously circumstances where reporting certain results will increase recruitment (for example, where sceptical clinicians may decide to start entering patients because of apparently positive effect), and others where they will reduce it (for example, a slightly positive result might make clinicians stop recruiting, with the sceptical ones all stopping use of the treatment and optimistic ones all deciding to use it).

      Rather than going backwards and repeating the mistakes of the past, when almost all trials were too small to give definite answers—and even some of those that apparently did, later turned out to be misleading4—we should be concentrating on the problem of lack of reporting of final results for all randomised trials done. At present the information available on a question is often a biased subset owing to this lack of publication.5

      Caution may be warranted in releasing interim trial data

      1. Hazel Thornton, independent advocate for quality in research and healthcare (hazelcagct{at}aol.com)
      1. Saionara, 31 Regent Street, Rowhedge, Colchester CO5 7EA

        EDITOR—Interpreting trial data even at the end of a trial can be controversial and difficult. Though there may be arguments in some cases for releasing interim data,1 depending on the type of trial and strength of findings, there are surely also strong arguments for patient participants and trialists keeping their nerve until all results are gathered in. Why else do we employ statisticians to undertake power calculations and provide recommendations of the cohort size needed to produce reliable data?
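The power calculations referred to above can be illustrated with the textbook normal-approximation formula for comparing two proportions. This is a back-of-envelope sketch of the general technique, not the method used in any particular trial:

```python
from statistics import NormalDist

# Approximate sample size per arm for detecting a difference between two
# response rates p1 and p2, using the standard normal-approximation formula:
#   n = (z_{1-alpha/2} + z_{power})^2 * (p1(1-p1) + p2(1-p2)) / (p1 - p2)^2
# Illustrative only; in practice the result is rounded up per arm.
def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance level
    z_b = NormalDist().inv_cdf(power)          # desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ((z_a + z_b) ** 2 * variance) / (p1 - p2) ** 2
```

For example, distinguishing a 50% from a 60% response rate at 80% power needs roughly 385 patients per arm; halving the detectable difference roughly quadruples the required cohort, which is why stopping short of the planned size undermines reliability.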

        Recruitment by randomisation can be visualised by likening it to the uneven, random distribution of raindrops on a surface before complete coverage is reached. My understanding is that earlier trials in a series under review, and interim results within a trial, can both mislead as to the apparent direction of the results.

        Marketing pressure can also lead to premature stoppage of trials where profit rather than patients is the prime motivation. This deprives both “far term” and “near term” participants of the satisfaction of learning the long term and overall health benefits of an intervention already given to many of them. This was the case in the controversial stoppage of the American trial of tamoxifen for the prevention of breast cancer.2 It is interesting that the European prevention trials did not follow suit, perhaps because of a more convergent and sensitive motivation among trialists and participants and a joint determination to obtain long term data despite the apparent interim findings in the United States.3

        If the profession and patients collaborate in designing trials with agreed long term and short term objectives and aims, agreed stopping rules, and procedures for rapid and thorough dissemination of results (including both professional and lay interpretations) we might all derive full benefit and satisfaction from staying the course until the end. If patients are equal partners in devising such contracts with professionals, by being on trial steering committees and data monitoring committees, they will have equal opportunity to make decisions about whether or not to abandon the trial or release interim data.

        Many participants join trials for altruistic reasons (23%) or out of trust in their doctor (21%), rather than for any benefit they hope to gain.4 Why then adopt a shortcut that reduces the amount of learning, understanding, and benefit obtained from trials whose participants will have received an intervention that cannot be undone?

        Several points are contentious

        1. David Parry, online learning researcher (dave.parry{at}aut.ac.nz)
        1. School of Information Technology, Business Faculty, Auckland University of Technology, Private Bag 92006, Auckland 1020, New Zealand

          EDITOR—I have four objections to Lilford et al's proposal for monitoring clinical trials.1

          Firstly, I object to the philosophy. Any people involved as subjects in a study are already being used as a means, not an end. The fact that they have volunteered for such a role is admirable and puts them on a par with firefighters and soldiers, who risk their lives more than other members of society in a way that benefits society. Concealing which treatment a person is receiving is perfectly acceptable and commonplace; therefore, perfect disclosure is not an absolute requirement for a morally justifiable trial. Trials are justified if there is insufficient valid information for an unbiased observer to make a decision.

          Trials are shown to be valid when a well designed, well conducted study is published in an appropriate forum. Release of data effectively replaces the last stage with one that is less rigorous. This is not acceptable because (apart from the danger of mistaken conclusions) it is not what the researchers and subjects agreed to.

          Secondly, I object to practical aspects of the proposal. Some studies do not have data monitoring committees, and the committees that do exist may have very different ideas. Thus the scene is set for confusion on a grand scale. Lilford et al suggest that there could be guidelines, but these are likely to make matters a great deal more complicated than at present. The proposal would also greatly reduce the weight of properly published evidence.

          Thirdly, I object to the moral hazard. If I am a researcher planning a trial in which it will be difficult to recruit sufficient subjects to gain certainty then I may be tempted to go through the motions of applying for funding, ethical approval, etc on the basis of an unrealistically rapid recruitment schedule. I will do this in the knowledge that the data will get released anyway before the conclusion of the trial and a good trend will gain approval for the treatment.

          Lastly, I object to the authors' misunderstanding of the role of data monitoring committees. I believe that their role is to stop trials that are causing undue adverse events and to halt trials where the treatment effect has been underestimated—that is, where the number of subjects needed to show the effect is really much lower than initially thought. Inconclusive results at an interim stage are what a well designed study should produce.

          Interim data are at least as important as interim analyses

          1. Daniel Reidpath, senior lecturer in social epidemiology (reidpath{at}deakin.edu.au)
          1. School of Health Sciences, Deakin University, Burwood, VIC 3125, Australia

            EDITOR—Lilford et al make several points about the need to release interim results from clinical trials.1 In discussing this, however, they write of “interim data” without clarifying the distinction between making the results of interim analyses known and releasing the actual interim data.

            Researchers are always enormously reluctant to release their data even after they have published the results of their research.2 Given this, they are unlikely to consider releasing the data that were the basis of any interim analyses. The release of the actual data, however, should be considered at least as important as the release of interim analyses.

            Latest data from START trial should be made available

            1. J M Henk (mhenk{at}doctors.org.uk), honorary consultant clinical oncologist, Royal Marsden NHS Trust
            1. 76 The Crescent, Belmont, Surrey SM2 7BS

              EDITOR—The opinions expressed by Lilford et al regarding the monitoring of clinical trials are relevant to a current trial in the United Kingdom of radiotherapy for breast cancer.1 The international standard fractionation regimen consists of 50 Gy in 2 Gy fractions on five days a week, but the evidence base for this is tenuous. Many years ago reviews of the treatment of advanced disease and skin metastases suggested that fewer, larger fractions might be more effective against breast cancer,2 but radiotherapists have been reluctant to adopt this approach for fear of increased late morbidity.

              In 1986 two oncology centres embarked on a randomised trial comparing the standard 50 Gy with two schedules treating five times a fortnight, 39 Gy and 42.9 Gy, both in 13 fractions. Interim data were presented in 1994, showing a significantly lower morbidity for the 39/13 schedule: there was no difference in the rates of local recurrence of carcinoma, but there had been few recurrences at that time.3 The trial closed in 1998 with 1400 patients enrolled. The schedules were continued as one of the arms of the multicentre standardisation of breast radiotherapy (START) trial4 to obtain larger numbers and therefore more conclusive results. The trial data were taken over by the data monitoring committee and are being kept secret on the basis that their publication would prejudice recruitment to the trial. The final results of the trial will not be known for several years; meanwhile, 25 fractions remains the standard.

              The median follow up of these 1400 patients is now eight years. If there is still no evidence of a significantly higher risk of recurrence from the 13-fraction schedule and the data were made publicly available, radiotherapists could reasonably offer women a course of treatment involving fewer hospital attendances and with fewer side effects rather than continuing to give the standard 25 fractions while awaiting results of the standardisation of breast radiotherapy trial. Other patients with cancer would also benefit from the consequent reduction in the workload and therefore shorter waiting lists in our hard pressed radiotherapy departments. The points raised by Lilford et al make a good case that it is now time to publicise the latest data from this trial.
