

Comparing treatments

BMJ 1995;310:1279 (Published 20 May 1995)
  1. David Henry,
  2. Suzanne Hill
  1. Senior lecturer Faculty of Medicine and Health Sciences, University of Newcastle, Newcastle, NSW, Australia
  2. Director Clinical Evaluation Section, Drug Safety and Evaluation Branch, Therapeutic Goods Administration, Canberra, ACT, Australia

    Comparison should be against active treatments rather than placebos

    Clinical decisions about management tend to be influenced by a mixture of factors, including the results of clinical trials, the opinions of experts and colleagues, the characteristics of the patients, previous experiences, and plain old habit (good and bad). Where new drugs are concerned, evidence on efficacy from trials is usually good—better than for other clinical interventions. Regulatory agencies have placed high hurdles in the path of drugs, and manufacturers have pursued the questions of efficacy with rigour.

    In the United States the Food and Drug Administration, historically, has required evidence from two placebo controlled trials before licensing a new compound, although some recent approvals have been based on a single trial.1 2 Although these regulatory requirements have been criticised, they generally provide us with good evidence that the drug works under specified conditions. It is often less clear how well the drug will work under different conditions and in patients who do not resemble those in the trials. Less attention has been paid to the scientific principles of generalisation than to those that underpin the conduct of the trials themselves.3

    Unfortunately, at the time that new drugs are released little information exists on how they compare with those already used for the same indications. This is not simply a matter of clinical importance. Many new drugs are expensive, and in some countries drug budgets are growing faster than other health care sectors. This explains the increasing attention being paid to the cost effectiveness of new drugs.4 The key questions are: how much better are the new drugs than the old ones, how much more does it cost to obtain the additional benefits, and does the extra cost represent value for money?

    Some governments and other buyers have responded by requiring pharmaceutical manufacturers to submit evidence on the comparative cost effectiveness of their new drugs and those already used for the relevant indication.5 In Australia this process has been made difficult by a general shortage of good quality randomised trials that are large enough to detect clinically and economically important differences between drugs. Comparative trials that are performed before marketing are usually small and sometimes of indifferent quality.

    The reluctance of companies to perform comparative trials during drug development is easily explained. Traditionally, there has been no pressure from regulators to compare their products with those of their competitors. Trials of active treatments have to be large to detect small differences, which will add to the costs. From a commercial standpoint the discovery that a new product, which has yet to establish a market, is no better than an older and cheaper one could be disastrous. This reticence does not, however, extend beyond approval, when marketing departments make claims about the superiority of their products, often on the basis of minimal data.

    Comparative trials do get done, but usually well after approval for marketing has been granted and decisions have been made about indications and price. Such trials—for instance, the global utilisation of streptokinase and tissue plasminogen activator for occluded coronary arteries (GUSTO) trial and the treatment of mild hypertension study (TOMHS)—may raise important questions about comparative effectiveness.6 7

    How can we ensure that comparative trials of active treatments are performed early enough? Regulators have been reluctant to base initial judgments about efficacy on the results of trials entailing active comparisons. To quote Robert Temple of the Food and Drug Administration, if the new drug seems indistinguishable from the active control “you don't really know what you've got.” The control may be a poor drug, or the trial may have been incapable of detecting true differences between the drugs. Regulators may therefore license an inferior product. After the drug is licensed it is possible to collect non-randomised data. Although linking data on prescriptions and diagnoses from large administrative databases is invaluable in quantifying adverse effects, this approach is less useful for evaluating the comparative effectiveness of drugs.8 Small genuine differences between active treatments are likely to be swamped by the confounding effects of differing indications, the characteristics of patients, and choices by doctors.

    Glasziou has suggested that governments should support postmarketing randomised trials to answer unresolved questions regarding the clinical and economic performance of new drugs.9 This approach should be used when a new drug seems to offer value for money but the clinical data are weak. In Australia the government subsidises the use of selected drugs through the Pharmaceutical Benefits Scheme. Some drugs are in an “authority required” category—doctors must obtain approval for use by individual patients before the government will pay for the drug. Glasziou suggests that there should be a new category—“authority to prescribe only within a controlled trial.”10

    A move by governments to support large trials makes sense. Some classes of drugs cost tens or hundreds of millions of dollars annually and yet are of uncertain cost effectiveness—for example, angiotensin converting enzyme inhibitors in mild hypertension. Investing a proportion of these costs in a trial could provide invaluable information on which to base policies. Drugs are not the only candidates for government sponsored trials—the adoption of new operative procedures and imaging procedures is often based on uncertain evidence. If governments tied payment for these interventions to the conduct of randomised trials we would all be much clearer about their true role in the management of patients.

