Ethics review roulette: what can we learn?
BMJ 2004;328:121 doi: https://doi.org/10.1136/bmj.328.7432.121 (Published 15 January 2004)
- 1Department of Primary Health Care, Oxford University, Oxford OX3 7LF
- 2James Lind Library, James Lind Initiative, Oxford OX2 7LG
That ethics review has costs and one size doesn't fit all
Ethics review is an “intervention” in the system of health care that has been less evaluated than others. It aims to minimise risks to patients from inappropriate research or inadequate consent, but as a consequence it may delay or inhibit research beneficial to those same patients. The balance of risks and consequences will clearly be different for different types of research: some questionnaires, clinical audits, or comparisons of standard treatments are associated with low risks, while comparisons of known treatments against placebo and studies of new, potentially dangerous interventions carry higher risks.
To what extent might studies of variations in the work of research ethics committees help investigate how this balance is managed? In this week's BMJ, Hearnshaw reports the latest of several investigations documenting variations in the work of research ethics committees.1 The principal messages from this body of evidence are that variations are often striking and the consequences can be substantial. In Hearnshaw's example, a trial of a leaflet intended to improve older patients' involvement in general practitioner consultations was deemed not to require ethical review in Austria, France, Germany, and Switzerland. In the UK, Belgium, and Slovenia, however, the proposal had to be reviewed by full committees, some of which required multiple copies of the application and an estimated five days of preparatory work.
Previous studies have shown that variation in ethics review is the rule. For example, for one multicentre clinical trial, between 1 and 21 copies of a protocol were required by each of 125 local research ethics committees, with two thirds of the committees withholding approval until the researchers had made amendments that were unrelated to local circumstances.2 Among 53 research ethics committees receiving proposals for another clinical trial, 13% of decisions were made by chairmen's actions and 36% by subcommittees, while over half of the proposals had to be considered by full committees.3
These variations have consequences for efforts to assess the effects of treatments. For example, a trial involving 51 centres needed over 25 000 pieces of paper, 62 hours of photocopying, and an average of 3.3 hours of investigator time for each centre.4 Multicentre research ethics committees were designed to reduce this burden, but another clinical trial that was reviewed by a multicentre committee also needed 5789 A4 pages to meet the varying requirements of local research ethics committees.5
The burdens imposed by ethics review might be justified if it could be shown that, on balance, it does more good than harm to patients' interests. Delays may, however, have important consequences and sometimes jeopardise the interests of patients. For example, a comparison of US and UK requirements for informed consent for participants in the ISIS-2 trial led to an estimate that about 10 000 unnecessary deaths resulted from whatever it was that slowed recruitment into the trial in the US.6
Delay of research eventually conducted may be less important than inhibition of efforts to evaluate the effects of healthcare interventions. There are many reasons for not being able to mount clinical trials promptly, but one is now the time and other demands of ethics review. No randomised treatment trials were done during the recent SARS epidemic, with a consequent loss of an opportunity to learn how to treat the disease during the next epidemic. In some spheres the very prospect of ethics review has become daunting. In the United Kingdom, for example, proposals to evaluate the effects of routine treatments for sick newborn infants7 have come under particularly intense scrutiny as a result of unproved allegations of research misconduct made in the report of a grossly incompetent government inquiry.8 Yet the government continues to refer to this inquiry when defending the arrangements introduced for ethics review and research governance.9
The shadow of protracted ethics review has been cast more widely because the boundary between research and ordinary clinical practice is not clear cut. For example, some clinicians are confused about the need for ethics approval for auditing clinical practice to detect and correct suboptimal care.10 Arguably, evaluation of routine practices, such as Hearnshaw's patient leaflet, should be part of the quality improvement expected of any self-respecting organisation. While alternative standard treatments—for example, different antihypertensive drugs—can be used interchangeably provided no evaluation is done, formal comparisons of the same treatments are assumed to require ethics review. Randomised “n of 1” trials are properly seen as an element of responsible clinical practice rather than research,11 and some ethics committees have accepted that such evaluations comparing two treatments considered “standard” do not require ethics review (E Triggs, personal communication), but this view is far from universal. Sometimes commonly used and effective treatments—such as prenatal corticosteroids—are regarded as experimental because they have not been licensed for that use. Yet pentosan polysulphate—a drug never before used in humans to treat Creutzfeldt-Jakob disease—has been given to a young man on the basis of a high court judgment. The judgment concluded, intriguingly, that although use of the treatment “cannot be regarded as a research project, there would be an opportunity to learn, for the first time, the possible effect of PPS on patients with vCJD.”12 If anything was needed to show that the borderlines between research, audit, and practice are not clear,13 this example is surely it.
With the introduction of new legislation and drug regulation processes in Europe next year, it is appropriate to reappraise the processes of ethics review. This cannot be pursued intelligently without confronting the overlap between the spheres of responsibility of research governance and clinical governance. Although ethical standards are clearly essential for all types of evaluation, the notion that “one size of ethics review fits all types of evaluation” should be rejected. It is time that a more concerted effort be made to assess the likelihood of benefits, harms, and costs of different approaches to ethics review for different types of evaluation.
Papers p 140
Competing interests None declared.