Disentangling separate effects

BMJ 2006;332 doi: https://doi.org/10.1136/bmj.332.7538.0-f (Published 16 February 2006). Cite this as: BMJ 2006;332:0-f
- Jane Smith, deputy editor
A cartoon by the American cartoonist Sidney Harris has one white-coated researcher saying to another: “Find out who set up this experiment. It seems that half of the patients were given a placebo and the other half were given a different placebo.” In this week's BMJ we bring you that experiment.
On p 391 Ted Kaptchuk and colleagues report their randomised trial of two “placebo treatments.” They wanted to see whether a sham acupuncture needle had a greater placebo effect than an inert pill in patients with persistent arm pain. The patients were randomised to a two-week run-in period with either the sham device or a placebo pill. They were then re-randomised: those on the sham device either continued it or switched to real acupuncture, and those on the placebo pill either continued it or switched to amitriptyline. During the run-in period pain decreased about equally in both placebo groups, but during the subsequent treatment periods pain fell more in the sham acupuncture group than in the placebo pill group. The types of side effects differed between the two placebo groups and, say the authors, “clearly mimicked the information given at informed consent.” The authors acknowledge as a limitation that they did not include a group that had no treatment at all.
This paper is one of several this week in which authors try to disentangle separate effects. For example, Ann Oakley and colleagues describe how process evaluation can improve the interpretation of randomised trials of complex interventions (p 413). Their illustration is a cluster randomised trial of peer led sex education among schoolchildren, in which the process evaluation aimed to document how the interventions were implemented, compare the processes in the two arms, assess the experience of taking part, and study school contexts. The researchers could then see whether outcomes varied with the quality of implementation and whether subgroups of students differed in their responses. For example, they found that when the education was participative, peer led teaching was most effective, but that when it wasn't, teacher led education was more effective. The authors agree that their methods challenge traditional thinking about clinical trials but argue that they fit with other methodological developments, such as piloting and taking account of context.
Finally, there's a cautionary tale for those who want to speed up patients' access to new treatments. Three months after the US Food and Drug Administration had given natalizumab accelerated approval for treating relapsing multiple sclerosis, its makers withdrew it after two patients developed progressive multifocal leucoencephalopathy. On p 416 Abhijit Chaudhuri dissects what went wrong: cumulative safety data weren't available, the trials' end points were dubious, natalizumab's mechanism of action was always risky, and the animal model was not suitable. His message is that what happened with natalizumab should be “a signal to change the way we treat this disease.”