Does this work for you?
BMJ 2008;337 doi: https://doi.org/10.1136/bmj.a2281 (Published 30 October 2008)
Cite this as: BMJ 2008;337:a2281
Nicholas A Christakis, professor of medical sociology, Harvard Medical School, and attending physician, Mt Auburn Hospital, Cambridge, Massachusetts
christak@hcp.med.harvard.edu
Doctors say that a drug “works” if, in comparison with the control arm of a clinical trial, significantly more people in the treatment arm respond. Unfortunately, this is a naive oversimplification, and it breeds complacency among patients and physicians alike.
Criticisms of this perspective have been lodged before. One is that researchers often pick outcomes that are not patient centred: patients care less whether a tumour shows “shrinkage upon radiological visualisation” than whether they are in less pain. Another criticism is that when side effects of drugs are factored in, many patients do not think that a drug works very well at all, even as the doctor or drug company extols its virtues; drop-out rates in the active agent arm of trials often exceed those of the placebo arm, providing evidence of patients’ distaste for the overall effects of a drug.
But one problem that has received far less attention is that when patients say a drug “works” they typically mean something quite different from what doctors mean. Patients mean that most people who take the drug respond to it. This, however, is rarely the case. In fact, some of the most widely prescribed drugs today have no effect in most patients who take them.
For example, sildenafil (Viagra) works less than half the time: even when used in a dose-optimised fashion (where the dose is titrated from 25 mg to 200 mg), and when effectiveness is gauged by the number of men who report that at least 60% of attempts at sexual intercourse are successful, only 48% of men are found to respond to the drug (compared with 11% who respond to a placebo) (BMC Urology 2002;2:6, doi:10.1186/1471-2490-2-6). When the 25 mg dose of the drug is used, 28% of the men report success (compared with 10% in the placebo arm). Most patients taking sildenafil should thus not expect it to “work.” In fact, we could quite honestly tell patients that the 25 mg dose does not work 72% of the time.
The use of pregabalin to treat post-herpetic neuralgia provides a similar example. Roughly 50% of patients report that their pain scores drop by half or more, compared with 20% of patients receiving a placebo (Neurology 2003;60:1274-83). At least half of patients, in other words, would not think that this drug has worked by this measure of success.
An alternative way of seeing the same phenomenon is to ask how often placebo “works.” Consider the use of atorvastatin to prevent cardiovascular disease. The ASCOT trial, which followed more than 10 000 patients for an average of 3.3 years, found that 1.9% of people who were taking the drug had a heart attack, whereas among the patients taking a placebo the figure was 3% (Lancet 2003;361:1149-58, doi:10.1016/S0140-6736(03)12948-0). This is an impressive difference, yet many patients might not want to take the drug if they were told that a placebo worked at least 97% of the time.
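The same numbers can be put in absolute terms. The following back of the envelope calculation is not part of the trial report; it simply rearranges the percentages quoted above into an absolute risk reduction and an approximate number needed to treat:

\[
\text{absolute risk reduction} = 3.0\% - 1.9\% = 1.1\%,
\qquad
\text{number needed to treat} \approx \frac{1}{0.011} \approx 91.
\]

On these figures, roughly 91 people would need to take atorvastatin for 3.3 years for one additional person to avoid a heart attack; for the other 90, the outcome would on average have been the same without the drug.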
Countless drugs that have been shown in randomised controlled trials to be effective work in only a minority of patients. Imagine a drug that worked 20% of the time in a trial, compared with 5-10% for a placebo, as is the case for drugs ranging from antihypertensives to minoxidil to cancer chemotherapy. Such a difference corresponds to an enormous effect size in a trial. Yet most patients taking such a drug would not benefit; they would hardly think that it “worked.”
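Taking those hypothetical figures at face value, a little illustrative arithmetic (not drawn from any particular trial) shows how few patients the drug itself actually helps:

\[
\text{drug-attributable response} = 20\% - (5\ \text{to}\ 10)\% = 10\ \text{to}\ 15\ \text{percentage points},
\]
\[
\text{number needed to treat} = \frac{1}{0.15}\ \text{to}\ \frac{1}{0.10} \approx 7\ \text{to}\ 10.
\]

Four out of five patients given such a drug experience no response at all, and of the one in five who do respond, a quarter to a half would have responded to a placebo anyway.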
If you buy a toaster you expect it to be able to toast bread every time it is used. If it does not, you say it does not work and return or discard it. You do not take solace from the claim that, in fact, 30% of the time in the manufacturer’s laboratory the toaster did a better job of browning bread than sunshine alone.
My point is not that drugs evaluated in randomised controlled trials are not terrific. They are. And the scientific evidence for their efficacy is impressive. Rather, the problem is that patients and doctors lose sight of what trials actually show. They form false expectations of a drug’s effectiveness, or they fail to stay alert to the possibility that the drug may have no effect whatsoever in any one person, and so they never consider the need to switch drugs or to stop treatment.
Attention to variation in patients’ responses is thus essential for any drug that does not affect nearly 100% of those who take it. This variation has two components: variation related to observable factors (such as age or clinical status) and variation that is unpredictable. Because the original clinical trials showing that drugs work are rarely powered to examine variation across observable factors, post-marketing observational studies are needed to determine which patients, on average, do or do not respond.
As for the unpredictable variation, one appropriate reaction is to have a protocol of administration that evaluates a patient’s response. Doctors sometimes already do this in a systematic way (such as when titrating the administration of highly active antiretroviral treatment). But this practice should be more widespread and more formal—and should especially be implemented in the case of drugs that have been shown to benefit only a minority of patients.
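A minimal sketch of what such a formal protocol might look like is given below. Everything in it (the drug, the outcome measure, the thresholds, and the review schedule) is an illustrative assumption rather than anything specified in the text; the point is only that the continue, switch, or stop decision can be made explicit and systematic.

```python
# A hypothetical sketch, not an established protocol: give the drug for a
# predefined trial period, reassess a patient-centred outcome at fixed
# intervals, and stop or switch if no adequate response is seen.

from dataclasses import dataclass


@dataclass
class TrialOfTherapy:
    drug: str
    response_threshold: float   # fractional improvement counted as a response, e.g. 0.5
    review_interval_weeks: int  # reassess the patient at this interval
    max_reviews: int            # abandon the trial of therapy after this many reviews

    def decide(self, baseline_score: float, follow_up_scores: list[float]) -> str:
        """Return a decision: continue, stop or switch, or reassess later."""
        for review, score in enumerate(follow_up_scores[: self.max_reviews], start=1):
            improvement = (baseline_score - score) / baseline_score
            if improvement >= self.response_threshold:
                week = review * self.review_interval_weeks
                return f"continue {self.drug}: adequate response by week {week}"
        if len(follow_up_scores) >= self.max_reviews:
            return f"stop or switch: no adequate response to {self.drug} within the trial period"
        return "reassess later: trial period not yet complete"


# Illustrative use: post-herpetic neuralgia, pain scored 0-10, a 50% drop counted as success.
protocol = TrialOfTherapy(drug="pregabalin", response_threshold=0.5,
                          review_interval_weeks=2, max_reviews=3)
print(protocol.decide(baseline_score=8.0, follow_up_scores=[7.5, 7.0, 6.8]))
# -> stop or switch: no adequate response to pregabalin within the trial period
```

The particular thresholds matter less than the habit the sketch embodies: deciding in advance what will count as a response, when it will be measured, and what will happen if it does not appear.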
Just because drugs work in trials does not mean they will work in our patients. In fact, we can often expect that they will not work at all.