Education And Debate

What do I want from health research and researchers when I am a patient?

BMJ 1995;310:1315 (Published 20 May 1995)

Iain Chalmers, former clinician and health services researcher, Oxford OX2 6HX

    I have attempted to adopt the perspective of a patient—albeit one with a rather atypical background—to explore what I want from health research and researchers. This has left me with the impression that health researchers could serve the interests of the public more effectively in a variety of ways, and that they would be helped to do so by greater lay involvement in planning and promoting health research.

    Three years ago—a couple of decades after I first coauthored a research report—I was presented with an opportunity to wind up my career in health services research. An unexpected consequence has been that I have found it easier to ask myself how health research and researchers might serve lay people more effectively. I have begun to ask “What do I want from health research and researchers?”

    Any personal view is inevitably shaped by personal experiences. Fairly soon after I qualified as a clinician I began to realise that my attempts to apply some of the therapeutic principles taught at medical school were sometimes resulting in unnecessary deaths. This sobering experience led me to be sceptical of received wisdom, an attitude that was reinforced when, as a health services researcher, I became aware of the quality of the evidence on which many therapeutic claims are based. It is against this background that, as a patient, I want decisions about my health care to be informed by reliable evidence.

    What do I want from research?

    People are bound to vary in what they regard as reliable evidence. A leap of faith will always be required to make causal inferences about the effects of health care. For example, after about five treatments from a chiropractor to whom she had been referred by her general practitioner, my wife began to believe that chiropractic could help relieve her chronic shoulder and back pain. Although I was delighted that her longstanding symptoms had subsided, I did not begin to share her belief that chiropractic might have been responsible until a couple of years later when I read the report of a systematic review of the relevant controlled trials.

    For me “reliable evidence” about the effects of health care will usually mean evidence derived from systematic reviews of carefully controlled evaluative research. Sometimes, when the effects of care are dramatic, such research is unnecessary. For example, carefully controlled research is not needed to show that if my cardiac ventricles start fibrillating it would be worth using a defibrillator to try to persuade them to behave more normally; or that if I become crippled by osteoarthritis of the hip joint, a prosthesis will probably relieve my pain and immobility. But most forms of health care, including the important but less tangible elements such as suggestion, have less dramatic effects than these. If these moderate but important effects are to be detected reliably then systematic reviews of carefully controlled research will be required to produce the kind of evidence that I am likely to believe, and that I would wish those offering me care to take into account.

    For example, a recently published systematic review of 145 randomised controlled trials showed that a simple and inexpensive treatment—half a tablet of aspirin a day—reduces the risks of heart attack and stroke. The analyses presented make it clear that my father, who has recently been affected by transient cerebral ischaemic attacks and a couple of minor strokes, is doing the right thing by taking aspirin. This should reduce his chances of experiencing a seriously disabling stroke by about 25%. The review also shows that as long as my own risk of developing vascular problems remains no more than average it would almost certainly not be worth my taking aspirin prophylactically.

    In contrast to the reliable evidence on aspirin, I seem to be lumbered with rather unreliable evidence when my prostate starts to play up. If I decide to opt for surgical relief of my symptoms, I will have to ponder the fact that several studies have suggested that men who have had the simpler, more widely used, transurethral operation are more likely to die during the next few years than men who have had the more complex, now less frequently used, open operation. The problem is that this observed risk difference may reflect either a differential effect of the two surgical approaches or uncontrolled biases in the research—and no one can tell for certain which, because urologists and men with prostate problems did not collaborate in properly controlled comparisons of the transurethral and open operations when transurethral operations were introduced. Had they done so, I might have been able to take the kind of informed decision that will be possible if I have to consider surgery to reduce my risk of having a stroke. Unlike urologists, vascular surgeons and people at increased risk of stroke because of arteriosclerosis have assessed in randomised controlled trials which surgical procedures are useful. Thanks to their efforts, I know that while I might opt for a carotid endarterectomy—particularly if I have a severe degree of carotid stenosis—an external carotid-internal carotid bypass operation would be unlikely to help me.

    Having reliable evidence about the effects of health care is not enough; I want the evidence to be relevant as well. Sometimes, for example, only easily measured, surrogate end points have been studied. Research has shown reliably that there are drugs that can reduce the chances of my developing an arrhythmia if I have a myocardial infarction; but this is considerably less relevant to me than the evidence suggesting that these drugs may also reduce my chances of survival. Even when research has yielded reliable information about the effects of treatment on a relevant outcome like death, evidence about other important effects of treatment may be missing. Because I expect suffering to cease with death, I want reliable evidence about the effects of care on outcomes which may affect the quality of my life, during and after treatment. For example, although I am interested in evidence suggesting that using tissue plasminogen activator rather than streptokinase for thrombolysis after myocardial infarction might increase somewhat my overall chances of survival, I weigh this together with evidence suggesting that tissue plasminogen activator would also increase my chances of surviving with the disabling sequelae of a haemorrhagic stroke.

    Inevitably, there will be many occasions when no reliable, relevant research evidence exists to guide decisions about my health care. In these circumstances, when the relative merits of alternative forms of care are uncertain, I want to be offered the opportunity to participate in properly controlled research—and the emergency medical card that I carry makes this wish explicit. Two years ago, entirely as a result of my own stupidity, I broke my fibula in New Hampshire. An American orthopaedic surgeon told me that, after the swelling had subsided, my lower leg would be put in a plaster cast for six weeks. Thirty six hours later, after returning home, I was seen by a British orthopaedic surgeon, who told me that my leg would not be put in a plaster cast, and that I was to wear supportive strapping and to walk on the leg as much as possible. Because these two orthopaedic surgeons had prescribed very different forms of care, I asked the second one whether I might be entered into a randomised controlled trial to help resolve the contradictory advice that I had received. He explained to me that only doctors who are uncertain whether they are right or wrong collaborate in randomised controlled trials—and he was certain that he was right!

    My wish to be entered into randomised controlled trials when the relative merits of alternative forms of care are uncertain is purely selfish. First, patients receiving treatment as participants in such trials seem to fare better than apparently comparable patients receiving the same treatments outside trials. Second, new technologies seem as likely to be inferior as superior to existing alternatives, so randomisation provides an efficient hedging strategy in the face of these evenly balanced odds. Third, randomised controlled trials help to generate reliable information on which to base future decisions about my health care.

    What do I want from researchers?

    Until recently, there has been too little support for the kind of applied health research that is required to inform choices in health care. Research to elucidate the mechanisms leading to disease has attracted greater status and funding. Such basic research is obviously important (the discovery of the role of Helicobacter pylori in peptic acid disease is a striking recent example). But the interests of patients will not be served effectively without relevant applied research. Although I may be interested in the results of basic research which suggest that particular interventions should work, as a patient I need to know which forms of care do (and do not) work in practice.

    There are some triumphant examples of synergy between basic and applied research. On the basis of his laboratory and animal research, Sir Graham Liggins developed the theory that a short course of corticosteroids given to women who were expected to give birth prematurely should help their babies. The validation of his theory in his own and subsequent randomised controlled trials stands as one of the most important discoveries in the history of perinatal research.

    Unfortunately, there are also counterexamples, where researchers have not been sufficiently self disciplined to test theories about the effects of health care derived from basic research (“These forms of care should work”) in properly controlled applied research (“These forms of care do work”). For example, observations derived from animal experiments led many people (including a researcher who was recently celebrated on a United States postage stamp) to extrapolate the results directly to clinical practice. As a result, thousands of newborn babies were cooled down on the assumption that this would protect them against brain damage. Subsequent applied research done by others (who have not so far been celebrated on postage stamps) showed that cooling newborn babies often killed them.

    Care may also be rejected inappropriately because researchers could not imagine how it might work. For example, theories derived from laboratory and animal investigations led some researchers to suggest that it would be pointless to give the anti-oestrogen drug tamoxifen to women with breast cancers on which no oestrogen receptors could be detected. Other researchers, extrapolating the results of experiments in rats, suggested that there would be no point in giving clot dissolving drugs to people who had experienced a myocardial infarction more than six hours previously. These theories led to the exclusion of patients with these characteristics from some of the applied research studies set up to assess the effects of these treatments. If some such patients had not participated in some of the experiments we would not know that they can actually benefit from the treatments which theories had predicted could not help them.

    It will always be necessary to make choices about what gets on to the applied health care research agenda, and an assessment of evidence derived from relevant basic research will always be important. But a greater readiness among researchers to acknowledge that evidence derived from basic research—like any other category of evidence—will always be incomplete should help to minimise mistakes of the kind just described. It might also help to encourage the use of applied research to assess the effects of various forms of complementary medicine, even if it is difficult to imagine how they might work.

    Lay involvement in research

    My attempt to adopt the perspective of a patient has left me with the impression that scope exists for health research and researchers to serve the interests of the public more effectively. Would researchers be helped to do more relevant research if the public became more involved in planning and promoting research?

    Over a period of more than two decades I have witnessed lay contributions to research in pregnancy and childbirth (see article by Oliver, p 1318),1 and this experience has led me to believe that greater lay involvement in health research would help to promote reliable, relevant research of importance to patients and those caring for them. Lay people have helped researchers to identify important research questions. It was the mother of a young woman with vaginal adenocarcinoma who first suggested that her daughter's cancer might have been caused by the drug (diethylstilbestrol) which she (the mother) had been prescribed during pregnancy; and it was the mother of a child with trisomy 18 who first suggested that a low level of maternal serum alpha fetoprotein might be a prenatal marker for this chromosomal abnormality.

    Lay people have helped researchers to assess the effects of health care in terms of outcomes of importance to people receiving health care. Women living in mud floored huts in Papua New Guinea questioned the implications for them of research which had shown that mass antimalarial chemoprophylaxis during pregnancy would increase birth weight; they were concerned that bigger babies might increase the risks of mechanical problems and trauma during childbirth. Lay people invited by researchers to comment on a protocol for a trial to assess whether low dose aspirin taken during pregnancy would reduce problems associated with hypertension asked why there were no plans to follow up the babies of women participating in the trial—yet for decades, women had been warned not to take aspirin during pregnancy because it might harm their babies.

    Lay people have helped to initiate and pursue research successfully. For example, consumer groups prepared the information leaflet for women being invited to consider participating in the Medical Research Council's randomised comparison of chorionic villus sampling and amniocentesis for prenatal diagnosis. Lay commentators promoted women's participation in the trial by making their support for it known through the media. Indeed, they suggested that until more was known about the effects of this invasive new technique for prenatal diagnosis it would be wrong for women to be offered chorionic villus sampling outside the context of controlled trials. Some consumer groups have themselves organised and conducted research on questions of importance to them (infection after childbirth, for example) because these have not been addressed adequately by the research community.

    Lay people have helped researchers to think through the implications of the results of research. For example, I was pleased when a large randomised trial showed that intensive monitoring of babies during labour reduced their likelihood of having seizures after delivery, because this hypothesis had been derived from my first attempt to prepare a systematic review of controlled trials. Women's comments on the trial helped me to put the confirmed hypothesis into perspective. For many of them, increasing the chances of a baby not having seizures from 996 per 1000 to 998 per 1000 (with no evidence that this would be reflected in any more substantive beneficial effect in the longer term) was simply not an adequate incentive to accept the encumbrance of being connected to intensive fetal monitoring equipment during labour.

    Finally, lay people have helped to ensure that the results of research influence both practice and future research. I have been astonished by the extent to which the results of systematic reviews of research assessing the effects of care during pregnancy and childbirth have been taken up by lay people, both at an individual level (ranging from women using the maternity services to the undersecretary of state for health in the House of Lords) and as groups (ranging from local branches of the National Childbirth Trust to the health committee of the House of Commons).

    The many lay contributions to research in pregnancy and childbirth encourage me to believe that there should be greater lay involvement in research more generally. No one—and certainly not researchers—can claim a monopoly of relevant wisdom in discussions about what deserves attention in health research. Lay people can draw on kinds of knowledge and perspectives that differ from those of professional researchers. Greater lay involvement in setting the research agenda would almost certainly lead to greater open mindedness about which questions are worth addressing, which forms of health care merit assessment, and which treatment outcomes matter. It should also help to counter the perverse incentives that lead researchers to do trivial and sometimes frankly unnecessary research, such as placebo controlled trials within classes of drugs in which existing preparations are already known to be effective (for example, prophylactic antibiotics for many forms of surgery).

    If health researchers are to respond positively to the opportunities that exist for exploring how lay people might become more involved in research, some changes in attitude will be required. Researchers sometimes betray fundamentally disrespectful attitudes towards the public. Medical researchers would do well to follow the example set by the British Psychological Society. After noting that “psychologists owe a debt to those who agree to take part in their studies,” who therefore deserved to be treated “with the highest standards of consideration and respect,” the society recommended that the term subject should be abandoned and replaced by participant.2 Researchers sometimes reveal cavalier attitudes to the public in other ways. For example, it remains rare for researchers to offer to send people who have participated in research a summary of the results of the work to which they have contributed, and to ensure that the results of research are published.

    As far as I am aware, my belief that the public might be served more effectively by research and researchers if there was greater lay involvement at all stages of the research process cannot be supported by formal evidence, and there is certainly scope for research to address this issue. At its simplest, this research might consist of an exploration of the feasibility of lay involvement in conducting and commenting on descriptive studies of past and current patterns of research activity in particular fields or localities. Controlled intervention studies should be feasible as well, perhaps using research ethics committees as experimental units.

    Many people, however, may feel that greater lay involvement in a pattern of research decision making which has been dominated by professional researchers is justified on the basis of existing informal experience, common sense, and justice. Greater lay involvement in research would also seem likely to result in the development of a lobby of well informed lay people to press for the resources needed to address a more substantial proportion of the many unanswered questions relevant to promoting and protecting health.

    This paper is based on a talk given at the Harveian Society of London in January 1994. I am grateful for comments on earlier drafts from Hilda Bastian, Thurstan Brewin, Andrew Chivers, Ruth Evans, Claire Foster, Paul Garner, Gillian Gyte, Andrew Herxheimer, Richard Lilford, Stephen Lock, Sandra Oliver, David Sackett, William Silverman, Jane Smith, and Hazel Thornton. It should not be assumed that they endorse all of my views.

