Careers

Self experimenting doctors

BMJ 2011; 342 doi: https://doi.org/10.1136/bmj.d2156 (Published 12 April 2011) Cite this as: BMJ 2011;342:d2156
Rebecca Ghani, freelance journalist, London, UK
bexghani@live.co.uk

Abstract

Last year’s Christmas BMJ published a series of research papers in which the authors were both experimenters and subjects. Rebecca Ghani investigates the long and sometimes bizarre tradition of self experimenting doctors.

Self experimentation raises problems of practicality, accuracy, reliability, and ethics. You might assume that medical professionals would resort to it only if there were no alternative: is it feasible to be both a detached observer and a (possibly suffering) clinical trial subject? And why do this when there’s an agreed medical and ethical protocol for clinical trials?

But delve a little deeper and it’s possible to find not just a handful of DIY trials but a catalogue of examples where doctors and medical professionals have not only tested medical theories on themselves, but changed the course of medical history in the process.

Self experimenters have identified elusive disease triggers, forged major advances in pain management and anaesthesia, discovered ground breaking cures, and galvanised medical advances reliant on the evidence of the first human subject.

No, you go first

A doctor might self experiment for many reasons. In some cases it is a matter of convenience; in others the doctor might feel a moral need to go first; sometimes the experiment may not meet the requirements for an official study; and in some cases it is a last resort to convince sceptics and to get a speedy result.

As Barry Marshall—who proved the link between Helicobacter pylori and stomach ulcers—says in his biography page on the Nobel prize website: “I was driven to get this theory proven quickly to provide curative treatment for the millions of people suffering with ulcers around the world . . . I realised I had to have an animal model and decided to use myself.”1

Darryl Reese, chair of Cambridgeshire 1 Research Ethics Committee, comments: “That was dramatic and he did it because the scientific community including the pharmaceutical industry was shunning his work and saying that what he was doing was absolute nonsense.” He also notes: “[It was] not without its potential dangers. It could have gone horribly wrong of course. But ultimately, ethically, it’s up to him what he does himself, but if anyone else was to advise or to be in judgment of it they would naturally say: don’t do that, it’s too dangerous.” (Dr Marshall evidently realised that would be the case and told only his wife and the hospital’s ethics committee after he had swallowed the bacteria.)

Difficult to condemn or endorse

Self experimentation is a difficult concept both to condemn, because of its history of fast forwarding medical knowledge, and to endorse, because of the high risks and, some would say, the lack of rigour and the self serving nature of a solo experiment.

Asked about doctors taking part in their own approved clinical trials, Dr Reese says that it’s far from ideal to take the role of both investigator and subject: “How would they [the investigators] approach it in an objective manner if they were involved?” he says. “That’s why we have double blind studies and placebo controls: to eliminate bias.”

But others argue that doctors can be both investigator and subject in approved trials.

John Saunders, chairman of the Committee on Ethical Issues in Medicine at the Royal College of Physicians, says: “In principle I cannot see why they [the investigators] shouldn’t be subjects in clinical research. I can’t see any ethical objection to that. Indeed, I was a subject in my own research, which was basically physiological demonstrations, and I was one of the physiological subjects.”

Professor Saunders emphasises the difference between this and unauthorised self experiment: “I wasn’t the sole volunteer; I was taking part in my own experiment as an experimental subject . . . there’s no reason why I should be excluded from that study by virtue of being an investigator.”

High risk experiments

One of the main debates around high risk clinical experiments is how to deliver an ethically sound experiment, with informed consent at its core, when the available information is inadequate and only the experiment itself will provide it. It is something of a catch-22 situation.

The high risk argument is used on both sides of the debate, with some saying that the lead doctor should go first, precisely because it is high risk, and others saying that it is as irresponsible to put oneself in a potentially dangerous position as it would be to put a third party in it.

A recent example of fairly high risk self experimentation is David Pritchard’s 2004 study of hookworm parasites and their ability to calm allergic reactions and conditions such as asthma. The theory was that hookworms living in the human gut somehow interacted with the immune system and therefore helped reduce the effects of various autoimmune related illnesses.

To test the safety of being a “hookworm host,” Professor Pritchard and his colleagues volunteered themselves for the experiment. They applied a dressing covered in hookworm larvae to their arms and left it for several days to ensure the infection took hold. In an interview with the BBC, Professor Pritchard said: “We did it to show our commitment and because we felt it would only be appropriate to proceed once we had found a dose that was safe to try as part of a clinical trial.”

A catalyst for further research

Professor Pritchard’s study is an example of a case where the evidence from the self experimentation was used as a catalyst to drive the next stage of the experiment, and also to deal with the “who goes first” dilemma.

In some cases the investigators would not be suitable for an experiment. Many clinical trials are therapeutic—that is, concerning patients who have a specific condition—in which case it would be unlikely that the investigator would be a viable subject. Other trials recruit healthy volunteers, and this is where self experiments are more likely to occur, either following official approval or as a self experiment proper.

Approval process

Sometimes a potentially life saving solution could also be potentially life threatening or damaging. Animal testing can move an experiment on to a particular stage, but there always comes a vital point when human testing is imperative. And, as in the title of Lawrence Altman’s book on self experimentation, the dilemma is: who goes first?2

This is where the ethical and medical standards and protocols come in. In the United Kingdom clinical trials must pass through a thorough approval process: from research ethics committees, to the Medicines and Healthcare products Regulatory Agency, to the guidelines in the Declaration of Helsinki, there is a mass of approval to be sought and gained before a trial can start.

Although these processes have been criticised for being overly bureaucratic and “stifling,”3 they are widely considered essential to ensure clinical trials are carried out safely and ethically.

Nazi experiments

The history of ethical regulation has its roots in the Nuremberg Code—superseded by the Declaration of Helsinki. The Nuremberg Code was the first code of law and ethics in human experimentation, developed in response to the atrocities of the medical experiments undertaken by the Nazi regime.

The Nuremberg Code references the concept of self experimentation in article 5, saying: “No experiment should be conducted if there is an a priori reason to believe that death or disabling injury will occur, except, perhaps, in experiments where the experimental physicians also serve as subjects.”

An editorial on self experimentation accompanying the BMJ Christmas article points to evidence that article 5 was added to distinguish the Nazi experiments from other clinical trials that might have risked people’s lives. An example is the Cuban yellow fever trials, whose subjects included the investigators themselves. This exemption no longer forms part of global ethical guidelines and is strongly rejected in the editorial.4

The editorial also says that self experimentation proper—the investigator experimenting on him or herself only—should not be classified as research but labels it “self indulgence (or some might say, self abuse).”

“We are social animals”

When asked about the practice of investigators doing unofficial self experiments, Professor Saunders comments: “Where I think the objection does come is in these studies where somebody takes it upon themselves and says, well I’m entitled to do to my own body what I like and therefore I’m going to do this.”

He makes the point that while in theory individuals are free to do what they like to themselves, there are other aspects to consider: “The thing is we are social animals; we have responsibilities to each other and it is relatively unusual that one can claim that knowledge can only be reached in this way [by self experimentation].”

Professor Saunders goes on to describe the need to balance the pursuit of knowledge with other responsibilities: “I think that there is a point at which we say the price for new knowledge may be so high that it conflicts with other values that we have as a society.” He continues: “I think just saying, oh well this is the only way we can find this out—and it often isn’t—[but even] if that is true it is not automatically justified.” He adds: “It just means that scientific progress may be a bit slower.”

The information era

As Altman says in Who Goes First,2 the past century generated a huge number of medical breakthroughs, reliant on the use of human experiments—and often self experiments: “Significant advances were made more often in the twentieth century than in all of history, and underlying those advances is a cardinal fact: they were achieved only through experiments on humans.”

So is the concept of self experimentation changing? It could be argued that before the internet age, it was far more difficult to source and organise subjects for trials. In the communication age, when queues of willing (and paid) participants can be sought and found at the click of a mouse, is the justification for self experimentation losing validity?

Dr Reese says: “It [self experimentation] was done more commonly years ago.” He goes on to suggest: “Possibly out of naivety or circumstances maybe; or out of the fact that it wasn’t the computer age and we didn’t have the amount of information available to us to avoid subjecting potential harm to ourselves?”

The approval process

Michael Rawlins, chairman of the National Institute for Health and Clinical Excellence and senior fellow at the Academy of Medical Sciences, has recently chaired a working party at the academy, which published the report A New Pathway for the Regulation and Governance of Health Research, recommending ways to streamline the research approval process in the UK.

Professor Rawlins’s first encounter with self experimentation was in the 1960s. “The very first experiment I did on myself was giving myself a fever and then seeing if I could lower it with intravenous aspirin.” He and his colleagues infused themselves with bacteria and tested the effects of intravenous aspirin. After undertaking further studies, they showed that the aspirin lowered temperature via the central nervous system rather than by blocking the formation of proteins.

Professor Rawlins said: “I’ve done it all my life, I’ve lost count really! I’ve given myself all sorts of medicines, collected blood samples, saliva samples, done various tests on myself.” In a recent interview about the report on the Today programme on Radio 4, Professor Rawlins highlighted the use of self experimentation, saying: “You’ve also got to remember that investigators often investigate themselves.” He went on to say: “I’ve done dozens of studies on myself and I’m in reasonably good shape still. And we wouldn’t—most of us—do anything to other people that we wouldn’t do to ourselves.”

Footnotes

  • Competing interests: RG is an employee of the National Patient Safety Agency.

  • From the Student BMJ.

References
