Education And Debate

Quality improvement report

Improving design and conduct of randomised trials by embedding them in qualitative research: ProtecT (prostate testing for cancer and treatment) study

Commentary: presenting unbiased information to patients can be difficult

BMJ 2002; 325 doi: http://dx.doi.org/10.1136/bmj.325.7367.766 (Published 05 October 2002) Cite this as: BMJ 2002;325:766

Improving design and conduct of randomised trials by embedding them in qualitative research: ProtecT (prostate testing for cancer and treatment) study

  1. Jenny Donovan, professor of social medicine (jenny.donovan@bris.ac.uk)a,
  2. Nicola Mills, research associatea,
  3. Monica Smith, research associateb,
  4. Lucy Brindle, research associatea,
  5. Ann Jacoby, professor of medical sociologyc,
  6. Tim Peters, professor of primary care health services researchd,
  7. Stephen Frankel, professor of epidemiology and public healtha,
  8. David Neal, professor of surgerye,
  9. Freddie Hamdy, professor of urology, for the ProtecT Study Groupf
  1. aDepartment of Social Medicine, University of Bristol, Bristol BS8 2PR
  2. bCentre for Health Services Research, University of Newcastle upon Tyne, Newcastle upon Tyne NE2 4AA
  3. cDepartment of Primary Care, University of Liverpool, Liverpool L69 3BX
  4. dDivision of Primary Health Care, University of Bristol, Bristol BS6 6JL
  5. eSchool of Surgical Sciences, University of Newcastle upon Tyne, Newcastle upon Tyne NE2 4HH
  6. fDivision of Clinical Sciences, University of Sheffield, Sheffield S5 7AU
  1. Correspondence to: J Donovan

    Abstract

    Problem: Recruitment to randomised trials is often difficult, and many important trials are not mounted because recruitment is thought to be “impossible.”

    Design: Controversial ProtecT (prostate testing for cancer and treatment) trial embedded within qualitative research.

    Background and setting: Screening for prostate cancer is hotly debated, and evidence from trials about the effectiveness of treatments (surgery, radiotherapy, and monitoring) is lacking. Mounting a treatment trial is controversial because of past failures and concerns that differences in complications of treatment but not survival make randomisation unacceptable to patients and clinicians, particularly for a trial including monitoring.

    Strategy for change: In-depth interviews explored interpretation of study information. Audiotape recordings of recruitment appointments enabled scrutiny of content and presentation of study information by recruiters. Initial qualitative findings showed that recruiters had difficulty discussing equipoise and presenting treatments equally; they unknowingly used terminology that was misinterpreted by participants. Findings were used to determine changes to content and presentation of information.

    Effects of change: Changes to the order of presenting treatments encouraged emphasis on equivalence, misinterpreted terms were avoided, the non-radical arm was redefined, and randomisation and clinical equipoise were presented more convincingly. The randomisation rate increased from 40% to 70%, all treatments became acceptable, and the three arm trial became the preferred design.

    Lessons learnt: Changes to information and presentation resulted in efficient recruitment acceptable to patients and clinicians. Embedding this controversial trial within qualitative research improved recruitment. Such methods probably have wider applicability and may enable even the most difficult evaluative questions to be tackled.

    Background

    The randomised controlled trial is the widely acknowledged design of choice for evaluating the effectiveness of medical and surgical interventions,1 but recruitment is often much lower than anticipated.2-4 Methodological literature is almost exclusively statistical and epidemiological, and very little of it is concerned with the conduct of trials or the particular demands that trials place on trialists and participants. Problems with mounting surgical trials are well known,5 and systematic reviews have identified a range of barriers for clinicians and patients.2 6 Nested studies within ongoing trials could help to elucidate recruitment difficulties.6

    The ProtecT (prostate testing for cancer and treatment) feasibility study provided such an opportunity. The study was controversial; although consensus existed that a trial of treatment was urgently needed, intense debate continued about whether it could be mounted. This was because of the differences in complications of treatment (but not in survival) between radical surgery, radiotherapy, and monitoring and the evidence from previous failures, including a Medical Research Council trial (PR06) and small scale attempts to randomise.7 8

    In the ProtecT study, men aged 50-69 were invited to a nurse led clinic in general practice, where they were given detailed information about the implications of testing for prostate specific antigen, uncertainties about treatments, and the need for a treatment trial. If the men consented, blood was taken for prostate specific antigen testing. Participants with abnormal results were invited to undergo further diagnostic testing. Men diagnosed with localised prostate cancer were randomised in a nested trial of recruitment strategies to see a nurse or urologist for an “information” appointment. The men were given details about the treatments and the need for a randomised trial and were asked to consent to randomisation to either a three arm (surgery, radiotherapy, monitoring) or a two arm (surgery, radiotherapy) trial. If they refused randomisation, a patient led preference for treatment was agreed. A multicentre research ethics committee gave ethical approval.

    Strategy for change

    We used qualitative research methods to investigate the process of recruitment:

    • In-depth interviews with men after receipt of prostate specific antigen results and diagnosis—to elicit interpretations of study information and experiences of the study, including treatment preferences (LB with JD)

    • Detailed examination of pairs of audiotaped recruitment (“information”) appointments and follow up interviews to examine the delivery of information by recruiters and its interpretation by patients (NM, MS, JD, AJ)

    • Detailed examination of other information appointments (all were routinely audiotaped) to investigate reasons for different levels of recruitment between centres and over time (JD).

    All interviews were semistructured and carried out using a checklist of topics to ensure that the same areas were covered while allowing issues of importance to the men themselves to emerge. Interviews and information appointments were audiotaped and fully transcribed. We analysed the data using the methods of “constant comparison,” in which transcripts are scrutinised for similar themes and then examined in detail within themes.9 10

    We used early findings to devise presentation strategies, which were implemented initially in one centre. We reproduced the findings and recommendations for changes to the content and presentation of information in three documents and circulated them to recruiters in June, October, and November 2000, and we developed a training programme and delivered it to recruiters. JD evaluated the impact of the documents and training by listening to subsequent information appointments. Recruitment (consent to randomisation and acceptance of allocation) was calculated regularly.
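    The regular calculation of recruitment described above amounts to tracking the proportion of eligible men consenting to randomisation in each monitoring period. As a rough illustration only (the counts and periods below are hypothetical, not the study's data, which reported a rise from about 40% to 70%):

```python
# Illustrative sketch of monitoring a consent-to-randomisation rate over
# successive recruitment periods. All counts here are invented examples.

def consent_rate(consented, eligible):
    """Proportion of eligible patients consenting to randomisation."""
    if eligible == 0:
        raise ValueError("no eligible patients in this period")
    return consented / eligible

# Hypothetical (period, consented, eligible) counts for three periods
periods = [
    ("May 2000", 8, 20),   # ~40% before changes to study information
    ("Nov 2000", 12, 20),  # after circulation of documents and training
    ("May 2001", 14, 20),  # ~70% once all changes were in place
]

for period, consented, eligible in periods:
    print(f"{period}: {consent_rate(consented, eligible):.0%}")
```

    Comparing such rates period by period, as the study team did, makes the effect of each change to the content and presentation of information visible soon after it is introduced.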

    Effects of change

    The rate of consent to randomisation changed over time as the findings from the qualitative research were introduced through the circulation of documents and delivery of training (table), increasing from 30-40% in May 2000 to 70% by May 2001. The findings from the qualitative research had an impact on the conduct of the trial in four major ways.

    Consent to randomisation to ProtecT study over time (patients with data available on final treatment decision). Values are numbers (percentages)


    (1) Organisation of study information

    Study information was based on the results of the team's systematic review of the literature,11 and treatments were presented in a standard order: surgery, then radiotherapy, and finally monitoring. Recordings of information appointments and patient interviews in the early part of the study showed clearly that the treatments were not presented or interpreted equally. Surgery and radiotherapy were portrayed in detail as aggressive, curative treatments, and monitoring was portrayed briefly as “watchful waiting” (box 1). Recruiters were asked to present the treatments in a different order: (1) monitoring, (2) surgery, and (3) radiotherapy and to describe their advantages and disadvantages in equivalent detail.

    Box 1: How treatments were presented

    • a) Early presentation of treatments

    • Clinician 1: “We believe that you are suitable for any of these three treatments … The first is radical prostatectomy. Probably the simplest answer is to remove the prostate gland completely—that gives you the opportunity of removing the whole of the cancer in its entirety. The problem is that radical prostatectomy is a major operation and there are risks … [26 lines follow]

    • “The second method is radiotherapy—you are trying to destroy the cancer cells by means of x rays without removing the gland … [30 lines follow]

    • “The final treatment is what we call watchful waiting. The basis of this is that we don't know whether your tumour is going to progress or not, and we can simply just watch it carefully … [10 lines follow]

    • “We can do [randomisation] for the three treatments—that is, surgery, radiotherapy, or watchful waiting—or if you didn't want to consider watchful waiting, just to compare two treatments which actually try to cure the disease, either surgery or radiotherapy”

    • b) Presentation and interpretation of “watchful waiting”

    • Clinician 2: “Watching it and treating—it's not treatment immediately, it's a different form of management: you're managing the disease rather than treating immediately, you're monitoring it and treating it if [it] shows signs of progression … if it does start to progress and cause problems you deal with them usually with hormone treatment”

    • Patient: “Well I suppose it's better for me to say now that I feel I would rather have something done about it at this stage”

    • Clinician 3: “Monitoring—obviously older people often choose that because they feel, you know, if they may not be around in 10 years time and it may be a good bet to take”

    • Patient: “Hmm”


    (2) Terminology used in study information

    Patients may interpret trial and clinical terminology differently than intended.12 13 For example, “trial” was sometimes interpreted as monitoring (“try and see”), and recruiters sometimes assumed that patients had refused randomisation when they were really questioning monitoring. Also, the phrase intended to reflect evidence of good 10 year survival (“the majority of men with prostate cancer will be alive 10 years later”) was interpreted as an (unexpected) suggestion that some might be dead in 10 years. Recruiters were thus asked to replace “trial” with “study” and to present survival in terms of “most men with prostate cancer live long lives even with the disease.”

    (3) Specification and presentation of the non-radical arm

    It was quickly apparent that the non-radical treatment option caused difficulties for patients and recruiters. “Conservative monitoring” was meant to emphasise regular review and lack of radical intervention. Recruiters often called it “watchful waiting,” but patients interpreted this as “no treatment,” as if clinicians would “watch while I die” (box 1a).

    In June 2000 (document 1) the non-radical arm was renamed “monitoring” and redefined to involve three monthly or six monthly prostate specific antigen tests, with intervention if required or requested. Recruiters emphasised the slow growing nature of most prostate cancers and presented monitoring first. Men were clearly informed that the risk with monitoring was that future radical treatment might not be possible if the tumour progressed or the patient was no longer fit enough for it. An immediate impact was seen as some patients accepted monitoring, but scrutiny of information appointments showed that some recruiters continued to express it as “inactive” compared with radical treatments (box 1b).

    Documents 2 and 3 included examples of “good” and “not so good” presentation of information and renamed the non-radical arm “active monitoring,” emphasising scrutiny of regular prostate specific antigen results so that radical treatments could remain an option for men who wanted them if the cancer progressed. Recruiting staff were then able to express confidence in this treatment option (box 2).

    Box 2: Presentation of “active monitoring”

    • Clinician 4: “The first one would be to be monitored very closely and not to receive any active intervention, and that would be by watching you every three months certainly for the first year—we'll bring you back, we'll do the blood test, we check the prostate, and if the disease remains stable then obviously you know everybody's happy. If the blood test starts to change, it is extremely sensitive and it would give us an indication that there may be more activity there, so then all the options are discussed again”


    (4) Presentation of randomisation and clinical equipoise

    Recruiters and patients also had difficulty with randomisation and clinical equipoise. Each document contained guidance on this. We found it necessary to emphasise that recruiters must be genuinely uncertain about the best treatment, believe the patient to be suitable for all three treatments, and be confident in these beliefs. Patients commonly expressed lay views that cancer should be removed, told stories of friends or relatives who had died of advanced disease, or brought media information that was often biased in favour of radical treatments. Recruiters were encouraged to elicit these views and then discuss differences with ProtecT study information, explain that randomisation offered a way of resolving the dilemma of treatment choice, attempt randomisation before the end of the information appointment, and inform patients that they could have time to consider whether the allocated treatment was acceptable.

    Lessons learnt

    Qualitative research methods are increasingly included in health services research, conventionally to help in the interpretation of quantitative results or understanding of trials.12 14 15 In the ProtecT feasibility study we inverted the normal relations between these methods and embedded the randomised trial within the qualitative study. We showed that the integration of qualitative research methods allowed us to understand the recruitment process and elucidate the changes necessary to the content and delivery of information to maximise recruitment and ensure effective and efficient conduct of the trial. The routine recording of information appointments was crucial: the content and method of delivery of the information provided the context within which the men's interpretations of the information could be set.

    The qualitative research illuminated four ways in which study information was having a negative impact on the study. Some of the issues raised were simple, such as reordering the presentation of treatments and avoiding terms that had particular and unanticipated meanings for patients. These “simple” issues would probably not have become apparent without the qualitative research. “Watchful waiting,” for example, is commonly used to describe a non-interventionist treatment. In lay terms, this conveys an impression of wilful neglect, in which the disease is watched and everyone waits for an event—death. It was only when the non-radical arm was redefined as “active monitoring” that patients and clinicians gained confidence in it as a legitimate option. Whether the term is more acceptable in other countries, such as the United States, needs investigation.

    Other issues that emerged were more complex. It has been shown elsewhere that patients have difficulty with randomisation.15-17 In this study most men could recall and understand randomisation, but they often found it difficult to accept. Equipoise was particularly difficult but has received remarkably little examination in the literature. We found it essential that recruiting staff were able to express confidently that men were eligible for all three treatments, that the most effective treatment was unknown, that a trial was urgently needed, and that randomisation could provide a plausible way of reaching a decision. If recruiters gave any indication that they were not completely committed to these aspects, patients would question randomisation, often using subtle and sophisticated reasoning that surprised some recruiters.

    Although our intention was to maximise both recruitment and informed consent, changes to the content and delivery of information could potentially be used to coerce patients and artificially inflate randomisation rates. One outcome might then be an increase in dropouts, but, as the table shows, the proportion who accepted the treatment allocation remained similar throughout the study. We are currently exploring reasons for rejection of allocation. The process of verbally presenting study information and obtaining written consent is not usually tape recorded or available for later scrutiny as it was here. Recruitment and informed consent in other trials may not have been maximised because of differing interpretations by patients and researchers. Although these methods carry a danger of coercion, our findings indicate that the study became more ethical over time as participants received unambiguous information that allowed them to make an accurately informed decision about whether to accept randomisation. Many men who rejected randomisation early on had received unbalanced information open to misinterpretation.

    The controversial nature of the study and the extreme differences between the treatment arms might limit the generalisability of the findings to other randomised trials. However, controversial trials attempting to tackle difficult or “impossible” questions could be the very studies that need to benefit from the qualitative evaluation used here. Indeed, the extreme nature of the treatment choices illuminated issues that were very difficult and encouraged patients to be explicit about their interpretations. The plausibility of these findings suggests that these methods could have a role in improving the efficiency and conduct of trials in general.

    The findings also support the contention that the conduct of trials is not straightforward. The concepts inherent in trials, particularly randomisation and equipoise, are complex and difficult and place particular demands on participants and recruiters. Better training and information for these groups may help, but this study suggests that qualitative methods need to be used in feasibility phases in order to understand recruitment to particular trials.

    Health services research is a developing tradition, in which different disciplines and paradigms are brought together to tackle health related questions. Combining different approaches can be difficult, but the ProtecT study brought together the qualitative traditions of sociology and anthropology, epidemiological and statistical disciplines informing randomised trial design, and academic urology and nursing. The method of the study contravened conventional approaches by being driven not by the randomised trial design but by the qualitative research. Effectively, the ProtecT feasibility study embedded the randomised trial within the qualitative research and followed a sociological iterative approach. Thus qualitative research methods applied in combination with open minded clinicians and flexible or innovative trial designs may enable even the most difficult evaluative questions to be tackled and have substantial impacts even on apparently routine and uncontroversial trials.

    Key learning points

    • Recruitment to randomised controlled trials is often problematic, potentially threatening the power and external validity of trials and wasting resources

    • Embedding the controversial ProtecT randomised trial within qualitative research allowed detailed investigation of the presentation of study information by recruiters and its interpretation by participants

    • Changes to the content and delivery of study information increased recruitment rates from 40% to 70%

    • The embedding of randomised controlled trials in qualitative research may enable even the most difficult evaluative questions to be tackled and could have substantial impacts on recruitment to apparently routine trials

    Acknowledgments

    Members of the ProtecT Study Group are John Anderson, Miranda Benney, Sally Burton, Daniel Dedman, Ingrid Emmerson, David Gillatt, John Goepel, Louise Goodwin, John Graham, David Gunnell, Helen Harris, Barbara Hattrick, Peter Holding, David Jewell, Clare Kennedy, Sue Kilner, Peter Kirkbride, J Athene Lane, Hing Leung, Teresa Mewes, Steven Oliver, Jon Oxley, Ian Pedley, Philip Powell, Mary Robinson, Liz Salter, Mark Sidaway, Carol Torrington, Lyn Wilkinson, and Andrea Wilson.

    Contributors: JD, FH, DN, and TP designed the ProtecT feasibility study. JD, NM, MS, LB, AJ, and SF analysed the qualitative data, and FH, JD, and DN integrated the findings into the ProtecT study. All authors contributed to the writing of the paper. JD, FH, and DN are the guarantors.

    Footnotes

    • Funding The research was funded jointly by the UK NHS research and development health technology assessment programme and the MRC health services research collaboration. Support for the ProtecT study also came from the South West NHS research and development directorate. The department of social medicine of the University of Bristol is the lead centre of the MRC health services research collaboration.

    • Conflict of interest None declared.

    References


    Commentary: presenting unbiased information to patients can be difficult

    1. Paul Little, professor of primary care research (psl3@soton.ac.uk)
    1. Community Clinical Sciences (Primary Medical Care Group), University of Southampton, Aldermoor Health Centre, Southampton SO15 6ST

      This is a welcome paper, and Donovan and colleagues should be applauded on several grounds: for researching a common and extremely important clinical dilemma that other trialists have had major difficulty with; for tackling problematic ethical and recruitment issues; and for highlighting the utility of qualitative methods in helping to understand problems of process in trials, in this case the difficulties of recruitment. Two issues arise from the paper, however: the interpretation of the results and the ethical issues surrounding informed consent.

      Regarding interpretation, this study used a qualitative action research design: observe, intervene, monitor changes, intervene again. The main qualitative results of the study are difficult to assess (they are presented elsewhere). Regarding the quantitative changes, the iterative process probably changed recruitment, but as this was an uncontrolled study other potential explanations exist:

      • A non-specific change with time of the recruiters' equipoise regarding watchful waiting could have occurred

      • A non-specific effect of attention to recruiters could have taken place.

      An underlying assumption is that more patients consenting to randomisation is a “good thing,” but this depends crucially on the evidence for equipoise, and how information about different choices is presented to patients. Thus it may be that consent to randomisation in 40% of eligible patients is as good as one can expect for such a difficult decision about a major life event and a potentially life threatening disease, about which most patients have little expertise or ability to assimilate information and hence to make informed choices.

      The way groups are described may be key to whether informed consent is given. The way watchful waiting was described before the iterative interventions seemed reasonable to me (box 1a), and the changed description of active monitoring perhaps represents an optimistic view of the control group (box 2). The precise security and safety of watchful waiting is a little difficult to judge, as highlighted by a review of observational studies and trials.1 Surely, if we already knew that watchful waiting was really “extremely sensitive” (as is implied in the description in box 2), a trial randomising patients at presentation would not have been needed, but rather randomisation to different treatment options after watchful waiting had shown progression? The authors have tried to improve information for recruiters and patients and are clearly sensitive to the issue of coercion. However, clinicians recruiting for trials are in a powerful position, and by “challenging patients' views” (table) or describing choices in overly optimistic terms they may unwittingly coerce patients.

      This also raises the question of who should judge what is a reasonable description of the groups for patients. Clearly, this is in the remit of ethics committees. However, ethics committees may not have the content expertise in a given research area (such as the sensitivity of prostate specific antigen testing in detecting progression) to judge whether a description is reasonable. Perhaps when descriptions of each group are being formulated for very difficult or contentious areas (for example, potentially life threatening disease), ethics committees should not only review changes to the presentation of information to patients but also use their power to consult someone who knows the evidence about each group intimately (that is, an impartial expert in the field). Although this is yet one more hurdle for ethics committees and researchers in an arguably over-regulated environment, perhaps such a hurdle is justified in such contentious areas.

      References
