
Analysis and comment: Health services research

Process evaluation in randomised controlled trials of complex interventions

BMJ 2006;332:413 doi: https://doi.org/10.1136/bmj.332.7538.413 (Published 16 February 2006)
  1. Ann Oakley, professor of sociology and social policy (a.oakley@ioe.ac.uk)1,
  2. Vicki Strange, research officer1,
  3. Chris Bonell, senior lecturer in sociology and epidemiology2,
  4. Elizabeth Allen, statistician3,
  5. Judith Stephenson, senior lecturer3

    RIPPLE Study Team
  1. Social Science Research Unit, Institute of Education, University of London, London WC1H 0NR
  2. Public and Environmental Health Research Unit, London School of Hygiene and Tropical Medicine, London WC1E 7HT
  3. Centre for Sexual Health and HIV Research, London WC1E 6AU
  Correspondence to: A Oakley
  • Accepted 14 October 2005

Most randomised controlled trials focus on outcomes, not on the processes involved in implementing an intervention. Using an example from school based health promotion, this paper argues that including a process evaluation would improve the science of many randomised controlled trials

“Complex interventions” are health service interventions that are not drugs or surgical procedures, but have many potential “active ingredients.”1 A complex intervention combines different components in a whole that is more than the sum of its parts.2 Randomised controlled trials (RCTs) are the most rigorous way to evaluate the effectiveness of interventions, regardless of their complexity. Because of their multifaceted nature and dependence on social context, complex interventions pose methodological challenges, and require adaptations to the standard design of such trials.3 This paper outlines a framework for using process evaluation as an integral element of RCTs. It draws on experience from a cluster randomised trial of peer led sex education.

Why evaluate processes?

Conventional RCTs evaluate the effects of interventions on prespecified health outcomes. The main question is, “Does it work?” Process evaluations within trials explore the implementation, receipt, and setting of an intervention and help in the interpretation of the outcome results. They may aim to examine the views of participants on the intervention; study how the intervention is implemented; distinguish between components of the intervention; investigate contextual factors that affect an intervention; monitor dose to assess the reach of the intervention; and study the way effects vary in subgroups.4 Process evaluation can help to distinguish between interventions that are inherently faulty (failure of intervention concept or theory) and those that are badly delivered (implementation failure).5 Process evaluations are especially necessary in multisite trials, where the “same” intervention may be implemented and received in different ways.

Although “process” and “qualitative” are often used interchangeably, data for process evaluation can be both quantitative and qualitative. For example, in a series of systematic reviews of research on health promotion and young people, 33 (62%) of 53 process evaluations collected only quantitative data, 11 (21%) collected only qualitative data, and nine (17%) collected both.6 Process evaluation in RCTs is common in health promotion and public health, but often data are not collected systematically and are used mostly for illustration.


[Figure] Baby talk: the most effective sex education? Credit: Tony Kyriacou/Rex

Process evaluation in the RIPPLE study

The RIPPLE (randomised intervention of pupil peer led sex education) study is a cluster RCT designed to investigate whether peer delivered sex education is more effective than teacher delivered sessions at decreasing risky sexual behaviour. It involves 27 English secondary schools and follow-up to age 19. In schools randomised to the experimental arm, pupils aged 16-17 years (given brief training by an external team) delivered the programme to two successive year cohorts of 13-14 year olds. Control schools continued with their usual teacher led sessions. The trial was informed by a systematic review and a pilot study in four schools.7 8 Designed by a multidisciplinary research team, it includes an integral process evaluation with four aims: to document the implementation of the peer led intervention and of sex education in control schools; to describe and compare processes in the two forms of sex education; to collect information from study participants (schools and students) about the experience of taking part in the trial; and to collect data on individual school contexts.9

Box 1: Methods for data collection in the RIPPLE trial process evaluation

  • Questionnaire surveys of students and peer educators

  • Focus groups with students and peer educators

  • Interviews with teachers

  • Researcher observation of peer led and teacher led sex education

Several methods were used to collect process data (box 1), including questionnaire surveys, focus groups, interviews, researcher observations, and structured field notes. Some methods, such as questionnaire surveys, were also used to collect outcome data. Other methods, such as focus groups and interviews with school staff and peer educators, were specific to process evaluation.

The outcome results by age 16 showed that the peer led approach improved some knowledge outcomes, increased satisfaction with sex education, and, in girls, reduced intercourse and increased confidence about the use of condoms. Girls in the peer led arm reported lower confidence about refusing unwanted sexual activity (of borderline significance). The intervention did not affect the incidence of intercourse before age 16 in boys, unprotected first sex, regretted first intercourse or other measures of the quality of sexual experiences and relationships, confidence about discussing sex or contraception, or some knowledge outcomes in both girls and boys.10

We analysed process data in two stages. First, before analysing any outcome data (box 2), we analysed the process data to answer questions arising from the aims outlined above. Second, the hypotheses this generated were tested in statistical analyses that integrated process and outcome data to address three questions:

  • What was the relation between trial outcomes and variation in the extent and quality of the implementation of the intervention?

  • What processes might mediate the observed relation between intervention and outcomes?

  • Did subgroups of students and schools differ in their responses to the intervention?

We used several strategies to combine the different types of data. These included on-treatment analyses, in which results for students who actually received the peer led intervention were compared with results from the standard intention to treat approach (in which allocation to, rather than receipt of, the intervention forms the basis of the analysis). We then carried out regression analyses and, where appropriate, tests for interactions to examine the relations between key dimensions of sex education, the subgroups of schools and students most and least likely to benefit from the peer led programme, and study outcomes (further details are available elsewhere).11 12 We used regression analyses because they can handle many types of outcome, assess the impact of mediating factors, and analyse subgroups.13 14
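As an illustration only, the sketch below shows how these strategies can be expressed in code: an intention to treat model, an on-treatment comparison, and an arm-by-process interaction test, with standard errors clustered by school to reflect the cluster randomised design. It is not the RIPPLE analysis code; the Python/statsmodels setting, the logistic model, and the variable names (school_id, arm, received_peer_led, participative_score, outcome) are assumptions made for the example.

import pandas as pd
import statsmodels.formula.api as smf

def fit_models(df: pd.DataFrame):
    """df holds one row per student: outcome (0/1), arm (0 = teacher led,
    1 = peer led), received_peer_led (bool), participative_score (a continuous
    process measure), and school_id (the cluster identifier)."""

    def logit(formula, data):
        # Logistic regression with cluster-robust standard errors by school
        return smf.logit(formula, data=data).fit(
            disp=0, cov_type="cluster", cov_kwds={"groups": data["school_id"]}
        )

    # Intention to treat: analyse by allocated arm, regardless of receipt
    itt = logit("outcome ~ arm", df)

    # On-treatment: keep control students plus intervention students who
    # actually received the peer led sessions, then refit and compare
    received = df[(df["arm"] == 0) | df["received_peer_led"]]
    on_treatment = logit("outcome ~ arm", received)

    # Interaction: does the effect of the peer led arm vary with how
    # participative and skills based the sessions were?
    interaction = logit("outcome ~ arm * participative_score", df)

    return itt, on_treatment, interaction

Comparing the intention to treat and on-treatment estimates, and inspecting the interaction term, corresponds to the first two questions above; subgroup analyses follow the same interaction pattern, with school or student level risk markers in place of the process measure.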

These strategies provided answers to our three questions.

  • More consistent implementation of the peer led programme might have had a greater impact on several knowledge outcomes and reduced the proportion of boys having sex by age 16, but it would not have changed other behavioural outcomes.

  • There were key interactions between the extent to which sex education was participative and skills based and who provided it: when sex education was participative and skills based, the peer led intervention was more effective, but when these methods were not used, teacher led education was more effective.

  • Peer led sex education was less effective at engaging the students most at risk of poor sexual health (risk was assessed using housing tenure, attitude to school, and educational aspirations); the peer led approach was better at increasing knowledge in schools serving “medium” rather than “low” risk populations (assessed using the proportion of students receiving free school meals).

Integrating process and outcome evaluation

Process evaluation in the RIPPLE trial is an example of the trend to move beyond the rhetoric of quantitative versus qualitative methods. By integrating process and outcome data, we maximised our ability to interpret results according to empirical evidence. Importantly, an on-treatment approach to outcomes using process data made little difference to the results. Using process data to identify key dimensions of sex education and examining these in relation to the trial arm revealed the circumstances in which peer led sex education was most effective, as did analysis of risk for both individual schools and students. The conclusion that the peer led approach is not a straightforward answer to the problem of school disengagement and social exclusion is an important policy message.

Other recent trials of complex interventions in which integral process evaluations helped explain the outcome findings include a trial of peer education for homosexual men that had no apparent impact on HIV risk behaviour; the process evaluation established that the intervention was largely unacceptable to the peer educators recruited to deliver it.15 In trials of day care for preschool children and of supportive health visiting for new mothers, integral process evaluations revealed unpredicted economic effects of day care and the reasons for low uptake of health visiting in a culturally diverse population.16 17 In a trial of a secondary prevention programme for cardiovascular disease, interviews and focus groups showed that because patients viewed heart attacks as self limited episodes, they were less willing to adopt long term lifestyle changes, and that the ability of practice nurses to provide skilled continuity of care was affected by their lack of training and low status in the primary health care team.18 Such trials involve issues of service and organisation and the frameworks of understanding that inform the behaviour of healthcare users—features of the interactions between intervention and environment that characterise complex interventions.19

Box 2: Methodological issues of integrating process evaluation within trials

  • Process data should be collected from all intervention and control sites

  • Data should be both qualitative and quantitative

  • Process data should be analysed before outcome data to avoid bias in interpretation

  • Steps should be taken to minimise the possibility of bias and error in interpreting the findings from statistical approaches, such as on-treatment analyses and regression and subgroup analyses

“Good” randomised controlled trials have been defined as those having a high quality intervention, adequate evaluation of the intervention and its delivery, documentation of external factors that may influence outcome, and a culturally sensitive intervention.20 Thus, many RCTs would be enhanced by an integral process evaluation. The additional costs (such as collecting and analysing qualitative data) would probably be balanced by greater explanatory power and understanding of the generalisability of the intervention.21 Complexity as a quality of interventions is a matter of degree, not an absolute. The argument that process evaluation is most useful in cluster trials, and where the intervention is non-standardised, also applies to many “pragmatic” clinical RCTs,22 as does the idea that process evaluation in feasibility studies is crucial to developing appropriate and effective interventions.23

A “social” model of RCTs?

The move towards combining process and outcome evaluation builds on other, related methodological developments. For example, more attention is being paid to the need for pilot and consultation work in the development phase of RCTs,23 the importance of a more theory based approach to evaluation,24 and the modification of intervention effects by context in community intervention trials.25 Statistical methods to integrate process and outcome data, such as those developed in the RIPPLE study, are a move forward.

Some of the methods we suggest—such as allowing hypotheses about the effectiveness of interventions to be derived from process data collected during a trial, and drawing conclusions (with reservations) from on-treatment and subgroup analyses—go against the conventions of clinical RCTs. Perhaps the effort devoted to developing “how to do it” guidelines for clinical trials could be matched by a similar effort in the field of complex interventions.

Summary points

A detailed process evaluation should be integral to the design of many randomised controlled trials

Process evaluations should specify prospectively a set of process research questions and identify the processes to be studied, the methods to be used, and procedures for integrating process and outcome data

Expanding models of evaluation to embed process evaluations more securely in the design of randomised controlled trials is important to improve the science of testing approaches to health improvement

It is also crucial for persuading those who are sceptical about using randomised controlled trials to evaluate complex interventions not to discard them in favour of non-randomised or non-experimental studies

Footnotes

  • Contributors The RIPPLE study group includes the authors and Abdel Babiker, Andrew Copas, and Anne Johnson. The RIPPLE study was designed by AJ, AO, and JS. JS is principal investigator and is responsible for the overall conduct of the trial. The process evaluation is led by AO and VS. VS, S Forrest, and S Charleston developed methods for collecting and integrating qualitative and quantitative data, and coordinated data collection. EA and AC were the trial statisticians; S Black, M Ali, and AB contributed to statistical analyses. AJ contributed to quantitative data collection methodology. CB led data analyses related to social exclusion and sexual health. AO was the guarantor.

  • Funding Medical Research Council.

  • Competing interests None declared.

  • Ethical approval Committee on ethics of human research, University College, London.

References
