- Ann Oakley, professor of sociology and social policy1,
- Vicki Strange, research officer1,
- Chris Bonell, senior lecturer in sociology and epidemiology2,
- Elizabeth Allen, statistician3,
- 1 Social Science Research Unit, Institute of Education, University of London, London WC1H 0NR
- 2 Public and Environmental Health Research Unit, London School of Hygiene and Tropical Medicine, London WC1E 7HT
- 3 Centre for Sexual Health and HIV Research, London WC1E 6AU
- Correspondence to: A Oakley
- Accepted 14 October 2005
“Complex interventions” are health service interventions that are not drugs or surgical procedures, but have many potential “active ingredients.”1 A complex intervention combines different components in a whole that is more than the sum of its parts.2 Randomised controlled trials (RCTs) are the most rigorous way to evaluate the effectiveness of interventions, regardless of their complexity. Because of their multifaceted nature and dependence on social context, complex interventions pose methodological challenges, and require adaptations to the standard design of such trials.3 This paper outlines a framework for using process evaluation as an integral element of RCTs. It draws on experience from a cluster randomised trial of peer led sex education.
Why evaluate processes?
Conventional RCTs evaluate the effects of interventions on prespecified health outcomes. The main question is, “Does it work?” Process evaluations within trials explore the implementation, receipt, and setting of an intervention and help in the interpretation of the outcome results. They may aim to examine the views of participants on the intervention; study how the intervention is implemented; distinguish between components of the intervention; investigate contextual factors that affect an intervention; monitor dose to assess the reach of the intervention; and study the way effects vary in subgroups.4 Process evaluation can help to distinguish between interventions that are inherently faulty (failure of intervention concept or theory) and those that are badly delivered (implementation failure).5 Process evaluations are especially necessary in multisite trials, where the “same” intervention may be implemented and received in different ways.
Although “process” and “qualitative” are often used …