Effects of quality improvement collaboratives. BMJ 2008;336 doi: https://doi.org/10.1136/bmj.a216 (Published 26 June 2008) Cite this as: BMJ 2008;336:1448
In his editorial1 Lindenauer questions the role of randomised
controlled trials in evaluating quality improvement collaboratives. The
choice of research design should be informed by the question that needs to
be answered. While a complex intervention is being developed, a mix of
qualitative and quantitative methods should be used to define the content
of the intervention, the method(s) of delivery, and the presence and
impact of any important effect modifiers. Once such an intervention is
regarded as both understood and stable, it should be subject to an
evaluation of its effects rather than simply introduced. The risks of not
evaluating are the introduction of ineffective and inefficient
interventions that waste scarce resources, and a failure to learn from
current QI efforts in order to optimise future activities. Auerbach et
al2 argue persuasively for applying the same standards of evaluation to
quality improvement interventions as are applied to biomedical ones.
At the point in the evaluation process where the question is "what is the
effect of this intervention, and at what cost?", a pragmatic (often
cluster) randomised controlled trial3, with its focus on "real world"
evaluation, is the optimum design. However, such a design in no way
precludes the simultaneous collection of the sort of data that Lindenauer
suggests: data on the processes by which the intervention may achieve its
effects or have them modified.4,5,6 Such measures, informed by the
intervention development stage, can be much more informative by virtue of
having been collected within the same experimental context.
1. Lindenauer PK. Effects of quality improvement collaboratives are
difficult to measure using traditional biomedical research methods. BMJ
2008;336:1448.
2. Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to
improve care and knowing how to do it. The New England Journal of
Medicine 2007;357(6):608-613.
3. Schwartz D, Lellouch J. Explanatory and pragmatic attitudes in
therapeutical trials. Journal of Chronic Diseases 1967;20:637-648.
4. Rousseau N, McColl E, Newton J, Grimshaw J, Eccles M. Practice
based, longitudinal, qualitative interview study of computerised evidence
based guidelines in primary care. BMJ 2003;326:314-322.
5. Ramsay CR, Eccles M, Grimshaw JM, Steen N. Assessing the long term
effect of educational reminder messages on primary care radiology
referrals. Clinical Radiology 2003;58:319-321.
6. Grimshaw JM, Zwarenstein M, Tetroe JM, Godin G, Graham ID, Lemyre
L, Eccles MP, Johnston M, Francis JJ, Hux J, O'Rourke K, Legare F,
Presseau J. Looking inside the black box: a theory-based process
evaluation alongside a randomised controlled trial of printed educational
materials (the Ontario printed educational message, OPEM) to improve
referral and prescribing practices in primary care in Ontario, Canada.
Implementation Science 2007;2:38.
Competing interests: No competing interests