General Practice

Designing trials of interventions to change professional practice in primary care: lessons from an exploratory study of two change strategies

BMJ 2000; 320 doi: https://doi.org/10.1136/bmj.320.7249.1580 (Published 10 June 2000) Cite this as: BMJ 2000;320:1580
  1. Stephen Rogers, senior lecturer in primary health care (s.rogers@ucl.ac.uk),
  2. Charlotte Humphrey, senior lecturer in medical sociology,
  3. Irwin Nazareth, senior lecturer in primary health care,
  4. Sue Lister, lecturer in practice development,
  5. Zelda Tomlin, research fellow in qualitative research,
  6. Andy Haines, professor of primary health care.
  1. Department of Primary Care and Population Sciences, Royal Free and University College Medical School, University College London, Archway Resource Centre, London N10 3UA
  1. Correspondence to: S Rogers
  • Accepted 14 April 2000

Various strategies have been evaluated for their ability to support the adoption of clinical evidence into everyday practice.1 2 There is increasing interest in interventions that are aimed at groups of healthcare staff and that promote organisational change. Observational studies of such interventions have produced encouraging findings,3 4 but the results of randomised controlled trials have sometimes been disappointing.5-7 These differences may reflect the methodological and practical difficulties of evaluating such interventions in randomised trials rather than any lack of efficacy of the interventions themselves.8

We carried out an exploratory trial to examine the independent and combined effects of teaching evidence based medicine and facilitated change management on the implementation of cardiovascular disease guidelines in primary care (box). The trial was accompanied by a formative evaluation, drawing on information collected by the evidence based medicine tutor, the change management facilitator, and a qualitative researcher who observed workshops and meetings and conducted a series of semistructured interviews with study participants.14 Progress was reviewed at monthly steering group meetings, and the thesis of this paper emerged from the deliberations and discussions of this group.

Summary points

When designing trials of interventions to change professional practice in primary care, choices have to be made about selection of appropriate practices, development and adaptation of interventions, and experimental design

The different priorities of researchers, those developing the interventions, and those participating must be recognised when such choices are made

The best design options may be those that are able to reconcile the interests of research, development, and practice

Interventions requiring the participation of health professionals in organisational change require a high degree of motivation, and eligibility criteria should be developed and applied at recruitment

Interventions must be adapted as far as possible to the needs of participants without compromising theoretical assumptions

Experimental designs must enable active staff participation without distorting the interventions delivered

Issues arising

Selection of suitable practices

Choice of sampling frame—Our exploratory study was carried out in collaboration with the Medical Research Council General Practice Research Framework. Although we recognised the advantages of working with a well supported and enthusiastic group of practices with a track record in collaborative research, such practices may be atypical.11 For example, some had participated in research on cardiovascular disease before and may already have adopted the findings of that research.15 We subsequently found wide variation in performance across the cardiovascular disease management topics, but the organisational characteristics of the practices could still limit the generalisability of our conclusions.

Details of exploratory trial

Design—Randomised controlled trial with a factorial design.9 All practices were sent guidelines on five cardiovascular disease topics and were then allocated to evidence based medicine teaching, facilitated change management, both, or neither, using a restricted randomisation procedure.10 A minimal allocation sketch follows this box.

Participants—Eight of 25 eligible practices in the Medical Research Council General Practice Research Framework in North West Thames.11

Interventions—The evidence based medicine intervention was a one day practice based workshop, covering appraisal of trials, systematic reviews, and guidelines.12 The change management programme comprised a one day workshop introducing principles of continuous quality improvement (change management, multiprofessional working, problem solving, and analysis of the process of care) followed by a series of visits from a facilitator trained in the methods.13

Study outcomes—Prescribing indicators, reflecting the implementation of the cardiovascular disease guidelines, and qualitative data on changes in professional practice.
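The restricted randomisation procedure is not described in detail here. As a minimal sketch, assuming the restriction simply forces equal numbers of practices into each cell of the 2×2 factorial design (a permuted allocation over the whole sample, one common form of restriction), allocation of the eight practices might look like the following; the practice identifiers and random seed are illustrative:

```python
import random

# Minimal sketch of restricted randomisation for a 2x2 factorial design:
# each cell (EBM teaching x change management) receives exactly two of
# the eight practices. The trial's actual procedure may have differed.

practices = ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]

cells = [
    ("EBM teaching", "change management"),  # both interventions
    ("EBM teaching", "no facilitation"),    # EBM teaching only
    ("no teaching", "change management"),   # change management only
    ("no teaching", "no facilitation"),     # neither (control)
]

rng = random.Random(320)       # fixed seed so the allocation is reproducible
allocation = cells * 2         # restriction: exactly two practices per cell
rng.shuffle(allocation)

for practice, (teaching, facilitation) in zip(practices, allocation):
    print(f"{practice}: {teaching} / {facilitation}")
```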

Participating practices—About 30% of the practices approached agreed to participate. Most expected benefits in terms of implementing guidelines, developing new skills, or both. However, the interested practices were not always the ones that the tutor and the facilitator thought would benefit from the interventions offered. In particular, the facilitator had reservations about working with more hierarchical practices, where the change management approach might be unacceptable.4 An additional drawback was that some practices came into the study because they were interested in one intervention but were randomised to the other, and there was some evidence that this might have affected the degree of engagement with the interventions (equivalent to patient preference effects in therapeutic trials).16

Consent to participate—Informed consent was obtained from lead practitioners on behalf of their general practice partners. This was sufficient for ethics committees, but the extent of the internal consultation with partners varied considerably. The dissemination of information between general practitioners and staff also varied, and this affected the degree to which practice staff became engaged with the interventions being tested. The General Practice Research Framework has recently amended its procedures and now requires signatures from all partners before a practice is admitted to a research project, but there is an argument for seeking consent from an even wider range of staff when interventions are directed at practice teams.

Table: Analytical framework summarising desirable characteristics of trial design from the perspectives of research, development, and practice

Development and adaptation of interventions

Theoretical credibility—Our teaching of evidence based medicine was based on workshops developed at McMaster University,12 and the change management programme drew on principles of continuous quality improvement as taught at the Institute for Healthcare Improvement.13 This strategy provided external validity for the interventions delivered, but some primary care staff felt that the change management workshop was insufficiently grounded in the language, concerns, and perspectives of primary care.

Acceptability—The one day practice based workshop was a popular format with primary care staff; participatory learning approaches were preferred over more didactic teaching, and practical tasks over presentations of theory. These positive remarks have to be balanced against concerns on the part of the tutor and facilitator that a single day was insufficient to deliver the relevant material. The facilitator follow up was an attractive intervention model for practices, as the low intensity and long time span meant it was easily integrated with other practice activities. However, the duration of the intervention was truncated by the short time frame for research funding.

Replicability and transferability—New materials were developed to support the evidence based medicine workshop, the change management workshop, and the facilitation activities. A single tutor ran the evidence based medicine sessions, and a single facilitator ran the change management programme, with the details of the interventions adapted to the cultural and organisational attributes of individual practices. Such tailoring improves the transferability of interventions but reduces their replicability; using multiple tutors or facilitators would have similar implications. A transferable intervention could be tested in a wider range of practices, yielding more generalisable results, but could never be applied in service if its replicability were compromised.

Evaluability—The exploratory study was designed to evaluate the effect of the change management programme on specific guideline related outcomes, while adopting a qualitative approach to assessing more general changes in the way that primary care teams worked. The focus on implementation of guidelines simplified the outcome measures for the trial, but the facilitator was always more committed to capturing information on changes in organisational effectiveness. Various questionnaires for measuring organisational effectiveness have been identified,17 18 but their suitability and validity for use in a trial still need to be assessed.

Choice of experimental design

Multiple interventions—The decision to adopt a factorial design had implications for the time scale of the trial, the delivery of the interventions, and the demands on the practices. In particular, delays were introduced because practices allocated to both interventions could not accommodate two one day workshops in close succession. Practice based workshops were convenient for participants but labour intensive for the research staff. They enabled us to avoid contamination between practices allocated to different interventions, but interactions with practices were sometimes compromised because the tutor was asked not to advise on implementation and the facilitator was asked not to comment on research evidence.

Multiple guidelines—The provision of a choice of guidelines was valued by practices and felt to be consistent with real life. The approach also allowed us to learn more about the presentation of materials and preferred topics. Ultimately, however, the guidelines that practices selected varied considerably, and the principle of offering practitioners a choice of guidelines was difficult to reconcile with the establishment of common, meaningful measures of effect across practices. In addition, there is evidence that clinicians may derive greater benefit from directing attention to topics they rank lower in interest than to those they rank higher.19
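The trial's actual prescribing indicators are not specified in this summary. As an illustrative sketch only, one way to construct a common effect measure across practices that chose different topics is the proportion of eligible patients receiving the recommended treatment, averaged over each practice's chosen topics; the topic names, eligibility rules, and record fields below are hypothetical:

```python
# Illustrative only: a common effect measure across practices that chose
# different guideline topics, expressed as the proportion of eligible
# patients receiving the recommended treatment for each chosen topic.
# Topic names, eligibility rules, and record fields are hypothetical.

def topic_indicator(patients, eligible, treated):
    """Proportion of eligible patients receiving recommended treatment."""
    at_risk = [p for p in patients if eligible(p)]
    if not at_risk:
        return None  # indicator undefined when no patient is eligible
    return sum(treated(p) for p in at_risk) / len(at_risk)

# Hypothetical topic definitions keyed by guideline name.
TOPICS = {
    "aspirin after MI": (
        lambda p: p.get("prior_mi", False),
        lambda p: p.get("on_aspirin", False),
    ),
    "anticoagulation in AF": (
        lambda p: p.get("atrial_fibrillation", False),
        lambda p: p.get("on_warfarin", False),
    ),
}

def practice_score(patients, chosen_topics):
    """Mean indicator over the topics this practice chose to implement."""
    values = []
    for topic in chosen_topics:
        eligible, treated = TOPICS[topic]
        value = topic_indicator(patients, eligible, treated)
        if value is not None:
            values.append(value)
    return sum(values) / len(values) if values else None

records = [
    {"prior_mi": True, "on_aspirin": True},
    {"prior_mi": True, "on_aspirin": False},
    {"atrial_fibrillation": True, "on_warfarin": True},
]
print(practice_score(records, ["aspirin after MI", "anticoagulation in AF"]))
```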

Reconciling interests of research, development, and practice

We had to make various choices in designing our study: which practices to select for participation, how to develop and adapt the interventions, and which experimental design to use. Tensions arose because three constituencies had different views on what was important: those whose principal concern was the scientific rigour of the investigation; those whose main focus was the development and adaptation of the interventions (whether as a theoretically based model of behaviour change or as a pragmatic service intervention20); and those who were participants in the research. The table is an analytical framework that summarises the desirable characteristics of trial design from the perspectives of these three constituencies. The issues arising are discussed in detail below.

The interests of research, development, and practice would be addressed by a trial design that was able to satisfy the following conditions:

  • A practice selection strategy which reconciles issues around the acceptability and appropriateness of interventions to practices and the generation of externally valid, generalisable trial results

  • Interventions that remain true to their theoretical foundations, are of sufficient focus, intensity, or duration to have an impact, but are adapted to the needs and conditions of participating practices. The interventions also need to be replicable, transferable, and evaluable

  • A statistically efficient experimental design that can ensure scientific rigour without compromising the ability of staff to participate and without damaging the integrity of the interventions being tested.

Discussion

Interventions requiring the active participation of health professionals in organisational change are likely to require a high degree of motivation from most of the practice team if they are to have an impact. The willingness of practices to participate in trials of professional behaviour change will depend on the interests of members and the organisational characteristics of practices. A trial that focuses on practices expected to derive substantial benefits (an explanatory trial)21 would ensure that practices and interventions were well matched but might not provide appropriate information on the application of such interventions in service settings. An alternative approach would be to recruit practices typical of those that might come forward for interventions offered in a service context (a pragmatic trial).21 The two designs converge as recruitment becomes increasingly selective. In a practice preference design,20 a randomised controlled trial of practices that have no preference for a particular intervention is nested within an observational study in which practices with strong preferences receive the intervention of their choice. This design could meet the concerns of research, development, and practice but is complex and expensive and requires more effort from the researchers.
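As a sketch of the allocation logic of such a practice preference design, assuming a simple two way choice between the interventions (the preference field, arm names, and practice data below are hypothetical):

```python
import random

# Sketch of a practice preference design: practices with a strong stated
# preference receive their chosen intervention and are followed
# observationally; indifferent practices enter the nested randomised
# comparison. Arm names and the preference field are hypothetical.

ARMS = ("EBM teaching", "change management")

def allocate(practices, seed=0):
    rng = random.Random(seed)   # fixed seed for a reproducible allocation
    observational, randomised = [], []
    for p in practices:
        if p["preference"] in ARMS:               # strong preference stated
            observational.append((p["name"], p["preference"]))
        else:                                     # indifferent: randomise
            randomised.append((p["name"], rng.choice(ARMS)))
    return observational, randomised

cohort = [
    {"name": "Practice A", "preference": "EBM teaching"},
    {"name": "Practice B", "preference": None},
    {"name": "Practice C", "preference": "change management"},
    {"name": "Practice D", "preference": None},
]

observational, randomised = allocate(cohort)
print("Observational arm:", observational)
print("Nested randomised trial:", randomised)
```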

In the interests of theoretical credibility, we reproduced established models for teaching the principles of evidence based medicine12 and continuous quality improvement13 while allowing practitioners to explore specific applications. For interventions delivered at this level, measures of organisational effectiveness might be more appropriate than disease specific measures based on implementation of guidelines.17 18 Also, the approach would need to be directed at generating substantial change at practice level for measurable effects to be seen. The linking of theory to a specific task would be acceptable to practices, and a focus on guidelines could be associated with measurable outcomes. This would simplify measurement issues in a trial, but some commentators might express concerns about the application of continuous quality improvement methods to tasks that the practices had not necessarily identified as priorities.3 4

The factorial design is a powerful approach to studying two interventions simultaneously, though executing it may be demanding for researchers and study participants. It may not be feasible for practices to implement a group of guidelines simultaneously, but a sequence of guidelines could be presented over time (in random sequence, to balance order effects). This split plot design9 is likely to be acceptable to trial participants, to those whose concern is to maintain the integrity of the interventions, and to researchers, provided that funding could be found for the extended trial period. Alternatively, a randomised incomplete block design9 could be used to assess the efficacy of a single intervention across two or more guidelines, but, as every practice would receive an intervention, this design could not provide information on the generic effects of interventions on professional behaviour.
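A minimal sketch of the sequencing idea behind the split plot option, assuming each practice simply receives the five guidelines one at a time in an independently randomised order (the topic names and practice identifiers are placeholders, not the trial's own):

```python
import random

# Sketch: each practice receives the five guidelines one at a time in an
# independently randomised order, so that order effects average out
# across practices. Topic names are illustrative placeholders.

guidelines = ["hypertension", "aspirin", "anticoagulation",
              "cholesterol", "heart failure"]

rng = random.Random(1580)  # fixed seed for a reproducible schedule

for practice in ["P1", "P2", "P3", "P4", "P5", "P6", "P7", "P8"]:
    order = rng.sample(guidelines, k=len(guidelines))  # random permutation
    print(f"{practice}: {' -> '.join(order)}")
```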

Conclusions

Clinicians and those involved in service development sometimes dismiss academic research because of an “ivory tower” approach that pays too little attention to issues around service delivery.22 The nature of negotiated access to research subjects in primary care settings can directly affect not only participation rates in research but also the quality of the research data.23 If a trial is to be executed successfully, and its findings are to be applicable in a service setting, it is important to identify the trial design that best reconciles the interests of research, development, and practice. Our analytical framework provides an approach for exploring how particular characteristics of trial design appear from each perspective and thereby assessing the most satisfactory design options. The approach cannot ensure that trial design will be straightforward and problem free, but early consideration of the perspectives of research, development, and practice might help to prevent fundamental problems arising later.

Acknowledgments

The study was carried out in collaboration with the MRC General Practice Research Framework, and we are grateful to participating practice staff and Dr M Vickers.

Contributors: AH and IN had the original idea for a trial. SR was overall coordinator of both design and execution of the exploratory trial, organised recruitment of the study practices, and developed packs of evidence based guidelines. CH and ZT designed and executed the qualitative component of the study. IN and SL developed and implemented the intervention strategies used in the feasibility study. SR, C Hill, and S Goubet developed the quantitative outcome measures. The steering group for the study comprised the authors, S Goubet, and C Hill. L Klinger, J Hickling, and M Griffin assisted with collection, checking, or analysis of data. Advisors to the project included M Vickers, M Lawrence, S Thompson, P Greenhalgh, S Ebrahim, and J Roberts. M Modell, R Morris, and D Mant (who was an external reviewer) gave valuable comments on earlier drafts of this paper. SR is the guarantor for the paper.

Footnotes

  • Funding NHS Research and Development Implementation Methods Programme.

  • Competing interests None declared.
