The search for evidence of effective health promotion
BMJ 1997;315:361 (Published 9 August 1997). doi: https://doi.org/10.1136/bmj.315.7104.361
- Viv Speller, senior lecturera,
- Alyson Learmonth, director of health promotionb,
- D Harrison, health promotion general managerc
- a Wessex Institute for Health Research and Development, Winchester SO22 5DH
- b North Durham Community Health Care Trust, Health Centre, Chester-le-Street, Durham DH3 3UR
- c North West Lancashire Health Promotion Unit, Sharoe Green Hospital, Preston PR2 8DU
- Correspondence to: Dr Speller
- Accepted 3 February 1997
A conceptually sound evidence base for interventions that aim to promote health is urgently required. However, the current search for evidence of effective health promotion is unlikely to succeed and may result in drawing false conclusions about health promotion practice, to the long-term detriment of public health. The reasons for this are threefold: lack of consensus about the nature of health promotion activity; lack of agreement over what evidence to use to assess effectiveness; and divergent views on appropriate methods for reviewing effectiveness. As a consequence, health promotion may be designated "not effective" because it is being assessed with inappropriate tools.
What are we looking for?
Health promotion is a multifactorial process operating on individuals and communities, through education, prevention, and protection measures.1 The statement of principles known as the Ottawa charter for health promotion, developed by the World Health Organisation, is internationally accepted as the guiding framework for health promotion activity. This describes five approaches: building healthy public policy, creating supportive environments, strengthening community action, developing personal skills, and reorienting health services.2 Health promotion methods may include activities as diverse as awareness raising campaigns, provision of health information and advice, influencing social policy, lobbying for change, professional training, and community development, often in combination in complex interventions. However, health promotion is rarely judged on its effectiveness in all these areas.
Not just about individual behaviour
In Britain the Health of the Nation strategy's concentration on reducing disease and behavioural risk factors3 has overemphasised the role of health promotion in developing personal knowledge and skills and focused attention on assessing individual health outcomes. In Europe, however, health promotion emphasises the development of health promoting settings, such as schools, workplaces and hospitals, which aim to enable and support healthy behaviour. Practice therefore includes management and organisational development approaches.4 In the Pacific countries the emphasis is on sustaining healthy environments, and legislative approaches are aimed at populations, rather than individuals.5
- Health promotion in the UK is at risk from the application of inappropriate methods of assessing evidence, overemphasis on outcomes of individual behaviour change, and pressure on resources
- The effect of health promotion should be assessed across the breadth of its activities and settings, not just by changes in individual health behaviour
- Systematic review methods need to be revised to include a broader range of studies and research methods, including qualitative research
- Review criteria should consider the quality of the health promotion intervention as well as the research design
Seeking evidence of change in individual behaviour or health outcome is not appropriate if interventions are aiming to achieve other types of change. It may be difficult to generalise from studies conducted on different populations because the interventions and research methods may have been chosen to answer different questions about different types of health promotion practice. The logic of looking for immediate changes in health or behaviour as a result of one-shot interventions, such as advice from health professionals or advertising, is questionable when we know that attitudes and behaviours are shaped by a panoply of socioeconomic and cultural influences. Attributing any changes detected to a single intervention is also dubious. Despite calls to refocus population health "upstream," many research studies are still seeking such simple results with little consideration of the effect of other influences, or use costly designs in an attempt to control for them.6
How should we look for evidence of effectiveness?
Britain's national research and development programme is not supporting health promotion research because it is dominated by clinically oriented criteria and methods. Evidence based medicine is defined as “the conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.”7 The systematic review process aims to ensure reliable and rigorous evaluation of evidence. Criteria for the inclusion of studies usually place randomised controlled trials as the gold standard for judging whether a treatment is effective. This approach has been directly transferred to health promotion in the series of reviews by the NHS Centre for Reviews and Dissemination to determine what is known about the effectiveness of health promotion interventions. This evidence is now becoming available through the Effective Health Care Bulletin series8 9 and the Health Education Authority series of health promotion effectiveness reviews.10 11
Studies are selected for inclusion on the basis of the quality of the research alone, not the quality of the health promotion intervention. This can produce some anomalous results. An earlier bulletin on brief interventions and alcohol use12 suggested that brief interventions are as effective as more expensive specialist treatment. This sparked a debate about the meaning of the term "brief interventions," which highlighted the risks inherent in glossing over the detail.13 The intervention techniques had been pooled because they were all said to be of similar short duration and to share common characteristics. However, they ranged from five minutes of simple advice to structured interventions delivered by a general practitioner in regular sessions over six months. From this it is clearly difficult for commissioners or practitioners to determine which intervention technique is most successful.
Another systematic review looked at the effectiveness of sexual health education interventions for young people.14 Of 270 papers reporting sexual health interventions, only 12 met the inclusion criteria. Criteria such as the appropriateness of the interventions studied and the outcome measures used were not considered essential "because of the large element of subjectivity in assessing whether they had been met." This can lead to spurious generalised conclusions, such as the conclusion, drawn from a US study of an abstinence education programme for 13-year-old boys from a low income minority group, that chastity education is harmful. The study was included despite high attrition rates and dependence on self reported outcomes. After six lessons aimed at reducing premarital sex, more of the intervention group claimed to have initiated sexual intercourse. An experienced health promoter would immediately recognise that this was unlikely to have been an appropriate intervention method for this target group.
Process evaluation may be useful
As randomised trials are expensive, it is important that they are used only to study high quality interventions that have been appropriately developed and are based on knowledge of best practice. Another approach to assessing effectiveness, which pays more attention to the quality of the intervention, has been attempted by the International Union for Health Promotion and Education in a series of 16 effectiveness reviews.15 Criteria for selecting studies included information about the strategies used in the intervention. Evaluation criteria included not only the use of controls and measurements before and after the intervention but also formative or process evaluation. This is the study of the processes of implementing the intervention, to answer such questions as: was it applied in the manner intended, did other factors come into play that might have affected the result, and what did the participants think about the process? Process evaluation often uses qualitative research methods and complements outcome evaluation. In health promotion research the technique of "triangulation"—that is, drawing conclusions from a number of different sources of data—is also used.
There are several important differences between the approach to review taken by the International Union for Health Promotion and Education and the systematic review approach of the Centre for Reviews and Dissemination. While the latter pays inadequate attention to the process of the intervention, the former is insufficiently critical of the soundness of quantitative research methods. It would be helpful to attempt to combine the best elements of both for future effectiveness reviews.
Another fundamental problem with using randomised controlled trials in health promotion research is that where interventions aim to influence systems or populations it may be difficult to randomly allocate units such as schools or communities to intervention or control groups, so quasi-experimental designs are used instead. Well known quasi-experimental studies include the Stanford heart disease prevention program16 and the Minnesota heart health program,17 in which multiple interventions were applied to communities whose risk factor profiles were subsequently compared with those of matched comparison communities. One of the major problems with studies employing this design is "contamination" of the control group. This poses a serious dilemma, in that the practice of health promotion relies strongly on the diffusion of the effects of the intervention through the target community. The difficulty of controlling for spillover to the comparison community reduces the effect attributable to the intervention. Whether health gain in the control group is influenced by diffusion of the intervention, or the intervention group's effects are produced by secular trends, cannot be determined by looking at outcome measures alone.
As has been recognised for healthcare evaluation, a wide range of research methods is warranted.18 Qualitative research can contribute to assessing the effectiveness of interventions by illuminating processes, exploring diversity, and developing new theories. It includes a broad range of methods such as case study, ethnography, action research, participant observation, conversation analysis, and grounded theory.19 For example, a textual analysis of the way health visitors provide information to first time mothers identified that they frequently failed to reinforce the mother as a skilful and knowledgeable person, thereby affecting the reception of advice.20 Ethnographic studies provide a way of assessing outcomes of health promotion interventions on activities such as injecting drug use and cannabis dealing which may not be amenable to other research methods.21 22
Will we recognise effective health promotion when we find it?
An inquiry by Britain's parliamentary public accounts committee into the cost effectiveness of the Health of the Nation strategy showed that spending on health promotion in 1996 was £3m on the strategy, £45m on health education via the Health Education Authority, £73m on paying general practitioners for health promotion work, and £90m on NHS health promotion units.23 This represents less than 1% of the NHS's annual budget and less than the expenditure on staff cars and travelling and subsistence in 1994-5.24 Yet anecdotal evidence suggests that cost pressures, coupled with the inability to present conclusive evidence of effectiveness, are conspiring to make health promotion contracts a soft option for cuts in next year's contracting round.
To get out of this downward spiral, health promotion workers must demonstrate the effectiveness of their work by establishing a robust evidence base. Existing reviews should be critically reanalysed using appropriate inclusion criteria which consider the quality of the health promotion intervention as well as that of the research. Criteria for rigorous evaluation of qualitative research methods need to be derived. The search should be broadened to include studies that measure the impact of interventions on systems and organisational development as well as change in individual behaviour. Further work needs to be done using the existing health topic based reviews to analyse cross cutting themes and so examine the effectiveness of different methods of health promotion.
Research should be commissioned to fill the gaps identified by the reviews, ensuring that an appropriate range of methods is used. In doing this practitioners need to be involved in designing and implementing viable interventions including collecting data for process evaluation to ensure that programmes function optimally. It is essential for health promotion research to bring together experts from a range of disciplines to design and conduct studies that pay adequate attention to both the quality of the intervention and the methods of evaluation.
Finally, the evidence base must be accessible to and used by practitioners. While practitioners need to be more critical and to substantiate their decisions with evidence, key messages must also be disseminated clearly and unequivocally to influence practice.
If the commitment enshrined in the mission for the NHS—to promote health and prevent disease as well as to provide treatment—is to be taken seriously, then the national research and development agenda needs to consider ways of tackling these issues. Urgent action is needed to redress the negative consequences for health promotion of a search for evidence that is veering off course.