
Practice based education to improve delivery systems for prevention in primary care: randomised trial

BMJ 2004; 328 doi: https://doi.org/10.1136/bmj.38009.706319.47 (Published 12 February 2004) Cite this as: BMJ 2004;328:388
  1. Peter A Margolis (Peter_Margolis@med.unc.edu), professor of pediatrics and epidemiology1,
  2. Carole M Lannon, associate professor of pediatrics and internal medicine1,
  3. Jayne M Stuart, assistant professor of pediatrics1,
  4. Bruce J Fried, associate professor of health policy and administration2,
  5. Lynette Keyes-Elstein, assistant director of biostatistics3,
  6. Donald E Moore Jr, director, division of continuing medical education4
  1 University of North Carolina at Chapel Hill, North Carolina Center for Children's Healthcare Improvement, 730 Airport Rd, Ste 104, CB#7226, Chapel Hill, NC 27599, USA
  2 University of North Carolina at Chapel Hill, School of Public Health, Department of Health Policy and Administration, Chapel Hill
  3 Rho Inc, Chapel Hill, NC 27514, USA
  4 Vanderbilt University School of Medicine, Nashville, TN 37232, USA

  Correspondence to: P A Margolis

    Abstract

    Objective To examine the effectiveness of an intervention that combined continuing medical education with process improvement methods to implement “office systems” to improve the delivery of preventive care to children.

    Design Randomised trial in primary care practices.

    Setting Private paediatric and family practices in two areas of North Carolina.

    Participants Random sample of 44 practices allocated to intervention and control groups.

    Intervention Practice based continuing medical education in which project staff coached practice staff in reviewing performance and identifying, testing, and implementing new care processes (such as chart screening) to improve delivery of preventive care.

    Main outcome measure Change over time in the proportion of children aged 24-30 months who received age appropriate care for four preventive services (immunisations, and screening for tuberculosis, anaemia, and lead).

    Results The proportion of children per practice with age appropriate delivery of all four preventive services changed, after a one year period of implementation, from 7% to 34% in intervention practices and from 9% to 10% in control practices. After adjustment for baseline differences in the groups, the change in the prevalence of all four services between the beginning and the end of the study was 4.6-fold greater (95% confidence interval 1.6 to 13.2) in intervention practices. Thirty months after baseline, the proportion of children who were up to date with preventive services was higher in intervention than in control practices; results for screening for tuberculosis (54% v 32%), lead (68% v 30%), and anaemia (79% v 71%) were statistically significant (P < 0.05).

    Conclusion Continuing education combined with process improvement methods is effective in increasing rates of delivery of preventive care to children.

    Introduction

    Preventive services are the cornerstone of primary care for children. Yet only about 75% of 2 year olds are fully immunised,1 and rates of preventive services such as screening for anaemia, tuberculosis, lead, and vision defects are disappointingly low. The large number of age specific preventive services recommended in the first five years of life2 3 and the time constraints under which primary care physicians practise make it easy for clinicians to overlook opportunities for preventive care.

    The challenge of achieving more reliable delivery of preventive services highlights the need for a more systematic approach.4 Recent studies have shown that better “office systems” can improve the delivery of preventive care.5 An office system is an organised series of interrelated activities carried out by several members of staff to achieve a specific purpose (for example, billing). Office systems for prevention focus on the interactions of patients, staff, and clinicians, ensuring that each step of the process of preventive care is carried out for every eligible patient at every encounter. The primary hypothesis for our study was that practices that received continuing medical education (CME) in combination with process improvement methods to implement office systems would have higher rates of four core preventive services than practices that did not.

    Methods

    Recruitment and randomisation of practices

    We identified all 453 paediatric and family practices in two regions of North Carolina located near practice assistance teams at the University of North Carolina at Chapel Hill and the Charlotte Area Health Education Center. Practices meeting the following criteria were eligible for the study: sufficient newborns enrolled each month to achieve sample size requirements; not part of an academic institution or a publicly funded health centre; and, in the region near the University of North Carolina, annual Medicaid billing in excess of $50 000 (£27 000; €40 000).

    We randomly selected practices from those meeting the eligibility criteria and stratified by factors potentially predictive of the success of the intervention: type of practice (paediatric or family practice), number of newborns enrolled each month, and annual Medicaid billing. Within each stratum, we used a computerised random number generator to assign equal numbers of practices to either an intervention group, in which practices received assistance to establish office systems for prevention, or a control group.
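    To make the allocation scheme concrete, the following is a minimal sketch of stratified randomisation with equal allocation within strata. It is an illustration only: the practice identifiers, strata, and seed are invented, and the trial used its own computerised random number generator rather than this code.

```python
import random

# Hypothetical sketch of the trial's stratified allocation: within each
# stratum (practice type x newborn volume x Medicaid billing), equal
# numbers of practices go to the intervention and control groups.
# Practice IDs, strata, and the seed are invented for illustration.

def allocate(strata, seed=2004):
    rng = random.Random(seed)
    allocation = {}
    for stratum, practices in strata.items():
        shuffled = list(practices)
        rng.shuffle(shuffled)
        half = len(shuffled) // 2
        for practice in shuffled[:half]:
            allocation[practice] = "intervention"
        for practice in shuffled[half:]:
            allocation[practice] = "control"
    return allocation

strata = {
    ("paediatric", "high Medicaid billing"): ["P01", "P02", "P03", "P04"],
    ("family practice", "low Medicaid billing"): ["P05", "P06", "P07", "P08"],
}
print(allocate(strata))
```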

    We recruited practices by using methods developed previously.6 The recruitment team was unaware of a practice's treatment allocation until after consent to participate was signed by all doctors in each practice. As an incentive, all practices received a copy of publicly available materials designed to facilitate preventive care,7 and the intervention group received CME credit.

    Intervention

    We have described an intervention that used practice based CME and process improvement methods (a combination recently termed knowledge translation8) to support the implementation of “office systems” for delivery of preventive care.9 The intervention was based on the plan-do-study-act (PDSA) cycle of process improvement10 as an organising framework. It included four steps: reviewing data to identify less than desired performance of preventive services; identifying evidence based changes that could improve performance; testing changes; and monitoring and adjusting new processes for delivery of care.

    In the first step, practices formed an improvement team of clerical, nursing, and physician staff members and discussed the results of chart abstractions.

    In the second step, project staff used “academic detailing”11 and mini-lectures to provide education about preventive care and effective delivery strategies for preventive services (for example, developing a preventive services summary, establishing a tracking or recall system12). Practices selected performance improvement goals and identified strategies that might improve care.13 We provided an organised set of tools to accelerate testing (for example, preventive services flow sheets), and project staff helped practices to customise these tools.

    During the third step, project staff helped practices use repeated PDSA cycles in small samples of patients to understand how to adapt new approaches to current office routines. In the fourth step, changes that had the desired effect on the process of preventive care after testing were spread throughout the practice by training staff in new roles.

    Two teams, each consisting of a trained nurse and a doctor, helped to carry out the intervention. These teams met with practices monthly over a year, using a defined curriculum. During the subsequent year, we contacted each practice by telephone every two to three months to discuss logistical problems with implementation and to offer advice and support in overcoming them. More complete details of the methods, tools, and resources used in this project are available from the authors (www.ncchildhealth.org).

    Outcome measures

    Before the study, we selected four preventive services (immunisations, and screening for anaemia, lead, and tuberculosis) from a list of services recommended for children that included immunisations, screening, and anticipatory guidance. These outcomes are broadly recommended by authoritative organisations for children in the first two years of life, and they tend to be well documented in patients' records. Tuberculosis screening consisted of intradermal purified protein derivative (Mantoux) testing or risk assessment by 24 months of age; anaemia screening consisted of a complete blood count, packed cell volume, or haemoglobin concentration, or a risk assessment by 18 months of age; screening for environmental lead exposure consisted of a blood test or risk assessment by 24 months of age. A complete immunisation schedule comprised four injections of diphtheria-pertussis-tetanus vaccine; three oral polio vaccines; one measles, mumps, and rubella immunisation; three Haemophilus influenzae type b vaccines; and three hepatitis B vaccines by 24 months of age. For children whose records showed they had not had all immunisations, we searched the North Carolina state immunisation registry for additional immunisations. (Some of the preventive measures are specific to the United States. For example, living in environments contaminated by lead, such as lead based paint, may have long term effects on children's intelligence, psychological development, and behaviour. About 3% of children in North Carolina have blood lead levels above 15 μg/dl.14)

    The primary outcome measure was the change over time in the proportion of children in each practice who received all four of these services. We selected this outcome because delivery of all four services together indicates a more reliable system of preventive care, thereby providing a more stringent test of the primary study hypothesis.
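    To show how this primary outcome reduces to a per-child conjunction of the four age specific criteria, here is a minimal scoring sketch. The chart fields and their names are hypothetical; they do not reproduce the study's abstraction form.

```python
# Hypothetical scoring of a single chart against the four age specific
# criteria described above. Field names are invented for illustration;
# they are not the study's actual abstraction form.

REQUIRED_DOSES = {"DTP": 4, "polio": 3, "MMR": 1, "Hib": 3, "hepB": 3}

def fully_immunised(doses_by_24_months):
    """All required vaccine doses documented by 24 months of age."""
    return all(doses_by_24_months.get(vaccine, 0) >= needed
               for vaccine, needed in REQUIRED_DOSES.items())

def up_to_date_all_four(chart):
    """Primary outcome: all four services delivered at the right ages."""
    return (fully_immunised(chart["doses_by_24_months"])
            and chart["tb_test_or_risk_assessment_by_24_months"]
            and chart["anaemia_test_or_risk_assessment_by_18_months"]
            and chart["lead_test_or_risk_assessment_by_24_months"])

chart = {
    "doses_by_24_months": {"DTP": 4, "polio": 3, "MMR": 1, "Hib": 3, "hepB": 3},
    "tb_test_or_risk_assessment_by_24_months": True,
    "anaemia_test_or_risk_assessment_by_18_months": True,
    "lead_test_or_risk_assessment_by_24_months": False,
}
print(up_to_date_all_four(chart))  # False: lead screening is missing
```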

    Data collection

    We collected data from medical records, using repeated, random samples of 30 charts of children between 24 and 30 months of age in each practice, and from surveys of doctors and office staff administered at baseline and at the end of the study period.9 Patients who had been seen at least three times and for whom there was no evidence of having transferred out of the practice were eligible.

    We used the data from medical records to give feedback to intervention practices about their performance, including comparison with all other practices, every six months. Such feedback is integral to process improvement methods. Control group practices received feedback at baseline and annually for two years without comparison to other practices. Follow up of intervention practices was planned for 15-18 months after the 12 month intervention.

    We abstracted data from medical records by using methods developed previously2 and assessed inter-rater reliability throughout the study by randomly re-abstracting 20% of the charts. Reliability was excellent, with κ remaining above 0.85 for each preventive service. Abstractors were not informed of the study arm to which each practice was assigned.
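    For readers unfamiliar with κ, the following sketch shows how agreement between the original and repeat abstraction of one binary chart item can be summarised. The rating vectors are invented; only the formula reflects standard practice.

```python
# Cohen's kappa for a binary chart item, as might be computed on the 20%
# of charts that were re-abstracted. The two rating vectors are invented.

def cohen_kappa(rater_a, rater_b):
    """kappa = (p_o - p_e) / (1 - p_e) for two binary ratings."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n   # observed agreement
    p_yes_a = sum(rater_a) / n
    p_yes_b = sum(rater_b) / n
    # chance agreement: both "yes" or both "no" under independence
    p_e = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    return (p_o - p_e) / (1 - p_e)

original = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]   # first abstraction
repeat   = [1, 1, 0, 1, 0, 1, 1, 1, 0, 1]   # re-abstraction of same charts
print(round(cohen_kappa(original, repeat), 2))  # 0.78 with these invented data
```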

    Statistical analysis

    We conducted an intention to treat analysis in which all intervention and control practices were included. The estimated power of the study to detect a difference of 20% between intervention and control practices was 80% with a type I error of 0.05 (two tailed), using methods that accounted for within-practice clustering of the study data.15
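    As an illustration of how such a clustering adjustment is usually made, the sketch below inflates a standard two proportion sample size by the design effect 1 + (m − 1) × ICC. The intracluster correlation of 0.05 and the baseline rates are assumptions for illustration only; the study's actual calculation followed its cited methods.15

```python
from math import ceil
from statistics import NormalDist

# Standard two proportion sample size, inflated by the design effect
# 1 + (m - 1) * ICC to allow for clustering of children within practices.
# The ICC of 0.05 and the rates below are assumptions for illustration;
# they are not taken from the paper.

def n_per_arm_clustered(p1, p2, m, icc, alpha=0.05, power=0.80):
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    n_unclustered = ((z_a + z_b) ** 2 * 2 * p_bar * (1 - p_bar)) / (p1 - p2) ** 2
    design_effect = 1 + (m - 1) * icc
    return ceil(n_unclustered * design_effect)

# Illustrative numbers only: detect a 20 percentage point difference with
# 30 charts sampled per practice.
n = n_per_arm_clustered(p1=0.10, p2=0.30, m=30, icc=0.05)
print(n, "children per arm, i.e. about", ceil(n / 30), "practices per arm")
```

    With different assumed baseline rates or intracluster correlation the implied number of practices changes substantially, which is why such calculations are sensitive to the ICC estimate used.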

    To compare the magnitude and pattern of change over time in delivery of preventive services between intervention and control practices, we fitted a logistic random regression model.16 Our experience showed that practices needed at least 6-12 months to plan and test changes before spreading them in the practice. Consequently, the effects of the intervention could not be detected in patients' charts until children in the practice had had time to be “exposed” to the changes. Feedback of the baseline chart abstraction results typically took place in a meeting three months after baseline data collection.

    We selected nine months after the feedback meeting as the earliest point at which changes in performance could be detected. All chart abstractions before this point (12 months after baseline data collection) were designated “implementation” points; all points after this mark were designated follow up. “Implementation” and follow up points in control practices were not considered separately.

    We compared the change in the proportion of children per practice with age appropriate preventive services over four time intervals: after the implementation period (at 12 months), and 18, 24, and 30 months after the baseline chart abstraction. The fixed effects portion of the logistic random regression model included separate intercepts and implementation period slopes for the intervention and control groups. For the intervention group, linear and quadratic effects for post-implementation time were included (that is, a regression spline with one knot at the nine month mark). Post-implementation time effects were not included for the control group. Random practice effects for the intercept, implementation period slope, and post-implementation slope (linear effect only) were assumed to be independent and normally distributed with a mean of zero. Models were fitted by maximum likelihood (PROC NLMIXED, SAS version 8). To compute proportions and ratios of proportions, we transformed measures from the logistic model to the desired scale (for example, proportion) and used Taylor series approximations to compute standard errors and confidence intervals.
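    In symbols, one plausible rendering of this model (our notation, not taken from the paper) is as follows. Let $Y_{ijt}$ indicate that child $j$ in practice $i$, whose chart was abstracted $t$ months after baseline, was up to date on all four services, and let $(t-12)_+ = \max(0, t-12)$ denote post-implementation time (the knot nine months after the feedback meeting, about 12 months after baseline):

$$\operatorname{logit} \Pr(Y_{ijt}=1) = \beta_{0g} + \beta_{1g}\,t + I(g=\text{intervention})\left[\gamma_1 (t-12)_+ + \gamma_2 (t-12)_+^2\right] + b_{0i} + b_{1i}\,t + b_{2i}\,(t-12)_+$$

    where $g$ is the treatment group of practice $i$; $\beta_{0g}$ and $\beta_{1g}$ are the group specific intercepts and implementation period slopes; $\gamma_1$ and $\gamma_2$ capture the intervention group's post-implementation change; and $b_{0i}$, $b_{1i}$, $b_{2i}$ are independent, mean zero, normally distributed random practice effects. Estimates on the logit scale are then back transformed to proportions, with standard errors and confidence intervals obtained from first order Taylor series (delta method) approximations.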

    Results

    Practices and characteristics

    Of the practices screened for eligibility, 88 met the inclusion criteria (figure 1). After contact for recruitment, 24 were found to be ineligible. Eligible practices were stratified into blocks and recruited until the target number of practices was achieved. Of the 59 practices asked to participate, 44 (75%) agreed to be randomised. In the intervention group, one practice did not take part in the intervention and three dropped out after going bankrupt. In the control group, one practice went bankrupt during the study and dropped out.

    Fig 1 Recruitment process.

    *The five practices “not asked to participate” were not selected for recruitment because the target sample size had been achieved

    Randomisation produced intervention and control practices with comparable baseline rates of preventive services (table 1) and practice characteristics (table 2), although control practices were twice as likely to be physician owned. Only about 11% of children across all practices had all four preventive services documented in their charts; rates of lead and tuberculosis screening were particularly low.

    Table 1

    Mean (SD) percentage of patients with age appropriate preventive services in intervention and control practices at baseline.

    Table 2

    Characteristics of participating practices at baseline. Unless indicated otherwise, values are mean (minimum, maximum)


    Implementation of intervention

    Practices in the intervention group were followed for an average of 24 months after the beginning of the intervention. During the implementation period, all intervention practices developed improvement teams. Project teams met with practice improvement teams a median of 8.5 times (range 5-14). Of the 22 intervention practices, 18 (82%) implemented preventive services summaries in patient charts, 17 (77%) used tools to support risk assessments, 15 (68%) used prompting by clinicians, and 7 (32%) instituted new health maintenance records for well child visits.

    Effectiveness of intervention

    Figure 2 shows the pattern of change over time in the proportion of children per practice with all four preventive services in intervention and control groups. During the implementation period the slopes of the lines did not differ significantly between the intervention and control groups (P = 0.11). During the follow up period, the proportion of children with all services in control practices remained relatively constant, changing from 0.09 at the start of the follow up period to only 0.10 after 30 months of follow up. In contrast, the proportion of children with all services in intervention practices increased from 0.07 to 0.34 over the same time period. After baseline differences were adjusted for, the change in the prevalence of all four services between the beginning and the end of the study was 4.6-fold greater (95% confidence interval 1.6 to 13.2) in intervention practices than in control practices (table 3).

    Fig 2 Proportion of children with up to date records of all four preventive services

    Table 3

    Effect of the intervention on primary study outcome: proportion of children with all age appropriate preventive services in intervention and control practices during follow up period


    We also examined which of the preventive services were most affected by the intervention. The intervention had the greatest impact on lead, tuberculosis, and anaemia screening (figure 3). At the end of the follow up period, the proportion of children per practice who had a record of receiving each of the four individual preventive services was higher in intervention than in control practices; differences for tuberculosis (54% v 32%), lead (68% v 30%), and anaemia (79% v 71%) screening were significant (P < 0.05). For immunisation rates, the improvement over 30 months was about the same in intervention practices and control practices.

    Fig 3 Proportion of children with up to date information on individual preventive services

    To assess the extent to which these results were produced by differences in documentation, we compared the proportion of children receiving blood lead testing (which is indicated in only a subset of children) within the two groups and observed the same pattern of results. The proportion of children with age appropriate blood lead testing changed from 11% to 34% in intervention practices compared with 15% to 19% in control practices.

    The experience of one intervention practice shows how practices used small scale plan-do-study-act cycles to develop and implement changes to the delivery of preventive services. In this practice, the initial baseline data were met with scepticism. In an initial PDSA cycle, practice staff reviewed their own charts and confirmed the low rates of care. In a planning phase, the doctor leading the effort identified chart screening as an improvement to test. In a further series of PDSA cycles, nurses screened charts each time a child visited the practice and placed a brightly coloured sticker on the chart to indicate which preventive services were needed. Within a few months, the doctor-nurse team concluded that the changes represented meaningful improvements. Following this success, all the nurses in the practice were asked to adopt chart screening and prompting about needed services. Doctors reported that these changes reduced the time clinicians spent reviewing the chart, increasing the time available for dealing with the patient's needs and concerns. The doctor leading the effort subsequently applied the approach to redesigning the practice's system for telephone advice from nurses.

    Discussion

    A continuing education programme designed to assist primary care practices in testing and implementing “office systems” for preventive health care produced clinically and statistically significant improvements in rates of preventive care for children. The combination of information about effective approaches to preventive care, organised tools and resources, and training in modern methods of process improvement allowed practices to focus their efforts on improvement.

    A growing body of literature indicates that continuing medical education based on the way care is delivered in the practice setting can affect the outcomes of care delivery.17 However, efforts to help practices implement office systems have produced mixed results. In a randomised trial, Dietrich et al found that in primary care practices that implemented office systems, the performance of mammography and clinical breast examinations improved significantly.18 Subsequent attempts to introduce office systems to improve preventive care for adults have been unsuccessful.19 20 Reasons for the limited impact of such interventions include the size and complexity of practices involved, staff turnover, using a suboptimal quality improvement model that placed too much emphasis on planning rather than testing changes, insufficient emphasis on measurement to determine if changes were resulting in improvement, lack of motivation to change, inadequately developed content materials, inexperienced improvement team leaders, and insufficient time for improvement activities.21 Using a formal practice assessment, a practice-wide meeting, and prevention tools, and giving feedback on performance every six months, Goodwin et al reported a small but statistically significant increase in rates of preventive services.22

    We used a somewhat different approach, working side by side with the office team, providing information and coaching to each practice to develop improvement expertise within the practice. This avoided the loss of performance associated with “train the trainer” models23 because it did not depend on novice leaders during what was often their first application of improvement methods. We emphasised frequent, small scale tests to enable practices to “try out” changes without risking disruption of practice routines. The provision of tools and materials allowed practices to concentrate on improving care, and the emphasis on measurement encouraged practices to learn from their data, thereby engendering trust in the process.

    Limitations

    This study has several limitations. We selected practices that provided care for relatively large numbers of children in order to be able to detect an intervention effect; small paediatric practices and most family practices were excluded. Although this may mean the findings generalise only to larger, multi-physician settings, such environments tend to be more complex and thus stand to benefit more from quality improvement efforts.

    The primary outcome measure was the proportion of children within each practice who received all four age appropriate services. Some clinicians may not have agreed that all procedures were necessary; some children may not have been exposed to practice changes; and some services may have been provided without being documented. The increased use of blood and skin testing, in addition to risk factor screening, implies that improvements did not represent improved documentation alone.

    Immunisation rates did not improve significantly, but at the time the study was conducted North Carolina implemented a universal vaccine purchase programme and became the state with the highest immunisation rates in the United States.24

    Although a multi-arm trial could have evaluated the incremental value of audit and feedback, this approach would have increased the logistical complexity of the study, and the existing evidence indicated that audit and feedback used alone would have only small to moderate effectiveness.3

    In addition to the measurable results obtained in this study, we were encouraged by intervention practices' response. Given information about performance and a variety of evidence based strategies and tools, practices were motivated to test alternative approaches to care. Several practice improvement teams went on to initiate change in other clinical areas, such as asthma. Our results are of potential importance to current efforts to incorporate performance improvement and systems thinking as a core competency for physicians and other health professionals.

    Conclusion

    This study shows that continuing education oriented to improving primary care practices' systems for delivery of care is associated with important improvements in preventive care. An important next step will be to reduce the costs of assistance and to disseminate new approaches to more practices more rapidly. Future studies should also explore how to further increase the reliability of care, magnify the rate of improvement, and sustain improvements.

    What is already known on this topic

    Most preventive services are recommended for children under 5 years of age, but rates for their delivery in primary care are lower than desired

    Better “office systems” for preventive care seem to be associated with improved care, but efforts to help practices implement such systems have produced mixed results

    Continuing medical education based on the way care is delivered in the practice setting can affect the outcomes of care delivery

    What this study adds

    Practice oriented continuing education, combined with process improvement methods, can improve systems for delivery of preventive care in primary care

    Improvements in office systems are associated with important improvements in rates of preventive care

    The trial was a collaboration involving the University of North Carolina at Chapel Hill (UNC); the North Carolina Division of Medical Assistance; North Carolina Office of Research, Demonstrations, and Rural Health; Carolinas Medical Center; and the NC Area Health Education Centers. We wish to recognise the leadership of the North Carolina Office of Rural Health, Research and Demonstrations (Jeffrey Simms, Burnie Patterson), the North Carolina Area Health Education Centers (Harry Gallis, Thomas Bacon), and the North Carolina Division of Medical Assistance (Dennis Williams) without whose vision and persistence this study would not have been possible. We are indebted to the practice assistance teams at the Charlotte Area Health Education Center (Laura Noonan, Patricia Nesbit, Dawn Carpenter) and the North Carolina Office of Rural Health, Research and Demonstrations (Carol Powell, Deborah Shah), who provided assistance to the participating practices. We also thank the staff of the North Carolina Center for Children's Healthcare Improvement at the University of North Carolina at Chapel Hill (Sarah McGovern, Jennifer Pearce, Jenny Bowes) who participated in curriculum development, training, and data collection and Laura Peterson, who helped with preparation of the manuscript. Most importantly, we wish to recognise the dedication of the primary care practices without whose participation this study would not have been possible.

    Footnotes

    • Contributors PAM, CML, JMS, BJF, LK-E conceived and designed the study. PAM, CML, JMS acquired data; PAM, CML, BJF, LK-E analysed and interpreted data. PAM, CML, LK-E, JMS, BJF, DEM drafted and revised the manuscript. LK-E provided statistical expertise; PAM, CML obtained funding; JMS, CML, PAM supervised the study. PAM is guarantor.

    • Funding US Agency for Healthcare Research and Quality (RO1-HS08509), US Bureau of Maternal and Child Health (RO1-HS08509), the North Carolina Division of Medical Assistance, the North Carolina Area Health Education Centers, and the Robert Wood Johnson Foundation Generalist Faculty Scholars Program.

    • Competing interests None declared.

    • Ethical approval University of North Carolina at Chapel Hill ethics committee.

    References
