Editorials

International comparisons of perinatal indicators

BMJ 2012; 344 doi: http://dx.doi.org/10.1136/bmj.e477 (Published 17 February 2012) Cite this as: BMJ 2012;344:e477
Russell S Kirby, professor
Department of Community and Family Health, University of South Florida College of Public Health, Tampa, FL 33612, USA
rkirby@health.usf.edu

The real story lies behind the numbers

The media intermittently remind us that no matter how good we think we have it, those in other nations or regions have it better or worse, whether the indicator is life expectancy, premature mortality, survival with chronic disease or disability, or perinatal measures. Regardless of whether they are based on national statistics, reports from the World Health Organization, or reports from other international agencies, these stories are generally presented along similar lines: states, provinces, or nations are ranked from best to worst. Pundits, politicians, and health experts then expound on the implications of these results for the future of their jurisdictions.

In their linked paper (doi:10.1136/bmj.e746), Joseph and colleagues present direct cross-national comparisons of perinatal indicators.1 They compared proportions of live births under 500 g, under 1000 g, at less than 24 weeks’ gestation, and at less than 28 weeks’ gestation across 25 nations in Europe and North America for the calendar year 2004. They also recalculated overall neonatal, infant, and fetal mortality rates after excluding babies born weighing under 1000 g.
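To see why such an exclusion matters, consider a minimal sketch of the recalculation, using entirely hypothetical counts rather than the authors’ data: births under 1000 g make up a small share of all live births but a large share of neonatal deaths, so removing them can roughly halve a crude rate.

```python
# Illustrative only: counts are hypothetical, not figures from the linked paper.
def neonatal_mortality_rate(deaths, live_births):
    """Neonatal deaths per 1000 live births."""
    return 1000 * deaths / live_births

# Hypothetical country: 4 000 000 live births and 18 000 neonatal deaths overall,
# of which 10 000 births and 8 000 deaths involve babies weighing under 1000 g.
crude      = neonatal_mortality_rate(18_000, 4_000_000)                   # 4.50 per 1000
restricted = neonatal_mortality_rate(18_000 - 8_000, 4_000_000 - 10_000)  # ~2.51 per 1000

print(f"Crude: {crude:.2f} per 1000; excluding <1000 g births: {restricted:.2f} per 1000")
```

Because countries differ in how completely they register these very small babies, the size of this shift, and hence a country’s rank, depends partly on reporting practice rather than on underlying mortality.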

Studies examining the data underlying national ranks of key perinatal indicators are almost as old as the rankings themselves.2 3 4 5 6 In cross-national comparisons, definitions and reporting requirements, as well as variation in clinical practice, can amplify observed differences. As Benjamin Disraeli reputedly said, “There are three kinds of lies: lies, damned lies, and statistics.” Statistics, of course, can be manipulated to provide evidence to support very different assertions. As with so many things in life, there is a story behind these numbers.

It is therefore not surprising that Joseph and colleagues found that a nation’s ranking among the 25 countries on infant mortality varies when exclusions are applied to make the underlying data more comparable. For example, the United States ranked 22nd among the 25 nations in crude neonatal mortality but 11th when live births under 1000 g were excluded. Interestingly, with the US as the reference group, the crude neonatal mortality rate ratios of the countries ranked 19th to 23rd did not differ significantly from that of the US; when live births under 1000 g were excluded, the rate ratios were more variable, and although the US ranked 11th, its rate did not differ significantly from those of most of the 10 countries ranked ahead of it. Similar patterns can be found in the analyses of infant and fetal death.

Assigning ordinal rankings to countries or jurisdictions is a dangerous practice because of random fluctuations in incidence, small numbers of events, and varying sizes of underlying populations. Gerzoff and Williamson examined the rank order of US states for several indicators, including infant mortality, by calculating the 90% confidence intervals for each reported ranking.7 Although for most states the confidence interval comprised a range of 10 or fewer ranks, several less populous states had ranges of 30 or more ranks out of the 51 possible in this analysis. More recently, a similar analysis of child mortality across 42 hospitals participating in the Pediatric Health Information System found that rankings are statistically imprecise, which limits their usefulness as comparative measures of quality of care.8 Joseph and colleagues show that differences in legal reporting criteria for birth weight and gestational age strongly influence the crude rates for various measures of perinatal mortality at the national level and that this variability is at least as pronounced across smaller political units.1 Differences in regional reporting patterns are masked to some extent in national vital statistics studies, as shown recently by Ehrenthal and colleagues,9 who found considerable variation in the reporting of births of under 500 g across US states.
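The statistical fragility of ordinal rankings is easy to demonstrate. The short simulation below is only a sketch, not the method used by Gerzoff and Williamson; it uses hypothetical jurisdictions, birth counts, and underlying rates, draws annual deaths as Poisson counts, and summarises how widely each jurisdiction’s rank fluctuates.

```python
# Sketch: how random fluctuation in small numbers of events blurs ordinal rankings.
# Jurisdictions, birth counts, and "true" rates are hypothetical, chosen only to
# contrast one large population with several smaller ones.
import numpy as np

rng = np.random.default_rng(0)

names       = ["A (large)", "B", "C", "D", "E (small)"]
live_births = np.array([500_000, 60_000, 55_000, 50_000, 8_000])
true_rate   = np.array([5.0, 5.2, 5.4, 5.6, 5.8]) / 1000   # deaths per live birth

n_sim = 10_000
ranks = np.empty((n_sim, len(names)), dtype=int)
for i in range(n_sim):
    deaths = rng.poisson(true_rate * live_births)   # simulated annual deaths
    observed_rate = deaths / live_births
    # rank 1 = lowest observed mortality rate
    ranks[i] = observed_rate.argsort().argsort() + 1

for j, name in enumerate(names):
    lo, hi = np.percentile(ranks[:, j], [5, 95])
    print(f"{name:10s} 90% rank interval: {int(lo)}-{int(hi)}")
```

In a setup like this the jurisdiction with only a few thousand births can plausibly occupy almost any rank from year to year, whereas the large jurisdiction is pinned near the top of the table; this is the pattern that both the state level and hospital level analyses describe.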

The findings of the current study and of other studies raise serious questions about the time-honoured practice of producing international or state rankings for key perinatal indicators. Politicians and advocates will continue to use rankings to grab people’s attention. As Churchill once said, “The first lesson that you must learn is that, when I call for statistics about the rate of infant mortality, what I want is proof that fewer babies died when I was prime minister than when anyone else was prime minister. That is a political statistic.” Clearly, no politician in office wishes to be the recipient of bad news about infant mortality. Sometimes, however, rankings can serve as a call to action, as in an analysis of the declining relative position of Wisconsin among the US states regarding maternal and child health outcomes in the late 20th century.10 In the meantime, those constructing national rankings need to devise ways to account for changing reference populations, variations in reporting definitions and clinical practices, and the variability inherent in rare outcomes and small underlying populations.


Footnotes

  • Research, doi:10.1136/bmj.e746
  • Competing interests: The author has completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declares: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

References