
Research

Influence of medical journal press releases on the quality of associated newspaper coverage: retrospective cohort study

BMJ 2012; 344 doi: https://doi.org/10.1136/bmj.d8164 (Published 27 January 2012) Cite this as: BMJ 2012;344:d8164
  1. Lisa M Schwartz, professor1,2
  2. Steven Woloshin, professor1,2
  3. Alice Andrews, instructor2
  4. Therese A Stukel, professor3,4,5
  1. VA Outcomes Group, Department of Veterans Affairs Medical Center, White River Junction, VT 05009, USA
  2. Center for Medicine and the Media, The Dartmouth Institute for Health Policy and Clinical Practice, Lebanon, NH, USA
  3. Center for Population Health, The Dartmouth Institute for Health Policy and Clinical Practice, Lebanon
  4. Institute for Clinical Evaluative Sciences, Toronto, ON, Canada
  5. Department of Health Policy, Management, and Evaluation, University of Toronto, Toronto
  Correspondence to: S Woloshin steven.woloshin{at}dartmouth.edu
  • Accepted 2 November 2011

Abstract

Objective To determine whether the quality of press releases issued by medical journals can influence the quality of associated newspaper stories.

Design Retrospective cohort study of medical journal press releases and associated news stories.

Setting We reviewed consecutive issues (going backwards from January 2009) of five major medical journals (Annals of Internal Medicine, BMJ, Journal of the National Cancer Institute, JAMA, and New England Journal of Medicine) to identify the first 100 original research articles that had quantifiable outcomes and had generated any newspaper coverage (unique stories ≥100 words long). We identified 759 associated newspaper stories using Lexis Nexis and Factiva searches, and 68 journal press releases using Eurekalert and journal website searches. Two independent research assistants assessed the quality of journal articles, press releases, and a stratified random sample of associated newspaper stories (n=343), using a structured coding scheme for the presence of specific quality measures: basic study facts, quantification of the main result, harms, and limitations.

Main outcome Proportion of newspaper stories with specific quality measures (adjusted for whether the quality measure was present in the journal article’s abstract or editor note).

Results We recorded a median of three newspaper stories per journal article (range 1-72). Of 343 stories analysed, 71% reported on articles for which medical journals had issued press releases. 9% of stories quantified the main result with absolute risks when this information was not in the press release, 53% did so when it was in the press release (relative risk 6.0, 95% confidence interval 2.3 to 15.4), and 20% when no press release was issued (2.2, 0.83 to 6.1). 133 (39%) stories reported on research describing beneficial interventions. 24% mentioned harms (or specifically declared no harms) when harms were not mentioned in the press release, 68% when mentioned in the press release (2.8, 1.1 to 7.4), and 36% when no press release was issued (1.5, 0.49 to 4.4). 256 (75%) stories reported on research with important limitations. 16% reported any limitations when limitations were not mentioned in the press release, 48% when mentioned in the press release (3.0, 1.5 to 6.2), and 21% if no press release was issued (1.3, 0.50 to 3.6).

Conclusion High quality press releases issued by medical journals seem to make the quality of associated newspaper stories better, whereas low quality press releases might make them worse.

Introduction

Media coverage of medical research often fails to provide the information needed for the public to understand the findings and to decide whether to believe them.1 2 3 4 5 6 7 Although it is easy to blame journalists for poor quality reporting, problems with coverage could begin with the journalists’ sources. For example, relevant information in medical journal articles might be missing or difficult to find. Similarly, press releases—the most direct way by which sources (such as the pharmaceutical industry, academic medical centres, and medical journals) communicate with the media—often omit key facts, and fail to acknowledge important limitations.8 9 10

Nevertheless, press releases do have some impact. They can increase the chance of a journal article receiving media coverage.11 12 13 14 15 And journalists seem to find press releases useful;13 15 in fact, some appear to rely on them. An independent news rating website concluded that up to a third of health news stories relied solely or largely on press releases.5

However, the influence of press releases on the quality of subsequent media coverage is unknown. To understand this association, we studied press releases issued by medical journals since they are such a regular and trusted feature of journalists’ work.15 We used a retrospective cohort design to learn how the presence of key information in medical journal press releases—such as quantifying the main result or highlighting important research limitations—influences the presence of that information in subsequent newspaper stories.

Methods

Sample selection

Medical journal articles

Figure 1 shows how we identified original research articles generating newspaper coverage. We reviewed consecutive issues (going backwards from January 2009) of five major medical journals: Annals of Internal Medicine (Annals), British Medical Journal (BMJ), Journal of the National Cancer Institute (JNCI), Journal of the American Medical Association (JAMA), and the New England Journal of Medicine (NEJM). We selected these journals because their articles frequently receive news coverage, yet their editorial practices differ: Annals and JNCI articles include editor notes highlighting study cautions (whereas BMJ, JAMA, and NEJM articles do not), and NEJM is the only journal in the group that does not issue press releases.

Fig 1 Study selection of journal articles, press releases, and newspaper stories

To rate newspaper stories on how they quantified results, we included only journal articles with straightforward quantifiable outcomes (that is, we excluded qualitative studies, case reports, biological mechanism studies, and studies not using an individual unit of analysis). To identify associated newspaper stories, we searched Lexis Nexis and Factiva (news article databases) for stories that included the medical journal name; the search window extended from two months before the journal article’s print date to two months after.

We reached our sample size target (100 journal articles) after reviewing 230 potentially eligible journal articles published from January 2009 back to July 2008. Assuming an average of three newspaper stories per study and an intraclass correlation coefficient of 0.5 to account for the clustering of newspaper stories within a study, we needed 40 journal articles to detect an odds ratio of 9 for absolute risks in the story and 48 articles to detect an odds ratio of 4.9 for mentioning harms (assuming α of 0.05 and 80% power).
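
The clustering adjustment in this calculation follows the standard design effect, DEFF = 1 + (m - 1)*ICC, where m is the average number of stories per article. As an illustration only (the baseline proportions below are hypothetical and not taken from our study), a short Python sketch shows how the clustering roughly doubles the number of stories, and hence journal articles, required:

    # Illustrative power calculation with a design effect for clustered stories.
    # The proportions 0.10 and 0.30 are hypothetical placeholders.
    from statsmodels.stats.proportion import proportion_effectsize
    from statsmodels.stats.power import NormalIndPower

    m, icc = 3, 0.5                      # stories per article, intraclass correlation
    deff = 1 + (m - 1) * icc             # design effect = 2.0

    h = proportion_effectsize(0.30, 0.10)
    n_independent = NormalIndPower().solve_power(effect_size=h, alpha=0.05, power=0.80)

    n_stories = n_independent * deff     # stories needed per group after clustering
    n_articles = n_stories / m           # journal articles needed per group
    print(round(n_stories), round(n_articles))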

Medical journal press releases and newspaper coverage

To identify press releases issued by medical journals, we searched www.eurekalert.org, a press release database. Of the 100 journal articles identified, 68 had an associated press release. To ensure that no press releases were missed, we also checked each journal’s website.

We identified 759 unique newspaper stories reporting on the 100 journal articles; 76 additional stories were identified but excluded because they were exact duplicates, too brief (<100 words long), or editorials (fig 1). For the detailed assessment of reporting quality, we selected a random sample of 343 newspaper stories, including up to six stories for each journal article. For journal articles with more than six stories, we selected a stratified random sample: three stories from major newspapers and three from minor newspapers. If one category (for example, minor newspapers) did not have enough stories, we substituted stories from the other category (for example, major newspapers). We classified international newspapers as “major” if they appeared on the Lexis Nexis list of major world newspapers, and newspapers from the United States as “major” if they appeared on BurrellesLuce’s list of the top 20 US daily newspapers.16 Minor newspapers were those that appeared on neither list.
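
The selection rule for each journal article (up to six stories, split evenly between major and minor newspapers, with substitution from the other stratum when one falls short) can be expressed directly in code. The sketch below is illustrative rather than our actual selection script, and it assumes each story is represented as a dictionary with a "tier" field:

    import random

    def sample_stories(stories, per_stratum=3, seed=0):
        """Stratified sample of up to six stories for one journal article:
        up to three from major newspapers and three from minor newspapers,
        substituting from the other stratum when one has too few stories."""
        rng = random.Random(seed)
        major = [s for s in stories if s["tier"] == "major"]    # "tier" is an assumed field
        minor = [s for s in stories if s["tier"] == "minor"]
        take = rng.sample(major, min(per_stratum, len(major)))
        take += rng.sample(minor, min(per_stratum, len(minor)))
        shortfall = 2 * per_stratum - len(take)                 # substitute if a stratum is short
        leftovers = [s for s in major + minor if s not in take]
        take += rng.sample(leftovers, min(shortfall, len(leftovers)))
        return take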

Quality assessment

We created structured coding forms for the quality assessment of the journal articles (we coded only the abstract and, if available, the editor note), press releases, and a random sample of newspaper stories (web appendix). The coding forms were developed and refined during a pilot study. Each form included specific quality measures on the reporting of basic study facts, main results, harms, and limitations (box).

Quality measures used for content analysis

Basic study facts
  • Study size, funding source(s), clear identification of randomised trials, time frames of longitudinal studies, survey response rates, and accurate description (compared with the abstract) of the study exposure and outcome.

  • Only the first four measures had sufficient variability to compare the quality of press releases and newspaper stories. The remaining measures varied too little across newspaper stories: exposures and outcomes were described accurately in 99% and 97% of stories, respectively, and survey response rates were reported in 0%.

Main result
  • Was the main result quantified? If so, was it quantified with any absolute risks (including proportions, means, or medians)?

  • Were numbers used correctly? For example, did the numbers reported correspond to those in the abstract in terms of magnitude and time frame, and were odds or odds ratios misinterpreted as risks or risk ratios? The odds ratio issue was included under the category of numbers used correctly rather than as a separate measure.

Harms
  • For studies of interventions claiming to be beneficial:

  • Were harms mentioned (or a statement asserting there were no harms)?

  • Were harms quantified? If so, were they quantified with any absolute risks?

  • We did not present data for the correct use of harm numbers, because too few newspaper stories quantified harms (that is, fewer than five stories in each press release category).

Study limitations
  • Were any major study limitations noted?

  • We considered limitations noted in either the journal abstract or editor note, or on a design specific list that we created (web appendix) that includes limitations inherent in various study designs: small study size, inferences about causation, selection bias, representativeness of sample, confounding, clinical relevance of surrogate outcomes, clinical meaning of scores or surrogate outcomes, hypothetical nature of decision models, applicability of animal studies to humans, clinical relevance of gene association studies, uncontrolled studies, studies stopped early because of benefit, and multifactorial dietary or behavioural interventions.

  • We presented data only for two major limitations (confounding and the clinical meaning of scores or surrogate outcomes), because the other limitations occurred too infrequently in newspaper stories for inclusion (that is, in fewer than five stories in each press release category).

To assess the reliability of the coding forms, two research assistants at doctoral level independently coded all the journal articles and press releases, and the sample of newspaper stories. We compared their responses by calculating κ statistics. For the three coding forms, the mean κ value was 0.78 (range 0.49-1.00) for the journal articles, 0.71 (0.46-1.00) for the press releases, and 0.71 (0.49-1.00) for the newspaper stories. Disagreements were resolved by discussion between the assessors or with the other authors (if the assessors did not reach consensus easily).
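
Per item agreement of this kind is conventionally summarised with Cohen’s κ; a minimal sketch (with made up codings, assuming each coder’s answers for one quality measure are stored as parallel lists) is:

    # Minimal reliability check for one quality measure; the codings are hypothetical.
    from sklearn.metrics import cohen_kappa_score

    coder_a = ["yes", "yes", "no", "no", "yes", "no"]
    coder_b = ["yes", "no",  "no", "no", "yes", "no"]

    kappa = cohen_kappa_score(coder_a, coder_b)
    print(f"kappa = {kappa:.2f}")   # values near 1 indicate near perfect agreement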

Analysis

We used multiple regression (the unit of analysis was the newspaper story) to assess the association between the quality of press releases (exposure) and the quality of newspaper stories (outcome). The main outcome was the proportion of newspaper stories reporting each of the quality measures (for example, whether the story used absolute risks to report results or noted study limitations). The exposure variable had three levels: press release without the quality measure under consideration (reference category), press release with the quality measure, and no press release issued. To isolate the effect of the press release, we adjusted the analyses for whether the quality measure was present in the abstract or editor note of the journal article.

Because newspaper stories are clustered within journal articles (that is, different stories can report on the same article), we used generalised linear models (Stata 11.0). We fitted Poisson regression models with a log link to the binary quality outcomes to estimate adjusted relative risks and proportions. The models incorporated variance overdispersion in the estimates of all standard errors to account for the clustering, since quality measures in newspaper stories about the same medical journal article could be correlated.17 Statistical tests were two sided and calculated at the 5% level of significance.
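
In current software this specification corresponds to a generalised linear model with a Poisson family, a log link, and standard errors clustered on the journal article. The Python/statsmodels sketch below is not our Stata code; the file and column names are hypothetical and are shown only to make the model structure concrete:

    # Modified Poisson model (log link) for one binary quality measure, with
    # cluster-robust standard errors grouped by journal article.
    # "stories.csv" and all column names are hypothetical.
    import numpy as np
    import pandas as pd
    import statsmodels.api as sm
    import statsmodels.formula.api as smf

    stories = pd.read_csv("stories.csv")   # one row per newspaper story

    model = smf.glm(
        # outcome: story reported absolute risks (0/1);
        # exposure: press release category, reference = release without the measure;
        # adjustment: measure present in the journal abstract or editor note (0/1)
        "story_absolute_risk ~ C(press_release, Treatment('without_measure'))"
        " + abstract_has_measure",
        data=stories,
        family=sm.families.Poisson(),
    ).fit(cov_type="cluster", cov_kwds={"groups": stories["article_id"]})

    print(np.exp(model.params))      # exponentiated coefficients: adjusted relative risks
    print(np.exp(model.conf_int()))  # 95% confidence intervals on the same scale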

Results

Medical journal articles and press releases

The 100 original research articles identified in our search (that is, those with quantifiable outcomes published in five major medical journals and receiving newspaper coverage) were equally divided between observational studies and randomised trials or meta-analyses (table 1). Sixty eight articles had an associated press release (of the remaining 32, 25 were published in the NEJM which, by policy, does not issue press releases).

Table 1 Description of medical journal articles and associated newspaper stories, by journal. Data are number or number (%), unless indicated otherwise

Table 2 compares the quality of reporting in journal article abstracts (including editor notes) with the quality in associated press releases. Quality was similarly high with respect to basic study facts (for example, study size was reported in 94% of abstracts v 88% of press releases) and similarly low for mentioning and quantifying the harms of intervention (43% v 35% and 20% v 13%, respectively). However, significantly more abstracts than press releases quantified the main result with absolute risks (50% v 34%, P=0.04) and mentioned any research limitations (71% v 40%, P=0.004).

Table 2 Quality of reporting in journal article abstracts, journal press releases, and associated newspaper stories. Data are number or number (%)

Newspaper stories

Our selection of 100 journal articles generated 759 newspaper stories—a median of three news stories per article (range 1-72). Of the associated stories, 7% and 5% appeared in the top five newspapers with the highest circulation in the USA and the United Kingdom, respectively (table 1). Overall, 8% of stories were reported on page 1 of the newspaper (table 1). Almost all stories accurately reported the exposure and outcome (table 2). However, stories were missing important information: for example, only 23% quantified the main result using absolute risks, 41% mentioned the harms of beneficial interventions, and 29% mentioned any study limitation.

Association between quality of press releases and quality of newspaper stories

Of 343 newspaper stories reviewed for the quality assessment, 245 (71%) reported on journal articles for which the medical journals had issued a press release.

High quality press releases versus low quality press releases

For all 13 quality measures, newspaper stories were more likely to report the measure if the relevant information appeared in a press release (that is, a high quality press release) than if the information was missing (P=0.0002, sign test; fig 2). This association was significant in separate comparisons for nine quality measures (fig 2). For example, when the press release did not quantify the main result with absolute risks, 9% of the 168 newspaper stories provided these numbers. By contrast, when the press release did provide absolute risks, 53% of the 77 newspaper stories provided these numbers (relative risk 6.0, 95% confidence interval 2.3 to 15.4). Because the accessibility of absolute risks varies with study design, we repeated this analysis for randomised trials only, in which absolute risks are always easily accessible. For the 140 newspaper stories reporting on randomised trials, the influence of press releases reporting absolute risks was the same as the main analysis (relative risk 5.8, 95% confidence interval 2.2 to 15.5).

Fig 2 Association between quality of medical journal press releases and quality of associated newspaper stories. Proportions (%) of stories with specific quality measures, adjusted for whether the measure was in the journal article abstract or editor note. The quality measure of whether a randomised trial was clearly identified was not included because the adjustment model would not converge (crude proportions of stories with the measure: 0% (press release does not identify trial), 29% (press release identifies trial), and 9% (no press release)). *Significant association. †Relative risks with 0 as numerator or denominator were calculated by Haldane’s method for bias correction of small samples18
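
Haldane’s method, cited in the footnote above, is commonly implemented as adding 0.5 to each cell of the 2×2 table so that relative risks with zero counts remain defined. A small sketch (with hypothetical counts, not the actual study data):

    def corrected_relative_risk(events_exposed, n_exposed, events_unexposed, n_unexposed):
        """Relative risk with a Haldane-style continuity correction:
        0.5 is added to each cell of the 2x2 table before the ratio is taken."""
        a = events_exposed + 0.5
        b = (n_exposed - events_exposed) + 0.5
        c = events_unexposed + 0.5
        d = (n_unexposed - events_unexposed) + 0.5
        return (a / (a + b)) / (c / (c + d))

    # Hypothetical example: 5/17 stories report the measure v 0/20 stories
    print(round(corrected_relative_risk(5, 17, 0, 20), 1))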

Of the newspaper stories reporting on beneficial interventions, 24% mentioned harms (or specifically declared none) when harms were not mentioned in the press release, compared with 68% when mentioned in the press release (relative risk 2.8, 95% confidence interval 1.1 to 7.4). Of the stories reporting on research with important limitations, 16% reported limitations if limitations were not mentioned in the press release, compared with 48% if mentioned in the press release (3.0, 1.5 to 6.2).

We also tested for a journal effect, because journals that tend to issue high quality press releases might take further steps to improve the quality of subsequent news coverage, irrespective of issuing a press release. We re-ran our models with an indicator term for each journal. Controlling for a journal effect made little difference to the magnitude and significance of the association between the quality of press releases and the quality of newspaper stories (models failed for infrequent events, such as quantifying harms, quantifying harms with absolute risks, and mentioning confounding or the clinical meaning of surrogate outcomes).

Presence of a quality measure in the press release had a stronger influence on the quality of associated newspaper stories than its presence in the abstract (table 3). The association between information in the press release and in the story was significant for seven of the 12 quality measures that could be compared (table 3). The corresponding association between information in the abstract and in the story was significant for two of the 12 measures. In absolute terms, the independent effect of information in the press release was larger than the effect of information in the abstract for eight of 12 measures. The only measure for which the absolute effect of the abstract was greater than that of the press release was for quantifying the main result (although press releases were substantially more influential for quantifying the main result with absolute risks).

Table 3 Independent effect of quality measures in press releases and journal abstracts (or editor notes) on the quality of associated newspaper stories

No press releases versus low quality press releases

Figure 2 shows that for 12 of the 13 quality measures, newspaper stories were more likely to report the measure when no press release was issued than when a press release was issued that did not include the quality measure (P=0.003, sign test). Three comparisons significantly favoured no press release over a low quality press release: mentioning funding, using numbers correctly, and mentioning the clinical meaning of surrogate outcomes (fig 2).

High quality press releases versus no press releases

For basic study facts, the quality of subsequent newspaper reporting was roughly the same whether a high quality press release or no press release was issued. However, high quality press releases had a stronger influence than no press release on quantifying the main result with absolute numbers (53% v 20%, P=0.001), mentioning harms of beneficial interventions (68% v 36%, P=0.01), and noting limitations of research with important limitations (48% v 21%, P=0.06; fig 2).

Discussion

The news media matter: medical reporting not only educates but also influences health beliefs and behaviours.19 Our study shows that press releases are important as well. Higher quality press releases issued by medical journals were associated with higher quality reporting in subsequent newspaper stories. In fact, the influence of press releases on subsequent newspaper stories was generally stronger than that of journal abstracts. Fundamental information such as absolute risks, harms, and limitations was more likely to be reported in newspaper stories when this information appeared in a medical journal press release than when it was missing from the press release or when no press release was issued. Furthermore, our data suggest that poor quality press releases were worse than no press release being issued: fundamental information was less likely to be reported in newspaper stories when it was missing from the press release than when no press release was issued at all (although the findings were generally not statistically significant).

Limitations of study

Our results should be interpreted in the light of several limitations. The first concern in any observational study is the extent to which unmeasured confounding could explain the results. Although the association between quality of the press releases and quality of the newspaper stories was independent of the quality of the journal abstracts, other confounding factors might exist. For example, journals that tend to issue high quality press releases might take further steps to improve subsequent news coverage, such as improving the readability of the article text, reducing the amount of complex research published, or increasing the authors’ accessibility to the media. We found no evidence for confounding by such a journal effect.

Journalists might also be influenced by non-journal press releases, although such press releases are unlikely to improve the quality of news coverage since numbers and cautions are rarely mentioned in press releases issued by academic medical centres10 or industry.8 Irrespective of quality, non-journal press releases cannot confound our observations because they are issued after journal press releases. We can also exclude other potential confounders that relate to the quality of newspaper stories but that are not plausibly related to the quality of press releases issued by medical journals, such as interviews with investigators or experts, or the journalist’s ability.

Moreover, the repetition of identical information from the journal press release in the newspaper story helps to support a causal association. For example, of 240 stories that quantified results, 101 (42%) reported a number appearing in both the press release and journal abstract. But newspaper stories were almost four times as likely to report numbers that appeared only in the press release compared with numbers appearing only in the journal abstract (62 (26%) v 17 (7%), P≤0.001).

Secondly, we only looked at the association between journal press releases and newspaper stories—we did not examine broadcast, web, and social media platforms. A recent survey by the Pew Research Center showed that, although newspaper circulation has contracted recently, print media are still a major source of health news.20 Nevertheless, non-print sources are gaining in impact, particularly web based media. The effect of press releases on the coverage of medical journal articles in these media is unknown. In view of concerns about the quality of non-print media (such as a lack of dedicated staff for health news, rapid deadlines, and limited or no editing), journal press releases could have a stronger effect in these new media than in the print media.

Thirdly, some subjectivity is inherent in any content analysis. We tried to maximise objectivity by creating and extensively pilot testing a structured coding scheme. We observed a high degree of agreement in our formal assessment of intercoder reliability.

Fourthly, we could have introduced bias through our use of stratified random sampling to select newspaper stories for the content analysis. However, substantial bias is unlikely for two reasons. The distribution of coverage between major and minor newspapers was similar for all 759 associated stories and for the 343 stories in the sample (39% (296/759) v 47% (161/343) appeared in major newspapers). Furthermore, stratified analyses of major and minor newspapers yielded findings similar to those of the main analysis (that the presence of relevant information in press releases was associated with its presence in associated newspaper stories).

What is already known on this topic

  • Press releases issued by medical journals vary in quality, often omitting key facts and failing to acknowledge important study limitations

  • The influence of press release quality on the quality of subsequent newspaper coverage is unknown

What this study adds

  • Press releases of high quality seem to improve the quality of associated newspaper stories, whereas press releases of low quality might make newspaper stories worse.

  • Presence of key information in press releases had a stronger influence on the quality of newspaper stories than the presence of the same information in journal abstracts

  • Medical journals should use press releases not simply to make medical news, but to make news reporting better.

Conclusions and policy implications

We found that newspaper stories reporting on medical journal research frequently failed to quantify the main results with absolute numbers and failed to note harms of interventions, study limitations, or study facts—findings consistent with previous investigations.1 2 3 5 6 7 The fact that medical journal abstracts often lack this information too is disappointing, as noted in our study and in other studies evaluating the quality of reporting in abstracts.21 22 23 Abstracts should routinely highlight these items, perhaps under separate headings (as Annals articles do for limitations sections in their abstracts). Routine reporting of absolute risks is the only item that would present a challenge, because particular study designs need unique approaches. The Grading of Recommendations, Assessment, Development and Evaluation (GRADE) working group provides guidance for meta-analyses: use the mean absolute risk either from the control groups or from an external population estimate, and generate the absolute risk for the intervention group by applying the summary ratio measure.24 The same approach with an external population standard should be used for case control studies if absolute risks cannot be calculated directly from the study itself.
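
As a concrete illustration of the GRADE approach described above, the absolute risk in the intervention group is obtained by multiplying the assumed baseline risk by the summary ratio measure; the numbers in the sketch below are hypothetical:

    def absolute_effect(baseline_risk, risk_ratio):
        """Derive the intervention group's absolute risk, and the risk difference,
        from an assumed baseline risk and a summary risk ratio."""
        intervention_risk = baseline_risk * risk_ratio
        return intervention_risk, intervention_risk - baseline_risk

    # Hypothetical example: baseline risk 8%, pooled risk ratio 0.75
    risk, difference = absolute_effect(0.08, 0.75)
    print(f"{risk:.0%}, {difference:+.0%}")   # prints 6% and -2%, an absolute reduction of 2 percentage points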

Reporting on medical research is challenging: newspapers need to reach readers who vary widely in, for example, statistical literacy and reading levels. But these issues are not unique to medical news. Journalists constantly report quantitative information. Imagine the sports section without scores, player statistics, or team standing tables; or political polls without numbers. Although further work is needed to improve public understanding of medical research, a first step is to ensure that people have access to the fundamental information—basic study facts, quantified results, important study limitations—information they need to understand the findings and to decide whether to believe them. Our results suggest that press releases of high quality increase the chance that readers will receive this information.

High quality abstracts might improve newspaper coverage. But our observations suggest that well written press releases issued by medical journals could do even more to improve the communication of medical news to the public. Our observation that press releases have more influence than journal abstracts on reporting is unsurprising. Abstracts are dense, technical, and written mainly for a professional audience. Press releases are written in a non-technical narrative format that explicitly targets journalists, many of whom have limited scientific training.

High quality press releases are a simple way for medical journals to increase the chance of newspapers receiving key information. We hope our observations encourage medical journals to issue high quality press releases. Press officers could use a checklist to remind them to include the basic facts, numbers, and cautions. A more ambitious approach would be to develop a standardised press release that would help journalists find key information, perhaps by including structured tables quantifying benefits and harms. Some small journals, however, simply lack the necessary staff to produce high quality press releases, emphasising the need for editors to ensure that the relevant information is easily accessible in the journal abstract.

Our study shows that there is substantial room for improving press releases. Medical journals should use press releases not simply to make medical news, but also to make news reporting better.

Notes

Cite this as: BMJ 2012;344:d8164

Footnotes

  • We thank Barnett S Kramer for his helpful comments on an earlier draft of this manuscript; and Honor Passow for organising the databases of the journal articles, press releases, and newspaper stories, and for coding.

  • Contributors: LMS and SW contributed equally to the creation of the manuscript—the order of their names is entirely arbitrary; they contributed to the study conception, design, analysis, and writing of the manuscript; and are guarantors. AA contributed to the implementation and critical review of manuscript drafts. TAS contributed to the study design, provided statistical support, and undertook a critical review of manuscript drafts.

  • Funding: This study was supported by a grant from the National Cancer Institute. The funding source had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation, review, or approval of the manuscript.

  • Competing interests: All authors have completed the Unified Competing Interest form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; and no other relationships or activities that could appear to have influenced the submitted work.

  • Data sharing: No additional data available.

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.
