

Impact of covert duplicate publication on meta-analysis: a case study

BMJ 1997;315:635 (Published 13 September 1997)
  1. Martin R Tramèr, research fellowa,
  2. D John M Reynolds, consultant physician and clinical pharmacologistb,
  3. R Andrew Moore, consultant biochemista,
  4. Henry J McQuay, clinical reader in pain reliefa
  1. a Pain Research, Nuffield Department of Anaesthetics, Churchill Hospital, Oxford OX3 7LJ
  2. b John Radcliffe Hospital, Oxford OX3 9DU
  1. Correspondence to: Dr Tramèr
  • Accepted 21 July 1997


Objective: To quantify the impact of duplicate data on estimates of efficacy.

Design: Systematic search for published full reports of randomised controlled trials investigating ondansetron's effect on postoperative emesis. Abstracts were not considered.

Data sources: Eighty four trials (11 980 patients receiving ondansetron) published between 1991 and September 1996.

Main outcome measures: Percentage of duplicated trials and patient data. Estimation of antiemetic efficacy (prevention of emesis) of the most duplicated ondansetron regimen. Comparison between the efficacy of non-duplicated and duplicated data.

Results: Data from nine trials had been published in 14 further reports, duplicating data from 3335 patients receiving ondansetron; none used a clear cross reference. Intravenous ondansetron 4 mg versus placebo was investigated in 16 reports not subject to duplicate publication, three reports subject to duplicate publication, and six duplicates of those three reports. The number needed to treat to prevent vomiting within 24 hours was 9.5 (95% confidence interval 6.9 to 15) in the 16 non-duplicated reports and 3.9 (3.3 to 4.8) in the three reports which were duplicated (P<0.00001). When these 19 were combined the number needed to treat was 6.4 (5.3 to 7.9). When all original and duplicate reports were combined (n=25) the apparent number needed to treat improved to 4.9 (4.4 to 5.6).

Conclusions: By searching systematically we found that 17% of published full reports of randomised trials, and 28% of the patient data, were duplicated. Trials reporting greater treatment effect were significantly more likely to be duplicated. Inclusion of duplicated data in meta-analysis led to a 23% overestimation of ondansetron's antiemetic efficacy.

Key messages

  • Although publishing the same data more than once is strongly discouraged, the impact of duplicate data on meta-analysis has not been quantified

  • Re-analysing an important trial, and cross referencing to original reports (overt duplication), may be necessary and valuable in some circumstances

  • Covert duplication, masked by change of authors, of language, or by adding extra data, causes problems. One danger is that patient data are analysed more than once in meta-analysis

  • 17% of systematically searched randomised trials of ondansetron as a postoperative antiemetic were covert duplicates and resulted in 28% of patient data being duplicated. None of these reports cross referenced the original source. Duplication led to an overestimation of ondansetron's antiemetic efficacy of 23%. Trials reporting greater treatment effect were significantly more likely to be duplicated

  • Covert duplication of data has major implications for the assessment of drug efficacy and safety


The 5-HT3 receptor antagonist ondansetron is used to prevent and treat postoperative nausea and vomiting. While writing a systematic review of ondansetron's efficacy in treating established postoperative nausea and vomiting we found that one large multicentre trial had been published three times.1

Duplicate publication may be overt, such as reanalysis of an important trial with appropriate cross referencing to original reports, or it may be covert, with the same data published again without cross referencing, often with no intention to provide novel information. Covert duplication may result in qualitative exaggeration of an intervention's efficacy,2 3 but it also poses a threat to quantitative (meta) analysis. The danger is that data from the same patient will be analysed more than once, leading to biased estimates of treatment efficacy, exaggerated accuracy, and a false impression of drug safety.

We therefore searched for all published full reports of randomised controlled trials of ondansetron in the surgical setting to identify duplicated trials (both covert and overt) and efficacy data and to measure the impact of any duplication on the estimate of ondansetron's efficacy.


Systematic search

A systematic search was performed for published full reports of randomised controlled trials which tested the antiemetic efficacy of prophylactic and therapeutic ondansetron compared with placebo, no treatment, or other antiemetics on nausea and vomiting after a general anaesthetic. Medline (Knowledge Finder 4.0, Silver Platter 3.25), Embase, and Biological Abstracts databases were searched, without restriction to the English language and using different search strategies1 4 with free term combinations (date of the last electronic search 19 September 1996). Additional reports were identified from reference lists of retrieved reports and review articles on postoperative emesis and ondansetron and by hand searching locally available anaesthesia journals. We checked our database with the database of published trials provided by the manufacturer of ondansetron. Abstracts were not considered. Unpublished trials were not sought. Authors of reports were contacted to clarify uncertainty about duplicate publications. If there was an ambiguous reply or no reply we contacted the manufacturer.

Extraction of data

Information on authors, journals (parent journal or supplement), number of patients per treatment and control arm, sponsorship, cross references to reports with similar data or preliminary reports, and anaesthetic techniques was taken from each report. We extracted efficacy data because intention to treat information was not provided in all reports.

The following definitions are used in this paper. Sponsorship by the manufacturer was assumed when a trial or a journal issue (a supplement for instance) was stated to be sponsored or when one of the coauthors was an employee of the pharmaceutical company. When we discovered matching reports of trials the full report (a multicentre trial) or the report which was published first was regarded as the original. Preliminary reports, republication of original data with or without data from other trials, or republication of part of data from an original report were regarded as duplicate. A cross reference was defined as a reference which clearly stated either the original source of the data or that the same data would be published elsewhere.

Evidence that data were duplicated was based on: the presence of a cross reference, confirmation by authors or manufacturer, or similarity of reported data.

Impact of duplication

We planned to assess the statistical significance and clinical relevance of any difference in efficacy with and without duplicated data using the dataset with the most duplication—prophylactic intravenous ondansetron 4 mg versus placebo. Efficacy was defined as prevention of postoperative emesis. Relative benefit (relative risk) was calculated with 95% confidence intervals using a random effects model,5 together with point estimates and 95% confidence intervals of the number needed to treat.6 7 8 These calculations were made for non-duplicated data, for duplicated data, and for the two sets combined—that is, as published. A positive number needed to treat indicates how many patients have to be given ondansetron to prevent vomiting in one who would have vomited on placebo. Statistical significance of any difference between numbers needed to treat was assumed if there was no overlap of the confidence intervals, and, for independent datasets, it was verified by the formula:

z = (NNT1 − NNT2)/√(SE1² + SE2²)

where NNT1 and NNT2 are the two numbers needed to treat and SE1 and SE2 their standard errors.
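For illustration, the number needed to treat arithmetic and the z test described above can be sketched as follows. This is a minimal sketch, not the authors' analysis code; the standard error of the NNT is approximated by the delta method, and the counts in the usage example are hypothetical rather than data from the review.

```python
from math import sqrt

def nnt_with_ci(e_treat, n_treat, e_ctrl, n_ctrl, z_crit=1.96):
    """Number needed to treat with 95% CI.

    e_* = patients free of vomiting (the desired outcome),
    n_* = group sizes. Returns (nnt, (ci_low, ci_high), se_nnt).
    """
    p_t, p_c = e_treat / n_treat, e_ctrl / n_ctrl
    arr = p_t - p_c                                   # absolute risk reduction
    se_arr = sqrt(p_t * (1 - p_t) / n_treat + p_c * (1 - p_c) / n_ctrl)
    nnt = 1 / arr
    # CI of the NNT is the reciprocal of the CI of the risk difference
    ci = (1 / (arr + z_crit * se_arr), 1 / (arr - z_crit * se_arr))
    se_nnt = se_arr / arr ** 2                        # delta-method approximation
    return nnt, ci, se_nnt

def z_difference(nnt1, se1, nnt2, se2):
    """z statistic for two independent numbers needed to treat (formula above)."""
    return (nnt1 - nnt2) / sqrt(se1 ** 2 + se2 ** 2)

# Hypothetical example: 70/100 patients free of vomiting on treatment v 50/100 on placebo
nnt, (lo, hi), se = nnt_with_ci(70, 100, 50, 100)
print(f"NNT {nnt:.1f} (95% CI {lo:.1f} to {hi:.1f})")  # NNT 5.0 (95% CI 3.0 to 14.9)
```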


A total of 104 reports were considered: 16 were subsequently excluded because they were not randomised (10), the number of patients per treatment group was not mentioned (2), the observation period was not stated (2), general anaesthesia was not used (1), or they were not adequately controlled (1). We could not obtain hard copies of four citations (and neither could the manufacturer).

Demonstrating duplication

The remaining 84 randomised controlled trials, published between 1991 and September 1996, reported data on 20 181 patients, of whom 11 980 received ondansetron. There was evidence that nine trials were published in part or in full a total of 23 times (table 1).9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29

Ondansetron for prevention and treatment of postoperative nausea and vomiting. Duplication of data from full reports of published randomised controlled trials


Seven of the nine duplicated trials were multicentre studies (trials 1, 2, 4, 5, 7, 8, and 9), of which two complete data sets (1 and 7) had been published three times, each over the subsequent two years. Trial 1 studied treatment of established postoperative nausea and vomiting.1 All other trials studied prophylaxis.4 Of the 23 reports which were either duplicated or contained duplicated data, 14 (61%) declared sponsorship and 10 (43%) appeared in a journal supplement. Our subsequent investigation suggests that all of these 23 reports emanated from nine trials sponsored by the manufacturer. Of the 61 randomised controlled trials which were not subject to duplication, 17 (28%) declared sponsorship and 11 (18%) appeared in a supplement. Seventy five percent of duplicated data appeared in six English language journals, and 25% in one French and two Italian journals. All reports except four, with a total of 290 patients receiving ondansetron (3b, 6a, 9b, and 9e), could be found in either Medline, Biological Abstracts, or Embase. The manufacturer provided us with a comprehensive printout of all controlled studies of ondansetron in postoperative emesis. All except one (6a) of the 23 original or duplicated reports were listed but with no distinction between originals and duplicates.

Evidence of duplication

One preliminary report (4b) mentioned a forthcoming full publication (4a) in a footnote, although the full publication failed to cross reference the preliminary report. No clear cross reference was found in any of the 21 other reports. Two papers (6b and 9c) cited matching reports (6a and 9b, respectively) but without saying or suggesting that these were the same patients.

Duplication of five trials (1, 2, 3, 7, and 9) was confirmed by the authors of the original report, the duplicate report, or both. Authors of a sixth trial (5) referred us to the manufacturer, who confirmed duplication.

Authors of two trials (6 and 8) did not respond to our inquiry. The efficacy data and other outcomes were, however, identical in the original and duplicate publications (table 1).

Problems in identifying duplicates

Parts of one multicentre trial (9a) had been published in four other reports (9b-e), each written by investigators from the multicentre trial. All the authors confirmed duplication, even though in three reports (9b-d) the anaesthetic was different from that described in the multicentre trial, in one (9d) a new treatment arm (ondansetron 8 mg) was added to the study design, and one report was the combination of already duplicated multicentre data (9b) with some new data to produce a third paper (9c). Four pairs of identical trials (2, 5, 6, and 8) were published by completely different authors without any common authorship, as happened with report 7c, which contained data from a trial which had been published twice before. Two pairs of duplicates combined data from one or two other trials and were then published as one paper (2b with 5b, 7c with 8b). Some duplicates used different numbers of patients (1a and 1c, 5a and 5b) or patient characteristics (2a and 2b, 7a and 7b) from the original. Trial 3 was published in two different languages. For trial 6 the sex distribution was different in the two reports.

Impact of duplication: originals versus duplicates

Fourteen of 84 analysed randomised trials (17%) contained duplicated data. The 14 duplicates documented 4886 patients, 3335 treated with ondansetron. In all, 28% (3335/11 980) of the ondansetron patient data studied in published randomised trials were duplicated.

The studies most commonly duplicated were those which compared prophylactic intravenous ondansetron 4 mg with placebo. This comparison was reported in 19 original trials,4 16 of which were not subject to duplicate publication, but data from three (trials 7a, 8a, and 9a) were subsequently duplicated in six further reports (7b, 7c, 8b, 9b-d). The three original reports which were subject to duplicate publication were large multicentre trials.

In the 16 reports which were not duplicated the number needed to treat to prevent vomiting within 24 hours with intravenous ondansetron 4 mg compared with placebo was 9.5 (95% confidence interval 6.9 to 15), relative benefit 1.33 (1.21 to 1.47) (fig 1). In the three original reports which were subject to duplication, the number needed to treat was 3.9 (3.3 to 4.8), relative benefit 1.44 (1.33 to 1.55). The difference between the two numbers needed to treat was statistically significant (z=4.84, P<0.00001).


Fig 1 Treatment efficacy in 16 original trials which were not duplicated, three original trials which were duplicated, six duplicates of these three trials, and the combination of originals and duplicates. Number needed to treat (with 95% confidence intervals) to prevent vomiting up to the 24th postoperative hour with intravenous ondansetron 4 mg compared with placebo. The numbers above the symbols are the numbers of reports.

In the six duplicates of those three trials the number needed to treat to prevent vomiting within 24 hours with intravenous ondansetron 4 mg compared with placebo was 3.3 (2.8 to 3.8), relative benefit 1.62 (1.43 to 1.83) (fig 1). The combined number needed to treat with data from all three duplicated trials and their six duplicates was 3.5 (3.2 to 4.0), relative benefit 1.54 (1.46 to 1.63). This number needed to treat was significantly different from the number needed to treat from the 16 non-duplicated trials (z=6.8, P<0.00001).

Impact of duplication on meta-analysis

When all original reports were combined (n=19) the number needed to treat to prevent vomiting within 24 hours with intravenous ondansetron 4 mg compared with placebo was 6.4 (5.3 to 7.9), relative benefit 1.36 (1.26 to 1.47).4 When original and duplicate reports were combined (n=25), the number needed to treat improved to 4.9 (4.4 to 5.6), relative benefit 1.44 (1.33 to 1.55) (fig 1). Adding the duplicates to the originals therefore produced an overestimation of treatment efficacy of 23% ((6.4-4.9)/6.4).
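The 23% figure is simple arithmetic on the two pooled numbers needed to treat reported above:

```python
# Pooled NNTs from the meta-analysis: originals only, and originals plus duplicates
nnt_originals_only = 6.4    # 19 original reports
nnt_with_duplicates = 4.9   # all 25 reports, as published

# Relative overestimation of efficacy introduced by the duplicates
overestimation = (nnt_originals_only - nnt_with_duplicates) / nnt_originals_only
print(round(overestimation * 100))  # 23
```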


Why make a fuss about duplicate publications? There are situations where duplicate publication is justified,30 and there are laudable examples,31 in different languages and with clear cross references.32 The reason for making a fuss is covert duplication, because when duplication of patient information is unstated, estimates of treatment efficacy and safety may be biased. There are two arguments here, qualitative and quantitative.

Instead of reading nine genuine reports of the efficacy and safety of a drug the reader is presented with 23. Even thoughtful readers will fail to spot the duplication33 and give the conclusions more emphasis than they deserve,3 34 believing that the drug was tested in more patients than was actually the case. An exaggerated perception of both treatment efficacy and safety is the likely result.

At face value the relevant published literature contains 11 980 patients treated with ondansetron in 84 trials. In reality it was 8645 patients in 70 trials. An uncritical analysis would have overestimated the number of trials by 17% and the number of patients by 28%.
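The inflation of the apparent evidence base can be checked directly from these counts:

```python
# Counts as published v counts after excluding covert duplicates
trials_published, trials_real = 84, 70
patients_published, patients_real = 11980, 8645

extra_trials = trials_published - trials_real        # 14 duplicate reports
extra_patients = patients_published - patients_real  # 3335 duplicated patients

print(round(extra_trials / trials_published * 100))      # 17 (% of trials)
print(round(extra_patients / patients_published * 100))  # 28 (% of patients)
```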

As is clear from fig 1, failure to exclude duplicates would have overestimated ondansetron's efficacy by 23% (improving the number needed to treat from 6.4 to 4.9), and exaggerated the accuracy of the point estimate (narrowing the confidence interval). While the exaggerated accuracy is a function of the inflated numbers, the cause of the improvement in the number needed to treat is not so obvious. It results from the fact that the trials which were subject to duplication reported much greater efficacy (number needed to treat 3.9) than the non-duplicated reports (number needed to treat 9.5) with no overlap of confidence intervals. The actual duplicates exaggerated treatment efficacy even more (number needed to treat 3.3).

The difference between originals, duplicates, and the combined data has clinical relevance. The number needed to treat of 6.4 for the original reports means that more than six patients have to be treated with ondansetron 4 mg to prevent vomiting in one who would have vomited had he or she received a placebo. For the duplicates only three patients need to be treated for one to benefit. Combining originals and duplicates (that is, combining the data as they appear in the literature) improved the 1 in 6 to 1 in 5, just within our preset definition of clinically relevant postoperative antiemetic efficacy.35

Overestimation of treatment efficacy by including duplicates is yet another methodological problem for meta-analysis.36 The effect is in the same direction and of the same degree as the effects of improper blinding or inadequate concealment of treatment allocation.37 Unwarranted assumptions about safety might also be made. No major drug related adverse effect seen in 11 980 patients sounds more persuasive than none in 8645 patients.38

Covert duplication should be uncovered by several of the processes of publication: by authors adhering to the Vancouver guidelines; by authors having to sign a declaration of unique publication when they submit a paper to a journal; and at peer review, when expert reviewers should spot the similarity of data.

The Vancouver guidelines state, “Manuscripts are reviewed for possible publication with the understanding that they are being submitted to one journal at a time and have not been published, simultaneously submitted, or already accepted for publication elsewhere.”39 All the duplicates reported here were published as full reports in journals which quote this requirement or referred to the guidelines at the time of publication. We do not know whether or not authors signed a declaration of unique publication. Half of these reports were published in journal supplements. The supplements' guides for authors were the same as those of their parent journals. If peer review was used the process clearly failed to detect or eliminate duplicates.

The key issue here is cross referencing. Without cross referencing duplication becomes covert. Only one preliminary report17 mentioned a future full paper, but when the full paper was published it did not mention the preliminary report. No clear cross referencing was present in the other 21 reports. Two reports referenced a matching paper but with no indication that they used the same data. These cross references were of no value, because the reader was left in the dark about the relation between the papers.

It could be argued that these covert duplicates were so obvious that they would be identified easily. However, they have not been identified. Original ondansetron data together with data from duplicate reports have been unwittingly cited in clinical investigations,40 41 42 43 in opinion leaders' articles on postoperative nausea and vomiting,44 in a widely cited review article on ondansetron,45 in a standard textbook on ambulatory surgery,46 and in a review about the ethics of antiemetic trials.33 Covert duplicate reports can be very difficult to recognise.

Examples of masking included different language or completely different authors, adding further data to duplicated material, presenting analyses as both efficacy and intention to treat, or combining the duplicated data with data from another trial and reporting a combined analysis. We detected the duplicates only because of the stringency of our systematic review process. Confirmation by the original authors had to be sought, but responses were far from comprehensive. Neither could the list of publications provided by the manufacturer resolve the matter, because the manufacturer did not distinguish between originals and duplicates.

Ironically, while we were writing this paper a report of yet another large multicentre trial of ondansetron appeared in a peer reviewed journal.47 It contained the same data as a report published two years earlier.23 Another 226 patients given ondansetron were duplicated in the published literature, and the crucial cross reference to the original paper was lacking. Caveat lector.


We are grateful for helpful comments from David Sackett, Doug Altman, and Jon Deeks.

Funding: UK overseas research student award (MRT). Pain research funds.

Conflict of interest: None.

