Re: Discrepancies in autologous bone marrow stem cell trials and enhancement of ejection fraction (DAMASCENE): weighted regression and meta-analysis
We are grateful that Prof Moye, the leader of the CCTRN bone marrow stem cell research network, encourages detailed analysis of the field. CCTRN provided one of the few trials (t23) in which no discrepancies were found. It reported an effect size on ejection fraction of -3.00 EF units (95% CI -7.05 to 0.95, p=0.14). As cardiologists treating patients with heart failure, we share his disappointment that well-reported trials such as his tend to report neutral results.
Prof. Moye raises several questions.
Query 1. Did DAMASCENE really bury data in complex appendices?
In DAMASCENE we took special pains to show the data in full. We listed all 600+ discrepancies explicitly. That amount of documentation would itself be longer than the paper, so it had to go into Appendices 1 to 3 [http://www.bmj.com/content/suppl/2014/04/28/bmj.g2688.DC1].
We are disappointed if the listings appear complex. We designed the format to be simple, and look forward to suggestions for how it could have been simpler. Each row represents one discrepancy. The first column is a discrepancy ID. The second column describes the discrepancy in a manner intended to be concise but clear. The other columns list the contradictory statements and where to find them.
Differentiating between errors of presentation and misrepresentations of data and research designs would require access to raw data and direct observation of trial methods. For example, without such access, how can readers judge why women described in an early trial report were later reported as men?
(a) They may have undergone gender reassignment treatment during the trial, in which case the researchers should be commended for diligent data-recording.
(b) There may have been miscounting in one or both reports.
(c) After the initial randomization, patients may have been reallocated between groups, or even removed. This reduces the scientific value of the data.
We refrained from attempting the impossible. We simply counted the number of discrepancies in the publicly obtainable trial reports.
Query 2. “Editing errors”?
We did not state the causes of the discrepancies because we did not know them. We are grateful that Prof Moye provides insight as a highly experienced insider and expert witness [http://www.law360.com/articles/7421/as-drug-risks-mount-senators-put-pre...].
We do recognise that if researchers edit data carefully, the process might be impossible for readers to detect. In contrast, if a researcher edits data inconsistently, for example differently in the text versus a table, then this might be detectable. Discrepancies arising in this manner would indeed be editing errors.
Nevertheless, we feel that even if editing of data is carried out skillfully and undetectably, it is not desirable.
Clearly opinions vary on what is acceptable. That is why we show the full data to help readers reach an individualised interpretation.
Query 3. Missing perspective?
We agree that discrepancies can occur in any field. DAMASCENE reported that in this field the number of discrepancies was associated with the effect size reported. It fully lists the 600+ discrepancies identified so that readers can check them.
We have not seen this question addressed in such a detailed and open method in other studies.
Query 4. Ecological fallacy, or not.
The ecological fallacy [http://en.wikipedia.org/wiki/Ecological_fallacy] involves unsoundly drawing inferences about individuals from examining groups. In DAMASCENE the individual unit is the trial, not the patient. We specifically avoided the ecological fallacy by making sure we did not draw inferences about individual patients (or authors).
Query 5. Only part of the LVEF effect size is explained by discrepancy count.
We completely agree that the discrepancy count is far from fully explaining the difference in effect size between trials. It was merely the most powerful variable, in both univariate analysis (Appendix 9, p=0.0026) and multivariate analysis (Appendix 9, p=0.0004). In the multivariate analysis, which also included sample size, blinding, and high-quality sequence generation, the multivariate R squared was 0.43 (p=0.00005).
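The weighted regression behind such figures is conceptually simple. As a minimal illustrative sketch only (the data below are made-up placeholders, not the DAMASCENE dataset), a univariate weighted least-squares fit of effect size on discrepancy count, together with its weighted R squared, can be computed as follows:

```python
def weighted_regression(x, y, w):
    """Weighted least squares for y = a + b*x.

    Returns (intercept, slope, r_squared), where r_squared is the
    weighted coefficient of determination.
    """
    sw = sum(w)
    # Weighted means of predictor and outcome.
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / sw
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / sw
    # Weighted covariance and variance terms.
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    slope = sxy / sxx
    intercept = ybar - slope * xbar
    # Weighted residual and total sums of squares.
    ss_res = sum(wi * (yi - (intercept + slope * xi)) ** 2
                 for wi, xi, yi in zip(w, x, y))
    ss_tot = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
    r_squared = 1.0 - ss_res / ss_tot
    return intercept, slope, r_squared

# Hypothetical trials: discrepancy counts, reported effect sizes on
# ejection fraction (EF units), and weights (e.g. larger trials
# contributing more, as in an inverse-variance weighting scheme).
discrepancies = [0, 5, 10, 20]
effect_size = [0.0, 2.0, 4.0, 8.0]
weights = [3.0, 1.0, 2.0, 1.0]

a, b, r2 = weighted_regression(discrepancies, effect_size, weights)
```

In a real analysis the weights, covariates (sample size, blinding, sequence generation), and p-values would come from the meta-analytic model; this sketch shows only the mechanics of the univariate weighted fit.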
Query 6. Do mistakes in writing imply mistakes in research design and execution?
We do not know. We agree that this would be valuable to determine.
It is possible, as Prof Moye implies, that all the trials were designed and executed well, with the discrepancies arising solely at the reporting stage. However, we still await a suggested mechanism by which variation in the reliability of reporting alone could be linked to effect size.
Query 7. C-CURE
DAMASCENE did not include the C-CURE trial, which is of “manufactured cells” [http://dx.doi.org/10.1016/j.cryobiol.2013.09.060]. Some of our group, with other correspondents, wrote through the journal asking for clarification of approximately 100 discrepancies [http://tinyurl.com/khhyoc7] in the reports of this trial, which gave a large effect.
We fully agree with Prof Moye that we should join together in fostering a research culture of open scientific questioning and answering. C-CURE’s authors should be credited for responding to the questions [http://tinyurl.com/mwz5qcf]: trial authors and journals do not always do so [http://dx.doi.org/10.1016/j.ijcard.2013.04.152].
However, answers are most useful if they are complete. Many C-CURE discrepancies were resolved by repudiation of previous reports, replacement of tables, and other clarifications. Unfortunately, some appear to remain open queries.
For example, the primary endpoint of the registered trial is radionuclide ejection fraction [http://apps.who.int/trialsearch/trial.aspx?trialid=EUCTR2007-007699-40-BE] and [https://www.clinicaltrialsregister.eu/ctr-search/trial/2007-007699-40/BE]. The paper does not show those data, and the rebuttal does not state whether any patients underwent radionuclide ejection fraction measurement.
Data are given on what was originally a secondary endpoint, echocardiographic ejection fraction. However, the paper’s text [http://dx.doi.org/10.1016/j.jacc.2013.02.071] and the company’s prospectus state a 7.0 unit increase in mean ejection fraction in cell recipients, while the individual patient data in Figure 4a of the paper appear to show an increase of distinctly less than 5.5 units. Perhaps Prof Moye, or readers, can help us resolve this puzzle?
Query 8. Which institution is reported to have asked for SCIPIO to be retracted?
We apologise for any ambiguity in our note added at the proofs stage. The SCIPIO paper [http://dx.doi.org/10.1016/S0140-6736(11)61590-0] involved two institutions: the University of Louisville (12 authors, including the first author) and Harvard University (8 authors, including the last author). The institution reported to have asked the journal for a retraction was not Louisville but Harvard [http://www.forbes.com/sites/larryhusten/2014/04/11/lancet-editors-raises...].
We apologise to Harvard if our note did not credit it for being observant, and to Louisville for the converse.
Some members of our group hope that SCIPIO is not retracted because it is unique [http://retractionwatch.com/2014/04/11/harvard-brigham-heart-researcher-u...].
Our scientific and clinical community, and patients, continue to be eager for methods to regenerate damaged or diseased heart tissue.
We have recently highlighted [http://dx.doi.org/10.1136/bmj.g1937] in another exciting field of cardiology - renal denervation for hypertension - that it is a false economy to design trials poorly just because a field is young. We are pleased that Prof Moye supports this concept, and we can see that he has personally delivered high standards with CCTRN.
We are glad of Prof Moye’s support for bringing discrepancies to readers’ attention. As indicated in the paper, we encourage readers to report any discrepancies in DAMASCENE through the journal so that they can be resolved.
Competing interests: No competing interests