Bad reporting does not mean bad methods for randomised trials: observational study of randomised controlled trials performed by the Radiation Therapy Oncology Group
BMJ 2004;328:22 doi: https://doi.org/10.1136/bmj.328.7430.22 (Published 01 January 2004)
- Heloisa P Soares, research assistant1,
- Stephanie Daniels, coordinator, Eastern Cooperative Oncology Group2,
- Ambuj Kumar, research associate1,
- Mike Clarke, director3,
- Charles Scott, senior director4,
- Suzanne Swann, senior biostatistician4,
- Benjamin Djulbegovic, professor of oncology and medicine1
- 1Department of Interdisciplinary Oncology, H Lee Moffitt Cancer Center and Research Institute, University of South Florida, 12902 Magnolia Drive, Tampa, FL 33612, USA
- 2H Lee Moffitt Cancer Center and Research Institute
- 3UK Cochrane Centre, Oxford OX2 7LG
- 4Statistical Unit, Radiation Therapy Oncology Group, PA 19107, USA
- Correspondence to: B Djulbegovic
- Accepted 10 October 2003
Objective To determine whether poor reporting of methods in randomised controlled trials reflects poor methods.
Design Observational study.
Setting Reports of randomised controlled trials conducted by the Radiation Therapy Oncology Group since its establishment in 1968.
Participants The Radiation Therapy Oncology Group.
Outcome measures Content of reports compared with the design features described in the protocols for all randomised controlled trials.
Results The methodological quality of 56 randomised controlled trials was better than reported. Adequate allocation concealment was achieved in all trials but reported in only 42% of papers. An intention to treat analysis was done in 83% of trials but reported in only 69% of papers. The sample size calculation was performed in 76% of the studies, but reported in only 16% of papers. End points were clearly defined and α and β errors were prespecified in 76% and 74% of the trials, respectively, but only reported in 10% of the papers. The one exception was the description of drop outs, where the frequency of reporting was similar to that contained in the original statistical files of the Radiation Therapy Oncology Group.
Conclusions The reporting of methodological aspects of randomised controlled trials does not necessarily reflect the conduct of the trial. Reviewing research protocols and contacting trialists for more information may improve quality assessment.
Evaluation of the quality of published clinical research is central to informed decision making. Information on trial quality is particularly important during peer review or when results from individual studies are evaluated in systematic reviews or meta-analyses.1 2 The quality of research should always be considered when a report is used in decision making in health care. Poorly conducted and reported research seriously compromises the integrity of the research process, especially if biased results receive false credibility.3
Many efforts have been made to improve the quality of studies and their related publications. The best example was the publication of the Consolidated Standards of Reporting Trials (CONSORT) statement to improve the quality of trial reports.3 Such efforts to improve the quality of clinical research, however, imply that if certain design or methodological features are not reported then they were not done. Ideally, assessment of the quality of clinical research should not only address reporting but also the original design and intended plan for its conduct and analysis as specified in the trial's research protocol. The importance of linking the final report of clinical trials with their original research protocols led some authors to argue that no randomised controlled trial should be conducted without publication of its research protocol.4 The reasons behind this are that critical comments may be encouraged leading to improvements in trial design, publication can be coupled with trial registration, the original protocol can be compared with what was subsequently done, and investigators can more easily appreciate what research is being conducted in their areas of interest.4 More importantly, publication of research protocols is one of the best ways to minimise bias by explicitly stating a priori hypotheses and methods without prior knowledge of the results.5 Many randomised controlled trials are preceded by the preparation of a written protocol, which describes the conduct of the trial more comprehensively than is possible in many journal articles, and making these protocols available would provide much useful additional information. We aimed to test the assumption that poor reporting reflects poor methods by comparing research protocols with the information published in the final reports of a set of randomised controlled trials.
We studied randomised controlled trials conducted by the Radiation Therapy Oncology Group. This is a national clinical cooperative group with a focus on the development of studies to improve survival and the quality of life of patients with cancer. It was established in 1968 and is publicly funded by the National Cancer Institute in the United States. The group consists of both clinical and laboratory investigators from over 260 institutions across the United States and Canada, and its membership includes nearly 90% of all comprehensive and clinical cancer centres designated by the National Cancer Institute.6 Before activation, the group's research protocols must pass through a rigorous peer review process and be reviewed and approved through its own committee system and the National Cancer Institute. Development of a protocol consists of six phases (box).6
Our analysis included data related to primary outcomes from all terminated phase III trials conducted by the Radiation Therapy Oncology Group since its establishment in 1968. We extracted data on methodological domains that have been acknowledged as vital for minimising bias in the conduct and analysis of randomised controlled trials.7 The effect of chance is usually minimised by appropriate planning of the trial's size, through a statistical power analysis using estimates for the expected differences between the interventions and prespecified type I (α) and type II (β) error levels.7 To investigate systematic bias we extracted data on the quality of the randomisation process (selection bias) and drop outs (attrition bias).8 Since the primary outcome was survival in most of the studies, we did not consider quality related to observer bias, such as the use of placebo or independent reading of outcomes (there were only three placebo controlled trials). We extracted data from all papers and protocols. The accuracy of these data was verified by the group's statistical centre.
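To illustrate the kind of a priori sample size calculation the paragraph above refers to, the following sketch uses the standard normal approximation for comparing two proportions with prespecified type I (α) and type II (β) error levels. The function name and the example proportions are illustrative assumptions, not taken from any Radiation Therapy Oncology Group protocol.

```python
import math
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, beta=0.20):
    """Approximate per-arm sample size for a two-sided comparison of two
    proportions, using the normal approximation. alpha is the type I error
    level; beta is the type II error level (power = 1 - beta)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = NormalDist().inv_cdf(1 - beta)        # critical value for power
    variance = p1 * (1 - p1) + p2 * (1 - p2)       # sum of binomial variances
    n = (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2
    return math.ceil(n)

# Hypothetical example: detecting an improvement from 50% to 65%
# with alpha = 0.05 (two sided) and 80% power.
n_per_arm = sample_size_two_proportions(0.50, 0.65)
```

Smaller expected differences or stricter error levels drive the required sample size up sharply, which is why protocols prespecify these quantities before the trial starts.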
Overall, there were 59 terminated phase III randomised controlled trials, three of which had not been published. We found 58 published papers for the remaining 56 protocols for use in our study. The figure summarises the results according to information from the papers, protocols, and the Radiation Therapy Oncology Group's statistical office. This shows that the reporting of methods in the publications does not necessarily reflect the methodological quality of the associated protocols. For example, a priori sample size calculations were performed in 44 (76%) trials, but this information was given in only nine of the 58 published papers (16%). Although all trials had adequate allocation concealment (through central randomisation), this was reported in only 24 (41%) of the papers. From our initial data extraction, we found that 40 (69%) of these trials used an intention to treat analysis. This number increased to 48 (83%) after verification by the Radiation Therapy Oncology Group. End points were clearly defined, and α and β errors were prespecified in 44 (76%) and 43 (74%) trials, respectively, but only reported in six (10%) of the papers. Interestingly, reporting of drop outs was meticulous; we found no difference in frequency (91%) between data presented in the papers and those in the original files.
Poor reporting of randomised controlled trials may not indicate poor quality of the trials themselves. We are aware of two other studies that reported empirical assessments of this relation. One study evaluated the quality of 63 randomised controlled trials of breast cancer treatment. Data were extracted from publications related to these trials and the results compared with the information provided by the principal investigators. The study concluded that faulty reporting reflected faulty methods.9 Another study, however, concluded that even well designed and conducted trials may be badly reported.2 This conclusion was drawn indirectly from an assessment of three key indicators of quality: adequate allocation concealment, appropriate blinding, and use of intention to treat analysis.2 Unlike our study, neither of these studies reported a comparison of the quality of reporting with the methods specified in the original research protocols.
Phases in development of a trial protocol by Radiation Therapy Oncology Group
Approval of concept
Review and approval of protocol among group members
Review by headquarters, including statistics, data management, quality assurance, protocol administrator, and review by the institutional review board
Review by National Cancer Institute
Activation of protocol
Revision of protocol
In general, the Radiation Therapy Oncology Group and, we predict, other cooperative oncology groups sponsored by the National Cancer Institute, have conducted research of good quality. Our study is the first formal investigation of this and, we believe, the first examination of the methodological quality of randomised controlled trials performed by a cooperative oncology group.
The relation between poor reporting and poor methods was raised in 1980 in a report on patient registration, randomisation, and the importance of avoiding bias in cooperative oncology trials.10 This may have helped the cooperative oncology groups to become especially aware of methodological issues relating to trials, even before the start of modern research on methodological quality.11 Consequently, for cooperative oncology groups such as the Radiation Therapy Oncology Group, even if the published description of the methods of a randomised controlled trial is poor, the quality of the trial should not be assumed to be poor. It is important to note, however, that our findings are based on a select sample of trials, which may not be representative of randomised controlled trials in general. Further studies are needed to confirm the generalisability of our findings.
Another important point relates to any assumptions that trials published before the 1996 CONSORT statement are more likely to be of poorer quality than those published after it.3 The CONSORT statement contains several methodological elements that should be followed to eliminate biased results. The intention of this statement was to improve the conduct, integrity, and reporting of randomised controlled trials.3 Our results show that studies conducted by the Radiation Therapy Oncology Group were of high quality even before publication of the CONSORT statement. It was the reports of the randomised controlled trials that showed deficiencies in their description of the methods used in the trials, not the trials themselves. Our findings indicate that although researchers in the Radiation Therapy Oncology Group were cognisant of key features in the design and conduct of good quality trials, they were less aware of the need to report these to a standard that would meet contemporary (CONSORT) requirements.
It is still appropriate to expect that the CONSORT statement will contribute to the conduct of higher quality randomised controlled trials in the future, since it incorporates and highlights many of the elements needed to perform a trial adequately. We agree with the call for all journals to adopt the policy of only publishing the report of a randomised controlled trial if it follows the CONSORT requirements. This is supported by empirical data that are now emerging about the usefulness of the CONSORT statement. For example, one study compared the quality of reports of trials before and after the CONSORT statement and found that the statement was associated with an improvement in the quality of reports.12 Further improvements in the quality of the conduct and reporting of clinical research would arise with the publication of research protocols.4
What is already known on this topic
Assessment of the quality of research evidence is central to informed decision making
The quality of randomised controlled trials is often based on the quality of reporting
What this study adds
Poor reporting of methods in randomised controlled trials may not reflect poor methods themselves
Evaluation of research protocols and contacting trialists should be integral to assessing the quality of such trials
Contributors HPS and BD conceptualised the study, were involved in all aspects of the study, and wrote the first draft of the paper. SD and AK contributed to the study design, collection of data, analysis and interpretation of the data, and writing the report. MC contributed to the study design, interpretation of the data, and writing the report. CS and SS contributed to the collection of data and writing the report. BD will act as guarantor for the paper.
Funding This research was supported by the Research Program on Research Integrity, an Office of Research Integrity/National Institutes of Health collaboration, grant No 1R01NS/NR44417-01.
Competing interests MC is director of the UK Cochrane Centre, which is funded by the NHS research and development programme and is part of the international Cochrane Collaboration. The collaboration produces systematic reviews of health care, including randomised trials, but the views expressed here do not necessarily represent the official policy of the Cochrane Collaboration.
Ethical approval This study was approved by the University of South Florida Institutional Review Board (No 100449).