“Hardly worth the effort”? Medical journals’ policies and their editors’ and publishers’ views on trial registration and publication bias: quantitative and qualitative study
BMJ 2013; 347 doi: https://doi.org/10.1136/bmj.f5248 (Published 06 September 2013) Cite this as: BMJ 2013;347:f5248
- Elizabeth Wager, publications consultant1
- Peter Williams, research associate2
- on behalf of the OPEN Project (Overcome failure to Publish nEgative fiNdings) Consortium
- Correspondence to: E Wager liz@sideview.demon.co.uk
- Accepted 12 August 2013
Abstract
Objectives To determine the proportion of medical journals requiring trial registration and to understand their reasons for adopting (or not adopting) such policies and other measures designed to reduce publication bias.
Design Quantitative study of journals’ instructions to authors (in June 2012) and qualitative study of editors’ and publishers’ views on trial registration and publication bias (carried out in Autumn 2012).
Setting Random selection of 200 medical journals publishing clinical trials identified from the Cochrane CENTRAL database.
Participants Editors (n=13) and publishers (n=3) of journals with different policies on trial registration (including some whose policies had recently changed), identified from the survey of their instructions to authors.
Results Only 55/200 journals (28%) required trial registration according to their instructions and a further three (2%) encouraged it. The editors and publishers interviewed explained their journals’ reluctance to require registration in terms of not wanting to lose out to rival journals, not wanting to reject otherwise sound articles or submissions from developing countries, and perceptions that such policies were not relevant to all journals. Some interviewees considered that registration was unnecessary for small or exploratory studies.
Conclusions Although many major medical journals state that they will only publish clinical trials that have been prospectively registered, and such policies have been associated with a dramatic increase in the number of trials being registered, most smaller journals have not adopted such policies. Editors and publishers may be reluctant to require registration because they do not understand its benefits or because they fear that adopting such a policy would put their journal at a disadvantage to competitors.
Introduction
The medical evidence base may be distorted if the findings of some trials are published repeatedly, other trials are not published at all, or trial outcomes are published selectively.1 2 The resulting distortion has been termed publication bias, and several studies have shown that it is a serious problem.3 One method of reducing, or at least detecting, publication bias is to require a description of all trials to be posted on a public register before the trials begin. Calls for trial registration date back to 1986 but were largely ignored until 2005, when members of the International Committee of Medical Journal Editors (ICMJE) made registration a requirement for publication in their journals.4 5 Other medical journals have since adopted similar policies, but by no means all, and even those that say they require trial registration do not always enforce it.6 A recent survey of randomised trials published in Medline in 2010 found that 61% were registered but that only 55% of the published reports contained a trial registration number.7
We wanted to discover what proportion of journals require trial registration as a condition for publication, or encourage it without making it mandatory, and why they choose to do this. We also wanted to understand the reasons why some journals do not require trial registration and what other measures they have instigated to reduce publication bias. We therefore undertook a two part study: the first was a quantitative analysis of journal requirements for trial registration from a sample of medical journals’ websites and the second was a qualitative study of journal editors’ and publishers’ views on trial registration and publication bias.
Methods
Quantitative study
We obtained a listing of all journals included in the Cochrane Central Register of Controlled Trials (CENTRAL) database from 2009 to 2011. This database includes randomised trials from a wide range of sources (including literature searches from bibliographic databases, hand searches undertaken as part of systematic reviews, and retrieval of articles from the reference lists of other articles obtained by both these methods). The list was downloaded into an Excel spreadsheet, deduplicated, and sorted alphabetically by journal title, producing a total of 3512 journals. A series of 200 random numbers from 1 to 3512 was generated using www.random.org. We extracted journal titles corresponding to these numbers in the Excel listing and entered these into a search engine via a web browser (Google accessed via Mozilla Firefox) to locate the journal website. If no website could be found, or the journal did not provide instructions in English, we selected the next journal on the Excel spreadsheet. In this way we created a sample of 200 journals thought to publish clinical trials.
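To make the sampling procedure concrete, the following is a minimal sketch in Python of the steps described above (deduplication, alphabetical sorting, random selection, and replacement of ineligible journals by the next title on the list). The file name, the fixed seed, and the is_eligible stub are illustrative assumptions only; the study generated its random numbers at www.random.org and screened journals manually.

```python
import random

def is_eligible(title: str) -> bool:
    # Stub standing in for the manual checks described above: a website
    # can be found, instructions are in English, and the journal
    # publishes primary reports of clinical trials.
    return True

def replace_if_ineligible(titles, idx):
    # If a journal fails the checks, take "the next journal on the
    # Excel spreadsheet", ie the next title in alphabetical order.
    while not is_eligible(titles[idx % len(titles)]):
        idx += 1
    return titles[idx % len(titles)]

with open("central_journals.txt") as f:  # hypothetical export of the CENTRAL journal list
    titles = sorted({line.strip() for line in f if line.strip()})  # deduplicate and sort

random.seed(2012)  # for reproducibility; the study used www.random.org instead
indices = random.sample(range(len(titles)), 200)  # 200 distinct random positions
sample = [replace_if_ineligible(titles, i) for i in indices]
print(f"Sampled {len(sample)} journals")
```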
In June 2012 a single researcher sought information about trial registration policy from the relevant part of the journal websites (usually the instructions to authors) using automatic text search or “find” functions for the term “regist*” (the truncation being used to capture register, registry, or registration) and “Helsinki” on web pages or downloaded documents. We categorised journals according to whether trial registration was mandatory, encouraged (but not mandatory), or not mentioned. Recommendations that trials should be conducted according to the Declaration of Helsinki were also recorded, including the version of the declaration that was referenced (if specified).
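As a hedged illustration of this screening step, the sketch below applies the truncated “regist*” search and the “Helsinki” search to a string of instructions-to-authors text; the sample text and variable names are invented for the example, and assigning a journal to the mandatory, encouraged, or not mentioned category still required reading the surrounding wording.

```python
import re

# Invented instructions-to-authors excerpt; in the study the text came
# from journal web pages or downloaded documents.
page_text = """Authors must provide a trial registration number at submission.
Studies should conform to the Declaration of Helsinki (2008)."""

# The "regist*" truncation captures register, registry, registration, etc.
mentions_registration = bool(re.search(r"regist\w*", page_text, re.IGNORECASE))
mentions_helsinki = "Helsinki" in page_text

# These flags only locate relevant passages; categorising the policy as
# mandatory, encouraged, or not mentioned was a human judgment.
print(mentions_registration, mentions_helsinki)
```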
Qualitative study
We used the results from the quantitative survey to identify journals with different policies on trial registration. From this list we selected 31 for invitation to ensure that we had sufficient interviewees from journals with different registration policies. We focused especially on journals that had recently changed their policy because we hoped that the discussions leading to such changes would be fresh in the editors’ minds. We also attempted to select a sample that was representative of characteristics such as publication volume/frequency, type of publisher (commercial or academic society), status of editor (full time or part time), place of publication, and specialty. Of the 31 journals invited by email, 15 were willing and available to be interviewed during the required period (September 2012). (We purposely invited more potential interview participants than we thought would be needed, to ensure that we had a large enough sample available during the study timeframe.) In one case, both editor and publisher took part in the interview (giving a total of 16 interviewees from 15 journals). For other journals, the interviewee was the editor in chief (n=11), an associate editor (n=1), or the publisher (n=2). Ten of the interviewees were based in Europe and six in North America.
Data were gathered by in-depth, semistructured interviews, mainly by telephone, with two conducted face to face and one via Skype video link. An experienced qualitative researcher (PW) conducted all interviews. The interview schedule, which contained six open questions (see supplementary file), was sent to interviewees before the interview, along with background information about the project. Participants were asked whether their contribution could be recorded and transcribed and were assured anonymity. Every interviewee agreed to these terms, although in two cases there were technical problems with the recording equipment, so the researcher took notes. The notes were sent to the interviewees to check and supplement if required. Interviews lasted approximately 20 to 30 minutes.
Both investigators reviewed and “framework” analysed the transcripts and notes.8 This approach involves a systematic process of filtering and sorting material into themes and key issues. Once we had established the key themes, including topics defined a priori from the research aims and issues raised by the interviewees, we thematically indexed and “charted” the transcripts and notes, allowing comments related to each theme to be grouped and sorted. From this we elicited further concepts and associations and assessed the strength and extent of views and reported behaviour.
Results
Quantitative study
A search of the CENTRAL database produced a list of 3512 journals. From the initial random sample of 200 journals, 49 were excluded and replaced by the next journal in the list because no website could be found (n=17), the instructions for authors were not in English (n=26), or the journal did not publish primary reports of clinical trials (n=6).
Of the 200 journals sampled, 142 (71%) did not require registration (or at least did not mention this on their website), 55 (28%) required registration, and 3 (2%) encouraged registration but did not make it a requirement for publication.
One journal’s website included wording that implied trial registration might sometimes prevent publication. In a section about previous publication it noted: “Advances in Therapy will publish data that has been preregistered on clinical trial websites. However, if a trial registration number is available, it should be included at the end of the abstract.”
Of the 142 journals that did not require registration, 42 stated that trials should be performed according to the Declaration of Helsinki. Of these 42 journals, 13 referenced (or provided a link to) the 2008 version of the declaration, 10 did not specify the version (or provide a link), and 19 referenced an earlier version.
Qualitative study
The interviewed editors and publishers talked about what they understood by publication bias, what contributed to it, who was responsible for it, and measures that could be taken to prevent it, including trial registration. We were particularly interested to record their views on barriers to journals making trial registration a requirement for publication, since the quantitative survey had shown that this is not required by most journals.
Reasons for not requiring trial registration
Several reasons were given by editors of journals that did not require trial registration as a condition for publishing papers, and indeed even by editors whose journals now formally require it (including some that had only recently adopted such a policy). These reasons were: fear of losing out to rival journals that do not require registration; receiving few primary papers; receiving few clinical trial papers; registration being seen as unnecessary when reporting small trials; doubts about the effectiveness of registration; and fear of discouraging research from developing countries, which may have different registration systems or none at all.
Fear of losing out to rival journals not requiring registration
Fear of losing out to those journals that did not require registration was summed up in a comment made, surprisingly, by an editor whose journal states that it does require authors to register their trials. “We are competing with rival journals, and until the rival journals make it mandatory, why would I want to bar what is potentially quite interesting papers to us, just because we have got the higher standards than the rest of them?” Another commented “To require registration when bigger journals with much higher impact factors that are above that journal in their ranking don’t require it, is like inflicting a self inflicted handicap . . . for your journal.” When this point was put to other editors, however, they reported no decline in article submissions as a result of requiring trial registration.
Lack of primary papers
Smaller journals may not receive many papers that describe the primary results of clinical trials. As one interviewee explained, “We’re probably a mid-level journal . . . and so we usually don’t get the primary papers . . .[so] I don’t think it’s that big a deal.” The argument here was that the bigger or higher impact journals would receive the main findings and would be more likely to require trial registration, making it unnecessary for journals mainly publishing secondary papers to make this a requirement.
Lack of clinical trial papers
The only potential research participant to formally decline our interview request did so because the journal concerned does not publish clinical trials. Others said that they did not publish enough clinical trials to make registration a formal requirement. One interviewee explained that registration was inappropriate for certain types of research—for example, observational studies.
Another editor raised the lack of a registration system for non-clinical trials, in particular for genetic trials, as a problem.
Reporting small trials
One interviewee was not sure that trial registration was necessary “for everything.” The example given was: “we have training programmes where we have fellows . . . and some of them might want to do a little research project.” The interviewee then went on to give the example of two drugs, already licensed and marketed for an indication, being compared for effectiveness, commenting “technically that’s a clinical trial, but it seems hardly worth the effort to register.”
Doubts about the effectiveness of registration
Although many editors and publishers agreed that trial registration could be useful in combating publication bias, not all were convinced. One editor said “There is also this notion that trial registration will reduce under-reporting and I . . . I really don’t see how trial registration would make it more likely that a negative study which should be published, will be published.”
Research from developing countries
There was a feeling that requiring trial registration for papers originating from developing countries could actually create publication bias because there may not be a registration system in the country of origin: “it’s not an enforced requirement . . . because a third of our content comes from emerging markets. I’m not sure what their trial registration requirements are.” On the other hand, another interviewee remarked that many such trials “are run through . . . companies which know that if they don’t do those things they will be in trouble . . . they are usually under the umbrella of some international pharma company.”
Checking trial registration
Although most of the journals questioned (11 out of 15) required trial registration as a condition for publication, most did little to check whether authors actually complied with this requirement (“You ask the authors and trust them” and “We don’t always go out and check these things” were typical comments). Checking usually consisted of requiring “that at the time of submission that the registry number is submitted as part of the submission process.” It was pointed out that “it is . . . not in the authors’ interest to cheat” and that authors claiming their trial was registered when it wasn’t could easily be caught by other researchers “whistle blowing.” Interestingly, some editors whose journals had a stated requirement for trial registration spoke as if they did not enforce it. One even spoke of not wishing to debar papers from appearing in the journal concerned because of this requirement. In some cases, publishers expressed the need to police editors to ensure they complied with this requirement.
On the other hand—mainly in the case of bigger journals and publishers—a minority of editors and publishers were insistent that trial registration was checked rigorously: one publisher noted “We have a whole series of checks and trial registration is included so, because we wanted everybody to include it, it’s part of our initial check, along with ethics approval and consent . . . We would expect the editors in chief to make sure this happens.” This publisher is currently auditing all of its journals to make sure this is, indeed, the case, starting with “the ones where we think there might be a problem.”
Few difficulties were reported by editors in policing registration (although, as mentioned, policing the editors themselves was identified as a problem by some publishers). Much seems to be taken on trust, or left to readers or reviewers to check registration. One point raised, however, was that “the editorial office staff may not have the background to determine . . . if it is a clinical trial or not,” and hence, “the editor should have the responsibility for making sure the registration is checked.”
General comments on publication bias
Most comments focused on the failure to publish studies with negative findings. However, during interviews it became apparent that the term “negative findings” or “negative trial” can be used to mean either those with statistically non-significant findings or those that, whether statistically significant or not, are viewed as unfavourable to a particular position (for example, a sponsor’s product or a hypothesis). To distinguish these meanings we use the terms “statistically non-significant” and “unfavourable,” respectively, and have attempted to clarify the meaning when the more general term “negative trial” was used by interviewees.
There was fairly wide agreement among interviewees that statistically non-significant findings were subject to publication bias, whereas there were mixed views regarding unfavourable results, which were widely seen as being of more scientific (or reader) interest than statistically non-significant findings. One interviewee suggested that this may be due to a lack of power in trials that produce statistically non-significant findings and that if the sample size had been greater a statistically significant result might have emerged.
In addition to under-publication of statistically non-significant or unfavourable results, other forms of publication bias were mentioned, namely biases based on the type of research undertaken or its topic; the geographical origin of the research—specifically studies from the developing world; and the standard of reporting—particularly from non-native English speakers.
Bias relating to the type of research undertaken may reflect the value accorded in the literature to particular approaches or subjects. One interviewee noted that there was a disinclination to publish research about “specific topics . . . which are not well supported.” The interviewee further explained: “They’re important, but they do not get into the literature because they’re not seen as cool, exciting biomedical stuff . . . for example, somebody might want to write up some innovative way of organising their service which might deliver faster treatment, better adherence, patients do better on it.” Although the interviewee considered that such service improvements might have considerable impacts on patient care, studies describing them were not highly regarded by journal readers, “because we don’t know how to value [these studies] in the literature.”
A bias against publishing studies carried out in developing countries was noted, and explained thus by one interviewee: “in third world countries or next to third world countries if they do a clinical trial, they have problems with study design, they have problems with data acquisition, [and] their methodology.” Finally, the standard of English in papers from non-English speaking countries was cited as an important factor affecting the likelihood of publication by several interviewees.
These aspects were also discussed in more detail in relation to the key players who may contribute to, or be affected by, publication bias.
Key players in publication bias
Interviewees were asked for their perceptions of the roles of different players such as authors, editors, and readers in publication bias.
Authors
Much was made of the role of authors in publication bias. One editor, describing how researchers would lose interest in studies that kept producing statistically non-significant results, said “in the end I think it is largely a matter of the authors . . . the big publication bias is not produced by journals or by editors or by reviewers, but it is really the gradual emerging lack of interest of authors.”
Three key points emerged: first, authors do not submit reports of studies that produce statistically non-significant or unfavourable findings as readily as those with statistically significant or favourable findings; second, they exhibit bias in where they send manuscripts, so that reports of studies with negative findings (of both types) are more likely to be submitted to lower impact journals; and third, they try to rework their data or look at different aspects of it (sometimes beyond their original research questions) to obtain a positive result.
Regarding the first point, one interviewee remarked “If you are a scientist and you obtain a negative [rather than a positive] result, your first reaction is to try and make it work differently, and if it doesn’t work, you don’t pursue that line of research.” Another interviewee also spoke about “pressures” on authors causing them to “mine data.” As the interviewee explained, this meant “they will do a study, perhaps not having thought out the aims at the beginning and then might retrospectively go through them and say . . . Can we find something interesting out? . . . and in that way they will miss things that are not significant—they won’t even consider if that non-significant result is of interest.” Interviewees suggested that studies that produced a statistically non-significant outcome because the sample size was too small might be abandoned by authors.
This leads to the second point—authors submitting negative findings to lower impact journals. This was considered likely to happen if the lack of statistical significance was due to the sample size: “if an investigator is doing research from the trial and they write in the method that the power calculations estimated that you should include 200 patients and then you end up with 140 patients. Such studies will probably be published, if it is published at all, in a lower ranking journal.” As another editor commented, this may lead to publication bias since authors want to publish in high impact journals but feel they might be rejected if they submit statistically non-significant findings.
Finally, regarding the third point, an editor told us: “there is not necessarily a very systematic approach to hypothesis testing . . . you try something and then you really try to make it work and then if it doesn’t you leave it on the side and try something different.” Another interviewee noted that researchers may be “driven by an answer which they think is true and career pressure to publish can go a long way toward generating a whole set of data that appear to support a very gripping conclusion . . . a whole series of . . . papers [are then] published . . . [but in] a large, blinded, multicentred trial, this wonderful effect is no longer apparent.”
Readers
As some interviewees pointed out, authors are of course also journal readers and thus authors’ behaviours may be influenced by their reading preferences and their knowledge of others who read their work. However, most comments seemed to consider readers as non-researchers or certainly as non-specialists and their role was therefore viewed as different from that of authors. Interviewees suggested that readers played a role in causing or perpetuating publication bias through their preference for positive findings. The editors also mentioned that perceptions of such reader preferences could influence their decisions about what to publish.
The editors considered that publication bias could affect readers through its effects on meta-analyses. As one interviewee put it “what one ends up seeing [in meta analyses] is undoubtedly . . . biased towards the positive, so I think that has an impact on readers and clinical practitioners who rely on evidence based types of assessment.” Similarly, “if they only read what’s published, and are not aware of . . . the possible biases behind it, they may well take it at face value, as the only evidence out there.”
Journal editors
Although they highlighted author practices that lead to publication bias, the editors interviewed also generally accepted considerable responsibility themselves. One frankly admitted that editors choose submissions because they are “geared towards producing nice papers to make your journal look good.” Another said that “as an editor your job is really about selection and you are going to select the papers that are the most exciting, that are really pushing the field forward, etc and in the preclinical and basic science area, these are the papers that must demonstrate an effect.” The term “benign dictatorship” was also used unashamedly by one editor in self reference and to describe editors in general.
Along with others, one participant stated that the job of editors was to “publish journals for people to read . . . We want it to be a good newspaper, so to speak.” According to another, journals “don’t claim to be unbiased.” A third said simply that “we don’t publish negative trials.” A fourth admitted that journals are reluctant to publish papers with negative or “uninteresting” results.
In addition to the propensity to publish positive findings on the grounds of reader interest, other ways in which editors may contribute to publication bias were mentioned. These included the need for studies to be written in good English, be carried out in geographical areas of interest to readers, and have sufficient statistical power. Editors and publishers also mentioned the pressures of filtering a large number of submissions and maintaining their journal’s impact factor.
Comments from one interviewee encapsulated the first two of these factors: “If somebody submits a paper from Iran, for example, the barrier is higher for them and it’s bias in both the fact that they have to be able to generate a paper that is written in coherent English and also biased because the focus of our journal is for . . . members of the [an American learned society] . . . so you would . . . reject out of hand papers that talk about . . . the Persian population.” Other interviewees spoke of local studies that may be relevant to only one country and lack “global” interest or readership, and of submitted manuscripts being rejected for poor English or structure. One interviewee, however, thought that the problem was more complex than a simple inability to report research in “good” English: “India is interesting because most sophisticated Indians have been educated in English . . . but you need to then turn it into academic English, so I would say it’s not just language, it’s a kind of understanding what it is that western journals are looking for.”
One interviewee mentioned the difficulty of publishing statistically non-significant findings (which may be due to under-powered studies), stating “you’ve got the problems of the smaller studies or the ones that really have no major findings, that don’t actually get out there [published] very easily.”
Another way in which editors contribute to publication bias was explained by the fact that “we have so many articles coming through . . . there is an element of pre-refereed selection of papers that never get to a referee.” Some journals perform an in-house screening of submissions and reject those they believe do not merit being judged by an external peer reviewer: “Because of the sheer volume of papers that come in there is little point in reviewing things we are never going to accept.” This procedure was also justified by the fact that “editors might worry that . . . they may have trouble finding people who want to peer review [negative or statistically non-significant studies], because they’ll know that peer reviewers may say ‘why do you want to even consider this?’”
However, one of the criteria specifically cited as the basis for rejecting a paper without sending it for external review was the topic of the paper (which, as one editor noted: “now that’s a degree of bias”).
Finally, as one interviewee remarked, journals “want to increase their impact factor.”
Peer reviewers
The issue of “reviewer bias” was raised by some interviewees, referring to the fact that reviewers may react negatively to papers that have negative results or may (“unfairly” according to one respondent) judge a paper for not being of a sufficiently high editorial standard. One editor, discussing the system of authors being asked to suggest reviewers, commented that “The whole recommendation of reviewers can cause some degree of bias.”
Research funders
There were disagreements among interviewees about the role of research funders in publication bias. There was a minority view that commercial funders, in particular pharmaceutical companies, might suppress negative findings. Typifying this position, interviewees stated “In a lot of studies the sponsor tries to influence researchers in what to publish,” and “Pharmaceutical companies want to get their drugs to market, especially while the patent is still in force.” Most interviewees, however, were of the view typified by one who said that “if the drug doesn’t really work, or has some amount of harmful effect, they [the major drug companies] really want to hear it . . . and in fact they’ll shut down research projects where there seems to be some bias in the collection and analysis of the data in favour of a drug that’s not legitimate.” Another said that it would be “serious if pharmaceutical companies or other commercial interests are not publishing negative findings because the findings contradict another positive trial,” admitting that “I don’t know how frequently that happens.”
One editor believed that the dissemination of all findings should be an obligation on the part of the funding bodies: “[a funding body], for example, pays a guy half a million pounds to do a study—it’s their job to actually publish the results . . . not the publisher’s job.”
Employers
The policies of researchers’ institutions were highlighted as playing a role in publication bias. According to one interviewee “there are lots of countries where publications have to be published in a journal in the top third of its impact factor category” to be eligible for inclusion on a CV for a potential employer. “This is enormous pressure on researchers to make their paper look better than their results would actually indicate.” As another interviewee explained, “[authors] are judged and their career . . . promotion prospects and . . . funding prospects . . . depend on their capability of publishing in . . . highly selective journals . . . [and so] authors tend to pursue things where there is a positive outcome as opposed to pursue those that lead to negative results.”
Measures to combat publication bias (other than trial registration)
Many interviewees spoke of steps their own, or other, journals were taking (or could take) to mitigate the problem of publication bias in addition to requiring trial registration. These were: journals specialising in publishing negative findings, databases of research findings, having clear journal instructions, policing reviewers and editors, and specialist review.
Journals specialising in publishing negative findings
Several interviewees mentioned the possibility (and, indeed, current existence) of journals specifically designed to publish negative or statistically non-significant findings: “You have seen a rise recently in minimal threshold journals where the focus, selection criteria are based on the technical soundness of the study not necessarily on its impact, and I think these journals are really very good homes for . . . negative results . . . if an experiment is well done, technically sound . . . but it doesn’t show an effect.” As examples of such journals, the editors and publishers mentioned the Journal of Negative Results in Biomedicine and BMC Research Notes. These journals judge submissions only on scientific validity rather than predicted reader interest.
The potential of open access journals to facilitate this type of publishing was mentioned. An editor also raised the possibility of journals publishing additional full length articles (as distinct from supplementary material published alongside conventionally selected articles) suggesting: “If somebody said to me ‘would you like, with some extra resource . . . to publish a whole load of things as a kind of online supplement?,’ that would be good.” This would enable the publication not only of less interesting articles and those that did not have statistically significant findings but also what one interviewee termed “me too” articles—that is, those confirming previous work.
Research findings database
One interviewee suggested that clinical trial results could be deposited in a database (and this might also be appropriate for other types of scientific research). One editor stated “It’s not the journal’s problem to deal with this, it is the problem of the people who commission research—the universities, medical research councils, or the pharmaceutical industry. They should have a way of . . . recording the research.” This would solve the problem of journal editors rejecting papers, as the findings would still be disseminated. In a similar vein, it was suggested that compulsory publication of clinical trial results would “make it much more difficult to (a) not publish the results or (b) to change the original protocol.”
Having clear journal instructions
Several interviewees mentioned that their journals are willing to publish papers of scientific worth even if they report negative or statistically non-significant findings, and that the issue was to make researchers aware of this. As one interviewee put it: “when we speak to people face to face, and we email correspondents when we are trying to get submissions or . . . material, we write to promote the journals we make very clear, that is one of the first things we say, that it doesn’t have to be positive—it can be any study whatsoever.” Another mentioned the need for education, by the publisher, for some editors: “All the series cares about is that the science is sound, that it is not actually flawed . . . we send that message out to our external expert editors, and sometimes we have conflict because the expert editors will say ‘but this is a negative finding’ . . . and we constantly reiterate that it doesn’t matter.”
Policing reviewers and editors
Guidelines and policies can only have an effect if they are followed. Two interviewees spoke of a mechanism whereby an associate editor or publisher monitors the work of editors and reviewers to make sure they are not contributing to publication bias: “I mark the referees as to the usefulness of their refereeing. I want people . . . without bias.” In one case, the journal editor generates monthly metrics “that look at which papers are returned on the triage phase, which manuscripts are rejected, and [produces] a table each month . . . of which ones were rejected and what the primary reason was.”
Specialist review
Apart from the general peer reviewers, who might be able to detect author initiated reporting bias, one publisher employs “a team of medical statisticians” to review randomised controlled trials, who “are paid for reviews . . . this has established a systematic approach to reviewing these trials . . . trying to . . . eliminate bias as much as possible.”
Discussion
In our survey of the websites of 200 journals publishing clinical trials we found that almost three quarters (71%) did not indicate a requirement for trial registration. Our findings are similar to those of previous studies that surveyed smaller samples but generally found that 67-84% of journals did not require registration (although one study of Italian journals in 2006 found that none required it9; see table). Another study found that only 14 of the 121 journals (12%) listed on the McMaster Online Rating of Evidence system (selected to represent journals of high quality and clinical relevance) encouraged submission of research reports regardless of the direction or strength of the results, and only 11 of these included such a statement in their “Information for Authors.”10
Journal policies do seem to influence trial registration, even though they are not always fully enforced. The clearest evidence for this comes from the spike in trial registrations that occurred around the deadline set by the International Committee of Medical Journal Editors (ICMJE).11 More recently, one study found that physical therapy trials published in high impact journals were more likely to be registered than those published in other journals (75% v 34%), again suggesting an association between journal policies and registration, although not necessarily demonstrating cause and effect.12
The effects of journals endorsing or recommending reporting guidelines have been more widely studied than the effects of their registration policies.13 While these studies generally show that endorsement is associated with higher reporting quality, they also show that journal instructions sometimes cite outdated versions of guidelines, and they clearly demonstrate that endorsement alone is insufficient to ensure adherence. As well as seeking explicit references to trial registration, we looked to see whether journal instructions mentioned the Declaration of Helsinki because the most recent (2008) version states that “Every clinical trial must be registered in a publicly accessible database before recruitment of the first subject.”14 Therefore, if journals require that studies conform to the Declaration of Helsinki, this should include prospective registration. We were also aware, from a previous study, that the declaration is quite commonly cited in journal instructions. We found that of the 42 journal instructions that mentioned the declaration rather than a specific requirement for registration, only 13 cited the 2008 version, 10 did not specify the version, and 19 cited an outdated version. This suggests that journals sometimes endorse statements without fully appreciating their recommendations and may not keep their own guidelines up to date.
We did not record whether journals endorsed or referenced the ICMJE uniform requirements for manuscripts submitted to biomedical journals because these merely “encourage” journals to adopt a “similar policy” to that of the ICMJE members.15 (The ICMJE guidelines have since been updated and the latest version “recommends that all medical journal editors require registration of clinical trials”.) Therefore, endorsement of the uniform requirements does not necessarily indicate that a journal requires registration as a condition of publication.
Our finding that journals do not always enforce their policies is similar to observations on the effects of reporting guidelines. Researchers have observed that guidelines adopted by journals improve reporting “only when actively implemented by a specific editorial policy,”16 and our findings suggest the same is true for trial registration requirements. A study of trials published in ICMJE member journals found deficiencies in their registration and called on editors to establish quality control procedures.17
To our knowledge, ours is the first study assessing journal editors’ and publishers’ views on publication bias using qualitative methods. However, a previous study interviewed 59 trialists about outcome reporting bias and found that the “direction” of the findings influenced researchers’ decisions about analysis and publication.18 Its authors reported that researchers displayed “a lack of understanding about the importance of reporting ‘negative’ results.” Reasons given by the researchers for not reporting outcomes included perceptions that the findings were uninteresting and journal space limitations. Three studies by Chan and colleagues using emailed questionnaires also reported that lack of clinical importance, lack of statistical significance, and restrictions on journal space were the most common reasons for under-reporting given by authors.19 20 21 Chan and colleagues also found that many researchers denied that outcomes had been selectively reported, despite evidence from study protocols, again suggesting a lack of understanding of the problem.
A survey of 275 trialists (which was published in 2007 and included an open question on concerns about registration) reported that trialists’ most common concerns were the length of time needed to register and the possibility that information about early phase or poorly designed trials might be confusing.22
Strengths and limitations of this study
Although we did not restrict our search to English language journals, we could only analyse journals that displayed information online in English.
A single researcher searched journal websites for information on trial registration policies. Although it is possible that some information was missed, the use of a standard search term and the “find” function on web pages and downloaded documents (to capture any mention of trial registration) should have minimised this possibility. Also, the wording about registration requirements was clear in all cases, so there was no ambiguity that required resolution by more than one researcher.
It is possible that some journals surveyed do require registration but do not mention this on their web based policies and instructions. All accessible material was assessed, but we did not retrieve information or instructions that could be viewed only as part of the manuscript submission process.
We chose to sample journals listed in the Cochrane CENTRAL database because this includes a wider range of sources than many bibliographic databases (such as Medline) and mainly comprises journals that publish controlled clinical trials. We hoped that a random selection of journals from this database would include not only high impact indexed titles but also smaller journals that publish trials and would therefore include a more representative selection of journals than more restrictive databases.
Our interview sample was relatively small (16 participants from 15 journals) and was drawn from Europe and North America. After an initial round of invitations and interviews, we assessed the findings and increased the sample to include more journals that did not require registration, as we had a poorer initial response rate from such journals than from those that required registration. Despite this, the sample contained a majority of journals that required registration, although some had only recently introduced such a policy. However, the recurrence of themes across interviewees suggested that we had interviewed enough participants to capture a representative range of views. Also, some editors of journals that now require registration were forthcoming about reasons why such policies might not be adopted or enforced (and these were similar to the views expressed by journals that did not require registration). This may partly be because we included several journals that had recently introduced a requirement for trial registration (we thought the arguments would be fresh in the editors’ minds and that they would be able to explain both their current and their former policies).
This study focused on the views of editors and did not seek the views of other groups such as trialists or authors directly since they have been dealt with in earlier studies.18 22 However, we were interested in editors’ perceptions of the roles of other players because these perceptions (whether correct or not) may influence editors’ behaviour (for example, editors may be reluctant to introduce policies they believe might deter authors from submitting work to their journal). Also, most editors are, or have been, researchers, authors, and readers, therefore these categories are not mutually exclusive.
Conclusions
Our study shows that most journals that publish clinical trials do not make prospective registration a requirement for publication or even encourage it in their web based instructions. The editors and publishers we interviewed proposed several reasons why journals might not require trials to be registered, or might not enforce a trial registration policy strictly. These included fear of losing good submissions to other journals, concern about preventing publication of studies from developing countries, and scepticism about the value of insisting on registration for small or exploratory studies. Other reasons why journals may not have a policy on trial registration include not publishing many primary trial reports.
The editors and publishers we interviewed recognised that authors, reviewers, funders, research institutions, and editors themselves may contribute to publication bias. Perceptions that readers do not want to read negative studies may also explain editors’ decisions. Several measures to combat publication bias were proposed. Alongside trial registration, these included: journals specialising in publishing negative findings; journals selecting submissions on the basis of scientific validity rather than perceived reader interest; publishing findings in databases rather than in journal articles; and educating authors, reviewers, and editors.
Although prospective trial registration is predicted to reduce publication bias and selective reporting of trials and outcomes, only a minority of journals that publish clinical trials make it a requirement for publication and many editors do not seem convinced that they should adopt such a policy. This suggests that journal editors may not believe that the benefits of trial registration are sufficiently important to be a mandatory requirement, or have concerns that requiring registration as a condition of publication would harm their journal, or both. As one editor (referring to small trials) said “it seems hardly worth the effort.”
What is already known on this topic
- Prospective trial registration can reduce publication bias, is recommended by the Declaration of Helsinki, and is required as a condition of publication by some major medical journals
- Trial registration increased considerably when members of the International Committee of Medical Journal Editors started to require this in 2005
- Previous studies of small samples of journals within particular specialties have shown that only 16-33% require registration
What this study adds
- Only 28% of a random sample of 200 journals publishing clinical trials and included in the Cochrane CENTRAL database require trial registration
- Reasons for journals not requiring registration as a condition for publication may include editors’ lack of understanding of the benefits of trial registration (or of the extent and serious effects of publication bias) and fears that adopting tough requirements may put their journals at a competitive disadvantage
Footnotes
- We thank the editors and publishers who agreed to be interviewed and Sally Hopewell and Iveta Simera for suggestions about using the Cochrane CENTRAL database for this study.
- Contributors: EW designed the study, obtained the funding, carried out the quantitative survey of journal instructions to authors, drafted sections of the paper, revised the whole paper, and is the study guarantor. PW contributed to the development of the interview schedule, performed all the interviews, analysed the qualitative findings, drafted sections of the paper, and revised the whole paper.
- Funding: This study was part of the OPEN project (Overcome failure to Publish nEgative fiNdings, www.open-project.eu/), which was funded by the European Union Seventh Framework Programme (FP7-HEALTH.2011.4.1-2) under grant agreement No 285453. The funder had no role in any part of the research.
- Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: EW and PW had support from the European Union for the submitted work; they have no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; EW is a member of the advisory board of the International Standard Randomised Controlled Trial Number (ISRCTN) scheme, an unpaid position; she was also a member of the World Health Organization Scientific Advisory Group on trial registration (2005-07) and received expenses to attend occasional meetings; she acts as a paid consultant to several journal publishers.
- Ethical approval: Research ethical approval was waived by the UCL research ethics committee.
- Data sharing: In line with the OPEN project’s publication policy, data from the quantitative part of this study are available in the supplementary tables. Anonymised versions of the qualitative data (interview transcripts) may be shared with other researchers on a case by case basis, but only if the anonymity of the interviewees can be assured.
- Members of the OPEN project Consortium are listed at www.open-project.eu/project-partner.
This is an Open Access article distributed in accordance with the Creative Commons Attribution Non Commercial (CC BY-NC 3.0) license, which permits others to distribute, remix, adapt, build upon this work non-commercially, and license their derivative works on different terms, provided the original work is properly cited and the use is non-commercial. See: http://creativecommons.org/licenses/by-nc/3.0/.