Open access publishing, article downloads, and citations: randomised controlled trial
BMJ 2008; 337 doi: https://doi.org/10.1136/bmj.a568 (Published 31 July 2008). Cite this as: BMJ 2008;337:a568

All rapid responses
All of the articles in our study are now three years old, and we report that our initial findings were robust: articles receiving the open access treatment received more downloads but no more citations [1].
Article Downloads
During the first year after publication, open access articles received more than double the number of full text downloads (119%, 95% CI 100% to 140%) and 61% more PDF downloads (95% CI 48% to 74%) from a third more unique visitors (32%, 95% CI 24% to 41%). Abstract views fell by nearly a third (-29%, 95% CI -34% to -24%), signaling a reader preference for the full article when it is available.
Article Citations
Thirty-six months after publication, open access treatment articles were cited no more frequently than articles in the control group. Open access articles received, on average, 10.6 citations (95% CI 9.2 to 12.0) compared to 10.7 (95% CI 9.6 to 11.8) for the control group. No significant citation differences were detected at 12, 18, 24, and 30 months after publication.
1. Davis, P. M. 2010. Does Open Access Lead to Increased Readership and Citations? A Randomized Controlled Trial of Articles Published in APS Journals. The Physiologist 53: 197-201. http://www.the-aps.org/publications/tphys/2010html/December/open_access.htm
Competing interests: No competing interests
We welcome the valuable comments from Richard Smith and address his questions point by point below.
1. The tables are now available
2. 34 of the 216 studies did not declare any funding, and all of these were locked. Studies with funding from any source were thus more likely to be published “unlocked” (OA) (p=0.0296).
111 studies received non-industrial funding, of which 11 were published “unlocked”. The risk ratio (RR) of “unlocked” (OA) status for industrial funding versus other types of funding was 1.71 (95% CI 0.80 to 3.65; p=0.177). The lack of significance probably reflects a lack of power, as the trend clearly favours industry funded studies with regard to OA status.
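For readers who want to check the arithmetic, the risk ratio and its confidence interval can be reproduced by combining the count given here (11 of 111 non-industrially funded studies unlocked) with the industry funded arm reported in the original cross-sectional study (12 of 71 unlocked). This is a minimal sketch using the standard log-normal (Katz) approximation, not the authors' actual analysis code:

```python
import math

# Counts (assumed arrangement): "unlocked" (OA) papers out of all papers,
# split by funding source
a, n1 = 12, 71    # industry funded: 12 of 71 unlocked (from the original report)
b, n2 = 11, 111   # non-industrial funding: 11 of 111 unlocked (stated above)

rr = (a / n1) / (b / n2)           # risk ratio of OA status

# Katz log-normal approximation for the 95% confidence interval
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)
lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
hi = math.exp(math.log(rr) + 1.96 * se_log_rr)

print(f"RR = {rr:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

This reproduces the reported RR 1.71 (95% CI 0.80 to 3.65).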
3. Two of the studies gave no information about funding or competing interests; both were locked.
4. We suspect that the lack of correct declaration demonstrated in the current study merely reflects forgetfulness and unawareness. However, COI declaration, including explicit declaration of employment by industry, is an important courtesy to the reader that helps put the results and conclusions into the right perspective.
As you correctly point out, the reliability of author declarations of potential COIs has previously been questioned (Gross et al, BMJ 2003; Bekelman et al, JAMA 2003; Hussain et al, BMJ 2001). The BMJ has a comprehensive policy on disclosure of funding and potential COIs by authors (www.icmje.org, updated 2008): all authors receive written guidance and are required to complete a form declaring all potential financial COIs before publication in BMJ journals (http://www.bmj.com/cgi/content/full/317/7154/291/DC1). Even so, this study points to a continuing need to promote correct declaration of actual or potential conflicting interests and of funding by industry. To date, however, no formal mechanism exists to ensure complete declaration, and editors and readers are left to rely on the honesty and diligence of authors.
On the other hand, the risk of bad publicity should industry be seen to conceal its support of research must be expected to be a strong motivator for correct and complete disclosure. Interestingly, correct declaration of industry employment was significantly more likely for studies published OA (9/10 [90%] vs 13/31 [42%]; Fisher’s exact test p=0.011; OR=12.46 [exact 95% CI 1.40 to 110.87]). This probably reflects the association between OA and RCTs, which in turn entail more rigorous control of the conduct and reporting of studies.
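The quoted Fisher's exact test can be reproduced from the 2x2 table implied by the counts (9/10 OA studies with correct declaration versus 13/31 locked studies). A sketch using only the Python standard library, assuming the table layout shown in the comments; the exact 95% CI requires a conditional likelihood method and is not reproduced here:

```python
from math import comb

# 2x2 table implied by the counts in the response (assumed layout):
#                    declared   not declared
# OA ("unlocked")        9            1
# locked                13           18
a, b, c, d = 9, 1, 13, 18
n = a + b + c + d
row1, col1 = a + b, a + c          # margins: OA row total, "declared" column total

odds_ratio = (a * d) / (b * c)     # cross-product odds ratio

def p_table(x):
    # hypergeometric probability of x "declared" papers in the OA row,
    # given fixed margins
    return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

# Two-sided Fisher's exact test: sum the probabilities of all tables with
# the same margins that are no more likely than the observed table.
p_obs = p_table(a)
x_min, x_max = max(0, row1 - (n - col1)), min(row1, col1)
p_value = sum(p_table(x) for x in range(x_min, x_max + 1)
              if p_table(x) <= p_obs + 1e-12)

print(f"OR = {odds_ratio:.2f}, two-sided p = {p_value:.3f}")
```

This recovers the reported OR of 12.46 and p=0.011.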
5. We met unexpected resistance when trying to have the full length article accepted for publication, and we have come to believe that the results of this study are controversial and possibly inconvenient for the hybrid journals. We are all the more pleased that the message finally got out, and we appreciate the endorsements we are receiving from the publishing community. At the moment we have no studies of other hybrid journals under way, but we consider the quest for unbiased OA publication highly important and are encouraged to undertake further studies. We also welcome others to do the same.
Competing interests:
RC: is editor in the Cochrane Collaboration (Cochrane Musculoskeletal Review group [CMSG]).
I was interested to read this study by Ane Krag Jakobsen and
others, and it was an excellent idea to undertake it. But I
have some questions.
1. It’s a pity that the tables are not available. Can they
be made available in another way?
2. It seems very likely that “funded” as opposed to
“unfunded” research is more likely to be “unlocked”—because
the payment then doesn’t have to come out of the authors’
own pockets. Can you do an analysis of “funded” against
“unfunded”? It may well be that that division is more
important than the “industry funded” versus “non-industry
funded.”
3. How many of the studies did not give information on
funding or competing interests?
4. You write that “In 41 (19%) studies at least one author
was employed by industry with commercial interests in the
field, but only 22 (54%) of these studies had authors who
met the requirement of declaring this potential competing
interest.” But do you not think that authors think that
their competing interest is obvious when they declare that
they are employed by the company? This is an age old
problem.
5. Might you now repeat the study in other hybrid journals?
Competing interests:
I'm a board member of the
Public Library of Science
and a zealot for open
access.
Over the last decade electronic procedures have gained increasing importance in biomedical research, facilitating the submission, review, and publication of research results. “Open access” (OA) publishing means that scholarly communication is made available to everyone free of charge on the internet (1). In biomedical research, OA publishing is often financed by authors or sponsors paying a fee to the publisher to have articles published with immediate free online access (2). A few journals operate entirely under this model, whereas others, such as the specialist journals published by the BMJ Group, use a hybrid model allowing authors to choose between subscription access and author paid OA in all journal volumes.
In this cross sectional report, we investigated the association between industry funding of biomedical research and author paid OA publishing of papers in Annals of the Rheumatic Diseases (ARD) (3,4).

Studies published during a one year period, Vol 66 No 10 (Oct 2007) to Vol 67 No 9 (Sept 2008), and referred to as “Extended Reports”, were considered eligible for inclusion. The primary exposure was declared study funding from an industrial source with commercial interests in the area studied; other author-industry affiliations were considered secondary exposures. The outcome measure was access status, an unambiguous dichotomy of subscription access (“locked”) versus OA (“unlocked”).
The results are presented in tables 1 and 2. Of 216 papers, 71 (33%) declared funding from an industrial sponsor considered to have commercial interests within the field. A significantly higher proportion of industry funded studies were published “unlocked” (12/71 [17%] with industry funding vs 11/145 [8%] without; OR=2.48 (1.03 to 5.94); Chi2=4.35, p=0.037). Furthermore, studies with at least one author declaring other affiliations with industry showed a significantly higher frequency of ”unlocked” papers (Chi2=10.04, p=0.002). In 41 (19%) studies at least one author was employed by industry with commercial interests in the field, but only 22 (54%) of these studies had authors who met the requirement of declaring this potential competing interest. As expected, a larger proportion of studies funded by industry were RCTs (12/71 [17%]) compared with papers that received no industry funding (5/145 [3%]; two sided Fisher’s exact test, p=0.002). However, there was no significant interaction between study design and funding status in relation to OA.
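The headline comparison can be verified directly from the 2x2 table of access status by industry funding. A sketch of the computation, assuming a Wald interval on the log odds ratio and a Pearson chi-squared statistic without continuity correction (which is what the reported figures appear to use):

```python
import math

# 2x2 table from the report: access status by industry funding
#              unlocked (OA)   locked
# industry          12           59
# no industry       11          134
a, b, c, d = 12, 59, 11, 134
n = a + b + c + d

odds_ratio = (a * d) / (b * c)

# Wald 95% CI on the log odds ratio
se = math.sqrt(1/a + 1/b + 1/c + 1/d)
lo = math.exp(math.log(odds_ratio) - 1.96 * se)
hi = math.exp(math.log(odds_ratio) + 1.96 * se)

# Pearson chi-squared statistic (1 df, no continuity correction)
chi2 = n * (a*d - b*c)**2 / ((a+b) * (c+d) * (a+c) * (b+d))

print(f"OR = {odds_ratio:.2f} ({lo:.2f} to {hi:.2f}); chi2 = {chi2:.2f}")
```

This matches the reported OR 2.48 (1.03 to 5.94) and Chi2 = 4.35.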
Our results indicate that author paid OA publishing in the biomedical field preferentially increases the accessibility of studies funded by industry. This may favour dissemination of pro-industry results to researchers as well as to non-researching physicians (5-7). We suggest the term “e-publication bias” for this emerging type of publication bias in OA hybrid journals. Our results may well have relevance beyond the BMJ journal ARD and the medical field of rheumatology. Furthermore, we found that adequate disclosure of employment by an industrial company was missing in as many as 46% of cases, which underscores the continuing need to promote correct declaration of actual or potential conflicting interests and of funding by industry (3,8,9).
Funding:
No external funding was received for this study.
The salaries of all researchers during the study period were paid by their main employers (AKJ, RP, and LEK: Region Skåne; RC, EMB: The Parker Institute, which is funded by the Oak Foundation). The employers had no influence on the conduct of the study, the interpretation and reporting of results, or the decision to submit this article for publication.
References:
1. Craig ID, Plume AM, McVeigh ME, Pringle J, Amin M. Do open access
articles have greater citation impact? A critical review of the
literature. J Informetrics 2007;1:239-248.
2. Giglia E. Open access in the biomedical field: a unique
opportunity for researchers (and research itself). Eura Medicophys
2007;43:203-13.
3. Bekelman JE, Li Y, Gross CP. Scope and impact of financial conflicts of interest in biomedical research: a systematic review. JAMA 2003;289:454-65.
4. Patsopoulos NA, Ioannidis JPA, Analatos AA. Origin and funding of
the most frequently cited papers in medicine: database analysis. BMJ
2006;332:1061-4.
5. Korn D. Conflicts of interest in biomedical research. JAMA
2000;284:2234-7.
6. Angell M. Industry-sponsored clinical research: A broken system.
JAMA 2008;300:1069-71.
7. Choudhry NK, Stelfox HT, Detsky AS. Relationships between authors
of clinical practice guidelines and the pharmaceutical industry. JAMA
2002;287:612-7.
8. Gross CP, Gupta AR, Krumholz HM. Disclosure of financial competing
interests in randomised controlled trials: cross sectional review. BMJ
2003;326:526-7.
9. Hussain A, Smith R. Declaring financial competing interests:
survey of five general medical journals. BMJ 2001;323:263-4.
Competing interests:
RC: is editor in the Cochrane Collaboration (Cochrane Musculoskeletal Review group [CMSG]).
Failing to observe a platypus laying eggs is not a demonstration that the platypus does not lay eggs. You have to actually observe the provenance, ab ovo, of the little newborn platypuses if you want to demonstrate that they are not engendered by egg-laying.
Failing to observe a significant OA citation advantage after a year (or a year and a half, or longer, as the case may be) with randomized OA does not demonstrate that the many studies that do observe a significant OA citation advantage with nonrandomized OA are simply reporting self-selection artifacts (i.e., selective provision of OA for the more highly citable articles). You first have to replicate the OA citation advantage with nonrandomized OA (on the same or a comparable sample) and then demonstrate that randomized OA (on the same or a comparable sample) eliminates it. Otherwise you are simply comparing apples and oranges (or eggs and expectations, as the case may be) in reporting a failure to observe a significant OA citation advantage in a one-year (or 1.5-year) sample with randomized OA, along with a failure to observe a significant OA citation advantage for nonrandomized OA in the same sample (because the nonrandomized OA subsample was too small).
The many reports of the nonrandomized OA citation advantage are based on samples that were sufficiently large, and on a sufficiently long time scale (almost never as short as a year), to detect a significant OA citation advantage. A failure to observe a significant effect with small samples on short time scales, whether randomized or nonrandomized, is simply that: a failure to observe a significant effect. Keep testing until the size and duration of your sample of randomized and nonrandomized OA are big enough to test your self-selection hypothesis (i.e., comparable with the other studies that have detected the effect).
Meanwhile, note that (as other studies have likewise reported), although a year is too short to observe a significant OA citation advantage, it was long enough to observe a significant OA download advantage, and other studies have also reported that early download advantages correlate significantly with later citation advantages. Just as mating more is likely to lead to more progeny for platypuses (by whatever route) than mating less, so accessing and downloading more is likely to lead to more citations than accessing and downloading less.
Stevan Harnad
American Scientist Open Access Forum
Competing interests:
OA advocate and co-author of several articles reporting an OA citation advantage
One aspect of the debate on open access journals that has been relatively neglected is the cost of publishing in them. Retaining copyright of the manuscript often carries a significant upfront fee that is unaffordable to clinicians interested in doing small scale research without significant funding in the form of a grant. Thus, while the discussion regarding the effects on readership and citation rates is valid and interesting, the choice to publish in either traditional subscription based or open access journals may not always be available to everyone.
Competing interests:
None declared
The article by Davis and colleagues prompted a good number of rapid responses. The majority focused on the methods and the study’s power to detect a difference in citation rates. Several commentators highlighted limitations and flaws of the study and explained why, in their view, the results may have limited importance. I agree that the methodology could have been more robust; possibly the results would have been different had the authors used a different strategy, or simply pursued a longer follow-up.
As a clinician, editor, and reader of medical journals, I think I still see an important message here, one that publishers, editors, and authors alike should listen to very carefully, once more. OA articles are more likely to be read than pay-per-access articles. From an author and reader perspective, this is great. The trial confirmed that OA increases readership and dissemination of information. Readers are more likely to read an article if it is open access, and this is what any editor and author should aspire to. If the citation rate does not go up, hey, that may not be the end of the world after all. Is citation rate a good way to measure the impact and importance of an article anyway? I don’t think so, and I certainly wouldn’t lose sleep over it.
Competing interests:
I am an advocate of open access publishing, and former associate editor of an open access journal
Dr. Eysenbach has expressed repeated concerns about the negative results presented in our paper [1] and rightly questions whether we waited long enough to observe differences and whether we had the statistical power to detect citation differences.
In his own study of author-sponsored open access publishing in the journal PNAS [2], Dr. Eysenbach found large and significant differences in the odds of being cited very shortly after publication (Odds Ratio: 1.7 after 0-6 months; 2.1 after 4-10 months; 2.9 after 10-16 months).
Considering the magnitude of these results (and those of other studies), we should have seen differences in our data, even if not significant, early in the study. Our regression analysis at 9-12 months (see Table 3) indicates that the open access effect goes in the opposite direction, reducing the incidence rate of being cited by about 5% (although this is not statistically significant). That the effect runs against randomized open access articles provides additional confirmation that there is no positive effect of open access on article citations. As we mentioned in an earlier post, we are still unable to see an open access effect with an additional six months of data. While there is a risk of a Type II error (failing to detect an open access effect on citations when one really exists), there is little evidence of such an error in our study.
Impact Factors and Statistical Power
Based on our sample size and the standard deviation of citations in APS journals, we calculated that we would be able to detect significant differences of about 20% (see the section Sample Size). While some of the 11 APS journals under investigation are cited more heavily than others, their impact factors range from 3.632 for the Journal of Applied Physiology to 29.600 for Physiological Reviews. In comparison, the impact factor of PNAS, the journal in Dr. Eysenbach’s observational study, is 9.598.
Merits of Randomized Controlled Trial over Observational Study
While Dr. Eysenbach’s study of PNAS was exceptional in that it attempted to control for competing explanations for a citation advantage, variables well known to be significant predictors of future citations were left out of his analysis, such as the length of the article, the type of article (e.g. review, method, empirical), and the number of references [3-5]. In addition, his study fails to account for indicators of novelty and newsworthiness, factors which may have led some authors to pay the additional page charges ($1,000) for immediate open access status, such as whether the article was featured on the cover of the journal, received a press release, or was picked up by external scientific news sources and the lay press [6-8]. For example, open access articles in Eysenbach’s sample of PNAS articles were more than twice as likely to be featured on the front cover of the journal, and nearly twice as likely to be picked up by the media, as subscription-based articles [9]. A randomized controlled trial (the method we implemented in our study) eliminates the chance that confounding variables, whether they can be measured or not, bias the results.
Notes:
1. Davis PM, Lewenstein BV, Simon DH, Booth JG, Connolly MJL. Open access publishing, article downloads and citations: randomised trial. BMJ 2008;337:a568. http://dx.doi.org/10.1136/bmj.a568
2. Eysenbach G. Citation Advantage of Open Access Articles. PLoS Biology 2006;4(5). http://dx.doi.org/10.1371/journal.pbio.0040157
3. Stewart JA. Achievement and Ascriptive Processes in the Recognition of Scientific Articles. Social Forces 1983;62(1):166-189.
4. Baldi S. Normative versus Social Constructivist Processes in the Allocation of Citations: A Network-Analytic Model. American Sociological Review 1998;63(6):829-846.
5. van Dalen HP, Henkens K. What makes a scientific article influential? The case of demographers. Scientometrics 2001;50(3):455-482.
6. Phillips D, Kanter E, Bednarczyk B, Tastad P. Importance of the lay press in the transmission of medical knowledge to the scientific community. New England Journal of Medicine 1991;325(16):1180-1183.
http://content.nejm.org/cgi/content/abstract/325/16/1180
7. Kiernan V. Diffusion of news about research. Science Communication 2003;25(1):3-13.
8. Chapman S, Nguyen TN, White C. Press-released papers are more downloaded and cited. Tobacco Control 2007;16:71-. http://dx.doi.org/10.1136/tc.2006.019034
9. Davis PM. Citation advantage of Open Access articles likely explained by quality differential and media effects (letter). PLoS Biology 2007. http://arxiv.org/abs/cs/0701101
Competing interests:
None declared
Dr Davis continues to justify his short observation period by citing the PLoS paper. But I hope Davis will still answer the eight questions I posed for him here and on my blog, in particular question #8: "PNAS has a very high impact factor of >10 (i.e. a very high citation rate). What are the impact factors of the journals you included, and how would the different citation rates affect the type II error?" Surely one would expect to see a statistically significant citation advantage earlier in a journal that has higher citation rates?
As to Dr Harnad's comment ("Eysenbach is continuing to promote his own published report on the significantly higher number of citations for OA compared to non-OA articles as being the first and only methodologically sound demonstration of the OA citation advantage"): this is obvious nonsense. I won't repeat here my arguments for why the PLoS study represented an advance at the time (just to mention the keywords "prospective" and "adjusting for multiple confounders"; for an educational piece on this see http://www.webcitation.org/5ZttrqKAz ).
Science often progresses in small steps, each study adding something to prior studies. Early studies (Lawrence, Harnad, Antelman, etc) were completely uncontrolled, retrospective comparisons of crude citation rates of openly accessible articles (perhaps stratifying for a single confounder, but not for multiple confounders). The PLoS study was the first prospective (not retrospective, as Davis misleadingly writes) cohort study using multivariate regression to adjust for multiple confounders associated with study quality and author impact, which previous studies had not adjusted for. (The PLoS study also showed that after adjusting for these confounders, any citation advantage for self-archived (GREEN) open access articles disappeared, which is why Dr Harnad dislikes the study.) Dr Davis' study is the first published RCT (with others ongoing, including my own RCT), which gets rid of self-selection bias and residual confounding.
In epidemiology, evidence of risk or preventive factors from cohort studies, factors which remain predictors of the outcome even after adjusting for multiple confounders, often motivates and justifies the conduct of randomized trials. As such, I still stand behind the results of the PLoS study as one important step towards generating objective evidence beyond advocacy rhetoric.
Competing interests:
Editor of the Journal of Medical Internet Research (http://www.jmir.org), an open access journal
Results of Open Access RCT Generalizable to Other Disciplines
In a recently published article [1] we report that the findings described in Davis et al. (BMJ 2008) are generalizable to disciplines outside physiology, now including medicine, multidisciplinary science, biology, the social sciences, and the humanities. Articles randomly assigned to the open access intervention received significantly more downloads and reached a broader audience within the first year, yet were cited no more frequently, nor earlier, than subscription access control articles within three years.
The real beneficiaries of open access publishing may not be the
research community but communities of practice that consume, but rarely
contribute to, the corpus of literature.
[1] Davis PM. Open access, readership, citations: a randomized controlled trial of scientific journal publishing. The FASEB Journal 2011;25(7). http://www.fasebj.org/content/early/2011/03/29/fj.11-183988.abstract
Competing interests: No competing interests