
CC BY-NC Open access
Research

The scatter of research: cross sectional comparison of randomised trials and systematic reviews across specialties

BMJ 2012; 344 doi: https://doi.org/10.1136/bmj.e3223 (Published 17 May 2012) Cite this as: BMJ 2012;344:e3223
  1. Tammy Hoffmann, associate professor of clinical epidemiology,
  2. Chrissy Erueti, assistant professor,
  3. Sarah Thorning, medical librarian,
  4. Paul Glasziou, professor of evidence based medicine
  1. Centre for Research in Evidence-Based Practice, Faculty of Health Sciences and Medicine, Bond University, 4229 Qld, Australia
  1. Correspondence to: T Hoffmann Tammy_Hoffmann{at}bond.edu.au
  • Accepted 2 April 2012

Abstract

Objective To estimate the degree of scatter of reports of randomised trials and systematic reviews, and how the scatter differs among medical specialties and subspecialties.

Design Cross sectional analysis.

Data source PubMed for all disease relevant randomised trials and systematic reviews published in 2009.

Study selection Randomised trials and systematic reviews of the nine diseases or disorders with the highest burden of disease, and the broader category of disease to which each belonged.

Results The scatter across journals varied considerably among specialties and subspecialties: otolaryngology had the least scatter (363 trials across 167 journals) and neurology the most (2770 trials across 896 journals). In only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss) were 10 or fewer journals needed to locate 50% of trials. The scatter was less for systematic reviews: hearing loss had the least scatter (10 reviews across nine journals) and cancer the most (670 reviews across 279 journals). For some specialties and subspecialties the papers were concentrated in specialty journals, whereas for others few of the top 10 journals were specialty journals for that area. Generally, there was little overlap between the top 10 journals publishing trials and those publishing systematic reviews. The number of journals required to find all trials or reviews was highly correlated (r=0.97) with the number of papers for each specialty/subspecialty.

Conclusions Publication rates of specialty relevant trials vary widely, from one to seven trials per day, and the trials are scattered across hundreds of general and specialty journals. Although systematic reviews reduce the extent of scatter, they are still widely scattered and mostly in different journals from those publishing randomised trials. Personal subscriptions to journals, which are insufficient for keeping up to date with knowledge, need to be supplemented by other methods such as journal scanning services or systems that cover sufficient journals and filter articles for quality and relevance. Few current systems seem adequate.

Introduction

Research output doubles about every seven years.1 This continuing expansion is both a blessing and a curse. The improvement in our collective knowledge and ability to diagnose and manage illness has the potential to benefit patients globally. The potential gains are, however, inhibited by the information overload experienced by clinicians struggling to keep abreast of new research. Growth in the number of both articles and journals means reports of new developments are increasingly scattered, with no corresponding increase in the amount of time that clinicians can devote to reading.

Two responses to cope with the expansion of knowledge have been better organisation and synthesis of articles, such as systematic reviews and guidelines, and increasing subspecialisation of clinicians. Systematic reviews are an attempt to reduce the scatter of trials by synthesising the best research for a specific clinical question through systematically searching for, appraising, and (sometimes) pooling the scattered research. To save clinicians from needing to read all the primary trials, the Cochrane Collaboration aims to publish up to date systematic reviews in all areas of medicine, although it is still far from achieving this goal.2 Even if up to date systematic reviews were available for all questions, the number and subcategorisation of diseases, diagnostic tools, and treatments continue to grow rapidly. Hence subspecialisation has been an alternative response to the knowledge expansion, but it brings its own problems of fragmented care and rising costs.

Both responses—better organisation and subspecialisation—have contributed to increasing the scatter of research (the spread of journal articles for a particular subject across multiple journals3), which seems to be an inevitable consequence of its growth. Previous work has suggested that this scatter of research has a long tail. For example, in renal medicine 49% of 2779 renal studies (derived from 195 systematic reviews) were in 20 journals, half renal and half non-renal, whereas the remaining 51% were scattered across 446 journals.4 The distribution has been characterised by Bradford’s law, which states that if journals in a discipline are sorted by number of articles into three groups, each with about one third of all articles, then the number of journals in each group will be proportional to 1:n:n². Two methods that clinicians can use to keep up to date with research are targeted searching for answers to specific clinical questions and general scanning of a selected number of journals, with research suggesting that the latter strategy is used much more. For example, 79% of the articles last read by paediatricians were identified through browsing journals to which they personally subscribed and only 8% were located through searching.5 General scanning can be done directly or through the increasingly popular “journal scanning services,” which collate articles from across numerous journals. We focused on the general scanning strategy, whether by personal subscription or scanning service, as the feasibility of this strategy for locating relevant recent research is unknown, even before factors such as the quality of the articles located are considered. We determined how the scattering of randomised trials and systematic reviews differed across medical specialties and subspecialties and examined the implications of this scatter for keeping up to date with research.

Methods

Selection of specialties and subspecialties

We were interested in examining how the scatter of research differs across various specialties and subspecialties, as this impacts on the number of journals that clinicians need to read. Since Medical Subject Headings (MeSH) coding of randomised trials and systematic reviews refers more to the health condition than to the area of specialty, we focused on MeSH terms for selected diseases and disorders. For example, for psychiatry we searched for “mental disorders,” and for cardiology we searched for “heart diseases.” Throughout the results and discussion, however, we mainly refer to the corresponding medical specialty (psychiatry and cardiology, for example) rather than to the disease or disorder.

Our choice of the sample of specialties to study was informed by burden of disease. Using the World Health Organization Global Burden of Disease report,6 we chose the nine diseases or disorders that made the leading contributions to the burden of disease in high income countries: depressive disorders, ischaemic heart disease, cerebrovascular disease, dementias, alcohol use disorders, hearing loss, chronic obstructive pulmonary disease, diabetes mellitus, and lung cancers (the 10th was road traffic accidents, which involve many specialties). We then chose the broader category of disease to which each of these diseases or disorders belongs; for example, lung cancer in the broader category of cancer. Three of the diseases/disorders (depression, dementia, alcohol related disorders) belonged to a single category (mental disorders), resulting in seven specialties and nine subspecialties being analysed in this study.

Search strategy

An experienced medical librarian (ST) carried out the searches in PubMed on 14 April 2011. We selected the publication year 2009 to ensure that indexing would be complete: most indexing is complete by March of the following year (PubMed, personal communication, 2012). Each search string consisted of a MeSH term to identify the selected disease or disorder, a publication type to identify the type of study, and the year of publication. For example, the search strategy for randomised trials in cardiology was: “heart diseases”[MeSH] AND randomized controlled trial[pt] AND 2009[dp]. When compared with 37 other search filters, the Medline/PubMed filter of randomised controlled trial as publication type had the highest precision, as well as high sensitivity and specificity.7 Although our search may have retrieved some randomised phase 2 trials, most phase 2 studies are not randomised trials: in oncology, for example, 96% have been reported as single arm studies.8 To identify systematic reviews we searched using “meta-analysis” as publication type, as this strategy has been shown to be one of the highest performing single search terms for maximising precision when retrieving systematic reviews.9 Results for each search were saved in separate EndNote libraries.
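For readers who want to rerun a search of this kind programmatically, the sketch below shows how the cardiology query could be submitted to PubMed through NCBI’s E-utilities (here via Biopython’s Entrez module). This is an illustration only, not the workflow used in the study: the searches were run in PubMed’s web interface and exported to EndNote, and the email address and retrieval limit in the sketch are placeholders.

    from Bio import Entrez

    # Illustrative sketch only: reproduces the cardiology search string through
    # NCBI's E-utilities rather than the PubMed web interface used in the study.
    Entrez.email = "your.name@example.org"  # placeholder; NCBI requires a contact address

    # PubMed's publication type tag uses the American spelling "randomized".
    query = '"heart diseases"[MeSH] AND randomized controlled trial[pt] AND 2009[dp]'

    handle = Entrez.esearch(db="pubmed", term=query, retmax=10000)  # ESearch returns at most 10 000 ids per request
    record = Entrez.read(handle)
    handle.close()

    print("Records found:", record["Count"])
    pmids = record["IdList"]  # PubMed IDs of the retrieved trial reports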

Data extraction and analysis

References from each search were exported from EndNote into Excel, and for each disease set we sorted studies by journal then counted the number of articles and journals. This was done separately for randomised trials and systematic reviews. For each specialty and subspecialty we calculated, separately, the number of journals that would need to be read to locate all the randomised trials and systematic reviews as well as the number to locate half of the randomised trials and half of the systematic reviews (which we designate the J50 value).
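As a minimal sketch of this counting step (not the authors’ code, and using hypothetical journal names), the following shows how the total journal count and the J50 could be derived once each retrieved article has been matched to its journal:

    from collections import Counter

    def scatter_stats(journal_per_article):
        """Count articles per journal and return the number of articles, the
        number of journals, and the J50 (journals needed to cover half of the
        articles, reading the most productive journals first)."""
        counts = Counter(journal_per_article)
        total = sum(counts.values())
        covered, j50 = 0, 0
        for n_articles in sorted(counts.values(), reverse=True):
            covered += n_articles
            j50 += 1
            if covered >= total / 2:
                break
        return {"articles": total, "journals": len(counts), "J50": j50}

    # Hypothetical example: six trials spread over four journals gives a J50 of 1.
    print(scatter_stats(["J Cardiol", "J Cardiol", "J Cardiol", "BMJ", "Lancet", "Heart"]))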

For the 10 journals that had published the highest number of papers (separately for randomised trials and systematic reviews) for each specialty and subspecialty in 2009 (the “top 10” journal list), we classified the journals as specialty, subspecialty, other specialty (for example, a trial of an intervention for patients with diabetes published in a cardiology journal), or general medical. The classification was done independently by two researchers (TH and PG), who then discussed and resolved any disagreements.

Results

Scatter of randomised trials and systematic reviews

For the selected specialties and subspecialties, 14 343 randomised trials and 3214 systematic reviews were published in 2009. The correlation between the number of papers and the number of journals in a discipline was high (Pearson’s correlation coefficient r=0.97, fig 1). The largest number of randomised trials (n=2770) was in neurological diseases, published in 896 journals; in contrast, 26 randomised trials related to hearing loss were published in 21 journals. The largest number of systematic reviews (n=670) was in cancer, published in 279 journals. Hearing loss had the smallest number, with 10 systematic reviews published in nine journals.


Fig 1 Number of randomised trials and systematic reviews for each specialty/subspecialty published in 2009 compared with number of journals in which they were published. COPD=chronic obstructive pulmonary disease

Figure 2 shows the scatter of randomised trials and systematic reviews for each of the specialties and subspecialties. For example, an estimated 29 journals would need to be read to locate 25% of the randomised trials on neurological disease, 114 to locate 50%, and 896 journals to locate all trials. The range of scatter varied considerably across specialties, but with a similar shape. The plots all show a long tail of journals with few randomised trials or systematic reviews, but a logarithmic version of figure 2 (not shown) is slightly convex, suggesting an imperfect fit with Bradford’s law. The third group contained an average of 30% fewer journals than predicted by Bradford’s 1:n:n² formulation.
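A zone comparison of this kind could be reproduced along the following lines. This is a sketch of one plausible way to form the three Bradford groups from per-journal article counts and compare the observed third group with the 1:n:n² prediction; it is not the authors’ analysis code, and the grouping rule is an assumption.

    from collections import Counter

    def bradford_zones(journal_per_article):
        """Split journals (sorted by productivity) into three zones containing
        roughly equal numbers of articles, then predict the size of the third
        zone from the first two using Bradford's 1:n:n^2 relation."""
        counts = sorted(Counter(journal_per_article).values(), reverse=True)
        total = sum(counts)
        zones, zone_size, covered = [], 0, 0
        for c in counts:
            zone_size += 1
            covered += c
            if len(zones) < 2 and covered >= total * (len(zones) + 1) / 3:
                zones.append(zone_size)
                zone_size = 0
        zones.append(zone_size)          # remaining journals form the third zone
        n = zones[1] / zones[0]          # Bradford multiplier estimated from zones 1 and 2
        predicted_third_zone = zones[0] * n ** 2
        return zones, predicted_third_zone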


Fig 2 Number of journals compared with percentage of randomised trials and systematic reviews in 2009 (total number in brackets) by specialty (in bold) and subspecialty. COPD=chronic obstructive pulmonary disease

The table shows the number of journals needed to locate 50% and 100% of the randomised trials and systematic reviews for each specialty and subspecialty. For only three subspecialties (lung cancer, chronic obstructive pulmonary disease, hearing loss), and no specialties, did 10 or fewer journals need to be read to locate 50% of the randomised trials for that area. Similarly for systematic reviews, these three subspecialties, along with alcohol related disorders and otolaryngological diseases, were the only areas in which 10 or fewer journals needed to be read to locate 50% of the papers.

Number of journals needed to locate 50% and 100% of randomised trials and systematic reviews published in 2009, for each specialty and subspecialty


Journals publishing the highest number of randomised trials and systematic reviews

For five of the seven specialties (endocrinology, oncology, neurology, pulmonology, and cardiology), the journal that had published the highest number of randomised trials was the same as the journal that had published the highest number of randomised trials in the corresponding subspecialty (table). For six of the seven specialties and four of the nine subspecialties, the Cochrane Database of Systematic Reviews had published the highest number of systematic reviews, publishing between 6% and 18% of all the systematic reviews published in each area in 2009.

For three specialties (endocrinology, cardiology, and oncology) and two subspecialties (myocardial infarction, chronic obstructive pulmonary disease) some overlap was evident between the top 10 journals that had published randomised trials and the top 10 journals that had published systematic reviews, with five or more journals appearing in both lists. The remaining areas showed little similarity. A journal’s impact factor was not related to its ranking in the top 10 journals (by proportion of papers published), for either randomised trials or systematic reviews (see supplementary figure).

Types of journals

Figure 3 shows the classification of the top 10 journals (separately for randomised trials and systematic reviews) for each specialty and subspecialty as a journal of that specialty or subspecialty, a journal of another specialty, or a general medical journal.


Fig 3 Proportion of each journal type in top 10 journals for randomised trials and systematic reviews for each specialty/subspecialty. COPD=chronic obstructive pulmonary disease. *Systematic reviews were scattered across nine journals

Randomised trials

For some specialties and subspecialties the randomised trials were concentrated in specialty journals. For example, in cardiology, myocardial infarction, chronic obstructive pulmonary disease, and alcohol related disorders, seven or more of the top 10 journals publishing relevant randomised trials were specialty journals. In contrast, for dementia, depression, stroke, and neurology, only one of the top 10 journals publishing randomised trials was a specialty journal for that area. Most (n=9) of the top 10 journals that published randomised trials on depression were journals that specialise in areas other than depression (for example, Patient Education and Counseling), and for neurology, seven of the top 10 journals were other specialty journals (for example, Clinical Rehabilitation). General medical journals were not in the top 10 journals publishing randomised trials for any specialty, but were for four subspecialties: lung cancer (two of the top 10 journals), stroke (n=1), myocardial infarction (n=1), and hearing loss (n=1).

Systematic reviews

For some specialties and subspecialties the systematic reviews were concentrated in specialty journals. For example, oncology, lung cancer, and psychiatry each had six or more specialty journals in the top 10 journals. In contrast with randomised trials, at least one (range 1-3) of the top 10 journals for each specialty and subspecialty, with the exception of lung cancer, was a general medical journal. The Cochrane Database of Systematic Reviews was the most common general medical journal. For the subspecialties of hearing loss, dementia, and depression, at least half of the top 10 journals that had published systematic reviews were journals that specialise in areas other than these subspecialties (for example, systematic reviews on hearing loss published in Ageing Research Reviews).

Discussion

Randomised trials and systematic reviews were widely scattered across a large number of journals. Within a discipline, the number of journals required to find all trials or reviews was closely related to the number of papers. The extent of scatter varied among the specialties and subspecialties. Otolaryngology had the lowest number of published papers and hence the lowest extent of scatter; neurology had the highest number of papers and the greatest extent of scatter (896 journals for all trials). Neurology’s extensive scatter may result from its wide range of conditions and its overlap with other specialties such as psychiatry and internal medicine.10 For example, the top 10 journals publishing randomised trials on dementia comprised journals in the areas of gerontology, psychiatry, neurology, dementia, and randomised trials.

To find even half of the papers published in one year (that is, the J50), a clinician would need to read an impracticable number of journals; for example, an estimated 39 journals for randomised trials on diabetes and 23 journals for systematic reviews on myocardial infarction. For a few areas the numbers might be feasible, for example, lung cancer (for both randomised trials and systematic reviews nine journals would need to be read to locate 50% of relevant articles), chronic obstructive pulmonary disease (10 journals for randomised trials and six for systematic reviews), and hearing loss (eight for randomised trials and four for systematic reviews). The problem is compounded by a small overlap between the top 10 journals publishing randomised trials and those publishing systematic reviews.

Limitations of the study

Our study has some limitations. The sample of papers was restricted to a single year, and the analysis to seven specialties and nine subspecialties selected on the basis of burden of disease. Also, our search strategy relied on PubMed’s publication type to retrieve randomised trials and systematic reviews and therefore may have missed some eligible papers and included some that are not truly systematic reviews or randomised trials. Locating additional papers would, however, only increase the estimated scatter further. Scatter is also likely to be greater in specialties that typically concern patients with a wide variety of conditions. Examples include emergency medicine, primary care, palliative care, and allied health disciplines, such as occupational therapy and physiotherapy. These practice areas are not covered by a single subtree of the MeSH coding system, which further impedes retrieval of relevant articles by clinicians who work in these areas.

Comparison with other published work

Previous work has examined scatter within specific disciplines. A bibliometric analysis of 195 systematic reviews on renal conditions4 found that relevant articles were scattered across a large number of journals, in a pattern consistent with Bradford’s law, and that about half of the relevant research was published in non-renal journals. An analysis of 103 792 randomised trials, retrieved from Medline for a 12 year period, divided articles into four zones and found that the number of journals in each zone was similar to what Bradford’s law predicts, except for a much larger fourth zone.1 In our study, the final third showed less scatter than predicted by Bradford’s law, but this may be explained by our focus on a single year of randomised trials and systematic reviews. Many journals publish fewer than one paper per year in the specialties/subspecialties selected for this study and so will not have been included in our sample, whereas those publishing many will have been.

Consequences of research scatter

The main concern about this scatter is that trials and systematic reviews are published in more journals than clinicians can feasibly read to keep up to date with research. Surveys suggest that the median number of journals read by psychiatrists is 10 for those with academic commitments and three for those without11; non-academic paediatricians reported reading a median of six (range 5-8) journals, and 21% of respondents read three or fewer journals; and for surgeons, a mode of six journals for academics and four for non-academics was reported.12 Browsing a few key journals that are perceived as pertinent to clinical practice is a technique reported by clinicians as a means of keeping up to date with new research.13 14 In surveys of the reading habits of three separate groups of specialists (paediatricians, psychiatrists, and surgeons), most reported reading two key journals: one in their specialty area and one general medical journal.11 12 15 An analysis of digital access to full text articles by primary care doctors and specialists over an 18 month period found that the clinicians accessed only 38% of the journal titles available to them and, of these, showed high use of a few journals and much lower use of the rest.16 Specialists accessed a higher number of specialist journals and overall accessed journals almost twice as often as primary care doctors.

The results of our study highlight that clinicians who read or subscribe only to journals in their own specialty are likely to miss many relevant new papers, as many are published in other types of journals. This was more of a problem for some specialties and subspecialties, such as depression and neurology. As some important and influential papers are published in low circulation or low impact journals,17 these articles are likely to be missed by clinicians who arbitrarily choose a few journals to scan for up to date information. In an analysis of digital accesses, there was little consistency, beyond a small set of “core” journals, in which additional journals doctors accessed,16 suggesting that beyond a few key journals it may be intuition rather than a considered decision that guides clinicians’ selection of journals to browse. The patterns differed between randomised trials and systematic reviews, with general medical journals appearing more often in the top 10 journals publishing systematic reviews than in those publishing randomised trials, even after excluding the Cochrane Database of Systematic Reviews from the systematic review journal list. Another consequence of scatter is the trend towards increasing subspecialisation of specialties. For example, many oncologists feel pressure to specialise in a single tumour type, and the proliferation of subspecialisation is likely to continue, with a major driver being the pressure to keep up to date with new knowledge.18 However, while subspecialisation may be occurring in high density areas, in many towns specialists see patients across the whole range of their specialty and therefore need to keep up to date with developments in research across the specialty. Scatter makes this difficult.

Possible solutions to managing scatter

As the number of papers and journals increases, scatter is likely to increase. A yearly growth rate of 11% has been estimated for randomised trials1 and 3.5% for journals19; the latter results in a doubling of the number of journals every 20 years.19 Since their inception, randomised trials and systematic reviews have been growing exponentially, with no sign of a plateau in this growth.2 Hence solutions are clearly needed to help clinicians cope with the increasing volume and scatter of research publications. Different solutions have been proposed in different disciplines, and the successes and failures of these might be worth examining. Firstly, systematic reviews, which gather and synthesise the data from randomised trials, are an important element of any solution. An initial hope of the Cochrane Collaboration was to act as a central library of systematic reviews, organised by specialty, but Cochrane reviews have not become the solution that was hoped for. Some of the reasons are that only about 1 in 10 new systematic reviews are published in the Cochrane Database of Systematic Reviews,20 a large proportion of Cochrane reviews are out of date,21 and Cochrane reviews are perceived by clinicians as less clinically relevant and newsworthy than non-Cochrane systematic reviews.22 Hence there is also a need to manage the scatter of systematic reviews, for example, through a registry of planned and completed systematic reviews, such as www.crd.york.ac.uk/prospero.23 In addition to providing a single location in which to find reviews, a registry would help to manage the increasing problem of publication bias, avoid duplication of reviews (by ensuring title registration of reviews, as occurs for Cochrane systematic reviews), and provide greater transparency.24 Such registries may still be difficult for clinicians to navigate, however, and some filtering and summarisation will usually be needed. For example, the WHO Reproductive Health Library (http://apps.who.int/rhl/resources/about/en/) is an electronic review journal that summarises the best available evidence on sexual and reproductive health from Cochrane systematic reviews and presents it along with practical actions for clinicians (and policy makers).

A related option is to develop specialist databases, journals, and journal scanning services that collate scattered higher level research papers in a single resource. For example, the ACP Journal Club, sourced from over 120 journals, summarises the best evidence in internal medicine in the form of a structured abstract and clinical commentary. Journal scanning services are often touted as a solution to keeping up to date and are available for some of the specialties/subspecialties analysed in this study. We examined the journal scanning services that we could identify relevant to these specialties (through our own searching and a request to the Evidence-Based Health Care email list) and found only one that seemed to deal with the problems of wide scatter and variable quality—the free service provided by the BMJ Group’s EvidenceUpdates and the McMaster Premium Literature Service Team (http://plus.mcmaster.ca/EvidenceUpdates/), which scans over 120 journals, filters articles on the basis of quality, has practising clinicians rate them for relevance and newsworthiness, and makes them available as email alerts and in a searchable database. However, this scanning service focuses mostly on the generalist physician, and many specialties are not covered. Of the other services, common problems included no listing of the journals searched or the frequency of searching, inclusion of all article types including all study designs and news items, no filtering for quality, no transparent or explicit process for the inclusion of articles, and industry sponsorship. For the two major journal scanning services that did list the journals scanned, the overlap between the journals they scanned and the top 10 journals identified in our study was only partial: for randomised trials an average match of 5.5 (range 0-9) journals and for systematic reviews an average match of 4.5 (range 1-8) journals (full data available from authors on request). If done well, journal scanning services may be part of the solution to managing research scatter, but if not (which currently seems to be the case for most services) they may inadvertently be encouraging selective reading and giving clinicians the illusion that they are keeping up to date. Scanning services are unlikely, however, to be the sole solution to scatter. It has been reported that clinicians rarely retrieve synopses of new research for which they receive email alerts.25

To manage the problem of research scatter in allied health, specialised databases that collate and critically appraise randomised trials and systematic reviews relevant to occupational therapy (www.otseeker.com) and physical therapy (www.pedro.org.au) have been developed and are well used.26 Trials and reviews are primarily located through regular systematic searches across multiple databases, the results screened, and eligible papers retrieved and appraised.27 28 The use of social media tools to alert clinicians to important new research is an emerging area that may be useful in helping clinicians to cope with scatter. However, its potential to do so, including the structures and safeguards that would be needed, should be explored in research.

Each specialty should consider which types of resource may provide the greatest benefit for their clinicians. Although the development and maintenance of such resources requires a dedicated team to assume responsibility, lack of investment in knowledge organisation is a contributor to the enormous problem of worldwide waste in research evidence and funding.29 This problem is one that can be rectified, at least partially, through solutions such as those suggested. The resources needed will be substantial but small compared with the $100bn (£62bn; €76bn) spent on medical research annually, much of which is wasted because it is unusable or unused.29

Conclusion

Despite its caveats, our study highlights the extent of a barrier that can prevent clinicians from keeping up to date with new research evidence in their specialty. Raising awareness of the problem is an important first step towards generating solutions to minimise the impact of this problem on the use of research evidence in clinical practice.

What is already known on this topic

  • Research papers, and the number of journals publishing them, have grown exponentially, making it difficult for clinicians to keep up to date with new research

  • To keep up to date, most clinicians scan a selected number of paper or electronic journals

  • This is unlikely to be an effective strategy, but how much the scatter of research across journals differs among specialties is not known

What this study adds

  • Publication rates of specialty relevant trials vary widely (1-7 trials per day), and the trials are scattered across hundreds of general and specialty journals

  • Although systematic reviews reduce the extent of scatter, they are still widely scattered and mostly in different journals from those containing randomised trials

  • Subscriptions to journals need to be supplemented by other methods such as journal scanning services or systems that cover sufficient journals and filter for quality and relevance, but few current systems seem adequate

Notes

Cite this as: BMJ 2012;344:e3223

Footnotes

  • We thank Iain Chalmers (editor, James Lind Library, Oxford, UK) for reviewing the manuscript and providing helpful comments.

  • Contributors: TH and PG were primarily responsible for data analysis and interpretation, had full access to all of the data in the study, and take responsibility for the integrity of the data and the accuracy of the data analysis. They are the guarantors. PG conceived and designed the study. ST and CE acquired the data and provided administrative support. TH led the writing of the first draft of the manuscript. All authors contributed to drafting and revising the manuscript.

  • Funding: TH is supported by a National Health and Medical Research Council of Australia (NHMRC)/Primary Health Care Research Evaluation and Development career development fellowship (No 1033038) with funding provided by the Australian Department of Health and Ageing. PG is supported by a NHMRC Australia fellowship (No 527500). The funders had no role in the design and conduct of the study; collection, management, analysis, and interpretation of the data; or preparation, review, and approval of the manuscript.

  • Competing interests: All authors have completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declare: TH is supported by a NHMRC/Primary Health Care Research Evaluation and Development career development fellowship and PG is supported by a NHMRC Australia fellowship; none of the other authors had support from any organisation for the submitted work; all authors have no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

  • Ethical approval: Not required.

  • Data sharing: The full search strategies and primary data are available from the corresponding author at Tammy_Hoffmann{at}bond.edu.au.

This is an open-access article distributed under the terms of the Creative Commons Attribution Non-commercial License, which permits use, distribution, and reproduction in any medium, provided the original work is properly cited, the use is non commercial and is otherwise in compliance with the license. See: http://creativecommons.org/licenses/by-nc/2.0/ and http://creativecommons.org/licenses/by-nc/2.0/legalcode.

References
