

Rapid response to:


ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions

BMJ 2016; 355 doi: https://doi.org/10.1136/bmj.i4919 (Published 12 October 2016) Cite this as: BMJ 2016;355:i4919

Rapid Response:

Re: ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions

Dear Editor,

I read with great interest the paper by Sterne et al. titled "ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions". The authors are to be congratulated on thorough and important work. As I have worked on this topic for many years myself, I would like to refer briefly to my own publications on observational intervention studies.

In 2015 I published a paper, "Benchmarking Controlled Trial - a novel concept covering all observational effectiveness studies"[1]. I chose the term Benchmarking Controlled Trial (BCT) because in observational settings the comparisons have to be made between peers treating similar patients, so benchmarking is always present. I defined categories, subcategories, and characteristics of Benchmarking Controlled Trials, and distinguished studies according to whether they address a clinical question or a question about the effects of health care system features. I identified six main categories of methodological characteristics of Benchmarking Controlled Trials: research question and study design; selection of patients/population for the study and measures to increase comparability; validity and completeness of baseline data, and comparability between groups at baseline; validity and completeness of process data throughout the clinical pathway; validity and completeness of outcome data; and statistical and data issues. Each category includes several subcategories.

I elaborated the method further and in May 2016 published a paper, "Assessing validity of observational intervention studies - the Benchmarking Controlled Trials"[2]. In that paper I created and pilot-tested a checklist for appraising the methodological validity of a BCT. The checklist was created by extracting the most essential elements from the comprehensive set of criteria in the paper described above, also drawing on existing checklists and scientific papers on observational studies. Ten main methodological issues in assessing the internal validity of BCTs were identified, and criteria for judging the acceptable validity of each issue were operationalised. The result is a checklist and a practical method for assessing risk of bias in observational intervention studies.

In both of the aforementioned papers, I tested the method on articles from the Lancet and the New England Journal of Medicine and found that the studies had several methodological limitations. Some of these limitations could have been avoided at the design stage, and others should have been acknowledged as limitations in the discussion.

In March 2016 I published a paper, "System Impact Research - increasing public health and health care system performance"[3]. In that paper I present the six most important impacts in health care: accessibility, quality, equality (of obtaining services of uniform quality), effectiveness, safety (occurrence of adverse effects), and efficiency (cost-effectiveness). I also provide guidance on when to use a Benchmarking Controlled Trial or a Randomized Controlled Trial to assess the effect of interventions directed at the financing, reimbursement and incentives, organisational issues, regulations, and available resources of the health care system. The Benchmarking Controlled Trial is the design of choice in most of these cases.

In August 2016 I published a paper, "Clinical Impact Research - how to choose experimental or observational intervention study?"[4] In that paper I suggest how to choose between a Randomized Controlled Trial and a Benchmarking Controlled Trial when seeking the most suitable study design for assessing the effects of clinical interventions. The RCT is usually the design of choice for single interventions, but reasons related to ethics, the study question, and feasibility may make a BCT an alternative or even the only possible design. When studying the effectiveness of clinical pathways, the BCT is the primary study design, and when undertaking benchmarking between health care providers, it is the only feasible design.

I very much agree with the authors' conclusion that the ROBINS-I method should facilitate scientific discussion on how to improve ways to assess and prevent bias in non-randomised studies of interventions.

Yours sincerely,

Antti Malmivaara, M.D., Ph.D.
Chief Physician
Centre for Health and Social Economics, National Institute for Health and Welfare
Mannerheimintie 166
00270 Helsinki
Finland
Email: antti.malmivaara@thl.fi

References:
1. Malmivaara A. Benchmarking Controlled Trial - a novel concept covering all observational effectiveness studies. Ann Med 2015;47:332-40. doi:10.3109/07853890.2015.1027255
http://www.tandfonline.com/doi/full/10.3109/07853890.2015.1027255
2. Malmivaara A. Assessing validity of observational intervention studies - the Benchmarking Controlled Trials. Ann Med 2016:1-4. doi:10.1080/07853890.2016.1186830
http://www.tandfonline.com/doi/full/10.1080/07853890.2016.1186830
3. Malmivaara A. System impact research - increasing public health and health care system performance. Ann Med 2016:1-5. doi:10.3109/07853890.2016.1155228
http://www.tandfonline.com/doi/full/10.3109/07853890.2016.1155228
4. Malmivaara A. Clinical Impact Research - how to choose experimental or observational intervention study? Ann Med 2016:1-4. doi:10.1080/07853890.2016.1186828
http://www.tandfonline.com/doi/full/10.1080/07853890.2016.1186828

Competing interests: No competing interests

14 October 2016