Editorials

Improving comparative effectiveness research

BMJ 2012; 345 doi: http://dx.doi.org/10.1136/bmj.e5160 (Published 01 August 2012) Cite this as: BMJ 2012;345:e5160
Sean R Tunis, president and chief executive officer
Center for Medical Technology Policy, Baltimore, MD 21202, USA
sean.tunis{at}cmtpnet.org

New methodological standards focus on quality and relevance of research to patients

In June 2012, just before the Supreme Court of the United States narrowly affirmed the constitutionality of the Affordable Care Act, the Patient Centered Outcomes Research Institute (PCORI) issued a mandated draft report that provides an initial methodological blueprint for the conduct of research into comparative effectiveness.1 PCORI was established in 2010 as part of the act, which aims to expand access to health insurance to many of the estimated 50 million uninsured Americans. It was inspired by the view that reliable evidence about which health services work best for which patients is needed to inform clinical decisions and health policy decisions in the context of this historic overhaul of the US healthcare system.

The PCORI report offers a concise and coherent set of observations about the flaws in the current US health research enterprise, and it discusses how PCORI aims to remedy them. It shows that the institute is thinking seriously about how to put into practice the concept of “patient centredness” in research, taking a term that was originally generated for political reasons and translating it into an executable approach to evidence development. The report offers a framework for deciding which methods would be most appropriate for tackling specific research questions. It also compiles an initial set of standards and best practices for several research methods that are likely to be widely used in comparative effectiveness and patient centred outcomes research.

The report’s introduction acknowledges that, “when it comes to health and healthcare choices, despite the best intentions of the research community far too often the information available isn’t good enough.”1 As highlighted throughout the report and in the background papers commissioned to inform it, an essential aspect of dealing with this problem is to engage healthcare decision makers (patients, caregivers, practising clinicians, healthcare administrators, healthcare purchasers, payers, and policy makers) in a meaningful way at all phases of the research process. The view that the clinical and health services research enterprise is insufficiently aligned with the evidence needs of healthcare decision makers has been expressed before. This major new research funding institute takes a step forward in focusing on this disconnect as the guiding premise for its work.2 3 4

Most of the PCORI report focuses on standards or “best practices” for the conduct of patient centred outcomes research, consistent with the legislative language that established the main functions of the institute’s methodology committee.5 A highly informative series of background papers on best practices in research was commissioned to support this work. These dealt with general research topics (such as standards for priority setting, patient centredness, patient engagement, causal inference, and heterogeneity of treatment effects) and best practices for the use of specific research methods (including registries, adaptive and Bayesian clinical trials, and studies of diagnostic tests).6

Several aspects of the PCORI methods standards distinguish them from a rapidly growing body of work that shares the overall aim of increasing the quality, consistency, and relevance of research by providing methodological standards. The standards in the initial report focus mainly on the design of primary research, rather than methods for the review of completed studies. The Agency for Healthcare Research and Quality has produced an extensive library of methodological guides that are targeted at those who conduct systematic literature reviews, with the aim of improving their transparency, consistency, and scientific rigour.7 Similarly, the European Union Network of Health Technology Assessment has recently issued draft guidelines for those who conduct relative efficacy assessments of drugs, which cover topics such as choice of comparators, composite endpoints, and applicability.8 Although such recommendations offer indirect guidance to academics and product developers designing primary clinical research studies, their utility for this secondary purpose is uncertain.

Other organisations, including the International Society for Pharmacoeconomics and Outcomes Research, have also published best practice guidance for the design of primary research.9 However, because they are directly linked to the funding decisions that PCORI will make, the PCORI standards have greater potential to influence research practices. The PCORI report notes that because the draft standards have not yet benefited from public input, funding applications will not be scored on the basis of adherence to them. This implies that PCORI intends future proposals to be evaluated in light of revised standards. The degree to which these standards influence researchers’ behaviour will depend on whether PCORI can persuade its scientific review panels to accord adequate weight to adherence to the standards when rating submitted proposals. Alternatively, PCORI itself could hold researchers accountable to these standards.

Almost all of the standards proposed in the draft report are general in nature. For example, the standard on patient outcomes recommends that researchers “measure outcomes that people in the population of interest notice and care about.” Individual researchers are then advised to seek input from patients when selecting these outcomes for the topic they intend to study. It will ultimately be more efficient and effective for PCORI to produce, or support the development of, condition specific standards on major elements of study design, rather than depending on individual researchers to do this work independently. An example of what such guidance might look like is a recent publication that describes a structured multi-stakeholder deliberative process that recommended a core set of 14 patient reported outcomes for inclusion in trials of cancer drugs, together with a list of validated instruments and a data collection schedule and procedures.10 Related multi-stakeholder efforts to develop condition specific standards for study design are also now under way internationally, with an initial focus on drugs for Alzheimer’s disease (www.greenparkcollaborative.org). Broad adherence to such condition specific standards, incentivised through links to research funding or product reimbursement decisions, could greatly improve the quality and relevance of future research. It should also result in more consistent design across studies, thereby enhancing the value of information available from the synthesis of multiple studies within a given domain.

Although valuable in its own right, the report’s most important contribution may be that it illustrates the potentially far reaching benefits of establishing an organisation with the remit to develop and enforce standards for research that reflect the type of evidence that would be most useful for patients, caregivers, clinicians, payers, and other decision makers.


Footnotes

  • Competing interests: The author has completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declares: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; no other relationships or activities that could appear to have influenced the submitted work.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

References