
Feature: Clinical trials

The appeal of large simple trials

BMJ 2013; 346 doi: https://doi.org/10.1136/bmj.f1317 (Published 28 February 2013) Cite this as: BMJ 2013;346:f1317
Bob Roehr, freelance journalist, BobRoehr@aol.com

Clinical trials have become complex and expensive, yet are often not large enough to answer the questions necessary to optimize treatment for individual patients. One answer might be large simple trials. Bob Roehr investigates

Large simple trials are primarily phase IV studies that are embedded in the delivery of care, make use of electronic health records (EHRs), demand little extra effort of physicians and patients, and can be conducted for a relatively modest sum.

They are often discussed in the context of a learning healthcare system, which some of the more integrated systems, such as the Veterans Administration and Kaiser Permanente, have begun to build into their operations. Research becomes part of care, and the information it generates guides constant refinement of the standard of care, improves outcomes, and often yields net savings.

The criticism of many randomized controlled trials (RCTs) today is that they have become “Ferrari trials,” says Tjeerd-Pieter van Staa, director of research at the UK’s Clinical Practice Research Datalink.

“They have very high internal validity, low measurement error—we are measuring everything, controlling everything.” But as with the high performance race cars he references, “What is our ability to generalize what we saw in this beautiful Formula 1 healthcare center to things out there” in a typical clinical practice setting?

“The endpoints are primarily surrogate endpoints, not clinical endpoints,” adds Michael Lauer, director of the Division of Cardiovascular Sciences at the NIH National Heart, Lung, and Blood Institute (NHLBI). “And the setting is a research enterprise where high grade data are created in this parallel universe that is heavily audited and monitored.”

The complexity of surrogate and secondary endpoints opens the door to possible spin by a sponsor seeking to identify some positive association amidst the ocean of data points that these trials generate.

“Some analyses suggest that an integrated healthcare system that is responsible for the long-term care of their patients has an incentive to know what the right answer is,” according to Lauer. “If the best way to get the right answer is to do a randomized study, then they have the incentive to actually get that done.”

He compares it to modern, data driven retailing that tracks sales on a daily basis and adjusts stock and orders according to that information. “It is built into their culture. It is not something they are doing as an academic interest, it is something they are doing because they have a direct financial incentive.”

When you see it

There is no bright line between simple and complex trials. For Duke University’s Robert M Califf, the issue isn’t so much one of the size of the study. “I think the key is simple.” He believes too many study questions and trial designs are driven by funding and the concerns of academicians rather than by questions relevant to clinical practice.

GlaxoSmithKline senior vice president Ralph I Horwitz confesses, “Maybe it is like pornography: you know it when you see it.”

The Datalink approach1 to large simple trials “is very simple: you recruit patients at the point of care, get their consent, and then utilize routinely collected data for follow up,” says van Staa.

Their database contains EHRs of about five million patients in the UK (about 8% of the population), and while stripped of patient identifiers, “there is a way back to the patient through the system.”

“You don’t ask the clinician to start collecting lots and lots of data because that will interfere with their practice . . . You want to randomize patients and then let practice take its course,” van Staa explains.

They forgo blinding because that would add complexity and cost. “But if the options are between two statins, it shouldn’t matter,” he says. “With a countrywide system, one could recruit all of the patients for a large study in a week or two.”

Surveys in the US and UK have found that general practitioners are wary of adding to their administrative burden by participating in trials. But a study design that is well integrated into the EHR system can minimize that burden and resistance.

One potentially burdensome area is the push to include quality of life measures in clinical trials and care. Patients argue that these factors are often important when deciding between treatment options, and many studies have shown that patient satisfaction is crucial to adherence and improved clinical outcomes.

Groups like the web-based PatientsLikeMe (www.patientslikeme.com/) have shown that at least a portion of patients are eager to report extensive quality of life information related to their disease and treatment.

Lauer and others believe it is only a matter of time before the technology is developed to integrate patient reported quality of life information into EHRs. It might be tied to smart phones, driven by apps, centered on checklists, and proactively seeking the patient’s feedback.

Modern era

Lauer traces contemporary interest in large simple trials to the GISSI cardiovascular study in 1986 in Italy. The approach got a big boost in 2002 with the creation of the NIH Roadmap: Reengineering the Clinical Research Enterprise,2 Califf adds. Initially the technology, mainly EHRs, was not in place for broad implementation. But the infrastructure has caught up and now Califf says resistance “is mainly cultural.”

“Patients have a hard time believing that doctors do not know what is best for them,” says Ryan Ferguson in discussing cultural and attitudinal barriers he has encountered in implementing large simple trials with the VA Boston Healthcare System where he is a senior researcher.

“Our doctors have a hard time believing that they do not know what is best for their patients,” he adds. “Our IRBs [Institutional Review Boards] have a hard time believing that patients want to participate in a study without signing a 25 page consent form.”

Ethical considerations of informed consent are not simply an attitudinal factor; they also play out in the regulatory structure. Ruth Faden, an ethicist at Johns Hopkins University, says, “There is a fundamental disconnect between traditional approaches to research ethics and a learning health system.”

The current paradigm was codified in the 1970s in the shadow of the infamous Tuskegee syphilis study. It draws a clear line between research and treatment as being ethically different and focuses on protecting patients from abuse and harm.

The assumptions are that clinical research puts patients at greater risk than clinical practice; imposes greater burdens not associated with care; carries the risk of inferior outcomes; and that there might be more restrictions on physician and patient choice.

“A learning healthcare system proposes that it is not only acceptable but indeed essential to integrate research and practice,” argues Faden. And that requires “a way of thinking about the ethics of clinical research and healthcare that is completely different from the paradigm that has structured our thinking for the last 45 years.”

The informed consent process is not only meant to protect patients involved in research, it is “also a way of showing respect for patients and honoring their absolutely essential role in generating knowledge,” she says. Faden has coauthored a paper on these issues that serves as the cornerstone for a supplement to the Hastings Center Report.3

Ferguson says the VA currently obtains consent from each patient individually for each trial, but it is moving toward obtaining a veteran’s consent at the time of admission to the clinical point of care program rather than waiting. Once that consent is in the system, the patient will be cleared to participate in appropriate studies as the opportunities arise.

VA model

The information system at the Department of Veterans Affairs (VA) initially was designed “for basic science discovery and drug discovery; it was not tuned in to clinical effectiveness or comparative effectiveness,” Ferguson says in describing how the VA Boston Healthcare System is beginning to implement point of care trials.4

That resulted in “two asynchronous worlds, one of clinical care and one of research, that were operating independently and rather intolerantly of one another. And the financing of both was not aligned with the information needs.”

The solution was “to create a learning healthcare system within the VA that conducts large simple trials within the electronic medical records of the VA . . . So the knowledge that we gain is turned directly into care.”

If equipoise exists—for example, with two different blood pressure medicines—then during the course of routine care, patients are randomized to one of those options, Ferguson explains. “Each healthcare encounter within the VA will be turned into a learning exercise for the VA so that we can improve ourselves and the care we are delivering to veterans.”

Statistical analysis is a hybrid approach using both Bayesian statistics to adjust randomization probabilities and conventional statistics to evaluate the evidence. “This allows us to promote automatic implementation of the winning strategy. As drug A starts to beat drug B, that drug starts to come up more and more frequently” in the randomization, and at some point there is a human decision to stop use of drug B.
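The adaptive scheme Ferguson describes, in which the apparently winning drug is assigned more and more often as evidence accumulates, is a form of response-adaptive randomization. A minimal sketch of one common variant, Thompson sampling with beta-binomial posteriors, is below; the function name, priors, and simulated success rates are illustrative assumptions, not the VA’s actual algorithm.

```python
import random

def adaptive_allocation(p_a, p_b, n_patients, seed=0):
    """Simulate response-adaptive randomization via Thompson sampling.

    Each drug's unknown success rate gets a Beta(1, 1) prior. For each
    patient we draw a plausible success rate from each drug's posterior
    and assign the drug with the higher draw, so the better-performing
    drug is allocated with increasing frequency over time.
    """
    rng = random.Random(seed)
    # Beta posterior parameters [successes + 1, failures + 1] per drug
    stats = {"A": [1, 1], "B": [1, 1]}
    true_rate = {"A": p_a, "B": p_b}  # simulated (unknown to the trial)
    counts = {"A": 0, "B": 0}
    for _ in range(n_patients):
        # sample a success rate for each drug from its current posterior
        draw = {d: rng.betavariate(s, f) for d, (s, f) in stats.items()}
        drug = max(draw, key=draw.get)
        counts[drug] += 1
        # observe the simulated outcome and update that drug's posterior
        if rng.random() < true_rate[drug]:
            stats[drug][0] += 1  # success
        else:
            stats[drug][1] += 1  # failure
    return counts

# With drug A genuinely better (70% v 50% success), allocation drifts to A.
counts = adaptive_allocation(0.7, 0.5, 1000)
```

In practice, as Ferguson notes, the adaptation only shifts the randomization odds; the decision to drop the losing arm entirely remains a human one, informed by a conventional analysis of the accumulated evidence.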

An open label pilot study on insulin administration was conducted at three VA centers in the northeast, and the findings are being prepared for publication. Patients were randomized to sliding scale or weight based dosing, using either a combination of fast acting and long acting insulin or long acting insulin alone. The primary endpoint was length of hospital stay, with secondary endpoints of glycemic control and readmission within 30 days.

Entering the study was simple. The EHR opening screen for the physician added the option of participating in randomization to the standard options of sliding scale or weight based administration of insulin. Clicking on randomization generated a brief explanation of the study for the patient, a consent agreement, and the option to enroll. Ferguson says the rate at which patients declined to participate in this study was about one fifth that of a typical VA study.

Ferguson says the system gives the VA “the ability to immediately integrate our result into practice, thereby lowering the translation barrier. We will have enhanced acceptance by our providers because they are the ones that are actually going to be participating in the generation of the evidence, they will be more likely to believe that evidence, and hopefully they will be more likely to use that evidence.”

An additional benefit is the ability to assess long term outcomes. Veterans tend to stay with the system because it is national in scope, delivers high quality care, and the cost to beneficiaries is nominal.

Other VA pilot studies under way involve HIV drugs, blood pressure control, and different types of mental health approaches. He says the next generation EHR should allow for more sophisticated studies.


Footnotes

  • Competing interests: I have read and understood the BMJ Group policy on declaration of interests and have no relevant interests to declare.

  • Acknowledgment: Much of the information in this report was gathered at an Institute of Medicine meeting on large simple trials, as part of an ongoing evaluation of the medical research process. Materials from this and other meetings can be accessed online at http://iom.edu/Activities/Quality/VSRT/2012-NOV-26.aspx.

  • Provenance and peer review: Commissioned; externally peer reviewed.
