Views & Reviews Personal View

We need better ways to create new hypotheses and select those to test

BMJ 2012; 345 doi: http://dx.doi.org/10.1136/bmj.e7991 (Published 27 November 2012) Cite this as: BMJ 2012;345:e7991
  1. Frank Davidoff, executive editor, Institute for Healthcare Improvement, 143 Garden Street, Wethersfield, CT 06109, USA
  2. fdavidoff{at}cox.net

Judging from the mass of clinical trials being published, the testing of hypotheses is flourishing in biomedicine.1 This isn’t surprising, given the currently dominant hypothetico-deductive paradigm, which has spawned a huge and well funded clinical trial apparatus. It does, however, make the current disrespect for studies that “merely” generate hypotheses all the more puzzling. (One major clinical journal, for example, tells authors who submit such papers that “We think you picked the wrong journal; [we] rarely if ever publish hypotheses.”) But we need falsifiable hypotheses; without them, what would we do randomised trials on?2 Besides, having many candidate hypotheses increases the chances of discovering the small number of ideas that most effectively explain anomalous observations.3

Medicine’s history of stubborn adherence to inadequate hypotheses about disease aetiology and therapeutic mechanisms undoubtedly contributes to the current caution in embracing new ideas. To make matters worse, the process by which hypotheses are created is mysterious, seemingly far outside rational scientific thought.3 The great philosopher of science Karl Popper threw up his hands on this question, asserting that “there is no such thing as a logical method of having new ideas, or a logical reconstruction of this process . . . every discovery contains an ‘irrational element,’ or a ‘creative intuition.’”4 Blindness to the potential value of new ideas may also be deeply rooted in fear of failure, a disabling state of mind fostered by the pressure to conform that is inherent in all professions. This blindness may also be encouraged by the current system for awarding biomedical research grants, which some see as a sort of jobs creation programme for researchers; one which therefore plays it safe—testing endless variations of the hypothesis that “drug X affects outcomes in disease Y,” for example—rather than taking us in potentially transformative but riskier directions.5

Despite these intellectual headwinds, biomedical hypotheses somehow continue to emerge, but it’s impossible to test them all, and it would be extraordinarily wasteful to do so even if we could. It is a key challenge, therefore, to decide which nascent hypotheses are formulated well enough to be worth testing. The clinical and social sciences have some structured mechanisms for deciding which hypotheses to test—for example, the systematic screening and assessment method,6 and others used by private foundations and US government agencies.7 8 9 Arguably, however, the principal driving force behind these mechanisms is the need for guidance in distributing limited research funds, rather than a judgment by the scientific community that estimating the potential scientific value of hypotheses is in itself an important professional responsibility.

Contrast this with the US grand jury system, the structured process, independent of the justice system, that determines whether “probable cause” exists in ambiguous criminal cases, a requirement for bringing those cases to trial.10 Grand juries emerged centuries ago because the wider community recognised that justice would not be well served if such cases were either abandoned (because of failure to gather enough evidence) or were all brought to trial (which would swamp the courts with unjustified litigation). Clinical research is of course not criminal justice, but the grand jury system makes it clear that a profession can deal with its weak links if it has the will.

So what is to be done?

Firstly, the clinical research community must affirm the vital role played by hypothesis generation and work to improve our understanding of the process. Secondly, as a matter of editorial policy, clinical journals must encourage publication of well founded studies that generate hypotheses. Thirdly, training in biomedical research must sharpen its focus on the creation of hypotheses, by using disruptive cognitive techniques such as lateral thinking, for example.5 Fourthly, we must explore new and better ways to identify hypotheses worth testing—for example, by using open innovation communities11—and evaluate their effectiveness.

Cross disciplinary efforts to define criteria for well founded hypotheses are steps in the right direction—criteria such as clarity of constructs, measurability, explanatory power, description of causal mechanisms, parsimony, generalisability, and testability.12 Finally, as Roberta Ness suggests, we should consider funding creative work separately from implementation studies; providing financial support to laboratories or programmes as well as individual investigators; and exploring alternatives to the business model that underpins many health science centres—a model that focuses mainly on short term financial gain.5

There may also be drawbacks to developing new ways of generating and identifying fruitful hypotheses. Many pragmatic priorities—fiscal, political, and social—bear heavily on the conduct of science,6 7 8 and these must not be allowed to stifle support for promising ideas. For example, high visibility groups must not be allowed to push aside strong ideas from less well known sources that are seen as competitors. Establishing unequivocally which researchers proposed new ideas, particularly when they do so as part of a team, will go a long way toward protecting legitimate claims for academic promotion, and avoiding counterproductive disputes over patent rights.5

The current mechanisms for identifying promising hypotheses and selecting them for testing are haphazard, inefficient, and far from rational. Reshaping how we manage hypotheses will demand patience, because the payoffs will take time. It will also demand greater tolerance for risk of failure, particularly among researchers and funders, because disruptive hypotheses often disappoint. But a reshaping of the paradigm is necessary if we are to create a body of scientific knowledge that tells us not only what we know but also what we need to know.

Notes


Footnotes

  • I thank Paul Batalden, Laura Leviton, Susan Michie, and Jan Vandenbroucke for comments on an earlier draft.

  • Competing interests: The author has completed the ICMJE uniform disclosure form at www.icmje.org/coi_disclosure.pdf (available on request from the corresponding author) and declares: no support from any organisation for the submitted work; no financial relationships with any organisations that might have an interest in the submitted work in the previous three years; and no other relationships or activities that could appear to have influenced the submitted work.

  • Provenance and peer review: Commissioned; not externally peer reviewed.

References