How to make a compelling submission to NICE: tips for sponsoring organisations
BMJ 2003;327:1446 doi: https://doi.org/10.1136/bmj.327.7429.1446 (Published 18 December 2003)
Correspondence to: A Burls
Evidence in support of products being assessed by the National Institute for Clinical Excellence can be presented in various ways to make products seem more attractive than they really are.
During health technology appraisals the National Institute for Clinical Excellence (NICE) invites sponsors of the technology to make a submission in support of their product. These submissions can be of variable quality. We examine some of the more dubious techniques that can be used by sponsoring organisations to make their products seem attractive to those making reimbursement decisions.
Although the following “advice” is based on real examples of submissions to NICE, it should be remembered that similar biases can be found in academic, peer reviewed publications and that such practices are not the preserve of industry.
Make your technology look effective
Generating the evidence
Compare your intervention with an inactive control; placebo controlled trials are ideal for circumventing clinically relevant head-to-head comparisons.
If forced to use an active control, make sure that the comparator is ineffective in the patient group studied (for example, select only those who have failed to respond to standard treatment and then use standard treatment as the comparator).
Do not look at long term outcomes; for modelling purposes, extrapolating from short term data is far more flexible.
If the treatment effect may be short lived, switch all the controls to your intervention at the end of the trial. This makes long term assessment impossible, and you can extrapolate the benefits from the early results. Justify this switch on ethical grounds.
To be taken seriously, you must describe at least one analysis as “intention to treat.” Do not over-interpret this requirement or stick slavishly to technical definitions. For example, by defining withdrawal criteria to include patients who find the intervention toxic or ineffective, you can avoid collecting follow up data for patients whose inclusion in an intention to treat analysis would be undesirable.
Reporting the evidence
Place most emphasis on the outcome in the trial that produces the most significant results.
Do not be unduly upset if no outcome on its own is statistically significant.
Combine different outcomes—by chance you will often end up with the intersection of two or three sets of outcomes that is highly significant. Report these as a clinically “very important combination.”
With a little creativity you will almost certainly be able to find a subgroup of patients with especially good results.
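The odds behind these two tips are easy to work out: under the null hypothesis each independent comparison has about a 5% chance of reaching p < 0.05, so the chance of at least one spurious "significant" finding climbs quickly with the number of outcomes or subgroups tested. A minimal sketch (the comparison counts are illustrative):

```python
# Chance of at least one spuriously "significant" result (p < 0.05)
# when k independent comparisons are made and no real effect exists.
for k in (1, 5, 20):
    p_any = 1 - 0.95 ** k
    print(f"{k:>2} comparisons: {p_any:.0%} chance of a chance 'finding'")
```

With 20 outcomes or subgroups, chance alone delivers a "positive" result nearly two times out of three.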
Report mean changes on rating scales as your primary outcome. This way, no one will know how many patients experienced worthwhile clinical improvements.
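Why mean changes can mislead is easy to demonstrate: two arms can share an identical mean improvement while containing very different numbers of patients who actually crossed a clinically worthwhile threshold. A toy illustration (all scores and the threshold are invented):

```python
# Two hypothetical sets of improvement scores with identical means.
arm_a = [10, 10, 10, 10]   # everyone improves a little
arm_b = [40, 0, 0, 0]      # one big responder, the rest unchanged

THRESHOLD = 20  # invented "clinically worthwhile" improvement

mean_a = sum(arm_a) / len(arm_a)
mean_b = sum(arm_b) / len(arm_b)
responders_a = sum(s >= THRESHOLD for s in arm_a)
responders_b = sum(s >= THRESHOLD for s in arm_b)

print(mean_a, mean_b)              # 10.0 10.0: identical means
print(responders_a, responders_b)  # 0 1: very different clinical pictures
```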
Always refer to the trials favouring your intervention as the “pivotal” trials.
If your product is a drug and the evidence of efficacy is weak, insist that it is a member of a class of drugs.
Conversely, if the class of drugs has little advantage over alternative treatments, insist that your product is unique within that class.
Presentation and framing are all important—a 33% decrease may be better expressed as a 50% increase, and expressing results as a relative risk reduction is usually much more compelling than the equivalent absolute risk reduction.
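The framing trick in the last tip comes straight from the arithmetic of baselines. With invented event rates of 3% (control) and 2% (treated), the same one percentage point difference can be sold three ways:

```python
control_rate = 0.03   # 3% of control patients have the event (invented)
treated_rate = 0.02   # 2% of treated patients do (invented)

arr = control_rate - treated_rate   # absolute risk reduction
rrr = arr / control_rate            # relative risk reduction (the 33% decrease)
excess = arr / treated_rate         # comparator's excess risk (the 50% increase)

print(f"Absolute risk reduction:   {arr:.1%}")     # 1.0%, sounds modest
print(f"Relative risk reduction:   {rrr:.0%}")     # 33%, sounds better
print(f"Excess risk on comparator: {excess:.0%}")  # 50%, sounds best
```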
Minimise the possibility of independent evaluation of the evidence
Suppress the original protocols for trials (“commercial in confidence” is an established justification for this)—this prevents independent reviewers from detecting whether your reported outcomes are results driven, as they will not be able to verify the primary outcome of the trial.
If some of your trials come out with unfavourable results:
Do not report them;
Delay reporting until after the NICE committee makes its decision; or
If these results have already been reported in journals, minimise their importance—find some minor difference in trial design from the studies that give favourable results and emphasise the clinical importance of this.
Make sure it is not possible to reanalyse your results—there are many ways this can be done:
Present your data as an “integrated” analysis. This also conveniently allows you to add and subtract bits to make your case stronger (such as truncating data at an outcome point that maximises the apparent treatment effect);
Leave out the standard deviations and confidence intervals, especially where these do not reach conventional levels of statistical significance;
Present survival curves without giving information on the hazard ratio, withdrawals, or losses to follow up;
Present results in different formats for each trial to prevent independent pooled analyses.
Overwhelm independent technology assessment reviewers by submitting large volumes of data. Aim for 10 000 pages as a minimum. Include rat, elk, sheep, and in vitro tissue studies where possible—you do not want to be accused of holding back potentially useful information.
Ensure that data are delivered at the last possible moment.
If an independent review team have worked in the area, insist that the work be given to a team that is coming fresh to the subject.
However, if there is plausible evidence to support your technology, ignore the above. Develop a chummy and cooperative relationship with the review team and provide them with everything they need. (The evidence is more likely to impress the appraisal committee when it is presented by independent reviewers, who will thus make your case better than you can.)
Make your technology seem cost effective
This is the easy bit. To virtually guarantee approval of your product, aim to present an incremental cost effectiveness ratio that is less than £30 000 per quality adjusted life year (QALY). The following simple rules will ensure success.
Use a very low utility estimate for the untreated disease state. A negative utility (that is, a state worse than death) has successfully been used even for relatively mild illnesses.
Similarly, the treated health state should have the largest utility estimate you can derive.
There is little empirical research evidence of good quality, so generate your own utilities.
On no account should these be measured during the trial, as blinding may systematically reduce the estimate for the utility gained by use of your product (remember, a dogged independent review team may get hold of outcomes measured within a trial and may make a fuss if they are withheld).
Make sure that you do any study on utilities retrospectively without a protocol and include several different methods of measurement.
If you obtain suitable estimates, place great emphasis on having evidence based, empirically derived estimates (university review teams are not funded to do primary research and may have had to make assumptions).
If you do not get suitable estimates, ignore your findings and use assumptions based on standard indices (or repeat the research a few times and report the most advantageous findings).
Expert clinical opinion can be a useful source of helpful utility data.
If things still look bad, surrogate outcome measures (such as radiographic findings or chemical markers) are a boon here. NICE “case law” has established that the fact that some surrogate outcomes correlate poorly with clinical outcomes is not a reason for ignoring them.
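A worked example of the £30 000 per QALY arithmetic shows why the utility estimates above matter so much. All costs and QALY figures here are invented; the helper function is simply the standard incremental cost effectiveness ratio:

```python
def icer(cost_new, cost_old, qalys_new, qalys_old):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    return (cost_new - cost_old) / (qalys_new - qalys_old)

# With a sober utility gain the product looks expensive...
print(round(icer(20_000, 5_000, 5.2, 4.9)))   # 50000: well over the threshold

# ...but a more "generous" treated-state utility slips it under £30 000.
print(round(icer(20_000, 5_000, 5.6, 4.9)))   # 21429: approval territory
```

A shift of less than half a QALY in the assumed benefit moves the same product from rejection to approval.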
To prove cost effectiveness, a Markov model is useful.
Many people do not understand what Markov models are and are reluctant to admit their ignorance.
They are infinitely flexible, with many places where assumptions can be inserted—generate lots and pick those that make your case most effectively.
Use as much technical jargon as possible (for example, call your model a Markov model even when it is not).
Model over as long a time as possible—with the right decision model you can usually ensure that there are significant long term gains in quality of life with your product.
Do not worry readers with the intricacies of your model, just give a bottom line.
Delay submission of your model until the last possible moment. Do not make the mistake of thinking this is the deadline for industry submissions set by NICE. Models have been accepted well after the deadline, and some astute organisations have presented theirs after the appraisal committee has met.
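How flexible a Markov model really is can be seen from even a minimal two-state cohort sketch. Every state, utility, transition probability, and time horizon below is an assumption chosen by the modeller, which is precisely the point:

```python
def total_qalys(p_progress, u_stable=0.9, u_progressed=0.4, cycles=10):
    """Two-state Markov cohort: 'stable' patients progress each cycle."""
    stable, progressed = 1.0, 0.0   # whole cohort starts stable
    qalys = 0.0
    for _ in range(cycles):
        qalys += stable * u_stable + progressed * u_progressed
        moved = stable * p_progress
        stable -= moved
        progressed += moved
    return qalys

# Assume the new product slows progression (0.10 vs 0.15 per cycle).
# Stretching the horizon from 5 to 40 cycles flatters it considerably.
gain_short = total_qalys(0.10, cycles=5) - total_qalys(0.15, cycles=5)
gain_long = total_qalys(0.10, cycles=40) - total_qalys(0.15, cycles=40)
print(round(gain_short, 2), round(gain_long, 2))  # the long horizon wins
```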
Enhance the chance of success
Suborn as many competent independent researchers as you can by paying them what they will consider to be inordinate amounts of money for trivial but time consuming work (given the salaries in the university sector, this should not cost much). This will:
Undermine their perceived integrity;
Reduce the review capacity of the NHS (where there is already a deficit of skilled reviewers and health economists).
Ensure media hype; publicity is more important than peer reviewed publication.
Do not discourage patients and their doctors from making emotional appeals—one child on Newsnight is worth a thousand patients in a randomised controlled trial.
Remind all parties that “CE” stands for clinical excellence, not cost effectiveness.
Insinuate that a decision against your product would reflect the penny-pinching government's lack of commitment to the NHS.
If your product has little true effectiveness, place great emphasis on patient autonomy and the right to choose.
When things go wrong
If the decision goes against your product:
Imply that the NICE secretariat and committee are faceless bureaucrats who do not care about patients;
Submit further data or a reworked economic model;
Appeal on trivial procedural grounds—the details will not be noticed, and if your appeal is upheld you will be able to proclaim indignantly that NICE have been proved to be wrong.
If all else fails, and the appeal goes against you:
Throw a tantrum and threaten to withdraw your research or other commercial activities from the country.
Summary points
The apparent effectiveness of a health technology can be enhanced at every stage of its assessment by a variety of techniques
These include using ineffective comparators and methods when generating the evidence; selectively reporting the most favourable evidence; analysing the results in a way that favours the technology; and choosing favourable assumptions when modelling the evidence and dealing with uncertainty
Producers and users of health technology assessments need to be aware of these potential biases
We thank the many colleagues who gave input into these guidelines but prefer to remain nameless.
Contributors and sources This article was conceived by AB, who wrote the first draft as personal therapy. JS made it funnier. AB is guarantor for the article.
Competing interests: None declared. The authors work for the West Midlands Health Technology Assessment Collaboration (WMHTAC), in the Department of Public Health and Epidemiology at the University of Birmingham. WMHTAC is funded by the regional and national NHS to undertake research synthesis to inform policy decisions. WMHTAC accepts no commissions or funding from pharmaceutical companies or other for-profit organisations.