Distinguishing opinion from evidence in guidelines
BMJ 2019; 366 doi: https://doi.org/10.1136/bmj.l4606 (Published 19 July 2019) Cite this as: BMJ 2019;366:l4606
Medicine is not an exact science, and some are inclined to consider it an art rather than a science, the underlying assertion being that the best clinicians are not only knowledgeable but also capable of reliable clinical judgement (1). Nonetheless, Schünemann and colleagues have highlighted that guideline development needs to distinguish opinion from evidence (2), and although they provide key definitions for their conceptual framework, there is still a blurring between what is opinion, fact, and evidence.
Evidence is defined as facts intended for use in support of a conclusion, and an opinion as a view or judgement formed about something, not necessarily based on facts. The statement presented as an example of evidence, “I had prostate cancer detected by PSA screening and I am alive 10 years later”, includes two separate facts, but is the statement itself evidence? Does linking these two facts provide an evidential conclusion, and if so, what is it? Or does their linkage actually represent an opinion on the potential relationship of PSA screening to survival?
It is evident from this example that determining what is evidence and what is opinion may be complex, and ironically this uncertainty is likely to drive personal opinion. Moreover, in many circumstances where the highest quality of evidence is not available to underpin guidelines, the focus will be on whether the available evidence is “good enough”, and this is when decision making in medicine evolves into an art rather than an exact science. A more realistic approach may therefore be to limit the distinction to what are the facts and what are the opinions; both of these sources of information can be considered evidence. The strength of evidence will undoubtedly differ for different sources of information, and this should be systematically assessed by a grading process that is consistent for all sources.
References
1. Panda SC. Medicine: Science or Art? Mens Sana Monogr 2006; 4: 127–138.
2. Schünemann HJ, Zhang Y, Oxman AD. How to distinguish opinion from evidence in guidelines. BMJ 2019; 366: l4606.
Competing interests: No competing interests
The development of clinical guidelines requires a distinction between expert evidence and expert opinion. Expert opinions can embody misunderstandings and carry personal, emotional leanings, whereas expert evidence is more objective. Several of the suggestions in the article could still improve the current situation.
Expert consensus is likely to resemble expert opinion. In the process of learning we should recognise what is experts' opinion and what is expert evidence, and then consider which conclusions drawn from expert evidence can be used to assist clinical diagnosis and treatment. I think that experts in general medicine should join in developing the guidelines and provide evidence for hospitals to use in the diagnosis and treatment process.
Competing interests: No competing interests
Based on systematic evaluation of evidence and a balance between the pros and cons of different interventions, clinical guidelines form a set of recommendations that provide the best care for patients. Expert consensus emphasises the role of expert experience in the development of guidelines; it is mainly derived from the agreed diagnosis and treatment plans for specific clinical problems produced by a team of multidisciplinary expert representatives. Guidelines often need a large scientific basis, but we should not ignore the summary of expert consensus in clinical practice. In terms of timeliness, we often need expert consensus to guide the management of certain diseases, but we should not stick rigidly to routine.
Competing interests: No competing interests
General practitioners need practical and useful guidelines tailored to the grass-roots level. Community hospitals encounter many unusual conditions, such as snake bites and pesticide poisoning. However, general practitioners may not always be able to deal with these emergencies, and the first choice is generally referral to a superior hospital; the patient may have died or suffered serious complications by the time he or she arrives there. Community hospitals should therefore have the ability to treat acute conditions rather than lose valuable rescue time. General practitioners should develop emergency guidelines suitable for community practice, yet there are currently no guidelines written for general practitioners. They should summarise the problems encountered in practical work and categorise treatment methods and treatment outcomes. Community hospitals see large numbers of patients, but how to standardise care is a problem. Evidence is needed for any guideline, yet some evidence is obtained from practice and is not published at a high level. We should explore and solve this problem.
Competing interests: No competing interests
This paper focuses on the impact that the differences between expert opinion and evidence in current guidelines have on clinical decision making, and discusses a solution: to promote expert evidence and to collect and identify it in a systematic and transparent way. This, to some extent, increases the credibility of the guidelines. But I think the clinical guidelines that primary care doctors need should not only be evidence based but also be workable in the context of primary care. Put simply, when doctors at the grass-roots level encounter an emergency, they should know what they can do for patients with the resources they can mobilise at hand, how to do it to the greatest extent possible, how to manage the handover before referral to a higher level hospital, and which factors need to be considered when making decisions jointly with patients. In my opinion, the guidelines should not only cover specialist treatment but also provide guidance suited to the specific medical resources available in different places, so that we can respond effectively to emergencies and help patients reduce injury and pain as far as possible after recovery. The medical conditions at the grass-roots level are quite different from those of experts in large medical institutions, so expert opinion or expert evidence may not translate directly. To solve this problem, I think that, in addition to collecting expert opinion and evidence, we should also consider the opinions and evidence of practitioners doing primary medical work. Some people may worry about the professional level of grass-roots doctors, but as the people who actually put the guidelines into practice, I think they have more of a say. In future, the medical practice guidelines for grass-roots doctors should be closer to the actual situation at the grass-roots level, which requires grass-roots doctors and other practitioners to participate in their formulation in cooperation with experts. Such a model would be more conducive to the development of the family doctor team.
Competing interests: No competing interests
The author's explanation makes us pay more attention to the evidence when reading a guideline. In fact, however, it is hard to avoid seeing a great deal of expert advice in a guideline, and it is not necessarily useless. Expert opinions are based on clinical experience, or even on the consensus of a team of experts in a certain field, and as long as they are published in a proper form they can be acceptable. For example, if experts evaluating the effect of a certain drug for diabetes simply state a conclusion, it will be less convincing; if they supplement it with a table of the number of patients and the course of the disease, it will be more credible. As for expert evidence, it also has its limitations and cannot be divorced from context. Because of the heterogeneity of patients, guidelines are only for reference and cannot be applied blindly.
Competing interests: No competing interests
At present, many guidelines do not distinguish expert opinion from expert evidence. Of the two, expert evidence tends to be factual, while expert opinion is only an expert's personal view. In the absence of expert evidence we have to use expert opinion, so in the process of developing a guideline we can mark the level of evidence to help users choose an appropriate way to act on it. For general practitioners, the use of guidelines may be more limited. First, many grass-roots guidelines simply condense the current guidelines and cannot truly reflect the situation at the grass roots. Second, general practitioners face more complex situations, such as multimorbidity and polypharmacy, where the guidelines are relatively limited. In addition, the disease spectrum differs between regions, so guidelines need to be developed according to local conditions. Therefore, as far as general practitioners are concerned, it is necessary to use structured tables in clinical work to collect relevant evidence, then summarise and evaluate it, in order to formulate guidelines in line with the situation at the grass-roots level.
Competing interests: No competing interests
Dear Editor,
I write in the hope this may assist others responding.
OPINIONS AND JUDGEMENTS
MJ Quinn writes [1]: "I am not sure what point is being made in the preceding article but if it is denying us the role of "judgement" in either the making or reading of guidelines then it is wrong!" I assume MJ Quinn's reference to "the preceding article" is to the prior response [2] of Hugh Mann attributing to Leonardo da Vinci "The greatest deception men suffer is from their own opinions."
There are sound scientific reasons why the practice of medicine, a professional calling reliant upon information from inexact sciences, can only be practised by engaging professional judgement and opinion. It is not necessary here to rehearse why, but it is an obvious truth once explained.
But if one wishes to form reliable opinions based on sound judgement, an issue is what "opinion" is and, similarly, "judgement". So whilst Hugh Mann, it seems, complains of unreliable opinions and unsound judgements, MJ Quinn is concerned that the place for reliable opinions and sound judgements in medicine be acknowledged.
From logic, validity and soundness are relevant. Is an opinion or judgement sound? An argument may be logically valid but unsound.
Examples:
1) False premises and false conclusion - logically valid but unsound: All drugs are sweets [premise]. Aspirin is a drug [premise]. So Aspirin is a sweet [conclusion].
2) False premises and true conclusion - valid but unsound: All play-acting is first-aid [premise]. All CPR is play-acting [premise]. So CPR is first-aid [conclusion].
The warrant for a sound opinion or judgement includes the evidence to support the truth of its premises and so its conclusion.
But where in EBM is that explained, and how is it addressed in practice? So far as I am aware, it is not, and if so, this is part of EBM's foundational problems.
WHAT IS EVIDENCE
Erika FJ Frey observes [3] "there are a few self-styled definitions of evidence in the responses to this article".
The definition of evidence I proffered [4] is far from "self-styled". It has the precision essential to the subject of evidence, which so many, including in law, find so complicated. It is not my definition, but because of its simplicity, elegance, and completeness it is the definition I adopt when teaching undergraduates and postgraduates what evidence is and how we use it. This is also why I noted that in medicine there seems to be an attitude that only medical professionals know best, such that centuries of learning are discarded and ignored as if they do not exist.
Dr Frey continues: "With facts in context, we can expect the same outcome (with all variables taken into account) no matter how many times the evidence is tested. It is this principle that makes science, science." However, this is a common misunderstanding, including amongst scientists, and it is not correct, particularly in the context of the practice of medicine. Strict regularity of outcome, the bedrock of reductionist science and of the ill-termed "scientific method", always fails when confronted with complexity. Medical professionals are confronted daily not with homogeneity but heterogeneity. Additionally, the theories upon which medical practice is based are often drawn from inexact sciences and so are falsified, in the Popperian sense, by the irregularity typical of all inexact sciences. One might expect similar presentations or outcomes in some patients for some conditions, but not always the same, and there may be very different presentations and outcomes in other patients.
And Dr Frey also illustrates a foundational problem with EBM: "And it is also this principle upon which EBM is founded and that the evidence base is expanded."
So if Dr Frey's understanding of EBM's foundations is correct, then EBM is an ill-founded practice. In fact EBM is founded, at least tacitly, on reductionist scientific principles, with inevitable consequences in medicine, which, as Dr William Osler put it, is an "art of probabilities" or, at best, a "science of uncertainty" [5].
FACT AND TAUTOLOGY
I included "false facts" in a short list of tautologies in my prior response [4]. It is not a tautology but an internally contradictory phrase.
-------------------------------
[1] MJ Quinn. Rapid response, 23 July 2019. https://www.bmj.com/content/366/bmj.l4606/rr-1
[2] Hugh Mann. Facts vs. Opinions. Rapid response, 22 July 2019. https://www.bmj.com/content/366/bmj.l4606/rr
[3] Erika F J Frey. Rapid response, 25 July 2019. https://www.bmj.com/content/366/bmj.l4606/rr-2
[4] Clifford Miller. Rapid response, 23 July 2019. https://www.bmj.com/content/366/bmj.l4606/rr-0
[5] Silverman ME, Murray TJ, Bryan CS, eds. The Quotable Osler. Philadelphia, Pa: American College of Physicians; 2002.
Competing interests: No competing interests
I note that there are a few self-styled definitions of evidence in the responses to this article, so I would like to throw in my two cents' worth as well.
Def: Evidence = 'facts in context'.
This is a very simple but useful definition when considering what constitutes evidence within Evidence Based Medicine.
1. A fact without context cannot be evidence. A fact is merely a piece of information that may or may not be relevant in any given situation. For example, a common fact touted in Australia (especially during heatwaves) is "you can never drink too much water". But the evidence shows that you can indeed drink too much water, leading to homeostatic disturbance, electrolyte derangement, cardiac problems, and even death by water intoxication. For a fact to hold true as evidence, it must be in context.
2. Context is what positions the facts to be evidence. The fact that drinking water is essential for our bodies needs context to be an evidence based statement. Again, to use our example of water intake, without knowing the context (whether it is a 42°C day, or fluid restriction for congestive heart failure), we cannot know the parameters and relevance surrounding the fact, which is what context provides. Without context, the fact is not evidence, and it is simultaneously at the mercy of variables, taking it further away from being evidence based.
3. Evidence is facts in context. Within its stated context, the facts that underpin the evidence, and that produce the expected outcomes repeatedly, cannot be disputed. The evidence for optimal water consumption suggests that for an average adult, 2 L a day is a good amount for optimal hydration, in the absence of extreme physical stress leading to excessive sweating and electrolyte loss. We know this to be true for the vast majority of adults who fit the context-driven criteria, for whom the facts are undisputed and proven repeatedly. Here we have the fact and the context for that fact, which is what underpins the evidence.
With all this in mind, that is not to say that the evidence can't change. But that is also the point. Facts are only evidence when they are in context; if the context changes, the evidence changes too.
With facts in context, we can expect the same outcome (with all variables taken into account) no matter how many times the evidence is tested. It is this principle that makes science, science. And it is also this principle upon which EBM is founded and that the evidence base is expanded.
Competing interests: No competing interests
Re: EVIDENCE - Do not forget the hidden human dimension
The word “evidence” occurs 121 times in three successive pages of the BMJ of 27 July 2019 from Professor Holger Schünemann and colleagues [1], indicating how important a word it is. Not enough is made of the fact that, whether as “research evidence” or “expert evidence”, evidence is mediated through humans. The Latin conjugation behind the word says it all: video (I see), videre (to see), vidi (I have seen) or, put differently, I witness, to witness, I have witnessed. Evidence that something has happened must always have a human interpretation. Margaret McCartney describes us humans correctly when she mentions “seams of multiple misinformation” in her excellent article “Evidence in a post-truth world” [2].
Clare Dyer also revealed that a “Journal agrees to retract paper after university found study was never done” [3]. To put it bluntly, then, scientific liars and lying scientists exist who are capable of producing research evidence and expert opinions. Indeed, the word “evidence” (especially when preceded by “scientific”) not only assumes an importance it hardly deserves but, when used negatively, can be made to turn truths upside down. For example, “There is no scientific evidence that Felix I D Konotey-Ahulu was born on a Saturday, the second son of his mother” [4] has been interpreted to mean “There is evidence that Felix I D Konotey-Ahulu was NOT born on a Saturday, the second son of his mother.” As Professor Edzard Ernst eloquently put it, “Absence of evidence is not evidence of absence” [5].
HISTORICAL EVIDENCE CAN SUCCEED WHERE SCIENCE TOTALLY FAILS
Get the best scientific brains in the world to use Science to prove that I was born on a Saturday, the second son of my mother, and they will fail. But talk to an illiterate adult in my Krobo Tribe in Ghana and within 30 seconds the answer will emerge [6]. For the elucidation of certain truths, History is far superior to Science [6]. If this fact does not make worshippers of Science a little humble, I do not know what will.
PROFESSOR LORD SOLLY ZUCKERMAN’S ADVICE TO SCIENTISTS
In his William Randolph Lovelace II Memorial Lecture, “Pride and Prejudice in Science” [7], which all scientists should read, Lord Zuckerman FRS exposes as fraudulent work that had been presented as scientific evidence. The BMJ’s own past editor Dr Stephen Lock and Frank Wells edited an excellent 202-page book, “Fraud and Misconduct in Medical Research”, which should be required reading in every university [8]. Could not the mention of “evidence” 121 times in a three-page article have led Professor Schünemann and colleagues to include some of the concerns of Lord Zuckerman and Dr Stephen Lock? Limiting evidence to just “research evidence” and “expert evidence” omits human meddling.
FALSE DICHOTOMY BETWEEN RESEARCH EVIDENCE AND EXPERT EVIDENCE
Because of their outstanding research work, 10 eminent researchers were assembled by the WHO to compile guidelines in haemoglobinopathy [9]. Never was it thought that these members of WHO’s Expert Advisory Panel in Human Genetics were unsuitable because their “research evidence” could be at odds with their “expert opinions”. No, what was always problematic was when flawless evidence, such as Richard Doll’s University of Oxford work from the 1950s onwards linking smoking to lung cancer [10], was rubbished by other scientists with their “there is no evidence” evidence.
FLAWLESS EVIDENCE HAS BEEN IGNORED IN PROMULGATING GUIDELINES
The National Confidential Enquiry into Patient Outcome and Death (NCEPOD) published evidence that, over a two-year period in UK hospitals, “9 out of 19 patients with sickle cell disease who had pain on admission and who then died had been given excessive doses of opioids” [11 12]. Yet in its guidelines for sickle cell crisis NICE recommends “a strong opioid intravenously” [13 14]. On the death certificate “chest syndrome” would be written as the cause of death [15], with not a word about the fact that respiration is ominously suppressed by morphine and diamorphine [16], causing the chest syndrome [17].
MEDIA BREAKING NEWS: CAN HIDDEN AGENDA BE EXCLUDED?
Newspapers, radio, TV, and social media placard contradictory “evidence”, most notably in the statins/cholesterol controversy [18-24]. When evidence that the world’s media had trumpeted loudly was found to be false and withdrawn from scientific journals, the media never went back to correct the false evidence. Two examples:
(a) On Sunday 3 May 1987, at 09.15 GMT, the BBC World Service in its SCIENCE IN ACTION programme broadcast that British scientists had evidence that homosexuals and central Africans had a common genetic factor that predisposed them to AIDS, and that the research had been published in the Lancet the previous day [25]. Having just returned from a six-week fact-finding tour in Africa, where I did grassroots epidemiological research to unravel the main cause of AIDS there, I knew at once that the Lancet genetic evidence was highly suspect, and I said so. Mine was the first critical letter the Editor published: “To use genetic data from this anthropologically distinct group, who do not even have AIDS, to cover ‘Central Africa’ leaves a lot to be desired” [26]. The authors of the article subsequently wrote confessing to “ERRONEOUS DATA” [27]. Did SCIENCE IN ACTION apologise to homosexuals and central Africans for broadcasting erroneous data?
(b) A BMJ report that a Ghanaian nurse flew the 45 minutes from Kumasi to Accra, developed intestinal infarction, and was found to have electrophoresis-proven sickle cell trait [28] turned out to be false evidence [29 30 31]. Although the false case report was then withdrawn from publication, The TIMES science correspondent did not withdraw his reprehensible recommendations based on the false evidence [33]. Professor Hermann Lehmann protested from Cambridge [34].
CONCLUSION
Research evidence, expert evidence, and opinions are all generated by humans. Shall we continue to wax eloquent about evidence based medicine without scrutinising the evidence producers for probity? Should not editors now begin to ask the reviewers who pass scientific articles as fit for publication also to declare “Competing Interests”?
Competing Interest: None Declared.
Twitter@profkonoteyahul felix@konoey-ahulu.com
Felix I D Konotey-Ahulu FGA MB BS MD(Lond) DSc(Hon UCC) DSc(Hon UH) FRCP(Lond) FRCP(Glasg) DTMH(L’pool) FGCP FWACP FTWAS ORDER OF THE VOLTA (OFFICER) Kwegyir Aggrey Distinguished Professor of Human Genetics University of Cape Coast, Ghana; Former Consultant Physician Genetic Counsellor in Sickle Cell and Other Haemoglobinopathies Korle Bu Teaching Hospital & Director Ghana Institute of Clinical Genetics, and 9 Harley Street, Phoenix Hospital Group, London W1G 9AL. Website: www.sicklecell.md
1 Schünemann HJ, Zhang Y, Oxman AD. Distinguishing opinion from evidence in guidelines. BMJ 2019; 366: l4606. doi:10.1136/bmj.l4606 (27 July 2019, pages 151-153).
2 McCartney Margaret. Evidence in a post-truth world. [“seams of multiple misinformation” pervading the scientific world] BMJ 2016; 355: i6363 (28 November 2016).
3 Dyer Clare. Journal agrees to retract paper after university found study was never done. BMJ 2013; 347: f55
4 Konotey-Ahulu FID. Private Thoughts: There is no evidence that I was born on a Saturday. Postgraduate Medical Journal of Ghana 2012; 1: 32-33. [An account of how history often proves superior to science in arriving at truth.]
5 Ernst Edzard. Absence of evidence is not evidence of absence. BMJ 19 March 2012. https://blogs.bmj.com
6 Konotey-Ahulu FID. History versus Limits of Science: Is Solomonic Genius a Y Chromosome Phenomenon? J Genet Disorders Genetic Reports 2014; 3: 2. http://bit.ly/1wyq5H5
7 Zuckerman Lord Solly. Pride and Prejudice in Science. Randolph William Lovelace II Memorial Lecture. Aerospace Magazine 1974. 45: 638-647. [Also republished with permission in Ghana Medical Journal 1975 Vol. 14 No. 1 p 52-60]
8 Lock Stephen, Wells Frank (Editors). Fraud and Misconduct in Medical Research. BMJ Publishing Group, 1993. London WC1H 9JR. [ISBN 0 7279 0757 3]
9 Boyo AE, Cabannes R, Conley CL, Lehmann H, Luzzatto L, Milner PF, Ringelhann B, Weatherall DJ, Barrai I, Konotey-Ahulu FID, Motulsky AG. WHO Geneva; Scientific Group on Treatment of Haemoglobinopathies and Allied Disorders. (Technical Report 1972); 509: 83 pages.
10 Doll Sir Richard. [October 28 1912 to July 24 2005] Obituary Lancet 6 August 2005, Volume 366, Issue 9484, p 448.
11 NCEPOD (National Confidential Enquiry into Patient Outcome and Death). Sickle: A Sickle Crisis? (2008). [Sebastian Lucas (Clinical Coordinator), David Mason (Clinical Coordinator), M Mason (Chief Executive), D Weyman (Researcher), Tom Treasure (Chairman)] info@ncepod.org
12 Konotey-Ahulu FID. Poor care for sickle cell disease patients: This wake-up call is overdue. BMJ Rapid Response, 28 May 2008, to Susan Mayor, “Enquiry shows poor care for patients with sickle cell disease” (BMJ 2008; 336: 1152), on the NCEPOD report “Sickle: A Sickle Crisis?” (2008). http://www.bmj.com/cgi/eletters/336/7654/1152a#196224
13 NICE. Management of an acute painful sickle cell episode in hospital: summary of NICE guidance. BMJ 2012; 344 doi: https://doi.org/10.1136/bmj.e4063 (Published 27 June 2012) BMJ 2012;344:e4063
14 Konotey-Ahulu FID. Management of an acute painful sickle cell episode in hospital: NICE guidance is frightening. 7 September 2012. www.bmj.com/content/344/bmj.e4063/rr/599158
15 Konotey-Ahulu FID. Opiates for sickle-cell crisis? Lancet 1998; 351: 1438.
16 Konotey-Ahulu FID. The Sickle Cell Disease Patient. Natural History from a clinic-epidemiological study of the first 1550 patients of Korle Bu Hospital Sickle Cell Clinic. First Published 1991 and reprinted 1992 by The Macmillan Press Ltd. London and Basingstoke. Published & Reprinted 1996 by Tetteh-A’Domeno (T-ADCo) Watford, Herts, England.
17 Konotey-Ahulu FID. Management of sickle cell disease in the community. BMJ Rapid Response, 13 April 2014 [90 references], to Brousse V, Makani J, Rees DC. Management of sickle cell disease in the community. BMJ 2014; 348: g1765. doi:10.1136/bmj.g1765 http://www.bmj.com/content/348/bmj.g1765/rr694233
18 Godlee Fiona. Statins: We need an independent review. BMJ 2016; 354: i4992 http://www.bmj.com/content/354/bmj.i4992
19 Knapton Sarah. “End statins controversy with government review”. Daily Telegraph. Friday 16 September 2016, page 1.
20 Krumholz Harlan M. Statins evidence: when answers also raise questions. Sharing the data is more likely to settle the debate than another review. BMJ 2016; 354: i4963 (doi:10.1136/bmj.i4963)
21 Blakemore Sarah. Statins: We need an independent review. BMJ Rapid response www.bmj.com/content/354/bmj.i4992/rapid-responses.
22 Taylor Rose. Taking Statins after 75 ‘nearly halves risk of heart attack’. THE TIMES Front Page Headline Wednesday July 31 2019. [Quote: Sir Nilesh Samani, medical director at the British Heart Foundation said: “This study, although observational, adds to evidence that statins reduce heart attacks and strokes in older people”.] ***NOTE “EVIDENCE” here too?
23 Kendrick Malcolm. The Great Cholesterol Con - The Truth About What Really Causes Heart Disease And How To Avoid It. John Blake 2007. London [ISBN 9781-1-84454-610-7]
24 Le Fanu James. All Must Take Statins. Chapter 4 in: Too Many Pills – How too much medicine is endangering our health and what we can do about it. Little, Brown Book Group, 2018. London EC4Y 0DZ. [ISBN 978-1-4087-0977-1]
25 Eales L-J, Nye KE, Parkin JM, Weber JN, Forster SM, Harris JRW, Pinching AJ. Association of different allelic forms of group-specific component with susceptibility to and clinical manifestations of human immunodeficiency virus infection. Lancet 1987; i: 999-1002.
26 Konotey-Ahulu FID. Group specific component and HIV infection. Lancet 1987; i: 1267.
27 Eales L-J, Nye KE, Pinching AJ. Group-specific component and AIDS: Erroneous data. Lancet 1988; i: 1936.
28 Green RL, Huntsman RG, Serjeant GR. Sickle cell and altitude. BMJ 1971; 4: 593-595.
29 Addae RO. Sickle cell trait and altitude. BMJ 1972; 1: 53.
30 Djabanor F F T. Sickle cell trait and altitude. Brit Med J 1972; 1: 113
31 Konotey-Ahulu FID. Sickle cell trait and altitude. BMJ 1972; 2: 231-232 (22 April).
32 Green RL, Huntsman RG, Serjeant GR. Sickle and altitude. Brit Med J. 1972; 2: 294
33 TIMES The London. Sickle Cell Disease and Flying. Science Report – Medicine. December 9 1971.
34 Lehmann Hermann. Sickle cell and flying. The TIMES, London. January 4 1972.
Competing interests: No competing interests