Qualitative research and The BMJ
BMJ 2016; 352 doi: https://doi.org/10.1136/bmj.i641 (Published 10 February 2016) Cite this as: BMJ 2016;352:i641
All rapid responses
It may have occurred to many participants in this debate that the attitudes of those who favour qualitative approaches over quantitative ones, and vice versa, bear an interesting resemblance to the characteristic tendencies of the right and left cerebral hemispheres to perceive, and then act, in very different ways from each other. If the ‘dominance’ of one hemisphere, rather than a nuanced, shared role in perception and decision making, is the exception rather than the rule, this suggestion may seem an irrelevance. On the other hand, it may go some way to explaining how so little progress has been made in the debate, and how little ‘meeting of minds’ there has been.
McGilchrist has pointed out that if the left hemisphere were to overwhelm the world view of the right hemisphere, “there would be a loss of the broader picture, and the substitution of a more narrowly focused, restricted but detailed view of the world, making it perhaps more difficult to maintain a coherent overview… the ‘bits’ of anything, the parts into which it could be disassembled, would come to seem more important, more likely to lead to knowledge and understanding, than the whole, which would come to seem no more than the sum of the parts… Knowledge that came through experience, and the practical acquisition of embodied skill, would become suspect, appearing either a threat or simply incomprehensible.” (1)
McGilchrist’s ability to illustrate the very different views that the two hemispheres have, and the different preferences and actions that follow from those views, may help us to understand better the apparent inability of people of goodwill and integrity to appreciate what it is about another’s opinion that makes it worthy of their consideration.
This debate, animated if at times bemused, may present another challenge for the BMJ: to recognise how far it has moved towards a position suggesting left hemisphere dominance, dismissive of those whose views do not complement its own narrow focus.
“…since the left hemisphere is the hemisphere of What, quantity would be the only criterion that it would understand. The right hemisphere’s appreciation of How (quality) would be lost.” (1)
It is interesting, if depressing, that in recent years the BMJ has refused to explain the unprofessional and vulgar abuse in its pages and on its website directed at medical homeopaths. Many are BMA members, subject to abuse and accusations of bad practice that one might expect the GMC to take notice of, were many of the abusers not senior colleagues.
The accusations have been referenced and rebutted on many occasions (2,3), and the BMJ invited to explain its motivation. Silence.
Homeopathic assessment and treatment are complex processes, not at all what RCTs are designed to assess. There is a small mountain of qualitative evidence demonstrating the effectiveness, acceptability and safety of homeopathic remedies. (4)
Medical homeopaths are concerned not so much at the abuse directed at them in the BMJ as at the journal’s inability to realise that there is a broader picture than the one it sees and acts upon. The BMJ’s inability to see that broader picture has contributed to a negative milieu for homeopathy, reducing the availability of NHS homeopathy and inevitably increasing the sum of human suffering.
Good luck to those who argue for a more qualitative approach.
1. McGilchrist I. The Master and his Emissary. Yale University Press, 2009.
2 http://www.bmj.com/content/351/bmj.h5083/rr
3 http://www.bmj.com/content/351/bmj.h5624/rr-6
4 http://www.facultyofhomeopathy.org
Competing interests: NHS, peripatetic/unpaid homeopath.
Loder and colleagues’ reply fails to address one of the main points raised by Greenhalgh et al’s letter: their editorial policy implicitly constrains the topics about which research papers can get into the BMJ, excluding research about complex interventions. For such interventions, as Woolcock (1) warns, ‘the default assumption regarding their external validity... should be zero’; to learn anything about complex interventions which can validly be applied elsewhere, case studies which use qualitative or mixed methods are likely to be more useful than RCTs.
Many of the quantitative papers that do get published in the BMJ build their claims to generalisability on sand, by failing to acknowledge that the apparently-simple interventions they ‘test’ do not work alone but as components of a complex system, particularly when they are implemented in primary care. If these papers do indeed change clinical practice, as the editors hope, they may well do so by misleading doctors into assuming that what ‘works somewhere’ will also ‘work here’ (2).
1. Woolcock M. Using case studies to explore the external validity of ‘complex’ development interventions. Evaluation. 2013;19:229-48.
2. Cartwright N, Hardie J. Evidence-Based Policy: A Practical Guide to Doing It Better. Oxford: Oxford University Press; 2012 (Oxford Scholarship Online, February 2015).
Competing interests: No competing interests
We appreciate the vigour and thoughtfulness of those who have responded to Greenhalgh and colleagues’ open letter and to our editorial. (1,2) We see this exchange of views, and the fact that we are open about our priorities, as very positive things. We’ve listened to the arguments presented, and plan to do two things.
Over the next few months we will be consulting with qualitative researchers to learn more about how we can recognise the very best qualitative work, especially that which is likely to be relevant to our international readers and help doctors make better decisions. In addition, we will shortly issue a formal call for research methods and reporting articles about qualitative research. (3) We hope that proposals for these articles will come from some of the authors of the Greenhalgh et al letter.
Byatt, Smith and Fuller argue that, just as with qualitative studies, findings from randomised controlled trials often are not generalisable either, and that we are being inconsistent in assigning a lower priority to qualitative studies. We agree about the limited applicability of many trials. That’s exactly why The BMJ’s clearly stated policy for more than a decade has been that we are unlikely to publish certain trials, especially placebo-controlled trials, and that we give higher priority to comparative effectiveness and pragmatic trials that are done in real clinical practice.

Bolland asks why The BMJ continues to publish observational studies with limited generalisability, for example the recent paper on potato intake in pregnancy. We try to choose observational studies that we think have implications for a broad readership and that are methodologically sound. When considering nutritional epidemiology studies we bear in mind important caveats such as those raised by Ioannidis, and we publish only a small proportion of the nutritional studies we receive. (4)
It would be hard to claim that our, or indeed any journal’s, week by week decisions about what to publish are reproducible. But we do aim to publish work that will be both accessed by readers and cited by academics, and we routinely track usage and citations. We now have consistent data, going back a decade or more, showing that qualitative studies are on average accessed less often in the first three months after publication (median 4,975 accesses; lower quartile 3,623, upper quartile 6,237) than systematic reviews and meta-analysis articles (median 9,272; lower quartile 5,953, upper quartile 14,634). On average they are also much less frequently cited than review articles and some other designs. Because we have fewer than 200 slots for research papers in the journal each year, we have to do all we can to make those count, for patients, clinicians, and the journal. The paradox, as noted by Richard Smith, is that while journals pursue increased citations, researchers pursue publication in high impact factor journals. (5) This is borne out by a qualitative study recently published in BMJ Open. (6) Luckily, The BMJ is not the only journal in the world, and even within the BMJ’s stable of journals there are several in which qualitative research represents a substantial proportion of published articles: BMJ Open and BMJ Supportive and Palliative Care, to name but two.
In summary, there is not and never has been a ban on qualitative research in The BMJ but nor do we plan to introduce a quota. We are open to and will seek out the very best and most useful qualitative studies and we will continue to publish methodological articles that support good qualitative research.
1. Greenhalgh T, Annandale E, Ashcroft R, et al. An open letter to The BMJ editors on qualitative research. BMJ 2016;352:i563. doi:10.1136/bmj.i563
2. Loder E, Groves T, Schroter S, et al. Qualitative research and The BMJ. BMJ 2016;352:i641. doi:10.1136/bmj.i641
3. http://www.bmj.com/about-bmj/resources-authors/article-types/research-me...
4. Ioannidis JPA. Implausible results in human nutrition research. BMJ 2013;347:f6698.
5. http://blogs.bmj.com/bmj/2016/02/23/richard-smith-qualitative-research-a...
6. Tijdink JK, Schipper K, Bouter LM, et al. How do scientists perceive the current publication culture? A qualitative focus group interview study among Dutch biomedical researchers. BMJ Open 2016;6:e008681. doi:10.1136/bmjopen-2015-008681
Competing interests: TG declares that she is editor-in-chief of and receives salary from BMJ Open, a journal that welcomes qualitative research. EL declares that she is currently participating as a researcher in a qualitative study.
Dear Prof Greenhalgh
If The BMJ is out of step with the vast majority of its readership, the majority should stop reading The BMJ. Thus, I do not read the Daily Mail. I do not like a lot about the Guardian. I skip those sections.
I have forgotten most of the statistics that I ever knew. The randomised trial business is being carried to extremes.
I liked case reports, medical memoranda, discussions (skip the pre-pub peer reviews).
Over the sixty years and more during which The BMJ has evolved from the BRITISH MEDICAL JOURNAL into this quasi-tabloid mode, I have protested, not once, but (it seems) a thousand times. Clearly my voice is unworthy of being listened to.
The BMJ is what it is. Shrug your shoulders and transfer your affections elsewhere. Or ask the BMJ’s publishers to start ANOTHER journal, more in tune with you. Then, who knows, the two journals might marry and live happily, not ever after, but maybe for a decade?
Best wishes.
Competing interests: No competing interests
Interesting to see Prof. Greenhalgh include quantitative results in her further response on the need for qualitative research in The BMJ. Presumably she felt that the qualitative quotes she included would not have been persuasive enough alone. C'est la vie!
As to the criteria by which the BMJ's Editors might decide they are out of touch, I am at a loss to help Prof. Greenhalgh. In a Britain beset by obesity, anti-social behaviour, drug and alcohol abuse, youth unemployment, family breakdown, tension over migration and loss of moral purpose for much of society, at least the Editors still understand that Climate Change is the No.1 issue.
Competing interests: No competing interests
The open letter on which this editorial was based is currently (by a considerable margin) the most highly accessed paper on the BMJ’s website. It has an Altmetric score of 1118 (putting it in the top 20 papers ever published in the BMJ for social media coverage).
That letter, and this editorial, have so far drawn responses from 120 people in 50 rapid responses; of those 120, a single individual (0.83% of responders to date) considers the BMJ editors’ response to be adequate. The other 99.17% of responders have used words like “naïve”, “epistemologically blinkered”, “incorrect” and showing “a serious lack of academic proficiency”. I am sure someone will be able to confirm that this distribution of responses is statistically significant.
The editors have contacted me privately to emphasise that, having published our letter and responded to it in the journal, they have delivered what was needed.
I have one further question: by what criteria do the BMJ’s editors judge whether they are out of touch with their readership?
Competing interests: No competing interests
I was amazed and alarmed by the editorial response to Trish Greenhalgh et al's letter. Surely the BMJ has a role in promoting critical evaluation of research findings and knowledge? Yet this response seems to indicate a regression to reductionist approaches, along with a serious failure to engage with many complex questions about clinical interventions, health and health care.
Maybe the journal needs to consider whether it might be more useful to consign some of the favoured RCT approaches to specialist journals (due to the problems with generalizability to the actual experiences of many patients) and develop an emphasis on questioning and critiquing received 'wisdom'.
Competing interests: No competing interests
The BMJ’s response [1] to Greenhalgh et al’s open letter [2] is epistemologically blinkered, and has taken us back to a simplistic dichotomy of qualitative versus quantitative research that has long since been debunked.
All research methodologies have strengths and limitations. The BMJ editors claim that qualitative studies do not provide generalisable answers; however, the same can be said of quantitative methodologies. RCTs are considered the gold standard for evaluating the effectiveness of healthcare interventions. However, RCTs are often poorly reported and criticised for having poor external validity. Factors such as restrictive inclusion criteria, the trial setting, and differences between the trial protocol and usual care all limit external validity. [3] These factors remove context – the very thing that qualitative approaches provide.
The BMJ has chosen to focus “on quantitative research that reports outcomes that are important to patients, doctors, and policy makers”. What constitutes ‘outcomes that are important’ is, however, determined by unique social, cultural, political, and economic processes which are often best understood using qualitative methodologies. By overlooking the important contribution such methodologies can make to understanding health and healthcare, the BMJ is creating arbitrary boundaries that prevent doctors from making better decisions rather than enabling them.
1. Loder E, Groves T, Schroter S, Merino JG, Weber W. Qualitative research and The BMJ. BMJ. 2016;352. doi:10.1136/bmj.i641.
2. Greenhalgh T, Annandale E, Ashcroft R, Barlow J, Black N, Bleakley A et al. An open letter to The BMJ editors on qualitative research. BMJ. 2016;352. doi:10.1136/bmj.i563.
3. Rothwell P. Factors That Can Affect the External Validity of Randomised Controlled Trials. PLoS Clin Trials. 2006 May; 1(1): e9.
Competing interests: No competing interests
There doesn't seem to be much opposition in the responses from those carrying out quantitative research. Are they more receptive to using it than it seems?
There is a very interesting article on this theme by Craig Klugman on the Bioethics.net blog (14 February 2016): 'Is there a Happy Ever After for Medical Humanities and Bioethics'. He advocates that both are necessary but discusses the tensions and conflicts inherent in the 'marriage'. He is 'wrapping up a project discussing bioethical ethics through the lens of medical humanities'.
Competing interests: No competing interests
Re: Qualitative research and The BMJ, Where is the Participation?
I have stayed out of this debate until now because I see it as inflammatory and not building bridges between methodology and engagement. I want to work in a participatory, interdisciplinary way in all aspects of research.
As in all of life, it is critical to find out what the public wants to do and can do, rather than building arbitrary boxes to house them in. To me, methodology is about what works for the research question, not only how it is defined by researchers in the "field". I assert that qualitative research is not always public user friendly and can represent researchers' rather than participants' views once it is filtered. Bias plays no favorites; we are all vulnerable.
I have found it disturbing that qualitative protocols are not systematically published so that we can see how they deviate from what they started with, or why. Without exemplars of methods in process there is little for new researchers to learn from. It is challenging to find clear methods papers for qualitative research where one wants to involve the public in the research process itself. The existing "guidelines" are not especially useful. The focus on meaning and lived experience cannot come at the cost of other ways to answer a research question.
I disagree that qualitative research should be shuffled to "specialist" or "educational" journals as that becomes like preaching to the choir instead of reaching the mainstream where the evidence will be put into practice. This signals there is work to do so that qualitative work meets the needs of readers, the journal to which it is submitted and the practice of healthcare.
My experience is that, once we have to adapt methodology so the public can do the research with us, qualitative researchers in general are not happy with changes or simplifications to their theory-driven platforms, and since they are the ones who are asked to review, good luck with getting a real qualitative participatory study published.
On another front, who is any group to decide a journal's "quotas"? It is their job to figure out how to get their own research disseminated, cited and downloaded, and if that is not happening then the readers have voted and it is time to change the strategy.
I disagree that qualitative work is only case study or exploratory in content. It can be very useful at any stage in the research process and can save a lot of resources and heartache. If we want to know what works (or doesn't) and why, then the fastest route is to ask those who want, need, use, or have abandoned it.
Amy leads the PLOT-IT (Public Led Online Trials-Infrastructure and Tools) project at ThinkWell (http://ithinkwell.org) and is an EBHC DPhil student at the University of Oxford.
Competing interests: No competing interests