Checklists for improving rigour in qualitative research: a case of the tail wagging the dog?
BMJ 2001; 322 doi: https://doi.org/10.1136/bmj.322.7294.1115 (Published 05 May 2001). Cite this as: BMJ 2001;322:1115.
In her interesting 'Education and debate' article on the
usefulness of checklists for improving rigour in qualitative research,
Barbour [1] commented that "qualitative research is increasingly being
influenced by funding and editorial policies". Those of us using
qualitative methods in the UK, in areas such as health care, patient
satisfaction or health services research, would probably add the influence
of (regional) research ethics committees.
However, one should bear in mind that there are different views on how to
assess the quality of qualitative studies. In a research report recently
published as part of the NHS R&D Health Technology Assessment
Programme [2], four approaches to criteria for assessing qualitative
techniques are outlined:
1. All research perspectives are unique and each is valid in its own
terms. This view rejects the notion of establishing general quality
criteria.
2. Qualitative and quantitative methods can be assessed, but not by the
same criteria. For example Lincoln and Guba [3] suggested four criteria
specifically for the evaluation of qualitative methods: credibility,
transferability, consistency (or dependability) and confirmability.
3. Qualitative and quantitative methods can be assessed by the same (or
very similar) criteria, such as: acceptability, cost, validity,
reliability, generalisability (or relevance) and objectivity.
4. Finally, an alternative approach to using predefined criteria in
assessing the quality of qualitative techniques is the use of checklists
of good practice [4,5,6,7]. It is important to note that such checklists
focus on how the research is to be (or was) conducted, rather than on the
quality of the method itself.
Funding bodies, journal editors, policy-makers and research ethics
committees have had a considerable influence on the development of
guidelines for qualitative methods, not least because there is poor
qualitative research around; as Caan [8] stated, this kind of research can
"also be trite, banal or hopelessly obscure." There is very little doubt
in my mind that (a) checklists are here to stay, and (b) their
existence will influence not only journal editors, reviewers, funders and
ethics committees, but also researchers and, of course, students.
However, as a research community we need to guard against the concern which
Williams [9] expressed, namely that "checklists ... simply set up rules
which researchers play to and get round, in rather the same way they find
ways of getting round tax legislation." To continue with the taxation
example, one could argue that there is nothing wrong with using the tax
legislation to one's advantage, but there is something wrong when this is
done in a way that is illegal (i.e. improper).
It is important that the different stakeholders have some idea of the
quality of the qualitative research they are funding, publishing in their
journals, or quoting in their articles and reports. Checklists do exactly
that: they give outsiders a chance to evaluate the quality of our work.
At the same time it is important to stress that there are many different
ways to conduct qualitative research. Whilst some ways are more
appropriate than others at certain times, under certain conditions or for
specific target groups, this does not necessarily mean that a researcher
using a different approach, let alone a different method, from her
colleague is doing a worse job.
EDWIN VAN TEIJLINGEN,
Senior Lecturer in Public Health,
Dugald Baird Centre for Research on Women's Health & Department of
Public Health
CAROLINE REEVES,
Research Assistant
MANDY RYAN,
MRC Senior Fellow
MOIRA NAPPER,
Information Officer
Health Economics Research Unit
ELIZABETH RUSSELL,
Professor of Social Medicine
Department of Public Health
All are based at the UNIVERSITY OF ABERDEEN
References
1. Barbour R.S. Checklists for improving rigour in qualitative
research: a case of the tail wagging the dog? BMJ 2001;322:1115-1117.
2. Ryan M, et al. Eliciting public preferences for healthcare: a
systematic review of techniques. Health Technol Assess 2001;5(5).
Executive summary/full text on NCCHTA website:
http://www.hta.nhsweb.nhs.uk/
3. Lincoln Y.S., Guba E. Naturalistic enquiry, Sage, Beverley Hills,
CA, 1985.
4. Blaxter, M. Criteria for qualitative research. Medical Sociology
News 2000; 26: 34-37.
5. Mays N, Pope C. Rigour and qualitative research. BMJ 1995; 311:109
-112.
6. Mays N, Pope C. Assessing quality in qualitative research. BMJ
2000; 320:50-52.
7. Critical Appraisal Skills Programme (CASP) 10 questions to help
you make sense of qualitative research, 1998.
http://www.phru.org/casp/qualitative.html
8. Caan W. Call to action. Rapid response, eBMJ, 10 May 2001.
http://bmj.com/cgi/eletters/322/7294/1115
9. Williams B. Longer checklists or reflexivity? Rapid response, eBMJ,
4 May 2001. http://bmj.com/cgi/eletters/322/7294/1115
Competing interests: No competing interests
EDITOR-
Barbour's thoughtful contribution (1) to the debate on 'rigour' in
qualitative research is very timely, because the UK Department of Health
has recently introduced a Research Governance Framework, which requires,
by 31 May 2001, a baseline assessment of the existing mechanisms to ensure
the quality, safety and ethical conduct of all research in health and
social services. The minimum requirement, for a transparent and fair
approach to judging the scientific quality of research involving any
service users, applies whether or not the research is publicly funded.
For those working in the community, say with small groups of young
patients or people living with particular disabilities, qualitative
methods are often the most suitable for research questions about needs or
the impact of care. They can generate rich, multi-dimensional profiles of outcome,
especially when psychological techniques like laddering or creative
techniques like drama help patients express the deeper meanings behind
health events. However, they can also be trite, banal or hopelessly
obscure.
Chief Executives across the public sector now find that they have a
duty to ensure that research in their organisation maintains the best
possible standards, including relevance and rigour. The essential
partnership between senior managers, practitioner-researchers and medical
schools in establishing Research Governance will be the focus of our next
meeting of the NHS R&D Managers Forum (at the Institute of Child
Health in London on 29 May 2001). In particular, the urgency of assessing
current quality mechanisms and the opportunities for new scientific
collaborations have not yet registered with many Social Services
departments (2) or Primary Care Trusts (3).
One of the most promising opportunities for qualitative researchers
is the new Health Development Agency initiative "HDA Evidence Base". This
welcomes public health knowledge from action researchers, social
anthropologists, health geographers, market researchers etc., provided
"experts" in each domain can agree their approach to assessing the rigour
of research evidence.
Woody Caan,
public health specialist in research and development,
International Centre for Health and Society, University College London,
London WC1E 6BT.
1. Barbour RS. Checklists for improving rigour in qualitative
research: a case of the tail wagging the dog ? BMJ 2001; 322: 1115-1117.
2. Caan W. Respond on research. Community Care 2000; 2 November: 17.
3. Caan W. Now is the time to produce an equitable and responsive approach
to evidence-based practice in the community. Health Service Journal 2001;
5 April: 22.
Declaration of interest: Dr. Caan is a practitioner member of the
Academy of Learned Societies for the Social Sciences.
Competing interests: No competing interests
Dear Editor
Barbour's article [1] is tantalising and mystifying in equal measure.
She is right to counsel qualitative researchers against shielding behind a
protective wall of checklists and quasi-paradigmatic research techniques.
(Though surely the same charge should be levelled at epidemiologists,
statisticians and health economists, with all researchers being charged
with the responsibility of ensuring that the research tools and analysis
fit the question to be addressed.) Yet, and here's where the tantalising
becomes mystifying, she twice (once in the second paragraph and again in
the last) tells us our research strategies need to be informed by the
epistemology of qualitative research, without giving us an inkling as to
what she believes this to be. In short, whilst rightly espousing the
importance of context for the qualitative researcher she denies us the
very context in which to assess her own critique.
As a champion of applied social science, particularly action research
and qualitative research in public health, I would suggest the biggest
threat to this growing area of work is not so much over-adherence to
prescriptive checklists and sampling strategies, but rather the over-
reliance on self-reports and verbal representations of the world. For many,
qualitative research has become synonymous with the semi-structured
interview, the self-report and the ubiquitous focus group. The roots of
qualitative research are in anthropology and ethnography, where direct
observation of events is central. Much of the contribution of qualitative
research in the understanding of social aspects of health issues, notably
HIV/AIDS, has been through direct observation of the nuances of social
behaviour.
Questioning the validity of checklists and the prevailing
methodological orthodoxy in qualitative research is useful, but of greater
relevance is the need to promote (and teach) a more observational paradigm
for qualitative health research.
[1] Barbour RS "Checklists for improving rigour in qualitative
research: a case of the tail wagging the dog?"
BMJ 2001;322:1115-1117
Competing interests: No competing interests
Editor – Barbour (1) has been brave enough to place her head above the
parapet. Over the past few years researchers who use qualitative methods
have received greater esteem, obtained funding from sources previously out
of bounds, and published in journals that may have dismissed their efforts
a decade ago. As a result it has been in the interests of many
researchers to keep some of these views to themselves.
While I believe Barbour has done a service to the integrity of the
methodology of qualitative research she stops short of offering many
solutions. I sympathise: solutions to any of the problems highlighted
are themselves likely to be added to the checklists. For example, Barbour
appears to advocate the “constant-comparative” method of analysis, but
should this simply be added to current lists?
The constant search for rigour simply results in longer and longer
checklists. One only has to compare the assessment and design guidelines
for clinical trials twenty years ago with those of today to see the point.
Will qualitative research go the same way? Perhaps it doesn’t matter.
Perhaps longer checklists simply reflect advances in knowledge of the
scientific method. My concern, however, is that while checklists are a
quick and easy way of facilitating the appraisal of a paper they simply
set up rules which researchers play to and get around, in rather the same
way that they find ways of getting round tax legislation.
Historically, qualitative researchers have addressed this issue, not
simply through technical fixes, but the more important process of
documenting reflection. In other words, researchers constantly reflect on
the research question, their role, attitudes, feelings, the impact of the
researcher on the people being studied, and so on.
While the personal reflections of the researcher may appear rather
out of place in many academic journals, they would at least provide a way
of covering some of the known and unknown blind spots of checklists. This
applies to quantitative research as well as qualitative. In reading the
report of a clinical trial, do we really know everything that happened?
For example, patients were not told what drug they were receiving but were
they really “blind”? Rigour may lie in the unreported details,
peculiarities and idiosyncrasies of studies as much as in the overarching
issues contained in a checklist. The challenge is finding a way of making
it possible and acceptable to openly report these.
1. Barbour RS. Checklists for improving rigour in qualitative
research: a case of the tail wagging the dog? BMJ 2001;322:1115-1117.
Competing interests: No competing interests
Pot calling the kettle black
Dear Editor
Having just read yet another attempt by the medical world to 'get its
head around' qualitative research, I cannot help feeling frustrated at the
well-intentioned but slightly inaccurate explanations doctors receive from
other doctors. Previous work published in the BMJ around rigour in
qualitative research has often reflected 'methodological tribalism',
describing only the approaches of the more positivist end of the
rigour spectrum. This article, although generally accurate, demonstrates a
lack of conceptual clarity (e.g. confusing deviant case analysis with
theoretical sampling).
I acknowledge that polemical debate around the more philosophical
issues such as epistemology is not helpful, and doctors cannot be expected
to become overnight experts in a new field of research. To this end we
are developing a flow chart of decision making in qualitative research
that seeks to embrace the range of approaches to rigour from health
services research, sociology, psychology etc. and get away from checklists.
Initially intended as a tool for use in the systematic review of
qualitative evidence, it will hopefully also model the process of
qualitative research and educate its users a little about how to do this
kind of research.
My intention is to submit a paper on this work to the BMJ, in the hope
that doctors won't object to receiving advice on research methodology from
a researcher (PhD in psychology, using both quantitative and qualitative
methods). I would welcome a response from the editor about the potential
for this work.
Yours sincerely
Dr Jane Meyrick
Competing interests: No competing interests