
Education and debate: Qualitative research in health care

Assessing quality in qualitative research

BMJ 2000;320:50 (Published 1 January 2000) doi: https://doi.org/10.1136/bmj.320.7226.50
Nicholas Mays, health adviser (nicholas.mays@treasury.govt.nz)a
Catherine Pope, lecturer in medical sociologyb

a Social Policy Branch, The Treasury, PO Box 3724, Wellington, New Zealand
b Department of Social Medicine, University of Bristol, Bristol BS8 2PR

Correspondence to: N Mays

    This is the first in a series of three articles

    In the past decade, qualitative methods have become more commonplace in areas such as health services research and health technology assessment, and there has been a corresponding rise in the reporting of qualitative research studies in medical and related journals.1 Interest in these methods and their wider exposure in health research has led to necessary scrutiny of qualitative research. Researchers from other traditions are increasingly concerned to understand qualitative methods and, most importantly, to examine the claims researchers make about the findings obtained from these methods.

    The status of all forms of research depends on the quality of the methods used. In qualitative research, concern about assessing quality has manifested itself recently in the proliferation of guidelines for doing and judging qualitative work.2-5 Users and funders of research have had an important role in developing these guidelines as they become increasingly familiar with qualitative methods but require some means of assessing the quality of such research and of distinguishing “good” from “poor” quality work. However, the issue of “quality” in qualitative research is part of a much larger and contested debate about the nature of the knowledge produced by qualitative research, whether its quality can legitimately be judged, and, if so, how. This paper cannot do full justice to this wider epistemological debate. Rather, it outlines two views of how qualitative methods might be judged and argues that qualitative research can be assessed according to two broad criteria: validity and relevance.

    Summary points

    Qualitative methods are now widely used and increasingly accepted in health research, but quality in qualitative research is a mystery to many health services researchers

    There is considerable debate over the nature of the knowledge produced by such methods and how such research should be judged

    Antirealists argue that qualitative and quantitative research are very different and that it is not possible to judge qualitative research by using conventional criteria such as reliability, validity, and generalisability

    Quality in qualitative research can be assessed with the same broad concepts of validity and relevance used for quantitative research, but these need to be operationalised differently to take into account the distinctive goals of qualitative research

    Two opposing views

    There has been considerable debate over whether qualitative and quantitative methods can and should be assessed according to the same quality criteria. Extreme relativists hold that all research perspectives are unique and each is equally valid in its own terms, but this position means that research cannot derive any unequivocal insights relevant to action, and it would therefore command little support among applied health researchers.6 Other than this total rejection of any quality criteria, it is possible to identify two broad, competing positions, for and against using the same criteria.7 Within each position there is a range of views.

    Separate and different: the antirealist position

    Advocates of the antirealist position argue that qualitative research represents a distinctive paradigm and as such it cannot and should not be judged by conventional measures of validity, generalisability, and reliability. At its core, this position rejects naive realism—a belief that there is a single, unequivocal social reality or truth which is entirely independent of the researcher and of the research process; instead there are multiple perspectives of the world that are created and constructed in the research process.8

    Relativist criteria for quality7

    • Degree to which substantive and formal theory is produced and the degree of development of such theory

    • Novelty of the claims made from the theory

    • Consistency of the theoretical claims with the empirical data collected

    • Credibility of the account to those studied and to readers

    • Extent to which the findings are transferable to other settings

    • Reflexivity of the account—that is, the degree to which the effects of the research strategies on the findings are assessed or the amount of information about the research process that is provided to readers

    Those relativists who maintain that assessment criteria are feasible but that distinctive ones are required to evaluate qualitative research have put forward a range of different assessment schemes. In part, this is because the choice and relative importance of different criteria of quality depend on the topic and the purpose of the research. Hammersley has attempted to pull together these quality criteria (box).7 These criteria are open to challenge (for example, it is arguable whether all research should be concerned to develop theory). At the same time, many of the criteria listed are not exclusive to qualitative research.

    Using criteria from quantitative research: subtle realism

    Other authors agree that all research involves subjective perception and that different methods produce different perspectives, but, unlike the antirealists, they argue that there is an underlying reality which can be studied.9 10 The philosophy of qualitative and quantitative researchers should be one of “subtle realism”—an attempt to represent that reality rather than to attain “the truth.” From this position it is possible to assess the different perspectives offered by different research processes against each other and against criteria of quality common to both qualitative and quantitative research, particularly those of validity and relevance. However, the means of assessment may be modified to take account of the distinctive goals of qualitative research. This is our position.

    Assessing the validity of qualitative research

    There are no mechanical or “easy” solutions to limit the likelihood that there will be errors in qualitative research. However, there are various ways of improving validity, each of which requires the exercise of judgment on the part of researcher and reader.

    Triangulation

    Triangulation compares the results from either two or more different methods of data collection (for example, interviews and observation) or, more simply, two or more data sources (for example, interviews with members of different interest groups). The researcher looks for patterns of convergence to develop or corroborate an overall interpretation. This is controversial as a genuine test of validity because it assumes that any weaknesses in one method will be compensated for by the strengths of another, and that it is always possible to adjudicate between different accounts (say, from interviews with clinicians and patients). Triangulation may therefore be better seen as a way of ensuring comprehensiveness and encouraging a more reflexive analysis of the data (see below) than as a pure test of validity.

    Respondent validation

    Respondent validation, or “member checking,” includes techniques in which the investigator's account is compared with those of the research subjects to establish the level of correspondence between the two sets. Study participants' reactions to the analyses are then incorporated into the study findings. Although some researchers view this as the strongest available check on the credibility of a research project,8 it has its limitations. For example, the account produced by the researcher is designed for a wide audience and will, inevitably, be different from the account of an individual informant simply because of their different roles in the research process. As a result, it is better to think of respondent validation as part of a process of error reduction which also generates further original data, which in turn requires interpretation.11

    Clear exposition of methods of data collection and analysis

    Since the methods used in research unavoidably influence the objects of inquiry (and qualitative researchers are particularly aware of this), a clear account of the process of data collection and analysis is important. By the end of the study, it should be possible to provide a clear account of how early, simpler systems of classification evolved into more sophisticated coding structures and thence into clearly defined concepts and explanations for the data collected. Although it adds to the length of research reports, the written account should include sufficient data to allow the reader to judge whether the interpretation proffered is adequately supported by the data.

    Reflexivity

    Reflexivity means sensitivity to the ways in which the researcher and the research process have shaped the collected data, including the role of prior assumptions and experience, which can influence even the most avowedly inductive inquiries. Personal and intellectual biases need to be made plain at the outset of any research report to enhance the credibility of the findings. The effects of personal characteristics such as age, sex, social class, and professional status (doctor, nurse, physiotherapist, sociologist, etc) on the data collected and on the “distance” between the researcher and those researched also need to be discussed.

    Attention to negative cases

    As well as exploring alternative explanations for the data collected, a long-established tactic for improving the quality of explanations in qualitative research is to search for, and discuss, elements in the data that contradict, or seem to contradict, the emerging explanation of the phenomena under study. Such “deviant case analysis” helps refine the analysis until it can explain all or the vast majority of the cases under scrutiny.


    (Illustration credit: Liane Payne)

    Fair dealing

    The final technique is to ensure that the research design explicitly incorporates a wide range of different perspectives so that the viewpoint of one group is never presented as if it represents the sole truth about any situation.

    Relevance

    Research can be relevant when it either adds to knowledge or increases the confidence with which existing knowledge is regarded. Another important dimension of relevance is the extent to which findings can be generalised beyond the setting in which they were generated. One way of achieving this is to ensure that the research report is sufficiently detailed for the reader to be able to judge whether or not the findings apply in similar settings. Another tactic is to use probability sampling (to ensure that the range of settings chosen is representative of a wider population, for example by using a stratified sample). Probability sampling is often ignored by qualitative researchers, but it can have its place. Alternatively, and more commonly, theoretical sampling ensures that an initial sample is drawn to include as many as possible of the factors that might affect variability of behaviour, and then this is extended, as required, in the light of early findings and emergent theory.2 The full sample, therefore, attempts to include the full range of settings relevant to the conceptualisation of the subject.
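    To make the stratified (probability) sampling tactic mentioned above concrete, the short sketch below shows one way a stratified draw of study settings might be carried out, so that every stratum thought to affect the behaviour of interest is represented in the sample. The settings, the "location" stratum, and the number drawn per stratum are entirely hypothetical; this is an illustrative sketch, not a procedure described in the article.

```python
# Illustrative sketch only: stratified sampling of study settings.
# The setting names and the "location" stratum are hypothetical.
import random
from collections import defaultdict

settings = [
    {"name": "Practice A", "location": "urban"},
    {"name": "Practice B", "location": "urban"},
    {"name": "Practice C", "location": "rural"},
    {"name": "Practice D", "location": "rural"},
    {"name": "Practice E", "location": "suburban"},
    {"name": "Practice F", "location": "suburban"},
]

def stratified_sample(frame, key, per_stratum, seed=0):
    """Draw `per_stratum` settings at random from each stratum of `frame`."""
    random.seed(seed)  # fixed seed so the draw can be documented and repeated
    strata = defaultdict(list)
    for setting in frame:
        strata[setting[key]].append(setting)  # group settings by stratum
    sample = []
    for group in strata.values():
        # sample within each stratum, never more than the stratum contains
        sample.extend(random.sample(group, min(per_stratum, len(group))))
    return sample

# One setting from each location type, so the range of settings is covered
print(stratified_sample(settings, key="location", per_stratum=1))
```

    Theoretical sampling, by contrast, cannot be reduced to a fixed procedure of this kind, because the sample is extended iteratively as early findings and emergent theory indicate which further settings or cases are needed.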

    Some questions about quality that might be asked of a qualitative study

    • Worth or relevance—Was this piece of work worth doing at all? Has it contributed usefully to knowledge?

    • Appropriateness of the design to the question—Would a different method have been more appropriate? For example, if a causal hypothesis was being tested, was a qualitative approach really appropriate?

    • Context—Is the context or setting adequately described so that the reader could relate the findings to other settings?

    • Sampling—Did the sample include the full range of possible cases or settings so that conceptual rather than statistical generalisations could be made (that is, more than convenience sampling)? If appropriate, were efforts made to obtain data that might contradict or modify the analysis by extending the sample (for example, to a different type of area)?

    • Data collection and analysis—Were the data collection and analysis procedures systematic? Was an “audit trail” provided such that someone else could repeat each stage, including the analysis? How well did the analysis succeed in incorporating all the observations? To what extent did the analysis develop concepts and categories capable of explaining key processes or respondents' accounts or observations? Was it possible to follow the iteration between data and the explanations for the data (theory)? Did the researcher search for disconfirming cases?

    • Reflexivity of the account—Did the researcher self consciously assess the likely impact of the methods used on the data obtained? Were sufficient data included in the report of the study to allow readers to assess whether the analytical criteria had been met?

    Is there any place for quality guidelines?

    Whether quality criteria should be applied to qualitative research, which criteria are appropriate, and how they should be assessed are all hotly debated questions. It would be unwise to consider any single set of guidelines as definitive. We list some questions to ask of any piece of qualitative research (box); the questions emphasise criteria of relevance and validity. They could also be used by researchers at different times during the life of a particular research project to improve its quality.

    Conclusion

    Although the issue of quality in qualitative health and health services research has received considerable attention, a recent paper was able to argue, legitimately, that “quality in qualitative research is a mystery to many health services researchers.”12 However, qualitative researchers can address the issue of quality in their research. As in quantitative research, the basic strategy to ensure rigour, and thus quality, in qualitative research is systematic, self conscious research design, data collection, interpretation, and communication. Qualitative research has much to offer. Its methods can, and do, enrich our knowledge of health and health care. It is not, however, an easy option or the route to a quick answer. As Dingwall et al conclude, “qualitative research requires real skill, a combination of thought and practice and not a little patience.”12


    Acknowledgments

    We acknowledge the contribution of the HTA report on qualitative research methods by Elizabeth Murphy, Robert Dingwall, David Greatbatch, Susan Parker, and Pamela Watson to this paper. We thank these authors for their careful exposition of a tangled series of debates, and their timely publication of this literature review.

    The views expressed in this paper are those of the authors; in the case of NM, they do not necessarily reflect the views of the New Zealand Treasury. The Treasury takes no responsibility for any errors or omissions in, or for the correctness of, the information contained in this article.

    Footnotes

    • Series editors: Catherine Pope and Nicholas Mays

    • Competing interests: None declared.

    • This article is taken from the second edition of Qualitative Research in Health Care, edited by Catherine Pope and Nicholas Mays, published by BMJ Books
