

How do patients use information on health providers?

BMJ 2010; 341 doi: (Published 26 November 2010) Cite this as: BMJ 2010;341:c5272
Martin Marshall, clinical director and director of research and development,1 Vin McLoughlin, director of quality performance and analysis1
1The Health Foundation, London WC2E 9RA, UK
Correspondence to: martin.marshall{at}
  • Accepted 20 August 2010

Expectations are high that the public will use performance data to choose their health providers and so drive improvements in quality. Martin Marshall and Vin McLoughlin question whether this is realistic and suggest how published information could be better used in the future

Over the past two decades market forces have been increasingly used as a mechanism to drive improvements in quality and efficiency in health services. Information fuels markets, and the publication of comparative data has been, and will remain, a high profile feature of health policies of all UK political parties.1 2 3 The Department of Health in England, regulators such as the Care Quality Commission,4 professional organisations such as the Society for Cardiothoracic Surgery,5 Dr Foster Intelligence from the private sector,6 and the health service through NHS Choices7 have all contributed to the plethora of information that is now available to whoever wants to look at it.

Who are the potential audiences? Clinicians and managers clearly require information about how well they are doing to improve their work. Policy makers also need to make judgments, and the public, usually through the media, may want to use this information to hold those in the health service to account. But the spotlight is increasingly on patients—the people who use the service—to drive change. Market advocates believe that patients want to be able to choose between different providers; that given good information they will do so; and that these choices will be a major force driving improvement in services.8 They cite evidence,9 sometimes selectively, supporting their views. We use research evidence to challenge these expectations, explain why we think patient choice is not at present a strong lever for change, and suggest ways in which currently available information can be improved to optimise its effect.

The evidence on public use of comparative information

Two large systematic reviews summarise the growing evidence on what happens when comparative information about the quality of care and the performance of health services is placed in the public domain.10 11 The findings from research conducted over the past 20 years in several countries are reasonably consistent. They provide little support for the belief that most patients behave in a consumerist fashion as far as their health is concerned. Although patients are clear that they want information to be made publicly available, they rarely search for it, often do not understand or trust it, and are unlikely to use it in a rational way to choose the best provider. Public reporting of comparative data does seem to have a limited role in improving quality, but the underlying mechanism is providers’ concern about their reputation, rather than direct, market based competition driven by service users.

Interpreting the evidence

These findings, which might seem counterintuitive, have several possible explanations. The first is that most patients do not yet seek out performance information and doing so would require a change in culture.10 It might, therefore, take a long time for any effect to be seen. In addition, much of the information made available in the past decade has been criticised for being inadequate in terms of content, presentational format, and timeliness,12 which will limit its effect.

Data providers have responded to these criticisms by producing increasingly sophisticated information, and policy makers have ratcheted up incentives and sanctions to encourage greater use of the data. However, if the only problems were timing, data quality, and ease of access to the information, then we would expect to have seen the increasingly sophisticated information that has been published over more than a decade in the UK at least starting to have some effect. There is little evidence that this is the case.11

There is an additional explanation that is worth exploring. It is possible that the expectations of advocates of public disclosure are unrealistic and that their underlying assumptions about rational consumerism are inappropriate where health decisions are concerned. There is some evidence that parents use school league tables to choose the best educational institutions for their children,13 but perhaps parents feel (and have been encouraged to feel) more empowered to make decisions about their child’s education than patients do about their health decisions. We suggest that patients would be more likely to use comparative information if its design and presentation were influenced by a modified set of assumptions.

How people use information to make decisions

The traditional economic theories underpinning expectations of how people respond to comparative data can be simply described in the following way: people are ready to make a decision, they search out data to inform this decision, they trade off the advantages and risks of each decision in a rational way, they usually prioritise important clinical outcomes (for example, the risk of dying) over other factors (for example, convenience), and they then choose the best performing organisation. The decision making process is seen as largely egocentric and detached from any wider context. The role of those governing the market is then simply to ensure that information is available—to operate in what has been described as an “information telling mode.”14

Alternative theories, however, see the decision making process as more complex and less rational. Such theories describe how people implicitly look for easy ways to make decisions, how they take short cuts, and how they are often overawed by data, particularly if data are complex and numerical.15 These theories regard decision making as primarily a social process rather than a cognitive one. People draw on past experiences and are influenced by their expectations and fears and by the views of others—particularly people they trust. They internalise all of these influences, make trade-offs, and do so in a way that might seem irrational to outside observers, yet has a strong internal logic. For example, a person might decide not to choose a highly rated hospital solely because their grandmother died there. The decision making process is complex, iterative, and emergent. It has been described as a process of knowledge construction14 and is philosophically and practically different from the information telling model. Evidence from psychological and sociological studies shows that the knowledge construction model is more relevant than the rational model to how people make decisions, particularly in the health arena (box).14 15 16 17

Evidence supporting social models of decision making

We know from the discipline of marketing that people are strongly influenced by what others think and do. People often use the judgment of others as a short cut in making their own decisions, a process known as social proof.16 Waiters, for example, understand social proof well: they will augment a tips box with their own money to persuade diners that others valued the service and that they, therefore, should too.

People often have a strong sense of loyalty and indebtedness, which can transcend hard evidence about performance. This is called reciprocation.17 Patients will choose to stay with the local general practice that their family has frequented for generations, despite evidence that better care is being provided elsewhere.

The cognitive psychology literature tells us that people rarely read all of the information that is presented to them.14 They tend to scan the information and often look for patterns or themes that will confirm their prejudices. Evidence that runs counter to their established views is more likely to be discounted. If people have a good opinion of their local hospital and see it ranked highly in a league table, this will quickly confirm their opinion. If it is ranked low, they are more likely to criticise the data than change their opinion.

People are inclined to ignore information that has little meaning to them.14 Most people, told that there was a 1.4% chance of dying from a coronary bypass graft operation in one hospital and a 2.2% chance in another hospital, would judge the risk to be small and the difference unimportant. Data that are important to clinicians and commissioners are often perceived differently by patients.

How to optimise use of performance data

How should policy makers, managers, and clinicians respond to these findings? Some might suggest that we should focus only on those who work in the health service and discount patients as important stakeholders. We believe that this would be wrong. The public has a clear right to know how well their health system is working, irrespective of whether they want to use the information. Improvement of the relevance and accessibility of the data should be seen as a good thing in its own right and may begin to engage more people in the future.

There are several ways by which currently available information could be made more useful.

Trusted source—It is important that users perceive the information as coming from a trusted source. Information providers that might be regarded as having ulterior motives, such as government or for-profit organisations, may not be perceived to be independent. Partnerships between the health service and respected professional bodies and academic institutions may be seen as more trustworthy, although we could find no published examples of this kind of partnership.

Relevance—Information needs to be of interest to the target audience. Risk adjusted mortality is important to cardiac surgical teams, but potential patients are more likely to be interested in outcome measures that reflect their ability to carry out their lives as they wish. The development of patient reported outcome measures (PROMs), a structured approach to asking patients for their views about their own health, has great potential in this respect. Condition specific measures (such as the Oxford Hip Score, to assess the outcomes of hip surgery), and generic measures (such as the SF-36 or EQ-5D health and wellbeing survey) are becoming a routine way of evaluating the outcome of interventions.18

Knowledge of the system—Most people who use the health service (indeed, many people who work in it) have little understanding of how the system is structured and how it works. For example, patients need to know what an acute trust or primary care organisation is, how funding flows, and how the referral process works before they can realistically judge comparative performance data. Patients also need to know whether apparent differences in performance between organisations have any practical relevance. The Easy Read section of the NHS Choices website provides some examples of how this contextual information might be presented.19

Attractive presentation—Information must be presented in a visually appealing way. There is much evidence to guide design decisions,14 and the health sector lags a long way behind other sectors, such as retail, in its targeting approaches. New web technologies, in particular, could be used to enable users to define what information they want and how they want it presented. This would help to meet the needs of disparate audiences and recognise the heterogeneity among potential users.

Multiple types of data—Different patients engage with different forms of information. Quantitative data are important, but stories—particularly ones based on personal experience—can be compelling and influential and may be used alongside numeric data. There will inevitably be a trade-off between the validity and the reliability of narrative information, but this is a trade-off patients seem to want to make. The award-winning website, based on in-depth qualitative interviews by researchers from the University of Oxford, shows both the popularity and the power of personal narratives.


In this paper we present a significant challenge to those who believe that providing information to patients to enable them to make choices between providers will be a major driver for improvement in the near or medium term. Patients might want to view health as something other than a commodity, and this presents a conceptual as well as a practical challenge to those responsible for designing and producing comparative performance information. We suggest that, for the foreseeable future, presenting high quality information to patients should be seen as having the softer and longer term benefit of creating a new dynamic between patients and providers, rather than one with the concrete and more immediate outcome of directly driving improvements in quality of care.


  • Contributors and sources: MM has had a research and policy interest in public disclosure for more than a decade and has advised governments around the world on how best to publish comparative information. VMcL commissioned work to better understand the impact of publishing information. This article arose from discussions about how little impact the established research evidence seems to have on the policy and practice of releasing comparative information. Both authors contributed to the development of the ideas in this article and to its writing. MM is the guarantor and corresponding author. Sadly VMcL died of pancreatic carcinoma before the final draft was completed.

  • Competing interests: MM has completed the Unified Competing Interest form at (available on request from the corresponding author) on behalf of both authors and declares they have support from the Health Foundation for the submitted work; no financial relationships with organisations that might have an interest in the submitted work in the previous 3 years; and no other relationships or activities that could appear to have influenced the submitted work.

  • Provenance and peer review: Not commissioned; externally peer reviewed.