NHS data breaches: a further erosion of trust
BMJ 2022; 377 doi: https://doi.org/10.1136/bmj.o1187 (Published 11 May 2022) Cite this as: BMJ 2022;377:o1187
- Natalie Banner, former lead for Understanding Patient Data
The BMJ has uncovered failures by NHS data users to comply with the terms of their agreements for managing and using data they received from NHS Digital.1 The audits did not uncover evidence of malicious use, of data being sold, or of harm to patients. But that does not detract from the fact that in several cases data were stored and used in breach of contracts that the public are expected to trust.
When NHS Digital—or its predecessor the Health and Social Care Information Centre (HSCIC)—was created in legislation, the most technically feasible way of providing data for commissioners and researchers was to send it, either via a download or a physical device, to a user. The security of that data could only be assured by the user signing a contract setting out what they could and couldn’t do with the data, and a promise to delete it after use. This has been a common approach for many years for health commissioning and research. It allows users to analyse the data with different software packages, or link it in useful ways to derive new insights.
Access to these data is not a free-for-all: research projects are often delayed for months because of the hurdles deliberately put in place to safeguard patients’ data. But these audit failures illustrate the weakness of an approach to data security that relies on disseminating data under contractual agreements, compliance with which may often go unchecked.
The pandemic has demonstrated how powerful access to NHS-held data can be: the RECOVERY trial of treatments for patients hospitalised with covid drew on routinely collected data, and the GP covid vaccine dashboard enabled targeted community interventions for the vaccine rollout. The health service should indeed make better, more effective use of the data it holds to improve services, identify and address health inequalities, and support research for new diagnostics and treatments. In my previous work with Understanding Patient Data, a collective convened by Wellcome to promote transparency and accountability, much of our research with members of the public backed this up: there was an expectation and desire for data to be used well, for the benefit of patients and the NHS, provided the right safeguards were in place—and these safeguards really mattered.2
But with thousands of users of NHS-held data, from commissioners to local authorities, academics to auditors, health tech start-ups to pharma companies, how can the NHS provide honest public assurance that people’s most sensitive data are being handled safely and respectfully?
Advances in data technology should, in practice, render obsolete the dissemination of data under solely contractual terms. We now have the capabilities to bring researchers to the data, rather than sending the data out to them. Secure Data Environments or Trusted Research Environments (TREs) can provide the assurance that data will remain in secure computational spaces and, crucially, every piece of analysis can be tracked. Audit logs can be built in and made visible by default, removing the need for the laborious process of manual audits checked against bespoke data agreements with each institution or user. The recent Goldacre Review set out much of the technical and policy detail necessary to bring these environments into the NHS mainstream.3
However, in the policy shift towards secure environments we should ensure we take the right lessons from the audit failures The BMJ has identified. The first concerns transparency. These audits were in the public domain, but it took The BMJ’s investigation to surface them, connect the dots, and draw out their implications for public trust in how NHS data are handled. Transparency needs to be meaningful if it is to provide assurance and enable scrutiny. This means investing in the accessibility of information about who gets access to what data, why, and how they are held to account. The technological complexity behind data use may be a black box to the average observer, but the decisions made about it shouldn’t be.
Secondly, trust cannot be automated. Investments in data infrastructure need to be accompanied by investments in engagement and public participation in decision making, if they are to be perceived as promising, positive developments for the health of the nation. The recent “deliberative wave” of citizen participation methods is well suited to decisions about our data, which are, after all, resources that are both deeply individual and societally valuable.4 These need to be built on for the long term, not as one-off justifications for decisions as data infrastructures are set up.
Finally, these audits are just one example of failures of NHS data policy and practice over the past decade. They represent a steady drip that, over time, erodes public perceptions that the NHS is a trustworthy steward of our data. If the government and NHS really are going to create a health service fit for a data-driven world, these failures should be a wake-up call that a lot of plumbing needs to be fixed first.
Competing interests: none declared
Provenance and peer review: commissioned, not peer reviewed
Natalie Banner is director of ethics at Genomics England. This piece is written in a personal capacity and does not represent the views of her employer.