Moving beyond tokenism in our approach to human rights in digital health

BMJ 2021; 375 doi: https://doi.org/10.1136/bmj.n2873 (Published 22 November 2021) Cite this as: BMJ 2021;375:n2873
- Rachael Hinton, independent consultant1,
- Ulla Jasper, governance and policy lead2,
- Siddhartha Jha, AI/digital programme manager2
Digital technology that depends on data and artificial intelligence (AI) is becoming a key resource for innovation and development to address healthcare challenges.1 However, some commentators have warned that a technocratic approach to digital technology in healthcare, which ignores the broader political, sociocultural, and economic context, will reinforce or introduce new threats to human rights.23 Ethicists, researchers, and human rights experts have also raised concerns that AI could now stand for “augmenting inequality” in the era of covid-19 healthcare.4 We hope the upcoming inaugural global Digital Health Week will continue to move the conversation forward to address these issues.
In current debates about the human rights implications of digital health, technology advocates and sceptics tend to choose a single human right to make the case for or against a technology. The benefits of digital technology, for example, are promoted as an enabling factor for achieving universal health coverage and people’s right to health.5 Sceptics, on the other hand, invoke privacy in discussions about data ownership, or non-discrimination when they call for unbiased datasets.6 Others argue that the intended beneficiaries of digital technology have a right to participate in its development and that it should reach those who need it most.7 Equally important is the principle that everyone should share in the benefits of scientific advancement.8
Since multiple human rights can be affected, positively or negatively, by digital technology, we need a more explicit and systematic practice to identify, understand, assess, and address its effects on end users, such as health service users and health workers.910 Building on an approach proposed almost three decades ago in the area of health policy making,1112 we argue that a human rights impact assessment should be integrated across the lifecycle of digital technology in healthcare. Recent guidance from the World Health Organization on the ethics and governance of AI for health also promotes human rights assessments alongside the application of traditional ethical principles.13 These kinds of assessments would help to avoid, mitigate, and remedy the adverse effects and unintended consequences of digital technology on human rights, and optimise its positive human rights benefits in support of broader health and development outcomes.14
Human rights impact assessments are new to digital health and the criteria for assessment will need to be further defined and adapted for this area. However, the following are examples of what a human rights impact assessment of digital technology in healthcare could highlight14:
● Assessing the purpose of a proposed digital health intervention, its effectiveness, and its contribution to reaching those with greatest need
● Consulting and involving users and affected stakeholder groups in the design and testing of health data models or data-driven products
● Assessing the data used in the development of diagnostic tools for sampling bias and unequal representations of groups, such as women or people from ethnic minority backgrounds
● Evaluating effects on privacy, such as user tracking and whether health data are sold to third parties without users’ knowledge or consent
A human rights impact assessment gains its strength from drawing on the long-established, legally binding body of human rights law. It is also supported by the United Nations Guiding Principles on Business and Human Rights, which set the expectation that businesses conduct human rights due diligence.15 By identifying rights-holders and duty-bearers, as well as their respective entitlements and obligations, and by establishing an objective legal standard of evaluation, such an assessment goes beyond the merely voluntary models of “ethical” governance by industry that are standard practice now. Such tokenistic approaches are not enough to protect human rights, nor to establish trust in the digital transformation of healthcare.10
We recognise that there are methodological challenges to human rights impact assessments, and they can be demanding in terms of time, resources, and expertise. It can also be challenging to identify the principal actor in a human rights violation and determine who is accountable for remedying it: is it, for example, the creator of an algorithm, the designer of technology, or the health system using it?
Despite these challenges, we must all do more to protect and promote human rights in digital technology for healthcare, whether as a donor funding a digital health programme; a policy maker developing a new digital health strategy; a designer, implementer, or user of a technology; or a civil society group calling for accountability for human rights violations in digital health. This will require increased dialogue between these groups, as well as the technical knowledge and capacity for all stakeholders to understand the role they play in protecting and promoting people’s health and rights. Governments must also ensure that their digital health policies, legislation, regulations, and enforcement measures are effective in addressing the risk of human rights violations and ensuring accountability.15
Globally accepted standards for promoting and protecting human rights in digital health do not currently exist. As a starting point, we encourage all stakeholders, including the companies that develop digital technologies and the organisations that deploy them, to integrate human rights impact assessments into standard practice to improve accountability for human rights in digital health.
Conflict of interest statement: RH is an associate editor at The BMJ. There are no other conflicts to declare.