Digital Health: what does it mean for your rights and freedoms?


Provision of a national ID as a pre-condition for access

The inability to provide an ID card should never result in a denial of services. We are not alone on this point: the UN Special Rapporteur on Extreme Poverty and Human Rights has previously questioned mandatory ID requirements for accessing healthcare services.

However, the reality is that strict ID requirements effectively exclude people from receiving care. In countries like Chile, Uganda, India and Kenya, to name a few, providing an ID card is a pre-condition for accessing any public service. Even in normal circumstances, we see numerous cases of exclusion because many people are unable to register and obtain an ID card. If upheld, these requirements would be counter-productive to an effective response to this public health crisis.

And yet, some governments see digital identity as a solution to facilitate the provision of emergency healthcare and other public services. For example, the Jamaican government is using Covid-19 as justification to fast-track the creation of a national identification system to help it with aid and benefits distribution. It will be interesting to see how it proceeds given that in April 2019 the Constitutional Court of Jamaica struck down the country’s mandatory biometric National Identification and Registration Act and the National Identification and Registration System (NIDS), ruling that they violated constitutional privacy protections and had to be reviewed before the government could proceed.

Categorise to filter and exclude

Creating a database enables the categorisation of individuals according to selected criteria, depending on what data is processed: gender, ethnicity, race, nationality or legal status, amongst others. These categories can then serve as the basis for deciding whether access to the service itself is granted or not, as sketched below.
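A minimal illustration of this mechanic, using an invented schema and invented rules rather than any real system: once attributes such as ID possession or legal status are recorded in a database, a few lines of logic are enough to turn them into filters that deny access.

```python
# Hypothetical sketch: how recorded categories become access filters.
from dataclasses import dataclass

@dataclass
class Registrant:
    name: str
    has_national_id: bool
    legal_status: str  # e.g. "citizen", "resident", "asylum_seeker", "undocumented"

def grant_access(person: Registrant) -> bool:
    """Any recorded criterion can silently become a condition of access."""
    if not person.has_national_id:
        return False  # no ID card: automatic denial of the service
    if person.legal_status not in {"citizen", "resident"}:
        return False  # status-based exclusion
    return True

# Anyone unable to register, or placed in the 'wrong' category, is excluded:
print(grant_access(Registrant("A", has_national_id=False, legal_status="citizen")))       # False
print(grant_access(Registrant("B", has_national_id=True, legal_status="asylum_seeker")))  # False
```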

Migrants are often the most negatively affected: migrants, asylum seekers and refugees may avoid seeking healthcare for fear of coming forward and exposing their identities, as well as out of worry about costs. Fortunately, these implications have led some countries to suspend such policies. For instance, the Irish government gave assurances to “treat them [migrants] with dignity and with absolute privacy and patient confidentiality, as will their social work system, during this time of emergency.” Portugal decided to regularise the status of thousands of people with pending cases so they could access universal healthcare during the pandemic.

Automated discrimination and bias

The various uses of AI, from diagnostics to eligibility, raise concerns that outcomes seen in sectors such as criminal justice, policing and recruitment may be reproduced in the healthcare context, resulting in bias and discrimination that lead to inaccurate and unequal outcomes and predictions “of health outcomes across race, gender, or socioeconomic status”.

Enabling a 360-degree view and tracking

As with digital identity systems, many digital health initiatives enable a 360-degree view of an individual and the tracking of their transactions at different stages: be it through a unique health identifier, which provides a detailed historical record of every transaction between an individual and a healthcare provider, or through more sophisticated tools such as wearables or applications, which enable tracking not just of one’s interactions with a healthcare provider but of an array of other information, including location data and interactions with third parties.
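To make the linkage concrete, here is a minimal sketch with an entirely hypothetical record layout and identifier: once every dataset carries the same unique health identifier, assembling the full profile is a trivial join.

```python
# Hypothetical datasets held by different services, all keyed on one identifier.
visits = [
    {"uhid": "UH-001", "provider": "Clinic A", "event": "HIV test"},
    {"uhid": "UH-001", "provider": "Pharmacy", "event": "prescription refill"},
]
locations = [
    {"uhid": "UH-001", "source": "wellness app", "lat": -1.29, "lon": 36.82},
]

def profile(uhid: str) -> dict:
    """Join every dataset on the shared identifier: the 360-degree view."""
    return {
        "uhid": uhid,
        "health_history": [v for v in visits if v["uhid"] == uhid],
        "movements": [l for l in locations if l["uhid"] == uhid],
    }

print(profile("UH-001"))  # one query reconstructs the whole person
```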

Mission creep: once it’s there, it’s too tempting

Mission creep occurs when data is used for a purpose other than the one for which it was initially processed and, importantly, the one declared to the individual at the time of collection. Preventing mission creep needs to be built into the design and governance of health systems, and requires both technical and legal measures to stop data from being repurposed, for example as sketched below.
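One such technical measure, sketched here with a hypothetical API and invented purpose labels rather than any specific product, is purpose binding: every record carries the purpose declared at collection, and the system refuses reads for any other purpose.

```python
# Hypothetical sketch of purpose binding as a guard against mission creep.
class PurposeViolation(Exception):
    pass

class PurposeBoundRecord:
    def __init__(self, data: dict, declared_purpose: str):
        self._data = data
        self.declared_purpose = declared_purpose  # fixed at collection time

    def read(self, requested_purpose: str) -> dict:
        # Refuse any use other than the purpose declared to the individual.
        if requested_purpose != self.declared_purpose:
            raise PurposeViolation(
                f"data collected for '{self.declared_purpose}' "
                f"cannot be used for '{requested_purpose}'"
            )
        return self._data

record = PurposeBoundRecord({"fingerprint": "..."}, declared_purpose="patient_identification")
record.read("patient_identification")        # permitted: the declared purpose
try:
    record.read("criminal_investigation")    # mission creep attempt
except PurposeViolation as err:
    print(err)
```

Technical controls like this only hold if backed by legal measures; a government that controls the system can remove the check, which is why the document treats both as necessary.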

Some have pointed to heightened concerns that once data is available it is perceived and treated as a free-for-all, open to use for other purposes. Examples of mission creep have emerged around how biometric data collected for digital health purposes could be used for other ends, such as forensics or criminal proceedings. This was very much at the centre of the successful civil society challenge that prevented the Kenyan government from creating a biometric database of certain persons living with HIV, including school-going children, guardians, and expectant and breastfeeding mothers living with HIV. More recently, civil society successfully countered plans by the Kenyan national health authorities, funded by the Global Fund to Fight AIDS, TB and Malaria, to conduct a study of HIV and key populations that would have required the processing of biometric data. Both initiatives raised concerns that the mere existence of such data could be used by the government to target criminalised populations in Kenya.

Data-driven eligibility criteria

Governments around the world are increasingly making registration in national digital ID systems mandatory for accessing healthcare services as well as social benefits and other forms of state support. By virtue of their design, these systems inevitably exclude certain population groups from obtaining an ID and hence from accessing essential resources to which they are entitled. We have seen this play out in different ways: in Kenya, ID vetting has discriminated against ethnic and religious minorities; in Uganda, logistical failures resulting in delays and errors in the National Identity Card system (“Ndaga Muntu”) have prevented thousands, in particular women and the elderly, from accessing healthcare. In India, technical exclusion and the ubiquitous linking of the Aadhaar card similarly prevent individuals from accessing basic state support, including food rations. These issues have been replicated and amplified during the pandemic, with some governments introducing the presentation of ID as a pre-requisite for accessing Covid-19 vaccinations. In India, vaccine appointments are managed through a mobile app which requires linking with Aadhaar, potentially excluding millions from accessing the vaccine.

Automation

One area where we have seen new technologies like AI/ML being used is digital health. Whilst much of this remains at pilot stage and/or limited scale, we are increasingly seeing the use of AI for diagnostics, health research and drug development. Other uses of automation include clinical care (especially the identification of individuals at risk, self-management of care and home-based care) as well as health surveillance and preparedness for public health emergencies. Each raises concerns for human rights and access to healthcare. For example, resorting to AI for decisions without room for questioning, where a system is solely automated, inevitably leaves out the human element.

AI has been used to predict pregnancy amongst adolescents in Argentina, an incident which raised questions about the data used to train the AI, as well as concerns about the agency and autonomy of the individuals subject to this prediction model. Bias found in AI technology has also led to examples of algorithms used for health decisions resulting in less spending on Black patients than on white patients despite the same level of need.

And there is one area where significant advances have been made in terms of scalability: various countries are starting to use AI for health systems management and planning, where it complements personnel in undertaking tasks and supports complex decision-making to identify fraud and waste, assess staffing needs and resourcing, and map trends in patient behaviour, such as missed appointments. The use of AI for fraud management is a practice we have already seen in the welfare sector, where automated decision-making is used to assess eligibility in the first instance and then to identify and police those who may be abusing the system, subjecting those requesting assistance to arbitrary and invasive surveillance and monitoring. In some countries, such as the UK, these tactics for fraud detection are being formalised and institutionalised through mechanisms like the National Fraud Initiative.
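A minimal sketch of the structural problem with solely automated fraud scoring, using invented features and an invented threshold rather than any deployed system: proxies stand in for evidence of wrongdoing, and the score alone triggers consequences with no human review built in.

```python
# Hypothetical sketch of automated risk scoring for fraud flagging.
def risk_score(claim: dict) -> float:
    score = 0.0
    if claim["missed_appointments"] > 2:
        score += 0.4  # behavioural proxy, not evidence of fraud
    if claim["address_changes_last_year"] > 1:
        score += 0.3  # penalises precarious housing
    if claim["claims_this_year"] > 4:
        score += 0.3  # penalises chronic illness
    return score

claim = {"missed_appointments": 3, "address_changes_last_year": 2, "claims_this_year": 1}
if risk_score(claim) >= 0.6:              # solely automated decision
    print("flagged for investigation")    # no room for questioning, no human element
```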

Security and integrity concerns: expanding the exposure and attack surface

Technologically complex systems are inherently vulnerable to intrusions and data breaches, and there are numerous high-profile examples of digital health systems being breached. If some of the most well-resourced governments (and companies) in the world are unable to protect their most sensitive data sources, it is reasonable to assume that resource-constrained governments and humanitarian agencies will face significant challenges in appropriately securing databases that are, by their nature, ‘honey pots’ for attackers.

In the case of digital health systems, many concerns are associated with, and result from, poor security management, particularly at the stage of design and then of maintenance.

One common associated risk is data breaches. Data breaches in the health sector are common: the 2019 HIV data leak in Singapore, the leak of over 2.5 million medical records in the United States in 2020, and the leak of 500,000 medical records in France in early 2021, to name a few.

As explained above, health data is of a particularly sensitive nature. Therefore, any data breach affecting medical records is extremely serious from a human rights standpoint. But the negative consequences flowing from a breach can be manifold, diverse and far-reaching. In addition to the privacy harms that necessarily attach to such data leaks, the exposure of medical records can put patients’ security and welfare at risk. Civil society has long expressed concern about the potential dangers of poor data security policies, particularly in the context of reproductive health, which in many countries is heavily stigmatised. Those fears were borne out in 2020, when a young rape victim’s pregnancy details were leaked in Brazil, prompting anti-abortion protesters to block access to the hospital where she was due to terminate the pregnancy. And in January 2019, it was discovered that the HIV-positive status of 14,200 people in Singapore, as well as their identification numbers and contact details, had been leaked online following a breach of the HIV registry managed by the Ministry of Health.

What we have observed is that security is often an afterthought, rooted in a lack of digital literacy and uncertainty about the risks and potential harms. Ultimately, these difficulties flow from a lack of understanding of how the systems work, both hardware and software, of the implications of any decisions made, and of the data ecosystem in which we operate.

Tech industry and health data exploitation

It is important to note that there are few instances where governments are able to design, deploy and maintain such digital systems themselves. The complexity of these systems and the highly technical know-how required to create them has led to the growth of the ‘government-industry complex’ that manages and regulates social protection programmes like healthcare. Some of the concerning features of this ‘government-industry complex’ include:

  • poor governance of social protection policies, including the absence of open, inclusive and transparent decision-making processes;
  • limited transparency and accountability of the systems and infrastructure;
  • access tied to a rigid national identification system;
  • excessive data collection and processing;
  • data exploitation by default; and
  • multi-purpose and interoperability as the endgame.

Industry not only provides solutions to governments but, through the delivery of its own services, also feeds the broader data exploitation ecosystem. Industry has identified the health sector as fertile ground for innovation, which often leads to data exploitation. Companies play different roles: some provide tech “solutions” such as infrastructure, while the bigger tech giants, such as Google, Microsoft and Amazon, are involved at different, complementary levels, from infrastructure to data management, analysis and product development. Add to that the hidden industry of data brokers mining vast amounts of personal data for commercial purposes, and the web of actors interacting with health data becomes ever more intricate, opaque and untraceable.

Why the drive for digital?

As noted elsewhere, the health sector is one of many to have embraced the digital revolution. The drivers of this trend vary, but a common feature is pure tech-solutionism: the belief that technology is the solution to the socio-economic and political problems facing our societies.

Below we outline two of the main narratives driving the digital revolution in the health sector.

Facilitate access and enable empowerment

A huge driver behind digitalisation is the approach to technology as a tool for empowerment, whereby individuals would be in control while access is supposedly democratised. While that is possible, unlocking that promise requires building this approach and outcome into the design, deployment and maintenance of any digital solution, and that has not been the case in many sectors, including the health sector.

Across sectors, including the development sector, technology has been hailed as an essential tool to achieve the 2030 Agenda for Sustainable Development, and in particular SDG 3: “Ensure healthy lives and promote well-being for all at all ages”.

The World Health Organization (WHO) Global Strategy on Digital Health 2020–2025 emphasises digital technologies as an essential component and enabler of sustainable health systems and universal health coverage, and strategies from UNDP and other international and national actors place a similar emphasis.

Some of the arguments assume that shifting to a patient-based approach to care, where individuals manage their own health-related data via a variety of digital tools, including online portals or applications, results in the individual taking responsibility and having control. In practice, however, it is unclear how much this empowers an individual. An open question is at what stage of their experience the individual is “empowered”, and how empowerment is defined. Is it defined in terms of accessing information, being given the authority to make decisions about the care one receives, or having control over how one’s data is processed and over the decisions made on the basis of this information?

As noted by UNDP, this promise of empowerment can only start to make sense if digital health initiatives are “developed, implemented and monitored in a way that respects, protects and fulfils ethics and human rights.”

Efficiency, fraud prevention and saving money

With finite resources being allocated by governments to the management of healthcare (and other social services), demands and pressure on the health sector to provide better quality of care are increasing, leading to an accelerated exploration of ways to be more efficient. Within this umbrella of efficiency also comes the obligation to ensure funds are used wisely and are not wasted, whether because of the way the system operates or because of fraud.

Arguments of efficiency have justified the move towards digital solutions in various parts of the health sector, from supply chain management to improved diagnostics and the processing of eligibility and delivery of care.

It is important to be clear on whether, and if so where within the healthcare ecosystem, technology and the actors deploying it can deliver on promises of efficiency, and not to treat all applications in the same way, with the same expected promises and risks.

Whilst technology can be part of the solution to make aspects of our bureaucratic governance systems more efficient and transparent, the scope of its application and the way in which it has been deployed within this sector raise serious concerns about the outcome.

As noted by the UN Special Rapporteur on extreme poverty and human rights in his 2019 annual report on the digital welfare state: “…the introduction of various new technologies that eliminate the human provider can enhance efficiency and provide other advantages but might not necessarily be satisfactory for individuals who are in situations of particular vulnerability. New technologies often operate on the law of averages, in the interests of majorities and on the basis of predicted outcomes or likelihoods.”


