Last week, the World Health Organisation (WHO) released a guidance document titled Ethics & Governance of Artificial Intelligence for Health. The 165-page document is the culmination of two years of effort by international experts. The report outlines six cardinal principles that countries should strive to follow to ensure AI develops in the interest of the public it is meant to serve. It also shines a light on some of the tougher challenges in the field of AI, such as medical ethics, racial bias, identity protection and data integrity, and the various knock-on effects they can have, especially in low- and middle-income countries (LMICs). Since a considerable portion of the report was shaped by the COVID-19 pandemic, it also offers some interesting insights on the impact of AI technologies on managing infectious diseases and outbreaks.
For the benefit of readers, here is a quick run-through of the six key principles outlined in the report:
- Protecting human autonomy
- Protecting human well-being, safety and the public interest
- Ensuring transparency, explainability and intelligibility
- Fostering responsibility and accountability
- Ensuring inclusiveness and equity
- Promoting AI that is responsive and sustainable
While these six guiding principles are a step in the right direction, what is particularly interesting in the report is its treatment of some of the harder challenges that AI raises. With AI technologies in healthcare rapidly progressing from lab to market, the question arises whether some of the famed, tech-driven solutions we see today would actually pass an ethical and regulatory filter, or whether they need reinspection.
How much is too much?
One of the initial responses to the COVID-19 pandemic was an aggressive track-and-trace methodology. Some Asian countries like Taiwan, China and South Korea – presumably drawing from their prior experience with the SARS outbreak – already had a fairly good idea of how to deploy track-and-trace technologies effectively to contain the spread of COVID-19. Pretty soon, several countries fashioned their own versions of such applications; notable ones include Aarogya Setu in India and HaMagen in Israel. These apps predominantly use a person's location details to classify them as a contagion risk. Many would say this is intrusive, since personal details like one's residential address are now available on a government-backed application.

Taking this a step further, China's leading online payment platform Alipay established a QR-code system, the Alipay Health Code, which collects data and feeds it to an algorithm that draws automated conclusions about whether a person is a contagion risk. Even as vaccination programmes gather speed, more countries are pressing for immunity passports, where a person's entry to a nation will be determined by their inoculation status. Tajikistan has already done what a lot of countries are probably ruminating over silently: it has made COVID-19 vaccination compulsory for its citizens. Such policy decisions, backed by strong technology implementation and execution strategies, impinge on basic human rights. The WHO report states that many of these technologies lack scientific validation for their stated goals, could perpetuate discrimination and exclusion, and could lead to the development of digital health IDs without the express consent of the users involved.
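The report does not describe how the Alipay Health Code algorithm actually works, and the details have not been made public. Still, a minimal, purely hypothetical sketch of a rule-based "health code" classifier helps illustrate how an opaque, automated score can end up governing a person's freedom of movement. The field names, thresholds and rules below are illustrative assumptions, not the real system:

```python
# Hypothetical sketch of a rule-based "health code" classifier.
# The real Alipay Health Code algorithm is not public; every field,
# threshold and rule here is an illustrative assumption.

from dataclasses import dataclass


@dataclass
class PersonRecord:
    visited_high_risk_area: bool   # location history overlaps an outbreak zone
    positive_close_contacts: int   # number of confirmed-positive close contacts
    self_reported_symptoms: bool   # fever, cough, etc. from a self-assessment form


def health_code(record: PersonRecord) -> str:
    """Return a traffic-light risk code: 'red', 'yellow' or 'green'."""
    if record.positive_close_contacts > 0 or record.self_reported_symptoms:
        return "red"      # quarantine expected, barred from public spaces
    if record.visited_high_risk_area:
        return "yellow"   # restricted movement, testing recommended
    return "green"        # free movement


print(health_code(PersonRecord(True, 0, False)))  # -> "yellow"
```

Even this toy version shows the governance problem the report points to: the person being scored has no visibility into which data fed the rule, how the thresholds were chosen, or how to contest the result.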
Understanding Informed Consent: When Collecting Surplus Behavioural Data Backfires
A case study of consequence in the report is Dinerstein vs Google. In May 2017, the University of Chicago and the University of Chicago Medicine entered a partnership with Google to develop novel machine learning algorithms to predict medical events like unexpected hospital admissions. To this end, the University shared “de-identified” patient records with Google. One patient, Matt Dinerstein, filed a class-action lawsuit against the University and Google with several claims, including breach of contract, alleging prima facie violation of the US HIPAA. While the data shared was “de-identified”, service dates and free-text medical notes remained in the master dataset – enough, when triangulated against the location histories Google already holds through services like Google Maps, to accurately re-identify patients who sought treatment. Dinerstein claimed that the data was insufficiently anonymised and that Google didn’t do enough to procure patient consent. Although the district judge dismissed the lawsuit because Dinerstein couldn’t show sufficient damage caused by the partnership and its activities, the case definitely brings up the oft-discussed topic of informed consent.

Informed consent is, in principle, the recommended course of action – ask any lawyer. But in reality, how effective and accurate is this exercise? Today, healthcare data isn’t accessible to just healthcare providers: a slew of intermediaries has emerged, including technology companies, healthcare service providers, insurance companies and so on. The report emphasises the need for informed consent, as the healthcare delivery system is getting wider by the day, with more unrelated players being added to the mix. Transparency is essential for promoting trust among stakeholders, and there should be a thorough examination of how this can be ensured with patients in the loop.
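To make the re-identification risk concrete, here is a small, self-contained sketch of a linkage attack of the kind the Dinerstein complaint alleged: “de-identified” records carrying a service date are joined against a location timeline held by another party. The records, user IDs and timelines below are entirely invented, and no real dataset or Google service is queried:

```python
# Illustrative linkage (re-identification) attack on "de-identified" records.
# All records, users and timelines below are fabricated for illustration.

deidentified_records = [
    {"record_id": "A17", "service_date": "2017-03-02", "site": "UChicago Medicine"},
    {"record_id": "B42", "service_date": "2017-05-19", "site": "UChicago Medicine"},
]

# A location timeline such as a maps/history service might hold, keyed by user.
location_timelines = {
    "user_123": [("2017-03-02", "UChicago Medicine"), ("2017-03-03", "Office")],
    "user_456": [("2017-05-19", "UChicago Medicine")],
}

# Join on (date, place): a unique match re-identifies a "de-identified" record.
for rec in deidentified_records:
    matches = [
        user for user, visits in location_timelines.items()
        if (rec["service_date"], rec["site"]) in visits
    ]
    if len(matches) == 1:
        print(f"record {rec['record_id']} likely belongs to {matches[0]}")
```

The point is not that such a join was ever performed, but that stripping names while keeping precise dates leaves enough signal for anyone holding a complementary dataset to close the loop, which is why the report treats “de-identification” alone as a weak substitute for consent.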
Racial Bias
This topic remains hotly debated among AI researchers. One of the most explicit and shocking biases that machines keep throwing back at us has to do with the colour of skin. Machine learning systems have repeatedly failed to recognise Black people and others with darker skin tones. ML has consistently outperformed skin specialists in detecting potentially cancerous skin lesions – but predominantly in white-skinned individuals. The argument made here is that people with paler skin are more prone to skin cancer, so related data is more widely available. But that can’t be the baseline for further medical investigation and research. There is still a gaping hole in information pertaining to under-represented communities such as Black patients, their medical histories and backgrounds.
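One standard way this kind of bias is surfaced is by disaggregating a model's performance by subgroup rather than reporting a single aggregate score. The sketch below uses fabricated predictions and skin-tone labels for a hypothetical lesion classifier, purely to show the shape of such a check:

```python
# Minimal per-subgroup evaluation sketch. The predictions and skin-tone
# labels are fabricated; no real model or dataset is involved.

from collections import defaultdict

# (skin_tone, true_label, predicted_label) for a hypothetical lesion classifier
results = [
    ("light", 1, 1), ("light", 0, 0), ("light", 1, 1), ("light", 0, 0),
    ("dark",  1, 0), ("dark",  0, 0), ("dark",  1, 0), ("dark",  1, 1),
]

correct = defaultdict(int)
total = defaultdict(int)
for tone, truth, pred in results:
    total[tone] += 1
    correct[tone] += int(truth == pred)

for tone in total:
    print(f"{tone}: accuracy {correct[tone] / total[tone]:.0%} on {total[tone]} cases")
# -> light: 100%, dark: 50% in this toy example. A model that looks strong on
# aggregate accuracy can still fail badly on the under-represented group.
```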
The report discusses a vital technology and design approach called Design for Values, which is an explicit transposition of moral and social values into context-dependent design requirements. Machines still struggle to recognise abstract concepts, but when such values are factored into a system’s DNA at the design stage, the knock-on effects can be beneficial as the machine learns and reinforces them.
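As a rough illustration of what that transposition can look like in practice, the sketch below decomposes one abstract value into norms and then into concrete, context-dependent design requirements for a hypothetical contact-tracing app. The specific value, norms and requirements are assumptions chosen for illustration, not prescriptions from the report:

```python
# Hypothetical "Design for Values" decomposition: value -> norms -> requirements.
# The value, norms and requirements below are illustrative assumptions only.

design_for_values = {
    "value": "privacy",
    "norms": [
        {
            "norm": "data minimisation",
            "design_requirements": [
                "collect only coarse location (district level), never GPS traces",
                "delete contact-tracing records after 21 days",
            ],
        },
        {
            "norm": "informed consent",
            "design_requirements": [
                "opt-in screen before any health data leaves the device",
                "per-purpose consent toggles, revocable at any time",
            ],
        },
    ],
}

for norm in design_for_values["norms"]:
    reqs = norm["design_requirements"]
    print(f'{design_for_values["value"]} -> {norm["norm"]}: {len(reqs)} requirements')
```

The value stays abstract, but each leaf of the tree is something an engineering team can actually build and test against.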
We cannot possibly place the onus of decisions pertaining to human life on machines; in that area, they still need a lot of help. However, policymakers, technology developers and medical professionals must continue their dialogue on making machines better at understanding, processing and reinforcing core values that respect and, sometimes, defend human rights.
Source: indiaai.gov.in