Every new piece of technology in the healthcare industry is subject to careful examination, and conversational AI is no exception. AI chatbots engage users in dialogue to exchange private health information, and that exchange raises numerous governance issues that must be carefully considered in order to encourage the appropriate use of chatbots in healthcare. Alongside the traditional difficulties of artificial intelligence, such as explainability and fairness, there are additional hurdles in the form of performance assurance, patient concerns, legality, privacy, and security.
To tackle these governance issues, the World Economic Forum convened a group of stakeholders who together developed the Chatbots RESET framework, which governs the appropriate application of conversational AI in healthcare.
There are two sections to the Chatbots RESET framework:
- A set of ten carefully considered ethical principles drawn from AI and healthcare ethics, interpreted in the context of using chatbots in healthcare; and
- Operationalization actions for each principle, in the form of suggestions to be put into practice at different phases of the chatbot deployment lifecycle.
The framework serves as a practical manual for three key constituencies to encourage the appropriate use of chatbots in healthcare applications: government regulators, healthcare providers, and technology developers.
Ten Principles for Responsible Chatbot Usage
The multistakeholder community of the Chatbots RESET initiative curated, interpreted, and developed ten principles for the use of chatbots in healthcare, drawing on AI and healthcare ethics.
- Non-maleficence and safety: Chatbot behavior must not cause preventable harm to people or lead to other unintended outcomes, such as dishonesty, addiction, or disrespect for diversity.
- Efficacy: Chatbots’ claimed services must be thoroughly validated for effectiveness in accordance with recognized international standards. Chatbot outputs must be tailored to the people for whom they are intended, taking into account the medical nature of the information being shared.
- Data protection: All information and past exchanges, including both intentional and inadvertent disclosures of personal information and information obtained with authorization, must be carefully stored and disposed of in accordance with all applicable privacy and data protection laws and regulations. If any data is recorded during a session and/or used across sessions, the chatbot user’s consent and/or any appropriate ethics committee approvals for research and data collection purposes must be obtained. Users of chatbots must be able to access and take ownership of their personally identifiable information. Chatbot data shall not be used for monitoring or penalizing users, or for unfairly and opaquely denying them access to healthcare coverage.
- Human agency: Chatbots should uphold the autonomy of the user, promote fundamental rights, and permit human supervision. Chatbots should honor patients’ autonomy to choose the interventions they need for their healthcare. When a user wants to communicate with a human agent, chatbots whose operational model incorporates real-time human monitoring must defer to that request (see the sketch after this list).
- Accountability: The entity within the organization (a person or group) in charge of chatbot governance must take accountability. Chatbot recommendations and conclusions must be auditable.
- Transparency: Users of chatbots must always be aware of whether they are interacting with a human, an AI, or a combination of the two. Chatbots must also clearly inform users of the system’s performance limitations, except where the chatbot’s intended purpose does not require such notification. Users must be notified immediately if the chatbot cannot understand them or cannot reply with confidence, except where this communication would conflict with the chatbot’s intended function.
- Fairness: Chatbots must not behave in a way that is systematically biased against people on the basis of their age, gender, religion, race, region, or language. If a chatbot “learns” from data, the target population should be represented in the training dataset.
- Explainability: Chatbot decisions and suggestions must be explainable in a way that makes sense to the people for whom they are designed.
- Integrity: Chatbots must only provide reasoning and responses that are supported by solid, ethically sourced information and data, as well as data gathered with a specific goal in mind.
- Inclusiveness: All reasonable steps should be taken to ensure that chatbots are usable by the intended user base, with particular attention to identifying and facilitating access for marginalized or disadvantaged groups.
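The framework's operationalization actions translate principles like these into concrete deployment behavior. As a purely illustrative sketch, the Python snippet below shows how a reply-handling step might operationalize three of the principles above: the transparency disclosure, a low-confidence fallback, and deferring to a user's request for a human agent. The threshold, trigger phrases, message texts, and function names are assumptions made for illustration; they are not part of the RESET framework.

```python
# Hypothetical sketch of a healthcare chatbot reply loop that operationalizes
# three RESET principles: transparency (AI disclosure), notifying the user
# when the bot cannot answer with confidence, and deferring to a request for
# a human agent. All names, thresholds, and messages are illustrative
# assumptions, not prescriptions from the framework.

from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an automated assistant, not a human."
CONFIDENCE_THRESHOLD = 0.75                          # assumed cut-off for a confident answer
HANDOFF_PHRASES = ("human", "agent", "real person")  # assumed trigger words


@dataclass
class BotReply:
    text: str
    confidence: float        # 0.0-1.0 score produced by the underlying model
    escalate_to_human: bool  # route the conversation to live support


def handle_turn(user_message: str, model_answer: str, confidence: float) -> BotReply:
    """Wrap a raw model answer with the governance checks described above."""
    # Human agency: defer immediately when the user asks for a person.
    if any(phrase in user_message.lower() for phrase in HANDOFF_PHRASES):
        return BotReply(
            text="Connecting you with a human agent now.",
            confidence=1.0,
            escalate_to_human=True,
        )

    # Transparency: flag low-confidence answers instead of presenting them
    # as certain, and offer escalation.
    if confidence < CONFIDENCE_THRESHOLD:
        return BotReply(
            text=("I'm not confident I understood that correctly. "
                  "Would you like me to connect you with a human agent?"),
            confidence=confidence,
            escalate_to_human=False,
        )

    # Normal path: return the model answer unchanged.
    return BotReply(text=model_answer, confidence=confidence, escalate_to_human=False)


if __name__ == "__main__":
    print(AI_DISCLOSURE)  # transparency disclosure shown before the first turn
    reply = handle_turn("Can I talk to a real person?", "Here is some advice...", 0.9)
    print(reply.text)     # -> "Connecting you with a human agent now."
```

In a real deployment these checks would sit alongside the data protection and auditability requirements above, for example by logging escalations for accountability review, subject to the user consent the framework calls for.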