The Indian Council of Medical Research (ICMR) has published Ethical Guidelines for AI in Healthcare and Biomedical Research. These guidelines apply to all AI-based tools used in healthcare and to biomedical research involving human participants and/or their biological data.
Widely acknowledged uses of AI in healthcare include diagnosis and screening, drug development, preventive therapies, clinical decision-making, public health surveillance, complex data analysis, forecasting disease outcomes, behavioral and mental healthcare, and health management systems.
Because AI cannot be held responsible for its decisions, an ethically sound regulatory framework is needed to guide the development of AI technology and its use in healthcare. According to the ICMR guidelines, as AI technologies are further developed and applied to clinical decision-making, it is crucial to have processes in place that address accountability for errors and ensure safeguarding and protection.
The document highlights ten patient-centered ethical principles for AI applications: autonomy, data privacy, collaboration, risk minimization, equity, non-discrimination, fairness, validity, trustworthiness, and accountability and liability.
The autonomy principle emphasizes the need to obtain the patient's consent; the patient must also be informed of the risks to their physical health, mental health, and social well-being. The safety and risk minimization principle, in turn, calls for anonymized data that is delinked from global technologies in order to guard against cyberattacks.
The oversight body is responsible for evaluating the scientific validity and ethical soundness of any health-related research. It must ensure that the proposal is grounded in science and that all potential risks and benefits for the population being studied have been weighed.
Among the important topics emphasized in the guidelines are informed consent and the governance of AI tools in the health sector. The latter remains in its early stages, even in developed countries.