On March 1, 2024, the Ministry of Electronics and Information Technology (MeitY) reportedly issued an advisory (the “Advisory”) to “intermediaries” and “platforms” that host artificial intelligence (AI) tools, including generative AI models.
The term “intermediary” is defined under the Information Technology Act, 2000 (IT Act). Put simply, an intermediary is an entity that handles, or provides a service in relation to, an electronic record on behalf of another person. Entities in this category typically include telecom service providers, cloud-hosting providers, e-commerce platforms, etc. The extent to which the Advisory applies to digital “platforms” deploying AI-based models remains unclear, however, as the term “platform” is not clearly defined in either the IT Act or the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (Intermediary Guidelines).
While no official clarification or revision to this effect has yet been issued, the Central Government (Government) has stated in press briefings following the Advisory’s release that it intends to regulate only “significant platforms” and not startups.
ESSENTIALS OF THE ADVISORY

Restriction on the use of prohibited information / content:
“Intermediaries” and AI-based “platforms” must ensure that no AI tool on their “platform”, or used through their computer resources, permits users to deal with unlawful content, such as pornographic material, false or misleading information, or infringing material, or content that threatens the integrity or security of India or its relations with other countries. With India’s general elections approaching, the Government likely intends to curb the spread of misinformation and fake news online, particularly in the run-up to the polls.
Measures to prevent threats to the integrity of the electoral process:
AI-based “platforms” and “intermediaries” must ensure that the use of AI capabilities on their “platform” does not enable any bias, discrimination, or threat to the integrity of the electoral process. This implies that “intermediaries” may need to put appropriate processes in place to promptly identify and address any such occurrences, particularly where the content is generated using AI tools.
Express government approval before deploying under-tested / unreliable AI tools in India:
The Advisory requires “intermediaries” and AI “platforms” to obtain express approval from the Government before deploying and making available AI tools that are under-tested or unreliable. Since these terms are not defined in the IT Act or the Intermediary Guidelines, it will be interesting to see how they are interpreted. As noted above, the Government has clarified that the Advisory is intended to regulate “significant platforms” and does not apply to startups. However, since the legislation does not define “significant platform”, a formal clarification is needed on this point. The Advisory also requires “intermediaries” and AI “platforms” to provide appropriate disclaimers about the potential fallibility or unreliability of the output produced by these AI tools, so that users understand that the information may not always be accurate or dependable. Per the Advisory, this may be reinforced through “consent pop-ups” displayed to users.
Users must be made fully aware of the consequences of dealing with unlawful content:
The Advisory mandates that “intermediaries” and AI-based digital “platforms” inform all their users, through user agreements and terms of service, of the consequences of dealing with unlawful content on their “platform”. These consequences may include termination or suspension of access to user accounts, disabling access to or removal of non-compliant content, and legal liability.
Appropriate labeling of content susceptible to misuse as misinformation or deepfakes:
Per the Advisory, any intermediary that enables the synthetic creation, generation, or modification of any form of content (text, audio, visual, or audio-visual) that could be used as “misinformation” or a “deepfake” should label the content or embed permanent, unique metadata or an identifier into it. Such labeling would serve two purposes: (a) indicating that the content has been altered in the course of its distribution; and (b) identifying the source or first originator of the “misinformation” or “deepfake”.
Filing of an action-taken report:
All “intermediaries” are required to ensure compliance with the above measures and to submit an “action taken-cum-status” report within 15 (fifteen) days of the Advisory.
Comment
The Advisory reflects the Government’s intent to regulate the rapidly expanding field of artificial intelligence and ensure its ethical use, which is imperative given the current absence of a dedicated legal framework. It also suggests that the Government is inclined to address growing concerns about data security, privacy, and the effects of AI algorithms. While there is some dispute and ambiguity regarding its legal basis and applicability, there is no denying that the Advisory has had an impact on the sector. It will be interesting to see how the Government defines the standards that AI “platforms” and “intermediaries” must meet in order to provide services to Indian users, as well as the testing threshold for these models. The proposed Digital India Act, expected to be introduced in the latter half of this year, may also address these issues.