On March 1, 2024, the Ministry of Electronics and Information Technology (MeitY) issued an advisory on the use of AI systems by online intermediaries in India. An advisory is usually a clarification that addresses a specific context and a particular component of the legislation. It follows an earlier advisory, issued in December 2023, that focused on deepfakes: AI-generated likenesses that can be exploited in misinformation campaigns.
The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (IT Rules) require online intermediaries to inform users that deliberately uploading false content is prohibited, and the 2023 advisory urged intermediaries to make this prohibition clear. The 2024 advisory, however, goes well beyond the scope of the IT Rules by effectively establishing an AI governance mandate, raising questions about both its validity and the direction AI governance will take in India.
A licensing regime for AI systems?
First, the 2024 advisory states that government approval is required before “unreliable” or “under-tested” AI models, large language models, generative AI, software, and algorithms can be made publicly available. In effect, the provision functions as a licensing regime for AI systems. Several problems arise here. One is that the advisory defines none of the terms it uses to refer to AI, making it difficult to tell what businesses can and cannot do.
Second, neither the IT Rules nor their parent legislation, the Information Technology Act, 2000, defines these terms, and neither gives the government the authority to establish a licensing programme for any kind of technology. This casts doubt on the validity of both the requirement and the regime as a whole.
Third, the 2024 advisory covers platforms that enable interactions or transactions between users. On March 4, however, Rajeev Chandrasekhar, the Union Minister of State for IT, clarified that it applies only to significant social media intermediaries, i.e., organisations with five million or more registered users, and not to startups. Since they publish content in response to user prompts rather than enabling user-to-user interactions, this would ostensibly exclude OpenAI’s ChatGPT and other AI-based text or image generators. Given that organisations of all sizes can cause AI-related harms, how effective can such an advisory be?
Fourth, the advisory names no government department responsible for enforcing such a programme, nor does it set out any criteria for assessing the reliability of AI systems.