The Ministry of Electronics and IT (MeitY) issued an advisory to artificial intelligence (AI) platforms and social media intermediaries on Friday, asking them to obtain government approval before launching AI products in the country.
Additionally, the ministry has asked platforms to ensure that biases arising from their AI models or platforms do not interfere with the Indian political process.
Rajeev Chandrasekhar, Minister of State for Electronics and IT, stated that “generative AI or AI platforms available on the internet will have to take full responsibility for what the platform does, and cannot escape the accountability by saying that their platform is under-tested.”
MeitY has asked all platforms concerned to submit an action-taken and status report to it within 15 days of the advisory.
The government recently took issue with Google’s AI tool, Gemini, for purportedly producing results biased against Prime Minister Narendra Modi.
To make it possible to trace misinformation or deepfakes back to their originators, the government has also asked intermediaries to ensure that any potentially misleading content is labelled with unique metadata or identifiers that reveal its origin and the intermediary involved. “Everything that is created synthetically by the platforms should have a metadata or some sort of identifier embedded in it,” Chandrasekhar stated.
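For illustration only, the sketch below shows one way such a provenance label could be embedded, assuming the generated output is a PNG image and using the Pillow library. The field names `ai_origin` and `content_id` are hypothetical; the advisory does not prescribe any specific metadata scheme.

```python
import uuid
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def tag_synthetic_image(src_path: str, dst_path: str, platform: str) -> str:
    """Embed a provenance identifier into a PNG's metadata (illustrative sketch)."""
    content_id = str(uuid.uuid4())            # unique, traceable ID for this piece of content
    meta = PngInfo()
    meta.add_text("ai_origin", platform)      # hypothetical field: which platform generated it
    meta.add_text("content_id", content_id)   # hypothetical field: identifier for tracing origin
    img = Image.open(src_path)
    img.save(dst_path, pnginfo=meta)
    return content_id

# Reading the labels back later, e.g. during a traceability check:
# Image.open("tagged.png").text -> {'ai_origin': ..., 'content_id': ...}
```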
According to the minister, non-compliance could expose intermediaries or platforms to legal consequences, including prosecution under the IT Act and other applicable criminal statutes. “You don’t do that with cars or microprocessors,” he added, asking why, for a technology as revolutionary as artificial intelligence, there should be no restrictions on what stays in the lab and what is released to the public.
The minister stated that the advisory applies to all digital platforms that enable the creation of deepfakes and image modification, to traditional internet intermediaries, and to platforms that are not strictly intermediaries but incorporate some AI.
Additionally, organisations hosting unreliable or inadequately tested AI platforms that wish to set up an online testing sandbox need to obtain government authorisation and designate the platform as “under-tested.”
Furthermore, a “consent popup” mechanism could be used to explicitly disclose to the public the “possible and inherent fallibility or unreliability” of the content created by these platforms.
He continued, “This advice is really hinting at our future regulatory framework, which is meant to create a safe and trustworthy internet.”