Amid the global AI boom, and with its negative effects increasingly surfacing online, governments are paying closer attention to how this new technology is regulated. Previously, the Ministry of Electronics and Information Technology (MeitY) required developers to obtain government permission before making AI technologies available to the public. In a major policy shift welcomed by AI developers, the Indian government has updated its guidelines: under the new rules, developers can now release generative AI models without first obtaining government approval.
The revised policy was made public on March 15. In place of the previous approval requirement, the amended advisory asks developers to take proactive precautions against emerging risks and to begin self-regulating.
By doing this, the government hopes to strike a balance between encouraging technological advancement and reducing any possible risks related to artificial intelligence.
Additionally, the government has urged AI developers to label their models' output, especially output that is prone to misuse, such as deepfakes. Any AI-generated content that may be unreliable should be clearly labeled as such.
With regard to deepfakes, the government mandates that developers tag such content or embed distinct metadata in it. Developers who violate these rules could face legal repercussions under existing legislation.
The developer community has welcomed this new approach, as it offers greater creative flexibility while encouraging the responsible development and use of AI.
“I see this [advisory] as a step in the right direction as there was a backlash on the previous advisory,” said Sharath Srinivasamurthy, associate VP of research at IDC. Since AI, and particularly generative AI, is a new technology, the laws governing it will continue to evolve.