The Indian government has asked tech companies to seek its explicit approval before publicly releasing “unreliable” or “under-tested” generative AI models or tools. It has also warned businesses, as the nation prepares for a national election, that their AI systems should not produce results that “threaten the integrity of the electoral process.”
The move reverses India’s earlier hands-off stance: as recently as April 2023, the government had told Parliament that it was not looking to draft any legislation to regulate artificial intelligence (AI).
The advisory was issued last week by India’s Ministry of Electronics and Information Technology (MeitY) in reaction to right-wing criticism of Google’s Gemini over its answer to the question: “Is Modi a fascist?”
Gemini had responded that Indian Prime Minister Narendra Modi was “accused of implementing policies some experts have characterised as fascist,” citing his government’s “crackdown on dissent and its use of violence against religious minorities.”
In response, junior information technology minister Rajeev Chandrasekhar charged that Gemini’s answer violated Indian law. Google, according to Chandrasekhar, had apologized for the response and blamed it on an “unreliable” algorithm. “Sorry ‘unreliable’ does not exempt from the law,” he retorted. The company said it was taking the issue seriously and working to improve the system.
Big Tech companies in the West have frequently been accused of a liberal tilt, and those accusations of bias have carried over to generative AI products such as OpenAI’s ChatGPT and Microsoft Copilot.
Meanwhile, AI entrepreneurs in India worry that the government’s advisory could choke their fledgling sector with excessive regulation. Others fear that, with the national election about to be announced, the advisory signals an attempt by the Modi administration to choose which AI applications to approve and which to reject, giving it control over online spaces where these tools wield significant influence.
“Licence raj” feelings
The advisory does not, by itself, carry the binding force of law. However, lawyers told Al Jazeera that noncompliance could invite legal action under India’s Information Technology Act. “This nonbinding advisory appears to be more political posturing than substantive policymaking,” said Mishi Choudhary, founder of the Software Freedom Law Centre in India. “After the elections, there will be a lot more serious engagement. This allows us to see inside the minds of the decision-makers.”
However, Harsh Choudhry, co-founder of Sentra World, a Bengaluru-based AI solutions company, said the advisory already sends a message that could prove stifling for innovation, particularly at startups. “If every AI product needs approval, it appears like an impossible task for the government as well,” he remarked, adding with a laugh: “They might need another GenAI [generative AI] bot to test these models.”
Several other prominent figures in the generative AI space have also criticized the advisory as regulatory overreach. Martin Casado, general partner at the US investment firm Andreessen Horowitz, wrote on the social media platform X that the move was “anti-public,” “anti-innovation,” and a “travesty.”
Bindu Reddy, CEO of Abacus AI, wrote that with the new advisory, India “just kissed its future goodbye.”
In response to the criticism, Chandrasekhar posted a clarification on X, saying the advisory applied only to “significant platforms” and that startups would not be required to obtain prior approval before deploying generative AI tools on “the Indian internet.”
But considerable ambiguity remains. The advisory leans on vague phrases such as “unreliable,” “untested,” and “the Indian internet.” “There are clear indications of a hurried job when multiple clarifications are needed to explain scope, application, and aim,” Mishi Choudhary said. “The ministers are competent individuals, but they lack the resources to evaluate models and grant operating permits.”
“No wonder it [has] invoked the feelings of a licence raj from the 1980s,” she added, referring to the system of government permits required for business activity that hampered India’s economic growth and innovation until the early 1990s.
Even exemptions from the advisory for hand-picked startups may not come without a catch: their AI tools, too, can hallucinate and produce false or politically biased outputs. That, Mishi Choudhary said, means the exemption “raises more questions than it answers.”
Harsh Choudhry said he believed the government’s intent was to hold companies that profit from AI tools accountable for the inaccurate results those tools produce. However, he said, “it might not be the best course of action to take permission first.”
Deepfake shadows
India’s decision to regulate AI content will also have geopolitical repercussions, according to Shruti Shreya, senior program manager for platform regulation at The Dialogue, a think tank focused on tech policy.
“India’s policies can set a precedent for how other nations, especially in the developing world, approach AI content regulation and data governance,” she said, citing the country’s fast-expanding internet user base.
For the Indian government, analysts said, AI regulation is a difficult balancing act.
Millions of Indians are set to cast ballots in the national polls expected in April and May. The proliferation of readily available, often free, generative AI tools has already made India a haven for manipulated media, raising concerns about the integrity of the election. India’s main political parties continue to use deepfakes in their campaigns.
According to Kamesh Shekar, a senior project manager at The Dialogue who specializes in data governance and artificial intelligence, the advisory should be viewed as part of the government’s ongoing efforts to draft comprehensive legislation on generative AI.
Earlier, in November and December 2023, the Indian government had asked Big Tech companies to identify manipulated media, remove deepfake content within 24 hours of receiving a complaint, and take proactive measures against misinformation. That directive, however, set out no penalties for noncompliance.
But Shekar, too, said that requiring businesses to obtain government permission before releasing a product would stifle innovation. Instead, he suggested, the government could set up a “sandbox”: a live testing environment in which AI companies and stakeholders can verify a product’s reliability before a large-scale launch.
Still, not all experts share the criticism of the Indian government’s move.
AI technology is advancing at a pace that governments often struggle to match. Regulation is necessary, according to Hafiz Malik, a computer engineering professor at the University of Michigan who specializes in deepfake detection. Letting companies regulate themselves would be absurd, he argued, and the Indian government’s advisory was a positive beginning.
“Governments must impose regulations, but innovation shouldn’t be sacrificed in the process,” he said.
But in the end, Malik added, greater public awareness is what is most needed.
“Seeing something and believing it is now off the table,” Malik said. “The deepfake problem cannot be resolved unless the public is aware of the issue. Awareness is the only instrument available to address a highly complex problem.”