When Sam Altman visited India in June last year, he was asked what it would take for an Indian company to build something like ChatGPT. His response went viral because it appeared to downplay India's prospects in artificial intelligence: Altman suggested, in effect, that Indian companies would not succeed in creating an AI comparable to ChatGPT. Almost a year later, efforts to build Large Language Models (LLMs) tailored to India are still underway. But recent developments in this field, particularly over the past 10 days, suggest that anything akin to ChatGPT is still far from reality.
Two developments stand out: Krutrim, the AI tool that Ola introduced, and the advisory published on Friday by the Ministry of Electronics and Information Technology (Meity).
First, Krutrim. The homegrown, ChatGPT-like AI tool was opened to the public last month and hailed as India’s answer to LLMs. The hype subsided within days. As people began interacting with it, they quickly found that the tool was half-baked: it served up plenty of false information, frequently hallucinated (that is, made up facts), and on one occasion outright stated that it had been created by OpenAI. Many speculated that Krutrim was little more than a wrapper around ChatGPT. Ola considered the charge serious enough to respond with a tweet: “We investigated the issue and found the root cause to be a data leakage issue from one of the open-source datasets used in our LLM fine-tuning.”
Whatever the case, Krutrim’s launch and the controversy around its performance showed that it was not ready for prime time. If this is the best an Indian company can do in the LLM space, it is unlikely that we will be competing with ChatGPT or similar products anytime soon.
That said, let’s put things in perspective and be a little kind to Krutrim, because Google’s Gemini failed its own public test shortly afterwards. And how! Beyond its trouble with historical figures, Gemini sparked controversy in India when it presented various beliefs and opinions about Prime Minister Narendra Modi as fact.
The government brings down the hammer
So LLMs are not perfect, and if a giant like Google cannot get them right in these early days of the AI industry, we should not be too hard on smaller companies and startups like Ola. But in the wake of the Krutrim and Gemini debacles, the Indian government appears to be following the old adage that to someone with a hammer, every problem looks like a nail.
That is most likely why the Indian government has issued a stringent advisory that will probably shape how Indian companies go about building ChatGPT-like AI technologies in the country. And if something akin to Krutrim is the best Indian companies have managed in a largely unconstrained environment, doing better in a more restrictive one may not even be feasible.
On Friday, Meity published an advisory stating, “The use of under-testing, unreliable Artificial Intelligence model(s) (like) LLMs, Generative AI, software(s) or algorithm(s) and its availability to the users on Indian Internet must be done so with explicit permission of the Government of India.”
This caused a stir, with numerous technologists, specialists, and members of the AI industry denouncing the new advisory as harmful to India’s efforts to develop ChatGPT-like technology.
A few days later, on Monday, Rajeev Chandrasekhar, the Union Minister of State for IT, offered a clarification, arguing that the advisory targeted large platforms rather than startups. “Advisory is aimed at the Significant platforms and permission seeking from Meity is only for large platforms and will not apply to startups,” Chandrasekhar stated.
Not everyone is persuaded, however. Some believe that Indian startups and AI businesses will relocate their operations abroad, especially since Dubai is reportedly making significant investments to build an AI-ready policy and infrastructure environment.
“Bad move by India,” said Aravind Srinivas, CEO of Perplexity AI, citing a news article on the new advisory. He wasn’t alone. Bindu Reddy, CEO of Abacus AI, voiced a similar opinion: “India just kissed its future goodbye. The Indian government now has to give its consent before any business can use a GenAI model. That is, implementing a 7b open source model alone now requires permission.”
Several IT experts also pointed out that the advisory was likely to hold back Indian progress on AI technologies. “This Advisory may be well-intentioned but is an overreach. It interferes with innovation,” Mishi Choudhary tells India Today Tech. “The fact that an advisory needs to be clarified indicates that it was hastily written and was only meant to address a single problem brought on by Google’s Gemini. AI requires rules and restrictions, such as labelling, but not the ones that exist now.”
Choudhary is a technology lawyer, an online civil rights activist, and the founder of SFLC.in.
Neil Shah, Vice President at Counterpoint Research, agrees that caution is needed in AI development, but he also says the advisory will slow India’s progress in AI. “The Indian regulators are making sure to prioritise ethics driven AI over economics driven AI which could slow down the overall AI proliferation in India until the AI driven platforms get the output accurate without any misinformation and usable,” he stated.
Thomas George, president of CyberMedia Research, said the government must strike a balance between its concerns and the needs of industry. “It is imperative that there is a delicate equilibrium between fostering transparency and innovation in the AI sector while simultaneously safeguarding privacy and intellectual property rights,” he said.
Was Altman correct?
Returning to Sam Altman and his challenge to Indian tech firms and technologists: so far, Altman has been more or less validated, though it is too soon to conclude that Indian companies cannot build something along the lines of ChatGPT.
Other LLM initiatives are underway in India, some of them possibly more ambitious than Krutrim, and these things take time. But it is clear that building a dependable LLM takes more than a few months, and Indian companies face significant hurdles, the biggest of them around talent and infrastructure (large farms of high-end GPUs, for example). For context, Meta’s Mark Zuckerberg is buying 350,000 of Nvidia’s high-end H100 graphics cards this year, which will then be used to train AI models.
Srinivas highlighted these two challenges in a recent interview. But recent events suggest they are not the only obstacles India’s AI initiatives must overcome: policy has now joined the list. If the Meity advisory stays in place and is strictly enforced, it is quite possible that Indian AI businesses across the board will be severely impacted.