At the Indian Institute of Technology Hyderabad, Prof Vineeth N Balasubramanian heads what is likely the only dedicated Department of Artificial Intelligence across all the IITs. “Some of the other IITs have schools that combine different things, but IIT Hyderabad has an exclusive department of AI, which is about two years old,” he says in conversation with INDIAai. He is a veteran in the field, with nearly two decades of experience behind him.
Long considered a sub-branch of computer science, Artificial Intelligence today has a life of its own. “It includes perspectives from computer science, electrical engineering, maths, robotics, civil engineering, human-computer interfaces from design, and AI and ethics from the liberal arts. So there’s a reason to have an AI department that can do foundational AI, applied AI as well as interdisciplinary AI. Today, applying AI to any domain is a field by itself. That’s how the idea came about to have a department that can do all of this,” he says.
AI at IIT-H
IIT Hyderabad was the first institute to start a B.Tech. in AI. “The purpose of creating a B.Tech. program was to give it a more holistic perspective than considering AI as a sub-branch of computer science.”
Elaborating on the ongoing projects at the department, he says, “One of the major hubs for the 5G initiative in India is based out of IIT-H. We also have a large initiative on AI for agriculture, which is a collaboration with the University of Tokyo in Japan.”
He adds, “One of the recent technology innovation hubs, which the government instituted a couple of years ago, is based at IIT-H and focuses on autonomous navigation. About three months ago, we inaugurated a testbed for autonomous navigation that includes work on drones as well as self-driving vehicles. Some faculty also work on AI for surveillance, where we have access to the traffic cameras in Hyderabad city; they do things like checking whether a person is wearing a helmet. We also have a faculty member who does fraud analytics on live financial data in collaboration with the Government of Telangana.”
Indigenous algorithms for indigenous problems
Given the lack of labelled datasets, the more efficient way of attacking a problem is to come up with machine learning algorithms that can learn well even when very little labelled data is available. That is the approach Prof Vineeth’s group has taken.
“A majority of the datasets available today are created by different academic institutions around the world. Some datasets are created by, say, Microsoft or Facebook or Google and made available for everybody to work on. But the challenge is that, obviously, they’re going to work on problems that serve their business interests. So, if we want to solve a local problem in India, say, a public health problem or an education problem, somebody working in that space has to create the dataset and make it available.”
And creating datasets, especially those that can lead to tangible solutions, is not easy, he admits. “That’s the reason you need these algorithms that can work with very limited labelled data. So that’s one space where we have a lot of work.”
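He does not spell out a specific algorithm in the conversation, but one common family of techniques his remark points to is semi-supervised learning, where a model trained on a handful of labels pseudo-labels the confident remainder and retrains. Below is a minimal sketch in Python using scikit-learn’s SelfTrainingClassifier on synthetic data; the dataset, the 2% label fraction and the confidence threshold are illustrative assumptions, not details from the interview.

```python
# Minimal sketch: learning with very few labels via self-training.
# Illustrative only; not a method described in the interview.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.semi_supervised import SelfTrainingClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Pretend only ~2% of the training data is labelled: mark the rest as -1,
# the convention scikit-learn uses for "unlabelled".
rng = np.random.default_rng(0)
y_partial = y_train.copy()
y_partial[rng.random(len(y_partial)) > 0.02] = -1

# Self-training: fit on the few labels, pseudo-label confident points, refit.
model = SelfTrainingClassifier(LogisticRegression(max_iter=1000), threshold=0.9)
model.fit(X_train, y_partial)
print("accuracy with ~2% labels:", model.score(X_test, y_test))
```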
Decoding the AI black box
The current success of AI stems from its use in low-stakes applications like product recommendations on Amazon, face detection on Facebook, tweet sentiment analysis and machine translation on Google Translate. “But AI/ML is not used in the same way in risk-sensitive applications where lives are at stake. Maybe it is used in subsystems or a part of it, but not in decision-making, not the way it is used to recommend a product to a user,” he says.
Commenting on the limited use of AI in high-stakes applications, he says, “Medical imaging systems today, from a Siemens or a Philips that make CT scan machines, use machine learning perhaps to highlight some regions to help a radiologist. But it is not making the final decision that the CT scan is healthy and the patient can go back home.” This is what sets the stage for responsible and explainable AI.
“Everybody understands that explainability is important, especially with things like the GDPR in Europe. I think it’s not just a Western thing. Even a company that develops software for a bank in Europe today has to have explainability, because the customer in Europe can go back and sue the bank under GDPR regulations. So it’s something that everybody has to deal with.”
Most people add a layer of explainability to an ML model that is already deployed. “These are called post-hoc methods: you first predict, then you come up with a layer of explainability to reason about the prediction. The problem is that it becomes an issue of accountability. If something goes wrong, which team do you blame: the team that built the machine learning model, or the team that built the explainability model? It’s a big legal problem going forward.”
So what’s the solution? “You need solutions that bake explainability into the model building itself. Explainability should not be a post-hoc thought.”
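To make the contrast concrete, here is a minimal sketch in Python of the two routes he describes: a post-hoc surrogate explaining an already-trained black-box model, versus a model that is interpretable by construction. The choice of scikit-learn, a random forest as the black box and a depth-3 decision tree are illustrative assumptions, not methods from the interview.

```python
# Sketch: post-hoc explanation vs explainability baked into the model.
# Illustrative only; the interview does not prescribe these models.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_breast_cancer(return_X_y=True)

# Post-hoc route: a black-box model is deployed first...
black_box = RandomForestClassifier(random_state=0).fit(X, y)
# ...then a separate surrogate tree is fitted to mimic its predictions.
# If the surrogate misleads, which team is accountable: the one that built
# the model, or the one that built the explainer?
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# Baked-in route: the deployed model itself is a shallow, readable tree,
# so the explanation is the decision procedure itself.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(interpretable))
```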
Symbolic AI vs Connectionist AI
According to Prof Vineeth, factoring reasoning and explainability into AI gives us an opportunity to merge two approaches: classic symbolic AI (or GOFAI, “good old-fashioned AI”) and contemporary connectionism, which includes deep learning.
He illustrates his point by contrasting two AI programs, Deep Blue and AlphaZero. “We all know that AlphaZero is DeepMind’s chess-playing algorithm. But IBM did this with Deep Blue some 20 years ago. So why are we still touting AlphaZero in a big way? There’s a fundamental difference between these two paradigms. Deep Blue had no clue what a white square, black square, rook, queen or king was. The only thing it was given was the position; it then decided strategies and told you what the next moves on the board should be. But AlphaZero perceives the chessboard and then makes decisions.”
Looking at the entire information-processing pathway as a combination of perception and cognition, he says, “We solved, in some sense, the cognition problem with Deep Blue, but we did not know how to solve the perception problem. And deep learning, in a sense, has solved the perception problem.”
Putting the cognition piece back together with the perception piece makes for a very strong combination. “Deep learning solves a perception problem. Now, if you try to bring back the cognition piece, it requires reasoning, explainability, logic, which is actually the symbolic piece of it. Looking at it as combining connectionist and symbolic approaches from a fundamental perspective is very interesting.”
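As a toy illustration of the “cognition” half, the sketch below performs pure symbolic game-tree search (minimax), in the spirit of Deep Blue, on tic-tac-toe; in a combined system, a learned network would supply the perception and evaluation side, much as AlphaZero’s neural network evaluates positions inside its search. The game, scoring and position here are illustrative assumptions, not examples from the interview.

```python
# Toy sketch: symbolic "cognition" as exhaustive game-tree search (minimax)
# over tic-tac-toe. The board state is handed over directly, Deep Blue-style;
# a neuro-symbolic system would obtain it from a learned perception module.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    # 'X' maximises, 'O' minimises; returns (score, best_move).
    w = winner(board)
    if w:
        return (1 if w == "X" else -1), None
    moves = [i for i, s in enumerate(board) if s == " "]
    if not moves:
        return 0, None  # draw
    best_score, best_move = None, None
    for m in moves:
        board[m] = player
        score, _ = minimax(board, "O" if player == "X" else "X")
        board[m] = " "  # undo the trial move
        if best_score is None or (player == "X" and score > best_score) \
                or (player == "O" and score < best_score):
            best_score, best_move = score, m
    return best_score, best_move

board = list("X O  O  X")  # a toy mid-game position; " " marks empty squares
print(minimax(board, "X"))
```

On this position the search returns (1, 4): playing the centre square completes X’s diagonal, so pure search finds the win with no learned component at all, which is precisely the piece deep learning does not provide.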
Source: indiaai.gov.in