Arghya Pal is a Postdoctoral Researcher in the Department of Computer Science at Harvard University.
He earned his doctorate from the Indian Institute of Technology, Hyderabad.
Arghya is enthusiastic about discussing deep learning, computer vision, and machine learning.
INDIAai interviewed Arghya Pal to get his perspective on AI.
What attracted you to AI/ML?
I became interested in mathematics while preparing for the Joint Entrance Examination (JEE) in school. Though my JEE rank was not particularly exciting, that was when I discovered that mathematics is very visual! Perhaps that was the preparatory step for my further interest in AI/ML. Besides that, Richard Feynman's lecture "Can machines think?" and Satyajit Ray's science fiction featuring Prof. Shonku and Robu nurtured my mind to ask a bigger question: what should a machine think, understand, and explain? They eventually inspired me to pursue research in AI and ML.
How did you become interested in algorithms?
Honestly, my introduction to algorithms was an arranged one. Although I had to take algorithms as part of my coursework, it didn't take me long to become interested in research. Some of the most exciting algorithms I designed were Adversarial Data Programming (CVPR '18), Zero-Shot Task Transfer (CVPR '19), Synthesize-It-Classifier (CVPR '21), Guess-It-Generator (ACM MM '22), etc.
What does a typical day look like for a deep learning researcher?
Typically, I try to organize my day so that I can read some current news and trends in deep learning on Twitter or LinkedIn. That is followed by listening to some exciting lectures on YouTube or Coursera for motivation. And finally coding, followed by documenting the day's research and making a to-do list for the next day.
What are your everyday responsibilities as a postdoctoral researcher at Harvard Medical School?
Harvard is where I realized my lifelong dream of putting my work into practice. As part of my research, I interact with pioneering doctors and researchers, and from these discussions I have gathered valuable insights into the practical world. I also loved the challenge of designing machine learning models that could work on real-life medical equipment. Finally, I hope to pass on the same habits after joining Monash University as an Assistant Professor.
As a researcher, how did you identify an issue area? Could you describe the development process?
I regularly catch up with the proceedings of popular research venues such as NeurIPS, CVPR, ICML, ICLR, etc., to find newer research areas. But, as a researcher, I must confess that sometimes when we fall short of an issue area, we create some!
What, in your opinion, should the AI community concentrate on moving forward? What is the general view among researchers?
As I see it, one of the biggest challenges current AI/ML faces is the lack of integration of perception and reasoning. While ML and DL have achieved excellent state-of-the-art performance, integrating perception and reasoning into wholesome human-level intelligence remains the biggest challenge confronting ML.
What, in your opinion, are the most popular AI subdomains/subjects among your peers?
AI/ML has quite a broad span, and it would be unfair to single out some subjects or domains as popular. However, I find exciting work on transformers, generative models such as flows, and explainable machine learning (XAI), as well as on transfer learning, continual learning, causality, etc. I feel those research areas might hold the spotlight for the next couple of years.
In your opinion, what are the positive and negative aspects of the current state of AI research?
The positive aspect, as I see it, is that AI/ML has changed the way research is done. One can discover papers on arXiv, code on GitHub, and lectures, most of which are free. However, that is precisely the negative aspect of AI/ML: an abundance of resources but a lack of corresponding structure. I find that students are no longer interested in going deeper into a problem area but focus only on publishing in ML within a few months of writing their first code. Due to peer pressure or the prospect of immediate reward, they now consider only the trending topics without going sufficiently deep into the subject.
What advice do you have for people who wish to pursue a career in AI research? What are the best avenues for progress?
A student or an upcoming researcher could consider (i) finding a comfortable area and (ii) building a good habit of collaboration. A comfortable area would help one build a habit of reading, a sense of thinking, and a way of ideating new methods. One could follow NeurIPS/CVPR/ICML proceedings and venues to find contemporary research areas and understand working principles from the open-source code hosted on GitHub, Codeshare, etc.
A good habit of active collaboration is equally important. Follow your fellow researchers on Twitter and LinkedIn, read good Medium blogs, and make a habit of asking questions on Quora and Reddit. Always keep an eye out for summer schools and workshops (NeurIPS, CVPR, ICML, ACL, etc.), and participate in Kaggle competitions.
Could you provide a list of influential research publications and books?
- Computer Vision, Richard Szeliski
- Fundamentals of Computer Vision, Mubarak Shah
- Deep Learning, Ian Goodfellow
- Natural Language Processing, Dan Jurafsky
- Introduction to Non-linear Optimization, Amir Beck
- Machine Learning, Andrew Ng's Coursera course
- Linear Algebra Done Right, Sheldon Axler
Source: indiaai.gov.in