NEW DELHI: The third edition of the Indian Symposium on Machine Learning (IndoML) 2022, organised jointly by the Indian Institute of Technology Gandhinagar (IITGN) and IIT Kharagpur, concluded on December 17, 2022. IndoML drew 300 participants from 49 institutions across India.
The conference brings together academics and industry experts to share knowledge and practical insights on the interface between artificial intelligence (AI), machine learning (ML), and other disciplines of study. The symposium featured 17 talks, tutorials, and poster presentations by eminent speakers, researchers, and industry professionals from around the world, including Georgia Tech, Rice University, Northeastern University, UMass Amherst, University of Michigan, University of Texas at Dallas, NIST, IISc Bangalore, IIT Kharagpur, IIT Bombay, IIT Jodhpur, Google, Amazon, MerlynMind, Accenture, and Hewlett Packard, among others.
The three-day conference, which took place from December 15 to 17, covered a range of topics in AI and ML, including modelling climate change, robotics, the intersection of ML and economics, and sustainable responses to the growing problem of AI carbon emissions, among others.

Additionally, IndoML 2022 held a Datathon for students and early-career professionals, in which 107 teams submitted almost 600 entries. The top-ranked teams were invited to IndoML 2022 to present their work before a distinguished panel of ML academics and practitioners.

Ujjal Gadiraju, an assistant professor at Delft University of Technology in the Netherlands, speaking on “Human-Centered Artificial Intelligence – A Crowd Computing Perspective”, said: “Crowd Computing offers a promising means to overcome fundamental challenges in computation and interaction. It may herald a new generation of human-centered AI systems.”

Prof. Anshumali Shrivastava, founder of ThirdAI and associate professor at Rice University, was another expert presenter. Highlighting the potential of the dynamic sparsity approach, he said: “The barrier to Universal AI is the scale of models. Simple designs are acceptable, but we must also consider model size, because bigger models work better. The idea of dynamic sparsity is to use different subsets of parameters across the whole network for different inputs. This can make training billion-parameter models on CPUs much faster.”
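The intuition behind dynamic sparsity described above can be illustrated with a small sketch. The snippet below is not ThirdAI's actual implementation; it is a hypothetical example in which, for each input, only a small subset of a layer's output neurons is selected (here via a cheap random-projection score standing in for the locality-sensitive hashing used in practice) and only those neurons are computed:

```python
import numpy as np

def sparse_layer_forward(x, W, b, k, seed=0):
    """Forward pass through a dense layer that activates only k
    output neurons per input (dynamic sparsity sketch).

    x : input vector, shape (d_in,)
    W : weight matrix, shape (d_in, d_out)
    b : bias vector, shape (d_out,)
    k : number of active output neurons to compute
    """
    rng = np.random.default_rng(seed)
    # Hypothetical cheap scorer: a fixed random sketch of the weights.
    # Real systems (e.g. SLIDE-style training) use LSH tables instead.
    sketch = rng.standard_normal((W.shape[1], x.shape[0]))
    scores = sketch @ x
    # Pick the k highest-scoring neurons for this particular input.
    active = np.argsort(-np.abs(scores))[:k]
    out = np.zeros(W.shape[1])
    # Compute only the active columns: O(k * d_in) instead of O(d_out * d_in).
    out[active] = W[:, active].T @ x + b[active]
    return out, active

out, active = sparse_layer_forward(np.ones(8), np.ones((8, 16)), np.zeros(16), k=4)
```

Because a different subset of neurons fires for each input, the full parameter count stays large while the per-input compute stays small, which is what makes CPU training of very large models plausible.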