Like any technology, AI has both good and bad sides. Around the world, scientists, researchers, and developers are trying to figure out how it can be applied for good. It is in this context that NASSCOM created the Responsible AI Resource Kit. Responsible AI brings together a variety of AI practices and makes them more practical and dependable.
The Resource Kit is the result of NASSCOM’s partnership with leading industry partners to drive the wide-scale adoption of ethical AI. It includes sector-neutral tools and guidance to help organisations use AI to grow and scale with confidence while putting user trust and safety first.
Akbar Mohammed, Architect at Fractal Analytics and a major contributor to the creation of the Responsible AI Resource Kit, said in an interview with INDIAai, “With NASSCOM, we brought together an industry-neutral framework that everyone can draw on to establish responsible AI practices.”
Putting together the Resource Kit
The Resource Kit took considerable time to create. Its responsible AI concepts were distilled from an analysis of diverse viewpoints, and the main challenge was making the kit applicable to any business.
Developing the kit required its creators to think from the perspective of practitioners rather than through an organisational lens. According to Sagar Shah, Client Partner at Fractal Analytics, “India is the first country to think about applying responsible AI. Countries all around the world, including the United States, will introduce new bills in the upcoming months. It will have an impact on Indian enterprises and developers. We’re attempting to inform organisations about this potential transition.”
Being accountable
Consciously taking responsibility is difficult. Even though top companies want to be more accountable, the analysts and developers doing the day-to-day work may not be diligent about the idea. According to Sagar, the team defined practices that clarify the responsible AI principles in order to ensure the ethical application of AI.
For instance, a practice known as “human centricity” ensures that people come first in all decisions. “Another is explainability, which makes sure nothing is left out,” said Sagar.
The Responsible AI Resource Kit makes these practices possible. It includes a guidebook published by NASSCOM, along with training programmes that will be offered shortly.
The kit also discusses the benefits of implementing Responsible AI, rather than focusing only on the drawbacks of failing to do so. In the coming months, the developers want to spread these ideas and methods across thousands of Indian businesses.
A partnership
In the long run, organisations and governments will require responsible AI practices, which will demand cooperation and support from business partners. Moreover, results after implementation can turn out differently than anticipated, prompting requests for modification. As a result, the resource kit will be continuously improved.
According to Fractal’s contributors, it is crucial to realise that businesses may fail in the future because they did not implement responsible AI practices, or implemented them in the wrong way.