Governments around the world are creating laws and regulations to control how businesses use AI. But the industry has reached a stage where rule-making cannot keep pace with AI's capabilities, and waiting for government regulation may no longer be the most responsible option.
At an event hosted by the Deloitte AI Institute, two luminaries in AI—Debjani Ghosh, President of NASSCOM, and Kay Firth-Butterfield, Head of AI & Machine Learning and Member of the Executive Committee of the World Economic Forum—explored this conundrum. The event was moderated by Beena Ammanath, Executive Director of the Global Deloitte AI Institute.
Governments are developing AI policies in their own distinct ways. Leading AI nations such as China, India, and the US hold different views on how to integrate AI into their strategies, which underscores the need for a global framework. Government initiatives, however, are still at an experimental stage, so it is too early to declare any one approach the best.
AI regulation
Regulations are focusing on specific aspects of AI. One concern shared by many is bias in AI. Fairness, explainability, and accountability in AI are also high on Kay's list.
Debjani Ghosh offered a fresh viewpoint on AI legislation, grounded in the cultural norms of different nations. “Indians and Chinese view privacy in very different ways. There are no right or wrong answers here,” she said. She believes each nation's needs should be taken into account when creating AI rules.
Regulating the technology itself is rarely the solution, because it is people, not technology, who cause problems. Instead, restrictions should target how the technology is used. According to Debjani, “Countries should come together and create a list of use cases depending on their risk level and tailor according to their specific needs.”
The significance of self-regulation
Self-regulation must be a fundamental part of any organisation, according to Kay. Every enterprise today uses AI in some capacity, and self-regulation matters because it affects brand value: poor regulation can introduce bias into AI, which in turn damages brand quality and individual products.
Bias cannot be completely eliminated because it is ingrained in the human mind. It is crucial to recognise unconscious bias, from development through implementation, by holding up a mirror. The Responsible AI Resource Kit was an effort by NASSCOM to acknowledge the reality of bias and build a fix into every organisation's DNA. It is also important to determine the acceptable level of bias for each use case.
Achieving balance
The sandbox method is essential for striking the right balance: it lets companies experiment without fear of consequences. That balance will keep shifting as technology advances, so governments and organisations should give themselves room to try new things and learn.

Under this sectoral strategy, companies can use the sandbox to test their technologies, interact with one another, and assess the risks.
Effective AI governance
Success and trust go hand in hand, and citizens should be at the centre of inclusion efforts.
According to Debjani Ghosh, an AI governance model should have the following four elements:

An assessment of where the business currently stands

Strategy development as a phased, not simultaneous, process

Checkpoints to evaluate a strategy's performance after implementation

Management accountability
Governments and businesses using AI should ensure their technology is reliable, responsible, and inclusive. They should study established laws and norms and make their decisions in light of those learnings.