Artificial intelligence (AI) is rapidly transforming society, from improving healthcare to upgrading transportation systems. However, as AI's influence grows, governments and international organisations are becoming increasingly concerned with regulating its research and use. AI legislation is critical to ensuring that the technology is developed and deployed responsibly and ethically. Global patterns in AI legislation have emerged in recent years, with different governments and regions adopting different approaches to the challenges posed by the technology.
The Indian government sees AI as a “kinetic enabler” with the potential to improve governance. According to the Ministry of Electronics and Information Technology (“MeitY”), AI can be used to provide personalised and interactive citizen-centric services via digital public platforms.[1] Currently, the Indian government is not considering enacting standalone AI legislation.[2] This, however, does not mean there will be no regulation at all: AI is presently governed through existing laws on intellectual property, privacy, and cyber security. The Indian government has also launched a number of initiatives to help the country’s AI ecosystem develop. In 2018, the NITI Aayog released a discussion paper titled National Strategy for Artificial Intelligence, which identified healthcare, education, agriculture, smart cities, and mobility as priority sectors for AI deployment. MeitY has also established a number of Centres of Excellence (COEs) to support knowledge management and build capacity in new and emerging areas of technology.
Other regulators have sought to open a dialogue about the use of these technologies. For example, the Securities and Exchange Board of India (SEBI) has advocated using AI for data analytics and pattern identification to help the regulator detect fraudulent behaviour and irregular trading. The Reserve Bank of India (RBI) has likewise expressed interest in hiring specialists in AI, machine learning, and advanced analytics.
In recent consultations on the Digital India Act (“DIA”), which is being developed as a comprehensive legal framework for the digital ecosystem, MeitY indicated that the proposed law would regulate “high-risk AI systems” through legal and quality-testing frameworks, examining regulatory models, algorithmic accountability, zero-day threat and vulnerability assessment, AI-based ad targeting, content moderation, and related concerns. Even so, the proposal stops short of expressly addressing the issues raised by the use of AI.
The European Union, by contrast, intends to establish a comprehensive legislative framework to oversee artificial intelligence. The EU’s proposed Artificial Intelligence Act (“AIA”) envisions graded, risk-based regulation, i.e., obligations calibrated to an AI system’s potential for harm. Under this approach, AI systems regarded as a threat to people’s safety, livelihoods, or human rights fall into the “unacceptable risk” category and are prohibited; this includes systems capable of manipulating individuals or exploiting the vulnerabilities of specific groups, such as persons with disabilities. AI systems that are not prohibited are classified as “high-risk,” “limited risk,” or “minimal risk,” with progressively lighter obligations. The AIA remains subject to the EU’s legislative process and may be revised in the coming years.
Australia is another country wrestling with AI-related regulatory challenges. While no Australian legislation directly addresses AI, the Privacy Act Review Report 2022 (“Review Report”), if implemented, may have ramifications for its use. The Review Report recommends that companies be required to disclose how personal information will be used in automated decisions. Because the decision-making process in AI systems is opaque and the rationale behind their conclusions is not always evident, this “black box” effect makes automated decisions difficult to govern. The resulting lack of transparency and interpretability can pose ethical, legal, and regulatory problems, especially when it comes to ensuring fairness and eliminating bias in decision-making.
Profiling individuals on the basis of data such as behavioural patterns, location, education, and purchasing history is becoming increasingly common. Such profiling can inform decisions about product and service offerings, but it can also introduce bias in areas such as recruitment and surveillance. Furthermore, the inadvertent use of AI for data aggregation, including of health information, can unfairly affect the opinions businesses form about individuals and the choices and offers they extend to them, while the opaque nature of AI algorithms makes it difficult to discern how the data is being used. These challenges underscore the importance of regulatory and ethical safeguards around the use of AI for individual profiling and other purposes.
Governments and legal systems around the world are grappling with the problems posed by the black box effect and the use of AI in decision-making, particularly when it comes to ensuring fairness and accountability and preserving privacy rights.
Amid these debates about AI governance, Elon Musk and others have voiced concerns about the rapid development of AI, emphasising the threats it poses to human civilisation. There is growing concern that AI systems will soon outnumber and outsmart human minds. An open letter signed by Elon Musk, Steve Wozniak, and Yuval Noah Harari has proposed a moratorium on training systems more powerful than GPT-4, in order to make AI more transparent, safe, and loyal.