The Ministry of Electronics and Information Technology (MeitY) has tightened India's restrictions on Artificial Intelligence (AI). In a recent advisory, MeitY stated that large platforms will need official government approval before releasing AI models, particularly those still in the testing stage. The advisory was issued in response to concerns about the potential misuse and reliability of AI systems, notably in the context of social media.
The rule, which went into effect on March 3, 2024, requires platforms to obtain prior authorization before deploying large language models (LLMs), other AI models, or any algorithms that are still in testing or deemed unreliable. Platforms must also clearly notify users that the output of these AI systems may be fallible.
Responding to questions about the advisory, Union Minister of State for Electronics and Information Technology Rajeev Chandrasekhar explained that the order does not apply to startups and primarily targets large platforms. The goal, Chandrasekhar emphasized, is to guarantee the safety and trustworthiness of India's online environment.
Chandrasekhar stressed the need for social media platforms to prioritize cybersecurity in India and to build a culture of responsibility online. He reaffirmed the government's commitment to using cutting-edge technologies, including artificial intelligence, in a reliable and secure manner.
Industry professionals have discussed the ramifications of MeitY's AI advisory, pointing out a number of opportunities and concerns related to the use of AI on social media platforms. Gaurav Sahay, Practice Head (Technology & General Corporate) at Fox Mandal & Associates, shed light on how the directive bears on data privacy, algorithmic bias, mental health, misinformation, cyberbullying, filter bubbles, and the ethical application of AI.
Data Privacy Issues: Because social media platforms frequently collect enormous volumes of user data, they raise privacy and security concerns. AI can help address these concerns through strong data-protection mechanisms such as encryption, anonymization, and user-controlled privacy settings (see the anonymization sketch after this list).
Algorithmic Bias: AI algorithms used by social media companies may unintentionally reinforce prejudices and produce unfair outcomes. To keep recommendation engines and content distribution equitable and inclusive, these algorithms must be regularly monitored and audited (a minimal audit sketch follows this list).
Mental Wellness & Health: Overuse of social media has been linked to detrimental effects on mental health, including elevated levels of loneliness, anxiety, and depression. AI-driven solutions can help by introducing features such as usage tracking and digital well-being tools, and by offering users personalized recommendations that encourage healthy online behavior.
Misinformation & Fake News: Social media sites have come under fire for their role in the dissemination of false information. AI techniques, including machine learning and natural language processing, can be used to identify misleading content, support fact-checking, and highlight reliable sources in order to stem the spread of misinformation.
Cyberbullying and Online Harassment: Social media sites can serve as fertile ground for many forms of online abuse. AI-powered content moderation tools enable platforms to take preventive action, automatically identifying and flagging hate speech, abusive behavior, and harassment to shield users from harm (see the moderation sketch after this list).
Filter Bubbles and Echo Chambers: Social media algorithms frequently prioritize content based on users' previous interactions, creating echo chambers and filter bubbles in which people are exposed only to information that confirms their existing opinions. By introducing serendipity and exposure to a variety of ideas and viewpoints, AI can help diversify users' feeds.
AI Ethics: Deploying AI on social media platforms requires attention to user consent, accountability, and transparency. AI ethics frameworks can guide the development and deployment of responsible AI systems that put users' rights and well-being first.
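To make the anonymization point above concrete, here is a minimal sketch in Python of pseudonymizing user identifiers with a salted hash before data reaches analytics or AI training. The function and field names are hypothetical illustrations, not part of MeitY's advisory or any platform's actual pipeline.

```python
import hashlib
import secrets

# Hypothetical example: replace direct identifiers with salted hashes
# so downstream AI/analytics never see raw user IDs or emails.
SALT = secrets.token_bytes(16)  # in practice, a securely stored secret

def pseudonymize(value: str) -> str:
    """Return a one-way salted hash of a direct identifier."""
    return hashlib.sha256(SALT + value.encode("utf-8")).hexdigest()

record = {"user_id": "alice@example.com", "post_text": "Hello!", "likes": 42}

anonymized = {
    # The direct identifier is replaced; quasi-identifiers would need
    # further treatment (e.g., generalization) in a real system.
    "user_id": pseudonymize(record["user_id"]),
    "post_text": record["post_text"],
    "likes": record["likes"],
}

print(anonymized)
```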
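Likewise, the call for algorithms to be "regularly checked and audited" can be illustrated with a simple fairness metric. The sketch below computes a demographic parity gap, the difference in positive-recommendation rates between two user groups, over toy data; the groups, log format, and threshold are assumptions for illustration, not a standard mandated by the advisory.

```python
from collections import defaultdict

# Toy audit log from a hypothetical recommendation engine:
# (user_group, was_recommended) pairs.
audit_log = [
    ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False),
]

def recommendation_rates(log):
    """Compute the positive-recommendation rate per user group."""
    counts = defaultdict(lambda: [0, 0])  # group -> [recommended, total]
    for group, recommended in log:
        counts[group][0] += int(recommended)
        counts[group][1] += 1
    return {g: rec / total for g, (rec, total) in counts.items()}

rates = recommendation_rates(audit_log)
parity_gap = abs(rates["group_a"] - rates["group_b"])

# Illustrative threshold; a real audit would set this per policy.
if parity_gap > 0.1:
    print(f"Flag for review: demographic parity gap = {parity_gap:.2f}")
```

Running such a check on a schedule is one simple way a platform could operationalize the "regular audits" the expert commentary describes.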
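Finally, the moderation and misinformation points both describe automatically flagging harmful content for action. A production system would use a trained classifier; the toy rule-based flagger below, with a hypothetical blocklist and threshold, shows only the flag-and-route pattern.

```python
# Toy moderation pass: score posts against a (hypothetical) blocklist
# and route anything at or above a threshold to human review. A real
# platform would use a trained classifier, not keyword matching.
BLOCKLIST = {"hateword1", "hateword2", "slur_example"}
THRESHOLD = 1  # number of blocklisted terms that triggers a flag

def flag_for_review(post: str) -> bool:
    """Return True if the post should be escalated to moderators."""
    tokens = post.lower().split()
    hits = sum(token.strip(".,!?") in BLOCKLIST for token in tokens)
    return hits >= THRESHOLD

queue = ["A perfectly normal post.", "contains hateword1 sadly"]
for post in queue:
    status = "escalate to moderators" if flag_for_review(post) else "publish"
    print(f"{status}: {post!r}")
```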
The advisory highlights the government's proactive approach to regulating the use of AI, ensuring that technology is used responsibly to protect user interests and foster a safe digital environment in India. Platforms operating in the country must comply with the mandate, and MeitY has called for prompt adherence and the timely submission of action-taken reports.