Google said on Wednesday that its cooperation with the Indian government on a multi-stakeholder discussion is in line with its commitment to jointly addressing the challenge of AI-generated fake content, particularly deepfakes, and to ensuring a responsible approach to AI, in light of the government’s tough stance on the issue.
Michaela Browning, vice president of government affairs and public policy at Google Asia Pacific, said, “We can ensure that AI’s transformative potential continues to serve as a force for good in the world by embracing a multi-stakeholder approach and fostering responsible AI development.” There is no magic bullet to stop AI-generated disinformation and deepfakes, Browning added; it necessitates teamwork, including candid communication, thorough risk assessment, and proactive mitigation techniques.
The corporation expressed its gratitude for the chance to collaborate with the government and to carry on the conversation, citing its forthcoming participation in the Global Partnership on Artificial Intelligence (GPAI) Summit as one example. “We know it’s imperative to be bold and responsible together as we continue to incorporate AI, and more recently, generative AI, into more Google experiences,” Browning stated.
In an effort to stop the proliferation of deepfakes, the Centre last week gave social media platforms a seven-day deadline to bring their policies in line with Indian laws. According to Minister of State for Electronics and IT Rajeev Chandrasekhar, deepfakes may be subject to action under the present IT Rules, including Rule 3(1)(b), which requires the removal of 12 types of content within 24 hours of receiving user complaints.
Going forward, the government will also prosecute such infractions under the IT Rules. Google says it is working to help address the associated risks in a number of ways.
The IT behemoth stated, “Identifying AI-generated content and equipping people with knowledge of when they’re interacting with AI-generated media is an important consideration.” In the coming months, creators of realistically altered or synthesized content, including content made with AI tools, will be required to disclose this to YouTube.
Google stated, “We will notify users about such content through labels in the description panel and video player.” “We’ll make it possible to use our privacy request process to request the removal of AI-generated or other synthetic or altered content that simulates an identifiable individual, including their face or voice, on YouTube in the coming months,” the statement continued.
Google recently updated its guidelines for election advertising, requiring sponsors to disclose any content that has been digitally created or altered. “In order to create workable solutions, we also actively collaborate with specialists, researchers, and legislators. We have donated a total of $1 million to the Indian Institute of Technology, Madras to create the first multidisciplinary centre for responsible AI,” Browning said.