In a letter sent earlier this week to the Ministry of Electronics and Information Technology (Meity), BSA | The Software Alliance asked that the proposed deepfake amendments to the Information Technology (IT) Rules not take a “one-size-fits-all” approach.
BSA’s members include global software heavyweights such as Adobe, Cisco, Microsoft, and IBM, and the alliance operates in more than 30 countries.
The alliance has suggested that the proposed changes to the IT Rules account for differences in the roles and functions of intermediaries when setting requirements on the dissemination of deepfakes.
“Important service-level, technical, functional, and user-based distinctions ensure that all intermediaries do not have the same ability to address this issue,” the letter states.
In its recommendation, the industry body argues that the different services offered by various intermediaries may not pose the same level of risk.
Business-to-business and enterprise software services, for instance, pose little risk to user safety and public order, given the size of their user bases and the fact that they do not offer services directly to consumers.
The alliance has also recommended that the government consider “content authenticity solutions” as a means of combating the deepfake threat.
It has asked that the use of watermarks and other disclosure techniques be encouraged, to help people distinguish content produced by artificial intelligence (AI) from authentic content.
“It is crucial that platforms maintain content credentials, watermarks, and metadata rather than removing them. This will guarantee that everyone who consumes internet information can see it,” the letter says.
Meity published an advisory in December last year directing social media platforms and intermediaries to ensure that users do not violate the content restrictions imposed under Rule 3(1)(b) of the IT Rules, and to inform users about prohibited content.
Rule 3(1)(b) of the IT Rules lists eleven categories of user harms or prohibited content on digital intermediaries.
The ministry has indicated that it will amend the IT Rules to ensure stronger enforcement against deepfakes and misinformation if the current rules are not followed.