Daily News
Google’s new policy guides creators on ethical use of AI-generated content

Google is formulating a policy to guide content creators on the ethical use of 'synthetic' or deepfake content. The policy, applicable to platforms like YouTube, requires creators to disclose when they have manipulated reality and mandates labelling for certain content, such as electoral advertisements generated with GenAI. It also enables creators to watermark their content using Google's tools.
Saikat Mitra, vice-president and head of trust and safety for Asia-Pacific at Google, said the company is on the right track in tackling this issue. On YouTube, it plans to place disclaimers about deepfakes or synthetic content in video descriptions and, in certain more sensitive cases, within the videos themselves.
Mitra highlighted that YouTube is working on a policy requiring content creators to disclose the use of synthetic media and the alteration of reality, with consequences for non-compliance. Mitra also mentioned that existing policies allow for the suspension of accounts and the removal of content violating YouTube’s compliance guidelines.
Concerns over synthetic content have escalated following the circulation of deepfake videos targeting public figures, including Prime Minister Narendra Modi and actor Katrina Kaif, on various social media platforms. To address these issues, IT Minister Ashwini Vaishnaw held discussions with industry representatives, including Google, on regulations targeting AI-generated fake content.
The Centre has begun work on draft regulations for deepfakes and expects industry stakeholders to propose actionable measures to prevent misuse. Vaishnaw indicated that these regulations could be implemented either through amendments to existing rules or through a new law specifically addressing deepfake content.