The answer defines deepfake as a technology that “enables to allows alter/create the image or video media of a person with any other person’s image or video using artificial neural networks and deep learning/machine learning techniques.” This is vaguely correct, but deepfakes can also involve manipulating a person’s voice or the content of what they say, without using anyone else’s imagery at all. The measures listed by the minister seem quite rudimentary and give the impression that there’s a real lack of understanding of deepfake technology:
The first measure is to tackle manipulated content by removing objectionable content under a clause of the IT Act, 2000. It is not clear what counts as objectionable content. In the second point (and to an extent the third), Prasad suggests social media platforms have taken steps to curb fake news and have limited how many users a message can be forwarded to. It’s important to understand that deepfake content is not always ‘fake news’; it could have been created purely for entertainment. Moreover, social media platforms are themselves still struggling to formulate their own deepfake policies. In the third and fourth points, the government’s information security awareness site and the Cyberdost Twitter handle are cited as ways to spread awareness about cybersecurity. Neither of these resources contains any material on deepfakes.
The answer also notes the country is pushing to develop “a sustainable ecosystem” for AI development, which says nothing about how it plans to tackle manipulated content. Overall, the minister’s statement leaves many questions unanswered when it comes to deepfakes, and it suggests the government’s understanding of the topic is still embryonic. You can read Prasad’s answer to the deepfake problem here.