Deepfakes: UP BJP MLA Dr Rajeshwar Singh writes to Law Minister Meghwal seeking new law, stronger legal framework

Dr Rajeshwar Singh, the Uttar Pradesh BJP MLA from the Sarojini Nagar constituency of Lucknow district, wrote to Union Minister for Law and Justice Arjun Ram Meghwal on Saturday, seeking the enactment of new laws and the strengthening of the existing legal framework to regulate offences relating to digital forgeries such as Deepfakes.

Recently, Prime Minister Narendra Modi sounded the alarm on the increasing use of Deepfakes during an interaction with journalists, saying a Deepfake video of him doing the Garba was in circulation.

Dr Singh, the former joint director of the Enforcement Directorate, said some unscrupulous elements were making fake videos with the help of artificial intelligence to malign the reputation of individuals, thus violating their Fundamental Right to Privacy guaranteed under Article 21 of the Constitution of India. 

Sending a copy of the letter to Uttar Pradesh Chief Minister Yogi Adityanath, the BJP MLA said the existing legal regime was insufficient to protect citizens from harmful manipulation of this technology and that there was an urgent need to strengthen the legal framework to meet these challenges.

Noting that there was no specific legal provision recognising individual rights against the menace posed by Deepfakes and related technological advancements, Dr Singh said the existing laws that indirectly combat it include provisions on copyright violation, defamation and cyber offences.

He said that, keeping in view the extent of harm that can be caused by Deepfake technology, it was imperative to either enact new laws or suitably amend the existing ones to deter people from misusing the technology.

He said punishment under Section 66C of the IT Act should be immediately increased to seven years from the existing three years, and the fine should also be increased to Rs five lakh from the existing Rs one lakh. Similarly, punishment under Sections 420 and 468 of the IPC should be increased to 10 years from the existing seven years.

He further suggested inserting the creation of non-consensual Deepfake media as an illustration of the term 'alteration' contained within the definition of 'personal data breach' in the Digital Personal Data Protection Act, 2023.

To complement this, omitting the word 'significant' in Section 33(1) of the aforesaid Act would remove any doubt regarding the Data Protection Board of India's ability to impose hefty fines on data fiduciaries responsible for safeguarding the data of the injured principal, noted the former ED Joint Director.

He said Section 121 of the IPC (waging war against the Government of India) could be invoked to deter miscreants from spreading hate speech and online defamation. These crimes could also be prosecuted under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022, framed under the Information Technology Act, 2000.

He said Sections 153-A and 153-B (speech affecting public tranquillity) and Section 499 (defamation) of the Penal Code, 1860 could also be invoked in this regard.

He said the spread of false or misleading information could create confusion, undermine public trust, and be used to manipulate public opinion or influence political outcomes. These crimes could be prosecuted under Section 66-F (cyber terrorism) of the Information Technology Act, 2000 and the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2022.

Such crimes could also be prosecuted under Section 66-D (punishment for cheating by personation by using computer resources) and Section 66-F (cyber terrorism) of the Information Technology Act, 2000. Also, Sections 123(3-A), 123 and 125 of the Representation of the People Act, 1951 could be invoked to tackle the menace.

Regarding violation of privacy, obscenity and pornography, he said these crimes could be prosecuted under Section 66-E (punishment for violation of privacy), Section 67 (punishment for publishing or transmitting obscene material in electronic form), Section 67-A (punishment for publishing or transmitting material containing a sexually explicit act in electronic form) and Section 67-B (punishment for publishing or transmitting material depicting children in sexually explicit acts in electronic form) of the Information Technology Act, 2000.

Also, Sections 292 and 294 (Punishment for sale etc. of obscene material) of the Penal Code, 1860 and Sections 13, 14 and 15 of the Protection of Children from Sexual Offences Act, 2012 (POCSO) could be invoked in this regard to protect the rights of women and children. 

He further said that, where Deepfake-related offences are committed, punishment under Sections 292 and 294 of the IPC should be increased to seven years from the existing two years and three months respectively, and the fine should be increased to Rs five lakh.

Suggesting amendments to the IT Rules, 2021, he said they should provide that all content reported to be fake or produced using Deepfake technology must be taken down by intermediary platforms within 36 hours of being flagged by the authorities, failing which they would lose 'safe harbour' immunity and be liable to criminal and judicial proceedings under Indian law.

The 36-hour window for taking down such content should be further brought down to 12 hours. All offences in this regard should also be made cognizable and non-bailable immediately, he added.
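As an illustration only (this sketch is not from the letter or the IT Rules text; the function names and UTC timestamps are assumptions), the proposed takedown window amounts to a simple deadline computed from the moment the authorities flag the content:

```python
# Toy sketch of the proposed takedown window, assuming the deadline is measured
# from the time the authorities flag the content. Not from the letter or the
# IT Rules; only the 36/12-hour constants mirror the proposal above.
from datetime import datetime, timedelta, timezone

CURRENT_WINDOW = timedelta(hours=36)    # window proposed in the letter
PROPOSED_WINDOW = timedelta(hours=12)   # further reduction suggested by Dr Singh

def takedown_deadline(flagged_at: datetime, window: timedelta = CURRENT_WINDOW) -> datetime:
    """Latest time by which an intermediary must remove flagged content
    to retain safe-harbour immunity under the suggested rule."""
    return flagged_at + window

def is_compliant(flagged_at: datetime, removed_at: datetime,
                 window: timedelta = CURRENT_WINDOW) -> bool:
    """True if the content was removed within the window."""
    return removed_at <= takedown_deadline(flagged_at, window)

# Example: content flagged now and removed 20 hours later.
flagged = datetime.now(timezone.utc)
removed = flagged + timedelta(hours=20)
print(is_compliant(flagged, removed))                    # True under the 36-hour window
print(is_compliant(flagged, removed, PROPOSED_WINDOW))   # False under the 12-hour window
```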

He said these offences should also be made scheduled offences under the Prevention of Money Laundering Act. 

Dr Singh suggested that intermediary platforms and other concerned entities be directed to develop a technology, akin to disappearing messages, that could remotely delete content from mobile phones and other devices whenever such content was flagged by the concerned authorities and/or directed to be taken down from the platforms.

Keeping in mind the extent of the democratisation of all forms of technology, alongside its positive applications in fields such as education, he said a blanket ban on Deepfake technology was neither feasible nor desirable.

In consonance with the recommendations of the Lodha Committee in the context of online gaming, which stated that a blanket ban always has negative consequences, the need of the hour was strict regulation, the BJP MLA noted.

Noting that the present laws had many loopholes that left them unable to prevent Deepfakes from harming people and society in general, the BJP MLA warned that, if not addressed swiftly and firmly, the ensuing turmoil had the capacity to create social, political and economic unrest in the nation. This, in turn, could have a ripple effect, distorting the democratic and social fabric of the nation, he added.

Speaking about Deepfakes, he said the term originated in 2017, when an anonymous Reddit user calling himself 'Deepfakes' manipulated Google's open-source deep-learning technology to create and post pornographic videos.

He said a Deepfake, in essence, was content (video, audio or otherwise) that was wholly or partially fabricated, or existing content (video, audio or otherwise) manipulated to create digital forgeries.

Currently, dozens of applications exist in the market, which the users can download for nefarious purposes such as scams and hoaxes, celebrity pornography, election manipulation, social engineering, automated disinformation attacks, identity theft and financial fraud, noted the UP MLA. 

He said facial expressions could be manipulated frame by frame; pitch, timbre and language could be adjusted; and the identities of two or more people could be merged, for example by fusing the faces of two people or by giving a figure the face of one well-known person and the voice of another.
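To make the frame-by-frame nature of such manipulation concrete, here is a minimal sketch (not from the letter) that only locates the face region in every frame of a video, the region a face-swap or detection model would then operate on. It assumes OpenCV with its bundled Haar cascade, and "input.mp4" is a placeholder path.

```python
# Illustrative sketch only: locate the face region in each frame of a video.
# A Deepfake pipeline would replace or blend each detected region frame by frame;
# a detector would inspect the same regions. Assumes OpenCV (cv2) is installed
# and "input.mp4" is a placeholder file name.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

capture = cv2.VideoCapture("input.mp4")
frame_index = 0
while True:
    ok, frame = capture.read()
    if not ok:
        break  # end of video (or file could not be read)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    # Each (x, y, w, h) box is the region a manipulation or detection model
    # would act on for this particular frame.
    print(frame_index, [tuple(int(v) for v in box) for box in faces])
    frame_index += 1
capture.release()
```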

Deepfake technology has been around for only a few years, but its quality has improved dramatically in that time, a trend likely to continue and make its detection even harder, he added.

The letter said the weaponisation of Deepfakes and synthetic media had the potential to invade the personal lives of individuals. The prominence of non-consensual Deepfake pornography, which accounted for 96 percent of all Deepfake videos online, was its most pervasive illustration.

Other abuses like slut-shaming and revenge porn could have very serious consequences. Deepfakes would worsen this problem, not only because of their quantity and quality, but also because the technology allows a person to create a pornographic video of someone from a fully clothed photo.

Dr Singh said that in a prismatic society like India, the social consequences of such an incident were immense, distorting not just the public image of an individual but also their self-image. Referring to the obscene Deepfake video of actress Rashmika Mandanna that recently went viral, he said it was a malevolent expression of the desire to attain virality on the back of celebrities, a desire that could unfortunately be met and further inflamed by such technologies.

The unabated perpetuation of such reprehensible activities represented a failure of every citizen to fulfil their Fundamental Duty to renounce practices derogatory to the dignity of women provided under Article 51A of the Constitution of India, he added. 

He further noted the risks the technology posed to political stability and the economy. He said Deepfakes could be used to spread false or misleading information about political candidates, manipulate public opinion and influence the outcome of an election.

Dr Singh said that with the 2024 Lok Sabha elections due in India, unless Deepfakes and synthetic media were urgently regulated, they could lead to democratic disruption, causing social and political turmoil and putting the very integrity of our democracy at risk.

He further spoke of Deepfakes affecting the economic stability of the country by enabling market manipulation through means such as identity theft, voice cloning or face swaps. A video made in such a way could be used to impersonate an individual and initiate fraudulent transactions on their behalf, he said.

A voice clone or face-swap video could be used to impersonate a trusted government official or a family member of the victim and coerce a fraudulent payment. Stock manipulation could be carried out through fabricated events or falsified product endorsements, which could sway investor sentiment, added the BJP leader.

Talking about the impact on the freedom of the press, Dr Singh said the advent of Deepfake technology had created a worldwide infodemic, challenging the old notion that "seeing is believing". The tainting of the credibility of audiovisual evidence produced in journalism, and of its systematic investigation, was the obvious fallout of this new normal. Though often put to benign ends, Deepfakes and related synthetic media were also widely seen as posing a potentially serious threat to the right to freedom of speech and expression guaranteed under Article 19(1)(a) of the Constitution of India.

He said legislative, technological and media-literacy solutions had to be implemented together to tackle this multifarious issue. Measures like the creation of a 'Deepfake zoo' in the USA (a regulatory sandbox that encourages the private sector to share insights and collectively develop resilient technological solutions) needed to be instituted.

He further suggested the proliferation of 'radioactive' datasets of video content for easier detection of Deepfake videos.
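The intuition behind 'radioactive' data, sketched here purely for illustration (this is not from the letter; real radioactive-data schemes work in a model's feature space, and the seed and epsilon values below are arbitrary assumptions), is to embed a secret, keyed, imperceptible mark in published content so that material carrying it can later be flagged by correlating against the key:

```python
# Toy, pixel-space illustration of keyed, imperceptible "radioactive" marking.
# Not from the letter; real radioactive-data schemes operate in a model's feature
# space. SEED and EPSILON are arbitrary assumptions for the example.
import numpy as np

SEED = 1234       # secret key held by the dataset publisher
EPSILON = 4.0     # perturbation strength in pixel units (small, to stay imperceptible)

def make_mark(shape: tuple) -> np.ndarray:
    """Pseudo-random unit-norm mark derived from the secret seed."""
    rng = np.random.default_rng(SEED)
    mark = rng.standard_normal(shape)
    return mark / np.linalg.norm(mark)

def tag_image(image: np.ndarray) -> np.ndarray:
    """Embed the keyed mark at low amplitude before the image is released."""
    mark = make_mark(image.shape)
    return np.clip(image.astype(float) + EPSILON * np.sqrt(image.size) * mark, 0, 255)

def mark_score(image: np.ndarray) -> float:
    """Correlation of a suspect image with the secret mark; content carrying the
    mark scores noticeably higher than clean content of the same size."""
    mark = make_mark(image.shape)
    centred = image.astype(float) - image.mean()
    return float(np.dot(centred.ravel(), mark.ravel()))

# Example: the tagged copy of a random "photo" scores far above the clean one.
clean = np.random.default_rng(0).integers(0, 256, size=(64, 64)).astype(float)
tagged = tag_image(clean)
print(round(mark_score(clean), 1), round(mark_score(tagged), 1))
```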

The BJP MLA said the government could also limit public access to certain sufficiently advanced detectors (developed in collaboration with the private sector, through investments like those made by the Networking and Information Technology Research and Development programme in the US) and keep them in strategic reserve for catching Deepfakes capable of jeopardising national security.

He termed media literacy an effective initial buffer against the effects of Deepfakes, a robust, affordable and effective solution that would buy time for any impending legislation.

Read his full letter here:
