Elections and WhatsApp


By Priyanshi Jain and Ashit Srivastava

WhatsApp, owned by Meta, is one of the most widely used messaging platforms. With the 2024 elections fast approaching, the problem of fake messages and misinformation is set to intensify. Deepfake videos, which use artificial intelligence (AI) to manipulate existing footage and create realistic-looking but entirely fabricated content, have become a significant challenge. Political figures have often been the targets of these videos, which create confusion and can potentially influence public opinion.

Fuelled by the surge in AI-driven deepfake videos featuring politicians, the Centre is now invoking a controversial law to hold WhatsApp accountable for the content shared on its platform. This move underscores the government’s determination to curb the dissemination of manipulated media and misleading narratives during the electoral period. WhatsApp reportedly responded that since it does not have access to the chats between users, complying with such demands would have an impact on users’ privacy.

The government is relying on the Information Technology (IT) Rules, 2021, which grant authorities the power to require platforms such as WhatsApp to disclose details about the first originator of a message. By seeking the identity of the first sender of these manipulated videos, the government aims to trace the source of the misinformation and hold those responsible accountable for disseminating misleading content. This initiative aligns with the broader objective of ensuring fair and transparent elections, free from the influence of falsified media, and emphasizes the government’s commitment to strengthening cybersecurity and ensuring the responsible use of social media platforms.

The IT Rules require online messaging companies to disclose the identity of the first sender of a specific message on their platform, a requirement commonly known as traceability. Such orders can be issued by a court or the government on grounds of national security, public order and foreign relations. However, the rules specify that these orders should be employed only when other, less invasive methods fail to identify the originator of the information.

WhatsApp, India’s most widely used messaging app, opposes this provision, citing threats to its end-to-end encryption system, which keeps communications private even from the company itself. WhatsApp questions the practicality of implementing the measure, expressing concerns about compromised security and the enabling of widespread surveillance. Determining the limits of government intervention is a pivotal challenge: the law must clearly outline the circumstances under which the government can request information about the first originator of a message. Ambiguity in the law could lead to misuse and the violation of citizens’ rights. Questions about the types of content that warrant government scrutiny, the threshold for intervention and the oversight mechanisms needed to prevent abuse must be meticulously addressed.

While the government’s intentions are rooted in maintaining the integrity of the electoral process, the move has sparked concerns related to privacy and freedom of expression. Critics argue that such actions might infringe upon citizens’ right to privacy and could lead to surveillance. Since the right to privacy was recognised as a fundamental right in KS Puttaswamy (2017), actions of this kind risk violating it. Moreover, the doctrine of proportionality laid down in Puttaswamy raises serious questions: is this step proportional to its objective, and is there truly no less intrusive alternative to the problem of spreading deepfakes? Any law encroaching upon this fundamental right must pass the test of proportionality, demonstrating a compelling state interest and a response proportionate to the perceived threat. Infringing a fundamental right should be the last resort, reserved for problems grave enough to justify the intrusion. In the instant matter, this consideration deserves great weight: is the present problem so grave as to warrant such steps by the government?

Privacy advocates are concerned that this move might set a precedent, allowing the government unprecedented access to individuals’ private conversations. Additionally, the lack of clear guidelines on how this information will be utilized and safeguarded raises apprehensions among legal experts and civil liberties organizations. Another challenge lies in defining the boundaries of the government’s powers. While the intention to combat misinformation is valid, the law must strike a delicate balance between national security interests and individual privacy rights. Determining the scope of government intervention without infringing upon citizens’ rights poses a complex legal challenge.

Another pressing concern revolves around the safeguarding of user data. If WhatsApp is compelled to divulge information about the first originator, ensuring the confidentiality and security of this data is paramount. Legal frameworks must be established to govern the handling, storage and disposal of such sensitive information. Preventing misuse and unauthorized access to this data is essential to maintain public trust in digital communication platforms.

Beyond legal complexities, the law faces challenges related to technology. Deepfake detection and attribution are intricate tasks that require sophisticated AI tools. Ensuring the accuracy and reliability of these technologies is essential to prevent false identifications and protect innocent users from unwarranted scrutiny. The law must incorporate provisions for regular technological assessments and updates to maintain efficacy and fairness.

Under Sections 4, 5, 6 and 7 of the new Digital Personal Data Protection (DPDP) Act, 2023, data processing requires the consent of the Data Principal (here, WhatsApp users). The government’s request for information from WhatsApp about message originators might therefore necessitate consent from the individuals whose data is being shared. The processing of personal data by WhatsApp, including sharing the first originator’s details, must be for a lawful purpose under the Act, which in the present case is debatable. As a Data Fiduciary, WhatsApp must give its users notice informing them about the processing of their personal data, including any sharing of data with the government for national-security reasons. Under the notice requirement, users have the right to know how their data is being processed and what rights they hold under the law.

At the heart of this controversy is the clash between privacy concerns and national security imperatives. WhatsApp, like many other messaging platforms, employs end-to-end encryption as a fundamental feature. This encryption ensures that messages, calls, photos and videos sent via the platform are scrambled into unreadable text on the sender’s device and only decoded back to their original form on the recipient’s device. Notably, even WhatsApp itself does not have access to the content of these messages due to this encryption. This security measure is designed to protect user privacy and prevent unauthorized access, including from the service provider.

The government asserts that the normal functioning of WhatsApp, including its end-to-end encryption system, will remain unaffected by this measure. WhatsApp vehemently opposes this move, citing potential threats to its end-to-end encryption system. The company argues that any attempt to trace the originator of a message inherently undermines the privacy protections offered by the encryption. WhatsApp contends that such a measure, if enforced, would create a backdoor, making it vulnerable to mass surveillance. The concern is that once this precedent is set, it might pave the way for more intrusive requests in the future, compromising user trust in the platform’s security.
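The principle described above can be illustrated with a toy sketch. This is a deliberately simplified shared-key XOR scheme for illustration only, not WhatsApp’s actual system (which is based on the far more sophisticated Signal protocol); the point is simply that the platform relaying the message holds only unreadable ciphertext.

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a pseudo-random keystream from the shared key (toy construction)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes) -> bytes:
    # Encryption happens on the sender's device: XOR the message with the keystream.
    return bytes(p ^ k for p, k in zip(plaintext, keystream(key, len(plaintext))))

decrypt = encrypt  # XOR is its own inverse, so decryption is the same operation

# Only the two endpoints hold the key; the platform merely relays ciphertext.
shared_key = b"secret shared only by sender and recipient"
message = b"meet at the rally at 5pm"

ciphertext = encrypt(shared_key, message)          # all the server ever "sees"
assert ciphertext != message                       # unreadable to the platform
assert decrypt(shared_key, ciphertext) == message  # readable only on the recipient's device
```

Because the relaying server never possesses the key, it cannot read, and in a real deployment cannot reliably attribute, the plaintext, which is why traceability mandates sit uneasily with this architecture.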

The controversy highlights the delicate balance between individual privacy rights and national security imperatives. While the government seeks to combat the spread of misinformation and deepfake videos, it must navigate this issue without unduly compromising user privacy. Striking a balance between ensuring the responsible use of encrypted platforms and safeguarding user confidentiality is essential. The debate underscores the need for transparent dialogue between technology companies, governments and civil society to develop nuanced solutions that address security concerns without infringing upon fundamental privacy rights. As the discussions continue, finding a middle ground that respects both privacy and national security remains a significant challenge in the digital age.

The Indian government’s attempt to combat misinformation through WhatsApp highlights the complex issue of balancing privacy rights with national security. The debate calls for clear regulations defining when platforms can disclose user information, ensuring minimal encroachment on privacy. Transparency and open dialogue between technology companies, governments, legal experts, and civil society are crucial. Regular technological assessments and updates are also essential for fairness. A multi-dimensional approach, including robust legal frameworks, ethical technology development and ongoing dialogue is needed to address this complex issue and set a precedent for future digital challenges.

Another aspect lurking beneath this issue pertains to personal data. Misinformation will be an ominous problem during the general elections of 2024, as it has been during the last few assembly and general elections. The brunt of misinformation is borne not only by India; it has been a common problem across Asian countries such as the Philippines. It is a social evil that needs to be subdued. The major problem with misinformation is not merely its abundance but its targeting: every WhatsApp message, GIF or other file shared during elections is directed at particular individuals on the basis of their inclinations. This leads to the creation of what Eli Pariser termed a “filter bubble”, an echo chamber of reinforcing content. The evil of misinformation can to a large extent be defanged if personal data is regulated. Since the personal data of Indian citizens is used to customize messaging and target Indian voters, regulating that data, even simply by requiring disclosure of the purpose for which it is collected, can stall the targeted sending of misinformation. If every Indian citizen knows where his or her personal data is being processed and which political party is processing it, targeted misinformation can be limited. The Digital Personal Data Protection Act, 2023, gives the data principal (here, the Indian voter) the right to know the whereabouts of their personal data.

A similar approach was adopted by the Information Commissioner’s Office (ICO) in the United Kingdom during elections there. The ICO called for accountability from political parties that were using voters’ personal data to target them. Better regulation of personal data can thus regulate, restrict and possibly prevent the spread of misinformation.

—Priyanshi Jain is a student at Dharmashastra National Law University, Jabalpur, where Ashit Srivastava is an Assistant Professor of Law