Delhi High Court receives petition seeking directions against inflammatory content targeting Rohingya community on Facebook


A PIL has been filed in the Delhi High Court by two Rohingya refugees seeking directions to Facebook (now Meta) to stop hateful and inflammatory content against the Rohingya community.

The PIL seeks directions to Facebook to stop ranking algorithms that encourage hate speech and violence against the minority community.

The High Court is likely to hear the petition filed by Mohammad Hamim and Kawsar Mohammed, who fled persecution in Myanmar and reached India in July 2018 and March 2022, respectively.

Advocate Kawalpreet Kaur, who represents the petitioners, said that misinformation, harmful content and posts originating in India targeting Rohingya refugees are widespread on Facebook.

The petitioners said that Facebook is now being used to dehumanise the Rohingya community in Myanmar, and that as the 2024 general elections draw close, there is a chance the misinformation being created could result in violence against the community.

The plea highlighted that Facebook content portrays the presence of Rohingya refugees as a threat to India, often referring to the group as 'terrorists' and 'infiltrators' and exaggerating the number of Rohingya who have fled to India.

The plea cites a 2019 study by Equality Lab into hate speech on Facebook in India, which found that 6 percent of Islamophobic posts were specifically anti-Rohingya, even though the Rohingya comprised only 0.02 percent of India's Muslim population.

The plea asserted that hateful content poses a threat to the lives of Rohingyas and, therefore, violates their right to life under Article 21 of the Constitution.

The petitioners further said that Facebook is in violation of Section 79(3) of the Information Technology Act read with Rule 3 of the Information Technology (Intermediaries Guidelines) Rules, 2011, which deals with the due diligence to be observed by an intermediary while discharging its duties.

“It [Meta] must also provide an India-specific report on hate speech content moderation. This report must clearly identify the content moderation decision trajectories where content is removed and where content is not removed. This report should also include specific numbers on how many users flagged reports were received, what part of user flagged reports were removed, how many of these were appealed and what amount of content was removed during the process of appeal and under what categories,” the plea demanded.