Multiple Case Decision
Criminal Allegations Based on Nationality
September 25, 2024
The Board has reviewed three cases together, all containing criminal allegations made against people based on nationality. In overturning one of Meta’s decisions to remove a Facebook post, the Board has considered how these cases raise the broader issue of how to distinguish content that criticizes state actions and policies from attacks against people based on their nationality.
Three cases are included in this bundle:
- FB-25DJFZ74: hate speech on Facebook
- IG-GNKFXL0Q: hate speech on Instagram
- TH-ZP4W1QA6: hate speech on Threads
Summary
The Board has reviewed three cases together, all containing criminal allegations made against people based on nationality. In overturning one of Meta’s decisions to remove a Facebook post, the Board has considered how these cases raise the broader issue of how to distinguish content that criticizes state actions and policies from attacks against people based on their nationality. In making recommendations to amend Meta’s Hate Speech policy and address enforcement challenges, the Board has opted for a nuanced approach that works for moderation at-scale, with guardrails to prevent negative consequences. Specifically, Meta should develop an exception to the relevant Hate Speech rule for narrower subcategories of content, using objective signals to determine whether the target of such content is a state or its policies, or a group of people.
About the Cases
In the first case, a Facebook post described Russians and Americans as “criminals,” with the user calling the latter more “honorable” because they admit their crimes in comparison with Russians who “want to benefit from the crimes” of Americans. This post was sent for human review by Meta’s automated systems, but the report was automatically closed, so the content remained on Facebook. Three months later, when Meta selected this case to be referred to the Board, Meta’s policy subject matter experts decided the post did violate the Hate Speech Community Standard and removed it. Although the user appealed, Meta decided the content removal was correct following further human review.
For the second case, a user replied to a comment made on a Threads post. The post was a video about the Israel-Gaza conflict and included a comment saying, “genocide of terror tunnels?” The user’s reply stated: “Genocide … all Israelis are criminals.” This content was sent to human review by Meta’s automated systems and then removed for violating the Hate Speech rules.
The third case concerns a user’s comment on an Instagram post in which they described “all Indians” as “rapists.” The original Instagram post shows a video in which a woman is surrounded by men who appear to be looking at her. Meta removed the comment under its Hate Speech rules.
All three cases were referred to the Board by Meta. The challenges of handling criminal allegations directed at people based on nationality are particularly relevant during crises and conflict, when they “may be interpreted as attacking a nation’s policies, its government or its military rather than its people,” according to the company.
Key Findings
The Board finds that Meta was incorrect to remove the Facebook post in the first case, which mentions Russians and Americans, because there are signals indicating the content is targeting countries rather than citizens. Meta does not allow “dehumanizing speech in the form of targeting a person or group of persons” based on nationality by comparing them to “criminals,” under its Hate Speech rules. However, this post’s references to crimes committed by Russians and Americans are most likely targeting the respective states or their policies, a conclusion confirmed by an expert report commissioned by the Board.
In the second and third cases, the majority of the Board agrees with Meta that the content did break the rules, with the references to “all Israelis” and “all Indians” indicating that people, rather than states, were being targeted. There are no contextual clues that the content was criticizing Israeli state actions or Indian government policies. Therefore, the content should have been removed in both cases. However, a minority of the Board disagrees, noting that content removal in these cases was not the least intrusive means available to Meta to address the potential harms. These Board Members conclude that Meta failed to satisfy the principles of necessity and proportionality in removing the content.
On the broader issue of policy changes, the Board believes a nuanced and scalable approach is required to protect relevant political speech without increasing the risk of harm against targeted groups. First, Meta should identify specific, objective signals that would reduce both wrongful takedowns and the amount of harmful content left up.
Without providing an exhaustive list of signals, the Board determines that Meta should allow criminal allegations when they are directed at a specific group likely to serve as a proxy for the state, such as the police, military, army, soldiers, government and other state officials. Another objective signal relates to the nature of the crime being alleged: atrocity crimes or grave human rights violations, for example, are more typically associated with states. This would mean that posts linking certain types of crime to a nationality would be treated as political speech criticizing state actions and remain on the platform.
Additionally, Meta could consider linguistic signals to distinguish between political statements and attacks against people based on nationality. While such distinctions will vary across languages, making the context of posts even more critical, the Board suggests the presence or absence of the definite article could be such a signal: saying “the Americans commit crimes” is not the same as saying “Americans commit crimes,” as the definite article may signal a reference to a particular group rather than to people in general. Words such as “all” (“all Americans commit crimes”), by contrast, could indicate the user is making a generalization about an entire group of people, rather than their nation state.
Having a more nuanced policy approach will present enforcement challenges, as Meta has pointed out and the Board acknowledges. The Board notes that Meta could create lists of actors and crimes very likely to reference state policies or actors. One such list could include police, military, army, soldiers, government and other state officials. For photos and videos, reviewers could look for visual clues in content, such as people wearing military uniform. When such a clue is combined with a generalization about criminality, this could indicate the user is referring to state actions or actors, rather than comparing people to criminals.
The Board urges Meta to seek enforcement measures aimed at user education and empowerment when limiting freedom of expression. In response to one of the Board’s previous recommendations, Meta has already committed to sending notifications to users of potential Community Standard violations. The Board considers this implementation an important step towards user education and empowerment on Meta’s platforms.
The Oversight Board’s Decision
The Oversight Board overturns Meta’s decision to take down the content in the first case, requiring the post to be restored. For the second and third cases, the Board upholds Meta’s decisions to take down the content.
The Board recommends that Meta:
- Amend the Hate Speech Community Standard, specifically the rule that does not allow “dehumanizing speech in the form of comparisons to or generalizations about criminals” directed at people based on nationality, to include an exception along the following lines: “Except when the actors (e.g., police, military, army, soldiers, government, state officials) and/or crimes (e.g., atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court) imply a reference to a state rather than targeting people based on nationality.”
- Publish the results of internal audits it conducts to assess the accuracy of human review and performance of automated systems in the enforcement of its Hate Speech policy. Results should be provided in a way that allows these assessments to be compared across languages and/or regions.
* Case summaries provide an overview of cases and do not have precedential value.
Full Case Decision
1. Case Description and Background
These cases concern three content decisions made by Meta, one each on Facebook, Threads and Instagram. Meta referred the three cases to the Board.
The first case involves a Facebook post in Arabic from December 2023, which states that both Russians and Americans are “criminals.” The content also states that “Americans are more honorable” because they “admit their crimes” while Russians “want to benefit from the crimes” of Americans. After one of Meta’s automatic classification tools (a hostile speech classifier) identified the content as potentially violating, the post was sent for human review. However, the review was automatically closed, so the content was not assessed and remained on Facebook. In March 2024, when Meta selected this content for referral to the Board, the company’s policy subject matter experts determined the post violated the Hate Speech Community Standard. It was then removed from Facebook. The user who posted the content appealed this decision to Meta. Following another stage of human review, the company decided content removal in this case was correct.
The second case is about a user’s reply in English to a comment on a Threads post from January 2024. The post was a video discussing the Israel-Gaza conflict, with a comment asking, “genocide of terror tunnels?” The reply said “genocide” and stated that “all Israelis are criminals.” One of Meta’s automatic classification tools (a hostile speech classifier) identified the content as potentially violating. Following human review, Meta removed the reply for violating its Hate Speech Community Standard. After the company identified this case as one to refer to the Board, Meta’s policy subject matter experts determined that the original decision to remove the content was correct.
The third case concerns a user’s comment in English on an Instagram post from March 2024, stating “as a Pakistani” that “all Indians are rapists.” The comment was in response to a video of a woman surrounded by a group of men who appear to be looking at her. Meta removed the comment after one of its automatic classification tools (a hostile speech classifier) identified the comment as potentially violating the Hate Speech Community Standard. After Meta selected this content to refer to the Board, the company’s policy subject matter experts determined the original decision to remove the content was correct.
In none of the three cases did the users appeal Meta’s decisions to the Board, but Meta referred all three.
According to expert reports commissioned by the Board, “accusations of criminal behavior against nations, state entities and individuals are prevalent on Meta’s platforms and in the general public discourse.” Negative attitudes towards Russia on social media have increased since the Russian invasion of Ukraine in February 2022. According to experts, Russian citizens are often accused on social media of supporting their authorities’ policies, including Russia’s aggression towards Ukraine. Russian citizens, however, are less often accused of being “criminals” – a word used more frequently in reference to Russia’s political leadership and the soldiers of the Russian army. According to linguistic experts consulted by the Board, the Arabic terms for “Americans” and “Russians” in the first case could be used to express resentment towards American and Russian policies, governments and politics, rather than against the people themselves.
Experts also report that mentions of Israel and Israelis in relation to genocide have spiked on Meta’s platforms since the beginning of the country’s military operations in Gaza, which followed the Hamas terrorist attack on Israel in October 2023. The discourse in relation to accusations of genocidal actions has intensified, especially after the January 26, 2024, order of the International Court of Justice (ICJ), in which the ICJ ordered provisional measures against Israel under the Convention on the Prevention and Punishment of the Crime of Genocide in the South Africa v. Israel case. Since its adoption, this order has been a subject of both criticism and endorsement. Experts also argue that accusations against the Israeli government “often become the basis for antisemitic hate speech and incitement” given that all Jewish people, regardless of citizenship, are often “associated with Israel in public opinion.”
Finally, experts also explained that generalizations about Indians related to rape are rare on social media. While the characterization of “Indians as rapists” has occasionally surfaced in the context of alleged sexual violence by Indian security forces in conflict areas, this rarely refers to “all Indians.” Most scholarly, journalistic and human rights-related documentation about these incidents clearly attributes abuses to the army and does not refer to the broader population.
2. User Submissions
The authors of the posts were notified of the Board’s review and provided with an opportunity to submit a statement. None of the users submitted a statement.
3. Meta’s Content Policies and Submissions
I. Meta’s Content Policies
Meta’s Hate Speech policy rationale defines hate speech as a direct attack against people – rather than concepts or institutions – on the basis of protected characteristics, including national origin, race and ethnicity. Meta does not allow hate speech on its platform because it “creates an environment of intimidation and exclusion, and in some cases may promote offline violence.”
Tier 1 of the Hate Speech policy prohibits “dehumanizing speech or imagery in the form of comparisons, generalizations or unqualified behavioral statements (in written or visual form)” about “criminals.”
Meta’s internal guidelines for content reviewers on how to enforce the policy define generalizations as “assertions about people’s inherent qualities.” Additionally, Meta’s internal guidelines define “qualified” and “unqualified” behavioral statements and provide examples. Under these guidelines, “qualified statements” do not violate the policy, while “unqualified statements” violate it and are removed. The company allows people to post content containing qualified behavioral statements, which can include references to specific historical, criminal or conflict events. According to Meta, unqualified behavioral statements “explicitly attribute a behavior to all or a majority of people defined by a protected characteristic.”
II. Meta’s Submissions
Meta removed all three posts for “targeting people with criminal allegations based on nationality,” as they contained generalizations about a group’s inherent qualities, as opposed to their actions. Meta noted that the statements are not explicitly limited to those involved in the alleged criminal activity, and do not contain further context to indicate the statements are tied to a particular conflict or criminal event.
When Meta referred these cases to the Board, it stated that they present a challenge on how to handle criminal allegations directed at people based on their nationality under the Hate Speech policy. Meta told the Board that while the company believes the policy “strikes the right balance between voice and safety in most circumstances,” there are situations, particularly in times of crisis and conflict, “where criminal allegations directed toward people of a given nationality may be interpreted as attacking a nation’s policies, its government, or its military rather than its people.”
While these cases do not constitute a request for a policy advisory opinion, Meta presented for the Board’s consideration alternative policy approaches to assess whether and how the company should amend its current approach of removing criminal allegations against people based on nationality, while allowing criticism of states for alleged criminal activities. In response to the Board’s questions, Meta stated that the company did not conduct new stakeholder outreach to develop the policy alternatives for these cases but instead considered extensive stakeholder input received as part of other policy development processes. It became clear to Meta that attacks characterizing members of nation states as “war criminals” could be leading to over-enforcement, and limiting legitimate political speech, since there tends to be a link between this type of attack and actions taken by states.
Under the first alternative, Meta envisaged introducing an escalation-only framework to distinguish between attacks based on national origin as opposed to attacks targeting a concept. This would require identifying factors to help with this determination such as whether a particular country is involved in a war or crisis, or whether the content references the country or its military in addition to its people. In other words, if the automated systems identify the post as likely violating, it would be taken down unless, following an escalation to Meta’s subject matter experts, the latter conclude otherwise. Meta added that if this type of framework is adopted, the company would likely use this framework as a backdrop to the existing concepts versus people escalation-only policy under the Hate Speech policy. This means that Meta “would not allow content, even if it determined the content was in fact targeting a nation rather than people, if it would otherwise be removed, under the concepts versus people framework.” Under the existing concepts versus people escalation-only policy, Meta takes down “content attacking concepts, institutions, ideas, practices, or beliefs associated with protected characteristics, which are likely to contribute to imminent physical harm, intimidation or discrimination.”
Meta noted that this new framework would enable the company to consider more contextual cues, but it would likely be applied rarely and only on-escalation. In addition, “as escalation-only policies are only applied to content escalated to Meta’s specialized teams, they may be perceived as inequitable to those who lack access to these teams and whose content is reviewed at-scale.”
Under the second alternative, Meta presented a range of sub-options to address the risk of over-enforcement at-scale. Unlike the first alternative, this would not require additional context for content to be considered for assessment and would apply at-scale. The sub-options include:
(a) Allowing all criminal comparisons on the basis of nationality. Meta noted that this option would result in under-enforcement by leaving up some criminal comparisons that attack people based on their nationality with no clear connection to political speech.
(b) Allowing all criminal comparisons to specific subsets of nationalities. Meta stated that a specific exception could be considered for subsets of nationalities likely to represent government or national policy (e.g., “Russian soldiers,” “American police” or “Polish government officials”), based on the assumption that these subsets are more likely to be a proxy for the government or national policy.
(c) Distinguishing between different types of criminal allegations. Meta noted that references to some types of crimes may be more frequently tied to states or institutions or appear to be more political than others.
The Board asked Meta questions on operational feasibility and trade-offs involved in the proposed alternative policy measures, and the interplay between existing policies and the proposed policy measures. Meta responded to all questions.
4. Public Comments
The Oversight Board received 14 public comments that met the terms for submission. Of these, seven were from the United States and Canada, six from Europe and one from Asia Pacific and Oceania.
The submissions covered the following themes: the implications of allegations of criminality against a whole nation in times of conflict, Meta’s Hate Speech Community Standard and Meta’s human rights responsibilities in conflict situations.
5. Oversight Board Analysis
The Board accepted these referral cases to consider how Meta should moderate allegations of criminality based on nationality, particularly how the company should distinguish between attacks against persons based on nationality and references to state actions and actors during conflicts and crises. These cases fall within the Board’s strategic priorities of Crisis and Conflict Situations and Hate Speech Against Marginalized Groups.
The Board examined Meta’s decisions in these cases by analyzing Meta’s content policies, values and human rights responsibilities. The Board also assessed how Meta should distinguish between speech that attributes criminality to individuals as members of a nationality and speech that attributes criminality to states. That distinction is adequate as a matter of principle, but its implementation is challenging, especially at-scale.
5.1 Compliance With Meta’s Content Policies
I. Content Rules
The Board finds that the pieces of content in the second and third cases violate Meta’s Hate Speech policy. The Board believes, however, that Meta’s decision to remove the content in the first case was incorrect, given there are signals indicating that the Facebook post is targeting countries and not their citizens.
After reviewing Meta’s Hate Speech policy, the Board recommends that the company reduce reliance on broad default rules and instead develop narrower subcategories that use objective signals to minimize false positives and false negatives on a scalable level. For example, the company should allow criminal allegations against specific groups that are likely to serve as proxies for states, governments and/or their policies, such as police, military, army, soldiers, government and other state officials. The company should also allow comparisons that mention crimes more typically associated with state actors and dangerous organizations, as defined by Meta’s Dangerous Organizations and Individuals policy, particularly atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court.
Individual Cases
The Board finds that the post in the first case did not violate the prohibition on “dehumanizing speech in the form of targeting a person or a group of persons” based on nationality “with comparisons to, generalizations or unqualified behavioral statements about ... criminals,” under Meta’s Hate Speech Community Standard. An expert report commissioned by the Board indicated that the references to crimes committed by “Russians” and “Americans” are most plausibly read as targeting the respective states or their policies, not people from those countries. Moreover, the post compares Russians with Americans. Given the role both Russia and the United States play in international relations and politics, the comparison indicates that the user was referring to the respective countries, rather than the people. The Board concludes that the post in the first case targets states or their policies and, therefore, does not contain dehumanizing speech against persons based on nationality in the form of a generalization about criminals – and should be restored.
The Board agrees with Meta that the content in the second and third cases did violate Meta’s Hate Speech Community Standard, as these posts do contain generalizations about “criminals,” which target persons based on nationality. The references to “all Israelis” and “all Indians” most plausibly target Israelis and Indians, not the respective nations or governments. Additionally, neither post contains sufficient context to conclude it is referring to a particular act or criminal event.
Although the content in the second case was posted in response to another Threads user’s post containing a video discussing the Israel-Gaza conflict, the word “all” in reference to Israelis is a strong indication that the people as a whole are being targeted and not just the government. Moreover, while the content also includes a reference to “genocide,” there are no contextual signals unambiguously indicating that the user intended to refer to Israel’s state actions or policies, rather than to target Israelis based on their nationality. Similarly, no such context is present in the third case: the fact the user is commenting on an Instagram video in which men look at a female figure indicates the user is likely to be targeting people. The men in the video have no apparent connection to the Indian government. Additionally, there is no indication the user was criticizing the Indian government’s policies or actions on rape. In the absence of unambiguous references serving as criticism of states, the Board concludes that the removal of content in the second and third cases was justified under Meta’s Hate Speech policy.
Broader Issues
Turning to the broader issues raised by the three cases, the Board acknowledges the challenges in distinguishing content criticizing state actions and policies from attacks against people based on nationality, especially during crises and conflicts. Thus, the Board believes that Meta should implement nuanced policy changes that result in relevant political speech being protected and left on Meta’s platforms, without increasing the risk of harm against targeted groups. In the Board’s understanding, this requires a scalable approach with guardrails to prevent adverse far-reaching consequences.
The Board recommends that Meta identify specific, objectively ascertainable signals that reduce false positives and false negatives in important subgroups of cases. For example – and without purporting to provide an exhaustive list of such signals – the Board is of the view that nationality-based allegations of criminality should generally be allowed when they are directed at specific groups that are likely to serve as proxies for states, such as soldiers, the army, the military, police, the government or other state officials.
Another objective signal relates to the nature of the crime alleged in the challenged post. Some crimes, particularly atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court, are typically associated with state actors and dangerous organizations, while other crimes are almost exclusively committed by private individuals. Accordingly, posts that attribute the former type of crime to a nationality, when followed by references to state actions or policies, should be treated as political speech criticizing state action, and left on the platform, while the latter should generally be removed.
Additionally, certain linguistic signals could serve a similar function of distinguishing between political statements and hate speech. While recognizing that inferences from such signals may vary from language to language, the Board suggests that the presence or absence of a definite article is likely to have significance. To say that “the Americans” commit crimes is not the same as saying that “Americans” commit crimes, as the use of the definite article may signal a reference to a particular group or location. Similarly, words like “all” are strong signals that the speaker is making generalizations about an entire group of people rather than their nation state. These distinctions may vary across languages, making contextual interpretations even more critical.
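To make this concrete, below is a minimal, illustrative sketch of how such linguistic cues might be checked programmatically. It assumes English-language text; the patterns, function name and signal labels are hypothetical illustrations of the Board’s reasoning, not Meta’s actual classifiers, and a production system would need per-language rules and would weigh these cues alongside broader context.

```python
import re

# Hypothetical English-only cues; as the Board notes, these
# distinctions vary across languages, so real enforcement would
# need per-language rules rather than one regex set.
GENERALIZATION_CUE = re.compile(r"\ball\s+\w+s\b", re.IGNORECASE)  # e.g., "all Americans ..."
DEFINITE_ARTICLE_CUE = re.compile(r"\bthe\s+[A-Z]\w+s\b")          # e.g., "the Americans ..."

def linguistic_signals(text: str) -> dict:
    """Coarse signals: does a nationality-based criminal allegation read
    as a generalization about people ("all X are criminals") or as a
    reference to a particular group ("the X commit crimes")?"""
    return {
        "generalizes_people": bool(GENERALIZATION_CUE.search(text)),
        "definite_article": bool(DEFINITE_ARTICLE_CUE.search(text)),
    }

print(linguistic_signals("all Israelis are criminals"))
# -> {'generalizes_people': True, 'definite_article': False}
print(linguistic_signals("the Americans commit crimes"))
# -> {'generalizes_people': False, 'definite_article': True}
```

Even under these assumptions, such cues are only one input: as the decision itself stresses, the presence of a generalization cue would point towards removal, while the definite article would merely invite a closer contextual reading.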
At the same time, the Board considers that developing a framework that would only be available to Meta’s policy experts (an “escalation-only” rule), rather than to at-scale content reviewers, is an inadequate solution. In the Sudan’s Rapid Support Forces Video Captive case, the Board learned that Meta’s human reviewers carrying out moderation at-scale “are not instructed or empowered to identify content that violates the company’s escalation-only” rules. Similarly, in these cases, Meta informed the Board that “escalation-only” rules can only be enforced if content is brought to the attention of Meta’s escalation-only teams, for example, through Trusted Partners, significant press coverage, inquiries from content moderators about concerning trends, specialized teams in the region, or internal experts such as Meta’s Human Rights Team or Civil Rights Legal Team.
While the Board acknowledges that this escalation-only framework would allow for expert analysis of the overarching context of a conflict situation, cues around the user’s intent and any links to state institutions, the Board considers that this approach would not distinguish between permissible and impermissible posts in most cases, given that it would not be applied at-scale. Similarly, the Board finds that another of Meta’s alternatives, allowing all criminal comparisons based on nationality, is not a sufficiently nuanced approach and would create a risk of under-enforcement against harmful content, a risk that may be especially exacerbated in times of crisis. The Board considers this option overbroad, as it may protect content targeting people, rather than states, their actions or policies.
II. Enforcement Action
Meta has informed the Board about potential enforcement challenges associated with some of the more nuanced policy alternatives it provided to the Board, including potential difficulties with classifier training to enforce narrow exceptions and the increased complexity for human reviewers moderating at-scale. The company noted that under the current Hate Speech policy, all protected characteristic groups are treated equally, which makes it easier for human reviewers to apply the policy, and this also facilitates classifier training.
In the Violence Against Women case, Meta informed the Board that “it can be difficult for at-scale content reviewers to distinguish between qualified and unqualified behavioral statements without taking a careful reading of context into account.” In the Call for Women’s Protest in Cuba case, Meta told the Board that because it is challenging to determine intent at-scale, its internal guidelines instruct reviewers to remove behavioral statements about protected characteristic groups by default when the user has not made it clear whether the statement is qualified or unqualified.
While the Board acknowledges the enforcement challenges around nuanced policies, it finds that Meta could consider creating lists of actors and crimes that are very likely to reference state policies or actors, rather than people. For example, the list could include references to police, military, army, soldiers, government and other state officials. When it comes to photo and video content, Meta may instruct its human reviewers to consider visual cues in the content. For instance, content that features people wearing military attire coupled with generalizations about criminality may indicate the user’s intent to reference state actions or actors, rather than to generalize or compare people to criminals.
The Board also notes that some crimes are more typically committed by or attributed to state actors and dangerous organizations, and references to them could therefore signal that the user’s intent is to criticize the actions or policies of state actors or dangerous organizations. For at-scale enforcement, Meta may consider focusing on atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court.
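As an illustration of this list-based approach, the sketch below checks a flagged post for the two signals the Board describes: mention of a likely state proxy and mention of an atrocity-type crime. The term lists and function name are hypothetical examples, not Meta’s internal guidance, and the signals would need to be weighed against generalization cues and context rather than applied as a standalone exception (in the second case, for instance, the reference to “genocide” did not outweigh the “all Israelis” generalization).

```python
# Hypothetical example lists, not Meta's internal guidance.
STATE_PROXY_TERMS = {
    "police", "military", "army", "soldiers", "government",
    "state officials",
}
ATROCITY_CRIME_TERMS = {
    # e.g., crimes specified in the Rome Statute
    "genocide", "war crimes", "crimes against humanity",
}

def state_reference_signals(text: str) -> dict:
    """Signals that a criminal allegation tied to nationality likely
    refers to a state's actions or actors rather than to people."""
    lowered = text.lower()
    return {
        "state_proxy_actor": any(t in lowered for t in STATE_PROXY_TERMS),
        "atrocity_crime": any(t in lowered for t in ATROCITY_CRIME_TERMS),
    }

# "Russian soldiers are war criminals" carries the state-proxy signal,
# pointing towards treatment as political speech; "all Indians are
# rapists" carries neither signal and remains subject to removal.
print(state_reference_signals("Russian soldiers are war criminals"))
# -> {'state_proxy_actor': True, 'atrocity_crime': False}
print(state_reference_signals("all Indians are rapists"))
# -> {'state_proxy_actor': False, 'atrocity_crime': False}
```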
In view of the challenge of minimizing false positives and false negatives at-scale, the Board recommends that Meta publicly share the results of the internal audits it conducts to assess the accuracy of human review and the performance of automated systems in the enforcement of its Hate Speech policy. The company should provide the results in a way that allows these assessments to be compared across languages and/or regions. This recommendation is in line with the Board’s recommendation no. 5 from the Breast Cancer Symptoms and Nudity decision and recommendation no. 6 from the Referring to Designated Dangerous Individuals as “Shaheed” policy advisory opinion.
Considering the complexities and nuances of the proposed policies, the Board underlines the importance of providing sufficient and detailed guidance to human reviewers to ensure consistent enforcement, in line with recommendation no. 1 below.
5.2 Compliance With Meta’s Human Rights Responsibilities
The Board finds that Meta’s decision to remove the content in the first case was not consistent with the company’s human rights responsibilities. The majority of the Board considers that removing the content in the second and third cases was in line with Meta’s human rights commitments, while a minority disagrees.
Freedom of Expression (Article 19 ICCPR)
Article 19 of the ICCPR provides for broad protection of expression, including about politics, public affairs and human rights, with expression about social or political concerns receiving heightened protection (General Comment No. 34, paras. 11-12).
When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s human rights responsibilities in line with the UN Guiding Principles on Business and Human Rights (UNGPs), which Meta itself has committed to in its Corporate Human Rights Policy. The Board does this both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As the UN Special Rapporteur on freedom of expression has stated, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression,” (A/74/486, para. 41).
I. Legality (Clarity and Accessibility of the Rules)
The principle of legality requires rules limiting expression to be accessible and clear, formulated with sufficient precision to enable an individual to regulate their conduct accordingly (General Comment No. 34, para. 25). Additionally, these rules “may not confer unfettered discretion for the restriction of freedom of expression on those charged with [their] execution” and must “provide sufficient guidance to those charged with their execution to enable them to ascertain what sorts of expression are properly restricted and what sorts are not,” (Ibid.). The UN Special Rapporteur on freedom of expression has stated that when applied to private actors’ governance of online speech, rules should be clear and specific (A/HRC/38/35, para. 46). People using Meta’s platforms should be able to access and understand the rules, and content reviewers should have clear guidance regarding their enforcement.
The Board finds that, as applied to these three cases, Meta’s policy prohibiting dehumanizing speech against persons based on nationality in the form of comparisons to, generalizations or unqualified behavioral statements about “criminals” meets the legality test. While all three posts contain generalizations involving criminal allegations, the Board considers that the first case contains sufficient context to conclude that the user was referring to state actions or policies, and that this content should be restored. However, the content in the second and third cases targets people based on nationality, violating Meta’s Hate Speech policy.
Further, the Board highlights that any new rules should be clear and accessible to users as part of Meta making changes to the policy. Thus, the Board urges Meta to update the language of the Hate Speech policy to reflect the changes that will result from this decision and the policy recommendations that are adopted.
In the Violence Against Women, Knin Cartoon and Call for Women’s Protest in Cuba decisions, the Board found that content reviewers should have sufficient room and resources to take contextual cues into account in order to accurately enforce Meta’s policies. Therefore, to ensure consistent and effective enforcement, Meta should provide clear guidance about the new rules to its human reviewers, in line with recommendation no. 1 below.
II. Legitimate Aim
Any restriction on freedom of expression should also pursue at least one of the legitimate aims listed in the ICCPR, which include protecting the “rights of others.” “The term ‘rights’ includes human rights as recognized in the Covenant and more generally in international human rights law,” (General Comment No. 34, para. 28). In line with its previous decisions, the Board finds that Meta’s Hate Speech policy, which aims to protect people’s right to equality and non-discrimination, pursues a legitimate aim that is recognized by international human rights law standards (see, for example, our Knin Cartoon decision).
III. Necessity and Proportionality
Under ICCPR Article 19(3), necessity and proportionality require that restrictions on expression “must be appropriate to achieve their protective function; they must be the least intrusive instrument amongst those which might achieve their protective function; they must be proportionate to the interest to be protected,” (General Comment No. 34, para. 34).
The UNGPs state that businesses should perform ongoing human rights due diligence to assess the impacts of their activities (UNGP 17) and acknowledge that the risk of human rights harms is heightened in conflict-affected contexts (UNGP 7). The UN Working Group on the issue of human rights and transnational corporations and other business enterprises noted that businesses’ diligence responsibilities should reflect the greater complexity and risk of harm in some scenarios (A/75/212, paras. 41-49).
In the Myanmar Bot case, the Board found that “[Meta’s] heightened responsibilities should not lead to default removal, as the stakes are high in both leaving up harmful content and removing content that poses little or no risk of harm.” The Board further noted that “while Facebook’s concern about hate speech in Myanmar was well founded, it also must take particular care to not remove political criticism and expression, in that case supporting democratic governance.”
While criticism of state policies, politics and actions, especially in crisis and conflict situations, is of heightened importance, attacks on persons based on nationality may be particularly harmful in the same context. Criminal allegations against people based on nationality may result in offline violence that targets people and contributes to the escalation of tensions between countries in a conflict setting. The majority of the Board finds that Meta’s decision to remove the content in the first case did not comply with the principles of necessity and proportionality, while the removals in the second and third cases were necessary and proportionate. The majority considers that in the absence of contextual cues to conclude that the users in the second and third cases were criticizing the Israeli and the Indian governments respectively, both content removals were justified. However, the majority concludes that such context is present in the first case, thereby making the removal in that case neither necessary nor proportionate, and requiring the post to be restored.
The Board reiterates that context is key for assessing necessity and proportionality (see our Pro-Navalny Protests in Russia decision). The Board acknowledges the importance and challenges around identifying contextual cues within the content itself and taking into account the external context and “environment for freedom of expression” surrounding posted content, (see also our Call for Women’s Protest in Cuba decision).
Regarding the content in the second case, the majority of the Board notes reports that, since October 7, 2023, the United Nations, government agencies and advocacy groups have warned about an increase in antisemitism and Islamophobia. The Anti-Defamation League, for example, reported that antisemitic incidents in the United States increased by 361% following the October 7 attacks. Countries across Europe have warned of rising hate crimes, hate speech and threats to civil liberties targeting Jewish and Muslim communities. When analyzing the challenges of enforcing Meta’s policies at-scale, the Board has previously emphasized that dehumanizing discourse consisting of implicit or explicit discriminatory speech may contribute to atrocities (see the Knin Cartoon decision). In interpreting the Hate Speech Community Standard, the Board has also noted that even when specific pieces of content, seen in isolation, do not appear to directly incite violence or discrimination, during times of heightened ethnic tension and violence the volume of such content is likely to exacerbate the situation. At least in those circumstances, a social media company like Meta is entitled to take steps beyond those available to governments to make sure its platform is not used to foster and encourage hatred that leads to violence. In the absence of unambiguous references signaling criticism of the state, one of its institutions or policies, the majority of the Board concludes that the content in the second case constituted dehumanizing speech against all Israelis based on nationality. In the context of reports of increasing numbers of antisemitic incidents, including attacks on Jewish people and Israelis on the basis of their identity, such content is likely to contribute to imminent offline harm.
Similarly, the majority of the Board takes note of the ongoing tensions between India and Pakistan, and the reports on instances of communal violence between Hindus and Muslims in India (see Communal Violence in the Indian State of Odisha decision). Therefore, the majority considers that the removal of the content in the third case was necessary and proportionate because it targeted Indians, rather than criticized the Indian government, contributing to an environment of hostility and violence.
A minority of the Board disagrees with the removal of the second and third posts. Global freedom of expression principles (as enshrined in ICCPR Article 19) require that limits on speech, including hate speech bans, meet the necessity and proportionality principles, which entails an assessment of whether near-term harm from the posts is likely and imminent. This minority is not convinced that content removal is the least intrusive means available to Meta to address potential harms in these cases, as a broad array of digital tools is available for consideration (e.g., preventing the sharing of posts, demotions, labels, time-limited blocking). Meta’s failure to demonstrate otherwise does not satisfy the principle of necessity and proportionality. The Special Rapporteur has stated: “just as States should evaluate whether a limitation on speech is the least restrictive approach, so too should companies carry out this kind of evaluation. And, in carrying out the evaluation, companies should bear the burden of publicly demonstrating necessity and proportionality,” (A/74/486, para. 51) [emphasis added]. For the minority, Meta has failed to publicly demonstrate why removals are the least intrusive means, and the majority has not made a persuasive case that the necessity and proportionality principle is satisfied in the second and third cases.
While the majority of the Board upholds the removal of the two violating posts in the second and third cases, it underlines the importance of seeking user education and user empowerment measures when limiting freedom of expression. The Board takes note of recommendation no. 6 in the Pro-Navalny Protests in Russia decision, in response to which Meta explored ways of notifying users of potential violations of the Community Standards before the company takes an enforcement action. The company has informed the Board that when its automated systems detect with high confidence a potential violation in content that a user is about to post, Meta may inform the user that their post might violate the policy, allowing the user to better understand Meta’s policies and then decide whether to delete the content and post it again without the violating language. Meta added that over the 12-week period from July 10, 2023, to October 1, 2023, across all notification types, the company sent notifications on more than 100 million pieces of content, with over 17 million notifications relating to enforcement of the Bullying and Harassment Community Standard. Across all notifications, users opted to delete their posts more than 20% of the time. The Board notes that all information is aggregated and de-identified to protect user privacy, and that all metrics are estimates based on the best information available at a specific point in time. The Board considers the implementation of such measures an important step towards user education and empowerment, and additional control for users over their own experiences on Meta’s platforms.
6. The Oversight Board’s Decision
The Oversight Board overturns Meta’s decision to take down the content in the first case, requiring the post to be restored, and upholds Meta’s decisions to take down the content in the second and third cases.
7. Recommendations
Content Policy
1. Meta should amend its Hate Speech Community Standard, adding the section marked as “new” below. The amended Hate Speech Community Standard would then include the following or other substantially similar language to that effect:
“Do not post
Tier 1
Content targeting a person or group of people (including all groups except those who are considered non-protected groups described as having carried out violent crimes or sexual offenses or representing less than half of a group) on the basis of their aforementioned protected characteristic(s) or immigration status in written or visual form with dehumanizing speech in the form of comparisons to or generalizations about criminals:
- Sexual Predators
- Violent Criminals
- Other Criminals
[NEW] Except when the actors (e.g., police, military, army, soldiers, government, state officials) and/or crimes (e.g., atrocity crimes or grave human rights violations, such as those specified in the Rome Statute of the International Criminal Court) imply a reference to a state rather than targeting people based on nationality.”
The Board will consider this recommendation implemented when Meta updates the public-facing Hate Speech Community Standard and shares the updated specific guidance with its reviewers.
Enforcement
2. To improve transparency around Meta’s enforcement, Meta should share the results of the internal audits it conducts to assess the accuracy of human review and performance of automated systems in the enforcement of its Hate Speech policy with the public. It should provide the results in a way that allows these assessments to be compared across languages and/or regions.
The Board will consider this recommendation implemented when Meta includes the accuracy assessment results as described in the recommendation in its Transparency Center and in the Community Standards Enforcement Reports.
*Procedural Note:
- The Oversight Board’s decisions are made by panels of five Members and approved by a majority vote of the full Board. Board decisions do not necessarily represent the views of all Members.
- Under its Charter, the Oversight Board may review appeals from users whose content Meta removed, appeals from users who reported content that Meta left up, and decisions that Meta refers to it (Charter Article 2, Section 1). The Board has binding authority to uphold or overturn Meta’s content decisions (Charter Article 3, Section 5; Charter Article 4). The Board may issue non-binding recommendations that Meta is required to respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation.
- For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, a digital investigations group providing risk advisory and threat intelligence services to mitigate online harms, also provided research. Linguistic expertise was provided by Lionbridge Technologies, LLC, whose specialists are fluent in more than 50 languages and work from 5,000 cities across the world.