Case Description
These cases concern three content decisions made by Meta, one each on Instagram, Threads and Facebook, which the Oversight Board intends to address together. For each case, the Board will decide whether the content should be allowed on the relevant platform.
The first case involves a user’s reply to a comment on a Threads post from January 2024. The post was a video discussing the Israel-Hamas conflict. The reply says “genocide” and states that “all Israelis are criminals.” In this case, one of Meta’s automated tools (specifically, a hostile speech classifier) identified the content as potentially violating. Following human review, Meta determined the content violated its Hate Speech Community Standard and removed it. After the company identified this case as one to refer to the Board, Meta’s policy subject matter experts also determined the original decision to remove the content was correct.
The second case involves a Facebook post in Arabic from December 2023, which states that both Russians and Americans are “criminals.” The content also states that “Americans are more honorable” because they “admit their crimes” while Russians “want to benefit from the crimes of the Americans.” After one of Meta’s automated tools (a hostile speech classifier) identified the content as potentially violating, the post was sent for review, but the review was automatically closed, so the content remained on Facebook. In March 2024, Meta selected this content to be referred to the Board and the company’s policy subject matter experts determined the post violated the Hate Speech Community Standard. It was then removed from Facebook. The user who posted the content appealed this decision. Following another stage of human review, the company decided content removal in this case was correct.
The third case involves a user’s comment on an Instagram post from March 2024, stating that “all Indians are rapists.” Meta removed the content after one of Meta’s automated tools (a hostile speech classifier) identified it as potentially violating the Hate Speech Community Standard. The user did not appeal Meta’s decision. After Meta selected this content to refer to the Board, the company’s policy subject matter experts determined the original decision to remove the content was still correct.
Meta removed the content in all three cases. In the first case, Meta did not apply a standard strike to the user’s account because another piece of the user’s content had been removed around the same time; Meta explained that when it removes multiple pieces of content at once, it may count them as a single strike. In the second case, Meta did not apply a standard strike to the user’s account because the content was posted more than 90 days before an enforcement action was taken, as per Meta’s strikes policy. In the third case, the company applied a standard strike and a 24-hour feature limit to the user’s account, which prevented them from using Live video.
Meta’s Hate Speech Community Standard distinguishes between attacks against concepts or institutions, which are generally allowed, and direct attacks against people on the basis of protected characteristics, including race, ethnicity, national origin and religious affiliation. Content attacking concepts or institutions may be removed if it is “likely to contribute to imminent physical harm, intimidation or discrimination” against people associated with the relevant protected characteristic. Prohibited attacks under the Hate Speech policy include “dehumanizing speech in the form of comparisons to or generalizations about” criminals, including sexual predators, terrorists, murderers, members of hate or criminal organizations, thieves and bank robbers. In the cases under review, Meta removed all three posts for “targeting people with criminal allegations based on nationality.”
When Meta referred these cases to the Board, it stated that they present a challenge on how to handle criminal allegations directed at people based on their nationality, under the Hate Speech policy. Meta told the Board that while the company believes this policy line strikes the right balance between voice and safety in most circumstances, there are situations, particularly in times of crisis and conflict, “where criminal allegations directed toward people of a given nationality may be interpreted as attacking a nation’s policies, its government, or its military rather than its people.”
The Board selected these cases to consider how Meta should moderate allegations of criminality based on nationality. These cases fall within the Board’s strategic priorities of Crisis and Conflict Situations and Hate Speech Against Marginalized Groups.
The Board would appreciate public comments that address:
- The impact of social media platform’s hate speech policies, especially Meta’s, on the ability of users to speak up against the acts of States, particularly in crisis and conflict situations.
- The impact of content alleging criminality based on a person’s nationality, including members of marginalized groups (e.g., national, ethnic and/or religious minorities, migrants), particularly in crisis and conflict situations.
- Meta’s human rights responsibilities in relation to content including allegations of criminality based on nationality, given the company’s approach of distinguishing between attacks against concepts (generally allowed) and attacks against people on the basis of protected characteristics (not allowed).
- Insights into potential criteria for establishing whether a user is targeting a concept/institution (e.g., state, army) or a group of people based on their nationality.
As part of its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to these cases.
Comments
The Anti Defamation League (ADL) is the leading anti-hate organization in the world. Founded in 1913, its mission is to “stop the defamation of the Jewish people and to secure justice and fair treatment to all.” The Center for Technology and Society at ADL seeks to combat the proliferation of hate and harassment online and to protect targets of online abuse.
This public comment responds to the case currently under consideration by the Oversight Board, in which the comment in question states that "all Israelis are criminals" and invokes the term "genocide." Contextually, this case aligns with two other cases brought into consideration by the Oversight Board: the first concerns a December 2023 Facebook post in Arabic stating that both Russians and Americans are “criminals,” and further that “Americans are more honorable” because they “admit their crimes” while Russians “want to benefit from the crimes of the Americans”; the second involves a March 2024 comment on an Instagram post stating that “all Indians are rapists.” At stake in all three cases is whether the comments violate Facebook’s prohibition on attacks against people on the basis of protected characteristics, in this instance ethnicity and national origin.
This comment about “Israelis” represents a clear violation of Facebook's Community Standards regarding hate speech. According to Facebook's policy, hate speech is defined as "a direct attack against people — rather than concepts or institutions— on the basis of what we call protected characteristics: race, ethnicity, national origin, disability, religious affiliation, caste, sexual orientation, sex, gender identity and serious disease." The commenter's broad generalization targeting Israelis on the basis of their nationality is a textbook example of a direct attack against a protected characteristic.
Furthermore, Facebook's hate speech policy elaborates that this includes violent or dehumanizing speech, harmful stereotypes, statements of inferiority, expressions of contempt, disgust or dismissal, cursing and calls for exclusion or segregation. By labeling an entire nationality as criminals, the comment promotes a harmful stereotype, expresses contempt, and implicitly calls for the exclusion or segregation of Israelis. The comment is, therefore, hateful in what it implies: criminals are, by definition, guilty of a crime, and punishment follows as a logical next step. Such rhetoric is not only contrary to Facebook's standards but could also be deeply problematic from a legal perspective.
It is in this context that we must see the invocation of the term "genocide" as not only inflammatory but also a policy violation and, consequently, reckless. Genocide is a grave crime under international law, defined in the Convention on the Prevention and Punishment of the Crime of Genocide as specific acts committed with the intent to destroy a national, ethnic, racial, or religious group. Accusing an entire nation of genocide without presenting substantive evidence is an egregious misuse of a legal term that carries profound implications. Such accusations should not be made lightly, especially in a public forum like Facebook, as they can contribute to the spread of misinformation, further inflame tensions in an already volatile conflict, and give license to those who take this charge literally to act with a sense of moral righteousness and harm Israelis, under the presumption that in doing so they are acting justly to stop the gravest of crimes.
Additional human rights issues, independent and above Facebook’s own specific policies, are also at play. Increasingly, human rights law recognizes a “right to truth,” or the idea that victims of atrocity and human rights abuses have the right to accurate and factual representation of the human rights abuses they experienced. The term “genocide” was coined as a result of the Holocaust and other modern-day atrocities like the Armenian genocide. Appropriating and then weaponizing the Jewish experience of genocide under the Nazis to describe actions by the Jewish state of Israel violates the right to truth, distorting the Jewish experience of gross human rights violations—in this case, in order to demonize and justify hatred against the Jewish state.
Facebook, as a private company, has the legal right and responsibility to moderate content on its platforms in accordance with its established policies. It logically follows from Facebook’s own definition of hate speech and its commitment to mitigate it that the comment describing all Israelis as criminals and evoking the term genocide constitutes a clear violation of Facebook's hate speech policy. It directly attacks a protected group based on nationality, promotes harmful stereotypes, misuses a serious legal term, and strikes at the human rights value of a “right to truth,” in this case, the right to a true understanding of the Jewish experience of genocide, and their right to not have their experience used untruthfully as a weapon against them. By allowing such content to persist on its Threads platform, Facebook risks enabling the spread of discriminatory views that contravene international human rights principles and exposes itself to potential liability. To foster a safe and inclusive online environment and uphold its legal and ethical obligations, Facebook must consistently enforce its Community Standards and take swift action against hate speech targeting any protected characteristic, including national origin.
Meta should not restrict legitimate collective accusations, particularly against nations and peoples. Its present policy fails to distinguish between 'attacks on the basis of protected characteristics' and legitimate claims of collective accountability, collective responsibility, and collective guilt. It lumps them all together as 'hate speech', and that is logically and politically inadequate. The issue is highly politicised, and this is not the first controversy. In 2017, Facebook suppressed the online feminist campaign "Men are Scum", an offshoot of the #MeToo movement. The women's reactions illustrate the huge gap between both sides:
" … participants did not conceptualize their comments as hate speech, but rather considered posts such as “Men Are Scum” to be a means of “punching up” and “critiquing the power structure” … Respondents insisted that their posts challenge the patriarchal order they have been socialized into … In this manner, they connected their posts to offline power dynamics, suggesting the site’s refusal to consider societal hierarchies protects hegemonic interests and subordinates women. … Several participants linked discriminatory censorship to the demographics of Facebook employees. … It should be noted that gender-based censorship is not a new problem for women but rather a long-standing pattern that predates the introduction of social media. … Facebook’s policies compound transhistorical inequities surrounding public speech."
Nurik, Chloe. "'Men Are Scum': Self-Regulation, Hate Speech, and Gender-Based Censorship on Facebook." International Journal of Communication, vol. 13, June 2019, p. 21. ISSN 1932-8036.
As of 2024, the most contentious accusation is of Jewish collective responsibility, for Israeli military actions. In recent decades, it was the issue of German collective guilt, for the atrocities of Nazi Germany. In the coming years, collective responsibility will be an important theme, in debates about national reparations for slavery and colonisation. On the horizon is the issue of male reparations for patriarchy, femicide and rape, which is even more contentious than reparations for slavery. All such discussion angers some people: attribution of Jewish collective responsibility is often considered antisemitic. The anger is predictable, because all collective accusations also target the individual members.
The typical objection to statements such as "Men are scum" or "All Indians are rapists" is that they falsely overgeneralise, and therefore slander innocent members of the group. At first sight this is a reasonable objection, but it can be used as a tactic to dispute legitimate attribution of guilt and responsibility for past and current wrongs. Those who do wrong should not be shielded against the accusation of wrongdoing. The test of legitimacy is not whether the accusations are hateful or offensive, but whether they are true. True collective accusations remain true, no matter how much offence and pain they cause. An accusation is always an 'attack', but that cannot justify the prohibition of all accusations.
No social media company will want to adjudicate complex social-historical-ethical questions, such as whether white Americans bear guilt for slavery, or whether all men benefit from rape. There is, however, no magical policy or algorithm that can determine the legitimacy of an accusation without examining its truth or falsity. Nor is any 'protected category' exempt from well-founded moral judgement.
Frequent false overgeneralisations do not disprove collective responsibility for collective action, inaction, goals, and/or values. By definition, individual members are also responsible for the collective acts of collectivities, including the collective acts of a nation. The nation-state can never be entirely separated from the nation, nor the nation from its individual members. Without members there is no nation, and without a nation, no nation-state.
When can we legitimately blame a nation or people, and/or its individual members?
Firstly, the nation is responsible for its own existence, which is neither given, nor trivial, nor without consequences. The American people, for instance, is a powerful political entity in its own right, even discounting the state (USA), its economy, and its military. Simply being American is an individual political act, which collectively creates a collective political actor. This is true for all nations and peoples, and the individual act is subject to moral judgement.
The nation collectively seeks some form of national homeland, and this too is political. Because territory is finite, national territory and sovereignty always come at the expense of others, and require the exercise of power. The case of Israel is exemplary; its Declaration of Independence states:
"Eretz Israel was the birthplace of the Jewish people. Here their spiritual, religious and political identity was shaped. … After being forcibly exiled from their land, the people kept faith with it throughout their Dispersion and never ceased to pray and hope for their return to it and for the restoration in it of their political freedom."
At no point was that possible without harm to others: the collective aspiration is neither innocent, nor self-evidently justified. It is also a choice, because the Jewish people could abandon their aspiration to return to Eretz Israel, and to collective political freedom there. They could accept 'dispersion'. Because it is a choice, their aspiration is subject to moral judgement, and because it is collective, the responsibility is collective. All nations and peoples who aspire to a homeland can be criticised for that aspiration alone.
Internally, all nations act collectively to preserve their national culture and traditions, which necessarily entails limits on foreign influences, and suppression of cultural innovation. All nations maintain their cultural homogeneity, which always entails some suppression of deviance, and they exert collective pressure to conform to national norms, for instance in the use of the national language.
That short list shows that negative moral judgements on nations and their members are morally permissible. No nation is guiltless. That does not mean that every negative characterisation is inherently accurate. It does, however, undermine the conventional wisdom that it is wrong to criticise people for simply being American, or being Irish, or being Jewish. This 'presumption of innocence' for nations underlies the inclusion of nationality in the lists of protected categories used by social media moderators, and sometimes in anti-discrimination law. The presumption lacks an ethical foundation: there are good reasons to criticise nations, and the individual members who constitute them. Consequently, they do not deserve the status of 'protected category' in policies or legislation intended to restrict speech, such as social media content policies.
Meta's current list includes the terms 'race, ethnicity, and national origin', whereas the Oversight Board speaks of 'allegations of criminality based on nationality' and of 'marginalised national and ethnic minorities'. All of these terms could be used as proxies to protect individual members of a nation, and the nation itself, from legitimate collective accusations. Meta should either remove them or otherwise modify its policies to permit legitimate collective accusation.
This statement means Palestinians who have been forcibly displaced from their land within the past 76 years have the right to return. Granting them this right is both morally and ethically the correct course of action. How could ANYONE have a problem with this, unless they are the thieves who stole the land and don't want to let them return?
Indigenous groups all around the world who have been colonized have the right to their land. The abuse and mistreatment MUST STOP.
We can't have justice in the world when there is no justice.
Meta should not ban the phrase "from the river to the sea", as its meaning is about Palestinians and Israelis coexisting in one state. Claiming this phrase means violence against Jewish people takes it out of context. From the river to the sea, Palestine will be free.
From the river to the sea merely means that the Palestinians get the land back that’s rightfully theirs, and are allowed to live at peace without the impending threat of death. It is tied to basic human rights, and to censor the phrase would be to stand on the side of genocide. Do not make this mistake.