New Cases to Consider Criminal Allegations Based on Nationality
May 21, 2024
Today, the Board is announcing three new cases for consideration. As part of this, we are inviting people and organizations to submit public comments by using the button below.
Case Selection
As we cannot hear every appeal, the Board prioritizes cases that have the potential to affect many users around the world, are of critical importance to public discourse or raise important questions about Meta’s policies.
The cases that we are announcing today are:
Criminal Allegations Based on Nationality
2024-028-IG-MR, 2024-029-TH-MR, 2024-030-FB-MR
Cases Referred by Meta
These cases concern three content decisions made by Meta, one each on Instagram, Threads and Facebook, which the Oversight Board intends to address together. For each case, the Board will decide whether the content should be allowed on the relevant platform.
The first case involves a user’s reply to a comment on a Threads post from January 2024. The post was a video discussing the Israel-Hamas conflict. The reply says “genocide” and states that “all Israelis are criminals.” One of Meta’s automated tools (specifically, a hostile speech classifier) identified the content as potentially violating. Following human review, Meta determined the content violated its Hate Speech Community Standard and removed it. After the company identified this case as one to refer to the Board, Meta’s policy subject matter experts also determined that the original decision to remove the content was correct.
The second case involves a Facebook post in Arabic from December 2023, which states that both Russians and Americans are “criminals.” The content also states that “Americans are more honorable” because they “admit their crimes,” while Russians “want to benefit from the crimes of the Americans.” After one of Meta’s automated tools (a hostile speech classifier) identified the content as potentially violating, the post was sent for human review, but the review was automatically closed and the post remained on Facebook. In March 2024, Meta selected this content to refer to the Board, and the company’s policy subject matter experts determined the post violated the Hate Speech Community Standard. The post was then removed from Facebook. The user who posted the content appealed this decision. Following another stage of human review, the company decided that removing the content was correct.
The third case involves a user’s comment on an Instagram post from March 2024, stating that “all Indians are rapists.” Meta removed the content after one of Meta’s automated tools (a hostile speech classifier) identified it as potentially violating the Hate Speech Community Standard. The user did not appeal Meta’s decision. After Meta selected this content to refer to the Board, the company’s policy subject matter experts determined the original decision to remove the content was still correct.
Meta removed the content in all three cases. In the first case, Meta did not apply a standard strike to the user’s account because the user had had another piece of content removed around the same time; Meta explained that when the company removes multiple pieces of content at once, it may count them as a single strike. In the second case, Meta did not apply a standard strike because the content was posted more than 90 days before an enforcement action was taken, as per Meta’s strikes policy. In the third case, the company applied a standard strike and a 24-hour feature limit to the user’s account, which prevented them from using Live video.
Meta’s Hate Speech Community Standard distinguishes between attacks against concepts or institutions, which are generally allowed, and direct attacks against people on the basis of protected characteristics, including race, ethnicity, national origin and religious affiliation. Content attacking concepts or institutions may be removed if it is “likely to contribute to imminent physical harm, intimidation or discrimination” against people associated with the relevant protected characteristic. Prohibited attacks under the Hate Speech policy include “dehumanizing speech in the form of comparisons to or generalizations about” criminals, including sexual predators, terrorists, murderers, members of hate or criminal organizations, thieves and bank robbers. In the cases under review, Meta removed all three posts for “targeting people with criminal allegations based on nationality.”
When Meta referred these cases to the Board, it stated that they raise the challenge of how criminal allegations directed at people based on their nationality should be handled under the Hate Speech policy. Meta told the Board that while the company believes this policy line strikes the right balance between voice and safety in most circumstances, there are situations, particularly in times of crisis and conflict, “where criminal allegations directed toward people of a given nationality may be interpreted as attacking a nation’s policies, its government, or its military rather than its people.”
The Board selected these cases to consider how Meta should moderate allegations of criminality based on nationality. These cases fall within the Board’s strategic priorities of Crisis and Conflict Situations and Hate Speech Against Marginalized Groups.
The Board would appreciate public comments that address:
- The impact of social media platforms’ hate speech policies, especially Meta’s, on the ability of users to speak up against the acts of States, particularly in crisis and conflict situations.
- The impact of content alleging criminality based on a person’s nationality, including on members of marginalized groups (e.g., national, ethnic and/or religious minorities, migrants), particularly in crisis and conflict situations.
- Meta’s human rights responsibilities in relation to content including allegations of criminality based on nationality, given the company’s approach of distinguishing between attacks against concepts (generally allowed) and attacks against people on the basis of protected characteristics (not allowed).
- Insights into potential criteria for establishing whether a user is targeting a concept/institution (e.g., state, army) or a group of people based on their nationality.
As part of its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to these cases.
Public Comments
If you or your organization feel you can contribute valuable perspectives that can help with reaching a decision on the cases announced today, you can submit your contributions using the button below. Please note that public comments can be provided anonymously. The public comment window is open for 14 days, closing at 23:59 your local time on Tuesday 4 June.
What’s Next
Over the next few weeks, Board Members will be deliberating these cases. Once they have reached their decision, we will post it on the Decisions page.