
Oversight Board announces Holocaust Denial case


August 2023

Today, the Board is announcing a new case for consideration. As part of this, we are inviting people and organizations to submit public comments.

Case selection

As we cannot hear every appeal, the Board prioritizes cases that have the potential to affect many users around the world, are of critical importance to public discourse, or raise important questions about Meta's policies.

The case that we are announcing today is:

Holocaust Denial

2023-022-IG-UA

User appeal to remove content from Instagram

Submit public comments here.

To read this announcement in Hebrew, click here.


In September 2020, an Instagram user with around 9,000 followers posted an image of Squidward, a cartoon character from the television series SpongeBob SquarePants, which included a speech bubble entitled “Fun Facts About The Holocaust.” The speech bubble contains claims about the Holocaust that are not true. The caption below the image includes several tags relating to memes, some of which may target specific geographical audiences. In comments on their own post, the user reiterates that the claims are “real history.” The post was viewed around 1,000 times and received fewer than 1,000 likes.

On October 12, 2020, several weeks after the content was originally posted, Meta announced revisions to its content policies to prohibit Holocaust denial. “Denying or distorting information about the Holocaust” was added to a list of “designated dehumanizing comparisons, generalizations, or behavioral statements” within the Hate Speech Community Standard (Tier 1). On November 23, 2022, Meta reorganized the Hate Speech Community Standard, and “Holocaust denial” is now listed under Tier 1 as an example of prohibited “harmful stereotypes historically linked to intimidation, exclusion, or violence on the basis of a protected characteristic.”

Since the content was posted in September 2020, users reported it six times for hate speech. Four of these reports were made before Meta’s policy change, and two came after. Some reports were assessed automatically as not violating Meta’s policies, whereas others were auto-closed as a result of what Meta described as its “COVID-19-related automation policies.” These policies, introduced at the beginning of the pandemic in 2020, auto-closed review jobs based on a variety of criteria to reduce the volume of reports for human reviewers, while keeping potentially “high-risk” reports open. Because some reports were auto-closed, the content was left on Instagram.

Two reports led to human reviewers assessing the content as non-violating, one prior to the policy change and one after it. In May 2023, another user who reported the content appealed that decision, but the appeal was auto-closed due to Meta’s COVID-19-related automation policies. The same user then appealed to the Board, expressing deep concern at Meta’s failure to remove Holocaust denial content.

The Oversight Board has selected this case because of its relevance to the Board’s strategic priorities and the volume of user appeals questioning how Meta enforced its prohibition on Holocaust denial.

This case falls within the Board’s priority on Hate speech against marginalized groups. As a result of the Board selecting this case, Meta determined that its original decision to leave the content on Instagram was wrong, and the company ultimately removed the post.

The Board would appreciate public comments that address:

  • Research into online trends about content denying the factual basis of the Holocaust, and the associated online and offline harms.
  • Meta’s human rights responsibilities in relation to content denying the factual basis of the Holocaust, including relating to dignity, security and freedom of expression.
  • Challenges to and best practices on using automation to accurately detect and take enforcement action against hate speech that promotes false narratives about protected characteristic groups, as well as hate speech in the form of memes or other images/video with text overlay (i.e., how to reduce false negative enforcement).
  • Challenges to and best practices in preventing the mistaken removal (false positives) of content countering hate speech, including in the form of satire or any other form of speech.
  • Meta’s reliance on automation in content moderation since the start of the COVID-19 pandemic, and the implications for users’ abilities to appeal and rectify mistakes.
  • The usefulness of Meta’s transparency reporting on the extent and accuracy of its enforcement against hate speech, particularly for people studying and/or working to counter hate speech online.

As part of its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to this case.

Public comments

If you or your organization feel that you can contribute valuable perspectives that can help the Board reach a decision on the case announced today, you can submit your contributions using the link above. The public comment window is open for 14 days, closing at 23:59 your local time on Thursday 14 September.

What's next

Over the next few weeks, Board members will be deliberating this case. Once they have reached their final decision, we will post it on the Oversight Board website. To receive updates when the Board announces new cases or publishes decisions, sign up here.
