Overturned
Responding to antisemitism
September 13, 2023
This is a summary decision. Summary decisions examine cases where Meta reversed its original decision on a piece of content after the Board brought it to the company’s attention. These decisions include information about Meta’s acknowledged errors. They are approved by a Board Member panel, not the full Board. They do not consider public comments, and do not have precedential value for the Board. Summary decisions provide transparency on Meta’s corrections and highlight areas where the company could improve its policy enforcement.
Case summary
A user appealed Meta’s decision to remove an Instagram post of a video that condemned remarks by music artist Ye (the American rapper formerly known as Kanye West) praising Hitler and denying the Holocaust. After the Board brought the appeal to Meta’s attention, the company reversed its original decision and restored the post.
Case description and background
In January 2023, an Instagram user from Turkey posted a video containing an excerpt of an interview in English in which Ye states that he “likes” Adolf Hitler and that Hitler “didn’t kill 6 million Jews.” The video then cuts to a person who appears to be a TV reporter expressing outrage over Ye’s statements and recounting how his family members were killed in the Holocaust. The video is subtitled in Turkish and has a caption that can be translated as “TV reporter responds to Kanye West.”
Meta originally removed the post from Instagram citing its Dangerous Organizations and Individuals (DOI) and Hate Speech policies. Under Meta’s DOI policy, the company removes praise of designated individuals, including Adolf Hitler. However, the policy recognizes that “users may share content that includes references to designated dangerous organizations and individuals to report on, condemn or neutrally discuss them or their activities.” Under its Hate Speech policy, the company removes Holocaust denial as a form of harmful stereotype that is “historically linked to intimidation, exclusion, or violence on the basis of a protected characteristic.” The Hate Speech policy also recognizes that “people sometimes share content that includes slurs or someone else's hate speech to condemn it or raise awareness.”
In their appeal to the Board, the user argued that the video does not express support for Adolf Hitler and that their post had been misunderstood.
After the Board brought this case to Meta’s attention, the company determined that the content did not violate its policies. Although the video contained praise for Adolf Hitler and Holocaust denial, the second part of the video clearly condemned those statements, placing the post within an allowable context. The company therefore concluded that its initial removal was incorrect and restored the content to the platform.
Board authority and scope
The Board has authority to review Meta’s decision following an appeal from the user whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1).
Where Meta acknowledges it made an error and reverses its decision in a case under consideration for Board review, the Board may select that case for a summary decision (Bylaws Article 2, Section 2.1.3). The Board reviews the original decision to increase understanding of the content moderation process, to reduce errors and to increase fairness for people who use Facebook and Instagram.
Case significance
This case illustrates an error in applying the exceptions to Meta’s DOI and Hate Speech policies. Such mistakes can suppress speech that responds to hate speech, including Holocaust denial, or that condemns praise of dangerous individuals such as Hitler. Protecting counter-speech is essential to advancing freedom of expression and serves as a tool for combating harmful content such as misinformation and hate speech. The Board has previously recommended that:

- Meta assess the accuracy of reviewers enforcing the reporting allowance under the DOI policy in order to identify systemic issues causing enforcement errors (Mention of the Taliban in news reporting, recommendation no. 5). Meta has reported progress on implementing this recommendation.
- Meta evaluate automated moderation processes for enforcement of the DOI policy (Öcalan’s isolation, recommendation no. 2). Meta has declined to implement this recommendation.
- Meta conduct accuracy assessments focused on its Hate Speech policy allowances covering forms of expression such as condemnation, awareness raising, self-referential and empowering uses (Wampum belt, recommendation no. 3). Meta has demonstrated implementation of this recommendation.
The Board reiterates that the full implementation of these recommendations may reduce error rates in the enforcement of allowances under the Hate Speech and the Dangerous Organizations and Individuals policies. This will, in turn, better protect counter-speech and enhance freedom of expression overall.
Decision
The Board overturns Meta’s original decision to remove the content. The Board acknowledges Meta’s correction of its initial error once the Board brought the case to Meta’s attention.