Oversight Board Publishes Policy Advisory Opinion on Referring to Designated Dangerous Individuals as "Shaheed"
March 26, 2024
Today, the Board published a policy advisory opinion in response to Meta’s request on whether the company should continue to remove all content using the Arabic term “shaheed” to refer to individuals designated under its Dangerous Organizations and Individuals policy. The Board has found that Meta’s current approach disproportionately and unnecessarily restricts free expression, and that the company should end this blanket ban.
Read the Executive Summary of the policy advisory opinion below. To read the full version, click here.
Executive Summary
The Board finds that Meta’s approach to moderating content that uses the term “shaheed” to refer to individuals designated as dangerous substantially and disproportionately restricts free expression. Meta interprets all uses of “shaheed” referring to individuals it has designated as “dangerous” as violating and removes the content. According to Meta, it is likely that “shaheed” accounts for more content removals under the Community Standards than any other single word or phrase on its platforms. Acts of terrorist violence have severe consequences – destroying the lives of innocent people, impeding human rights and undermining the fabric of our societies. However, any limitation on freedom of expression to prevent such violence must be necessary and proportionate, given that undue removal of content may be ineffective and even counterproductive.
The Board’s recommendations start from the perspective that it is imperative Meta take effective action to ensure its platforms are not used to incite acts of violence, or to recruit people to engage in them. The word “shaheed” is sometimes used by extremists to praise or glorify people who have died while committing violent terrorist acts. However, Meta’s response to this threat must also be guided by respect for all human rights, including freedom of expression.
On October 7, 2023, as the Board was finalizing this policy advisory opinion, Hamas (a designated Tier 1 organization under Meta’s Dangerous Organizations and Individuals policy) led unprecedented terrorist attacks on Israel that killed an estimated 1,200 people and resulted in roughly 240 people being taken hostage (Ministry of Foreign Affairs, Government of Israel). According to news reports, as of February 6, 2024, at least 30 of the estimated 136 hostages remaining in Hamas captivity in early January are believed to have died. Meta immediately designated these events a terrorist attack under its Dangerous Organizations and Individuals policy. Israel quickly initiated a military campaign in response to the attacks. That military campaign had killed more than 30,000 people in Gaza as of March 4 (UN Office for the Coordination of Humanitarian Affairs, drawing on data from the Ministry of Health in Gaza). Reports from January indicated that 70% of fatalities were estimated to be women and children.
Following these events, the Board paused publication of this policy advisory opinion to ensure its recommendations were responsive to the use of Meta’s platforms and the word “shaheed” in this context. This additional research confirmed that the Board’s recommendations to Meta on moderating the word “shaheed” held up, even under the extreme stress of such events, and would ensure greater respect for all human rights in Meta’s response to crises. At the same time, the Board underscores that Meta’s policies in this area are global and their impact extends far beyond this conflict. While acknowledging the salience of recent events in Israel and Palestine, the Board’s recommendations are also global and not limited to any particular context.
In the Board’s view, Meta’s approach to moderating the word “shaheed” is overbroad, and disproportionately restricts freedom of expression and civic discourse. For example, posts reporting on violence and designated entities may be wrongly removed. Meta’s approach also fails to consider the various meanings of “shaheed,” many of which are not intended to glorify or convey approval, and leads all too often to Arabic speakers and speakers of other languages (many of them Muslim) having posts removed, without that removal serving the purposes of the Dangerous Organizations and Individuals policy. Moreover, Meta’s policies prohibit, for example, the glorification, support and representation of designated individuals, organizations and events, as well as incitement to violence. These policies, enforced accurately, mitigate the dangers resulting from terrorist use of Meta’s platforms. Accordingly, the Board recommends that Meta end its blanket ban on use of the term “shaheed” to refer to individuals designated as dangerous, and modify its policy for a more contextually informed analysis of content including the word.
Background
In February 2023, Meta asked the Board whether it should continue to remove content using the Arabic term “shaheed,” or شهيد written in Arabic letters, to refer to individuals designated under its Dangerous Organizations and Individuals policy. “Shaheed” is also a loanword (meaning many non-Arabic languages have “borrowed” this Arabic-origin term, including by adapting its spelling).
The company describes the word “shaheed” as an “honorific” term, used by many communities across cultures, religions and languages, to refer to a person who has died unexpectedly, such as in an accident, or honorably, such as in a war. The company acknowledges the term has “multiple meanings” and while there is “no direct equivalent in the English language,” the common English translation is “martyr.” Noting that “in English the word ‘martyr’ means a person who has suffered or died for a justified cause and typically has positive connotations,” Meta states that “it is because of this use that we have categorized the term [“shaheed”] as constituting praise under our [Dangerous Organizations and Individuals] policy.”
Meta’s presumption that referring to a designated individual as “shaheed” always constituted “praise” under the Dangerous Organizations and Individuals policy resulted in a blanket ban. Meta has acknowledged that because of the term’s multiple meanings it “may be over-enforcing on significant amounts of speech not intended to praise a designated individual, particularly among Arabic speakers.” In addition, Meta does not apply the Dangerous Organizations and Individuals policy exceptions allowing the use of “shaheed” to “report on, condemn or neutrally discuss designated entities.” This has continued under the latest updates to the policy – made in December 2023 – that now prohibit “glorification” and “unclear references” instead of “praise,” which has been removed completely.
Meta started a policy development process in 2020 to reassess its approach to the term “shaheed” because of these concerns. However, no consensus was reached within the company and no new approach was agreed upon.
When requesting this policy advisory opinion, Meta presented three possible policy options to the Board:
- Maintain the status quo.
- Allow use of “shaheed” in reference to designated individuals in posts that satisfy the exceptions to the “praise” prohibition (for example, reporting on, neutrally discussing or condemning), so long as there is no other praise or “signals of violence.” Some examples of these signals proposed by Meta included a visual depiction of weapons, or references to military language or real-world violence.
- Allow use of “shaheed” in reference to designated individuals so long as there is no other praise or signals of violence. This is regardless of whether the content falls under one of the exceptions listed above, in contrast to the second option.
The Board did consider other possible policy choices. For the reasons given throughout this policy advisory opinion, the Board’s recommendations align closely with the third option, though fewer signals of violence are adopted than Meta proposed in its request, and there is a requirement for the broader application of policy exceptions for reporting on, neutrally discussing and condemning designated entities and their actions.
Key Findings and Recommendations
The Board finds that Meta’s current approach to the term “shaheed” in connection to individuals designated as dangerous is overbroad, and substantially and disproportionately restricts free expression.
“Shaheed” is a culturally and religiously significant term. At times it is used to indicate praise of those who die committing violent acts and may even “glorify” them. But it is often used, even with reference to dangerous individuals, in reporting and neutral commentary, academic discussion, human rights debates and even more passive ways. Among other meanings, “shaheed” is widely used to refer to individuals who die while serving their country, serving their cause or as an unexpected victim of sociopolitical violence or natural tragedy. In some Muslim communities, it is even used as a first (given) name and surname. There is strong reason to believe the multiple meanings of “shaheed” result in the removal of a substantial amount of material not intended as praise of terrorists or their violent actions.
Meta’s approach of removing content solely for using “shaheed” when referring to designated individuals intentionally disregards the word’s linguistic complexity and its many uses, treating it always and only as the equivalent of the English word “martyr.” Doing so substantially affects freedom of expression and media freedoms, unduly restricts civic discourse and has serious negative implications for equality and non-discrimination. This over-enforcement disproportionately affects Arabic speakers and speakers of other languages that have “shaheed” loanwords. At the same time, other ways of implementing the Dangerous Organizations and Individuals policy would still enable Meta to advance its value of safety and its goal of keeping material glorifying terrorists off its platforms. The current policy is therefore disproportionate and unnecessary.
To align its policies and enforcement practices around the term “shaheed” more closely with human rights standards, the Board makes the following recommendations:
1. Meta should stop presuming that the word “shaheed,” when used to refer to a designated individual or unnamed members of designated organizations, is always violating and ineligible for policy exceptions. Content referring to a designated individual as “shaheed” should be removed as an “unclear reference” in only two situations. First, when one or more of three signals of violence are present: a visual depiction of an armament/weapon, a statement of intent or advocacy to use or carry an armament/weapon, or a reference to a designated event. Second, when the content otherwise violates Meta’s policies (e.g., for glorification or because the reference to a designated individual remains unclear for reasons other than use of “shaheed”). In either scenario, content should still be eligible for the “reporting on, neutrally discussing and condemning” exceptions.
2. To clarify the prohibition on “unclear references,” Meta should include several examples of violating content, including a post referring to a designated individual as “shaheed” combined with one or more of the three signals of violence specified in recommendation no. 1.
3. Meta’s internal policy guidance should also be updated to make clear that referring to designated individuals as “shaheed” is not violating except when accompanied by signals of violence, and that even when those signals are present, the content may still benefit from the “reporting on, neutrally discussing or condemning” exceptions.
If Meta accepts and implements these recommendations, under its existing rules, the company would continue to remove content that “glorifies” designated individuals, characterizes their violence or hate as an achievement, or legitimizes or defends their violent or hateful acts, as well as any support or representation of a designated dangerous entity. The Board’s proposed approach that results from these recommendations would be for Meta to stop always interpreting “shaheed” in reference to a designated individual as violating, only removing the content when combined with additional policy violations (e.g., glorification) or as an “unclear reference” due to signals of violence. Such content would remain eligible for the “reporting on, neutrally discussing and condemning designated individuals” policy exceptions.
The Board also recommends that Meta:
4. Explain in more detail the procedure by which entities and events are designated under its Dangerous Organizations and Individuals policy to improve transparency around this list. Meta should also publish aggregated information on the total number of entities within each tier of its designation list, as well as how many entities were added and removed in the past year.
5. Introduce a clear and effective process for regularly auditing designations and removing those no longer satisfying published criteria to ensure its Dangerous Organizations and Individuals entity list is up-to-date, and does not include organizations, individuals and events that no longer meet Meta’s designation definition.
6. Explain the methods it uses to assess the accuracy of human review and the performance of automated systems in the enforcement of its Dangerous Organizations and Individuals policy. Meta should also periodically share the outcomes of performance assessments of classifiers used in enforcement of this policy, providing results in a way that can be compared across languages and/or regions.
7. Clearly explain how classifiers are used to generate predictions of policy violations and how Meta sets thresholds for either taking no action, lining content up for human review or removing content. This information should be provided in the company’s Transparency Center to inform stakeholders.
For Further Information
In preparing this policy advisory opinion, the Board carried out extensive stakeholder engagement. For more on the public comments we received and our Stakeholder Engagement Roundtables, click through to the full policy advisory opinion.