Oversight Board Overturns Meta in Iranian Woman Confronted on Street Case
March 7, 2024
The Oversight Board has overturned Meta’s original decision to take down a video showing a man confronting a woman on the streets of Iran for not wearing the hijab. The post did not violate the Violence and Incitement rules because it contains a figurative rather than literal statement and is not a credible threat of violence. The post was shared during a period of turmoil, with escalating repression and violence against protesters, when access to social media in Iran is crucial and the internet has become the new battleground in the struggle for women’s rights. As Instagram is one of the few remaining platforms not banned in the country, its role in the anti-regime “Woman, Life, Freedom” movement has been immeasurable, despite the regime’s efforts to instill fear and silence women online. The Board concludes that Meta’s efforts to ensure respect for freedom of expression and assembly in the context of systematic state repression have been insufficient, and it recommends a change to the company’s Crisis Policy Protocol.
About the Case
In July 2023, a user posted a video on Instagram in which a man confronts a woman in public for not wearing the hijab. In the video, which is in Persian with English subtitles, the woman responds by saying she is standing up for her rights. An accompanying caption expresses support for the woman and Iranian women standing up to the regime. Part of the caption, which also criticizes the regime, includes a phrase that translates as, “it is not far to make you into pieces,” according to Meta.
Iran’s criminal code penalizes women who appear in public without a “proper hijab” with imprisonment, a fine or lashes. In September 2023, Iran’s regime approved a new Hijab and Chastity Bill under which women could face up to 10 years in prison for continuing to defy the mandatory hijab rules. The caption in this post makes it clear the woman in the video has already been arrested.
First flagged by Meta’s automated systems for potentially violating Instagram’s Community Guidelines, the post was sent for human review. Although multiple reviewers assessed the content under Meta’s Violence and Incitement policy, they did not come to the same conclusion, which, in combination with a technical error, meant the post stayed up. A user then reported the post, which led to an additional round of review, this time by Meta’s regional team with language expertise. At this stage, it was determined the post violated the Violence and Incitement policy, and it was removed from Instagram. The user who posted the content then appealed to the Board. Meta maintained its decision to remove the content was correct until the Board selected this case, at which stage the company reversed its decision, restoring the post.
Key Findings
The Board finds the post did not violate the Violence and Incitement Community Standard because it contains figurative speech, rather than literal, and is not a credible threat of violence that is capable of inciting offline harm. While Meta originally removed the post partly because it assessed the phrase, “it is not far to make you into pieces,” as a statement of intent to commit high-severity violence – targeting the man in the video – it should not be interpreted literally. Given the context of widespread protests in Iran, and the caption and video as a whole, the phrase is figurative and expresses anger and dismay at the regime. Linguistic experts consulted by the Board noted a slightly different translation of the phrase (“we will tear you to pieces sometime soon”), explaining that it conveys anger, disappointment and resentment towards the regime. Rather than triggering harm against the regime, the most likely harm that would result from this post would be retaliatory violence by the regime.
While Meta’s policy rationale suggests “language” and “context” may be considered when evaluating a “credible threat,” Meta’s internal guidance to moderators does not enable this in practice. Moderators are instructed to identify specific criteria (a threat and a target) and, when those are met, to remove content. The Board previously noted its concern about this misalignment in the Iran Protest Slogan case, in which it recommended that Meta provide nuanced guidance on how to consider context, directing moderators to stop default removals of “rhetorical language” expressing dissent. It remains concerning that there is still room for inconsistent enforcement of figurative speech in contexts such as Iran. Furthermore, as the accuracy of automated systems depends on the quality of training data provided by human reviewers, such mistaken removals of figurative speech are likely amplified at scale.
This post was also considered under the Coordinating Harm and Promoting Crime Community Standard because there is a rule prohibiting “content that puts unveiled women at risk by revealing their images without [a] veil against their will or without permission.” The policy line has since been edited to prohibit: “Outing [unveiled women]: exposing the identity of a person and putting them at risk of harm.” On this, the Board agrees with Meta that the content does not “out” the woman in the video and that the risk of harm had abated because her identity was widely known and she had already been arrested. In fact, the post was shared to call attention to her arrest and could help pressure the authorities to release her.
As Iran is designated an at-risk country under Meta’s crisis policies, including the Crisis Policy Protocol, the company is able to apply temporary policy changes (“levers”) to address a particular situation. While the Board recognizes Meta’s efforts on Iran, these have been insufficient to ensure respect for people’s freedom of expression and assembly in environments of systematic repression.
The Oversight Board’s Decision
The Oversight Board has overturned Meta’s original decision to take down the post.
The Board recommends that Meta:
- Add a lever to the Crisis Policy Protocol to make clear that figurative (i.e., not literal) statements that are not intended to, and not likely to, incite violence do not violate the Violence and Incitement policy line that prohibits threats of violence in relevant contexts. This should include developing criteria for at-scale moderators on how to identify such statements in the relevant context.
For Further Information
To read the full decision, click here.
To read a synopsis of public comments for this case, please click here.