New Decision: Allow Third-Party Imagery of Terrorist Attacks, With a Warning, When Posts Condemn or Share Information on These Acts

The Board has overturned Meta’s decisions to remove three Facebook posts showing footage of the March 2024 terrorist attack in Moscow, requiring the content to be restored with “Mark as Disturbing” warning screens.

While the posts violated Meta’s rules on showing the moment of designated attacks on visible victims, removing them was not consistent with the company’s human rights responsibilities. According to the majority of the Board, the posts, which discussed an event that was front-page news worldwide, are of high public interest value and should have been protected under the newsworthiness allowance. In a country such as Russia, with its closed media environment, access to such content on social media is even more important. Each post contains clear language condemning the attack and showing solidarity with or concern for the victims, with no clear risk of leading to radicalization or incitement.

Suppressing content on matters of vital public concern based on unsubstantiated fears that it could promote radicalization is not consistent with Meta’s responsibilities to free expression. As such, Meta should allow, with a “Mark as Disturbing” warning screen, third-party imagery of a designated event showing the moment of attacks on visible but not identifiable victims when shared for news reporting, condemnation and raising awareness.

About the Cases 

The Board has reviewed three cases together involving content posted on Facebook by different users immediately after the March 22, 2024, terrorist attack at a concert venue and retail complex in Moscow.

The first case featured a video showing part of the attack inside the retail complex, seemingly filmed by a bystander. While the attackers and people being shot were visible but not easily identifiable, others leaving the building were identifiable. The caption asked what was happening in Russia and included prayers for those impacted.

The second case featured a shorter clip of the same footage, with a caption warning viewers about the content and stating that there is no place in the world for terrorism.

The third case involved a post shared on a Facebook group page by an administrator. The group’s description expresses support for former French presidential candidate Éric Zemmour. The post included a still image from the attack, which could have been taken from the same video, showing the gunmen and victims. It also included a short video of the retail complex on fire, filmed by someone driving past. The caption stated that Ukraine had said it had nothing to do with the attack, while pointing out that nobody had claimed responsibility. The caption also included a statement of support for the Russian people.

Meta removed all three posts for violating its Dangerous Organizations and Individuals policy, which prohibits third-party imagery depicting the moment of such attacks on visible victims. Meta designated the Moscow attack as a terrorist attack on the day it happened. According to Meta, the same video shared in the first two cases had already been posted by a different user earlier that day and escalated to the company’s policy or subject matter experts for additional review. Following that review, Meta decided to remove the video and added it to a Media Matching Service (MMS) bank. The MMS bank subsequently determined that the content in the first two cases matched the banked video that had been tagged for removal and automatically removed both posts. In the third case, the content was removed by Meta following human review.

The attack carried out on March 22, 2024, at Moscow’s Crocus City Hall claimed the lives of at least 143 people. ISIS-K, an affiliate of the Islamic State, claimed responsibility soon after the attack. According to experts consulted by the Board, tens of millions of Russians watched the video of the attack on state-run media channels, as well as on Russian social media platforms. While Russian President Vladimir Putin claimed there were links to Ukraine and support from Western intelligence for the attack, Ukraine has denied any involvement.

Key Findings

While the posts were either reporting on, raising awareness of or condemning the attacks, Meta does not apply these exceptions under the Dangerous Organizations and Individuals policy to “third-party imagery depicting the moment of [designated] attacks on visible victims.” As such, it is clear to the Board that all three posts violate Meta’s rules.

However, the majority of the Board finds that removing this content was not consistent with Meta’s human rights responsibilities, and that the content should have been protected under the newsworthiness allowance. All three posts concerned a subject of pressing public debate related to an event that was front-page news worldwide. There is no clear risk of the posts leading to radicalization or incitement. Each post contains clear language condemning the attack, showing solidarity with or concern for the victims, and seeking to inform the public. The lack of media freedom in Russia and the fact that the victims are not easily identifiable further weigh these posts toward the public interest.

Suppressing content on matters of vital public concern based on unsubstantiated fears that it could promote radicalization is not consistent with Meta’s responsibilities to free expression. This is particularly the case when the footage has been viewed by millions of people and has been accompanied by allegations that the attack was partly attributable to Ukraine. The Board notes the importance of maintaining access to information during crises, particularly in Russia, where people rely on social media to access information or to raise awareness among international audiences.

While, in certain circumstances, removing content depicting identifiable victims is necessary and proportionate (e.g., in armed conflict when victims are prisoners of war), as the victims in these cases are not easily identifiable, restoring the posts with an age-gated warning screen is more in line with Meta’s human rights responsibilities. Therefore, Meta should amend its policy to allow third-party imagery of visible but not personally identifiable victims when clearly shared for news reporting, condemnation or awareness raising.

A minority of the Board disagrees and would uphold Meta’s decisions to remove the posts from Facebook. For the minority, the graphic nature of the footage and the fact that it shows the moment of attack and, in this case, death of visible victims, makes removal necessary for the dignity of the victims and their families.

In addition, the Board finds that the current placement of the rule on footage of designated violent events under the Dangerous Organizations and Individuals policy creates confusion for users. While the “We remove” section implies that condemnation and news reporting are permissible, other sections state that perpetrator-generated imagery and third-party imagery of the moment of attacks on visible victims are prohibited, without specifying that Meta will remove such content even if it condemns or raises awareness of attacks.

The Oversight Board’s Decision

The Oversight Board overturns Meta’s decisions to remove the three posts, requiring the content to be restored with “Mark as Disturbing” warning screens.

The Board also recommends that Meta:

  • Allow, with a “Mark as Disturbing” warning screen, third-party imagery of a designated event showing the moment of attacks on visible but not personally identifiable victims when shared in the contexts of news reporting, condemnation and raising awareness.
  • Include a rule on designated violent events under the “We remove” section of the Dangerous Organizations and Individuals Community Standard, and move the explanation of how Meta treats content depicting designated events out of the policy rationale section and into this section.

For Further Information

To read public comments for this case, click here.
