A person scrutinizing a sphere she’s holding in her hand, while shapes and clouds float around her.

Oversight Board announces new cases and review of Meta’s COVID-19 misinformation policies


July 2022

Today, the Oversight Board announced that it has accepted a request from Meta for a policy advisory opinion on its removal of COVID-19 misinformation. The Board also announced new cases for consideration concerning gender identity and nudity, hate speech and Russia’s invasion of Ukraine, and UK drill music.

As part of both the policy advisory opinion and the new cases, we are inviting people and organizations to submit public comments.

  • Submit comments on the COVID-19 policy advisory opinion here.
  • Submit comments on the “Gender identity and nudity” cases here.
  • Submit comments on the “Russian poem” case here.
  • Submit comments on the “UK drill music” case here.

Policy advisory opinion

Beyond reviewing individual cases to remove or restore content, the Board can also accept requests from Meta for guidance on its wider content policies. After receiving the request from Meta and input from external stakeholders, the Board provides detailed recommendations on changes that Meta should make to its policies on a given topic.

Meta must put the Board's recommendations through its official policy development process and give regular updates on this, including through its newsroom. While the Board's policy advisory opinions are not binding, Meta must provide a public response and set out follow-on actions within 60 days of receiving our recommendations.

To date, the Board has taken on two other policy advisory opinions, publishing its first in February 2022 on the sharing of private residential information on Facebook and Instagram, and beginning a review of Meta’s cross-check system in October 2021.

Removal of COVID-19 misinformation (PAO 2022-01)

Submit public comment here.

Meta has requested a policy advisory opinion on its approach to COVID-19 misinformation, as outlined in the company’s policy on harmful health misinformation.

Meta’s request to the Board is available in full here.

In its request, Meta asks the Board whether it should continue removing content under this policy or whether another, less restrictive, approach would better align with the company’s values and human rights responsibilities.

Meta informed the Board that its approach to misinformation on its platforms mainly relies on contextualizing potentially false claims and reducing their reach, rather than removing content. Because it is difficult to precisely define what constitutes misinformation across a whole range of topics, removing misinformation at scale risks unjustifiably interfering with users’ expression. However, the company began adopting a different approach in January 2020, as the widespread impact of COVID-19 started to become apparent. Meta moved towards removing entire categories of misinformation about the pandemic from its platforms. Meta states that it did this because “outside health experts told us that misinformation about COVID-19, such as false claims about cures, masking, social distancing, and the transmissibility of the virus, could contribute to the risk of imminent physical harm.”

Meta’s current approach is to remove misinformation that is likely to directly contribute to the risk of imminent physical harm and to label, fact-check and demote misinformation that does not meet the “imminent physical harm” standard.

According to Meta, it “remove[s] harmful health misinformation if the following criteria are met: (1) there is a public health emergency; (2) leading global health organizations or local health authorities tell us a particular claim is false; and (3) those organizations or authorities tell us the claim can directly contribute to the risk of imminent physical harm.” The Help Center article on COVID-19 provides a list of 80 “distinct false claims” that the company removes because each “directly contributes to a risk of imminent physical harm as assessed by a relevant external expert.” These claims include false cures, false information designed to discourage treatment, false prevention information, false information about the availability of or access to health resources, and false information about the location or severity of a disease outbreak.
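To make the shape of this rule concrete, the following is a minimal sketch in Python of how a claim might be tested against the three criteria Meta describes. The class, field, and function names are hypothetical illustrations, not Meta’s actual systems or terminology.

```python
# Minimal sketch of the removal rule described above: a harmful health
# misinformation claim is removed only when all three stated criteria hold.
# All names here are hypothetical, not Meta's internal systems.
from dataclasses import dataclass

@dataclass
class HealthClaim:
    text: str
    public_health_emergency: bool        # criterion 1: a public health emergency exists
    flagged_false_by_authorities: bool   # criterion 2: health authorities say the claim is false
    imminent_physical_harm_risk: bool    # criterion 3: authorities say it can contribute to imminent harm

def should_remove(claim: HealthClaim) -> bool:
    """Remove only if every criterion in Meta's stated test is met."""
    return (claim.public_health_emergency
            and claim.flagged_false_by_authorities
            and claim.imminent_physical_harm_risk)

# Example: a claim flagged by health authorities during an emergency.
claim = HealthClaim(
    text="False cure claim",
    public_health_emergency=True,
    flagged_false_by_authorities=True,
    imminent_physical_harm_risk=True,
)
print(should_remove(claim))  # True -> removed under the stated policy
```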

For content that does not fall within these standards for removal, the company relies on third-party fact checking organizations to review and rate the accuracy of the most viral content. Independent fact checkers review individual pieces of content and can label content “False,” “Altered,” “Partly False,” or “Missing Context.” Content that is labeled “False” or “Altered” is covered by a warning screen, requiring users to click through to view the content. The warning screen also provides links to articles provided by the fact-checker debunking the claim. Content labeled “Partly False” or “Missing Context” has a less intrusive label, which does not obscure the post and does not require clicking through to view the content. This label also provides a link to articles provided by the fact-checker. According to Meta, content rated “False,” “Altered,” or “Partly False” is demoted in users’ feeds, while content rated “Missing Context” is generally not demoted.
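As a rough illustration of the treatments described above, here is a minimal sketch assuming a simple mapping from fact-checker ratings to the interventions Meta describes (warning screens, informational labels, demotion). The rating strings come from the paragraph above; the function and field names are hypothetical.

```python
# Sketch of the rating-to-treatment mapping described above.
# Rating strings come from the text; function and field names are hypothetical.
from typing import TypedDict

class Treatment(TypedDict):
    warning_screen: bool   # obscures the post; user must click through to view it
    label: bool            # label linking to the fact-checker's articles
    demote: bool           # reduced distribution in users' feeds

def treatment_for(rating: str) -> Treatment:
    if rating in ("False", "Altered"):
        # Covered by a warning screen, linked to debunking articles, demoted.
        return {"warning_screen": True, "label": True, "demote": True}
    if rating == "Partly False":
        # Less intrusive label; post remains visible; still demoted.
        return {"warning_screen": False, "label": True, "demote": True}
    if rating == "Missing Context":
        # Labeled, but generally not demoted.
        return {"warning_screen": False, "label": True, "demote": False}
    # Unrated content receives no fact-check treatment in this sketch.
    return {"warning_screen": False, "label": False, "demote": False}

print(treatment_for("Partly False"))
```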

Meta also states that it employs a temporary emergency reduction measure when “misinformation about a particular crisis spikes on our platforms and our third-party fact-checkers cannot keep up with rating those claims.” In such circumstances, the company says it demotes important and repeatedly fact-checked claims at scale.

Meta states in its request that, in limited circumstances, it may add a label to non-violating content on COVID-19 that directs users to Meta’s COVID-19 Information Center. According to the company, “[t]hese labels do not signal judgement on whether the post is true or false.”

In its request for a policy advisory opinion, Meta points to the changed landscape surrounding COVID-19 as the reason the company seeks the Board’s advice on its current approach. First, according to Meta, there was a lack of authoritative guidance at the beginning of the pandemic, which “created an information vacuum that encouraged the spread of rumors, speculation, and misinformation.” Today, people have greater access to information: “While misinformation about COVID-19 continues to exist, data-driven, factually reported information about the pandemic has been published at an astounding rate.” Second, the development of vaccines and therapeutic treatments, and the evolution of disease variants, mean that COVID-19 is less deadly. Finally, Meta states that “public health authorities are actively evaluating whether COVID-19 has evolved to a less severe state.” Meta recognizes in its request to the Board that the course of the pandemic has varied and will continue to vary across the world, noting that variation in vaccination rates, health care system capacity and resources, and lower trust in government guidance contribute to the disproportionate effect the disease is likely to have on people in different countries.

Meta’s request notes the Board’s prior decisions about COVID-19, “Claimed COVID Cure” and “COVID lockdowns in Brazil.” Meta’s response to the Board’s policy recommendations in these cases can be found here.

Questions posed by Meta to the Board:

Meta presented the following policy options to the Board for its consideration:

  1. Continue removing certain COVID-19 misinformation. This option would mean continuing with Meta’s current approach of removing content that directly contributes to the risk of imminent physical harm. Meta states that under this option the company would eventually stop removing misinformation when it no longer poses an imminent risk of harm and requests the Board’s guidance on how the company should make this determination.
  2. Temporary emergency reduction measures. Under this option, Meta would stop removing COVID-19 misinformation and instead reduce the distribution of the claims. This would be a temporary measure and the company requests the Board’s guidance as to when it should stop using it if adopted.
  3. Third-party fact checking. Under this option, content currently subject to removal would be sent to independent third-party fact checkers for evaluation. Meta notes that “the number of fact-checkers available to rate content will always be limited. If Meta were to implement this option, fact-checkers would not be able to look at all COVID-19 content on our platforms, and some of it would not be checked for accuracy, demoted, and labeled.”
  4. Labels. Under this option, Meta would add labels to content which would not obstruct users from seeing the content but would provide direct links to authoritative information. Meta considers this a temporary measure and seeks the Board’s guidance on what factors the company should consider in deciding to stop using these labels.

Meta explained to the Board that each of these options has advantages and disadvantages, particularly in terms of scalability, accuracy, and the amount of content affected. For technical reasons, the company strongly supports taking a global approach, rather than adopting country- or region-specific approaches.

While the Board will consider the specific options provided by Meta, the Board's recommendations and policy advisory opinion might not be limited to these options.

The Board requests public comments that address:

  • The prevalence and impact of COVID-19 misinformation in different countries or regions, especially in places where Facebook and Instagram are a primary means of sharing information, and in places where access to health care, including vaccines, is limited.
  • The effectiveness of social media interventions to address COVID-19 misinformation, including how they affect the spread of misinformation, trust in public health measures and public health outcomes, as well as their impact on freedom of expression, in particular civic discourse and scientific debate.
  • Criteria Meta should apply for lifting temporary misinformation interventions as emergency situations evolve.
  • The use of algorithmic or recommender systems to detect and apply misinformation interventions, and ways of improving the accuracy and transparency of those systems.
  • The fair treatment of users whose expression is impacted by social media interventions to address health misinformation, including the user’s ability to contest the application of labels, warning screens, or demotion of their content.
  • Principles and best practice to guide Meta’s transparency reporting of its interventions in response to health misinformation.

Public comments

An important part of the Board's process for developing a policy advisory opinion is gathering additional insights and expertise from individuals and organizations. This input will allow board members to tap into more knowledge and understand how Meta’s policies affect different people in different parts of the world.

If you or your organization feel that you can contribute valuable perspectives to this request for a policy advisory opinion, you can submit your contributions here.

The public comment window for the policy advisory opinion request announced today is open until 15:00 UTC on Thursday 25 August 2022.

This timeline is longer than the public comment period for new cases as the policy advisory opinion process does not have the same time constraints as case decisions. Additionally, public comments can be up to six pages in length and submitted in any of the languages available on the Board's website to allow broader participation on the issues at stake. The full list of languages is available through the link above.

What's next

Now that the Board has accepted this request for a policy advisory opinion, it is collecting necessary information, including through the call for public comments which has launched today. Following deliberations, the full Board will then vote on the policy advisory opinion. If approved, this will be published on the Board's website.

New cases

As we cannot hear every appeal, the Board prioritizes cases that have the potential to affect many users around the world, are of critical importance to public discourse, or raise important questions about Meta’s policies.

The cases we are announcing today are:

Gender identity and nudity (2022-009-IG-UA, 2022-010-IG-UA)

User appeals to restore content to Instagram

Submit public comments here.

These cases concern two content decisions made by Meta, which the Oversight Board intends to address together. Two separate images with captions were posted on Instagram by the same account, which is jointly maintained by a US-based couple. Both images feature the couple, who stated in the posts, and in their submissions to the Board, that they identify as transgender and non-binary. In the first image, posted in 2021, both people are nude from the waist up and have flesh-colored tape over their nipples, which are not visible. In the second image, posted in 2022, one person is clothed while the other is bare-chested and covering their nipples with their hands. The captions accompanying these images discuss how the person who is bare-chested in both pictures will soon have top surgery. The captions set out the couple’s plans to document the surgery process and discuss trans healthcare issues, and include fundraiser announcements to help pay for the surgery.

Meta removed both posts under the Sexual Solicitation Community Standard. In both cases, Meta’s automated systems identified the content as potentially violating.

  • In the first case, the report was automatically closed without being reviewed. Three users then reported the content for pornography and self-harm. These reports were reviewed by human moderators who found the post to be non-violating. When the content was reported for a fourth time, another human reviewer found the post to be violating and removed it.
  • In the second case, the post was identified twice as potentially violating by Meta’s automated systems, sent for human review and found to be non-violating. Two users then reported the content, but each report was closed automatically without being reviewed. Finally, Meta’s automated systems identified the content as potentially violating for a third time and sent it for human review. The reviewer found the post to be violating and removed it.

The account owners appealed both removal decisions to Meta, and the company maintained its decisions to remove both posts.

The account owners appealed both removal decisions to the Board, which will consider them together. In their statements to the Board, the couple express confusion about how their content violated Meta’s policies. They explain that the breasts in the photos are not those of women, and that it is important that transgender bodies are not censored on the platform, especially when trans rights and access to gender-affirming healthcare are being threatened in the United States.

As a result of the Board selecting these posts, Meta identified the removals as “enforcement errors” and restored the posts.

The Board would appreciate public comments that address:

  • Whether Meta’s policies on Nudity and Sexual Solicitation sufficiently respect the rights of trans and non-binary users.
  • Whether the gender confirmation surgery exception to Meta’s prohibition on female nipples in the nudity policy is effective in practice.
  • Whether Meta has sufficient procedures in place to handle reports against non-violating content and to mitigate the risk of mistaken removal.
  • How Meta’s use of automation to detect sexual solicitation and nudity could be improved.
  • Insights on the socio-political context in the United States (and around the world) particularly regarding any challenges or limitations to freedom of expression, including gender expression and expression about trans and non-binary rights and issues of access to gender-affirming healthcare.
  • Insights on the role of social media globally as a resource and forum for expression for trans and non-binary users.

In its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to these cases.

Russian poem (2022-008-FB-UA)

User appeal to restore content to Facebook

Submit public comments here.

In April 2022, a Facebook user in Latvia posted a photo and text in Russian to their newsfeed. The photo shows a street view with a person lying still, likely deceased, on the ground. No wounds are visible. In the text, the user comments on alleged crimes committed by Soviet soldiers in Germany during the Second World War. They say such crimes were excused on the basis that soldiers were avenging the horrors that the Nazis had inflicted on the USSR.

The user draws a connection between the Second World War and the invasion of Ukraine, arguing that the Russian army “became fascist.” They write that the Russian army in Ukraine “rape[s] girls, wound[s] their fathers, torture[s] and kill[s] peaceful people.”

The user concludes that “after Bucha, Ukrainians will also want to repeat ... and will be able to repeat” such actions. At the end of their post, the user shares excerpts of the poem “Kill him!” by Soviet poet Konstantin Simonov, including the lines: “kill the fascist so he will lie on the ground’s backbone, not you”; “kill at least one of them as soon as you can”; “Kill him! Kill him! Kill!”.

The post was viewed approximately 20,000 times. The same day the content was posted, another user reported it as “violent and graphic content.” Based on a human reviewer’s decision, Meta removed the content for violating its Hate Speech Community Standard. Hours later, the user who posted the content appealed, and a second reviewer assessed the content as violating the hate speech policy.

The user appealed to the Oversight Board. As a result of the Board selecting the appeal for review on May 31, 2022, Meta determined that its previous decision to remove the content was in error and restored it. On June 24, 2022, 24 days after the content was restored, Meta applied a warning screen to the photo in the post under the Violent and Graphic Content Community Standard, on the basis that it shows the violent death of a person.

In their appeal to the Board, the user states that the photo they shared is the “most innocuous” of the pictures documenting the “crimes of the Russian army in the city of Bucha,” “where dozens of dead civilians lie on the streets.” The user says their post does not call for violence and is about “past history and the present.” They say the poem was originally dedicated to the “struggle of Soviet soldiers against the Nazis,” and that they posted it to show how “the Russian army became an analogue of the fascist army.” As part of their appeal, they state they are a journalist and believe it is important for people to understand what is happening, especially in wartime.

The Board would appreciate public comments that address:

  • How Meta’s policies should treat hate speech or incitement to violence on the basis of nationality in the context of an international armed conflict, including when it is potentially targeted at the military.
  • How Meta should take into account the laws of armed conflict when moderating content about armed conflict.
  • Whether Meta’s policy should distinguish between attacks on institutions (such as the army or military) and individuals within those institutions (such as soldiers).
  • Insights related to Meta's moderation of content that includes commentary from journalists and/or artistic expression, particularly art that may address sensitive themes such as war.
  • The work of Konstantin Simonov, the context surrounding it, and how it is referenced today, including in relation to the current conflict.
  • Insights related to the sharing and visibility of photographs depicting potential human rights violations or war crimes in armed conflicts on Meta’s platforms.

In its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. The Board welcomes public comments proposing recommendations that are relevant to this case.

UK drill music (2022-007-IG-MR)

Case referred by Meta

Submit public comments here.

In January 2022, a public Instagram account that describes itself as publicising British music posted a video with a short caption. The video is a 21-second clip of the music video for a UK drill music track called “Secrets Not Safe” by the rapper Chinx (OS). The caption tags Chinx (OS) as well as an affiliated artist and highlights that the track had just been released. The clip shows part of the second verse of the song and fades to a black screen with the text “OUT NOW.” Drill is a subgenre of rap music popular in the UK, with a large number of drill artists active in London.

Shortly after the video was posted, Meta received a request from UK law enforcement to remove content that included this track. Meta says law enforcement informed it that elements of the track could contribute to a risk of offline harm. The company was also aware that the track referenced a past shooting in a way that raised concerns it might provoke further violence. As a result, the post was escalated for internal review by experts at Meta.

Meta’s experts determined the content violated the Violence and Incitement policy, specifically the prohibition on “coded statements where the method of violence or harm is not clearly articulated, but the threat is veiled or implicit.” The Community Standards list signs that content may include veiled or implicit threats, including content that is “shared in a retaliatory context” and content with “references to historical or fictional incidents of violence.” Further information and/or context is always required to identify and remove content in a number of categories listed at the end of the Violence and Incitement policy, including veiled threats. Meta has explained to the Board that these categories are not enforced through at-scale review (the standard review process conducted by outsourced moderators) and can only be enforced by Meta’s internal teams. Meta has further explained that the Facebook Community Standards apply to Instagram.

When Meta took the content down, two days after it was posted, it also removed copies of the video posted by other accounts. Based on the information it received from UK law enforcement, Meta’s Public Policy team believed that the track “might increase the risk of potential retaliatory gang violence” and “acted as a threatening call to action that could contribute to a risk of imminent violence or physical harm, including retaliatory gang violence.”

Hours after the content was removed, the account owner appealed. A human reviewer assessed the content to be non-violating and restored it to Instagram. Eight days later, following a second request from UK law enforcement, Meta removed the content again and took down other instances of the video found on its platforms. The account in this case has fewer than 1000 followers, the majority of whom live in the UK. The user received notifications from Meta both times their content was removed but was not informed that the removals were initiated following a request from UK law enforcement.

In referring this matter to the Board, Meta states that this case is particularly difficult as it involves balancing the competing interests of artistic expression and public safety. Meta explains that, while the company places a high value on artistic expression, it is difficult to determine when that expression becomes a credible threat. Meta asks the Board to assess whether, in this case and more generally, the safety risks associated with the potential instigation of gang violence outweigh the value of artistic expression in drill music.

The Board would appreciate public comments that address:

  • The artistic and cultural significance of UK drill music and any relationship between the sharing of this music online and acts of real-world violence.
  • Meta’s human rights responsibility to respect artistic expression, as well as to ensure its platforms are not used to incite violence, and how this should inform its approach to content moderation of music.
  • Whether Meta’s policies on violence and incitement should include allowances for humour, satire, or artistic expression, and if so, how they should be worded, and how they should be enforced accurately at scale.
  • How social media platforms should manage law enforcement requests for the review or removal of content that does not violate national laws but may breach a platform’s content rules.
  • How social media platforms should incorporate law enforcement requests for content removal, especially requests not based on alleged illegality, into their transparency reporting.

In its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to this case.

Public comments

If you or your organization feel that you can contribute valuable perspectives that can help with reaching a decision on the cases announced today, you can submit your contributions using the links above. The public comment window for these cases is open for 14 days, closing at 15:00 UTC on Tuesday 9 August 2022.

What’s next

In the coming weeks, Board Members will be deliberating these cases. Once they have reached their final decisions, we will post them on the Oversight Board website. To receive updates when the Board announces new cases or publishes decisions, sign up here.

Attachments

COVID misinformation full PAO request