Oversight Board Publishes Policy Advisory Opinion on the Removal of COVID-19 Misinformation
April 20, 2023
In July 2022, the Oversight Board accepted a request from Meta to assess whether it should continue to remove certain categories of COVID-19 misinformation, or whether a less restrictive approach would better align with its values and human rights responsibilities. The policy advisory opinion published today is the Board’s response to that request.
To read the full version of the Board’s policy advisory opinion on the removal of COVID-19 misinformation, click here.
Please note: Translations of the full policy advisory opinion into other languages are underway and will be uploaded to our website in the coming weeks.
The Board has conducted an extensive investigation and public consultation. Given Meta’s insistence on taking a single, global approach to COVID-19 misinformation, the Board concludes that, as long as the World Health Organization (WHO) continues to declare COVID-19 an international public health emergency, Meta should maintain its current policy. That means it should continue to remove COVID-19 misinformation that is likely to directly contribute to the risk of imminent and significant physical harm. However, the Board finds that Meta should begin a process to reassess each of the 80 claims it currently removes, engaging a broader set of stakeholders. It should also prepare measures for when the WHO declaration is lifted, to protect freedom of expression and other human rights in those new circumstances. The Board strongly recommends that Meta publish information on government requests to remove COVID-19 content, take action to support independent research of its platforms, examine the link between its platforms’ architecture and misinformation, and promote understanding of COVID-19 misinformation globally.
Background
In early 2020, as the COVID-19 pandemic took hold, Meta began removing from Facebook and Instagram content containing claims that the company identified as COVID-19 misinformation. The list of COVID-19-related claims the company removes has evolved over the course of the pandemic. Today, Meta removes about 80 distinct COVID-19 misinformation claims under its “Misinformation about health during public health emergencies” policy, a subsection of the Misinformation Community Standard created in response to the Board’s recommendations in the “Claimed COVID cure” decision. This policy advisory opinion focuses exclusively on Meta’s actions during the COVID-19 pandemic under the “Misinformation about health during public health emergencies” policy; it does not address actions Meta has taken during the pandemic under other policies.
Under the “Misinformation about health during public health emergencies” policy, Meta removes “misinformation during public health emergencies when public health authorities conclude that the information is false and likely to directly contribute to the risk of imminent physical harm.” Meta relies exclusively on public health authorities to determine whether that standard has been met. The 80 claims it currently removes include, for example, denying the existence of COVID-19 and asserting that COVID-19 vaccines cause magnetism. Meta removed 27 million pieces of COVID-19 misinformation from Facebook and Instagram between March 2020 and July 2022, 1.3 million of which were restored on appeal. COVID-19 misinformation that does not meet the standard for removal can be fact-checked, labeled or demoted. Fact-checkers rate content (for example, as “false” or “missing context”). Meta then labels it as such and links to a fact-checker article on the subject. The company also demotes content labeled by fact-checkers, meaning it appears less frequently and less prominently in users’ feeds, depending on a number of factors. Meta also applies “neutral labels” to COVID-19-related content. These labels contain statements such as, “some unapproved COVID-19 treatments may cause serious harm,” and direct people to Meta’s COVID-19 information center, which provides information on prevention measures, vaccines and resources from public health authorities.
In its request to the Board, Meta asked whether it should continue to remove certain COVID-19 misinformation. Alternatively, the company said, it could stop removing such content and instead demote it, send it to third-party fact-checkers, or label it. Meta insists on taking a single, global approach to COVID-19 misinformation, rather than varying its approach by country or region. According to the company, taking a localized approach to the policy at scale would lead to a lack of clarity for users and poor enforcement, and it lacks the capacity to adopt such an approach. In considering the request, the Board held extensive public consultations. These included a series of virtual roundtables with participants from around the world, convened in partnership with civil society, through which the Board heard from a wide range of experts and stakeholders.
Key Findings and Recommendations
The Board finds that continuing to remove COVID-19 misinformation that is “likely to directly contribute to the risk of imminent physical harm” during a global public health emergency is consistent with Meta’s values and human rights responsibilities. The Board initially explored whether it would be preferable for Meta to take a localized approach to COVID-19 misinformation at scale. However, Meta insisted that this was not feasible without significantly undermining clarity and fairness for users and significantly increasing enforcement errors. Meta’s concerns may be warranted. However, by ruling out this option, Meta has frustrated the Board’s efforts to reconcile competing viewpoints from stakeholders and Board Members on how best to address harmful COVID-19 misinformation while respecting human rights, especially the right to freedom of expression. The 18 recommendations in this policy advisory opinion, most of which are summarized below, work within this constraint.
The Board recommends that Meta:
Continue to remove false content about COVID-19 that is “likely to directly contribute to the risk of imminent physical harm” during the ongoing global public health emergency, while beginning a transparent and inclusive review and reassessment of the 80 claims it currently removes. A public health emergency presents a serious and direct danger to health. Given Meta’s insistence on a single, global approach to COVID-19 misinformation, the Board finds Meta is justified in responding with its current exceptional measures of removing false information likely to directly contribute to the risk of imminent physical harm, as determined by public health authorities. Meta has not, among other things, returned to the relevant public health authorities to ask them to re-evaluate the claims it removes. Nor has it conducted broader stakeholder and expert consultations to re-evaluate the individual claims or the overall policy. As Meta has not yet engaged in a due diligence process to change its policy (which is Meta’s responsibility in the first instance), the Board is not in a position to recommend a policy change that could disproportionately affect the most vulnerable. However, now that we are no longer in the early stages of the crisis, to meet its human rights responsibilities, Meta should regularly assess whether the threshold for removal, set out in its policies, continues to be met. It should therefore begin a transparent process to regularly review the 80 claims subject to removal, consulting a broad range of stakeholders. Only when stakeholders provide clear evidence that a claim has the potential to cause imminent physical harm is it justifiable to include it on the list of claims subject to removal. Meta should share the outcomes of these periodic reviews with the public.
Explore localizing its approach. Meta needs to plan what to do when the WHO stops classifying COVID-19 as a global health emergency but local public health authorities continue to designate it a public health emergency. The Board recommends that Meta initiate a risk assessment process to identify the measures it will take in this scenario. These should address misinformation likely to directly contribute to significant and imminent real-life harm, without compromising freedom of expression globally. The risk assessment should include an evaluation of whether localized enforcement of its policies is feasible.
Assess the impact of its platforms’ architecture. Experts raised concerns that the architecture of Meta’s platforms amplifies harmful health misinformation. Given these claims, the Board recommends that Meta commission a human rights impact assessment of its design choices, examining how its newsfeed, recommendation algorithms, and other features amplify harmful health misinformation, and the impacts of that amplification.
Increase transparency around government requests. At the height of the pandemic, concerns were raised about Meta reviewing COVID-19-related content at the behest of governments. This is particularly problematic where governments make requests to crack down on peaceful protesters or human rights defenders, to control conversations about the origins of the pandemic, and to silence those criticizing or questioning government responses to the public health crisis. The United Nations has raised concerns that some governments have used the pandemic as a pretext to erode the tenets of democracy. Meta should be transparent and report regularly on state actor requests to review content under the "Misinformation about health during public health emergencies” policy.
Support independent research and promote understanding of COVID-19 misinformation. The Board heard from experts that wider efforts to understand COVID-19 misinformation, and the effectiveness of Meta’s response to it, are frustrated by a lack of access to the company’s data and research. A lack of data also created challenges for the Board in assessing the merits of this policy advisory opinion request. The Board recognizes that, in comparison with other social media companies, Meta has taken significant steps to share data with external researchers, many of whom told the Board of the importance of Meta tools such as CrowdTangle and Facebook Open Research and Transparency (FORT). At the same time, researchers have also complained about the difficulty of accessing tools such as FORT. Meta should continue to make these tools available, improve their accessibility, and allow external researchers to access data that is not public. The Board also recommends that the company conduct research and publish data on its efforts to enforce its COVID-19 policies, and that it publish the findings of the “neutral labels” research shared with the Board. Finally, it recommends that Meta take steps to expand access to the company’s data for researchers and universities in the Global Majority (also referred to as the Global South), and that it support digital literacy programs across the world.
Increase fairness, clarity and consistency around the removal of COVID-19 misinformation. To meet its human rights responsibilities, Meta must also make sure its rules are clear to users. To this end, the company should explain how each category of the COVID-19 claims it removes directly contributes to the risk of imminent physical harm. It should also explain the basis of its assessment that a claim is false and create a record of any changes made to the list of claims it removes. To support the consistent application of its rules across languages and regions, the company should translate its internal guidance for content moderators into the languages in which it operates. Meta should also protect users’ right to a remedy by expanding their ability to appeal fact-checker labels, and by ensuring such appeals are not reviewed by the fact-checker who made the initial decision.
For Further Information
To read the Board's full policy advisory opinion, as well as Meta's request to the Board and public comments submitted for this opinion, please click here.