Multiple Case Decision
Australian Electoral Commission Voting Rules
The Oversight Board has upheld Meta’s decisions to remove two separate Facebook posts containing the same screenshot of information posted on X by the Australian Electoral Commission, ahead of Australia’s Indigenous Voice to Parliament Referendum.
2 cases included in this bundle
FB-0TGD816L
Case about coordinating harm and publicizing crime on Facebook
FB-8ZQ78FZG
Case about coordinating harm and publicizing crime on Facebook
Summary
The Oversight Board has upheld Meta’s decisions to remove two separate Facebook posts containing the same screenshot of information posted on X by the Australian Electoral Commission (AEC), ahead of Australia’s Indigenous Voice to Parliament Referendum. Both posts violated the rule in the Coordinating Harm and Promoting Crime Community Standard that prohibits content calling for illegal participation in a voting process. These cases show how information taken out of context can impact people’s right to vote. The Board recommends that Meta more clearly explain its voter and/or census fraud-related rules by publicly providing its definition of “illegal voting.”
About the Cases
On October 14, 2023, Australia held its Indigenous Voice to Parliament Referendum. Days before, a Facebook user posted a screenshot of an X post from the AEC’s official account in a group. The information shown included: “If someone votes at two different polling places within their electorate, and places their formal vote in the ballot box at each polling place, their vote is counted.” The screenshot also shows another comment from the same X thread, which explained that the secrecy of the ballot prevents the AEC from “knowing which ballot paper belongs to which person,” while also stating that “the number of double votes received is incredibly low.” However, the screenshot does not show all the information shared by the AEC, including that voting multiple times is an offence. The caption for the post stated: “vote early, vote often, and vote NO.”
A second post shared by a different Facebook user contained the same screenshot but had text overlay with the statement: “so you can vote Multiple times. They are setting us up for a ‘Rigging’ … smash the voting centres … it’s a NO, NO, NO, NO, NO.”
The Voice Referendum asked Australians whether the Constitution should be amended to recognize Aboriginal and Torres Strait Islander peoples by establishing a body that could make representations to Parliament on matters relating to them.
Voting is compulsory in Australia, with the AEC reporting turnout of about 90% in every election and referendum since 1924. Multiple voting is illegal and a type of electoral fraud.
After Meta’s automated systems detected both posts, human reviewers removed them for violating Meta’s Coordinating Harm and Promoting Crime policy. Both users appealed.
Key Findings
The Board finds that both posts violated the Coordinating Harm and Promoting Crime rule that prohibits content “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process.” In the first case, the phrase “vote often,” in combination with the AEC’s information on counting of multiple votes, is a clear call to engage in illegal voting. Voting twice is a type of “illegal voting,” as per Meta’s internal guidelines. In the second case, the use of the phrase “smash the voting centres,” alongside the rest of the text overlay, can be understood as advocating for people to flood polling places with multiple votes. Neither of the posts benefits from the policy exceptions for condemning, awareness raising, news reporting, or humorous or satirical contexts. Specifically, the posts do not fall under the awareness-raising exception because they go beyond discussing the AEC’s X post and instead decontextualize its information to imply the AEC says that voting more than once is allowed.
Preventing users from calling on others to engage in voter fraud serves the legitimate aim of protecting the right to vote. The Board regards political speech as a vital component of democratic processes. In these cases, both users were directly engaging in the public debate sparked by the referendum, but their calls for others to engage in illegal behavior impacted the political rights of people living in Australia, particularly the right to vote. So, while the calls to “vote No” are protected political speech, the phrases “vote often” and “smash the voting centres” are a different matter. The Board finds that Meta was correct to protect democratic processes by preventing voter fraud attempts from circulating on its platforms, given the frequent claims that the Voice Referendum was rigged.
The Board acknowledges Meta’s efforts on the Voice Referendum. The company proactively identified potentially violating content under the voting interference rules of the Coordinating Harm and Promoting Crime and Misinformation Community Standards. The phrases “double vote” and “vote multiple times” were the keywords that activated the company’s keyword-based detection system in this case. According to Meta, the system is adapted to local contexts. Based on the information shared, the Board notes that initiatives like these should be applied consistently across the globe in countries undergoing elections, and it encourages Meta to develop success metrics for assessing how effective keyword-based detection is.
Finally, the Board finds that the public-facing rules of the Coordinating Harm and Promoting Crime Community Standard are not clear enough. They do not include what is available to reviewers in Meta’s internal guidelines, namely the company’s definition of “illegal voting.” Since it is crucial that users can engage on social media to discuss public-interest issues about democratic events, Meta needs to clearly inform users of the rules.
The Oversight Board’s Decision
The Oversight Board upholds Meta’s decisions in both cases to remove the content.
The Board recommends that Meta:
- Incorporate its definition of the term “illegal voting” into the public-facing language of the Coordinating Harm and Promoting Crime policy’s prohibition on content “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process, except if shared in condemning, awareness raising, news reporting, or humorous or satirical contexts.”
* Case summaries provide an overview of cases and do not have precedential value.
Full Case Decision
1. Decision Summary
The Oversight Board upholds Meta’s decisions to remove two separate posts on Facebook containing a screenshot of a post by the Australian Electoral Commission (AEC) on X, previously known as Twitter. The screenshots from the AEC posted by the Facebook users included the following language: “If someone votes at two different polling places within their electorate, and places their formal vote in the ballot box at each polling place, their vote is counted.” In the first Facebook post, the screenshot was accompanied by a caption stating “vote early, vote often, and vote NO.” In the second Facebook post, the screenshot was accompanied by text overlay, which included: “so you can vote Multiple times … they are setting us up for a ‘Rigging’ … smash the voting centres… it’s a NO, NO, NO, NO, NO.” The caption also contained a “stop” emoji followed by the words “Australian Electoral Commission.”
The Board finds that both posts violated the Coordinating Harm and Promoting Crime Community Standard, which prohibits “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process, except if shared in condemning, awareness raising, news reporting, or humorous or satirical contexts.” The Board finds that none of the exceptions apply.
These cases raise broader concerns around the sharing of decontextualized information against the backdrop of democratic processes, such as elections and referenda, with the potential of impacting people’s right to vote. The Board recommends that Meta more clearly explain the voter and/or census fraud-related policy lines under the Coordinating Harm and Promoting Crime Community Standard to clarify what constitutes an “illegal participation in a voting or census process.”
2. Case Description and Background
On October 14, 2023, Australia held its Indigenous Voice to Parliament Referendum (hereinafter “Voice Referendum”). Days before the vote, a Facebook user in a group they administered shared a post with a screenshot of an X post from the official account of the Australian Electoral Commission (AEC). The AEC’s post on X included the following language: “If someone votes at two different polling places within their electorate, and places their formal vote in the ballot box at each polling place, their vote is counted.” The screenshot also shows another comment from the same thread on X, which explains that the secrecy of the ballot prevents the AEC from “knowing which ballot paper belongs to which person,” while also reassuring people that “the number of double votes received is incredibly low.” However, the screenshot does not show all the information shared by the AEC, including that voting multiple times is an offence in Australia. A caption accompanied the first Facebook post, stating: “vote early, vote often, and vote NO.”
Another post containing the same screenshot of the AEC’s post on X was shared a day later by a different Facebook user on their profile. It was accompanied by text overlay, which included the following statement: “so you can vote Multiple times. They are setting us up for a ‘Rigging’ ... smash the voting centres ... it's a NO, NO, NO, NO, NO.” The caption also contained a “stop” emoji followed by the words “Australian Electoral Commission.”
Both posts were proactively detected by Meta. The phrases “double vote” and “vote multiple times” were the keywords that activated the company’s “keyword-based pipeline initiative” in this case. This keyword-based detection approach is a systematic procedure deployed by Meta to proactively identify potentially violating content, “including, but not limited to, content related to voter and census interference.” Both posts were then automatically queued for human review. Following human review, both posts were removed for violating the Coordinating Harm and Promoting Crime policy. Meta also applied a standard strike and a 30-day feature limit to both user profiles, which prevented the users from posting or commenting in Facebook groups, creating new groups, or joining Messenger rooms.
The Board noted the following context in reaching its decisions in these cases:
The Voice Referendum asked whether Australia’s Constitution should be amended to recognize the First Peoples of Australia “by establishing a body called the Aboriginal and Torres Strait Islander Voice,” which would have been able to “make representations to the Parliament and the Executive Government of the Commonwealth on matters relating to Aboriginal and Torres Strait Islander peoples.” Relevant background information about the Voice Referendum includes the fact that the Aboriginal and Torres Strait Islander peoples in Australia are among the most socially and economically disadvantaged groups in the country, experiencing high levels of unemployment, lower participation in higher education, poor health outcomes (both physical and mental), far shorter life expectancy than other Australians and high levels of incarceration. Aboriginal and Torres Strait Islander peoples also face discrimination and are disproportionately impacted by gender and police violence.
Prime Minister Anthony Albanese campaigned in favor of the constitutional amendment (supporting “Yes”), while Australia's main opposition coalition campaigned against it (supporting “No”). The proposal was rejected nationally and by a majority in all six states, thus failing to secure the double majority needed to amend the Australian Constitution.
Voting is compulsory in Australia and the AEC reports that voter turnout has been approximately 90% in every general election and referendum since 1924. Multiple voting is a type of electoral fraud at both state and federal levels, under the Commonwealth Electoral Act 1918 and the Referendum (Machinery Provisions) Act 1984. In response to allegations of multiple voting in the Voice Referendum, the AEC posted a lengthy thread on X, which stated that multiple voting is “very rare” and outlined the measures the AEC has in place to prevent the practice. The AEC explains on its website that, to counter double voting, identical certified lists of all voters for a division are issued to each polling place. When electors are issued with a set of ballot papers, their names are marked off the certified list held at that issuing point. If an elector goes to another issuing point to cast another ordinary vote, another copy of the certified list for that division will be marked to signify that the person has been issued with ballot papers. Immediately following voting day, each certified list for each division is scanned to check for instances of multiple marks against any names. The AEC then investigates and writes to each elector suspected of multiple voting. Most cases are resolved by the response, for reasons such as “polling official error,” “language or literacy difficulties,” or that the person is “elderly and confused and voted more than once due to forgetting they had already cast a vote.” Cases that cannot be resolved this way are further investigated by the AEC and may be forwarded to the Australian Federal Police for consideration.
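The mechanics of this check can be illustrated with a minimal sketch in Python. The data layout and elector IDs below are hypothetical simplifications for illustration, not the AEC’s actual systems: each polling place’s scanned certified list is treated as a set of marked-off elector IDs, and any elector marked off more than once in a division is flagged for follow-up.

```python
from collections import Counter

def find_multiple_markoffs(markoff_lists: list[set[str]]) -> list[str]:
    """Return elector IDs marked off on more than one certified list
    within a division (candidates for the AEC's follow-up letters)."""
    counts = Counter()
    for marks in markoff_lists:
        counts.update(marks)  # each list marks a given elector at most once
    return sorted(elector for elector, n in counts.items() if n > 1)

# Hypothetical example: elector "E-102" was marked off at two polling
# places in the same division and would be flagged for investigation.
flagged = find_multiple_markoffs([
    {"E-101", "E-102"},  # certified list scanned from polling place A
    {"E-102", "E-103"},  # certified list scanned from polling place B
])
print(flagged)  # ['E-102']
```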
In 2019, the AEC testified that multiple voting was a “very small problem”: only 0.03% of the 91.9% turnout were multiple mark-offs, and the majority of multiple voting instances were mistakes by voters who were elderly, had poor literacy skills or had a low comprehension of the electoral process. The AEC reiterated the “negligible” rate of multiple voting in Australia in its public comment submission to the Board. According to the AEC, only 13 cases of apparent multiple voting out of a total of 15.5 million votes were referred to the Australian Federal Police for further investigation in the context of the 2022 federal election (PC-25006; see also PC-25007).
According to experts consulted by the Board, claims that the Voice Referendum was rigged were frequent, with some posts accompanied by #StopTheSteal and #RiggedReferendum hashtags. Journalistic reporting similarly highlighted that claims of voter fraud in the context of the Voice Referendum were common. Based on social media monitoring tools deployed by experts consulted by the Board, as of February 2024, screenshots of the AEC’s posts on X had been shared on Meta’s platforms over 475 times, receiving thousands of reactions and at least 30,000 views.
3. Oversight Board Authority and Scope
The Board has authority to review Meta’s decision following an appeal from the person whose content was removed (Charter Article 2, Section 1; Bylaws Article 3, Section 1).
The Board may uphold or overturn Meta’s decision (Charter Article 3, Section 5), and this decision is binding on the company (Charter Article 4). Meta must also assess the feasibility of applying its decision in respect of identical content with parallel context (Charter Article 4). The Board’s decisions may include non-binding recommendations that Meta must respond to (Charter Article 3, Section 4; Article 4). When Meta commits to act on recommendations, the Board monitors their implementation. When the Board identifies cases that raise similar issues, they may be assigned to a panel as a bundle to deliberate together. A binding decision will be made in respect of each piece of content.
4. Sources of Authority and Guidance
The following standards and precedents informed the Board’s analysis in these cases:
I. Oversight Board Decisions
II. Meta’s Content Policies
Meta’s Coordinating Harm and Promoting Crime policy rationale states that it aims to “prevent and disrupt offline harm and copycat behaviour” by prohibiting content “facilitating, organizing, promoting, or admitting to certain criminal or harmful activities targeted at people, businesses, property or animals.” The policy prohibits users from posting content “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process, except if shared in condemning, awareness raising, news reporting, or humorous or satirical contexts.”
There are also types of voter- or census-interference content that can be removed under the policy provided there is additional context to justify it. These include “calls for coordinated interference that would affect an individual’s ability to participate in an official election or census,” as well as “threats to go to an election site to monitor or watch voters or election officials’ activities if combined with a reference to intimidation.”
Meta’s Violence and Incitement policy is aimed at preventing “potential offline harm” that may be related to content posted on Meta’s platforms. It prohibits “threats that could lead to death (and other forms of high-severity violence)” as well as “threats to take up weapons or bring weapons to a location or forcibly enter a location” such as “polling places or locations used to count votes or administer an election.” It also prohibits threats of violence “related to voting, voter registration, or the administration or outcome of an election; even if there is no target.”
Meta’s Misinformation policy articulates how the company treats different categories of misinformation. Under one of these categories, Meta removes, “in an effort to promote election and census integrity,” “misinformation that is likely to directly contribute to a risk of interference with people’s ability to participate in those [political] processes.” That includes “misinformation about who can vote, qualifications for voting, whether a vote will be counted, and what information or materials must be provided in order to vote.”
III. Meta’s Human Rights Responsibilities
The UN Guiding Principles on Business and Human Rights (UNGPs), endorsed by the UN Human Rights Council in 2011, establish a voluntary framework for the human rights responsibilities of private businesses. In 2021, Meta announced its Corporate Human Rights Policy, in which it reaffirmed its commitment to respecting human rights in accordance with the UNGPs. The Board’s analysis of Meta’s human rights responsibilities in these cases was informed by the following international standards:
- The right to freedom of expression: Article 19, International Covenant on Civil and Political Rights (ICCPR); General Comment No. 34, Human Rights Committee, 2011; UN Special Rapporteur on freedom of opinion and expression, reports: A/HRC/38/35 (2018) and A/74/486 (2019).
- The rights to vote and participate in public affairs: Article 25, ICCPR; General Comment No. 25, Human Rights Committee, 1996.
5. User Submissions
In their statements to the Board, both users claimed they were merely sharing information posted by the AEC. The user who made the second post additionally asserted that their post served as a “warning to others” that the “election may be fraudulent” for allowing multiple voting since people “don’t need to show ID” to have their names marked off the list.
6. Meta’s Submissions
Meta determined that both posts violated the policy line on “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process” of the Coordinating Harm and Promoting Crime Community Standard. Based on Meta’s internal guidelines to content reviewers, Meta’s voting interference policies apply to both elections and “official referenda that are organized by a nationally designated authority.” The term “illegal voting” includes, “but is not limited to” the following: “(a) voting twice; (b) fabricating voting information to vote in a place where you are not eligible; (c) fabricating your voting eligibility; and (d) stealing ballots.”
With respect to the first post, Meta emphasized the phrase “vote often” is “usually understood to mean illegally voting more than once in an election.” The company also found that the phrase was not intended as humor or satire, since the user was calling for people to vote “NO,” which in Meta’s view constituted a serious attempt to promote the user’s political preference. The company also shared with the Board that when reviewing content about elections at scale, it is not always able to gauge the intent of users who post potential satire.
With respect to the second post, Meta found the phrase “smash the voting centres” to be violating. The company explained the user’s call “could be read as advocacy to inundate the election with duplicate voting,” which is prohibited by the Coordinating Harm and Promoting Crime policy line against “advocating ... to illegally participate in a voting or census process.”
According to Meta, if interpreted literally to mean a call to destroy the voting center buildings, the phrase would violate the Violence and Incitement policy, given that the policy prohibits: (i) threats of high-severity violence against a building that could lead to death or serious injury of any person present at the targeted place; and (ii) threats of violence “related to voting, voter registration, or the administration or outcome of an election; even if there is no target.” Based on Meta’s internal guidance to content reviewers, for a piece of content to be considered violating under this policy, threats to places must be stated in “explicit terms,” such as “blow up,” “burn down” and “shoot up,” or in generic terms such as “attack,” “ambush” and “destroy.”
Meta published the company’s integrity efforts for the Voice Referendum in a blog post in July 2023. Meta additionally told the Board that it formed a cross-functional team to begin preparations for the referendum in April 2023. The team consisted of Asia Pacific-based teams, as per standard practice for national elections. Meta also formed a virtual Integrity Product Operations Center (IPOC) during the final week of campaigning before the vote to focus on the referendum during a period of likely heightened tension. The IPOC included additional operations teams to quickly respond to escalations and critical risks that arose in the lead-up to voting day. Meta did not apply the Crisis Policy Protocol or any other policy levers for the Voice Referendum.
Meta also explained the company’s “keyword-based pipeline initiative,” which identifies and automatically enqueues potentially violating content containing keywords, whether in text or images like screenshots, for human review through “a specialized digital pipeline that scans for specific keywords.” Meta told the Board that the list includes many words and phrases developed by Meta’s misinformation and regional teams. The primary function of this keyword-based detection system is to “ensure the integrity” of elections and referenda by “systematically identifying and manually reviewing relevant content.” The keyword-based detection system was activated, in this case, because of the virtual IPOC that was set up for the Voice Referendum. Meta implements the initiative globally. It is not confined to specific countries or regions but is adapted to local contexts. According to Meta, the list of keywords is “dynamic,” subject to change and “specific to the nature of each event.”
The initiative seeks to actively enforce the following areas of Meta’s Community Standards: (i) the Coordinating Harm and Promoting Crime policy addressing “voter and/or census fraud, including offers to buy or sell votes with cash gifts, and statements advocating or instructing illegal participation in voting or census processes;” and (ii) the Misinformation policy focusing on voter or census interference, including “misinformation about voting or census dates, locations, times, methods, voter qualifications, vote counting and required voting materials.” The keyword-based detection system for the Voice Referendum was not designed to actively enforce other content policies concerning elections or voting, such as those under the Violence and Incitement Community Standard. However, if content flagged by the initiative violates other Community Standards, it is also subjected to enforcement upon human review.
With respect to the content in these cases, the phrases “double vote” and “vote multiple times” were the keywords that activated Meta's detection system. The term “double vote” was not directly used in the Facebook posts but appeared in the screenshot of the AEC’s post on X. Any content containing these keywords, whether as text or in images like screenshots, is “automatically flagged and queued for human review to proactively monitor for voter suppression-related speech.”
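Meta’s internal pipeline is not public, but the behavior described above can be sketched in outline: keywords are matched against both a post’s caption and any text extracted from attached images (such as screenshots), and matches are queued for human review. The following Python sketch is an assumption-laden illustration; the function names, keyword list and queueing interface are invented for clarity and do not reflect Meta’s actual implementation.

```python
import re
from queue import Queue

# Hypothetical, event-specific keyword list; Meta describes the real
# list as "dynamic" and "specific to the nature of each event."
KEYWORDS = ["double vote", "vote multiple times"]
PATTERN = re.compile("|".join(re.escape(k) for k in KEYWORDS), re.IGNORECASE)

human_review_queue: Queue = Queue()

def flag_for_review(post_id: str, caption: str, image_text: str = "") -> bool:
    """Queue a post for human review if a keyword appears in its caption
    or in text extracted (e.g., by OCR) from an attached image."""
    if PATTERN.search(caption) or PATTERN.search(image_text):
        human_review_queue.put(post_id)
        return True
    return False

# In these cases, "double vote" appeared only inside the screenshot of
# the AEC's X post, so matching on image text is what triggered review.
flag_for_review(
    "post-1",
    "vote early, vote often, and vote NO",
    image_text="... the number of double votes received is incredibly low ...",
)
```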
The Board asked Meta 12 questions in writing. The questions related to Meta’s voting interference content policies, the keyword-based detection system and protocols that Meta adopted for moderating content relating to the Voice Referendum. Meta answered all questions.
7. Public Comments
The Oversight Board received five public comments that met the terms for submission. Three were submitted from the Asia-Pacific and Oceania region (all from Australia), one from the United States and Canada, and one from Europe. Public comments submitted with consent to publish are available on the Oversight Board’s website.
The submissions covered the following themes: the sociohistorical context leading to the Voice Referendum, history of voter fraud in Australia, the spread of misleading and decontextualized information during the Voice Referendum, and Meta’s content policies and enforcement practices on misinformation more generally.
8. Oversight Board Analysis
The Board examined whether these posts should be removed by analyzing Meta’s content policies, human rights responsibilities and values. The Board also assessed the implications of this case for Meta’s broader approach to content governance.
The Board selected these cases to examine Meta’s content moderation policies and enforcement practices on misleading or decontextualized voting information and voter fraud, given the historic number of elections in 2024. These cases fall within the Board’s strategic priority of Elections and Civic Space.
8.1 Compliance with Meta’s Content Policies
I. Content Rules
The Board finds that both posts violated the Coordinating Harm and Promoting Crime policy, which prohibits the advocacy of illegal participation in a voting or census process. The phrase “vote often” in the first post, when shared together with the AEC’s post on X about the counting of multiple votes, is a clear call to engage in that practice. Pursuant to Meta’s internal guidelines for content reviewers, “voting twice” is a form of “illegal voting.”
The second post also violates the Coordinating Harm and Promoting Crime policy. It contained a screenshot of the X post and was accompanied by text overlay saying, “so you can vote multiple times.” It also urges people to “smash the voting centres.” The user could simply be attempting to express their frustration with the AEC for supposedly allowing people to “vote multiple times.” The phrase, however, when read together with the rest of the text overlay on the screenshot claiming the AEC was condoning multiple voting and accusing it of setting people up for a “rigging,” can more reasonably be understood as advocating for people to flood the polling place with multiple votes. In the context of Australian elections, where voting is mandatory and turnout is over 90%, a call for people to vote once is an unlikely interpretation of “smash the voting centres,” especially when this call follows a claim that people “can vote multiple times.” This is further supported by the user’s request for people to repeatedly vote “No” (“NO, NO, NO, NO, NO”). When read as a whole and in the context of Australian elections, the post thus constitutes a call to vote twice, which amounts to “illegal voting,” prohibited by the Coordinating Harm and Promoting Crime policy.
The Board recognizes that the posts could conceivably have been intended satirically, but their satirical intent is not explicit. Nor does the Board believe that the posts were implicitly satirical, based on the language of the captions and the text overlay on the images. While the degree of certainty in the call to action differs between the two posts, each of them includes a plea to engage in multiple – hence “illegal” – voting. Given the risks associated with voter fraud attempts in electoral contexts, the Board believes that Meta’s humor or satire exception should only apply, in such circumstances, to content that is explicitly humorous. Therefore, neither of the posts qualifies for this exception.
The posts also do not qualify for the awareness-raising exception under the Coordinating Harm and Promoting Crime policy. The screenshots and much of the user-created content were designed to call attention to the possibility of voter fraud based on the AEC’s statement. However, the posts went beyond this and actively encouraged others to illegally participate in the Voice Referendum through multiple voting, rather than just discussing the AEC’s posts on X. The posts did not contain the additional context, provided by the AEC in the same thread on X, that voting multiple times is an offence in Australia. Therefore, rather than raising awareness around the possibility of multiple voting, both posts decontextualized the AEC’s communication to imply that the AEC is saying it is permissible to vote more than once.
Unlike Meta, the Board does not believe a more literal reading of the word “smash” (meaning the destruction of buildings) is applicable in this case, given the lack of signals pointing in that direction (e.g., context of conflict or heightened tensions with widespread circulation of content directly inciting violence). Therefore, the Board concludes that the second post does not violate Meta’s Violence and Incitement policy.
The Board also assessed both pieces of content against Meta’s Misinformation policy, given that they decontextualize the AEC’s communication. The Board concluded, however, that the Coordinating Harm and Promoting Crime Community Standard is the applicable policy in this case because both users are encouraging others to engage in voter fraud.
II. Enforcement Action
The Board acknowledges Meta’s integrity efforts for the Voice Referendum, including the keyword-based detection system adopted by Meta. The company explained the system was deployed for proactively identifying potentially violating content under the voting interference policy lines of the Coordinating Harm and Promoting Crime and Misinformation Community Standards. According to Meta, the keyword-based detection system is adapted to local contexts and contains market-specific terms. Based on the information Meta shared with the Board about how the initiative works, the Board appreciates that the keyword-based detection system was deployed and seems to have worked in this case. Initiatives like this one need to be consistently applied across the globe, in all countries undergoing elections and other democratic processes. The Board also believes that this initiative should encompass voting interference and related policies under the Violence and Incitement Community Standard.
Given the limitations of keyword-based approaches to the detection of harmful content, the Board will continue to evaluate the efficacy of Meta’s system in other election-related cases. In this regard, the Board encourages Meta to develop success metrics for assessing how effective the keyword-based detection system is, along with other election integrity efforts, in identifying potentially violating content under election-relevant policies. This would be in line with the Board’s recommendation in the Brazilian General’s Speech decision for Meta to “develop a framework for evaluating the company’s election integrity efforts.”
8.2 Compliance with Meta’s Human-Rights Responsibilities
Freedom of Expression (Article 19 ICCPR)
Article 19 of the International Covenant on Civil and Political Rights (ICCPR) provides broad protection for expression of “all kinds.” This right includes the “freedom to seek, receive and impart information and ideas of all kinds.” The UN Human Rights Committee has highlighted that the value of expression is particularly high when it discusses political issues, candidates and elected representatives (General Comment No. 34, para. 13). This includes expression that is “deeply offensive,” critical of public institutions and opinions that may be erroneous (General Comment No. 34, paras. 11, 38 and 49).
The UN Human Rights Committee has emphasized that freedom of expression is essential for the conduct of public affairs and the effective exercise of the right to vote (General Comment No. 34, para. 20). The Committee further states that the free communication of information and ideas about public and political issues among citizens is essential for the enjoyment of the right to take part in the conduct of public affairs and the right to vote, under Article 25 ICCPR (General Comment No. 25, para. 25). In these cases, both users were engaging with the referendum, a matter of public interest, to share their views on what the outcome should be, thereby directly participating in the public debate triggered by the referendum process.
When restrictions on expression are imposed by a state, they must meet the requirements of legality, legitimate aim, and necessity and proportionality (Article 19, para. 3, ICCPR). These requirements are often referred to as the “three-part test.” The Board uses this framework to interpret Meta’s voluntary human rights commitments, both in relation to the individual content decision under review and what this says about Meta’s broader approach to content governance. As in previous cases (e.g., Armenians in Azerbaijan, Armenian Prisoners of War Video), the Board agrees with the UN Special Rapporteur on freedom of expression that, although “companies do not have the obligations of Governments, their impact is of a sort that requires them to assess the same kind of questions about protecting their users’ right to freedom of expression” (A/74/486, para. 41). In doing so, the Board attempts to be sensitive to ways in which the human rights responsibilities of a private social media company may differ from those of a government implementing its human rights obligations.
I. Legality (Clarity and Accessibility of the Rules)
Rules restricting expression should be clearly defined and communicated, both to those enforcing the rules and those impacted by them (General Comment No. 34, para. 25). Users should be able to predict the consequences of posting content on Facebook and Instagram. The UN Special Rapporteur on freedom of expression has highlighted the need for “clarity and specificity” in content-moderation policies (A/HRC/38/35, para. 46).
The public-facing language of the Coordinating Harm and Promoting Crime Community Standard is not sufficiently clear to users. Given the importance of users being able to engage on social media to discuss issues of public interest in the context of democratic events, Meta needs to make sure that users are clearly informed of the applicable rules. This will help users anticipate whether content they are posting is potentially violating. In this regard, the Board finds that the internal guidelines’ clarification of what constitutes “illegal voting” should be incorporated into the public-facing Coordinating Harm and Promoting Crime Community Standard.
II. Legitimate Aim
Restrictions on freedom of expression must pursue a legitimate aim (Article 19, para. 3, ICCPR), including to protect the “public order” and the “rights of others.”
The Coordinating Harm and Promoting Crime policy aims to “prevent and disrupt offline harm and copycat behaviour” by removing content “facilitating, organizing, promoting or admitting to certain criminal or harmful activities.”
Protecting the right to vote and to take part in the conduct of public affairs is an aim that Meta’s Coordinating Harm and Promoting Crime policy can legitimately pursue, especially in the context of elections (Article 25, ICCPR). The Board finds that preventing users from calling on others to engage in voter fraud is a legitimate aim to protect the right to vote. General Comment No. 25 on the right to vote sets forth that “there should be independent scrutiny of the voting and counting process” so that “electors have confidence in the security of the ballot and the counting of votes” (para. 20). Additionally, “the principle of one person, one vote must apply,” which means that “the vote of one elector should be equal to the vote of another” (para. 21). The Board also notes that the policy helps preserve “public order” by protecting polling places and democratic processes from voter interference more broadly.
III. Necessity and Proportionality
Under ICCPR Article 19, para. 3, necessity and proportionality require that restrictions on expression “must be appropriate to achieve their protective function; they must be proportionate to the interest to be protected” (General Comment No. 34, para. 34). As part of their human rights responsibilities, social media companies should consider a range of possible responses to problematic content beyond deletion to ensure restrictions are narrowly tailored (A/74/486, para. 51).
The Board finds that Meta’s removal of both posts from Facebook complied with the requirements of necessity and proportionality. The Board notes the content was posted days before a referendum that marked a significant constitutional moment in Australia, especially for the Aboriginal and Torres Strait Islander peoples. On the one hand, political speech is a vital component of democratic processes and both users were directly engaging in the public debate sparked by the referendum. On the other hand, the users’ calls for others to engage in illegal behavior in the context of the referendum impacted the political rights of people living in Australia, particularly the right to vote and to take part in the conduct of public affairs.
Applying these standards to the case content, the calls to “vote No” in both posts are clearly protected political speech. However, the phrase “vote often” in the first post and the phrase “smash the voting centres” in the second post are a different matter, given that they actively encouraged others to illegally participate in the Voice Referendum through multiple voting, as explained in more detail under Section 8.1 above. Experts consulted by the Board noted that claims the Referendum was rigged were frequent, while journalistic reporting highlighted that claims of voter fraud were common. Therefore, the Board finds that Meta was correct to err on the side of protecting democratic processes by preventing voter fraud attempts from circulating on Meta’s platforms (General Comment No. 25). The circulation of voter fraud-related content may create an environment where the integrity of electoral processes is at risk. However, a minority of the Board finds that the removal of the post urging people to “smash the voting centres” does not pass the necessity and proportionality test, given Meta’s failure to establish a “direct and immediate connection between the expression and the threat” (General Comment No. 34, para. 35). For this minority, because the user’s call for people to “smash the voting centres” is only an ambiguous call for people to vote multiple times, the connection with the voter fraud threat was not direct and immediate.
The Board considers Meta’s approach of expecting clarity from users when enforcing exceptions to be a sensible way of assessing whether content was shared in a condemning, awareness raising, news reporting, or humorous or satirical context. There was no clear indication in the posts under analysis that the phrases “vote often” and “smash the voting centres” were meant rhetorically, rather than clearly advocating for multiple voting – an action that put the integrity of the Voice Referendum at risk. Therefore, both removals were necessary and proportionate responses from Meta.
Additionally, a minority of the Board is not convinced that content removal is the least intrusive means available to Meta to address voter fraud-related speech, and finds that Meta’s failure to demonstrate otherwise does not satisfy the requirement of necessity and proportionality. The Special Rapporteur has stated: “just as States should evaluate whether a limitation on speech is the least restrictive approach, so too should companies carry out this kind of evaluation. And, in carrying out the evaluation, companies should bear the burden of publicly demonstrating necessity and proportionality” (A/74/486, para. 51). For this minority, Meta should have publicly demonstrated why removal of such posts is the least intrusive of the many tools it has at its disposal to avert likely near-term harms, such as voter fraud. If it cannot provide such a justification, then Meta should be transparent in acknowledging that its speech rules depart from UN human rights standards and provide a public justification for doing so. The minority believes that the Board would then be positioned to consider Meta’s public justification, and a public dialogue would ensue without risking the distortion of existing UN human rights standards.
9. Oversight Board Decision
The Oversight Board upholds Meta’s decisions to take down both pieces of content.
10. Recommendations
Content Policy
1. To ensure users are fully informed about the types of content prohibited under the “Voter and/or census fraud” section of the Coordinating Harm and Promoting Crime Community Standard, Meta should incorporate its definition of the term “illegal voting” into the public-facing language of the policy prohibiting: “advocating, providing instructions for, or demonstrating explicit intent to illegally participate in a voting or census process, except if shared in condemning, awareness raising, news reporting, or humorous or satirical contexts.”
The Board will consider this recommendation implemented when Meta updates its public-facing Coordinating Harm and Promoting Crime Community Standard to reflect the change.
*Procedural Note:
The Oversight Board’s decisions are prepared by panels of five Members and approved by the majority of the Board. Board decisions do not necessarily represent the personal views of all Members.
For this case decision, independent research was commissioned on behalf of the Board. The Board was assisted by Duco Advisors, an advisory firm focusing on the intersection of geopolitics, trust and safety, and technology. Memetica, an organization that engages in open-source research on social media trends, also provided analysis.