2023 Annual Report Shows Board’s Impact on Meta

2023 was a year of impact and innovation for the Board. Our recommendations continued to improve how people experience Meta’s platforms, and by publishing more decisions in new formats, we tackled more hard questions of content moderation than ever before.

Meta’s implementation of our recommendations is also picking up momentum. Our 2023 Annual Report shows that between January 2021, when we published our first decisions, and May 2024, when this report was finalized, Meta had fully or partially implemented 75 of our recommendations and reported progress on implementing a further 81. Numbers, of course, do not tell the whole story; it is the quality and impact of the recommendations that matter.

Getting Results for Facebook and Instagram Users

A consistent problem is that users are often left guessing about why Meta removed their content or suspended their account.

To address this, we’ve urged Meta to follow these tenets of transparency: make your rules easy to understand, tell people how you enforce them and, when people break the rules, say exactly what they’ve done wrong.

In 2022, in response to our recommendations, Meta introduced new messaging that tells people which specific policy they violated under its Hate Speech, Dangerous Organizations and Individuals, and Bullying and Harassment policies.

In 2023, our recommendations continued to improve people’s experiences on Facebook and Instagram.

Today’s Annual Report shows how, in response to the Board’s recommendations, Meta:   

  • Launched Account Status, an in-product experience telling people about current and past penalties on their account, and why Meta applied them.
  • Improved its ‘strikes’ system to explain why a person’s content has been removed and how the system works, and to make it fairer to users who were disproportionately impacted in the past.
  • Started warning people when a post was highly likely to violate its rules, giving them the opportunity to delete and repost their content. In a 12-week period in 2023, Meta issued warnings about more than 100 million pieces of content.*

We have carried this momentum into 2024, with our recommendations further improving how Meta treats the people using its platforms.   

People often tell us that Meta has taken down posts that call attention to hate speech in order to condemn it, mock it or raise awareness, because automated systems (and sometimes human reviewers) cannot reliably distinguish such posts from hate speech itself. To address this, we asked Meta to create a convenient way for users to indicate in their appeal that their post fell into one of these categories. Meta agreed, and today we are releasing new data on the impact of this change.

  • In February 2024, Meta received more than seven million appeals from people whose content had been removed under its rules on hate speech. Eight in 10 of those appealing chose this new option to provide additional context. One in five of these users indicated that their content was meant “to raise awareness,” while one in three chose “it was a joke.” We believe that giving people a voice – and listening to them – can help Meta make better decisions.*
  • In May 2024, in response to our recommendations, Meta started labeling a wider range of video, audio and image content as “made with AI.” This gives people greater context and transparency for more types of manipulated media, while Meta continues to remove posts that violate its rules in other ways.

Protecting and Preserving Speech

Our recommendations also helped Meta protect and preserve content about protests, health conditions and atrocities during conflicts.

Our 2023 Annual Report shows how, in response to the Board’s recommendations, Meta:

  • Allowed the term “Marg bar Khamenei” (which literally translates as “Death to [Iran’s supreme leader] Khamenei”) to be shared in the context of the protests in Iran. After Meta implemented this recommendation in January 2023, posts using this term in a sample of Instagram accounts increased by nearly 30% – protecting political speech in Iran.
  • Created new classifiers and updated existing ones, which, over two 30-day periods in 2023, prevented a total of 3,500 pieces of breast cancer-related content from being automatically removed. This protects important posts from breast cancer patients and campaigners.
  • Is finalizing a new, consistent approach to preserving potential evidence of atrocities and serious violations of international human rights law and humanitarian law.   

Publishing More Cases, Faster

2023 was also a year of innovation. Following our commitment to publish more cases, faster, we issued our first summary decisions that examine cases in which Meta changed its original decision on a piece of content after it was shortlisted for potential review. We also issued our first expedited decisions, which were about the Israel-Hamas conflict.

In addition, we published standard decisions in new areas including prisoners of war, extreme diets and violent speech targeting transgender people. In total, we decided 53 cases in 2023, more than in any previous year, overturning Meta’s original decision in around 90% of cases.

With 47 cases decided or announced so far in 2024, we are well on track to exceed last year’s total. These cases include crucial content moderation topics such as explicit AI-generated deepfakes, Holocaust denial and criticism of heads of state.

In February, we also expanded our scope to cover Threads – the first time we have taken on a new app – and announced our first Threads case in May. We’ve also seen increased engagement with our work, with 2,800 public comments submitted for cases published or announced this year – around twice the number we received in 2022 and 2023 combined.   

Evolving to Expand Our Influence

While we have come a long way, we are also innovating to be more relevant to the biggest debates in content moderation. In 2024, for example, we are publishing white papers that examine broad issues facing social media based on the experience gained from our full body of work, rather than just specific cases.

With around half of the world’s population going to the polls this year, we issued our first white paper, on elections, setting out nine key lessons for industry. In the coming weeks, we will publish our next white paper, on AI and automation.

As we move further into 2024, we want to examine other major issues that matter to users, such as demoted content. We are also paying attention to the evolving regulatory landscape and to the importance of the Board’s model – that of an independent, global deliberative body that examines cases through a human rights lens.

Recently, as part of our ongoing efforts to improve users’ experiences, we have worked with Meta to make our appeals process more intuitive. In addition, having launched our new website in May, we are making changes to our transparency reporting. Our new website now includes much of the content we published in our quarterly, and more recently half-yearly, transparency reports – including an interactive tracker for our recommendations, a public comments archive and the ability to filter decisions by country, region and Community Standard. As such, while we will continue to publish Annual Reports, we will no longer issue half-yearly reports.

Four years into our journey, we are still tackling the toughest questions in content moderation and pushing Meta to treat users fairly. The challenge is enormous, the uncertainty is great, but the work we are doing really matters. Together we will strive to find answers that improve social media for people around the world.

For more on our Transparency Reporting, click here.

* PLEASE NOTE: All information is aggregated and de-identified to protect user privacy. All metrics are estimates, based on best information available for a specific point in time.
