Oversight Board Announces Two New Cases on Explicit AI Images of Female Public Figures
April 16, 2024
Today, the Board is announcing two new cases for consideration. As part of this, we are inviting people and organizations to submit public comments.
Case Selection
As we cannot hear every appeal, the Board prioritizes cases that have the potential to affect many users around the world, are of critical importance to public discourse, or raise important questions about Meta’s policies.
The cases that we are announcing today are:
Explicit AI Images of Female Public Figures
2024-007-IG-UA, 2024-008-FB-UA
User appeal to remove content from Instagram and user appeal to restore content to Facebook
Submit public comments, which can be provided anonymously, here.
These cases concern two content decisions made by Meta, one on Instagram and one on Facebook, which the Oversight Board intends to address together. For each case, the Board will decide whether the content should be allowed on Instagram or Facebook.
The first case involves an AI-generated image of a nude woman posted on Instagram. The image has been created using artificial intelligence (AI) to resemble a public figure from India. The account that posted this content only shares AI-generated images of Indian women. The majority of users who reacted have accounts in India, where deepfakes are increasingly a problem.
In this case, a user reported the content to Meta for pornography. This report was automatically closed because it was not reviewed within 48 hours. The same user then appealed Meta’s decision to leave up the content, but this appeal was also automatically closed, so the content remained up. The user then appealed to the Board. As a result of the Board selecting this case, Meta determined that its decision to leave the content up was in error and removed the post for violating the Bullying and Harassment Community Standard.
The second case concerns an image posted to a Facebook group for AI creations. It features an AI-generated image of a nude woman with a man groping her breast. The image has been created with AI to resemble an American public figure, who is also named in the caption. The majority of users who reacted have accounts in the United States.
In this case, a different user had already posted this image, which led to it being escalated to Meta’s policy or subject matter experts who decided to remove the content as a violation of the Bullying and Harassment policy, specifically for “derogatory sexualized photoshop or drawings.” The image was added to a Media Matching Service Bank – part of Meta’s automated enforcement system that automatically finds and removes images that have already been identified by human reviewers as breaking Meta’s rules. Therefore, in this case, the image was already considered a violation of Facebook’s Community Standards and removed. The user who posted the content appealed but the report was automatically closed. The user then appealed to the Board.
The Board selected these cases to assess whether Meta’s policies and its enforcement practices are effective at addressing explicit AI-generated imagery. These cases align with the Board’s Gender strategic priority.
The Board would appreciate public comments that address:
- The nature and gravity of harms posed by deepfake pornography, including how those harms affect women, especially women who are public figures.
- Contextual information about the use and prevalence of deepfake pornography globally, including in the United States and India.
- Strategies for how Meta can address deepfake pornography on its platforms, including the policies and enforcement processes that may be most effective.
- Meta’s enforcement of its “derogatory sexualized photoshop or drawings” rule in the Bullying and Harassment policy, including the use of Media Matching Service Banks.
- The challenges of relying on automated systems that close appeals within 48 hours if no review has taken place.
As part of its decisions, the Board can issue policy recommendations to Meta. While recommendations are not binding, Meta must respond to them within 60 days. As such, the Board welcomes public comments proposing recommendations that are relevant to these cases.
Public Comments
If you or your organization feel you can contribute valuable perspectives that can help with reaching a decision on the cases announced today, you can submit your contributions using the link above. Please note that public comments can be provided anonymously. The public comment window is open for 14 days, closing at 23:59 your local time on Tuesday 30 April.
To respect their rights and mitigate risks of furthering harassment of the women depicted in these posts, the Board requests that public comments avoid naming or otherwise sharing private information about third parties or speculating on the identities of the people depicted in the content of these cases.
What’s Next
Over the next few weeks, Board Members will be deliberating these cases. Once they have reached their final decisions, we will post them on the Oversight Board website.
To receive updates when the Board announces new cases or publishes decisions, sign up here.