Oversight Board Recommends Meta Label Manipulated Content in Case Concerning Edited President Biden Video
February 23, 2024
Review of “cheap fake” video of U.S. President Biden reveals major inconsistencies in the way Meta treats altered content.
The Oversight Board urged Meta to begin labeling manipulated content, such as videos altered by artificial intelligence (AI) or other means, when such content may cause harm. This decision is part of the Board’s emphasis on preserving freedom of expression while protecting against demonstrable online harm, especially in the context of elections. The Board said Meta should stop removing manipulated media when no other policy violation is present and should instead apply a label indicating that the content is significantly altered and may mislead.
“The volume of misleading content is rising, and the quality of tools to create it is rapidly increasing,” said Oversight Board Co-Chair Michael McConnell. “Platforms must keep pace with these changes, especially in light of global elections during which certain actors seek to mislead the public.
“At the same time, political speech must be unwaveringly protected. This sometimes includes claims that are disputed and even false, but not demonstrably harmful. Manipulated media, however, present special challenges.”
To enforce Meta’s Manipulated Media policy, which governs moderation of content altered by AI, labeling may be more scalable than relying on the company’s system of third-party fact-checkers, whose coverage varies widely by language and market. A label could be attached to a post across the platform as soon as the content is identified as “manipulated,” regardless of the context in which it is posted and without reliance on fact-checkers. The Board is concerned that Meta demotes content that fact-checkers rate as “false” or “altered” without informing users or providing a means of appeal; such demotions can significantly restrict freedom of expression.
The Board is also concerned about the Manipulated Media policy in its current form, finding it incoherent, lacking in persuasive justification and inappropriately focused on how content is created rather than on the specific harms it aims to prevent, such as disruption of electoral processes.
“As it stands, the policy makes little sense,” said McConnell. “It bans altered videos that show people saying things they did not say, but does not prohibit posts depicting an individual doing something they did not do. It applies only to video created through AI, but lets other fake content off the hook.
“Perhaps most worryingly, it does not cover audio fakes, which are one of the most potent forms of electoral disinformation we’re seeing around the world. Meta must urgently work to close these gaps.”
For consistency and clarity, the company should clearly define the harms the policy seeks to address, given that misleading video or audio is not objectionable in itself absent a direct risk of offline harm. Some forms of media alteration, humor for example, may even enhance the value of content to the audience.
This case, which concerned an edited video of President Joe Biden, was selected to examine whether Meta’s Manipulated Media policy adequately addresses the potential harms of altered content, while ensuring that political expression is not unjustifiably suppressed.
The Biden video, which was posted to Facebook, does not violate Meta’s Manipulated Media policy as currently written: the policy applies only to video created through AI and only to content showing people saying things they did not say. The video in this post was not altered using AI, it shows President Biden doing something he did not do rather than something he did not say, and the alteration of the clip is obvious. The Board therefore concluded that Meta was right to leave the content up, but that the company should act promptly to amend the policy to bring it into alignment with its stated purposes and to label such content as manipulated in the future.
Background
The Oversight Board is an independent organization of global experts who hold Meta accountable to its Community Standards for Facebook and Instagram, as well as to its human rights commitments. The Board has contractually binding authority over Meta’s decisions to remove content from its platforms or leave it up. It also issues non-binding recommendations that shape Meta’s policies, pushing the company to be more transparent, enforce its rules evenly and treat users more fairly. Meta has fully or partially implemented 67 of the Board’s recommendations made to date. The Board’s decisions are based on human rights and free speech principles.