We Need to Act on Online Disinformation Now
By Helle Thorning-Schmidt
From the United States to the United Kingdom, India to Indonesia, Mexico to South Africa, roughly half of the global population will head to the polls in 2024. But, if we’re not careful, the year that breaks election records may also fracture democracy.
I’ve seen first-hand how bad actors work on and offline to undermine fair and just democratic processes. Over time, these efforts have grown more sophisticated. Social media platforms have become forums for sowing disinformation and inciting violence. With so many elections looming, the risks they pose are being amplified by artificial intelligence.
Fake video and audio of candidates telling supporters not to vote are proliferating. Cloned voices of politicians appear to discuss rigging or stealing elections. And it isn’t just politicians who are affected. Just last week, X had to block searches for Taylor Swift after explicit AI-generated images of her were circulated on the site. US politicians have now called for urgent action to criminalise the creation of deepfake images. But while we focus on problems caused by new technologies, tech companies risk overlooking content that is altered using more basic tools.
On Monday, the Oversight Board, which makes independent decisions about content on Facebook and Instagram, released a landmark decision on video content edited to make it appear as though US president Joe Biden was inappropriately touching his adult granddaughter’s chest. We found that the post did not violate Meta’s existing manipulated media policy, in part because it was not generated by AI and the alteration to the video was obvious.
However, we also found that this policy makes little sense. It bans altered videos that show people saying things they did not say, but does not prohibit posts depicting an individual doing something they did not do. It only applies to video created through AI, but lets other fake content off the hook. As well as closing these loopholes, Meta should stop removing manipulated media when no other policy violation has occurred and instead apply a label indicating the content has been significantly altered and could mislead.
To keep pace with the threat posed by bad actors, social media platforms don’t need to reinvent the wheel. They should reduce complexity and go back to basics.
First, there are clear tensions between allowing freedom of expression and protecting people from real-world harm, so any policy that suppresses expression must specify the harm it is trying to prevent. The user’s intent, and how other users understand the content, is key to determining the likelihood of that harm.
Second, in crisis situations, including elections where the risk of violence is high, policies should reflect threats to physical safety and security. Platforms need a clear system for moderating content when a crisis hits.
Third, transparency is fundamental. Only by showing how decisions are made and holding people to account can we build trust. The Board has pushed Meta to share the metrics it uses to monitor its efforts before, during, and after “expected critical events”, such as elections. The company expects to publish these in the latter half of 2024.
Government requests to remove content should also be made public. Disproportionate censorship can contribute to a climate of misinformation and affect users’ rights to access information or share opinions. A road map for protecting speech and promoting ethical action online is possible. But governments, regulators, civil society and academics must work with us if we are to meet the challenge of combating election disinformation.
This article was originally published in the Financial Times in February 2024.