Facebook, this morning, announced a new policy banning deepfakes: digitally manipulated videos made with easy-to-use technology that seems ripe for exploitation by bad actors. The threat of deepfakes has prompted a lot of hand-wringing in recent months, given how cheap and widely available the tools to create them have become. Stitching one person's face onto another's body was once labor-intensive CGI work; now it can be done by anyone with a bit of know-how. The policy was first reported by the Washington Post.
In a blog post outlining the change, Monika Bickert, Facebook’s vice president of global policy management, said that manipulated media subject to removal would have to meet two criteria:
It has been edited or synthesized — beyond adjustments for clarity or quality — in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not actually say. And:
It is the product of artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear to be authentic.
Superficially, this seems like a promising policy shift. Facebook's ability to propel misinformation far and wide is a considerable problem. But Facebook's idealistic, one-size-fits-all approach to policy-making once again buckles under the weight of how the world actually works. These new rules are basically unenforceable, given the very next sentence in Bickert's announcement: "This policy does not extend to content that is parody or satire, or video that has been edited solely to omit or change the order of words."
In other words, if you say that it's parody or satire, it can stay up, and Facebook has almost no way of proving otherwise. The purveyors of YouTube prank videos have for years hidden behind the excuse that their work is "satire" or a "social experiment." Late-night comedians have traded in manipulated clips for just as long: one old video made it look like Obama was kicking in a door, and a more recent one circulating on social media made it look like Trump forgot about his toupee.
On top of the parody carve-out, the definition of unacceptable media outlined here is so narrow that it’s not likely to eliminate any of Facebook’s deceptive-media problems. That the videos need to be a product of “artificial intelligence or machine learning” is a completely arbitrary distinction. People decide to create deepfakes; the clips don’t just come into existence on their own. Videos similar to deepfakes can be created with most professional video-editing software and without nebulously defined AI.
You don’t need to be a visual-effects wizard to fool people, though. There is already one prominent example of manipulated video that wouldn’t be affected by the new Facebook rules. Over the summer, a video of Nancy Pelosi supposedly slurring her words went viral on Facebook, but the video was actually just footage slowed down to three-quarters of its initial speed — no advanced AI or neural networks or whatever required. In contrast, a face-swap made using a filter that’s been available on Snapchat for years might run afoul of Facebook’s rules.
As if the contradictions and vagueness already discussed weren't enough, Facebook is also sticking to its guns on political ads: it still refuses to fact-check the ads that politicians buy, or to take down ones found to be dishonest.
In reality, deepfakes are much more likely to be used for cyberbullying — say, stitching someone's head onto a nude body — than for political purposes. But Facebook already has policies against harassment and bullying, making the deepfake policy redundant in that respect. The new policy should be viewed, like many of Facebook's efforts, as the company miming effort against political disinformation without actually changing anything about its platform.