Facebook’s disclosure yesterday that thousands of dollars’ worth of political ads had been bought on the social network by a Kremlin-linked online influence operation, along with a New York Times feature exposing the extent of Russia-linked fake Facebook accounts, seemed to confirm many people’s worst fears. For months, Facebook has faced scrutiny over how large-scale operations may have used the platform’s immense reach and narrow targeting tools to influence voters during the campaign. Since the election, even Facebook and its CEO, Mark Zuckerberg, have evolved from dismissing the platform’s potential role to admitting that it is vulnerable to manipulation and misuse.
But the breathless reaction to the Facebook news isn’t necessarily matched by the facts — yet. Before we can draw any real conclusions, there are several very important questions we need to answer.
How much impact did these ads and fake accounts have?
This is the big one. We don’t really have any reliable method for evaluating the impact of targeted ads on social media. Sure, we can measure how clicking on product ads leads people to buy products, but evaluating impact on voting decisions is substantially more difficult. TV and radio are more neutral distribution methods — everyone in a given geographic region has access to the same messages — but targeted ads shift the weight considerably.
Facebook wants to have it both ways: It talks up the efficiency of its hyperspecific ad-targeting platform and its advanced AI functions, while also avoiding specifics when expedient. The company shrugs about its potential political influence, while simultaneously touting said influence.
In the case of the ads bought by Russian agents, the influence was intended to work to muddy political waters, drive partisanship and anger, and confuse the issues — not to support or condemn a specific candidate. As the Times notes in an overview of the situation: “The Russian efforts were sometimes crude or off-key, with a trial-and-error feel, and many of the suspect posts were not widely shared. The fakery may have added only modestly to the din of genuine American voices in the pre-election melee, but it helped fuel a fire of anger and suspicion in a polarized country.”
What did these ads look like?
While yesterday’s reports disclosed that the Facebook ads addressed topics like LGBTQ rights and the Second Amendment, we don’t actually know what they looked like or said. Facebook’s personalized News Feeds are difficult to monitor, dwarfing the amount of media flowing across TV and radio airwaves by orders of magnitude. Facebook itself is not disclosing what the ads looked like, but ProPublica has already set up a program to try to monitor them.
What Facebook has said is that “the vast majority of ads run by these accounts didn’t specifically reference the US presidential election, voting or a particular candidate,” and that they “appeared to focus on amplifying divisive social and political messages across the ideological spectrum.”
Who was targeted?
Facebook says that “one-quarter of these ads were geographically targeted.” What’s unclear is where that geography was (targeting swing states, for example, would be more effective than targeting safely partisan states) and where that targeting info came from.
Is there more bad news coming?
The $150,000 in dubious ad spending is significant in that it confirms that Russia-linked agencies were buying election ads, but that dollar amount is a drop in a vast ocean of political ad spending. For perspective: Donald Trump’s campaign spent $90 million on digital ads. It’s also a real possibility that more ads were bought through different channels, and that the $150,000 was just the tip of the iceberg. It’s unclear how Facebook identified accounts tied to the Internet Research Agency, and it’s unclear if there are others waiting to be discovered. It’s an eternal game of Whac-a-Mole.