As social media’s influence over politics and elections has risen, so too has our collective anxiety about it. Over the last three years in particular, the use of megaplatforms like Facebook and Twitter as vectors for misinformation has been the subject of congressional hearings and not a few columns on the websites of some of our finer magazines. And as we gear up for elections in 2019 — and, assuming we make it through this year, in 2020 — the anxiety is gearing up, too. On Monday, in anticipation of the European Parliamentary elections in May, the Mozilla Foundation, an influential internet-advocacy non-profit, released an open letter to Facebook, co-signed by 32 civil rights and transparency groups, demanding that the social network implement measures designed to increase transparency, facilitate research, and combat misinformation. “We … are deeply concerned,” the letter begins, “about the validity of Facebook’s promises to protect European users from targeted disinformation campaigns during the European Parliamentary elections.”
But what, exactly, should we be concerned about? Where is the misinformation coming from? Rival governments? Shitposting trolls? And what does it do? Convince voters of untruths? Change the outcomes of elections? Researchers are just beginning to map out the relationship between social media and electoral politics, but a long, reported article about one such “targeted disinformation campaign” in this week’s New Yorker is a useful object lesson in the most pressing dangers presented by social media dirty-tricks operations.
The story, by Adam Entous and Ronan Farrow, follows an election for the hospital board in the central-California town of Tulare, contested between Parmod Kumar — the incumbent ally of the hospital’s much-disliked administrator, Yorai Benzeevi — and Senovia Gutiérrez, the mother of a local activist. To ensure Kumar’s victory, Benzeevi engaged Psy-Group, a private intelligence company based in Tel Aviv, staffed by ex-Mossad agents, and specializing in “covertly spreading messages to influence what people believed and how they behaved.” In early 2017, in the months before the election, articles implying that Gutiérrez took bribes, or suggesting that she wasn’t a citizen, appeared on newly created websites with names like “Tulare Leaks” and “Drain Tulare Swamp.” As Entous and Farrow detail:
Other articles on Draintulareswamp.com questioned whether Senovia was fit to manage finances, and published records showing that she had filed for bankruptcy in 2003. (The bankruptcy records were authentic.) “It was horrible — they put out stuff that we couldn’t believe, and they were turning it out so fast,” Deanne Martin-Soares, one of the founders of Citizens for Hospital Accountability, said. “We couldn’t trace anything. We didn’t know where it was coming from.” On Facebook, Alex Gutiérrez responded to the smear tactics, writing, “The gall of their campaign to fabricate and move forward with such trash speaks volumes of their desperation and fear!”
It’s hard to imagine how it would feel to be a candidate in a race as small-scale as one for a hospital board and still face such overwhelming dirty tricks. And yet: when the dust had cleared and the votes were counted, Gutiérrez won the election with 75 percent of the vote. Less than a year later, Psy-Group closed its doors.
It’s an oddly anticlimactic end to such a treacherous race. But the tale of Tulare’s hospital-board election contains, I think, two important lessons about social media disinformation campaigns.
The first is about whom, precisely, we should direct our vigilance toward. Since 2016, anxiety over digital dirty tricks around elections has tended to be framed in geopolitical terms — “the national security challenge of the 21st century,” as Lindsey Graham puts it. According to this line of thinking, the threat, such as it is, emanates from adversaries like Russia, who transform Facebook, Twitter, and other platforms into (in the words of Virginia senator Mark Warner) “petri dishes for Russian disinformation and propaganda.”
It’s absolutely true that Kremlin-backed trolls were active on Facebook and Twitter and Instagram during the 2016 election cycle, creating fake accounts, groups, and events, and generally shitposting in service of Donald Trump. But Facebook and its peers, reacting to pressure and aided by fairly clear federal laws about foreign influence in elections, have gotten relatively good at identifying and stopping state-sponsored disinformation campaigns, whether from Russia or elsewhere. As state-sponsored threats are more aggressively confronted by platforms, the threat of misinformation in U.S. elections will increasingly originate not with geopolitical adversaries but with private intelligence contractors and their wealthy clients, as in Tulare. We’re already seeing this happen: In the race for Alabama’s senate seat last year, a group of “Democratic tech experts” backed by LinkedIn co-founder Reid Hoffman carried out a secret program designed to sow division among Alabama Republicans by creating fake conservative Facebook pages. (Senator Doug Jones, the intended beneficiary of the efforts, which were likely too small to have any effect, has requested an inquiry into the campaign.) These campaigns, based as they generally are in the United States and subject to greater legal protections than those that arise in St. Petersburg or Beijing, are more difficult to identify and terminate. You don’t even need to be that rich: Psy-Group, Entous and Farrow report, was selling its services for an average price of $350,000.
Of course, the question is: What are you getting for your money? Which brings us to the second lesson of the Tulare election, which is about what we should fear from such misinformation campaigns. Perhaps because of the highly disputed but prominent claim that “targeted cyberattacks” from Russian hackers “helped swing the election for Trump,” it’s easy to imagine that the effect of misinformation is specifically electoral: By launching a social media propaganda attack, you can swing an election toward your preferred candidate. But, as the Tulare story demonstrates, dirty tricks on social media are not, in and of themselves, enough to sway an election. In some cases, they can do the reverse. Kumar’s campaign manager, who was unaffiliated with Benzeevi and Psy-Group, told The New Yorker that he believed the attacks “had the opposite of the intended effect: they motivated Senovia’s supporters to turn out on election day.”
So what is the effect of social media dirty-tricks campaigns, if it’s not to decisively change the outcome of elections? Entous and Farrow detail one operation Psy-Group claims to have undertaken on behalf of a corporate client, in which a fake persona they’d created “became so well-established in the industry that he was quoted in mainstream press reports and even by European parliamentarians.” As Ram Ben-Barak, a former Psy-Group employee, puts it: “This is the challenge of our time. Everything is fake. It’s unbelievable.” Ultimately, the effect of targeted disinformation as a political tactic will be to destroy public trust at scale, as voters are confronted again and again with campaigns refracted through social media platforms awash with sketchy news sources and fake accounts (and a legacy media apparatus unfortunately beholden to those same platforms). Indeed, this was probably the original intent of Russian operations in 2016: to “undermine public faith in the U.S. democratic process” and decrease levels of civic trust, rather than to specifically help elect Donald Trump. Wealthy donors hiring private intelligence agencies to help their chosen candidates may convince themselves that they’re simply participating in the political process. What they’re really doing is eroding it.