
Could Facebook Swing the Election?


After Mark Zuckerberg publicly denounced Donald Trump (not by name, for some reason, but very clearly), Gizmodo reported that Facebook employees asked on an internal message board whether Facebook has a responsibility to try to stop a Trump presidency. The question, verbatim, was: “What responsibility does Facebook have to help prevent President Trump in 2017?”

Zuckerberg didn’t answer — publicly, at least. But there was a larger, and frankly scarier, question lurking behind the question of Facebook’s political responsibilities: Could Facebook help prevent President Trump? Not through lobbying or donations or political action committees, but simply by exploiting the enormous reach and power of its core products? Could Facebook, a private corporation with over a billion active users, swing an election just by adjusting its News Feed?

“The way that you present information on Facebook or other social-media sites can have subtle but meaningful effects on people’s moods, their attitudes,” says Paul Brewer, a professor in the communications department of the University of Delaware who has studied Facebook’s political effects. Facebook knows this better than anyone: a study it released in 2014 tested whether changing the emotional content of users’ News Feeds would affect their moods. (The answer: yes.)

The first thing Facebook would have to do, if it wanted to swing an election, would be to suss out exactly whom to target. “In politics, on some things, it’s very hard to change people’s minds,” says Brewer. “You’re not gonna change people from a Trump supporter to a Bernie supporter.” Trying to change the minds of those who are already vocally committed to one candidate is, basically, not worth the effort. So Facebook would, like any campaign, want to encourage turnout among the supporters of its preferred candidate, persuade the small number of genuinely uncommitted likely voters, and target apathetic voters who could be convinced to get out to the polls.

Facebook, understandably, is cagey about exactly what conclusions it can draw about users based on their behavior on the social network. But the company almost certainly has the data to determine what your politics are; it has itself trumpeted the correlations between “liked” Facebook pages and political affiliation. It’s unclear whether apathy, as such, would be as easy to identify, but third-party researchers have used public Facebook data to build algorithms that predict personality traits with a high degree of accuracy, so it seems likely the company could deduce your level of political engagement without much trouble.
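To make that concrete, here is a deliberately crude, hypothetical sketch of how a likes-to-politics guess might work. Everything in it, the page names, the labels, the use of an off-the-shelf classifier, is invented for illustration; it is not a description of any system Facebook has disclosed.

```python
# A toy illustration, not Facebook's actual system: guess political lean
# from the pages a user has liked, using a simple off-the-shelf classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Hypothetical training data: each user's liked pages, plus a known lean.
liked_pages = [
    "nra country_music fox_news",
    "npr farmers_market planned_parenthood",
    "ufc nascar breitbart",
    "yoga msnbc national_parks",
]
labels = ["right", "left", "right", "left"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(liked_pages)
model = LogisticRegression().fit(X, labels)

# Predict the lean of a new user from their likes alone.
new_user = vectorizer.transform(["country_music national_parks fox_news"])
print(model.predict(new_user), model.predict_proba(new_user))
```

The real version would have billions of data points and far subtler signals, but the basic move, turning likes into a probability, is the same.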

Assuming Facebook has successfully identified a persuadable voter, the next step would be the persuasion. Some of the tactics are basically the same as any other form of advertising. Most obviously, Facebook would be wise, says Brewer, to continually show stories and ads favoring its preferred candidate, varying them so as not to annoy the user with repetition. Familiarity bias, our psychological tendency to decide in favor of things we see often, would come into play here: hammering away with a candidate’s image works the same way a billboard every other mile works for a movie.

Facebook also has access to huge amounts of microtargeting data; that’s exactly what makes the company so valuable to advertisers, and it’s no different for politics. “There’s lots of opportunity, I think, to manipulate based on what they know about people,” says Josh Wright, executive director of Ideas42, a nonprofit lab that uses behavioral science for social good.

Wright mentioned in-group and out-group theories of social identity: These say, basically, that we tend to identify with, and like, people who are like us. Say Facebook knows that you like basketball. It could then ensure that, of the thousands of stories, photographs, and videos of its preferred candidate being posted across your extended network, you’re most likely to see the ones that show the candidate playing basketball or somehow involved with the sport. It’d be a simple but effective way to engender positive feelings.

More directly, Facebook could build algorithms that curate your News Feed based on particular “wedge issues” it can tell you care about. If your Facebook data identifies you as a Republican, but your posting history, explicitly or implicitly, shows you support immigration amnesty, Facebook could ensure that articles and comments about each candidate’s stance on immigration are more likely to appear in your feed.
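As a rough illustration of what that kind of curation could look like under the hood, here is a toy re-ranking sketch. The data structures, topic tags, and boost factor are all invented; Facebook has never described its ranking this way.

```python
# A toy sketch of wedge-issue curation: boost a post's ranking score when
# its topic matches an issue the user is inferred to care about.
# All structures and weights here are invented for illustration.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    topics: set[str]
    base_score: float  # whatever the ordinary feed-ranking model produced

def rerank(posts: list[Post], user_wedge_issues: set[str], boost: float = 2.0) -> list[Post]:
    def score(post: Post) -> float:
        # Multiply the score if the post touches any of the user's wedge issues.
        return post.base_score * (boost if post.topics & user_wedge_issues else 1.0)
    return sorted(posts, key=score, reverse=True)

feed = rerank(
    [Post("Candidate A on immigration", {"immigration"}, 0.5),
     Post("Cute dog video", {"pets"}, 0.9)],
    user_wedge_issues={"immigration"},
)
print([p.text for p in feed])  # the immigration post now outranks the dog video
```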

Facebook’s data about you could also come in handy for figuring out when to blast you with ads. “We as humans have limited attention, limited cognitive bandwidth, and we just can’t read everything that comes through Facebook,” says Wright. The metrics for attention are incredibly valuable here: Facebook can figure out when in the day you’re most likely to click through to other stories, to read long posts, or to interact with other users. That changes for everyone; a single parent should be targeted at a different time than someone who works nights or someone still in school or someone unemployed. TV ads can’t figure that stuff out, but Facebook can.
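A toy version of that kind of scheduling might look something like this; the click-through histories below are made up, and any real attention model would be far more sophisticated than picking the busiest hour.

```python
# A toy sketch of attention-based scheduling: from a user's historical
# click-throughs by hour of day, pick the hour to show a political ad.
# The engagement histories below are invented for illustration.
from collections import Counter

def best_hour(click_hours: list[int]) -> int:
    """click_hours: hour of day (0-23) for each past click-through."""
    by_hour = Counter(click_hours)
    return max(range(24), key=lambda h: by_hour.get(h, 0))

night_shift_worker = [2, 3, 3, 4, 2, 14, 3]
single_parent = [21, 21, 22, 20, 21, 8]
print(best_hour(night_shift_worker))  # 3 a.m.
print(best_hour(single_parent))       # 9 p.m.
```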

But both Wright and Brewer noted that the biggest potential for Facebook is not in its abilities as an advertiser but in its control over your social worldview. Brewer conducted a study in which he and his team presented real Facebook users with a fake political candidate, shown in news stories in the News Feed. Those stories were sometimes accompanied by (fake) comments — some positive, some negative.

The study found that what the candidate said didn’t much matter to the subjects; what mattered were the comments. “What you say about yourself, including on social media, isn’t as persuasive as what other people say about you,” says Brewer. “We know that people are motivated to present themselves in the best possible light.” This is why Amazon and Yelp reviews are so important: We trust other people more than we trust the subject.

Browsing through Facebook gives you a feeling of consensus that’s totally decided by your particular selection of friends. Studies have indicated that most Facebook users have a strikingly homogeneous circle of friends, in terms of taste and political beliefs and even geography. Your News Feed now, even without any of the sci-fi manipulation stuff, tends toward an echo chamber. Facebook’s algorithm is already set up to display things to you that it thinks you already like. There’s nothing outright sinister about that; Facebook simply wants to give you things it thinks you’ll like in order to increase your time on the site and your engagement, and it uses the data about things you’ve liked in the past to do that. “Already Facebook is more likely to cause people to reinforce ideology,” says Wright.

The comments, far from being useless chatter, are tremendously powerful. Facebook could very easily surface positive comments about its preferred candidate and bury negative ones. Wright even suggested Facebook might want to bury long back-and-forth debates; those imply that something negative is being said, and there’s also the struggle of getting people to actually read something long and involved.
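Here is a deliberately crude sketch of what that kind of comment curation could look like; the sentiment scores, the penalty for long threads, and the sample comments are all invented for illustration.

```python
# A toy sketch of comment curation: rank comments under a candidate's post
# so positive ones surface and negative ones or long argument threads sink.
# The sentiment scores and weights are invented for illustration.
from dataclasses import dataclass

@dataclass
class Comment:
    text: str
    sentiment: float   # -1.0 (negative) to 1.0 (positive), from some classifier
    reply_count: int   # long back-and-forth debates imply conflict

def curate(comments: list[Comment], debate_penalty: float = 0.1) -> list[Comment]:
    def score(c: Comment) -> float:
        return c.sentiment - debate_penalty * c.reply_count
    return sorted(comments, key=score, reverse=True)

ranked = curate([
    Comment("She'd be a great president!", 0.9, 0),
    Comment("Total disaster for this country.", -0.8, 2),
    Comment("Well actually, if you look at her record...", 0.1, 25),
])
print([c.text for c in ranked])  # the glowing comment floats to the top
```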

I asked about bots, or fake comments, and both Wright and Brewer suggested that these could work, but probably not as well as comments from real friends. “All else being equal, friends are more persuasive,” says Wright. But what about those friends we all have whom we totally disagree with, I asked. “There would be exceptions that would hurt you, but playing the law of averages, you’d win,” says Brewer.

So the most valuable thing Facebook has to offer is, well, us. Nowhere is this more true than in getting out the vote — the most important, and most difficult, task in any political campaign, and one at which Facebook has already demonstrated some success. As political polarization increases, there are fewer truly persuadable voters; to a large extent, candidates and parties win based on their ability to get their supporters out to vote. And studies have indicated that Facebook legitimately has the power to move people to the polls: One study conducted by researchers at the University of California, San Diego, during the 2010 midterm election found that those users presented with photographs of friends along with a message about voting were 0.39 percent more likely to actually go vote than those who saw just the message. It’s a seemingly small effect that would compound across a large population: The study’s researchers found that for every person directly mobilized by the images of friends, four additional voters turned out, thanks to the “social contagion” effect of Facebook’s friend networks. Imagine if those photographs were only shown to supporters of one party.
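The back-of-the-envelope math is simple enough to write out. The audience size here is invented; the 0.39 percent lift and the roughly four-to-one contagion multiplier come from the study described above.

```python
# Rough arithmetic for the turnout effect described above.
# The audience size is a made-up assumption; the lift and multiplier
# are the figures reported by the UCSD researchers.
audience = 50_000_000                  # hypothetical: users shown the voting message
directly_mobilized = audience * 0.0039 # 0.39 percent lift
contagion_multiplier = 4               # additional voters per directly mobilized voter
total_extra_voters = directly_mobilized * (1 + contagion_multiplier)
print(f"{total_extra_voters:,.0f} extra voters")  # 975,000 with these assumptions
```

Nearly a million extra voters, under these assumptions, from one nudge shown to one side's supporters.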

Perhaps the scariest part about this exploration into the possible is that there are all kinds of questions that we can’t answer, but that Facebook probably can. I asked whether adjacent posts make a difference. For example, would Facebook want to bookend a political post with something fluffy and positive, or something enraging that’ll get your blood boiling? Neither Wright nor Brewer knew, but we all know who could find out: Facebook. Little stuff, like whether a font or a color matters, whether a picture size matters, how long an ideal post should be, and all of the other tiny variables that make up a News Feed — those might make a difference, or they might not, but the only entity with the data and the manpower to figure it all out is, well, Facebook.
