Tomorrow and Wednesday, Facebook CEO Mark Zuckerberg will finally appear before Congress to answer questions about user privacy, data sharing and what the site is doing to help fight misinformation and election tampering as it serves more than a quarter of the world’s population every month.
What’s this whole thing about? Russia? Trump? Cambridge Analytica? Fake news? The Trump campaign using Russia and Cambridge Analytica to spread fake news? Part of the difficulty in mapping out this scandal is that Facebook has been accumulating scandals for close to a decade while quibbling over the semantics and specifics of its business. Put simply, tomorrow’s hearings will probably be a complete mess: a combination of political bluster, a lack of technical knowledge, and technological pedantry.
You can read Zuckerberg’s planned testimony here, but we’ll also lay it out for you. Here is what might come up.
The Facebook business model
Before anything else, it’s important to understand how Facebook’s business works. Facebook collects a ton of info about its users — not only what users explicitly provide, but also web browsing behavior — and then places users in categories based on that data. Advertisers can then specify what categories they want to advertise to, and Facebook acts as a middleman. If an advertiser wanted to reach “New Yorkers in their 20s who like rock music,” Facebook could likely do that. Facebook does not sell this data to advertisers; it offers advertisers the ability to put that data to use. This is an important technical distinction that Facebook has been stressing a lot in the past month.
Cambridge Analytica

The Cambridge Analytica scandal is the precipitating event. The rundown is this: years ago, Facebook allowed users to link their accounts to third-party apps, and those apps were then able to use that access to collect a lot of data on users. They collected data not just on the users who voluntarily connected to the apps, but also on those users’ friends. In other words, Facebook allowed users to share information about their friends, even if those friends never consented.
In 2013, a researcher named Aleksandr Kogan made an app that 300,000 people linked their Facebook accounts to, and through this, Kogan’s app was able to get information on roughly 50 million people: the 300,000 app users and each of those users’ friends. That data was then sold, illicitly, to Cambridge Analytica, according to Christopher Wylie, a whistleblower who worked for the political consulting firm. Cambridge Analytica, by its own account, specializes in using such data to build “psychographic” profiles meant to influence potential voters.
The important thing to know about Cambridge Analytica is that it receives a lot of its funding from Robert Mercer, a right-wing billionaire whose fortune also funds Breitbart News. As a prerequisite for receiving Mercer funding, many conservative candidates had to contract with Cambridge Analytica. Among the candidates using the firm’s services was Donald Trump.
For members of Congress opposed to Trump, it would be politically expedient to draw a line from Facebook’s easy access to user data to Trump’s win in 2016. Whether Cambridge Analytica actually had an effect on the election is very much an open question, but what is indisputable is that Facebook estimates the firm at one point had the personal information of 87 million users. And the firm was able to obtain that data because Facebook left the door wide open.
According to his statement to Congress, Zuckerberg will be apologetic about the lax practices that led to the current state of affairs. Throughout this whole scandal, however, he has not indicated any willingness to collect less data on Facebook users. He still believes that Facebook is a responsible data holder. Expect some pushback on that point.
Russia

The Russia situation is also going to muddy the Cambridge Analytica issue. A representative from Facebook has already appeared before Congress regarding the Russian online influence campaign, and it’s mostly separate from the Cambridge Analytica scandal, but that probably won’t stop someone from bringing it up with Zuckerberg.
Zuckerberg has already announced changes to Facebook’s political ad policy. Advertisers are now required to authenticate their identity, and users will be shown information about who paid for an ad in their feed and what targeting criteria were used. Political ads will also be publicly available for anyone to review, not just the users who were targeted. These changes anticipate possible Congressional regulation on the issue, and would bring online ads mostly in line with the transparency rules governing radio and television.
Fake news

I mean, who even knows. Someone will probably try to yell at Zuckerberg about “fake news” and cloud this whole issue even further. Zuckerberg will stress that Facebook is getting better at using artificial intelligence to spot “inauthentic” users and fake news, but he probably won’t get any more specific (and given the technical knowledge of this Congress, can you blame him?). He will also probably reiterate Facebook’s just-announced data-sharing program with researchers from various academic institutions.
One of the many problems with fake news and Russian trolls is that the trolls were not sharing anything different from what partisan groups were sharing: posts about gun rights, Black Lives Matter, immigration and so on. To draw a line would be to take a stronger stance on moderating speech, something Facebook does not want to do.
All three of these issues are interwoven and can be weaponized on either side of the aisle. For Republicans, Facebook has long faced accusations of being a liberal platform that suppresses conservative views. For Democrats, Facebook can be framed as a company willing to sacrifice user safety and election integrity in exchange for advertiser dollars. The only safe assumption is that neither side likes Mark Zuckerberg or his company, and that’s a dangerous place for the social network to be.