
‘Facebook Is a Fundamentally Addictive Product’

Facebook founder and CEO Mark Zuckerberg is testifying in front of Congress this week. To accompany the testimony, Select All is publishing transcripts of interviews with four ex-Facebook employees and one former investor, conducted as part of a wider project on the crisis within the tech industry that will be published later this week. These interviews include:

• Former Facebook designer Soleio Cuervo on Facebook’s commitment to users, what the media gets wrong, and why regulation is unnecessary.

• Early Facebook investor Roger McNamee on Facebook propaganda, early warning signs, and why outrage is so addictive.

• Former Zuckerberg speechwriter Kate Losse on how the Facebook founder thinks and what is hardest for him to wrap his mind around.

• Former Facebook product manager Antonio Garcia Martinez on the “sociopathic scene” of Silicon Valley and Mark Zuckerberg’s “disingenuous and strange” reaction to the election.

This interview is with Sandy Parakilas, who managed privacy issues and policy compliance on the Facebook platform in 2011 and 2012. He is now an advisor to the Center for Humane Technology.

When did you start working at Facebook, what did you do there, and when did you leave?
I started working at Facebook in 2011, about a year before the IPO. I was an operations manager responsible for three main areas on the Facebook platform. If you remember FarmVille and all the big web-based Facebook apps that were popular back then, that was what I was focused on: privacy issues, policy compliance, and running the ad network for ads inside those apps. I left in late 2012.

Okay. It seems the core of your concern with Facebook is that the company began prioritizing user growth and making money over the safety of its users. What led you to believe that?
I saw, over and over again, that they allocated resources in a way that implied they were almost entirely focused on growth and monetization at the expense of user protection. I could not get engineers to build or maintain some of the compliance functions that I felt were necessary, while other teams had very large numbers of engineers building lots of new products. That was the primary way I understood how they prioritized, because, you know, at a tech company, engineers are everything.

How does that manifest at Facebook specifically? Could you give a couple examples?
Tech companies are really focused on building new products. Facebook, Google, Apple, Amazon: the lifeblood of these companies is engineers. They’re the people who actually build the products we all use every day. The way you can understand a company’s key priorities is by looking at where it allocates engineering resources. At Facebook, I was told repeatedly, “Oh, you know, we have to make sure that X, Y, or Z doesn’t happen.” But I had no engineers to do that, so I had to think creatively about how to solve the abuse problems that were happening without any engineers. Meanwhile, teams building features around advertising and user growth had large numbers of engineers.

You drew up a map of vulnerabilities on Facebook for other executives in 2012, but nothing substantial came of it.
I drew up the PowerPoint deck, and in it I included a map of data vulnerabilities. It was specifically about the Facebook platform, but the platform allowed developers to access a huge amount of Facebook’s data. That was one of the biggest vulnerabilities the company had.

What are the consequences of such a vulnerability?
One of the examples I cited in the New York Times op-ed that I wrote: there was a developer with access to Facebook’s data who was accused of creating profiles of children without those children’s consent. The accusation was that this developer, which had access to Facebook’s developer APIs, was getting data about people’s friends and using that friend data to create profiles of people, including children, without their consent. The problem was, when we heard about these reports and contacted the developer, we had no way of proving whether that had actually happened, because we had no visibility into the data once it left Facebook’s servers. They had the data and could manipulate it however they wanted to. So Facebook had policies against things like this, but it gave us no ability to see what developers were actually doing.
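To make the mechanism concrete, here is a minimal sketch, in Python, of how a third-party app could pull friend data through the pre-2014 Graph API once a single user authorized it. The `requests` client and the specific fields fetched are assumptions added for illustration, not a reconstruction of any particular developer’s code.

```python
# Illustrative sketch: how one user's consent exposed their friends'
# data under the pre-2014 Facebook Graph API. Fields like "birthday"
# required extended friends_* permissions at the time; the exact
# fields shown here are assumptions for illustration.
import requests

GRAPH = "https://graph.facebook.com"
ACCESS_TOKEN = "USER_ACCESS_TOKEN"  # granted when one user installs the app

# A single consenting user's token was enough to enumerate their friends...
friends = requests.get(
    f"{GRAPH}/me/friends",
    params={"access_token": ACCESS_TOKEN},
).json().get("data", [])

for friend in friends:
    # ...and to fetch profile fields about each friend, none of whom
    # installed the app or consented themselves. Once this data left
    # Facebook's servers, Facebook had no visibility into its use.
    profile = requests.get(
        f"{GRAPH}/{friend['id']}",
        params={"access_token": ACCESS_TOKEN},
    ).json()
    print(profile.get("name"), profile.get("birthday"))
```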

It seems like this is a dynamic you see elsewhere at Facebook. There’s a set of guidelines, but the fundamentally open nature of the platform and Facebook’s reluctance to regulate it mean that it creates systems that sometimes fail. Content review is one example.
That’s exactly right.

Could you explain how that reverberates throughout Facebook?
I think there are two big problems: one of design and one of liability. In terms of design, they have intentionally built systems (and this is not just Facebook; it’s Twitter and other similar companies, too) that are designed to enable them to grow users and collect data as fast as possible. They have not prioritized features that would protect people against the most malicious cases of abuse. The reason it’s built that way is that the stock market measures them on revenue and user growth, so they literally take the metrics the stock market will measure them against and ask, “What are the features that will move these metrics the fastest?” Then they build those features. As they reach bigger and bigger scale, these malicious cases multiply, but they haven’t prioritized the systems or the processes to protect people against those abuse cases to the extent that they should.

The second reason this is such a huge problem is that they have no liability when something goes wrong. Section 230 of the Communications Decency Act of 1996 effectively shields internet companies from the actions of third parties on their platforms.

Platform immunity.
Yes. Because of this provision, which was originally envisioned around freedom of speech and did not anticipate a future in which these companies would be so central to people’s lives, they are able to hide behind that immunity. It enables them not to prioritize the features they need to build to protect users.

It seems to me that Facebook’s business interest is in direct conflict with the goal of protecting users. Facebook is so staggeringly profitable because it uses software and a relatively low number of workers to sell lots and lots of ads to buyers and then shows those ads to its many, many users.
But to protect users to the standard you would like, the company would need to hire more workers and dedicate substantially more resources to user protection, vastly reducing Facebook’s profitability. How does Facebook resolve that paradox?
I agree with most of what you just said, but I don’t think hiring more workers fully solves the problem. It will mitigate some of it, but the business model, as you said, is fundamentally at odds with their responsibility to society. If all they do is leave the business model intact, hire a few thousand more reviewers, and add a few new rules, which is basically what they’ve proposed, they will solve only a small part of the problem. They still have an incentive to attract as many users as possible, get those people to view as much content as possible, collect as much data as possible, and then sell that. Frankly, neither we nor they can think of every malicious use case in advance. No one saw this Russian attack coming, and the next wave will be totally different. I don’t think that actually solves their problem. I think they need to change their business model.

Facebook’s stated mission of connecting the world seems harmonious with the idea of attracting as many users as possible. Is there a fundamental flaw there that I’m not seeing?
The fundamental flaw is that they have created a business model that aligns with their stated mission only at the absolute surface level. Meaning that if they continue to do what they’re doing, they will have some kind of connection between every person on Earth. However, if you take connection to mean advancing the well-being of society, then they have absolutely failed at that.

What exactly do you think those negative impacts of Facebook are today? Where have they failed, and what are the consequences?
One of the core things that is going on is that they have incentives to get people to use their service as much as they possibly can, so that has driven them to create a product that is built to be addictive. Facebook is a fundamentally addictive product that is designed to capture as much of your attention as possible without any regard for the consequences. Tech addiction has a negative impact on your health, and on your children’s health. It enables bad actors to do new bad things, from electoral meddling to sex trafficking. It increases narcissism and people’s desire to be famous on Instagram. And all of those consequences ladder up to the business model of getting people to use the product as much as possible through addictive, intentional design tactics, and then monetizing their users’ attention through advertising.

Looking back at the history of Facebook, do you think that there is a moment when the company could have made a different decision or a different set of decisions and gone down a different path than the one it’s on today?
I don’t think that moment has passed. I think frankly they could still fundamentally change.

What would they need to do to change?
They’re going to have to change their business model quite dramatically. They say they want to make “time well spent” the focus of their product, but they have no incentive to do that, nor have they created a metric by which they would measure it. If you can imagine a world where Facebook charged a subscription instead of relying on advertising, people would use it less and Facebook would still make money. It would be equally profitable and more beneficial to society.

Facebook’s current path seems to me to be the most profitable possible one it could be on. That means that there’s going to be a substantial degree of resistance, not just from within Facebook, but from within the market as a whole. How would you aim to address that?
I don’t know that Facebook is on the absolute most profitable path possible. I think it is on an obviously profitable path. There’s a Facebook side of this and a societal side. On the Facebook side, I’m not sure they couldn’t find a new business model that is both more beneficial to society and equally or more profitable. In fact, if you charged users a few dollars a month, you would equal the revenue Facebook gets from advertising. It’s not inconceivable that a large percentage of their user base would be willing to pay a few dollars a month. If you compare it to Netflix, you’re talking about the same order of revenue.
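The arithmetic behind that claim is easy to sanity-check. Below is a rough back-of-envelope sketch; the revenue and user figures are approximate 2017 numbers added for illustration, not figures Parakilas cites.

```python
# Back-of-envelope check of the "few dollars a month" claim, using
# rough 2017 figures (illustrative assumptions, not from the interview).
annual_ad_revenue = 40e9       # USD; Facebook's 2017 revenue was ~$40B
monthly_active_users = 2.1e9   # ~2.1B monthly active users, end of 2017

breakeven = annual_ad_revenue / (monthly_active_users * 12)
print(f"Break-even subscription: ${breakeven:.2f} per user per month")
# ~$1.59/month averaged globally; ad revenue per U.S. user was several
# times higher, so "a few dollars a month" is the right order of magnitude.
```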

I’m not asking for a point-by-point plan for how you would get there, but I am curious: How does one address that?
There is less resistance than you would think to the idea that this is a huge problem that requires more regulation, both in Washington and definitely in Europe at the moment.

What about Silicon Valley?
I read a poll of the subscribers of [tech-industry news site] The Information, and something like 70 percent of the respondents said they want more regulation of Silicon Valley companies. These are Silicon Valley people, and more than two-thirds of them are saying they want more regulation.

Do you think that there’s something there? I mean, do you think that that applies to Facebook senior leadership?
They don’t want more regulation. There’s no doubt about that. But the question is not what they want. The question is, through a combination of government pressure, user pressure, and advertiser pressure, where do we end up? The reality is that on the advertiser side, we’ve already seen one major advertiser, Unilever, come out and demand that they do a better job of dealing with these issues. I suspect more advertisers will follow. There have also been calls from advertisers to form some kind of independent organization to deal with this set of issues. On the government side, we are just starting to see the first wave of laws addressing all of this, both in Congress and at the state level, and there will certainly be more in Europe. I suspect these laws will get more and more aggressive. Finally, on the user side, people are using Facebook less. Facebook saw a decrease in U.S. users for the very first time in Q4 of last year. We may see an even greater decline in Q1.

Do you see yourself as part of a political movement? I’m interested in understanding — is there a movement? What is that movement, and what are the goals?
I think one of the main problems with the duopoly that we have with Facebook and Google is that it prevents the kind of innovation and entrepreneurial success that we have traditionally seen in Silicon Valley. I think people here are finally starting to wake up to the fact that there are huge dead zones around both Facebook and Google where start-ups should not tread because they can’t compete. That aspect of this runs very counter to the ethos of Silicon Valley.
That is the opportunity for a case to be made. I don’t know that this is a widely understood fact yet, but I think it will be widely understood relatively soon. I think the anti-monopolists, among whom I count myself, are now in a very strong position to make that case.

This interview has been edited and condensed for length and clarity.
