
‘Facebook Does Not Exist to Return Profits to Its Shareholders’

Facebook founder and CEO Mark Zuckerberg is testifying in front of Congress this week. To accompany the testimony, Select All is publishing transcripts of interviews with four ex-Facebook employees and one former investor, conducted as part of a wider project on the crisis within the tech industry that will be published later this week. These interviews include:

• Former Facebook manager Sandy Parakilas on privacy, addiction, and why Facebook must “dramatically” change its business model.

• Former Facebook product manager Antonio Garcia Martinez on the “sociopathic scene” of Silicon Valley and Mark Zuckerberg’s “disingenuous and strange” reaction to the election.

• Early Facebook investor Roger McNamee on Facebook propaganda, early warning signs, and why outrage is so addictive.

• Former Zuckerberg speechwriter Kate Losse on how the Facebook founder thinks and what is hardest for him to wrap his mind around.

This interview is with Soleio Cuervo, a product designer at Facebook between 2005 and 2011. He is now a founding partner of Combine, a design and venture-capital firm.

Could you tell me a little bit about your time at Facebook and your career: how you ended up there, and how long you were there?
I joined the company in the late summer of 2005. I was one of Facebook’s earliest designers; I believe I was the second member of the product-design team there. It was a very formative time: we were still a college network, and this was before News Feed, before a lot of the features that folks remember. I was a designer at Facebook for six years, 2005 through 2011, during which I led the design and development of several of the company’s seminal products and features.

Over the first half of my time there, I was responsible for the initial design of some of the apps that served News Feed at its launch in 2006, and was then the lead designer of video as part of the platform launch. Then I shifted over to our communications products: I worked on the redesign of News Feed that introduced the Like button in 2009, and led the unification of chat and messages and, ultimately, the launch of Messenger in 2011.

So, I played a pretty direct role in leading the design of the product there, and was very fortunate to have worked not only with some exceptional colleagues but, obviously, really closely with today’s leadership team: Mark Zuckerberg, Chris Cox, and a couple of the other execs who are currently overseeing Facebook. The other thing I did there was help grow that organization. By the time I left Facebook, we had about 40-some-odd product designers, some really exceptional folks.

I’m interested in the idea that there was an inherent problem that precedes Facebook in how the internet was built, baked into the business model, which is that these companies have to operate at such a large scale and design products that accommodate so many users that there isn’t really a way they can successfully manage concerns as varied as toxic community issues, abusive users, and the civic issues we see arise with fake news, government-sponsored trolling, and so on. And I’m curious about the specific decisions you were present for: the extent to which those concerns were baked into the decisions, or weren’t, and how you incorporated them, if at all. I’m thinking of the rollout of News Feed, for example, as being such a moment.
Facebook back then was not Facebook now, meaning it was not a service that had saturated the market, it was not clear whether or not it would grow to become a juggernaut. And more importantly, we were building products that ultimately were trying to create real demonstrable social value in people’s lives. Can we provide products through which folks can share with confidence, with a sense of who their audiences are? Can they share the meaningful moments that they wanted to? Can they connect with the people that mattered most to them, be it family or friends or work colleagues? And could we provide features that were fun and engaging on a daily basis?

I think that the thing that has always been true of Facebook, from my perspective, since the early days, is that Facebook was really motivated to build what it viewed as the universal product, meaning a product that spoke to a very core human motivation to exchange information, to communicate, to share meaningful moments, and to interact with the people you were closest with. And Facebook always felt, from the get-go, that this was not a product that was targeted to Americans, or to millennials, or to a specific demographic, that it was fundamentally a universal product.

So, a lot of the product-development principles that arose from that ultimately spoke to — can we move quickly to identify what are the most valuable and universal features that people were consistently engaging with? And how can we make it so that people can actually use the product? And that ultimately led to the extension beyond colleges, so that people can just register for the service, that led to the internationalization of the products so that people who did not speak English could still use the products, and more importantly, to all of the mobile products that are now finally tailored to emerging markets where bandwidth constraints and data constraints could impact the usability of the service. So, I think that there is this core fundamental motivation at Facebook to connect people to the folks that matter most to them and to create a platform that ultimately creates genuine, meaningful, social value.

So, it seems like you made the bold case for making Facebook as universal a product as possible and for designing it to be used by as many people as possible: to make it as accessible as possible, broadly speaking. Is that a fair characterization?
That is a fair characterization, and it was an explicit product-design principle to create products and features that had universal appeal.

I think what its critics point out is that it’s possible for Facebook to earnestly believe in connecting the world, expanding Facebook for everyone, creating a tool that connects people and can link them to one another, across great distances or even close together, much more easily, but that, simultaneously, this is an incredibly lucrative business. And it seems, now, that there are some consequences to connecting everybody. I’m curious how you view holding those two ideas, because it seems like they are in tension now. How does Facebook reconcile that tension? And, if you can think of a moment in your time there when that exact tension was something you had to deal with, I’d be very curious to hear about it.
I can sort of speak to two things. There’s the core motivation of the leadership team there, which I feel as though many critics are not privy to. The critics seem to posit that the strong profit motive is what’s driving all this. And you can rest assured that that is not the motive of the early leadership team at Facebook.

I mean, you laugh, but Facebook’s not a charity case; you’re a VC, it exists to deliver a return for its investors.
I would strongly disagree with that statement. Facebook does not exist to return profits to its shareholders. To be crystal clear, it exists to connect the world and to serve this very fundamental, universal need for people to communicate with the folks they’re closest with. That was true of the company far before it was profitable, and it continues to be the case today. Now, what is also worth noting is that the company has always had a very user-centric mind-set, one that has always responded very swiftly to user concerns, and you can trace this all the way back to the original launch of News Feed, when there was strong backlash within the existing community to News Feed and a desire for clarification and control over privacy settings. I remember the company staffing a team overnight to address it, and ultimately shipping features within two weeks of the News Feed launch to address those complaints and those concerns. I remember the privacy backlash that took place, I want to say, back in 2010.

There was a privacy backlash where people felt as though they did not have adequate control over the privacy settings: a lot of the new defaults being rolled out to users seemed to mean that content was inadvertently public by default, and the privacy controls that were originally designed to give people a high degree of granularity were too damn confusing to the average Joe. People wanted much simpler controls, and an entire team was spun up. The entire privacy back end was completely re-engineered to serve this desire from users to have much simpler control over their privacy.

What I find to be wildly inaccurate today is the idea that the company doesn’t take these criticisms seriously, that it doesn’t move swiftly to enact new policies and new product features where the community is responding negatively to certain perceptions of how the product works. And it is my expectation that, much as Facebook has in the past invested in product infrastructure to serve the desires of individual users worldwide, they will start to invest in further infrastructure to prevent a lot of the issues that have arisen over the past year. And I think the company’s been pretty consistent about that. It’s my view that they’re not paying lip service to it, that they are very much completely remapping their products to address a lot of these issues. I think they’ve been really forthcoming about the specific things that they’re going to have to build and invest in.

Do you think that they can build and do all these things on their own? Or do you think that there’s a degree to which government intervention, which is a very popular conversation topic now, is necessary or likely?
I don’t believe it is necessary. I cannot say whether or not it’s likely, but I do believe that Facebook is best equipped to build the tools and the controls necessary for Facebook to be a safe environment for people to use. And Facebook, ultimately, is accountable and responsible for defending its own platform. They do not take that responsibility lightly.

It is my view that the company responds swiftly to the demands of the public and the community, and that it has a legacy of having done so in the past. In the case of data leakage through the Facebook platform, it is my understanding that several years ago they clamped down on their data policies to prevent it from happening further. Now, did that happen in time, did it happen early enough? No. But it was also the case that, when data portability was very top of mind for, again, critics of the company back then, the company was looking to create ways in which the platform could be of greater value to its users.

I guess my question is, if Facebook is doing a good job responding to user concerns, why are people so mad?
That’s a great question. I think one reason people are mad is that there is strong resentment of the American election outcome; a big faction of Americans today strongly disagree with that outcome. The second bit is that they’re mad because it’s turned out that this breach in trust is a legitimate one, and it’s certainly one that Facebook has apologized for.

Which breach in trust, specifically?
Specifically, as I understand it, the use of data obtained through the Facebook platform, which was then sold to third parties in a way that Facebook had no direct control over.

So, I know that you’re not naming Cambridge Analytica. But this anger and this dissatisfaction preceded that. When I ask why people are mad, I’m not just referring to that scandal. It seems like there’s a degree of discontent that precedes that.
I think, like I said, there is great discontent over the role that Facebook has played in potentially influencing an election whose outcome a broad swath of the American population disagrees with. That is one big motivator.

True, absolutely. But then there’s also the fact that it’s not just liberals who are mad at Facebook; there are a bunch of conservatives mad at Facebook, too.
I think one part of the discontent is something that I feel as though the media has definitely fed, which is a broader discontent and discomfort with the role that technology companies play in society, Facebook being one of the largest.

What do you mean by that?
That technology companies, in society today, are driving extraordinary economic growth; that not all members of society participate in that growth directly; and that it’s unclear whether these companies are, one, adequately regulated, and, two, doing their best to serve society and protect society. It is my view, also, that Facebook is expending extraordinary energy and resources to make sure that it honors both of those things: namely, that it is providing a platform that serves the interests and needs of individuals, but also one that serves the needs and interests of society. And I don’t, for a moment, accept the idea that Facebook has dragged its feet or that Facebook is only responding because of the public backlash. There have been steady and proactive investments in securing the platform for quite some time, and, as it turns out, like any great defense system, it sometimes has failure points, and sometimes those failure points are catastrophic. But that is not to say that nobody’s working on it, or that the company’s being negligent.

I guess I’m kind of curious about why you think a company that has catastrophic points of weakness is also a company that doesn’t require government intervention. Do you just think that these points of weakness are a built-in by-product of having platforms as big and accessible as Facebook?
That’s a good question. I don’t know if I’m the best person to answer that last portion of the question, but I don’t believe it necessitates third-party regulation for them to be able to act on it. Facebook is the best group of people on the planet to regulate Facebook, and they have a very strong motive to do so. You see, I have worked alongside these folks for years. I can speak to their integrity. I can speak to their work ethic. And I can certainly speak to their seriousness. That has been consistently the case for as long as I have known that team and worked with these folks, and I certainly still perceive it to be the case.

So, these people are as ethically strong as they come, and working extremely hard, and all these different things, but they still seem to fall short of the standard that’s being demanded by the public. What do you see as being wrong with the picture, then? What are some of the causes of the gap between the caliber of the people, their commitment to the cause and to doing good, and what’s being demanded by the public?
I think part of it, and this is something that, as I understand it, the company has fully accounted for and spoken to, is that they did not foresee state actors using the platform.

Why do you think that they didn’t foresee it?
The same reason why other parties did not foresee it. I don’t know, actually, that is a great question to ask an executive at Facebook.

There are all sorts of laws that exist, for example, that do not allow foreigners to buy political advertising in the U.S. on TV. There are laws in place for existing media that prevent this kind of thing, but seemingly there was nothing at Facebook, which had lobbied the FEC to prevent similar election standards from being the rule at Facebook. There was nothing that stopped anyone, whether it was Russians or a guy in South Africa who wanted to have a laugh, from doing this kind of thing, from buying explicitly political advertising. Do you think that if Silicon Valley companies had been more open to these kinds of existing and traditional regulations, some of this could have been prevented?
I’m sure that they would agree with that statement. There are probably external policies and internal policies that would’ve prevented the abuse of the platforms, and I’m sure that they are currently working on new internal policies to prevent it going forward. I’m pretty confident, and you’re probably familiar with this more so than I am, that they have already started enacting new policies to ensure that state actors cannot run ads in the way that they have before. Now, is that too late? Maybe yes, maybe no; likely yes. Have they apologized and taken accountability for it? Yes.

What does accountability look like to you? This is one of those areas where I think there’s a gap between public perception and what happens internally. There haven’t been any firings, and there haven’t been any fines handed down from the government. Those, I would imagine, are the traditional standards of accountability that the public has imposed before. What makes you say that you feel they’ve taken sufficient accountability for this previously?
What makes me say that they’ve taken sufficient accountability for this previously? Like I said, they have enacted new products and features and policies to counteract abuse of the platform in the past, and they will likely continue doing it going forward.

Is there anything that gives you concern about the broader way that large digital platforms operate? Any way you think they could improve to get ahead of these kinds of problems and backlashes? It’s one thing to quickly and aggressively do triage, but how do you get that sense of foresight you mentioned before, which seemed to have been absent in dealing with a number of these issues?
That’s a great question. Again, I am not an expert on the matter, so you should take this with a grain of salt, but in terms of my perspective on the potential solution space, the two-pronged approach I would consider is, one, making sure that Facebook is very proactive in communicating its product policies to regulators and to legislators, the people who represent the public, to ensure that there is an active dialogue there. I think that’s absolutely necessary in order for Facebook to maintain a steady stream of communication and expectation-setting, and also to be much more proactive about enacting emerging policies that are relevant to the populace; that seems like the natural place for that to exist.

And then, two, there is the notion of identifying and employing people who are experts at identifying exploits. I think in the technology sphere they’re referred to as white hats: hackers who are ultimately aligned with the best interests of society but are very good at thinking through edge cases and exploits. And certainly, we know now that there are some very smart, motivated people probing and looking for ways to exploit the platforms, so I view nurturing that competency as an important recruiting initiative that Facebook must embark on. There are dangerous people on the other side of that coin, so it’s really fundamental for the organization to be thinking of this as an internal discipline to be nurturing and developing.

So, you’re somebody who’s worked in Silicon Valley a while, and part of your job is to invest in companies and get on the ground floor of helping to build the kinds of cultures that can accommodate that kind of foresight. Do you think there’s something embedded in the DNA of Silicon Valley that limits the ability of the people who work in these companies to prevent these problems before they arise?
It has been my experience, working in Silicon Valley, that the entrepreneurs, the engineers, and the designers who work here have a very optimistic worldview, and they have blind spots for malicious intent, in some cases. So, if I had to make broad, sweeping generalizations, which seems to be what you’re asking for, I think rooted in Silicon Valley’s DNA is an expectation of people’s better nature. I think it’s really important to note that, for every Facebook that exists out there, there are hundreds of start-ups that don’t go anywhere, that are built by well-intentioned people trying to create something meaningful and sustainable, but it’s truly hard to build a new business. I don’t fault Facebook for not anticipating how it could be used to rig elections. That was certainly not top of mind for us; that was not a scenario we anticipated. I apologize for it, and hopefully the world can understand why that was not very high up on our list, because we were trying to build a product that was not yet globally relevant.

Do you think that they needed to fail in order to succeed the next time?
I think that’s true; it’s just the human condition. Show me a person who has succeeded who hasn’t failed. That is a human folly, right? Every admirable person on this planet has catastrophic failures in their success stories, and the thing that I don’t question for a moment is the integrity and fierceness of Mark Zuckerberg and his team. They’ve publicly acknowledged these failures, and they’re working very diligently to correct them. And I say that as a former colleague, as somebody who has worked very closely with him, as a close friend of his, which is true. And I don’t know if the critics can speak from the sort of relationship that lets me unblinkingly say that that’s the case.

And do you, I’m just curious, do you still hold Facebook stock?
I don’t disclose that kind of information to the public.

Is there anything that we didn’t discuss that you think is kind of important or critical to add here?
It is my perception that the media has an ax to grind. I don’t know if that’s actually true. Yeah, that has been my perception thus far.

Why does the media have an ax to grind?
It seems to me that there is a very specific viewpoint being projected onto the circumstances, and that I’m not, at least as a consumer of this content, hearing enough in this coverage from the folks who are spending their nights and weekends tackling these challenges. I think it’s a relevant viewpoint, and I would urge your publication and all media publications to incorporate it.

Well, if I try to talk to those people, they get fired.
How do you mean?

Well, if I were to email a Facebook employee and ask them for their insider perspective, realistically, they’re not going to get approval to speak with me, and simultaneously, if they were to speak to me without that approval they’d be fired. That’s policy.
Why do those policies exist?

I think that they ostensibly exist to protect competitive secrets; that’s one explanation that’s given. But I think the real reason is to ensure that the company can present a unified public line.
Mm-hmm.

And that doesn’t make Facebook any different from other companies, though tech companies like Google and Facebook tend to be a little bit more successful in ensuring this kind of discipline. I’d love to grind my ax with a few different Facebook employees, and have tried multiple times, but the reality is that I don’t think they’re allowed to talk to me.
Yeah. I have not worked with the company since 2011, so I genuinely cannot speak to their communications policies or how they handle their relationship with the press. But I can say to you that they are very much shifting all of their energy and focus to ensuring that the platform is genuinely safe for users and that it upholds its civic duty, given its sheer reach and the impact it can have, positive or negative, on human lives. And it’s something that they don’t take lightly, whatsoever. I don’t recall, at any moment, trading off decisions on a profit motive. That was never part of our calculations.

This interview has been edited and condensed for length and clarity.
