Earlier this year, which we’re only 11 days into, Logan Paul got in trouble for posting something very stupid on YouTube. He went to a forest in Japan that’s well known as a site of suicides and filmed the aftermath of a recent one. Predictably, and understandably, he was widely condemned for it. Yesterday, Google dropped him from its top tier of advertising (where the biggest clients are) and put on hold productions he was involved in for its subscription service, YouTube Red.
Last week, I argued that the Logan Paul controversy was a watershed moment for YouTube, and that as more scrutiny is applied to these unfathomably large information distributors and social networks — Facebook, Twitter, Google, among others — YouTube is now getting its time under the microscope. YouTube is not going away. It’s too big, too central to the modern internet. But it is going to change fundamentally, as the behavior of its biggest stars and the glut of unmoderated content continue to attract concern.
The media fervor over what Logan Paul did would lead you to believe that he is the disease, and not merely a symptom of it. As my colleague Madison Malone Kircher wrote last week, the culture of the megapopular YouTube influencer was always headed in this direction. The internet’s freedom (and relative freedom from consequence) has allowed people to embarrass themselves online for decades. Why stop now?
So, sure, the popular kids at the top of the YouTube food chain bear some responsibility. The other party, reluctant to take responsibility, is YouTube the company. YouTube is not just people getting into feuds and trying to one-up each other. The entire culture is powered by a confluence of search algorithms and computer-determined recommendations.
That YouTube is powered by technology that is ruthlessly efficient at holding attention shouldn’t surprise anyone. At CES this week, the company explained just how effective it actually is. Speaking on a panel, as reported by CNET, chief product officer Neal Mohan threw out a few stats. On mobile, viewing sessions on YouTube last more than 60 minutes. That’s a fraction of the five hours the average American spends watching TV every day, but YouTube viewership skews young, and, presumably, demographic and industry changes will lead to online video taking the place of traditional TV at some point.
Mohan offered another, even more important, stat: According to its data, 70 percent of time spent on YouTube is determined by automated recommendations. In other words, the videos that the site compels users to watch, and that keep them on the site and watching, are chosen by algorithmic means. For the majority of their time spent on YouTube, users aren’t directing the viewing experience — YouTube is.
It’s a situation analogous to the algorithmically sorted Facebook News Feed, a system that keeps you coming back to see things that it believes you want to see, making choices about what you see and what you don’t.
Just like Facebook, YouTube seems to want all of the benefits without any of the responsibility. It derives ad revenue from its most popular users, and its automated systems are, on most occasions, responsible for determining what users watch. This effectively incentivizes YouTube to recommend videos that have the most lucrative advertisers attached to them. That would explain why, according to Bloomberg, YouTube recently conceived of a plan to vet content specifically made by popular creators.
On the other hand, YouTube doesn’t want to get into the business of moderating its platform or determining what users can or can’t say or depict. To do so is costly, and counter to the company’s eventual goal of eliminating human involvement entirely (though the company did announce its intent to hire 10,000 more people to review content on the site, a PR tactic that’s become a regular occurrence in the last year). To compare it to standard television, YouTube wants to determine what makes it to air, but not what gets green-lit.
The point is: YouTube is responsible for the systems it has put in place. It is still (understandably) boasting about these systems’ ability to determine what people see, hear, and experience on a site that young people are spending hours on every day. It is not difficult to argue that YouTube has the power to influence its audience’s thought and debate and morality and perception of truth; the company has practically admitted as much itself.