Facebook’s Ad Algorithm Discriminates Even When It’s Not Told To, Study Finds


Facebook treats its content algorithm — the process by which the site determines who sees what on its News Feed — as a trade secret, burying its intricacies in nondisclosure agreements. But the advertising algorithm is a little less secretive, as Facebook provides its advertisers with a limited breakdown of how and where their ads are performing.

To test the ad algorithm for racial bias, researchers from Northeastern University, the University of Southern California, and the tech nonprofit Upturn treated it the way one might approach an estranged family member on the subject: posing litmus-test scenarios to see just how racist the answers would be. According to the researchers’ results, the algorithm is pretty racist.

The study focuses on Facebook’s advertising delivery system, the second half of the platform’s ad process. The Intercept explains how the whole operation works:

There are two basic steps to advertising on Facebook. The first is taken by advertisers when they choose certain segments of the Facebook population to target: Canadian women who enjoy badminton and Weezer, lacrosse dads over 40 with an interest in white genocide, and so forth. The second is taken by Facebook, when it makes an ad show up on certain people’s screens, reconciling the advertiser’s targeting preferences with the flow of people through Facebook’s apps and webpages in a given period of time. Advertisers can see which audiences ended up viewing the ad, but are never permitted to know the underlying logic of how those precise audiences were selected.

To understand more about the ad-delivery part of the equation, the researchers ran their own ad campaigns without specifying demographic targets, to see how the Facebook algorithm would respond. The results weren’t ideal: The researchers found a “significant skew in delivery along gender and racial lines” for their faux ads on employment and housing. With the housing ads, the system amounted to a digital form of redlining: The researchers found that Facebook delivered “broadly targeted ads for houses for sale to audiences of 75 percent white users,” but ads for rental units were displayed to “a more demographically balanced audience.”

The researchers ran ads for eleven generic job types — doctors, lumberjacks, AI developers, taxi drivers, cashiers, etc. — in North Carolina without specifying demographic targeting options. The Facebook algorithm wound up sending the ads to users it thought might fill those roles nicely. The audience for lumberjacks was 72 percent white and 90 percent male, the cashier audience was 85 percent female, and ads for taxi drivers were delivered to a 75 percent black audience.
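To make those skew figures concrete, here is a minimal, hypothetical Python sketch of the kind of calculation involved. The ad names and impression counts below are invented for illustration, not the study’s data; the point is simply how a per-ad delivery breakdown turns into the race and gender percentages the researchers report.

```python
# Hypothetical illustration of measuring delivery skew.
# The audience counts below are made up; they are NOT the study's data.

from collections import Counter

# Simulated delivery reports: for each ad, how many impressions went to
# users in each (race, gender) bucket, roughly the kind of aggregate
# breakdown an advertiser can see for a campaign.
delivery_reports = {
    "lumberjack": Counter({("white", "male"): 650, ("white", "female"): 70,
                           ("black", "male"): 230, ("black", "female"): 50}),
    "cashier":    Counter({("white", "female"): 500, ("black", "female"): 350,
                           ("white", "male"): 90,  ("black", "male"): 60}),
}

def audience_shares(counts):
    """Return the share of impressions by race and by gender."""
    total = sum(counts.values())
    by_race, by_gender = Counter(), Counter()
    for (race, gender), n in counts.items():
        by_race[race] += n
        by_gender[gender] += n
    return ({r: n / total for r, n in by_race.items()},
            {g: n / total for g, n in by_gender.items()})

for ad, counts in delivery_reports.items():
    race_share, gender_share = audience_shares(counts)
    print(ad, race_share, gender_share)
```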

The research from Northeastern, USC, and Upturn could represent a major problem for the world’s largest social media company: excluding people from housing and employment ads on the basis of race or gender is prohibited by landmark federal laws, the Fair Housing Act of 1968 and the Civil Rights Act of 1964, respectively. In a statement to The Intercept, the company said it stands “against discrimination in any form” and that it’s open to exploring changes to its ad-delivery system.

Last week, the federal government filed housing discrimination charges against Facebook over its ad processes. In 2016, ProPublica broke the news that Facebook allowed advertisers to exclude users grouped by “ethnic affinities,” a label the company applied to African-American, Asian-American, and Hispanic users. The platform got rid of the “ethnic affinity” option and, in July 2018, signed a legally binding agreement vowing to remove advertisers’ ability to exclude users based on race, religion, sexual orientation, and other protected classes. But nine months later, it looks like the problem hasn’t been fixed. Worse still, the Facebook algorithm now seems to be discriminating on behalf of its advertisers.
