Facebook’s ad algorithms still exclude women from seeing jobs
The study provides the latest evidence that Facebook has not resolved its ad discrimination problems since ProPublica first brought them to light in October 2016. At the time, ProPublica revealed that the platform allowed advertisers of jobs and housing to exclude certain audiences based on characteristics such as gender and race. Such groups receive special protection under US law, making the practice illegal. It took two and a half years and several legal battles before Facebook finally removed the feature.
A few months later, the US Department of Housing and Urban Development (HUD) filed a new lawsuit alleging that Facebook's ad-delivery algorithms were still excluding audiences from housing ads even when the advertiser had not specified any exclusion. A team of independent researchers including Korolova, led by Muhammad Ali and Piotr Sapieżyński of Northeastern University, confirmed these claims a week later. They found, for example, that houses for sale were shown more often to white users, while houses for rent were shown more often to minority users.
Korolova wanted to revisit the issue with her latest audit because the burden of proof is higher for employment discrimination than for housing discrimination. While any variation in ad delivery based on protected characteristics is illegal for housing ads, US employment law deems such variation justifiable if it is based on legitimate qualification differences. The new methodology controls for this factor.
“The design of the experiment is very clean,” says Sapieżyński, who was not involved in the latest study. While some might argue that car and jewelry salespeople do indeed have different qualifications, the differences between delivering pizza and delivering groceries are negligible. “These gender differences cannot be explained by gender differences in qualifications, nor by a lack of qualifications,” he adds. “Facebook can no longer say [this is] justifiable by law.”
The audit's release comes amid heightened scrutiny of Facebook's work on AI bias. In March, MIT Technology Review published the results of a nine-month investigation into the company's Responsible AI team, which found that the team, formed in 2018, had neglected to work on issues such as algorithmic amplification of misinformation and polarization because of its blinkered focus on AI bias. The company published a blog post shortly afterward emphasizing the importance of that work, saying specifically that Facebook was seeking to “better understand potential errors that may affect our ads system, as part of our ongoing and broader study of algorithmic fairness in ads.”
“We have taken significant steps to address issues of discrimination in ads, and teams are working on ads fairness today,” said Facebook spokesperson Joe Osborn in a statement. “Our system takes into account many signals to try to show people the ads they will be most interested in, but we understand the concerns raised in the report … We continue to work closely with the civil rights community, regulators, and academics on these important matters.”
Despite these claims, however, Korolova saw no significant change in how Facebook's ad-delivery algorithms worked between the 2019 audit and this one. “From that perspective, it's really disappointing, because we brought this to their attention two years ago,” she says. She has also offered to work with Facebook to resolve these issues, she says. “We haven't heard anything back. At least they haven't reached out to me.”
In previous interviews, the company said that, because of ongoing litigation, it was unable to discuss details of how it was working to mitigate algorithmic discrimination in its advertising service. The ad team said its progress had been limited by technical challenges.
Sapieżyński, who has now conducted three audits of the platform, says that excuse has nothing to do with it. “Facebook has yet to acknowledge that there is a problem,” he says. And while the team works out the technical issues, there is a simple interim solution: it could turn off algorithmic ad targeting specifically for housing, employment, and loan ads without affecting the rest of its service. It is really just a matter of political will, he says.
Christo Wilson, another researcher at Northeastern who studies algorithmic bias but did not participate in Korolova's or Sapieżyński's research, agrees: “How many times do researchers and journalists need to find these problems before we just accept that the whole ad-targeting system is broken?”