Algorithmic Bias or Fairness: The Importance of the Economic Context

Blog Post
Nov. 27, 2018

CATHERINE TUCKER
SLOAN DISTINGUISHED PROFESSOR OF MANAGEMENT SCIENCE AND PROFESSOR OF MARKETING, SLOAN SCHOOL OF MANAGEMENT, MIT

As a society, we have shifted from a world where policy fears focused on the ubiquity of digital data to one where those concerns center on the potential harm caused by the automated processing of that data. Given this, I find it useful as an economist to investigate what leads algorithms to reach apparently biased results, and whether there are causes grounded in economics.

Excellent work from the discipline of computer science has already documented apparent bias in the algorithmic delivery of internet advertising [1]. Recent research of mine built on this finding by running a field test on Facebook (replicated on Google and Twitter), which revealed that an ad promoting careers in science, technology, engineering, and math (STEM) was shown to between 20 and 40 percent more men than women across different age groups [2]. The test covered users in 190 countries, with the ad displayed to at least 5,000 eyeballs in each country. In every case, the ad's targeting was specified as gender-neutral.

When my team and I investigated why the ad was shown to far more men than women, we found that it was not because men use these internet sites more than women. Nor was it because women failed to show interest or click on these types of ads, thereby prompting the algorithm to respond to a perceived lack of interest. (In fact, our results showed that when women do see a STEM career ad, they are more likely than men to click on it.) Nor does it seem to echo any cultural bias against women in the workplace: measures of female equality in each country, as compiled by the World Bank, proved empirically irrelevant for predicting the bias.

Instead, we discovered that this variety of ad is shown to more men than women because other types of advertisers value the opportunity to get their ads in front of female (rather than male) eyeballs, and they will spend more to do it. Some advertisers' willingness to pay more to show ads to women means that an ad which does not specify a gender target is shown to fewer women than men. In essence, the delivery algorithm was designed to minimize costs and maximize exposure, so it showed the ad in question to fewer of the expensive women and to more of the relatively cheap men.
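To see how a cost-minimizing delivery system can produce this skew without any gender target, consider the following toy simulation. This is a minimal sketch, not the platform's actual code, and every number in it (our bid, the rival-bid distributions, the even gender split) is an assumption chosen purely for illustration: a gender-neutral ad with a fixed bid simply loses more of the auctions for female impressions because rival advertisers bid more for them.

```python
import random

# A minimal sketch of the price effect, not any platform's actual delivery
# code. All numbers below are hypothetical and chosen only for illustration.

random.seed(0)

OUR_BID = 1.0  # our ad's fixed, gender-neutral bid per impression (assumed)

def competing_bid(gender: str) -> float:
    """Highest rival bid for this impression. Assumption: competing
    advertisers value female eyeballs more, so their bids run higher."""
    mean = 1.2 if gender == "women" else 0.8
    return random.gauss(mean, 0.3)

# Equal numbers of men and women are online; each impression goes to
# whichever advertiser bids highest for it.
wins = {"women": 0, "men": 0}
for _ in range(100_000):
    gender = random.choice(["women", "men"])
    if OUR_BID > competing_bid(gender):
        wins[gender] += 1

print(wins)
# With these illustrative numbers, the gender-neutral ad wins roughly
# three times as many male impressions as female ones.
```

Nothing in this sketch encodes a preference about gender; the skew emerges entirely from the prices set by other bidders, which is precisely why inspecting the delivery algorithm's own code would not reveal it.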

I emphasize that, as with most econometric studies, there are cautions about generalizability. This was a case study of a single ad and a single instance of apparent bias. But since the apparent bias was not particular to the domain of careers, and instead resulted from a general price effect, I predict it may apply to other sensitive types of information advertisers might want to promote online. Even if that's not true, I believe there are useful regulatory insights to take from this example.

One policy tool that is often discussed as a panacea for bias is algorithmic transparency, where platforms are asked to make public the underlying code of their algorithms so that it can be analyzed for potential issues that may lead to prejudice. However, in this case, it is unlikely that much could have been prevented (or gained) by mandating algorithmic transparency, even supposing it were technologically possible. The apparent bias occurred because of other advertisers’ higher valuation of female eyeballs—something that would not have been clear from analyzing an algorithm that was simply intended to minimize costs.

This introduces the possibility that attempts to mandate a lack of bias in algorithms could lead to trade-offs; in this case, it might prevent advertisers from receiving a “discount” for showing ads to men. Moreover, while society may have an interest in preventing women from seeing fewer job ads than men, we may not care to ensure that women see just as many ads for shoes as men do, and this asymmetry makes regulating hard. Ultimately, it may be undesirable to regulate targeting on a global basis for all ads.

Apparent bias around who sees ads for STEM jobs may happen offline, too. Even without algorithms involved, employers with job listings may shun publications that are geared toward women because ads in those publications are more expensive. We only know about the online discrepancy because of the better data and measurement available there. This ultimately illustrates the importance of knowing the “but for” world, the world in which the algorithm did not exist, when determining the cause of bias, and also the difficulty of assessing that counterfactual in an offline, less measurable world.

References

  1. Latanya Sweeney, “Discrimination in Online Ad Delivery,” acmqueue 11, no. 3 (April 2013), https://queue.acm.org/detail.cfm?id=2460278.
  2. Anja Lambrecht and Catherine E. Tucker, “Algorithmic Bias? An Empirical Study into Apparent Gender-Based Discrimination in the Display of STEM Career Ads,” March 9, 2018, http://dx.doi.org/10.2139/ssrn.2852260.