Google showed women ads for lower-paying jobs

We know that humans are biased and discriminatory when it comes to hiring. Unfortunately, it seems that computers might be, too.

New research from Carnegie Mellon University and the International Computer Science Institute found that Google’s ad-targeting algorithms might inadvertently discriminate against certain Internet users. In a study presented last week at the Privacy Enhancing Technologies Symposium in Philadelphia, the researchers described a tool they built, called AdFisher, to study how Google’s ad-serving algorithm responds to what it learns about users.

The researchers wanted to know how the ads Google serves up in those millions of little ad boxes on sites across the Web change based on the information Google infers about a user. If a user begins browsing drug-addiction websites, for example, what kinds of ads will they see? What about a user who switches their Google Ad Settings profile from male to female?

When AdFisher’s fake users browsed a list of the top 100 employment websites, the researchers found that the profiles Google believed to be men were more likely to be shown ads for high-paying executive jobs than the profiles it believed to be women. In practice, that could leave women less likely to see, and so less likely to apply for, higher-salary jobs. In other words, the algorithm reinforced a societal inequity already present in so many workplaces: men frequently earn more than women, often simply because they are expected to earn more.
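The logic of that kind of experiment can be made concrete with a small, purely illustrative Python sketch. This is not the researchers’ AdFisher code: the serve_ads function, its probabilities, and the group sizes are invented here, and the gender gap is injected deliberately so the comparison has something to find. The general shape, though, follows the study’s approach of holding everything constant except the declared gender and then asking whether the difference in ads served is larger than chance alone would produce.

```python
import random

random.seed(42)

def serve_ads(profile_gender, n_ads=100):
    """Hypothetical ad server: returns 1 for a 'senior executive' ad, 0 otherwise.
    The gender gap is injected on purpose so the test has something to detect."""
    p_exec_ad = 0.018 if profile_gender == "male" else 0.003
    return [1 if random.random() < p_exec_ad else 0 for _ in range(n_ads)]

# Two groups of simulated profiles that differ only in declared gender.
male_ads = [sum(serve_ads("male")) for _ in range(500)]
female_ads = [sum(serve_ads("female")) for _ in range(500)]

observed_gap = sum(male_ads) / 500 - sum(female_ads) / 500

# Permutation test: if gender made no difference, shuffling the group labels
# should produce gaps as large as the observed one reasonably often.
pooled = male_ads + female_ads
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    gap = sum(pooled[:500]) / 500 - sum(pooled[500:]) / 500
    if abs(gap) >= abs(observed_gap):
        extreme += 1

print(f"observed gap in executive ads per profile: {observed_gap:.2f}")
print(f"permutation p-value: {extreme / trials:.4f}")
```

With the made-up numbers above, shuffled labels essentially never reproduce a gap as large as the observed one, which is the kind of signal that points to the profile’s declared gender, rather than noise, as the driver of the difference.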

“I think our findings suggest that there are parts of the ad ecosystem where kinds of discrimination are beginning to emerge and there is a lack of transparency,” Anupam Datta, a co-author of the study, told the MIT Technology Review. “This is concerning from a societal standpoint.”

It’s the latest in a series of algorithmic uproars for Google. In April, researchers found that Google image searches were “sexist”; a search for the term “CEO,” for example, returned results that were only 11 percent women. And earlier this month, the company was forced to issue an awkward apology after its new Photos app tagged pictures of black people as gorillas. And let’s not forget this 2013 UN campaign that looked at Google’s auto-suggestions when someone searches for “a woman should…” (The suggestion wasn’t “become a CEO.”) Sometimes the mistakes are just a fluke. Other times algorithms simply inherit the biases of their makers. And, as was the case with the Google gorilla fiasco, it can be difficult to tell which it is.

Often, instead of weeding out bias, algorithms amplify it. Consider LinkedIn, or a service like Jobaline, which helps companies decide whom to hire based on voice analysis. Typically, such matching algorithms merely reflect human preference, incorporating the biases of the people who programmed them or set up the search. Jobaline, for example, runs the risk of simply automating what’s known as linguistic profiling: discriminating against candidates based on factors such as accent and dialect, and favoring those whose voices match the cliché of success. The program might prioritize a candidate with a low, masculine-sounding voice that reads as white, the type of voice stereotypically associated with good leadership qualities.
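How that inheritance happens can be shown with a deliberately simplified sketch. The “voice pitch” feature, the weights, and the numbers below are invented for illustration and are not drawn from Jobaline or any real screening product: a toy model is trained on past hiring decisions that rewarded low-pitched voices, and it dutifully learns to penalize pitch even though pitch says nothing about skill.

```python
# Toy illustration of bias inheritance: a screening model trained on biased
# human decisions reproduces the bias. Requires numpy and scikit-learn.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

skill = rng.normal(size=n)        # what a fair process would care about
voice_pitch = rng.normal(size=n)  # irrelevant to job performance
# Historical (biased) human decisions: skill matters, but so does a deep voice.
hired = (0.8 * skill - 0.8 * voice_pitch + rng.normal(scale=0.5, size=n)) > 0

X = np.column_stack([skill, voice_pitch])
model = LogisticRegression().fit(X, hired)

print(f"learned weight on skill:       {model.coef_[0][0]:+.2f}")
print(f"learned weight on voice pitch: {model.coef_[0][1]:+.2f}")
# The model assigns a large negative weight to pitch: the bias that was in the
# training labels is now baked into the "objective" algorithm.
```

The model is not malicious; it simply optimizes against labels that already contain the bias, which is what “merely reflecting human preference” looks like in code.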

“In the United States, most companies are required to be equal opportunity employers; discrimination on the basis of race, sex, creed, religion, color, and national origin is prohibited,” danah boyd, a principal researcher at Microsoft Research, wrote in a paper last year. “However, there is nothing stopping an employer from discriminating on the basis of personal network. Increasingly, algorithmic means of decision-making provide new mechanisms through which this may occur.”

In the new research on Google’s ad-targeting algorithm, the researchers said it was unclear what caused such troubling patterns to emerge. The factors that determine who sees which ads include not only variables set by Google, but also choices made by ad buyers, the websites on which the ads appear, and Internet users themselves.

“Advertisers can choose to target the audience they want to reach, and we have policies that guide the type of interest-based ads that are allowed,” a Google spokeswoman said in a statement to Fusion. “We provide transparency to users with ‘Why This Ad’ notices and Ad Settings, as well as the ability to opt out of interest-based ads.”

But today’s algorithms sort people, segment them, and match them, and it is essential that they do so without falling back on the same stereotypes and judgments as the humans who program them. Algorithmic inequity is as much a threat to equality in the workplace and elsewhere as any human bias. To address such issues, some have gone so far as to call for “algorithm transparency.”

“We must rethink our models of discrimination and our mechanisms of accountability,” wrote boyd.

Pitching Jobaline on NPR, the company’s CEO, Luis Salazar, bragged that while human hirers might discriminate, computers do not.

“That’s the beauty of math,” he said. “It’s blind.”

But math isn’t actually blind. Instead, we input our biases into an equation and it spits them right back out at us.
