The Google Photos 'gorilla' fail won't be the last time AIs offend us

ICYMI, Google Photos just unwittingly misidentified black people as gorillas. (Big fail, Google.) When the news surfaced on Twitter, a Google engineer almost immediately tweeted an apology. He also mentioned that at one point, the app was mistaking white faces for dogs and seals (a less racially inflammatory screw-up).

A Google spokesperson told Fusion in a statement that the company was “appalled and genuinely sorry that this happened,” saying the company was “taking immediate action” to stop it. “There is still clearly a lot of work to do with automatic image labeling, and we’re looking at how we can prevent these types of mistakes from happening in the future,” said the spokesperson.

But that might be harder than Google would like. Google Photos works thanks to neural networks, a class of software that’s exceptional at learning patterns from huge amounts of data, like images. After looking through thousands of images, they learn what makes a cat a cat and a car a car, for instance. (For a more in-depth explanation of how this works, check out this post I wrote earlier this week.) They do this best when images are well-lit and taken straight on. Real-world photos can be harder to decipher, as this incident makes clear, because they’re grainy, blurry or poorly shot. But even with picture-perfect images, it can be a long and arduous process to train a neural network to pick out objects. In the context of the gorilla #fail, that’s no bueno.
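For the technically inclined, here’s a minimal sketch (in Python, using Keras) of the kind of image classifier described above. It is not Google’s model, which is far larger and proprietary; the label set, image size and training data here are made up for illustration. The point is simply that the network learns only from the labeled photos it is shown and then assigns whichever label scores highest, whether or not that label makes sense.

```python
# Toy image classifier sketch -- NOT Google Photos' actual model.
# Labels, image size, and data are hypothetical, for illustration only.
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 5  # pretend label set, e.g. cat, car, dog, seal, person

model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # small RGB images
    layers.Conv2D(32, 3, activation="relu"),  # learn local edge/texture patterns
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),  # combine them into larger shapes
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(128, activation="relu"),
    layers.Dense(NUM_CLASSES, activation="softmax"),  # one score per label
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Stand-in for "thousands of labeled photos": random pixels and random labels.
x_train = np.random.rand(200, 64, 64, 3).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=200)
model.fit(x_train, y_train, epochs=1, verbose=0)

# At tagging time the model just picks the highest-scoring label for each photo,
# even when the image is blurry, poorly lit, or unlike anything it saw in training.
print(model.predict(x_train[:1]).argmax(axis=1))
```

Because the model’s “knowledge” is nothing more than the patterns in its training photos, a gap or bias in that data shows up directly in the tags it produces.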

“It’s not easy to fix,” said Bart Selman, an AI expert at Cornell University, by email. “Deep neural nets can give quite unexpected answers after training, like a random noise image may be classified as a ‘cat’ or a certain ‘cat’ image may be totally misclassified [as something else altogether].”

It’s also possible that, if Google Photos’ algorithm was trained on tagged photos from across the Internet, the training set included the kind of racist photos that come up in a “black people gorillas” Google search. So it may have inherited that racism from human beings, though some experts I spoke with said the mistake was likely just a total fluke, a very unfortunate instance of computer vision gone wrong.

Even if Google “fixes” this, it may well happen again, Selman said. “Even retraining the deep neural net, which would take significant time, would not be guaranteed to eliminate the risk of something like this happening [again],” he said.

So what’s the solution? Until the networks can be taught to behave themselves, they’ve sort of gotten a gag order: “I think what Google did is simply block certain labels altogether, at least in a broad range of contexts,” said Selman. So Google has probably blocked Photos results for searches of “gorilla” or “gorillas” for now.
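Here’s a rough, purely illustrative sketch of what that kind of stopgap might look like in code. The label names and confidence threshold are assumptions, not Google’s actual blocklist; the idea is just that blocked labels are stripped from the classifier’s output before any tag reaches the user.

```python
# Illustrative label blocklist -- an assumption about the approach, not Google's code.
BLOCKED_LABELS = {"gorilla", "gorillas"}

def safe_tags(predictions, threshold=0.5):
    """Keep only confident predictions whose labels aren't on the blocklist.

    `predictions` is a list of (label, confidence) pairs from a classifier.
    """
    return [
        (label, score)
        for label, score in predictions
        if score >= threshold and label.lower() not in BLOCKED_LABELS
    ]

# Example: the model's raw output still contains the offensive label,
# but it never shows up in the user-facing tags.
raw = [("person", 0.62), ("gorilla", 0.55), ("outdoors", 0.48)]
print(safe_tags(raw))  # [('person', 0.62)]
```

It’s a blunt instrument: the underlying model still makes the same mistake, the filter just hides it.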

AI is only going to become a bigger and bigger part of our lives and this sort of scenario, of AIs acting out in unpredictable ways, is going to become more commonplace. That’s why people like Elon Musk are donating millions of dollars to research how we can code up more ethical AIs. This week, the Future of Life Institute, an organization focused on making sure killer robots don’t kill off humanity in the far future, announced grants aimed at dealing with just this sort of unforeseen bad behavior.

“Although racial profiling was not directly addressed,” said Selman, who received one of the awards, “it will fall under the broader umbrella of how to properly constrain AI systems to be ethical, predictable, non-discriminatory, and, in general, conform to our societal standards.”

That’s all well and good, but what happens if they still get out of line? We all learn societal standards from a very young age, and we still commit crimes. What we need is a legal framework through which AIs (or their creators) can be held liable, says Ryan Calo, a cyberlaw expert at the University of Washington.

“The law today is not well positioned to deal [with these kinds of scenarios],” said Calo. “They break our standard legal models.”

If a user were extremely emotionally distressed at being categorized as a gorilla, he or she couldn’t sue the search giant for negligence, because that requires some foreseeability. If a gunman shoots into a crowd, it’s logical someone’s gonna get hit; how do you prove that Google could have foreseen this? The user couldn’t get Google on defamation or even negligent infliction of emotional distress either, because those also carry a hefty burden of proof and some legal maneuvering.

But in the future, Calo says, the law might adopt a category for this type of “crime,” maybe something like “negligent programming,” especially if enough of these incidents happen. Such offenses would likely carry a lower penalty for the engineers coding these systems than if humans had committed the equivalent acts themselves. Calo predicts that we’ll be seeing more and more of this in years to come, so it behooves us to start thinking seriously about how to amend our laws to deal with these problems. The robots are coming, after all.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
