The new Google Photos app is disturbingly good at data-mining your photos

Last week, Google released a new photo app for storage and searching that gives users a rare and incredible glimpse into how much information computers can derive from our photos. When you download the app, it taps into photos you already have on Google’s Picasa/Plus platform and asks you to upload the photos on your smartphone to Google’s Cloud if you haven’t already. Then comes the fun part: searching them.

You can look for photos with “dogs” in them, “skiing,” “bridges,” or “parties.” In case you don’t know what to search, Google offers help, pre-categorizing your photos. When you click on the magnifying glass in the app, Google offers up categories for you: “People” (with photos of each of the people it’s picked out from your collection); “Places” (based on geolocation information in the photos); and “Things” (separated into categories it’s discerned, like “food,” “cars,” “beaches,” and “monuments”). It makes you realize how much a company with access to your photos can quickly learn about your tastes through data-mining them.

It gets really interesting when you ask Google to do more conceptual searches. Here at Fusion, we asked Google Photos to search for “beautiful” and got fascinating, surprising results: family, boots, drag queens, art, selfies, freaky-looking nursing dummies, crawfish plates, and a cornucopia of goodies for a night of fun: condoms, marijuana pipes, cigars and deodorant. Thumbs up, Google. Life is more beautiful with sex, selfies and sumptuous food, and it’s great that Google’s algorithms recognize that.

It could handle “beautiful,” but “gorgeous” seemed to stump it. Several people who were helping me test-drive the app got no results. On my own search, I got an image of a cartoon chick hatching. Google only deemed friends of Fusion’s Kashmir Hill “gorgeous.”

It was quite adept with objects like dogs, trees, and clothes. (Though it sometimes gets confused: one reporter’s search for “dog” yielded a bunch of photos of kangaroos she took at a park in Australia.) When I searched “robots,” it spat out a variety of appropriate images: a Beam telepresence robot, a robot maneuvering kitchen items, and even a robotic wheelchair I’d seen at a conference.

What’s particularly incredible is the facial recognition. The app sees individuals in photos even if they are barely in the picture, far in the background, or featured in a photo within a photo. When I did a search for my adult sister’s face, it recognized her in a photo I took of a 20-year-old elementary school picture of her. When I searched for my father’s face, it included a photo I took of a decorative tile-wall in Mexico. I thought it had messed up, because I didn’t see any people in the photo, but when I looked closely, there was a tiny version of my dad at the bottom. Facial recognition has gotten very powerful.

Google also seems to know how to flatter its users. When I typed in “skinny,” the search unearthed pictures of me, friends, my sister and my mother, as if it were trying to compliment us. But when I searched for other adjectives, particularly negative ones — fat, sad, upset, angry — Google Photos came up empty. (Some of my colleagues got similar results.) The technology for deciphering emotions in images already exists, so there’s no technical reason Google couldn’t turn up results for those searches. It gave us results for “love,” but not for “hate.” Whether it’s that we don’t take photos of ugly things, or that Google is shielding us, is something we’d really like to ask the search giant.

After I was done sifting through emotions, I turned my attention to my “things” folder. I was almost as impressed with this functionality as I was with the facial recognition. It grouped all my cute animals and food-porn pics into their own delightful albums. But it couldn’t always differentiate between real food and animals and things that looked like food and animals. For instance, it labeled photos of a mushroom-themed stool, a menu with images of flan and pie, and a recipe book with chicken wings on the cover all as food. And it labeled a Halloween pumpkin that had been carved in the shape of a snake as an animal. A human would have no issue figuring out that what was pictured wasn’t the real thing. Algorithms can’t do that yet, even when they’re Google-grade.

Google has been pouring millions into artificial intelligence research that can identify faces and other types of objects in images. Some of the AI technology behind Google Photos is an extension of the stuff that powered auto-tagging of images on Google+. But now the technology has busted out of the flailing social network and into Photos, where it’s likely to be much more useful to consumers. And it’s giving those consumers insight into advances made possible by deep learning, a type of artificial intelligence that, like the human brain, assesses information in layers. For example, if Google wanted to “identify” a flower, it would engineer a low-level layer to detect simple things like the edge of a petal. The next layer up would piece those edges together into shapes, and the layer above that into objects. That’s what allows computers to “see” what’s in images.
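To make that layered idea concrete, here’s a minimal sketch in Python using the PyTorch library. It is not Google’s actual model — the network, layer sizes, and category count are all illustrative assumptions — but it shows how stacked layers work their way up from raw pixels to edge-like patterns, then shapes, then a label.

```python
# A toy stand-in for a deep-learning image classifier (illustrative only,
# not Google's system): each layer builds on the simpler patterns found
# by the layer below it.
import torch
import torch.nn as nn

class TinyImageClassifier(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),   # low level: edges, color blobs
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # mid level: corners, textures, simple shapes
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),  # higher level: object parts (petals, wheels, faces)
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)      # maps learned features to labels like "flower"

    def forward(self, x):
        x = self.features(x)
        x = x.flatten(1)
        return self.classifier(x)

# Run one fake 64x64 RGB image through the network; output is one score per category.
model = TinyImageClassifier(num_classes=10)
scores = model(torch.randn(1, 3, 64, 64))
print(scores.shape)  # torch.Size([1, 10])
```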

But what about more abstract concepts, like “beautiful” and “gorgeous”? We asked Google how its algorithms might define them. “Deep neural nets, especially ones with many layers (20+ in the case of our ‘Inception’ network), can learn very abstract concepts. As long as there are examples out on the web to train on,” a spokesperson told us. So when there are images on the web described by specific words or phrases, Google can take those and train its algorithms to recognize the types of images that carry those verbal labels. The quality of the search results will depend on how many examples are out on the Internet for Google engineers to feed into the system.
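In rough outline, that training process looks something like the Python sketch below. Everything in it is a stand-in assumption — a tiny classifier, random tensors in place of labeled web photos, a made-up “beautiful” label — but the loop is the basic supervised-learning recipe the quote describes: show the network labeled examples and adjust its layers until its guesses match the labels.

```python
# Hypothetical sketch of training on labeled web images (stand-in data, not real photos).
import torch
import torch.nn as nn

# Stand-in classifier; Google's "Inception" network mentioned above has 20+ layers.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

fake_images = torch.randn(8, 3, 64, 64)   # stand-in for photos collected from the web
fake_labels = torch.randint(0, 2, (8,))   # stand-in for their human-written labels (1 = "beautiful")

for step in range(5):                     # real training runs far longer, on far more examples
    optimizer.zero_grad()
    loss = loss_fn(model(fake_images), fake_labels)
    loss.backward()                       # work out how each layer should change
    optimizer.step()                      # nudge the layers toward the labels
    print(f"step {step}: loss {loss.item():.3f}")
```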

The app is a great tool for anyone wanting to quickly search through their images. (It even creates animated GIFs, collages, and filtered photos for you.) The catch, though, is that using it requires uploading your photos to the Google cloud. That’s because search, both in the app and on the web, is powered by artificial intelligence, and the algorithms that let you sift easily through your photos live in the Google cloud. No upload, no search, which kills one of the main features of the app.

Several of the people I asked to test out the app’s search feature refused because they didn’t want to upload their naked selfies, drunken outings, or intimate moments — this despite the fact that Google overhauled its privacy settings this week, giving consumers more control over what types of information the company can see and collect about its users.

Some worried about who would be able to access their information. If we start tagging our photos with the names of our loved ones, will Google use that information to, for example, better face-index the world’s population? For now, the company is saying no:

“It’s very important for us to make this a private home for your photos,” Google Photos chief Anil Sabharwal told WIRED. “We’re not tying this back to any identity. You can label a person and over time, you can give them nicknames, but from the system’s perspective, it’s not tied to any identity… Any changes or intelligence that we learn from tagging a particular person in your account won’t impact anything on my account. We want to keep those things very siloed in order to maintain this very private area.”

That’s reassuring, and it might make people feel more comfortable using the service. But it’s also likely that Google will collect information about how we use the app to better train its AI, noting when its labels are right and how we re-label and re-categorize images. So the more you use this incredible app to search your photos, the more incredible visual searching will probably get.

Daniela Hernandez is a senior writer at Fusion. She likes science, robots, pugs, and coffee.
