Forget drones and spaceships: The Snapchat dogface filter is the future

So, Snap Inc. is going to be the next great tech IPO. Rumor has it that the company formerly known as Snapchat will raise $4 billion from the public markets at a valuation of $25 billion to $35 billion.

Over the last couple of years, Snap Inc. has gone from the punchline of jokes about sexting to what Stratechery founder Ben Thompson calls “the most compelling company in tech right now.” And what does the most compelling company in tech do? Snap Inc., according to a recent rebranding, “is a camera company.”

That sounds like a crazy thing for a long-time messaging app to say. Wouldn’t it make more sense to say that they are a communication service? Or a video platform? Or a media juggernaut? Or even an ephemerality company?

In a massive Fast Company profile from last October, Imran Khan, the company’s chief strategy officer, said, “We have two major businesses. One is communication, and the other is entertainment.”

But that’s not what CEO Evan Spiegel says with the new motto: Snap Inc. is a camera company. Why might he say that?

In the old days of social networking, you wanted to own the “social graph.” Now, for young people communicating mostly in pictures and often in pictures of themselves, the pictures are what end up generating the social graph.

Those who make the camera that people use end up owning the messages those pictures become. And they who own the messages own the network. And, if Facebook has shown us one thing, they who own the network own everybody else.

So being a camera company in an age of photo-taking obsession makes a lot of sense. And what’s the most important part of any camera? The lens! Snapchat’s Lenses, the technology which magically morphs and mixes your face with digital components, then, are the most important part of Snap Inc., the camera company.

When Snapchat introduced Lenses a year ago, they seemed a little like the funny cartoon hats that have long been available in Google Hangouts. And sure, some Snapchat lenses “superimpose” cartoon images onto our faces. But the real work lenses do is to process the video in real time, whether or not other elements are added. They smooth skin. They make eyes bigger and brighter. They make you more beautiful. Or they make you look like a goat or a fireball or a puppy.
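
To make that concrete, here is a toy sketch, in Python with OpenCV, of the kind of real-time processing a lens performs. It is an illustration under loose assumptions, not Snapchat’s actual pipeline, which is far more sophisticated; the “dog_ears.png” overlay file is hypothetical, and the sketch only finds a face box, smooths the skin inside it, and blends a cartoon on top.

```python
# A toy "lens" loop, NOT Snapchat's actual pipeline (which tracks dozens of
# facial landmarks in real time). Assumes OpenCV (pip install opencv-python)
# and a webcam; "dog_ears.png" is a hypothetical RGBA overlay image.
import cv2

face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
ears = cv2.imread("dog_ears.png", cv2.IMREAD_UNCHANGED)  # 4 channels: BGR + alpha

cap = cv2.VideoCapture(0)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        # Edit the "base layer": smooth the skin inside the detected face box.
        face = frame[y:y + h, x:x + w]
        frame[y:y + h, x:x + w] = cv2.bilateralFilter(face, 9, 75, 75)

        # Superimpose: alpha-blend cartoon ears just above the face box.
        overlay = cv2.resize(ears, (w, w // 2))
        top = max(0, y - overlay.shape[0])
        region = frame[top:y, x:x + overlay.shape[1]]
        if region.shape[:2] == overlay.shape[:2]:
            alpha = overlay[:, :, 3:] / 255.0
            region[:] = (1 - alpha) * region + alpha * overlay[:, :, :3]

    cv2.imshow("toy lens", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

Even this crude version makes the point: the interesting work happens to the pixels themselves before anything is drawn on top of them.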

Think of the training data that Snapchat has had: teens sending pictures of themselves to each other as messages. If people are talking in pictures, they need those pictures to be capable of expressing the whole range of human emotion. Sometimes you want clear skin and sometimes you want deer ears. Lenses let us capture ourselves as we would like to be transmitted, rather than what the camera actually sees.

Which is why the Snapchat camera dominates among teens: A Snapchattian camera understands itself as a device that captures and processes images for the purpose of transmission within a social network. And it’ll bring any technology to bear within the product to make that more fun and interesting and engaging.

From the beginning, Snapchat has been amazing at tossing out the conventions and assumptions about technology that we don’t even consciously hold. When Steve Jobs said that Apple would be making personal computers, he did not mean that he was putting a mainframe in every garage. And similarly, when Evan Spiegel calls Snap Inc. a camera company, he doesn’t mean a Leica in every hand. The image sensor on the smartphone, as Andreessen Horowitz’s Benedict Evans points out, is undergoing a tremendous transformation.

“Rather than thinking of a ‘digital camera,’ I’d suggest that one should think about the image sensor as an input method, just like the multi-touch screen,” Evans writes. “That points not just to new types of content but new interaction models.”

Being a camera company in this modern age means helping a picture-taker shape the reality that the camera perceives. We don’t just snap pure photos anymore. The moments we capture do not require fidelity to what actually was. We point a camera at something and then edit the images to make sure that the thing captured matches our mood and perceptions. In fact, most images we take with our cameras are not at all what we would see with our eyes. For one, our eyes are incredible visual instruments that can do things no camera can. But also, we can crank the ISO on fancy cameras to shoot in the dark. Or we can use slo-mo. Or Hyperlapse. Or tweak the photos with the many filter-heavy photo apps that rushed in after Instagram. Hell, the fucking Hubble Space Telescope pictures of the births of stars are as much the result of processing as they are the raw data that’s captured.

So, let’s look at that processing capability within Snapchat. The company bought that capability through the acquisition of Looksery for a rumored $150 million. Some lenses are permanent. Some come and go. Some are sponsored. On any given day, there are about 15 or 20 lenses. They are one of the main draws for the platform, as Stratechery’s Thompson told me: “Lenses are a reason to come back every day because the lenses are different every day. Just in driving daily active usage. It’s brilliant.”

And they are also, by far, the biggest success for “augmented reality,” which just about every major tech company and venture capital firm is making bets on. It’s worth dwelling on Snapchat’s approach to AR as a way of seeing just how good the company is at bringing fresh eyes to an old problem.

As far back as the mid-90s, people got the idea that augmented reality, the next important technological phenomenon, would be cartoons running around a scene in front of you, projected via glasses—and eventually contact lenses—into your eyes. Everyone recognized Pokemon Go as augmented reality because, you know, there was a Pikachu bouncing around on the sidewalk. Pokemon Go is paradigmatic AR. It would have been totally recognizable to the tech nerds of 1998. A Wired article from 1998 summarized it like this: “This ability to annotate the space around you–to superimpose pictures, graffiti, music, and other kinds of remembrance agents on top of it–is called augmented reality.”

Check out the assumptions baked into this pithy early definition of augmented reality.

First, superimposition. AR was going to be an overlay on the extant world. Imagine layers in Photoshop, stacked one on the next. AR would not *edit* the world, but sit on top of it. There were technical reasons for this. If you were imagining how to do this stuff with a 486 or Pentium processor (as was the deal in 1998), the idea of live-editing video would have been preposterous to you. So, we get superimposition.

Second, annotation of the space around you. You’d annotate the world in front of you: environments, rooms, landscapes. Not self-portraits. Not selfies.

A year later, journalist Steve Silberman went to visit Nokia for Wired and came back with this vision from one of its R&D bigwigs, Hannu Nieminen, who was then head of Nokia’s Visual Communications Technology laboratory:

“On a train-station platform, a line of text visible in your data glasses informs you the train is 15 minutes late,” he wrote. “More text captions flash over the heads of others on the platform, telling you who they are or who’s just around the corner.”

(THEY CALLED THEM DATA GLASSES!)

And sure, there would be games, but more importantly “memory prosthesis”: “Double-click on that name bobbing above the head of that guy who looks familiar and realize you had a terribly boring or interesting conversation with him at TED last year.”

So, add it all up, and 17 or 18 years ago you already had the working idea for AR: a high-utility user interface superimposed on “the space around you” through data glasses.

Thus far, that is not what augmented reality has turned out to be at all.

Instead, AR—as it is used by real people at scale—is about changing the way your face looks for selfies. And it’s not only about overlaying graphics, but actually editing the “base layer,” if you will, of reality as captured by a camera. Of course, Snapchat is not the only company doing this! The Korean app Snow is pretty good at it, too. As are a slate of other Korean apps, including B612 and Camera 360. In Korean, there’s a different word for “selfie,” which I think is relevant: selca, self+camera.
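
For readers who think in code, here is a minimal sketch of that distinction, with NumPy arrays standing in for video frames. The function names and the identity-warp example are my own illustrative assumptions, not any app’s actual API: superimposition composites an annotation layer on top of an untouched frame, while a lens-style edit resamples the frame’s own pixels.

```python
# A minimal sketch of "superimposition" vs. "editing the base layer."
# The function names and the identity warp are illustrative assumptions,
# not any real app's API.
import numpy as np

def superimpose(frame: np.ndarray, overlay: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    """1998-style AR: composite an annotation layer on top of an untouched frame."""
    return ((1 - alpha) * frame + alpha * overlay).astype(frame.dtype)

def edit_base_layer(frame: np.ndarray, warp: np.ndarray) -> np.ndarray:
    """Lens-style AR: resample the frame's own pixels (bigger eyes, smoother skin).
    warp[i, j] holds the (row, col) of the source pixel to draw at (i, j)."""
    return frame[warp[..., 0], warp[..., 1]]

# Usage: an identity warp leaves the frame untouched; a real lens would bend
# these coordinates around detected facial landmarks before sampling.
frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)
rows, cols = np.meshgrid(np.arange(480), np.arange(640), indexing="ij")
identity = np.stack([rows, cols], axis=-1)
assert np.array_equal(edit_base_layer(frame, identity), frame)

# Compositing a fully transparent layer also changes nothing.
assert np.array_equal(
    superimpose(frame, np.zeros_like(frame), np.zeros((480, 640, 1))), frame)
```

The first function leaves the captured frame alone; the second rewrites it. The AR people actually use at scale lives in the second function.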

If we projected our world out a bit, for a near-future novel, that Korean word, selca, would come to describe your self online. Selves and cameras would be widely recognized as inseparable, and “what we look like” would be far more plastic.

Maybe that freaks you out. But hey: “All self-construction is narrative, it’s all improvisation,” wrote Rachel Syme, in the definitive defense of selfies. “Filters don’t always blur out the facts; they can work as storytellers, dramatic lighting for personal theater.”

The central self-narrating truth of post-modernity is being made real through technology. To “just be yourself” is not only to stand unvarnished in harsh light, but to put on dog ears and perfect skin in one message, a flaming fireball face the next, sponsored troll hair the message after that, and so on.

Snap Inc. is a camera company. We are selcas.

 