You Autocomplete Me: How Google Built an Algorithm to Peer Into My Soul

Recently I discovered that Google is more in touch with my emotions than I am. It was the last stage of a breakup, the coda to the relationship where you air over email your lingering doubts, demands, and disbelief. I still believed I could muster the eloquence and maturity to patch things back together. I would elaborate on my feelings, the force of which would move and persuade my ex-girlfriend, and my confidence grew with every word I typed. When I scrolled again to the bottom of her email, though, Gmail suggested a computer-generated response that was shorter, clearer, and, I realized, all I really wanted to tell her: “I miss you so much.”

The suggestion had been generated by what Google calls an “industrial strength neural network.” The network had “consumed” the words of my ex-girlfriend’s email in order to “produce a vector” and “synthesize a grammatically correct reply one word at a time.” The suggestions started to appear in trios at the bottom of emails in my inbox last November, when Google announced a new “Smart Reply” feature that would “determine if an email was answerable with a short reply, and compose a few suitable responses.”

I used Smart Replies at first to swerve conversations in weird directions. Once, a friend emailed a passage from a book he was reading about the Battle of Berlin: “Bits of bodies splashed against the boarded-up store front. Men and women lay in the street screaming and writhing in agony.” I chose the reply “Pics?” He replied to my Smart Reply with another, and, as the robot answers piled up, Google forgot the initial war crime and seemed to start arranging a hookup with itself. The computer grew hungrier for photos — “Can I see a pic?” “Did you get my pics?” “No pics?”— and eventually furnished an invitation my friend said was familiar from Grindr: “Can you host?”

But sometimes Smart Reply was eerily perceptive. I started to respect the service after it identified some paternal wisdom my dad had sent me. “Thanks for the advice,” it suggested — an appropriate-enough answer that I was tempted to pass off as my own. Now the technology had evolved beyond offering convenient replies to miming true emotion. It was the Smart Reply’s intensifier that stung the most. “I miss you” would have been a common courtesy, but “I miss you so much” was a cry for help. It was as though the computer could feel my frustration. Google got me.

Ever since British computer scientist Alan Turing proposed the famous “Turing test” in 1950, engineers have been trying to build robots that convincingly simulate human conversation. In recent years, they made a breakthrough: Rather than coding computers to follow long lists of rigid rules, they started to write more flexible programs that could learn from past experiences. Greg Corrado, a senior research scientist on the Google Brain Team, told me Smart Reply had digested a database of anonymous historical emails that Google keeps. “It doesn’t look in the database for similar emails or anything like that,” he said. “It’s just that it’s been exposed to the patterns in the past. It’s learned how these things go.”

So Smart Reply had an ulterior motive. It wanted to disabuse me of heartbreak’s egoism. “It’s because this is not the first breakup that has happened over email that it knows what’s going on,” Corrado said. “There have been breakups in the past and there are many different ways they can go. As long as all those ways happen with sufficient frequency in the data, then that’s the kind of thing the system can learn to imitate.” (I wonder who inspired the other two suggestions to my ex’s email. What stoic hero would have answered, “I understand”? And were they dimwits or geniuses who wrote, “So what are you doing now?”)

You didn’t need to be a professional writer to find this unsettling. We already live in a world where machines can figure out what movie we want to watch or what food we want to eat. What happens when they start to speak for us? Two months after Google introduced Smart Reply, the company said it was already being used in 10 percent of all responses in the mobile Inbox app. Facebook, meanwhile, unveiled this month a neural network called DeepText, which it said “can understand with near-human accuracy the textual content of several thousand posts per second.”

These programs suggested a future where a computer figures out what we need or want or ought to say before we find the words ourselves. We feed our raw emotions to a technology company, which averages them with the emotions of others and feeds them back to us as something simpler, blander, but also more direct. I’d rather be a monkey banging on a typewriter for eternity: My eloquence might be occasional and accidental, but at least it would be my own.

Corrado urged me not to fear Smart Reply technology, however. “Having an original notion or a completely new idea? That is not what the system does at all,” he said. “What it does is enable you to focus on writing the part of the email that is really original authorship, as opposed to quick niceties or vapid small talk.”

In my sadness, I was willing to accept Smart Reply as an ally, not an enemy — but not because it saved me from writing quick niceties. Rather, Smart Reply’s “I miss you so much” homed in on the subtext of my wordiness and exposed, with total neutrality, what I wanted to hide out of fear of seeming weak. In stating the obvious, Google didn’t help me figure out what I really had to say. It helped me realize there was nothing left to say at all.