
Nicholas Carr Doesn’t Want Your Self-Driving Car

Nicholas Carr. Photo: Jung Yeon-Je/AFP/Getty Images

The day before Tesla Motors announced it was making a semi-autonomous sedan called the P85D, Nicholas Carr, the best-selling author of The Shallows, went to visit the headquarters of Google, another Silicon Valley company making great strides toward truly self-driving vehicles. Carr, who once wrote a piece for The Atlantic called “Is Google Making Us Stupid?,” isn’t likely to endear himself to either Google or Tesla with his new book, The Glass Cage, a thought-provoking and accessible look at the costs — economic, cognitive, and moral — of our society’s increasing reliance on automation. (Among the unexpected risks he warns about: the measurable erosion of airline pilots’ flying skills as autopilot computers do more of their jobs for them.) But despite Carr’s reservations about our glorious, robotic future, the search giant invited him to give a talk about The Glass Cage anyway.

Google’s press office wouldn’t allow me to accompany Carr on his visit, but we caught up at a bar in San Francisco afterward.

How was Google?
You didn’t miss much. The people were very nice to me. They seemed engaged and didn’t take anything personally. I wasn’t sure when I went in whether it was going to be “This is the guy that disses Google” or something.

What do you think of the self-driving car?
On one hand, you have to be kind of amazed. The self-driving car is really an incredible accomplishment. But I think Google now understands this: there’s a huge gap between having a car that can usually perform well in traffic and having a completely automated car that can go out and drive around San Francisco.

I don’t personally think there are going to be fully automated cars driving around in traffic for a long time. Even if you can automate 98 percent of what a person can do, that’s not enough. You still need a person to be able to take over. And that’s kind of a regression if you’re sitting there, and you’re still responsible, but you’re not driving. You can’t take a nap. You can’t write poetry or anything. You’re still the last line of defense.

Shouldn’t the goal be total automation, though? Shouldn’t we want Google or Tesla to get to 100 percent automation with these cars, so roads will be safer and crashes will be less frequent?
We should encourage the innovation and engineering to go in the direction of giving us that opportunity. And I think it will. It’s always hard when people start talking about safety to say, “Well, maybe that’s not the only thing we have to worry about.” Driving is a pleasurable activity for many people. It’s one of the rare times when you can relax and be in charge, be in control. Looking at this purely as a matter of efficiency and safety misses something important.

One of the things that worries me is this belief that, ultimately, we can use computers and automation to create this perfectly efficient, safe society without saying, “Who the hell wants to live in that society?”

We have lots of automated technology in our cars already: antilock brakes, cruise control, power steering. Cars are already doing nine out of ten things automatically. And you’re saying not everything should be automated. But if you’ll allow me some reductio ad absurdum, if we took that view all the way back to the beginning of vehicular automation, we’d go back to Barney Rubble pedaling the Flintstones car.
I’m not saying that because total automation is problematic, all automation is somehow bad. If you look at some of the most advanced automated systems coming into cars right now … I think Mercedes has something that monitors your steering and can tell if you fall asleep while driving. That’s an example of good automation. It doesn’t rush to displace the driver. It says, “People have flaws. They get sleepy, they lose attentiveness. How can we aid a person and make them a better driver, rather than taking away more and more activities?”

My point isn’t: Let’s stop this process. It’s: Let’s think fully about all the criteria we should be using in planning.

In San Francisco, the worst thing you can call someone is a Luddite. Do you consider yourself a Luddite?
I don’t consider myself a Luddite as that term has come to be defined. When you actually think about the true Luddites, I have some sympathies for them. They were motivated not by a fear of progress; they were motivated by a desire to protect things they thought were important.

But the reason I know I’m not a Luddite is because I quite enjoy technology. I’m also really conscious of the fact that my behavior changes when I have a new tool. And I do feel that a lot of the new technology as it’s evolved doesn’t work in a way that enriches our lives.

We’ve historically been very bad prophets of automation. Keynes thought that by now our jobs would be automated, and we’d all have lives of pure leisure. In the 1950s, we thought people would be living on Mars by now. What makes you think your view of this stuff is any more prescient than that of previous generations of futurists?
My own feeling is that I’m suspicious of all futurism. One of the reasons I am is that it’s very, very hard to predict. But also, and you see this in Silicon Valley, there’s often a lot of futurism that relieves you of any hard, critical thinking about where we are right now. My view is: We don’t live in the future. We will never live in the future, because we always live in the present. And these are real issues that affect people’s lives. I don’t think you can say, “Well, it’s okay to blindly automate everything because it will lead to perfect lives of self-fulfillment.” You get a lot of callous proclamations about “We’re taking away people’s jobs, but, you know, it’s all leading to this really good place.”

One term you use in your book is data fundamentalism. What is that?
It’s the belief that data processing is the best way to answer hard questions.

You don’t think it is?
No. I think it’s one way to look at questions. But it doesn’t obviate the need for human perspective. It’s very easy to assume that somehow, if you just get the right data and process it in the right way, you can see through problems and resolve them. But it doesn’t work that way, because it distorts your view of the problems.

One thing I hear when I talk to roboticists is that robots are pretty dumb right now. They can replace a small number of jobs in a factory, but they’re not capable of solving problems the way we are; they’re not capable of creative thought.
They have no understanding of the world.

Right. So until that gets solved, what do we have to worry about?
They can do a whole lot of stuff, even without replicating everything we can do. Watson can win at Jeopardy — that’s a really weird place for a computer to outperform people! There’s wordplay and trivia. But even if they don’t develop all of these things, there are many areas that, through pure crunching of numbers, computers can replicate our ends without replicating our means. We’re going to be in constant negotiations with computers — there’s plenty to worry about.

My own worry is that computers don’t have to have 100 percent of human capability to reshape the labor market. They just have to do enough that the people running businesses find it cost-efficient to replace humans. If there’s a Nick Carr bot that can write books 80 percent as well as you for much less money, your publisher might opt for the bot, even if the output is slightly worse.
That’s a bad example. [Laughs.] One story I think is interesting is how Toyota is replacing some robots with humans in its factories. It’s been a leader in automating the factory. But it’s also started to have quality problems. What that says is that we need to bring back the culture of craftsmanship, not just as a rallying cry, but because having people fabricate crankshafts by hand brings back a perspective on the job that computers can never match.

You do a very admirable job laying out the potential costs of automation. But what do we do about it? What are the solutions to the problems you identify?
Well, it seems to me that those problems can actually be solved by a wiser, more humanistic approach to automation, which has particular design implications.

Like what?
Okay, so, for example: Take radiologists looking at X-rays. There’s now software that takes over some of that analytical chore and highlights areas that are suspicious based on the data analysis. Sometimes that’s good, because the radiologists can really focus. On the other hand, we know that when a computer highlights things, that’s where our attention goes, and the radiologists can miss things the software didn’t flag. Another way to do that is to first let the radiologist examine the X-ray on his or her own, then bring in the automation afterward.

Or take flying. Instead of having pilots be able to turn on the flight automation system when they take off and use it all the way until they land, maybe you program the computers to randomly transfer manual control back to the pilot. As soon as a human being knows they could be called upon to do a job randomly, they become much more attentive.

So the self-driving car should kick into manual mode for ten seconds every half-hour, just to keep us on our toes?
Assuming the human being is still involved, yeah.