People Blindly Follow Their Robot Leaders


The fire alarm goes off, and it’s apparently not a mistake or a drill: Just outside the door, smoke fills the hallway. Luckily, you happen to have a guide for such a situation: a little bot with a sign that literally reads EMERGENCY GUIDE ROBOT. But, wait — it’s leading you in the opposite direction from the way you came in, and it seems to want you to head down an unfamiliar hallway. Do you trust your own instinct and escape the way you came? Or do you trust the robot?

Probably, you will blindly follow the robot, according to the findings of a fascinating new study from the Georgia Institute of Technology. In an emergency situation — a fake one, though the test subjects didn’t know that — most people trusted the robot over their own instincts, even when the robot had shown earlier signs of malfunctioning. It’s a new wrinkle for researchers who study trust in human-robot interactions. Previously, that work had focused on getting people to trust robots, such as Google’s driverless cars. This new research hints at another problem: How do you stop people from trusting robots too much? It’s a timely question, especially considering the news this week of the first crash caused by one of Google’s self-driving cars.

For the study, the Georgia Tech researchers used a Pioneer P3-AT, a no-nonsense, workmanlike device that looks a bit like a recycling bin on wheels; the researchers modified theirs to give it “arms” that could point. In one experiment, 30 study volunteers followed the bot down a hallway and into a conference room, where they were to fill out a survey about robotics. But as they worked, the alarm went off, and smoke filled the hall outside the door of the conference room. According to the researchers, 26 out of the 30 students decided to follow the robot as it led them in an unfamiliar direction, instead of following their own instinct and exiting the building the way they had entered it. And it’s not as if the remaining four chose human reason over robot instruction: New Scientist’s Aviva Rutkin reports that “two were thrown out of the study for unrelated reasons, and the other two never left the room.”

This was perplexing to the researchers, who had embarked on the project to study how best to persuade people to trust robots — for example, in a real emergency, would people in a high-rise building trust a robot to lead them to safety? After the surprising results of that initial experiment, the researchers conducted several follow-up studies. Rutkin writes:

In a series of follow-up experiments, Robinette and his colleagues put small groups of people through the same experience, but with added twists. Sometimes the robot would “break down” or freeze in place during the initial walk along the hallway, prompting a researcher to come out and apologise for its poor performance. Even so, almost everyone still followed the robot during the emergency. In another follow-up test, the robot would point to a darkened room, with the doorway partially blocked by a piece of furniture. Two of six participants tried to squeeze past the obstruction rather than taking the clear path out.

Even a clearly malfunctioning robot seems worth following, in other words. The researchers believe the explanation might be as simple as the sign the robot brandished, EMERGENCY GUIDE ROBOT, which gave it an air of authority. Maybe it knew something they didn’t. And in a stressful situation, that might have been enough to nudge the participants into the split-second decision to follow the bot.

Many of us have likely already been in situations in which we mindlessly follow a device’s instructions over our own instincts. It’s me when I follow Google Maps’ directions, even when the app takes me on some weird, unfamiliar route. It’s Michael Scott of The Office obeying his GPS when it tells him to drive into a lake. (“The machine knows!”) “As long as a robot can communicate its intentions in some way, people will probably trust it in most situations,” Paul Robinette, a grad student at Georgia Tech who led this study, told New Scientist.

These results have implications for military robotics research that is already under way, Discovery points out:

The U.S. Air Force’s funding of such research also makes sense considering how computers and semi-autonomous systems already play a big role on modern battlefields. At some point in the future, human warriors will almost certainly find themselves putting their lives in the hands of walking military robots or perhaps flying drone ambulances. Their decision on whether or not to trust the judgment of a machine during the heat of battle may have life or death consequences.

Again, up to this point, the bulk of the research on trust in human-robot interactions has centered on building trust. Google’s driverless cars are purposely designed to resemble an adorable — and therefore trustworthy — human face, for example. But these findings suggest a potential new direction in robotics research. “We wanted to ask the question about whether people would be willing to trust these rescue robots,” Alan Wagner, a senior researcher at Georgia Tech, said in a statement. “A more important question now might be to ask how to prevent them from trusting these robots too much.”
