The uncanny valley is a place no one wants to be. Somewhere between machine and human, the theory goes, robots take a dive into creepiness. But roboticists aren't sure the valley really exists. Now, researchers in California say they have new evidence for this icky zone, and they can even draw a map of it.

Robotics professor Masahiro Mori first proposed the uncanny valley in 1970. The idea feels right: certainly some robots are charming, and others, especially androids not quite succeeding at looking human, are a little stomach-turning. (An android is a robot designed to look like a person.) But studies trying to pinpoint this valley have had mixed results. A 2015 review concluded that evidence for the uncanny valley is, at best, ambiguous.

Maya Mathur, a biostatistician at the Stanford University School of Medicine, and David Reichling, a physiologist at the University of California, San Francisco, thought they could do better. Some past uncanny valley studies used small numbers of robots, Mathur says, or included robots that weren't meant to interact with humans. Others looked for the effect by digitally morphing a robot and a human face together.

To study the question more realistically, Mathur and Reichling gathered pictures of real robots from Google Images. They searched "robot face," "interactive robot," "human robot," and just plain "robot." Within their results, they looked for clear pictures of androids that were built to interact with humans, weren't toys, and weren't modeled after Einstein or someone else famous. They took the first 80 images that met all their criteria.

Then the researchers turned to Amazon Mechanical Turk for study subjects. The online participants rated each robot face on how mechanical or human-like it looked. This let the researchers place all the robot faces on a spectrum from machine to human. In the full set of photos at the bottom of this post, the robots are arranged from most mechanical to most human-like.

Next, 342 subjects scored each face on "how friendly and enjoyable (versus creepy) it might be" to interact with the robot in an everyday situation. These were entirely different subjects from the ones who'd rated the robots for human-ness; Mathur notes that in studies where the same subjects answered both questions, they might think they should find a relationship between the two.

As the robot faces progressed from machine-like to human, their likability scores increased at first. Then, as robots got more human-like, their likability plummeted to well below neutral on the friendly-versus-creepy scale. For the most human robots, likability crept up again. In other words, the results were shaped like a valley.
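If you'd like to see how a valley can fall out of rating data like this, here's a minimal sketch in Python. To be clear, this is not the authors' actual analysis and the numbers are made up; a cubic polynomial is simply the simplest curve that can rise, dip, and rise again, which is the shape we're hunting for.

```python
import numpy as np

# Hypothetical mean ratings for seven robot faces (made-up numbers).
# humanness: 0 = fully mechanical, 100 = fully human.
humanness = np.array([5, 20, 35, 50, 65, 80, 95])
# likability: negative = creepy, positive = friendly, 0 = neutral.
likability = np.array([0.2, 0.6, 0.8, 0.1, -0.9, -0.4, 1.1])

# Fit a cubic, the simplest polynomial that can rise, dip, and rise again.
curve = np.poly1d(np.polyfit(humanness, likability, deg=3))

# A valley shows up as an interior local minimum: a real root of the
# first derivative where the second derivative is positive.
critical_points = curve.deriv().roots
minima = [x.real for x in critical_points
          if abs(x.imag) < 1e-9
          and curve.deriv(2)(x.real) > 0
          and humanness.min() < x.real < humanness.max()]

if minima:
    floor = min(minima, key=curve)
    print(f"valley floor: humanness ~ {floor:.0f}, likability ~ {curve(floor):.2f}")
else:
    print("no interior dip -- no valley in this data")
```

The telltale sign of a valley is that interior dip: the fitted curve has to sink below its neighbors somewhere between fully mechanical and fully human, rather than just climbing or falling the whole way.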
To make sure happy-looking androids weren't skewing the results, the researchers repeated the analysis with only "low-emotion" faces and got nearly identical results.

This isn't just a question of vanity for robot designers. If humans are creeped out by a robot that's supposed to be helping them, they might be reluctant to interact with it.

To find out whether those feelings would carry over into behavior, Mathur and Reichling did another experiment. This time, subjects played a wagering game online. They were told they could give a certain amount of imaginary money to a robot partner (one of the faces from the image set). That money would then triple, and the robot would "decide" how much to give back to the human player. (A toy version of the game's arithmetic is sketched below.)

When the researchers looked at how much money humans were willing to entrust to their robot partners, they saw another valley. Robots on the more mechanical end of the spectrum got more money. As robots became increasingly human-like, subjects gave them less money. Then the amount rebounded; subjects entrusted the most money to the most human-like robots.

Unlike with likability, though, these results seemed to depend on the emotions on the robot faces. When the researchers looked only at unemotional robots, the valley went away.

Finally, Mathur and Reichling repeated all their experiments using two sets of faces they'd digitally created to span the spectrum from machine to human. The likability scores of both sets showed a clear valley, with the dip where you'd expect (lady robot number four, I'm looking at you).
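And here is that toy version of the wagering game. This is my reconstruction of the payoff structure described above, not the study's code, and the return_fraction knob is purely hypothetical; in the real experiment, the measure of trust was simply how much money subjects chose to stake.

```python
# A toy version of the wagering game (my reconstruction, not the
# study's code). The player stakes imaginary money, the stake triples,
# and the robot "decides" what fraction of the pot to hand back.

def play_round(stake: float, return_fraction: float) -> tuple[float, float]:
    """Return (player_payout, robot_keeps) for one round."""
    pot = stake * 3                       # the entrusted money triples
    player_payout = pot * return_fraction
    robot_keeps = pot - player_payout
    return player_payout, robot_keeps

# Example: entrust 10 units to a robot that returns half the pot.
payout, kept = play_round(10, 0.5)
print(payout, kept)  # 15.0 15.0
```

The tripling is what makes the stake meaningful: the player only comes out ahead if the robot hands back more than a third of the pot, so the amount entrusted is a direct reading of how much the player trusts the partner.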
The wagering experiments with the digitally created faces, though, didn't show the same valley in trust that the larger set of real faces had. "Our interpretation is that the uncanny valley on trust can be a substantial influence on people’s reactions to robots," Mathur says. But she thinks trust is "more context-dependent" than likability.

As for how much people enjoy robots, Mathur and Reichling believe they've mapped out a true uncanny valley. But they still can't say why it's there. What happens in a person's brain to make some robots so distasteful? Other research, Mathur says, suggests that robots may be especially creepy if the human-ness of their different facial features is inconsistent.

"Uncanny" may not be the best word for the valley, either. Mathur describes the experience of looking at one of these unlikable robots as "a complex feeling of revulsion and creepiness."

Whatever the reason, though, robot designers who want people to enjoy their products will have to aim for the hilltops.
Images: Mathur & Reichling, 2015 (top image modified by me)
Mathur, M. B., & Reichling, D. B. (2015). Navigating a social world with robot partners: A quantitative cartography of the Uncanny Valley. Cognition, 146, 22–32. PMID: 26402646