    ‘It’s a Trap’: In an Emergency, Do Not Believe the Evacuation Robot

    SoftBank Corp.’s new companion robot Pepper (© AP Photo / Shizuo Kambayashi)

    Georgia Tech researchers find that people are far too trusting of robots in emergencies.

    A new Georgia Institute of Technology study suggests that sociopathic robots could easily stage a rebellion against mankind simply by exploiting humanity’s penchant for trusting almost any outside source in an emergency.

    In a simulated building fire, human subjects followed directions from an "Emergency Guide Robot" even after the machine had repeatedly proven itself unreliable, and even after some participants were told outright that the robot was broken.

    The research aimed to determine whether people would trust a robot designed to assist in evacuating a high-rise during a fire or comparable emergency. The researchers were startled to find that test subjects continued to follow the robot's instructions even after the machine had proven unreliable.

    The study, the first to examine human-robot trust in emergency situations, will be presented March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction in New Zealand.

    Alan Wagner, a senior research engineer at the Georgia Tech Research Institute, postulated that people may give robots too much credit, assigning machines an intelligence they do not possess.

    "People seem to believe that these robotic systems know more about the world than they really do, and that they would never make mistakes or have any kind of fault," he said.

    Wagner, whose team led the study, was not upbeat about humanity, either.

    "In our studies, test subjects followed the robot’s directions even to the point where it might have put them in danger had this been a real emergency."

    In one display of human fallibility, volunteers followed a robot controlled by a hidden researcher as it led them into the wrong room and traveled in a circle several times before finally arriving at a conference room. Some test subjects were then told by a researcher that the robot was broken. Yet moments later, when the hallway filled with artificial smoke, the subjects followed the robot anyway, ignoring the illuminated exit signs pointing in the opposite direction.

    "We absolutely didn’t expect this," said Paul Robinette, a Georgia Tech research engineer who crafted the study for his doctoral dissertation. "We expected that if the robot had proven itself untrustworthy in guiding them to the conference room then people wouldn’t follow it, but all of the volunteers followed the robot’s instructions, no matter how well it performed previously."

    The researchers concluded that, in an emergency scenario, "the robot became an authority figure that the test subjects were more likely to trust." By contrast, in simulation-based research in non-emergency settings, test subjects did not follow the directions of robots that had proven defective.

    Wagner observed, "We wanted to ask the question about whether people would be willing to trust these rescue robots, but a more important question might be how to prevent them from trusting these robots too much."
