01:13 GMT+3, 08 December 2016

    The Primal Instinct: AI Machines of the Future Could Feel Fear


    As artificial intelligence (AI) grows more capable and outperforms humans at a widening range of tasks, researchers are exploring whether emotions could drive the next stage of its development. Some tech experts are now attempting to make AI machines feel 'fear.'

    A paper currently under review by academics at the International Conference on Learning Representations (ICLR) explores the possibility of building the fear factor into AI machines.

    The paper's abstract says: "We might hope for an agent that would never make catastrophic mistakes. At the very least, we could hope that an agent would eventually learn to avoid old mistakes. Unfortunately, even in simple environments, modern deep reinforcement learning techniques are doomed by a Sisyphean curse." 

    The abstract goes on to suggest that AI 'agents' are liable to forget past mistakes even as they take in new experiences. "Consequently, for as long as they continue to train, state-aggregating agents may periodically relive catastrophic mistakes."

    The scientists attempted to induce fear in agents to train them to avoid dangerous situations. Their paper argues that just as AI machines can be rewarded for making good decisions, they can be punished for making bad ones, and so come to fear the consequences of those actions.

    Using Deep Reinforcement Learning (DRL), AI machines are trained to make good decisions by chasing rewards.

    "If you stray too close to the edge of a roof your fear system kicks in and you step away to stop yourself from falling over. But if the fear you feel is so powerful and it stays with you, you might develop an irrational fear where you never step foot on a roof ever again," Zachary Lipton, co-author of the paper and researcher at UC San Diego told The Register.

    The academics suggest that this mechanism can also be run in reverse: if machines learn from rewards for good decisions, they can equally learn from punishments for bad ones, and perhaps come to 'fear' the consequences.
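    The idea above can be sketched in code. Below is a minimal toy example, not the method from the paper under review: a tabular Q-learning agent walks a one-dimensional corridor where one end is a "cliff" (a large punishment) and the other end holds a reward, with an extra hypothetical "fear" penalty for merely approaching the cliff. All names and parameters (FEAR_RADIUS, FEAR_PENALTY, etc.) are illustrative assumptions.

    ```python
    import random

    # Toy 1-D corridor: state 0 is a "cliff" (catastrophe), the last state holds
    # the reward. A hypothetical "fear" shaping term penalizes states near the
    # cliff so the agent learns to keep its distance. Illustrative only.

    N_STATES = 6
    ACTIONS = (-1, +1)           # step left / step right
    FEAR_RADIUS = 2              # states this close to the cliff incur a penalty
    FEAR_PENALTY = -5.0
    ALPHA, GAMMA, EPS = 0.5, 0.9, 0.1

    def reward(state):
        if state == 0:
            return -100.0        # catastrophic fall: the "punishment"
        if state == N_STATES - 1:
            return 10.0          # goal: the "reward" the agent chases
        if state <= FEAR_RADIUS:
            return FEAR_PENALTY  # assumed "fear" of nearing the cliff
        return 0.0

    def train(episodes=2000, seed=0):
        rng = random.Random(seed)
        q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
        for _ in range(episodes):
            s = N_STATES // 2
            while 0 < s < N_STATES - 1:
                # epsilon-greedy action selection
                if rng.random() < EPS:
                    a = rng.choice(ACTIONS)
                else:
                    a = max(ACTIONS, key=lambda act: q[(s, act)])
                s2 = s + a
                r = reward(s2)
                terminal = s2 in (0, N_STATES - 1)
                best_next = 0.0 if terminal else max(q[(s2, b)] for b in ACTIONS)
                # standard Q-learning update
                q[(s, a)] += ALPHA * (r + GAMMA * best_next - q[(s, a)])
                s = s2
        return q

    q = train()
    # Greedy policy for the interior states; it should step away from the cliff.
    policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(1, N_STATES - 1)}
    print(policy)
    ```

    After training, the learned policy steers every interior state toward the reward and away from the punished region, which is the intuition behind training agents to "fear" catastrophic outcomes.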


    All comments

    • evagas14
      If it's able to be afraid it can be kept in line.
    • michael
      This push for AI seems to be an incessant drive to justify something which has basic flaws. These engineers and comp-sci people are missing essential elements in their analysis of constructing what is, for want of a better term, a slave. Lipton's generalisation about standing close to the edge does NOT apply to everyone. Maybe he and others should look into what is the basis of fear within the human animal first. This research does the AI push no favours.