00:33 GMT+3, 24 September 2018
    A sentry robot freezes a hypothetical intruder by pointing its machine gun during its test. (File)

    'Killer Robots' Could Lower Barrier for Politicians to Start Wars

    © AFP 2018 / KIM DONG-JOO
    Opinion

    As robotics and artificial intelligence (AI) have advanced rapidly in recent years, leading figures in global AI research and development have expressed concern over how the same technology could be used in lethal autonomous weapons, often referred to as "killer robots," because such unmanned weapons could lower the barrier for politicians to start wars.

    MOSCOW (Sputnik), Tommy Yang — In late August, a group of leading global AI researchers, including 116 founders of robotics and artificial intelligence companies from 26 countries, issued an open letter urging the United Nations to urgently address the challenge of lethal autonomous weapons and ban their use internationally.

    The letter was released by its key organizer, Toby Walsh, Scientia Professor of Artificial Intelligence at the University of New South Wales in Sydney, at the opening of the International Joint Conference on Artificial Intelligence 2017 in Melbourne, the world's premier gathering of top experts in AI and robotics.

    THERE SHOULD BE DIFFICULT BARRIERS TO WAR

    While many politicians have defended the use of lethal autonomous weapons, arguing that they could help save human lives in a military conflict, the Australian expert told Sputnik that lowering the cost of starting a war would be a bad thing, because wars are supposed to be costly.

    "If we feel we can do this [getting involved in a military conflict] without risking human lives, maybe this lowers the barrier to war. And that’s a very bad thing. There should be very difficult barriers to war. War should be a massive loss. We should be discouraging it. It should be that politicians have to explain why our sons and daughters are coming home in body bags," Walsh told Sputnik.

    The expert argued that previous wars were started based on the same misconception.

    "It’s a rather short-sighted argument. It ignored the fact that all the civilians and other people got caught up in the crossfire. Maybe you have taken your people out of the battlefield; you’re not taking the civilians out of the battlefield. We probably have been drawn into these conflicts in Iraq or Afghanistan, because we thought we could fight without putting military boots on the ground. It’s a misconception that we could actually fight without risking soldiers’ lives," he said.

    The Australian scholar added that if future wars were simply robots fighting robots, humans would not need to fight at all, since the outcome might as well be decided by a game of chess.

    'DUMB ROBOTS' CAUSE MORE WORRIES

    In contrast to the superintelligent robots and AI seen in Hollywood movies, the Sydney-based AI expert said it is the "dumb robots" that worry him most.

    "Those sort of things you see in Hollywood movies, like Iron Man or The Terminator, they’re still a long way away. Actually, I am more worried about stupid AI, than I am about smart AI. We’ll be giving responsibilities to machines that aren’t very capable at the moment and certainly can’t follow international humanitarian laws. They won’t be able to make the right distinction and there will be a lot of collateral damage. It’s the incompetence that I’m worried about [more] than anything else," Walsh said.

    Walsh noted that the UK Ministry of Defence has said it may actually remove humans from the loop of Predator-like drones, something that is technically possible today.

    "It wouldn’t be very capable, but it will still be able to commit a lot of harm. We have already seen the fact that the Predator drones are actually killing a lot of the wrong people, even with humans in the loop. It’s not difficult to do that with fully autonomous drones," the expert said.

    In 2016, then-US President Barack Obama acknowledged that drone and other airstrikes had killed between 64 and 116 civilians during his administration, a figure widely criticized as understating the loss of innocent civilian lives in those strikes.

    NO HUMAN-ENSLAVING EVIL AI

    In many Hollywood sci-fi plots, superintelligent AIs try to conquer the human race, as Skynet does in The Terminator, but world-leading AI researchers dismiss such scenarios as lacking a basic understanding of AI technology.

    "It [current AI technology] is really different from what you see in Arnold Schwarzenegger’s movies, where you have the evil AI fighting the good bodybuilder. In the movies, you have a goal conflict between the super smart AIs and the humans. It doesn’t really make sense for an AI to enslave humans. A super smart AI has very little interest in humans as slaves, because we’re miserable slaves for someone who can build a smart robot much more quickly and make it do whatever it wants to do," Jurgen Schmidhuber, a signatory to the open letter and a leading deep learning expert who co-founded AI research firm Nnaisense in Switzerland, told Sputnik.

    There have been several initiatives seeking to regulate the development of AI technologies. In December 2016, Dmitry Grishin, former chairman of Mail.ru Group, proposed a draft law on robots, based on the Three Laws of Robotics formulated by Russian-born US science fiction writer Isaac Asimov in his 1942 short story "Runaround."

    According to Asimov’s Laws, a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given it by human beings except where such orders would conflict with the First Law; a robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

    Russia's State Duma, the lower house of parliament, plans to introduce new legislation regulating relations between humans and AI in the near future, State Duma Speaker Vyacheslav Volodin said Monday.

    "Relations between humans and AIs, the relations between humans and robots are the issues that we should define legally in the near future. This issue is on our agenda," Volodin said.

    But the Swiss AI expert argued that the regulation in the field would be difficult to put in place.

    "It’s rather difficult to regulate the use of a particular algorithm, such as the Long Short-Term Memory (LSTM); it’s about as difficult as regulating the use of fire. To a certain extent, you can regulate fire, but anybody can buy a set of matches, burn them at home or even burn his own house or his neighbor’s garden. Although everybody can do that, fire, as a powerful thing that has been known for 600,000-700,000 years, is something very useful, because it keeps us warm at night and we can cook with it. These two sides of fire are widely known. Society has adapted to its use. The advantages of fire are so overwhelming that its disadvantages are accepted. I guess we will also have continually evolving sets of regulations for AI in a similar way," Schmidhuber said.

    The Swiss entrepreneur’s LSTM algorithm is now used in some 3 billion smartphones worldwide. He believes future AI will not have a goal conflict with humans, because it will realize that most resources lie out in space: less than one-billionth of the sun’s light hits the Earth. AI will be ready to emigrate to outer space, something impossible for humans, since an AI can travel by radio, much as algorithms are transmitted between computers in his own labs, the Swiss expert explained.
