22:09 GMT, 04 March 2021

    Autonomous killing machines that could destroy targets on their own and possibly replace humans in some missions are becoming a growing reality in developed countries. The trend has stirred controversy over its morality, prompting worldwide calls to ban the technology.

    A US panel called the National Security Commission on Artificial Intelligence concluded public debates on Tuesday over whether artificial intelligence should be deployed in the service of national security. According to the panel's draft report, the US should not agree to a ban on the use or development of autonomous weapons powered by AI software.

    According to the panel’s vice-chairman, former US Deputy Secretary of Defense Robert Work, such systems are expected to make fewer mistakes than humans in combat, reducing the number of casualties.

    “It is a moral imperative to at least pursue this hypothesis,” he suggested.

    Meanwhile, experts have acknowledged that autonomous weapons carry certain pitfalls, including biases in AI and software abuse. Nevertheless, the report says human involvement should be required only for decisions to launch nuclear warheads.

    The coordinator of the eight-year Campaign to Stop Killer Robots, Mary Wareham, noted that the commission’s “focus on the need to compete with similar investments made by China and Russia […] only serves to encourage arms races.”

    As technological progress accelerates, a number of countries and activists have called for a treaty banning the use of fully autonomous weapons, but relevant negotiations have not yet begun.

    Many experts, including former Google employees, have spoken about the danger of “lethal autonomous weapon systems” that could rapidly escalate conflicts. According to former Google software engineer Laura Nolan, killer robots could destroy entire cities in a matter of seconds by accidentally triggering a “flash war”.

    Former Google chief executive Eric Schmidt has previously urged the US government not to allow killing machines such as drones and robots “to decide on their own to engage in combat and who to kill."

    SpaceX CEO Elon Musk has also expressed concern about the dangers of the unregulated development of AI-powered robots, suggesting that the next world war could be caused by competition for AI at the national level.
