MOSCOW (Sputnik) — States that deploy defense systems with a high degree of autonomy should undertake accurate assessment of possible dangers and ensure meaningful human control, speakers at this week's UN conference on Lethal Autonomous Weapons Systems (LAWS) told Sputnik.
Israel's Iron Dome and the US Phalanx and C-RAM are defense systems with a high degree of autonomy, programmed to respond automatically to incoming threats.
Although such systems are different from fully autonomous weapons, which are the main subject of the discussion in Geneva, they also raise concerns over possible dangers for humans.
"We believe that states should make an assessment of existing systems that have a high degree of autonomy in order to articulate why they do not pose the same dangers that killer robots might," said Steve Goose, Director of the Arms Division at Human Rights Watch (HRW) and co-founder of the Campaign to Stop Killer Robots.
"Such systems that detect incoming missiles are a bit different from systems that would be able themselves to select targets to attack from a range of possible targets. We think states with those systems should explain how they ensure meaningful human control over every individual attack," Thomas Nash, director of Article 36, an organization that works to prevent "unnecessary harm," told Sputnik.
One of the concerns about defense systems with a high degree of autonomy is that such weapons could be extended for use against humans, Noel Sharkey, chairman of the International Committee for Robot Arms Control (ICRAC), explained.
More than 40 analysts and campaigners from a dozen countries convened in Geneva on Monday to discuss LAWS and work out the technical and legal aspects of producing autonomous weapons.
Campaigners are calling for a preemptive ban on fully autonomous weapons. However, the United States, the United Kingdom, France and Israel appear to oppose a ban, arguing it is unnecessary because existing humanitarian law is sufficient to address the issue.