'Only True Safety is Ending Arms Race, Doing Away With Nuclear Weapons' - Prof

The world is shifting from human-controlled management of a variety of sectors, including the military, towards artificial intelligence (AI). The Pentagon has recently ordered the creation of an AI system for its strike drones. Dr Mark Gubrud, an adjunct assistant professor of peace, war, and defence at the University of North Carolina, has shared his views on the potential threats that AI-guided military systems could pose to people.

Sputnik: How do you assess the Pentagon’s push for artificial intelligence, especially a system that could tell humans when to hit the “fire” button for nuclear missiles?

Dr Mark Gubrud: AI has obvious potential to be used in weapons. Humans are better at judging the most complicated, ambiguous situations, but when matters are simple and clear, machines can react faster and more accurately. Unfortunately, in war, mistakes are often irreversible, and if automated systems start fighting each other, we might not be able to stop them, or even understand what's happening, before it's too late.

Therefore, we must not allow any relaxation of human control. We need a strong treaty that makes human control of all weapons a matter of law and provides verification that autonomous weapons are not in use.

Unfortunately, the countries that are leading the development of autonomous, AI-driven weapons, including the United States, China and Russia, have resisted the creation of such a treaty.

Human control is an essential principle, but it is not enough, because humans make mistakes or may choose to take advice from machines that make mistakes.

The US is reportedly developing AI systems for intelligence analysis that could warn of an imminent nuclear attack by North Korea, and similar systems may end up watching Russia and China, too. What makes this especially dangerous is that the systems are intended to speed up the process, doing the analysis faster than humans can check it or develop an alternate analysis.

If a US president is woken at 3 am and told the AI is warning of an imminent attack, what will he or she do? There might be responses short of ordering an immediate pre-emptive nuclear attack, but those could also be escalatory and lead to the same outcome.

Sputnik: What could be the consequences of an AI wired to a country’s nuclear arsenal? How realistic is such a scenario?

Dr Mark Gubrud: I think it is unlikely that the US or any nation will enable a computer to launch nuclear weapons without a human decision. Russia reportedly has an automated system that can react if a first strike destroys the top leadership, and some American authors are now calling for the US to develop a similar system. However, I believe the Russian system involves humans and would only be activated if an attack was feared imminent.

Unfortunately, that is the most dangerous moment. For example, in 1983, Soviet computers warned of a US missile attack, and Soviet Lt. Col. Stanislav Petrov made the call that it was a false warning, in part because there was no reason to expect an attack at that moment. No one can say that nuclear war could never happen by accident, but I think the most realistic danger is that we walk straight into it with our eyes wide open, taking ever-greater risks to avoid backing down in some crisis.

That is exactly what we are doing today with the shredding of treaties and the so-called new arms race, featuring new nuclear and non-nuclear weapons that shorten the times for attack and response. With the loss of the INF Treaty, it is very important for the US and Russia to extend New START and consider new initiatives, such as missile testing and deployment limits, a hypersonic flight test ban, a ban on anti-satellite and space-based weapons, and a ban on killer robots.

We need to think creatively. For example, as a small start, I think in the wake of Russia's recent accident, it would be a good time for the US to suggest a permanent ban on nuclear-propelled missiles or airplanes, which both nations have previously attempted to develop and decided were too dangerous and unnecessary.

Sputnik: How reliable would an AI system be?

Dr Mark Gubrud: Complex systems can always fail, but in this case, there is a deeper problem. The fundamental problem with nuclear deterrence is that it only functions if it can fail, that is, if nuclear war remains a possibility.

With our warning systems, we try to ensure that they will never give a false alarm, but also that they will not fail to warn us of an actual attack. These are contradictory objectives. As doctors, police and security officers know, it is impossible to simultaneously minimise "false positives" and "false negatives."

With our nuclear command and control systems, we try to ensure that they can't be triggered by an unauthorised order, a hacker or an internal error, but also that they will function as intended, even under attack, if a proper order is given. Again, it is the same problem of contradictory objectives.

All this is true whether we are talking about AI or human systems. But when humans are involved, even when their official role is one that a machine could fulfil, they bring their full intelligence and understanding to the job. Humans want to live, want their families and the world to live, and will always check and check again that there isn't a mistake or a way out. Human control is essential, but it is not enough. The only true safety is in ending the arms race and doing away with nuclear weapons.

Sputnik: The Pentagon earlier asked Google’s parent company Alphabet Inc. to create an AI system for its strike drones. How does this correlate with recent claims that social media companies should keep away from matters of the state?

Dr Mark Gubrud: Project Maven was supposedly about developing algorithms that could speed the work of human analysts, but the same work could eventually contribute to fully-autonomous lethal systems, and tech workers correctly recognise the danger of this arms race.

As the Pentagon's Will Roper has stated, Google terminating its contract only meant that the work shifted to other companies, but it was important because it created political awareness among tech workers, which is still growing.

That is the reason for the pushback from proponents of killer robots. It's reminiscent of Hermann Goering's famous statement that any people can be led into war if you "tell them they are being attacked and denounce the pacifists for lack of patriotism and exposing the country to danger. It works the same way in any country".

The views and opinions expressed by Dr Mark Gubrud are those of the speaker and do not necessarily reflect those of Sputnik.
