07:11 GMT +3, 10 December 2018

    ‘Machines Can Make Mistakes’: US Nuke-Detecting AI Must Think Like a Human

    Opinion

    The US military has recently increased its spending on artificial intelligence (AI) that can find and track hidden mobile nuclear missile launchers and detect preparations for a nuclear strike, but the real question is whether it will be superior to human intelligence.

    Dr. Lawrence Korb, a senior fellow at the Center for American Progress and a senior adviser to the Center for Defense Information, told Radio Sputnik's Loud & Clear on Tuesday that the implementation of AI technology in the nuclear sphere demonstrates how technology is changing the way countries fight wars.

    According to Reuters, the AI technology being developed by the Pentagon will be able to track the movements of mobile launchers and warn military commanders in advance about upcoming nuclear missile launches.

    "Technology is changing the nature of the way we gather information and the way we fight," Korb noted. "It's not just the US; a lot of other countries are using AI. The US budget for AI, which includes the development of AI software for analyzing drone footage, has increased from $400 million in the past to close to $1 billion right now," Korb told hosts John Kiriakou and Brian Becker.

    "The Pentagon is getting information from the private sector, like Google, and using it for national security. The US military is thinking it needs to keep up with and keep ahead of other countries. And we're not the only ones doing this. The Chinese are very big on using AI technology to offset the American advantage," Korb added. 

    "The real question is, will this technology work and lead to the right conclusions being made? We know that a lot of times during the Cold War, both the US and the Soviet Union had what they called 'false positives,' which is when one side thought the other side was launching nuclear weapons when it was not. Fortunately, we had human beings whose job was to make sure of what was actually going on in those situations. What you want to do now with AI is make sure that threats are actually happening and people are not creating 'false positives,'" Korb explained.

    There has to be human oversight of AI, he said. "That's the key thing. If we ever give that up, our future will be based on machines, which can make mistakes, be corrupted and/or undermined. You do need to have the human oversight to make sure things make sense because if you don't, you can't take it back," Korb noted.

    In April, more than 4,000 Google employees wrote a letter to the company's CEO asking him to end Google's involvement in the AI program run by the Pentagon. The program, established in April 2017 and dubbed Project Maven, involves developing AI software for analyzing drone footage.

    "We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled and that Google draft, publicize and enforce a clear policy of stating that neither Google nor its contractors will ever build warfare technology," the petition said.

    On Friday, Diane Greene, head of Google Cloud, told employees in a staff meeting that Google does not plan to renew its Project Maven contract with the Pentagon, which expires in 2019, due to internal pressure from employees.
