- Sputnik International

‘Machines Can Make Mistakes’: US Nuke-Detecting AI Must Think Like a Human

The US military has recently increased its spending on artificial intelligence (AI) that can find and track hidden mobile nuclear missile launchers and detect preparations for a nuclear strike, but the real question is whether it will be superior to human intelligence.

Dr. Lawrence Korb, a senior fellow at the Center for American Progress and a senior adviser to the Center for Defense Information, told Radio Sputnik's Loud & Clear on Tuesday that the implementation of AI technology in the nuclear sphere demonstrates how technology is changing the way countries fight wars.

https://www.spreaker.com/user/radiosputnik/pentagon-investing-heavily-in-controvers

According to Reuters, the AI technology being developed by the Pentagon will be able to track the movements of mobile launchers and warn military commanders in advance about upcoming nuclear missile launches.

"Technology is changing the nature of the way we gather information and the way we fight," Korb noted. "It's not just the US; a lot of other countries are using AI. The US budget for AI, which includes the development of AI software for analyzing drone footage, has increased from $400 million in the past to close to $1 billion right now," Korb told hosts John Kiriakou and Brian Becker.

"The Pentagon is getting information from the private sector, like Google, and using it for national security. The US military is thinking it needs to keep up with and keep ahead of other countries. And we're not the only ones doing this. The Chinese are very big on using AI technology to offset the American advantage," Korb added. 


"The real question is, will this technology work and lead to the right conclusions being made? We know that a lot of times during the Cold War, both the US and the Soviet Union had what they called ‘false positives,' which is when one side thought the other side was launching nuclear weapons when it was not. Fortunately, we had human beings whose job was to make sure of what was actually going on in those situations. What you want to do now with AI is make sure that threats are actually happening and people are not creating ‘false positives,'" Korb explained. 


There has to be human oversight of AI, he said. "That's the key thing. If we ever give that up, our future will be based on machines, which can make mistakes, be corrupted and/or undermined. You do need to have the human oversight to make sure things make sense because if you don't, you can't take it back," Korb noted.

In April, more than 4,000 Google employees wrote a letter to the company's CEO asking him to end Google's involvement in the AI program run by the Pentagon. The program, established in April 2017 and dubbed Project Maven, involves developing AI software for analyzing drone footage.

"We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled and that Google draft, publicize and enforce a clear policy of stating that neither Google nor its contractors will ever build warfare technology," the petition said.

On Friday, Google Cloud chief Diane Greene told employees at a staff meeting that Google does not plan to renew its Project Maven contract with the Pentagon, which expires in 2019, citing internal pressure from employees.
