
Kalashnikovs of Tomorrow: Musk, Hawking, Wozniak Fear Killer Robot Armies

Robot Ball exhibition. © Sputnik / Vladimir Astapkovich
Fears of artificial intelligence (AI) gone wrong prompted more than a thousand scholars and public figures - including theoretical physicist Stephen Hawking, SpaceX founder Elon Musk and Apple co-founder Steve Wozniak - to sign an open letter, cautioning that the autonomous weapons race is “a bad idea” and presents a major threat to humanity.

The letter, presented Monday at the International Joint Conference on Artificial Intelligence in Buenos Aires by the Future of Life Institute, warns that autonomous weapons technology has reached a point at which its deployment will be feasible within years, and that "a global arms race is virtually inevitable."

"This technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow," the letter states.

"Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc."

While AI promises to be truly beneficial to humanity in many ways, it has to be kept under strict control, and perhaps even banned, the letter suggests, warning that lethal autonomous weapons systems, or simply killer robots, which engage targets without human intervention, are on par with weapons of mass destruction.

Among the letter’s signatories are theoretical physicist Stephen Hawking, Tesla, SpaceX and PayPal founder Elon Musk, linguist and philosopher Noam Chomsky, Apple co-founder Steve Wozniak, Skype co-founder Jaan Tallinn and Human Rights Watch arms division head Stephen Goose.

The United Nations debated a global ban on killer robots earlier this year.

Stuart Russell, a professor of computer science at UC Berkeley and one of the letter's signatories, wrote in the journal Nature that two programs developed by the US Defense Advanced Research Projects Agency (DARPA), Fast Lightweight Autonomy (FLA) and Collaborative Operations in Denied Environment (CODE), could "foreshadow planned uses" of killer robots in contravention of the Geneva Conventions.

"The FLA project will program tiny rotorcraft to manoeuvre unaided at high speed in urban areas and inside buildings. CODE aims to develop teams of autonomous aerial vehicles carrying out 'all steps of a strike mission — find, fix, track, target, engage, assess’ in situations in which enemy signal-jamming makes communication with a human commander impossible," Russell wrote.

Earlier this month, Elon Musk, through the Future of Life Institute, granted some $7 million to 37 research projects around the world studying the opportunities and dangers of AI.

"Building advanced AI is like launching a rocket. The first challenge is to maximize acceleration, but once it starts picking up speed, you also need to focus on steering," Skype co-founder Tallinn, also one of FLI’s founders, commented.

Grant funding, expected to begin in August, will last up to three years. The awards, ranging from $20,000 to $1.5 million, will fund work on AI safety constraints and on questions raised by the deployment of autonomous weapons systems.
