Perhaps he's been watching the Terminator: Genisys trailer on loop, but Musk is apparently quite concerned that the artificial intelligence being developed by Google and other big tech firms will, in fact, rise up and destroy humankind.
"I'm really worried about this," Musk is quoted as saying in his recently released biography, Elon Musk. Though Musk is friends with Page and does not doubt the good intentions behind Google's pursuit of AI, he's afraid that "he could produce something evil by accident."
Indeed, Google has been acquiring robotics companies steadily. In one twelve-month period ending last February, Google acquired 22 companies, half of which were robotics-related.
DeepMind, for example, specializes in AI (its website humbly declares its mission is to "Solve Intelligence"), while other acquisitions are focused on particular aspects of robot mobility, image recognition, or language recognition.
One of Google's earlier robotics acquisitions, in 2013, was Boston Dynamics, best known for freaking people out with its doglike robot, which some who, like Musk, fear this will all end badly took as a sure sign of the end times.
Musk has been outspoken about his concerns over artificial intelligence and its potential to create a killer-robot apocalypse, or something of that nature. At a talk at MIT last October, he likened developing artificial intelligence to "summoning the demon."
"You know all those stories where there's the guy with the pentagram and the holy water, and he's like, sure he can control the demon?" Musk said. "It doesn't work out."
And Musk isn't alone here. In fact, many science and tech luminaries signed an open letter basically calling on the AI world to please be careful, and to take into consideration "maximizing the societal benefit of AI"; in other words, to minimize the chance this kills us all.
One of the co-signatories on that letter was none other than Stephen Hawking, a fellow AI skeptic, who recently told the Zeitgeist Conference in London that computers would overtake humans in about 100 years.
"When that happens, we need to make sure the computers have goals aligned with ours," Hawking said, echoing the sentiments expressed in the open letter.
"Our future is a race between the growing power of technology and the wisdom with which we use it."