01:49 GMT, 22 June 2021

    Tesla CEO Elon Musk is deeply worried that his friend, Google co-founder Larry Page, has adopted some destructive interests - destructive for humanity, that is.

    Perhaps he's been watching the Terminator: Genisys trailer on a loop, but Musk is apparently quite concerned that the artificial intelligence being developed by Google and other big tech firms will, in fact, rise up and destroy humankind.

    "I'm really worried about this," Musk is quoted as saying in his recently released biography, Elon Musk. Though Musk is friends with Page and does not doubt the good intentions behind Google's pursuit of AI, he's afraid that Page "could produce something evil by accident."

    Indeed, Google has been steadily acquiring robotics companies. In one twelve-month period ending last February, Google acquired 22 companies, half of which were robotics-related.

    DeepMind, for example, specializes in AI — its website humbly declares its mission is to "Solve Intelligence" — while other acquisitions focus on specific aspects of robot mobility, image recognition, or language recognition.

    One of Google's earlier robotics acquisitions, in 2013, was Boston Dynamics, best known for freaking people out with its doglike robot, which some interpreted as a sure sign of the end times. Others, like Musk, simply fear this will all end badly.

    Musk has been outspoken about his concerns over artificial intelligence and its potential to create a killer-robot apocalypse, or something of that nature. At a talk at MIT last October, he called artificial intelligence "summoning the demon."

    "You know all those stories where there's the guy with the pentagram and the holy water, and he's like, sure he can control the demon?" Musk said. "It doesn't work out."

    And Musk isn't alone here. In fact, many science and tech luminaries signed an open letter basically calling on the AI world to please be careful and to take into consideration "maximizing the societal benefit of AI" — in other words, minimizing the chance this kills us all.

    One of the co-signatories on that letter was none other than Stephen Hawking, a fellow AI skeptic, who recently told the Zeitgeist Conference in London that computers would overtake humans in about 100 years. 

    "When that happens, we need to make sure the computers have goals aligned with ours," Hawking said, echoing the sentiments expressed in the open letter. 

    "Our future is a race between the growing power of technology and the wisdom with which we use it."

