05:03 GMT, 13 June 2021

    Artificial Intelligence (AI) systems have never developed faster - or been more threatening - according to SpaceX founder and Tesla Motors CEO Elon Musk, who is sponsoring 37 multinational research projects to clear the way for people and AI to work safely side by side.

    Known as one of the world's most eccentric entrepreneurial risk-takers, Musk has long been worried about the pace at which AI technology is growing, and has now granted roughly $7 million to global research on the opportunities — and dangers — of the field, according to the Future of Life Institute (FLI), which Musk has supported since January 2015.

    The Boston-based FLI chose 37 teams — out of 300 applicants — and announced the results of the grant program this week, to "keep AI ethical, robust and beneficial." Research teams from Stanford, Berkeley, Oxford, Cambridge, Harvard and MIT are among the winners, called on to conduct research in computer science, law, and economics.

    Pointing to projects like Google's DeepMind, which aims to teach machines to read, the Tesla CEO — together with Microsoft founder Bill Gates and theoretical physicist Stephen Hawking — has repeatedly voiced concern that AI systems are developing faster than ever and could one day slip out of control.

    "We need to be super careful with AI. Potentially more dangerous than nukes," said Musk, also CEO of Space Exploration Technologies Corp. (SpaceX), stressing that the only benefit from AI would be in "drudgery… or tasks that are mentally boring, not interesting."

    Taking the opposite view, Facebook founder and CEO Mark Zuckerberg said in a Q&A on his profile that he believes "more intelligent services will be much more useful" to consumers.

    FLI president Max Tegmark, noting the release of the "Terminator Genisys" film in theaters this week, stressed that "the danger with the Terminator scenario isn't that it will happen, but that it distracts from the real issues posed by future AI."

    The grants, financed by Musk's fund and by the Open Philanthropy Project and ranging from $20,000 to $1.5 million, will be used to build AI safety constraints and to address open questions, such as whether autonomous weapons systems should be deployed.

    Expected to begin in August, grant funding will last up to three years.

