Michael L. Littman, professor of computer science at Brown University, said that he did not believe Musk's warnings were well-founded, explaining that the tech entrepreneur was referring to a science fiction notion of AI in his predictions.
"There are two different concepts that are referred to as AI. There is the kind of AI that people have been developing for decades and that has, especially in the last decade, been having huge effects on society. That's the kind of AI Zuckerberg is talking about. Then, there's a Sci Fi notion of AI, sometimes called AGI (artificial general intelligence) that is a purely hypothetical construct. This kind of AI can improve itself, can create its own goals, can develop creative solutions to the problems it poses itself, and, in more apocalyptic scenarios, can unilaterally take actions that destroy all human life. That's what Musk is talking about," Littman told Sputnik.
According to the expert, that form of AI does not exist and is believed by some people to be possible based on "unsubstantiated extrapolations from what has already been created."
Littman noted that both AI and AGI had their drawbacks, and that the disagreement between Musk and Zuckerberg centered on those of AGI.
"Both Zuckerberg and Musk are aware of the drawbacks of AI — the impacts on the labor market, the possibilities of automating large scale discrimination, the dangers of robot soldiers," Littman said.
"Musk thinks that AGI is a possible outcome of existing research and that, once achieved, threatens our existence. Zuckerberg does not see a straight line from current or near future AI and AGI fears," the expert said.
According to Ernest Davis, professor of computer science at New York University, "the fears of Musk, Stephen Hawking, and others, that AI poses an existential threat to humanity are wildly exaggerated."
"We build the AIs; we can build them exactly as we want to; and there is no reason to suppose that they will spontaneously get out of control," Davis told Sputnik.
The expert pointed out that there are many significantly more urgent and serious dangers, such as climate change and nuclear weapons.
According to Davis, the greatest drawback of AI is that it could take over many jobs currently performed by humans.
Yoshua Bengio, full professor in the Department of Computer Science and Operations Research at the Université de Montréal, said that he was more worried about people misusing the technology.
"My concerns are not about AI taking over humanity, but rather about mis-use of AI technology by organizations looking for greed or power … Some people and organization may decide to use the power of AI for their own purposes, against the interest of the majority, e.g., for military or political purposes, or simply to concentrate wealth, at the expense of the rest of humanity," the Canadian computer scientist well-known for his work on artificial neural networks and deep learning said.
Bengio added that he thought it would be possible to train AI "towards objectives aligned with human values."
AI is already helping save human lives, be it through automated flight systems or better diagnostics facilities, according to Littman.
"I am very proud of my colleagues in the field who are working diligently on applications of AI that are already helping people in safety critical situations. Automated flight systems help pilots avoid fatal collisions… AI systems are being developed to run life-saving machines in ICUs and to diagnose cancer much earlier than is currently possible," the expert said.
Littman added that AI systems are also used to save the lives of endangered animals by helping send human patrols to where poachers are likely to be, and to direct police and security checks in transport hubs, such as airports and harbors.
According to Littman, the Humanity Centered Robotics Initiative at Brown University was researching the possibility of AI and robots monitoring the elderly and helping them in their everyday lives.
"We are using them [AI, robots] to help people stay connected socially and to provide an ear for people who need certain kinds of counseling. We are working to make robots more flexible and to learn from people to carry out tasks that would be helpful around the house," Littman said.
Davis said that AI could be used in disaster management, for example, in sending a robot where it would be unsafe or impossible for a human to go.
"I think driverless cars will certainly save lives, and it is very likely that medical applications will save lives," Davis said.
According to Bengio, "progress in AI will certainly help medical research in general."
The expert added that robots could have a positive impact on "education, energy production, energy savings, industrial production, commerce, security, consumer service, information services" as well as transport and medical services.