Google CEO Sundar Pichai, January 23, 2018
In an interview with Recode and MSNBC set to air on January 26, Google CEO Sundar Pichai likened AI to fire, saying it is very useful but needs to be treated with caution.
"AI is one of the most important things that humanity is working on," Pichai said. "It’s more profound than, I don’t know, electricity or fire. [While fire is good] it kills people, too. They learn to harness fire for the benefits of humanity, but we will have to overcome its downsides, too."
Meanwhile, the Google-owned AI company DeepMind has announced plans to delve into AI ethics and try to shed light on the matter in its upcoming research projects.
Pichai underscored potential advancements in AI technology like those that could be used to cure cancer, according to Android Authority. Humanity should, however, be cautious about progress in the field, Pichai said.
So, let's take a closer look at others who have issued warnings about AI's potential dangers; Apple's Tim Cook is perhaps the most notable.
Apple Inc. CEO Tim Cook, December 3, 2017
Taking the floor at the World Internet Conference in Wuzhen, China, Apple Inc. CEO Tim Cook said he isn't worried about artificial intelligence itself, but about people who think like machines, adding that technologies will empower us greatly, provided they are in harmony with humanity at large.
"I’m not worried about machines that think as people, I worry about people who think like machines. We need to work together to introduce technology to humanity," he said.
Nevertheless, he went on to say that many aspects of everyday life will change soon, allowing us to prosper from AI advances:
"Technologies can change the world for the better, if they are embedded in humanity. We believe that artificial intelligence will be able to embroider a person’s ability and help to make a breakthrough that transforms our lives in education, in access to health services and in countless other areas."
Cook's approach appeared to parallel Russian President Vladimir Putin's earlier statements, in which he said that artificial intelligence, centered around human cognition, is what the future holds for Russia and the whole world, with no one to be left "at the end of the line." The statement prompted a concerned reply from Tesla CEO Elon Musk.
SpaceX, Tesla CEO Elon Musk, February — September 2017
Speaking out publicly following Russian President Putin’s remarks on who or rather what will rule the world, Musk said artificial intelligence could be humanity’s greatest existential threat, in that it could start a third world war.
Musk clarified that he was not just concerned about the prospect of a world leader starting the war, but also of an approach "that a [pre-emptive] strike is [the] most probable path to victory."
He also reiterated that AI is "vastly more risky" than North Korea's nuclear ambitions.
Musk's fear of AI-driven warfare has been a driving force behind many of his public warnings. Earlier, for instance, he was one of more than 100 signatories calling for a UN-initiated ban on lethal autonomous weapons.
Separately, during his talk at the World Government Summit in Dubai in February 2017, Musk brought up the issue of "deep AI" which goes beyond driverless cars to what he dubbed "artificial general intelligence." He described the latter as AI that is "smarter than the smartest human on Earth" and called it a "dangerous situation."
Physicist Stephen Hawking, November 6, 2017
During a talk at the Web Summit technology conference in Lisbon, Portugal, in early November, the renowned physicist asserted that the emergence of artificial intelligence could be the "worst event in the history of our civilization." He warned against the potential danger of computers and machines, saying they are capable of "emulating human intelligence and even exceeding it," CNBC reported.
He admitted the future was shrouded in uncertainty, essentially with two possible outcomes:
"Success in creating effective AI, could be the biggest event in the history of our civilization. Or the worst. We just don't know. So we cannot know if we will be infinitely helped by AI, or ignored by it and side-lined, or conceivably destroyed by it," Hawking said during the speech.
"Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy."
Hawking suggested that preventing these potentially negative effects would require competent management and effective skills – "best practice," as he referred to it.
Microsoft Founder Bill Gates, February 2017
Discussing the future of the labor market, Microsoft founder Bill Gates stated that since robots are expected to take human jobs, they should also pay income tax.
"If a human worker does $50,000 of work in a factory, that income is taxed," said Gates in a recent interview with Quartz editor-in-chief Kevin Delaney. "If a robot comes in to do the same thing, you'd think we'd tax the robot at a similar level."
Citing the coming shortage of human jobs, he argued that the tax revenue would be needed to train and educate workers, as well as to make "new, irreplaceable jobs available to human workers."