However, Oren Etzioni, professor of Computer Science at the University of Washington and CEO of the Allen Institute for Artificial Intelligence, argues that such headlines are in fact strongly influenced by the work of one man: professor Nick Bostrom of the Faculty of Philosophy at Oxford University, author of the bestselling treatise Superintelligence: Paths, Dangers, Strategies.
Essentially, Bostrom claims that if machine brains surpass human brains in general intelligence, the resultant new ‘superintelligence’ could replace humans as the dominant life form on Earth. Furthermore, according to his findings, there is a 10-percent probability that human-level AI will be attained by 2022, a 50-percent probability that this feat will be achieved by 2040, and a 90-percent probability that such an entity will be created by 2075.
However, in his article published in MIT Technology Review, Etzioni points out that Bostrom’s main source of data is an aggregate of four different surveys, whose respondents included participants of the Philosophy and Theory of AI conference held in Thessaloniki in 2011 and members of the Greek Association for Artificial Intelligence.
Furthermore, it appears that Bostrom provided neither the response rates nor the phrasing of the questions used in those surveys, nor did he account for their heavy reliance on data collected in Greece.
"This aggregate of four surveys is the main source of data on the advent of human-level intelligence in over 300 pages of philosophical arguments, fables, and metaphors," Etzioni remarked.
Seeking to obtain a "more accurate assessment of the opinion of leading researchers in the field", he turned to the Fellows of the Association for the Advancement of Artificial Intelligence (AAAI), and in March 2016 the group sent out an anonymous survey to 193 fellows on Etzioni’s behalf, posing the following question: "In his book, Nick Bostrom has defined Superintelligence as 'an intellect that is much smarter than the best human brains in practically every field, including scientific creativity, general wisdom and social skills.' When do you think we will achieve Superintelligence?"
In the end, the vast majority of respondents concurred that the emergence of superintelligence is "beyond the foreseeable horizon": 67.5 percent replied that it may arrive more than 25 years from now, and 25 percent stated that it will never come into existence. The remaining 7.5 percent of respondents postulated that superintelligence may appear in the next 10 to 25 years.
It should also be noted that while the survey was anonymous, some of its participants opted to identify themselves, among them such notable figures as cognitive psychologist and computer scientist Geoffrey Hinton, Turing Award winner Edward Feigenbaum, leading roboticist Rodney Brooks and Google’s Director of Research Peter Norvig.
"There are many valid concerns about AI, from its impact on jobs to its uses in autonomous weapons systems and even to the potential risk of superintelligence. However, predictions that superintelligence is on the foreseeable horizon are not supported by the available data," Etzioni surmised.
He added that negative forecasts often tend to ignore the potential benefits of AI, and that superintelligence may in fact emerge from a symbiotic relationship between artificial intelligence systems and humans.