Speaking at the Zeitgeist Conference 2015 in London, Hawking confirmed the fears of paranoid sci-fi buffs.
"Computers will overtake humans with AI [Artificial Intelligence] at some point within the next 100 years. When that happens, we need to make sure the computers have goals aligned with ours," he said. "Our future is a race between the growing power of technology and the wisdom with which we use it."
This isn’t the first time Hawking has warned of the consequences of AI — the field focused on software and machine intelligence. Along with Tesla Motors CEO Elon Musk, the theoretical physicist signed an open letter in January cautioning against rapid and unchecked AI development.
Scientists, according to Hawking, are growing more preoccupied with developing and furthering the field of AI than they are with understanding its consequences for the human race.
"Success in creating AI would be the biggest event in human history," he wrote in a 2014 article for the Independent. "Unfortunately, it might also be the last, unless we learn how to avoid the risk."
"One can imagine such technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand," he continued. "Whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all."
Hawking believes that scientists and technologists need to fully coordinate and communicate their achievements in the field to avoid its potential pitfalls and ensure that it does not grow beyond humanity’s control.
AI technology is fast becoming part of everyday routines. Examples include Apple's Siri, an intelligent personal assistant built into iPhones and iPads, and Google's self-driving vehicles, which also rely heavily on AI.
Global tech giants like Google, Facebook, and Apple aren't the only ones pursuing AI development. According to the FT, more than 150 Silicon Valley startups are currently working on AI.