
The Age of Artificial Superintelligence

With the introduction of artificial neural networks, humanity discovered that computers can learn, act and even create on their own. Even though AI is still in its early stages of development, machines may soon surpass humans in terms of skills and intelligence.

The average person who uses computers and smartphones daily rarely thinks about what’s going on inside these machines. Not many people are aware that modern gadgets, which usually have a limited amount of computational power, routinely connect to neural networks to perform automated translations, scan images, and make predictions about the weather and city traffic. By performing these tasks, the AI networks inside Google, Microsoft and Facebook are sharpening their existing skills and learning new ones.

Half of the programmers recently surveyed by the UK-based Financial Times believe that computers will be able to outsmart humans by 2040, and about 90% of them believe that artificial "superintelligence" will emerge by 2075.

Professor Nick Bostrom of Oxford University's Future of Humanity Institute said in a YouTube video that "super-AI" will become so powerful that it will be able to invent things:

So think of all the crazy technologies that you could have imagined maybe humans could have developed in the fullness of time: cures for aging, space colonization, self-replicating nanobots or the uploading of minds into computers – all kinds of science fiction-y stuff that’s nevertheless consistent with the laws of physics. All of this superintelligence could develop, and possibly quite rapidly.

However, not everyone is optimistic about the future role of AI. There are those who believe in Hollywood scenarios like the one from the movie Terminator, where a powerful supercomputer called Skynet becomes self-aware and wages war on the human race.

Nick Bostrom says that the future of human-machine interaction is not necessarily doomed to be as bad as it’s being portrayed by Hollywood. However, since what superintelligence will be doing is essentially performing a powerful optimization process, people will have to think very carefully about setting rules and boundaries for computers.

The point here is that we should not be confident in our ability to keep a superintelligent genie locked up in its bottle forever. Sooner or later it will get out. I believe that the answer here is to figure out how to create superintelligent AI such that even if – or when – it escapes, it is still safe, because it is fundamentally on our side, because it shares our values.

Microsoft co-founder Bill Gates, theoretical physicist Stephen Hawking, and SpaceX and Tesla CEO Elon Musk have all expressed concerns about the possibility that machines could become so advanced that they present an existential risk to humanity. And even though scientists cannot foresee all the consequences of accelerated machine learning, they are trying to warn future generations about the possible positive and negative consequences of the AI revolution.

We'd love to get your feedback at radio@sputniknews.com
