
How Google's Computers Invented Their Own Language

In September 2016, Google announced the launch of its Neural Machine Translation system. The technology was meant to improve the accuracy of automated translations, but things went further than the tech giant had planned: the self-learning computers invented their own modus operandi.

When you type a Dutch sentence into Google Translate to get its English translation, you might expect the system to look up each word in some sort of big dictionary. But that is not exactly how it works. Thanks to the so-called statistical approach, Google’s AI looks at the sentence as a whole. It then picks groups of words and attempts to build a translated phrase according to certain rules – rules acquired through machine learning and user input. The result is then compared against proper phrases used by actual people on the Internet. To improve accuracy even further, Google has routinely “fed” its supercomputers gigabytes of documents from the United Nations website, which keeps mirror archives in six languages.
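To make the idea concrete, here is a rough Python sketch of the phrase-based approach described above. The tiny phrase table, its Dutch entries and its scores are invented purely for illustration; a real system learns millions of such entries and their scores from data.

```python
# Toy sketch of phrase-based translation: a hand-made phrase table maps Dutch
# word groups to scored English candidates. The sentence is split into known
# phrases and the highest-scoring candidates are stitched together.

# Hypothetical phrase table: {dutch_phrase: [(english_phrase, score), ...]}
PHRASE_TABLE = {
    "goedemorgen": [("good morning", 0.9), ("good day", 0.4)],
    "hoe gaat het": [("how are you", 0.8), ("how is it going", 0.7)],
    "met u": [("with you", 0.9)],
}

def translate(sentence: str) -> str:
    """Greedy phrase-by-phrase translation over the toy table."""
    words = sentence.lower().split()
    output = []
    i = 0
    while i < len(words):
        # Try the longest phrase first, then progressively shorter ones.
        for length in range(len(words) - i, 0, -1):
            phrase = " ".join(words[i:i + length])
            if phrase in PHRASE_TABLE:
                # Pick the highest-scoring English candidate for this phrase.
                best, _ = max(PHRASE_TABLE[phrase], key=lambda c: c[1])
                output.append(best)
                i += length
                break
        else:
            # Unknown word: pass it through unchanged.
            output.append(words[i])
            i += 1
    return " ".join(output)

print(translate("Goedemorgen hoe gaat het met u"))
# -> "good morning how are you with you"
```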

In a YouTube video, Eric Schmidt, who served as Google's CEO for ten years, said that Google Translate went through massive improvements thanks to the use of neural networks:

If you look at the error rate of Google Translate, which all of you use, we had an error rate of roughly 23%, we were able to reduce it to 8% using the latest in neural networks.

Artificial neural networks are a method of computing in which artificial neurons loosely imitate the human brain. There are neural networks with thousands, or even millions, of neural units. The method is widely used in technologies such as speech recognition and image analysis, but machine translation remains a difficult task for computers due to the complexity of human languages.
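As a rough illustration of what a single artificial neuron does, the Python snippet below weights its inputs, adds a bias and squashes the sum through a nonlinearity. The weights here are made up for the example; a real network learns them from data and stacks huge numbers of such units into layers.

```python
import numpy as np

def sigmoid(x):
    # Nonlinearity that squashes any value into the range (0, 1).
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical weights and bias (in practice these are learned during training).
weights = np.array([0.7, -1.2, 0.3])
bias = 0.1

def neuron(inputs):
    # One artificial neuron: weighted sum of inputs plus bias, then squash.
    return sigmoid(np.dot(weights, inputs) + bias)

print(neuron(np.array([1.0, 0.5, -0.2])))  # single activation between 0 and 1
```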

However, along with the reduced translation error rates, the introduction of the Neural Machine Translation technology brought Google engineers something unexpected. All by themselves, the computers learned to translate between pairs of languages, such as Korean and Japanese, without using English as a bridge. This means the AI invented its own internal bridge language, one that is hard for humans to understand. Surprisingly, the system did not simply look up corresponding words in English-Japanese and English-Korean dictionaries. Instead, the network compared phrase patterns, words and meanings in Japanese and Korean, and then successfully matched the corresponding concepts.
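The toy Python sketch below illustrates the “interlingua” idea in miniature: sentences from different languages are mapped into one shared meaning space, so a Korean sentence can be matched directly to the closest Japanese one without passing through English. The sentences and vectors are invented for illustration and bear no relation to Google's actual internal representation, which is learned during training rather than written by hand.

```python
import numpy as np

# Shared "meaning space": each sentence, whatever its language, gets a vector.
# In a real system these embeddings are learned; here they are made up.
shared_space = {
    # ("language", "sentence"): embedding in the shared meaning space
    ("ko", "안녕하세요"): np.array([0.90, 0.10, 0.00]),
    ("ja", "こんにちは"): np.array([0.88, 0.12, 0.02]),
    ("ja", "ありがとう"): np.array([0.10, 0.90, 0.05]),
}

def closest(query_key, target_lang):
    """Return the target-language sentence whose meaning vector is nearest."""
    q = shared_space[query_key]
    candidates = [(k, v) for k, v in shared_space.items() if k[0] == target_lang]
    best_key, _ = min(candidates, key=lambda kv: np.linalg.norm(kv[1] - q))
    return best_key[1]

# Korean "hello" maps straight to the Japanese sentence with the closest
# meaning vector - no English involved.
print(closest(("ko", "안녕하세요"), "ja"))  # -> こんにちは
```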

Google is not the only company actively researching and implementing machine learning and neural networks. Giants like Facebook and Microsoft are also developing their own speech recognition, image recognition and machine translation services. So, perhaps, besides Google's “interlingua”, we may well see other examples of computers outsmarting their creators and speaking their own language.

We'd love to get your feedback at radio@sputniknews.com

