The first such conference was held at the initiative of three institutions: the National Research Nuclear University MEPhI, the Kogan Research Institute for Neurocybernetics, and the Institute of Computational Modelling of the Siberian Branch of the Russian Academy of Sciences (RAS). This year marks the 19th anniversary of the conference.
Alexander Gorban, DSc in Physics and Mathematics and an expert in the field of neural networks, discussed what neural networks actually are in an interview with a RIA Novosti correspondent.
Mr. Gorban, could you explain what neural nets are?
Alexander Gorban: Neural networks are networks of simple interconnected nodes, artificial neurons. Biological neurons, the cells found in our bodies, store, transmit and process information through electrical and chemical signals.
Artificial neural networks were built in an attempt to understand the brain's processes and use them in computer science. Artificial neurons are nothing but a very simplified model of the neurons found in biological neural nets. Each of these neurons is a simple node that transforms and transmits signals.
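Such a "simple node" can be sketched in a few lines of Python. The sigmoid activation and the specific weights below are illustrative assumptions, not anything prescribed in the interview:

```python
import math

def formal_neuron(inputs, weights, bias):
    """A formal neuron: a weighted sum of input signals
    passed through a nonlinear activation function."""
    signal = bias + sum(w * x for w, x in zip(weights, inputs))
    return 1.0 / (1.0 + math.exp(-signal))  # sigmoid activation

# Two input signals with illustrative weights
output = formal_neuron([1.0, 0.0], weights=[2.0, -1.0], bias=-1.0)
```

The output (here about 0.73) is itself a signal that can be fed into other neurons; a network is simply many such nodes wired together.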
We can say it is the structure of their connections that plays the central role in this process, not the properties of the interconnected nodes (neurons). This is the idea behind the creation of different neural networks. A large amount of research in neuroinformatics focuses on solving problems with such networks.
Why do we need neural networks when we have supercomputers? Do they have some sort of advantage?
Alexander Gorban: There is a very popular conjecture by Marvin Minsky (the American expert on artificial intelligence) stating that the speedup of a parallel processor grows only as the logarithm of the number of processing elements. For example, the speedup with 100 processors is only twice as high as the speedup with 10 processors.
Conventional parallel processors spend more time communicating with each other than actually processing information. But if you build a neural net to solve the same problem, you can realize parallelism's potential almost completely, and the speed will increase almost in proportion to the number of processors.
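The contrast can be put in back-of-the-envelope form. The logarithmic law below is just Minsky's conjecture as stated, and the linear law is the idealized neural-net case; neither is an exact performance model:

```python
import math

def conventional_speedup(n):
    # Minsky's conjecture: speedup grows only as the logarithm
    # of the number of processing elements
    return math.log10(n)

def neural_net_speedup(n):
    # Idealized neural net: speedup nearly proportional to n
    return n

# 100 processors vs 10: only a twofold gain under the conjecture,
# but a tenfold gain for a well-parallelized neural net
conventional_gain = conventional_speedup(100) / conventional_speedup(10)  # 2.0
neural_gain = neural_net_speedup(100) / neural_net_speedup(10)            # 10.0
```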
A net built from formal neurons can be implemented efficiently on many parallel systems, achieving maximum speed. The main advantage of neural networks is that they are well suited to parallel computing. It is very important that they learn easily, too: neural networks learn by drawing their own "conclusions" from examples that we give them.
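Learning "from examples" can be illustrated with the classic perceptron rule, one of the simplest neural learning procedures. The AND task, learning rate and epoch count here are illustrative choices, not anything from the interview:

```python
def predict(w, b, x):
    """Threshold neuron: fires (1) if the weighted sum is positive."""
    return 1 if b + sum(wi * xi for wi, xi in zip(w, x)) > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn weights and bias from labelled examples (x, target), target in {0, 1}."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            err = target - predict(w, b, x)  # each example corrects the net's "conclusion"
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Four examples of logical AND; the net infers the rule from them alone
examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(examples)
```

After training, `predict(w, b, (1, 1))` returns 1 and the other three inputs return 0: nobody programmed the AND rule in, the net drew that conclusion from the examples.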
Can you name the most important applications of neural nets?
Can you predict the future of neural networks?
Alexander Gorban: In the near future they will be used for prediction, optimization and real-time control. But I cannot and do not want to predict the long-term future. Some possibilities are a bit frightening.
For example, after passing a certain milestone of neural network development, we could become far too dependent on them. And in the event of a force majeure (mind you, I am not talking about networks spiraling out of control; that particular problem can easily be dealt with), being forced to abandon neural nets would practically turn us into cavemen. We are now discussing this and many other issues at the Neuroinformatics-2017 conference.