If you think about computer-generated art from the 1990s or 2000s, you probably remember fractal images or colorful kaleidoscopes built from math formulas. Even the slow machines of that era could create such images. Today, with dramatically greater computational power and the help of artificial neural networks, machines can go well beyond those primitive graphics.
In 2015, Leon Gatys of Germany's University of Tübingen, together with colleagues from Switzerland and Belgium, published "A Neural Algorithm of Artistic Style". Using a pre-trained neural network, the researchers taught the machine to combine pairs of images: a photo and a famous painting. With the new algorithm, the neural network could successfully transfer the style of the painting to photos uploaded by users.
Here is Leon Gatys talking about his method in a video from Microsoft Research’s YouTube channel:
To perform artistic style transfer, what we will do is we will extract the texture information from the painting, and the information presented in the higher level of the convolutional neural network of the photograph, which basically preserves the content of the image, and we will combine both of these representations into a new image that actually combines the style of the painting with the content of the photograph.
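The "texture information" Gatys mentions is commonly captured as a Gram matrix: the pairwise inner products between a layer's feature maps. The sketch below illustrates the idea in plain Python; the feature values and the `gram_matrix` helper are invented for illustration, not the paper's actual code, and real implementations work on large tensors from a pre-trained network.

```python
# Simplified sketch of a layer's style representation: the Gram matrix
# holds the inner products between flattened feature-map channels.
def gram_matrix(features):
    """features: list of channels, each a flat list of activations."""
    return [[sum(a * b for a, b in zip(fi, fj)) for fj in features]
            for fi in features]

# Two toy 2x2 feature maps, flattened row by row (made-up numbers):
features = [
    [1.0, 0.0, 2.0, 1.0],  # channel 0
    [0.0, 1.0, 1.0, 0.0],  # channel 1
]
G = gram_matrix(features)
# G[0][1] is the correlation between channels 0 and 1:
# 1*0 + 0*1 + 2*1 + 1*0 = 2.0
```

In the full algorithm, a new image is optimized so that its Gram matrices match the painting's (style) while its higher-layer activations match the photograph's (content).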
An artificial neural network is a method of computing loosely modeled on the human brain. Through layers of interconnected neural units, the network can learn to solve complex tasks in areas such as computer vision, speech recognition, and machine translation.
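A single neural unit of the kind described above can be sketched in a few lines: it takes a weighted sum of its inputs plus a bias, then applies a nonlinear activation. The weights and inputs below are illustrative, and the sigmoid is just one common choice of activation.

```python
import math

def neuron(inputs, weights, bias):
    """One artificial neural unit: weighted sum + sigmoid activation."""
    z = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # logistic sigmoid, maps z to (0, 1)

# z = 0.5*2.0 + (-1.0)*1.0 + 0.0 = 0.0, and sigmoid(0) = 0.5
out = neuron([0.5, -1.0], [2.0, 1.0], bias=0.0)
```

Learning consists of adjusting the weights and biases of many such units so that the whole network's outputs move closer to the desired answers.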
Even though creating art is something entirely new for neural networks, the machines are developing their skills quickly. Beyond still images, artistic style transfer has already been tested on video. It is now no problem for a neural network to make just about any video look like a Van Gogh painting in motion.
Gene Kogan is an artist and a programmer who is interested in AI and software for creativity. In his video about convolutional neural networks, he describes how machines analyze images by placing a square grid over the picture, and then going through different layers to figure out what is in each cell.
The activation maps in the second layer are more interesting because, rather than looking for patterns in the raw pixels of the original image, we're now looking for patterns in the activation maps from previous layers of the network. So, for example, it might be able to combine vertical edges and horizontal edges to detect corners, which we can think of as higher-level features. As we repeat this process through many layers of the network, we acquire higher-level features, or representations, of the image.
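The first-layer pattern detectors Kogan describes can be sketched as a small kernel sliding over the image grid. The 3x3 kernel below is a classic vertical-edge filter, and the tiny grayscale image is invented for illustration; real networks learn their kernels from data rather than using hand-written ones.

```python
def convolve(image, kernel):
    """Valid 2D convolution of a grayscale image (a list of rows)."""
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(len(image) - kh + 1):
        row = []
        for j in range(len(image[0]) - kw + 1):
            # Multiply the kernel against the patch under it and sum.
            row.append(sum(image[i + a][j + b] * kernel[a][b]
                           for a in range(kh) for b in range(kw)))
        out.append(row)
    return out

# Responds strongly where brightness increases from left to right:
vertical_edges = [[-1, 0, 1],
                  [-1, 0, 1],
                  [-1, 0, 1]]

# A dark-to-bright vertical boundary between columns 2 and 3:
img = [[0, 0, 0, 9, 9, 9],
       [0, 0, 0, 9, 9, 9],
       [0, 0, 0, 9, 9, 9]]
response = convolve(img, vertical_edges)
# Strong activations only where the kernel straddles the boundary:
# [[0, 27, 27, 0]]
```

Stacking such filters, and then filtering their activation maps in turn, is exactly the layer-by-layer process the quote describes.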
The last layers are the most interesting, since the machine can actually recognize everyday objects, whether a phone or a water bottle, and match them with words from its database.
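That final matching step reduces to picking the label with the highest score from the network's last layer. The labels and scores below are made up for illustration; a real classifier would have hundreds or thousands of labels and scores produced by the network itself.

```python
# Hypothetical output of a network's final layer: one score per label.
labels = ["phone", "water bottle", "cat"]
scores = [1.2, 3.5, 0.4]

# The prediction is simply the word whose score is largest.
best = max(range(len(scores)), key=lambda i: scores[i])
prediction = labels[best]  # "water bottle"
```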
As neural networks learn new skills, it’s possible that they will not only be able to make their own versions of the visual content uploaded by humans, but they may start creating their own original art. A program called DeepDream developed by Google is already able to generate strange hallucinogenic images, which are seen by many as the dreams of the machine.