Leon Gatys is a PhD student in the lab of Prof. Matthias Bethge at the Centre for Integrative Neuroscience in Tübingen. In his PhD work, he uses deep convolutional neural networks: computational algorithms inspired by our knowledge of how the visual cortex is organised. They contain multiple layers of ‘neurons’, with connections between the layers that are learned by training on input signals, for example the pixels of images.
Leon uses these networks to explore the organisation of visual perception in tasks such as object recognition. In addition, he has discovered that the style and content of an image can be separated using the representations in different layers of a deep convolutional neural network, allowing art styles to be transferred from one image to another. This method lets anyone turn their own pictures into artwork, and is available for all to enjoy on the popular website deepart.io.
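For the technically curious: the separation works by comparing images in the feature space of a pre-trained network, matching content through the feature activations of a deep layer and style through the correlations (Gram matrices) of feature maps across several layers. Below is a minimal PyTorch sketch of that idea; the layer indices, loss weight, and choice of Adam as optimiser are illustrative simplifications, not the exact deepart.io pipeline.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Pre-trained VGG-19, used only as a fixed feature extractor.
vgg = models.vgg19(weights="DEFAULT").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

CONTENT_LAYER = 22                  # relu4_2
STYLE_LAYERS = [1, 6, 11, 20, 29]   # relu1_1 ... relu5_1

def features(x, layers):
    """Collect activations at the requested layer indices."""
    out = {}
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in layers:
            out[i] = x
    return out

def gram(f):
    """Gram matrix: correlations between feature maps (the 'style')."""
    _, c, h, w = f.shape            # assumes batch size 1
    f = f.view(c, h * w)
    return f @ f.t() / (h * w)

def style_transfer(content, style, steps=300, style_weight=1e6):
    """content, style: (1, 3, H, W) tensors with ImageNet normalisation."""
    target_c = features(content, [CONTENT_LAYER])[CONTENT_LAYER].detach()
    target_s = {i: gram(f).detach()
                for i, f in features(style, STYLE_LAYERS).items()}
    x = content.clone().requires_grad_(True)   # optimise the pixels directly
    opt = torch.optim.Adam([x], lr=0.02)       # the original paper used L-BFGS
    for _ in range(steps):
        opt.zero_grad()
        fx = features(x, [CONTENT_LAYER] + STYLE_LAYERS)
        loss = F.mse_loss(fx[CONTENT_LAYER], target_c)     # content term
        for i, g in target_s.items():                      # style terms
            loss = loss + style_weight * F.mse_loss(gram(fx[i]), g)
        loss.backward()
        opt.step()
    return x.detach()
```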


Why are deep convolutional neural networks so much better at perceptual inference tasks compared to other kinds of algorithms?
That is a good question. There is of course the obvious answer that deep CNNs are very powerful function approximators and if you train them with enough labeled data they can learn almost anything.
But in fact, I think we don’t really understand why they perform so much better than all previous algorithms. From the perspective of Computational Neuroscience they are not very different from algorithms like the Neocognitron or HMAX that were proposed many years ago. I actually believe that one of the most important tasks of Computational Neuroscience is to develop theories of neural computation that can capture the vast difference in performance between something like HMAX and a modern deep CNN, and thus provide a satisfying answer to your question.

When you began working on the deepart algorithm, did you realise its potential for artwork or was it a chance finding?
Back then we had just figured out that image features from pre-trained CNNs are great for texture modelling and synthesis. I remember we were discussing combining the texture of artworks with photographs, and I wasn’t really sure if it would work. But when I played around with it and we saw the first results – even though they were so much worse than what we can do now – it was immediately clear to us that this was very exciting and we were on to something exceptional.
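The texture modelling he refers to (Gatys, Ecker & Bethge, 2015) is essentially the style-only special case of the sketch above: start from noise and optimise the image to match only the Gram-matrix statistics of the texture image. A rough sketch, reusing the `features`, `gram`, and `STYLE_LAYERS` definitions from the earlier block:

```python
def synthesise_texture(texture, steps=300):
    """Generate a new sample of `texture` by matching Gram statistics only."""
    targets = {i: gram(f).detach()
               for i, f in features(texture, STYLE_LAYERS).items()}
    x = torch.randn_like(texture).mul_(0.1).requires_grad_(True)  # noise init
    opt = torch.optim.Adam([x], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        fx = features(x, STYLE_LAYERS)
        loss = sum(F.mse_loss(gram(fx[i]), g) for i, g in targets.items())
        loss.backward()
        opt.step()
    return x.detach()
```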

Before you began your PhD how much did you know about the project? What inspired you to move to Tübingen?
Actually, we hadn’t decided on a project yet when I moved to Tübingen.
I had done an internship with Matthias the previous summer and I really enjoyed working with him and was generally fascinated by the question of how perceptual systems perform complex information processing tasks.
So I knew I wanted to do my PhD with him and then we just developed a project together once I was there.

What are you currently working on? Do you have plans for future career directions?
We have just recently published a paper that further develops the Neural Style Transfer algorithm by allowing more fine-grained control over the stylisation outcome (https://arxiv.org/abs/1611.07865).
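One concrete control described in that paper is colour preservation: stylise only the luminance channel and keep the photograph’s own colours. The sketch below is one simple way to realise that idea with a standard RGB/YIQ conversion; the paper’s actual pipeline differs in its details.

```python
import torch

# RGB -> YIQ (NTSC) matrix; Y is luminance, I and Q carry the colour.
RGB2YIQ = torch.tensor([[0.299,  0.587,  0.114],
                        [0.596, -0.274, -0.322],
                        [0.211, -0.523,  0.312]])
YIQ2RGB = torch.linalg.inv(RGB2YIQ)

def preserve_colour(content, stylised):
    """Combine the stylised luminance with the content image's colour.

    Both inputs: (1, 3, H, W) RGB tensors with values in [0, 1].
    """
    def convert(img, m):
        return torch.einsum('cd,bdhw->bchw', m, img)
    c_yiq = convert(content, RGB2YIQ)
    s_yiq = convert(stylised, RGB2YIQ)
    # Take Y from the stylised image, I and Q from the original photo.
    out = torch.cat([s_yiq[:, :1], c_yiq[:, 1:]], dim=1)
    return convert(out, YIQ2RGB).clamp(0, 1)
```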

We believe that this pushes the technology to a quality that is acceptable for professional media editing, and possibly we will see digital artists using it for all sorts of things in the future (e.g. just recently our Neural Style Transfer was used in a short film by the ‘Twilight’ actress Kristen Stewart: https://arxiv.org/pdf/1701.04928v1.pdf).
Most of the work was done while I was an intern at Adobe Research last summer. I could definitely imagine going into industry after my PhD, given that there seem to be many opportunities where you can still do research and publish the work. But I haven’t made any decisions yet and am very excited to see what opportunities the future may bring.


Do you think future neural networks will ever be able to develop their own style?
Yes, I believe so, at least in the sense that the machine can find ways to satisfy certain pre-defined criteria, which can be very high-level. So I could imagine that eventually you could ask your computer something like ‘Make me an image that feels warm and dynamic’, and the algorithm would create something like that for you. The only thing it needs for that is an image-based notion of what feels warm and dynamic to you, which could be learned from data. So in that sense we can have ‘creative’ machines that will generate images that mean something to us.
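To make that concrete, here is a purely hypothetical sketch: `score_net` stands in for a model that has learned an image-based notion of ‘warm and dynamic’ (no such model is given in the interview), and the image is optimised by gradient ascent on its score, much as style transfer optimises pixels against a loss.

```python
import torch

def generate_for_criterion(score_net, shape=(1, 3, 256, 256), steps=500):
    """Gradient ascent on a learned criterion (hypothetical `score_net`)."""
    x = torch.rand(shape, requires_grad=True)   # start from random pixels
    opt = torch.optim.Adam([x], lr=0.01)
    for _ in range(steps):
        opt.zero_grad()
        loss = -score_net(x).mean()             # maximise the learned score
        loss.backward()
        opt.step()
        with torch.no_grad():
            x.clamp_(0, 1)                      # keep a valid image
    return x.detach()
```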

Do you have a favourite deepart creation to date? Are there any notable successes?
Together with the local Wirtschaftsförderung (economic development office) in Tübingen, we had a project where they put posters of stylised scenes from Tübingen in the shop windows in the Neckargasse. Some of them are really quite fascinating. This article features three of my favourites.


Celia Foster is a PhD candidate at the Max Planck Institute for Biological Cybernetics in Tübingen, Germany.

