Thursday evening was the fifth Deep Learning Meetup, hosted for the first time in Heuritech’s new offices. Here is a short report of the great talks we heard on Thursday, for all the deeplearners who couldn’t come. Photos of the event will be released soon!
Matthieu Cord : Deep CNN and Weak Supervision Learning for visual recognition
For the task of image classification, deep convolutional networks are known as the state-of-the-art technique. Here is a figure of one of the best networks, VGG16. It is a very deep network with many convolution layers followed by max-pooling layers, which reduce the dimensionality.
But to classify a picture, it would be helpful to know which part of the picture is the relevant part for classification.
The method presented by Oquab, Bottou, Laptev and Sivic performs region selection using a weakly supervised learning technique. The idea is to use an up-scaled version of the picture, and then run the CNN on every possible frame of fixed size in the image (the CNN expects an input of fixed size). This way, we get a classification score for each fixed-size frame of the picture. We can then show, for a given class, the « activation map », obtained by plotting the activation score of this class for each frame.
Then, to classify an image, the score of each class is its score on the sub-frame where this class scores highest, and the predicted class is the one with the highest score.
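The scoring scheme above can be sketched in a few lines of NumPy. This is a toy illustration with made-up shapes, not the authors' code: `cnn_scores` stands in for the per-frame class scores the CNN would produce on the up-scaled image.

```python
import numpy as np

# Hypothetical per-frame scores: 3 classes, a 5x5 grid of fixed-size frames.
n_classes, grid_h, grid_w = 3, 5, 5
rng = np.random.default_rng(0)
cnn_scores = rng.normal(size=(n_classes, grid_h, grid_w))

# The « activation map » of a class is simply its grid of per-frame scores.
activation_map_class0 = cnn_scores[0]

# Image-level score of each class: the maximum over all frames
# (max pooling over positions); the prediction is the argmax of that.
image_scores = cnn_scores.max(axis=(1, 2))
predicted_class = int(image_scores.argmax())
```

The max over frames is what makes the supervision "weak": only the image-level label is needed, yet the frame that wins the max localizes the object.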
The method introduced by Thibaut Durand, Nicolas Thome and Matthieu Cord improves on this one by using not only the frame with the maximum score, but also the frame with the minimum score. Why? Because a part of the picture can contain strong evidence that a given pattern is not in the picture.
As an example, on the figures above, we can see that the heat map identifies where the cars are, but it also identifies strong evidence that there is no boat (because there is a tree).
Matthieu Cord presented the results of this method and how it improves image classification. He also showed how these results can be improved further by selecting only a few of the frames instead of all of them.
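The max + min aggregation idea can be sketched as follows. This is a simplified illustration of the scoring rule only (the actual MANTRA method trains a latent-structured SVM on top of it), with random stand-in scores:

```python
import numpy as np

# Hypothetical per-frame scores: 3 classes, 25 candidate frames.
rng = np.random.default_rng(1)
frame_scores = rng.normal(size=(3, 25))

# For each class, combine the best-scoring frame (positive evidence)
# with the worst-scoring frame (negative evidence).
image_scores = frame_scores.max(axis=1) + frame_scores.min(axis=1)
predicted_class = int(image_scores.argmax())
```

A class whose worst frame is strongly negative (e.g. "boat" on a frame containing a tree) is penalized, even if one frame scores moderately high.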
Main reference : MANTRA: Minimum Maximum LSSVM for Image Classification and Ranking.
Tristan Deleu: Learning from examples with Neural Turing Machines
Tristan Deleu presented Neural Turing Machines. Introduced by Google DeepMind in 2014, they are a way to implement memory-augmented neural networks. What are these networks?
Usual networks take an input and give an output. For example, an image classifier takes a picture and gives a class for this picture, without using any knowledge other than the weights of the network.
What we would like is to have a memory, i.e. to use some external knowledge to understand the input. As an example, a question-answering system should be able to look in a database for the answer to a question, and not only use the question itself.
A Neural Turing Machine is a very sophisticated architecture, able to read from and write to a memory. The goal of this method is to learn « algorithms » from examples, like a human would learn to do something by watching other people do it.
For the moment, the tasks solved by the NTM are simple: copying a sequence (once or several times), sorting a sequence, … But it is a promising step in the direction of memory-augmented networks.
To learn more about it, Tristan Deleu wrote a great blogpost: Snips’ blogpost
He also developed, with the Snips team, a Theano/Lasagne open-source library, for anyone who wants to play with it: github.com/snipsco/ntm-lasagne
Vincent Gire: Hyperparameter search
Every machine learning user knows that finding the best hyper-parameters is a very complicated, time-consuming and boring task. Especially in deep learning, where the number of hyper-parameters is huge (number of layers, number of neurons per layer, batch size, learning rate, …) and the time needed for a network to converge is long.
But hyper-parameter search is a dedicated research field. Some open-source tools can be used for it, like Hyperopt. They mainly use Bayesian optimization methods, taking into account at each step the loss of the last try, to reach the best hyper-parameters faster than a pure random search.
Oscar is free, and can be run in parallel very easily in Python or Lua. You only have to send the new score of the loss function for each experiment, and Oscar suggests the new parameters.
It also includes a great online front end to visualize the results.
Vincent Gire presented the Oscar system, and how he used it on practical problems.
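The suggest-then-report loop these tools expose can be sketched in plain Python. This is a hypothetical interface with a random-search backend, not Oscar's or Hyperopt's actual API; real tools would replace `suggest` with Bayesian optimization informed by the reported losses:

```python
import random

class RandomSearch:
    """Toy optimizer: suggest parameters, then get the loss reported back."""
    def __init__(self, bounds, seed=0):
        self.bounds = bounds                # {name: (low, high)}
        self.rng = random.Random(seed)
        self.best = (float("inf"), None)    # (best loss, best params)

    def suggest(self):
        return {k: self.rng.uniform(lo, hi) for k, (lo, hi) in self.bounds.items()}

    def report(self, params, loss):
        if loss < self.best[0]:
            self.best = (loss, params)

opt = RandomSearch({"learning_rate": (1e-4, 1e-1), "batch_size": (16, 256)})
for _ in range(30):
    params = opt.suggest()
    # Stand-in for training a network and measuring its validation loss:
    loss = (params["learning_rate"] - 0.01) ** 2
    opt.report(params, loss)
best_loss, best_params = opt.best
```

The key property, shared by Oscar as described in the talk, is that the optimizer and the experiments are decoupled: you can run many experiments in parallel and simply report each loss when it arrives.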
Aloïs Gruson: Deep learning for music recommendation
The speaker was Aloïs Gruson, from Niland, a French startup focused on machine learning for music search and recommendation. The goal of the research team can be summarized in this way: create a high-dimensional space where every song is a vector, and similar songs can be found by nearest neighbours. This also enables easy classification, retrieval and clustering of songs.
Getting this vector space can be done in different ways:
- User-based, through collaborative filtering, by factoring the user–song matrix. This requires already having a lot of users, and is not related to the content of the songs.
- Content-based, by analysing the content of the songs (the supervision then being which songs are similar). This is the approach studied here.
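Whichever way the space is built, retrieval reduces to a nearest-neighbour query. Here is a minimal sketch with random stand-in embeddings (the real vectors would come from the model):

```python
import numpy as np

# 1000 songs embedded as 64-d vectors, L2-normalized so that a dot
# product is a cosine similarity.
rng = np.random.default_rng(0)
songs = rng.normal(size=(1000, 64))
songs /= np.linalg.norm(songs, axis=1, keepdims=True)

query = songs[42]                     # "songs similar to song 42"
sims = songs @ query                  # cosine similarity to every song
neighbours = np.argsort(-sims)[:6]    # top-6 (the query itself comes first)
```

At Niland's scale, the brute-force dot product would be replaced by an approximate nearest-neighbour index, but the interface is the same.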
They compared their hand-crafted features (10 years of research!) with a deep learning approach starting from the time–frequency spectrogram. They used a model similar to Sander Dieleman’s:
This architecture is very similar to a ConvNet for images, but with the convolutions operating only along the time axis (1D temporal convolution).
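Concretely, a 1D temporal convolution over a spectrogram looks like this (a naive loop-based sketch with made-up sizes, for clarity; a real model would use an optimized library routine): each filter spans all frequency bins but slides only along time.

```python
import numpy as np

rng = np.random.default_rng(0)
spectrogram = rng.normal(size=(128, 400))   # (freq_bins, time_frames)
n_filters, kernel_t = 32, 4                 # filters span all 128 bins, 4 frames wide
filters = rng.normal(size=(n_filters, 128, kernel_t)) * 0.01

out_t = spectrogram.shape[1] - kernel_t + 1  # "valid" convolution length
feature_map = np.empty((n_filters, out_t))
for t in range(out_t):
    window = spectrogram[:, t:t + kernel_t]           # (128, 4) patch at time t
    feature_map[:, t] = (filters * window).sum(axis=(1, 2))
```

Sliding only along time makes sense for audio: patterns can occur at any moment, but a shift along the frequency axis changes what the sound is.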
They evaluated the results on a dataset of 8500 tracks in 141 playlists of mainstream music, with search-engine-like metrics. The results are very promising: slightly better than the hand-crafted features, with much less a priori knowledge. Like ConvNet models for images, it is also transferable to other types of music.
Try their demo!
It was great to have you all in our offices for this meetup! See you soon for the next one, on April 5th. And Yann LeCun will be one of our speakers!