Self-Supervised Learning of Visual Representations

Mathilde Caron, Inria and Facebook AI Research (FAIR), France
Abstract

Self-supervised learning is the problem of training deep neural networks without any manual annotations. Training deep networks typically requires large amounts of annotated data, which limits their application in fields where annotations are difficult to obtain. Moreover, manual annotations are tailored to a specific task and reflect the annotators' own biases; as a result, they can be noisy and unreliable. We hope that training systems without any annotations can lead to better, more generic, and more robust representations. Recent improvements in self-supervised learning methods have made them a serious alternative to traditional supervised training. In this seminar, we will discuss some of these methods and give an overview of recent contributions to the fast-growing field of self-supervised learning.

Bio: Mathilde Caron is currently a third-year PhD student at Inria and Facebook AI Research (FAIR) in France, working on large-scale self-supervised representation learning for vision. Her supervisors are Julien Mairal, Piotr Bojanowski, and Armand Joulin. Before that, she graduated from both Ecole polytechnique and KTH Royal Institute of Technology, where she focused on applied mathematics and statistical learning. You can find a list of her publications here:


BlueJeans Link:

Meeting ID: 861480551 / Participant passcode: 7552