Seminar

From Transfer Learning to Continual Learning

22/Feb/2022

Speaker:

Joost van de Weijer

Institution:

Computer Vision Center

Language:

EN

Type:

Hybrid

Description:

One of the major assets of deep neural networks is that, when trained on large datasets (source data), their knowledge can be transferred to small datasets (target data). Transfer learning for deep neural networks can be performed simply by finetuning the network on the new data. In this talk, I will introduce the research field of continual learning, where the aim is not only to adapt to the target data but also to keep the performance on the original source data. In addition, during adaptation to the target, the learner no longer has access to the source data. This process can be repeated over a sequence of tasks that are learned one at a time. The aim is for the learner to perform well on all previous tasks at the end of the training process. The main challenge in continual learning is catastrophic forgetting, where the learner suffers a significant drop in performance on previous tasks. I will discuss a number of strategies to prevent catastrophic forgetting and will explain several methods developed in our group to address this problem.
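
As an aside, the naive sequential finetuning baseline described above might look like the following minimal sketch, assuming PyTorch and a hypothetical list of per-task data loaders (task_loaders): the model is adapted to one task at a time with no access to earlier data, which is exactly the setting in which catastrophic forgetting appears.

    # Minimal sketch (illustrative, not the speaker's code), assuming PyTorch is available.
    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Flatten(), nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))
    criterion = nn.CrossEntropyLoss()

    def finetune_on_task(model, loader, epochs=1, lr=1e-3):
        # Plain finetuning on one task's data only; earlier tasks are not replayed.
        optimizer = torch.optim.SGD(model.parameters(), lr=lr)
        model.train()
        for _ in range(epochs):
            for x, y in loader:
                optimizer.zero_grad()
                loss = criterion(model(x), y)
                loss.backward()
                optimizer.step()

    # task_loaders is a hypothetical list of DataLoaders, one per task, seen one at a time.
    # for t, loader in enumerate(task_loaders):
    #     finetune_on_task(model, loader)
    #     # Evaluating on tasks 0..t-1 at this point would show the performance drop
    #     # (catastrophic forgetting) that continual-learning methods aim to prevent.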

Joost van de Weijer is a Senior Scientist at the Computer Vision Center and leader of the Learning and Machine Perception (LAMP) group. He received his Ph.D. degree in 2005 from the University of Amsterdam. From 2005 to 2007, he was a Marie Curie Intra-European Fellow in the LEAR team at INRIA Rhone-Alpes, France. From 2008 to 2012, he was a Ramon y Cajal Fellow at the Universidad Autonoma de Barcelona. He has served as an area chair for the main computer vision and machine learning conferences (CVPR, ICCV, ECCV, and NeurIPS). His main research interests include active learning, continual learning, transfer learning, domain adaptation, and generative models.
