Conference Paper, Year: 2016

Deep Multi-Task Learning with evolving weights

Abstract

Pre-training of deep neural networks has largely been abandoned in recent years, mainly because of the difficulty of controlling overfitting and of tuning the consequently increased number of hyper-parameters. In this paper, we use a multi-task learning framework that combines weighted supervised and unsupervised tasks. We propose to evolve the task weights along the learning epochs in order to avoid the abrupt break between tasks found in the sequential transfer learning of the pre-training scheme. This framework also allows the use of unlabeled data. Extensive experiments on MNIST show promising results.
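
To illustrate the general idea described in the abstract, the sketch below combines a supervised classification loss and an unsupervised reconstruction loss into a single weighted objective whose weights evolve with the epoch. This is not the authors' implementation: the network sizes, the toy data, and the linear weight schedule in task_weights are assumptions made for illustration only (the paper may use a different schedule and architecture).

```python
# Minimal sketch (not the authors' code) of a multi-task objective whose
# supervised/unsupervised weights evolve over epochs, assuming a simple
# linear schedule and a shared encoder with two task heads.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with a supervised head and a reconstruction head."""
    def __init__(self, in_dim=784, hid_dim=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        self.classifier = nn.Linear(hid_dim, n_classes)  # supervised task
        self.decoder = nn.Linear(hid_dim, in_dim)        # unsupervised task

    def forward(self, x):
        h = self.encoder(x)
        return self.classifier(h), self.decoder(h)

def task_weights(epoch, n_epochs):
    """Hypothetical linear schedule: start mostly unsupervised,
    end mostly supervised."""
    lam_sup = min(1.0, epoch / max(1, n_epochs - 1))
    return lam_sup, 1.0 - lam_sup

model = MultiTaskNet()
opt = torch.optim.SGD(model.parameters(), lr=0.1)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

n_epochs = 10
for epoch in range(n_epochs):
    lam_sup, lam_unsup = task_weights(epoch, n_epochs)

    # Toy random batches standing in for labeled and unlabeled MNIST data.
    x_lab, y_lab = torch.randn(32, 784), torch.randint(0, 10, (32,))
    x_unlab = torch.randn(128, 784)

    logits, _ = model(x_lab)        # supervised prediction on labeled data
    _, recon = model(x_unlab)       # reconstruction on unlabeled data
    loss = lam_sup * ce(logits, y_lab) + lam_unsup * mse(recon, x_unlab)

    opt.zero_grad()
    loss.backward()
    opt.step()
```

As the weights evolve smoothly from the unsupervised task toward the supervised one, there is no hard switch between a pre-training phase and a fine-tuning phase, which is the motivation stated in the abstract.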

Dates and versions

hal-02345855, version 1 (04-11-2019)

Identifiers

  • HAL Id: hal-02345855, version 1

Cite

Soufiane Belharbi, Romain Hérault, Clément Chatelain, Sébastien Adam. Deep Multi-Task Learning with evolving weights. European Symposium on Artificial Neural Networks (ESANN), Apr 2016, Brugge, Belgium. ⟨hal-02345855⟩