Deep Multi-Task Learning with Evolving Weights
Abstract
Pre-training of deep neural networks has fallen out of favor in recent years. The main reason is the difficulty of controlling overfitting and of tuning the consequently larger number of hyper-parameters. In this paper we use a multi-task learning framework that combines weighted supervised and unsupervised tasks. We propose to evolve the weights along the learning epochs in order to avoid the break in the sequential transfer of knowledge that occurs in the pre-training scheme. This framework also allows the use of unlabeled data. Extensive experiments on MNIST show promising results.
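To make the evolving-weight idea concrete, the following is a minimal sketch (not the authors' code) of a multi-task loss whose task weights change across epochs: the unsupervised reconstruction weight decays while the supervised classification weight grows, so the transfer between the two tasks is gradual rather than a hard hand-off as in pre-training. The linear schedule, the network sizes, and all names (`MultiTaskNet`, `lambda_sup`, `lambda_unsup`) are illustrative assumptions, not details taken from the paper.

```python
# Sketch of a multi-task loss with epoch-dependent task weights.
# Assumption: a linear schedule shifting weight from the unsupervised
# reconstruction task to the supervised classification task.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    """Shared encoder with a classification head and a reconstruction head."""
    def __init__(self, in_dim=784, hid=256, n_classes=10):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(in_dim, hid), nn.ReLU())
        self.classifier = nn.Linear(hid, n_classes)   # supervised task
        self.decoder = nn.Linear(hid, in_dim)         # unsupervised task

    def forward(self, x):
        h = self.encoder(x)
        return self.classifier(h), self.decoder(h)

def task_weights(epoch, n_epochs):
    """Linearly shift importance from the unsupervised to the supervised task."""
    lambda_unsup = 1.0 - epoch / (n_epochs - 1)
    lambda_sup = 1.0 - lambda_unsup
    return lambda_sup, lambda_unsup

net = MultiTaskNet()
opt = torch.optim.SGD(net.parameters(), lr=0.1)
ce, mse = nn.CrossEntropyLoss(), nn.MSELoss()

# Toy stand-ins for MNIST batches: a small labeled set and a larger
# unlabeled set, as the framework permits.
x_lab, y_lab = torch.rand(32, 784), torch.randint(0, 10, (32,))
x_unlab = torch.rand(128, 784)

n_epochs = 10
for epoch in range(n_epochs):
    lam_s, lam_u = task_weights(epoch, n_epochs)
    logits, _ = net(x_lab)       # supervised term on labeled data
    _, recon = net(x_unlab)      # unsupervised term on unlabeled data
    loss = lam_s * ce(logits, y_lab) + lam_u * mse(recon, x_unlab)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Because both tasks share the encoder at every epoch, the representation is shaped jointly throughout training instead of being frozen after an unsupervised phase, which is the break in sequential transfer the abstract refers to.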