Learning invariant features through local space contraction
Computer Science – Artificial Intelligence
Scientific paper
2011-04-21
We present in this paper a novel approach for training deterministic auto-encoders. We show that by adding a well-chosen penalty term to the classical reconstruction cost function, we can achieve results that equal or surpass those attained by other regularized auto-encoders as well as denoising auto-encoders on a range of datasets. This penalty term corresponds to the Frobenius norm of the Jacobian matrix of the encoder activations with respect to the input. We show that this penalty term results in a localized space contraction, which in turn yields robust features on the activation layer. Furthermore, we show how this penalty term is related to both regularized auto-encoders and denoising auto-encoders, and how it can be seen as a link between deterministic and non-deterministic auto-encoders. We find empirically that this penalty helps to carve a representation that better captures the local directions of variation dictated by the data, corresponding to a lower-dimensional non-linear manifold, while being more invariant to the vast majority of directions orthogonal to the manifold. Finally, we show that by using the learned features to initialize an MLP, we achieve state-of-the-art classification error on a range of datasets, surpassing other methods of pre-training.
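To make the penalty concrete, the following is a minimal NumPy sketch of the kind of objective the abstract describes: the classical reconstruction cost plus a weighted squared Frobenius norm of the encoder's Jacobian, J(theta) = L(x, g(f(x))) + lam * ||df(x)/dx||_F^2. The sigmoid activations, tied decoder weights, squared-error cost, and all names (W, b_enc, b_dec, lam) are illustrative assumptions, not necessarily the paper's exact configuration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def contractive_ae_loss(x, W, b_enc, b_dec, lam=0.1):
    """Reconstruction cost plus lam * ||J_f(x)||_F^2 (hypothetical setup).

    For a sigmoid encoder h = s(W x + b_enc), dh_j/dx_i = h_j (1 - h_j) W_ji,
    so the squared Frobenius norm of the Jacobian reduces to
        sum_j (h_j (1 - h_j))^2 * sum_i W_ji^2,
    which avoids materializing the full Jacobian.
    """
    h = sigmoid(W @ x + b_enc)           # encoder activations f(x)
    x_rec = sigmoid(W.T @ h + b_dec)     # tied-weight decoder g(f(x))
    recon = np.sum((x - x_rec) ** 2)     # reconstruction error L(x, g(f(x)))
    penalty = np.sum((h * (1.0 - h)) ** 2 * np.sum(W ** 2, axis=1))
    return recon + lam * penalty
```

Under these assumptions, the penalty term shrinks the encoder's sensitivity to all input directions, while the reconstruction term forces it to stay sensitive along directions needed to reconstruct the data, which is one way to read the localized space contraction described above.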
Yoshua Bengio
Xavier Glorot
Gregoire Mesnil
Xavier Muller
Salah Rifai