Rapid Feature Learning with Stacked Linear Denoisers
Computer Science – Learning
Scientific paper
2011-05-05
10 pages
We investigate unsupervised pre-training of deep architectures as feature generators for "shallow" classifiers. Stacked Denoising Autoencoders (SdA), when used as feature pre-processing tools for SVM classification, can lead to significant improvements in accuracy, but at the price of a substantial increase in computational cost. In this paper we introduce a simple algorithm that mimics the layer-by-layer training of SdAs. In contrast to SdAs, however, our algorithm requires no training through gradient descent, as the parameters can be computed in closed form. It can be implemented in less than 20 lines of MATLAB™ and reduces the computation time from several hours to mere seconds. We show that our feature transformation reliably and significantly improves the results of SVM classification on all our data sets, often outperforming SdAs and even deep neural networks on three out of four deep learning benchmarks.
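The abstract states that each layer's parameters have a closed-form solution but does not spell it out here. As a purely illustrative sketch (in Python/NumPy rather than the MATLAB mentioned above), one plausible reading is that each layer solves a ridge-regularized least-squares reconstruction of the clean inputs from a randomly corrupted copy, and that layers are stacked greedily; the corruption scheme, the regularizer reg, and the tanh squashing between layers are assumptions for illustration, not details taken from the paper.

    import numpy as np

    def train_linear_denoiser(X, noise=0.5, reg=1e-5, seed=None):
        """Fit one linear denoising layer in closed form.

        X is an (n_samples, n_features) matrix. A copy of X is corrupted by
        randomly zeroing entries with probability `noise`, and the layer's
        weights are the ridge-regularized least-squares map from the corrupted
        copy back to the clean data, which needs no gradient descent.
        """
        rng = np.random.default_rng(seed)
        mask = rng.random(X.shape) > noise            # keep each entry with prob. (1 - noise)
        X_tilde = X * mask                            # corrupted copy of the data
        A = X_tilde.T @ X_tilde + reg * np.eye(X.shape[1])
        B = X_tilde.T @ X
        return np.linalg.solve(A, B)                  # closed-form ridge solution

    def stack_denoisers(X, n_layers=3, noise=0.5, seed=0):
        """Greedily stack linear denoisers, layer by layer, as SdA training does."""
        H, weights = X, []
        for layer in range(n_layers):
            W = train_linear_denoiser(H, noise=noise, seed=seed + layer)
            H = np.tanh(H @ W)                        # squashing between layers is an assumption
            weights.append(W)
        return H, weights

    # Illustrative usage on random data: the returned features would replace
    # the raw inputs as the representation handed to a standard SVM.
    X = np.random.default_rng(0).random((100, 20))
    features, _ = stack_denoisers(X, n_layers=3)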
Sha Fei
Weinberger Kilian Q.
Xu Zhixiang Eddie