Computer Science – Neural and Evolutionary Computing
Scientific paper
2012-03-20
The deep Boltzmann machine (DBM) has been an important development in the quest for powerful "deep" probabilistic models. To date, simultaneous or joint training of all layers of the DBM has been largely unsuccessful with existing training methods. We introduce a simple regularization scheme that encourages the weight vectors associated with each hidden unit to have similar norms. We demonstrate that this regularization can be easily combined with standard stochastic maximum likelihood to yield an effective training strategy for the simultaneous training of all layers of the deep Boltzmann machine.
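The abstract does not spell out the exact form of the penalty, but one plausible reading of "encourages the weight vectors associated with each hidden unit to have similar norms" is a penalty on the variance of the per-hidden-unit weight norms, added to the stochastic maximum likelihood gradient. The sketch below illustrates that idea under these assumptions; the function names and the variance form are illustrative, not taken from the paper.

```python
import numpy as np

def norm_similarity_penalty(W):
    """Variance of the per-hidden-unit L2 weight norms (one column of W
    per hidden unit). Zero when all hidden units have equal-norm weights."""
    norms = np.linalg.norm(W, axis=0)
    return np.mean((norms - norms.mean()) ** 2)

def norm_similarity_grad(W):
    """Gradient of the penalty above with respect to W.

    For penalty (1/H) * sum_j (n_j - mean_n)^2 with n_j = ||W[:, j]||,
    the gradient is (2/H) * (n_j - mean_n) * W[:, j] / n_j.
    """
    norms = np.linalg.norm(W, axis=0)
    centered = norms - norms.mean()
    H = W.shape[1]  # number of hidden units
    return (2.0 / H) * W * (centered / np.maximum(norms, 1e-12))
```

In a training loop this penalty gradient would simply be scaled by a regularization coefficient and added to the stochastic maximum likelihood weight update for each layer.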
Yoshua Bengio
Aaron Courville
Guillaume Desjardins