L1 regularization is better than L2 for learning and predicting chaotic systems

Computer Science – Learning

Scientific paper


Details

13 pages, 4 figures

Emergent behaviors are the focus of considerable recent research interest. It is therefore important to investigate which optimization methods suit the learning and prediction of chaotic systems, the putative candidates for emergence. We compared L1 and L2 regularization for predicting chaotic time series with linear recurrent neural networks; the internal representation and the weights of the networks were optimized in a unifying framework. Computational tests on different problems indicate considerable advantages for L1 regularization: it yielded markedly shorter learning times and better interpolation capabilities. We argue that viewing the optimization as maximum likelihood estimation justifies these results, because L1 regularization better fits heavy-tailed distributions, an apparently general feature of emergent systems.
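To make the comparison concrete, the following is a minimal sketch, not the paper's unified framework for linear recurrent networks: it contrasts L1 (Lasso) and L2 (Ridge) regularized linear autoregressive predictors on a delay-embedded logistic-map series. In the maximum likelihood view, the L1 penalty corresponds to a Laplace prior on the weights and the L2 penalty to a Gaussian prior. The chaotic map, embedding lag, and regularization strengths below are illustrative assumptions.

# Hypothetical sketch: L1 (Lasso) vs L2 (Ridge) regularization for one-step
# prediction of a chaotic series. This is NOT the paper's linear recurrent
# network; it is a plain autoregressive linear model on a delay-embedded
# logistic-map series, used only to illustrate the comparison.
import numpy as np
from sklearn.linear_model import Lasso, Ridge
from sklearn.metrics import mean_squared_error

# Generate a chaotic series from the logistic map x_{t+1} = r * x_t * (1 - x_t).
r, n = 3.9, 2000
x = np.empty(n)
x[0] = 0.4
for t in range(n - 1):
    x[t + 1] = r * x[t] * (1 - x[t])

# Delay embedding: predict x[t] from the previous `lag` values.
lag = 8
X = np.column_stack([x[i:n - lag + i] for i in range(lag)])
y = x[lag:]

# Train/test split without shuffling, so the test set probes forward prediction.
split = int(0.8 * len(y))
X_tr, X_te, y_tr, y_te = X[:split], X[split:], y[:split], y[split:]

for name, model in [("L1 (Lasso)", Lasso(alpha=1e-4, max_iter=50_000)),
                    ("L2 (Ridge)", Ridge(alpha=1e-4))]:
    model.fit(X_tr, y_tr)
    mse = mean_squared_error(y_te, model.predict(X_te))
    nonzero = int(np.sum(np.abs(model.coef_) > 1e-8))
    print(f"{name}: test MSE = {mse:.3e}, nonzero weights = {nonzero}/{lag}")

In such a toy setting the L1 penalty typically drives many delay-coordinate weights to exactly zero, the kind of sparsity the paper associates with faster learning and better interpolation, whereas the L2 penalty only shrinks them.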


Profile ID: LFWR-SCP-O-216215
