Physics – Condensed Matter – Disordered Systems and Neural Networks
Bounds on learning in polynomial time
Scientific paper
1997-05-26
12 pages LaTeX with 8 EPS figures
The performance of large neural networks can be judged not only by their storage capacity but also by the time required for learning. A polynomial learning algorithm with learning time $\sim N^2$ in a network with $N$ units might be practical, whereas a learning time $\sim e^N$ would restrict training to rather small networks. The absolute storage capacity $\alpha_c$ and the capacity $\alpha_p$ attainable by polynomial learning rules are discussed for several feed-forward architectures: the perceptron, the binary perceptron, the committee machine, and a perceptron with fixed weights in the first layer and adaptive weights in the second layer. The analysis is based partially on dynamic mean-field theory, which is valid for $N\to\infty$. For the committee machine in particular, a value of $\alpha_p$ considerably lower than the capacity predicted by replica theory or simulations is found. This discrepancy is resolved by new simulations that investigate the dependence on learning time and reveal subtleties in the definition of the capacity.
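As an illustrative aside (not taken from the paper), the following minimal Python sketch makes the learning-time question concrete for the simplest case, the spherical perceptron: it stores $P = \alpha N$ random patterns with the classical Rosenblatt rule and counts the sweeps until convergence. The function name and parameters are assumptions for this sketch. Well below the known capacity $\alpha_c = 2$ convergence is fast; as $\alpha$ approaches $\alpha_c$ the stability margins shrink and the learning time grows sharply.

import numpy as np

def perceptron_learning_time(N, alpha, kappa=0.0, max_sweeps=10_000, seed=0):
    """Train a perceptron on P = alpha*N random +-1 patterns with the
    Rosenblatt rule; return the number of sweeps until every pattern
    has stability > kappa, or None if it fails to converge.
    Illustrative only; the paper's analysis uses dynamic mean-field theory."""
    rng = np.random.default_rng(seed)
    P = int(alpha * N)
    xi = rng.choice([-1.0, 1.0], size=(P, N))   # random input patterns
    sigma = rng.choice([-1.0, 1.0], size=P)     # random target outputs
    J = np.zeros(N)                              # couplings
    for sweep in range(1, max_sweeps + 1):
        updated = False
        for mu in range(P):
            # update the couplings whenever pattern mu is not stably stored
            if sigma[mu] * (J @ xi[mu]) <= kappa * np.sqrt(N):
                J += sigma[mu] * xi[mu] / N
                updated = True
        if not updated:          # a full sweep with no corrections: done
            return sweep
    return None

# Learning time grows as the load alpha approaches alpha_c = 2:
for alpha in (0.5, 1.0, 1.5):
    print(alpha, perceptron_learning_time(N=200, alpha=alpha))

Running such an experiment at increasing $N$ and fitting the sweep count against $N$ is one direct way to distinguish polynomial from exponential learning time, which is the operational question the abstract raises.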
Anthea Bethge
Heinz Horner