UNREALIZABLE LEARNING IN BINARY FEEDFORWARD NEURAL NETWORKS

Physics – Condensed Matter

Scientific paper

Details

12 pages of uuencoded, compressed postscript. Includes figures. The paper can also be obtained from http://www.nordita.dk/ or

Abstract

Statistical mechanics is used to study unrealizable generalization in two large feed-forward neural networks with binary weights and output: a perceptron and a tree committee machine. The student is trained by a larger teacher, i.e. one with more units than the student, and it is shown that this is equivalent to training on data corrupted by Gaussian noise. Each machine is considered in the high-temperature limit and in the replica-symmetric approximation, as well as for one step of replica symmetry breaking. For the perceptron a phase transition is found at low noise; however, the transition is not to optimal learning. If the noise is increased, the transition disappears. In both cases $\epsilon _{g}$ approaches optimal performance with a $(\ln\alpha /\alpha)^k$ decay for large $\alpha$. For the tree committee machine, noise in the input layer is studied as well as noise in the hidden layer. If there is no noise in the input layer there is, in the case of one step of replica symmetry breaking, a phase transition to optimal learning at some finite $\alpha$ for all levels of noise in the hidden layer. When noise is added to the input layer the generalization behavior is similar to that of the perceptron. For one step of replica symmetry breaking, in the realizable limit, the values of the spinodal points found in this paper disagree with previously reported estimates \cite{seung1}, \cite{schwarze1}. Here the value $\alpha _{sp} = 2.79$ is found for the tree committee machine and $\alpha _{sp} = 1.67$ for the perceptron.
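The teacher-student setup described above can be made concrete with a small numerical sketch that is not part of the paper: a binary-weight student perceptron trained on labels produced by a larger binary-weight teacher, so that the task is unrealizable for the student and the teacher's extra units act, from the student's point of view, like noise on the training data. The sizes, the greedy single-flip training rule, and all variable names below are illustrative assumptions (Python with NumPy), not the paper's method.

    import numpy as np

    rng = np.random.default_rng(0)

    N_student = 101   # student input units (odd, so +/-1 dot products never vanish)
    N_teacher = 151   # teacher has more units than the student (unrealizable case)
    P = 400           # training patterns; load alpha = P / N_student

    # Binary (+/-1) weights for the teacher and an initial random student.
    w_teacher = rng.choice([-1, 1], size=N_teacher)
    w_student = rng.choice([-1, 1], size=N_student)

    # Random +/-1 input patterns; the student only sees the first N_student components.
    X_full = rng.choice([-1, 1], size=(P, N_teacher))
    X_seen = X_full[:, :N_student]

    # Teacher labels depend on all N_teacher components, so the components hidden
    # from the student act like noise on its training data.
    y = np.sign(X_full @ w_teacher)

    def training_error(w):
        """Fraction of training labels the binary student classifies incorrectly."""
        return np.mean(np.sign(X_seen @ w) != y)

    # Crude zero-temperature single-flip search over the binary student weights,
    # only to make the setup concrete; the paper's treatment is analytic.
    for _ in range(20):
        for i in rng.permutation(N_student):
            before = training_error(w_student)
            w_student[i] *= -1
            if training_error(w_student) > before:
                w_student[i] *= -1   # revert the flip if it raised the training error

    # Estimate the generalization error on fresh patterns from the same teacher.
    X_test = rng.choice([-1, 1], size=(5000, N_teacher))
    y_test = np.sign(X_test @ w_teacher)
    eps_g = np.mean(np.sign(X_test[:, :N_student] @ w_student) != y_test)
    print(f"training error       : {training_error(w_student):.3f}")
    print(f"generalization error : {eps_g:.3f}")

In the paper's notation the load is $\alpha = P/N$ and the quantity estimated at the end corresponds to $\epsilon_{g}$; the results quoted in the abstract (phase transitions, the $(\ln\alpha /\alpha)^k$ decay, the spinodal points) come from the replica calculation, not from a simulation of this kind.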
