Learning representations by back-propagating errors

Computer Science – Learning

Scientific paper

Details

We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal 'hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
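
As an illustration only (not part of the paper), the sketch below shows the kind of procedure the abstract describes: a small network with one layer of hidden sigmoid units, trained by gradient descent on half the summed squared difference between the actual and desired output vectors. The XOR task, layer sizes, learning rate, and random seed are illustrative assumptions, not details taken from the paper.

```python
# Minimal back-propagation sketch (illustrative; not the paper's code).
# One hidden layer of sigmoid units, squared-error measure, plain gradient
# descent. XOR is used because a perceptron with no hidden units cannot
# learn it, while a net with hidden units can.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
Y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden, n_out = 2, 3, 1               # sizes are an arbitrary choice
W1 = rng.normal(0.0, 1.0, (n_in, n_hidden))   # input -> hidden weights
b1 = np.zeros(n_hidden)
W2 = rng.normal(0.0, 1.0, (n_hidden, n_out))  # hidden -> output weights
b2 = np.zeros(n_out)
lr = 0.5                                      # learning rate (assumed)

for _ in range(20000):
    # Forward pass: compute the actual output vector.
    h = sigmoid(X @ W1 + b1)
    y = sigmoid(h @ W2 + b2)

    # Error measure: E = 0.5 * sum((y - Y)**2).
    # Backward pass: propagate dE back through the net (chain rule).
    dy = (y - Y) * y * (1.0 - y)          # dE / d(net input of output units)
    dh = (dy @ W2.T) * h * (1.0 - h)      # dE / d(net input of hidden units)

    # Repeatedly adjust every connection weight down its error gradient.
    W2 -= lr * (h.T @ dy)
    b2 -= lr * dy.sum(axis=0)
    W1 -= lr * (X.T @ dh)
    b1 -= lr * dh.sum(axis=0)

print(np.round(y, 2))  # typically close to the desired [0, 1, 1, 0]
```

After training, the hidden units play the role of the internal 'hidden' units the abstract mentions: their learned incoming weights act as features of the input (here, combinations of the two bits) that make the task solvable at the output layer.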
