Artificial Neurons with Arbitrarily Complex Internal Structures
Computer Science – Neural and Evolutionary Computing
Scientific paper
2001-08-17
Neurocomputing, vol. 47, pp. 103-118 (2002).
22 pages, 2 figures
Artificial neurons with arbitrarily complex internal structure are introduced. The neurons can be described in terms of a set of internal variables, a set of activation functions which describe the time evolution of these variables, and a set of characteristic functions which control how the neurons interact with one another. The information capacity of attractor networks composed of these generalized neurons is shown to reach the maximum allowed bound. A simple example taken from the domain of pattern recognition demonstrates the increased computational power of these neurons. Furthermore, a specific class of generalized neurons gives rise to a simple transformation relating attractor networks of generalized neurons to standard three-layer feed-forward networks. Given this correspondence, we conjecture that the maximum information capacity of a three-layer feed-forward network is 2 bits per weight.
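The abstract describes a generalized neuron in terms of a vector of internal variables, activation functions governing the time evolution of those variables, and characteristic functions determining what other neurons see. The sketch below illustrates that structure in Python; the two-variable dynamics, the threshold characteristic function, the synchronous update rule, and the random symmetric couplings are illustrative assumptions rather than the paper's specific construction.

```python
import numpy as np


class GeneralizedNeuron:
    """Neuron with internal variables, an activation rule, and a characteristic function."""

    def __init__(self, n_internal, activation, characteristic):
        self.state = np.zeros(n_internal)      # internal variables
        self.activation = activation           # (state, input) -> next state
        self.characteristic = characteristic   # state -> output seen by other neurons

    def output(self):
        return self.characteristic(self.state)

    def update(self, external_input):
        self.state = self.activation(self.state, external_input)


class AttractorNetwork:
    """Recurrent network of generalized neurons with synchronous updates."""

    def __init__(self, neurons, weights):
        self.neurons = neurons   # list of GeneralizedNeuron
        self.weights = weights   # (N, N) coupling matrix

    def step(self):
        outputs = np.array([n.output() for n in self.neurons])
        fields = self.weights @ outputs        # net input to each neuron
        for neuron, h in zip(self.neurons, fields):
            neuron.update(h)

    def run(self, n_steps):
        for _ in range(n_steps):
            self.step()
        return np.array([n.output() for n in self.neurons])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    N = 8

    # Illustrative internal dynamics: a fast variable tracking the input
    # field and a slow trace of the fast variable.
    def activation(state, h):
        fast = np.tanh(h)
        slow = 0.9 * state[1] + 0.1 * state[0]
        return np.array([fast, slow])

    # Illustrative characteristic function: a threshold on a mix of the
    # fast and slow internal variables.
    def characteristic(state):
        return 1.0 if state[0] + 0.5 * state[1] >= 0.0 else -1.0

    neurons = [GeneralizedNeuron(2, activation, characteristic) for _ in range(N)]

    # Random symmetric couplings with zero self-coupling (Hopfield-style).
    W = rng.normal(size=(N, N))
    W = (W + W.T) / 2.0
    np.fill_diagonal(W, 0.0)

    net = AttractorNetwork(neurons, W)
    print(net.run(20))   # neuron outputs after 20 synchronous updates
```

Note that with a single internal variable and an identity characteristic function this sketch collapses to a conventional recurrent network, which is consistent with the abstract's framing of these neurons as a generalization of the standard model.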