Gradient Networks
Physics – Condensed Matter – Disordered Systems and Neural Networks
Scientific paper
2004-08-12
20 pages, 8 postscript figures included, submitted for publication
We define gradient networks as directed graphs formed by local gradients of a scalar field distributed on the nodes of a substrate network G. We derive an exact expression for the in-degree distribution of the gradient network when the substrate is a binomial (Erdos-Renyi) random graph, G(N,p). Using this expression we show that the in-degree distribution R(l) of gradient graphs on G(N,p) obeys the power law R(l) ~ 1/l for arbitrary i.i.d. random scalar fields. We then relate gradient graphs to the congestion tendency of network flows and show that while random graphs become maximally congested in the limit of large network size, scale-free networks do not, making them fairly efficient substrates for transport. Combining this with other constraints, such as uniform edge cost, we obtain a plausible argument, in the form of a selection principle, for why a number of spontaneously evolved massive networks are scale-free. This paper also presents detailed derivations of the results recently reported in Nature, vol. 428, p. 716 (2004).
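As a concrete illustration of the construction described in the abstract, the following Python sketch (not taken from the paper; it assumes the networkx library, and the function name build_gradient_network is purely illustrative) builds the gradient network of an i.i.d. uniform scalar field on a G(N,p) substrate and tallies the empirical in-degree distribution R(l).

import random
from collections import Counter

import networkx as nx


def build_gradient_network(substrate, field):
    # Each node i sends its single gradient edge to the node with the
    # largest scalar value in its closed neighborhood (i plus its
    # neighbors); a self-loop marks a local maximum of the field.
    grad = nx.DiGraph()
    grad.add_nodes_from(substrate.nodes())
    for i in substrate.nodes():
        closed_nbhd = [i] + list(substrate.neighbors(i))
        target = max(closed_nbhd, key=lambda j: field[j])
        grad.add_edge(i, target)
    return grad


N, p = 2000, 0.05
rng = random.Random(1)
substrate = nx.gnp_random_graph(N, p, seed=1)          # binomial G(N, p)
field = {i: rng.random() for i in substrate.nodes()}   # i.i.d. scalar field

grad = build_gradient_network(substrate, field)
counts = Counter(d for _, d in grad.in_degree())
for l in sorted(counts):
    print(l, counts[l] / N)   # empirical R(l); compare with the 1/l decay

On a log-log plot the empirical R(l) should decay roughly as 1/l over an intermediate range of l, consistent with the power law stated in the abstract; deviations at the largest l are to be expected at finite N.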
Kevin E. Bassler
Nicolas W. Hengartner
Gyorgy Korniss
Balazs Kozma
Zoltan Toroczkai