An information-theoretic derivation of min-cut based clustering
Statistics – Machine Learning
Scientific paper
2008-11-26
7 pages, 3 figures, two-column, submitted to IEEE Transactions on Pattern Analysis and Machine Intelligence
Min-cut clustering, based on minimizing one of two heuristic cost functions proposed by Shi and Malik, has spawned tremendous research, both analytic and algorithmic, in the graph partitioning and image segmentation communities over the last decade. It is, however, unclear whether these heuristics can be derived from a more general principle that facilitates generalization to new problem settings. Motivated by an existing graph partitioning framework, we derive relationships between optimizing relevance information, as defined in the Information Bottleneck method, and the regularized cut in a K-partitioned graph. For fast-mixing graphs, we show that the cost functions introduced by Shi and Malik can be well approximated as the rate of loss of predictive information about the location of random walkers on the graph. For graphs generated by a stochastic algorithm designed to model community structure, the optimal information-theoretic partition and the optimal min-cut partition are shown to coincide with high probability.
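To make the two quantities mentioned in the abstract concrete, the following is a minimal Python sketch, not the authors' code or exact derivation: it evaluates the Shi-Malik normalized-cut cost of a fixed two-way partition of a toy weighted graph and, alongside it, the one-step mutual information between the coarse-grained cluster label of a stationary random walker and the walker's next position. The toy graph, the two-community partition, and all variable names are illustrative assumptions.

```python
# Illustrative sketch only: compares the normalized-cut cost of a partition with the
# predictive information a coarse-grained random walk retains about its next step.
import numpy as np

def normalized_cut(W, labels):
    """Shi-Malik normalized cut of a binary partition of the weighted graph W."""
    A = labels == 0
    B = labels == 1
    cut = W[np.ix_(A, B)].sum()          # total weight crossing the partition
    assoc_A = W[A, :].sum()              # total weight incident to cluster A
    assoc_B = W[B, :].sum()              # total weight incident to cluster B
    return cut / assoc_A + cut / assoc_B

def predictive_information(W, labels):
    """Mutual information I(cluster of X_t ; X_{t+1}) for the stationary
    random walk on the weighted graph W (in nats)."""
    d = W.sum(axis=1)                    # node degrees
    pi = d / d.sum()                     # stationary distribution of the walk
    P = W / d[:, None]                   # transition matrix P(j | i)
    K = labels.max() + 1
    joint = np.zeros((K, W.shape[0]))    # joint P(cluster of X_t, X_{t+1})
    for i in range(W.shape[0]):
        joint[labels[i]] += pi[i] * P[i]
    pc = joint.sum(axis=1)               # marginal over clusters
    pj = joint.sum(axis=0)               # marginal over next node
    nz = joint > 0
    return (joint[nz] * np.log(joint[nz] / (pc[:, None] * pj[None, :])[nz])).sum()

# Toy graph (hypothetical): two dense 4-node communities joined by one weak edge.
W = np.zeros((8, 8))
for block in (range(4), range(4, 8)):
    for i in block:
        for j in block:
            if i != j:
                W[i, j] = 1.0
W[3, 4] = W[4, 3] = 0.1
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("normalized cut:", normalized_cut(W, labels))
print("predictive information retained (nats):", predictive_information(W, labels))
```

On this toy graph the community-aligned partition gives a small normalized-cut cost while retaining nearly all of the one-step predictive information about the walker, which is the qualitative correspondence the abstract describes; the paper itself establishes the relationship analytically for fast-mixing graphs.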
Anil Raj
Chris H. Wiggins