Ranking via Sinkhorn Propagation
Scientific paper
Statistics – Machine Learning
Submitted 2011-06-09
It is of increasing importance to develop learning methods for ranking. In contrast to many learning objectives, however, the ranking problem presents difficulties due to the fact that the space of permutations is not smooth. In this paper, we examine the class of rank-linear objective functions, which includes popular metrics such as precision and discounted cumulative gain. In particular, we observe that expectations of these gains are completely characterized by the marginals of the corresponding distribution over permutation matrices. Thus, the expectations of rank-linear objectives can always be described through locations in the Birkhoff polytope, i.e., doubly-stochastic matrices (DSMs). We propose a technique for learning DSM-based ranking functions using an iterative projection operator known as Sinkhorn normalization. Gradients of this operator can be computed via backpropagation, resulting in an algorithm we call Sinkhorn propagation, or SinkProp. This approach can be combined with a wide range of gradient-based approaches to rank learning. We demonstrate the utility of SinkProp on several information retrieval data sets.
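The computation sketched in the abstract can be illustrated with a short example. The code below is a minimal sketch, not the authors' implementation: it assumes JAX as the autodiff framework, builds an approximately doubly-stochastic matrix by alternating row and column normalizations in log space, and backpropagates an expected rank-linear gain through those iterations, which is the essence of the SinkProp idea. The function names (sinkhorn, expected_gain), the iteration count, and the toy DCG-style gain matrix are illustrative assumptions.

# A minimal sketch, assuming JAX; not the authors' code. Names and the
# toy gain matrix below are illustrative, not from the paper.
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp

def sinkhorn(log_scores, n_iters=20):
    # Alternate row and column normalization in log space; after enough
    # iterations, exp(log_p) is approximately doubly stochastic.
    log_p = log_scores
    for _ in range(n_iters):
        log_p = log_p - logsumexp(log_p, axis=1, keepdims=True)  # rows sum to 1
        log_p = log_p - logsumexp(log_p, axis=0, keepdims=True)  # columns sum to 1
    return jnp.exp(log_p)

def expected_gain(log_scores, gain):
    # For a rank-linear objective, the expectation under a distribution over
    # permutations depends only on the marginals; the DSM produced by
    # Sinkhorn normalization plays that role here.
    dsm = sinkhorn(log_scores)
    return jnp.sum(dsm * gain)

# Toy DCG-style gain: gain[i, j] = relevance_i / log2(position_j + 1).
relevance = jnp.arange(5.0, 0.0, -1.0)
discount = 1.0 / jnp.log2(jnp.arange(2.0, 7.0))
gain = jnp.outer(relevance, discount)

# Reverse-mode autodiff backpropagates through the normalization iterations,
# yielding the gradient of the expected gain with respect to the raw scores.
log_scores = jax.random.normal(jax.random.PRNGKey(0), (5, 5))
grad = jax.grad(expected_gain)(log_scores, gain)
print(grad.shape)  # (5, 5)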
Ryan Prescott Adams
Richard S. Zemel