Robust Metric Learning by Smooth Optimization
Scientific paper – Computer Science – Learning
2012-03-15
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI 2010)
Most existing distance metric learning methods assume perfect side information, usually given in the form of pairwise or triplet constraints. In many real-world applications, however, the constraints are derived from side information such as users' implicit feedback and citations among articles. As a result, these constraints are usually noisy and contain many mistakes. In this work, we aim to learn a distance metric from noisy constraints by robust optimization in a worst-case scenario, which we refer to as robust metric learning. We first formulate the learning task as a combinatorial optimization problem and show that it can be elegantly transformed into a convex programming problem. We present an efficient learning algorithm based on smooth optimization [7]; it has a worst-case convergence rate of $O(1/\sqrt{\varepsilon})$ for smooth optimization problems, where $\varepsilon$ is the desired error of the approximate solution. Finally, our empirical study with UCI data sets demonstrates the effectiveness of the proposed method in comparison to state-of-the-art methods.
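The abstract describes the approach only at a high level: distances are parameterized by a positive semidefinite Mahalanobis matrix, the triplet constraints derived from side information may be noisy, and a smoothed objective is minimized by a smooth (Nesterov-type) optimization method. The sketch below is a generic illustration under those assumptions, not the paper's worst-case robust formulation: it uses a softplus-smoothed triplet hinge loss with accelerated projected gradient, and all function names and hyper-parameters (margin, beta, lr, n_iter) are hypothetical.

```python
# A minimal, illustrative sketch of metric learning from (possibly noisy) triplet
# constraints, NOT the paper's exact worst-case robust formulation.  It uses a
# softplus-smoothed triplet hinge loss and Nesterov-style accelerated projected
# gradient; all names and hyper-parameters here are assumptions for illustration.
import numpy as np


def sqdist(X, M, i, j):
    """Squared Mahalanobis distances (x_i - x_j)^T M (x_i - x_j) for index arrays i, j."""
    d = X[i] - X[j]
    return np.einsum("nd,de,ne->n", d, M, d)


def smoothed_loss_grad(X, M, triplets, margin=1.0, beta=10.0):
    """Softplus-smoothed hinge over triplets (a, p, n): want d(a, n) - d(a, p) >= margin."""
    a, p, n = triplets.T
    viol = margin + sqdist(X, M, a, p) - sqdist(X, M, a, n)
    loss = np.mean(np.logaddexp(0.0, beta * viol)) / beta       # smooth upper bound on hinge
    w = np.exp(-np.logaddexp(0.0, -beta * viol))                # stable sigmoid = softplus'
    dap, dan = X[a] - X[p], X[a] - X[n]
    grad = (np.einsum("n,nd,ne->de", w, dap, dap)
            - np.einsum("n,nd,ne->de", w, dan, dan)) / len(triplets)
    return loss, grad


def project_psd(M):
    """Project a symmetric matrix onto the PSD cone by clipping negative eigenvalues."""
    M = (M + M.T) / 2.0
    vals, vecs = np.linalg.eigh(M)
    return (vecs * np.maximum(vals, 0.0)) @ vecs.T


def learn_metric(X, triplets, lr=0.05, n_iter=200):
    """Accelerated projected-gradient (FISTA-style) descent on the smoothed triplet loss."""
    M = np.eye(X.shape[1])
    Y, t = M.copy(), 1.0
    for _ in range(n_iter):
        _, grad = smoothed_loss_grad(X, Y, triplets)
        M_next = project_psd(Y - lr * grad)
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        Y = M_next + ((t - 1.0) / t_next) * (M_next - M)        # Nesterov momentum step
        M, t = M_next, t_next
    return M


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(100, 5))
    y = (X[:, 0] > 0).astype(int)
    anchors = rng.integers(0, 100, size=300)
    same = np.array([rng.choice(np.where(y == y[a])[0]) for a in anchors])
    diff = np.array([rng.choice(np.where(y != y[a])[0]) for a in anchors])
    flip = rng.random(300) < 0.10                               # corrupt 10% of the triplets
    same[flip], diff[flip] = diff[flip], same[flip].copy()
    triplets = np.column_stack([anchors, same, diff])
    M = learn_metric(X, triplets)
    print("eigenvalues of learned metric:", np.round(np.linalg.eigvalsh(M), 3))
```

The accelerated (FISTA-style) step is what gives Nesterov-type smooth optimization its $O(1/\sqrt{\varepsilon})$ worst-case rate for smooth problems; the paper's actual robust objective and smoothing construction differ from this generic sketch.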
Authors: Kaizhu Huang, Rong Jin, Cheng-Lin Liu, Zenglin Xu