An Efficient Primal-Dual Prox Method for Non-Smooth Optimization

Computer Science – Learning

We consider non-smooth optimization problems in machine learning in which both the loss function and the regularizer are non-smooth. Previous studies on efficient empirical loss minimization assume either a smooth loss function or a strongly convex regularizer, making them unsuitable for this setting. We develop an efficient method for a family of non-smooth optimization problems in which the dual form of the loss function is bilinear in the primal and dual variables. We cast such a non-smooth optimization problem as a minimax optimization problem and develop a primal-dual prox method that solves it at a rate of $O(1/T)$, significantly faster than the $O(1/\sqrt{T})$ rate of a standard subgradient method. Our empirical study verifies the efficiency of the proposed method for non-smooth optimization by comparing it to state-of-the-art first-order methods.
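To make the minimax reformulation concrete, the sketch below shows one generic primal-dual prox iteration (in the Chambolle-Pock style, not necessarily the paper's exact updates) on a representative member of this problem family: an $\ell_1$-regularized hinge-loss objective. The hinge loss admits exactly the bilinear dual form the abstract describes, $\max(0, 1 - y_i \langle x_i, w\rangle) = \max_{\alpha_i \in [0,1]} \alpha_i (1 - y_i \langle x_i, w\rangle)$, so both non-smooth terms are handled by cheap prox/projection steps. All names (`pdprox_sketch`, the step-size choice, the iteration count) are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    # Prox of tau * ||.||_1: elementwise soft-thresholding.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def pdprox_sketch(X, y, lam, T=1000):
    """Illustrative primal-dual prox iteration for

        min_w  (1/n) sum_i max(0, 1 - y_i <x_i, w>)  +  lam * ||w||_1,

    rewritten as the saddle-point (minimax) problem

        min_w max_{alpha in [0,1]^n}  <alpha, 1/n> + <alpha, A w> + lam * ||w||_1,

    with bilinear coupling A = -diag(y) X / n. Both non-smooth terms are
    handled by prox/projection steps only; the averaged primal iterate is
    returned, since averaging is what yields the O(1/T) ergodic rate.
    """
    n, d = X.shape
    A = -(y[:, None] * X) / n
    # Step sizes must satisfy tau * sigma * ||A||_2^2 <= 1.
    L = np.linalg.norm(A, 2)
    tau = sigma = 1.0 / L
    w = np.zeros(d)
    w_bar = w.copy()
    alpha = np.zeros(n)
    w_avg = np.zeros(d)
    for _ in range(T):
        # Dual step: ascent on the bilinear + linear terms,
        # then projection onto the box [0,1]^n.
        alpha = np.clip(alpha + sigma * (A @ w_bar + 1.0 / n), 0.0, 1.0)
        # Primal step: descent on the bilinear term,
        # then the prox of the non-smooth l1 regularizer.
        w_new = soft_threshold(w - tau * (A.T @ alpha), tau * lam)
        # Extrapolation of the primal iterate.
        w_bar = 2.0 * w_new - w
        w = w_new
        w_avg += w
    return w_avg / T
```

A toy call, assuming labels $y \in \{-1,+1\}^n$, would be `w_hat = pdprox_sketch(X, y, lam=0.01, T=2000)`. The point this illustrates is the abstract's claim: neither the hinge loss nor the $\ell_1$ norm is smooth, yet each iteration needs only a box projection and a soft-thresholding, and averaging the iterates attains an $O(1/T)$ rate on the saddle-point gap rather than the $O(1/\sqrt{T})$ rate of subgradient descent.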
