Learning From An Optimization Viewpoint

Computer Science – Learning

Scientific paper


Details

Thesis supervisor: Nati Srebro. Thesis committee: David McAllester, Arkadi Nemirovski, Alexander Razborov, Nati Srebro.

In this dissertation we study statistical and online learning problems from an optimization viewpoint. The dissertation is divided into two parts.

I. We first consider the question of learnability for statistical learning problems in the general learning setting. Learnability is well studied and fully characterized for binary classification and for real-valued supervised learning problems using the theory of uniform convergence. However, we show that for the general learning setting, uniform convergence theory fails to characterize learnability. To fill this void, we use stability of learning algorithms to fully characterize statistical learnability in the general setting. Next we consider the problem of online learning. Unlike in the statistical learning framework, there is a dearth of generic tools that can be used to establish learnability and rates for online learning problems. We provide online analogs of classical tools from statistical learning theory, such as Rademacher complexity and covering numbers, and further use these tools to fully characterize learnability for online supervised learning problems.

II. In the second part, for general classes of convex learning problems, we provide appropriate mirror descent (MD) updates for online and statistical learning of these problems. We further show that MD is near optimal for online convex learning and, in most cases, also near optimal for statistical convex learning. We then consider the problem of convex optimization and show that oracle complexity can be lower bounded by the so-called fat-shattering dimension of the associated linear class, thus establishing a strong connection between offline convex optimization problems and statistical learning problems. We also show that for a large class of high-dimensional optimization problems, MD is in fact near optimal even for convex optimization.
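The mirror descent updates mentioned in the abstract can be illustrated with a minimal sketch. The example below is not the dissertation's own formulation; it assumes one common special case: online mirror descent over the probability simplex with a negative-entropy mirror map, which yields the familiar exponentiated-gradient (multiplicative-weights) update. The step size `eta` and the linear-loss gradients are hypothetical inputs chosen for illustration.

```python
import numpy as np

def mirror_descent_simplex(grads, eta=0.1):
    """Sketch of online mirror descent on the probability simplex.

    With the negative-entropy regularizer, the MD step reduces to an
    exponentiated-gradient update followed by renormalization (the
    Bregman projection back onto the simplex).
    """
    d = len(grads[0])
    w = np.full(d, 1.0 / d)  # uniform start: minimizer of the entropy map
    iterates = []
    for g in grads:
        iterates.append(w.copy())
        w = w * np.exp(-eta * np.asarray(g))  # mirror (dual-space gradient) step
        w /= w.sum()                          # project back onto the simplex
    return iterates

# Example: two rounds of linear losses in R^3
its = mirror_descent_simplex([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]], eta=0.5)
```

After the first round, the weight on the coordinate that incurred loss shrinks relative to the others, which is the qualitative behavior the regret analysis of MD relies on.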
