Competitive on-line learning with a convex loss function
Scientific paper, Computer Science – Learning
2005-06-11, 26 pages
We consider the problem of sequential decision making under uncertainty in which the loss caused by a decision depends on the binary observation that follows it. In competitive on-line learning, the goal is to design decision algorithms that are almost as good as the best decision rules in a wide benchmark class, without making any assumptions about the way the observations are generated. However, the standard algorithms in this area can only deal with finite-dimensional (often countable) benchmark classes. In this paper we give similar results for decision rules ranging over an arbitrary reproducing kernel Hilbert space. For example, it is shown that for a wide class of loss functions (including the standard square, absolute, and log loss functions) the average loss of the master algorithm, over the first $N$ observations, does not exceed the average loss of the best decision rule with a bounded norm plus $O(N^{-1/2})$. Our proof technique is very different from the standard ones and is based on recent results about defensive forecasting. Given the probabilities produced by a defensive forecasting algorithm, which are known to be well calibrated and to have good resolution in the long run, we use the expected loss minimization principle to find a suitable decision.
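The final step described above, turning a calibrated probability forecast into a decision by expected loss minimization, can be sketched as follows. This is a minimal illustration, not the paper's master algorithm: the discretized decision grid and the choice of square loss are assumptions made only for the example. For square loss $\lambda(d,y)=(d-y)^2$, the expected-loss minimizer is the forecast probability itself.

```python
def expected_loss_minimizer(p, loss, decisions):
    """Return the decision d minimizing p*loss(d,1) + (1-p)*loss(d,0),
    i.e. the expected loss when the binary outcome is 1 with probability p."""
    return min(decisions, key=lambda d: p * loss(d, 1) + (1 - p) * loss(d, 0))

# Square loss, one of the loss functions covered by the paper's bound.
square_loss = lambda d, y: (d - y) ** 2

# Hypothetical discretized decision space [0, 1] (an assumption for the sketch);
# a finer grid approaches the exact minimizer d = p.
grid = [i / 100 for i in range(101)]
best = expected_loss_minimizer(0.7, square_loss, grid)
# For square loss the minimizer coincides with the forecast: best == 0.7
```

The same helper applies to other losses in the class: substituting, say, log loss changes the minimizing decision, but the decision is always read off the forecast probability alone.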