Inference with Discriminative Posterior
Statistics – Machine Learning
Scientific paper
2008-07-22
28 pages, 3 figures, submitted to a journal
We study Bayesian discriminative inference given a model family $p(c,\x,\theta)$ that is assumed to contain all our prior information but is still known to be incorrect. This setting falls between "standard" Bayesian generative modeling and Bayesian regression, where the marginal $p(\x,\theta)$ is known to be uninformative about $p(c|\x,\theta)$. We give an axiomatic proof that the discriminative posterior is consistent for conditional inference; using the discriminative posterior is standard practice in classical Bayesian regression, but we show that it is theoretically justified for model families of joint densities as well. A practical benefit over Bayesian regression is that the standard methods for handling missing values in generative modeling extend to discriminative inference, which is useful when the amount of data is small. Compared to standard generative modeling, the discriminative posterior yields better conditional inference when the model family is incorrect; if the model family also contains the true model, the discriminative posterior gives the same result as standard Bayesian generative modeling. Practical computation is carried out with Markov chain Monte Carlo.
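The abstract's idea can be illustrated with a short sketch: start from a joint model family $p(c,\x|\theta)$, but weight the posterior over $\theta$ by the *conditional* likelihood $p(c|\x,\theta)$ rather than the joint one, and sample with random-walk Metropolis. Everything below (the toy two-Gaussian model family, the flat prior, the step size) is an illustrative assumption, not the authors' experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy joint model family (illustrative assumption): two classes c in {0, 1}
# with equal priors and unit-variance Gaussian components at +theta / -theta,
# so p(c, x | theta) = 0.5 * N(x; +/-theta, 1).
def log_joint(c, x, theta):
    mu = np.where(c == 1, theta, -theta)
    return np.log(0.5) - 0.5 * (x - mu) ** 2

def log_conditional(c, x, theta):
    # p(c | x, theta) = p(c, x | theta) / sum_{c'} p(c', x | theta)
    num = log_joint(c, x, theta)
    denom = np.logaddexp(log_joint(np.ones_like(c), x, theta),
                         log_joint(np.zeros_like(c), x, theta))
    return num - denom

# Synthetic data from class means +/-1 (stand-in for real observations).
c = rng.integers(0, 2, size=200)
x = rng.normal(np.where(c == 1, 1.0, -1.0), 1.0)

def discriminative_mcmc(c, x, n_samples=2000, step=0.2):
    """Random-walk Metropolis over theta, targeting the discriminative
    posterior: flat prior times the conditional likelihood prod_i p(c_i | x_i, theta)."""
    theta = 0.0
    ll = log_conditional(c, x, theta).sum()
    samples = []
    for _ in range(n_samples):
        prop = theta + step * rng.normal()
        ll_prop = log_conditional(c, x, prop).sum()
        # Accept with probability min(1, exp(ll_prop - ll)); flat prior cancels.
        if np.log(rng.random()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        samples.append(theta)
    return np.array(samples)

samples = discriminative_mcmc(c, x)
print(samples[1000:].mean())  # posterior mean of theta after burn-in
```

Replacing `log_conditional` with `log_joint` in the acceptance step would recover standard Bayesian generative modeling; the two coincide when the family contains the true model, which is the comparison the abstract describes.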
Samuel Kaski
Kai Puolamäki
Jarkko Salojärvi
Eerika Savia