Computer Science – Artificial Intelligence
Scientific paper
2012-02-14

Measuring the Hardness of Stochastic Sampling on Bayesian Networks with Deterministic Causalities: the k-Test
Approximate Bayesian inference is NP-hard. Dagum and Luby defined the Local Variance Bound (LVB) to measure the approximation hardness of Bayesian inference on Bayesian networks, under the assumption that the networks model strictly positive joint probability distributions, i.e. that zero probabilities are not permitted. This paper introduces the k-test to measure the approximation hardness of inference on Bayesian networks with deterministic causalities in the probability distribution, i.e. when zero conditional probabilities are permitted. Approximation by stochastic sampling is a widely used inference method that is known to suffer from inefficiencies due to sample rejection. The k-test predicts whether the rejection rate of stochastic sampling on a Bayesian network will be low, modest, or high, or whether sampling is intractable.
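The abstract does not spell out the k-test itself, but the rejection problem it quantifies is easy to demonstrate. The following sketch (a hypothetical two-node network A -> B and helper names invented for illustration, not taken from the paper) performs forward "logic" sampling where B's conditional table is fully deterministic; with evidence on B, most samples disagree with the evidence and are rejected:

```python
# A minimal sketch, not the paper's k-test: forward (logic) sampling on a
# hypothetical Bayesian network A -> B with deterministic conditional
# probabilities, estimating the rejection rate against evidence B = true.
import random

def sample_network():
    """Draw one joint sample (a, b) by sampling A, then B given A."""
    a = random.random() < 0.01            # P(A=true) = 0.01
    if a:
        b = True                          # deterministic: P(B=true | A=true) = 1
    else:
        b = False                         # deterministic: P(B=true | A=false) = 0
    return a, b

def rejection_rate(evidence_b=True, n=100_000):
    """Estimate the fraction of forward samples rejected by the evidence."""
    rejected = 0
    for _ in range(n):
        _, b = sample_network()
        if b != evidence_b:
            rejected += 1                 # sample contradicts evidence: reject
    return rejected / n

if __name__ == "__main__":
    # Here P(B=true) = 0.01, so roughly 99% of samples are rejected,
    # illustrating how deterministic causalities can make sampling intractable.
    print(f"estimated rejection rate: {rejection_rate():.3f}")
```

Because B is a deterministic function of A, the marginal probability of the evidence equals P(A=true) = 0.01, so about 99 accepted-sample attempts in 100 are wasted; as the evidence probability approaches zero, sampling becomes intractable, which is the regime the k-test is designed to flag.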
Robert A. van Engelen
Haohai Yu