Causal Conclusions that Flip Repeatedly and Their Justification
Scientific paper
Computer Science – Learning
2012-03-15
Appears in Proceedings of the Twenty-Sixth Conference on Uncertainty in Artificial Intelligence (UAI2010)
Over the past two decades, several consistent procedures have been designed to infer causal conclusions from observational data. We prove that if the true causal network might be an arbitrary, linear Gaussian network or a discrete Bayes network, then every unambiguous causal conclusion produced by a consistent method from non-experimental data is subject to reversal, any finite number of times, as the sample size increases. That result, called the causal flipping theorem, extends prior results to the effect that causal discovery cannot be reliable at any given sample size. We argue that since repeated flipping of causal conclusions is unavoidable in principle for consistent methods, the best possible discovery methods are consistent methods that retract their earlier conclusions no more than necessary. A series of simulations of various methods across a wide range of sample sizes illustrates concretely both the theorem and the principle of comparing methods in terms of retractions.
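To make the notion of a retraction concrete, the following minimal sketch (not the authors' code) simulates one way a discovery rule's conclusion can reverse as the sample grows: it draws increasing prefixes of data from a linear Gaussian model with a weak edge, applies a fixed-level correlation test as a stand-in for a consistent discovery rule, and counts how often the announced conclusion flips. The coefficient, significance level, and grid of sample sizes are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from math import erfc, sqrt

rng = np.random.default_rng(0)

def sample(n, beta=0.08):
    """Linear Gaussian model X -> Y with a weak direct effect beta (assumed value)."""
    x = rng.normal(size=n)
    y = beta * x + rng.normal(size=n)
    return x, y

def edge_present(x, y, alpha=0.05):
    """Fisher z-test for zero correlation; announce an edge iff the test rejects."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    z = 0.5 * np.log((1 + r) / (1 - r)) * sqrt(n - 3)
    p = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return p < alpha

# Announce a conclusion at each sample size on a growing prefix of the same data,
# and count retractions, i.e. points at which the conclusion reverses.
x_all, y_all = sample(20000)
sizes = np.unique(np.geomspace(20, 20000, 40).astype(int))
conclusions, retractions = [], 0
for n in sizes:
    c = edge_present(x_all[:n], y_all[:n])
    if conclusions and c != conclusions[-1]:
        retractions += 1
    conclusions.append(c)

print("conclusions over growing n:", ["edge" if c else "no edge" for c in conclusions])
print("number of retractions:", retractions)
```

With a weak true edge, the test typically fails to reject at small samples and rejects at large ones, so at least one flip occurs; borderline sample sizes can produce several. Comparing rules by the number of such retractions is the kind of evaluation the abstract's simulations perform.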
Kevin T. Kelly
Conor Mayo-Wilson