Minimum relative entropy, Bayes and Kapur

Mathematics – Probability

Scientific paper


Details

Inverse Theory, Probability Distributions


The focus of this paper is to illustrate important philosophies on inversion and the similarities and differences between Bayesian and minimum relative entropy (MRE) methods. The development of each approach is illustrated through the general discrete linear inverse problem. MRE differs from both Bayes and classical statistical methods in that knowledge of moments is used as 'data' rather than sample values. MRE, like Bayes, presumes knowledge of a prior probability distribution and produces the posterior pdf itself. MRE attempts to produce this pdf based on the information provided by new moments. It will use moments of the prior distribution only if new data on these moments are not available. It is important to note that MRE makes a strong statement that the imposed constraints are exact and complete. In this way, MRE is maximally uncommitted with respect to unknown information. In general, since input data are known only to within a certain accuracy, it is important that any inversion method should allow for errors in the measured data. The MRE approach can accommodate such uncertainty and, in new work described here, previous results are modified to include a Gaussian prior. A variety of MRE solutions are reproduced under a number of assumed moments, including second-order central moments. Various solutions of Jacobs & van der Geest were repeated and clarified. Menke's weighted minimum length solution was shown to have a basis in information theory, and the classic least-squares estimate is shown as a solution to MRE under the conditions of more data than unknowns and where we utilize the observed data and their associated noise. An example inverse problem involving a gravity survey over a layered and faulted zone is shown. In all cases the inverse results closely match the actual density profile, at least in the upper portions of the profile.
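As a sketch of the idea (not taken from the paper itself), minimizing relative entropy to a Gaussian prior N(m0, C) subject to the exact first-moment constraint E(Gm) = d yields another Gaussian whose mean is the prior mean plus a correction along C Gᵀ; this is the information-theoretic reading of the weighted minimum-length solution mentioned above. The function name and the toy numbers below are illustrative assumptions:

```python
import numpy as np

def mre_posterior_mean(G, d, m0, C):
    """Sketch: MRE posterior mean under a Gaussian prior N(m0, C) and the
    exact first-moment constraint E(Gm) = d.  Minimizing relative entropy
    to the prior over all pdfs honouring the constraint gives a Gaussian
    with unchanged covariance and a mean shifted along C @ G.T."""
    r = d - G @ m0                          # residual of the prior mean
    lam = np.linalg.solve(G @ C @ G.T, r)   # Lagrange multipliers
    return m0 + C @ G.T @ lam

# Underdetermined toy problem: 2 data, 4 model parameters (illustrative).
rng = np.random.default_rng(0)
G = rng.standard_normal((2, 4))
m_true = np.array([1.0, -0.5, 0.3, 2.0])
d = G @ m_true
m0 = np.zeros(4)
C = np.eye(4)

m = mre_posterior_mean(G, d, m0, C)
print(np.allclose(G @ m, d))  # the moment constraint is honoured exactly
```

With C = I and m0 = 0 this reduces to the classical minimum-length estimate m = Gᵀ(GGᵀ)⁻¹d, consistent with the paper's claim that Menke's weighted minimum-length solution has an information-theoretic basis.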
The similarity to the Bayesian results is a reflection of the fact that the MRE posterior pdf, and its mean, are constrained not by d = Gm but by its first moment E(d = Gm), a weakened form of the constraints. If there is no error in the data, one should expect complete agreement between Bayes and MRE, and this is what is shown. Similar results are shown when second-moment data are available (e.g. posterior covariance equal to zero). But dissimilar results are noted when we attempt to derive a Bayesian-like result from MRE. In the various examples given in this paper, the problems look similar but are, in the final analysis, not equivalent. The methods of attack are different and so are the results, even though we have used the linear inverse problem as a common template.
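The no-error agreement between Bayes and MRE can be sketched numerically: the standard Gaussian-linear Bayesian posterior mean, m0 + C Gᵀ(G C Gᵀ + Cd)⁻¹(d − G m0), converges to the exact-constraint MRE mean as the data-noise covariance Cd shrinks to zero. All names and numbers below are illustrative assumptions, not the paper's examples:

```python
import numpy as np

def bayes_posterior_mean(G, d, m0, C, Cd):
    """Gaussian-linear Bayesian posterior mean with data covariance Cd."""
    r = d - G @ m0
    return m0 + C @ G.T @ np.linalg.solve(G @ C @ G.T + Cd, r)

def mre_exact_mean(G, d, m0, C):
    """MRE mean under the exact first-moment constraint E(Gm) = d."""
    r = d - G @ m0
    return m0 + C @ G.T @ np.linalg.solve(G @ C @ G.T, r)

rng = np.random.default_rng(1)
G = rng.standard_normal((2, 4))
d = rng.standard_normal(2)
m0 = np.zeros(4)
C = np.eye(4)

m_mre = mre_exact_mean(G, d, m0, C)
diffs = []
for sigma in (1e-2, 1e-6):
    m_b = bayes_posterior_mean(G, d, m0, C, sigma**2 * np.eye(2))
    diffs.append(np.linalg.norm(m_b - m_mre))

print(diffs[1] < diffs[0])  # the gap shrinks as the noise vanishes
```

With finite noise the Bayesian mean underfits d = Gm relative to the exact-constraint MRE mean, which is the "weakened constraint" distinction the paragraph above draws.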


Profile ID: LFWR-SCP-O-765725
