Mutual Information and Minimum Mean-square Error in Gaussian Channels

Computer Science – Information Theory

Scientific paper

This paper deals with arbitrarily distributed finite-power input signals observed through an additive Gaussian noise channel. It shows a new formula that connects the input-output mutual information and the minimum mean-square error (MMSE) achievable by optimal estimation of the input given the output. That is, the derivative of the mutual information (nats) with respect to the signal-to-noise ratio (SNR) is equal to half the MMSE, regardless of the input statistics. This relationship holds for both scalar and vector signals, as well as for discrete-time and continuous-time noncausal MMSE estimation. This fundamental information-theoretic result has an unexpected consequence in continuous-time nonlinear estimation: For any input signal with finite power, the causal filtering MMSE achieved at SNR is equal to the average value of the noncausal smoothing MMSE achieved with a channel whose signal-to-noise ratio is chosen uniformly distributed between 0 and SNR.
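
In symbols (our notation, following the abstract's statement; here I(snr) denotes the input-output mutual information in nats, mmse(snr) the noncausal MMSE, and cmmse(snr) the causal filtering MMSE), the two results can be written as the following LaTeX sketch:

% I-MMSE relation: the derivative of the mutual information (in nats)
% with respect to the SNR equals half the MMSE, for any input distribution.
\[
  \frac{\mathrm{d}}{\mathrm{d}\,\mathrm{snr}}\, I(\mathrm{snr}) \;=\; \frac{1}{2}\,\mathrm{mmse}(\mathrm{snr}) .
\]

% Continuous-time consequence: the causal (filtering) MMSE at a given SNR
% equals the noncausal (smoothing) MMSE averaged over an SNR drawn
% uniformly from [0, snr].
\[
  \mathrm{cmmse}(\mathrm{snr}) \;=\; \frac{1}{\mathrm{snr}} \int_{0}^{\mathrm{snr}} \mathrm{mmse}(\gamma)\,\mathrm{d}\gamma .
\]

As a consistency check, for a standard Gaussian scalar input one has I(snr) = (1/2) ln(1 + snr) and mmse(snr) = 1/(1 + snr), and differentiating the former indeed gives half the latter.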
