Computer Science – Information Theory
Scientific paper
2007-07-30
41 pages; submitted to IEEE Transactions on Information Theory
Universally achievable error exponents pertaining to certain families of channels (most notably, discrete memoryless channels (DMCs)) and various ensembles of random codes are studied by combining the competitive minimax approach, proposed by Feder and Merhav, with the Chernoff bound and Gallager's techniques for the analysis of error exponents. In particular, we derive a single-letter expression for the largest universally achievable fraction $\xi$ of the optimum error exponent pertaining to optimum ML decoding. Moreover, a simpler single-letter expression for a lower bound on $\xi$ is presented. To demonstrate the tightness of this lower bound, we use it to show that $\xi=1$ for the binary symmetric channel (BSC) when the random coding distribution is uniform over: (i) all codes (of a given rate), and (ii) all linear codes, in agreement with well-known results. We also show that $\xi=1$ for the uniform ensemble of systematic linear codes, and for that of time-varying convolutional codes in the bit-error-rate sense. For the latter case, we also show how the corresponding universal decoder can be efficiently implemented using a slightly modified version of the Viterbi algorithm which employs two trellises.
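As background for the decoder mentioned above, the sketch below shows ordinary (single-trellis) Viterbi decoding of a convolutional code over a BSC with the Hamming-distance metric, which the paper's two-trellis universal decoder modifies. The code, its generators (7, 5 in octal), and all helper names here are illustrative assumptions, not the paper's construction.

```python
# Minimal sketch: Viterbi (ML) decoding of a rate-1/2 convolutional code
# over a BSC, using the Hamming distance as the path metric.
# The specific code (generators 7, 5 octal, constraint length 3) is an
# illustrative choice, not taken from the paper.

G = (0b111, 0b101)  # generator polynomials

def encode(bits):
    """Encode a bit sequence, appending two zero tail bits to terminate."""
    state, out = 0, []
    for b in list(bits) + [0, 0]:
        reg = (b << 2) | state                      # shift register contents
        out += [bin(reg & g).count('1') & 1 for g in G]
        state = reg >> 1
    return out

def viterbi(received):
    """Minimize Hamming distance to the received word over the trellis."""
    n_steps = len(received) // 2
    INF = float('inf')
    metric = {0: 0}                                 # start in the all-zero state
    paths = {0: []}
    for t in range(n_steps):
        r = received[2 * t:2 * t + 2]
        new_metric, new_paths = {}, {}
        for state, m in metric.items():
            for b in (0, 1):                        # try both input bits
                reg = (b << 2) | state
                out = [bin(reg & g).count('1') & 1 for g in G]
                branch = sum(o != x for o, x in zip(out, r))
                nxt = reg >> 1
                if new_metric.get(nxt, INF) > m + branch:
                    new_metric[nxt] = m + branch    # keep the survivor path
                    new_paths[nxt] = paths[state] + [b]
        metric, paths = new_metric, new_paths
    best = min(metric, key=metric.get)
    return paths[best][:-2]                         # strip the two tail bits

msg = [1, 0, 1, 1, 0]
rx = encode(msg)
rx[3] ^= 1                                          # one BSC bit flip
assert viterbi(rx) == msg                           # single error is corrected
```

Since this code has free distance 5, a single channel error is always corrected; the two-trellis modification described in the paper replaces this fixed-metric search with one suited to universal decoding.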
Yaniv Akirav
Neri Merhav