Uniform convergence of exact large deviations for renewal reward processes
Mathematics – Probability
Scientific paper
2007-07-31
Annals of Applied Probability 2007, Vol. 17, No. 3, 1019-1048
Published at http://dx.doi.org/10.1214/105051607000000023 in the Annals of Applied Probability (http://www.imstat.org/aap/) by the Institute of Mathematical Statistics (http://www.imstat.org/)
10.1214/105051607000000023
Let (X_n,Y_n) be i.i.d. random vectors. Let W(x) be the partial sum of Y_n just before that of X_n exceeds x>0. Motivated by stochastic models for neural activity, uniform convergence of the form $\sup_{c\in I}|a(c,x)\operatorname{Pr}\{W(x)\ge cx\}-1|=o(1)$, $x\to\infty$, is established for probabilities of large deviations, with a(c,x) a deterministic function and I an open interval. To obtain this uniform exact large deviations principle (LDP), we first establish the exponentially fast uniform convergence of a family of renewal measures and then apply it to appropriately tilted distributions of X_n and the moment generating function of W(x). The uniform exact LDP is obtained for cases where X_n has a subcomponent with a smooth density and Y_n is not a linear transform of X_n. An extension is also made to the partial sum at the first exceedance time.
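To make the objects in the abstract concrete, the following is a minimal Monte Carlo sketch (not part of the paper) of the renewal reward process W(x) and the tail probability Pr{W(x) >= cx}, whose exact asymptotics a(c,x)^{-1} the paper establishes uniformly over c in I. The distributions chosen for X_n and Y_n are hypothetical, picked only so that X_n has a smooth density and Y_n is not a linear transform of X_n, in line with the assumptions stated above.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_W(x, n_max=100_000):
    """One realization of W(x): the partial sum of Y_n accumulated just
    before the partial sum of X_n exceeds the level x.
    Hypothetical distributions (not from the paper): X_n ~ Exp(1) has a
    smooth density, and Y_n = X_n + Gaussian noise is not a linear
    transform of X_n."""
    s_x, s_y = 0.0, 0.0
    for _ in range(n_max):
        x_n = rng.exponential(1.0)
        y_n = x_n + rng.normal(0.0, 1.0)
        if s_x + x_n > x:      # the next X-step would push the X-sum past x
            return s_y         # reward sum just before the exceedance
        s_x += x_n
        s_y += y_n
    raise RuntimeError("level x not crossed within n_max steps")

def tail_prob(x, c, n_samples=20_000):
    """Crude Monte Carlo estimate of the large-deviation probability Pr{W(x) >= c*x}."""
    hits = sum(simulate_W(x) >= c * x for _ in range(n_samples))
    return hits / n_samples

if __name__ == "__main__":
    # With E[X_n] = E[Y_n] = 1, W(x)/x concentrates near 1 as x grows, so
    # c > 1 probes the large-deviation regime where Pr{W(x) >= c*x} ~ 1/a(c, x).
    for c in (1.1, 1.3, 1.5):
        print(f"c = {c}: Pr{{W(30) >= {c} * 30}} ~= {tail_prob(30.0, c):.4f}")
```

For values of c well above the mean reward rate, plain Monte Carlo estimates like this one become unreliable because the target probabilities are exponentially small, which is precisely why the exact asymptotics a(c,x), obtained via tilted distributions of X_n, are of practical interest.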