
Tail asymptotics for the area under the excursion of a random walk with heavy-tailed increments

Published online by Cambridge University Press:  25 February 2021

Denis Denisov*
Affiliation:
University of Manchester
Elena Perfilev*
Affiliation:
Universität Augsburg
Vitali Wachtel*
Affiliation:
Universität Augsburg
*Postal address: Department of Mathematics, University of Manchester, Oxford Road, Manchester M13 9PL, UK. Email address: denis.denisov@manchester.ac.uk
**Postal address: Institut für Mathematik, Universität Augsburg, 86135 Augsburg, Germany.

Abstract

We study the tail behaviour of the distribution of the area under the positive excursion of a random walk which has negative drift and heavy-tailed increments. We determine the asymptotics for tail probabilities for the area.

Type
Research Papers
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction and statement of results

Let $\{S_n;\ n\geq 1\}$ be a random walk with independent and identically distributed increments $\lbrace X_k;\ k\geq 1\rbrace$ . We shall assume that the increments have negative expected value, $\mathbb{E} X_1=-a$ . Let $\overline F(x)=\mathbb{P} (X_1>x)$ be the tail distribution function of $X_1$ . Let $\tau\coloneqq \min\lbrace n\geq1\,{:}\, S_n\leq 0 \rbrace$ be the first time the random walk exits the positive half-line. We consider the area under the random walk excursion $\{S_1,S_2,\ldots,S_{\tau-1}\}$ (with the convention $S_0\coloneqq 0$):

\begin{equation*}A_\tau\coloneqq \sum_{k=0}^{\tau-1}S_k.\end{equation*}

Since $\tau$ is finite almost surely, the area $A_\tau$ is finite as well. In this note we will study the asymptotics of $\mathbb{P}(A_\tau>x)$ , as $x\to \infty$ , in the case when the distribution of the increments is heavy-tailed. This paper continues the research of [14], where the light-tailed case was considered.
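Both $\tau$ and $A_\tau$ are easy to simulate, which is useful for sanity-checking the asymptotic statements below. The following sketch is purely illustrative (it is not part of the paper's argument): the choice of Gaussian increments with mean $-a=-1$ and variance 1 is arbitrary, and the script estimates $\mathbb{E}\tau$ while checking Wald's identity $\mathbb{E} S_\tau=-a\,\mathbb{E}\tau$, which holds since $\mathbb{E}|X_1|<\infty$ and $\mathbb{E}\tau<\infty$.

```python
import random

def excursion(rng, a=1.0):
    """Run the walk S_0 = 0, S_n = X_1 + ... + X_n until tau = min{n >= 1 : S_n <= 0}.

    Returns (tau, A_tau, S_tau), where A_tau = S_0 + S_1 + ... + S_{tau-1}.
    Increments are illustrative: Gaussian with mean -a and variance 1.
    """
    s, n, area = 0.0, 0, 0.0
    while True:
        area += s                   # adds S_n for the current n < tau (S_0 contributes 0)
        s += rng.gauss(-a, 1.0)     # next increment X_{n+1}
        n += 1
        if s <= 0:
            return n, area, s

rng = random.Random(42)
samples = [excursion(rng) for _ in range(20000)]
mean_tau = sum(t for t, _, _ in samples) / len(samples)
mean_s_tau = sum(s for _, _, s in samples) / len(samples)
# Wald's identity: E[S_tau] = E[X_1] * E[tau] = -a * E[tau],
# so mean_s_tau should be close to -mean_tau here.
```

Since every summand $S_n$ with $0<n<\tau$ is positive, the simulated areas are nonnegative, and the two estimates agree up to Monte Carlo error.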

The area under the random walk excursion appears in a number of combinatorial problems, for example in investigations of the asymptotic number of random trees; see [16, 17, 18]; some further references may be found in [6]. Another application area is statistical physics; see, e.g., [8] or [3] and references therein. Applications to queuing theory, for the analysis of the load in Transmission Control Protocol networks, and to risk theory are discussed in [2].

In the light-tailed case logarithmic asymptotics for $\mathbb{P}(A_\tau>x)$ was obtained in [10], and exact local asymptotics in [14]. Heavy-tailed asymptotics for $\mathbb{P}(A_\tau>x)$ was previously studied in [2], which considered the case when the increments of the random walk have a distribution with a regularly varying tail, that is, $\overline F(x)=x^{-\alpha }L(x)$ , where L(x) is a slowly varying function. For $\alpha>1$ it was shown that

(1) \begin{equation}\mathbb{P}(A_{\tau}>x)\sim\mathbb{E}\tau\overline F(\sqrt{2ax}),\qquad x\to\infty.\end{equation}

Here, note that $\mathbb{E}[\tau]<\infty$ follows from the assumption $\mathbb{E}[X_1]=-a<0$ ; see, e.g., [11, Chapter XII.2, Theorem 2]. The asymptotics can be explained by traditional heavy-tailed one-big-jump heuristics. In order to have a huge area, the random walk should have a large jump, say y, at the very beginning of the excursion. After this jump the random walk goes down along the line $y-an$ according to the law of large numbers. Thus, the duration of the excursion should be approximately $y/a$ . As a result, the area will be of order $y^2/2a$ . Now, from the equality $x=y^2/2a$ we infer that a jump of order $\sqrt{2ax}$ is needed. Since the same strategy is valid for the maximum $M_\tau\coloneqq \max_{n<\tau}S_n$ of the first excursion, one can rewrite (1) in the following way:

\begin{equation*}\mathbb{P}(A_\tau>x)\sim\mathbb{P}(M_\tau>\sqrt{2ax}),\qquad x\to\infty.\end{equation*}
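The arithmetic behind the one-big-jump heuristic is easy to check numerically. The sketch below is a caricature rather than a simulation of the walk: it replaces the excursion after a jump of size $y$ by the deterministic law-of-large-numbers path $y, y-a, y-2a,\ldots$ and confirms that the area of this path is close to $y^2/2a$, so a jump of size $\sqrt{2ax}$ indeed produces an area of about $x$ (the values of $a$ and $x$ are arbitrary).

```python
import math

def area_after_jump(y, a):
    """Area under the deterministic LLN path y, y - a, y - 2a, ...,
    stopped once it drops to 0: a caricature of the excursion after
    one big jump of size y."""
    n_steps = math.floor(y / a)              # path stays nonnegative for about y/a steps
    return sum(y - a * n for n in range(n_steps + 1))

a, x = 2.0, 1_000_000.0
y = math.sqrt(2 * a * x)                     # jump size suggested by x = y^2 / (2a)
approx = area_after_jump(y, a)
# approx is close to x, confirming the heuristic area ~ y^2 / (2a)
```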

However, the class of regularly varying distributions does not include all subexponential distributions; in particular, it excludes the log-normal distribution and the Weibull distribution with parameter $\beta<1$ . The asymptotics for these remaining cases was raised as an open problem in [13, Conjecture 2.2] for a closely related workload process. We will reformulate this conjecture as

(2) \begin{equation}\mathbb{P}(A_\tau \gt x)\sim \mathbb{P}\left(\tau \gt \sqrt{\frac{2x}{a}}\right), \qquad x\to\infty,\end{equation}

when $F\in\mathcal S$ and $\mathcal S$ is a subclass of subexponential distributions. Note that, using the asymptotics

(3) \begin{equation}\mathbb{P}(\tau \gt x)\sim \mathbb{E}\tau\overline F(a x)\end{equation}

from [7] for Weibull distributions with parameter $\beta<1/2$ , we can see that in this case the asymptotics in (2) is equivalent to (1). In this note we partially settle (2). It is not difficult to show that the same arguments hold for the workload process and yield the same asymptotics for its area, thus settling the original [13, Conjecture 2.2]. In passing, we note that it is doubtful that (2) holds in full. The reason is that, for both $\tau$ and $A_\tau$ , the asymptotics (3) and (2) are no longer valid for Weibull distributions with parameter $\beta>1/2$ . The analysis for $\beta>1/2$ involves a more complicated optimisation procedure leading to a Cramér series, and it is unlikely that the answers will be the same for the area and for the exit time.

1.1. Main results

We will now present the results. We will start with the regularly varying case. In this case the connection between the tails of $A_\tau$ and $M_\tau$ is strong, and we will be able to use the asymptotics for $\mathbb{P}(M_\tau \gt x)$ found in [12] (see also a short proof in [4]) to find the asymptotics for $\mathbb{P}(A_\tau \gt x)$ .

Proposition 1. The following two statements hold.

  1. (a) If $\overline{F}(x)\coloneqq \mathbb{P}(X_1>x)=x^{-\alpha}L(x)$ with some $\alpha\ge1$ and $\mathbb{E}|X_1|<\infty$ then, uniformly in $y\in[\varepsilon\sqrt{x},\sqrt{2ax}]$ , $\varepsilon\in(0,1),$

    (4) \begin{equation}\mathbb{P}(A_\tau \gt x,M_\tau \gt y)\sim \mathbb{E}\tau\overline{F}(\sqrt{2ax}).\end{equation}
  2. (b) If $\overline{F}(x)\sim x^{-\varkappa} {\rm e}^{-g(x)}$ , where g(x) is a monotone continuously differentiable function satisfying $\frac{g(x)}{x^\beta}\downarrow$ for some $\beta\in(0,1/2)$ , and $\mathbb{E}|X_1|^\varkappa<\infty$ for some $\varkappa>1/(1-\beta)$ , then (4) holds uniformly in $y\in\left[\sqrt{2ax}-\frac{R\sqrt{2ax}}{g(\sqrt{2ax})},\sqrt{2ax}\right]$ , $R>0$ .

This statement obviously implies the following lower bound for the tail of $A_\tau$ :

(5) \begin{equation}\liminf_{x\to\infty}\frac{\mathbb{P}(A_\tau \gt x)}{\overline{F}(\sqrt{2ax})}\ge\mathbb{E}\tau.\end{equation}

Furthermore, using this proposition one can give an alternative proof of (1) under the assumption of regular variation of $\overline F$ , which is much simpler than the original one in [2]. We first split the event $\lbrace A_\tau \gt x\rbrace$ into two parts,

\begin{align*}\lbrace A_\tau \gt x\rbrace=\lbrace A_\tau \gt x, M_\tau \gt y\rbrace\cup\lbrace A_\tau \gt x, M_\tau\leq y\rbrace.\end{align*}

Clearly, $\lbrace A_\tau \gt x, M_\tau\leq y\rbrace\subseteq\lbrace\tau \gt x/y\rbrace.$ Therefore,

(6) \begin{align}\mathbb{P}(A_\tau \gt x, M_\tau \gt y)\leq\mathbb{P}(A_\tau \gt x)\leq\mathbb{P}(A_\tau \gt x, M_\tau \gt y)+\mathbb{P}(\tau \gt x/y).\end{align}

When $\alpha>1$ , according to Theorem I in [9] or [7, Theorem 3.2], $\mathbb{P}(\tau \gt t)\sim\mathbb{E}\tau\overline{F}\left(a t\right)$ as $t\to\infty$ . Choosing $y=\varepsilon \sqrt{x}$ and recalling that $\overline{F}$ is regularly varying, we get

(7) \begin{equation}\mathbb{P}(\tau \gt x/y)=\mathbb{P}(\tau \gt \sqrt{x}/\varepsilon)\sim (\varepsilon/a)^{\alpha}\,\mathbb{E}\tau\overline{F}(\sqrt{x}).\end{equation}

It follows from the first statement of Proposition 1 that

\begin{equation*}\mathbb{P}(A_\tau \gt x,M_\tau \gt \varepsilon \sqrt{x})\sim\mathbb{E}\tau\overline{F}(\sqrt{2ax}).\end{equation*}

Plugging this and (7) into (6), we get, as $x\to\infty$ ,

\begin{equation*}\mathbb{E}\tau\overline{F}(\sqrt{2ax})(1+o(1))\le\mathbb{P}(A_\tau \gt x)\le\mathbb{E}\tau\overline{F}(\sqrt{2ax})\left(1+\varepsilon^\alpha\left(\frac{2}{a}\right)^{\alpha/2}+o(1)\right).\end{equation*}

Letting $\varepsilon\to0$ , we arrive at (1).

The case of heavy-tailed distributions which satisfy the conditions of Proposition 1(b) is more complicated. In particular, it seems that in this case there is a regime in which the asymptotics in (1) is no longer valid. We will treat this case by using exponential bounds similar to Section 2.2 in [14] and asymptotics for $\mathbb{P}(\tau \gt x)$ from [5] and [7].

First, we will introduce the subclass of subexponential distributions that we consider. We will assume that $\mathbb{E}[X_1^2]=\sigma^2<\infty$ . Without loss of generality we may assume that $\sigma=1$ .

Assumption 1. Let

(8) \begin{equation}\overline F(x) \sim e^{-g(x)}x^{-2}, \qquad x\to \infty,\end{equation}

where g(x) is an eventually increasing function such that eventually

(9) \begin{equation}\frac{g(x)}{x^{\gamma_0}}\downarrow 0, \qquad x\to\infty,\end{equation}

for some $\gamma_0\in(0,1]$ .

Due to the asymptotic nature of the equivalence in (8), without loss of generality we may assume that g is continuously differentiable and that (9) holds for all $x>0$ . Clearly, monotonicity in (9) implies that

(10) \begin{equation}g'(x)\le \gamma_0 \frac{g(x)}{x}\end{equation}

for all sufficiently large x. Using the Karamata representation theorem, one can show that this class of subexponential distributions includes regularly varying distributions $\overline F(x)\sim x^{-r}L(x)$ for $r>2$ . It is also not difficult to show that log-normal distributions and Weibull distributions ( $\overline F(x) \sim {\rm e}^{-x^\beta}$ , $\beta\in(0,1)$ ) belong to our class of distributions. This class previously appeared in [15] in the analysis of large deviations of sums of subexponential random variables on the whole axis.
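As a quick numerical illustration of Assumption 1 (the parameter values are arbitrary choices, not required by the paper), take the Weibull-type case $g(x)=x^{\beta}$ with $\beta=0.4$: the ratio $g(x)/x^{\gamma_0}$ is decreasing for $\gamma_0=1$, and the derivative bound (10), $g'(x)\le\gamma_0\,g(x)/x$, holds on a geometric grid of points.

```python
# g(x) = x**beta with beta in (0, 1) models the Weibull case; check that
# g(x)/x**gamma0 decreases for gamma0 = 1 and that (10) holds numerically.
beta, gamma0 = 0.4, 1.0

def g(x):
    return x ** beta

def g_prime(x):
    return beta * x ** (beta - 1)

xs = [10.0 * 1.5 ** k for k in range(40)]          # geometric grid of test points
ratios = [g(x) / x ** gamma0 for x in xs]
monotone = all(r1 >= r2 for r1, r2 in zip(ratios, ratios[1:]))
bound_10 = all(g_prime(x) <= gamma0 * g(x) / x for x in xs)
```

Here $g(x)/x = x^{\beta-1}$ is decreasing because $\beta<1$, and $g'(x)=\beta x^{\beta-1}\le x^{\beta-1}=\gamma_0 g(x)/x$, so both checks pass.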

Now we are able to give rough (logarithmic) asymptotics for $\gamma_0\le 1$ .

Theorem 1. Let $\mathbb{E}[X_1]=-a<0$ and ${\rm Var}(X_1)<\infty$ . Assume that Assumption 1 holds with $\gamma_0=1$ . Then there exists a constant $C>0$ such that

\begin{equation*}\mathbb{P}(A_\tau \gt x)\le Cx^{1/4} \exp\left\{-g(\sqrt{2ax})\sqrt{1-\frac{2Cg(\sqrt{2ax})}{a\sqrt{2ax}}}\right\}.\end{equation*}

Furthermore, for any $\varepsilon>0$ there exists $C>0$ such that

\begin{equation*}\liminf_{x\to\infty}\frac{\mathbb{P}(A_\tau \gt x)}{\overline F(\sqrt{2ax}+Cx^{1/4+\varepsilon})}\ge \mathbb{E}\tau.\end{equation*}

In particular, if $\gamma_0<1$ then

\begin{equation*}\lim_{x\to\infty}\frac{\ln \mathbb{P}(A_\tau \gt x)}{\ln \overline F(\sqrt{2ax})}=1.\end{equation*}

To obtain the exact asymptotics we will impose a further requirement on the function g.

Assumption 2. Let g(x) satisfy

(11) \begin{equation}xg'(x)\to \infty, \qquad x\to \infty.\end{equation}

This assumption implies that

(12) \begin{equation}\frac{g(x)}{\log x}\to \infty.\end{equation}

In particular, it excludes all regularly varying distributions.

Theorem 2. Let $\mathbb{E}[X_1]=-a<0$ and ${\rm Var}(X_1)<\infty$ . Assume that Assumption 1 holds with $\gamma_0<1/2$ and, in addition, that Assumption 2 holds. Then

\begin{equation*}\mathbb{P}(A_\tau \gt x) \sim \mathbb{E}\tau \overline F(\sqrt{2ax}), \qquad x\to \infty.\end{equation*}

1.2. Discussion and organisation of the paper

The main result of this note, Theorem 2, provides tail asymptotics for $A_\tau$ in the case when the increments of the random walk have a Weibull-like distribution with shape parameter $\gamma_0<1/2$ . We believe that $\mathbb{P}(A_\tau \gt x)$ behaves differently in the case when $g(x)=x^{\gamma_0}$ with $\gamma_0\ge 1/2$ . This change in the asymptotic behaviour already appears in the analysis of the exact asymptotics for $\mathbb{P}(\tau \gt n)$ and $\mathbb{P}(S_n>an)$ ; see [5] and [7], respectively.

The conjecture in [13] was formulated for the workload process of a single-server queue rather than for the area under the random walk excursion. However, one can prove analogous results for Lévy processes by essentially the same arguments. It is well known that the workload of the M/G/1 queue can be represented as a Lévy process, and thus our results can be transferred to this setting almost immediately. We believe that the treatment of the workload of the general G/G/1 queue is not that different either.

The paper is organised as follows. We will start by proving Proposition 1 in Section 2. Then we will derive a useful exponential bound and prove Theorem 1 in Section 3. Finally, we derive exact asymptotics for $\mathbb{P}(A_\tau \gt x)$ and thus prove Theorem 2 in Section 4.

2. Proof of Proposition 1

Before giving the proof we collect some auxiliary results that we will need in this and the following sections.

We will require the following statement, the first part of which follows from Theorem 2 in [12] (see also [4] for a short proof), and the second part from [7, Theorem 3.2].

Proposition 2. Let $\mathbb{E}[X_1]=-a$ and either (a) $\overline{F}(x)\coloneqq \mathbb{P}(X_1>x)=x^{-\alpha}L(x)$ with some $\alpha>1$ or (b) $\overline{F}(x)\sim x^{-\varkappa} {\rm e}^{-g(x)}$ , where g(x) is a monotone continuously differentiable function satisfying $\frac{g(x)}{x^\beta}\downarrow$ for $\beta\in(0,1/2)$ , and $\mathbb{E}|X_1|^\varkappa<\infty$ for some $\varkappa>1/(1-\beta)$ ; then, for any fixed k,

(13) \begin{align}\mathbb{P}(M_k>y)&\sim \mathbb{P}(S_k>y)\sim k\overline F(y), \quad\quad y\to \infty , \end{align}
(14) \begin{align}\mathbb{P}\left(\max_{n\le\tau\wedge k}S_n>y\right)&\sim \mathbb{E}(\tau\wedge k)\overline{F}(y),\quad y\to \infty , \end{align}
(15) \begin{align}\mathbb{P}(M_\tau \gt y)&\sim\mathbb{E}\tau\overline{F}(y),\qquad\qquad\qquad y\to\infty\end{align}

and

(16) \begin{equation}\mathbb{P}(\tau \gt n)\sim \mathbb{E}[\tau]\overline F(an), \qquad\qquad\quad\,\, n\to \infty.\end{equation}

In the proof we will need some properties of the function $\overline F(x)\sim x^{-\varkappa} {\rm e}^{-g(x)}$ that we will summarise in the following lemma, which will also be used later in the paper.

Lemma 1. Let the distribution function $\overline F(x)$ be such that $\overline{F}(x)\sim x^{-\varkappa} {\rm e}^{-g(x)}$ , where g(x) is a monotone continuously differentiable function satisfying $\frac{g(x)}{x^\beta}\downarrow$ for $\beta\in(0,1)$ , and $\mathbb{E}|X_1|^\varkappa<\infty$ for some $\varkappa>1/(1-\beta)$ . Then,

(17) \begin{align}g'(x) & \le \beta \frac{g(x)}{x},\qquad\qquad x>0 , \end{align}
(18) \begin{align}g(x) - g(y) & \le \beta g(y)\frac{x-y}{y},\quad x>y>0 , \end{align}
(19) \begin{align}g(x) - g(x-y) & \le \beta g(y),\qquad x\ge 2y>0 , \end{align}
(20) \begin{align}\sup_{y\le x^{1/\varkappa}}\frac{\overline F(x-y)}{\overline F(x)} & \to 1,\qquad x\to \infty . \end{align}

Proof. Since g(x) is continuously differentiable and $\frac{g(x)}{x^\beta}$ is monotone decreasing, we necessarily have

\begin{equation*}0\ge \left(\frac{g(x)}{x^\beta}\right)'=\frac{g'(x)x^\beta-\beta x^{\beta-1}g(x)}{x^{2\beta}} ,\end{equation*}

implying (17). To prove (18), note that

\begin{equation*}g(x)-g(y) =\int_y^x g'(t) \, {\rm d} t \le \beta\int_y^x \frac{g(t)}{t} \, {\rm d} t\le \beta \frac{g(y)}{y^\beta}\int_y^x \frac{1}{t^{1-\beta}} \, {\rm d} t= \beta g(y)\frac{x-y}{y}.\end{equation*}

To prove (19), note that, since $x-y\ge y$ ,

\begin{align*}g(x)-g(x-y) &=\int_{x-y}^x g'(t) \, {\rm d} t \le \beta\int_{x-y}^x \frac{g(t)}{t} \, {\rm d} t\le \beta \frac{g(y)}{y^\beta}\int_{x-y}^x \frac{1}{t^{1-\beta}} \, {\rm d} t\\[3pt] &\le \beta \frac{g(y)}{y^\beta}\int_{y}^{2y} \frac{1}{t^{1-\beta}} \, {\rm d} t\le \beta g(y)\frac{(2y)^\beta-y^\beta}{\beta y^\beta}\le \beta g(y),\end{align*}

since $2^\beta\le 1+\beta$ for $\beta\in[0,1]$ . To show (20), note that, uniformly in $y\le x^{1/\varkappa}$ , as $x\to\infty$ ,

\begin{align*}1&\le \frac{\overline F(x-y)}{\overline F(x)}\le \frac{\overline F(x-x^{1/\varkappa})}{\overline F(x)}=(1+o(1)) \exp\left\{g(x)-g(x-x^{1/\varkappa})\right\} \\[3pt]&\le (1+o(1))\exp\left\{\beta g(x-x^{1/\varkappa})\frac{x^{1/\varkappa}}{x-x^{1/\varkappa}}\right\}\le (1+o(1))\exp\left\{C\frac{x^{1/\varkappa}}{(x-x^{1/\varkappa})^{1-\beta}}\right\}\to 1,\end{align*}

since $1/\varkappa<1-\beta$ . Here, we have also made use of (18).
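The inequalities (17)-(19) of Lemma 1 can be spot-checked numerically for the Weibull-type choice $g(x)=x^\beta$ (here $\beta=0.3$ and the grid of test points are arbitrary illustrative choices; any $\beta\in(0,1)$ works):

```python
import itertools

beta = 0.3
g = lambda x: x ** beta                      # Weibull-type g satisfying g(x)/x**beta constant
g_prime = lambda x: beta * x ** (beta - 1)

pts = [0.5, 1.0, 3.0, 10.0, 100.0, 1e4]
tol = 1e-12                                  # slack for floating-point rounding

# (17): g'(x) <= beta * g(x) / x  (equality for this g)
ok17 = all(g_prime(x) <= beta * g(x) / x + tol for x in pts)
# (18): g(x) - g(y) <= beta * g(y) * (x - y) / y for x > y > 0
ok18 = all(g(x) - g(y) <= beta * g(y) * (x - y) / y + tol
           for x, y in itertools.product(pts, pts) if x > y > 0)
# (19): g(x) - g(x - y) <= beta * g(y) for x >= 2y > 0
ok19 = all(g(x) - g(x - y) <= beta * g(y) + tol
           for x, y in itertools.product(pts, pts) if x >= 2 * y > 0)
```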

Proof of Proposition 2. To prove (13), (14), and (15), by Theorem 2 of [12] it is sufficient to show that (a) or (b) implies that $F\in\mathcal S^*$ , that is, $\int_0^\infty \overline F(y) \, {\rm d} y<\infty$ and

\begin{equation*}\int_0^x\overline F(y)\overline F(x-y) \, {\rm d} y \sim 2\overline F(x) \int_0^\infty \overline F(y) \, {\rm d} y , \qquad x\to \infty.\end{equation*}

The fact that (a) implies $F\in\mathcal S^*$ is well known and follows immediately from the dominated convergence theorem, since $ \overline F(x)\sim \overline F(x-y)$ for all fixed y and

\begin{equation*}\int_0^x \frac{\overline F(y)\overline F(x-y)}{\overline F(x) } \, {\rm d} y=2\int_0^{x/2} \frac{\overline F(y)\overline F(x-y)}{\overline F(x) } \, {\rm d} y ,\end{equation*}

and $\overline F(x-y)\le C\overline F(x)$ for some $C>0$ when $y\le x/2$ . Now, assume that (b) holds and show that $F\in\mathcal S^*$ . Consider

\begin{align*}2\int_0^{x/2} \frac{\overline F(y)\overline F(x-y)}{\overline F(x) } \, {\rm d} y.\end{align*}

Uniformly in $y\in [\ln x ,x/2]$ we have, by (19),

(21) \begin{equation}\frac{\overline F(y)\overline F(x-y)}{\overline F(x) }\le C \frac{x^\varkappa}{(x-y)^\varkappa y^\varkappa}{\rm e}^{g(x)-g(x-y)-g(y)}\le Cy^{-\varkappa} {\rm e}^{\beta g(y)-g(y)} ,\end{equation}

and therefore, since $\varkappa>1$ ,

\begin{align*}2\int_{\ln x }^{x/2} \frac{\overline F(y)\overline F(x-y)}{\overline F(x) } \, {\rm d} y \to 0.\end{align*}

Next, applying (20) we see that $\frac{\overline F(x-y)}{\overline F(x)} \to 1 $ uniformly in $y\in [0, \ln x]$ , which implies that $F\in\mathcal S^*$ .

The proof of (16) consists of verifying that (8) and (9) imply the conditions of Theorem 3.1 (and hence of Theorem 3.2) of [7]. We will provide the arguments in the more complicated case (b). First, $X_1+a$ satisfies $\mathbb{E}[|X_1+a|^\varkappa]<\infty$ by the assumption of this proposition. Convergence (3.1) in Theorem 3.1 of [7] holds by (20). Let

\begin{align*}\varepsilon(n)\coloneqq \sup_{x\ge 2n^{1/\varkappa}}\frac{\mathbb{P}(\xi_1>n^{1/\varkappa},\xi_2>n^{1/\varkappa}, \xi_1+\xi_2>x)}{\mathbb{P}(\xi_1>x)},\end{align*}

where $\xi_i=X_i+a$ , $i=1,2$ . To show (3.2) in Theorem 3.1 of [7] we need to prove that $\varepsilon(n)=o(1/n)$ . For $x\ge 2n^{1/\varkappa}$ we have

\begin{align*}\frac{\mathbb{P}(\xi_1>n^{1/\varkappa},\xi_2>n^{1/\varkappa}, \xi_1+\xi_2>x)}{\mathbb{P}(\xi_1>x)}&= 2P_1+P_2\\[3pt] &\coloneqq 2\int_{n^{1/\varkappa}}^{x/2}\mathbb{P}(\xi_1\in {\rm d} y)\frac{\mathbb{P}(\xi_2>x-y)}{\mathbb{P}(\xi_1>x)}+\frac{\mathbb{P}(\xi_1>x/2)^2}{\mathbb{P}(\xi_1>x)} .\end{align*}

Then, using (19),

\begin{align*}P_1&\le\int_{n^{1/\varkappa}}^{x/2}\mathbb{P}(\xi_1\in {\rm d} y)\frac{\mathbb{P}(\xi_2>x-y)}{\mathbb{P}(\xi_1>x)}\\[3pt]&\le C \int_{n^{1/\varkappa}}^{x/2}\mathbb{P}(\xi_1\in {\rm d} y) \frac{\overline F(x-y)}{\overline F(x)}\le C \int_{n^{1/\varkappa}}^{x/2}\mathbb{P}(\xi_1\in {\rm d} y) {\rm e}^{\beta g(y)} \end{align*}

for some C. Integrating by parts,

\begin{align*}P_1 & \le C\mathbb{P}(\xi_1>n^{1/\varkappa}){\rm e}^{\beta g(n^{1/\varkappa})}+C\int_{n^{1/\varkappa}}^{x/2} {\rm d} y g'(y) {\rm e}^{\beta g(y)}\mathbb{P}(\xi_1>y)\\[3pt]&\le C n^{-1}{\rm e}^{(\beta-1) g(n^{1/\varkappa})}+C \int_{n^{1/\varkappa}}^{x/2} {\rm d} y g'(y) y^{-\varkappa} {\rm e}^{(\beta-1) g(y)}\\[3pt]&\le o(n^{-1})+cn^{-1}\int_{n^{1/\varkappa}}^{x/2} {\rm d} y g'(y) {\rm e}^{(\beta-1) g(y)}=o(n^{-1}) \end{align*}

uniformly in $x\ge 2n^{1/\varkappa}$ . Using (21), $P_2\le Cx^{-\varkappa} {\rm e}^{(\beta-1)g(x)}=o(1/n)$ uniformly in $x\ge 2n^{1/\varkappa}$ , which proves that $\varepsilon(n)=o(1/n)$ .

Define $\sigma_y=\inf\lbrace n<\tau\,{:}\,S_n>y\rbrace.$

Lemma 2. Under the conditions of Proposition 2,

\begin{equation*}\lim_{y\to\infty}\mathbb{P}(\sigma_y=k \mid M_\tau \gt y)\eqqcolon q_k,\quad k\ge1 ; \qquad\sum_{k=1}^\infty q_k=1.\end{equation*}

Proof. For every $k\geq 1$ ,

\begin{align*}\mathbb{P}(\sigma_y=k \mid M_\tau \gt y)&=\frac{\mathbb{P}(\sigma_y=k)}{\mathbb{P}(M_\tau \gt y)}\\[3pt]&=\frac{\mathbb{P}\left(\max_{n\le\tau\wedge k}S_n>y\right)-\mathbb{P}\left(\max_{n\le\tau\wedge(k-1)}S_n>y\right)}{\mathbb{P}(M_\tau \gt y)}.\end{align*}

It follows from (14) and (15) that

(22) \begin{align}\nonumber\lim_{y\rightarrow\infty}\mathbb{P}(\sigma_y=k \mid M_\tau \gt y)&=\frac{\mathbb{E}\tau\wedge k-\mathbb{E}\tau\wedge(k-1)}{\mathbb{E}\tau}\\[3pt]&=\frac{\mathbb{P}(\tau \gt k-1)}{\mathbb{E}\tau}\eqqcolon q_k, \qquad k\geq 1.\end{align}

It is clear that

\begin{align*}\sum_{k=1}^\infty q_k=\frac{1}{\mathbb{E}\tau}\sum_{k=1}^\infty\mathbb{P}(\tau \gt k-1)=1.\end{align*}
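The limiting weights $q_k=\mathbb{P}(\tau \gt k-1)/\mathbb{E}\tau$ can be estimated directly by simulating $\tau$. The sketch below is illustrative only (Gaussian increments with mean $-1$ are an arbitrary choice) and checks that the empirical $q_k$ are non-increasing and sum to approximately 1:

```python
import random

rng = random.Random(7)

def tau(a=1.0):
    """First time the walk S_n (S_0 = 0, increments with mean -a) hits (-inf, 0]."""
    s, n = 0.0, 0
    while True:
        s += rng.gauss(-a, 1.0)
        n += 1
        if s <= 0:
            return n

taus = [tau() for _ in range(20000)]
e_tau = sum(taus) / len(taus)
# q_k = P(tau > k - 1) / E[tau], k >= 1, as in (22); truncate at k = 59,
# beyond which P(tau > k - 1) is negligible for this drift.
q = [sum(t > k - 1 for t in taus) / len(taus) / e_tau for k in range(1, 60)]
# sum(q) is close to 1, since sum_{k>=1} P(tau > k - 1) = E[tau]
```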

Lemma 3. For every fixed k,

\begin{align*}\sup_{v>y}\Bigg\vert\frac{\mathbb{P}(S_k>v,\sigma_y=k)}{\overline{F}(v)}-\mathbb{P}(\tau \gt k-1)\Bigg\vert\rightarrow 0 \text{ as } y\rightarrow\infty .\end{align*}

Proof. Fix some $N>0$ and define the events

\begin{equation*}D_{k,N}=\cup_{j=1}^k\left\lbrace X_j>v+kN, \vert X_l\vert \leq N \text{ for all } l\neq j, l\le k\right\rbrace.\end{equation*}

It is clear that $D_{k,N}\subseteq \lbrace S_k>v\rbrace.$ Therefore,

\begin{align*}\mathbb{P}(S_k>v,\sigma_y=k)&=\mathbb{P}(D_{k,N},\sigma_y=k)+\mathbb{P}(S_k>v, D^c_{k,N},\sigma_y=k)\\[3pt]&=\mathbb{P}(X_k>v+kN,\vert X_l\vert\leq N, \text{for all } l<k, \sigma_y>k-1)\\[3pt]& \quad + \mathbb{P}(S_k>v, D^c_{k,N},\sigma_y=k).\end{align*}

For the first term, assuming $y>(k-1)N$ , we have

(23) \begin{equation}\begin{split}\mathbb{P}(X_k>v+kN,& \vert X_l\vert\leq N, \text{ for all } l<k, \sigma_y>k-1)\\[3pt]&=\mathbb{P}(\tau \gt k-1,\vert X_l\vert\leq N, l<k)\overline{F}(v+kN)\\[3pt]&=\mathbb{P}(\tau \gt k-1)\overline{F}(v)-\varepsilon_N^{(1)}\overline{F}(v)+o(\overline{F}(v)) \end{split}\end{equation}

uniformly in $v > y$ , where $\varepsilon_N^{(1)}\coloneqq \mathbb{P}(\tau \gt k-1,\vert X_l\vert> N \text{ for some }l<k)\to0$ as $N \to \infty$ .

Furthermore,

(24) \begin{equation}\begin{split}\mathbb{P}(S_k>v, & D_{k,N}^c, \sigma_y=k)\leq\mathbb{P}(S_k>v, D_{k,N}^c)=\mathbb{P}(S_k>v)-\mathbb{P}(D_{k,N})\\[3pt]&=\mathbb{P}(S_k>v)-k\mathbb{P}(X_1>v+kN)(\mathbb{P}(\vert X_1\vert\leq N))^{k-1}\\[3pt]&=\varepsilon_N^{(2)}\overline{F}(v)+o(\overline{F}(v)),\end{split}\end{equation}

where $\varepsilon_N^{(2)}\coloneqq k\left(1-(\mathbb{P}(\vert X_1\vert\leq N))^{k-1}\right)\to0,$ as $N \to \infty$ . Combining (23) and (24) and letting $N\rightarrow\infty$ , we get the desired relation.

We now turn to the study of the tail behaviour of $A_\tau$ on the event $\{\sigma_y=k\}$ . For the corresponding result we need the following property of $\overline{F}$ .

Lemma 4. Assume that the conditions of Proposition 1 are fulfilled. Then

(25) \begin{equation}\lim_{y\to\infty}\frac{\overline{F}(y+h)}{\overline{F}(y)}=1\end{equation}

for any $h=o(y/g(y))$ . Furthermore, for every $R>0$ there exists a constant C such that

(26) \begin{equation}\overline{F}(y-h)\le C\overline{F}(y) \text{ for all } h\le R\frac{y}{g(y)}.\end{equation}

Proof. Since $\overline{F}(x)\sim x^{-\varkappa}{\rm e}^{-g(x)}$ and $\frac{y}{y+h}\to1$ , (25) will follow from

(27) \begin{equation}g(y+h)-g(y)\to0.\end{equation}

Since $\frac{g(x)}{x^\beta}$ is monotone decreasing and g is differentiable,

\begin{equation*}g'(x)\le \beta \frac{g(x)}{x}.\end{equation*}

Then, for $0<h<y$ ,

(28) \begin{align}g(y)-g(y-h)=\int_{y-h}^{y} g'(t) \, {\rm d} t\le \beta \int_{y-h}^{y} \frac{g(t)}{t} \, {\rm d} t\le \beta \frac{g(y-h)}{y-h}h.\end{align}

In the last step we have used the fact that $\frac{g(t)}{t}$ is decreasing. Similarly, for $h>0$ ,

\begin{equation*}g(y+h)-g(y)\le \beta \frac{g(y)}{y}h.\end{equation*}

These estimates yield (27).

To prove the second claim we note that, by (28),

\begin{align*}\frac{\overline{F}(y-h)}{\overline{F}(y)}\le C\left(\frac{y}{y-h}\right)^\varkappa {\rm e}^{g(y)-g(y-h)}\le C\exp\left\{\beta \frac{g(y-h)}{y-h}h\right\}.\end{align*}

If $h\le R\frac{y}{g(y)}$ then

\begin{align*}\frac{\overline{F}(y-h)}{\overline{F}(y)}\le C\exp\left\{\beta R \frac{yg(y-h)}{(y-h)g(y)}\right\}\le C\exp\left\{\beta R \frac{1}{(1-R/g(y))}\right\}.\end{align*}

This completes the proof.

Lemma 5. Assume that the conditions of Proposition 1 hold. Then

\begin{equation*}\mathbb{P}\left(\sum_{j=k}^{\tau-1}S_j>z,\sigma_y=k\right)\sim q_k\mathbb{E}\tau\overline{F}(\sqrt{2az}),\qquad k\ge1,\end{equation*}

uniformly in $y\in[\varepsilon\sqrt{z},\sqrt{2az}]$ for regularly varying tails $\overline{F}$ , and in $y\in\big[\sqrt{2az}-\frac{R\sqrt{2az}}{g(\sqrt{2az})},\sqrt{2az}\big]$ for tails satisfying the conditions of part (b) in Proposition 1.

Proof. By the Markov property, for every $z>0$ ,

\begin{align*}\mathbb{P}\left(\sum_{j=k}^{\tau-1}S_j>z, \sigma_y=k\right)=\int_y^\infty\mathbb{P}(S_k\in {\rm d} v, \sigma_y=k)\mathbb{P}(A_\tau \gt z \mid S_0=v).\end{align*}

Let $\varkappa\in(1/(1-\beta),2)$ if $\overline{F}$ satisfies the conditions of part (b), and let $\varkappa=1$ in the case when $\overline{F}$ is regularly varying. Fix some $\delta>0$ and consider the set

\begin{equation*}B_v\coloneqq \left\{v-\delta v^{1/\varkappa}\le S_n+na\le v+\delta v^{1/\varkappa}\mbox{ for all }n\le\frac{v+\delta v^{1/\varkappa}}{a}\right\}.\end{equation*}

Since $\mathbb{E}|X_1|^\varkappa<\infty$ , it follows from the Marcinkiewicz–Zygmund law of large numbers that

\begin{equation*}\lim_{R\to\infty}\mathbb{P}\left(-R-\frac{\delta}{2} n^{1/\varkappa}<S_n+na<R+\frac{\delta}{2} n^{1/\varkappa}\text{ for all }n\ge1\right)=1.\end{equation*}

Consequently, $\mathbb{P}(B_v \mid S_0=v)\to 1$ as $v\to\infty$ . This implies that, as $y\rightarrow\infty$ ,

\begin{align*}\mathbb{P}\Bigg(\sum_{j=k}^{\tau-1}S_j>z & , \sigma_y=k\Bigg)\\[3pt]&=\int_y^\infty\mathbb{P}\left(S_k\in {\rm d} v,\sigma_y=k\right)\mathbb{P}(A_\tau \gt z \mid S_0=v)\\[3pt]&=\int_y^\infty\mathbb{P}\left(S_k\in {\rm d} v,\sigma_y=k\right)\mathbb{P}\left(\lbrace A_\tau \gt z\rbrace\cap B_v \mid S_0=v\right)+o\left(\mathbb{P}(\sigma_y=k)\right).\end{align*}

On the event $B_v\cap\{S_0=v\}$ one has

\begin{equation*}\frac{v-\delta v^{1/\varkappa}}{a}<\tau<\frac{v+\delta v^{1/\varkappa}}{a}.\end{equation*}

Consequently,

\begin{equation*}A_\tau=\sum_{n=0}^{\tau-1}S_n\ge \sum_{n=0}^{\tau-1}(v-\delta v^{1/\varkappa}-na)\ge \tau\left(v-\delta v^{1/\varkappa}-\frac{a\tau}{2}\right)\ge \frac{(v-\delta v^{1/\varkappa})^2}{2a}\end{equation*}

and

\begin{equation*}A_\tau=\sum_{n=0}^{\tau-1}S_n\le v+ \sum_{n=1}^{\tau-1}(v+\delta v^{1/\varkappa}-na)\le \tau\left(v+\delta v^{1/\varkappa}-\frac{a\tau}{2}\right)\le \frac{(v+\delta v^{1/\varkappa})^2}{2a}\end{equation*}

on the same event. In other words, $\mathbb{P}\left(\lbrace A_\tau \gt z\rbrace\cap B_v \mid S_0=v\right)=\mathbb{P}(B_v \mid S_0=v)$ if $v-\delta v^{1/\varkappa}\ge\sqrt{2az}$ , and $\mathbb{P}\left(\lbrace A_\tau \gt z\rbrace\cap B_v \mid S_0=v\right)=0$ if $v+\delta v^{1/\varkappa}<\sqrt{2az}$ . Therefore, for all v large enough,

\begin{align*}\mathbb{P}\left(\sum_{j=k}^{\tau-1}S_j>z, \sigma_y=k\right)&\leq\int_{\sqrt{2az}-\delta(2az)^{1/2\varkappa}}^\infty \mathbb{P}(S_k\in {\rm d} v, \sigma_y=k)+o(\mathbb{P}(\sigma_y=k))\\[3pt]&=\mathbb{P}\left(S_{\sigma_y}>\sqrt{2az}-\delta(2az)^{1/2\varkappa}, \sigma_y=k\right)+o(\mathbb{P}(\sigma_y=k))\end{align*}

and

\begin{align*}\mathbb{P}\left(\sum_{j=k}^{\tau-1}S_j>z, \sigma_y=k\right)&\geq\int_{\sqrt{2az}+2\delta(2az)^{1/2\varkappa}}^\infty \mathbb{P}(S_k\in {\rm d} v, \sigma_y=k)\mathbb{P}(B_v)+o(\mathbb{P}(\sigma_y=k))\\&=\mathbb{P}\left(S_{\sigma_y}>\sqrt{2az}+2\delta(2az)^{1/2\varkappa}, \sigma_y=k\right)+o(\mathbb{P}(\sigma_y=k)).\end{align*}

By Lemma 3, $\mathbb{P}(S_{\sigma_y}>v,\sigma_y=k)\sim\bar{F}(v)\mathbb{P}(\tau \gt k-1)$ as $y\rightarrow\infty$ uniformly in $v\geq y$ and, consequently,

\begin{align*}\mathbb{P}\left(\sum_{j=k}^{\tau-1}S_j>z, \sigma_y=k\right) & \leq \bar{F}\left((\sqrt{2az}-\delta(2az)^{1/2\varkappa})\vee y\right)(\mathbb{P}(\tau \gt k-1)+o(1))\\& \quad +o(\mathbb{P}(\sigma_y=k))\end{align*}

and

\begin{align*}\mathbb{P}\left(\sum_{j=k}^{\tau-1}S_j>z, \sigma_y=k\right) & \geq \overline{F}\left(\sqrt{2az}+2\delta(2az)^{1/2\varkappa}\right)(\mathbb{P}(\tau \gt k-1)+o(1))\\& \quad +o(\mathbb{P}(\sigma_y=k)).\end{align*}

Under our assumptions on $\overline{F}$ , we have

\begin{equation*}\lim_{\delta\to0}\lim_{z\to\infty}\frac{\overline{F}\left(\sqrt{2az}+2\delta(2az)^{1/2\varkappa}\right)}{\overline{F}\left((\sqrt{2az}-\delta(2az)^{1/2\varkappa})\vee y\right)}=1.\end{equation*}

Indeed, this relation is obvious for regularly varying tails, and under the conditions of part (b) it is a particular case of (25). Therefore,

\begin{align*}&\mathbb{P}\left(\sum_{j=k}^{\tau-1}S_j>z, \sigma_y=k\right)=\overline{F}\left(\sqrt{2az}\right)(\mathbb{P}(\tau \gt k-1)+o(1))+o(\mathbb{P}(\sigma_y=k)).\end{align*}

Combining (15) and (22), we get $\mathbb{P}(\sigma_y=k)\sim q_k\mathbb{E}\tau\overline{F}(y).$ Thus, it remains to show that $\overline{F}(y)=O(\overline{F}(\sqrt{2az}))$ . This is obvious for regularly varying tails and $y\ge \varepsilon\sqrt{z}$ . For distributions satisfying the conditions of part (b), it suffices to apply (26) with $y=\sqrt{2az}$ .

Proof of Proposition 1. For every fixed $N\geq1$ we have

(29) \begin{align}&\mathbb{P}(A_\tau > x, M_\tau \gt y) \nonumber\\[3pt] &\quad =\sum_{k=1}^N\mathbb{P}(A_\tau \gt x, \sigma_y=k, M_\tau \gt y)+\mathbb{P}(A_\tau \gt x, \sigma_y>N, M_\tau \gt y).\end{align}

For the last term on the right-hand side we have

\begin{align*}\mathbb{P}(A_\tau \gt x, \sigma_y>N, M_\tau \gt y)&\leq\mathbb{P}(\sigma_y>N, M_\tau \gt y)\\[3pt]&=\mathbb{P}(M_\tau \gt y)\mathbb{P}(\sigma_y>N \mid M_\tau \gt y).\end{align*}

It follows from (22) that $\mathbb{P}(\sigma_y>N \mid M_\tau \gt y)\rightarrow\sum_{j=N+1}^\infty q_j$ as $y\rightarrow\infty$ . Then, using (15), we get

(30) \begin{align}\mathbb{P}(A_\tau \gt x,\sigma_y>N, M_\tau \gt y)\leq \varepsilon_N\bar{F}(y),\end{align}

where $\varepsilon_N\rightarrow 0$ as $N\rightarrow\infty$ .

For every fixed k we have $\mathbb{P}(A_\tau \gt x, \sigma_y=k, M_\tau \gt y)=\mathbb{P}(A_\tau \gt x, \sigma_y=k)$ . Since $S_j\in(0,y)$ for all $j<k$ , we obtain

\begin{align*}\mathbb{P}(A_\tau \gt x, \sigma_y=k)\leq\mathbb{P}\left(\sum_{j=k}^{\tau-1}S_j>x-(k-1)y, \sigma_y=k\right)\end{align*}

and

\begin{align*}\mathbb{P}(A_\tau \gt x, \sigma_y=k)\geq\mathbb{P}\left(\sum_{j=k}^{\tau-1}S_j>x, \sigma_y=k\right).\end{align*}

Using Lemma 5 with $z=x$ and with $z=x-(k-1)y$ , we conclude that $\mathbb{P}(A_\tau \gt x,\sigma_y=k)\sim q_k\mathbb{E}\tau\overline{F}(\sqrt{2ax})$ . Consequently,

(31) \begin{equation}\sum_{k=1}^N\mathbb{P}(A_\tau \gt x, \sigma_y=k, M_\tau \gt y) \sim\overline{F}(\sqrt{2ax})\mathbb{E}\tau\sum_{k=1}^Nq_k.\end{equation}

Plugging (30) and (31) into (29) and letting $N\rightarrow\infty$ , we obtain

\begin{align*}\mathbb{P}(A_\tau \gt x, M_\tau \gt y)=(\mathbb{E}\tau+o(1))\overline{F}(\sqrt{2ax})+o(\overline{F}(y)).\end{align*}

Recalling that $\overline{F}(y)=O(\overline{F}(\sqrt{2ax}))$ , we finish the proof.

3. Proof of Theorem 1

We start by proving an exponential estimate for the area $A_n$ when random variables $X_j$ are truncated. Let $\overline X_n=\max(X_1,\ldots,X_n)$ . The next result is our main technical tool to investigate trajectories without big jumps.

Lemma 6. Let $\mathbb{E}[X_1]=-a$ and $\sigma^2\coloneqq {\rm Var}(X_1)<\infty$ . Assume that the distribution function F of $X_j$ satisfies (8) and that (9) holds with $\gamma_0=1$ . Then there exists a constant $C_0>0$ such that

\begin{equation*}\mathbb{P}(A_n>x,\overline X_n\le y) \le \exp\left\{-\lambda \frac{x}{n} - \lambda\frac{a n}{2} +C_0\lambda^2 n\right\},\end{equation*}

where $\lambda = \frac{g(y)}{y}$ .

Proof. We will prove this lemma by using the exponential Chebyshev inequality. For that, we need to obtain estimates for the moment-generating function of $A_n$ . First,

\begin{equation*}\mathbb{E}\left[ {\rm e}^{\frac{\lambda}{n}A_n};\ \overline X_n\le y\right]=\mathbb{E} \left[{\rm e}^{\frac{\lambda}{n}\sum_1^n(n-j+1)X_j};\ \overline X_n\le y\right]=\prod_{j=1}^n\varphi_y\left(\lambda_{n,j}\right),\end{equation*}

where $\varphi_y(t) \coloneqq \mathbb{E}[{\rm e}^{t X_j};\ X_j\le y]$ and $\lambda_{n,j} \coloneqq \lambda\frac{(n-j+1)}{n}$ . Then,

\begin{equation*}\varphi_y(\lambda_{n,j}) = \mathbb{E}[{\rm e}^{\lambda_{n,j}X_j};\ X_j\le 1/\lambda_{n,j}]+\mathbb{E}[{\rm e}^{\lambda_{n,j}X_j};\ 1/\lambda_{n,j} < X_j\le y] \eqqcolon E_1+E_2.\end{equation*}

Using the elementary bound ${\rm e}^x\le 1+x+x^2$ for $x\le 1$ , we obtain

\begin{align*}E_1\le 1+\lambda_{n,j}\mathbb{E}[X_j]+\lambda_{n,j}^2 \mathbb{E}[X_j^2]=1-a\lambda_{n,j}+(a^2+\sigma^2)\lambda_{n,j}^2.\end{align*}

Next, using integration by parts and the assumption in (8),

\begin{align*}E_2&=\int_{1/\lambda_{n,j}}^{y} {\rm e}^{\lambda_{n,j}t} \, {\rm d} F(t)=-\overline F(t){\rm e}^{\lambda_{n,j}t}\biggl |_{t=1/\lambda_{n,j}}^{t=y}+\lambda_{n,j}\int_{1/\lambda_{n,j}}^{y} {\rm e}^{\lambda_{n,j}t}\overline F(t) \, {\rm d} t \\[3pt]&\le {\rm e} \overline F(1/\lambda_{n,j})+C \lambda_{n,j}\int_{1/\lambda_{n,j}}^{y} {\rm e}^{\lambda_{n,j}t-g(t)}t^{-2} \, {\rm d} t.\end{align*}

Now note that, for $t\le y$ ,

\begin{equation*}\lambda_{n,j} t - g(t) = t \left(\lambda_{n,j}-\frac{g(t)}{t}\right)\le t \left(\lambda_{n,j}-\frac{g(y)}{y}\right),\end{equation*}

due to the condition in (9). Then,

\begin{equation*}\lambda_{n,j}-\frac{g(y)}{y} \le\lambda - \frac{g(y)}{y}=0\end{equation*}

and, therefore,

\begin{equation*}E_2\le {\rm e} \overline F(1/\lambda_{n,j})+C \lambda_{n,j}\int_{1/\lambda_{n,j}}^{y} t^{-2} \, {\rm d} t \le (C+{\rm e})\lambda_{n,j}^2,\end{equation*}

where we also used the Chebyshev inequality. As a result, for some constant C,

\begin{equation*}\varphi_y(\lambda_{n,j})= E_1+E_2\le 1-a\lambda_{n,j} +C\lambda_{n,j}^2.\end{equation*}

Consequently,

\begin{align*}\mathbb{E}\left[{\rm e}^{\frac{\lambda}{n}A_n};\ \overline X_n\le y\right]&\le \prod_{j=1}^n \left(1-a\lambda_{n,j} +C\lambda_{n,j}^2\right) \\[3pt]&=\exp\left\{\sum_{j=1}^n\ln \left(1-a\lambda_{n,j} +C\lambda_{n,j}^2\right)\right\}\\[3pt]&\le\exp\left\{\sum_{j=1}^n\left(-a\lambda_{n,j} +C\lambda_{n,j}^2\right)\right\}\\[3pt]&=\exp\left\{\sum_{j=1}^n\left(-a\lambda\frac{n-j+1}{n} +C\left(\lambda\frac{n-j+1}{n} \right)^2\right)\right\}\\[3pt]&\le \exp\left\{-\frac{a \lambda}{2} n +C\lambda^2 n\right\}.\end{align*}
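The last inequality follows from the elementary bounds

\begin{equation*}\sum_{j=1}^n\lambda_{n,j}=\frac{\lambda}{n}\sum_{i=1}^{n}i=\lambda\frac{n+1}{2}\ge\frac{\lambda n}{2},\qquad\sum_{j=1}^n\lambda_{n,j}^2=\frac{\lambda^2}{n^2}\sum_{i=1}^{n}i^2\le\lambda^2 n.\end{equation*}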

Finally,

\begin{align*}\mathbb{P}(A_n>x,\overline X_n\le y)\le {\rm e}^{-\lambda\frac{x}{n}}\mathbb{E}\left[{\rm e}^{\frac{\lambda}{n}A_n};\ \overline X_n\le y\right]\le\exp\left\{-\lambda \frac{x}{n} -\frac{a \lambda}{2} n +C\lambda^2 n\right\}.\\[-40pt]\end{align*}

We can now obtain upper bounds for the tail of $A_\tau$ using the exponential bound in Lemma 6.

Lemma 7. Let $\mathbb{E}[X_1]=-a<0$ and ${\rm Var}(X_1)<\infty$ . Assume that the distribution function F of $X_j$ satisfies (8), and that (9) holds with $\gamma_0=1$ . Then there exists a constant $C>0$ such that

(32) \begin{align}\mathbb{P}(A_\tau \gt x, \overline X_\tau\le y)\le Cx^{1/4} \exp\left\{-2\frac{g(y)}{y}\sqrt{\left(\frac{a}{2}-\frac{2C_0g(y)}{y}\right)x}\right\}\end{align}

for all y satisfying $C_0 g(y)\le ay/4$ , where $C_0$ is the constant given by Lemma 6. Moreover,

\begin{equation*}\mathbb{P}(A_\tau \gt x)\le Cx^{1/4} \exp\left\{-g(\sqrt{2ax})\sqrt{\left(1-\frac{2C_0g(\sqrt{2ax})}{a\sqrt{2ax}}\right)^+}\right\},\qquad x\ge1.\end{equation*}

Proof. Applying Lemma 6, we obtain

\begin{align*}\mathbb{P}(A_\tau \gt x, \overline X_\tau\le y)&\le \sum_{n=0}^\infty\mathbb{P}(A_n\geq x, \overline X_n \le y,\tau=n+1)\\&\le \sum_{n=1}^\infty\exp\left\{-\lambda \frac{x}{n} -\frac{a \lambda}{2} n +C_0\lambda^2 n\right\}=\sum_{n=1}^\infty\exp\left\{-\lambda \frac{x}{n} - \lambda I n\right\},\end{align*}

where $\lambda = \frac{g(y)}{y}$ and $I=\frac{a}{2}-C_0\lambda$ . The assumption $C_0g(y)\le y\frac{a}{4}$ implies that $I\ge\frac{a}{4}$ . Since I is positive, we have the inequality

\begin{equation*}\int_{n-1}^n\exp\left\lbrace-\lambda \frac{x}{t+1}-\lambda I t\right\rbrace {\rm d} t\ge\exp\left\lbrace-\lambda \frac{x}{n}-\lambda In\right\rbrace,\qquad n\ge1,\end{equation*}

since $x/(t+1)\le x/n$ and $t\le n$ for $t\in[n-1,n]$ . With formula (25) on page 146 of [Reference Bateman1], we have

\begin{align*}\sum_{n=1}^\infty\exp\left\lbrace-\lambda\frac{x}{n}-\lambda I n\right\rbrace&\leq \int_0^\infty\exp\left\lbrace-\lambda \frac{x}{t+1}-\lambda I t\right\rbrace {\rm d} t ={\rm e}^{\lambda I}\int_1^\infty\exp\left\lbrace-\lambda \frac{x}{u}-\lambda I u\right\rbrace {\rm d} u\\&\le {\rm e}^{\lambda I}\sqrt{\frac{4x}{I}}K_1(2\lambda\sqrt{Ix}).\end{align*}

The factor ${\rm e}^{\lambda I}$ is bounded, since $\lambda\le \frac{a}{4C_0}$ and $I\le\frac{a}{2}$ .
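For completeness, the identity from [Reference Bateman1] used here reads, for $p,q>0$ ,

\begin{equation*}\int_0^\infty \exp\left\lbrace-\frac{p}{t}-qt\right\rbrace {\rm d} t=2\sqrt{\frac{p}{q}}\,K_1\left(2\sqrt{pq}\right),\end{equation*}

applied with $p=\lambda x$ and $q=\lambda I$ , which produces the factor $\sqrt{4x/I}K_1(2\lambda\sqrt{Ix})$ .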

Now, using the asymptotics for the modified Bessel function

\begin{equation*}K_1(z)\sim \sqrt{\frac{\pi}{2z}}{\rm e}^{-z},\qquad z\to\infty,\end{equation*}

we obtain

\begin{align*}\sum_{n=1}^\infty\exp\left\lbrace-\lambda\frac{x}{n}-\lambda I n\right\rbrace\leq Cx^{1/4}\exp\{-2\lambda\sqrt{Ix}\}.\end{align*}

Therefore, (32) is proven.

The second claim in the lemma obviously holds for all x such that $C_0g(\sqrt{2ax})\ge a\sqrt{2ax}$ . Assume that x is so large that $C_0g(\sqrt{2ax})< a\sqrt{2ax}$ . Clearly,

\begin{equation*}\mathbb{P}(A_\tau \gt x)\le \mathbb{P}(A_\tau \gt x, \overline X_\tau\le \sqrt{2ax})+\mathbb{P}(A_\tau \gt x, \overline X_\tau \gt \sqrt{2ax})\eqqcolon P_1+P_2.\end{equation*}

By (32) with $y=\sqrt{2ax}$ ,

\begin{equation*}P_1\le Cx^{1/4} \exp\left\{-g(\sqrt{2ax})\sqrt{1-\frac{2C_0g(\sqrt{2ax})}{a\sqrt{2ax}}}\right\}.\end{equation*}

Next,

\begin{align*}P_2&\le \sum_{n=0}^\infty\mathbb{P}(A_\tau\geq x, \overline X_n \le \sqrt{2ax}, X_{n+1}>\sqrt{2ax} ,\tau \gt n)\\&\le \sum_{n=0}^\infty\mathbb{P}( X_{n+1}>\sqrt{2ax}) \mathbb{P}(\tau \gt n) \le\mathbb{E}[\tau] \overline F(\sqrt {2a x}) = o(P_1).\end{align*}

Then, the claim follows.

Now we will give a lower bound.

Lemma 8. Let $\mathbb{E}[X_1]=-a<0$ and ${\rm Var}(X_1)<\infty$ . Then, for any $\varepsilon>0$ there exists $C>0$ such that

\begin{equation*}\liminf_{x\to\infty}\frac{\mathbb{P}(A_\tau \gt x)}{\overline F(\sqrt{2ax}+Cx^{1/4+\varepsilon/2})}\ge \mathbb{E}\tau.\end{equation*}

Proof. Fix $N\ge 1$ . Put $y^+ =\sqrt{2ax}+Cx^{1/4+\varepsilon/2},$ where C will be picked later. Since $\mathbb{E}[X_1^2]<\infty$ , by the strong law of large numbers,

\begin{equation*}\frac{S_{l}+al}{l^{1/2+\varepsilon}}\to 0, \qquad l\to \infty \mbox{ almost surely}.\end{equation*}

Hence, for any $\delta>0$ we can pick $R>0$ such that

\begin{equation*}\mathbb{P}\left( \min_{l\le \sqrt{2x/a}} (S_{l}+al +R + l^{1/2+\varepsilon})>0\right)>(1-\delta).\end{equation*}

Define

\begin{equation*}E^+_k\coloneqq \left\{\min_{l\le \sqrt{2x/a}} (S_{k+l}-S_k+al +R + l^{1/2+\varepsilon})>0,\tau \gt k, S_k>y^+\right\}.\end{equation*}

If $C>1+(2/a)$ then, for all x large enough, $al+l^{1/2+\varepsilon}+R\le\sqrt{2ax}+(2x/a)^{1/4+\varepsilon/2}+R\le y^+$ for all $l\le\sqrt{2x/a}$ . Therefore, for every $k\le N$ , $E^+_k\subset \{\tau \gt k+\sqrt{2x/a}\}$ . Furthermore, on the event $E_k^+$ ,

\begin{align*}A_\tau&>\sum_{l=0}^{\sqrt{2x/a}}S_{k+l}>y^+\sqrt{2x/a}+\sum_{l=0}^{\sqrt{2x/a}}(S_{k+l}-S_k)\\&>y^+\sqrt{2x/a}-\sum_{l=0}^{\sqrt{2x/a}}(al+l^{1/2+\varepsilon}+R) .\end{align*}

Now, we can choose C so large that, for every $k\le N$ , $E_k^+\subset \{A_\tau \gt x\}$ . Hence,

\begin{align*}&\mathbb{P}(A_{\tau}>x) \ge\sum_{k=1}^N\mathbb{P}(A_\tau \gt x, \overline X_{k-1}\le y^+, X_k>y^+, \tau \gt k)\\&\quad\ge\sum_{k=1}^N\mathbb{P}(E_k^+, \overline X_{k-1}\le y^+, X_k>y^+, \tau \gt k)\\&\quad\ge\sum_{k=1}^N\mathbb{P}\left(\overline X_{k-1}\le y^+,\tau \gt k-1, X_k>y^+, \min_{l\le \sqrt{2x/a}} (S_{l+k}-S_k +al +R + l^{1/2+\varepsilon})>0\right)\\&\quad\ge (1-\delta)\sum_{k=1}^N \mathbb{P}\left(\overline X_{k-1}\le y^+,\tau \gt k-1\right)\overline F(y^+).\end{align*}

For every fixed k we have $\mathbb{P}\left(\overline X_{k-1}\le y^+,\tau \gt k-1\right) \to \mathbb{P}\left(\tau \gt k-1\right)$ as $x\to\infty$ . Furthermore, $\sum_{k=0}^N \mathbb{P}(\tau \gt k)\to\mathbb{E}\tau$ as $N\to\infty$ . Therefore, we can pick sufficiently large N such that

\begin{equation*}\liminf_{x\to\infty}\sum_{k=1}^N \mathbb{P}\left(\overline X_{k-1}\le y^+,\tau \gt k-1\right)\ge (1-\delta)\mathbb{E}\tau.\end{equation*}

Then, for all x sufficiently large, $\mathbb{P}(A_{\tau}>x)\ge (1-\delta)^2\mathbb{E}\tau\overline F(y^+)$ . As $\delta>0$ is arbitrarily small, we arrive at the conclusion.
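The one-big-jump mechanism behind this lower bound can be checked numerically. The following sketch is not part of the proof; all parameter choices (the Pareto index, the drift, the threshold x, the sample size) are illustrative. It simulates excursions of a walk with regularly varying increments and compares the empirical tail of $A_\tau$ with the predicted order $\mathbb{E}\tau\,\overline F(\sqrt{2ax})$ :

```python
import math
import random

# Illustrative parameters (not from the paper): increments X = P - SHIFT,
# where P is Pareto(ALPHA) on [1, inf), so that E[X] = -A < 0 and Var(X) < inf.
ALPHA, A = 3.0, 0.5
SHIFT = ALPHA / (ALPHA - 1.0) + A  # E[P] = ALPHA/(ALPHA-1), hence E[X] = -A

def increment(rng):
    # Pareto(ALPHA) via inverse transform; 1 - U lies in (0, 1], avoiding 0**(-1/ALPHA).
    return (1.0 - rng.random()) ** (-1.0 / ALPHA) - SHIFT

def excursion(rng):
    """Return (tau, A_tau): tau = min{n >= 1 : S_n <= 0}, A_tau = S_0 + ... + S_{tau-1}."""
    s, area, tau = 0.0, 0.0, 0
    while True:
        area += s
        tau += 1
        s += increment(rng)
        if s <= 0.0:
            return tau, area

def bar_F(t):
    # bar F(t) = P(X_1 > t) = (t + SHIFT)^(-ALPHA) whenever t + SHIFT >= 1.
    return (t + SHIFT) ** (-ALPHA) if t + SHIFT >= 1.0 else 1.0

rng = random.Random(1)
samples = [excursion(rng) for _ in range(200_000)]
mean_tau = sum(t for t, _ in samples) / len(samples)

x = 50.0
emp = sum(a > x for _, a in samples) / len(samples)
pred = mean_tau * bar_F(math.sqrt(2.0 * A * x))
print(f"empirical P(A_tau > {x}) = {emp:.2e}; E[tau] * bar_F(sqrt(2ax)) = {pred:.2e}")
```

At moderate x the two quantities agree only in order of magnitude; the asymptotic equivalence emerges slowly because $\overline F$ decays polynomially.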

Proof of Theorem 1. The upper bound follows from Lemma 7. The lower bound follows from Lemma 8. The rough asymptotics follow immediately from the lower and upper bounds, and from the observation that

(33) \begin{equation}\sup_{|y|\le x\rho(x)}\left|\frac{\log \overline F(x)}{\log \overline F(x+y)}-1\right|\to0,\end{equation}

where $\rho(x)\to0$ . To prove (33) we note that by (9) and (10)

\begin{align*} g(x+y)-g(x)&=\int_x^{x+y} g'(t) \, {\rm d} t \le\gamma_0 \int_x^{x+y} \frac{g(t)}{t} \, {\rm d} t\le \gamma_0 \frac{g(x)}{x^{\gamma_0}}\int_x^{x+y} \frac{1}{t^{1-\gamma_0}} \, {\rm d} t \\[3pt]&\le \gamma_0\frac{g(x)}{x^{\gamma_0}}\frac{y}{x^{1-\gamma_0}}=\gamma_0g(x)\frac{y}{x},\qquad y>0.\end{align*}

This implies that, as $x\to\infty$ ,

(34) \begin{equation}\sup_{|y|\le x\rho(x)}\left|\frac{g(x+y)}{g(x)}-1\right|\to0.\end{equation}

Recalling that $\log\overline{F}(x)\sim-g(x)-2\log x$ , one easily obtains (33).

4. Proof of Theorem 2

Set

\begin{equation*}h(x)\coloneqq \frac{\sqrt{2ax}}{g(\sqrt{2ax})}, \qquad y=\sqrt{2ax} - Ch(x) \log x ,\end{equation*}

where $C>\frac{5/4}{1-\gamma_0}$ . We first split the probability $\mathbb{P}(A_{\tau}>x)$ as follows:

\begin{align*}\mathbb{P}(A_{\tau}>x)&=\mathbb{P}(A_{\tau}>x, \overline X_\tau\le y)+\mathbb{P}\left(A_{\tau}>x, \overline X_\tau \gt \sqrt{2ax}-\frac{1}{\log x}h(x)\right)\\[3pt]& \quad + \mathbb{P}\left(A_{\tau}>x, \overline X_\tau\in \left[y, \sqrt{2ax}-\frac{1}{\log x}h(x)\right]\right)\eqqcolon P_1+P_2+P_3.\end{align*}

The first term will be estimated using the exponential bound proved in Lemma 6.

Lemma 9. Let $\mathbb{E}[X_1]=-a$ and $\mathrm{Var}(X_1)<\infty$ . Assume that (8) and (9) hold with some $\gamma_0<1/2$ , together with (11). Then, $P_1 = o(\overline F(\sqrt{2ax}))$ .

Proof. According to (32),

\begin{align*}P_1&\le Cx^{1/4}\exp\left\{-2\frac{g(y)}{y}\sqrt{\left(\frac{a}{2}-2C_0\frac{g(y)}{y}\right)x}\right\}.\end{align*}

Since (9) holds for some $\gamma_0<1/2$ , we have $g^2(y)/y\to 0$ , and hence, for all x large enough,

\begin{equation*}P_1 \le Cx^{1/4}\exp\left\{-\frac{g(y)}{y}\sqrt{2ax}\right\}.\end{equation*}

Then,

\begin{equation*}\frac{P_1}{\overline F(\sqrt{2ax})}\le C x^{5/4}\exp\left\{g(\sqrt{2ax})-\frac{g(y)}{y}\sqrt{2ax}\right\}.\end{equation*}

To finish the proof, it is sufficient to show that

(35) \begin{equation}g(\sqrt{2ax})-\frac{g(y)}{y}\sqrt{2ax} + \frac{5}{4}\log x \to -\infty,\qquad x\to \infty.\end{equation}

We first note that

\begin{align*}d(x)&\coloneqq g(\sqrt{2ax})-\frac{g(y)}{y}\sqrt{2ax} =g(\sqrt{2ax}) - \frac{g(y)}{1-C\frac{\log x}{g(\sqrt{2ax})}}\\[3pt]&= g(\sqrt{2ax}) - g(y) - (C+o(1)) \log x \frac{g(y)}{g(\sqrt{2ax})}.\end{align*}

Using (18), we can see that

\begin{equation*}g(\sqrt{2ax}) - g(y) \le \gamma_0 \frac{g(y)}{y} (\sqrt{2ax}-y)=\gamma_0 C \frac{g(y)}{y}\log x \frac{\sqrt{2ax}}{g(\sqrt{2ax})}.\end{equation*}

Hence,

\begin{equation*}d(x)\le \left(\gamma_0\frac{\sqrt{2ax}}{y}-1 \right) (C+o(1)) \frac{g(y)}{g(\sqrt{2ax})}\log x.\end{equation*}

According to (34), $g(y)\sim g(\sqrt{2ax})$ . Therefore, (35) is valid for any C satisfying $C(\gamma_0-1)+\frac{5}{4}<0$ .

The next lemma gives the term that dominates in $\mathbb{P}(A_\tau \gt x)$ .

Lemma 10. Under the assumptions of Lemma 9 we have the following estimate:

\begin{equation*}P_2\le (\mathbb{E}\tau+o(1))\overline F(\sqrt{2ax}), \qquad x\to \infty.\end{equation*}

Proof. Put

\begin{equation*}y^*=\sqrt{2ax}-\frac{h(x)}{\log x}.\end{equation*}

By the total probability formula,

\begin{align*}P_2&\le \sum_{n=0}^\infty\mathbb{P}(A_\tau\geq x, \overline{X}_n \le y^*, X_{n+1}>y^* ,\tau \gt n)\\&\le \sum_{n=0}^\infty\mathbb{P}( X_{n+1}>y^*) \mathbb{P}(\tau \gt n)= \mathbb{E}[\tau] \overline F(y^*).\end{align*}

Now, note that, by (18) and (34),

\begin{align*}\frac{\overline F(y^*)}{\overline F(\sqrt{2ax})}&\le (1+o(1)) {\rm e}^{g(\sqrt{2ax})-g(y^*)}\le (1+o(1)) \exp\left\{\frac{\gamma_0 g(y^*)}{y^*}(\sqrt{2ax}-y^*)\right\} \\&\le (1+o(1)) \exp\left\{\frac{\gamma_0 g(y^*)}{y^*}\frac{1}{\log x}\frac{\sqrt{2ax}}{g(\sqrt{2ax})}\right\} = 1+o(1).\end{align*}

Then the statement immediately follows.

We proceed to the analysis of $P_3$ . Fix some $\delta>0$ and set $z=\frac{1}{a}\left(\sqrt{2ax} + \delta\sqrt{x} \right)$ . We split $P_3$ further as follows:

\begin{align*}P_3\le P_{31}+P_{32}+P_{33} & \coloneqq \mathbb{P}\left(A_{\tau}>x, \overline X_\tau\in \left[y, \sqrt{2ax}-\frac{1}{\log x}h(x)\right];\ J_1;\ \tau\le z\right )\\& \quad +\mathbb{P}\left(A_{\tau}>x, \overline X_\tau\in \left[y, \sqrt{2ax}-\frac{1}{\log x}h(x)\right];\ J_{\ge 2}, \tau\le z\right)\\& \quad + \mathbb{P}(\tau \gt z),\end{align*}

where $J_1 =\{\mbox{there exists } k\in(1,\tau) \mbox{ such that } X_k>y \mbox{ and } \max_{1\le i\le \tau, i\neq k} X_i \le y\}$ and, correspondingly, $J_{\ge 2} =\{\mbox{there exist } k\neq l\in(1,\tau) \mbox{ such that } X_k>y \mbox{ and } X_l>y\}$ .

We start with the easier terms $P_{32}$ and $P_{33}$ . To deal with these terms we will use Proposition 2.

Lemma 11. Let assumptions (8) and (9) hold for $\gamma_0<1/2$ . Assume also that (11) holds. Then $P_{33} = o(\overline F(\sqrt{2ax}))$ as $x\to\infty$ .

Proof. We have, by Proposition 2, $P_{33}=\mathbb{P}(\tau \gt z)\le (\mathbb{E}\tau+o(1))\overline F(az) =O\big(\overline F(\sqrt{2ax}+\delta \sqrt{x})\big)$ . Therefore,

\begin{align*}\frac{P_{33}}{\overline F(\sqrt{2ax})}&\le C\exp\left\{g(\sqrt{2ax})-g(\sqrt{2ax}+\delta \sqrt{x})\right\}.\end{align*}

By the mean value theorem and by the assumption (11), $g(cx)-g(x)\to\infty$ as $x\to\infty$ for every $c>1$ . This completes the proof.

Lemma 12. Let $\mathbb{E}[X_1]=-a$ and $\mathrm{Var}(X_1)<\infty$ . Assume that (8) and (9) hold with some $\gamma_0<1/2$ , together with (11). Then $P_{32} = o(\overline F(\sqrt{2ax}))$ .

Proof. We can use the formula of total probability to write

\begin{equation*}P_{32} \le \sum_{k=1}^z \mathbb{P}(\tau=k, J_{\ge 2})\le \sum_{k=1}^z \frac{k^2}{2}\overline F(y)^2.\end{equation*}

Then,

\begin{equation*}\frac{P_{32}}{\overline F(\sqrt{2ax})}\le C x^{3/2}\frac{\overline F(y)^2}{\overline F(\sqrt{2ax})}\le C x^{1/2} {\rm e}^{g(\sqrt{2ax})-2g(y)}.\end{equation*}

Using (18) we can see that, in view of (12),

\begin{align*}\frac{P_{32}}{\overline F(\sqrt{2ax})} \le C x^{1/2} {\rm e}^{C\ln x -g(y)}\to 0 . \\[-40pt]\end{align*}

It remains to analyse $P_{31}$ . For that, introduce $\mu(y)\coloneqq \min\{n\ge 1\,{:}\, X_n>y\}$ . We complete the proof with the following lemma.

Lemma 13. Let assumptions (8), (9), and (11) hold for $\gamma_0<1/2$ . Then $P_{31} = o(\overline F(\sqrt{2ax}))$ as $x\to\infty$ .

Proof. First, represent event $J_1$ as $J_1=J_{11}\cup J_{12}$ , where

\begin{align*}J_{11}&\coloneqq \{\mbox{$X_k>y$ for exactly one $k\in(0,\tau)$ and $X_i\le x^\varepsilon$ for all other $i<\tau$}\},\\[3pt]J_{12}&\coloneqq \{\mbox{$X_k>y$ for exactly one $k\in(0,\tau)$ and $X_i>x^\varepsilon $ for some $i\neq k, i<\tau$}\}.\end{align*}

Then,

\begin{align*}Q_{2}&\coloneqq \mathbb{P}\left(A_{\tau}>x, \overline X_\tau\in \left[y, \sqrt{2ax}-\frac{1}{\log x}h(x)\right];\ J_{12},\tau\le z\right )\\[3pt]&\le \sum_{j=1}^z \mathbb{P}(\tau=j, J_{12})\le \sum_{j=1}^z \frac{j^2}{2}\overline F(y)\overline F(x^\varepsilon)\le z^3 \overline F(y)\overline F(x^\varepsilon) , \end{align*}

so

\begin{align*}\frac{Q_{2}}{\overline F(\sqrt{2ax})}\le Cx^{3/2-2\varepsilon} {\rm e}^{g(\sqrt{2ax})-g(y)-g(x^{\varepsilon})}.\end{align*}

By (18), $g(\sqrt{2ax})-g(y)\le C\ln x$ . Then, in view of the relation (12), we have $g(\sqrt{2ax})-g(y)-g(x^{\varepsilon})\le -4\ln x$ , which implies that $Q_{2} = o(\overline F(\sqrt{2ax}))$ .

To estimate

\begin{equation*}Q_{1}\coloneqq \mathbb{P}\left(A_{\tau}>x, \overline X_\tau\in \left[y, \sqrt{2ax}-\frac{1}{\log x}h(x)\right];\ J_{11},\tau\le z\right )\end{equation*}

we make use of the exponential bound given in Lemma 6. Put

\begin{equation*}x^+(k)=x-k\left(\sqrt{2ax}-\frac{h(x)}{\log x}\right).\end{equation*}

Then, we have

\begin{align*}Q_{1}&=\sum_{k=0}^{z-1} \sum_{j=1}^k \mathbb{P}\left(A_k>x, \max_{i\neq j, i\le k} X_i\le x^\varepsilon, X_j\in\left[y, \sqrt{2ax}-\frac{h(x)}{\log x}\right] , \tau=k+1\right)\\[3pt]&\le\sum_{k=1}^z (k+1)\mathbb{P}(A_k>x^+(k), \overline X_k\le x^\varepsilon)\overline F(y)\\[3pt]&\le Cx^{1/2}\overline F(y)\sum_{k=1}^z\exp\left\{-\lambda \frac{x^+(k)}{k} -\frac{a \lambda}{2} k +C\lambda^2 k\right\},\end{align*}

where $\lambda = \frac{g(x^\varepsilon)}{x^\varepsilon}$ . Now note that

\begin{equation*}-\lambda \frac{x^+(k)}{k} -\frac{a \lambda}{2} k=-\lambda\left(-\sqrt{2ax}+\frac{h(x)}{\log x} +\frac{x}{k}+\frac{ak}{2}\right).\end{equation*}

Since, by the inequality between the arithmetic and geometric means,

\begin{equation*}\frac{x}{k}+\frac{ak}{2}\ge \sqrt{2ax},\qquad k\ge1,\end{equation*}

we obtain

\begin{equation*}-\lambda \frac{x^+(k)}{k} -\frac{a \lambda}{2} k\le -\lambda \frac{h(x)}{\log x},\qquad k\ge1.\end{equation*}

Thus, $Q_{1} \le Cx {\rm e}^{-\lambda h(x)/\log x+\lambda^2 z}\overline{F}(y)$ . Next, we can pick $\varepsilon = \frac{1}{4(1-\gamma_0)} $ to achieve

\begin{align*}\lambda^2 z &\le C \left(\frac{g(x^\varepsilon)}{x^\varepsilon}\right)^2x^{1/2}=C\left(\frac{g(x^\varepsilon)}{x^{\varepsilon(1-1/(4\varepsilon))}}\right)^2=C\left(\frac{g(x^\varepsilon)}{x^{\gamma_0\varepsilon}}\right)^2\\[3pt]&<C\sup_t\left(\frac{g(t)}{t^{\gamma_0}}\right)^2<\infty \end{align*}

by the condition (9). Note that the assumption $\gamma_0<1/2$ implies that $\varepsilon=\frac{1}{4(1-\gamma_0)}<1/2$ . Then, using (8), we obtain

\begin{align*}\frac{Q_{1}}{\overline F(\sqrt{2ax})}\le Cx {\rm e}^{g(\sqrt{2ax})-g(y)-\lambda h(x)/\log x},\end{align*}

and, using (18),

\begin{align*}\frac{Q_{1}}{\overline F(\sqrt{2ax})}\le Cx^{C} {\rm e}^{-\lambda h(x)/\log x}.\end{align*}

Finally, noting that

\begin{align*}\lambda h(x) = \frac{g(x^\varepsilon)}{x^{\varepsilon}} \frac{\sqrt{2ax}}{g(\sqrt{2ax})}\end{align*}

grows polynomially, we obtain the required convergence to 0. The polynomial growth can be immediately seen for $g(x)=x^{\gamma_0}$ . However, a proper proof goes as follows:

\begin{align*}g(C\sqrt{x}) &= g(x^\varepsilon)+\int_{x^\varepsilon}^{C\sqrt{x}} g'(t) \, {\rm d} t\le g(x^\varepsilon)+\gamma_0\int_{x^\varepsilon}^{C\sqrt{x}} \frac{g(t)}{t} \, {\rm d} t \\[3pt]&\le g(x^\varepsilon)+\gamma_0\int_{x^\varepsilon}^{C\sqrt{x}} \frac{g(t)}{t^{\gamma_0}} t^{\gamma_0-1} \, {\rm d} t\le g(x^\varepsilon)+\frac{g(x^\varepsilon)}{x^{\varepsilon\gamma_0}}\int_{x^\varepsilon}^{C\sqrt{x}} t^{\gamma_0-1} \, {\rm d} t \\[3pt]&\le g(x^\varepsilon)+C\frac{g(x^\varepsilon)}{x^{\varepsilon \gamma_0}} x^{\gamma_0/2}\le C g(x^\varepsilon)x^{\gamma_0(1/2-\varepsilon)}.\end{align*}

Therefore $\lambda h(x)\ge cx^{1/2-\varepsilon} x^{-\gamma_0(1/2-\varepsilon)}=cx^{(1-\gamma_0)/2-1/4}$ for some $c>0$ , where we have used the equality $\varepsilon=\frac{1}{4(1-\gamma_0)}$ .

Proof of Theorem 2. Combining the preceding lemmas gives the upper bound. The lower bound has been shown in (5) under even weaker conditions.

Acknowledgement

We would like to thank the anonymous referees for a number of useful comments and suggestions that helped us to improve the paper.

References

Bateman, H. (1954). Tables of Integral Transforms, Vol. 1. McGraw-Hill, New York.
Borovkov, A. A., Boxma, O. J. and Palmowski, Z. (2003). On the integral of the workload process of the single server queue. J. Appl. Prob. 40, 200–225.
Carmona, P. and Pétrélis, N. (2019). A shape theorem for the scaling limit of the IPDSAW at criticality. Ann. Appl. Prob. 29, 875–930.
Denisov, D. (2005). A note on the asymptotics for the maximum on a random time interval of a random walk. Markov Process. Relat. Fields 11, 165–169.
Denisov, D., Dieker, A. B. and Shneer, V. (2008). Large deviations for random walks under subexponentiality: the big-jump domain. Ann. Prob. 36, 1946–1991.
Denisov, D., Kolb, M. and Wachtel, V. (2015). Local asymptotics for the area of random walk excursions. J. London Math. Soc. 91, 495–513.
Denisov, D. and Shneer, V. (2013). Asymptotics for the first passage times of Lévy processes and random walks. J. Appl. Prob. 50, 64–84.
Dobrushin, R. and Hryniv, O. (1996). Fluctuations of shapes of large areas under paths of random walks. Prob. Theory Relat. Fields 105, 423–458.
Doney, R. A. (1989). On the asymptotic behaviour of first passage times for transient random walks. Prob. Theory Relat. Fields 81, 239–246.
Duffy, K. and Meyn, S. (2014). Large deviation asymptotics for busy periods. Stoch. Syst. 4, 300–319.
Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. 2. Wiley, New York.
Foss, S., Palmowski, Z. and Zachary, S. (2005). The probability of exceeding a high boundary on a random time interval for a heavy-tailed random walk. Ann. Appl. Prob. 15, 1936–1957.
Kulik, R. and Palmowski, Z. (2011). Tail behaviour of the area under a random process, with applications to queueing systems, insurance and percolations. Queueing Systems 68, 275–284.
Perfilev, E. and Wachtel, V. (2018). Local asymptotics for the area under the random walk excursion. Adv. Appl. Prob. 50, 600–620.
Rozovskii, L. V. (1993). Probabilities of large deviations on the whole axis. Theory Prob. Appl. 38, 53–79.
Takacs, L. (1991). A Bernoulli excursion and its various applications. Adv. Appl. Prob. 23, 557–585.
Takacs, L. (1992). On the total heights of random rooted trees. J. Appl. Prob. 29, 543–556.
Takacs, L. (1994). On the total heights of random rooted binary trees. J. Combinatorial Theory B 61, 155–166.