
The exact asymptotics of the large deviation probabilities in the multivariate boundary crossing problem

Published online by Cambridge University Press:  03 September 2019

Yuqing Pan*
Affiliation:
The University of Melbourne
Konstantin A. Borovkov*
Affiliation:
The University of Melbourne
*
*Postal address: School of Mathematics and Statistics, The University of Melbourne, Parkville VIC 3010, Australia.

Abstract

For a multivariate random walk with independent and identically distributed jumps satisfying the Cramér moment condition and having mean vector with at least one negative component, we derive the exact asymptotics of the probability of ever hitting the positive orthant that is being translated to infinity along a fixed vector with positive components. This problem is motivated by and extends results of Avram et al. (2008) on a two-dimensional risk process. Our approach combines the large deviation techniques from a series of papers by Borovkov and Mogulskii from around 2000 with new auxiliary constructions, enabling us to extend their results on hitting remote sets with smooth boundaries to the case of boundaries with a ‘corner’ at the ‘most probable hitting point’. We also discuss how our results can be extended to the case of more general target sets.

Type
Original Article
Copyright
© Applied Probability Trust 2019 

1. Introduction

The present work was motivated by the following two-dimensional risk model from [Reference Avram, Palmowski and Pistorius2]. Consider two insurance companies that divide between them both claims and premia in specified fixed proportions, so that their risk processes $U_1$ and $U_2$ are respectively

\begin{align} U_i(t)\,:\!= u_i + c_i t- S_i(t),\quad i=1,2, \end{align}

where the $u_i>0$ are the initial reserves, the $c_i>0$ are the premium rates, and their respective claim processes $S_i(t)=\delta_i S (t) $ are fixed proportions (the $\delta_i>0$ are constants, $\delta_1+\delta_2=1$ ) of a common process S(t) of claims made against them. It is assumed [Reference Avram, Palmowski and Pistorius2] for definiteness that $c_1/\delta_1 > c_2/\delta_2$ , i.e. the second company receives less premium per amount paid out and so can be considered as a reinsurer. Avram et al. [Reference Avram, Palmowski and Pistorius2] mostly dealt with the two ruin times

\begin{align} \tau_{\rm or} \,:\!= \inf\{t\ge 0\colon U_1(t) \wedge U_2 (t) \lt0\}, \quad \tau_{\rm sim} \,:\!= \inf\{t\ge 0\colon U_1(t) \vee U_2 (t) \lt0\}, \end{align}

at which at least one of the two or both of the companies are ruined, respectively. The key observation made in [Reference Avram, Palmowski and Pistorius2] was that both times are actually the first crossing times of some piecewise linear boundaries by the univariate claim process S(t). Thus, the problem of computing the respective ‘bivariate ultimate ruin probabilities’

\[ \Psi_{\rm or} (u_1, u_2)\,:\!= {\mathbb{P}} (\tau_{\rm or}\lt \infty),\quad \Psi_{\rm sim} (u_1, u_2)\,:\!= {\mathbb{P}} (\tau_{\rm sim}\lt \infty) \]

is reduced to finding univariate boundary crossing probabilities. When $u_1/\delta_1\ge u_2/\delta_2$ , the latter problem further reduces to simply computing usual univariate ruin probabilities. However, in the alternative case the situation is more interesting. For that latter case, assuming that S(t) is a compound Poisson process with positive jumps satisfying the Cramér moment condition, Theorem 5 of [Reference Avram, Palmowski and Pistorius2] gives the exact asymptotics of $ \Psi_{\rm sim} (as, s) $ as $s\to \infty$ , whose nature depends on the (fixed) value of $a > 0$ . Namely, there exist a function $\kappa(a)>0$ and values $0\le a_1 \lt a_2\le \infty$ such that

(1) \begin{equation} \Psi_{\rm sim} (as, s) = (1+o(1)) \begin{cases} C(a)\, {\rm e}^{-\kappa (a) s}, & a\not \in (a_1, a_2), \\ C(a)\, s^{-1/2 }\, {\rm e}^{-\kappa (a) s}, & a \in (a_1, a_2), \end{cases} \quad \text{as }s \to \infty. \end{equation}

However, the approach from [Reference Avram, Palmowski and Pistorius2] does not work in the case of a nondegenerate structure of the claim process $ (S_1(t), S_2(t)) $ , and so the general problem of finding the exact ruin probability asymptotics remained open.
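Before turning to the general multivariate setting, it may help to see the bivariate model above in action. The following is a minimal simulation sketch of the proportional-claims risk process; the compound Poisson structure is as in [Reference Avram, Palmowski and Pistorius2], while the exponential claim sizes and all numerical parameters are illustrative assumptions only.

```python
import math
import random

def simulate_ruin_times(u, c, delta, lam, claim_mean, horizon=2000.0, seed=None):
    """Simulate U_i(t) = u_i + c_i t - delta_i S(t), i = 1, 2, where S(t) is a
    compound Poisson process (claim rate lam, exponential claim sizes).
    Between claims the reserves only grow, so ruin can occur only at claim
    epochs and it suffices to check the reserves there.
    Returns (tau_or, tau_sim); math.inf means no ruin before `horizon`."""
    rng = random.Random(seed)
    t, s = 0.0, 0.0                        # current time, accumulated claims S(t)
    tau_or, tau_sim = math.inf, math.inf
    while t < horizon:
        t += rng.expovariate(lam)          # next claim epoch
        s += rng.expovariate(1.0 / claim_mean)
        reserves = [u[i] + c[i] * t - delta[i] * s for i in range(2)]
        if min(reserves) < 0 and math.isinf(tau_or):
            tau_or = t
        if max(reserves) < 0:
            tau_sim = t
            break                          # both companies ruined; stop
    return tau_or, tau_sim
```

By construction $\tau_{\rm or}\le\tau_{\rm sim}$ on every path, and estimating $\Psi_{\rm sim}$ amounts to averaging the indicator of $\{\tau_{\rm sim}\lt\infty\}$ over many paths.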

We extend the asymptotics of the simultaneous ruin probability derived in Theorem 5 of [Reference Avram, Palmowski and Pistorius2] to a much more general class of d-variate, $d \ge 2,$ Sparre Andersen-type ruin models, in which there are d companies receiving premiums at the respective constant rates $c_i,\, i = 1, \ldots,d$ , and the claim events occur at the ‘event times’ $\tau(j),\,j\ge 1$ , in a renewal process N(t) (with independent and identically distributed (i.i.d.) interclaim times $\tau(j)-\tau (j-1)>0$ , $\tau (0)\,:\!=0$ ). For the jth claim event, the amount company i has to pay is the ith component of a d-variate random vector ${{J}}(j)\,:\!=(J_1(j), \ldots, J_d(j)) $ with a general (light tail) distribution, the vectors $ (\tau(j)-\tau(j-1), {{J}}(j)), \,j \geq 1,$ forming an i.i.d. sequence. Recall that in Theorem 5 of [Reference Avram, Palmowski and Pistorius2], ${{J}}(j)=(\delta_1, \delta_2)J(j) $ for an i.i.d. sequence of claims $J(j),\ N(t) $ being a Poisson process independent of the J(j). We achieve this by reducing the problem to finding the asymptotic behavior of the hitting probability of a remote set by an embedded random walk (RW). The latter problem was solved in [Reference Borovkov and Mogulskii10], but only in the case when the boundary of that set is smooth at the ‘most probable (hitting) point’ of that set by the RW. In that case, the asymptotics of the hitting probability are of the form represented by the first line on the right-hand side of (1).

The main contribution of the present work is an extension of the multivariate large deviation techniques from [Reference Borovkov and Mogulskii10] to the cases where the boundary of the remote set is not smooth at the ‘global most probable point’ (GMPP; for the formal definition thereof, see the text after (18) below), but, rather, that point is located at the ‘apex of the corner’ on the boundary. It is in such cases that the hitting probability asymptotics (in the bivariate case) are of the form shown in the second line on the right-hand side of (1). The condition $a\in (a_1, a_2) $ in (1) is equivalent to the GMPP being at the ‘corner’ of the remote quadrant (of which the hitting will mean simultaneous ruin in the bivariate case), while $a\not\in [a_1, a_2]$ corresponds to the GMPP being on one of the sides of the quadrant, which is the ‘smooth boundary case’ dealt with in [Reference Borovkov and Mogulskii10] (the boundary cases $a=a_i,\,i=1,2,$ correspond to the situations discussed in Remark 1 below). In Remark 1 we explain the ‘genesis’ of the power factor in the asymptotics in the case when the GMPP is at the ‘corner point’ of the target set. It turns out that in the case $d>2$ there is a whole spectrum of different power factors that can appear in front of the exponential factor for the asymptotic representation of the hitting probability, depending on the dimensionality of the ‘target set’ boundary component to which the GMPP belongs; see Remark 2 below.

To explain in more detail, let

\[ Q^+ \,:\!= \{ {{x}}=(x_1, \ldots, x_d) \in {\mathbb{R}}^d\colon x_j > 0,\, 1 \le j \le d \}. \]

Next note that, in the abovementioned d-variate Sparre Andersen-type model, the simultaneous ruin event is equivalent to the d-variate RW

(2) \begin{equation} {{S}}(n) \,:\!= \sum_{j = 1}^{n}{\rm{\xi}}(j), \quad n = 0,1,2,\ldots, \end{equation}

with i.i.d. jumps ${\rm{\xi}}(j) \,:\!= {{J}}(j) - {{c}}(\tau(j) - \tau(j-1)), \, j \geq 1,{{c}}\,:\!= (c_1, \ldots, c_d), $ hitting the set ${{u}} + {\rm cl}(Q^+), {{u}} \,:\!= (u_1, \ldots, u_d) \in Q^+$ being the vector of initial reserves of the d companies (here and in what follows, ${\rm cl}(V) $ and ${\rm int} (V) $ stand for the closure and interior of the set V, respectively), i.e.

\[ \{\tau_{\rm sim} \lt \infty\} = \{\eta ( {{u}} + {\rm cl}(Q^+)) \lt \infty \}, \quad\text{where } \eta (V) \,:\!=\inf\{n\ge 0\colon {{S}}(n)\in V\} \]

is the first hitting time of the Borel set $V \subset {\mathbb R}^d $ by the RW S. Assuming further that ${{u}}= s{{g}}$ for a fixed ${{g}}\in Q^+$ and an $s>0$ , we see that ${{u}} + {\rm cl}(Q^+) = sG,$ where

\[ G\,:\!={{g}}+ {\rm cl}(Q^+). \]

Therefore, the problem of finding the asymptotics of $\Psi_{\rm sim} ( s{{g}}) $ as $s\to \infty$ dealt with in Theorem 5 of [Reference Avram, Palmowski and Pistorius2] is reduced to a special case of the main problem considered in [Reference Borovkov and Mogulskii10], i.e. computing the asymptotics of the probability

(3) \begin{align} {\mathbb{P}}(\eta (s G) \lt \infty) \quad\text{as } s\to\infty. \end{align}
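For small dimensions and moderate $s$, the probability (3) can be estimated by crude Monte Carlo. The sketch below uses standard Gaussian jumps with a negative mean vector purely as an illustrative choice; truncating each path at a fixed number of steps is reasonable because, under the negative drift, late hits are very unlikely.

```python
import random

def hit_prob(g, mean, s, n_steps=300, n_paths=2000, seed=0):
    """Crude Monte Carlo estimate of P(eta(sG) < infinity) for a planar random
    walk S(n) with i.i.d. N(mean, I) jumps and G = g + cl(Q+): count the paths
    that enter the translated orthant s*g + cl(Q+) within n_steps steps."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_paths):
        x = y = 0.0
        for _ in range(n_steps):
            x += rng.gauss(mean[0], 1.0)
            y += rng.gauss(mean[1], 1.0)
            if x >= s * g[0] and y >= s * g[1]:
                hits += 1
                break
    return hits / n_paths

# The estimate decays quickly in s, consistent with the e^{-sD(G)} rate:
p_small = hit_prob((1.0, 1.0), (-0.3, -0.3), s=2.0)
p_large = hit_prob((1.0, 1.0), (-0.3, -0.3), s=4.0)
```

Such estimates become useless for large $s$ precisely because the probability decays exponentially, which is what the exponential change of measure behind the Cramér transform discussed below is designed to overcome.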

However, as already mentioned, the main condition imposed in [Reference Borovkov and Mogulskii10] on the admissible sets G in (3) was that the boundary of sG at the ‘most probable point’ of that set is smooth (for precise definitions, see [Reference Borovkov and Mogulskii10, pp.248, 256]). This is not satisfied in the most interesting case of our ruin problem, where that point is located at the ‘corner’ s g of the set sG (which means, roughly speaking, that given that the RW S eventually hits sG, it is most likely that it does that in the vicinity of that point s g). Thus, the results of [Reference Borovkov and Mogulskii10] are not applicable in that case. In this paper, we extend them to such situations, obtaining asymptotics for (3) of the form somewhat different from those in the ‘smooth boundary case’. In particular, in the case $d = 2$ our result implies the relation in the second line in (1) for our Sparre Andersen-type model.

Roughly speaking, the asymptotics of (3) in the d-dimensional case, when the boundary of G is smooth in the vicinity of the most probable point, was derived in [Reference Borovkov and Mogulskii10] as follows. Let

(4) \begin{equation} \Delta[{{y}}) \,:\!= \prod_{j = 1}^d [y_j, y_{j} + \Delta) \end{equation}

be the cube with the ‘left-bottom’ corner y and edge length $\Delta > 0$ . Starting with the representation

(5) \begin{equation} {\mathbb{P}}( \eta(sG) \lt \infty ) = \sum_{n \geq 1} {\mathbb{P}}( \eta(sG) = n ), \end{equation}

we compute the value of the summands on the right-hand side (RHS) of (5) by summing terms of the form

(6) \begin{equation} {\mathbb{P}}(\eta(sG) = n \mid {{S}}(n) \in \Delta[{{y}}) ){\mathbb{P}}( {{S}}(n) \in \Delta[{{y}}) ) \end{equation}

over y-values on a $\Delta$ -grid in a half-space (used instead of sG when s is large, which is possible since the boundary of the set is smooth in the vicinity of the most probable point). The second factor in (6) is evaluated using the integro-local large deviation theorem [Reference Borovkov and Mogulskii8], whereas the first factor can be computed using the smoothness of the boundary $\partial G$ by reducing the problem to evaluating the distribution of the global minimum of a one-dimensional RW with a positive trend. Finally, the sum on the RHS of (5) is computed using the Laplace method [Reference Erdélyi11].
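The Laplace summation step can be illustrated on a toy example: for a smooth exponent $g(u)$ with an interior minimum, $\sum_n {\rm e}^{-s g(n/s)} \approx {\rm e}^{-s \min g}\sqrt{2\pi s/g''(u_{\min})}$ as $s\to\infty$. The quadratic $g$ below is a hypothetical stand-in for the actual exponents arising in the proof.

```python
import math

def g(u):
    """Toy smooth exponent: minimum value 1 attained at u = 2, with g''(2) = 1."""
    return 0.5 * (u - 2.0) ** 2 + 1.0

s = 200.0

# Direct summation: the terms are negligible except for n within O(sqrt(s)) of 2s.
exact = sum(math.exp(-s * g(n / s)) for n in range(1, 4000))

# Laplace approximation: e^{-s * min g} * sqrt(2 * pi * s / g''(u_min)).
laplace = math.exp(-s * 1.0) * math.sqrt(2.0 * math.pi * s / 1.0)
```

Already at $s = 200$ the two quantities agree to well under one per cent, even though both are of the tiny order ${\rm e}^{-200}$.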

A direct implementation of the above scheme, however, encounters serious technical difficulties in our case (in particular, the abovementioned reduction to the univariate problem is no longer available when computing the first factor in (6)). That may explain why Borovkov and Mogulskii [Reference Borovkov and Mogulskii10] only dealt with the smooth boundary case. In the present paper, we employ a more feasible approach which introduces an auxiliary half-space $\widehat{H} \supset G,\ \partial \widehat{H} \cap G = \{ {{g}}\},$ such that the logarithmic asymptotics of ${\mathbb{P}}( \eta(s\widehat{H}) \lt \infty ) $ as $s\to\infty$ are the same as for ${\mathbb{P}}(\eta(sG) \lt \infty ),$ and the ‘most probable points’ for $s\widehat{H}$ and sG both coincide with s g (cf. Lemmata 2 and 3 in Section 3). Then, in Theorem 1 below, we use the approach from [Reference Borovkov and Mogulskii10] together with the integro-local large deviation theorem and the total probability formula to derive the fine asymptotics for the probabilities of the form

(7) \begin{equation} {\mathbb{P}}( \eta(sG) \lt \infty,\, \eta(s\widehat{H} ) = n, \, {{S}}(n) \in s{{g}} + \Delta[{{y}}) ). \end{equation}

Next we partition $\widehat{H}$ into a narrow half-cylinder with the generatrix orthogonal to $\partial \widehat{H},$ which covers the ‘very corner of sG’, and its complement in $\widehat{H}$ (as shown in Figure 3 in Section 3). The main contribution to ${\mathbb{P}}( \eta(sG) \lt \infty ) $ is computed by ‘integrating’ (7) in y over that half-cylinder and then by summing the resulting expressions (denoted by $P_{3,n}$ in the proof of Theorem 1 in Section 3; see (57) and (58)) over n using the Laplace method. The total contribution of the terms (7) with y outside that half-cylinder (which is equal to the sum $P_1+P_2$ ; cf. (54)) is shown to be negligibly small compared to the abovementioned main term.

To give precise definitions of the key concepts like the ‘most probable point’ and exact formulations of our results, we will need to introduce some notation and a number of important concepts from the large deviation theory for RWs with i.i.d. jumps in ${\mathbb{R}}^d$ . This is done in Section 2, which also contains a summary of the key properties of the deviation rate functions defined and discussed there, some auxiliary constructions, and the main result (Theorem 1) of the paper. Further auxiliary constructions and assertions are presented in Section 3, together with the proof of Theorem 1.

2. Some preliminaries and the main result

In this section we will present and discuss the key concepts needed for the Cramér large deviation theory, in particular, the first and second (deviation) rate functions. For an introduction to large deviation theory for univariate RWs and the main properties of the first rate function, see Chapter 9 of [Reference Borovkov5].

Unless stated otherwise, all the concepts and properties discussed in this section were introduced and/or established in [Reference Borovkov and Mogulskii6] and [Reference Borovkov and Mogulskii7]. Moreover, we will introduce three important conditions assumed to be met in the main theorem that we state at the end of the section. We conclude this section with remarks commenting on the difference between the forms of the asymptotics of the hitting probabilities in the smooth and nonsmooth cases, and also on possible extensions of our main result.

For vectors ${{u}} = (u_1, \ldots, u_d) $ and ${{v}} = (v_1, \ldots, v_d) \in {\mathbb{R}}^d,\,d \geq 2$ , we set $\langle {{u}}, {{v}} \rangle \,:\!= \smash{\sum_{i=1}^d u_iv_i}$ and $ \|{{v}}\| \,:\!= \langle {{v}}, {{v}} \rangle ^{1/2}$ . For a function $f \in C^1(S),$ S being an open subset of ${\mathbb{R}}^d$ , and ${{x}}\in S,$ we denote by $ f'({{x}}) \,:\!= \nabla_{{{x}}}f({{x}}),$ where $ \nabla_{{{x}}} \,:\!= ({\partial }/{\partial x_1}, \ldots, {\partial }/{\partial x_d}),$ the gradient of f at x. (On a couple of occasions, where it may be unclear with respect to what variable the gradient is computed, we will still have to use the nabla with the respective subscript.) By $f''$ we denote the Hessian of the function $f \in C^2(S),$

\begin{equation} f''({{x}}) \,:\!= \nabla_{{{x}}}^{\rm{T}} \nabla_{{{x}}}\, f({{x}}), \end{equation}

where ‘ ${\rm{T}}$ ’ denotes transposition.

Let ${\rm{\xi}}$ be a random vector in ${\mathbb{R}}^d$ satisfying the following condition.

(C1) The distribution F of ${\rm{\xi}}$ is nonlattice and there is no hyperplane $K = \{ {{x}} \colon \langle {{a}}, {{x}} \rangle = c \} \subset {\mathbb{R}}^d$ such that $F(K) = 1.$

The moment generating function of ${\xi} \in {\mathbb{R}}^d$ is denoted by

\begin{equation} \psi({\lambda})\,:\!= {\mathbb{E}}\, {\rm e}^{\langle {{\lambda}}, {{\xi}} \rangle} = \int {\rm e}^{\langle {{\lambda}}, {{x}} \rangle}\, F({\rm d}{{x}}), \quad {\lambda} \in {\mathbb{R}}^d. \end{equation}

Let $\Theta_{\psi} \,:\!= \{ {\lambda} \in {\mathbb{R}}^d\colon \psi({\lambda}) \lt \infty \}$ be the set on which $\psi$ is finite. It is well known that $\Theta_{\psi} $ is convex. We will need the following Cramér moment condition imposed on F.

(C2) $ \Theta_{\psi}$ contains a nonempty open set.

Under condition ( ${\rm C}_2$ ), for a fixed ${\lambda} \in \Theta_{\psi}$ , the Cramér transform $F_{{{\lambda}}}$ of the distribution F for that ${\lambda}$ is defined as the probability distribution given by

\begin{equation} F_{{{\lambda}}} (W) \,:\!= \frac{{\mathbb{E}} ({\rm e}^{\langle {{\lambda}},{{\xi}} \rangle} ;\ {\xi} \in W)}{\psi({\lambda})}, \quad W \in \mathcal{B}({\mathbb{R}}^d), \end{equation}

where $\mathcal{B}({\mathbb{R}}^d) $ is the $\sigma$ -algebra of Borel subsets of ${\mathbb{R}}^d$ (see, e.g. [Reference Borovkov3] and [Reference Borovkov4]). Denote by ${\rm{\xi}}_{({{\lambda}})}$ a random vector with distribution $F_{{{\lambda}}}$ .

The first rate function $\Lambda({\alpha}) $ for the random vector ${\xi}$ is defined as

(8) \begin{equation} \Lambda({\alpha}) \,:\!= \sup_{{{\lambda}} \in \Theta_{\psi}} (\langle {\alpha},{\lambda} \rangle - \ln\psi({\lambda})), \quad {\alpha} \in {\mathbb{R}}^d, \end{equation}

which is the Legendre transform of the cumulant function $ \ln\psi({\lambda}) $ . For ${{\alpha}} \in {\mathbb{R}}^d$ , denote by ${\lambda}({{\alpha}}) $ the vector ${\lambda}$ at which the supremum in (8) is attained (when such a vector exists, in which case it is always unique):

\begin{equation} \Lambda({\alpha}) =\langle {\alpha},{\lambda}({{\alpha}})\rangle - \ln\psi({\lambda}({{\alpha}})). \end{equation}

Define the Cramér range $\Omega_{\Lambda}$ for F as the set of all vectors that can be obtained as the expectations of the Cramér transforms $F_{{{\lambda}}}$ of ${\rm{\xi}}$ for ${\lambda} \in {\rm int}(\Theta_{\psi}) $ :

\begin{equation} \Omega_{\Lambda} \,:\!= \bigg\{ {{\alpha}} = \frac{\psi'({\lambda})}{\psi({\lambda})} \equiv (\ln\psi({\lambda}))', \, {\lambda} \in {\rm int}(\Theta_{\psi}) \bigg\}. \end{equation}

The rate function $\Lambda$ is convex on ${\mathbb{R}}^d$ and strictly convex and analytic on $\Omega_{\Lambda}$ . Moreover, for ${{\alpha}}\in\Omega_{\Lambda}$ , we have (cf. [Reference Borovkov and Mogulskii6])

(9) \begin{equation} {\lambda}({{\alpha}}) = \Lambda'({{\alpha}}). \end{equation}
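Numerically, (8) is a concave maximization in ${\lambda}$, and the maximizer is exactly ${\lambda}({{\alpha}})$ from (9). A sketch for $N(\mu, I)$ jumps, where the closed forms $\ln\psi({\lambda}) = \langle{\lambda},\mu\rangle + \|{\lambda}\|^2/2$, $\Lambda({{\alpha}}) = \|{{\alpha}}-\mu\|^2/2$, and ${\lambda}({{\alpha}}) = {{\alpha}}-\mu$ are available for checking (the Gaussian jump distribution is an illustrative assumption):

```python
def log_mgf(lam, mu):
    """Cumulant function ln psi(lambda) for N(mu, I) jumps."""
    return sum(l * m for l, m in zip(lam, mu)) + 0.5 * sum(l * l for l in lam)

def rate_function(alpha, mu, steps=2000, lr=0.1):
    """Lambda(alpha) = sup_lambda { <alpha, lambda> - ln psi(lambda) } by gradient
    ascent on the concave objective; returns (Lambda(alpha), maximizer lambda(alpha))."""
    lam = [0.0] * len(alpha)
    for _ in range(steps):
        # Gradient of the objective: alpha - (ln psi)'(lam) = alpha - mu - lam here.
        lam = [l + lr * (a - m - l) for l, a, m in zip(lam, alpha, mu)]
    value = sum(a * l for a, l in zip(alpha, lam)) - log_mgf(lam, mu)
    return value, lam
```

For $\mu = (-0.3,-0.3)$ and ${{\alpha}} = (1,1)$ the routine converges to $\Lambda({{\alpha}}) = 1.69$ with maximizer $(1.3, 1.3) = {{\alpha}} - \mu$, matching (9).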

Introduce the notation $F^{({{\alpha}})} \,:\!= F_{{{\lambda}}({{\alpha}})} $ and ${\rm{\xi}}^{({{\alpha}})} \,:\!= {\rm{\xi}}_{({{\lambda}}({{\alpha}}))} ,$ and define

(10) \begin{equation} {{S}}^{({{\alpha}})}(n) \,:\!= \sum_{i=1}^n {\xi}^{({{\alpha}})}(i), \quad n =0, 1, \ldots , \end{equation}

where the $\smash{{\rm{\xi}}^{({{\alpha}})}(i)}$ are independent copies of $\smash{{\rm{\xi}}^{({{\alpha}})}}$ . For ${{\alpha}} \in \Omega_{\Lambda}$ , we can easily verify that

\begin{equation} {\mathbb{E}}\, {\xi}^{({{\alpha}})} = (\ln \psi({\lambda}))' |_{{{\lambda}} = {{\lambda}}({{\alpha}})} = {\alpha}, \quad {\rm{cov}}\, {\xi}^{({{\alpha}})} = (\ln \psi({\lambda}))'' |_{{{\lambda}} = {{\lambda}}({{\alpha}})} = ( \Lambda''({\alpha}) )^{-1}. \end{equation}

Denote by

\begin{equation} \sigma^2({\alpha}) \,:\!= \det {\rm{cov}}\,{\xi}^{({{\alpha}})} = \det ( \Lambda''({{\alpha}}) )^{-1} \end{equation}

the determinant of the covariance matrix of ${\xi}^{({{\alpha}})}$ .
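For Gaussian jumps the Cramér transform is explicit: tilting $N(\mu, I)$ by ${\lambda}$ gives $N(\mu+{\lambda}, I)$, so with ${\lambda} = {\lambda}({{\alpha}}) = {{\alpha}} - \mu$ the tilted jump has mean exactly ${{\alpha}}$, as the display above asserts. A quick simulation check (all numbers illustrative):

```python
import random

mu = (-0.3, -0.3)
alpha = (1.0, 1.0)
lam = tuple(a - m for a, m in zip(alpha, mu))   # lambda(alpha) = alpha - mu here

# Sample jumps from the tilted law F^(alpha) = N(mu + lambda, I) and average them;
# the empirical mean should be close to alpha.
rng = random.Random(0)
n = 20000
emp_mean = [0.0, 0.0]
for _ in range(n):
    for i in range(2):
        emp_mean[i] += rng.gauss(mu[i] + lam[i], 1.0) / n
```

This exponential change of measure is what makes rare hitting events typical under the tilted walk $S^{({{\alpha}})}$ in (10).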

The probabilistic interpretation of the first rate function is as follows (see, e.g. [Reference Borovkov and Mogulskii10]): for any ${{\alpha}} \in {\mathbb{R}}^d$ , letting $U_{\varepsilon}({{\alpha}}) $ denote the $\varepsilon$ -neighborhood of ${{\alpha}}$ ,

\begin{equation} \Lambda({{\alpha}}) = -\lim_{\varepsilon \rightarrow 0}\lim_{n \rightarrow \infty} \frac{1}{n}\ln {\mathbb{P}}\bigg(\frac{{{S}}(n)}{n} \in U_{\varepsilon}({{\alpha}}) \bigg). \end{equation}

Accordingly, for a set $B \subset {\mathbb{R}}^d$ , any point ${{\alpha}} \in B$ such that

(11) \begin{equation} \Lambda({{\alpha}}) = \inf_{{{v}} \in B}\Lambda({{v}}) \end{equation}

is called the most probable point (MPP) of B. If such an ${{\alpha}}$ is unique, we denote it by

(12) \begin{equation} {{\alpha}}[B] \,:\!= \text{arg\,min}_{{{v}} \in B}\Lambda({{v}}). \end{equation}

Since $\Lambda$ is convex, $\Lambda({{\alpha}}) \geq 0$ for any ${{\alpha}} \in {\mathbb{R}}^d,$ and $\Lambda({{\alpha}}) = 0$ if and only if ${{\alpha}} = {\mathbb{E}} {\rm{\xi}}$ . Hence, for ${\mathbb{E}} {\rm{\xi}} \notin B, $ we always have

(13) \begin{equation} {{\alpha}}[B] = {{\alpha}}[\partial B], \end{equation}

so that the MPP in that case is on the boundary of B.

The concept of the MPP for the set G is related to the behavior of ${\mathbb{P}}({{{S}}(n)} \in sG ) $ as $s \rightarrow \infty,\ n \asymp s$ . However, we are interested in the probability of the event that the trajectory $\{{{{S}}(n)} \}_{n \geq 1}$ ever hits sG. To deal with that problem, we need to introduce the concept of the second rate function D defined in [Reference Borovkov and Mogulskii7] as follows:

(14) \begin{equation} D({{v}}) \,:\!= \inf_{t > 0} \frac{\Lambda(t{{v}})}{t}, \quad {{v}} \in {\mathbb{R}}^d. \end{equation}

This function admits an alternative representation (see Theorem 1 of [Reference Borovkov and Mogulskii7]) of the form

(15) \begin{equation} D({{v}})= \sup \{ \langle {\lambda},{{v}} \rangle \colon \psi ({\lambda})\le 1 \}, \quad {{v}} \in \Omega_\Lambda. \end{equation}
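The two representations (14) and (15) can be cross-checked numerically. Again taking $N(\mu, I)$ jumps as an illustrative example (so that $\Lambda$ has a closed form), the infimum in (14) is a convex one-dimensional problem in $t$, while the supremum in (15) can be brute-forced on a grid when $d = 2$:

```python
MU = (-0.3, -0.3)   # illustrative jump mean; jumps are N(MU, I)

def Lam(v):
    """Closed-form first rate function for N(MU, I): ||v - MU||^2 / 2."""
    return 0.5 * sum((a - m) ** 2 for a, m in zip(v, MU))

def D_via_inf(v):
    """D(v) = inf_{t>0} Lambda(t v)/t, by ternary search (the objective is convex in t)."""
    f = lambda t: Lam([t * x for x in v]) / t
    lo, hi = 1e-9, 50.0
    for _ in range(200):
        m1, m2 = lo + (hi - lo) / 3, hi - (hi - lo) / 3
        if f(m1) < f(m2):
            hi = m2
        else:
            lo = m1
    return f(0.5 * (lo + hi))

def D_via_sup(v, rad=3.0, step=0.01):
    """D(v) = sup{ <lam, v> : psi(lam) <= 1 }, i.e. ln psi(lam) <= 0, by grid search."""
    best = 0.0
    n = int(round(2 * rad / step))
    for i in range(n + 1):
        for j in range(n + 1):
            lam = (-rad + i * step, -rad + j * step)
            log_psi = sum(l * m for l, m in zip(lam, MU)) + 0.5 * sum(l * l for l in lam)
            if log_psi <= 0.0:
                best = max(best, sum(l * x for l, x in zip(lam, v)))
    return best
```

For ${{v}} = (1,1)$ and $\mu = (-0.3,-0.3)$ both routes give $D({{v}}) = 1.2$; the maximizer in (15) is ${\lambda}^* = (0.6, 0.6)$, which indeed satisfies $\psi({\lambda}^*) = 1$.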

The following key properties ( ${\rm D}_1$ )–( ${\rm D}_4$ ) of the second rate function will be used below. The first property is an immediate consequence of representation (15):

(D1) The function D is convex on ${\mathbb{R}}^d$ .

Now introduce $t({{v}}) $ as the point at which the infimum in (14) is attained:

\begin{equation} D({{v}}) = \frac{\Lambda( t({{v}}){{v}})}{t({{v}})}. \end{equation}

The next property is established in Theorem 2 of [Reference Borovkov and Mogulskii7].

(D2) For any ${{v}} \in \Omega_{\Lambda},$ the point $t({{v}}){{v}}$ is an analyticity point of $\Lambda$ and $t({{v}}) $ is unique.

It will also be convenient to consider the reciprocal quantity

\begin{equation} u \,:\!= \frac{1}{t}. \end{equation}

For ${{v}} \in {\mathbb{R}}^d$ and $B \subset {\mathbb{R}}^d$ , set

(16) \begin{equation} D_u({{v}}) \,:\!= u \Lambda \bigg( \frac{{{v}}}{u}\bigg), \quad D_u(B) \,:\!= \inf_{{{v}} \in B} D_u({{v}}), \end{equation}

and let

(17) \begin{equation} D(B) \,:\!= \inf_{u > 0}D_u(B) = \inf_{u > 0}\inf_{{{v}} \in B} u \Lambda \bigg( \frac{{{v}}}{u}\bigg). \end{equation}

The value

(18) \begin{equation} u_B \,:\!= \text{arg\,min}_{u > 0} D_u(B) \end{equation}

is called the most probable time (MPT) for the set $B \subset {\mathbb{R}}^d$ . We set

(19) \begin{equation} r_B \,:\!= \frac{1}{u_B}. \end{equation}

The reason for calling $u_B$ the MPT is as follows. The problem of hitting the remote set sB ( $s \rightarrow \infty$ ) by the RW $\{{{{S}}(n)} \}_{n \geq 1}$ can be re-stated in the scaled time-space framework as that of hitting the original set B by the process $\{s^{-1}{{S}}(\lfloor {su} \rfloor)\}_{ u > 0}$ . Then, given that this continuous-time process hits B, it is most likely to do so at a time close to $u_B$ .
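Continuing the Gaussian illustration (jumps $N(\mu, I)$ with $\mu = (-0.3,-0.3)$, an assumed example), the MPT for $G = {{g}} + {\rm cl}(Q^+)$ with the minimum attained at the vertex can be found by minimizing $u \mapsto D_u({{g}})$ over $u$:

```python
MU = (-0.3, -0.3)       # illustrative jump mean; jumps are N(MU, I)
G_VERTEX = (1.0, 1.0)   # the vector g

def Lam(v):
    """Closed-form rate function for N(MU, I) jumps."""
    return 0.5 * sum((a - m) ** 2 for a, m in zip(v, MU))

def D_u(u, v):
    """D_u(v) = u * Lambda(v / u), cf. (16)."""
    return u * Lam([x / u for x in v])

# Minimize D_u(g) over a fine grid of u values; the minimizer is the MPT u_G.
grid = [0.001 * k for k in range(1, 20000)]
u_G = min(grid, key=lambda u: D_u(u, G_VERTEX))
```

In this example $D_u({{g}}) = 1/u + 0.6 + 0.09u$, so $u_G = 10/3$: the scaled walk, given that it ever hits $G$, does so around time $10/3$.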

We refer to the point ${{b}}\in B$ such that $D({{b}})=D (B) $ as the global MPP (GMPP) for the set B. The probabilistic meaning of the GMPP is that, in a setting where $s\to\infty,$ if our RW ever hits the set sB, it is most likely that it will do that in the vicinity of s b (i.e. within a distance o(s) therefrom).
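In the same illustrative Gaussian setting one can check directly that the GMPP of $G = {{g}} + {\rm cl}(Q^+)$ sits at the vertex: for $N(\mu, I)$ jumps, minimizing $\Lambda(t{{v}})/t$ over $t$ gives the closed form $D({{v}}) = \|\mu\|\,\|{{v}}\| - \langle \mu, {{v}}\rangle$, and scanning $D$ along the two boundary rays of the quadrant locates the minimum at ${{g}}$ (all parameters are assumptions for the example):

```python
import math

MU = (-0.3, -0.3)   # illustrative jump mean; jumps are N(MU, I)

def D(v):
    """Second rate function for N(MU, I) jumps, in closed form:
    D(v) = ||MU|| * ||v|| - <MU, v> (from minimizing Lambda(t v)/t over t > 0)."""
    return math.hypot(*MU) * math.hypot(*v) - sum(m * x for m, x in zip(MU, v))

def gmpp_on_quadrant(g, reach=5.0, step=0.001):
    """Scan D along the two boundary rays of G = g + cl(Q+) (d = 2) and return
    the minimizing boundary point together with the value D(G)."""
    best_pt, best_val = g, D(g)
    k = 1
    while k * step <= reach:
        x = k * step
        for pt in ((g[0] + x, g[1]), (g[0], g[1] + x)):
            val = D(pt)
            if val < best_val:
                best_pt, best_val = pt, val
        k += 1
    return best_pt, best_val
```

With $\mu = (-0.3,-0.3)$ and ${{g}} = (1,1)$, the scan returns the vertex itself with $D(G) = 1.2$, i.e. the ‘corner’ case that this paper treats.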

We will need two more properties of the function D.

(D3) If we have $D(B) = D({{b}}) $ for a ${{b}} \in B \subset {\mathbb{R}}^d$ then

(20) \begin{equation} D' ({{v}})|_{{{v}} = {{b}}} = \Lambda' ({{\alpha}})|_{{{\alpha}} = r_B{{b}}} = {\lambda}(r_B{{b}}). \end{equation}

The latter equality in (20) is the known key property (9) of the rate function $\Lambda$ . To prove the former equality, note that, from ( ${\rm D}_2$ ) and the implicit function theorem, we have

\begin{equation} \frac{\partial}{\partial u}D_u({{v}})\bigg|_{u = u({{v}})} = 0, \quad u({{v}}) \,:\!= \frac{1}{t({{v}})}. \end{equation}

Therefore, as $ D ({{v}})=D_{u({{v}})}({{v}}), $ using the chain rule results in

\begin{align} D' ({{v}}) &= ( D_{u({{v}})}({{v}}))' \\ &=\frac{\partial}{\partial u}D_u({{v}})|_{u = u({{v}})} u'({{v}}) + ( \nabla_{{{v}}}D_u({{v}}))|_{u = u({{v}})} \\ &= \bigg(\nabla_{{{v}}} u \Lambda \bigg(\frac{{{v}}}{u}\bigg)\bigg) \bigg|_{u = u({{v}})} \\ &= \Lambda'({{\alpha}})|_{{{\alpha}} = {{v}}/u({{v}})}. \end{align}

As $u({{b}}) = u_B = 1/r_B$ by assumption, property ( ${\rm D}_3$ ) is proved.

(D4) $D_u({{v}}) = u \Lambda ({{v}}/u ) $ is a convex function of $ (u, {{v}}) \in {{\mathbb {R}}}^+\times {{\mathbb {R}}}^d.$

To prove this property, let $ (u_1, {{v}}_1), (u_2, {{v}}_2) \in {\mathbb {R}}^+ \times {\mathbb {R}}^d$ be any two points in time-space and let $a \in (0,1) $ . The function $\Lambda$ is convex, so that

\begin{equation} \Lambda(p{{\alpha}}_1 + (1-p){{\alpha}}_2) \le p\Lambda({{\alpha}}_1) + (1-p)\Lambda({{\alpha}}_2), \quad p \in (0,1), \, {{\alpha}}_1, {{\alpha}}_2 \in \Omega_{\Lambda}. \end{equation}

By choosing $p \,:\!= {au_1}/{(au_1 + (1-a)u_2)}, {{\alpha}}_1 \,:\!= {{v}}_1/u_1,$ and ${{\alpha}}_2 \,:\!= {{v}}_2/u_2$ in the above inequality and multiplying both sides by $au_1 + (1-a)u_2,$ we obtain

(21) \begin{equation} (au_1+(1-a)u_2) \Lambda \bigg(\frac{a{{v}}_1 + (1-a){{v}}_2}{au_1 + (1-a)u_2}\bigg) \leq au_1\Lambda \bigg( \frac{{{v}}_1}{u_1} \bigg) + (1-a)u_2\Lambda \bigg( \frac{{{v}}_2}{u_2} \bigg), \end{equation}

which establishes the desired convexity. Property ( ${\rm D}_4$ ) is proved.
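The convexity statement ( ${\rm D}_4$ ), i.e. inequality (21), can be spot-checked numerically at random points; the Gaussian rate function below is an illustrative stand-in for a general $\Lambda$:

```python
import random

MU = (-0.3, -0.3)   # illustrative jump mean; Lambda below is the N(MU, I) rate function

def Lam(v):
    return 0.5 * sum((a - m) ** 2 for a, m in zip(v, MU))

def D_u(u, v):
    return u * Lam([x / u for x in v])

rng = random.Random(1)
violations = 0
for _ in range(1000):
    u1, u2 = rng.uniform(0.1, 5.0), rng.uniform(0.1, 5.0)
    v1 = [rng.uniform(-3.0, 3.0) for _ in range(2)]
    v2 = [rng.uniform(-3.0, 3.0) for _ in range(2)]
    a = rng.random()
    mid_u = a * u1 + (1 - a) * u2
    mid_v = [a * x + (1 - a) * y for x, y in zip(v1, v2)]
    # Check (21): D_{a u1 + (1-a) u2}(a v1 + (1-a) v2) <= a D_{u1}(v1) + (1-a) D_{u2}(v2).
    if D_u(mid_u, mid_v) > a * D_u(u1, v1) + (1 - a) * D_u(u2, v2) + 1e-9:
        violations += 1
```

No violations occur, as expected: $u\Lambda({{v}}/u)$ is the perspective of the convex function $\Lambda$, and perspectives of convex functions are jointly convex.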

For $r > 0$ , let

(22) \begin{equation} L(r) \,:\!= \{ {{v}} \in {\mathbb {R}}^d\colon \Lambda({{v}}) = \Lambda({{\alpha}}(r)) \} \end{equation}

be the level surface (line when $d = 2$ ) of $\Lambda$ that passes through the point

(23) \begin{equation} {{\alpha}}(r) \,:\!= {{\alpha}}[rG] \end{equation}

(see (12); we assume here that there exists a unique point ${{\alpha}}$ satisfying (11) with $B=rG$ ) and introduce the respective superlevel set

\begin{equation} \widehat{L} (r) \,:\!= \{ {{v}} \in {\mathbb {R}}^d\colon \Lambda({{v}}) \geq \Lambda({{\alpha}}(r)) \}. \end{equation}

Lemma 1. Let $r > 0$ . If there is an ${{\alpha}}_0 \in \Omega_{\Lambda} $ such that ${{\alpha}}_0$ is an MPP for the set rG, then this MPP is unique for rG:

\begin{equation} \{ {{\alpha}}_0\} = \{{{\alpha}}(r)\} = L(r) \cap rG. \end{equation}

The proof of Lemma 1 is given in Section 3.

Consider the following condition, which depends on the parameter $r > 0$ .

(C3(r)) We have

\begin{equation} {\lambda}(r{{g}}) \in Q^+, \quad r{{g}} \in \Omega_{\Lambda}, \quad \langle {\mathbb {E}} {\rm{\xi}}, {\lambda}(r{{g}}) \rangle \lt 0. \end{equation}

The first part of the condition means that the ‘external’ normal vector to the level surface of the convex function $\Lambda$ at the point r g points inwards rG, which means that the vertex r g is an MPP for rG. Under the second part of the condition, this MPP for rG is unique by Lemma 1: $r{{g}}= {{\alpha}}(r),$ so that ${\lambda}(r{{g}}) $ coincides with the vector

(24) \begin{equation} {{N}}(r) \,:\!= \Lambda' ({{\alpha}})|_{{{\alpha}} = {{\alpha}}(r)}= {\lambda}({{\alpha}}(r)), \quad r > 0, \end{equation}

which is a normal vector to the level surface L(r) at the point ${{\alpha}}(r) $ pointing inwards $\widehat{L}(r) $ (the above definition of N(r) makes sense whenever rG has a unique MPP). Since we always have $ {{N}}(r) \in {\rm cl}(Q^+) $ , the first part of ( ${\rm C}_3{(r)}$ ) excludes the case when the normal to L(r) at the point $ {{\alpha}}(r)=r{{g}}$ belongs to the boundary of the set rG.

The main result of the present paper is the following assertion.

Theorem 1. If conditions ( ${C}_1$ ), ( ${C}_2$ ) and ( ${C}_3$ ( $r_G$ )) are satisfied, where $r_G$ is defined by ( 19) with $B=G$ , then

(25) \begin{equation} {\mathbb {P}}( \eta(sG) \lt \infty ) = As^{-(d-1)/2}\,{\rm e}^{-sD(G)}(1+o(1)) \quad \text{as }s \rightarrow \infty, \end{equation}

where the value of the constant $A \in (0, \infty) $ is given in ( 72).

Remark 1. In the ‘smooth case’, when the boundary of G is twice continuously differentiable in the vicinity of the GMPP (the latter was defined after (19)), the exact asymptotics for the hitting probability were shown to have the form

(26) \begin{equation} {\mathbb {P}}( \eta(sG) \lt \infty ) = B \,{\rm e}^{-sD(G)}(1 + o(1)) \quad\text{as } s \rightarrow \infty, \end{equation}

where the constant $B > 0$ (depending on F and G) can be written explicitly (see Theorem 7 of [Reference Borovkov and Mogulskii10]). Thus, the qualitative difference between the asymptotics (25) in the case of the orthant G with the GMPP at its vertex and the asymptotics (26) in the ‘smooth case’ is the presence of the power factor $\smash{s^{-(d-1)/2}}$ in the former formulation (cf. the factor $s^{-1/2}$ in the second line of (1), the asymptotics of the ruin probability in the special bivariate case from [Reference Avram, Palmowski and Pistorius2]).

The presence of that power factor can be roughly explained as follows. The distribution of the location of the first hitting point of the auxiliary half-space $s\widehat{H}(r_G) \supset sG$ (defined below; see (28) and (31)) is close to the normal law on its boundary $sH(r_G) $ with the mean point at s g and covariance matrix proportional to $s^{1/2}$ (see Corollary 3.2 of [Reference Borovkov and Mogulskii10]). However, the RW S will only have a noticeable chance of hitting sG at or after the time when it hits $s\widehat{H}(r_G) $ if the ‘entry point’ to $s\widehat{H}(r_G) $ is basically in a finite neighborhood of the vertex point s g. It is the integration over that neighborhood with respect to the abovementioned ‘almost normal’ distribution ‘of the scale $s^{1/2}$ ’ that results in the additional factor $\smash{s^{-(d-1)/2}}$ on the RHS of (25).

Remark 2. We can consider, in a similar way, the case where the GMPP neither lies on the face of the orthant (which would be the ‘smooth case’ dealt with in [Reference Borovkov and Mogulskii10]) nor is the vertex thereof (our case), but lies on an m-dimensional ( $1\le m \lt d-1$ ) component of the orthant boundary. It is not hard to see from our proofs that the hitting probability asymptotics in such a case will be ‘intermediate’ between (26) and (25), with the power factor $\smash{s^{-(d-m-1)/2}}$ .

A rough explanation of this is similar to that given in Remark 1. In that case, the distribution of the location of the first hitting point of the auxiliary half-space (of which the boundary will now contain the respective m-dimensional component of the orthant boundary) will again be close to the normal law on the boundary of that half-space, with the covariance matrix proportional to $s^{1/2}$ . But now, to have a noticeable chance of hitting the set sG, the ‘entry point’ to $s\widehat{H}(r_G) $ should be within a ‘short distance’ from that m-dimensional component of the orthant boundary (rather than from the GMPP itself). So now we will have to integrate with respect to the abovementioned ‘almost normal’ distribution over a subset of the hyperplane which is ‘bounded in $ (d-m-1) $ directions’; hence, the resulting power factor.

Remark 3. If conditions ( ${\rm C}_1$ ), ( ${\rm C}_2$ ), and ( ${\rm C}_3(r_G) $ ) are met except for the last assumption that $\langle {\mathbb {E}} {\rm{\xi}}, {{N}}(r_G) \rangle \lt 0,$ we still have a large deviation situation provided that ${\mathbb {E}} {\rm{\xi}} \notin {\rm cl}(Q^+).$ In that case, g will still be the GMPP for G, but the asymptotics of (3) will be of the same form (26) as in the smooth boundary case (except for the value of the constant B). The reason for that will be clear from the proof of Theorem 1 (more precisely, from its part dealing with bounding the term $P_1$ ). Roughly speaking, what happens in that case is that if the RW S enters the auxiliary half-space $s\widehat{H}(r_G) $ in the sector from which one can ‘see’ the set sG along the rays with the directional vector ${\mathbb {E}} {\rm{\xi}}$ , then the RW will eventually hit sG with probability bounded away from 0. The probability of hitting that part of $s\widehat{H}(r_G) $ differs from the probability of hitting the ‘smooth case’ set $s\widehat{H}(r_G) $ by basically a constant factor.

Remark 4. As discussed at the beginning of this section, the RHS of (25) gives the asymptotics of the simultaneous ruin probability $\Psi_{{\rm sim}}(s{{g}}) $ as $s \to \infty$ in the d-dimensional extension of the problem from [Reference Avram, Palmowski and Pistorius2], under conditions ( ${\rm C}_1$ ), ( ${\rm C}_2$ ), and ( ${\rm C}_3(r_G) $ ). In the case of an alternative location of the GMPP, the asymptotics of $\Psi_{{\rm sim}}(s{{g}}) $ can be obtained from the main result of [Reference Borovkov and Mogulskii10] (when the GMPP is on the face of G) or arguing as indicated in Remark 2 (in all other cases).

Remark 5. Our result could also be extended to the case of a more general set G, with the property that the GMPP for hitting that set by our RW is at a ‘vertex’ on $\partial G$ . Here is a possible set (i)–(iv) of conditions for such an extension.

  1. (i) $$G \cap \{ t{\mathbb {E}}\xi :t \ge 0\} = \emptyset .$$

  2. (ii) There is a ${{g}}\in\partial G$ which is the unique GMPP of G.

  3. (iii) There exist $\varepsilon,\delta >0$ such that $D({{g}})\equiv D (G) \lt D(G\setminus U_{\varepsilon} ({{g}})) - \delta .$

  4. (iv) Denote by

    \[ C_\theta ({{b}})\,:\!=\bigg\{{{v}}\in {\mathbb {R}}^d\colon \arccos \frac{\langle{{v}}, {{b}}\rangle}{\|{{v}}\|\| {{b}} \|} \le \theta\bigg\}, \quad \theta \in (0,\pi/2), \]
    a circular cone in ${\mathbb {R}}^d$ with the axis direction vector b, opening angle $2\theta,$ and apex at 0, and by ${\zeta} $ the unit normal vector to the level surface of $\Lambda$ passing through the GMPP (see (35)). Then there exist a ${{b}} \in {\mathbb {R}}^d$ and values $0\lt\theta_1 \lt\theta_2 \lt\pi/2$ such that
    \[ C_{\theta_1} ({{b}}) \cap U_{\varepsilon} ({\textbf{{0}}}) \subset (G-g)\cap U_{\varepsilon} ({\textbf{{0}}}) \subset C_{\theta_2} ({\zeta})\cap U_{\varepsilon} ({\textbf{{0}}}) . \]

Condition (iv) ensures that $\partial G$ is nonsmooth at the GMPP g, where it has a ‘vertex’ with a positive solid angle at it. It is not very hard to verify, basically using the same argument as that used in the proof of our Theorem 1 (but with a number of appropriate changes), that ${\mathbb {P}} ( \eta(sG) \lt \infty ) $ for such a G will also have asymptotics of the form (25).

Remark 6. We can further extend the setup of our large deviation problem considering, instead of just ‘inflated sets’ sG, other versions of ‘remote sets’. Such possible versions include shifts $s{{g}}+V$ for some fixed set $V\subset {\mathbb {R}}^d$ (which coincides with the inflated set $s({{g}}+V) $ in our special case when $V={\rm cl}\, (Q^+),$ but would be different from that set when V is not a cone), combinations of shift and inflation which may, say, be of the form $ s{{g}}+s^v V$ for some $v>0,$ and so on. It appears that our approach would also work for some of those other settings, but the answers may be different in their form from both (25) and (26).

3. Proofs

For the reader’s convenience, we start this section with a short list of notation frequently used in the proofs; each item has either already been introduced or will appear below. Next to the notation we cite the equation number where the definition first appears.

  • Auxiliary hyperplanes and half-spaces: $H(r) = {{g}} + H_0(r), \widehat{H}(r) = {{g}} + \widehat{H}_0(r) $ (28).

  • Most probable points: ${{\alpha}}[B] = \text{arg\,min}_{{{v}} \in B}\Lambda({{v}}) $ (12); ${{\alpha}}(r) = {{\alpha}}[rG]$ (23); ${{\beta}}(r) ={{\alpha}}[r\widehat{H}(r_G)]$ (33); ${\chi} = n{{\beta}}(s/n) - s{{g}}$ (34).

  • The second rate function and related objects: $D_u({{v}}) = u \Lambda ( {{{v}}}/{u} ),\ D_u(B)\! = \inf_{{{v}} \in B} D_u({{v}}) $ (16); $D({{v}}) = \inf_{u > 0}D_u({{v}}) $ (14); $ D(B) = \inf_{u > 0}D_u(B) $ (17).

  • Most probable times: $u_B = \text{arg\,min}_{u > 0} D_u(B) $ (18); $r_B = 1/u_B$ (19).

  • Normals to the level surfaces of $\Lambda:\ {{N}}(r) = {\lambda}({{\alpha}}(r)) $ (24); ${\zeta} = {{N }(r_G)}/{\|{N}(r_G) \|}$ (35).

  • The first hitting time of the auxiliary half-space: $\eta_s = \eta(s\widehat{H}(r_G) ) $ (cf. (54); this formula also introduces probabilities $P_j,\, j=1,2,3$ ).

Finally, by c (with or without subscripts) we denote in this section positive constants (possibly different within one and the same argument and depending on F and g).

In all the assertions below except Lemma 2 we always assume that conditions ( ${\rm C}_1$ ), ( ${\rm C}_2$ ), and ( ${\rm C}_3(r_G) $ ), where $r_G$ is defined by (19) with $B=G$ , are met.

The scheme of the proof of our main result was outlined in the penultimate paragraph of the introduction. At the first step, we will prove Lemma 1.

Proof of Lemma 1. Suppose that there is another MPP ${{\alpha}}_1 \neq {{\alpha}}_0$ for the set rG. Denote by l the straight line segment with the endpoints ${{\alpha}}_0$ and ${{\alpha}}_1$ . Since both rG and the sublevel set $\widetilde{L}\,:\!= \{{{v}} \in {\mathbb {R}}^d\colon \Lambda({{v}}) \leq \Lambda({{\alpha}}_0) \}$ are convex and ${\rm int}(\widetilde{L}) \cap (rG) = \emptyset$ (as $\Lambda({{v}}) \lt \Lambda({{\alpha}}_0) $ for any ${{v}} \in {\rm int}(\widetilde{L}) $ ), we must have $l \subset (rG) \cap \widetilde{L} = (rG) \cap \partial \widetilde{L}$ . The latter relation implies that

(27) \begin{equation} \Lambda({{\alpha}}) = \Lambda({{\alpha}}_0), \quad {{\alpha}} \in l. \end{equation}

As ${{\alpha}}_0$ belongs to the open set $\Omega_{\Lambda}$ , there exists an $\varepsilon \in (0, \|{{\alpha}}_0 - {{\alpha}}_1 \|) $ such that $U_{\varepsilon}({{\alpha}}_0) \subset \Omega_{\Lambda}$ , so that $\Lambda$ is strictly convex on $U_{\varepsilon}({{\alpha}}_0).$ In particular, it is strictly convex on the segment $l \cap U_{\varepsilon}({{\alpha}}_0) $ , which contradicts (27). Lemma 1 is proved.

Next we will construct auxiliary half-spaces. Recall (24) and let

\begin{equation} H_0(r) \,:\!= \{{{v}} \in {\mathbb {R}}^d\colon \langle {{v}}, {{N}}(r) \rangle = 0 \}, \quad \widehat{H}_0(r) \,:\!= \{{{v}} \in {\mathbb {R}}^d\colon \langle {{v}}, {{N}}(r) \rangle \geq 0 \} \end{equation}

be the linear subspace orthogonal to ${{N}}(r) $ and the ‘upper’ half-space bounded by $H_0(r) $ , respectively. We denote their respective translations by the vector g by

(28) \begin{equation} H(r) \,:\!= {{g}} + H_0(r) \quad\text{and}\quad \widehat{H}(r) \,:\!= {{g}} + \widehat{H}_0(r). \end{equation}

Under condition ( ${\rm C}_3(r) $ ), we have

(29) \begin{equation} rH(r) = r{{g}} + rH_0(r) = {{\alpha}}(r) + H_0(r) \quad\text{and}\quad r\widehat{H}(r) = {{\alpha}}(r) + \widehat{H}_0(r). \end{equation}

Since ${{\alpha}}(r) \in L(r) $ by (22) and ${{\alpha}}(r) = r{{g}}$ from condition ( ${\rm C}_3(r) $ ), we have $n{{\alpha}}(r) = nr{{g}} = s{{g}} \in n L(r) $ (see Figure 1), when we choose

(30) \begin{equation} r \,:\!= \frac{s}{n}, \end{equation}

where $s > 0$ is the parameter used to scale the set G and $n \in {\mathbb {N}}$ will have the interpretation of the number of steps in the RW S (see (2)). Hence, the sets

(31) \begin{equation} sH(r) = s{{g}} + H_0(r) \quad\text{and}\quad s\widehat{H}(r) = s{{g}} + \widehat{H}_0(r), \end{equation}

are respectively the tangent hyperplane to the scaled surface nL(r) at the point s g and the ‘upper’ half-space bounded by sH(r).

Figure 1 Auxiliary constructions: the level line L(r) of $\Lambda,$ its scaled version nL(r), the respectively tangent straight lines rH(r) and sH(r), and other related objects (case $d = 2, r = s/n$ ).

The role of the half-space $r\widehat{H}(r) $ is clarified in the next lemma, which shows that the MPP for $r\widehat{H}(r) $ coincides with the MPP for the scaled version rG of the set G.

Lemma 2. If conditions ( ${C}_1$ ), ( ${C}_2$ ), and ( ${C}_3$ (r)) are satisfied for an $r > 0$ , then

\begin{equation} {{\alpha}}[r\widehat{H}(r)] = {{\alpha}}(r) = r{{g}}. \end{equation}

Proof. In view of ( ${\rm C}_3(r) $ ), we only have to show that ${{\alpha}}[r\widehat{H}(r)] = r{{g}}$ . Since rH(r) is the tangent hyperplane to L(r) at the point $r{{g}},$ arguing as in the proof of Lemma 1, we see that the sublevel set ${\rm cl}(\widehat{L}(r)^c) $ has a unique contact point $r{{g}}$ with rH(r). It is clear that ${\rm int}(r\widehat{H}(r)) $ is separated from ${\rm cl}(\widehat{L}(r)^c) $ by the hyperplane rH(r). Hence, $\Lambda({{v}}) > \Lambda(r{{g}}),\,{{v}} \in {\rm int}(r\widehat{H}(r)) $ , which completes the proof of Lemma 2.
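To make the construction concrete, here is a small numerical sanity check of Lemma 2 in the quadratic (Gaussian) case $\Lambda({{v}}) = \frac12 ({{v}}-{\mu})^{\rm T} A ({{v}}-{\mu})$ ; the matrix A, the drift ${\mu}$ , and the point g below are hypothetical choices arranged to satisfy ( ${\rm C}_3(r)$ ), not values from the paper.

```python
import numpy as np

# Quadratic (Gaussian) rate function Lambda(v) = 0.5 (v - mu)^T A (v - mu);
# the numbers below are hypothetical choices satisfying (C3(r)).
A = np.array([[2.0, 0.5], [0.5, 1.0]])   # positive-definite Lambda''
mu = np.array([-1.0, -0.5])              # drift E(xi), outside cl(Q+)
g = np.array([1.0, 1.0])
r = 1.0

Lam = lambda v: 0.5 * (v - mu) @ A @ (v - mu)
N = A @ (r * g - mu)                     # N(r) = lambda(alpha(r)) at alpha(r) = r*g
assert np.all(N > 0)                     # N(r) lies in Q+, as (C3(r)) requires

# r*g minimises Lambda over rG = r*g + cl(Q+): by convexity,
# Lambda(v) >= Lambda(rg) + <N(r), v - rg> >= Lambda(rg) for v >= rg.
rng = np.random.default_rng(0)
samples = r * g + rng.random((1000, 2))  # random points of rG
assert all(Lam(v) >= Lam(r * g) for v in samples)

# Lemma 2: r*g also minimises Lambda over the half-space
# r*Hhat(r) = {v : <v - r*g, N(r)> >= 0}; the constraint is active, so the
# minimiser solves A(v - mu) = t*N(r) on the hyperplane <v, N(r)> = <r*g, N(r)>.
t = (N @ (r * g - mu)) / (N @ np.linalg.solve(A, N))
v_star = mu + t * np.linalg.solve(A, N)
assert np.allclose(v_star, r * g)        # the two minimisers coincide
```

Here the Lagrange multiplier t equals 1 at $r$ , reflecting that the level surface of $\Lambda$ through $r{{g}}$ is tangent to the bounding hyperplane there.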

The properties of the half-space $\widehat{H}(r_G) $ stated in the next lemma will play a key role in our argument. It turns out that the crude asymptotics of ${\mathbb {P}}(\eta( s\widehat{H}(r_G))\lt \infty) $ as $s \to \infty$ are the same as those for ${\mathbb {P}}(\eta( sG)\lt \infty) $ and, moreover, the MPTs and MPPs for the sets G and $\widehat{H}(r_G) $ (and hence the GMPPs for them) are the same as well.

Lemma 3. Suppose that condition ( ${C}_3(r_G) $ ) holds. Then

\begin{equation} u_G = u_{\widehat{H}(r_G)} \quad\text{and}\quad {{\alpha}} (r_G ) = {{\alpha}} [ r_{\widehat{H}(r_G)}\widehat{H}(r_G) ] = r_G{{g}}, \end{equation}

so that $D(G) = D(\widehat{H}(r_G)),$ and $ (u_G, u_G{{\alpha}} (r_G )) = (u_{\widehat{H}(r_G)}, u_{\widehat{H}(r_G)}{{\alpha}} [r_{\widehat{H}(r_G)} \widehat{H}(r_G) ] ) $ is the unique point at which the infimum on the RHS of ( 17) is attained for both $B = G$ and $B=\widehat{H}(r_G) $ .

Proof. First we show that $ (u_G, {{g}}) $ is the unique time-space point where the infimum on the RHS of (17) is attained when $B = G$ . From ( ${\rm D}_3$ ) and ( ${\rm C}_3(r_G) $ ),

(32) \begin{equation} D'({{v}})|_{{{v}} = {{g}}} = \Lambda' ({{\alpha}})|_{{{\alpha}} = r_G{{g}}} \equiv {{N}}(r_G) \in Q^+. \end{equation}

As the sublevel set $\tilde{L}_1 \,:\!= \{{{v}} \in {\mathbb {R}}^d\colon D({{v}}) \leq D({{g}}) \}$ whose boundary passes through g is convex due to ( ${\rm D}_1$ ), relation (32) means that $\tilde{L}_1 \cap G=\{{{g}}\}$ . Therefore, $D(G) = D({{g}}) $ and g is the only point ${{v}} \in G$ such that $D(G) = D({{v}}) $ .

By ( ${\rm D}_2$ ), there is a unique point $t({{g}}) > 0$ such that $D({{g}}) = \Lambda(t({{g}}){{g}})/t({{g}}) $ . Hence, $ (u_G = 1/t({{g}}), {{g}}) $ is the unique point at which the infimum on the RHS of (17) with $B = G$ is attained.

Now note that, in view of (32), $H(r_G) $ is the tangent hyperplane to the level surface $\partial \tilde{L}_1$ at the point g. Arguing as in the proof of Lemma 1 and using the strict convexity of D in a neighborhood of g, which can be seen from ( ${\rm D}_3$ ) under condition ( ${\rm C}_3(r_G) $ ), we obtain $\tilde{L}_1 \cap H(r_G)=\{{{g}}\}$ . Repeating (with obvious changes, replacing G with $\widehat{H}(r_G) $ ) the argument in the first part of this proof, we see that $ (1/t({{g}}), {{g}}) $ is also the unique point at which the RHS of (17) with $B = \widehat{H}(r_G) $ attains its minimum, so that $u_{\widehat{H}(r_G)} = 1/t({{g}}) = u_G$ and

\begin{equation} {{\alpha}} [r_{\widehat{H}(r_G)}\widehat{H}(r_G)] = r_{\widehat{H}(r_G)}{{g}} = r_G{{g}} = {{\alpha}}[r_GG] \equiv {{\alpha}}(r_G) = r_G{{g}}. \end{equation}

Lemma 3 is proved.

To prove the main Theorem 1, we will need a few further ancillary results. Recall the notation given in (11), (29), (30), and denote by

(33) \begin{equation} {{\beta}}(r)\,:\!={{\alpha}}[r\widehat{H}(r_G)] \end{equation}

the MPP of the set $r\widehat{H}(r_G) $ . By Lemma 1,

\begin{equation} {{\beta}} (r_G) ={{\alpha}} (r_G)=r_G{{g}}. \end{equation}

Denote by $\mathcal{G}$ the class of functions $\gamma \colon {\mathbb {R}}^+ \rightarrow {\mathbb {R}}^+$ such that $\gamma(s) = o(s) $ as $s \rightarrow \infty$ . The next lemma describes the ‘movement’ of the MPPs for the half-spaces $r\widehat{H}(r_G) \equiv (s/n) \widehat{H}(r_G) $ for n-values in the $\gamma(s) $ -neighborhood of $s/r_G$ .

Lemma 4. Let $\gamma \in \mathcal{G}$ . There exists a constant vector ${\kappa}\in {\mathbb {R}}^d$ such that, as $s\to\infty$ , for $|n - s/r_G | \leq \gamma(s),$ we have

\begin{equation} n{{\beta}}\bigg(\frac{s}{n}\bigg) - s{{g}} = \bigg(n - \frac{s}{r_G}\bigg) {\kappa} + O(s^{-1}\gamma^2(s)). \end{equation}

Proof. Observe that

(34) \begin{equation} {\chi} \,:\!= n{{\beta}}\bigg(\frac{s}{n}\bigg) - s{{g}} = n({{\beta}}(r) - r{{g}}) = n[({{\beta}}(r) - {{\beta}}(r_G)) + (r_G - r){{g}}]. \end{equation}

To evaluate the first term on the RHS, first recall that ${{\beta}}(r) \in rH(r_G) $ according to (13) and introduce the unit normal vector to $H(r_G) $ (cf. (24)):

(35) \begin{equation} {\zeta} \,:\!= \frac{{{N}}(r_G)}{\|{{N}}(r_G) \|}. \end{equation}

Next note that $rH(r_G) = r_GH(r_G) + \varepsilon{\zeta}$ , where

(36) \begin{equation} \varepsilon \,:\!= (r- r_G)\langle {{g}}, {\zeta} \rangle =o(1) \quad\text{as } s\to\infty, \end{equation}

under the conditions of the lemma. Choose an orthonormal system ${{e}}_1, \ldots, {{e}}_{d-1}$ of vectors orthogonal to ${\zeta}$ and let J be the $ (d-1)\times d$ -matrix having these vectors as its rows. As ${{\beta}} (r) \in rH(r_G),$ this vector is of the form

\[ {\beta}(r)=r_G{{g}} + \varepsilon{\zeta} +\sum_{i=1}^{d-1}h_i{{e}}_i = r_G{{g}} + \varepsilon{\zeta} + {{h}}J, \quad {{h}}\,:\!=(h_1,\ldots, h_{d-1})\in {\mathbb {R}}^{d-1}. \]

As ${{\beta}} (r) $ is the MPP for $r\widehat{H}(r_G) $ , it is the unique point of that form at which the gradient ${\lambda}$ is orthogonal to $H_0(r_G) $ or, equivalently, orthogonal to all ${{e}}_j, \,j=1,\ldots, d-1:$

(37) \begin{equation} {\lambda} (r_G{{g}} + \varepsilon{\zeta} + {{h}}J) J^{\rm{T}} ={\textbf{{0}}}. \end{equation}

Next, assuming that $\|{{h}}\| = o(1) $ , we use condition ( ${\rm C}_3(r_G) $ ), the multivariate Taylor formula, and (9) to write

\begin{equation} {\lambda}(r_G{{g}} +\varepsilon {\zeta} + {{h}}J ) = {\lambda}(r_G{{g}}) + (\varepsilon {\zeta} + {{h}}J)\Lambda''(r_G{{g}}) + O(\varepsilon^2 +\|{{h}}\|^2). \end{equation}

Substituting this into (37), noting that ${\lambda}(r_G{{g}})J^{\rm{T}} ={\textbf{{0}}}$ and setting $A\,:\!=\Lambda'' (r_G {{g}}) $ for brevity, we obtain

\[ ( \varepsilon{\zeta} + {{h}}J) A J^{\rm{T}} +O(\varepsilon^2 +\|{{h}}\|^2)={\textbf{{0}}}. \]

The remainder term here is a continuous function of h, whereas $J AJ^{\rm{T}} $ is a positive-definite matrix since A is. So we conclude that there exists a (unique, as we already know) solution to the above equation equal to ${{h}}= -\varepsilon{\zeta} A J^{\rm{T}} (J A J^{\rm{T}} )^{-1} + O(\varepsilon^2 ).$ Hence,

(38) \begin{equation} {{\beta}}(r) - {{\beta}}(r_G)\equiv {{\beta}}(r) - r_G{{g}} = \varepsilon ( {\zeta} - {\zeta} A J^{\rm{T}} (J A J^{\rm{T}} )^{-1} J ) + O(\varepsilon^2). \end{equation}

It follows from (34), (36), and (38) that

\begin{align} {{\beta}}(r) -r{{g}} &= {{\beta}}(r) - {{\beta}}(r_G) + (r_G - r) {{g}} \notag \\ &= \varepsilon ( {\zeta} - {\zeta} A J^{\rm{T}} (J A J^{\rm{T}} )^{-1} J ) + O(\varepsilon^2)+ (r_G - r) {{g}} \notag \\ &= \frac{( r_G-r) {\kappa }}{r_G} +O((r_G-r) ^2), \end{align}

where $ {\kappa }\,:\!= r_G[( {\zeta} A J^{\rm{T}} (J A J^{\rm{T}} )^{-1} J -{\zeta} ) \langle{{g}},{\zeta}\rangle + {{g}}] .$ As $n(r_G - r) = r_G(n- s/r_G) $ and $n( r_G-r)^2 = n^{-1}r_G^2(n-s/r_G)^2 = O(s^{-1}\gamma^2(s)) $ , the lemma is proved.
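In the quadratic (Gaussian) case, ${\lambda}$ is affine, so the Taylor step in the proof is exact and the expansion of Lemma 4 can be verified numerically with no remainder; the parameters below are hypothetical choices, not values from the paper.

```python
import numpy as np

# Numerical check of Lemma 4 for the quadratic rate function
# Lambda(v) = 0.5 (v - mu)^T A (v - mu), where lambda(v) = A(v - mu) is
# affine and the expansion holds exactly. All numbers are hypothetical.
A = np.array([[2.0, 0.5], [0.5, 1.0]])
mu = np.array([-1.0, -0.5])
g = np.array([1.0, 1.0])
rG = 1.0

N = A @ (rG * g - mu)                    # N(r_G) = lambda(r_G g)
zeta = N / np.linalg.norm(N)
J = np.array([[-zeta[1], zeta[0]]])      # orthonormal basis of H_0(r_G), d = 2

def beta(r):
    # MPP of r*Hhat(r_G): minimise Lambda over {v : <v - r*g, N(r_G)> >= 0};
    # the constraint is active, so solve A(v - mu) = t*N, <v, N> = <r*g, N>.
    t = (N @ (r * g - mu)) / (N @ np.linalg.solve(A, N))
    return mu + t * np.linalg.solve(A, N)

r = rG + 0.05                            # r in a small neighbourhood of r_G
eps = (r - rG) * (g @ zeta)              # epsilon from (36)
v_star = zeta - zeta @ A @ J.T @ np.linalg.inv(J @ A @ J.T) @ J
assert np.allclose(beta(r) - rG * g, eps * v_star)   # cf. (38), exact here

# kappa from the end of the proof, and the statement of Lemma 4:
kappa = rG * ((zeta @ A @ J.T @ np.linalg.inv(J @ A @ J.T) @ J - zeta)
              * (g @ zeta) + g)
s, n = 100.0, 103
assert np.allclose(n * beta(s / n) - s * g, (n - s / rG) * kappa)
```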

For ${{\alpha}} \in \Omega_{\Lambda}$ , recall (10) and, for ${{z}} \in {\mathbb {R}}^d$ , introduce the two functions

\begin{equation} p({{z}}) \,:\!= {\mathbb {P}}( \eta ( {\rm cl}(Q^+) - {{z}} ) \lt \infty ), \end{equation}

so that $p({{z}}) = 1$ for ${{z}} \in {\rm cl}(Q^+),$ and

(39) \begin{align} q_{{{\alpha}}}({{z}}) &\,:\!= {\mathbb {P}}\Big(\inf_{n \geq 1} \langle {\lambda}({{\alpha}}(r_G)), {{S}}^{({{\alpha}})}(n)\rangle \geq \langle {\lambda}({{\alpha}}(r_G)), {{z}} \rangle \Big) \end{align}

(cf. [Reference Borovkov and Mogulskii10, pp. 253–254]; in fact, $q_{{{\alpha}}}$ was defined there as an integral involving the RHS of (39), but on close inspection it is easy to see that it is actually the same as (39)). For a Borel subset $W \subset \widehat{H}_0(r_G) $ , a ${{w}} \in H_0(r_G),$ and $r > 0$ such that ${{\beta}}(r) \in \Omega_{\Lambda}$ , set

\begin{equation} E(r, {{w}}, W) \,:\!= \int_{W} {\rm e}^{-\langle {\lambda}({{\beta}}(r)), {{v}} \rangle} p({{w}} + {{v}})q_{{{\beta}}(r)}({{v}})\,{\rm d}{{v}} \lt \infty, \end{equation}

the last inequality being a consequence of the bound (49) below for p and the fact that ${\lambda}({{\beta}}(r)) \perp H_0(r_G).$ Finally, denote by $\mathcal{P}$ the orthogonal projection onto $H_0(r_G).$
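The function $p({{z}})$ admits a straightforward Monte Carlo estimate. The sketch below, for a 2-d RW with Gaussian jumps whose mean has negative components and a finite truncation horizon (so it yields a lower-bound estimate of $p$ ), also illustrates that $p({{z}})=1$ for ${{z}}\in{\rm cl}(Q^+)$ and the monotonicity property (42); the drift, horizon, and sample size are hypothetical choices.

```python
import numpy as np

# Monte Carlo sketch of p(z) = P(eta(cl(Q+) - z) < infinity) for a 2-d RW
# with Gaussian jumps and negative mean drift. The infinite horizon is
# truncated at n_max steps, so p_hat is a lower-bound estimate of p(z).
# Drift, horizon, and sample size are hypothetical.
rng = np.random.default_rng(1)
mean = np.array([-0.5, -0.3])
n_max, n_paths = 200, 2000
jumps = mean + rng.standard_normal((n_paths, n_max, 2))

def p_hat(z, jumps):
    # The RW hits cl(Q+) - z iff some partial sum S(k), k >= 0, satisfies
    # S(k) >= -z componentwise; the term S(0) = 0 is included.
    S = np.concatenate([np.zeros((jumps.shape[0], 1, 2)),
                        np.cumsum(jumps, axis=1)], axis=1)
    return np.all(S >= -z, axis=2).any(axis=1).mean()

# p(z) = 1 for z in cl(Q+): the target set is hit at time 0.
assert p_hat(np.array([1.0, 1.0]), jumps) == 1.0

# Monotonicity (42) along v in cl(Q+): on common random numbers the
# estimate is monotone pathwise, since the target set only grows.
z, v = np.array([-1.0, -1.0]), np.array([0.5, 0.0])
assert p_hat(z + v, jumps) >= p_hat(z, jumps)
```

Using common random numbers makes the monotonicity comparison deterministic, since any path hitting the smaller target set also hits the larger one.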

The next theorem is a key step in implementing our approach based on auxiliary half-spaces. If the RW S hits sG then it inevitably hits the ‘best half-space approximation’ $s\widehat{H}(r_G)\supset sG$ to it (in the sense that both sets have the same crude hitting probability asymptotics). In Theorem 2, we compute the probability of hitting sG ‘localizing’ in both time and space when and where the RW first hits $s\widehat{H}(r_G) $ .

Theorem 2. Set ${{w}} \,:\!= n{{\beta}}(r) -s{{g}} + {{x}}$ . There exists a sequence $\delta_n\to 0$ such that, for any fixed $\Delta_0 > 0$ , $M_0 \in (0, \infty) $ , and $\gamma \in \mathcal{G}$ , we have, as $s \to \infty,$

(40) \begin{align} &{\mathbb {P}}( \eta(sG) \lt \infty, \eta (s\widehat{H}(r_G)) = n ,{{S}}(n) \in n{{\beta}}(r) + {{x}} + \Delta[{{y}}) )\notag \\ &\quad= \frac{ \exp\{-n\Lambda({{\beta}}(r)) - {{x}}\Lambda''({{\beta}}(r)){{x}}^{\rm{T}}/{(2n)} + O(\|{{x}} \|^3n^{-2}) \} }{(2\pi n)^{d/2} \sigma({{\beta}}(r))}\notag \\ &\quad\times [E(r, {{w}}, \Delta[{{y}}))(1+o(1)) + o(\Delta^d\exp\{-c_1\|\mathcal{P}({{w}}+{{y}}) \| - c_2\langle {\zeta}, {{y}} \rangle \} ) ] \end{align}

uniformly in the range of the variables n, ${{x}} \in H_0(r_G) $ and y specified by

\begin{gather*} \bigg|n - \frac{s}{r_G}\bigg| \leq \gamma (s), \quad \Delta \in [\delta_n , \Delta_0], \\ \|{{x}} \| \leq \gamma(s), \quad \| {{y}}\| \lt M_0, \quad {{x}} + \Delta[{{y}}) \subset \widehat{H}_0(r_G). \end{gather*}

Remark 7. The point of separating the variables x and y in the statement of this theorem is that it will be convenient in the next step (Corollary 1) of the proof of our main result. At that step, we will obtain a representation similar to (40) where instead of the ‘small’ cube $\Delta[{{y}}) $ we will have a half-cylinder with a ‘small’ base $\Delta^*[{{x}}) \subset H_0(r_G) $ and generatrix parallel to ${\zeta}$ (to be achieved by ‘integrating’ the asymptotics from (40) with respect to y).

Proof of Theorem 2. Assume for simplicity that $d = 2$ (we will explain at the end of the proof how the argument changes in the case $d\ge 3$ ). Set $\Delta_m \,:\!= \Delta m^{-1},$ where $m=m(n)\to \infty$ as $n\to\infty$ slowly enough (the choice of m is discussed below). For ${{y}} = (y_1, y_2), $ set

\begin{equation} {{z}}^{i,j} \,:\!= (y_1 + (i-1) \Delta_m, y_2 + (j-1) \Delta_m ), \quad i,j \ge 1, \end{equation}

and partition the square $\Delta[{{y}}) $ into $m^2$ sub-squares $\Delta_m[{{z}}^{i,j}) $ : $\Delta[{{y}}) = \bigcup_{1\leq i,j \leq m} \Delta_m[{{z}}^{i,j}).$ Clearly, setting ${{x}}' \,:\!= n{{\beta}}(r) + {{x}} \equiv {{w}} + s{{g}},$ we have

(41) \begin{align} P &\,:\!= {\mathbb {P}}( \eta(sG) \lt \infty, \,\eta(s\widehat{H}(r_G)) = n,\, {{S}}(n) \in {{x}}' + \Delta[{{y}}) ) \notag \\ &= \sum_{1\leq i,j \leq m} {\mathbb {P}}( \eta(sG) \lt \infty,\, \eta(s\widehat{H}(r_G)) = n,\, {{S}}(n) \in {{x}}' + \Delta_m[{{z}}^{i,j}) ). \end{align}

Due to the Markov property, the (i,j)th term in the sum on the RHS of (41) equals

\begin{align} &\int_{\Delta_m[{{z}}^{i,j})} {\mathbb {P}}( \eta(sG) \lt \infty, \eta(s\widehat{H}(r_G)) = n, {{S}}(n) \in {{x}}' + {\rm d}{{v}} ) \nonumber \\ &\quad =\int_{\Delta_m[{{z}}^{i,j})} {\mathbb {P}}( \eta(sG) \lt \infty \mid \eta(s\widehat{H}(r_G)) = n, {{S}}(n) = {{x}}' + {{v}} ) \nonumber \\ &\qquad \times {\mathbb {P}}( \eta(s\widehat{H}(r_G)) = n, {{S}}(n) \in {{x}}' + {\rm d}{{v}} ) \nonumber \\ &\quad =\int_{\Delta_m[{{z}}^{i,j})} p({{w}} + {{v}}) {\mathbb {P}}( \eta(s\widehat{H}(r_G)) = n, {{S}}(n) \in {{x}}' + {\rm d}{{v}} ) =\!:\, I_{i,j}. \end{align}

Now introduce the time-reversed RW $\tilde{{{S}}}(k) \,:\!= {\rm{\xi}}(n) + {\rm{\xi}}(n-1) + \cdots + {\rm{\xi}}(n-k+1), \, 1 \leq k \leq n. $ Note that $\eta(s\widehat{H}(r_G)) $ is the first time the univariate RW $\{\langle {{{S}}}(k), {\zeta} \rangle\}_{k\ge 0}$ hits the level $\langle {{x}}', {\zeta} \rangle$ and that $ \langle {{w}}, {\zeta}\rangle =0,\ \langle {{v}}, {\zeta}\rangle >0$ for ${{v}}\in \Delta_m[{{z}}^{i,j}),$ so that

\begin{equation} \{\eta(s\widehat{H}(r_G)) = n, {{{S}}(n)} = {{x}}' + {{v}} \} = \Big\{\min_{1 \leq k \leq n} \langle \tilde{{{S}}}(k), {\zeta} \rangle > \langle {{v}}, {\zeta} \rangle, \, {{{S}}(n)} = {{x}}' + {{v}} \Big\}. \end{equation}

Furthermore, the function $p({{z}}) $ is nondecreasing along any ray with a directional vector ${{v}} \in {\rm cl}(Q^+) $ : as ${\rm cl}(Q^+) -{{z}} \subset {\rm cl}(Q^+) -{{z}} - {{v}}$ for such v, we have

(42) \begin{equation} p({{z}}+{{v}}) = {\mathbb {P}}( \eta({\rm cl}(Q^+) - {{z}} - {{v}}) \lt \infty ) \geq {\mathbb {P}}( \eta({\rm cl}(Q^+) - {{z}}) \lt \infty ) = p({{z}}). \end{equation}

Therefore,

(43) \begin{equation} \min_{{{v}} \in \Delta_m[{{z}}^{i,j}) }p({{v}}) = p({{z}}^{i,j}), \quad \max_{{{v}} \in \Delta_m[{{z}}^{i,j}) }p({{v}}) = p({{z}}^{i+1,j+1}), \end{equation}

and, as clearly $ \langle {{z}}^{i,j}, {\zeta}\rangle \leq \langle {{v}}, {\zeta} \rangle$ for ${{v}} \in \Delta_m[{{z}}^{i,j}),$ we obtain

(44) \begin{align} I_{i,j} &\leq \int_{\Delta_m[{{z}}^{i,j})} p({{w}} + {{z}}^{i+1, j+1}){\mathbb {P}}\Big(\min_{1 \leq k \leq n} \langle \tilde{{{S}}}(k), {\zeta} \rangle > \langle {{z}}^{i,j}, {\zeta}\rangle, \,{{{S}}(n)} \in {{x}}' + {\rm d}{{v}} \Big) \notag \\ &= p({{w}} + {{z}}^{i+1, j+1}) {\mathbb {P}}\Big(\min_{1 \leq k \leq n} \langle \tilde{{{S}}}(k), {\zeta} \rangle > \langle {{z}}^{i,j}, {\zeta}\rangle, \,{{{S}}(n)} \in {{x}}' + \Delta_m[{{z}}^{i,j}) \Big) \notag \\ &= p({{w}} + {{z}}^{i+1, j+1}) {\mathbb {P}}\Big(\min_{1 \leq k \leq n} \langle \tilde{{{S}}}(k), {\zeta} \rangle > \langle {{z}}^{i,j}, {\zeta} \rangle \Bigm| {{{S}}(n)} \in {{x}}' + \Delta_m[{{z}}^{i,j}) \Big) \notag \\ &\times {\mathbb {P}}({{{S}}(n)} \in {{x}}' + \Delta_m[{{z}}^{i,j}) ). \end{align}

Asymptotic representations for the second and third factors on the RHS can be respectively obtained from Theorems 10 and 9 of [Reference Borovkov and Mogulskii10]. The assumptions of the theorems in [Reference Borovkov and Mogulskii10] include Cramér’s strong nonlattice condition $ (C_2) $ on the characteristic function of ${{\xi}}$ , but that condition is actually unnecessary provided that ${{\xi}}$ is just nonlattice and the ‘small cube’ edge is only allowed to decay slowly enough (the key tool for such an extension is Stone’s integro-local theorem; for more detail, see, e.g. [Reference Borovkov and Mogulskii9]). Under such weakened conditions, the assertions of Theorems 10 and 9 of [Reference Borovkov and Mogulskii10] will still hold uniformly in the small cube edge lengths in the interval $ [\delta_n',\Delta_0]$ for some sequence $\delta_n'\to 0$ .

Now we will choose $m=m(n)\to \infty$ such that $\delta_n\,:\!=\delta_n'm\to 0$ as $n\to\infty.$ Since ${{x}}'/n = {{\beta}}(r) + o(1),$ by the modified version of Theorem 10 of [Reference Borovkov and Mogulskii10], for the second factor on the RHS of (44), we have

\begin{equation} {\mathbb {P}}\Big(\min_{1 \leq k \leq n} \langle \tilde{{{S}}}(k), {\zeta} \rangle > \langle {{z}}^{i,j}, {\zeta} \rangle \Bigm| {{{S}}(n)} \in {{x}}' + \Delta_m[{{z}}^{i,j}) \Big) = q_{{{\beta}}(r)}({{z}}^{i,j})(1+o(1)) \end{equation}

(cf. [Reference Borovkov and Mogulskii10, p.264]), whereas by the modified version of Theorem 9 of [Reference Borovkov and Mogulskii10] (which, roughly speaking, is just a combination of Stone’s integro-local theorem with Cramér’s change of measure, a multivariate version of Theorem 9.3.1 of [Reference Borovkov5]) for the third factor on the RHS of (44), we have the relation

\begin{equation} {\mathbb {P}}({{{S}}(n)} \in {{x}}' + \Delta_m[{{z}}^{i,j}) ) = \frac{\Delta_m^2(1+o(1))}{2\pi n \sigma(({{x}}' + {{z}}^{i,j})/n)} \exp \bigg\{-n\Lambda\bigg({{\beta}}(r) + \frac{{{x}} + {{z}}^{i,j}}{n} \bigg) \bigg\}. \end{equation}

Now, expanding the rate function in the exponential on the RHS about the point ${{\beta}}(r) $ and using (9), for the probability on the left-hand side (LHS), we obtain the representation

\[ \frac{\Delta^2_m(1+o(1))}{2\pi n\sigma({{\beta}}(r))}\exp \bigg\{- \!n\Lambda({{\beta}}(r)) - \langle {\lambda}({{\beta}}(r)), {{z}}^{i,j} \rangle - \frac{1}{2n}{{x}} \Lambda''({{\beta}}(r)){{x}}^{\rm{T}} + \theta_{i,j} \bigg\}, \]

where the remainders o(1) and $\theta_{i,j} = O(\|{{x}} \|^3n^{-2}) $ are both uniform in $\Delta \in [\delta_n, \Delta_0]$ and ${{z}}^{i,j} \in {\mathbb {R}}^d$ , ${{x}} \in H_0(r_G) $ such that $\|{{x}} \| \leq \gamma(s) $ , $\|{{z}}^{i,j} \| \lt M_0$ , ${{x}} + \Delta[{{z}}^{i,j}) \subset \widehat{H}_0(r_G) $ . Here we used the Taylor expansion of $\Lambda$ at ${{\beta}}(r),$ relation (9), and the fact that $\langle {\lambda}({{\beta}}(r)), {{x}} \rangle = 0$ for ${{x}} \in H_0(r_G).$ Combining the above representations for the factors on the RHS of (44) yields an upper bound for $I_{i,j}$ .

In the same way, but now using the first relation in (43) and the observation that $ \langle {{z}}^{i+1,j+1}, {\zeta} \rangle \geq \langle {{v}}, {\zeta} \rangle, \, {{v}} \in \Delta_m[{{z}}^{i,j}),$ we obtain a lower bound for $I_{i,j}$ of the same form as the upper bound, but involving $p({{w}} + {{z}}^{i,j}) $ and $q_{{{\beta}}(r)}({{z}}^{i+1, j+1}) $ on its RHS.

Summing the obtained upper and lower bounds for $I_{i,j},\,1 \leq i, j \leq m$ , we see from (41) that

\begin{align} \Delta_m^2 & \sum_{1 \leq i, j \leq m} p({{w}} + {{z}}^{i,j})q_{{{\beta}}(r)}({{z}}^{i+1, j+1}){\rm e}^{-\langle {\lambda}({{\beta}}(r)), {{z}}^{i, j} \rangle}(1+o(1)) \\ & \leq J \\ &\,:\!= 2\pi n \sigma({{\beta}}(r))\exp \bigg\{n\Lambda({{\beta}}(r)) + \frac{1}{2n}{{x}}\Lambda''({{\beta}}(r)){{x}}^{\rm{T}} - \theta \bigg\} P \\ & \leq \Delta_m^2 \sum_{1 \leq i, j \leq m} p({{w}} + {{z}}^{i+1,j+1})q_{{{\beta}}(r)}({{z}}^{i, j}){\rm e}^{-\langle {\lambda}({{\beta}}(r)), {{z}}^{i, j} \rangle}(1+o(1)), \end{align}

where $\theta = O(\|{{x}} \|^3/n^2) $ . As $\|{{z}}^{i,j}- {{z}}^{i+1,j+1}\|=2^{1/2}\Delta/m\to 0,$ we can now replace $\langle {\lambda}({{\beta}}(r)), {{z}}^{i,j} \rangle$ in the lower bound with $\langle {\lambda}({{\beta}}(r)), {{z}}^{i+1,j+1} \rangle,$ yielding

\begin{align} &\Delta_m^2 \sum_{1 \leq i, j \leq m} p({{w}} + {{z}}^{i,j})q_{{{\beta}}(r)}({{z}}^{i+1, j+1}){\rm e}^{- \langle {\lambda}({{\beta}}(r)), {{z}}^{i+1,j+1} \rangle}(1+o(1)) \\ &\quad\leq J \\ &\quad\leq \Delta_m^2 \sum_{1 \leq i, j \leq m} p({{w}} + {{z}}^{i+1,j+1})q_{{{\beta}}(r)}({{z}}^{i, j}){\rm e}^{- \langle {\lambda}({{\beta}}(r)), {{z}}^{i,j} \rangle}(1+o(1)). \end{align}

Observe that the LHS (RHS) in the above formula is, up to the factor $ (1+o(1)) $ , the lower (upper) Darboux sum for the function

(45) \begin{equation} p({{w}} + {{z}})q_{{{\beta}}(r)}({{z}}){\rm e}^{- \langle {\lambda}({{\beta}}(r)), {{z}} \rangle}, \quad {{z}} \in \Delta[{{y}}). \end{equation}

It is not hard to see that the difference between the sums vanishes uniformly as $s \to \infty$ , and so they both tend to the Riemann integral $E(r, {{w}}, \Delta[{{y}})) $ of that function over $\Delta[{{y}}) $ .

Indeed, setting, for a function $h({{z}}), \, {{z}} \in {\mathbb {R}}^2$ ,

\begin{equation} \overline{h}^{i,j} \,:\!= h({{z}}^{i+1, j+1}), \quad \underline{h}^{i,j} \,:\!= h({{z}}^{i,j}), \quad\text{for } i, j \geq 1 \end{equation}

(the values of h at the top-right and bottom-left vertices of the subsquares $\Delta_m[{{z}}^{i,j}) $ , respectively), and letting $ f({{z}}) \,:\!= p({{w}}+{{z}}) $ and $ g({{z}}) \,:\!= q_{{{\beta}}(r)}({{z}}){\rm e}^{-\langle {\lambda}({{\beta}}(r)), {{z}} \rangle},$ the difference between the upper and lower Darboux sums for (45) on $\Delta[{{y}}) $ can be written, suppressing the superscripts i,j in all the factors, as

\begin{equation} \delta \,:\!= \Delta_m^2\sum_{1 \leq i,j \leq m} ( \overline{f} \underline{g} - \underline{f} \overline{g}). \end{equation}

Using the monotonicity of both $f({{z}}) $ (see (42)) and the exponential factor ${\rm e}^{-\langle {\lambda}({{\beta}}(r)), {{z}} \rangle}$ along directions from $Q^+$ , we can bound the value of the sum here as

(46) \begin{align} \sum_{1 \leq i,j \leq m}( \overline{f}\underline{g} - \underline{f}\overline{g}) & = \sum_{1 \leq i,j \leq m} (\overline{f}-\underline{f} )\underline{g} + \sum_{1 \leq i,j \leq m} \underline{f} (\underline{g}-\overline{g} ) \notag \\ & \leq {\rm e}^{-\langle {\lambda}({{\beta}}(r)), {{y}} \rangle}\sum_{1 \leq i,j \leq m} (\overline{f}-\underline{f} ) + f({{z}}^{m,m})\sum_{1 \leq i,j \leq m} (\underline{g}-\overline{g} ). \end{align}

Since $\overline{f}^{i,j} = \underline{f}^{i+1,j+1},\, 1 \leq i, j \leq m-1$ , using the telescoping argument, we see that the first sum on the RHS of (46) equals

\begin{align} \sum_{2 \leq i \leq m+1} f({{z}}^{i,m+1}) - \sum_{1 \leq i \leq m} f({{z}}^{i,1}) + \sum_{2 \leq j \leq m} f({{z}}^{m+1,j}) -\sum_{2 \leq j \leq m} f({{z}}^{1,j}) \leq 2mf({{z}}^{m+1,m+1}), \end{align}

whereas the second sum on the RHS of (46), using the same argument, is seen to be bounded from above by $ 2mg({{y}}) \leq 2m{\rm e}^{-\langle {\lambda}({{\beta}}(r)), {{y}} \rangle}. $ Summarizing, we obtain

(47) \begin{align} \delta \leq 4\Delta^2 m^{-1}{\rm e}^{-\langle {\lambda}({{\beta}}(r)), {{y}} \rangle} f({{z}}^{m+1,m+1}), \end{align}

where $f({{z}}^{m+1, m+1}) = p({{w}} + {{y}} + (\Delta, \Delta)).$
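The vanishing of the gap between the two Darboux sums can be illustrated numerically. The snippet below uses arbitrary smooth stand-ins for the two factors (a coordinatewise nondecreasing one, mimicking $p$ , and a coordinatewise nonincreasing one, mimicking $q_{{{\beta}}(r)}$ times the exponential) and checks that the gap on an $m \times m$ grid decays like $1/m$ , in line with (47); these are not the actual functions from the paper.

```python
import numpy as np

# Gap between the upper and lower Darboux sums over an m x m partition of a
# square, for a product f*g with f coordinatewise nondecreasing and g
# coordinatewise nonincreasing (smooth stand-ins for the factors in (45)).
def darboux_gap(m, Delta=1.0):
    h = Delta / m
    grid = np.arange(m + 1) * h
    X, Y = np.meshgrid(grid, grid, indexing="ij")
    f = 1.0 - np.exp(-(X + Y))            # nondecreasing in each coordinate
    g = np.exp(-2.0 * (X + Y))            # nonincreasing in each coordinate
    upper = f[1:, 1:] * g[:-1, :-1]       # f at top-right, g at bottom-left
    lower = f[:-1, :-1] * g[1:, 1:]       # and vice versa
    return h * h * np.sum(upper - lower)

gaps = np.array([darboux_gap(m) for m in (10, 20, 40, 80)])
assert np.all(gaps > 0) and np.all(gaps[1:] < gaps[:-1])
assert gaps[-1] < 4.0 / 80                # O(1/m) decay, cf. (47)
```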

To bound the quantity $p({{w}} + {{y}} + (\Delta, \Delta)),$ we will derive a bound for the function $p({{u}}) $ in the general case $d \geq 2$ . It follows from the condition that $\langle {\mathbb {E}}{\rm{\xi}}, {\zeta} \rangle \lt 0$ (part of ( ${\rm C}_3(r_G) $ )) that there exists a

(48) \begin{equation} \begin{split} &\text{closed round cone } C \supset Q^+ \text{ with the axis direction } {\zeta}, \text{ apex at } {\textbf{{0}}}, \\ &\text{and the opening angle } \pi - 2\phi \text{ with } \phi > 0 \text{ such that } -{\mathbb {E}}{\rm{\xi}} \in C. \end{split} \end{equation}

Clearly, $C \subset \widehat{H}_0(r_G) $ . For any ${{u}} \in \widehat{H}_0(r_G)\backslash C$ , denote by $ {{u}}' \,:\!= \text{arg\,min}_{{{v}} \in C}\|{{u}} - {{v}} \| $ the nearest to u point of C and let

\begin{equation} {\varkappa}({{u}}) \,:\!= \frac{{{u}}' - {{u}}}{\|{{u}}' - {{u}} \|} \end{equation}

be the inner normal to $\partial C$ at that point. Denote by $ \widehat{T}({{u}}) \,:\!= \{{{v}} \in {\mathbb {R}}^d\colon \langle {{v}}, \varkappa({{u}}) \rangle \ge 0 \}$ the half-space containing C and bounded by the tangent to the $\partial C$ hyperplane passing through the point ${{u}}'$ (and the origin). Clearly,

\begin{align} p({{u}}) &\leq {\mathbb {P}}(\eta(C-{{u}}) \lt \infty ) \\ & \leq {\mathbb {P}}(\eta(\widehat{T}({{u}}) - {{u}}) \lt \infty ) \\ & \leq {\mathbb {P}}\Big(\sup_{n \geq 1} \langle {{S}}(n), \varkappa({{u}}) \rangle \ge \|{{u}}' - {{u}} \| \Big) \\ & = {\mathbb {P}}\Big(\sup_{n \geq 1}S_{{{u}}}(n) \geq (\| \mathcal{P}({{u}}) \|\tan \phi - \langle {{u}}, {\zeta} \rangle )\sin \phi \Big), \end{align}

where $S_{{{u}}}(n) \,:\!= \langle {{S}}(n), \varkappa({{u}}) \rangle \equiv \smash{\sum_{k = 1}^{n}} \langle {\rm{\xi}}(k), \varkappa({{u}}) \rangle$ is a univariate RW with the negative drift: $ {\mathbb {E}} \langle {\rm{\xi}}, \varkappa({{u}})\rangle = - \langle -{\mathbb {E}}{\rm{\xi}}, \varkappa({{u}}) \rangle \lt 0 $ since $- {\mathbb {E}} {\rm{\xi}} \in C \subset \widehat{T}({{u}}) $ and $\varkappa({{u}}) $ is the inner normal vector to $\partial \, \widehat{T}({{u}}),$ so that $\langle - {\mathbb {E}}{\rm{\xi}}, \varkappa({{u}}) \rangle > 0.$ Therefore,

(49) \begin{equation} p({{u}}) \leq {\rm e}^{-\nu(\varkappa({{u}}))(\| \mathcal{P}({{u}}) \|\tan \phi - \langle {{u}}, {{\zeta}} \rangle )\sin \phi}, \end{equation}

where $ \nu(\varkappa({{u}})) \,:\!= \sup\{\nu \in {\mathbb {R}}\colon {\mathbb {E}} {\rm e}^{\nu \langle {{\xi}}, \varkappa({{u}}) \rangle} \leq 1 \} > 0 $ (see [Reference Asmussen and Albrecher1, p.81]). That $\nu(\varkappa({{u}})) > 0$ follows from condition ( ${\rm C}_3(r_G) $ ) and the fact that $\phi > 0$ can be chosen arbitrarily small, thus making all the vectors $\varkappa({{u}}) $ with $ {{u}} \in \widehat{H}_0(r_G)\backslash C $ arbitrarily close to ${\zeta} \equiv {\lambda}({{\alpha}}(r_G))/\|{\lambda}({{\alpha}}(r_G)) \|$ with ${\lambda}({{\alpha}}(r_G)) \in \Theta_{\psi}$ . This also implies that

\begin{equation} \nu_0 \,:\!= \inf_{{{u}} \in \widehat{H}_0(r_G)\backslash C} \nu(\varkappa({{u}})) > 0, \end{equation}

which, together with (47) and (49), yields the bound

\begin{align} \delta &\leq c\Delta^2m^{-1}\exp\{-\langle {\lambda}({{\beta}}(r)), {{y}} \rangle - \nu_0(\| \mathcal{P}({{w}} + {{y}}) \|\tan \phi - \langle {\zeta}, {{y}} \rangle ) \sin \phi \} \\ &\leq c\Delta^2m^{-1}\exp\{ -c_1\|\mathcal{P}({{w}} + {{y}}) \| - c_2\langle {\zeta}, {{y}} \rangle \} \end{align}

for small enough $c_1, c_2 > 0$ (as ${\lambda}({{\beta}}(r)) = h{\zeta}$ for h bounded away from 0 and $\phi$ can be chosen arbitrarily small). Therefore,

\begin{equation} J = E(r, {{w}}, \Delta[{{y}}))(1+o(1)) + o(\Delta^2{\rm e}^{-\langle {\lambda}({{\beta}}(r)), {{y}} \rangle -c_1\|\mathcal{P}({{w}}+{{y}}) \| - c_2\langle {{\zeta}}, {{y}} \rangle}) \end{equation}

uniformly in the specified range. This completes the proof in the case $d=2$ .

For $d\ge 3,$ we partition $\Delta[ {y} )\subset{\mathbb {R}}^d$ into $m^d$ small cubes (instead of $m^2$ small squares, as in the case $d=2$ ). After that, all the computations are done in the same way as above (including (43), where the min and max of p are now attained at the opposite vertices of the small cubes), except for the ‘telescoping argument’ following (46). Instead of the sums over the nodes on the edges of the square $\Delta[ {y} ),$ we end up now with sums over the nodes on the faces of the cube $\Delta[ {y} ),$ yielding a factor $m^{d-1}$ instead of m. But, as we then divide the result by $m^d$ (instead of $m^2$ , which was the case when $d=2$ ), we end up with the same desired final result. Theorem 2 is proved.
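For Gaussian jumps, the adjustment coefficient $\nu(\varkappa)$ appearing in (49) admits a closed form: $\langle {\rm{\xi}}, \varkappa \rangle \sim N(\langle {{m}}, \varkappa \rangle, \varkappa \Sigma \varkappa^{\rm{T}})$, so ${\mathbb {E}}\,{\rm e}^{\nu \langle {\rm{\xi}}, \varkappa \rangle} \leq 1$ precisely when $\nu \leq -2\langle {{m}}, \varkappa \rangle / (\varkappa \Sigma \varkappa^{\rm{T}})$. A minimal sketch (the parameter values below are illustrative, not taken from the paper):

```python
import math

def lundberg_gaussian(m, Sigma, kappa):
    """Adjustment coefficient nu(kappa) = sup{nu : E exp(nu <xi, kappa>) <= 1}
    for xi ~ N(m, Sigma).  The MGF of <xi, kappa> is
    exp{nu <m, kappa> + nu^2 (kappa Sigma kappa^T)/2}, so, given <m, kappa> < 0,
    nu(kappa) = -2 <m, kappa> / (kappa Sigma kappa^T)."""
    drift = m[0] * kappa[0] + m[1] * kappa[1]
    var = (kappa[0] ** 2 * Sigma[0][0]
           + 2 * kappa[0] * kappa[1] * Sigma[0][1]
           + kappa[1] ** 2 * Sigma[1][1])
    assert drift < 0, "need a negative drift in the direction kappa"
    return -2.0 * drift / var

# illustrative parameters (not from the paper)
m = (-0.5, -0.3)
Sigma = ((1.0, 0.2), (0.2, 0.8))
kappa = (1 / math.sqrt(2), 1 / math.sqrt(2))
nu = lundberg_gaussian(m, Sigma, kappa)

# check: the MGF of <xi, kappa> equals 1 exactly at nu
drift = m[0] * kappa[0] + m[1] * kappa[1]
var = 0.5 * Sigma[0][0] + Sigma[0][1] + 0.5 * Sigma[1][1]
mgf = math.exp(nu * drift + nu ** 2 * var / 2)
```

The resulting exponential bound ${\rm e}^{-\nu x}$ on ${\mathbb {P}}(\sup_n \langle {{S}}(n), \varkappa \rangle \geq x)$ is the classical Lundberg inequality used in (49).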

Next we will use Theorem 2, ‘integrating’ representation (40) to compute the probability of ever hitting sG while localizing only the time when S first hits $s\widehat{H}(r_G) $ and the projection onto $H_0(r_G) $ of the point where S enters that set. This result will be used at the key step in the proof of Theorem 1, when evaluating the contribution of the main term $P_3$ (defined in (54)).

Fix a Cartesian coordinate system in the hyperplane $H_0(r_G) $ and, for ${{v}} \in H_0(r_G) $ and $\Delta > 0$ , denote by $\Delta^*[{{v}}) $ the $ (d-1) $ -dimensional cube in $H_0(r_G) $ with edges parallel to the axes in the chosen coordinate system, the ‘left–bottom’ vertex at v, and the edge length $\Delta$ (cf. (4)). Denote by

\begin{equation} W(\Delta^*[{{v}})) \,:\!= \bigcup_{t \geq 0}\{\Delta^*[{{v}}) + t{\zeta} \} \end{equation}

the half-cylinder with the base $\Delta^*[{{v}}) $ and generatrix parallel to the unit normal ${\zeta}$ to $H_0(r_G) $ . Recall the notation ${{w}} = n{{\beta}}(r) - s{{g}} + {{x}}$ from Theorem 2 and set

(50) \begin{equation} \Xi(s,n)\,:\!= \frac{{\rm e}^{-n\Lambda({{\beta}}(r)) } }{(2\pi n)^{d/2} \sigma({{\beta}}(r))}, \quad\text{where}\quad r = \frac{s}{n}. \end{equation}

Following Remarks 1 and 3 of [Reference Borovkov and Mogulskii10], we can ‘tile’ the half-cylinder $W(\Delta^*[{\textbf{{0}}})) $ with ‘small’ cubes $\Delta'[{{y}}) $ with $\Delta' \to 0$ and then sum the representations for those small cubes given by Theorem 2, thus ‘integrating’ these local representations to obtain the following result.

Corollary 1. There exists a sequence $\delta_n^*\to 0$ as $n\to\infty$ such that, for any fixed $\Delta_0 >0$ and $\gamma \in \mathcal{G}$ , we have, as $ s \to \infty$ ,

(51) \begin{align} &{\mathbb {P}}( \eta(sG) \lt \infty,\, \eta (s\widehat{H}(r_G))\notag \\ &\quad= n ,\,{{S}}(n) \in n{{\beta}}(r) + {{x}} + W(\Delta^*[{\textbf{{0}}})) ) \notag \\ &\quad= \Xi (s,n)\exp\bigg\{-\frac{1}{2n}{{x}} \Lambda''({{\beta}}(r)){{x}}^{\rm{T}} + O\bigg(\frac{\|{{x}} \|^3}{n^2} \bigg) \bigg\} [E(r, {{w}}, W(\Delta^*[{\textbf{{0}}})) )(1 + o(1)) + R ], \end{align}

where $ R = o\big( \int_{\Delta^*[{{0}})} {\rm e}^{-c_1\|{{w}} \|}\, \mu({\rm d}{{w}})\big),\ \mu$ being the $ (d-1) $ -dimensional volume measure on $H_0(r_G) $ , the $o(\cdot) $ term being uniform in ${{x}} \in H_0(r_G),$ and $n \geq 1$ such that $\|{{x}} \| \leq \gamma(s) , |n - s/r_G| \leq \gamma(s), $ and $\Delta \in [\delta_n^* , \Delta_0]$ .

We just note here that the bound for R is obtained by choosing ${{y}} \perp H_0(r_G) $ in Theorem 2 and integrating along the direction of ${\zeta}$ .

Now we are ready to proceed to proving the main result of the paper.

Proof of Theorem 1. First we will partition the half-space $s\widehat{H}(r_G)\supset s G$ into several subsets and, for each of them, evaluate the probability of ever hitting sG when the RW first hits $s\widehat{H}(r_G) $ in the respective partition element. How we carry out these computations will be different for different elements of the partition.

We will now assume that $d=2$ as in this case it is easier to explain how we do the evaluation. The construction to be used when $d\ge 3$ is described later, just after (53).

Let ${{e}} = (e_1, e_2) \,:\!= (\zeta_2, -\zeta_1) $ be the unit vector orthogonal to ${\zeta}$ such that $e_1 > 0$ . For $M \geq 1$ (to be chosen later), set ${{a}}_{\pm} \,:\!= s{{g}} \pm (M\ln s){{e}}$ and consider the sets

\begin{align} V_{+} \,:\!= \{{{v}} \in s\widehat{H}(r_G)\colon \langle {{v}}, {{e}}\rangle \ge \langle {{a}}_{+}, {{e}} \rangle \}, \quad V_{-} \,:\!= \{{{v}} \in s\widehat{H}(r_G)\colon \langle {{v}}, {{e}}\rangle \lt \langle {{a}}_{-}, {{e}} \rangle \}. \end{align}

Next we will split each of the sets $V_{\pm}$ into two parts. We need to consider two alternative situations, depending on whether ${\mathbb {E}}{\xi}$ is in $ - Q^+$ or not.

Case ${\mathbb {E}} {\rm{\xi}} \in - Q^+.$ In this case, we set (see Figure 2)

\begin{gather*} V_{1+} \,:\!= V_+ \cap \big\{ {{v}}\colon v_2 \leq sg_2 - \tfrac12 (M\ln s)|e_2| \big\}, \\ V_{1-} \,:\!= V_{-} \cap \big\{{{v}}\colon v_1 \leq sg_1 - \tfrac12 (M\ln s)e_1 \big\}, \end{gather*}

and set

(52) \begin{equation} \begin{gathered} V_{2-} \,:\!= V_{-}\backslash V_{1-}, \quad V_{2+} \,:\!= V_{+} \backslash V_{1+}, \\ V_1 \,:\!= V_{1+} \cup V_{1-}, \quad V_2 \,:\!= V_{2+} \cup V_{2-}, \quad V_3 \,:\!= s\widehat{H}(r_G) \backslash (V_{-} \cup V_{+}). \end{gathered} \end{equation}

Figure 2. The auxiliary sets $V_{j\pm},\,j = 1, 2,$ and $V_3$ in the case ${\mathbb {E}} {\rm{\xi}} \in -Q^+$ .

Case ${\mathbb {E}} {\rm{\xi}} \notin -Q^+$ (but ( ${\rm C}_3(r_G) $ ) is still met, i.e. $\langle {\mathbb {E}}{\rm{\xi}}, {\zeta} \rangle \lt 0$ ). Here the above simple construction of the sets $V_{j\pm}$ must be somewhat modified. For definiteness, assume that ${\mathbb {E}} \xi_2 > 0$ , so that ${\mathbb {E}} {\rm{\xi}}$ lies in the interior of the second quadrant, implying that $\langle {\mathbb {E}} {\rm{\xi}}, {{e}} \rangle\lt0.$ In this case, all we need to change in the above definition of the sets $V_{{\cdot}}$ is to amend how $V_{j+},\, j = 1, 2,$ are specified (the $V_{j-}$ stay the same; in the alternative case, when ${\mathbb {E}} \xi_1 > 0$ , we have to redefine $V_{j-},\, j = 1, 2,$ keeping $V_{j+}$ unchanged). This is done as follows. Introduce the points

\begin{equation} {{a}}'_+ \,:\!= s{{g}} + \frac{{\mathbb {E}} {\rm{\xi}}}{\langle {\mathbb {E}} {\rm{\xi}}, {{e}} \rangle} M\ln s \end{equation}

(which is the intersection of the ray emanating from s g in the direction of $-{\mathbb {E}}{\rm{\xi}}$ and the straight line parallel to ${\zeta}$ and passing through ${{a}}_+$ ) and

\begin{gather*} {{a}}''_+ \,:\!= {{a}}_+ + \frac{1}{3}({{a}}'_+ - {{a}}_+) = s{{g}} + \bigg(\frac{2}{3}{{e}} + \frac{{\mathbb {E}} {\rm{\xi}}}{3\langle {\mathbb {E}} {\rm{\xi}}, {{e}} \rangle}\bigg)M\ln s, \\ {{a}}_0 \,:\!= s{{g}} - \bigg( \frac{{\mathbb {E}} {\rm{\xi}}}{\langle {\mathbb {E}} {\rm{\xi}}, {{e}} \rangle} - {{e}} \bigg)\frac{M\ln s}{3}. \end{gather*}

In words, ${{a}}''_+$ is one third of the way from ${{a}}_+$ to ${{a}}'_+$ going along the direction of ${\zeta}$ , whereas ${{a}}_0$ is at the same distance from s g in the opposite way (see Figure 3).

Figure 3. The auxiliary sets $V_{j\pm},\,j = 1, 2,$ and $V_3$ in the case ${\mathbb {E}} {\rm{\xi}} \notin -Q^+$ .

Now we define $V_{1+}$ as the intersection of $V_+$ with the half-plane lying underneath the straight line $\ell$ going through the points ${{a}}_0$ and ${{a}}''_+$ :

(53) \begin{equation} V_{1+}\,:\!= V_+\cap \bigg\{{{v}} \in {\mathbb {R}}^2\colon {{v}} = {{a}}_0 + x\bigg(\frac{1}{3}{{e}} + \frac{2{\mathbb {E}} {\rm{\xi}}}{3\langle {\mathbb {E}} {\rm{\xi}}, {{e}} \rangle} \bigg) - y{\zeta}, \, x\in{\mathbb {R}},\, y \geq 0 \bigg\}. \end{equation}

All the other sets $V_{{\cdot}}$ are defined now according to (52).

For $d\ge 3 ,$ we use a general construction of the $V_j$ (there will only be three sets here, no need for $V_{j\pm}$ ) that extends (53). It is applicable whether ${\mathbb {E}} {\rm{\xi}} $ lies in $-Q^+$ or not. We first set $V_3\,:\!= \{{{v}}=s{{g}}+{{u}}\in s \widehat{H} (r_G)\colon \|{{u}}-\langle {{u}},{\zeta}\rangle {\zeta}\|\le M\ln s\}$ to be a ‘round’ half-cylinder in $ s \widehat{H}(r_G) $ with generatrix parallel to ${\zeta}$ and the base that is the $ (d-1) $ -dimensional ball that is a subset of $ s H(r_G) $ , has its center at s g, and is of radius $M\ln s.$ Then we use the cone C described in (48) to define

\[ C_s \,:\!= s{{g}}- \frac{M\ln s}{3\tan \phi}{\zeta}+C, \quad V_1\,:\!= V_3^c \cap C_s, \quad V_2\,:\!= V_3^c \setminus V_1. \]

Now set $\eta_s \,:\!= \eta(s\widehat{H}(r_G) ) $ and write

(54) \begin{align} {\mathbb {P}}(\eta(sG) \lt \infty ) = \sum_{j = 1}^3 {\mathbb {P}}(\eta(sG) \lt \infty, \,{{S}}(\eta_s) \in V_j )=:\sum_{j=1}^3 P_j. \end{align}

We will show that $P_1$ and $P_2$ are negligibly small compared to the RHS of (25). After that, we will use Corollary 1 to demonstrate that, by choosing a large enough M, the term $P_3$ can be made arbitrarily close (in relative terms) to the RHS of (25).

Bounding $P_1.$ First we note that in the case $d=2$ we have

\[ P_1=P_{1-}+P_{1+}, \quad P_{1\pm}\,:\!= {\mathbb {P}}(\eta(sG) \lt \infty, \,{{S}}(\eta_s) \in V_{1\pm} ) . \]

Assume that ${\mathbb {E}}{\rm{\xi}} \in -Q^+.$ Then

\begin{align} P_{1+} &= \int_{V_{1+}}{\mathbb {P}}(\eta(sG) \lt \infty \mid \eta_s \lt \infty,\, {{S}}(\eta_s) = {{v}} ) {\mathbb {P}}( \eta_s \lt \infty,\, {{S}}(\eta_s) \in {\rm d}{{v}} ) \\ & \leq \int_{V_{1+}} {\mathbb {P}}\Big(\sup_{n \geq 1} S_2(n) \geq 2^{-1} (M\ln s)|e_2| \Big) {\mathbb {P}}(\eta_s \lt \infty, \,{{S}}(\eta_s) \in {\rm d}{{v}} ) \\ & = {\mathbb {P}}\Big(\sup_{n \geq 1} S_2(n) \geq 2^{-1} (M\ln s)|e_2| \Big)\int_{V_{1+}} {\mathbb {P}}(\eta_s \lt \infty, \,{{S}}(\eta_s) \in {\rm d}{{v}} ) \\ &\leq s^{-c_0M} {\mathbb {P}}(\eta_s \lt\infty ), \end{align}

where $c_0 \,:\!= 2^{-1}|e_2|\nu_0 >0$ and we used the strong Markov property to obtain the first inequality and a bound of the form (49) for the distribution tail of $\sup_{n \geq 1} S_2(n).$ That $|e_2|>0$ is due to condition ( ${\rm C}_3(r_G) $ ) (as it excludes situations where $H(r_G) $ is parallel to any of the coordinate axes). The term $P_{1-}$ is bounded in the same way.

Since ${\mathbb {P}}(\eta_s \lt \infty )~\sim~c\,{\rm e}^{-sD(\widehat{H}(r_G))}$ as $s \to \infty$ by Theorem 7 of [Reference Borovkov and Mogulskii10] and $ D(\widehat{H}(r_G)) = D(G) $ by Lemma 3, we have shown that, for some $0 \lt c, c_1 \lt \infty$ ,

(55) \begin{equation} P_1 \leq cs^{-c_1M}{\rm e}^{ - sD(G)}. \end{equation}

Choosing $M > 1/(2c_1) $ ( $M > (d-1)/(2c_1) $ when $d > 2$ ) completes the argument.

Now we turn to the case when ${\mathbb {E}}{\rm{\xi}} \notin -Q^+,{\mathbb {E}}\xi_2 > 0$ and use the alternative construction (53) of $V_{1+}$ . Note that the half-plane appearing in (53) is separated from sG by a gap of width $cM\ln s$ for some $c > 0$ in the direction orthogonal to $\ell$ . Furthermore, denote by ${\zeta}'$ a unit vector orthogonal to $\ell$ and such that $\langle {\zeta}, {\zeta}' \rangle > 0$ (so that ${\zeta}'$ is pointing in the direction of sG). It is easy to verify that, by the above construction, we have ${\mathbb {E}}\langle {\rm{\xi}}, {\zeta}' \rangle \lt 0. $ This means that we are in the same situation as above, when considering the case ${\mathbb {E}} {\rm{\xi}} \in -Q^+,$ and can use the same argument to establish that $P_1$ is negligibly small.

The last argument extends in a straightforward way to the case $d \ge 3$ as well: by construction, in that case the set $V_1$ is ‘separated’ from sG by a gap of (variable) width $\ge c M\ln s$ for some $c>0. $

Bounding $P_2 = {\mathbb {P}}( \eta(sG) \lt \infty, \,{{S}}(\eta_s) \in V_2 ) $ . We again start with the case $d=2$ . It is clear from our constructions (see Figures 2 and 3) that there exists a $c_2 > 0$ such that $V_2 \subset s_1\widehat{H}(r_G) $ with $s_1 \,:\!= s + c_2M\ln s$ (we can take $c_2 \,:\!= (M\ln s)^{-1}\min_{{{v}} \in V_2} \langle {{v}}, {\zeta} \rangle$ , where the minimum is attained at the vertex of one of the sets $V_{2\pm}$ ). Therefore, again using Theorem 7 of [Reference Borovkov and Mogulskii10] and our Lemma 1, we have

(56) \begin{align} P_2 &\leq {\mathbb {P}}(\eta_s \lt \infty, {{S}}(\eta_s) \in V_2 )\notag \\ &\leq {\mathbb {P}}(\eta(s_1\widehat{H}(r_G) ) \lt \infty ) \notag \\ & \sim c\,{\rm e}^{-s_1D(\widehat{H}(r_G))} \notag \\ &= cs^{-c_2MD(G)}{\rm e}^{ -sD(G)}. \end{align}

Choosing a large enough M, we establish the desired result. There is no change in the argument when $d\ge 3.$

Evaluating $P_3 = {\mathbb {P}}(\eta(sG) \lt \infty, \,{{S}}(\eta_s) \in V_3 ).$ Clearly,

(57) \begin{equation} P_3 = \sum_{n = 1}^{\infty} P_{3,n}, \quad P_{3,n} \,:\!= {\mathbb {P}}(\eta(sG) \lt \infty, \, \eta_s = n, \, {{S}}(n) \in V_3 ), \quad n \geq 1. \end{equation}

First we will compute the sum of the terms $P_{3,n}$ with

\begin{equation} n \in N_s \,:\!= \{n\colon |n - su_G | \le Ms^{1/2} \}. \end{equation}

In the assertion of Corollary 1, choose $\gamma(s) \,:\!= Ms^{1/2},$ where $M = M(s) \to \infty$ slowly enough so that the term $O(\|{{x}} \|^3/n^2) $ in the exponential in (51) is o(1) for $\|{{x}} \| \leq \gamma(s) $ (i.e. $M= o(s^{1/6})).$ For a $\Delta > 0$ , let $m \,:\!= (M\ln s) / \Delta$ (we can assume without loss of generality that $m \in {\mathbb {N}}$ ). First assume for simplicity that $d=2,$ and set ${{t}}_k \,:\!= k\Delta {{e}} $ and ${{z}}_k \,:\!= s{{g}} + {{t}}_k,\,k = -m, \ldots, m$ (so that ${{z}}_{-m} = {{a}}_-$ and ${{z}}_m = {{a}}_+).$ Recalling that $r = 1/u$ and $r_G = 1/u_G$ , in view of Corollary 1 with ${{x}} = {{x}}_k \,:\!= {{z}}_k - n{{\beta}}(1/u) \equiv {{t}}_k + s{{g}} - n{{\beta}}(1/u) $ , we have

(58) \begin{align} P_{3,n} &= {\mathbb {P}}(\eta(sG) \lt \infty,\, \eta_s = n,\, {{S}}(n) \in V_3 ) \notag \\ &= \sum_{k=-m}^{m-1} {\mathbb {P}}(\eta(sG) \lt \infty,\, \eta_s = n,\, {{S}}(n) \in W(\Delta^*[{{z}}_k)) ) \notag \\ & = (1+o(1))\Xi(s,n) \sum_{k = -m}^{m-1} {\rm e}^{-{{x}}_k \Lambda''({{\beta}}(1/u)) {{x}}_k^{\rm{T}}/{(2n)} } E\bigg(\frac{1}{u}, {{t}}_k, W(\Delta^*[{\textbf{{0}}}))\bigg) + o(\Xi(s,n)), \end{align}

where the remainder term $o(\Xi(s,n)) $ appears as the result of summing the R terms in (51), as we can easily verify that $\int_{H_0(r_G)} {\rm e}^{-c_1\|{{w}} \|}\, \mu({\rm d}{{w}}) \lt \infty.$

Next observe that

\begin{equation} E\bigg(\frac{1}{u}, {{t}}_k, W(\Delta^*[{\textbf{{0}}}))\bigg) = \int_{\Delta^*[{{t}}_k)} \rho_u({{t}})\,\mu({\rm d}{{t}}), \end{equation}

where we set, for ${{t}} \in H_0(1/u_G),$

\begin{equation} \rho_u({{t}}) \,:\!= \int_{0}^{\infty}{\rm e}^{-\langle {\lambda}({{\beta}}(1/u)), {{t}} -{{t}}_k+ y{{\zeta}} \rangle} q_{{{\beta}}(1/u)}({{t}}-{{t}}_k + y{\zeta}) p({{t}} + y{\zeta}) \,{\rm d}y. \end{equation}

Note that, since ${\rm e}^{- \langle {\lambda}({{\beta}}(1/u)), {{t}}-{{t}}_k+ y{{\zeta}} \rangle} = {\rm e}^{-\langle {\lambda}({{\beta}}(1/u)), y{{\zeta}} \rangle}$ and $ q_{{{\beta}}(1/u)}({{t}}-{{t}}_k + y{\zeta}) = q_{{{\beta}}(1/u)}( y{\zeta}) $ for ${{t}} \in H_0(1/u_G) $ , we actually have

\begin{equation} \rho_u({{t}}) = \int_{0}^{\infty}{\rm e}^{-\langle {\lambda}({{\beta}}(1/u)), y{{\zeta}} \rangle} q_{{{\beta}}(1/u)}( y{\zeta}) p({{t}} + y{\zeta}) \,{\rm d}y. \end{equation}

Recalling our notation (34), the sum on the RHS of (58) can be expressed as

\begin{align} \sum_{k = -m}^{m-1} {\rm e}^{-({{t}}_k - {{\chi}}) \Lambda''({{\beta}}(1/u))({{t}}_k - {{\chi}})^{\rm{T}}/{(2n)} } \int_{\Delta^*[{{t}}_k)} \rho_u({{t}}) \,\mu({\rm d}{{t}}). \end{align}

Setting $f({{z}}) \,:\!= \exp\{-{{z}} \Lambda''({{\beta}}(1/u)){{z}}^{\rm{T}}/{(2n)} \} $ , we can easily verify that

(59) \begin{equation} \frac{f({{z}} + \Delta_1{{e}})}{f({{z}})} = 1+ o(1) \end{equation}

uniformly in $n \in N_s$ , $\Delta_1 \in (0, \Delta],$ and $\| {{z}}\| \leq cMs^{1/2},\,c > 0$ . Therefore, letting $\Delta \to 0$ sufficiently slowly, we can replace the above sum with the integral over the set $\Delta^*_0 [{{a}}_-) $ with $\Delta_0 \,:\!= 2m\Delta \equiv 2M\ln s$ to obtain

(60) \begin{equation} P_{3,n} = (1+o(1)) \Xi(s,n)\int_{\Delta^*_0[{{a}}_-)} {\rm e}^{-({{t}} - {{\chi}}) \Lambda''({{\beta}}(1/u)) ({{t}} - {{\chi}})^{\rm{T}}/ {(2n)}}\rho_u({{t}}) \,\mu({\rm d}{{t}}) + o(\Xi(s,n)). \end{equation}

Recalling that ${{a}}_- = -(M\ln s) {{e}},$ we have from Lemma 1 (with $\gamma(s) = Ms^{1/2}$ ) that

\begin{align} \exp &\bigg\{-\frac{1}{2n}({{t}}-{\chi})\Lambda'' \bigg({{\beta}}\bigg(\frac{1}{u}\bigg)\bigg) ({{t}}-{\chi})^{\rm{T}} \bigg\} \\ &= \exp \bigg\{ -\frac{1}{2n}{\chi}\Lambda'' \bigg({{\beta}}\bigg(\frac{1}{u}\bigg)\bigg){\chi}^{\rm{T}} +\frac{1}{n}{{t}}\Lambda'' \bigg({{\beta}}\bigg(\frac{1}{u}\bigg)\bigg){\chi}^{\rm{T}} - \frac{1}{2n}{{t}}\Lambda'' \bigg({{\beta}}\bigg(\frac{1}{u}\bigg)\bigg){{t}}^{\rm{T}} \bigg\} \\ &= (1+o(1))\exp \bigg\{ -\frac{1}{2n}{\chi}\Lambda'' \bigg({{\beta}}\bigg(\frac{1}{u}\bigg)\bigg){\chi}^{\rm{T}} \bigg\} \end{align}

uniformly in ${{t}} \in \Delta_0^*[{{a}}_-) $ and $n \in N_s$ . Hence, it follows from (60) that

\begin{equation} P_{3,n} = (1+o(1))\Xi(s,n)\,{\rm e}^{-{{\chi}}\Lambda'' ({{\beta}}(1/u)){{\chi}}^{\rm{T}}/{(2n)} } \int_{\Delta^*_0[{{a}}_-)} \rho_u({{t}})\,\mu({\rm d}{{t}}) + o(\Xi(s,n)). \end{equation}

Note that $\smash{\int_{\Delta^*_0[{{a}}_-)} \rho_u({{t}})}\,\mu({\rm d}{{t}}) = E(1/u, {\textbf{{0}}}, W(\Delta^*_0[{{a}}_-))) $ and, as $M \to \infty$ , we have $ E(1/u, {\textbf{{0}}}, W(\Delta^*_0[{{a}}_-))) \to E(1/u, {\textbf{{0}}}, \widehat{H}_0(1/u_G)) $ , so that

(61) \begin{align} P_{3,n} = (1+o(1))\Xi(s,n)\,{\rm e}^{-{{\chi}}\Lambda'' ({{\beta}}(1/u)){{\chi}}^{\rm{T}}/{(2n)} } E\bigg(\frac{1}{u}, {\textbf{{0}}}, \widehat{H}_0\bigg(\frac{1}{u_G}\bigg)\bigg) +o(\Xi(s,n)). \end{align}

Representation (61) holds in the case $d\ge 3 $ as well. This is shown using the same argument as above, the only difference being that, instead of partitioning the straight line segment with end points ${{a}}_-$ and ${{a}}_+$ into small subintervals $\Delta^*[{{z}}_k),$ we partition the base of the half-cylinder $V_3$ into small cubes (showing that the ‘boundary effects’ arising due to the ‘imperfection’ of such a partition of that ball will be negligible).

Recalling the representation $ {\chi} = (n - s u_G) {\kappa} + O(s^{-1}\gamma^2(s)) $ from Lemma 4 and setting

(62) \begin{equation} a(u) \,:\!= {\kappa}\Lambda''\bigg({{\beta}}\bigg(\frac{1}{u}\bigg)\bigg) {\kappa}^{\rm{T}} , \end{equation}

we see that, for $|n-s u_G|\le \gamma (s),$ we have

\begin{align} \exp \bigg\{-\frac{1}{2n}{\chi}\Lambda''\bigg({{\beta}}\bigg(\frac{1}{u}\bigg)\bigg) {\chi}^{\rm{T}} \bigg\} &=\exp \bigg\{ -\frac{1}{2n}[ a(u)( n - su_G)^2 + O(s^{-1}\gamma^3(s) )] \bigg\} \\ &= \exp \bigg\{- a(u)s\frac{(u- u_G)^2}{2u} + O( s^{-2} \gamma^3(s) ) \bigg\} \\ &= \exp \bigg\{- a(u)s\frac{ (u - u_G)^2}{2u}\bigg\}(1+o(1)) \\ &= \exp \bigg\{- a(u_G)s\frac{(u - u_G)^2}{2u} \bigg\}(1+o(1)), \end{align}

since $\gamma (s) =Ms^{1/2}$ , $M =o(s^{1/6}),$ and $|u - u_G| \leq Ms^{-1/2}$ for $n \in N_s,$ while the function a(u) is continuous.

Recalling (16), (50), and the fact that $n = su$ , we have

\begin{equation} \Xi(s,n) = \frac{{\rm e}^{-sD_u(\widehat{H}(1/u_G))}}{(2\pi s)^{d/2}u^{d/2}\sigma({{\beta}}(1/u))}. \end{equation}

We conclude that the first term on the RHS of (61), after the substitution $n = su,$ takes (up to the factor $1+o(1) $ ) the form

\begin{align} \pi_s(u) &\,:\!= \frac{1}{(2\pi s)^{d/2}u^{d/2}\sigma({{\beta}}(1/u))} \exp \bigg\{ -sD_u\bigg(\widehat{H}\bigg(\frac{1}{u_G}\bigg)\bigg) - a(u_G)s\frac{(u-u_G)^2}{2u} \bigg\} \\ &\times E\bigg(\frac{1}{u}, {\textbf{{0}}}, \widehat{H}_0\bigg(\frac{1}{u_G}\bigg)\bigg), \end{align}

and so in this part of the proof we aim to compute the sum

(63) \begin{equation} \sum_{n \in N_s} P_{3, n} = (1+o(1))\sum_{n \in N_s}\pi_s\bigg(\frac{n}{s}\bigg) + \sum_{n \in N_s}o(\Xi(s,n)). \end{equation}

To replace the first sum on the RHS of (63) by the respective integral with respect to $ u$ , we note that, for $0 \leq \theta \lt 1$ and $u \in [u_G - Ms^{-1/2}, u_G + Ms^{-1/2}] =: I_s,$ we have

\begin{equation} \frac{\pi_s(u + \theta/s)}{\pi_s(u)} = 1+ o(1). \end{equation}

This can be verified by an elementary calculation, using the continuity of ${{\beta}}(1/u) $ and $E(1/u, {\textbf{{0}}}, \widehat{H}_0(1/u_G)) $ in u, and also the fact that, by the mean value theorem,

\begin{equation} D_{u + \theta/s}\bigg(\widehat{H}\bigg(\frac{1}{u_G}\bigg)\bigg) = D_{u}\bigg(\widehat{H}\bigg(\frac{1}{u_G}\bigg)\bigg) + D'_{u}\bigg(\widehat{H}\bigg(\frac{1}{u_G}\bigg)\bigg)\bigg|_{u+\theta^*/s} \frac{\theta}{s} \end{equation}

for some $\theta^* \in (0, \theta),$ where $D'_{u}(\widehat{H}(1/u_G))|_{u+\theta^*/s} = o(1) $ uniformly in $u \in I_s$ since $D'_{u}(\widehat{H}(1/u_G))|_{u=u_G} = 0$ (cf. the proof of Lemma 3). Therefore, the first sum on the RHS of (63) equals

(64) \begin{equation} \tilde P_3 \,:\!= \frac{(1+o(1))\tilde{E}_{u_G}}{(2\pi)^{d/2} s^{d/2-1}} \int_{I_s} \exp \bigg\{ -sD_u\bigg(\widehat{H}\bigg(\frac{1}{u_G}\bigg)\bigg) - a(u_G)s\frac{(u-u_G)^2}{2u} \bigg\} \,{\rm d}u, \end{equation}

where we used the fact that

\begin{equation} \tilde{E}_u \,:\!= \frac{E(1/u, {\textbf{{0}}}, \widehat{H}_0(1/u_G))}{u^{d/2} \sigma({{\beta}}(1/u))} = (1+o(1))\tilde{E}_{u_G} \quad\text{for }u \in I_s. \end{equation}

To be able to now apply the Laplace method for evaluating the integral on the RHS of (64), we will need the following lemma.

Lemma 5. There exists a $\delta > 0$ such that the function $D_u(\widehat{H}(r_G)) $ is convex on the interval $ (u_G -\delta, u_G + \delta).$

Proof. First note that, in view of ( ${\rm C}_3(r_G) $ ), there is a $\delta > 0 $ such that ${{\beta}}(1/u) $ is well defined for $u \in (u_G - \delta, u_G + \delta).$ That the function $D_u(\widehat{H}(r_G)) = u\Lambda(\widehat{H}(r_G)/u) \equiv u\Lambda({{\beta}}(1/u)) $ is convex in u on that interval means that, for $u_1, u_2 \in (u_G -\delta, u_G + \delta) $ , $a \in (0, 1),$ and $u_0 \,:\!= au_1 + (1-a)u_2$ , we have

(65) \begin{equation} u_0\Lambda\bigg({{\beta}}\bigg(\frac{1}{u_0}\bigg)\bigg) \leq au_1\Lambda\bigg({{\beta}}\bigg(\frac{1}{u_1}\bigg)\bigg) + (1-a)u_2\Lambda\bigg({{\beta}}\bigg(\frac{1}{u_2}\bigg)\bigg). \end{equation}

Recall that ${{\beta}}(1/u) $ is the MPP of the set $\widehat{H}(r_G)/{u}$ and, as $\Lambda$ is convex, that point is located on the boundary $H(r_G)/{u}$ of that set by (13). By ( ${\rm D}_4$ ) (setting ${{v}}_k \,:\!= u_k {{\beta}}(1/u_k) $ in (21)), letting $ {{\beta}}_0 \,:\!= {au_1}{{\beta}}(1/u_1)/{u_0} + {(1-a)u_2}{{\beta}}(1/u_2)/{u_0}, $ we have

(66) \begin{equation} u_0\Lambda ( {{\beta}}_0 ) \leq au_1 \Lambda\bigg({{\beta}}\bigg(\frac{1}{u_1}\bigg)\bigg) + (1-a)u_2\Lambda\bigg({{\beta}}\bigg(\frac{1}{u_2}\bigg)\bigg). \end{equation}

On the other hand, as ${{\beta}}(1/u) \in H(r_G)/{u}$ , we also have

\begin{equation} \frac{au_1}{u_0}{{\beta}}\bigg(\frac{1}{u_1}\bigg) \in \frac{a}{u_0}H(r_G) \quad\text{and}\quad \frac{(1-a)u_2}{u_0}{{\beta}}\bigg(\frac{1}{u_2}\bigg) \in \frac{1-a}{u_0}H(r_G). \end{equation}

Hence, we conclude that ${{\beta}}_0 \in H(r_G)/{u_0}.$ However, ${{\beta}}(1/u_0) $ is the MPP of the ‘upper’ half-space $\widehat{H}(r_G)/{u_0},$ and, therefore, $\Lambda({{\beta}}(1/u_0)) \leq \Lambda({{\beta}}_0).$ Together with (66) this proves (65).

Now it follows that the function in the exponential in (64) is concave and continuously differentiable in a neighborhood of the point $u = u_G$ at which it attains its maximum value equal to $-sD_u(\widehat{H}(1/u_G)) = -sD(G) $ (by Lemma 3). Furthermore, there exist (see Equation (28) of [Reference Borovkov and Mogulskii10])

\begin{equation} \sigma^2_D \,:\!= \frac{{\rm d}^2}{{\rm d} u^2}D_u\bigg(\widehat{H}\bigg(\frac{1}{u_G}\bigg)\bigg) \bigg|_{u=u_G} > 0 \quad\text{and}\quad \frac{{\rm d}^2}{{\rm d} u^2} \bigg(\frac{(u-u_G)^2}{u} \bigg) \bigg|_{u= u_G} = \frac{2}{u_G}. \end{equation}

By the routine use of the Laplace method (see, e.g. Section 2.4 of [Reference Erdélyi11]), recalling that we let $M = M(s) \to \infty$ , we find that the integral in (64) equals

\begin{equation} (1+o(1))\,{\rm e}^{-sD(G)}\sqrt{\frac{2\pi}{s(\sigma^2_D+a(u_G)/u_G)}}. \end{equation}

Therefore, letting $\sigma^*_D \,:\!= \sqrt{\sigma^2_D + a(u_G)u_G^{-1}}$ , we have

(67) \begin{equation} \tilde P_3 = \frac{(1+o(1))E(1/u_G, {\textbf{{0}}}, \widehat{H}_0(1/u_G))}{(2\pi)^{(d-1)/2}u_G^{d/2} \sigma^*_D\sigma({{\alpha}}(1/u_G))} \frac{{\rm e}^{-sD(G)}}{s^{(d-1)/2}}. \end{equation}
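The leading-order Laplace formula used here, $\int {\rm e}^{-sg(u)}\,{\rm d}u \approx {\rm e}^{-sg(u_0)}\sqrt{2\pi/(sg''(u_0))}$ for an interior minimum $u_0$, can be checked numerically on a toy exponent with a known integral; the function $g(u) = u - \ln u$ below is a hypothetical stand-in for the actual exponent in (64):

```python
import math

def laplace_leading_term(g_u0, g2_u0, s):
    """Leading term of int exp{-s g(u)} du for an interior minimum u0:
    exp{-s g(u0)} * sqrt(2 pi / (s g''(u0)))."""
    return math.exp(-s * g_u0) * math.sqrt(2.0 * math.pi / (s * g2_u0))

# toy exponent g(u) = u - ln u on (0, inf): minimum at u0 = 1,
# g(1) = 1, g''(1) = 1, and the integral is known exactly:
#   int_0^inf exp{-s(u - ln u)} du = int_0^inf u^s e^{-su} du = Gamma(s+1)/s^(s+1)
s = 200.0
exact = math.exp(math.lgamma(s + 1.0) - (s + 1.0) * math.log(s))
approx = laplace_leading_term(1.0, 1.0, s)
ratio = exact / approx   # tends to 1 as s grows; the error is of order 1/(12 s)
```

By Stirling's series, the relative error of the leading term here is $\approx 1/(12s)$, mirroring the $1+o(1)$ factor in the displays above.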

It remains to compute the sum of the second terms $o(\Xi(s,n)) $ in (63) over $n \in N_s$ . Applying the Laplace method in the same way as when evaluating $\tilde P_3$ , we find that

(68) \begin{equation} \sum_{n \geq 1} \Xi(s,n) = O(\tilde P_3). \end{equation}

So the abovementioned sum of the remainders is $o(\tilde P_3). $ We conclude from (63) that

(69) \begin{equation} \sum_{n \in N_s}P_{3,n} = (1+o(1)) \tilde P_3. \end{equation}

Next we will bound the sum $\sum_{n \notin N_s}P_{3,n}$ . For a fixed $\gamma \in \mathcal{G}$ (to be chosen later, after (70); we will need a function growing faster than $Ms^{1/2}$ ), let

\begin{align} N^*_s \,:\!= \{n \in {\mathbb {N}}\colon Ms^{1/2} \lt |n - su_G| \leq \gamma(s) \}, \quad N^{**}_s \,:\!= \{n \in {\mathbb {N}}\colon |n - su_G| > \gamma(s) \}, \end{align}

and show that the sums of $P_{3,n}$ over $n \in N^*_s$ and $n \in N^{**}_s$ are both $o(\tilde P_3).$ These sums will have to be bounded in different ways, the sum over $N^{**}_s$ being easier to handle.

Consider the sum over $n\in N^{*}_s$ . It will again be easier to first explain the proof in the case $d=2$ ; it is extended to the general case using the same argument as presented after representation (61). By Corollary 1, for $n \in N^*_s,$ expression (58) becomes

\begin{align} P_{3,n} &= (1+o(1))\Xi(s,n)\sum_{k = -m}^{m-1} \bigg[\exp \bigg\{-\frac{1}{2n}{{x}}_k \Lambda'' \bigg({{\beta}}\bigg(\frac{1}{u}\bigg)\bigg) {{x}}_k^{\rm{T}} + O(\| {{x}}_k\|^3n^{-2})\bigg\} \\ &\times E\bigg(\frac{1}{u}, {{t}}_k, W(\Delta^*[{\textbf{{0}}}))\bigg)\bigg] + o(\Xi(s,n)), \end{align}

where the remainder term $o(\Xi(s,n) ) $ is the same as that in (58). It will turn out that, for $n \in N^*_s,$ the values of ${{x}}_k$ will be large enough to ensure the desired result due to the quadratic term in the exponential in the sum.

Recall that ${{x}}_k = {{t}}_k + s{{g}} - n{{\beta}}(1/u) $ . Since $\|s{{g}} - n{{\beta}}(1/u) \| \lt c\gamma(s) $ for $n \in N^*_s$ by Lemma 4 and $\|{{t}}_k \| \le M \ln s,\,k = -m, \ldots, m,$ we have $\|{{x}}_k \| \lt c_1\gamma(s),\,k = -m, \ldots, m$ . It is not hard to verify that relation (59) holds for $\|{{z}}\| \lt \gamma(s) $ as well. Therefore, setting

\[ \Upsilon (n,s,{{t}})\,:\!=\frac{1}{2n}({{t}} - {\chi}) \Lambda''\bigg({{\beta}}\bigg(\frac{1}{u}\bigg)\bigg) ({{t}} - {\chi})^{\rm{T}} \]

and following steps similar to those used to obtain (60), we have, for $n \in N^*_s,$

(70) \begin{equation} P_{3,n} = (1+o(1)) \Xi(s,n) \int_{\Delta^*_0[{{a}}_-)} {\rm e}^{- \Upsilon (n,s,{{t}}) + O(\| {{\chi}}\|^3 n^{-2})} \rho_u({{t}})\,\mu({\rm d}{{t}}) + o(\Xi(s,n)). \end{equation}

Now choose $\gamma (s)\,:\!= s^{5/8}$ and $M\,:\!=s^{1/10}$ (thus ensuring that $\gamma (s) \gg Ms^{1/2}$ and $M=o(s^{1/6}) $ , as required), and consider the first factor in the integrand. By Lemma 1, we have $\| {\chi}\|^3 n^{-2} = O(\gamma^3(s)s^{-2})= o(1) $ for $n\in N^*_s$ . Furthermore, as $\|{{t}}\|\le M\ln s,$ due to the same lemma, using a computation similar to that following (62), for n from the same range, we have

\begin{align} \Upsilon (n,s,{{t}}) & =\frac1{2n} (n-su_G)^2 a(u) + O(s^{-2}\gamma^3 (s)+s^{-1}\gamma (s)M\ln s) \\ &\ge \frac{a(u)}{2u} M^2 +o(1) \\ &= \frac{a(u_G)}{2u_G}M^2 (1+o(1)) \\ &\ge c_0 M^2 \end{align}

for some $c_0>0,$ as $a(u)/u \to a(u_G)/u_G > 0.$

Now recalling that ${{a}}_- = -(M\ln s){{e}}, \Delta_0 = 2M\ln s,$ and $M \to \infty$ , we see that the expression on the RHS of (70) does not exceed

\begin{align} &(1+o(1))\Xi(s,n)\,{\rm e}^{-c_0 M^2} \int_{\Delta^*_0[{{a}}_-)} \rho_u({{t}})\,\mu({\rm d}{{t}}) + o(\Xi(s,n)) \\ &\quad= \bigg(E\bigg(\frac{1}{u}, {\textbf{{0}}}, \widehat{H}_0\bigg(\frac{1}{u_G}\bigg)\bigg) + 1\bigg) o(\Xi(s,n)) \\ &\quad= o(\Xi(s,n)) \end{align}

as $E(1/u, {\textbf{{0}}}, \widehat{H}_0(1/u_G)) \lt \infty.$ Therefore, it follows from (68) that

(71) \begin{align} \sum_{n \in N^*_s} P_{3,n} = o\bigg(\sum_{n \in N^*_s} \Xi(s,n) \bigg) = o\bigg(\sum_{n \geq 1} \Xi(s,n) \bigg) = o( \tilde P_3). \end{align}

This bound is obtained in the case $d\ge 3$ in exactly the same way, using the same change in the argument as described in the paragraph following (61).

It remains to evaluate the term $\sum_{n \in N^{**}_s} P_{3,n}.$ From (57) and Chebyshev’s exponential inequality, we have

\begin{equation} P_{3,n} \leq {\mathbb {P}}\bigg( {{S}}(n) \in s\widehat{H}\bigg(\frac{1}{u_G}\bigg) \bigg) \leq {\rm e}^{-n\Lambda({{\beta}}(1/u))} = {\rm e}^{-sD_u(\widehat{H}(1/u_G))}. \end{equation}

Recall that $D_u(\widehat{H}(r_G)) $ is convex in a neighborhood of $u_G$ and attains its minimum at $u_G$ , with $ ({\rm d}/{{\rm d} u})D_u(\widehat{H}(r_G))|_{u=u_G} = 0$ and $ ({{\rm d}^2}/{{\rm d} u^2}) D_u(\widehat{H}(r_G))|_{u = u_G} = \sigma^2_D > 0$ . Setting $n_1 \,:\!= |n - u_Gs|$ , we see that, for some $\delta > 0,$ for our chosen $\gamma (s)=s^{5/8} $ and some $c_k \in (0, \infty),\,1\le k \le 5,$ we have the bounds

\begin{align} \sum_{n \in N^{**}_s} {\rm e}^{-sD_{n/s}(\widehat{H}(1/u_G))} &\leq 2{\rm e}^{-sD(G)}\bigg(\sum_{\gamma(s) \lt n_1 \leq \delta s}{\rm e}^{-c_1n_1^2/s} + \sum_{ n_1 > \delta s} {\rm e}^{-c_3s\delta^2 - c_2s(n_1-\delta s)} \bigg) \\ &\leq 2{\rm e}^{-sD(G)} \bigg(\frac{c_4s}{\gamma(s)}{\rm e}^{-c_1\gamma^2(s)/s } + c_5{\rm e}^{-c_3s\delta^2}\bigg) \\ &=o(\tilde P_3). \end{align}

Together with (57), (67), (69), and (71), that leads to

\begin{equation} P_3 = A s^{-(d-1)/2}{\rm e}^{-sD(G)}(1+o(1)), \end{equation}

where

(72) \begin{equation} A\,:\!= \frac{E(1/u_G, {\textbf{{0}}}, \widehat{H}_0(1/u_G))}{(2\pi)^{(d-1)/2}u_G^{d/2} \sigma^*_D\sigma({{\alpha}}(1/u_G))}. \end{equation}

Together with (55) and (56), this completes the proof of Theorem 1.

4. A numerical example

To illustrate our main result, we will present the outcome of a simulation study where we used an importance sampling algorithm to obtain Monte Carlo estimates for ${\mathbb {P}}(\eta (sG)\lt\infty) $ for a range of s values in the case of a bivariate RW with a normal jump distribution.

The estimate is based on the change-of-measure representation

\[ {\mathbb {P}}(\eta(sG)\lt\infty) ={\mathbb {E}}_{{{\lambda}}}\, {\rm e}^{-\langle{{\lambda}}, {{S}}(\eta(sG))\rangle} , \]

where ${\mathbb {E}}_{{{\lambda}}}$ is the expectation with respect to the probability measure ${\mathbb {P}}_{{{\lambda}}},$ under which the ${\xi}_i$ are i.i.d. random vectors with distribution $F_{{{\lambda}}},$ and ${\lambda}$ is chosen so that

(73) \begin{equation} \psi ({\lambda})=1, \quad {\mathbb {P}}_{{{\lambda}}}(\eta(sG) \lt \infty)=1. \end{equation}

We took F to be the bivariate normal distribution $N({\mu},\Sigma) $ with a nondegenerate $\Sigma$ , in which case clearly $ \psi ({\lambda}) = \exp\{{\mu} {\lambda}^\top + \smash{\tfrac12} {\lambda} \Sigma {\lambda}^\top\}, $ and the first relation in (73) is satisfied on an ellipse passing through the origin. Furthermore, we can easily show that here $\Lambda ({{\alpha}})=\smash{\tfrac12} ({{\alpha}}- {\mu}) \Sigma^{-1} ( {{\alpha}}- {\mu})^\top$ and, given that condition ( ${\rm C}_3(r_G) $ ) is satisfied (so that, in particular, $D(G)=D({{g}}) $ ), we have

\[ D(G) = \frac1{2t({{g}})}( t({{g}}){{g}}- {\mu}) \Sigma^{-1} ( t({{g}}){{g}}- {\mu})^\top, \]

where $t({{g}})$ solves the equation ${\rm d}(\Lambda (t{{g}})/t)/{\rm d} t =0.$
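In the Gaussian case, this minimization can be carried out in closed form (the shorthand a, b, c below is introduced here purely for the reader's convenience). Since $\Lambda (t{{g}}) = \tfrac12 (t^2 a - 2tb + c)$ with $a\,:\!= {{g}}\Sigma^{-1}{{g}}^\top,$ $b\,:\!= {{g}}\Sigma^{-1}{\mu}^\top,$ and $c\,:\!= {\mu}\Sigma^{-1}{\mu}^\top,$ one has

\[ \frac{\Lambda(t{{g}})}{t} = \frac{at}{2} - b + \frac{c}{2t}, \]

whose derivative in t vanishes at $t({{g}})=\sqrt{c/a},$ yielding $D(G)=\sqrt{ac}-b.$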

For our numerical example, we chose

\[ {\mu}\,:\!= (-0.5, -0.3), \quad \Sigma \,:\!= \begin{pmatrix} 1 & 0.4\sqrt{ 0.8} \\ 0.4\sqrt{ 0.8} & 0.8 \end{pmatrix}, \quad {{g}}\,:\!= (1.5, 2). \]

It is easy to verify that conditions (${\rm C}_1$), (${\rm C}_2$), and (${\rm C}_3(r_G)$) are met in this case. Next we had to choose a ${\lambda}$ satisfying (73); we took ${\lambda}^*\,:\!=(0.533\,131\,5, 0.710\,842\,0)$ (in which case $\psi({\lambda}^*)-1\approx 2.6\times 10^{-8}$). A routine computation yields $D(G)\approx 2.229\,39.$
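These numerical values are easy to reproduce. The following snippet (an independent sanity check, not the code used for the study) computes $t({{g}})$ and $D(G)$ from the closed-form Gaussian expressions for $\Lambda$ above and confirms that $\psi({\lambda}^*)\approx 1$ for the quoted ${\lambda}^*$:

```python
import numpy as np

# Model parameters from the numerical example in the text
mu = np.array([-0.5, -0.3])
rho = 0.4 * np.sqrt(0.8)
Sigma = np.array([[1.0, rho], [rho, 0.8]])
g = np.array([1.5, 2.0])

Sinv = np.linalg.inv(Sigma)

# Lambda(alpha) = (alpha - mu) Sinv (alpha - mu)^T / 2, so that
# Lambda(t g)/t = a t/2 - b + c/(2 t) with the coefficients below
a = g @ Sinv @ g
b = g @ Sinv @ mu
c = mu @ Sinv @ mu

t_star = np.sqrt(c / a)        # minimiser t(g) of Lambda(t g)/t
D_G = np.sqrt(a * c) - b       # second rate function D(G), approx. 2.22939

# The lambda* quoted in the text should satisfy psi(lambda*) = 1
lam = np.array([0.5331315, 0.7108420])
psi = np.exp(mu @ lam + 0.5 * lam @ Sigma @ lam)

print(t_star, D_G, psi)
```

Running this reproduces $D(G)\approx 2.229\,39$ and shows that $\psi({\lambda}^*)$ equals 1 up to the rounding of the quoted ${\lambda}^*.$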

We simulated $5\times 10^4$ trajectories of ${{S}}^{({{\alpha}})}$ with $ {{\alpha}}\,:\!={{\alpha}}( {\lambda}^*)=(0.287\,450\,0,0.459\,412\,5). $ For each trajectory, we simulated the first 350 steps (which was always enough to hit sG with $s=15$ in our experiment), testing at each step whether the RW ${{S}}^{({{\alpha}})}$ had hit sG for each $s=7+0.02k, \,k=0,1,\ldots, 400.$ Taking the sample means of $e^{-\langle{\lambda}^* ,{{S}}(\eta(sG))\rangle}$ then yielded simultaneous estimates of ${\mathbb {P}}(\eta(sG)\lt \infty)$ for all s-values from the above grid.
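This procedure can be sketched in a few lines (a simplified, vectorised illustration rather than the code actually used: it treats the single value $s=7$, uses a smaller sample of 5000 paths, and takes $sG$ to be the set of points dominating $s{{g}}$ componentwise):

```python
import numpy as np

rng = np.random.default_rng(0)

mu = np.array([-0.5, -0.3])
rho = 0.4 * np.sqrt(0.8)
Sigma = np.array([[1.0, rho], [rho, 0.8]])
g = np.array([1.5, 2.0])
lam = np.array([0.5331315, 0.7108420])
alpha = mu + Sigma @ lam            # tilted drift alpha(lambda*)

s, n_paths, n_steps = 7.0, 5000, 350

# Simulate all tilted trajectories at once: under P_lambda the jumps
# of the Gaussian walk are N(alpha, Sigma)
steps = rng.multivariate_normal(alpha, Sigma, size=(n_paths, n_steps))
S = steps.cumsum(axis=1)

# First entry time eta(sG) of each path into the translated orthant sG
in_sG = (S >= s * g).all(axis=2)
assert in_sG.any(axis=1).all()      # every tilted path hits sG within n_steps
eta = in_sG.argmax(axis=1)

# Importance sampling weights e^{-<lambda*, S(eta)>} and their mean
weights = np.exp(-(S[np.arange(n_paths), eta] @ lam))
estimate = weights.mean()           # unbiased estimate of P(eta(sG) < infinity)
print(estimate)
```

At $s=7$ the resulting estimate should be of order $10^{-8},$ in line with $As^{-1/2}e^{-sD(G)}\approx 2\times 10^{-8}$ for the fitted $A\approx 0.3396.$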

In Figure 4 we present the ratio of the main term $As^{-1/2}e^{-sD(G)}$ on the RHS of (25) to the Monte Carlo estimates for $s\in [7,15],$ together with the 99% confidence intervals (obtained as discussed in [Reference Asmussen and Albrecher1, p. 463]). As computing the theoretical value of A is somewhat cumbersome, for the purposes of the present illustration we used the value of A obtained by fitting the simulation data (which yielded $A\approx 0.3396$), concentrating on verifying the functional form of (25). Fitting that formula to the values of the Monte Carlo estimates yielded $D(G)\approx 2.229\,54$ (so that the relative error for the second rate function is less than $10^{-4}$). The plot shows remarkable stability for the ratio, thus confirming the validity of our main result.

Figure 4 The ratio of the main term in the theoretical asymptotics (25) for ${\mathbb {P}}(\eta (sG)\lt \infty) $ for the normal RW to the Monte Carlo estimates, together with the ends of the 99% confidence intervals thereof, for $s\in [7,15].$

Acknowledgements

This research was funded partially by the Australian Government through the Australian Research Council’s Discovery Projects funding scheme (project DP150102758). Y. Pan was also supported by the Australian Postgraduate Award and the School of Mathematics and Statistics at The University of Melbourne. The authors are grateful to the anonymous referee for comments that helped to improve the exposition of the paper.

References

Asmussen, S. and Albrecher, H. (2010). Ruin Probabilities, 2nd edn. World Scientific, Hackensack, NJ.
Avram, F., Palmowski, Z. and Pistorius, M. R. (2008). Exit problem of a two-dimensional risk process from the quadrant: exact and asymptotic results. Ann. Appl. Prob. 18, 2421–2449.
Borovkov, A. A. (1995). On the Cramér transform, large deviations in boundary value problems, and the conditional invariance principle. Siberian Math. J. 36, 417–434.
Borovkov, A. A. (1996). On the limit conditional distributions connected with large deviations. Siberian Math. J. 37, 635–646.
Borovkov, A. A. (2013). Probability Theory, 2nd edn. Springer, London.
Borovkov, A. A. and Mogulskii, A. A. (1992). Large deviations and testing statistical hypotheses. I. Large deviations of sums of random vectors. Siberian Adv. Math. 2, 52–120.
Borovkov, A. A. and Mogulskii, A. A. (1996). The second rate function and the asymptotic problems of renewal and hitting the boundary for multidimensional random walks. Siberian Math. J. 37, 647–682.
Borovkov, A. A. and Mogulskii, A. A. (1998). Integro-local limit theorems including large deviations for sums of random vectors. Theory Prob. Appl. 43, 1–12.
Borovkov, A. A. and Mogulskii, A. A. (2000). Integro-local limit theorems including large deviations for sums of random vectors. II. Theory Prob. Appl. 45, 3–22.
Borovkov, A. A. and Mogulskii, A. A. (2001). Limit theorems in the boundary hitting problem for a multidimensional random walk. Siberian Math. J. 42, 245–270.
Erdélyi, A. (2010). Asymptotic Expansions. Dover, New York.
Figure 1 Auxiliary constructions: the level line L(r) of $\Lambda,$ its scaled version nL(r), the respectively tangent straight lines rH(r) and sH(r), and other related objects (case $d = 2, r = s/n$).

Figure 2 The auxiliary sets $V_{j\pm},\,j = 1, 2,$ and $V_3$ in the case ${\mathbb {E}} {\rm{\xi}} \in -Q^+$.

Figure 3 The auxiliary sets $V_{j\pm},\,j = 1, 2,$ and $V_3$ in the case ${\mathbb {E}} {\rm{\xi}} \notin -Q^+$.