
Deviation bounds for the first passage time in the frog model

Published online by Cambridge University Press:  22 July 2019

Naoki Kubota*
Affiliation:
College of Science and Technology, Nihon University
*
*Postal address: College of Science and Technology, Nihon University, 24-1, Narashinodai 7-chome, Funabashi-shi, Chiba 274-8501, Japan. Email address: kubota.naoki08@nihon-u.ac.jp

Abstract

We consider the so-called frog model with random initial configurations. The dynamics of this model are described as follows. Particles are randomly assigned to the sites of the multidimensional cubic lattice. Initially, only the particles at the origin are active and these independently perform simple random walks. The other particles are sleeping and do not move at first. When sleeping particles are hit by an active particle, they become active and start moving in a similar fashion. The aim of this paper is to derive large deviation and concentration bounds for the first passage time at which an active particle reaches a target site.

Type
Original Article
Copyright
© Applied Probability Trust 2019 

1. Introduction

1.1. The model

For d ≥ 2, we write ℤd for the d-dimensional cubic lattice. Let $\omega = (\omega (x))_{{x \in \mathbb Z{^d}}}$ be independent random variables with a common law on ${{\mathbb N}_0}: = {\mathbb N} \cup \{ 0\} $, not concentrated at zero. Furthermore, independently of ω, let $ ({S_k}(x,\ell ))_{k = 0}^\infty , x \in {{\mathbb Z}^d}, \ell \in {\mathbb N} $, be independent simple random walks on ℤd with ${S_0}(x,\ell ) = x$. For any x, y ∈ ℤd, we now introduce the first passage time T(x, y) from x to y as follows:

$$T(x,y): = \inf \left\{ {\sum\limits_{i = 0}^{m - 1} \tau ({x_i},{x_{i + 1}}):m \ge 1, {x_0} = x, {x_m} = y, {x_1}, \ldots ,{x_{m - 1}} \in \mathbb Z{^d}} \right\},$$(1.1)

where

$$\tau ({x_i},{x_{i + 1}}): = \inf \{ k \ge 0:{S_k}({x_i},\ell ) = {x_{i + 1}}{{\rm for}\;{\rm some}}1 \le \ell \le \omega ({x_i})\} ,$$

with the convention that τ(x i, x i+1): = ∞ if ω(x i) = 0. The fundamental object of study is the first passage time T(0, x) conditioned on the event {ω(0) ≥ 1}. Its intuitive meaning is as follows. We now regard simple random walks as ‘frogs’ and ω stands for an initial configuration of frogs, i.e. ω(y) frogs sit on each site y (there is no frog at y if ω(y) = 0). Suppose that the origin 0 is occupied by at least one frog. They are active and independently perform simple random walks, but the other frogs are sleeping and do not move at first. When sleeping frogs are attacked by an active frog, they become active and start doing independent simple random walks. Then T(0, x) describes the first passage time at which an active frog reaches a site x.
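
As a purely illustrative aside (not used in any argument below), the dynamics just described are easy to simulate. The following Python sketch assumes an i.i.d. Poisson(λ) initial configuration, conditions on ω(0) ≥ 1 by resampling at the origin, and runs the process until an active frog reaches a given target site; the function name, the Poisson law, and all parameter values are illustrative choices and are not taken from the paper.

```python
import math
import random

def frog_first_passage(target=(10, 0), lam=1.0, d=2, max_steps=5000, seed=0):
    """Simulate the frog model on Z^d with an i.i.d. Poisson(lam) initial
    configuration, conditioned on omega(0) >= 1, and return the first time
    an active frog reaches `target` (or None if max_steps is exceeded)."""
    rng = random.Random(seed)

    def sample_omega():
        # Poisson(lam) by inversion of the CDF (adequate for small lam).
        u, k, p = rng.random(), 0, math.exp(-lam)
        cdf = p
        while u > cdf:
            k += 1
            p *= lam / k
            cdf += p
        return k

    origin, target = (0,) * d, tuple(target)
    w0 = sample_omega()
    while w0 == 0:                       # condition on the origin being occupied
        w0 = sample_omega()
    omega = {origin: w0}                 # initial configuration, sampled lazily
    active = [origin] * w0               # positions of the currently active frogs
    if origin == target:
        return 0

    for t in range(1, max_steps + 1):
        moved = []
        for pos in active:
            i = rng.randrange(d)         # one simple-random-walk step
            step = 1 if rng.random() < 0.5 else -1
            new_pos = pos[:i] + (pos[i] + step,) + pos[i + 1:]
            if new_pos == target:
                return t                 # an active frog has reached the target
            if new_pos not in omega:     # first visit: wake the sleeping frogs there
                omega[new_pos] = sample_omega()
                moved.extend([new_pos] * omega[new_pos])
            moved.append(new_pos)
        active = moved
    return None

if __name__ == "__main__":
    print(frog_first_passage())
```

Averaging the returned value over many seeds gives a crude Monte Carlo estimate of the conditional mean of T(0, x) for the chosen law of ω.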

The frog model was originally introduced by Ravishankar, and its idea comes from the following picture of information spreading. Suppose that every active frog carries some information. When it hits sleeping frogs, the information is shared between them. Active frogs move freely and play a role in spreading the information. Recent interest in the frog model has concentrated on recurrence and transience of frogs (see Subsection 1.3 for details), and there are few results on the behavior of the first passage time apart from [Reference Alves, Machado and Popov3], [Reference Alves, Machado, Popov and Ravishankar4], and [Reference Johnson and Junge22]. (Recently, the first passage time has also been studied in a Euclidean setting [Reference Beckman6].) However, in view of the information spreading, it is important to investigate the behavior of the first passage time precisely. For this purpose, in this paper, we establish nontrivial deviation bounds for the first passage time (see Theorems 1.1, 1.2, and 1.3 below).

1.2. Main results

Let us first mention the results obtained by Alves et al. [Reference Alves, Machado, Popov and Ravishankar4] to describe our main results. First, note that the first passage time is subadditive:

$$T(x,z) \le T(x,y) + T(y,z),\quad \quad x,y,z \in {{\mathbb Z}^d}.$$

Alves et al. [Reference Alves, Machado, Popov and Ravishankar4, Section 2, Theorem 1.1, Steps 1–6] obtained the following asymptotic behavior of the first passage time. There exists a norm μ(·) (which is called the time constant) on ℝd such that almost surely on the event {ω(0) ≥ 1},

$$\mathop {\lim }\limits_{{\scriptstyle \parallel x{\parallel _1} \to \infty \atop \scriptstyle x \in {{\mathbb Z}^d}} } \frac{{T(0,x) - \mu (x)}}{{\parallel x{\parallel _1}}} = 0,$$(1.2)

where ‖·‖1 is the 1-norm on ℝd. Furthermore, μ(·) is invariant under permutations of the coordinates and under reflections in the coordinate hyperplanes, and satisfies

$$\parallel x{\parallel _1} \le \mu (x) \le \mu ({\xi _1})\parallel x{\parallel _1},\quad \quad x \in {{\mathbb R}^d},$$(1.3)

where ξ 1 is the first coordinate vector of ℝd. The first inequality in (1.3) is a consequence of (1.2). Indeed, since T(0, kx) ≥ ‖kx1 for x ∈ ℤd, we have almost surely on the event {ω(0) ≥ 1},

$$\mu (x) = \mathop {\lim }\limits_{k \to \infty } \frac{1}{k}T(0,kx) \ge \parallel x{\parallel _1},\quad \quad x \in {{\mathbb Z}^d}.$$

Hence, the first inequality in (1.3) is valid for x ∈ ℤd and we can easily extend it to the case x ∈ ℝd. On the other hand, the second inequality in (1.3) follows from the properties of the time constant.
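
For instance, writing x = x 1ξ 1 + ··· + x dξ d, the second inequality in (1.3) can be spelled out using the triangle inequality for the norm μ(·) together with its invariance under permutations and reflections of the coordinates:

$$\mu (x) \le \sum\limits_{i = 1}^d |{x_i}|\,\mu ({\xi _i}) = \mu ({\xi _1})\sum\limits_{i = 1}^d |{x_i}| = \mu ({\xi _1})\parallel x{\parallel _1}.$$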

To prove (1.2), roughly speaking, Alves et al. applied the subadditive ergodic theorem to the process T(ix, jx), 0 ≤ i < j, with ω(ix), ω(jx) ≥ 1. This method requires the integrability of the first passage time. Thus, in [4, Lemmata 2.2 and 2.3], they first derived the following tail estimate. There exist constants 0 < C 1, C 2 < ∞ and 0 < α 1 < 1 such that, for all x ∈ ℤd and $t \ge \parallel x\parallel _1^4$,

$${\mathbb P}(T(0,x) \ge t \mid \omega (0) \ge 1) \le {C_1}{{{\rm e}}^{ - {C_2}{t^{{\alpha _1}}}}}.$$(1.4)

Our main results are the following upper and lower large deviations for the first passage time. Throughout this paper, we write ${\mathbb P}: = {\mathbb P}( \cdot \mid \omega (0) \ge 1)$ to shorten notation.

Theorem 1.1

There exists a constant 0 < α 2 < 1 such that, for all ε > 0,

$$\mathop {\limsup }\limits_{{\scriptstyle \parallel x{\parallel _1} \to \infty \atop \scriptstyle x \in {{\mathbb Z}^d}} } \frac{1}{{\parallel x\parallel _1^{ {\alpha _2}}}}\log {\mathbb P}(T(0,x) \ge (1 + \varepsilon )\mu (x)) < 0.$$

Theorem 1.2

If ${\mathbb E}[\omega (0)] < \infty $ then there exists a constant 0 < α 3 < 1 such that, for all ε > 0,

$$\mathop {\limsup }\limits_{{\scriptstyle \parallel x{\parallel _1} \to \infty \atop \scriptstyle x \in {{\mathbb Z}^d}} } \frac{1}{{\parallel x\parallel _1^{{\alpha _3}}}}\log {\mathbb P}(T(0,x) \le (1 - \varepsilon )\mu (x)) \lt 0.$$

From the above theorems, we can expect the existence of the optimal speeds for the upper and lower large deviations, i.e. there exist exponents β, β′ ∈ (0, 1) such that

$$\frac{1}{{\parallel x\parallel _1^\beta }}\log {\mathbb P}(T(0,x) \ge (1 + \varepsilon )\mu (x))\quad {\text{and}}\quad \frac{1}{{\parallel x\parallel _1^{\beta '}}}\log {\mathbb P}(T(0,x) \le (1 - \varepsilon )\mu (x))$$(1.5)

converge to some strictly negative constants as $\parallel x{\parallel _1} \to \infty ,x \in {{\mathbb Z}^d}$. Let us briefly discuss this problem here and observe that the optimal speed (if it exists) has to be ‖x‖1 raised to some power in (0, 1) under some assumptions. For simplicity of notation, denote by ${\cal I}$ the random set of all sites of ℤd which frogs initially occupy, i.e.

$${\cal I} := \{ x \in {{\mathbb Z}^d}:\omega (x) \ge 1\} .$$

First observe the upper large deviation of T(0, x). Let x ∈ ℤd with ‖x‖1 ≥ 1 and consider the event A that ${({S_ \cdot }(0,\ell ))_{1 \le \ell \le \omega (0)}}$ and ${({S_ \cdot }({\xi _1},\ell ))_{1 \le \ell \le \omega ({\xi _1}) \vee 1}}$ stay inside the set {0, ξ 1} until time $ (1 + \varepsilon )\mu ({\xi _1})\parallel x{\parallel _1}$. We divide ${\mathbb P}(A \cap \{ 0 \in {\cal I}\} )$ into two terms:

$${\mathbb P}(A \cap \{ 0 \in {\cal I}\} ) = {\mathbb P}(A \cap \{ 0 \in {\cal I},\; {\xi _1} \notin {\cal I}\} ) + {\mathbb P}(A \cap \{ 0,{\xi _1} \in {\cal I}\} ).$$(1.6)

Since each simple random walk jumps to one of its nearest neighbors at each step, the first term on the right-hand side of (1.6) is equal to

$$\eqalign{ \sum\limits_{L = 1}^\infty {\mathbb P}(\omega (0) & = L)\,{\mathbb P}({\xi _1} \notin {\cal I})\,{(2d)^{ - (L + 1)(1 + \varepsilon )\mu ({\xi _1})\parallel x{\parallel _1}}} \cr & = {\mathbb E}\left[ {{{(2d)}^{ - (\omega (0) + 1)(1 + \varepsilon )\mu ({\xi _1})\parallel x{\parallel _1}}}{\bf 1}\{ 0 \in {\cal I},\ {\xi _1} \notin {\cal I}\} } \right].} $$

Similarly, the second term on the right-hand side of (1.6) is equal to

$$\mathbb E \left[ {{{(2d)}^{ - (\omega (0) + \omega ({\xi _1}))(1 + \varepsilon )\mu ({\xi _1})\parallel x{\parallel _1}}}{\bf 1}\{ 0,{\xi _1} \in {\cal I}\} } \right].$$

Therefore, we have

$$\mathbb P(A) \ge \mathbb E\left[ {{{(2d)}^{ - (\omega (0) + \omega ({\xi _1}) + 1)(1 + \varepsilon )\mu ({\xi _1})\parallel x{\parallel _1}}}} \right].$$

Note that by (1.3), on the event $A \cap \{ 0 \in {\cal I}\} $,

$$T(0,x) \ge \left\lceil(1 + \varepsilon )\mu ({\xi _1}) \right\rceil \parallel x{\parallel _1} \ge (1 + \varepsilon )\mu (x),$$

and Jensen’s inequality yields

$$\mathop {\liminf}\limits_{{\scriptstyle \parallel x{\parallel _1} \to \infty \atop \scriptstyle x \in {\mathbb Z^d}} } {1 \over {\parallel x{\parallel _1}}}\log \mathbb P(T(0,x) \ge (1 + \varepsilon )\mu (x)) \ge - (2 \mathbb E[\omega (0)] + 1)(1 + \varepsilon )\mu ({\xi _1})\log (2d).$$
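
In more detail, the Jensen step can be spelled out as follows (with the unconditional expectation, and using that ω(ξ 1) has the same law as ω(0)): since r ↦ (2d)−rc is convex for every c > 0,

$$\mathbb E\left[ {{{(2d)}^{ - (\omega (0) + \omega ({\xi _1}) + 1)(1 + \varepsilon )\mu ({\xi _1})\parallel x{\parallel _1}}}} \right] \ge {(2d)^{ - (2\mathbb E[\omega (0)] + 1)(1 + \varepsilon )\mu ({\xi _1})\parallel x{\parallel _1}}},$$

and taking logarithms and dividing by ‖x‖1 gives the stated lower bound.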

This bound does not guarantee the existence of the optimal speed for the upper large deviation and has no meaning if ${\mathbb E}[\omega (0)] = \infty $. However, in the case where ${\mathbb E}[\omega (0)] < \infty $, the above bound combined with Theorem 1.1 shows that the exponent β appearing in (1.5) lies in [α 2, 1] (if the optimal speed exists for the upper large deviation).

Next we treat the lower large deviation of T(0, x). Note that Theorem 1.2 of [Reference Alves, Machado, Popov and Ravishankar4] states that if there exists δ ∈ (0, d) such that ${\mathbb P}(\omega (0) \ge t) \ge {(\log t)^{ - \delta }}$ holds for all large t then we have μ(x) = ‖x‖1. Our lower large deviation (Theorem 1.2) does not treat this situation because of the assumption of finite mean of ω(0). However, for now, we do not know whether μ(x) = ‖x‖1 holds even if ω(0) has a finite mean, and we have to consider the speed of the lower large deviation separately for the directions x ∈ ℤd with μ(x) < (1 − ε)−1‖x‖1 and with μ(x) ≥ (1 − ε)−1‖x‖1. In the case where μ(x) < (1 − ε)−1‖x‖1, we have $T(0,nx) \ge \parallel nx{\parallel _1} > (1 - \varepsilon )\mu (nx)$ and

$${\mathbb P}(T(0,nx) \le (1 - \varepsilon )\mu (nx)) = 0.$$

Hence, if μ(x) < (1 − ε)−1‖x‖1 then we do not determine the optimal speed of the lower large deviation for the direction x in the sense of (1.5). On the other hand, in the case where μ(x) ≥ (1 − ε)−1‖x‖1, we have (1 − ε)μ(nx) ≥ ‖nx‖1. Fix a self-avoiding nearest-neighbor path (0 = v 0, v 1, …, v m = nx) with minimal length m = ‖nx‖1, and let A′ be the event that S k(0,1) = v k for all 0 ≤ k ≤ m. It holds that ${\mathbb P}(A') = (2d){^{ - \parallel nx{\parallel _1}}}$ and T(0,nx) ≤ (1 − ε)μ(nx) on the event $A' \cap \{ 0 \in {\cal I}\} $. Hence,

$$\mathop {\liminf}\limits_{n \to \infty } \frac{1}{{n\parallel x{\parallel _1}}}\log {\mathbb P}(T(0,nx) \le (1 - \varepsilon )\mu (nx)) \ge - \log (2d).$$

This combined with Theorem 1.2 proves that in the case where μ(x) ≥ (1 − ε)−1x1, the exponent β′ appearing in (1.5) lies in [α 3, 1] (if the optimal speed of the lower large deviation exists for the direction x).

Optimizing the speeds for the above large deviations may be difficult in general because the propagation of active frogs depends on the behavior of the simple random walk and the initial configuration of the frogs. From [Reference Hughes21, pp. 333, 338], the average cardinality of the set {S k(0,1): 0 ≤ k ≤ n} is of order n/log n if d = 2, and of order n if d ≥ 3. This means that an active frog wakes up more sleeping frogs in d ≥ 3 than in d = 2. Moreover, regardless of the dimension, active frogs are generated easily if each site of ℤd initially carries many sleeping frogs. Thus, the first passage time and the time constant seem to be strongly related to the dimension d and the law of the initial configuration ω. At present we do not have enough information to determine the optimal speed of the large deviations and would like to address these problems in future research.

Our key tool to prove Theorems 1.1 and 1.2 is the modified first passage time defined as follows: For any x ∈ ℤd, let x* be the closest point to x in ${\cal I}$ for the 1-norm, with a deterministic rule to break ties. Then, the modified first passage time T*(x, y) is given by

$${T^*}(x,y):= T({x^*},{y^*}).$$

By definition, the subadditivity is inherited from the original first passage time:

$${T^*}(x,z) \le {T^*}(x,y) + {T^*}(y,z),\quad \quad x,y,z \in {\mathbb Z^d}.$$

A particular difference between T(x, y) and T*(x, y) is that T(x, y) is inevitably equal to ∞ if ω(x) = 0, but T*(x, y) is almost surely finite. Moreover, we can derive the following concentration inequality for T*(0, x).

Theorem 1.3

Assume that 𝔼[ω(0)] < ∞. For all γ > 0, there exist constants 0 < C 3, C 4, C 5 < ∞ and 0 < α 4 < 1 such that, for all x ∈ ℤd\{0} and ${C_3}{(1 + \log \parallel x{\parallel _1})^{1/{\alpha _4}}} \le t \le \gamma \sqrt {\parallel x{\parallel _1}} $,

$$\mathbb P \left( {|{T^*}(0,x) - \mathbb E[{T^*}(0,x)]| \ge t\sqrt {\parallel x{\parallel _1}} } \right) \le {C_4}{{\rm{e}}^{ - {C_5}{t^{{\alpha _4}}}}}.$$

Theorem 1.3 is not only of independent interest in view of the investigation of the modified first passage time, but also plays a key role in obtaining Theorem 1.2 as mentioned in Subsection 1.4 below.

1.3. Earlier literature

There are various models related to the spread of information besides our frog model. Ramírez–Sidoravicius [Reference Ramírez and Sidoravicius33] studied a stochastic growth model representing combustion, which is the frog model on ℤd for continuous-time simple random walks. Although the frog model has active and sleeping frogs, the model in which all frogs are active from the beginning has also been investigated (see, for instance, [Reference Kesten and Sidoravicius24], [Reference Kesten and Sidoravicius25], [Reference Kurkova, Popov and Vachkovskaia27], and [Reference Popov32]). This model is regarded as an infection model and its dynamics are described as follows. We consider continuous-time simple random walks as frogs (which are active from the beginning and never sleep). Initially the frog at the origin is infected, while the other frogs are healthy. Infected frogs transmit the disease to all the frogs they meet without recovery. Furthermore, we can also find the so-called activated random walk model. This is similar to the combustion model, but there are initially some active frogs and any active frog may fall back to sleep randomly; see [Reference Basu, Ganguly and Hoffman5], [Reference Dickman, Rolla and Sidoravicius8], [Reference Rolla and Tournier34], [Reference Rolla and Sidoravicius35], [Reference Sidoravicius and Teixeira36], and the references given therein.

We shall return to the topic of the frog model. The first published result on the frog model is due to Telcs–Wormald [Reference Telcs and Wormald37, Section 2.4] (in their paper, the frog model was called the ‘egg model’). They treated the frog model on ℤd with one-frog-per-site initial configuration, and proved that it is recurrent for all d ≥ 1, i.e. almost surely, active frogs infinitely often visit the origin. (Otherwise, we say that the frog model is transient.) This result revealed an interesting relationship between the strength of transience of a single random walk and the advantage of having a large number of frogs.

To observe this more precisely, Popov [Reference Popov31] considered the frog model with Bernoulli initial configurations and exhibited a phase transition between transience and recurrence. After that, Alves et al. addressed that kind of problem for the frog model with random initial configuration and random lifetime; see [Reference Alves, Machado and Popov2] and [Reference Popov32] for more details. In particular, [Reference Popov32] is a nice survey on the frog model and presents several open problems. It has also been of great help to recent progress on recurrence and transience for the frog model: we refer the interested reader to [Reference Ghosh, Noren and Roitershtein14], [Reference Höfelsauer and Weidner17], and [Reference Kosygina and Zerner26] for the frog model on lattices, [Reference Döbler10], [Reference Döbler and Pfeifroth9], and [Reference Gantert and Schmidt11] for the frog model with drift on lattices, and [Reference Hoffman, Johnson and Junge18], [Reference Hoffman, Johnson and Junge19], and [Reference Hoffman, Johnson and Junge20] for the frog model on trees.

In this way, recent interest in the frog model seems to concentrate on recurrence and transience problems. On the other hand, as mentioned in Subsection 1.1, there are few results on the behavior of the first passage time in the frog model. We hope that our work is useful for research on the frog model (including the recurrence and transience problems) and the related models mentioned above.

1.4. Organization of the paper

Let us now describe how the present article is organized. In Section 2, for convenience, we summarize some notation and results for supercritical site percolation on ℤd and provide an upper tail estimate for the first passage time (see Proposition 2.4 below). In addition, Corollary 2.1 tells us that we have to switch frogs frequently to realize the first passage time. More precisely, it guarantees that each frog realizing T(0, x) must find the next one within the 1-ball of radius o(‖x‖1). This fact will play an important role in the proofs of Theorems 1.2 and 1.3. It is generally difficult to observe the behavior of the frogs realizing the first passage time. This is because, in order not to lose minimality, we have to handle the behavior of the frogs and their initial configuration at the same time. In the proof of Corollary 2.1, we overcome this difficulty by using a large deviation estimate for the simple random walk combined with Proposition 2.4.

In Sections 3 and 4 we prove our main results (Theorems 1.1, 1.2, and 1.3). An early study of large deviation estimates in stochastic growth models was carried out by Grimmett–Kesten [Reference Grimmett and Kesten16]. They only treated the large deviation estimates for the first coordinate axis. However, we want to obtain the large deviation estimates for all directions as in Theorems 1.1 and 1.2. Thus, our arguments for these theorems are to a large extent based on the previous work of Garet–Marchand [Reference Garet and Marchand12], [Reference Garet and Marchand13] for the chemical distance in Bernoulli percolation.

For the proof of Theorem 1.1, we basically follow the strategy taken in [12, Subsection 3.3]. Let us give a sketch of the proof here. For simplicity, we consider Theorem 1.1 only for the direction x = ξ 1. Fix ε > 0 and take N large enough. A site y of ℤd is said to be ‘good’ if $\parallel Ny - {(Ny)^*}{\parallel _1} \le \sqrt N $, $\parallel N(y + \xi ) - {(N(y + \xi ))^*}{\parallel _1} \le \sqrt N $ and ${T^*}(Ny,N(y + \xi )) \le (1 + \varepsilon )\mu (N{\xi _1})$ for any coordinate vector ξ. Note that (1.2) and the independent and identically distributed (i.i.d.) set-up of the configuration imply that each site y of ℤd is good with high probability. Hence, good sites induce a finitely dependent site percolation on ℤd with parameter sufficiently close to one (see Lemma 3.3 below). Suppose that an arbitrary integer n is much larger than N. Results in Subsection 2.1 below guarantee that the failure probability of the following event decays exponentially in n: there exist good sites y 1,…,y Q such that

  • Q ≲ n/N and ‖y q − y q+1‖1 = 1 for all 1 ≤ q ≤ Q − 1;

  • ‖(Ny 1)*‖1 and ‖nξ 1 − (Ny Q)*‖1 are smaller than n 1/4.

On this event and $\{ 0 \in \cal I \} $,

$$\eqalign{T(0,n{\xi _1}) & \le T(0,(N{y_1}{)^*}) + \sum\limits_{q = 1}^{Q - 1} {T^*}(N{y_q},N{y_{q + 1}}) + T((N{y_Q}{)^*},n{\xi _1}) \cr & \lesssim T(0,(N{y_1}{)^*}) + (1 + \varepsilon )\mu (n{\xi _1}) + T((N{y_Q}{)^*},n{\xi _1}).}$$

Using the upper tail estimate (stated in Proposition 2.4), we can control the first and third terms on the right-hand side, and the desired bound follows in the x = ξ 1 case. We need some more work to carry out the above argument uniformly in any direction x.

In Section 4 we begin with the proof of Theorem 1.2. The lower large deviations have been studied for the first passage time in the first passage percolation and the chemical distance in the Bernoulli percolation, which are the counterparts of the first passage time in the frog model (we refer the interested reader to [Reference Ahlberg1], [Reference Garet and Marchand12], and [Reference Kesten23]). These counterparts are induced by sequences of nearest-neighbor points on ℤd and depend on only one randomness. On the other hand, the first passage time in the frog model may use sequences of nonnearest-neighbor points on ℤd (see (1.1)) and depends on two sources of randomness: the simple random walks and the initial configuration. In [Reference Ahlberg1], [Reference Garet and Marchand12], and [Reference Kesten23], the key tool for the lower large deviations is a renormalization procedure combined with a BK-like inequality. Although we also use a renormalization argument to show Theorem 1.1 and Proposition 2.4, due to the difference stated above, a BK-like inequality does not work well for the lower large deviation in the frog model. To overcome this problem, we use the concentration inequality for T*(0,x) as follows. Divide T(0,x) − μ(x) into three terms:

$$\eqalign{T(0,x){\rm{ }} & - \mu (x) \cr & = \{ T(0,x) - {T^&#x002A;}(0,x)\} + \{ {T^&#x002A;}(0,x) - \mathbb E[{T^&#x002A;}(0,x)]\} + \{ \mathbb E[{T^&#x002A;}(0,x)] - \mu (x)\}.}$$

From Lemma 3.1 below, $\mathbb E[{T^*}(0,x)] \ge \mu (x)$ holds and the third term is harmless for the lower tail. The second term can be controlled once we get the concentration inequality for T*(0,x), which is Theorem 1.3. Hence, in the proof of Theorem 1.2 we try to compare T(0,x) and T*(0,x) on the event {ω(0) ≥ 1} by using Corollary 2.1.

The remainder of Section 4 will be devoted to the proof of Theorem 1.3. For the proof, we follow the approach taken by Garet–Marchand [Reference Garet and Marchand13, Section 3]. In [Reference Garet and Marchand13, Section 3], they constructed an approximation of the chemical distance with a deterministic upper bound and applied Chebyshev’s inequality combined with exponential versions of the Efron–Stein inequality (see (4.3) and (4.4) below) to it. As mentioned above, the chemical distance is induced by sequences of nearest-neighbor points on ℤd, but the first passage time in the frog model may use sequences of nonnearest-neighbor points on ℤd. This difference prevents the direct use of Garet–Marchand’s approximation for our first passage time. To overcome this problem, we define the passage time between any two sufficiently remote points x and y to be C‖x − y‖1, where C ∈ (0,∞) is a constant much larger than μ(ξ 1). The first passage time modified in this way is dominated by C‖·‖1. Furthermore, (1.2) says that if ‖x − y‖1 is large enough then the first passage time from x to y is approximately equal to μ(x − y) < C‖x − y‖1. This means that the modified first passage time tends not to use sufficiently remote points, and, hence, it is comparable to the original first passage time. This observation implies our desired approximation of the first passage time.

We close this section with some general notation. Write ‖·‖1 and ‖·‖∞ for the 1- and ∞-norms on ℝd. Denote by {ξ 1,…,ξ d} the canonical basis of ℝd, and let ${{\cal E}^d}: = \{ \xi \in {\mathbb Z^d}:{\left\| \xi \right\|_1} = 1\} $. For i ∈ {1, ∞}, x ∈ ℝd and r > 0, B i(x,r) is the i-ball in ℝd of center x and radius r, i.e.

$${B_i}(x,r) := \{ y \in {\mathbb R^d}:\parallel y - x{\parallel _i} \le r\} .$$

Throughout this paper, we use c, c′, C, C′, C i, and α i,i = 1,2,…, to denote constants with 0 < c, c′, C, C′, C i <∞ and 0 < α i < 1, respectively.

2. Preliminaries

2.1. Supercritical site percolation

Let $X = ({X_v}{)_{v \in {\mathbb Z^d}}}$ be a family of random variables taking values in {0,1}. This induces the random set $\{ v \in {\mathbb Z^d}:{X_v} = 1\} $. The chemical distance d X(v 1,v 2) for X between v 1 and v 2 is defined by

$${d_X}({v_1},{v_2}) := \inf \{ \# \pi : \pi \text{ is a nearest-neighbor path from } {v_1} \text{ to } {v_2} \text{ using only sites in } \{ v \in {{\mathbb Z}^d}:{X_v} = 1\} \} ,$$

where #π is the length of a path π. A connected component of $\{ v \in {\mathbb Z^d}:{X_v} = 1\} $ which contains infinitely many points is called an infinite cluster for X. If there exists almost surely a unique infinite cluster for X then we denote it by C (X).

For 0 < p < 1, let ${\eta _p} = ({\eta _p}(v{))_{v \in {\mathbb Z^d}}}$ denote a family of independent random variables satisfying

$$\mathbb P({\eta _p}(v) = 1) = 1 - \mathbb P({\eta _p}(v) = 0) = p,\quad \quad v \in {\mathbb Z^d}.$$

This is called the independent Bernoulli site percolation on ℤd of the parameter p. It is well known that there is p c = p c(d) ∈ (0,1) such that if p > p c then the infinite cluster C ∞(η p) exists; see, for instance, Theorems 1.10 and 8.1 of [Reference Grimmett15]. The following proposition presents estimates for the size of the holes in the infinite cluster C ∞(η p) and the chemical distance ${d_{{\eta _p}}}( \cdot , \cdot )$ (see [13, below Equation (2.2) and Corollary 2.2] for the proof).

Proposition 2.1

For p < p c, the following results hold:

  • (a) there exist constants C 6 and C 7 such that, for all t > 0,

    $$\mathbb P({C_\infty }({\eta _p}) \cap {B_1}(0,t) = \emptyset ) \le {C_6}{{\rm{e}}^{ - {C_7}t}};$$
  • (b) there exist constants C 8, C 9, and C 10 such that, for all $v \in {\mathbb Z^d}$ and t ≥ C 8‖v‖1,

    $$\mathbb P(t \le {d_{{\eta _p}}}(0,v) < \infty ) \le {C_9}{{\rm{e}}^{ - {C_{10}}t}}.$$

A part of the proof of Theorem 1.1 relies on the following proposition obtained by Garet–Marchand [Reference Garet and Marchand12, Theorem 1.4]. (Their argument works not only for bond percolation but also for site percolation.) This tells us that, when p is sufficiently close to 1, the chemical distance looks like the 1-norm.

Proposition 2.2

For each γ > 0, there exists p′(γ) ∈ (p c,1) such that, for all p > p′(γ),

$$\mathop {\limsup }\limits_{\parallel v{\parallel _1} \to \infty } {1 \over {\parallel v{\parallel _1}}}\log \mathbb P((1 + \gamma )\parallel v{\parallel _1} \le {d_{{\eta _p}}}(0,v) < \infty ) < 0.$$

Proposition 2.2 gives an estimate of the deviation of ${d_{{\eta _p}}}(0,v)$ only around ‖v‖1. In the proof of Proposition 2.4, we need an estimate of the upper tail for ${d_{{\eta _p}}}(0,v)$ sufficiently away from ‖v‖1, and Proposition 2.2 does not work well there. On the other hand, to prove Theorem 1.1, it is necessary that ${d_{{\eta _p}}}(0,v)$ is sufficiently close to ‖v‖1. The constant C 8 is generally a large constant, and Proposition 2.1(b) is not enough to prove Theorem 1.1. Accordingly, Proposition 2.1(b) and Proposition 2.2 are similar, but we need both results in this paper.

We finally recall the concept of stochastic domination. Let $X = ({X_v}{)_{v \in {\mathbb Z^d}}}$ and $Y = ({Y_v}{)_{v \in {\mathbb Z^d}}}$ be families of random variables taking values in {0,1}. We say that X stochastically dominates Y if

$$\mathbb E[f(X)] \ge \mathbb E[f(Y)]$$

for all bounded, increasing, measurable functions $f:{\{ 0,1\} ^{{{\mathbb Z}^d}}} \to \mathbb R$. Furthermore, a family $X = ({X_v}{)_{v \in {\mathbb Z^d}}}$ of random variables is said to be finitely dependent if there exists L > 0 such that any two subfamilies ${({X_{{v_1}}})_{{v_1} \in {\Lambda _1}}}$ and ${({X_{{v_2}}})_{{v_2} \in {\Lambda _2}}}$ are independent whenever ${\Lambda _1},{\Lambda _2} \subset {\mathbb Z^d}$ satisfy ‖v 1 − v 2‖1 ≥ L for all v 1 ∈ Λ1 and v 2 ∈ Λ2.

The following stochastic comparison is useful to compare locally dependent fields with the independent Bernoulli site percolation. For the proof, we refer the reader to [Reference Grimmett15, Theorem 7.65] or [Reference Liggett29, Theorem B26] for instance.

Proposition 2.3

Suppose that $X = ({X_v}{)_{v \in {\mathbb Z^d}}}$ is a finitely dependent family of random variables taking values in {0,1}. For a given 0 < p < 1, X stochastically dominates η p provided $\mathop {\inf }\nolimits_{v \in {\mathbb Z^d}} \mathbb P({X_v} = 1)$ is sufficiently close to 1.

2.2. Upper tail estimate for the first passage time

As mentioned in Subsection 1.2, the tail estimate (1.4) is established for x ∈ ℤd and $t \ge \parallel x\parallel _1^4$. The condition $t \ge \parallel x\parallel _1^4$ is reasonable to derive the integrability of T(0,x), but it is not enough to study the deviation of T(0,x) around the time constant μ(x). In fact, since μ(x) is of order ‖x‖1, the bottom $\parallel x\parallel _1^4$ of the range of t deviates from μ(x) too much. Hence, the aim of this subsection is to improve the condition $t \ge \parallel x\parallel _1^4$ as in the following proposition.

Proposition 2.4

There exist constants C 11, C 12, C 13, and α 5 such that, for all x ∈ ℤd and t ≥ C 11‖x‖1,

$$\mathbb P(T(0,x) \ge t) \le {C_{12}}{{\rm{e}}^{ - {C_{13}}{t^{ {\alpha _5}}}}}.$$(2.1)

Before the proof, we need some preparation. Let N be a positive integer to be chosen large enough later and set $N': = {N^{1/4}}/(4d)$. Moreover, set Λq: = 2N′q + (−N′, N′)d for q ∈ ℤd. Then, the boxes ${\Lambda _q}, q \in {\mathbb Z^d}$, form a partition of ℤd and each site in ℤd is contained in precisely one box. A site v of ℤd is said to be white if the following conditions hold:

  • ${\Lambda _q} \cap {\cal I} \ne \emptyset $ for all q ∈ ℤd with Λq ⊂ B ∞(Nv, N).

  • T(x,y) ≤ N for all $x,\,y \in {B_\infty }(Nv,\,N) \cap {\cal I}$ with ‖x − y‖1 ≤ N 1/4.

We say that v is black otherwise.

Lemma 2.1

We can find p ∈ (p c,1) and N ≥ 1 such that ${({\bf 1}\{ v\;{\rm is\ white}\} )_{v \in {{\mathbb Z}^d}}}$ stochastically dominates η p and the infinite white cluster ${\cal C}_\infty ^{\rm{w}}: = {{\cal C}_\infty }(({\bf 1}\{ v\;{\rm is\ white}\} )_{v \in {\mathbb Z^d}})$ exists.

Proof. Let us first check that, for every v ∈ ℤd the event {v is white} depends only on states in B 1(Nv, 2N). It suffices to show that, for all x, yB 1(Nv, 2N), the event {T(x, y) ≤ N} depends only on states in B 1(Nv, 2N). By the definition of the first passage time, the event {T(x, y) ≤ N} can be replaced with the event that there exist m ≥ 1 and ${x_0},{x_1}, \ldots ,{x_m} \in {{\mathbb Z}^d},$ with x 0 = x and x m = y such that

$$\sum\limits_{i = 0}^{m - 1} \tau ({x_i},{x_{i + 1}}) \le N.$$

Since every frog can only move to an adjacent site at each step, the above sum is strictly bigger than N provided ‖x i − x 0‖1 > N for some 1 ≤ i ≤ m. Hence, the x i must satisfy ‖x i − Nv‖1 ≤ 2N. This means that the event {T(x, y) ≤ N} depends only on states in B 1(Nv, 2N).

We next show that $\mathop {\inf }\nolimits_{v \in {{\mathbb Z}^d}} {\mathbb P}(v\;{\rm is}\; {\rm white})$ converges to 1 as N → ∞. The union bound proves

$${\mathbb P}(0{\text{ is black}}) \le \sum\limits_{{\scriptstyle q \in {{\mathbb Z}^d} \atop \scriptstyle {\Lambda _q} \subset {B_\infty }(0,N)} } {\mathbb P}({\Lambda _q} \cap {\cal I} = \emptyset )\quad + \sum\limits_{{\scriptstyle x,y \in {B_\infty }(0,N) \atop \scriptstyle \parallel x - y{\parallel _1} \le {N^{1/4}}} } {\mathbb P}(T(0,y - x) > N).$$

The first summation is not larger than $c{N^d}\,{\mathbb P}{(0 \notin {\cal I})^{c'{N^{d/4}}}}$ for some constants c and c′, and it clearly goes to 0 as N → ∞. By (1.4), we can also see that the second summation vanishes as N → ∞. Therefore, from translation invariance, $\mathop {\inf }\nolimits_{v \in {{\mathbb Z}^d}} {\mathbb P}(v\;{\rm is}\ {\rm white})$ converges to 1 as N → ∞.
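
Regarding the bound on the first summation: each box Λ q contains of order N d/4 sites of ℤd, so, by the independence of the ω(x),

$${\mathbb P}({\Lambda _q} \cap {\cal I} = \emptyset ) = {\mathbb P}{(0 \notin {\cal I})^{\#({\Lambda _q} \cap {{\mathbb Z}^d})}} \le {\mathbb P}{(0 \notin {\cal I})^{c'{N^{d/4}}}},$$

while the number of boxes Λ q contained in B ∞(0,N) is at most of order (N/N′)d ≤ cN d; multiplying the two bounds gives the estimate used above.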

With these observations, the proof is complete by using Proposition 2.3 and the same strategy taken in the proof of Proposition 5.2 of [Reference Mourrat30].

After the preparation above, we move to the proof of Proposition 2.4.

Proof of Proposition 2.4. Without loss of generality, we can assume that ‖x‖1 ≥ d 4. Let p and N be the constants appearing in Lemma 2.1. Consider the events

$$\eqalign{&{\Gamma _1} := \{ {\rm{there}}\;{\rm{exists}}\;{v_1} \in {\cal C}_\infty ^{\rm{w}} \cap {B_1}(0,{t^{1/4}})\;{\rm{and}}\;{\rm{there}}\;{\rm{exists}}\;{v_2} \in {\cal C}_\infty ^{\rm{w}} \cap {B_1}(v(x),{t^{1/4}})\;{\rm such}\;{\rm that}\;{d^{\rm{w}}}({v_1},{v_2}) \le 4{C_8}t\} ,\cr & {\Gamma _2} := \{ T(0,y) \le (3N{)^4}t\;{\rm{and}}\;T(z,x) \le (3N{)^4}t\;{\rm{for}}\;{\rm{all}}\;y \in {B_1}(0,2N{t^{1/4}})\;{\rm{and}}\;z \in {B_1}(Nv(x),2N{t^{1/4}}) \cap {\cal I}\} ,}$$

where d w(∙,∙) is the chemical distance for ${(1\{ v\,{\text{is}}\,{\text{white}}\} )_{v \in {{\mathbb Z}^d}}}$ and v(x) is the site v of ℤd minimizing ‖Nv − x‖1, with a deterministic rule to break ties. Note that, on the event ${\Gamma _1} \cap {\Gamma _2} \cap \{ 0 \in {\cal I}\} $,

$$T(0,\,x) < \{ 2{(3N)^4} + 4{C_8}{N^2}\} t\;\quad t \ge {\left\| x \right\|_1}$$

To complete the proof, we shall estimate ${\mathbb P}(\Gamma _1^{\text{c}})$ and ${\mathbb P}(\Gamma _2^{\text{c}})$. Lemma 2.1 implies that ${\mathbb P}(\Gamma _1^{\text{c}})$ is bounded from above by

$$\eqalign{&{\mathbb P}\left( {{d_{{\eta _p}}}({v_1},{v_2}) \ge 4{C_{8}}t\;{\text{for all}}\;{v_1} \in {{\cal C}_\infty }({\eta _p}) \cap {B_1}\left( {0,{t^{1/4}}} \right)\;{\text{and}}\;{v_2} \in {{\cal C}_\infty }({\eta _p}) \cap {B_1}(v(x),{t^{1/4}})} \right) \cr & \quad \quad \le 2{\mathbb P}({{\cal C}_\infty }({\eta _p}) \cap {B_1}(0,{t^{1/4}}) = \emptyset ) + \sum\limits_{{\scriptstyle {v_1} \in {B_1}(0,{t^{1/4}}) \atop \scriptstyle {v_2} \in {B_1}(v(x),{t^{1/4}})} } {\mathbb P}(4{C_8}t \le {d_{{\eta _p}}}({v_1},{v_2}) < \infty ).} $$(2.2)

From Proposition 2.1(a), the first term on the right-hand side of (2.2) is not larger than $2{C_6}{{\rm{e}}^{ - {C_7}{t^{1/4}}}}$. Note that, for t ≥ ‖x1, ${v_1} \in {B_1}(0,{t^{1/4}}),$ and ${v_2} \in {B_1}(v(x),{t^{1/4}})$,

$$\parallel {v_1} - {v_2}{\parallel _1} \le 2{t^{1/4}} + \frac{1}{N}\parallel Nv(x) - x{\parallel _1} + \frac{{\parallel x{\parallel _1}}}{N} \le 4t.$$

This combined with Proposition 2.1(b) shows that the second term on the right-hand side of (2.2) is exponentially small in t. Consequently, ${\mathbb P}(\Gamma _1^{\text{c}})$ decays faster than ${{\rm{e}}^{ - {C_7}{t^{1/4}}}}$. On the other hand, we have, for t ≥ ‖x1 and $z \in {B_1}(Nv(x),2N{t^{1/4}})$,

$$\parallel x - z{\parallel _1} \le \parallel x - Nv(x){\parallel _1} + \parallel Nv(x) - z{\parallel _1} \le 3N{t^{1/4}}.$$

This together with (1.4) proves that ${\mathbb P}(\Gamma _2^{\text{c}})$ is bounded from above by a multiple of ${t^{d/2}}\exp \{ - {C_{2}}{(3N)^{4 {\alpha _1}}}{t^{ {\alpha _1}}}\} $. Therefore (2.1) immediately follows from the above bounds for ${\mathbb P}(\Gamma _1^{\text{c}})$ and ${\mathbb P}(\Gamma _2^{\text{c}})$.

We close this section with the corollary of Proposition 2.4.

Corollary 2.1

Suppose that ${\mathbb E}[\omega (0)] < \infty $. Then there exist constants C 14, C 15, and α 6 such that, for all x ∈ ℤd and t > 0,

$${\mathbb P}({\it there\ exist}\ {v_1},{v_2} \in {\cal I}\ {\it with}\ \parallel {v_1} - {v_2}{\parallel _1} \ge t\ {\it such\ that}\ T(0,x) = T(0,{v_1}) + \tau ({v_1},{v_2}) + T({v_2},x)) \le {C_{14}}\parallel x\parallel _1^{2d}{{\rm{e}}^{ - {C_{15}}{t^{ {\alpha _6}}}}}.$$(2.3)

Proof. Since the left-hand side of (2.3) is smaller than or equal to ${\mathbb P}(T(0,x) \ge t)$, the corollary immediately follows from Proposition 2.4 provided tC 11x1.

Assume that t < C 11x1. We use Proposition 2.4 to show that the left-hand side of (2.3) is bounded from above by

$$\eqalign{& {C_{12}}\exp \{ - {C_{13}}{({C_{11}}\parallel x{\parallel _1})^{ {\alpha _5}}}\} + \sum\limits_{{\scriptstyle {v_1},{v_2} \in {B_1}(0,{C_{11}}\parallel x{\parallel _1}) \atop \scriptstyle \parallel {v_1} - {v_2}{\parallel _1} \ge t} } {\mathbb P}(\tau (0,{v_2} - {v_1}) = T(0,{v_2} - {v_1})) \cr & \quad \quad \le {C_{12}}{{\text{e}}^{ - {C_{13}}{t^{ {\alpha _5}}}}} + \sum\limits_{{\scriptstyle {v_1},{v_2} \in {B_1}(0,{C_{11}}\parallel x{\parallel _1}) \atop \scriptstyle \parallel {v_1} - {v_2}{\parallel _1} \ge t} } \{ {I_1}({v_2} - {v_1}) + {I_2}({v_2} - {v_1})\} , \cr} $$(2.4)

where, for z ∈ ℤd,

$$\eqalign{& {I_1}(z) := {\mathbb P}\left( {\mathop {\max }\limits_{{\scriptstyle 0 \le k \le {C_{11}}\parallel z{\parallel _1} \atop \scriptstyle 1 \le \ell \le \omega (0)} } \parallel {S_k}(0,\ell ){\parallel _1} \ge \parallel z{\parallel _1}} \right), \cr & {I_2}(z) := {\mathbb P}\left( {\mathop {\max }\limits_{{\scriptstyle 0 \le k \le {C_{11}}\parallel z{\parallel _1} \atop \scriptstyle 1 \le \ell \le \omega (0)} } \parallel {S_k}(0,\ell ){\parallel _1} < \parallel z{\parallel _1}, \tau (0,z) = T(0,z)} \right). \cr} $$

To estimate I 1(v 2 − v 1), we rely on the following simple large deviation estimate for the simple random walk; see [Reference Lawler28, Lemma 1.5.1]. For any γ > 0, there exists a constant c (which may depend on γ) such that, for all n, u ≥ 0,

$${\mathbb P}\left( {\mathop {\max }\limits_{0 \le k \le n} \parallel {S_k}(0,1){\parallel _1} \ge \gamma u\sqrt n } \right) \le c{{\text{e}}^{ - u}}.$$

Fix v 1, v 2 ∈ B 1(0, C 11‖x‖1) with ‖v 1 − v 2‖1 ≥ t, and set $\gamma = {C_{11}}^{ - 1/2}$, n = C 11‖v 2 − v 1‖1, and $u = \parallel {v_2} - {v_1}\parallel _1^{1/2}$. Then,

$$\displaylines{ {I_1}({v_2} - {v_1}) \le \sum\limits_{L = 1}^\infty {\mathbb P}(\omega (0) = L)\sum\limits_{\ell = 1}^L {\mathbb P}\left( {\mathop {\max }\limits_{0 \le k \le {C_{11}}\parallel {v_2} - {v_1}{\parallel _1}} \parallel {S_k}(0,\ell ){\parallel _1} \ge \parallel {v_2} - {v_1}{\parallel _1}} \right) \cr \le {\mathbb E}[\omega (0)]c{{\text{e}}^{ - {t^{1/2}}}}. \cr} $$

We again use Proposition 2.4 to obtain, for v 1, v 2 ∈ B 1(0, C 11‖x‖1) with ‖v 1 − v 2‖1 ≥ t,

$$\displaylines{{I_2}({v_2} - {v_1}) \le {\mathbb P}({C_{11}}\parallel {v_2} - {v_1}{\parallel _1} < \tau (0,{v_2} - {v_1}) = T(0,{v_2} - {v_1})) \cr \le {C_{12}}\exp \{ - {C_{13}}{({C_{11}}t)^{ {\alpha _5}}}\} . \cr} $$

Therefore, (2.3) follows from (2.4) and these bounds for I 1(v 2 − v 1) and I 2(v 2 − v 1). □

3. Upper large deviation bound

In this section we give the proof of Theorem 1.1. We basically follow the approach taken in [Reference Garet and Marchand12, Subsection 3.3]. Let us first prepare some notation and lemmata.

Lemma 3.1

For each x ∈ ℤd, ${\mathbb P}$ -a.s. and in L 1,

$$\mu (x) = \mathop {\lim }\limits_{k \to \infty } \frac{1}{k}{T^*}(0,kx) = \mathop {\lim }\limits_{k \to \infty } \frac{1}{k}{\mathbb E}[{T^*}(0,kx)] = \mathop {\inf }\limits_{k \ge 1} \frac{1}{k}{\mathbb E}[{T^*}(0,kx)].$$(3.1)

Proof. From (1.2), we have, on the event $\{ 0 \in {\cal I}\} $ of positive probability,

$$\mu (x) = \mathop {\lim }\limits_{k \to \infty ,\; kx \in {\cal I}} \frac{1}{k}T(0,kx) = \mathop {\lim }\limits_{k \to \infty ,\; kx \in {\cal I}} \frac{1}{k}{T^*}(0,kx).$$

Therefore, once the integrability of T*(0, x) is proved, (3.1) follows from the subadditive ergodic theorem for the process ${T^*}(ix,jx), 0 \le i < j,\ i,j \in {{\mathbb N}_0}$.

For the integrability,

$$\eqalign{{\mathbb E}[{T^*}(0,x)] & \le \int_0^\infty {\mathbb P}\left( {\parallel {0^*}{\parallel _1} \gt \frac{t}{{3{C_{11}}}}} \right) {\text{d}}t + \int_0^\infty {\mathbb P}\left( {\parallel x - {x^*}{\parallel _1} \gt \frac{t}{{3{C_{11}}}}} \right) {\text{d}}t \cr & \quad + \int_0^\infty {\mathbb P}\left( {{T^*}(0,x) \ge t, \parallel {0^*}{\parallel _1} \le \frac{t}{{3{C_{11}}}}, \parallel x - {x^*}{\parallel _1} \le \frac{t}{{3{C_{11}}}}} \right) {\text{d}}t.} $$

It is clear that the first and second terms on the right-hand side are finite. Moreover, the third term is not larger than

$$3{C_{11}}\parallel x{\parallel _1} + \sum\limits_{{\scriptstyle y \in {B_1}(0,t/(3{C_{11}})) \atop \scriptstyle z \in {B_1}(x,t/(3{C_{11}}))} } \int_{3{C_{11}}\parallel x{\parallel _1}}^\infty {\mathbb P}(T(0,z - y) \ge t) {\text{d}}t,$$

and the integrability of T*(0, x) follows by using Proposition 2.4.

We denote by S d the symmetric group on {1,…,d}. For each x = (x 1,…,x d) ∈ ℝd, σ ∈ Sd, and ε ∈ {+1, −1}d, we define

$${\Psi _{\sigma ,\varepsilon }}(x) := \sum\limits_{i = 1}^d \varepsilon (i){x_{\sigma (i)}}{\xi _i}.$$

Then, ${\cal O}({{\mathbb Z}^d}) := \{ {\Psi _{\sigma ,\varepsilon }}:\sigma \in {{\cal S}_d}, \varepsilon \in {\{ + 1, - 1\} ^d}\} $ is the group of orthogonal transformations that preserve the grid ℤd. Consequently, its elements also preserve the 1-norm ‖·‖1 and the time constant μ(·). For x ∈ ℝd and (g 1,…,g d) ∈ (O(ℤd))d, the linear map $L_x^{{g_1}, \ldots ,{g_d}}$ is defined by

$$L_x^{{g_1}, \ldots ,{g_d}}(y) := \sum\limits_{i = 1}^d {y_i}{g_i}(x),\quad \quad y = ({y_1}, \ldots ,{y_d}) \in {{\mathbb R}^d}.$$

To study the first passage time in each direction x, we want to find a basis of ℝd adapted to the studied direction, i.e. made of images of x by elements of O(ℤd). The following technical lemma, which is obtained by Garet–Marchand [Reference Garet and Marchand12, Lemma 2.2], gives the existence of such a basis.

Lemma 3.2

For each x ∈ ℝd, there exists a family (g 1,x,g2,x,…,g d,x) ∈ (O(ℤd))d with ${g_{1,x}} = {\text{I}}{{\text{d}}_{{{\mathbb R}^d}}}$ such that the linear map ${L_x} := L_x^{{g_{1,x}}, \ldots ,{g_{d,x}}}$ satisfies

$${C_{16}}\parallel x{\parallel _1}\parallel y{\parallel _1} \le \parallel {L_x}(y){\parallel _1} \le \parallel x{\parallel _1}\parallel y{\parallel _1},\quad \quad y \in {{\mathbb R}^d},$$

where C 16 is a universal constant not depending on x, y, and (g 1,x,g2,x,…,g d,x).

Proof of Theorem 1.1. We fix an arbitrary ε > 0 and break the proof into three steps.

Step 1. In this step we choose appropriate constants for our proof. By (1.3), μ(y) ≥ 1 holds for all y ∈ ℝd with ‖y‖1 = 1. Hence, there exists δ > 0 such that, for all y ∈ ℝd with ‖y‖1 = 1,

$$\left( {1 + \frac{{3\delta }}{{2{C_{11}}}}} \right){(1 + \delta )^2}\mu (y) + 2\delta \lt \mu (y)(1 + \varepsilon ) $$(3.2)

and

$$\delta < \frac{{{C_{11}}}}{2}.$$

To shorten notation, write

$$\beta := \frac{\delta }{{2{C_{11}}}} < \frac{1}{4}.$$

Take $M \in \mathbb N$ large enough such that

$$M \ge \frac{d}{\delta }\max \left\{ {\frac{{\mu ({\xi _1})}}{2},\frac{{8{C_{11}}}}{{{C_{16}}}}} \right\} \ge 4,$$(3.3)

and choose p ∈ (0, 1) to satisfy

$$p \gt p'\left( {\frac{\beta }{{1 + 2\beta }}} \right) \gt {p_c},$$(3.4)

where p′(·) is the parameter appearing in Proposition 2.2.

Step 2. In this step we tackle the construction of the renormalization procedure. Let N be a positive integer to be chosen large enough later. A site v ∈ ℤd is said to be good if the following conditions hold, for all y ∈ ℤd / M with ‖y‖1 = 1:

  • (i) ${T^*}(N{L_{My}}(v),N{L_{My}}(v + \xi )) \le MN\mu (y)(1 + \delta )$ for all $\xi \in {{\cal E}^d}$.

  • (ii) (NL My (v))* is included in ${B_1}(N{L_{My}}(v),\sqrt N )$, and ${(N{L_{My}}(v + \xi ))^*}$ belongs to ${B_1}(N{L_{My}}(v + \xi ),\sqrt N )$ for all $\xi \in {{\cal E}^d}$.

Otherwise, v is called bad.

Lemma 3.3

There exists $N \in {\mathbb N}$ such that ${(1\{ v\,{\rm is}\,{\rm good}\} )_{v \in {{\mathbb Z}^d}}}$ stochastically dominates η p.

Proof. Since the set {y ∈ ℤd / M : ‖y‖1 = 1} is finite, Lemma 3.1 implies that ℙ(v is bad) converges to 0 as N → ∞. Therefore, due to Proposition 2.3, our task is now to show the finite dependence of ${(1\{ v\,{\text{is}}\,{\text{good}}\} )_{v \in {{\mathbb Z}^d}}}$. Condition (ii) depends only on the configuration in ${B_1}(N{L_{My}}(v),\sqrt N )$ and ${B_1}(N{L_{My}}(v + \xi ),\sqrt N )$. We have $\parallel N{L_{My}}(v) - N{L_{My}}(v + \xi ){\parallel _1} \le MN$ from Lemma 3.2, and condition (ii) particularly depends only on the configuration in B 1(NL My (v),2MN). The same argument as in the proof of Lemma 2.1 implies that condition (i) depends only on the configuration in ${B_1}(N{L_{My}}(v),2MN\mu ({\xi _1})(1 + \delta ))$. Note that Lemma 3.2 ensures that if $\parallel v - w{\parallel _1} \gt (2/{C_{16}})\mu ({\xi _1})(1 + \delta )$, then, for all y ∈ ℤd / M with ‖y‖1 = 1,

$$\parallel N{L_{My}}(v) - N{L_{My}}(w){\parallel _1} \gt 2MN\mu ({\xi _1})(1 + \delta ).$$

With these observations, ${(1\{ v\,{\text{is}}\,{\text{good}}\} )_{v \in {{\mathbb Z}^d}}}$ is finitely dependent.

For a given x ∈ ℤd \{0}, we set x′ := x/‖x‖1. Then, there exists $\widehat x \in {{\mathbb Z}^d}/M$ such that $\parallel \widehat x{\parallel _1} = 1$ and $\parallel x' - \widehat x{\parallel _1} \le d/(2M)$. Note that, by (1.3) and (3.3),

$$|\mu (x') - \mu (\widehat x)| \le \mu ({\xi _1})\parallel x' - \widehat x{\parallel _1} \le \mu ({\xi _1}) \frac{d}{{2M}} \le \delta $$(3.5)

and

$$\parallel x' - \widehat x{\parallel _1} \le \frac{{{C_{16}}\beta }}{8}.$$(3.6)

The definition of ${L_{M\widehat x}}$ and Lemma 3.2 tell us that, for all 1 ≤ id,

$$\mu ({L_{M\widehat x}}({\xi _i})) = M\mu (\widehat x),\quad \quad \parallel {L_{M\widehat x}}({\xi _i}){\parallel _1} = M,$$

and, for all y ∈ ℝd,

$${C_{16}}M\parallel y{\parallel _1} \le \parallel {L_{M\widehat x}}(y){\parallel _1} \le M\parallel y{\parallel _1}.$$

Denote by d g(·,·) the chemical distance for ${(1\{ v\,{\text{is}}\,{\text{good}}\} )_{v \in {{\mathbb Z}^d}}}$. We now consider the event

$$G := \{ {\rm{there}}\;{\rm{exists}}\;v \in {\cal A}(0,\beta \parallel \overline x {\parallel _1}),\ {\rm{there}}\;{\rm{exists}}\;w \in {\cal A}(\overline x ,\beta \parallel \overline x {\parallel _1})\;{\rm{such}}\;{\rm{that}}\;{d^{\rm{g}}}(v,w) \lt (1 + 3\beta )\parallel \overline x {\parallel _1}\} ,$$

where

$$\overline x := \left\lfloor {\frac{{\parallel x{\parallel _1}}}{{MN}}} \right\rfloor {\xi _1}$$

and

$${\cal A}(z,r) := \left\{ y \in {{\mathbb Z}^d}:\frac{r}{2} \le \parallel y - z{\parallel _1} \le r\right\} ,\quad \quad z \in {{\mathbb Z}^d},\ r \gt 0.$$

It is easy to see that, on the event G, for some $v \in {\cal A}(0,\beta \parallel \overline x {\parallel _1})$ and $w \in {\cal A}(\overline x ,\beta \parallel \overline x {\parallel _1})$,

$${T^*}(N{L_{M\widehat x}}(v),N{L_{M\widehat x}}(w)) < MN\mu (\widehat x)(1 + \delta )(1 + 3\beta )\parallel \overline x {\parallel _1}.$$(3.7)

Furthermore, Lemma 3.3 proves

$$\eqalign{P({G^{\text{c}}}) & \le P({d_{{\eta _p}}}(v,w) \ge (1 + 3\beta )\parallel \overline x {\parallel _1}{\text{for}}\,{\text{all}}\,v \in {\cal A}(0,\beta \parallel \overline x {\parallel _1})\,{\text{and}}\,w \in {\cal A}(\overline x ,\beta \parallel \overline x {\parallel _1})) \cr & \le 2{\mathbb P}({B_1}(0,\frac{{\beta \parallel \overline x {\parallel _1}}}{2}) \cap {{\cal C}_\infty }({\eta _p}) = \O ) \cr & + \sum\limits_{{\scriptstyle v \in {B_1}(0,\beta \parallel \overline x {\parallel _1}) \atop \scriptstyle w \in {B_1}(\overline x ,\beta \parallel \overline x {\parallel _1})} } {\mathbb P}(\frac{{1 + 3\beta }}{{1 + 2\beta }}\parallel v - w{\parallel _1} \le {d_{{\eta _p}}}(v,w) < \infty ). } $$

Thanks to $\beta < {\textstyle{1 \over 4}}$ and (3.4), Proposition 2.1(a) and Proposition 2.2 imply that, for some constants c and c′,

$${\mathbb P}({G^{\text{c}}}) \le c{{\text{e}}^{ - {C_7}\beta \parallel x{\parallel _1}/(2MN)}} + c{(\frac{{\beta \parallel x{\parallel _1}}}{{MN}})^{2d}}{{\text{e}}^{ - c'(1 - 2\beta )\parallel x{\parallel _1}/(MN)}}.$$(3.8)

Step 3. Finally, we complete the proof. There is no loss of generality in assuming that

$$\parallel x{\parallel _1} \ge \frac{{4MN}}{{\beta {C_{16}}}}.$$(3.9)

By the definition of x’ and (3.2),

$$\eqalign{{\mathbb P}(T(0,x) \ge (1 + \varepsilon )\mu (x)) & = {\mathbb P}(T(0,x) \ge \mu (x')(1 + \varepsilon )\parallel x{\parallel _1}) \cr & \le {\mathbb P}\left( {T(0,x) \gt \left( {1 + \frac{{3\delta }}{{2{C_{11}}}}} \right){{(1 + \delta )}^2}\mu (x')\parallel x{\parallel _1} + 2\delta \parallel x{\parallel _1}} \right).}$$(3.10)

Let A be the event that T(0,y)<δ‖x‖1 for all $y \in N{L_{M\widehat x}}({\cal A}(0,\beta \parallel \overline x {\parallel _1})) + {B_1}(0,\sqrt N )$ and T(z,x)<δ‖x‖1 for all $z \in [N{L_{M\widehat x}}({\cal A}(\overline x ,\beta \parallel \overline x {\parallel _1})) + {B_1}(0,\sqrt N )] \cap {\cal I}$. By (3.7) and (3.5), on the event G ∩ A ∩ {0 ∈ I}, there exist $v \in {\cal A}(0,\beta \parallel \overline x {\parallel _1})$ and $w \in {\cal A}(\overline x ,\beta \parallel \overline x {\parallel _1})$ such that

$$\eqalign{T(0,x) & \le T(0,(N{L_{M\widehat x}}(v{))^*}) + {T^*}(N{L_{M\widehat x}}(v),N{L_{M\widehat x}}(w)) + T((N{L_{M\widehat x}}(w{))^*},x) \cr & \le MN\mu (\widehat x)(1 + \delta )(1 + 3\beta )\parallel \overline x {\parallel _1} + 2\delta \parallel x{\parallel _1} \cr & \le \left( {1 + \frac{{3\delta }}{{2{C_{11}}}}} \right){(1 + \delta )^2}\mu (x')\parallel x{\parallel _1} + 2\delta \parallel x{\parallel _1}.}$$

This means that the right-hand side of (3.10) is bounded from above by P(G c) + P(A c). Due to (3.8), our task is to estimate P(A c). We use Lemma 3.2 and (3.9) to obtain, for $y \in N{L_{M\widehat x}}({\cal A}(0,\beta \parallel \overline x {\parallel _1})) + {B_1}(0,\sqrt N )$,

$$\parallel y{\parallel _1} \le MN\beta \parallel \overline x {\parallel _1} + \sqrt N \le \beta \parallel x{\parallel _1} + \sqrt N \le \frac{{17}}{{16}}\beta \parallel x{\parallel _1} \le \frac{\delta }{{{C_{11}}}}\parallel x{\parallel _1}$$

and

$$\parallel y{\parallel _1} \ge MN{C_{16}} \frac{{\beta \parallel \overline x {\parallel _1}}}{2} - \sqrt N \ge {C_{16}}\frac{{\beta \parallel x{\parallel _1}}}{2} - MN\frac{{{C_{16}}\beta }}{2} - \sqrt N \ge \frac{{13}}{{32}}{C_{16}}\beta \parallel x{\parallel _1}.$$

Similarly, for $z \in N{L_{M\widehat x}}({\cal A}(\overline x ,\beta \parallel \overline x {\parallel _1})) + {B_1}(0,\sqrt N )$,

$$\frac{{13}}{{32}}{C_{16}}\beta \parallel x{\parallel _1} \le \parallel z - N{L_{M\widehat x}}(\overline x ){\parallel _1} \le \frac{{17}}{{16}}\beta \parallel x{\parallel _1}.$$

In addition, by (3.6), we have, for $z \in N{L_{M\widehat x}}({\cal A}(\overline x ,\beta \parallel \overline x {\parallel _1})) + {B_1}(0,\sqrt N )$,

$$\eqalign{\parallel x - z{\parallel _1} & \le \parallel x - N{L_{M\widehat x}}(\overline x ){\parallel _1} + \parallel N{L_{M\widehat x}}(\overline x ) - z{\parallel _1} \cr & \le \frac{3}{8}{C_{16}}\beta \parallel x{\parallel _1} + \frac{{17}}{{16}}\beta \parallel x{\parallel _1} \cr & \le 2\beta \parallel x{\parallel _1} \cr & = \frac{\delta }{{{C_{11}}}}\parallel x{\parallel _1} }$$

and

$$\eqalign{\parallel x - z{\parallel _1} & \ge \parallel z - N{L_{M\widehat x}}(\overline x ){\parallel _1} - \parallel N{L_{M\widehat x}}(\overline x ) - x{\parallel _1} \cr & \ge \frac{{13}}{{32}}{C_{16}}\beta \parallel x{\parallel _1} - \frac{3}{8}{C_{16}}\beta \parallel x{\parallel _1} \cr & = \frac{{{C_{16}}}}{{32}}\beta \parallel x{\parallel _1}.}$$

Therefore,

$$\eqalign{ {\mathbb P}({A^{\text{c}}}) & \le \sum\limits_{(13/32){C_{16}}\beta \parallel x{\parallel _1} \le \parallel y{\parallel _1} \le (17/16)\beta \parallel x{\parallel _1}} {\mathbb P}(T(0,y) \gt {C_{11}}\parallel y{\parallel _1}) \cr & \quad + \sum\limits_{(13/32){C_{16}}\beta \parallel x{\parallel _1} \le \parallel z - N{L_{M\widehat x}}(\overline x ){\parallel _1} \le (17/16)\beta \parallel x{\parallel _1}} {\mathbb P}(T(0,x - z) \gt {C_{11}}\parallel x - z{\parallel _1}),} $$

and this combined with Proposition 2.4 completes the proof.

4. Lower large deviation and concentration bounds

The aim of this section is to prove Theorems 1.2 and 1.3. Let us first show Theorem 1.2 by using Theorem 1.3. The proof of Theorem 1.3 will be given after that of Theorem 1.2.

Proof of Theorem 1.2. Let v(x) denote a site of ${\cal I}$ satisfying

$$T{(0^*},x) = T{(0^*},v(x)) + \tau (v(x),x).$$

We first prove that, for all ε > 0, there exist constants C 17 and C 18 such that

$${\mathbb P}(T(v(x),{x^*}) \ge \varepsilon \parallel x{\parallel _1}) \le {C_{17}}\exp \left\{ { - {C_{18}}\parallel x\parallel _1^{ {\alpha _5} \wedge {\alpha _6}}} \right\}.$$(4.1)

Corollary 2.1 tells us that there exist constants c and c’ such that

$$ \mathbb P({\rm{there}}\;{\rm{exist}}\;{v_1},{v_2} \in {\cal I}\;{\rm{with}}\;\parallel {v_1} - {v_2}{\parallel _1} \ge \varepsilon {(2{C_{11}})^{ - 1}}\parallel x{\parallel _1}\;{\rm{such}}\;{\rm{that}}\;T{(0^*},x) = T{(0^*},{v_1}) + \tau ({v_1},\,{v_2}) + T({v_2},\,x)) \le c{{\rm{e}}^{ - c'\parallel x\parallel _1^{ {\alpha _6}}}}.$$

It follows that

$$\eqalign{ & {\mathbb P}(T(v(x),{x^*}) \ge \varepsilon \parallel x{\parallel _1}) \cr & \le c{{\text{e}}^{ - c'\parallel x\parallel _1^{ {\alpha _6}}}} + {\mathbb P}(\parallel x - {x^*}{\parallel _1} \ge \frac{{\varepsilon \parallel x{\parallel _1}}}{{2{C_{11}}}}) \cr & \quad + {\mathbb P}(T(v(x),{x^*}) \ge \varepsilon \parallel x{\parallel _1}, \parallel v(x) - x{\parallel _1} < \frac{{\varepsilon \parallel x{\parallel _1}}}{{2{C_{11}}}}, \parallel x - {x^*}{\parallel _1} < \frac{{\varepsilon \parallel x{\parallel _1}}}{{2{C_{11}}}}). \cr} $$

Since the second term has the desired form, our task is to bound the last probability. To this end, we use Proposition 2.4 to obtain, for some constants C and C′,

$$\eqalign{ & {\mathbb P}(T(v(x),{x^*}) \ge \varepsilon \parallel x{\parallel _1}, \parallel v(x) - x{\parallel _1} \lt \frac{{\varepsilon \parallel x{\parallel _1}}}{{2{C_{11}}}}, \parallel x - {x^*}{\parallel _1} \lt \frac{{\varepsilon \parallel x{\parallel _1}}}{{2{C_{11}}}}) \cr & \le \sum\limits_{{\scriptstyle y \in {{\mathbb Z}^d} \atop \scriptstyle \parallel y - x{\parallel _1} \lt (2{C_{11}}{)^{ - 1}}\varepsilon \parallel x{\parallel _1}} } \sum\limits_{{\scriptstyle z \in {{\mathbb Z}^d} \atop \scriptstyle \parallel x - z{\parallel _1} \lt (2{C_{11}}{)^{ - 1}}\varepsilon \parallel x{\parallel _1}} } {\mathbb P}(T(0,z - y) \ge \varepsilon \parallel x{\parallel _1}) \cr & \le C{{\text{e}}^{ - C'\parallel x\parallel _1^{ {\alpha _5}}}}. \cr} $$

Hence (4.1) follows.

Take $t = \varepsilon \sqrt {\parallel x{\parallel _1}} $ and recall that 0* = 0 under ${\mathbb P} = {\mathbb P}( \cdot | 0 \in {\cal I})$. Then, by (1.3) and (3.1),

$${\mathbb P}(T(0,x) \le (1 - \varepsilon )\mu (x)) \le {\mathbb P}{(0 \in {\cal I})^{ - 1}}{\mathbb P}\left( {T{{(0}^*},x) - {\mathbb E}[{T^*}(0,x)] \le - t\sqrt {\parallel x{\parallel _1}} } \right).$$
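
Here we used that, on the event {0 ∈ I}, T(0,x) = T(0*,x), together with

$$(1 - \varepsilon )\mu (x) = \mu (x) - \varepsilon \mu (x) \le {\mathbb E}[{T^*}(0,x)] - \varepsilon \parallel x{\parallel _1} = {\mathbb E}[{T^*}(0,x)] - t\sqrt {\parallel x{\parallel _1}} ,$$

where μ(x) ≤ 𝔼[T*(0,x)] comes from (3.1) and μ(x) ≥ ‖x‖1 from (1.3).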

The last probability is bounded from above by

$${\mathbb P}(T{(0^*},x) - {T^*}(0,x) \le - \frac{t}{2}\sqrt {\parallel x{\parallel _1}} ) + {\mathbb P}({T^*}(0,x) - {\mathbb E}[{T^*}(0,x)] \le - \frac{t}{2}\sqrt {\parallel x{\parallel _1}} ).$$

Note that

$${T^*}(0,x) \le T{(0^*},v(x)) + T(v(x),{x^*}) + \tau (v(x),x) \le T{(0^*},x) + T(v(x),{x^*}),$$

and (4.1) implies that

$$\eqalign{ {\mathbb P}(T{(0^*},x) - {T^*}(0,x) \le - \frac{t}{2}\sqrt {\parallel x{\parallel _1}} ) & \le {\mathbb P}(T(v(x),{x^*}) \ge \frac{\varepsilon }{2}\parallel x{\parallel _1}) \cr & \le {C_{17}}\exp \left\{ { - {C_{18}}\parallel x\parallel _1^{ {\alpha _5} \wedge {\alpha _6}}} \right\}.} $$

Furthermore, Theorem 1.3 proves that

$${\mathbb P}({T^*}(0,x) - {\mathbb E}[{T^*}(0,x)] \le - \frac{t}{2}\sqrt {\parallel x{\parallel _1}} ) \le {C_4}\exp \{ - {C_5}{(\frac{\varepsilon }{2}\sqrt {\parallel x{\parallel _1}} )^{ {\alpha _4}}}\} ,$$

and, therefore, the theorem follows.

Proof of Theorem 1.3. For each t > 0, define the two-point function σ t(∙,∙) as follows. Take K ≥ d(C 11 + γ + 1). First, if ‖x − y‖∞ ≥ t and τ(x, y) > 4K‖x − y‖∞, then set σ t (x, y):= 4K‖x − y‖∞. Next, if ‖x − y‖∞ < t, then set σ t (x, y):= 4Kt ∧ τ(x, y). Otherwise, set σ t (x, y):= τ(x, y). By definition, for any x, y ∈ ℤd,

$$\parallel x - y{\parallel _1} \le {\sigma _t}(x,y) \le 4K(t \vee \parallel x - y{\parallel _\infty }).$$(4.2)
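
For the reader's convenience, here is a quick check of (4.2) under the above definition. In each of the three cases, σ t(x, y) = τ(x, y) ∧ 4K(t ∨ ‖x − y‖∞), so the upper bound in (4.2) is immediate. For the lower bound, note that τ(x, y) ≥ ‖x − y‖1 (a frog needs at least ‖x − y‖1 nearest-neighbor steps to travel from x to y) and

$$4K(t \vee \parallel x - y{\parallel _\infty }) \ge 4K\parallel x - y{\parallel _\infty } \ge \frac{{4K}}{d}\parallel x - y{\parallel _1} \ge \parallel x - y{\parallel _1},$$

because K ≥ d(C 11 + γ + 1) ≥ d/4.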

We write T t(x, y) for the first passage time from x to y corresponding to σ t (∙,∙), i.e.,

$${T_t}(x,y) := \inf \left\{ {\sum\limits_{i = 0}^{m - 1} {\sigma _t}({x_i},{x_{i + 1}}):m \ge 1, {x_0} = x, {x_m} = y, {x_1}, \ldots ,{x_{m - 1}} \in {{\mathbb Z}^d}} \right\}.$$

Proposition 4.1

There exist constants C 19, C 20, and α 7 such that, for all x ∈ ℤd\{0} and 0 ≤ t ≤ ‖x‖1,

$$\max \{\mathbb P ({T_t}{(0^&#x002A;},{x^&#x002A;}) \ne {T^&#x002A;}(0,x)),{\mathbb E[({T_t}{(0^&#x002A;},{x^&#x002A;}) - {T^&#x002A;}(0,x{))^2}]^{1/2}}\} \le {C_{19}}\parallel x\parallel _1^{4d}{{\rm{e}}^{ - {C_{20}}{t^{ {\alpha _7}}}}}.$$

Proposition 4.2

For all γ > 0, there exists a constant C 21 such that, for all x ∈ ℤd\{0} and $0 \le t \le \gamma \sqrt {\parallel x{\parallel _1}} $,

$${\mathbb P}(|{T_t}(0,x) - {\mathbb E}[{T_t}(0,x)]| \ge t\sqrt {\parallel x{\parallel _1}} ) \le 2{{\text{e}}^{ - {C_{21}}t}}.$$

Let us postpone the proofs of these propositions to the end of this section, and continue the proof of Theorem 1.3. To this end, without loss of generality, we can assume that $\parallel x{\parallel _1} \ge {(32K{\mathbb E}[1 \vee \parallel {0^&#x002A;}{\parallel _\infty }])^2}$. Take c ≥ 1 large enough such that, for all $t \ge c{(1 + \log \parallel x{\parallel _1})^{1/ {\alpha _7}}}$,

$${C_{19}}\parallel x\parallel _1^{4d}{{\rm{e}}^{ - {C_{20}}{t^{ {\alpha _7}}}}} \le {C_{19}}{{\rm{e}}^{ - {C_{20}}{t^{ {\alpha _7}}}/2}} \le \frac{t}{4}.$$

From (4.2) and Proposition 4.1, we have

$$\eqalign{|{\mathbb E}[{T^&#x002A;}(0,x)] & - {\mathbb E}[{T_t}(0,x)]| \le {\mathbb E}[|{T^&#x002A;}(0,x) - {T_t}{(0^&#x002A;},{x^&#x002A;})|] + 2{\mathbb E}[{T_t}{(0,0^&#x002A;}) \vee {T_t}{(0^&#x002A;},0)] \cr & \quad \le {C_{19}}\parallel x\parallel _1^{4d}{{\text{e}}^{ - {C_{20}}{t^{ {\alpha _7}}}}} + 8K{\mathbb E}[t \vee \parallel {0^&#x002A;}{\parallel _\infty }].} $$
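To spell out how the right-hand side is absorbed below: for t ≥ 1, the second summand is controlled by the standing assumption on ‖x‖1 alone, since

$$8K{\mathbb E}[t \vee \parallel {0^*}{\parallel _\infty }] \le 8Kt\,{\mathbb E}[1 \vee \parallel {0^*}{\parallel _\infty }] \le \frac{t}{4}\sqrt {\parallel x{\parallel _1}} ,$$

while the first summand is at most t/4 ≤ (t/4)√‖x‖1 in the range of t considered next.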

Hence, for all $t \ge c{(1 + \log \parallel x{\parallel _1})^{1/ {\alpha _7}}}$,

$$|{\mathbb E}[{T^&#x002A;}(0,x)] - {\mathbb E}[{T_t}(0,x)]| \le \frac{t}{2}\sqrt {\parallel x{\parallel _1}} .$$

This together with Proposition 4.1 leads to

$$\eqalign{ & {\mathbb P}(|{T^&#x002A;}(0,x) - {\mathbb E}[{T^&#x002A;}(0,x)]| \ge t\sqrt {\parallel x{\parallel _1}} ) \cr & \quad \quad \le {C_{19}}{{\text{e}}^{ - {C_{20}}{t^{ {\alpha _7}}}/2}} + {\mathbb P}(|{T_t}{(0^&#x002A;},{x^&#x002A;}) - {\mathbb E}[{T_t}(0,x)]| \ge \frac{t}{2}\sqrt {\parallel x{\parallel _1}} ). \cr} $$

For the second term on the right-hand side,

$$\eqalign{|{T_t}{(0^&#x002A;},{x^&#x002A;}) - {\mathbb E}[{T_t}(0,x)]| & \le |{T_t}{(0^&#x002A;},{x^&#x002A;}) - {T_t}(0,x)| + |{T_t}(0,x) - {\mathbb E}[{T_t}(0,x)]| \cr & \le {T_t}{(0,0^&#x002A;}) \vee {T_t}{(0^&#x002A;},0) + {T_t}(x,{x^&#x002A;}) \vee {T_t}({x^&#x002A;},x) \cr & \quad + |{T_t}(0,x) - {\mathbb E}[{T_t}(0,x)]|.} $$

Using (4.2) again we obtain

$$\eqalign{& {\mathbb P}(|{T_t}{(0^*},{x^*}) - {\mathbb E}[{T_t}(0,x)]| \ge \frac{t}{2}\sqrt {\parallel x{\parallel _1}} ) \cr & \quad \quad \le 2{\mathbb P}(4K(t \vee \parallel {0^*}{\parallel _\infty }) \ge \frac{t}{6}\sqrt {\parallel x{\parallel _1}} ) + {\mathbb P}(|{T_t}(0,x) - {\mathbb E}[{T_t}(0,x)]| \ge \frac{t}{6}\sqrt {\parallel x{\parallel _1}} ), \cr} $$

and the theorem is a consequence of Proposition 4.2.

In the rest of this section we shall prove Propositions 4.1 and 4.2.

Proof of Proposition 4.1. Let 0 ≤ t ≤ ‖x‖1. We first estimate ${\mathbb P}({T_t}{(0^*},{x^*}) \ne {T^*}(0,x))$. To this end, consider the following events Γj, 1 ≤ j ≤ 5:

$$\eqalign{ {\Gamma _1} &:= \left\{ {\parallel {0^*}{\parallel _1} \le \frac{t}{8},\ \parallel x - {x^*}{\parallel _1} \le \frac{t}{8}} \right\}, \cr {\Gamma _2} &:= \bigcap\limits_{y,z \in {B_1}(0,7K\parallel x{\parallel _1})} \{ y \in {\cal I}\ {\rm{implies\ that}}\ T(y,z) \ne T(y,{v_1}) + \tau ({v_1},{v_2}) + T({v_2},z) \cr &\qquad\qquad\qquad\qquad\quad {\rm{for\ all}}\ {v_1},{v_2} \in {\cal I}\ {\rm{with}}\ \parallel {v_1} - {v_2}{\parallel _1} \ge \frac{t}{2}\} , \cr {\Gamma _3} &:= \bigcap\limits_{{\scriptstyle y,z \in {B_1}(0,7K\parallel x{\parallel _1}) \atop \scriptstyle \parallel y - z{\parallel _\infty } \le 2t} } \{ y \in {\cal I}\ {\rm{implies\ that}}\ T(y,z) \le 2Kt\} , \cr {\Gamma _4} &:= \bigcap\limits_{{\scriptstyle y,z \in {B_1}(0,7K\parallel x{\parallel _1}) \atop \scriptstyle \parallel y - z{\parallel _\infty } \ge t/2} } \{ y \in {\cal I}\ {\rm{implies\ that}}\ T(y,z) \le K\parallel y - z{\parallel _\infty }\} , \cr {\Gamma _5} &:= \bigcap\limits_{y,z \in {B_1}(0,7K\parallel x{\parallel _1})} \{ y \in {\cal I}\ {\rm{implies\ that}}\ T(y,z) \ge {T_t}(y,z)\} . \cr}$$

We shall observe that T t(0*, x*) = T*(0, x) holds on the event $\bigcap\nolimits_{j = 1}^5 {\Gamma _j}$. Denote by $ ({x_i})_{i = 0}^m$ a finite sequence of ℤd satisfying that x 0 = 0*, x m = x*, and ${T_t}{(0^*},{x^*}) = \sum\nolimits_{i = 0}^{m - 1} {\sigma _t}({x_i},{x_{i + 1}})$. Moreover, the index i 0 is defined by

$${i_0} := \max \{ 0 \le i \le m:T{(0^*},{x_i}) = {T_t}{(0^*},{x_i})\} .$$

On the event Γ1, we have ‖0*‖1 ≤ K‖x‖1 and it holds by (4.2) that

$${T_t}{(0^&#x002A;},{x^&#x002A;}) \le 4K(t \vee \parallel {0^&#x002A;} - {x^&#x002A;}{\parallel _\infty }) \le 4K\{ t \vee (\frac{t}{4} + \parallel x{\parallel _\infty })\} \le 5K\parallel x{\parallel _1}.$$

Since each σ t(x i, x i+1) is at least ‖x i − x i+1‖1 by (4.2), this proves that the x i are included in B 1(0, 6K‖x‖1) on the event Γ1. Let ${x'_{{i_0}}}$ denote a site of ${\cal I}$ satisfying $T{(0^*},{x_{{i_0}}}) = T{(0^*},{x'_{{i_0}}}) + \tau ({x'_{{i_0}}},{x_{{i_0}}})$. Note that $\parallel {x'_{{i_0}}} - {x_{{i_0}}}{\parallel _1} \le t/2$ and $\parallel {x'_{{i_0}}}{\parallel _1} \le 7K\parallel x{\parallel _1}$ on the event Γ1 ∩ Γ2. Assume that i 0 < m. Then, on the event Γ1 ∩ Γ5,

$$T{(0^&#x002A;},{x_{{i_0} + 1}}) \gt {T_t}{(0^&#x002A;},{x_{{i_0} + 1}}) = {T_t}{(0^&#x002A;},{x_{{i_0}}}) + {\sigma _t}({x_{{i_0}}},{x_{{i_0} + 1}}) = T{(0^&#x002A;},{x_{{i_0}}}) + {\sigma _t}({x_{{i_0}}},{x_{{i_0} + 1}}).$$

We now consider the following three cases:

  1. $\parallel {x_{{i_0}}} - {x_{{i_0} + 1}}{\parallel _\infty } \le t$ and $\tau ({x_{{i_0}}},{x_{{i_0} + 1}}) \gt 4Kt$;

  2. $\parallel {x_{{i_0}}} - {x_{{i_0} + 1}}{\parallel _\infty } \gt t$;

  3. $\parallel {x_{{i_0}}} - {x_{{i_0} + 1}}{\parallel _\infty } \le t$ and $\tau ({x_{{i_0}}},{x_{{i_0} + 1}}) \le 4Kt$.

Case 1: On the event Γ1 ∩ Γ2,

\[\parallel {x'_{{i_0}}} - {x_{{i_0} + 1}}{\parallel _\infty } \le \parallel {x'_{{i_0}}} - {x_{{i_0}}}{\parallel _\infty } + \parallel {x_{{i_0}}} - {x_{{i_0} + 1}}{\parallel _\infty } \le \frac{t}{2} + t \le 2t.\]

Therefore, on the event Γ1 ∩ Γ2 ∩ Γ3 ∩ Γ5,

$$\eqalign{T{(0^*},{x_{{i_0} + 1}}) & \gt T{(0^*},{x_{{i_0}}}) + {\sigma _t}({x_{{i_0}}},{x_{{i_0} + 1}}) \cr & = T{(0^*},{x_{{i_0}}}) + 4Kt \cr & \ge T{(0^*},{{x'}_{{i_0}}}) + T({{x'}_{{i_0}}},{x_{{i_0} + 1}}) \cr & \ge T{(0^*},{x_{{i_0} + 1}}).}$$

This is a contradiction.

Case 2: On the event Γ1 ∩ Γ2,

\[\parallel {x'_{{i_0}}} - {x_{{i_0} + 1}}{\parallel _\infty } \ge \parallel {x_{{i_0} + 1}} - {x_{{i_0}}}{\parallel _\infty } - \parallel {x_{{i_0}}} - {x'_{{i_0}}}{\parallel _\infty } \ge \frac{t}{2},\]

and on the event Γ1 ∩ Γ2 ∩ Γ5,

$$\eqalign{T{(0^&#x002A;},{x_{{i_0} + 1}}) & \gt T{(0^&#x002A;},{x_{{i_0}}}) + {\sigma _t}({x_{{i_0}}},{x_{{i_0} + 1}}) \cr & = T{(0^&#x002A;},{x_{{i_0}}}) + 4K \parallel {x_{{i_0}}} - {x_{{i_0} + 1}}{\parallel _\infty } \cr & \ge T{(0^&#x002A;},{{x'}_{{i_0}}}) + 2Kt + 2K \parallel {x_{{i_0}}} - {x_{{i_0} + 1}}{\parallel _\infty }.}$$

It follows that on the event Γ1 ∩ Γ2 ∩ Γ4 ∩ Γ5,

$$\eqalign{T{(0^*},{x_{{i_0} + 1}}) & \gt T{(0^*},{{x'}_{{i_0}}}) + 2K \parallel {{x'}_{{i_0}}} - {x_{{i_0}}}{\parallel _\infty } + 2K \parallel {x_{{i_0}}} - {x_{{i_0} + 1}}{\parallel _\infty } \cr & \ge T{(0^*},{{x'}_{{i_0}}}) + 2K \parallel {{x'}_{{i_0}}} - {x_{{i_0} + 1}}{\parallel _\infty } \cr & \ge T{(0^*},{{x'}_{{i_0}}}) + T({{x'}_{{i_0}}},{x_{{i_0} + 1}}) \ge T{(0^*},{x_{{i_0} + 1}}),}$$

and this leads to another contradiction.

Case 3: Since ${\sigma _t}({x_{{i_0}}},{x_{{i_0} + 1}}) = \tau ({x_{{i_0}}},{x_{{i_0} + 1}})$, on the event Γ1 ∩ Γ5,

\[T{(0^&#x002A;},{x_{{i_0} + 1}}) \gt T{(0^&#x002A;},{x_{{i_0}}}) + {\sigma _t}({x_{{i_0}}},{x_{{i_0} + 1}}) = T{(0^&#x002A;},{x_{{i_0}}}) + \tau ({x_{{i_0}}},{x_{{i_0} + 1}}) \ge T{(0^&#x002A;},{x_{{i_0} + 1}}),\]

which is also a contradiction.

With these observations, on the event $\bigcap\nolimits_{j = 1}^5 {\Gamma _j}$, i 0 = m must hold and T t(0*, x*) = T*(0, x) is valid. It remains to estimate the probability of $\bigcup\nolimits_{j = 1}^5 \Gamma _j^{\rm{c}}$. Obviously, ${\mathbb P}(\Gamma _1^{\text{c}})$ is exponentially small in t. The following bound is an immediate consequence of Proposition 2.4 and Corollary 2.1: For some constants c and c′,

$${\mathbb P}(\Gamma _2^{\text{c}}) + {\mathbb P}(\Gamma _3^{\text{c}}) + {\mathbb P}(\Gamma _4^{\text{c}}) \le c\parallel x\parallel _1^{4d}\exp \{ - c'{t^{ {\alpha _5} \wedge {\alpha _6}}}\} .$$

To estimate ${\mathbb P}(\Gamma _5^{\text{c}})$, let us introduce the event Γ6(w) that T(0, w) ≠ T(0, v 1) + τ(v 1, v 2) + T(v 2, w) for all ${v_1},{v_2} \in {\cal I}$ with ‖v 1v 2t. Since T(0, w) ≥ T t(0, w) on the event ${\Gamma _6}(w) \cap \{ 0 \in {\cal I}\} $, we have

$$\eqalign{{\mathbb P}(\Gamma _5^{\text{c}}) & \le \sum\limits_{y,z \in {B_1}(0,7K\parallel x{\parallel _1})} {\mathbb P}(T(0,z - y) - {T_t}(0,z - y) \lt 0) \cr & \le \sum\limits_{y,z \in {B_1}(0,7K\parallel x{\parallel _1})} {\mathbb P}({\Gamma _6}{(z - y)^{\text{c}}}).} $$

From Corollary 2.1, this is bounded from above by a multiple of $\parallel x\parallel _1^{4d}{{\rm{e}}^{ - {C_{15}}{t^{ {\alpha _{16}}}}}}$. Therefore, we get the desired bound for ${\mathbb P}({T_t}{(0^*},{x^*}) \ne {T^*}(0,x))$.

We next estimate ${\mathbb E}{[({T_t}{(0^&#x002A;},{x^&#x002A;}) - {T^&#x002A;}(0,x{))^2}]^{1/2}}$. Schwarz’s inequality implies that

$$\eqalign{ & {\mathbb E}[({T_t}{(0^*},{x^*}) - {T^*}(0,x{))^2}] \cr & \quad \quad = {\mathbb E}[({T_t}{(0^*},{x^*}) - {T^*}(0,x{))^2}\,{{\bf{1}}\{ {T_t}{(0^*},{x^*}) \ne {T^*}(0,x)\} }] \cr & \quad \quad \le ({\mathbb E}{[{T_t}{{(0^*},{x^*})}^4}]^{1/2}} + {\mathbb E}{[{T^*}{(0,x)^4}]^{1/2}})\,{\mathbb P}{({T_t}{(0^*},{x^*}) \ne {T^*}(0,x))^{1/2}}. \cr} $$

By (4.2),

$${\mathbb E}[{T_t}{{(0^&#x002A;},{x^&#x002A;})^4}] \le {(4K)^4}{\mathbb E}[(t \vee \parallel {0^&#x002A;} - {x^&#x002A;}{\parallel _\infty }{)^4}] \le {(12K)^4}(2{\mathbb E}[\parallel {0^&#x002A;}\parallel _1^4] + 1)\parallel x\parallel _1^4.$$

On the other hand, letting r(s):= s 1/4/(3C 11), we have

$$\eqalign{{\mathbb E}{[{T^&#x002A;}(0,x))^4}] & \le {(3{C_{11}}\parallel x{\parallel _1})^4} + \int_{{{(3{C_{11}}\parallel x{\parallel _1})}^4}}^\infty {\mathbb P}({T^&#x002A;}{(0,x)^4} \ge s) {\text{d}}s \cr & \le {(3{C_{11}}\parallel x{\parallel _1})^4} + 2\int_{{{(3{C_{11}}\parallel x{\parallel _1})}^4}}^\infty {\mathbb P}(\parallel {0^&#x002A;}{\parallel _1} \ge r(s)) {\text{d}}s \cr & \quad + \int_{{{(3{C_{11}}\parallel x{\parallel _1})}^4}}^\infty \sum\limits_{{\scriptstyle y \in {B_1}(0,r(s)) \atop \scriptstyle z \in {B_1}(x,r(s))} } {\mathbb P}(T(y,z) \ge {s^{1/4}}, y \in {\cal I}) {\text{d}}s. \cr} $$

It follows from Proposition 2.4 that ${\mathbb E}[{T^*}{(0,x)^4}]$ is not greater than a multiple of $\parallel x\parallel _1^4$. Combining these bounds with that for ${\mathbb P}({T_t}{(0^*},{x^*}) \ne {T^*}(0,x))$, we can derive the desired bound for ${\mathbb E}{[({T_t}{(0^*},{x^*}) - {T^*}(0,x{))^2}]^{1/2}}$, and the proof is complete. □

Before starting the proof of Proposition 4.2, let us prepare some notation and lemmata. For a given x ∈ ℤd\{0} and t > 0, tile ℤd with copies of (−t/2, t/2)d such that each box is centered at a point in ℤd and each site in ℤd is contained in precisely one box. We denote these boxes by ${\Lambda _q}, q \in {\mathbb N}$, and consider the random variables

$${U_q}: = ((\omega (z{))_{z \in {\Lambda _q}}},({S_ \cdot }(z,\ell {))_{z \in {\Lambda _q},\ell \in {\mathbb N}}}),\quad \quad q \in {\mathbb N}.$$
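One concrete way to realize such a tiling is sketched below (purely illustrative; it assumes, beyond what the text specifies, that the box centres are taken on the sub-lattice tℤd, that t is a positive integer, and that each box is the half-open cube (c − t/2, c + t/2]d around its centre c):

```python
import math

def box_centre(z, t):
    """Centre of the unique box containing the lattice site z (illustrative sketch).

    Each box is the translate (c - t/2, c + t/2]^d of a cube of side t, with
    centre c on the sub-lattice t * Z^d; these conventions are assumptions of
    this sketch, not of the text.
    """
    return tuple(t * math.ceil((2 * zi - t) / (2 * t)) for zi in z)

# With t = 4, the sites (1, -1) and (2, 1) share the box centred at (0, 0),
# while (3, 0) lies in the neighbouring box centred at (4, 0).
print(box_centre((1, -1), 4), box_centre((2, 1), 4), box_centre((3, 0), 4))
```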

Note that $ ({U_q})_{q = 1}^\infty $ are independent and identically distributed. Due to (4.2), T t(0, x) depends only on states in some finite boxes Λ1,…, ΛQ, and T t(0, x) can be regarded as a function of $ ({U_q})_{q = 1}^Q$:

\[Z: = {T_t}(0,x) = {T_t}(0,x,{U_1}, \ldots ,{U_Q}).\]

In addition, let $ ({U'_q})_{q = 1}^Q$ be independent copies of $ ({U_q})_{q = 1}^Q $ and define

\[{Z'_q}: = {T_t}(0,x,{U_1}, \ldots ,{U_{q - 1}},{U'_q},{U_{q + 1}}, \ldots ,{U_Q}),\quad \quad 1 \le q \le Q.\]

Our main tools for the proof of Proposition 4.2 are Chebyshev’s inequality and the following exponential versions of the Efron–Stein inequality; we refer the reader to [7, Theorem 6.16] and [13, Lemma 3.2]. For any λ, θ > 0 with λθ < 1,

$$\log {\mathbb E}[\exp \{ - \lambda (Z - {\mathbb E}[Z])\} ] \le \frac{{\lambda \theta }}{{1 - \lambda \theta }}\log {\mathbb E}[\exp \{ \frac{{\lambda {V_ - }}}{\theta }\} ],$$(4.3)

where

$${V_ - } := \sum\limits_{q = 1}^Q {\mathbb E}[(Z - {Z'_q})_ - ^2|{U_1}, \ldots ,{U_Q}].$$

Furthermore, if there exist δ > 0 and functions $ ({\phi _q})_{q = 1}^Q$, $ ({\psi _q})_{q = 1}^Q,$ and $ ({g_q})_{q = 1}^Q$ such that, for all 1 ≤ q ≤ Q,

\[{(Z - {Z'_q})_ - } \le {\psi _q}({U'_q}),\quad \quad (Z - {Z'_q})_ - ^2 \le {\phi _q}({U'_q}){g_q}({U_1}, \ldots ,{U_Q}),\]

and ${\mathbb E}[{{\text{e}}^{\delta {\psi _q}({U_q})}}{\phi _q}({U_q})] < \infty $, then, for any λ, θ > 0 with λ < δ ∧ θ−1,

$$\log {\mathbb E}[\exp \{ \lambda (Z - {\mathbb E}[Z])\} ] \le \frac{{\lambda \theta }}{{1 - \lambda \theta }}\log {\mathbb E}[\exp \{ \frac{{\lambda W}}{\theta }\} ],$$(4.4)

where

$$W := \sum\limits_{q = 1}^Q {\mathbb E}[{{\text{e}}^{\delta {\psi _q}({U_q})}}{\phi _q}({U_q})]{g_q}({U_1}, \ldots ,{U_Q}).$$
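The resampling scheme behind (4.3) and (4.4) may be illustrated on a toy example, shown below; it is purely illustrative and unrelated to the frog model itself: Z is an arbitrary function of independent blocks U 1,…,U Q, and Z′q is obtained by replacing the qth block with an independent copy. (The true V − averages over the resampled block; the sketch uses a single draw instead.)

```python
import numpy as np

rng = np.random.default_rng(0)
Q = 50
U = rng.exponential(size=Q)        # the blocks U_1, ..., U_Q
U_new = rng.exponential(size=Q)    # independent copies U'_1, ..., U'_Q

def Z_of(blocks):
    # a toy functional standing in for Z = T_t(0, x); any measurable function works
    return float(np.sqrt(blocks.sum()))

Z = Z_of(U)
Z_primes = np.array([Z_of(np.where(np.arange(Q) == q, U_new, U)) for q in range(Q)])

# Single-draw analogue of V_- from (4.3): the sum over q of the squared
# negative parts (Z - Z'_q)_-^2 (the actual V_- conditions on U_1, ..., U_Q
# and averages over the resampled block U'_q).
V_minus_sample = float(np.sum(np.minimum(Z - Z_primes, 0.0) ** 2))
print(Z, V_minus_sample)
```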

We use the following lemmata to estimate the right-hand sides of (4.3) and (4.4).

Lemma 4.1

Write π t(0, x) = (0 = x 0, x 1,…,x m = x) for the finite sequence ofd that has ${T_t}(0,x) = \sum\nolimits_{i = 0}^{m - 1} {\sigma _t}({x_i},{x_{i + 1}})$, chosen with a deterministic rule to break ties. Moreover, let R q be the event that π t(0, x) intersects Λq. Then we have, for 1 ≤ qQ,

$${(Z - {Z'_q})_ - } \le 8Kt\,{{\bf{1}}_{{R_q}}}.$$(4.5)

Proof. Since ${(Z - {Z'_q})_ - } = ({Z'_q} - Z){{\bf{1}}_{\{ Z \le {Z'_q}\} \cap {R_q}}}$, we focus on the event $\{ Z \le {Z'_q}\} \cap {R_q}$ from now on. Let us first treat the case where x ∈ Λq. Define i 0:= min{0 ≤ i ≤ m:x i ∈ Λq} and set $a:={x_{{i_0}}}$. Then, since ‖a − x‖∞ ≤ t,

\[{Z'_q} - Z \le {T_t}(a,x,{U_1}, \ldots ,{U_{q - 1}},{U'_q},{U_{q + 1}}, \ldots ,{U_Q}) \le 4Kt.\]

For the case where x ∉ Λq, let us introduce the indices i 1 and i 2 as follows:

\[{i_1} := \min \{ 0 \le i \le m - 1:{x_i} \in {\Lambda _q}\} ,\quad \quad {i_2} := \max \{ 0 \le i \le m - 1:{x_i} \in {\Lambda _q}\} .\]

In addition, write $a:={x_{{i_1}}}$, $b:={x_{{i_2}}},$ and $c:={x_{{i_2} + 1}}$. If ‖a − c‖∞ ≤ t, then

\[{Z'_q} - Z \le {T_t}(a,c,{U_1}, \ldots ,{U_{q - 1}},{U'_q},{U_{q + 1}}, \ldots ,{U_Q}) \le 4Kt.\]

If ‖a − c‖∞ > t and ‖b − c‖∞ ≤ t, then

$$\eqalign{{{Z'}_q} - Z & \le {T_t}(a,c,{U_1}, \ldots ,{U_{q - 1}},{{U'}_q},{U_{q + 1}}, \ldots ,{U_Q}) \cr & \le 4K\parallel a - c{\parallel _\infty } \cr & \le 4K(\parallel a - b{\parallel _\infty } + \parallel b - c{\parallel _\infty }) \cr & \le 8Kt.}$$

Otherwise (i.e. ‖a − c‖∞ > t and ‖b − c‖∞ > t),

$$\eqalign{{{Z'}_q} - Z & \le {T_t}(a,c,{U_1}, \ldots ,{U_{q - 1}},{{U'}_q},{U_{q + 1}}, \ldots ,{U_Q}) - {\sigma _t}(b,c) \cr & \le 4K(\parallel a - c{\parallel _\infty } - \parallel b - c{\parallel _\infty }) \cr & \le 4K\parallel a - b{\parallel _\infty } \cr & \le 4Kt.}$$

With these observations, ${Z'_q} - Z \le 8Kt$ is valid in the case where x ∉ Λq, and (4.5) follows.

Lemma 4.2

There exists a constant C22 ≥ 1 such that

\[\sum\limits_{q = 1}^Q {{\bf{1}}_{{R_q}}} \le {C_{22}}K(1 \vee \frac{{\parallel x{\parallel _\infty }}}{t}).\]

Proof. Let π t(0, x) = (0 = x 0, x 1,…,x m = x). For each z ∈ ℤd, write w(z) for the center of the box Λq containing z. Then, define ρ 0:= 0 and, for j ≥ 0,

$${\rho _{j + 1}} := \min \left\{ {{\rho _j} < i \le m:{x_i} \notin w({x_{{\rho _j}}}) + {{( - {{3t} \over 2},{{3t} \over 2}]}^d}} \right\},$$

with the convention that min∅:= ∞. Define J:= max{j ≥ 1: ρ j < ∞} and assume that

\[J \gt 4K(1 \vee \frac{{\parallel x{\parallel _\infty }}}{t}).\]

By definition, we have T t(0, x) ≥ Jt and, hence,

\[{T_t}(0,x) \ge Jt \gt 4K(t \vee \parallel x{\parallel _\infty }),\]

which contradicts (4.2). Therefore,

\[J \le 4K(1 \vee \frac{{\parallel x{\parallel _\infty }}}{t}),\]

and the proof is complete since π t(0, x) intersects at most 3d(J + 1) of the boxes Λq.

We are now in a position to prove Proposition 4.2.

Proof of Proposition 4.2. Fix arbitrary γ > 0, x ∈ ℤd\{0} and $0 \le t \le \gamma \sqrt {\parallel x{\parallel _1}} $. We use Chebyshev’s inequality to obtain, for all u, λ ≥ 0,

$$\eqalign{{\mathbb P}(|{T_t}(0,x) - {\mathbb E}[{T_t}(0,x)]| \ge u) & \le {{\text{e}}^{ - \lambda u}}{\mathbb E}[\exp \{ \lambda |Z - {\mathbb E}[Z]|\} ] \cr & \le {{\text{e}}^{ - \lambda u}}({\mathbb E}[\exp \{ - \lambda (Z - {\mathbb E}[Z])\} ] + {\mathbb E}[\exp \{ \lambda (Z - {\mathbb E}[Z])\} ]).}$$(4.6)

On the other hand, Lemmata 4.1 and 4.2 show that

\[{V_ - } \le {(8Kt)^2}\sum\limits_{q = 1}^Q {{\bf{1}}_{{R_q}}} \le {C_{22}}{(8Kt)^2}(1 \vee \frac{{\parallel x{\parallel _\infty }}}{t}).\]

Moreover, taking δ:= 1/t, ϕ q:= (8Kt)2, ψ q:= 8Kt, and ${g_q}:= {{\bf{1}}_{{R_q}}}$ (see the notation above (4.4)), we use Lemmata 4.1 and 4.2 again to obtain

\[W \le {(8Kt)^2}{{\rm{e}}^{8K}}\sum\limits_{q = 1}^Q {{\bf{1}}_{{R_q}}} \le {C_{22}}{(8Kt)^2}{{\rm{e}}^{8K}}(1 \vee \frac{{\parallel x{\parallel _\infty }}}{t}).\]
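Since these bounds on V − and W are deterministic, the logarithmic moment generating functions on the right-hand sides of (4.3) and (4.4) can be estimated directly; for instance, for the lower tail,

$$\log {\mathbb E}[\exp \{ - \lambda (Z - {\mathbb E}[Z])\} ] \le \frac{{\lambda \theta }}{{1 - \lambda \theta }} \cdot \frac{\lambda }{\theta }{C_{22}}{(8Kt)^2}\Bigl(1 \vee \frac{{\parallel x{\parallel _\infty }}}{t}\Bigr) = \frac{{{\lambda ^2}}}{{1 - \lambda \theta }}{C_{22}}{(8K)^2}t(t \vee \parallel x{\parallel _\infty }),$$

and the upper tail is handled in the same way with the additional factor e8K.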

These bounds combined with (4.3), (4.4), and (4.6) prove that, for all u ≥ 0 and for all λ, θ > 0 with 0 < λ < t−1 ∧ (2θ)−1,

$$\eqalign{{\mathbb P}(|{T_t}(0,x) - {\mathbb E}[{T_t}(0,x)]| \ge u) & \le 2{{\text{e}}^{ - \lambda u}}\exp \{ \frac{{{\lambda ^2}}}{{1 - \lambda \theta }}{C_{22}}{(8K)^2}{{\text{e}}^{8K}}t(t \vee \parallel x{\parallel _\infty })\} \cr & \le 2\exp \{ 2{C_{22}}{(8K)^2}{{\text{e}}^{8K}}t(t \vee \parallel x{\parallel _\infty }){\lambda ^2} - u\lambda \} .} $$

Substituting $u = t\sqrt {\parallel x{\parallel _1}} $, we obtain

$${\mathbb P}(|{T_t}(0,x) - {\mathbb E}[{T_t}(0,x)]| \ge t\sqrt {\parallel x{\parallel _1}} ) \le 2\exp \{ 2{C_{22}}{(8K)^2}{{\text{e}}^{8K}}t(t \vee \parallel x{\parallel _\infty }){\lambda ^2} - t\sqrt {\parallel x{\parallel _1}} \lambda \} .$$

To minimize the right-hand side, we choose

\[\lambda = \frac{{\sqrt {\parallel x{\parallel _1}} }}{{4{C_{22}}{{(8K)}^2}{{\rm{e}}^{8K}}(t \vee \parallel x{\parallel _\infty })}}.\]

Since $t \le \gamma \sqrt {\parallel x{\parallel _1}} $, C 22 ≥ 1, and K ≥ d ∨ γ,

\[\lambda \le \frac{{\sqrt {\parallel x{\parallel _1}} }}{{2{K^2}\parallel x{\parallel _\infty }}} \le \frac{{\sqrt {\parallel x{\parallel _1}} }}{{2K\parallel x{\parallel _1}}} = \frac{1}{{2K\sqrt {\parallel x{\parallel _1}} }} \lt \frac{1}{t}.\]
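For completeness, with this choice of λ the exponent obtained after substituting u = t√‖x‖1 evaluates to

$$2{C_{22}}{(8K)^2}{{\rm{e}}^{8K}}t(t \vee \parallel x{\parallel _\infty }){\lambda ^2} - t\sqrt {\parallel x{\parallel _1}}\, \lambda = - \frac{{t\parallel x{\parallel _1}}}{{8{C_{22}}{{(8K)}^2}{{\rm{e}}^{8K}}(t \vee \parallel x{\parallel _\infty })}} \le - \frac{t}{{8{C_{22}}{{(8K)}^2}{{\rm{e}}^{8K}}(1 + \gamma )}},$$

where the last inequality uses t ∨ ‖x‖∞ ≤ (1 + γ)‖x‖1, which is valid since t ≤ γ√‖x‖1 ≤ γ‖x‖1 and ‖x‖∞ ≤ ‖x‖1.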

In addition, taking θ = (3λ)−1 leads to 0 < λ < t−1 ∧ (2θ)−1. Therefore,

$${\mathbb P}(|{T_t}(0,x) - {\mathbb E}[{T_t}(0,x)]| \ge t\sqrt {\parallel x{\parallel _1}} ) \le 2\exp \left\{ { - \frac{t}{{8{C_{22}}{{(8K)}^2}{{\text{e}}^{8K}}(1 + \gamma )}}} \right\},$$

which proves the proposition.

Acknowledgements

The author was supported by JSPS Grant-in-Aid for Young Scientists (B) 16K17620. The author would also like to express his profound gratitude to the reviewer for a very careful reading of the manuscript.

References

[1] Ahlberg, D. (2015). A Hsu–Robbins–Erdős strong law in first-passage percolation. Ann. Prob. 43, 1992–2025.
[2] Alves, O. S. M., Machado, F. P. and Popov, S. Y. (2002). Phase transition for the frog model. Electron. J. Prob. 7, 21pp.
[3] Alves, O. S. M., Machado, F. P. and Popov, S. Y. (2002). The shape theorem for the frog model. Ann. Appl. Prob. 12, 533–546.
[4] Alves, O. S. M., Machado, F. P., Popov, S. Y. and Ravishankar, K. (2001). The shape theorem for the frog model with random initial configuration. Markov Process. Relat. Fields 7, 525–539.
[5] Basu, R., Ganguly, S. and Hoffman, C. (2015). Non-fixation of symmetric activated random walk on the line for small sleep rate. Preprint. Available at http://arxiv.org/abs/1508.05677v1.
[6] Beckman, E. et al. (2017). Asymptotic behavior of the Brownian frog model. Preprint. Available at http://arxiv.org/abs/1710.05811v1.
[7] Boucheron, S., Lugosi, G. and Massart, P. (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press.
[8] Dickman, R., Rolla, L. T. and Sidoravicius, V. (2010). Activated random walkers: facts, conjectures and challenges. J. Statist. Phys. 138, 126–142.
[9] Döbler, C. and Pfeifroth, L. (2014). Recurrence for the frog model with drift on ℤd. Electron. Commun. Prob. 19, 13pp.
[10] Döbler, C. et al. (2018). Recurrence and transience of frogs with drift on ℤd. Electron. J. Prob. 23, 23pp.
[11] Gantert, N. and Schmidt, P. (2009). Recurrence for the frog model with drift on ℤd. Markov Process. Relat. Fields 15, 51–58.
[12] Garet, O. and Marchand, R. (2007). Large deviations for the chemical distance in supercritical Bernoulli percolation. Ann. Prob. 35, 833–866.
[13] Garet, O. and Marchand, R. (2010). Moderate deviations for the chemical distance in Bernoulli percolation. ALEA Latin Amer. J. Prob. Math. Statist. 7, 171–191.
[14] Ghosh, A., Noren, S. and Roitershtein, A. (2017). On the range of the transient frog model on ℤ. Adv. Appl. Prob. 49, 327–343.
[15] Grimmett, G. (1999). Percolation, 2nd edn. Springer, Berlin.
[16] Grimmett, G. and Kesten, H. (1984). First-passage percolation, network flows and electrical resistances. Z. Wahrscheinlichkeitsth. 66, 335–366.
[17] Höfelsauer, T. and Weidner, F. (2016). The speed of frogs with drift on ℤ. Markov Process. Relat. Fields 22, 379–392.
[18] Hoffman, C., Johnson, T. and Junge, M. (2016). From transience to recurrence with Poisson tree frogs. Ann. Appl. Prob. 26, 1620–1635.
[19] Hoffman, C., Johnson, T. and Junge, M. (2017). Infection spread for the frog model on trees. Preprint. Available at http://arxiv.org/abs/1710.05884v1.
[20] Hoffman, C., Johnson, T. and Junge, M. (2017). Recurrence and transience for the frog model on trees. Ann. Prob. 45, 2826–2854.
[21] Hughes, B. D. (1995). Random Walks and Random Environments, Vol. 1. Oxford University Press.
[22] Johnson, T. and Junge, M. (2018). Stochastic orders and the frog model. Ann. Inst. H. Poincaré Prob. Statist. 54, 1013–1030.
[23] Kesten, H. (1986). Aspects of first passage percolation. In École d’été de Probabilités de Saint-Flour, XIV (Lecture Notes Math. 1180), Springer, Berlin, pp. 125–264.
[24] Kesten, H. and Sidoravicius, V. (2005). The spread of a rumor or infection in a moving population. Ann. Prob. 33, 2402–2462.
[25] Kesten, H. and Sidoravicius, V. (2008). A shape theorem for the spread of an infection. Ann. Math. (2) 167, 701–766.
[26] Kosygina, E. and Zerner, M. P. W. (2017). A zero-one law for recurrence and transience of frog processes. Prob. Theory Relat. Fields 168, 317–346.
[27] Kurkova, I., Popov, S. and Vachkovskaia, M. (2004). On infection spreading and competition between independent random walks. Electron. J. Prob. 9, 293–315.
[28] Lawler, G. F. (1991). Intersections of Random Walks. Birkhäuser, Boston, MA.
[29] Liggett, T. M. (1999). Stochastic Interacting Systems: Contact, Voter and Exclusion Processes (Fundamental Principles Math. Sci. 324). Springer, Berlin.
[30] Mourrat, J.-C. (2012). Lyapunov exponents, shape theorems and large deviations for the random walk in random potential. ALEA Latin Amer. J. Prob. Math. Statist. 9, 165–211.
[31] Popov, S. Y. (2001). Frogs in random environment. J. Statist. Phys. 102, 191–201.
[32] Popov, S. Y. (2003). Frogs and some other interacting random walks models. Discrete Math. Theoret. Comput. Sci. AC, 277–288.
[33] Ramírez, A. F. and Sidoravicius, V. (2004). Asymptotic behavior of a stochastic combustion growth process. J. Europ. Math. Soc. 6, 293–334.
[34] Rolla, L. T. and Tournier, L. (2018). Non-fixation for biased activated random walks. Ann. Inst. H. Poincaré Prob. Statist. 54, 938–951.
[35] Rolla, L. T. and Sidoravicius, V. (2012). Absorbing-state phase transition for driven-dissipative stochastic dynamics on ℤ. Invent. Math. 188, 127–150.
[36] Sidoravicius, V. and Teixeira, A. (2017). Absorbing-state transition for stochastic sandpiles and activated random walks. Electron. J. Prob. 22, 35pp.
[37] Telcs, A. and Wormald, N. C. (1999). Branching and tree indexed random walks on fractals. J. Appl. Prob. 36, 999–1011.