1. Introduction and main results
One of the most widely used models in population dynamics is the Galton–Watson branching process, which has numerous applications in different areas, including physics, biology, medicine, and economics. We refer the reader to the books of Harris [Reference Harris17] and Athreya and Ney [Reference Athreya and Ney6] for an introduction. Branching processes in random environments were first considered by Smith and Wilkinson [Reference Smith and Wilkinson24] and by Athreya and Karlin [Reference Athreya and Karlin4, Reference Athreya and Karlin5]. The subject has been further studied by Kozlov [Reference Kozlov20, Reference Kozlov21], Dekking [Reference Dekking7], Liu [Reference Liu22], D’Souza and Hambly [Reference D’Souza and Hambly8], Geiger and Kersting [Reference Geiger and Kersting9], Guivarc’h and Liu [Reference Guivarc’h and Liu16], Geiger, Kersting and Vatutin [Reference Geiger, Kersting and Vatutin10], Afanasyev [Reference Afanasyev1], and Kersting and Vatutin [Reference Kersting and Vatutin19], to name only a few. Recently, in [Reference Grama, Lauvergnat and Le Page13], based on new conditioned limit theorems for sums of functions defined on Markov chains in [Reference Grama, Lauvergnat and Le Page11, Reference Grama, Lauvergnat and Le Page12, Reference Grama, Lauvergnat and Le Page14, Reference Grama, Le Page and Peigné15], exact asymptotic results for the survival probability have been obtained for branching processes in Markovian environment (BPMEs). In this paper we complement these results with new ones, such as a limit theorem for the normalized number of particles and a Yaglom-type theorem for BPMEs.
We start by introducing the Markovian environment, which is given on the probability space $\left( \Omega, \mathscr{F}, \mathbb{P} \right)$ by a homogeneous Markov chain $\left( X_n \right)_{n\geq 0}$ with values in the finite state space $\mathbb{X}$ and with the matrix of transition probabilities $\textbf{P} = (\textbf{P} (i,j))_{i,j\in \mathbb{X}}$ . We suppose the following.
Condition 1. The Markov chain $\left( X_n \right)_{n\geq 0}$ is irreducible and aperiodic.
Condition 1 implies a spectral gap property for the transition operator $\textbf{P}$ of $\left( X_n \right)_{n\geq 0}$, defined by the relation $\textbf{P} g (i) = \sum_{j\in \mathbb{X}} g(\,j) \textbf{P}(i,j)$ for any g in the space $\mathscr{C} (\mathbb X)$ of complex functions g on $\mathbb{X}$ endowed with the norm $\left\lVert{g}\right\rVert_{\infty} = \sup_{x\in \mathbb{X}} \left\lvert{g(x)}\right\rvert$. Indeed, Condition 1 is necessary and sufficient for the matrix $(\textbf{P} (i,j))_{i,j\in \mathbb X}$ to be primitive (all entries of $\textbf{P}^{k_0}$ are positive for some $k_0 \geq 1$). By the Perron–Frobenius theorem, there exist positive constants $c_1$, $c_2$, a unique positive $\textbf{P}$-invariant probability $\boldsymbol{\nu}$ on $\mathbb{X}$ ($\boldsymbol{\nu}(\textbf{P})=\boldsymbol{\nu}$), and an operator Q on $\mathscr{C}(\mathbb X)$ such that, for any $g \in \mathscr{C}(\mathbb X)$ and $n \geq 1$, $i \in \mathbb{X}$,
\begin{equation*} \textbf{P}^n g(i) = \boldsymbol{\nu}(g) + Q^n g(i), \qquad \left\lVert{Q^n g}\right\rVert_{\infty} \leq c_1 e^{-c_2 n} \left\lVert{g}\right\rVert_{\infty}, \tag{1.1} \end{equation*}
where $Q \left(1 \right) = 0$ and $\boldsymbol{\nu} \left(Q(g) \right) = 0$ with $\boldsymbol{\nu}(g) \,{:\!=}\, \sum_{i \in \mathbb{X}} g(i) \boldsymbol{\nu}(i)$. In particular, from (1.1), it follows that, for any $(i,j) \in \mathbb{X}^2$,
\begin{equation*} \left\lvert{\textbf{P}^n (i,j) - \boldsymbol{\nu}(\,j)}\right\rvert \leq c_1 e^{-c_2 n}. \tag{1.2} \end{equation*}
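To illustrate the decomposition (1.1) on the simplest example (this illustration and the parameters p, q are ours, not from the original), take $\mathbb{X}=\{1,2\}$ and
\begin{align*} \textbf{P} = \begin{pmatrix} 1-p & p \\ q & 1-q \end{pmatrix}, \qquad p, q \in (0,1). \end{align*}
Then $\boldsymbol{\nu} = \big( \tfrac{q}{p+q}, \tfrac{p}{p+q} \big)$ and, for any $g \in \mathscr{C}(\mathbb X)$ and $n\geq 1$,
\begin{align*} \textbf{P}^n g(i) = \boldsymbol{\nu}(g) + (1-p-q)^n \left( g(i) - \boldsymbol{\nu}(g) \right), \end{align*}
so that (1.1) and (1.2) hold with $Qg = (1-p-q)\left( g - \boldsymbol{\nu}(g) \right)$, $c_1=2$, and $c_2 = -\ln \left\lvert 1-p-q \right\rvert$ when $p+q \neq 1$; if $p+q = 1$, then $\textbf{P}^n g = \boldsymbol{\nu}(g)$ for all $n \geq 1$.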
Set $\mathbb N \,{:\!=}\, \left\{0,1,2,\ldots\right\}$ . For any $i\in \mathbb X$ , let $\mathbb{P}_i$ be the probability law on $ \mathbb X^{\mathbb N} $ and $\mathbb{E}_i$ the associated expectation generated by the finite-dimensional distributions of the Markov chain $\left( X_n \right)_{n\geq 0}$ starting at $X_0 = i$ . Note that $\textbf{P}^ng(i) = \mathbb{E}_i \left( g(X_n) \right)$ , for any $g \in \mathscr{C}(\mathbb X)$ , $i \in \mathbb{X}$ , and $n\geq 1.$
Assume that on the same probability space $\left( \Omega, \mathscr{F}, \mathbb{P} \right)$, for any $i \in \mathbb{X}$, we are given a random variable $\xi_i$ with values in $\mathbb{N}$ and with the probability generating function
\begin{equation*} f_i(s) = \mathbb{E} \left( s^{\xi_i} \right) = \sum_{k=0}^{+\infty} s^k\, \mathbb{P} \left( \xi_i = k \right), \qquad s \in [0,1]. \end{equation*}
Consider a collection of independent and identically distributed random variables $( \xi_i^{n,j} )_{j,n \geq 1}$ having the same law as the generic variable $\xi_i$ . The variable $\xi_{i}^{n,j}$ represents the number of children generated by the parent $j\in \{1, 2, \dots\}$ at time n when the environment is i. Throughout the paper, the sequences $( \xi_i^{n,j} )_{j,n \geq 1}$ , $i\in \mathbb{X}$ , and the Markov chain $\left( X_n \right)_{n\geq 0}$ are supposed to be independent.
Denote by $\mathbb E$ the expectation associated with $\mathbb P$. We assume that the variables $\xi_i$ have positive means and finite second moments.
Condition 2. For any $i \in \mathbb{X}$ , the random variable $\xi_i$ satisfies the following: $\mathbb{E} \left( \xi_i \right)>0$ and $\mathbb{E} ( \xi_i^2 ) < +\infty.$
From Condition 2 it follows that $0< f_i^{\prime}(1) < +\infty$ and $f_i^{\prime\prime}(1) < +\infty$; indeed, $f_i^{\prime}(1) = \mathbb{E} \left( \xi_i \right)$ and $f_i^{\prime\prime}(1) = \mathbb{E} \left( \xi_i (\xi_i - 1) \right) \leq \mathbb{E} ( \xi_i^2 )$.
We are now prepared to introduce the branching process $\left( Z_n \right)_{n\geq 0}$ in the Markovian environment $\left( X_n \right)_{n\geq 0}$. The initial population size is $Z_0 = z \in \mathbb N$. For $n\geq 1$, we let $Z_{n-1}$ be the population size at time $n-1$ and assume that at time n the parent $j\in \{1, \dots, Z_{n-1} \}$ generates $\xi_{X_n}^{n,j}$ children. Then the population size at time n is given by
\begin{equation*} Z_n = \sum_{j=1}^{Z_{n-1}} \xi_{X_n}^{n,j}, \tag{1.3} \end{equation*}
where the empty sum is equal to 0. In particular, when $Z_0=0$ , it follows that $Z_n=0$ for any $n\geq 1$ . We note that for any $n\geq 1$ the variables $\xi_{i}^{n,j}$ , $j\geq 1$ , $i\in \mathbb X$ , are independent of $Z_{0},\ldots, Z_{n-1}$ .
Introduce the function $\rho: \mathbb X \mapsto \mathbb R$ satisfying
\begin{equation*} \rho(i) = \ln f_i^{\prime}(1), \qquad i \in \mathbb{X}. \end{equation*}
Along with $\left( Z_n \right)_{n\geq 0}$ consider the Markov walk $\left( S_n \right)_{n\geq 0}$ such that $S_0 = 0$ and, for $n\geq 1$,
\begin{equation*} S_n = \sum_{k=1}^{n} \rho \left( X_k \right). \tag{1.4} \end{equation*}
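The Markov walk $\left( S_n \right)_{n\geq 0}$ tracks the conditional growth rate of the population. Indeed (a standard computation, recorded here for the reader's convenience), conditioning on the environment and using (1.3) together with the independence assumptions,
\begin{align*} \mathbb{E} \left( Z_n \mid X_1, \ldots, X_n, Z_{n-1} \right) = Z_{n-1} f_{X_n}^{\prime}(1), \quad \text{and hence} \quad \mathbb{E} \left( Z_n \mid X_1, \ldots, X_n \right) = Z_0\, e^{S_n}. \end{align*}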
The couple $\left( X_n, Z_n\right)_{n\geq 0}$ is a Markov chain with values in $\mathbb X \times \mathbb N $ , whose transition operator $\widetilde{\textbf{P}} $ is defined by the following relation: for any $i\in \mathbb X$ , $z\in \mathbb N,$ $\ s\in [0,1]$ , and $h: \mathbb X \mapsto \mathbb R$ bounded measurable,
where $h_s(i,z)= h(i) s^z$ . Let $\mathbb{P}_{i,z}$ be the probability law on $ (\mathbb X \times \mathbb N )^{\mathbb N}$ and $\mathbb{E}_{i,z}$ the associated expectation generated by the finite-dimensional distributions of the Markov chain $\left( X_n, Z_n \right)_{n\geq 0}$ starting at $X_0 = i$ and $Z_0=z$ . By straightforward calculations, for any $i\in \mathbb X$ , $z\in \mathbb N$ ,
The following non-lattice condition is used indirectly in the proofs in the present paper; it is needed to ensure that the local limit theorem for the Markov walk (1.4) holds true.
Condition 3. For any $\theta, a \in \mathbb{R}$, there exist $m\geq 0$ and a path $x_0, \dots, x_m$ in $\mathbb{X}$ such that $\textbf{P}(x_0,x_1) \cdots \textbf{P}(x_{m-1},x_m) {\textbf{P}}(x_m,x_0) > 0$ and
\begin{equation*} \rho(x_0) + \rho(x_1) + \cdots + \rho(x_m) \notin (m+1)\theta + a\mathbb{Z}. \end{equation*}
Condition 3 is an extension of the corresponding non-lattice condition for independent and identically distributed random variables $X_0, X_1, \ldots$, which can be stated as follows: whatever $\theta, a \in \mathbb{R}$, there exists $m\geq 0$ such that $X_0 + \cdots + X_m$ takes values outside the lattice $(m+1)\theta + a\mathbb{Z}$ with positive probability. Usually, the latter is formulated in an equivalent way with $m=0$. For Markov chains, Condition 3 is equivalent to the condition that the Fourier transform operator
\begin{equation*} \textbf{P}_{it}\, g(\,j) \,{:\!=}\, \mathbb{E}_j \left( e^{it \rho(X_1)} g(X_1) \right), \qquad g \in \mathscr{C}(\mathbb X),\ j \in \mathbb{X}, \end{equation*}
has a spectral radius strictly less than 1 for $t\not= 0$; see Lemma 4.1 of [Reference Grama, Lauvergnat and Le Page14]. Non-latticity for Markov chains with not necessarily finite state spaces is considered, for instance, in Shurenkov [Reference Shurenkov23] and Alsmeyer [Reference Alsmeyer3].
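To see what Condition 3 excludes, consider the degenerate situation (our illustration, not from the original) where $\rho$ is constant on $\mathbb{X}$, say $\rho \equiv \theta_0$. Then, for every cycle $x_0, \dots, x_m$,
\begin{align*} \rho(x_0) + \cdots + \rho(x_m) = (m+1)\theta_0 \in (m+1)\theta_0 + a\mathbb{Z} \qquad \text{for all } a \in \mathbb{R}, \end{align*}
so Condition 3 fails with $\theta = \theta_0$; accordingly, $S_n = n\theta_0$ is concentrated on a single point and no local limit theorem of the form needed here can hold.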
For the following facts and definitions we refer to [Reference Grama, Lauvergnat and Le Page13]. Under Condition 1, from the spectral gap property of the operator $\textbf{P}$ it follows that, for any $\lambda \in \mathbb{R}$ and any $i \in \mathbb{X}$, the limit
\begin{equation*} k(\lambda) \,{:\!=}\, \lim_{n \to +\infty} \left( \mathbb{E}_i \left( e^{\lambda S_n} \right) \right)^{1/n} \tag{1.7} \end{equation*}
exists and does not depend on the initial state of the Markov chain $X_0=i$. Moreover, the number $k(\lambda)$ is the spectral radius of the transfer operator $\textbf{P}_{\lambda}$:
\begin{equation*} \textbf{P}_{\lambda}\, g(i) \,{:\!=}\, \mathbb{E}_i \left( e^{\lambda \rho(X_1)} g(X_1) \right) = \sum_{j \in \mathbb{X}} e^{\lambda \rho(\,j)}\, \textbf{P}(i,j)\, g(\,j), \qquad g \in \mathscr{C}(\mathbb X). \tag{1.8} \end{equation*}
In particular, under Conditions 1 and 3, $k(\lambda)$ is a simple eigenvalue of the operator $\textbf{P}_{\lambda}$ and there is no other eigenvalue of modulus $k(\lambda)$ . In addition, the function $k(\lambda)$ is analytic on $\mathbb R.$
The BPME is said to be subcritical if $k'(0)<0$, critical if $k'(0)=0$, and supercritical if $k'(0)>0$. The following identity has been established in [Reference Grama, Lauvergnat and Le Page13]:
\begin{equation*} k^{\prime}(0) = \boldsymbol{\nu}(\rho) = \mathbb{E}_{\boldsymbol{\nu}} \left( \ln f_{X_1}^{\prime}(1) \right) = \phi^{\prime}(0), \tag{1.9} \end{equation*}
where $\mathbb E_{\boldsymbol{\nu}}$ is the expectation generated by the finite-dimensional distributions of the Markov chain $\left( X_n \right)_{n\geq 0}$ in the stationary regime, and $\phi (\lambda)=\mathbb{E}_{\boldsymbol{\nu}} ( \exp\{\lambda\ln f_{X_1}^{\prime}(1)\} )$, $\lambda \in \mathbb{R}$. The relation (1.9) shows that the classification of BPMEs is consistent with the classical one for an independent and identically distributed environment: when the random variables $\left( X_n \right)_{n\geq 1}$ are independent and identically distributed with common law $\boldsymbol{\nu}$, from (1.9) it follows that the two classifications coincide.
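To make the comparison explicit (a short verification added for the reader's convenience, based on (1.7)): when the $X_k$ are independent and identically distributed with common law $\boldsymbol{\nu}$,
\begin{align*} \mathbb{E}_{\boldsymbol{\nu}} \left( e^{\lambda S_n} \right) = \prod_{k=1}^{n} \mathbb{E}_{\boldsymbol{\nu}} \left( e^{\lambda \rho(X_k)} \right) = \phi(\lambda)^n, \quad \text{so that} \quad k(\lambda) = \phi(\lambda) \quad \text{and} \quad k^{\prime}(0) = \phi^{\prime}(0) = \mathbb{E}_{\boldsymbol{\nu}} \left( \ln f_{X_1}^{\prime}(1) \right), \end{align*}
which is precisely the classical criticality parameter for an independent and identically distributed environment.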
In the present paper we will focus on the critical case: $k'(0)=0$ . Our first result establishes the exact asymptotic of the survival probability of $Z_n$ jointly with the event $\{X_n=j\}$ when the branching process starts with z particles.
Theorem 1.1. Assume Conditions 1–3 and $k'(0) = 0.$ Then there exists a positive function $u \colon \mathbb{X} \times \mathbb{N} \to \mathbb R_{+}^{*}$, $(i,z) \mapsto u(i,z)$, such that for any $(i,j) \in \mathbb{X}^2$ and $z \in \mathbb{N}$, $z\not=0,$
An explicit formula for $u(i,z)$ is given in Proposition 3.4. In the case $z=1$, Theorem 1.1 has been proved in [Reference Grama, Lauvergnat and Le Page13, Theorem 1.1]. The proof for the case $z>1$, which is not a direct consequence of the case $z=1$, will be given in Proposition 2.3.
We shall complement the previous statement by studying the asymptotic behavior of $Z_n$ given $Z_n>0$ under the following condition.
Condition 4. The random variables $\xi_i$, $i\in \mathbb X$, satisfy
\begin{equation*} \mathbb{P} \left( \xi_i \geq 2 \right) > 0, \qquad i \in \mathbb{X}. \end{equation*}
Condition 4 is quite natural: it says that each parent can generate more than one child with positive probability. In the present paper it is used to prove the nondegeneracy of the limit of the martingale
in the key result Lemma 4.1.
The next result concerns the nondegeneracy of the limit law of the properly normalized number of particles $Z_n$ at time n jointly with the event $\{X_n=j\}$ .
Theorem 1.2. Assume Conditions 1–4 and $k'(0) = 0.$ Then, for any $i \in \mathbb{X}$ , $z \in \mathbb{N}$ , $z\not=0,$ there exists a probability measure $\mu_{i,z}$ on $\mathbb{R}_{+}$ such that, for any continuity point $t\geq0$ of the distribution function $\mu_{i,z} ([0,\cdot])$ and $j \in \mathbb{X}$ , it holds that
and
Moreover, it holds that $\mu_{i,z} (\{0\})=0$ .
From [Reference Grama, Lauvergnat and Le Page14, Lemma 10.3] it follows that, under Conditions 1 and 3, the quantity
\begin{equation*} \sigma^2 \,{:\!=}\, \lim_{n\to +\infty} \frac{1}{n}\, \mathbb{E}_{\boldsymbol{\nu}} \left( \left( S_n - n \boldsymbol{\nu}(\rho) \right)^2 \right) \end{equation*}
is finite and positive, i.e. $0< \sigma <\infty$. Let
\begin{equation*} \Phi^{+}(t) \,{:\!=}\, 1 - e^{-t^2/2}, \qquad t \geq 0, \end{equation*}
be the Rayleigh distribution function. The following assertion gives the asymptotic behavior of the normalized Markov walk $S_n$ jointly with $X_n$ provided $Z_n>0$.
Theorem 1.3. Assume Conditions 1–4 and $k'(0) = 0.$ Then, for any $i,j \in \mathbb{X}$ , $z \in \mathbb{N}$ , $z\not=0$ , and $t \in \mathbb R$ ,
and
The following assertion is the Yaglom-type limit theorem for $\log Z_n$ jointly with $X_n$ .
Theorem 1.4. Assume Conditions 1–4 and $k'(0) = 0.$ Then, for any $i, j \in \mathbb{X}$ , $z \in \mathbb{N}$ , $z\not=0$ , and $t \geq 0$ ,
and
As mentioned before, in the proofs of the stated results we make use of the previous developments in the papers [Reference Grama, Lauvergnat and Le Page13, Reference Grama, Lauvergnat and Le Page14]. These studies rely heavily on the existence of the harmonic function and on the asymptotic behavior of the exit-time probability for Markov walks, which were established recently in [Reference Grama, Lauvergnat and Le Page12, Reference Grama, Le Page and Peigné15]; these results are recalled in the next section. For recurrent Markov chains, alternative approaches based on renewal arguments are possible. The advantage of the harmonic-function approach proposed here is that it can be extended to more general Markov environments which are not recurrent. In particular, with these methods one could treat multi-type branching processes in random environments.
The outline of the paper is as follows. In Section 2 we give a series of assertions for walks on Markov chains conditioned to stay positive and prove Theorem 1.1 for $z>1$ . In Section 3 we state some preparatory results for branching processes. The proofs of Theorems 1.2, 1.3, and 1.4 are given in Sections 4 and 5.
We end this section by fixing some notation. As usual, the symbol c will denote a positive constant depending on all previously introduced constants; similarly, the symbol c equipped with subscripts will denote a positive constant depending only on the indices and on all previously introduced constants. All these constants may change in value from one occurrence to another. By $f \circ g$ we mean the composition of two functions f and g: $f \circ g (\cdot)=f(g(\cdot))$. The indicator of an event A is denoted by $\unicode{x1D7D9}_A$. For any bounded measurable function f on $\mathbb{X}$, random variable X with values in $\mathbb{X}$, and event A, we define
\begin{equation*} \mathbb{E} \left( f(X) \,;\, A \right) \,{:\!=}\, \mathbb{E} \left( f(X) \unicode{x1D7D9}_{A} \right). \end{equation*}
2. Facts about Markov walks conditioned to stay positive
2.1. Conditioned limit theorems
We start by formulating two propositions which are consequences of the results in [Reference Grama, Lauvergnat and Le Page12], [Reference Grama, Lauvergnat and Le Page13], and [Reference Grama, Lauvergnat and Le Page14].
We introduce the first time when the Markov walk $\left(y+ S_n \right)_{n\geq 0}$ becomes nonpositive: for any $y \in \mathbb{R}$, set
\begin{equation*} \tau_y \,{:\!=}\, \inf \left\{ n \geq 1 \,:\, y + S_n \leq 0 \right\}, \tag{2.1} \end{equation*}
where $\inf \emptyset = 0.$ Conditions 1 and 3 together with $\boldsymbol{\nu}(\rho) = 0$ ensure that the stopping time $\tau_y$ is well defined and finite $\mathbb{P}_i$ -almost surely (-a.s.), for any $i \in \mathbb{X}$ .
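For orientation, we recall why $\tau_y$ is finite (a standard fact, added here for completeness): since $\boldsymbol{\nu}(\rho)=0$ and, under Conditions 1 and 3, $\sigma > 0$, the Markov walk oscillates,
\begin{align*} \limsup_{n\to+\infty} S_n = +\infty \quad \text{and} \quad \liminf_{n\to+\infty} S_n = -\infty \quad \mathbb{P}_i\text{-a.s.}, \end{align*}
so the walk $\left( y+S_n \right)_{n\geq 0}$ eventually visits $({-}\infty,0]$, whatever $y \in \mathbb{R}$.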
The following important proposition is a direct consequence of the results in [Reference Grama, Lauvergnat and Le Page12] adapted to the case of a finite Markov chain. It proves the existence of the harmonic function related to the Markov walk $\left(y+ S_n \right)_{n\geq 0}$ and states some of its properties that will be used in the proofs of the main results of the paper.
Proposition 2.1. Assume Conditions 1 and 3 and $k'(0) = 0$ . There exists a nonnegative function V on $\mathbb{X} \times \mathbb{R}$ such that the following hold:
1. For any $(i,y) \in \mathbb{X} \times \mathbb{R}$ and $n \geq 1$,
\begin{equation*} \mathbb{E}_i \left( V \left( X_n, y+S_n \right) \,;\, \tau_y > n \right) = V(i,y). \end{equation*}
2. For any $i\in \mathbb{X}$, the function $V(i,\cdot)$ is nondecreasing, and for any $(i,y) \in \mathbb{X} \times \mathbb{R}$,
\begin{equation*} V(i,y) \leq c \left( 1+\max(y,0) \right). \end{equation*}
3. For any $i \in \mathbb{X}$, $y > 0$, and $\delta \in (0,1)$,
\begin{equation*} \left( 1- \delta \right)y - c_{\delta} \leq V(i,y) \leq \left(1+\delta \right)y + c_{\delta}. \end{equation*}
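Note (a standard reformulation of the point 1, recorded here because it underlies the change of probability measure in Subsection 2.2) that, by the Markov property and the case $n=1$ of the point 1, for any $n \geq 0$,
\begin{align*} \mathbb{E}_i \left( V \left( X_{n+1}, y+S_{n+1} \right) \unicode{x1D7D9}_{\{\tau_y > n+1\}} \,\middle|\, X_1, \ldots, X_n \right) = \unicode{x1D7D9}_{\{\tau_y > n\}} V \left( X_n, y+S_n \right), \end{align*}
i.e. the sequence $\big( V ( X_n, y+S_n ) \unicode{x1D7D9}_{\{\tau_y > n\}} \big)_{n\geq 0}$ is a nonnegative $\mathbb{P}_i$-martingale.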
We need the asymptotic of the probability of the event $\{ \tau_y > n\}$ jointly with the state of the Markov chain $(X_n)_{n\geq 1}$ .
Proposition 2.2. Assume Conditions 1 and 3 and $k'(0)=0$ .
1. For any $(i,y) \in \mathbb{X} \times \mathbb{R}$ and $j \in \mathbb{X}$, we have
\begin{align*} \lim_{n\to +\infty} \sqrt{n}\, \mathbb{P}_{i} \left( X_n = j \,,\, \tau_y > n \right) = \frac{2V(i,y) \boldsymbol{\nu} (\,j)}{\sqrt{2\pi} \sigma}. \end{align*}
2. For any $(i,y) \in \mathbb{X} \times \mathbb{R}$, $j\in \mathbb{X}$, and $n\geq 1$,
\begin{align*} \mathbb{P}_i \left( X_n = j \,,\, \tau_y > n \right) \leq c\, \frac{1 + \max(y,0)}{\sqrt{n}}. \end{align*}
For a proof of the first assertion of Proposition 2.2, see Lemma 2.11 in [Reference Grama, Lauvergnat and Le Page13]. The second is deduced from the point (b) of Theorem 2.3 of [Reference Grama, Lauvergnat and Le Page12].
Denote by $\textrm{supp}(V) = \left\{ (i,y) \in \mathbb{X} \times \mathbb{R} \,:\, V(i,y) > 0 \right\}$ the support of the function V. By the point 3 of Proposition 2.1, the harmonic function V satisfies the following property: for any $i\in \mathbb X$ there exists $y_i\geq 0$ such that $(i,y)\in \textrm{supp}(V)$ for any $y>y_i$.
In addition to the previous two propositions we need the following result, which gives the asymptotic behavior of the conditioned limit law of the Markov walk $\left(y+ S_n \right)_{n\geq 0}$ jointly with the Markov chain $\left(X_n \right)_{n\geq 0}$ . It extends Theorem 2.5 of [Reference Grama, Lauvergnat and Le Page12], which considers the asymptotic of
given the event $\{\tau_y >n\}$ .
Proposition 2.3. Assume Conditions 1 and 3 and $k'(0)=0$ .
1. For any $(i,y) \in \textrm{supp}(V)$, $j\in \mathbb{X}$, and $t\geq 0$,
\begin{align*} \mathbb{P}_i \left( \left.{\frac{y+S_n}{\sigma \sqrt{n}} \leq t \,,\, X_n=j } \,\middle|\,{\tau_y >n}\right. \right) \underset{n\to+\infty}{\longrightarrow} \Phi^+(t) \boldsymbol{\nu} (\,j). \end{align*}
2. There exists $\varepsilon_0 >0$ such that, for any $\varepsilon \in (0,\varepsilon_0)$, $n\geq 1$, $t_0 > 0$, $t\in[0,t_0]$, $(i,y) \in \mathbb{X} \times \mathbb{R}$, and $j \in \mathbb{X}$,
\begin{align*} \left\lvert{ \mathbb{P}_i \left(\frac{y+S_n}{\sqrt{n} \sigma} \leq t \,,\, X_n=j \,,\, \tau_y > n \right) - \frac{2V(i,y)}{\sqrt{2\pi n}\,\sigma} \Phi^+(t) \boldsymbol{\nu} (\,j) }\right\rvert \leq c_{\varepsilon,t_0} \frac{ 1+\max(y,0)^2 }{n^{1/2+\varepsilon}}. \end{align*}
Proof. It is enough to prove the point 2 of the proposition. This will be derived from the corresponding result in Theorem 2.5 of [Reference Grama, Lauvergnat and Le Page12]. We first establish an upper bound. Let $k=\big[n^{1/4}\big]$ and $\| \rho \|_{\infty}=\max_{i\in \mathbb X } | \rho(i) |$ . Since
we have
By the Markov property
Now, using (1.2) and setting
we have
By Theorem 2.5 and Remark 2.10 of [Reference Grama, Lauvergnat and Le Page12], there exists $\varepsilon_0>0$ such that for any $\varepsilon \in (0, \varepsilon_0)$ and $t_0>0$ , $n\geq1$ , we have, for $t_{n,k}\leq t_0 $ ,
Since $| t_{n,k} - t| \leq c_{t_0}\frac{1}{n^{1/4}}$ and $\Phi^+$ is smooth, we obtain
From (2.2), (2.3), (2.4), and (2.5) it follows that
Now we shall establish a lower bound. With the notation introduced above, we have
As in the proof of (2.6), we establish the lower bound
Note that $0 \geq \min_{n-k < i \leq n} \{ y+S_i \} \geq y+S_{n-k} - k \| \rho \|_{\infty}$ , on the set $\{ n-k < \tau_y \leq n \}$ . Set
Then
Again by Theorem 2.5 and Remark 2.10 of [Reference Grama, Lauvergnat and Le Page12], there exists $\varepsilon_0>0$ such that for any $\varepsilon \in (0, \varepsilon_0)$ and $t_0>0$ , $n\geq1$ , we have, for $t_{n,k}\leq t_0 $ ,
Since $| t_{n,k} | \leq c_{t_0}\frac{1}{n^{1/4}}$ and $\Phi^+(0)=0$ , we obtain
From (2.9), (2.10), and (2.11), we deduce that
Using (2.7), (2.8), and (2.12), one gets
which together with (2.6) ends the proof of the point 2 of the proposition. The point 1 follows from the point 2.
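Indeed (we record this short deduction for completeness): for $(i,y) \in \textrm{supp}(V)$, the point 1 of Proposition 2.2, summed over $j \in \mathbb{X}$, gives $\sqrt{n}\, \mathbb{P}_i \left( \tau_y > n \right) \to 2V(i,y)/(\sqrt{2\pi}\sigma) > 0$, so the point 2 yields
\begin{align*} \mathbb{P}_i \left( \left. \frac{y+S_n}{\sigma \sqrt{n}} \leq t \,,\, X_n=j \,\middle|\, \tau_y > n \right. \right) = \frac{\mathbb{P}_i \left( \frac{y+S_n}{\sigma \sqrt{n}} \leq t \,,\, X_n=j \,,\, \tau_y > n \right)}{\mathbb{P}_i \left( \tau_y > n \right)} \underset{n\to+\infty}{\longrightarrow} \Phi^+(t)\, \boldsymbol{\nu}(\,j). \end{align*}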
We need the following estimate, whose proof can be found in [Reference Grama, Lauvergnat and Le Page14].
Proposition 2.4. Assume Conditions 1 and 4 and $k'(0)=0$ . Then there exists $c > 0$ such that for any $a>0$ , nonnegative function $\psi \in \mathscr{C}(\mathbb X)$ , $y \in \mathbb{R}$ , $t \geq 0$ , and $n \geq 1$ ,
2.2. Change of probability measure
Fix any $(i,y) \in \textrm{supp}(V)$ and $z\in \mathbb N$ . The harmonic function V from Proposition 2.1 allows us to introduce the probability measure $\mathbb{P}_{i,y,z}^+$ on ${(\mathbb X \times \mathbb N})^{\mathbb{N}}$ and the corresponding expectation $\mathbb{E}_{i,y,z}^+$ , by the following relation: for any $n\geq 1$ and any bounded measurable g: ${(\mathbb X \times \mathbb N )}^{n} \mapsto \mathbb{R}$ ,
The fact that the function V is harmonic (by the point 1 of Proposition 2.1) ensures the applicability of the Kolmogorov extension theorem and shows that $\mathbb{P}_{i,y,z}^+$ is a probability measure. In the same way we define the probability measure $\mathbb{P}_{i,y}^+$ and the corresponding expectation $\mathbb{E}_{i,y}^+$: for any $(i,y) \in \textrm{supp}(V)$, $n \geq 1$, and any bounded measurable g: $\mathbb{X}^n \to \mathbb{R}$,
The relation between the expectations $\mathbb{E}_{i,y,z}^+$ and $\mathbb{E}_{i,y}^+$ is given by the following identity: for any $n \geq 1$ and any bounded measurable g: $\mathbb{X}^n \to \mathbb{R}$ ,
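For the reader's convenience we indicate the consistency computation behind the Kolmogorov extension argument (a standard verification, not spelled out above): by the Markov property and the harmonicity of V (the point 1 of Proposition 2.1 with $n=1$), for any bounded measurable g and $n \geq 1$,
\begin{align*} \mathbb{E}_{i,z} \left( g\left( X_1, Z_1, \ldots, X_n, Z_n \right) V\left( X_{n+1}, y+S_{n+1} \right) \,;\, \tau_y > n+1 \right) = \mathbb{E}_{i,z} \left( g\left( X_1, Z_1, \ldots, X_n, Z_n \right) V\left( X_n, y+S_n \right) \,;\, \tau_y > n \right), \end{align*}
so the definitions of $\mathbb{P}_{i,y,z}^+$ at ranks n and $n+1$ agree on functions of the first n coordinates.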
With the help of Proposition 2.4, we have the following bounds.
Lemma 2.5. Assume Conditions 1 and 3 and $k'(0)=0$ . For any $(i,y) \in \textrm{supp}(V)$ , we have, for any $k \geq 1$ ,
In particular,
The proof, being similar to that in [Reference Grama, Lauvergnat and Le Page13], is left to the reader.
We need the following statements. Let $\mathscr F_n = \sigma \{ X_0,Z_0, \ldots, X_n, Z_n \}$, and let $(Y_n)_{n\geq 0}$ be a bounded $(\mathscr F_n)_{n\geq 0}$-adapted sequence.
Lemma 2.6. Assume Conditions 1–3 and $k'(0)=0$ . For any $k \geq 1$ , $(i,y) \in \textrm{supp}(V)$ , $z\in \mathbb N$ , and $j \in \mathbb{X}$ ,
Proof. For the sake of brevity, for any $(i,j) \in \mathbb{X}^2$ , $y \in \mathbb{R}$ , and $n \geq 1$ , we set
Fix $k \geq 1$ . By the point 1 of Proposition 2.2, it is clear that for any $(i,y) \in \textrm{supp}(V)$ and n large enough, $\mathbb{P}_i \left( \tau_y > n \right) > 0$ . By the Markov property, for any $j \in \mathbb{X}$ and $n \geq k+1$ large enough,
Using the point 1 of Proposition 2.2, by the Lebesgue dominated convergence theorem,
Lemma 2.7. Assume that $(i,y) \in \text{supp}\ V $ and $z\in \mathbb N$ . For any bounded $(\mathscr F_n)_{n\geq 0}$ -adapted sequence $(Y_n)_{n\geq 0}$ such that $Y_n\to Y_{\infty}$ $\mathbb P_{i,y,z}^{+}$ -a.s.,
Proof. Let $k\geq1$ and $\theta>1$ . Then
We bound the second term in the right-hand side of (2.16):
By the point 1 of Proposition 2.2, we have
Now we shall prove that
Recall that $\theta >1$ . By the Markov property (conditioning on $\mathscr F_n$ ),
where we use the notation $P_{n'}(i',y') \,{:\!=}\, \mathbb P_{i'} \big( \tau_{y'} > n' \big).$ By the point 2 of Proposition 2.2 and the point 3 of Proposition 2.1, there exists $y_0>0$ such that for $i' \in \mathbb X$, $y' > y_0$, and $n'\in\mathbb N$,
Representing the right-hand side of (2.20) as a sum of two terms and using (2.21) gives
Using the point 1 of Proposition 2.3 and the point 1 of Proposition 2.2, we have
By the change-of-measure formula (2.13),
Letting first $n\to\infty$ and then $k\to\infty$ , by the Lebesgue dominated convergence theorem,
From (2.22)–(2.25) we deduce (2.19). Now (2.16)–(2.19) imply that, for any $\theta>1$ ,
Since $\theta$ can be taken arbitrarily close to 1, the claim of the lemma follows.
The next assertion is an easy consequence of Lemmata 2.6 and 2.7.
Lemma 2.8. Assume that $(i,y) \in \text{supp}\ V $ , $j\in \mathbb{X}$ , and $z\in \mathbb N$ . For any bounded $(\mathscr F_n)_{n\geq 0}$ -adapted sequence $(Y_n)_{n\geq 0}$ such that $Y_n\to Y_{\infty}$ $\mathbb P_{i,y,z}^{+}$ -a.s.,
Proof. For any $n > k \geq1$ , we have
By Lemma 2.6, the first term in the right-hand side of (2.26) converges to
as $n\to\infty$, where $\lim_{k\to \infty} \mathbb E_{i,y,z}^{+} Y_{k} = \mathbb E_{i,y,z}^{+} Y_{\infty}.$ By Lemma 2.7, the second term in the right-hand side of (2.26) vanishes, which completes the proof.
2.3. The dual Markov chain
Note that the invariant measure $\boldsymbol{\nu}$ is positive on $\mathbb{X}$. Therefore the dual Markov kernel
\begin{equation*} \textbf{P}^* (i,j) \,{:\!=}\, \frac{\boldsymbol{\nu} (\,j)\, \textbf{P} (\,j,i)}{\boldsymbol{\nu} (i)}, \qquad (i,j) \in \mathbb{X}^2, \end{equation*}
is well defined.
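With this expression, the $\textbf{P}^*$-invariance of $\boldsymbol{\nu}$, used below, is a one-line check, which we record for completeness:
\begin{align*} \sum_{i \in \mathbb{X}} \boldsymbol{\nu}(i)\, \textbf{P}^*(i,j) = \sum_{i \in \mathbb{X}} \boldsymbol{\nu}(\,j)\, \textbf{P}(\,j,i) = \boldsymbol{\nu}(\,j), \qquad j \in \mathbb{X}. \end{align*}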
is well defined. On an extension of the probability space $(\Omega, \mathscr{F}, \mathbb{P})$ we consider the dual Markov chain $\left( X_n^* \right)_{n\geq 0}$ with values in $\mathbb{X}$ and with transition probability $\textbf{P}^*$ . The dual chain $\left( X_n^* \right)_{n\geq 0}$ can be chosen to be independent of the chain $\left( X_n \right)_{n\geq 0}$ . Accordingly, the dual Markov walk $\left(S_n^* \right)_{n\geq 0}$ is defined by setting
For any $y \in \mathbb{R}$ define the first time when the Markov walk $\left(y+ S_n^* \right)_{n\geq 0}$ becomes nonpositive:
For any $i\in \mathbb{X}$ , denote by $\mathbb{P}_i^*$ and $\mathbb{E}_i^*$ the probability and the associated expectation generated by the finite-dimensional distributions of the Markov chain $( X_n^* )_{n\geq 0}$ starting at $X_0^* = i$ .
It is easy to verify (see [Reference Grama, Lauvergnat and Le Page13]) that $\boldsymbol{\nu}$ is also $\textbf{P}^*$ invariant and that Conditions 1 and 3 are satisfied for $\textbf{P}^*$ . This implies that Propositions 2.1–2.4, formulated in Subsection 2.1, hold also for the dual Markov chain $(X_n^*)_{n\geq 0}$ and the Markov walk $\left(y+ S_n^* \right)_{n\geq 0}$ , with the harmonic function $V^*$ such that, for any $(i,y) \in \mathbb{X} \times \mathbb{R}$ and $n \geq 1$ ,
The following duality property is obvious (see [Reference Grama, Lauvergnat and Le Page13]).
Lemma 2.9. (Duality.) For any $n\geq 1$ and any function g: $\mathbb{X}^n \to \mathbb{C}$ ,
3. Preparatory results for branching processes
Throughout the remainder of the paper we will use the following notation. For $ s \in [0,1)$ , let
and, by continuity,
In addition, for $s \in [0,1),\ z\in \mathbb{N},\ z\neq 0$ , let $g_{z}(s)=s^z$ and
and, by continuity,
For any $n\geq 1$ , $z\in \mathbb{N},\ z\not= 0$ , and $s\in [0,1]$ , the following quantity will play an important role in our study:
Under Condition 2, for any $i\in \mathbb X$ and $s\in [0,1]$ we have $f_i(s)\in [0,1]$ and $f_{X_1}\circ \cdots \circ f_{X_n} (s) \in [0,1]$ . This implies that, for any $s\in [0,1]$ ,
For any $n\geq 1$ and $z\in \mathbb{N}$, $z\neq 0$, the function $s\mapsto q_{n,z} (s)$ is convex on [0,1]. Since the sequences $( \xi_i^{n,j} )_{j,n \geq 1}$, $i\in \mathbb{X}$, are independent of the Markov chain $\left( X_n \right)_{n\geq 0}$, taking $s=0$ we have, $\mathbb P_{i,y,z}^{+}$-a.s.,
Note also that $\{ Z_{n} >0\} \supset \{ Z_{n+1} >0\}$ and therefore, for $n\geq 1$ ,
Taking the limit as $n\to\infty$ , $\mathbb P_{i,y,z}^{+}$ -a.s.,
Moreover, by convexity of the function $q_{n,z}(s)$ we have
The following formula (whose proof is left to the reader) is similar to the well-known statements from the papers by Agresti [Reference Agresti2] and Geiger and Kersting [Reference Geiger and Kersting9]: for any $s \in [0,1)$ and $n \geq 1$ ,
We can rewrite (3.6) in the following more convenient form: for any $s \in [0,1)$ and $n \geq 1$ ,
where
Since $\frac{1}{2}\varphi(0) \leq \varphi(s) \leq 2 \varphi(1) $ , for any $k \in \{ 1, \dots, m \}$ ,
By Theorem 5 of [Reference Athreya and Karlin4], for any $(i,y) \in \textrm{supp}(V)$ , $s \in[0,1)$ , $m\geq 1$ , and $k \in \{ 1, \dots, m \}$ , there exists a random variable $\eta_{k,\infty}(s)$ such that
everywhere, and by (3.8), for any $s \in[0,1)$ and $k \geq 1 $ ,
In the same way,
everywhere. For any $s \in [0,1)$ , define $q_{\infty,z}(s)$ by setting
By Lemma 2.5, we have that
Lemma 3.1. Assume Conditions 1 and 3 and $k'(0)=0$ . For any $(i,y) \in \text{supp}\ V$ , $z\in \mathbb{N},\ z\neq 0$ , and $s\in [0,1)$ ,
and
Proof. We give a sketch only. Following the proof of Lemma 3.2 in [Reference Grama, Lauvergnat and Le Page13], for any $(i,y) \in \text{supp}\ V$ , by (3.7), (3.8), and (3.10), we obtain
The last term in the right-hand side of (3.14) converges to 0 as $n\to \infty$ by (3.11). By Lemma 2.5 and the Lebesgue dominated convergence theorem, we have
Taking the limit as $l\to \infty$ , again by Lemma 2.5, we conclude the first assertion of the lemma. The second assertion follows from the first one, since $q_{n,z}(s) \leq 1$ and $q_{\infty,z}(s)\leq 1$ .
Lemma 3.2. Assume Conditions 1 and 3 and $k'(0)=0$ . For any $(i,y)\in \text{supp}\ V$ and $z\in \mathbb N$ , $z\not=0$ , we have, for any $k \geq 1$ , $\mathbb P_{i,y,z}^{+}$ -a.s.,
Proof. By (3.4) we have, $\mathbb P_{i,y,z}^{+}$-a.s.,
By Lemma 2.5 and a monotone convergence argument,
Thus, $\mathbb P_{i,y}^{+}$ -a.s.,
which ends the proof of the lemma.
We will make use of the following lemma.
Lemma 3.3. There exists a constant c such that, for any $z\in \mathbb N$ , $z\not=0,$ and $y\geq 0$ sufficiently large,
Proof. We follow the same line as the proof of Theorem 1.1 in [Reference Grama, Lauvergnat and Le Page13]. First, we have
Using (3.5) and the fact that $q_{n,z}(0)$ is nonincreasing in n, we have
Setting $B_{n,j}=\{ -(\,j+1) < \min_{1\leq k\leq n} (y+S_k) \leq -j \}$ , this implies
Using the point 2 of Proposition 2.2 we obtain the assertion of the lemma.
It is known from the results in [Reference Grama, Lauvergnat and Le Page12] that when y is sufficiently large, $(i,y) \in \text{supp}\ V$ . For $(i,y) \in \text{supp}\ V$ , set
Theorem 1.1 is a direct consequence of the following proposition, which extends Theorem 1.1 in [Reference Grama, Lauvergnat and Le Page13] to the case $z>1$ .
Proposition 3.4. Assume Conditions 1–4. Let $z\in \mathbb N$, $z \neq 0$. Then for any $i \in \mathbb X$ the limit as $y\to \infty$ of $V(i,y) U(i,y,z)$ exists and satisfies
Moreover,
Proof. By Lemma 2.8 and (3.17), for $(i,y)\in \text{supp}\ V$ and $j \in \mathbb X, $
and
By Lemma 3.3,
From (3.18), (3.19), and (3.20), when y is sufficiently large,
Similarly, when y is sufficiently large,
Since $\mathbb{P}_{i,z} \left( Z_n > 0 \,,\, X_n = j \,,\, \tau_y > n \right)$ is nondecreasing in y, from (3.18) it follows that the function
is nondecreasing in y. Moreover, by (3.21) and (3.22), we deduce that $u(i,y,z)$ as a function of y is bounded by $L_0$. Therefore its limit as $y \to \infty$ exists: $u(i,z)= \lim_{y\to \infty } u(i,y,z)$. To prove that $U(i,y,z)=\mathbb E_{i,y}^{+} q_{\infty,z}(0) >0$, it is enough to remark that, by (3.13), it holds that $\mathbb E_{i,y}^{+} q^{-1}_{\infty,z}(0) <\infty$. On the other hand, $V(i,y)>0$ for y large enough. Therefore $u(i,z)>0$, which proves the first assertion. The second assertion follows immediately from (3.21) and (3.22) by letting $y\to\infty$.
4. Proof of Theorem 1.2
Throughout this section we write, for $n\geq 1$ ,
and, for $0 \leq k \leq n$ ,
Recall the following identities, which will be useful in the proofs:
For any $n\geq 1,$ set
It is easy to see that, by the definition (2.1) of $\tau_y$ , we have $\{\tau_0>n\}=\{L_{0,n}>0\}$ , so that (4.4) is equivalent to
We first prove a series of auxiliary statements.
Lemma 4.1. Assume Conditions 1–4. Let $s\geq 0$ . For any $(i,0)\in \text{supp}\, V$ and $z\in \mathbb N$ , $z\not=0$ , there exists a positive random variable $W_{i,z}$ such that
where
Moreover, for any $(i,0)\in \text{supp}\ V$ and $z\in \mathbb N$ , $z\not=0$ , it holds $\mathbb P^{+}_{i,0,z}$ -a.s. that
For any $(i,0)\not\in \text{supp}\ V$ and $z\in \mathbb N$ , $z\not=0$ ,
Proof. Define
Since
is a positive $((\mathscr F_n)_{n\geq0}, \mathbb P^{+}_{i,0,z} )$ -martingale, its limit, say
exists $\mathbb P^{+}_{i,0,z}$ -a.s. and is nonnegative. Therefore, $\mathbb P^{+}_{i,0,z}$ -a.s.
Now the first assertion follows from Lemma 2.8.
For the second assertion we use a result from Kersting [Reference Kersting18], stated in the more general setting of branching processes with varying environment. To apply it we condition with respect to the environment $(X_n)_{n\geq 0}$, so that the environment may be considered fixed. The condition (A) in [Reference Kersting18] is obviously satisfied because of Condition 4. Moreover, according to Lemma 3.2, the extinction probability satisfies, $\mathbb P_{i,0,z}^{+}$-a.s.,
By Theorem 2 in [Reference Kersting18], this implies that, $\mathbb P_{i,0,z}^{+}$-a.s.,
Since $\mathbb P_{i,0,z}^{+}$-a.s. we have $\cup_{p\geq 1} \{ Z_p =0\} \subset \{ W_{i,z} = 0\}$, we obtain the second assertion.
The third assertion follows from the point 1 of Proposition 2.2, since $V(i,0)=0$ . This ends the proof of the lemma.
Notice that $P_{\infty}(i,s,z)$ can be rewritten as
This shows that $P_{\infty}(i,s,z)$ is the Laplace transform, at the point $t = e^{-s}$, of a measure on $\mathbb R_{+}$ which assigns the mass 0 to the set $\{0\}$.
We will need the following lemma.
Lemma 4.2. There exists a constant $c $ such that, for any $n\geq 1, z\in \mathbb{N}, z\neq 0$ ,
Proof. Since $T_n$ is a function only of the environment $\left( X_k \right)_{k\geq 0}$, conditioning with respect to $\left( X_k \right)_{k\geq 0}$, we have
with $q_{n,z}(0)$ defined by (3.2). Using the bound (3.5) we obtain
By (4.1) and the duality (Lemma 2.9),
Using the local limit theorem for the dual Markov chain (see Proposition 2.4) and following the proof of Lemma 2.5, we obtain
The key point of the proof of Theorem 1.2 is the following statement.
Proposition 4.3. Assume Conditions 1–4. For any $i\in \mathbb X$ , $s\in \mathbb R$ , and $z\in \mathbb N$ , $z\not=0$ ,
With $s=+\infty$ ,
where
is defined in Theorem 1.1.
Proof. Using (4.3), one has
By Lemma 4.2,
We now deal with the term $J_1(n)$ . We shall make use of the notation $P_{n}(i,y,z)$ defined in (4.5). By the Markov property (conditioning with respect to $\mathscr F_k = \sigma \{ X_0,Z_0, \ldots, X_k, Z_k \}$ ), and using (4.2), we obtain
Therefore
For brevity, write
It is easy to see that, for any $l\leq n$,
For $J_{12}(n,l)$ , we have, using (4.5),
Using the point 2 of Proposition 2.2 and Lemma 4.2,
where to bound the second line we split the summation into two parts, for $k > n/2$ and $k\leq n/2$ . Let $\varepsilon >0$ be arbitrary. Then there exists $n_{\varepsilon, z}$ such that, for $n\geq l\geq n_{\varepsilon, z}$ ,
For $J_{11}(n,l)$ , we have
Since l is fixed, taking the limit as $n\to \infty$ , by Lemmata 4.1 and 4.2,
Since $\varepsilon$ is arbitrary, from (4.11), (4.12), and (4.13), taking the limit as $l\to\infty$ , we deduce that
From this and (4.9)–(4.10) we deduce the first assertion of the proposition. The second one is proved in the same way.
Now we proceed to prove Theorem 1.2. Denote by
the joint law of $\frac{Z_n}{e^{S_n}}$ and $X_n=j$ given $Z_n>0$ under $\mathbb P_{i,z}$, where B is any Borel set of $\mathbb R_+$. Set for short
We shall prove that the sequence $(\mu_{n,i,z})_{n\geq1}$ converges in law. For this we use the convergence of the corresponding Laplace transforms:
By Proposition 4.3, we see that, with $t=e^{-s}$ ,
It is obvious that $u(i,z, e^{-s})$ is also the Laplace transform, at the point $t=e^{-s}$, of a measure on $\mathbb R_{+}$ which assigns the mass 0 to the set $\{0\}$, and that $u(i,z)$ is the total mass of this measure. Therefore the ratio $u(i,z,e^{-s})/u(i,z)$
is the Laplace transform of a probability measure $\mu_{i,z}$ on $\mathbb R_{+}$ such that $\mu_{i,z}(\{0\})=0.$
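The passage from the convergence of Laplace transforms to the convergence in law of $(\mu_{n,i,z})_{n\geq1}$ rests on the continuity theorem for Laplace transforms of finite measures on $\mathbb R_{+}$, which we recall for completeness: if
\begin{align*} \int_{0}^{+\infty} e^{-s x}\, \mu_{n,i,z}(dx) \underset{n\to+\infty}{\longrightarrow} \int_{0}^{+\infty} e^{-s x}\, \mu_{i,z}(dx) \qquad \text{for every } s > 0, \end{align*}
and the total masses converge, then $\mu_{n,i,z}$ converges weakly to $\mu_{i,z}$.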
5. Proof of Theorems 1.3 and 1.4
Recall that by the properties of the harmonic function, for any $i\in \mathbb X$ there exists $y_i\geq 0$ such that $(i,y)\in \text{supp}\ V$ for any $y\geq y_i.$
First we prove the following auxiliary statement.
Lemma 5.1. Assume Conditions 1–4. Let $i\in \mathbb X$ and $z\in \mathbb N$ , $z\not = 0$ . For any $\theta \in (0,1)$ and $y\geq 0$ large enough such that $(i,y)\in \text{supp}\ V$ ,
Proof. Let $m, n\geq 1$ be such that $[\theta n] >m$ . Then
Let $P_n(i,y)=\mathbb P_{i}(\tau_y>n).$ By the Markov property
Using the point 2 of Proposition 2.2 and the point 3 of Proposition 2.1, on the set $\{ \tau_y>[\theta n] \}$ ,
Therefore
Using the bound (3.1) and again the point 2 of Proposition 2.2,
If $(i,y)\not\in \text{supp}\ V$ ,
If $(i,y)\in \text{supp}\ V$ , changing the measure by (2.14), we have
Taking the limit as $n\to\infty$ and then as $m\to\infty$ , by Lemma 3.1,
Taking into account the previous bounds we obtain the assertion of the lemma.
Proof of Theorem 1.3. Let $i,j\in \mathbb X$ , $y\geq 0$ , $z\in \mathbb N$ , $z\not=0$ , $t\in \mathbb R$ . Then, for any $n\geq 1$ and $y \geq 0,$
By Lemma 3.3,
In the sequel we study $I_{1}(n,y)$ . Let $\theta \in (0,1)$ be arbitrary. We decompose $I_{1}(n,y)$ into two parts:
In the following lemma we prove that the second term $\sqrt{n}I_{12}(n,\theta,y)$ vanishes as $n\to \infty.$
Lemma 5.2. Assume Conditions 1–4. For any $i,j\in \mathbb X$ , $z\in \mathbb N$ , $z\not= 0$ , and $y\geq 0$ sufficiently large,
Proof. Obviously,
As in (3.18), choosing $y\geq 0$ such that $(i,y) \in \text{supp}\ V$ , we have
We shall prove that for any $\theta \in (0,1)$ ,
For any $m\geq 1$ and n such that $[\theta n] >m$ ,
By Lemma 2.6,
Taking the limit as $m\to\infty$ , by (3.17), we have
By Lemma 5.1,
which together with (5.8) and (5.9) proves (5.7). From (5.5), (5.6), and (5.7) we obtain the assertion of the lemma.
To handle the term $I_{11}(n,\theta,y)$ we choose any m satisfying $1\leq m \leq [\theta n]$ and split it into two parts:
By Lemma 5.1, we have
The following lemma gives a limit for $\sqrt{n}I_{111}(n,m,y)$ as $n\to\infty$ and $m\to\infty$ .
Lemma 5.3. Assume Conditions 1–4. Suppose that $i,j\in \mathbb X$ , $z\in \mathbb N$ , $z\not = 0$ , and $t\in \mathbb R.$ Then, for any $y\geq 0$ sufficiently large,
Proof. Without loss of generality we can assume that $n\geq2m.$ Let $y \geq 0$ be so large that $(i,y)\in \text{supp}\ V.$ Set
Then $I_{111}(n,m,y)$ can be rewritten as
If $t<0$ , we have $t_{n,y}<0$ for n large enough, and the assertion of the lemma becomes obvious since $I_{111}(n,m,y)= 0 =\Phi^{+} (t)$ . Therefore, it is enough to assume that $t \geq 0$ . To find the asymptotic of $I_{111}(n,m,y)$ we introduce the following notation: for $y',t'\in \mathbb R$ and $1\leq k=n-m\leq n$ ,
Set
By the Markov property,
Since $t_{n,m,y} \leq \sqrt{2} (t +y/\sigma)$ , by Proposition 2.3, there exists an $\varepsilon>0$ such that for any $(i,y')\in \mathbb X\times \mathbb R$ ,
Therefore, using (5.13),
This implies that
Note that with m and $y\geq 0$ fixed, we have $t_{n,m,y} \to t$ as $n\to\infty$ . Therefore
When $(i,y) \in \text{supp}\ V$ , the change of measure (2.13) gives
Since $\mathbb P_{i,y,z}^{+} (Z_m>0) = \mathbb E_{i,y}^{+} q_{m,z}(0)$ , using Lemma 2.7,
which, together with (5.14), (5.15), and (5.16), gives
This proves the assertion of the lemma for $t\geq 0$ .
We now perform the final assembly. From (5.1), (5.2), (5.3), and (5.4), we have, for any $i,j\in \mathbb X$ , $z\in \mathbb N$ , $z\not= 0$ , $t\in \mathbb R$ , and y sufficiently large,
From (5.10), (5.11), and (5.12) we obtain
From (5.17) and (5.18), taking consecutively the limits as $n\to \infty$ and $m\to \infty$ , we obtain, for any $i,j \in \mathbb{X}$ , $z \in \mathbb{N}$ , $z\not = 0$ , $t \in \mathbb R$ , and $y\geq 0$ sufficiently large so that $(i,y)\in \text{supp}\ V$ ,
Taking the limit as $y\to \infty$ , we obtain, for any $i,j \in \mathbb{X}$ , $z \in \mathbb{N}$ , $z\not = 0$ , and $t \in \mathbb R$ ,
where $u(i,z)>0$ is defined in Proposition 3.4. This proves the first assertion of the theorem. The second assertion is obtained from the first one by using Theorem 1.1.
Proof of Theorem 1.4. The assertion of Theorem 1.4 is easily obtained from Theorem 1.3. Let $\varepsilon>0$ be arbitrary. By simple computations,
Fix some $A>1$ . Then, for n sufficiently large so that $e^{\varepsilon \sqrt{n}}>A$ ,
where, by Theorem 1.2,
and
Since $\mu_{i,z}$ is a probability measure which assigns the mass 0 to the set $\{0\}$, taking the limit as $A\to +\infty$, we have that
As $\varepsilon$ is arbitrary, using Theorem 1.3 we conclude the proof.
Acknowledgements
We wish to thank the editor and two referees for their remarks and comments, which helped to improve the presentation of the paper. Two of the authors wish to pay tribute to Emile Le Page, who was the initiator of this article and who, to our great regret, passed away this year.
Funding Information
There are no funding bodies to thank in relation to the creation of this article.
Competing Interests
There are no competing interests to declare which arose during the preparation or publication process for this article.