
Branching processes in a random environment with immigration stopped at zero

Published online by Cambridge University Press:  04 May 2020

Elena Dyakonova*
Affiliation:
Steklov Mathematical Institute
Doudou Li*
Affiliation:
Beijing Normal University
Vladimir Vatutin*
Affiliation:
Steklov Mathematical Institute and Beijing Normal University
Mei Zhang*
Affiliation:
Beijing Normal University
Postal address: Steklov Mathematical Institute, 8 Gubkin St., Moscow, 119991, Russia.
Postal address: School of Mathematical Sciences & Laboratory of Mathematics and Complex Systems, Beijing Normal University, Beijing 100875, P.R. China.
Email address: elena@mi-ras.ru

Abstract

A critical branching process with immigration which evolves in a random environment is considered. Assuming that immigration is not allowed when there are no individuals in the population, we investigate the tail distribution of the so-called life period of the process, i.e. the length of the time interval between the moment when the process is initiated by a positive number of particles and the moment when there are no individuals in the population for the first time.

Type
Research Papers
Copyright
© Applied Probability Trust 2020

1. Introduction and statement of main results

One of the popular models of branching processes is the model of Galton–Watson branching processes. Different versions of such processes describe the evolution of populations of particles and have found various applications in physics, demography, biology, and other fields of science.

One of the most important and interesting questions, from both the theoretical and the practical point of view, for such processes is the distribution of the so-called life periods of a branching process, defined as the length of the time interval from the moment when the first invader (or invaders) arrives at an empty site until the moment when the site becomes empty again (see, for instance, [Reference Badalbaev and Mashrabbaev4, Reference Mitov14, Reference Vatutin20, Reference Zubkov22]). Information on the length of such periods may be used, for example, in epidemiology, ecology, and seismology. In the context of epidemics, such periods correspond to the duration of outbreaks of diseases that do not lead to full epidemics, and to the period of occupancy of sites in metapopulations [Reference Haccou, Jagers and Vatutin7]. They may be used to analyse cell proliferation [Reference Kimmel and Axelrod13] or plasmid incompatibility [Reference Novick and Hoppenstead15, Reference Seneta and Tavare18], the waiting time for the end of earthquake aftershocks [Reference Kagan8], and other models of a similar nature.

In this paper we consider branching processes allowing immigration and evolving in a random environment. In such a process individuals reproduce independently of each other according to random offspring distributions which vary from one generation to another. In addition, immigrants arrive in each generation independently of the development of the population, according to laws varying at random from generation to generation. To give a formal definition, let $\Delta $ be the space of all pairs of probability measures on $\mathbb{N}_{0}=\{0,1,2,\ldots \}$. Equipped with the componentwise metric of total variation, $\Delta $ becomes a Polish space. Let $\textbf{Q}=\{F,G\}$ be a random vector with independent components taking values in $\Delta $, and let $\textbf{Q}_{n}=\{F_{n},G_{n}\}$, $n=1,2,\ldots $, be a sequence of independent copies of $\textbf{Q}$. The infinite sequence $\mathcal{E}=\left\{ \textbf{Q}_{1},\textbf{Q}_{2},\ldots\right\} $ is called a random environment. For branching processes in a random environment we refer to the conditional probability measure $\textrm{P}(\cdot \mid \mathcal{E})$ as the quenched probability, and to $\textrm{P}(\cdot)\coloneqq \textrm{E}[\textrm{P}(\cdot \mid \mathcal{E})]$ as the annealed probability.

A sequence of $\mathbb{N}_{0}$-valued random variables $\textbf{Y}=\left\{Y_{n},\ n\in \mathbb{N}_{0}\right\} $ specified on the respective probability space $(\Omega ,\mathcal{F},\textrm{P})$ is called a branching process with immigration in the random environment (BPIRE) if $Y_{0}$ is independent of $\mathcal{E}$ and, given $\mathcal{E}$, the process $\textbf{Y}$ is a Markov chain with

\begin{equation*}\mathcal{L}( Y_{n} \mid Y_{n-1}=y_{n-1}, \mathcal{E}=(\textbf{q}_{1},\textbf{q}_{2},\ldots)) =\mathcal{L}(\xi _{n1}+\cdots +\xi _{ny_{n-1}}+\eta _{n})\end{equation*}

for every $n\in \mathbb{N}\coloneqq \mathbb{N}_{0}\backslash \left\{ 0\right\} $, $y_{n-1}\in \mathbb{N}_{0}$, and $\textbf{q}_{1}=\left( f_{1},g_{1}\right) ,\textbf{q}_{2}=\left( f_{2},g_{2}\right) ,\ldots\in \Delta $, where $\xi_{n1},\xi _{n2},\ldots $ are independent and identically distributed (i.i.d.) random variables with distribution $f_{n}$, independent of the random variable $\eta _{n}$ with distribution $g_{n}$. In the language of branching processes, $Y_{n-1}$ is the $(n-1)$th-generation size of the population, $f_{n}$ is the distribution of the number of children of an individual at generation $n-1$, and $g_{n}$ is the law of the number of immigrants at generation $n$.

Along with the process $\textbf{Y}$ we consider a branching process $\textbf{Z}=\left\{ Z_{n},\ n\in \mathbb{N}_{0}\right\} $ in the random environment $\mathcal{E}_{1}=\left\{ F_{1},F_{2},\ldots\right\} $ which, given $\mathcal{E}_{1}$, is a Markov chain with $Z_{0}=1$ and, for $n\in \mathbb{N}$,

\begin{equation*}\mathcal{L}( Z_{n} \mid Z_{n-1}=z_{n-1},\mathcal{E}_{1}=(\,f_{1},f_{2},\ldots)) =\mathcal{L}(\xi _{n1}+\cdots +\xi _{nz_{n-1}}). \end{equation*}

It will be convenient to assume that if $Y_{n-1}=y_{n-1}>0$ is the population size of the $(n-1)$th generation of $\textbf{Y}$, then first $\xi_{n1}+\cdots +\xi _{ny_{n-1}}$ individuals of the $n$th generation are born and then $\eta _{n}$ immigrants enter the population.

This agreement allows us to consider a modified version $\textbf{W}=\left\{W_{n},\ n\in \mathbb{N}_{0}\right\} $ of the process $\textbf{Y}$ specified as follows. Assume, without loss of generality, that $Y_{0}>0$. Let $W_{0}=Y_{0}$ and, for $n\geq 1$,

(1)\begin{equation}W_{n}\coloneqq \left\{\begin{array}{l@{\quad}l}0 & \text{if }T_{n}\coloneqq \xi _{n1}+\cdots +\xi _{nW_{n-1}}=0, \\[4pt] T_{n}+\eta _{n} & \text{if }T_{n}>0.\end{array}\right. \end{equation}

We call $\textbf{W}$ a branching process with immigration stopped at zero and evolving in the random environment.
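For readers who prefer a computational view, the following minimal Python sketch simulates the process defined by (1). The particular environment law used here (offspring means $\textrm{e}^{X_{n}}$ with $X_{n}=\pm \log 2$ equally likely, as allowed by Hypothesis A1, and one or two immigrants per generation, so that $G(s)=(s+s^{2})/2\leq s$) is our illustrative assumption, not part of the model.

```python
import math
import random

def geometric(p, rng):
    """Sample k with P(k) = (1 - p) * p**k for k = 0, 1, 2, ... (mean p/(1 - p))."""
    k = 0
    while rng.random() < p:
        k += 1
    return k

def life_period(y0, n_max, rng):
    """Run the stopped-at-zero process W of (1) started from W_0 = y0 and
    return zeta = min{n >= 1 : W_n = 0}, or None if W_n > 0 up to n_max.

    Illustrative environment: X_n = log m(F_n) equals +log 2 or -log 2 with
    probability 1/2 each, so each F_n is geometric as in Hypothesis A1, and
    eta_n is 1 or 2 immigrants with probability 1/2 each.
    """
    w = y0
    for n in range(1, n_max + 1):
        m = math.exp(rng.choice([-1.0, 1.0]) * math.log(2.0))  # offspring mean e^{X_n}
        p = m / (1.0 + m)                                      # geometric parameter: mean p/q = m
        t = sum(geometric(p, rng) for _ in range(w))           # T_n in (1)
        if t == 0:
            return n          # the process hits zero; no immigration is allowed
        w = t + rng.choice([1, 2])                             # survivors plus immigrants
    return None

rng = random.Random(1)
print([life_period(1, 10_000, rng) for _ in range(20)])
```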

The aim of the present paper is to study the tail distribution of the random variable

\begin{equation*}\zeta \coloneqq \min \left\{ n\geq 1\,:\,W_{n}=0\right\}\end{equation*}

under the annealed approach. To formulate our main result we consider the so-called associated random walk $\textbf{S}=\left( S_{0},S_{1},\ldots\right) $. This random walk has initial state $S_{0}$ and increments $X_{n}=S_{n}-S_{n-1}$, $n\geq 1$, defined as

\begin{equation*}X_{n}\coloneqq \log \mathfrak{m}( F_{n}) ,\end{equation*}

which are i.i.d. copies of the logarithmic mean offspring number $X\coloneqq \log \mathfrak{m}(F)$ with

\begin{equation*}\mathfrak{m}(F)\coloneqq \sum_{j=0}^{\infty }jF\left( \left\{\,j\right\} \right) .\end{equation*}

We suppose that X is almost surely (a.s.) finite.

With each pair of measures (F, G) we associate the respective probability generating functions

\begin{equation*}F(s)\coloneqq \sum_{j=0}^{\infty }F\left( \left\{\, j\right\} \right) s^{\,j},\qquad G(s)\coloneqq \sum_{j=0}^{\infty }G\left( \left\{\,j\right\} \right) s^{\,j}.\end{equation*}

We impose the following restrictions on the distributions of F and G.

Hypothesis A1. The probability generating function F(s) is geometric with probability 1, that is,

\begin{equation*}F(s)=\frac{q}{1-ps}=\frac{1}{1+\mathfrak{m}(F)(1-s)}\end{equation*}

with random $p,q\in (0,1)$ satisfying $p+q=1$ and

\begin{equation*}\mathfrak{m}(F)=\frac{p}{q}=\textrm{e}^{\log (p/q)}=\textrm{e}^{X}.\end{equation*}

Hypothesis A2. There exist real numbers $\kappa \in \lbrack 0,1)$ and $\gamma ,\sigma \in (0,1]$ such that, with probability 1,

  (i) the inequality $F(0)\geq \kappa $ is valid;

  (ii) the estimate $G(s)\leq s^{\gamma }$ holds for all $s\in \lbrack \kappa ^{\sigma },1]$.

To formulate one more assumption, we introduce the right-continuous function $U\,:\,\mathbb{R} \rightarrow [0,\infty )$ specified by the relation

\begin{equation*}U(x)\coloneqq \textbf{1}\{ x\geq 0\}+\sum\limits_{i=1}^{\infty}\textrm{P}(S_{\gamma_{i}}\geq -x),\qquad x\in \mathbb{R} , \end{equation*}

where $0=\gamma_{0}<\gamma_{1}<\cdots $ are the strict descending ladder epochs of S (given $S_{0}=0$),

\begin{equation*}\gamma_{i}\coloneqq \min \{n>\gamma_{i-1}\,:\,S_{n}<S_{\gamma_{i-1}}\},\end{equation*}

and $\textbf{1}(A)$ is the indicator of the event A. Then U(x) is a renewal function with increments distributed as $-S_{\gamma_{1}}$. One may check (see, for instance, formula (1.5) in [Reference Afanasyev, Geiger, Kersting and Vatutin3] and formula (2.6) in [Reference Afanasyev, Boeinghoff, Kersting and Vatutin2]) that, for any oscillating random walk,

(2)\begin{equation}\textrm{E}\left[ U(x+X);\,X+x\geq 0\right] =U(x),\qquad x\geq 0. \end{equation}
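As a quick illustration of (2) (ours, not from the sources cited above): for the simple symmetric random walk with $X=\pm 1$ equally likely, each strict descending ladder height equals $-1$, so $S_{\gamma_{i}}=-i$ and $U(x)=\lfloor x\rfloor +1$ for $x\geq 0$. For integer $x\geq 1$ the identity then reads

\begin{equation*}\textrm{E}\left[ U(x+X);\,X+x\geq 0\right] =\tfrac{1}{2}U(x+1)+\tfrac{1}{2}U(x-1)=\tfrac{1}{2}(x+2)+\tfrac{1}{2}x=x+1=U(x),\end{equation*}

while for $x=0$ only the upward step contributes and $\tfrac{1}{2}U(1)=1=U(0)$.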

Hypothesis A3. The distribution of X is nonlattice, the sequence $\left\{ S_{n},n\geq 0\right\} $ satisfies the Doney–Spitzer condition

(3)\begin{equation}\lim_{n\rightarrow \infty }\textrm{P}\left( S_{n}>0\right) =:\,\rho \in (0,1),\end{equation}

and there exists $\varepsilon >0$ such that

\begin{equation*}\textrm{E}( \log ^{+}G^{\prime }(1)) ^{\rho ^{-1}+\varepsilon}<\infty \quad \text{ and } \quad \textrm{E}( U(X)\log ^{+}G^{\prime}(1)) ^{1+\varepsilon }<\infty ,\end{equation*}

where $\log ^{+}x=\max ( 0,\log x) $.

We now formulate our main result.

Theorem 1. Let Hypotheses A1–A3 be satisfied. Then there exists a function l(n), slowly varying at infinity, such that

\begin{equation*}\textrm{P}\left( \zeta >n\right) \sim \frac{l(n)}{n^{1-\rho }}\end{equation*}

as $n\rightarrow \infty $.

It is convenient to describe the range of possible values of the parameter $\kappa $ by examples.

Let

\begin{equation*}\mathcal{A}\coloneqq \{0<\alpha <1;\,|\beta |<1\}\cup \{1<\alpha <2;\,|\beta |\leq1\}\cup \{\alpha =1,\beta =0\}\cup \{\alpha =2,\beta =0\}\end{equation*}

be a subset of $\mathbb{R}^{2}$. For $(\alpha ,\beta )\in \mathcal{A}$ and a random variable X we write $X\in \mathcal{D}\left( \alpha ,\beta \right) $ if the distribution of X belongs to the domain of attraction of a stable law with characteristic function

\begin{equation*}\mathcal{G}_{\alpha ,\beta }(t)\coloneqq \exp \left\{-c|t|^{\,\alpha }\left( 1-\textrm{i} \beta \frac{t}{|t|}\tan \frac{\pi \alpha }{2}\right) \right\} ,\qquad c>0, \end{equation*}

where $\alpha$ is the index of stability and $\beta$ is the skewness parameter of the corresponding distribution, and, in addition, $\textrm{E}\left[ X\right] =0$ if this moment exists. If $X_{n}\overset{\textrm{d}}{=}X\in \mathcal{D}\left( \alpha ,\beta \right) $ then the parameter $\rho $ in (3) is given (see, for instance, [Reference Zolotarev21]) by

\begin{equation*}\displaystyle\rho =\left\{\begin{array}{l@{\qquad}l}\frac{1}{2} & \text{if \ }\alpha =1, \\[4pt] \frac{1}{2}+\frac{1}{\pi \alpha }\arctan \left( \beta \tan \frac{\pi \alpha}{2}\right) & \text{otherwise}.\end{array}\right. \end{equation*}

Note that if $\textrm{E}\left[ X\right] =0$ and $\textrm{Var}X\in (0,\infty) $ then the central limit theorem implies that $\rho =1/2$.
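A small Python helper (our illustration) evaluating the displayed formula:

```python
import math

def rho(alpha, beta):
    """Positivity parameter for X in D(alpha, beta), per the formula above;
    (alpha, beta) is assumed to lie in the set A."""
    if alpha == 1.0:
        return 0.5
    return 0.5 + math.atan(beta * math.tan(math.pi * alpha / 2.0)) / (math.pi * alpha)

print(rho(2.0, 0.0))  # 0.5, the finite-variance (central limit theorem) case
print(rho(1.5, 1.0))  # 1/3, a spectrally one-sided case with 1 < alpha < 2
```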

Example 1. If Hypothesis A1 is valid and

\begin{equation*}X=\log \mathfrak{m}(F)=\log (p/q)\in \mathcal{D}\left( \alpha ,\beta \right)\end{equation*}

with $\alpha \in (0,2)$, then

\begin{equation*}\textrm{P}\left( \log (p/q)>x\right) \sim \frac{1}{x^{\alpha }l_{1}(x)}\quad\text{as }x\rightarrow \infty ,\end{equation*}

where $l_{1}(x)$ is a function slowly varying at infinity. Therefore,

\begin{equation*}\textrm{P}\left( \log \frac{q}{1-q}<-x\right) \sim \frac{1}{x^{\alpha}l_{1}(x)}\end{equation*}

as $x\rightarrow \infty $, implying

\begin{equation*}\textrm{P}\left( F(0)=q<\frac{\textrm{e}^{-x}}{1+\textrm{e}^{-x}}\right) \sim \frac{1}{x^{\alpha }l_{1}(x)}.\end{equation*}

As a result, $\textrm{P}\left( F(0)<y\right) >0$ for any $y>0$.

Thus, if $\alpha \in (0,2)$ then point (i) of Hypothesis A2 reduces to the trivial inequality $F(0)\geq \kappa =0$. Moreover, given $\kappa =0$, point (ii) of Hypothesis A2 implies $G\left( 0\right) =0$ which, in turn, leads to the inequality

\begin{equation*}G(s)=\sum_{j=1}^{\infty }G\left( \left\{\,j\right\} \right) s^{\,j}\leq s\end{equation*}

for all $s\in \lbrack 0,1]$. The latter means that at least one immigrant enters $\textbf{W}$ whenever immigration is allowed by (1).

The case $\textrm{E}\left[ X^{2}\right] <\infty $ is less restrictive and allows for $\kappa >0$, i.e. for the absence of immigrants in some generations of $\textbf{W}$ (even when they are allowed).

Example 2. Let

\begin{equation*}F(s)=\left\{\begin{array}{l@{\quad}cl}\frac{1}{1+63\left( 1-s\right) } & \text{with probability} & \quad \frac{1}{2}, \\ & & \\\frac{63}{64-s} & \text{with probability} & \quad \frac{1}{2} ,\end{array}\right.\end{equation*}

and the probability generating function of immigrants be deterministic:

\begin{equation*}G(s)=\frac{2}{3}s^{2}+\frac{1}{3} \quad \text{ with probability 1.}\end{equation*}

Clearly, $\textrm{E}\left[ \log \mathfrak{m}(F)\right] =0$, $\textrm{Var}\left[ \log \mathfrak{m}(F)\right] \in \left( 0,\infty \right) $. It is not difficult to see that

\begin{equation*}F(0)\geq 1/64 \quad \text{and} \quad G(s)\leq s^{1/3}\text{ for all }s\in [8^{-1},1] =[ 64^{-1/2},1] .\end{equation*}

Thus, the conditions of Theorem 1 are fulfilled with $\kappa =1/64$, $\gamma =1/3$, and $\sigma =1/2$.
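A short numerical check of these claims (a sketch of ours; the bounds themselves are elementary):

```python
import math

def G(s):
    """Deterministic immigration generating function of Example 2."""
    return (2.0 / 3.0) * s ** 2 + 1.0 / 3.0

# log m(F) equals +log 63 or -log 63 with probability 1/2 each:
x = math.log(63.0)
print("E[log m(F)]   =", 0.5 * x - 0.5 * x)   # 0.0
print("Var[log m(F)] =", x ** 2)              # (log 63)^2, finite and positive

# F(0) is 1/64 or 63/64, so F(0) >= kappa = 1/64 always; check the bound
# G(s) <= s^{1/3} on [kappa^sigma, 1] = [1/8, 1] over a fine grid:
grid = [1.0 / 8.0 + k * (7.0 / 8.0) / 1000.0 for k in range(1001)]
print(all(G(s) <= s ** (1.0 / 3.0) + 1e-12 for s in grid))   # True
```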

We note that Zubkov [Reference Zubkov22] considered a problem similar to ours for a branching process with immigration $\left\{ Y_{c}(n),n\geq 0\right\} $ evolving in a constant environment. He assumed that $G\left( 0\right) >0$, and investigated the distribution of the so-called life period $\zeta _{c}$ of such a process initiated at time N and defined as

\begin{equation*}Y_{c}(N-1)=0, \quad \min_{N\leq k<N+\zeta _{c}}Y_{c}(k)>0, \quad Y_{c}(N+\zeta _{c})=0.\end{equation*}

The same problem for other models of branching processes with immigration evolving in a constant environment was analysed, for instance, in [Reference Badalbaev and Mashrabbaev4, Reference Mitov14, Reference Seneta and Tavare18, Reference Vatutin20].

Various properties of BPIRE have been investigated by several authors (see, for instance, [Reference Afanasyev1, Reference Kaplan9, Reference Kesten, Kozlov and Spitzer11, Reference Key12, Reference Roitershtein17, Reference Tanny19]). However, asymptotic properties of the life periods of BPIRE have not been considered up to now.

2. Auxiliary statements

Given the environment $\mathcal{E}=\left\{ (F_{n},G_{n}),n\in \mathbb{N}\right\} $, we construct the i.i.d. sequence of pairs of generating functions

\begin{equation*}F_{n}(s)\coloneqq \sum_{j=0}^{\infty }F_{n}\left( \left\{\,j\right\} \right)s^{\,j},\qquad G_{n}(s)\coloneqq \sum_{j=0}^{\infty }G_{n}\left( \left\{\,j\right\}\right) s^{\,j} , \quad s\in \lbrack 0,1],\end{equation*}

and use below the compositions of the generating functions $F_{1},\ldots,F_{n}$ specified for $0\leq i\leq n-1$ by the equalities

\begin{eqnarray*}F_{i,n}(s) & \,\coloneqq\, & F_{i+1}(F_{i+2}(\ldots (F_{n}(s))\ldots )), \\[3pt] F_{n,i}(s) & \coloneqq & F_{n}(F_{n-1}(\ldots (F_{i+1}(s))\ldots )), \\[3pt] F_{n,n}(s) & \coloneqq & s.\end{eqnarray*}

The evolution of the BPIRE defined by (1) may now be described for $n\geq 1$ by the relation

(4)\begin{align}\textrm{E}[s^{W_{n}} \mid \mathcal{E},W_{n-1}] &=(F_{n}(0))^{W_{n-1}}+((F_{n}(s))^{W_{n-1}}-(F_{n}(0))^{W_{n-1}}) G_{n}(s) \notag \\[4pt] &=(F_{n}(0))^{W_{n-1}}(1-G_{n}(s))+(F_{n}(s))^{W_{n-1}}G_{n}(s) .\end{align}
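Relation (4) is easy to test by simulation; the sketch below fixes a single one-step environment (our arbitrary choice: geometric offspring with mean 2 and immigration law $G(s)=(s+s^{2})/2$) and compares a Monte Carlo estimate of $\textrm{E}[s^{W_{n}}\mid W_{n-1}]$ with the right-hand side of (4):

```python
import random

rng = random.Random(7)
m = 2.0                        # offspring mean e^{X_n} (arbitrary fixed choice)
p = m / (1.0 + m)              # geometric parameter

def F(s):                      # offspring generating function (Hypothesis A1)
    return 1.0 / (1.0 + m * (1.0 - s))

def G(s):                      # immigration generating function (our choice)
    return 0.5 * (s + s * s)

def step(w):
    """One transition of W given W_{n-1} = w, per (1)."""
    t = 0
    for _ in range(w):
        while rng.random() < p:    # geometric offspring count per individual
            t += 1
    return 0 if t == 0 else t + rng.choice([1, 2])

w_prev, s, n_samples = 3, 0.6, 200_000
mc = sum(s ** step(w_prev) for _ in range(n_samples)) / n_samples
exact = F(0.0) ** w_prev * (1.0 - G(s)) + F(s) ** w_prev * G(s)
print(mc, exact)   # the two values agree to Monte Carlo accuracy
```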

To keep a unified notation we assume that $W_{0}=Y_{0}>0$ has the (random) probability generating function

\begin{equation*}N(0;\,s)\coloneqq \frac{G_{0}(s)-G_{0}(0)}{1-G_{0}(0)} ,\end{equation*}

where $G_{0}(s)\overset{\textrm{d}}{=}G(s)$. Other classes of the initial distribution may be considered in a similar way.

Setting

\begin{equation*}N(n;\,s)\coloneqq \textrm{E}[s^{W_{n}} \mid \mathcal{E}],\qquad n\geq 1\end{equation*}

we have, by (4),

(5)\begin{align}N(n;\,s) &=\textrm{E}\left[(F_{n}(0))^{W_{n-1}}(1-G_{n}(s))+(F_{n}(s))^{W_{n-1}}G_{n}(s) \mid \mathcal{E}\right] \notag \\[4pt] &=N(n-1;\,F_{n}(0))(1-G_{n}(s))+N(n-1;\,F_{n}(s))G_{n}(s)\\[4pt] &= N(n-1;\,F_{n}(0))(1-G_{n}(s))+N(n-2;\,F_{n-1}(0))(1-G_{n-1}(F_{n}(s)))G_{n}(s)\notag \\[4pt] &\quad + \ N(n-2;\,F_{n-1}(F_{n}(s)))G_{n-1}(F_{n}(s))G_{n}(s),\nonumber \end{align}

where for $n=1$ one should take into account only the first two equalities. Adopting the convention $\prod_{j=n+1}^{n}G_{j}(F_{j,n}(s))=1$, we obtain, by induction,

\begin{align*}N(n;\,s) = &\sum_{k=0}^{n-1}N(n-k-1;\,F_{n-k}(0))(1-G_{n-k}(F_{n-k,n}(s)))\prod_{j=n-k+1}^{n}G_{j}(F_{j,n}(s)) \\ &+ \ N(0;\,F_{0,n}(s))\prod_{j=1}^{n}G_{j}(F_{j,n}(s)).\end{align*}

Note that, according to (5),

\begin{equation*}N(n;\,0)=N(n-1;\,F_{n}(0)),\qquad n\geq 1.\end{equation*}

Also,

\begin{equation*}\textrm{E}N(n;\,0)=\textrm{P}\left( W_{n}=0\right) =\textrm{P}\left( \zeta\leq n\right) .\end{equation*}

Hence, setting $s=F_{n+1}(0)$, taking the expectation with respect to the environment, and using the independence of the elements of the environment, we get

(6)\begin{align}\textrm{E}\left[ N(n+1;\,0)\right] &=\sum_{k=0}^{n-1}\textrm{E}\left[ N(n-k;\,0)\right] \textrm{E}\Big[ ( 1-G_{n-k}(F_{n-k,n+1}(0)))\prod_{j=n-k+1}^{n}G_{j}(F_{j,n+1}(0))\Big] \notag \\ &\quad +\textrm{E}\Big[ N(0;\,F_{0,n+1}(0))\prod_{j=1}^{n}G_{j}(F_{j,n+1}(0))\Big] . \end{align}

Denoting, for $n\geq 0$,

\begin{align*}R_{n} & \coloneqq 1-\textrm{E}\left[ N(n;\,0)\right] =\textrm{E}\left[ 1-N(n;\,0)\right]=\textrm{P}\left( \zeta >n\right) , \\[2pt] H_{n}^{\ast } & \coloneqq \textrm{E}\bigg[ \frac{1-G_{0}(F_{0,n+1}(0))}{1-G_{0}(0)}\prod_{i=1}^{n}G_{i}(F_{i,n+1}(0))\bigg] , \\[2pt] d_{n} & \coloneqq \textrm{E}\bigg[ \prod_{i=1}^{n}G_{i}( F_{i,n+1}(0)) \bigg] =\textrm{E}\bigg[ \prod_{i=1}^{n}G_{i}( F_{i,0}(0)) \bigg] ,\end{align*}

observing that

\begin{align*}H_{n} &\coloneqq \textrm{E}\bigg[ (1-G_{0}(F_{0,n+1}(0)))\prod_{i=1}^{n}G_{i}(F_{i,n+1}(0)) \bigg] \\ &=\textrm{E}\bigg[ \prod_{i=1}^{n}G_{i}( F_{i,n+1}(0)) \bigg] -\textrm{E}\bigg[ \prod_{i=1}^{n+1}G_{i}( F_{i,n+2}(0)) \bigg]=d_{n}-d_{n+1} \end{align*}

and using the equality

\begin{align*} &\textrm{E}\bigg[ ( 1-G_{n-k}(F_{n-k,n+1}(0)))\prod_{j=n-k+1}^{n}G_{j}(F_{j,n+1}(0))\bigg] \\ &\quad = \textrm{E}\bigg[ (1-G_{0}(F_{0,k+1}(0)) )\prod_{j=1}^{k}G_{j}(F_{j,k+1}(0))\bigg] ,\end{align*}

we rewrite (6) as a renewal-type equation,

(7)\begin{equation}R_{n+1}=\sum_{k=0}^{n-1}H_{k}R_{n-k}+H_{n}^{\ast },\qquad n\geq 0.\end{equation}
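Given the coefficient sequences $H_{k}$ and $H_{k}^{\ast }$, equation (7) determines $R_{2},R_{3},\ldots $ from $R_{1}$ by a simple recursion. A minimal sketch follows (the coefficient values below are placeholders, since in the present setting $H_{k}$ and $H_{k}^{\ast }$ are expectations over the random environment):

```python
def tail_probabilities(H, H_star, R1, n_max):
    """Iterate the renewal-type equation (7):
    R_{n+1} = sum_{k=0}^{n-1} H_k R_{n-k} + H*_n for n >= 1.
    Returns [R_1, ..., R_{n_max}].
    """
    R = [None, R1]                       # index 0 unused
    for n in range(1, n_max):
        R.append(sum(H[k] * R[n - k] for k in range(n)) + H_star[n])
    return R[1:]

# Placeholder coefficients, not derived from any environment law:
H = [0.5 * 0.5 ** k for k in range(64)]
H_star = [0.3 * 0.7 ** k for k in range(64)]
print(tail_probabilities(H, H_star, R1=1.0, n_max=10))
```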

Let

\begin{equation*}\mathcal{R}(s)\coloneqq \sum_{n=1}^{\infty }R_{n}s^{n}.\end{equation*}

Lemma 1.

\begin{equation*}\mathcal{R}(s)=\frac{s\mathcal{H}^{\ast }(s)+sR_{1}}{\left( 1-s\right)D\left( s\right) } ,\end{equation*}

where

\begin{equation*}D\left( s\right) \coloneqq \sum_{n=0}^{\infty }d_{n}s^{n}\quad \text{and} \quad \mathcal{H}^{\ast }(s)\coloneqq \sum_{n=1}^{\infty }H_{n}^{\ast }s^{n}.\end{equation*}

Proof. Set

\begin{equation*}\mathcal{H}(s)\coloneqq \sum_{n=0}^{\infty }H_{n}s^{n}.\end{equation*}

Clearly, since $d_{0}=1$,

\begin{equation*}s\mathcal{H}(s)=\sum_{n=0}^{\infty }(d_{n}-d_{n+1})s^{n+1}=sD\left( s\right)-D(s)+1.\end{equation*}

Multiplying (7) by $s^{n+1}$ and summing over n from 1 to $\infty $ we get

\begin{equation*}\mathcal{R}(s)-sR_{1}=s\mathcal{H}(s)\mathcal{R}(s)+s\mathcal{H}^{\ast }(s) ,\end{equation*}

or

\begin{equation*}\mathcal{R}(s)=\frac{s\mathcal{H}^{\ast }(s)+sR_{1}}{1-s\mathcal{H}(s)}=\frac{s\left( \mathcal{H}^{\ast }(s)+R_{1}\right) }{\left( 1-s\right)D\left( s\right) }.\end{equation*}

The lemma is proved.

Denote, for $0\leq i\leq n$,

\begin{equation*}A_{n}\coloneqq \textrm{e}^{S_{n}},\qquad B_{i,n}\coloneqq \sum_{k=i}^{n}\textrm{e}^{S_{k}},\qquad B_{n}\coloneqq B_{0,n},\end{equation*}

and introduce the function

\begin{equation*}C_{n}(s)\coloneqq \prod_{i=1}^{n}F_{i,0}(s).\end{equation*}

Lemma 2. Under Hypothesis A1,

\begin{equation*}C_{n}\coloneqq C_{n}(0)=\frac{1}{B_{n}}.\end{equation*}

Proof. Hypothesis A1 implies that

\begin{equation*} F_{i}(s)=\frac{q_{i}}{1-p_{i}s}=\frac{1}{1+\textrm{e}^{X_{i}}\left( 1-s\right) }\end{equation*}

for all $i=1,2,\ldots$. Using these equalities, it is not difficult to check by induction that, for $n\geq 1$,

\begin{equation*}F_{n,0}(s)=1-\frac{A_{n}}{\left( 1-s\right) ^{-1}+B_{1,n}}=\frac{\left(1-s\right) ^{-1}+B_{1,n-1}}{\left( 1-s\right) ^{-1}+B_{1,n}},\end{equation*}

where $B_{1,0}=0$ by definition. Therefore,

(8)\begin{equation}C_{n}(s)=\prod_{i=1}^{n}\frac{\left( 1-s\right) ^{-1}+B_{1,i-1}}{\left(1-s\right) ^{-1}+B_{1,i}}=\frac{\left( 1-s\right) ^{-1}}{\left( 1-s\right)^{-1}+B_{1,n}}. \end{equation}

Setting $s=0$ in (8), we prove the lemma.
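Lemma 2 is straightforward to confirm numerically. In the sketch below (our illustration) the increments $X_{i}$ are drawn from an arbitrary continuous law; the product of the iterated geometric generating functions evaluated at $0$ matches $1/B_{n}$ to rounding error:

```python
import math
import random

rng = random.Random(42)
n = 12
X = [rng.uniform(-1.0, 1.0) for _ in range(n)]   # arbitrary assumed increment law
S = [0.0]
for x in X:
    S.append(S[-1] + x)                           # associated random walk

def F(i, s):
    """Geometric generating function of Hypothesis A1 with mean e^{X_i}."""
    return 1.0 / (1.0 + math.exp(X[i - 1]) * (1.0 - s))

def F_i0(i, s=0.0):
    """F_{i,0}(s) = F_i(F_{i-1}(... F_1(s) ...))."""
    for j in range(1, i + 1):
        s = F(j, s)
    return s

C_n = math.prod(F_i0(i) for i in range(1, n + 1))
B_n = sum(math.exp(S[k]) for k in range(n + 1))
print(C_n, 1.0 / B_n)   # the two values coincide up to floating-point error
```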

To go further we need more notation. Denote

\begin{equation*}L_{n}\coloneqq \min \left(S_{0},S_{1},\ldots,S_{n}\right).\end{equation*}

Let, as before, $\mathcal{E}=\left\{ \textbf{Q}_{1},\textbf{Q}_{2},\ldots\right\} $ be a random environment and let $\mathcal{F}_{n}$, $n\geq 1$, be the $\sigma $-field of events generated by the random pairs $\textbf{Q}_{1}=\{F_{1},G_{1}\},\textbf{Q}_{2}=\{F_{2},G_{2}\},\ldots,\textbf{Q}_{n}=\{F_{n},G_{n}\}$ and the sequence $W_{0},W_{1},\ldots,W_{n}$. These $\sigma $-fields form a filtration $\mathfrak{F}$. Now the increments $\left\{ X_{n},n\geq 1\right\} $ of the random walk S are measurable with respect to the $\sigma $-field $\mathcal{F}_{n}$. Using the property (2) of U we introduce a sequence of probability measures $\{ \textrm{P}_{(n)}^{+},n\geq 1\} $ on the $\sigma $-field $\mathcal{F}_{n}$ by means of the density

\begin{equation*}\textrm{d}\textrm{P}_{(n)}^{+}\coloneqq U(S_{n})\textbf{1}\{ L_{n}\geq 0\} \textrm{d}\textrm{P}.\end{equation*}

This and Kolmogorov’s extension theorem show that, on a suitable probability space, there exists a probability measure $\textrm{P}^{+}$ on the $\sigma $-field $\mathfrak{F}$ (see [Reference Afanasyev, Geiger, Kersting and Vatutin3] and [Reference Afanasyev, Boeinghoff, Kersting and Vatutin2] for more detail) such that

\begin{equation*}\textrm{P}^{+} \mid \mathcal{F}_{n}=\textrm{P}_{(n)}^{+},\qquad n\geq 1.\end{equation*}

Under the measure $\textrm{P}^{+}$, the sequence $\{S_{n},n\geq0\}$ is a Markov chain with state space $[0,\infty)$ and transition probability

\begin{equation*}\textrm{P}^{+}(x;\,\textrm{d} y)=\frac{1}{U(x)}\textrm{P}(x;\,\textrm{d} y)U(y)\textbf{1}\{ y\geq 0\}.\end{equation*}

This change of measure is the well-known Doob h-transform from the theory of Markov processes. Thus, under $\textrm{P}^{+}$, the random walk $S_{n}$, $n\geq 0$, stays nonnegative; see, for example, [Reference Kersting and Vatutin10].
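To make the transform concrete (our illustration): for the simple symmetric walk with $U(x)=x+1$ on integers $x\geq 0$, the chain under $\textrm{P}^{+}$ steps up with probability $\frac{x+2}{2(x+1)}$ and down with probability $\frac{x}{2(x+1)}$. These probabilities sum to $1$ precisely because of (2), the downward probability vanishes at $x=0$, and the upward drift equals $1/(x+1)$, so the transformed walk stays nonnegative while remaining diffusive on large scales.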

We now formulate two known statements dealing with conditioning on the event $\left\{L_{n}\geq 0\right\} $.

Lemma 3 (See [Reference Afanasyev, Geiger, Kersting and Vatutin3, Lemma 2.5] or [Reference Kersting and Vatutin10, Lemma 5.2].). Let the condition (3) hold and let $\xi _{1},\xi _{2},\ldots $ be a sequence of uniformly bounded random variables adapted to the filtration $\mathfrak{F}$ such that the limit

\begin{equation*}\xi _{\infty }\coloneqq \lim_{n\rightarrow \infty }\xi _{n}\end{equation*}

exists $\textrm{P}^{+}$-a.s. Then

\begin{equation*}\lim_{n\rightarrow \infty }\textrm{E}[\xi _{n}\,|\,L_{n}\geq 0]=\textrm{E}^{+}\left[ \xi _{\infty }\right] .\end{equation*}

Let

\begin{equation*}\tau (n)\coloneqq \min \left\{ i\geq 0\,:\,S_{i}=L_{n}\right\} .\end{equation*}

Lemma 4 (See [Reference Afanasyev, Geiger, Kersting and Vatutin3, Lemma 2.2].). Let u(x), $x\geq 0$, be a nonnegative, nonincreasing function with $\int_{0}^{\infty }u(x)\textrm{d} x<\infty $. If the condition (3) holds then, for every $\varepsilon >0$, there exists a positive number $m=m(\varepsilon )$ such that, for all $n\geq m$,

\begin{equation*}\sum_{k=m}^{n}\textrm{E}\left[ u(-S_{k});\,\tau (k)=k\right] \textrm{P}\left(L_{n-k}\geq 0\right) \leq \varepsilon \textrm{P}\left( L_{n}\geq 0\right) .\end{equation*}

3. Proof of the main result

It is known (see, for instance, [Reference Rogozin16] or [Reference Bingham, Goldie and Teugels5, Theorem 8.9.12]) that if Hypothesis A3 is valid then there exists a slowly varying function $l_{2}(n)$ such that

\begin{equation*}\textrm{P}\left( L_{n}\geq 0\right) \sim \frac{l_{2}(n)}{n^{1-\rho }},\qquad n\rightarrow \infty . \end{equation*}

We now prove an important statement describing the asymptotic behaviour of $d_{n}$ as $n\rightarrow \infty $. To this end, we introduce the reflected random walk

\begin{equation*}\tilde{S}_{0}=0,\qquad \tilde{S}_{k}=\tilde{X}_{1}+ \cdots +\tilde{X}_{k},\quad k\geq 1,\end{equation*}

where $\tilde{X}_{k}=-X_{k}$; in what follows we mark the corresponding variables and measures with a tilde.

Note that $\tilde{X}_{k}\in \mathcal{D}\left( \alpha ,-\beta \right) $ whenever $X_{k}\in \mathcal{D}\left( \alpha ,\beta \right) $, and

\begin{equation*}\lim_{n\rightarrow \infty }\textrm{P}( \tilde{S}_{n}>0)=\lim_{n\rightarrow \infty }\textrm{P}\left( S_{n}<0\right) =1-\rho .\end{equation*}

Hence it follows that

(9)\begin{equation}\textrm{P}( \tilde{L}_{n}\geq 0) \sim \frac{l_{3}(n)}{n^{\rho }},\qquad n\rightarrow \infty , \end{equation}

for a slowly varying function $l_{3}(n)$.

Lemma 5. If Hypotheses A1–A3 are satisfied then there exists a constant $\theta >0$ such that

\begin{equation*}d_{n}\sim \theta \textrm{P}( \tilde{L}_{n}\geq 0) \sim \theta\frac{l_{3}(n)}{n^{\rho }},\qquad n\rightarrow \infty .\end{equation*}

Proof. According to Lemma 2,

\begin{equation*}C_{n}=\frac{1}{B_{n}}=\frac{1}{1+\textrm{e}^{-\tilde{S}_{1}}+\cdots+\textrm{e}^{-\tilde{S}_{n}}}=:\frac{1}{\tilde{B}_{n}}.\end{equation*}

We set

\begin{equation*}\tilde{\tau}(n)\coloneqq \min \{ i\geq 0\,:\,\tilde{S}_{i}=\tilde{L}_{n}\}\end{equation*}

and write

\begin{equation*}d_{n}=\sum_{k=0}^{n}\textrm{E}\bigg[ \prod_{i=1}^{n}G_{i}(F_{i,0}(0)) ;\,\tilde{\tau}(n)=k\bigg] .\end{equation*}

Recalling point (i) of Hypothesis A2, we conclude that, for any $i\geq 1$,

\begin{equation*}F_{i,0}^{\sigma }(0)=F_{i,i-1}^{\sigma }(F_{i-1,0}(0))\geq F_{i,i-1}^{\sigma}(0)\geq \kappa ^{\sigma }.\end{equation*}

This estimate, point (ii) of Hypothesis A2, and Lemma 2 imply that

\begin{align*}\textrm{E}\bigg[ \prod_{i=1}^{n}G_{i}( F_{i,0}(0)) ;\,\tilde{\tau}(n)=k\bigg] &\leq \textrm{E}\bigg[ \prod_{i=1}^{n}G_{i}(F_{i,0}^{\sigma }(0)) ;\,\tilde{\tau}(n)=k\bigg] \\[3pt] &\leq \textrm{E}\bigg[ \bigg( \prod_{i=1}^{n}F_{i,0}^{\sigma}(0)\bigg) ^{\gamma };\,\tilde{\tau}(n)=k\bigg] =\textrm{E}\bigg[ \frac{1}{( \tilde{B}_{n}) ^{\sigma \gamma }};\,\tilde{\tau}(n)=k\bigg] .\end{align*}

Further,

\begin{equation*}\textrm{E}\bigg[ \frac{1}{( \tilde{B}_{n}) ^{\sigma \gamma }};\,\tilde{\tau}(n)=k\bigg] \leq \textrm{E}\left[ \textrm{e}^{\sigma \gamma \tilde{S}_{k}};\,\tilde{\tau}(n)=k\right] =\textrm{E}\left[ \textrm{e}^{\sigma \gamma \tilde{S}_{k}};\,\tilde{\tau}(k)=k\right] \textrm{P}( \tilde{L}_{n-k}\geq 0).\end{equation*}

Using Lemma 4 with $u(x)=\textrm{e}^{-\sigma \gamma x}$, we conclude that, for any $\varepsilon >0$, there exists $m=m\left( \varepsilon \right) $ such that

(10)\begin{eqnarray}&&\sum_{k=m}^{n}\textrm{E}\bigg[ \prod_{i=1}^{n}G_{i}(F_{i,0}(0)) ;\,\tilde{\tau}(n)=k\bigg] \notag \\&&\quad \leq \sum_{k=m}^{n}\textrm{E}\left[ \textrm{e}^{\sigma \gamma \tilde{S}_{k}};\,\tilde{\tau}(k)=k\right] \textrm{P}( \tilde{L}_{n-k}\geq 0) \leq\varepsilon \textrm{P}( \tilde{L}_{n}\geq 0) .\end{eqnarray}

We now consider fixed $k\leq m$ and write

\begin{eqnarray*}&&\textrm{E}\bigg[ \prod_{i=1}^{n}G_{i}( F_{i,0}(0)) ;\,\tilde{\tau}(n)=k\bigg] \\&&\qquad =\textrm{E}\bigg[ \prod_{i=1}^{k}G_{i}( F_{i,0}(0))\prod_{j=k+1}^{n}G_{j}( F_{j,k}(F_{k,0}(0))) ;\,\tilde{\tau}(n)=k\bigg] \\&&\qquad =\textrm{E}\bigg[ \prod_{i=1}^{k}G_{i}( F_{i,0}(0))\Theta ( n-k;\,F_{k,0}(0)) ;\,\tilde{\tau}(k)=k\bigg] ,\end{eqnarray*}

where

\begin{equation*}\Theta ( n;\,s) \coloneqq \textrm{E}\bigg[ \prod_{j=1}^{n}G_{j}(F_{j,0}(s)) ;\,\tilde{L}_{n}\geq 0\bigg] .\end{equation*}

Using the arguments applied to establish [Reference Afanasyev, Geiger, Kersting and Vatutin3, Lemma 2.7], one may check that, under the conditions of Theorem 1,

\begin{align*}&\sum_{j=1}^{\infty }( 1-G_{j}( F_{j,0}(s)) ) \leq\sum_{j=1}^{\infty }G_{j}^{\prime }(1)( 1-F_{j,0}(s)) \\&\quad\leq\sum_{j=1}^{\infty }G_{j}^{\prime }(1)( 1-F_{j,0}(0))\leq \sum_{j=1}^{\infty }G_{j}^{\prime }(1)\textrm{e}^{-\tilde{S}_{j}}<\infty \qquad \tilde{\textrm{P}}^{+}\text{-a.s.}\end{align*}

Hence it follows that

\begin{equation*}\xi _{n}(s)\coloneqq \prod_{j=1}^{n}G_{j}( F_{j,0}(s)) \rightarrow \xi_{\infty }(s)\coloneqq \prod_{j=1}^{\infty }G_{j}( F_{j,0}(s)) >0\end{equation*}

${\tilde{\textrm{P}}}^{+}$-a.s. as $n\rightarrow \infty $. It now follows from Lemma 3 that, for each $s\in \lbrack 0,1)$,

\begin{equation*}\Theta ( n;\,s) \sim {\tilde{\textrm{E}}}^{+}\left[ \xi _{\infty }(s)\right] \textrm{P}( \tilde{L}_{n}\geq 0) ,\qquad n\rightarrow \infty .\end{equation*}

Applying the dominated convergence theorem gives, on account of (9) and properties of slowly varying functions,

(11)\begin{align} & \lim_{n\rightarrow \infty }\textrm{E}\bigg[ \prod_{i=1}^{k}G_{i}(F_{i,0}(0)) \frac{\Theta ( n-k;\,F_{k,0}(0)) }{\textrm{P}( \tilde{L}_{n}\geq 0) };\,\tilde{\tau}(k)=k\bigg]\\ & \qquad\qquad\qquad = \textrm{E}\bigg[ \prod_{i=1}^{k}G_{i}( F_{i,0}(0)){\tilde{\textrm{E}}}^{+}\bigg[ \prod_{j=0}^{\infty }\hat{G}_{j}( \hat{F}_{j,0}(F_{k,0}(0))) \bigg] ;\,\tilde{\tau}(k)=k\bigg],\nonumber \end{align}

where $\hat{G}_{j},\hat{F}_{j,0}$ are independent copies of $G_{j},F_{j,0}$.

Combining (11) with (10) and recalling the definition of $d_{n}$, we get

\begin{equation*}\lim_{n\rightarrow \infty }\frac{d_{n}}{\textrm{P}( \tilde{L}_{n}\geq0) }=\theta ,\end{equation*}

where

\begin{equation*}\theta \coloneqq \sum_{k=0}^{\infty }\textrm{E}\bigg[ \prod_{i=1}^{k}G_{i}(F_{i,0}(0)) {\tilde{\textrm{E}}}^{+}\bigg[ \prod_{j=0}^{\infty }\hat{G}_{j}( \hat{F}_{j,0}(F_{k,0}(0))) \bigg] ;\,\tilde{\tau}(k)=k\bigg].\end{equation*}

This proves Lemma 5.

Proof of Theorem 1. By Lemma 5,

\begin{equation*}d_{n}\sim \theta \frac{l_{3}(n)}{n^{\rho }}\end{equation*}

as $n\rightarrow \infty $. This and a Tauberian theorem (see [Reference Feller6, Chapter XIII.5, Theorem 5]) imply that, for $s\uparrow 1$,

\begin{equation*}D(s)=\sum_{n=0}^{\infty }d_{n}s^{n}\sim \theta \Gamma \left( 1-\rho \right)\frac{l_{3}\left( 1/(1-s)\right) }{\left( 1-s\right) ^{1-\rho }}.\end{equation*}

Thus,

\begin{equation*}\mathcal{R}(s)=\frac{s\left( \mathcal{H}^{\ast }(s)+R_{1}\right) }{\left(1-s\right) D\left( s\right) }\sim \frac{\mathcal{H}^{\ast }(1)+R_{1}}{\theta\Gamma \left( 1-\rho \right) l_{3}\left( 1/(1-s)\right) \left( 1-s\right)^{\rho }}\end{equation*}

as $s\uparrow 1$. Since the sequence $\left\{ R_{n},n\geq 1\right\} $ is monotone decreasing, it follows (see [Reference Feller6, Chapter XIII.5, Theorem 5]) that

\begin{equation*}R_{n}\sim \frac{\mathcal{H}^{\ast }(1)+R_{1}}{\theta \Gamma \left( \rho\right) \Gamma \left( 1-\rho \right) }\frac{n^{\rho -1}}{l_{3}\left(n\right) }\qquad \text{ as }n\rightarrow \infty .\end{equation*}

Theorem 1 is proved.
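As a numerical sanity check of the Tauberian step above (our sketch, with an artificial coefficient sequence rather than the true $d_{n}$): taking $d_{n}=n^{-\rho }$ with $\rho =1/2$, so that $l_{3}\equiv 1$, the ratio $D(s)(1-s)^{1-\rho }/\Gamma (1-\rho )$ should tend to $1$ as $s\uparrow 1$:

```python
import math

rho = 0.5

def D(s, n_terms=2_000_000):
    """Partial sum of D(s) = sum_{n>=1} n^{-rho} s^n (the term d_0 is
    omitted; it does not affect the asymptotics as s -> 1)."""
    return sum(n ** (-rho) * s ** n for n in range(1, n_terms))

for s in (0.9, 0.99, 0.999):
    predicted = math.gamma(1.0 - rho) * (1.0 - s) ** (rho - 1.0)
    print(s, D(s) / predicted)   # ratios slowly approach 1 as s -> 1
```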

Acknowledgements

This work was supported by the Natural Science Foundation of China (grant 11871103), by the High-End Foreign Experts Recruitment Program (No. GDW20171100029), and by the Russian Science Foundation (grant 19-11-00111).

The authors are deeply grateful to the anonymous referee for their careful reading of the original manuscript and helpful suggestions to improve the paper.

References

Afanasyev, V. I. (2014). Conditional limit theorem for maximum of random walk in a random environment. Theory Prob. Appl. 58, 525–545.
Afanasyev, V. I., Boeinghoff, Ch., Kersting, G. and Vatutin, V. A. (2012). Limit theorems for weakly subcritical branching processes in random environment. J. Theoret. Prob. 25, 703–732.
Afanasyev, V. I., Geiger, J., Kersting, G. and Vatutin, V. A. (2005). Criticality for branching processes in random environment. Ann. Prob. 33, 645–673.
Badalbaev, I. S. and Mashrabbaev, A. (1983). Lifetimes of an $r>1$-type Galton–Watson process with immigration. Izv. Akad. Nauk UzSSR, Ser. Fiz.-Mat. Nauk 2, 7–13.
Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1987). Regular Variation. Cambridge University Press.
Feller, W. (1971). An Introduction to Probability Theory and Its Applications, Vol. 2. John Wiley, New York.
Haccou, P., Jagers, P. and Vatutin, V. A. (2005). Branching Processes in Biology: Evolution, Growth and Extinction (Camb. Ser. Adaptive Dynamics 5). Cambridge University Press.
Kagan, Y. Y. (2010). Statistical distributions of earthquake numbers: consequence of branching process. Geophys. J. Int. 180, 1313–1328.
Kaplan, N. (1974). Some results about multidimensional branching processes with random environments. Ann. Prob. 2, 441–455.
Kersting, G. and Vatutin, V. (2017). Discrete Time Branching Processes in Random Environment. ISTE & Wiley, New York.
Kesten, H., Kozlov, M. V. and Spitzer, F. (1975). A limit law for random walk in a random environment. Comp. Math. 30, 145–168.
Key, E. S. (1987). Limiting distributions and regeneration times for multitype branching processes with immigration in a random environment. Ann. Prob. 15, 344–353.
Kimmel, M. and Axelrod, D. E. (2002). Branching Processes in Biology. Springer, New York.
Mitov, K. V. (1982). Conditional limit theorem for subcritical branching processes with immigration. In Matem. i Matem. Obrazov. (Dokl. II Prolet. Konf. S”yuza Matem. B”lgarii, Sl”nchev Bryag, 6–9 April). Sofia, pp. 398–403.
Novick, R. P. and Hoppenstead, F. C. (1978). On plasmid incompatibility. Plasmid 1, 431–434.
Rogozin, B. A. (1962). The distribution of the first ladder moment and height and fluctuation of a random walk. Theory Prob. Appl. 16, 575–595.
Roitershtein, A. (2007). A note on multitype branching processes with immigration in a random environment. Ann. Prob. 35, 1573–1592.
Seneta, E. and Tavare, S. (1983). A note on models using the branching process with immigration stopped at zero. J. Appl. Prob. 20, 11–18.
Tanny, D. (1981). On multitype branching processes in a random environment. Adv. Appl. Prob. 13, 464–497.
Vatutin, V. A. (1977). A conditional limit theorem for a critical branching process with immigration. Math. Notes 21, 405–411.
Zolotarev, V. M. (1957). Mellin–Stieltjes transforms in probability theory. Theory Prob. Appl. 2, 433–460.
Zubkov, A. M. (1972). Life-periods of a branching process with immigration. Theory Prob. Appl. 17, 174–183.