
On supercritical branching processes with emigration

Published online by Cambridge University Press:  08 July 2022

Georg Braun*
Affiliation:
University of Tübingen
*
*Postal address: Mathematical Institute, University of Tübingen, Auf der Morgenstelle 10, 72076 Tübingen, Germany. Email address: georg.braun@uni-tuebingen.de

Abstract

We study supercritical branching processes under the influence of an independent and identically distributed (i.i.d.) emigration component. We provide conditions under which the lifetime of the process is finite or has a finite expectation. A theorem of Kesten–Stigum type is obtained, and the extinction probability for a large initial population size is related to the tail behaviour of the emigration.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Branching processes are a fascinating class of stochastic processes, which model the evolution of a population under the assumption that different individuals give birth to a random number of children independently of each other. Timeless monographs, which present the theory of classical branching processes, include [Reference Athreya and Ney1], [Reference Harris10], and [Reference Sevastyanov28].

In the present article we will study the consequences of including an independent and identically distributed (i.i.d.) emigration component between consecutive generations in the supercritical regime. Intuitively speaking, the properties of this model are determined by the interplay of two opposite effects, namely the explosive nature of supercritical branching processes and the decrease in population size caused by emigration.

Formally, let $((\xi_{n,j})_{j \geq 1}, Y_n)_{n \geq 1}$ denote a sequence of i.i.d. random variables. Assume that $(\xi_{1,j})_{j \geq 1}$ is i.i.d., let $\xi$ be an independent copy of $\xi_{1,1}$, and let Y be an independent copy of $Y_1$. Suppose that both $\xi$ and Y only take values in $\mathbb{N}_0$. Then we define a branching process with emigration $(Z_n)_{n \geq 0}$ by setting $Z_0 \,:\!=\, k \in \mathbb{N}$ and recursively

(1.1) \begin{equation}Z_{n+1} \,:\!=\, \Biggl( \sum_{j=1}^{Z_n} \xi_{n+1,j} - Y_{n+1} \Biggr)_+, \quad n \geq 0 .\end{equation}
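Readers who want to experiment with the recursion (1.1) can simulate it directly. The following Python sketch is purely illustrative: the specific offspring law (mean 1.9) and emigration law are arbitrary choices of ours, not taken from the article.

```python
import random

def simulate_emigration_process(k, n_steps, offspring, emigration, seed=0):
    """Simulate (1.1): Z_0 = k and Z_{n+1} = (sum_{j=1}^{Z_n} xi_{n+1,j} - Y_{n+1})_+ ."""
    rng = random.Random(seed)
    path = [k]
    for _ in range(n_steps):
        births = sum(offspring(rng) for _ in range(path[-1]))
        path.append(max(births - emigration(rng), 0))
    return path

# Illustrative supercritical offspring law (mean 1.9) and emigration law.
offspring = lambda rng: rng.choices((0, 1, 2, 3), (0.1, 0.2, 0.4, 0.3))[0]
emigration = lambda rng: rng.choices((0, 1, 2, 6), (0.5, 0.3, 0.15, 0.05))[0]
path = simulate_emigration_process(5, 20, offspring, emigration)
```

Since 0 is absorbing for $(Z_n)_{n \geq 0}$, a path that hits 0 stays there, which is how extinction shows up in such simulations.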

Throughout this article we will focus on the supercritical case and, more precisely, assume that

\begin{align*} \lambda \,:\!=\, \mathbb{E} [ \xi] \in (1,\infty). \end{align*}

Naturally, our study will concentrate on the extinction time

\begin{align*}\tau \,:\!=\, \inf \{ n \geq 1 \mid Z_n = 0 \}, \quad \mathrm{where}\ \inf \emptyset \,:\!=\, \infty.\end{align*}

In Theorem 2.1, we verify that $\tau$ is almost surely finite if and only if $\mathbb{E} [ \!\log_+ Y] = \infty$ . Moreover, in Theorem 2.2, we show that $\mathbb{E} [ \tau] < \infty$ if

(1.2) \begin{equation}0 < \sum_{n \geq 1} \prod_{m=1}^n \mathbb{P} \bigl[ Y \leq r (\lambda + \varepsilon)^m \bigr] < \infty \quad \text{for some $ \varepsilon > 0$ and $ r \in (0,\infty)$,}\end{equation}

and, under some additional assumptions, $\mathbb{E} [\tau]= \infty$ provided that

(1.3) \begin{equation}\sum_{n \geq 1} \prod_{m=1}^n \mathbb{P} \bigl[ Y \leq r \lambda^m m^{-\theta} \bigr] = \infty\quad \text{for some $ \theta \in (1,\infty)$, $r \in (0,\infty)$.}\end{equation}

The precise statements of all of our results will be given in Section 2. In Theorem 2.3 we relate the behaviour of the extinction probabilities

\begin{align*} q_k \,:\!=\, \mathbb{P} [\tau < \infty \mid Z_0=k], \quad k \geq 1, \end{align*}

to the tail behaviour of Y as $k \rightarrow \infty$ . We also present a strong limit theorem for the population size $Z_n$ as $n \rightarrow \infty$ ; see Theorem 2.4.

Our study is motivated by a simple observation, which links our model to subcritical autoregressive processes. We will explain it in Section 3. To the best of our knowledge, this connection has not been investigated in the literature so far.

While our criteria ensuring $\mathbb{E} [ \tau] < \infty$ and $\mathbb{E} [ \tau] = \infty$ , respectively, are not exact, the gap between them is quite narrow, as the following example illustrates.

Example 1.1. Assume that there are $c \in (0,\infty)$ and $n_0 \in \mathbb{N}$ with

\begin{align*} \mathbb{P} [ Y > n ] = \dfrac{c}{\log n} \quad \text{for all $ n \geq n_0$.} \end{align*}

Then $\mathbb{E}[ \!\log_+ Y] = \infty$ and hence $\tau < \infty$ almost surely. If $c > \log \lambda$ , then by Raabe’s test, (1.2) holds. If $c \leq \log \lambda$ , then by the Gauss test, (1.3) holds.

Let us end this introduction by giving an overview of the literature dealing with branching processes with emigration.

Vatutin [Reference Vatutin31] explored the critical case $\lambda = 1$ for $Y \equiv 1$ and $\sigma^2 \,:\!=\, {\mathrm{var}} [\xi] \in (0,\infty)$ . He showed that $\tau$ has a regularly varying tail with exponent $-1-2/\sigma^2$ and, if all moments of $\xi$ are finite, proved that $2 Z_n/n \sigma^2$ , conditioned on being positive, converges weakly to an exponential distribution. These results were improved by Vinokurov [Reference Vinokurov34] and Kaverin [Reference Kaverin13], and more recently by Denisov, Korshunov, and Wachtel [Reference Denisov, Korshunov and Wachtel5]. More generally, the approach of [Reference Denisov, Korshunov and Wachtel5] allows size-dependent offspring distribution and immigration.

More or less specific models of critical branching processes involving both immigration and emigration were studied in [Reference Nagaev and Khan19], [Reference Yanev and Mitov36], and [Reference Yanev and Yanev37].

The present article and most of the above literature deal with specific cases of controlled branching. This model was introduced by Sevastyanov and Zubkov [Reference Sevastyanov and Zubkov29], who classified eventual extinction for a control function of linear and polynomial growth. We also refer to the related work by Zubkov [Reference Zubkov39, Reference Zubkov40]. Yanev [Reference Yanev35] generalized this model by assuming random i.i.d. control functions $(\varphi_n)_{n \geq 1}$ . Roughly speaking, the present article assumes $\varphi_n (m) \,:\!=\, ( m - Y_n )_+$ , $m \in \mathbb{N}$ . For a recent monograph on controlled branching, see [Reference Velasco, del Puerto and Yanev32].

We also want to mention some results for models in continuous time. In this scenario the population size changes if exactly one individual gives birth to a random number of children or if an emigration event, sometimes called catastrophe, occurs. The case that each individual has either 0 or 2 children would correspond to a birth-and-death process. In [Reference Pakes21] and [Reference Pakes24], Pakes gave results for a catastrophe rate proportional to the population size. In [Reference Pakes22], [Reference Pakes23], and [Reference Pakes25], he studied this model with a size-independent emigration rate. In the supercritical regime, he related the almost sure eventual extinction to the condition $\mathbb{E} [ \!\log_+ Y ] = \infty$ ; see [Reference Pakes22, Theorem 2.1 and Corollary 3.1]. This was also verified by Grey [Reference Grey6], who proved the same result for the time-discrete model of our present article under the assumption that $(\xi_{1,j})_{j \geq 1}$ and $Y_1$ are independent. However, this independence condition can be avoided; see Theorem 2.1 of the present article.

2. Preliminaries and statement of results

In the following we usually assume $Z_0 = k \in \mathbb{N}$ , $\lambda \in (1,\infty)$ , as well as:

  1. (H1) There exists a strictly increasing sequence $(k_n)_{n \geq 0}$ in $\mathbb{N}$ , which satisfies $k_0 =k$ and $\mathbb{P} \bigl[ \sum_{j=1}^{k_n} \xi_{1,j} - Y_1 = k_{n+1} \bigr] > 0$ for all $n \in \mathbb{N}_0$ , and

  2. (H2) $ \mathbb{P} \bigl[ \sum_{j=1}^n \xi_{1,j} - Y_1 \leq n-1 \bigr] > 0 $ for all $n \geq 1$ .

In some of our results, we will also need Grey’s restriction:

  1. (IND) The random variables $(\xi_{1,j})_{j \geq 1}$ and $Y_1$ are independent.

Conditions (H1) and (H2) ensure that neither branching nor emigration fully dominates the other: (H1) guarantees that emigration does not overwhelm branching when starting from the possibly rather small initial state, while (H2) guarantees that branching does not fully dominate emigration at large population sizes.

If $\lambda > 1$ , then condition (H1) is always satisfied provided that the initial population $Z_0 = k \in \mathbb{N}$ is chosen large enough. As similar arguments will occur in many of our proofs, let us briefly explain how this can be verified. By truncation of the offspring distribution $\xi$ (see also Observation A.1 in Appendix A), we may restrict ourselves to the case $\lambda < \infty$ . Choose $\varepsilon > 0$ with $\lambda - 2\varepsilon > 1$ . Then, by the law of large numbers,

\begin{equation*}\mathbb{P} \Biggl[ \sum_{j=1}^k \xi_{1,j} \leq ( \lambda - \varepsilon) k \Biggr] \rightarrow 0 \quad \mathrm{as}\ k \rightarrow \infty. \end{equation*}

In particular, we know that there exists $K \in \mathbb{N}$ such that this probability is smaller than $1/2$ for all $k \geq K$ . Moreover, we can choose $N \in \mathbb{N}$ with $\mathbb{P} [Y_1 \geq N] < 1/2$ . Then, for all $k \geq K$ with $N < \varepsilon k$ , we deduce

\begin{align*}\mathbb{P} [Z_1 > k \mid Z_0 = k] > 0.\end{align*}

So if $\lambda > 1$ , then condition (H1) holds if $Z_0 = k$ is chosen large enough. On the other hand, condition (H2) holds, for example, if $\mathbb{P} [ \xi =0]>0$ , or if the emigration distribution Y is unbounded and (IND) holds.

Note that (H1) and (H2) are preserved if the value of $Z_0 = k \geq 1$ is increased. The state space of $(Z_n)_{n \geq 0}$ may be affected by such a modification. However, this will not cause any problems in our study.

Let us give a final thought regarding (H1) and (H2). Consider a modification of the branching process $(Z_n)_{n \geq 0}$ , in which the population gets revived with exactly k individuals upon extinction. Then this renewal version of $(Z_n)_{n \geq 0}$ is a Markov chain, which is irreducible and exhibits an infinite state space containing 0 if and only if both (H1) and (H2) are satisfied. In [Reference Pakes20], Pakes investigated this revived branching process when the emigration component is absent but with a more general resetting mechanism upon extinction. Moreover, both the subcritical and critical cases are studied in [Reference Pakes20].

Theorem 2.1. $\tau < \infty$ almost surely if and only if $\mathbb{E} [\!\log_+ Y] = \infty$ .

If the process $(Z_n)_{n \geq 0}$ dies out almost surely, it is natural to ask whether its expected lifetime is finite or infinite.

Theorem 2.2. Assume $\mathbb{E} [ \!\log_+ Y ] = \infty$ .

  1. (I) $ \mathbb{E} [\tau] < \infty$ , provided that there are $\varepsilon > 0$ and $r \in (0,\infty)$ with

    \begin{align*} 0 < \sum_{n \geq 1} \prod_{m=1}^n \mathbb{P} \bigl[ Y \leq r (\lambda + \varepsilon)^m \bigr] < \infty. \end{align*}
  2. (II) $\mathbb{E} [\tau] = \infty$ , provided that $\mathbb{E} \big[ \xi^{1+\delta} \big] < \infty$ for some $\delta > 0$ , $\mathrm{(IND)}$ , and

    \begin{align*} \sum_{n \geq 1} \prod_{m=1}^n \mathbb{P} \bigl[ Y \leq r \lambda^m m^{-\theta} \bigr] = \infty \quad \textit{for some $ \theta \in (1, \infty)$, $r \in (0,\infty)$.} \end{align*}

If the process $(Z_n)_{n \geq 0}$ does not become extinct almost surely, one might try to understand the distribution of $\tau$ and the extinction probabilities $(q_k)_{k \geq 1}$ in the case of a large initial population size $k \geq 1$ . For this, we use Karamata’s concept of slow and regular variation and assume:

  1. (REG) $\mathbb{P} [Y>t]$ varies regularly for $t \rightarrow \infty$ with index $\alpha \in (0,\infty)$ .

For a gentle introduction to slow and regular variation, we refer the reader to [Reference Mikosch18]. A measurable function $L\colon [0,\infty) \rightarrow (0,\infty)$ is called slowly varying for $t \rightarrow \infty$ if for all $c \in (0, \infty)$ we have $L(ct)/L(t) \rightarrow 1$ as $t \rightarrow \infty$ . Moreover, a measurable function $f\colon [0,\infty) \rightarrow (0,\infty)$ is regularly varying for $t \rightarrow \infty$ if there exist $\alpha \in \mathbb{R}$ , $t_0 \in [0,\infty)$ , and a slowly varying function L satisfying $f(t) = t^\alpha L(t)$ for all $t \geq t_0$ . In this case the constant $\alpha \in \mathbb{R}$ is unique and $- \alpha$ is called the index of f.

Theorem 2.3. Assume $\mathrm{(REG)}$ and let $N \in \mathbb{Z}_{\geq 2} \cup \{ \infty \}$ . Then

\begin{equation*}\limsup_{k \rightarrow \infty} \ \mathbb{P} [ \tau < N \mid Z_0 = k] \ \mathbb{P} [Y>k]^{-1} \leq \sum_{l=1}^{N-1} \lambda^{-\alpha l}.\end{equation*}

Furthermore, if all exponential moments of $\xi$ are finite, then

\begin{align*}\lim_{k \rightarrow \infty} \ \mathbb{P} [ \tau < N \mid Z_0 = k]\ \mathbb{P} [Y>k]^{-1} = \sum_{l=1}^{N-1} \lambda^{-\alpha l}.\end{align*}

By choosing $N= \infty$ in Theorem 2.3, we obtain results on the extinction probabilities $(q_k)_{k \geq 1}$ for $k \rightarrow \infty$ .

Besides $\tau$ and $(q_k)_{k \geq 1}$ , one can also investigate the asymptotic behaviour of the process $(Z_n)_{n \geq 0}$ conditioned on its non-extinction. Observe that the sequence $W_n \,:\!=\, \lambda^{-n} Z_n$ , $n \geq 0$ , is a non-negative supermartingale. Hence, as in the case without any migration, Doob’s martingale convergence theorem yields the existence of the a.s. limit

\begin{align*}W\,:\!=\, \lim_{n \rightarrow \infty} W_n, \quad \mathrm{and\ moreover }\ 0 \leq \mathbb{E} [W] \leq k.\end{align*}
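The supermartingale property behind this limit, $\mathbb{E}[Z_{n+1} \mid Z_n = z] \leq \lambda z$, can be checked exactly for small finite laws by exhaustive enumeration. The offspring and emigration distributions below are our own toy choices, not from the article.

```python
from fractions import Fraction
from itertools import product

# Toy finite laws (illustrative only): E[xi] = 7/4 > 1.
xi = {0: Fraction(1, 4), 2: Fraction(1, 2), 3: Fraction(1, 4)}
Y = {0: Fraction(1, 2), 1: Fraction(1, 4), 4: Fraction(1, 4)}
lam = sum(v * p for v, p in xi.items())

def expected_next(z):
    """Exact E[(xi_1 + ... + xi_z - Y)_+] by enumerating all outcomes."""
    total = Fraction(0)
    for births in product(xi.items(), repeat=z):
        p_b = Fraction(1)
        s = 0
        for v, p in births:
            s += v
            p_b *= p
        for y, p_y in Y.items():
            total += p_b * p_y * max(s - y, 0)
    return total

for z in range(5):
    assert expected_next(z) <= lam * z  # E[Z_{n+1} | Z_n = z] <= λ z
```

The truncation $({\cdot})_+$ only decreases the conditional mean, which is why the inequality is strict whenever emigration or truncation is active.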

Theorem 2.4.

  1. (a) $\mathbb{P} [W > 0] > 0$ if and only if

    \begin{align*} \mathbb{E} [ \xi \log_+ \xi] < \infty \quad \textit{and} \quad \mathbb{E} [ \!\log_+ Y] < \infty.\end{align*}
    Furthermore, in this case, $\mathbb{P}[W>0]= \mathbb{P}[\tau=\infty]$ .
  2. (b) Assume $\mathbb{P}[ W > 0] > 0$ , $\mathbb{P}[ \xi = \lambda] < 1$ and $\mathrm{(IND)}$ . Then

    \begin{align*} \mathbb{P}[a < W < b] > 0 \quad \textit{for all $ 0 \leq a < b \leq \infty $.} \end{align*}

The proofs of Theorem 2.1 and Theorem 2.2 are somewhat similar and are therefore addressed together in Section 4. The arguments needed for Theorem 2.3 and Theorem 2.4 are different and slightly more technical, and are thus carried out separately in Sections 5 and 6.

3. Relation to the random difference equation

In this section we always assume $\xi \equiv \lambda$ . Then (1.1) simplifies to

\begin{align*}Z_{n+1} = ( \lambda Z_n - Y_{n+1})_+, \quad n \geq 0.\end{align*}

Consider the process $( \hat{Z}_n )_{n \geq 0}$ defined by $\hat{Z}_0 \,:\!=\, Z_0 = k$ and

\begin{align*}\hat{Z}_{n+1} \,:\!=\, \lambda \hat{Z}_n - Y_{n+1}, \quad n \geq 0.\end{align*}

Then, by induction over $n \geq 0$ , we can verify $Z_n = ( \hat{Z}_n )_+$ and

\begin{equation*}\hat{Z}_n = \lambda^n k - \sum_{j=1}^{n} \lambda^{n-j} Y_j.\end{equation*}
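Both identities are easy to confirm numerically. The sketch below fixes $\lambda = 2$, draws an arbitrary emigration path (the draws are illustrative), and checks $Z_n = (\hat{Z}_n)_+$ together with the closed form for $\hat{Z}_n$ pathwise.

```python
import random

rng = random.Random(1)
lam, k, n_steps = 2, 5, 12
Y = [rng.randrange(0, 8) for _ in range(n_steps)]  # Y_1, ..., Y_{n_steps}

# Z via (1.1) with xi ≡ λ, and Ẑ via its linear recursion.
Z, Zhat = [k], [k]
for y in Y:
    Z.append(max(lam * Z[-1] - y, 0))
    Zhat.append(lam * Zhat[-1] - y)

for n in range(n_steps + 1):
    closed = lam**n * k - sum(lam**(n - j) * Y[j - 1] for j in range(1, n + 1))
    assert Zhat[n] == closed        # closed form for Ẑ_n
    assert Z[n] == max(Zhat[n], 0)  # Z_n = (Ẑ_n)_+
```

The second assertion relies on $\hat{Z}_n$ staying non-positive once it becomes non-positive, which holds because $Y_{n+1} \geq 0$.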

Let $m \in \mathbb{N}_0$ . Then, for all $n \geq 0$ , we know that

(3.1) \begin{align} \mathbb{P} [ Z_{n} > m ] &= \mathbb{P} [ \hat{Z}_{n} > m ] = \mathbb{P} \Biggl[ \lambda^{-n} m + \sum_{j=1}^n \lambda^{-j} Y_j < k \Biggr]\end{align}
(3.2) \begin{align} &= \mathbb{P} \Biggl[ \lambda^{-n} m + \sum_{j=1}^n \lambda^{-(n-j)} \lambda^{-1} Y_j < k \Biggr] = \mathbb{P} [ \hat{X}_n < k ],\end{align}

where $( \hat{X}_n)_{n \geq 0}$ is the autoregressive process defined by $\hat{X}_0 \,:\!=\, m$ and

\begin{equation*}\hat{X}_{n+1} \,:\!=\, \lambda^{-1} \hat{X}_n + \lambda^{-1} Y_{n+1}, \quad n \geq 0.\end{equation*}
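The pathwise mechanism behind (3.2) is time reversal: running the $\hat{X}$ recursion on the reversed emigration sequence $Y_n, \ldots, Y_1$ reproduces exactly the quantity $\lambda^{-n} m + \sum_{j=1}^n \lambda^{-j} Y_j$ from (3.1); exchangeability of $(Y_j)_{j \geq 1}$ then gives equality in distribution. A quick exact check (with arbitrary illustrative draws):

```python
from fractions import Fraction
import random

rng = random.Random(2)
lam = Fraction(2)
n, m = 8, 3
Y = [Fraction(rng.randrange(0, 6)) for _ in range(n)]  # Y_1, ..., Y_n

# Quantity inside (3.1): λ^{-n} m + Σ_{j=1}^n λ^{-j} Y_j.
lhs = lam**-n * m + sum(lam**-j * Y[j - 1] for j in range(1, n + 1))

# X̂ recursion driven by the reversed sequence Y_n, Y_{n-1}, ..., Y_1.
x = Fraction(m)
for y in reversed(Y):
    x = x / lam + y / lam
assert x == lhs  # time reversal turns (3.1) into the perpetuity form (3.2)
```

Exact rational arithmetic avoids any floating-point tolerance in the comparison.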

The study of this random difference equation was initiated by Kesten [Reference Kesten16] in the more general random-coefficient version

\begin{align*} X_{n+1} \,:\!=\, A_{n+1} X_n + Y_{n+1}, \quad n \geq 0, \end{align*}

where the sequence $(A_n,Y_n)_{n \geq 1}$ is typically assumed to be i.i.d. and independent of $X_0$ . The Markov chain $(X_n)_{n \geq 0}$ is sometimes called a perpetuity. For more information on this process, we also refer to the monograph [Reference Buraczewski, Damek and Mikosch3] and the exposition in [Reference Iksanov12, Chapter 2].

Formally, (3.1) and (3.2) establish that $(Z_n)_{n \geq 0}$ and $(\hat{X}_n)_{n \geq 0}$ are dual Markov chains in the sense of Siegmund; see [Reference Siegmund30].

Let $A_1 \geq 0$ and $Y_1 \geq 0$ in the following. In the contractive case $\mathbb{E} [ \!\log A_1] < 0$ , it is well known that the condition

\begin{align*} \mathbb{E} [ \!\log_+ Y_1] < \infty \end{align*}

is related to the existence of a stationary solution for $(X_n)_{n \geq 0}$ ; see e.g. [Reference Vervaat33, Theorem 1.6] or [Reference Buraczewski, Damek and Mikosch3, Theorem 2.1.3]. This can be explained in the following way. Assume that $Y_0 \,:\!=\, X_0$ has the same distribution as $Y_1$ . Then, for fixed $n \geq 1$ , by exchangeability

\begin{align*} X_n =\sum_{j=0}^n A_n \cdots A_{j+1} Y_j \overset{\mathrm{d}}{=} \sum_{j=0}^{n} A_1 \cdots A_{j} Y_{j+1} \,=:\, X^{\prime}_n,\end{align*}

and $X^{\prime}_n \rightarrow X_\infty$ almost surely for $n \rightarrow \infty$ , provided the limit

\begin{align*}X_\infty \,:\!=\, \sum_{n \geq 0} A_1 \cdots A_n Y_{n+1}\end{align*}

exists. If $A_1 \equiv \lambda^{-1}$ and $Y_1 \geq 0$ , then the limit $X_\infty$ is almost surely finite if and only if $\mathbb{E} [ \!\log_+ Y] < \infty$ ; see e.g. Lemma 4.1 in Section 4. Inserting $m=0$ into (3.1) and (3.2) gives

(3.3) \begin{equation}\mathbb{P}[ \tau = \infty] = \lim_{n \rightarrow \infty} \mathbb{P} [Z_n > 0] = \lim_{n \rightarrow \infty} \mathbb{P} [ \hat{X}_n < k ] = \mathbb{P} [ \hat{X}_\infty < k],\end{equation}

where

\begin{align*}\hat{X}_\infty \,:\!=\, \lambda^{-1} \sum_{n \geq 0} \lambda^{-n} Y_{n+1},\end{align*}

and we can recover the statement of Theorem 2.1 in this way. Moreover, (3.3) can be combined with the following result, which was obtained by Grincevičius in [Reference Grincevičius8].

Theorem 3.1. (Grincevičius.) Assume $\mathbb{P}[ Y_1 > t]$ varies regularly for $t \rightarrow \infty$ with index $\alpha \in (0,\infty)$ , $\mathbb{E}[ A_1^\alpha] < 1$ and $\mathbb{E}[ A_1^\beta] < \infty$ for some $\beta > \alpha$ . Then

\begin{align*}\lim_{k \rightarrow \infty} \ \mathbb{P} [ X_\infty > k ]\ \mathbb{P} [Y_1>k]^{-1} = \sum_{j=0}^\infty \mathbb{E}[ A_1^\alpha]^j.\end{align*}

In the specific case $\xi \equiv \lambda$ and $A_1 \equiv \lambda^{-1}$ , we can use this result to recover the asymptotic formula for $(q_k)_{k \geq 1}$ as $k \rightarrow \infty$ given in Theorem 2.3. Note that $X_\infty$ and $\hat{X}_\infty$ differ by the factor $\lambda^{-1}$ , which explains why the limit in Theorem 2.3 is $\lambda^{-\alpha}/(1-\lambda^{-\alpha})$ rather than $1/(1-\lambda^{-\alpha})$ .

It is worth mentioning that Grey questioned some parts of the original proof of Theorem 3.1 and gave a new, improved version in [Reference Grey7].

Finally, note that, in our case, all random variables involved in the definition of $(\hat{X}_n)_{n \geq 0}$ are non-negative and hence Kellerer’s theory of recurrence and transience of order-preserving chains is available; see [Reference Kellerer14] and [Reference Kellerer15]. By again inserting $m=0$ into (3.1) and (3.2), we deduce that

\begin{align*}\mathbb{E} [\tau] = \sum_{n \geq 0} \mathbb{P} [ \tau > n] = \sum_{n \geq 0} \mathbb{P} [ Z_n > 0] = \sum_{n \geq 0} \mathbb{P} [ \hat{X}_n < k ],\end{align*}

and hence conclude that $\mathbb{E}[ \tau] = \infty$ if and only if $( \hat{X}_n)_{n \geq 0}$ is recurrent. A recent result by Zerner [Reference Zerner38] states that this is rather generally the case if and only if there exists $b \in (0, \infty)$ with

\begin{align*} \sum_{n \geq 1} \prod_{m=1}^n \mathbb{P} [ Y_1 \leq b \lambda^m ] = \infty.\end{align*}

Clearly, in the specific case $\xi \equiv \lambda$ , this result characterizes the finiteness of $\mathbb{E}[\tau]$ exactly and hence more precisely than Theorem 2.2.

Interestingly, Zerner’s criterion applies not only to general random-coefficient autoregressive processes but also to rather general subcritical branching processes with immigration. For many of these models, the existence of a stationary solution is related to a finite logarithmic moment of the immigration; see [Reference Heathcote11] and [Reference Quine26].

4. Proofs of Theorem 2.1 and Theorem 2.2

As preparation, we start with two simple lemmas.

Lemma 4.1. Let $(U_n)_{n \geq 1}$ be i.i.d. non-negative random variables. Then

\begin{align*}\limsup_{n \rightarrow \infty} \dfrac{U_n}{n} = \begin{cases}0& \quad \textit{if $ \mathbb{E}[U_1] < \infty$,}\\[4pt]\infty & \quad \textit{if $ \mathbb{E} [U_1] = \infty$.}\end{cases}\end{align*}

The proof of Lemma 4.1 follows directly from the Borel–Cantelli lemma; see also [Reference Gut9, Chapter 6, Proposition 1.1]. Lemma 4.1 is known in the context of supercritical branching processes with immigration and can be used to obtain some of Seneta’s classical results on whether immigration increases the speed of divergence; see also [Reference Seneta27] and [Reference Dawson4, Section 3.1.1].
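For completeness, here is a hedged sketch (not from the article, which refers to [Reference Gut9] instead) of the Borel–Cantelli argument behind Lemma 4.1. For fixed $\varepsilon > 0$, integral comparison gives

```latex
\sum_{n \geq 1} \mathbb{P} [ U_n > \varepsilon n ]
  = \sum_{n \geq 1} \mathbb{P} [ U_1 > \varepsilon n ]
  \begin{cases} < \infty & \text{if } \mathbb{E} [U_1] < \infty, \\[2pt]
                = \infty & \text{if } \mathbb{E} [U_1] = \infty, \end{cases}
\quad \text{since} \quad
\int_0^\infty \mathbb{P} [ U_1 > \varepsilon t ] \, \mathrm{d} t
  = \frac{\mathbb{E} [U_1]}{\varepsilon} .
```

By the Borel–Cantelli lemma (and its second part, which applies because the $U_n$ are independent), the events $\{U_n > \varepsilon n\}$ occur almost surely finitely often in the first case and infinitely often in the second; letting $\varepsilon \downarrow 0$ in the first case and $\varepsilon \uparrow \infty$ in the second, along countable sequences, yields the stated dichotomy for $\limsup_n U_n/n$.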

We will also apply the following concentration estimate, which can be seen as a weaker but more general form of Chebyshev’s inequality.

Lemma 4.2. Let $(V_n)_{n \geq 1}$ denote a sequence of i.i.d. random variables and $S_n \,:\!=\, \sum_{j=1}^n V_j$ for all $n \geq 1$ . Assume $\mathbb{E} [V_1]=0$ and that there is $\delta \in (0,1]$ with $c\,:\!=\, \mathbb{E} \big[ |V_1|^{1+\delta}\big ] < \infty$ . Then

\begin{equation*}\mathbb{P} [ |S_n | > t] \leq 2 c n t^{-1-\delta} \quad \textit{for all $ n \geq 1 $ and $t \in (0,\infty)$.}\end{equation*}

Proof. By the Marcinkiewicz–Zygmund inequality (see [Reference Gut9, Chapter 3, Corollary 8.2]),

\begin{align*}\mathbb{E} \big[ \vert S_n \vert^{1+\delta} \big] \leq 2 c n \quad \text{for all $ n \geq 1$.}\end{align*}

Thus the claim follows by applying Markov’s inequality.
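As a quick empirical sanity check of Lemma 4.2 (ours, not from the article): for Rademacher steps we have $\mathbb{E}[V_1] = 0$, $\delta = 1$, and $c = \mathbb{E}[|V_1|^2] = 1$, so the bound reads $\mathbb{P}[|S_n| > t] \leq 2n/t^2$.

```python
import random

rng = random.Random(0)
n, t, trials = 100, 25, 20000
hits = 0
for _ in range(trials):
    s = sum(rng.choice((-1, 1)) for _ in range(n))  # Rademacher random walk
    if abs(s) > t:
        hits += 1
freq = hits / trials
assert freq <= 2 * n / t**2  # empirical frequency respects the bound 2cn/t^{1+δ}
```

The bound here equals $0.32$, while the true probability is roughly $\mathbb{P}[|\mathcal{N}(0,n)| > t] \approx 0.01$, so the estimate is far from tight but, as the lemma promises, valid under a mere $(1+\delta)$th moment.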

The branching process, which is obtained from $(Z_n)_{n \geq 0}$ by neglecting any emigration, will be denoted by $(Z^{\prime}_n)_{n \geq 0}$ . Formally, we set $Z^{\prime}_0 \,:\!=\, k \geq 1$ and

\begin{align*}Z^{\prime}_{n+1} \,:\!=\, \sum_{j=1}^{Z^{\prime}_n} \xi_{n+1,j}, \quad n \geq 0.\end{align*}

We will also work with the stopping time

\begin{align*} \tau^{\prime} \,:\!=\, \inf \{ n \geq 1 \mid Z^{\prime}_{n} \leq Y_{n} \}. \end{align*}

Observe that, by definition, $Z_n \leq Z^{\prime}_n$ and $\tau \leq \tau^{\prime}$ almost surely.
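These pathwise dominations come from a natural coupling: feed both processes the same offspring draws and simply suppress emigration for $(Z'_n)_{n \geq 0}$. A short illustrative sketch (the distributions are again our own choices):

```python
import random

rng = random.Random(3)
offspring = lambda: rng.choices((0, 1, 2, 3), (0.2, 0.2, 0.3, 0.3))[0]
k, n_steps = 4, 10
z, zp = k, k  # z follows (Z_n), zp follows (Z'_n)
for _ in range(n_steps):
    xi = [offspring() for _ in range(zp)]  # shared offspring draws
    y = rng.randrange(0, 3)
    z = max(sum(xi[:z]) - y, 0)  # with emigration, using the first z draws
    zp = sum(xi)                 # without emigration
    assert z <= zp  # suppressing emigration can only increase the population
```

Since $z \leq zp$ at every step, the emigration process always consumes a subset of the offspring draws available to the free process, which is exactly the inductive step behind $Z_n \leq Z'_n$.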

Proof of Theorem 2.1. First, let $\mathbb{E}[ \!\log_+ Y]= \infty$ . Choose $\varepsilon > 0$ and set

\begin{align*}T \,:\!=\, \inf \{ n \geq 1 \mid Z^{\prime}_m \leq ( \lambda + \varepsilon)^m\ \mathrm{for\ all }\ m \geq n \}.\end{align*}

Then, for fixed $n \geq 1$ , Markov’s inequality gives

\begin{align*} \mathbb{P} \bigl[ Z^{\prime}_n > ( \lambda + \varepsilon)^n \bigr] \leq k \biggl( 1 + \dfrac{\varepsilon}{\lambda} \biggr)^{-n}.\end{align*}

Hence, by the Borel–Cantelli lemma, $T < \infty$ almost surely. Furthermore, applying Lemma 4.1 with $U_n \,:\!=\, \log_+ Y_n$ gives $Y_n \geq ( \lambda + \varepsilon)^n$ for infinitely many $n \geq 1$ almost surely. This yields $\tau \leq \tau^{\prime} < \infty$ almost surely.

Secondly, let $\mathbb{E} [ \!\log_+ Y] < \infty$ . By truncation of the offspring distribution (see Observation A.1 from Appendix A), we can assume that the offspring distribution $\xi$ is bounded and $\sigma^2 \,:\!=\, {\mathrm{var}} [\xi] \in [0,\infty)$ . Moreover, as (H1) and (H2) hold, we can choose $Z_0 = k$ large enough to ensure that these conditions remain true after truncation.

Fix $\varepsilon > 0$ with $\lambda_1 \,:\!=\, \lambda - 2 \varepsilon > 1$ and let $\lambda_0 \,:\!=\, \lambda - \varepsilon $ . Then, for all $n \geq 1$ , consider the following events:

\begin{equation*}A_n \,:\!=\, \Biggl\{ \sum_{j=1}^{\lfloor \lambda_0^n \rfloor} \xi_{n+1,j} \geq \biggl( \lambda - \dfrac{\varepsilon}{2} \biggr) \bigl\lfloor \lambda_0^n \bigr\rfloor \Biggr\}, \quad B_n \,:\!=\, \bigl\{ Y_n \leq \lambda_1^n \bigr\}.\end{equation*}

For all $n \geq 1$ , we find that by Chebyshev’s inequality

\begin{equation*}\mathbb{P}[ A_n^c] \leq \mathbb{P} \Biggl[ \Biggl\vert \sum_{j=1}^{\lfloor \lambda_0^n \rfloor} \xi_{1,j} - \lambda \bigl\lfloor \lambda_0^n \bigr\rfloor \Biggr\vert > \dfrac{\varepsilon}{2} \bigl\lfloor \lambda_0^n \bigr\rfloor \Biggr] \leq \biggl( \dfrac{\varepsilon}{2} \biggr)^{-2} \dfrac{\sigma^2}{\bigl\lfloor \lambda_0^n \bigr\rfloor}.\end{equation*}

Since $\lambda_0 > 1$ , due to the Borel–Cantelli lemma, almost surely only finitely many events $A_n^c$ , $n \geq 1$ , occur. On the other hand, by Lemma 4.1, we know that almost surely all but finitely many events $B_n$ , $n \geq 1$ , occur. Also note that $(A_n)_{n \geq 1}$ is a sequence of independent events, and so is $(B_n)_{n \geq 1}$ . All in all, we can fix $n_0 \in \mathbb{N}$ such that

(4.1) \begin{equation}\biggl( \lambda - \dfrac{\varepsilon}{2} \biggr) \bigl\lfloor \lambda_0^n \bigr\rfloor - \lambda_1^n \geq \bigl\lfloor \lambda_0^{n+1} \bigr\rfloor \quad \text{for all $ n \geq n_0$}\end{equation}

and

(4.2) \begin{equation}\min\! ( \mathbb{P}[A], \mathbb{P}[B] ) > \dfrac{1}{2}, \quad \text{where $ A \,:\!=\, \bigcap_{n \geq n_0} A_n$, $B \,:\!=\, \bigcap_{n \geq n_0} B_n$.}\end{equation}

Note that the value of $n_0$ does not depend on k. By using (4.2), we find

\begin{align*} \mathbb{P} [ A \cap B] = \mathbb{P}[ A] + \mathbb{P} [ B] - \mathbb{P} [A \cup B] \geq \mathbb{P}[ A] + \mathbb{P} [B] - 1 > 0. \end{align*}

Finally, by recalling (H1) and (H2), we can increase $Z_0 = k$ to ensure

\begin{align*}\mathbb{P} [ C] > 0, \quad \text{where $ C \,:\!=\, \bigl\{ Z_{n_0} \geq \bigl\lfloor \lambda_0^{n_0} \bigr\rfloor \bigr\}$.}\end{align*}

By inserting our construction of the events A and B and using (4.1), an inductive argument yields $Z_n \geq \lfloor \lambda_0^n \rfloor$ for all $n \geq n_0$ on the event $A \cap B \cap C$ . Since the events $A \cap B$ and C are independent by definition,

\begin{equation*}\mathbb{P} [ \tau = \infty] \geq \mathbb{P} [ A \cap B \cap C] = \mathbb{P} [A \cap B]\ \mathbb{P} [C] > 0.\end{equation*}

In fact, a careful look at the second part of this proof reveals the following result, which we need in the proof of Theorem 2.4.

Proposition 4.1. Assume $\mathbb{E}[ \!\log_+ Y] < \infty$ . Then $q_k \rightarrow 0$ for $k \rightarrow \infty$ .

The proof of this proposition is left to the reader.

Proof of Theorem 2.2. (I) Since $\tau \leq \tau^{\prime}$ , it suffices to verify $\mathbb{E} [\tau^{\prime}] < \infty$ . Fix $\varepsilon > 0$ and $r \in (0, \infty)$ according to the assumption and set

\begin{align*}T &\,:\!=\, \inf \{ n \geq 1 \mid Z^{\prime}_m \leq r (\lambda + \varepsilon)^{m-1} \ \text{for all $ m \geq n$} \}, \\\hat{T} &\,:\!=\, \inf \{ n > T \mid Y_n > r ( \lambda + \varepsilon)^n \}.\end{align*}

Then $\tau^{\prime} \leq \hat{T}$ almost surely by construction, and hence it suffices to prove $\mathbb{E} [ \hat{T}] < \infty$ . As in the proof of Theorem 2.1, we know $T < \infty$ almost surely. For all $n \geq 1$ , by applying Markov’s inequality we deduce

(4.3) \begin{equation}\mathbb{P} [T= n] \leq \mathbb{P} \bigl[ Z^{\prime}_{n-1} > r ( \lambda + \varepsilon )^{n-2} \bigr] \leq \dfrac{\lambda k}{r} \biggl( 1 + \dfrac{\varepsilon}{\lambda} \biggr)^{-n+2}.\end{equation}

Moreover, since $((\xi_{n,j})_{j \geq 1}, Y_n)_{n \geq 1}$ is i.i.d., we know that

(4.4) \begin{equation}\mathbb{E} [ \hat{T} ] = \sum_{n \geq 1} \mathbb{E} [ \hat{T} \mid T=n ] \ \mathbb{P} [T=n] = \sum_{n \geq 1} ( \mathbb{E} [T_n] + n ) \ \mathbb{P} [ T = n],\end{equation}

where

\begin{align*}T_n \,:\!=\, \inf \bigl\{ m \geq 1 \mid Y_{n+m} > r ( \lambda + \varepsilon)^{n+m} \bigr\}, \quad n \geq 1.\end{align*}

For all $n \geq 1$ , we have

\begin{align*}\mathbb{E} [ T_n ] &= 1 + \sum_{m \geq 1} \mathbb{P} [ T_n > m ] = 1 + \sum_{m \geq 1} \prod_{l=1}^m \ \mathbb{P} \bigl[ Y \leq r ( \lambda + \varepsilon )^{n+l} \bigr]\\&= 1 + \Biggl( \sum_{m \geq n+1} \prod_{l=1}^m \ \mathbb{P} \bigl[ Y \leq r ( \lambda + \varepsilon )^l \bigr] \Biggr) \Biggl( \prod_{l=1}^n\ \mathbb{P} \bigl[ Y \leq r ( \lambda + \varepsilon )^l \bigr] \Biggr)^{-1}.\end{align*}

Because of our choice of $r \in (0,\infty)$ , we conclude that $1 \leq \mathbb{E} [T_n] < \infty$ for all $n \geq 1$ . Observe that we can insert this formula for $\mathbb{E} [T_n]$ into (4.4). Then, by using the estimate (4.3), we can deduce $\mathbb{E} [ \hat{T}] < \infty$ and the claim follows.
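The index-shift rearrangement used for $\mathbb{E}[T_n]$ above is purely algebraic and holds at any truncation level for any probabilities $p_l \,:\!=\, \mathbb{P}[Y \leq r(\lambda+\varepsilon)^l] \in (0,1)$. The following exact check uses a stand-in sequence $p_l = l/(l+1)$ of our own choosing:

```python
from fractions import Fraction

def p(l):
    return Fraction(l, l + 1)  # stand-in for P[Y <= r(λ+ε)^l]; any value in (0,1) works

def prod(seq):
    out = Fraction(1)
    for x in seq:
        out *= x
    return out

# sum_{m=1}^{M} prod_{l=1}^{m} p_{n+l}
#   == ( sum_{m=n+1}^{n+M} prod_{l=1}^{m} p_l ) / prod_{l=1}^{n} p_l
n, M = 4, 30
lhs = sum(prod(p(n + l) for l in range(1, m + 1)) for m in range(1, M + 1))
rhs = sum(prod(p(l) for l in range(1, m + 1)) for m in range(n + 1, n + M + 1))
rhs /= prod(p(l) for l in range(1, n + 1))
assert lhs == rhs
```

Each summand on the left is $\prod_{l=n+1}^{n+m} p_l$, and dividing out the common prefix $\prod_{l=1}^{n} p_l$ turns it into a tail of the full product series, which is the identity used in the proof.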

(II) First, note that by possibly increasing $r \in (0,\infty)$ , we can guarantee that there exists $n_0 \in \mathbb{N}$ satisfying both $r n_0^{-\theta} < 1$ and

\begin{align*} \mathbb{P} [ Y \leq \kappa r ] > 0, \quad \text{where $ \kappa \,:\!=\, \prod_{n \geq n_0} \bigl( 1 - r n^{-\theta} \bigr) \in (0,1)$.} \end{align*}

Fix $r \in (0,\infty)$ accordingly. By possibly increasing $n_0 $ , which results in an increase of $\kappa$ , and by using our assumption, we can ensure

(4.5) \begin{equation}\sum_{n \geq n_0} \prod_{l=n_0}^n \ \mathbb{P} \bigl[ Y \leq \kappa r \lambda^l l^{-\theta} \bigr] = \infty.\end{equation}

Fix $\eta$ with $(1+\delta)^{-1} < \eta < 1$ . Then, for all $n \geq n_0$ , let

\begin{equation*}N_n \,:\!=\, \lambda^n \biggl( 1 + \dfrac{1}{n} \biggr) \prod_{l=n_0}^n \bigl( 1 - r l^{-\theta} \bigr), \quad f_n \,:\!=\, \kappa \lambda^{\eta n} , \ g_n\,:\!=\, \kappa r n^{- \theta} \lambda^n.\end{equation*}

By possibly increasing $n_0 \in \mathbb{N}$ and recalling $\eta < 1$ , for all $n \geq n_0$ ,

\begin{align*}\lambda \lfloor N_n \rfloor - \lceil f_n \rceil &\geq \lambda^{n+1} \biggl( 1 + \dfrac{1}{n} \biggr) \prod_{l=n_0}^n \bigl( 1 - r l^{- \theta} \bigr) - \kappa \lambda^{\eta n} n - 2\\&\geq \lambda^{n+1} \biggl( 1 + \dfrac{1}{n} \biggr) \prod_{l=n_0}^n \bigl( 1 - r l^{- \theta} \bigr) - \lambda^{\eta n} n^2 \prod_{l=n_0}^n \bigl( 1 - r l^{-\theta} \bigr)\\&= \biggl( \lambda^{n+1} \biggl( 1 + \dfrac{1}{n} \biggr) - \lambda^{\eta n} n^2 \biggr) \prod_{l=n_0}^n \bigl( 1 - r l^{-\theta} \bigr)\\&= \biggl( \lambda^{n+1} + \dfrac{\lambda^{n+1}}{n+1} + \dfrac{\lambda^{n+1}}{n(n+1)} - \lambda^{\eta n} n^2 \biggr) \prod_{l=n_0}^n \bigl( 1 - r l^{-\theta} \bigr) \\&\geq \lambda^{n+1} \biggl( 1 + \dfrac{1}{n+1} \biggr) \prod_{l=n_0}^n \bigl( 1 - r l^{-\theta} \bigr).\end{align*}

Moreover, by possibly increasing $n_0 \in \mathbb{N}$ , we can guarantee $g_n \geq n$ for all $n \geq n_0$ . So, by inserting the definition of $\kappa$ , for all $n \geq n_0$ we get

\begin{align*}\lceil g_{n+1} \rceil &\leq g_{n+1} + 1 \leq \biggl( 1 + \dfrac{1}{n+1} \biggr) g_{n+1} \\& < \lambda^{n+1} \biggl( 1 + \dfrac{1}{n+1} \biggr) \Biggl( \prod_{l=n_0}^n \bigl( 1 - r l^{-\theta} \bigr) \Biggr) r (n+1)^{-\theta}.\end{align*}

By combining the previous two estimates, for all $n \geq n_0$ , we directly find

(4.6) \begin{equation}\lambda \lfloor N_n \rfloor - \lceil f_n \rceil - \lceil g_{n+1} \rceil \geq N_{n+1}.\end{equation}

For all $n \geq n_0$ , we consider the event

\begin{align*}D_n \,:\!=\, \Biggl\{ \sum_{j=1}^{\lfloor N_n \rfloor} \xi_{n,j} \geq \lambda \lfloor N_n \rfloor - \lceil f_n \rceil \Biggr\}.\end{align*}

Since we assume $\mathbb{E} \bigl[\xi^{1+\delta}\bigr] < \infty$ , by Lemma 4.2 there is $c \in (0,\infty)$ with

\begin{equation*}\mathbb{P} [ D_n^c ] \leq \mathbb{P} \Biggl[ \Biggl\vert \sum_{j=1}^{\lfloor N_n \rfloor} \xi_{1,j} - \lambda \lfloor N_n \rfloor \Biggr\vert > \lceil f_n \rceil \Biggr] \leq \dfrac{2 c \lfloor N_n \rfloor }{\lfloor f_n \rfloor^{1+\delta}} \quad \text{for all $ n \geq n_0$.}\end{equation*}

Recalling our definition of $N_n$ , $f_n$ , and $\eta$ , and noticing that the events $(D_n)_{n \geq n_0}$ are independent, we obtain

\begin{align*}\mathbb{P} [ D] > 0, \quad \text{where $ D \,:\!=\, \bigcap_{n \geq n_0} D_n$.}\end{align*}

Consider the stopping time

\begin{align*}T \,:\!=\, \inf \{ n > n_0 \mid Y_n > g_n \}.\end{align*}

Then, by definition of T and $(g_n)_{n \geq 0}$ , we deduce from (4.5)

\begin{equation*}\mathbb{E} [T] = \sum_{n \geq 0} \mathbb{P} [ T > n] = n_0 +1 + \sum_{n \geq n_0+1} \prod_{l=n_0 + 1}^n \mathbb{P} \bigl[ Y \leq \kappa r l^{-\theta} \lambda^l \bigr] = \infty.\end{equation*}
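The tail-sum formula $\mathbb{E}[T] = \sum_{n \geq 0} \mathbb{P}[T > n]$ used in this display can be checked numerically. A minimal sketch, assuming purely for illustration that $T$ is geometric on $\{1,2,\ldots\}$ (this is not the stopping time of the proof):

```python
# Check the tail-sum formula E[T] = sum_{n >= 0} P[T > n] for a geometric
# variable T on {1, 2, ...} with success probability p.  The geometric law
# is a hypothetical choice for this sketch only.
p = 0.3
mean_direct = 1 / p                                   # E[T] = 1/p
mean_tail = sum((1 - p) ** n for n in range(10_000))  # P[T > n] = (1 - p)^n
assert abs(mean_direct - mean_tail) < 1e-9
print(mean_direct, mean_tail)
```

In the proof the tails $\mathbb{P}[T > n]$ decay so slowly that the series diverges, which is exactly how $\mathbb{E}[T] = \infty$ is obtained.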

Finally, by (H1) and (H2), we may assume that the value of $Z_0 = k \geq 1$ is chosen large enough to ensure that

\begin{align*}\mathbb{P} [C] > 0, \quad \text{where $ C \,:\!=\, \{ Z_{n_0} \geq N_{n_0} \}$.}\end{align*}

Note that, by construction, C and D are independent events. All in all, by (4.6), we can deduce that $\tau \geq T$ on the event $ B\,:\!=\, C \cap D$ , which occurs with positive probability. Hence, by (IND),

\begin{equation*}\mathbb{E} [ \tau ] \geq \mathbb{E} [ \tau 1_B] \geq \mathbb{E} [ T 1_B] = \mathbb{E} [ T \mid B]\ \mathbb{P}[B] = \mathbb{E}[ T]\ \mathbb{P}[ B] = \infty.\end{equation*}

5. Proof of Theorem 2.3

For convenience, we split the proof of Theorem 2.3 into smaller parts by formulating and separately proving the following two lemmas.

Lemma 5.1. Assume $\mathrm{(REG)}$ . Then

\begin{align*}C \,:\!=\, \limsup_{k \rightarrow \infty} q_k\ \mathbb{P} [Y>k]^{-1} \leq \dfrac{\lambda ^{-\alpha}}{1- \lambda^{-\alpha}}.\end{align*}

Lemma 5.2. Assume $\mathrm{(REG)}$ and that all exponential moments of $\xi$ are finite. Moreover, let $N \in \mathbb{Z}_{\geq 2} \cup \{ \infty \}$ . Then

\begin{equation*}\liminf_{k \rightarrow \infty}\ \mathbb{P} [ \tau < N \mid Z_0 = k]\ \mathbb{P} [Y>k]^{-1} \geq \sum_{l=1}^{N-1} \lambda^{-\alpha l}.\end{equation*}

Proof of Lemma 5.1. By truncation of the offspring distribution (see Observation A.1 from Appendix A), we may assume that $\xi$ is almost surely bounded and, in particular, has finite exponential moments.

Let us verify $C < \infty$ as a first step. For this, choose $\varepsilon > 0$ with $\lambda_0 \,:\!=\, \lambda - 2 \varepsilon > 1$ and set $\lambda_1 \,:\!=\, \lambda - \varepsilon$ . Then, by Lemma B.1 from Appendix B, there are $c_1,\ldots,c_{N} \in (0,\infty)$ such that the sequence

\begin{align*} x_0\,:\!=\, 1, \quad x_{n+1} \,:\!=\, \begin{cases}\lambda_1 x_n - \lambda_0^{n}, & \quad n \geq N,\\[4pt]\lambda_1 x_n - c_{n+1} , & \quad n \leq N - 1,\end{cases} \end{align*}

is strictly positive and satisfies $x_n \geq c^n$ for some $c > 1$ and all $n \geq 1$ . Furthermore, for all $k \geq 1$ , we consider the events

\begin{align*}A_{k,n} \,:\!=\, \Biggl\{ \sum_{j=1}^{\lfloor k x_n \rfloor} \xi_{n+1,j} \geq \lambda_1 k x_n \Biggr\}, \quad n \geq 0, \quad A_k \,:\!=\, \bigcap_{n \geq 0} A_{k,n}.\end{align*}

For all $k \geq 1$ , we have

(5.1) \begin{equation}q_k = \mathbb{P} [\tau < \infty, A_k \mid Z_0 = k] + \mathbb{P} \bigl[ \tau < \infty, A_k^c \mid Z_0 =k \bigr],\end{equation}

as well as

\begin{align*}\mathbb{P} \bigl[ \tau < \infty, A_k^c \mid Z_0 = k \bigr] \leq \sum_{n \geq 0} \mathbb{P} \bigl[A_{k,n}^c\bigr].\end{align*}

By the Cramér–Chernoff method or a sub-Gaussian concentration estimate (see [Reference Boucheron, Lugosi and Massart2, Sections 2.1 and 2.2]), and by our knowledge concerning the sequence $(x_n)_{n \geq 0}$ ,

\begin{align*} \mathbb{P} \bigl[ \tau < \infty, A_k^c \mid Z_0 = k\bigr] \rightarrow 0\quad \text{exponentially fast as $ k \rightarrow \infty$.}\end{align*}

Therefore, by applying condition (REG), we deduce

\begin{align*}\lim_{k \rightarrow \infty}\ \mathbb{P} \bigl[ \tau < \infty, A_k^c \mid Z_0 = k \bigr]\ \mathbb{P} [Y>k]^{-1} = 0,\end{align*}

and, by recalling (5.1), we further conclude

\begin{equation*}C = \limsup_{k \rightarrow \infty} \ \mathbb{P} [ \tau < \infty, A_k \mid Z_0 = k]\ \mathbb{P} [Y>k]^{-1} .\end{equation*}

Fix $k \geq 1$ and let $Z_0 = k$ . Then, by construction of $A_k$ and $(x_n)_{n \geq 0}$ ,

\begin{align*}\{ \tau < \infty \} \cap A_k \subseteq \bigcup_{n=1}^{N -1} \{ Y_n > k c_{n+1} \} \cup \bigcup_{n \geq N} \bigl\{ Y_n > k \lambda_0^n \bigr\},\end{align*}

since, on $A_k$ , an induction shows that $Z_n \geq k x_n > 0$ for all $n \geq 0$ as long as none of these emigration events occurs. Consequently

(5.2) \begin{equation}C \leq \sum_{n=1}^{N-1} \mathbb{P} [ Y _n > c_{n+1} k ] + \mathbb{P} \Biggl[ \sum_{n \geq N} Y_n \lambda_{0}^{-n} > k \Biggr].\end{equation}

Note that on the one hand, due to (REG),

\begin{equation*}\lim_{k \rightarrow \infty} \sum_{n=1}^{N-1} \mathbb{P} [ Y > c_{n+1} k ]\ \mathbb{P} [Y>k]^{-1} = \sum_{n=1}^{N-1} c_{n+1}^{-\alpha} < \infty,\end{equation*}

and on the other hand, by applying Theorem 3.1 with $A_1 \equiv \lambda_{0}^{-1}$ ,

\begin{align*}\limsup_{k \rightarrow \infty} \ \mathbb{P} \Biggl[ \sum_{n \geq N} Y_n \lambda_{0}^{-n} > k \Biggr]\ \mathbb{P} [Y>k]^{-1} < \infty.\end{align*}

All in all, by (5.2) we conclude that $C < \infty$ .
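The exponential decay delivered by the Cramér–Chernoff method in this step can be made concrete: for a bounded offspring variable $\xi$ with mean $\lambda$ and any $t < \lambda$ , one has $\mathbb{P}[S_n \leq tn] \leq \rho^n$ with $\rho = \inf_{\theta < 0} \mathrm{e}^{-\theta t}\, \mathbb{E}[\mathrm{e}^{\theta \xi}] < 1$ . A sketch with a hypothetical offspring law (the values, probabilities, and threshold below are illustrative assumptions, not taken from the paper):

```python
import math

# Cramer-Chernoff lower-tail bound P[S_n <= t n] <= rho^n, where
# rho = inf_{theta < 0} e^{-theta t} E[e^{theta xi}].  The law of xi below
# is a hypothetical example with mean 1.7, chosen only for illustration.
values = [0, 1, 2, 3]
probs = [0.1, 0.3, 0.4, 0.2]
mean = sum(v * p for v, p in zip(values, probs))  # plays the role of lambda

def chernoff_rate(t, grid=20_000):
    """Numerically minimise e^{-theta t} * MGF(theta) over theta in (-10, 0)."""
    best = 1.0
    for i in range(1, grid):
        theta = -10.0 * i / grid
        mgf = sum(p * math.exp(theta * v) for v, p in zip(values, probs))
        best = min(best, math.exp(-theta * t) * mgf)
    return best

rho = chernoff_rate(t=1.4)  # t < lambda, as with lambda - eps/2 in the proof
assert rho < 1.0            # so P[S_n <= 1.4 n] decays exponentially in n
print(f"lambda = {mean}, bound P[S_n <= 1.4 n] <= {rho:.4f}^n")
```

The same bound, applied with $t = \lambda_1 = \lambda - \varepsilon$ and $n = \lfloor k x_n \rfloor$ , is what makes $\sum_n \mathbb{P}[A_{k,n}^c]$ vanish faster than any power of $\mathbb{P}[Y>k]$ .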

In the second step, fix $0 < \delta < \lambda_1 = \lambda-\varepsilon$ and set $\lambda_2\,:\!=\, \lambda_1 - \delta$ . Later we will let $\varepsilon \searrow 0$ and $\delta \searrow 0$ , so that $\lambda_1 \nearrow \lambda$ and $\lambda_2 \nearrow \lambda$ . For all $k \geq 1$ , we define the events

\begin{align*}D_{1,k} \,:\!=\, \{ Y_1 > \lambda_1 k \}, \quad D_{2,k} \,:\!=\, \{ \delta k \leq Y_1 \leq \lambda_1 k \}\quad \text{and}\quad D_{3,k} \,:\!=\, \{ Y_1 < \delta k \}.\end{align*}

For every $k \geq 1$ , given $Z_0=k$ we can decompose $\{ \tau < \infty \}$ by using the three events $D_{1,k}$ , $D_{2,k}$ , and $D_{3,k}$ . From this decomposition we obtain

\begin{equation*}q_k \leq \mathbb{P} [ D_{1,k}] + \mathbb{P} [\tau < \infty, D_{2,k} \mid Z_0 = k] + \mathbb{P} [\tau < \infty, \ D_{3,k} \mid Z_0 = k].\end{equation*}

Let us introduce the notation $q_r \,:\!=\, q_{\lfloor r \rfloor}$ for $r \in (0,\infty)$ . Note that the function $r \mapsto q_r$ is monotone non-increasing with respect to r, and

\begin{equation*}\mathbb{P} \Biggl[ \sum_{j=1}^n \xi_{1,j} \leq \biggl( \lambda - \dfrac{\varepsilon}{2} \biggr) n \Biggr] \rightarrow 0 \quad \text{exponentially fast as $ n \rightarrow \infty$,}\end{equation*}

which can be verified, as in the first step, by the Cramér–Chernoff method or a sub-Gaussian concentration estimate. By combining these two remarks with our assumption (REG), for all $\varepsilon > 0$ and $\delta > 0$ ,

\begin{align*}&\limsup_{k \rightarrow \infty} \ \mathbb{P}[ Y>k]^{-1} \ \mathbb{P} [\tau < \infty, D_{2,k} \mid Z_0 = k]\\&\quad =\limsup_{k \rightarrow \infty} \ \mathbb{P} [ Y > k]^{-1} \ \mathbb{P} [D_{2,k}]\ \mathbb{P} [ \tau < \infty \mid D_{2,k}, Z_0 = k]\\ &\quad \leq \limsup_{k \rightarrow \infty} \ \mathbb{P}[ Y > k]^{-1}\ \mathbb{P} [Y \geq \delta k] q_{(\varepsilon/2) k} \\ &\quad = 0,\end{align*}

where for the last step we have also inserted our knowledge that $C < \infty$ . Similarly, we can verify

\begin{align*}\limsup_{k \rightarrow \infty}\ \mathbb{P}[ Y > k]^{-1} \ \mathbb{P} [ \tau < \infty, D_{3,k} \mid Z_0 = k] \leq \limsup_{k \rightarrow \infty}\ \mathbb{P} [Y > k]^{-1} q_{\lambda_2 k}.\end{align*}

All in all, and again by invoking (REG),

\begin{align*}C &= \limsup_{k \rightarrow \infty} \ \mathbb{P} [ Y>k]^{-1} q_k \\&\leq \limsup_{k \rightarrow \infty}\ \mathbb{P} [Y>k]^{-1} \ \mathbb{P} [ Y > \lambda_1 k] + \limsup_{k \rightarrow \infty}\ \mathbb{P} [Y>k]^{-1} q_{\lambda_2 k}\\&=\limsup_{k \rightarrow \infty}\ \mathbb{P} [Y>k]^{-1} \ \mathbb{P}[ Y > \lambda_1 k] + \limsup_{k \rightarrow \infty} \dfrac{ \mathbb{P}[Y > \lambda_2 k] q_{\lambda_2 k}}{\mathbb{P} [Y>k]\ \mathbb{P} [Y>\lambda_2 k]}\\&\leq \lambda_1^{-\alpha} + C \lambda_2^{-\alpha}.\end{align*}

Letting $\delta \searrow 0$ and $\varepsilon \searrow 0$ yields $C \leq \lambda^{-\alpha} + C \lambda^{-\alpha}$ . Since $C < \infty$ , solving for C gives $C \leq \lambda^{-\alpha} \bigl( 1 - \lambda^{-\alpha} \bigr)^{-1}$ , and the claim follows.

Proof of Lemma 5.2. Because of monotonicity, it suffices to prove the claim for $N < \infty$ . Fix $\varepsilon > 0$ . For all $k \geq 1$ and $l=1,\ldots,N-1$ , define

\begin{align*}A_{k,l} \,:\!=\, \Biggl\{ \sum_{j=1}^{\lceil k (\lambda + \varepsilon)^l \rceil} \xi_{l,j} \leq k (\lambda + \varepsilon)^{l+1} \Biggr\}, \quad A_k \,:\!=\, \bigcap_{l=1}^{N-1} A_{k,l}.\end{align*}

Then, for all $l=1,\ldots,N-1$ , $\mathbb{P} [A_{k,l}^c] \rightarrow 0$ exponentially fast as $k \rightarrow \infty$ by the Cramér–Chernoff method. Hence, by (REG),

\begin{align*}L_N^- & \,:\!=\, \liminf_{k \rightarrow \infty}\ \mathbb{P} [ \tau < N \mid Z_0 = k]\ \mathbb{P} [Y>k]^{-1} \\& = \liminf_{k \rightarrow \infty} \ \mathbb{P} [ \tau < N, A_k \mid Z_0 = k]\ \mathbb{P} [Y > k]^{-1}.\end{align*}

By inserting the definitions of both $A_k$ and $(Z_n)_{n \geq 0}$ , we further obtain

\begin{align*} L_N^- \geq \liminf_{k \rightarrow \infty}\ \mathbb{P} \bigl[ \exists l \in \{ 1,\ldots,N-1 \}\colon Y_l \geq k ( \lambda + \varepsilon)^l, A_k \bigr] \ \mathbb{P} [Y > k]^{-1}. \end{align*}

Again, we combine (REG) with the fact that, for all $l=1,\ldots,N-1$ , $\mathbb{P} [A_{k,l}^c] \rightarrow 0$ exponentially fast as $k \rightarrow \infty$ . This gives us

\begin{align*}L_N^- \geq \liminf_{k \rightarrow \infty} \ \mathbb{P} \bigl[ \exists l \in \{ 1,\ldots,N-1 \}\colon Y_l \geq k ( \lambda + \varepsilon)^l \bigr] \ \mathbb{P} [Y > k]^{-1}.\end{align*}

Now, by applying the inclusion–exclusion principle, recalling that the sequence $(Y_m)_{m \geq 1}$ is i.i.d. and working with (REG), we obtain

\begin{align*}L_N^- \geq \sum_{l=1}^{N-1} \lim_{k \rightarrow \infty} \mathbb{P} \bigl[ Y_1 \geq k ( \lambda + \varepsilon)^l \bigr]\ \mathbb{P} [Y>k]^{-1} = \sum_{l=1}^{N-1} ( \lambda + \varepsilon)^{-\alpha l}.\end{align*}

The claim now follows by letting $\varepsilon \searrow 0$ .

Proof of Theorem 2.3. Because of both Lemma 5.1, which covers the case $N=\infty$ , and Lemma 5.2, it suffices to prove that, for fixed $2 \leq N < \infty$ ,

\begin{align*}L_N^+ \,:\!=\, \limsup_{k \rightarrow \infty} \ \mathbb{P} [\tau < N \mid Z_0=k]\ \mathbb{P} [Y>k]^{-1} \leq \sum_{l=1}^{N-1} \lambda^{-\alpha l}.\end{align*}

As in the proof of Lemma 5.1, we can assume that the offspring distribution is bounded and, in particular, all exponential moments of $\xi$ are finite. Let us verify, for arbitrary $\varepsilon_1 \in (0,1)$ with $\lambda_0 \,:\!=\, \lambda - 2 \varepsilon_1 > 1$ ,

(5.3) \begin{equation}L_N^+ \leq \sum_{l=1}^{N-1} \lambda_0^{-\alpha l}.\end{equation}

Let $\lambda_1 \,:\!=\, \lambda - \varepsilon_1$ and define, for all $k \geq 1$ and $l=1,\ldots,N-1$ , the events

\begin{align*}B_{k,l} \,:\!=\, \Biggl\{ \sum_{j=1}^{\lfloor k \lambda_1^l \rfloor} \xi_{l,j} \geq k \biggl(\lambda - \dfrac{\varepsilon_1}{2} \biggr) \lambda_1^l \Biggr\}, \quad B_k \,:\!=\, \bigcap_{l=1}^{N-1} B_{k,l}.\end{align*}

For all $l=1,\ldots,N-1$ , the Cramér–Chernoff method or a sub-Gaussian concentration estimate implies that $\mathbb{P} [B_{k,l}^c] \rightarrow 0$ exponentially fast as $k \rightarrow \infty$ , and hence

\begin{align*}L_N^+ = \limsup_{k \rightarrow \infty}\ \mathbb{P} [ \tau < N, B_k \mid Z_0 = k]\ \mathbb{P} [Y>k]^{-1}.\end{align*}

Consider the events

\begin{align*}C_k \,:\!=\, \bigl\{ \exists l \in \{ 1,\ldots,N-1 \} \colon Y_l > k \lambda_0^l \bigr\}, \quad k \geq 1.\end{align*}

Then, by (REG),

\begin{align*}\limsup_{k \rightarrow \infty} \ \mathbb{P}[C_k]\ \mathbb{P}[Y>k]^{-1}& \leq \limsup_{k \rightarrow \infty} \sum_{l=1}^{N-1} \mathbb{P} \bigl[ Y_l > k \lambda_0^l \bigr]\ \mathbb{P} [Y>k]^{-1}\\& = \sum_{l=1}^{N-1} \lim_{k \rightarrow \infty} \mathbb{P} \bigl[ Y > k \lambda_0^l \bigr]\ \mathbb{P} [Y>k]^{-1} \\&= \sum_{l=1}^{N-1} \lambda_0^{-\alpha l},\end{align*}

and hence, in order to obtain the inequality (5.3), it suffices to show that

(5.4) \begin{equation}\lim_{k \rightarrow \infty}\ \mathbb{P} \bigl[ \tau < N, B_k, C_k^c \mid Z_0 = k \bigr]\ \mathbb{P}[Y>k]^{-1} = 0.\end{equation}

Let $\varepsilon_2 \in (0,1)$ and introduce, for all $k \geq 1$ , the random variable

\begin{equation*} R_k \,:\!=\, \# \bigl\{ l=1,\ldots,N-1 \mid Y_l \geq \varepsilon_2 k \lambda_0^l \bigr\}.\end{equation*}

Then, since (REG) holds and $(Y_m)_{m \geq 1}$ is i.i.d., we easily obtain

(5.5) \begin{equation}\limsup_{k \rightarrow \infty}\ \mathbb{P} \bigl[ \tau < N,\ B_k,\ C_k^c,\ R_k \geq 2 \mid Z_0 = k \bigr]\ \mathbb{P} [Y>k]^{-1} = 0.\end{equation}

For a given $\varepsilon_1 > 0$ , choose $0 < \varepsilon_2 < \varepsilon_1/2$ . Then, for every $k \geq 1$ , by inserting the definition of $B_k$ , $R_k$ and $(Z_n)_{n \geq 0}$ , we can deduce that

\begin{align*}\mathbb{P} [ \tau < N, B_k, R_k=0 \mid Z_0 = k] = 0.\end{align*}

In particular, we obtain

(5.6) \begin{equation}\limsup_{k \rightarrow \infty} \ \mathbb{P} \bigl[ \tau < N, B_k, C_k^c, R_k = 0 \mid Z_0 = k \bigr]\ \mathbb{P} [Y>k]^{-1} = 0.\end{equation}

Combining (5.5) and (5.6), in order to verify (5.4), we only need to show that

(5.7) \begin{equation}\limsup_{k \rightarrow \infty}\ \mathbb{P} \bigl[ \tau < N, C_k^c, R_k =1 \mid Z_0 = k \bigr]\ \mathbb{P} [Y>k]^{-1} = 0.\end{equation}

Let $k \geq 1$ and $l=1,\ldots,N-1$ . Then let us introduce events $B^{\prime}_{k,l,1}, \ldots, B^{\prime}_{k,l,N-1}$ by

\begin{align*}B^{\prime}_{k,l,r} \,:\!=\, \Biggl\{ \sum_{j=1}^{\lfloor k x_{r-1} \rfloor} \xi_{r,j} \geq \lambda_1 k x_{r-1} \Biggr\}, \quad r=1,\ldots,N-1,\end{align*}

where $x_0\,:\!=\, 1$ and

\begin{align*}x_{r+1}\,:\!=\, \lambda_1 x_r - b_r,\quad b_r \,:\!=\, \begin{cases}\lambda_0^r, & \quad r=l,\\[3pt]\varepsilon_2 \lambda_0^r, & \quad r \neq l.\end{cases}\end{align*}

For all $k \geq 1$ , let $B^{\prime}_k$ denote the event that all events $B^{\prime}_{k,l,r}$ , l, $r=1,\ldots,N-1$ , occur. Then, again by the Cramér–Chernoff method or the sub-Gaussian concentration inequality, we know that

\begin{align*}&\limsup_{k \rightarrow \infty} \ \mathbb{P} \bigl[ \tau < N, C_k^c, R_k =1 \mid Z_0 = k \bigr]\ \mathbb{P} [Y>k]^{-1}\\&\quad = \limsup_{k \rightarrow \infty} \ \mathbb{P} \bigl[ \tau < N, B^{\prime}_k, C_k^c, R_k =1 \mid Z_0 = k \bigr]\ \mathbb{P} [Y>k]^{-1}.\end{align*}

According to Lemma B.2 from Appendix B, for every $\varepsilon_1 > 0$ it is possible to choose $\varepsilon_2 > 0$ small enough to ensure

\begin{align*}\mathbb{P} \bigl[ \tau < N, B^{\prime}_k, C_k^c, R_k =1 \mid Z_0 = k \bigr] = 0 \quad \text{for all $ k \geq 1$,}\end{align*}

where we have inserted the definition of the events $B^{\prime}_k$ and $C_k^c$ , as well as the definition of the random variable $R_k$ and of the process $(Z_n)_{n \geq 0}$ . In particular, by choosing $\varepsilon_2 > 0$ small enough, (5.7) follows.

6. Proof of Theorem 2.4

In the following we will again work with the branching process $(Z^{\prime}_n)_{n \geq 0}$ , but more generally assume $Z^{\prime}_0 = k^{\prime} \geq 1$ and possibly $k \neq k^{\prime}$ . Let $q^{\prime} \in [0,1)$ denote the extinction probability of $(Z^{\prime}_n)_{n \geq 0}$ given $k^{\prime} = 1$ and recall the existence of the almost sure martingale limit

\begin{align*}W^{\prime} \,:\!=\, \lim_{n \rightarrow \infty} \lambda^{-n} Z^{\prime}_n \in [0,\infty).\end{align*}

By the Kesten–Stigum theorem [Reference Kesten and Stigum17], it is known that $W^{\prime} = 0$ almost surely if and only if $\mathbb{E} [ \xi \log_+ ( \xi)] = \infty$ . Moreover, if $\mathbb{E} [ \xi \log_+ ( \xi)] < \infty$ , then, given that $(Z^{\prime}_n)_{n \geq 0}$ survives forever, $W^{\prime} > 0$ almost surely.
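The martingale limit $W^{\prime}$ can be observed in a Monte Carlo sketch. The Poisson(1.5) offspring law below is a hypothetical choice for this illustration; it satisfies $\mathbb{E}[\xi \log_+ \xi] < \infty$ , so by Kesten–Stigum the limit is positive on survival, and $\mathbb{E}\bigl[\lambda^{-n} Z^{\prime}_n\bigr] = Z^{\prime}_0 = 1$ for every n:

```python
import math
import random

# Monte Carlo sketch of the martingale limit W' = lim lambda^{-n} Z'_n for a
# Galton-Watson process.  Poisson(1.5) offspring is an assumption made only
# for this illustration; its mean is LAM and E[xi log+ xi] < infinity.
LAM = 1.5

def poisson(lam, rng):
    """Knuth's inversion-by-multiplication sampler (fine for small lam)."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= threshold:
            return k
        k += 1

def normalized_size(n_gen, rng):
    """Simulate Z'_{n_gen} from Z'_0 = 1 and return lambda^{-n_gen} Z'_{n_gen}."""
    z = 1
    for _ in range(n_gen):
        z = sum(poisson(LAM, rng) for _ in range(z))
        if z == 0:
            return 0.0
    return z / LAM ** n_gen

rng = random.Random(7)
samples = [normalized_size(12, rng) for _ in range(2000)]
mean_w = sum(samples) / len(samples)                    # should be near 1
extinct = sum(s == 0.0 for s in samples) / len(samples)
print(f"Monte Carlo mean of lambda^-12 Z'_12: {mean_w:.3f}, extinct: {extinct:.3f}")
```

The empirical mean stays near 1 while a positive fraction of paths is absorbed at 0, mirroring the dichotomy between $\{W^{\prime} = 0\}$ and survival.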

Our main idea is to divide the population into two groups; if the emigration is weak, it may affect only one of these groups.

Lemma 6.1. (Decomposition.) Fix $k_0 > k$ with $\mathbb{P} [Z_1 = k_0] > 0$ and $k^{\prime}\,:\!=\, k_0 - k$ . Let $Z_1^{(1)} \,:\!=\, k$ , $Z_1^{(2)}\,:\!=\, k^{\prime}$ , and define, for all $n \geq 1$ , recursively,

\begin{align*}Z_{n+1}^{(1)} \,:\!=\, \Biggl( \sum_{j=1}^{Z_n^{(1)}} \xi_{n+1,j} - Y_{n+1} \Biggr)_+, \quad Z_{n+1}^{(2)} \,:\!=\, \sum_{j=Z_n^{(1)}+1}^{Z_n^{(1)} + Z_n^{(2)}} \xi_{n+1,j}.\end{align*}

Then

(6.1) \begin{equation}\bigl( Z_n^{(1)} \bigr)_{n \geq 1} \overset{\mathrm{d}}{=} (Z_n)_{n \geq 0}, \quad \bigl( Z_n^{(2)} \bigr)_{n \geq 1} \overset{\mathrm{d}}{=} (Z^{\prime}_n)_{n \geq 0},\end{equation}

and if $\mathrm{(IND)}$ holds, then the processes $\big(Z_n^{(1)} \big)_{n \geq 1}$ and $\big(Z_n^{(2)} \big)_{n \geq 1}$ are independent. Moreover, for all $n \geq 1$ ,

(6.2) \begin{align} Z_n & = Z_n^{(1)} + Z_n^{(2)} \quad \textit{on the event} \quad \{ Z_1 = k_0 \} \cap \bigl\{ Z_n^{(1)} > 0 \bigr\},\end{align}
(6.3) \begin{align} Z_n & \geq Z_n^{(1)} + Z_n^{(2)} \quad \textit{on the event} \quad \{ Z_1 \geq k_0 \} \cap \bigl\{ Z_n^{(1)} > 0 \bigr\}.\end{align}

Proof of Lemma 6.1. By construction, $\bigl( Z_n^{(1)} \bigr)_{n \geq 1}$ and $\bigl( Z_n^{(2)} \bigr)_{n \geq 1}$ are time-homogeneous Markov chains with the same initial states and transition probabilities as $(Z_n)_{n \geq 0}$ and $(Z^{\prime}_n)_{n \geq 0}$ , respectively. Hence (6.1) holds.

By inserting the definitions of $Z_n^{(1)}$ and $Z_n^{(2)}$ , one can straightforwardly verify both (6.2) and (6.3). The details are therefore omitted.

Finally, assume (IND) and let $a_1$ , $a_2$ , $b_1$ , $b_2 \in \mathbb{N}$ . Then, for all $n,m \geq 1$ ,

\begin{align*}&\mathbb{P} \bigl[ Z_{n+1}^{(1)} = a_2, Z_{m+1}^{(2)}= b_2 \mid Z_n^{(1)} = a_1, Z_m^{(2)} = b_1 \bigr] \\&\quad = \mathbb{P} \Biggl[ \Biggl( \sum_{j=1}^{a_1} \xi_{n+1,j} - Y_{n+1} \Biggr)_+ = a_2, \sum_{j=a_1 + 1}^{a_1 + b_1 } \xi_{m+1,j} = b_2 \Biggr] \\&\quad = \mathbb{P} \Biggl[ \Biggl( \sum_{j=1}^{a_1} \xi_{n+1,j} - Y_{n+1} \Biggr)_+ = a_2 \Biggr]\ \mathbb{P} \Biggl[ \sum_{j=a_1 + 1}^{a_1 + b_1} \xi_{m+1,j} = b_2 \Biggr]\\&\quad = \mathbb{P} \bigl[ Z_{n+1}^{(1)} = a_2 \mid Z_n^{(1)} = a_1 \bigr]\ \mathbb{P} \bigl[ Z_{m+1}^{(2)} = b_2 \mid Z_m^{(2)} = b_1 \bigr].\end{align*}

Hence the transitions of $\bigl( Z_n^{(1)} \bigr)_{n \geq 1}$ and $\bigl(Z_n^{(2)} \bigr)_{n \geq 1}$ are independent, and the claim follows by recalling that the initial states are chosen constant.
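The decomposition of Lemma 6.1 can also be checked pathwise in a short simulation that drives Z , $Z^{(1)}$ , and $Z^{(2)}$ with the same offspring and emigration variables; the laws of $\xi$ and Y below are hypothetical choices made only for this sketch:

```python
import random

# Pathwise check of Lemma 6.1: starting from Z_1 = k_0 = k + k', the process Z
# and the pair (Z^(1), Z^(2)) are driven by the SAME offspring variables
# xi_{n,j} and emigration variables Y_n, and Z_n = Z^(1)_n + Z^(2)_n must hold
# as long as Z^(1)_n > 0, cf. (6.2).  The laws of xi and Y are hypothetical
# choices for this illustration.
rng = random.Random(42)

def xi():
    return rng.choice([0, 1, 2, 2, 3])   # offspring, mean 1.6

def emig():
    return rng.choice([0, 0, 1, 2])      # emigration Y, mean 0.75

k, k_prime = 5, 4
z = k + k_prime                          # condition on Z_1 = k_0
z1, z2 = k, k_prime                      # Z^(1)_1 = k, Z^(2)_1 = k'
for n in range(18):
    offspring = [xi() for _ in range(max(z, z1 + z2))]   # shared xi_{n+1,j}
    y = emig()                                           # shared Y_{n+1}
    z = max(sum(offspring[:z]) - y, 0)
    z1, z2 = max(sum(offspring[:z1]) - y, 0), sum(offspring[z1:z1 + z2])
    if z1 > 0:
        assert z == z1 + z2              # decomposition identity (6.2)
print("final sizes:", z, z1, z2)
```

The emigration term is charged entirely to the first group, so the second group evolves as a plain Galton–Watson process, exactly as in the lemma.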

Proof of Theorem 2.4. (a) Let $\mathbb{P} [W>0]>0$ . Then $\mathbb{P} [\tau= \infty]>0$ , and hence by Theorem 2.1 we immediately obtain $\mathbb{E} [ \!\log_+ Y] < \infty$ . Besides, using $Z_n \leq Z^{\prime}_n$ for $Z_0 = Z^{\prime}_0 = k$ and applying the classical Kesten–Stigum theorem (see [Reference Kesten and Stigum17]), we directly conclude $\mathbb{E} [ \xi \log_+ \xi] < \infty$ .

Conversely, let us assume $\mathbb{E}[ \!\log_+ Y] < \infty$ and $\mathbb{E}[ \xi \log_+ \xi] < \infty$ . Then, due to Theorem 2.1, $\mathbb{P}[ \tau = \infty] > 0$ , and hence it suffices to show that

\begin{align*}\mathbb{P}[W>0] = \mathbb{P}[ \tau = \infty].\end{align*}

In order to obtain this claim, we note that $\{ W > 0 \} \subseteq \{ \tau = \infty \}$ and verify

(6.4) \begin{equation}\mathbb{P} [ W=0, \tau = \infty] = 0.\end{equation}

By (H1) and (H2), we know that $\{ \tau = \infty \} = \{ Z_n \rightarrow \infty \}$ almost surely. Moreover, W is monotone with respect to $Z_0 = k$ . Hence

(6.5) \begin{align} \mathbb{P} [ W = 0, \tau = \infty] &\leq \liminf_{ k \rightarrow \infty} \ \mathbb{P} [ W =0 \mid Z_0 = k]\end{align}
(6.6) \begin{align} &= 1 - \limsup_{k \rightarrow \infty} \ \mathbb{P} [ W > 0 \mid Z_0 = k].\end{align}

Choose $\varepsilon > 0$ with $\lambda - \varepsilon > 1$ and set $k_0 ( k) \,:\!=\, \lfloor (\lambda - \varepsilon) k \rfloor$ for all $k \geq 1$ . Then, by the strong law of large numbers,

(6.7) \begin{equation}\limsup_{k \rightarrow \infty} \ \mathbb{P} [ Z_1 \geq k_0(k) \mid Z_0 = k] = 1\end{equation}

and $k_0 ( k)- k \rightarrow \infty$ as $k \rightarrow \infty$ . Fix $k \geq 1$ , $k_0 = k_0 (k)$ and assume $Z_0 = k$ . Then, by making use of the notation introduced in Lemma 6.1 and (6.3),

(6.8) \begin{equation}\mathbb{P} [ W > 0] \geq \mathbb{P} [ Z_1 \geq k_0 ]\ \mathbb{P} \Bigl[ \forall n \geq 0\colon Z_n^{(1)} > 0,\ \lim_{n \rightarrow \infty} \lambda^{-n} Z_n^{(2)} > 0 \Bigr].\end{equation}

Recalling (6.1) and Proposition 4.1, we know that

(6.9) \begin{equation}\mathbb{P} \bigl[ \forall n \geq 0 \colon Z_n^{(1)} > 0 \bigr] = 1 - q_k \rightarrow 1 \quad \text{as $ k \rightarrow \infty$.}\end{equation}

On the other hand, by (6.1) and the Kesten–Stigum theorem [Reference Kesten and Stigum17],

\begin{align*}\mathbb{P} \Bigl[ \lim_{n \rightarrow \infty} \lambda^{-n} Z_n^{(2)} > 0 \Bigr] = \mathbb{P} [ W^{\prime} > 0 \mid Z^{\prime}_0 = k_0(k) -k ] = 1 - (q^{\prime})^{k_0(k) - k},\end{align*}

and, since $k_0 (k) - k \rightarrow \infty$ as $k \rightarrow \infty$ , we further obtain

(6.10) \begin{equation}\lim_{k \rightarrow \infty} \ \mathbb{P} \Bigl[ \lim_{n \rightarrow \infty} \lambda^{-n} Z_n^{(2)} > 0 \Bigr] = 1.\end{equation}

By combining (6.7), (6.9), and (6.10) with (6.8), we conclude

\begin{align*}\limsup_{k \rightarrow \infty} \ \mathbb{P} [ W > 0 \mid Z_0 = k] = 1,\end{align*}

and hence (6.4) and the claim follow by recalling (6.5) and (6.6).

(b) First, let $a=0$ . Assume that there exists $b \in (0,\infty)$ satisfying $\mathbb{P}[ 0 < W < b] = 0$ and $\mathbb{P} [ 0 < W < b + \varepsilon] > 0$ for all $\varepsilon > 0$ . Then we choose $\varepsilon > 0$ and $\delta > 0$ with

(6.11) \begin{equation}\tilde{b} \,:\!=\, \lambda^{-1} ( b + \varepsilon) + \delta < b.\end{equation}

Also fix $k_0 > k$ with $\mathbb{P} [ Z_1 = k_0] > 0$ and again recall the notation from Lemma 6.1. Then, by the decomposition (6.2) and (6.11),

\begin{align*}&\mathbb{P} [ 0 < W < \tilde{b}] \geq\mathbb{P} [ Z_1 = k_0 ]\ \mathbb{P} \Bigl[ \lim_{n \rightarrow \infty} \lambda^{-n} Z_n^{(1)} \in \bigl( 0, \lambda^{-1} (b + \varepsilon) \bigr),\ \lim_{n \rightarrow \infty} \lambda^{-n} Z_n^{(2)} < \delta \Bigr].\end{align*}

By using (IND) and Lemma 6.1, we further deduce that

\begin{equation*}\mathbb{P} [0 < W < \tilde{b}] \geq \mathbb{P} [ Z_1 = k_0 ] \ \mathbb{P} [ 0 < W < b + \varepsilon ]\ \mathbb{P} [0 < W^{\prime} < \lambda \delta ],\end{equation*}

where we assume $Z_0 = k$ and $Z^{\prime}_0 = k_0- k$ . The first two probabilities on the right-hand side of this inequality are positive by construction. The third factor is also positive. This follows, for example, from the fact that $W^{\prime}$ has a strictly positive Lebesgue density on $(0,\infty)$ ; see e.g. [Reference Athreya and Ney1, Chapter 1, Part C]. Consequently $\mathbb{P} [0 < W < \tilde{b}] > 0$ , which contradicts our assumptions on $b \in (0,\infty)$ . Hence the claim is true if $a=0$ .

For arbitrary $a > 0$ , again fix $k_0 >k $ with $\mathbb{P} [Z_1 = k_0] > 0$ and also $\varepsilon > 0$ with $\varepsilon < b-a$ . Then, by the same arguments as for $a=0$ , and again assuming $Z_0 = k$ and $Z^{\prime}_0 = k_0 - k$ , we obtain

\begin{equation*}\mathbb{P} [ a < W < b] \geq \mathbb{P} [ Z_1 = k_0 ] \ \mathbb{P} [0 < W < \lambda \varepsilon ] \ \mathbb{P} [ \lambda a < W^{\prime} < \lambda ( b - \varepsilon) ].\end{equation*}

Since we have verified the claim for $a=0$ , we know that the second factor on the right-hand side of this inequality is positive. Our choice of $\varepsilon$ implies that the third factor is also positive. Hence, as for $a=0$ , we can indeed deduce $\mathbb{P}[a < W < b]> 0$ .

Appendix A. Truncation of the offspring distribution

In some of our proofs we make use of the following observation.

Observation A.1. (Truncation of the offspring distribution.) Let $(Z_n)_{n \geq 0}$ denote a branching process with emigration, which is defined recursively by (1.1) under the same assumptions as in Section 1. Moreover, let $\lambda \,:\!=\, \mathbb{E} [ \xi_{1,1}] \in (0,\infty]$ and assume that the distribution of $\xi_{1,1}$ is unbounded. Then, for every $s \in (0,\infty)$ , there exists $N \in \mathbb{N}$ with the following property. If we define, for all $n, j \geq 1$ ,

\begin{align*}\tilde{\xi}_{n,j} \,:\!=\, \begin{cases}\xi_{n,j},& \xi_{n,j} \leq N,\\[4pt]N,& \xi_{n,j} > N,\end{cases}\end{align*}

as well as $\tilde{Z}_0 \,:\!=\, Z_0 \,:\!=\, k$ , and, recursively,

\begin{align*}\tilde{Z}_{n+1} \,:\!=\, \Biggl( \sum_{j=1}^{\tilde{Z}_n} \tilde{\xi}_{n+1,j} - Y_{n+1} \Biggr)_+, \quad n \geq 0,\end{align*}

then we arrive at a branching process with emigration $(\tilde{Z}_n)_{n \geq 0}$ which satisfies $\tilde{Z}_n \leq Z_n$ almost surely for all $n \geq 0$ and has a bounded offspring distribution. Moreover, $s < \mathbb{E} [ \tilde{\xi}_{1,1}] < \infty$ if $\lambda = \infty$ , and $0 < \lambda - \mathbb{E} [\tilde{\xi}_{1,1}] < s$ if $\lambda < \infty$ . In particular, if $\tilde{\tau}$ denotes the extinction time of the process $(\tilde{Z}_n)_{n \geq 0}$ , then $\tau \geq \tilde{\tau}$ almost surely.

Moreover, if condition $\mathrm{(IND)}$ , respectively $\mathrm{(H2)}$ , is satisfied for the process $(Z_n)_{n \geq 0}$ , then, by construction, the corresponding condition also holds for $(\tilde{Z}_n)_{n \geq 0}$ . Finally, if $\lambda \in (1,\infty)$ and $s < \lambda - 1$ , then, as for the process $(Z_n)_{n \geq 0}$ , we know that condition $\mathrm{(H1)}$ holds for $(\tilde{Z}_n)_{n \geq 0}$ provided $Z_0 = \tilde{Z}_0 = k$ is chosen large enough.
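The coupling behind Observation A.1 is easy to simulate: building the truncated offspring variables from the same draws can only decrease the population, pathwise. A minimal sketch; the offspring and emigration laws are hypothetical choices for this illustration:

```python
import random

# Pathwise coupling of Observation A.1: the truncated process uses
# xi~ = min(xi, N_TRUNC) built from the SAME offspring draws, so
# Z~_n <= Z_n holds for every n, and hence tau >= tau~.  The laws of xi
# and Y below are hypothetical choices made only for this sketch.
rng = random.Random(3)
N_TRUNC = 2

z, z_trunc = 10, 10                          # common start Z_0 = Z~_0 = k
for n in range(15):
    draws = [rng.choice([0, 1, 2, 4]) for _ in range(z)]  # xi_{n+1,j}, mean 1.75
    y = rng.choice([0, 1])                                # emigration Y_{n+1}
    z_trunc = max(sum(min(d, N_TRUNC) for d in draws[:z_trunc]) - y, 0)
    z = max(sum(draws) - y, 0)
    assert z_trunc <= z                      # monotone coupling Z~_n <= Z_n
print("Z_15 =", z, ", truncated Z~_15 =", z_trunc)
```

With these example laws the truncated mean is 1.25, so both processes remain supercritical, matching the role of the parameter s in the observation.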

Appendix B. Notes on the recursion $\boldsymbol{x}_{\boldsymbol{n}\textbf{+}\textbf{1}} = \boldsymbol{a} \boldsymbol{x}_\boldsymbol{n} - \boldsymbol{b}_\boldsymbol{n}$

The following claims can be proved by elementary arguments. We omit the details.

Lemma B.1. Let $x_0\,:\!=\, 1$ , $a \in (1,\infty)$ and $\varepsilon > 0$ with $a-\varepsilon > 1$ . Then there exist $N \in \mathbb{N}$ and $c_1,\ldots,c_N \in (0,\infty)$ such that for the sequence $(x_n)_{n \geq 0}$ defined by

\begin{align*}x_{n+1} \,:\!=\, a x_n - b_n, \quad \textit{where}\ b_n \,:\!=\, \begin{cases}(a - \varepsilon)^{n}, & n \geq N,\\[3pt]c_{n+1},& n \leq N - 1,\end{cases}\end{align*}

there exists $c > 1$ with $x_n \geq c^n > 0$ for all $n \geq 1$ .

Lemma B.2. Let $N \in \mathbb{N}$ , $x_0 \,:\!=\, 1$ , $a \in (1,\infty)$ and $\varepsilon_1 > 0$ with $a- \varepsilon_1 > 1$ . Then there exists $\varepsilon_2 > 0$ with the following property. For arbitrary $l \in \{ 1,\ldots,N-1\}$ , the recursion

\begin{align*} x_{n+1} \,:\!=\, a x_n - b_n, \quad \textit{where}\ b_n \,:\!=\,\begin{cases}(a - \varepsilon_1)^l,&n=l,\\\varepsilon_2 ( a - \varepsilon_1)^n,&n \neq l,\end{cases}\end{align*}

defines reals $x_1,\ldots,x_N$ with $x_j \geq \varepsilon_1 > 0$ for all $j=1,\ldots,N$ .
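The elementary recursions of Lemmas B.1 and B.2 are easy to explore numerically. The sketch below checks the conclusion of Lemma B.2 for the hypothetical parameters $a = 1.5$ , $\varepsilon_1 = 0.4$ , $N = 5$ , and the candidate value $\varepsilon_2 = 0.01$ , none of which come from the paper:

```python
# Numerical illustration of Lemma B.2 with hypothetical parameters
# a = 1.5, eps1 = 0.4 (so a - eps1 = 1.1 > 1), N = 5, eps2 = 0.01: for every
# position l of the "large" decrement b_l = (a - eps1)^l, the recursion
# x_{n+1} = a x_n - b_n stays bounded below by eps1.
A, EPS1, EPS2, N = 1.5, 0.4, 0.01, 5

def trajectory(l):
    """Return x_0, ..., x_N for the decrement pattern with the large term at n = l."""
    xs = [1.0]
    for n in range(N):
        b_n = (A - EPS1) ** n if n == l else EPS2 * (A - EPS1) ** n
        xs.append(A * xs[-1] - b_n)
    return xs

for l in range(1, N):
    xs = trajectory(l)
    assert min(xs[1:]) >= EPS1, (l, xs)   # x_j >= eps1 for j = 1, ..., N
print("all trajectories stay >=", EPS1)
```

The point of the lemma is that one small $\varepsilon_2$ works simultaneously for every position l of the large decrement, which is what the loop over l verifies here.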

Acknowledgements

Many thanks go to my advisor Martin Zerner for his helpful comments and suggestions. I also thank Martin Möhle for bringing the concept of dual Markov chains to my attention, and Vitali Wachtel for providing the preprint [Reference Denisov, Korshunov and Wachtel5]. I am also grateful to both referees for many comments which improved the quality of the present article.

Funding information

The author is very grateful for financial support in the form of a PhD scholarship by the Landesgraduiertenförderung Baden-Württemberg.

Competing interests

There were no competing interests to declare which arose during the preparation or publication process of this article.

References

Athreya, K. B. and Ney, P. E. (1972). Branching Processes. Springer, Berlin.
Boucheron, S., Lugosi, G. and Massart, P. (2013). Concentration Inequalities: A Nonasymptotic Theory of Independence. Oxford University Press.
Buraczewski, D., Damek, E. and Mikosch, T. (2016). Stochastic Models with Power-Law Tails. Springer, Cham.
Dawson, D. A. (2017). Introductory lectures on stochastic population systems, Carleton University, Ottawa. Available at arXiv:1705.03781.
Denisov, D., Korshunov, D. and Wachtel, V. (2016). At the edge of criticality: Markov chains with asymptotically zero drift. Preprint.
Grey, D. R. (1988). Supercritical branching processes with density independent catastrophes. Math. Proc. Camb. Phil. Soc. 104, 413–416.
Grey, D. R. (1994). Regular variation in the tail behaviour of solutions of random difference equations. Ann. Appl. Prob. 4, 169–183.
Grincevičius, A. K. (1975). One limit distribution for a random walk on the line. Lithuanian Math. J. 15, 580–589.
Gut, A. (2005). Probability: A Graduate Course. Springer, New York.
Harris, T. E. (1963). The Theory of Branching Processes. Springer, Berlin.
Heathcote, C. R. (1966). Corrections and comments on the paper ‘A branching process allowing immigration’. J. R. Statist. Soc. B [Statist. Methodology] 28, 213–217.
Iksanov, A. (2016). Renewal Theory for Perturbed Random Walks and Similar Processes. Birkhäuser, Cham.
Kaverin, S. V. (1990). Refinement of limit theorems for critical branching processes with emigration. Theory Prob. Appl. 35, 574–580.
Kellerer, H. G. (1992). Ergodic behaviour of affine recursions I, II, III. Preprints, University of Munich. Available at http://www.mathematik.uni-muenchen.de/~kellerer/
Kellerer, H. G. (2006). Random dynamical systems on ordered topological spaces. Stoch. Dyn. 6, 255–300.
Kesten, H. (1973). Random difference equations and renewal theory for products of random matrices. Acta Math. 131, 207–248.
Kesten, H. and Stigum, B. P. (1966). A limit theorem for multidimensional Galton–Watson processes. Ann. Math. Stat. 37, 1211–1223.
Mikosch, T. (1999). Regular variation, subexponentiality and their applications in probability theory. Eurandom Report 99013.
Nagaev, S. V. and Khan, L. V. (1980). Limit theorems for a critical Galton–Watson process with migration. Theory Prob. Appl. 25, 514–525.
Pakes, A. G. (1971). A branching process with a state dependent immigration component. Adv. Appl. Prob. 3, 301–314.
Pakes, A. G. (1986). The Markov branching-catastrophe process. Stoch. Process. Appl. 23, 1–33.
Pakes, A. G. (1988). The Markov branching process with density-independent catastrophes, I: Behaviour of extinction probabilities. Math. Proc. Camb. Phil. Soc. 103, 351–366.
Pakes, A. G. (1989). The Markov branching process with density-independent catastrophes, II: The subcritical and critical cases. Math. Proc. Camb. Phil. Soc. 106, 369–383.
Pakes, A. G. (1989). Asymptotic results for the extinction time of Markov branching processes allowing emigration, I: Random walk decrements. Adv. Appl. Prob. 21, 243–269.
Pakes, A. G. (1990). The Markov branching process with density-independent catastrophes, III: The supercritical case. Math. Proc. Camb. Phil. Soc. 107, 177–192.
Quine, M. P. (1970). The multi-type Galton–Watson process with immigration. J. Appl. Prob. 7, 411–422.
Seneta, E. (1970). On the supercritical Galton–Watson process with immigration. Math. Biosci. 7, 9–14.
Sevastyanov, B. A. (1974). Verzweigungsprozesse. Akademie, Berlin.
Sevastyanov, B. A. and Zubkov, A. M. (1974). Controlled branching processes. Theory Prob. Appl. 19, 14–24.
Siegmund, D. (1976). The equivalence of absorbing and reflecting barrier problems for stochastically monotone Markov processes. Ann. Prob. 4, 914–924.
Vatutin, V. A. (1978). A critical Galton–Watson branching process with emigration. Theory Prob. Appl. 22, 465–481.
Velasco, M. G., del Puerto, I. and Yanev, G. P. (2017). Controlled Branching Processes. Wiley.
Vervaat, W. (1979). On a stochastic difference equation and a representation of nonnegative infinitely divisible random variables. Adv. Appl. Prob. 11, 750–783.
Vinokurov, G. V. (1987). On a critical Galton–Watson branching process with emigration. Theory Prob. Appl. 32, 350–353.
Yanev, N. M. (1976). Conditions for the degeneracy of $\varphi$-branching processes with random $\varphi$. Theory Prob. Appl. 20, 421–428.
Yanev, N. M. and Mitov, K. V. (1985). Critical branching processes with nonhomogeneous migration. Ann. Prob. 13, 923–933.
Yanev, G. P. and Yanev, N. M. (1995). Critical branching processes with random migration. In Branching Processes (Lecture Notes Statist. 99), ed. C. C. Heyde, pp. 36–46. Springer, New York.
Zerner, M. P. W. (2018). Recurrence and transience of contractive autoregressive processes and related Markov chains. Electron. J. Prob. 23, 1–24.
Zubkov, A. M. (1970). A condition for the extinction of a bounded branching process. Math. Notes 8, 472–477.
Zubkov, A. M. (1974). Analogues between Galton–Watson processes and $\varphi$-branching processes. Theory Prob. Appl. 19, 319–339.