
Random population dynamics under catastrophic events

Published online by Cambridge University Press:  15 August 2022

Patrick Cattiaux*
Affiliation:
Université de Toulouse
Jens Fischer*
Affiliation:
Université de Toulouse and Universität Potsdam
Sylvie Rœlly*
Affiliation:
Universität Potsdam
Samuel Sindayigaya*
Affiliation:
Institut d’Enseignement Supérieur de Ruhengeri
*Postal address: Institut de Mathématiques de Toulouse, CNRS UMR 5219, Université Paul Sabatier, 118 route de Narbonne, 31062 Toulouse CEDEX 9, France. Email address: patrick.cattiaux@math.univ-toulouse.fr
**Postal address: Institut de Mathématiques de Toulouse, CNRS UMR 5219, Université Paul Sabatier, 118 route de Narbonne, 31062 Toulouse CEDEX 9, France; Institut für Mathematik der Universität Potsdam, Karl-Liebknecht-Str. 24–25, 14476 Potsdam OT Golm, Germany. Email address: jens.fischer@math.univ-toulouse.fr
***Postal address: Institut für Mathematik der Universität Potsdam, Karl-Liebknecht-Str. 24-25, 14476 Potsdam OT Golm, Germany. Email address: roelly@math.uni-potsdam.de
****Postal address: Institut d’Enseignement Supérieur de Ruhengeri, Musanze Street NM 155, PO Box 155, Ruhengeri, Rwanda. Email address: sindasam12@gmail.com

Abstract

In this paper we introduce new birth-and-death processes with partial catastrophe and study some of their properties. In particular, we obtain some estimates for the mean catastrophe time, and the first and second moments of the distribution of the process at a fixed time t. This is completed by some asymptotic results.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

The aim of this work is to propose a model for the evolution of the size of a population subjected to exceptional conditions, such as a genocide; see [Reference Sindayigaya13]. To this end, we introduce a new birth-and-death type process with partial catastrophe. Indeed, birth-and-death processes (BD processes for short) are the most standard stochastic models for describing the evolution of a population’s size.

A BD process assigns to each population size a pair of non-negative birth and death rates, governing the birth or death of a single individual. Hence, whenever the population size changes, it grows or decreases by exactly one individual. Under suitable assumptions, the BD process is a continuous-time Markov chain (CTMC) on the discrete state space $\mathbb{N}$ with jump sizes $\pm 1$ . For a more in-depth discussion of continuous-time Markov chains and BD processes, see the textbooks [Reference Anderson1] or [Reference Norris12].

BD processes have a long history and were first discussed, amongst others, for arbitrary birth and death rates by Feller [Reference Feller6] and Kendall [Reference Kendall10]. Note that BD processes can also model immigration and emigration of individuals. Nonetheless, changes in population size that affect not just single individuals but whole groups cannot be captured in this framework. In [Reference Brockwell2], [Reference Brockwell3], and [Reference Brockwell, Gani and Resnick4], Brockwell and his coauthors pioneered an extension of the BD process that includes the possibility of a catastrophe, captured by a sudden and exceptionally large decrease in population size. They were particularly concerned with the probability of extinction as well as the time to extinction in such population models.

Mathematically, one can capture catastrophes by allowing the process to make larger jumps downwards, instead of only the jumps of size 1 in both directions of the classical BD framework. The classical birth-and-death rate pair is denoted by $(\lambda_i,\mu_i)$ for a population of size $i\in\mathbb{N}$ . These are complemented by catastrophe rates $(\gamma_i)_{i\in\mathbb{N}}$ as well as the corresponding law of the catastrophe sizes $(d_i(j))_{j\leq i}$ , where $d_i(j)$ is the probability that a catastrophe in a population of size i leads to j deaths. The infinitesimal generator of the process is described by Brockwell in [Reference Brockwell2], under classical assumptions on the coefficients, as follows.

Definition 1.1. (BD process with catastrophes [Reference Brockwell2].) A BD process $X=(X_t)_{t\geq 0}$ with general catastrophes is a continuous-time Markov chain with values in $\mathbb{N}$ associated with an infinitesimal generator $\tilde{Q}=(\tilde{q}_{ij})_{i,j\in\mathbb{N}}$ , of the form

\begin{equation*}\begin{cases}\tilde{q}_{ij} = \gamma_i\, d_i(i-j) \mathbb{1}_{[0,i)}(j) + \mu_i \mathbb{1}_{i-1}(j) + \lambda_i \mathbb{1}_{i+1}(j),& j\neq i, \\[5pt]\tilde{q}_{ii} = - (\lambda_i + \mu_i + \gamma_i) + \gamma_i d_i(0),\end{cases}\end{equation*}

with, for any $i\in\mathbb{N}$ , $\lambda_i,\mu_i,\gamma_i,d_i(k) \in \mathbb{R}_+$ and $\sum_{k=0}^i d_i(k) = 1$ . Moreover, $\lambda_0 = \mu_0 = \gamma_0 = 0$ and $ \sum_{i=1}^{\infty}{\lambda_i^{-1}} = +\infty$ .
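As a quick illustration of this dynamics, the following sketch simulates one trajectory of a BD process with total catastrophes ( $d_i(i)=1$ ) in the linear-rate special case $\lambda_i = \lambda i$ , $\mu_i = \mu i$ , $\gamma_i = \gamma i$ . The function name and parameter values are our own illustrative choices, not part of the model specification.

```python
import random

def simulate_total_catastrophe(i0, lam, mu, gamma, t_max, seed=0):
    """Gillespie-type simulation of a BD process with total catastrophes
    (Definition 1.1 with d_i(i) = 1) and linear rates lam*i, mu*i, gamma*i.
    Returns the time and state at absorption in 0, or at the horizon t_max."""
    rng = random.Random(seed)
    t, i = 0.0, i0
    while i > 0 and t < t_max:
        total_rate = (lam + mu + gamma) * i
        t += rng.expovariate(total_rate)  # exponential holding time
        u = rng.random() * total_rate
        if u < lam * i:
            i += 1                        # birth
        elif u < (lam + mu) * i:
            i -= 1                        # death
        else:
            i = 0                         # total catastrophe: jump to 0
    return t, i
```

Since $\lambda_0 = 0$ here, the state 0 is absorbing, so a simulated trajectory ends at the first total catastrophe or at natural extinction.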

An important class of BD processes with general catastrophes considers exclusively total catastrophes by setting $d_{i}(j)=\mathbb{1}_{\{j=i\}}$ , that is, in the case of a catastrophe, the process jumps from its current state i to the state 0. Note that in Definition 1.1, since $\tilde{q}_{00}=0$ , the state 0 is absorbing. Therefore a total catastrophe may happen at most once before the population dies out. Moreover, without immigration ( $\lambda_0 = 0$ ), a catastrophe leads to the extinction of the population. Note that the infinitesimal generator $\tilde{Q}$ retains a tridiagonal form if one only considers the states $i\geq 1$ . Van Doorn and Zeifman [Reference van Doorn and Zeifman15, Reference van Doorn and Zeifman16] use this fact to investigate the transition probabilities at any time t and to extend the classical representation result of the transition probabilities of a BD process in terms of associated orthogonal polynomials by Karlin and McGregor [Reference Karlin and McGregor9]. Assuming constant catastrophe rates $\gamma_i \equiv \gamma$ , Swift [Reference Swift14] obtains explicit expressions for the transition probabilities in terms of their generating function.

Brockwell, Gani, and Resnick [Reference Brockwell, Gani and Resnick4] introduce BD processes with different types of catastrophes: geometric catastrophes, uniform catastrophes, and binomial catastrophes; see also [Reference Di Crescenzo, Giorno, Nobile and Ricciardi5]. The binomial model, later studied in [Reference Kapodistria, Tuan and Resing8], considers a binomial redistribution of the population on the set of integers up to the current one. This induces a new expected population size concentrated around a fixed proportion $p \in [0,1]$ of the previous one, which does not seem to correspond to the data we want to fit. The practical and statistical aspects of the problem, however, will not be discussed here.

In the present paper we extend the study of populations under total catastrophes by considering populations that are subject to partial catastrophes. This means that, with positive rate $\gamma_i$ , the population size i can be drastically reduced to a distinguished state $\mathfrak{n}\geq 1$ , as soon as i exceeds $\mathfrak{n}$ . Note that, in contrast to a total catastrophe, the population cannot die out as a consequence of a partial catastrophe. Of course this model is the simplest one in this spirit, and instead of only one catastrophic new state $\mathfrak{n}$ , we could consider a new distribution concentrated around $\mathfrak{n}$ . Nevertheless, the study of this simple mathematical model is the first necessary step.

In Section 2 we define the exact class of processes we will study, and present our main results. We are interested in the first hitting time $T_X^{(\mathfrak{n})}$ of the (catastrophic) population size $\mathfrak{n}$ . Under the assumption of linear birth, death, and catastrophe rates, we will use the tools introduced by Brockwell [Reference Brockwell3], in order to obtain explicit expressions for the expected catastrophe time $\mathbb{E}\bigl[T_X^{(\mathfrak{n})}\mid X_0=n_0\bigr],\, n_0\geq \mathfrak{n}$ . We also identify its limiting behavior for a large initial population, i.e. when $n_0\to\infty$ . Proofs are presented in Section 3. We then study the first two moments of the population size at a fixed time t. After establishing positive recurrence of the BD process with partial catastrophe in Section 4, we compute and discuss explicit upper bounds for the first and second moments of the process in Section 5.

2. Birth-and-death process with partial catastrophe: our main results

We fix a catastrophic state $\mathfrak{n}\in\mathbb{N}^{\ast}$ and set $d_{i}(j)=\mathbb{1}_{\{j=i-\mathfrak{n}\}}, i\geq \mathfrak{n}$ . We introduce a positive rate $\nu$ to model a (minimal) immigration phenomenon when the population vanishes, ensuring the irreducibility of the process. Here $(\gamma_i)_{i\in\mathbb{N}}$ , $(\lambda_i)_{i\in\mathbb{N}}$ , $(\mu_i)_{i\in\mathbb{N}}$ are respectively the catastrophe, the birth, and the death rates, where the index i represents the size of the population. All rates are assumed to be linear in the population size with proportionality coefficients respectively $\gamma,\lambda,\mu > 0$ ; see (2.1). Finally, throughout this paper we assume that $\lambda > \mu$ , that is, the individual birth rate exceeds the individual death rate. Hence, in the absence of a catastrophic event, the basic birth-and-death process with immigration would model a growing population.

We consider the CTMC on the state space $\mathbb{N}$ denoted by $X=(X_t)_{t\geq 0}$ , whose infinitesimal generator $Q=(q_{ij})_{i,j\in\mathbb{N}}$ is given by

(2.1) \begin{equation}q_{ij} =\begin{cases}\lambda_0 = \nu,&\\[5pt]\lambda_i = \lambda\cdot i,& j = i+1,\\[5pt]\mu_i = \mu\cdot i,& j = i-1, \ i\geq 1,\ i\neq \mathfrak{n} + 1,\\[5pt]\gamma_i = \gamma\cdot i,& j =\mathfrak{n},\ i > \mathfrak{n} + 1,\\[5pt]\mu_i + \gamma_i,& j = \mathfrak{n},\ i=\mathfrak{n} + 1,\\[5pt]-\sum_{j\neq i} q_{ij},& j = i.\end{cases}\end{equation}
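To make the definition concrete, the following sketch (our own, not from the model specification) assembles the generator (2.1) as a finite matrix truncated at a maximal population size N; the truncation, which suppresses the birth rate out of state N, is for illustration only.

```python
def build_generator(n_frak, nu, lam, mu, gamma, N):
    """Truncated generator matrix (q_ij), 0 <= i, j <= N, for the
    BD+C_n process of (2.1) with catastrophic state n_frak."""
    Q = [[0.0] * (N + 1) for _ in range(N + 1)]
    for i in range(N + 1):
        if i == 0:
            Q[0][1] = nu                         # immigration out of 0
        else:
            if i < N:
                Q[i][i + 1] = lam * i            # birth
            if i != n_frak + 1:
                Q[i][i - 1] = mu * i             # death
            if i > n_frak + 1:
                Q[i][n_frak] = gamma * i         # partial catastrophe
            elif i == n_frak + 1:
                Q[i][n_frak] = (mu + gamma) * i  # death and catastrophe merge
        Q[i][i] = -sum(Q[i][j] for j in range(N + 1) if j != i)
    return Q
```

Each off-diagonal entry matches a case of (2.1), and each row sums to zero, as required for a conservative generator.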

The choice of linear dependence on the size of the population for the birth and death rates is very natural. On the other hand, the linear rate of the partial catastrophes means that the risk grows with the number of individuals. This situation appears in a population where any individual carries a risk $\gamma$ of triggering a catastrophe, e.g. of transmitting a deadly disease which, once it breaks out, kills a huge part of the population on a very fast time scale. The cardinality $\mathfrak{n}$ may be seen as that of a subpopulation of possible transmitters: they contribute to the rate $\gamma_i$ , but are at the same time immune to the effects of the catastrophe, e.g. through vaccination or natural resistance, so that they do not vanish when the catastrophe sets in.

Mathematically, these assumptions lead to explicit and particularly clean results, which was the main goal of this first analysis of such a population model: to emphasize its qualitative behavior.

The transition graph of X is depicted in Figure 1.

Figure 1. Transition graph of X.

Definition 2.1. A birth-and-death process with partial catastrophe (BD+C $_{\mathfrak{n}}$ process) X is a CTMC with infinitesimal generator Q defined by the equations (2.1) and $X_0 = n_0>\mathfrak{n}$ .

Its so-called catastrophe time $T_X^{(\mathfrak{n})}$ is defined as its hitting time of the catastrophic state $\mathfrak{n}$ :

\begin{equation*} T_X^{(\mathfrak{n})} \;:\!=\; \inf\{t\geq 0 \mid X_t = \mathfrak{n} \}.\end{equation*}

Our results on the catastrophe time $T_X^{(\mathfrak{n})}$ are presented in the next three statements. In Theorem 2.1 we state its almost sure finiteness, and in Theorem 2.2 and Corollary 2.1 we go on to study its expectation and its asymptotic behavior as the initial size of the population tends to $\infty$ .

Theorem 2.1. (Finiteness of catastrophe time.) Let X be a BD+C $_{\mathfrak{n}}$ process whose infinitesimal generator Q is given by (2.1). Then its catastrophe time $T_X^{(\mathfrak{n})}$ is almost surely finite.

Better still, we can compute the first moment of $T_X^{(\mathfrak{n})}$ . Using the notation

(2.2) \begin{equation}\mathbb{E}_{i}\bigl[T_X^{(\mathfrak{n})}\bigr] \;:\!=\; \mathbb{E}\bigl[T_X^{(\mathfrak{n})}\mid X_0 = i + \mathfrak{n}\bigr],\end{equation}

we find an explicit expression for $\mathbb{E}_i\bigl[T_X^{(\mathfrak{n})}\bigr]$ and infer its asymptotic behavior for $i\to\infty$ .

Theorem 2.2. (Explicit computation of the mean catastrophe time.) Let X be a BD+C $_{\mathfrak{n}}$ process whose infinitesimal generator Q is given by (2.1). Let $\underset{\bar{}}{a}<1<\bar{a}$ denote the distinct real zeros of the polynomial $\textbf{x} \mapsto \mu \textbf{x}^2-(\lambda+\mu+\gamma)\textbf{x}+\lambda$ . The mean catastrophe time – defined in (2.2) – is given by

\begin{equation*}\mathbb{E}_i\bigl[T_X^{(\mathfrak{n})}\bigr] = c\biggl(\dfrac{1}{\underset{\bar{}}{a}^i}-\dfrac{1}{\bar{a}^i}\biggr)\sum_{k=1}^{\infty}\dfrac{\underset{\bar{}}{a}^k}{k+\mathfrak{n}}+ c \sum_{k=1}^{i-1}\dfrac{1}{k+\mathfrak{n}} \biggl(\dfrac{1}{\bar{a}^{i-k}}- \dfrac{1}{\underset{\bar{}}{a}^{i-k}}\biggr),\quad i\geq 1 ,\end{equation*}

with $c = \bigl(\sqrt{(\lambda+\mu+\gamma)^2-4\lambda\mu}\bigr)^{-1}$ .
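The formula of Theorem 2.2 can be evaluated numerically. The sketch below (our own) truncates the infinite series at a finite kmax, which is harmless since the smaller zero lies in (0, 1); note, however, that for large i the two exponentially large terms nearly cancel, so floating-point evaluation is only reliable for moderate i.

```python
import math

def mean_catastrophe_time(i, n_frak, lam, mu, gamma, kmax=10_000):
    """Numerical evaluation of Theorem 2.2: E_i[T], the mean catastrophe
    time starting from population size i + n_frak, i >= 1."""
    q = lam + mu + gamma
    disc = math.sqrt(q * q - 4.0 * lam * mu)
    a_lo = (q - disc) / (2.0 * mu)   # smaller zero, in (0, 1)
    a_hi = (q + disc) / (2.0 * mu)   # larger zero, > 1
    c = 1.0 / disc
    series = sum(a_lo ** k / (k + n_frak) for k in range(1, kmax + 1))
    first = c * (a_lo ** (-i) - a_hi ** (-i)) * series
    second = c * sum((a_hi ** -(i - k) - a_lo ** -(i - k)) / (k + n_frak)
                     for k in range(1, i))
    return first + second
```

With illustrative parameters, the computed mean catastrophe time indeed decreases as the initial population grows, in line with Corollary 2.1 below.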

Moreover, we obtain an explicit decay rate of the mean catastrophe time for large initial populations, which is in some sense counterintuitive.

Corollary 2.1. Let X be the above BD+C $_{\mathfrak{n}}$ process. The asymptotic behavior of its mean catastrophe time for large initial populations is

\begin{equation*}\mathbb{E}_i\bigl[T_X^{(\mathfrak{n})}\bigr]={\textrm{O}} (i^{-1}) .\end{equation*}

Proofs of Theorems 2.1 and 2.2 and of Corollary 2.1 are postponed to Section 3.

After establishing positive recurrence of the BD+C $_{\mathfrak{n}}$ process in Section 4, we will present in Section 5 the proofs of the following upper bounds for the first and second moments of the process.

Theorem 2.3. (Upper bound for the mean.) Consider $(X_t)_{t\geq 0}$ , the BD+C $_{\mathfrak{n}}$ process whose infinitesimal generator Q is given by (2.1). Then the following upper bound holds:

\begin{equation*}\mathbb{E}[X_t]\leq \bar{m}(t), \quad t\geq 0,\end{equation*}

where the function $\bar{m}$ is the solution to the differential equation

(2.3) \begin{equation}\begin{cases} \bar{m}(0) = n_0,&\\[5pt]\bar{m}'(t) = - \gamma \, \bar{m}(t)^2 + (\lambda -\mu +\gamma\,\mathfrak{n}) \, \bar{m}(t) +\nu, & t>0. \end{cases}\end{equation}
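Equation (2.3) is a Riccati equation whose right-hand side has a unique positive zero, which attracts the flow. A small numerical sketch (our own choice of integrator, step size, and parameters):

```python
import math

def mbar(t_end, n0, lam, mu, gamma, n_frak, nu, h=1e-3):
    """RK4 integration of the Riccati equation (2.3):
    m' = -gamma*m^2 + (lam - mu + gamma*n_frak)*m + nu,  m(0) = n0."""
    b = lam - mu + gamma * n_frak
    f = lambda m: -gamma * m * m + b * m + nu
    m = float(n0)
    for _ in range(int(t_end / h)):
        k1 = f(m)
        k2 = f(m + 0.5 * h * k1)
        k3 = f(m + 0.5 * h * k2)
        k4 = f(m + h * k3)
        m += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
    return m

def mbar_equilibrium(lam, mu, gamma, n_frak, nu):
    """Positive zero of the right-hand side of (2.3): the limit of mbar."""
    b = lam - mu + gamma * n_frak
    return (b + math.sqrt(b * b + 4.0 * gamma * nu)) / (2.0 * gamma)
```

Starting above the equilibrium, the bound decreases and settles at the positive zero, so the bound on $\mathbb{E}[X_t]$ is uniform in time.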

We also obtain a similar result for the second moment.

Theorem 2.4. (Upper bound for the second moment.) Let $(X_t)_{t\geq 0}$ be the BD+C $_{\mathfrak{n}}$ process whose infinitesimal generator Q is given by (2.1). Consider the function $\bar{m}$ solution to (2.3) and assume that the initial size of the population is larger than a constant $\bar{m}_e$ computed in (5.4). Then the second moment admits the upper bound

\begin{equation*}\mathbb{E}\bigl[X_t^2\bigr]\leq \bar{v}(t), \quad t\geq 0,\end{equation*}

the function $\bar{v}(t)$ being the solution to the differential equation

\begin{equation*}\begin{cases} \bar{v}(0) = n_0^2,&\\[5pt]\bar{v}'(t) = 2(\lambda- \mu )\bar{v}(t) + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )\bar{m}(t) - \gamma \bar{v}(t)^{{3}/{2}} +\nu, & t>0. \end{cases}\end{equation*}

Such a $\bar{v}$ is bounded uniformly in time, and thus so is $\mathbb{E}\bigl[X_t^2\bigr]$ .
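A crude Euler integration of the coupled system for $\bar{m}$ and $\bar{v}$ illustrates this uniform-in-time boundedness numerically. This is a sketch with parameters of our own choosing; Theorem 2.4 assumes the initial size exceeds a constant $\bar{m}_e$ from (5.4), so we simply pick a large $n_0$ .

```python
def moment_bounds(t_end, n0, lam, mu, gamma, n_frak, nu, h=1e-3):
    """Explicit Euler integration of (2.3) for mbar together with the
    differential equation for vbar of Theorem 2.4."""
    b = lam - mu + gamma * n_frak
    m, v = float(n0), float(n0) ** 2
    for _ in range(int(t_end / h)):
        dm = -gamma * m * m + b * m + nu
        dv = (2.0 * (lam - mu) * v + (lam + mu + gamma * n_frak ** 2) * m
              - gamma * v ** 1.5 + nu)
        m += h * dm
        v += h * dv
    return m, v
```

The damping term $-\gamma \bar{v}^{3/2}$ dominates the linear growth term for large $\bar{v}$ , which is why the solution flattens out instead of exploding.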

We close Section 5 with a discussion of the quality of the bounds $\bar{m}$ and $\bar{v}$ as defined in the previous theorems, with a focus on their long-time behavior.

3. Expected catastrophe time

This section consists of three subsections which lead through the proofs of Theorem 2.1, Theorem 2.2, and Corollary 2.1.

The proofs are based on various lemmas and propositions which shed light on recurrence relations of order 2. In particular, we examine the limit behavior of their solutions in Lemma 3.2, the speed of divergence in Lemma 3.3, the dependence on the initial values in Lemmas 3.4 and 3.5, and possible explicit expressions for the minimal solution in Lemmas 3.6 and 3.8. A road map for the whole proof structure can be seen in Figure 2.

Figure 2. Dependences and implications of results in Section 3.

3.1. Finiteness of time of catastrophe

We analyze the catastrophe time $T^{(\mathfrak{n})}_X$ using the auxiliary process $Y=(Y_t)_{t\geq 0}$ defined by shifting the process X by $\mathfrak{n}$ and stopping it at catastrophe time $T^{(\mathfrak{n})}_X$ :

\begin{equation*}Y_t \;:\!=\; X_{t\wedge T^{(\mathfrak{n})}_X}-\mathfrak{n}, \quad t\geq 0.\end{equation*}

We discuss its properties and their implications for $T^{(\mathfrak{n})}_X$ in the following straightforward lemma.

Lemma 3.1. Let X be the BD+C $_{\mathfrak{n}}$ process whose generator Q satisfies (2.1) with initial condition $X_0 = n_0 > \mathfrak{n}$ . Then Y is a BD process with total catastrophe whose birth, death, and catastrophe rates are affine and given respectively by

(3.1) \begin{equation} \tilde{\lambda}_i\;:\!=\; \lambda(i+\mathfrak{n}), \quad \tilde{\mu}_i\;:\!=\; \mu(i+\mathfrak{n}) \quad\text{and}\quad \tilde{\gamma}_i\;:\!=\; \gamma(i+\mathfrak{n}), \quad i\in\mathbb{N} .\end{equation}

The catastrophe time of X corresponds to the extinction time for the process Y.

While the proof of this lemma is evident, it yields a helpful tool for further analysis. In this way we switch from characterizing the first hitting time of the state $\mathfrak{n}$ for the process X to the more classical study of the extinction time of the process Y. In particular, the state 0 is absorbing for Y and is a boundary state. This gives us an advantage compared to analyzing $T^{(\mathfrak{n})}_X$ directly, since X may leave the state $\mathfrak{n}$ again, to either $\mathfrak{n}+1$ or $\mathfrak{n}-1$ . We may thus directly use the tools developed by Brockwell to analyze extinction times for birth-and-death processes with general catastrophes; see [Reference Brockwell3, Lemma 3.1]. We recall this result in the following proposition.

Proposition 3.1. ([Reference Brockwell3, Lemma 3.1]) For fixed $u\geq 0$ , consider the sequence $(\alpha_i(u))_{i\in\mathbb{N}}$ defined iteratively by

\begin{equation*}\alpha_0(u)=0, \quad \alpha_1(u) = 1, \quad \sum_{j=1}^{i+1} \tilde{q}_{ij}\, \alpha_j(u) = u \, \alpha_i(u),\quad i\in\mathbb{N}^{\ast}.\end{equation*}

Let $\alpha_\infty(0)\;:\!=\; \lim_{i\to\infty} \alpha_i(0)$ . Let $(Z_t)_{t\geq 0}$ be a BD process with general catastrophes as defined in Definition 1.1. Its time to extinction $T^0_Z\;:\!=\; \inf\{t\geq 0 \mid Z_t = 0\}$ verifies

  (i) $\mathbb{P}\bigl[T^0_Z< \infty\mid Z_0=i\bigr] = 1-{\alpha_i(0)}/{\alpha_\infty(0)}$ , $i\in\mathbb{N}$ ,

  (ii) $\mathbb{P}\bigl[T^0_Z< \infty \mid Z_0=i\bigr] = 1 \ {\textit{for all}}\ i\in\mathbb{N}^{\ast}\Leftrightarrow \mathbb{P}\bigl[T^0_Z < \infty\mid Z_0=1\bigr] = 1 \Leftrightarrow \alpha_\infty(0)=\infty$ .

It is worth noticing that while Brockwell [Reference Brockwell2] studied the linear case $\tilde{\lambda}_i =\lambda \, i$ in detail, we have to consider the shifted (affine) case $\tilde{\lambda}_i =\lambda i + \lambda \mathfrak{n}$ , so we have to perform all calculations when applying Proposition 3.1 to the process Y.

Using Proposition 3.1 to study $T^0_Y$ , we only have to analyze the behavior of the associated sequence $(a_i(0))_{i\in\mathbb{N}}$ . This is the aim of the following lemma.

Lemma 3.2. Using the notations (3.1), consider for $u\geq 0$ and $\textrm{a}>0$ the sequence $(a_i( u ))_{i\in\mathbb{N}}$ defined by the recurrence relation

(3.2) \begin{equation}\begin{cases}(a_0( u ),\,a_1( u )) = (0,\textrm{a}) , \\[5pt]\tilde{\lambda}_i a_{i+1}( u ) = ( u +\tilde{\lambda}_i + \tilde{\mu}_i + \tilde{\gamma}_i)\, a_i( u ) - \tilde{\mu}_i \, a_{i-1}( u ),& i\geq 1.\end{cases}\end{equation}

Then $(a_i( u ))_{i\in\mathbb{N}}$ is non-decreasing and $a_\infty(0)=\infty$ .

Proof. Fix $u\geq 0 $ and set $a_i\;:\!=\; a_i(u)$ to improve readability. Note that (3.2) is equivalent to

\begin{equation*}\tilde{\lambda}_i (a_{i+1}-a_i) = u a_i + \tilde{\mu}_i (a_i - a_{i-1}) +\tilde{\gamma}_i (a_i-a_0).\end{equation*}

First $a_1-a_0 = \textrm{a} > 0$ by assumption. Fix $i\geq 1$ and suppose that for $j\leq i-1$ the inequality $a_{j+1}-a_j\geq 0$ holds. Hence, in particular, $a_i\geq 0$ . Moreover

\begin{equation*} \tilde{\lambda}_i( a_{i+1} - a_{i} ) = u a_i + \tilde{\mu}_i (a_i - a_{i-1}) +\tilde{\gamma}_i \sum_{j=1}^i(a_j-a_{j-1})\geq u a_i \geq 0.\end{equation*}

By induction $(a_i)_{i\in\mathbb{N}}$ is non-decreasing, and thus $a_i\geq 0$ for all $i\in\mathbb{N}$ . Moreover, we obtain $\tilde{\lambda}_i a_{i+1} \geq (\tilde{\lambda}_i + \tilde{\gamma}_i)a_i$ . Hence, for all $i\in\mathbb{N}$ ,

\begin{equation*} a_{i+1} \geq \dfrac{\tilde{\lambda}_i + \tilde{\gamma}_i}{\tilde{\lambda}_i}a_i = \biggl(1+\dfrac{\gamma}{\lambda}\biggr) a_i\geq \ldots \geq \biggl(1+\dfrac{\gamma}{\lambda}\biggr)^i \textrm{a},\end{equation*}

and thus $a_\infty(0)=\lim_{i\to\infty} a_i = \infty$ with at least a geometric rate.
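The recurrence (3.2) is easy to iterate numerically. The following sketch (our own) checks the two conclusions of the proof in the case $u=0$ : monotonicity, and growth at least geometric with ratio $1+\gamma/\lambda$ .

```python
def a_seq(a1, lam, mu, gamma, n_frak, imax, u=0.0):
    """Iterate the recurrence (3.2) with the affine rates (3.1):
    lam*(i+n)*a_{i+1} = (u + (lam+mu+gamma)*(i+n))*a_i - mu*(i+n)*a_{i-1}."""
    a = [0.0, a1]
    for i in range(1, imax):
        w = i + n_frak
        a.append(((u + (lam + mu + gamma) * w) * a[i] - mu * w * a[i - 1])
                 / (lam * w))
    return a
```

For $u=0$ the factors $i+\mathfrak{n}$ cancel, so the iteration reduces to a linear recurrence with constant coefficients, consistent with the geometric divergence proved above.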

Indeed, in the following lemma, we quantify the exact speed of divergence for the sequence $(a_i(0))_{i\in\mathbb{N}}$ .

Lemma 3.3. Take $u=0$ in (3.2). Then the sequence $ ({a_i(0)}/{ a_{i+1}(0)})_{i\in\mathbb{N}}$ converges to the smaller real zero $\underset{\bar{}}{a}\in(0,1)$ of the polynomial

\begin{equation*}\textbf{x} \mapsto \mu \textbf{x}^2-(\lambda+\mu+\gamma)\textbf{x}+\lambda.\end{equation*}

Proof. Set $a_i\;:\!=\; a_i(0)$ to improve readability. For $i\geq 2$ ,

\begin{align*}a_{i-1} a_{i}^{-1} &= \dfrac{\tilde{\lambda}_{i-1}a_{i-1}}{(\tilde{\lambda}_{i-1} + \tilde{\gamma}_{i-1} + \tilde{\mu}_{i-1})a_{i-1} - \tilde{\mu}_{i-1}a_{i-2}}\\[5pt] &= \dfrac{\lambda a_{i-1}}{(\lambda + \gamma + \mu )a_{i-1} - \mu\, a_{i-2}}.\end{align*}

Setting $z_i = a_i a_{i+1}^{-1}$ for $i\geq 1$ , we therefore have

\begin{equation*}z_1 = \dfrac{\lambda }{\lambda + \gamma + \mu}, \quad z_i = \dfrac{\lambda}{\lambda + \gamma + \mu - \mu z_{i-1}}.\end{equation*}

Consider the map $\phi\colon [0,1]\to[0,1]$ defined by $\phi(x)= {\lambda}/{(\lambda + \gamma + \mu - \mu x)}$ . Since $0<\phi(0)<\phi(1) < 1$ and $\phi$ is strictly increasing, $\phi$ has a unique fixed point $\underset{\bar{}}{a}\in(0,1)$ given by

\begin{equation*}\underset{\bar{}}{a} = \dfrac{\lambda + \mu+\gamma - \sqrt{(\lambda + \mu+\gamma)^2 - 4\lambda\mu}}{2\mu}.\end{equation*}

Moreover, $\lim_{i\to\infty}z_i= \underset{\bar{}}{a}$ .
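The fixed-point iteration from this proof converges quickly, since $\phi$ is a contraction near its fixed point. A short numerical check (our own) against the closed-form smaller zero:

```python
import math

def smaller_zero(lam, mu, gamma, n_iter=200):
    """Iterate z_i = phi(z_{i-1}) = lam / (lam + gamma + mu - mu*z_{i-1}),
    starting from z_1 = lam/(lam + gamma + mu) as in the proof of
    Lemma 3.3, and return the iterate with the closed-form fixed point."""
    z = lam / (lam + gamma + mu)
    for _ in range(n_iter):
        z = lam / (lam + gamma + mu - mu * z)
    q = lam + mu + gamma
    closed = (q - math.sqrt(q * q - 4.0 * lam * mu)) / (2.0 * mu)
    return z, closed
```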

Applying Proposition 3.1 and Lemma 3.2 to the shifted process Y defined in Lemma 3.1, we obtain that the extinction time $T_Y^0$ is almost surely finite. Hence the catastrophe time $T^{(\mathfrak{n})}_X$ associated with $(X_t)_{t\geq 0}$ is almost surely finite. This completes the proof of Theorem 2.1.

3.2. Explicit computation of mean catastrophe time

In this subsection we prove Theorem 2.2. To this end we first recall a well-known result about the mean of hitting times; see e.g. [Reference Norris12]. If Y is an irreducible CTMC with infinitesimal generator $Q=(q_{ij})_{ij}$ , the sequence $\bigl(\mathbb{E}\bigl[T_Y^0\mid Y_0=i\bigr]\bigr)_i$ is the minimal positive solution of the linear system

\begin{equation*} \begin{cases}x_i =0,& i=0,\\[5pt] -\sum_{j\in\mathbb{N}} q_{ij} x_j = 1,& i\not=0 .\end{cases}\end{equation*}

Hence, since $T_Y^0=T_X^{(\mathfrak{n})}$ a.s., the sequence $\bigl(\mathbb{E}_i\bigl[T_X^{(\mathfrak{n})}\bigr]\bigr)_{i\geq 0}$ satisfies the recurrence relation

(3.3) \begin{equation}\tilde{\lambda}_i x_{i+1} = -1 + (\tilde{\gamma}_i+\tilde{\lambda}_i+ \tilde{\mu}_i)x_{i} - \tilde{\mu}_i x_{i-1},\quad i\geq 1,\end{equation}

where $x_0=0$ and the value of $x_1$ has to be determined.

In what follows, we use Lemma 3.2 extensively, always fixing $u=0$ and abbreviating $a_i \;:\!=\; a_i(0)$ . We focus first on the dependence of the solution of (3.3) with respect to the value of $x_1 \in \mathbb{R}$ .

Lemma 3.4. Let $(x_i)_{i\in\mathbb{N}}$ be a solution sequence to the recurrence relation (3.3), where the coefficients are given by (3.1). Then there is at most one possible value for $x_1$ such that the sequence $(x_i)_{i\in\mathbb{N}}$ is bounded.

Proof. Consider two solutions $(x_i)_{i\in\mathbb{N}}$ and $(x'_{\!\!i})_{i\in\mathbb{N}}$ of the recurrence relation (3.3) satisfying $x_1=x$ resp. $x'_{\!\!1}=x'< x$ . The sequence ( $\Delta_i\;:\!=\; x_i - x'_{\!\!i}, i\in\mathbb{N}$ ) satisfies (3.2) with $\Delta_1= x-x'$ and $u=0$ . By Lemma 3.2, $(\Delta_i)_{i\in\mathbb{N}}$ is a non-decreasing sequence tending to $\infty$ as $i\to\infty$ .

If there were two bounded solution sequences with different values $x$ and $x'$ for $i=1$ , their difference would also be bounded, which is a contradiction.

The following lemmas yield, step by step, the value of $x_1$ for which the solution sequence of (3.3) is uniformly bounded. As a first step, we show in Lemma 3.5 a dichotomy of the divergence behavior around a certain initial value $\hat{x}$ . This will provide a direct argument for the radius of convergence of an associated generating function in Lemma 3.7.

Lemma 3.5. There exists a unique value $\hat{x}>0$ such that:

  (i) if $x_1 < \hat{x}$ , $\lim_{i\to\infty}x_i = -\infty$ ,

  (ii) if $x_1 > \hat{x}$ , $\lim_{i\to\infty}x_i = +\infty$ .

Proof. Consider unbounded solutions $(x_i)_{i\in\mathbb{N}}$ to (3.3). As in the proof of Lemma 3.4, take two solutions $(x_i)_{i\in\mathbb{N}}$ and $(x'_{\!\!i})_{i\in\mathbb{N}}$ of (3.3) with $x'_{\!\!1}=x'< x=x_1$ . Since the sequence $(\Delta_i=x_i - x'_{\!\!i}, \ i\in\mathbb{N})$ is non-decreasing by Lemma 3.2, we obtain $x_i > x'_{\!\!i}$ for all $i\geq 1$ . Therefore solutions preserve, for any i, the order of their values for $i=1$ .

Consider the case where there is an $i_0\in\mathbb{N}$ such that $x_{i_0-1} \geq 0$ and $x_{i_0}<0$ . Note that if there is a $k\in\mathbb{N}$ such that $x_{k}<0$ , such a pair $(x_{i_0-1},x_{i_0})$ may always be found since $x_0 = 0$ . Then, because $\tilde{\gamma}_i > 0$ for all $i\in\mathbb{N}$ , we have

(3.4) \begin{equation}\tilde{\lambda}_{i_0} (x_{i_0 + 1} - x_{i_0}) = -1 + \tilde{\gamma}_{i_0} x_{i_0} + \tilde{\mu}_{i_0}(x_{i_0} - x_{i_0-1}) < 0.\end{equation}

Thus $x_{i_0+1} < x_{i_0} < 0$ and

\begin{equation*}0 > \tilde{\gamma}_{i_0} x_{i_0} > \tilde{\gamma}_{i_0} x_{i_0+1} > \tilde{\gamma}_{i_0+1}x_{i_0+1}.\end{equation*}

Hence, inductively, we see that starting from $i_0$ the sequence $(x_i)_{i\geq {i_0}}$ is decreasing and negative. In particular, multiplying the recurrence relation (3.4) by $-1$ and using Lemma 3.2 with $u = 1$ , we obtain that $(x_i)_{i\geq {i_0}}$ diverges to $-\infty$ as $i\to\infty$ . In particular, once the sequence $(x_i)_{i\in\mathbb{N}}$ becomes negative, it stays negative.

Secondly, consider a positive unbounded solution $(x'_{\!\!i})_{i\geq 0}$ of (3.3). Then, for any level $C>0$ , there is an index $I\in\mathbb{N}$ such that $x'_{\!\!i} < C$ for all $i<I$ and $x'_{\!\!I} \geq C$ . Let $C>0$ be sufficiently large that $C\tilde{\gamma}_I>1$ , where $I=\inf\{i\colon x'_{\!\!i} \geq C\}$ . Then, by the recurrence relation (3.3), we obtain

\begin{equation*}\tilde{\lambda}_I(x'_{\!\!I+1}-x'_{\!\!I}) = \tilde{\mu}_I(x'_{\!\!I}-x'_{\!\!I-1}) + \tilde{\gamma}_I x'_{\!\!I} - 1 > C \tilde{\gamma}_I - 1>0.\end{equation*}

Hence $x'_{\!\!I+1}>x'_{\!\!I}$ and thus $\tilde{\gamma}_{I+1} x'_{\!\!I+1}> \tilde{\gamma}_I x'_{\!\!I}>1$ . Furthermore, there is an $\varepsilon>0$ such that $x'_{\!\!I+1}\geq C+\varepsilon$ and $x'_{\!\!I}< C+\varepsilon$ . Applying the same arguments to $x'_{\!\!I+1}$ with a new constant C set to $C+\varepsilon$ yields inductively that $(x'_{\!\!i})_{i\geq I}$ is strictly increasing, by assumption unbounded, and therefore divergent to $+\infty$ .

Therefore, since two solution sequences of (3.3) preserve the order of their initial values, using Lemma 3.4, there exists a critical value $\hat{x}>0$ such that for $x_1> \hat{x}$ the solution to (3.3) tends to $\infty$ , while for $x_1< \hat{x}$ it tends to $-\infty$ .

In fact we will be able to compute the critical value $\hat{x}$ employing generating functions as a tool.

Lemma 3.6. Let $(x_i)_{i\in\mathbb{N}}$ denote the solution sequence to the recurrence relation (3.3) with $x_1\;:\!=\; x$ . Its generating function $\mathcal{E}(z) \;:\!=\; \sum_{i=0}^{\infty}x_i z^i$ satisfies

(3.5) \begin{equation} \mathcal{E}(z) = z \dfrac{\lambda x - \sum_{k=1}^{\infty}{z^k}/{(k+\mathfrak{n})}}{\mu z^2 - q z + \lambda}\end{equation}

within its radius of convergence $R\in[0,\infty)$ , where $q \;:\!=\; \lambda + \mu + \gamma$ .

Proof. Set $\tilde{q}_i \;:\!=\; q(i+\mathfrak{n})$ such that (3.3) becomes

\begin{equation*}\tilde{q}_i x_{i} = \tilde{\lambda}_i x_{i+1} + \tilde{\mu}_i x_{i-1} + 1.\end{equation*}

By exploiting this formulation, we obtain

\begin{align*}\mathcal{E}(z) &= \sum_{i=1}^{\infty}x_i z^i \\[3pt]&= \sum_{i=1}^{\infty} (\tilde{\lambda}_i x_{i+1} + \tilde{\mu}_i x_{i-1} + 1)\dfrac{z^i}{\tilde{q}_i}\\[3pt]&= \dfrac{\lambda}{q}\sum_{i=1}^{\infty} x_{i+1}z^i + \dfrac{\mu}{q}\sum_{i=1}^{\infty} x_{i-1}z^i + \sum_{i=1}^{\infty}\dfrac{z^i}{\tilde{q}_i}\\[3pt]&= \dfrac{\lambda}{qz}\sum_{i=1}^{\infty} x_{i+1}z^{i+1} + \dfrac{\mu}{q}z\sum_{i=1}^{\infty} x_i z^i + \sum_{i=1}^{\infty}\dfrac{z^i}{\tilde{q}_i}\\[3pt]&= \dfrac{\lambda}{qz}(\mathcal{E}(z) - xz )+ \dfrac{\mu}{q}z\mathcal{E}(z) + \sum_{i=1}^{\infty}\dfrac{z^i}{\tilde{q}_i} ,\end{align*}

which leads to the result (3.5).

Based on the explicit generating function, we derive an expression of the coefficients by differentiating in 0. To justify this approach we have to establish that $\mathcal{E}$ converges within a positive radius of convergence.

Lemma 3.7. The generating function $\mathcal{E}$ given by (3.5) has a positive radius of convergence.

Proof. Let $\hat{x}$ be as in Lemma 3.5. Consider solutions $(x_i)_{i\in\mathbb{N}}$ and $(x'_{\!\!i})_{i\in\mathbb{N}}$ to (3.3) with $x'_1=x'< \hat{x} < x=x_1$ . Since there is an $i_0\in\mathbb{N}$ such that $x'_{\!\!i}<0$ for all $i\geq i_0$ , it follows for $\Delta_i\;:\!=\; x_i-x'_{\!\!i}$ that

\begin{equation*}\Delta_i\in [\!\max\{x_i,|x'_{\!\!i}|\},2\max\{x_i,|x'_{\!\!i}|\}], \quad i\geq i_0.\end{equation*}

Further, $(\Delta_i)_{i\in\mathbb{N}}$ satisfies (3.2) with $u=0$ , so according to Lemma 3.3 it grows like $\underset{\bar{}}{a}^{-i}$ as $i\to\infty$ . By the upper and lower bounds on $\Delta_i$ for $i\geq i_0$ , we obtain that both $(x_i)_{i\in\mathbb{N}}$ and $(x'_{\!\!i})_{i\in\mathbb{N}}$ grow asymptotically at most like $\underset{\bar{}}{a}^{-i}$ . Note that the solution $(\hat{x}_i)_{i\in\mathbb{N}}$ of (3.3) with $\hat{x}_1=\hat{x}$ cannot grow faster than $\underset{\bar{}}{a}^{-i}$ as $i\to\infty$ , by positivity of $\Delta_i$ for all $i\geq 1$ . Thus the series

\begin{equation*}\mathcal{E}(z) = \sum_{i=1}^{\infty}x_i z^i\end{equation*}

converges for any initial value at least within the radius of convergence $R \;:\!=\; \underset{\bar{}}{a}>0$ .

We proceed with a direct calculation of the coefficients. Recall that $\underset{\bar{}}{a}<\bar{a}$ are the distinct real zeros of the polynomial $\textbf{x} \mapsto \mu \textbf{x}^2-(\lambda+\mu+\gamma)\textbf{x}+\lambda$ .

Lemma 3.8. Any solution $(x_i)_{i\in\mathbb{N}}$ to equation (3.3) with $x_1=x >0$ has the form

(3.6) \begin{equation}x_i = \lambda\, c \, x\,\biggl(\dfrac{1}{\underset{\bar{}}{a}^i}-\dfrac{1}{\bar{a}^i}\biggr)- c \sum_{k=1}^{i-1}\dfrac{ 1}{k+ \mathfrak{n} } \biggl( \dfrac{1}{\underset{\bar{}}{a}^{i-k}} -\dfrac{1}{\bar{a}^{i-k}} \biggr),\end{equation}

with $ c = \bigl(\sqrt{(\lambda+\mu+\gamma)^2-4\lambda\mu}\bigr)^{-1}$ .

Proof. According to Lemmas 3.6 and 3.7, the introduced generating function $\mathcal{E}$ has a positive radius of convergence and the particular form

\begin{equation*}\mathcal{E}(z) = z\dfrac{f(z)}{g(z)} \quad \text{with}\quad f(z)\;:\!=\; \lambda x - \sum_{k=1}^{\infty}\dfrac{z^k}{k+\mathfrak{n}} \quad \text{and}\quad g(z)\;:\!=\; \mu z^2 - q z + \lambda .\end{equation*}

Note first that, given that form, we have

\begin{equation*}\partial_z^i\mathcal{E}(z)\vert_{z=0} = i\,\partial_z^{i-1}\biggl(\dfrac{f(z)}{g(z)}\biggr)\bigg\vert_{z=0}= i\sum_{k=0}^{i-1}\binom{i-1}{k}\biggl(\partial_z^{k}f(z)\partial_z^{i-1-k} \biggl(\dfrac{1}{g}\biggr)(z)\biggr)\bigg\vert_{z=0} .\end{equation*}

Due to the form of g we have

\begin{equation*}\dfrac{1}{g(z)} =\dfrac{ c }{z-\bar{a}} - \dfrac{ c }{z-\underset{\bar{}}{a}},\end{equation*}

and therefore, for any $i\in\mathbb{N}$, the derivative of order i at 0 is given by

\begin{equation*}\partial_z^i \dfrac{1}{g(z)} \bigg\vert_{z=0} = i! \, c \biggl( \dfrac{1}{\underset{\bar{}}{a}^{i+1}}-\dfrac{1}{\bar{a}^{i+1}}\biggr).\end{equation*}

Considering the numerator f(z), we have

\begin{equation*}\partial_z^i \,f(z)\vert_{z=0} = -\sum_{k=i}^{\infty} \dfrac{z^{k-i}}{k+ \mathfrak{n} }k\cdot(k-1)\cdot\ldots\cdot (k-i+1)\bigg\vert_{z=0} = -\dfrac{i!}{i+ \mathfrak{n} }.\end{equation*}

Hence

\begin{align*}\partial_z^i\mathcal{E}(z)\vert_{z=0} &= i\,\partial_z^{i-1}\biggl(\dfrac{f(z)}{g(z)}\biggr)\bigg\vert_{z=0}\\[5pt]&= i\biggl(\lambda\,x\,(i-1)! c \biggl(\dfrac{1}{\underset{\bar{}}{a}^i}-\dfrac{1}{\bar{a}^i}\biggr) \nonumber\\ &\quad -\sum_{k=1}^{i-1}\binom{i-1}{k} \dfrac{k!}{k+ \mathfrak{n} } c (i-1-k)!\biggl(-\dfrac{1}{\bar{a}^{i-k}}+ \dfrac{1}{\underset{\bar{}}{a}^{i-k}}\biggr)\biggr)\nonumber\\ &= i!\biggl(\lambda\,x\, c \biggl(\dfrac{1}{\underset{\bar{}}{a}^i}-\dfrac{1}{\bar{a}^i}\biggr)-\sum_{k=1}^{i-1}\dfrac{ c }{k+ \mathfrak{n} } \biggl( \dfrac{1}{\underset{\bar{}}{a}^{i-k}}-\dfrac{1}{\bar{a}^{i-k}}\biggr)\biggr)\nonumber \end{align*}

Since $x_i= i!^{-1}\partial_z^i\mathcal{E}(z)\vert_{z=0} $ , we find that for any $i\geq 1$

\begin{equation*}x_i = \lambda\,c\,x \biggl(\dfrac{1}{\underset{\bar{}}{a}^i}-\dfrac{1}{\bar{a}^i}\biggr)- c\sum_{k=1}^{i-1}\dfrac{ 1 }{k+ \mathfrak{n} } \biggl(-\dfrac{1}{\bar{a}^{i-k}}+ \dfrac{1}{\underset{\bar{}}{a}^{i-k}}\biggr).\end{equation*}
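As a numerical sanity check (not part of the original argument), formula (3.6) can be tested against the recursion it solves: comparing coefficients in the identity $\mathcal{E}(z)(\mu z^2-qz+\lambda) = z\bigl(\lambda x - \sum_{k\geq 1} z^k/(k+\mathfrak{n})\bigr)$ from Lemma 3.6 gives $\lambda x_{i+1} = (\lambda+\mu+\gamma)x_i - \mu x_{i-1} - 1/(i+\mathfrak{n})$ with $x_0=0$. The Python sketch below uses purely illustrative parameter values.

```python
import math

# illustrative parameters (gamma > 0 guarantees two distinct real roots)
lam, mu, gam, n_frak = 1.0, 2.0, 0.5, 3
q = lam + mu + gam
disc = math.sqrt(q**2 - 4*lam*mu)
a_lo = (q - disc) / (2*mu)   # smaller root (underline a)
a_hi = (q + disc) / (2*mu)   # larger root (bar a)
c = 1.0 / disc               # the constant c of Lemma 3.8

x1 = 0.8                     # arbitrary initial value x_1 = x

def x_closed(i):
    # closed form (3.6)
    s = lam * c * x1 * (a_lo**-i - a_hi**-i)
    s -= c * sum((a_lo**-(i-k) - a_hi**-(i-k)) / (k + n_frak)
                 for k in range(1, i))
    return s

# recursion obtained by matching coefficients of the generating function
xs = [0.0, x1]
for i in range(1, 12):
    xs.append((q*xs[i] - mu*xs[i-1] - 1.0/(i + n_frak)) / lam)

assert all(abs(xs[i] - x_closed(i)) < 1e-8 for i in range(1, 13))
```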

We can now proceed with the proof of Theorem 2.2 by combining the previously presented results as follows. The sequence of expected hitting times $\bigl(\mathbb{E}_i\bigl[T_X^{(\mathfrak{n})}\bigr]\bigr)_{i\in\mathbb{N}}$ is a solution to (3.3). Lemma 3.8 shows that, for $i\geq 0$, the expected hitting time is in fact given by

\begin{equation*}\mathbb{E}_i\bigl[T_X^{(\mathfrak{n})}\bigr] = \lambda\, \mathbb{E}_1\bigl[T_X^{(\mathfrak{n})}\bigr] \, c \biggl(\dfrac{1}{\underset{\bar{}}{a}^i}-\dfrac{1}{\bar{a}^i}\biggr)-c \sum_{k=1}^{i-1}\dfrac{ 1 }{k+ \mathfrak{n} } \biggl(\dfrac{1}{\underset{\bar{}}{a}^{i-k}}-\dfrac{1}{\bar{a}^{i-k}} \biggr),\end{equation*}

with $ c = \bigl(\sqrt{(\lambda+\mu+\gamma)^2-4\lambda\mu}\bigr)^{-1}$ , where we followed the usual convention that an empty sum equals 0. It remains to derive the value of $\mathbb{E}_1\bigl[T_X^{(\mathfrak{n})}\bigr]$ to complete the proof of Theorem 2.2.

To improve the readability in what follows, we introduce the following notation:

\begin{equation*}\Phi_{n}(z) \;:\!=\; \sum_{k=0}^{\infty}\dfrac{z^k}{k+n}, \quad |z|<1, \ n\in\mathbb{N}^*.\end{equation*}

Therefore

\begin{equation*}\mathcal{E}(z) = z \dfrac{\lambda x_1 - \Phi_{\mathfrak{n}}(z)+ \mathfrak{n} ^{-1}}{\mu z^2 - q z + \lambda}.\end{equation*}

In fact, it turns out that the value $x_1={\lambda^{-1}}(\Phi_{\mathfrak{n}}(\underset{\bar{}}{a}) - \mathfrak{n} ^{-1})$ yields the minimal positive solution of (3.3) which does not tend to $+\infty$ , as proved in the following lemma.

Lemma 3.9. The sequence $(x_i)_{i\in\mathbb{N}}$ satisfying (3.3) with $x_1= {\lambda^{-1}}(\Phi_{\mathfrak{n}}(\underset{\bar{}}{a})- \mathfrak{n} ^{-1})$ is the minimal positive solution to (3.3). Moreover, $x_i = {\textrm{O}} (i^{-1})$ as $i\to\infty$ . Therefore

\begin{equation*}\hat{x} = \dfrac{1}{\lambda}\bigl(\Phi_{\mathfrak{n}}(\underset{\bar{}}{a})- \mathfrak{n} ^{-1}\bigr).\end{equation*}

Proof. By Lemma 3.8,

\begin{align*}x_i &= \lambda\, c \, x_1\!\biggl(\dfrac{1}{\underset{\bar{}}{a}^i}-\dfrac{1}{\bar{a}^i}\biggr)-c \sum_{k=1}^{i-1}\dfrac{ 1 }{k+ \mathfrak{n} } \biggl(\dfrac{1}{\underset{\bar{}}{a}^{i-k}}- \dfrac{1}{\bar{a}^{i-k}} \biggr)\nonumber\\[1pt]&=c \sum_{k=1}^{\infty} \dfrac{\underset{\bar{}}{a}^{k}}{k+\mathfrak{n}} \biggl(\dfrac{1}{\underset{\bar{}}{a}^i}-\dfrac{1}{\bar{a}^i}\biggr)- c \sum_{k=1}^{i-1}\dfrac{ 1}{k+ \mathfrak{n} } \biggl(\dfrac{1}{\underset{\bar{}}{a}^{i-k}}-\dfrac{1}{\bar{a}^{i-k}}\biggr)\nonumber\\[1pt]&= \dfrac{c}{\underset{\bar{}}{a}^i}\sum_{k=i}^{\infty} \dfrac{\underset{\bar{}}{a}^{k}}{k+\mathfrak{n}} - \dfrac{c}{\bar{a}^i}\sum_{k=1}^{\infty} \dfrac{\underset{\bar{}}{a}^{k}}{k+\mathfrak{n}} + c \sum_{k=1}^{i-1}\dfrac{ 1}{k+ \mathfrak{n} } \dfrac{1}{\bar{a}^{i-k}}.\end{align*}

Using the notation

\begin{equation*}b_i \;:\!=\; \sum_{k=1}^{i-1}\frac{ \bar{a}^{k} }{k+ \mathfrak{n} },\end{equation*}

we have

\begin{equation*}\sum_{k=1}^{i-1}\dfrac{ 1 }{k+ \mathfrak{n} } \dfrac{1}{\bar{a}^{i-k}} = \dfrac{b_i}{\bar{a}^{i}}\end{equation*}

and

\begin{equation*}\dfrac{b_{i+1}-b_i}{\bar{a}^{i+1}-\bar{a}^{i}}= \dfrac{1}{(\bar{a}-1)(i+\mathfrak{n})}\to 0,\quad i\to\infty.\end{equation*}

Using the fact that $\bar{a}>1$ and applying the Stolz–Cesàro theorem, we obtain

\begin{equation*}\dfrac{b_i}{\bar{a}^{i}}= \dfrac{1}{\bar{a}^{i}}\sum_{k=1}^{i-1}\dfrac{ \bar{a}^{k} }{k+ \mathfrak{n} } \to 0,\quad \text{as $ i\to\infty$.}\end{equation*}
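The following small numerical check (illustrative parameters, not part of the proof) confirms this limit, consistently with the rate $1/((\bar a - 1)(i+\mathfrak{n}))$ of the difference quotient:

```python
import math

lam, mu, gam, n_frak = 1.0, 2.0, 0.5, 3   # illustrative values
q = lam + mu + gam
a_hi = (q + math.sqrt(q**2 - 4*lam*mu)) / (2*mu)   # bar a > 1

ratios = []          # ratios[i-1] = b_i / a_hi**i
b, apow = 0.0, a_hi  # b = b_1 (empty sum), apow = a_hi**1
for i in range(1, 2001):
    ratios.append(b / apow)
    b += apow / (i + n_frak)   # add the term k = i, giving b_{i+1}
    apow *= a_hi               # a_hi**(i+1)

assert ratios[1999] < 0.01           # the ratio has become small ...
assert ratios[1999] < ratios[199]    # ... and keeps decreasing
```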

Thus the asymptotic behavior of $(x_i)_{i\geq 0}$ is the same as that of

(3.7) \begin{equation} \dfrac{c}{\underset{\bar{}}{a}^i}\sum_{k=i}^{\infty} \dfrac{\underset{\bar{}}{a}^{k}}{k+\mathfrak{n}}\geq \dfrac{c}{i+\mathfrak{n}}.\end{equation}

Let us check an upper bound. Using the integral comparison,

\begin{align*}\sum_{k=i}^{\infty} \dfrac{\underset{\bar{}}{a}^{k}}{k+\mathfrak{n}} &\leq \dfrac{\underset{\bar{}}{a}^{i}}{i+\mathfrak{n}} + \int_i^{\infty}\dfrac{\underset{\bar{}}{a}^{s}}{s+\mathfrak{n}} \,{\textrm{d}} s\\[5pt] &\leq \dfrac{\underset{\bar{}}{a}^{i}}{i+\mathfrak{n}} + \dfrac{\underset{\bar{}}{a}^{-\mathfrak{n}}}{i+\mathfrak{n}}\int_{i+\mathfrak{n}}^{\infty}\exp(s\log\underset{\bar{}}{a}) \,{\textrm{d}} s\\[5pt]&= \dfrac{\underset{\bar{}}{a}^{i}}{i+\mathfrak{n}} + \dfrac{\underset{\bar{}}{a}^{i}}{(i+\mathfrak{n})(\!-\!\log\underset{\bar{}}{a})},\end{align*}

which implies that

\begin{equation*}\dfrac{1}{\underset{\bar{}}{a}^i}\sum_{k=i}^{\infty} \dfrac{\underset{\bar{}}{a}^{k}}{k+\mathfrak{n}}\end{equation*}

vanishes as $i\to\infty$ . Hence $x_i\to 0$ as $i\to\infty$ , which implies in particular that the sequence is bounded.

Furthermore, the sequence $(x_i)_{i\in\mathbb{N}}$ is positive, since otherwise it would diverge to $-\infty$ by the same arguments used in the proof of Lemma 3.5. Moreover, the lower bound (3.7) and the upper bound of order ${\textrm{O}} (i^{-1})$ imply the convergence rate $x_i = {\textrm{O}} (i^{-1})$ as $i\to\infty$.

Finally, by Lemma 3.5, the sequence $(x_i)_{i\geq 0}$ with $x_1={\lambda^{-1}}(\Phi_{\mathfrak{n}}(\underset{\bar{}}{a})- \mathfrak{n} ^{-1})$ is indeed the minimal positive solution to (3.3), which uniquely determines the value of $\hat{x}$ .

Note that Lemma 3.9 completes the proof of Theorem 2.2, and we can proceed to the proof of Corollary 2.1. The sequence of expected hitting times $\bigl(\mathbb{E}_i\bigl[T_X^{(\mathfrak{n})}\bigr]\bigr)_{i\in\mathbb{N}}$ is the solution of

\begin{equation*}\begin{cases}x_0=0, \ x_1= \hat{x},\\[5pt]\lambda_i x_{i+1} = -1 + (\gamma_i+\lambda_i+ \mu_i)x_{i} - \mu_i x_{i-1},& i\geq 1,\end{cases}\end{equation*}

with

\begin{equation*}\hat{x}= \dfrac{1}{\lambda}\sum_{k=1}^{\infty}\dfrac{\underset{\bar{}}{a}^k}{k+\mathfrak{n}},\end{equation*}

where, by Lemma 3.9, $\mathbb{E}_i\bigl[T_X^{(\mathfrak{n})}\bigr]$ converges to 0 at the rate ${\textrm{O}} (i^{-1})$ as $i\to \infty$.
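Numerically, the sensitivity of the minimal solution to its initial value is striking: starting from $\hat{x}$ the solution stays positive and decays like $i^{-1}$, while any perturbation is amplified like $\underset{\bar{}}{a}^{-i}$, as in the proof of Lemma 3.5. The sketch below is not part of the proofs; the parameters are illustrative, and the recursion used is the coefficient identity behind the generating function of Lemma 3.6.

```python
import math

lam, mu, gam, n_frak = 1.0, 2.0, 0.5, 3   # illustrative values
q = lam + mu + gam
disc = math.sqrt(q**2 - 4*lam*mu)
a_lo = (q - disc) / (2*mu)                 # underline a, 0 < a_lo < 1

# hat x = (1/lam) * sum_{k>=1} a_lo**k/(k+n); the tail is geometric,
# so truncating at 400 terms is exact to machine precision here
x_hat = sum(a_lo**k / (k + n_frak) for k in range(1, 400)) / lam

def trajectory(x1, steps=20):
    # recursion reconstructed from the generating-function identity:
    # lam*x_{i+1} = q*x_i - mu*x_{i-1} - 1/(i+n),  x_0 = 0
    xs = [0.0, x1]
    for i in range(1, steps):
        xs.append((q*xs[i] - mu*xs[i-1] - 1.0/(i + n_frak)) / lam)
    return xs

xs = trajectory(x_hat)
assert all(x > 0 for x in xs[1:])          # minimal solution stays positive
assert xs[-1] < xs[1]                      # and decays towards 0
assert min(trajectory(x_hat - 1e-3)) < 0   # smaller start becomes negative
assert trajectory(x_hat + 1e-3)[-1] > 1e3  # larger start blows up
```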

4. Recurrence property and stationary distribution

Based on the information we obtained on the expected hitting times of the catastrophe state, we now prove the following theorem.

Theorem 4.1. (Positive recurrence.) Let $X=(X_t)_{t\geq 0}$ be the BD+C $_{\mathfrak{n}}$ process whose infinitesimal generator Q is given by (2.1). Then X is positive recurrent; it exhibits a unique non-degenerate stationary distribution $\pi$ and, for any initial distribution, $X_t$ converges in distribution to $\pi$ as $t\to\infty$.

Note that this is radically different from the behavior of a BD process with immigration satisfying $\lambda>\mu$. In that case the population grows in expectation over time and no stationary distribution exists; see e.g. [Reference Anderson1]. Hence introducing a partial catastrophe into the model changes its behavior drastically, ensuring, for any catastrophe rate $\gamma>0$, the existence of a unique stationary distribution.

To prove the theorem, we construct a suitable Lyapunov function.

Proof of Theorem 4.1. Recall that a function $V\colon \mathbb{N}\to\mathbb{R}^+\cup\{+\infty\}$ is a Lyapunov function associated with the CTMC X if V satisfies

\begin{equation*}QV(n) \leq -1 + \mathbb{1}_A(n),\end{equation*}

where Q is the generator of X and A is some petite set; see e.g. [Reference Meyn and Tweedie11]. Take $A=\{0,\ldots,\mathfrak{n}\}$. By the previous section, $\sup_{i\in\mathbb{N}}\mathbb{E}\bigl[T_X^A\mid X_0=i\bigr]<\infty$, where $T_X^A$ is the first time the process X hits the set A. Hence $i \mapsto 1+\mathbb{E}\bigl[T_X^A\mid X_0=i\bigr]$ yields a Lyapunov function. It follows directly by irreducibility of X that the process X admits a stationary distribution $\pi$ and that it is positive recurrent. The uniqueness of $\pi$ follows from the irreducibility of X, and the limiting behavior follows from classical results on non-explosive continuous-time Markov chains; see e.g. [Reference Norris12].

5. Expected population size at fixed times

In this last section we analyze the first two moments of the BD+C $_{\mathfrak{n}}$ process at a fixed time $t\geq 0$ and prove Theorems 2.3 and 2.4. Due to the asymmetric form of the generator (2.1), we focus on upper bounds of the first and second moments of $(X_t)_{t\geq 0}$ .

In contrast to the previous section, the parameter $\nu$, representing the immigration rate when the population is extinct, now plays an important role: it makes the state 0 non-absorbing.

5.1. Associated Kolmogorov equations

Let us consider the Kolmogorov equations associated with the BD+C $_{\mathfrak{n}}$ process $(X_t)_{t\geq 0}$ to analyze its behavior at fixed times. With $P_n(t) \;:\!=\; \mathbb{P}[X_t = n\mid X_0 = n_0], n_0> \mathfrak{n}$ , the following identities hold:

(5.1) \begin{align}\left\{\begin{array}{l}\dfrac{\textrm{d}}{\textrm{d}t}P_0(t) = \mu P_1(t) - \nu P_0(t),\\[8pt]\dfrac{\textrm{d}}{\textrm{d}t}P_n(t) =\lambda_{n-1} P_{n-1}(t) - n(\mu + \lambda)P_{n}(t) +(n+1)\mu P_{n+1}(t), \quad 0< n < \mathfrak{n}, \\[8pt]\dfrac{\textrm{d}}{\textrm{d}t}P_{\mathfrak{n}}(t) =(\mathfrak{n}-1)\lambda P_{\mathfrak{n}-1}(t) - \mathfrak{n}(\mu + \lambda)P_{\mathfrak{n}}(t) +({\mathfrak{n}}+1)\mu P_{{\mathfrak{n}}+1}(t)+ \gamma\!\sum_{i = {\mathfrak{n}}+1}^{\infty} iP_i(t) , \\[8pt]\dfrac{\textrm{d}}{\textrm{d}t}P_n(t) = (n-1)\lambda P_{n-1}(t) -n(\gamma + \mu + \lambda)P_{n}(t) + (n+1)\mu P_{n+1}(t), \quad n > \mathfrak{n},\end{array}\right.\end{align}

with initial condition $P_{n}(0)=\delta_{n_0,n}$ . Recall that $\lambda_{n}= n\, \lambda$ if $n\geq 1$ but $\lambda_0 = \nu$ .
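The transition rates of the BD+C $_{\mathfrak{n}}$ process can be read off (5.1): births at rate $n\lambda$ (rate $\nu$ from state 0), deaths at rate $n\mu$, and, from any state $n>\mathfrak{n}$, a partial catastrophe down to $\mathfrak{n}$ at rate $n\gamma$. A minimal Gillespie-type simulation, with purely illustrative parameter values, shows the stabilization guaranteed by Theorem 4.1 even when $\lambda>\mu$:

```python
import random

lam, mu, gam, nu, n_frak = 1.5, 1.0, 0.5, 1.0, 3   # note lam > mu

def simulate(n0, events=200_000, seed=0):
    rng = random.Random(seed)
    n, t, area = n0, 0.0, 0.0          # area accumulates the integral of X_s ds
    for _ in range(events):
        birth = nu if n == 0 else n * lam
        death = n * mu
        cat = n * gam if n > n_frak else 0.0
        total = birth + death + cat
        dt = rng.expovariate(total)    # exponential holding time
        area += n * dt
        t += dt
        u = rng.random() * total
        if u < birth:
            n += 1
        elif u < birth + death:
            n -= 1
        else:
            n = n_frak                 # partial catastrophe: drop to n_frak
    return area / t                    # time-averaged population size

avg = simulate(10)
assert 0.5 < avg < 9.0   # stays bounded despite lam > mu
```

Despite $\lambda>\mu$, the time-averaged population size stays bounded, in line with the positive recurrence established in Section 4.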

5.2. Upper bounds for first and second moments

We approach the first and second moments by analyzing their corresponding ODEs.

Proof of Theorem 2.3. Using (5.1), we obtain

\begin{align*} \dfrac{\textrm{d}}{\textrm{d}t}\mathbb{E}[X_t] &= \lambda \sum_{n=1}^{\infty}n(n-1) P_{n-1}(t)- (\mu + \lambda) \sum_{n=1}^{\infty} n^2P_{n}(t) \\[5pt] & \quad + \mu \sum_{n=1}^{\infty}n(n+1) P_{n+1}(t) - \gamma\sum_{n = \mathfrak{n}+1}^{\infty} (n-\mathfrak{n}) n P_n(t) +\nu P_0(t)\\[5pt] &= (\lambda - \mu) \mathbb{E}[X_t] + \gamma\sum_{n = \mathfrak{n}+1}^{\infty} ( \mathfrak{n} - n) n P_n(t) +\nu P_0(t).\end{align*}

If we denote the first moment by $m(t)\;:\!=\; \mathbb{E}[X_t]$ , then it solves the ODE

(5.2) \begin{equation} m'(t) = (\lambda-\mu) \, m(t) - \gamma\sum_{n = \mathfrak{n}+1}^{\infty} (n- \mathfrak{n}) n P_n(t)+\nu P_0(t), \quad m(0) = n_0.\end{equation}

In particular, by Jensen’s inequality,

\begin{align*} m'(t) &\leq (\lambda -\mu)\, m(t) + \gamma\sum_{n = 0}^{\infty} ( \mathfrak{n} -n) n P_n(t)+\nu P_0(t)\\[5pt] &\leq (\lambda -\mu )\, m(t)+ \gamma ( \mathfrak{n} -m(t))\, m(t) +\nu.\end{align*}

The solution $\bar{m}$ to the ODE

(5.3) \begin{equation}\bar{m}'(t) = (\lambda -\mu) \bar{m}(t) + \gamma ( \mathfrak{n} - \bar{m}(t))\, \bar{m}(t) + \nu, \quad \bar{m}(0) = n_0,\end{equation}

is therefore a candidate for an upper bound of m. To verify this, we show that the difference $\bar{m} - m$ is a non-negative function on $[0,\infty)$.

We first prove that $\bar{m} - m$ has a strict local minimum at $t=0$ . Note that solutions of both ODEs (5.2) and (5.3) exist globally on $(0,\infty)$ . Since both right-hand sides are continuous in t for $t\geq 0$ , we can extend the definition of the derivative to the set $[0,\infty)$ . Note that if $n_0 > \mathfrak{n}$ then

\begin{equation*} \lim_{t\to 0^+}m'(t) = (\lambda -\mu) n_0 + \gamma ( \mathfrak{n} - n_0)n_0 = \lim_{t\to 0^+}\bar{m}'(t) - \nu < \lim_{t\to 0^+}\bar{m}'(t)\end{equation*}

and $\bar{m}(0) = m(0)$. Thus $\bar{m}$ dominates m on an initial time interval:

\begin{equation*}{\text{there exists $t_0>0$ such that $ m(t) \leq \bar{m}(t)$ for all $t\in[0,t_0)$.}}\end{equation*}

Assume that this domination only holds locally, or equivalently, that there is a $t_2 > t_0$ such that $\bar{m}(t_2)< m(t_2)$ . By continuity there exists an interval $I_{\delta}\;:\!=\; (t_2-\delta,t_2+\delta)$ such that $\bar{m}(t)< m(t)$ for all $t\in I_{\delta}$ . Without loss of generality we may assume that $\bar{m}(t)\geq m(t)$ for all $t\in[0,t_2-\delta)$ . Set $t_1 \;:\!=\; t_2-\delta$ . By continuity of $\bar{m}$ and m, $\bar{m}(t_1)=m(t_1)$ . Additionally, since $\bar{m}$ and m are $C^1$ -functions, $\bar{m}'(t_1) < m'(t_1)$ . But on the other hand,

\begin{equation*}\bar{m}'(t_1) = g(\bar{m}(t_1)) = g(m(t_1)) \geq m'(t_1),\end{equation*}

where $g(x) \;:\!=\; (\lambda -\mu) x + \gamma ( \mathfrak{n} - x)x +\nu$ , which leads to a contradiction. Thus $\bar{m}$ dominates m globally.

Note that the ODE (5.3) is the same as (2.3), whose solution is a logistic growth function. In particular, $\bar{m}(t)$ converges for large t to its equilibrium value $\bar{m}_e$ given by the largest zero of the polynomial $\textbf{x} \mapsto \gamma \, \textbf{x}^2 - (\lambda -\mu +\gamma\,\mathfrak{n})\textbf{x} -\nu$ ,

(5.4) \begin{equation} \bar{m}_e \;:\!=\; \dfrac{(\lambda -\mu +\gamma\,\mathfrak{n}) + \sqrt{(\lambda -\mu +\gamma\,\mathfrak{n})^2 + 4\gamma\nu}}{2\gamma}.\end{equation}

Moreover $t \mapsto \bar{m}(t)$ is decreasing whenever it is larger than its equilibrium $\bar{m}_e$ .
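As a quick numerical check (with illustrative parameter values, not taken from the paper), integrating (5.3) by an explicit Euler scheme reproduces the equilibrium value (5.4):

```python
import math

lam, mu, gam, nu, n_frak, n0 = 1.5, 1.0, 0.5, 1.0, 3, 10  # illustrative

# closed-form equilibrium (5.4): largest zero of gam*x^2 - (lam-mu+gam*n)*x - nu
b = lam - mu + gam * n_frak
m_e = (b + math.sqrt(b * b + 4 * gam * nu)) / (2 * gam)

# explicit Euler integration of the logistic ODE (5.3)
m, dt = float(n0), 1e-4
for _ in range(int(30 / dt)):
    m += dt * ((lam - mu) * m + gam * (n_frak - m) * m + nu)

assert abs(m - m_e) < 1e-3   # the solution settles at the equilibrium (5.4)
assert m_e > n_frak          # with lam > mu, m_e > n, as used after (5.8)
```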

Proof of Theorem 2.4. The dynamics of the process’s second moment $v(t)\;:\!=\; \mathbb{E}\bigl[X_t^2\bigr]$ can be derived in the same way as that of the first moment, and it is given by

\begin{align*} v'(t) &= 2(\lambda- \mu )v(t) + (\lambda+ \mu) m(t)- \gamma\sum_{n = \mathfrak{n}+1}^{\infty} (n^3 - \mathfrak{n}^2 n) P_n(t)+\nu P_0(t)\\[5pt] &\leq 2(\lambda- \mu )v(t) + (\lambda+ \mu ) m(t)- \gamma\sum_{n = 0}^{\infty} (n^3 - \mathfrak{n}^2 n) P_n(t) +\nu P_0(t)\\[5pt] &\leq 2(\lambda- \mu )v(t) + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )\bar{m}(t)- \gamma v(t)^{{3}/{2}} +\nu .\end{align*}

The last inequality, due to Jensen’s inequality, is actually strict for $t>0$ :

\begin{equation*} v'(t) < 2(\lambda- \mu )v(t) + (\lambda+ \mu+ \gamma\,\mathfrak{n}^2 )\bar{m}(t) - \gamma v(t)^{{3}/{2}} +\nu, \quad t>0.\end{equation*}

Now consider the function $\bar{v}$ , the solution of the ODE

(5.5) \begin{equation} \bar{v}'(t) = 2(\lambda- \mu )\, \bar{v}(t) + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )\, \bar{m}(t) - \gamma \, \bar{v}(t)^{{3}/{2}} +\nu, \quad \bar{v}(0) = n_0^2.\end{equation}

It is a candidate for an upper bound of v. Compare the right limits at 0 of the first derivatives:

\begin{equation*}\lim_{t\to 0^+} v'(t) < 2(\lambda- \mu )n_0^2 + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )n_0 - \gamma n_0^3 +\nu = \lim_{t\to 0^+} \bar{v}'(t).\end{equation*}

By the same argument as for m, we obtain the domination of v by the function $\bar{v}$ .

The positive function $\bar{v}$ is indeed bounded, as we will now prove.

Assume, on the contrary, that $\bar{v}$ is unbounded. In particular, choose $c_0$ sufficiently large that

\begin{equation*} 2(\lambda- \mu )c_0 + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )n_0 - \gamma c_0^{{3}/{2}} +\nu< 0 ,\end{equation*}

so that, by unboundedness and continuity, there exists a time $t_0$ such that $\bar{v}(t_0) = c_0$. Since by assumption $n_0>\bar{m}_e$, the function $\bar{m}$ decreases from $\bar{m}(0) = n_0$. It follows that

\begin{align*} \bar{v}'(t_0) &= 2(\lambda- \mu )\bar{v}(t_0) + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )\bar{m}(t_0) - \gamma \bar{v}(t_0)^{{3}/{2}} +\nu\\[5pt] &\leq 2(\lambda- \mu )c_0 + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )n_0 - \gamma c_0^{{3}/{2}} +\nu \\[5pt] & <0.\end{align*}

Hence, whenever $\bar{v}$ takes the value $c_0$ , it has a negative derivative. Thus, for $t>t_0$ , $\bar{v}(t)$ is uniformly bounded by $c_0$ , which is a contradiction.

We also compare $\bar{v}$ and $\bar{m}^2$ and obtain information on the long-time behavior of $\bar{v}.$

Proposition 5.1. Let $\bar{v}$ be the solution of the ODE (5.5) with $n_0 > \bar{m}_e$ and $\lambda+\mu \geq 2\nu$. Then $\bar{v}(t)\geq \bar{m}^2(t)$ for all $t\geq 0$, where $\bar{m}$ is the solution of the ODE (5.3). Moreover, as $t\to\infty$, $\bar{v}(t)$ tends to the solution $\bar v_\infty$ of the following equation:

(5.6) \begin{equation} 0 = 2(\lambda- \mu )\bar{v}_{\infty} + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )\bar{m}_e - \gamma \bar{v}_{\infty}^{{3}/{2}} +\nu.\end{equation}

To prove these properties, we analyze the nullcline, i.e. the function $\textbf{v} (t)$ solving the equation obtained from (5.5) by setting the left-hand side to zero:

(5.7) \begin{equation} 0 = 2(\lambda- \mu )\textbf{v} (t) + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )\bar{m}(t) - \gamma (\textbf{v} (t))^{{3}/{2}} +\nu.\end{equation}

To prove the uniqueness of the solution of the equation (5.6), we use Descartes’ rule of signs, which we now recall (see [Reference Hall and Knight7]).

Lemma 5.1. The number of positive roots, counted with multiplicities, of a polynomial with real coefficients is equal to the number of sign changes in its list of nonzero coefficients, or is less than this number by a multiple of 2.
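For instance, a cubic of the form $-\gamma \textbf{x}^{3} + 2(\lambda-\mu )\textbf{x}^2 + C$ with $C>0$ and $\lambda>\mu$, the shape arising in the proof of Proposition 5.1 below, has exactly one sign change in its nonzero coefficients and hence exactly one positive root. A small self-contained check (with illustrative values):

```python
# Sign changes among the nonzero coefficients, highest degree first.
def sign_changes(coeffs):
    signs = [1 if c > 0 else -1 for c in coeffs if c != 0]
    return sum(1 for s, t in zip(signs, signs[1:]) if s != t)

# Illustrative cubic: -gam*x^3 + 2*(lam-mu)*x^2 + 0*x + C with C > 0.
gam, lam, mu, C = 0.5, 1.5, 1.0, 33.0
coeffs = [-gam, 2 * (lam - mu), 0.0, C]
assert sign_changes(coeffs) == 1   # one change, hence exactly one positive root

# Cross-check by counting sign changes of the cubic on a fine grid of (0, 100).
def p(x):
    return -gam * x**3 + 2 * (lam - mu) * x**2 + C

grid = [i * 0.001 for i in range(100_000)]
crossings = sum(1 for a, b in zip(grid, grid[1:]) if p(a) * p(b) < 0)
assert crossings == 1
```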

We also obtain the $C^1$ -regularity of the function $\textbf{v}$ by the following lemma.

Lemma 5.2. Let $t\geq 0$ and $\textbf{P}^t\colon \textbf{x}\mapsto a_0(t) + \sum_{k = 1}^n a_k \textbf{x}^k$ be a real polynomial with $a_0\in C^1([0,\infty),\mathbb{R})$ and $a_0'(t)<0$ for all $t>0$ . If for all $t\geq 0$ the polynomial $\textbf{P}^t$ has a unique positive simple zero denoted by $x_0^t$ , then $t\mapsto x_0^t \in C^1((0,\infty),\mathbb{R})$ .

Proof of Lemma 5.2. For any $t> 0$ and sufficiently small $h>0$ ,

\begin{align*}0&= \sum_{k = 1}^n a_k\bigl( \bigl(x_0^{t+h}\bigr)^k - \bigl(x_0^t\bigr)^k\bigr) + a_0(t+h) - a_0(t) \\[5pt]&= \bigl(x_0^{t+h} - x_0^t\bigr) \sum_{k = 1}^n a_k\sum_{l = 0}^{k-1}\bigl(x_0^{t+h}\bigr)^l \bigl(x_0^t\bigr)^{k-1-l} + a_0(t+h) - a_0(t).\end{align*}

Note that

\begin{equation*}\sum_{k = 1}^n a_k\sum_{l = 0}^{k-1}\bigl(x_0^{t+h}\bigr)^l \bigl(x_0^t\bigr)^{k-1-l}\neq 0\quad\text{and}\quad x_0^{t+h} - x_0^t\neq 0,\end{equation*}

because $a_0$ is decreasing by assumption. Thus

\begin{equation*}\dfrac{x_0^{t+h} - x_0^t}{h} = \dfrac{a_0(t)-a_0(t+h) }{h} \Biggl(\sum_{k = 1}^n a_k\sum_{l = 0}^{k-1}\bigl(x_0^{t+h}\bigr)^l \bigl(x_0^t\bigr)^{k-1-l}\Biggr)^{-1}\end{equation*}

and hence $t\mapsto x_0^t \in C^1((0,\infty),\mathbb{R})$ .

Proof of Proposition 5.1. Define a positive function $\psi$ by $\psi(t)^2 \;:\!=\; \textbf{v}(t)$ , where $\textbf{v}(t)$ solves the equation (5.7). Then $\psi(t)$ is a positive zero of the polynomial

\begin{equation*}\textbf{x} \mapsto - \gamma \textbf{x}^{3} + 2(\lambda-\mu )\textbf{x}^2 + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )\bar{m}(t) +\nu.\end{equation*}

By Lemma 5.1, for fixed t this zero exists, is simple, and is unique. Therefore the nullcline is defined pointwise by $\textbf{v}(t)\;:\!=\; \psi(t)^2, t\geq 0$. Since $t \mapsto \bar{m} (t) $ is continuous, $\textbf{v}$ is also continuous. The function $\textbf{v}$ is even continuously differentiable, by the positivity of $\psi$, the smoothness of $\bar{m}$, and Lemma 5.2. Moreover, since $\bar{m}$ converges to $\bar{m}_e$ as $t\to\infty$, $\textbf{v}$ converges as $t\to\infty$ to the solution of the equation (5.6). Furthermore, differentiating (5.7), we obtain

\begin{equation*}0 = 2(\lambda- \mu )\textbf{v}'(t) + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )\bar{m}'(t) - \dfrac{3}{2}\gamma \textbf{v}'(t)\textbf{v}(t)^{{1}/{2}}.\end{equation*}

Since $\bar{m}'$ tends to 0 as $t\to\infty$, $\textbf{v}'$ converges to some value denoted by $\textbf{v} '(\infty)$, which solves the equation

\begin{equation*}0 = 2(\lambda- \mu )\textbf{v} '(\infty)- \dfrac{3}{2}\gamma \textbf{v} '(\infty)\bar{v}_{\infty}^{{1}/{2}}.\end{equation*}

Thus, either $\textbf{v} '(\infty) = 0$ or

\begin{equation*}\bar{v}_{\infty}=\biggl(\frac{4(\lambda- \mu )}{3\gamma}\biggr)^2.\end{equation*}

The latter value cannot solve (5.6). Hence $\textbf{v} '(\infty) = 0$ . Note that if $x>x_0^t$ , then

\begin{equation*}- \gamma x^{3} + 2(\lambda-\mu )x^2 + (\lambda+ \mu + \gamma\,\mathfrak{n}^2 )\bar{m}(t) +\nu < 0\end{equation*}

and vice versa. Thus $\bar{v}$ always tends towards the nullcline $\textbf{v}$; since moreover $\textbf{v}'$ tends to 0, $\bar{v}$ converges to $\bar{v}_{\infty}$, which proves the claim.

Note, by the way, that one deduces from equation (5.6) that $\bar{v}_{\infty}= {\textrm{O}} (\mathfrak{n}^2)$ as $\mathfrak{n}\to\infty$.

On the comparison between $\bar{v}(t)$ and $\bar{m}^2(t)$ : by assumption, $\bar{v}(0) = n_0^2 = \bar{m}(0)^2$ . Note that

\begin{equation*}\bar{m}(t)\geq \bar{m}_e> \mathfrak{n} \Rightarrow(\mathfrak{n} - \bar{m}(t))^2 >0\Rightarrow 2\bar{m}(t)(\mathfrak{n} -\bar{m}(t)) < \mathfrak{n}^2 - \bar{m}^2(t) .\end{equation*}

Therefore

(5.8) \begin{align} \bigl(\bar{m}^2\bigr)'(t) = 2\bar{m}(t)\bar{m}'(t) = - 2\gamma \bar{m}(t)^3 + 2(\lambda -\mu +\gamma\,\mathfrak{n})\bar{m}^2(t) + 2\nu\bar{m}(t) \end{align}
(5.9) \begin{align} &< \gamma \bigl(\mathfrak{n}^2 - \bar{m}^2(t)\bigr)\bar{m}(t) + 2(\lambda -\mu)\bar{m}^2(t) + 2\nu\bar{m}(t)\nonumber\\[5pt]&=-\gamma \bigl(\bar{m}^2(t)\bigr)^{{3}/{2}} + 2(\lambda -\mu)\bar{m}^2(t)+ \bigl(2 \nu + \gamma\,\mathfrak{n}^2 \bigr)\bar{m}(t) \nonumber \\[5pt]&\leq -\gamma \bigl(\bar{m}^2(t)\bigr)^{{3}/{2}} + 2(\lambda -\mu)\bar{m}^2(t) +\bigl(\lambda+ \mu + \gamma\,\mathfrak{n}^2 \bigr)\bar{m}(t) , \end{align}

where the assumption $2\nu \leq\lambda + \mu $ is used for the last inequality. Now by (5.5) and (5.8),

\begin{align*}\bar{v}'(0) -\bigl(\bar{m}^2\bigr)'(0) &= 2(\lambda- \mu )\, \bar{v}(0) + \bigl(\lambda+ \mu + \gamma\,\mathfrak{n}^2 \bigr)\, \bar{m}(0) - \gamma \, \bar{v}(0)^{{3}/{2}} +\nu \\[5pt]&\quad - \bigl( - 2\gamma \bar{m}(0)^3 + 2(\lambda -\mu +\gamma\,\mathfrak{n})\bar{m}^2(0) + 2\nu\bar{m}(0)\bigr) \\[5pt]&= (\lambda + \mu - 2 \nu)n_0 + \gamma n_0( n_0- \mathfrak{n} )^2 +\nu> 0.\end{align*}

This inequality propagates to all $t\geq 0$, which can be proved by the same argument as in the proof of Theorem 2.3, using (5.9).
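The statements of Proposition 5.1 can be checked numerically by integrating the coupled system (5.3), (5.5) with an explicit Euler scheme and solving (5.6) by bisection. The parameter values below are illustrative and satisfy the assumptions $n_0>\bar{m}_e$ and $\lambda+\mu\geq 2\nu$.

```python
import math

lam, mu, gam, nu, n_frak, n0 = 1.5, 1.0, 0.5, 1.0, 3, 10  # illustrative

# equilibrium (5.4) of the first-moment bound
b = lam - mu + gam * n_frak
m_e = (b + math.sqrt(b * b + 4 * gam * nu)) / (2 * gam)
assert n0 > m_e and lam + mu >= 2 * nu   # assumptions of Proposition 5.1

# v_inf: positive root of (5.6), found by bisection
def g(v):
    return 2*(lam - mu)*v + (lam + mu + gam*n_frak**2)*m_e - gam*v**1.5 + nu

lo, hi = 1.0, 1e4          # g(lo) > 0 > g(hi) for these parameters
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
v_inf = (lo + hi) / 2

# Euler integration of the coupled ODEs (5.3) and (5.5)
m, v, dt = float(n0), float(n0)**2, 1e-4
for _ in range(int(30 / dt)):
    assert v >= m*m - 1e-6   # the bound v_bar >= m_bar^2 of Proposition 5.1
    dm = (lam - mu)*m + gam*(n_frak - m)*m + nu
    dv = 2*(lam - mu)*v + (lam + mu + gam*n_frak**2)*m - gam*v**1.5 + nu
    m += dt * dm
    v += dt * dv

assert abs(v - v_inf) < 1e-2   # v_bar settles at the root of (5.6)
assert v_inf >= m_e**2
```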

5.3. Accuracy of these upper bounds

Previously we found an upper bound $\bar{m}$ for the first moment m. We would like to quantify the sharpness of this bound by estimating the non-negative difference function D defined by

(5.10) \begin{equation}D(t) \;:\!=\; \bar{m}(t)- m(t) \ \geq 0, \quad D(0)=0.\end{equation}

Moreover, $\limsup_{t\to\infty} D(t) \leq \bar{m}_e $ , which implies that D is a bounded function.

Proposition 5.2. (Upper bound for the difference function D.) Let m, $\bar{m}$ , v, and $\bar{v}$ be defined as above under the same assumptions. Then the difference function D defined in equation (5.10) is pointwise bounded from above by the solution $\bar{D}$ of the non-linear ODE

(5.11) \begin{equation} \begin{cases}\bar{D}'(t) = - \gamma \bar{D}(t)^2 + (\lambda-\mu+\gamma (\mathfrak{n}+ 2 \bar{m}(t)))\bar{D}(t) +\gamma \bigl(\bar{v}(t)-\bar{m}(t)^2\bigr) + \gamma\dfrac{\mathfrak{n}^2}{4} +\nu,& t> 0, \\[5pt]\bar{D}(0) = 0. &\end{cases}\end{equation}

Moreover, the function $\bar{D}$ tends for large time to the positive solution $\bar{D}_{\infty}$ of the equation

(5.12) \begin{equation}0 = - \gamma \bar{D}_{\infty}^2 + (\lambda-\mu+\gamma (\mathfrak{n}+ 2 \bar{m}_e) )\bar{D}_{\infty} +\gamma\bigl(\bar{v}_{\infty}- \bar{m}_e^2 \bigr) + \gamma\dfrac{\mathfrak{n}^2}{4} +\nu.\end{equation}

Proof. By differentiating the function D defined in (5.10), we obtain

\begin{align}D'(t) &\leq (\lambda-\mu)D(t) + \mathfrak{n} \gamma\bar{m}(t) -\gamma\bar{m}^2(t) + \gamma\sum_{n\geq \mathfrak{n}+1} (n-\mathfrak{n})nP_n(t)+\nu\nonumber \\[5pt]&\leq -\gamma D(t)^2 +(\lambda-\mu+\gamma \mathfrak{n})D(t) - 2\gamma D(t)m(t)+ 2\gamma D(t)\bar{m}(t) \nonumber\\[5pt]&\quad +\gamma\bigl(\bar{v}(t)-\bar{m}^2(t)\bigr) + \gamma\sum_{n = 0}^{\mathfrak{n}} (\mathfrak{n} - n)nP_n(t) +\nu \notag \\[5pt] &\leq - \gamma D(t)^2 + (\lambda-\mu+\gamma (\mathfrak{n}+2\bar{m}(t)))D(t) +\gamma\bigl(\bar{v}(t)-\bar{m}^2(t)\bigr) + \gamma\dfrac{\mathfrak{n}^2}{4}+\nu\nonumber.\end{align}

The function $\bar{D}$ , solving the ODE (5.11), is an upper bound for D since $D(0) = \bar{D}(0)$ and

\begin{equation*}\lim_{t\to 0^+} D'(t) = \nu < \gamma\dfrac{\mathfrak{n}^2}{4} +\nu = \lim_{t\to 0^+} \bar{D}'(t).\end{equation*}

Thus, by similar arguments as for $\bar{m}$ and $\bar{v}$ , it follows that $\bar{D}$ is a global upper bound of D.

To investigate the asymptotic behavior of $\bar{D}(t)$ we use techniques similar to those we applied in the proof of Proposition 5.1. This time the equation (5.11) induces the positive nullcline $\textbf{D}$ satisfying

\begin{equation*}0 = - \gamma \textbf{D}(t)^2 + (\lambda-\mu+\gamma (\mathfrak{n} + 2\bar{m}(t)))\textbf{D}(t) +\gamma\bigl(\bar{v}(t)-\bar{m}^2(t)\bigr) + \gamma\dfrac{\mathfrak{n}^2}{4}+\nu.\end{equation*}

Again applying Lemma 5.1, one proves its existence and also the convergence of $\bar{D}$ towards this nullcline. As in Proposition 5.1, since $\bar{v}$ and $\bar{m}$ converge for large t, $\textbf{D} (t)$ converges to $\bar{D}_{\infty}$, which is the unique positive solution to

\begin{equation*}0 = - \gamma \bar{D}_{\infty}^2 + (\lambda-\mu+\gamma (\mathfrak{n}+ 2 \bar{m}_e) )\bar{D}_{\infty} +\gamma\bigl(\bar{v}_{\infty}- \bar{m}_e^2 \bigr) + \gamma\dfrac{\mathfrak{n}^2}{4} +\nu.\end{equation*}

Therefore, analogously to before, the function $\bar{D}$ converges to $\bar{D}_{\infty}$, thanks to the positivity of $\lambda-\mu+\gamma (\mathfrak{n} + 2\bar{m}(t))$ and the $C^1$ -regularity of $\textbf{D}$.

Solving (5.12) explicitly, we obtain

\begin{equation*}\bar{D}_{\infty} = \dfrac{(\lambda-\mu+\gamma (\mathfrak{n}+ 2 \bar{m}_e )) +\sqrt{(\lambda-\mu+\gamma (\mathfrak{n}+ 2 \bar{m}_e) )^2+ \gamma^2(4(\bar{v}_{\infty}- \bar{m}_e^2 ) +\mathfrak{n}^2) + 4 \gamma \nu}}{2\gamma},\end{equation*}

and consequently $\bar{D}_{\infty}={\textrm{O}} (\mathfrak{n})$ as $\mathfrak{n}\to\infty$. Hence, for small $\mathfrak{n}$, the small size of $\bar{D}$ shows that $\bar{m}$ is a sufficiently good estimate of m. For large $\mathfrak{n}$, however, the large values of $\bar{D}$ do not allow us to conclude whether $\bar{m}$ and m are close. Finer estimates would be needed to paint a clearer picture of the sharpness of the upper bounds, but they seem to be currently out of reach.
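As a final numerical illustration (with purely illustrative parameter values), integrating (5.3), (5.5), and (5.11) jointly confirms that $\bar{D}$ converges to the closed-form value $\bar{D}_{\infty}$ above:

```python
import math

lam, mu, gam, nu, n_frak, n0 = 1.5, 1.0, 0.5, 1.0, 3, 10  # illustrative

# equilibrium (5.4)
b = lam - mu + gam * n_frak
m_e = (b + math.sqrt(b*b + 4*gam*nu)) / (2*gam)

# v_inf: positive root of (5.6), by bisection
def g(v):
    return 2*(lam-mu)*v + (lam+mu+gam*n_frak**2)*m_e - gam*v**1.5 + nu
lo, hi = 1.0, 1e4
for _ in range(200):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) > 0 else (lo, mid)
v_inf = (lo + hi) / 2

# closed-form D_inf, solving (5.12)
bD = lam - mu + gam*(n_frak + 2*m_e)
D_inf = (bD + math.sqrt(bD**2 + gam**2*(4*(v_inf - m_e**2) + n_frak**2)
                        + 4*gam*nu)) / (2*gam)

# Euler integration of the coupled system (5.3), (5.5), (5.11)
m, v, D, dt = float(n0), float(n0)**2, 0.0, 1e-4
for _ in range(int(30 / dt)):
    dm = (lam-mu)*m + gam*(n_frak-m)*m + nu
    dv = 2*(lam-mu)*v + (lam+mu+gam*n_frak**2)*m - gam*v**1.5 + nu
    dD = (-gam*D*D + (lam-mu+gam*(n_frak+2*m))*D
          + gam*(v - m*m) + gam*n_frak**2/4 + nu)
    m, v, D = m + dt*dm, v + dt*dv, D + dt*dD

assert abs(D - D_inf) < 1e-2   # D_bar settles at the closed-form D_inf
```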

Acknowledgements

The authors warmly thank Fanny Delebecque for helpful discussions. It is also their pleasure to thank an anonymous referee for her/his very careful reading, for recommendations on how to improve the writing of the proof of Theorem 2.2, and for hints concerning further research.

Funding information

The first three authors acknowledge the Franco-German University (UFA) for its support through the binational Collège Doctoral Franco-Allemand CDFA 01-18. The second author wants to thank the UFA for his Bourse de cotutelle de thèse.

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

Anderson, W. J. (1991). Continuous-Time Markov Chains. Springer.
Brockwell, P. J. (1985). The extinction time of a birth, death and catastrophe process and of a related diffusion model. Adv. Appl. Prob. 17, 42–52.
Brockwell, P. J. (1986). The extinction time of a general birth and death process with catastrophes. J. Appl. Prob. 23, 851–858.
Brockwell, P. J., Gani, J. M. and Resnick, S. I. (1982). Birth, immigration and catastrophe processes. Adv. Appl. Prob. 14, 709–731.
Di Crescenzo, A., Giorno, V., Nobile, A. G. and Ricciardi, L. M. (2008). A note on birth–death processes with catastrophes. Statist. Prob. Lett. 78, 2248–2257.
Feller, W. (1939). Die Grundlagen der Volterraschen Theorie des Kampfes ums Dasein in wahrscheinlichkeitstheoretischer Behandlung. Acta Bioth. Ser. A 5, 11–40.
Hall, H. S. and Knight, S. R. (1891). Higher Algebra: A Sequel to Elementary Algebra for Schools, 1st edn. Macmillan, London; St Martin's Press, New York. Available at https://archive.org/details/higheralgebraseq00hall.
Kapodistria, S., Tuan, P.-D. and Resing, J. (2016). Linear birth/immigration-death process with binomial catastrophes. Prob. Eng. Inf. Sci. 30, 79–111.
Karlin, S. and McGregor, J. (1958). Linear growth, birth and death processes. J. Math. Mech. 7, 643–662.
Kendall, D. G. (1948). On the generalized birth-and-death process. Ann. Math. Statist. 19, 1–15.
Meyn, S. and Tweedie, R. L. A survey of Foster–Lyapunov techniques for general state space Markov processes. Available at http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.43.2517.
Norris, J. R. (1997). Markov Chains. Cambridge University Press, Cambridge.
Sindayigaya, S. (2016). The population mean and its variance in the presence of genocide for a simple birth–death–immigration–emigration process using the probability generating function. Internat. J. Statist. Anal. 6, 1–8.
Swift, R. J. (2001). Transient probabilities for a simple birth–death–immigration process under the influence of total catastrophes. Internat. J. Math. Math. Sci. 25, 689–692.
van Doorn, E. A. and Zeifman, A. I. (2005). Birth–death processes with killing. Statist. Prob. Lett. 72, 33–42.
van Doorn, E. A. and Zeifman, A. I. (2005). Extinction probability in a birth–death process with killing. J. Appl. Prob. 42, 185–198.
Figure 1. Transition graph of X.

Figure 2. Dependences and implications of results in Section 3.