
Multitype branching process with non-homogeneous Poisson and contagious Poisson immigration

Published online by Cambridge University Press:  22 November 2021

Landy Rabehasaina*
Affiliation:
University Bourgogne Franche-Comté
Jae-Kyung Woo*
Affiliation:
University of New South Wales
*Postal address: Laboratory of Mathematics of Besançon, University Bourgogne Franche-Comté, 16 route de Gray, 25030 Besançon CEDEX, France. Email address: lrabehas@univ-fcomte.fr
**Postal address: School of Risk and Actuarial Studies, University of New South Wales, Sydney, Australia. Email address: j.k.woo@unsw.edu.au

Abstract

In a multitype branching process, it is assumed that immigrants arrive according to a non-homogeneous Poisson or a contagious Poisson process (both processes are formulated as a non-homogeneous birth process with an appropriate choice of transition intensities). We show that the normalized numbers of objects of the various types alive at time t for supercritical, critical, and subcritical cases jointly converge in distribution under those two different arrival processes. Furthermore, we provide some transient expectation results when there are only two types of particles.

Type
Original Article
Copyright
© The Author(s), 2021. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Single or multitype branching processes with different stochastic assumptions on the immigration process have been applied in diverse fields in applied probability such as biology, epidemiology, and demography. For example, Resing [Reference Resing27] utilized the theory of multitype branching processes in discrete time with immigration to study the joint queue length process in the different queues of a polling system in queueing theory. More generally, a network of infinite server queues may be seen as multitype Galton–Watson processes with immigration; see e.g. [Reference Altman2]. Some actuarial applications of branching processes such as a reinsurance chain were discussed in [Reference Rolski, Schmidli, Schmidt and Teugels28, Section 7.5]. Regarding applications in biology, Hyrien et al. [Reference Hyrien, Peslak, Yanev and Palis16] recently considered multitype branching processes with homogeneous Poisson immigration to study stress erythropoiesis, although they pointed out that a non-homogeneous Poisson process (NHPP) might be more realistic in that situation. The reader is referred to [Reference Mitov, Yanev and Hyrien24] for a detailed discussion about the relevant literature on various types of branching processes.

In this paper we consider a multitype branching process in which there are different types of particles and new particles arrive according to an NHPP or a contagious Poisson process (CPP). These two immigration processes are chosen, as alternatives to the homogeneous Poisson process, for the following reasons. Both belong to the class of non-homogeneous birth processes, meaning that the intensity of event occurrence may vary with time (e.g. seasonality of catastrophe incidence) in the NHPP case, or with the past state of the process (e.g. the number of previous shocks or of accidents incurred in the past) in the CPP case. As a result, these non-Poissonian processes have been widely used in areas such as engineering, applied probability, biological science, and actuarial science. In particular, a CPP is an immigration process with a linear birth rate and is essentially a homogeneous case of the generalized Pólya processes (GPP) studied in [Reference Willmot35], [Reference Cha9], and [Reference Cha and Finkelstein10], for example. This process is also regarded as a contagion model suitable for describing the spread of an infection, as proposed in [Reference Pólya25], and accident proneness, as considered in [Reference Greenwood and Yule17], [Reference Bates7], and [Reference Bühlmann8, Section 2.2]. Since a branching process can be used to study the dynamic network of the spread of infectious diseases, a CPP is a natural choice of immigration arrival process for modeling the occurrence of contagious events. In addition, we refer to some papers that have considered GPP in other fields; for example, see [Reference Konno19] for a more detailed discussion of the use of GPP in a non-stationary-type master equation approach in mathematical physics.
As a GPP models a decreasing sequence of inter-arrival times depending on the number of past events (whose positive dependence is proved in [Reference Cha and Finkelstein11]), it is a more realistic choice for modeling mortality, as well as the increasingly damaging impact of shocks on an ageing system, as studied in [Reference Cha and Finkelstein11].
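To fix ideas, a CPP can be sketched as a pure birth process whose inter-arrival times shorten as events accumulate. The following Python sketch is illustrative only and is not taken from the paper: it uses a hypothetical linear-birth parametrization in which, after n arrivals, the next inter-arrival time is exponential with rate $a+bn$, and it checks the mean count at a fixed horizon against the closed-form mean $(a/b)({\textrm{e}}^{bt}-1)$ of a linear birth process started empty.

```python
import math
import random

def simulate_cpp(a, b, horizon, rng):
    """Arrival times of a contagious (linear-birth) Poisson process on [0, horizon].

    Illustrative parametrization: given n past arrivals, the next inter-arrival
    time is Exp(a + b*n), so arrivals accelerate with the past count
    (contagion).  The parameters a and b are hypothetical.
    """
    t, n, arrivals = 0.0, 0, []
    while True:
        t += rng.expovariate(a + b * n)
        if t > horizon:
            return arrivals
        arrivals.append(t)
        n += 1

rng = random.Random(42)
counts = [len(simulate_cpp(1.0, 0.5, 5.0, rng)) for _ in range(2000)]
mean_count = sum(counts) / len(counts)
# For a linear birth process started empty, E S(t) = (a/b) (e^{b t} - 1).
expected = (1.0 / 0.5) * (math.exp(0.5 * 5.0) - 1.0)
```

The Monte Carlo mean should be close to `expected`, illustrating the explosive (exponential-in-time) growth of the mean number of arrivals that the forthcoming results exploit.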

The focus of our paper is to study the joint asymptotic behavior of a process representing the numbers of different types of particles alive at time t when the immigration process is described by an NHPP or a CPP. These models may be interpreted differently depending on whether we are in an epidemic, actuarial, queueing, or reliability setting. In an epidemic setting, the particles represent contaminated cells, the types represent their locations, and those cells can move to other locations where they can possibly contaminate other cells. In an actuarial setting, a particle may represent certain types of claims or tasks that need to be processed in different branches of an insurance company before being settled, or are in different stages of a reinsurance contract, as explained in [Reference Rolski, Schmidli, Schmidt and Teugels28, Section 7.5]. In a queueing setting, a particle is a customer who arrives and gets served immediately in the setting of infinite server queues and, after leaving the queue, is replicated into several new customers who are sent to other queues for their subsequent service. In a reliability setting, particles are interconnected parts in a system which can be damaged upon external shock arrivals and need to be repaired, or are dependent line outages in a power network which may cause cascading blackouts; see [Reference Qi, Ju and Sun26].

Most papers in the literature consider the critical case of underlying branching mechanisms; see e.g. [Reference Weiner33], [Reference Durham12], [Reference Weiner34], or [Reference Mitov, Yanev and Hyrien24]. In this paper we consider all three underlying branching mechanisms, including supercritical, critical, and subcritical cases. Indeed, it is well known that in the subcritical and critical cases for a continuous-time multitype Galton–Watson process, i.e. when the eigenvalue of the mean matrix of offspring does not exceed 1, extinction is certain, whereas the survival probability in an infinite horizon is positive in the supercritical case. Depending on the modeling context, either a critical, subcritical, or supercritical case may be more appropriate for modeling the dynamics of the quantity of interest. In particular, in a queueing or actuarial context where the clients or tasks will eventually exit the system, it may be more plausible to use a critical or subcritical case. For example, in the case of polling systems, the stable case corresponds to the subcritical branching process and the heavy traffic limit is studied using the near-critical branching process in [Reference van der Mei30]. On the other hand, in the context of epidemiology, a supercritical case may be more appropriate for modeling the rapid expansion of a particular disease at the beginning of the outbreak.

The three cases of underlying branching mechanism lead to different behaviors of the branching process with immigration. Concerning the immigration arrival process, particular attention in the forthcoming results is given to the case where the immigration rate increases very fast. This is a property of the GPP, as the (stochastic) arrival rate is linear in the number of arrivals up to the current time, so that the inter-arrival times decrease in the failure rate order, as explained in [Reference Cha and Finkelstein11] and [Reference Badía, Sangüesa and Cha6]. For the NHPP, the assumptions on the intensity function allow for exponential behavior of the latter, yielding different kinds of asymptotic results. For a single-type branching process with general particle lifetime distribution, the reader is referred to [Reference Hyrien, Mitov and Yanev14] and [Reference Hyrien, Mitov and Yanev15], which proved asymptotic results and functional central limit theorems, respectively, in the supercritical and subcritical cases.

In this paper we also investigate the transient moments of the process for a two-type branching mechanism when the renewal function associated with the arrival process is explicit. See e.g. [Reference Hyrien, Peslak, Yanev and Palis16] for a similar study when the particle lifetimes have a general distribution.

The rest of the paper is organized as follows. In Section 2 we describe a multitype branching process with and without immigration, together with the relevant assumptions. Some convergence results for the distribution of the numbers of different types of particles at time t (denoted by N(t) in vector form) under the two immigration processes (NHPP, CPP) are obtained from the characteristic function (CF). In Section 3, an NHPP is assumed for the arrival process of immigrants, and convergence results for N(t) are given in Theorems 3.1 and 3.2, with particular emphasis on the case when the intensity of the arrival process increases exponentially. A result in the critical case is given in Theorem 3.3, which agrees with previous results in the same context in [Reference Weiner34], [Reference Vatutin31], and [Reference Mitov, Yanev and Hyrien24]. For the critical case, we provide some remarks on homogeneous Poisson immigration and one-dimensional branching processes with immigration in Remark 3.1. In Sections 3.1, 3.2, and 3.3 we give detailed proofs of Theorems 3.1, 3.2, and 3.3. Section 4 considers a CPP for the immigration process. Asymptotic behaviors of N(t) are studied in Theorem 4.1 as a function of the parameters of the arrival process, with detailed proofs in Sections 4.1, 4.2, and 4.3. In the proofs of Theorems 3.1, 3.2, 3.3, and 4.1, we show that, for a conveniently chosen normalizing function g(t), the process $N(t)/g(t)$ converges in distribution to an identifiable limit as $t\to +\infty$ by proving that the corresponding CF converges. Finally, in Section 5 we present some transient results when there are two types of particles in the branching process.

Lastly, the following matrix notation will be used throughout the paper. For any matrix $M\in \mathbb{R}^{m\times n}$, $M'\in \mathbb{R}^{n\times m}$ denotes its transpose. By $\langle u,v\rangle=\sum_{i=1}^k u_i v_i$ we denote the usual inner product between two vectors $u=(u_1,\ldots,u_k)'$ and $v=(v_1,\ldots,v_k)'$, associated with the corresponding Euclidean norm

\begin{equation*}\|u\|=\sqrt{\sum_{i=1}^k u_i^2}.\end{equation*}

We let $\mathbf{1}=(1,\ldots,1)'$, a vector with ones of an appropriate dimension, $\mathbb{R}_+^k=[0,+\infty)^k$ and $\mathbb{R}_+^{*k}=(0,+\infty)^k$. We also let $L^1(\mathbb{R}_+)$ be the set of integrable measurable functions from $\mathbb{R}_+$ to $\mathbb{R}$. Finally, we denote $\mathbb{N}^\ast=\{1,2,\ldots\}$ and $\mathbb{N}=\mathbb{N}^\ast \cup \{0\}$.

2. The model

The baseline model, a classical multitype branching process (without immigration), is described as follows. We consider a set of particles of k possible types, with a type i particle having exponential lifetime with mean $1/\mu_i$ for $i=1,\ldots,k$, denoted by $\mathcal{E}(\mu_i)$ for $\mu_i >0$. Upon its death, a type i particle produces $Y_j^{(i)}$ copies of type j particles for all $j=1,\ldots,k$, where $(Y_{1}^{(i)},\ldots,Y_{k}^{(i)})$ is a random vector with corresponding probabilities $p_i(\mathbf{n})=p_i(n_1,\ldots,n_k)=\mathbb{P} (Y_j^{(i)}=n_j,\ j=1,\ldots,k)$ for $\mathbf{n}=(n_1,\ldots,n_k)\in \mathbb{N}^k$, and generating functions defined by

(2.1) \begin{equation}h_i(z)=h_i(z_1,\ldots,z_k)= \sum_{\mathbf{n} \in \mathbb{N}^k} p_i(\mathbf{n}) z_1^{n_1}\cdots z_k^{n_k},\quad i=1,\ldots,k,\ z\in [0,1]^k .\end{equation}

In other words, $p_i(\mathbf{n})$ is the probability that a type i particle produces $n_1,\ldots,n_k$ copies of type $1,\ldots,k$ particles respectively. All copies then evolve independently and have the same dynamics. Note that $p_i(0,\ldots,0)$ is the probability that no replica is made, i.e. that the particle does not produce any copies at the end of its lifetime. The mean numbers of copies from a type i particle are denoted by $(m_{i,1},\ldots,m_{i,k})=(\mathbb{E}[Y_{1}^{(i)}],\ldots,\mathbb{E}[Y_{k}^{(i)}])$. We suppose that $(Y_{1}^{(i)},\ldots,Y_{k}^{(i)})$ is not a.s. a zero vector, which guarantees that the base process does not die out immediately. We consider the vector process $N^o(t)=(N^o_1(t),\ldots,N^o_k(t))'$, where $N^o_j(t)$ represents the number of type j particles at time t. It is also assumed that at time 0 the number of particles $N^o(0)$ is a random vector $I=(I(1),\ldots,I(k))'$ with distribution

(2.2) \begin{equation}I\sim \sum_{\mathbf{n}\in \mathbb{N}^k} p_{\mathbf{n}} \delta_{\mathbf{n}},\end{equation}

for some probability vector $(p_{\mathbf{n}})_{\mathbf{n}\in \mathbb{N}^k}$, and where $\delta_{\mathbf{n}}$ is the Dirac distribution concentrated at $\mathbf{n}\in \mathbb{N}^k$, and the second-order moment holds:

\begin{equation*}{\text{Assumption }{\bf (A)}} \quad \mathbb{E} ( \| I\|^2)= \sum_{j=1}^k \mathbb{E} ( I(j)^2)<+\infty.\end{equation*}

A central remark is that, given $N^o(0)=\mathbf{n}_0=(\mathbf{n}_0(1),\ldots,\mathbf{n}_0(k))'\in \mathbb{N}^k$, $N^o(t)$ has the same distribution as the independent sum

(2.3) \begin{equation}\sum_{j=1}^k \sum_{l=1}^{\mathbf{n}_0(j)} N^{j,l}(t),\end{equation}

where, for each $j=1,\ldots,k$, $( \{ N^{j,l}(t),\ t\ge 0\})_{l\in\mathbb{N}^\ast}$ are independent identically distributed (i.i.d.) copies of a generic multitype branching process $\{ N^{j}(t)=(N^{j}_1(t),\ldots,N^{j}_k(t)),\, t\ge 0\}$ with the branching mechanism given by (2.1), and $N^j_i (t)$ represents the number of type i particles at time t produced from a type j particle generated at time 0, i.e. $N^{j}(0)$ is a vector of which the jth entry is 1 and 0 elsewhere. The CF of $N^o(t)$ is denoted by $\varphi^o_{t}(s)\;:\!=\;\mathbb{E}[{\textrm{e}}^{\langle {\textrm{i}} s,N^o(t)\rangle}]$ for $s \in \mathbb{R}^k$. According to [Reference Athreya and Ney5, Chapter V], $\{N^o(t), t \geq 0\}$ is a continuous-time multitype branching process (without immigration). Note that, in view of the representation (2.3), we have the following expression:

\begin{equation*} \varphi^o_{t}(s) = \mathbb{E}\left( \prod_{j=1}^k \varphi^j_t(s)^{I(j)}\right)=\sum_{\mathbf{n}_0 \in \mathbb{N}^k} \prod_{j=1}^k \varphi^j_t(s)^{\mathbf{n}_0(j)} p_{\mathbf{n}_0},\end{equation*}

where $\varphi^j_t(s)\;:\!=\;\mathbb{E} (\!\exp (\langle {\textrm{i}} s,N^j(t)\rangle) )$.

We recall some useful results which will be used often in the subsequent study. First, it is convenient to introduce a $k\times k$ matrix $A=(a_{ij})_{i,j=1,\ldots,k}$ where the $a_{ij}$ are defined by

(2.4) \begin{equation}a_{ij}=\mu_j(m_{ij}-\mathbb{1}_{[i=j]}),\quad i,j=1,\ldots,k.\end{equation}

We suppose that A is regular, that is, all entries of the matrix $\exp(t_0 A)$ are positive for some $t_0>0$ (see [Reference Athreya and Ney5, definition (10)]). This entails that the largest eigenvalue $\rho$ of A has multiplicity 1. It is well known that we are in the subcritical case if $\rho<0$, the critical case if $\rho =0$, and the supercritical case if $\rho >0$. We let u and v be the $k\times 1$ right and left eigenvectors respectively, i.e. such that $Au=\rho u$ and $v'A=\rho v'$, with positive entries, normalized in such a way that $\langle u,{\bf 1}\rangle=1$ and $\langle u,v\rangle=1$. Then, in [Reference Athreya and Ney5, Theorem 1] it was shown that $\{\langle u, N^o(t)\,{\textrm{e}}^{-\rho t}\rangle,\ t\geq 0\}$ is a martingale when $N^o(0)$ is a constant $\mathbf{n}_0\in \mathbb{N}^k$. When $N^o(0)=I$ given by (2.2), this property still holds because Assumption $({\bf A})$ in particular implies that I is integrable. Finally, we will suppose that the process is non-singular, that is, for all $i=1,\ldots,k$, $p_i(\mathbf{n})$ in (2.1) is not of the form $p_i(\mathbf{n})=\mathbb{1}_{[n_i=1,\ n_j=0,\ j\neq i]}$, meaning that it is not the case that every particle simply replaces itself by exactly one particle of its own type.
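For illustration, the quantities $\rho$, u, and v can be computed numerically from the death rates and the mean offspring matrix. The sketch below uses a hypothetical two-type example (the numerical values of $\mu_i$ and $m_{i,j}$ are chosen only for illustration), builds A as in (2.4), extracts its largest eigenvalue and the associated right and left eigenvectors, and applies the normalizations $\langle u,{\bf 1}\rangle=1$ and $\langle u,v\rangle=1$.

```python
import numpy as np

# Hypothetical two-type example: death rates mu_i and mean offspring matrix
# m, with m[i, j] = E[Y_j^{(i)}] = m_{i,j}.
mu = np.array([1.0, 2.0])
m = np.array([[0.2, 0.6],
              [0.5, 0.3]])
# a_ij = mu_j (m_ij - 1[i=j]) as in (2.4); broadcasting scales column j by mu_j.
A = (m - np.eye(2)) * mu

# Largest eigenvalue rho and right eigenvector u (A u = rho u).
vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
rho = vals.real[i]
u = np.abs(vecs[:, i].real)
u /= u.sum()                       # normalization <u, 1> = 1

# Left eigenvector v (v'A = rho v'), i.e. eigenvector of A transposed.
valsT, vecsT = np.linalg.eig(A.T)
j = np.argmax(valsT.real)
v = np.abs(vecsT[:, j].real)
v /= u @ v                         # normalization <u, v> = 1

regime = "supercritical" if rho > 0 else ("critical" if rho == 0 else "subcritical")
```

For these (hypothetical) parameters the Perron root is negative, so the example is subcritical.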

From [Reference Athreya and Ney5, Theorem 2] the a.s. asymptotic behavior of $N^o(t)$ as $t\rightarrow +\infty$ is given in the following lemma under the assumptions described previously.

Lemma 2.1. There exists a non-negative random variable W such that

\begin{equation*}\lim_{t\rightarrow +\infty} N^o(t) \,{\textrm{e}}^{-\rho t}=Wv,\quad {a.s.}\end{equation*}

Note that the CF of W is given by

\begin{equation*}\varphi_W(x)=\mathbb{E}[{\textrm{e}}^{{\textrm{i}} x W}]=\sum_{\mathbf{n}_0 \in \mathbb{N}^k} \mathbb{E}[{\textrm{e}}^{{\textrm{i}} x W}\mid N^o(0)=\mathbf{n}_0]\,p_{\mathbf{n}_0},\quad x\in \mathbb{R} .\end{equation*}

Because of (2.3), we have

\begin{equation*}\mathbb{E}[{\textrm{e}}^{{\textrm{i}} x W}\mid N^o(0)=\mathbf{n}_0]=\mathbb{E}\Big[{\textrm{e}}^{{\textrm{i}} x \sum_{j=1}^k \sum_{l=1}^{\mathbf{n}_0(j)}W^{j,l}}\Big]=\prod_{j=1}^k \varphi_j(x)^{\mathbf{n}_0(j)},\end{equation*}

where $W^{j,l}$ is the a.s. limit of $N^{j,l}(t) \,{\textrm{e}}^{-\rho t}$ as $t\to +\infty$. For each $j=1,\ldots,k$, $(W^{j,l})_{l\in \mathbb{N}}$ is an i.i.d. sequence having a CF denoted by $\varphi_j(x)=\mathbb{E} ({\textrm{e}}^{{\textrm{i}} x W^{j,l}})$, and the $W^{j,l}$, $j=1,\ldots,k$, $l=1,\ldots,\mathbf{n}_0(j)$ are independent. The vector $\varphi(x)\;:\!=\;(\varphi_1(x),\ldots,\varphi_k(x))$ is in general not explicit but satisfies a particular integral equation (see [Reference Athreya and Ney5, equation (28)] for details). Finally, the CF of W is then given as a function of this vector $\varphi(x)$:

(2.5) \begin{equation}\varphi_W(x)=\sum_{\mathbf{n}_0 \in \mathbb{N}^k} \prod_{j=1}^k \varphi_j(x)^{\mathbf{n}_0(j)} p_{\mathbf{n}_0}=\mathbb{E} \biggl( \prod_{j=1}^k \varphi_j(x)^{I(j)}\biggr),\quad x \in \mathbb{R}.\end{equation}

We now move on to a multitype branching process with immigration, which is the central stochastic process studied in this paper. Suppose that new particles (immigrants) arrive at times $T_i$, $i\ge 1$, in batches described by random vectors $I_i=(I_i(1),\ldots,I_i(k))'$ having the same distribution (2.2) as I. Each batch then evolves according to the branching mechanism described at the beginning of this section. We assume that the immigration sequence $(I_i)_{i\in\mathbb{N}}$ is i.i.d. The vector process $N(t)=(N_1(t),\ldots,N_k(t))'$, representing the number of particles of each type at time t, is defined as

(2.6) \begin{equation}N(t)=\sum^{S(t)}_{i=1} N^{o,i}(t-T_i),\quad t\geq 0,\end{equation}

where $\{N^{o,i}(t), t \geq 0\}_{i\in {\mathbb{N}}}$ are i.i.d. copies of $\{N^{o}(t), t \geq 0\}$ with $N^{o,i}(0)=I_i$. The process $\{ S(t), t\ge 0\}$ with $S(0)=0$ is the arrival process of new particles, associated with a non-decreasing sequence $(T_i)_{i\in\mathbb{N}}$ with $T_0=0$, where $T_i$ represents the ith immigration time, and with inter-arrival times $(T_i-T_{i-1})_{i\in \mathbb{N}^*}$. In other words, $N^{o,i}(t-T_i)$ is the vector of the numbers of particles of each type at time t generated from the $I_i(j)$ particles of type j, $j=1,\ldots,k$, arriving at $T_i$. An underlying convention is that $N^{o,i}_j(t)=0$ when $t<0$ for $j=1,\ldots,k$; moreover, $\{ S(t), \ t\ge 0\}$ and $\{N^{o,i}(t),\ t\ge 0\}$, $i\in \mathbb{N}$, are mutually independent processes. Hence N(t) is a continuous-time multitype branching process with immigration given by the process $\{ S(t), \ t\ge 0\}$.
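The dynamics behind (2.6) can be illustrated by a small event-driven simulation. The sketch below makes simplifying assumptions not taken from the paper: homogeneous Poisson immigration, single-particle immigrant batches of type 1, and a toy two-type subcritical offspring law. It is meant only to show how immigration and branching combine, not to reproduce the NHPP/CPP settings studied later.

```python
import random

def simulate_branching_immigration(mu, offspring, lam, horizon, rng):
    """Event-driven sketch of the immigration model (2.6).

    mu[i]             : death rate of type-(i+1) particles
    offspring(i, rng) : samples the offspring vector of a dying type-(i+1) particle
    lam               : immigration rate; homogeneous Poisson for simplicity
                        (the paper's NHPP/CPP variants only change how the
                        immigration epochs are drawn); each immigrant batch
                        is one type-1 particle here.
    Returns the vector of counts N(horizon).
    """
    k = len(mu)
    counts = [0] * k
    t = 0.0
    while True:
        death_rates = [mu[i] * counts[i] for i in range(k)]
        rate = lam + sum(death_rates)
        t += rng.expovariate(rate)
        if t > horizon:
            return counts
        if rng.random() * rate < lam:
            counts[0] += 1                        # immigration event
        else:
            r = rng.random() * sum(death_rates)   # pick the dying particle's type
            for i in range(k):
                r -= death_rates[i]
                if r <= 0:
                    break
            counts[i] -= 1
            for j, y in enumerate(offspring(i, rng)):
                counts[j] += y

# Toy subcritical mechanism: a dying particle produces, with some probability,
# one particle of the other type, and otherwise nothing.
def offspring(i, rng):
    if i == 0:
        return (0, 1) if rng.random() < 0.5 else (0, 0)
    return (1, 0) if rng.random() < 0.3 else (0, 0)

rng = random.Random(1)
totals = [sum(simulate_branching_immigration([1.0, 1.0], offspring, 2.0, 30.0, rng))
          for _ in range(400)]
mean_total = sum(totals) / 400
```

In this subcritical example the mean total population stabilizes near the stationary value $\lambda\,\mathbb{E}[\text{lineage lifetime}]\approx 3.5$, illustrating the balance between immigration and extinction of lineages.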

3. Immigration modeled by a non-homogeneous Poisson process (NHPP)

We assume in this section that $\{ S(t), t\ge 0\}$ is an NHPP with intensity $\lambda(t)> 0 $ for $t \ge 0$ being locally integrable, and set $\Lambda(t)\;:\!=\; \int_0^t \lambda(y) \,{\textrm{d}} y$ for $t\ge 0$.
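A standard way to sample such an NHPP with a bounded intensity is Lewis-Shedler thinning: propose homogeneous Poisson epochs at a dominating rate and accept each proposal with probability proportional to the current intensity. The sketch below uses a hypothetical bounded "seasonal" intensity (not from the paper) and checks the sampler against $\mathbb{E}[S(t)]=\Lambda(t)$.

```python
import math
import random

def simulate_nhpp(intensity, lam_max, horizon, rng):
    """Arrival times of an NHPP with intensity(t) <= lam_max on [0, horizon],
    sampled by Lewis-Shedler thinning: propose homogeneous Poisson(lam_max)
    epochs and accept each with probability intensity(t) / lam_max."""
    t, arrivals = 0.0, []
    while True:
        t += rng.expovariate(lam_max)
        if t > horizon:
            return arrivals
        if rng.random() < intensity(t) / lam_max:
            arrivals.append(t)

# Hypothetical "seasonal" intensity, bounded above by lam_max = 3.
intensity = lambda t: 2.0 + math.sin(t)
rng = random.Random(0)
counts = [len(simulate_nhpp(intensity, 3.0, 10.0, rng)) for _ in range(3000)]
mean_count = sum(counts) / len(counts)
# Lambda(10) = int_0^10 (2 + sin y) dy = 20 + (1 - cos 10).
Lambda_10 = 2.0 * 10.0 + (1.0 - math.cos(10.0))
```

The empirical mean count at the horizon should match $\Lambda(10)$, since $S(t)$ is Poisson with mean $\Lambda(t)$.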

To study the asymptotic behavior of N(t) in (2.6) when $t\rightarrow +\infty$, we first need the CF of N(t). The following result is an easy extension of [Reference Durham12, equation (2)]; see also [Reference Mitov, Yanev and Hyrien24, Theorem 1] for a similar result that concerns the probability generating function of N(t).

Lemma 3.1. The CF of N(t) in (2.6) admits the following expression:

(3.1) \begin{equation}\varphi_t(s)=\mathbb{E}[{\textrm{e}}^{\langle {\textrm{i}} s, N(t)\rangle}]=\exp \biggl\{ \int^t_0 [\varphi^o_{t-x} (s)-1]\lambda(x)\,{\textrm{d}} x \biggr\}=\exp \biggl\{ \int^t_0 [\varphi^o_x (s)-1]\lambda(t-x)\,{\textrm{d}} x \biggr\}\end{equation}

for all $ s \in \mathbb{R}^k$.

Proof. Since, given $S(t)=n$, $(T_1,\ldots,T_n)$ is distributed as the order statistics $(U_{(1)},\ldots,U_{(n)})$ of independent random variables $(U_1,\ldots,U_n)$ with common density

\begin{equation*} \frac{\lambda(y)}{\Lambda(t)} \mathbb{1}_{[0,t]}(y) , \end{equation*}

we find that

\begin{equation*}\varphi_t(s)=\sum^\infty_{n=0} \mathbb{E} \biggl[\!\exp \biggl\{\biggl\langle {\textrm{i}} s, \sum^n_{l=1}N^{o,l} (t-U_{(l)})\biggr\rangle\biggr\} \biggr] \,{\textrm{e}}^{-\Lambda (t)} \dfrac{(\Lambda (t))^n}{n!}.\end{equation*}

Since

\begin{equation*}\sum^n_{l=1}N^{o,l} (t-U_{(l)})=\sum^n_{l=1}N^{o,l} (t-U_l)\end{equation*}

and by independence of $(U_1,\ldots,U_n)$ and the process $\{N^{o,l}(t), t\geq 0\}$, we obtain

\begin{equation*}\varphi_t(s)=\sum^\infty_{n=0} \biggl\{\dfrac{1}{\Lambda(t)} \int^t_0 \mathbb{E}[\!\exp(\langle {\textrm{i}} s, N^o(t-y)\rangle)] \lambda(y)\,{\textrm{d}} y \biggr\}^n \,{\textrm{e}}^{-\Lambda (t)} \dfrac{(\Lambda (t))^n}{n!}.\end{equation*}

A change of variable $x\;:\!=\;t-y$ concludes the proof.

The following results show that the normalized process converges towards different limits depending on the assumptions regarding the intensity of the arrival process.

Theorem 3.1. Suppose that the intensity $ \lambda(t)$ of the NHPP $\{S(t),\ t \ge 0 \}$ satisfies that ${\textrm{e}}^{-\rho t} \lambda(t)$ is integrable. Then

(3.2) \begin{equation}{\textrm{e}}^{-\rho t} N(t) \stackrel{\mathcal{D}}{\longrightarrow} \int_0^\infty \,{\textrm{e}}^{-\rho z} \,{\textrm{d}}\mathcal{Y}^W_z, \quad t\rightarrow +\infty,\end{equation}

where $ \{\mathcal{Y}^W_t,\ t\ge 0\}$ is a non-homogeneous compound Poisson process with intensity $ \lambda (y)$ and jumps distributed as Wv.

Proof. The proof is given in Section 3.1.

Although Theorem 3.1 is valid whatever the sign of the eigenvalue $\rho$, it is especially interesting in the supercritical case $\rho>0$, as ${\textrm{e}}^{-\rho t} \lambda(t)\in L^1({\mathbb{R}}_+)$ then allows the intensity $\lambda(t)$ to grow exponentially, for example as ${\textrm{e}}^{\delta t}$ for $0\le \delta<\rho$. Theorem 3.1 is less interesting in the critical case $\rho=0$ or the subcritical case $\rho < 0$, as the condition ${\textrm{e}}^{-\rho t} \lambda(t)\in L^1({\mathbb{R}}_+)$ then roughly means that the intensity tends to 0, possibly very fast. Hence the following supplementary results show how the renormalized process N(t) converges in distribution or in probability:

  • when the intensity grows exponentially in the critical or subcritical case,

  • when the intensity grows exponentially as ${\textrm{e}}^{\delta t}$ with $\delta\ge \rho$ in the supercritical case, complementing Theorem 3.1.

Theorem 3.2. Let us suppose that the intensity $ \lambda(t)$ of the NHPP $\{S(t),\ t \ge 0 \}$ satisfies $\lambda(t)\sim \lambda_\infty \,{\textrm{e}}^{\delta t}$ as $t\to +\infty$ for some $\delta\ge 0$ and $\lambda_\infty >0$. Then the following convergences hold as $t\to+\infty$:

(3.3) \begin{align} N(t) & \stackrel{\mathcal{D}}{\longrightarrow} \nu \quad \text{if } \rho<0 \text{ and } \delta=0, \end{align}
(3.4) \begin{align} {\textrm{e}}^{-\delta t} N(t) & \stackrel{{\mathbb P}}{\longrightarrow} \lambda_\infty (\delta \operatorname{Id} -A)^{-1} \mathbb{E}(I) \quad \text{if } [\rho \le 0 \text{ and } \delta>0] \text{ or } [\rho>0 \text{ and } \delta>\rho], \end{align}
(3.5) \begin{align} {\textrm{e}}^{-\delta t} \dfrac{N(t)}{t} & \stackrel{{\mathbb P}}{\longrightarrow} \lambda_\infty uv'\,\mathbb{E}(I) \quad \text{if } \rho=\delta >0, \end{align}

where $\operatorname{Id}$ is the identity matrix and $\nu$ is a distribution on $\mathbb{R}^k_+$ with CF given by

\begin{equation*}\int_{\mathbb{R}^k_+} \,{\textrm{e}}^{\langle {\textrm{i}} s,x\rangle}\nu({\textrm{d}} x)=\exp \biggl\{ \lambda_\infty \int^{+\infty}_0 [\varphi^o_y(s)-1] \,{\textrm{d}} y\biggr\},\quad s \in \mathbb{R}^k.\end{equation*}

Proof. The proof is presented in Section 3.2.

In the following, we are especially interested in the critical case $\rho=0$. From Theorems 3.1 and 3.2, if $ \lambda(t)$ is integrable then N(t) converges in distribution to $\lim_{t\to +\infty}\mathcal{Y}^W_t$ in (3.2), and if $\lambda(t)\sim \lambda_\infty \,{\textrm{e}}^{\delta t}$ with $\delta>0$ then we have convergence in probability of ${\textrm{e}}^{-\delta t}N(t)$ in (3.4). An intermediate case is worth exploring, in which $\lambda(t)$ neither has explosive behavior nor, roughly speaking, converges to 0. This is the case if the associated Cesàro limit $\lim_{t\to \infty}\Lambda(t)/t=\lambda_\infty $ exists, and an additional convergence result may then be obtained. Before detailing this result, we introduce the following quantities:

(3.6) \begin{align} Q&\;:\!=\; \dfrac{1}{2}\sum^k_{i,\ell,n=1} \dfrac{\partial^2 h_i}{\partial z_\ell \partial z_n}(1,\ldots,1) u_\ell u_n v_i,\notag \\ \beta_j&\;:\!=\; \biggl(\sum^k_{\ell=1} \mu_\ell^{-1} u_\ell v_\ell \biggr)\dfrac{u_j}{Q},\quad j=1,\ldots,k,\end{align}
(3.7) \begin{align} \bar{\beta} &\;:\!=\; \sum_{j=1}^k \mathbb{E}(I(j))\beta_j,\end{align}
(3.8) \begin{align} c&\;:\!=\; \dfrac{ \left(\sum^k_{\ell=1} \mu_\ell^{-1} u_\ell v_\ell \right)^2}{Q},\end{align}

where we recall that $h_i(z)=h_i(z_1,\ldots,z_k)$ is the generating function associated with $(p_i(\mathbf{n}))_{\mathbf{n} \in \mathbb{N}^k}$ given in (2.1). We note that the assumption that $(Y_{1}^{(i)},\ldots,Y_{k}^{(i)})$ is not zero for some $i\in \{1,\ldots,k\}$ entails that Q is positive. The following result holds.

Theorem 3.3. Let us assume that the moments of all orders of the random vector $(Y_{1}^{(i)},\ldots,Y_{k}^{(i)})$ exist for all $i=1,\ldots,k$ and that the intensity admits a finite Cesàro limit $\lambda_\infty=\lim_{t\to \infty}\Lambda(t)/t > 0$. When $\rho=0$ (critical case), we have convergence in distribution, that is,

(3.9) \begin{equation}\dfrac{N(t)}{t}\stackrel{\mathcal{D}}{\longrightarrow} \mathcal{Z} v\otimes \mu^{-1} , \quad t\rightarrow +\infty,\end{equation}

where $\mathcal{Z}$ is a random variable distributed as $\Gamma (\lambda_\infty \bar{\beta}, c)$ with $v\otimes \mu^{-1} =(v_1 \mu_1^{-1},\ldots,v_k \mu_k^{-1})$, $\bar{\beta}$ and c given by (3.7) and (3.8) respectively. Here $\Gamma (\alpha, \theta)$ denotes the gamma distribution with shape parameter $\alpha$ and rate parameter $\theta$.

Proof. The proof is presented in Section 3.3.

Thus it turns out that in the critical case the supports of the limits in (3.2), (3.4), and (3.9) are included in the positive half-lines spanned by v, $(\delta \operatorname{Id} -A)^{-1} \mathbb{E}(I)$, and $v\otimes \mu^{-1}$, respectively. It is worth pointing out that all three results concern the critical case, but with different assumptions on the intensity function.
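The constants in (3.6)-(3.8) can be evaluated explicitly for a toy critical mechanism (not from the paper): two types with unit death rates, each dying particle producing, with probability 1/2, either two particles of the other type or nothing. In this case $\rho=0$, $u=(1/2,1/2)'$, and $v=(1,1)'$, and the sketch below computes $Q$, $\beta_j$, and c, representing the second derivatives of $h_i$ at $(1,\ldots,1)$ by the corresponding (factorial) moments of the offspring law.

```python
import numpy as np

# Critical two-type example (hypothetical): a type-i particle dies at rate 1
# and, with probability 1/2 each, leaves either nothing or two particles of
# the *other* type.
mu = np.array([1.0, 1.0])
m = np.array([[0.0, 1.0], [1.0, 0.0]])      # mean offspring matrix
A = (m - np.eye(2)) * mu                    # (2.4); largest eigenvalue is 0

u = np.array([0.5, 0.5])                    # A u = 0, <u, 1> = 1
v = np.array([1.0, 1.0])                    # v'A = 0, <u, v> = 1

# Hessians of the generating functions h_i at (1,...,1):
# H[i, l, n] = d^2 h_i / dz_l dz_n, i.e. E[Y_l Y_n] off-diagonal and the
# factorial moment E[Y_l (Y_l - 1)] on the diagonal.
H = np.zeros((2, 2, 2))
H[0, 1, 1] = 0.5 * 2 * 1                    # type 1: two type-2 children w.p. 1/2
H[1, 0, 0] = 0.5 * 2 * 1                    # type 2: two type-1 children w.p. 1/2

Q = 0.5 * np.einsum('iln,l,n,i->', H, u, u, v)   # (3.6), first line
S = np.sum(u * v / mu)                            # sum_l mu_l^{-1} u_l v_l
beta = S * u / Q                                  # beta_j in (3.6)
c = S ** 2 / Q                                    # (3.8)
```

For this example the sketch gives $Q=1/4$, $\beta_1=\beta_2=2$, and $c=4$; with single-particle immigrant batches of either type, $\bar\beta=2$, so Theorem 3.3 would give a $\Gamma(2\lambda_\infty,4)$ limit.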

Remark 3.1. When the intensity $\lambda(t)$ is a constant $\lambda$, Theorem 3.3 is a particular case of [Reference Weiner34, Theorem 2], which considers general inter-arrival times, with a slight change of notation (in that reference $\mu_i$ stands for the mean lifetime of a type i particle, as opposed to $\mu_i^{-1}$ in this paper). See also [Reference Vatutin31, Theorem 1] for a similar result. When $\lambda(t)$ converges to some limit $\lambda_\infty$, it converges towards the same limit in the sense of Cesàro, and the limit in distribution (3.9) corresponds to [Reference Mitov, Yanev and Hyrien24, Theorem 8]. However, in this paper we introduce a different approach to the proof of Theorem 3.3 (given in Section 3.3). In contrast to [Reference Weiner34], which proved the result by showing that the joint moments of $N(t)/t$ converge, our approach requires neither renewal arguments nor related results. In particular, we start directly from the CF (3.1), conveniently expressed in Lemma 3.1, and study its convergence. A similar approach was adopted by Mitov, Yanev, and Hyrien [Reference Mitov, Yanev and Hyrien24], although they started the proof from a uniform estimate from [Reference Sevast’yanov29] for the probability generating function of $\{N^o(t),\ t\ge 0 \}$.

Remark 3.2. From (3.44) in the proof of Theorem 3.3, the limiting distribution of (3.9) admits an integral form similar to the right-hand side of (3.2), which is available by applying Campbell’s formula (the details are given at the beginning of Section 3.1). Indeed, one can check the equality in distribution of $\mathcal{Z}v\otimes \mu^{-1} $ and $\int^{+\infty}_0 \,{\textrm{e}}^{- t} \,{\textrm{d}}\mathcal{Y}^c_t$, where $\{\mathcal{Y}^c_t, t\geq 0\}$ is a compound Poisson process with intensity $\lambda_\infty \bar{\beta}$ and jumps distributed as $ \chi v\otimes \mu^{-1} $ with $\chi\sim \mathcal{E}(c)$.

We now proceed to the proofs of Theorems 3.1, 3.2, and 3.3.

3.1 Proof of Theorem 3.1

We start from the CF in (3.1), which entails that the CF of ${\textrm{e}}^{-\rho t} N(t)$ is given by

\begin{equation*}\varphi_t(s\,{\textrm{e}}^{-\rho t})= \exp \biggl\{ \int^t_0 [\varphi^o_x (s\,{\textrm{e}}^{-\rho t})-1] \lambda(t-x)\,{\textrm{d}} x \biggr\} ,\quad s\in \mathbb{R}^k.\end{equation*}

The main difficulty in the proof is to show the following convergence:

(3.10) \begin{equation}\int^t_0 [\varphi^o_{x} (s\,{\textrm{e}}^{-\rho t})-1]\lambda(t-x)\,{\textrm{d}} x \longrightarrow \int^{+\infty}_0 [\varphi_{Wv}(s \,{\textrm{e}}^{-\rho x})-1] \lambda(x ) \,{\textrm{d}} x,\quad t\rightarrow +\infty,\ s\in \mathbb{R}^k,\end{equation}

where $\varphi_{Wv}$ is the CF of Wv. By Campbell’s formula (see [Reference Kyprianou20, formula (2.9), Theorem 2.7] or [Reference Jeanblanc, Yor and Chesney18, Exercise 8.6.3.11]), $\exp \{ \int^{+\infty}_0 [\varphi_{Wv} (s\,{\textrm{e}}^{-\rho y})-1]\lambda(y ) \,{\textrm{d}} y\}$ is the CF of $\int_0^\infty \,{\textrm{e}}^{-\rho z} \,{\textrm{d}}\mathcal{Y}^W_z$, where $ \{\mathcal{Y}^W_t,\ t\ge 0\}$ is a non-homogeneous compound Poisson process with intensity $ \lambda(y)$ and jumps distributed as Wv. Hence we have convergence in distribution of ${\textrm{e}}^{-\rho t} N(t)$ towards $\int_0^\infty \,{\textrm{e}}^{-\rho z} \,{\textrm{d}}\mathcal{Y}^W_z$ if (3.10) holds. This proves (3.2).
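Campbell's formula can be checked numerically in a simplified setting (a homogeneous intensity and exponential jumps, both hypothetical): the mean of the discounted compound Poisson integral $\int_0^\infty {\textrm{e}}^{-\rho z}\,{\textrm{d}}\mathcal{Y}_z = \sum_i {\textrm{e}}^{-\rho T_i} J_i$ should equal $\lambda\,\mathbb{E}[J]/\rho$.

```python
import math
import random

def discounted_cpp_integral(lam, rho, jump, horizon, rng):
    """One sample of sum_i e^{-rho T_i} J_i, a truncated version of the
    discounted compound Poisson integral, with homogeneous Poisson epochs
    T_i (rate lam) and i.i.d. jumps J_i drawn from jump(rng)."""
    t, x = 0.0, 0.0
    while True:
        t += rng.expovariate(lam)
        if t > horizon:
            return x
        x += math.exp(-rho * t) * jump(rng)

rng = random.Random(7)
lam, rho = 2.0, 1.0
samples = [discounted_cpp_integral(lam, rho, lambda r: r.expovariate(1.0), 30.0, rng)
           for _ in range(10000)]
mean_x = sum(samples) / len(samples)
# Campbell's formula: E X = E[J] * int_0^infty e^{-rho z} lam dz = lam / rho.
```

The truncation at a large horizon is harmless here since the discount factor ${\textrm{e}}^{-\rho t}$ makes the tail contribution negligible.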

In order to prove (3.10) we need to exploit the convergence $N^o(y) \,{\textrm{e}}^{-\rho y} \longrightarrow Wv$ a.s. as $y \rightarrow +\infty$ given in Lemma 2.1. Studying (3.10) is equivalent to analyzing the limit as $t\rightarrow +\infty$ of

(3.11) \begin{equation}Q_t \;:\!=\; \int^{t}_0 [\varphi^o_{t-x} (s\,{\textrm{e}}^{-\rho t})-1] \lambda(x) \,{\textrm{d}} x = \int^{t}_0 \{ \mathbb{E}[\!\exp (\langle {\textrm{i}} s, N^o(t-x)\,{\textrm{e}}^{-\rho t}\rangle)]-1 \} \lambda(x) \,{\textrm{d}} x .\end{equation}

That is, $Q_t$ may be expressed as

\begin{equation*} Q_t=Q_{1,t}+Q_{2,t},\end{equation*}

where

(3.12) \begin{align} Q_{1,t}&\;:\!=\; \int^t_0 \biggl\{ \mathbb{E}\biggl[\!\exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N^o(t-x)}{{\textrm{e}}^{\rho (t-x)}}\,{\textrm{e}}^{-\rho x}\biggr\rangle\biggr) - \exp(\langle {\textrm{i}} s, Wv \,{\textrm{e}}^{-\rho x}\rangle)\biggr] \biggr\} \lambda(x) \,{\textrm{d}} x, \end{align}
(3.13) \begin{align} Q_{2,t}&\;:\!=\; \int^t_0 \{ \mathbb{E}[\!\exp (\langle {\textrm{i}} s, Wv \,{\textrm{e}}^{-\rho x}\rangle)]-1 \} \lambda(x) \,{\textrm{d}} x.\end{align}

We examine the limits of (3.12) and (3.13) separately, showing that (3.12) tends to 0 and that (3.13) tends to the right-hand side of (3.10).

Limit of $Q_{1,t}$ in (3.12) as $t\rightarrow +\infty$. We shall utilize the following basic inequality in the subsequent proof:

(3.14) \begin{equation}|{\textrm{e}}^{{\textrm{i}} a}-{\textrm{e}}^{{\textrm{i}} b}|\leq |a-b|,\quad a\in \mathbb{R},~b\in \mathbb{R} ,\end{equation}

and also we have $|{\textrm{e}}^{{\textrm{i}} a}-{\textrm{e}}^{{\textrm{i}} b}| \leq 2$. Hence $|{\textrm{e}}^{{\textrm{i}} a}-{\textrm{e}}^{{\textrm{i}} b}|\leq |a-b| \wedge 2$ for $a\in \mathbb{R}$ and $b\in \mathbb{R}$. We then deduce that
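
As a quick numerical sanity check (an illustration only, over an arbitrary grid), the combined bound can be verified directly:

```python
import numpy as np

# Check |e^{ia} - e^{ib}| <= |a - b| ∧ 2 on a grid of (a, b) pairs.
grid = np.linspace(-10.0, 10.0, 401)
a, b = np.meshgrid(grid, grid)
lhs = np.abs(np.exp(1j * a) - np.exp(1j * b))
rhs = np.minimum(np.abs(a - b), 2.0)
# The identity |e^{ia} - e^{ib}| = 2|sin((a-b)/2)| makes both bounds immediate.
print(np.max(lhs - rhs))  # non-positive, up to floating-point rounding
```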

(3.15) \begin{align}|Q_{1,t}| &\leq \int^{t}_0 \mathbb{E}\biggl[\bigg| \biggl\langle s, \dfrac{N^o(t-x)}{{\textrm{e}}^{\rho (t-x)}}\,{\textrm{e}}^{-\rho x}\biggr\rangle- \langle s, Wv \,{\textrm{e}}^{-\rho x}\rangle\bigg|\wedge 2 \biggr] \lambda(x)\,{\textrm{d}} x\notag \\[5pt] & = \int^{\infty}_0 \mathbb{1}_{[0\leq x \leq t]} \mathbb{E}\biggl[\bigg| \biggl\langle s, \dfrac{N^o(t-x)}{{\textrm{e}}^{\rho (t-x)}}\,{\textrm{e}}^{-\rho x}\biggr\rangle- \langle s, Wv \,{\textrm{e}}^{-\rho x}\rangle\bigg|\wedge 2 \biggr] \lambda(x)\,{\textrm{d}} x .\end{align}

We will show, using the dominated convergence theorem, that (3.15) tends to zero as $t \to +\infty$. First, the pointwise convergence in Lemma 2.1, together with the bounded convergence theorem (the expectand is bounded by 2), shows that the integrand goes to zero, that is,

\begin{equation*}\mathbb{1}_{[0\leq x \leq t]} \mathbb{E} \biggl[ \bigg| {\textrm{e}}^{-\rho x} \biggl\langle s, \dfrac{N^o(t-x)}{{\textrm{e}}^{\rho (t-x)}}\biggr\rangle-{\textrm{e}}^{-\rho x} \langle s, Wv\rangle \bigg|\wedge 2 \biggr] \lambda(x)\longrightarrow 0,\quad t\rightarrow \infty ,\quad \text{for all } x\ge 0 .\end{equation*}

We now find an upper bound function $f(x) \geq 0$ of this integrand such that $\int^{+\infty}_0 f(x) \,{\textrm{d}} x < +\infty$. Recall that u is an eigenvector with positive entries $u_i$ for $i=1,\ldots,k$ such that $Au=\rho u$ (where the elements of the matrix A are defined in (2.4)). Since $u_i >0$ for all i, there exists some constant $\kappa>0$ which is large enough satisfying

(3.16) \begin{equation}0\leq |s_j| \leq \kappa u_j \quad \text{for all } j=1,\ldots,k,\end{equation}

where we recall that the vector $s=(s_1,\ldots,s_k)$ is fixed. For example, $\kappa$ can be chosen as $\max_{j=1,\ldots,k} |s_j|/ u_j$. Also, note that $\mathbb{E}[ (X+Y) \wedge 2] \leq (\mathbb{E}[X]+\mathbb{E}[Y]) \wedge 2$ for non-negative random variables X and Y. Using these results, the martingale property of $\{\langle u, N^o(t)\,{\textrm{e}}^{-\rho t}\rangle, t\geq 0\}$, and the notation $|s|\;:\!=\;(|s_1|,\ldots,|s_k|)$, we conclude that the integrand is bounded as

(3.17) \begin{align}&\mathbb{1}_{[0\leq x \leq t]} \biggl\{\biggl( {\textrm{e}}^{-\rho x} \mathbb{E} \biggl[\bigg| \biggl\langle s, \dfrac{N^o(t-x)}{{\textrm{e}}^{\rho (t-x)}}\biggr\rangle\bigg|\biggr]+ {\textrm{e}}^{-\rho x} \mathbb{E}[ |\langle s, Wv\rangle|] \biggr)\wedge 2\biggr\} \lambda(x)\notag \\[5pt] & \quad \leq \mathbb{1}_{[0\leq x \leq t]} \biggl\{\biggl( {\textrm{e}}^{-\rho x} \mathbb{E} \biggl[ \biggl\langle |s|, \dfrac{N^o(t-x)}{{\textrm{e}}^{\rho (t-x)}}\biggr\rangle\biggr]+ {\textrm{e}}^{-\rho x} \mathbb{E}[ \langle |s|, Wv\rangle] \biggr)\wedge 2\biggr\} \lambda(x) \notag \\[5pt] & \quad \leq \mathbb{1}_{[0\leq x \leq t]} \biggl\{ \biggl( {\textrm{e}}^{-\rho x} \kappa \ \mathbb{E} \biggl[ \biggl\langle u, \dfrac{N^o(t-x)}{{\textrm{e}}^{\rho (t-x)}}\biggr\rangle\biggr]+ {\textrm{e}}^{-\rho x} \mathbb{E}[\langle |s|, Wv\rangle]\biggr)\wedge 2\biggr\} \lambda(x),\end{align}

where the first inequality holds because ${N^o(t-x)}{\,{\textrm{e}}^{-\rho (t-x)}}$ and Wv have non-negative entries, so that $|\langle s, z\rangle|\leq \langle |s|, z\rangle$ for such vectors z, and the last inequality is due to (3.16). The first expectation in (3.17) is $\mathbb{E} [ \langle u, N^o(0)/{\textrm{e}}^{\rho \times 0}\rangle]$ by the martingale property, which in turn equals $\langle u, \mathbb{E}(I)\rangle$ since $N^o(0)=I$. The second expectation is some finite constant. Therefore we conclude that, for some constants $K>0$ and $K^\ast>0$, (3.17) is bounded as

\begin{align*}&\mathbb{1}_{[0\leq x \leq t]} \biggl\{ \biggl( {\textrm{e}}^{-\rho x} \kappa \mathbb{E} \biggl[ \biggl\langle u, \dfrac{N^o(t-x)}{{\textrm{e}}^{\rho (t-x)}}\biggr\rangle\biggr]+ {\textrm{e}}^{-\rho x} \mathbb{E}[\langle |s|, Wv\rangle] \biggr) \wedge 2\biggr\} \lambda(x)\\[5pt] & \quad = \mathbb{1}_{[0\leq x \leq t]} [(\kappa \ \langle u, \mathbb{E}(I)\rangle {\textrm{e}}^{-\rho x}+K{\textrm{e}}^{-\rho x}) \wedge 2]\lambda(x) \\[5pt] &\quad \leq K^\ast \,{\textrm{e}}^{-\rho x} \lambda(x)\\[5pt] &\quad \;:\!=\;f(x).\end{align*}

We have now shown that the integrand in (3.15) tends to 0 as $t\rightarrow +\infty$ for fixed x and is dominated by the function f(x), which is integrable by assumption. Therefore, by the dominated convergence theorem, (3.15) goes to 0 as $t\rightarrow \infty$, which implies that $Q_{1,t}$ in (3.12) satisfies $\lim_{t\rightarrow \infty } Q_{1,t}=0$.

Limit of $Q_{2,t}$ in (3.13) as $t\rightarrow +\infty$. Let us show that $ | \{ \varphi_{Wv} (s\,{\textrm{e}}^{-\rho x})-1 \} \lambda(x) |$ is upper-bounded by some integrable function. With the help of (3.14), the following inequality holds:

\begin{equation*}|\!\exp (\langle {\textrm{i}} s, Wv \,{\textrm{e}}^{-\rho x }\rangle)-1| \leq |\langle s, Wv \,{\textrm{e}}^{-\rho x}\rangle| \leq {\textrm{e}}^{-\rho x} \langle |s|, Wv\rangle.\end{equation*}

We then arrive at the following bound:

\begin{equation*}| \{ \varphi_{Wv} (s\,{\textrm{e}}^{-\rho x})-1 \} \lambda(x) | = | \{ \mathbb{E}[\!\exp (\langle {\textrm{i}} s, Wv \,{\textrm{e}}^{-\rho x}\rangle)]-1 \} \lambda(x)|\le {\textrm{e}}^{-\rho x} \lambda(x) \mathbb{E} [ \langle |s|, Wv\rangle],\end{equation*}

which is indeed integrable by the integrability assumption on $ {\textrm{e}}^{-\rho x} \lambda(x)$. Then, by the dominated convergence theorem, we find that

\begin{equation*}Q_{2,t}\longrightarrow \int^{+\infty}_0 \{\varphi_{Wv} (s\,{\textrm{e}}^{-\rho x})-1 \}\lambda(x)\,{\textrm{d}} x,\quad t\rightarrow \infty .\end{equation*}

Hence, combining the limits of $Q_{1,t}$ and $Q_{2,t}$ proves (3.10), which completes the proof.

3.2 Proof of Theorem 3.2

Proving the limit of $N(t)$ in (3.3) as $t\to +\infty$. First we recall that $\rho<0$ and $\delta=0$, and that the intensity satisfies $\lim_{t \rightarrow \infty}\lambda(t)=\lambda_\infty$. We proceed to show that $\int^{t}_0 [\varphi^o_x (s)-1]\lambda(t-x) \,{\textrm{d}} x$ in Lemma 3.1 converges to $\lambda_\infty\int^{\infty}_0 [\varphi^o_x (s)-1] \,{\textrm{d}} x$ as $t\to +\infty$ by a dominated convergence argument. Let $A_\lambda\ge 0$ be large enough that $C_\lambda \;:\!=\;\sup_{y\ge A_\lambda} \lambda(y)$ is finite, and consider $t\ge A_\lambda$. We start by writing

(3.18) \begin{align} &\int^{t}_0 [\varphi^o_x (s)-1]\lambda(t-x) \,{\textrm{d}} x \notag \\ &\quad = \int^{\infty}_0 \mathbb{1}_{[0\le x\le t]}[\varphi^o_x (s)-1]\lambda(t-x) \,{\textrm{d}} x \notag \\&\quad = \int^{\infty}_0 \mathbb{1}_{[0\le x\le t-A_\lambda]}[\varphi^o_x (s)-1]\lambda(t-x) \,{\textrm{d}} x + \int^{\infty}_0 \mathbb{1}_{[t-A_\lambda < x\le t]}[\varphi^o_x (s)-1]\lambda(t-x) \,{\textrm{d}} x.\end{align}

Let us study each term on the right-hand side of (3.18). Using the inequality in (3.16), we find

(3.19) \begin{align} |\varphi^o_x(s)-1| & = |\mathbb{E}[{\textrm{e}}^{\langle {\textrm{i}} s, N^o(x)\rangle}]-1| \notag \\[2pt] &\le \mathbb{E}[ |{\textrm{e}}^{\langle {\textrm{i}} s,N^o(x)\rangle}-1|] \notag \\[2pt] &\le \mathbb{E} [|\langle s,N^o(x)\rangle|] \notag \\[2pt] &\le \mathbb{E}[\langle |s|,N^o(x)\rangle] \notag \\[2pt] & \leq \kappa \mathbb{E} [\langle u, N^o(x)\rangle] \notag \\[2pt] &= \kappa \,{\textrm{e}}^{\rho x} \mathbb{E}[\langle u, N^o(x) \,{\textrm{e}}^{-\rho x}\rangle] \notag \\[2pt] &= \kappa \,{\textrm{e}}^{\rho x} \mathbb{E}[\langle u, N^o(0)\rangle]\notag \\[2pt] &= \kappa \,{\textrm{e}}^{\rho x} \langle u,\mathbb{E} (I)\rangle,\end{align}

where we recall that $\kappa$ can be taken as $\max_{j=1,\ldots,k} |s_j|/u_j$. Since $\lambda(\cdot)$ is locally integrable (so $\int_0^{A_\lambda}\lambda (x) \,{\textrm{d}} x$ is finite), and ${\textrm{e}}^{\rho x} \le {\textrm{e}}^{\rho (t-A_\lambda)}$ for $t-A_\lambda\le x$ and $\rho<0$, the second term on the right-hand side of (3.18) thus verifies

(3.20) \begin{align} &\biggl| \int^{\infty}_0 \mathbb{1}_{[t-A_\lambda< x\le t]}[\varphi^o_x (s)-1]\lambda(t-x) \,{\textrm{d}} x\biggr| \notag \\[3pt] &\quad \le \int^{\infty}_0 \mathbb{1}_{[t-A_\lambda < x\le t]}|\varphi^o_x (s)-1|\lambda(t-x) \,{\textrm{d}} x \notag \\[3pt] &\quad \le \kappa \langle u,\mathbb{E} (I)\rangle \int^{\infty}_0 \mathbb{1}_{[t-A_\lambda< x\le t]} \,{\textrm{e}}^{\rho x} \lambda(t-x) \,{\textrm{d}} x \notag \\[3pt] &\quad \le \kappa \langle u,\mathbb{E} (I)\rangle {\textrm{e}}^{\rho (t-A_\lambda)} \int^{\infty}_0 \mathbb{1}_{[t-A_\lambda < x\le t]} \lambda(t-x) \,{\textrm{d}} x \notag \\[3pt] &\quad = \kappa \langle u,\mathbb{E} (I)\rangle {\textrm{e}}^{\rho (t-A_\lambda)} \int_0^{A_\lambda}\lambda (x) \,{\textrm{d}} x \notag \\[3pt] &\quad \longrightarrow 0,\quad t\to\infty ,\ t\ge A_\lambda.\end{align}

Concerning the first term on the right-hand side of (3.18), from (3.19) and the definition of $C_\lambda$ we find that

(3.21) \begin{align}\mathbb{1}_{[0\le x\le t-A_\lambda]} |\varphi^o_x(s)-1|\lambda(t-x) & \le \mathbb{1}_{[0\le x\le t-A_\lambda]} \kappa \,{\textrm{e}}^{\rho x} \langle u,\mathbb{E} (I)\rangle \lambda(t-x) \notag \\& \le C_\lambda \kappa \,{\textrm{e}}^{\rho x} \langle u,\mathbb{E} (I)\rangle .\end{align}

Since $\rho<0$, the right-hand side of (3.21) is integrable, and

\begin{equation*}\lim_{t\to +\infty} \mathbb{1}_{[0\le x\le t-A_\lambda]}[\varphi^o_x (s)-1]\lambda(t-x)=[\varphi^o_x (s)-1]\lambda_\infty,\end{equation*}

the dominated convergence theorem entails that the first term on the right-hand side of (3.18) converges to $\lambda_\infty\int^{\infty}_0 [\varphi^o_x (s)-1] \,{\textrm{d}} x$ as $t\to +\infty$. Hence, combining with the limit obtained in (3.20), we deduce that the limit of the left-hand side of (3.18) as $t\to\infty$ is $\lambda_\infty\int^{\infty}_0 [\varphi^o_x (s)-1] \,{\textrm{d}} x$. We now argue that the function

\begin{equation*}s\mapsto \exp \biggl( \lambda_\infty \int^{\infty}_0 [\varphi^o_x (s)-1] \,{\textrm{d}} x\biggr)\end{equation*}

is continuous on a neighborhood of $0\in \mathbb{R}^k$, for instance on $(-1,1)^k$, which by Lévy’s continuity theorem implies that

\begin{equation*}\exp \biggl( \lambda_\infty \int^{\infty}_0 [\varphi^o_x (s)-1] \,{\textrm{d}} x\biggr)\end{equation*}

is the CF of some distribution $\nu$ on $\mathbb{R}^k_+$. To be more precise, we observe that when $s\in (-1,1)^k$, the upper bound $\kappa$ in (3.21) can be chosen independently of $s\in (-1,1)^k$ as $\max_{j=1,\ldots,k} 1/u_j$; a dominated convergence argument then implies this continuity property.

Proving the limit of ${\textrm{e}}^{-\delta t}N(t)$ in (3.4) as $t\to +\infty$. We assume here that $\rho\le 0$ and $\delta>0$ or that $\rho> 0$ and $\delta>\rho$. First, the CF of ${\textrm{e}}^{-\delta t}N(t)$ is given as ${\textrm{e}}^{R_t}$, where $R_t$ can be written as

(3.22) \begin{align} R_t&= \int^{t}_0 [\varphi^o_{t-x} (s\,{\textrm{e}}^{-\delta t})-1] \lambda(x) \,{\textrm{d}} x \notag \\ &= \int^{t}_0 \{ \mathbb{E}[\!\exp (\langle {\textrm{i}} s\,{\textrm{e}}^{-\delta t}, N^o(t-x)\rangle)]-1 \} \lambda(x) \,{\textrm{d}} x = R_{1,t}+R_{2,t},\end{align}
(3.23) \begin{align}R_{1,t}&\;:\!=\; \int_0^\infty \mathbb{1}_{[0\le x\le t]}\mathbb{E} [\!\exp ( \langle {\textrm{i}} s\,{\textrm{e}}^{-\delta t}, N^o(t-x)\rangle )- 1 - \langle {\textrm{i}} s\,{\textrm{e}}^{-\delta t}, N^o(t-x)\rangle ]\lambda(x) \,{\textrm{d}} x, \end{align}
(3.24) \begin{align}R_{2,t}&\;:\!=\; \int_0^\infty \mathbb{1}_{[0\le x\le t]}\mathbb{E}[ \langle {\textrm{i}} s\,{\textrm{e}}^{-\delta t}, N^o(t-x)\rangle]\lambda(x) \,{\textrm{d}} x .\end{align}

Similarly to (3.11), we then study the limits of $R_{1,t}$ and $R_{2,t}$ separately as $t\to +\infty$.

Limit of $R_{1,t}$ in (3.23) as $t\to +\infty$. It will be shown that $R_{1,t}$ tends to 0 as $t\to +\infty$. Let us first note that for all $x\in\mathbb{R}$ we have $|{\textrm{e}}^{{\textrm{i}} x}-1-{\textrm{i}} x|\le |x|^2$ if $|x|\le 1$, and (using $|{\textrm{e}}^{{\textrm{i}} x}-1|\le |x|$) $|{\textrm{e}}^{{\textrm{i}} x}-1-{\textrm{i}} x|\le |{\textrm{e}}^{{\textrm{i}} x}-1| + |{\textrm{i}} x|\le 2 |x|\le 2|x|^2$ if $|x|> 1$, so that we have the following general inequality:

\begin{equation*}|{\textrm{e}}^{{\textrm{i}} x}-1-{\textrm{i}} x|\le 2|x|^2\quad \text{for all } x\in \mathbb{R}.\end{equation*}

The above result, combined with the Cauchy–Schwarz inequality, yields the upper bound for $|R_{1,t}|$ given by

(3.25) \begin{align}|R_{1,t}| &\le 2\int_0^\infty \mathbb{1}_{[0\le x\le t]}\mathbb{E}[ |\langle s\,{\textrm{e}}^{-\delta t}, N^o(t-x)\rangle|^2]\lambda(x)\, {\textrm{d}} x \notag \\&\le 2\|s\|^2\int_0^\infty \mathbb{1}_{[0\le x\le t]}\,{\textrm{e}}^{-2\delta t}\mathbb{E}[ \|N^o(t-x)\|^2]\lambda(x)\, {\textrm{d}} x.\end{align}

Here we separate the cases $\rho<0$, $\rho=0$, and $\rho>0$, with the last case requiring the additional constraint $\delta>\rho$. If $\rho<0$, the growth rate in Lemma A.1 (with a proof in Appendix A) implies that $\mathbb{E}[ \|N^o(t-x)\|^2]\le C \,{\textrm{e}}^{\rho(t-x)}$ for some constant $C>0$. Also, the assumption $\lambda(x)\sim \lambda_\infty \,{\textrm{e}}^{\delta x}$, as $x\to\infty$, in particular implies that $\lambda(x)$ is bounded by ${\textrm{e}}^{\delta x}$ up to a constant, hence we get from (3.25) that for some (different) constant $C>0$

\begin{align*} |R_{1,t}| &\le C \int_0^\infty \mathbb{1}_{[0\le x\le t]}\,{\textrm{e}}^{-2\delta t} \,{\textrm{e}}^{\rho(t-x)}\,{\textrm{e}}^{\delta x} \,{\textrm{d}} x \\ &= C \,{\textrm{e}}^{(-2\delta+\rho)t} \int_0^t \,{\textrm{e}}^{(-\rho+\delta)x}\,{\textrm{d}} x \notag \\ &= \dfrac{C}{-\rho + \delta}[{\textrm{e}}^{-\delta t}- {\textrm{e}}^{(-2\delta+\rho)t}]\notag \\ &\longrightarrow 0,\quad \text{as } t\to +\infty .\end{align*}

If $\rho=0$, the growth rate in Lemma A.1 implies that $\mathbb{E}[ \|N^o(t-x)\|^2]$ is ${\textrm{O}}((t-x)+1)$, hence for some constant $C>0$ we have

\begin{align*} |R_{1,t}| &\le C \int_0^\infty \mathbb{1}_{[0\le x\le t]}\,{\textrm{e}}^{-2\delta t} [(t-x) +1]\,{\textrm{e}}^{\delta x} \,{\textrm{d}} x\\[2pt] &\le C(t+1) \int_0^t \,{\textrm{e}}^{-2\delta t} \,{\textrm{e}}^{\delta x} \,{\textrm{d}} x \\[2pt] &= \dfrac{C( t+1)}{\delta} [{\textrm{e}}^{-\delta t}-{\textrm{e}}^{-2\delta t}]\\[2pt] &\longrightarrow 0,\quad \text{as } t\to +\infty .\end{align*}

Finally, if $\rho>0$ then the growth rate in Lemma A.1 implies that $\mathbb{E}[ \|N^o(t-x)\|^2]$ is less than ${\textrm{e}}^{2\rho (t-x)}$ up to a constant, hence for some constant $C>0$ we have

(3.26) \begin{equation}|R_{1,t}| \le C \int_0^\infty \mathbb{1}_{[0\le x\le t]}\,{\textrm{e}}^{-2\delta t} \,{\textrm{e}}^{2\rho(t-x)}\,{\textrm{e}}^{\delta x} \,{\textrm{d}} x = C \,{\textrm{e}}^{2(-\delta+\rho)t} \int_0^t \,{\textrm{e}}^{(-2\rho+\delta)x}\,{\textrm{d}} x,\end{equation}

which can be shown to approach 0 as $t\to +\infty$, using $\delta>\rho$.
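
For completeness, the integral in (3.26) can be computed explicitly by separating the cases $\delta\neq 2\rho$ and $\delta=2\rho$:

\begin{equation*}C \,{\textrm{e}}^{2(-\delta+\rho)t} \int_0^t \,{\textrm{e}}^{(-2\rho+\delta)x}\,{\textrm{d}} x=\begin{cases}\dfrac{C}{\delta-2\rho}\,[{\textrm{e}}^{-\delta t}-{\textrm{e}}^{2(\rho-\delta)t}] & \text{if } \delta\neq 2\rho,\\[5pt]C\, t \,{\textrm{e}}^{2(\rho-\delta)t} & \text{if } \delta= 2\rho.\end{cases}\end{equation*}

In both cases every exponent in t is negative (recall $\rho>0$ and $\delta>\rho$, so $\delta>0$ and $\rho-\delta<0$), and hence the bound vanishes as $t\to +\infty$.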

Limit of $R_{2,t}$ in (3.24) as $t\to+\infty$, and conclusion. From [Reference Athreya and Ney5, page 208], we know that the mean of the multitype process $N^o(z)$ is given by $\mathbb{E}[N^o(z)]={\textrm{e}}^{Az} \mathbb{E}(I)$, where the matrix A is defined in (2.4). Therefore $R_{2,t}$ can be expressed, after some manipulation, as

(3.27) \begin{align}R_{2,t} & = \int_0^t \,{\textrm{e}}^{-\delta t}\langle {\textrm{i}} s, {\textrm{e}}^{A(t-x)} \mathbb{E}(I)\rangle \lambda(x) \,{\textrm{d}} x \notag \\[2pt] &= \int_0^\infty \mathbb{1}_{[0\le x\le t]} \,{\textrm{e}}^{-\delta t}\langle {\textrm{i}} s, {\textrm{e}}^{Ax} \mathbb{E}(I)\rangle \lambda(t-x) \,{\textrm{d}} x. \end{align}

We now wish to use the dominated convergence theorem in order to find the limit in (3.27). Upper-bounding $\lambda(x)$ by $C \,{\textrm{e}}^{\delta x}$ for some constant $C>0$ results in

\begin{equation*} | \mathbb{1}_{[0\le x\le t]} \,{\textrm{e}}^{-\delta t}\langle {\textrm{i}} s, {\textrm{e}}^{Ax} \mathbb{E}(I)\rangle \lambda(t-x) | \le C\langle |s|, {\textrm{e}}^{(A-\delta \operatorname{Id})x} \mathbb{E}(I)\rangle,\end{equation*}

which is integrable because $\delta>\rho$ (in either case $\rho\le 0$ or $\rho>0$) so that all eigenvalues of $A-\delta \operatorname{Id}$ have negative real parts. Also, the assumption that $\lambda(x)\sim \lambda_\infty \,{\textrm{e}}^{\delta x}$, as $x\to\infty$, results in

\begin{equation*}\mathbb{1}_{[0\le x\le t]} \,{\textrm{e}}^{-\delta t}\langle {\textrm{i}} s, {\textrm{e}}^{Ax} \mathbb{E}(I)\rangle \lambda(t-x)\longrightarrow \lambda_\infty \langle {\textrm{i}} s, {\textrm{e}}^{(A-\delta \textrm{Id})x} \mathbb{E}(I)\rangle,\quad\text{as $t\to +\infty$,}\ \text{for all $x\ge 0$.}\end{equation*}

Hence we deduce from (3.27) that

\begin{align*} R_{2,t} &\longrightarrow \lambda_\infty \int_0^\infty \langle {\textrm{i}} s, {\textrm{e}}^{(A-\delta \operatorname{Id})x}\; \mathbb{E}(I)\rangle \,{\textrm{d}} x \\ & = \biggl\langle {\textrm{i}} s, \lambda_\infty \int_0^\infty \,{\textrm{e}}^{(A-\delta \operatorname{Id})x} \,{\textrm{d}} x \; \mathbb{E}(I)\biggr\rangle\\& =\langle {\textrm{i}} s, \lambda_\infty (\delta \operatorname{Id}-A)^{-1} \mathbb{E}(I)\rangle.\end{align*}

Since $R_{1,t}\longrightarrow 0$ as $t\to +\infty$, we arrive at the convergence of the CF of ${\textrm{e}}^{-\delta t}N(t)$ to $\exp(\langle {\textrm{i}} s, \lambda_\infty (\delta \operatorname{Id}-A)^{-1} \mathbb{E}(I)\rangle)$, so that ${\textrm{e}}^{-\delta t}N(t)$ converges in distribution (or, equivalently, in probability) towards $\lambda_\infty (\delta \operatorname{Id}-A)^{-1} \mathbb{E}(I)$. Hence the second case of (3.4) is proved.
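
The identity $\int_0^\infty \,{\textrm{e}}^{(A-\delta \operatorname{Id})x} \,{\textrm{d}} x=(\delta \operatorname{Id}-A)^{-1}$ used in the last step can be checked numerically; the $2\times 2$ matrix, $\delta$, and quadrature grid below are illustrative choices (any non-negative matrix whose Perron root $\rho$ satisfies $\delta>\rho$ would do), not data from the model.

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.4, 0.1]])       # illustrative matrix; Perron root rho = 0.5
delta = 1.0                       # delta > rho, so A - delta*Id is stable

# Matrix exponential of (A - delta*Id)x via eigendecomposition
# (this A - delta*Id has distinct real eigenvalues, -0.5 and -1.2).
w, V = np.linalg.eig(A - delta * np.eye(2))
Vinv = np.linalg.inv(V)

def expm(x):
    return (V * np.exp(w * x)) @ Vinv

# Left Riemann sum approximating the integral over [0, 60]
# (the tail beyond 60 is of order e^{-0.5*60}, i.e. negligible).
dx = 0.01
integral = sum(expm(x) for x in np.arange(0.0, 60.0, dx)) * dx

target = np.linalg.inv(delta * np.eye(2) - A)
print(np.max(np.abs(integral - target)))  # small discretization error
```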

Proving the limit of ${\textrm{e}}^{-\delta t} {N(t)}/{t}$ in (3.5) as $t\to +\infty$. We assume the supercritical case $\rho>0$ and $\rho=\delta$. In this case, instead of (3.22), we consider the quantity

\begin{equation*}R_t\;:\!=\;\int^{t}_0 [\varphi^o_{t-x} (s\,{\textrm{e}}^{-\delta t}/t)-1] \lambda(x) \,{\textrm{d}} x=\int^{t}_0 \{ \mathbb{E}[\!\exp (\langle {\textrm{i}} s\,{\textrm{e}}^{-\delta t}/t, N^o(t-x)\rangle)]-1 \} \lambda(x) \,{\textrm{d}} x\end{equation*}

such that the CF of ${\textrm{e}}^{-\delta t} {N(t)}/{t}$ is equal to ${\textrm{e}}^{R_t}$, which is similarly decomposed as in (3.23) and (3.24) as $R_t=R_{1,t}+R_{2,t}$, where

(3.28) \begin{align} R_{1,t}&\;:\!=\; \int_0^\infty \mathbb{1}_{[0\le x\le t]}\mathbb{E}[\!\exp( \langle {\textrm{i}} s\,{\textrm{e}}^{-\delta t}/t, N^o(t-x)\rangle)- 1 - \langle {\textrm{i}} s\,{\textrm{e}}^{-\delta t}/t, N^o(t-x)\rangle]\lambda(x) \,{\textrm{d}} x, \end{align}
(3.29) \begin{align} R_{2,t}&\;:\!=\; \int_0^\infty \mathbb{1}_{[0\le x\le t]}\mathbb{E}[ \langle {\textrm{i}} s\,{\textrm{e}}^{-\delta t}/t, N^o(t-x)\rangle]\lambda(x) \,{\textrm{d}} x .\end{align}

Utilizing the inequality in (3.26), we obtain, for some constant $C>0$,

\begin{equation*}|R_{1,t}| \le \dfrac{1}{t^2} C \int_0^\infty \mathbb{1}_{[0\le x\le t]}\,{\textrm{e}}^{-2\delta t} \,{\textrm{e}}^{2\rho(t-x)}\,{\textrm{e}}^{\delta x} \,{\textrm{d}} x = C \dfrac{1}{t^2} \int_0^t \,{\textrm{e}}^{-\delta x}\,{\textrm{d}} x\longrightarrow 0,\end{equation*}

thus (3.28) tends to 0 as $t\to +\infty$. Using the expression $\mathbb{E}[N^o(z)]={\textrm{e}}^{Az}\mathbb{E}(I)$ as in (3.27) and the change of variable $z\;:\!=\;1-x/t$ together with the assumption $\delta=\rho$, we can rewrite (3.29) as

(3.30) \begin{align}R_{2,t} & = \dfrac{1}{t}\int_0^t \,{\textrm{e}}^{-\delta t}\langle {\textrm{i}} s, {\textrm{e}}^{A(t-x)} \mathbb{E}(I)\rangle \lambda(x) \,{\textrm{d}} x \notag \\&= \biggl\langle {\textrm{i}} s, \int_0^1 \,{\textrm{e}}^{A t(1-z)}\,{\textrm{e}}^{-\rho t} \lambda(tz) \,{\textrm{d}} z \; \mathbb{E}(I)\biggr\rangle\notag \\&= \biggl\langle {\textrm{i}} s, \int_0^1 \,{\textrm{e}}^{(A- \rho \operatorname{Id}) t(1-z)}\,{\textrm{e}}^{-\rho t z} \lambda(tz) \,{\textrm{d}} z \; \mathbb{E}(I)\biggr\rangle.\end{align}

We now wish to investigate (3.30) when $t\to +\infty$. Since A is regular, the Perron–Frobenius theory entails that ${\textrm{e}}^{(A-\rho \operatorname{Id})x}$ converges to $uv'$ as $x\to +\infty$; see e.g. [Reference Athreya and Ney5, limit (17)]. Hence, for all $z\in (0,1)$, we have $\lim_{t\to \infty}\,{\textrm{e}}^{(A-\rho \operatorname{Id}) t(1-z)}=uv'$. Also, the assumption $\lambda(x)\sim \lambda_\infty \,{\textrm{e}}^{\delta x}$ as $x\to\infty$ with $\delta=\rho$ implies that $\lim_{t\to \infty} \,{\textrm{e}}^{-\rho t z} \lambda(tz) =\lambda_\infty$ for all $z\in (0,1)$, so that by the dominated convergence theorem we may let $t\to +\infty$ in (3.30) and obtain

\begin{equation*}R_{2,t}\longrightarrow \langle {\textrm{i}} s, \lambda_\infty uv'\; \mathbb{E}(I)\rangle,\quad t\to +\infty.\end{equation*}

Therefore, since $R_t=R_{1,t}+R_{2,t}$ with $\lim_{t\to \infty}R_{1,t}=0$, we conclude the convergence (3.5).

3.3 Proof of Theorem 3.3 in the critical case $\rho=0$

We start with Lemma 3.1, from which we deduce that the CF of $N(t)/t$ admits the expression

(3.31) \begin{align}\mathbb{E}\biggl[\!\exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N(t)}{t}\biggr\rangle\biggr)\biggr] = \mathbb{E}[\!\exp (\langle t^{-1}{\textrm{i}} s, N(t)\rangle)]= \exp \biggl\{ \int^t_0 [ \varphi^o_{t-y}(t^{-1}s)-1]\lambda(y) \,{\textrm{d}} y \biggr\}.\end{align}

We thus study

(3.32) \begin{align}\int^t_0 [ \varphi^o_{t-y} (t^{-1}s )-1 ] \lambda(y) \,{\textrm{d}} y&= \int^t_0 \mathbb{E}\biggl[\!\exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N^o(t-y)}{t}\biggr\rangle\biggr)-1\biggr] \lambda(y) \,{\textrm{d}} y \notag \\[5pt] &= \int^{\Lambda(t)}_0 \mathbb{E}\biggl[\!\exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N^o(t-\Lambda^{-1}(y))}{t}\biggr\rangle\biggr)-1\biggr] \,{\textrm{d}} y \notag \\[5pt] &= \int^1_0 \Lambda(t)\; \mathbb{E}\biggl[\!\exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N^o( t - \Lambda^{-1} (\Lambda(t) x))}{t}\biggr\rangle\biggr)-1\biggr] \,{\textrm{d}} x\notag \\[5pt] &\;:\!=\;-\int^1_0 \gamma_t(x) \,{\textrm{d}} x,\end{align}

where $\Lambda^{-1}(\cdot)$ is the inverse of the function $\Lambda(\cdot)$ (invertible as it is assumed that $\lambda(t)>0$ for all $t\geq 0$), the second-to-last equality is due to a change of variable with $x\;:\!=\;y/\Lambda(t)$, and $\gamma_t(x)$ is given by

(3.33) \begin{equation}\gamma_t(x)\;:\!=\; \Lambda(t)\; \mathbb{E}\biggl[1-\exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N^o( t - \Lambda^{-1} (\Lambda(t) x))}{t}\biggr\rangle\biggr)\biggr].\end{equation}

We note that the assumption $\lim_{t\to +\infty}\Lambda(t)/t=\lambda_\infty$ implies that $\lim_{t\to +\infty}\Lambda^{-1}(t)/t=\lambda_\infty^{-1}$, which is in turn equivalent to

(3.34) \begin{equation}\Lambda^{-1}(t)\sim \lambda_\infty^{-1} t, \quad \text{that is,}\quad \Lambda^{-1}(t)=\lambda_\infty^{-1} t + \eta(t) t,\end{equation}

where $\lim_{t\to \infty}\eta(t)=0$.

In the following, we shall prove by the dominated convergence theorem that the right-hand side of (3.32) has the following limit:

(3.35) \begin{equation}-\int^1_0 \gamma_t(x) \,{\textrm{d}} x \longrightarrow \lambda_\infty \bar{\beta} \int_0^\infty \mathbb{E} [\!\exp (\langle {\textrm{i}} s, {\textrm{e}}^{-y} \mathcal{X}\rangle)-1 ]\,{\textrm{d}} y, \quad t\to +\infty,\end{equation}

where $ \bar{\beta}$ is given by (3.7). Here we have

(3.36) \begin{equation}\mathcal{X}=\chi v\otimes \mu^{-1} \in [0,+\infty)^k,\end{equation}

where $\chi \sim \mathcal{E}(c) $ for $c>0$ given by (3.8) and the survival function of $\mathcal{X}$ given by

(3.37) \begin{equation}{\mathbb P} ( \mathcal{X} > z)=\exp\biggl( - c \max_{i=1,\ldots,k} \dfrac{z_i}{v_i \mu_i^{-1} }\biggr),\quad z=(z_1,\ldots,z_k)\in [0,+\infty)^k .\end{equation}
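
As a sanity check of (3.37) (an illustration only; the values of c, v, $\mu$, and the test point z below are arbitrary choices), one may simulate $\mathcal{X}=\chi v\otimes\mu^{-1}$ and compare the empirical survival probability with the closed form:

```python
import numpy as np

rng = np.random.default_rng(1)
c = 1.5                                  # illustrative rate of chi ~ Exp(c)
v = np.array([0.6, 0.4])                 # illustrative eigenvector entries v_i
mu = np.array([1.0, 2.0])                # illustrative mu_i
scale = v / mu                           # the vector v ⊗ mu^{-1}

z = np.array([0.3, 0.15])                # test point in [0, +inf)^2
chi = rng.exponential(1.0 / c, size=200000)
X = chi[:, None] * scale                 # each row is one draw of X = chi * scale
empirical = np.mean(np.all(X > z, axis=1))
exact = np.exp(-c * np.max(z / scale))   # formula (3.37)
print(empirical, exact)
```

Since $\mathcal{X}>z$ componentwise exactly when $\chi$ exceeds every ratio $z_i/(v_i\mu_i^{-1})$, the survival function depends on z only through the maximum of these ratios, as (3.37) states.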

The proof is broken down into the following steps.

Step 1: Dominating the integrand in (3.32). Using the basic inequality $|{\textrm{e}}^{{\textrm{i}} x}-1| \leq |x|$, we have for all $t\geq 0$ and $x\in(0,1)$, $s=(s_1,\ldots,s_k) \in \mathbb{R}^k$ that

\begin{equation*}\biggl| 1-\exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N^o( t - \Lambda^{-1} (\Lambda(t) x))}{t}\biggr\rangle\biggr) \biggr| \leq \biggl\langle |s|, \dfrac{N^o( t - \Lambda^{-1} (\Lambda(t) x))}{t}\biggr\rangle.\end{equation*}

Using (3.16), we find

\begin{equation*}\biggl\langle |s|, \dfrac{N^o(t - \Lambda^{-1} (\Lambda(t) x) )}{t}\biggr\rangle \leq \kappa \biggl(\biggl\langle u, \dfrac{N^o(t - \Lambda^{-1} (\Lambda(t) x) )}{t}\biggr\rangle\biggr).\end{equation*}

Hence, taking the expectation and multiplying by $\Lambda(t)$ on both sides results in

(3.38) \begin{align} | \gamma_t(x) | &\leq \Lambda(t) \kappa\ \mathbb{E} \biggl[ \biggl\langle u, \dfrac{N^o( t - \Lambda^{-1} (\Lambda(t) x))}{t} \biggr\rangle\biggr] \notag \\& \le C_\lambda \kappa \ \mathbb{E} [\langle u, N^o(t - \Lambda^{-1} (\Lambda(t) x))\rangle] \notag \\ &= C_\lambda \kappa\ \mathbb{E}[\langle u,N^o(0)\rangle]\notag \\ &=C_\lambda\kappa \langle u, \mathbb{E}(I)\rangle, \quad t\ge t_0, \end{align}

where the first equality is obtained by the martingale argument and $C_\lambda\;:\!=\;\sup_{t\ge t_0}\Lambda(t)/t$, which is finite for a large enough $t_0$. Since $C_\lambda$ and $\kappa$ are constants (independent of t and x), the integrand in (3.32) is dominated by some constant independent of $t\ge t_0$ and $x\in (0,1)$.

Step 2: Pointwise limit of the integrand in (3.32). Let us now prove the following convergence for (3.33):

(3.39) \begin{equation}\gamma_t(x) \longrightarrow \gamma(x)\;:\!=\; \lambda_\infty \dfrac{ \bar{\beta}}{1-x} \mathbb{E} [1-\exp (\langle {\textrm{i}} s, (1-x)\; \mathcal{X}\rangle ) ],\end{equation}

for a fixed $x\in (0,1)$ and $s=(s_1,\ldots,s_k)\in \mathbb{R}^k$ as $t\to +\infty$, where $\bar{\beta}$ is defined by (3.7). To show this, we first express $\gamma_t(x)$ in (3.33) using the representation (2.3) as

(3.40) \begin{align} \gamma_t(x) &= \sum_{\mathbf{n}_0=(\mathbf{n}_0(1),\ldots,\mathbf{n}_0(k))\in \mathbb{N}^k} \gamma_t(x,\mathbf{n}_0) p_{\mathbf{n}_0}, \end{align}

where

(3.41) \begin{align} \gamma_t(x,\mathbf{n}_0) &\;:\!=\; \Lambda(t)\; \biggl[1-\mathbb{E}\biggl(\! \exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N^o( t - \Lambda^{-1} (\Lambda(t) x))}{t}\biggr\rangle\biggr)\biggr|\ N^o(0)=\mathbf{n}_0\biggr)\biggr]\notag \\&\ = \Lambda(t)\; \biggl[ 1-\prod_{j=1}^k \psi^j(t,x)^{\mathbf{n}_0(j)}\biggr], \end{align}
(3.42) \begin{align}\psi^j(t,x)&\;:\!=\; \mathbb{E}\biggl[\!\exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N^j( t - \Lambda^{-1} (\Lambda(t) x))}{t}\biggr\rangle\biggr)\biggr],\quad j=1,\ldots,k.\end{align}

In the following, we define for all $J\subset \{1,\ldots,k\}$ and $z=(z_1,\ldots,z_k)\in \mathbb{R}_+^{*k}$ the vector $z^J$ whose jth entry $z^J_j$ is $z_j$ if $j\in J$, and some arbitrary negative value (e.g. $-1$) otherwise. The following lemma, proved in Appendix B, gives the asymptotic behavior of $\psi^j(t,x)$, $j=1,\ldots,k$, as $t\to \infty$, which helps us prove (3.39).

Lemma 3.2. The following limit holds for all $j=1,\ldots,k$ and $x\in(0,1)$:

(3.43) \begin{equation}t[\psi^j(t,x)-1]\longrightarrow \dfrac{1}{1-x} \beta_j [ 1-\mathbb{E} (\! \exp [\langle {\textrm{i}} s,(1-x) \mathcal{X}\rangle] ) ],\quad t\to \infty.\end{equation}

Note that (3.43) in particular implies that $\psi^j(t,x)-1={\textrm{O}}(1/t)\longrightarrow 0$ as $t\to \infty$, which, plugged into (3.41) together with $\Lambda(t)\sim \lambda_\infty t$ as $t\to +\infty$, yields

\begin{equation*}\gamma_t(x,\mathbf{n}_0) \sim \lambda_\infty\biggl[\sum_{j=1}^k \mathbf{n}_0(j)\; t[\psi^j(t,x)-1]\biggr], \quad t\to +\infty.\end{equation*}

This implies, along with (3.43), that (3.40) converges as

\begin{equation*}\gamma_t(x) \longrightarrow \dfrac{\lambda_\infty }{1-x} \bigg\{\sum_{j=1}^k \beta_j \biggl[\sum_{\mathbf{n}_0 \in \mathbb{N}^k} \mathbf{n}_0(j) p_{\mathbf{n}_0}\biggr]\biggr\} [ 1-\mathbb{E} (\! \exp [\langle {\textrm{i}} s,(1-x) \mathcal{X}\rangle])],\quad t\to +\infty ,\end{equation*}

and thus (3.39) holds.

Step 3: Proof of (3.35). Thanks to (3.38) and (3.39), by the dominated convergence theorem, we thus deduce that (3.32) converges as $t\to +\infty$ to

\begin{equation*}-\int_0^1 \gamma(x) \,{\textrm{d}} x= -\int_0^1 \dfrac{\lambda_\infty \bar{\beta}}{1-x} \mathbb{E}[1-\exp (\langle {\textrm{i}} s, (1-x)\; \mathcal{X}\rangle) ] \,{\textrm{d}} x,\end{equation*}

which results in (3.35) after a change of variable $y\;:\!=\;-\ln (1-x)$.

Step 4: End of proof. From (3.31) with the convergence results of (3.32) towards (3.35), we find that

(3.44) \begin{equation}\mathbb{E}\biggl[\!\exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N(t)}{t}\biggr\rangle\biggr)\biggr] \longrightarrow\exp\biggl( \lambda_\infty \bar{\beta} \int_0^\infty\mathbb{E}[\!\exp (\langle {\textrm{i}} s, {\textrm{e}}^{-y} \mathcal{X}\rangle)-1 ]\,{\textrm{d}} y \biggr),\quad t\rightarrow +\infty,\end{equation}

for $ s\in \mathbb{R}^k$. Since $\mathcal{X}=\chi v\otimes \mu^{-1} $ with $\chi\sim \mathcal{E}(c)$, we compute that

\begin{equation*}\mathbb{E} [\!\exp (\langle {\textrm{i}} s, {\textrm{e}}^{-t} \mathcal{X}\rangle )-1 ]= \mathbb{E} [\!\exp (\langle {\textrm{i}} s, v\otimes \mu^{-1} \rangle \chi \,{\textrm{e}}^{-t} )-1 ] =\dfrac{{\textrm{e}}^{-t} \langle {\textrm{i}} s, v\otimes \mu^{-1} \rangle}{c- {\textrm{e}}^{-t} \langle {\textrm{i}} s, v\otimes \mu^{-1} \rangle}.\end{equation*}

In turn, a change of variable $z\;:\!=\;{\textrm{e}}^{-t} \langle {\textrm{i}} s, v\otimes \mu^{-1} \rangle$ shows that the right-hand side of the above convergence equals

\begin{equation*}\biggl(\dfrac{c}{c-\langle {\textrm{i}} s, v\otimes \mu^{-1} \rangle} \biggr)^{\lambda_\infty \bar{\beta}},\end{equation*}

which is indeed the CF of $\mathcal{Z} v\otimes \mu^{-1} $ in (3.9). This completes the proof.
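
Numerically, one can confirm by quadrature against the Gamma density that $({c}/({c-{\textrm{i}} s}))^{\alpha}$ is the CF of a Gamma distribution with shape $\alpha$ and rate c, which is what identifies the law of $\mathcal{Z}$. The values of $\alpha$ (standing in for $\lambda_\infty \bar{\beta}$) and c below are illustrative.

```python
import numpy as np
from math import gamma as gamma_fn

alpha, c = 2.5, 1.5                     # illustrative shape and rate
dx = 1e-4
x = np.arange(0.0, 60.0, dx)            # e^{-1.5*60} tail is negligible
density = c**alpha * x**(alpha - 1) * np.exp(-c * x) / gamma_fn(alpha)

errs = []
for s in (0.3, 1.0, -2.0):
    cf_numeric = np.sum(density * np.exp(1j * s * x)) * dx  # Riemann sum of E[e^{isZ}]
    cf_closed = (c / (c - 1j * s)) ** alpha
    errs.append(abs(cf_numeric - cf_closed))
print(max(errs))  # small quadrature error
```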

4. Immigration modeled by a contagious Poisson process (CPP)

As discussed in Section 1, the CPP is a special case of the GPP, which has become a well-known contagion model when the transition intensity in the non-homogeneous birth process is a linear function of the current state multiplied by a function of the current time. In this section we assume that the arrival process $\{S(t),\ t\ge 0 \}$ is a CPP, and we note that $S(0^-)=0$. We first consider the GPP and obtain the CF of N(t); the corresponding expression for the CPP then follows. It is known that the GPP (or a positive contagion model in [Reference Bühlmann8] and [Reference Willmot35]) is a particular case of a self-exciting counting process with intensity rate $\lambda(t)$ satisfying

(4.1) \begin{equation}\lambda(t)=[aS(t^-)+b]\lambda_t,\quad a >0,\ b> 0, \end{equation}

for some underlying function $\lambda_t>0$ for $t\in (0,+\infty)$ which is assumed to be continuous and integrable over finite ranges. Let us denote $\Lambda_t=\int_0^t \lambda_y \,{\textrm{d}} y$ for $t\geq 0$. For $b=1$, this arrival process was referred to by Le Gat [Reference Le Gat23] as the linear extension of the Yule process. Hence the intensity increases linearly with the number of arrivals at time t. This explains why these models could be appropriate for situations where the arriving particles representing cells infected by a rapidly expanding disease contaminate other cells in an organism modeled by a certain network mechanism, or where the occurrence of shocks causes outages of interconnected lines in a power system, as studied in [Reference Qi, Ju and Sun26]. In particular, we shall focus on the case when $\lambda_t=\lambda >0$ is constant. In this case, $\{S(t),\ t\ge 0 \}$ is called a CPP in [Reference Allison1] and [Reference Wasserman32].
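
To make the contagion mechanism concrete, here is a minimal simulation sketch of a CPP with constant $\lambda_t=\lambda$: from state n the next waiting time is exponential with rate $(an+b)\lambda$, and the empirical mean of S(t) can be compared with the negative binomial mean $(b/a)({\textrm{e}}^{a\lambda t}-1)$ implied by the marginal distribution recalled in the proof of Lemma 4.1 below. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
a, b, lam, t = 0.5, 1.0, 1.0, 2.0    # illustrative parameters

def simulate_count():
    """Number of CPP arrivals in [0, t]: from state n the waiting time
    is Exp((a*n + b)*lam), so the intensity grows with each arrival."""
    clock, n = 0.0, 0
    while True:
        clock += rng.exponential(1.0 / ((a * n + b) * lam))
        if clock > t:
            return n
        n += 1

counts = np.array([simulate_count() for _ in range(20000)])
exact_mean = (b / a) * (np.exp(a * lam * t) - 1.0)
print(counts.mean(), exact_mean)
```

The mean also follows directly by taking expectations in (4.1): $m'(t)=a\lambda m(t)+b\lambda$ with $m(0)=0$ solves to $m(t)=(b/a)({\textrm{e}}^{a\lambda t}-1)$.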

Let us start by establishing the CF of N(t), as was done in Lemma 3.1 for NHPP immigration.

Lemma 4.1. When new particles arrive according to a GPP with the intensity rate given in (4.1), the CF of N(t) in (2.6) admits the following expression:

(4.2) \begin{equation} \varphi_t(s)=\biggl\{1- \int^t_0 [\varphi^o_{t-y}(s)-1] a \lambda_y \,{\textrm{e}}^{a \Lambda_y} \,{\textrm{d}} y \biggr\}^{-b/a}. \end{equation}

In particular, when $\lambda_t=\lambda$ in (4.1) (i.e. the CPP case), (4.2) simplifies to

(4.3) \begin{equation} \varphi_t(s)=\biggl\{1-\int^t_0 [\varphi^o_{t-y}(s)-1] a \lambda \,{\textrm{e}}^{a \lambda y} \,{\textrm{d}} y \biggr\}^{-b/a}. \end{equation}

Proof. It is known that the marginal distribution of S(t) is expressed as a negative binomial distribution with $\Lambda_t=\int_0^t \lambda_y \,{\textrm{d}} y$ (e.g. [Reference Cha9, Theorem 1(i)]) given by

\begin{equation*} p_t(n)\;:\!=\;\mathbb{P}(S(t)=n)=\dfrac{\Gamma(b/a+n)}{\Gamma(b/a)n!} (1-{\textrm{e}}^{-a \Lambda_t})^n ({\textrm{e}}^{-a \Lambda_t})^{b/a}, \end{equation*}

where $\Gamma(z)=\int^\infty_0 x^{z-1}\,{\textrm{e}}^{-x}\,{\textrm{d}} x$, for all complex numbers z with positive real part, is the gamma function. The above is a negative binomial distribution (r,p), where $r=b/a$ and $p=1-{\textrm{e}}^{-a \Lambda_t}$. Its probability generating function is

\begin{equation*} P_t(z)=\sum^\infty_{n=0} z^n p_t(n)=\biggl(\dfrac{1-p}{1-pz}\biggr)^r \end{equation*}

for all complex numbers z verifying $|z|<p^{-1}$.
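As a numerical aside (illustrative parameters only, not part of the proof), the pmf $p_t(n)$ and its pgf above can be cross-checked directly; the log-gamma function is used to keep the gamma-function ratios numerically stable.

```python
import math

def nb_pmf(n, r, p):
    """p_t(n) = Gamma(r+n)/(Gamma(r) n!) * p^n * (1-p)^r, via log-gamma."""
    log_coef = math.lgamma(r + n) - math.lgamma(r) - math.lgamma(n + 1)
    return math.exp(log_coef + n * math.log(p) + r * math.log1p(-p))

a, b, lam, t = 0.5, 1.0, 1.0, 1.0     # arbitrary illustrative values
r = b / a
p = 1.0 - math.exp(-a * lam * t)      # Lambda_t = lam * t for constant lambda_t

total = sum(nb_pmf(n, r, p) for n in range(200))
z = 0.5
pgf_series = sum(z**n * nb_pmf(n, r, p) for n in range(200))
pgf_closed = ((1.0 - p) / (1.0 - p * z)) ** r
print(total, pgf_series, pgf_closed)
```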

Then, from [Reference Landriault, Willmot and Xu21, Section 3.2], the CF of N(t) can be expressed as a compound negative binomial distribution

(4.4) \begin{equation} \varphi_t(s)=P_t(\widetilde{f}_t(s)), \end{equation}

where the CF of the secondary distribution is given by

(4.5) \begin{equation} \widetilde{f}_t(s)=\int^t_0 q_t(y) \varphi^o_{t-y}(s)\,{\textrm{d}} y, \end{equation}

with

(4.6) \begin{equation} q_t(y)=\dfrac{a \lambda_y \,{\textrm{e}}^{a \Lambda_y}}{{\textrm{e}}^{a \Lambda_t}-1},\quad 0\leq y\leq t. \end{equation}

Since

\begin{equation*} P_t(z)=\biggl(\dfrac{1-p}{1-pz}\biggr)^r, \end{equation*}

(4.4) is obtained as

\begin{equation*} \varphi_t(s)=\biggl( \dfrac{{\textrm{e}}^{-a \Lambda_t}} {1-(1-{\textrm{e}}^{-a \Lambda_t}) \widetilde{f}_t(s)}\biggr)^{b/a}=[ {\textrm{e}}^{a \Lambda_t}-({\textrm{e}}^{a\Lambda_t}-1)\widetilde{f}_t(s)]^{-b/a}. \end{equation*}

Using $\int^t_0 a\lambda_y \,{\textrm{e}}^{a \Lambda_y} \,{\textrm{d}} y={\textrm{e}}^{a \Lambda_t}-1$, we find from (4.5) and (4.6) that

\begin{align*}{\textrm{e}}^{a \Lambda_t}-({\textrm{e}}^{a \Lambda_t}-1)\widetilde{f}_t(s)&= \int^t_0 a \lambda_y \,{\textrm{e}}^{a \Lambda_y} \,{\textrm{d}} y + 1 - \int^t_0 a \lambda_y \,{\textrm{e}}^{a \Lambda_y} \varphi^o_{t-y} (s) \,{\textrm{d}} y\\&=1+ \int^t_0 [1-\varphi^o_{t-y}(s)] a \lambda_y \,{\textrm{e}}^{a \Lambda_y} \,{\textrm{d}} y. \end{align*}

That is,

\begin{equation*} \varphi_t(s)=\biggl\{1+ \int^t_0 [1-\varphi^o_{t-y}(s)] a \lambda_y \,{\textrm{e}}^{a \Lambda_y} \,{\textrm{d}} y \biggr\}^{-b/a}, \end{equation*}

or equivalently (4.2).

In the case of a constant baseline intensity $\lambda_t=\lambda$, taking the expectation on both sides of (4.1) yields $\mathbb{E}[\lambda(t)]=a\lambda \mathbb{E}[S(t^-)] + \lambda b$. Since $\mathbb{E}[S(t^-)]=\mathbb{E}[S(t)]$ and $\{S(t)-\int_0^t \lambda(s) \,{\textrm{d}} s,\ t\ge 0\}$ is a martingale (see e.g. [Reference Jeanblanc, Yor and Chesney18, Proposition 8.3.2.1]), we arrive at $\mathbb{E}[\lambda(t)]=a\lambda \int_0^t \mathbb{E}[\lambda(s)] \,{\textrm{d}} s + \lambda b$ for all $t\ge 0$, from which the expected intensity has the closed form

(4.7) \begin{equation}\mathbb{E}[\lambda(t)]= b \lambda \,{\textrm{e}}^{a\lambda t},\quad t\ge 0 .\end{equation}
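A quick numerical sanity check of (4.7) (with arbitrary parameter values): solving the integral equation $\mathbb{E}[\lambda(t)]=a\lambda \int_0^t \mathbb{E}[\lambda(s)] \,{\textrm{d}} s + \lambda b$ by forward Euler on its differentiated form $m'(t)=a\lambda m(t)$, $m(0)=b\lambda$, reproduces the closed form $b\lambda\,{\textrm{e}}^{a\lambda t}$.

```python
import math

# Forward Euler for m'(t) = a*lam*m(t), m(0) = b*lam, the differentiated
# form of E[lambda(t)] = a*lam * int_0^t E[lambda(s)] ds + lam*b.
a, b, lam = 0.5, 1.0, 1.0             # arbitrary illustrative values
t_end, steps = 1.0, 200000
h = t_end / steps
m = b * lam
for _ in range(steps):
    m += h * a * lam * m
closed_form = b * lam * math.exp(a * lam * t_end)   # expression (4.7)
print(m, closed_form)
```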

We note that there is some resemblance between this exponential expression in (4.7) in the GPP case and the exponential asymptotic form $\lambda(t)\sim \lambda_\infty \,{\textrm{e}}^{\delta t}$ of the (deterministic) intensity appearing in Theorems 3.1 and 3.2 in the NHPP case. However, because the intensity is now random in time, the branching process N(t) with CPP immigration exhibits different limiting behaviors. More precisely, in the following it is shown that the distributional behavior changes depending on whether the largest eigenvalue $\rho$ of A is less than, larger than, or equal to $a\lambda$. The main result of this section is given in the following theorem.

Theorem 4.1. Suppose that $\lambda_t=\lambda$, that a is as defined in (4.1), and that $\rho$ is the largest eigenvalue of the matrix A with elements $a_{ij}$ defined by (2.4).

  (1) If $\rho>a\lambda$, then

    (4.8) \begin{equation}{\textrm{e}}^{-\rho t} N(t) \stackrel{\mathcal{D}}{\longrightarrow} \mathcal{Z}_T\ v,\quad t\rightarrow +\infty,\end{equation}
    where v is the left eigenvector of the matrix A, $T\sim \Gamma(b/a,1)$, and $\{ \mathcal{Z}_t,\ t\ge 0\}$ is a Lévy process independent of T with characteristic exponent
    \begin{equation*}\psi(x)\;:\!=\;\int_\mathbb{R} ( 1- \exp [ {\textrm{i}} x z])\Pi ({\textrm{d}} z),\quad x\ge 0.\end{equation*}
    Here $\Pi(\cdot)$ is defined by
    (4.9) \begin{equation}\Pi ({\textrm{d}} z) \;:\!=\; \mathbb{E} [W^{a\lambda/\rho }\mathbb{1}_{[W\ge z]}] \dfrac{a\lambda}{\rho} z^{-a\lambda/\rho-1}\mathbb{1}_{[0<z<+\infty]}\, {\textrm{d}} z ,\end{equation}
    and we recall that W is characterized by its CF given by (2.5).
  (2) If $\rho<a\lambda$, then

    (4.10) \begin{equation}{\textrm{e}}^{-a\lambda t} N(t) \stackrel{\mathcal{D}}{\longrightarrow} {\mathcal{Z}}\ \gamma,\quad t\rightarrow +\infty,\end{equation}
    where $\mathcal{Z}$ is a random variable distributed as $\Gamma(b/a ,1)$ and $\gamma$ is the vector defined by
    (4.11) \begin{equation}\gamma\;:\!=\;a\lambda (a\lambda \operatorname{Id}-A)^{-1}\mathbb{E}(I) .\end{equation}
  (3) If $\rho=a\lambda$, then

    (4.12) \begin{equation}\dfrac{N(t)}{t}\,{\textrm{e}}^{-a\lambda t}\stackrel{\mathcal{D}}{\longrightarrow} \mathcal{Z} v,\quad t\rightarrow +\infty,\end{equation}
    where $\mathcal{Z}$ is a random variable distributed as $\Gamma (b/a, \mathbb{E}[W]a \lambda )$.
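To make case (2) concrete, the following sketch computes the direction vector $\gamma$ of (4.11) for an assumed $2\times 2$ matrix A and immigration mean $\mathbb{E}(I)$ (both illustrative placeholders, not quantities derived in the paper); all that is required is $\rho<a\lambda$.

```python
# gamma = a*lam * (a*lam*Id - A)^{-1} E(I) for an assumed 2x2 example.
# A and EI are illustrative placeholders, not quantities from the model.
a, lam = 0.5, 1.0
A = [[-1.0, 0.5],
     [0.5, -1.0]]    # eigenvalues -0.5 and -1.5, so rho = -0.5 < a*lam = 0.5
EI = [1.0, 0.0]      # on average one type 1 immigrant per arrival

c = a * lam
M = [[c - A[0][0], -A[0][1]],        # M = a*lam*Id - A
     [-A[1][0], c - A[1][1]]]
det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
inv = [[M[1][1] / det, -M[0][1] / det],
       [-M[1][0] / det, M[0][0] / det]]
gamma = [c * (inv[0][0] * EI[0] + inv[0][1] * EI[1]),
         c * (inv[1][0] * EI[0] + inv[1][1] * EI[1])]
print(gamma)   # both entries positive, consistent with a direction in R_+^2
```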

Remark 4.1. In the case $\rho>a\lambda$ we may note that, since $\Pi(\cdot)$ defined by (4.9) has support on $(0,+\infty)$ and verifies $\int_{(0,+\infty)} \min(1,z)\Pi ({\textrm{d}} z) <+\infty$ (precisely because of the condition $ \rho>a\lambda$), the underlying Lévy process $\{ \mathcal{Z}_t,\ t\ge 0\}$ in (4.8) belongs to the class of subordinators according to [Reference Kyprianou20, Lemma 2.14].
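As an informal numerical companion to Remark 4.1, consider the purely illustrative simplification in which W is degenerate at 1; then (4.9) reduces to $\Pi({\textrm{d}} z)=\alpha z^{-\alpha-1}\mathbb{1}_{[0<z<1]}\, {\textrm{d}} z$ with $\alpha=a\lambda/\rho<1$, and the subordinator condition $\int \min(1,z)\Pi ({\textrm{d}} z)<+\infty$ can be checked directly, the integral being $\alpha/(1-\alpha)$.

```python
# Toy check that int min(1, z) Pi(dz) is finite when rho > a*lam.
# Assume (illustration only) W degenerate at 1, so that (4.9) becomes
# Pi(dz) = alpha * z^(-alpha - 1) dz on (0, 1), with alpha = a*lam/rho < 1.
# Then int min(1, z) Pi(dz) = alpha / (1 - alpha), which is finite.
alpha = 0.5                            # a*lam/rho, must be < 1

n = 1_000_000
total = 0.0
for i in range(n):
    z = (i + 0.5) / n                  # midpoint rule on (0, 1)
    total += min(1.0, z) * alpha * z ** (-alpha - 1.0) / n
closed_form = alpha / (1.0 - alpha)
print(total, closed_form)
```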

Remark 4.2. As shown in (4.7), the expected intensity in the present CPP case has an exponential form, hence it is natural to compare the limiting convergence results in Theorem 4.1 to those in Theorems 3.1 and 3.2 in the NHPP case with asymptotic intensity $\lambda(t)\sim \lambda_\infty \,{\textrm{e}}^{\delta t}$ for $\delta>0$. With an analogy between $a\lambda$ and $\delta$, Table 1 compares the directions of the supports of the limiting distributions obtained in (3.4), (3.5), and (3.2) for $\rho<\delta$, $\rho=\delta$, and $\rho>\delta$ in the NHPP case with those in (4.10), (4.12), and (4.8) for $\rho<a\lambda$, $\rho=a\lambda$, and $\rho>a\lambda$ in the CPP case. Each entry of Table 1, all of which belong to $\mathbb{R}_+^k$, roughly indicates the direction along which the renormalized process N(t) is located asymptotically in the corresponding case. Interestingly, the directions coincide except in the cases $\rho=\delta$ (NHPP) and $\rho=a\lambda$ (CPP), where they are given by the vectors u and v respectively.

Table 1. Direction of limiting distribution.

The proofs of the three cases of Theorem 4.1 are provided in Sections 4.1, 4.2, and 4.3, respectively.

4.1 Proof of Theorem 4.1 in the case $\boldsymbol{\rho >} \textbf{a} \boldsymbol{\lambda}$

In (4.3), with the normalizing function $g(t)={\textrm{e}}^{\rho t}$ we get

(4.13) \begin{equation} \varphi_t (s/g(t))=\varphi_t (s \,{\textrm{e}}^{-\rho t})=\biggl\{1+ \int^t_0 [1-\varphi^o_{t-y}(s\,{\textrm{e}}^{-\rho t})] a \lambda \,{\textrm{e}}^{a \lambda y} \,{\textrm{d}} y \biggr\}^{-b/a} \end{equation}

for all $ s \in \mathbb{R}^k$. The proof is divided into two steps as follows.

Step 1: Studying the convergence of $\varphi_t (s \,{\textrm{e}}^{-\rho t})$ as $t\to +\infty$. It is convenient to introduce the function

(4.14) \begin{align}\Xi_{t,s} & \;:\!=\; \int^t_0 [1-\varphi^o_{t-y}(s\,{\textrm{e}}^{-\rho t})] a \lambda \,{\textrm{e}}^{a \lambda y} \,{\textrm{d}} y \notag \\[5pt] & = \int^\infty_0 \mathbb{1}_{[0< y < t]} \mathbb{E} [ 1- \exp ( \langle {\textrm{i}} s, N^o(t-y)/{\textrm{e}}^{\rho(t-y)}\rangle \,{\textrm{e}}^{-\rho y})] a \lambda \,{\textrm{e}}^{a \lambda y} \,{\textrm{d}} y, \end{align}

so that $\varphi_t (s \,{\textrm{e}}^{-\rho t})=\{1+\Xi_{t,s}\}^{-b/a}$, $t\ge 0$. Thus, studying the limit of $\varphi_t (s \,{\textrm{e}}^{-\rho t})$ as $t\to+\infty$ essentially amounts to finding $\lim_{t\to +\infty} \Xi_{t,s}$, which we do via the dominated convergence theorem. First note that for all $y\in (0,+\infty)$ we have $N^o(t-y)/{\textrm{e}}^{\rho(t-y)}\longrightarrow Wv$ a.s. as $t\to \infty$, by Lemma 2.1. Hence, by the dominated convergence theorem, for a fixed $y\in (0,+\infty)$, we have

(4.15) \begin{equation}\mathbb{E} [ 1- \exp ( \langle {\textrm{i}} s, N^o(t-y)/{\textrm{e}}^{\rho(t-y)}\rangle \,{\textrm{e}}^{-\rho y})]\longrightarrow\mathbb{E} [ 1- \exp ( \langle {\textrm{i}} s,v\rangle W\,{\textrm{e}}^{-\rho y})],\quad t\to+\infty .\end{equation}

Using (3.14), the integrand in (4.14) is upper-bounded in modulus as

\begin{align*}& \mathbb{1}_{[0< y< t]}|\mathbb{E} [ 1- \exp ( \langle {\textrm{i}} s, N^o(t-y)/{\textrm{e}}^{\rho(t-y)}\rangle \,{\textrm{e}}^{-\rho y})] a \lambda \,{\textrm{e}}^{a \lambda y}|\\&\quad \le \mathbb{1}_{[0< y< t]}\mathbb{E} [ \langle |s|, N^o(t-y)/{\textrm{e}}^{\rho(t-y)}\rangle \,{\textrm{e}}^{-\rho y}] a \lambda \,{\textrm{e}}^{a \lambda y}\\&\quad =a\lambda \mathbb{1}_{[0< y < t]}\mathbb{E} [ \langle |s|, N^o(t-y)/{\textrm{e}}^{\rho(t-y)}\rangle ] \,{\textrm{e}}^{(a \lambda -\rho)y}.\end{align*}

By a martingale argument similar to the one leading to (3.21), and since Assumption $\mathbf{(A)}$ holds, we can show that $\mathbb{1}_{[0< y< t]}\mathbb{E} [ \langle |s|, N^o(t-y)/{\textrm{e}}^{\rho(t-y)}\rangle]$ is upper-bounded by some constant K independent of t and y. That is,

(4.16) \begin{equation}0 \le \mathbb{1}_{[0< y < t]}|\mathbb{E} [ 1- \exp ( \langle {\textrm{i}} s, N^o(t-y)/{\textrm{e}}^{\rho(t-y)}\rangle \,{\textrm{e}}^{-\rho y})]| a \lambda \,{\textrm{e}}^{a \lambda y}\le a\lambda K \,{\textrm{e}}^{(a \lambda -\rho)y},\end{equation}

which is integrable over $y\in (0,+\infty)$ when $\rho> a\lambda$. Hence, using (4.15), (4.16), and the dominated convergence theorem, we arrive at

(4.17) \begin{equation}\Xi_{t,s}\longrightarrow \Xi_{\infty,s}\;:\!=\;\int_0^\infty \mathbb{E} [ 1- \exp ( \langle {\textrm{i}} s,v\rangle W\,{\textrm{e}}^{-\rho y})] a \lambda \,{\textrm{e}}^{a \lambda y} \,{\textrm{d}} y,\quad t\to +\infty ,\end{equation}

so that the renormalized CF in (4.13) converges as

(4.18) \begin{equation}\varphi_t (s \,{\textrm{e}}^{-\rho t}) \longrightarrow \tilde{\varphi}(s)\;:\!=\; \{1+ \Xi_{\infty,s} \}^{-b/a} ,\quad t\to+ \infty .\end{equation}

Step 2: Identifying the CF $\tilde{\varphi}(s)$. In order to interpret (4.18) as the convergence towards some known distribution, we use the following elementary lemma (its proof is given in Appendix C).

Lemma 4.2. Let $\{ \mathcal{Z}_t,\ t\ge 0\}$ be a Lévy process with characteristic exponent $\psi(x)$ such that $\mathbb{E}[{\textrm{e}}^{{\textrm{i}} x\mathcal{Z}_t}] ={\textrm{e}}^{-t\psi(x)}$ for $ x\in \mathbb{R}$, and let T be a random variable distributed as $\Gamma(\zeta ,1)$, independent from $\{ \mathcal{Z}_t,\ t\ge 0\}$. Then the CF of $\mathcal{Z}_T$ is given by

(4.19) \begin{equation}\mathbb{E}[{\textrm{e}}^{{\textrm{i}} x \mathcal{Z}_T}]= \{1+\psi(x) \}^{-\zeta},\quad x\ge 0 .\end{equation}

The aim is now to write $\tilde{\varphi}(s)$ in (4.18) in the form of (4.19). We first write $\Xi_{\infty,s}$ in (4.17) as

\begin{equation*}\Xi_{\infty,s}=\int_0^\infty \int_0^\infty ( 1- \exp [ \langle {\textrm{i}} s,v\rangle w \,{\textrm{e}}^{-\rho y}]) a \lambda \,{\textrm{e}}^{a \lambda y} \,{\textrm{d}} y \ \mathbb{P}(W\in {\textrm{d}} w).\end{equation*}

Performing the change of variable $z\;:\!=\;w\,{\textrm{e}}^{-\rho y}$ (i.e. $y=-({1}/{\rho}) \ln ({z}/{w})$) within the integral in y, it may be expressed as

\begin{align*}\Xi_{\infty,s} & = \int_0^\infty \int_0^w ( 1- \exp [ \langle {\textrm{i}} s,v\rangle z]) \dfrac{a\lambda}{\rho} \biggl( \dfrac{z}{w} \biggr)^{-a\lambda/\rho } \dfrac{{\textrm{d}} z}{z} \ \mathbb{P}(W\in {\textrm{d}} w)\\[5pt] &= \int_0^\infty ( 1- \exp [ \langle {\textrm{i}} s,v\rangle z]) \biggl\{\int_0^\infty w^{a\lambda/\rho }\mathbb{1}_{[w\ge z]} \mathbb{P}(W\in {\textrm{d}} w)\biggr\} \dfrac{a\lambda}{\rho} z^{-a\lambda/\rho-1} \,{\textrm{d}} z\\[5pt] &= \int_0^\infty ( 1- \exp [ \langle {\textrm{i}} s,v\rangle z])\ \mathbb{E} [ W^{a\lambda/\rho }\mathbb{1}_{[W\ge z]}] \dfrac{a\lambda}{\rho} z^{-a\lambda/\rho-1} \,{\textrm{d}} z \\[5pt] &= \int_\mathbb{R} ( 1- \exp [ \langle {\textrm{i}} s,v\rangle z])\Pi ({\textrm{d}} z),\end{align*}

where the measure $\Pi ({\textrm{d}} z)$ on $(0,+\infty)$ is defined as (4.9). Finally, we get the following expression for (4.18):

\begin{equation*}\tilde{\varphi}(s) = \{1+ \psi(\langle s,v\rangle) \}^{-b/a},\quad s\in \mathbb{R}^k ,\end{equation*}

so that we deduce from Lemma 4.2 the convergence result in (4.8).

4.2 Proof of Theorem 4.1 in the case $\boldsymbol{\rho <} \textbf{a}\boldsymbol{\lambda}$

After a change of variable $y\;:\!=\;t-y$, (4.3) is rewritten as

\begin{equation*}\varphi_t(s)=\biggl\{1+ \int^t_0 [1-\varphi^o_{y}(s)] a \lambda \,{\textrm{e}}^{a \lambda (t-y)} \,{\textrm{d}} y \biggr\}^{-b/a},\quad t\geq 0,\ s \in \mathbb{R}^k.\end{equation*}

Let us consider the normalizing function $g(t)={\textrm{e}}^{a\lambda t}$, so that

(4.20) \begin{equation} \varphi_t (s/g(t))=\varphi_t (s \,{\textrm{e}}^{-a\lambda t})=\biggl\{1+ \int^t_0 [1-\varphi^o_{y}(s\,{\textrm{e}}^{-a\lambda t})] a \lambda \,{\textrm{e}}^{a \lambda (t-y)} \,{\textrm{d}} y \biggr\}^{-b/a}. \end{equation}

In the following, the limit of the integral on the right-hand side of (4.20) is studied in the case $\rho<a\lambda$. First, similarly to (4.14), let

(4.21) \begin{equation}\Xi_{t,s}\;:\!=\; \int^t_0 [1-\varphi^o_{y}(s\,{\textrm{e}}^{-a\lambda t})] a \lambda \,{\textrm{e}}^{a \lambda (t-y)} \,{\textrm{d}} y. \end{equation}

To apply the dominated convergence theorem, let us define

(4.22) \begin{align} \Xi_{t,s,y} &\;:\!=\;\mathbb{1}_{[0< y< t]} [1-\varphi^o_{y}(s\,{\textrm{e}}^{-a\lambda t})] a \lambda \,{\textrm{e}}^{a \lambda (t-y)} = \mathbb{1}_{[0< y < t]}\mathbb{E}[1-{\textrm{e}}^{\langle {\textrm{i}} s, N^o(y)\rangle \,{\textrm{e}}^{-a \lambda t}}] a \lambda \,{\textrm{e}}^{a \lambda (t-y)}. \end{align}

Since

(4.23) \begin{equation} |1-{\textrm{e}}^{\langle {\textrm{i}} s, N^o(y)\rangle \,{\textrm{e}}^{-a \lambda t}} | \leq \; \langle |s|,N^o(y)\rangle \,{\textrm{e}}^{-a \lambda t},\end{equation}

(4.22) is bounded in modulus by

(4.24) \begin{align}|\Xi_{t,s,y}| & \leq \mathbb{1}_{[0< y < t]} \mathbb{E}[\langle |s|,N^o(y)\rangle] a\lambda \,{\textrm{e}}^{-a \lambda y} \notag \\&\leq \mathbb{E}[\langle |s|,N^o(y)\rangle] a\lambda \,{\textrm{e}}^{-a \lambda y} \notag \\&= \langle |s|, \mathbb{E}[N^o(y)]\rangle a\lambda \,{\textrm{e}}^{-a \lambda y} \notag \\& \;:\!=\;\Xi^\ast_{s,y}.\end{align}

We recall from [Reference Athreya and Ney5, page 202] that the mean of the multitype process $N^o(t)$ is given by $\mathbb{E}[N^o(y)]={\textrm{e}}^{Ay}\mathbb{E}(I)$, where the matrix A is defined in (2.4). For the case $\rho<a\lambda $, the integral

\begin{equation*}\int^{+\infty}_0 \,{\textrm{e}}^{(A-a\lambda \operatorname{Id})y} \,{\textrm{d}} y\end{equation*}

is convergent because all eigenvalues of the matrix $A-a\lambda \operatorname{Id}$ have negative real part. In turn, we conclude that $\int^{+\infty}_0 \Xi^\ast_{s,y}\,{\textrm{d}} y$ converges. Moreover, for a fixed $y\in (0,+\infty)$ we find that

(4.25) \begin{equation}\mathbb{E}[1-{\textrm{e}}^{\langle {\textrm{i}} s, N^o(y)\rangle \,{\textrm{e}}^{-a \lambda t}}] \,{\textrm{e}}^{a \lambda t} \longrightarrow -\mathbb{E} [\langle {\textrm{i}} s, N^o(y)\rangle],\quad t\rightarrow +\infty\end{equation}

by the dominated convergence theorem. Indeed, from (4.23) $| 1-{\textrm{e}}^{\langle {\textrm{i}} s, N^o(y)\rangle \,{\textrm{e}}^{-a \lambda t}}|{\textrm{e}}^{a \lambda t}$ is upper-bounded by $\langle |s|,N^o(y)\rangle$, which has a finite expectation. Finally, because of the bound for the integrand $\Xi_{t,s,y}$ obtained in (4.24) and the pointwise limit in (4.25), we deduce that (4.21) converges to

\begin{align*} \Xi_{t,s} &\longrightarrow - \int^{+\infty}_0 \mathbb{E}(\langle {\textrm{i}} s,N^o(y)\rangle) a\lambda \,{\textrm{e}}^{-a \lambda y} \,{\textrm{d}} y \\ & = - \int^{+\infty}_0 \langle {\textrm{i}} s, {\textrm{e}}^{Ay} \mathbb{E}(I)\rangle a\lambda \,{\textrm{e}}^{-a \lambda y} \,{\textrm{d}} y\\ & = - \biggl\langle {\textrm{i}} s, \int^{+\infty}_0 a\lambda \,{\textrm{e}}^{(A-a\lambda \operatorname{Id})y}\,{\textrm{d}} y\; \mathbb{E}(I)\biggr\rangle\\ & =-\langle {\textrm{i}} s,a\lambda (a\lambda \operatorname{Id}-A)^{-1} \mathbb{E}(I)\rangle,\quad {\text{as $t\rightarrow +\infty$.}}\end{align*}

Consequently, it follows that (4.20) converges to

\begin{equation*}\varphi_t(s\,{\textrm{e}}^{-a\lambda t}) \longrightarrow \{1-\langle {\textrm{i}} s, a\lambda (a\lambda \operatorname{Id} -A)^{-1} \mathbb{E}(I)\rangle \}^{-b/a},\quad t\rightarrow +\infty,\end{equation*}

for all $s\in \mathbb{R}^k$, which entails (4.10) with the vector $\gamma$ defined as (4.11).

4.3 Proof of Theorem 4.1 in the case $\boldsymbol{\rho =} \textbf{a} \boldsymbol{\lambda}$

We consider here the renormalizing function $g(t)\;:\!=\; t \,{\textrm{e}}^{\rho t}= t \,{\textrm{e}}^{a\lambda t}$. As in (4.13) and (4.14), after a change of variable $y\;:\!=\;y/t$ we have, for all $ s \in \mathbb{R}^k$,

(4.26) \begin{align} \varphi_t (s/g(t))&= \varphi_t (s \,{\textrm{e}}^{-a\lambda t}/t)\notag \\ &=\biggl\{1+ \int^t_0 [1-\varphi^o_{t-y}(s\,{\textrm{e}}^{-a\lambda t}/t)] a \lambda \,{\textrm{e}}^{a \lambda y} \,{\textrm{d}} y \biggr\}^{-b/a}\notag \\ &= \biggl\{1+ \int^t_0 (1- \mathbb{E}[\!\exp(\langle {\textrm{i}} s, N^o(t-y)\rangle \,{\textrm{e}}^{-a\lambda t}/t) ] ) a \lambda \,{\textrm{e}}^{a \lambda y} \,{\textrm{d}} y \biggr\}^{-b/a}\notag \\ &= \biggl\{1+ \int^1_0 t(1-\mathbb{E}[\!\exp(\langle {\textrm{i}} s, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t}/t) ] ) a \lambda \,{\textrm{e}}^{a \lambda t y} \,{\textrm{d}} y \biggr\}^{-b/a}\notag \\ &= \{1+ \Xi_{t,s} \}^{-b/a},\end{align}

where $\Xi_{t,s}$ is now defined by

(4.27) \begin{align}\Xi_{t,s} &\;:\!=\; \int^1_0 t(1-\mathbb{E}[\!\exp(\langle {\textrm{i}} s, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t}/t)]) a \lambda \,{\textrm{e}}^{a \lambda t y} \,{\textrm{d}} y\notag \\[5pt] &= \int^1_0 t(1-\mathbb{E}[\!\exp(\langle {\textrm{i}} s, Wv\rangle \,{\textrm{e}}^{-a\lambda ty }/t)]) a \lambda \,{\textrm{e}}^{a \lambda t y} \,{\textrm{d}} y\notag \\[5pt] &\quad\, + \int^1_0 t(\mathbb{E}[\!\exp(\langle {\textrm{i}} s, Wv\rangle \,{\textrm{e}}^{-a\lambda ty }/t)] - \mathbb{E}[\!\exp(\langle {\textrm{i}} s, N^o(t(1 - y))\rangle \,{\textrm{e}}^{-a\lambda t}/t)] ) a \lambda \,{\textrm{e}}^{a \lambda t y} \,{\textrm{d}} y\notag \\&\;:\!=\; \Xi^1_{t,s} + \Xi^2_{t,s}. \end{align}

In the following we shall determine the limits of $\Xi^1_{t,s}$ and $\Xi^2_{t,s}$ separately as $t\to + \infty$. For notational convenience, let

\begin{equation*}\Xi^2_{t,s}\;:\!=\;\int_0^1 \Upsilon_s^2(t,y) \,{\textrm{d}} y,\end{equation*}

where

(4.28) \begin{equation}\Upsilon^2_s(t,y)\;:\!=\; t(\mathbb{E}[\!\exp(\langle {\textrm{i}} s, Wv\rangle \,{\textrm{e}}^{-a\lambda ty }/t)] - \mathbb{E}[\!\exp(\langle {\textrm{i}} s, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t}/t)] ) a \lambda \,{\textrm{e}}^{a \lambda t y}.\end{equation}

Step 1: Studying the convergence of $\Xi^1_{t,s}$ as $t\to + \infty$. Using inequality (3.14), for all $t\ge 0$ and $y\in (0,1)$ we readily obtain

\begin{align*} t|1- \exp(\langle {\textrm{i}} s, Wv\rangle \,{\textrm{e}}^{-a\lambda ty }/t)| a \lambda \,{\textrm{e}}^{a \lambda t y} & \le t \langle |s|, Wv\rangle ({\textrm{e}}^{-a\lambda ty }/t) a \lambda \,{\textrm{e}}^{a \lambda t y}\\ & = \langle |s|, Wv\rangle a \lambda,\end{align*}

which is integrable, so that, for a fixed $y\in (0,1)$, by the dominated convergence theorem we have

\begin{equation*}t(1-\mathbb{E}[\!\exp(\langle {\textrm{i}} s, Wv\rangle \,{\textrm{e}}^{-a\lambda ty }/t)]) a \lambda \,{\textrm{e}}^{a \lambda t y} \longrightarrow -\mathbb{E}[ \langle {\textrm{i}} s, Wv\rangle ] a \lambda,\quad \text{as $t\to +\infty$.}\end{equation*}

Likewise

\begin{equation*}t|1-\mathbb{E}[\!\exp(\langle {\textrm{i}} s, Wv\rangle \,{\textrm{e}}^{-a\lambda ty }/t)]| a \lambda \,{\textrm{e}}^{a \lambda t y}\le \mathbb{E}[ \langle |s|, Wv\rangle ] a \lambda\end{equation*}

is a constant, so by the dominated convergence theorem we deduce that

(4.29) \begin{equation}\lim_{t\to +\infty} \Xi^1_{t,s} =-\mathbb{E}[\langle {\textrm{i}} s, Wv\rangle ] a \lambda= - \langle {\textrm{i}} s, \mathbb{E}[W] a \lambda v\rangle.\end{equation}

Step 2: Dominating $\Upsilon^2_s(t,y)$. In order to study $\lim_{t\to +\infty}\Xi^2_{t,s}$, we again use the dominated convergence theorem. First, it can be shown that $|\Upsilon^2_s(t,y)|$ in (4.28) is upper-bounded as

(4.30) \begin{align}|\Upsilon^2_s(t,y)| &\le t\mathbb{E}[ | \langle s, Wv\rangle \,{\textrm{e}}^{-a\lambda ty }/t - \langle s, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t}/t | ] a \lambda \,{\textrm{e}}^{a \lambda t y} \notag \\&= a\lambda \mathbb{E}[ | \langle s, Wv\rangle - \langle s, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)} | ]\end{align}
\begin{align} &\qquad\ \le a\lambda \mathbb{E} [ | \langle s, Wv\rangle| ] + a\lambda\mathbb{E} [ | \langle s, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)}| ]\notag \\&\qquad\ \le a\lambda \mathbb{E} [ \langle |s|, Wv\rangle ] + a\lambda \mathbb{E} [ \langle |s|, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)}],\notag\end{align}

where the first inequality is obtained from (3.14) and the last inequality holds because W is non-negative and $N^o(t(1-y))$ has non-negative entries. Again using the constant $\kappa$ satisfying (3.16) and the martingale argument, together with the above result we obtain that $|\Upsilon^2_s(t,y)|$ is upper-bounded by a constant:

\begin{equation*}|\Upsilon^2_s(t,y)| \le a\lambda \mathbb{E}[ \langle |s|, Wv\rangle ] + a\lambda \kappa \langle u, \mathbb{E}(I)\rangle \quad \text{for all }t\ge 0,\ y\in (0,1).\end{equation*}

Step 3: Pointwise convergence of $\Upsilon^2_s(t,y)$ towards 0 as $t\to +\infty$. Let $y\in (0,1)$ be fixed. Since $\mathbb{R}^k$ can be decomposed as the direct sum of $\mathbb{R} u$ and $( \mathbb{R} v )^{\bot} $ (the orthogonal vector space of $\mathbb{R} v$ for the Euclidean inner product), there exists some (unique) $\alpha\in\mathbb{R}$ and $s_0\in ( \mathbb{R} v )^{\bot} $ such that $ s=\alpha u + s_0$. Since $\langle s_0,v\rangle=0$, it follows that (4.30) is expressed as

(4.31) \begin{align} |\Upsilon^2_s(t,y)| &\le a\lambda \mathbb{E}[|\langle s, Wv\rangle - \langle s, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)} | ]\notag \\& = a\lambda \mathbb{E}( | \alpha \langle u, Wv\rangle - \alpha \langle u, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)} - \langle s_0, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)}|)\notag \\& \le a\lambda \mathbb{E}( | \alpha W - \alpha \langle u, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)} |) \notag \\&\quad\, + a\lambda \mathbb{E}( |\langle s_0, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)}|),\end{align}

where the last line follows from the triangle inequality and the fact that $\langle u,Wv\rangle=W \langle u,v\rangle=W\cdot 1=W$. We next show that both terms in the above inequality tend to zero as $t\to +\infty$. From this decomposition of s along $\mathbb{R} u$ and $( \mathbb{R} v )^{\bot} $, it holds that the first term is linked to the martingale $\{\langle u,N^o(t)\,{\textrm{e}}^{-\rho t}\rangle,\ t\ge 0\}$ with $\rho= a \lambda$, whereas in the second term the behavior of $\{\langle s_0,N^o(t)\,{\textrm{e}}^{-\rho t}\rangle,\ t\ge 0\}$ is determined precisely thanks to the estimates given in [Reference Athreya4] and the fact that $s_0\in ( \mathbb{R} v )^{\bot}$. Indeed, we have from (A.1) that $D(t)={\textrm{O}}({\textrm{e}}^{2\rho t})$, with the result that $(\mathbb{E}[\| N^o(t)\|^2 \,{\textrm{e}}^{-2\rho t}])_{t\ge 0}$ is uniformly upper-bounded with $\rho=a\lambda$ here. Since $\mathbb{E}[ | \langle u, N^o(t)\rangle \,{\textrm{e}}^{-a\lambda t} |^2]$ is upper-bounded by $\mathbb{E}[\| N^o(t)\|^2 \,{\textrm{e}}^{-2\rho t}]$ up to a constant for all $t\ge 0$, we deduce that the martingale $\{\langle u,N^o(t)\,{\textrm{e}}^{-a\lambda t}\rangle,\ t\ge 0\}$ is uniformly square-integrable, so it converges in mean square towards W as $t\to +\infty$; in turn, the first term on the right-hand side of (4.31) converges to 0 as $t\to +\infty$. Concerning the second term, we have, using the representation (2.3) and the triangle inequality,

(4.32) \begin{align}& \mathbb{E}(| \langle s_0, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)}|)\notag \\&\quad = \sum_{\mathbf{n}_0=(\mathbf{n}_0(1),\ldots,\mathbf{n}_0(k))\in \mathbb{N}^k} \mathbb{E}( |\langle s_0, N^o(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)}|\mid N^o(0)= \mathbf{n}_0) p_{\mathbf{n}_0}\notag \\&\quad \le \sum_{\mathbf{n}_0=(\mathbf{n}_0(1),\ldots,\mathbf{n}_0(k))\in \mathbb{N}^k} \biggl\{\sum_{j=1}^k \mathbf{n}_0(j)\mathbb{E}( | \langle s_0, N^j(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)}|) \biggr\}p_{\mathbf{n}_0}\notag \\&\quad \le \biggl\{\sum_{\mathbf{n}_0=(\mathbf{n}_0(1),\ldots,\mathbf{n}_0(k))\in \mathbb{N}^k} \biggl[ \sum_{j=1}^k \mathbf{n}_0(j) \biggr]p_{\mathbf{n}_0}\biggr\}\; \max_{j=1,\ldots,k} \mathbb{E}( | \langle s_0, N^j(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)}|)\notag \\&\quad = \biggl\{ \sum_{j=1}^k \mathbb{E}(I(j)) \biggr\} \; \max_{j=1,\ldots,k} \mathbb{E}( | \langle s_0, N^j(t(1-y))\rangle \,{\textrm{e}}^{-a\lambda t(1-y)}|)\notag \\&\quad \le \biggl\{ \sum_{j=1}^k \mathbb{E}(I(j)) \biggr\} \; \max_{j=1,\ldots,k} \mathbb{E}( | \langle s_0, N^j(t(1-y))\rangle |^2\,{\textrm{e}}^{-2a\lambda t(1-y)})^{1/2} , \end{align}

where the last line is obtained thanks to the Cauchy–Schwarz inequality. From [Reference Athreya4, Proposition 3] together with $\langle s_0,v\rangle=0$, there exists some real number $a(s_0)<\rho=a\lambda$ as well as an integer $\gamma(s_0)$ (both depending on $s_0$; see their precise definitions in [Reference Athreya4, (9a) and (9b)]) such that one of the three following situations occurs for all $j=1,\ldots,k$:

\begin{equation*}\mathbb{E}[|\langle s_0, N^j(t)\rangle|^2] =\begin{cases}{\textrm{O}}({\textrm{e}}^{2a(s_0)t} t^{2\gamma(s_0)})& \text{if }2a(s_0)>\rho=a\lambda,\\{\textrm{O}}({\textrm{e}}^{2a(s_0)t} t^{2\gamma(s_0)+1})& \text{if }2a(s_0)=\rho=a\lambda,\\{\textrm{O}}({\textrm{e}}^{\rho t})={\textrm{O}}({\textrm{e}}^{a\lambda t}) & \text{if }2a(s_0)<\rho=a\lambda.\end{cases}\end{equation*}

Here the above three cases correspond to a), b) and c), respectively, of [Reference Athreya4, Proposition 3]. In all cases, since $a(s_0)$ verifies $a(s_0)<\rho=a\lambda$, one can easily check that $\mathbb{E}[|\langle s_0, N^j(t)\rangle|^2] \,{\textrm{e}}^{-2\rho t}=\mathbb{E} [|\langle s_0, N^j(t)\rangle|^2] \,{\textrm{e}}^{-2a\lambda t}$ tends to 0 as $t\to +\infty$. Since Assumption $\mathbf{(A)}$ holds, (4.32) thus tends to 0 as $t\to +\infty$ (for a fixed $y\in (0,1)$). Combining the above results, both terms on the right-hand side of (4.31) converge to 0. It follows that (4.28) tends to zero as $t\to+\infty$ for all $y\in (0,1)$.

Step 4: Convergence of $\Xi^2_{t,s}$ and conclusion. Steps 2 and 3 imply by the dominated convergence theorem that $\lim_{t\to +\infty}\Xi^2_{t,s}=0$. Using this and (4.29), from (4.27) it follows that (4.26) converges to

\begin{equation*}\varphi_t (s \,{\textrm{e}}^{-a\lambda t}/t)\longrightarrow \{1 - \langle {\textrm{i}} s, \mathbb{E}[W]a \lambda v\rangle \}^{-b/a}, \quad t\to+\infty.\end{equation*}

Hence we have proved (4.12).

5. Transient expectation when $k=2$

We shall hereafter consider two-type branching processes (i.e. $k=2$) to study the transient expectation of the number of particles at time t. Assume that the lifetime of a type j particle, $j=1,2$, is exponentially distributed as $\mathcal{E}(\mu_j)$. The branching mechanism is given by the following generating functions in (2.1):

\begin{equation*}h_1(z_1,z_2)= p_1(0,0)+ p_1(0,1) z_2,\quad h_2(z_1,z_2)= p_2(0,0)+ p_2(1,0) z_1,\quad (z_1,z_2)\in [0,1]^2,\end{equation*}

where the probabilities $p_{12}\;:\!=\;p_1(0,1)$ and $p_{21}\;:\!=\;p_2(1,0)$ in (0,1] satisfy $ p_{12}p_{21}<1$; that is, a type 1 (resp. type 2) particle produces a type 2 (resp. type 1) particle with probability $p_{12}$ (resp. $p_{21}$), or else dies with probability $p_1(0,0)=1-p_{12}$ (resp. $p_2(0,0)=1-p_{21}$). We also suppose that there is only one immigrant of type 1 entering the system at each time $T_i$, $i\in \mathbb{N}$, i.e. $I\sim \delta_{(1,0)}$. We shall see in Remark 5.1 that this assumption can be relaxed to a general incoming immigration vector I in (2.2). Also note that in this model each particle produces at most one offspring, which is necessarily of the other type. Finally, we let $m(t)=\mathbb{E}[S(t)]$ for $t\geq 0$ denote the renewal function associated with the immigration process $\{ S(t),\ t\ge 0\}$, with the convention $m(t)=0$ for $t< 0$.

Theorem 5.1. At time t, the transient expectation $\mathbb{E}[N_1(t)]$ of the number of type 1 particles is given by

(5.1) \begin{align}\mathbb{E} [N_1(t)] &= \int_0^t \biggl\{ m(t) - m(t-z) + \int_0^t [m(t-v) - m(t-v-z)] \notag \\&\quad \mu_1 \mu_2 p_{12}p_{21}\biggl[ \dfrac{1}{\zeta_1 (\zeta_2-\zeta_1)}\,{\textrm{e}}^{\zeta_1 v} + \dfrac{1}{\zeta_2 (\zeta_1-\zeta_2)}\,{\textrm{e}}^{\zeta_2 v}\biggr]\,{\textrm{d}} v\biggr\} \mu_1 \,{\textrm{e}}^{-\mu_1 z}\,{\textrm{d}} z ,\quad t\ge 0 ,\end{align}

where $\zeta_1$ and $\zeta_2$ are given by

(5.2) \begin{align} \zeta_1 &\;:\!=\; \dfrac{1}{2}\biggl[ -(\mu_1+\mu_2) + \sqrt{(\mu_1-\mu_2)^2 + 4\mu_1\mu_2 p_{12}p_{21}}\biggr],\end{align}
(5.3) \begin{align} \zeta_2 &\;:\!=\; \dfrac{1}{2}\biggl[ -(\mu_1+\mu_2) - \sqrt{(\mu_1-\mu_2)^2 + 4\mu_1\mu_2 p_{12}p_{21}}\biggr].\end{align}
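From (5.2) and (5.3) one obtains $\zeta_1+\zeta_2=-(\mu_1+\mu_2)$ and $\zeta_1\zeta_2=\mu_1\mu_2(1-p_{12}p_{21})$, so $\zeta_1$ and $\zeta_2$ are the roots of $\zeta^2+(\mu_1+\mu_2)\zeta+\mu_1\mu_2(1-p_{12}p_{21})=0$. A small numerical check (illustrative parameter values):

```python
import math

# zeta_1, zeta_2 from (5.2)-(5.3) and the quadratic they satisfy:
# zeta^2 + (mu1 + mu2)*zeta + mu1*mu2*(1 - p12*p21) = 0.
mu1, mu2, p12, p21 = 1.0, 2.0, 0.6, 0.8   # arbitrary, with p12*p21 < 1

disc = math.sqrt((mu1 - mu2) ** 2 + 4.0 * mu1 * mu2 * p12 * p21)
zeta1 = 0.5 * (-(mu1 + mu2) + disc)
zeta2 = 0.5 * (-(mu1 + mu2) - disc)

def quad(z):
    return z * z + (mu1 + mu2) * z + mu1 * mu2 * (1.0 - p12 * p21)

print(zeta1, zeta2, quad(zeta1), quad(zeta2))
```

Since $p_{12}p_{21}<1$, both roots are negative, which reflects the fact that this two-type branching mechanism (without immigration) is subcritical.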

Note that (5.1) depends on the renewal function m(t), which is explicitly available for many processes. For example, $m(t)=\int_0^t\lambda(s) \,{\textrm{d}} s$ when the immigration process is an NHPP with intensity $\lambda(\!\cdot\!)$, whereas

\begin{equation*}m(t)=\biggl(\dfrac{b}{a}\biggr)\dfrac{1- {\textrm{e}}^{-a\Lambda_t}}{{\textrm{e}}^{-a\Lambda_t}}\end{equation*}

when the immigration process is a GPP with parameters $(a,b, \lambda_t)$. In addition to the two processes considered here, we remark that (5.1) for the transient first moment is also available for other non-Poisson arrival processes where their renewal functions are known. Typical examples include the case when $\{ S(t),\ t\ge 0\}$ is a fractional Poisson process with parameter $\beta\in (0,1)$ (where $m(t)=Ct^\beta$ for some constant $C>0$; see [Reference Laskin22, expression (26)]), or when the inter-arrival times $T_i-T_{i-1}$, $i\ge 1$, follow matrix exponential distributions (in which case m(t) is explicit and given by [Reference Asmussen and Bladt3, Theorem 3.1]).
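To illustrate, (5.1) can be evaluated numerically. The sketch below (illustrative parameters; homogeneous Poisson immigration, so $m(t)=\lambda t$) approximates the double integral with midpoint Riemann sums; since $N_1(t)\le S(t)$, the result should lie in $(0, m(t))$.

```python
import math

# Midpoint-rule evaluation of (5.1) with homogeneous Poisson immigration,
# i.e. m(t) = lam*t (illustrative parameter values only).
mu1, mu2, p12, p21, lam, t = 1.0, 1.0, 0.5, 0.5, 1.0, 2.0

disc = math.sqrt((mu1 - mu2) ** 2 + 4.0 * mu1 * mu2 * p12 * p21)
zeta1 = 0.5 * (-(mu1 + mu2) + disc)
zeta2 = 0.5 * (-(mu1 + mu2) - disc)

def m(s):
    return lam * max(s, 0.0)           # convention m(s) = 0 for s < 0

def inner(z, steps=400):
    """Inner integral over v in (5.1) for a given z."""
    h = t / steps
    acc = 0.0
    for j in range(steps):
        v = (j + 0.5) * h
        kern = mu1 * mu2 * p12 * p21 * (
            math.exp(zeta1 * v) / (zeta1 * (zeta2 - zeta1))
            + math.exp(zeta2 * v) / (zeta2 * (zeta1 - zeta2)))
        acc += (m(t - v) - m(t - v - z)) * kern * h
    return acc

steps = 400
h = t / steps
EN1 = 0.0
for i in range(steps):
    z = (i + 0.5) * h
    EN1 += (m(t) - m(t - z) + inner(z)) * mu1 * math.exp(-mu1 * z) * h
print(EN1)
```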

Proof. The key idea is to consider the successive passage times from type 2 back to type 1 of the ith particle, which arrives at time $T_i$, $i\in \mathbb{N}^*$, as a type 1 particle. As long as it is alive (i.e. it has not left the system), the particle alternates between types 1 and 2, holding each type for an exponentially distributed sojourn time. Let $G_i$ be the number of sojourns as type 1 before dying, so that the $(G_i)_{i\in \mathbb{N}^*}$ are i.i.d. with distribution

\begin{equation*}G_i\sim \mathcal{G}(1- p_{12}p_{21}),\end{equation*}

where $\mathcal{G}$ denotes the geometric distribution on ${\mathbb{N}}^*$. In other words, $G_i=r$ if the ith particle survives r times as a type 1 particle then dies, or survives one last time as a type 2 particle then dies. Let us introduce the sequence $(V_i^{(r)})_{r\in \mathbb{N}}$ of the successive times at which this particle (which arrived at time $T_i$) reverts to type 1 after having been type 2, prior to its death. That is, this ith particle becomes type 1 again at times $T_i + V_i^{(1)}$, $T_i + V_i^{(2)}$, etc. The increments of this sequence are $V_i^{(r)} - V_i^{(r-1)}=Y_{1,i}^{(r)}+Y_{2,i}^{(r)}$ for $r\in \mathbb{N}^*$ with $V_i^{(0)}=0$, where $Y_{j,i}^{(r)}$ represents the rth sojourn time of the particle as type j for $j=1,2$, and the random variables $Y_{j,i}^{(r)}$, $j=1,2$, $i\in\mathbb{N}^*$, $r\in \mathbb{N}^*$, are independent, with distributions given by $\mathcal{D} ( Y_{j,i}^{(r)})= \mathcal{E}(\mu_j)$. See Figure 1 for an illustration. Then $N_1(t)$ has the following expression:

(5.4) \begin{equation}N_1(t)=\sum_{i=1}^\infty \sum_{r=0}^{G_i-1} \mathbb{1}_{[T_i + V_i^{(r)}\le t < T_i + V_i^{(r)} + Y_{1,i}^{(r+1)}]},\end{equation}

as $[T_i + V_i^{(r)}\le t < T_i + V_i^{(r)} + Y_{1,i}^{(r+1)}]$ corresponds to the event that a type 1 particle arriving at time $T_i$ is again type 1 at time t after its rth return time. Taking the expectation in (5.4), interchanging the order of summation, and using the independence of $G_i$ from the arrival process $\{S(t),\ t\ge 0 \}$ and $(Y_{j,i}^{(r)})_{r\in \mathbb{N}^*}$, $j=1,2$, yields

(5.5) \begin{align}\mathbb{E} [N_1(t)]&= \sum_{r=0}^\infty B_r (p_{12}p_{21})^r,\end{align}
\begin{align}B_r &\;:\!=\; \sum_{i=1}^\infty\mathbb{P}( T_i + V_i^{(r)}\le t < T_i + V_i^{(r)} + Y_{1,i}^{(r+1)}).\notag\end{align}

Let $G^{(r)}(\cdot)$ denote the cumulative distribution function of $V_i^{(r)}$, in other words the r-fold convolution of the distribution of the sum of two independent exponential variables with respective rates $\mu_1$ and $\mu_2$, with the convention $G^{(0)}({\textrm{d}} s)=\delta_0({\textrm{d}} s)$. By the independence of the arrival process $\{S(t),\ t\ge 0 \}$, $V_i^{(r)}$, and $Y_{1,i}^{(r+1)}$, we get

\begin{equation*}B_r= \int_0^\infty \int_0^t [m(t-v) - m(t-v-z)]G^{(r)}({\textrm{d}} v)\, \mu_1 \,{\textrm{e}}^{-\mu_1 z}\,{\textrm{d}} z .\end{equation*}

Figure 1. Evolution of ith particle.

It then follows from (5.5) that $\mathbb{E}[N_1(t)]$ is given by

(5.6) \begin{equation}\mathbb{E} [N_1(t)]= \int_0^\infty \int_0^t [m(t-v) - m(t-v-z)]\Psi({\textrm{d}} v)\, \mu_1 \,{\textrm{e}}^{-\mu_1 z}\,{\textrm{d}} z ,\quad t\ge 0,\end{equation}

where $\Psi({\textrm{d}} s)$ is a measure defined by

(5.7) \begin{equation}\Psi({\textrm{d}} s)=\sum_{r=0}^\infty (p_{12}p_{21})^r G^{(r)}({\textrm{d}} s),\end{equation}

which remains to be determined. Since $G^{(r)}({\textrm{d}} s)$ is the distribution of the sum of two independent Erlang distributions with respective parameters $(r,\mu_1)$ and $(r,\mu_2)$, its Laplace transform is given by

\begin{equation*}\widehat{G^{(r)}}(x)=\int_0^\infty \,{\textrm{e}}^{-xs} G^{(r)}({\textrm{d}} s)=\biggl(\dfrac{\mu_1}{\mu_1+x} \dfrac{\mu_2}{\mu_2+x}\biggr)^r,\quad r\ge 0, \ x\ge 0 ,\end{equation*}

so that, taking the Laplace transform on both sides of (5.7), we obtain

(5.8) \begin{align} \widehat{\Psi}(x)&=\sum_{r=0}^\infty (p_{12}p_{21})^r\widehat{G^{(r)}}(x) \notag \\ &=\dfrac{1}{1- p_{12}p_{21}\frac{\mu_1}{\mu_1+x} \frac{\mu_2}{\mu_2+x}}\notag \\ &=1+\dfrac{\mu_1 \mu_2 p_{12}p_{21}}{x^2 + (\mu_1+\mu_2)x + \mu_1\mu_2(1- p_{12}p_{21} )}\notag \\ &=1+\dfrac{\mu_1 \mu_2 p_{12}p_{21}}{(x-\zeta_1)(x-\zeta_2)}\notag \\ &=1+ \mu_1 \mu_2 p_{12}p_{21}\biggl[ \dfrac{1}{(\zeta_1-\zeta_2)(x-\zeta_1)} + \dfrac{1}{(\zeta_2-\zeta_1)(x-\zeta_2)}\biggr],\end{align}

where $\zeta_1$ and $\zeta_2$ are defined by (5.2) and (5.3). Inverting (5.8) then yields

\begin{equation*}\Psi({\textrm{d}} s)= \delta_0({\textrm{d}} s) + \mu_1 \mu_2 p_{12}p_{21}\biggl[ \dfrac{1}{\zeta_1-\zeta_2}\,{\textrm{e}}^{\zeta_1 s} + \dfrac{1}{\zeta_2-\zeta_1}\,{\textrm{e}}^{\zeta_2 s}\biggr]\,{\textrm{d}} s,\quad s\ge 0,\end{equation*}

which, by substituting in (5.6), yields (5.1).
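As a numerical sanity check of the preceding computation (a sketch, assuming homogeneous Poisson immigration with rate $\lambda$ so that $m(t)=\lambda t$; all parameter values are arbitrary illustrative choices), one can evaluate the double-integral representation of $\mathbb{E}[N_1(t)]$ by quadrature, taking the density part of $\Psi$ from the term-by-term inversion of (5.8), and compare the result with an independent computation based on Campbell's formula for Poisson immigration:

```python
import math

# All parameter values below are illustrative choices, not values from the paper.
mu1, mu2, p12, p21, lam, t = 1.0, 1.0, 0.8, 0.8, 2.0, 3.0
c = p12 * p21

# Roots zeta_1, zeta_2 of x^2 + (mu1 + mu2) x + mu1 mu2 (1 - p12 p21), as in (5.2)-(5.3).
d = math.sqrt((mu1 - mu2) ** 2 + 4 * mu1 * mu2 * c)
z1, z2 = 0.5 * (-(mu1 + mu2) + d), 0.5 * (-(mu1 + mu2) - d)

def m(s):
    # Renewal function of homogeneous Poisson immigration, with m(s) = 0 for s < 0.
    return lam * s if s > 0 else 0.0

def psi(v):
    # Density part of Psi(dv), obtained by inverting (5.8) term by term.
    return mu1 * mu2 * c * (math.exp(z1 * v) / (z1 - z2) + math.exp(z2 * v) / (z2 - z1))

def EN1_formula(zmax=30.0, nz=600, nv=300):
    # Midpoint quadrature of int int [m(t-v) - m(t-v-z)] Psi(dv) mu1 e^{-mu1 z} dz,
    # with Psi(dv) = delta_0(dv) + psi(v) dv; the sojourn length z ranges over
    # (0, infinity) and is truncated at zmax, where the exponential factor is negligible.
    hz, hv = zmax / nz, t / nv
    total = 0.0
    for i in range(nz):
        z = (i + 0.5) * hz
        inner = m(t) - m(t - z)                      # atom of Psi at v = 0
        for j in range(nv):
            v = (j + 0.5) * hv
            inner += (m(t - v) - m(t - v - z)) * psi(v) * hv
        total += inner * mu1 * math.exp(-mu1 * z) * hz
    return total

def EN1_markov(n=30000):
    # Independent check: q1(s), the probability that a particle of age s is alive and
    # of type 1, solves q1' = -mu1 q1 + mu2 p21 q2, q2' = mu1 p12 q1 - mu2 q2,
    # q1(0) = 1, q2(0) = 0, and Campbell's formula for Poisson immigration gives
    # E[N1(t)] = lam * int_0^t q1(s) ds (Euler scheme below).
    h = t / n
    q1, q2, integral = 1.0, 0.0, 0.0
    for _ in range(n):
        integral += q1 * h
        q1, q2 = q1 + h * (-mu1 * q1 + mu2 * p21 * q2), q2 + h * (mu1 * p12 * q1 - mu2 * q2)
    return lam * integral

print(EN1_formula(), EN1_markov())
```

The two printed values agree to quadrature accuracy; since the Markov computation does not use the partial fraction decomposition (5.8), the agreement checks the inversion step independently.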

Remark 5.1. A similar analysis yields a transient expression for $\mathbb{E}[N_2(t)]$, the expected number of type 2 particles, as well as for the expected number of type 1 particles, say $\mathbb{E}[M_1(t)]$, when immigration has the distribution $I\sim \delta_{(0,1)}$, i.e. when one particle of type 2 arrives at each instant $T_i$. By the superposition principle (2.3), we deduce that, for a general immigration vector $I=(I(1),I(2))$ in (2.2), the expected total number of type 1 particles is then given by

\begin{equation*}\mathbb{E}(I(1))\mathbb{E}[N_1(t)]+\mathbb{E}(I(2))\mathbb{E}[M_1(t)].\end{equation*}

Appendix A. A growth rate for the second-order moment of ${N^o}(t)$

Let $D(t)\;:\!=\;(\mathbb{E}(N^o_i(t)N^o_l(t)))_{i,l=1,\ldots,k}=\mathbb{E}(N^o(t) N^o(t)')$ be the second-order moment matrix at time $t \ge 0$ of the baseline multitype branching process $\{ N^o(t),\ t\ge 0 \}$, with branching mechanism in (2.1) and immigration vector at time 0 in (2.2) described in Section 2. The following lemma is a consequence of the growth rates for the second moments provided in [Reference Athreya and Ney5, Section 7.4], which are obtained when $N^o(0)$ is deterministic.

Lemma A.1. The following growth rate holds entrywise under Assumption ${\bf (A)}$ as $t\to\infty$:

(A.1) \begin{equation}D(t)=\begin{cases}{\textrm{O}}({\textrm{e}}^{\rho t})& \text{if } \rho<0,\\{\textrm{O}}(t) & \text{if } \rho=0,\\{\textrm{O}}({\textrm{e}}^{2\rho t}) & \text{if } \rho>0.\end{cases}\end{equation}

Proof. Let $D(t\mid \mathbf{n}_0)\;:\!=\; \mathbb{E}(N^o(t) N^o(t)'\mid N^o(0)=\mathbf{n}_0)$ with $\mathbf{n}_0=(\mathbf{n}_0(1), \ldots,\mathbf{n}_0(k))\in\mathbb{N}^k$. The superposition principle (2.3) and the independence of processes $( \{ N^{j,l}(t),\ t\ge 0\})_{j,l\in\mathbb{N}^2}$ entail that

(A.2) \begin{align}D(t\mid \mathbf{n}_0) &= \sum_{j=1}^k \sum_{l=1}^{\mathbf{n}_0(j)} \mathbb{E}\big(N^{j,l}(t)N^{j,l}(t)'\big)+ \sum_{j=1}^k \ \sum_{\substack{l,r=1,\ldots,\mathbf{n}_0(j)\\ l\neq r}} \mathbb{E}\big(N^{j,l}(t)\big)\mathbb{E}\big(N^{j,r}(t)\big)'\notag \\& \quad\, + \sum_{\substack{j,p=1,\ldots,k,\\ j\neq p}} \ \sum_{\substack{l=1,\ldots, \mathbf{n}_0(j),\\ r= 1,\ldots, \mathbf{n}_0(p)}} \mathbb{E}\big(N^{j,l}(t)\big) \mathbb{E}\big(N^{p,r}(t)\big)'\notag \\ &= \sum_{j=1}^k \mathbf{n}_0(j) \mathbb{E}\big(N^{j}(t)N^{j}(t)'\big) + \sum_{j=1}^k(\mathbf{n}_0(j)-1)\mathbf{n}_0(j)\mathbb{E}\big(N^{j}(t)\big)\mathbb{E}\big(N^{j}(t)\big)' \notag \\ & \quad\, + \sum_{\substack{j,p=1,\ldots,k,\\ j\neq p}} \mathbf{n}_0(j)\mathbf{n}_0(p) \mathbb{E}\big(N^{j}(t)\big) \mathbb{E}\big(N^{p}(t)\big)'.\end{align}

We study the terms on the right-hand side of (A.2) in the following cases for the eigenvalue $\rho$, yielding the growth rates (A.1).

If $\rho<0$, we have from [Reference Athreya and Ney5, (19)] that $\mathbb{E}(N^{j}(t)N^{j}(t)')={\textrm{O}}({\textrm{e}}^{\rho t})$ entrywise and uniformly in all $j=1,\ldots,k$, and from [Reference Athreya and Ney5, (17)] that $\mathbb{E}(N^{j}(t))={\textrm{O}}({\textrm{e}}^{\rho t})$ entrywise and uniformly in all $j=1,\ldots,k$. The latter estimate yields that $\mathbb{E}(N^{j}(t)) \mathbb{E}(N^{j}(t))'$ and $\mathbb{E}(N^{j}(t)) \mathbb{E}(N^{p}(t))'$ are ${\textrm{O}}({\textrm{e}}^{2\rho t})$, hence also ${\textrm{O}}({\textrm{e}}^{\rho t})$ (as $\rho<0$) entrywise uniformly in all $j,p=1,\ldots,k$, yielding in turn from (A.2) that

\begin{equation*}D(t\mid \mathbf{n}_0)= \biggl[\sum_{j=1}^k \mathbf{n}_0(j)^2 \biggr] {\textrm{O}}({\textrm{e}}^{\rho t}),\end{equation*}

meaning that each entry of $D(t\mid \mathbf{n}_0)$ grows at most as $ \bigl[\sum_{j=1}^k \mathbf{n}_0(j)^2 \bigr] C\,{\textrm{e}}^{\rho t}$ for some constant C independent of $\mathbf{n}_0$. Thanks to Assumption ${\bf (A)}$, this finally implies that

\begin{equation*}D(t)= \sum_{\mathbf{n}_0\in \mathbb{N}^k} D(t\mid \mathbf{n}_0)p_{\mathbf{n}_0} = \sum_{\mathbf{n}_0\in \mathbb{N}^k} \biggl[\sum_{j=1}^k \mathbf{n}_0(j)^2 \biggr] p_{\mathbf{n}_0}\ {\textrm{O}}({\textrm{e}}^{\rho t})= \mathbb{E}(\|I\|^2)\; {\textrm{O}}({\textrm{e}}^{\rho t})={\textrm{O}}({\textrm{e}}^{\rho t}),\end{equation*}

proving (A.1) when $\rho<0$. If $\rho=0$, we have from [Reference Athreya and Ney5, (19)] that $\mathbb{E}(N^{j}(t)N^{j}(t)')={\textrm{O}}(t)$ entrywise and uniformly in all $j=1,\ldots,k$, and from [Reference Athreya and Ney5, (17)] that $\mathbb{E}(N^{j}(t))={\textrm{O}}(1)$ entrywise and uniformly in all $j=1,\ldots,k$. Thus a similar analysis using (A.2) and Assumption ${\bf (A)}$ implies the growth rate (A.1) when $\rho=0$. Finally, if $\rho>0$, we have from [Reference Athreya and Ney5, (19)] that $\mathbb{E}(N^{j}(t)N^{j}(t)')={\textrm{O}}({\textrm{e}}^{2\rho t})$ entrywise and uniformly in all $j=1,\ldots,k$, and from [Reference Athreya and Ney5, (17)] that $\mathbb{E}(N^{j}(t))={\textrm{O}}({\textrm{e}}^{\rho t})$ entrywise and uniformly in all $j=1,\ldots,k$, implying in turn that $\mathbb{E}(N^{j}(t))\mathbb{E}(N^{j}(t))'$ and $\mathbb{E}(N^{j}(t))\mathbb{E}(N^{p}(t))'$ are ${\textrm{O}}({\textrm{e}}^{2\rho t})$. A similar analysis implies the growth rate (A.1) when $\rho>0$.
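The reduction of the double sums in (A.2) rests on the elementary identity, valid for i.i.d. random vectors $X_1,\ldots,X_n$: $\mathbb{E}\bigl[\bigl(\sum_l X_l\bigr)\bigl(\sum_l X_l\bigr)'\bigr] = n\,\mathbb{E}(XX') + n(n-1)\,\mathbb{E}(X)\mathbb{E}(X)'$. A toy check, using a hypothetical two-point distribution in $\mathbb{R}^2$, reads as follows:

```python
import itertools

# Toy check of the identity used in (A.2): for i.i.d. random vectors X_1,...,X_n,
#   E[(sum X_l)(sum X_l)'] = n E[X X'] + n(n-1) E[X] E[X]'.
vals = [((1.0, 2.0), 0.3), ((0.0, 1.0), 0.7)]   # (vector, probability): illustrative
n = 3

def outer(u, v):
    return [[u[i] * v[j] for j in range(2)] for i in range(2)]

def mat_add(A, B):
    return [[A[i][j] + B[i][j] for j in range(2)] for i in range(2)]

def mat_scale(c, A):
    return [[c * A[i][j] for j in range(2)] for i in range(2)]

# Left-hand side: exact expectation by enumerating all n-tuples of outcomes.
lhs = [[0.0, 0.0], [0.0, 0.0]]
for combo in itertools.product(vals, repeat=n):
    p = 1.0
    s = [0.0, 0.0]
    for (x, px) in combo:
        p *= px
        s = [s[0] + x[0], s[1] + x[1]]
    lhs = mat_add(lhs, mat_scale(p, outer(s, s)))

# Right-hand side: n E[XX'] + n(n-1) E[X]E[X]'.
EX = [sum(p * x[i] for (x, p) in vals) for i in range(2)]
EXX = [[sum(p * x[i] * x[j] for (x, p) in vals) for j in range(2)] for i in range(2)]
rhs = mat_add(mat_scale(n, EXX), mat_scale(n * (n - 1), outer(EX, EX)))

print(lhs, rhs)   # the two matrices coincide
```

The same bookkeeping, applied entrywise with $X = N^{j}(t)$, is exactly the passage from the first to the second display in (A.2).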

Appendix B. Proof of Lemma 3.2

Before proceeding with the proof, we recall the multi-dimensional version of Pólya’s theorem, which will be used later on.

Lemma B.1. Let $\{X_t,\ t\ge 0\}$ be a family of random variables with values in $\mathbb{R}^k$ converging in distribution to $X\in \mathbb{R}^k$, such that $x\in \mathbb{R}^k \mapsto \mathbb{P}(X \le x)$ is continuous. Then for all $x\in \mathbb{R}^k$ we have

\begin{equation*}\lim_{t\to +\infty}\mathbb{P}(X_t > x_t) = \mathbb{P} (X> x),\end{equation*}

where $(x_t)_{t\ge 0}$ is any family in $\mathbb{R}^k$ satisfying $\lim_{t\to \infty} x_t=x$, and ‘$\le$’ and ‘$>$’ are understood componentwise.

We now turn to the proof of Lemma 3.2. First, for the process $N^j(t)=(N^j_1(t),\ldots,N^j_k(t))$ described in Section 2, we have

\begin{equation*}\exp({\textrm{i}} s_l N^j_l(t - \Lambda^{-1} (\Lambda(t) x))/t)=\int_{\mathbb{R}_+^*} {\textrm{i}} s_l \exp({\textrm{i}} s_l z_l)\mathbb{1}_{[z_l < N^j_l(t - \Lambda^{-1} (\Lambda(t) x))/t]} \,{\textrm{d}} z_l +1\quad \text{for all } s_l\in \mathbb{R},\end{equation*}

for $l=1,\ldots,k$. Together with an expansion formula, we get that

(B.1) \begin{align} &\exp \biggl(\biggl\langle {\textrm{i}} s, \dfrac{N^j(t - \Lambda^{-1} (\Lambda(t) x))}{t}\biggr\rangle\biggr) \notag \\ &\quad =\prod_{l=1}^k \biggl[ \int_{\mathbb{R}_+^*} {\textrm{i}} s_l \exp({\textrm{i}} s_l z_l)\mathbb{1}_{[z_l < N^j_l(t - \Lambda^{-1} (\Lambda(t) x))/t]} \,{\textrm{d}} z_l +1\biggr] \notag \\&\quad = 1+ \sum_{J\subset \{1,\ldots,k\}} \int_{(\mathbb{R}_+^*)^{\mbox{\tiny Card}(J)}}\prod_{l\in J}\bigl[ {\textrm{i}} s_l \exp({\textrm{i}} s_l z_l)\mathbb{1}_{[z_l < N^j_l(t - \Lambda^{-1} (\Lambda(t) x))/t]} \,{\textrm{d}} z_l \bigr],\end{align}

where $\sum_{J\subset \{1,\ldots,k\}}$ is the sum over non-empty sets $J\subset \{1,\ldots,k\}$. The results in [Reference Weiner33] will be repeatedly used in the following, leading to the convergence (3.39). Taking expectations on both sides of (B.1), it follows that $\psi^j(t,x)$ in (3.42) may be expressed as

\begin{align*}\psi^j(t,x)-1&= \sum_{J\subset \{1,\ldots,k\}} \int_{(\mathbb{R}_+^*)^{\mbox{\tiny Card}(J)}}\prod_{l\in J}[ {\textrm{i}} s_l \exp({\textrm{i}} s_l z_l)\mathbb{P}(z_l < N^j_l(t - \Lambda^{-1} (\Lambda(t) x))/t) \,{\textrm{d}} z_l],\end{align*}

which, after multiplication by t and using the notation $z^J$ introduced just before Lemma 3.2, takes the more compact form

(B.2) \begin{align}&t[\psi^j(t,x)-1] \notag \\&\quad = \sum_{J\subset \{1,\ldots,k\}}\int_{(\mathbb{R}_+^*)^{\mbox{\tiny Card}(J)}}\prod_{l\in J}[ {\textrm{i}} s_l \exp({\textrm{i}} s_l z_l) ]\, t \, \mathbb{P} (N^j(t - \Lambda^{-1} (\Lambda(t) x))/t >z^J) \,{\textrm{d}} z^J,\end{align}

where, for two vectors $v_1$ and $v_2$, $v_1>v_2$ means that each entry of $v_1$ is larger than the corresponding one in $v_2$, and ${\textrm{d}} z^J\;:\!=\; \prod_{l\in J} {\textrm{d}} z_l$.

Next, let us observe that, for a fixed $x\in (0,1)$, from (3.34) it follows that

\begin{equation*} t - \Lambda^{-1} (\Lambda(t) x) =t - \lambda_\infty^{-1} \Lambda(t) x - \eta ( \Lambda(t) x) \Lambda(t) x =t - \lambda_\infty^{-1} \Lambda(t) x + {\textrm{o}}(t)\end{equation*}

and also $\lambda_\infty^{-1} \Lambda(t) x= \lambda_\infty^{-1} [\lambda_\infty t + {\textrm{o}}(t)] x = tx + {\textrm{o}}(t)$ due to $\lim_{t\to +\infty}\Lambda(t)/t=\lambda_\infty$. Thus we find that

(B.3) \begin{equation} t - \Lambda^{-1} (\Lambda(t) x) \sim t(1-x),\quad t\to+\infty .\end{equation}

Since the above result entails that $t - \Lambda^{-1} (\Lambda(t) x)\longrightarrow +\infty$ as $t\to +\infty$, from [Reference Weiner33, Theorems 1 and 5], we find for $x\in(0,1)$ that

(B.4) \begin{align} [t - \Lambda^{-1} (\Lambda(t) x)] \; \mathbb{P}(N^j(t - \Lambda^{-1} (\Lambda(t) x))>0)&\longrightarrow \beta_j , \qquad\qquad\ \ \end{align}
(B.5) \begin{align} \mathbb{P}\biggl( \dfrac{N^j( t - \Lambda^{-1} (\Lambda(t) x) )}{t - \Lambda^{-1} (\Lambda(t) x)}> z \Bigm| N^j(t - \Lambda^{-1} (\Lambda(t) x) )>0 \biggr)& \longrightarrow \exp\biggl( -c \max_{i=1,\ldots,k} \dfrac{z_i}{v_i \mu_i^{-1} }\biggr) \notag \\& =\mathbb{P}(\mathcal{X}>z),\end{align}

as $t\to +\infty$ and for all $z=(z_1,\ldots,z_k)\in {\mathbb{R}_+^{*k}}$, where we recall that $\beta_j$ is given by (3.6) and that $\mathcal{X}$ has a distribution given by (3.37). Here again, the relation ‘$>$’ is understood entrywise. It is noted that (B.5) simply states that the distribution of

\begin{equation*}\dfrac{N^j( t - \Lambda^{-1} (\Lambda(t) x) )}{t - \Lambda^{-1} (\Lambda(t) x)},\end{equation*}

given that $N^j(t - \Lambda^{-1} (\Lambda(t) x) )>0$, converges to the distribution of $\mathcal{X}$. Also, since $z\in \mathbb{R}^k \mapsto \mathbb{P}(\mathcal{X}>z)$ is continuous (extending the definition in (3.37) from $z\in \mathbb{R}_+^{*k}$ to $z\in \mathbb{R}^k$ by putting $\mathbb{P}(\mathcal{X}>z)=1$ if $\max_{i=1,\ldots,k}z_i \le 0$), and

\begin{equation*}\lim_{t\to +\infty}\dfrac{t}{t - \Lambda^{-1} (\Lambda(t) x)} = \dfrac{1}{1-x}\end{equation*}

from (B.3), we have from Lemma B.1 for all z that

\begin{align*}& \mathbb{P}\biggl( \dfrac{N^j( t - \Lambda^{-1} (\Lambda(t) x) )}{t}>z \Bigm| N^j( t - \Lambda^{-1} (\Lambda(t) x) )>0 \biggr)\\&\quad =\mathbb{P}\biggl( \dfrac{N^j( t - \Lambda^{-1} (\Lambda(t) x) )}{t - \Lambda^{-1} (\Lambda(t) x)}> \dfrac{t}{t - \Lambda^{-1} (\Lambda(t) x)}z \Bigm| N^j( t - \Lambda^{-1} (\Lambda(t) x) )>0 \biggr)\\ &\quad\longrightarrow \mathbb{P} \biggl( \mathcal{X} > \dfrac{1}{1-x} z\biggr) \\ &\quad = \mathbb{P}((1-x)\mathcal{X}>z),\quad t\to +\infty ,\end{align*}

for a fixed $x\in (0,1)$. The latter convergence along with (B.3) and (B.4) entails that the components of the integrand in (B.2) satisfy

\begin{align*}& t\; \mathbb{P}\biggl(\dfrac{N^j( t - \Lambda^{-1} (\Lambda(t) x) )}{t}>z^J \biggr)\\&\quad = \dfrac{t}{ t - \Lambda^{-1} (\Lambda(t) x)} \mathbb{P}\biggl( \dfrac{N^j( t - \Lambda^{-1} (\Lambda(t) x) )}{t}>z^J \Bigm| N^j( t - \Lambda^{-1} (\Lambda(t) x) )>0 \biggr) \\&\quad\quad\, \times [t - \Lambda^{-1} (\Lambda(t) x)] \; \mathbb{P}(N^j(t - \Lambda^{-1} (\Lambda(t) x))>0)\\&\quad \longrightarrow \dfrac{1}{1-x} \beta_j\; \mathbb{P}((1-x)\; \mathcal{X}>z^J),\quad t\to +\infty \quad \text{for all } J\subset \{1,\ldots,k\}.\end{align*}

It is important to note that from the convergence result in (B.4),

\begin{equation*} [t - \Lambda^{-1} (\Lambda(t) x)] \; \mathbb{P}(N^j(t - \Lambda^{-1} (\Lambda(t) x))>0) \end{equation*}

is bounded uniformly in $t\ge 0$ and $x\in (0,1)$ by some constant. Furthermore, (B.5) says that $\mathcal{A}_t$ converges in distribution towards $(1-x)\mathcal{X}$ as $t\to \infty$, where $\mathcal{A}_t$ is a random vector such that

\begin{equation*} \mathcal{D}(\mathcal{A}_t)= \mathcal{D}\biggl( \dfrac{N^j( t - \Lambda^{-1} (\Lambda(t) x) )}{t}\Bigm| N^j( t - \Lambda^{-1} (\Lambda(t) x) )>0\biggr)\quad \text{for all $t\ge 0$.} \end{equation*}

We recall that $\mathcal{X}$ satisfies (3.36), and we thus deduce that $(1-x)\mathcal{X}$ is light-tailed, that is, there exists some vector $w_0\in \mathbb{R}^k$ close to 0 with positive entries such that $\mathbb{E}(\!\exp(\langle (1-x)\mathcal{X},w_0\rangle))$ is finite. The fact that $w_0$ has positive entries, together with Chernoff’s bound, thus yields that

\begin{align*} & \mathbb{P}\biggl( \dfrac{N^j( t - \Lambda^{-1} (\Lambda(t) x) )}{t} >z^J \Bigm| N^j( t - \Lambda^{-1} (\Lambda(t) x) )>0 \biggr) \\ &\quad= \mathbb{P}(\mathcal{A}_t>z^J)\\ &\quad \le \mathbb{P}(\langle \mathcal{A}_t,w_0 \rangle > \langle z^J,w_0 \rangle)\\ &\quad \le \mathbb{E}(\!\exp(\langle \mathcal{A}_t,w_0\rangle)) \,{\textrm{e}}^{- \langle z^J,w_0\rangle}\\ &\quad \longrightarrow \mathbb{E}(\!\exp(\langle (1-x)\mathcal{X},w_0\rangle)) \,{\textrm{e}}^{- \langle z^J,w_0\rangle},\quad t\to +\infty.\end{align*}

Further, ${t}/{ (t - \Lambda^{-1} (\Lambda(t) x))}$ is upper-bounded in $t\ge 0$ by some constant depending on x, since it converges to ${1}/{(1-x)}$ as $t\to +\infty$. Therefore, combining the above bounds, we conclude that

\begin{equation*}t\; \mathbb{P}\biggl(\dfrac{N^j( t - \Lambda^{-1} (\Lambda(t) x) )}{t}>z^J \biggr) \le K_x^j \,{\textrm{e}}^{- \langle z^J,w_0\rangle} \quad {\text{for all } J\subset \{1,\ldots,k\},\ t\ge 0,}\end{equation*}

where $K_x^j$ is some constant independent of $t\ge 0$ and $z\in \mathbb{R}_+^{*k}$. Since

\begin{equation*}\int_{(\mathbb{R}_+^*)^{\mbox{\tiny Card}(J)}} \,{\textrm{e}}^{- \langle z^J,w_0\rangle} \,{\textrm{d}} z^J\end{equation*}

is finite for all $J\subset \{1,\ldots,k\}$, we find by the dominated convergence theorem that the integrand in (B.2) satisfies

\begin{align*}&\int_{(\mathbb{R}_+^*)^{\mbox{\tiny Card}(J)}}\prod_{l\in J}[ {\textrm{i}} s_l \exp({\textrm{i}} s_l z_l) ]\; t \; \mathbb{P}(N^j(t - \Lambda^{-1} (\Lambda(t) x))/t >z^J) \,{\textrm{d}} z^J \\&\quad \longrightarrow \int_{(\mathbb{R}_+^*)^{\mbox{\tiny Card}(J)}}\prod_{l\in J}[ {\textrm{i}} s_l \exp({\textrm{i}} s_l z_l) ] \biggl\{\dfrac{1}{1-x} \beta_j\; \mathbb{P}((1-x)\; \mathcal{X}>z^J)\biggr\} \,{\textrm{d}} z^J ,\quad t\to+\infty\end{align*}

for all $ J\subset \{1,\ldots,k\}$, and for a fixed $x\in (0,1)$. Putting this into (B.2) yields the following limit as $t\to + \infty$:

\begin{equation*}t[\psi^j(t,x)-1]\longrightarrow \sum_{J\subset \{1,\ldots,k\}} \int_{(\mathbb{R}_+^*)^{\mbox{\tiny Card}(J)}}\prod_{l\in J}[ {\textrm{i}} s_l \exp({\textrm{i}} s_l z_l) ] \biggl\{\dfrac{1}{1-x} \beta_j\; \mathbb{P}((1-x)\; \mathcal{X}>z^J)\biggr\} \,{\textrm{d}} z^J .\end{equation*}

By an expansion formula similar to (B.1) (with $(1-x)\mathcal{X}$ here instead of $N^j(t - \Lambda^{-1} (\Lambda(t) x))/t$), one can check that the right-hand side of the above limit is equal to

\begin{equation*}\dfrac{1}{1-x} \beta_j[ 1-\mathbb{E}(\!\exp(\langle {\textrm{i}} s,(1-x) \mathcal{X}\rangle))],\end{equation*}

yielding (3.43).
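The asymptotic relation (B.3) underlying the proof can be illustrated numerically for a hypothetical cumulative intensity satisfying $\Lambda(t)/t\to \lambda_\infty$ (the choice of $\Lambda$ below is an arbitrary example, and $\Lambda^{-1}$ is computed by bisection):

```python
import math

# Illustration of (B.3): t - Lambda^{-1}(Lambda(t) x) ~ t(1 - x) as t -> infinity.
# Hypothetical cumulative intensity with Lambda(t)/t -> lambda_inf:
lam_inf = 2.0

def Lambda(t):
    return lam_inf * t + math.sqrt(t)

def Lambda_inv(y):
    # Bisection for the strictly increasing function Lambda.
    lo, hi = 0.0, 1.0
    while Lambda(hi) < y:
        hi *= 2.0
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if Lambda(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

x = 0.4
for t in (10.0, 100.0, 10000.0):
    ratio = (t - Lambda_inv(Lambda(t) * x)) / (t * (1 - x))
    print(t, ratio)   # the ratio approaches 1 as t grows
```

The sublinear ${\textrm{o}}(t)$ correction coming from the $\sqrt{t}$ term is visible at small t and washes out for large t, exactly as (B.3) predicts.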

Appendix C. Proof of Lemma 4.2

The moment generating function of $T\sim \Gamma(\zeta ,1)$ is given by $\mathbb{E}({\textrm{e}}^{zT})=(1-z)^{-\zeta}$ for any complex number z with real part less than 1. By the independence assumption, we then obtain

\begin{equation*}\mathbb{E}[{\textrm{e}}^{{\textrm{i}} x\mathcal{Z}_T}]=\int_0^\infty \mathbb{E}[{\textrm{e}}^{{\textrm{i}} x\mathcal{Z}_t}] \mathbb{P}(T\in {\textrm{d}} t)= \int_0^\infty \,{\textrm{e}}^{-t\psi(x)}\mathbb{P}(T\in {\textrm{d}} t)= \{1+ \psi(x) \}^{-\zeta},\end{equation*}

which completes the proof.
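This mixing computation can be checked numerically, with arbitrary illustrative values of $\zeta$ and of $\psi(x)$ treated as a fixed constant:

```python
import math

# Numerical check of int_0^infty e^{-t psi} Gamma(zeta, 1)(dt) = (1 + psi)^{-zeta}
# for illustrative values of zeta and of psi = psi(x), treated as a constant.
zeta, psi = 2.3, 0.7

def mixed_cf(n=60000, tmax=60.0):
    # Midpoint quadrature of e^{-t psi} against the Gamma(zeta, 1) density
    # t^{zeta-1} e^{-t} / Gamma(zeta); the tail beyond tmax is negligible.
    h = tmax / n
    g = math.gamma(zeta)
    total = 0.0
    for i in range(n):
        s = (i + 0.5) * h
        total += math.exp(-s * psi) * s ** (zeta - 1) * math.exp(-s)
    return total * h / g

print(mixed_cf(), (1.0 + psi) ** (-zeta))   # both values agree
```

The same computation with $\psi(x)$ replaced by its expression as a function of x recovers the characteristic function of $\mathcal{Z}_T$ stated in Lemma 4.2.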

Acknowledgement

The authors wish to thank both referees for suggestions and remarks leading to many improvements in the paper.

References

Allison, P. D. (1980). Estimation and testing for a Markov model of reinforcement. Sociol. Methods Res. 8, 434–453.
Altman, E. (2005). On stochastic recursive equations and infinite server queues. In Proceedings of IEEE 24th Annual Joint Conference of the IEEE Computer and Communications Societies.
Asmussen, S. and Bladt, M. (1996). Renewal theory and queueing algorithms for matrix exponential distributions. In Matrix Analytic Methods in Stochastic Models, eds A. S. Alfa and S. R. Chakravarthy, pp. 313–341. Marcel Dekker, New York.
Athreya, K. B. (1969). Limit theorems for multitype continuous time Markov branching processes, II: The case of an arbitrary linear functional. Z. Wahrscheinlichkeitsth. 13, 204–214.
Athreya, K. B. and Ney, P. E. (1972). Branching Processes. Springer.
Badía, F. G., Sangüesa, C. and Cha, J. H. (2018). Univariate and multivariate stochastic comparisons and ageing properties of the generalized Pólya process. J. Appl. Prob. 55, 233–253.
Bates, G. E. (1955). Joint distributions of time intervals for the occurrence of successive accidents in a generalized Polya scheme. Ann. Math. Statist. 26, 705–720.
Bühlmann, H. (1970). Mathematical Methods in Risk Theory. Springer.
Cha, J. H. (2014). Characterization of the generalized Pólya process and its applications. Adv. Appl. Prob. 46, 1148–1171.
Cha, J. H. and Finkelstein, M. (2016). New shock models based on the generalized Polya process. Europ. J. Operat. Res. 251, 135–141.
Cha, J. H. and Finkelstein, M. (2016). Justifying the Gompertz curve of mortality via the generalized Polya process of shocks. Theoret. Pop. Biol. 109, 54–62.
Durham, S. D. (1971). A problem concerning generalized age-dependent branching processes with immigration. Ann. Math. Statist. 42, 1121–1123.
Feller, W. (1943). On a general class of ‘contagious’ distributions. Ann. Math. Statist. 14, 389–400.
Hyrien, O., Mitov, K. V. and Yanev, N. M. (2016). Supercritical Sevastyanov branching processes with non-homogeneous Poisson immigration. In Branching Processes and their Applications (Lecture Notes Statist. 219), eds I. del Puerto et al., pp. 151–166. Springer, Cham.
Hyrien, O., Mitov, K. V. and Yanev, N. M. (2017). Subcritical Sevastyanov branching processes with nonhomogeneous Poisson immigration. J. Appl. Prob. 54, 569–587.
Hyrien, O., Peslak, S. A., Yanev, N. M. and Palis, J. (2015). Stochastic modeling of stress erythropoiesis using a two-type age-dependent branching process with immigration. J. Math. Biol. 70, 1485–1521.
Greenwood, M. and Yule, G. U. (1920). An inquiry into the nature of frequency distributions representative of multiple happenings with particular reference to the occurrence of multiple attacks of disease or of repeated accidents. J. R. Statist. Soc. A 83, 255–279.
Jeanblanc, M., Yor, M. and Chesney, M. (2009). Poisson processes and ruin theory. Chapter 8 of Mathematical Methods for Financial Markets. Springer Finance, London.
Konno, H. (2010). On the exact solution of a generalized Polya process. Adv. Math. Phys. 2010, 504267.
Kyprianou, A. E. (2006). Introductory Lectures on Fluctuations of Lévy Processes with Applications. Springer.
Landriault, D., Willmot, G. E. and Xu, D. (2014). On the analysis of time dependent claims in a class of birth process claim count models. Insurance Math. Econom. 58, 168–173.
Laskin, N. (2003). Fractional Poisson process. Commun. Nonlinear Sci. Numer. Simul. 8, 201–213.
Le Gat, Y. (2014). Extending the Yule process to model recurrent pipe failures in water supply networks. Urban Water J. 11, 617–630.
Mitov, K. V., Yanev, N. M. and Hyrien, O. (2018). Multitype branching processes with inhomogeneous Poisson immigration. Adv. Appl. Prob. 50, 211–228.
Pólya, G. (1930). Sur quelques points de la théorie des probabilités. Ann. Inst. H. Poincaré Prob. Statist. 1, 117–161.
Qi, J., Ju, W. and Sun, K. (2017). Estimating the propagation of interdependent cascading outages with multi-type branching processes. IEEE Trans. Power Systems 32, 1212–1223.
Resing, J. A. C. (1993). Polling systems and multitype branching processes. Queueing Systems 13, 409–426.
Rolski, T., Schmidli, H., Schmidt, V. and Teugels, J. L. (1998). Stochastic Processes for Insurance and Finance. John Wiley.
Sevast’yanov, B. A. (1971). Branching Processes (in Russian). Nauka, Moscow.
van der Mei, R. D. (2007). Towards a unifying theory on branching-type polling systems in heavy traffic. Queueing Systems 57, 29–46.
Vatutin, V. A. (1977). A critical Bellman–Harris branching process with immigration and several types of particles. Theory Prob. Appl. 21, 435–442.
Wasserman, S. (1983). Distinguishing between stochastic models of heterogeneity and contagion. J. Math. Psychol. 27, 201–215.
Weiner, H. J. (1970). On a multi-type critical age-dependent branching process. J. Appl. Prob. 7, 523–543.
Weiner, H. J. (1972). A multi-type critical age-dependent branching process with immigration. J. Appl. Prob. 9, 697–706.
Willmot, G. E. (2010). Distributional analysis of a generalization of the Polya process. Insurance Math. Econom. 47, 423–427.