1. Introduction
The asymptotic behavior of multi-type supercritical branching processes with or without immigration has been studied for a long time. Kesten and Stigum [Reference Kesten and Stigum21, Theorems 2.1, 2.2, 2.3, 2.4] investigated the limiting behavior of the inner products
$\langle{\boldsymbol{{a}}}, {{\boldsymbol{{X}}}}_n\rangle$
as
$n \to \infty$
, where
${{\boldsymbol{{X}}}}_n$
,
$n \in \{1, 2, \ldots\}$
, is a supercritical, irreducible, and positively regular d-type Galton–Watson branching process without immigration and
${\boldsymbol{{a}}} \in \mathbb{R}^d \setminus \{{\boldsymbol{0}}\}$
is orthogonal to the left Perron eigenvector of the branching mean matrix
${{\boldsymbol{{M}}}} \,:\!=\, ({\mathbb{E}}(\langle{{\boldsymbol{{e}}}}_j, {{\boldsymbol{{X}}}}_1\rangle \,|\, {{\boldsymbol{{X}}}}_0 = {{\boldsymbol{{e}}}}_i))_{i,j\in\{1,\ldots,d\}}$
of the process, where
${{\boldsymbol{{e}}}}_1, \ldots, {{\boldsymbol{{e}}}}_d$
denotes the natural basis in
$\mathbb{R}^d$
. Of course, this can arise only if
$d \in \{2, 3, \ldots\}$
. It is enough to consider the case of
$\|{\boldsymbol{{a}}}\| = 1$
, when
$\langle{\boldsymbol{{a}}}, {{\boldsymbol{{X}}}}_n\rangle$
is the scalar projection of
${{\boldsymbol{{X}}}}_n$
on
${\boldsymbol{{a}}}$
. The appropriate scaling factor of
$\langle{\boldsymbol{{a}}}, {{\boldsymbol{{X}}}}_n\rangle$
,
$n \in \{1, 2, \ldots\}$
, depends not only on the Perron eigenvalue
$r({{\boldsymbol{{M}}}})$
(which is the spectral radius of
${{\boldsymbol{{M}}}}$
) and on the left and right Perron eigenvectors of
${\boldsymbol{{M}}}$
, but also on the full spectral representation of
${{\boldsymbol{{M}}}}$
. Badalbaev and Mukhitdinov [Reference Badalbaev and Mukhitdinov4, Theorems 1 and 2] extended these results of Kesten and Stigum [Reference Kesten and Stigum21], describing in a more explicit way the asymptotic behavior of
$(\langle{\boldsymbol{{a}}}^{(1)}, {{\boldsymbol{{X}}}}_n\rangle, \ldots, \langle{\boldsymbol{{a}}}^{(d-1)}, {{\boldsymbol{{X}}}}_n\rangle)$
as
$n \to \infty$
, where
$\{{\boldsymbol{{a}}}^{(1)}, \ldots, {\boldsymbol{{a}}}^{(d-1)}\}$
is a basis of the hyperplane in
$\mathbb{R}^d$
orthogonal to the left Perron eigenvector of
${{\boldsymbol{{M}}}}$
They also pointed out that the need to consider the above functionals originates in statistical investigations of
${{\boldsymbol{{X}}}}_n$
,
$n\in\{1,2,\ldots\}$
.
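For a concrete illustration (a hypothetical two-type example, not taken from the cited works), suppose $d = 2$ and the left Perron eigenvector of ${{\boldsymbol{{M}}}}$ is ${{\boldsymbol{{u}}}} = (u_1, u_2)^\top$ with $u_1, u_2 > 0$. Then the hyperplane orthogonal to ${{\boldsymbol{{u}}}}$ is one-dimensional and is spanned, for example, by
\begin{equation*} {\boldsymbol{{a}}} \,:\!=\, \frac{1}{\sqrt{u_1^2 + u_2^2}} (u_2, -u_1)^\top , \qquad \langle{\boldsymbol{{a}}}, {{\boldsymbol{{u}}}}\rangle = \frac{u_2 u_1 - u_1 u_2}{\sqrt{u_1^2 + u_2^2}} = 0 , \qquad \|{\boldsymbol{{a}}}\| = 1 , \end{equation*}
so for $d = 2$ the basis $\{{\boldsymbol{{a}}}^{(1)}, \ldots, {\boldsymbol{{a}}}^{(d-1)}\}$ considered by Badalbaev and Mukhitdinov [Reference Badalbaev and Mukhitdinov4] reduces to this single vector.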
Athreya [Reference Athreya1, Reference Athreya2] investigated the limiting behavior of
${{\boldsymbol{{X}}}}_t$
and the inner products
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
as
$t \to \infty$
, where
$({{\boldsymbol{{X}}}}_t)_{t\in[0,\infty)}$
is a supercritical, positively regular, and non-singular d-type continuous-time Galton–Watson branching process without immigration and
${{\boldsymbol{{v}}}} \in \mathbb{C}^d$
is a right eigenvector corresponding to an eigenvalue
$\lambda \in \mathbb{C}$
of the infinitesimal generator
${\boldsymbol{{A}}}$
of the branching mean matrix semigroup
\begin{equation*} ({{\boldsymbol{{M}}}}(t))_{t\in[0,\infty)} \,:\!=\, \bigl(({\mathbb{E}}(\langle{{\boldsymbol{{e}}}}_j, {{\boldsymbol{{X}}}}_t\rangle \,|\, {{\boldsymbol{{X}}}}_0 = {{\boldsymbol{{e}}}}_i))_{i,j\in\{1,\ldots,d\}}\bigr)_{t\in[0,\infty)} = \bigl(\mathrm{e}^{t{\boldsymbol{{A}}}}\bigr)_{t\in[0,\infty)} \end{equation*}
of the process. Under a first-order moment condition on the branching distributions, with
$s({\boldsymbol{{A}}})$
denoting the maximum of the real parts of the eigenvalues of
${\boldsymbol{{A}}}$
, it was shown that there exists a non-negative random variable
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}$
such that
$\mathrm{e}^{-s({\boldsymbol{{A}}})t}{{\boldsymbol{{X}}}}_t$
converges to
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}{{\boldsymbol{{u}}}}$
almost surely as
$t\to\infty$
, where
${{\boldsymbol{{u}}}}$
denotes the left Perron eigenvector of the branching mean matrix
${{\boldsymbol{{M}}}}(1)$
. Under a second-order moment condition on the branching distributions, it was shown that if
${\operatorname{Re}}(\lambda) \in \big(\frac{1}{2} s({\boldsymbol{{A}}}), s({\boldsymbol{{A}}})\big]$
, then
$\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
converges almost surely and in
$L_2$
to a (complex) random variable as
$t \to \infty$
, and if
${\operatorname{Re}}(\lambda) \in \big(-\infty, \frac{1}{2} s({\boldsymbol{{A}}})\big]$
and
$\mathbb{P}(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} >0) > 0$
, then, under the conditional probability measure
$\mathbb{P}(\!\cdot\,|\, w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} >0)$
, the limit distribution of
$t^{-\theta} \mathrm{e}^{-s({\boldsymbol{{A}}})t/2} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
as
$t \to \infty$
is mixed normal, where
$\theta = \frac{1}{2}$
if
${\operatorname{Re}}(\lambda) = \frac{1}{2} s({\boldsymbol{{A}}})$
and
$\theta = 0$
if
${\operatorname{Re}}(\lambda) \in \big(-\infty, \frac{1}{2} s({\boldsymbol{{A}}})\big)$
. Further, in the case
${\operatorname{Re}}(\lambda) \in \big(-\infty, \frac{1}{2} s({\boldsymbol{{A}}})\big]$
, under the conditional probability measure
$\mathbb{P}(\!\cdot\,|\, w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} >0)$
, with an appropriate random scaling, asymptotic normality has been derived as well, with the advantage that the limit laws do not depend on the initial value
${{\boldsymbol{{X}}}}_0$
. We also recall that Athreya [Reference Athreya1] described the asymptotic behavior of
${\mathbb{E}}(\vert \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle\vert^2)$
as
$t\to\infty$
under a second-order moment condition on the branching distributions. These results have been extended by Athreya [Reference Athreya3] for the inner products
$\langle{\boldsymbol{{a}}}, {{\boldsymbol{{X}}}}_t\rangle$
,
$t \in [0, \infty)$
, with arbitrary
${\boldsymbol{{a}}} \in \mathbb{C}^d$
. Janson [Reference Janson19, Theorem 3.1] gave a functional version of the above-mentioned results of Athreya [Reference Athreya1, Reference Athreya2]. Under some conditions weaker than those of Athreya [Reference Athreya1, Reference Athreya2], Janson [Reference Janson19] described the asymptotic behavior of
$(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_{t+s}\rangle)_{s \in [0, \infty)}$
as
$t \to \infty$
by giving more explicit formulas for the asymptotic variances and covariances as well. For a more detailed comparison of Athreya’s and Janson’s results, see Janson [Reference Janson19, Section 6].
Kyprianou et al. [Reference Kyprianou, Palau and Ren22] described the limit behavior of the inner product
$\langle{{\boldsymbol{{u}}}}, {{\boldsymbol{{X}}}}_t\rangle$
as
$t \to \infty$
for supercritical and irreducible d-type continuous-state and continuous-time branching processes (without immigration), where
${{\boldsymbol{{u}}}}$
denotes the left Perron vector of the branching mean matrix of
$({{\boldsymbol{{X}}}}_t)_{t\in[0,\infty)}$
. Barczy et al. [Reference Barczy, Palau and Pap8] started to investigate the limiting behavior of the inner products
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
as
$t \to \infty$
, where
$({{\boldsymbol{{X}}}}_t)_{t\in[0,\infty)}$
is a supercritical and irreducible d-type continuous-state and continuous-time branching process with immigration (CBI process) and
${{\boldsymbol{{v}}}} \in \mathbb{C}^d$
is a left eigenvector corresponding to an eigenvalue
$\lambda \in \mathbb{C}$
of the infinitesimal generator
${\widetilde{{{\boldsymbol{{B}}}}}}$
of the branching mean matrix semigroup
$\mathrm{e}^{t{\widetilde{{{\boldsymbol{{B}}}}}}}$
,
$t \in [0, \infty)$
, of the process. Note that for each
$t \in [0, \infty)$
and
$i, j \in \{1, \ldots, d\}$
, we have
\begin{equation*} \langle{{\boldsymbol{{e}}}}_i, \mathrm{e}^{t{\widetilde{{{\boldsymbol{{B}}}}}}} {{\boldsymbol{{e}}}}_j\rangle = {\mathbb{E}}(\langle{{\boldsymbol{{e}}}}_i, {{\boldsymbol{{Y}}}}_t\rangle \,|\, {{\boldsymbol{{Y}}}}_0 = {{\boldsymbol{{e}}}}_j) , \end{equation*}
where
$({{\boldsymbol{{Y}}}}_t)_{t\in[0,\infty)}$
is a multi-type continuous-state and continuous-time branching process without immigration and with the same branching mechanism as
$({{\boldsymbol{{X}}}}_t)_{t\in[0,\infty)}$
. Thus
${\widetilde{{{\boldsymbol{{B}}}}}}$
plays the role of
${\boldsymbol{{A}}}^\top$
in Athreya [Reference Athreya2]; hence in our results the right and left eigenvectors are interchanged compared to Athreya [Reference Athreya2]. Under first-order moment conditions on the branching and immigration mechanisms, it was shown that there exists a non-negative random variable
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}$
such that
$\mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t}{{\boldsymbol{{X}}}}_t$
converges to
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}{\widetilde{{{\boldsymbol{{u}}}}}}$
almost surely as
$t\to\infty$
, where
${\widetilde{{{\boldsymbol{{u}}}}}}$
is the right Perron vector of
$\mathrm{e}^{{\widetilde{{{\boldsymbol{{B}}}}}}}$
; see Barczy et al. [Reference Barczy, Palau and Pap8, Theorem 3.3]. If
${{\boldsymbol{{v}}}}$
is a left non-Perron eigenvector of the branching mean matrix
$\mathrm{e}^{{\widetilde{{{\boldsymbol{{B}}}}}}}$
, then this result implies that
\begin{equation*} \mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle \to w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} \langle{{\boldsymbol{{v}}}}, {\widetilde{{{\boldsymbol{{u}}}}}}\rangle = 0 \end{equation*}
almost surely as
$t \to \infty$
, since
$\langle{{\boldsymbol{{v}}}}, {\widetilde{{{\boldsymbol{{u}}}}}}\rangle = 0$
by the so-called principle of biorthogonality (see, e.g., Horn and Johnson [Reference Horn and Johnson15, Theorem 1.4.7(a)]). Consequently, the scaling factor
$\mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t}$
is not appropriate for describing the asymptotic behavior of the projection
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
as
$t \to \infty$
. Under suitable moment conditions on the branching and immigration mechanisms, it was shown that if
${\operatorname{Re}}(\lambda) \in \big(\frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}}), s({\widetilde{{{\boldsymbol{{B}}}}}})\big]$
, then
$\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
converges almost surely and in
$L_1$
(in
$L_2$
) to a (complex) random variable as
$t \to \infty$
; see Barczy et al. [Reference Barczy, Palau and Pap8, Theorems 3.1 and 3.4].
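To see why $\langle{{\boldsymbol{{v}}}}, {\widetilde{{{\boldsymbol{{u}}}}}}\rangle = 0$ in the situation described above, one can also argue directly (a short computation under the stated assumptions): if ${{\boldsymbol{{v}}}}^\top {\widetilde{{{\boldsymbol{{B}}}}}} = \lambda {{\boldsymbol{{v}}}}^\top$ with $\lambda \ne s({\widetilde{{{\boldsymbol{{B}}}}}})$ and ${\widetilde{{{\boldsymbol{{B}}}}}} {\widetilde{{{\boldsymbol{{u}}}}}} = s({\widetilde{{{\boldsymbol{{B}}}}}}) {\widetilde{{{\boldsymbol{{u}}}}}}$, then
\begin{equation*} \lambda \, {{\boldsymbol{{v}}}}^\top {\widetilde{{{\boldsymbol{{u}}}}}} = ({{\boldsymbol{{v}}}}^\top {\widetilde{{{\boldsymbol{{B}}}}}}) {\widetilde{{{\boldsymbol{{u}}}}}} = {{\boldsymbol{{v}}}}^\top ({\widetilde{{{\boldsymbol{{B}}}}}} {\widetilde{{{\boldsymbol{{u}}}}}}) = s({\widetilde{{{\boldsymbol{{B}}}}}}) \, {{\boldsymbol{{v}}}}^\top {\widetilde{{{\boldsymbol{{u}}}}}} , \end{equation*}
whence $(\lambda - s({\widetilde{{{\boldsymbol{{B}}}}}})) {{\boldsymbol{{v}}}}^\top {\widetilde{{{\boldsymbol{{u}}}}}} = 0$, and hence ${{\boldsymbol{{v}}}}^\top {\widetilde{{{\boldsymbol{{u}}}}}} = \langle{{\boldsymbol{{v}}}}, {\widetilde{{{\boldsymbol{{u}}}}}}\rangle = 0$, since ${\widetilde{{{\boldsymbol{{u}}}}}}$ has real coordinates.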
The aim of the present paper is to continue the investigations of Barczy et al. [Reference Barczy, Palau and Pap8]. We will prove that under a fourth-order moment condition on the branching mechanism and a second-order moment condition on the immigration mechanism, if
${\operatorname{Re}}(\lambda) \in \big(-\infty, \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})\big]$
, then the limit distribution of
$t^{-\theta} \mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t/2} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
as
$t \to \infty$
is mixed normal, where
$\theta = \frac{1}{2}$
if
${\operatorname{Re}}(\lambda) = \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})$
and
$\theta = 0$
if
${\operatorname{Re}}(\lambda) \in \big(-\infty, \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})\big)$
; see Parts (ii) and (iii) of Theorem 3.1. If
${\operatorname{Re}}(\lambda) \in \big(-\infty, \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})\big]$
and
$({{\boldsymbol{{X}}}}_t)_{t\in[0,\infty)}$
is non-trivial (equivalently,
$\mathbb{P}(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0) > 0$
; see Lemma 3.1), then under the conditional probability measure
$\mathbb{P}(\!\cdot\,|\, w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0)$
, with an appropriate random scaling, we prove asymptotic normality as well, with the advantage that the limit laws do not depend on the initial value
${{\boldsymbol{{X}}}}_0$
; see Theorem 3.3. For the asymptotic variances, explicit formulas are presented. In the case of a non-trivial process, under a first-order moment condition on the immigration mechanism, we also prove the convergence of the relative frequencies of distinct types of individuals on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
(see Proposition 3.1); for instance, if the immigration mechanism does not vanish, then this convergence holds almost surely (see Theorem 3.2).
We now summarize the novelties of our paper. We point out that we investigate the asymptotic behavior of the projections of a multi-type CBI process on certain left non-Perron eigenvectors of its branching mean matrix. Our approach is based on a decomposition of the process
$(\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle)_{t\in[0,\infty)}$
as the sum of a deterministic process and three square-integrable martingales; see the beginning of the proof of Part (iii) of Theorem 3.1. To prove asymptotic normality of the martingales in question, we use a result due to Crimaldi and Pratelli [Reference Crimaldi and Pratelli12, Theorem 2.2] (see also Theorem E.1), which provides a set of sufficient conditions for the asymptotic normality of multivariate martingales. These sufficient conditions concern the quadratic variation process and the jumps of the multivariate martingale in question. In the course of checking the conditions of Theorem E.1, we need to study the asymptotic behavior of the expectation of the running supremum of the jumps of a compensated Poisson integral process having time-dependent integrand over an interval [0, t] as
$t \to \infty$
. There is a new interest in this type of question; see, e.g., the paper of He and Li [Reference He and Li14] on the distributions of jumps of a single-type CBI process.
Next, we compare our methodology with that of the discrete-valued settings. Athreya [Reference Athreya2] decomposed
$\mathrm{e}^{-\lambda t}\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
into three terms, where
$({{\boldsymbol{{X}}}}_t)_{t\in[0,\infty)}$
is a supercritical, positively regular, and non-singular d-type continuous-time Galton–Watson branching process without immigration. He showed that two of the terms are small in probability and, using the central limit theorem, that the third one converges to the desired normal distribution. Janson’s proof [Reference Janson19, Theorem 3.1] for a functional extension of Athreya’s results is based on a martingale convergence theorem (see [Reference Janson19, Proposition 9.1]) that relies on the convergence of the quadratic variation of an
$L_2$
-locally bounded (see [Reference Janson19, Condition (9.2)]) martingale sequence. Janson then needed to define a suitable martingale sequence and estimate its quadratic variation. Observe that he asked for a finite second moment for the branching mechanism in order to have an
$L_2$
-locally bounded martingale (see [Reference Janson19, Assumption (A.2)]). In our case, where
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is a supercritical and irreducible d-type CBI process, the three martingales appearing in the previously mentioned decomposition of
$(\mathrm{e}^{-\lambda t}\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle)_{t\in[0,\infty)}$
turn out to be square-integrable under our moment assumptions on the branching and immigration mechanisms. One of the three martingales in question is an integral with respect to a standard Wiener process, and the other two are integrals with respect to compensated Poisson measures. The decomposition in question was derived using a stochastic differential equation representation of
$({{\boldsymbol{{X}}}}_t)_{t\in[0,\infty)}$
together with an application of the multidimensional Itô formula; see Barczy et al. [Reference Barczy, Li and Pap7, Lemma 4.1]. Concerning our moment assumptions, in order to be able to check the conditions of the previously mentioned Theorem 2.2 in Crimaldi and Pratelli [Reference Crimaldi and Pratelli12] (see also Theorem E.1), we need a fourth-order moment condition on the branching mechanism and a second-order moment condition on the immigration mechanism. So our proof technique cannot be considered an easy adaptation of that of Athreya [Reference Athreya1, Reference Athreya2] or that of Janson [Reference Janson19, Theorem 3.1].
The paper is structured as follows. In Section 2, we recall the definition of multi-type CBI processes together with the notion of irreducibility, and we introduce a classification of multi-type CBI processes as well. Sections 3 and 4 contain our results and their proofs, respectively. We close the paper with five appendices. In Appendix A we recall a decomposition of multi-type CBI processes. Appendix B is devoted to a description of deterministic projections of multi-type CBI processes (i.e., projections that are deterministic). In Appendix C, based on Buraczewski et al. [Reference Buraczewski, Damek and Mikosch11, Proposition 4.3.2], we recall some mild conditions under which the solution of a stochastic fixed point equation is atomless. Appendix D is devoted to the description of the asymptotic behavior of the second moment of projections of multi-type CBI processes. In Appendix E we recall a result on the asymptotic behavior of multivariate martingales due to Crimaldi and Pratelli [Reference Crimaldi and Pratelli12, Theorem 2.2], which serves as a key tool in the proofs of our results; see Theorem E.1.
2. Preliminaries
Let
$\mathbb{Z}_+$
,
$\mathbb{N}$
,
$\mathbb{R}$
,
$\mathbb{R}_+$
,
$\mathbb{R}_{++}$
, and
$\mathbb{C}$
denote the sets of non-negative integers, positive integers, real numbers, non-negative real numbers, positive real numbers, and complex numbers, respectively. For
$x , y \in \mathbb{R}$
, we will use the notation
$x \land y \,:\!=\, \min \{x, y\}$
,
$x \lor y \,:\!=\, \max \{x, y\}$
, and
$x^+ \,:\!=\, \max \{0, x\}$
. By
$\langle{{\boldsymbol{{x}}}}, {{\boldsymbol{{y}}}}\rangle \,:\!=\, \sum_{j=1}^d x_j \overline{y_j}$
we denote the Euclidean inner product of
${{\boldsymbol{{x}}}} = (x_1, \ldots, x_d)^\top \in \mathbb{C}^d$
and
${{\boldsymbol{{y}}}} = (y_1, \ldots, y_d)^\top \in \mathbb{C}^d$
, and by
$\|{{\boldsymbol{{x}}}}\|$
and
$\|{\boldsymbol{{A}}}\|$
we denote the induced norms of
${{\boldsymbol{{x}}}} \in \mathbb{C}^d$
and
${\boldsymbol{{A}}} \in \mathbb{C}^{d\times d}$
, respectively. By
$r({\boldsymbol{{A}}})$
we denote the spectral radius of
${\boldsymbol{{A}}} \in \mathbb{C}^{d\times d}$
. The null vector and the null matrix will be denoted by
${\boldsymbol{0}}$
. Moreover,
${{\boldsymbol{{I}}}}_d \in \mathbb{R}^{d\times d}$
denotes the identity matrix. If
${\boldsymbol{{A}}} \in \mathbb{R}^{d\times d}$
is positive semidefinite, then
${\boldsymbol{{A}}}^{1/2}$
denotes the unique positive semidefinite square root of
${\boldsymbol{{A}}}$
. If
${\boldsymbol{{A}}} \in \mathbb{R}^{d\times d}$
is strictly positive definite, then
${\boldsymbol{{A}}}^{1/2}$
is strictly positive definite and
${\boldsymbol{{A}}}^{-1/2}$
denotes the inverse of
${\boldsymbol{{A}}}^{1/2}$
. The set of
$d \times d$
matrices with non-negative off-diagonal entries (also called essentially non-negative matrices) is denoted by
$\mathbb{R}^{d\times d}_{(+)}$
. By
$C^2_\mathrm{c}(\mathbb{R}_+^d,\mathbb{R})$
we denote the set of twice continuously differentiable real-valued functions on
$\mathbb{R}_+^d$
with compact support. By
$B(\mathbb{R}_+^d,\mathbb{R})$
we denote the Banach space (endowed with the supremum norm) of real-valued bounded Borel functions on
$\mathbb{R}_+^d$
. Convergence almost surely, in
$L_1$
, in
$L_2$
, in probability, and in distribution will be denoted by
$\stackrel{{\mathrm{a.s.}}}{\longrightarrow}$
,
$\stackrel{L_1}{\longrightarrow}$
,
$\stackrel{L_2}{\longrightarrow}$
,
$\stackrel{\mathbb{P}}{\longrightarrow}$
and
$\stackrel{{\mathcal D}}{\longrightarrow}$
, respectively. For an event A with
$\mathbb{P}(A) > 0$
, let
${\mathbb{P}}_A(\!\cdot\!) \,:\!=\, {\mathbb{P}}(\!\cdot\,|\, A) = {\mathbb{P}}(\!\cdot \cap A) / {\mathbb{P}}(A)$
denote the conditional probability measure given A, and let
$\stackrel{{\mathcal D}_A}{\longrightarrow}$
denote convergence in distribution under the conditional probability measure
${\mathbb{P}}_A$
. Almost sure equality and equality in distribution will be denoted by
$\stackrel{{\mathrm{a.s.}}}{=}$
and
$\stackrel{{\mathcal D}}{=}$
, respectively. If
${{\boldsymbol{{V}}}} \in \mathbb{R}^{d\times d}$
is symmetric and positive semidefinite, then
${\mathcal N}_d({\boldsymbol{0}}, {{\boldsymbol{{V}}}})$
denotes the d-dimensional normal distribution with zero mean and variance matrix
${{\boldsymbol{{V}}}}$
. Throughout this paper, we make the conventions
$\int_a^b \,:\!=\, \int_{(a,b]}$
and
$\int_a^\infty \,:\!=\, \int_{(a,\infty)}$
for any
$a, b \in \mathbb{R}$
with
$a < b$
.
Definition 2.1. A tuple
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
is called a set of admissible parameters if
-
(i)
$d \in \mathbb{N}$ ,
-
(ii)
${{\boldsymbol{{c}}}} = (c_i)_{i\in\{1,\ldots,d\}} \in \mathbb{R}_+^d$ ,
-
(iii)
${\boldsymbol{\beta}} = (\beta_i)_{i\in\{1,\ldots,d\}} \in \mathbb{R}_+^d$ ,
-
(iv)
${{\boldsymbol{{B}}}} = (b_{i,j})_{i,j\in\{1,\ldots,d\}} \in \mathbb{R}^{d \times d}_{(+)}$ ,
-
(v)
$\nu$ is a Borel measure on
${\mathcal U}_d \,:\!=\, \mathbb{R}_+^d \setminus \{{\boldsymbol{0}}\}$ satisfying
$\int_{{\mathcal U}_d} (1 \land \|{{\boldsymbol{{r}}}}\|) \, \nu(\mathrm{d}{{\boldsymbol{{r}}}}) < \infty$ , and
-
(vi)
$\boldsymbol{\mu} = (\mu_1, \ldots, \mu_d)$ , where, for each
$i \in \{1, \ldots, d\}$ ,
$\mu_i$ is a Borel measure on
${\mathcal U}_d$ satisfying
\begin{equation*} \int_{{\mathcal U}_d} \biggl[\|{{\boldsymbol{{z}}}}\| \land \|{{\boldsymbol{{z}}}}\|^2 + \sum_{j \in \{1, \ldots, d\} \setminus \{i\}} (1 \land z_j)\biggr] \mu_i(\mathrm{d}{{\boldsymbol{{z}}}}) < \infty . \end{equation*}
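As a minimal illustrative example (ours, not taken from the references), let $d = 1$, $c_1 \in \mathbb{R}_+$, $\beta_1 \in \mathbb{R}_+$, ${{\boldsymbol{{B}}}} = (b_{1,1})$ with $b_{1,1} \in \mathbb{R}$, and take the stable-like jump measures
\begin{equation*} \mu_1(\mathrm{d} z) \,:\!=\, z^{-1-\alpha} {\mathbb{1}}_{\{z > 0\}} \, \mathrm{d} z \quad \text{with } \alpha \in (1, 2) , \qquad \nu(\mathrm{d} r) \,:\!=\, r^{-1-\gamma} {\mathbb{1}}_{\{0 < r \leqslant 1\}} \, \mathrm{d} r \quad \text{with } \gamma \in (0, 1) . \end{equation*}
Then $\int_{{\mathcal U}_1} (1 \land r) \, \nu(\mathrm{d} r) = \int_0^1 r^{-\gamma} \, \mathrm{d} r < \infty$ and $\int_{{\mathcal U}_1} (z \land z^2) \, \mu_1(\mathrm{d} z) = \int_0^1 z^{1-\alpha} \, \mathrm{d} z + \int_1^\infty z^{-\alpha} \, \mathrm{d} z < \infty$, so the conditions (v) and (vi) hold (the sum over $j \ne i$ in (vi) being empty for $d = 1$), and $(1, c_1, \beta_1, {{\boldsymbol{{B}}}}, \nu, \mu_1)$ is a set of admissible parameters.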
Theorem 2.1. Let
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
be a set of admissible parameters. Then there exists a unique conservative transition semigroup
$(P_t)_{t\in\mathbb{R}_+}$
acting on
$B(\mathbb{R}_+^d,\mathbb{R})$
such that its Laplace transform has a representation
\begin{equation*} \int_{\mathbb{R}_+^d} \mathrm{e}^{-\langle\boldsymbol{\lambda}, {{\boldsymbol{{y}}}}\rangle} P_t({{\boldsymbol{{x}}}}, \mathrm{d}{{\boldsymbol{{y}}}}) = \exp\biggl\{-\langle{{\boldsymbol{{x}}}}, {{\boldsymbol{{v}}}}(t, \boldsymbol{\lambda})\rangle - \int_0^t \psi({{\boldsymbol{{v}}}}(s, \boldsymbol{\lambda})) \, \mathrm{d} s\biggr\} , \qquad {{\boldsymbol{{x}}}} \in \mathbb{R}_+^d , \ \boldsymbol{\lambda} \in \mathbb{R}_+^d , \ t \in \mathbb{R}_+ , \end{equation*}
where, for any
$\boldsymbol{\lambda} \in \mathbb{R}_+^d$
, the continuously differentiable function
$\mathbb{R}_+ \ni t \mapsto {{\boldsymbol{{v}}}}(t, \boldsymbol{\lambda}) = (v_1(t, \boldsymbol{\lambda}), \ldots, v_d(t, \boldsymbol{\lambda}))^\top \in \mathbb{R}_+^d$
is the unique locally bounded solution to the system of differential equations
\begin{equation*} \partial_t v_i(t, \boldsymbol{\lambda}) = - \varphi_i({{\boldsymbol{{v}}}}(t, \boldsymbol{\lambda})) , \qquad v_i(0, \boldsymbol{\lambda}) = \lambda_i , \qquad i \in \{1, \ldots, d\} , \end{equation*}
with
\begin{equation*} \varphi_i(\boldsymbol{\lambda}) \,:\!=\, c_i \lambda_i^2 - \langle{{\boldsymbol{{B}}}} {{\boldsymbol{{e}}}}_i, \boldsymbol{\lambda}\rangle + \int_{{\mathcal U}_d} \bigl(\mathrm{e}^{-\langle\boldsymbol{\lambda}, {{\boldsymbol{{z}}}}\rangle} - 1 + \lambda_i (1 \land z_i)\bigr) \, \mu_i(\mathrm{d}{{\boldsymbol{{z}}}}) \end{equation*}
for
$\boldsymbol{\lambda} \in \mathbb{R}_+^d$
,
$i \in \{1, \ldots, d\}$
, and
\begin{equation*} \psi(\boldsymbol{\lambda}) \,:\!=\, \langle{\boldsymbol{\beta}}, \boldsymbol{\lambda}\rangle + \int_{{\mathcal U}_d} \bigl(1 - \mathrm{e}^{-\langle\boldsymbol{\lambda}, {{\boldsymbol{{r}}}}\rangle}\bigr) \, \nu(\mathrm{d}{{\boldsymbol{{r}}}}) , \qquad \boldsymbol{\lambda} \in \mathbb{R}_+^d . \end{equation*}
Theorem 2.1 is a special case of Theorem 2.7 of Duffie et al. [Reference Duffie, Filipović and Schachermayer13] with
$m = d$
,
$n = 0$
, and zero killing rate. For more details, see Remark 2.5 in Barczy et al. [Reference Barczy, Li and Pap6].
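To make the structure of Theorem 2.1 concrete, the following is a sketch in the simplest diffusion case (our illustration; $d = 1$, $\mu_1 = 0$, $\nu = 0$, ${\boldsymbol{\beta}} = {\boldsymbol{0}}$, $c_1 = c \in \mathbb{R}_{++}$, ${{\boldsymbol{{B}}}} = (b)$ with $b \in \mathbb{R}_{++}$). With the parametrization above, the branching mechanism reduces to $\varphi_1(\lambda) = c\lambda^2 - b\lambda$, and the system of differential equations becomes the scalar Riccati equation
\begin{equation*} \partial_t v_1(t, \lambda) = - c \, v_1(t, \lambda)^2 + b \, v_1(t, \lambda) , \qquad v_1(0, \lambda) = \lambda , \end{equation*}
whose unique locally bounded solution is
\begin{equation*} v_1(t, \lambda) = \frac{b \lambda \mathrm{e}^{bt}}{c \lambda (\mathrm{e}^{bt} - 1) + b} , \qquad t \in \mathbb{R}_+ , \ \lambda \in \mathbb{R}_+ , \end{equation*}
so that $\int_{\mathbb{R}_+} \mathrm{e}^{-\lambda y} P_t(x, \mathrm{d} y) = \mathrm{e}^{-x v_1(t, \lambda)}$. Differentiating with respect to $\lambda$ at $\lambda = 0$ recovers the mean ${\mathbb{E}}(X_t \,|\, X_0 = x) = x \mathrm{e}^{bt}$, in accordance with (2.2) below.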
Definition 2.2. A conservative Markov process with state space
$\mathbb{R}_+^d$
and with transition semigroup
$(P_t)_{t\in\mathbb{R}_+}$
given in Theorem 2.1 is called a multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
. The function
$\mathbb{R}_+^d \ni \boldsymbol{\lambda} \mapsto (\varphi_1(\boldsymbol{\lambda}), \ldots, \varphi_d(\boldsymbol{\lambda}))^\top \in \mathbb{R}^d$
is called its branching mechanism, and the function
$\mathbb{R}_+^d \ni \boldsymbol{\lambda} \mapsto \psi(\boldsymbol{\lambda}) \in \mathbb{R}_+$
is called its immigration mechanism. A multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
is called a CB process (a continuous-state and continuous-time branching process without immigration) if
${\boldsymbol{\beta}} = {\boldsymbol{0}}$
and
$\nu = 0$
(equivalently,
$\psi=0$
).
Let
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
be a multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|) < \infty$
and the moment condition
\begin{equation} \int_{{\mathcal U}_d} \|{{\boldsymbol{{r}}}}\| {\mathbb{1}}_{\{\|{{\boldsymbol{{r}}}}\|\geqslant 1\}} \, \nu(\mathrm{d}{{\boldsymbol{{r}}}}) < \infty \tag{2.1} \end{equation}
holds. Then, by the formula (3.4) in Barczy et al. [Reference Barczy, Li and Pap6],
\begin{equation} {\mathbb{E}}({{\boldsymbol{{X}}}}_t \,|\, {{\boldsymbol{{X}}}}_0 = {{\boldsymbol{{x}}}}) = \mathrm{e}^{t{\widetilde{{{\boldsymbol{{B}}}}}}} {{\boldsymbol{{x}}}} + \int_0^t \mathrm{e}^{u{\widetilde{{{\boldsymbol{{B}}}}}}} \widetilde{{\boldsymbol{\beta}}} \, \mathrm{d} u , \qquad {{\boldsymbol{{x}}}} \in \mathbb{R}_+^d , \ t \in \mathbb{R}_+ , \tag{2.2} \end{equation}
where
\begin{equation*} {\widetilde{{{\boldsymbol{{B}}}}}} \,:\!=\, \biggl(b_{i,j} + \int_{{\mathcal U}_d} (z_i - \delta_{i,j})^+ \, \mu_j(\mathrm{d}{{\boldsymbol{{z}}}})\biggr)_{i,j\in\{1,\ldots,d\}} , \qquad \widetilde{{\boldsymbol{\beta}}} \,:\!=\, {\boldsymbol{\beta}} + \int_{{\mathcal U}_d} {{\boldsymbol{{r}}}} \, \nu(\mathrm{d}{{\boldsymbol{{r}}}}) , \end{equation*}
with
$\delta_{i,j}\,:\!=\,1$
if
$i = j$
, and
$\delta_{i,j} \,:\!=\, 0$
if
$i \ne j$
. Note that, for each
${{\boldsymbol{{x}}}} \in \mathbb{R}^d_+$
, the function
$\mathbb{R}_+ \ni t \mapsto {\mathbb{E}}({{\boldsymbol{{X}}}}_t \,|\, {{\boldsymbol{{X}}}}_0 = {{\boldsymbol{{x}}}})$
is continuous, and
${\widetilde{{{\boldsymbol{{B}}}}}} \in \mathbb{R}^{d \times d}_{(+)}$
and
$\widetilde{{\boldsymbol{\beta}}} \in \mathbb{R}_+^d$
, since the integrals appearing in the definitions of ${\widetilde{{{\boldsymbol{{B}}}}}}$ and $\widetilde{{\boldsymbol{\beta}}}$ are non-negative and finite; see Barczy et al. [Reference Barczy, Li and Pap6, Section 2]. Further,
${\mathbb{E}}({{\boldsymbol{{X}}}}_t \,|\, {{\boldsymbol{{X}}}}_0 = {{\boldsymbol{{x}}}})$
,
${{\boldsymbol{{x}}}} \in \mathbb{R}_+^d$
, does not depend on the parameter
${{\boldsymbol{{c}}}}$
. One can give probabilistic interpretations of the modified parameters
${\widetilde{{{\boldsymbol{{B}}}}}}$
and
$\widetilde{{\boldsymbol{\beta}}}$
: namely, for each
$t \in \mathbb{R}_+$
, we have
$\mathrm{e}^{t{\widetilde{{{\boldsymbol{{B}}}}}}} {{\boldsymbol{{e}}}}_j = {\mathbb{E}}({{\boldsymbol{{Y}}}}_t \,|\, {{\boldsymbol{{Y}}}}_0 = {{\boldsymbol{{e}}}}_j)$
,
$j \in \{1, \ldots, d\}$
, and
$t \widetilde{{\boldsymbol{\beta}}} = {\mathbb{E}}({{\boldsymbol{{Z}}}}_t \,|\, {{\boldsymbol{{Z}}}}_0 = {\boldsymbol{0}})$
, where
$({{\boldsymbol{{Y}}}}_t)_{t\in\mathbb{R}_+}$
and
$({{\boldsymbol{{Z}}}}_t)_{t\in\mathbb{R}_+}$
are multi-type CBI processes with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{0}}, {{\boldsymbol{{B}}}}, 0, \boldsymbol{\mu})$
and
$(d, {\boldsymbol{0}}, {\boldsymbol{\beta}}, {\boldsymbol{0}}, \nu, {\boldsymbol{0}})$
, respectively; see the formula (2.2). The processes
$({{\boldsymbol{{Y}}}}_t)_{t\in\mathbb{R}_+}$
and
$({{\boldsymbol{{Z}}}}_t)_{t\in\mathbb{R}_+}$
can be viewed as pure branching (without immigration) and pure immigration (without branching) processes, respectively. Consequently,
$\mathrm{e}^{\widetilde{{{\boldsymbol{{B}}}}}}$
and
$\widetilde{{\boldsymbol{\beta}}}$
may be called the branching mean matrix and the immigration mean vector, respectively. Note that the branching mechanism depends only on the parameters
${{\boldsymbol{{c}}}}$
,
${{\boldsymbol{{B}}}}$
, and
$\boldsymbol{\mu}$
, while the immigration mechanism depends only on the parameters
${\boldsymbol{\beta}}$
and
$\nu$
.
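As a quick illustration of (2.2) and of the role of the modified parameters (a hypothetical single-type example), if $d = 1$, ${\widetilde{{{\boldsymbol{{B}}}}}} = (\widetilde{b})$ with $\widetilde{b} \ne 0$, and $\widetilde{{\boldsymbol{\beta}}} = (\widetilde{\beta})$, then
\begin{equation*} {\mathbb{E}}(X_t \,|\, X_0 = x) = \mathrm{e}^{\widetilde{b} t} x + \int_0^t \mathrm{e}^{\widetilde{b} u} \widetilde{\beta} \, \mathrm{d} u = \mathrm{e}^{\widetilde{b} t} x + \frac{\mathrm{e}^{\widetilde{b} t} - 1}{\widetilde{b}} \, \widetilde{\beta} , \qquad x \in \mathbb{R}_+ , \ t \in \mathbb{R}_+ , \end{equation*}
so the branching mean $\mathrm{e}^{\widetilde{b}}$ governs the exponential growth or decay of the conditional expectation, while the immigration mean $\widetilde{\beta}$ enters only through the additive integral term.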
If
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
is a set of admissible parameters,
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|) < \infty$
, and the moment condition (2.1) holds, then the multi-type CBI process having parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
can be represented as a pathwise unique strong solution of the stochastic differential equation
\begin{equation} \begin{aligned} {{\boldsymbol{{X}}}}_t = {{\boldsymbol{{X}}}}_0 &+ \int_0^t ({\boldsymbol{\beta}} + {\widetilde{{{\boldsymbol{{B}}}}}} {{\boldsymbol{{X}}}}_u) \, \mathrm{d} u + \sum_{\ell=1}^d \int_0^t \sqrt{2 c_\ell X_{u,\ell}^+} \, \mathrm{d} W_{u,\ell} \, {{\boldsymbol{{e}}}}_\ell \\ &+ \sum_{\ell=1}^d \int_0^t \int_{{\mathcal U}_d} \int_0^\infty {{\boldsymbol{{z}}}} \, {\mathbb{1}}_{\{w \leqslant X_{u-,\ell}\}} \, \widetilde{N}_\ell(\mathrm{d} u, \mathrm{d}{{\boldsymbol{{z}}}}, \mathrm{d} w) + \int_0^t \int_{{\mathcal U}_d} {{\boldsymbol{{r}}}} \, M(\mathrm{d} u, \mathrm{d}{{\boldsymbol{{r}}}}) \end{aligned} \tag{2.3} \end{equation}
for
$t \in\mathbb{R}_+$
. (See Theorem 4.6 and Section 5 in Barczy et al. [Reference Barczy, Li and Pap6]; there, (2.3) was proved only for
$d \in \{1, 2\}$
, but their method clearly works for all
$d \in \mathbb{N}$
.) Here
$X_{t,\ell}$
,
$\ell\in\{1,\ldots,d\}$
, denotes the
$\ell$
th coordinate of
${{\boldsymbol{{X}}}}_t$
,
${\mathbb{P}}({{\boldsymbol{{X}}}}_0\in\mathbb{R}_+^d)=1$
;
$(W_{t,1})_{t\in\mathbb{R}_+}$
, …,
$(W_{t,d})_{t\in\mathbb{R}_+}$
are standard Wiener processes; and
$N_\ell$
,
$\ell \in \{1, \ldots, d\}$
, and M are Poisson random measures on
$\mathbb{R}_{++} \times {\mathcal U}_d \times \mathbb{R}_{++}$
and on
$\mathbb{R}_{++} \times {\mathcal U}_d$
with intensity measures
$\mathrm{d} u \, \mu_\ell(\mathrm{d}{{\boldsymbol{{z}}}}) \, \mathrm{d} w$
,
$\ell \in \{1, \ldots, d\}$
, and
$\mathrm{d} u \, \nu(\mathrm{d}{{\boldsymbol{{r}}}})$
, respectively, such that
${{\boldsymbol{{X}}}}_0$
,
$(W_{t,1})_{t\in\mathbb{R}_+}$
, …,
$(W_{t,d})_{t\in\mathbb{R}_+}$
,
$N_1,\dots,N_d$
, and M are independent, and
$\widetilde{N}_\ell(\mathrm{d} u, \mathrm{d}{{\boldsymbol{{z}}}}, \mathrm{d} w) \,:\!=\, N_\ell(\mathrm{d} u, \mathrm{d}{{\boldsymbol{{z}}}}, \mathrm{d} w) - \mathrm{d} u \, \mu_\ell(\mathrm{d}{{\boldsymbol{{z}}}}) \, \mathrm{d} w$
,
$\ell \in \{1, \ldots, d\}$
.
Next we recall a classification of multi-type CBI processes. For a matrix
${\boldsymbol{{A}}} \in \mathbb{R}^{d \times d}$
,
$\sigma({\boldsymbol{{A}}})$
will denote the spectrum of
${\boldsymbol{{A}}}$
, that is, the set of all
$\lambda \in \mathbb{C}$
that are eigenvalues of
${\boldsymbol{{A}}}$
. Then
$r({\boldsymbol{{A}}}) = \max_{\lambda \in \sigma({\boldsymbol{{A}}})} |\lambda|$
is the spectral radius of
${\boldsymbol{{A}}}$
. Moreover, we will use the notation
\begin{equation*} s({\boldsymbol{{A}}}) \,:\!=\, \max_{\lambda \in \sigma({\boldsymbol{{A}}})} {\operatorname{Re}}(\lambda) . \end{equation*}
A matrix
${\boldsymbol{{A}}} \in \mathbb{R}^{d\times d}$
is called reducible if there exist a permutation matrix
${{\boldsymbol{{P}}}} \in \mathbb{R}^{ d \times d}$
and an integer r with
$1 \leqslant r \leqslant d-1$
such that
\begin{equation*} {{\boldsymbol{{P}}}}^\top {\boldsymbol{{A}}} {{\boldsymbol{{P}}}} = \begin{pmatrix} {\boldsymbol{{A}}}_1 & {\boldsymbol{{A}}}_2 \\ {\boldsymbol{0}} & {\boldsymbol{{A}}}_3 \end{pmatrix} , \end{equation*}
where
${\boldsymbol{{A}}}_1 \in \mathbb{R}^{r\times r}$
,
${\boldsymbol{{A}}}_3 \in \mathbb{R}^{ (d-r) \times (d-r) }$
,
${\boldsymbol{{A}}}_2 \in \mathbb{R}^{r \times (d-r) }$
, and
${\boldsymbol{0}} \in \mathbb{R}^{(d-r)\times r}$
is a null matrix. A matrix
${\boldsymbol{{A}}} \in \mathbb{R}^{d\times d}$
is called irreducible if it is not reducible; see, e.g., Horn and Johnson [Reference Horn and Johnson15, Definitions 6.2.21 and 6.2.22]. We emphasize that, with the definition above, no 1-by-1 matrix is reducible.
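For instance (a simple pair of hypothetical $2 \times 2$ matrices), consider
\begin{equation*} \begin{pmatrix} 1 & 1 \\ 0 & 2 \end{pmatrix} \qquad \text{and} \qquad \begin{pmatrix} 1 & 1 \\ 1 & 2 \end{pmatrix} ; \end{equation*}
the first is reducible (take ${{\boldsymbol{{P}}}} = {{\boldsymbol{{I}}}}_2$ and $r = 1$ in the definition above), while the second is irreducible, since neither of the two possible permutations of the coordinates produces a vanishing lower-left block.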
Definition 2.3. Let
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
be a multi-type CBI process having parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that the moment condition (2.1) holds. Then
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is called irreducible if
${\widetilde{{{\boldsymbol{{B}}}}}}$
is irreducible.
Recall that if
${\widetilde{{{\boldsymbol{{B}}}}}} \in \mathbb{R}^{d\times d}_{(+)}$
is irreducible, then
$\mathrm{e}^{t{\widetilde{{{\boldsymbol{{B}}}}}}} \in \mathbb{R}^{d \times d}_{++}$
for all
$t \in \mathbb{R}_{++}$
, and
$s({\widetilde{{{\boldsymbol{{B}}}}}})$
is a real eigenvalue of
${\widetilde{{{\boldsymbol{{B}}}}}}$
, the algebraic and geometric multiplicities of
$s({\widetilde{{{\boldsymbol{{B}}}}}})$
are 1, and the real parts of the other eigenvalues of
${\widetilde{{{\boldsymbol{{B}}}}}}$
are less than
$s({\widetilde{{{\boldsymbol{{B}}}}}})$
. Moreover, corresponding to the eigenvalue
$s({\widetilde{{{\boldsymbol{{B}}}}}})$
there exists a unique (right) eigenvector
${\widetilde{{{\boldsymbol{{u}}}}}} \in \mathbb{R}^d_{++}$
of
${\widetilde{{{\boldsymbol{{B}}}}}}$
such that the sum of its coordinates is 1; this is also the unique (right) eigenvector of
$\mathrm{e}^{\widetilde{{{\boldsymbol{{B}}}}}}$
, called the right Perron vector of
$\mathrm{e}^{\widetilde{{{\boldsymbol{{B}}}}}}$
, corresponding to the eigenvalue
$r(\mathrm{e}^{\widetilde{{{\boldsymbol{{B}}}}}}) = \mathrm{e}^{s({\widetilde{{{\boldsymbol{{B}}}}}})}$
of
$\mathrm{e}^{\widetilde{{{\boldsymbol{{B}}}}}}$
such that the sum of its coordinates is 1. Further, there exists a unique left eigenvector
${{\boldsymbol{{u}}}} \in \mathbb{R}^d_{++}$
of
${\widetilde{{{\boldsymbol{{B}}}}}}$
corresponding to the eigenvalue
$s({\widetilde{{{\boldsymbol{{B}}}}}})$
with
${\widetilde{{{\boldsymbol{{u}}}}}}^\top {{\boldsymbol{{u}}}} = 1$
, which is also the unique (left) eigenvector of
$\mathrm{e}^{\widetilde{{{\boldsymbol{{B}}}}}}$
, called the left Perron vector of
$\mathrm{e}^{\widetilde{{{\boldsymbol{{B}}}}}}$
, corresponding to the eigenvalue
$r(\mathrm{e}^{\widetilde{{{\boldsymbol{{B}}}}}}) = \mathrm{e}^{s({\widetilde{{{\boldsymbol{{B}}}}}})}$
of
$\mathrm{e}^{\widetilde{{{\boldsymbol{{B}}}}}}$
such that
${\widetilde{{{\boldsymbol{{u}}}}}}^\top {{\boldsymbol{{u}}}} = 1$
. Moreover, there exist
$C_1, C_2, C_3, C_4 \in \mathbb{R}_{++}$
such that
\begin{equation} \bigl\|\mathrm{e}^{t{\widetilde{{{\boldsymbol{{B}}}}}}} - \mathrm{e}^{s({\widetilde{{{\boldsymbol{{B}}}}}})t} \, {\widetilde{{{\boldsymbol{{u}}}}}} \, {{\boldsymbol{{u}}}}^\top\bigr\| \leqslant C_1 \mathrm{e}^{(s({\widetilde{{{\boldsymbol{{B}}}}}}) - C_2)t} , \qquad t \in \mathbb{R}_+ , \tag{2.4} \end{equation}
\begin{equation} C_4 \mathrm{e}^{s({\widetilde{{{\boldsymbol{{B}}}}}})t} \leqslant \bigl\|\mathrm{e}^{t{\widetilde{{{\boldsymbol{{B}}}}}}}\bigr\| \leqslant C_3 \mathrm{e}^{s({\widetilde{{{\boldsymbol{{B}}}}}})t} , \qquad t \in \mathbb{R}_+ . \tag{2.5} \end{equation}
These Frobenius- and Perron-type results can be found, e.g., in Barczy and Pap [Reference Barczy and Pap10, Appendix A] and Barczy et al. [Reference Barczy, Palau and Pap8, (3.8)].
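As a concrete (hypothetical) illustration of these objects, take $d = 2$ and ${\widetilde{{{\boldsymbol{{B}}}}}} = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix} \in \mathbb{R}^{2\times2}_{(+)}$, which is irreducible. Its eigenvalues are $3$ and $-1$, so $s({\widetilde{{{\boldsymbol{{B}}}}}}) = 3$, and
\begin{equation*} {\widetilde{{{\boldsymbol{{u}}}}}} = \begin{pmatrix} 1/2 \\ 1/2 \end{pmatrix} , \qquad {{\boldsymbol{{u}}}} = \begin{pmatrix} 1 \\ 1 \end{pmatrix} , \qquad {\widetilde{{{\boldsymbol{{u}}}}}}^\top {{\boldsymbol{{u}}}} = 1 , \end{equation*}
are the right and left Perron vectors normalized as above. In particular, a CBI process with this ${\widetilde{{{\boldsymbol{{B}}}}}}$ is supercritical in the sense of Definition 2.5 below, since $s({\widetilde{{{\boldsymbol{{B}}}}}}) = 3 > 0$.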
We will need the following dichotomy of the expectation of an irreducible multi-type CBI process.
Lemma 2.1. Let
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
be an irreducible multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|) < \infty$
and the moment condition (2.1) holds. Then either
${\mathbb{E}}({{\boldsymbol{{X}}}}_t) = {\boldsymbol{0}}$
for all
$t \in \mathbb{R}_+$
, or
${\mathbb{E}}({{\boldsymbol{{X}}}}_t) \in \mathbb{R}_{++}^d$
for all
$t \in \mathbb{R}_{++}$
. Specifically, if
${\mathbb{P}}({{\boldsymbol{{X}}}}_0 = {\boldsymbol{0}}) = 1$
,
${\boldsymbol{\beta}} = {\boldsymbol{0}}$
, and
$\nu = 0$
, then
${\mathbb{E}}({{\boldsymbol{{X}}}}_t) = {\boldsymbol{0}}$
for all
$t \in \mathbb{R}_+$
, and hence
${\mathbb{P}}({{\boldsymbol{{X}}}}_t = {\boldsymbol{0}}) = 1$
for all
$t \in \mathbb{R}_+$
; otherwise
${\mathbb{E}}({{\boldsymbol{{X}}}}_t) \in \mathbb{R}_{++}^d$
for all
$t \in \mathbb{R}_{++}$
.
Proof. For each
$t \in \mathbb{R}_+$
, by (2.2), we have
\begin{equation*} {\mathbb{E}}({{\boldsymbol{{X}}}}_t) = \mathrm{e}^{t{\widetilde{{{\boldsymbol{{B}}}}}}} {\mathbb{E}}({{\boldsymbol{{X}}}}_0) + \int_0^t \mathrm{e}^{u{\widetilde{{{\boldsymbol{{B}}}}}}} \widetilde{{\boldsymbol{\beta}}} \, \mathrm{d} u . \end{equation*}
Since
$\mathrm{e}^{u{\widetilde{{{\boldsymbol{{B}}}}}}} \in \mathbb{R}_{++}^{d\times d}$
for all
$u \in \mathbb{R}_{++}$
,
${\mathbb{E}}({{\boldsymbol{{X}}}}_0) \in \mathbb{R}_+^d$
, and
$\widetilde{{\boldsymbol{\beta}}} \in \mathbb{R}_+^d$
, we obtain the assertions.
Definition 2.4. Let
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
be an irreducible multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
. Then
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is called trivial if
${\mathbb{P}}({{\boldsymbol{{X}}}}_0 = {\boldsymbol{0}}) = 1$
,
${\boldsymbol{\beta}} = {\boldsymbol{0}}$
, and
$\nu = 0$
, or equivalently if
${\mathbb{P}}({{\boldsymbol{{X}}}}_t = {\boldsymbol{0}}) = 1$
for all
$t \in \mathbb{R}_+$
. Otherwise
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is called non-trivial.
We note that if
$({{\boldsymbol{{X}}}}_t^{(1)})_{t\in\mathbb{R}_+}$
and
$({{\boldsymbol{{X}}}}_t^{(2)})_{t\in\mathbb{R}_+}$
are multi-type CBI processes with parameters
$(d, {{\boldsymbol{{c}}}}^{(1)}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}^{(1)}, \nu, \boldsymbol{\mu}^{(1)})$
and
$(d, {{\boldsymbol{{c}}}}^{(2)}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}^{(2)}, \nu, \boldsymbol{\mu}^{(2)})$
, respectively,
${{\boldsymbol{{X}}}}_0^{(1)} \stackrel{{\mathrm{a.s.}}}{=} {{\boldsymbol{{X}}}}_0^{(2)}$
, and
$({{\boldsymbol{{X}}}}_t^{(1)})_{t\in\mathbb{R}_+}$
is trivial, then
$({{\boldsymbol{{X}}}}_t^{(2)})_{t\in\mathbb{R}_+}$
is also trivial.
Definition 2.5. Let
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
be an irreducible multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|) < \infty$
and the moment condition (2.1) holds. Then
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is called
\begin{equation*} \begin{cases} \text{\textit{subcritical}} & \text{if } s({\widetilde{{{\boldsymbol{{B}}}}}}) < 0 , \\ \text{\textit{critical}} & \text{if } s({\widetilde{{{\boldsymbol{{B}}}}}}) = 0 , \\ \text{\textit{supercritical}} & \text{if } s({\widetilde{{{\boldsymbol{{B}}}}}}) > 0 . \end{cases} \end{equation*}
For the motivations for Definitions 2.3 and 2.5, see Barczy and Pap [Reference Barczy and Pap10, Section 3].
3. Results
Now we present the main result of this paper. Recall that
${{\boldsymbol{{u}}}} \in \mathbb{R}^d_{++}$
is the left Perron vector of
$\mathrm{e}^{{\widetilde{{{\boldsymbol{{B}}}}}}}$
corresponding to the eigenvalue
$\mathrm{e}^{s({\widetilde{{{\boldsymbol{{B}}}}}})}$
.
Theorem 3.1. Let
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
be a supercritical and irreducible multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|) < \infty$
and the moment condition (2.1) holds. Let
$\lambda \in \sigma({\widetilde{{{\boldsymbol{{B}}}}}})$
and let
${{\boldsymbol{{v}}}} \in \mathbb{C}^d$
be a left eigenvector of
${\widetilde{{{\boldsymbol{{B}}}}}}$
corresponding to the eigenvalue
$\lambda$
.
-
(i) If ${\operatorname{Re}}(\lambda) \in \bigl(\frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}}), s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr]$ and the moment condition
\begin{equation} \sum_{\ell=1}^d \int_{{\mathcal U}_d} g(\|{{\boldsymbol{{z}}}}\|) {\mathbb{1}}_{\{\|{{\boldsymbol{{z}}}}\|\geqslant 1\}} \, \mu_\ell(\mathrm{d} {{\boldsymbol{{z}}}}) < \infty \tag{3.1} \end{equation}
with
\begin{equation*} g(x) \,:\!=\, \begin{cases} x^{\frac{s({\widetilde{{{\boldsymbol{{B}}}}}})}{{\operatorname{Re}}(\lambda)}} & \text{if ${\operatorname{Re}}(\lambda) \in \bigl(\frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}}), s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr)$,} \\ x \log(x) & \text{if ${\operatorname{Re}}(\lambda) = s({\widetilde{{{\boldsymbol{{B}}}}}})$ ($\Longleftrightarrow$ $\lambda = s({\widetilde{{{\boldsymbol{{B}}}}}})$),} \end{cases} \qquad x \in \mathbb{R}_{++} , \end{equation*}
holds, then there exists a complex random variable $w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0}$ with ${\mathbb{E}}(|w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0}|) < \infty$ such that
\begin{equation} \mathrm{e}^{-\lambda t} \langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle \to w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0} \qquad \text{as $t \to \infty$ in $L_1$ and almost surely.} \tag{3.2} \end{equation}
-
(ii) If ${\operatorname{Re}}(\lambda) = \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})$ and the moment condition
\begin{equation} \sum_{\ell=1}^d \int_{{\mathcal U}_d} \|{{\boldsymbol{{z}}}}\|^4 {\mathbb{1}}_{\{\|{{\boldsymbol{{z}}}}\|\geqslant 1\}} \, \mu_\ell(\mathrm{d}{{\boldsymbol{{z}}}}) < \infty , \qquad \int_{{\mathcal U}_d} \|{{\boldsymbol{{r}}}}\|^2 {\mathbb{1}}_{\{\|{{\boldsymbol{{r}}}}\|\geqslant 1\}} \, \nu(\mathrm{d} {{\boldsymbol{{r}}}}) < \infty \tag{3.3} \end{equation}
holds, then
\begin{equation} t^{-1/2} \mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t/2} \begin{pmatrix} {\operatorname{Re}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle) \\ {\operatorname{Im}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle) \end{pmatrix} \stackrel{{\mathcal D}}{\longrightarrow} \sqrt{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}} \, {{\boldsymbol{{Z}}}}_{{\boldsymbol{{v}}}} \qquad \text{as $t \to \infty$,} \tag{3.4} \end{equation}
where ${{\boldsymbol{{Z}}}}_{{\boldsymbol{{v}}}}$ is a 2-dimensional random vector such that ${{\boldsymbol{{Z}}}}_{{\boldsymbol{{v}}}} \stackrel{{\mathcal D}}{=} {\mathcal N}_2({\boldsymbol{0}}, {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}})$, independent of $w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}$, where
\begin{align} {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}} \,:\!=\, \frac{1}{2} \sum\limits_{\ell=1}^d \langle{{\boldsymbol{{e}}}}_\ell, {\widetilde{{{\boldsymbol{{u}}}}}}\rangle \left(C_{{{\boldsymbol{{v}}}},\ell} {{\boldsymbol{{I}}}}_2 + \begin{pmatrix} {\operatorname{Re}}(\widetilde{C}_{{{\boldsymbol{{v}}}},\ell}) & {\operatorname{Im}}(\widetilde{C}_{{{\boldsymbol{{v}}}},\ell}) \\ {\operatorname{Im}}(\widetilde{C}_{{{\boldsymbol{{v}}}},\ell}) & -{\operatorname{Re}}(\widetilde{C}_{{{\boldsymbol{{v}}}},\ell}) \end{pmatrix} {\mathbb{1}}_{\{{\operatorname{Im}}(\lambda)=0\}}\right) \tag{3.5} \end{align}
with
\begin{align*} C_{{{\boldsymbol{{v}}}},\ell} &\,:\!=\, 2 |\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{e}}}}_\ell\rangle|^2 c_\ell + \int_{{\mathcal U}_d} |\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{z}}}}\rangle|^2 \, \mu_\ell(\mathrm{d}{{\boldsymbol{{z}}}}) , \qquad \ell \in \{1, \ldots, d\} , \\ \widetilde{C}_{{{\boldsymbol{{v}}}},\ell} &\,:\!=\, 2 \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{e}}}}_\ell\rangle^2 c_\ell + \int_{{\mathcal U}_d} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{z}}}}\rangle^2 \, \mu_\ell(\mathrm{d}{{\boldsymbol{{z}}}}) , \qquad \ell \in \{1, \ldots, d\} . \end{align*}
-
(iii) If ${\operatorname{Re}}(\lambda) \in \bigl(-\infty, \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr)$ and the moment condition (3.3) holds, then
\begin{equation} \mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t/2} \begin{pmatrix} {\operatorname{Re}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle) \\ {\operatorname{Im}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle) \end{pmatrix} \stackrel{{\mathcal D}}{\longrightarrow} \sqrt{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}} \, {{\boldsymbol{{Z}}}}_{{\boldsymbol{{v}}}} \qquad \text{as $t \to \infty$,} \tag{3.6} \end{equation}
where ${{\boldsymbol{{Z}}}}_{{\boldsymbol{{v}}}}$ is a 2-dimensional random vector such that ${{\boldsymbol{{Z}}}}_{{\boldsymbol{{v}}}} \stackrel{{\mathcal D}}{=} {\mathcal N}_2({\boldsymbol{0}}, {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}})$, independent of $w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}$, where
\begin{align} {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}} \,:\!=\, \frac{1}{2} \sum\limits_{\ell=1}^d \langle{{\boldsymbol{{e}}}}_\ell, {\widetilde{{{\boldsymbol{{u}}}}}}\rangle \left\{\frac{C_{{{\boldsymbol{{v}}}},\ell}}{s({\widetilde{{{\boldsymbol{{B}}}}}})-2{\operatorname{Re}}(\lambda)} {{\boldsymbol{{I}}}}_2 + \begin{pmatrix} {\operatorname{Re}}\Bigl(\frac{\widetilde{C}_{{{\boldsymbol{{v}}}},\ell}}{s({\widetilde{{{\boldsymbol{{B}}}}}})-2\lambda}\Bigr) & {\operatorname{Im}}\Bigl(\frac{\widetilde{C}_{{{\boldsymbol{{v}}}},\ell}}{s({\widetilde{{{\boldsymbol{{B}}}}}})-2\lambda}\Bigr) \\[5pt] {\operatorname{Im}}\Bigl(\frac{\widetilde{C}_{{{\boldsymbol{{v}}}},\ell}}{s({\widetilde{{{\boldsymbol{{B}}}}}})-2\lambda}\Bigr) & -{\operatorname{Re}}\Bigl(\frac{\widetilde{C}_{{{\boldsymbol{{v}}}},\ell}}{s({\widetilde{{{\boldsymbol{{B}}}}}})-2\lambda}\Bigr) \end{pmatrix}\right\} \tag{3.7} \end{align}
with
$C_{{{\boldsymbol{{v}}}},\ell}$ ,
$\ell \in \{1, \ldots, d\}$ , and
$\widetilde{C}_{{{\boldsymbol{{v}}}},\ell}$ ,
$\ell \in \{1, \ldots, d\}$ , defined in Part (ii).
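To illustrate how the limit variance matrix in (3.7) can be evaluated, consider a hypothetical two-type example (ours, purely for illustration): let $d = 2$, $\mu_1 = \mu_2 = 0$, $c_1 = c_2 = c \in \mathbb{R}_{++}$, and ${{\boldsymbol{{B}}}} = {\widetilde{{{\boldsymbol{{B}}}}}} = \begin{pmatrix} 1 & 2 \\ 2 & 1 \end{pmatrix}$, so that $s({\widetilde{{{\boldsymbol{{B}}}}}}) = 3$, ${\widetilde{{{\boldsymbol{{u}}}}}} = (1/2, 1/2)^\top$, and ${{\boldsymbol{{v}}}} = (1, -1)^\top$ is a left eigenvector corresponding to $\lambda = -1$, with ${\operatorname{Re}}(\lambda) = -1 < \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})$ (assume in addition that the remaining parameters are chosen so that the process is non-trivial and the moment conditions (2.1) and (3.3) hold). Then $C_{{{\boldsymbol{{v}}}},\ell} = \widetilde{C}_{{{\boldsymbol{{v}}}},\ell} = 2c$ for $\ell \in \{1, 2\}$, $s({\widetilde{{{\boldsymbol{{B}}}}}}) - 2\lambda = s({\widetilde{{{\boldsymbol{{B}}}}}}) - 2{\operatorname{Re}}(\lambda) = 5$, and (3.7) gives
\begin{equation*} {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}} = \frac{1}{2} \sum_{\ell=1}^2 \frac{1}{2} \left\{\frac{2c}{5} {{\boldsymbol{{I}}}}_2 + \begin{pmatrix} 2c/5 & 0 \\ 0 & -2c/5 \end{pmatrix}\right\} = \begin{pmatrix} 2c/5 & 0 \\ 0 & 0 \end{pmatrix} . \end{equation*}
Hence, by Part (iii) of Theorem 3.1, $\mathrm{e}^{-3t/2} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle \stackrel{{\mathcal D}}{\longrightarrow} \sqrt{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}} \, Z$ as $t \to \infty$, where $Z \stackrel{{\mathcal D}}{=} {\mathcal N}_1(0, 2c/5)$ is independent of $w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}$; the second row and column of ${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$ vanish because $\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$ is real in this case.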
We first make some remarks concerning the limit distributions in Parts (ii) and (iii) of Theorem 3.1. Note that under the moment condition (3.3), the moment condition (3.1) holds for
$\lambda=s({\widetilde{{{\boldsymbol{{B}}}}}})$
and hence there exists a non-negative random variable
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}$
with
${\mathbb{E}}(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0})<\infty$
such that
$\mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t}\langle{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_t\rangle \to w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}$
as
$t\to\infty$
in
$L_1$
and almost surely. Observe that if
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is not a trivial process (see Definition 2.4) and
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}\ne {\boldsymbol{0}}$
, then the scaling factors
$t^{-1/2}\mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t/2}$
and
$\mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t/2}$
in Parts (ii) and (iii) of Theorem 3.1 are correct in the sense that the corresponding limits are non-degenerate random variables, since
${\mathbb{P}}(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} = 0)<1$
by Theorem 3.1 in Barczy et al. [Reference Barczy, Palau and Pap8] or Lemma 3.1. The correctness of the scaling factor in Part (i) of Theorem 3.1 will be studied later on; this motivates the forthcoming Theorem 3.2. Note also that Theorem 3.1 is valid even if
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$
is not invertible. In Proposition D.3, necessary and sufficient conditions are given for the invertibility of
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$
provided that
${\mathbb{E}}(\Vert{{\boldsymbol{{X}}}}_0\Vert^2) < \infty$
,
${\operatorname{Im}}(\lambda) \ne 0$
, and the moment condition
\begin{equation} \sum_{\ell=1}^d \int_{{\mathcal U}_d} \|{{\boldsymbol{{z}}}}\|^2 {\mathbb{1}}_{\{\|{{\boldsymbol{{z}}}}\|\geqslant 1\}} \, \mu_\ell(\mathrm{d}{{\boldsymbol{{z}}}}) < \infty , \qquad \int_{{\mathcal U}_d} \|{{\boldsymbol{{r}}}}\|^2 {\mathbb{1}}_{\{\|{{\boldsymbol{{r}}}}\|\geqslant 1\}} \, \nu(\mathrm{d}{{\boldsymbol{{r}}}}) < \infty \tag{3.8} \end{equation}
holds.
Moreover, in Proposition D.2, under the moment condition (3.8) together with
${\mathbb{E}}(\Vert {{\boldsymbol{{X}}}}_0\Vert^2)<\infty$
, we describe the asymptotic behavior of the variance matrix of the real and imaginary parts of
$\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle$
as
$t \to \infty$
, which explains the phase transition at
${\operatorname{Re}}(\lambda) = \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})$
in Theorem 3.1. This result can be viewed as an extension of Proposition B.1 in Barczy et al. [Reference Barczy, Palau and Pap8] (see also Proposition D.1), where the asymptotic behavior of the second absolute moment
${\mathbb{E}}(\vert \langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle\vert^2)$
of
$\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle$
has been described as
$t\to\infty$
. The proof of Proposition D.2 is based on the decomposition of
$\mathrm{e}^{-\lambda t}\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle$
mentioned in the introduction (see the beginning of the proof of Part (iii) of Theorem 3.1), which yields an appropriate decomposition of
${\mathbb{E}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle^2)$
containing
${\mathbb{E}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0 \rangle^2)$
,
${\mathbb{E}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0 \rangle)$
, and
${\mathbb{E}}(X_{u,\ell})$
,
$u\in[0,t]$
,
$\ell\in\{1,\ldots,d\}$
. So the proof of Proposition D.2 can be finished through delicate estimates using the explicit form of
${\mathbb{E}}({{\boldsymbol{{X}}}}_t \,|\, {{\boldsymbol{{X}}}}_0={{\boldsymbol{{x}}}})$
,
${{\boldsymbol{{x}}}}\in\mathbb{R}_+^d$
,
$t\in\mathbb{R}_+$
, given in (2.2).
In the next statement, sufficient conditions are derived for
${\mathbb{P}}(w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0} = 0) = 0$
. Note that in the case
${\mathbb{P}}(w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0} = 0) = 0$
, the scaling factor
$\mathrm{e}^{-\lambda t}$
is correct in Part (i) of Theorem 3.1, in the sense that the limit is a non-degenerate random variable.
Theorem 3.2. Let
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
be a supercritical and irreducible multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|) < \infty$
and the moment conditions (2.1) and (3.8) hold. Let
$\lambda \in \sigma({\widetilde{{{\boldsymbol{{B}}}}}})$
be such that
${\operatorname{Re}}(\lambda) \in \bigl(\frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}}), s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr]$
, and let
${{\boldsymbol{{v}}}} \in \mathbb{C}^d$
be a left eigenvector of
${\widetilde{{{\boldsymbol{{B}}}}}}$
corresponding to the eigenvalue
$\lambda$
.
If the conditions
-
(i)
$\widetilde{{\boldsymbol{\beta}}} \ne {\boldsymbol{0}}$ , i.e.,
${\boldsymbol{\beta}} \ne {\boldsymbol{0}}$ or
$\nu \ne 0$ ,
-
(ii)
$\nu(\{{{\boldsymbol{{r}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \ne 0\}) > 0$ , or there exists
$\ell \in \{1, \ldots, d\}$ such that
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{e}}}}_\ell\rangle c_\ell \ne 0$ or
$\mu_\ell(\{{{\boldsymbol{{z}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{z}}}}\rangle \ne 0\}) > 0$
hold, then the law of
$w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0}$
does not have atoms, where
$w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0}$
is given in Part (i) of Theorem 3.1. In particular,
${\mathbb{P}}(w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0} = 0) = 0$
.
If the condition (ii) does not hold, then
${\mathbb{P}}(w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0} = \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0 + \lambda^{-1} \widetilde{{\boldsymbol{\beta}}}\rangle ) = 1$
, and in particular,
${\mathbb{P}}(w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0} = 0) = {\mathbb{P}}(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0 + \lambda^{-1} \widetilde{{\boldsymbol{\beta}}}\rangle = 0)$
.
If
$\lambda = s({\widetilde{{{\boldsymbol{{B}}}}}})$
,
${{\boldsymbol{{v}}}} = {{\boldsymbol{{u}}}}$
, and the condition (i) holds, then
${\mathbb{P}}(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} = 0) = 0$
.
If
$\lambda = s({\widetilde{{{\boldsymbol{{B}}}}}})$
,
${{\boldsymbol{{v}}}} = {{\boldsymbol{{u}}}}$
, and the conditions (i) and (ii) do not hold, then
${\mathbb{P}}(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} = 0) = {\mathbb{P}}({{\boldsymbol{{X}}}}_0 = {\boldsymbol{0}})$
.
Next, we show that with an appropriate random scaling in Parts (ii) and (iii) of Theorem 3.1,
$\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle$
is asymptotically normal as
$t \to \infty$
under the conditional probability measure
${\mathbb{P}}(\!\cdot\,|\, w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0)$
, provided that
${\mathbb{P}}(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0) > 0$
. Parts (ii) and (iii) of the forthcoming Theorem 3.3 are analogous to Theorems 1 and 2 and Part 5 of Corollary 5 in Athreya [Reference Athreya2]. First we give a necessary and sufficient condition for
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} \stackrel{{\mathrm{a.s.}}}{=} 0$
.
Lemma 3.1. Suppose that
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is a supercritical and irreducible multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|) < \infty$
, the moment condition (2.1) holds, and the moment condition (3.1) holds for
$\lambda = s({\widetilde{{{\boldsymbol{{B}}}}}})$
. Then
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} \stackrel{{\mathrm{a.s.}}}{=} 0$
if and only if
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is a trivial process (equivalently,
${{\boldsymbol{{X}}}}_0 \stackrel{{\mathrm{a.s.}}}{=} {\boldsymbol{0}}$
and
$\widetilde{{\boldsymbol{\beta}}} = {\boldsymbol{0}}$
; see Lemma 2.1 and Definition 2.4).
Theorem 3.3. Suppose that
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is a supercritical, irreducible, and non-trivial multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|) < \infty$
and the moment condition (2.1) holds.
-
(i) If
${\operatorname{Re}}(\lambda) \in \bigl(\frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}}), s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr]$ and the moment condition (3.1) holds, then
\begin{align*} &{\mathbb{1}}_{\{{{\boldsymbol{{X}}}}_t\ne{\boldsymbol{0}}\}} \frac{1}{\langle {{\boldsymbol{{u}}}}, {{\boldsymbol{{X}}}}_t \rangle^{{\operatorname{Re}}(\lambda)/s({\widetilde{{{\boldsymbol{{B}}}}}})}} \begin{pmatrix} \cos({\operatorname{Im}}(\lambda)t)\;\;\; & \sin({\operatorname{Im}}(\lambda)t) \\[5pt] - \sin({\operatorname{Im}}(\lambda)t)\;\;\; & \cos({\operatorname{Im}}(\lambda)t) \end{pmatrix} \begin{pmatrix} {\operatorname{Re}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle) \\[5pt] {\operatorname{Im}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle) \end{pmatrix} \\[5pt] &\to \frac{1}{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}^{{\operatorname{Re}}(\lambda)/s({\widetilde{{{\boldsymbol{{B}}}}}})}} \begin{pmatrix} {\operatorname{Re}}(w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0}) \\[5pt] {\operatorname{Im}}(w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0}) \end{pmatrix} \qquad \text{as $t \to \infty$} \end{align*}
almost surely on the event $\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$.
-
(ii) If
${\operatorname{Re}}(\lambda) = \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})$ and the moment condition (3.3) holds, then, under the conditional probability measure
${\mathbb{P}}(\!\cdot\,|\, w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0)$ , we have
\begin{equation*} {\mathbb{1}}_{\{\langle{{\boldsymbol{{u}}}}, {{\boldsymbol{{X}}}}_t\rangle>1\}} \frac{1}{\sqrt{\langle{{\boldsymbol{{u}}}}, {{\boldsymbol{{X}}}}_t\rangle\log(\langle{{\boldsymbol{{u}}}}, {{\boldsymbol{{X}}}}_t\rangle)}} \begin{pmatrix} {\operatorname{Re}}(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle) \\ {\operatorname{Im}}(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle) \end{pmatrix} \stackrel{{\mathcal D}_{\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}>0\}}}{\longrightarrow} {\mathcal N}_2\biggl({\boldsymbol{0}}, \frac{1}{s({\widetilde{{{\boldsymbol{{B}}}}}})}{\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}\biggr) \end{equation*}
as $t \to \infty$.
-
(iii) If
${\operatorname{Re}}(\lambda) \in \bigl(-\infty, \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr)$ and the moment condition (3.3) holds, then, under the conditional probability measure
${\mathbb{P}}(\!\cdot\,|\, w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0)$ , we have
\begin{equation*} {\mathbb{1}}_{\{{{\boldsymbol{{X}}}}_t\ne{\boldsymbol{0}}\}} \frac{1}{\sqrt{\langle {{\boldsymbol{{u}}}}, {{\boldsymbol{{X}}}}_t \rangle}} \begin{pmatrix} {\operatorname{Re}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle) \\ {\operatorname{Im}}(\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t \rangle) \end{pmatrix} \stackrel{{\mathcal D}_{\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}>0\}}}{\longrightarrow} {\mathcal N}_2({\boldsymbol{0}}, {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}) \qquad \text{as $t \to \infty$.} \end{equation*}
Remark 3.1. The indicator function
${\mathbb{1}}_{\{{{\boldsymbol{{X}}}}_t\ne{\boldsymbol{0}}\}}$
is needed in Parts (i) and (iii) of Theorem 3.3, and the indicator function
${\mathbb{1}}_{\{\langle{{\boldsymbol{{u}}}}, {{\boldsymbol{{X}}}}_t\rangle>1\}}$
is needed in Part (ii) of Theorem 3.3, since it can happen that
${\mathbb{P}}({{\boldsymbol{{X}}}}_t = {\boldsymbol{0}}) > 0$
,
$t \in \mathbb{R}_{++}$
, even if
$\widetilde{{\boldsymbol{\beta}}} \ne {\boldsymbol{0}}$
. For some examples, see our ArXiv preprint [Reference Barczy, Palau and Pap9, Remark 3.5].
Next we describe the asymptotic behavior of the relative frequencies of distinct types of individuals on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
. For different models, one can find similar results in Jagers [Reference Jagers18, Corollary 1] and Yakovlev and Yanev [Reference Yakovlev and Yanev25, Theorem 2]. For critical and irreducible multi-type CBI processes, see Barczy and Pap [Reference Barczy and Pap10, Corollary 4.1].
Proposition 3.1. If
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is a non-trivial, supercritical, and irreducible multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|) < \infty$
and the moment condition (2.1) holds, then for each
$i, j \in \{1, \ldots, d\}$
, we have
\begin{equation*} {\mathbb{1}}_{\{{{\boldsymbol{{e}}}}_j^\top{{\boldsymbol{{X}}}}_t\ne0\}} \frac{{{\boldsymbol{{e}}}}_i^\top {{\boldsymbol{{X}}}}_t}{{{\boldsymbol{{e}}}}_j^\top {{\boldsymbol{{X}}}}_t} \to \frac{\langle{{\boldsymbol{{e}}}}_i, {\widetilde{{{\boldsymbol{{u}}}}}}\rangle}{\langle{{\boldsymbol{{e}}}}_j, {\widetilde{{{\boldsymbol{{u}}}}}}\rangle} \qquad \text{and} \qquad {\mathbb{1}}_{\{{{\boldsymbol{{X}}}}_t\ne{\boldsymbol{0}}\}} \frac{{{\boldsymbol{{e}}}}_i^\top {{\boldsymbol{{X}}}}_t}{\sum_{k=1}^d X_{t,k}} \to \langle{{\boldsymbol{{e}}}}_i, {\widetilde{{{\boldsymbol{{u}}}}}}\rangle \end{equation*}
as
$t \to \infty$
on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
.
Remark 3.2. The indicator functions
${\mathbb{1}}_{\{{{\boldsymbol{{e}}}}_j^\top{{\boldsymbol{{X}}}}_t\ne0\}}$
and
${\mathbb{1}}_{\{{{\boldsymbol{{X}}}}_t\ne{\boldsymbol{0}}\}}$
are needed in Proposition 3.1, since it can happen that
${\mathbb{P}}({{\boldsymbol{{X}}}}_t = {\boldsymbol{0}}) > 0$
,
$t \in \mathbb{R}_{++}$
; see Remark 3.1.
4. Proofs
Proof of Part (i) of Theorem 3.1. This statement is proved in Barczy et al. [Reference Barczy, Palau and Pap8, Theorem 3.1].
Proof of Part (iii) of Theorem 3.1. The proof is divided into three main steps. First, we decompose the process
$(\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle)_{t\in\mathbb{R}_+}$
as the sum of a deterministic process and three square-integrable martingales. We show that the deterministic process goes to zero as
$t\rightarrow\infty$
. To prove the asymptotic normality of the martingales in question, we use Theorem E.1, due to Crimaldi and Pratelli [Reference Crimaldi and Pratelli12, Theorem 2.2], which provides a set of sufficient conditions for the asymptotic normality of multivariate martingales. The proof is complete as soon as we show that the conditions (E.1) and (E.2) of Theorem E.1 are satisfied. In the second and third steps, we prove that (E.1) and (E.2), respectively, are satisfied.
Step 1. For each
$t \in \mathbb{R}_+$
, we have the representation
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU21.png?pub-status=live)
with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU22.png?pub-status=live)
see Barczy et al. [Reference Barczy, Li and Pap7, Lemma 4.1] or Barczy et al. [Reference Barczy, Palau and Pap8, Lemma 2.7]. Thus, for each
$t \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU23.png?pub-status=live)
First, we show
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn14.png?pub-status=live)
Indeed, if
$\lambda = 0$
, then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU24.png?pub-status=live)
since
$s({\widetilde{{{\boldsymbol{{B}}}}}}) \in \mathbb{R}_{++}$
. Otherwise, if
${\operatorname{Re}}(\lambda) \in \bigl(-\infty, \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr)$
and
$\lambda \ne 0$
, then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU25.png?pub-status=live)
as
$t \to \infty$
.
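To make this step easier to follow, here is a minimal sketch of the elementary estimate involved, under the assumption (suggested by the scaling $\|{{\boldsymbol{{Q}}}}(t)\| = \mathrm{e}^{-(s({\widetilde{{{\boldsymbol{{B}}}}}})-2{\operatorname{Re}}(\lambda))t/2}$ used in Step 3) that (4.1) asserts the vanishing of the scaled deterministic term $\mathrm{e}^{-(s({\widetilde{{{\boldsymbol{{B}}}}}})-2{\operatorname{Re}}(\lambda))t/2} \langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle \int_0^t \mathrm{e}^{-\lambda u} \, \mathrm{d} u$:
\begin{align*} &\text{if $\lambda \ne 0$:}\qquad \mathrm{e}^{-(s({\widetilde{{{\boldsymbol{{B}}}}}})-2{\operatorname{Re}}(\lambda))t/2} \biggl| \int_0^t \mathrm{e}^{-\lambda u} \, \mathrm{d} u \biggr| \leqslant \frac{\mathrm{e}^{-(s({\widetilde{{{\boldsymbol{{B}}}}}})-2{\operatorname{Re}}(\lambda))t/2} + \mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t/2}}{|\lambda|} \to 0, \\[5pt] &\text{if $\lambda = 0$:}\qquad \mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t/2} \int_0^t \mathrm{d} u = t \, \mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t/2} \to 0 \end{align*}
as $t \to \infty$, where the first bound uses $\bigl|\int_0^t \mathrm{e}^{-\lambda u} \, \mathrm{d} u\bigr| = |1 - \mathrm{e}^{-\lambda t}|/|\lambda| \leqslant (1 + \mathrm{e}^{-{\operatorname{Re}}(\lambda)t})/|\lambda|$ together with ${\operatorname{Re}}(\lambda) < \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})$, and the second uses $s({\widetilde{{{\boldsymbol{{B}}}}}}) \in \mathbb{R}_{++}$.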
For each
$t \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU26.png?pub-status=live)
with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU27.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU28.png?pub-status=live)
The assumption
${\operatorname{Re}}(\lambda) \in \bigl(-\infty, \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr)$
implies
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU29.png?pub-status=live)
For each
$t \in \mathbb{R}_+$
, we can write
${{\boldsymbol{{M}}}}_t = {{\boldsymbol{{M}}}}_t^{(2)} + {{\boldsymbol{{M}}}}_t^{(3,4)} + {{\boldsymbol{{M}}}}_t^{(5)}$
with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU30.png?pub-status=live)
Note that under the moment condition (3.3), the stochastic processes
$({{\boldsymbol{{M}}}}_t^{(2)})_{t\in\mathbb{R}_+}$
,
$({{\boldsymbol{{M}}}}_t^{(3,4)})_{t\in\mathbb{R}_+}$
, and
$({{\boldsymbol{{M}}}}_t^{(5)})_{t\in\mathbb{R}_+}$
are square-integrable martingales (see, e.g., Ikeda and Watanabe [Reference Ikeda and Watanabe16, pages 55 and 63]). One can also observe that, by the decomposition of
$(\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle)_{t\in\mathbb{R}_+}$
given at the beginning of this step,
$(\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle-\langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle \int_0^t \mathrm{e}^{-\lambda u} \, \mathrm{d} u)_{t\in\mathbb{R}_+}$
is a martingale with respect to the filtration
$\sigma({{\boldsymbol{{X}}}}_u : u \in[0,t])$
,
$t\in\mathbb{R}_+$
, which follows from Barczy et al. [Reference Barczy, Palau and Pap8, Lemma 2.6] as well.
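For completeness, the martingale property just mentioned can also be verified directly from the first-moment structure. The following is a hedged sketch, assuming the conditional moment formula ${\mathbb{E}}({{\boldsymbol{{X}}}}_t \,|\, {\mathcal F}_s) = \mathrm{e}^{(t-s){\widetilde{{{\boldsymbol{{B}}}}}}} {{\boldsymbol{{X}}}}_s + \int_0^{t-s} \mathrm{e}^{w{\widetilde{{{\boldsymbol{{B}}}}}}} \widetilde{{\boldsymbol{\beta}}} \, \mathrm{d} w$ for $0 \leqslant s \leqslant t$ with ${\mathcal F}_s \,:\!=\, \sigma({{\boldsymbol{{X}}}}_u : u \in [0,s])$ (the Markov-property analogue of the moment formula recalled in the proof of Lemma 3.1), and using that ${{\boldsymbol{{v}}}}$ is a left eigenvector of ${\widetilde{{{\boldsymbol{{B}}}}}}$ corresponding to $\lambda$, so that ${{\boldsymbol{{v}}}}^\top \mathrm{e}^{r{\widetilde{{{\boldsymbol{{B}}}}}}} = \mathrm{e}^{\lambda r} {{\boldsymbol{{v}}}}^\top$, $r \in \mathbb{R}_+$:
\begin{align*} {\mathbb{E}}\bigl(\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle \,\big|\, {\mathcal F}_s\bigr) &= \mathrm{e}^{-\lambda t} \Bigl(\mathrm{e}^{\lambda(t-s)} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_s\rangle + \langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle \int_0^{t-s} \mathrm{e}^{\lambda w} \, \mathrm{d} w\Bigr) \\[5pt] &= \mathrm{e}^{-\lambda s} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_s\rangle + \langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle \int_s^t \mathrm{e}^{-\lambda u} \, \mathrm{d} u, \end{align*}
so that subtracting $\langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle \int_0^t \mathrm{e}^{-\lambda u} \, \mathrm{d} u$ from $\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$ indeed yields a martingale.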
The aim of the following discussion is to apply Theorem E.1 to the 2-dimensional martingale
$({{\boldsymbol{{M}}}}_t)_{t\in\mathbb{R}_+}$
with the scaling
${{\boldsymbol{{Q}}}}(t)$
,
$t \in \mathbb{R}_+$
.
Step 2. Now we prove that the condition (E.1) of Theorem E.1 holds for
$({{\boldsymbol{{M}}}}_t)_{t\in\mathbb{R}_+}$
with the scaling
${{\boldsymbol{{Q}}}}(t)$
,
$t \in \mathbb{R}_+$
. For each
$t \in \mathbb{R}_+$
, by Theorem I.4.52 in Jacod and Shiryaev [Reference Jacod and Shiryaev17], we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU31.png?pub-status=live)
since
$({{\boldsymbol{{M}}}}_t^{(2)})_{t\in\mathbb{R}_+}$
is continuous, where
$([{{\boldsymbol{{M}}}}^{(2)}]_t)_{t\in\mathbb{R}_+}$
and
$(\langle{{\boldsymbol{{M}}}}^{(2)}\rangle_t)_{t\in\mathbb{R}_+}$
respectively denote the quadratic variation process and the predictable quadratic variation process of
$({{\boldsymbol{{M}}}}^{(2)}_t)_{t\in\mathbb{R}_+}$
. Moreover, we have
${{\boldsymbol{{M}}}}_t^{(3,4)} = \sum_{\ell=1}^d {\widetilde{{{\boldsymbol{{Y}}}}}}_t^{(\ell)}$
with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU32.png?pub-status=live)
for
$t \in \mathbb{R}_+$
and
$\ell \in \{1, \ldots, d\}$
. For each
$\ell \in \{1, \ldots, d\}$
,
$({\widetilde{{{\boldsymbol{{Y}}}}}}_t^{(\ell)})_{t\in\mathbb{R}_+}$
is a square-integrable purely discontinuous martingale; see, e.g., Jacod and Shiryaev [Reference Jacod and Shiryaev17, Definition II.1.27 and Theorem II.1.33]. Hence, for each
$t \in \mathbb{R}_+$
and
$k, \ell \in \{1, \ldots, d\}$
, by Lemma I.4.51 in Jacod and Shiryaev [Reference Jacod and Shiryaev17], we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU33.png?pub-status=live)
Further, by the proof of Part (a) of Theorem II.1.33 in Jacod and Shiryaev [Reference Jacod and Shiryaev17], for each
$t \in \mathbb{R}_+$
and
$k \in \{1, \ldots, d\}$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU34.png?pub-status=live)
The aim of the following discussion is to show that for each
$t \in \mathbb{R}_+$
and
$k, \ell \in \{1, \ldots, d\}$
with
$k \ne \ell$
, we have
$[{\widetilde{{{\boldsymbol{{Y}}}}}}^{(k)}, {\widetilde{{{\boldsymbol{{Y}}}}}}^{(\ell)}]_t = {\boldsymbol{0}}$
almost surely. By the bilinearity of the quadratic variation process, for all
$\varepsilon \in \mathbb{R}_{++}$
and
$t \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn15.png?pub-status=live)
where, for all
$\varepsilon \in \mathbb{R}_{++}$
,
$k \in \{1, \ldots, d\}$
and
$t \in \mathbb{R}_+$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU35.png?pub-status=live)
which is well-defined and square-integrable, since, by (2) and (3.8),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU36.png?pub-status=live)
For each
$\varepsilon \in \mathbb{R}_{++}$
,
$t \in \mathbb{R}_+$
, and
$k, \ell \in \{1, \ldots, d\}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU37.png?pub-status=live)
with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU38.png?pub-status=live)
where the first equality follows from the proof of Part (a) of Theorem II.1.33 in Jacod and Shiryaev [Reference Jacod and Shiryaev17], and the second equality follows from (2), Part (vi) of Definition 2.1, and (3.8), since
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU39.png?pub-status=live)
and hence we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU40.png?pub-status=live)
For each
$\varepsilon \in \mathbb{R}_{++}$
and
$k \in \{1, \ldots, d\}$
, the jump times of
$({{\boldsymbol{{Y}}}}_t^{(k,\varepsilon)})_{t\in\mathbb{R}_+}$
form a subset of the jump times of the Poisson process
$(N_k([0,t] \times {\mathcal U}_d \times {\mathcal U}_1))_{t\in\mathbb{R}_+}$
. For each
$k, \ell \in \{1, \ldots, d\}$
with
$k \ne \ell$
, the Poisson processes
$(N_k([0,t] \times {\mathcal U}_d \times {\mathcal U}_1))_{t\in\mathbb{R}_+}$
and
$(N_\ell([0,t] \times {\mathcal U}_d \times {\mathcal U}_1))_{t\in\mathbb{R}_+}$
are independent, so they can jump simultaneously with probability zero; see, e.g., Revuz and Yor [Reference Revuz and Yor23, Chapter XII, Proposition 1.5]. Consequently, for each
$\varepsilon \in \mathbb{R}_{++}$
,
$t \in \mathbb{R}_+$
, and
$k, \ell \in \{1, \ldots, d\}$
with
$k \ne \ell$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU41.png?pub-status=live)
almost surely.
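This conclusion rests on the standard fact that the quadratic covariation of such processes is carried entirely by their common jumps. A hedged sketch of the reasoning, assuming (as their construction from the Poisson random measures suggests) that $({{\boldsymbol{{Y}}}}_t^{(k,\varepsilon)})_{t\in\mathbb{R}_+}$ and $({{\boldsymbol{{Y}}}}_t^{(\ell,\varepsilon)})_{t\in\mathbb{R}_+}$ have no continuous martingale part: by Theorem I.4.52 in Jacod and Shiryaev [Reference Jacod and Shiryaev17],
\begin{equation*} [{{\boldsymbol{{Y}}}}^{(k,\varepsilon)}, {{\boldsymbol{{Y}}}}^{(\ell,\varepsilon)}]_t = \sum_{u\in(0,t]} \Delta {{\boldsymbol{{Y}}}}_u^{(k,\varepsilon)} \bigl(\Delta {{\boldsymbol{{Y}}}}_u^{(\ell,\varepsilon)}\bigr)^\top = {\boldsymbol{0}} \qquad \text{almost surely,} \end{equation*}
since, almost surely, the two processes have no common jump times.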
Moreover, for each
$t \in \mathbb{R}_+$
,
$\varepsilon \in \mathbb{R}_{++}$
,
$i, j \in \{1, 2\}$
and
$k, \ell \in \{1, \ldots, d\}$
with
$k \ne \ell$
, by the Kunita–Watanabe inequality, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU42.png?pub-status=live)
Hence it is enough to check that
$[\langle {{\boldsymbol{{e}}}}_j, {\widetilde{{{\boldsymbol{{Y}}}}}}^{(\ell,\varepsilon)} \rangle]_t$
is stochastically bounded in
$\varepsilon \in \mathbb{R}_{++}$
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU43.png?pub-status=live)
for all
$t \in \mathbb{R}_+$
,
$j \in \{1, 2\}$
, and
$\ell \in \{1,\ldots, d\}$
. Indeed, in this case
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU44.png?pub-status=live)
and, by (4.2), for each
$t \in \mathbb{R}_+$
and
$k, \ell \in \{1, \ldots, d\}$
with
$k \ne \ell$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU45.png?pub-status=live)
almost surely. By the proof of Part (a) of Theorem II.1.33 in Jacod and Shiryaev [Reference Jacod and Shiryaev17],
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU46.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU47.png?pub-status=live)
Consequently, using that
$\|{{\boldsymbol{{z}}}} {{\boldsymbol{{z}}}}^\top\| \leqslant \|{{\boldsymbol{{z}}}}\|^2$
,
${{\boldsymbol{{z}}}} \in \mathbb{R}^2$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU48.png?pub-status=live)
for all
$\varepsilon \in \mathbb{R}_{++}$
and
$j\in\{1, 2\}$
, where the right-hand side is finite almost surely, since
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU49.png?pub-status=live)
Further,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU50.png?pub-status=live)
as
$\varepsilon \downarrow 0$
. Consequently, for each
$t \in \mathbb{R}_+$
and
$k, \ell \in \{1, \ldots, d\}$
with
$k \ne \ell$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU51.png?pub-status=live)
almost surely.
In a similar way,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU52.png?pub-status=live)
and
$[{\widetilde{{{\boldsymbol{{Y}}}}}}^{(\ell)}, {{\boldsymbol{{M}}}}^{(5)}]_t = {\boldsymbol{0}}$
,
$\ell \in \{1, \ldots, d\}$
, almost surely. Consequently, for each
$t \in \mathbb{R}_+$
, we have
$[{{\boldsymbol{{M}}}}^{(3,4)} + {{\boldsymbol{{M}}}}^{(5)}]_t = [{{\boldsymbol{{M}}}}^{(3,4)}]_t + [{{\boldsymbol{{M}}}}^{(5)}]_t$
with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU53.png?pub-status=live)
Since
$({{\boldsymbol{{M}}}}^{(2)}_t)_{t\in\mathbb{R}_+}$
is a continuous martingale and
$({{\boldsymbol{{M}}}}^{(3,4)}_t + {{\boldsymbol{{M}}}}^{(5)}_t)_{t\in\mathbb{R}_+}$
is a purely discontinuous martingale, by Corollary I.4.55 in Jacod and Shiryaev [Reference Jacod and Shiryaev17], we have
$[{{\boldsymbol{{M}}}}^{(2)}, {{\boldsymbol{{M}}}}^{(3,4)} + {{\boldsymbol{{M}}}}^{(5)}]_t = {\boldsymbol{0}}$
,
$t \in \mathbb{R}_+$
. Consequently,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU54.png?pub-status=live)
For each
$t \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU55.png?pub-status=live)
with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU56.png?pub-status=live)
for
$w \in \mathbb{R}_+$
and
${{\boldsymbol{{z}}}} \in \mathbb{R}^d$
. First, we show
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn16.png?pub-status=live)
For each
$t, T \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU57.png?pub-status=live)
with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU58.png?pub-status=live)
For each
$t, T \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU59.png?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU60.png?pub-status=live)
almost surely, since
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
has càdlàg sample paths (by Theorem 4.6 in Barczy et al. [Reference Barczy, Li and Pap5]). Then, using that
$\|{{\boldsymbol{{z}}}} {{\boldsymbol{{z}}}}^\top\| \leqslant \|{{\boldsymbol{{z}}}}\|^2$
,
${{\boldsymbol{{z}}}} \in \mathbb{R}^2$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn17.png?pub-status=live)
as
$t \to \infty$
. Hence, for each
$T \in \mathbb{R}_+$
, we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU61.png?pub-status=live)
almost surely. Moreover, for each
$t, T \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU62.png?pub-status=live)
almost surely, where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn18.png?pub-status=live)
Consequently, for each
$T \in \mathbb{R}_+$
, we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU63.png?pub-status=live)
almost surely. Letting
$T \to \infty$
, by Theorem 3.3 in Barczy et al. [Reference Barczy, Palau and Pap8] (which can be used, since the moment condition (3.3) yields the moment condition (3.1) with
$\lambda = s({\widetilde{{{\boldsymbol{{B}}}}}})$
), we obtain (4.3). Moreover,
$\int_0^t {{\boldsymbol{{f}}}}(w, {{\boldsymbol{{e}}}}_\ell) \, \mathrm{d} w \to \int_0^\infty {{\boldsymbol{{f}}}}(w, {{\boldsymbol{{e}}}}_\ell) \, \mathrm{d} w$
as
$t \to \infty$
, since we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU64.png?pub-status=live)
Consequently,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn19.png?pub-status=live)
Next, by Theorem 3.3 in Barczy et al. [Reference Barczy, Palau and Pap8], we show that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU65.png?pub-status=live)
as
$t \to \infty$
. Since
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn20.png?pub-status=live)
it is enough to show that for each
$\ell \in \{1, \ldots, d\}$
and
$i, j \in \{1, 2\}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn21.png?pub-status=live)
where
${{\boldsymbol{{f}}}}(w,{{\boldsymbol{{z}}}})\,=\!:\,(f_{i,j}(w,{{\boldsymbol{{z}}}}))_{i,j\in\{1,2\}}$
,
$w \in \mathbb{R}_+$
,
${{\boldsymbol{{z}}}} \in \mathbb{R}^d$
. For each
$t \in \mathbb{R}_+$
,
$\ell \in \{1, \ldots, d\}$
, and
$i, j \in \{1, 2\}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn22.png?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU66.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU67.png?pub-status=live)
Here, for each
$\ell \in \{1, \ldots, d\}$
and
$i, j \in \{1 ,2\}$
, using Ikeda and Watanabe [Reference Ikeda and Watanabe16, page 63], (2), and the fact that
$|{\operatorname{Re}}(a)| \leqslant |a|$
and
$|{\operatorname{Im}}(a)| \leqslant |a|$
for each
$a \in \mathbb{C}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn23.png?pub-status=live)
as
$t \to \infty$
. Indeed, if
$s({\widetilde{{{\boldsymbol{{B}}}}}}) \ne 4{\operatorname{Re}}(\lambda)$
, using that
$2{\operatorname{Re}}(\lambda)<s({\widetilde{{{\boldsymbol{{B}}}}}})$
, we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU68.png?pub-status=live)
as
$t \to \infty$
, since
$\int_{{\mathcal U}_d} \|{{\boldsymbol{{z}}}}\|^4 \, \mu_\ell(\mathrm{d}{{\boldsymbol{{z}}}}) < \infty$
. Otherwise, if
$s({\widetilde{{{\boldsymbol{{B}}}}}}) = 4 {\operatorname{Re}}(\lambda)$
, then we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU69.png?pub-status=live)
Further, for each
$\ell\in\{1, \ldots, d\}$
,
$i, j \in\{1, 2\}$
, and
$t, T \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn24.png?pub-status=live)
with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU70.png?pub-status=live)
By Theorem 3.3 in Barczy et al. [Reference Barczy, Palau and Pap8], we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU71.png?pub-status=live)
and hence, similarly as in (4.4), for any
$T \in \mathbb{R}_+$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU72.png?pub-status=live)
as
$t \to \infty$
. Further, similarly as in (4.5), for each
$t, T \in \mathbb{R}_+$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU73.png?pub-status=live)
Consequently, for each
$T \in \mathbb{R}_+$
, we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU74.png?pub-status=live)
Letting
$T \to \infty$
, by Theorem 3.3 in Barczy et al. [Reference Barczy, Palau and Pap8], we have
$\lim_{t\to\infty} I_{t,2} = 0$
, as desired. All in all,
$\lim_{t\to\infty} (I_{t,1} + I_{t,2}) = 0$
, yielding (4.8). Moreover, for each
$\ell \in \{1, \ldots, d\}$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU75.png?pub-status=live)
as
$t \to \infty$
, since we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU76.png?pub-status=live)
Consequently,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn25.png?pub-status=live)
Further,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU77.png?pub-status=live)
as
$t \to \infty$
, since if
${\operatorname{Re}}(\lambda) \in (-\infty, \frac{1}{2}s({\widetilde{{{\boldsymbol{{B}}}}}}))$
, then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn26.png?pub-status=live)
as
$t \to \infty$
. Indeed, if
${\operatorname{Re}}(\lambda) \ne 0$
, then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU78.png?pub-status=live)
as
$t \to \infty$
, and if
${\operatorname{Re}}(\lambda) = 0$
, then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU79.png?pub-status=live)
as
$t \to \infty$
. Consequently, by (4.6) and (4.12), we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn27.png?pub-status=live)
hence the condition (E.1) of Theorem E.1 holds. Indeed, for each
$a \in \mathbb{C}$
, we have the identity
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn28.png?pub-status=live)
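One elementary identity that is consistent with the way (4.15) is applied below, and which the reader may find convenient to keep in mind (a hedged companion formula, not necessarily (4.15) itself), is
\begin{equation*} \begin{pmatrix} ({\operatorname{Re}}(a))^2\;\;\; & {\operatorname{Re}}(a) {\operatorname{Im}}(a) \\[5pt] {\operatorname{Re}}(a) {\operatorname{Im}}(a)\;\;\; & ({\operatorname{Im}}(a))^2 \end{pmatrix} = \frac{1}{2} \begin{pmatrix} |a|^2 + {\operatorname{Re}}(a^2)\;\;\; & {\operatorname{Im}}(a^2) \\[5pt] {\operatorname{Im}}(a^2)\;\;\; & |a|^2 - {\operatorname{Re}}(a^2) \end{pmatrix}, \qquad a \in \mathbb{C}, \end{equation*}
which can be checked directly from ${\operatorname{Re}}(a^2) = ({\operatorname{Re}}(a))^2 - ({\operatorname{Im}}(a))^2$, ${\operatorname{Im}}(a^2) = 2 {\operatorname{Re}}(a) {\operatorname{Im}}(a)$, and $|a|^2 = ({\operatorname{Re}}(a))^2 + ({\operatorname{Im}}(a))^2$.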
Hence, for each
$\ell \in \{1, \ldots, d\}$
, applying (4.15) with
$a=\mathrm{e}^{-(s({\widetilde{{{\boldsymbol{{B}}}}}})- 2\lambda)w/2} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{e}}}}_\ell\rangle$
and using the linearity of the integral, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU80.png?pub-status=live)
and similarly
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU81.png?pub-status=live)
yielding (4.14). Note that
$ {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$
is non-negative definite irrespective of whether
$\widetilde{{\boldsymbol{\beta}}} \ne {\boldsymbol{0}}$
or
$\widetilde{{\boldsymbol{\beta}}} = {\boldsymbol{0}}$
, since
${{\boldsymbol{{c}}}} \in \mathbb{R}_+^d$
,
${\widetilde{{{\boldsymbol{{u}}}}}} \in \mathbb{R}_{++}^d$
, and
${{\boldsymbol{{f}}}}(w,{{\boldsymbol{{z}}}})$
is non-negative definite for any
$w \in \mathbb{R}_+$
and
${{\boldsymbol{{z}}}} \in \mathbb{R}^d$
.
Step 3. Now we turn to proving that the condition (E.2) of Theorem E.1 holds for
$({{\boldsymbol{{M}}}}_t)_{t\in\mathbb{R}_+}$
with the scaling
${{\boldsymbol{{Q}}}}(t)$
,
$t \in \mathbb{R}_+$
; that is,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU82.png?pub-status=live)
Since
$({{\boldsymbol{{M}}}}^{(2)}_t)_{t\in\mathbb{R}_+}$
has continuous sample paths, for each
$t \in \mathbb{R}_+$
we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn29.png?pub-status=live)
almost surely. Since
${{\boldsymbol{{Q}}}}(t) {{\boldsymbol{{Q}}}}(t)^\top = \mathrm{e}^{-(s({\widetilde{{{\boldsymbol{{B}}}}}})-2{\operatorname{Re}}(\lambda))t} {{\boldsymbol{{I}}}}_2$
,
$t \in \mathbb{R}_+$
, we have
$\|{{\boldsymbol{{Q}}}}(t)\| = \mathrm{e}^{-(s({\widetilde{{{\boldsymbol{{B}}}}}})-2{\operatorname{Re}}(\lambda))t/2}$
,
$t \in \mathbb{R}_+$
. Hence it is enough to show that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn30.png?pub-status=live)
for every
$\ell \in \{1, \ldots, d\}$
, and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn31.png?pub-status=live)
First, we prove (4.17) for each
$\ell \in \{1, \ldots, d\}$
. By the Cauchy–Schwarz inequality, for each
$\varepsilon \in \mathbb{R}_{++}$
,
$t \in \mathbb{R}_+$
, and
$\ell \in \{1, \ldots, d\}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn32.png?pub-status=live)
Here, by (2), for each
$\varepsilon \in \mathbb{R}_{++}$
,
$t \in \mathbb{R}_+$
, and
$\ell \in \{1, \ldots, d\}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn33.png?pub-status=live)
Hence, by (4.10) and
$2{\operatorname{Re}}(\lambda)<s({\widetilde{{{\boldsymbol{{B}}}}}})$
, for each
$\varepsilon \in \mathbb{R}_{++}$
and
$\ell \in \{1, \ldots, d\}$
, we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn34.png?pub-status=live)
Further, since
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU83.png?pub-status=live)
by the proof of Part (a) of Theorem II.1.33 in Jacod and Shiryaev [Reference Jacod and Shiryaev17], we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn35.png?pub-status=live)
Hence, by (4.19) and (4.21), for all
$\varepsilon \in \mathbb{R}_{++}$
and
$\ell \in \{1, \ldots, d\}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU84.png?pub-status=live)
which tends to 0 as
$\varepsilon \downarrow 0$
by (3.8). Hence we conclude (4.17) for each
$\ell \in \{1, \ldots, d\}$
.
Next, we prove (4.18). By the Cauchy–Schwarz inequality, for each
$t \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn36.png?pub-status=live)
so it is enough to prove that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU85.png?pub-status=live)
Since
$\int_{{\mathcal U}_d} \|{{\boldsymbol{{r}}}}\| \, \nu(\mathrm{d} {{\boldsymbol{{r}}}}) < \infty$
, for each
$t \in \mathbb{R}_+$
we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU86.png?pub-status=live)
with
$Z^*_t \,:\!=\, \int_0^t \int_{{\mathcal U}_d} \mathrm{e}^{-\lambda u} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \, M(\mathrm{d} u, \mathrm{d}{{\boldsymbol{{r}}}})$
; therefore
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn37.png?pub-status=live)
and so by (4.13), we conclude (4.18). Consequently, by Theorem E.1, we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU87.png?pub-status=live)
where
${{\boldsymbol{{N}}}}$
is a 2-dimensional random vector with
${{\boldsymbol{{N}}}} \stackrel{{\mathcal D}}{=} {\mathcal N}_2({\boldsymbol{0}}, {{\boldsymbol{{I}}}}_2)$
independent of
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$
. Clearly,
$(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}})^{1/2} {{\boldsymbol{{N}}}} = \sqrt{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}} \, {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}^{1/2} {{\boldsymbol{{N}}}} \stackrel{{\mathcal D}}{=} \sqrt{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}} {{\boldsymbol{{Z}}}}_{{\boldsymbol{{v}}}}$
. By the decomposition
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU88.png?pub-status=live)
the convergence (4.1), and Slutsky’s lemma (see, e.g., van der Vaart [Reference Van der Vaart24, Lemma 2.8]), we obtain (3.6).
Proof of Part (ii) of Theorem 3.1. This is similar to the proof of Part (iii) of Theorem 3.1. For details, see our ArXiv preprint [Reference Barczy, Palau and Pap9].
Proof of Theorem 3.2. First, suppose that the conditions (i) and (ii) hold. In the special case of
${{\boldsymbol{{X}}}}_0 \stackrel{{\mathrm{a.s.}}}{=} {\boldsymbol{0}}$
, applying Lemma A.1 with
$T = 1$
, we have
${{\boldsymbol{{X}}}}_{t+1} \stackrel{{\mathcal D}}{=} {{\boldsymbol{{X}}}}_t^{(1)} + {{\boldsymbol{{X}}}}_t^{(2,1)}$
for each
$t \in \mathbb{R}_+$
, where
$({{\boldsymbol{{X}}}}_s^{(1)})_{s\in\mathbb{R}_+}$
and
$({{\boldsymbol{{X}}}}_s^{(2,1)})_{s\in\mathbb{R}_+}$
are independent multi-type CBI processes with
${{\boldsymbol{{X}}}}_0^{(1)} \stackrel{{\mathrm{a.s.}}}{=} {\boldsymbol{0}}$
,
${{\boldsymbol{{X}}}}_0^{(2,1)} \stackrel{{\mathcal D}}{=} {{\boldsymbol{{X}}}}_1$
, and with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
and
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{0}}, {{\boldsymbol{{B}}}}, 0, \boldsymbol{\mu})$
, respectively. Without loss of generality, we may and do suppose that
$({{\boldsymbol{{X}}}}_s)_{s\in\mathbb{R}_+}$
,
$({{\boldsymbol{{X}}}}_s^{(1)})_{s\in\mathbb{R}_+}$
, and
$({{\boldsymbol{{X}}}}_s^{(2,1)})_{s\in\mathbb{R}_+}$
are independent. Then, for each
$t \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU89.png?pub-status=live)
By (3.2), we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU90.png?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU91.png?pub-status=live)
respectively denote the almost sure limits of
$\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t^{(1)}\rangle$
and
$\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t^{(2,1)}\rangle$
as
$t \to \infty$
. Since, for each
$t \in \mathbb{R}_+$
, we have
${{\boldsymbol{{X}}}}_t^{(1)} \stackrel{{\mathcal D}}{=} {{\boldsymbol{{X}}}}_t$
, we conclude
$w_{{{\boldsymbol{{v}}}},{\boldsymbol{0}}}^{(1)} \stackrel{{\mathcal D}}{=} w_{{{\boldsymbol{{v}}}},{\boldsymbol{0}}}$
. The independence of
$({{\boldsymbol{{X}}}}_s)_{s\in\mathbb{R}_+}$
and
$({{\boldsymbol{{X}}}}_s^{(2,1)})_{s\in\mathbb{R}_+}$
implies the independence of
$w_{{{\boldsymbol{{v}}}},{\boldsymbol{0}}}$
and
$w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0^{(2,1)}}^{(2,1)}$
; hence
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU92.png?pub-status=live)
Taking the real and imaginary parts, we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU93.png?pub-status=live)
which is a 2-dimensional stochastic fixed point equation. We are going to apply Corollary C.1. We have
$\det({\boldsymbol{{A}}}) = ({\operatorname{Re}}(\mathrm{e}^{-\lambda}))^2 + ({\operatorname{Im}}(\mathrm{e}^{-\lambda}))^2 = |\mathrm{e}^{-\lambda}|^2 = \mathrm{e}^{-2{\operatorname{Re}}(\lambda)} \ne 0$
. The eigenvalues of the matrix
${\boldsymbol{{A}}}$
are
$\mathrm{e}^{-\lambda}$
and
$\mathrm{e}^{-\overline{\lambda}}$
, so the spectral radius of
${\boldsymbol{{A}}}$
is
$r({\boldsymbol{{A}}}) = \mathrm{e}^{-{\operatorname{Re}}(\lambda)} \in (0, 1)$
. Next we check that
${\boldsymbol{{A}}} {{\boldsymbol{{C}}}}$
is not deterministic. Suppose that, on the contrary,
${\boldsymbol{{A}}} {{\boldsymbol{{C}}}}$
is deterministic. Then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU94.png?pub-status=live)
is deterministic, since
${\boldsymbol{{A}}}$
is invertible. By Lemma 2.6 in Barczy et al. [Reference Barczy, Palau and Pap8], the process
$(\mathrm{e}^{-s{\widetilde{{{\boldsymbol{{B}}}}}}} {{\boldsymbol{{X}}}}_s^{(2,1)})_{s\in\mathbb{R}_+}$
is a d-dimensional martingale with respect to the filtration
${\mathcal F}^{{{\boldsymbol{{X}}}}^{(2,1)}}_s \,:\!=\, \sigma({{\boldsymbol{{X}}}}_u^{(2,1)} : u \in [0, s])$
,
$s \in \mathbb{R}_+$
; hence
$(\mathrm{e}^{-\lambda s} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_s^{(2,1)}\rangle)_{s\in\mathbb{R}_+}$
is a complex martingale with respect to the same filtration. By (3.2), we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU95.png?pub-status=live)
as
$s \to \infty$
in
$L_1$
and almost surely, so
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU96.png?pub-status=live)
almost surely; see, e.g., Karatzas and Shreve [Reference Karatzas and Shreve20, Chapter I, Problem 3.20]. Thus
$\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0^{(2,1)}\rangle$
is deterministic as well. Then
$\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_1\rangle$
is also deterministic since
${{\boldsymbol{{X}}}}_1 \stackrel{{\mathcal D}}{=} {{\boldsymbol{{X}}}}_0^{(2,1)}$
. However, applying Lemma B.1 to the process
$({{\boldsymbol{{X}}}}_s)_{s\in\mathbb{R}_+}$
, we obtain that
$\langle {{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_1\rangle$
is not deterministic, since the condition (i) of this theorem implies that the process
$({{\boldsymbol{{X}}}}_s)_{s\in\mathbb{R}_+}$
is non-trivial, and the condition (ii) of this theorem yields that the condition (ii)/(b) of Lemma B.1 does not hold. Thus we get a contradiction, and we conclude that
${\boldsymbol{{A}}} {{\boldsymbol{{C}}}}$
is not deterministic. Moreover, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU97.png?pub-status=live)
see (3.2). Applying Corollary C.1, we conclude that the distribution of
$w_{{{\boldsymbol{{v}}}},{\boldsymbol{0}}}$
does not have atoms. In particular, we obtain
${\mathbb{P}}(w_{{{\boldsymbol{{v}}}},{\boldsymbol{0}}} = 0) = 0$
.
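For completeness, here is a quick check of the eigenvalue computation used above, assuming (as the determinant formula and the passage to real and imaginary parts suggest) that ${\boldsymbol{{A}}}$ is the $2 \times 2$ matrix with rows $({\operatorname{Re}}(\mathrm{e}^{-\lambda}), -{\operatorname{Im}}(\mathrm{e}^{-\lambda}))$ and $({\operatorname{Im}}(\mathrm{e}^{-\lambda}), {\operatorname{Re}}(\mathrm{e}^{-\lambda}))$:
\begin{equation*} \det({\boldsymbol{{A}}} - \zeta {{\boldsymbol{{I}}}}_2) = \zeta^2 - 2 {\operatorname{Re}}(\mathrm{e}^{-\lambda}) \zeta + |\mathrm{e}^{-\lambda}|^2 = (\zeta - \mathrm{e}^{-\lambda})(\zeta - \mathrm{e}^{-\overline{\lambda}}), \qquad \zeta \in \mathbb{C}, \end{equation*}
so the eigenvalues of ${\boldsymbol{{A}}}$ are $\mathrm{e}^{-\lambda}$ and $\mathrm{e}^{-\overline{\lambda}}$, both of modulus $\mathrm{e}^{-{\operatorname{Re}}(\lambda)}$, and ${\operatorname{Re}}(\lambda) \in \mathbb{R}_{++}$ in the present setting gives $r({\boldsymbol{{A}}}) = \mathrm{e}^{-{\operatorname{Re}}(\lambda)} \in (0, 1)$.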
If the conditions (i) and (ii) hold, but
${{\boldsymbol{{X}}}}_0 \stackrel{{\mathrm{a.s.}}}{=} {\boldsymbol{0}}$
does not necessarily hold, then we apply Lemma A.1 with
$T = 0$
, and we obtain that
${{\boldsymbol{{X}}}}_t \stackrel{{\mathcal D}}{=} {{\boldsymbol{{X}}}}_t^{(1)} + {{\boldsymbol{{X}}}}_t^{(2,0)}$
for each
$t \in \mathbb{R}_+$
, where
$({{\boldsymbol{{X}}}}_s^{(1)})_{s\in\mathbb{R}_+}$
and
$({{\boldsymbol{{X}}}}_s^{(2,0)})_{s\in\mathbb{R}_+}$
are independent multi-type CBI processes with
${{\boldsymbol{{X}}}}_0^{(1)} \stackrel{{\mathrm{a.s.}}}{=} {\boldsymbol{0}}$
,
${{\boldsymbol{{X}}}}_0^{(2,0)} \stackrel{{\mathcal D}}{=} {{\boldsymbol{{X}}}}_0$
, and with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
and
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{0}}, {{\boldsymbol{{B}}}}, 0, \boldsymbol{\mu})$
, respectively. Then, for each
$t \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU98.png?pub-status=live)
By (3.2), we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU99.png?pub-status=live)
where
$w_{{{\boldsymbol{{v}}}},{\boldsymbol{0}}}^{(1)}$
and
$w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0^{(2,0)}}^{(2,0)}$
denote the almost sure limits of
$\mathrm{e}^{-\lambda s} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_s^{(1)}\rangle$
and of
$\mathrm{e}^{-\lambda s} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_s^{(2,0)}\rangle$
, respectively, as
$s \to \infty$
. The independence of
$({{\boldsymbol{{X}}}}_s^{(1)})_{s\in\mathbb{R}_+}$
and
$({{\boldsymbol{{X}}}}_s^{(2,0)})_{s\in\mathbb{R}_+}$
implies the independence of
$w_{{{\boldsymbol{{v}}}},{\boldsymbol{0}}}^{(1)}$
and
$w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0^{(2,0)}}^{(2,0)}$
. We have already shown that
$w_{{{\boldsymbol{{v}}}},{\boldsymbol{0}}}^{(1)} \stackrel{{\mathcal D}}{=} w_{{{\boldsymbol{{v}}}},{\boldsymbol{0}}}$
does not have atoms, which implies that
$w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0}$
does not have atoms, since for each
$z \in \mathbb{C}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU100.png?pub-status=live)
In particular, we obtain
${\mathbb{P}}(w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0} = 0) = 0$
.
If the condition (ii) does not hold, then, as in the proof that Part (ii)
$\Longrightarrow$
Part (iii) in Lemma B.1, we obtain that in the representation (B.1) of
$\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
, the terms
$Z_t^{(2)}$
,
$Z_t^{(3,4)}$
, and
$Z_t^{(5)}$
are 0 almost surely, so
$\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle = \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle + \langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle \int_0^t \mathrm{e}^{- \lambda u} \, \mathrm{d} u$
for all
$t \in \mathbb{R}_+$
almost surely, and hence, taking the limit as
$t\to\infty$
, we have
$w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0} = \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle + \lambda^{-1} \langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle$
almost surely.
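The limit used in the last step is elementary. If $\langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle = 0$, the formula reduces to $w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0} = \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle$ and nothing needs to be checked; if $\langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle \ne 0$, the existence of the almost sure limit $w_{{{\boldsymbol{{v}}}},{{\boldsymbol{{X}}}}_0}$ in (3.2) forces the deterministic integral to converge, which happens precisely when ${\operatorname{Re}}(\lambda) \in \mathbb{R}_{++}$, and then
\begin{equation*} \int_0^t \mathrm{e}^{-\lambda u} \, \mathrm{d} u = \frac{1 - \mathrm{e}^{-\lambda t}}{\lambda} \to \frac{1}{\lambda} \qquad \text{as $t \to \infty$,} \end{equation*}
since $|\mathrm{e}^{-\lambda t}| = \mathrm{e}^{-{\operatorname{Re}}(\lambda)t} \to 0$; this is the source of the term $\lambda^{-1} \langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle$ above.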
If
$\lambda = s({\widetilde{{{\boldsymbol{{B}}}}}})$
,
${{\boldsymbol{{v}}}} = {{\boldsymbol{{u}}}}$
, and the conditions (i) and (ii) hold, then we have already derived
${\mathbb{P}}(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} = 0) = 0$
.
If
$\lambda = s({\widetilde{{{\boldsymbol{{B}}}}}})$
,
${{\boldsymbol{{v}}}} = {{\boldsymbol{{u}}}}$
, and the condition (i) holds but the condition (ii) does not hold, then we have already derived
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU101.png?pub-status=live)
and this probability is 0, since
${{\boldsymbol{{u}}}} \in \mathbb{R}^d_{++}$
,
${\mathbb{P}}({{\boldsymbol{{X}}}}_0 \in \mathbb{R}^d_+) = 1$
,
$s({\widetilde{{{\boldsymbol{{B}}}}}}) \in \mathbb{R}_{++}$
, and
$\widetilde{{\boldsymbol{\beta}}} \in \mathbb{R}^d_+\setminus \{{\boldsymbol{0}}\}$
, yielding that
$\langle{{\boldsymbol{{u}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle > 0$
.
If
$\lambda = s({\widetilde{{{\boldsymbol{{B}}}}}})$
,
${{\boldsymbol{{v}}}} = {{\boldsymbol{{u}}}}$
, and the conditions (i) and (ii) do not hold, then we have already derived
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU102.png?pub-status=live)
and this equals
${\mathbb{P}}({{\boldsymbol{{X}}}}_0 = {\boldsymbol{0}})$
, since
${{\boldsymbol{{u}}}} \in \mathbb{R}^d_{++}$
and
${\mathbb{P}}({{\boldsymbol{{X}}}}_0 \in \mathbb{R}^d_+) = 1$
.
Proof of Lemma 3.1. Note that
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} \stackrel{{\mathrm{a.s.}}}{=} 0$
if and only if
${\mathbb{E}}(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}) = 0$
. By (3.2), we have
$\mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t} \langle{{\boldsymbol{{u}}}}, {{\boldsymbol{{X}}}}_t\rangle \stackrel{L_1}{\longrightarrow} w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}$
as
$t \to \infty$
. By (2.2), we obtain
${\mathbb{E}}({{\boldsymbol{{X}}}}_t) = \mathrm{e}^{t{\widetilde{{{\boldsymbol{{B}}}}}}} {\mathbb{E}}({{\boldsymbol{{X}}}}_0) + \int_0^t \mathrm{e}^{u{\widetilde{{{\boldsymbol{{B}}}}}}} \widetilde{{\boldsymbol{\beta}}} \, \mathrm{d} u$
,
$t \in \mathbb{R}_+$
, so
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU103.png?pub-status=live)
as
$t \to \infty$
, where
${{\boldsymbol{{u}}}} \in \mathbb{R}_{++}^d$
and
$s({\widetilde{{{\boldsymbol{{B}}}}}}) > 0$
. Thus
${\mathbb{E}}(w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}) = 0$
if and only if
${{\boldsymbol{{X}}}}_0 \stackrel{{\mathrm{a.s.}}}{=} {\boldsymbol{0}}$
and
$\widetilde{{\boldsymbol{\beta}}} = {\boldsymbol{0}}$
.
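For completeness, here is a hedged sketch of the computation behind the preceding display, assuming that ${{\boldsymbol{{u}}}}$ denotes a left Perron eigenvector of ${\widetilde{{{\boldsymbol{{B}}}}}}$, so that ${{\boldsymbol{{u}}}}^\top \mathrm{e}^{t{\widetilde{{{\boldsymbol{{B}}}}}}} = \mathrm{e}^{s({\widetilde{{{\boldsymbol{{B}}}}}})t} {{\boldsymbol{{u}}}}^\top$, $t \in \mathbb{R}_+$:
\begin{align*} \mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t} \langle{{\boldsymbol{{u}}}}, {\mathbb{E}}({{\boldsymbol{{X}}}}_t)\rangle &= \mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t} \Bigl(\mathrm{e}^{s({\widetilde{{{\boldsymbol{{B}}}}}})t} \langle{{\boldsymbol{{u}}}}, {\mathbb{E}}({{\boldsymbol{{X}}}}_0)\rangle + \langle{{\boldsymbol{{u}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle \int_0^t \mathrm{e}^{s({\widetilde{{{\boldsymbol{{B}}}}}})u} \, \mathrm{d} u\Bigr) \\[5pt] &\to \langle{{\boldsymbol{{u}}}}, {\mathbb{E}}({{\boldsymbol{{X}}}}_0)\rangle + \frac{\langle{{\boldsymbol{{u}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle}{s({\widetilde{{{\boldsymbol{{B}}}}}})} \qquad \text{as $t \to \infty$.} \end{align*}
Both terms of the limit are non-negative, since ${{\boldsymbol{{u}}}} \in \mathbb{R}_{++}^d$, ${\mathbb{P}}({{\boldsymbol{{X}}}}_0 \in \mathbb{R}_+^d) = 1$, and $\widetilde{{\boldsymbol{\beta}}} \in \mathbb{R}_+^d$, so the limit vanishes if and only if ${{\boldsymbol{{X}}}}_0 \stackrel{{\mathrm{a.s.}}}{=} {\boldsymbol{0}}$ and $\widetilde{{\boldsymbol{\beta}}} = {\boldsymbol{0}}$, in accordance with the statement.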
Proof of Part (i) of Theorem 3.3. By Theorem 3.3 in Barczy et al. [Reference Barczy, Palau and Pap8], we have
$\mathrm{e}^{-s({\widetilde{{{\boldsymbol{{B}}}}}})t} {{\boldsymbol{{X}}}}_t \stackrel{{\mathrm{a.s.}}}{\longrightarrow} w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} {\widetilde{{{\boldsymbol{{u}}}}}}$
as
$t \to \infty$
; hence
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU104.png?pub-status=live)
as
$t \to \infty$
on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
, since
${\widetilde{{{\boldsymbol{{u}}}}}} \in \mathbb{R}_{++}^d$
. By (3.2), we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU105.png?pub-status=live)
as
$t \to \infty$
. Using that
$\langle{{\boldsymbol{{u}}}}, {{\boldsymbol{{X}}}}_t\rangle \ne 0$
if and only if
${{\boldsymbol{{X}}}}_t \ne {\boldsymbol{0}}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU106.png?pub-status=live)
as
$t \to \infty$
on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
, as desired.
Proof of Part (iii) of Theorem 3.3. First, note that the moment condition (3.3) implies the moment condition (3.1) with
$\lambda=s({\widetilde{{{\boldsymbol{{B}}}}}})$
, so, by Lemma 3.1,
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} \stackrel{{\mathrm{a.s.}}}{=} 0$
if and only if
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is trivial. For each
$t \in \mathbb{R}_+$
, we have the decomposition
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU107.png?pub-status=live)
on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
. As we have seen in the proof of Part (i) of Theorem 3.3, we have
${\mathbb{1}}_{\{{{\boldsymbol{{X}}}}_t\ne{\boldsymbol{0}}\}} \to 1$
as
$t \to \infty$
on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
, and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU108.png?pub-status=live)
as
$t\to\infty$
. In the case
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}} = {\boldsymbol{0}}$
, (3.6) yields
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU109.png?pub-status=live)
hence, using the above decomposition, by Slutsky’s lemma we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU110.png?pub-status=live)
In the case
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}} \ne {\boldsymbol{0}}$
, as in the proof of (3.4), we may apply Theorem E.1 to obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU111.png?pub-status=live)
as
$t \to \infty$
, where
${{\boldsymbol{{N}}}}$
is a 2-dimensional random vector with
${{\boldsymbol{{N}}}} \stackrel{{\mathcal D}}{=} {\mathcal N}_2({\boldsymbol{0}}, {{\boldsymbol{{I}}}}_2)$
independent of
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$
, and hence independent of
$w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0}$
because
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}} \ne {\boldsymbol{0}}$
and
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$
is deterministic. Applying the continuous mapping theorem, we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU112.png?pub-status=live)
Hence, again using the above decomposition, by Slutsky’s lemma and (3.2), we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU113.png?pub-status=live)
where
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}^{1/2} {{\boldsymbol{{N}}}} \stackrel{{\mathcal D}}{=} {\mathcal N}_2({\boldsymbol{0}}, {\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}})$
, as desired.
Proof of Part (ii) of Theorem 3.3. This is similar to the proof of Part (iii) of Theorem 3.3. For details, see our ArXiv preprint [Reference Barczy, Palau and Pap9].
Proof of Proposition 3.1. Theorem 3.3 in Barczy et al. [Reference Barczy, Palau and Pap8] yields that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU114.png?pub-status=live)
as
$t \to \infty$
. Consequently, since
${\widetilde{{{\boldsymbol{{u}}}}}} \in \mathbb{R}_{++}^d$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU115.png?pub-status=live)
as
$t \to \infty$
on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
, and hence
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU116.png?pub-status=live)
as
$t \to \infty$
on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
; thus we obtain the first convergence. In a similar way,
${\mathbb{1}}_{\{{{\boldsymbol{{X}}}}_t\ne{\boldsymbol{0}}\}} \to 1$
as
$t \to \infty$
on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
, so
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU117.png?pub-status=live)
as
$t \to \infty$
on the event
$\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$
, since the sum of the coordinates of
${\widetilde{{{\boldsymbol{{u}}}}}}$
is 1. Thus we obtain the second convergence.
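As a concrete illustration of Proposition 3.1, written out for $d = 2$ (a hedged reading of the second convergence just proved): on the event $\{w_{{{\boldsymbol{{u}}}},{{\boldsymbol{{X}}}}_0} > 0\}$,
\begin{equation*} {\mathbb{1}}_{\{{{\boldsymbol{{X}}}}_t\ne{\boldsymbol{0}}\}} \frac{X_{t,1}}{X_{t,1} + X_{t,2}} \to \widetilde{u}_1 \qquad \text{as $t \to \infty$,} \end{equation*}
where $\widetilde{u}_1$ denotes the first coordinate of ${\widetilde{{{\boldsymbol{{u}}}}}}$ (recall that the coordinates of ${\widetilde{{{\boldsymbol{{u}}}}}}$ sum to 1); that is, the relative frequency of type-1 individuals stabilizes at the corresponding coordinate of ${\widetilde{{{\boldsymbol{{u}}}}}}$.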
Appendix A. A decomposition of multi-type CBI processes
The following useful decomposition of a multi-type CBI process as an independent sum of a CBI process starting from
${\boldsymbol{0}}$
and a CB process is derived in Barczy et al. [Reference Barczy, Palau and Pap8, Lemma A.1].
Lemma A.1. If
$({{\boldsymbol{{X}}}}_s)_{s\in\mathbb{R}_+}$
is a multi-type CBI process having parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
, then for each
$t, T \in \mathbb{R}_+$
, we have
${{\boldsymbol{{X}}}}_{t+T} \stackrel{{\mathcal D}}{=} {{\boldsymbol{{X}}}}_t^{(1)} + {{\boldsymbol{{X}}}}_t^{(2,T)}$
, where
$({{\boldsymbol{{X}}}}_s^{(1)})_{s\in\mathbb{R}_+}$
and
$({{\boldsymbol{{X}}}}_s^{(2,T)})_{s\in\mathbb{R}_+}$
are independent multi-type CBI processes with
${\mathbb{P}}({{\boldsymbol{{X}}}}_0^{(1)} = {\boldsymbol{0}}) = 1$
,
${{\boldsymbol{{X}}}}_0^{(2,T)} \stackrel{{\mathcal D}}{=} {{\boldsymbol{{X}}}}_T$
, and with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
and
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{0}}, {{\boldsymbol{{B}}}}, 0, \boldsymbol{\mu})$
, respectively.
Appendix B. On deterministic projections of multi-type CBI processes
Lemma B.1. Let
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
be an irreducible multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|) < \infty$
and the moment conditions (2.1) and (3.8) hold. Let
$\lambda \in \sigma({\widetilde{{{\boldsymbol{{B}}}}}})$
, and let
${{\boldsymbol{{v}}}} \in \mathbb{C}^d$
be a left eigenvector of
${\widetilde{{{\boldsymbol{{B}}}}}}$
corresponding to the eigenvalue
$\lambda$
. Then the following three assertions are equivalent:
-
(i) There exists
$t \in \mathbb{R}_{++}$ such that
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$ is deterministic.
-
(ii) One of the following two conditions holds:
-
(a)
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$ is a trivial process (see Definition 2.4).
-
(b)
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle$ is deterministic,
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{e}}}}_\ell\rangle c_\ell = 0$ and
$\mu_\ell(\{{{\boldsymbol{{z}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{z}}}}\rangle \ne 0\}) = 0$ for every
$\ell \in \{1, \ldots, d\}$ , and
$\nu(\{{{\boldsymbol{{r}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \ne 0\}) = 0$ .
-
(iii) For each
$t \in \mathbb{R}_+$ ,
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$ is deterministic.
If
$(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle)_{t\in\mathbb{R}_+}$
is deterministic, then
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle \stackrel{{\mathrm{a.s.}}}{=} \mathrm{e}^{\lambda t} \langle{{\boldsymbol{{v}}}}, {\mathbb{E}}({{\boldsymbol{{X}}}}_0)\rangle + \langle{{\boldsymbol{{v}}}}, \widetilde{{\boldsymbol{\beta}}}\rangle \int_0^t \mathrm{e}^{\lambda u} \, \mathrm{d} u$
,
$t \in \mathbb{R}_+$
.
Proof. Proof that (i)
$\Longrightarrow$
(ii): We have the representation
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn38.png?pub-status=live)
with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU118.png?pub-status=live)
see Barczy et al. [Reference Barczy, Li and Pap7, Lemma 4.1] or Barczy et al. [Reference Barczy, Palau and Pap8, Lemma 2.7]. Note that under the moment condition (3.8),
$(Z_t^{(2)})_{t\in\mathbb{R}_+}$
,
$(Z_t^{(3,4)})_{t\in\mathbb{R}_+}$
and
$(Z_t^{(5)})_{t\in\mathbb{R}_+}$
are square-integrable martingales with initial values 0; hence
${\mathbb{E}}(Z_t^{(2)}) = {\mathbb{E}}(Z_t^{(3,4)}) = {\mathbb{E}}(Z_t^{(5)}) = 0$
. Since
$\mathrm{e}^{-\lambda t} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
and
$Z_t^{(1)}$
are deterministic, we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU119.png?pub-status=live)
Hence, by the representation (B.1), we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU120.png?pub-status=live)
almost surely. Consequently,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU121.png?pub-status=live)
By the independence of
${{\boldsymbol{{X}}}}_0$
,
$(W_{u,1})_{u\geqslant 0}$
, …,
$(W_{u,d})_{u\geqslant 0}$
,
$N_1$
, …,
$N_d$
, and M, the random variables
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle - {\mathbb{E}}(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle)$
,
$Z_t^{(2)}$
,
$Z_t^{(3,4)}$
, and
$Z_t^{(5)}$
are conditionally independent with respect to
$({{\boldsymbol{{X}}}}_u)_{u\in[0,t]}$
; thus, by conditioning on
$({{\boldsymbol{{X}}}}_u)_{u\in[0,t]}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU122.png?pub-status=live)
where we also used that
$(Z_s^{(2)})_{s\in[0,t]}$
,
$(Z_s^{(3,4)})_{s\in[0,t]}$
, and
$(Z_s^{(5)})_{s\in[0,t]}$
are square-integrable martingales with initial values 0 conditionally on
$({{\boldsymbol{{X}}}}_u)_{u\in[0,t]}$
. Consequently,
${\mathbb{E}}(|\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle - {\mathbb{E}}(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle)|^2 ) = 0$
and
${\mathbb{E}}(|Z_t^{(2)}|^2) = {\mathbb{E}}(|Z_t^{(3,4)}|^2) = {\mathbb{E}}(|Z_t^{(5)}|^2) = 0$
. One can easily derive
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU123.png?pub-status=live)
so we conclude
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU124.png?pub-status=live)
Consequently, for each
$\ell \in \{1, \ldots, d\}$
, we have
$|\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{e}}}}_\ell\rangle|^2 c_\ell = 0$
or
$\int_0^t \mathrm{e}^{-2{\operatorname{Re}}(\lambda)u} {\mathbb{E}}(X_{u,\ell}) \, \mathrm{d} u = 0$
. In the first case we obtain
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{e}}}}_\ell\rangle c_\ell = 0$
, which is in (ii)/(b). In the second case, using Lemma 2.1 and
$\mathrm{e}^{-2{\operatorname{Re}}(\lambda)u} \in \mathbb{R}_{++}$
for all
$u \in \mathbb{R}_+$
, we conclude (ii)/(a).
Since
${\mathbb{E}}(|Z_t^{(3,4)}|^2) = 0$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU125.png?pub-status=live)
Using
$\mathrm{e}^{-2{\operatorname{Re}}(\lambda)u} \in \mathbb{R}_{++}$
for all
$u \in \mathbb{R}_+$
, we conclude
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU126.png?pub-status=live)
Then, using the non-negativity of the integrands, we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU127.png?pub-status=live)
By Lemma 2.1, for each
$\ell \in \{1, \ldots, d\}$
, we have either (ii)/(a), or
${\mathbb{E}}(X_{u,\ell}) = {{\boldsymbol{{e}}}}_\ell^\top {\mathbb{E}}({{\boldsymbol{{X}}}}_u) \in \mathbb{R}_{++}$
for all
$u \in \mathbb{R}_{++}$
. In the second case, we conclude
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU128.png?pub-status=live)
and hence
$\mu_\ell(\{{{\boldsymbol{{z}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{z}}}}\rangle \ne 0\}) = 0$
, which is in (ii)/(b).
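A hedged remark on the last implication (assuming, as the preceding display and the stated conclusion suggest, that the second case yields $\int_{{\mathcal U}_d} |\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{z}}}}\rangle|^2 \, \mu_\ell(\mathrm{d}{{\boldsymbol{{z}}}}) = 0$): the integrand $|\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{z}}}}\rangle|^2$ is non-negative and strictly positive precisely on $\{{{\boldsymbol{{z}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{z}}}}\rangle \ne 0\}$, so its $\mu_\ell$-integral can vanish only if this set is $\mu_\ell$-null.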
Since
${\mathbb{E}}(|Z_t^{(5)}|^2) = 0$
, we have
$Z_t^{(5)} = 0$
almost surely. Hence the random variable
$\int_0^t \int_{{\mathcal U}_d} \mathrm{e}^{-\lambda u} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \, M(\mathrm{d} u, \mathrm{d}{{\boldsymbol{{r}}}})$
is deterministic, since
$\int_0^t \int_{{\mathcal U}_d} \mathrm{e}^{-\lambda u} \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \, \mathrm{d} u \, \nu(\mathrm{d}{{\boldsymbol{{r}}}})$
is deterministic. We have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU129.png?pub-status=live)
for all
$s \in [0,t]$
almost surely, where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU130.png?pub-status=live)
since
$(Z_s^{(5)})_{s\in\mathbb{R}_+}$
is a martingale. Thus
${\mathbb{P}}(A_t^{(M)}) = 1$
, where
$A_t^{(M)}$
is the event that the Poisson random measure M has no point in the set
$H_t$
, where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU131.png?pub-status=live)
since
$\mathrm{e}^{-\lambda u} \ne 0$
for all
$u \in \mathbb{R}_+$
The number of points of M in the set
$H_t$
has a Poisson distribution with parameter
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU132.png?pub-status=live)
We have
$1 = {\mathbb{P}}(A_t^{(M)}) = \mathrm{e}^{-\lambda_t}$
, yielding
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU133.png?pub-status=live)
and hence
$\nu(\{{{\boldsymbol{{r}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \ne 0\}) = 0$
, which is in (ii)/(b).
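To make this Poisson-measure step explicit (a hedged sketch; the exact forms of $H_t$ and of the Poisson parameter in the displays above are assumptions consistent with the surrounding text): since $\mathrm{e}^{-\lambda u} \ne 0$ for all $u \in \mathbb{R}_+$, one may take $H_t = \{(u, {{\boldsymbol{{r}}}}) \in [0,t] \times {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \ne 0\} = [0,t] \times \{{{\boldsymbol{{r}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \ne 0\}$, and, since M has intensity measure $\mathrm{d} u \, \nu(\mathrm{d}{{\boldsymbol{{r}}}})$ (as its compensator above indicates), the parameter is $\lambda_t = t \, \nu(\{{{\boldsymbol{{r}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \ne 0\})$. The identity $1 = \mathrm{e}^{-\lambda_t}$ forces $\lambda_t = 0$, and choosing any $t \in \mathbb{R}_{++}$ yields $\nu(\{{{\boldsymbol{{r}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \ne 0\}) = 0$.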
Proof that (ii)
$\Longrightarrow$
(iii): If (ii)/(a) holds, then
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle \stackrel{{\mathrm{a.s.}}}{=} 0$
for all
$t \in \mathbb{R}_+$
. If (ii)/(b) holds, then we use again the representation (B.1) of
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle$
. We have
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle = {\mathbb{E}}(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle) = \langle{{\boldsymbol{{v}}}}, {\mathbb{E}}({{\boldsymbol{{X}}}}_0)\rangle$
, since
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_0\rangle$
is deterministic. For each
$t \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU134.png?pub-status=live)
since
$\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{e}}}}_\ell\rangle c_\ell = 0$
for every
$\ell \in \{1, \ldots, d\}$
.
Further, for each
$t \in \mathbb{R}_+$
and
$n \in \mathbb{N}$
, using the notation
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU135.png?pub-status=live)
for
$u \in (0, t]$
,
${{\boldsymbol{{z}}}} \in {\mathcal U}_d$
, and
$w \in {\mathcal U}_1$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU136.png?pub-status=live)
almost surely, since
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU137.png?pub-status=live)
by Part (vi) of Definition 2.1, (3.8), and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU138.png?pub-status=live)
for each
$\ell \in \{1, \ldots, d\}$
, where
${\mathcal L}_1$
and
${\mathcal L}_d$
denote the Lebesgue measure on
$\mathbb{R}$
and on
$\mathbb{R}^d$
, respectively. Letting
$n \to \infty$
, by Ikeda and Watanabe [Reference Ikeda and Watanabe16, page 63], we conclude
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU139.png?pub-status=live)
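The principle invoked here may be worth recording in its simplest form (a hedged aside, not part of the cited argument): if N is a Poisson random measure with intensity measure m and compensated measure $\widetilde{N} = N - m$, then for square-integrable integrands the isometry ${\mathbb{E}}\bigl(\bigl|\int f \, \mathrm{d}\widetilde{N}\bigr|^2\bigr) = \int |f|^2 \, \mathrm{d} m$ shows that an integrand vanishing m-almost everywhere has almost surely vanishing compensated integral; the text appeals to Ikeda and Watanabe [Reference Ikeda and Watanabe16, page 63] for the corresponding statement in the present setting.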
Finally, for each
$t \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU140.png?pub-status=live)
almost surely, since
$\int_{{\mathcal U}_d} \|{{\boldsymbol{{r}}}}\| \, \nu(\mathrm{d}{{\boldsymbol{{r}}}}) < \infty$
(by Definition 2.1 and (2.1)) and
$\nu(\{{{\boldsymbol{{r}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{r}}}}\rangle \ne 0\}) = 0$
.
The implication (iii)
$\Longrightarrow$
(i) is trivial.
If
$(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle)_{t\in\mathbb{R}_+}$
is deterministic, then, by (2.2), for each
$t \in \mathbb{R}_+$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU141.png?pub-status=live)
almost surely.
Appendix C. A stochastic fixed point equation
Under some mild conditions, the solution of a stochastic fixed point equation is atomless; see, e.g., Buraczewski et al. [Reference Buraczewski, Damek and Mikosch11, Proposition 4.3.2].
Theorem C.1. Let
$({\boldsymbol{{A}}}, {{\boldsymbol{{C}}}})$
be a random element in
$\mathbb{R}^{d \times d} \times \mathbb{R}^d$
, where
$d \in \mathbb{N}$
. Assume that
- (i) ${\boldsymbol{{A}}}$ is invertible almost surely,
- (ii) ${\mathbb{P}}({\boldsymbol{{A}}} {{\boldsymbol{{x}}}} + {{\boldsymbol{{C}}}} = {{\boldsymbol{{x}}}}) < 1$ for every ${{\boldsymbol{{x}}}} \in \mathbb{R}^d$, and
- (iii) the d-dimensional fixed point equation ${{\boldsymbol{{X}}}} \stackrel{{\mathcal D}}{=} {\boldsymbol{{A}}} {{\boldsymbol{{X}}}} + {{\boldsymbol{{C}}}}$, where $({\boldsymbol{{A}}}, {{\boldsymbol{{C}}}})$ and ${{\boldsymbol{{X}}}}$ are independent, has a solution ${{\boldsymbol{{X}}}}$, which is unique in distribution.
Then the distribution of
${{\boldsymbol{{X}}}}$
does not have atoms and is of pure type, i.e., it is either absolutely continuous or singular with respect to Lebesgue measure in
$\mathbb{R}^d$
.
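A brief remark on condition (ii) (not part of the quoted theorem, but perhaps clarifying its role): if there were some ${{\boldsymbol{{x}}}}_0 \in \mathbb{R}^d$ with ${\mathbb{P}}({\boldsymbol{{A}}} {{\boldsymbol{{x}}}}_0 + {{\boldsymbol{{C}}}} = {{\boldsymbol{{x}}}}_0) = 1$, then the deterministic random vector ${{\boldsymbol{{X}}}} = {{\boldsymbol{{x}}}}_0$ would solve the fixed point equation and its distribution $\delta_{{{\boldsymbol{{x}}}}_0}$ would consist of a single atom, so condition (ii) rules out precisely this degenerate situation.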
Corollary C.1. Let
${\boldsymbol{{A}}} \in \mathbb{R}^{d \times d}$
with
$\det({\boldsymbol{{A}}}) \ne 0$
and
$r({\boldsymbol{{A}}}) < 1$
. Let
${{\boldsymbol{{C}}}}$
be a d-dimensional non-deterministic random vector with
${\mathbb{E}}(\|{{\boldsymbol{{C}}}}\|) < \infty$
. Then the d-dimensional fixed point equation
${{\boldsymbol{{X}}}} \stackrel{{\mathcal D}}{=} {\boldsymbol{{A}}} {{\boldsymbol{{X}}}} + {{\boldsymbol{{C}}}}$
, where
${{\boldsymbol{{X}}}}$
is independent of
${{\boldsymbol{{C}}}}$
, has a solution
${{\boldsymbol{{X}}}}$
which is unique in distribution, and the distribution of
${{\boldsymbol{{X}}}}$
does not have atoms and is of pure type, i.e., it is either absolutely continuous or singular with respect to Lebesgue measure in
$\mathbb{R}^d$
.
For a proof of Corollary C.1, see our ArXiv preprint [Reference Barczy, Palau and Pap9].
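For intuition, here is one standard way to exhibit a solution in the setting of Corollary C.1 (a hedged sketch, not the cited proof): let $({{\boldsymbol{{C}}}}_k)_{k \geqslant 0}$ be independent copies of ${{\boldsymbol{{C}}}}$. Since $r({\boldsymbol{{A}}}) < 1$, there exist $\kappa \in \mathbb{R}_{++}$ and $\varrho \in (r({\boldsymbol{{A}}}), 1)$ such that $\|{\boldsymbol{{A}}}^k\| \leqslant \kappa \varrho^k$ for all $k \geqslant 0$; hence $\sum_{k=0}^\infty {\mathbb{E}}(\|{\boldsymbol{{A}}}^k {{\boldsymbol{{C}}}}_k\|) \leqslant \kappa (1 - \varrho)^{-1} {\mathbb{E}}(\|{{\boldsymbol{{C}}}}\|) < \infty$, so the series ${{\boldsymbol{{X}}}} \,:\!=\, \sum_{k=0}^\infty {\boldsymbol{{A}}}^k {{\boldsymbol{{C}}}}_k$ converges absolutely almost surely. For a further independent copy ${{\boldsymbol{{C}}}}'$ of ${{\boldsymbol{{C}}}}$, independent of $({{\boldsymbol{{C}}}}_k)_{k \geqslant 0}$, we have ${\boldsymbol{{A}}} {{\boldsymbol{{X}}}} + {{\boldsymbol{{C}}}}' = {{\boldsymbol{{C}}}}' + \sum_{k=0}^\infty {\boldsymbol{{A}}}^{k+1} {{\boldsymbol{{C}}}}_k \stackrel{{\mathcal D}}{=} {{\boldsymbol{{X}}}}$, so ${{\boldsymbol{{X}}}}$ solves the fixed point equation; uniqueness in distribution and the properties of the law of ${{\boldsymbol{{X}}}}$ are the content of the corollary.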
Appendix D. On the second moment of projections of multi-type CBI processes
Barczy et al. [Reference Barczy, Palau and Pap8, Proposition B.1] present an explicit formula for the second absolute moment of the projection of a multi-type CBI process on the left eigenvectors of its branching mean matrix, together with its asymptotic behavior in the supercritical and irreducible case.
Proposition D.1. If
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is a multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|^2) < \infty$
and the moment condition (3.8) holds, then for each left eigenvector
${{\boldsymbol{{v}}}} \in \mathbb{C}^d$
of
${\widetilde{{{\boldsymbol{{B}}}}}}$
corresponding to an arbitrary eigenvalue
$\lambda \in \sigma({\widetilde{{{\boldsymbol{{B}}}}}})$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU142.png?pub-status=live)
where
$C_{{{\boldsymbol{{v}}}},\ell}$
,
$\ell \in \{1, \ldots, d\}$
, are defined in Theorem 3.1, and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU143.png?pub-status=live)
If, in addition,
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is supercritical and irreducible, then we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU144.png?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU145.png?pub-status=live)
and
$M_{{\boldsymbol{{v}}}}^{(2)}$
is given by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU146.png?pub-status=live)
Based on Proposition D.1, we derive the asymptotic behavior of the variance matrix of the real and imaginary parts of the projection of a multi-type CBI process on certain left eigenvectors of its branching mean matrix
$\mathrm{e}^{\widetilde{{{\boldsymbol{{B}}}}}}$
.
Proposition D.2. If
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
is a supercritical and irreducible multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|^2) < \infty$
and the moment condition (3.8) holds, then for each left eigenvector
${{\boldsymbol{{v}}}} \in \mathbb{C}^d$
of
${\widetilde{{{\boldsymbol{{B}}}}}}$
corresponding to an arbitrary eigenvalue
$\lambda \in \sigma({\widetilde{{{\boldsymbol{{B}}}}}})$
with
${\operatorname{Re}}(\lambda) \in \bigl(-\infty, \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr]$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU147.png?pub-status=live)
where the scaling factor
$h \;:\; \mathbb{R}_{++} \to \mathbb{R}_{++}$
and the matrix
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$
are defined in Proposition D.1 and in Theorem 3.1, respectively.
For a proof of Proposition D.2, see our ArXiv preprint [Reference Barczy, Palau and Pap9].
Proposition D.3. Let
$({{\boldsymbol{{X}}}}_t)_{t\in\mathbb{R}_+}$
be a supercritical and irreducible multi-type CBI process with parameters
$(d, {{\boldsymbol{{c}}}}, {\boldsymbol{\beta}}, {{\boldsymbol{{B}}}}, \nu, \boldsymbol{\mu})$
such that
${\mathbb{E}}(\|{{\boldsymbol{{X}}}}_0\|^2) < \infty$
and the moment condition (3.8) holds. Let
$\lambda \in \sigma({\widetilde{{{\boldsymbol{{B}}}}}})$
with
${\operatorname{Re}}(\lambda) \in \bigl(-\infty, \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr]$
, and let
${{\boldsymbol{{v}}}} \in \mathbb{C}^d$
be a left eigenvector of
${\widetilde{{{\boldsymbol{{B}}}}}}$
corresponding to the eigenvalue
$\lambda$
. Then
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}} = {\boldsymbol{0}}$
if and only if
$c_\ell \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{e}}}}_\ell\rangle = 0$
and
$\mu_\ell(\{{{\boldsymbol{{z}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{z}}}}\rangle \ne 0 \}) = 0$
for each
$\ell \in \{1, \ldots, d\}$
. If, in addition,
${\operatorname{Im}}(\lambda) \ne 0$
, then
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$
is invertible if and only if there exists
$\ell \in \{1, \ldots, d\}$
such that
$c_\ell \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{e}}}}_\ell\rangle \ne 0$
or
$\mu_\ell(\{{{\boldsymbol{{z}}}} \in {\mathcal U}_d : \langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{z}}}}\rangle \ne 0 \}) > 0$
.
For a proof of Proposition D.3, see our ArXiv preprint [Reference Barczy, Palau and Pap9].
Remark D.1. Under the conditions of Proposition D.3, if
$\lambda \in \sigma({\widetilde{{{\boldsymbol{{B}}}}}})$
with
${\operatorname{Re}}(\lambda) \in \bigl(-\infty, \frac{1}{2} s({\widetilde{{{\boldsymbol{{B}}}}}})\bigr]$
and
${\operatorname{Im}}(\lambda) = 0$
and
${{\boldsymbol{{v}}}} \in \mathbb{R}^d$
is a left eigenvector of
${\widetilde{{{\boldsymbol{{B}}}}}}$
corresponding to the eigenvalue
$\lambda$
, then
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$
is singular. Indeed, in this case, by (3.7) and (3.5), we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU148.png?pub-status=live)
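In words (a hedged reading of the display above, assuming, as Proposition D.2 indicates, that ${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$ is the limiting variance matrix of the suitably scaled vector $({\operatorname{Re}}(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle), {\operatorname{Im}}(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle))$): for ${{\boldsymbol{{v}}}} \in \mathbb{R}^d$ we have ${\operatorname{Im}}(\langle{{\boldsymbol{{v}}}}, {{\boldsymbol{{X}}}}_t\rangle) = 0$ for all $t \in \mathbb{R}_+$, so the row and column of ${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$ corresponding to the imaginary part vanish, and ${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$ is singular.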
Note that if
${{\boldsymbol{{v}}}} \in \mathbb{R}^d$
is a left eigenvector of
${\widetilde{{{\boldsymbol{{B}}}}}}$
corresponding to an eigenvalue
$\lambda$
of
${\widetilde{{{\boldsymbol{{B}}}}}}$
, then
$\lambda \in \mathbb{R}$
necessarily, and hence in the case
$\lambda\in(-\infty,\frac{1}{2}s({\widetilde{{{\boldsymbol{{B}}}}}})]$
, we have that
${\boldsymbol{\Sigma}}_{{\boldsymbol{{v}}}}$
is not invertible. However, if
$\lambda \in \mathbb{R}$
is an eigenvalue of
${\widetilde{{{\boldsymbol{{B}}}}}}$
and
${{\boldsymbol{{v}}}} \in \mathbb{C}^d$
is a left eigenvector of
${\widetilde{{{\boldsymbol{{B}}}}}}$
corresponding to
$\lambda$
, then
${\operatorname{Re}}({{\boldsymbol{{v}}}}) \in \mathbb{R}^d$
and
${\operatorname{Im}}({{\boldsymbol{{v}}}}) \in \mathbb{R}^d$
are each either a left eigenvector of ${\widetilde{{{\boldsymbol{{B}}}}}}$ corresponding to $\lambda$ or the zero vector; this follows by taking real and imaginary parts in the defining eigenvalue relation, since $\lambda \in \mathbb{R}$ and ${\widetilde{{{\boldsymbol{{B}}}}}}$ has real entries.
Appendix E. A limit theorem for martingales
The next theorem describes the asymptotic behavior of multivariate martingales.
Theorem E.1. (Crimaldi and Pratelli [Reference Crimaldi and Pratelli12, Theorem 2.2]) Let us consider a filtered probability space
$\bigl(\Omega, {\mathcal F}, ({\mathcal F}_t)_{t\in\mathbb{R}_+}, {\mathbb{P}}\bigr)$
satisfying the usual conditions. Let
$({{\boldsymbol{{M}}}}_t)_{t\in\mathbb{R}_+}$
be a d-dimensional martingale with respect to the filtration
$({\mathcal F}_t)_{t\in\mathbb{R}_+}$
such that it has càdlàg sample paths almost surely. Suppose that there exists a function
${{\boldsymbol{{Q}}}} : \mathbb{R}_+ \to \mathbb{R}^{d \times d}$
such that
$\lim_{t\to\infty} {{\boldsymbol{{Q}}}}(t) = {\boldsymbol{0}}$
;
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn39.png?pub-status=live)
where
${\boldsymbol{\eta}}$
is a
$d \times d$
random (necessarily positive semidefinite) matrix and
$([{{\boldsymbol{{M}}}}]_t)_{t\in\mathbb{R}_+}$
denotes the quadratic variation (matrix-valued) process of
$({{\boldsymbol{{M}}}}_t)_{t\in\mathbb{R}_+}$
; and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqn40.png?pub-status=live)
Then, for each
$\mathbb{R}^{k\times\ell}$
-valued random matrix
${\boldsymbol{{A}}}$
defined on
$(\Omega, {\mathcal F}, {\mathbb{P}})$
with some
$k, \ell \in \mathbb{N}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20211122005117199-0051:S0001867821000070:S0001867821000070_eqnU149.png?pub-status=live)
where
${{\boldsymbol{{Z}}}}$
is a d-dimensional random vector with
${{\boldsymbol{{Z}}}} \stackrel{{\mathcal D}}{=} {\mathcal N}_d({\boldsymbol{0}}, {{\boldsymbol{{I}}}}_d)$
independent of
$({\boldsymbol{\eta}}, {\boldsymbol{{A}}})$
.
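As a simple illustration of how Theorem E.1 is typically applied (a hedged example with $d = 1$; we verify only the conditions we can state explicitly, and we assume that the second displayed condition is a negligibility condition on the scaled jumps of the martingale, as is usual for results of this type): let $(N_t)_{t\in\mathbb{R}_+}$ be a standard Poisson process and put $M_t \,:\!=\, N_t - t$, $t \in \mathbb{R}_+$, a square-integrable martingale with càdlàg sample paths and quadratic variation $[M]_t = N_t$. Taking $Q(t) \,:\!=\, t^{-1/2}$, we have $Q(t) \to 0$ and $Q(t) [M]_t Q(t)^\top = N_t / t \to 1$ almost surely as $t \to \infty$ by the strong law of large numbers, so one may take $\eta = 1$, while the jumps of $(M_t)_{t\in\mathbb{R}_+}$ are bounded by 1, so $Q(t) \sup_{u \in [0,t]} |\Delta M_u| \leqslant t^{-1/2} \to 0$. The conclusion of Theorem E.1 then gives (at least) the convergence in distribution $(N_t - t)/\sqrt{t} \stackrel{{\mathcal D}}{\longrightarrow} {\mathcal N}_1(0, 1)$ as $t \to \infty$, which is the classical central limit theorem for the Poisson process.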
Acknowledgements
We would like to thank the referees for their comments, which helped us to improve the paper. This paper has been revised after the sudden death of Gyula Pap, the third author, in October 2019. Mátyás Barczy is supported by the János Bolyai Research Scholarship of the Hungarian Academy of Sciences. Sandra Palau is supported by the Royal Society Newton International Fellowship and by the EU-funded Hungarian grant EFOP-3.6.1-16-2016-00008.