
Weak convergence of random processes with immigration at random times

Published online by Cambridge University Press:  04 May 2020

Congzao Dong*
Affiliation:
Xidian University
Alexander Iksanov*
Affiliation:
Xidian University and Taras Shevchenko National University of Kyiv
*Postal address: School of Mathematics and Statistics, Xidian University, 710126 Xi’an, P.R. China.

Abstract

By a random process with immigration at random times we mean a shot noise process with a random response function (response process) in which shots occur at arbitrary random times. Such random processes generalize random processes with immigration at the epochs of a renewal process which were introduced in Iksanov et al. (2017) and bear a strong resemblance to a random characteristic in general branching processes and the counting process in a fixed generation of a branching random walk generated by a general point process. We provide sufficient conditions which ensure weak convergence of finite-dimensional distributions of these processes to certain Gaussian processes. Our main result is specialised to several particular instances of random times and response processes.

Type
Research Papers
Copyright
© Applied Probability Trust 2020

1. Introduction

1.1. Definition of random processes with immigration at random times

Let $D\,:\!=D[0,\infty)$ be the Skorokhod space of right-continuous real-valued functions which are defined on $[0,\infty)$ and have finite limits from the left at each positive point. Denoting, as usual, by ${\mathbb{N}}_0\,:\!={\mathbb{N}}\cup\{0\}$ the set of nonnegative integers, let $(T_k)_{k\in{\mathbb{N}}_0}$ be a collection of nonnegative, not necessarily ordered points such that

(1)\begin{equation}N(t)\,:\!=\#\{k\in{\mathbb{N}}_0\,:\, T_k\leq t\}<\infty\quad\text{almost surely for each } t\geq 0.\end{equation}

Although in most applications the number of nonzero $T_k$ is almost surely (a.s.) infinite (then $\lim_{k\to\infty}T_k=\infty$ a.s. is a sufficient condition for (1)), the case of a.s. finitely many points is also allowed. Further, let $(X_j)_{j\in{\mathbb{N}}}$ be independent copies of a random process X with paths in D which vanishes on the negative half-line. Finally, we assume that, for each $k\in{\mathbb{N}}_0$, $X_{k+1}$ is independent of $(T_0, \ldots,T_k)$. In particular, the case of complete independence of $(X_j)_{j\in{\mathbb{N}}}$ and $(T_k)_{k\in{\mathbb{N}}_0}$ is not excluded.

Put

\begin{equation*}Y(t)\,:\!=\sum_{k\geq 0}X_{k+1}(t-T_k),\qquad t\in{\mathbb{R}}\end{equation*}

(note that $Y(t)=0$ for $t<0$). We shall call $Y\,:\!=(Y(t))_{t\in{\mathbb{R}}}$ a random process with immigration at random times. The interpretation is that associated with the kth immigrant which arrives at time $T_{k-1}$ is the random process $X_k$ which describes some model-dependent ‘characteristics’ of the kth immigrant; for instance, $X_k(t-T_{k-1})$ may be the number of offspring of the immigrant at time t or the fitness of the immigrant at time t. The value of Y(t) is then given by the sum of ‘characteristics’ of all immigrants that arrived up to and including time t.
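
For intuition, here is a minimal simulation sketch of Y. It is not part of the formal development: the homogeneous Poisson immigration epochs, the response $X_k(s)=\eta_k\textrm{e}^{-s}{\textbf{1}}_{\{s\geq 0\}}$ with standard normal marks $\eta_k$, and the rate `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_Y(t, lam=1.0):
    """One realization of Y(t) = sum_k X_{k+1}(t - T_k), assuming the T_k
    are the points of a homogeneous Poisson process with rate lam on [0, t]
    and X_k(s) = eta_k * exp(-s) for s >= 0 (illustrative choices only)."""
    n = rng.poisson(lam * t)               # number of immigration epochs in [0, t]
    T = rng.uniform(0.0, t, size=n)        # given n, the epochs are i.i.d. uniform
    eta = rng.standard_normal(size=n)      # i.i.d. marks, independent of the epochs
    return np.sum(eta * np.exp(-(t - T)))  # responses evaluated at t - T_k >= 0

print(sample_Y(10.0))
```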

1.2. Earlier literature and relation to other models

When $(T_k)_{k\in{\mathbb{N}}_0}$ is a zero-delayed standard random walk with nonnegative jumps, that is, $T_0=0$ and $(T_k-T_{k-1})_{k\in{\mathbb{N}}}$ are independent, identically distributed, nonnegative random variables, the random process Y was called in [Reference Iksanov, Marynych and Meiners10] a random process with immigration at the epochs of a renewal process. Thus, the set of the latter processes constitutes a proper subset of the set of random processes with immigration at random times. We refer to [Reference Iksanov6] and [Reference Iksanov, Marynych and Meiners10] for detailed surveys concerning earlier works on random processes with immigration at the epochs of a Poisson or renewal process. A non-exhaustive list of more recent contributions, not covered in the cited sources, includes [Reference Iksanov, Jedidi and Bouzzefour7], [Reference Iksanov and Kabluchko8], [Reference Iksanov and Kabluchko9], [Reference Marynych and Verovkin12], and [Reference Pang and Taqqu13].

Articles which focus on the random processes with immigration at random times other than renewal times are relatively rare. A selection of these can be traced via the references given in the recent article [Reference Pang and Zhou14]. The authors of [Reference Pang and Zhou14] investigate random processes of the form

\begin{equation*}Y(t)=\sum_{k\geq 1}X_k(t-T_k){\textbf{1}}_{\{T_k\leq t\}},\qquad t\geq 0,\end{equation*}

where $X_k(t)=H(t,\eta_k)$ for $k\in{\mathbb{N}}$, $H\,:\,[0,\infty)\times{\mathbb{R}}^n\to {\mathbb{R}}$ is a deterministic measurable function, and $\eta_k$ is an ${\mathbb{R}}^n$-valued random vector. Since $\eta_1$, $\eta_2,\ldots$ are assumed to be conditionally independent given $(T_j)_{j\in{\mathbb{N}}}$ (rather than just independent), and $\eta_k$ is allowed to depend on $T_k$, the model in [Reference Pang and Zhou14] is slightly different from ours.

In [Reference Iksanov and Rashytov11], another quite recent paper, functional limit theorems are proved for random processes with immigration at random times. There, the standing assumption is that X is an eventually nondecreasing deterministic function which is regularly varying at $\infty$ of nonnegative index. We stress that the techniques used in the present work and in [Reference Iksanov and Rashytov11] are very different.

Random processes with immigration at random times can be thought of as natural successors of two well-known branching processes: the general branching process (GBP) counted with random characteristic (see [Reference Asmussen and Hering1, pp. 362–363]) and the counting process in a branching random walk (BRW). To define the GBP, imagine a population initiated by a single ancestor at time 0. Denote by

  • $\mathcal{T}$ a point process on $[0,\infty)$ describing the instants of time at which a generic individual produces offspring;

  • $\Phi$ a random characteristic which is a random process on ${\mathbb{R}}$ which vanishes on the negative half-line (the processes $\mathcal{T}$ and $\Phi$ are allowed to be arbitrarily dependent);

  • J the collection of all individuals ever born into the population.

Associated with each individual $n\in J$ is its birth time $\sigma_n$ and a random pair $(\mathcal{T}_n, \Phi_n)$, a copy of $(\mathcal{T}, \Phi)$. Furthermore, for different individuals these copies are independent. The GBP is given by

\begin{equation*}Z(t)\,:\!=\sum_{n\in J}\Phi_n(t-\sigma_n),\qquad t\geq 0.\end{equation*}

If $\Phi(t)=1$ for all $t\geq 0$, then Z(t) is the total number of births up to and including time t. If $\Phi(t)={\textbf{1}}_{\{\tau>t\}}$ for a positive random variable $\tau$ interpreted as the lifetime of a generic individual, then Z(t) is the number of individuals alive at time t. More examples of this flavor can be found in [Reference Asmussen and Hering1, p. 363].

Consider now a BRW with positions of the jth-generation individuals given by $(T(v))_{v\in\mathbb{V}_j}$ for $j\in{\mathbb{N}}$, where $\mathbb{V}_j$ is the set of words of length j over ${\mathbb{N}}$ and for the individual $v\in \mathbb{V}_j$ its position on the real line is denoted by T(v). Set $N_j(t)\,:\!=\#\{v\in\mathbb{V}_j\,:\,T(v)\leq t\}$ for $t\in {\mathbb{R}}$, so that $N_j(t)$ is the number of individuals in the jth generation of the BRW with positions $\leq t$. With the help of a branching property, we obtain the basic decomposition

(2)\begin{equation}N_j(t)=\sum_{v\in\mathbb{V}_{j-1}}N_1^{(v)}(t-T(v)),\qquad t\in{\mathbb{R}},\end{equation}

where $(N_1^{(v)}(t))_{t\geq 0}$ for $v\in\mathbb{V}_{j-1}$ are independent copies of $(N_1(t))_{t\geq 0}$ which are also independent of the T(v), $v\in\mathbb{V}_{j-1}$. Motivated by an application to certain nested infinite occupancy schemes in a random environment, the authors of the recent article [Reference Gnedin and Iksanov4] proved functional limit theorems in $D^{\mathbb{N}}$ for $\Big(\frac{N_j(t\cdot\!)-a_j(t\cdot\!)}{b_j(t)}\Big)_{j\in{\mathbb{N}}}$ with appropriate centering and normalizing functions $a_j$ and $b_j$. The standing assumption of [Reference Gnedin and Iksanov4] is that the positions $(T(v))_{v\in\mathbb{V}_1}$ are given by $({-}\log P_k)_{k\in{\mathbb{N}}}$, where $P_1$, $P_2,\ldots$ are positive random variables with an arbitrary joint distribution satisfying $\sum_{k\geq 1}P_k=1$ a.s.

2. Main result

Throughout the remainder of the paper we assume that ${\textrm{E}}( X(t))=0$ for all $t\geq 0$, and that the covariance

\begin{equation*}f(u,w) \,:\!= {\textrm{Cov}} (X(u),X(w)) = {\textrm{E}}( X(u)X(w))\end{equation*}

is finite for all $u,w \geq 0$. The variance of X will be denoted by v, that is, $v(t)\,:\!=f(t,t)={\textrm{Var}}\, X(t)$.

Following [Reference Iksanov, Marynych and Meiners10], we recall several notions related to regular variation in ${\mathbb{R}}^2_+\,:\!=(0,\infty)\times(0,\infty)$. We refer to [Reference Bingham, Goldie and Teugels3] for an encyclopaedic treatment of regular variation on the positive half-line.

Definition 2.1. A function $r\,:\, [0,\infty)\times [0,\infty)\to{\mathbb{R}}$ is regularly varying in ${\mathbb{R}}^2_+$ if there exists a function $C\,:\,{\mathbb{R}}^2_+ \to (0,\infty)$ such that

\begin{equation*}\lim_{t\to\infty} \frac{r(ut,wt)}{r(t,t)}=C(u,w), \qquad u,w>0.\end{equation*}

The function C is called a limit function. The definition implies that r(t, t) is regularly varying at $\infty$, i.e. $r(t,t) \sim t^\beta \ell(t)$ as $t\to\infty$ for some $\ell$ slowly varying at $\infty$ and some $\beta\in{\mathbb{R}}$ called the index of regular variation. In particular, $C(a,a)=a^{\beta}$ for all $a>0$ and, further,

\begin{equation*}C(au,aw)=C(a,a)C(u,w)=a^\beta C(u,w)\end{equation*}

for all $a,u,w>0$.
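
For a concrete example anticipating Section 3.2, the function $r(u,w)\,:\!=(u\wedge w)^\beta$ with $\beta\in{\mathbb{R}}$ is regularly varying in ${\mathbb{R}}^2_+$ of index $\beta$, since

\begin{equation*}\lim_{t\to\infty}\frac{r(ut,wt)}{r(t,t)}=\lim_{t\to\infty}\frac{(u\wedge w)^\beta t^\beta}{t^\beta}=(u\wedge w)^\beta=C(u,w),\qquad u,w>0.\end{equation*}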

Definition 2.2. A function $r\,:\, [0,\infty)\times [0,\infty)\to{\mathbb{R}}$ will be called fictitious regularly varying of index $\beta$ in ${\mathbb{R}}^2_+$ if

\begin{equation*}\lim_{t\to\infty} \frac{r(ut,wt)}{r(t,t)}=C(u,w),\qquad u,w>0,\end{equation*}

where $C(u,u)\,:\!=u^\beta$ for $u>0$ and $C(u,w)\,:\!=0$ for $u,w>0$, $u\neq w$. A function r will be called wide-sense regularly varying of index $\beta$ in ${\mathbb{R}}^2_+$ if it is either regularly varying or fictitious regularly varying of index $\beta$ in ${\mathbb{R}}^2_+$.

The function C corresponding to a fictitious regularly varying function will also be called a limit function.

The processes introduced in Definition 2.3 arise as weak limits in Theorem 2.1, which is our main result. We shall show that these are well defined at the beginning of Section 4.

Definition 2.3. Let $\rho>0$ and C be the limit function for a wide-sense regularly varying function (see Definition 2.2) in ${\mathbb{R}}^2_+$ of index $\beta$ for some $\beta\in ({-}1,\infty)$. We shall denote by $V_{\beta,\rho}\,:\!=(V_{\beta,\rho}(u))_{u> 0}$ a centered Gaussian process with the covariance

\begin{equation*}{\textrm{E}}( V_{\beta,\rho}(u)V_{\beta,\rho}(w)) =\int_0^{u\wedge w} \!\!\!\! C(u-y,w-y) \,\textrm{d}y^\rho=\rho \int_0^{u\wedge w} \!\!\!\! C(u-y, w-y)y^{\rho-1}\, {\textrm{d} {y}}, \quad u,w>0,\end{equation*}

when $C(s,t) \neq 0$ for some $s,t>0$, $s \neq t$, and a centered Gaussian process with independent values and variance ${\textrm{E}}(V_{\beta,\rho}^2(u)) =\rho \textrm{B}(\beta+1, \rho) u^{\beta+\rho}$ otherwise. Here and hereafter, $\textrm{B}(\!\cdot,\cdot\!)$ denotes the beta function.
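
The beta-function formula for the variance follows from $C(u,u)=u^\beta$ and the substitution $y=us$:

\begin{equation*}\rho\int_0^u (u-y)^\beta y^{\rho-1}\,\textrm{d}y=\rho u^{\beta+\rho}\int_0^1(1-s)^\beta s^{\rho-1}\,\textrm{d}s=\rho \textrm{B}(\beta+1,\rho)u^{\beta+\rho},\qquad u>0.\end{equation*}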

Theorem 2.1 given below is an extension of Proposition 2.1 in [Reference Iksanov, Marynych and Meiners10], which treats the case where $(T_k)_{k\in{\mathbb{N}}_0}$ is a zero-delayed ordinary random walk with positive increments. The extension is nontrivial in the sense that our proof of Theorem 2.1 is not a mere adaptation of the proof of [Reference Iksanov, Marynych and Meiners10, Proposition 2.1]. Actually, in places, radically new, more advanced arguments are required. The reason for this complication is clear. Renewal processes exhibit a wide spectrum of nice properties which are not shared by general counting processes. We only mention two supporting facts; the list could have been extended.

  1. (1) When $(N(t))_{t\geq 0}$ is a renewal process, the limit relation (5) holds a.s. rather than in probability. This property significantly simplifies the analysis.

  2. (2) When $(N(t))_{t\geq 0}$ is a renewal process, the function $t\mapsto {\textrm{E}}( N(t))$ is subadditive and satisfies the Blackwell theorem, which states that the limit $\lim_{t\to\infty}{\textrm{E}} (N(t+h)-N(t))$ exists and is finite for (some) $h>0$. Of course, one cannot hope for such properties in the case of general counting processes.

Numerous examples given in Section 3 demonstrate that the range of applicability of Theorem 2.1 is much wider than that of [Reference Iksanov, Marynych and Meiners10, Proposition 2.1]. Another confirmation of this fact is that the limit processes $V_{\beta,\rho}$ in Theorem 2.1 constitute a family parameterized by $\rho>0$, whereas there is a single limit $V_{\beta,1}$ in [Reference Iksanov, Marynych and Meiners10, Proposition 2.1].

We shall write $Z_t(u)\,{\overset{\textrm{f.d.}}{\Rightarrow}}\, Z(u)$, $t\to\infty$ to denote weak convergence of finite-dimensional distributions; that is, for any $n\in{\mathbb{N}}$ and any $0<u_1<u_2<\cdots<u_n<\infty$, $(Z_t(u_1),\ldots, Z_t(u_n))$ converges in distribution to $(Z(u_1),\ldots, Z(u_n))$ as $t\to\infty$. Also, as usual, ${\overset{\textrm{P}}{\to}}$ denotes convergence in probability.

Theorem 2.1. Let $c,\rho\in(0,\infty)$ and $\beta> -(\rho\wedge 1)$ be given. Assume that

  • v is a locally bounded function; $f(u,w) = {\textrm{Cov}}(X(u), X(w))$ is a wide-sense regularly varying function of index $\beta$ in ${\mathbb{R}}_+^2$ with limit function C;

    (3)\begin{equation}\lim_{t\to\infty} \underset{a\leq u\leq b}{\sup}\,\bigg|\frac{f(ut,(u+w)t)}{v(t)}-C(u,u+w)\bigg|=0\end{equation}
    for every $w>0$ and all $0<a<b<\infty$; when f(u,w) is regularly varying, the function $u\mapsto C(u,u+w)$ is almost everywhere (a.e.) continuous on $(0,\infty)$ for every $w>0$;
  • for all $y>0$,

    (4)\begin{equation}v_y(t) \,:\!= {\mathrm{E}}\Big(X^2(t){\textbf{1}}_{\{|X(t)|>y\sqrt{t^\rho v(t)}\}}\Big) =o(v(t)),\qquad t\to\infty;\end{equation}
  • (5)\begin{equation}\sup_{y\in[0,\,T]}\Big|\frac{N(ty)}{t^\rho}-cy^\rho\Big|\,{\overset{\mathrm{P}}{\to}}\,0,\qquad t\to\infty,\end{equation}
    for all $T>0$;
  • if $\beta\in ({-}(\rho\wedge 1), 0]$, then ${\mathrm{E}}( N(t))<\infty$ for all $t\geq 0$ and

    (6)\begin{equation}{\mathrm{E}} (N(t)-N(t-1))=O(t^{\rho-1}),\qquad t\to\infty.\end{equation}

Then

(7)\begin{equation}\frac{Y(ut)}{\sqrt{ct^\rho v(t)}} \,{\overset{\mathrm{f.d.}}{\Rightarrow}}\, V_{\beta,\rho}(u), \qquad t\to\infty ,\end{equation}

where $V_{\beta,\rho}$ is a centered Gaussian process as introduced in Definition 2.3.

Remark 2.1. The condition $\beta>-\rho$ is obviously needed to guarantee that the normalization $\sqrt{ct^\rho v(t)}$ diverges to $\infty$ as $t\to\infty$. Since ${\textrm{E}}( V_{\beta,\,\rho}^2(u)) =\rho \textrm{B}(\beta+1, \rho)u^{\beta+\rho}$, the limit process $V_{\beta,\,\rho}$ is not well defined unless $\beta>-1$.

Remark 2.2. Condition (5) entails that the number of positive $T_k$ is a.s. infinite. A simple sufficient condition for (5) is

(8)\begin{equation}\lim_{t\to\infty} t^{-\rho} N(t)=c\quad\text{a.s.}\end{equation}

Indeed, the latter entails $\lim_{t\to\infty} t^{-\rho}N(ty)=cy^\rho$ a.s. for each fixed $y\geq 0$. Furthermore, being the convergence of monotone functions to a continuous limit, this convergence is locally uniform in y a.s.; that is, (5) holds a.s. and hence in probability.

If $T_0<T_1<\cdots$ a.s., then a standard inversion procedure ensures that (8) is equivalent to $\lim_{k\to\infty}k^{-1/\rho}T_k=c^{-1/\rho}$ a.s. If the collection $(T_k)_{k\in{\mathbb{N}}_0}$ is not ordered or ordered in the nondecreasing (rather than increasing) order, the aforementioned equivalence may fail to hold.

3. Applications

In this section we discuss how Theorem 2.1 reads for some particular $(T_k)_{k\in{\mathbb{N}}_0}$ and X.

3.1. Particular $(T_k)$

3.1.1. Perturbed random walks

Let $(\xi_k, \eta_k)_{k\in{\mathbb{N}}}$ be independent copies of a random vector $(\xi, \eta)$ with positive arbitrarily dependent components. Denote by $(S_k)_{k\in{\mathbb{N}}_0}$ the zero-delayed ordinary random walk with increments $\xi_k$; that is, $S_0\,:\!=0$ and $S_k\,:\!=\xi_1+\cdots+\xi_k$ for $k\in{\mathbb{N}}$. Consider a perturbed random walk

\begin{equation*} T_k\,:\!=S_{k-1}+\eta_k,\qquad k\in {\mathbb{N}}.\end{equation*}

It is convenient to define the corresponding counting process on ${\mathbb{R}}$ rather than on $[0,\infty)$; that is, $N(t)=\#\{k\in{\mathbb{N}}\,:\,T_k\leq t\}$ for $t\in{\mathbb{R}}$. Then, of course, $N(t)=0$ a.s. for $t<0$.

Condition (5) holds for this particular N(t) in view of Lemma 3.1 in combination with Remark 2.2.

Lemma 3.1. If $\mu\,:\!={\mathrm{E}}( \xi)<\infty$, then $\lim_{t\to\infty} t^{-1}N(t)=\mu^{-1}$ a.s.

Proof. Set $\nu(t)\,:\!=\sum_{k\geq 0}{\textbf{1}}_{\{S_k\leq t\}}$ for $t\geq 0$. For $t>0$ and $y\in (0,t)$, the following inequalities hold with probability one:

(9)\begin{align}\nu(t-y)-\sum_{k=1}^{\nu(t)}{\textbf{1}}_{\{\eta_k>y\}} & =\sum_{k=1}^{\nu(t)}{\textbf{1}}_{\{S_{k-1} \leq t-y\}}- \sum_{k=1}^{\nu(t)}{\textbf{1}}_{\{\eta_k>y\}} \nonumber \\& \leq\sum_{k=1}^{\nu(t)}{\textbf{1}}_{\{S_{k-1}+\eta_k\leq t\}}=N(t)\leq \nu(t).\end{align}

By the strong law of large numbers for ordinary random walks,

\begin{equation*}\lim_{n\to\infty} n^{-1}\sum_{k=1}^n{\textbf{1}}_{\{\eta_k>y\}}={\textrm{E}}({\textbf{1}}_{\{\eta>y\}})={\textrm{P}}\{\eta>y\}\quad\text{a.s.}\end{equation*}

Since $\lim_{t\to\infty} \nu(t)=\infty$ a.s., it follows that $\lim_{t\to\infty}\sum_{k=1}^{\nu(t)}{\textbf{1}}_{\{\eta_k>y\}}/\nu(t)={\textrm{P}}\{\eta>y\}$ a.s. Recall that $\lim_{t\to\infty} t^{-1}\nu(t)=\mu^{-1}$ a.s. by the strong law of large numbers for renewal processes, whence

\begin{equation*}\frac{\sum_{k=1}^{\nu(t)}{\textbf{1}}_{\{\eta_k>y\}}}{t}=\frac{\sum_{k=1}^{\nu(t)}{\textbf{1}}_{\{\eta_k>y\}}}{\nu(t)}\frac{\nu(t)}{t}\,\to\,\frac{{\textrm{P}}\{\eta>y\}}{\mu}\quad\text{a.s.} \end{equation*}

as $t\to\infty$. Hence, using (9) we infer that

\begin{equation*}\mu^{-1}-\mu^{-1}{\textrm{P}}\{\eta>y\}\leq\underset{t\to\infty}{\lim\inf}\,t^{-1}N(t)\leq\underset{t\to\infty}{\lim\sup}\, t^{-1}N(t) \leq\mu^{-1}\quad\text{a.s.} \end{equation*}

Letting $y\to\infty$ gives $\lim_{t\to\infty} t^{-1} N(t)=\mu^{-1}$ a.s.
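
As a quick numerical illustration of Lemma 3.1 (a simulation sketch only; the exponential distributions of $\xi$ and $\eta$ and the value of `mu` are assumptions made for the example):

```python
import numpy as np

rng = np.random.default_rng(1)

def perturbed_rw_count(t, mu=2.0):
    """Count N(t) = #{k : S_{k-1} + eta_k <= t} for a perturbed random walk
    with Exp(mean mu) increments xi and Exp(mean 1) perturbations eta
    (illustrative distributions); Lemma 3.1 gives t^{-1} N(t) -> 1/mu a.s."""
    n = int(3 * t / mu) + 100                        # enough steps so S exceeds t w.h.p.
    xi = rng.exponential(mu, size=n)
    S = np.concatenate(([0.0], np.cumsum(xi)))[:-1]  # S_0, ..., S_{n-1}
    eta = rng.exponential(1.0, size=n)
    return np.sum(S + eta <= t)

t = 1e5
print(perturbed_rw_count(t) / t, "vs 1/mu =", 1 / 2.0)  # both close to 0.5
```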

To take care of the case when $\beta\in ({-}1,0)$ in Theorem 2.1 we note that

(10)\begin{equation}{\textrm{E}}( N(t))={\textrm{E}}( U(t-\eta))=\int_{[0,\,t]}U(t-y)\,\textrm{d} G(y),\qquad t\in{\mathbb{R}} ,\end{equation}

where, for $t\in{\mathbb{R}}$, $U(t)\,:\!=\sum_{k\geq 0}{\textrm{P}}\{S_k\leq t\}$ is the renewal function and $G(t)\,:\!={\textrm{P}}\{\eta\leq t\}$. In particular, by monotonicity and our assumption that ${\textrm{P}}\{\xi=0\}<1$, ${\textrm{E}}( N(t))\leq U(t)<\infty$ for all $t\geq 0$. Further, condition (6) holds because the subadditivity of U on ${\mathbb{R}}$ entails

\begin{equation*} 0\leq {\textrm{E}} (N(t)-N(t-1))\leq U(1).\end{equation*}

3.1.2. Non-homogeneous Poisson process

Assume that $(N(t))_{t\geq 0}$ is a non-homogeneous Poisson process with mean function $m(t)\,:\!={\textrm{E}}( N(t))$ for $t\geq 0$ which satisfies $m(t)\sim c_0 t^{\rho_0}$ as $t\to\infty$ for some positive $c_0$ and $\rho_0$. We can identify $(N(t))_{t\geq 0}$ with the process $(\mathcal{P}(m(t)))_{t\geq0}$, where $(\mathcal{P}(t))_{t\geq 0}$ is a homogeneous Poisson process of unit intensity. As a consequence of the strong law of large numbers for $\mathcal{P}(t)$ we obtain $\lim_{t\to\infty}t^{-\rho_0}N(t)=c_0 $ a.s. In view of Remark 2.2, condition (5) holds for the present N(t) with $c=c_0$ and $\rho=\rho_0$. An additional assumption that $m(t)-m(t-1)=O(t^{\rho_0-1})$ as $t\to\infty$ guarantees that condition (6) also holds.
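
The identification $N(t)=\mathcal{P}(m(t))$ also gives a direct way to simulate the atoms of N. A minimal sketch, assuming the exact mean function $m(t)=c_0t^{\rho_0}$ (the values of `c0` and `rho0` are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def atoms_up_to(t, c0=2.0, rho0=1.5):
    """Atoms T_k of N on [0, t]: since N(t) = P(m(t)) with m(t) = c0 * t^rho0,
    the atoms are T_k = m^{-1}(E_k), where E_1 < E_2 < ... are the arrival
    times of a unit-rate homogeneous Poisson process P on [0, m(t)]."""
    m_t = c0 * t ** rho0
    n = rng.poisson(m_t)                        # N(t) = P(m(t)) = number of atoms
    E = np.sort(rng.uniform(0.0, m_t, size=n))  # arrival times of P given P(m(t)) = n
    return (E / c0) ** (1.0 / rho0)             # T_k = m^{-1}(E_k)

T = atoms_up_to(100.0)
print(len(T) / 100.0 ** 1.5, "vs c0 =", 2.0)    # cf. condition (5)
```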

3.1.3. Positions in the jth generation of a branching random walk

Consider a BRW generated by a point process with the points given by the successive positions of the same random walk $(S_n)_{n\geq1}$ as in Subsection 3.1.1. Assume that $\mu={\textrm{E}}(\xi)<\infty$. Denote by $(T_{k,j})_{k\in{\mathbb{N}}}$, $j\in{\mathbb{N}}$, the positions of the jth-generation individuals, and by $N_j(t)$, $j\in{\mathbb{N}}$, $t\geq0$, the number of the jth-generation individuals with positions $\leq t$. In this example we identify $(T_k)_{k\in{\mathbb{N}}_0}$ with $(T_{k,j})_{k\in{\mathbb{N}}}$ for some integer $j\geq 2$, hence N(t) with $N_j(t)$.

Set $U_j(t)\,:\!={\textrm{E}}( N_j(t))$ for $j\in{\mathbb{N}}$ and $t\geq 0$. From the representation which is a counterpart of (2),

(11)\begin{equation}N_j(t)=\sum_{k\geq 1}N_1^{(k)}(t-T_{k,j-1}),\qquad t\geq 0\end{equation}

where $(N_1^{(1)}(t))_{t\geq 0}, (N_1^{(2)}(t))_{t\geq0},\ldots$ are independent copies of $(N_1(t))_{t\geq 0}$ which are independent of $(T_{k,j-1})_{k\in{\mathbb{N}}}$, we obtain

\begin{equation*}U_j(t)=\int_{[0,\,t]}U_1(t-y)\,\textrm{d}U_{j-1}(y),\qquad t\geq 0.\end{equation*}

By the elementary renewal theorem, $U_1(t)=O(t)$ as $t\to\infty$. Further, by monotonicity, $U_j(t)\leq U_1(t)U_{j-1}(t)$ for $t\geq0$, which shows that $U_j(t)<\infty$ for all $t\geq 0$ and that

(12)\begin{equation}U_j(t)=O(t^{\,j}),\qquad t\to\infty.\end{equation}

To show that (6) holds we write, by using the subadditivity of $U_1(t)+1$ and monotonicity of $U_1(t)$,

\begin{align*}U_j(t)-U_j(t-1)&=\int_{[0,\,t-1]}(U_1(t-y)-U_1(t-1-y))\,\textrm{d}U_{j-1}(y) \\& \quad +\ \int_{(t-1,\,t]}U_1(t-y)\,\textrm{d}U_{j-1}(y)\\&\leq(U_1(1)+1)U_{j-1}(t-1)+U_1(1)(U_{j-1}(t)-U_{j-1}(t-1))\\&\leq(U_1(1)+1)U_{j-1}(t).\end{align*}

Invoking (12) proves (6) with $\rho=j$.

To check (5) we assume for simplicity that $\sigma^2\,:\!={\textrm{Var}}\, \xi<\infty$ (this condition is by no means necessary but enables us to avoid some additional calculations). Theorem 1.3 in [Reference Iksanov and Kabluchko8] entails that

\begin{equation*}\frac{N_j(t\cdot\!)-(t\cdot\!)^{\,j}/({\kern1.5pt}j!\mu^j)}{\sqrt{\sigma^2\mu^{-2j-1}t^{2j-1}}}\end{equation*}

converges weakly to a $({\kern1.5pt}j-1)$-times integrated Brownian motion in D equipped with the $J_1$-topology. Of course, this immediately yields (5) with $\rho=j$ and $c=({\kern1.5pt}j!\mu^j)^{-1}$.
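
The following simulation sketch illustrates this example, building the generations recursively via (11); the exponential step distribution and the parameter values are illustrative assumptions.

```python
import math
import numpy as np

rng = np.random.default_rng(3)

def jth_generation_positions(j, t, mu=1.0):
    """Positions <= t in the j-th generation of a BRW whose generic point
    process consists of the successive positions of a random walk with
    Exp(mean mu) steps (an illustrative choice), built via decomposition (11)."""
    def walk_points_from(start):
        pts, pos = [], start
        while True:
            pos += rng.exponential(mu)   # next point of the generic random walk
            if pos > t:
                return pts               # only positions <= t are kept
            pts.append(pos)
    current = walk_points_from(0.0)      # generation 1
    for _ in range(j - 1):               # generation i -> generation i + 1
        current = [p for v in current for p in walk_points_from(v)]
    return current

j, t, mu = 2, 200.0, 1.0
N_j = len(jth_generation_positions(j, t, mu))
print(N_j / t**j, "vs c =", 1 / (math.factorial(j) * mu**j))  # both near 0.5
```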

3.2. Particular X

Let $(\eta_k)_{k\in{\mathbb{N}}}$ be independent copies of a random variable $\eta$ such that, for each $k\in{\mathbb{N}}_0$, $\eta_{k+1}$ is independent of $(T_0,\ldots, T_k)$.

In Section 3 of [Reference Iksanov, Marynych and Meiners10] it was checked that the covariance functions f of the response processes X discussed in parts (a), (b), and (e) below are regularly varying in ${\mathbb{R}}^2_+$ of index $\beta$, and that those discussed in parts (a) and (b) satisfy (3).

  1. (a) Let $X(t)={\textbf{1}}_{\{\eta>t\}}-{\textrm{P}}\{\eta>t\}$ with ${\textrm{P}}\{\eta>t\}\sim t^\beta\ell(t)$ as $t\to\infty$ for some $\beta\in ({-}1,0)$. In this case, $C(u,w)=(u\vee w)^\beta$ for $u, w>0$, so that $C(u, u+w)=(u+w)^\beta$ is continuous in u for every $w>0$. Further, $v(t)={\textrm{P}}\{\eta>t\}{\textrm{P}}\{\eta\leq t\}$ is bounded. Finally, condition (4) holds in view of $|X(t)|\leq 1$ a.s.

  2. (b) Let $X(t)=\eta g(t)$, where ${\textrm{E}}( \eta)=0$, ${\textrm{Var}}\,\eta\in (0,\infty)$, and $g\,:\,[0,\infty)\to{\mathbb{R}}$ varies regularly at $\infty$ of index $\beta/2$ for some $\beta>-1$ and $g\in D$. In this case, $C(u,w)=(uw)^{\beta/2}$ for $u,w>0$, so that $C(u,u+w)=(u(u+w))^{\beta/2}$ is continuous in u for every $w>0$. Also, $v(t)=({\textrm{Var}}\, \eta)g^2(t)$ is locally bounded. Let $\rho>0$. Observe now that $\lim_{t\to\infty}(\sqrt{t^\rho v(t)}/|g(t)|)=\infty$ implies that, for all $y>0$,

    \begin{equation*}{\textrm{E}}(X^2(t){\textbf{1}}_{\{|X(t)|>y\sqrt{t^\rho v(t)}\}})=g^2(t){\textrm{E}}(\eta^2{\textbf{1}}_{\{|\eta|>y\sqrt{t^\rho v(t)}/|g(t)|\}})=o(v(t)),\qquad t\to\infty, \end{equation*}
    that is, (4) holds. The corresponding limit process admits a stochastic integral representation
    \begin{equation*}V_{\beta,\,\rho}(u)=\int_{[0,\,u]}(u-y)^{\beta/2}\,\textrm{d}W(y^\rho),\qquad u>0, \end{equation*}
    where $(W(u))_{u\geq 0}$ is a Brownian motion and $\beta>-(\rho\wedge 1)$.
  3. (c) Let X be a D-valued centered random process with finite second moments satisfying ${\textrm{E}}( \sup_{s\in I}X^2(s))<\infty$ for some interval $I\subset(0,\infty)$. Assume also that X is self-similar with Hurst exponent $\beta/2$ for some $\beta>0$. By self-similarity, $v(t)=t^\beta {\textrm{E}}( X^2(1))$ (a locally bounded function) and

    \begin{equation*}\frac{f(ut,wt)}{v(t)}=\frac{{\textrm{E}}( X(u)X(w))}{{\textrm{E}}(X^2(1))},\qquad u, w>0, \end{equation*}
    which shows that f is regularly varying in ${\mathbb{R}}^2_+$ of index $\beta$ with limit function $C(u,w)={\textrm{E}}(X(u)X(w))/{\textrm{E}}( X^2(1))$ and that (3) trivially holds. Continuity of $C(u, u+w)$ in $u>0$ for every $w>0$ is justified by the facts that, with probability one, $X(u)X(u+w)$ does not have fixed discontinuities and that ${\textrm{E}}( \sup_{s\in[a,b]}X^2(s))<\infty$ for all $0<a<b<\infty$ (using self-similarity), in combination with the Lebesgue dominated convergence theorem: for any deterministic $u>0$, $\lim_{s\to 0}X(u+s)X(u+s+w)=X(u)X(u+w)$ a.s., and, for any $s\in{\mathbb{R}}$ sufficiently close to 0, $|X(u+s)X(u+s+w)|\leq \sup_{v\in[a,\,b]}X^2(v)$ a.s. for large enough $b>0$ and small enough $a>0$. Finally, condition (4) holds in view of
    \begin{equation*}{\textrm{E}}( X^2(t){\textbf{1}}_{\{|X(t)|>y\sqrt{t^\rho v(t)}\}})= t^\beta {\textrm{E}}(X^2(1){\textbf{1}}_{\{|X(1)|>{\textrm{E}}(X^2(1))^{1/2}yt^{\rho/2}\}})=o(t^\beta),\qquad t\to\infty,\end{equation*}
    where $\rho>0$.

    In particular, if $X(t)=W(t^\beta)$ for $\beta>0$, where, as before, $(W(t))_{t\geq 0}$ is a Brownian motion, then, for any $\rho>0$, $V_{\beta,\,\rho}(u)=(\rho \textrm{B}(\beta+1,\rho))^{1/2}W(u^{\beta+\rho})$ for $u\geq 0$; a short verification of this identity is given after this list.

  4. (d) Let $X(t)=N(t)-{\textrm{E}}( N(t))=N(t)-m(t)$, where $(N(t))_{t\geq 0}$ is a non-homogeneous Poisson process with mean function m(t) as discussed in Subsection 3.1.2. In this case, $v(t)=m(t)\sim c_0 t^{\rho_0}$ as $t\to\infty$. Since m(t) is a nondecreasing function, it must be locally bounded. For $u,w>0$, $f(u,w)={\textrm{E}}( (N(u)-m(u))(N(w)-m(w)))= m(u\wedge w)$. Hence, f is regularly varying in ${\mathbb{R}}_+^2$ of index $\rho_0$ with limit function $C(u,w)=(u\wedge w)^{\rho_0}$. Further, it is obvious that (3) holds and that, for every $w>0$, $C(u,u+w)=u^{\rho_0}$ is continuous in u. It remains to check that condition (4) holds. To this end, we use Hölder’s inequality and then Markov’s inequality to obtain, for $\rho, y>0$,

    \begin{align*}&{\textrm{E}} (N(t)-m(t))^2{\textbf{1}}_{\{|N(t)-m(t)|>y\sqrt{t^\rho m(t)}\}}\\&\quad\leq \big({\textrm{E}}(N(t)-m(t))^4\big)^{1/2}\big({\textrm{P}}\{|N(t)-m(t)|>y\sqrt{t^\rho m(t)}\}\big)^{1/2}\\&\quad\leq(m(t)(1+3m(t)))^{1/2}y^{-1}t^{-\rho/2}=o(m(t)) ,\end{align*}
    which proves (4).

    The limit process $V_{\rho_0,\,\rho}$ is the same time-changed Brownian motion as in point (c) in which the role of $\beta$ is played by $\rho_0$.

    To give a concrete specialization of Theorem 2.1, let Y(t) denote the number of second-generation individuals in a BRW generated by a non-homogeneous Poisson process $(N(t))_{t\geq 0}$ as above. Then $(Y(t))_{t\geq 0}$ is a random process with immigration at random times, for Y(t) admits a representation similar to (11) in which we take $j=2$, replace $N_2(t)$ with Y(t) and $N_1(t)$ with N(t), and let $(T_{k,1})_{k\in{\mathbb{N}}}$ denote the atoms of $(N(t))_{t\geq 0}$. We shall write $T_k$ for $T_{k,1}$. According to Theorem 2.1 in combination with the discussion above and in Subsection 3.1.2, we have the following limit theorem with a random centering:

    \begin{equation*}\frac{Y(ut)-\sum_{k\geq 1}m(ut-T_k){\textbf{1}}_{\{T_k\leq ut\}}}{c_0 (\rho_0\textrm{B}(\rho_0+1, \rho_0))^{1/2} t^{\rho_0}} \,{\overset{\textrm{f.d.}}{\Rightarrow}}\,W(u^{2\rho_0}), \qquad t\to\infty, \end{equation*}
    where $(W(u))_{u\geq 0}$ is a Brownian motion.
  5. (e) Let $X(t)=(t+1)^{\beta/2}Z(t)$, where $\beta\in({-}1,0)$ and $(Z(t))_{t\geq 0}$ is a stationary Ornstein–Uhlenbeck process with variance $1/2$. In this case, $f(u,w) ={\textrm{E}} (X(u)X(w))= 2^{-1}(u+1)^{\beta/2}(w+1)^{\beta/2}\textrm{e}^{-|u-w|}$ is fictitious regularly varying in ${\mathbb{R}}^2_+$ of index $\beta$. Furthermore, condition (3) holds; that is, for every $w>0$,

    \begin{equation*}\frac{f(ut, (u+w)t)}{v(t)}=\frac{(ut+1)^{\beta/2}((u+w)t+1)^{\beta/2}}{(t+1)^\beta}\textrm{e}^{-wt}\end{equation*}
    converges to 0 as $t\to\infty$ uniformly in $u\in [a, b]$ for all $0<a<b<\infty$. This stems from the fact that while the first factor converges to $u^{\beta/2}(u+w)^{\beta/2}$ uniformly in $u\in [a,b]$, the second factor converges to zero and does not depend on u. Further, $v(t)=2^{-1}(t+1)^\beta$ is bounded. By stationarity, for each $t > 0$, Z(t) has the same distribution as a random variable $\theta$ having the normal distribution with zero mean and variance $1/2$. Hence, with $\rho>0$,
    \begin{equation*}{\textrm{E}}( X^2(t){\textbf{1}}_{\{|X(t)|>y\sqrt{t^\rho v(t)}\}})= (t+1)^\beta {\textrm{E}}( \theta^2{\textbf{1}}_{\{|\theta|>2^{-1/2}yt^{\rho/2}\}})=o(t^\beta),\qquad t\to\infty;\end{equation*}
    that is, condition (4) holds. For $\beta>-(\rho\wedge 1)$, the corresponding limit process $V_{\beta,\,\rho}$ is a centered Gaussian process with independent values.
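
Here is the verification, promised in part (c), that for $X(t)=W(t^\beta)$ the limit process is the stated time-changed Brownian motion. In this case $C(u,w)=(u\wedge w)^\beta$, so, by Definition 2.3 and the beta integral,

\begin{equation*}{\textrm{E}}( V_{\beta,\,\rho}(u)V_{\beta,\,\rho}(w))=\rho\int_0^{u\wedge w}((u\wedge w)-y)^\beta y^{\rho-1}\,\textrm{d}y=\rho \textrm{B}(\beta+1,\rho)(u\wedge w)^{\beta+\rho},\qquad u,w>0,\end{equation*}

which coincides with the covariance of $(\rho \textrm{B}(\beta+1,\rho))^{1/2}W(u^{\beta+\rho})$, namely $\rho \textrm{B}(\beta+1,\rho)(u^{\beta+\rho}\wedge w^{\beta+\rho})$.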

4. Proof of Theorem 2.1

When $C(u,w)=0$ for all $u,w>0$, $u\neq w$, the process $V_{\beta,\,\rho}$ exists as a Gaussian process with independent values; see Definition 2.3. Now we intend to show that the Gaussian process $V_{\beta,\,\rho}$ is well defined in the complementary case when $C(u,w)>0$ for some $u,w>0$, $u\neq w$. To this end, we check that the function $\Pi(s,t)$ given by

\begin{equation*}\Pi(s,t) \,\,:\!=\, \int_0^{s\wedge t} C(s-y,t-y) \, \textrm{d}y^\rho,\qquad s,t>0,\end{equation*}

is finite and positive semidefinite; that is, for any $j\in{\mathbb{N}}$, any $\gamma_1,\ldots, \gamma_j\in{\mathbb{R}}$, and any $0<v_1<\cdots<v_j<\infty$,

(13)\begin{align}0\leq \sum_{i=1}^j & \gamma_i^2 \Pi(v_i,\,v_i)+2\sum_{1\leq r<l\leq{\kern1pt}j} \gamma_r\gamma_l \Pi(v_r, v_l) \notag \\&= \sum_{i=1}^{j-1}\int_{v_{i-1}}^{v_i}\bigg(\sum_{s=i}^j\gamma_s^2 C(v_s-y,v_s-y)+2\sum_{i\leq r<l\leq{\kern1pt}j}\gamma_r\gamma_l C(v_r-y,v_l-y)\bigg) \, \textrm{d}y^\rho \notag \\&\quad +\gamma_j^2 \int_{v_{j-1}}^{v_j} C(v_j-y, v_j-y)\,\textrm{d}y^\rho,\end{align}

where $v_0\,:\!=0$. In view of

(14)\begin{equation}|{\kern1pt}f(s,t)| \leq 2^{-1}(v(s)+v(t)), \qquad s,t\geq 0,\end{equation}

we infer that

(15)\begin{equation}C(s-y,t-y) \leq 2^{-1} ((s-y)^\beta+(t-y)^\beta).\end{equation}

Since $\beta>-1$ by assumption, the latter ensures $\Pi(s,t)<\infty$ for all $s,t>0$. Now we pass to the proof of (13). Since the second term on the right-hand side of (13) is nonnegative, it suffices to prove that so is the first. The function C(s,t), $s,t>0$, is positive semidefinite as a limit of positive semidefinite functions. Hence, for each $1\leq i\leq{\kern1pt}j-1$ and $y\in (v_{i-1},\,v_i)$,

\begin{equation*}\sum_{s=i}^j\gamma_s^2 C(v_s-y, v_s-y)+2\sum_{i\leq r<l\leq{\kern1pt}j}\gamma_r\gamma_l C(v_r-y,v_l-y)\geq 0. \end{equation*}

Thus, the process $V_{\beta,\,\rho}$ does exist as a Gaussian process with covariance function $\Pi(s,t)$, $s,t>0$.

Proof of Theorem 2.1. We treat simultaneously both the case when $C(u,w)>0$ for some $u,w>0$, $u\neq w$, and the complementary case.

According to the Cramér–Wold device, relation (7) is equivalent to

(16)\begin{equation}\frac{\sum_{i=1}^j \alpha_i \sum_{k \geq0}X_{k+1}(u_it-T_k){\textbf{1}}_{\{T_k \leq u_it\}}}{\sqrt{ct^\rho v(t)}}\,\stackrel{\textrm{d}}{\to}\, \sum_{i=1}^j \alpha_iV_{\beta,\rho}(u_i)\end{equation}

for all $j\in{\mathbb{N}}$, all $\alpha_1,\ldots, \alpha_j\in{\mathbb{R}}$, and all $0<u_1<\cdots<u_j<\infty$. Here and hereafter, $\stackrel{\textrm{d}}{\to}$ denotes convergence in distribution. Since $C(y,y)=y^\beta$, we conclude that

\begin{equation*}\int_0^{u_i}C(u_i-y,u_i-y)\,\textrm{d}y^\rho=\rho \textrm{B}(\beta+1,\rho)u_i^{\beta+\rho}.\end{equation*}

Hence, the random variable $\sum_{i=1}^j \alpha_iV_{\beta,\rho}(u_i)$ is centered normal with variance

\begin{equation*}D_{\beta,\rho} (u_1,\ldots, u_j)\,:\!=\sum_{i=1}^j \alpha_i^2 \rho \textrm{B}(\beta+1,\rho)u_i^{\beta+\rho}+2\sum_{1\leq r<l\leq{\kern1pt}j}\alpha_r\alpha_l\int_0^{u_r}C(u_r-y,u_l-y)\,\textrm{d}y^\rho. \end{equation*}

Define the $\sigma$-algebras $\mathcal{F}_0\,:\!=\sigma(T_0)$ and $\mathcal{F}_k\,:\!=\sigma(T_0, X_1,T_1, \ldots, X_k, T_k)$ for $k\in{\mathbb{N}}$, and set ${\textrm{E}}_k{(\!\cdot\!)}\,:\!={\textrm{E}}(\!\cdot|\mathcal{F}_k)$, $k\in{\mathbb{N}}_0$. Now observe that

\begin{equation*}{\textrm{E}}_k \sum_{i=1}^j\alpha_i X_{k+1}(u_it-T_k){\textbf{1}}_{\{T_k\leq u_i t\}}= 0,\qquad k\in{\mathbb{N}}_0,\end{equation*}

which shows that $\sum_{k\geq0}\sum_{i=1}^j\alpha_i X_{k+1}(u_it-T_k){\textbf{1}}_{\{T_k\leq u_i t\}}$ is a martingale limit. In view of this, in order to prove (16), one may use the martingale central limit theorem (Corollary 3.1 in [Reference Hall and Heyde5]). The theorem tells us that it suffices to verify

(17)\begin{align}\sum_{k\geq 0} {\textrm{E}}_k (Z_{k+1,\,t}^2) \,\stackrel{{\textrm{P}}}{\to}\,D_{\beta,\rho}(u_1,\ldots, u_j),\qquad t\to\infty,\end{align}

and

(18)\begin{equation}\sum_{k\geq 0} {\textrm{E}}_k\big(Z_{k+1,\,t}^2{\textbf{1}}_{\{|Z_{k+1,\,t}|>y\}}\big)\,\stackrel{{\textrm{P}}}{\to}\, 0,\qquad t\to\infty ,\end{equation}

for all $y>0$, where

\begin{equation*}Z_{k+1,\,t} \,:\!= \frac{\sum_{i=1}^j \alpha_i {\textbf{1}}_{\{T_k\leq u_it\}}X_{k+1}(u_it-T_k)}{\sqrt{ct^\rho v(t)}}, \qquad k\in{\mathbb{N}}_0,\ t>0.\end{equation*}

Proof of (17). We start by writing

\begin{align*}\sum_{k\geq 0} {\textrm{E}}_k (Z_{k+1,\,t}^2) & = \frac{\sum_{i=1}^j \alpha_i^2 \sum_{k \geq 0}{\textbf{1}}_{\{T_k\leq u_it\}}v(u_it-T_k)}{ct^\rho v(t)} \\& \quad + \frac{2\sum_{1\leq r<l\leq{\kern1pt}j}\alpha_r\alpha_l \sum_{k\geq0}{\textbf{1}}_{\{T_k\leq u_rt\}}f(u_rt-T_k, u_lt-T_k)}{ct^\rho v(t)}.\end{align*}

We shall prove that, as $t\to\infty$,

(19)\begin{equation}\frac{\sum_{k\geq 0} {\textbf{1}}_{\{T_k\leq u_it\}}v(u_it-T_k)}{ct^\rho v(t)} \,=\, \frac{\int_{[0,\,u_i]} v(t(u_i-y)) \, \textrm{d}N(ty)}{ct^\rho v(t)} \,\stackrel{{\textrm{P}}}{\to}\, \rho \textrm{B}(\beta+1, \rho) u_i^{\beta+\rho}\end{equation}

for all $1\leq i\leq{\kern1pt}j$, and that

(20)\begin{align}\frac{\sum_{k\geq 0}{\textbf{1}}_{\{T_k\leq u_rt\}}\,f(u_rt-T_k,u_lt-T_k)}{ct^\rho v(t)} & = \frac{\int_{[0,\,u_r]}f(t(u_r-y),t(u_l-y)) \, \textrm{d}N(ty)}{ct^\rho v(t)} \notag \\& \stackrel{{\textrm{P}}}{\to} \int_0^{u_r} C(u_r-y,u_l-y)\,\textrm{d}y^\rho\end{align}

for all $1\leq r<l\leq{\kern1pt}j$.

Fix any $u_r<u_l$ and pick $\varepsilon \in (0,u_r\wedge 1)$. We claim that, as $t\to\infty$,

(21)\begin{equation}\int_{[0,\,u_r-\varepsilon]}\frac{v(t(u_r-y))}{v(t)} \, \textrm{d}\frac{N(ty)}{ct^\rho}\quad \overset{\textrm{P}}{\to}\quad\int_0^{u_r-\varepsilon}(u_r-y)^\beta \,\textrm{d}y^\rho\end{equation}

and

(22)\begin{equation}\int_{[0,\,u_r-\varepsilon]} \frac{f(t(u_r-y),t(u_l-y))}{v(t)} \,\textrm{d} \frac{N(ty)}{ct^\rho}\quad \overset{{\textrm{P}}}{\to}\quad\int_0^{u_r-\varepsilon}C(u_r-y, u_l-y)\,\textrm{d}y^\rho.\end{equation}

To prove these limit relations we need some preparation. For each $t>0$, the random function $\mathcal{G}_t$ defined by $\mathcal{G}_t(y)\,:\!=0$ for $y<0$, $\mathcal{G}_t(y)\,:\!=N(ty)/N(tu_r)$ for $y\in[0,u_r)$, and $\mathcal{G}_t(y)\,:\!=1$ for $y\geq u_r$ is a random distribution function. Similarly, the function $\mathcal{G}$ defined by $\mathcal{G}(y)\,:\!=0$ for $y<0$, $\mathcal{G}(y)\,:\!=(y/u_r)^\rho$ for $y\in[0,u_r)$, and $\mathcal{G}(y)\,:\!=1$ for $y\geq u_r$ is a distribution function. According to (5), for every sequence $(t_n)_{n\in{\mathbb{N}}}$ there exists a subsequence $(t_{n_s})_{s\in{\mathbb{N}}}$ such that $\lim_{s\to\infty}t_{n_s}^{-\rho} N(t_{n_s}y)= cy^\rho$ a.s. for each $y\in[0,u_r]$. We stress that the uniformity of the convergence in (5) ensures that the subsequence $(t_{n_s})_{s\in{\mathbb{N}}}$ does not depend on y (without the uniformity we would have to take a new subsequence for each particular $y\in [0,\,u_r]$, which would not suffice for what follows). The last limit relation guarantees that $\lim_{s\to\infty} N(t_{n_s}y)/N(t_{n_s}u_r)=(y/u_r)^\rho$ a.s. for each $y\in [0,u_r]$. Therefore, as $s\to\infty$, $\mathcal{G}_{t_{n_s}}$ converges weakly to $\mathcal{G}$ with probability one.

Proof of (21). Write

\begin{align*}&\Big|\int_{[0,\,u_r-\varepsilon]}\frac{v(t_{n_s}(u_r-y))}{v(t_{n_s})}\textrm{d}\mathcal{G}_{t_{n_s}}(y)-\int_{[0,\,u_r-\varepsilon]}(u_r-y)^\beta\textrm{d}\mathcal{G}(y) \Big|\\&\quad\leq\int_{[0,\,u_r-\varepsilon]}\Big|\frac{v(t_{n_s}(u_r-y))}{v(t_{n_s})}-(u_r-y)^\beta\Big|\textrm{d}\mathcal{G}_{t_{n_s}}(y)\\&\qquad+\Big|\int_{[0,\,u_r-\varepsilon]}(u_r-y)^\beta\textrm{d}\mathcal{G}_{t_{n_s}}(y)-\int_{[0,\,u_r-\varepsilon]}(u_r-y)^\beta\textrm{d}\mathcal{G}(y)\Big|.\end{align*}

By the uniform convergence theorem for regularly varying functions (Theorem 1.5.2 in [Reference Bingham, Goldie and Teugels3]),

(23)\begin{equation}\lim_{t \to \infty} \frac{v(t(u_r-y))}{v(t)} = (u_r-y)^\beta\end{equation}

uniformly in $y \in [0,\,u_r-\varepsilon]$. This implies that the first summand on the right-hand side of the penultimate centered formula converges to 0 a.s. as $s\to\infty$. The second summand does so by the following reasoning. The function g defined by $g(y)\,:\!=(u_r-y)^\beta$ for $y\in [0, u_r-\varepsilon]$ and $g(y)\,:\!=0$ for $y>u_r-\varepsilon$ is bounded with one discontinuity point. With this at hand it remains to invoke the aforementioned weak convergence with probability one and the fact that $\mathcal{G}$ is a continuous distribution function. This implies that the left-hand side of the penultimate centered formula, with t replacing $t_{n_s}$, converges in probability to 0 as $t\to\infty$. Multiplying it by $N(tu_r)/(ct^\rho)$, which converges to $u_r^\rho$ in probability as $t\to\infty$, we arrive at (21).

The proof of (22) is analogous. Instead of (23), one has to use the following relation, which is a consequence of (3):

\begin{equation*} \lim_{t \to \infty} \frac{f(t(u_r-y),t(u_l-y))}{v(t)} = C(u_r-y, u_l-y)\end{equation*}

uniformly in $y\in [0,u_r-\varepsilon]$. The role of g is now played by $g^\ast(y)\,:\!=C(u_r-y, u_l-y)$ for $y\in [0,u_r-\varepsilon]$ and $g^\ast(y)\,:\!=0$ for $y>u_r-\varepsilon$. In view of (15), this function is bounded. Also, $g^\ast$ is a.e. continuous by assumption, which in combination with the absolute continuity of $\mathcal{G}$ is enough for completing the proof of (22).

As $\varepsilon\to 0+$, the right-hand sides of (21) and (22) converge to

\begin{equation*} \int_0^{u_r}(u_r-y)^\beta \,\textrm{d}y^\rho=\rho \textrm{B}(\beta+1, \rho)u_r^{\beta+\rho}\end{equation*}

and $\int_0^{u_r} C(u_r-y,u_l-y)\textrm{d}y^\rho$, respectively. Thus, relations (19) and (20) are valid if we can show (see Theorem 4.2 in [Reference Billingsley2]) that

(24)\begin{equation}\lim_{\varepsilon\to 0+} \limsup_{t\to\infty} {\textrm{P}}\bigg\{\frac{\int_{(u_r-\varepsilon, \, u_r]} v(t(u_r-y)) \, \textrm{d}N(ty)}{ct^\rho v(t)} > \delta\bigg\} = 0\end{equation}

and

(25)\begin{equation}\lim_{\varepsilon\to 0+} \limsup_{t\to\infty} {\textrm{P}}\bigg\{\frac{\big|\int_{(u_r-\varepsilon, \, u_r]}f(t(u_r-y),t(u_l-y)) \,\textrm{d}N(ty) \big|}{ct^\rho v(t)}>\delta\bigg\}=0\end{equation}

for any $\delta>0$.

Using (14) we obtain

(26)\begin{align} &\int_{(u_r-\varepsilon, \, u_r]} |{\kern1pt}f(t(u_r-y), t(u_l-y))| \, \textrm{d}N(ty)\\[3pt] &\quad \leq\, 2^{-1}\Big(\int_{(u_r-\varepsilon, \, u_r]} v(t(u_r-y)) \,\textrm{d}N(ty)+ \int_{(u_r-\varepsilon, \, u_r]} v(t(u_l-y)) \, \textrm{d}N(ty)\Big) , \notag\end{align}

which shows that a proof of (25) includes that of (24). Therefore, we shall only prove (25).

We first treat the second summand on the right-hand side of (26). Since

\begin{equation*}\lim_{t\to\infty}\frac{v(t(u_l-y))}{v(t)}=(u_l-y)^\beta\end{equation*}

uniformly in $y\in (u_r-\varepsilon, u_r]$ (recall that $u_r \lt u_l$) we can use the argument given after formula (23) to conclude that

\begin{equation*}\frac{\int_{(u_r-\varepsilon, \, u_r]} v(t(u_l-y))\, \textrm{d}N(ty)}{ct^\rho v(t)}\,{\overset{\textrm{P}}{\to}}\,\int_{(u_r-\varepsilon,\,u_r]}(u_l-y)^\beta \textrm{d}y^\rho,\qquad t\to\infty.\end{equation*}

The right-hand side converges to zero as $\varepsilon\to 0+$.

Now we are passing to the analysis of the first summand on the right-hand side of (26). According to Potter’s bound (Theorem 1.5.6 (iii) in [Reference Bingham, Goldie and Teugels3]), for any chosen $A>1$, $\gamma\in (0,\beta)$ when $\beta>0$, and $\gamma\in (0,\beta+1)$ when $\beta\in ({-}(\rho\wedge 1),0]$, there exists $t_0>0$ such that

\begin{equation*}\frac{v(t(u_r-y))}{v(t)}\leq A (u_r-y)^{\beta-\gamma} \end{equation*}

whenever $t\geq t_0$ and $t(u_r-y)\geq t_0$. Then, for $t\geq t_0/\varepsilon$,

(27)\begin{align}&\frac{\int_{(u_r-\varepsilon, \, u_r]} v(t(u_r-y)) \, \textrm{d}N(ty)}{ct^\rho v(t)}\notag\\ &\quad\leq \frac{\int_{(u_r-\varepsilon, \,u_r-t_0/t]} v(t(u_r-y)) \, \textrm{d}N(ty)}{ct^\rho v(t)}+\frac{\int_{(u_r-t_0/t, \, u_r]} v(t(u_r-y)) \, \textrm{d}N(ty)}{ct^\rho v(t)}\notag\\&\quad\leq \frac{A\int_{(u_r-\varepsilon, \, u_r-t_0/t]}(u_r-y)^{\beta-\gamma}\textrm{d}N(ty)}{ct^\rho}+ \frac{(N(tu_r)-N(tu_r-t_0))\sup_{x\in[0,\,t_0]} v(x)}{ct^\rho v(t)}.\end{align}

We claim that the second term on the right-hand side in (27) converges to zero in probability as $t\to\infty$. For the proof we first note that the function v is locally bounded by assumption. With this at hand, the claim follows from (6) in combination with Markov’s inequality when $\beta\in ({-}(\rho\wedge 1), 0)$, or when $\beta=0$ and ${\lim\inf}_{t\to\infty}v(t)=0$; it follows from $t^{-\rho}(N(t)-N(t-t_0))\,{\overset{\textrm{P}}{\to}}\, 0$ as $t\to\infty$ (which is, in its turn, a consequence of (5)) when $\beta>0$, or when $\beta=0$ and ${\lim\inf}_{t\to\infty}v(t)>0$.

While treating the first summand on the right-hand side in (27) we consider two cases separately.

Case $\beta>0$, in which $\beta-\gamma>0$. The first summand is bounded from above by $A\varepsilon^{\beta-\gamma}N(tu_r)/(ct^\rho)$, which converges to $A\varepsilon^{\beta-\gamma}u_r^\rho$ in probability as $t\to\infty$. Therefore, for any $\delta>0$,

\begin{equation*}\limsup_{t\to\infty}{\textrm{P}}\{A\varepsilon^{\beta-\gamma}N(tu_r)/(ct^\rho) > \delta\} \leq{\textbf{1}}_{[0,A\varepsilon^{\beta-\gamma}u_r^\rho]}(\delta).\end{equation*}

It remains to note that the right-hand side converges to zero as $\varepsilon\to 0+$.

Case $\beta\in ({-}(\rho\wedge 1), 0]$, in which $\beta-\gamma<0$. Invoking Markov’s inequality we see that it suffices to prove that

(28)\begin{equation}\lim_{\varepsilon\to 0+} \limsup_{t\to\infty}\frac{\int_{(u_r-\varepsilon, \, u_r]}(u_r-y)^{\beta-\gamma}\,\textrm{d}L(ty)}{t^\rho} = 0,\end{equation}

where $L(t)\,:\!={\textrm{E}}( N(t))$ for $t\geq 0$.

Write, for large enough t, positive constants $C_1$ and $C_2$, and $i=1,2$,

\begin{align*}&\int_{(u_r-\varepsilon, u_r]}(u_r-y)^{\beta-\gamma}\, \textrm{d}L(ty)\\&\quad\leq \sum_{k=0}^{[\varepsilon t]}\int_{(u_r-t^{-1}(k+1), u_r-t^{-1}k]}(u_r-y)^{\beta-\gamma}\,\textrm{d}L(ty)\\&\quad\leq \sum_{k=0}^{[\varepsilon t]}(k/t)^{\beta-\gamma}(L(tu_r-k)-L(tu_r-(k+1)))\\&\quad\leq\begin{cases}C_1t^{-(\beta-\gamma)}\sum_{k=0}^{[\varepsilon t]}k^{\beta-\gamma}(tu_r-k)^{\rho-1} & \text{if } \ \rho\geq 1, \\[5pt] C_2t^{-(\beta-\gamma)}\sum_{k=0}^{[\varepsilon t]}k^{\beta-\gamma}(tu_r-k+1)^{\rho-1} & \text{if} \ \rho\in(0,1),\end{cases}\\&\quad\leq C_it^{-(\beta-\gamma)}\sum_{k=1}^{[\varepsilon t]}\int_{k-1}^k y^{\beta-\gamma}(tu_r-y)^{\rho-1}\textrm{d}y \\&\quad\leq C_i t^{-(\beta-\gamma)}\int_0^{\varepsilon t} y^{\beta-\gamma}(tu_r-y)^{\rho-1}\textrm{d}y\\& =C_i t^\rho \int_0^\varepsilon y^{\beta-\gamma}(u_r-y)^{\rho-1}\textrm{d}y,\end{align*}

where the third inequality is a consequence of (6), and we take $i=1$ when $\rho\geq 1$ and $i=2$ when $\rho\in (0,1)$. This proves (28), and (17) follows.

Proof of (18). The following inequality holds for real $a_1,\ldots,a_m$:

\begin{align*}(a_1+\cdots+a_m)^2{\textbf{1}}_{\{|a_1+\cdots+a_m|>y\}}&\leq(|a_1|+\cdots+|a_m|)^2{\textbf{1}}_{\{|a_1|+\cdots+|a_m|>y\}}\notag\\&\leq m^2 (|a_1| \vee\cdots\vee |a_m|)^2{\textbf{1}}_{\{m(|a_1| \vee\cdots\vee|a_m|)>y\}}\notag\\&\leq m^2\big(a_1^2{\textbf{1}}_{\{|a_1|>y/m\}}+\cdots+a_m^2{\textbf{1}}_{\{|a_m|>y/m\}}\big).\end{align*}

This in combination with the regular variation of $t^\rho v(t)$ guarantees it is sufficient to show that

(29)\begin{equation}\sum_{k\geq 0} {\textbf{1}}_{\{T_k \leq t\}} {\textrm{E}}_k \bigg(\frac{X^2_{k+1}(t-T_k)}{t^\rho v(t)} {\textbf{1}}_{\{|X_{k+1}(t-T_k)|>y\sqrt{t^\rho v(t)}\}}\bigg)\,\stackrel{{\textrm{P}}}{\to}\, 0\end{equation}

for all $y>0$.

By Proposition 1.5.8 in [Reference Bingham, Goldie and Teugels3], $t^\rho v(t)\sim (\rho+\beta)\int_0^t y^{\rho-1}v(y)\,\textrm{d}y$ as $t\to\infty$. Therefore, while proving Theorem 2.1 we can interchangeably use $t^\rho v(t)$ or $(\rho+\beta)\int_0^ty^{\rho-1}v(y)\,\textrm{d}y$ in the denominator of (7). Hence, without loss of generality, we can and do assume that $t^\rho v(t)$ is nondecreasing, for so is its asymptotic equivalent. Thus, relation (29) follows if we can prove that

\begin{equation*}\frac{1}{t^\rho v(t)}\int_{[0,\,t]} v_y(t-x) \, \textrm{d}N(x)\,{\overset{\textrm{P}}{\to}}\,0,\qquad t\to\infty,\end{equation*}

for all $y>0$.

Fix any $y>0$. Formula (4) ensures that, given $\varepsilon>0$, there exists $t_0>0$ such that $v_y(t)\leq\varepsilon v(t)$ whenever $t\geq t_0$. With this at hand we obtain

\begin{align*}\frac{1}{t^\rho v(t)}\int_{[0,\,t]} v_y(t-x) \, \textrm{d}N(x)&=\frac{1}{t^\rho v(t)}\Big(\int_{[0,\,t-t_0]} v_y(t-x) \,\textrm{d}N(x)\\&\quad+\int_{(t-t_0,\,t]} v_y(t-x) \, \textrm{d}N(x)\Big)\\&\qquad\leq \frac{\varepsilon}{t^\rho v(t)}\int_{[0,\,t]}v(t-x) \, \textrm{d}N(x)\\&\quad+\frac{(N(t)-N(t-t_0))\sup_{x\in[0,\,t_0]}v_y(x)}{t^\rho v(t)}.\end{align*}

Using (19) with $u_i=1$ and denoting the first summand on the right-hand side by $J(t,\varepsilon)$, we conclude that, for any $\delta>0$,

\begin{equation*}\lim_{\varepsilon\to0+}\limsup_{t\to\infty} {\textrm{P}}\{J(t,\varepsilon)> \delta\}=0.\end{equation*}

Since $v_y(t)\leq v(t)$ for all $t\geq 0$, and v is locally bounded by assumption, so is $v_y$. Therefore, the second summand on the right-hand side converges to zero in probability as $t\to\infty$ by the same reasoning as given for the second summand on the right-hand side of (27).

The proof of Theorem 2.1 is complete.

Acknowledgements

The authors thank the two anonymous referees for several useful comments. The support of the Chinese Belt and Road Programme DL20180077 is gratefully acknowledged.

References

Asmussen, S. and Hering, H. (1983). Branching Processes. Boston, Birkhäuser.
Billingsley, P. (1968). Convergence of Probability Measures. New York, Wiley.
Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1989). Regular Variation. Cambridge University Press.
Gnedin, A. and Iksanov, A. (2018). On nested infinite occupancy scheme in random environment. Preprint, arXiv:1808.00538.
Hall, P. and Heyde, C. C. (1980). Martingale Limit Theory and its Applications. New York, Academic Press.
Iksanov, A. (2016). Renewal Theory for Perturbed Random Walks and Similar Processes. Cham, Birkhäuser.
Iksanov, A., Jedidi, W. and Bouzzefour, F. (2018). Functional limit theorems for the number of busy servers in a $G/G/\infty$ queue. J. Appl. Prob. 55, 15–29.
Iksanov, A. and Kabluchko, Z. (2018). A functional limit theorem for the profile of random recursive trees. Electron. Commun. Probab. 23, 87.
Iksanov, A. and Kabluchko, Z. (2018). Weak convergence of the number of vertices at intermediate levels of random recursive trees. J. Appl. Prob. 55, 1131–1142.
Iksanov, A., Marynych, A. and Meiners, M. (2017). Asymptotics of random processes with immigration I: Scaling limits. Bernoulli 23, 1233–1278.
Iksanov, A. and Rashytov, B. (2019). A functional limit theorem for general shot noise processes. To appear in J. Appl. Prob. 57.
Marynych, A. and Verovkin, G. (2017). A functional limit theorem for random processes with immigration in the case of heavy tails. Mod. Stoch.: Theory Appl. 4, 93–108.
Pang, G. and Taqqu, M. S. (2019). Nonstationary self-similar Gaussian processes as scaling limits of power-law shot noise processes and generalizations of fractional Brownian motion. High Frequency 2, 95–112.
Pang, G. and Zhou, Y. (2018). Functional limit theorems for a new class of non-stationary shot noise processes. Stoch. Process. Appl. 128, 505–544.