
Shot noise processes with randomly delayed cluster arrivals and dependent noises in the large-intensity regime

Published online by Cambridge University Press:  22 November 2021

Bo Li*
Affiliation: Nankai University
Guodong Pang**
Affiliation: Pennsylvania State University
*Postal address: School of Mathematics and LPMC, Nankai University, Tianjin, 300071 China. Email address: libo@nankai.edu.cn
**Postal address: The Harold and Inge Marcus Department of Industrial and Manufacturing Engineering, College of Engineering, Pennsylvania State University, University Park, PA 16802, USA. Email address: gup3@psu.edu

Abstract

We study shot noise processes with cluster arrivals, in which entities in each cluster may experience random delays (possibly correlated), and noises within each cluster may be correlated. We prove functional limit theorems for the process in the large-intensity asymptotic regime, where the arrival rate gets large while the shot shape function, cluster sizes, delays, and noises are unscaled. In the functional central limit theorem, the limit process is a continuous Gaussian process (assuming the arrival process satisfies a functional central limit theorem with a Brownian motion limit). We discuss the impact of the dependence among the random delays and among the noises within each cluster using several examples of dependent structures. We also study infinite-server queues with cluster/batch arrivals where customers in each batch may experience random delays before receiving service, with similar dependence structures.

Type: Original Article

Copyright: © The Author(s) 2021. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

We consider the shot noise process $X=\{X(t)\;:\;t\ge 0\}$ defined by

(1.1) \begin{equation}X(t) := \sum_{i=1}^{A(t)} \sum_{j=1}^{K_i} H(t-\tau_i- \xi_{ij})Z_{ij}, \quad t \ge 0.\end{equation}

Here $A=\{A(t)\;:\;t\ge 0\}$ is a simple point process of clusters with event times $\{\tau_i\;:\;i \ge 1\}$; that is, $A(t) = \max\{k\ge 0\;:\;\tau_k \le t\}$ with $\tau_0\equiv 0$. $K_i$ represents the number of arrivals in cluster i. Entities of cluster i may arrive at times subsequent to the cluster time $\tau_i$, that is, at times $\tau_i + \xi_{ij}$, $j=1,\dots, K_i$, with $\xi_{ij} \ge 0$. For each cluster i, the random delays $\{\xi_{ij}\;:\;j \in {\mathbb N}\}$ can be correlated and are allowed to be zero with positive probability. The real-valued variables $Z_{ij}$ represent the noises, and for each cluster i, $\{Z_{ij}\;:\;j \in {\mathbb N}\}$ may be correlated. For each cluster i, the variables $K_i$, $\{\xi_{ij}\;:\; j \in {\mathbb N}\}$, and $\{Z_{ij}\;:\; j \in {\mathbb N}\}$ are mutually independent, and they are also independent across clusters. In addition, the cluster variables $(K_i, \{\xi_{ij}\}_j, \{Z_{ij}\}_j)$ are independent of the arrival process A(t) (and of the event times $\{\tau_i\}$). The function $H\,:\,{\mathbb R}_{+}\to {\mathbb R}$ is the shot shape (response) function.
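As a concrete illustration, the following sketch simulates (1.1) under hypothetical distributional choices that are ours, not imposed by the model: Poisson cluster arrivals, geometric cluster sizes, exponential delays and noises, and an exponential shot shape. The sample mean of X(t) is then checked against the closed-form expectation implied by these choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative choices (not prescribed by the model): Poisson cluster
# arrivals at rate lam, Geometric cluster sizes K_i with mean m_K,
# i.i.d. Exp(1) delays xi_{ij}, i.i.d. Exp(1) noises Z_{ij}, and shot
# shape H(u) = exp(-u) for u >= 0 (and H(u) = 0 for u < 0).
def H(u):
    return np.where(u >= 0.0, np.exp(-np.maximum(u, 0.0)), 0.0)

def sample_X(t, lam=2.0, m_K=3.0):
    """One sample of X(t) in (1.1)."""
    n_clusters = rng.poisson(lam * t)                # A(t)
    taus = rng.uniform(0.0, t, size=n_clusters)      # cluster times tau_i
    x = 0.0
    for tau in taus:
        K = rng.geometric(1.0 / m_K)                 # cluster size K_i
        xi = rng.exponential(1.0, size=K)            # delays xi_{ij}
        Z = rng.exponential(1.0, size=K)             # noises Z_{ij}
        x += float(np.sum(H(t - tau - xi) * Z))
    return x

# With these choices, E[X(t)] = lam * m_K * m_Z * (1 - (1+t)exp(-t)),
# since E[H(u - xi)] = u*exp(-u) for Exp(1) delays, and m_Z = 1 here.
t = 5.0
mean_X = float(np.mean([sample_X(t) for _ in range(2000)]))
expected_mean = 2.0 * 3.0 * (1.0 - (1.0 + t) * np.exp(-t))
```

The simulated sample mean agrees with the analytical value to within Monte Carlo error.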

This process can be used to model financial markets with clustering events, insurance claims with cluster arrivals, and noise in certain electronic components. For example, insurance claims may arrive in clusters, possibly owing to natural disasters or accidents, and the claims in each cluster may arrive after random delays. The claim sizes may also be dependent because of the clustering effect. Shot noise processes with cluster arrivals and various forms of the shot shape function have been studied in [Reference Basrak, Wintenberger and Žugec1, Reference Gilchrist and Thomas7, Reference Ramirez-Perez and Serfling31, Reference Ramirez-Perez and Serfling32]. These papers investigate the general formula for the characteristic functional [Reference Gilchrist and Thomas7, Reference Ramirez-Perez and Serfling31], long-range dependence under certain structural conditions on the response function [Reference Ramirez-Perez and Serfling31], and central limit theorems with normal [Reference Ramirez-Perez and Serfling32] and stable [Reference Basrak, Wintenberger and Žugec1] limit distributions. Similar results for shot noise processes without cluster arrivals were established in [Reference Hsing and Teugels9, Reference Lane20, Reference Lane21, Reference Rice33, Reference Samorodnitsky, Heyde, Prohorov, Pyke, Rachev and Springer34].

In this paper we establish a functional law of large numbers (FLLN) and a functional central limit theorem (FCLT) for the process X(t) in (1.1) in the large-intensity asymptotic regime, in which the arrival rate is large while the response function and the delay and noise variables are fixed (unscaled), and there is no scaling in time. Shot noise processes have been studied in this asymptotic regime in [Reference Biermé and Desolneux2, Reference Heinrich and Schmidt8, Reference Iglehart10, Reference Pang and Zhou27, Reference Pang and Zhou29, Reference Papoulis30]. In this regime, the limit processes in the FCLT are Gaussian processes of a particular structure (assuming the arrival process satisfies an FCLT with a Brownian limit). This asymptotic regime differs from another commonly used scaling regime, in which both time and space are scaled (note that scaling time affects both A(t) and $H(t-\cdot)$), and which results in self-similar Gaussian process and fractional Brownian motion limits [Reference Klüppelberg and Kühn17, Reference Klüppelberg and Mikosch18, Reference Pang and Taqqu24] and stable motion limits [Reference Iksanov11–Reference Iksanov, Marynych and Meiners15, Reference Klüppelberg, Mikosch and Schärf19]. Among these scaling results, models with renewal arrivals, referred to as random processes with immigration at epochs of a renewal process, have been studied in [Reference Iksanov12, Reference Iksanov, Marynych and Meiners14, Reference Iksanov, Marynych and Meiners15, Reference Marynych22, Reference Marynych and Verovkin23], and models with an arbitrary point process have also recently been studied in [Reference Dong and Iksanov6, Reference Iksanov and Rashytov16].

In this asymptotic regime, we assume that the arrival process, which may be time-inhomogeneous, satisfies an FCLT as the arrival rate gets large, with a stochastic limit process having continuous paths (including Brownian motion and other Gaussian processes). In the FLLN, we show that the limit is a deterministic function, which is not affected by the dependence between the random delay times, or by that between the noises, in each cluster. In the FCLT, we show that the limit process is composed of four mutually independent processes, one being an integral functional of the arrival limit process, and the other three being continuous Gaussian processes, capturing the variabilities of the random delays, the noises, and the clusters. We give a few examples to illustrate how the covariance functions of these Gaussian processes depend on the correlations. In the examples, we consider the random delays in the following scenarios: (a) independent and identically distributed (i.i.d.), (b) given by the event times of a renewal counting process, (c) symmetrically correlated among the arrivals in each cluster, and (d) given by a discrete autoregressive process of order one (DAR(1)). We consider the noises under the same scenarios (a), (c), and (d) as for the random delays.

We also study an infinite-server queueing model with cluster (batch) arrivals, where customers in each batch may experience some random delay before receiving service. Using the same notation as above ( $Z_{ij}\ge 0$ representing service times), we may express the number of customers in service at time t as

(1.2) \begin{equation}X(t) := \sum_{i=1}^{A(t)} \sum_{j=1}^{K_i} {\mathbf 1}(0 \le t-\tau_i- \xi_{ij} < Z_{ij}), \quad t \ge 0.\end{equation}

In [Reference Pang and Whitt25], heavy-traffic limits (FLLN and FCLT) were established for infinite-server queues with batch arrivals where service times within each batch may be correlated, as a consequence of results for infinite-server queues with weakly dependent service times in [Reference Pang and Whitt26] (see also [Reference Pang and Zhou28]). That approach, however, cannot accommodate (dependent) random delays for the customers in each batch. In the present model we tackle the problem with random delays and allow dependence among the random delays as well as among the service times. We illustrate the impact of the dependence among the random delays, and of that among the service times, in the scenarios discussed above. When the arrival process is stationary, we discuss the effect of the correlations upon the steady-state mean and variance of the limit process. These results have implications for comparing batch delays and customer delays in service systems (see, e.g., [Reference Whitt37]).
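The queueing representation (1.2) can also be simulated directly. The sketch below uses the same hypothetical choices as before (Poisson batch arrivals, geometric batch sizes, exponential delays and service times; none of these are required by the model) and compares the sample mean of the number in service with its closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative sketch of (1.2): Poisson batch arrivals at rate lam,
# Geometric batch sizes with mean m_K, i.i.d. Exp(1) delays before
# entering service, i.i.d. Exp(1) service times (all hypothetical).
def queue_length(t, lam=2.0, m_K=3.0):
    """Number of customers in service at time t."""
    n_batches = rng.poisson(lam * t)
    taus = rng.uniform(0.0, t, size=n_batches)       # batch arrival times
    total = 0
    for tau in taus:
        K = rng.geometric(1.0 / m_K)                 # batch size K_i
        xi = rng.exponential(1.0, size=K)            # delay before service
        Z = rng.exponential(1.0, size=K)             # service time
        age = t - tau - xi                           # time since service began
        total += int(np.sum((age >= 0.0) & (age < Z)))
    return total

# Mean number in service: lam * m_K * int_0^t P(xi <= u < xi + Z) du,
# and P(xi <= u < xi + Z) = u*exp(-u) for Exp(1) delay and service.
t = 8.0
mean_q = float(np.mean([queue_length(t) for _ in range(2000)]))
expected_q = 2.0 * 3.0 * (1.0 - (1.0 + t) * np.exp(-t))
```

For large t the expected number in service approaches $\lambda m_K$, the batch analogue of the classical infinite-server formula.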

1.1. Organization of the paper

We describe the model in detail and state the assumptions and main results in Section 2. We then give some examples in Section 3, and discuss the impact of dependence among random delays and among noises within each cluster. In Section 4, we state the results and examples for infinite-server queues with batch arrivals. The proofs are given in Sections 5 and 6.

1.2. Notation

All random variables and processes are defined in a common complete probability space $\big(\Omega,{\mathcal{F}},\{{\mathcal{F}}_{t}\}_{t\geq 0},\mathbb{P}\big)$ . Throughout the paper, ${\mathbb N}$ denotes the set of natural numbers, and ${\mathbb R}$ (resp. ${\mathbb R}_{+}$ ) denotes the space of real (resp. nonnegative) numbers. For $a,b\in{\mathbb R}$ , we write $a\wedge b=\min\{a,b\}$ and $a\vee b=\max\{a,b\}$ . Also, $a^{+}=a\vee0$ . Let ${\mathbb D}={\mathbb D}({\mathbb R}_{+},{\mathbb R})$ denote the ${\mathbb R}$ -valued function space of all càdlàg functions on ${\mathbb R}_{+}$ . Let $({\mathbb D},J_{1})$ denote the space ${\mathbb D}$ equipped with the Skorohod $J_{1}$ topology (see [Reference Billingsley3, Reference Whitt38]), which is complete and separable. Let ${\mathbb C}$ be the subset of ${\mathbb D}$ consisting of continuous functions. The symbols $\to$ and $\Rightarrow$ indicate convergence of real numbers and convergence in distribution, respectively. Let $m\in{\mathbb D}$ be a function of locally bounded variation. The Stieltjes integral with respect to dm is denoted by

\begin{equation*}\int_{a}^{b}f(z)m(dz)=\int_{(a,b]}f(z)m(dz),\end{equation*}

for every Borel measurable function f; this is the integral of f on (a, b], and $\int_{a}^{b}m(dz)=m(a,b]=m(b)-m(a)$ for any $a<b$ . If the integral is on [a, b], we write $\int_{a-}^{b}f(z)m(dz)$ . For every $g\in{\mathbb D}$ , the integral $\displaystyle\int_{(a,b]}m(b-z)g(dz)$ is defined by formal integration by parts:

(1.3) \begin{equation}\begin{split}\int_{(a,b]}m(b-z)g(dz)&:= g(b)m(0-)- g(a)m\big((b-a)-\big)-\int_{(a,b]}g(z)m(b-dz)\\&=g(b)m(0)-g(a)m(b-a)- \int_{[a,b)}g(z)m(b-dz)\\&=g(b)m(0)-g(a)m(b-a)+ \int_{(0,b-a]}g(b-z)m(dz).\end{split}\end{equation}

See, e.g., [Reference Shiryaev35, p. 206].

2. Model and results

We consider a sequence of shot noise processes $X^n$ indexed by n in the large-intensity asymptotic regime, where the arrival rate of clusters gets large, of order O(n), while the distributions of the cluster sizes, random delays, and noises, as well as the shot shape function, are fixed. Define the fluid-scaled process $\bar{X}^n := n^{-1} X^n$ and the diffusion-scaled process $\hat{X}^n := \sqrt{n} (\bar{X}^n- \bar{X})$, where $\bar{X}$ is the limit of $\bar{X}^n$.

We make the following assumptions on the input data.

Assumption 2.1. Assume that $A^{n}(0)=0$ . There exist a deterministic, continuous, and increasing function $\Lambda\,:\,{\mathbb R}_{+}\rightarrow{\mathbb R}_{+}$ and a continuous stochastic process $\hat{A}$ such that $\Lambda(0)=0$ and

\begin{equation*}\hat{A}^n:=n^{1/2}(\bar{A}^n-\Lambda) \Rightarrow \hat{A} \quad\mbox{in}\quad ({\mathbb D}, J_1) \quad\mbox{as}\quad n\rightarrow\infty,\end{equation*}

where $\bar{A}^n=n^{-1} A^n$ . This implies that $\bar{A}^n \Rightarrow \Lambda$ in $({\mathbb D},J_1)$ as $n\to\infty$ .
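A canonical example satisfying Assumption 2.1 is a Poisson process $A^n$ with rate $n\lambda$, for which $\Lambda(t)=\lambda t$ and $\hat{A}=\sqrt{\lambda}\,B$ for a standard Brownian motion B. The following sketch (parameter values are ours, for illustration) checks the variance of $\hat{A}^n(t)$ at a fixed time against the limit variance $\lambda t$.

```python
import numpy as np

rng = np.random.default_rng(2)

# Poisson example of Assumption 2.1: A^n(t) ~ Poisson(n*lam*t), so
# hat A^n(t) = sqrt(n)*(A^n(t)/n - lam*t) should have variance ~ lam*t,
# matching Var(sqrt(lam)*B(t)) for a standard Brownian motion B.
lam, t, n = 1.5, 4.0, 400
A_n_t = rng.poisson(lam * n * t, size=5000)        # samples of A^n(t)
hat_A_n_t = np.sqrt(n) * (A_n_t / n - lam * t)     # diffusion scaling
var_est = float(np.var(hat_A_n_t))
var_limit = lam * t                                # = 6.0 here
```

The empirical variance matches the limit variance to within sampling error, consistent with the FCLT.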

Assumption 2.2. Assume that the cluster variables $(K_{i},\{\xi_{ij}\}_{j},\{Z_{ij}\}_{j})_i$ are independent of the arrival process $A^{n}$ (and the associated event times $\{\tau^n_i\}$), and that the variables $K_i$, $\{\xi_{ij},j \in {\mathbb N}\}$, and $\{Z_{ij}, j \in {\mathbb N}\}$ are mutually independent for each cluster i, as well as independent across clusters i. Assume the cluster sizes $K_i$ are i.i.d. with finite mean $m_K$ and variance $\sigma^2_K$, and let $p_{k} = P(K_i=k)$ for $k \in {\mathbb N}$, so that $\sum_k p_{k}=1$. For each cluster i, the random delays $\{\xi_{ij},j \in {\mathbb N}\}$ may be dependent, with $\xi_{ij} \ge 0$ having marginal distribution $G_{j}$ (we allow $\xi_{ij} = 0$ with positive probability), and the noises $\{Z_{ij}, j \in {\mathbb N}\}$ are real-valued and may also be correlated, with a common marginal distribution F. Assume that the $Z_{ij}$ have finite mean $m_Z$ and variance $\sigma^2_Z$. In addition, assume that $\mathbb{E}[Z_{ij}^4]<\infty$ and $\mathbb{E}[K_{i}^{4}]<\infty$.

For notational brevity, we occasionally drop the index i for the variables $K_i$ , $\{\xi_{ij},j \in {\mathbb N}\}$ , and $\{Z_{ij}, j \in {\mathbb N}\}$ , since they have a common joint law for each i.

Assumption 2.3. Let $H\,:\,{\mathbb R}_{+}\to{\mathbb R}_{+}$ be a monotone function and $H(u)=0$ for $u<0$ . For every fixed $T>0$ , there exists $\gamma>\frac{1}{4}$ such that

(2.1) \begin{equation}\begin{array}{c}\displaystyle\sup_{0\leq s<t\leq T} \frac{|H(t)-H(s)|}{(t-s)^{\gamma}}<\infty,\\ \displaystyle\sup_{\substack{0\leq s<t\leq T\\ (s,t]\cap{\mathcal{L}}_{1}=\emptyset}}\;\;\sup_{j}\frac{\mathbb{P}\big(\xi_{j}\in(s,t]\big)}{(t-s)^{2\gamma}}<\infty,\quad\textit{and}\quad\displaystyle\sup_{\substack{0\leq s<r<t\leq T\\ (s,t]\cap{\mathcal{L}}_{2}=\emptyset}} \;\;\sup_{j,j^\prime}\frac{\mathbb{P}\big(\xi_{j}\in(s,r],\xi_{j^\prime}\in(r,t]\big)}{(t-s)^{4\gamma}}<\infty,\end{array}\end{equation}

where ${\mathcal{L}}_{1}$ and ${\mathcal{L}}_{2}$ are two sets with no accumulation points on $[0,\infty)$, with ${\mathcal{L}}_{1}$ being the collection of discontinuity points of the distribution functions of the $\xi_j$.

Remark 2.1. The fourth moments in Assumption 2.2 are needed in the proofs of tightness, where moments of increments are estimated, while the second moments are used in the convergence of the finite-dimensional distributions. In Assumption 2.3, the marginal distributions of the $\xi_{j}$ are assumed to be piecewise Hölder continuous on ${\mathbb R}_+$, with only finitely many discontinuities allowed in every finite interval. For the joint distributions of $(\xi_{j},\xi_{j^\prime})$, we impose the regularity condition on $\mathbb{P}\big(\xi_{j}\in(s,r],\xi_{j^\prime}\in(r,t]\big)$ over (s, r] and (r, t] for $s<r<t$ in the third display; it is applied in (5.2) in the tightness proofs of Lemmas 5.1 and 5.8. The set ${\mathcal{L}}_{2}$ is chosen such that the last inequality in (2.1) holds. We give an example to illustrate the sets ${\mathcal{L}}_{1}$ and ${\mathcal{L}}_{2}$: if $\big(\xi_{j},\xi_{j^\prime}\big)$ is discretely distributed on ${\mathbb R}_{+}^{2}$ with support $\{(x_{kjj^\prime},y_{kjj^\prime}),\, k\geq1\}_{j,j^\prime}$ for $j,j^\prime\in{\mathbb N}$, then ${\mathcal{L}}_{1}=\{x_{kjj^\prime} \,|\, k,j,j^\prime\geq1\}$ and ${\mathcal{L}}_{2}=\{x_{kjj^\prime} \,|\, x_{kjj^\prime}=y_{kjj^\prime},\, k,j,j^\prime\ge1\}$.

In addition, we have assumed Hölder continuity for the function H; however, our main results, Theorems 2.1 and 2.2, also hold if H has finitely many jumps on every [0, T]. Handling such jumps requires much heavier notation in the proofs, which we omit for brevity.

For all $u\in{\mathbb R}_{+}$ , let

(2.2) \begin{equation}h_{j}(u):=\mathbb{E}\big[H(u-\xi_{j})\big]\quad\text{and}\quad h(u):=\mathbb{E}\bigg[\sum_{j=1}^{K}H(u-\xi_{j})\bigg]=\sum_{j\geq1}h_{j}(u)\mathbb{P}(K\geq j).\end{equation}

The second equality for h(u) follows from the independence between $K_i$ and $\{\xi_{ij},\, j \in {\mathbb N}\}$.
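The functions in (2.2) admit closed forms in simple cases. The sketch below uses hypothetical choices (exponential shot shape H and i.i.d. Exp(1) delays, neither taken from the paper's assumptions) for which $h_j(u)=u e^{-u}$, and checks this by Monte Carlo; with i.i.d. delays, (2.2) reduces to $h(u)=m_K h_1(u)$.

```python
import numpy as np

rng = np.random.default_rng(3)

# Numerical illustration of (2.2) under hypothetical choices:
# H(u) = exp(-u) for u >= 0 and i.i.d. Exp(1) delays, for which
# h_j(u) = E[H(u - xi_j)] = u*exp(-u), and, since the marginals are
# identical, h(u) = sum_j h_j(u)*P(K >= j) = m_K * h_1(u).
def H(u):
    return np.where(u >= 0.0, np.exp(-np.maximum(u, 0.0)), 0.0)

u, m_K = 2.0, 3.0
xi = rng.exponential(1.0, size=200_000)
h1_mc = float(np.mean(H(u - xi)))                # Monte Carlo h_1(u)
h1_exact = u * np.exp(-u)                        # closed form u*e^{-u}
h_exact = m_K * h1_exact                         # h(u) in the i.i.d. case
```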

Remark 2.2. By definition, $h_{j}(z)=h(z)=0$ for $z<0$ , and $\displaystyle h_{j}(0)=H(0)\mathbb{P}(\xi_{j}=0)$ , $\displaystyle h(0)=\sum_{j\geq1}h_{j}(0)\mathbb{P}(K\geq j)$ , so $h_{j}, h$ may fail to be continuous at 0. Recall that we allow the random delays to take zero values with a positive probability. Note that $h_{j}$ is independent of the cluster i and $h\in{\mathbb D}$ is monotone on $[0,\infty)$ under Assumption 2.3, which will be proved in Lemma 5.3.

Theorem 2.1. Under Assumptions 2.2 and 2.3, and assuming that $\bar{A}^n\Rightarrow \Lambda$ in ${\mathbb D}$ as $n\to \infty$,

(2.3) \begin{equation}\bar{X}^n \Rightarrow \bar{X} \quad\mbox{in}\quad {\mathbb D} \quad\mbox{as}\quad n \to \infty,\end{equation}

where the limit $\bar{X}$ is a deterministic function given by

(2.4) \begin{equation} \bar{X}(t) = m_Z \int_0^t h(t-s) \Lambda(ds), \quad t \ge 0.\end{equation}

We remark that the dependence among the noises $\{Z_{ij}\}_j$ and that among the random delays $\{\xi_{ij}\}_j$ do not affect the fluid limit, which depends only on the marginal distributions of the $\xi_{ij}$ and the mean of the $Z_{ij}$.
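The FLLN can be observed on a single sample path: for large n, one path of $\bar{X}^n(t)$ concentrates around the deterministic value (2.4). The sketch below uses the same illustrative setting as earlier examples (Poisson($n\lambda$) clusters, geometric sizes, exponential delays, noises, and shot shape; these choices are ours), for which $h(u)=m_K u e^{-u}$ and $\bar{X}(t) = m_Z \lambda m_K (1-(1+t)e^{-t})$.

```python
import numpy as np

rng = np.random.default_rng(4)

# Single-path check of the FLLN (2.3)-(2.4): Poisson(n*lam) cluster
# arrivals (so Lambda(t) = lam*t), Geometric cluster sizes with mean
# m_K, i.i.d. Exp(1) delays and noises, H(u) = exp(-u) for u >= 0.
def H(u):
    return np.where(u >= 0.0, np.exp(-np.maximum(u, 0.0)), 0.0)

def X_n(t, n, lam=2.0, m_K=3.0):
    """One path value X^n(t), with cluster arrival rate n*lam."""
    m = rng.poisson(n * lam * t)
    taus = rng.uniform(0.0, t, size=m)
    x = 0.0
    for tau in taus:
        K = rng.geometric(1.0 / m_K)
        xi = rng.exponential(1.0, size=K)
        Z = rng.exponential(1.0, size=K)
        x += float(np.sum(H(t - tau - xi) * Z))
    return x

t, n = 5.0, 2000
xbar_n = X_n(t, n) / n                           # fluid-scaled single path
xbar_limit = 1.0 * 2.0 * 3.0 * (1.0 - (1.0 + t) * np.exp(-t))
```

A single fluid-scaled path at n = 2000 is already within a few percent of the limit, with fluctuations of order $n^{-1/2}$ as predicted by the FCLT.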

We next state the FCLT for the diffusion-scaled process $\hat{X}^n$ . We first introduce some notation. Let

\begin{equation*}\varrho_{ij} := Z_{ij}- \mathbb{E}[Z_{ij}], \quad\varsigma_{ij}(u) := H(u-\xi_{ij})-h_{j}(u),\quad\vartheta_{i}(u) :=\sum_{j=1}^{K_{i}}\big(h_{j}(u)-h(u)\big), \quad u \in{\mathbb R}_{+}. \end{equation*}

Again, for notational convenience, we sometimes drop the index i in $\varrho_{ij}$ and $\varsigma_{ij}$ . Define the following quantities:

(2.5) \begin{equation}\begin{split}\displaystyle r_{2}(t,s)=\mathbb{E}\bigg[\sum_{j,j^\prime}^{K} \varsigma_{j}(t)\varsigma_{j^\prime}(s)\bigg] \quad\text{and}\quad R_{2}(t,s)=\int_{0}^{t\wedge s}r_{2}(t-u,s-u)\Lambda(du),\\\displaystyle r_{3}(t,s)=\mathbb{E}\bigg[\sum_{j,j^\prime}^{K} h_{j}(t)h_{j^\prime}(s)\bigg]-h(t)h(s)\quad\text{and}\quad R_{3}(t,s)=\int_{0}^{t\wedge s}r_{3}(t-u,s-u)\Lambda(du),\\\displaystyle r_{4}(t,s)=\mathbb{E}\bigg[\sum_{j,j^\prime}^{K} \varrho_{j}\varrho_{j^\prime}H(t-\xi_{j})H(s-\xi_{j^\prime})\bigg]\quad\text{and}\quad R_{4}(t,s)=\int_{0}^{t\wedge s}r_{4}(t-u,s-u)\Lambda(du).\end{split}\end{equation}

Theorem 2.2. Under Assumptions 2.1, 2.2, and 2.3,

(2.6) \begin{equation}\hat{X}^n \Rightarrow \hat{X} \quad\mbox{in}\quad ({\mathbb D},J_1) \quad\mbox{as}\quad n \to \infty,\end{equation}

where the limit is $\hat{X}(t) = m_Z \big(\hat{X}_{1}(t)+ \hat{X}_{2}(t)+\hat{X}_{3}(t)\big) + \hat{X}_{4}(t)$, a sum of four mutually independent processes defined as follows:

(2.7) \begin{equation} \hat{X}_{1}(t)= \int_{(0,t]}h(t-s)\hat{A}(ds)= \hat{A}(t)h(0)-\int_{[0,t)}\hat{A}(s)h(t-ds),\end{equation}

and the $\hat{X}_\ell$ are continuous Gaussian processes with covariance functions $R_\ell$ , $\ell =2,3,4$ , defined in (2.5).

Remark 2.3. If the limit of the diffusion-scaled arrival process is a Brownian motion (BM), that is, $\hat{A} = \sqrt{\lambda c_a^{2}}B_a$, where $\lambda$ is the arrival rate (i.e., $\Lambda(t) = \lambda t$ for $t\ge 0$), $c_a^2$ is a variability parameter, and $B_a$ is a standard BM, then

\begin{equation*}\hat{X}_{1}(t)=\sqrt{\lambda}c_{a}\int_{0}^{t}h(t-s)dB_{a}(s), \quad t \ge 0.\end{equation*}

In particular, if the arrival process A(t) is a renewal process with interarrival times of mean $\lambda^{-1}$ and variance $\sigma^2_a$, then $c_a^2 = \sigma^2_a/(\lambda^{-1})^2 = \lambda^2 \sigma^2_a$ is the squared coefficient of variation of the interarrival times. In general, $c_a^2$ captures the variability of the arrival process. With a non-stationary arrival rate, the limit can be $\hat{A}(t) = c_a B_a(\Lambda(t))$, which gives

\begin{equation*}\hat{X}_{1}(t)=c_{a}\int_{0}^{t}h(t-s)d B_{a}(\Lambda(s)), \quad t \ge 0.\end{equation*}
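In the BM case, $\hat{X}_1(t)$ is a centered Gaussian variable with variance $\lambda c_a^2 \int_0^t h(u)^2\,du$, by the Itô isometry. The following sketch checks this via a Riemann-sum discretization of the Wiener integral, using the illustrative $h(u)=m_K u e^{-u}$ of the earlier examples (all parameter values are ours).

```python
import numpy as np

rng = np.random.default_rng(5)

# hat X_1(t) = sqrt(lam)*c_a * int_0^t h(t-s) dB_a(s) is a centered
# Gaussian variable with variance lam*c_a^2 * int_0^t h(u)^2 du (Ito
# isometry).  We check this by discretizing the Wiener integral with
# the illustrative h(u) = m_K*u*exp(-u).
lam, c_a, m_K, t, steps, paths = 2.0, 1.2, 3.0, 5.0, 500, 4000
dt = t / steps
s = np.arange(steps) * dt
h = lambda u: m_K * u * np.exp(-u)

dB = rng.normal(0.0, np.sqrt(dt), size=(paths, steps))   # BM increments
X1 = np.sqrt(lam) * c_a * (h(t - s)[None, :] * dB).sum(axis=1)
var_mc = float(np.var(X1))

# Closed form: int_0^t u^2 e^{-2u} du = 1/4 - e^{-2t}(t^2/2 + t/2 + 1/4).
I = 0.25 - np.exp(-2.0 * t) * (t * t / 2.0 + t / 2.0 + 0.25)
var_exact = lam * c_a ** 2 * m_K ** 2 * I
```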

3. Examples

In this section, we discuss some special cases of the model, including several dependence structures among the random delays as well as among the noises.

Assumption 3.1. The random delay times of each cluster, $\{\xi_{ij}\}_{j}$ , satisfy one of the following four conditions:

(a) The $\{\xi_{ij}\}_{j}$ are i.i.d. with cumulative distribution function (CDF) G.

(b) For each cluster i, $\xi_{ij} = \sum_{\ell=1}^j \zeta_{i\ell}$, where $\{\zeta_{i\ell}\;:\;\ell \in{\mathbb N}\}$ are i.i.d. with CDF $G_{\zeta}$. Let $G^{(l)}_\zeta$ be the l-fold convolution of $G_\zeta$.

(c) For each cluster i, the sequence $\{\xi_{ij}\;:\; j \in \mathbb{N}\}$ is symmetrically correlated; that is, each pair has a common joint distribution $\Psi$ with correlation $\rho_{\xi}$, and each $\xi_{ij}$ has the marginal CDF G.

(d) For each cluster i, the sequence $\{\xi_{ij}\;:\;j \in \mathbb{N}\}$ is generated by a first-order discrete autoregressive process, referred to as a DAR(1) process. Specifically, let $\xi_{ij} = \delta_{i,j-1}\xi_{i,j-1} + (1-\delta_{i,j-1}) \eta_{ij}$. Here $\{\delta_{ij}\;:\;j \in {\mathbb N}\}$ is a sequence of i.i.d. Bernoulli random variables with $\mathbb{P}(\delta_{ij}=1) = \alpha \in (0,1)$, while $\{\eta_{ij}\;:\;j \in {\mathbb N}\}$, independent of $\{\delta_{ij}\;:\;j \in \mathbb{N}\}$ and of $\xi_{i1} \sim G$, is a sequence of i.i.d. random variables with CDF G.

We also consider the following two conditions, which are variations of (a) and (b) above:

(a′) For each cluster i, $\xi_{i1}=0$, and the $\xi_{ij}$ for $j \ge 2$ are i.i.d. with CDF G.

(b′) For each cluster i, $\xi_{i1}=0$ and $\xi_{ij} = \sum_{\ell=2}^j \zeta_{i\ell}$ for $j\ge2$, where $\{\zeta_{i\ell}\;:\;\ell \in{\mathbb N}\}$ are i.i.d. with CDF $G_{\zeta}$. Let $G_\zeta^{(l)}$ be the l-fold convolution of $G_\zeta$.
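The four delay structures above can be sketched in code. The generators below use an Exp(1) marginal G purely for illustration; for (c) we implement the special case (3.1) by letting each $\xi_j$ copy a common draw with probability $\sqrt{\rho_\xi}$, so each pair has joint CDF $\rho_\xi G(u\wedge v)+(1-\rho_\xi)G(u)G(v)$. We also verify empirically that the DAR(1) structure (d) gives $\text{Corr}(\xi_1,\xi_3)=\alpha^2$.

```python
import numpy as np

rng = np.random.default_rng(6)

# Sketches of the delay structures (a)-(d) of Assumption 3.1, with an
# Exp(1) marginal G chosen purely for illustration.
def delays_iid(K):                         # (a): i.i.d. with CDF G
    return rng.exponential(1.0, size=K)

def delays_renewal(K):                     # (b): partial sums of i.i.d. zeta
    return np.cumsum(rng.exponential(1.0, size=K))

def delays_symmetric(K, rho=0.5):
    # (c), special case (3.1): each xi_j copies one common draw with
    # probability sqrt(rho) (else it is a fresh draw), so each pair has
    # joint CDF rho*G(u^v) + (1-rho)*G(u)*G(v) and correlation rho.
    common = rng.exponential(1.0)
    fresh = rng.exponential(1.0, size=K)
    copy = rng.random(K) < np.sqrt(rho)
    return np.where(copy, common, fresh)

def delays_dar1(K, alpha=0.6):             # (d): DAR(1) with parameter alpha
    xi = np.empty(K)
    xi[0] = rng.exponential(1.0)
    for j in range(1, K):
        xi[j] = xi[j - 1] if rng.random() < alpha else rng.exponential(1.0)
    return xi

# Empirical check: Corr(xi_1, xi_3) = alpha^2 = 0.36 for DAR(1).
samples = np.array([delays_dar1(3) for _ in range(40_000)])
corr_13 = float(np.corrcoef(samples[:, 0], samples[:, 2])[0, 1])
```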

Notice that a special case of Assumption 3.1(c) is given by

(3.1) \begin{equation}\Psi(u,v)=\rho_{\xi} G(u \wedge v) + (1-\rho_{\xi}) G(u) G(v) \quad\text{for all $u,v\geq0$}\end{equation}

for the marginal G and correlation $\rho_{\xi} \in [0,1]$ of $\Psi(u,v)$ (see [Reference Whitt36]), under which

\begin{equation*}\mathbb{P}\big(\xi_{j}\in du,\xi_{j^\prime}\in dv\big)=\rho_{\xi} G(du)\delta_{\{u\}}(dv)+(1-\rho_{\xi})G(du)G(dv),\end{equation*}

and $\text{Corr}(\xi_{j},\xi_{j^\prime})=\rho_{\xi}$ , where $\delta_{\{u\}}$ is the Dirac measure at $\{u\}$ . Moreover, under Assumption 3.1(d),

\begin{equation*}\mathbb{P}\big(\xi_{j}\leq u, \xi_{j^\prime}\leq v\big)=\alpha^{|j-j^\prime|}G(u\wedge v)+ \big(1-\alpha^{|j-j^\prime|}\big)G(u)G(v),\quad\text{for all $u,v\geq0$}.\end{equation*}

Thus, $\text{Corr}(\xi_{j},\xi_{j^\prime})=\alpha^{|j-j^\prime|}$ for all $j,j^\prime\geq1$ ; that is, the correlation between $\xi_{j}$ and $\xi_{j^\prime}$ decreases geometrically in the distance $|j-j^\prime|$ . Note that under Assumption 3.1(d), the sequence $\{\xi_{ij}\;:\; j \in \mathbb{N}\}$ satisfies the $\rho$ -mixing and $\phi$ -mixing conditions.

We first discuss the limit $\bar{X}$ under Assumption 3.1.

Under Assumption 3.1(a),

\begin{equation*}\bar{X}(t) = m_K m_Z\int_0^t \int_{0-}^{t-s} H(t-s-u) G(du) \Lambda(ds). \end{equation*}

Under Assumption 3.1(a′),

\begin{equation*}\bar{X}(t) = m_Z \int_0^t \Big(H(t-s)+ ( m_K -1) \int_{0-}^{t-s} H(t-s-u) G(du) \Big) \Lambda(ds).\end{equation*}

Under Assumption 3.1(b),

\begin{equation*}\bar{X}(t) = m_Z\sum_{k=1}^\infty p_{k} \sum_{l=1}^k \int_0^t \int_{0-}^{t-s} H(t-s-u) G_\zeta^{(l)}(du) \Lambda(ds). \end{equation*}

Under Assumption 3.1(b′),

\begin{equation*}\bar{X}(t) = m_Z \int_0^t \Big( H(t-s) + \sum_{k=2}^\infty p_{k} \sum_{l=1}^{k-1} \int_{0-}^{t-s} H(t-s-u) G_\zeta^{(l)}(du) \Big) \Lambda(ds).\end{equation*}

We remark that in the settings (a′) and (b′), an arbitrary number of entities (no more than the cluster size) may arrive exactly at the cluster event time $\tau_i^{n}$. For example, if $\ell$ entities of the cluster arrive without delay, with $\ell \leq K_i$ almost surely, then, similarly to the case of Assumption 3.1(a′), we have

\begin{equation*}\bar{X}(t) = m_Z \int_0^t \Big(\ell H(t-s)+ ( m_K -\ell) \int_{0-}^{t-s} H(t-s-u) G(du) \Big) \Lambda(ds). \end{equation*}

In the extreme case where all entities of the cluster arrive at the same time as the cluster arrival time (that is, without delay), we have

\begin{equation*}\bar{X}(t) = m_K m_Z \int_0^t H(t-s) \Lambda(ds). \end{equation*}

Under Assumptions 3.1(c) and 3.1(d), we have the same formula for $\bar{X}$ as in case (a); the correlation does not affect the fluid limit, but it does affect the covariance functions, as we show below.

We next give examples of the covariance functions in the various cases.

3.1. I.i.d. noises

Under Assumption 3.1(a), we have

\begin{align*}r_2(t,s) &= m_{K}\bigg(\int_{0-}^{t\wedge s}H(t-u)H(s-u)G(du)-h_{1}(t)h_{1}(s)\bigg),\\r_3(t,s) &= \sigma_K^2 h_1(t) h_1(s) , \\r_4(t,s) &= m_K \sigma^2_Z \mathbb{E}[H(t-\xi_1)H(s-\xi_1)] = m_K \sigma^2_Z \int_{0-}^{t\wedge s} H(t-u) H(s-u) G(du),\end{align*}

where $h_1(t) = \int_{0-}^{t} H(t-u) G(du)$. Under Assumption 3.1(a′), we have

\begin{align*}r_2(t,s) &= ( m_K-1)\bigg(\int_{0-}^{t\wedge s}H(t-u)H(s-u)G(du)-h_{1}(t)h_{1}(s)\bigg),\\r_3(t,s) &= \sigma^2_K h_1(t) h_1(s),\\r_4(t,s) &= \sigma_{Z}^{2}H(t)H(s)+ (m_{K}-1)\sigma^2_Z \int_{0-}^{t\wedge s} H(t-u) H(s-u) G(du).\end{align*}

As mentioned above in the fluid limit, we can also allow an arbitrary number of entities in the cluster to arrive without delay. In the extreme case of all entities arriving without delay, we have

\begin{equation*}r_2(t,s) =0, \quad r_3(t,s) = \sigma^2_K H(t)H(s), \quad\text{and}\quad r_{4}(t,s)=m_{K}\sigma_{Z}^{2}H(t)H(s).\end{equation*}

Under Assumption 3.1(b), we have

\begin{align*}r_2(t,s) &= \sum_{j\geq 1}\sum_{k\geq j}p_{k}\Bigg(\int_{0-}^{t\wedge s}H(s-u)H(t-u)G^{(j)}_{\zeta}(du)-h_{j}(t)h_{j}(s)\Bigg)\\ &\quad + \!\sum_{j,j^\prime\geq1}\sum_{k\geq j+j^\prime}p_{k}\Bigg(\!\int_{0-}^{t\wedge s}\!\int_{0-}^{t-u}H(s-u)H(t-u-v)G^{(j)}_{\zeta}(du)G^{(j^\prime)}_{\zeta}(dv)-h_{j}(s)h_{j+j^\prime}(t)\!\Bigg)\\ &\quad + \!\sum_{j,j^\prime\geq1}\sum_{k\geq j+j^\prime}p_{k}\Bigg(\!\int_{0-}^{t\wedge s}\!\int_{0-}^{s-u}\!H(t-u)H(s-u-v)G^{(j)}_{\zeta}(du)G^{(j^\prime)}_{\zeta}(dv)-h_{j}(t)h_{j+j^\prime}(s)\!\Bigg),\\ r_3(t,s) &= \sum_{j,j^\prime\geq1} h_{j}(t)h_{j^\prime}(s)\mathbb{P}\left(K\geq j\vee j^\prime\right)-h(t)h(s),\\ r_4(t,s) &= \sigma^2_Z \sum_{j\geq1}\sum_{k\geq j} p_{k}\int_0^{t\wedge s} H(t-u) H(s-u) G^{(j)}_\zeta(du),\end{align*}

where $h_{j}(t)=\int_{0-}^{t}H(t-u)G^{(j)}_{\zeta}(du)$ .

Under Assumption 3.1(b′), we have

\begin{align*}r_2(t,s) &= \sum_{j\geq 1}\sum_{k\geq j}p_{k+1}\Bigg(\int_{0-}^{t\wedge s}H(s-u)H(t-u)G^{(j)}_{\zeta}(du)-h_{j}(s)h_{j}(t)\Bigg)\\ &\quad + \sum_{j,j^\prime\geq1}\sum_{k\geq j+j^\prime}p_{k+1}\Bigg(\int_{0-}^{t\wedge s}\int_{0-}^{t-u}H(s-u)H(t-u-v)\\ & \qquad\qquad\qquad\qquad\quad\qquad G^{(j)}_{\zeta}(du)G^{(j^\prime)}_{\zeta}(dv)-h_{j}(s)h_{j+j^\prime}(t)\Bigg)\\ &\quad + \sum_{j,j^\prime\geq1}\sum_{k\geq j+j^\prime}p_{k+1}\Bigg(\int_{0-}^{t\wedge s}\int_{0-}^{s-u}H(t-u)H(s-u-v)\\ & \qquad\qquad\qquad\qquad\quad\qquad G^{(j)}_{\zeta}(du)G^{(j^\prime)}_{\zeta}(dv)-h_{j}(t)h_{j+j^\prime}(s)\Bigg),\\ r_3(t,s) &= \sum_{j,j^\prime\geq1} h_{j}(t)h_{j^\prime}(s)\left(\mathbb{P}\left(K\geq (j+1)\vee (j^\prime+1)\right)-\mathbb{P}(K\geq j+1)\mathbb{P}(K\geq j^\prime+1)\right), \\ r_4(t,s) &= \sigma_{Z}^{2}H(t)H(s)+\sigma_{Z}^{2} \sum_{j\geq1}\sum_{k\geq j} p_{k+1}\int_0^{t\wedge s} H(t-u) H(s-u) G^{(j)}_\zeta(du).\end{align*}

We next consider the cases where the random delays are correlated in Parts (c) and (d) of Assumption 3.1. The dependence affects only the function $r_2(t,s)$ , while the functions $r_3(t,s)$ and $r_4(t,s)$ are the same as in the case of Assumption 3.1(a), so we only present the formula for $r_2(t,s)$ .

Under Assumption 3.1(c), we have

\begin{align*}r_2(t,s) &= m_K\Big(\int_{0-}^{t\wedge s} H(t-u)H(s-u)G(du)- h_{1}(t)h_{1}(s)\Big)\\& \quad + \mathbb{E}[K(K-1)]\Big(\int_{0-}^{t}\int_{0-}^{s}H(t-u)H(s-v)\big(\Psi(du,dv)-G(du)G(dv)\big)\Big).\end{align*}

If $\Psi(u,v)$ is approximated by the special form on the right-hand side of (3.1), we can approximate $r_2(t,s)$ by

(3.2) \begin{equation} \tilde{r}_{2}(t,s)=\big(m_{K}(1-\rho_{\xi})+ \rho_{\xi} \mathbb{E}[K^{2}]\big)\Big(\int_{0-}^{t\wedge s} H(t-u)H(s-u)G(du)- h_{1}(t)h_{1}(s)\Big).\end{equation}

This approximate function is linear in the correlation parameter $\rho_{\xi}$ .

Under Assumption 3.1(d), we have

\begin{equation*}\mathbb{E}[\varsigma_{j}(t)\varsigma_{j^\prime}(s)]=\big(\mathbb{E}\big[H(t-\xi_{1})H(s-\xi_{1})\big]-h_{1}(t)h_{1}(s)\big)\alpha^{|j-j^\prime|}\quad\text{for all } j,j^\prime.\end{equation*}

Thus,

\begin{align*}r_2(t,s) &= \Big(\int_{0-}^{t\wedge s}H(t-u)H(s-u)G(du)-h_{1}(t)h_{1}(s)\Big)\mathbb{E}\bigg[\sum_{j,j^\prime}^{K}\alpha^{|j-j^\prime|}\bigg]\\&= \Big(\int_{0-}^{t\wedge s}H(t-u)H(s-u)G(du)-h_{1}(t)h_{1}(s)\Big)\bigg(\frac{m_{K}(1+\alpha)}{1-\alpha}+ \frac{2\alpha\big(\mathbb{E}[\alpha^{K}]-1\big)}{(1-\alpha)^{2}}\bigg).\end{align*}

It is clear that the function $r_2(t,s)$ is increasing (decreasing) nonlinearly in $\alpha$ if the first factor, in parentheses, is positive (negative).
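The DAR(1) factor $\mathbb{E}\big[\sum_{j,j^\prime=1}^{K}\alpha^{|j-j^\prime|}\big] = \frac{m_{K}(1+\alpha)}{1-\alpha}+ \frac{2\alpha(\mathbb{E}[\alpha^{K}]-1)}{(1-\alpha)^{2}}$ appearing above can be verified numerically. The sketch below checks it for a geometric cluster-size distribution (our illustrative choice), truncating the negligible tail of K.

```python
import numpy as np

# Exact numerical check of the DAR(1) factor in r_2 above: for any
# cluster-size law,
#   E[sum_{j,j'=1}^K alpha^{|j-j'|}]
#     = m_K*(1+alpha)/(1-alpha) + 2*alpha*(E[alpha^K]-1)/(1-alpha)^2.
# We verify it for a Geometric cluster size (success prob. p = 1/3,
# mean m_K = 3); the tail of K beyond k = 199 is negligible.
alpha, p = 0.6, 1.0 / 3.0
ks = np.arange(1, 200)
pk = p * (1.0 - p) ** (ks - 1)             # P(K = k)

def double_sum(k):
    """sum_{j,j'=1}^k alpha^{|j-j'|}, computed directly."""
    j = np.arange(1, k + 1)
    return float(np.sum(alpha ** np.abs(j[:, None] - j[None, :])))

lhs = float(np.sum(pk * np.array([double_sum(int(k)) for k in ks])))
m_K = 1.0 / p
E_alpha_K = float(np.sum(pk * alpha ** ks))
rhs = (m_K * (1 + alpha) / (1 - alpha)
       + 2 * alpha * (E_alpha_K - 1) / (1 - alpha) ** 2)
```

For these parameters both sides equal 7 up to truncation and floating-point error.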

3.2. Correlated noises

Assumption 3.2. For each cluster i, $\{Z_{ij}, j \in {\mathbb N}\}$ satisfies one of the following conditions:

(a) The $\{Z_{ij},\, j \in {\mathbb N}\}$ are symmetrically correlated, with a common CDF F and a common bivariate joint distribution $\Phi$ for each pair.

(b) $\{Z_{ij},\, j \in {\mathbb N}\}$ is a DAR(1) sequence as in Assumption 3.1(d), with Bernoulli parameter $\beta \in (0,1)$ and marginal CDF F.

Note that the correlations among the noises affect only the function $r_4(t,s)$. We present its formula in the following cases.

Under Assumptions 3.1(a) and 3.2(a), we have

(3.3) \begin{align} r_4(t,s) &=m_{K}\sigma_{Z}^{2}\int_{0-}^{t\wedge s}H(t-u)H(s-u)G(du)+\mathbb{E}\big[K^{2}-K\big]\text{Cov}(Z_{1},Z_{2})h_{1}(t)h_{1}(s).\end{align}

Under Assumptions 3.1(a) and 3.2(b), we have

(3.4) \begin{align} r_4(t,s)=m_{K}\sigma_{Z}^{2}\int_{0-}^{t\wedge s}H(t-u)H(s-u)G(du)+ h_{1}(t)h_{1}(s)\sigma_{Z}^{2}\bigg(\frac{2m_{K}\beta}{1-\beta}+ \frac{2\beta\big(\mathbb{E}[\beta^{K}]-1\big)}{(1-\beta)^{2}}\bigg).\end{align}

Under Assumptions 3.1(c) and 3.2(a), we have

\begin{align*}r_4(t,s) &= m_{K}\sigma_{Z}^{2}\int_{0-}^{t\wedge s}H(t-u)H(s-u)G(du)\\&\quad + \mathbb{E}\big[K^{2}-K\big]\text{Cov}(Z_{1},Z_{2})\int_{0-}^{t}\int_{0-}^{s}H(t-u)H(s-v)\Psi(du, dv).\end{align*}

Similarly to (3.2), under Assumption 3.1(c), let $\rho_Z$ be the correlation between $Z_{j}$ and $Z_{j^\prime}$ , and let $\Phi$ be approximated by the following $\tilde{\Phi}$ :

\begin{equation*}\tilde{\Phi}(z_{1},z_{2})=\rho_{Z}F(z_{1}\wedge z_{2})+ (1-\rho_{Z})F(z_{1})F(z_{2}).\end{equation*}

We can approximate $r_4$ by $\tilde{r}_{4}$ given by

(3.5) \begin{equation}\begin{split}\tilde{r}_{4}(t,s)&=\Big(m_{K}+ \rho_{\xi} \rho_Z \mathbb{E}[K^{2}-K] \Big)\sigma_{Z}^{2} \int_{0-}^{t\wedge s}H(t-u)H(s-u)G(du)\\&\quad +(1-\rho_{\xi})\rho_Z \mathbb{E}[K^{2}-K] \sigma^2_Z h_{1}(t)h_{1}(s).\end{split}\end{equation}

It is clear that when $\rho_\xi=0$ , this formula reduces to (3.3) under Assumptions 3.1(a) and 3.2(a).

Under Assumptions 3.1(d) and 3.2(b), we have

\begin{align*}r_{4}(t,s)&=\sigma_{Z}^{2}\int_{0-}^{t\wedge s}H(t-u)H(s-u)G(du)\bigg(\frac{m_{K}(1+\alpha\beta)}{1-\alpha\beta}+ \frac{2\alpha\beta\big(\mathbb{E}[(\alpha\beta)^{K}]-1\big)}{(1-\alpha\beta)^{2}}\bigg)\\&\quad+ \sigma_{Z}^{2}h_{1}(t)h_{1}(s)\bigg(\frac{2m_{K}\beta(1-\alpha)}{(1-\beta)(1-\alpha\beta)}+ \frac{2\beta\big(\mathbb{E}[\beta^{K}]-1\big)}{(1-\beta)^{2}}-\frac{2\alpha\beta\big(\mathbb{E}[(\alpha\beta)^{K}]-1\big)}{(1-\alpha\beta)^{2}}\bigg).\end{align*}

It is also clear that when $\alpha=0$ , this formula reduces to (3.4) under Assumptions 3.1(a) and 3.2(b).

Remark 3.1. The process X in (1.1) can be used to model the total claims in insurance, where A(t) is the arrival process of cluster claims, the $K_i$ are the cluster sizes, the variables $\{Z_{ij}\}_{j}$ are the claim sizes for each cluster, and $\{\xi_{ij}\}_{j}$ are the delays for the claims to arrive in each cluster. See, for example, the book by Daley and Vere-Jones [Reference Daley and Vere-Jones4] and the recent work in [Reference Basrak, Wintenberger and Žugec1] and references therein. When the arrival process A is Poisson, under the i.i.d. conditions on the claim sizes and delays, the distribution of X can be characterized using the probability generating or characteristic functionals [Reference Daley and Vere-Jones4, Reference Ramirez-Perez and Serfling31]. In [Reference Basrak, Wintenberger and Žugec1, Reference Ramirez-Perez and Serfling32], central limit theorems with Gaussian and infinite-variance stable limits are proved and used to approximate the total claim distributions as $t\to \infty$ . Our results provide distributional approximations for the total claim size at each time t when the arrival rate of cluster claims is large; these approximations are valid for general non-stationary arrival processes, as well as for the various scenarios of correlated claims and delays discussed above. For instance, given that the arrival process has a BM limit as in Remark 2.3, the total claim X(t) at each time t can be approximated by a Gaussian process with mean $\bar{X}(t)$ as in (2.4) and covariance function $\sum_{i=1}^4r_i(t,s)$ , where $r_1(t,s)= c_a^2 \int_0^{t\wedge s} h(t-u) h(s-u) d\Lambda(u)$ and $r_2, r_3, r_4$ are given as above in the various scenarios. One can then approximate the corresponding ruin probability (the first passage time or hitting time of the total claim) by exploiting computations of hitting times for Gaussian processes (see, e.g., [Reference Decreusefond and Nualart5]).
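To illustrate the approximation scheme in Remark 3.1, the following sketch compares a Monte Carlo estimate of $\mathbb{E}[X^n(t)]/n$ with the fluid value $\bar{X}(t)$ for an assumed Poisson cluster-arrival model with exponential delays and claim sizes and a discounting shot shape; all distributional choices and parameter values are illustrative assumptions, not requirements of the theory.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed claim model (illustrative only): Poisson cluster arrivals with rate
# n*lam on [0, t], cluster sizes K uniform on {1,2,3} (m_K = 2), delays
# xi ~ Exp(theta), claim sizes Z ~ Exp(1) (m_Z = 1), and a discounting shot
# shape H(u) = e^{-u} 1(u >= 0).
lam, theta, t, n = 1.0, 2.0, 3.0, 200

def total_claims(n):
    total = 0.0
    tau = rng.uniform(0.0, t, rng.poisson(n * lam * t))  # cluster arrival times
    for ti in tau:
        K = rng.integers(1, 4)
        xi = rng.exponential(1.0 / theta, K)
        Z = rng.exponential(1.0, K)
        age = t - ti - xi
        total += np.sum(np.exp(-age[age >= 0]) * Z[age >= 0])
    return total / n

# Fluid value: Xbar(t) = lam * m_K * m_Z * int_0^t h1(s) ds, with
# h1(s) = theta (e^{-s} - e^{-theta s}) / (theta - 1) for Exp(theta) delays.
m_K = 2.0
Xbar = lam * m_K * (theta / (theta - 1.0)) * (
    (1.0 - np.exp(-t)) - (1.0 - np.exp(-theta * t)) / theta)

est = np.mean([total_claims(n) for _ in range(300)])
print(est, Xbar)  # Monte Carlo average vs fluid value
```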

4. Infinite-server queues with cluster arrivals and random delays

We consider infinite-server queues with batch/cluster arrivals where the arrivals in each cluster may experience random delays. Let A(t) be the arrival process of batches/clusters, and let $K_i$ be the size of cluster i. For each cluster i, $\xi_{ij}$ , $j=1,\dots,K_i$ , are the random delays and $Z_{ij}$ , $j=1,\dots,K_i$ , are the corresponding service times. Note that in Assumption 2.2 the noises can take any real values, whereas in the queueing setting the service times must be positive. Let $X^{n}(t)$ be the number of customers in service at time t in the n-th system; it then has the representation in (1.2), that is,

\begin{equation*}X^{n}(t)=\sum_{i=1}^{A^{n}(t)}\sum_{j=1}^{K_{i}}\mathbf{1}(\tau^{n}_{i}+\xi_{ij}\leq t<\tau^{n}_{i}+\xi_{ij}+Z_{ij}).\end{equation*}

We impose the following regularity conditions in place of Assumption 2.3, and retain the conditions in Assumptions 2.1 and 2.2.

Assumption 4.1. For every fixed $T>0$ , there exists $\gamma>\frac{1}{4}$ such that

\begin{equation*}\begin{gathered}\sup_{\substack{0\leq s<t<T\\ (s,t]\cap{\mathcal{L}}_{1}=\emptyset}}\;\;\sup_{j}\frac{\mathbb{P}\big(\xi_{j}\in(s,t]\big)}{(t-s)^{2\gamma}}<\infty\quad\text{and}\quad\sup_{\substack{0\leq s<r<t<T\\ (s,t]\cap{\mathcal{L}}_{2}=\emptyset}}\;\; \sup_{j,j^\prime}\frac{\mathbb{P}\big(\xi_{j}\in(s,r],\xi_{j^\prime}\in(r,t]\big)}{(t-s)^{4\gamma}}<\infty,\end{gathered}\end{equation*}

where ${\mathcal{L}}_{1}$ and ${\mathcal{L}}_{2}$ are the sets with no accumulation points on ${\mathbb R}_{+}$ as in Assumption 2.3. In addition,

\begin{equation*}\sup_{0\leq s<t\leq T}\sup_{j}\frac{\mathbb{P}(Z_{j}\in(s,t])}{(t-s)^{2\gamma}}<\infty,\end{equation*}

and for some ${\mathcal{L}}_{3}\subset{\mathbb R}_{+}$ with no accumulation points,

(4.1) \begin{equation} \sup_{\substack{0\leq s<r<t\leq T\\ (s,t]\cap{\mathcal{L}}_{3}=\emptyset}}\;\; \sup_{j,j^\prime}\frac{\mathbb{P}\big(Z_{j}+\xi_{j}\in(s,r], Z_{j^\prime}+\xi_{j^\prime}\in(r,t]\big)}{(t-s)^{4\gamma}}<\infty.\end{equation}

Remark 4.1. In addition to the conditions on the $\xi_j$ in Assumption 2.3, we further assume that the marginal distributions of the $Z_{j}$ are locally Hölder continuous (and thus the joint distribution of $(Z_j, Z_{j^\prime})$ is continuous). The condition in (4.1) is a regularity condition on the joint distribution of $(Z_{j}+\xi_{j}, Z_{j^\prime}+\xi_{j^\prime})$ , controlling $\mathbb{P}\big(Z_{j}+\xi_{j}\in(s,r], Z_{j^\prime}+\xi_{j^\prime}\in(r,t]\big)$ over the intervals (s, r] and (r, t] for $s<r<t$ ; it is applied in (6.1) in the proof of tightness. If the joint distribution of $(Z_{j},Z_{j^\prime})$ is itself locally Hölder continuous, that is,

\begin{equation*}\displaystyle\sup_{\substack{0\leq s<t\leq T\\ 0\leq v<u\leq T}} \sup_{j\neq j^\prime}\frac{\mathbb{P}\big(Z_{j}\in(s,t], Z_{j^\prime}\in(v,u]\big)}{(t-s)^{2\gamma}(u-v)^{2\gamma}}<\infty,\end{equation*}

then ${\mathcal{L}}_{3}=\emptyset$ . Note that the joint distributions of $(Z_{j},Z_{j^\prime})$ and $(Z_{j}+\xi_{j},Z_{j^\prime}+\xi_{j^\prime})$ are continuous on ${\mathbb R}_{+}^{2}$ . However, the Hölder conditions on the marginal and joint distributions of the $Z_j$ do not imply the condition in (4.1). (We believe that the results in this section also hold when the $Z_j$ have discontinuous distribution functions, as allowed for H in the cluster model, but proving this would require additional notation, which we omit for brevity.)

We define for $u\geq0$ ,

(4.2) \begin{equation}\tilde{H}_{j}(u)=\mathbb{P}(Z_{j}>u),\quad\tilde{h}_{j}(u)=\mathbb{P}\big(0\leq u-\xi_{j}<Z_{j}\big),\quad\tilde{h}(u)= \mathbb{E}\bigg[\sum_{j=1}^{K} \tilde{h}_{j}(u)\bigg],\end{equation}

and $\tilde{H}_{j}(u)=\tilde{h}_{j}(u)=\tilde{h}(u)=0$ for $u<0$ .

Theorem 4.1. Under Assumptions 2.2 and 4.1, and assuming that $\bar{A}^n\Rightarrow \bar{A}$ in ${\mathbb D}$ as $n\to \infty$ , the convergence in (2.3) holds with the limit $\bar{X}(t)$ defined with $\tilde{h}(u)$ in (4.2) in place of h(u).
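Theorem 4.1 can be illustrated by a small simulation. The sketch below (with assumed Poisson batch arrivals, exponential delays, and exponential service times, all independent and purely illustrative) compares the scaled queue length $X^n(t)/n$ , averaged over replications, with the fluid value $\bar{X}(t)=\lambda m_K\int_0^t \tilde{h}_1(s)\,ds$ :

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed queueing model (illustrative only): Poisson(n*lam) batch arrivals on
# [0, t], batch sizes K uniform on {1,2,3} (m_K = 2), delays xi ~ Exp(theta),
# service times Z ~ Exp(mu), all independent.
n, lam, t, theta, mu, m_K = 200, 1.0, 3.0, 2.0, 1.0, 2.0

def in_service(n):
    count = 0
    tau = rng.uniform(0.0, t, rng.poisson(n * lam * t))
    for ti in tau:
        K = rng.integers(1, 4)
        start = ti + rng.exponential(1.0 / theta, K)   # delayed service starts
        Z = rng.exponential(1.0 / mu, K)
        count += np.sum((start <= t) & (t < start + Z))
    return count / n

# Fluid value: Xbar(t) = lam * m_K * int_0^t h1tilde(s) ds, with
# h1tilde(s) = P(xi <= s < xi + Z) = theta (e^{-mu s} - e^{-theta s})/(theta - mu).
Xbar = lam * m_K * (theta / (theta - mu)) * (
    (1.0 - np.exp(-mu * t)) / mu - (1.0 - np.exp(-theta * t)) / theta)

est = np.mean([in_service(n) for _ in range(200)])
print(est, Xbar)  # scaled queue length vs fluid value
```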

We next state the FCLT for the diffusion-scaled process $\hat{X}^n$ . We first introduce some notation. For $u \in {\mathbb R}$ , let

\begin{align*}& \tilde\varsigma_{ij}(u) \,{:}\,{\raise-1.5pt{=}}\,\tilde{H}_j(u - \xi_{ij})-\tilde{h}_{j}(u),\quad\tilde\vartheta_{i}(u) \,{:}\,{\raise-1.5pt{=}}\,\sum_{j=1}^{K_{i}}\tilde{h}_{j}(u)-\tilde{h}(u), \\& \tilde{\varrho}_{ij}(u) \,{:}\,{\raise-1.5pt{=}}\, \mathbf{1}\big(0\leq u-\xi_{ij}<Z_{ij}\big)-\tilde{H}_{j}(u-\xi_{ij}).\end{align*}

Again, for notational convenience, we occasionally drop the index i in $\tilde\varrho_{ij}$ , $\tilde\varsigma_{ij}$ , and $\tilde\vartheta_{i}$ . Define the following quantities:

\begin{equation*}\begin{split}\displaystyle r_{2}(t,s)=\mathbb{E}\bigg[\sum_{j,j^\prime}^{K} \tilde\varsigma_{j}(t) \tilde\varsigma_{j^\prime}(s)\bigg] \quad\text{and}\quad R_{2}(t,s)=\int_{0}^{t\wedge s}r_{2}(t-u,s-u)\Lambda(du),\\ \displaystyle r_{3}(t,s)=\mathbb{E}\bigg[\sum_{j,j^\prime}^{K} \tilde{h}_{j}(t) \tilde{h}_{j^\prime}(s)\bigg]-\tilde{h}(t)\tilde{h}(s)\quad\text{and}\quad R_{3}(t,s)=\int_{0}^{t\wedge s}r_{3}(t-u,s-u)\Lambda(du), \end{split}\end{equation*}

(4.3) \begin{equation}\ \begin{split} \displaystyle r_{4}(t,s)=\mathbb{E}\bigg[\sum_{j,j^\prime}^{K}\tilde{\varrho}_{j}(t)\tilde{\varrho}_{j^\prime}(s)\bigg]\quad\text{and}\quad R_{4}(t,s) =\int_{0}^{t\wedge s}r_{4}(t-u,s-u)\Lambda(du).\end{split} \end{equation}

Theorem 4.2. Under Assumptions 2.1, 2.2, and 4.1, the convergence in (2.6) holds with the limit $\hat{X} = \sum_{\ell=1}^4 \hat{X}_\ell$ , a sum of mutually independent processes, where $\hat{X}_{1}(t)$ is the same as in Theorem 2.2, and $\hat{X}_\ell$ , $\ell=2,3,4$ , are continuous Gaussian processes with covariance functions $R_\ell$ , $\ell =2,3,4$ , defined in (4.3).

4.1. Examples

In this section we give explicit expressions for the fluid limit and for the functions $r_\ell$ in the covariance functions, under Assumptions 3.1 and 3.2. Note that except for the renewal random delays in Assumption 3.1(b), each combination of the cases of random delays and service times can be regarded as a tandem infinite-server queue with two service stations, where the random delays $\{\xi_{ij}\}_j$ play the role of service times at the first station and the service times $\{Z_{ij}\}_j$ are those at the second station. These examples are of interest in their own right, since tandem $G/G/\infty-G/\infty$ queues with correlated service times at each station have not been studied in the literature.

Let $F^c=1-F$ . With i.i.d. random delays, under Assumption 3.1(a),

\begin{equation*}\bar{X}(t) = m_K \int_0^t\Big(\int_{0-}^{t-s}F^c(t-s-u) G(du)\Big) \Lambda(ds), \end{equation*}

and, if the first customer in each cluster has no delay (i.e., $\xi_{i1}=0$ ),

\begin{equation*}\bar{X}(t) = \int_0^t \Big( F^c(t-s) + (m_K-1)\int_{0-}^{t-s} F^c(t-s-u) G(du) \Big) \Lambda(ds). \end{equation*}

With renewal random delays, under Assumption 3.1(b),

\begin{equation*}\bar{X}(t) =\int_0^t \sum_{k=1}^\infty p_{k} \sum_{l=1}^k\int_{0-}^{t-s} F^c(t-s-u) G^{(l)}(du) \Lambda(ds), \end{equation*}

and, if the first customer in each cluster has no delay,

\begin{equation*}\bar{X}(t) = \int_0^t \Big(F^c(t-s) + \sum_{k=2}^\infty p_{k} \sum_{l=1}^{k-1}\int_{0-}^{t-s} F^c(t-s-u) G^{(l)}(du)\Big) \Lambda(ds). \end{equation*}

Again, under Assumptions 3.1(c) and 3.1(d), the fluid limit $\bar{X}$ is the same as in the case (a). The dependence among random delays does not affect the fluid limit.

We next give some examples of the covariance functions under Assumptions 3.1 and 3.2.

4.1.1. I.i.d. service times

Under Assumption 3.1(a), we have

\begin{align*}r_{2}(t,s)&=m_{K}\Big(\int_{0-}^{t\wedge s}F^{c}(t-u)F^{c}(s-u)G(du)-\tilde{h}_{1}(t)\tilde{h}_{1}(s)\Big),\\r_{3}(t,s)&=\sigma^{2}_{K}\tilde{h}_{1}(t)\tilde{h}_{1}(s), \\r_{4}(t,s)&=m_{K}\int_{0-}^{t\wedge s}\big(F^{c}(t\vee s-u)-F^{c}(t-u)F^{c}(s-u)\big)G(du),\end{align*}

where

(4.4) \begin{equation} \tilde{h}_{1}(u)=\int_{0-}^{u}F^{c}(u-v)G(dv).\end{equation}
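For exponential ingredients, (4.4) admits a closed form; the snippet below checks it against numerical quadrature, assuming $F^c(x)=e^{-\mu x}$ and $G=\mathrm{Exp}(\theta)$ with $\theta\neq\mu$ (illustrative choices only):

```python
import numpy as np

# Assumed exponential ingredients: service tail F^c(x) = e^{-mu x} and delay
# law G = Exp(theta), theta != mu.
mu, theta = 1.0, 2.0

def h1_quad(u, n=20001):
    # numerical evaluation of (4.4): h1(u) = int_0^u F^c(u - v) G(dv)
    v = np.linspace(0.0, u, n)
    y = np.exp(-mu * (u - v)) * theta * np.exp(-theta * v)
    return float(np.sum((y[1:] + y[:-1]) * np.diff(v) / 2.0))

def h1_closed(u):
    # closed form in the exponential case
    return theta * (np.exp(-mu * u) - np.exp(-theta * u)) / (theta - mu)

print(h1_quad(1.3), h1_closed(1.3))  # agree to quadrature accuracy
```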

Under Assumption 3.1(a), if the first customer in each cluster has no delay ( $\xi_{i1}=0$ ), we have the same $r_{3}(t,s)$ as above, and

\begin{align*}r_{2}(t,s)&=(m_{K}-1)\Big(\int_{0-}^{t\wedge s}F^{c}(t-u)F^{c}(s-u)G(du)-\tilde{h}_{1}(t)\tilde{h}_{1}(s)\Big),\\r_{4}(t,s)&=\big(F^{c}(t\vee s)-F^{c}(t)F^{c}(s)\big)+ (m_{K}-1)\int_{0-}^{t\wedge s}\big(F^{c}(t\vee s-u)-F^{c}(t-u)F^{c}(s-u)\big)G(du).\end{align*}

Under Assumption 3.1(b), we have

\begin{align*}r_{2}(t,s)&=\sum_{l\geq 1}\sum_{k\geq l}p_{k}\Big(\int_{0-}^{t\wedge s}F^{c}(t-u)F^{c}(s-u)G^{(l)}(du)-\tilde{h}_{l}(t)\tilde{h}_{l}(s)\Big)\\&\quad +\sum_{l,l^\prime\geq1}\sum_{k\geq l+l^\prime}p_{k}\Big(\int_{0-}^{t\wedge s}\int_{0-}^{s}F^{c}(t-u)F^{c}(s-u-v)G^{(l)}(du)G^{(l^\prime)}(dv)-\tilde{h}_{l}(t)\tilde{h}_{l+l^\prime}(s)\Big)\\&\quad +\sum_{l,l^\prime\geq1}\sum_{k\geq l+l^\prime}p_{k}\Big(\int_{0-}^{t\wedge s}\int_{0-}^{t}F^{c}(s-u)F^{c}(t-u-v)G^{(l)}(du)G^{(l^\prime)}(dv)-\tilde{h}_{l}(s)\tilde{h}_{l+l^\prime}(t)\Big),\\r_{3}(t,s)&=\sum_{l,l^\prime\geq1}\tilde{h}_{l}(t)\tilde{h}_{l^\prime}(s)\big(\mathbb{P}(K\geq l\vee l^\prime)-\mathbb{P}(K\geq l)\mathbb{P}(K\geq l^\prime)\big),\\r_{4}(t,s)&= \sum_{l\geq1}\sum_{k\geq l}p_{k}\Big(\int_{0-}^{t\wedge s}\big(F^{c}(t\vee s-u)-F^{c}(t-u)F^{c}(s-u)\big)G^{(l)}(du)\Big),\end{align*}

where $\tilde{h}_{l}(u)=\int_{0-}^{u}F^{c}(u-v)G^{(l)}(dv)$ .

Under Assumptions 3.1(c) and 3.1(d), the correlations in the random delays affect only the function $r_2(t,s)$ , while the functions $r_3(t,s)$ and $r_4(t,s)$ remain the same as those in the i.i.d. case in Assumption 3.1(a). So we state the function $r_2(t,s)$ in these two scenarios. Under Assumption 3.1(c), we have

\begin{align*}r_{2}(t,s)&=m_{K}\Big(\int_{0-}^{t\wedge s}F^{c}(t-u)F^{c}(s-u)G(du)-\tilde{h}_{1}(t)\tilde{h}_{1}(s)\Big)\\&\quad+\mathbb{E}[K^{2}-K]\Big(\int_{0-}^{t}\int_{0-}^{s}F^{c}(t-u)F^{c}(s-v)\big(\Psi(du, dv)- G(du)G(dv)\big)\Big),\end{align*}

where $\tilde{h}_1(u)$ is defined in (4.4). If $\Psi(u,v)=\rho_{\xi} G(u\wedge v)+(1-\rho_{\xi})G(u)G(v)$ , then

\begin{equation*}r_{2}(t,s)=\big(\rho_{\xi}\mathbb{E}[K^{2}]+ \mathbb{E}[K](1-\rho_{\xi})\big)\Big(\int_{0-}^{t\wedge s}F^{c}(t-u)F^{c}(s-u)G(du)-\tilde{h}_{1}(t)\tilde{h}_{1}(s)\Big). \end{equation*}

Observe that, under this approximation for $\Psi$ , the function $r_2(t,s)$ is linear in the correlation parameter $\rho_{\xi}$ . Under Assumption 3.1(d), we have

\begin{align*}r_{2}(t,s)&=\Bigg(\int_{0-}^{t\wedge s}F^{c}(t-u)F^{c}(s-u)G(du)-\tilde{h}_{1}(t)\tilde{h}_{1}(s)\Bigg)\bigg(\frac{m_{K}(1+\alpha)}{1-\alpha}+ \frac{2\alpha\big(\mathbb{E}[\alpha^{K}]-1\big)}{(1-\alpha)^{2}}\bigg),\end{align*}

where $\tilde{h}_1(u)$ is defined in (4.4). Note that the function $r_2(t,s)$ is increasing (decreasing) nonlinearly in the correlation parameter $\alpha$ if the quantity in the first parenthesis is positive (negative).

4.1.2. Correlated service times

We consider the scenarios in Assumptions 3.1(c) and 3.2(a), and in Assumptions 3.1(d) and 3.2(b). (The formulas in the scenarios in Assumptions 3.1(a) and 3.2(a) and those in Assumptions 3.1(a) and 3.2(b) can respectively be obtained from these as seen in Section 3.2.) Note that in both scenarios, we have $r_{3}(t,s)=\sigma_{K}^{2}\tilde{h}_{1}(t)\tilde{h}_{1}(s)$ , which is not affected by the correlations in random delays and in service times. So we focus on the functions $r_2(t,s)$ and $r_4(t,s)$ .

Under Assumptions 3.1(c) and 3.2(a), we have

\begin{align*}r_{2}(t,s)&=m_{K}\bigg(\int_{0-}^{t\wedge s}F^{c}(t-u)F^{c}(s-u)G(du)-\tilde{h}_{1}(t)\tilde{h}_{1}(s)\bigg)\\&\quad +\mathbb{E}[K^{2}-K]\bigg(\int_{0-}^{t}\int_{0-}^{s}F^{c}(t-u)F^{c}(s-v)\big(\Psi(du, dv)- G(du)G(dv)\big)\bigg),\\r_{4}(t,s)&=m_{K}\bigg(\int_{0-}^{t\wedge s}\big(F^{c}(t\vee s-u)-F^{c}(t-u)F^{c}(s-u)\big)G(du)\bigg)\\&\quad +\mathbb{E}[K^{2}-K]\bigg(\int_{0-}^{t}\int_{0-}^{s}\big(\Phi^{c}(t-u,s-v)-F^{c}(t-u)F^{c}(s-v)\big)\Psi(du, dv)\bigg),\end{align*}

where $\tilde{h}_1(u)$ is defined in (4.4) and

(4.5) \begin{equation}\Phi^{c}(u,v)\,{:}\,{\raise-1.5pt{=}}\,\mathbb{P}\big(Z_{1}>u,Z_{2}>v\big),\quad\text{for $u,v\geq0$}.\end{equation}

If we further assume relations similar to (3.5) and let $\rho_Z$ be the correlation between $Z_{j}$ and $Z_{j^\prime}$ , then we can approximate $r_2$ and $r_4$ by $\tilde{r}_2$ and $\tilde{r}_4$ , respectively:

\begin{align*}\tilde{r}_{2}(t,s)&=\big(\rho_{\xi} \mathbb{E}[K^{2}]+ \mathbb{E}[K](1-\rho_{\xi})\big)\Big(\int_{0-}^{t\wedge s}F^{c}(t-u)F^{c}(s-u)G(du)-\tilde{h}_{1}(t)\tilde{h}_{1}(s)\Big),\\\tilde{r}_{4}(t,s)&=\big(\rho_{\xi}\rho_{Z}\mathbb{E}[K^{2}]+ \mathbb{E}[K](1-\rho_{\xi}\rho_{Z})\big)\Big(\int_{0-}^{t\wedge s}\big(F^{c}(t\vee s-u)-F^{c}(t-u)F^{c}(s-u)\big)G(du)\Big)\\&\quad +(1-\rho_{\xi})\rho_{Z}\mathbb{E}[K^{2}-K]\Big(\int_{0-}^{t}\int_{0-}^{s}\big(F^{c}((t-u)\vee (s-v))-F^{c}(t-u)F^{c}(s-v)\big)G(du)G(dv)\Big).\end{align*}

Note that the function $r_2(t,s)$ is linear in $\rho_{\xi}$ , only affected by the correlations in the random delays, and $r_4(t,s)$ is linear in both $\rho_{\xi}$ and $\rho_{Z}$ . The formulas for $r_2$ and $r_4$ under Assumptions 3.1(a) and 3.2(a) are obtained from $\tilde{r}_2$ and $\tilde{r}_4$ , respectively, by setting $\rho_\xi=0$ .

Under Assumptions 3.1(d) and 3.2(b), we have

\begin{align*}r_{2}(t,s)&=\Big(\int_{0-}^{t\wedge s}F^{c}(t-u)F^{c}(s-u)G(du)-\tilde{h}_{1}(t)\tilde{h}_{1}(s)\Big)\bigg(\frac{m_{K}(1+\alpha)}{1-\alpha}+ \frac{2\alpha\big(\mathbb{E}[\alpha^{K}]-1\big)}{(1-\alpha)^{2}}\bigg),\\ r_{4}(t,s)&=\Big(\int_{0-}^{t\wedge s}\big(F^{c}(t\vee s-u)-F^{c}(t-u)F^{c}(s-u)\big)G(du)\Big)\\ & \qquad\quad \times \bigg(\frac{m_{K}(1+\alpha\beta)}{1-\alpha\beta}+ \frac{2\alpha\beta\big(\mathbb{E}[(\alpha\beta)^{K}]-1\big)}{(1-\alpha\beta)^{2}}\bigg)\\ &\quad+ \Big(\int_{0-}^{t}\int_{0-}^{s}\big(F^{c}((t-u)\vee(s-v))-F^{c}(t-u)F^{c}(s-v)\big)G(du)G(dv)\Big)\\ &\qquad\quad \times\bigg(\frac{2m_{K}\beta(1-\alpha)}{(1-\beta)(1-\alpha\beta)}+ \frac{2\beta\big(\mathbb{E}[\beta^{K}]-1\big)}{(1-\beta)^{2}}-\frac{2\alpha\beta\big(\mathbb{E}[(\alpha\beta)^{K}]-1\big)}{(1-\alpha\beta)^{2}}\bigg),\end{align*}

where $\tilde{h}_1(u)$ is defined in (4.4). Observe that $r_2(t,s)$ is increasing (decreasing) nonlinearly in $\alpha$ if the quantity in the first parenthesis is positive (negative), and is only affected by the correlations in the random delays. On the other hand, the function $r_4(t,s)$ is not necessarily monotone in either $\alpha$ or $\beta$ , as indicated in the second term. The formulas under Assumptions 3.1(a) and 3.2(b) can be obtained from these by setting $\alpha=0$ .

4.1.3. Steady state in the stationary case

We consider the stationary case with $\Lambda(t) = \lambda t$ for $t\ge 0$ and the arrival limit $\hat{A}(t) = \sqrt{ \lambda c_a^2} B_a(t)$ for $c_a>0$ and a standard BM $B_a$ . In this case we obtain the equilibrium point of the fluid limit $\bar{X}(t)$ and the steady-state distribution of the stochastic limit $\hat{X}(t)$ , which is a Gaussian process. We state the steady-state limit $\bar{X}(\infty) = \lim_{t\to\infty}\bar{X}(t)$ and the variance $\text{Var}(\hat{X}(\infty))$ of the limiting Gaussian random variable $\hat{X}(\infty)$ of $\hat{X}(t)$ as $t\to\infty$ .

Recall that for infinite-server queues with batch arrivals and i.i.d. service times, it is shown in [Reference Pang and Whitt25] that

\begin{equation*}\bar{X}(\infty)= \lambda m_K \int_0^\infty F^c(s) ds = \lambda m_K m_Z\end{equation*}

and

(4.6) \begin{equation} \text{Var}(\hat{X}(\infty)) = \lambda m_K m_Z + \lambda m_K \big(m_K ( c_a^2 + c^2_K) - 1\big) \int_0^{\infty} \big( F^c(u) \big)^2 d u,\end{equation}

where $c_K^2 = \sigma^2_K/m_K^2$ is the squared coefficient of variation of K.

For the steady state $\bar{X}(\infty)$ of our model, we still have

\begin{equation*}\bar{X}(\infty)=\lambda\int_{0}^{\infty}\sum_{j=1}^{\infty}\mathbb{P}(K\geq j) \mathbb{P}(\xi_{j}\leq s<\xi_{j}+Z_{j})\,ds= \lambda m_{K}m_{Z},\end{equation*}

provided $\mathbb{E}[Z_{j}]\equiv m_{Z}$ for all $j\in{\mathbb N}$ ; here the second equality follows from Fubini’s theorem. We next provide the steady-state variance formulas in the various cases; these formulas are new to the literature.

I.i.d. service times. For notational convenience, let

\begin{equation*}\chi_{1}=\int_0^{\infty} \big( F^c(u) \big)^2 d u\quad\text{and}\quad\chi_2 = \int_0^{\infty} \Big( \int_{0-}^{u}F^c(u-v)G(dv) \Big)^2 du. \end{equation*}

Under Assumption 3.1(a), we obtain

(4.7) \begin{equation}\text{Var}(\hat{X}(\infty))=\lambda m_K m_{Z}+ \lambda m_K \big(m_K ( c_a^2 + c^2_K) - 1\big) \chi_2.\end{equation}

This result is a direct generalization of the formula for i.i.d. service times without random delays: comparing (4.7) with (4.6), the effect of the delays is to replace $\chi_1$ by $\chi_2$ . As we suggested earlier, this case can be regarded as a tandem $G/G/\infty-G/\infty$ queue with i.i.d. service times at both service stations; when counting the number of customers at the second station, we count those that have completed service at the first station and are still in service at the second. Under Assumption 3.1(a), if the first customer in each cluster has no delay, we obtain

\begin{align*}\text{Var}(\hat{X}(\infty))&=\lambda m_{K} m_{Z}+\lambda(c_{a}^{2}-1)\chi_{1}+ \lambda(m_{K}-1)\big((m_{K}-1)(c_{a}^{2}+c_{K-1}^{2})-1\big)\chi_{2}\\[5pt] &\quad +2\lambda c_{a}^{2}(m_{K}-1)\int_{0}^{\infty}F^{c}(s)\int_{0-}^{s}F^{c}(s-u)G(du)ds.\end{align*}

Note that $c_{K-1}^2 = \text{Var}(K-1)/(m_K-1)^2 = \sigma_K^2/(m_K-1)^2 $ .
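For the assumed exponential case $F^c(x)=e^{-\mu x}$ , $G=\mathrm{Exp}(\theta)$ , the constants admit closed forms $\chi_1=1/(2\mu)$ and $\chi_2=\theta/(2\mu(\mu+\theta))$ , so (4.7) can be evaluated explicitly. The sketch below (with illustrative parameter values) verifies $\chi_2$ by quadrature and compares (4.7) with the no-delay variance (4.6); since $\chi_2<\chi_1$ , the delays reduce the steady-state variance here.

```python
import numpy as np

# Steady-state variance (4.7) for an assumed exponential model: service tail
# F^c(x) = e^{-mu x}, delays G = Exp(theta); lam, m_K, sigma_K^2, c_a^2 are
# illustrative parameter choices.
mu, theta = 1.0, 2.0
lam, m_K, sig_K2, ca2 = 1.0, 2.0, 1.0, 1.2
m_Z = 1.0 / mu
cK2 = sig_K2 / m_K**2                      # squared coefficient of variation of K

chi1 = 1.0 / (2.0 * mu)                    # int_0^inf (F^c(u))^2 du
chi2 = theta / (2.0 * mu * (mu + theta))   # int_0^inf h1(u)^2 du (exponential case)

def chi2_quad(T=60.0, n=200001):
    # numerical check of the closed form for chi_2
    u = np.linspace(0.0, T, n)
    h1 = theta * (np.exp(-mu * u) - np.exp(-theta * u)) / (theta - mu)
    y = h1 * h1
    return float(np.sum((y[1:] + y[:-1]) * np.diff(u) / 2.0))

var_47 = lam * m_K * m_Z + lam * m_K * (m_K * (ca2 + cK2) - 1.0) * chi2  # (4.7)
var_46 = lam * m_K * m_Z + lam * m_K * (m_K * (ca2 + cK2) - 1.0) * chi1  # (4.6)
print(chi2, chi2_quad())  # closed form vs quadrature
print(var_47, var_46)     # with delays vs without delays (chi2 < chi1)
```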

Under Assumption 3.1(b), we obtain

\begin{align*}\text{Var}(\hat{X}(\infty))&= \lambda m_{K}m_{Z}+ 2\lambda \sum_{j,j^\prime\geq 1}\mathbb{P}(K\geq j+j^\prime)\int_{0-}^{\infty}G^{(j^\prime)}(dv) \int_{0}^{\infty}F^{c}(s)F^{c}(s+v)ds\\[5pt] &\quad + \lambda (c_{a}^{2}-1) \int_{0}^{\infty}\Big(\sum_{j\geq1}\mathbb{P}(K\geq j)\int_{0-}^{s}G^{(j)}(du)F^{c}(s-u)\Big)^{2}ds.\end{align*}

Under Assumption 3.1(c), we obtain

\begin{align*}&\text{Var}(\hat{X}(\infty)) =\lambda m_{K}m_{Z}+ \lambda m_{K} \big(m_{K} ( c_{a}^{2} + c_{K}^{2}) - 1\big) \chi_2\\&\quad\quad +\lambda m_{K}\big(m_{K}(1+c_{K}^{2})-1\big)\int_{0}^{\infty}\int_{0-}^{s}\int_{0-}^{s}F^{c}(s-u)F^{c}(s-v)\big(\Psi(du, dv)-G(du)G(dv)\big)ds.\end{align*}

Note that in the special case of i.i.d. random delays, $\Psi(du, dv)=G(du)G(dv)$ ; thus the identity above is consistent with (4.7). Also, if $\Psi(u,v)=\rho_{\xi} G(u\wedge v)+ (1-\rho_{\xi})G(u)G(v)$ , then

\begin{equation*} \text{Var}(\hat{X}(\infty))=\lambda m_{K}m_{Z} + \lambda m_{K}\big(m_{K}(c_{a}^{2}+c_{K}^{2})-1\big)\chi_{2}+ \lambda\rho_{\xi} m_{K}\big(m_{K}(1+c_{K}^{2})-1\big)\big(\chi_{1}-\chi_{2}\big).\end{equation*}

It is clear that when $\rho_{\xi}=0$ , this reduces to the formula in (4.7).

Under Assumption 3.1(d), we obtain

\begin{equation*} \text{Var}(\hat{X}(\infty))=\lambda m_{K} m_{Z}+ \lambda m_{K}\big(m_{K}(c_{a}^{2}+c_{K}^{2})-1\big)\chi_{2}+\lambda \bigg(\frac{2m_{K}\alpha}{1-\alpha}+ \frac{2\alpha\big(\mathbb{E}[\alpha^{K}]-1\big)}{(1-\alpha)^{2}}\bigg)\big(\chi_{1}-\chi_{2}\big).\end{equation*}

Observe that if $\chi_1- \chi_2>0$ , then $ \text{Var}(\hat{X}(\infty)) $ is increasing in $\alpha$ nonlinearly.

Correlated service times. Let

\begin{equation*}\chi_{3}\,{:}\,{\raise-1.5pt{=}}\,\int_{0-}^{\infty}G(du)\int_{0-}^{\infty}G(dv)\int_{u\vee v}^{\infty}F^{c}(s-u\wedge v)ds.\end{equation*}

Under Assumptions 3.1(c) and 3.2(a), with $\Phi^{c}$ defined as in (4.5), we obtain

(4.8) \begin{equation} \begin{split}& \text{Var}(\hat{X}(\infty)) =\lambda m_{K}m_{Z}+ \lambda m_{K}\big(m_{K}(c_{a}^{2}+c_{K}^{2})-1\big)\chi_{2}\\ &\quad +\lambda m_{K}\big(m_{K} (1+c_{K}^{2}) -1\big)\int_{0}^{\infty}\int_{0-}^{s}\int_{0-}^{s} \big(\Phi^{c}(s-u,s-v)-F^{c}(s-u)F^{c}(s-v)\big)\Psi(du, dv)ds\\ &\quad +\lambda m_{K}\big(m_{K} (1+c_{K}^{2}) -1\big)\int_{0}^{\infty}\int_{0-}^{s}\int_{0-}^{s} F^{c}(s-u)F^{c}(s-v) \big(\Psi(du, dv)-G(du)G(dv)\big)ds.\end{split}\end{equation}

If $\Psi(u,v)=\rho_{\xi}G(u\wedge v)+(1-\rho_{\xi})G(u)G(v)$ and $\Phi(u,v)=\rho_{Z}F(u\wedge v)+(1-\rho_{Z})F(u)F(v)$ , then

(4.9) \begin{equation} \begin{split} \text{Var}(\hat{X}(\infty)) &=\lambda m_{K}m_{Z}+ \lambda m_{K}\big(m_{K}(c_{a}^{2}+c_{K}^{2})-1\big)\chi_{2}\\&\quad +\lambda \rho_{\xi}\rho_{Z} m_{K}\big(m_{K}(1+c_{K}^{2})-1\big)(m_{Z}-\chi_{2})\\&\quad +\lambda \rho_{\xi}(1-\rho_{Z}) m_K \big(m_{K} (1+c_{K}^{2}) -1\big) (\chi_{1}-\chi_{2})\\&\quad+\lambda (1-\rho_{\xi})\rho_{Z} m_{K} \big(m_{K} (1+c_{K}^{2}) -1\big) (\chi_{3}-\chi_{2}).\end{split}\end{equation}
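Formula (4.9) is easy to explore numerically. In the assumed exponential case one also has the closed form $\chi_3=\theta/(\mu(\mu+\theta))$ (for i.i.d. $\mathrm{Exp}(\theta)$ delays, $|\xi_1-\xi_2|$ is again $\mathrm{Exp}(\theta)$ ); the sketch below, under the same illustrative parameter choices as above, verifies $\chi_3$ by two-dimensional quadrature and checks that $\rho_\xi=\rho_Z=0$ recovers (4.7).

```python
import numpy as np

# Evaluation of (4.9) under the copula-type approximations, with assumed
# exponential ingredients F^c(x) = e^{-mu x}, G = Exp(theta); all parameter
# values are illustrative.
mu, theta = 1.0, 2.0
lam, m_K, sig_K2, ca2 = 1.0, 2.0, 1.0, 1.2
m_Z, cK2 = 1.0 / mu, sig_K2 / 2.0**2

chi1 = 1.0 / (2.0 * mu)
chi2 = theta / (2.0 * mu * (mu + theta))
chi3 = theta / (mu * (mu + theta))  # = (1/mu) E[e^{-mu |xi1 - xi2|}]

def chi3_quad(T=30.0, n=1000):
    # direct two-dimensional quadrature of the definition of chi_3
    u = np.linspace(0.0, T, n)
    w = np.ones(n); w[0] = w[-1] = 0.5          # trapezoid weights
    g = theta * np.exp(-theta * u) * w
    U, V = np.meshgrid(u, u, indexing="ij")
    W = np.outer(g, g) * np.exp(-mu * np.abs(U - V)) / mu
    du = u[1] - u[0]
    return float(np.sum(W)) * du * du

def var_49(rho_xi, rho_Z):
    # steady-state variance formula (4.9)
    c = m_K * (1.0 + cK2) - 1.0
    return (lam * m_K * m_Z
            + lam * m_K * (m_K * (ca2 + cK2) - 1.0) * chi2
            + lam * rho_xi * rho_Z * m_K * c * (m_Z - chi2)
            + lam * rho_xi * (1.0 - rho_Z) * m_K * c * (chi1 - chi2)
            + lam * (1.0 - rho_xi) * rho_Z * m_K * c * (chi3 - chi2))

print(chi3, chi3_quad())                   # closed form vs quadrature
print(var_49(0.0, 0.0), var_49(0.5, 0.5))  # rho = 0 recovers (4.7)
```

Since $m_Z>\chi_2$ , $\chi_1>\chi_2$ , and $\chi_3>\chi_2$ in this example, the variance increases in both correlation parameters.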

Under Assumptions 3.1(d) and 3.2(b), we obtain

(4.10) \begin{equation} \begin{split}\text{Var}(\hat{X}(\infty))&=\lambda m_{K}m_{Z}+ \lambda m_{K}\big(m_{K}(c_{a}^{2}+c_{K}^{2})-1\big)\chi_{2}\\[5pt] &\quad+ \lambda\bigg(\frac{2m_{K}\alpha\beta}{1-\alpha\beta}+ \frac{2\alpha\beta\big(\mathbb{E}[(\alpha\beta)^{K}]-1\big)}{(1-\alpha\beta)^{2}}\bigg)\big(m_{Z}-\chi_{2}\big)\\[5pt] &\quad+ \lambda\bigg(\frac{2m_{K}\alpha(1-\beta)}{(1-\alpha)(1-\alpha\beta)}+\frac{2\alpha\big(\mathbb{E}[\alpha^{K}]-1\big)}{(1-\alpha)^{2}}- \frac{2\alpha\beta\big(\mathbb{E}[(\alpha\beta)^{K}]-1\big)}{(1-\alpha\beta)^{2}}\bigg)\big(\chi_{1}-\chi_{2}\big)\\&\quad+ \lambda\bigg(\frac{2m_{K}\beta(1-\alpha)}{(1-\beta)(1-\alpha\beta)}+\frac{2\beta\big(\mathbb{E}[\beta^{K}]-1\big)}{(1-\beta)^{2}}-\frac{2\alpha\beta\big(\mathbb{E}[(\alpha\beta)^{K}]-1\big)}{(1-\alpha\beta)^{2}}\bigg)\big(\chi_{3}-\chi_{2}\big).\end{split}\end{equation}

Observe that the first two terms in these two scenarios in (4.8)–(4.10) are the same as the steady-state variance in (4.7), and the other terms capture the effect of correlations among the random delays, as well as among service times.

5. Proof of Theorem 2.2

This section is dedicated to the proof of Theorem 2.2. Since Theorem 2.1 follows directly from Theorem 2.2, we omit its proof for brevity.

We first provide a decomposition of the process $\hat{X}^n$ . Recall $h_{j}(u)$ and h(u) defined in (2.2). We have

(5.1) \begin{equation}\hat{X}^{n}(t)=\mathbb{E}[Z_1]\big(\hat{X}^{n}_{1}(t)+\hat{X}^{n}_{2}(t)+\hat{X}^{n}_{3}(t)\big) + \hat{X}^{n}_{4}(t),\end{equation}

where for every $t>0$ , the subprocesses are given by

\begin{align*}\hat{X}^{n}_{1}(t)&\,{:}\,{\raise-1.5pt{=}}\, \sqrt{n}\bigg(\frac{1}{n}\sum_{i=1}^{A^{n}(t)}h(t-\tau_{i}^{n})-\int_{0}^{t}h(t-s)\Lambda(ds)\bigg)= \int_{(0,t]}h(t-s)\hat{A}^{n}(ds)\\[5pt] &= \hat{A}^{n}(t)h(0)-\int_{[0,t)}\hat{A}^{n}(s)h(t-ds)=\hat{A}^{n}(t)h(0)+\int_{(0,t]}\hat{A}^{n}(t-s)h(ds),\end{align*}

where the integration by parts in (1.3) and the fact that $\hat{A}^{n}(0)=0$ are applied, and

\begin{align*}\hat{X}^{n}_{2}(t)&\,{:}\,{\raise-1.5pt{=}}\,\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\sum_{j=1}^{K_{i}}\Big(H(t-\tau_{i}^{n}-\xi_{ij})-h_{j}(t-\tau_{i}^{n})\Big)=\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\sum_{j=1}^{K_{i}}\varsigma_{ij}(t-\tau_{i}^{n}),\\\hat{X}^{n}_{3}(t)&\,{:}\,{\raise-1.5pt{=}}\,\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\Big(\sum_{j=1}^{K_{i}}h_{j}(t-\tau_{i}^{n})-h(t-\tau_{i}^{n})\Big)=\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\vartheta_{i}(t-\tau_{i}^{n}), \\\hat{X}^{n}_{4}(t)&\,{:}\,{\raise-1.5pt{=}}\,\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\sum_{j=1}^{K_{i}}H(t-\tau_{i}^{n}-\xi_{ij})\big(Z_{ij}-\mathbb{E}[Z]\big)=\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\sum_{j=1}^{K_{i}}H(t-\tau_{i}^{n}-\xi_{ij})\varrho_{ij}.\end{align*}
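The decomposition (5.1) is an exact pathwise identity, which can be verified numerically on a single simulated realization. The sketch below uses assumed ingredients (Poisson cluster arrivals, uniform cluster sizes, exponential delays, exponential noises with $\mathbb{E}[Z]=1$ so that the factor $\mathbb{E}[Z_1]$ equals 1, and shot shape $H(u)=e^{-u}\mathbf{1}(u\ge0)$ ), none of which are prescribed by the theorem:

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed ingredients: Poisson(n*lam) cluster arrivals on [0, t], K uniform on
# {1,2,3} (m_K = 2), i.i.d. delays xi ~ Exp(theta), i.i.d. noises Z ~ Exp(1)
# (so E[Z] = 1), shot shape H(u) = e^{-u} 1(u >= 0); then h_j = h1, h = m_K h1.
n, lam, t, theta, m_K = 50, 1.0, 2.0, 2.0, 2.0

def H(u):
    return np.where(u >= 0, np.exp(-np.maximum(u, 0.0)), 0.0)

def h1(u):
    u = max(u, 0.0)
    return theta * (np.exp(-u) - np.exp(-theta * u)) / (theta - 1.0)

# int_0^t h(t - s) Lambda(ds) with Lambda(ds) = lam ds and h = m_K h1
fluid = m_K * lam * (theta / (theta - 1.0)) * (
    (1.0 - np.exp(-t)) - (1.0 - np.exp(-theta * t)) / theta)

tau = rng.uniform(0.0, t, rng.poisson(n * lam * t))
Xn = X1 = X2 = X3 = X4 = 0.0
for ti in tau:
    K = rng.integers(1, 4)
    xi = rng.exponential(1.0 / theta, K)
    Z = rng.exponential(1.0, K)
    Hij = H(t - ti - xi)
    Xn += np.sum(Hij * Z)                    # raw shot noise contribution
    X1 += m_K * h1(t - ti)                   # h(t - tau_i)
    X2 += np.sum(Hij - h1(t - ti))           # sum_j (H - h_j)
    X3 += (K - m_K) * h1(t - ti)             # sum_j h_j - h
    X4 += np.sum(Hij * (Z - 1.0))            # sum_j H (Z - E[Z])

Xn_hat = np.sqrt(n) * (Xn / n - fluid)
lhs = np.sqrt(n) * (X1 / n - fluid) + (X2 + X3 + X4) / np.sqrt(n)
print(Xn_hat, lhs)  # equal up to floating-point rounding
```

The centering term $\int_0^t h(t-s)\Lambda(ds)$ cancels on both sides, so the identity holds for any realization regardless of the accuracy of the fluid constant.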

In the proofs below, we further assume without loss of generality that $\mathbb{E}[Z]=1$ , to simplify our notation. Moreover, by Assumption 2.3, for fixed $T>0$ , there exists $c_{0}>1$ such that for any $s<t\leq T$ with small $t-s$ ,

(5.2) \begin{equation}\begin{split}\big(H(t)-H(s)\big)^{2}\leq&\ c_{0}\Big((t-s)^{2\gamma}+\mathbf{1}\big(0\in(s,t]\big)\Big),\\\mathbb{P}\big(\xi_{j}\in(s,t]\big)\leq&\ c_{0}\bigg((t-s)^{2\gamma}+ \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(s,t]\big)\bigg),\\\mathbb{P}\big(\xi_{j}\in(s,r],\xi_{j^\prime}\in(r,t]\big)\leq&\ c_{0}\bigg((t-s)^{4\gamma}+ (t-s)^{2\gamma} \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(s,t]\big)\bigg),\end{split}\end{equation}

where $\{q_{k},k\geq0\}={\mathcal{L}}={\mathcal{L}}_{1}\cup{\mathcal{L}}_{2}\cup\{0\}$ with $q_{0}=0$ , and in the last inequality the indicators are equal to 0 except for at most one term. For the last inequality above, we only consider the nontrivial case where $j\neq j^\prime$ , $r\geq0$ , and $s<r<t\leq T$ with $t-s$ small enough. Then there is at most one possible point, say q, in $(s,t]\cap{\mathcal{L}}$ , and $q=q_{0}=0$ if $s<0$ . We thus have, case by case,

\begin{align*}& \mathbb{P}\big(\xi_{j}\in(s,r],\xi_{j^\prime}\in(r,t]\big)\\ =& \textbf{1}\big(s<0\big)\textbf{1}\big((0,t]\cap{\mathcal{L}}=\emptyset\big)\Big(\mathbb{P}\big(\xi_{j}\in(0,r],\xi_{j^\prime}\in(r,t]\big)+\mathbb{P}\big(\xi_{j}=0, \xi_{j^\prime}\in(r,t]\big)\Big)\\ & +\textbf{1}\big(s\ge0\big)\mathbb{P}\big(\xi_{j}\in(s,r],\xi_{j^\prime}\in(r,t]\big)\\ &\quad\times\Big(\textbf{1}\big((s,t]\cap{\mathcal{L}}=\emptyset\big)+\textbf{1}\big(q\in(s,r]\big)\textbf{1}\big((r,t]\cap{\mathcal{L}}=\emptyset\big)+\textbf{1}\big((s,r]\cap{\mathcal{L}}=\emptyset\big)\textbf{1}\big(q\in(r,t]\big)\Big)\\ \leq & c_{0}(t-s)^{4\gamma}+ c_{0}(t-r)^{2\gamma}\textbf{1}\big(q\in(s,r]\big)+ c_{0}(r-s)^{2\gamma}\textbf{1}\big(q\in(r,t]\big)\\ \leq & c_{0}(t-s)^{4\gamma}+c_{0}(t-s)^{2\gamma}\textbf{1}\big(q\in(s,t]\big).\end{align*}

In the following proofs, we fix the constant $c_0$ and $\{q_{k}\}$ in (5.2).

Lemma 5.1. Under Assumption 2.3, for all $v<w<u\leq T$ with small enough $u-v$ , we have

(5.3) \begin{align}&\mathbb{E}\big[H(u-\xi_{j})-H(v-\xi_{j})\big]^{2}\leq2c_{0}^{2}\left((u-v)^{2\gamma}+ \sum_{k=0}^{\infty} \mathbf{1}\big(q_{k}\in(v,u]\big)\right),\end{align}
(5.4) \begin{align}\begin{split}&\mathbb{E}\big[(H(u-\xi_{j})-H(w-\xi_{j}))^{2}(H(w-\xi_{j^\prime})-H(v-\xi_{j^\prime}))^{2}\big]\\&\quad\quad\leq 4c_{0}^{3}\left((u-v)^{4\gamma}+ (u-v)^{2\gamma} \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big)\right),\end{split}\end{align}

where $c_{0}$ is the constant in (5.2).

Proof. Applying (5.2), we have

\begin{equation*}\mathbb{E}\big[H(u-\xi_{j})-H(v-\xi_{j})\big]^{2}\leq c_{0}\Big((u-v)^{2\gamma}+ \mathbb{P}\big(\xi_{j}\in(v,u]\big)\Big),\end{equation*}

which gives (5.3) after a further application of (5.2).

Similarly, we have from (5.2) that

\begin{align*}&\ \mathbb{E}\big[(H(u-\xi_{j})-H(w-\xi_{j}))^{2}(H(w-\xi_{j^\prime})-H(v-\xi_{j^\prime}))^{2}\big]\\\leq&\ c_{0}^{2}\mathbb{E}\Big[\Big((u-w)^{2\gamma}+\mathbf{1}\big(\xi_{j}\in(w,u]\big)\Big)\Big((w-v)^{2\gamma}+\mathbf{1}\big(\xi_{j^\prime}\in(v,w]\big)\Big)\Big]\\\leq&\ c_{0}^{2}\bigg((u-v)^{4\gamma}+c_{0}(u-w)^{2\gamma}\bigg((w-v)^{2\gamma}+ \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,w]\big)\bigg)\\&\quad+c_{0}(w-v)^{2\gamma}\bigg((u-w)^{2\gamma}+ \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(w,u]\big)\bigg)\\&\quad+c_{0}\bigg((u-v)^{4\gamma}+(u-v)^{2\gamma} \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big)\bigg) \bigg),\end{align*}

which gives (5.4). This finishes the proof.

5.1. Convergence of $\hat{X}^{n}_{1}$

For the convergence of $\hat{X}^{n}_{1}$ , we need the following lemma, which is a simplified version of [Reference Pang and Zhou27, Lemma 6.1], noticing that the Lebesgue–Stieltjes integral for $\psi_{g}$ is defined on the interval [0,t). We state it here for completeness and provide a proof in the appendix.

Lemma 5.2. Let g be a function in ${\mathbb D}$ with locally bounded variation. Define the mapping $\psi_{g}$ on ${\mathbb D}$ as follows:

(5.5) \begin{equation} \psi_{g}(z)(t)\,{:}\,{\raise-1.5pt{=}}\,\int_{[0,t)}z(s)g(t-ds)=-\int_{(0,t]}z(t-s)g(ds)\end{equation}

for $z\in{\mathbb D}$ and $t>0$ . Then the following hold:

  1. For any $z\in{\mathbb D}$ , $\psi_{g}(z)\in {\mathbb D}$ and $\psi_{g}(z)(0)=0$ .

  2. If $g\in {\mathbb C}$ or $z\in{\mathbb C}$ and $z(0)=0$ , then $\psi_{g}(z)\in{\mathbb C}$ .

  3. If $z\in{\mathbb C}$ , then $\psi_{g}$ is continuous at z in $({\mathbb D},J_{1})$ .

Lemma 5.3. Under Assumptions 2.2 and 2.3, the functions $h_{j}$ and h defined in (2.2) are monotonic functions in ${\mathbb D}$ , which are piecewise Hölder continuous.

Proof. The fact that $h_{j}$ and h are monotonic and càdlàg follows from the corresponding properties of H. For all $s<t<T$ and $j\in{\mathbb N}$ , we have from (5.3) that

(5.6) \begin{equation}(h_{j}(t)-h_{j}(s))^{2}\leq \mathbb{E}\big[H(t-\xi_{j})-H(s-\xi_{j})\big]^{2}\leq 2c_{0}^{2}\bigg((t-s)^{2\gamma}+ \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(s,t]\big)\bigg).\end{equation}

This proves the result.

Lemma 5.4. Under Assumptions 2.1, 2.2, and 2.3, $\hat{X}^{n}_{1}\Rightarrow \hat{X}_{1}$ in $({\mathbb D},J_{1})$ as $n\to\infty$ , where $\hat{X}_{1}$ is the continuous process given in Theorem 2.2.

Proof. Firstly, the continuity of $\hat{X}_{1}$ is a consequence of (2.7), Lemma 5.2(ii), the continuity of $\hat{A}$ , and the fact that $\hat{A}(0)=0$ . Moreover, by definition

\begin{equation*}\hat{X}^{n}_{1}(t)=\hat{A}^{n}(t)h(0) - \int_{[0,t)}\hat{A}^{n}(s)h(t-ds),\end{equation*}

so the claim follows from Lemma 5.2(iii) and the continuous mapping theorem [Reference Billingsley3, Theorem 2.7].

5.2. Convergence of $\hat{X}^{n}_{\textrm{2}},\hat{X}^{n}_{\textrm{3}}$ , and $\hat{X}^{n}_{\textrm{4}}$ in ${\mathbb D}$

This proof proceeds in the following steps:

  1. Step 1: Proving the existence of the limit Gaussian process $\hat{X}_{k}$ with sample paths in ${\mathbb C}$ (Lemma 5.5).

  2. Step 2: Proving the convergence of finite-dimensional distributions of $\hat{X}^{n}_{k}$ to those of $\hat{X}_{k}$ (Lemma 5.7).

  3. Step 3: Verifying the convergence criterion with the modulus of continuity as in [Reference Billingsley3, Theorem 13.5] and completing the proof (Lemma 5.8).

Lemma 5.5. Under Assumptions 2.1, 2.2, and 2.3, there exist continuous modifications of the Gaussian processes $\hat{X}_{k}$ , $k=2,3,4$ , with mean zero and covariance functions as in (2.5).

Proof. Recalling the covariance functions (2.5) of the limits $\hat{X}_{k}$ , their existence as Gaussian processes follows from the consistency condition for Gaussian distributions. To prove $\hat{X}_{k}\in{\mathbb C}$ , it is sufficient to check that each $\hat{X}_{k}$ is continuous in quadratic mean.

Let $\displaystyle \eta(t)\,{:}\,{\raise-1.5pt{=}}\,\sum_{j=1}^{K}H(t-\xi_{j})Z_{j}$ . For every $v<u<T$ , by (5.3), we have

\begin{align*}\text{Var}(\eta(u)-\eta(v))& \leq \mathbb{E}\bigg[K\sum_{j=1}^{K}\varrho_{j}^{2}(H(u-\xi_{j})-H(v-\xi_{j}))^{2}\bigg] \\& \leq 2c_{0}^{2}\mathbb{E}\big[K^{2}\big] \mathbb{E}[\varrho^{2}]\bigg((u-v)^{2\gamma}+ \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big)\bigg).\end{align*}

One can check that

\begin{equation*}\sum_{j=1}^{K}\varrho_{j}H(t-\xi_{j})\in\sigma\{K,\xi,Z\},\quad\sum_{j=1}^{K}\varsigma_{j}(t)\in\sigma\{K,\xi\},\quad\text{and}\quad\vartheta(t)\in\sigma\{K\},\end{equation*}

are centered random variables, measurable with respect to the successively smaller $\sigma$-fields indicated, where $\xi$ and Z denote the sequences of the corresponding variables. Therefore, for $0\leq s<t<T$,

\begin{align*}& \mathbb{E} \big[\big(\hat{X}_{2}(t)-\hat{X}_{2}(s)\big)^{2}\big]+\mathbb{E} \big[\big(\hat{X}_{3}(t)-\hat{X}_{3}(s)\big)^{2}\big]+\mathbb{E} \big[\big(\hat{X}_{4}(t)-\hat{X}_{4}(s)\big)^{2} \big] \nonumber\\ = &\int_{0}^{t}\text{Var}\big(\eta(t-u)-\eta(s-u)\big)\Lambda(du)\\ \leq&\ 2c_{0}^{2}\mathbb{E}\big[Z^{2}\big]\mathbb{E}\big[K^{2}\big]\bigg((t-s)^{2\gamma}\Lambda(T)+ \sum_{k=0}^{\infty}\big(\Lambda(t-q_{k})-\Lambda(s-q_{k})\big)\bigg).\end{align*}

Since $\Lambda$ is continuous and there are only finitely many $q_{k}$ on [0, T], this finishes the proof.
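The continuity in quadratic mean established above can be illustrated by a small Monte Carlo experiment. The sketch below assumes an exponential shot shape $H(s)=e^{-s}\mathbf{1}(s\ge0)$ (Lipschitz on $[0,\infty)$, with its single discontinuity at $q_{0}=0$), cluster sizes uniform on $\{1,2,3\}$, Exp(1) delays, and Gaussian noises; none of these choices is prescribed by the paper.

```python
import math, random, statistics

random.seed(7)

def H(s):                    # hypothetical Lipschitz shot shape (gamma = 1), jump q_0 = 0
    return math.exp(-s) if s >= 0 else 0.0

def eta_increment(u, v):
    """One sample of eta(u) - eta(v) for eta(t) = sum_{j<=K} H(t - xi_j) Z_j,
    using the same (K, xi, Z) at both time points."""
    K = random.randint(1, 3)
    xis = [random.expovariate(1.0) for _ in range(K)]
    Zs = [random.gauss(1.0, 0.5) for _ in range(K)]
    return sum((H(u - x) - H(v - x)) * z for x, z in zip(xis, Zs))

def var_increment(u, v, paths=20000):
    return statistics.pvariance([eta_increment(u, v) for _ in range(paths)])

# Var(eta(u) - eta(v)) shrinks as u -> v, as the variance bound above requires
wide = var_increment(1.5, 0.5)
narrow = var_increment(1.05, 0.95)
assert narrow < wide
```

As the bound predicts, the variance of the increment collapses as the time gap shrinks, with the residual contribution coming from delays that fall inside the short interval (the $q_{k}$ term).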

To prove the convergence of the finite-dimensional distributions of $\hat{X}^{n}_{k}$, $k=2,3,4$, we note that these processes are essentially sums of independent centered random variables, and we follow the idea of the proof of the central limit theorem under the Lindeberg condition in [Reference Shiryaev35, Theorem III.4.1], which requires the second moments of $\hat{X}^{n}_{k}$, $k=2,3,4$. Let ${\mathcal{F}}_{A}^{n}(t)=\sigma\{A^n(s)\;:\;0\leq s\leq t\}$.

Lemma 5.6. For $0 <t <T$ , we have

\begin{align*}\mathbb{E}\Big[\big(\hat{X}^{n}_{2}(t)\big)^{2}\Big| {\mathcal{F}}^n_A(t)\Big]&= \displaystyle\int_{(0,t]}r_{2}(t-u,t-u)\bar{A}^{n}(du),\\\mathbb{E}\Big[\big(\hat{X}^{n}_{3}(t)\big)^{2}\Big|{\mathcal{F}}^n_A(t)\Big]&= \displaystyle\int_{(0,t]}r_{3}(t-u,t-u)\bar{A}^{n}(du),\\\mathbb{E}\Big[\big(\hat{X}^{n}_{4}(t)\big)^{2}\Big| {\mathcal{F}}^n_A(t)\Big]&= \displaystyle\int_{(0,t]}r_{4}(t-u,t-u)\bar{A}^{n}(du).\end{align*}

Proof. For fixed $t>0$, notice that, conditional on ${\mathcal{F}}^n_A(t)$, $\hat{X}^{n}_{4}(t)$ is a sum of independent centered random variables. It is straightforward that

\begin{equation*}\mathbb{E} \big[ \big(\hat{X}^{n}_{4}(t))^2\big| {\mathcal{F}}^n_A(t) \big] =\frac{1}{n}\sum_{i=1}^{A^{n}(t)}\mathbb{E}\bigg[ \bigg( \sum_{j=1}^{K}H(u_{i}-\xi_{j})\varrho_{j}\bigg)^{2} \bigg] \bigg|_{u_{i}=t-\tau_{i}^{n}}.\end{equation*}

Thus, the formula for $\hat{X}^{n}_{4}(t)$ follows. The conditional second moments of $\hat{X}^{n}_{2}$ and $\hat{X}^{n}_{3}$ are derived similarly.

The same conditioning argument as in the proof of Lemma 5.6 shows that for $s,t\in[0,T]$,

\begin{align*}\mathbb{E}\Big[\hat{X}^{n}_{2}(t)\hat{X}^{n}_{2}(s)\Big|{\mathcal{F}}^n_A(t)\Big]&=\displaystyle\int_{(0,t]}r_{2}(t-u,s-u)\bar{A}^{n}(du),\\\mathbb{E}\Big[\hat{X}^{n}_{3}(t)\hat{X}^{n}_{3}(s)\Big| {\mathcal{F}}^n_A(t) \Big]&=\displaystyle\int_{(0,t]}r_{3}(t-u,s-u)\bar{A}^{n}(du),\\\mathbb{E}\Big[\hat{X}^{n}_{4}(t)\hat{X}^{n}_{4}(s)\Big|{\mathcal{F}}^n_A(t) \Big]&= \displaystyle\int_{(0,t]}r_{4}(t-u,s-u)\bar{A}^{n}(du).\end{align*}

Lemma 5.7. Under Assumptions 2.1, 2.2, and 2.3, the finite-dimensional distributions of the processes $\big(\hat{X}^{n}_{2},\hat{X}^{n}_{3},\hat{X}^{n}_{4}\big)$ converge to those of $(\hat{X}_{2},\hat{X}_{3},\hat{X}_{4})$ , in which $\hat{X}_{k}$ , $k=2,3,4$ , are mutually independent.

Proof. For fixed $t>0$ and $\alpha,\beta,\gamma\in{\mathbb R}$ , we consider first the limit distribution of

(5.7) \begin{equation}\begin{split}\hat{Y}^{n}&\,{:}\,{\raise-1.5pt{=}}\, \alpha\hat{X}^{n}_{2}(t)+\beta\hat{X}^{n}_{3}(t)+\gamma\hat{X}^{n}_{4}(t)= \frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\breve{\eta}_{i}(s)\Big|_{s=t-\tau_{i}^{n}},\\\breve{\eta}_{i}(s)&\,{:}\,{\raise-1.5pt{=}}\, \alpha\sum_{j=1}^{K_{i}}\varsigma_{ij}(s)+\beta\vartheta_{i}(s)+ \gamma\sum_{j=1}^{K_{i}}H(s-\xi_{ij})\varrho_{ij},\end{split}\end{equation}

where, by assumption, the $\breve{\eta}_{i}(\cdot)$ are independent across i; when $\alpha=\beta=\gamma$, $\breve{\eta}(s)=\alpha(\eta(s)-h(s))$, with $\eta$ as defined in the proof of Lemma 5.5. By Lévy's continuity theorem, it suffices to show that the characteristic function of $\hat{Y}^{n}$ converges pointwise to that of $\big(\alpha\hat{X}_{2}(t)+\beta\hat{X}_{3}(t)+\gamma\hat{X}_{4}(t)\big)$, where $\hat{X}_{2}$, $\hat{X}_{3}$, and $\hat{X}_{4}$ are mutually independent.

To this end, we examine the distribution of $\breve{\eta}(s)$ for $s\geq0$, making use of the following inequalities:

\begin{equation*}\big|e^{iu}-(1+iu)\big|\leq \frac{u^{2}}{2},\quad\big|e^{iu}-(1+iu-\frac{u^{2}}{2})\big|\leq \frac{|u|^{3}}{6}, \quad\text{and}\quad\big|\ln(1+v)-v\big|\leq |v|^{2},\end{equation*}

where $\ln$ denotes the principal value of the logarithm; the first two inequalities hold for all $u\in{\mathbb R}$, and the third for complex v with $|v|\leq \frac{1}{2}$. From the fact that $\mathbb{E}[\breve{\eta}(s)]=0$, it follows that for every $a>0$,

\begin{align*}\mathbb{E}\Big[\exp&\Big(\frac{iz}{\sqrt{n}}\breve{\eta}(s)\Big)\Big]= \mathbb{E}\Big[\exp\Big(\frac{iz}{\sqrt{n}}\breve{\eta}(s)\Big);|\breve{\eta}(s)|>a\Big]+\mathbb{E}\Big[\exp\Big(\frac{iz}{\sqrt{n}}\breve{\eta}(s)\Big); |\breve{\eta}(s)|\leq a\Big]\\&= 1+ \frac{z^{2}\theta_{1}}{2n}\mathbb{E}\big[\breve{\eta}^{2}(s); |\breve{\eta}|>a\big]-\frac{z^{2}}{2n}\mathbb{E}\big[\breve{\eta}^{2}(s); |\breve{\eta}|\leq a\big]+ \frac{z^{3}\theta_{2}}{6n^{3/2}}\mathbb{E}\big[|\breve{\eta}(s)|^{3}; |\breve{\eta}|\leq a\big]\\&= 1- \frac{z^{2}}{2n}\mathbb{E}\big[\breve{\eta}^{2}(s)\big]+\frac{z^{2}}{n} R_{s},\end{align*}

where $\theta_{1},\theta_{2}$ are complex numbers with $|\theta_{1}|,|\theta_{2}|\leq 1$ ,

\begin{equation*}|R_{s}|\leq \mathbb{E}\big[\breve{\eta}^{2}(s); |\breve{\eta}(s)|>a\big]+ \frac{|z|a}{6\sqrt{n}}\mathbb{E}\big[\breve{\eta}^{2}(s)\big],\quad\text{and}\quad\Big|\mathbb{E}\big[e^{\frac{iz}{\sqrt{n}}\breve{\eta}(s)}\big]-1\Big|\leq \frac{z^{2}}{2n}\mathbb{E}\big[\breve{\eta}^{2}(s)\big].\end{equation*}

Now, taking $z\in{\mathbb R}$ such that $z^{2}\mathbb{E}[\breve{\eta}^{2}(s)]\leq n$ , we have that for some $|\theta_{3}|\leq 1$ ,

(5.8) \begin{equation}\ln\mathbb{E}\Big[\exp\Big(\frac{iz}{\sqrt{n}}\breve{\eta}(s)\Big)\Big]=\frac{-z^{2}}{2n}\mathbb{E}\big[\breve{\eta}^{2}(s)\big] +\frac{z^{2}}{n}R_{s}+\frac{z^{4}\theta_{3}}{4n^{2}}\mathbb{E}^{2}[\breve{\eta}^{2}(s)].\end{equation}

By the same reasoning as in the proof of Lemma 5.5, we have

(5.9) \begin{align}g(s)&\,{:}\,{\raise-1.5pt{=}}\, \mathbb{E}(\breve{\eta}^{2}(s))=\alpha^{2}\mathbb{E}\Big[\Big(\sum_{j=1}^{K}\varsigma_{j}(s)\Big)^{2} \Big] +\beta^{2}\mathbb{E}\big[ \vartheta(s)^{2} \big] + \gamma^{2}\mathbb{E} \Big[ \Big(\sum_{j=1}^{K}H(s-\xi_{j})\varrho_{j}\Big)^{2} \Big]\nonumber\\&= \gamma^{2}\Big(\mathbb{E}\Big[\Big(\sum_{j=1}^{K}H(s-\xi_{j})Z_{j}\Big)^{2} \Big] -h^{2}(s)\Big)+ \big(\alpha^{2}-\gamma^{2}\big)\Big(\mathbb{E} \Big[ \big(\sum_{j=1}^{K}H(s-\xi_{j})\big)^{2} \Big] -h^{2}(s)\Big)\nonumber\\&\quad+ \big(\beta^{2}-\alpha^{2}-\gamma^{2}\big)\Big(\mathbb{E}\Big[ \Big(\sum_{j=1}^{K}h_{j}(s)\Big)^{2} \Big] -h^{2}(s)\Big),\end{align}

which shows that $g\in{\mathbb D}$ has bounded variation. Applying the integration-by-parts formula (1.3) and Lemma 5.2, we obtain

\begin{align*}\int_{(0,t]}g(t-u)\bar{A}^{n}(du)&= g(0)\bar{A}^{n}(t)+ \int_{[0,t)}\bar{A}^{n}(u) g(t-du)\nonumber\\ &\to g(0)\bar{A}(t)+\int_{[0,t)}\bar{A}(u)g(t-du)=\int_{(0,t]}g(t-u)\bar{A}(du)\end{align*}

in probability as $n\to\infty$ .

Now, conditioning on ${\mathcal{F}}^{n}_{A}(t)$ and plugging in the limit above, we have

\begin{align*}&\ \log\Big(\mathbb{E}\big[ \exp\big(iz\hat{Y}^{n}\big)\big| {\mathcal{F}}^n_A(t) \big] \Big)=\int_{(0,t]}\log\mathbb{E}\Big[\exp\Big(\frac{iz}{\sqrt{n}}\breve{\eta}(t-u)\Big)\Big]A^{n}(du)\nonumber\\[5pt] \!\!= &\,\frac{-z^{2}}{2}\int_{(0,t]}g(t-u)\bar{A}^{n}(du)+ z^{2}\int_{(0,t]}R(t-u)\bar{A}^{n}(du)+\frac{z^{4}}{n} \int_{(0,t]}\theta_{3}g^{2}(t-u)\bar{A}^{n}(du)\ \ \\[5pt] \ \to& -\frac{z^{2}}{2}\int_{(0,t]}g(t-u)\Lambda(du)= -\frac{z^{2}}{2}\Big(\alpha^{2}R_{2}(t,t)+ \beta^{2}R_{3}(t,t)+ \gamma^{2}R_{4}(t,t)\Big)\end{align*}

in probability for every $z\in{\mathbb R}$ by letting $n\to\infty$ and then $a\to\infty$, where the boundedness in (5.8) and the fact that $\displaystyle\sup_{s\in[0,T]}\mathbb{E}\big[\breve{\eta}^{2}(s)\big]<\infty$ are applied. This immediately yields the desired limit distributions of $\hat{X}^{n}_{k}(t)$ at fixed t, as well as the mutual independence of the limits and their independence from the limit of $\hat{X}^{n}_{1}$.

The above convergence extends to the joint characteristic functions of $\big(\hat{X}^{n}_{2}, \hat{X}^{n}_{3}, \hat{X}^{n}_{4}\big)$ at multiple time points, that is, to linear combinations of the form

\begin{equation*}\sum_{l=1}^{m}\Big(\alpha_{l}\sum_{j=1}^{K}\varsigma_{j}(s_{l})+ \beta_{l}\vartheta(s_{l})+ \gamma_{l}\sum_{j=1}^{K}H(s_{l}-\xi_{j})\varrho_{j}\Big)\end{equation*}

for some $\alpha_{l}, \beta_{l}, \gamma_{l}\in \mathbb{R}$ and $0<s_{1}<\cdots<s_{m}<T$. Applying the same procedure completes the proof of the convergence of the finite-dimensional distributions of $\hat{X}^{n}_{k}$, $k=2,3,4$, as well as the mutual independence of the limit processes. We remark only that the second moment of the random variable above, which plays the role of g in (5.9), is a function of m variables that may fail to be continuous on its domain; however, $u\mapsto g(s_{1}-u, s_{2}-u,\cdots, s_{m}-u)$ is always a càdlàg function of bounded variation, and hence induces a signed measure on [0, T].
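The limiting variance identified above can be checked by simulation in a simple special case. The sketch below assumes Poisson arrivals of rate $n\lambda$, cluster size $K\equiv1$, shot shape $H(s)=e^{-s}\mathbf{1}(s\ge0)$, delays $\xi\sim\mathrm{Exp}(1)$, and centered noises $\varrho\sim N(0,1)$, none of which is prescribed by the paper; in this case $g(s)=\mathbb{E}[H(s-\xi)^{2}]=e^{-s}-e^{-2s}$, and the limit variance at time t is $\lambda\int_{0}^{t}g(u)\,du$.

```python
import math, random, statistics

random.seed(1)

def H(s):                                       # hypothetical shot shape
    return math.exp(-s) if s >= 0 else 0.0

lam, t, n, paths = 1.0, 2.0, 200, 1500

def sample_X4():
    """One sample of hat{X}^n_4(t) with K = 1: n^{-1/2} sum_i H(t - tau_i - xi_i) rho_i."""
    total, s = 0.0, 0.0
    while True:
        s += random.expovariate(n * lam)        # Poisson arrivals at rate n*lam
        if s > t:
            break
        xi = random.expovariate(1.0)            # random delay
        rho = random.gauss(0.0, 1.0)            # centered noise
        total += H(t - s - xi) * rho
    return total / math.sqrt(n)

emp_var = statistics.pvariance([sample_X4() for _ in range(paths)])

# g(s) = E[H(s - xi)^2] = e^{-s} - e^{-2s} for xi ~ Exp(1);
# limit variance = lam * int_0^t g(u) du
limit_var = lam * ((1 - math.exp(-t)) - (1 - math.exp(-2 * t)) / 2)
assert abs(emp_var / limit_var - 1) < 0.2
```

The empirical variance of the normalized sum stabilizes near $\int_{(0,t]}g(t-u)\Lambda(du)$, which is the quantity appearing in the limit of the log-characteristic function.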

For the tightness of $\hat{X}^{n}_{k}$, $k=2,3,4$, we establish the following probability bound on the increments of the prelimit processes, adapting an idea used for empirical processes.

Lemma 5.8. Under Assumptions 2.1, 2.2, and 2.3, for $0 \le s < r <t \leq T$ ,

\begin{align*}\max\Bigg\{&\mathbb{P}\Big(\big|\hat{X}^{n}_{2}(t)-\hat{X}^{n}_{2}(r)\big|\wedge\big|\hat{X}^{n}_{2}(r)-\hat{X}^{n}_{2}(s)\big|\geq \lambda\Big|{\mathcal{F}}^n_A(t)\Big),\\&\mathbb{P}\Big(\big|\hat{X}^{n}_{3}(t)-\hat{X}^{n}_{3}(r)\big|\wedge\big|\hat{X}^{n}_{3}(r)-\hat{X}^{n}_{3}(s)\big|\geq \lambda\Big| {\mathcal{F}}^n_A(t)\Big),\\&\mathbb{P}\Big(\big|\hat{X}^{n}_{4}(t)-\hat{X}^{n}_{4}(r)\big|\wedge\big|\hat{X}^{n}_{4}(r)-\hat{X}^{n}_{4}(s)\big|\geq \lambda\Big| {\mathcal{F}}^n_A(t) \Big)\Bigg\}\\\leq&\ \frac{c\big(\bar{A}^{n}(T)+1\big)^{2}}{\lambda^{4}}\bigg((t-s)^{4\gamma}+\bigg( \sum_{k=0}^{\infty}\bar{A}^{n}(t-q_{k})- \sum_{k=0}^{\infty}\bar{A}^{n}(s-q_{k})\bigg)^{2}\bigg)\end{align*}

for some constant $c>0$ independent of n and of the arrival processes $\{A^{n}\}$.

Proof. Throughout the proof, for fixed $0\leq s<r<t\leq T$ and every $1\leq i\leq A^{n}(T)$, we put $(u_{i},w_{i},v_{i})=(t-\tau_{i}^{n},r-\tau_{i}^{n},s-\tau_{i}^{n})$, which are constants less than T conditional on $A^{n}$. We also need the following expansion:

(5.10) \begin{align}x^{2}y^{2}&=\big(\sum_{k}x_{k}\big)^{2}\big(\sum_{k}y_{k}\big)^{2}=\sum_{i,j,l,m}x_{i}x_{j}y_{l}y_{m} \nonumber\\&=\sum_{k}x_{k}^{2}y_{k}^{2} + \sum_{i\neq j}\big(x_{i}^{2}y_{j}^{2} + 2x_{i}x_{j}y_{i}y_{j}\big)+r(x,y),\end{align}

where r(x,y) collects the terms in which some subscript appears exactly once.
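The use of (5.10) below rests on the fact that, for independent centered pairs $(x_{k},y_{k})$, every term of r(x,y) has zero expectation, so that $\mathbb{E}[x^{2}y^{2}]=\sum_{k}\mathbb{E}[x_{k}^{2}y_{k}^{2}]+\sum_{i\neq j}\big(\mathbb{E}[x_{i}^{2}]\mathbb{E}[y_{j}^{2}]+2\mathbb{E}[x_{i}y_{i}]\mathbb{E}[x_{j}y_{j}]\big)$. This identity can be verified exactly for small discrete distributions (the joint laws below are arbitrary illustrations):

```python
import itertools

# three independent pairs (x_k, y_k); each pair has a small discrete joint law,
# centered so that E[x_k] = E[y_k] = 0
raw = [
    [(-1.0,  2.0, 0.3), (0.5, -1.0, 0.5), (2.0, 1.0, 0.2)],
    [( 1.0,  1.0, 0.5), (-1.0, -1.0, 0.25), (0.0, 3.0, 0.25)],
    [( 0.0, -1.0, 0.4), (1.0,  0.0, 0.4), (-2.0, 2.0, 0.2)],
]

def center(pair):
    mx = sum(x * p for x, y, p in pair)
    my = sum(y * p for x, y, p in pair)
    return [(x - mx, y - my, p) for x, y, p in pair]

pairs = [center(p) for p in raw]

def mom(pair, a, b):                 # E[x^a y^b] for one pair
    return sum((x ** a) * (y ** b) * p for x, y, p in pair)

# exact LHS: E[(sum x)^2 (sum y)^2] by enumerating all joint outcomes
lhs = 0.0
for combo in itertools.product(*pairs):
    sx = sum(x for x, y, p in combo)
    sy = sum(y for x, y, p in combo)
    prob = 1.0
    for x, y, p in combo:
        prob *= p
    lhs += sx ** 2 * sy ** 2 * prob

# RHS: diagonal terms plus the i != j terms retained in (5.10); every term of
# r(x, y) has a singleton factor E[x_k] or E[y_k] = 0
m = len(pairs)
rhs = sum(mom(pairs[k], 2, 2) for k in range(m))
rhs += sum(mom(pairs[i], 2, 0) * mom(pairs[j], 0, 2)
           + 2 * mom(pairs[i], 1, 1) * mom(pairs[j], 1, 1)
           for i in range(m) for j in range(m) if i != j)

assert abs(lhs - rhs) < 1e-9
```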

We first consider the increments of $\hat{X}^{n}_{4}$ . For every $v<w<u\leq T$ , we take

\begin{equation*}x=\sum_{j=1}^{K}\varrho_{j}\big(H(u-\xi_{j})-H(w-\xi_{j})\big)\quad\text{and}\quad y=\sum_{j=1}^{K}\varrho_{j}\big(H(w-\xi_{j})-H(v-\xi_{j})\big).\end{equation*}

Applying Hölder’s inequality, we have

(5.11) \begin{align}\mathbb{E}\big[x^{2}y^{2}\big] \leq&\ \mathbb{E}\Big[K^{2}\sum_{j,j^\prime}^{K}\varrho_{j}^{2}\varrho_{j^\prime}^{2}\big(H(u-\xi_{j})-H(w-\xi_{j})\big)^{2}\big(H(w-\xi_{j^\prime})-H(v-\xi_{j^\prime})\big)^{2}\Big]\nonumber\\\leq& \,4c_{0}^{3}\mathbb{E}[K^{4}]\mathbb{E}[\varrho^{4}]\bigg((u-v)^{4\gamma}+(u-v)^{2\gamma} \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big)\bigg),\end{align}

where (5.4) is applied in the last inequality. Similarly, one can check that

(5.12) \begin{align}\mathbb{E}\big[x^{2}\big]\leq &\ \mathbb{E}\Big[K\sum_{j=1}^{K}\varrho_{j}^{2}\big(H(u-\xi_{j})-H(w-\xi_{j})\big)^{2}\Big]\nonumber\\ \leq & \ 2c_{0}^{2}\mathbb{E}[K^{2}]\mathbb{E}[\varrho^{2}]\bigg((u-v)^{2\gamma}+ \sum_{k=0}^{\infty}\textbf{1}\big(q_{k}\in(v,u]\big)\bigg),\end{align}

by (5.3), and the same inequality also holds for $\mathbb{E}\big[y^{2}\big]$ and $\mathbb{E}[|xy|]$ .

Then, for $1\leq i\leq A^{n}(T)$ , we take

\begin{equation*}x_{i}=\sum_{j=1}^{K_{i}}\varrho_{ij}\mathbf{1}\big(\xi_{ij}\in(w_{i},u_{i}]\big)\quad\text{and}\quad y_{i}=\sum_{j=1}^{K_{i}}\varrho_{ij}\mathbf{1}\big(\xi_{ij}\in(v_{i},w_{i}]\big).\end{equation*}

By the conditional independence across i and the centering, we have from the last identity in (5.10) that

\begin{align*}&\ \mathbb{E}\Big[\big(\hat{X}^{n}_{4}(t)-\hat{X}^{n}_{4}(r)\big)^{2}\big(\hat{X}^{n}_{4}(r)-\hat{X}^{n}_{4}(s)\big)^{2}\Big|{\mathcal{F}}^{n}_{A}(T) \Big]\\=&\ \frac{1}{n^{2}}\sum_{i=1}^{A^{n}(t)}\mathbb{E}\big[x_{i}^{2}y_{i}^{2}\big|\tau_{i}^{n}\big] +\frac{1}{n^{2}}\sum_{i\neq i^\prime}^{A^{n}(t)} \Big(\mathbb{E}\big[x_{i}^{2}\big|\tau_{i}^{n}\big] \mathbb{E}\big[y_{i^\prime}^{2}\big|\tau_{i^\prime}^{n}\big] +2\mathbb{E}\big[x_{i}y_{i}\big|\tau_{i}^{n}\big]\mathbb{E}\big[x_{i^\prime}y_{i^\prime}\big|\tau_{i^\prime}^{n}\big] \Big)\\\leq&\ \frac{1}{n^{2}}\sum_{i=1}^{A^{n}(t)}\mathbb{E}\big[x_{i}^{2}y_{i}^{2}\big|\tau_{i}^{n}\big] +\frac{1}{n^{2}}\bigg(\sum_{i=1}^{A^{n}(t)}\mathbb{E}\big[x_{i}^{2}\big|\tau_{i}^{n}\big] \bigg)\bigg(\sum_{i=1}^{A^{n}(t)}\mathbb{E}\big[y_{i}^{2}\big|\tau_{i}^{n}\big] \bigg)+\frac{2}{n^{2}}\bigg(\sum_{i=1}^{A^{n}(t)}\mathbb{E}\big[x_{i}y_{i}\big|\tau_{i}^{n}\big] \bigg)^{2}.\end{align*}

Plugging (5.11) and (5.12) into the inequality above, and noticing that $u_{i}-v_{i}\equiv(t-s)$ for every $i\in{\mathbb N}$ and that

\begin{equation*}\mathbf{1}\big(q_{k}\in(v_{i},u_{i}]\big)=\mathbf{1}\big(\tau_{i}^{n}\in(s-q_{k},t-q_{k}]\big)\end{equation*}

for all $k\geq0$ , we have almost surely

\begin{align*}\text{RHS}\leq&\ 4c_{0}^{3}\mathbb{E}[K^{4}]\mathbb{E}[\varrho^{4}]\bigg((t-s)^{4\gamma}\bar{A}^{n}(T)+(t-s)^{2\gamma} \sum_{k=0}^{\infty}\big(\bar{A}^{n}(t-q_{k})-\bar{A}^{n}(s-q_{k})\big)\bigg)\nonumber\\&\quad + 12c_{0}^{4}\mathbb{E}^{2}[K^{2}]\mathbb{E}^{2}[\varrho^{2}]\bigg((t-s)^{2\gamma}\bar{A}^{n}(T)+ \sum_{k=0}^{\infty}\big(\bar{A}^{n}(t-q_{k})-\bar{A}^{n}(s-q_{k})\big)\bigg)^{2}\\\leq&\ 28c_{0}^{4}\mathbb{E}[K^{4}]\mathbb{E}[\varrho^{4}]\big(\bar{A}^{n}(T)+1\big)^{2}\bigg((t-s)^{4\gamma}+\bigg( \sum_{k=0}^{\infty}\big(\bar{A}^{n}(t-q_{k})-\bar{A}^{n}(s-q_{k})\big)\bigg)^{2}\bigg),\end{align*}

which shows the inequality for $\hat{X}^{n}_{4}$ .

For the second moments of the increments of $\hat{X}^{n}_{2}$, for $v<w<u<T$, we define

\begin{align*}x_{j}=H(u-\xi_{j})-H(w-\xi_{j})&\quad\text{and}\quad x=\sum_{j=1}^{K}\big(\varsigma_{j}(u)-\varsigma_{j}(w)\big)=\sum_{j=1}^{K}\big(x_{j}-\mathbb{E}(x_{j})\big),\\y_{j}=H(w-\xi_{j})-H(v-\xi_{j})&\quad\text{and}\quad y=\sum_{j=1}^{K}\big(\varsigma_{j}(w)-\varsigma_{j}(v)\big)=\sum_{j=1}^{K}\big(y_{j}-\mathbb{E}(y_{j})\big).\end{align*}

Applying (5.3), (5.4), and the fact $H(z)=0$ for $z<0$ , it can be checked that for all $j,j^\prime\geq1$ ,

\begin{align*}&\ \max\Big\{\mathbb{E}[x_{j}^{2}y_{j^\prime}^{2}],\mathbb{E}[x_{j}^{2}]\mathbb{E}[y_{j^\prime}^{2}],\mathbb{E}^{2}[x_{j}]\mathbb{E}[y_{j^\prime}^{2}],\mathbb{E}[x_{j}^{2}]\mathbb{E}^{2}[y_{j^\prime}],\mathbb{E}^{2}[x_{j}]\mathbb{E}^{2}[y_{j^\prime}],\\&\quad \mathbb{E}^{2}[x_{j}y_{j^\prime}], \big|\mathbb{E}[x_{j}]\mathbb{E}[x_{j}y_{j^\prime}^{2}]\big|, \big|\mathbb{E}[x_{j}^{2}y_{j^\prime}]\mathbb{E}[y_{j^\prime}]\big|\Big\}=\max\big\{\mathbb{E}[x_{j}^{2}y_{j^\prime}^{2}], \mathbb{E}[x_{j}^{2}]\mathbb{E}[y_{j^\prime}^{2}]\big\}\\&\quad\leq 4c_{0}^{4}\bigg((u-v)^{4\gamma}+(u-v)^{2\gamma} \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big)\bigg).\end{align*}

Therefore, by applying Hölder’s inequality, we have

\begin{align*}\mathbb{E}[x^{2}y^{2}]\leq&\ \mathbb{E}\Big[K^{2}\sum_{j,j^\prime}^{K}\big(x_{j}-\mathbb{E}[x_{j}]\big)^{2}\big(y_{j^\prime}-\mathbb{E}[y_{j^\prime}]\big)^{2}\Big]\\[5pt] \leq&\, 64c_{0}^{4}\mathbb{E}[K^{4}]\bigg((u-v)^{4\gamma}+(u-v)^{2\gamma} \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big)\bigg)\end{align*}

and an inequality similar to (5.12),

\begin{equation*}\max\{\mathbb{E}[x^{2}], |\mathbb{E}[xy]|, \mathbb{E}[y^{2}]\}\leq2c_{0}^{2}\mathbb{E}[K^{2}]\bigg((u-v)^{2\gamma}+ \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big)\bigg).\end{equation*}

Finally, applying the last identity in (5.10), we have almost surely

\begin{align*}&\ \mathbb{E}\Big[\big(\hat{X}^{n}_{2}(t)-\hat{X}^{n}_{2}(r)\big)^{2}\big(\hat{X}^{n}_{2}(r)-\hat{X}^{n}_{2}(s)\big)^{2}\Big|{\mathcal{F}}^{n}_{A}(T)\Big]\\[5pt] \leq&\ 64c_{0}^{4}\mathbb{E}[K^{4}]\bigg((t-s)^{4\gamma}\bar{A}^{n}(T)+(t-s)^{2\gamma} \sum_{k=0}^{\infty}\big(\bar{A}^{n}(t-q_{k})-\bar{A}^{n}(s-q_{k})\big)\bigg)\nonumber\\[5pt] &\quad + 12c_{0}^{4}\mathbb{E}^{2}[K^{2}]\bigg((t-s)^{2\gamma}\bar{A}^{n}(T)+ \sum_{k=0}^{\infty}\big(\bar{A}^{n}(t-q_{k})-\bar{A}^{n}(s-q_{k})\big)\bigg)^{2}\\[5pt] \leq&\ 88c_{0}^{4}\mathbb{E}[K^{4}]\big(\bar{A}^{n}(T)+1\big)^{2}\bigg((t-s)^{4\gamma}+ \bigg( \sum_{k=0}^{\infty}\big(\bar{A}^{n}(t-q_{k})-\bar{A}^{n}(s-q_{k})\big)\bigg)^{2}\bigg),\end{align*}

which proves the inequality for $\hat{X}^{n}_{2}$ .

For the moments of $\hat{X}^{n}_{3}$, we define

\begin{equation*}x=\sum_{j=1}^{K}\big(h_{j}(u)-h_{j}(w)\big)\quad\text{and}\quad y=\sum_{j=1}^{K}\big(h_{j}(w)-h_{j}(v)\big).\end{equation*}

Then $\vartheta(u)-\vartheta(w)=x-\mathbb{E}[x]$ . Applying (5.6), we have almost surely

\begin{align*}x^{2}\leq& 2c_{0}^{2}K^{2}\bigg((u-w)^{2\gamma}+ \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(w,u]\big)\bigg),\\[5pt] y^{2}\leq& 2c_{0}^{2}K^{2}\bigg((w-v)^{2\gamma}+ \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,w]\big)\bigg),\end{align*}

which shows for small $u-v$ ,

\begin{align*} \mathbb{E}\big[(x-\mathbb{E}[x])^{2}(y-\mathbb{E}[y])^{2}\big]&\leq 64 c_{0}^{4}\mathbb{E}[K^{4}]\bigg((u-v)^{4\gamma}+(u-v)^{2\gamma} \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big)\bigg),\\[5pt] \max\big\{\mathbb{E}\big[(x-\mathbb{E}[x])^{2}\big], \mathbb{E}\big[(y-\mathbb{E}[y])^{2}\big]\big\}&\leq \max\big\{\mathbb{E}[x^{2}], \mathbb{E}[y^{2}]\big\}\\&\leq 2c_{0}^{2}\mathbb{E}[K^{2}]\bigg((u-v)^{2\gamma}+ \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big)\bigg),\\\big|\mathbb{E}\big[(x-\mathbb{E}[x])(y-\mathbb{E}[y])\big]\big|&\leq 2c_{0}^{2}\mathbb{E}[K^{2}]\bigg((u-v)^{2\gamma}+ \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big)\bigg).\end{align*}

Therefore, by applying (5.10), we have

\begin{align*}&\ \mathbb{E}\Big[\big(\hat{X}^{n}_{3}(t)-\hat{X}^{n}_{3}(r)\big)^{2}\big(\hat{X}^{n}_{3}(r)-\hat{X}^{n}_{3}(s)\big)^{2}\Big|{\mathcal{F}}^{n}_{A}(T)\Big]\\\leq&\ 64c_{0}^{4}\mathbb{E}[K^{4}]\bigg((t-s)^{4\gamma}\bar{A}^{n}(T)+(t-s)^{2\gamma} \sum_{k=0}^{\infty}\big(\bar{A}^{n}(t-q_{k})-\bar{A}^{n}(s-q_{k})\big)\bigg)\\&\quad + 12c_{0}^{4}\mathbb{E}^{2}[K^{2}]\bigg((t-s)^{2\gamma}\bar{A}^{n}(T)+ \sum_{k=0}^{\infty}\big(\bar{A}^{n}(t-q_{k})-\bar{A}^{n}(s-q_{k})\big)\bigg)^{2}.\end{align*}

This proves the last inequality for $\hat{X}^{n}_{3}$ and completes the proof.

6. Proof of Theorem 4.2

Recalling $\tilde{H}_{j}$ , $\tilde{h}_{j}$ , and $\tilde{h}$ defined in (4.2), we decompose the process $\hat{X}^{n}(t)$ as follows:

\begin{equation*}\hat{X}^{n}(t)= \hat{X}^{n}_{1}(t)+\hat{X}^{n}_{2}(t)+\hat{X}^{n}_{3}(t)+\hat{X}^{n}_{4}(t),\end{equation*}

where the subprocesses are defined by

\begin{align*}\hat{X}^{n}_{1}(t)&=\sqrt{n}\Big(\sum_{i=1}^{A^{n}(t)}\frac{\tilde{h}(t-\tau_{i}^{n})}{n}-\int_{0}^{t}\tilde{h}(t-s)\Lambda(ds)\Big)=\int_{(0,t]}\tilde{h}(t-s)\hat{A}^{n}(ds),\\\hat{X}^{n}_{2}(t)&=\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\sum_{j=1}^{K_{i}}\Big(\tilde{H}_{j}(t-\tau_{i}^{n}-\xi_{ij})-\tilde{h}_{j}(t-\tau_{i}^{n})\Big)=\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\sum_{j=1}^{K_{i}}\tilde{\varsigma}_{ij}(t-\tau_{i}^{n}),\\\hat{X}^{n}_{3}(t)&=\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\Big(\sum_{j=1}^{K_{i}}\tilde{h}_{j}(t-\tau_{i}^{n})-\tilde{h}(t-\tau_{i}^{n})\Big)=\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\tilde{\vartheta}_{i}(t-\tau_{i}^{n}),\\\hat{X}^{n}_{4}(t)&=\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\sum_{j=1}^{K_{i}}\Big(\mathbf{1}\big(0\leq t-\tau_{i}^{n}-\xi_{ij}<Z_{ij}\big)-\tilde{H}_{j}(t-\tau_{i}^{n}-\xi_{ij})\Big)=\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\sum_{j=1}^{K_{i}}\tilde{\varrho}_{ij}(t-\tau_{i}^{n}).\end{align*}

Noticing that $\mathbf{1}(0\leq u< v)=\mathbf{1}(u\geq0)-\mathbf{1}(u\geq v)$ , $\hat{X}^{n}_{4}$ can further be written as

\begin{equation*}\hat{X}^{n}_{4}(t)=\frac{1}{\sqrt{n}}\sum_{i=1}^{A^{n}(t)}\sum_{j=1}^{K_{i}}\Big(F_{j}(t-\tau_{i}^{n}-\xi_{ij})-\mathbf{1}(\tau_{i}^{n}+\xi_{ij}+Z_{ij}\leq t)\Big)\quad\text{for all $t\geq0$}.\end{equation*}
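At fluid scale, the decomposition above is centered around $\tilde{h}$: for Poisson arrivals of rate $n\lambda$, $n^{-1}X^{n}(t)\approx \lambda\int_{0}^{t}\tilde{h}(u)\,du$. The following simulation sketch assumes batch sizes uniform on $\{1,2,3\}$ and Exp(1) delays and service times (illustrative choices only, not from the paper), in which case $\tilde{h}(u)=\mathbb{E}[K]\,u e^{-u}$:

```python
import math, random

random.seed(3)

lam, t, n, reps = 1.0, 3.0, 200, 20

def queue_content(t):
    """Number in service at time t: customers with tau + xi <= t < tau + xi + Z."""
    count, s = 0, 0.0
    while True:
        s += random.expovariate(n * lam)          # Poisson batch arrivals at rate n*lam
        if s > t:
            break
        K = random.randint(1, 3)                  # batch size (hypothetical law)
        for _ in range(K):
            xi = random.expovariate(1.0)          # delay before entering service
            Z = random.expovariate(1.0)           # service time
            if 0.0 <= t - s - xi < Z:
                count += 1
    return count

est = sum(queue_content(t) for _ in range(reps)) / (reps * n)

# fluid limit: lam * E[K] * int_0^t u e^{-u} du with E[K] = 2 and xi, Z ~ Exp(1)
fluid = lam * 2.0 * (1.0 - (1.0 + t) * math.exp(-t))
assert abs(est - fluid) < 0.15
```

The fluctuations of $n^{-1/2}(X^{n}(t)-n\,\mathrm{fluid})$ around this fluid path are what the four subprocesses $\hat{X}^{n}_{k}$ decompose.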

The convergence of the new $\hat{X}^{n}_{1}$ in $({\mathbb D},J_{1})$ is proved using integration by parts and the continuous mapping theorem. Under Assumption 4.1, $\tilde{h}\in{\mathbb D}$ has bounded variation and plays the same role as h did for $\hat{X}^{n}_{1}$ in Section 5. The discussion in Subsection 5.1 and Lemma 5.2 apply directly, which proves the convergence. For $\hat{X}^{n}_{k}$, $k=2,3,4$, we follow the same steps as at the beginning of Subsection 5.2.

Step 1. The existence of continuous modifications of the new Gaussian processes $\hat{X}_{k}$ follows from their continuity in quadratic mean, as in Lemma 5.5.

Step 2. Since the $\hat{X}^{n}_{k}$ are still random sums of independent variables with marks depending on $A^{n}$, the idea of Lemma 5.7 applies. The convergence of their finite-dimensional distributions, as well as their mutual independence, is proved by examining the second moments of the joint distributions of the new processes.

Step 3. The tightness of the laws of $\hat{X}^{n}_{2}$ and $\hat{X}^{n}_{3}$ is proved by bounding the probabilities of their increments. In this model, $\tilde{H}_{j}\in{\mathbb C}[0,\infty)$ is decreasing on $[0,\infty)$ and vanishes on $(-\infty,0)$; more importantly, it is uniformly Hölder continuous of order $\gamma$ by Assumption 4.1. Therefore, the arguments for $\hat{X}^{n}_{2}$ and $\hat{X}^{n}_{3}$ in the proof of Lemma 5.8 apply.

Step 4. The proof of the tightness for $\hat{X}^{n}_{4}$ is slightly different, since its summands involve the difference between an indicator and a distribution function and depend on the two variables $\xi$ and Z. We only sketch the proof.

To find the bound for the second moments of the increments

\begin{equation*}\mathbb{E}\Big[\big(\hat{X}^{n}_{4}(t)-\hat{X}^{n}_{4}(r)\big)^{2}\big(\hat{X}^{n}_{4}(r)-\hat{X}^{n}_{4}(s)\big)^{2}\Big],\end{equation*}

for every $0\leq s<r<t\leq T$ , we define, for $v<w<u\leq T$ ,

\begin{align*}x_{j}&=\big(F_{j}(u-\xi_{j})-F_{j}(w-\xi_{j})\big)-\mathbf{1}\big(\xi_{j}+Z_{j}\in(w,u]\big),\\[5pt] y_{j}&=\big(F_{j}(w-\xi_{j})-F_{j}(v-\xi_{j})\big)-\mathbf{1}\big(\xi_{j}+Z_{j}\in(v,w]\big),\end{align*}

where (v,w,u) represents $(s-\tau_{i}^{n},r-\tau_{i}^{n},t-\tau_{i}^{n})$, which, conditional on ${\mathcal{F}}^{n}_{A}(t)$, are constants (possibly negative). By the boundedness of the variables, it is not hard to see that

\begin{equation*}\begin{gathered}\mathbb{E}\big[x_{j}^{2}\big]\leq \mathbb{P}\big(\xi_{j}+Z_{j}\in(w,u]\big)=\int_{0-}^{u}\mathbb{P}(\xi_{j}\in dz) \mathbb{P}\big(Z_{j}\in(w-z,u-z]\big)\leq c_{0}(u-w)^{2\gamma}\,, \\[5pt] \mathbb{E}\big[y_{j}^{2}\big]\leq \mathbb{P}\big(\xi_{j}+Z_{j}\in(v,w]\big)\leq c_{0}(w-v)^{2\gamma},\end{gathered}\end{equation*}

from the Hölder continuity of Z under Assumption 4.1. Moreover,

\begin{align*}x_{j}^{2}\leq&\ \mathbf{1}(\xi_{j}+Z_{j}\in(w,u])+ \big(F_{j}(u-\xi_{j})-F_{j}(w-\xi_{j})\big)^{2}\leq \mathbf{1}(\xi_{j}+Z_{j}\in(w,u])+ c_{0}(u-w)^{2\gamma}\,, \\[5pt] y_{j}^{2}\leq&\ \mathbf{1}(\xi_{j}+Z_{j}\in(v,w])+ \big(F_{j}(w-\xi_{j})-F_{j}(v-\xi_{j})\big)^{2}\leq \mathbf{1}(\xi_{j}+Z_{j}\in(v,w])+ c_{0}(w-v)^{2\gamma}.\end{align*}

Therefore, we have from (4.1) in Assumption 4.1, for small $u-v$ ,

(6.1) \begin{equation}\begin{split}\mathbb{E}\big[x_{j}^{2}y_{j^\prime}^{2}\big]\leq&\ 3c_{0}^{2}(u-v)^{4\gamma}+\mathbb{P}\big(\xi_{j}+Z_{j}\in(w,u], \xi_{j^\prime}+Z_{j^\prime}\in(v,w]\big)\\\leq&\ 4c_{0}^{2}(u-v)^{4\gamma}+ c_{0}(u-v)^{2\gamma} \sum_{k=0}^{\infty}\mathbf{1}\big(q_{k}\in(v,u]\big).\end{split}\end{equation}

Applying Hölder’s inequality and (5.10), similarly to the previous discussion, we have

\begin{align*}&\ \mathbb{E}\Big[\big(\hat{X}^{n}_{4}(t)-\hat{X}^{n}_{4}(r)\big)^{2}\big(\hat{X}^{n}_{4}(r)-\hat{X}^{n}_{4}(s)\big)^{2}\Big|{\mathcal{F}}^{n}_{A}(T)\Big]\\\leq&\ c \mathbb{E}[K^{4}]\big(\bar{A}^{n}(T)+1\big)^{2}\bigg((t-s)^{4\gamma}+ \bigg( \sum_{k=0}^{\infty}\big(\bar{A}^{n}(t-q_{k})-\bar{A}^{n}(s-q_{k})\big)\bigg)^{2}\bigg)\end{align*}

for some constant $c>0$ , which proves the tightness. This completes the proof.

Appendix A. Proof of Lemma 5.2

For completeness, we provide a proof of Lemma 5.2.

Proof of Lemma 5.2. Since every function of bounded variation can be expressed as the difference of two increasing functions, it is sufficient to prove the result for the case in which g is a decreasing function. Moreover, since the integral depends only on the value of g on $(0,\infty)$ , we further assume without loss of generality that $g(z)=g(0)$ for $z\leq 0$ . Then $g(t-ds)$ is a positive measure on ${\mathbb R}$ induced by the càglàd increasing function $s\to g(t-s)$ , and

\begin{equation*}\int_{[a,b)}g(t-ds)=g(t-b)-g(t-a)\end{equation*}

for all $b>a$ .

For any $\varepsilon>0$, by the separability of $({\mathbb D},J_{1})$, z can be approximated by a simple function f such that

\begin{equation*}f(t)=\sum_{i}f(r_{i})\mathbf{1}\big(t\in[r_{i},r_{i+1})\big)\quad\text{and}\quad||z-f||_{\infty}<\varepsilon,\end{equation*}

where $||\cdot||_{\infty}$ represents the uniform norm on [0, T]. For any $T>t>s\geq0$ , we have

\begin{align*}\big|\psi_{g}(z)(t)-\psi_{g}(z)(s)\big|\leq&\ 2||\psi_{g}(z)-\psi_{g}(f)||_{\infty}+ \big|\psi_{g}(f)(t)-\psi_{g}(f)(s)\big|\\\leq&\ 2\varepsilon(g(0)-g(t))+ \big|\psi_{g}(f)(t)-\psi_{g}(f)(s)\big|.\end{align*}

On the other hand, by the definition of f,

\begin{align*}\psi_{g}(f)(t)-\psi_{g}(f)(s)&= \int_{[0,s)}f(r)\big(g(t-dr)-g(s-dr)\big)+ \int_{[s,t)}f(r)g(t-dr)\\&= \sum_{i}f(r_{i})\Big(\big(g(t-r_{i+1})-g(s-r_{i+1})\big)-\big(g(t-r_{i})-g(s-r_{i})\big)\Big)\\&\quad\quad+ \theta\cdot||f||_{\infty}\big(g(0)-g(t-s)\big),\end{align*}

where $\theta$ is a number with $|\theta|\leq 1$ and $||f||_{\infty}=\sup_{u\in[0,T]}|f(u)|$ .

For (i) and (ii), by the right-continuity of g, letting $t\downarrow s=s_{0}\geq0$ and then $\varepsilon\downarrow0+$ proves the right-continuity of $\psi_{g}(z)$ at $s_{0}$. Moreover, letting $t,s\uparrow s_{0}$ for some $s_{0}>0$ with $t-s\to0+$ and then $\varepsilon\downarrow0+$, the existence of left limits of g also yields the existence of a left limit of $\psi_{g}(z)$ at $s_{0}$; and $\psi_{g}(z)(0)=0$ by definition. This also proves the continuity of $\psi_{g}(z)$ at any $s>0$ at which g is continuous. On the other hand, if $z\in {\mathbb C}$ and $z(0)=0$, by the second identity in the definition of $\psi_{g}$ in (5.5),

\begin{equation*}\psi_{g}(z)(t)-\psi_{g}(z)(s)=\int_{(0,s]}\big(z(t-u)-z(s-u)\big)g(du)+ \int_{(s,t]}z(t-u)g(du).\end{equation*}

The continuity follows from the uniform continuity of z on [0, T] and the right-continuity of g.

For (iii), let $z_{n}\to z$ in $({\mathbb D},J_{1})$ ; since $z\in{\mathbb C}$ we have $||z_{n}-z||_{\infty} \to0$ . It suffices to prove that $||\psi_{g}(z_{n})-\psi_{g}(z)||_{\infty}\to0$ as $n\to\infty$ . By the monotonicity of g on [0, T], we have

\begin{equation*}||\psi_{g}(z_{n})-\psi_{g}(z)||_{\infty}\leq ||z_{n}-z||_{\infty}\big(g(0)-g(T)\big),\end{equation*}

which converges to zero as $n\to\infty$ .
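The Lipschitz bound used in (iii) can be illustrated numerically. The sketch below takes a decreasing step function g with arbitrary illustrative jumps and two continuous inputs, computes $\psi_{g}(z)(t)=\int_{[0,t)}z(r)\,g(t-dr)$ as a finite sum over the jumps of g, and checks $||\psi_{g}(z_{1})-\psi_{g}(z_{2})||_{\infty}\leq ||z_{1}-z_{2}||_{\infty}\big(g(0)-g(T)\big)$ on a grid:

```python
import math

# hypothetical decreasing step function g: g(x) = g0 + sum of (negative) jumps at p <= x
g0 = 2.0
gjumps = [(0.4, -0.5), (1.0, -0.7), (1.8, -0.3)]

def g(x):
    return g0 + sum(sz for p, sz in gjumps if p <= x)

def psi(z, t):
    """psi_g(z)(t) = int_{[0,t)} z(r) g(t - dr); for step g, the measure g(t - dr)
    places mass -Delta at r = t - p for each jump Delta of g at p."""
    return sum(z(t - p) * (-sz) for p, sz in gjumps if 0 < p <= t)

T = 2.5
z1 = lambda u: math.sin(3 * u)
z2 = lambda u: math.sin(3 * u) + 0.05 * math.cos(u)

grid = [k * T / 400 for k in range(401)]
sup_diff_psi = max(abs(psi(z1, t) - psi(z2, t)) for t in grid)
sup_diff_z = max(abs(z1(u) - z2(u)) for u in grid)

# the Lipschitz bound from the proof of Lemma 5.2(iii)
assert sup_diff_psi <= sup_diff_z * (g(0) - g(T)) + 1e-12
```

This is exactly the continuity-of-mapping property that carries the convergence $z_{n}\to z$ over to $\psi_{g}(z_{n})\to\psi_{g}(z)$.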

Acknowledgements

The authors thank the reviewers for their helpful comments, which have improved the exposition of the results in the paper. G. Pang was supported in part by the U.S. National Science Foundation grants CMMI-1635410 and DMS/CMMI-1715875.

References

Basrak, B., Wintenberger, O. and $\check{\text{Z}}$ ugec, P. (2019). On total claim amount for marked Poisson cluster models. Working paper. Available at https://arxiv.org/abs/1903.09387v1.Google Scholar
Biermé, H. and Desolneux, A. (2012). Crossings of smooth shot noise processes. Ann. Appl. Prob. 22, 22402281.CrossRefGoogle Scholar
Billingsley, P. (1999). Convergence of Probability Measures. John Wiley, New York.CrossRefGoogle Scholar
Daley, D. J. and Vere-Jones, D. (2003). An Introduction to the Theory of Point Processes, Vol. I–II, 2nd edn. Springer, New York.Google Scholar
Decreusefond, L. and Nualart, D. (2008). Hitting times for Gaussian processes. Ann. Prob. 36, 319330.CrossRefGoogle Scholar
Dong, C. and Iksanov, A. (2020). Weak convergence of random processes with immigration at random times. J. Appl. Prob. 57, 250265.CrossRefGoogle Scholar
Gilchrist, J. H. and Thomas, J. B. (1975). A shot process with burst properties. Adv. Appl. Prob. 7, 527541.CrossRefGoogle Scholar
Heinrich, L. and Schmidt, V. (1985). Normal convergence of multidimensional shot noise and rates of this convergence. Adv. Appl. Prob. 17, 709730.CrossRefGoogle Scholar
Hsing, T. and Teugels, J. L. (1989). Extremal properties of shot noise processes. Adv. Appl. Prob. 21, 513525.CrossRefGoogle Scholar
Iglehart, D. L. (1973). Weak convergence of compound stochastic process, I. Stoch. Process. Appl. 1, 11–31.
Iksanov, A. (2013). Functional limit theorems for renewal shot noise processes with increasing response functions. Stoch. Process. Appl. 123, 1987–2010.
Iksanov, A. (2016). Renewal Theory for Perturbed Random Walks and Similar Processes. Birkhäuser, Basel.
Iksanov, A., Marynych, A. and Meiners, M. (2014). Limit theorems for renewal shot noise processes with eventually decreasing response functions. Stoch. Process. Appl. 124, 2132–2170.
Iksanov, A., Marynych, A. and Meiners, M. (2017). Asymptotics of random processes with immigration I: scaling limits. Bernoulli 23, 1233–1278.
Iksanov, A., Marynych, A. and Meiners, M. (2017). Asymptotics of random processes with immigration II: convergence to stationarity. Bernoulli 23, 1279–1298.
Iksanov, A. and Rashytov, B. (2020). A functional limit theorem for general shot noise processes. J. Appl. Prob. 57, 280–294.
Klüppelberg, C. and Kühn, C. (2004). Fractional Brownian motion as a weak limit of Poisson shot noise processes—with applications to finance. Stoch. Process. Appl. 113, 333–351.
Klüppelberg, C. and Mikosch, T. (1995). Explosive Poisson shot noise processes with applications to risk reserves. Bernoulli 1, 125–147.
Klüppelberg, C., Mikosch, T. and Schärf, A. (2003). Regular variation in the mean and stable limits for Poisson shot noise. Bernoulli 9, 467–496.
Lane, J. A. (1984). The central limit theorem for the Poisson shot-noise process. J. Appl. Prob. 21, 287–301.
Lane, J. A. (1987). The Berry–Esseen bound for the Poisson shot-noise. Adv. Appl. Prob. 19, 512–514.
Marynych, A. (2015). A note on convergence to stationarity of random processes with immigration. Theory Stoch. Process. 20, 84–100.
Marynych, A. and Verovkin, G. (2017). A functional limit theorem for random processes with immigration in the case of heavy tails. Modern Stoch. Theory Appl. 4, 93–108.
Pang, G. and Taqqu, M. S. (2019). Non-stationary self-similar Gaussian processes as scaling limits of power-law shot noise processes and generalizations of fractional Brownian motion. High Freq. 2, 95–112.
Pang, G. and Whitt, W. (2012). Infinite-server queues with batch arrivals and dependent service times. Prob. Eng. Inf. Sci. 26, 197–220.
Pang, G. and Whitt, W. (2013). Two-parameter heavy-traffic limits for infinite-server queues with dependent service times. Queueing Systems 73, 119–146.
Pang, G. and Zhou, Y. (2018). Functional limit theorems for a new class of non-stationary shot noise processes. Stoch. Process. Appl. 128, 505–544.
Pang, G. and Zhou, Y. (2018). Two-parameter process limits for infinite-server queues with dependent service times via chaining bounds. Queueing Systems 88, 1–25.
Pang, G. and Zhou, Y. (2020). Functional limit theorems for shot noise processes with weakly dependent noises. Stoch. Systems 10, 99–123.
Papoulis, A. (1971). High density shot noise and Gaussianity. J. Appl. Prob. 8, 118–127.
Ramirez-Perez, F. and Serfling, R. (2001). Shot noise on cluster processes with cluster marks, and studies of long range dependence. Adv. Appl. Prob. 33, 631–651.
Ramirez-Perez, F. and Serfling, R. (2003). Asymptotic normality of shot noise on Poisson cluster processes with cluster marks. J. Prob. Statist. Sci. 1, 157–172.
Rice, J. (1977). On generalized shot noise. Adv. Appl. Prob. 9, 553–565.
Samorodnitsky, G. (1995). A class of shot noise models for financial applications. In Proc. Athens Internat. Conf. Appl. Prob. and Time Series, Vol. 1, eds Heyde, C. C., Prohorov, Y. V., Pyke, R. and Rachev, S. T., Springer, Berlin, pp. 332–353.
Shiryaev, A. N. (1996). Probability, 2nd edn. Springer, New York.
Whitt, W. (1976). Bivariate distributions with given marginals. Ann. Statist. 4, 1280–1289.
Whitt, W. (1983). Comparing batch delays and customer delays. Bell System Tech. J. 62, 2001–2009.
Whitt, W. (2002). Stochastic-Process Limits: an Introduction to Stochastic-Process Limits and Their Application to Queues. Springer, New York.