
Functional central limit theorems and moderate deviations for Poisson cluster processes

Published online by Cambridge University Press:  24 September 2020

Fuqing Gao*
Affiliation:
Wuhan University
Yujing Wang**
Affiliation:
Wuhan University
*
*Postal address: School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China. Email: fqgao@whu.edu.cn
**Postal address: School of Mathematics and Statistics, Wuhan University, Wuhan 430072, China. Email: yujingwang@whu.edu.cn

Abstract

In this paper, we consider functional limit theorems for Poisson cluster processes. We first present a maximal inequality for Poisson cluster processes. Then we establish a functional central limit theorem under a second moment condition and a functional moderate deviation principle under the Cramér condition for Poisson cluster processes. We apply these results to obtain a functional moderate deviation principle for linear Hawkes processes.

Type
Original Article
Copyright
© Applied Probability Trust 2020

1. Introduction and main results

1.1. Introduction

Poisson cluster processes are an important class of point process models (see [Reference Bordenave and Torrisi5]). They occur frequently in applications such as cellular networks [Reference Chun, Hasna and Ghrayeb6], [Reference Tabassum, Hossain and Hossain23], insurance [Reference Jessen, Mikosch and Samorodnitsky18], [Reference Matsui19], [Reference Matsui and Mikosch20], queueing theory [Reference Fasen11], [Reference Fasen and Samorodnitsky12], [Reference Faÿ, González-Arévalo, Mikosch and Samorodnitsky13], and cosmology [Reference Neyman and Scott22]. Theoretical studies of Poisson cluster processes have attracted considerable attention; for example, see [Reference Bogachev and Daletskii4] for quasi-invariance, [Reference Faÿ, González-Arévalo, Mikosch and Samorodnitsky13] for some asymptotic behaviors of a Poisson cluster process, and [Reference Bordenave and Torrisi5] for functional large deviation principles for a large class of the processes. Linear Hawkes processes [Reference Hawkes15] are a class of Poisson cluster processes. They are amenable to statistical analysis. Thus they have a wide range of applications in large networks [Reference Delattre, Fournier and Hoffmann8], finance [Reference Hawkes16], and many other fields. The asymptotic behaviors of Hawkes processes have been widely studied; for example, see [Reference Bacry, Delattre, Hoffmann and Muzy1] for a functional central limit theorem and [Reference Bordenave and Torrisi5] for functional large deviation principles.

This paper considers functional central limit theorems and moderate deviations for Poisson cluster processes. Maximal inequalities, such as Ottaviani’s inequality and the maximal inequalities for martingales [Reference Billingsley3], play a very important role in proving functional limit theorems. In this paper, we first propose a maximal inequality for Poisson cluster processes. Then we establish a functional central limit theorem under a second moment condition and a functional moderate deviation principle under the Cramér condition for Poisson cluster processes. As an application, we obtain a functional moderate deviation principle for linear Hawkes processes.

The paper is organized as follows. In the remainder of this section, we give some preliminaries on Poisson cluster processes, present a maximal inequality for Poisson cluster processes, and state the main results of this paper. The proofs of the functional central limit theorem and the functional moderate deviation principle are given in Sections 2 and 3, respectively. The proof of the maximal inequality is postponed to the appendix.

1.2. Main results

Let us recall some basic definitions related to Poisson cluster processes (see [Reference Bordenave and Torrisi5]). A Poisson cluster process $\mathbf{X}\subset \mathbb{R}$ is a point process generated from an immigrant process and a family of offspring processes. The formal definition of a Poisson cluster process is the following:

  1. (i) The immigrant process I is a homogeneous Poisson process with points $X_i\in \mathbb{R}$ and constant intensity $\nu\in (0,\infty)$.

  2. (ii) Each immigrant $X_i$ generates a cluster, i.e. an offspring process $C_i=C_{X_i}$ which is a finite point process.

  3. (iii) Given the immigrants, the centered clusters

    \begin{equation*}C_i-X_i=\{Y-X_i\,{:}\,Y\in C_i\}, \qquad X_i\in I,\end{equation*}
    are independent and identically distributed (i.i.d.) and independent of I.
  4. (iv) $\mathbf{X}$ consists of the union of all the clusters.
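A simple concrete example (given only for illustration) is a Neyman–Scott-type cluster process in the spirit of [Reference Neyman and Scott22]: each immigrant $X_i$ produces a cluster consisting of $X_i$ itself together with a random number of displaced offspring,

\begin{equation*}C_i=\{X_i\}\cup\{X_i+Y_{i,1},\cdots,X_i+Y_{i,K_i}\},\end{equation*}

where the families $(K_i,Y_{i,1},Y_{i,2},\cdots)$, $X_i\in I$, are i.i.d. and independent of I. In this case the number of points in a cluster is $1+K_0$ and the radius of the centred cluster is $\max_{1\leq j\leq K_0}|Y_{0,j}|$ (taken to be 0 when $K_0=0$). Linear Hawkes processes, discussed below, form another important example.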

For a point process $\mathbf{Y}$ on $\mathbb{R}$, let $N_{\mathbf{Y}}(0,t]$ denote the number of points of $\mathbf{Y}$ in the interval (0, t]. A process $\mathbf{Y}$ is called stationary if its law is translation-invariant; it is said to be ergodic if it is stationary with a finite intensity $\mathbb{E}(N_{\mathbf{Y}}(0,1])$, and

\begin{equation*}\lim_{t\rightarrow \infty}\frac{N_{\mathbf{Y}}(0,t]}{t}=\mathbb{E}(N_{\mathbf{Y}}(0,1])\end{equation*}

almost surely.

Let S denote the number of points in a cluster and assume $\mathbb{E}(S)<\infty$. Let $C_0$ be the cluster generated by an immigrant at 0, and let $L=\sup_{Y\in C_0}|Y|$ be the radius of $C_0$. Then we can see from the definition of a Poisson cluster process that $\mathbf{X}$ is ergodic with finite intensity $\nu \mathbb{E}(S)$.
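In fact, the intensity can be computed explicitly: by the Campbell formula for the Poisson immigrant process together with Fubini's theorem,

\begin{equation*}\mathbb{E}(N_{\mathbf{X}}(0,t])=\nu\int_{\mathbb{R}}\mathbb{E}\big(N_{C_0}(-x,t-x]\big)dx=\nu\,\mathbb{E}\bigg(\int_{\mathbb{R}}N_{C_0}(-x,t-x]\,dx\bigg)=\nu t\,\mathbb{E}(S),\end{equation*}

since each of the S points of $C_0$ contributes to $N_{C_0}(-x,t-x]$ for a set of x of Lebesgue measure t.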

First, let us present a maximal inequality which will play an important role in studying functional limits of Poisson cluster processes.

Lemma 1. Define

\begin{equation*}C(t)=\sum_{X_k\in I_{|(0,t]}}N_{C_k}(0, t].\end{equation*}

Let $0\leq s< t$ and $s=t_0<t_1<\cdots<t_n=t$. Then for any $r>0$,

(1)\begin{align}& \mathbb{P}\Big( \max_{1\leq l\leq n}\left| C(t_l)-C(s)-\mathbb{E}(C(t_l)-C(s)) \right|>3r \Big)\nonumber\\ &\quad \leq 2 \mathbb{P}\Bigg( \max_{0\leq l\leq n-1}\Bigg| \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_n]- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_n]\Bigg)\Bigg|>r/2\Bigg)\\ &\qquad + 2\max_{0\leq l\leq n-1}\mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|> r/2\Bigg).\nonumber\end{align}

The proof of the inequality is given in the appendix.

Now let us state our main results. Let D[0, 1] denote the space of càdlàg functions on the interval [0, 1] and let $AC_0[0,1]$ be the family of all absolutely continuous functions f with $f(0)=0$. Set $\|\,f\|=\sup_{t\in [0,1]}|\,f(t)|$. Let $\rho_s$ denote the Skorokhod topology on D[0, 1].

Theorem 1. Assume that

(A1)\begin{equation}\mathbb{E}((1+L)S^2)<\infty \end{equation}

holds. Then as $\alpha \to \infty$,

\begin{equation*}\frac{N_{\mathbf{X}}(0,\alpha t]-\mathbb{E}(N_{\mathbf{X}}(0,\alpha t])}{\sqrt{\alpha}} \stackrel{d} {\rightarrow} \sigma B(t) \mbox{ in } (D[0,1], \rho_s),\end{equation*}

where $\sigma ^2=\nu \mathbb{E}(S^2)$, $\{B(t),t\geq 0\}$ is the standard Brownian motion, and $\stackrel{d} {\rightarrow}$ denotes convergence in distribution.

Theorem 2. Let $\{b(\alpha),\alpha>0\}$ be a positive function satisfying

\begin{equation*}\lim_{\alpha\rightarrow \infty}\frac{b(\alpha)}{\alpha}= 0,\quad\lim_{\alpha\rightarrow \infty}\frac{b(\alpha)}{\sqrt {\alpha}}=+\infty. \eqno{(\text{SC})}\end{equation*}

Assume that the Cramér condition holds, i.e., that there exists a constant $\theta_0>0$ such that

(A2)\begin{equation}\mathbb{E}\left((1+L) e^{\theta_0 S}\right)<\infty.\ \end{equation}

Then

\begin{equation*}\left\{\frac{N_{\mathbf{X}}(0,\alpha t]-\mathbb{E}(N_{\mathbf{X}}(0,\alpha t])}{b(\alpha)},t\in[0,1]\right\}\end{equation*}

satisfies the large deviation principle (LDP) on $(D[0,1],\|\cdot\|)$ with speed $\frac{b^2(\alpha)}{\alpha}$ and good rate function $J\,{:}\,D[0,1]\to [0,\infty]$ defined by

(2)\begin{equation}J(\,f)=\left\{\begin{array}{ll}\displaystyle\frac{1}{2\sigma^2}\int_0^1{|\,\dot{f}(t)|^2}dt & \mbox{ if }f(t)=\int_0^t \dot{f}(s)ds \in AC_0[0,1];\\\hbox{}\\+\infty &\mbox{ otherwise}.\end{array}\right.\end{equation}

That is,

  1. (1) for any $l\geq 0$, $\{f;\, J(\,f)\leq l\}$ is compact in $(D[0,1],\|\cdot\|)$;

  2. (2) for any closed F in $(D[0,1],\|\cdot\|)$,

    \begin{equation*} \limsup_{\alpha\rightarrow \infty}\frac{\alpha}{b^2(\alpha)}\log \mathbb{P}\!\left(\frac{N_{\mathbf{X}}(0,\alpha\cdot]-\mathbb{E}(N_{\mathbf{X}}(0,\alpha\cdot])}{b(\alpha)}\in F\right)\leq -\inf_{f\in F}J(\,f), \end{equation*}
    and for any open G in $(D[0,1],\|\cdot\|)$,
    \begin{equation*} \liminf_{\alpha\rightarrow \infty}\frac{\alpha}{b^2(\alpha)}\log \mathbb{P}\!\left(\frac{N_{\mathbf{X}}(0,\alpha\cdot]-\mathbb{E}(N_{\mathbf{X}}(0,\alpha\cdot])}{b(\alpha)}\in G\right)\geq -\inf_{f\in G}J(\,f). \end{equation*}

Next, let us apply Theorems 1 and 2 to linear Hawkes processes. A Poisson cluster process $\mathbf{X}$ is called a Hawkes process if each cluster $C_i=C_{X_i}$ is a random set formed by the points of generations $n=0,1,\cdots$ with the following branching structure:

  1. The immigrant $X_i$ is said to be of generation 0.

  2. Given generations $0,1,\cdots,n$ in $C_i$, each point $Y\in C_i$ of generation n generates a Poisson process on $(Y,\infty)$ of offspring of generation $n+1$ with intensity function $h(\cdot-Y)$, where $h\,{:}\,(0,\infty)\rightarrow [0,\infty)$ is a nonnegative Borel function.

As usual, we assume that the mean number of points in any offspring process satisfies

(B1)\begin{equation}\mu\coloneqq \int_0^{\infty} h(t) dt\in (0,1). \end{equation}

We also assume that

(B2)\begin{equation}\int_0^{\infty} t h(t)dt<\infty. \end{equation}

It is known (cf. Bordenave and Torrisi [Reference Bordenave and Torrisi5]) that for a linear Hawkes process, the distribution of S is given by

\begin{equation*}\mathbb{P}(S=k)=\frac{e^{-k\mu}(k\mu)^{k-1}}{k!}, \quad k=1,2,\cdots.\end{equation*}
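This is the Borel distribution with parameter $\mu$. It reflects the branching structure of the cluster: S is the total progeny of a Galton–Watson process in which each individual has a Poisson($\mu$)-distributed number of children, since each point of a Hawkes cluster produces a Poisson number of offspring with mean $\mu=\int_0^{\infty}h(t)dt$. Decomposing the cluster at the children of the immigrant gives the distributional identity

\begin{equation*}S\stackrel{d}{=}1+\sum_{j=1}^{K}S_j,\qquad K\sim\text{Poisson}(\mu),\end{equation*}

where $S_1,S_2,\cdots$ are independent copies of S, independent of K; taking exponential moments in this identity leads to the functional equation for $F(\theta)=\mathbb{E}(e^{\theta S})$ given below.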

From the proof of Theorem 3.2.1 in [Reference Bordenave and Torrisi5], under the conditions (B1) and (B2), there exists a constant $0<\theta_0< \mu-1-\log \mu$ such that $\mathbb{E}\left((1+L) e^{\theta_0 S}\right)<\infty$; that is, the condition (A2) holds. Furthermore, the function

\begin{equation*}F(\theta)\coloneqq \mathbb{E}(e^{\theta S}), \qquad \theta < \mu-1-\log \mu,\end{equation*}

satisfies the equation $F(\theta)=e^{\theta}\exp\{\mu(F(\theta)-1)\}.$ Thus,

\begin{align*} F'(\theta) & =\frac{F(\theta)}{1-\mu F(\theta)},\\[4pt] \mathbb{E}(S) & =F'(0)=1/(1-\mu)<\infty,\end{align*}

and

\begin{equation*}\hspace*{-25pt}\mathbb{E}(S^2)=F''(0)=\frac{1}{(1-\mu)^3}.\end{equation*}
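The last identity follows by differentiating $F'(\theta)=F(\theta)/(1-\mu F(\theta))$ once more and using $F(0)=1$, $F'(0)=1/(1-\mu)$:

\begin{equation*}F''(\theta)=\frac{F'(\theta)\big(1-\mu F(\theta)\big)+\mu F(\theta)F'(\theta)}{\big(1-\mu F(\theta)\big)^2}=\frac{F'(\theta)}{\big(1-\mu F(\theta)\big)^2},\qquad F''(0)=\frac{F'(0)}{(1-\mu)^2}=\frac{1}{(1-\mu)^3}.\end{equation*}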

The following results are consequences of Theorems 1 and 2.

Corollary 1. Assume that (B1) and (B2) hold.

  1. (1)

    \begin{equation*}\bigg\{\frac{N_{\mathbf{X}}(0,\alpha t]-\mathbb{E}(N_{\mathbf{X}}(0,\alpha t])}{\sqrt{\alpha}},t\in[0,1]\bigg\}\end{equation*}

converges in distribution to

\begin{equation*}\sqrt{\frac{\nu}{(1-\mu)^3}}B(t)\end{equation*}

in $(D[0,1],\rho_s)$.

  2. (2)

    \begin{equation*}\bigg\{\frac{N_{\mathbf{X}}(0,\alpha t]-\mathbb{E}(N_{\mathbf{X}}(0,\alpha t])}{b(\alpha)},\ t\in[0,1]\bigg\}\end{equation*}

satisfies the LDP on $(D[0,1],\|\cdot\|)$ with speed $\frac{b^2(\alpha)}{\alpha}$ and good rate function $J^H(\,f)$ defined by

\begin{equation*}J^H(\,f)=\left\{\begin{array}{l@{\quad}l}\displaystyle\frac{(1-\mu)^3}{2\nu }\int_0^1{|\,\dot{f}(t)|^2}dt & \mbox{ if } f\in AC_0[0,1],\\\hbox{}\\+\infty &\mbox{ otherwise}.\end{array}\right.\end{equation*}

Applying Corollary 1(2) to $F=\{f\in D[0,1]\,{;}\, f(0)=0,\ \|\,f\|\geq r\}$ for $r>0$ and $b(\alpha)=\alpha^{p}$ for $1/2<p<1$, we have

\begin{align*} &\lim_{\alpha\rightarrow \infty}\frac{1}{\alpha^{2p-1}}\log \mathbb{P}\left(\|N_{\mathbf{X}}(0,\alpha\cdot]-\mathbb{E}(N_{\mathbf{X}}(0,\alpha\cdot])\|\geq \alpha^{p}r\right)\\[4pt] &=-\inf_{f\in AC_0[0,1], \|\,f\|\geq r}\bigg\{\frac{(1-\mu)^3}{2\nu }\int_0^1{|\,\dot{f}(t)|^2}dt \bigg\}\\[4pt] &=-\frac{r^2(1-\mu)^3}{2\nu }.\end{align*}
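The value of the infimum can be verified directly: if $f\in AC_0[0,1]$ with $\|\,f\|\geq r$, then $|\,f(t_0)|\geq r$ for some $t_0\in(0,1]$, and by the Cauchy–Schwarz inequality

\begin{equation*}\int_0^1|\,\dot{f}(t)|^2dt\geq \int_0^{t_0}|\,\dot{f}(t)|^2dt\geq \frac{1}{t_0}\bigg(\int_0^{t_0}\dot{f}(t)\,dt\bigg)^2=\frac{f(t_0)^2}{t_0}\geq r^2,\end{equation*}

with equality attained by the linear path $f(t)=rt$, which satisfies $\|\,f\|=r$ and $\int_0^1|\,\dot{f}(t)|^2dt=r^2$.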

Remark 1. Corollary 1 (1) is also a consequence of [1, Theorem 2].

For convenience, before we close this section, let us introduce some notation and state a Laplace integral formula for marked point processes (see Daley and Vere-Jones [Reference Daley and Vere-Jones7], Section 6.4).

  1. A marked point process N, with locations in a Polish space $\mathcal X$ and marks in a Polish space $\mathcal K$, is a point process $\{(X_i,\kappa_i)\}$ on $\mathcal X \times\mathcal K$ with the additional property that the process $ \{X_i\}$ is itself a point process. The process $\{X_i\}$ is referred to as the ground process.

  2. A marked point process N is said to have independent marks if, given the ground process $ \{X_i\}$, the $\{\kappa_i\}$ are mutually independent random variables such that the distribution of $\kappa_i$ depends only on the corresponding location $X_i$.

  3. Let N be a marked point process with i.i.d. marks and let $\{F(K|x)\,{:}\,K \in\mathcal B(\mathcal K), x \in\mathcal X\}$ be a kernel. This kernel is called the mark kernel of the process N if, for any $x\in\mathcal X$ and $K\in \mathcal B(\mathcal K)$, $F(K |x) =\mathbb{P}(\kappa_i\in K |X_i=x)$ almost surely.

  4. A marked point process N is called a marked Poisson process or a compound Poisson process if the process N has i.i.d. marks and the ground process is a Poisson process.

For the next lemma, see Proposition 6.4.IV and Lemma 6.4.VI in [Reference Daley and Vere-Jones7].

Lemma 2. A marked Poisson process that has mark kernel $F(\cdot |\cdot)$, and for which the Poisson ground process $N_g$ has intensity measure $\mu$, is equivalent to a Poisson process on the product space $\mathcal X \times\mathcal K$ with intensity measure $\Lambda(dx \times dz) = \mu(dx) F(dz | x)$. In particular, the probability generating functional of the process N is

\begin{equation*}G[h]=\mathbb{E}\!\left(\prod_{i}h(X_i,\kappa_i)\right)=\exp\left\{\int\int (h(x,\kappa)-1) F(d\kappa | x)\mu(dx) \right\},\ h\in \mathcal V(\mathcal X \times\mathcal K),\end{equation*}

where $\mathcal V(\mathcal X \times\mathcal K)$ is the space of measurable functions $h(x,\kappa)$ such that $0\leq h(x,\kappa)\leq 1$ and, for some bounded set A, $h(x, \kappa)=1$ for all $\kappa\in \mathcal K$ and $x\not\in A$.
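In the sequel, Lemma 2 will be applied to the compound Poisson process with ground process $I_{|A}$, $A\subset\mathbb{R}$ a Borel set, and i.i.d. marks $C_k-X_k$ (the centred clusters), taking $h(x,C)=\exp\{\sqrt{-1}\,\theta\, N_{C}(B-x)\}$ for a Borel set $B\subset\mathbb{R}$; here we use the standard extension of the probability generating functional to complex-valued $h$ with $|h|\leq 1$ and to the unbounded sets $A$ occurring below. Since $N_{C_k}(B)=N_{C_k-X_k}(B-X_k)$, this yields the identity

\begin{equation*}\mathbb{E}\Bigg(\exp\Bigg\{\sqrt{-1}\,\theta\sum_{X_k\in I_{|A}}N_{C_k}(B)\Bigg\}\Bigg)=\exp\left\{\nu\int_{A}\mathbb{E}\left(e^{\sqrt{-1}\,\theta\, N_{C_0}(B-s)}-1\right)ds\right\},\end{equation*}

and, by differentiating at $\theta=0$ (or by the Campbell formulas), the moment identities $\mathbb{E}\big(\sum_{X_k\in I_{|A}}N_{C_k}(B)\big)=\nu\int_{A}\mathbb{E}\big(N_{C_0}(B-s)\big)ds$ and $\operatorname{Var}\big(\sum_{X_k\in I_{|A}}N_{C_k}(B)\big)=\nu\int_{A}\mathbb{E}\big(N_{C_0}(B-s)^2\big)ds$, which will be used repeatedly.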

2. Functional central limit theorem

In this section we prove Theorem 1. First, let us decompose $N_\mathbf{X}(0, t]$ into the following three parts:

(3)\begin{align}N_\mathbf{X}(0, t]&=C(t)+ \sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, t]+ \sum_{X_k\in I_{|(t,\infty)}}N_{C_k}(0, t].\end{align}

It is easy to show that the terms $\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t]$ and $\sum_{X_k\in I_{|(\alpha t , \infty)}}N_{C_k}(0,\alpha t]$ are negligible; see Propositions 1 and 2. Thus, our main work is to show that the term C(t) satisfies a functional central limit theorem; see Proposition 3.

Proposition 1. Assume that (A1) holds. Then for any $\delta>0$,

(4)\begin{align} \lim_{\alpha\rightarrow \infty} \mathbb{P}\bigg(\sup_{t\in[0,1]} \bigg|&\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t] -\mathbb{E}\bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t]\bigg)\bigg|>\sqrt{\alpha}\delta\bigg)=0.\end{align}

Proposition 2. Assume that (A1) holds. Then for any $\delta>0$,

(5)\begin{align} \lim_{\alpha\rightarrow \infty} \mathbb{P}\bigg(\sup_{t\in[0,1]} \bigg|&\sum_{X_k\in I_{|(\alpha t, \infty)}}N_{C_k}(0,\alpha t] -\mathbb{E}\bigg(\sum_{X_k\in I_{|(\alpha t,\infty)}}N_{C_k}(0,\alpha t]\bigg)\bigg|>\sqrt{\alpha}\delta\bigg)=0.\end{align}

Proposition 3. Assume that (A1) holds. Then

(6)\begin{equation}\frac{C(\alpha t)-\mathbb{E}(C(\alpha t))}{\sqrt{\alpha}} \stackrel{d} {\rightarrow} \sigma B(t)\end{equation}

in $(D[0,1],\rho_s)$.

2.1. Proof of Proposition 1

For any $\delta>0$, for $\alpha$ large enough, by Chebyshev’s inequality we have

\begin{align*} &\mathbb{P}\Bigg\{\sup_{t\in[0,1]}\Bigg|\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t]\Bigg)\Bigg|> \sqrt{\alpha} \delta\Bigg\}\\[4pt] &\leq \mathbb{P}\Bigg\{\sup_{t\in[0,1]}\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t]+\sup_{t\in[0,1]}\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t]\Bigg)>\sqrt{\alpha} \delta\Bigg\}\\[4pt] &=\mathbb{P}\Bigg\{\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha]+\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha]\Bigg)> \sqrt{\alpha} \delta \Bigg\}\\[4pt] &\leq\mathbb{P}\Bigg\{\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha]>\sqrt{\alpha} \delta /2\Bigg\}+\mathbb{P}\Bigg\{\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha]\Bigg)> \sqrt{\alpha} \delta /2\Bigg\}\\[4pt] &\leq \frac{4}{\sqrt{\alpha} \delta }\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha]\Bigg)\leq \frac{4\nu \mathbb{E}(LS)}{\sqrt{\alpha}\delta},\end{align*}

which implies that (4) holds.

2.2. Proof of Proposition 2

Let us give some moment estimates. Applying Lemma 2, for $l=1,\cdots, \lfloor\alpha\rfloor+1$ we have

\begin{align*} &\mathbb{E}\Big(e^{ \sqrt{-1} \theta \sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty, l]}\Big) \\[4pt] &=\ \exp\left\{\nu\int_{l-1}^\infty\mathbb{E}\left(e^{ \sqrt{-1} \theta N_{C_0}(-\infty, l-s]}-1\right)ds\right\}\\[4pt] &=\ \exp\left\{\nu\mathbb{E}\left(\int_{-1}^{L}\left(e^{\sqrt{-1} \theta N_{C_0}(-\infty,-s]}-1\right)ds\right)\right\}.\end{align*}

Therefore, the sums $ \sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty,l]$, $l=1,\cdots, \lfloor\alpha\rfloor+1$, are identically distributed. Note that $N_{C_0}(- \infty, -s]\leq N_{C_0}(\mathbb{R})=S$. Using the above characteristic functions, it is easy to obtain the following moment estimates:

\begin{align*}& \mathbb{E}\Bigg(\sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty,l]\Bigg)=\nu\mathbb{E}\!\left( \int_{-1}^L N_{C_0}( -\infty, -s]ds \right)\leq \nu \mathbb{E}((1+L)S)\end{align*}

and

\begin{align*}& \mathbb{E}\Bigg(\Bigg(\sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty,l]\Bigg)^2\Bigg)\\ & = \nu \mathbb{E}\!\left( \int_{-1}^{L} \left(N_{C_0}( -\infty,-s]\right)^2ds \right)+\left(\nu \mathbb{E}\!\left( \int_{-1}^{L} N_{C_0}( -\infty,-s]ds\right)\right)^2\\ & \leq \nu \mathbb{E}\big((1+L)S^2\big)+(\nu \mathbb{E}((1+L)S))^2.\end{align*}

Noting that for any $1\leq l\leq \lfloor\alpha\rfloor+1$,

\begin{align*}0\leq &\sup_{l-1\leq t\leq l} \sum_{X_k\in I_{|(t,\infty)}}N_{C_k}(0,t]- \sum_{X_k\in I_{|(l,\infty)}}N_{C_k}(0,l]\leq\sum_{X_k\in I_{|(l-1,l]}}N_{C_k}(0, l],\end{align*}

we have that

\begin{align*}\sup_{l-1\leq t\leq l} \sum_{X_k\in I_{|(t,\infty)}}N_{C_k}(0,t]\leq&\sum_{X_k\in I_{|(l-1,l]}}N_{C_k}(0, l]+\sum_{X_k\in I_{|(l,\infty)}}N_{C_k}(-\infty, l]\\\leq &2 \sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty, l].\end{align*}

Thus, for any $\delta>0$, when $\alpha$ is large enough, we have

(7)\begin{align} &\mathbb{P}\Bigg(\sup_{t\in [0,1]} \bigg|\sum_{X_k\in I_{|(\alpha t,\infty)}}N_{C_k}(0,\alpha t]-\mathbb{E}\bigg(\sum_{X_k\in I_{|(\alpha t,\infty)}}N_{C_k}(0,\alpha t]\bigg) \bigg|>\sqrt{\alpha}\delta\Bigg)\nonumber\\ & \quad \leq \mathbb{P}\Bigg(\sup_{t\in [0,1]} \sum_{X_k\in I_{|(\alpha t,\infty)}}N_{C_k}(0,\alpha t]>\sqrt{\alpha}\delta/2\Bigg)\nonumber\displaybreak\\ &\leq \mathbb{P}\Bigg( \max_{1\leq l\leq \lfloor\alpha\rfloor+1}\sup_{l-1\leq t\leq l} \sum_{X_k\in I_{|(t,\infty)}}N_{C_k}(0,t]>\sqrt{\alpha}\delta/2\Bigg)\\[3pt] & \leq \mathbb{P}\Bigg(2 \max_{1\leq l\leq \lfloor\alpha\rfloor+1}\sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty,l]>\sqrt{\alpha}\delta/2\Bigg).\nonumber\end{align}

By Chebyshev’s inequality, for any $\delta>0$,

(8)\begin{align} & \mathbb{P}\Bigg(2 \max_{1\leq l\leq \lfloor\alpha\rfloor+1}\sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty,l]>\sqrt{\alpha}\delta/2\Bigg)\nonumber\\[4pt] & \leq ( \alpha +1) \max_{1\leq l\leq \lfloor\alpha\rfloor+1} \mathbb{P}\Bigg(\sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty,l]>\sqrt{\alpha}\delta/4\Bigg)\nonumber\\[4pt] & = (\alpha +1)\mathbb{P}\Bigg( \sum_{X_k\in I_{|( 0,\infty)}}N_{C_k}(-\infty, 1] >\sqrt{\alpha}\delta/4\Bigg)\\[4pt] & \leq\frac{16(\alpha+1)}{\delta^2\alpha} \mathbb{E}\Bigg(\Bigg(\sum_{X_k\in I_{|(0,\infty)}}N_{C_k}(-\infty, 1] \Bigg)^2I_{\left\{ \sum_{X_k\in I_{|( 0,\infty)}}N_{C_k}(-\infty, 1] >\sqrt{\alpha}\delta/4\right\}}\Bigg)\nonumber\\[4pt] & \to 0 \quad \mbox{ as } \alpha\to\infty.\nonumber\end{align}

Now, combining (7) and (8), we obtain (5).

2.3. Proof of Proposition 3

By the general theory of weak convergence (see Theorems 15.1, 15.4, and 15.5 in [Reference Billingsley2]), in order to obtain Proposition 3 it is sufficient to prove the convergence of finite-dimensional distributions and the tightness of $\{C(\alpha t), t\in [0,1];\ \alpha>0\}$. Lemma 3, below, gives the convergence of finite-dimensional distributions, and Lemma 4 gives the tightness.

Lemma 3. Assume that (A1) holds. Then for each $n\geq 1$ and $0\leq t_1 < \cdots < t_n \leq 1$,

\begin{equation*}\left(\frac{C(\alpha t_1)-\mathbb{E}(C(\alpha t_1))}{\sqrt{\alpha}},\cdots,\frac{C(\alpha t_n)-\mathbb{E}(C(\alpha t_n))}{\sqrt{\alpha}}\right) \stackrel{d} {\rightarrow} \sigma (B(t_1),\cdots,B(t_n)).\end{equation*}

Lemma 4. Assume that (A1) holds. Then for any $\delta>0$,

(9)\begin{align} &\lim_{\eta\to 0}\limsup_{\alpha\rightarrow \infty} \mathbb{P}\!\left(\sup_{|t-s|<\eta} \bigg|C(\alpha t)-\mathbb{E}(C(\alpha t))-(C(\alpha s)-\mathbb{E}(C(\alpha s))) \bigg|>3\sqrt{\alpha}\delta\right)=0.\end{align}

The following elementary result will be used in the proof of Lemma 4.

Lemma 5. Let f(t) and g(t) be two nondecreasing functions on $[0,\alpha]$, and let $0=t_0<t_1<\cdots <t_n=\alpha$. Then

(10)\begin{equation}\sup_{t\in[0,\alpha]}\left| f(t)-g(t)\right|\leq 2\max\Big\{\max_{k=0,1,\cdots,n}|\,f(t_{k})-g(t_k)|,\max_{k=0,1,\cdots,n-1}|g(t_{k+1})-g(t_{k})|\Big\}.\end{equation}

Proof. For $k=0,1,\cdots,n-1$, $t\in [t_k,t_{k+1}]$, we have

\begin{align*}&\quad -2\max\{|\,f(t_{k})-g(t_{k})|,|g(t_{k+1})-g(t_{k})|\}\leq f(t_{k})-g(t_{k+1})\\[3pt] & \leq\, f(t)-g(t) \\[3pt] & \leq\, f(t_{k+1})-g(t_k)\leq 2\max\{|\,f(t_{k+1})-g(t_{k+1})|,|g(t_{k+1})-g(t_k)|\}.\end{align*}

Thus, (10) holds. □

2.3.1. Proof of Lemma 3

We can write

(11)\begin{align}&\sum_{i=1}^n\theta_i\Bigg(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]\Bigg)\Bigg) \nonumber \\[3pt] &\quad =\sum_{i=1}^n\sum_{j=1}^i\sum_{X_k\in I_{|(\alpha t_{j-1},\alpha t_j]}}\theta_iN_{C_k}(0,\alpha t_i]-\sum_{i=1}^n\sum_{j=1}^i\theta_i\mathbb{E}\Bigg(\sum_{X_k\in I_{|(\alpha t_{j-1},\alpha t_j]}}N_{C_k}(0,\alpha t_i]\Bigg) \nonumber \\[3pt] &\quad =\sum_{j=1}^n\sum_{i=j}^n\sum_{X_k\in I_{|(\alpha t_{j-1},\alpha t_j]}}\theta_iN_{C_k}(0,\alpha t_i]-\sum_{j=1}^n\sum_{i=j}^n\theta_i\mathbb{E}\Bigg(\sum_{X_k\in I_{|(\alpha t_{j-1},\alpha t_j]}}N_{C_k}(0,\alpha t_i]\Bigg).\end{align}

By the definition of a Poisson cluster process, for each i, $I_{|{(0, \alpha t_i ]}}$ can be viewed as the superposition of the i independent Poisson processes $I_{|( \alpha t_{j-1}, \alpha t_j]}$ on $(\alpha t_{j-1}, \alpha t_j]$, $j=1,\cdots,i$, with intensity $\nu$, and for each j, $\{(X_k,C_k);\ X_k \in I_{|(\alpha t_{j-1}, \alpha t_j]}\}$ is an independently marked Poisson process. By Lemma 2 and (11), we have that

(12)\begin{align}&\mathbb{E\!}\left(e^{\frac{\sqrt{-1}}{\sqrt{\alpha}}\sum_{i=1}^n\theta_i \sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]}\right)\nonumber\\[3pt] & \quad =\exp\left\{\nu\sum_{j=1}^n\int_0^{\alpha(t_j-t_{j-1})}\mathbb{E}\!\left(e^{\frac{\sqrt{-1}}{\sqrt{\alpha}}\sum_{i=j}^n\theta_iN_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]}-1\right)ds\right\}\end{align}

and

(13)\begin{align}&\sum_{i=1}^n\theta_i\mathbb{E}\Bigg(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]\Bigg)\nonumber\\[4pt] & \quad =\nu\sum_{j=1}^n\int_{0}^{\alpha(t_j-t_{j-1})}\mathbb{E}\Bigg(\sum_{i=j}^n \theta_i N_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]\Bigg)ds,\end{align}

which implies that

\begin{align*} &\mathbb{E}\Bigg(e^{\frac{\sqrt{-1}}{\sqrt{\alpha}}\sum_{i=1}^n\theta_i\Big(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]-\mathbb{E}\left(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]\right) \Big)}\Bigg) \\ & = \exp\Bigg\{\nu\sum_{j=1}^n\int_{0}^{\alpha(t_j-t_{j-1})}\bigg\{\mathbb{E}\!\left(e^{\frac{\sqrt{-1}}{\sqrt{\alpha}}\sum_{i=j}^n\theta_iN_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]}-1\right) \\ &\quad \quad -\frac{\sqrt{-1}}{\sqrt{\alpha}}\mathbb{E}\Bigg(\sum_{i=j}^n\theta_i N_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]\Bigg)\Bigg\}ds \Bigg\}.\end{align*}

Note that for $j\leq i\leq n$ we have $N_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]\leq N_{C_0}(\mathbb{R})=S$, so that by the Cauchy–Schwarz inequality

\begin{equation*}\Bigg|\sum_{i=j}^n\theta_i N_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]\Bigg|^2\leq n |\theta|^2 S^2.\end{equation*}

Recall also the elementary inequality

\begin{equation*}\left|e^{\sqrt{-1}x}-1-\sqrt{-1}x+\frac{1}{2} x^2\right|\leq 2|x|^2I_{\{|x|\geq \epsilon\}}+ \frac{\epsilon}{2}|x|^2,\qquad x\in\mathbb R,\ \epsilon>0.\end{equation*}

Using a Taylor expansion in (12) together with (13), and noting that both exponents below have nonpositive real parts (so that $|e^{z_1}-e^{z_2}|\leq |z_1-z_2|$ whenever $\operatorname{Re} z_1\leq 0$ and $\operatorname{Re} z_2\leq 0$), we have

\begin{align*}&\Bigg|\mathbb{E}\left(e^{\frac{\sqrt{-1}}{\sqrt{\alpha}}\sum_{i=1}^n\theta_i\Big(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]-\mathbb{E}\Big(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]\Big) \Big)}\right) \\ &\qquad -\exp\Bigg\{-\nu\sum_{j=1}^n\int_{0}^{\alpha(t_j-t_{j-1})}\frac{1}{2\alpha}\mathbb{E}\Bigg(\Bigg(\sum_{i=j}^n\theta_i N_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]\Bigg)^2\Bigg)ds\Bigg\}\Bigg|\\ &\quad \leq \frac{\nu n |\theta|^2 t_n }{2} \left(4\mathbb{E}\big(S^2I_{\{S\geq \epsilon \sqrt{\alpha}\}}\big)+\epsilon \mathbb{E}(S^2)\right).\end{align*}

First letting $\alpha\to \infty$, then letting $\epsilon\to 0$, and noting that

\begin{equation*}N_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s] \uparrow S\end{equation*}

as $\alpha\uparrow \infty$, we obtain that

\begin{align*}& \lim_{\alpha\rightarrow\infty}\mathbb{E}\Bigg(e^{\frac{\sqrt{-1}}{\sqrt{\alpha}}\sum_{i=1}^n\theta_i\Big(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]-\mathbb{E}\Big(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]\Big) \Big)}\Bigg) \\[4pt] &\quad =\exp\Bigg\{-\frac{1}{2}\nu \mathbb{E} (S^2)\sum_{j=1}^n \Bigg(\sum_{i=j}^n\theta_i\Bigg)^2(t_j-t_{j-1})\Bigg\}\\[6pt] & \quad =\mathbb{E}\big(e^{ \sqrt{-1} \sum_{i=1}^n\theta_i \sigma B(t_i) }\big).\end{align*}

2.3.2. Proof of Lemma 4

Given $\eta>0$, set

\begin{equation*}A_s(\delta)=\bigg\{\sup_{s\leq t\leq s+\eta} \bigg| {C}(\alpha t)-\mathbb{E}( {C}(\alpha t))-({C}(\alpha s)-\mathbb{E}({C}(\alpha s))) \bigg|>\sqrt{\alpha}\delta\bigg\}.\end{equation*}

Then

\begin{align*}&\mathbb{P}\Bigg(\sup_{|t-s|<\eta} \bigg| {C}(\alpha t)-\mathbb{E}({C}(\alpha t))-({C}(\alpha s)-\mathbb{E}( {C}(\alpha s))) \bigg|>3\sqrt{\alpha}\delta\Bigg)\\[4pt] & \quad \leq \mathbb{P}\left(\cup_{i\leq \eta^{-1}} A_{i\eta} (\delta)\right)\leq \left(1+ \frac{1}{\eta}\right)\sup_{s\in (0,1)} \mathbb{P}\left(A_{s} (\delta)\right).\end{align*}

Thus, if for any $\delta>0$,

(14)\begin{align}\lim_{\eta\to 0}\limsup_{\alpha\to \infty } \frac{1}{\eta}\sup_{s\in (0,1)} \mathbb{P}\!\left(A_{s} (\delta)\right)=0,\end{align}

then (9) holds. Thus, we only need to prove (14).

For any $\eta\in (0,1)$, $s\in(0,1)$, set $t_l=s\alpha+l\eta$ for $l=0,1,\cdots, \lfloor\alpha\rfloor+1$. Then by Lemmas 5 and 1, for any $\delta>0$,

\begin{align*}&\mathbb{P}\!\left(\sup_{s\leq t\leq s+\eta} \bigg|C(\alpha t)-\mathbb{E}(C(\alpha t))-( C(\alpha s)-\mathbb{E}(C(\alpha s))) \bigg|>\sqrt{\alpha}\delta\right)\\[4pt] &\quad \leq \mathbb{P}\!\left( \max_{1\leq l\leq \lfloor\alpha\rfloor+1}\bigg| C(t_l)-\mathbb{E}(C(t_l))-( C(\alpha s)-\mathbb{E}(C(\alpha s))) \bigg| >\sqrt{\alpha}\delta/2\right)\\[4pt] &\qquad + \mathbb{P}\!\left( \max_{0\leq l\leq \lfloor\alpha\rfloor} \bigg| \mathbb{E}(C(t_l)-C(t_{l+1}) )\bigg| >\sqrt{\alpha}\delta/2\right),\end{align*}

and

\begin{align*} & \mathbb{P}\!\left( \max_{1\leq l\leq \lfloor\alpha\rfloor+1} \bigg| C(t_l)-\mathbb{E}(C(t_l))-( C(\alpha s)-\mathbb{E}(C(\alpha s))) \bigg| >\sqrt{\alpha}\delta/2\right)\\[4pt] &\quad \leq 2\mathbb{P}\Bigg( \max_{0\leq l\leq \lfloor\alpha\rfloor}\Bigg| \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{ \lfloor\alpha\rfloor+1}]\Bigg)\Bigg|>\sqrt{\alpha}\delta/12\Bigg)\\[4pt] &\qquad\! +2\max_{0\leq l\leq \lfloor\alpha\rfloor}\mathbb{P}\Bigg(\Bigg|\! \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}\!\!\! N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|> \!\sqrt{\alpha}\delta/12\Bigg).\end{align*}

Thus, we only need to show that

(15)\begin{equation}\lim_{\alpha\to \infty}\sup_{s\in(0,1)} \mathbb{P}\!\left( \max_{0\leq l\leq \lfloor\alpha\rfloor}\mathbb{E}(C(t_{l+1})-C(t_{l})) >\sqrt{\alpha}\delta/2\right) =0,\end{equation}

(16)\begin{align}&\lim_{\alpha\to \infty}\sup_{s\in(0,1)} \mathbb{P}\Bigg( \!\! \max_{0\leq l\leq \lfloor\alpha\rfloor}\Bigg|\! \sum_{X_k\in I_{|(0,t_l]}}\!\! N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]- \mathbb{E}\Bigg(\! \sum_{X_k\in I_{|(0,t_l]}}\! N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|>\sqrt{\alpha}\delta/12\!\Bigg)=0,\end{align}

and

(17)\begin{align} \lim_{\eta\to 0}\limsup_{\alpha\to\infty}\frac{1}{\eta}\sup_{s\in (0,1)} \max_{0\leq l\leq \lfloor\alpha\rfloor} & \mathbb{ P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\nonumber\\[5pt] & - \mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|> \sqrt{\alpha}\delta/12\Bigg) =0.\end{align}

Let us first prove (16). Applying Lemma 2, we have that for any $t\geq 0$,

\begin{align*}& \mathbb{E}\Bigg(\exp\Bigg\{\sqrt{-1}\theta\sum_{X_k\in I_{|(-\infty,t]}}N_{C_k}(t, \infty)\Bigg\}\Bigg)\\[6pt] & \quad =\exp\left\{\nu \int_{-\infty}^{t}\mathbb{E} \left( e^{\sqrt{-1}\theta N_{C_0}(t-s, \infty)}-1\right ) ds\right\}\\[6pt] & \quad =\exp\left\{\nu \mathbb{E}\left( \int_{0}^{L}\left( e^{\sqrt{-1}\theta N_{C_0}(s, \infty)}-1\right ) ds\right)\right\}.\end{align*}

Thus, the sums $ \sum_{X_k\in I_{|(-\infty,t_l]}}N_{C_k}(t_l, \infty)$, $l=0,1,\cdots, \lfloor\alpha\rfloor+1$, all have the same distribution as $ \sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, \infty)$, and

\begin{align*}\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty,t_l]}}N_{C_k}(t_l, \infty)\Bigg) & \leq \nu \mathbb{E}( L S),\\[5pt] \mathbb{E}\Bigg(\Bigg(\sum_{X_k\in I_{|(-\infty,t_l]}}N_{C_k}(t_l, \infty)\Bigg)^2\Bigg) & \leq ( \nu \mathbb{E}( L S))^2+\nu \mathbb{E}(LS^2).\end{align*}

Therefore, for $\alpha$ large enough,

\begin{align*}& \mathbb{P}\Bigg( \max_{0\leq l\leq \lfloor\alpha\rfloor}\Bigg| \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|>\sqrt{\alpha}\delta/12\Bigg)\\[3pt] & \leq (\alpha+1)\max_{0\leq l\leq \lfloor\alpha\rfloor}\mathbb{P}\Bigg( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}] >\sqrt{\alpha}\delta/20\Bigg)\\[3pt] & \leq (\alpha+1) \mathbb{P}\Bigg( \sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, \infty) >\sqrt{\alpha}\delta/20\Bigg)\\[3pt] & \leq \frac{(\alpha+1)20^2}{\alpha \delta^2} \mathbb{E}\Bigg(\Bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, \infty) \Bigg)^2I_{\left\{\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, \infty) >\sqrt{\alpha}\delta/20\right\}}\Bigg)\\[3pt] & \to 0 \quad \mbox{ as } \alpha\to\infty.\end{align*}

That is, (16) is valid.

Similarly, we can get that

\begin{equation*}\sup_{s\in(0,1)}\max_{0\leq l\leq \lfloor\alpha\rfloor}\mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_{l+1}]}}N_{C_k}(0, t_{l+1}]\Bigg)\leq \nu\eta \mathbb{E}(S).\end{equation*}

Therefore, when $\alpha$ is large enough,

(18)\begin{equation}\max_{0\leq l\leq \lfloor\alpha\rfloor}\mathbb{E}(C(t_{l+1})-C(t_{l})) \leq\nu \mathbb{E}( L S)+ \nu\eta \mathbb{E}(S)<\sqrt{\alpha}\delta/2,\end{equation}

which implies that (15) holds.

Now let us show (17). Set $S_k= N_{C_k}(\mathbb R)$. Define

\begin{equation*}Z_l=\sum_{X_k\in I_{|(t_l, t_{l+1}]} }S_k-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(t_l, t_{l+1}]} }S_k\Bigg), \qquad l=0,1,\cdots, \lfloor\alpha\rfloor,\end{equation*}

and

\begin{equation*}\tilde{Z}_l=\sum_{X_k\in I_{|(t_l, t_{l+1}]} }N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_{l+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Bigg), \qquad l=0,1,\cdots, \lfloor\alpha\rfloor.\end{equation*}

Then $Z_l,\ l=0,1,\cdots, \lfloor\alpha\rfloor$, are i.i.d. random variables with mean 0 and variance $\mathbb{E}(Z_0^2)= \nu \eta \mathbb{E}(S^2)$, and their common distribution does not depend on s.
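The value of the variance is a standard compound Poisson computation (cf. Lemma 2): the number of immigrants in $(t_0,t_{1}]$ is Poisson with mean $\nu\eta$ and the marks $S_k$ are i.i.d., so

\begin{equation*}\mathbb{E}(Z_0^2)=\operatorname{Var}\Bigg(\sum_{X_k\in I_{|(t_0,t_{1}]}}S_k\Bigg)=\nu\int_{t_0}^{t_{1}}\mathbb{E}(S^2)\,ds=\nu\eta\,\mathbb{E}(S^2).\end{equation*}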

Noting that

\begin{equation*}\sum_{l=0}^{\lfloor\alpha\rfloor} \sum_{X_k\in I_{|(t_l, t_{l+1}]} }\bigg|S_k- N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\bigg|\leq \sum_{X_k\in I_{|(0, t_{\lfloor\alpha\rfloor+1}]} }\big(N_{C_k}(-\infty, 0]+N_{C_k}(t_{\lfloor\alpha\rfloor+1},\infty)\big),\end{equation*}

we have that

(19)\begin{align}&\mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|> \sqrt{\alpha}\delta/12\Bigg)\nonumber\\ &\quad \leq \mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}S_k -\mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}S_k\Bigg)\Bigg|> \sqrt{\alpha}\delta/24\Bigg)\nonumber\\ &\qquad +\mathbb{P}\Bigg(\sum_{X_k\in I_{|(0, t_{\lfloor\alpha\rfloor+1}]} }(N_{C_k}(-\infty, 0]+N_{C_k}(t_{\lfloor\alpha\rfloor+1},\infty))\nonumber\\ &\qquad +\mathbb{E}\Bigg(\sum_{X_k\in I_{|(0, t_{\lfloor\alpha\rfloor+1}]} }(N_{C_k}(-\infty, 0]+N_{C_k}(t_{\lfloor\alpha\rfloor+1},\infty))\Bigg)> \sqrt{\alpha}\delta/24\Bigg),\end{align}

and by the Montgomery-Smith inequality [Reference Montgomery-Smith21],

(20)\begin{align} &\mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}S_k -\mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}S_k\Bigg)\Bigg|> \sqrt{\alpha}\delta/24\Bigg)\leq3 \mathbb{P}\Bigg(\Bigg| \sum_{l=0}^{\lfloor\alpha\rfloor} Z_l \Bigg|> \sqrt{\alpha}\delta/240\Bigg).\end{align}
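Here the Montgomery–Smith inequality [Reference Montgomery-Smith21] is used in the following form: if $\xi_1,\cdots,\xi_m$ are i.i.d. random variables, then for every $t>0$,

\begin{equation*}\mathbb{P}\Big(\max_{1\leq k\leq m}|\xi_1+\cdots+\xi_k|>t\Big)\leq 3\,\mathbb{P}\big(|\xi_1+\cdots+\xi_m|>t/10\big).\end{equation*}

Since $\sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}S_k-\mathbb{E}\big(\sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}S_k\big)=\sum_{j=l}^{\lfloor\alpha\rfloor}Z_j$ is a partial sum of the i.i.d. sequence $Z_{\lfloor\alpha\rfloor},Z_{\lfloor\alpha\rfloor-1},\cdots,Z_0$, applying this inequality to that sequence gives (20).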

From

\begin{align*}&\mathbb{E}\!\left(\exp\left\{\sqrt{-1}\theta\sum_{X_k\in I_{|(0, \infty)} }N_{C_k}(-\infty, 0]\right\}\right)\\[4pt] & =\exp\left\{\nu \mathbb{E}\left(\int_0^{L} \left( e^{\sqrt{-1}\theta N_{C_0}(-\infty, -t]}-1\right ) dt\right)\right\}\end{align*}

and

\begin{align*}&\mathbb{E}\Bigg(\exp\Bigg\{\sqrt{-1}\theta\sum_{X_k\in I_{|(-\infty, t_{\lfloor\alpha\rfloor+1}]} } N_{C_k}(t_{\lfloor\alpha\rfloor+1},\infty)\Bigg\}\Bigg)\\[5pt] &=\exp\left\{\nu \mathbb{E}\left(\int_{-L}^{0} \left( e^{\sqrt{-1}\theta N_{C_0}(-t, \infty]}-1\right ) dt\right)\right\},\end{align*}

we have the following moment estimates:

\begin{equation*}\max\Bigg\{\mathbb{E}\Bigg(\sum_{X_k\in I_{|(0, \infty)} }N_{C_k}(-\infty, 0]\Bigg),\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty, t_{\lfloor\alpha\rfloor+1}]} } N_{C_k}(t_{\lfloor\alpha\rfloor+1},\infty)\Bigg)\Bigg\}\leq \nu \mathbb{E}(L S)\end{equation*}

and

\begin{align*}&\max\Bigg\{\mathbb{E}\Bigg(\Bigg(\sum_{X_k\in I_{|(0, \infty)} }N_{C_k}(-\infty, 0]\Bigg)^2\Bigg),\mathbb{E}\Bigg(\Bigg(\sum_{X_k\in I_{|(-\infty, t_{\lfloor\alpha\rfloor+1}]} } N_{C_k}(t_{\lfloor\alpha\rfloor+1},\infty)\Bigg)^2\Bigg)\Bigg\}\\ & \leq ( \nu \mathbb{E}( L S))^2+\nu \mathbb{E}(LS^2).\end{align*}

Thus

\begin{equation*}\sum_{l=0}^{\lfloor\alpha\rfloor}\mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l, t_{l+1}]} }\left|S_k- N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\right|\Bigg)\leq 2\nu \mathbb{E}(LS),\end{equation*}

and by Chebyshev’s inequality,

(21)\begin{align}&\limsup_{\alpha\to \infty}\sup_{s\in(0,1)} \mathbb{P}\Bigg(\sum_{X_k\in I_{|(0, t_{\lfloor\alpha\rfloor+1}]} }(N_{C_k}(-\infty, 0]+N_{C_k}(t_{\lfloor\alpha\rfloor+1},\infty))\nonumber\\[3pt] & \qquad+\mathbb{E}\Bigg(\sum_{X_k\in I_{|(0, t_{\lfloor\alpha\rfloor+1}]} }(N_{C_k}(-\infty, 0]+N_{C_k}(t_{\lfloor\alpha\rfloor+1},\infty))\Bigg)> \sqrt{\alpha}\delta/24\Bigg)\nonumber\\[3pt] & \quad \leq \limsup_{\alpha\to \infty}\sup_{s\in(0,1)} \mathbb{P}\Bigg( \sum_{X_k\in I_{|(0, \infty)} }N_{C_k}(-\infty, 0] > \sqrt{\alpha}\delta/48\Bigg)\\[3pt] & \qquad +\limsup_{\alpha\to \infty}\sup_{s\in(0,1)} \mathbb{P}\Bigg( \sum_{X_k\in I_{|(-\infty, t_{\lfloor\alpha\rfloor+1}]} } N_{C_k}(t_{\lfloor\alpha\rfloor+1},\infty)> \sqrt{\alpha}\delta/48\Bigg)=0.\nonumber\end{align}

Applying Chebyshev’s inequality again,

\begin{align*}& \mathbb{P}\Bigg(\bigg| \sum_{l=0}^{\lfloor\alpha\rfloor}Z_l \bigg|> \sqrt{\alpha}\delta/240\Bigg)\leq \frac{240^2}{\alpha \delta^2} \mathbb{E}\Bigg(\bigg| \sum_{l=0}^{\lfloor\alpha\rfloor}Z_l \bigg|^2 I_{\left\{\left| \sum_{l=0}^{\lfloor\alpha\rfloor}Z_l \right|> \sqrt{\alpha}\delta/240\right\}} \Bigg).\end{align*}

By

\begin{equation*}\frac{1}{\sqrt{\alpha}}\sum_{l=0}^{\lfloor\alpha\rfloor}Z_l \to N(0,\nu\eta \mathbb{E}(S^2)),\end{equation*}

we have that

(22)\begin{align}& \lim_{\eta\to 0}\limsup_{\alpha\to\infty}\frac{1}{\alpha\eta} \mathbb{E}\Bigg(\bigg| \sum_{l=0}^{\lfloor\alpha\rfloor}Z_l \bigg|^2 I_{\left\{\left| \sum_{l=0}^{\lfloor\alpha\rfloor}Z_l \right|> \sqrt{\alpha}\delta/240\right\}} \Bigg)\nonumber\\[6pt] & \quad =\lim_{\eta\to 0} \nu \mathbb{E}(S^2)\int_{|x|\geq \frac{\delta}{240 \sqrt{\nu\eta \mathbb{E}(S^2)}}} |x|^2 \frac{1}{\sqrt{2\pi}} e^{-\frac{x^2}{2}} dx=0.\end{align}
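The convergence in distribution used above is the classical central limit theorem for i.i.d. summands: there are $\lfloor\alpha\rfloor+1$ terms, each centred with variance $\nu\eta\mathbb{E}(S^2)$, and $\sqrt{\lfloor\alpha\rfloor+1}/\sqrt{\alpha}\to 1$. Moreover,

\begin{equation*}\mathbb{E}\bigg(\Big|\frac{1}{\sqrt{\alpha}}\sum_{l=0}^{\lfloor\alpha\rfloor}Z_l\Big|^2\bigg)=\frac{\lfloor\alpha\rfloor+1}{\alpha}\,\nu\eta\,\mathbb{E}(S^2)\rightarrow\nu\eta\,\mathbb{E}(S^2),\end{equation*}

so the squares $\big|\alpha^{-1/2}\sum_{l=0}^{\lfloor\alpha\rfloor}Z_l\big|^2$ are uniformly integrable, which justifies passing to the Gaussian limit in the first equality of (22).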

Combining (19)–(22) completes the proof of (17).

3. Functional moderate deviations

In this section, we give a proof of Theorem 2. As in the proof of the central limit theorem, we again decompose $N_{\bf X}$ into three parts as in (3), then show that the last two parts are negligible in the sense of moderate deviations; see Propositions 4 and 5 below. Thus, the proof of Theorem 2 is reduced to showing the moderate deviations of the first term $C(\alpha t)$, i.e., Proposition 6.

Proposition 4. Assume that (A2) holds. Then for any $\delta>0$,

(23)\begin{align}\lim_{\alpha\rightarrow \infty}\frac{\alpha}{b^2(\alpha)}\log \mathbb{P}\Bigg(\sup_{t\in[0,1]}\bigg|&\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t]\nonumber\\[4pt] &-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t]\Bigg)\Bigg|>b(\alpha)\delta\Bigg)=-\infty.\end{align}

Proposition 5. Assume that (A2) holds. Then for any $\delta>0$,

(24)\begin{align}\lim_{\alpha\rightarrow \infty}\frac{\alpha}{b^2(\alpha)}\log \mathbb{P}\Bigg(\sup_{t\in[0,1]}\Bigg|&\sum_{X_k\in I_{|(\alpha t,\infty)}}N_{C_k}(0,\alpha t]\nonumber\\[4pt] &-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(\alpha t,\infty)}}N_{C_k}(0,\alpha t]\Bigg)\Bigg|>b(\alpha)\delta\Bigg)=-\infty.\end{align}

Proposition 6. Assume that (A2) holds. Then

\begin{equation*}\left\{\frac{C(\alpha t)-\mathbb{E}(C(\alpha t))}{b(\alpha)},\ t\in [0,1]\right\}\end{equation*}

satisfies the LDP on $(D[0,1],\|\cdot\|)$ with speed $\frac{b^2(\alpha)}{\alpha}$ and good rate function J(f).

3.1. Proof of Proposition 4

By Lemma 5, for any $\delta>0$,

\begin{align*}&\mathbb{P}\Bigg(\sup_{t\in[0,1]} \Bigg|\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0,\alpha t]\Bigg)\Bigg|>b(\alpha)\delta\Bigg)\\[4pt] &\quad \leq \mathbb{P}\Bigg(\max_{1\leq l\leq \lfloor\alpha\rfloor+1} \Bigg|\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, l]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, l]\Bigg)\Bigg|>b(\alpha)\delta/2\Bigg)\\[4pt] &\qquad +\mathbb{P}\Bigg(\max_{1\leq l\leq \lfloor\alpha\rfloor+1} \mathbb{E}\Bigg( \sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(l-1, l] \Bigg)>b(\alpha)\delta/2\Bigg). \end{align*}

Since $\{C_i;\ X_i \in I_{|(-\infty,0]}\}$ is an independently marked Poisson process, by Lemma 2, we have that

\begin{align*}&\log \mathbb{E}\Bigg(\exp\Bigg\{\theta_0 \sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, l]\Bigg\}\Bigg) \\[3pt] & = \nu \mathbb{E}\!\left(\int_{-\infty}^{0}\left(e^{\theta_0 N_{C_0}(-s, l-s]}-1\right)ds\right)\\[3pt] & \leq \nu \mathbb{E}\!\left(\int_{-L}^{0}\left(e^{\theta_0 S}-1\right)ds\right)\leq \nu \mathbb{E}\left(L e^{\theta_0 S}\right)<\infty.\end{align*}

Therefore,

\begin{align*}& \max_{1\leq l\leq \lfloor\alpha\rfloor+1} \mathbb{E}\Bigg( \sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(l-1, l] \Bigg)\leq \nu \mathbb{E}(LS),\end{align*}

and by Chebyshev’s inequality, when $\alpha$ is large enough,

\begin{align*}&\mathbb{P}\Bigg(\max_{1\leq l\leq \lfloor\alpha\rfloor+1} \Bigg|\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, l]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, l]\Bigg)\Bigg|>b(\alpha)\delta/2\Bigg)\\ & \leq (\alpha+1)\max_{1\leq l\leq \lfloor\alpha\rfloor+1} \mathbb{P}\Bigg( \sum_{X_k\in I_{|(-\infty,0]}}N_{C_k}(0, l] >b(\alpha)\delta/4\Bigg)\\ & \leq \exp\left\{-b(\alpha)\delta \theta_0/4+\log (\alpha+1)+ \nu \mathbb{E}\big(L e^{\theta_0 S}\big) \right\}.\end{align*}

Since $b(\alpha)=o(\alpha)$ and $\sqrt{\alpha}=o(b(\alpha))$ (so that $\alpha/b(\alpha)\to\infty$ and $\log(\alpha+1)=o(b(\alpha))$), we have

\begin{equation*}\frac{\alpha}{b^2(\alpha)}\left(-b(\alpha)\delta \theta_0/4+\log (\alpha+1)+ \nu \mathbb{E}\big(L e^{\theta_0 S}\big)\right)=-\frac{\alpha\delta\theta_0}{4b(\alpha)}+o\!\left(\frac{\alpha}{b(\alpha)}\right)\rightarrow -\infty,\end{equation*}

which implies that (23) holds.

3.2. Proof of Proposition 5

Let us give some moment estimates. Applying Lemma 2, we have that for any $\theta\leq \theta_0$,

\begin{align*}&\mathbb{E}\!\left(e^{ \theta \sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty,l]}\right) \\ =&\exp\left\{\nu\mathbb{E}\!\left(\int_{-1}^{L}\big(e^{ \theta N_{C_0}(-\infty, -s]}-1\big)ds\right)\right\}\leq \exp\left\{\nu\mathbb{E}\big((1+L) e^{ \theta_0 S}\big)\right\}.\end{align*}

For any $\delta>0$, when $\alpha$ is large enough,

(25)\begin{align}&\mathbb{P}\Bigg(\sup_{t\in [0,1]} \bigg|\sum_{X_k\in I_{|(\alpha t,\infty)}}N_{C_k}(0,\alpha t]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(\alpha t,\infty)}}N_{C_k}(0,\alpha t]\Bigg) \bigg|>b(\alpha)\delta\Bigg)\nonumber\\[3pt] & \leq (\alpha+1) \max_{1\leq l\leq \lfloor\alpha\rfloor+1}\mathbb{P}\Bigg( \sup_{l-1\leq t\leq l} \sum_{X_k\in I_{|(t,\infty)}}N_{C_k}(0, t]>b(\alpha)\delta/2\Bigg)\\[3pt] & \leq (\alpha+1) \max_{1\leq l\leq \lfloor\alpha\rfloor+1 }\mathbb{P}\Bigg(\sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty,l] >b(\alpha)\delta/2\Bigg).\nonumber\end{align}

By Chebyshev’s inequality, for any $\delta>0$, when $\alpha$ is large enough,

(26)\begin{align} & (\alpha+1) \max_{1\leq l\leq \lfloor\alpha\rfloor+1 }\mathbb{P}\Bigg(\sum_{X_k\in I_{|(l-1,\infty)}}N_{C_k}(-\infty,l] >b(\alpha)\delta/2\Bigg)\nonumber\\ & \leq \exp\left\{-b(\alpha)\delta \theta_0/2+\log (\alpha+1)+ \nu \mathbb{E}\big((1+L) e^{\theta_0 S}\big) \right\}.\end{align}

Now, combining (25) and (26), we obtain (24).

3.3. Proof of Proposition 6

By the general theory of large deviations (see [9, Theorem 5.1.2] and [10, Lemma A.1]), to obtain Proposition 6 it is sufficient to prove the large deviation principle for finite-dimensional distributions and the exponential tightness of $\{C(\alpha t), t\in [0,1];\ \alpha>0\}$, which are stated in Lemmas 6 and 7, respectively.

Let us first give a variational representation of the rate function J defined by (2) and show that it is a good rate function in $(D[0,1],\|\cdot\|)$. The proofs of these facts are standard (see [9, Lemma 5.1.6]). Set

\begin{equation*} \Lambda(\theta)=\frac{1}{2}\nu \mathbb{E}(S^2)\theta^2.\end{equation*}

Then the Fenchel–Legendre transform of $\Lambda(\theta)$ is

\begin{equation*}\Lambda^{*}(x)=\sup_{\theta\in \mathbb{R}}\{\theta x-\Lambda(\theta)\}=\frac{1}{2\nu \mathbb{E}( S^2)}x^2 .\end{equation*}
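Indeed, the supremum is attained at $\theta=x/(\nu\mathbb{E}(S^2))$, and

\begin{equation*}\sup_{\theta\in \mathbb{R}}\Big\{\theta x-\tfrac{1}{2}\nu \mathbb{E}(S^2)\theta^2\Big\}=\frac{x^2}{\nu\mathbb{E}(S^2)}-\frac{x^2}{2\nu\mathbb{E}(S^2)}=\frac{x^2}{2\nu\mathbb{E}(S^2)}.\end{equation*}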

For each $n\geq 1$ and $0\leq t_1 < \cdots < t_n \leq 1$, define

\begin{equation*}J_{t_1,\cdots,t_n}(x_1,\cdots,x_n)=\sum_{j=1}^n(t_j-t_{j-1})\Lambda^*\left(\frac{x_j-x_{j-1}}{t_j-t_{j-1}}\right),\end{equation*}

where $x_0=0$ and $t_0=0$. Then J has the variational representation

\begin{equation*}J(\,f)=\sup\left\{J_{t_1,\cdots,t_n}(\,f(t_1),\cdots,f(t_n))\,{:}\,n\geq 1,0\leq t_1<\cdots<t_n \leq 1\right\},\end{equation*}

and it is a good rate function in $(D[0,1],\|\cdot\|)$.

Lemma 6. Assume that (A2) holds. Then for each $n\geq 1$ and $0\leq t_1 < \cdots < t_n \leq 1$,

\begin{equation*}\left(\frac{C(\alpha t_1)-\mathbb{E}(C(\alpha t_1))}{b(\alpha)},\cdots,\frac{C(\alpha t_n)-\mathbb{E}(C(\alpha t_n))}{b(\alpha)}\right)\end{equation*}

satisfies the LDP in $\mathbb{R}^n$ with speed $\frac{b^2(\alpha)}{\alpha}$ and good rate function $J_{t_1,\cdots,t_n}(x_1,\cdots,x_n)$. That is,

  1. (a) for any closed set $F \subset \mathbb{R}^n$,

    \begin{align*}&\limsup_{\alpha\rightarrow \infty}\frac{\alpha}{b^2(\alpha)}\log \mathbb{P}\Bigg(\Bigg(\frac{C(\alpha t_1)-\mathbb{E}(C(\alpha t_1))}{b(\alpha)},\cdots,\frac{C(\alpha t_n)-\mathbb{E}(C(\alpha t_n))}{b(\alpha)}\Bigg)\in F\Bigg)\\[4pt] & \leq -\inf_{(x_1,\cdots,x_n)\in F}J_{t_1,\cdots,t_n}(x_1,\cdots,x_n);\end{align*}
  2. (b) for any open set $G \subset \mathbb{R}^n$,

    \begin{align*}&\liminf_{\alpha\rightarrow \infty}\frac{\alpha}{b^2(\alpha)}\log \mathbb{P}\bigg(\bigg(\frac{C(\alpha t_1)-\mathbb{E}(C(\alpha t_1))}{b(\alpha)},\cdots,\frac{C(\alpha t_n)-\mathbb{E}(C(\alpha t_n))}{b(\alpha)}\bigg)\in G\bigg)\\[4pt] & \geq -\inf_{(x_1,\cdots,x_n)\in G}J_{t_1,\cdots,t_n}(x_1,\cdots,x_n).\end{align*}

Lemma 7. Assume that (A2) holds. Then for any $\delta>0$ and $s\in(0,1)$,

(27)\begin{align}\lim_{\eta\to 0}\limsup_{\alpha\rightarrow \infty}\frac{\alpha}{b^2(\alpha)}\log \mathbb{P}\bigg(\sup_{s\leq t\leq s+\eta}\bigg|&C(\alpha t)-\mathbb{E}(C(\alpha t))\nonumber\\ &-(C(\alpha s)-\mathbb{E}(C(\alpha s))) \bigg|>b(\alpha)\delta\bigg)=-\infty.\end{align}

3.3.1. Proof of Lemma 6

Since

\begin{equation*}\sum_{i=1}^{n}\theta_ix_i=\sum_{i=1}^n\theta_i\sum_{j=1}^i(x_j-x_{j-1})=\sum_{j=1}^n(x_j-x_{j-1})\sum_{i=j}^n\theta_i,\end{equation*}

we have that

\begin{align*}&\sup_{(\theta_1,\cdots,\theta_n)\in \mathbb{R}^n}\left\{\sum_{i=1}^n\theta_ix_i-\sum_{j=1}^n(t_j-t_{j-1})\Lambda\left(\sum_{i=j}^n\theta_i\right)\right\}\\[4pt] & = \sum_{j=1}^n\sup_{\sum_{i=j}^n\theta_i\in \mathbb{R}}\left\{(x_j-x_{j-1})\sum_{i=j}^n\theta_i-(t_j-t_{j-1})\Lambda\left(\sum_{i=j}^n\theta_i\right)\right\}\\[4pt] & = J_{t_1,\cdots,t_n}(x_1,\cdots,x_n).\end{align*}

Thus, by the Gärtner–Ellis theorem, we need only show that for any $(\theta_1,\cdots,\theta_n)\in \mathbb{R}^n$,

\begin{align*}&\Lambda_{t_1,\cdots,t_n}(\theta_1,\cdots,\theta_n)\\[3pt] & \coloneqq \lim_{\alpha\rightarrow\infty}\frac{\alpha}{b^2(\alpha)}\log \mathbb{E}\!\left(\exp\left\{ \sum_{i=1}^n\frac{(C(\alpha t_i) -\mathbb{E}(C(\alpha t_i)))\theta_i}{b(\alpha)}\frac{b^2(\alpha)}{\alpha}\right\}\right) \\[3pt] & =\sum_{j=1}^n(t_j-t_{j-1})\Lambda\left(\sum_{i=j}^n\theta_i\right).\end{align*}

We can write

\begin{align*}&C(\alpha t_i) -\mathbb{E}(C(\alpha t_i))\\[4pt] &=\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]\Bigg)\end{align*}

and

\begin{align*}&\sum_{i=1}^n\theta_i\Bigg(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]\Bigg)\Bigg) \nonumber \\[4pt] & =\sum_{j=1}^n\sum_{i=j}^n\sum_{X_k\in I_{|(\alpha t_{j-1},\alpha t_j]}}\theta_iN_{C_k}(0,\alpha t_i]-\sum_{j=1}^n\sum_{i=j}^n\theta_i\mathbb{E}\Bigg(\sum_{X_k\in I_{|(\alpha t_{j-1},\alpha t_j]}}N_{C_k}(0,\alpha t_i]\Bigg).\end{align*}

By the definition of a Poisson cluster process, for each i, $I_{|{(0, \alpha t_i ]}}$ can be viewed as the superposition of the i independent Poisson processes $I_{|( \alpha t_{j-1}, \alpha t_j]}$ on $(\alpha t_{j-1}, \alpha t_j]$, $j=1,\cdots,i$, with intensity $\nu$, and for each j, $\{(X_k,C_k);\ X_k \in I_{|(\alpha t_{j-1}, \alpha t_j]}\}$ is an independently marked Poisson process. Applying Lemma 2, we have that

(28)\begin{align}&\mathbb{E}\bigg(e^{\frac{b(\alpha)}{\alpha}\sum_{i=1}^n\theta_i\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]}\bigg)\nonumber\\ &=\exp\left\{\nu\sum_{j=1}^n\int_0^{\alpha(t_j-t_{j-1})}\mathbb{E}\left(e^{\frac{b(\alpha)}{\alpha}\sum_{i=j}^n\theta_iN_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]}-1\right)ds\right\}\end{align}

and

(29)\begin{align}&\sum_{i=1}^n\theta_i\mathbb{E}\Bigg(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]\Bigg)\nonumber\\[4pt] &=\nu\sum_{j=1}^n\int_{0}^{\alpha(t_j-t_{j-1})}\mathbb{E}\Bigg(\sum_{i=j}^n \theta_i N_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]\Bigg)ds.\end{align}

Note that $N_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]\leq N_{C_0}(\mathbb{R})=S$ and

\begin{equation*}N_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s] \uparrow S\end{equation*}

as $\alpha\uparrow \infty$. Using Taylor expansion in (28) with (29), we have

\begin{align*}&\frac{\alpha}{b^2(\alpha)}\log\mathbb{E}\Bigg(e^{\frac{b(\alpha)}{\alpha}\sum_{i=1}^n\theta_i\left(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]-\mathbb{E}\left(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]\right) \right)}\Bigg) \nonumber \\[4pt] &=\frac{\alpha}{b^2(\alpha)}\nu\sum_{j=1}^n\int_{0}^{\alpha(t_j-t_{j-1})}\bigg\{\mathbb{E}\left(e^{\frac{b(\alpha)}{\alpha}\sum_{i=j}^n\theta_iN_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]}-1\right) \nonumber \\[4pt] &\quad -\frac{b(\alpha)}{\alpha}\mathbb{E}\Bigg(\sum_{i=j}^n \theta_i N_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]\Bigg)\bigg\}ds \nonumber \displaybreak\\[4pt] &=\frac{\alpha}{b^2(\alpha)}\nu\sum_{j=1}^n\int_{0}^{\alpha(t_j-t_{j-1})}\frac{1}{2}\left(\frac{b(\alpha)}{\alpha}\right)^2\mathbb{E}\Bigg(\Bigg(\sum_{i=j}^n\theta_iN_{C_0}(-\alpha t_{j-1}-s,\alpha(t_i-t_{j-1})-s]\Bigg)^2\Bigg)ds \nonumber \\[4pt] &\quad + O \bigg(\frac{b(\alpha)}{\alpha}\bigg).\end{align*}

Thus

\begin{align*}& \lim_{\alpha\rightarrow\infty}\frac{\alpha}{b^2(\alpha)}\log\mathbb{E}\Bigg(e^{\frac{b(\alpha)}{\alpha}\sum_{i=1}^n\theta_i\left(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]-\mathbb{E}\left(\sum_{X_k\in I_{|(0,\alpha t_i]}}N_{C_k}(0,\alpha t_i]\right) \right)}\Bigg)\nonumber \\[5pt] &=\sum_{j=1}^n(t_j-t_{j-1})\frac{1}{2}\nu \Bigg(\sum_{i=j}^n\theta_i\Bigg)^2\mathbb{E}( S^2),\end{align*}

and

\begin{equation*}\Lambda_{t_1,\cdots,t_n}(\theta_1,\cdots,\theta_n)=\sum_{j=1}^n(t_j-t_{j-1})\Lambda\Bigg(\sum_{i=j}^n\theta_i\Bigg).\end{equation*}

3.3.2. Proof of Lemma 7

We now prove Lemma 7. The maximal inequality plays an important role in the proof.

For any $\eta\in (0,1)$, $s\in(0,1)$, set $t_l=s\alpha+l\eta$ for $l=0,1,\cdots, \lfloor\alpha\rfloor+1$. Then by Lemmas 5 and 1, for any $\delta>0$,

(30)\begin{align}&\mathbb{P}\left(\sup_{s\leq t\leq s+\eta} \bigg|C(\alpha t)-\mathbb{E}(C(\alpha t))-( C(\alpha s)-\mathbb{E}(C(\alpha s))) \bigg|>b(\alpha)\delta\right)\nonumber\\[4pt] &\quad \leq \mathbb{P}\left( \max_{1\leq l\leq \lfloor\alpha\rfloor+1} \bigg| C(t_l)-\mathbb{E}(C(t_l))-( C(\alpha s)-\mathbb{E}(C(\alpha s))) \bigg| >b(\alpha)\delta/2\right)\\[4pt] &\qquad + \mathbb{P}\left( \max_{0\leq l\leq \lfloor\alpha\rfloor} \bigg|\mathbb{E}( C(t_l)-C(t_{l+1}) )\bigg| >b(\alpha)\delta/2\right),\nonumber\end{align}

and

(31)\begin{align} & \mathbb{P}\left( \max_{1\leq l\leq \lfloor\alpha\rfloor+1} \bigg| C(t_l)-\mathbb{E}(C(t_l))-( C(\alpha s)-\mathbb{E}(C(\alpha s))) \bigg| >b(\alpha)\delta/2\right)\nonumber\\[4pt] &\quad \leq 2 \mathbb{P}\Bigg( \max_{0\leq l\leq \lfloor\alpha\rfloor}\Bigg| \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|>b(\alpha)\delta/12\Bigg)\nonumber\\[4pt] &\qquad +2\max_{0\leq l\leq {\lfloor\alpha\rfloor}}\!\mathbb{P}\Bigg(\Bigg|\! \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}\!\! N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]- \mathbb{E}\Bigg( \!\sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}\!\! N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|> b(\alpha)\delta/12\!\Bigg).\end{align}

By Lemma 2, we have that

\begin{align*} &\log \mathbb{E}\Bigg(\exp\Bigg\{\theta_0 \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{l+1}]\Bigg\}\Bigg) \leq \nu \mathbb{E}(L e^{\theta_0 S})<\infty,\\[4pt] &\log \mathbb{E}\Bigg(\exp\Bigg\{\theta_0 \sum_{X_k\in I_{|(t_{l},t_{l+1}]}}N_{C_k}(0, t_{l+1}]\Bigg\}\Bigg) \leq \nu \mathbb{E}( e^{\theta_0 S})<\infty,\end{align*}

and

\begin{align*}&\log \mathbb{E}\Bigg(\exp\Bigg\{\theta_0 \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]\Bigg\}\Bigg)\leq \nu \mathbb{E}(L e^{\theta_0 S})<\infty.\end{align*}

By (18),

\begin{align*}& \max_{0\leq l\leq \lfloor\alpha\rfloor}\mathbb{E}(C(t_{l+1})-C(t_{l})) \leq \nu \mathbb{E}(LS)+\nu\eta \mathbb{E}(S),\end{align*}

and by Chebyshev’s inequality, when $\alpha$ is large enough,

\begin{align*}& (\alpha+1 ) \max_{0\leq l\leq \lfloor\alpha\rfloor}\mathbb{P}\Bigg( \Bigg| \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|>b(\alpha)\delta/12\Bigg)\\[4pt] & \leq (\alpha+1 ) \max_{0\leq l\leq \lfloor\alpha\rfloor}\mathbb{P}\Bigg( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]>b(\alpha)\delta/24\Bigg)\\[4pt] & \leq e^{-\delta \theta_0 b(\alpha)/24+\log (\alpha+1)+ \nu \mathbb{E}\left(L e^{\theta_0 S}\right)},\end{align*}

which implies that

(32)\begin{equation}\lim_{\alpha\to \infty}\frac{\alpha}{b^2(\alpha)}\log \mathbb{P}\!\left( \max_{0\leq l\leq \lfloor\alpha\rfloor}\mathbb{E}(C(t_{l+1})-C(t_{l})) >b(\alpha)\delta/2\right) =-\infty\end{equation}

and

(33)\begin{align}&\lim_{\alpha\to \infty}\frac{\alpha}{b^2(\alpha)}\log \Bigg( (\alpha+1 ) \max_{0\leq l\leq \lfloor\alpha\rfloor}\mathbb{P}\Bigg( \Bigg| \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]\nonumber\\[4pt] & \qquad - \mathbb{E}\!\left( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_{\lfloor\alpha\rfloor+1}]\right)\Bigg|>b(\alpha)\delta/12\Bigg)\Bigg) =-\infty.\end{align}

On the other hand, by Chebyshev’s inequality and Lemma 2, for any $\theta >0$,

\begin{align*}&\frac{\alpha}{b^2(\alpha)}\log \Bigg(\max_{0\leq l\leq {\lfloor\alpha\rfloor}}\mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\\[4pt] & \qquad\qquad\qquad\qquad\qquad - \mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|> b(\alpha)\delta/12\Bigg)\Bigg)\\ & \leq -\theta \delta/12\\ & \quad +\frac{\alpha}{b^2(\alpha)}\log \max_{0\leq l\leq {\lfloor\alpha\rfloor}} \mathbb{E}\Bigg(e^{\frac{b(\alpha)}{\alpha}\theta\Big( \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]- \mathbb{E}\bigg( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Big)\Big)}\Bigg) \\[5pt] &\quad +\frac{\alpha}{b^2(\alpha)}\log \max_{0\leq l\leq {\lfloor\alpha\rfloor}}\mathbb{E}\Bigg(e^{-\frac{b(\alpha)}{\alpha}\theta\Big( \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]- \mathbb{E}\Big( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Big)\Big)}\Bigg) \end{align*}

and

\begin{align*}&\max_{0\leq l\leq {\lfloor\alpha\rfloor}}\frac{\alpha}{b^2(\alpha)}\log \mathbb{E}\Bigg(e^{\pm\frac{b(\alpha)}{\alpha}\theta\Big( \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]- \mathbb{E}\Big( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Big)\Big)}\Bigg)\\[4pt] & = \max_{0\leq l\leq {\lfloor\alpha\rfloor}}\frac{\alpha}{b^2(\alpha)}\nu \int_{t_l}^{t_{\lfloor\alpha\rfloor+1}} \mathbb{E}\left(e^{\pm\frac{b(\alpha)}{\alpha} \theta N_{C_0}(-s, t_{\lfloor\alpha\rfloor+1}-s]}-1\mp\frac{b(\alpha)}{\alpha} \theta N_{C_0}(-s, t_{\lfloor\alpha\rfloor+1}-s]\right)ds \\[4pt] & \leq \nu \eta \theta^2 \mathbb{E}(S^2)/2+o(1).\end{align*}

Thus, for any $\theta>0$,

\begin{align*}&\lim_{\eta\to 0}\limsup_{\alpha\to\infty}\frac{\alpha}{b^2(\alpha)}\log \Bigg(\max_{0\leq l\leq {\lfloor\alpha\rfloor}}\mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\\ &\quad\quad\quad\quad\quad\quad\quad\quad\quad- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|> b(\alpha)\delta/12\Bigg)\Bigg)\leq -\theta \delta/12.\end{align*}

Letting $\theta\to \infty$, this yields that

(34)\begin{align}&\lim_{\eta\to 0}\limsup_{\alpha\to\infty}\frac{\alpha}{b^2(\alpha)}\log \Bigg(\max_{0\leq l\leq {\lfloor\alpha\rfloor}}\mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l, t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\nonumber\\[4pt] &\quad\quad\quad\quad\quad\quad\quad\quad\quad- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_{\lfloor\alpha\rfloor+1}]}}N_{C_k}(0, t_{\lfloor\alpha\rfloor+1}]\Bigg)\Bigg|> b(\alpha)\delta/12\Bigg)\Bigg)\leq -\infty.\end{align}

Finally, combining (30)–(34), we obtain (27).

Appendix: Proof of Lemma 1

In this appendix, we prove Lemma 1.

Proof of Lemma 1. Set $\tau=\inf\left\{l\geq 1; \left| C(t_l)-C(s)-\mathbb{E}(C(t_l)-C(s)) \right|>3r\right\}$. Then for any $1\leq l\leq n$, we can write

\begin{equation*}C(t_l)-C(s)=\sum_{X_k\in I_{|(0,s]}}N_{C_k}(s, t_l]+\sum_{X_k\in I_{|(s,t_l]}}N_{C_k}(0, t_l],\end{equation*}

and

\begin{align*}\{\tau=l\}=&\bigg\{ \sup_{1\leq k\leq l-1 }\left| C(t_k)-C(s)-\mathbb{E}(C(t_k)-C(s)) \right|\leq 3 r , \\&\quad\quad\quad\quad\quad\quad \left| C(t_l)-C(s)-\mathbb{E}(C(t_l)-C(s)) \right|>3r\bigg\}.\end{align*}

Thus $\{\tau=l\}$ is determined by the immigrants in $(0,t_l]$ together with their clusters; since these are independent of the immigrants in $(t_l,t_n]$ and their clusters, $\{\tau=l\}$ and $\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]$ are independent. When $\tau=l$, if

\begin{equation*}\Bigg| \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_n]- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_n]\Bigg)\Bigg|\leq r,\end{equation*}

and

\begin{equation*}\Bigg|\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|\leq r, \end{equation*}

then

\begin{equation*}\left|C(t_n)-C(t_l)-\mathbb{E}(C(t_n)-C(t_l)) \right|\leq 2r,\end{equation*}

and so, since $\left| C(t_l)-C(s)-\mathbb{E}(C(t_l)-C(s)) \right|>3r$ on $\{\tau=l\}$, the triangle inequality gives $\left|C(t_n)-C(s)-\mathbb{E}(C(t_n)-C(s)) \right|>3r-2r=r$. We can write

\begin{align*}&\mathbb{P}\Bigg( \max_{1\leq j\leq n-1}\Bigg| \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]\Bigg)\Bigg|\leq r,\\ & \quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad \left|C(t_n)-C(s)-\mathbb{E}(C(t_n)-C(s)) \right|>r\Bigg)\\ & \geq \sum_{l=1}^n \mathbb{P}\Bigg( \tau=l,\max_{1\leq j\leq n-1}\Bigg| \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]\Bigg)\Bigg|\leq r,\\ & \quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad \left|C(t_n)-C(s)-\mathbb{E}(C(t_n)-C(s)) \right|>r\Bigg).\end{align*}

Thus, by the implication established above and the fact that $\sum_{X_k\in I_{|(0,t_n]}}N_{C_k}(t_n, t_n]=0$ (so the maximum over $1\leq j\leq n-1$ coincides with the maximum over $1\leq j\leq n$),

\begin{align*}&\mathbb{P}\Bigg( \max_{1\leq j\leq n}\Bigg| \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]\Bigg)\Bigg|\leq r,\\[3pt] & \quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad \quad\quad\quad\quad \left|C(t_n)-C(s)-\mathbb{E}(C(t_n)-C(s)) \right|>r\Bigg)\\[3pt] & \geq \sum_{l=1}^n \mathbb{P}\Bigg( \tau=l,\max_{1\leq j\leq n}\Bigg| \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]\Bigg)\Bigg|\leq r,\\[3pt] & \quad\quad\quad\quad \quad\quad\quad\quad \Bigg|\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|\leq r\Bigg)\\[3pt] & =\mathbb{P}\Bigg( \cup_{l=1}^n\Bigg\{\tau=l, \Bigg|\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|\leq r\Bigg\}\\[3pt] & \quad\quad\quad\quad \quad\quad \cap\Bigg\{\max_{1\leq j\leq n}\Bigg| \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]\Bigg)\Bigg|\leq r\Bigg\} \Bigg)\\[3pt] & \geq \mathbb{P}\Bigg( \cup_{l=1}^n\Bigg\{\tau=l, \Bigg|\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|\leq r\Bigg\}\Bigg)\\[3pt] & \quad\quad\quad\quad \quad\quad-\mathbb{P}\Bigg(\max_{1\leq j\leq n}\Bigg| \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]\Bigg)\Bigg|> r \Bigg)\\[3pt] & =\sum_{l=1}^n\mathbb{P}(\tau=l) \mathbb{P}\Bigg(\Bigg|\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|\leq r\Bigg)\\[3pt] & \quad\quad\quad\quad \quad\quad-\mathbb{P}\Bigg(\max_{1\leq j\leq n-1}\Bigg| \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]\Bigg)\Bigg|> r \Bigg).\end{align*}

(In the last display, the second inequality uses $\mathbb{P}(E\cap F)\geq \mathbb{P}(E)-\mathbb{P}(F^{c})$, and the final equality uses the disjointness of the events $\{\tau=l\}$ together with the independence of $\{\tau=l\}$ and $\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]$ noted above.) If

\begin{equation*}\max_{0\leq l\leq n-1}\mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|> r\Bigg)\geq \frac{1}{2},\end{equation*}

then (1) is trivial. Otherwise,

\begin{equation*}\Bigg(1-\max_{1\leq l\leq n-1}\mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|> r\Bigg)\Bigg)>1/2,\end{equation*}

and so

\begin{align*}& \mathbb{P}\!\left( \max_{1\leq l\leq n}\left| C(t_l)-C(s)-\mathbb{E}(C(t_l)-C(s)) \right|>3r \right)\\[4pt] &\quad \leq 2 \mathbb{P}\Bigg( \max_{1\leq l\leq n-1} \Bigg| \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_n]\Bigg)\Bigg|>r\Bigg)\\[4pt] &\qquad + 2\mathbb{P}\left(\left| C(t_n)-C(s)-\mathbb{E}(C(t_n)-C(s)) \right|>r\right).\end{align*}

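For completeness, here is a sketch of the rearrangement behind the last bound. By the definition of $\tau$, $\sum_{l=1}^{n}\mathbb{P}(\tau=l)=\mathbb{P}\!\left( \max_{1\leq l\leq n}\left| C(t_l)-C(s)-\mathbb{E}(C(t_l)-C(s)) \right|>3r \right)$, and each factor $\mathbb{P}(|\cdots|\leq r)$ in the last line of the long display above is at least $1/2$ (for $1\leq l\leq n-1$ this was shown above, and for $l=n$ the sum over $X_k\in I_{|(t_n,t_n]}$ is empty, so the probability equals one). Combining these observations with the long display above gives

\begin{align*}&\tfrac{1}{2}\,\mathbb{P}\!\left( \max_{1\leq l\leq n}\left| C(t_l)-C(s)-\mathbb{E}(C(t_l)-C(s)) \right|>3r \right)\\[3pt] &\quad \leq \sum_{l=1}^n\mathbb{P}(\tau=l)\, \mathbb{P}\Bigg(\Bigg|\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]-\mathbb{E}\Bigg(\sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|\leq r\Bigg)\\[3pt] &\quad \leq \mathbb{P}\Bigg( \max_{1\leq j\leq n-1} \Bigg| \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_j]}}N_{C_k}(t_j, t_n]\Bigg)\Bigg|>r\Bigg)+\mathbb{P}\!\left(\left| C(t_n)-C(s)-\mathbb{E}(C(t_n)-C(s)) \right|>r\right),\end{align*}

and multiplying by $2$ gives the bound above.
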
Now, noting that

\begin{align*}& \mathbb{P}\left(\left| C(t_n)-C(s)-\mathbb{E}(C(t_n)-C(s)) \right|>r\right)\\[4pt] &\quad \leq \mathbb{P}\Bigg( \Bigg| \sum_{X_k\in I_{|(0,t_0]}}N_{C_k}(t_0, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_0]}}N_{C_k}(t_0, t_n]\Bigg)\Bigg|>r/2\Bigg)\\[4pt] &\qquad + \mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_0,t_n]}}N_{C_k}(0, t_n]- \mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_0,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|> r/2\Bigg),\end{align*}

we have

\begin{align*}& \mathbb{P}\!\left( \max_{1\leq l\leq n}\left| C(t_l)-C(s)-\mathbb{E}(C(t_l)-C(s)) \right|>3r \right)\\[4pt] &\quad \leq 2 \mathbb{P}\Bigg( \max_{0\leq l\leq n-1}\Bigg| \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(0,t_l]}}N_{C_k}(t_l, t_n]\Bigg)\Bigg|>r/2\Bigg)\\[4pt] &\qquad + 2\max_{0\leq l\leq n-1}\mathbb{P}\Bigg(\Bigg| \sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]-\mathbb{E}\Bigg( \sum_{X_k\in I_{|(t_l,t_n]}}N_{C_k}(0, t_n]\Bigg)\Bigg|> r/2\Bigg). \\[-40pt]\end{align*}

Acknowledgements

We would like to thank the editors and two anonymous referees for their careful reading of the manuscript and their valuable comments and suggestions. F. Gao was supported by the National Natural Science Foundation of China (Nos. 11571262, 11971361, and 11731012).
