
Representations of Hermite processes using local time of intersecting stationary stable regenerative sets

Published online by Cambridge University Press:  23 November 2020

Shuyang Bai*
Affiliation:
University of Georgia, US
*
*Postal address: Department of Statistics, University of Georgia, 310 Herty Drive, Athens, GA 30602, USA. Email address: bsy9142@uga.edu

Abstract

Hermite processes are a class of self-similar processes with stationary increments. They often arise in limit theorems under long-range dependence. We derive new representations of Hermite processes with multiple Wiener–Itô integrals, whose integrands involve the local time of intersecting stationary stable regenerative sets. The proof relies on an approximation of regenerative sets and local times based on a scheme of random interval covering.

MSC classification

Type
Research Papers
Copyright
© Applied Probability Trust 2020

1. Introduction

Since the seminal works of Taqqu [Reference Taqqu29, Reference Taqqu30] and Dobrushin and Major [Reference Dobrushin and Major7], the class of processes called Hermite processes has attracted considerable interest in probability and statistics. A Hermite process, up to a multiplicative constant, is specified by two parameters: the order $p\in \mathbb{Z}_+$ and the memory parameter

(1.1) \begin{equation}\beta\in \bigg(1-\frac{1}{p},1\bigg).\end{equation}

A Hermite process can be defined by any of its equivalent representations. Here, by equivalent representations we mean that the represented processes share the same finite-dimensional distributions. Two of the most well-known representations are the time-domain representation and the frequency-domain representation in terms of multiple Wiener–Itô integrals (see Section 2.1). The time-domain representation is given by

(1.2) \begin{equation}Z_1(t)= a_{p,\beta} \int^{\prime}_{\mathbb{R}^p} \bigg(\int_0^t \prod_{j=1}^p (s-x_j)^{\beta/2-1}_+ {\rm d} s \bigg) W({\rm d} x_1)\cdots W({\rm d} x_p),\qquad t\ge 0,\end{equation}

where W is a Gaussian random measure on $\mathbb{R}$ with Lebesgue control measure, the prime ʹ at the top of the integral sign indicates the exclusion of the diagonals $x_i=x_j$ , $i\neq j$ , in the p-tuple stochastic integral, and

\begin{equation*}a_{p,\beta}=\bigg( \frac{(1-p(1-\beta)/2)(1-p(1-\beta))}{p! \mathrm{B}(\beta/2,1-\beta)^p}\bigg)^{1/2}\end{equation*}

is a constant to ensure that $\mathrm{Var}[Z_1(1)]=1$ , where $\mathrm{B}(\cdot\,,\cdot)$ is the beta function. The frequency-domain representation is given by

(1.3) \begin{equation}Z_2(t)=b_{p,\beta} \int^{\prime\prime}_{\mathbb{R}^p}\frac{{\rm e}^{it(x_1+\cdots+x_p) }-1}{i(x_1+\cdots+x_p)} \prod_{j=1}^p |x_j|^{-\beta/2} \widehat{W}({\rm d} x_1)\cdots \widehat{W}({\rm d} x_p),\qquad t\ge 0,\end{equation}

where $\widehat{W}$ is a complex-valued Hermitian Gaussian random measure [Reference Pipiras and Taqqu23, Definition B.1.3] with Lebesgue control measure, the double prime ʹʹ at the top of the integral sign indicates the exclusion of the hyperdiagonals $x_i=\pm x_j$ , $i\neq j$ , in the p-tuple stochastic integral, and

\begin{equation*}b_{p,\beta}=\bigg( \frac{(p(\beta-1)/2+1)(p(\beta-1)+1)}{p! [\Gamma(1-\beta)\sin (\beta\pi/2)]^p}\bigg)^{1/2}\end{equation*}

is a constant to ensure $\mathrm{Var}[Z_2(1)]=1$ , where $\Gamma(\cdot)$ is the gamma function. See [Reference Pipiras and Taqqu23, Section 4.2] for the derivation of the normalization constants $a_{p,\beta}$ and $b_{p,\beta}$ . It was shown in [Reference Taqqu30] that $Z_1(t)$ and $Z_2(t)$ have the same finite-dimensional distributions and thus they represent the same process, which we denote as Z(t). We shall call such a process a standard Hermite process, where the word standard corresponds to the normalization $\mathrm{Var}[Z(1)]=1$ .
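As a quick numerical sanity check (an illustrative Python sketch we add here; the helper names are ours), the normalization constants can be evaluated directly from the gamma function. Note also that the numerators of $a_{p,\beta}^2$ and $b_{p,\beta}^2$ coincide, since $1-p(1-\beta)/2=p(\beta-1)/2+1$ and $1-p(1-\beta)=p(\beta-1)+1$.

```python
import math

def beta_fn(a: float, b: float) -> float:
    """Beta function B(a, b) = Gamma(a) Gamma(b) / Gamma(a + b)."""
    return math.gamma(a) * math.gamma(b) / math.gamma(a + b)

def a_const(p: int, beta: float) -> float:
    """Time-domain normalization a_{p,beta} from (1.2)."""
    num = (1 - p * (1 - beta) / 2) * (1 - p * (1 - beta))
    den = math.factorial(p) * beta_fn(beta / 2, 1 - beta) ** p
    return math.sqrt(num / den)

def b_const(p: int, beta: float) -> float:
    """Frequency-domain normalization b_{p,beta} from (1.3)."""
    num = (p * (beta - 1) / 2 + 1) * (p * (beta - 1) + 1)
    den = math.factorial(p) * (math.gamma(1 - beta) * math.sin(beta * math.pi / 2)) ** p
    return math.sqrt(num / den)

# beta must lie in (1 - 1/p, 1); here (0.5, 1) for p = 2
p, beta = 2, 0.8
print(a_const(p, beta), b_const(p, beta))
```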

A Hermite process Z(t) has stationary increments and is self-similar with Hurst index $H=1-p(1-\beta)/2\in (1/2,1)$ ; namely, $(Z(ct))_{t\ge 0}$ and $(c^H Z(t))_{t\ge 0}$ have the same finite-dimensional distributions for any constant $c>0$ . In the literature, H is often used in place of $\beta$ to parameterize Z(t), whereas $\beta$ is a convenient choice for this paper. When the order $p=1$ , Z(t) recovers a well-known Gaussian process: fractional Brownian motion. When $p\ge 2$ , the law of Z(t) is non-Gaussian, and if $p=2$ , the process is also known as a Rosenblatt process [Reference Rosenblatt24, Reference Taqqu29]. All standard Hermite processes with Hurst index H, regardless of the order, share the same covariance structure as a standard fractional Brownian motion, that is,

\begin{equation*}\mathbb{E} [Z(t_1) Z(t_2)]=\frac{1}{2}\left(|t_1|^{2H}+|t_2|^{2H}- |t_1-t_2|^{2H}\right), \qquad t_1,t_2\ge 0.\end{equation*}
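The covariance formula above encodes both the stationarity of increments and the self-similarity; a short numerical check (an illustrative Python sketch with names of our choosing) confirms that $\mathrm{Var}[Z(t_2)-Z(t_1)]=|t_2-t_1|^{2H}$ and that $\mathbb{E}[Z(ct_1)Z(ct_2)]=c^{2H}\,\mathbb{E}[Z(t_1)Z(t_2)]$.

```python
def cov(t1: float, t2: float, H: float) -> float:
    """Covariance of a standard Hermite process (same as fBm with Hurst index H)."""
    return 0.5 * (abs(t1) ** (2 * H) + abs(t2) ** (2 * H) - abs(t1 - t2) ** (2 * H))

def incr_var(s: float, t: float, H: float) -> float:
    """Var[Z(t) - Z(s)] computed from the covariance function."""
    return cov(t, t, H) - 2 * cov(s, t, H) + cov(s, s, H)

p, beta = 2, 0.8
H = 1 - p * (1 - beta) / 2   # Hurst index, here 0.8, always in (1/2, 1)
print(incr_var(1.0, 3.0, H), 2.0 ** (2 * H))   # increment variance = lag^{2H}
```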

A Hermite process Z(t) often appears (though not exclusively) in a limit theorem of the form

\begin{equation*} \Bigg(\frac{1}{A(N)}\sum_{n=1}^{\lfloor Nt \rfloor}X_n\Bigg)_{t\ge 0} \Rightarrow (Z(t))_{t\ge 0} \qquad \text{as }N\rightarrow\infty,\end{equation*}

where $\Rightarrow$ stands for a suitable sense of weak convergence (e.g. convergence of finite-dimensional distributions, or weak convergence in Skorokhod space), A(N) is a normalizing sequence regularly varying [Reference Bingham, Goldie and Teugels6] with index H as $N\rightarrow\infty$ , and $(X_n)$ is a stationary sequence with long-range dependence, a notion often characterized by a slow power-law decay of the covariance of $(X_n)$ (see, e.g., [Reference Dobrushin and Major7, Reference Ho and Hsing11, Reference Surgailis28, Reference Taqqu30]). These limit theorems are often termed non-central limit theorems, and have found numerous applications in statistical inference under long-range dependence (see, e.g., [Reference Bai and Taqqu2] and the references therein).

Some alternative representations of a Hermite process are known besides the ones in (1.2) and (1.3). Two other representations based on multiple Wiener–Itô integrals are the finite-time interval representation [Reference Pipiras and Taqqu22, Reference Tudor32] and the positive half-axis representation [Reference Pipiras and Taqqu22] (see also [Reference Pipiras and Taqqu23, Section 4.2]). In addition, there is a representation involving multiple integrals with respect to fractional Brownian motion [Reference Nourdin, Nualart and Tudor20, Definition 7] (see also [Reference Tudor31, Section 3.1]). Typically, to obtain a Hermite process as the weak limit, one needs to work with a suitable choice among these different representations.

In this paper we shall provide new multiple Wiener–Itô integral representations of Hermite processes of different natures. These representations involve the local time of intersecting stationary stable regenerative sets (see Section 2). The reader may directly skip to Theorem 3.1 for a glimpse. The discovery of such representations is motivated by the recent works [Reference Bai, Owada and Wang1, Reference Bai and Taqqu3]. The new representations also shed light on new mechanisms (e.g. [Reference Bai and Taqqu3]) which may generate Hermite processes as weak limits.

The rest of the paper is organized as follows. Section 2 prepares some necessary background. Section 3 states the main results. The proofs of the results in Section 3 are collected in Section 4.

2. Preliminaries

2.1. Multiple Wiener–Itô integrals

The information recalled below about Gaussian analysis and multiple Wiener–Itô integrals can be found in [Reference Janson12]. Let $(E,\mathcal{E},\mu)$ be a $\sigma$ -finite measure space, and let W be a Gaussian (independently scattered) random measure on E with control measure $\mu$ so that $\mathbb{E} W(A)W(B)=\mu(A\cap B)$ for $A,B\in \mathcal{E}$ with $\mu(A),\mu(B)<\infty$ . Throughout this paper, the underlying probability space which carries the randomness of Gaussian random measures is denoted by $(\Omega,\mathcal{F},\mathbb{P})$ . Then, for $g\in L^2(E,\mathcal{E},\mu)$ , the Wiener integral $I(g)=\int_E g(x) W({\rm d} x)$ can be defined, which forms a linear isometry between $L^2(E,\mathcal{E},\mu)$ and the Gaussian Hilbert space $H=\{I(g)\,:\, g\in L^2(E,\mathcal{E},\mu)\}$ .

Let $H^{:p:}=\overline{\mathcal{P}}_p(H)\cap\left( \overline{\mathcal{P}}_{p-1}(H)^{\perp}\right)$ be the pth Wiener chaos of H, $p\ge 1$ , where $\overline{\mathcal{P}}_p(H)$ denotes the closure of the linear subspace of multivariate polynomials of elements of H with degrees p or lower, and $\perp$ in the superscript denotes the orthogonal complement in $L^2(\Omega,\mathcal{F},\mathbb{P})$ . Then, for $f\in L^2(E^p,\mathcal{E}^p,\mu^p)$ , the p-tuple Wiener–Itô integral

\begin{equation*}I_p(f)=\int^{\prime}_{E^p} f(x_1,\ldots,x_p) W({\rm d} x_1)\cdots W({\rm d} x_p)\end{equation*}

can be defined [Reference Janson12, Theorem 7.26]. In fact, $I_p\,:\,L^2(E^p,\mathcal{E}^p,\mu^p) \rightarrow H^{:p:}$ forms a bounded linear operator, and in particular a linear isometry from the subspace of symmetric functions of $L^2(E^p,\mathcal{E}^p,\mu^p/p!)$ to $H^{:p:}$ . In addition, $I_p$ is characterized by the following property: for $f_1,\ldots,f_p\in L^2(E,\mathcal{E},\mu)$ , we have

(2.1) \begin{equation}I_p(f_1\otimes\cdots\otimes f_p)=\int^{\prime}_{E^p} f_1(x_1)\cdots f_p(x_p) W({\rm d} x_1)\cdots W({\rm d} x_p)= {:}\, I(f_1)\cdots I(f_p)\,{:},\end{equation}

where for $\xi_1,\ldots,\xi_p\in H$ , the notation ${:}\,\xi_1 \cdots \xi_p\,{:}$ stands for the Wick product, which is the projection of the product $\xi_1\cdots \xi_p$ onto the $L^2$ subspace $H^{:p:}$ . We note that in the literature the construction of $I_p(f)$ often starts with (2.1) for $f_i=1_{A_i}$ for disjoint $A_i\in \mathcal{E}$ with $\mu(A_i)<\infty$ , so that ${:}\, I(f_1)\cdots I(f_p)\,{:}$ is simply the product $W(A_1)\cdots W(A_p)$ . Then, the definition is extended to a general $f\in L^2(E^p,\mathcal{E}^p,\mu^p)$ by linearity and continuity given that $\mu$ is atomless. If $\mu$ has atoms, an extra step in the construction is needed (see, e.g., [Reference Major18, p. 42]). For generality, we shall use the construction of multiple Wiener–Itô integrals in [Reference Janson12, Chapter VII.2] without assuming that $\mu$ is atomless.

The following lemma is useful for changing the underlying measure spaces in multiple Wiener–Itô integral representations of a process.

Lemma 2.1. Let $(E_j,\mathcal{E}_j,\mu_j)$ , $j=1,2$ , be $\sigma$ -finite measure spaces. Suppose $W_j$ is a Gaussian random measure defined on $E_j$ with control measure $\mu_j$ , $j=1,2$ . Let $(U,\mathcal{H})$ be a measurable space and suppose $f_j:E_j \rightarrow U $ , $j=1,2$ , are measurable and satisfy

(2.2) \begin{equation}\mu_1 \circ f_1^{-1} = \mu_2 \circ f_2^{-1}=\!:\,\nu \end{equation}

on $(U,\mathcal{H})$ . Let T be an index set and let $g_t\,:\, U^p\rightarrow[-\infty,+\infty]$ , $t\in T$ , be a family of measurable functions satisfying $g_t\circ f_j^{\otimes p} \in L^2(E^p_j,\mathcal{E}_j^p, \mu_j^p)$ , $j=1,2$ . Let

(2.3) \begin{equation}Z_j(t)=\int_{E_j^p}^{\prime} g_t\left(f_j(x_1),\ldots,f_j(x_p)\right) W_j({\rm d} x_1)\cdots W_j({\rm d} x_p), \qquad j=1,2.\end{equation}

Then, the two processes $(Z_j(t))_{t\in T}$ , $j=1,2$ , have the same finite-dimensional distributions.

Proof. First, by Cramér–Wold and the linearity of the multiple integrals, it suffices to prove equality of marginal distributions at a single $t\in T$ , and we thus simply set $g_t=g$ . Suppose first $g=h_1\otimes \cdots \otimes h_p$ , where $h_1,\ldots,h_p\in L^2(U,\mathcal{H},\nu)$ . Then, by (2.1), the right-hand side of (2.3) is equal to

\begin{equation*}{:}\int_{E_j} h_1 \circ f_j (x) W_j({\rm d} x) \cdots \int_{E_j} h_p \circ f_j (x) W_j({\rm d} x)\,{:}, \qquad j=1,2.\end{equation*}

So, by joint Gaussianity, the conclusion follows from comparing the covariances: for $1\le i_1, i_2\le p$ ,

\begin{align*}&\mathbb{E} \bigg[\int_{E_j} h_{i_1} \circ f_j (x) W_j({\rm d} x) \int_{E_j} h_{i_2} \circ f_j (x) W_j({\rm d} x)\bigg] \\[3pt] &\quad= \int h_{i_1}\circ f_j(x) \ h_{i_2}\circ f_j(x)\mu_j({\rm d} x)= \int_{U} h_{i_1}(u) h_{i_2}(u)\nu({\rm d} u),\qquad j=1,2,\end{align*}

where we have used (2.2) in the last equality.

Similarly, using the linearity of the multiple integrals, the conclusion holds if g is a finite linear combination of terms each of the form $h_1\otimes\cdots\otimes h_p$ . Finally, by the linear isometry of the multiple integral, it suffices to note that such linear combinations are dense in $L^2(U^p, \mathcal{H}^p,\nu^p)$ . □
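The covariance computation at the heart of the proof can be checked numerically in a toy example (a Python sketch; the particular spaces and functions are hypothetical choices for illustration, not from the paper): take $\mu_1$ to be Lebesgue measure on $E_1=[0,1]$ with $f_1(x)=x$, and $\mu_2=\tfrac12\cdot$Lebesgue on $E_2=[0,2]$ with $f_2(x)=x/2$, so that both push forward to $\nu=$ Lebesgue on $U=[0,1]$, as required by (2.2).

```python
import math

def midpoint(f, a: float, b: float, n: int = 20000) -> float:
    """Midpoint-rule approximation of the integral of f over [a, b]."""
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

# Hypothetical test functions h_1, h_2 in L^2(U, nu)
h1 = lambda u: u ** 2
h2 = lambda u: math.cos(u)

# E_1 side: mu_1 = Lebesgue on [0,1], f_1(x) = x
lhs = midpoint(lambda x: h1(x) * h2(x), 0.0, 1.0)
# E_2 side: mu_2 = (1/2) Lebesgue on [0,2], f_2(x) = x/2
rhs = 0.5 * midpoint(lambda x: h1(x / 2) * h2(x / 2), 0.0, 2.0)
# Common value: integral of h_1 h_2 against nu on U = [0,1]
nu_side = midpoint(lambda u: h1(u) * h2(u), 0.0, 1.0)
print(lhs, rhs, nu_side)
```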

We mention that a similar discussion to the above carries over to the case where W is replaced by a complex-valued Gaussian random measure (see [Reference Janson12, Chapter 7, Section 4]).

2.2. Regenerative sets and local time functional

Most of the information reviewed in this section about subordinators and regenerative sets can be found in [Reference Bertoin4].

Recall that a process $(\sigma(t))_{t\ge0}$ is said to be a subordinator if it is a nondecreasing Lévy process starting at the origin. The Laplace exponent of $(\sigma(t))_{t\ge0}$ is given by $\mathbb{E} {\rm e}^{-\lambda \sigma(t)}=\exp({-} t\Phi(\lambda))$ , $\lambda\ge0$ , which completely characterizes its law and satisfies the Lévy–Khintchine formula,

(2.4) \begin{equation}\Phi(\lambda)= d\lambda+\int_{(0,\infty)}(1-{\rm e}^{-\lambda x})\Pi({\rm d} x),\end{equation}

where the constant $d\ge 0$ is the drift and $\Pi$ is the Lévy measure on $(0,\infty)$ satisfying $\int_{(0,\infty)} (x\wedge 1) \Pi({\rm d} x)<\infty$ . The renewal measure U of $(\sigma(t))_{t\ge 0}$ on $[0,\infty)$ is characterized by $\int_{[0,\infty)}f(x)U({\rm d} x)=\mathbb{E} \int_0^\infty f(\sigma(t))\,{\rm d} t$ for any measurable function $f\ge 0$ , which is related to the Laplace exponent through

(2.5) \begin{equation}\int_{[0,\infty)} {\rm e}^{-\lambda x} U({\rm d} x)=\frac{1}{\Phi(\lambda)}, \qquad \lambda\ge 0.\end{equation}

The Radon–Nikodym derivative of U with respect to the Lebesgue measure, if it exists, is called the renewal density.

Let $\textbf{{F}}=\textbf{{F}}([0,\infty))$ denote the space of closed subsets of $[0,\infty)$ equipped with the Fell topology [Reference Molchanov19, Appendix C]. A random element R taking values in $\textbf{F}$ is said to be a regenerative set if R has the same distribution as the closed range $\overline{\{\sigma(t):\ t\ge 0\}}$ , where $(\sigma(t))_{t\ge0}$ is a subordinator. Note that for any constant $c>0$ , the time-scaled subordinator $(\sigma(ct))_{t\ge 0}$ and the original $(\sigma(t))_{t\ge 0}$ correspond to the same regenerative set. To make the correspondence between a regenerative set and a subordinator unique, we shall always assume the following normalization condition for the Laplace exponent of the subordinator:

(2.6) \begin{equation}\Phi(1)=1.\end{equation}

A regenerative set R is said to be $\beta$ -stable, $\beta\in (0,1)$ , if the associated subordinator $(\sigma (t))_{t\ge 0}$ is $\beta$ -stable, namely, if the Laplace exponent is $\Phi(\lambda)=\lambda^{\beta}$ , which corresponds to the Lévy measure

(2.7) \begin{equation}\Pi ({\rm d} x)=\frac{ \beta}{\Gamma(1-\beta)}x^{-1-\beta} {1}_{(0,\infty)}(x) \,{\rm d} x.\end{equation}
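One can verify numerically that the Lévy measure (2.7), with zero drift, indeed reproduces $\Phi(\lambda)=\lambda^{\beta}$ through the Lévy–Khintchine formula (2.4). The following Python sketch (ours, for illustration) integrates after the substitution $x={\rm e}^u$, under which $x^{-1-\beta}\,{\rm d} x = {\rm e}^{-\beta u}\,{\rm d} u$.

```python
import math

def stable_exponent_numeric(lmbda: float, beta: float,
                            lo: float = -60.0, hi: float = 60.0,
                            n: int = 20000) -> float:
    """Evaluate (2.4) with d = 0 and the Levy measure (2.7) by the
    trapezoid rule after substituting x = e^u."""
    c = beta / math.gamma(1 - beta)
    du = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        u = lo + k * du
        w = 0.5 if k in (0, n) else 1.0
        # (1 - e^{-lambda x}) x^{-1-beta} dx  ->  (1 - e^{-lambda e^u}) e^{-beta u} du
        total += w * (1 - math.exp(-lmbda * math.exp(u))) * math.exp(-beta * u)
    return c * total * du

# Should match lambda**beta; note Phi(1) = 1, the normalization (2.6)
print(stable_exponent_numeric(2.0, 0.5), 2.0 ** 0.5)
```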

A random closed set F in $\textbf{F}$ is said to be stationary if

(2.8) \begin{equation}\tau_x (F)\,:\!=F\cap [x,\infty)-x\overset{{\rm d}}{=} F\end{equation}

for any $x\ge 0$ . A $\beta$ -stable regenerative set R itself is not stationary. However, stationarity in a generalized sense can be obtained from a shifted version of R, as follows.

Proposition 2.1. Let $\mathbb{P}_R$ be the distribution of a $\beta$ -stable regenerative set on $\textbf{F}$ and let $\pi_V$ be a Borel measure on $[0,\infty)$ with $\pi_V({\rm d} v)=(1-\beta) v^{-\beta} {\rm d} v$ , $v\in (0,\infty)$ . There exists a $\sigma$ -finite infinite-measure space $(E,\mathcal{E},\mu)$ and a measurable mapping $ \overline{R}\,:\,E\rightarrow\textbf{{F}}$ such that

\begin{equation*}\mu ( \overline{R} \in \cdot )=(\mathbb{P}_R\times \pi_V)(F+v\in \cdot ),\end{equation*}

where the right-hand side is understood as the push-forward measure of $\mathbb{P}_R\times \pi_V$ under the mapping $(F,v)\mapsto F+v$ . Furthermore, $ \overline{R} $ is stationary in the sense of

\begin{equation*}\mu(\tau_x \overline{R} \in \cdot )= \mu( \overline{R} \in \cdot ),\end{equation*}

where $\tau_x$ is as in (2.8).

Remark 2.1. The proposition says that if V is an improper random variable governed by the infinite law $\pi_V$ , independent of the $\beta$ -stable regenerative set R, then the resulting shifted improper random set $R+V$ is stationary. Throughout this paper, we use “improper” to mean that the distribution may be an infinite measure. The proposition follows from [Reference Lacaux and Samorodnitsky15, Proposition 4.1(c)] (note that their $1-\beta$ corresponds to our $\beta$ ). It can also be obtained by restricting a stationary $\beta$ -stable regenerative set on $\mathbb{R}$ constructed in [Reference Fitzsimmons and Taksar10] to $[0,\infty)$ . Note that the improper distribution $\mu ( \overline{R} \in \cdot )$ stays unchanged if $\pi_V$ is replaced by a positive constant multiple of it. This follows from the self-similarity property $cR\overset{{\rm d}}{=} R$ , $c>0$ , of a $\beta$ -stable regenerative set R.

Definition 2.1. We call the improper random closed set $ \overline{R} $ in Proposition 2.1 a stationary $\beta$ -stable regenerative set.

Next, we recall the local time functionals introduced by [Reference Kingman14]. For $\beta\in (0,1)$ , define

\begin{equation*}L^{(\beta)}\,:\, \textbf{{F}}\rightarrow [0,\infty], \quad L^{(\beta)}(F) \,:\!=\limsup_{n\rightarrow\infty}\frac{\lambda\left(F+\left[-\frac{1}{2n},\frac{1}{2n}\right]\right)}{ l_{\beta}(n)},\end{equation*}

where $\lambda$ is the Lebesgue measure, $F+[- 1/{2n},1/{2n}]=\cup_{x\in F}[x-1/{2n},x+1/{2n}]$ , and the normalization sequence

\begin{equation*}l_{\beta}(n)=\int_{0}^{1/n} \Pi((x,\infty))\,{\rm d} x = \frac{ n^{\beta-1}}{\Gamma(2-\beta)},\end{equation*}

where $\Pi$ is as in (2.7). We then define

(2.9) \begin{equation}L_t^{(\beta)}(F)\,:\!=L^{(\beta)}(F\cap [0,t]), \qquad t\ge 0.\end{equation}

By [Reference Bai, Owada and Wang1, Lemma 2.1], $L_t^{(\beta)}$ is a measurable mapping from $\textbf{F}$ to $[0,\infty]$ for each $t\ge 0$ . Furthermore, [Reference Kingman14, Theorem 3] entails that if R is a $\beta$ -stable regenerative set, then the process $(L_t^{(\beta)}(R ))_{t\ge 0}$ has the same finite-dimensional distributions as the local time process associated with R, which is an inverse $\beta$ -stable subordinator.
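As a toy illustration of Kingman's functional (a Python sketch of our own; it does not simulate a regenerative set), one can evaluate the sausage ratio $\lambda(F+[-1/(2n),1/(2n)])/l_{\beta}(n)$ for a finite set F. Since the sausage measure of a finite set decays like $|F|/n$ while $l_{\beta}(n)= n^{\beta-1}/\Gamma(2-\beta)$ decays more slowly, the ratio tends to 0: the local time functional charges only suitably "thick" sets.

```python
import math

def sausage_ratio(points, n: int, beta: float) -> float:
    """lambda(F + [-1/(2n), 1/(2n)]) / l_beta(n) for a finite set F of reals."""
    half = 1.0 / (2 * n)
    # merge the closed intervals [x - half, x + half]
    ivs = sorted((x - half, x + half) for x in points)
    total, cur_lo, cur_hi = 0.0, *ivs[0]
    for lo, hi in ivs[1:]:
        if lo > cur_hi:
            total += cur_hi - cur_lo
            cur_lo, cur_hi = lo, hi
        else:
            cur_hi = max(cur_hi, hi)
    total += cur_hi - cur_lo
    l_beta = n ** (beta - 1) / math.gamma(2 - beta)   # normalization from the text
    return total / l_beta

beta = 0.6
F = [k / 100 for k in range(101)]   # a finite set: local time should vanish
print(sausage_ratio(F, 10 ** 3, beta), sausage_ratio(F, 10 ** 6, beta))
```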

3. Main results

We are now ready to state our main results. Set

(3.1) \begin{equation}\beta_p\,:\!=(\beta-1)p+1\in (0,1).\end{equation}

The range above should be compared to (1.1). Recall the standard Hermite process with memory parameter $\beta$ and order p as defined in (1.2) or (1.3).

Theorem 3.1. Let $L_t =L_t^{(\beta_p)}$ be Kingman’s local time functional as in (2.9) associated with a $\beta_p$ -stable regenerative set.

  1. (a) Let $ \overline{R}\,:\,E \rightarrow\textbf{{F}}$ be a stationary $\beta$ -stable regenerative set defined on a $\sigma$ -finite measure space $(E,\mathcal{E},\mu)$ as in Definition 2.1. Suppose W is a Gaussian random measure on E with control measure $\mu$ . Then the process

    (3.2) \begin{equation}Z(t)=c_{p,\beta}\int^{\prime}_{E^p} L_t \Bigg(\bigcap_{i=1}^p \overline{R} (x_i)\Bigg) W({\rm d} x_1)\cdots W({\rm d} x_p)\end{equation}
    has the same finite-dimensional distributions as the standard Hermite process with memory parameter $\beta$ and order p, where
    (3.3) \begin{equation}c_{p,\beta}=\left(\frac{\Gamma(\beta_p) \Gamma(\beta_p+2) }{ 2p! \Gamma(\beta)^{p } \Gamma(2-\beta)^{p }}\right)^{1/2}.\end{equation}
  2. (b) Let $(\Omega^{\prime} , \mathcal{F}^{\prime} , \mathbb{P}^{\prime} )$ be a probability space (note that $(\Omega^{\prime} , \mathcal{F}^{\prime} , \mathbb{P}^{\prime} )$ is different from the probability space $(\Omega,\mathcal{F},\mathbb{P})$ carrying the randomness of the Gaussian random measure). Let $R\,:\,\Omega^{\prime}\rightarrow\textbf{{F}}$ be a $\beta$ -stable regenerative set and let $V\,:\,\Omega^{\prime}\rightarrow [0,T]$ , $T>0$ , be a random variable independent of R satisfying $\mathbb{P}^{\prime}(V \le v)= T^{\beta-1} v^{1-\beta}$ , $v\in [0,T]$ . Suppose $W_T$ is a Gaussian random measure on $\Omega^{\prime}$ with control measure $\mathbb{P}^{\prime}$ . Then

    (3.4) \begin{align}Z(t)=c_{p,\beta,T}\int_{(\Omega^{\prime})^p}^{\prime} L_t \Bigg(\bigcap_{i=1}^p (R (\omega^{\prime}_i) +V (\omega^{\prime}_i))\Bigg) W_T({\rm d}\omega^{\prime}_1)\cdots W_T({\rm d}\omega^{\prime}_p)\end{align}
    has the same finite-dimensional distributions as a standard Hermite process with memory parameter $\beta$ and order p over the interval [0, T], where $c_{p,\beta,T}=T^{p(1-\beta)/2}c_{p,\beta}$ .

The theorem is proved in Section 4.
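For concreteness, the constants appearing in Theorem 3.1 are straightforward to evaluate numerically (an illustrative Python sketch; the function names are ours):

```python
import math

def c_const(p: int, beta: float) -> float:
    """c_{p,beta} from (3.3), with beta_p = (beta - 1) p + 1 as in (3.1)."""
    beta_p = (beta - 1) * p + 1
    num = math.gamma(beta_p) * math.gamma(beta_p + 2)
    den = 2 * math.factorial(p) * math.gamma(beta) ** p * math.gamma(2 - beta) ** p
    return math.sqrt(num / den)

p, beta, T = 2, 0.8, 2.0       # beta in (1 - 1/p, 1), so beta_p in (0, 1)
c = c_const(p, beta)
c_T = T ** (p * (1 - beta) / 2) * c    # c_{p,beta,T} from part (b)
print(c, c_T)
```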

Remark 3.1. In view of Lemma 2.1, the finite-dimensional distributions of the process $(Z(t))_{t\ge 0}$ do not depend on the choice of the measure spaces $(E,\mathcal{E},\mu)$ and $(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})$ . Notation-wise, simple representations can be obtained when the measure spaces are chosen canonically. For example, in (a), if one sets $(E,\mathcal{E},\mu)=(\textbf{{F}},\mathcal{B},\mathcal{L}_\beta)$ , where $\mathcal{B}$ is the Borel $\sigma$ -field induced by the Fell topology on $\textbf{F}$ and $\mathcal{L}_\beta$ denotes the infinite law of a stationary $\beta$ -stable regenerative set on $\textbf{F}$ , then (3.2) reduces to

\begin{equation*}c_{p,\beta}\int_{\textbf{{F}}^p}^{\prime} L_t \Bigg(\bigcap_{i=1}^p \overline{R}_i\Bigg) W({\rm d} \overline{R}_1)\cdots W({\rm d} \overline{R}_p),\end{equation*}

where $\overline{R}_i$ , $i=1,\ldots,p$ , are understood as the identity maps on $\textbf{F}$ , and W is a Gaussian random measure on $\textbf{F}$ with control measure $\mathcal{L}_\beta$ . For a similar simplification of the representation in (b), see (4.25).

Remark 3.2. Note that even in the case $p=1$ , Theorem 3.1 provides new representations of fractional Brownian motion with Hurst index $H\in (1/2,1)$ . In this case, by Gaussianity, the theorem can be established by computing the covariance directly using [Reference Bai, Owada and Wang1, Theorem 2.2] and the $L^2$ linear isometry of single stochastic integrals. The proof of Theorem 3.1 given below, on the other hand, applies to any $p\in \mathbb{Z}_+$ .

In addition, it is possible to use [Reference Bai, Owada and Wang1, Theorem 2.2] and the diagram formula for multiple integrals ([Reference Pipiras and Taqqu23, Proposition 4.3.4]) to compute the joint moments $\mathbb{E} [Z(t_1)\cdots Z(t_r)]$ , $t_1,\ldots, t_r\ge 0$ , and identify them with those of standard Hermite processes (see [Reference Pipiras and Taqqu23, Proposition 4.4.1]). This suffices to establish the claims of Theorem 3.1 when $p\le 2$ , but fails to conclude for $p\ge 3$ since moment determinacy ceases to hold in this case [Reference Slud27].

Remark 3.3. The representations found in this theorem are motivated from [Reference Bai, Owada and Wang1], where a process is defined similarly to (3.2) or (3.4) but with the Gaussian random measures replaced by $\alpha$ -stable ones, $\alpha\in (0,2)$ . The representations in (3.2) and (3.4) suggest the possibility of new types of limit theorems leading to Hermite processes; see, for example, [Reference Bai and Taqqu3].

4. Proofs

Throughout, c and $c_i$ denote constants whose values may change from line to line. We shall make use of the following relations about beta and gamma functions: for $a,b>0$ and $x<y$ ,

(4.1) \begin{equation}\int_{x}^y (u-x)^{a-1}(y-u)^{b-1} \, {\rm d} u = \mathrm{B}(a,b)(y-x)^{a+b-1},\qquad \mathrm{B}(a,b)=\frac{\Gamma(a)\Gamma(b)}{\Gamma(a+b)}.\end{equation}

4.1. Constructions of regenerative sets and local times by random covering

The proof of Theorem 3.1 uses a construction of a regenerative set as the set left uncovered by random intervals due to [Reference Fitzsimmons, Fristedt and Shepp9], as well as a related construction of local time which originates from [Reference Bertoin and Pitman5]. Similar constructions are used in [Reference Bai, Owada and Wang1].

Suppose a Poisson point process $\mathcal{N}=\sum_{\ell\ge 1} \delta_{\{y_\ell,z_\ell\}}$ on $[0,\infty)^2$ is defined on the probability space $(H,\mathcal{H},\mathbb{P}_\mathcal{N})$ with intensity measure $(1-\beta) \,{\rm d} y z^{-2}\,{\rm d} z$ , $\beta\in (0,1)$ , where $(\{y_\ell,z_\ell\})_{\ell\ge 1}$ is a measurable enumeration of the points of $\mathcal{N}$ . For $\varepsilon\ge 0$ , define

(4.2) \begin{equation}R_\varepsilon = \bigcap_{\ell:\ z_\ell\ge \varepsilon}( y_\ell,y_\ell+z_\ell)^{\rm c},\end{equation}

where the complement is with respect to $[0,\infty)$ . In other words, $R_\varepsilon $ is the subset of $[0,\infty)$ left uncovered by the collection of (possibly overlapping) open intervals $\{ (y_\ell,y_\ell+z_\ell)\,:\, z_\ell\ge \varepsilon\}$ , where the left boundary $y_\ell$ and the width $z_\ell$ are governed by $\mathcal{N}$ . In view of [Reference Fitzsimmons, Fristedt and Shepp9, Example 1], each $R_\varepsilon $ , $\varepsilon\ge 0$ , is a regenerative set on $[0,\infty)$ , and in particular, $R_0$ is a $\beta$ -stable regenerative set. It is also known that the subordinator associated with $R_\varepsilon$ has a positive drift $d_\varepsilon$ when $\varepsilon>0$ . In contrast to $R_0$ , the Lévy measure $\Pi_\varepsilon$ and the drift $d_\varepsilon$ of $R_\varepsilon$ , $\varepsilon>0$ , do not have simple expressions, although the exact expressions are not needed for our purpose. While other constructions of regenerative sets are possible, the random covering scheme seems efficient in providing tractable approximations of the intersecting regenerative sets and the local times encountered in the proof of Theorem 3.1.

By the calculation in the proof of [Reference Bai, Owada and Wang1, Lemma 2.5], we have, for $\varepsilon>0$ ,

(4.3) \begin{equation}p_\varepsilon(x)\,:\!=\mathbb{P}_\mathcal{N}(x\in R_{\varepsilon}) = {\rm e}^{(\beta-1)x/\varepsilon}{1}_{\{0\le x\le \varepsilon\}}+ \left(\frac{\varepsilon}{{\rm e}}\right)^{1-\beta} x^{\beta-1}{1}_{\{x>\varepsilon\}},\qquad x\ge 0.\end{equation}
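Formula (4.3) can be double-checked from the Poisson covering mechanism: the point x is uncovered with probability $\exp\{-(1-\beta)\int_\varepsilon^\infty (x\wedge z)\, z^{-2}\,{\rm d} z\}$, the exponential of minus the intensity mass of intervals $(y,y+z)$, $z\ge\varepsilon$, containing x. The Python sketch below (ours, for illustration) computes this mass numerically and compares with the closed form.

```python
import math

def uncovered_prob(x: float, eps: float, beta: float,
                   n: int = 100000, z_max: float = 1e6) -> float:
    """P(x in R_eps) = exp(-m), with m = (1-beta) int_eps^inf min(x,z) z^{-2} dz
    computed by the trapezoid rule after substituting z = e^u."""
    lo, hi = math.log(eps), math.log(z_max)
    du = (hi - lo) / n
    m = 0.0
    for k in range(n + 1):
        w = 0.5 if k in (0, n) else 1.0
        z = math.exp(lo + k * du)
        m += w * min(x, z) / z * du        # min(x,z) z^{-2} dz -> min(x,z)/z du
    return math.exp(-(1 - beta) * m)

def p_eps(x: float, eps: float, beta: float) -> float:
    """Closed form (4.3)."""
    if x <= eps:
        return math.exp((beta - 1) * x / eps)
    return (eps / math.e) ** (1 - beta) * x ** (beta - 1)

print(uncovered_prob(2.0, 0.5, 0.7), p_eps(2.0, 0.5, 0.7))
```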

Let $u_\varepsilon(x)$ be the renewal density of the subordinator associated with $R_\varepsilon$ . By [Reference Bertoin4, Proposition 1.7], if $\varepsilon>0$ ,

(4.4) \begin{equation}u_\varepsilon(x)= d_{\varepsilon}^{-1}p_\varepsilon(x)=\!:\,d_{\varepsilon}^{-1}\left(\frac{\varepsilon}{{\rm e}}\right)^{1-\beta}f_\varepsilon(x), \qquad x>0.\end{equation}

It is elementary to verify that as $\varepsilon\downarrow 0$ ,

(4.5) \begin{equation}f_\varepsilon(x)= ({\rm e}^{x/\varepsilon-1}\varepsilon)^{\beta-1}{1}_{\{0\le x\le \varepsilon\}}+ x^{\beta-1}{1}_{\{x>\varepsilon\}}\uparrow x^{\beta-1},\qquad x>0.\end{equation}

Note that $\int_{0}^\infty {\rm e}^{-x} u_\varepsilon(x)\,{\rm d} x = 1$ due to the normalization requirement (2.6) via (2.5). Combining this fact with (4.5) and the monotone convergence theorem, we have

(4.6) \begin{equation}d_\varepsilon\sim \Gamma(\beta)\left(\frac{\varepsilon}{{\rm e}}\right)^{1-\beta} \quad \text{as }\varepsilon\rightarrow 0.\end{equation}
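The asymptotic (4.6) amounts, via (4.4) and the unit-Laplace normalization above, to $d_\varepsilon (\varepsilon/{\rm e})^{\beta-1}=\int_0^\infty {\rm e}^{-x} f_\varepsilon(x)\,{\rm d} x\rightarrow \Gamma(\beta)$, which is easy to check numerically (an illustrative Python sketch; the helper names are ours).

```python
import math

def f_eps(x: float, eps: float, beta: float) -> float:
    """f_eps(x) from (4.5)."""
    if x <= eps:
        return (math.exp(x / eps - 1) * eps) ** (beta - 1)
    return x ** (beta - 1)

def drift_ratio(eps: float, beta: float, n: int = 40000) -> float:
    """d_eps / (eps/e)^{1-beta} = int_0^inf e^{-x} f_eps(x) dx, computed by
    the trapezoid rule on a log grid (x = e^u)."""
    lo, hi = math.log(1e-12), math.log(50.0)
    du = (hi - lo) / n
    total = 0.0
    for k in range(n + 1):
        w = 0.5 if k in (0, n) else 1.0
        x = math.exp(lo + k * du)
        total += w * math.exp(-x) * f_eps(x, eps, beta) * x * du
    return total

beta = 0.5
print(drift_ratio(1e-4, beta), math.gamma(beta))   # ratio approaches Gamma(beta)
```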

Note that $d_0=0$ since a $\beta$ -stable subordinator has no drift. Define, for all $\varepsilon\ge 0$ , the measures

(4.7) \begin{equation}\pi_\varepsilon({\rm d} x)=d_\varepsilon \delta_0({\rm d} x) + \overline{\Pi}_\varepsilon (x)\,{\rm d} x,\quad x\ge 0,\qquad \widetilde{\pi}_\varepsilon(\cdot )=\frac{\pi_\varepsilon(\cdot\cap [0,1])}{\pi_\varepsilon([0,1])},\end{equation}

where $\overline{\Pi}_{\varepsilon}(x)=\Pi_\varepsilon((x,\infty))$ is the tail of the Lévy measure $\Pi_\varepsilon$ corresponding to $R_\varepsilon $ , and $\delta_0$ is the delta measure with a unit mass at 0. Note that $\Pi_0$ is equal to $\Pi$ in (2.7) and $\widetilde{\pi}_0=\pi_V$ in Proposition 2.1 when restricted to [0, 1]. Note that, for each $\varepsilon\ge 0$ ,

\begin{equation*}\int_0^y u_\varepsilon(x)\,{\rm d} x\asymp y^{\beta}, \qquad y>0,\end{equation*}

where $h_1(y)\asymp h_2(y)$ means $ c_1 h_2(y)\le h_1(y)\le c_2 h_2(y)$ for some constants $0<c_1<c_2$ . So, in view of [Reference Bertoin4, Proposition 1.4], we have

\begin{equation*}\int_0^y \overline{\Pi}_{\varepsilon}(x)\,{\rm d} x \asymp y^{1-\beta}, \qquad y>0.\end{equation*}

Hence, each $\pi_\varepsilon$ is a $\sigma$ -finite infinite measure on $[0,\infty)$ .

Enriching the space $(H,\mathcal{H})$ if necessary, suppose $\nu$ is a $\sigma$ -finite measure on $\mathcal{H}$ , and suppose $U_\varepsilon\,:\,H\rightarrow[0,\infty)$ is an improper random variable satisfying

(4.8) \begin{equation}\nu((R_\varepsilon,U_\varepsilon)\in\cdot)=(\mathbb{P}_{R_\varepsilon}\times \pi_\varepsilon)(\cdot),\end{equation}

where $\mathbb{P}_{R_\varepsilon}$ is the distribution of $R_\varepsilon$ on $\textbf{F}$ and $\pi_\varepsilon$ is as in (4.7). Then, in view of [Reference Fitzsimmons and Taksar10], the improper random set $\overline{R}_\varepsilon\,:\!=R_\varepsilon+U_\varepsilon$ is stationary in the sense of

(4.9) \begin{equation}\nu(\tau_x \overline{R}_\varepsilon \in \cdot )= \nu( \overline{R}_\varepsilon \in \cdot ),\end{equation}

where $\tau_x$ is the shift as in (2.8).

The next result enables a key coupling argument in the proof of Theorem 3.1.

Lemma 4.1. We have the convergence in total variation distance:

\begin{equation*}\|\widetilde{\pi}_\varepsilon- \widetilde{\pi}_0\|_{\rm TV}\,:\!=\sup\{|\widetilde{\pi}_\varepsilon(A)-\widetilde{\pi}_0(A)|\,:\, A\in \mathcal{B}([0,1])\}\rightarrow 0 \qquad \text{as } \varepsilon\rightarrow 0.\end{equation*}

Proof. We first show that, as $\varepsilon\rightarrow 0$ ,

(4.10) \begin{equation} \pi_\varepsilon([0,1])\rightarrow \pi_0([0,1]).\end{equation}

By [Reference Fitzsimmons, Fristedt and Maisonneuve8, Equation (1.11) and Proposition 3.9], as $\varepsilon\rightarrow 0$ , the Laplace exponent uniquely associated with $R_{\varepsilon}$ satisfying (2.6) converges pointwise to the Laplace exponent uniquely associated with a $\beta$ -stable regenerative set. This, by [Reference Kallenberg13, Theorem 15.15(ii)], further implies that as $\varepsilon\rightarrow 0$ , we have the following weak convergence of measures:

(4.11) \begin{align}\widetilde{\nu}_\varepsilon({\rm d} x)&\,:\!= d_\varepsilon \delta_0 +(1-{\rm e}^{-x})\Pi_{\varepsilon}({\rm d} x) \notag\\[3pt] &\overset{{\rm d}}{\rightarrow} (1-{\rm e}^{-x})\Pi_0({\rm d} x)=(1-{\rm e}^{-x}) \frac{\beta}{\Gamma(1-\beta)} x^{-\beta-1} \, {\rm d} x =\!:\,\widetilde{\nu}_0({\rm d} x),\end{align}

where $\widetilde{\nu}_\varepsilon$ , $\varepsilon\ge0$ , are probability measures on $[0,\infty)$ due to (2.4) and (2.6). Next, by Fubini,

(4.12) \begin{align}\pi_\varepsilon([0,1])=d_\varepsilon+\int_0^1 \overline{\Pi}_\varepsilon(x)\,{\rm d} x = d_\varepsilon+\int_{(0,\infty)}(1\wedge x)\Pi_\varepsilon({\rm d} x) = \int_{[0,\infty)} h(x) \widetilde{\nu}_\varepsilon({\rm d} x),\end{align}

where $h(x)\,:\!=\frac{1\wedge x}{1-{\rm e}^{-x}}$ for $x>0$ and $h(0)\,:\!=1$ is a bounded continuous function on $[0,\infty)$ . Hence, by the weak convergence in (4.11), as $\varepsilon \rightarrow 0$ we have

(4.13) \begin{equation}\int_{[0,\infty)} h(x) \widetilde{\nu}_\varepsilon({\rm d} x)\rightarrow \int_{[0,\infty)} h(x) \widetilde{\nu}_0({\rm d} x) =\int_0^1 \overline{\Pi}_0(x)\,{\rm d} x=\pi_0([0,1])=\frac{1}{\Gamma(2-\beta)}.\end{equation}

Combining (4.12) and (4.13), we obtain (4.10).

Now, to conclude the proof, it suffices to show that

(4.14) \begin{equation}\sup\{| \pi_\varepsilon(A)-\pi_0(A)|\,:\, A\in \mathcal{B}([0,1])\}\rightarrow 0 \qquad \text{as } \varepsilon\rightarrow 0.\end{equation}

Indeed, for $A\in \mathcal{B}([0,1])$ ,

(4.15) \begin{align} |\pi_\varepsilon(A)-\pi_0(A)| & = \bigg| d_\varepsilon \delta_0(A) + \int_{A\cap (0,1]} \big(\overline{\Pi}_\varepsilon(x) - \overline{\Pi}_0(x)\big) \,{\rm d} x\bigg| \nonumber \\ & \le d_\varepsilon+ \int_{[0,1]} \big|\overline{\Pi}_\varepsilon(x) - \overline{\Pi}_0(x)\big|\,{\rm d} x.\end{align}

Note that $d_\varepsilon\rightarrow 0$ by (4.6). On the other hand, [Reference Fitzsimmons, Fristedt and Maisonneuve8, Proposition 3.9] and [Reference Kallenberg13, Lemma 15.14(ii)] imply that, as $\varepsilon\rightarrow 0$ , $\overline{\Pi}_\varepsilon(x) \rightarrow \overline{\Pi}_0(x)$ , $x>0$ . In addition, by (4.12), (4.13), and the fact $d_\varepsilon\rightarrow 0$ , we have, as $\varepsilon\rightarrow 0$ ,

\begin{equation*}\int_0^1 \overline{\Pi}_\varepsilon(x)\,{\rm d} x \rightarrow \int_0^1 \overline{\Pi}_0(x)\,{\rm d} x.\end{equation*}

Therefore, the second term in the bound in (4.15) tends to 0 as $\varepsilon\rightarrow 0$ by Scheffé’s lemma (e.g. [Reference Williams33, Item 5.10]). So, (4.14) follows. □
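As a numerical aside (not part of the proof), the value $\pi_0([0,1])=1/\Gamma(2-\beta)$ obtained in (4.13) reduces to the Gamma recursion $\Gamma(2-\beta)=(1-\beta)\Gamma(1-\beta)$, since $\int_0^1 x^{-\beta}\,{\rm d} x = 1/(1-\beta)$; a quick Python sketch:

```python
from math import gamma

def pi0_unit_mass(beta):
    # ∫_0^1 Π̄_0(x) dx with Π̄_0(x) = x^{-β}/Γ(1-β), integrated in closed form:
    # (1/Γ(1-β)) ∫_0^1 x^{-β} dx = 1/((1-β)Γ(1-β)).
    return 1.0 / ((1.0 - beta) * gamma(1.0 - beta))

for beta in (0.1, 0.3, 0.5, 0.7, 0.9):
    # Matches 1/Γ(2-β) by the recursion Γ(2-β) = (1-β)Γ(1-β).
    assert abs(pi0_unit_mass(beta) - 1.0 / gamma(2.0 - beta)) < 1e-12
```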

Next, we turn to the construction of the local time based on the covering scheme introduced above. Suppose $R_\varepsilon$ , $\varepsilon\ge 0$ , are as in (4.2) with the point process $\mathcal{N}$ defined on a probability space $(H,\mathcal{H},\mathbb{P}_\mathcal{N})$ . In view of Lemma 4.1 and a well-known coupling characterization of total variation distance (e.g. [Reference Sethuraman26, Theorem 2.1]), there exist random variables $V_\varepsilon\ge 0$ , $\varepsilon\ge 0$ , defined on a probability space $(\Theta,\mathcal{G},\mathbb{P}_V)$ , such that $\mathbb{P}_V(V_\varepsilon\in\cdot)=\widetilde{\pi}_\varepsilon(\cdot)$ and, as $\varepsilon\rightarrow 0$ ,

(4.16) \begin{equation}\mathbb{P}_V(V_\varepsilon = V_0)\rightarrow 1.\end{equation}
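The coupling characterization invoked here states that two probability measures at total-variation distance $d$ admit a coupling under which the two coordinates agree with probability $1-d$; this is what turns Lemma 4.1 into (4.16). A toy discrete illustration of the underlying identity (the two laws below are made up for illustration):

```python
# For discrete laws p, q on a common support, the maximal coupling places mass
# min(p_k, q_k) on the diagonal, so P(X = Y) = Σ_k min(p_k, q_k) = 1 - d_TV(p, q).
p = {0: 0.50, 1: 0.30, 2: 0.20}
q = {0: 0.35, 1: 0.40, 2: 0.25}

match_prob = sum(min(p[k], q[k]) for k in p)   # P(X = Y) under the maximal coupling
tv = 0.5 * sum(abs(p[k] - q[k]) for k in p)    # total-variation distance

assert abs(match_prob - (1.0 - tv)) < 1e-12
```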

Lemma 4.2. For $0\le t\le 1$ , set

(4.17) \begin{equation}L_t^{(\varepsilon)}(\textbf{{h}},\boldsymbol{\theta})=\frac{1}{\Gamma(\beta_p)} \left(\frac{\varepsilon}{{\rm e}}\right)^{\beta_p-1} \int_0^t {1}_{\{x\in \bigcap_{i=1}^p (R_\varepsilon(h_i)+V_{\varepsilon}(\theta_i)) \}} \,{\rm d} x,\end{equation}

where $\textbf{{h}}\,:\!=(h_1,\ldots,h_p)\in H^p$ , $\boldsymbol{\theta}\,:\!=(\theta_1,\ldots,\theta_p)\in \Theta^p$ , and $\beta_p$ is as in (3.1).

  (a) As $\varepsilon =1/n \rightarrow 0$ , $n\in \mathbb{Z}_+$ , $L_{t}^{(\varepsilon)}$ converges in $L^r=L^r\big(H^p\times \Theta^p,\mathcal{H}^{p}\times \mathcal{G}^p , \mathbb{P}_\mathcal{N}^p\times \mathbb{P}_V^p\big)$ to a limit $L_t^*$ for every $r\in \mathbb{Z}_+$ .

  (b) Let $\mathbb{E}^{\prime}$ denote integration with respect to $ \mathbb{P}_\mathcal{N}^p\times \mathbb{P}_V^p$ . Then, for $r\in \mathbb{Z}_+$ ,

    (4.18) \begin{equation}\mathbb{E}^{\prime} [(L_{t}^*)^r] = \frac{r! \Gamma(2-\beta)^p\Gamma(\beta)^p}{\Gamma((r-1)\beta_p+2)\Gamma(\beta_p)} \, t^{(r-1)\beta_p+1}.\end{equation}
  (c) We have

    (4.19) \begin{equation}L_t^*(\textbf{{h}},\boldsymbol{\theta})= L_t\Bigg(\bigcap_{i=1}^p R_0(h_i ) +V_0(\theta_{i})\Bigg) \quad \mathbb{P}_\mathcal{N}^p\times \mathbb{P}_V^p\text{-a.e. (almost everywhere)},\end{equation}
    where $L_t=L_t^{(\beta_p)}$ is the local time functional as in (2.9).

Remark 4.1. Results similar to Lemma 4.2 were established in [Reference Bai, Owada and Wang1] but with $V_\varepsilon(\theta_i)$ and $V_0(\theta_i)$ replaced by fixed constants $v_i$ , $i=1,\ldots,p$ .

We need the following preparation for the proof of the lemma. Note that throughout, an integral $\int_a^b \cdot \ {\rm d} x$ is understood as zero if $a\ge b$ .

Lemma 4.3. For $\delta,\eta\ge 0 $ , $\textbf{{v}}=(v_1,\ldots,v_p)\in [0,1]^p$ and $\textbf{{h}}=(h_1,\ldots,h_p)\in H^p$ , define

\begin{equation*}\Delta_{\delta,\eta}^{(\varepsilon)}(\textbf{{h}};\,\textbf{{v}})=\frac{1}{\Gamma(\beta_p)} \left(\frac{\varepsilon}{{\rm e}}\right)^{\beta_p-1} \int_\delta^\eta {1}_{\{x\in \bigcap_{i=1}^p (R_\varepsilon(h_i)+v_i) \}} \, {\rm d} x .\end{equation*}

Let $\mathbb{E}_\mathcal{N}$ denote integration with respect to $\mathbb{P}_\mathcal{N}^p$ on $H^p$ , and suppose $r\in \mathbb{Z}_+$ . Then

(4.20) \begin{align} &\mathbb{E}_{\mathcal{N}} \big[ \Delta_{\delta,\eta}^{(\varepsilon)}( \cdot \, ;\,\textbf{{v}})^r\big] \nonumber \\ & \quad= \frac{r!}{\Gamma(\beta_p)^r}\int_{\delta<x_1<\cdots<x_r<\eta} \Bigg(\prod_{i=1}^p f_\varepsilon(x_1-v_i) \Bigg)\, f_\varepsilon(x_2-x_1)^{p} \cdots f_{\varepsilon}(x_r-x_{r-1})^{p} \, {\rm d}\textbf{{x}} \end{align}
(4.21) \begin{equation} \le c (\eta-\delta)_+^{r\beta_p}, \end{equation}

where $f_\varepsilon(x)$ is as in (4.5) if $x\ge 0$ , $f_\varepsilon(x)\,:\!=0$ for $x<0$ , and $c>0$ is a constant which does not depend on $\varepsilon$ , $\delta$ , $\eta$ , or $\textbf{{v}}$ .

Proof. Suppose $\delta< \eta$ ; otherwise the conclusion is trivial. Let $p_\varepsilon(x)$ be as in (4.3) when $x\ge 0$ , and set $p_\varepsilon(x)=0$ if $x<0$ . For $0\le x_1<\cdots<x_r$ and $\varepsilon>0$ , we have

\begin{align*}&\mathbb{P}_\mathcal{N}^p\bigg(\bigg\{\textbf{{h}}\in H^p \,:\, \{x_1,\ldots,x_r\}\subset \bigcap_{i=1}^p (R_\varepsilon(h_i)+v_i)\bigg\}\bigg)\\&\quad= \prod_{i=1}^p \mathbb{P}_\mathcal{N}(\{h\in H\,:\, \{x_1,\ldots,x_r\}\subset (R_\varepsilon(h)+v_i)\})\\&\quad=\prod_{i=1}^p \big( p_\varepsilon(x_1-v_i)p_\varepsilon(x_2-x_1) \ldots p_{\varepsilon}(x_r-x_{r-1}) \big),\end{align*}

where for the last equality we have used the regenerative property of $R_\varepsilon$ [Reference Fitzsimmons, Fristedt and Shepp9, Equation (6)]. See also the proof of [Reference Bai, Owada and Wang1, Lemma 2.5]. Then, (4.20) follows from Fubini, the relation $f_\varepsilon(x)= (\varepsilon/{\rm e})^{\beta-1}p_\varepsilon(x)$ , see (4.4), and a symmetry in the multiple integral.

Next, from (4.5) we have, for any $\varepsilon>0$ ,

(4.22) \begin{equation}f_\varepsilon(x)\le x^{\beta-1}, \qquad x>0 . \end{equation}

Hence, for some constants $c_1,c_2>0$ free of $\varepsilon$ , $\delta$ , $\eta$ , or $\textbf{{v}}$ (recall that $\beta_p=p(\beta-1)+1\in (0,1)$ ),

\begin{align*}\mathbb{E}_{\mathcal{N}} & \Delta_{\delta,\eta}^{(\varepsilon)}( \cdot\, ;\,\textbf{{v}})^r \\ & \le c_1\int_{\delta<x_1<\cdots<x_r<\eta} \Bigg(\prod_{i=1}^p (x_1-v_i)_+^{\beta-1} \Bigg) (x_2-x_1)^{\beta_p-1} \cdots (x_r-x_{r-1})^{\beta_p-1} \, {\rm d}\textbf{{x}}\\& = c_2 \int_{\delta}^{\eta} \Bigg(\prod_{i=1}^p (x_1-v_i)_+^{\beta-1} \Bigg) (\eta-x_1)^{(r-1)\beta_p} \, {\rm d} x_1 =\!:\, c_2 g_{\delta,\eta}(v_1,\ldots,v_p),\end{align*}

where for the first equality we have integrated out the variables in the order $x_r$ , $x_{r-1}$ , …, $x_2$ and repeatedly applied (4.1). Note that the function $g_{\delta,\eta}\,:\,[0,1]^p\rightarrow \mathbb{R}$ is symmetric, so we suppose without loss of generality that $0\le v_1\le \cdots \le v_p \le 1$ . Then, by monotonicity and (4.1),

\begin{align*}g_{\delta,\eta}(v_1,\ldots,v_p) & = \int_{\delta}^{\eta} \Bigg(\prod_{i=1}^p (x-v_i)^{\beta-1}_+ \Bigg) (\eta-x)^{(r-1)\beta_p} \, {\rm d} x \\ & \le \int_{\delta}^{\eta} (x-v_p)_+^{\beta_p-1} (\eta-x)^{(r-1)\beta_p} \, {\rm d} x \notag \\ & \le {1}_{\{v_p\ge \delta\}} \mathrm{B}(\beta_p,(r-1)\beta_p+1) (\eta-v_p)_+^{r\beta_p} \\ & \quad + {1}_{\{v_p < \delta\}}\int_{\delta}^{\eta} (x-\delta )^{\beta_p-1} (\eta-x)^{(r-1)\beta_p} \, {\rm d} x \notag \\ & \le c (\eta-\delta)^{r\beta_p}. \end{align*}
□
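The way (4.1) is used above indicates that it is the standard Beta-integral identity $\int_\delta^\eta (x-\delta)^{u-1}(\eta-x)^{v-1}\,{\rm d} x=\mathrm{B}(u,v)(\eta-\delta)^{u+v-1}$ (the general interval reduces to $[0,1]$ by an affine substitution). Assuming this, a numerical sketch with exponents of the kind appearing in the proof ($u\in(0,1)$ like $\beta_p$ , and $v\ge 1$ like $(r-1)\beta_p+1$ , so the substitution $x=s^{1/u}$ leaves a bounded integrand):

```python
from math import gamma

def beta_fn(u, v):
    # B(u, v) = Γ(u)Γ(v)/Γ(u+v).
    return gamma(u) * gamma(v) / gamma(u + v)

def beta_integral_numeric(u, v, n=200000):
    # ∫_0^1 x^{u-1}(1-x)^{v-1} dx after x = s^{1/u}, which absorbs the
    # singularity at 0:  (1/u) ∫_0^1 (1 - s^{1/u})^{v-1} ds.  Midpoint rule.
    h = 1.0 / n
    return sum((1.0 - ((i + 0.5) * h) ** (1.0 / u)) ** (v - 1.0)
               for i in range(n)) * h / u

assert abs(beta_integral_numeric(0.3, 1.6) - beta_fn(0.3, 1.6)) < 1e-5
```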

Proof of Lemma 4.2. All conclusions trivially hold if $t=0$ . Suppose $0< t\le 1$ .

  (a) Let $\Theta_*\,:\!= \bigcup_{n=1}^\infty \{\theta\,:\, V_{1/n}(\theta)=V_0(\theta)\}$ ; we have $\mathbb{P}_V(\Theta_*)=1$ by (4.16). Let $V_0^{\rm M}(\boldsymbol\theta)= \max(V_0(\theta_i),\ i=1,\ldots,p)$ . Set $D_1=\{\boldsymbol{\theta}\in \Theta^p_*\,:\, 0<V_0^{\rm M}(\boldsymbol\theta)<t \}$ , $D_2=\{\boldsymbol{\theta}\in \Theta^p_*\,:\, V_0^{\rm M}(\boldsymbol\theta)>t\}$ , $E_1= H^p \times D_1$ , and $E_2=H^p \times D_2$ , which satisfy $\big(\mathbb{P}_\mathcal{N}^p\times \mathbb{P}_V^p\big)(E_1\cup E_2) =1$ because the distribution $\widetilde{\pi}_0$ of $V_0$ is continuous. We shall establish the $L^r$ convergence restricted on $E_1$ and $E_2$ respectively. Let $\mathbb{E}_{\mathcal{N}}$ and $\mathbb{E}_{V}$ denote the integrations with respect to $\mathbb{P}_{\mathcal{N}}^p$ and $\mathbb{P}_V^p$ respectively.

    First, suppose $\boldsymbol{\theta}\in D_1$ . In this case, since $V_0^{\rm M}(\boldsymbol{\theta})\le \inf \big(\bigcap_{i=1}^p (R_\varepsilon(h_i)+V_{0}(\theta_i))\big)$ ,

    \begin{align*}L_t^{(\varepsilon)}(\textbf{{h}},\boldsymbol{\theta})=\frac{1}{\Gamma(\beta_p)}\left(\frac{\varepsilon}{{\rm e}}\right)^{\beta_p-1} \int_{V_0^{\rm M}(\boldsymbol{\theta})}^t {1}_{\{x\in \bigcap_{i=1}^p (R_\varepsilon(h_i)+V_{\varepsilon}(\theta_i)) \}} \, {\rm d} x.\end{align*}
    For $m\in \mathbb{Z}_+$ , set $\delta_m(\boldsymbol{\theta})= (1-m^{-1})V_0^{\rm M}(\boldsymbol{\theta})+m^{-1}t$ , which satisfies $0 <V_0^{\rm M}(\boldsymbol{\theta}) <\delta_m(\boldsymbol{\theta})<t$ . Define
    (4.23) \begin{equation}L_t^{(\varepsilon,m)}(\textbf{{h}},\boldsymbol{\theta}) = \frac{1}{\Gamma(\beta_p)} \left(\frac{\varepsilon}{{\rm e}}\right)^{\beta_p-1} \int_{\delta_m(\boldsymbol{\theta})}^t {1}_{\{x\in \bigcap_{i=1}^p ( R_\varepsilon(h_i)+V_{\varepsilon}(\theta_i)) \}} \, {\rm d} x \le L_{t}^{(\varepsilon)}(\textbf{{h}},\boldsymbol{\theta}).\end{equation}
    If $\varepsilon=1/n$ is sufficiently small that $V_\varepsilon(\theta_i)=V_0(\theta_i)$ for all $i=1,\ldots,p$ and $\varepsilon<\delta_m(\boldsymbol{\theta})-V_0^{\rm M}(\boldsymbol{\theta})$ , by [Reference Bai, Owada and Wang1, Lemma 2.6], $(L_t^{(\varepsilon,m)}(\cdot\,,\boldsymbol{\theta}))_{\varepsilon>0} $ forms a nonnegative $\mathbb{P}_\mathcal{N}^p$ -martingale as $\varepsilon$ decreases. So, $L_t^{(\varepsilon,m)}(\cdot\,,\boldsymbol{\theta}) $ converges $\mathbb{P}_{\mathcal{N}}^p$ -a.e. as $\varepsilon=1/n\rightarrow 0$ , and hence $L_t^{(\varepsilon,m)}$ converges $\mathbb{P}_{\mathcal{N}}^p\times \mathbb{P}_{V}^p$ -a.e. by Fubini. On the other hand, for any $r\in \mathbb{Z}_+$ , by Fubini, (4.21), and (4.23),
    (4.24) \begin{equation}\mathbb{E}^{\prime} \big|L_t^{(\varepsilon,m)}\big|^r \le \mathbb{E}_{V}\mathbb{E}_\mathcal{N}\big[(L_t^{(\varepsilon)})^r\big] \le c t^{r\beta_p}.\end{equation}
    Hence, the $L^r$ convergence of $L_t^{(\varepsilon,m)}1_{E_1}$ as $\varepsilon=1/n\rightarrow 0$ follows from uniform integrability. On the other hand, by (4.21) again,
    \begin{equation*} \mathbb{E}_{\mathcal{N}} \big|L_{t}^{(\varepsilon)}(\cdot\,,\boldsymbol{\theta})- L_{t}^{(\varepsilon,m)}(\cdot\, ,\boldsymbol{\theta})\big|^r \le c \big[\delta_m(\boldsymbol{\theta})-V_0^{\rm M}(\boldsymbol{\theta})\big]^{r\beta_p} \le c\, t^{r\beta_p} m^{-r\beta_p}.\end{equation*}
    Therefore, $\lim_{m\rightarrow \infty}\sup_{\varepsilon>0} \big\|L_{t}^{(\varepsilon)}{1}_{E_1}- L_{t}^{(\varepsilon,m)} {1}_{E_1}\big\|_{L^r}=0$ . This, together with the $L^r$ convergence of $L_{t}^{(\varepsilon,m)}{1}_{E_1}$ as $\varepsilon=1/n\rightarrow 0$ implies that $L_{t}^{(\varepsilon)}{1}_{E_1}$ is Cauchy in $L^r$ as $\varepsilon=1/n\rightarrow 0$ , and thus converges in $L^r$ .

    Next, suppose $\boldsymbol{\theta}\in D_2$ . When $\varepsilon$ is small enough that $V_{\varepsilon}(\theta_i)=V_0(\theta_i)$ for all $i=1,\ldots,p$ , we have $L_{t}^{(\varepsilon)}(\textbf{{h}},\boldsymbol{\theta})=0$ since $t\le \inf\big(\bigcap_{i=1}^p (R_\varepsilon(h_i)+V_{0}(\theta_i))\big)$ in this case. Then, the $L^r$ convergence of $L_t^{(\varepsilon)}{1}_{E_2}$ follows from uniform integrability by (4.24).

  (b) By (4.5), (4.20), and the monotone convergence theorem, we have, for $\boldsymbol{\theta}\in \Theta_*^p$ ,

    \begin{align*} &\mathbb{E}_{\mathcal{N}} L_t^*(\cdot\,,\boldsymbol{\theta})^r \\ &\quad= \frac{r!}{\Gamma(\beta_p)^r}\!\int_{0<x_1<\cdots<x_r<t} \Bigg(\!\prod_{i=1}^p (x_1-V_0(\theta_i))_+^{\beta-1} \!\Bigg) (x_2-x_1)^{\beta_p-1} \cdots (x_r-x_{r-1})^{\beta_p-1} \, {\rm d}\textbf{{x}}.\end{align*}
    Hence, by Fubini and (4.1),
    \begin{align*}\mathbb{E}^{\prime} & [(L_t^*)^r] \\ & = \frac{r!(1-\beta)^p}{\Gamma(\beta_p)^r} \int_{0<x_1<\cdots<x_r<t} \int_{[0,1]^p} \prod_{i=1}^p v^{-\beta}_i \Bigg(\prod_{i=1}^p (x_1-v_i)_+^{\beta-1} \Bigg) \, {\rm d}\textbf{{v}} \\ & \qquad\qquad\qquad \times (x_2-x_1)^{\beta_p-1} \cdots (x_r-x_{r-1})^{\beta_p-1} \, {\rm d}\textbf{{x}} \\& = \frac{r!(1-\beta)^p\mathrm{B}(1-\beta,\beta)^p}{\Gamma(\beta_p)^r} \int_{0<x_1<\cdots<x_r<t} (x_2-x_1)^{\beta_p-1} \cdots (x_r-x_{r-1})^{\beta_p-1} \, {\rm d}\textbf{{x}}.\end{align*}
    The last expression is equal to that in (4.18) through repeated applications of (4.1).
  (c) This can be proved in the same way as [Reference Bai, Owada and Wang1, Lemma 2.7], so we only provide a sketch. Write

    \begin{equation*}\bigcap_{i=1}^p \left( R_0(h_i)+V_0(\theta_i)\right) =R^*(\textbf{{h}},\boldsymbol{\theta})+V^*(\textbf{{h}},\boldsymbol{\theta}),\end{equation*}
    where $V^*(\textbf{{h}},\boldsymbol{\theta})=\inf\cap_{i=1}^p \left(R_0(h_i)+V_0(\theta_i)\right)$ and $R^*$ is a $\beta_p$ -stable regenerative set starting at the origin independent of $V^*$ under $\mathbb{P}_V^p\times \mathbb{P}_\mathcal{N}^p$ [Reference Samorodnitsky and Wang25, Lemma 3.1]. By its construction, $(L_t^*)_{t\ge 0}$ is an additive functional which only increases on $R^*+V^* $ . In addition, by (4.18), increment-stationarity of $L_t^*$ , and Kolmogorov’s continuity theorem, $(L_t^*)_{t\ge 0}$ admits a $\mathbb{P}_\mathcal{N}^p\times \mathbb{P}_V^p$ -version which is continuous in t. Therefore, in view of [Reference Kingman14, Theorem 3] and [Reference Maisonneuve17, Theorem 3.1], the equality in (4.19) holds up to a positive multiplicative constant. This constant can be shown to be 1 by comparing the moments as in the proof of [Reference Bai, Owada and Wang1, Lemma 2.7]. □
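The constant bookkeeping in part (b) above can be machine-checked: $(1-\beta)\mathrm{B}(1-\beta,\beta)=\Gamma(2-\beta)\Gamma(\beta)$ , and the simplex integral in the last display has the Dirichlet closed form $\int_{0<x_1<\cdots<x_r<t}\prod_{j=2}^r (x_j-x_{j-1})^{\beta_p-1}\,{\rm d}\textbf{{x}} = \Gamma(\beta_p)^{r-1}t^{(r-1)\beta_p+1}/\Gamma((r-1)\beta_p+2)$ , which together give the coefficient in (4.18). A numerical sketch of this arithmetic (not part of the proof):

```python
from math import gamma, factorial

def coeff_from_proof(r, p, beta):
    # Collect the constants as in the proof of Lemma 4.2(b):
    # r! ((1-β) B(1-β, β))^p / Γ(β_p)^r, times the Dirichlet-integral constant
    # Γ(β_p)^{r-1} / Γ((r-1)β_p + 2).
    beta_p = p * (beta - 1.0) + 1.0
    B = gamma(1.0 - beta) * gamma(beta)   # B(1-β, β) = Γ(1-β)Γ(β)/Γ(1)
    return (factorial(r) * ((1.0 - beta) * B) ** p / gamma(beta_p) ** r) \
        * gamma(beta_p) ** (r - 1) / gamma((r - 1) * beta_p + 2.0)

def coeff_stated(r, p, beta):
    # The coefficient displayed in (4.18).
    beta_p = p * (beta - 1.0) + 1.0
    return factorial(r) * (gamma(2.0 - beta) * gamma(beta)) ** p \
        / (gamma((r - 1) * beta_p + 2.0) * gamma(beta_p))

for r in (1, 2, 3):
    # p = 3, β = 0.8 gives β_p = 0.4 ∈ (0, 1), as required by (1.1).
    assert abs(coeff_from_proof(r, 3, 0.8) - coeff_stated(r, 3, 0.8)) \
        < 1e-9 * coeff_stated(r, 3, 0.8)
```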

4.2. Proof of Theorem 3.1

Proof. We first show the equivalence between the representations in Theorem 3.1(a) and (b), for which we shall fix $T>0$ and consider $t\in [0,T]$ . In view of Proposition 2.1 and Lemma 2.1, the representation in (3.2) is equivalent to

(4.25) \begin{equation}c_{p,\beta}\int_{(\textbf{{F}}\times [0,\infty))^p}^{\prime} L_t \Bigg(\bigcap_{i=1}^p (R_i +v_i)\Bigg) W^*({\rm d}R_1,{\rm d} v_1)\cdots W^*({\rm d} R_p,{\rm d} v_p),\end{equation}

where $W^*$ is a Gaussian random measure on $\textbf{{F}}\times [0,\infty)$ with control measure $\mathbb{P}_R\times \pi_V$ . Observe that, since $t\le T$ , the integrand above is zero if $v_i>T$ for some $i=1,\ldots,p$ . So, the integral domain $(\textbf{{F}}\times [0,\infty))^p$ in (4.25) can be replaced by $(\textbf{{F}}\times [0,T])^p$ , and $\pi_V$ can be viewed as its restriction on [0, T]. Then, define a Gaussian random measure by

(4.26) \begin{equation}W_T^*(\cdot)=\pi_V([0,T])^{-1/2} W^*(\cdot)=T^{(\beta-1)/2} W^*(\cdot),\end{equation}

whose control measure is now the probability measure $\mathbb{P}_R\times \widetilde{\pi}_V$ , where $\widetilde{\pi}_V({\rm d} v)=T^{\beta-1}(1-\beta) v^{-\beta}\,{\rm d} v$ , $v\in (0,T)$ . Replacing $W^*$ by $W_T^*$ in (4.25), the equivalence to the representation in (b) then follows from Lemma 2.1. Also, because of (4.26), the relation $c_{p,\beta,T}=T^{p(1-\beta)/2}c_{p,\beta}$ holds.
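The control probability measure just introduced can be simulated by inverse transform: its distribution function on (0, T) is $F(v)=(v/T)^{1-\beta}$, so $F^{-1}(u)=Tu^{1/(1-\beta)}$. A small sketch (the function names are ours, not the paper's):

```python
def cdf_V(v, T, beta):
    # Distribution function of π̃_V(dv) = T^{β-1}(1-β) v^{-β} dv on (0, T).
    return (v / T) ** (1.0 - beta)

def sample_V(u, T, beta):
    # Inverse-transform sampling: feed u ~ Uniform(0, 1].
    return T * u ** (1.0 / (1.0 - beta))

T, beta = 2.0, 0.7
for u in (0.05, 0.25, 0.5, 0.95):
    # F(F^{-1}(u)) = u, confirming the inverse-CDF formula.
    assert abs(cdf_V(sample_V(u, T, beta), T, beta) - u) < 1e-12
```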

Next we prove (b). We shall assume $T=1$ for simplicity, and the argument is similar for general T. By Lemma 4.2, the $L^2$ linear isometry of multiple Wiener–Itô integrals (Section 2.1), and the standardization of variance at $t=1$ , the second moment of the expression in (4.25) when $t=1$ is equal to

\begin{equation*}p! \frac{2! \Gamma(\beta)^p \Gamma(2-\beta)^p}{\Gamma(\beta_p)\Gamma(\beta_p+2)} c_{p,\beta}^2=1.\end{equation*}

This implies (3.3).

Let $(H,\mathcal{H},\mathbb{P}_\mathcal{N})$ , $(\Theta,\mathcal{G},\mathbb{P}_V)$ , $L_t^*$ , $L_t^{(\varepsilon)}$ , $V_\varepsilon$ , and $R_\varepsilon$ be as in Lemma 4.2. Let $\nu$ and $U_\varepsilon$ be as described in the paragraph above (4.9). In view of Lemma 2.1, without loss of generality, one can set the probability space in (3.4) as $(\Omega^{\prime},\mathcal{F}^{\prime},\mathbb{P}^{\prime})= (H\times \Theta, \mathcal{H}\times\mathcal{G}, \mathbb{P}_\mathcal{N}\times \mathbb{P}_{V})$ , assume that $ \mathbb{P}_{\mathcal{N}}\times \mathbb{P}_{V}$ is atomless, and choose $R=R_0$ and $V=V_0$ . In view of (4.19), for $0 \le t\le 1$ we have, almost surely (a.s.),

\begin{align*}\widetilde{Z}(t)\,:\!=c_{p,\beta}^{-1}Z(t) = \int_{(H\times\Theta )^p}^{\prime} L_t^*(\textbf{{h}},\boldsymbol{\theta}) W({\rm d} h_1,{\rm d}\theta_{1})\cdots W({\rm d} h_p,{\rm d}\theta_{p}).\end{align*}

For $\varepsilon>0$ , define

(4.27) \begin{equation}\widetilde{Z}_{\varepsilon}(t)=\int_{(H\times\Theta )^p}^{\prime} L_t^{(\varepsilon)}(\textbf{{h}},\boldsymbol{\theta}) W({\rm d} h_1,{\rm d}\theta_{1})\cdots W({\rm d} h_p,{\rm d}\theta_{p}).\end{equation}

By Lemma 4.2, $L_t^{(\varepsilon)}$ converges to $L_t^*$ in $L^2$ as $\varepsilon=1/n\rightarrow 0$ . So, by the $L^2$ linear isometry of multiple integrals, as $\varepsilon=1/n\rightarrow 0$ ,

(4.28) \begin{equation}\widetilde{Z}_{\varepsilon}(t)\overset{L^2(\Omega)}{\longrightarrow} \widetilde{Z}(t), \qquad t\ge 0.\end{equation}

Next, for $\varepsilon>0$ , define the Gaussian processes

\begin{align*} G_{\varepsilon}(x) & = \left(\frac{\pi_\varepsilon([0,1])}{ d_\varepsilon}\right)^{1/2} \int_{H\times \Theta} {1}_{\{x\in R_\varepsilon(h)+V_{\varepsilon}(\theta)\}} W({\rm d} h,{\rm d}\theta), \qquad x\in [0,1], \\[3pt] G_{\varepsilon}^*(x) & = d_\varepsilon^{-1/2} \int_{H\times \Theta} {1}_{\{x\in R_\varepsilon(h)+U_{\varepsilon}(\theta)\}} W({\rm d} h,{\rm d}\theta), \qquad x\in [0,\infty).\end{align*}

We claim that the Gaussian process $(G_{\varepsilon}^*(x))_{x\in [0,\infty)}$ is stationary and

(4.29) \begin{equation}(G_\varepsilon(x))_{x\in [0,1]}\overset{{\rm f.d.d.}}{=}(G_\varepsilon^*(x))_{x\in [0,1]},\end{equation}

where $\overset{{\rm f.d.d.}}{=}$ means equality in finite-dimensional distributions. Indeed, for $0\le x\le y$ , in view of (4.9),

\begin{align*}\mathbb{E} [G_\varepsilon^*(x)G_\varepsilon^*(y)] & = d_\varepsilon^{-1} \nu(\{x,y\}\subset R_\varepsilon+U_\varepsilon) = d_\varepsilon^{-1} \nu(\{0,y-x\}\subset R_\varepsilon+U_\varepsilon ) \\[3pt] & =\mathbb{E} G_\varepsilon^*(0)G_\varepsilon^*(y-x).\end{align*}

On the other hand, for $0\le x\le y\le 1$ ,

\begin{align*}\mathbb{E}[ G_\varepsilon(x)G_\varepsilon(y)] &= (\pi_\varepsilon([0,1])/d_\varepsilon) \mathbb{P}^{\prime}(\{x,y\}\subset R_\varepsilon+V_\varepsilon)= d_\varepsilon^{-1} \nu(\{x,y\}\subset R_\varepsilon+U_\varepsilon, U_\varepsilon \le 1)\\[3pt] & = d_\varepsilon^{-1} \nu(\{x,y\}\subset R_\varepsilon+U_\varepsilon)=\mathbb{E} G_\varepsilon^*(x)G_\varepsilon^*(y).\end{align*}

The reason for introducing $G^*_\varepsilon$ in addition to $G_\varepsilon$ is that we will apply the spectral representation, see (4.30), of a stationary Gaussian process defined on $\mathbb{R}$ . While $G^*_\varepsilon$ , defined on $[0,\infty)$ , can be extended to a stationary Gaussian process on $\mathbb{R}$ by shift, this is not the case for $G_\varepsilon$ , which is defined only on [0, 1].

Next, because $0\in R_\varepsilon$ , we have $\{0,x\}\subset R_\varepsilon+U_\varepsilon$ if and only if $U_\varepsilon=0$ and $x\in R_\varepsilon$ . So, by (4.7) and (4.8), $\mathbb{E} [G_\varepsilon^*(0) G_\varepsilon^*(x)] =d_\varepsilon^{-1}\pi_\varepsilon(\{0\}) p_\varepsilon(x) =p_\varepsilon(x)$ , and, in particular, $\mathbb{E} [G_\varepsilon^*(0)^2]=p_\varepsilon(0)=1$ . Using the spectral representation in [Reference Janson12, Theorem 7.54], we have, for $\varepsilon>0$ and $x\ge 0$ , that

(4.30) \begin{equation}(G_\varepsilon^*(x))_{x\in [0,\infty)}\overset{{\rm f.d.d.}}{=}\left(\int_{\mathbb{R}} {\rm e}^{{\rm i}\lambda x} \widehat{W}_\varepsilon({\rm d}\lambda)\right)_{x\in [0,\infty)},\end{equation}

where $\widehat{W}_\varepsilon$ is a complex-valued Hermitian Gaussian random measure with control measure $\mu_\varepsilon$ satisfying

(4.31) \begin{equation}p_{\varepsilon}(x)=\int_\mathbb{R} {\rm e}^{{\rm i}\lambda x} \mu_\varepsilon({\rm d}\lambda),\qquad x\ge 0.\end{equation}

In addition, using Gaussian moments, we have, for some constant $c>0$ not depending on x, that

\begin{equation*}\mathbb{E}[G_\varepsilon^*(x)-G_\varepsilon^*(0)]^4=3\big(\mathbb{E}[G_\varepsilon^*(x)-G_\varepsilon^*(0)]^2\big)^2= 12(1-p_{\varepsilon}(x))^2\le c x^2,\qquad x\ge 0,\end{equation*}

where the inequality follows from an examination of (4.3). Hence, $G_\varepsilon$ and $G_\varepsilon^*$ admit continuous versions by Kolmogorov’s continuity theorem. We shall work with such versions when integrating along their time variables below.

Applying a stochastic Fubini theorem [Reference Peccati and Taqqu21, Theorem 5.13.1] to (4.27), see also (4.17), and using the relation between Hermite polynomials and multiple Wiener–Itô integrals [Reference Janson12, Theorem 7.52, Equation (7.23), and Theorem 3.19], (4.29), and (4.30), we have

\begin{align*} \widetilde{Z}_{\varepsilon}(t)& \ \overset{\text{a.s.}}{=}\frac{1}{\Gamma(\beta_p)} \left( \frac{\varepsilon}{{\rm e}}\right)^{\beta_p-1} \left(\frac{d_\varepsilon}{ \pi_\varepsilon([0,1]) } \right)^{p/2} \int_0^t H_p\left(G_{\varepsilon}(x)\right) {\rm d} x \notag\\[3pt] & \overset{{\rm f.d.d.}}{=} \frac{1}{\Gamma(\beta_p)} \left( \frac{\varepsilon}{{\rm e}}\right)^{\beta_p-1} \left(\frac{d_\varepsilon}{ \pi_\varepsilon([0,1]) } \right)^{p/2} \int_0^t H_p\left(G_{\varepsilon}^*(x)\right) {\rm d} x \notag\\[3pt] & \overset{{\rm f.d.d.}}{=} \frac{\big(d_\varepsilon \left(\varepsilon/{\rm e}\right)^{\beta-1}\big)^p }{\Gamma(\beta_p)\pi_\varepsilon([0,1])^{p/2}} \int ^{\prime\prime}_{\mathbb{R}^p} \frac{{\rm e}^{{\rm i}\big(\sum_{j=1}^p \lambda_j\big)t}-1}{{\rm i}\big(\sum_{j=1}^p \lambda_j\big)} \widetilde{W}_{\varepsilon}({\rm d}\lambda_1)\cdots \widetilde{W}_\varepsilon({\rm d}\lambda_p),\end{align*}

where $\widetilde{W}_\varepsilon\,:\!=d_\varepsilon^{-1/2}\widehat{W}_\varepsilon$ has control measure $\widetilde{\mu}_\varepsilon\,:\!=d_\varepsilon^{-1}\mu_\varepsilon$ . By (4.31) and (4.4),

(4.32) \begin{equation}\int_{\mathbb{R}} {\rm e}^{{\rm i}\lambda x} \widetilde{\mu}_\varepsilon({\rm d}\lambda)=d_\varepsilon^{-1}p_{\varepsilon}(|x|)=u_\varepsilon(|x|).\end{equation}

Note that, in view of (4.5) and (4.6), as $\varepsilon\rightarrow 0$ ,

(4.33) \begin{equation} u_\varepsilon(x)\rightarrow u_0(x)= \frac{ x^{\beta-1}}{\Gamma(\beta)}, \qquad x> 0. \end{equation}

Define

\begin{equation*}\widetilde {\mu}_0({\rm d}\lambda)\,:\!=c_\beta |\lambda|^{-\beta} \, {\rm d}\lambda, \qquad \lambda\neq 0,\end{equation*}

where $c_\beta=\frac{2(1-\beta)}{\Gamma(\beta)}\int_0^\infty \sin(y)y^{\beta-2}\,{\rm d} y$ is a constant ensuring the relation

(4.34) \begin{equation}4\int_0^\infty \frac{\sin(ax)}{x} u_0(x)\,{\rm d} x = \widetilde{\mu}_0([-a,a]), \qquad a>0,\end{equation}

which can be obtained via a change of variable $ax=y$ in the integral above.

We claim that, as $\varepsilon\rightarrow 0$ ,

(4.35) \begin{align} (\widetilde{Z}_\varepsilon(t))_{0 \le t\le 1}\!\overset{{\rm f.d.d.}}{\longrightarrow} \! \Bigg(\frac{\Gamma(\beta)^p \Gamma(2-\beta)^{p/2}}{\Gamma(\beta_p)}\int ^{\prime\prime}_{\mathbb{R}^p} \frac{{\rm e}^{{\rm i}\big(\sum_{j=1}^p \lambda_j\big)t}-1}{{\rm i}\big(\sum_{j=1}^p \lambda_j\big)} \widehat{W}_0({\rm d}\lambda_1)\cdots \widehat{W}_0({\rm d}\lambda_p)\Bigg)_{_{0 \le t\le 1}},\end{align}

where $\overset{{\rm f.d.d.}}{\longrightarrow}$ stands for the convergence of the finite-dimensional distributions and $\widehat{W}_0$ is a complex-valued Hermitian Gaussian random measure with control measure $\widetilde{\mu}_0$ . The right-hand side of (4.35) is, up to a constant, the frequency-domain representation of a Hermite process (1.3) after noticing that $ \widehat{W}_0({\rm d}\lambda) \overset{{\rm d}}{=} c_{\beta}^{-1/2} |\lambda|^{-\beta/2}\widehat{W}({\rm d}\lambda)$ . If (4.35) holds, then in view of (4.28), the proof is concluded.

To show (4.35), in view of (4.6), (4.13), the Cramér–Wold device, and [Reference Dobrushin and Major7, Lemma 3] (see also [Reference Pipiras and Taqqu23, Proposition 5.3.6]), it suffices to show, as $\varepsilon\rightarrow 0$ , the vague convergence

(4.36) \begin{equation}\widetilde{\mu}_\varepsilon({\rm d}\lambda)\overset{{\rm v}}{\rightarrow} \widetilde {\mu}_0({\rm d}\lambda),\end{equation}

as well as

(4.37) \begin{equation}\lim_{A\rightarrow\infty}\limsup_{\varepsilon\rightarrow 0}\int_{([-A,A]^p)^{\rm c}} |k_{t}(\lambda_1,\ldots,\lambda_p)|^2 \widetilde{\mu}_\varepsilon({\rm d}\lambda_1)\cdots \widetilde{\mu}_\varepsilon({\rm d}\lambda_p)=0,\end{equation}

where

\begin{equation*}k_t(\lambda_1,\ldots,\lambda_p)\,:\!=\int_{0}^t {\rm e}^{{\rm i} s(\lambda_1+\cdots+\lambda_p)} \, {\rm d} s = \frac{\exp\big({\rm i}\big(\sum_{j=1}^p \lambda_j\big)t\big)-1}{{\rm i}\big(\sum_{j=1}^p \lambda_j\big)}, \qquad t> 0, \ \sum_{j=1}^p \lambda_j\neq 0.\end{equation*}
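As a sanity check, the closed form of $k_t$ agrees with a direct midpoint discretization of its defining integral; note that $k_t$ depends on $(\lambda_1,\ldots,\lambda_p)$ only through the sum $s=\lambda_1+\cdots+\lambda_p$. A numerical sketch:

```python
import cmath

def k_closed(t, s):
    # Closed form of k_t, with s = λ_1 + ⋯ + λ_p ≠ 0.
    return (cmath.exp(1j * s * t) - 1.0) / (1j * s)

def k_midpoint(t, s, n=100000):
    # Midpoint-rule discretization of the defining integral ∫_0^t e^{isx} dx.
    h = t / n
    return sum(cmath.exp(1j * s * (i + 0.5) * h) for i in range(n)) * h

assert abs(k_closed(1.0, 2.3) - k_midpoint(1.0, 2.3)) < 1e-8
assert abs(k_closed(0.5, -4.1) - k_midpoint(0.5, -4.1)) < 1e-8
```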

We first show (4.36). By an inversion of the Fourier transform (4.32) [Reference Lindgren16, Theorem 4:4] we have, for any $a>0$ ,

\begin{align*}\widetilde{\mu}_\varepsilon([-a,a])= \lim_{A\rightarrow\infty}\int_{-A}^A \frac{{\rm e}^{-{\rm i} ax}-{\rm e}^{{\rm i} ax}}{-{\rm i} x} u_\varepsilon(x)\,{\rm d} x = 4\int_{0}^\infty \frac{\sin(ax)}{x} u_\varepsilon(x)\,{\rm d} x,\end{align*}

where, for the last expression, its integrability follows from the fact (see (4.4), (4.6), and (4.22)) that

(4.38) \begin{equation}u_\varepsilon(x)\le c x^{\beta-1},\qquad x>0.\end{equation}

Note that in the inversion above, we have implicitly used the continuity of $\widetilde{\mu}_\varepsilon([-a,a])$ in a, which can be verified using the dominated convergence theorem via the bound $\sup_{a\in [0,b]}|\sin(ax)|\le (bx)\wedge 1$ for any $x,b>0$ . Then, by (4.38), (4.33), (4.34), and the dominated convergence theorem, we conclude that, as $\varepsilon\rightarrow 0$ ,

\begin{equation*}\widetilde{\mu}_\varepsilon([-a,a])\rightarrow \widetilde{\mu}_0([-a,a]),\end{equation*}

and thus (4.36) holds.

To show (4.37), define a measure on $\mathbb{R}^p$ as

\begin{equation*}\kappa_\varepsilon({\rm d}\lambda_1,\ldots,{\rm d}\lambda_p)\,:\!=|k_t(\lambda_1,\ldots,\lambda_p)|^2 \widetilde{\mu}_\varepsilon({\rm d}\lambda_1)\cdots \widetilde{\mu}_\varepsilon({\rm d}\lambda_p).\end{equation*}

We shall obtain (4.37) as a tightness condition from the weak convergence of $\kappa_\varepsilon$ as $\varepsilon\rightarrow 0$ . Indeed, set $\textbf{1}=(1,\ldots,1)\in \mathbb{R}^p$ and let $\langle \cdot\, , \cdot \rangle$ denote the Euclidean inner product. By (4.32), (4.38), Fubini, and the dominated convergence theorem, we have, as $\varepsilon\rightarrow 0$ ,

\begin{align*}\int_{\mathbb{R}^p} {\rm e}^{{\rm i} \langle \boldsymbol{\lambda} ,\textbf{{x}} \rangle } \kappa_\varepsilon({\rm d}\boldsymbol{\lambda})&= \int_{\mathbb{R}^p} \widetilde{\mu}_\varepsilon^p({\rm d}\boldsymbol{\lambda}) {\rm e}^{{\rm i} \langle \boldsymbol{\lambda} , \textbf{{x}}\rangle } \int_{0}^t {\rm e}^{{\rm i} s_1 \langle \boldsymbol{\lambda},\textbf{1}\rangle} \,{\rm d} s_1\int_{0}^t {\rm e}^{-{\rm i} s_2 \langle\boldsymbol{\lambda}, \textbf{1}\rangle} \, {\rm d} s_2 \\[3pt] & = \int_0^t{\rm d} s_1\int_0^t{\rm d} s_2 \prod_{j=1}^p u_\varepsilon(|x_j +s_1-s_2|) \\&\rightarrow \int_0^t{\rm d} s_1\int_0^t{\rm d} s_2 \prod_{j=1}^p u_0(|x_j +s_1-s_2|)\\ &=\frac{t^{\beta_p+1}}{\Gamma(\beta)^{p}}\int_{-1}^{1} (1-|y|)\prod_{j=1}^p |x_j/t+y|^{\beta-1}\,{\rm d} y =\!:\, \phi(\textbf{{x}}),\end{align*}

where the last line is obtained by the change of variables $s_1=t(y+w)$ , $s_2=t w$ and integrating out the variable w. Note that $\phi(\textbf{{x}})<\infty$ for any $\textbf{{x}}\in \mathbb{R}^p$ since $(\beta-1)p>-1$ . Furthermore, the function $\phi(\textbf{{x}})$ is continuous [Reference Dobrushin and Major7, Lemma 1]. So, the tightness condition (4.37) holds by Lévy’s continuity theorem. Hence, (4.35) is established and thus the proof is complete. □
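The change of variables in the last display can be spot-checked numerically; a sketch for the simplest case $p=1$ (the factor $\Gamma(\beta)^{-p}$ cancels from both sides), with $x>t$ so that the integrands stay away from their singularities:

```python
def phi_direct(x, t, beta, n=400):
    # ∫_0^t ∫_0^t |x + s1 - s2|^{β-1} ds1 ds2 by a 2-D midpoint rule.
    h = t / n
    total = 0.0
    for i in range(n):
        for j in range(n):
            total += abs(x + (i + 0.5) * h - (j + 0.5) * h) ** (beta - 1.0)
    return total * h * h

def phi_changed(x, t, beta, n=100000):
    # t^{β+1} ∫_{-1}^{1} (1 - |y|) |x/t + y|^{β-1} dy by a 1-D midpoint rule.
    h = 2.0 / n
    total = 0.0
    for i in range(n):
        y = -1.0 + (i + 0.5) * h
        total += (1.0 - abs(y)) * abs(x / t + y) ** (beta - 1.0)
    return total * h * t ** (beta + 1.0)

# x = 1.5 > t = 1 keeps |x + s1 - s2| bounded away from 0.
assert abs(phi_direct(1.5, 1.0, 0.6) - phi_changed(1.5, 1.0, 0.6)) < 1e-4
```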

Acknowledgements

The author would like to thank Takashi Owada and Yizao Wang for discussions which motivated this work, and also the anonymous referees for their helpful suggestions.

References

Bai, S., Owada, T. and Wang, Y. (2020). A functional non-central limit theorem for multiple-stable processes with long-range dependence. Stoch. Process. Appl. 130, 5768–5801.Google Scholar
Bai, S. and Taqqu, M. S. (2018). How the instability of ranks under long memory affects large-sample inference. Statist. Sci. 33, 96–116.Google Scholar
Bai, S. and Taqqu, M. S. (2020). Limit theorems for long-memory flows on Wiener chaos. Bernoulli 26, 1473–1503.Google Scholar
Bertoin, J. (1999). Subordinators: Examples and Applications. Springer, New York.Google Scholar
Bertoin, J. and Pitman, J. (2000). Two coalescents derived from the ranges of stable subordinators. Electron. J. Prob. 5, 7.Google Scholar
Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1989). Regular Variation. Encyclopedia of Mathematics and Its Applications. Cambridge University Press.Google Scholar
Dobrushin, R. L. and Major, P. (1979). Non-central limit theorems for non-linear functionals of Gaussian fields. Prob. Theory Relat. Fields 50, 27–52.Google Scholar
Fitzsimmons, P. J., Fristedt, B. and Maisonneuve, B. (1985). Intersections and limits of regenerative sets. Z. Wahrscheinlichkeitsth. 70, 157–173.Google Scholar
Fitzsimmons, P. J., Fristedt, B. and Shepp, L. A. (1985). The set of real numbers left uncovered by random covering intervals. Z. Wahrscheinlichkeitsth. 70, 175–189.Google Scholar
Fitzsimmons, P. J. and Taksar, M. (1988). Stationary regenerative sets and subordinators. Ann. Prob. 16, 1299–1305.Google Scholar
Ho, H. and Hsing, T. (1997). Limit theorems for functionals of moving averages. Ann. Prob. 25, 1636–1669.Google Scholar
Janson, S. (1997). Gaussian Hilbert Spaces. Cambridge University Press.Google Scholar
Kallenberg, O. (2002). Foundations of Modern Probability, 2nd ed. Springer, New York.Google Scholar
Kingman, J. F. C. (1973). An intrinsic description of local time. J. London Math. Soc. (2) 6, 725–731.Google Scholar
Lacaux, C. and Samorodnitsky, G. (2016). Time-changed extremal process as a random sup measure. Bernoulli 22, 1979–2000.Google Scholar
Lindgren, G. (2012). Stationary Stochastic Processes: Theory and Applications. CRC Press, Boca Raton.Google Scholar
Maisonneuve, B. (1987). Subordinators regenerated. In Seminar on Stochastic Processes, 1986, eds E. Çinlar, K. L. Chung, R. K. Getoor, and J. Glover, pp. 155–161.Google Scholar
Major, P. (2014). Multiple Wiener–Itô Integrals, with Applications to Limit Theorems, 2nd ed. Springer, New York.Google Scholar
Molchanov, I. S. (2017). Theory of Random Sets, 2nd ed., Vol. 87. Springer, New York.Google Scholar
Nourdin, I., Nualart, D. and Tudor, C. (2010). Central and non-central limit theorems for weighted power variations of fractional Brownian motion. Ann. Inst. H. Poincaré Prob. Statist. 46, 1055–1079.Google Scholar
Peccati, G. and Taqqu, M. S. (2011). Wiener Chaos: Moments, Cumulants and Diagrams: A Survey with Computer Implementation. Springer, New York.Google Scholar
Pipiras, V. and Taqqu, M. S. (2010). Regularization and integral representations of Hermite processes. Statist. Prob. Lett. 80, 2014–2023.Google Scholar
Pipiras, V. and Taqqu, M. S. (2017). Long-Range Dependence and Self-Similarity, Vol. 45. Cambridge University Press.Google Scholar
Rosenblatt, M. (1961). Independence and dependence. In Proc. 4th Berkeley Symp. Math. Statist. Prob., Vol. 2. University of California Press, Berkeley, pp. 431–443.Google Scholar
Samorodnitsky, G. and Wang, Y. (2019). Extremal theory for long range dependent infinitely divisible processes. Ann. Prob. 47, 2529–2562.Google Scholar
Sethuraman, J. (2002). Some extensions of the Skorohod representation theorem. Sankhyā A 64, 884–893.Google Scholar
Slud, E. V. (1993). The moment problem for polynomial forms in normal random variables. Ann. Prob. 21, 2200–2214.Google Scholar
Surgailis, D. (1982). Zones of attraction of self-similar multiple integrals. Lithuanian Math. J. 22, 327–340.Google Scholar
Taqqu, M. S. (1975). Weak convergence to fractional Brownian motion and to the Rosenblatt process. Prob. Theory Relat. Fields 31, 287–302.Google Scholar
Taqqu, M. S. (1979). Convergence of integrated processes of arbitrary Hermite rank. Prob. Theory Relat. Fields 50, 53–83.Google Scholar
Tudor, C. (2013). Analysis of Variations for Self-Similar Processes: A Stochastic Calculus Approach. Springer, New York.Google Scholar
Tudor, C. A. (2008). Analysis of the Rosenblatt process. ESAIM Prob. Statist. 12, 230–257.Google Scholar
Williams, D. (1991). Probability with Martingales. Cambridge University Press.Google Scholar