
Limit theorems for multivariate Brownian semistationary processes and feasible results

Published online by Cambridge University Press:  03 September 2019

Riccardo Passeggeri*
Affiliation:
Imperial College London
Almut E. D. Veraart*
Affiliation:
Imperial College London
* Postal address: Department of Mathematics, Imperial College London, 180 Queen’s Gate, London, SW7 2AZ, UK.

Abstract

In this paper we introduce the multivariate Brownian semistationary (BSS) process and study the joint asymptotic behaviour of its realised covariation using in-fill asymptotics. First, we present a central limit theorem for general multivariate Gaussian processes with stationary increments, which are not necessarily semimartingales. Then, we show weak laws of large numbers, central limit theorems, and feasible results for BSS processes. An explicit example based on the so-called gamma kernels is also provided.

Type
Original Article
Copyright
© Applied Probability Trust 2019 

1. Introduction

The univariate Brownian semistationary process is a stochastic process of the form

$$ \begin{equation*} Y_{t}=\mu+\int_{-\infty}^{t}g(t-s)\sigma_{s}{\rm d} W_{s}+\int_{-\infty}^{t} q(t-s)a_{s}{\rm d} s, \end{equation*} $$

where $\mu$ is a constant, W is a Brownian measure on $\mathbb{R}$ , g and q are nonnegative deterministic functions on $\mathbb{R}$ , with $g(t)=q(t)=0$ for $t\leq 0$ , and $\sigma$ and a are càdlàg processes. The name Brownian semistationary (BSS) process comes from the fact that, when $\sigma$ and a are stationary, Y is stationary as well. These processes were first introduced in [Reference Barndorff-Nielsen, Schmiegel, Albrecher, Runggaldier and Schachermayer2] and, since then, they have been extensively used in applications due to their flexibility and, thus, their capacity to model a variety of empirical phenomena. Two of the most notable fields of application are turbulence and finance.
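As a numerical illustration (our own sketch, not a scheme prescribed by the paper), the process above can be approximated by truncating the integral over $(-\infty,t]$ to a finite window and discretising it as a weighted moving average. Here we use the gamma kernel $g(x)=x^{\alpha}{\rm e}^{-\lambda x}$ discussed later in the paper, drop the drift term, and take for $\sigma$ an exponentiated Ornstein–Uhlenbeck proxy; all parameter values are illustrative assumptions.

```python
import numpy as np

# Truncation window for the integral over (-inf, t] and output grid (assumed values).
rng = np.random.default_rng(3)
dt, n_win, n_out = 0.01, 2_000, 500
alpha, lam = 0.2, 1.0
total = n_win + n_out

dW = rng.normal(0.0, np.sqrt(dt), total)          # Brownian increments

# A positive stationary volatility proxy: exponential of an OU process.
ou = np.zeros(total)
for k in range(1, total):
    ou[k] = ou[k - 1] * (1.0 - dt) + rng.normal(0.0, np.sqrt(dt))
sigma = np.exp(0.5 * ou)

lags = (np.arange(n_win) + 1) * dt
g = lags ** alpha * np.exp(-lam * lags)           # gamma kernel g(x) = x^alpha e^{-lam x}

# Y_t ~ sum over the window of g(t - s) * sigma_s * dW_s, most recent lag first.
Y = np.array([g @ (sigma[k:k + n_win] * dW[k:k + n_win])[::-1]
              for k in range(n_out)])
```

The reversal aligns the smallest lag $g({\rm d}t)$ with the most recent Brownian increment, mimicking the moving-average form of the integral.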

In the context of turbulence, where the process $\sigma$ represents the intermittency of the dynamics, these processes are able to reproduce the key stylized features of turbulence data, such as homogeneity, stationarity, skewness, isotropy, and certain scaling laws (see [Reference Barndorff-Nielsen, Pakkanen and Schmiegel8], [Reference Corcuera, Hedevang, Podolskij and Pakkanen15], and the discussion therein). In finance, the BSS process has been applied to the modelling of energy spot prices [Reference Barndorff-Nielsen, Benth and Veraart4], [Reference Bennedsen10] and of logarithmic volatility of futures [Reference Bennedsen, Lunde and Pakkanen11], among others. Furthermore, fast and efficient simulation schemes for the univariate BSS are available [Reference Bennedsen, Lunde and Pakkanen12].

One of the key aspects of the BSS process that has been analysed in great detail in the last decade is the asymptotic behaviour of its realised power variation. The realised power variation of a process $Y_{t}$ is the sum of absolute powers of its increments, i.e.

$$ \begin{equation*} \sum_{i=1}^{\lfloor nt \rfloor}|\Delta_{i}^{n}Y|^{r}, \quad\text{where}\quad \Delta_{i}^{n}Y\,:\!=Y_{{i}/{n}}-Y_{{(i-1)}/{n}}. \end{equation*} $$
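The statistic above is straightforward to compute from discrete observations. A minimal sketch (with standard Brownian motion standing in for Y, and function names of our choosing) is:

```python
import numpy as np

def realised_power_variation(Y, r):
    """Sum of r-th absolute powers of the increments of a sampled path Y."""
    return np.sum(np.abs(np.diff(Y)) ** r)

# Illustration: for a standard Brownian path and r = 2, the realised
# quadratic variation over [0, 1] is close to 1 for large n.
rng = np.random.default_rng(0)
n = 100_000
W = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(1.0 / n), n))])
qv = realised_power_variation(W, 2)
```

For r = 2 this is the realised quadratic variation, which for a semimartingale estimates the integrated squared volatility discussed below.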

For general semimartingales, the study of realised variation plays a pivotal role in estimating the key aspects of the process under consideration, e.g. the integrated squared volatility given by $\int_{0}^{t}\sigma^{2}_{s}{\rm d} s$ (see [Reference Barndorff-Nielsen, Corcuera and Podolskij7] for further discussions). This has led to the development of numerous works on this topic (see [Reference Jacod19] and the references therein). On the other hand, the BSS process is not in general a semimartingale and the theory of realised power variation for semimartingales does not apply in this case. New results based on different mathematical tools, mainly those presented in the works of Peccati, Nourdin, and coauthors (see [Reference Nourdin and Peccati20] and the references therein), have been obtained. Barndorff-Nielsen et al. [Reference Barndorff-Nielsen, Corcuera and Podolskij7] presented the multipower variation for BSS processes, while Granelli and Veraart [Reference Granelli and Veraart17] obtained the realised covariation for the bivariate BSS without drift. It is important to mention that in the general multivariate setting we have the work [Reference Barndorff-Nielsen and Shephard3] for the semimartingale case, but no corresponding results exist for BSS processes outside the semimartingale framework.

In this article we introduce the multivariate BSS process, study the joint asymptotic behaviour of its realised covariation, and present feasible results and relevant examples. In particular, we will study the asymptotic behaviour of

$$ \begin{equation*} \sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta_{i}^{n}Y^{(k)}} {\tau_{n}^{(k)}} \frac{\Delta_{i}^{n}Y^{(l)}} {\tau_{n}^{(l)}}\bigg)_{k,l=1,\ldots,p}, \end{equation*} $$

where $p\in\mathbb{N}$ , $\smash{\tau_{n}^{(\,j)}}>0,$ and $\smash{Y_{t}^{(\,j)}}$ is the jth component of the multivariate BSS process, for $j=1,\ldots,p$ . This work is motivated by the manifold applications of the BSS process and is not merely a multivariate extension of the results presented in [Reference Barndorff-Nielsen, Corcuera and Podolskij7] and [Reference Granelli and Veraart17]. Indeed, in those works the realised power variations and covariations were always normalised by a scaling factor ( $\tau_{n}$ ) restricted to a specific structure. We remove this restriction, which enables us to obtain all the feasible results presented in this work; they could not have been obtained otherwise.
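The scaled realised covariation above can be sketched numerically as follows. The scaling $\smash{\tau_{n}^{(\,j)}}=n^{-1/2}$ used here is an illustrative choice appropriate for Brownian components, not the general definition, and correlated Brownian paths stand in for the BSS process.

```python
import numpy as np

def realised_covariation(Y, tau):
    """Scaled realised covariation matrix: sum over increments of
    (dY^(k) / tau^(k)) * (dY^(l) / tau^(l)), for a path Y of shape (n+1, p)."""
    dY = np.diff(Y, axis=0) / np.asarray(tau)
    return dY.T @ dY

# Illustration with two correlated Brownian paths on [0, 1]; dividing by n
# recovers the covariance matrix of the standardised increments.
rng = np.random.default_rng(1)
n, rho = 50_000, 0.6
dB = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n) / np.sqrt(n)
Y = np.vstack([[0.0, 0.0], np.cumsum(dB, axis=0)])
C = realised_covariation(Y, [n ** -0.5, n ** -0.5]) / n
```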

We remark that, despite the more general theory developed here, no additional assumptions will be added other than those already introduced in [Reference Barndorff-Nielsen, Corcuera and Podolskij7] and [Reference Granelli and Veraart17] (but used in a multivariate setting).

Due to the various potential applications of the multivariate BSS process, it appears natural to derive feasible results, namely results that can be computed directly from real data. We focus on two objects:

$$ \begin{gather*} \bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)} \Delta^{n}_{i}Y^{(l)}}{\sqrt{\sum_{i=1}^{\lfloor nt\rfloor} (\Delta^{n}_{i}Y^{(k)})^{2}\,}\sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}Y^{(l)})^{2}}\,}\bigg)_{k,l=1,\ldots,p} \\ \text{and}\quad \bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}\bigg)_{k,l=1,\ldots, p}. \end{gather*} $$

Both objects belong to the class of realised covariation ratios. Similar ratios tailored to the univariate case have been used in the literature to construct consistent estimators of key parameters, e.g. the smoothness parameter $\alpha$ of the BSS process in [Reference Corcuera, Hedevang, Podolskij and Pakkanen15]. The second object can be viewed as the relative covolatility of the BSS process, since it is the multivariate analogue of the relative volatility concept introduced in [Reference Barndorff-Nielsen, Pakkanen and Schmiegel8].
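Both ratios are feasible in the sense that no unknown scaling factor enters them. A minimal sketch (function names are ours, and correlated Brownian paths stand in for the BSS process) is:

```python
import numpy as np

def realised_correlation(Y):
    """First feasible statistic: realised covariation normalised by the
    realised variations of the two components involved."""
    dY = np.diff(Y, axis=0)
    cov = dY.T @ dY
    scale = np.sqrt(np.diag(cov))
    return cov / np.outer(scale, scale)

def relative_covolatility(Y, m):
    """Second feasible statistic: realised covariation up to increment m,
    relative to the realised covariation over the whole sample (entrywise)."""
    dY = np.diff(Y, axis=0)
    return (dY[:m].T @ dY[:m]) / (dY.T @ dY)

# Illustration with correlated Brownian paths standing in for the BSS process.
rng = np.random.default_rng(4)
n, rho = 50_000, 0.5
dB = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=n)
Y = np.vstack([[0.0, 0.0], np.cumsum(dB, axis=0)])
R = realised_correlation(Y)
Q = relative_covolatility(Y, n // 2)
```

Note that both statistics are ratios of sums of observed increments only, which is exactly what makes them computable directly from data.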

This paper is structured as follows. In Section 2 we introduce the multivariate BSS process, the general setting, and the basic mathematical concepts of this work. It is usually the case that, when a univariate process is extended to the multivariate case, there is more than one way to do it. Hence, we present two possible multivariate extensions of the one-dimensional BSS process. Furthermore, in the same section we introduce the Gaussian core, which is a key object for the mathematical understanding and estimation of the BSS process. In Sections 3 and 4 we respectively present the joint central limit theorem for general multivariate Gaussian processes with stationary increments and for the BSS processes. In particular, in these two sections we present different cases depending on which multivariate extension of the BSS process and values of the scaling factor $\tau_{n}$ are considered. In addition, in Section 4 we prove the weak law of large numbers (WLLN) for BSS processes. In Section 5 we derive the feasible results and in Section 6 we present examples. In Section 7 we provide some final remarks and open questions.

We refer the reader to Section 6.1 for a presentation of the main results of this work (in a simplified framework).

2. Preliminaries

In this section we explore the setting and some of the basic mathematical tools used throughout this article.

Let $T>0$ denote a finite time horizon, and let $ (\Omega,\mathcal{F},(\mathcal{F}_{t}),\mathbb{P}) $ be a filtered complete probability space. In the following we always assume that $p,n\in\mathbb{N}$ and that $\mathcal{B}(\mathbb{R}) $ denotes the class of Borel sets of $\mathbb{R}$ . We recall the definition of a Brownian measure.

Definition 2.1

An $\mathcal{F}_{t}$ -adapted Brownian measure $W\colon \Omega\times\mathcal{B}(\mathbb{R})\rightarrow\mathbb{R}$ is a Gaussian stochastic measure such that, if $A\in\mathcal{B}(\mathbb{R}) $ with $\mathbb{E}[(W(A))^{2}]<\infty$ , then $W(A)\sim N(0,{\rm Leb}(A)) $ , where Leb is the Lebesgue measure. Moreover, if $A\subseteq[t,\infty) $ then W(A) is independent of $\mathcal{F}_{t}$ .

We will assume that $ (\Omega,\mathcal{F},(\mathcal{F}_{t}),\mathbb{P}) $ supports p independent $\mathcal{F}_{t}$ -Brownian measures on $\mathbb{R}$ . Consider the stochastic process $\smash{\{{\boldsymbol G}_{t}\}_{t\in[0,T]}}$ defined as

$$ \begin{equation*} {\boldsymbol G}_{t}\,:\!= \begin{pmatrix} G^{(1)}_{t} \\ \vdots\\ G^{(\,p)}_{t} \end{pmatrix} =\int_{-\infty}^{t} \begin{pmatrix} g^{(1,1)}(t-s) & \dots & g^{(1,p)}(t-s) \\ \vdots & \ddots & \vdots\\ g^{(\,p,1)}(t-s) & \dots & g^{(\,p,p)}(t-s) \end{pmatrix} \begin{pmatrix} {\rm d} W^{(1)}_{s} \\ \vdots\\ {\rm d} W^{(\,p)}_{s} \end{pmatrix}, \end{equation*} $$

where the integral is to be understood componentwise; for $i,j=1,\ldots, p$ , the $\smash{g^{(i,j)}\!\in L^{2}((0,\infty))}$ are deterministic functions, continuous on $\mathbb{R}\setminus\{0\}$ , and $\smash{(W^{(1)},\ldots, W^{(\,p)})}$ are jointly Gaussian $\mathcal{F}_{t}$ -Brownian measures on $\mathbb{R}$ . Thus, we have $\smash{G^{(i)}_{t}=\sum_{j=1}^{p}\int_{-\infty}^{t} g^{(i,j)}(t-s){\rm d} W^{(\,j)}_{s}}$ . We call the process $\smash{\{{\boldsymbol G}_{t}\}_{t\in[0,T]}}$ the multivariate Gaussian core; it is a stationary Gaussian process and, in particular, has stationary increments. The Gaussian core will play a crucial role in the limit theorems for the BSS process.

Remark 2.1

Note that we do not assume independence of the Brownian measures. The only requirement is that they are jointly Gaussian so that the process $\{{\boldsymbol G}_{t}\}_{t\in[0,T]}$ is Gaussian. This level of generality is needed to prove the central limit theorem (CLT) for the BSS process. In fact, as we will later see, proving a CLT for the Gaussian core driven by independent Brownian measures is not sufficient for proving the CLT for the BSS process.

For $j\in\{1,\ldots, p\}$ and $l\in\{1,\ldots, n\}$ , let $\smash{\tau_{n}^{(\,j)}}$ be a (scaling) constant depending on $\smash{G^{(\,j)}}$ and n whose explicit form will be introduced later, and let $\smash{\Delta^{n}_{l}G^{(\,j)}}\,:\!=\smash{G^{(\,j)}_{{l}/{n}}}- \smash{G^{(\,j)}_{{(l-1)}/{n}}}$ . Since $\{{\boldsymbol G}_{t}\}_{t\in[0,T]}$ is a Gaussian process, we can use the machinery of Malliavin calculus. In particular, let $\mathcal{H}$ be the Hilbert space generated by the random variables given by:

(2.1) $$ \begin{equation} \bigg(\dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}\bigg)_{n\geq1,\,1\leq l\leq \lfloor nt\rfloor,\,j\in\{1,\ldots, p\}} \label{eqn1} \end{equation} $$

equipped with the scalar product $\langle\cdot,\cdot \rangle_{\mathcal{H}}$ induced by $L^{2}(\Omega,\mathcal{F},\mathbb{P}) $ , i.e. for $X,Y\in\mathcal{H},$ we have $\langle X,Y\rangle_{\mathcal{H}}=\mathbb{E}[XY]$ . Note that $\mathcal{H}$ is a closed subset of $L^{2}(\Omega,\mathcal{F},\mathbb{P}) $ composed by $L^{2}$ -Gaussian random variables generated by (2.1). In particular, we have an isonormal Gaussian process because the random variables (2.1) are jointly Gaussian since they are (rescaled) increments of the Gaussian process $\{{\boldsymbol G}_{t}\}_{t\in[0,T]}$ . Following the setting of [Reference Nourdin and Peccati20], we assume that $\mathcal{F}$ is generated by $\mathcal{H}$ . Finally, recall that any element of $L^{2}(\Omega,\mathcal{F},\mathbb{P}) $ has a unique decomposition in terms of the Wiener chaos expansion of $\mathcal{H}$ (see [Reference Nourdin and Peccati20]).

Next, we present and define the multivariate BSS process. Since there are several ways to generalise a univariate BSS process to a multivariate process, we will present two particularly relevant multivariate extensions.

Definition 2.2

Consider p jointly Gaussian Brownian measures $\smash{W^{(1)},\ldots, W^{(\,p)}}$ . Furthermore, consider $p^{2}$ nonnegative deterministic functions $\smash{g^{(1,1)},\ldots, g^{(\,p,p)}}\in L^{2}((0,\infty)) $ which are continuous on $\mathbb{R}\setminus\{0\}$ and such that $\smash{g^{(i,j)}(t)}=0$ for $t\leq0$ and $i,j=1,\ldots, p$ . Let $\smash{\sigma^{(1,1)},\ldots, \sigma^{(\,p,p)}}$ be càdlàg, $\mathcal{F}_{t}$ -adapted stochastic processes, and assume that $\smash{\int_{-\infty}^{t}(g^{(i,j)}(t-s)\sigma_{s}^{(\,j,k)})^{2}}{\rm d} s<\infty$ for every $t\geq0$ and $i,j,k=1,\ldots, p$ . Let $\{{\boldsymbol U}_{t}\}_{t\in[0,T]}=\smash{\{(U_{t}^{(1)},\ldots, U_{t}^{(\,p)})\}_{t\in[0,T]}}$ be a stochastic process playing the role of a drift term. Define

\begin{align*} {\boldsymbol Y}_{t}&\,:\!= \begin{pmatrix} Y^{(1)}_{t} \\\vdots\\ Y^{(\,p)}_{t} \end{pmatrix} \\[-1pt] &\phantom{:}=\int_{-\infty}^{t} \begin{pmatrix} g^{(1,1)}(t-s) &\cdots & g^{(1,p)}(t-s) \\ \vdots & \ddots & \vdots \\ g^{(\,p,1)}(t-s) & \cdots & g^{(\,p,p)}(t-s) \end{pmatrix} \begin{pmatrix} \sigma^{(1,1)}_{s} &\cdots & \sigma^{(1,p)}_{s} \\ \vdots & \ddots & \vdots \\ \sigma^{(\,p,1)}_{s} & \cdots & \sigma^{(\,p,p)}_{s} \end{pmatrix} \begin{pmatrix} {\rm d} W^{(1)}_{s} \\\vdots\\ {\rm d} W^{(\,p)}_{s} \end{pmatrix} \\[-1pt] &+ \begin{pmatrix} U^{(1)}_{t} \\\vdots\\ U^{(\,p)}_{t} \end{pmatrix} \end{align*}

and

\begin{align*} {\boldsymbol X}_{t}&\,:\!= \begin{pmatrix} X^{(1)}_{t} \\\vdots\\ X^{(\,p)}_{t} \end{pmatrix} \\[-1pt] &\phantom{:}=\int_{-\infty}^{t} \begin{pmatrix} g^{(1,1)}(t-s)\sigma^{(1,1)}_{s} &\cdots & g^{(1,p)}(t-s)\sigma^{(1,p)}_{s} \\ \vdots & \ddots & \vdots \\ g^{(\,p,1)}(t-s)\sigma^{(\,p,1)}_{s} & \cdots & g^{(\,p,p)}(t-s)\sigma^{(\,p,p)}_{s} \end{pmatrix} \begin{pmatrix} {\rm d} W^{(1)}_{s} \\\vdots\\ {\rm d} W^{(\,p)}_{s} \end{pmatrix} + \begin{pmatrix} U^{(1)}_{t} \\\vdots\\ U^{(\,p)}_{t} \end{pmatrix}. \end{align*}

Then the vector-valued processes $\{{\boldsymbol Y}_{t}\}_{t\in[0,T]}$ and $\{{\boldsymbol X}_{t}\}_{t\in[0,T]}$ are both called multivariate BSS processes.

Remark 2.2

Note that in the above definition we consider the case where the matrices are of dimension $p\times p$ . However, the above definition can be extended straightforwardly to include general rectangular matrices.

Now, we will discuss properties of the tensor product in Hilbert spaces. Consider two real Hilbert spaces $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ endowed with the inner products $\langle\cdot,\cdot \rangle_{\mathcal{H}_{1}}$ and $\langle\cdot,\cdot \rangle_{\mathcal{H}_{2}},$ respectively. Given $f,x\in\mathcal{H}_{1}$ and $g,y\in\mathcal{H}_{2}$ , we denote by $[f\otimes g](x,y)\,:\!=\langle x,f \rangle_{\mathcal{H}_{1}}\langle y,g \rangle_{\mathcal{H}_{2}}$ the bilinear form $f\otimes g\colon \mathcal{H}_{1}\times \mathcal{H}_{2}\rightarrow\mathbb{R}$ . Let $\mathcal{K}$ be the set of all finite linear combinations of such bilinear forms, namely $\mathcal{K}\,:\!={\rm span}(f\otimes g\colon f\in\mathcal{H}_{1},\,g\in\mathcal{H}_{2}) $ . We are going to present a result on the inner product for this space.

Lemma 2.1

The bilinear form $\langle\langle\cdot,\cdot \rangle \rangle$ on $\mathcal{K}$ defined by $\langle\langle f_{1}\otimes g_{1},f_{2}\otimes g_{2} \rangle \rangle\,:\!=\langle f_{1},f_{2} \rangle_{\mathcal{H}_{1}}\langle g_{1}, g_{2} \rangle_{\mathcal{H}_{2}}$ is symmetric, well defined, and positive definite, and thus defines a scalar product on $\mathcal{K}$ .

Proof. This is a well-known result; for details, see Reed and Simon’s book [Reference Reed and Simon22].

Observe that $\mathcal{K}$ endowed with $\langle\langle\cdot,\cdot \rangle \rangle$ is not complete. In the next three definitions we introduce the notion of a tensor product between Hilbert spaces, and the symmetrisation and contraction of a tensor product.

Definition 2.3

The tensor product of the Hilbert spaces $\mathcal{H}_{1}$ and $\mathcal{H}_{2}$ is the Hilbert space $\mathcal{H}_{1}\otimes \mathcal{H}_{2}$ defined to be the completion of $\mathcal{K}$ under the scalar product $\langle\langle\cdot,\cdot \rangle \rangle$ . Furthermore, we denote by $\mathcal{H}_{1}^{\otimes n}$ the n-fold tensor product between $\mathcal{H}_{1}$ and itself.

Definition 2.4

If $\smash{f\in\mathcal{H}^{\otimes n}}$ is of the form $f=h_{1}\otimes \cdots\otimes h_{n}$ for $h_{1},\ldots, h_{n}\in\mathcal{H}$ , then the symmetrisation of f, denoted by $\tilde{f}$ , is defined by $\tilde{f}\,:\!=({1}/{n!})\smash{\sum_{\sigma}h_{\sigma(1)}\otimes \cdots\otimes h_{\sigma(n)}}$ , where the sum is taken over all permutations $\sigma$ of $\{1,\ldots, n\}$ . The closed subspace of $\smash{\mathcal{H}^{\otimes n}}$ generated by the elements of the form $\tilde{f}$ is called the n-fold symmetric tensor product of $\mathcal{H}$ , and is denoted by $\smash{\mathcal{H}^{\odot n}}$ .

Definition 2.5

Let $g=g_{1}\otimes\cdots\otimes g_{n}\in\smash{\mathcal{H}^{\otimes n}}$ and $h=h_{1}\otimes\cdots\otimes h_{m}\in\smash{\mathcal{H}^{\otimes m}}$ . For any $0\leq p\leq n\wedge m$ , we define the pth contraction of $g\otimes h$ as the following element of $\smash{\mathcal{H}^{\otimes (m+n-2p)}}\colon g\otimes_{p} h\,:\!=\langle g_{1},h_{1} \rangle_{\mathcal{H}}\cdots \langle g_{p},h_{p} \rangle_{\mathcal{H}}\, g_{p+1}\otimes\cdots\otimes g_{n}\otimes h_{p+1}\otimes\cdots\otimes h_{m}$ . Note that, even if g and h are symmetric, their pth contraction is not, in general, a symmetric tensor. We therefore denote by $g\tilde{\otimes}_{p}h$ its symmetrisation.
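Symmetrisation and contraction can be illustrated concretely by taking $\mathcal{H}=\mathbb{R}^{3}$ with the Euclidean inner product (a finite-dimensional stand-in of our choosing), so that simple tensors become outer products of vectors:

```python
import numpy as np

# H = R^3 with the Euclidean inner product; a simple tensor g1 (x) g2 is
# then just the outer product of the two vectors.
g1, g2 = np.array([1.0, 2.0, 0.0]), np.array([0.0, 1.0, 1.0])
h1, h2 = np.array([2.0, 0.0, 1.0]), np.array([1.0, 1.0, 0.0])

g = np.multiply.outer(g1, g2)
h = np.multiply.outer(h1, h2)

# Symmetrisation of g = g1 (x) g2: average over the two permutations.
g_sym = 0.5 * (np.multiply.outer(g1, g2) + np.multiply.outer(g2, g1))

# 1st contraction of g (x) h: <g1, h1> g2 (x) h2, an element of H^{(x)2}.
contr1 = (g1 @ h1) * np.multiply.outer(g2, h2)

# Full (2nd) contraction: the scalar <g1, h1><g2, h2>, which equals the
# inner product <<g, h>> of Lemma 2.1 for simple tensors.
contr2 = (g1 @ h1) * (g2 @ h2)
```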

Let us now move to the discussion of multiple integrals in the Malliavin calculus setting (see Section 2.7 of [Reference Nourdin and Peccati20]). We denote by $I_{p}\colon \smash{\mathcal{H}^{\odot p}\rightarrow\mathcal{W}_{p}}$ the isometry from the symmetric tensor product $\mathcal{H}^{\odot p}$ , equipped with the norm $\smash{\sqrt{p!}\|\cdot\|_{\mathcal{H}^{\otimes p}}}$ , onto the pth Wiener chaos $\mathcal{W}_{p}$ . In other words, the image of a pth multiple integral lies in the pth Wiener chaos. The first property that we are going to present is the isometry property of integrals.

Proposition 2.1

Fix integers $1\leq q\leq p$ , as well as $f\in\mathcal{H}^{\odot p}$ and $g\in\mathcal{H}^{\odot q}$ . We have

$$ \begin{equation*} \mathbb{E}[I_{p}(f)I_{q}(g)]= \begin{cases} p!\langle f,g\rangle_{\mathcal{H}^{\otimes p}} & \text{if }p=q, \\ 0 & \text{otherwise}. \end{cases} \end{equation*} $$

Proof. See Proposition 2.7.5 of [Reference Nourdin and Peccati20].

Moreover, we have the following product formula for multiple integrals.

Theorem 2.1

Let $p,q\geq 1$ . If $f\in\mathcal{H}^{\odot p}$ and $g\in\mathcal{H}^{\odot q}$ then

$$ \begin{equation*} I_{p}(f)I_{q}(g)=\sum_{r=0}^{p\wedge q}r!\,\binom{p}{r}\binom{q}{r} I_{p+q-2r}(f\tilde{\otimes}_{r}g). \end{equation*} $$

Proof. See Theorem 2.7.10 of [Reference Nourdin and Peccati20].

Similarly to [Reference Granelli and Veraart17] we apply the product formula for multiple integrals to conclude that, for $i,j=1,\ldots, p,$

\begin{align*} \dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}}\dfrac{\Delta^{n}_{l}G^{(\,j)}} {\tau_{n}^{(\,j)}}&=I_{1}\bigg(\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \bigg)I_{1}\bigg(\dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg) \\ &=\sum_{r=0}^{1}r!\,\binom{1}{r}\binom{1}{r}I_{2-2r} \bigg(\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}}\tilde{\otimes}_{r} \dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}\bigg) \\ &=I_{2}\bigg(\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \tilde{\otimes}\dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}\bigg)+ \mathbb{E}\bigg[\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg], \end{align*}

so that

\begin{align*} I_{2}\bigg(\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \tilde{\otimes}\dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}\bigg) =\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}}\dfrac{\Delta^{n}_{l} G^{(\,j)}}{\tau_{n}^{(\,j)}}-\mathbb{E}\bigg[\dfrac{\Delta^{n}_{l}G^{(i)}} {\tau_{n}^{(i)}}\dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg]. \end{align*}
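This identity can be checked numerically: if $X=I_{1}(\,f)$ and $Y=I_{1}(g)$ are standard Gaussian with correlation $\rho$ , then $XY-\rho$ is the second-chaos element above and, by the isometry property (Proposition 2.1), has mean 0 and variance $2\|\,f\tilde{\otimes}g\|^{2}_{\mathcal{H}^{\otimes2}}=1+\rho^{2}$ . A seeded Monte Carlo sketch (our illustration, with an assumed value of $\rho$ ):

```python
import numpy as np

# X = I_1(f) and Y = I_1(g) standard Gaussian with correlation rho, so
# XY - E[XY] = I_2(f (sym-tensor) g) should have mean 0 and variance 1 + rho^2.
rng = np.random.default_rng(2)
rho, m = 0.7, 1_000_000
X, Y = rng.multivariate_normal([0.0, 0.0], [[1.0, rho], [rho, 1.0]], size=m).T
chaos2 = X * Y - rho           # sample from the second Wiener chaos element
```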

Now, we introduce the space $\mathcal{D}([0,T],\mathbb{R}^{n}) $ . This space is the set of all càdlàg functions from [0,T] to $\mathbb{R}^{n}$ and is called the Skorokhod space. The norm on this space is defined as $\|f\|_{\mathcal{D}([0,T],\mathbb{R}^{n})}\,:\!=\sup_{t\in[0,T]} \|f(t)\|_{\mathbb{R}^{n}},$ where $f\in\mathcal{D}([0,T],\mathbb{R}^{n}) $ and $\|\cdot\|_{\mathbb{R}^{n}}$ is any norm on $\mathbb{R}^{n}$ (since $\mathbb{R}^{n}$ is a finite-dimensional vector space, all norms on it are equivalent). This metric works well for $C([0,T],\mathbb{R}^{n}) $ (the space of continuous functions from [0,T] to $\mathbb{R}^{n}$ ), but it is stronger than the usual Skorokhod metric $J_{1}$ (or $M_{1}$ ). However, in this paper the limits to which our random elements (i.e. random variables and stochastic processes) converge are continuous, and in this case these metrics are all equivalent.

Let us recall some results on stable convergence. We use the notation ‘ $\smash{ \xrightarrow{\text{u.c.p.}} }$ ’, ‘ $\smash{ \xrightarrow{\text{P}} }$ ’, ‘ $\smash{ \xrightarrow{\text{st}} }$ ’, and ‘ $\smash{ \xrightarrow{\text{D}} }$ ’ for convergence uniformly on compacts in probability, convergence in probability, stable convergence, and convergence in distribution, respectively. Recall that a sequence of stochastic processes $ (X_{n}) $ converges to the limit X uniformly on compacts in probability if $ {\mathbb P}(\sup_{s\le t}\vert \smash{X^n_s}-X_s\vert>\epsilon)\rightarrow 0$ as $n\rightarrow\infty$ for each $t,\epsilon>0$ . In the case of the Skorokhod space with uniform metric, suppose that $X_{n},X$ are $\mathcal{D}([0,T],\mathbb{R}^{d}) $ -valued stochastic processes defined on the same filtered probability space. Then ${X_{n} \xrightarrow{\text{P}} X}$ if and only if $\smash{X_{n} \xrightarrow{\text{u.c.p.}} X}$ , since both statements amount to $\lim_{n\rightarrow\infty}\mathbb{P}(\sup_{t\in[0,T]}\| X_{n,t}-X_{t}\|_{\mathbb{R}^{d}}>\epsilon)=0$ for every $\epsilon>0$ .

Remark 2.3

To make the notation less cumbersome, and when it does not create confusion, we avoid writing ‘as $n\rightarrow\infty$ ’. For example, we simply write $\smash{X_{n} \xrightarrow{\text{P}} X}$ instead of ‘ $\smash{X_{n} \xrightarrow{\text{P}} X}$ as $n\rightarrow\infty$ ’.

Theorem 2.2

(Continuous mapping theorem.) Let (S, m) be a metric space, and let $ (S_{n},m)\subset (S,m) $ be arbitrary subsets and $g_{n}\colon (S_{n},m)\mapsto (E,\mu) $ be arbitrary maps $ (n\in\mathbb{N}\cup\{0\}) $ such that, for every sequence $x_{n}\in (S_{n},m) $ , if $x_{n'}\rightarrow x$ along a subsequence and $x\in(S_{0},m) $ then $g_{n'}(x_{n'})\rightarrow g_{0}(x) $ . Then, for arbitrary maps $X_{n}\colon \Omega_{n}\mapsto (S_{n},m) $ and every random element X with values in $ (S_{0},m) $ such that $g_{0}(X) $ is a random element in $ (E,\mu) $ , if $\smash{X_{n} \xrightarrow{\text{D}} X}$ then $\smash{g_{n}(X_{n}) \xrightarrow{\text{D}} g_{0}(X)}$ ; if ${X_{n} \xrightarrow{\text{P}} X}$ then ${g_{n}(X_{n}) \xrightarrow{\text{P}} g_{0}(X)}$ ; and if $\smash{X_{n}\stackrel{a.s.}{\rightarrow}X}$ then $\smash{g_{n}(X_{n})\stackrel{a.s.}{\rightarrow}g_{0}(X)}$ .

Proof. See Theorem 18.11 of [Reference Van der Vaart23].

Note that (S, m) might be a function space like the Skorokhod space endowed with the uniform metric. For stable convergence, we have the following theorem; see Theorem 1 of [Reference Aldous and Eagleson1].

Theorem 2.3

Let $X_{n}$ be random elements defined on the same probability space $ (\Omega,\mathcal{F},\mathbb{P}) $ . Suppose that $\smash{X_{n} \xrightarrow{\text{st}} X}$ , that $\sigma$ is any fixed $\mathcal{F}$ -measurable random variable, and that g(x, y) is a continuous function of two variables. Then $\smash{g(X_{n},\sigma) \xrightarrow{\text{st}} g(X,\sigma)}$ .

Proof. It follows from Theorem 2.2 and the definition of stable convergence.

Proposition 2.2

Let $X_{n}, Y_{n},$ and Y be random elements defined on the same probability space, and assume that $Y_{n} \xrightarrow{\text{P}} Y$ and $\smash{X_{n} \xrightarrow{\text{st}} X}$ . Then $\smash{(X_{n},Y_{n}) \xrightarrow{\text{st}} (X,Y)}$ .

Proof. See Section 2 of [Reference Jacod18].

We end this section with some asymptotic results. We start by reporting a simplified version of Theorem 6.2.3 of [Reference Nourdin and Peccati20].

Theorem 2.4

Let $b\geq 2$ and consider

$$ \begin{equation*} \textbf{F}_{n}\,:\!=(I_{2}(f_{1,n}),\ldots, I_{2}(f_{b,n})). \end{equation*} $$

Let ${\boldsymbol C}\in \mathcal{M}^{b\times b} (\mathbb{R}) $ be a symmetric, nonnegative definite matrix, and let $\textbf{N}\sim\mathcal{N}_{b}(0,{\boldsymbol C}) $ . Assume that, for any $r,s=1,\ldots, b,$

$$ \begin{equation*} \lim_{n\rightarrow\infty}\mathbb{E}[I_{2}(f_{s,n})I_{2}(f_{r,n})] =({\boldsymbol C})_{s,r} \quad\text{and}\quad I_{2}(f_{s,n}) \xrightarrow{\text{D}} \mathcal{N} (0, ({\boldsymbol C})_{s,s}). \end{equation*} $$

Then $\smash{\textbf{F}_{n} \xrightarrow{\text{D}} \textbf{N}}$ as $n\rightarrow\infty$ .

Proof. See Theorem 6.2.3 of [Reference Nourdin and Peccati20].

We have the following simple corollary of Theorem 5.2.7 and Theorem 6.2.3 of [Reference Nourdin and Peccati20].

Corollary 2.1

Let $\textbf{F}_{n}$ and ${\boldsymbol C}$ be as in Theorem 2.4. Assume that, for any $r,s=1,\ldots, b$ ,

$$ \begin{equation*} \lim_{n\rightarrow\infty}\mathbb{E}[I_{2}(f_{s,n})I_{2}(f_{r,n})] =({\boldsymbol C})_{s,r} \quad\text{and}\quad \lim_{n\rightarrow\infty}\|f_{s,n}\otimes_{1}f_{s,n} \|_{\mathcal{H}^{\otimes 2}}=0. \end{equation*} $$

Then $\smash{\textbf{F}_{n} \xrightarrow{\text{D}} \textbf{N}}$ as $n\rightarrow\infty$ .

Moreover, we report here the part of Proposition 2 of [Reference Aldous and Eagleson1] needed in this work. This result concerns mixing limits, which are defined as follows. Let $Y_{n}$ be a sequence of random variables in $ (\Omega,\mathcal{F},\mathbb{P}) $ converging stably to a random variable Y. If Y can be taken to be independent of $\mathcal{F}$ then the limit is said to be mixing. We denote it by $\smash{Y_{n} \xrightarrow{\text{mixing}} Y}$ .

Proposition 2.3

Suppose that $\smash{Y_{n} \xrightarrow{\text{D}} Y}$ . Then the following statements are equivalent:

  1. (i) $\smash{Y_{n} \xrightarrow{\text{mixing}} Y}$ ,

  2. (ii) for all fixed $k\in\mathbb{N}$ and $B\in \sigma(Y_{1},\ldots, Y_{k}) $ with $\mathbb{P}(B)>0$ ,

    $$ \begin{equation*} \lim_{n\rightarrow\infty}\mathbb{P}(Y_{n}\leq x\mid B)=F_{Y}(x), \end{equation*} $$
    where $F_{Y}(x) $ is the distribution function of Y.

Proof. See Proposition 2 of [Reference Aldous and Eagleson1].

3. Joint CLT for multivariate Gaussian processes with stationary increments

As mentioned in the introduction, one of the differences from previous works on limit theorems for BSS processes is that we use a different scaling factor $\tau$ . In this section we present two cases. For the first case we use the same $\tau$ used in the literature, while in the second we use a new formulation. The differences between the two approaches will be pointed out subsequently.

Remark 3.1

Throughout this section $\{{\boldsymbol G}_{t} \}_{t\in[0,T]}$ is a general multivariate Gaussian process with stationary increments. Thus, it is not necessarily the Gaussian core.

3.1. Case

For $i,j=1,\ldots, p$ and $k\in\mathbb{N}$ , let us define the scaling factor by

(3.1) $$ \begin{equation} \tau_{n}^{(\,j)}\,:\!=\sqrt{\mathbb{E}[(\Delta^{n}_{1}G^{(\,j)})^{2}]}, \label{eqn2} \end{equation} $$

and the multivariate process $\smash{\{\textbf{Z}_{t}^{n}\}_{t\in[0,T]}=\{(Z_{(1,1),t}^{n},\ldots, Z_{(\,p,p),t}^{n})^{\top}\}_{t\in[0,T]}}$ as

$$ \begin{equation*} Z_{(i,j),t}^{n}\,:\!=\frac{1}{\sqrt{n}\,}\sum_{l=1}^{\lfloor nt\rfloor}I_{2} \bigg(\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}}\tilde{\otimes} \dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}\bigg), \quad\text{and}\quad r_{i,j}^{(n)}(k)\,:\!=\mathbb{E}\bigg[\dfrac{\Delta^{n}_{1}G^{(i)}} {\tau_{n}^{(i)}}\dfrac{\Delta^{n}_{1+k}G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg]. \end{equation*} $$

Thanks to Theorem 2.7.7 of [Reference Nourdin and Peccati20] (reported in this work as Theorem 2.1), $\smash{Z_{(i,j),t}^{n}}$ belongs to $\mathcal{W}_{2}$ , namely the second Wiener chaos. In addition, we will consider the following assumption on the correlation. It states that, uniformly in n, the squared autocorrelations $\smash{(r_{i,j}^{(n)}(k))^{2}}$ are summable, which means that $\smash{r_{i,j}^{(n)}(k)}$ goes to 0 sufficiently fast as $k\rightarrow\infty$ .

Assumption 3.1

Let the limit $\smash{\lim_{n\rightarrow\infty}r_{i,j}^{(n)}(k)}$ exist for any $k\in\mathbb{N},$ and let $ (\xi(k))_{k\in\mathbb{N}}$ be a sequence such that, for any $k, n \in\mathbb{N}$ , $\smash{(r_{i,j}^{(n)}(k))^{2} \leq \xi(k)}$ and $\smash{\sum_{k=1}^{\infty}\xi(k)}<\infty$ for $i,j=1,\ldots, p$ .
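A concrete example satisfying this kind of decay condition (our illustration, not part of the paper's argument) is the increment sequence of fractional Brownian motion with Hurst index $H<3/4$ , whose autocorrelation function is explicit; the sketch below checks the summability of its squares numerically:

```python
import numpy as np

def fbm_increment_autocorrelation(k, H):
    """Autocorrelation r(k) of the standardised increments of fractional
    Brownian motion with Hurst index H; by self-similarity it is the same
    for every n, so only its decay in k matters here."""
    k = np.asarray(k, dtype=float)
    return 0.5 * (np.abs(k + 1) ** (2 * H) + np.abs(k - 1) ** (2 * H)
                  - 2.0 * np.abs(k) ** (2 * H))

# For H < 3/4 one has r(k) ~ H(2H - 1) k^{2H-2}, so the squared
# autocorrelations are summable; for H >= 3/4 the sum diverges and the
# usual CLT scaling fails.
ks = np.arange(1, 10_000)
tail_sum = np.sum(fbm_increment_autocorrelation(ks, 0.6) ** 2)
```

For $H=1/2$ (Brownian motion) the increments are uncorrelated and $r(k)=0$ for all $k\geq1$ , so the condition holds trivially.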

Remark 3.2

Let $\smash{\lim_{n\rightarrow\infty}\sum_{k=1}^{\infty} (r_{i,j}^{(n)}(k))^{2}} <\infty$ for $i,j=1,\ldots, p$ . For $x,y,z,w=1,\ldots, p$ , the following limit holds:

\begin{align*} \lim_{n\rightarrow\infty}\bigg(&\frac{1}{n}\sum_{k=1}^{n}k(r_{x,z}^{(n)}(k) r_{y,w}^{(n)}(k)+r_{y,z}^{(n)}(k)r_{x,w}^{(n)}(k)) \\ &+\frac{1}{n}\sum_{k=n+1}^{2n-1}(2n-k)(r_{x,z}^{(n)}(k)r_{y,w}^{(n)}(k) +r_{y,z}^{(n)}(k)r_{x,w}^{(n)}(k))\bigg)=0. \end{align*}

This is because, given two sequences $\{a_{k}\}_{k\in\mathbb{N}}$ and $\{b_{k}\}_{k\in\mathbb{N}},$ if $\smash{\sum_{k=1}^{\infty}(a_{k})^{2}}<\infty$ and $\smash{\sum_{k=1}^{\infty}(b_{k})^{2}}<\infty$ then $\smash{\lim_{l\rightarrow\infty}({1}/{l})\sum_{k=1}^{l}k(a_{k})^{2}}=0$ and $\smash{\lim_{l\rightarrow\infty}({1}/{l})\sum_{k=1}^{l}k(b_{k})^{2}} =0,$ which imply that $\smash{\lim_{l\rightarrow\infty}({1}/{l}) \sum_{k=1}^{l}k a_{k}b_{k}} =0$ (see the use of Condition (3.9) of [Reference Barndorff-Nielsen, Corcuera and Podolskij7]).
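The Cesàro-type step used here can be illustrated numerically with the square-summable sequence $a_{k}=k^{-0.8}$ (an assumed example sequence): the weighted averages $({1}/{l})\sum_{k=1}^{l}k(a_{k})^{2}$ indeed tend to 0.

```python
import numpy as np

def weighted_average(l, a):
    """(1/l) * sum_{k=1}^{l} k * a(k)^2 for a given sequence a."""
    k = np.arange(1, l + 1, dtype=float)
    return np.sum(k * a(k) ** 2) / l

# a_k = k^{-0.8} is square-summable, and the weighted averages above
# decay to 0 as l grows, as claimed.
a = lambda k: k ** -0.8
vals = [weighted_average(l, a) for l in (10, 1_000, 100_000)]
```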

We now present two propositions regarding the process $\{\textbf{Z}_{t}^{n}\}_{t\in[0,T]}$ , which will lead us to the main theorem of this section. The first proposition concerns the convergence of the finite-dimensional distributions, while the second concerns the tightness of the law of the process.

Proposition 3.1

Let $d\in\mathbb{N},$ and let $ (a_{l},b_{l}]$ be pairwise disjoint intervals in [0,T] where $l=1,\ldots,d$ . Consider

\begin{align*} \textbf{Z}_{b_{l}}^{n}-\textbf{Z}_{a_{l}}^{n}=(&Z_{(1,1),b_{l}}^{n}- Z_{(1,1),a_{l}}^{n},\ldots, Z_{(1,p),b_{l}}^{n}-Z_{(1,p), a_{l}}^{n},Z_{(2,1),b_{l}}^{n} -Z_{(2,1),a_{l}}^{n},\ldots, \\ &Z_{(\,p,p),b_{l}}^{n}-Z_{(\,p,p),a_{l}}^{n}). \end{align*}

Then, under Assumption 3.1, $\smash{(\textbf{Z}_{b_{l}}^{n}-\textbf{Z}_{a_{l}}^{n})_{1\leq l\leq d} \xrightarrow{\text{D}}} ({\boldsymbol D}^{{1}/{2}}(B_{b_{l}}-B_{a_{l}}))_{1\leq l\leq d}$ as $n\rightarrow\infty$ , where ${\boldsymbol D}\in\mathcal{M}^{p^{2}\times p^{2}}(\mathbb{R}) $ and $B_{t}$ is a $p^{2}$ -dimensional Brownian motion. In particular, associating for each combination (i,j) a combination ((x,y),(z,w)), where $x,y,z,w=1,\ldots, p$ , using the formula $ (i,j)\leftrightarrow((\lfloor {(i-1)}/{p} \rfloor+1,i-p\lfloor {(i-1)}/{p} \rfloor),(\lfloor {(\,j-1)}/{p} \rfloor+1,j-p\lfloor {(\,j-1)}/{p} \rfloor)) $ , we have

\begin{align*} ({\boldsymbol D})_{ij}&=({\boldsymbol D})_{(x,y),(z,w)} \\ &=\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n-1}(n-k) (r_{x,z}^{(n)}(k)r_{y,w}^{(n)}(k)+r_{y,z}^{(n)}(k)r_{x,w}^{(n)}(k) +r_{z,x}^{(n)}(k)r_{w,y}^{(n)}(k)+r_{z,y}^{(n)}(k)r_{w,x}^{(n)}(k)) \\ &\quad\,+(r_{x,z}^{(n)}(0)r_{y,w}^{(n)}(0)+r_{y,z}^{(n)}(0)r_{x,w}^{(n)}(0)). \end{align*}

Proof. In order to prove this result, we need to use Corollary 2.1. Note that now $I_{2}(f_{s,n}) $ takes the form of $\smash{Z_{(x,y),b_{l}}^{n}-Z_{(x,y),a_{l}}^{n}}$ and $b=p^{2}d$ .

First, we compute the covariances. Note that it is sufficient to focus on the case in which $l=2, a_{1}=0,b_{1}=a_{2}=1,$ and $b_{2}=2$ . Recall the isometry property of integrals (i.e. Proposition 2.1) and that, for $f_{1},g_{1},f_{2},g_{2}\in\mathcal{H},$ we have $\langle f_{1}\otimes g_{1},f_{2}\otimes g_{2}\rangle_{\mathcal{H}^{\otimes 2}}\,:\!=\langle f_{1},f_{2}\rangle_{\mathcal{H}}\langle g_{1},g_{2}\rangle_{\mathcal{H}}$ . Then, for $x,y,z,w=1,\ldots, p$ , we have

$$ \begin{align*} &\mathbb{E}[(Z_{(x,y),1}^{n}-Z_{(x,y),0}^{n})(Z_{(z,w),2}^{n}-Z_{(z,w) ,1}^{n})] \\ &\qquad=\frac{2}{n}\bigg\langle\sum_{i=1}^{n}\dfrac{\Delta^{n}_{i} G^{(x)}}{\tau_{n}^{(x)}}\tilde{\otimes}\dfrac{\Delta^{n}_{i}G^{(y)}} {\tau_{n}^{(y)}},\sum_{j=n+1}^{2n}\dfrac{\Delta^{n}_{j}G^{(z)}} {\tau_{n}^{(z)}}\tilde{\otimes}\dfrac{\Delta^{n}_{j}G^{(w)}} {\tau_{n}^{(w)}} \bigg\rangle_{\mathcal{H}^{\otimes 2}} \\ &\qquad=\frac{1}{2n}\sum_{i=1}^{n}\sum_{j=n+1}^{2n}\bigg\langle \dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}}\otimes\dfrac{\Delta^{n}_{i} G^{(y)}}{\tau_{n}^{(y)}},\dfrac{\Delta^{n}_{j}G^{(z)}}{\tau_{n}^{(z)}} \otimes\dfrac{\Delta^{n}_{j}G^{(w)}}{\tau_{n}^{(w)}} \bigg\rangle_{\mathcal{H}^{\otimes 2}} \\ &\qquad +\bigg\langle\dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}}\otimes \dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}},\dfrac{\Delta^{n}_{j}G^{(z)}} {\tau_{n}^{(z)}}\otimes\dfrac{\Delta^{n}_{j}G^{(w)}}{\tau_{n}^{(w)}} \bigg\rangle_{\mathcal{H}^{\otimes 2}} \\ &\qquad +\bigg\langle\dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}} \otimes\dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}},\dfrac{\Delta^{n}_{j} G^{(w)}}{\tau_{n}^{(w)}}\otimes\dfrac{\Delta^{n}_{j}G^{(z)}} {\tau_{n}^{(z)}} \bigg\rangle_{\mathcal{H}^{\otimes 2}} \\ &\qquad +\bigg\langle\dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}}\otimes \dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}},\dfrac{\Delta^{n}_{j} G^{(w)}}{\tau_{n}^{(w)}}\otimes\dfrac{\Delta^{n}_{j}G^{(z)}} {\tau_{n}^{(z)}} \bigg\rangle_{\mathcal{H}^{\otimes 2}} \\ &\qquad=\frac{1}{2n}\sum_{i=1}^{n}\sum_{j=n+1}^{2n}\mathbb{E} \bigg[\dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}} \dfrac{\Delta^{n}_{j}G^{(z)}}{\tau_{n}^{(z)}}\bigg]\mathbb{E} \bigg[\dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}} \dfrac{\Delta^{n}_{j}G^{(w)}}{\tau_{n}^{(w)}}\bigg] \\ &\qquad +\mathbb{E}\bigg[\dfrac{\Delta^{n}_{i}G^{(y)}} {\tau_{n}^{(y)}} \dfrac{\Delta^{n}_{j}G^{(z)}} {\tau_{n}^{(z)}}\bigg]\mathbb{E} \bigg[\dfrac{\Delta^{n}_{i} G^{(x)}}{\tau_{n}^{(x)}} 
\dfrac{\Delta^{n}_{j}G^{(w)}} {\tau_{n}^{(w)}}\bigg] \\ &\qquad +\mathbb{E}\bigg[\dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}} \dfrac{\Delta^{n}_{j}G^{(w)}}{\tau_{n}^{(w)}}\bigg]\mathbb{E} \bigg[\dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}} \dfrac{\Delta^{n}_{j}G^{(z)}}{\tau_{n}^{(z)}}\bigg] \\ &\qquad +\mathbb{E} \bigg[\dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}} \dfrac{\Delta^{n}_{j}G^{(w)}}{\tau_{n}^{(w)}}\bigg]\mathbb{E} \bigg[\dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}}\dfrac{\Delta^{n}_{j} G^{(z)}}{\tau_{n}^{(z)}}\bigg] \\ &\qquad=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=n+1}^{2n}\mathbb{E} \bigg[\dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}} \dfrac{\Delta^{n}_{j}G^{(z)}}{\tau_{n}^{(z)}}\bigg]\mathbb{E} \bigg[\dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}} \dfrac{\Delta^{n}_{j}G^{(w)}}{\tau_{n}^{(w)}}\bigg] \\ &\qquad +\mathbb{E}\bigg[\dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}} \dfrac{\Delta^{n}_{j}G^{(z)}}{\tau_{n}^{(z)}}\bigg]\mathbb{E} \bigg[\dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}} \dfrac{\Delta^{n}_{j}G^{(w)}}{\tau_{n}^{(w)}}\bigg] \\ &\qquad=\frac{1}{n}\sum_{k=1}^{n}k(r_{x,z}^{(n)}(k)r_{y,w}^{(n)}(k) +r_{y,z}^{(n)}(k)r_{x,w}^{(n)}(k)) \\ &\qquad +\frac{1}{n}\sum_{k=n+1}^{2n-1}(2n-k) (r_{x,z}^{(n)}(k)r_{y,w}^{(n)}(k)+r_{y,z}^{(n)}(k)r_{x,w}^{(n)}(k)). \end{align*} $$

Hence, we have, for $x,y,z,w=1,\ldots, p$ ,

$$ \begin{align*} &\mathbb{E}[(Z_{(x,y),1}^{n}-Z_{(x,y),0}^{n})(Z_{(z,w),2}^{n}-Z_{(z,w), 1}^{n})] \\ &\qquad=\frac{1}{n}\sum_{k=1}^{n}k(r_{x,z}^{(n)}(k)r_{y,w}^{(n)}(k) +r_{y,z}^{(n)}(k)r_{x,w}^{(n)}(k)) \\ &\qquad +\frac{1}{n}\sum_{k=n+1}^{2n-1}(2n-k) (r_{x,z}^{(n)}(k)r_{y,w}^{(n)}(k) +r_{y,z}^{(n)}(k)r_{x,w}^{(n)}(k)). \end{align*} $$

Under Assumption 3.1, we have $\smash{\lim_{n\rightarrow\infty}\mathbb{E}[(Z_{(x,y),1}^{n}-Z_{(x,y) ,0}^{n})} \smash{(Z_{(z,w),2}^{n}-Z_{(z,w),1}^{n})]}=0$ .

For the variances, following similar computations as above and using the fact that $r_{x,z}^{(n)}(-k)=\smash{r_{z,x}^{(n)}(k)}$ , we have

(3.2) \begin{align} &\mathbb{E}[(Z_{(x,y),1}^{n}-Z_{(x,y),0}^{n})(Z_{(z,w),1}^{n}- Z_{(z,w),0}^{n})] \nonumber \\ &\qquad=\frac{2}{n}\bigg\langle\sum_{i=1}^{n}\dfrac{\Delta^{n}_{i} G^{(x)}}{\tau_{n}^{(x)}}\tilde{\otimes}\dfrac{\Delta^{n}_{i}G^{(y)}} {\tau_{n}^{(y)}},\sum_{j=1}^{n}\dfrac{\Delta^{n}_{j}G^{(z)}} {\tau_{n}^{(z)}}\tilde{\otimes}\dfrac{\Delta^{n}_{j}G^{(w)}} {\tau_{n}^{(w)}} \bigg\rangle_{\mathcal{H}^{\otimes 2}} \nonumber \\ &\qquad=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{n}r_{x,z}^{(n)}(i-j) r_{y,w}^{(n)}(i-j)+r_{y,z}^{(n)}(i-j)r_{x,w}^{(n)}(i-j) \nonumber \\ &\qquad=\frac{1}{n}\sum_{i=1}^{n}\sum_{j=1}^{i}r_{x,z}^{(n)}(i-j) r_{y,w}^{(n)} (i-j)+r_{y,z}^{(n)}(i-j)r_{x,w}^{(n)}(i-j) \nonumber \\ &\qquad +\frac{1}{n}\sum_{i=1}^{n} \sum_{j=i+1}^{n}r_{z,x}^{(n)} (\,j-i)r_{w,y}^{(n)}(\,j-i)+r_{z,y}^{(n)}(\,j-i) r_{w,x}^{(n)}(\,j-i) \nonumber \\ &\qquad=\frac{1}{n}\sum_{k=1}^{n-1}(n-k)(r_{x,z}^{(n)}(k)r_{y,w}^{(n)}(k) +r_{y,z}^{(n)}(k)r_{x,w}^{(n)}(k)+r_{z,x}^{(n)}(k)r_{w,y}^{(n)}(k) +r_{z,y}^{(n)}(k)r_{w,x}^{(n)}(k)) \nonumber \\ &\qquad +(r_{x,z}^{(n)}(0)r_{y,w}^{(n)}(0) +r_{y,z}^{(n)}(0) r_{x,w}^{(n)}(0)). \label{eqn3}\end{align}

Under Assumption 3.1, we have $\lim_{n\rightarrow\infty}\mathbb{E}[(Z_{(x,y),1}^{n}- Z_{(x,y),0}^{n})(Z_{(z,w),1}^{n}-Z_{(z,w),0}^{n})]<\infty$ , by using the fact that, for $a,b\in\mathbb{R},$ we have $|ab|\leq \tfrac{1}{2}(a^{2}+b^{2}) $ .

Furthermore, since our result is stated in matrix form, we need to associate an element ((x, y),(z, w)) with $x,y,z,w=1,\ldots, p$ to (i, j) with $i,j=1,\ldots, p^{2}$ . Recall that we have the vector

$$ \begin{equation*} (Z_{(1,1),b_{l}}^{n}-Z_{(1,1),a_{l}}^{n},\ldots, Z_{(1,p),b_{l}}^{n}- Z_{(1,p),a_{l}}^{n},Z_{(2,1),b_{l}}^{n}-Z_{(2,1),a_{l}}^{n},\ldots, Z_{(\,p,p),b_{l}}^{n}-Z_{(\,p,p),a_{l}}^{n}), \end{equation*} $$

and we can rename it as

$$ \begin{equation*} (Z_{(1),b_{l}}^{n}-Z_{(1),a_{l}}^{n},\ldots, Z_{(\,p),b_{l}}^{n}- Z_{(\,p),a_{l}}^{n},Z_{(\,p+1),b_{l}}^{n}-Z_{(\,p+1),a_{l}}^{n},\ldots, Z_{(\,p^{2}),b_{l}}^{n}-Z_{(\,p^{2}),a_{l}}^{n}). \end{equation*} $$

In this way we make the following association between (x, y) and i: $i=1\leftrightarrow (1,1)$ , $i=2\leftrightarrow (1,2)$ , $\ldots$ , $i=p\leftrightarrow (1,p)$ , $i=p+1\leftrightarrow (2,1)$ , $\ldots$ , $i=2p\leftrightarrow (2,p)$ , $i=2p+1\leftrightarrow (3,1)$ , $\ldots$ , $i=p^{2}\leftrightarrow (\,p,p)$ . The same association applies to (z, w) and j. Thus, this can be written compactly as $ (i,j)\leftrightarrow((\lfloor {(i-1)}/{p} \rfloor+1,i-p\lfloor {(i-1)}/{p} \rfloor),(\lfloor {(\,j-1)}/{p} \rfloor+1,j-p\lfloor {(\,j-1)}/{p} \rfloor)) $ . Therefore, we have

$$ \begin{equation*} ({\boldsymbol D})_{i,j}=({\boldsymbol D})_{((\lfloor {(i-1)}/{p} \rfloor+1,i-p\lfloor {(i-1)}/{p} \rfloor),(\lfloor {(\,j-1)}/{p} \rfloor+1,j-p\lfloor {(\,j-1)}/{p} \rfloor))}, \end{equation*} $$

where $ ({\boldsymbol D})_{((x,y),(z,w))}=\lim_{n\rightarrow\infty} \mathbb{E}[(Z_{(x,y),1}^{n}-Z_{(x,y),0}^{n})(Z_{(z,w),1}^{n}- Z_{(z,w),0}^{n})]$ is given by the limit of (3.2).
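The floor-based association between the pair (i, j) and the quadruple ((x, y),(z, w)) is mechanical and can be checked directly; the sketch below (function names are ours) verifies that the formula is a bijection for a small p.

```python
def pair_from_index(i, p):
    """Map i in {1,...,p^2} to (x, y) via
    (floor((i-1)/p) + 1, i - p*floor((i-1)/p))."""
    q = (i - 1) // p
    return (q + 1, i - p * q)

def index_from_pair(x, y, p):
    """Inverse map: (x, y) in {1,...,p}^2 back to i in {1,...,p^2}."""
    return (x - 1) * p + y

p = 3
pairs = [pair_from_index(i, p) for i in range(1, p ** 2 + 1)]
# e.g. i=1 -> (1,1), i=p -> (1,p), i=p+1 -> (2,1), i=p^2 -> (p,p)
```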

Now we have shown the convergence of the variances and covariances for intervals $[a_{l},b_{l}]$ of length 1; however, it is straightforward to change the summation indices from $\smash{\sum_{i=1}^{n}}$ to $\smash{\sum_{i=\lfloor na_{l}\rfloor+1}^{\lfloor nb_{l}\rfloor}}$ , and in addition observe that

$$ \begin{equation*} \lim_{n\rightarrow\infty}\frac{\lfloor nb_{l}\rfloor-\lfloor na_{l} \rfloor}{n}=\lim_{n\rightarrow\infty}\bigg(\frac{nb_{l}-na_{l}}{n}+ O\bigg(\frac{1}{n}\bigg)\bigg)=b_{l}-a_{l}. \end{equation*} $$

The last step is to show that, for $l=1,\ldots,d$ and $x,y=1,\ldots, p$ ,

(3.3) $$ \begin{equation} \bigg\|\bigg( \frac{1}{\sqrt{n}\,}\sum_{i=\lfloor na_{l}\rfloor+1}^{\lfloor nb_{l}\rfloor}\dfrac{\Delta^{n}_{i}G^{(x)}} {\tau_{n}^{(x)}}\tilde{\otimes} \dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}} \bigg)\otimes_{1}\bigg( \frac{1}{\sqrt{n}\,}\sum_{i=\lfloor na_{l}\rfloor+1}^{\lfloor nb_{l}\rfloor}\dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}} \tilde{\otimes}\dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}} \bigg)\bigg\|_{\mathcal{H}^{\otimes 2}}\rightarrow0. \label{eqn4} \end{equation} $$

However, this is true under Assumption 3.1 by using the same computations as carried out in the proof of Theorem 3.2 of [Reference Granelli and Veraart17]. Let us sketch the argument. Without loss of generality, we can focus on $d=1,$ $a_{1}=0,$ and $b_{1}=1$ . By simple computations, it is possible to show that

$$ \begin{equation*} \bigg(\dfrac{\Delta^{n}_{i}G^{(x)}}{\tau_{n}^{(x)}}\tilde{\otimes} \dfrac{\Delta^{n}_{i}G^{(y)}}{\tau_{n}^{(y)}} \bigg)\otimes_{1}\bigg( \dfrac{\Delta^{n}_{j}G^{(x)}}{\tau_{n}^{(x)}}\tilde{\otimes} \dfrac{\Delta^{n}_{j}G^{(y)}}{\tau_{n}^{(y)}} \bigg) =\frac{1}{4}\sum_{\substack{\{a,a'\}=\{x,y\}\\ \{b,b'\}=\{x,y\}}}\!\!r^{(n)}_{a,b}(\,j-i)\dfrac{\Delta^{n}_{i}G^{(a')}} {\tau_{n}^{(a')}}\otimes\dfrac{\Delta^{n}_{j}G^{(b')}}{\tau_{n}^{(b')}}, \end{equation*} $$

and from this we obtain

(3.4) \begin{align} &\frac{1}{n^{2}}\bigg\|\sum_{i,j=1}^{n}\bigg( \dfrac{\Delta^{n}_{i} G^{(x)}}{\tau_{n}^{(x)}}\tilde{\otimes}\dfrac{\Delta^{n}_{i}G^{(y)}} {\tau_{n}^{(y)}} \bigg)\otimes_{1}\bigg(\dfrac{\Delta^{n}_{j}G^{(x)}} {\tau_{n}^{(x)}}\tilde{\otimes}\dfrac{\Delta^{n}_{j}G^{(y)}} {\tau_{n}^{(y)}} \bigg)\bigg\|_{\mathcal{H}^{\otimes 2}}^{2}\label{eqn5}\end{align}
$$ \begin{align} &\qquad=\frac{1}{16n^{2}}\sum_{\substack{\{a,a'\}=\{x,y\}\\ \{ b,b'\}=\{x,y\}\\ \{\alpha,\alpha'\}=\{x,y\}\\\{\beta,\beta'\}=\{x,y\}}} \sum_{i,j,i',j'=1}^{n}r^{(n)}_{a,b}(\,j-i)r^{(n)}_{\alpha,\beta}(\,j'-i') r^{(n)}_{a',\alpha'}(i'-i)r^{(n)}_{b',\beta'}(\,j'-j). \label{eqn6}\end{align} $$

The last step is to show that (3.4) goes to 0 as $n\rightarrow\infty$ . By Hölder’s inequality and some computations, the right-hand side of (3.4) can be bounded by

\begin{align*} &\leq \frac{1}{8}\sum_{\substack{\{a,a'\}=\{x,y\}\\ \{ b,b'\}=\{x,y\}\\ \{\alpha,\alpha'\}=\{x,y\}\\\{\beta,\beta'\}=\{x,y\}}}\sum_{k\in\mathbb{Z}} [(r^{(n)}_{\alpha,\beta}(k))^{2} +(r^{(n)}_{b',\beta'}(k))^{2}] \bigg(\frac{1}{\sqrt{n}\,}\sum_{|i|{<}n}|r^{(n)}_{a,b}(i)|\bigg) \\ &\times \bigg(\frac{1}{\sqrt{n}\,}\sum_{|j|{<}n}|r^{(n)}_{a', \alpha'}(\,j)|\bigg). \end{align*}

Then, by Assumption 3.1 we have

$$ \begin{equation*} \sum_{k\in\mathbb{Z}}[(r^{(n)}_{\alpha,\beta}(k))^{2} +(r^{(n)}_{b',\beta'}(k))^{2}]{<}\infty \end{equation*} $$

and

$$ \begin{equation*} \frac{1}{\sqrt{n}\,}\sum_{|i|{<}n}|r^{(n)}_{a,b}(i)|\rightarrow0, \qquad \frac{1}{\sqrt{n}\,}\sum_{|j|{<}n}|r^{(n)}_{a',\alpha'}(\,j)|\rightarrow 0, \end{equation*} $$

as $n\rightarrow\infty$ , and therefore we obtain the desired convergence (3.3).

Finally, observe that the matrix denoted by ${\boldsymbol C}$ in Corollary 2.1 is here a $d p^{2}\times d p^{2}$ matrix given by

$$ \begin{equation*} {\boldsymbol C}= \begin{pmatrix} (b_{1}-a_{1}){\boldsymbol D} & \textbf{0} &\cdots & \textbf{0} \\ \textbf{0} & \ddots & & \vdots \\ \vdots & & \ddots & \textbf{0}\\ \textbf{0} & \cdots& \textbf{0} & (b_{d}-a_{d}){\boldsymbol D} \end{pmatrix}, \end{equation*} $$

where the $ (b_{l}-a_{l}){\boldsymbol D}$ are $p^{2}\times p^{2}$ matrices and the $\textbf{0}$ matrices are also $p^{2}\times p^{2}$ but composed entirely of zeros. Thus, we obtain the representation in the statement.
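The block-diagonal structure of ${\boldsymbol C}$ can be assembled mechanically as a Kronecker product; a minimal numpy sketch (the placeholder ${\boldsymbol D}$ and the intervals are arbitrary choices of ours):

```python
import numpy as np

def build_C(D, intervals):
    """Block-diagonal d*p^2 x d*p^2 matrix with blocks (b_l - a_l) * D,
    where D is p^2 x p^2 and intervals = [(a_1, b_1), ..., (a_d, b_d)]."""
    lengths = np.array([b - a for a, b in intervals], dtype=float)
    return np.kron(np.diag(lengths), D)

p2 = 4                                    # p = 2, so D is p^2 x p^2 = 4 x 4
D = np.arange(p2 * p2, dtype=float).reshape(p2, p2)   # placeholder entries
C = build_C(D, [(0.0, 0.5), (0.5, 2.0)])              # d = 2 disjoint intervals
```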

Proposition 3.2

Under Assumption 3.1, let $\mathbf{P}^{n}$ be the law of the process $\{\textbf{Z}^{n}_{t}\}_{t\in[0,T]}$ on the Skorokhod space $\mathcal{D}([0,T],\mathbb{R}^{p^{2}}) $ . Then the sequence $\{\mathbf{P}^{n}\}_{n\in\mathbb{N}}$ is tight.

Proof. The result follows from the tightness of the components of the vector $\textbf{Z}^{n}_{t}$ , which is proved by following the same arguments as in the proof of Theorem 4.3 of [Reference Granelli and Veraart17] or of Theorem 7 of [Reference Corcuera14].

In particular, using the computations carried out in the proof of Proposition 3.1 for the variances, we have

\begin{align*} &\mathbb{E}[(Z^{n}_{(x,y),t}-Z^{n}_{(x,y),s})^{2}] \\ &\qquad=\frac{1}{n}\mathbb{E}\bigg[\bigg(\sum_{i=\lfloor ns\rfloor+1}^{\lfloor nt\rfloor}I_{2}\bigg(\dfrac{\Delta^{n}_{i} G^{(x)}}{\tau_{n}^{(x)}}\tilde{\otimes}\dfrac{\Delta^{n}_{i}G^{(y)}} {\tau_{n}^{(y)}}\bigg)\bigg)^{2} \bigg] \\ &\qquad=\frac{\lfloor nt\rfloor-\lfloor ns\rfloor}{n}\frac{1}{\lfloor nt\rfloor-\lfloor ns\rfloor}\mathbb{E}\bigg[\bigg(\sum_{i=1}^{\lfloor nt\rfloor-\lfloor ns\rfloor}I_{2}\bigg(\dfrac{\Delta^{n}_{i}G^{(x)}} {\tau_{n}^{(x)}}\tilde{\otimes}\dfrac{\Delta^{n}_{i}G^{(y)}} {\tau_{n}^{(y)}}\bigg)\bigg)^{2} \bigg] \\ &\qquad=\frac{\lfloor nt\rfloor-\lfloor ns\rfloor}{n}\bigg[\frac{1}{\lfloor nt\rfloor-\lfloor ns\rfloor}\bigg(\sum_{k=1}^{\lfloor nt\rfloor-\lfloor ns\rfloor-1}(\lfloor nt\rfloor-\lfloor ns\rfloor-k)(r_{x,x}^{(n)}(k) r_{y,y}^{(n)}(k)+r_{y,x}^{(n)}(k)r_{x,y}^{(n)}(k) \\ &\qquad\quad +r_{x,x}^{(n)}(k)r_{y,y}^{(n)}(k)+r_{x,y}^{(n)}(k) r_{y,x}^{(n)}(k))\bigg) \\ &\qquad\quad +(r_{x,x}^{(n)}(0)r_{y,y}^{(n)}(0)+r_{y,x}^{(n)}(0)r_{x,y}^{(n)}(0)) \bigg] \\ &\qquad\leq C\frac{\lfloor nt\rfloor-\lfloor ns\rfloor}{n} \end{align*}

for a constant $C>0$ , where we used the fact that the object inside the square brackets converges and is hence bounded. Then, by the equivalence of the $L^{p}$ norms for $1 < p < \infty$ on a fixed (sum of) Wiener chaos (see Theorem 2.7.2 of [Reference Nourdin and Peccati20]), we obtain

$$ \begin{equation*} \mathbb{E}[(Z^{n}_{(x,y),t}-Z^{n}_{(x,y),s})^{4}]^{1/2}\leq C \frac{\lfloor nt\rfloor-\lfloor ns\rfloor}{n}. \end{equation*} $$
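The norm equivalence just used can be sanity-checked on the simplest second-chaos element, $X=Z^{2}-1$ with Z standard normal (our illustrative choice), using only the exact even Gaussian moments $\mathbb{E}[Z^{2m}]=(2m-1)!!$; hypercontractivity gives $\|X\|_{4}\leq(4-1)^{2/2}\|X\|_{2}=3\|X\|_{2}$.

```python
def double_factorial(n):
    # (2m - 1)!! gives the even moments of a standard normal:
    # E[Z^{2m}] = (2m - 1)!!
    return 1 if n <= 0 else n * double_factorial(n - 2)

EZ = {2 * m: double_factorial(2 * m - 1) for m in range(5)}  # E[Z^0], ..., E[Z^8]

# X = Z^2 - 1 lies in the second Wiener chaos; expand the powers of X:
EX2 = EZ[4] - 2 * EZ[2] + 1                           # = 3 - 2 + 1 = 2
EX4 = EZ[8] - 4 * EZ[6] + 6 * EZ[4] - 4 * EZ[2] + 1   # = 105 - 60 + 18 - 4 + 1 = 60
```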

Then, by the Cauchy–Schwarz inequality we have, for any $t\geq r\geq s$ and $\lambda>0,$

\begin{align*} \mathbb{P}(|Z^{n}_{(x,y),t}-Z^{n}_{(x,y),r}|\geq \lambda,\, |Z^{n}_{(x,y),r}-Z^{n}_{(x,y),s}|\geq \lambda) &\leq \frac{C}{\lambda^{4}}\frac{\lfloor nt\rfloor-\lfloor nr\rfloor}{n}\frac{\lfloor nr\rfloor-\lfloor ns\rfloor}{n} \\ &\leq C\frac{(t-s)^{2}}{\lambda^{4}}. \end{align*}

Finally, we obtain tightness by using Theorem 13.5 of [Reference Billingsley13].

Theorem 3.1

Let Assumption 3.1 hold. Then we have

$$ \begin{equation*} \bigg\{\frac{1}{\sqrt{n}\,}\sum_{l=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}-\mathbb{E} \bigg[\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}}\dfrac{\Delta^{n}_{l} G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg] \bigg)_{i,j=1,\ldots, p}\bigg\}_{t\in[0,T]} \xrightarrow{\text{st}} \{{\boldsymbol D}^{{1}/{2}}B_{t}\}_{t\in[0,T]}, \end{equation*} $$

where ${\boldsymbol D}$ and $B_{t}$ are given in Proposition 3.1. In particular, $B_{t}$ is independent of $\smash{G^{(1)},\ldots, G^{(\,p)}}$ and the convergence is in $\mathcal{D}([0,T],\mathbb{R}^{p^{2}}) $ , namely the Skorokhod space equipped with the uniform metric.

Proof. First, note that

$$ \begin{equation*} \bigg\{\frac{1}{\sqrt{n}\,}\sum_{l=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}-\mathbb{E} \bigg[\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg] \bigg)_{i,j=1,\ldots, p}\bigg\}_{t\in[0,T]} \xrightarrow{\text{D}} \{{\boldsymbol D}^{{1}/{2}}B_{t}\}_{t\in[0,T]}, \end{equation*} $$

follows from Theorem 13.1 of [Reference Billingsley13] by using the finite-dimensional distribution convergence proved in Proposition 3.1 and by the tightness proved in Proposition 3.2.

The independence of $B_{t}$ from $\smash{G^{(1)}_{t},\ldots, G^{(\,p)}_{t}}$ is given by the fact that $\smash{G^{(1)}_{t},\ldots, G^{(\,p)}_{t}}$ belong to the first Wiener chaos, while $B_{t}$ is the limiting process of objects belonging to the second Wiener chaos. Moreover, we have

(3.5) $$ \begin{equation} \bigg\{(G_{t}^{(i)})_{i=1,\ldots, p},\ \frac{1}{\sqrt{n}\,}\sum_{l=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}-\mathbb{E} \bigg[\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}}\dfrac{\Delta^{n}_{l} G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg] \bigg)_{i,j=1,\ldots, p}\bigg\}_{t\in[0,T]} \xrightarrow{\text{D}} \{(G_{t}^{(i)})_{i=1,\ldots, p},{\boldsymbol D}^{1/2}B_{t}\}_{t\in[0,T]} \label{eqn7} \end{equation} $$

in the space $\mathcal{D}([0,T],\mathbb{R}^{p}\times\mathbb{R}^{p^{2}}) $ , namely the Skorokhod space equipped with the uniform metric. Note that this result comes from the convergence of the finite-dimensional distributions of (3.5), which follows from the arguments at the beginning of this proof together with the orthogonality of different Wiener chaos, and from the tightness of the law of (3.5), which follows from the tightness of each component of the vector proved in Proposition 3.2.

Concerning the convergence of the finite-dimensional distributions, note that, for any $t\in[0,T],$ each element of $\smash{(G_{t}^{(i)})_{i=1,\ldots, p}}$ and of

$$ \begin{equation*} \frac{1}{\sqrt{n}\,}\sum_{l=1}^{\lfloor nt \rfloor}\bigg(\frac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \frac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}-\mathbb{E} \bigg[\frac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}}\frac{\Delta^{n}_{l} G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg] \bigg)_{i,j=1,\ldots, p} \end{equation*} $$

belong to the first and second Wiener chaos, respectively; hence, for any $i,j,k=1,\ldots, p$ and $s,t\in[0,T]$ , we have

$$ \begin{equation*} \mathbb{E}\bigg[G_{s}^{(k)}\frac{1}{\sqrt{n}\,}\sum_{l=1}^{\lfloor nt \rfloor}\bigg(\frac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \frac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}-\mathbb{E} \bigg[\frac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \frac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg] \bigg)\bigg]=0. \end{equation*} $$

Then, from this argument we have

$$ \begin{equation*} \lim_{n\rightarrow\infty}\mathbb{E}\bigg[G_{s}^{(k)}\frac{1}{\sqrt{n}\,} \sum_{l=1}^{\lfloor nt \rfloor}\bigg(\frac{\Delta^{n}_{l}G^{(i)}} {\tau_{n}^{(i)}}\frac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}- \mathbb{E}\bigg[\frac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \frac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg] \bigg)\bigg]=0, \end{equation*} $$

and note that each element of $\smash{(G_{t}^{(i)})_{i=1,\ldots, p}}$ and of

$$ \begin{equation*} \frac{1}{\sqrt{n}\,}\sum_{l=1}^{\lfloor nt \rfloor}\bigg(\frac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \frac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}-\mathbb{E} \bigg[\frac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \frac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg] \bigg)_{i,j=1,\ldots, p} \end{equation*} $$

converges to a normal distribution. Then, by Theorem 2.4 we obtain the convergence of the finite-dimensional distributions, that is, for any $M\in\mathbb{N}$ and disjoint intervals $[a_{m},b_{m}]$ with $m=1,\ldots, M,$

\begin{align*} &\bigg((G_{b_{m}}^{(i)}-G_{a_{m}}^{(i)})_{i=1,\ldots, p}, \frac{1}{\sqrt{n}\,}\sum_{l=\lfloor na_{m}\rfloor+1}^{\lfloor nb_{m} \rfloor}\bigg(\frac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \frac{\Delta^{n}_{l} G^{(\,j)}}{\tau_{n}^{(\,j)}} \\ &-\mathbb{E}\bigg[\frac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \frac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg] \bigg)_{i,j=1,\ldots, p}\bigg)_{m=1,\ldots, M} \\ &\qquad \xrightarrow{\text{D}} ((G_{b_{m}}^{(i)}-G_{a_{m}}^{(i)})_{i=1,\ldots, p}, {\boldsymbol D}^{1/2}(B_{b_{m}}-B_{a_{m}}))_{m=1,\ldots, M}. \end{align*}

In order to obtain the stable convergence, it is sufficient to use Proposition 2.3. Observe that condition (ii) in that proposition is implied by the convergence in (3.5) using Bayes’ theorem and independence of the limiting process $\{B_{t}\}_{t\in[0,T]}$ from $\smash{(\{G_{t}^{(i)}\}_{t\in[0,T]})_{i=1,\ldots, p}}$ . In particular, note that, by Bayes’ theorem, (3.5) implies that, for all fixed $n_{1},\ldots, n_{k}\in\mathbb{N}$ and all

\begin{align*} &A\in\sigma\bigg(\frac{1}{\sqrt{n_{1}}\,}\sum_{l=1}^{\lfloor n_{1}t \rfloor}\bigg(\frac{\Delta^{n_{1}}_{l}G^{(i)}}{\tau_{n_{1}}^{(i)}} \frac{\Delta^{n_{1}}_{l}G^{(\,j)}}{\tau_{n_{1}}^{(\,j)}}-\mathbb{E} \bigg[\frac{\Delta^{n_{1}}_{l}G^{(i)}}{\tau_{n_{1}}^{(i)}} \frac{\Delta^{n_{1}}_{l}G^{(\,j)}}{\tau_{n_{1}}^{(\,j)}} \bigg] \bigg)_{i,j=1,\ldots, p},\ldots, \\ &\frac{1}{\sqrt{n_{k}}\,}\sum_{l=1}^{\lfloor n_{k}t \rfloor}\bigg(\frac{\Delta^{n_{k}}_{l}G^{(i)}}{\tau_{n_{k}}^{(i)}} \frac{\Delta^{n_{k}}_{l}G^{(\,j)}}{\tau_{n_{k}}^{(\,j)}}-\mathbb{E} \bigg[\frac{\Delta^{n_{k}}_{l}G^{(i)}}{\tau_{n_{k}}^{(i)}} \frac{\Delta^{n_{k}}_{l}G^{(\,j)}}{\tau_{n_{k}}^{(\,j)}} \bigg] \bigg)_{i,j=1,\ldots, p}\bigg), \end{align*}

with $\mathbb{P}(A)>0$ , we have

\begin{align*} &\lim_{n\rightarrow\infty}\mathbb{P}\bigg(\bigg(\frac{1}{\sqrt{n}\,} \sum_{l=1}^{\lfloor nt \rfloor}\bigg(\frac{\Delta^{n}_{l}G^{(i)}} {\tau_{n}^{(i)}}\frac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}}- \mathbb{E}\bigg[\frac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(i)}} \frac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\,j)}} \bigg]\bigg)<x^{(i,j)}\bigg)_{i,j=1,\ldots, p}\bigg|A\bigg) \\ &\qquad=\mathbb{P}((({\boldsymbol D}^{1/2}B_{t} )_{i,j}<x^{(i,j)})_{i,j=1,\ldots, p}\mid A). \end{align*}

In addition, by the independence of the limiting process $\{B_{t}\}_{t\in[0,T]}$ from $\smash{(\{G_{t}^{(i)}\}_{t\in[0,T]})_{i=1,\ldots, p}}$ we obtain

$$ \begin{equation*} \mathbb{P}((({\boldsymbol D}^{1/2}B_{t})_{i,j}<x^{(i,j)} )_{i,j=1,\ldots, p}\mid A) =\mathbb{P}((({\boldsymbol D}^{1/2}B_{t} )_{i,j}<x^{(i,j)})_{i,j=1,\ldots, p}). \end{equation*} $$

Following the same computations, it is possible to show the result for any set of points $\{t_{1},\ldots,t_{a}\}\in[0,T]^{a}$ for $a\in\mathbb{N}$ and not just one $t\in[0,T]$ . Therefore, we obtain mixing convergence and hence stable convergence in $\mathcal{D}([0,T],\mathbb{R}^{p^{2}}) $ .
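The one-dimensional special case of Theorem 3.1 can be illustrated by simulation: for $p=1$ with Gaussian core $G=W$ a standard Brownian motion, $r^{(n)}(k)=0$ for $k\geq1$ and $r^{(n)}(0)=1$, so ${\boldsymbol D}=2$ and $Z_{t}^{n}$ is approximately $N(0,2t)$. A seeded Monte Carlo sketch (grid and sample sizes are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
n, t, n_paths = 1024, 1.0, 4000

# Increments of n_paths independent Brownian motions on a grid of mesh t/n
dW = rng.normal(0.0, np.sqrt(t / n), size=(n_paths, n))
tau_n = np.sqrt(t / n)          # scaling factor: sd of a single increment

# Statistic of Theorem 3.1 at time t for p = 1; each summand is chi^2_1 - 1,
# so the statistic has mean 0 and variance 2 exactly, and is close to N(0, 2t)
Z = np.sum((dW / tau_n) ** 2 - 1.0, axis=1) / np.sqrt(n)

mean_Z, var_Z = Z.mean(), Z.var()
```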

3.2. Case II

In this section we show that the results presented in the previous section hold for different choices of the scaling factor $\tau_{n}$ . In particular, the scaling factor used in this section will have an order equal to or greater than that of the previous scaling factor as n goes to $\infty.$ We denote this new scaling factor by $\smash{\tau_{n}^{(\beta(\,j))}}$ for $j=1,\ldots, p$ and keep the notation $\smash{\tau_{n}^{(\,j)}}$ for the scaling factor introduced in the previous section.

Indeed, let $\smash{\tau_{n}^{(\,j)}=O(\tau_{n}^{(\beta(\,j))})}$ for $j=1,\ldots, p$ (e.g. consider $\smash{\tau_{n}^{(\beta(\,j))}=\max_{j=1,\ldots, p}(\tau_{n}^{(\,j)})}$ for some values of j and $\smash{\tau_{n}^{(\beta(\,j))}=\tau_{n}^{(\,j)}}$ for the others, or see also Example 3.1). Let $i,j=1,\ldots, p$ . In this section we will work with the Hilbert space generated by the Gaussian random variables

$$ \begin{equation*} \bigg(\dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\beta(\,j))}} \bigg)_{n\geq1,\,1\leq l\leq \lfloor nt\rfloor,\,j\in\{1,\ldots, p\}}\quad\text{and let}\quad r_{i,j}^{(\beta),(n)}(k)\,:\!=\mathbb{E} \bigg[\frac{\Delta^{n}_{1}G^{(i)}}{\tau_{n}^{(\beta(i))}} \frac{\Delta^{n}_{1+k}G^{(\,j)}}{\tau_{n}^{(\beta(\,j))}} \bigg]. \end{equation*} $$

Moreover, the following assumption is the analogue of Assumption 3.1.

Assumption 3.2

Let the limit $\smash{\lim_{n\rightarrow\infty}r_{i,j}^{(\beta),(n)}(k)}$ exist for any $k\in\mathbb{N},$ and let $ (\xi(k))_{k\in\mathbb{N}}$ be a sequence such that, for any $k, n \in\mathbb{N}$ , $\smash{(r_{i,j}^{(\beta),(n)}(k))^{2} \leq \xi(k)}$ and $\smash{\sum_{k=1}^{\infty}\xi(k)<\infty}$ for $i,j=1,\ldots, p$ .

We can now present and prove a modification of the main result of the previous section.

Theorem 3.2

Let Assumption 3.2 hold. Let $\alpha,\beta=1,2$ . Then we have

\begin{align*} &\bigg\{\frac{1}{\sqrt{n}\,}\sum_{l=1}^{\lfloor nt\rfloor} \bigg(\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(\beta(i))}} \dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\beta(\,j))}}-\mathbb{E} \bigg[\dfrac{\Delta^{n}_{l}G^{(i)}}{\tau_{n}^{(\beta(i))}} \dfrac{\Delta^{n}_{l}G^{(\,j)}}{\tau_{n}^{(\beta(\,j))}} \bigg] \bigg)_{i,j=1,\ldots, p}\bigg\}_{t\in[0,T]} \\ &\qquad \xrightarrow{\text{st}} \{{\boldsymbol D}^{(\beta)\,{1}/{2}}B_{t}\}_{t\in[0,T]}. \end{align*}

In particular, associating for each combination (i, j) a combination ((x, y),(z, w)) where $x,y,z,w=1,\ldots, p,$ using the formula $ (i,j)\leftrightarrow((\lfloor {(i-1)}/{p} \rfloor+1,i-p\lfloor {(i-1)}/{p} \rfloor), (\lfloor {(\,j-1)}/{p} \rfloor+1,j-p\lfloor {(\,j-1)}/{p} \rfloor)) $ , we have

\begin{align*} ({\boldsymbol D})_{ij}^{(\beta)}&=({\boldsymbol D})_{(x,y),(z,w)}^{(\beta)} \\ &=\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{k=1}^{n-1}(n-k) (r_{x,z}^{(\beta),(n)}(k)r_{y,w}^{(\beta),(n)}(k)+r_{y,z}^{(\beta),(n)}(k) r_{x,w}^{(\beta),(n)}(k) \\ &+r_{z,x}^{(\beta),(n)}(k)r_{w,y}^{(\beta),(n)}(k)+r_{z,y}^{( \beta),(n)}(k) r_{w,x}^{(\beta),(n)}(k)) \\ & +(r_{x,z}^{(\beta),(n)}(0)r_{y,w}^{(\beta),(n)}(0) +r_{y,z}^{(\beta),(n)}(0)r_{x,w}^{(\beta),(n)}(0)). \end{align*}

Furthermore, $B_{t}$ is a $p^{2}$ -dimensional Brownian motion independent of $\smash{G^{(1)},\ldots, G^{(\,p)}}$ and the convergence is in $\mathcal{D}([0,T],\mathbb{R}^{p^{2}}) $ , namely the Skorokhod space equipped with the uniform metric.

Proof. The result follows from the same arguments as used in the proofs of Proposition 3.1, Proposition 3.2, and Theorem 3.1. This is because the only difference is that now we have a larger denominator than before (i.e. $\smash{\tau_{n}^{(\beta(\,j))}\geq \tau_{n}^{(\,j)}}$ ), which changes neither the logic of the arguments nor the computations. Indeed, in our framework we do not need $\smash{\mathbb{E}[( {\Delta_{i}^{n}G^{(k)}}/{\tau_{n}^{(k)}})^{2} ]}=1$ . This differs from the previous literature, where this equality was needed in order to use Theorem 2.7.7 in combination with Theorem 2.7.8 of [Reference Nourdin and Peccati20].

Remark 3.3

Note that, while the larger order of the $\smash{\tau_{n}^{(\beta(\,j))}}$ does not require any modification of the proofs of the results, it may reduce some of the components of $\smash{{\boldsymbol D}^{(\beta)}}$ to 0. For example, if we assume that $\smash{\mathbb{E}[( {\Delta_{i}^{n}G^{(\,j)}}/{\tau_{n}^{(\,j)}})^{2} ]}<C$ , where $C>0$ , and that $\smash{\lim_{n\rightarrow\infty}\{{\tau_{n}^{(\,j)}}/ {\tau_{n}^{(\beta(\,j))}}\}}=0$ for every $j=1,\ldots, p$ , then all the components of $\smash{{\boldsymbol D}^{(\beta)}}$ reduce to 0. This is because, by stationarity and the inequality $|ab|\leq\tfrac{1}{2}(a^{2}+b^{2})$ ,

\begin{align*} r_{i,j}^{(\beta),(n)}(k)&=\mathbb{E}\bigg[\frac{\Delta^{n}_{1}G^{(i)}} {\tau_{n}^{(\beta(i))}}\frac{\Delta^{n}_{1+k}G^{(\,j)}}{\tau_{n}^{(\beta(\,j) )}} \bigg] \\ &\leq \frac{1}{2}\mathbb{E}\bigg[\bigg(\frac{\Delta^{n}_{1}G^{(i)}}{\tau_{n}^{(\beta(i))}} \bigg)^{2}+\bigg(\frac{\Delta^{n}_{1+k}G^{(\,j)}}{\tau_{n}^{(\beta(\,j))}} \bigg)^{2} \bigg] \\ &\leq \frac{C}{2}\bigg(\frac{(\tau_{n}^{(i)})^{2}}{(\tau_{n}^{(\beta(i))})^{2}}+\frac{(\tau_{n}^{(\,j)})^{2}} {(\tau_{n}^{(\beta(\,j))})^{2}}\bigg) \\ &\rightarrow0. \end{align*}

Example 3.1

In this example, let $\{{\boldsymbol G}_{t}\}_{t\in[0,T]}$ be a Gaussian core. Consider a partition of the set $\{1,\ldots, p\}$ and call its elements $I_{\alpha_{1}},\ldots, I_{\alpha_{v}}$ for some $v\in\mathbb{N}$ . Hence, $I_{\alpha_{h}}\subset\{1,\ldots, p\}$ for $h=1,\ldots, v$ , and $I_{\alpha_{h}}\cap I_{\alpha_{l}}=\emptyset$ for $h,l=1,\ldots, v$ with $h\neq l$ . For $h=1,\ldots,v$ , define

(3.6) $$ \begin{equation} \tau_{n}^{(\alpha_{h})}\,:\!=\sqrt{\mathbb{E}\bigg[\bigg(\sum_{i\in I_{\alpha_{h}}}\Delta^{n}_{1}G^{(i)} \bigg)^{2}\bigg]}, \label{eqn8} \end{equation} $$

and consider $\smash{{\Delta^{n}_{l}G^{(\,j)}}/{\tau_{n}^{(\beta(\,j))}}},$ where $\beta(\,j)\,:\!=\alpha_{h}$ when $j\in I_{\alpha_{h}}$ . In addition, assume that, for $h=1,\ldots, v$ , the $\mathcal{F}_{t}$ -Brownian measures $W^{(i)}$ are independent for $i\in I_{\alpha_{h}}$ . This means that $\smash{\mathbb{E}[W^{(i)}W^{(\,j)}]}=0$ if $i,j\in I_{\alpha_{h}}$ with $i\neq j$ for some $h=1,\ldots, v$ . Then $\smash{\tau_{n}^{(\beta(\,j))}\geq\tau_{n}^{(\,j)}}$ for $j=1,\ldots,p$ , where $\smash{\tau_{n}^{(\,j)}}$ has been defined in (3.1) and, thus, we can apply Theorem 3.2.
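A small numerical illustration of (3.6) (all numbers are hypothetical): with independent Brownian measures inside each block, the variance of the summed increments is the sum of the individual variances, so the block scaling factor dominates each component's scaling factor.

```python
import numpy as np

# Hypothetical per-component scaling factors tau_n^{(j)} for p = 4
tau = np.array([0.5, 1.2, 0.8, 2.0])

# A partition of {1,...,p} into blocks I_{alpha_1}, I_{alpha_2} (0-based indices)
blocks = [[0, 1], [2, 3]]

# Independence of the W^{(i)} inside a block kills the cross terms, so
# tau_n^{(alpha_h)} = sqrt(sum of individual variances), as in (3.6)
tau_block = [np.sqrt(np.sum(tau[idx] ** 2)) for idx in blocks]
tau_beta = np.concatenate([[tb] * len(idx) for tb, idx in zip(tau_block, blocks)])
```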

4. Joint CLT for the multivariate BSS process

In this section we will present and prove our main results consisting of the joint CLT for the two types of multivariate BSS processes. First, we will present the CLT for the scaling factor $\tau$ used in the literature (i.e. case I) and then the CLT for the new formulation (i.e. case II). For case II, we have two scenarios depending on which multivariate extension of the univariate BSS process we consider (see Definition 2.2). In this and the next two sections we will use a multivariate version of the continuous mapping theorem applied to stable convergence (for reference, see [Reference Aldous and Eagleson1]). Moreover, we will adopt the following three assumptions. These are the analogues of Assumption (CLT) and conditions in Section 4.1 of [Reference Barndorff-Nielsen, Corcuera and Podolskij7], and Assumptions 2.1, 2.2, 4.1, and 4.2 of [Reference Granelli and Veraart17]. The only difference is that those works also impose conditions on the convergence of the autocorrelations $\smash{r^{(n)}}$ , which, for the sake of brevity and clarity of exposition, we do not focus on in this work. Let $\Delta_{n}\,:\!=1/n.$

Assumption 4.1

For $k,l=1,\ldots, p$ , let $\smash{g^{(k,l)}}$ be differentiable on $ (0,\infty) $ with derivative $\smash{(g^{(k,l)})'}\in L^{2}((\epsilon,\infty)) $ for all $\epsilon>0$ and $\smash{((g^{(k,l)})')^{2}}$ be nonincreasing in $\smash{[b^{(k,l)},\infty)}$ for some $b^{(k,l)}>0$ . Let $\smash{\sigma^{(k,l)}}$ have $\smash{\alpha^{(k,l)}}$ -Hölder continuous sample paths for $\alpha^{(k,l)}\in\smash{(\tfrac{1}{2},1)}$ . Define

$$ \begin{equation*} \pi_{n}^{(k,l)}(A)\,:\!=\frac{\int_{A}(g^{(k,l)}(s+\Delta_{n})-g^{(k,l)} (s))^{2}{\rm d} s}{\int_{0}^{\infty}(g^{(k,l)}(s+\Delta_{n})-g^{(k,l)} (s))^{2}{\rm d} s}, \quad\text{where}\quad A\in\mathcal{B}(\mathbb{R}). \end{equation*} $$

We impose that there exists a constant $\smash{\lambda^{(k,l)}}<-1$ such that, for any $\epsilon_{n}=O(n^{-\kappa}),\,\kappa\in(0,1) $ , we have

$$ \begin{equation*} \pi_{n}^{(k,l)}((\epsilon_{n},\infty))=O(n^{\lambda^{(k,l)}(1-\kappa)}). \end{equation*} $$

In the next sections we present different cases. Each case has its own particular $\tau_{n}$ and $\smash{r^{(n)}}$ , but the underlying assumptions have the same structure. Hence, for the last two assumptions, we use the variables $\tau_{n}$ and $\smash{r^{(n)}}$ , whose specific form is not introduced here but will be specified in the context in which the assumptions are used. In other words, we prefer to state these two assumptions in a general form (with unspecified $\tau_{n}$ and $\smash{r^{(n)}}$ ) in order to avoid repeating them with different $\tau_{n}$ and $\smash{r^{(n)}}$ for each case.

Assumption 4.2

Let the limit $\smash{\lim_{n\rightarrow\infty}r^{(n)}(k)}$ exist for any $k\in\mathbb{N},$ and let $ (\xi(k))_{k\in\mathbb{N}}$ be a sequence such that, for any $k, n \in\mathbb{N}$ , $\smash{(r^{(n)}(k))^{2} \leq \xi(k)}$ and $\smash{\sum_{k=1}^{\infty}\xi(k)}<\infty$ .

Assumption 4.3

Let $k,l,m=1,\ldots, p$ . Let

$$ \begin{equation*} \sup_{s\in(-\infty,T]}\mathbb{E}[(\sigma_{s}^{(l,m)})^{2} ]<\infty. \end{equation*} $$

Moreover, for any $t>0,$ let

$$ \begin{equation*} \int_{1}^{\infty}((g^{(k,l)})'(s))^{2}(\sigma_{t-s}^{(l,m)})^{2}{\rm d} s<\infty. \end{equation*} $$

Let us discuss the intuition behind these assumptions. First, Assumption 4.2 is the analogue of Assumptions 3.1 and 3.2 of the previous section; it concerns the summability of the autocorrelations of the Gaussian core under consideration.

Assumption 4.1 concerns the behaviour of the deterministic kernel g. In the first sentence we do not impose that $ (\smash{g^{(k,l)})}'\in L^{2}((0,\infty)) $ , so that the theory is also applicable outside the semimartingale case. Moreover, the intuition behind the conditions on $\smash{\pi_{n}^{(k,l)}}$ is that they force the mass of $\smash{\pi_{n}^{(k,l)}}$ to concentrate toward 0 as n increases; without them the increments $\smash{\Delta_{i}^{n}Y^{(k)}}$ might contain substantial information about the volatility far outside the interval $[{(i-1)}/{n},{i}/{n}]$ (i.e. $[ (i-1)\Delta_{n},i\Delta_{n}]$ ), potentially leading to a different stochastic limit.

Furthermore, it was shown in Section 4.3 of [Reference Barndorff-Nielsen, Corcuera and Podolskij7], in the univariate case, that Assumptions 4.1 and 4.2 are satisfied, for example, in the following two cases. These cases extend easily to our multivariate framework, since Assumptions 4.1 and 4.2 apply to each $\smash{g^{(k,l)}}$ with k and l fixed. The first case consists of the function $\smash{g^{(k,l)}(x) = x^{\delta}\mathbf{1}_{(0,1]}(x)},\,x>0$ , for $\delta\in(-\smash{\tfrac{1}{2}},0) $ . The second case consists of the function $\smash{g^{(k,l)}(x)} = x^{\delta}L_{g}(x) $ with $\smash{(g^{(k,l)})'(x) }= x^{\delta-1}L_{g'}(x) $ , where $L_{g}$ and $L_{g'}$ are two continuous functions on $ (0, \infty) $ and slowly varying at 0, under the assumptions that $\smash{g^{(k,l)}}\in L^{2}((0,\infty)) $ , $\smash{(g^{(k,l)})'} \in L^{2} ((\epsilon, \infty)) $ for any $\epsilon > 0$ , $\smash{(g^{(k,l)})}$ is nonincreasing on $ (b, \infty) $ for some $b > 0$ , and $\delta\in(-\smash{\tfrac{1}{2}},0) $ .

The first condition of Assumption 4.3 is satisfied in many situations, for example when the stochastic volatilities are second-order stationary. The second condition is satisfied when the derivative of the deterministic kernel, $\smash{(g^{(k,l)})'(s)}$ , decays to 0 sufficiently fast as s goes to $\infty.$

In Section 6 we will show that these assumptions are satisfied for an important and practical case which is found in many real-world applications: the gamma kernel. Finally, since the assumptions of this work and of [Reference Barndorff-Nielsen, Corcuera and Podolskij7] and [Reference Granelli and Veraart17] are similar, we refer the reader to Section 4.3 of [Reference Barndorff-Nielsen, Corcuera and Podolskij7] and Section 2.1 of [Reference Granelli and Veraart17] for further discussions on the assumptions.

4.1. Case I

We will start with some preliminaries, where we focus on the bivariate case in order to simplify the exposition. The notation for higher dimensions is analogous. Consider the stochastic process $\{{\boldsymbol Y}_{t}\}_{t\in[0,T]}$ defined as

$$ \begin{equation*} {\boldsymbol Y}_{t}\,:\!= \begin{pmatrix} Y^{(1)}_{t} \\ Y^{(2)}_{t} \end{pmatrix} =\int_{-\infty}^{t} \begin{pmatrix} g^{(1,1)}(t-s) & g^{(1,2)}(t-s) \\ g^{(2,1)}(t-s) & g^{(2,2)}(t-s) \end{pmatrix} \begin{pmatrix} \sigma^{(1,1)}_{s} & \sigma^{(1,2)}_{s} \\ \sigma^{(2,1)}_{s} & \sigma^{(2,2)}_{s} \end{pmatrix} \begin{pmatrix} {\rm d} W^{(1)}_{s} \\ {\rm d} W^{(2)}_{s} \end{pmatrix}+ \begin{pmatrix} U^{(1)}_{t} \\ U^{(2)}_{t} \end{pmatrix}, \end{equation*} $$

where $\smash{g^{(i,j)}(\cdot)},\, i,j=1,2,$ are deterministic functions and $\smash{W^{(1)},W^{(2)}}$ are two (possibly dependent) $\mathcal{F}_{t}$ -adapted jointly Gaussian Brownian measures on $\mathbb{R}$ . From a modelling point of view, the dependence of the Brownian measures is not restrictive, since it can always be shifted from the Brownian measures to the stochastic volatilities by rewriting the latter. We have

\begin{align*} Y^{(1)}_{t}&=\int_{-\infty}^{t}(g^{(1,1)}(t-s)\sigma^{(1,1)}_{s}+ g^{(1,2)}(t-s)\sigma^{(2,1)}_{s}){\rm d} W^{(1)}_{s} \\ & +\int_{-\infty}^{t}(g^{(1,1)}(t-s)\sigma^{(1,2)}_{s} +g^{(1,2)}(t-s) \sigma^{(2,2)}_{s}){\rm d} W^{(2)}_{s}+U_{t}^{(1)} \\ \text{and}\quad Y^{(2)}_{t}&=\int_{-\infty}^{t}(g^{(2,1)}(t-s) \sigma^{(1,1)}_{s}+g^{(2,2)}(t-s)\sigma^{(2,1)}_{s}){\rm d} W^{(1)}_{s} \\ & +\int_{-\infty}^{t}(g^{(2,1)}(t-s)\sigma^{(1,2)}_{s}+g^{(2,2)} (t-s)\sigma^{(2,2)}_{s}){\rm d} W^{(2)}_{s}+U_{t}^{(2)}. \end{align*}

Let us define, for $k,r,m=1,2$ ,

$$ \begin{equation*} \Delta_{i}^{n}Z^{(k,r,m)}\,:\!=\int_{(i-1)\Delta_{n}}^{i\Delta_{n}} g^{(k,r)}(i\Delta_{n}-s)\sigma^{(r,m)}_{s}{\rm d} W^{(m)}_{s}+ \int_{-\infty}^{(i-1)\Delta_{n}}\Delta^{n}_{i} g^{(k,r)}(s)\sigma^{(r,m)}_{s}{\rm d} W^{(m)}_{s}, \end{equation*} $$

where $\smash{\Delta^{n}_{i} g^{(k,l)}(s)}\,:\!=\smash{g^{(k,l)}(i\Delta_{n}-s)-g^{(k,l)}((i-1) \Delta_{n}-s)}$ . Then we have

$$ \begin{equation*} \Delta_{i}^{n}Y^{(k)}=\Delta_{i}^{n}Z^{(k,1,1)}+\Delta_{i}^{n} Z^{(k,2,1)}+\Delta_{i}^{n}Z^{(k,1,2)}+\Delta_{i}^{n}Z^{(k,2,2)}+ \Delta_{i}^{n}U^{(k)}. \end{equation*} $$

For $m,r,k=1,2$ , let $\smash{G^{(k,r;m)}_{t}\,:\!=\int_{-\infty}^{t}g^{(k,r)} (t-s){\rm d} W_{s}^{(m)}}$ . Hence,

$$ \begin{equation*} \Delta^{n}_{i}G^{(k,r;m)}=\int_{(i-1)\Delta_{n}}^{i\Delta_{n}} g^{(k,r)}(i\Delta_{n}-s){\rm d} W_{s}^{(m)}+\int_{-\infty}^{(i-1)\Delta_{n}} \Delta^{n}_{i} g^{(k,r)}(s){\rm d} W_{s}^{(m)}. \end{equation*} $$

Furthermore, let $\smash{\tau_{n}^{(k,r)}\,:\!=\sqrt{\mathbb{E} [(\Delta^{n}_{1}G^{(k,r;m)})^{2}]}}$ and

$$ \begin{equation*} r_{k,r,m;l,q,w}^{(n)}(h)\,:\!=\mathbb{E}\bigg[\dfrac{\Delta^{n}_{1} G^{(k,r;m)}}{\tau_{n}^{(k,r)}}\dfrac{\Delta^{n}_{1+h}G^{(l,q;w)}} {\tau_{n}^{(l,q)}} \bigg]. \end{equation*} $$
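Both $\smash{\tau_{n}^{(k,r)}}$ and $\smash{r^{(n)}}$ can be computed by one-dimensional numerical integration: for a stationary-increment Gaussian core the variogram $V(t)=\mathbb{E}[(G_{t}-G_{0})^{2}]=\int_{0}^{t}g(u)^{2}{\rm d} u+\int_{0}^{\infty}(g(u+t)-g(u))^{2}{\rm d} u$ determines all increment covariances through the standard identity $\mathbb{E}[\Delta^{n}_{1}G\,\Delta^{n}_{1+h}G]=\tfrac{1}{2}(V((h+1)\Delta_{n})+V((h-1)\Delta_{n})-2V(h\Delta_{n}))$ . A minimal sketch, with an illustrative gamma-type kernel $g(x)=x^{-0.2}{\rm e}^{-x}$ and $n=100$ of our own choosing:

```python
import numpy as np

trapz = np.trapezoid if hasattr(np, "trapezoid") else np.trapz

def variogram(g, t, s_max=50.0, grid=100_000):
    """V(t) = int_0^t g(u)^2 du + int_0^oo (g(u+t) - g(u))^2 du (tail truncated)."""
    u = np.logspace(-8, np.log10(s_max), grid)
    head = trapz(np.where(u <= t, g(u) ** 2, 0.0), u)
    tail = trapz((g(u + t) - g(u)) ** 2, u)
    return head + tail

def tau_and_corr(g, n, h_max=4):
    Dn = 1.0 / n
    V = {h: variogram(g, h * Dn) for h in range(h_max + 2)}
    tau2 = V[1]                                # tau_n^2 = E[(Delta_1^n G)^2] = V(Dn)
    r = [(V[h + 1] + V[abs(h - 1)] - 2 * V[h]) / (2 * tau2) for h in range(h_max + 1)]
    return np.sqrt(tau2), r

g = lambda x: x ** (-0.2) * np.exp(-x)         # illustrative gamma kernel
tau_n, r = tau_and_corr(g, n=100)
```

For this kernel the lag-one correlation is negative (roughly $-0.24$), reflecting the rough, non-semimartingale regime $\delta\in(-\tfrac12,0)$ .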

The Gaussian core $\{{\boldsymbol G}_{t}\}_{t\in[0,T]}$ is here given by ${\boldsymbol G}_{t}=\smash{(G^{(k,r;m)}_{t} )_{m,k,r=1,2}}$ , and it is a Gaussian process since $\smash{W^{(1)}}$ and $\smash{W^{(2)}}$ are jointly Gaussian. Note that we are working with the separable Hilbert space $\mathcal{H}$ generated by the jointly Gaussian random variables

$$ \begin{equation*} \bigg(\frac{{\Delta^{n}_{i}G^{(k,r;m)}}}{{\tau_{n}^{(k,r)}}}\bigg)_{n \geq1,\,1\leq i\leq \lfloor nt\rfloor,\,k,r,m\in\{1,2\}}. \end{equation*} $$

Furthermore, observe that

$$ \begin{equation*} \mathbb{E}\bigg[\dfrac{\Delta^{n}_{1}G^{(k,r;m)}}{\tau_{n}^{(k,r)}} \dfrac{\Delta^{n}_{1+h}G^{(l,q;w)}}{\tau_{n}^{(l,q)}} \bigg]=\mathbb{E}\bigg[\dfrac{\Delta^{n}_{i}G^{(k,r;m)}} {\tau_{n}^{(k,r)}}\dfrac{\Delta^{n}_{i+h}G^{(l,q;w)}}{\tau_{n}^{(l,q)}} \bigg] \end{equation*} $$

for any $i=1,\ldots, \lfloor nt \rfloor$ . Before presenting the CLT, we introduce an assumption, similar to Assumption (4.10) of [Reference Barndorff-Nielsen, Corcuera and Podolskij7], that controls the asymptotic behaviour of the drift process. In particular, it states that the drift process has to be sufficiently smooth so that the sum of its increments does not grow too fast as $n\rightarrow\infty$ .

Assumption 4.4

For any $k,l,r,m=1,\ldots, p$ , let

$$ \begin{equation*} \frac{1}{\sqrt{n}\,}\sum_{i=1}^{\lfloor nt\rfloor}\frac{\Delta_{i}^{n} Z^{(k,r,m)}}{\tau^{(k,r)}_{n}}\Delta_{i}^{n}U^{(l)} \xrightarrow{\text{u.c.p.}} 0 \quad\text{and}\quad \frac{1}{\sqrt{n}\,}\sum_{i=1}^{\lfloor nt\rfloor}\Delta_{i}^{n}U^{(k)} \Delta_{i}^{n}U^{(l)} \xrightarrow{\text{u.c.p.}} 0. \end{equation*} $$

Theorem 4.1

Under Assumptions 4.1, 4.2, and 4.3 applied to $\smash{\tau_{n}^{(l,q)},r_{k,r,m;l,q,w}^{(n)}}$ for $l,q,k,r,m,w=1,\ldots, p$ , and Assumption 4.4, we have the following stable convergence:

\begin{align*} &\bigg\{\sqrt{n}\bigg[\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\sum_{r,m=1}^{p}\frac{\Delta_{i}^{n}Z^{(k,r,m)}} {\tau^{(k,r)}_{n}}+\Delta_{i}^{n}U^{(k)}\bigg)\bigg(\sum_{q,w=1}^{p} \frac{\Delta_{i}^{n}Z^{(l,q,w)}}{\tau^{(l,q)}_{n}}+\Delta_{i}^{n}U^{(l)} \bigg) \\ &-\sum_{r,m,q,w=1}^{p}\mathbb{E}\bigg[\frac{\Delta_{1}^{n} G^{(k,r,m)}}{\tau^{(k,r)}_{n}}\frac{\Delta_{1}^{n}G^{(l,q,w)}} {\tau^{(l,q)}_{n}} \bigg]\int_{0}^{t}\sigma^{(r,m)}_{s} \sigma^{(q,w)}_{s}{\rm d} s \bigg]_{k,l=1,\ldots, p} \bigg\}_{t\in[0,T]} \\ &\qquad \xrightarrow{\text{st}} \bigg\{\int_{0}^{t}V_{s}{\boldsymbol D}^{1/2}{\rm d} B_{s}\bigg\}_{t\in[0,T]} \end{align*}

in $\mathcal{D}([0,T],\mathbb{R}^{p^{2}}) $ , where ${\boldsymbol D}$ and $V_{s}$ are introduced in Appendix A, and $B_{s}$ is a $p^{6}$ -dimensional Brownian motion.

Remark 4.1

Like all the other CLTs and WLLNs in this section, this result is not feasible: the scaling factor $\smash{\tau_{n}^{(k,r)}}$ for $k,r=1,\ldots, p$ depends on the Gaussian core ${\boldsymbol G}_{t}$ , which is not observable, so the result cannot be computed directly from the data. Note also that the increments $\smash{\Delta_{i}^{n} Z^{(k,r,m)}}$ appearing in Theorem 4.1 and Corollary 4.1 are not observable either.

Proof of Theorem 4.1

Let us assume for now that $p=2$ and that the drift process ${\boldsymbol U}_{t}=0$ for any $t\in[0,T]$ . We can split the expression under study into two components $A_{n}+C_{n}$ , where $A_{n}$ contains the elements that converge to 0, while $C_{n}$ contains those that do not. In this proof we will use the so-called blocking technique; see [Reference Barndorff-Nielsen, Corcuera and Podolskij7], [Reference Corcuera, Hedevang, Podolskij and Pakkanen15], and [Reference Granelli and Veraart17] for details. Let $1\leq h\leq n,$ and let us first focus on $C_{n}$ , which is defined as

(4.1) \begin{align} C_{n}&\,:\!=\frac{1}{\sqrt{n}\,}\bigg(\sum_{j=1}^{\lfloor ht\rfloor} \sum_{i\in I_{h,n}(\,j)}\sum_{r,m,q,w=1}^{2}\bigg(\frac{\Delta^{n}_{i} G^{(k,r;m)}}{\tau_{n}^{(k,r)}}\frac{\Delta^{n}_{i}G^{(l,q;w)}} {\tau_{n}^{(l,q)}} -\mathbb{E}\bigg[\frac{\Delta^{n}_{i}G^{(k,r;m)}} {\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i}G^{(l,q;w)}}{\tau_{n}^{(l,q)}}\bigg]\bigg) \nonumber \\ &\times \sigma^{(r,m)}_{(\,j-1)\Delta_{h}}\sigma^{(q,w)}_{(\,j-1)\Delta_{h}} \bigg)_{k,l=1,2}, \label{eqn9}\end{align}

where $I_{h,n}(\,j)=\{ i\mid {i}/{n}\in({(\,j-1)}/{h},{j}/{h} ]\}$ , which can be rewritten as

\begin{align*} C_{n}&=\frac{1}{\sqrt{n}\,}\sum_{j=1}^{\lfloor ht\rfloor} \sum_{i\in I_{h,n}(\,j)}V_{(\,j-1)\Delta_{h}} \bigg(\frac{\Delta^{n}_{i}G^{(k,r;m)}}{\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i} G^{(l,q;w)}}{\tau_{n}^{(l,q)}} \\ &-\mathbb{E}\bigg[\frac{\Delta^{n}_{i}G^{(k,r;m)}} {\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i}G^{(l,q;w)}}{\tau_{n}^{(l,q)}} \bigg]\bigg)_{r,m,q,w,k,l=1,2}, \end{align*}

where

$$ \begin{equation*} V_{(\,j-1)\Delta_{h}}= \begin{pmatrix} \sigma_{(\,j-1)\Delta_{h}} & \textbf{0} & \textbf{0} & \textbf{0} \\ \textbf{0} & \sigma_{(\,j-1)\Delta_{h}} & \textbf{0} & \textbf{0} \\ \textbf{0} & \textbf{0} & \sigma_{(\,j-1)\Delta_{h}} & \textbf{0} \\ \textbf{0} & \textbf{0} & \textbf{0} & \sigma_{(\,j-1)\Delta_{h}} \end{pmatrix}. \end{equation*} $$

Here $\smash{\sigma_{(\,j-1)\Delta_{h}}=(\sigma^{(r,m)}_{(\,j-1)\Delta_{h}} \sigma^{(q,w)}_{(\,j-1)\Delta_{h}})_{r,m,q,w=1,2}^{\top}}$ (hence, it is a row vector of 16 elements), and $\textbf{0}$ is a row vector of 16 elements containing only 0s. Hence, $V_{(\,j-1)\Delta_{h}}$ is a $4\times 64$ matrix. Now, by Theorem 3.1 we have

\begin{align*} &\bigg\{\frac{1}{\sqrt{n}\,}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}G^{(k,r;m)}}{\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i}G^{(l,q;w)}}{\tau_{n}^{(l,q)}}-\mathbb{E} \bigg[\frac{\Delta^{n}_{i}G^{(k,r;m)}}{\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i}G^{(l,q;w)}}{\tau_{n}^{(l,q)}}\bigg] \bigg)_{r,m,q,w,k,l=1,2}\bigg\}_{t\in[0,T]} \\ &\qquad \xrightarrow{\text{st}} \{{\boldsymbol D}^{1/2}B_{t} \}_{t\in[0,T]}\quad\text{as } n\rightarrow\infty \end{align*}

in $\mathcal{D}([0,T],\mathbb{R}^{64}) $ , where the symmetric matrix ${\boldsymbol D}^{1/2}$ is a $64\times64$ matrix and $B_{t}$ is a 64-dimensional Brownian motion. In particular, in order to define the elements of the matrix ${\boldsymbol D},$ we proceed with the following association of (z, y) to $ ((r_{z},m_{z},q_{z},w_{z},k_{z},l_{z}),(r_{y},m_{y},q_{y},w_{y}, k_{y},l_{y})) $ . This part of the proof is similar to the analogue part in the proof of Proposition 3.1, but there are some differences. First, in the latter we had the association of fewer elements (namely $ (i,j)\leftrightarrow ((x,y),(z,w)) $ ). Second, as it is possible to see from (4.1) in the present case there is a summation over r, m, q, w and hence there is no a specific ordering of r, m, q, w when we consider it as a vector (given that we modify the ordering of $\sigma_{(\,j-1)\Delta_{h}}$ accordingly). Indeed, let $\nu(r,m,q,w) $ be any permutation of the set $\{(1,1,1,1),(1,1,1,2),(1,1,2,1),(1,1,2,2), \ldots,(2,2,2,2)\}$ , namely the set of all the possible combinations of $r,m,q,w\in\{1,2\}$ , and let $\nu_{s}(r,m,q,w) $ determine the sth element of $\nu(r,m,q,w) $ . It is possible to see that $\nu(r,m,q,w) $ contains $2^{4}$ elements; hence, $s\in\{1,\ldots, 2^{4}\}$ . Then we have

\begin{align*} C_{n}&=\frac{1}{\sqrt{n}\,}\sum_{j=1}^{\lfloor ht\rfloor} \sum_{i\in I_{h,n}(\,j)}V_{(\,j-1)\Delta_{h}}\bigg(\frac{\Delta^{n}_{i}G^{(k,r;m)}} {\tau_{n}^{(k,r)}}\frac{\Delta^{n}_{i}G^{(l,q;w)}}{\tau_{n}^{(l,q)}} \\ &-\mathbb{E}\bigg[\frac{\Delta^{n}_{i}G^{(k,r;m)}} {\tau_{n}^{(k,r)}}\frac{\Delta^{n}_{i}G^{(l,q;w)}}{\tau_{n}^{(l,q)}} \bigg]\bigg)_{r,m,q,w,k,l=1,2} \\ &=\frac{1}{\sqrt{n}\,}\sum_{j=1}^{\lfloor ht\rfloor} \sum_{i\in I_{h,n}(\,j)}V^{\nu}_{(\,j-1)\Delta_{h}} \bigg(\frac{\Delta^{n}_{i}G^{(k,r;m)}}{\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i}G^{(l,q;w)}}{\tau_{n}^{(l,q)}} \\ &-\mathbb{E}\bigg[\frac{\Delta^{n}_{i}G^{(k,r;m)}}{\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i}G^{(l,q;w)}}{\tau_{n}^{(l,q)}} \bigg]\bigg)_{\nu(r,m,q,w);k,l=1,2}, \end{align*}

where $V^{\nu}_{(\,j-1)\Delta_{h}}$ has the same form as $V_{(\,j-1)\Delta_{h}}$ , but with $\sigma^{\nu}_{(\,j-1)\Delta_{h}}\!\!\!=\!(\sigma^{(r,m)}_{(\,j-1)\Delta_{h}} \sigma^{(q,w)}_{(\,j-1)\Delta_{h}})_{\nu(r,m,q,w)}^{\top}$ instead of $\sigma_{(\,j-\!1)\Delta_{h}}$ . In particular, given a vector ${\boldsymbol X}=(X_{1},\ldots,\! X_{2^{4}}) $ , by the notation $ ({\boldsymbol X})_{\nu(r,m,q,w)}$ we mean that the sth component of the vector is $X_{\nu_{s}(r,m,q,w)}$ , that is,

$$ \begin{equation*} ({\boldsymbol X})_{\nu(r,m,q,w)}=(X_{\nu_{1}(r,m,q,w)},X_{\nu_{2} (r,m,q,w)},\ldots, X_{\nu_{2^{4}}(r,m,q,w)}). \end{equation*} $$

We are now ready to formulate the relation between z and $ (r_{z},m_{z},q_{z},w_{z},k_{z},l_{z}) $ : for $s=1,\ldots,2^{4}$ , we set $z=s\leftrightarrow (\nu_{s}(r,m,q,w),1,1) $ , $z=2^{4}+s\leftrightarrow (\nu_{s}(r,m,q,w),1,2) $ , $z=2^{5}+s\leftrightarrow (\nu_{s}(r,m,q,w),2,1) $ , and $z=2^{5}+2^{4}+s\leftrightarrow (\nu_{s}(r,m,q,w),2,2) $ , so that z runs from 1 to $2^{6}$ . By symmetry, the same relation applies to y and $ (r_{y},m_{y},q_{y},w_{y},k_{y},l_{y}) $ . This can be written compactly as

\begin{align*} (z,y)\leftrightarrow&\bigg(\bigg(\nu_{z-\lfloor{(z-1)}/{2^{4}}\rfloor 2^{4}}(r,m,q,w),\bigg\lfloor\frac{\lfloor{(z-1)}/{2^{4}}\rfloor}{2} \bigg\rfloor +1,\bigg\lfloor\frac{z-1}{2^{4}}\bigg\rfloor+1 \\ &-2\bigg\lfloor \frac{\lfloor{(z-1)}/{2^{4}} \rfloor}{2}\bigg\rfloor\bigg), \\ &\bigg(\nu_{y-\lfloor{(y-1)}/{2^{4}}\rfloor 2^{4}}(r,m,q,w),\bigg\lfloor\frac{\lfloor{(y-1)}/{2^{4}} \rfloor}{2}\bigg\rfloor+1,\bigg\lfloor\frac{y-1}{2^{4}}\bigg\rfloor+1 \\ &- 2\bigg\lfloor \frac{\lfloor{(y-1)}/{2^{4}}\rfloor}{2} \bigg\rfloor\bigg)\bigg). \end{align*}

Moreover, here it becomes evident why we need Theorem 3.1 to hold for jointly Gaussian Brownian measures and not just for independent ones: the vector of Brownian measures is not composed of independent Brownian measures, but rather of the same Brownian measures appearing repeatedly.

Continuing with the proof, we observe that, by using the properties of the stable convergence and the assumption on the $\mathcal{F}$ -measurability of the $\sigma$ , we obtain, for fixed h,

$$ \begin{equation*} C_{n} \xrightarrow{\text{st}} \bigg\{\sum_{j=1}^{\lfloor ht\rfloor}V_{(\,j-1)\Delta_{h}}{\boldsymbol D}^{1/2} (B_{j\Delta_{h}}-B_{(\,j-1)\Delta_{h}})\bigg\}_{t\in[0,T]} \quad\text{as } n\rightarrow\infty \end{equation*} $$

in $\mathcal{D}([0,T],\mathbb{R}^{4}),$ where the dimensions are $4\times64, 64\times64,$ and $64\times 1$ , respectively. The convergence in this space is implied by the convergence in $\mathcal{D}([0,T],\mathbb{R}^{64}) $ . In addition, since the stochastic volatilities are càdlàg, we have

$$ \begin{equation*} \sum_{j=1}^{\lfloor ht\rfloor}V_{(\,j-1)\Delta_{h}}{\boldsymbol D}^{1/2} (B_{j\Delta_{h}}-B_{(\,j-1)\Delta_{h}}) \xrightarrow{\text{P}} \int_{0}^{t}V_{s}{\boldsymbol D}^{1/2}{\rm d} B_{s} \quad\text{as $h\rightarrow\infty$. } \end{equation*} $$

From this we obtain the stable convergence of $C_{n}$ .
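The passage from the blocked sum to the Itô integral as $h\rightarrow\infty$ can be illustrated in one dimension. The following sketch (a toy example of our own, with the deterministic integrand $V_{s}=s$ ) checks, via the Itô isometry, that freezing the integrand at the left endpoint of each of the h blocks produces a mean-square error of order $h^{-2}$ :

```python
import numpy as np

rng = np.random.default_rng(0)

def ito_sum_mse(h, n_fine=1024, n_paths=2000, T=1.0):
    """MSE of the blocked sum sum_j V_{(j-1)/h}(B_{j/h} - B_{(j-1)/h}) against the
    fine-grid Ito sum of V_s = s, estimated over n_paths Brownian paths."""
    dt = T / n_fine
    s = np.arange(n_fine) * dt                  # left endpoints of the fine grid
    dB = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_fine))
    fine = (s * dB).sum(axis=1)                 # fine-grid Ito sum ~ int_0^T s dB_s
    s_block = np.floor(s * h) / h               # left endpoint of the block containing s
    coarse = (s_block * dB).sum(axis=1)         # integrand frozen on each of h blocks
    return float(np.mean((fine - coarse) ** 2))

mse_4, mse_32 = ito_sum_mse(4), ito_sum_mse(32)
```

By the isometry the exact mean-square error is $\sum_{i}(s_{i}-s_{\text{block}(i)})^{2}\,{\rm d}t\approx 1/(3h^{2})$ , so increasing h from 4 to 32 shrinks it by a factor of about 64.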

Concerning $A_{(l,k),n}$ we apply the same arguments as for Propositions 7.1, 7.2, and 7.5 of [Reference Granelli and Veraart17] and Theorem 4 of [Reference Barndorff-Nielsen, Corcuera and Podolskij7]. This is because we can focus on the single elements

(4.2) $$ \begin{equation} \sqrt{n}\bigg[\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\frac{\Delta_{i}^{n}Z^{(k,r,m)}}{\tau^{(k,r)}_{n}} \frac{\Delta_{i}^{n}Z^{(l,q,w)}}{\tau^{(l,q)}_{n}}-\mathbb{E} \bigg[\frac{\Delta^{n}_{i}G^{(k,r,m)}}{\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i}G^{(l,q,w)}}{\tau_{n}^{(l,q)}} \bigg]\int_{0}^{t}\sigma^{(r,m)}_{s}\sigma^{(q,w)}_{s}{\rm d} s\bigg] \label{eqn10} \end{equation} $$

and directly apply their arguments using the assumptions of this theorem. This is because, for each (k, r, m, l, q, w), $\smash{A_{(k,r,m,l,q,w),n}}$ converges to 0 in distribution in $\mathcal{D}([0,T],\mathbb{R}^{4}) $ , which implies that they converge jointly to 0 stably in distribution. For the sake of completeness, let us sketch their arguments and show how they apply to our case. First note that (4.2) can be split into elements that converge to 0 and elements that do not. The latter belong to $C_{n}$ and have already been treated in the previous part of this proof. The former are given by

\begin{align*} &A_{(k,r,m,l,q,w),n} \\ &\qquad\,:\!=\frac{1}{\sqrt{n}\,}\sum_{i=1}^{\lfloor nt\rfloor} \Bigg(\frac{\Delta_{i}^{n}Z^{(k,r,m)}}{\tau^{(k,r)}_{n}}\frac{\Delta_{i}^{n} Z^{(l,q,w)}}{\tau^{(l,q)}_{n}}-\sigma^{(r,m)}_{(i-1)\Delta_{n}} \sigma^{(q,w)}_{(i-1)\Delta_{n}}\frac{\Delta^{n}_{i}G^{(k,r,m)}} {\tau_{n}^{(k,r)}}\frac{\Delta^{n}_{i}G^{(l,q,w)}}{\tau_{n}^{(l,q)}} \Bigg) \\ &\qquad +\biggl[\frac{1}{\sqrt{n}\,}\sum_{i=1}^{\lfloor nt\rfloor} \sigma^{(r,m)}_{(i-1)\Delta_{n}}\sigma^{(q,w)}_{(i-1)\Delta_{n}} \frac{\Delta^{n}_{i}G^{(k,r,m)}}{\tau_{n}^{(k,r)}}\frac{\Delta^{n}_{i} G^{(l,q,w)}}{\tau_{n}^{(l,q)}} \\ &-\frac{1}{\sqrt{n}\,}\sum_{j=1}^{\lfloor ht\rfloor} \sigma^{(r,m)}_{(\,j-1)\Delta_{h}}\sigma^{(q,w)}_{(\,j-1)\Delta_{h}} \sum_{i\in I_{h,n}(\,j)}\frac{\Delta^{n}_{i}G^{(k,r,m)}}{\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i}G^{(l,q,w)}}{\tau_{n}^{(l,q)}}\biggr] \\ &\qquad+\mathbb{E}\bigg[\frac{\Delta^{n}_{1}G^{(k,r,m)}} {\tau_{n}^{(k,r)}}\frac{\Delta^{n}_{1}G^{(l,q,w)}}{\tau_{n}^{(l,q)}} \bigg] \\ &\times \bigg(\frac{\sqrt{n}}{h}\sum_{j=1}^{\lfloor ht\rfloor}\sigma^{(r,m)}_{(\,j-1) \Delta_{h}}\sigma^{(q,w)}_{(\,j-1)\Delta_{h}}-\frac{1}{\sqrt{n}\,} \sum_{j=1}^{\lfloor nt\rfloor}\sigma^{(r,m)}_{(\,j-1)\Delta_{n}} \sigma^{(q,w)}_{(\,j-1)\Delta_{n}}\bigg) \\ &\qquad+\mathbb{E}\bigg[\frac{\Delta^{n}_{1}G^{(k,r,m)}} {\tau_{n}^{(k,r)}}\frac{\Delta^{n}_{1}G^{(l,q,w)}}{\tau_{n}^{(l,q)}} \bigg] \\ &\times \bigg(\frac{1}{\sqrt{n}\,}\sum_{j=1}^{\lfloor nt\rfloor}\sigma^{(r,m)}_{(\,j-1) \Delta_{n}}\sigma^{(q,w)}_{(\,j-1)\Delta_{n}}-\sqrt{n}\int_{0}^{t} \sigma^{(r,m)}_{s}\sigma^{(q,w)}_{s}{\rm d} s\bigg) \\ &\phantom{:}\qquad\,{=\!:}\,A^{n}_{t} + A'^{n,h}_{t} + A''^{n,h}_{t} + D^{n}_{t}. \end{align*}

Note that $\# I_{h,n}(\,j)\in\{\lfloor {n}/{h} \rfloor, \lfloor {n}/{h} \rfloor+1 \}$ , which implies that $\# I_{h,n}(\,j)={n}/{h}+e_{h,n}(\,j) $ with $e_{h,n}(\,j)\in(-1,1]$ for every $1\leq h\leq n$ and $j\geq 1$ . Then we have

\begin{align*} \tilde{C}_{(k,r,m,l,q,w),n}&=\frac{1}{\sqrt{n}\,}\sum_{j=1}^{\lfloor ht \rfloor}\sigma^{(r,m)}_{(\,j-1)\Delta_{h}}\sigma^{(q,w)}_{(\,j-1)\Delta_{h}} \sum_{i\in I_{h,n}(\,j)}\frac{\Delta^{n}_{i}G^{(k,r,m)}}{\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i}G^{(l,q,w)}}{\tau_{n}^{(l,q)}} \\ &\quad\,-\mathbb{E}\bigg[\frac{\Delta^{n}_{1}G^{(k,r,m)}}{\tau_{n}^{(k,r)} }\frac{\Delta^{n}_{1}G^{(l,q,w)}}{\tau_{n}^{(l,q)}} \bigg]\frac{\sqrt{n}}{h}\sum_{j=1}^{\lfloor ht\rfloor}\sigma^{(r,m)}_{(\,j-1)\Delta_{h}}\sigma^{(q,w)}_{(\,j-1)\Delta_{h}} \\ &=C_{(k,r,m,l,q,w),n} \\ & +\sum_{j=1}^{\lfloor ht\rfloor}\sigma^{(r,m)}_{(\,j-1)\Delta_{h}}\sigma^{(q,w)}_{(\,j-1)\Delta_{h}} \bigg(\frac{1}{\sqrt{n}\,}\# I_{h,n}(\,j)-\frac{\sqrt{n}}{h}\bigg) \\ &\times\mathbb{E}\bigg[\frac{\Delta^{n}_{1}G^{(k,r,m)}} {\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{1}G^{(l,q,w)}}{\tau_{n}^{(l,q)}} \bigg], \end{align*}

where

\begin{align*} C_{(k,r,m,l,q,w),n}&=\frac{1}{\sqrt{n}\,}\sum_{j=1}^{\lfloor ht\rfloor} \sigma^{(r,m)}_{(\,j-1)\Delta_{h}}\sigma^{(q,w)}_{(\,j-1)\Delta_{h}} \\ &\times\sum_{i\in I_{h,n}(\,j)}\bigg(\frac{\Delta^{n}_{i}G^{(k,r,m)}}{\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{i}G^{(l,q,w)}}{\tau_{n}^{(l,q)}} -\mathbb{E}\bigg[\frac{\Delta^{n}_{1}G^{(k,r,m)}}{\tau_{n}^{(k,r)}} \frac{\Delta^{n}_{1}G^{(l,q,w)}}{\tau_{n}^{(l,q)}} \bigg]\bigg). \end{align*}

Since $\# I_{h,n}(\,j)/{\sqrt{n}}-{\sqrt{n}}/{h}={e_{h,n}(\,j)}/{\sqrt{n}}$ , the second term in the above equation (i.e. $\smash{\tilde{C}_{(k,r,m,l,q,w),n}-C_{(k,r,m,l,q,w),n}}$ ) goes to 0 almost surely as $n\rightarrow\infty$ . For completeness, we remark that in this proof the order is always $n\rightarrow\infty$ first and $h\rightarrow\infty$ afterwards. Furthermore, note that $\smash{A_{(k,r,m,l,q,w),n}+\tilde{C}_{(k,r,m,l,q,w),n}}$ gives us (4.2), and that $\smash{C_{(k,r,m,l,q,w),n}}$ is the (k, r, m, l, q, w)th element of the vector $C_{n}$ .
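The counting fact used above, $\# I_{h,n}(\,j)={n}/{h}+e_{h,n}(\,j)$ with $e_{h,n}(\,j)\in(-1,1]$ , is elementary but easy to check mechanically; a small sketch (the parameter values $n=1000$ , $h=7$ are our own, purely illustrative):

```python
from math import floor

def block_sizes(n, h, t=1.0):
    """Sizes of I_{h,n}(j) = { i : i/n in ((j-1)/h, j/h] } for j = 1, ..., floor(h t)."""
    return [sum(1 for i in range(1, floor(n * t) + 1) if (j - 1) / h < i / n <= j / h)
            for j in range(1, floor(h * t) + 1)]

sizes = block_sizes(n=1000, h=7)
```

Every block size lies in $\{\lfloor n/h\rfloor,\lfloor n/h\rfloor+1\}$ and the blocks partition $\{1,\ldots,n\}$ .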

Let $\smash{A'''^{n,h}_{t}\,:\!=A'^{n,h}_{t}+A''^{n,h}_{t}}$ . Observe that our assumptions are sufficient to apply Propositions 7.1, 7.2, and 7.5 of [Reference Granelli and Veraart17], which yield the convergence to 0 of $\smash{A^{n}_{t}}$ , $\smash{A'''^{n,h}_{t}},$ and $\smash{D^{n}_{t}}$ , respectively. In particular, Assumptions 2.1, 2.2, 4.1, and 4.2 of [Reference Granelli and Veraart17] and Assumption (CLT) of [Reference Barndorff-Nielsen, Corcuera and Podolskij7] are analogues of Assumptions 4.1, 4.2, and 4.3 together with Definition 2.2 (see also the discussion after Assumption 4.3). The only main difference is that those works additionally study the limiting object of the convergence of the correlations $\smash{r^{(n)}}$ , denoted by $\smash{\rho^{i,j}_{\vartheta}(k)}$ in [Reference Granelli and Veraart17] and by $\rho(k) $ in [Reference Barndorff-Nielsen, Corcuera and Podolskij7]. For the sake of brevity and clarity of exposition, we do not focus on this issue here, which allows us to dispense with parts of Assumption 2.2 of [Reference Granelli and Veraart17] and of Assumption (CLT) of [Reference Barndorff-Nielsen, Corcuera and Podolskij7].

Since $C_{n}$ converges stably and $A_{n}$ converges stably to 0, they jointly converge stably. Note also that $\smash{\tilde{C}_{n}-C_{n}}$ converges to 0 almost surely and so it does not affect the limit. This concludes the proof for the case ${\boldsymbol U}_{t}=\textbf{0}$ , where $\textbf{0}$ is a vector of 0s.

Now consider ${\boldsymbol U}_{t}\neq\textbf{0}$ . In order to get the stated result, we need to prove that the following elements converge uniformly on compacts in probability (u.c.p.) to 0, so that the stable convergence obtained so far in this proof remains the same. These elements are

\begin{align*} \sqrt{n}\bigg[&\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{ \Delta_{i}^{n}Z^{(k,1,1)}}{\tau^{(k,1)}_{n}}+\frac{\Delta_{i}^{n} Z^{(k,2,1)}}{\tau^{(k,2)}_{n}}+\frac{\Delta_{i}^{n}Z^{(k,1,2)}} {\tau^{(k,1)}_{n}}+\frac{\Delta_{i}^{n}Z^{(k,2,2)}}{\tau^{(k,2)}_{n}} \bigg)\Delta_{i}^{n}U^{(l)} \\ &+\bigg(\frac{\Delta_{i}^{n}Z^{(l,1,1)}}{\tau^{(l,1)}_{n}} +\frac{\Delta_{i}^{n}Z^{(l,2,1)}}{\tau^{(l,2)}_{n}}+\frac{\Delta_{i}^{n} Z^{(l,1,2)}}{\tau^{(l,1)}_{n}}+\frac{\Delta_{i}^{n}Z^{(l,2,2)}} {\tau^{(l,2)}_{n}}\bigg)\Delta_{i}^{n}U^{(k)} \\ &+\Delta_{i}^{n}U^{(k)} \Delta_{i}^{n}U^{(l)}\bigg]_{k,l=1,2}. \end{align*}

Thanks to Assumption 4.4 (with $p=2$ ) they go to 0 u.c.p. componentwise (i.e. for fixed k, l) and, hence, jointly. Thus, using the properties of the stable convergence, the proof for the case $p=2$ is complete.

Now, let $p\geq 2$ . The proof for this case follows from the same arguments as just presented for the case $p=2$ . Indeed, taking $p\geq 2$ changes the dimension of the objects considered (in particular, see the matrix ${\boldsymbol D}$ and $V_{s}$ in Appendix A), but it does not affect the logic of the arguments in any way.

It is possible to obtain a vech formulation of our results, thus reducing their dimensions without losing any information. This is possible because of the symmetry of our object of study; that is, there is no difference between

$$ \begin{equation*} \frac{\Delta_{i}^{n}G^{(k,r,m)}}{\tau^{(k,r)}_{n}}\frac{\Delta_{i}^{n} G^{(l,q,w)}}{\tau^{(l,q)}_{n}} \quad\text{and}\quad \frac{\Delta_{i}^{n}G^{(l,q,w)}}{\tau^{(l,q)}_{n}}\frac{\Delta_{i}^{n} G^{(k,r,m)}}{\tau^{(k,r)}_{n}} \end{equation*} $$

and between

$$ \begin{equation*} \frac{\Delta_{i}^{n}Z^{(k,r,m)}}{\tau^{(k,r)}_{n}}\frac{\Delta_{i}^{n} Z^{(l,q,w)}}{\tau^{(l,q)}_{n}} \quad\text{and}\quad \frac{\Delta_{i}^{n}Z^{(l,q,w)}}{\tau^{(l,q)}_{n}}\frac{\Delta_{i}^{n} Z^{(k,r,m)}}{\tau^{(k,r)}_{n}}. \end{equation*} $$

Hence, we have the following formulation of our results.

Corollary 4.1

Under Assumptions 4.1, 4.2, and 4.3 applied to $\smash{\tau_{n}^{(l,q)},r_{k,r;l,q}^{(n)}}$ for $l,q,k,r=1,\ldots, p$ , and Assumption 4.4, we have the stable convergence

\begin{align*} &\bigg\{\sqrt{n}\bigg[\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \bigg(\sum_{r,m=1}^{p}\frac{\Delta_{i}^{n}Z^{(k,r,m)}}{\tau^{(k,r)}_{n}} +\Delta_{i}^{n}U^{(k)}\bigg)\bigg(\sum_{q,w=1}^{p}\frac{\Delta_{i}^{n} Z^{(l,q,w)}}{\tau^{(l,q)}_{n}}+\Delta_{i}^{n}U^{(l)}\bigg) \\ &-\sum_{r,m,q,w=1}^{p}\mathbb{E}\bigg[\frac{\Delta_{1}^{n} G^{(k,r,m)}} {\tau^{(k,r)}_{n}}\frac{\Delta_{1}^{n}G^{(l,q,w)}} {\tau^{(l,q)}_{n}}\bigg] \int_{0}^{t}\sigma^{(r,m)}_{s}\sigma^{(q,w)}_{s} {\rm d} s \bigg]_{k=1,\ldots, p;l\leq k} \bigg\}_{t\in[0,T]} \\ &\qquad \xrightarrow{\text{st}} \bigg\{\int_{0}^{t}V_{s} {\boldsymbol D}^{1/2}{\rm d} B_{s}\bigg\}_{t\in[0,T]} \end{align*}

in $\mathcal{D}([0,T],\mathbb{R}^{p(\,p+1)/2}) $ , where ${\boldsymbol D}$ and $V_{s}$ are introduced in Appendix A, and $B_{s}$ is a $p^{5}(\,p+1)/2$ -dimensional Brownian motion.

Proof. It follows from the same arguments as used in the proof of Theorem 4.1. Indeed, in Theorem 4.1 we had $k,l=1,\ldots, p$ , but now we have $k=1,\ldots,p$ and $l\leq k$ . As before, this does change the dimension of the objects considered (in particular, see the matrix ${\boldsymbol D}$ and $V_{s}$ in Appendix A), but not the logic of the arguments.

Remark 4.2

Note that ${p^{5}}(\,p+1)/{2}$ comes from $\smash{p^{4}(\,p^{2}-\sum_{j=1}^{p-1}j)},$ where $\smash{\sum_{j=1}^{p-1}j}$ indicates that we are not considering the strictly upper (or lower) triangular elements of the $p\times p$ matrix. Moreover, in the remaining sections and subsections we will always adopt the vech formulation of our results.
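The dimension count in the remark can be verified mechanically; a quick sketch:

```python
def dim_vech(p):
    """p^4 combinations (r, m, q, w) times the p(p+1)/2 pairs (k, l) with l <= k."""
    return p ** 4 * (p ** 2 - sum(range(1, p)))

# matches the closed form p^5 (p + 1) / 2 for every p
checks = [dim_vech(p) == p ** 5 * (p + 1) // 2 for p in range(1, 11)]
```

For instance, $p=2$ gives $16\cdot 3=48=2^{5}\cdot 3/2$ .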

4.2. Case II: first scenario

Although the process ${\boldsymbol Y}_{t}$ is the same as in the previous section, we introduce a new formulation of the scaling factors $\tau$ . This formulation is in line with the one presented in Section 3.2. Furthermore, in this section we present and prove the results for one of the two versions of the multivariate BSS process introduced in Definition 2.2. In the next section we will do the same for the other version.

Consider the stochastic process $\{{\boldsymbol Y}_{t}\}_{t\in[0,T]}=\smash{\{(Y_{t}^{(1)},\ldots, Y_{t}^{(\,p)})\}_{t\in[0,T]}}$ given by

$$ \begin{equation*} {\boldsymbol Y}_{t}=\int_{-\infty}^{t} \begin{pmatrix} g^{(1,1)}(t-s) & \cdots & g^{(1,p)}(t-s) \\ \vdots & \ddots & \vdots \\ g^{(\,p,1)}(t-s) & \cdots & g^{(\,p,p)}(t-s) \end{pmatrix} \begin{pmatrix} \sigma^{(1,1)}_{s} & \cdots & \sigma^{(1,p)}_{s} \\ \vdots & \ddots & \vdots \\ \sigma^{(\,p,1)}_{s} & \cdots & \sigma^{(\,p,p)}_{s} \end{pmatrix} \begin{pmatrix} {\rm d} W^{(1)}_{s} \\ \vdots \\ {\rm d} W^{(\,p)}_{s} \end{pmatrix}+ \begin{pmatrix} U^{(1)}_{t} \\ \vdots \\ U^{(\,p)}_{t} \end{pmatrix}. \end{equation*} $$

Assume that the $\mathcal{F}_{t}$ -Brownian measures are all independent of each other. The Gaussian core is given by $\{{\boldsymbol G}_{t}\}_{t\in[0,T]}$ with

$$ \begin{equation*} {\boldsymbol G}_{t}=(G^{(k,r,m)}_{t} )_{k,r,m=1,\ldots, p}=\bigg(\int_{-\infty}^{t}g^{(k,r)}(t-s){\rm d} W_{s}^{(m)} \bigg)_{k,r,m=1,\ldots, p}. \end{equation*} $$

Let us define, for $k=1,\ldots, p$ , the sequence $\smash{(\bar{\tau}_{n}^{(k)})_{n\in\mathbb{N}}}$ such that, for each $n\in\mathbb{N}$ ,

$$ \begin{equation*} \max_{r=1,\ldots, p}{\tau_{n}^{(k,r)}}={O(\bar{\tau}_{n}^{(k)} }). \end{equation*} $$

For example, we may assume that

$$ \begin{equation*} \bar{\tau}_{n}^{(k)}\,:\!={\sqrt{\mathbb{E}\bigg[\bigg(\sum_{r=1}^{p} \Delta^{n}_{1}G^{(k,r,r)}\bigg)^{2}\bigg]}} \quad\text{or that}\quad \bar{\tau}_{n}^{(k)}\,:\!=\max_{r=1,\ldots, p}{\sqrt{\mathbb{E} [(\Delta^{n}_{1}G^{(k,r,r)})^{2}]}}. \end{equation*} $$

Note that in the previous expressions we focused on (k, r, r) instead of (k, r, m). This is because, by the independence of the Brownian measures, we obtain

$$ \begin{equation*} \mathbb{E}\bigg[\frac{\Delta^{n}_{1}G^{(k,r,m)}}{\bar{\tau}_{n}^{(k)}} \frac{\Delta^{n}_{1+h}G^{(l,q,w)}}{\bar{\tau}_{n}^{(l)}}\bigg ]=0 \end{equation*} $$

whenever $m\neq w$ ; hence, it is sufficient to define the scaling factors just in terms of (k, r, r). Moreover, for $h\in\mathbb{N}$ , let

$$ \begin{equation*} \bar{r}_{k,r,m;l,q,w}^{(n)}(h)\,:\!=\mathbb{E} \bigg[\dfrac{\Delta^{n}_{1}G^{(k,r,m)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{1+h}G^{(l,q,w)}}{\bar{\tau}_{n}^{(l)}}\bigg], \end{equation*} $$

and observe that if $m\neq w$ then $\smash{\bar{r}_{k,r,m;l,q,w}^{(n)}(h)}=0$ .
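The vanishing of these cross-moments for $m\neq w$ can also be seen in simulation: discretized moving averages driven by independent Brownian increments have, up to Monte Carlo error, uncorrelated increments across drivers. A rough sketch of our own, with an illustrative discretized gamma-type kernel (all parameter values are arbitrary choices for the experiment):

```python
import numpy as np

rng = np.random.default_rng(1)
n, paths, dt = 512, 2000, 1.0 / 512
lag = np.arange(1, 64) * dt
gw = lag ** (-0.2) * np.exp(-lag) * dt              # discretized kernel weights

def ma_increments(dB):
    """Increments of the moving average sum_j g_j dB_{i-j}, a Riemann proxy for G."""
    x = np.array([np.convolve(row, gw)[:n] for row in dB])
    return np.diff(x, axis=1)

a = ma_increments(rng.normal(0.0, np.sqrt(dt), (paths, n)))   # driven by W^(1)
b = ma_increments(rng.normal(0.0, np.sqrt(dt), (paths, n)))   # driven by W^(2)
cross = float(np.mean(a * b))   # estimates E[Delta G^(k,r,1) Delta G^(l,q,2)] = 0
same = float(np.mean(a * a))    # strictly positive second moment
```

The cross-moment estimate is negligible relative to the same-driver second moment, in line with the display above.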

Assumption 4.5

For any $k,l=1,\ldots, p$ , let

$$ \begin{equation*} \frac{1}{\sqrt{n}\,}\sum_{i=1}^{\lfloor nt\rfloor}\frac{\Delta_{i}^{n}(Y^{(k)}-U^{(k)})}{\bar{\tau}^{(k)}_{n}} \frac{\Delta_{i}^{n}U^{(l)}}{\bar{\tau}^{(l)}_{n}}\xrightarrow{\text{u.c.p.}} 0 \quad\text{and}\quad \frac{1}{\sqrt{n}\,}\sum_{i=1}^{\lfloor nt\rfloor}\frac{\Delta_{i}^{n}U^{(k)}}{\bar{\tau}^{(k)}_{n}} \frac{\Delta_{i}^{n}U^{(l)}}{\bar{\tau}^{(l)}_{n}}\xrightarrow{\text{u.c.p.}} 0. \end{equation*} $$

Observe that the above assumption is similar to Assumption 4.4; the only difference is that the increments of the drift are now divided by the scaling factors, owing to the different formulation of the theorems: the process $Y_{t}$ now contains the drift $U_{t}$ .

Theorem 4.2

Under Assumptions 4.1, 4.2, and 4.3 applied to $\smash{\bar{\tau}_{n}^{(l)},\bar{r}_{k,r,m;l,q,m}^{(n)}}$ for $l,q,k,r, m=1,\ldots, p$ , and Assumption 4.5, we have the stable convergence

\begin{align*} &\bigg\{\sqrt{n}\bigg[\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \frac{\Delta_{i}^{n}Y^{(k)}}{\bar{\tau}^{(k)}_{n}}\frac{\Delta_{i}^{n} Y^{(l)}}{\bar{\tau}^{(l)}_{n}}-\sum_{r,m,q=1}^{p}\mathbb{E} \bigg[\frac{\Delta_{1}^{n}G^{(k,r,m)}}{\bar{\tau}^{(k)}_{n}} \frac{\Delta_{1}^{n}G^{(l,q,m)}}{\bar{\tau}^{(l)}_{n}}\bigg] \\ &\times\int_{0}^{t}\sigma^{(r,m)}_{s}\sigma^{(q,m)}_{s}{\rm d} s\bigg]_{k=1,\ldots, p;l\leq k}\bigg\}_{t\in[0,T]} \\ &\qquad \xrightarrow{\text{st}} \bigg\{\int_{0}^{t}V_{s}{\boldsymbol D}^{1/2}{\rm d} B_{s}\bigg\}_{t\in[0,T]} \end{align*}

in $\mathcal{D}([0,T],\smash{\mathbb{R}^{{p}(\,p+1)/{2}}}) $ , where ${\boldsymbol D}$ and $V_{s}$ are introduced in Appendix A, and $B_{s}$ is a ${p^{5}}(\,p+1)/{2}$ -dimensional Brownian motion.

Proof. It follows from the same arguments as used in the proof of Theorem 4.1; the only difference is the use of Theorem 3.2 instead of Theorem 3.1. In particular, the elements that converge to 0 (which we called $A_{n}$ in the proof of Theorem 4.1) still converge to 0 since we are using a larger denominator. For the other elements (which we called $C_{n}$ ), we obtain the stated convergence from Theorem 3.2.

4.3. Case II: second scenario

In this section we present and prove the results for the other version of the multivariate BSS process introduced in Definition 2.2. In addition, at the price of a simple assumption on the stochastic volatilities, this new form allows for a definition of $\tau_{n}$ in terms of the BSS process $\{{\boldsymbol X}_{t}\}_{t\in[0,T]}$ (see below) and not in terms of the Gaussian process $\{{\boldsymbol G}_{t}\}_{t\in[0,T]}$ .

Consider the stochastic process $\{{\boldsymbol X}_{t}\}_{t\in[0,T]}=\smash{\{(X_{t}^{(1)},\ldots, X_{t}^{(\,p)})\}_{t\in[0,T]}}$ given by

(4.3) $$ \begin{equation} {\boldsymbol X}_{t}=\int_{-\infty}^{t} \begin{pmatrix} g^{(1,1)}(t-s)\sigma^{(1,1)}_{s} &\cdots & g^{(1,p)}(t-s)\sigma^{(1,p)}_{s} \\ \vdots & \ddots & \vdots \\ g^{(\,p,1)}(t-s)\sigma^{(\,p,1)}_{s} & \cdots & g^{(\,p,p)}(t-s)\sigma^{(\,p,p)}_{s} \end{pmatrix} \begin{pmatrix} {\rm d} W^{(1)}_{s} \\\vdots\\ {\rm d} W^{(\,p)}_{s} \end{pmatrix}+ \begin{pmatrix} U^{(1)}_{t} \\\vdots\\ U^{(\,p)}_{t} \end{pmatrix}. \label{eqn11} \end{equation} $$

Assume that the $\mathcal{F}_{t}$ -Brownian measures are all independent of each other and of the drift process, and that the volatilities $\smash{\sigma_{s}^{(k,m)}}$ , $k,m=1,\ldots, p$ , are second-order stationary. Let us define, for $k=1,\ldots, p$ , $\tilde{\tau}_{n}^{(k)}\,:\!=\smash{\sqrt{\mathbb{E}[(\Delta^{n}_{1} X^{(k)})^{2}]}}$ . Following Example 3.1, it is possible to observe that $\smash{\tilde{\tau}_{n}^{(k)}}$ is similar to $\smash{\tau_{n}^{(\alpha_{h})}}$ introduced in (3.6) (except for the drift component). The Gaussian core is given by $\{{\boldsymbol G}_{t}\}_{t\in[0,T]}$ with ${\boldsymbol G}_{t}=\smash{(G^{(k,m)}_{t} )_{k,m=1,\ldots, p}} =(\smash{\int_{-\infty}^{t}g^{(k,m)}(t-s)}{\rm d} \smash{W_{s}^{(m)}})_{k,m=1,\ldots, p}$ , and the partition is in the k variables, namely we split $\{1,\ldots, p\}^{2}$ into $\{1\}\times\{1,\ldots, p\},\ldots, \{p\}\times\{1,\ldots, p\}$ . Note that, for each element of the partition, the associated Brownian measures are independent of each other. Furthermore, we define

$$ \begin{equation*} \tilde{r}_{k,m;l,w}^{(n)}(h)\,:\!=\mathbb{E}\bigg[\dfrac{\Delta^{n}_{1} G^{(k,m)}}{\tilde{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{1+h}G^{(l,w)}} {\tilde{\tau}_{n}^{(l)}} \bigg]; \end{equation*} $$

observe that if $m\neq w$ then $\smash{\tilde{r}_{k,m;l,w}^{(n)}(h)}=0$ , and that, as $n\rightarrow\infty$ , the order of $\smash{\tilde{\tau}_{n}^{(k)}}$ is greater than or equal to that of $\smash{\tau_{n}^{(k,r)}}$ for any $r=1,\ldots,p$ .

Theorem 4.3

Under Assumptions 4.1, 4.2, and 4.3 applied to $\smash{\tilde{\tau}_{n}^{(l)},\tilde{r}_{k,m;l,m}^{(n)}}$ and Assumption 4.5 applied to $\smash{X^{(k)}}$ for $l,k,m=1,\ldots, p$ , we have the stable convergence

\begin{align*} &\bigg\{\sqrt{n}\bigg[\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor }\frac{\Delta_{i}^{n}X^{(k)}}{\tilde{\tau}_{n}^{(k)}} \frac{\Delta_{i}^{n}X^{(l)}}{\tilde{\tau}_{n}^{(l)}} -\sum_{m=1}^{p}\mathbb{E}\bigg[\frac{\Delta_{1}^{n}G^{(k,m)}} {\tilde{\tau}_{n}^{(k)}}\frac{\Delta_{1}^{n}G^{(l,m)}} {\tilde{\tau}_{n}^{(l)}} \bigg] \\ &\times\int_{0}^{t}\sigma^{(k,m)}_{s}\sigma^{(l,m)}_{s}{\rm d} s\bigg]_{k=1,\ldots, p;l\leq k}\Bigg\}_{t\in[0,T]} \\ &\qquad \xrightarrow{\text{st}} \bigg\{\int_{0}^{t}V_{s}{\boldsymbol D}^{1/2}{\rm d} B_{s}\bigg\}_{t\in[0,T]} \end{align*}

in $\mathcal{D}([0,T],\smash{\mathbb{R}^{{p}(\,p+1)/{2}}}) $ , where ${\boldsymbol D}$ and $V_{s}$ are introduced in Appendix A and $B_{s}$ is a ${p^{3}}(\,p+1)/{2}$ -dimensional Brownian motion.

Proof. It follows from the arguments of Theorem 4.1 and the results of Theorem 3.2. In particular, in the present case, the converging element is given by

\begin{align*} C_{n}&=\frac{1}{\sqrt{n}\,}\bigg(\sum_{j=1}^{\lfloor ht\rfloor} \sum_{i\in I_{h,n}(\,j)}\sum_{m=1}^{p}\bigg(\frac{\Delta^{n}_{i}G^{(k,m)}} {\tilde{\tau}_{n}^{(k)}}\frac{\Delta^{n}_{i}G^{(l,m)}} {\tilde{\tau}_{n}^{(l)}} -\mathbb{E}\bigg[\frac{\Delta^{n}_{i}G^{(k,m)}} {\tilde{\tau}_{n}^{(k)}} \frac{\Delta^{n}_{i}G^{(l,m)}} {\tilde{\tau}_{n}^{(l)}} \bigg]\bigg) \\ &\times\sigma^{(k,m)}_{(\,j-1)\Delta_{h}}\sigma^{(l,m)}_{(\,j-1) \Delta_{h}} \bigg)_{k,l=1,\ldots, p;l\leq k}.\tag*{\qedhere} \end{align*}

Remark 4.3

It is possible to obtain a result similar to Theorem 4.3 using a formulation of the $\tau$ which is not written in terms of the $\sigma$ (hence without the assumptions, required for Theorem 4.3, of independence of the Brownian measures and on the expected squared values of the $\sigma$ s). This could be obtained by proceeding as in the previous section (i.e. case II, first scenario). So why did we introduce this formulation of the $\tau$ ? The reason is the novel possibility, provided by the multidimensional structure of the BSS process presented in this section (see (4.3)), of writing $\tau$ directly in terms of X. The benefit is that, given an estimate of $\smash{\mathbb{E}[(\Delta^{n}_{1}X^{(k)})^{2}]}$ , Theorem 4.3 becomes a feasible CLT.
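To illustrate this feasibility point, here is a minimal sketch (in Python; the function name and the simulated path are our own illustrative assumptions, not part of the paper) of estimating $\mathbb{E}[(\Delta^{n}_{1}X^{(k)})^{2}]$ , and hence $\tilde{\tau}_{n}^{(k)}$ , by averaging the squared observed increments of a single component, which is natural under second-order stationarity of the increments.

```python
import numpy as np

def estimate_tau(x):
    """Estimate tilde-tau_n = sqrt(E[(Delta_1^n X)^2]) from one observed path
    x of X sampled on a grid of mesh 1/n, by averaging the squared increments
    (a natural estimator under second-order stationarity of the increments)."""
    increments = np.diff(x)  # Delta_i^n X for i = 1, ..., len(x) - 1
    return np.sqrt(np.mean(increments ** 2))

# Toy check: for a standard Brownian path, E[(Delta_1^n X)^2] = 1/n,
# so the estimate should be close to 1/sqrt(n) for large n.
rng = np.random.default_rng(0)
n = 100_000
x = np.cumsum(rng.normal(scale=1 / np.sqrt(n), size=n))
tau_hat = estimate_tau(x)
assert abs(tau_hat - 1 / np.sqrt(n)) < 0.1 / np.sqrt(n)
```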

4.4. WLLN

From the CLTs proved in the previous sections, it is possible to derive the WLLN. First, we present the following lemma, which follows from the definition of uniform convergence in probability.

Lemma 4.1

Consider p real-valued stochastic processes $\smash{\{H^{(1)}_{t}\}_{t\in[0,T]},\ldots, \{H^{(\,p)}_{t}\}_{t\in[0,T]}}$ . Furthermore, consider sequences of stochastic processes $\smash{\{H^{(i),n}_{t}\}_{t\in[0,T]}}$ , $n\in\mathbb{N}$ , such that ${H^{(i),n}_{t}\!\! \xrightarrow{\text{u.c.p.}}\! H^{(i)}_{t}}$ for $i=1,\ldots, p$ . Then ${(H^{(1),n}_{t},\ldots, H^{(\,p),n}_{t}) \xrightarrow{\text{u.c.p.}} (H^{(1)}_{t},\ldots, H^{(\,p)}_{t})}$ .

Proof. Consider the case $p=2$ , since for $p>2$ the proof uses exactly the same arguments. Let $t\in[0,T],$ and let $|\cdot|$ denote the Euclidean norm. We have

\begin{align*} &\mathbb{P}\Big(\sup_{s\in[0,t]}|(H^{(1),n}_{s},H^{(2),n}_{s}) -(H^{(1)}_{s},H^{(2)}_{s})|>\epsilon\Big) \\ &\qquad\leq \mathbb{P}\Big(\sup_{s\in[0,t]}(|H^{(1),n}_{s}-H^{(1)}_{s}|+ |H^{(2),n}_{s}-H^{(2)}_{s}|)>\epsilon\Big) \\ &\qquad\leq \mathbb{P}\bigg(\sup_{s\in[0,t]}|H^{(1),n}_{s}-H^{(1)}_{s}| >\frac{\epsilon}{2}\bigg)+\mathbb{P}\bigg(\sup_{s\in[0,t]}|H^{(2),n}_{s} -H^{(2)}_{s}|>\frac{\epsilon}{2}\bigg) \\ &\qquad\rightarrow 0\quad\text{as $n\rightarrow\infty$.}\tag*{\qedhere} \end{align*}

We will derive the WLLN for the first scenario of case II and point out that, using the same arguments, it is possible to obtain similar results for all the CLTs presented in this work.

Let $\bar{r}_{k,r,m;l,q,w}(h)\,:\!=\smash{\lim_{n\rightarrow\infty} \bar{r}^{(n)}_{k, r,m;l,q,w}(h)}$ and $\tilde{r}_{k,m;l,w}(h)\,:\!=\smash{\lim_{n\rightarrow\infty} \tilde{r}^{(n)}_{k,m;l,w}(h)}$ for $k,r,m,l,q,w=1,\ldots, p$ .

Theorem 4.4

Let the assumptions of Theorem 4.2 hold. Then we have

\begin{align*} & \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor }\frac{\Delta_{i}^{n}Y^{(k)}}{\bar{\tau}^{(k)}_{n}} \frac{\Delta_{i}^{n}Y^{(l)}}{\bar{\tau}^{(l)}_{n}}\bigg)_{k=1,\ldots, p;l\leq k} \\ &\qquad \xrightarrow{\text{u.c.p.}} \bigg(\sum_{r,m,q=1}^{p}\bar{r}_{k,r,m;l,q,m}(0) \int_{0}^{t} \sigma^{(r,m)}_{s}\sigma^{(q,m)}_{s}{\rm d} s\bigg)_{k=1,\ldots, p;l\leq k} \end{align*}

Proof. Fix $l,k\in\{1,\ldots, p\}$ . From Theorem 4.2,

$$ \begin{equation*} \sqrt{n}\bigg[\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \frac{\Delta_{i}^{n}Y^{(k)}}{\bar{\tau}^{(k)}_{n}}\frac{\Delta_{i}^{n}Y^{(l)}} {\bar{\tau}^{(l)}_{n}}-\sum_{r,m,q=1}^{p}\mathbb{E} \bigg[\frac{\Delta_{1}^{n}G^{(k,r,m)}}{\bar{\tau}^{(k)}_{n}} \frac{\Delta_{1}^{n}G^{(l,q,m)}}{\bar{\tau}^{(l)}_{n}}\bigg] \int_{0}^{t}\sigma^{(r,m)}_{s}\sigma^{(q,m)}_{s}{\rm d} s\bigg] \end{equation*} $$

converges in distribution. Now, by Slutsky’s theorem, we have

$$ \begin{equation*} \bigg[\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\frac{\Delta_{i}^{n}Y^{(k)}}{\bar{\tau}^{(k)}_{n}} \frac{\Delta_{i}^{n}Y^{(l)}}{\bar{\tau}^{(l)}_{n}}-\sum_{r,m,q=1}^{p} \mathbb{E}\bigg[\frac{\Delta_{1}^{n}G^{(k,r,m)}}{\bar{\tau}^{(k)}_{n}} \frac{\Delta_{1}^{n}G^{(l,q,m)}}{\bar{\tau}^{(l)}_{n}}\bigg]\int_{0}^{t} \sigma^{(r,m)}_{s}\sigma^{(q,m)}_{s}{\rm d} s\bigg] \xrightarrow{\text{D}} 0, \end{equation*} $$

which implies that

$$ \begin{equation*} \bigg[\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\frac{\Delta_{i}^{n}Y^{(k)}}{\bar{\tau}^{(k)}_{n}} \frac{\Delta_{i}^{n}Y^{(l)}}{\bar{\tau}^{(l)}_{n}}-\sum_{r,m,q=1}^{p} \mathbb{E}\bigg[\frac{\Delta_{1}^{n}G^{(k,r,m)}}{\bar{\tau}^{(k)}_{n}} \frac{\Delta_{1}^{n}G^{(l,q,m)}}{\bar{\tau}^{(l)}_{n}}\bigg] \int_{0}^{t}\sigma^{(r,m)}_{s}\sigma^{(q,m)}_{s}{\rm d} s\bigg] \xrightarrow{\text{P}} 0. \end{equation*} $$

Then, by the triangle inequality, it follows that

$$ \begin{equation*} \frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor }\frac{\Delta_{i}^{n}Y^{(k)}}{\bar{\tau}^{(k)}_{n}} \frac{\Delta_{i}^{n}Y^{(l)}}{\bar{\tau}^{(l)}_{n}} \xrightarrow{\text{P}} \sum_{r,m,q=1}^{p}\bar{r}_{k,r,m;l,q,m}(0)\int_{0}^{t} \sigma^{(r,m)}_{s}\sigma^{(q,m)}_{s}{\rm d} s. \end{equation*} $$

The uniform convergence in probability follows from Remark 4.25 of [Reference Granelli16], while the joint uniform convergence follows from Lemma 4.1.

Remark 4.4

Similar WLLNs, corresponding to all the other CLTs presented in this work, including those for multivariate Gaussian processes with stationary increments, can be derived using the same arguments as above.

5. Feasible results

In all the limit theorems presented above, we considered scaling of the increments of the corresponding process X, Y, or G by $\tau_{n}$ . However, in empirical applications $\tau_{n}$ would not be known, which makes the limit theorems infeasible in the sense that they are not computable from empirical data. We now move on to derive related feasible results which can be implemented in empirical applications. To this end, we need to get rid of the $\tau_{n}$ s. A natural way of doing this is to consider suitable ratios of the statistics considered above, so that the scaling factors cancel out.

In this section we focus on two kinds of feasible results, which differ in the type of ratio considered: the correlation ratio and the relative covolatility. For the latter, see [Reference Barndorff-Nielsen, Pakkanen and Schmiegel8]. We show feasible results for both scenarios of case II. Moreover, we present the results using the vech formulation; similar results hold for the general formulation. The reason we focus only on case II is that, for case I, it is not possible to get rid of the scaling factor $\tau$ (except in trivial cases) and, hence, to obtain feasible limit theorems. This is one of the main benefits of the introduction of case II.
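The cancellation of the unknown scaling factors in such ratios can be sketched as follows (a minimal Python illustration with hypothetical simulated paths; the function name is ours, not from the paper). The correlation-ratio statistic below is invariant under rescaling of each component, which is exactly why the $\bar{\tau}_{n}^{(k)}$ s drop out.

```python
import numpy as np

def correlation_ratio(yk, yl):
    """Feasible correlation-ratio statistic at the terminal time:
    sum_i dY^k_i dY^l_i / sqrt(sum_i (dY^k_i)^2 * sum_i (dY^l_i)^2).
    The unknown scalings tau_n^(k), tau_n^(l) cancel in this ratio."""
    dk, dl = np.diff(yk), np.diff(yl)
    return np.sum(dk * dl) / np.sqrt(np.sum(dk ** 2) * np.sum(dl ** 2))

# The ratio is invariant under rescaling of either component,
# mimicking the cancellation of the scaling factors tau_n.
rng = np.random.default_rng(1)
yk = np.cumsum(rng.normal(size=1000))
yl = np.cumsum(rng.normal(size=1000))
r1 = correlation_ratio(yk, yl)
r2 = correlation_ratio(3.7 * yk, 0.2 * yl)
assert abs(r1 - r2) < 1e-9
```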

Remark 5.1

From the feasible results developed in this section, it is possible to obtain estimates for the mean of our process; thus, we have ‘first-order’ feasible results. It is an open question whether it is possible to obtain ‘second-order’ feasible results, namely estimates for the asymptotic covariance. For the univariate BSS process, this question has been solved for the power variation case in [Reference Barndorff-Nielsen, Pakkanen and Schmiegel8], but it remains open for the multipower variation case.

In this section we make considerable use of certain random variables; in order to simplify the exposition, we adopt the following notation. For any $l,k=1,\ldots, p$ , we define

\begin{align*} \bar{R}^{(k,l)}_{t,n}&\,:\!=\sum_{r,m,q=1}^{p}\mathbb{E}\bigg[\frac{ \Delta_{1}^{n}G^{(k,r,m)}}{\bar{\tau}^{(k)}_{n}}\frac{\Delta_{1}^{n} G^{(l,q,m)}}{\bar{\tau}^{(l)}_{n}}\bigg]\int_{0}^{t}\sigma^{(r,m)}_{s} \sigma^{(q,m)}_{s}{\rm d} s, \\ \bar{R}^{(k,l)}_{t}&\,:\!=\sum_{r,m,q=1}^{p}\bar{r}_{k,r,m;l,q,m}(0) \int_{0}^{t}\sigma^{(r,m)}_{s}\sigma^{(q,m)}_{s}{\rm d} s, \\ \tilde{R}^{(k,l)}_{t,n}&\,:\!=\sum_{m=1}^{p}\mathbb{E}\bigg[\frac{ \Delta_{1}^{n}G^{(k,m)}}{\tilde{\tau}_{n}^{(k)}}\frac{\Delta_{1}^{n} G^{(l,m)}}{\tilde{\tau}_{n}^{(l)}}\bigg]\int_{0}^{t}\sigma^{(k,m)}_{s} \sigma^{(l,m)}_{s}{\rm d} s, \\ \text{and}\quad \tilde{R}^{(k,l)}_{t}&\,:\!=\sum_{m=1}^{p}\tilde{r}_{k,m;l,m} (0)\int_{0}^{t}\sigma^{(k,m)}_{s}\sigma^{(l,m)}_{s}{\rm d} s. \end{align*}

5.1. Correlation ratio

In this section, by a slight abuse of notation, we will consider $\smash{\bar{\tau}_{n}^{(k)}=\sqrt{\mathbb{E}[(\Delta^{n}_{1} Y^{(k)})^{2}]}}$ ,

$$ \begin{equation*} \bar{r}_{k,m;l,w}^{(n)}(h)=\mathbb{E}\bigg[\dfrac{\Delta^{n}_{1}G^{(k,m)}} {\bar{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{1+h}G^{(l,w)}}{\bar {\tau}_{n}^{(l)}} \bigg], \end{equation*} $$

and $\bar{r}_{k,m;l,w}(h)=\smash{\lim_{n\rightarrow\infty}\bar{r}^{(n)}_{k, m;l,w}(h)}$ , for $k,m,l,w=1,\ldots, p$ . Moreover, observe that, for $k,m=1,\ldots, p$ ,

$$ \begin{equation*} \sum_{r,q=1}^{p}\mathbb{E}\bigg[\frac{\Delta_{1}^{n}G^{(k,r,m)}} {\bar{\tau}^{(k)}_{n}}\frac{\Delta_{1}^{n}G^{(k,q,m)}} {\bar{\tau}^{(k)}_{n}}\bigg]\int_{0}^{t}\sigma^{(r,m)}_{s} \sigma^{(q,m)}_{s}{\rm d} s\geq 0, \end{equation*} $$

since it is a quadratic form. Therefore, for $k=1,\ldots, p,$ we have $\smash{\bar{R}^{(k,k)}_{t,n}}\geq 0$ and $\smash{\bar{R}^{(k,k)}_{t}}\geq 0$ (and equals 0 only in the trivial case). The same applies to $\smash{\tilde{R}^{(k,k)}_{t,n}}$ and $\smash{\tilde{R}^{(k,k)}_{t}}$ .

Before presenting the main results of this section, we introduce the following lemma, which is a generalisation of the functional delta method (see Chapter 3.9 of [Reference Van der Vaart and Wellner24]) and of Proposition 2 of [Reference Podolskij and Vetter21].

Lemma 5.1

Let $\mathbb{D}$ and $\mathbb{E}$ be metrizable topological vector spaces and $r_{n}$ constants such that $r_{n}\rightarrow\infty$ as $n\rightarrow\infty$ . Let $\phi\colon \mathbb{D}_{\phi}\subset \mathbb{D}\rightarrow\mathbb{E}$ be continuous and satisfy

(5.1) $$ \begin{equation} r_{n}(\phi(\theta_{n}+r_{n}^{-1}h_{n})-\phi(\theta_{n}))\rightarrow \phi'(\theta,h) \label{eqn12} \end{equation} $$

for all converging sequences $\theta_{n}$ , $h_{n}$ with $h_{n}\rightarrow h\in\mathbb{D}_{0}\subset\mathbb{D}$ , $\theta_{n}\rightarrow \theta\in\mathbb{D}_{\phi}$ and with $\theta_{n} + r_{n}^{-1}h_{n}\in\mathbb{D}_{\phi}$ for all n, and some arbitrary map $\phi'(\theta,h) $ from $\mathbb{D}_{\phi}\times\mathbb{D}_{0}$ to $\mathbb{E}$ . If $Y_{n},\bar{Y}_{n}\colon \Omega_{n}\rightarrow\mathbb{D}_{\phi}$ are maps with $\smash{{Y_{n} \xrightarrow{\text{P}} Y}}$ and $\smash{r_{n}(Y_{n}-\bar{Y}_{n}) \xrightarrow{\text{st}} X}$ , where X is separable and takes its values in $\mathbb{D}_{0}$ , then ${r_{n}(\phi(Y_{n})-\phi(\bar{Y}_{n})) \xrightarrow{\text{st}} \phi'(Y,X)}$ .

Proof. Since $\smash{r_{n}(Y_{n}-\bar{Y}_{n})\xrightarrow{\text{st}} X}$ and $r_{n}\rightarrow\infty$ , we have $\smash{|Y_{n}-\bar{Y}_{n}|\xrightarrow{\text{P}} 0}$ and, given that $\smash{Y_{n}\xrightarrow{\text{P}} Y}$ , applying the triangle inequality we deduce that $\smash{\bar{Y}_{n}\xrightarrow{\text{P}} Y}$ . Hence, we have $ (Y_{n},\bar{Y}_{n})\xrightarrow{\text{P}}(Y,Y) $ and, using the properties of stable convergence, ${(\bar{Y}_{n},r_{n}(Y_{n}-\bar{Y}_{n}))\xrightarrow{\text{st}} (Y,X)}$ .

Now, for each n, define the map $g_{n}(\theta_{n},h_{n})\,:\!=r_{n}(\phi(\theta_{n}+r_{n}^{-1}h_{n}) -\phi(\theta_{n})) $ on $\{(\theta_{n},h_{n})\colon \theta_{n}+r_{n}^{-1}h_{n}\in\mathbb{D}_{\phi}\}$ . These maps take values in $\mathbb{E}$ , are continuous since $\phi$ is continuous and, by (5.1), they converge to $\phi'(\theta,h) $ along converging sequences. Applying the extended continuous mapping theorem, we obtain $g_{n}(\bar{Y}_{n},r_{n}(Y_{n}-\bar{Y}_{n})) \smash{\xrightarrow{\text{st}}} \phi'(Y,X) $ , which is the desired result.
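As a concrete instance of condition (5.1), consider the map $\phi(y,z)=\sqrt{yz}$ , which is used later in this section. The following sketch (our own numerical illustration, under the assumption $y,z>0$ ) checks that $r_{n}(\phi(\theta+r_{n}^{-1}h)-\phi(\theta))$ approaches the directional derivative $\phi'(\theta,h)=\tfrac{1}{2}h_{1}\sqrt{z/y}+\tfrac{1}{2}h_{2}\sqrt{y/z}$ as $r_{n}\rightarrow\infty$ .

```python
import math

def phi(y, z):
    # the map phi(y, z) = sqrt(y * z), defined for y, z > 0
    return math.sqrt(y * z)

def directional_derivative(y, z, h1, h2):
    # phi'((y, z), (h1, h2)) = h1/2 * sqrt(z/y) + h2/2 * sqrt(y/z)
    return 0.5 * h1 * math.sqrt(z / y) + 0.5 * h2 * math.sqrt(y / z)

y, z = 2.0, 5.0
h1, h2 = 0.3, -1.1
limit = directional_derivative(y, z, h1, h2)
for r_n in (1e2, 1e4, 1e6):
    finite_diff = r_n * (phi(y + h1 / r_n, z + h2 / r_n) - phi(y, z))
    # the rescaled difference approaches the derivative at rate O(1/r_n)
    assert abs(finite_diff - limit) < 10.0 / r_n
```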

For the next results, we will use the following lemma.

Lemma 5.2

Let $I\subset\mathbb{R}$ be a compact interval and consider $\mathcal{D}(I,(0,\infty)) $ , namely the space of càdlàg functions from I to $ (0,\infty) $ . Then, for any $f\in\mathcal{D}(I,(0,\infty)),$ we have $\inf_{t\in I} f(t)>0$ . Moreover, let $f_{n}\in\mathcal{D}(I,(0,\infty)),\,n\in\mathbb{N}$ , such that $f_{n}\rightarrow f$ in $\mathcal{D}(I,(0,\infty)) $ , namely $\sup_{t\in I}|f_{n}(t)-f(t)|\rightarrow0$ as $n\rightarrow\infty$ . Then $\inf_{n\in\mathbb{N}}\inf_{t\in I} f_{n}(t)>0$ .

Proof. First, recall that, for a càdlàg function, the left limits must exist; that is, for every $t\in I,$ $\lim_{s\uparrow t}f(s) $ exists, which means that $\lim_{s\uparrow t}f(s)\in(0,\infty) $ . Assume that $\inf_{t\in I} f(t)=0$ . Then there exists a sequence $ (t_{n})_{n\in\mathbb{N}}\subset I$ such that $\lim_{n\rightarrow\infty}f(t_{n})=0$ and, by compactness, there exists a subsequence $ (t_{n_{k}})_{n_{k}\in\mathbb{N}}$ such that $t_{n_{k}}\rightarrow t\in I$ as $n_{k}\rightarrow\infty$ , and so $\lim_{t_{n_{k}}\rightarrow t}f(t_{n_{k}})=0$ . Hence, there exists either a subsequence $ (t_{n_{k_{l}}})_{n_{k_{l}}\in\mathbb{N}}$ of $ (t_{n_{k}})_{n_{k}\in\mathbb{N}}$ which converges to t from the left, so that $\lim_{t_{n_{k_{l}}}\uparrow t}f(t_{n_{k_{l}}})=0$ , or a subsequence $ (t_{n_{k_{j}}})_{n_{k_{j}}\in\mathbb{N}}$ of $ (t_{n_{k}})_{n_{k}\in\mathbb{N}}$ which converges to t from the right, so that $\lim_{t_{n_{k_{j}}}\downarrow t}f(t_{n_{k_{j}}})=0$ , or both. In all three cases we obtain a contradiction: the first contradicts $\lim_{s\uparrow t}f(s)\in(0,\infty) $ , and the second contradicts the right-continuity of f at t together with $f(t)>0$ . This proves the first statement.

Now, consider the second statement. Let $\inf_{t\in I} f(t)=2\varepsilon$ for some $\varepsilon>0$ . Since $f_{n}\rightarrow f$ in the uniform metric, there exists an $\tilde{n}\in\mathbb{N}$ such that, for all $n\geq\tilde{n},$ we have $\sup_{t\in I}|f_{n}(t)-f(t)|<\varepsilon$ . This implies that, for all $n\geq\tilde{n}$ , $f_{n}(t)>f(t)-\varepsilon$ for every $t\in I$ and so $\inf_{t\in I} f_{n}(t)\geq \inf_{t\in I} f(t)-\varepsilon=\varepsilon$ . Furthermore, for all $n<\tilde{n},$ we have $\inf_{t\in I} f_{n}(t)>0$ and, since $\tilde{n}$ is finite, we get $\inf_{n\in\mathbb{N}}\inf_{t\in I} f_{n}(t)\geq \min(\min_{n<\tilde{n}}\inf_{t\in I} f_{n}(t),\varepsilon)>0$ .
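A quick numerical illustration of the second statement (our own sketch, with positive step functions on a grid standing in for càdlàg paths): if positive functions $f_{n}$ converge uniformly to a positive f, the infimum over both n and t stays bounded away from 0.

```python
import numpy as np

# Grid discretisation of I = [0, 1]; f is a positive step function
# (a stand-in for a cadlag path), and f_n -> f uniformly.
t = np.linspace(0.0, 1.0, 1001)
f = np.where(t < 0.5, 2.0, 0.3)  # inf_t f(t) = 0.3 > 0
f_n = [f + (0.1 / n) * np.sin(10 * t) for n in range(1, 200)]

# Uniform convergence: sup_t |f_n(t) - f(t)| <= 0.1/n -> 0.
sup_errors = [np.max(np.abs(fn - f)) for fn in f_n]
assert max(sup_errors[100:]) < 0.01

# As in Lemma 5.2, inf over n of inf over t stays bounded away from zero.
inf_over_all = min(float(np.min(fn)) for fn in f_n)
assert inf_over_all > 0.0
```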

We can now present the main results of this section: the feasible WLLN and CLT.

Proposition 5.1

Let $\epsilon>0$ . Under the assumptions of Theorem 4.2 we have, on any interval $[\epsilon,T]$ ,

$$ \begin{equation*} \bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}} {\sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}Y^{(k)})^{2}} \sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}Y^{(l)})^{2}}\,}\bigg)_{k=1,\ldots, p;l\leq k} \xrightarrow{\text{u.c.p.}} \bigg(\frac{\bar{R}^{(k,l)}_{t}}{\sqrt{\bar{R}^{(k,k)}_{t}\bar{R}^{(l,l)}_{t}}\,}\bigg)_{k=1,\ldots, p;l\leq k}. \end{equation*} $$

Proof. Fix $l,k\in\{1,\ldots, p\}$ such that $l\leq k$ . We have, for $n\geq1/\epsilon$ (the case $n<1/\epsilon$ is trivial and, moreover, we are concerned with the behaviour as $n\rightarrow\infty$ ),

\begin{align*} &\bigg(\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)} \Delta^{n}_{i}Y^{(l)}\bigg)\bigg(\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}Y^{(k)})^{2}\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}Y^{(l)})^{2}\bigg)^{-1/2}- \frac{\bar{R}_{t}^{(k,l)}}{\sqrt{\bar{R}_{t}^{(k,k)}\bar{R}_{t}^{(l,l)}}\,} \\ &\qquad=\bigg[\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{i} Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg)-\bar{R}_{t}^{(k,l)}\bigg] \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \bigg)^{2}\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}} \bigg)^{2}\bigg)^{-1/2} \\ &\qquad +\frac{1}{\sqrt{\bar{R}_{t}^{(k,k)}\bar{R}_{t}^{(l,l)}}\,} {\bar{R}_{t}^{(k,l)}\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \bigg)^{2}\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}} \bigg)^{2} \bigg)^{-1/2}} \\ &\times\bigg[\sqrt{\bar{R}_{t}^{(k,k)}\bar{R}_{t}^{(l,l)}} -\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \bigg)^{2}\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}} \bigg)^{2} \bigg)^{1/2}\bigg] \\ &\qquad \xrightarrow{\text{u.c.p.}} 0, \end{align*}

where, for the term in the first square bracket, the u.c.p. convergence to 0 follows from the WLLN (see Theorem 4.4). For the second square bracket, we argue as follows. First, we use the continuous mapping theorem, knowing the joint convergence in probability and using the continuous function $g(x,y)=\smash{\sqrt{xy}}$ (note that in our case x and y are positive). Then we pass from convergence in probability to uniform convergence using the fact that the paths are nondecreasing in time and that the paths of the limiting process are almost surely continuous. The elements outside the square brackets do not interfere with the uniform convergence since their suprema are bounded for any $t\in[\epsilon,T]$ (this is why we have considered $\epsilon>0$ ).

Finally, the joint convergence follows from Lemma 4.1.

Proposition 5.2

Under the assumptions of Theorem 4.2, we have, for any $\epsilon>0$ ,

\begin{align*} &\bigg\{\sqrt{n}\bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}} {\sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i} Y^{(k)})^{2}} \sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i} Y^{(l)})^{2}}\,}- \frac{\bar{R}_{t,n}^{(k,l)}}{\sqrt{\bar{R}_{t,n}^{(k, k)}\bar{R}_{t,n}^{(l,l)}}\,}\bigg)_{k=1,\ldots, p;l\leq k}\bigg\}_{t\in[\epsilon,T]} \\ &\qquad \xrightarrow{\text{st}} \bigg\{\bigg(\frac{1}{\sqrt{\bar{R}^{(k,k)}_{t}\bar{R}^{(l,l)}_{t}}\,} \bigg(\int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(k,l)} {\rm d} B^{(k,l)}_{s}-\frac{1}{2}\frac{\bar{R}^{(k,l)}_{t}} {\bar{R}^{(k,k)}_{t}}\int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(k, k)}{\rm d} B^{(k,k)}_{s} \\ & - \frac{1}{2}\frac{\bar{R}^{(k,l)}_{t}}{\bar{R}^{(l,l)}_{t}} \int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(l,l)}{\rm d} B^{(l,l)}_{s}\bigg) \bigg)_{k=1,\ldots, p;l\leq k}\bigg \}_{t\in[\epsilon,T]} \end{align*}

in $\mathcal{D}([\epsilon,T],\smash{\mathbb{R}^{{p}(\,p+1)/{2}})}$ , where $ (V_{s}{\boldsymbol D}^{1/2})_{(k,l)}$ denotes the (k,l) row of the matrix $ (V_{s}{\boldsymbol D}^{1/2}) $ (see Appendix A), and the $B_{s}^{(k,l)}$ are independent one-dimensional Brownian motions.

Proof. First we prove the statement for fixed k and l. As in the previous proof, we concentrate on the case $n\geq1/\epsilon$ . We have

\begin{align*} &\sqrt{n}\bigg[\bigg(\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)} \Delta^{n}_{i}Y^{(l)}\bigg)\bigg(\sum_{i=1}^{\lfloor nt\rfloor} (\Delta^{n}_{i}Y^{(k)})^{2}\sum_{i=1}^{\lfloor nt\rfloor} (\Delta^{n}_{i}Y^{(l)})^{2}\bigg)^{-1/2} -\frac{\bar{R}_{t,n}^{(k,l)}}{\sqrt{\bar{R}_{t,n}^{(k,k)} \bar{R}_{t,n}^{(l,l)}}\,}\bigg] \\ &\qquad=\frac{1}{\sqrt{\bar{R}_{t,n}^{(k,k)}\bar{R}_{t,n}^{(l,l)}}\,} {\sqrt{n}\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \frac{\Delta^{n}_{i} Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\frac{\Delta^{n}_{i}Y^{(l)}} {\bar{\tau}_{n}^{(l)}} -\bar{R}_{t,n}^{(k,l)}\bigg)} \\ &\qquad\quad\, -\sqrt{n}\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \frac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \frac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg) \\ &\times \bigg[ {{\bar{R}_{t,n}^{(k,k)}\bar{R}_{t,n}^{(l,l)}\bigg(\frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(k)}} {\bar{\tau}_{n}^{(k)}}\bigg)^{2}\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \bigg(\frac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}} \bigg)^{2}\bigg)}}\biggr]^{-1/2} \\ &\times\bigg(\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\bigg)^{2} \frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg)^{2} \bigg)^{1/2}-\sqrt{\bar{R}_{t,n}^{(k,k)}\bar{R}_{t,n}^{(l,l)}}\bigg) \\ &\qquad=\frac{\sqrt{n}}{\sqrt{\bar{R}_{t,n}^{(k,k)}\bar{R}_{t,n}^{(l,l)}}\,} \bigg(1,-\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg) \\ &\times \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\bigg)^{2} \frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg)^{2} \bigg)^{-1/2}\bigg) \\ &\qquad \times\bigg(\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i} 
Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{i}Y^{(l)}} {\bar{\tau}_{n}^{(l)}}\bigg)-\bar{R}_{t,n}^{(k,l)}, \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{i} Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\bigg)^{2}\frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{i}Y^{(l)}} {\bar{\tau}_{n}^{(l)}}\bigg)^{2}\bigg)^{1/2} \\ & -\sqrt{\bar{R}_{t,n}^{(k,k)} \bar{R}_{t,n}^{(l,l)}}\bigg)^{\top}. \end{align*}

Note that

\begin{align*} Z^{(k,l)}_{1,n}&\,:\!=\frac{1}{\sqrt{\bar{R}_{t,n}^{(k,k)}\bar{R}_{t,n}^{(l, l)}}\,} \bigg(1,-\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg) \\ &\times\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \bigg(\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\bigg)^{2} \frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{i}Y^{(l)}} {\bar{\tau}_{n}^{(l)}}\bigg)^{2}\bigg)^{-1/2}\bigg) \\ & \xrightarrow{\text{u.c.p.}} \frac{1}{\sqrt{\bar{R}^{(k,k)}_{t} \bar{R}^{(l,l)}_{t}}\,}\bigg(1,-\frac{\bar{R}^{(k,l)}_{t}} {\sqrt{\bar{R}^{(k,k)}_{t}\bar{R}^{(l,l)}_{t}}\,}\bigg) \\ &{=\!:}\,Z^{(k,l)}_{1}, \end{align*}

by Proposition 5.1 and by noting that, for any $\delta>0$ ,

$$ \begin{equation*} \mathbb{P}\Big(\sup_{t\in[\epsilon,T]}\Big(\sqrt{\bar{R}^{(k,k)}_{t} \bar{R}^{ (l,l)}_{t}}\Big)^{-1}>\delta\Big)= \mathbb{P}\Big(\Big(\sqrt{\bar{R}^{(k,k)}_{\epsilon} \bar{R}^{(l,l)}_{\epsilon}}\Big)^{-1}>\delta\Big). \end{equation*} $$

For the other term, by Theorem 4.2 and Lemma 5.1 applied to ${\boldsymbol g}\colon \mathcal{D}([\epsilon,T],\mathbb{R}\times(0,\infty)^{2}) \rightarrow\mathcal{D}([\epsilon,T],\mathbb{R}\times(0,\infty)) $ (where both Skorokhod spaces as well as the Euclidean spaces are equipped with the uniform metric) defined as ${\boldsymbol g}(\{{\boldsymbol x}_{t}\}_{t\in[\epsilon,T]})= \{g(x_{1,t},x_{2,t},x_{3,t})\}_{t\in[\epsilon,T]}=\{(x_{1,t}, \sqrt{x_{2,t}x_{3,t}})\}_{t\in[\epsilon,T]},$ we have

\begin{align*} Z^{(k,l)}_{2,n}&\,:\!=\bigg\{\sqrt{n} \bigg(\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\dfrac {\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg)- \bar{R}_{t,n}^{(k,l)}, \\ &\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \bigg(\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\bigg)^{2} \frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\dfrac{\Delta^{n}_{i} Y^{(l)}}{\bar{\tau}_{n}^{(l)}} \bigg)^{2}\bigg)^{1/2}- \sqrt{\bar{R}_{t,n}^{(k,k)}\bar{R}_{t,n}^{(l,l)}}\bigg)^{\top} \bigg\}_{t\in[\epsilon,T]} \\ &\phantom{:} \xrightarrow{\text{st}} \bigg\{ \begin{pmatrix} 1 & 0 & 0 \\ 0 & \frac{1}{2}\sqrt{{\bar{R}_{t}^{(l,l)}}/{\bar{R}_{t}^{(k,k)}}} & \frac{1}{2}\sqrt{{\bar{R}_{t}^{(k,k)}}/{\bar{R}_{t}^{(l,l)}}}\\ \end{pmatrix} \\ &\times\bigg(\int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(k,l)} {\rm d} B^{(k,l)}_{s},\int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(k,k)} {\rm d} B^{(k,k)}_{s}, \\ &\int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(l,l)} {\rm d} B^{(l,l)}_{s}\bigg)^{\top}\bigg\}_{t\in[\epsilon,T]} \\ &\phantom{:}=\biggl\{ { \begin{pmatrix} \int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(k,l)}{\rm d} B^{(k,l)}_{s} \\ \frac{1}{2}(\sqrt{{\bar{R}^{(l,l)}_{t}}\!/{\bar{R}^{(k,k)}_{t}}}) \int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(k,k)}{\rm d} B^{(k,k)}_{s} \!+ \frac{1}{2}(\sqrt{{\bar{R}^{(k,k)}_{t}}\!/{\bar{R}^{(l,l)}_{t}}}) \int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(l,l)}{\rm d} B^{(l,l)}_{s}\\ \end{pmatrix}} \!\biggr\}_{t\in[\epsilon,T]} \\ &\phantom{:}\,{=\!:}\, Z^{(k,l)}_{2}, \end{align*}

where $\smash{B_{s}^{(k,l)},B_{s}^{(k,k)}},$ and $\smash{B_{s}^{(l,l)}}$ are three independent Brownian motions. It remains to verify that the functional ${\boldsymbol g}$ satisfies the conditions of Lemma 5.1. First, we check that ${\boldsymbol g}$ is continuous and then, using the notation of Lemma 5.1, that ${\boldsymbol g}'(\{\theta\}_{t\in[\epsilon,T]},\{h\}_{t\in[\epsilon,T]}) =\{\nabla g(\theta_{t})h_{t}\}_{t\in[\epsilon,T]}$ , where $\nabla g$ is the Jacobian matrix of g. Comparing the notation of Lemma 5.1 to the present framework, we note that $\mathcal{D}([\epsilon,T],\mathbb{R}\times(0,\infty)^{2})=\mathbb{D}_{\phi}$ , $\mathcal{D}([\epsilon,T],\mathbb{R}^{3})=\mathbb{D}_{0}=\mathbb{D}$ , $\mathcal{D}([\epsilon,T],\mathbb{R}^{2})=\mathbb{E}$ , ${\boldsymbol g}=\phi$ , and ${\boldsymbol g}'=\phi'$ . Hence, $\theta,\theta_{n}\in \mathcal{D}([\epsilon,T],\mathbb{R}\times(0,\infty)^{2}) $ and $h_{n},h\in \mathcal{D}([\epsilon,T],\mathbb{R}^{3}) $ with $\theta_{n}+r_{n}^{-1}h_{n}\in \mathcal{D}([\epsilon,T],\mathbb{R}\times(0,\infty)^{2}) $ for every $n\in\mathbb{N}$ .

To prove the continuity of ${\boldsymbol g}$ , we need to show that, for every $\smash{(\{(x_{n,t}^{(1)},x_{n,t}^{(2)},x_{n,t}^{(3)})\}_{t\in[\epsilon, T]})_{n\in\mathbb{N}}}\!\rightarrow\smash{\{(x_{t}^{(1)},x_{t}^{(2)}, x_{t}^{(3)})\}_{t\in[\epsilon,T]}}$ in $\mathcal{D}([\epsilon,T],\mathbb{R}\times(0,\infty)^{2}),$ we have

$$ \begin{equation*} \lim_{n\rightarrow\infty}\sup_{t\in[\epsilon,T]}\Big\|\Big(x_{n,t}^{(1)} -x_{t}^{(1)},\sqrt{x_{n,t}^{(2)}x_{n,t}^{(3)}} -\sqrt{x_{t}^{(2)}x_{t}^{(3)}}\Big)\Big\|_{\infty}=0. \end{equation*} $$

For the first component, it is straightforward, while, for the second, we have

\begin{align*} &\sup_{t\in[\epsilon,T]}\Big|\sqrt{x_{n,t}^{(2)}x_{n,t}^{(3)}} -\sqrt{x_{t}^{(2)}x_{t}^{(3)}}\Big| \\ &\qquad\leq\sup_{t\in[\epsilon,T]} \Big|\sqrt{x_{n,t}^{(2)}}\Big(\sqrt{x_{n,t}^{(3)}}-\sqrt{x_{t}^{(3)}}\Big)\Big| +\Big|\sqrt{x_{t}^{(3)}}\Big(\sqrt{x_{n,t}^{(2)}}-\sqrt{x_{t}^{(2)}}\Big)\Big| \\ &\qquad\leq\sup_{t\in[\epsilon,T]}\Big|\sqrt{x_{n,t}^{(2)}} \Big|\sup_{t\in[\epsilon,T]} \Big|\sqrt{x_{n,t}^{(3)}}- \sqrt{x_{t}^{(3)}}\Big| +\sup_{t\in[\epsilon,T]} \Big|\sqrt{x_{t}^{(3)}}\Big|\sup_{t\in[\epsilon,T]} \Big|\sqrt{x_{n,t}^{(2)}}-\sqrt{x_{t}^{(2)}}\Big|. \end{align*}

Note that $\smash{x_{n,t}^{(3)}-x_{t}^{(3)}}= \smash{(\sqrt{x_{n,t}^{(3)}}-\sqrt{x_{t}^{(3)}}) (\sqrt{x_{n,t}^{(3)}}+\sqrt{x_{t}^{(3)}})}$ . Furthermore, by Lemma 5.2 we have $\sup_{t\in[\epsilon,T]}\{{1}/{\sqrt{x_{t}^{(3)}}}\}<\infty$ because $x^{(3)}\colon [\epsilon,T]\rightarrow(0,\infty) $ is a càdlàg function. The same applies to $\smash{x_{t}^{(2)}}$ . Hence, we have

\begin{align*} \sup_{t\in[\epsilon,T]}\Big|\sqrt{x_{n,t}^{(3)}}-\sqrt{x_{t}^{(3)}} \Big|&\leq \sup_{t\in[\epsilon,T]}\frac{1}{(\sqrt{x_{t}^{(3)}} +\sqrt{x_{n,t}^{(3)}})} \sup_{t\in[\epsilon,T]} |x_{n,t}^{(3)}-x_{t}^{(3)}| \\ &\leq\sup_{t\in[\epsilon,T]}\frac{1}{\sqrt{x_{t}^{(3)}}\,} \sup_{t\in[\epsilon,T]}|x_{n,t}^{(3)}-x_{t}^{(3)}| \\ &\rightarrow 0 \\ & \Rightarrow \sup_{t\in[\epsilon,T]}\Big|\sqrt{x_{n,t}^{(2)}x_{n,t}^{(3)}} -\sqrt{x_{t}^{(2)}x_{t}^{(3)}}\Big| \\ &\rightarrow0\quad \text{as $n\rightarrow\infty$.} \end{align*}
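The continuity argument above can be illustrated numerically. The following sketch (purely illustrative and not part of the proof; the paths, grid, and perturbations are our own choices) approximates paths on $[\epsilon,T]$ by their values on a finite grid and checks that the sup distance of the images under ${\boldsymbol g}\colon(x^{(1)},x^{(2)},x^{(3)})\mapsto(x^{(1)},\sqrt{x^{(2)}x^{(3)}})$ shrinks as the input paths converge uniformly.

```python
import numpy as np

# Illustrative check: the map g(x1, x2, x3) = (x1, sqrt(x2 * x3)) is
# continuous in the uniform metric when x2 and x3 stay in (0, inf).
# Paths on [eps, T] are approximated by their values on a finite grid.
rng = np.random.default_rng(0)
t = np.linspace(0.1, 1.0, 200)         # grid for [eps, T] with eps = 0.1
x1 = np.sin(2 * np.pi * t)             # limit path, first component
x2 = 1.0 + 0.5 * np.cos(3 * t)         # bounded away from 0, as required
x3 = 2.0 + np.sin(5 * t) ** 2

def g(a, b, c):
    return a, np.sqrt(b * c)

sup_dist = []
for n in (10, 100, 1000):
    # perturbed paths converging uniformly to (x1, x2, x3)
    e = rng.uniform(-1.0, 1.0, size=t.size) / n
    ya, yb = g(x1 + e, x2 + e, x3 + e)
    ga, gb = g(x1, x2, x3)
    sup_dist.append(max(np.abs(ya - ga).max(), np.abs(yb - gb).max()))

# the sup distance of the images shrinks as the inputs converge uniformly
assert sup_dist[0] > sup_dist[1] > sup_dist[2]
```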

Regarding the map ${\boldsymbol g}'$ , we need to show that

(5.2) \begin{align} &\lim_{n\rightarrow\infty}\sup_{t\in[\epsilon,T]}\bigg\|\bigg(r_{n} (\theta_{n,t}^{(1)}+r_{n}^{-1}h_{n,t}^{(1)}-\theta_{n,t}^{(1)}) -h^{(1)}_{t}, \nonumber \\ & r_{n}\sqrt{(\theta_{n,t}^{(2)}+r_{n}^{-1}h_{n,t}^{(2)}) (\theta_{n,t}^{(3)} +r_{n}^{-1}h_{n,t}^{(3)})}-r_{n} \sqrt{\theta^{(2)}_{n,t}\theta^{(3)}_{n,t}} -\frac{h_{t}^{(2)}}{2}\sqrt{\frac{\theta_{t}^{(3)}}{\theta_{t}^{(2)}}} \nonumber \\ &-\frac{h_{t}^{(3)}}{2}\sqrt{\frac{\theta_{t}^{(2)}} {\theta_{t}^{(3)}}}\bigg) \bigg\|_{\infty} \nonumber \\ &\qquad=0. \label{eqn13}\end{align}

For the first component, it is straightforward since $h_{n} \rightarrow h$ , while, for the second, using a Taylor expansion of the bivariate function $f(x,y)=r_{n}\sqrt{(x+r_{n}^{-1}h_{n,t}^{(2)}) (y+r_{n}^{-1}h_{n,t}^{(3)})},$ we have

(5.3) \begin{align} &\sup_{t\in[\epsilon,T]}\bigg|r_{n}\sqrt{(\theta_{n,t}^{(2)}+r_{n}^{-1} h_{n,t}^{(2)})(\theta_{n,t}^{(3)}+r_{n}^{-1}h_{n,t}^{(3)})}-r_{n} \sqrt{\theta^{(2)}_{n,t}\theta^{(3)}_{n,t}}-\frac{h_{t}^{(2)}}{2} \sqrt{\frac{\theta_{t}^{(3)}}{\theta_{t}^{(2)}}}-\frac{h_{t}^{(3)}}{2} \sqrt{\frac{\theta_{t}^{(2)}}{\theta_{t}^{(3)}}}\bigg| \nonumber \\ &\qquad\leq\sup_{t\in[\epsilon,T]}\bigg|\frac{h_{n,t}^{(2)}}{2} \sqrt{\frac{\theta_{n,t}^{(3)}}{\theta_{n,t}^{(2)}}}+\frac{h_{n,t}^{(3)}}{2} \sqrt{\frac{\theta_{n,t}^{(2)}}{\theta_{n,t}^{(3)}}}-\frac{h_{t}^{(2)}}{2} \sqrt{\frac{\theta_{t}^{(3)}}{\theta_{t}^{(2)}}}-\frac{h_{t}^{(3)}}{2} \sqrt{\frac{\theta_{t}^{(2)}}{\theta_{t}^{(3)}}}\bigg| \nonumber \\ &\qquad +\frac{r_{n}^{-1}}{2}\sup_{t\in[\epsilon,T]}\bigg|\frac{h_{n,t}^{(2)} h_{n,t}^{(3)}}{\sqrt{\theta_{n,t}^{(2)}\theta_{n,t}^{(3)}}\,}-\frac{1}{4} (h_{n,t}^{(2)})^{2}\sqrt{\frac{\theta_{n,t}^{(3)}} {(\theta_{n,t}^{(2)})^{3}}}-\frac{1}{4}(h_{n,t}^{(3)} )^{2}\sqrt{\frac{\theta_{n,t}^{(2)}}{(\theta_{n,t}^{(3)} )^{3}}}\,\bigg| +o(r_{n}^{-1}). \label{eqn14}\end{align}

Since $\smash{\theta^{(2)}_{n,t}}$ and $\smash{\theta^{(3)}_{n,t}}$ take values in $ (0,\infty) $ and are càdlàg functions, by Lemma 5.2 we have, for each $n\in\mathbb{N}$ , $\sup_{t\in[\epsilon,T]}|{1}/{\sqrt{\theta_{n,t}^{(\,j)}}}|<\infty$ for $j=2,3$ . Moreover, $\smash{\theta^{(2)}_{n,t}}$ , $\smash{\theta^{(3)}_{n,t}}$ , $\smash{h_{n,t}^{(2)}},$ and $\smash{h_{n,t}^{(3)}}$ converge in the uniform metric to $\smash{\theta^{(2)}_{t}, \theta^{(3)}_{t}, h_{t}^{(2)}},$ and $\smash{h_{t}^{(3)}}$ , respectively, while $\smash{r_{n}^{-1}}\rightarrow0$ . Therefore,

$$ \begin{equation*} \lim_{n\rightarrow\infty}\frac{r_{n}^{-1}}{2}\sup_{t\in[\epsilon,T]} \bigg|\frac{h_{n,t}^{(2)} h_{n,t}^{(3)}}{\sqrt{\theta_{n,t}^{(2)} \theta_{n,t}^{(3)}}\,} -\frac{1}{4}(h_{n,t}^{(2)})^{2} \sqrt{\frac{\theta_{n,t}^{(3)}}{(\theta_{n,t}^{(2)})^{3}}} -\frac{1}{4}(h_{n,t}^{(3)})^{2}\sqrt{\frac{\theta_{n,t}^{(2)}} {(\theta_{n,t}^{(3)})^{3}}}\,\bigg|=0. \end{equation*} $$

Finally, the $\smash{o(r_{n}^{-1})}$ term in (5.3) comes from the fact that the remaining terms of the Taylor expansion have a further multiplying factor of $\smash{r^{-1}_{n}}$ . Now, observe that

\begin{align*} &\bigg|\frac{h_{n,t}^{(2)}}{2}\sqrt{\frac{\theta_{n,t}^{(3)}} {\theta_{n,t}^{(2)}}}-\frac{h_{t}^{(2)}}{2}\sqrt{\frac{\theta_{t}^{(3)}} {\theta_{t}^{(2)}}}\bigg|\leq \bigg|\frac{h_{n,t}^{(2)}}{2} \bigg(\sqrt{\frac{\theta_{n,t}^{(3)}}{\theta_{n,t}^{(2)}}}- \sqrt{\frac{\theta_{t}^{(3)}}{\theta_{t}^{(2)}}}\bigg)\bigg|+ \bigg|\sqrt{\frac{\theta_{t}^{(3)}}{\theta_{t}^{(2)}}} \bigg(\frac{h_{n,t}^{(2)}}{2}-\frac{h_{t}^{(2)}}{2}\bigg)\bigg|, \\ &\sqrt{\frac{\theta_{n,t}^{(3)}}{\theta_{n,t}^{(2)}}}-\sqrt{\frac{ \theta_{t}^{(3)}}{\theta_{t}^{(2)}}}=\frac{1}{\sqrt{\theta_{t}^{(2)}}\,} \Big(\sqrt{\theta_{n,t}^{(3)}}-\sqrt{\theta_{t}^{(3)}}\Big)+\frac{1} {\sqrt{\theta_{t}^{(2)}}}\sqrt{\frac{\theta_{n,t}^{(3)}} {\theta_{n,t}^{(2)}}\,}\Big(\sqrt{\theta_{t}^{(2)}}- \sqrt{\theta_{n,t}^{(2)}\,}\Big),\\[-12pt] \end{align*}

and that $\smash{\sup_{t\in[\epsilon,T]}|{1}/{\sqrt{\theta_{t}^{(2)}}}|} <\infty$ by Lemma 5.2 because $\smash{\theta_{t}^{(2)}}$ is a càdlàg function with values in $ (0,\infty) $ (and the same holds for the other $\theta$ s). Then, taking the limit as $n\rightarrow\infty$ in (5.3), we obtain the desired result (5.2).
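The directional derivative appearing in (5.2) can be sanity-checked numerically. The sketch below (with arbitrary illustrative values for $x$ , $y$ , $h^{(2)}$ , $h^{(3)}$ , and the rate $r_{n}$ ) compares the difference quotient of $(x,y)\mapsto\sqrt{xy}$ with the claimed limit $\frac{h^{(2)}}{2}\sqrt{y/x}+\frac{h^{(3)}}{2}\sqrt{x/y}$ .

```python
import math

# Arbitrary illustrative values; r plays the role of the rate r_n in (5.2),
# and (x, y) plays the role of (theta^(2), theta^(3)).
x, y, h2, h3 = 1.5, 2.5, 0.7, -0.4
r = 1e6

# difference quotient r * (sqrt((x + h2/r)(y + h3/r)) - sqrt(x * y)) ...
fd = r * (math.sqrt((x + h2 / r) * (y + h3 / r)) - math.sqrt(x * y))
# ... against the directional derivative (h2/2) sqrt(y/x) + (h3/2) sqrt(x/y)
exact = 0.5 * h2 * math.sqrt(y / x) + 0.5 * h3 * math.sqrt(x / y)
assert abs(fd - exact) < 1e-4
```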

Furthermore, since $\smash{Z^{(k,l)}_{1,n} \xrightarrow{\text{P}} Z^{(k,l)}_{1}}$ and $\smash{Z^{(k,l)}_{2,n} \xrightarrow{\text{st}} Z^{(k,l)}_{2}},$ we deduce that $\smash{(Z^{(k,l)}_{1,n},Z^{(k,l)}_{2,n}) \xrightarrow{\text{st}}} \smash{(Z^{(k,l)}_{1},Z^{(k,l)}_{2})}$ . Finally, by applying the continuous mapping theorem for the stable convergence using the continuous function ${f(Z^{(k,l)}_{1,n},Z^{(k,l)}_{2,n})}= \smash{\{Z^{(k,l)}_{1,n}(t)Z^{(k,l)}_{2,n}(t)\}_{t\in[\epsilon,T]}}$ , we obtain our result for fixed k and l.

For the joint stable convergence, we proceed similarly thanks to the uniform metric. Let

$$ \begin{equation*} \Theta_{n}\,:\!=\left\{ \begin{pmatrix} Z^{(1,1)}_{1,n}(t) & 0 &\cdots & 0 \\ 0 & \ddots & & \vdots \\ \vdots & & \ddots & 0\\ 0& \cdots& 0 & Z^{(\,p,p)}_{1,n} (t) \end{pmatrix} \right\}_{t\in[\epsilon,T]}. \end{equation*} $$

Then

\begin{align*} &\bigg\{\sqrt{n}\bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}{\sqrt{ \sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}Y^{(k)})^{2}} \sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}Y^{(l)})^{2}}\,} -\frac{\bar{R}_{t,n}^{(k,l)}}{\sqrt{\bar{R}_{t,n}^{(k,k)} \bar{R}_{t,n}^{(l,l)}}\,}\bigg)_{k=1,\ldots, p;l\leq k}\bigg\}_{t\in[\epsilon,T]} \\ &\qquad=\{\Theta_{n}(t)(Z^{(1,1)}_{2,n}(t), \ldots, Z^{(\,p,p)}_{2,n}(t))^{\top}\}_{t\in[\epsilon,T]}. \end{align*}

Using Lemma 4.1 and the arguments used before for fixed k and l, we obtain the u.c.p. convergence of $\Theta_{n}$ . Now, we would like to prove the stable convergence for $\{(\smash{Z^{(1,1)}_{2,n}(t)},\ldots, \smash{Z^{(\,p,p)}_{2,n}(t))^{\top}}\}_{t\in[\epsilon,T]}$ . Define the function ${\tilde{\boldsymbol g}}\colon \mathcal{D}([\epsilon,T],(0,\infty)^{p} \times\smash{\mathbb{R}^{{p}(\,p-1)/{2}})}\rightarrow\mathcal{D}([\epsilon,T], (0, \infty)^{{p}(\,p+1)/{2}}\times\mathbb{R}^{{p}(\,p+1)/{2}}) $ as (using our variables)

\begin{align*} &{\tilde{\boldsymbol g}}\bigg(\bigg\{\bigg( \frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg)_{k=1,\ldots, p;l\leq k}\bigg\}_{t\in[\epsilon,T]}\bigg) \\ &\qquad=\bigg\{\bigg( \frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(1)}}{\bar{\tau}_{n}^{(1)}} \bigg)^{2},\sqrt{\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(1)}}{\bar{\tau}_{n}^{(1)}} \bigg)^{2}\bigg)^{2}},\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(2)}}{\bar{\tau}_{n}^{(2)}} \dfrac{\Delta^{n}_{i}Y^{(1)}}{\bar{\tau}_{n}^{(1)}}, \\ & \sqrt{\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \bigg(\frac{\Delta^{n}_{i}Y^{(2)}}{\bar{\tau}_{n}^{(2)}}\bigg)^{2} \frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(1)}} {\bar{\tau}_{n}^{(1)}}\bigg)^{2}},\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \bigg(\frac{\Delta^{n}_{i}Y^{(2)}}{\bar{\tau}_{n}^{(2)}} \bigg)^{2}, \\ & \sqrt{\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \bigg(\frac{\Delta^{n}_{i}Y^{(2)}}{\bar{\tau}_{n}^{(2)}}\bigg)^{2} \bigg)^{2}}, \frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(3)}} {\bar{\tau}_{n}^{(3)}}\dfrac{\Delta^{n}_{i}Y^{(1)}}{\bar{\tau}_{n}^{(1)}}, \\ & \sqrt{\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{ \Delta^{n}_{i}Y^{(3)}}{\bar{\tau}_{n}^{(3)}}\bigg)^{2}\frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(1)}} {\bar{\tau}_{n}^{(1)}}\bigg)^{2}},\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(3)}}{\bar{\tau}_{n}^{(3)}} \dfrac{\Delta^{n}_{i}Y^{(2)}}{\bar{\tau}_{n}^{(2)}}, \\ & \sqrt{\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \bigg(\frac{\Delta^{n}_{i}Y^{(3)}}{\bar{\tau}_{n}^{(3)}}\bigg)^{2} \frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(2)}} {\bar{\tau}_{n}^{(2)}}\bigg)^{2}},\ldots, \frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(\,p)}} 
{\bar{\tau}_{n}^{(\,p)}}\bigg)^{2}, \\ & \sqrt{\bigg(\frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(\,p)}} {\bar{\tau}_{n}^{(\,p)}}\bigg)^{2}\bigg)^{2}} \bigg)^{\top}\bigg\}_{t\in[\epsilon,T]}. \end{align*}

Note that the above formulation is just a multidimensional extension of the formulation of g given in the first part of this proof. In particular, the function ${\tilde{\boldsymbol g}}$ can be seen as associating to any three-dimensional vector of the form

$$ \begin{equation*} \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}},\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\left(\frac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\right)^{2},\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\left(\frac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\right)^{2} \bigg) \end{equation*} $$

a two-dimensional vector of the form

$$ \begin{equation*} \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}},\sqrt{\frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(k)}} {\bar{\tau}_{n}^{(k)}}\bigg)^{2}\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\bigg(\frac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}} \bigg)^{2}} \bigg) \end{equation*} $$

for any $k=1,\ldots,p$ and $l\leq k$ .

Then, as in the first part of the proof thanks to Theorem 4.3 and Lemma 5.1 applied to ${\tilde{\boldsymbol g}}$ , we obtain the stable convergence as in the statement. Moreover, we use the same arguments as used for fixed k and l to prove that ${\tilde{\boldsymbol g}}$ satisfies the required conditions of Lemma 5.1 thanks to the properties of the uniform metric.

Similar results can be obtained for the second scenario of case II.

Proposition 5.3

Let $\epsilon>0$ . Under the assumptions of Theorem 4.3 and, for any interval $[\epsilon,T]$ ,

$$ \begin{equation*} \bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}X^{(k)}\Delta^{n}_{i}X^{(l)}} {\sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i} X^{(k)})^{2}}\sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}X^{(l)})^{2}}\,}\bigg)_{k=1,\ldots, p;l\leq k} \xrightarrow{\text{u.c.p.}} \bigg(\frac{\tilde{R}^{(k,l)}_{t}} {\sqrt{\tilde{R}^{(k,k)}_{t}\tilde{R}^{(l,l)}_{t}}\,}\bigg)_{k=1,\ldots, p;l\leq k}. \end{equation*} $$

Proof. It follows from the same arguments as used in the proof of Proposition 5.1.
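Proposition 5.3 can be illustrated in its simplest special case. The sketch below makes strong simplifying assumptions not present in the proposition: the increments of the two processes are i.i.d. Gaussian with constant correlation $\rho$ (our choice), so the limiting correlation ratio reduces to $\rho$ itself.

```python
import numpy as np

# Sketch under strong simplifying assumptions: i.i.d. Gaussian increments
# with correlation rho, so the realised correlation ratio of Proposition 5.3
# should be close to rho at t = T.
rng = np.random.default_rng(1)
rho, n = 0.6, 200_000
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1 - rho**2) * rng.standard_normal(n)
stat = (z1 * z2).sum() / np.sqrt((z1**2).sum() * (z2**2).sum())
assert abs(stat - rho) < 0.01
```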

Proposition 5.4

Under the assumptions of Theorem 4.3, we have, for any $\epsilon>0$ ,

\begin{align*} &\bigg\{\sqrt{n}\bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}X^{(k)}\Delta^{n}_{i}X^{(l)}} {\sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i} X^{(k)})^{2}}\sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}X^{(l)})^{2}}\,} -\frac{\tilde{R}_{t,n}^{(k,l)}}{\sqrt{\tilde{R}_{t,n}^{(k,k)} \tilde{R}_{t,n}^{(l,l)}}\,}\bigg)_{k=1,\ldots, p;l\leq k}\bigg\}_{t\in[\epsilon,T]} \\ &\qquad \xrightarrow{\text{st}} \bigg\{\bigg(\frac{1}{\sqrt{\tilde{R}_{t}^{(k,k)}\tilde{R}_{t}^{(l,l)}}} \bigg(\int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(k,l)}{\rm d} B^{(k,l)}_{s} -\frac{1}{2}\frac{\tilde{R}_{t}^{(k,l)}} {\tilde{R}_{t}^{(k,k)}} \int_{0}^{t}(V_{s} {\boldsymbol D}^{1/2})_{(k,k)}{\rm d} B^{(k,k)}_{s} \\ & - \frac{1}{2}\frac{\tilde{R}_{t}^{(k,l)}}{\tilde{R}_{t}^{(l,l)}} \int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(l,l)}{\rm d} B^{(l,l)}_{s} \bigg)\bigg)_{k=1,\ldots, p;l\leq k}\bigg\}_{t\in[\epsilon,T]} \end{align*}

in $\mathcal{D}([\epsilon,T],\smash{\mathbb{R}^{{p}(\,p+1)/{2}})}$ , where $\smash{(V_{s}{\boldsymbol D}^{1/2})_{(k,l)}}$ indicates that we are considering only the (k,l) row of the matrix $ (V_{s}{\boldsymbol D}^{1/2}) $ (see Appendix A) and the $\smash{B_{s}^{(k,l)}}$ are one-dimensional independent Brownian motions.

Proof. It follows from the same arguments as used in the proof of Proposition 5.2.

5.2. Relative covolatility

In this section we look at the relative volatility case (see [Reference Barndorff-Nielsen, Pakkanen and Schmiegel8]). Similarly to the previous section, we present first the results for the first scenario and then for the second scenario of case II.

Proposition 5.5

Assume that, for all $ n\in\mathbb{N},\smash{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}\neq0$ almost surely for $k=1,\ldots ,p$ and $l\leq k$ . Then under the assumptions of Theorem 4.2, we have

$$ \begin{equation*} \bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}\bigg)_{k,l=1,\ldots, p;l\leq k} \xrightarrow{\text{u.c.p.}} \bigg(\frac{\bar{R}^{(k,l)}_{t}} {\bar{R}^{(k,l)}_{T}}\bigg)_{k=1,\ldots, p;l\leq k}. \end{equation*} $$

Proof. Fix k, l. We have

\begin{align*} &\bigg(\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i} Y^{(l)}\bigg)\bigg(\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}\bigg)^{-1} -\frac{\bar{R}^{(k,l)}_{t}}{\bar{R}^{(k,l)}_{T}} \\ &\qquad=\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{i} Y^{(l)}}{\bar{\tau}_{n}^{(l)}}-\bar{R}^{(k,l)}_{t}\bigg) \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nT\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg)^{-1} \\ &\qquad +\frac{\bar{R}^{(k,l)}_{t}}{\bar{R}^{(k,l)}_{T}} \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nT\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}} {\bar{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}} \bigg)^{-1}\bigg(\bar{R}^{(k,l)}_{T}-\frac{1}{n}\sum_{i=1}^{\lfloor nT\rfloor} \dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg) \\ &\qquad \xrightarrow{\text{u.c.p.}} 0, \end{align*}

by Theorem 4.4. Note that the supremum of $\smash{\bar{R}^{(k,l)}_{t}}$ over $t\in[0,T]$ is bounded since the paths of the $\sigma$ are càdlàg and, hence, bounded on compact intervals. Finally, the joint convergence follows from Lemma 4.1.
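In the degenerate case $\sigma\equiv1$ with Brownian increments (a simplification of ours, purely for illustration), $\bar{R}^{(k,k)}_{t}/\bar{R}^{(k,k)}_{T}=t/T$ , and the relative covolatility of Proposition 5.5 can be simulated directly.

```python
import numpy as np

# Degenerate illustration of Proposition 5.5: sigma = 1 and Brownian
# increments, so the relative covolatility at time t should approach t / T.
rng = np.random.default_rng(2)
n, t, T = 100_000, 0.4, 1.0
dx = rng.standard_normal(int(n * T))          # the n * T increments on [0, T]
ratio = (dx[: int(n * t)] ** 2).sum() / (dx**2).sum()
assert abs(ratio - t / T) < 0.01
```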

Proposition 5.6

Assume that, for all $ n\in\mathbb{N},\smash{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}\neq0$ almost surely for $k=1,\ldots,p$ and $l\leq k$ . Then under the assumptions of Theorem 4.2, we have

\begin{align*} &\bigg\{\sqrt{n}\bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}- \frac{\bar{R}^{(k,l)}_{t,n}}{\bar{R}^{(k,l)}_{T,n}}\bigg)_{k=1,\ldots, p;l\leq k}\bigg\}_{t\in[0,T]} \\ &\qquad \xrightarrow{\text{st}} \bigg\{\bigg(\frac{1}{\bar{R}^{(k,l)}_{T}} \int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(k,l)}{\rm d} B^{(k,l)}_{s} \\ & -\frac{\bar{R}^{(k,l)}_{t}}{(\bar{R}^{(k,l)}_{T})^{2}} \int_{0}^{T} (V_{s}{\boldsymbol D}^{1/2})_{(k,l)}{\rm d} B^{(k,l)}_{s} \bigg)_{k=1,\ldots, p;l\leq k}\bigg\}_{t\in[0,T]} \end{align*}

in $\mathcal{D}([0,T],\smash{\mathbb{R}^{{p}(\,p+1)/{2}})}$ , where $\smash{(V_{s}{\boldsymbol D}^{1/2})_{(k,l)}}$ indicates that we are considering only the (k,l) row of the matrix $ (V_{s}{\boldsymbol D}^{1/2}) $ (see Appendix A) and the $\smash{B_{s}^{(k,l)}}$ are one-dimensional independent Brownian motions.

Proof. Fix k, l. We have

\begin{align*} &\sqrt{n}\bigg[\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)} \Delta^{n}_{i}Y^{(l)}}{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}Y^{(k)} \Delta^{n}_{i}Y^{(l)}}-\frac{\bar{R}^{(k,l)}_{t,n}}{\bar{R}^{(k,l)}_{T,n}} \bigg] \\ &\qquad=\frac{\sqrt{n}}{\bar{R}^{(k,l)}_{T,n}} \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor} \dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}-\bar{R}^{(k,l)}_{t,n} \bigg) \\ &\qquad -\frac{\sqrt{n}}{\bar{R}^{(k,l)}_{T,n}}\bigg(\frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\frac{\Delta^{n}_{i}Y^{(k)}} {\bar{\tau}_{n}^{(k)}}\frac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}} \bigg)\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nT\rfloor} \frac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \frac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg)^{-1} \\ &\times \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nT\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}} {\bar{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}} -\bar{R}^{(k,l)}_{T,n}\bigg), \end{align*}

which can be rewritten in vector notation as

\begin{align*} &\frac{\sqrt{n}}{\bar{R}^{(k,l)}_{T,n}}\bigg(1,-\bigg(\frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}} {\bar{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{i}Y^{(l)}} {\bar{\tau}_{n}^{(l)}}\bigg)\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nT\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg)^{-1}\bigg) \\ &\times\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}- \bar{R}^{(k,l)}_{t,n},\frac{1}{n}\sum_{i=1}^{\lfloor nT\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}-\bar{R}^{(k,l)}_{T, n}\bigg)^{\top}. \end{align*}

Note that

\begin{align*}\\[-24pt] &\frac{1}{\bar{R}^{(k,l)}_{T,n}}\bigg(1,-\bigg(\frac{1}{n} \sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}} {\bar{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}} \bigg)\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nT\rfloor} \dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}\bigg)^{-1}\bigg) \\ &\qquad\xrightarrow{\text{u.c.p.}} \frac{1}{\bar{R}^{(k,l)}_{T}} \bigg(1,-\frac{\bar{R}^{(k,l)}_{t}}{\bar{R}^{(k,l)}_{T}}\bigg) \end{align*}

using Proposition 5.5, and that

\begin{align*} &\bigg\{\sqrt{n}\bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\dfrac{\Delta^{n}_{i}Y^{(k)}}{\bar{\tau}_{n}^{(k)}} \dfrac{\Delta^{n}_{i}Y^{(l)}}{\bar{\tau}_{n}^{(l)}}-\bar{R}^{(k,l)}_{t, n},\frac{1}{n}\sum_{i=1}^{\lfloor nT\rfloor}\dfrac{\Delta^{n}_{i} Y^{(k)}}{\bar{\tau}_{n}^{(k)}}\dfrac{\Delta^{n}_{i}Y^{(l)}} {\bar{\tau}_{n}^{(l)}}-\bar{R}^{(k,l)}_{T,n}\bigg)^{\top}\bigg\}_{t \in[0,T]} \\ &\qquad \xrightarrow{\text{st}} \bigg\{\bigg(\int_{0}^{t}(V_{s} {\boldsymbol D}^{1/2})_{(k,l)}{\rm d} B^{(k,l)}_{s},\int_{0}^{T}(V_{s} {\boldsymbol D}^{1/2})_{(k,l)}{\rm d} B^{(k,l)}_{s}\bigg)^{\top} \bigg\}_{t\in[0,T]} \!\quad\text{in }\mathcal{D}([0,T],\mathbb{R}^{2}), \end{align*}

by Theorem 4.2, where $\smash{(V_{s}{\boldsymbol D}^{1/2})_{(k,l)}}$ indicates that we are considering only the (k, l) row of the matrix $ (V_{s}{\boldsymbol D}^{1/2}) $ . Then, using the properties of the stable convergence and the continuous mapping theorem we conclude the proof for fixed k and l. For the joint case, we proceed as in the proof of Proposition 5.2. In particular, we have an analogue of $\Theta_{n}$ which converges u.c.p. since its elements do. Moreover, we have an analogue of $\smash{\{(Z^{(1,1)}_{2,n}(t),\ldots, Z^{(\,p,p)}_{2,n}(t))^{\top}\}_{t\in[\epsilon,T]}}$ whose stable convergence in the Skorokhod space is guaranteed by Theorem 4.3 and the continuous mapping theorem using the function ${\boldsymbol g}(\{x_{1}(t),\ldots, x_{{p}(\,p+1)/{2}}(t)\}_{t\in[0,T]})=\{(x_{1}(t),x_{1}(T), \ldots, x_{{p}(\,p+1)/{2}}(t),x_{{p}(\,p+1)/{2}}(T))\}_{t\in[0,T]}$ . Finally, using the properties of the stable convergence, we obtain the stated result.

Similar results can be obtained for the second scenario of case II.

Proposition 5.7

Assume that, for all $ n\in\mathbb{N},\smash{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}X^{(k)}\Delta^{n}_{i}X^{(l)}}\neq0$ almost surely for any $k=1,\ldots, p$ and $l\leq k$ . Then under the assumptions of Theorem 4.3, we have

$$ \begin{equation*} \bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}X^{(k)}\Delta^{n}_{i}X^{(l)}}{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}X^{(k)}\Delta^{n}_{i}X^{(l)}}\bigg)_{k=1,\ldots, p;l\leq k} \xrightarrow{\text{u.c.p.}} \bigg(\frac{\tilde{R}^{(k,l)}_{t}} {\tilde{R}^{(k,l)}_{T}}\bigg)_{k=1,\ldots, p;l\leq k}. \end{equation*} $$

Proof. It follows from the same arguments as used in the proof of Proposition 5.5.

Proposition 5.8

Assume that, for all $n\in\mathbb{N},\smash{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}X^{(k)}\Delta^{n}_{i}X^{(l)}}\neq0$ almost surely for $k=1,\ldots,p$ and $l\leq k$ . Then under the assumptions of Theorem 4.3, we have

\begin{align*} &\bigg\{\sqrt{n}\bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}X^{(k)}\Delta^{n}_{i}X^{(l)}}{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}X^{(k)}\Delta^{n}_{i}X^{(l)}}-\frac{\tilde{R}^{(k, l)}_{t,n}}{\tilde{R}^{(k,l)}_{T,n}}\bigg)_{k=1,\ldots, p;l\leq k}\bigg\}_{t\in[0,T]} \\ &\qquad \xrightarrow{\text{st}} \bigg\{\bigg(\frac{1}{\tilde{R}^{(k,l)}_{T}} \int_{0}^{t}(V_{s}{\boldsymbol D}^{1/2})_{(k,l)}{\rm d} B^{(k,l)}_{s} \\ &-\frac{\tilde{R}^{(k,l)}_{t}}{(\tilde{R}^{(k,l)}_{T})^{2}} \int_{0}^{T}(V_{s}{\boldsymbol D}^{1/2})_{(k,l)} {\rm d} B^{(k,l)}_{s}\bigg)_{k=1,\ldots, p;l\leq k}\bigg\}_{t\in[0,T]}, \end{align*}

in $\mathcal{D}([0,T],\smash{\mathbb{R}^{{p}(\,p+1)/{2}})}$ , where $\smash{(V_{s}{\boldsymbol D}^{1/2})_{(k,l)}}$ indicates that we are considering only the (k,l) row of the matrix $ (V_{s}{\boldsymbol D}^{1/2}) $ (see Appendix A) and the $\smash{B_{s}^{(k,l)}}$ are one-dimensional independent Brownian motions.

Proof. It follows from the same arguments as used in the proof of Proposition 5.6.

Remark 5.2

Similar feasible results for general multivariate Gaussian processes with stationary increments can be derived by just setting all the $\sigma$ to be equal to 1.

6. Examples

6.1. The diagonal case

Due to the generality and the multidimensional nature of our results, their presentation is rather involved. We therefore now present a setting in which the results of this work simplify considerably. Nonetheless, the results of this section already extend the existing literature through their multidimensional, joint, and feasible nature (indeed, [Reference Granelli and Veraart17] provides neither joint nor feasible results, while [Reference Barndorff-Nielsen, Corcuera and Podolskij7] covers only the one-dimensional case).

Let $p\in\mathbb{N}$ . Consider the stochastic process $\smash{\{{\boldsymbol Y}_{t}\}_{t\in[0,T]}}=\smash{\{(Y_{t}^{(1)},\ldots, Y_{t}^{(\,p)})\}_{t\in[0,T]}}$ given by

$$ \begin{equation*} {\boldsymbol Y}_{t}=\int_{-\infty}^{t}\begin{pmatrix} g^{(1)}(t-s)\sigma_{s}^{(1)} & 0 & \cdots & 0 \\ 0 & \ddots & & \vdots \\ \vdots & & \ddots & 0 \\ 0 & \cdots & 0 & g^{(\,p)}(t-s)\sigma_{s}^{(\,p)} \end{pmatrix} \begin{pmatrix} {\rm d} W_{s}^{(1)} \\ \vdots \\ {\rm d} W_{s}^{(\,p)} \end{pmatrix} + \begin{pmatrix} U_{t}^{(1)} \\ \vdots \\ U_{t}^{(\,p)} \end{pmatrix}. \end{equation*} $$

Assume that the $\mathcal{F}_{t}$ -Brownian measures are jointly Gaussian (hence, we allow for dependency). For $k=1,\ldots, p$ , let $\smash{G^{(k)}_{t}\,:\!=\int_{0}^{t}g^{(k)}(t-s){\rm d} W_{s}^{(k)}}$ . Furthermore, let us define the scaling factor as in Section 3.1. For $k=1,\ldots, p$ , let $\tau_{n}^{(k)}\,:\!=\sqrt{\mathbb{E}[(\Delta^{n}_{1}G^{(k)})^{2}]}$ and

$$ \begin{equation*} r_{k,l}^{(n)}(h)\,:\!=\mathbb{E}\bigg[\dfrac{\Delta^{n}_{1}G^{(k)}} {\tau_{n}^{(k)}}\dfrac{\Delta^{n}_{1+h}G^{(l)}}{\tau_{n}^{(l)}} \bigg]. \end{equation*} $$

It is possible to see that in this setting, case I and the first scenario of case II coincide. Moreover, if we additionally assume that the volatilities are second order stationary with variance normalised to 1 (i.e. $\smash{\mathbb{E}[(\sigma_{s}^{(k)})^{2}]}=1$ for any $k=1,\ldots, p$ and $s\in(-\!\infty,T]$ ), then $\smash{\mathbb{E}[(\Delta^{n}_{1}G^{(k)})^{2}]}= \smash{\mathbb{E}[(\Delta^{n}_{1}Y^{(k)} )^{2}]}$ . Hence, by this additional assumption, all the cases presented in this work coincide.

Then, we have the following CLT, which follows directly from Theorem 4.1.

Corollary 6.1

Under Assumptions 4.1, 4.2, 4.3, and 4.5 applied to $\smash{\tau_{n}^{(l)},r_{k,l}^{(n)}}$ for $l,k=1,\ldots, p$ , we have the stable convergence

\begin{align*} &\bigg\{\sqrt{n}\bigg[\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor}\frac{\Delta_{i}^{n}Y^{(k)}}{\tau^{(k)}_{n}} \frac{\Delta_{i}^{n}Y^{(l)}}{\tau^{(l)}_{n}}-\mathbb{E} \bigg[\frac{\Delta_{1}^{n}G^{(k)}}{\tau^{(k)}_{n}} \frac{\Delta_{1}^{n}G^{(l)}}{\tau^{(l)}_{n}}\bigg] \int_{0}^{t}\sigma^{(k)}_{s}\sigma^{(l)}_{s}{\rm d} s\bigg]_{k,l=1,\ldots, p}\bigg\}_{t\in[0,T]} \\ &\qquad \xrightarrow{\text{st}} {\boldsymbol D}^{1/2}\bigg\{\bigg(\int_{0}^{t} \sigma_{s}^{(k)}\sigma_{s}^{(l)}{\rm d} B^{(k,l)}_{s}\bigg)_{k,l=1,\ldots, p}\bigg\}_{t\in[0,T]} \end{align*}

in $\mathcal{D}([0,T],\mathbb{R}^{p^{2}}) $ , where $\smash{B^{(k,l)}_{s}}$ for $l,k=1,\ldots, p$ are independent one-dimensional Brownian motions and ${\boldsymbol D}$ is a $p^{2}\times p^{2}$ matrix defined in Proposition 3.1.

Remark 6.1

It is possible to see that in this framework we have a clear separation between the deterministic and the stochastic parts of the limiting process. Indeed, the deterministic kernel functions $\smash{g^{(1)},\ldots, g^{(\,p)}}$ constitute just a matrix of constants multiplying the limiting stochastic process.

Let $r_{k,l}(0)\,:\!=\smash{\lim_{n\rightarrow\infty}r_{k,l}^{(n)}(0)}$ . The previous result leads to the following WLLN.

Corollary 6.2

Let the assumptions of Corollary 6.1 hold. Then we have

$$ \begin{equation*} \bigg(\frac{1}{n}\sum_{i=1}^{\lfloor nt\rfloor }\frac{\Delta_{i}^{n}Y^{(k)}}{\tau^{(k)}_{n}}\frac{\Delta_{i}^{n} Y^{(l)}}{\tau^{(l)}_{n}}\bigg)_{k,l=1,\ldots, p} \xrightarrow{\text{u.c.p.}} \bigg(r_{k,l}(0)\int_{0}^{t}\sigma^{(k)}_{s}\sigma^{(l)}_{s}{\rm d} s\bigg)_{k,l=1,\ldots, p}. \end{equation*} $$
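The WLLN above can be illustrated in the simplest diagonal setting, assuming (our choices, purely for illustration) that $G^{(k)}$ is a Brownian motion, so that $\tau^{(k)}_{n}=n^{-1/2}$ and $r_{k,k}(0)=1$ , with deterministic volatility $\sigma_{s}=1+s$ on $[0,1]$ .

```python
import numpy as np

# Illustration of Corollary 6.2 (k = l): with G^(k) a Brownian motion,
# tau_n = n^(-1/2) and r(0) = 1, and sigma(s) = 1 + s, the normalised
# realised variance over [0, 1] should approach int_0^1 (1+s)^2 ds = 7/3.
rng = np.random.default_rng(3)
n = 200_000
s = np.arange(n) / n
sigma = 1.0 + s
dY = sigma * rng.standard_normal(n) / np.sqrt(n)   # increments sigma_s dW_s
rv = ((np.sqrt(n) * dY) ** 2).sum() / n            # (1/n) sum (dY_i / tau_n)^2
assert abs(rv - 7.0 / 3.0) < 0.05
```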

Now, recall that

$$ \begin{equation*} \bar{R}_{t,n}^{(k,l)}\,:\!=\mathbb{E}\bigg[\frac{\Delta_{1}^{n}G^{(k)}} {\tau_{n}^{(k)}}\frac{\Delta_{1}^{n}G^{(l)}}{\tau_{n}^{(l)}}\bigg] \int_{0}^{t}\sigma^{(k)}_{s}\sigma^{(l)}_{s}{\rm d} s \end{equation*} $$

and that $\smash{\bar{R}_{t}^{(k,l)}\,:\!=r_{k,l}(0)\int_{0}^{t}\sigma^{(k)}_{s} \sigma^{(l)}_{s}}{\rm d} s$ . Furthermore, let

$$ \begin{equation*} \mathcal{R}_{t}^{(k,l)}\,:\!= \sqrt{\bar{R}_{t,n}^{(k,k)} \bar{R}_{t,n}^{(l,l)}}= \sqrt{\int_{0}^{t}(\sigma^{(k)}_{s})^{2}{\rm d} s\int_{0}^{t} (\sigma^{(l)}_{s})^{2}{\rm d} s}. \end{equation*} $$

Therefore, we have

$$ \begin{equation*} \frac{\bar{R}^{(k,l)}_{t}}{\mathcal{R}_{t}^{(k,l)}}=r_{k,l}(0) \frac{\int_{0}^{t}\sigma^{(k)}_{s}\sigma^{(l)}_{s}{\rm d} s}{\sqrt{\int_{0}^{t} (\sigma^{(k)}_{s})^{2}{\rm d} s\int_{0}^{t} (\sigma^{(l)}_{s})^{2}{\rm d} s}\,}. \end{equation*} $$

We have the following feasible results on the asymptotic behaviour of the correlation ratio and of the relative covolatility. They follow directly from the results presented in Section 5.

Corollary 6.3

Under the assumptions of Corollary 6.1, we have, for any $\epsilon>0$ ,

$$ \begin{equation*} \bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}{\sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}Y^{(k)})^{2}}\sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}Y^{(l)})^{2}}\,}\bigg)_{k,l=1,\ldots, p} \xrightarrow{\text{u.c.p.}} \bigg(\frac{\bar{R}^{(k,l)}_{t}}{\mathcal{R}_{t}^{(k,l)}}\bigg)_{k,l=1,\ldots, p}, \end{equation*} $$

and

\begin{align*} &\bigg\{\sqrt{n}\bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}{\sqrt{ \sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i}Y^{(k)})^{2}} \sqrt{\sum_{i=1}^{\lfloor nt\rfloor}(\Delta^{n}_{i} Y^{(l)})^{2}}\,} -\frac{\bar{R}_{t,n}^{(k,l)}} {\mathcal{R}_{t}^{(k,l)}}\bigg)_{k,l=1,\ldots, p}\bigg\}_{t\in[\epsilon,T]} \\ &\quad \xrightarrow{\text{st}} {\boldsymbol D}^{1/2} \bigg\{\bigg(\frac{1}{\mathcal{R}_{t}^{(k,l)}}\bigg(\int_{0}^{t} \sigma^{(k)}_{s}\sigma^{(l)}_{s}{\rm d} B^{(k,l)}_{s} \\ & -\!\frac{1}{2}\bar{R}^{(k,l)}_{t} \bigg(\frac{\int_{0}^{t}(\sigma^{(k)}_{s})^{2}{\rm d} B^{(k,k)}_{s}}{\int_{0}^{t} (\sigma^{(k)}_{s})^{2}{\rm d} s}+\frac{\int_{0}^{t}(\sigma^{(l)}_{s})^{2} {\rm d} B^{(l,l)}_{s}}{\int_{0}^{t}(\sigma^{(l)}_{s})^{2}{\rm d} s}\bigg)\bigg) \bigg)_{k,l=1,\ldots, p}\bigg\}_{t\in[\epsilon,T]} \end{align*}

in $\mathcal{D}([\epsilon,T],\mathbb{R}^{p^{2}}) $ , where the $\smash{B_{s}^{(k,l)}}$ are one-dimensional independent Brownian motions and ${\boldsymbol D}$ is a $p^{2}\times p^{2}$ matrix defined in Proposition 3.1.

Corollary 6.4

Assume that, for all $n\in\mathbb{N},\smash{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}\neq0$ almost surely for $k,l=1,\ldots,p$ . Then, under the assumptions of Corollary 6.1, we have

$$ \begin{equation*} \bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}\bigg)_{k,l=1,\ldots, p} \xrightarrow{\text{u.c.p.}} \bigg(\frac{\bar{R}^{(k,l)}_{t}}{\bar{R}^{(k,l)}_{T}} \bigg)_{k,l=1,\ldots, p} \end{equation*} $$

and

\begin{align*} &\bigg\{\sqrt{n}\bigg(\frac{\sum_{i=1}^{\lfloor nt\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}{\sum_{i=1}^{\lfloor nT\rfloor}\Delta^{n}_{i}Y^{(k)}\Delta^{n}_{i}Y^{(l)}}-\frac{\bar{R}^{(k, l)}_{t,n}}{\bar{R}^{(k,l)}_{T,n}}\bigg)_{k,l=1,\ldots, p}\bigg\}_{t\in[0,T]} \\ &\qquad \xrightarrow{\text{st}} {\boldsymbol D}^{1/2}\bigg\{\bigg(\frac{1}{\bar{R}^{(k,l)}_{T}} \int_{0}^{t}\sigma^{(k)}_{s}\sigma^{(l)}_{s}{\rm d} B^{(k,l)}_{s}- \frac{\bar{R}^{(k,l)}_{t}}{(\bar{R}^{(k,l)}_{T})^{2}} \int_{0}^{T}\sigma^{(k)}_{s}\sigma^{(l)}_{s}{\rm d} B^{(k,l)}_{s}\bigg)_{k,l= 1,\ldots, p}\bigg\}_{t\in[0,T]} \end{align*}

in $\mathcal{D}([0,T],\mathbb{R}^{p^{2}}) $ , where the $\smash{B_{s}^{(k,l)}}$ are one-dimensional independent Brownian motions and ${\boldsymbol D}$ is a $p^{2}\times p^{2}$ matrix defined in Proposition 3.1.

6.2. The gamma kernel

Here we explore an example of the multivariate process $\{{\boldsymbol G}_{t}\}_{t\in[0,T]}$ which satisfies the assumptions presented in Section 4. We will focus on the gamma kernel (see [Reference Barndorff-Nielsen, Corcuera and Podolskij7]) because it plays a central role in the modelling of (atmospheric) turbulence, which is one of the main applications motivating the development of BSS processes. We will show that the process obtained from using the gamma kernel in the kernel matrix satisfies our assumptions. Consider the stochastic process $\{{\boldsymbol G}_{t}\}_{t\in[0,T]}$ defined as

$$ \begin{equation*} {\boldsymbol G}_{t}\,:\!=\begin{pmatrix} G_{t}^{(1)} \\ \vdots \\ G_{t}^{(\,p)} \end{pmatrix} = \int_{-\infty}^{t}\begin{pmatrix} g^{(1,1)}(t-s) & \cdots & g^{(1,p)}(t-s) \\ \vdots & \ddots & \vdots \\ g^{(\,p,1)}(t-s) & \cdots & g^{(\,p,p)}(t-s) \end{pmatrix} \begin{pmatrix} {\rm d} W_{s}^{(1)} \\ \vdots \\ {\rm d} W_{s}^{(\,p)} \end{pmatrix}, \end{equation*} $$

where $\smash{g^{(i,j)}(t)=t^{\delta^{(i,j)}}{\rm e}^{-\lambda^{(i,j)}t} \mathbf{1}_{[0,\infty)}(t)}$ and the $\smash{W^{(i)}}$ are independent Gaussian $\mathcal{F}_{t}$ -Brownian measures on $\mathbb{R}$ , for $i,j=1,\ldots, p$ . We consider the case of independent Brownian measures for the sake of clarity and remark that the extension to the dependent case is immediate from our computations. Thus, we have $\smash{G^{(i)}_{t}=\sum_{j=1}^{p}\int_{-\infty}^{t}g^{(i,j)}(t-s) {\rm d} W^{(\,j)}_{s}}$ and

\begin{align*} \mathbb{E}[G^{(i)}_{t+h}G^{(\,j)}_{t} ] &=\sum_{l=1}^{p}\int_{-\infty}^{t}g^{(i,l)}(t+h-s)g^{(\,j,l)}(t-s){\rm d} s \\ &=\sum_{l=1}^{p}\int_{0}^{\infty}g^{(i,l)}(x+h)g^{(\,j,l)}(x){\rm d} x \\ &=\sum_{l=1}^{p}{\rm e}^{-\lambda^{(i,l)}h}\int_{0}^{\infty}(x+h)^{\delta^{(i, l)}}x^{\delta^{(\,j,l)}}{\rm e}^{-(\lambda^{(i,l)}+\lambda^{(\,j,l)})x}{\rm d} x. \end{align*}
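As a quick numerical sanity check (ours, not part of the paper's analysis), the chain of equalities above can be verified by quadrature for a single summand ($p=1$) with illustrative parameter values $\delta^{(i,l)}=0.3$, $\delta^{(\,j,l)}=0.2$, $\lambda^{(i,l)}=1$, $\lambda^{(\,j,l)}=2$: the covariance integral does not depend on $t$ (stationarity), and it agrees with the shifted form obtained by the substitution $x=t-s$.

```python
import math

# Hypothetical illustrative parameters: delta in (0, 1/2), lambda > 0.
DI, DJ, LI, LJ = 0.3, 0.2, 1.0, 2.0

def g(x, delta, lam):
    # Gamma kernel g(x) = x^delta * exp(-lam * x) on (0, infinity), 0 otherwise.
    return x**delta * math.exp(-lam * x) if x > 0 else 0.0

def midpoint(f, a, b, n=100_000):
    # Composite midpoint rule on [a, b].
    h = (b - a) / n
    return h * sum(f(a + (q + 0.5) * h) for q in range(n))

def cov_at(t, h, cutoff=30.0):
    # E[G_{t+h} G_t] = int_{-infty}^{t} g^{(i)}(t+h-s) g^{(j)}(t-s) ds (truncated).
    return midpoint(lambda s: g(t + h - s, DI, LI) * g(t - s, DJ, LJ), t - cutoff, t)

h = 0.5
c0 = cov_at(0.0, h)   # computed at t = 0
c1 = cov_at(1.3, h)   # computed at t = 1.3: equal, by stationarity
# Shifted form: e^{-lambda_i h} int_0^infty (x+h)^{d_i} x^{d_j} e^{-(l_i+l_j) x} dx.
shifted = math.exp(-LI * h) * midpoint(
    lambda x: (x + h)**DI * x**DJ * math.exp(-(LI + LJ) * x), 0.0, 30.0)

assert abs(c0 - c1) < 1e-8 and abs(c0 - shifted) < 1e-6
```

The truncation at 30 is harmless here because of the exponential damping in the kernels.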

It is important to note that if $\smash{\delta^{(i,j)}\in(-\tfrac{1}{2},0)\cup(0,\tfrac{1}{2})}$ then $\smash{\int_{-\infty}^{t}g^{(i,j)}(t-s){\rm d} W^{(\,j)}_{s}}$ is not a semimartingale (see [Reference Barndorff-Nielsen, Corcuera and Podolskij7] and [Reference Granelli and Veraart17]). We will first investigate Assumption 4.2 and then Assumption 4.1. Observe that, by stationarity,

(6.1) \begin{align} &r_{i,j}^{(n)}(k)\nonumber \\ &\quad\,:\!=\mathbb{E}\bigg[\dfrac{\Delta^{n}_{1}G^{(\,j)}} {\tau_{n}^{(\,j)}}\dfrac{\Delta^{n}_{1+k}G^{(i)}}{\tau_{n}^{(i)}} \bigg] \nonumber \\ &\quad\phantom{:}=\frac{2\mathbb{E}[G^{(\,j)}_{{1}/{n}}G^{(i)} _{{(1+k)}/{n}} ]-\mathbb{E}[G^{(\,j)}_{{1}/{n}}G^{(i)} _{{k}/{n}} ]-\mathbb{E}[G^{(\,j)}_{0}G^{(i)}_{{(1+k)}/{n}} ]}{(2\mathbb{E}[(G^{(\,j)}_{0})^{2} ]-2\mathbb{E} [G^{(\,j)}_{{1}/{n}}G^{(\,j)}_{0} ])^{{1}/{2}} (2\mathbb{E}[(G^{(i)}_{0})^{2} ]-2\mathbb{E} [G^{(i)}_{{1}/{n}}G^{(i)}_{0} ])^{{1}/{2}}} \nonumber \\ &\quad\phantom{:}= \mbox{{$ \frac{\sum_{l=1}^{p}\int_{0}^{\infty}[2g^{(i,l)}(x+{k}/{n})g^{(\,j,l)}(x) -g^{(i,l)}(x+{(k-1)}/{n})g^{(\,j,l)}(x)-g^{(i,l)}(x+{(k+1)}/{n})g^{(\,j,l)}(x)] {\rm d} x}{2(\sum_{l=1}^{p}\int_{0}^{\infty} [(g^{(i,l)}(x))^{2}-g^{(i,l)}(x+{1}/{n})g^{(i,l)}(x)]{\rm d} x )^{{1}/{2}}(\sum_{l=1}^{p}\int_{0}^{\infty} [(g^{(\,j,l)}(x))^{2}-g^{(\,j,l)}(x+{1}/{n})g^{(\,j,l)}(x)]{\rm d} x )^{{1}/{2}}} $}.} \label{eqn15}\end{align}

Remark 6.2

Since $\smash{\int_{0}^{\infty}[(g^{(i,l)}(x))^{2}-g^{(i,l)}(x+{1}/{n})g^{(i, l)}(x)]}{\rm d} x>0$ for any $i,l=1,\ldots, p$ , the absolute value of (6.1) is bounded above by

(6.2) $$ \begin{equation} \mbox{{$ \sum_{l=1}^{p}\bigg| \frac{\int_{0}^{\infty}[2g^{(i,l)}(x+{k}/{n}) g^{(\,j,l)}(x)-g^{(i,l)}(x+{(k-1)}/{n})g^{(\,j,l)}(x)-g^{(i,l)} (x+{(k+1)}/{n})g^{(\,j,l)}(x)]{\rm d} x}{2(\int_{0}^{\infty} [(g^{(i,l)}(x))^{2}-g^{(i,l)}(x+{1}/{n})g^{(i,l)}(x)]{\rm d} x)^{{1}/{2}} (\int_{0}^{\infty}[(g^{(\,j,l)}(x))^{2}-g^{(\,j,l)} (x+{1}/{n})g^{(\,j,l)}(x)]{\rm d} x)^{{1}/{2}}}\bigg| $}.} \label{eqn16} \end{equation} $$

From here, it is possible to see that the results and examples of [Reference Granelli and Veraart17] directly apply to our framework because each summand in (6.2) is the correlation coefficient ‘ $\smash{r_{i,j}^{(n)}(k)}$ ’ in their work.
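To make the remark concrete, the correlations in (6.1) can be evaluated numerically. The following sketch (ours, with hypothetical parameters $\delta=0.25$, $\lambda=1$, $n=100$ and $p=1$) computes $r^{(n)}(k)$ by quadrature and confirms that it decays in the lag $k$, as expected for a process that locally behaves like fractional Brownian motion with $H=\delta+\tfrac12$.

```python
import math

DELTA, LAM, N = 0.25, 1.0, 100   # hypothetical illustrative parameters

def g(x):
    # Univariate gamma kernel.
    return x**DELTA * math.exp(-LAM * x) if x > 0 else 0.0

def cov(h, cutoff=20.0, m=30_000):
    # C(h) = int_0^infty g(x+h) g(x) dx via the composite midpoint rule.
    step = cutoff / m
    return step * sum(g((q + 0.5) * step + h) * g((q + 0.5) * step) for q in range(m))

def r(k):
    # Increment autocorrelation, cf. (6.1) with i = j and p = 1.
    num = 2 * cov(k / N) - cov((k - 1) / N) - cov((k + 1) / N)
    den = 2 * (cov(0.0) - cov(1.0 / N))
    return num / den

r2, r5 = r(2), r(5)
assert r2 > 0 and abs(r5) < abs(r2) < 1   # positive correlations, decaying in k
```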

Using the results in the supplementary material of [Reference Granelli and Veraart17] (see also Equation (12) of [Reference Bateman9, p. 234]), the numerator in (6.1) is given by

\begin{align*} &\sum_{l=1}^{p}K_{1}^{(i,j),(l)}{\rm e}^{-\lambda^{(i,l)}{k}/{n}} \sum_{r=0}^{\infty}\frac{(1+\delta^{(\,j,l)})_{r}}{(\delta^{(i,l)} +\delta^{(\,j,l)}+2)_{r}}\frac{1}{r!}(\lambda^{(i,l)}+\lambda^{(\,j,l)} )^{r} \\ &\times \bigg(2\bigg(\frac{k}{n}\bigg)^{r+\delta^{(i,l)} +\delta^{(\,j,l)}+1}-\bigg(\frac{k-1}{n}\bigg)^{r+\delta^{(i,l)}+ \delta^{(\,j,l)}+1} {\rm e}^{\lambda^{(i,l)}/{n}} \\ & -\bigg(\frac{k+1}{n}\bigg)^{r+\delta^{(i,l)} +\delta^{(\,j,l)}+1}{\rm e}^{-\lambda^{(i,l)}/{n}}\bigg) \\ &+\sum_{l=1}^{p}K_{2}^{(i,j),(l)}{\rm e}^{-\lambda^{(i,l)}{k}/{n}} \sum_{r=0}^{\infty}\frac{(\delta^{(i,l)})_{r}}{(\delta^{(i,l)} +\delta^{(\,j,l)})_{r}}\frac{1}{r!}(\lambda^{(i,l)}+\lambda^{(\,j,l)})^{r} \\ &\times \bigg(2\bigg(\frac{k}{n}\bigg)^{r} -\bigg(\frac{k-1}{n}\bigg)^{r}{\rm e}^{\lambda^{(i,l)}/{n}} -\bigg(\frac{k+1}{n}\bigg)^{r}{\rm e}^{-\lambda^{(i,l)}/{n}}\Bigg), \end{align*}

where

$$ \begin{equation*} K_{1}^{(i,j),(l)}\,:\!=\frac{\Gamma(\delta^{(\,j,l)}+1 )\Gamma(-1-\delta^{(\,j,l)}-\delta^{(i,l)} )}{\Gamma(-\delta^{(i,l)})}, \!\!\qquad\!\! K_{2}^{(i,j),(l)}\,:\!=\frac{\Gamma(\delta^{(\,j,l)}+\delta^{(i,l)}+1 )}{(\lambda^{(i,l)}+\lambda^{(\,j,l)} )^{\delta^{(\,j,l)}+\delta^{(i,l)}+1}}, \end{equation*} $$

and where $ (a)_{n}\,{:\!=}\, a(a + 1) \cdots(a + n - 1) = \smash{\prod_{q=0}^{n-1}(a+q)}= {\Gamma(a+n)}/{\Gamma(a)},$ with $ (a)_{0} \,:\!= 1$ . Furthermore, let $\bar{\delta}\,:\!=\smash{\min_{l=1,\ldots, p}\delta^{(\,j,l)}+\delta^{(i,l)}}$ . It is possible to see that, as $n\rightarrow\infty$ , the numerator is of order $\smash{({1}/{n})^{1+\bar{\delta}}}$ , because $\smash{\delta^{(i,l)}<\tfrac12}$ for every $i,l=1,\ldots, p$ in order to be in the nonsemimartingale case.
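The rising factorial $(a)_{n}$ used above is straightforward to compute; the minimal helper below (ours) checks the stated identity $(a)_{n}=\Gamma(a+n)/\Gamma(a)$ numerically.

```python
import math

def poch(a, n):
    # Rising factorial (a)_n = a (a+1) ... (a+n-1), with (a)_0 = 1.
    out = 1.0
    for q in range(n):
        out *= a + q
    return out

assert poch(0.7, 0) == 1.0
# Agreement with the Gamma-function representation (a)_n = Gamma(a+n)/Gamma(a):
for a in (0.3, 0.7, 1.9):
    for n in (1, 2, 5):
        assert abs(poch(a, n) - math.gamma(a + n) / math.gamma(a)) < 1e-9
```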

Moreover, regarding the denominator we observe that, for every $l=1,\ldots, p$ ,

\begin{align*} &\int_{0}^{\infty}\bigg[(g^{(i,l)}(x))^{2} -g^{(i,l)}\bigg(x+\frac{1}{n}\bigg)g^{(i,l)}(x)\bigg]{\rm d} x \\ &\qquad=\frac{\Gamma(2\delta^{(i,l)}+1)}{(2\lambda^{(i,l)} )^{2\delta^{(i,l)}+1}} -K_{1}^{(i,i),(l)}{\rm e}^{-\lambda^{(i, l)}/{n}}\sum_{r=0}^{\infty} \frac{(1+\delta^{(i,l)})_{r}} {(2\delta^{(i,l)}+2)_{r}}\frac{1}{r!} (2\lambda^{(i,l)} )^{r}\bigg(\frac{1}{n}\bigg)^{r +2\delta^{(i,l)}+1} \\ &\qquad -K_{2}^{(i,i),(l)}{\rm e}^{-\lambda^{(i,l)}/{n}} \sum_{r=0}^{\infty}\frac{(\delta^{(i,l)})_{r}}{(2\delta^{(i,l)})_{r}} \frac{1}{r!}(2\lambda^{(i,l)})^{r}\bigg(\frac{1}{n}\bigg)^{r}, \end{align*}

and using the facts that

$$ \begin{equation*} {\rm e}^{-\lambda^{(i,l)}x}=1-\lambda^{(i,l)}x+O(x^{2}) \quad\text{and}\quad K_{2}^{(i,i),(l)}=\frac{\Gamma(2\delta^{(i,l)}+ 1)}{(2\lambda^{(i,l)})^{2\delta^{(i,l)}+1}}, \end{equation*} $$

this equals (note that the summations below start variously from $r=0$ , $r=1$ , and $r=2$ )

\begin{align*} &-K_{1}^{(i,i),(l)}{\rm e}^{-\lambda^{(i,l)}/{n}}\sum_{r=0}^{\infty} \frac{(1+\delta^{(i,l)})_{r}}{(2\delta^{(i,l)}+2)_{r}}\frac{1}{r!} (2\lambda^{(i,l)} )^{r}\bigg(\frac{1}{n} \bigg)^{r+2\delta^{(i,l)}+1} \\ &-\bigg(1-\lambda^{(i,l)}\frac{1}{n}+O\bigg(\frac{1}{n^{2}} \bigg)\bigg)K_{2}^{(i,i),(l)} \\ &\times \sum_{r=1}^{\infty}\frac{(\delta^{(i,l)})_{r}}{(2\delta^{(i,l)})_{r}} \frac{1}{r!}(2\lambda^{(i,l)})^{r}\bigg(\frac{1}{n}\bigg)^{r} -\bigg(-\lambda^{(i,l)}\frac{1}{n}+O\bigg(\frac{1}{n^{2}}\bigg)\bigg) K_{2}^{(i,i),(l)} \\ &\qquad=-K_{1}^{(i,i),(l)}\bigg(1-\lambda^{(i,l)}\frac{1}{n}+ O\bigg(\frac{1}{n^{2}}\bigg)\bigg)\sum_{r=0}^{\infty}\frac{(1+ \delta^{(i,l)})_{r}}{(2\delta^{(i,l)}+2)_{r}}\frac{1}{r!} (2\lambda^{(i,l)} )^{r}\bigg(\frac{1}{n} \bigg)^{r+2\delta^{(i,l)}+1} \\ &\qquad -\bigg(1-\lambda^{(i,l)}\frac{1}{n}+O\bigg(\frac{1}{n^{2}} \bigg)\bigg)K_{2}^{(i,i),(l)}\sum_{r=2}^{\infty}\frac{(\delta^{(i, l)})_{r}}{(2\delta^{(i,l)})_{r}}\frac{1}{r!}(2\lambda^{(i,l)} )^{r}\bigg(\frac{1}{n}\bigg)^{r} \\ &\qquad -\bigg(1-\lambda^{(i,l)}\frac{1}{n}+O\bigg(\frac{1}{n^{2}} \bigg) \bigg)K_{2}^{(i,i),(l)}\lambda^{(i,l)}\bigg(\frac{1}{n}\bigg)- \bigg(-\lambda^{(i,l)}\frac{1}{n}+O\bigg(\frac{1}{n^{2}}\bigg)\bigg) K_{2}^{(i,i),(l)} \\ &\qquad=-K_{1}^{(i,i),(l)}\bigg(1-\lambda^{(i,l)}\frac{1}{n}+ O\bigg(\frac{1}{n^{2}}\bigg)\bigg)\sum_{r=0}^{\infty}\frac{(1+ \delta^{(i,l)})_{r}}{(2\delta^{(i,l)}+2)_{r}}\frac{1}{r!} (2\lambda^{(i,l)})^{r}\bigg(\frac{1}{n} \bigg)^{r+2\delta^{(i,l)}+1} \\ &\qquad -\bigg(1-\lambda^{(i,l)}\frac{1}{n}+O\bigg(\frac{1}{n^{2}} \bigg)\bigg)K_{2}^{(i,i),(l)}\sum_{r=2}^{\infty}\frac{(\delta^{(i, l)})_{r}}{(2\delta^{(i,l)})_{r}}\frac{1}{r!}(2\lambda^{(i,l)} )^{r}\bigg(\frac{1}{n}\bigg)^{r}+O\bigg(\frac{1}{n^{2}}\bigg). \end{align*}

Now, let $\smash{\hat{\delta}\,:\!=\min_{l=1,\ldots, p}\delta^{(i,l)}+\min_{l=1,\ldots, p}\delta^{(\,j,l)}}$ . From the above computations, it is possible to see that, as $n\rightarrow\infty$ , the denominator in (6.1) is of order $ ({1}/{n})^{1+\hat{\delta}}$ . Therefore, considering the order as $n\rightarrow\infty$ , (6.1) reduces to $\smash{({1}/{n})^{\bar{\delta}- \hat{\delta}}}$ . Observe that $\bar{\delta}-\hat{\delta}\geq 0$ by definition. Hence, if $\bar{\delta}> \hat{\delta}$ then $\smash{\lim_{n\rightarrow\infty}(r^{(n)}_{i,j}(k))^{2}}=0$ . If instead $\bar{\delta}=\hat{\delta}$ , we have, for any $k\in\mathbb{N}$ ,

(6.3) $$ \begin{equation} \lim_{n\rightarrow\infty}(r^{(n)}_{i,j}(k))^{2}=C(2k^{\bar{\delta}+1}- (k-1)^{\bar{\delta}+1}-(k+1)^{\bar{\delta}+1})^{2}<\infty, \label{eqn17} \end{equation} $$

where $C\in\mathbb{R}$ is a finite constant independent of k. Hence, the first part of Assumption 4.2 is satisfied.
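The limit (6.3) has the same second-difference structure as the autocorrelation function of fractional Brownian motion increments. A short check (ours, with an arbitrary illustrative value $\bar{\delta}=-0.4$) confirms the asymptotics $c(k)\,:\!=2k^{\bar{\delta}+1}-(k-1)^{\bar{\delta}+1}-(k+1)^{\bar{\delta}+1}\sim -b(b-1)k^{b-2}$ with $b=\bar{\delta}+1$, so that $c(k)^{2}$ is summable whenever $\bar{\delta}<\tfrac12$.

```python
BAR_DELTA = -0.4            # arbitrary illustrative value in (-1, 1/2)
b = BAR_DELTA + 1.0

def c(k):
    # Second difference appearing in (6.3).
    return 2 * k**b - (k - 1)**b - (k + 1)**b

# Taylor expansion: c(k) = -b(b-1) k^(b-2) + O(k^(b-4)).
ratio = c(1000) / (-b * (b - 1) * 1000**(b - 2))
assert abs(ratio - 1) < 1e-3
assert abs(c(200)) < abs(c(100))   # decay at large lags
# c(k)^2 ~ k^(2b-4) is summable since 2b - 4 < -1 for bar_delta < 1/2:
tail = sum(c(k)**2 for k in range(5000, 10001))
assert tail < 1e-6
```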

Let $\bar{R}^{(i,j),(l)}(t)\,:\!=\int_{0}^{\infty}[(g^{(i,l)}(x))^{2}+(g^{(\,j,l)}(x))^{2}-2g^{(i,l)}(x+t)g^{(\,j,l)}(x)]{\rm d} x$ ; thus, $\mathbb{E}[(G^{(i)}_{s+t}-G^{(\,j)}_{s})^{2}]=\sum_{l=1}^{p}\bar{R}^{(i,j),(l)}(t)$ . Recall that a function $L \colon (0, \infty) \rightarrow\mathbb{R}$ is called slowly varying at 0 if $\lim_{t\rightarrow0^{+}}\{{L(\lambda t)}/{L(t)}\}=1$ for any fixed $\lambda>0$ . If L is continuous on $ (0, \infty) $ , we have

(6.4) $$ \begin{equation} |L(t)|\leq C t^{-\alpha},\qquad t\in(0,\lambda], \label{eqn18} \end{equation} $$

for any $\alpha> 0$ and any $\lambda > 0$ (where the constant $C > 0$ depends on $\alpha$ and $\lambda$ ; see [Reference Barndorff-Nielsen, Corcuera and Podolskij6]). Now, consider the following conditions.

(A1) $\smash{\bar{R}^{(i,i),(l)}(t) = t^{1+2\delta^{(i,l)}}L^{(i,i,l)}_{0}(t)}$ for some $\smash{\delta^{(i,l)}\in (-\tfrac{1}{2}, \tfrac{1}{2})}$ and some positive slowly varying at 0 function $L^{(i,i,l)}_{0}$ , which is continuous on (0, $\infty$ ), for every $l=1,\ldots, p$ .

(A2) $\smash{(\bar{R}^{(i,j),(l)})''(t) = t^{\delta^{(i,l)}+\delta^{(\,j,l)}-1}L^{(i,j,l)}_{2}(t)}$ for some slowly varying at 0 function $\smash{L^{(i,j,l)}_{2}}$ , which is continuous on (0, $\infty$ ), for every $l=1,\ldots, p$ .

(A3) Define $\smash{\tilde{L}_{0}^{(i,j,l)}(t) \,:\!= \sqrt{L^{(i,i,l)}_{0}(t)L^{(\,j,j,l)}_{0}(t)}}$ . There exists $d \in (0, 1) $ such that, for every $l=1,\ldots, p$ ,

$$ \begin{equation*} \limsup_{t\rightarrow0^{+}}\sup_{y\in(t,t^{d})}\frac{L^{(i,j,l)}_{2}(y)} {\tilde{L}_{0}^{(i,j,l)}(t)}<\infty. \end{equation*} $$
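For intuition about the slow-variation requirements in conditions (A1)–(A3), here is a small check (ours) that $L(t)=|\log t|$ is slowly varying at 0 and obeys the bound (6.4): $L(\lambda t)/L(t)\rightarrow1$ as $t\rightarrow0^{+}$, and $|L(t)|\,t^{\alpha}$ is bounded on $(0,1]$, with maximum $1/(\alpha{\rm e})$ attained at $t={\rm e}^{-1/\alpha}$.

```python
import math

def L(t):
    # |log t|: a standard example of a function slowly varying at 0.
    return abs(math.log(t))

# Slow variation: L(lam * t) / L(t) -> 1 as t -> 0+, for fixed lam > 0.
for lam in (0.5, 2.0, 10.0):
    assert abs(L(lam * 1e-40) / L(1e-40) - 1) < 0.05

# Bound (6.4): |L(t)| <= C t^{-alpha} on (0, 1], i.e. L(t) * t^alpha is bounded.
alpha = 0.1
vals = [L(math.exp(-u / 100)) * math.exp(-u / 100)**alpha for u in range(1, 2001)]
assert max(vals) < 1 / (alpha * math.e) + 1e-9   # analytic maximum 1/(alpha*e)
```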

The following result is the multivariate version of Lemma 1 of [Reference Barndorff-Nielsen, Corcuera and Podolskij6].

Lemma 6.1

Suppose that conditions (A1)–(A3) hold. Let $\beta\,:\!=\smash{\max_{l=1,\ldots, p}\delta^{(i,l)}+\delta^{(\,j,l)}}+1,$ and let $\epsilon > 0$ with $\epsilon < 2 - \beta$ . Define the sequence r(k) by $r ( k) = ( k - 1)^{\beta+\epsilon-2},\,k\geq 2,$ and $r(0),r(1)\geq C$ , with $C>\max(\smash{r^{(n)}_{i,j}(0),r^{(n)}_{i,j}(1)})$ . Then there exists a natural number $n_{0}(\epsilon) $ such that ${|r^{(n)}_{i,j}(k)|}\leq C r(k),\,k\geq0$ , for all $n>n_{0}(\epsilon) $ . Furthermore, if $\beta+\epsilon -2<-\smash{\tfrac12}$ , then ${\sum_{k=1}^{\infty}}r^{2}(k)<\infty$ .

Proof. Observe that

$$ \begin{equation*} r^{(n)}_{i,j}(k)=\sum_{l=1}^{p}\frac{\bar{R}^{(i,j,l)}({(k-1)}/{n} ) -2\bar{R}^{(i,j,l)}({k}/{n})+\bar{R}^{(i,j,l)} ({(k+1)}/{n})}{2\sqrt{(\sum_{l=1}^{p}\bar{R}^{(i,i,l)} ({1}/{n}))(\sum_{l=1}^{p} \bar{R}^{(\,j,j,l)}({1}/{n}))}}. \end{equation*} $$

Under conditions (A1)–(A2), for $2\leq k\leq n,$ by (6.2) and the mean value theorem (applied twice), we have

(6.5) \begin{align} |r^{(n)}_{i,j}(k)|&\leq \sum_{l=1}^{p}\bigg|\frac{1}{2}\bigg(\frac{1}{n}\bigg)^{1-\delta^{(i,l)} -\delta^{(\,j,l)}}\frac{(\bar{R}^{(i,j),(l)})''({(k+\theta^{n}_{k})}/{n} )}{\tilde{L}_{0}^{(i,j,l)}({1}/{n})} \bigg| \nonumber \\ &= \sum_{l=1}^{p}\bigg|\frac{1}{2}\bigg(\frac{1}{n}\bigg)^{1- \delta^{(i,l)} -\delta^{(\,j,l)}}\bigg(\frac{k+\theta^{n}_{k}} {n}\bigg)^{\delta^{(i,l)} +\delta^{(\,j,l)}-1}\frac{L^{(i,j, l)}_{2}({(k+\theta^{n}_{k})}/{n} )}{\tilde{L}_{0}^{(i,j,l)}({1}/{n})} \bigg|, \label{eqn19}\end{align}

where $\smash{|\theta^{n}_{k}|}<1$ . In particular, for $2\leq k\leq \lfloor n^{1-d}\rfloor$ , by condition (A3), we obtain

$$ \begin{equation*} |r^{(n)}_{i,j}(k)|\leq C'(k-1)^{\delta^{(i,l)}+\delta^{(\,j,l)}-1} \leq C'(k-1)^{\beta-2}, \end{equation*} $$

where C′ is a positive constant. Moreover, for $\lfloor n^{1-d} \rfloor\leq k\leq n$ , using (6.4), we have

\begin{align*} |r^{(n)}_{i,j}(k+1)|&\leq \sum_{l=1}^{p}\bigg|\frac{1}{2}k^{\delta^{(i, l)} +\delta^{(\,j,l)}-1}\frac{L^{(i,j,l)}_{2}({(k+\theta^{n}_{k})}/{n})} {\tilde{L}_{0}^{(i,j,l)}({1}/{n})}\bigg| \\ &\leq \sum_{l=1}^{p}\bigg|\frac{1}{2}k^{\delta^{(i,l)}+\delta^{(\,j,l)}-1 +\epsilon}n^{(d-1)\epsilon}\frac{L^{(i,j,l)}_{2} ({(k+\theta^{n}_{k})}/{n})}{\tilde{L}_{0}^{(i,j,l)} ({1}/{n})}\bigg| \\ &\leq C'' k^{\delta^{(i,l)}+\delta^{(\,j,l)}-1+\epsilon} \\ &\leq C''k^{\beta+\epsilon-2}, \end{align*}

where C″ is a positive constant. The last statement is an immediate consequence.

Remark 6.3

Our proof uses the same arguments as in the proof of Lemma 1 of [Reference Barndorff-Nielsen, Corcuera and Podolskij6]. We have an additional factor $\smash{({1}/{n})^{1-\delta^{(i,l)}-\delta^{(\,j,l)}}}$ in (6.5), which appears to have been forgotten on one occasion in the proof of Lemma 1 of [Reference Barndorff-Nielsen, Corcuera and Podolskij6].

Now, we need to check that in our example conditions (A1)–(A3) are satisfied and that $\smash{r^{(n)}_{i,j}(0)}$ and $\smash{r^{(n)}_{i,j}(1)}$ are uniformly bounded. First, it is easy to see that $\smash{|r^{(n)}_{i,j}(0)|}\leq 1$ since it is the correlation function of $\smash{\Delta^{n}_{1}G^{(i)}}$ and $\smash{\Delta^{n}_{1}G^{(\,j)}}$ . Furthermore, from (6.3), $\smash{r^{(n)}_{i,j}(k)}$ is a converging sequence and, hence, bounded, for any $k\in\mathbb{N}$ . Thus, $\smash{|r^{(n)}_{i,j}(0)|,|r^{(n)}_{i,j}(1)|}<C$ , where C is a positive constant independent of n. Condition (A1) is the same as condition (A1) for the univariate case in [Reference Barndorff-Nielsen, Corcuera and Podolskij7] (see [Reference Barndorff-Nielsen, Corcuera and Podolskij5] for more details); thus, it is satisfied for the gamma kernel example. For (A2), we have

\begin{align*} &\int_{0}^{\infty}g^{(i,l)} (x + t)g^{(\,j,l)} (x){\rm d} x \\ &\qquad=K_{1}^{(i,j),(l)}{\rm e}^{-\lambda^{(i,l)}t}\sum_{r=0}^{\infty}\frac{(1 +\delta^{(\,j,l)})_{r}}{(\delta^{(i,l)}+\delta^{(\,j,l)}+2)_{r}}\frac{1}{r!} (\lambda^{(i,l)}+\lambda^{(\,j,l)} )^{r}t^{r+ \delta^{(i,l)}+\delta^{(\,j,l)}+1} \\ &\qquad +K_{2}^{(i,j),(l)}{\rm e}^{-\lambda^{(i,l)}t}\sum_{r=0}^{\infty} \frac{(\delta^{(i,l)})_{r}}{(\delta^{(i,l)}+\delta^{(\,j,l)})_{r}} \frac{1}{r!}(\lambda^{(i,l)}+\lambda^{(\,j,l)})^{r}t^{r} \end{align*}

for any $l=1,\ldots, p$ , and taking the second derivative we obtain

\begin{align*} (\bar{R}^{(i,j),(l)})''(t) &= t^{\delta^{(i,l)}+\delta^{(\,j,l)}-1} [-2(\delta^{(i,l)}+\delta^{(\,j,l)}+1)(\delta^{(i,l)}+\delta^{(\,j, l)})K_{1}^{(i,j),(l)} \\ &+O(\min(t^{1-\delta^{(i,l)}-\delta^{(\,j,l)}}, t))]. \end{align*}

Hence, (A2) is satisfied. Finally, from [Reference Barndorff-Nielsen, Corcuera and Podolskij7] (see also the supplementary material of [Reference Granelli and Veraart17]), we have $\smash{\lim_{t\rightarrow0^{+}}\tilde{L}_{0}^{(i,i,l)} =2^{-1-2\delta^{(i,l)}}{\Gamma(\tfrac{1}{2}-\delta^{(i,l)})} /{\Gamma(\tfrac{3}{2}+\delta^{(i,l)})}},$ and so

$$ \begin{equation*} \lim_{t\rightarrow0^{+}}\tilde{L}_{0}^{(i,j,l)}=2^{-1-\delta^{(i,l)} -\delta^{(\,j,l)}}\sqrt{\frac{\Gamma({1}/{2}-\delta^{(i,l)})\Gamma({1}/{2} -\delta^{(\,j,l)})}{\Gamma({3}/{2}+\delta^{(i,l)})\Gamma({3}/{2}+ \delta^{(\,j,l)})}} \,{=\!:}\,K_{0}^{(i,j),(l)}. \end{equation*} $$

Thus, we have

\begin{align*} &\limsup_{t\rightarrow0^{+}}\sup_{y\in(t,t^{d})}\frac{L^{(i,j,l)}_{2}(y)} {\tilde{L}_{0}^{(i,j,l)}(t)} \\ &\qquad\leq \limsup_{t\rightarrow0^{+}}\frac{-2(\delta^{(i,l)} +\delta^{(\,j,l)}+1)(\delta^{(i,l)}+\delta^{(\,j,l)})K_{1}^{(i,j),(l)} +C(\min(t^{d(1-\delta^{(i,l)}-\delta^{(\,j,l)})},t^{d}))}{\tilde{L}_{0}^{(i, j,l)}(t)} \\ &\qquad=\frac{-2(\delta^{(i,l)}+\delta^{(\,j,l)}+1)(\delta^{(i,l)} +\delta^{(\,j,l)})K_{1}^{(i,j),(l)}}{K_{0}^{(i,j),(l)}} \\ &\qquad<\infty \end{align*}

for some $C>0$ and every $l=1,\ldots, p$ . Therefore, (A3) and, so, Assumption 4.2 are satisfied.

Since our $\pi_{n}^{(m,l)}$ has the same structure as $\pi_{n}$ in [Reference Barndorff-Nielsen, Corcuera and Podolskij7], the same arguments used there hold here, and we can conclude that if $\smash{\delta^{(m,l)}\in(-\tfrac{1}{2},\tfrac{1}{2})}$ for every $m,l=1,\ldots,p$ then Assumption 4.1 is satisfied.

Combining the ranges obtained, we conclude that, if $\smash{\delta^{(i,j)}\in(-\tfrac{1}{2},\tfrac{1}{2})}$ with

$$ \begin{equation*} \max_{l=1,\ldots, p}\delta^{(i,l)}+\delta^{(\,j,l)}<\tfrac12 \end{equation*} $$

for every $i,j=1,\ldots, p$ , then all the results presented in this work apply to our example.

7. Conclusion

In this paper we introduced the multivariate BSS process and studied the joint asymptotic behaviour of its realised covariation, presenting limit theorems, feasible results, and an explicit example. We also provided central limit theorems and weak laws of large numbers for general multivariate Gaussian processes with stationary increments. There are at least two directions worth exploring in more detail in the future.

First, is it possible to find feasible estimates for the asymptotic variance of the multivariate BSS processes? That is, can ‘second-order’ feasible results be obtained in addition to the ‘first-order’ feasible results we already presented?

Second, we considered the asymptotic theory for BSS processes also outside the semimartingale setting. In doing so, we concentrated on a particular scenario (as described by the assumptions on the deterministic function g in Assumption 4.1). However, one can imagine other scenarios which lead to BSS processes (or other volatility modulated Gaussian processes) beyond the semimartingale framework. Can similar asymptotic results for the (scaled) realised covariation be obtained in such settings?

Appendix A The matrices D and V for the BSS process

In this appendix we specify the explicit structure and values of the matrices ${\boldsymbol D}$ and $\textbf{V}_{s}$ . We present them in the appendix because doing so requires combinatorial arguments which are easy but tedious, and these arguments are similar across the different cases presented in Section 4.

A.1 Case I

Let ${\boldsymbol D}\in\mathcal{M}^{p^{6}\times p^{6}}(\mathbb{R}) $ be defined as

\begin{align*} ({\boldsymbol D})_{z,y}&\,:\!=\lim_{n\rightarrow\infty}\frac{1}{n} \sum_{h=1}^{n-1}(n-h) (r_{k_{z},r_{z},m_{z};k_{y},r_{y},m_{y}}^{(n)}(h) r_{l_{z},q_{z},w_{z}; l_{y},q_{y},w_{y}}^{(n)}(h) \\ & +r_{l_{z},q_{z},w_{z};k_{y},r_{y},m_{y}}^{(n)}(h) r_{k_{z},r_{z},m_{z};l_{y},q_{y},w_{y}}^{(n)}(h) \\ & +r_{k_{y},r_{y},m_{y};k_{z},r_{z},m_{z}}^{(n)}(h)r_{l_{y},q_{y},w_{y};l_{z}, q_{z},w_{z}}^{(n)}(h) \\ & +r_{k_{y},r_{y},m_{y};l_{z},q_{z},w_{z}}^{(n)}(h)r_{l_{y}, q_{y},w_{y};k_{z},r_{z},m_{z}}^{(n)}(h)) \\ &\phantom{:} +(r_{k_{z},r_{z},m_{z};k_{y},r_{y},m_{y}}^{(n)}(0)r_{l_{z},q_{z},w_{z}; l_{y},q_{y},w_{y}}^{(n)}(0)\!+r_{l_{z},q_{z},w_{z};k_{y},r_{y},m_{y}}^{(n)} (0) r_{k_{z},r_{z},m_{z};l_{y},q_{y},w_{y}}^{(n)}(0)), \end{align*}

where, for each of the $p^{6}\times p^{6}$ combinations of (z, y), there is a unique combination of $ ((r_{z},m_{z},q_{z},w_{z},k_{z},l_{z}) $ , $ (r_{y},m_{y},q_{y},w_{y},k_{y},l_{y})) $ in which each of these elements takes values in $\{1,\ldots, p\}$ . Let $\nu(r,m,q,w) $ be any permutation of the set of the different combinations of $r,m,q,w\in\{1,\ldots, p\},$ and let $\nu_{s}(r,m,q,w) $ determine the sth element of $\nu(r,m,q,w) $ . Note that $\nu(r,m,q,w) $ lists its elements in a particular order; the specific order is not relevant for us, as long as it is used consistently. Recall that by the notation $ (\cdot)_{\nu(r,m,q,w)}$ we mean that the sth component of the vector is $\nu_{s}(r,m,q,w) $ . Then the association is given by

\begin{align*} (z,y)\leftrightarrow\bigg(\bigg(&\nu_{z-\lfloor{(z-1)}/{p^{4}} \rfloor p^{4}}(r,m,q,w),\bigg\lfloor\frac{\lfloor{(z-1)}/{p^{4}}\rfloor}{p} \bigg\rfloor+1,\bigg\lfloor\frac{z-1}{p^{4}}\bigg\rfloor+1 \\ & -p\bigg\lfloor \frac{\lfloor {(z-1)}/{p^{4}}\rfloor}{p}\bigg\rfloor\bigg), \\ & \bigg(\nu_{y-\lfloor{(y-1)}/{p^{4}}\rfloor p^{4}}(r,m,q,w), \bigg\lfloor \frac{\lfloor{(y-1)}/{p^{4}}\rfloor}{p}\bigg\rfloor+1, \bigg\lfloor\frac{y-1}{p^{4}} \bigg\rfloor+1 \\ & -p\bigg\lfloor\frac{\lfloor{(y-1)}/{p^{4}}\rfloor}{p} \bigg\rfloor\bigg)\bigg). \end{align*}

For a proof of this statement for the case $p=2,$ see the proof of Theorem 4.1; the extension to the case $p>2$ is trivial. Moreover, define for $s\in[0,T]$ the $p^{2}\times p^{6}$ matrix

(A.1) $$ \begin{equation} V_{s}\,:\!= \begin{pmatrix} \sigma_{s} & \textbf{0} &\cdots & \textbf{0} \\ \textbf{0} & \ddots & & \vdots \\ \vdots & & \ddots & \textbf{0}\\ \textbf{0} & \cdots& \textbf{0} & \sigma_{s} \end{pmatrix}, \label{eqn20} \end{equation} $$

where $\sigma_{s}\,:\!=\smash{(\sigma^{(r,m)}_{s}\sigma^{(q,w)}_{s})_{\nu(r,m,q, w)}^{\top}}$ , so it is a row vector of $p^{4}$ elements (here the consistency of the order of the elements of $\nu(r,m,q,w) $ is fundamental), and $\textbf{0}$ is a row vector of $p^{4}$ elements containing only 0s. Hence, $\sigma_{s}$ and $\mathbf{0}$ contain the same number of elements.
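The block-diagonal layout of (A.1) is easy to mis-read from the display, so the following sketch (ours, with arbitrary placeholder values standing in for the products $\sigma^{(r,m)}_{s}\sigma^{(q,w)}_{s}$) builds the $p^{2}\times p^{6}$ matrix for $p=2$ and checks where the nonzero blocks sit.

```python
p = 2
# Placeholder row vector sigma_s of length p^4 = 16 (arbitrary stand-in values
# for the products sigma^{(r,m)} sigma^{(q,w)} ordered by the permutation nu).
sigma = [float(q + 1) for q in range(p**4)]

def build_V(sigma, p):
    # p^2 x p^6 block-diagonal matrix: row k carries sigma in columns
    # [k*p^4, (k+1)*p^4) and zeros elsewhere, as in (A.1).
    rows = []
    for k in range(p**2):
        row = [0.0] * (p**6)
        row[k * p**4:(k + 1) * p**4] = sigma
        rows.append(row)
    return rows

V = build_V(sigma, p)
assert len(V) == p**2 and all(len(row) == p**6 for row in V)
assert V[0][:p**4] == sigma and V[3][3 * p**4:4 * p**4] == sigma
assert sum(v != 0.0 for row in V for v in row) == p**2 * p**4  # one block per row
```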

In the case of the vech notation, the association for ${\boldsymbol D}$ is the following. First, let us define, for $i\in\mathbb{N}$ ,

$$ \begin{equation*} \chi(i)\,:\!=\bigg\lfloor\sqrt{2i}+\frac{1}{2} \bigg\rfloor, \qquad \xi(i)\,:\!=i-\frac{1}{2}\bigg\lfloor\frac{\sqrt{8i-7}-1}{2} \bigg\rfloor\bigg(\bigg\lfloor\frac{\sqrt{8i-7}-1}{2} \bigg\rfloor+1\bigg). \end{equation*} $$

Then we have

(A.2) \begin{align} (z,y)\leftrightarrow\bigg(&\bigg(\nu_{z-\lfloor{(z-1)}/{p^{4}} \rfloor p^{4}}(r,m,q,w),\chi\bigg(\bigg\lfloor\frac{z-1}{p^{4}} \bigg\rfloor\bigg),\xi\bigg(\bigg\lfloor\frac{z-1}{p^{4}} \bigg\rfloor\bigg)\bigg), \nonumber \\[-2pt] &\bigg(\nu_{y-\lfloor{(y-1)}/{p^{4}}\rfloor p^{4}}(r,m,q,w), \chi\bigg(\bigg\lfloor\frac{y-1}{p^{4}}\bigg\rfloor\bigg), \xi\bigg(\bigg\lfloor\frac{y-1}{p^{4}}\bigg\rfloor\bigg)\bigg)\bigg). \label{eqn21}\end{align}

The association (A.2) comes from the fact that we have $k,l=1,\ldots, p$ with $l\leq k$ . The couple (k, l) then runs through the sequence $ (1,1),(2,1),(2,2),(3,1),(3,2),(3,3),(4,1),(4,2),(4,3),(4,4), (5,1),\ldots,$ which is a sequence of the form $ (\chi(i),\xi(i)),$ where i indexes the terms of the sequence. Concerning the matrix $V_{s}$ , we have the same structure as (A.1). However, the dimension is now ${p}(\,p+1)/{2}\times {p^{5}}(\,p+1)/{2}$ . The dimensions and values of the $\sigma_{s}$ and of the $\mathbf{0}$ are the same as before and, hence, we are simply using fewer of them (e.g. the number of $\sigma_{s}$ is ${p}(\,p+1)/{2},$ while in (A.1) we had $p^{2}$ of them).
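The index functions $\chi$ and $\xi$ can be checked directly; the snippet below (ours) reproduces the sequence $(1,1),(2,1),(2,2),(3,1),\ldots$ quoted above.

```python
import math

def chi(i):
    # chi(i) = floor(sqrt(2 i) + 1/2)
    return math.floor(math.sqrt(2 * i) + 0.5)

def xi(i):
    # xi(i) = i - m(m+1)/2 with m = floor((sqrt(8 i - 7) - 1) / 2)
    m = math.floor((math.sqrt(8 * i - 7) - 1) / 2)
    return i - m * (m + 1) // 2

pairs = [(chi(i), xi(i)) for i in range(1, 11)]
assert pairs == [(1, 1), (2, 1), (2, 2), (3, 1), (3, 2), (3, 3),
                 (4, 1), (4, 2), (4, 3), (4, 4)]
```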

A.2 Case II: first scenario

We have the same as in case I. The only difference is the use of $\smash{\bar{r}^{(n)}}$ instead of $\smash{r^{(n)}}$ .

A.3 Case II: second scenario

For this case, the construction is similar to the previous ones, but simpler, since there are no indices r and q. Let ${\boldsymbol D}\in\mathcal{M}^{p^{4}\times p^{4}}(\mathbb{R}) $ be defined as

\begin{align*} ({\boldsymbol D})_{z,y}&\,:\!=\lim_{n\rightarrow\infty}\frac{1}{n}\sum_{h=1} ^{n-1}(n-h)(\tilde{r}_{k_{z},m_{z};k_{y},m_{y}}^{(n)}(h)\tilde{r}_{l_{z},w_{z};l_{y},w_{y}}^{(n)}(h)+\tilde{r}_{l_{z},w_{z};k_{y},m_{y}}^{(n)}(h)\tilde{r}_{k_{z},m_{z};l_{y},w_{y}}^{(n)}(h) \\ & + \tilde{r}_{k_{y},m_{y};k_{z},m_{z}}^{(n)}(h) \tilde{r}_{l_{y},w_{y};l_{z},w_{z}}^{(n)}(h)+\tilde{r}_{k_{y}, m_{y};l_{z},w_{z}}^{(n)}(h)\tilde{r}_{l_{y},w_{y};k_{z},m_{z}}^{(n)}(h)) \\ &\phantom{:} +(\tilde{r}_{k_{z},m_{z};k_{y},m_{y}}^{(n)}(0) \tilde{r}_{l_{z},w_{z};l_{y},w_{y}}^{(n)}(0)+\tilde{r}_{l_{z},w_{z}; k_{y},m_{y}}^{(n)}(0)\tilde{r}_{k_{z},m_{z};l_{y},w_{y}}^{(n)}(0)), \end{align*}

where, for each of the $p^{4}\times p^{4}$ combinations of (z, y), there is a unique combination of $ ((m_{z},w_{z},k_{z},l_{z}), (m_{y},w_{y},k_{y},l_{y})) $ . The association is given by

\begin{align*} (z,y)\leftrightarrow\bigg(\bigg(&\mu_{z-\lfloor{(z-1)}/{p^{2}} \rfloor p^{2}}(m,w),\bigg\lfloor\frac{\lfloor{(z-1)}/{p^{2}}\rfloor}{p} \bigg\rfloor+1,\bigg\lfloor\frac{z-1}{p^{2}}\bigg\rfloor+1 \\ &-p\bigg\lfloor \frac{\lfloor{(z-1)}/{p^{2}}\rfloor}{p}\bigg\rfloor\bigg), \\ & \bigg(\mu_{y-\lfloor{(y-1)}/{p^{2}}\rfloor p^{2}}(m,w), \bigg\lfloor\frac{\lfloor{(y-1)}/{p^{2}}\rfloor}{p}\bigg\rfloor+1, \bigg\lfloor\frac{y-1}{p^{2}}\bigg\rfloor+1 \\ & -p\bigg\lfloor\frac{\lfloor{(y-1)}/ {p^{2}}\rfloor}{p}\bigg\rfloor\bigg)\bigg), \end{align*}

where $\mu(m,w) $ is any permutation of the set of all the possible combinations of $m,w\in\{1,\ldots, p\}$ (i.e. any permutation of the set $ ((1,1),(1,2),\ldots ,(1,p),(2,1),(2,2),\ldots, (2,p), \ldots, (\,p,p)) $ ) and $\mu_{s}(m,w) $ is the sth element of $\mu(m,w) $ . Moreover, define for $s\in[0,T]$ the $p^{2}\times p^{4}$ matrix

(A.3) $$ \begin{equation} V_{s}\,:\!= \begin{pmatrix} \sigma_{(1,1),s} & \textbf{0} &\cdots & \textbf{0} \\ \textbf{0} & \ddots & & \vdots \\ \vdots & & \ddots & \textbf{0}\\ \textbf{0} & \cdots& \textbf{0} & \sigma_{(\,p,p),s} \end{pmatrix}, \label{eqn22} \end{equation} $$

where

$$ \begin{equation*} {\sigma_{(k,l),s}\,:\!=(\sigma^{(k,m)}_{s}\sigma^{(l,w)}_{s})_{\mu(m, w)}^{\top}} \end{equation*} $$

and $\textbf{0}$ is a row vector of $p^{2}$ elements containing only 0s.
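The index association stated before (A.3) can be sanity-checked directly. The snippet below (ours) decodes every $z\in\{1,\ldots,p^{4}\}$ for $p=2$, taking $\mu$ to be the row-major enumeration of the pairs (m, w), and verifies that the resulting triples are pairwise distinct, i.e. that the association is one-to-one.

```python
p = 2
# mu: a concrete choice of permutation, here the row-major list of pairs (m, w).
mu = [(m, w) for m in range(1, p + 1) for w in range(1, p + 1)]

def decode(z):
    # Association z <-> (mu_s(m, w), k, l) from the text, for one index z.
    q = (z - 1) // p**2
    s = z - q * p**2                      # position inside the permutation mu
    k = q // p + 1
    l = q + 1 - p * (q // p)
    return (mu[s - 1], k, l)

triples = [decode(z) for z in range(1, p**4 + 1)]
assert len(set(triples)) == p**4          # the association is one-to-one
assert set(t[1:] for t in triples) == {(k, l) for k in (1, 2) for l in (1, 2)}
```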

For the vech notation, the structures of the matrices ${\boldsymbol D}$ and V remain the same, but their dimensions reduce. Similarly to the previous cases, the association is given by

\begin{align*} (z,y)\leftrightarrow\bigg(&\bigg(\mu_{z-\lfloor{(z-1)}/{p^{2}} \rfloor p^{2}}(m,w),\chi\bigg(\bigg\lfloor\frac{z-1}{p^{4}} \bigg\rfloor\bigg),\xi\bigg(\bigg\lfloor\frac{z-1}{p^{4}}\bigg\rfloor \bigg)\bigg), \\ &\bigg(\mu_{y-\lfloor{(y-1)}/{p^{2}}\rfloor p^{2}}(m,w), \chi\bigg(\bigg\lfloor\frac{y-1}{p^{4}}\bigg\rfloor\bigg), \xi\bigg(\bigg\lfloor\frac{y-1}{p^{4}}\bigg\rfloor\bigg)\bigg)\bigg). \end{align*}

Concerning the matrix $V_{s}$ , we have the same structure as (A.3). However, the dimension is now ${p}(\,p+1)/{2}\times {p^{3}}(\,p+1)/{2}$ . The values of the $\sigma_{s}$ and of the $\mathbf{0}$ are the same as before and, hence, we are simply using fewer of them (e.g. the number of $\sigma_{s}$ is ${p}(\,p+1)/{2}$ , while in (A.3) we had $p^{2}$ of them).

Acknowledgement

The authors would like to thank Andrea Granelli, Fabio Bernasconi, Mikko Pakkanen, and Mark Podolskij for useful discussions and comments. Furthermore, we would like to thank the anonymous referee for constructive comments, which improved the presentation and the content of the paper. RP acknowledges the financial support of the CDT in MPE (EPSRC award reference 1643696) and the Grantham Institute.

References

Aldous, D. J. and Eagleson, G. K. (1978). On mixing and stability of limit theorems. Ann. Prob. 6, 325–331.
Barndorff-Nielsen, O. E. and Schmiegel, J. (2009). Brownian semistationary processes and volatility/intermittency. In Advanced Financial Modelling, eds Albrecher, H., Runggaldier, W. and Schachermayer, W., Walter de Gruyter, Berlin, pp. 1–25.
Barndorff-Nielsen, O. E. and Shephard, N. (2004). Econometric analysis of realized covariation: high frequency based covariance, regression, and correlation in financial economics. Econometrica 72, 885–925.
Barndorff-Nielsen, O. E., Benth, F. E. and Veraart, A. E. D. (2013). Modelling energy spot prices by volatility modulated Lévy-driven Volterra processes. Bernoulli 19, 803–845.
Barndorff-Nielsen, O. E., Corcuera, J. M. and Podolskij, M. (2009). Multipower variation for Brownian semistationary processes (full version). CREATES research paper 2009-21, Aarhus University.
Barndorff-Nielsen, O. E., Corcuera, J. M. and Podolskij, M. (2009). Power variation for Gaussian processes with stationary increments. Stoch. Process. Appl. 119, 1845–1865.
Barndorff-Nielsen, O. E., Corcuera, J. M. and Podolskij, M. (2011). Multipower variation for Brownian semistationary processes. Bernoulli 17, 1159–1194.
Barndorff-Nielsen, O. E., Pakkanen, M. S. and Schmiegel, J. (2014). Assessing relative volatility/intermittency/energy dissipation. Electron. J. Statist. 8, 1996–2021.
Bateman, H. (1954). Tables of Integral Transforms. McGraw-Hill, New York.
Bennedsen, M. (2017). A rough multi-factor model of electricity spot prices. Energy Econom. 63, 301–313.
Bennedsen, M., Lunde, A. and Pakkanen, M. (2017). Decoupling the short- and long-term behavior of stochastic volatility. Preprint. Available at https://arxiv.org/abs/1610.00332v2.
Bennedsen, M., Lunde, A. and Pakkanen, M. (2017). Hybrid scheme for Brownian semistationary processes. Finance Stoch. 21, 931–965.
Billingsley, P. (1999). Convergence of Probability Measures, 2nd edn. John Wiley, New York.
Corcuera, J. M. (2012). New central limit theorems for functionals of Gaussian processes and their applications. Methodology Comput. Appl. Prob. 14, 477–500.
Corcuera, J. M., Hedevang, E., Podolskij, M. S. and Pakkanen, M. (2013). Asymptotic theory for Brownian semi-stationary processes with application to turbulence. Stoch. Process. Appl. 123, 2552–2574.
Granelli, A. (2017). Limit theorems and stochastic models for dependence and contagion in financial markets. Doctoral Thesis, Imperial College London.
Granelli, A. and Veraart, A. E. D. (2019). A central limit theorem for the realised covariation of a bivariate Brownian semistationary process. To appear in Bernoulli.
Jacod, J. (1997). On continuous conditional Gaussian martingales and stable convergence in law. In Séminaire de Probabilités XXXI (Lecture Notes Math. 1655), Springer, Berlin, pp. 232–246.
Jacod, J. (2008). Asymptotic properties of realized power variations and related functionals of semimartingales. Stoch. Process. Appl. 118, 517–559.
Nourdin, I. and Peccati, G. (2012). Normal Approximations with Malliavin Calculus. From Stein’s Method to Universality (Camb. Tracts Math. 92). Cambridge University Press.
Podolskij, M. and Vetter, M. (2010). Understanding limit theorems for semimartingales: a short survey. Statist. Neerlandica 64, 329–351.
Reed, M. and Simon, B. (1975). Methods of Modern Mathematical Physics, Vol. II. Academic Press, New York.
Van der Vaart, A. W. (1998). Asymptotic Statistics. Cambridge University Press.
Van der Vaart, A. W. and Wellner, J. A. (1996). Weak Convergence and Empirical Processes: With Applications to Statistics. Springer, New York.