
Central limit theorems for nearly long range dependent subordinated linear processes

Published online by Cambridge University Press: 16 July 2020

Martin Wendler*
Affiliation:
Otto-von-Guericke-Universität Magdeburg
Wei Biao Wu*
Affiliation:
University of Chicago
*Postal address: Otto-von-Guericke-Universität Magdeburg, Universitätsplatz 2, 39106 Magdeburg, Germany. Email: martin.wendler@ovgu.de

**Postal address: University of Chicago, 5747 South Ellis Avenue, Chicago, IL 60637, USA

Abstract

The limit behavior of partial sums for short range dependent stationary sequences (with summable autocovariances) and for long range dependent sequences (with autocovariances summing up to infinity) differs in various aspects. We prove central limit theorems for partial sums of subordinated linear processes of arbitrary power rank which are at the border of short and long range dependence.

Type
Research Papers
Copyright
© Applied Probability Trust 2020

1. Introduction

While there are different definitions and models for long range dependence, it is quite common to call a time series long range dependent (l.r.d.) if the autocovariances are not summable. In this case, a stronger norming than in the independent case is needed for the partial sums to converge. Furthermore, the limit distribution might not be Gaussian.

The central and non-central limit theorems under such a slow decay of correlations were first established [Reference Dobrushin and Major10,Reference Taqqu17,Reference Taqqu18] for subordinated Gaussian random variables. A marginal transformation of the time series might lead to a different shape of the limit distribution. A central role in the proof is played by the approximation of the partial sums by sums of Hermite polynomials and the Hermite rank.

Another popular model for l.r.d. time series is given by linear processes with slowly decaying coefficients. The convergence of the partial sum process of such linear processes was proved by Davydov [Reference Davydov9]. Marginally transformed l.r.d. linear processes have also been studied [Reference Avram and Taqqu1,Reference Ho and Hsing11,Reference Surgailis16]. As above, the limit distribution might be Gaussian or not, depending on the function used to transform the linear process. Techniques in this case also include approximation by sums of polynomials, where the Appell rank or the power rank play the role of the Hermite rank.

An interesting case is the situation where the autocovariance of a stationary process ${(X_n)_{n\in\mathbb{N}}}$ behaves like $\text{cov}(X_i,X_{i+k})= L(|k|)|k|^{-1}$ (for some slowly varying function L). In this borderline situation the covariance might or might not be summable, depending on the slowly varying function. For subordinated Gaussian nearly l.r.d. processes, Breuer and Major [Reference Breuer and Major5] proved the central limit theorem; similar results for continuous time Gaussian processes were given by Buchmann and Chan [Reference Buchmann and Chan6]. Sly and Heyde [Reference Sly and Heyde15] investigated subordinated Gaussian processes without finite second moments and also gave results for the boundary case. For random walks in random scenery introduced by Kesten and Spitzer [Reference Kesten and Spitzer13], Castell, Guillotin-Plantard, and Pène [Reference Castell, Guillotin-Plantard and Pène8] studied the borderline case.

There exist some results on subordinated linear processes in the borderline case between short and long range dependence. In the case of power rank 1, the limit distribution of the empirical process, both for summable and for non-summable covariances, was established by Wu [Reference Wu19]. For linear processes without a marginal transformation, the asymptotic distribution of the covariance estimator was investigated by Wu, Huang, and Zheng [Reference Wu, Huang and Zheng20]. This is closely related to partial sums of subordinated linear processes with power rank 2.

The aim of this paper is to prove the central limit theorem for subordinated linear processes of general power rank in the borderline case of short and long range dependence. We will treat both cases of summable and not summable covariances. In the next section we introduce some notation and our assumptions. The main results can be found in Section 3. Auxiliary results follow in Section 4, and Section 5 contains the proofs of the main results.

2. Notation and assumptions

We are interested in studying subordinated linear processes, that is, stationary processes $(X_n)_{n\in\mathbb{Z}}$ of the form

\begin{equation*}X_n=G\Bigg(\sum_{i=0}^{\infty}a_i\varepsilon_{n-i}\Bigg),\end{equation*}

where the sequence $(\varepsilon_i)_{i\in\mathbb{Z}}$ is independent and identically distributed (i.i.d.). If the innovations have a finite variance and the coefficients $(a_i)_{i\in\mathbb{N}_0}$ are square summable, such a process exists. The asymptotic behavior is influenced by the decay of the coefficients $(a_i)_{i\in\mathbb{N}_0}$ and the power rank. The power rank p of G is the smallest k which gives a non-zero value of $G_\infty^{(k)}(0)$ , where

\begin{equation*}G_{\infty}^{(k)}(x)=\frac{\partial^{k}}{\partial x^k}\mathbb{E}\Bigg[G\Bigg(x+\sum_{i=0}^{\infty}a_i\varepsilon_{-i}\Bigg)\Bigg].\end{equation*}

That means $G_\infty^{(p)}(0)\neq 0$ and $G_\infty^{(k)}(0)=0$ for $k< p$ . For Gaussian processes, the power rank coincides with the Hermite rank [Reference Bai and Taqqu2]. Note that G might not be continuous, because we only take derivatives of the expectation. See Assumption 2 for more details.
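The following Python sketch, which is not part of the original paper, illustrates the definition: it estimates $G_\infty^{(k)}(0)$ by Monte Carlo simulation of a truncated linear process together with central finite differences. The truncation level, sample size, and step width are illustrative choices, and $G(x)=x^2$ serves as an example with power rank 2 under symmetric innovations.

```python
import numpy as np

rng = np.random.default_rng(0)

p = 2            # expected power rank of the example G below
L_TRUNC = 500    # truncation level for the infinite sum (illustrative)
N_MC = 20_000    # Monte Carlo sample size (illustrative)
H = 0.1          # finite-difference step width (illustrative)

# borderline coefficients a_i = i^{-(p+1)/(2p)} L(i) with L == 1 and a_0 = 1
i = np.arange(1, L_TRUNC + 1)
a = np.concatenate(([1.0], i ** (-(p + 1) / (2 * p))))

def G(x):
    # E[G(x + Y)] = x^2 + E[Y^2] for centered Y, so the first derivative
    # at 0 vanishes and the second equals 2: the power rank is 2
    return x ** 2

# samples of the truncated linear process Y = sum_i a_i * eps_{-i}
Y = rng.standard_normal((N_MC, L_TRUNC + 1)) @ a

def G_inf(x):
    """Monte Carlo estimate of G_infty(x) = E[G(x + Y)]."""
    return G(x + Y).mean()

d1 = (G_inf(H) - G_inf(-H)) / (2 * H)                   # ~ G_infty^(1)(0)
d2 = (G_inf(H) - 2 * G_inf(0.0) + G_inf(-H)) / H ** 2   # ~ G_infty^(2)(0)
print(f"first derivative at 0:  {d1:.3f} (should be close to 0)")
print(f"second derivative at 0: {d2:.3f} (should be close to 2)")
```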

Our aim is to prove the convergence of the partial sum $T_n\,:\!=\sum_{j=1}^n X_j$ to a normal limit (after normalization). To this end, we define

\begin{equation*}Y_n^{(k)}\,:\!=\sum_{0\leq i_1<i_2<\cdots<i_k<\infty}a_{i_1}a_{i_2}\cdots a_{i_k}\varepsilon_{n-i_1}\varepsilon_{n-i_2}\cdots \varepsilon_{n-i_k},\end{equation*}

and write $Y_n=Y_n^{(p)}$ for short. In the l.r.d. case, we will approximate the partial sum $T_n$ by the reduced partial sum

\begin{equation*}S_n\,:\!=\sum_{j=1}^nY_j.\end{equation*}
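For $p=2$ the off-diagonal double sum can be evaluated without nested loops, since $Y_n^{(2)}=\frac{1}{2}\big[\big(\sum_{i}a_i\varepsilon_{n-i}\big)^2-\sum_{i}a_i^2\varepsilon_{n-i}^2\big]$ by the elementary symmetric polynomial identity. A minimal sketch with a truncated sum follows; the truncation level is an illustrative choice, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

L_TRUNC = 1000  # truncation level for the infinite sum (illustrative)
i = np.arange(1, L_TRUNC)
a = np.concatenate(([1.0], i ** (-3 / 4)))  # p = 2: a_i = i^{-(p+1)/(2p)} = i^{-3/4}

def Y2(eps_window):
    """Truncated Y_n^{(2)} from the window (eps_n, eps_{n-1}, ...),
    via 2 * sum_{i1 < i2} b_{i1} b_{i2} = (sum_i b_i)^2 - sum_i b_i^2."""
    b = a * eps_window
    return 0.5 * (b.sum() ** 2 - (b ** 2).sum())

# brute-force check of the identity on a short window
eps = rng.standard_normal(L_TRUNC)
b = a[:20] * eps[:20]
brute = sum(b[i1] * b[i2] for i1 in range(20) for i2 in range(i1 + 1, 20))
assert np.isclose(brute, 0.5 * (b.sum() ** 2 - (b ** 2).sum()))
print(Y2(eps))
```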

We need the following conditions on the sequence $(\varepsilon_n)_{n\in\mathbb{Z}}$ and the coefficients of the linear process $X_n=G\big(\sum_{i=0}^{\infty}a_i\varepsilon_{n-i}\big)$:

Assumption 1. $(\varepsilon_n)_{n\in\mathbb{Z}}$ fulfills

  1. $(\varepsilon_n)_{n\in\mathbb{Z}}$ i.i.d.,

  2. $\mathbb{E}[\varepsilon_n]=0$ ,

  3. $\sigma^2=\mathbb{E}[\varepsilon_n^2]>0$,

  4. $\mathbb{E}[\varepsilon^8_n]<\infty$ .

$(a_n)_{n\in\mathbb{N}_0}$ fulfills

  1. $a_i= i^{-\frac{p+1}{2p}}L(i)$ for $i\geq 1$ ,

  2. L slowly varying, $p\in\mathbb{N}$ .

These are standard conditions on the innovations. The conditions on the coefficients define the borderline case between short range and long range dependence: whether $\sum_{j=1}^\infty\frac{L^{2p}(\,j)}{j}$ converges, which separates the regimes of Theorems 1 and 2 below, is decided entirely by the slowly varying function L, as the following numerical sketch illustrates.
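The sketch below is an illustration and not part of the original paper: it tabulates the partial sums of $\sum_{j}L^{2p}(\,j)/j$ for two admissible slowly varying functions, $L\equiv 1$ (divergent, the l.r.d. boundary case of Theorem 1) and $L(i)=1/\log(i+1)$ (convergent, the s.r.d. case of Theorem 2).

```python
import numpy as np

p = 2
j = np.arange(1, 10**6 + 1)

for name, L in [("L == 1 (divergent, l.r.d. boundary)", np.ones_like(j, dtype=float)),
                ("L(i) = 1/log(i+1) (convergent, s.r.d.)", 1.0 / np.log(j + 1))]:
    partial = np.cumsum(L ** (2 * p) / j)
    # partial sums at j = 10^3, ..., 10^6: steadily growing vs. levelling off
    print(name, [round(partial[10**k - 1], 3) for k in range(3, 7)])
```

We will also need some moment assumptions on $X_n$: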

Assumption 2. Assume that

\begin{equation*}\mathbb{E}[X_n^2]<\infty\end{equation*}

and

\begin{equation*}\mathbb{E}\Bigg[\Bigg(G\Bigg(\sum_{i=0}^\infty a_i\varepsilon_{-i}\Bigg)-G\Bigg(\sum_{i=0}^l a_i\varepsilon_{-i}\Bigg)\Bigg)^2\Bigg]\xrightarrow{l\rightarrow\infty}0.\end{equation*}

Finally, we will use another assumption introduced in [Reference Ho and Hsing11] in order to approximate the linear process by an m-dependent process. Define

\begin{equation*}G_j(x)\,:\!=\mathbb{E}\Bigg[G\Bigg(x+\sum_{i=0}^ja_i\varepsilon_{-i}\Bigg)\Bigg] , \qquad G_j^{(r)}(x)=\frac{\partial^r}{\partial x^r}G_j(x).\end{equation*}

Assumption 3. Assume that for some $\lambda>0$ , $\tau\in\mathbb{N}$ , and for all $x\in\mathbb{R}$ , $r=1,\ldots,p+2$ , the derivative $G_j^{(r)}(x)$ exists and that

\begin{equation*}\sup_{I\subset\mathbb{N}}\mathbb{E}\Bigg[\Bigg(\sup_{|y|\leq \lambda}G^{(r)}_{\tau}\Bigg(x+y+\sum_{i\in I}a_i\varepsilon_{-i}\Bigg)\Bigg)^4\Bigg]<\infty,\end{equation*}

where we take the supremum over all subsets I of $\mathbb{N}$ .

While Assumption 3 is not easy to verify in general, in some simple situations it is clear that this assumption holds. If G has bounded and continuous derivatives up to order $p+2$ , the assumption holds for any distribution of the $\varepsilon_i$ . On the other hand, in the Gaussian case the density functions $f_j$ of $\sum_{i=0}^ja_i\varepsilon_{-i}$ have uniformly bounded derivatives. Note that $G_j(x)=\int G(x+y)f_j(y)\,{\rm d} y=\int G(y)f_j(y-x)\,{\rm d} y$ , so the assumption will hold for any G of bounded variation.

$\mathcal{F}_j\,:\!=\sigma(\varepsilon_j,\varepsilon_{j-1},\ldots)$ denotes the sigma field generated by $\varepsilon_j,\varepsilon_{j-1},\ldots$ , and $\|\cdot\|_2\,:\!=\sqrt{\mathbb{E}(\cdot^2)}$ denotes the $L_2$ norm. In our statements and proofs, C is a generic constant which might have different values even in one chain of inequalities, but does not depend on n.

3. Main results

We will state our results in the short range and long range dependent cases separately. In the l.r.d. case, a stronger norming is needed than the usual $n^{-1/2}$ and the limit behavior of the partial sum $T_n$ is dominated by the behavior of $S_n$ .

Theorem 1. If G has power rank p, $\sum_{j=1}^\infty\frac{L^{2p}(\,j)}{j}=\infty$ , and Assumptions 1, 2, and 3 hold, then

\begin{equation*}\frac{1}{G_{\infty}^{(p)}(0)\sqrt{ n\tilde{L}(n)}\,}\sum_{i=1}^n\big(X_i-\mathbb{E}[X_i]\big)\Rightarrow N(0,1)\end{equation*}

with

\begin{equation*}G_{\infty}^{(p)}(x)=\frac{\partial^{p}}{\partial x^p}\mathbb{E}\Bigg[G\Bigg(x+\sum_{i=0}^{\infty}a_i\varepsilon_{-i}\Bigg)\Bigg]\end{equation*}

for a slowly varying function $\tilde{L}$ .

The slowly varying function is defined as $\tilde{L}(n)\,:\!=C_p\sigma^{2p}\sum_{j=1}^n\frac{L^{2p}(\,j)}{j}$ (where the exact value of $C_p$ can be found in the proof of Lemma 3). The proof is based on martingale approximations; an alternative would be an approach via Gaussian Wiener chaos [Reference Nourdin, Peccati and Reinert14]; see also Remark 1 after Proposition 1.
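As a numerical aside (not from the paper), the normalizing sequence is easy to tabulate. Setting the constants $C_p=\sigma=1$ and taking $L\equiv 1$, $\tilde{L}(n)$ is the harmonic sum and grows like $\log n$; the ratio $\tilde{L}(2n)/\tilde{L}(n)$ illustrates its slow variation.

```python
import numpy as np

def Ltilde(n):
    # C_p = sigma = 1 and L == 1 are illustrative choices; then
    # Ltilde(n) = sum_{j<=n} L^{2p}(j)/j is the harmonic sum ~ log n
    return np.sum(1.0 / np.arange(1, n + 1))

for n in [10**3, 10**4, 10**5]:
    # Ltilde diverges, but slowly: Ltilde(2n)/Ltilde(n) -> 1
    print(n, round(Ltilde(n), 3), round(Ltilde(2 * n) / Ltilde(n), 4))
```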

In the short range dependent case, the norming is $n^{-1/2}$ as in the i.i.d. case.

Theorem 2. If G has power rank p, $\sum_{j=1}^\infty\frac{L^{2p}(\,j)}{j}<\infty$ , and Assumptions 1, 2, and 3 hold, then

\begin{equation*}\frac{1}{\sqrt{ n}\,}\sum_{i=1}^n\big(X_i-\mathbb{E}[X_i]\big)\Rightarrow N\big(0,\sigma^2_\infty \big)\end{equation*}

with

\begin{equation*}\sigma^2_\infty\,:\!=\lim_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}}\sum_{j=1}^nX_j\Bigg]<\infty.\end{equation*}

As Gaussian processes can be represented as linear processes with normally distributed $(\varepsilon_n)_{n\in\mathbb{Z}}$, it is possible to apply our theorems to Gaussian processes, too. As explained before, Assumption 3 holds for Gaussian random variables. We will show in Lemma 1 below that Assumption 2 also holds in the Gaussian case. In this way we recover the theorems of Breuer and Major [Reference Breuer and Major5] (for the special case of time series, not for general random fields).

Lemma 1. Let $(\varepsilon_n)_{n\in\mathbb{Z}}$ be i.i.d. N(0,1) random variables and $(a_n)_{n\in\mathbb{N}_0}$ be real numbers such that $a_0>0$ and $\sum_{i=0}^\infty a_i^2<\infty$. Furthermore, let G be a measurable function such that

\begin{equation*}\mathbb{E}\Bigg[G^2\Bigg(\sum_{i=0}^\infty a_i\varepsilon_{-i}\Bigg)\Bigg]<\infty.\end{equation*}

Then

\begin{equation*}\mathbb{E}\Bigg[\Bigg(G\Bigg(\sum_{i=0}^\infty a_i\varepsilon_{-i}\Bigg)-G\Bigg(\sum_{i=0}^l a_i\varepsilon_{-i}\Bigg)\Bigg)^2\Bigg]\xrightarrow{l\rightarrow\infty}0.\end{equation*}
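Lemma 1 can be checked by simulation even for a discontinuous transformation. The sketch below, with illustrative truncation level and sample size, uses $G=\operatorname{sign}$ together with square-summable coefficients and estimates $\mathbb{E}\big[\big(G\big(Y_0^{(1)}\big)-G\big(Y_{l,0}^{(1)}\big)\big)^2\big]$ for growing l; the printed values should decrease towards 0.

```python
import numpy as np

rng = np.random.default_rng(2)

L_FULL = 1000   # stand-in for the infinite sum (illustrative)
N_MC = 10_000   # Monte Carlo sample size (illustrative)
i = np.arange(1, L_FULL)
a = np.concatenate(([1.0], i ** (-0.75)))  # square summable, a_0 > 0

G = np.sign  # measurable, discontinuous, and square integrable

eps = rng.standard_normal((N_MC, L_FULL))
# cumulative sums over i give the truncated processes Y_{l,0}^{(1)} for all l at once
partial = np.cumsum(eps * a, axis=1)
Y_full = partial[:, -1]  # proxy for Y_0^{(1)}

for l in [10, 100, 500, 900]:
    dist2 = np.mean((G(Y_full) - G(partial[:, l])) ** 2)
    print(f"l = {l:4d}   E[(G(Y) - G(Y_l))^2] ~ {dist2:.4f}")
```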

4. Auxiliary results

We will give the proofs in the case $p\geq 3$ . The proofs in the case $p=1,2$ are simpler, but are not completely the same. We will mention the different arguments needed for the case $p=2$ in the proofs of Lemmas 3 and 5. For $p=2$ , see also [Reference Wu, Huang and Zheng20].

Lemma 2. Under Assumption 1, for $S_n\,:\!=\sum_{j=1}^nY_j^{(p)}$,

\begin{equation*}\left\|\mathbb{E}\left[S_n\,\big|\,\mathcal{F}_0\right]\right\|_2\leq C\sqrt{n}L^p(n) .\end{equation*}

Proof. Because $\mathbb{E}\left[\varepsilon_{j-i_1}\varepsilon_{j-i_2}\cdots \varepsilon_{j-i_k}\,\big|\,\mathcal{F}_0\right]=0$ if $j-i_1>0$ and $i_1,\ldots,i_k$ are pairwise different, we have that

\begin{multline*}\mathbb{E}\left[S_n\,\big|\,\mathcal{F}_0\right]=\sum_{j=1}^n\sum_{j\leq i_1<i_2<\cdots<i_p<\infty}a_{i_1}a_{i_2}\cdots a_{i_p}\varepsilon_{j-i_1}\varepsilon_{j-i_2}\cdots \varepsilon_{j-i_p}\\=\sum_{0\leq i_1<i_2<\cdots<i_p<\infty}\Bigg(\sum_{j=1}^na_{i_1+j}a_{i_2+j}\cdots a_{i_p+j}\Bigg)\varepsilon_{-i_1}\varepsilon_{-i_2}\cdots \varepsilon_{-i_p}.\end{multline*}

Note that the random variables $\varepsilon_{-i_1}, \varepsilon_{-i_2}, \ldots, \varepsilon_{-i_p}$ , $0\leq i_1<i_2<\cdots<i_p<\infty$ , are centered and uncorrelated, and that

\begin{equation*}\mathbb{E}\left[\left(\varepsilon_{-i_1}\varepsilon_{-i_2}\cdots \varepsilon_{-i_p}\right)^2\right]=\left(\mathbb{E}\left[\varepsilon_0^2\right]\right)^p=\sigma^{2p}.\end{equation*}

It follows that

\begin{align*}\left\|\mathbb{E}\left[S_n\,\big|\,\mathcal{F}_0\right]\right\|_2^2=&\sum_{0\leq i_1<i_2<\cdots<i_p<\infty}\Bigg(\sum_{j=1}^na_{i_1+j}a_{i_2+j}\cdots a_{i_p+j}\Bigg)^2\sigma^{2p}\\=&\sum_{n\leq i_1<i_2<\cdots<i_p<\infty}\Bigg(\sum_{j=1}^na_{i_1+j}a_{i_2+j}\cdots a_{i_p+j}\Bigg)^2\sigma^{2p}\\& \quad+\sum_{\substack{0\leq i_1<i_2<\cdots<i_p<\infty\\i_1<n}}\Bigg(\sum_{j=1}^na_{i_1+j}a_{i_2+j}\cdots a_{i_p+j}\Bigg)^2\sigma^{2p}=\!:\,A_n+B_n.\end{align*}

We will treat these two summands separately. Because L is slowly varying, we have, for $k\leq i_j$ ,

\begin{equation*}\frac{a_{i_j+k}}{a_{i_j}}=\left(\frac{i_j+k}{i_j}\right)^{-\frac{p+1}{2p}}\frac{L(i_j+k)}{L(i_j)}\leq C ,\end{equation*}

and consequently

\begin{align*}A_n&=\sum_{n\leq i_1<i_2<\cdots<i_p<\infty}\Bigg(\sum_{j=1}^na_{i_1+j}a_{i_2+j}\cdots a_{i_p+j}\Bigg)^2\sigma^{2p}\\&\leq C\sum_{n\leq i_1<i_2<\cdots<i_p<\infty}n^2a^2_{i_1}a^2_{i_2}\cdots a^2_{i_p}\\&\leq Cn^2\Bigg(\sum_{i=n}^\infty a_i^2\Bigg)^p\leq Cn^2\left(n^{-\frac{1}{p}}L^2(n)\right)^p=CnL^{2p}(n),\end{align*}

where we used that $\sum_{i=n}^\infty a_i^2\leq Cn^{-\frac{p+1}{p}+1}L^2(n)=Cn^{-\frac{1}{p}}L^2(n)$. For the second summand, $B_n$, we will also use this to conclude that

\begin{align*}B_n&=\sum_{\substack{0\leq i_1<i_2<\cdots<i_p<\infty\\i_1<n}}\Bigg(\sum_{j=1}^na_{i_1+j}a_{i_2+j}\cdots a_{i_p+j}\Bigg)^2\sigma^{2p}\\&\leq C\sum_{i_1=1}^n\Bigg(\sum_{k=1}^na_{i_1+k}\Bigg)^2\Bigg(\sum_{i=i_1+1}^\infty a_i^2\Bigg)^{p-1}\leq C\sum_{i_1=1}^n\Bigg(\sum_{k=1}^na_{i_1+k}\Bigg)^2i_1^{-\frac{p-1}{p}}L^{2(p-1)}(i_1)\\&\leq C\sum_{i_1=1}^n\Big(n^{-\frac{p+1}{2p}+1}L(n)\Big)^2i_1^{-\frac{p-1}{p}}L^{2(p-1)}(i_1)\\&\leq C n^{\frac{p-1}{p}}\,n^{-\frac{p-1}{p}+1}L^{2p}(n)=CnL^{2p}(n). \end{align*}

It would be natural to decompose $S_n\,:\!=\sum_{j=1}^nY_j^{(p)}$ as a martingale in the following way:

\begin{equation*}S_n=\mathbb{E}\left[S_n\,\big|\,\mathcal{F}_0\right]+\sum_{j=1}^n D_n(\,j) ,\end{equation*}

with

\begin{align*}D_n(\,j)&=\mathbb{E}\left[S_n\,\big|\,\mathcal{F}_j\right]-\mathbb{E}\left[S_n\,\big|\,\mathcal{F}_{j-1}\right]\\&=\varepsilon_j\sum_{k=j}^n\,\sum_{0<i_2<i_3<\cdots<i_p<\infty}a_{k-j}a_{k-j+i_2}a_{k-j+i_3}\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\cdots\varepsilon_{j-i_p}.\end{align*}

However, this triangular array is not row-wise stationary, so we consider the following triangular array of martingale differences instead:

\begin{equation*}M_n(\,j)=\varepsilon_j\sum_{k=j}^\infty\,\sum_{0<i_2<i_3<\cdots<i_p\leq n}a_{k-j}a_{k-j+i_2}a_{k-j+i_3}\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\cdots\varepsilon_{j-i_p}.\end{equation*}

Lemma 3. Under Assumption 1,

\begin{equation*}\text{var}\Bigg[\sum_{j=1}^n M_n(\,j)\Bigg]\approx n\tilde{L}(n) ,\end{equation*}

where $\tilde{L}(n)=C_p\sigma^{2p}\sum_{j=1}^n\frac{L^{2p}(\,j)}{j}$ for some constant $C_p$ and $\approx$ denotes asymptotic equivalence.

Proof. As $M_n(\,j)$, $1\leq j\leq n$, is a martingale difference sequence, the summands are uncorrelated, and we start by studying a single summand:

\begin{align*}\text{var}\left[M_n(\,j)\right]&=\text{var}\left[M_n(0)\right]\\&=\text{var}\Bigg[\varepsilon_0\sum_{k=0}^\infty\,\sum_{0<i_2<i_3<\cdots<i_p\leq n}a_{k}a_{k+i_2}a_{k+i_3}\cdots a_{k+i_p}\varepsilon_{-i_2}\varepsilon_{-i_3}\cdots\varepsilon_{-i_p}\Bigg]\\&=\sigma^{2p}\sum_{0<i_2<i_3<\cdots<i_p\leq n}\Bigg(\sum_{k=0}^\infty a_{k}a_{k+i_2}a_{k+i_3}\cdots a_{k+i_p}\Bigg)^2\\&=\!:\,\sigma^{2p}\sum_{0<i_2<i_3<\cdots<i_p\leq n}B^2(i_2,i_3,\ldots,i_p).\end{align*}

Without loss of generality we can assume that $a_0=0$, because otherwise we could study the linear process with coefficients $\tilde{a}_0=0$ and $\tilde{a}_i=a_{i-1}$ for $i\geq 1$. By the slow variation of L and the uniform convergence theorem, we can find $k_1(m), k_2(m)\rightarrow\infty$ as $m\rightarrow\infty$ such that

\begin{equation*}\max_{n_1,n_2\in\{k_1(m),\ldots,k_2(m)+m\}}\max\left\{\frac{L(n_1)}{L(n_2)},\frac{L(n_2)}{L(n_1)}\right\}\xrightarrow{m\rightarrow\infty}1 ,\end{equation*}

and $k_1(m)/m\rightarrow 0$ and $k_2(m)/m\rightarrow \infty$ . We will break up $B(i_2,i_3,\ldots,i_p)$ into three parts:

\begin{align*}B(i_2,i_3,\ldots,i_p)&=\sum_{k=1}^{k_1(i_p)-1} a_{k}a_{k+i_2}a_{k+i_3}\cdots a_{k+i_p}+\sum_{k=k_1(i_p)}^{k_2(i_p)} a_{k}a_{k+i_2}a_{k+i_3}\cdots a_{k+i_p}\\& \quad + \sum_{k=k_2(i_p)+1}^{\infty} a_{k}a_{k+i_2}a_{k+i_3}\cdots a_{k+i_p}\\& \,:\!= B_1(i_2,i_3,\ldots,i_p)+B_2(i_2,i_3,\ldots,i_p)+B_3(i_2,i_3,\ldots,i_p).\end{align*}

We will treat these three sums separately. For the second summand, we have

\begin{multline*}B_2(i_2,i_3,\ldots,i_p)=\sum_{k=k_1(i_p)}^{k_2(i_p)} a_{k}a_{k+i_2}a_{k+i_3}\cdots a_{k+i_p}\\\shoveleft=\sum_{k=k_1(i_p)}^{k_2(i_p)} k^{-\frac{p+1}{2p}}L(k)(k+i_2)^{-\frac{p+1}{2p}}L(k+i_2)(k+i_3)^{-\frac{p+1}{2p}}L(k+i_3)\cdots (k+i_p)^{-\frac{p+1}{2p}}L(k+i_p) \\ \shoveleft\approx L^{p}(i_p)i_p^{-\frac{p-1}{2}}\sum_{k=k_1(i_p)}^{k_2(i_p)}\frac{1}{i_p}\bigg(\frac{k}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(\frac{k}{i_p}+\frac{i_2}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(\frac{k}{i_p}+\frac{i_3}{i_p}\bigg)^{-\frac{p+1}{2p}}\cdots\bigg(\frac{k}{i_p}+1\bigg)^{-\frac{p+1}{2p}}\\=\!:\,\tilde{B}_2(i_2,i_3,\ldots,i_p)\end{multline*}

as $i_p\rightarrow\infty$ . We can bound this sum from below and above with an integral, using the monotonicity of the integrated function:

\begin{align*}\tilde{B}_2&(i_2,i_3,\ldots,i_p)\\&\leq L^{p}(i_p)i_p^{-\frac{p-1}{2}}\!\int\limits_{(k_1(i_p)-1)/i_p}^{k_2(i_p)/i_p} v^{-\frac{p+1}{2p}}\bigg(v+\frac{i_2}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(v+\frac{i_3}{i_p}\bigg)^{-\frac{p+1}{2p}}\cdots(v+1)^{-\frac{p+1}{2p}}{\rm d} v\\&\approx L^{p}(i_p)i_p^{-\frac{p-1}{2}}\int\limits_{0}^{\infty} v^{-\frac{p+1}{2p}}\bigg(v+\frac{i_2}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(v+\frac{i_3}{i_p}\bigg)^{-\frac{p+1}{2p}}\cdots(v+1)^{-\frac{p+1}{2p}}{\rm d} v,\\\tilde{B}_2&(i_2,i_3,\ldots,i_p)\\&\geq L^{p}(i_p)i_p^{-\frac{p-1}{2}}\!\int\limits_{k_1(i_p)/i_p}^{k_2(i_p)/i_p} v^{-\frac{p+1}{2p}}\bigg(v+\frac{i_2}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(v+\frac{i_3}{i_p}\bigg)^{-\frac{p+1}{2p}}\cdots (v+1)^{-\frac{p+1}{2p}}{\rm d} v\\&\approx L^{p}(i_p)i_p^{-\frac{p-1}{2}}\int\limits_{0}^{\infty} v^{-\frac{p+1}{2p}}\bigg(v+\frac{i_2}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(v+\frac{i_3}{i_p}\bigg)^{-\frac{p+1}{2p}}\cdots(v+1)^{-\frac{p+1}{2p}}{\rm d} v \end{align*}

as $i_p\rightarrow\infty$ , and consequently

\begin{equation*}B_2\approx L^{p}(i_p)i_p^{-\frac{p-1}{2}}\int\limits_{0}^{\infty} v^{-\frac{p+1}{2p}}\bigg(v+\frac{i_2}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(v+\frac{i_3}{i_p}\bigg)^{-\frac{p+1}{2p}}\cdots(v+1)^{-\frac{p+1}{2p}}{\rm d} v.\end{equation*}

For the third summand, first choose $\delta>0$ such that $p\big(\frac{p+1}{2p}-\delta\big)>1$ . By Potter’s theorem [Reference Bingham, Goldie and Teugels4, Theorem 1.5.6], there is a constant C such that $L(k)/L(i_p)\leq C (k/i_p)^\delta$ for $k\geq i_p$ , so we have

\begin{align*}B_3&(i_2,i_3,\ldots,i_p)=\sum_{k=k_2(i_p)+1}^{\infty} a_{k}a_{k+i_2}a_{k+i_3}\cdots a_{k+i_p}\\& \leq CL^p(i_p)i_p^{-p\delta}\sum_{k=k_2(i_p)+1}^{\infty}k^{-\big(\frac{p+1}{2p}-\delta\big)}(k+i_2)^{-\big(\frac{p+1}{2p}-\delta\big)}\cdots (k+i_p)^{-\big(\frac{p+1}{2p}-\delta\big)} \\ & \leq CL^p(i_p)i_p^{-p\delta}\sum_{k=k_2(i_p)+1}^{\infty}k^{-p\big(\frac{p+1}{2p}-\delta\big)}=CL^p(i_p)i_p^{-\frac{p-1}{2}}\sum_{k=k_2(i_p)+1}^{\infty}\frac{1}{i_p}\bigg(\frac{k}{i_p}\bigg)^{-p\big(\frac{p+1}{2p}-\delta\big)}\\& \leq CL^p(i_p)i_p^{-\frac{p-1}{2}}\int_{k_2(i_p)/i_p}^\infty v^{-p\big(\frac{p+1}{2p}-\delta\big)}\,{\rm d} v = o\big(B_2(i_2,i_3,\ldots,i_p)\big)\end{align*}

as $i_p\rightarrow\infty$ . The first summand $B_1$ can be treated in the same way, because for every $\delta>0$ , there is a C such that $L(k)/L(i_p)\leq C (k/i_p)^{-\delta}$ for $k\leq i_p+k_1(i_p)$ . Thus, $B\approx B_1$ and we have the asymptotic equivalence (for $n\rightarrow\infty$ )

\begin{align*}\text{var}\left[M_n(\,j)\right]&\approx\sigma^{2p}\sum_{i_p=p}^n L^{2p}(i_p)\,i_p^{-(p-1)}\sum_{0<i_2<i_3<\cdots<i_{p-1}<i_p}\Bigg(\int\limits_{0}^\infty v^{-\frac{p+1}{2p}}\bigg(v+\frac{i_2}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(v+\frac{i_3}{i_p}\bigg)^{-\frac{p+1}{2p}}\cdots(v+1)^{-\frac{p+1}{2p}}{\rm d} v\Bigg)^2\\&=\sigma^{2p}\sum_{i_p=p}^n\frac{L^{2p}(i_p)}{i_p}\sum_{0<i_2<i_3<\cdots<i_{p-1}<i_p}\frac{1}{i_p^{p-2}}\Bigg(\int\limits_{0}^\infty v^{-\frac{p+1}{2p}}\bigg(v+\frac{i_2}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(v+\frac{i_3}{i_p}\bigg)^{-\frac{p+1}{2p}}\cdots(v+1)^{-\frac{p+1}{2p}}{\rm d} v\Bigg)^2\\&\approx\sigma^{2p}\sum_{i_p=p}^n\frac{L^{2p}(i_p)}{i_p}\int\limits_{0}^{1}\!\int\limits_{0}^{t_{p-1}}\!\!\cdots\!\int\limits_{0}^{t_4}\!\int\limits_{0}^{t_3}\Bigg(\int\limits_{0}^\infty v^{-\frac{p+1}{2p}}(v+t_2)^{-\frac{p+1}{2p}}\cdots(v+t_{p-1})^{-\frac{p+1}{2p}}(v+1)^{-\frac{p+1}{2p}}{\rm d} v\Bigg)^2{\rm d} t_2\,{\rm d} t_3\cdots{\rm d} t_{p-2}\,{\rm d} t_{p-1}\\&=\!:\,\tilde{L}(n).\end{align*}

The approximation of the inner sum by the integral follows in the same way as the approximation of $B_2$ by an integral, making use of the monotonicity of the integrand. This immediately leads to

\begin{equation*}\text{var}\Bigg(\sum_{j=1}^n M_n(\,j)\Bigg)=\sum_{j=1}^n\text{var}\big(M_n(\,j)\big)\approx n\tilde{L}(n) \end{equation*}

as $n\rightarrow\infty$ . Note that in the case $p=2$ , the outer $p-2$ integrals would not exist. Additionally, we have to show that the integral (the constant $C_p$ ) is finite:

\begin{align*}& C_p \,:\!=\\& \iint\limits_{0\ 0}^{\quad 1\ t_{p-1}}\!\!\cdots\!\!\iint\limits_{0\ 0}^{\quad t_4\ t_3}\!\Bigg(\int\limits_{0}^\infty v^{-\frac{p+1}{2p}}(v+t_2)^{-\frac{p+1}{2p}}(v+t_3)^{-\frac{p+1}{2p}}\!\cdots\!(v+t_{p-1})^{-\frac{p+1}{2p}}(v+1)^{-\frac{p+1}{2p}}{\rm d} v\!\Bigg)^2\\&\quad {\rm d} t_2{\rm d} t_3\cdots {\rm d} t_{p-2}{\rm d} t_{p-1} \\ \leq &\ 2\!\!\!\iint\limits_{0\ 0}^{\quad 1\ t_{p-1}}\!\!\cdots\!\!\iint\limits_{0\ 0}^{\quad t_4\ t_3}\!\Bigg(\int\limits_{1}^\infty v^{-\frac{p+1}{2p}}(v+t_2)^{-\frac{p+1}{2p}}(v+t_3)^{-\frac{p+1}{2p}}\!\cdots\!(v+t_{p-1})^{-\frac{p+1}{2p}}(v+1)^{-\frac{p+1}{2p}}dv\!\Bigg)^2\\&\quad {\rm d} t_2{\rm d} t_3\cdots {\rm d} t_{p-2}{\rm d} t_{p-1}\\ & + 2\iint\limits_{0\ 0}^{\quad 1\ t_{p-1}}\!\!\cdots\!\!\iint\limits_{0\ 0}^{\quad t_4\ t_3}\!\Bigg(\int\limits_{0}^1 v^{-\frac{p+1}{2p}}(v\!+\!t_2)^{-\frac{p+1}{2p}}(v\!+\!t_3)^{-\frac{p+1}{2p}}\!\!\cdots\!(v\!+\!t_{p-1})^{-\frac{p+1}{2p}}(v\!+\!1)^{-\frac{p+1}{2p}}dv\!\Bigg)^2\\&\quad {\rm d} t_2{\rm d} t_3\cdots {\rm d} t_{p-2}{\rm d} t_{p-1}\\\leq &\ 2\int\limits_0^{1}\!\int\limits_0^{t_{p-1}}\!\!\cdots\!\int\limits_0^{t_4} \!\!\int\limits_0^{t_3}\Bigg(\int\limits_{1}^\infty v^{-\frac{p(p+1)}{2p}}{\rm d} v\Bigg)^2{\rm d} t_2{\rm d} t_3\!\cdots \! {\rm d} t_{p-2}{\rm d} t_{p-1}\\& + 2\int\limits_0^{1}t_{p-1}^{-\frac{p+1}{p}}\int\limits_0^{t_{p-1}}t_{p-2}^{-\frac{p+1}{p}}\!\!\cdots\! \int\limits_0^{t_4}t_{3}^{-\frac{p+1}{p}}\int\limits_0^{t_3}t_2^{-\frac{5}{2p}}\Bigg(\int\limits_{0}^1 v^{-\frac{4p-1}{4p}}{\rm d} v\Bigg)^2{\rm d} t_2{\rm d} t_3\cdots {\rm d} t_{p-2}{\rm d} t_{p-1}\\ \leq &\ 2\Bigg(\int\limits_{1}^\infty v^{-\frac{p+1}{2}}{\rm d} v\Bigg)^2\!\!+C\!\!\int\limits_0^{1}t_{p-1}^{-\frac{p+1}{p}}\!\!\int\limits_0^{t_{p-1}}\!t_{p-2}^{-\frac{p+1}{p}}\!\!\cdots\! \int\limits_0^{t_4}t_{3}^{-\frac{7}{2p}}\Bigg(\int\limits_{0}^1 v^{-\frac{4p-1}{4p}}\!\!{\rm d} v\Bigg)^2{\rm d} t_3\cdots {\rm d} t_{p-2}{\rm d} t_{p-1}\\\leq &\ 2\Bigg(\int\limits_{1}^\infty v^{-\frac{p+1}{2}}{\rm d} v\Bigg)^2+C\int\limits_0^{1}t_{p-1}^{-\frac{2p-1}{2p}}{\rm d} t_{p-1}\Bigg(\int\limits_{0}^1 v^{-\frac{4p-1}{4p}}{\rm d} v\Bigg)^2<\infty,\end{align*}

where we used $v^{-\frac{p+1}{2p}}(v+t_2)^{-\frac{p+1}{2p}}\leq v^{-\frac{4p-1}{4p}}t_2^{-\frac{5}{4p}}$ for $0<t_2<v<\infty$ .
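The constant $C_p$ is given only implicitly by the iterated integral above, but for small p it can be evaluated numerically. For $p=2$ no outer $t$-integrals remain and $C_2=\big(\int_0^\infty v^{-3/4}(v+1)^{-3/4}\,{\rm d} v\big)^2=B(1/4,1/2)^2$ by the Beta integral; for $p=3$ a single outer integral survives. The following quadrature sketch (not from the paper) assumes SciPy is available.

```python
import numpy as np
from scipy import integrate, special

# p = 2: the squared inner integral is a Beta integral, B(1/4, 1/2)^2
f2 = lambda v: v ** (-0.75) * (v + 1) ** (-0.75)
# split at 1 so the endpoint singularity and the infinite tail are handled separately
inner2 = integrate.quad(f2, 0, 1)[0] + integrate.quad(f2, 1, np.inf)[0]
print("C_2 (quadrature):", inner2 ** 2)
print("C_2 (Beta):      ", special.beta(0.25, 0.5) ** 2)

# p = 3: one outer integral over t_2 in (0, 1); exponent (p+1)/(2p) = 2/3
def inner3(t2, q=2.0 / 3.0):
    f = lambda v: v ** (-q) * (v + t2) ** (-q) * (v + 1) ** (-q)
    val = integrate.quad(f, 0, 1)[0] + integrate.quad(f, 1, np.inf)[0]
    return val ** 2

C3 = integrate.quad(inner3, 0, 1, limit=200)[0]
print("C_3 (nested quadrature):", C3)
```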

Lemma 4. Under Assumption 1,

  1. $\tilde{L}(x)\,:\!=C_p\sigma^{2p}\sum_{j=1}^{\lfloor x\rfloor}\frac{L^{2p}(\,j)}{j}$ is slowly varying;

  2. $L^{2p}(n)=o(\tilde{L}(n))$ .

Proof. For a slowly varying function L, the function $x\mapsto L(\lfloor x\rfloor)x/\lfloor x\rfloor$ is also slowly varying. Applying [Reference Bingham, Goldie and Teugels4, Proposition 1.5.9a] to this function leads to the statement of the lemma.

Lemma 5. Under Assumption 1,

\begin{equation*}\Bigg\|\sum_{j=1}^n \left(D_n(\,j)-M_n(\,j)\right)\Bigg\|_2^2=o\big(n\tilde{L}(n)\big) .\end{equation*}

Proof. Note that $D_n(\,j)-M_n(\,j)$ , $j=1,\ldots, n$ , is a martingale difference sequence, so it suffices to show that

\begin{equation*}\big\| D_n(\,j)-M_n(\,j)\big\|_2^2=o\big(\tilde{L}(n)\big).\end{equation*}

By the definitions, we have that

\begin{multline*}D_n(\,j)-M_n(\,j)\\= \varepsilon_j\sum_{k=j}^n\,\sum_{\substack{0<i_2<i_3<\cdots<i_p<\infty\\i_p>n}}a_{k-j}a_{k-j+i_2}a_{k-j+i_3}\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\cdots\varepsilon_{j-i_p}\\\qquad -\varepsilon_j\sum_{k=n+1}^\infty\,\sum_{0<i_2<i_3<\cdots<i_p\leq n}a_{k-j}a_{k-j+i_2}a_{k-j+i_3}\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\cdots\varepsilon_{j-i_p}\\=\!:\, T_{n,1}(\,j)+T_{n,2}(\,j).\end{multline*}

We will treat these two summands separately. For the second summand, we have that

\begin{multline*}\big\|T_{n,2}(\,j)\big\|_2^2=\sigma^{2p}\sum_{0<i_2<i_3<\cdots<i_p\leq n}\Bigg(\sum_{k=n+1}^\infty a_{k-j}a_{k-j+i_2}a_{k-j+i_3}\cdots a_{k-j+i_p}\Bigg)^2\\\leq C \sum_{0<i_2<i_3<\cdots<i_p\leq n}L^{2p}(n+1)(n+1)^{-2\left(\frac{p+1}{2}-1\right)}\leq CL^{2p}(n).\end{multline*}

From the second part of Lemma 4, we have that $L^{2p}(n)=o(\tilde{L}(n))$ . For the first summand, we use an approximation by an integral again:

\begin{multline*}\left\|T_{n,1}(\,j)\right\|_2^2=\sigma^{2p}\sum_{\substack{0<i_2<i_3<\cdots<i_p<\infty\\ i_p>n}}\Bigg(\sum_{k=0}^{n-j} a_{k}a_{k+i_2}a_{k+i_3}\cdots a_{k+i_p}\Bigg)^2\\\shoveleft\approx \sigma^{2p}\sum_{\substack{0<i_2<i_3<\cdots<i_p<\infty\\ i_p>n}}i_p^{-(p-1)}L^{2p}(i_p) \times \\\left(\int\limits_{0}^{n/i_p}v^{-\frac{p+1}{2p}}\bigg(v+\frac{i_2}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(v+\frac{i_3}{i_p}\bigg)^{-\frac{p+1}{2p}}\cdots\bigg(v+\frac{i_{p-1}}{i_p}\bigg)^{-\frac{p+1}{2p}}(v+1)^{-\frac{p+1}{2p}}{\rm d} v\right)^2 \\ \shoveleft\approx C\sum_{i_p=n+1}^\infty \frac{L^{2p}(i_p)}{i_p}\int\limits_{0}^{1}\!\int\limits_{0}^{t_{p-1}}\!\cdots\!\int\limits_0^{t_3}\\\left(\int\limits_{0}^{n/i_p}v^{-\frac{p+1}{2p}}(v+t_2)^{-\frac{p+1}{2p}}\cdots(v+t_{p-1})^{-\frac{p+1}{2p}}(v+1)^{-\frac{p+1}{2p}}{\rm d} v\right)^2{\rm d} t_2\cdots {\rm d} t_{p-2}{\rm d} t_{p-1} \\ \shoveleft\leq C\sum_{i_p=n+1}^\infty \frac{L^{2p}(i_p)}{i_p}\int\limits_{0}^{1}t_{p-1}^{-\frac{p+1}{p}}\!\int\limits_{0}^{t_{p-1}}t_{p-2}^{-\frac{p+1}{p}}\!\cdots\!\int\limits_0^{t_3}t_{2}^{-\frac{5}{2p}}\!\left(\int\limits_{0}^{n/i_p}v^{-\frac{4p-1}{4p}}{\rm d} v\right)^2{\rm d} t_2\cdots {\rm d} t_{p-2}{\rm d} t_{p-1}\\= C\sum_{i_p=n+1}^\infty \frac{L^{2p}(i_p)}{i_p}\left(\frac{n}{i_p}\right)^{\frac{1}{2p}}\approx CL^{2p}(n)=o(\tilde{L}(n)).\end{multline*}

Note that in the case $p=2$ , the outer $p-2$ integrals would not exist.

Lemma 6. Under Assumption 1,

\begin{equation*}\mathbb{E}\big[M_n^4(\,j)\big]\leq C\tilde{L}^2(n) .\end{equation*}

Proof. For brevity, we write $m_4\,:\!=\mathbb{E}\left[\varepsilon_0^4\right]$. Recall that

\begin{equation*}M_n(\,j)=\varepsilon_j\sum_{k=j}^\infty\,\sum_{0<i_2<i_3<\cdots<i_p\leq n}a_{k-j}a_{k-j+i_2}a_{k-j+i_3}\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\cdots\varepsilon_{j-i_p},\end{equation*}

so

\begin{align*}\mathbb{E}\big[M_n^4(\,j)\big]=m_4\mathbb{E}\left[\left(\sum_{k=j}^\infty\,\sum_{\substack{0<i_2<i_3< \\ \cdots<i_p\leq n}}a_{k-j}a_{k-j+i_2}a_{k-j+i_3}\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\cdots\varepsilon_{j-i_p}\right)^4\right].\end{align*}

Using the Burkholder inequality [Reference Burkholder7] for the martingale difference sequence $\tilde{d}_{i_p}$ , $i_p=p-1,\ldots,n$ , with

\begin{equation*}\tilde{d}_{i_p}\,:\!=\sum_{\substack{0<i_2<i_3<\\ \cdots<i_{p\!-\!1}<i_p}}\sum_{k=j}^\infty a_{k-j}a_{k-j+i_2}\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\cdots\varepsilon_{j-i_{p-1}}\varepsilon_{j-i_p}\end{equation*}

and then the Minkowski inequality, we get

\begin{align*}&\big(\mathbb{E}\big[M_n^4(\,j)\big]\big)^{\frac{1}{2}}\\=&\ m_4^{\frac{1}{2}}\left(\mathbb{E}\left[\left(\sum_{\substack{0<i_2<i_3<\\ \cdots<i_p\leq n}}\,\sum_{k=j}^\infty a_{k-j}a_{k-j+i_2}\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\cdots\varepsilon_{j-i_p}\right)^4\right]\right)^{\frac{1}{2}}\\=&\ m_4^{\frac{1}{2}}\Bigg(\mathbb{E}\Bigg[\Bigg(\sum_{i_p=p\!-\!1}^n\tilde{d}_{i_p}\Bigg)^4\Bigg]\Bigg)^{\frac{1}{2}}\leq Cm_4^{\frac{1}{2}}\Bigg(\mathbb{E}\Bigg[\Bigg(\sum_{i_p=p\!-\!1}^n\tilde{d}_{i_p}^2\Bigg)^2\Bigg]\Bigg)^{\frac{1}{2}} \\ =&\ Cm_4^{\frac{1}{2}}\Bigg\|\sum_{i_p=p\!-\!1}^n\tilde{d}_{i_p}^2\Bigg\|_2\leq Cm_4^{\frac{1}{2}}\sum_{i_p=p\!-\!1}^n\left\|\tilde{d}_{i_p}^2\right\|_2=Cm_4^{\frac{1}{2}}\sum_{i_p=p\!-\!1}^n\mathbb{E}\Big[\tilde{d}_{i_p}^4\Big]^{\frac{1}{2}}\\=&\ Cm_4^{\frac{2}{2}}\sum_{i_p=p\!-\!1}^n\left(\mathbb{E}\left[\left(\sum_{\substack{0<i_2<i_3<\\ \cdots<i_{p\!-\!1}<i_p}}\sum_{k=j}^\infty a_{k-j}a_{k-j+i_2}\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\cdots\varepsilon_{j-i_{p-1}}\right)^4\right]\right)^{\frac{1}{2}}.\end{align*}

Repeating this argument, we can conclude that

\begin{align*}&\big(\mathbb{E}\big[M_n^4(\,j)\big]\big)^{\frac{1}{2}}\\\leq&\ Cm_4^{\frac{2}{2}}\sum_{i_p=p\!-\!1}^n\left(\mathbb{E}\left[\left(\sum_{\substack{0<i_2<i_3<\\ \cdots<i_{p\!-\!1}<i_p}}\sum_{k=j}^\infty a_{k-j}a_{k-j+i_2}\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\cdots\varepsilon_{j-i_{p-1}}\!\right)^4\right]\right)^{\frac{1}{2}} \\ \leq&\ Cm_4^{\frac{3}{2}}\!\!\sum_{\substack{p-2\leq i_{p\!-\!1}<i_p\\ i_p\leq n}}\!\left(\mathbb{E}\left[\left(\!\sum_{\substack{0<i_2<i_3\\< \cdots<i_{p-1}}}\!\sum_{k=j}^\infty a_{k-j}a_{k-j+i_2}\!\cdots a_{k-j+i_p}\varepsilon_{j-i_2}\varepsilon_{j-i_3}\!\cdots\varepsilon_{j-i_{p-2}}\!\right)^4\right]\right)^{\!\frac{1}{2}}\\&\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \vdots\\\leq&\ Cm_4^{\frac{p}{2}}\sum_{0<i_2<i_3<\cdots<i_p\leq n}\Bigg(\sum_{k=j}^\infty a_{k-j}a_{k-j+i_2}\cdots a_{k-j+i_p}\Bigg)^2.\end{align*}

In the same way as in the proof of Lemma 3, it follows that

\begin{align*}\mathbb{E}\big[M_n^4(\,j)\big]\leq C\frac{m_4^p}{\sigma^{4p}}\tilde{L}^2(n).\end{align*}

Proposition 1. Under Assumption 1, for $S_n\,:\!=\sum_{j=1}^nY_j^{(p)}$ ,

\begin{equation*}\frac{S_n}{\sqrt{n\tilde{L}(n)}\,}\Rightarrow N(0,1) .\end{equation*}

Proof. Because of Lemmas 2 and 5 we have that

\begin{equation*}S_n=\sum_{j=1}^nM_n(\,j)+\sum_{j=1}^n(D_n(\,j)-M_n(\,j))+\mathbb{E}[S_n \mid \mathcal{F}_0]=\sum_{j=1}^nM_n(\,j)+o_p\left(\sqrt{n\tilde{L}(n)}\right).\end{equation*}

We will show the asymptotic normality of $\sum_{j=1}^nM_n(\,j)$ using a martingale central limit theorem from [Reference Wu and Woodroofe21]. Note that by Lemma 3, we have that $\text{var}\big[\sum_{j=1}^n M_n(\,j)\big]\approx n\tilde{L}(n)$. First, the Lyapunov condition holds because of Lemma 6:

\begin{equation*}\frac{1}{n^2\tilde{L}^2(n)}\sum_{j=1}^n\mathbb{E}\big[M_n(\,j)^4\big]\leq C\frac{n\tilde{L}^2(n)}{n^2\tilde{L}^2(n)}\xrightarrow{n\rightarrow\infty}0.\end{equation*}

It remains to show condition (11) of [Reference Wu and Woodroofe21], i.e.

\begin{equation*}\frac{1}{n\tilde{L}(n)}\sum_{j=1}^n\mathbb{E}\big[M_n^2(\,j)\,\big|\,\mathcal{F}_{j-1}\big]\xrightarrow{n\rightarrow \infty}1\end{equation*}

in probability. First note that we can decompose $M_n(0)$ for $k<n$ into

\begin{multline*}M_n(0)=M_k(0)+\varepsilon_0\sum_{i_p=k+1}^n\,\sum_{0<i_2<i_3<\cdots<i_{p-1}<i_p}\sum_{j=0}^\infty a_{j}a_{j+i_2}a_{j+i_3}\cdots a_{j+i_p}\varepsilon_{-i_2}\varepsilon_{-i_3}\cdots\varepsilon_{-i_p}\\=\!:\,M_k(0)+\tilde M_{k,n}(0).\end{multline*}

We use this decomposition to calculate the covariance:

\begin{align*}\text{cov}\big(\mathbb{E}\big[M_n^2(\,j)\,\big|\,\mathcal{F}_{j-1}\big], &\, \mathbb{E}\big[M_n^2(\,j+k)\,\big|\,\mathcal{F}_{j+k-1}\big]\big) \\& = \text{cov}\big(\mathbb{E}\big[M_n^2(0)\,\big|\,\mathcal{F}_{-1}\big],\mathbb{E}\big[M_n^2({-}k)\,\big|\,\mathcal{F}_{-k-1}\big]\big)\\& = \text{cov}\big(\mathbb{E}\big[M_k^2(0)\,\big|\,\mathcal{F}_{-1}\big],\mathbb{E}\big[M_n^2({-}k)\,\big|\,\mathcal{F}_{-k-1}\big]\big)\\& \quad + 2\text{cov}\big(\mathbb{E}\big[\tilde{M}_{k,n}(0)M_k(0)\,\big|\,\mathcal{F}_{-1}\big],\mathbb{E}\big[M_n^2({-}k)\,\big|\,\mathcal{F}_{-k-1}\big]\big)\\& \quad + \text{cov}\big(\mathbb{E}\big[\tilde{M}^2_{k,n}(0)\,\big|\,\mathcal{F}_{-1}\big],\mathbb{E}\big[M_n^2({-}k)\,\big|\,\mathcal{F}_{-k-1}\big]\big).\end{align*}

For the first of the three summands, we have that $M_k^2(0)$ is measurable with respect to $\sigma(\varepsilon_0,\ldots,\varepsilon_{-k})$ and thus is independent of $\mathcal{F}_{-k-1}$ . It follows that

\begin{equation*}\text{cov}\left(\mathbb{E}\big[M_k^2(0)\,\big|\,\mathcal{F}_{-1}\big],\mathbb{E}\big[M_n^2({-}k)\,\big|\,\mathcal{F}_{-k-1}\big]\right)=0.\end{equation*}

For the second summand, we use the Hölder inequality repeatedly to obtain

\begin{align*}\text{cov}\big(\mathbb{E}&\big[\tilde{M}_{k,n}(0)M_k(0)\,\big|\,\mathcal{F}_{-1}\big],\mathbb{E}\big[M_n^2({-}k)\,\big|\,\mathcal{F}_{-k-1}\big]\big)\\& \leq \big\|\mathbb{E}\big[\tilde{M}_{k,n}(0)M_k(0)\,\big|\,\mathcal{F}_{-1}\big]\big\|_2\big\|\mathbb{E}\big[M_n^2({-}k)\,\big|\,\mathcal{F}_{-k-1}\big]\big\|_2\\& \leq \sqrt{\mathbb{E}\big[\tilde{M}_{k,n}^2(0)M_k^2(0)\big]}\sqrt{\mathbb{E}\big[M_n^4({-}k)\big]}\leq \big\|\tilde{M}_{k,n}(0)\big\|_4\big\|M_k(0)\big\|_4\big\|M_n(0)\big\|_4^2.\end{align*}

Dealing with the third summand in a similar way, we get

\begin{equation*}\text{cov}\big(\mathbb{E}\big[\tilde{M}^2_{k,n}(0)\,\big|\,\mathcal{F}_{-1}\big],\mathbb{E}\big[M_n^2({-}k)\,\big|\,\mathcal{F}_{-k-1}\big]\big)\leq\big\|M_n(0)\big\|_4^2\big\|\tilde{M}_{k,n}(0)\big\|_4^2.\end{equation*}

In the same way as in the proof of Lemma 6, we can conclude that

\begin{equation*}\mathbb{E}\big[\tilde{M}_{k,n}^4(0)\big]\leq C\Bigg(\sum_{i_p=k+1}^n\frac{L^{2p}(i_p)}{i_p}\Bigg)^2\leq C\big(\tilde{L}(n)-\tilde{L}(k)\big)^2.\end{equation*}

Now we are able to bound the variance of the sum:

\begin{align*}\text{var}&\Bigg[\frac{1}{n\tilde{L}(n)}\sum_{j=1}^n\mathbb{E}\left[M_n^2(\,j)\,\big|\,\mathcal{F}_{j-1}\right]\Bigg]\\& \leq \frac{2}{n\tilde{L}^2(n)}\sum_{k=0}^{n-1}\left|\text{cov}\Big(\mathbb{E}\Big[M_n^2(0)\,\big|\,\mathcal{F}_{-1}\Big],\mathbb{E}\Big[M_n^2({-}k)\,\big|\,\mathcal{F}_{-k-1}\Big]\Big)\right|\\& \leq C\frac{1}{n}\Bigg(\sum_{k=0}^{n-1}\bigg(\frac{\tilde{L}(n)-\tilde{L}(k)}{\tilde{L}(n)}\bigg)^{\frac{1}{2}}+\sum_{k=0}^{n-1}\frac{\tilde{L}(n)-\tilde{L}(k)}{\tilde{L}(n)}\Bigg)\xrightarrow{n\rightarrow\infty}0 ,\end{align*}

because $\tilde{L}$ is slowly varying by Lemma 4. With the Chebyshev inequality, [Reference Wu and Woodroofe21, (11)] follows.
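As an illustration of Proposition 1 (a simulation sketch, not part of the paper), one can estimate $\text{var}[S_n]$ for $p=2$ and $L\equiv 1$, where $\tilde{L}(n)$ grows like a multiple of $\log n$. The truncation level and replication count are illustrative, and SciPy's FFT convolution is used only for speed; the ratio $\text{var}[S_n]/n$ should keep growing, while $\text{var}[S_n]/(n\log n)$ should be roughly stable.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(3)

M = 2000  # truncation of the filter (illustrative; the paper's sums are infinite)
i = np.arange(1, M)
a = np.concatenate(([1.0], i ** (-0.75)))  # p = 2, L == 1 (boundary case)

def S_n(n):
    """One realization of the truncated reduced sum S_n for p = 2."""
    eps = rng.standard_normal(n + M)
    U = fftconvolve(eps, a, mode="valid")           # U_j = sum_i a_i eps_{j-i}
    V = fftconvolve(eps ** 2, a ** 2, mode="valid")
    Y2 = 0.5 * (U ** 2 - V)                         # symmetric-polynomial identity
    return Y2[:n].sum()

for n in [500, 2000, 8000]:
    reps = np.array([S_n(n) for _ in range(300)])
    print(n, round(reps.var() / n, 2), round(reps.var() / (n * np.log(n)), 2))
```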

Remark 1. In the special case in which $(\varepsilon_n)_{n\in\mathbb{Z}}$ are i.i.d. standard N(0,1), $Y_j^{(p)}$ is a Wiener chaos process of order p. In order to prove the central limit theorem $S_n/ (n\tilde{L}(n))^{1/2} \Rightarrow N(0,1)$ in this special case, one could apply the fourth moment theorem in [Reference Nourdin, Peccati and Reinert14]; see Theorem 3.1 therein, which was proved using tools from Malliavin calculus. A sufficient condition for the central limit theorem is $T_1(S_n/ (n\tilde{L}(n))^{1/2}) \to 0$, where $T_1(\cdot)$, defined in their Theorem 3.1, is a functional of Wiener chaos which involves complicated tensor products. It turns out that the verification of the latter sufficient condition in our setting is highly nontrivial. Hence the details will not be pursued in this paper.

For the next lemmas we have to introduce some notation:

\begin{equation*}X_{l,n}\,:\!=G\Bigg(\sum_{i=0}^la_i\varepsilon_{n-i}\Bigg) , \qquad Y_{l,n}^{(k)}\,:\!=\sum_{0\leq i_1<i_2<\cdots<i_k<l}a_{i_1}a_{i_2}\cdots a_{i_k}\varepsilon_{n-i_1}\varepsilon_{n-i_2}\cdots \varepsilon_{n-i_k} .\end{equation*}

Lemma 7. Under Assumptions 1 and 3, we have that

\begin{equation*}\lim_{l\rightarrow\infty}G_l^{(p)}(0)=G_{\infty}^{(p)}(0) .\end{equation*}

Proof. Define $\tilde{Y}_{l,n}^{(1)}\,:\!=Y_n^{(1)}-Y_{l,n}^{(1)}=\sum_{i=l+1}^\infty a_i\varepsilon_{n-i}$ . In the same way as in [Reference Ho and Hsing11, Lemma 2.1],

\begin{equation*}\big|G_l^{(p)}(0)-G_{\infty}^{(p)}(0)\big|=\big|\mathbb{E}\big[G_l^{(p)}(0)-G_l^{(p)}(\tilde{Y}_{l,0}^{(1)})\big]\big|\leq \mathbb{E}\big[\big|G_l^{(p)}(0)-G_l^{(p)}(\tilde{Y}_{l,0}^{(1)})\big|\big] .\end{equation*}

Note that

\begin{equation*}\text{var}\left[\tilde{Y}_{l,n}^{(1)}\right]=\sum_{i=l+1}^\infty a_i^2\sigma^{2}\xrightarrow{l\rightarrow\infty}0,\end{equation*}

so $\tilde{Y}_{l,0}^{(1)}$ converges to 0 in probability. By Assumption 3, $G_l^{(p)}$ is differentiable in a neighborhood of 0 and thus continuous. It follows that $G_l^{(p)}(\tilde{Y}_{l,0}^{(1)})$ converges in probability to $G_l^{(p)}(0)$ . Because $G_l^{(p)}(\tilde{Y}_{l,0}^{(1)})$ is also $L_4$ bounded by Assumption 3, the convergence to 0 of $G_l^{(p)}(0)-G_l^{(p)}(\tilde{Y}_{l,0}^{(1)})$ in $L_1$ follows and gives the statement of the lemma.

Lemma 8. Under Assumption 1 and additionally $\sum_{j=1}^{\infty}\frac{L^{2p}(\,j)}{j}<\infty$ , we have that

\begin{equation*}\lim_{l\rightarrow\infty}\limsup_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Big(Y_j^{(p)}-Y_{l,j}^{(p)}\Big)\Bigg]=0 .\end{equation*}

Proof. First note that by Lemma 4 we have $L^{2p}(n)=o(\tilde{L}(n))$. Because we assume $\sum_{j=1}^{\infty}\frac{L^{2p}(\,j)}{j}<\infty$, $\tilde{L}$ is bounded, and hence $L(n)\rightarrow 0$ as $n\rightarrow \infty$. So by Lemma 2 we have

\begin{equation*}\limsup_{n\rightarrow\infty}\text{var}\Bigg[\mathbb{E}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n Y_j^{(p)}\,\big|\,\mathcal{F}_0\Bigg]\Bigg]=0.\end{equation*}

Additionally, $\mathbb{E}\big[ Y_{l,j}^{(p)}\,\big|\,\mathcal{F}_0\big]=0$ for all $j>l$ , and consequently, for all $l\in\mathbb{N}$ ,

\begin{equation*}\limsup_{n\rightarrow\infty}\text{var}\Bigg[\mathbb{E}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n Y_{l,j}^{(p)}\,\big|\,\mathcal{F}_0\Bigg]\Bigg]=0.\end{equation*}

So, we only have to deal with the variance of

\begin{align*}\tilde{S}_{l,n} & \,:\!= \sum_{j=1}^n Y_j^{(p)}-\mathbb{E}\Bigg[\sum_{j=1}^n Y_j^{(p)}\,\big|\,\mathcal{F}_0\Bigg]-\sum_{j=1}^n Y_{l,j}^{(p)}+\mathbb{E}\Bigg[\sum_{j=1}^n Y_{l,j}^{(p)}\,\big|\,\mathcal{F}_0\Bigg]\\& = \sum_{j=1}^n\,\sum_{\substack{0\leq i_1<i_2<\cdots<i_p<\infty\\i_1<j,\ \ i_p>l}}\!\!\! a_{i_1}a_{i_2}\cdots a_{i_p}\varepsilon_{j-i_1}\varepsilon_{j-i_2}\cdots\varepsilon_{j-i_p}\\& = \sum_{j=1}^n\varepsilon_j\sum_{0\leq i_2<\cdots<i_p<\infty}\sum_{k=\max\{0,l-i_p+1 \}}^{n-j} a_{k}a_{k+i_2}\cdots a_{k+i_p}\varepsilon_{j+k-i_2}\cdots\varepsilon_{j+k-i_p}.\end{align*}

Because the random variables $(\varepsilon_j)_{j\in\mathbb{Z}}$ are uncorrelated, we can follow the argument of the proof of Lemma 3 to obtain

\begin{multline*}\text{var}\big[\tilde{S}_{l,n}\big]=\sigma^{2p}\sum_{j=1}^n\,\sum_{0\leq i_2<\cdots<i_p<\infty}\Bigg(\sum_{k=\max\{0,l-i_p+1 \}}^{n-j} a_{k}a_{k+i_2}\cdots a_{k+i_p}\Bigg)^2\\\shoveleft\approx n\sigma^{2p}\sum_{0\leq i_2<\cdots<i_p<\infty}i_p^{-(p-1)}L^{2p}(i_p) \times \\\left(\,\int\limits_{\max\{l/i_p-1,0\}}^{n/i_p}v^{-\frac{p+1}{2p}}\bigg(v+\frac{i_2}{i_p}\bigg)^{-\frac{p+1}{2p}}\bigg(v+\frac{i_3}{i_p}\bigg)^{-\frac{p+1}{2p}}\cdots\bigg(v+\frac{i_{p-1}}{i_p}\bigg)^{-\frac{p+1}{2p}}(v+1)^{-\frac{p+1}{2p}}{\rm d} v\right)^2 \\ \shoveleft\leq Cn\sum_{i_p=1}^\infty \frac{L^{2p}(i_p)}{i_p}\int\limits_{0}^{1}\!\int\limits_{0}^{t_{p-1}}\!\cdots\!\int\limits_0^{t_3}\\\left(\,\int\limits_{\max\{l/i_p-1,0\}}^{n/i_p}\!\!\!\! v^{-\frac{p+1}{2p}}(v+t_2)^{-\frac{p+1}{2p}}\cdots(v+t_{p-1})^{-\frac{p+1}{2p}}(v+1)^{-\frac{p+1}{2p}}{\rm d} v\right)^2{\rm d} t_2\cdots {\rm d} t_{p-2}{\rm d} t_{p-1} \\ =\!:\,Cn\sum_{i_p=1}^\infty \frac{L^{2p}(i_p)}{i_p}I_{l,n}(i_p).\end{multline*}

$I_{l,n}(i_p)$ is bounded by $C_p < \infty$ from the proof of Lemma 3. Furthermore, for $l>2i_p$ we have the bound

\begin{equation*}I_{l,n}(i_p)\leq \Bigg(\int_{l/i_p-1}^\infty v^{-\frac{p+1}{2}}{\rm d} v\Bigg)^2\xrightarrow{l\rightarrow\infty}0.\end{equation*}

So, by the dominated convergence theorem we get

\begin{equation*}\text{var}\left[\frac{1}{\sqrt{n}\,}\tilde{S}_{l,n}\right]\leq C\sum_{i_p=1}^\infty \frac{L^{2p}(i_p)}{i_p}I_{l,n}(i_p)\xrightarrow{l\rightarrow\infty}0. \tag*{$\square$}\end{equation*}

Lemma 9. If the power rank is p, Assumptions 1, 2, and 3 hold, and additionally $\sum_{j=1}^{\infty}\frac{L^{2p}(\,j)}{j}<\infty$ , we have

(1) \begin{equation}\lim_{l\rightarrow\infty}\limsup_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_j-\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg)\Bigg]=0 .\end{equation}

Proof. We decompose the sum in the following way:

\begin{align*}\sum_{j=1}^n&\Bigg(X_j-\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg)\\&=\Bigg(\sum_{j=1}^n\Bigg(X_j-\sum_{k=1}^p G_\infty^{(k)}(0)Y_j^{(k)}\Bigg)-\sum_{j=1}^n\Bigg(X_{l,j}-\sum_{k=1}^pG_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg)\\& \quad +\sum_{j=1}^n\Big(G_\infty^{(p)}(0)Y_j^{(p)}-G_l^{(p)}(0)Y_{l,j}^{(p)}\Big) \\ &=\Bigg(\sum_{j=1}^n\Bigg(X_j-\sum_{k=1}^p G_\infty^{(k)}(0)Y_j^{(k)}\Bigg)-\sum_{j=1}^n\Bigg(X_{l,j}-\sum_{k=1}^pG_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg)\\& \quad +G_\infty^{(p)}(0)\sum_{j=1}^n\big(Y_j^{(p)}-Y_{l,j}^{(p)}\big)+\big(G_\infty^{(p)}(0)-G_l^{(p)}(0)\big)\sum_{j=1}^nY_{l,j}^{(p)},\end{align*}

where we used the assumption that the power rank is p, which means $G_\infty^{(1)}(0)=\cdots=G_\infty^{(p-1)}(0)=0$ . For the first summand we have, by [Reference Ho and Hsing11, line (3.4)],

\begin{equation*}\lim_{l\rightarrow\infty}\limsup_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_j-\sum_{k=1}^pG_\infty^{(k)}(0)Y_j^{(k)}-X_{l,j}+ \sum_{k=1}^pG_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg]=0.\end{equation*}

The convergence of the variance of the second summand follows from Lemma 8. For the third summand, along the lines of Lemmas 2, 3, and 5, we can prove that

\begin{equation*}\text{var}\Bigg[\sum_{j=1}^nY_{l,j}^{(p)}\Bigg] \leq Cn\sum_{j=1}^l\frac{L^{2p}(\,j)}{j}\leq Cn\sum_{j=1}^{\infty}\frac{L^{2p}(\,j)}{j}<\infty.\end{equation*}

By Lemma 7, $G_\infty^{(p)}(0)-G_l^{(p)}(0)\rightarrow 0$ , so the third summand converges to 0, too.

Lemma 10. If G has power rank p, Assumptions 1, 2, and 3 hold, and additionally $\sum_{j=1}^{\infty}\frac{L^{2p}(\,j)}{j}<\infty$, we have

\begin{equation*}\sigma_\infty^2\,:\!=\lim_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^nX_j\Bigg]=\lim_{l\rightarrow\infty}\lim_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg] ,\end{equation*}

and the limit is finite.

Proof. Because $X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}$ , $j\in\mathbb{N}$ , is an l-dependent sequence, we know that

\begin{equation*}\sigma_l^2\,:\!=\lim_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg]\end{equation*}

exists and is finite. For any $l_0$ large enough that the lim sup in (1) of Lemma 9 is finite, we conclude that

\begin{align*}\bar{\sigma}_\infty^2 & \,:\!= \limsup_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n X_j\Bigg]\\ & \leq 2\sigma_{l_0}\limsup_{n\rightarrow\infty}\Bigg\{\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_j-\Bigg(X_{l_0,j}-\sum_{k=1}^{p-1}G_{l_0}^{(k)}(0)Y_{l_0,j}^{(k)}\Bigg)\Bigg)\Bigg]\Bigg\}^{1/2}\\& \quad + \limsup_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_j-\Bigg(X_{l_0,j}-\sum_{k=1}^{p-1}G_{l_0}^{(k)}(0)Y_{l_0,j}^{(k)}\Bigg)\Bigg)\Bigg]+\sigma^2_{l_0}<\infty.\end{align*}

Now, for $l,l'\geq l_0$ ,

\begin{align*}\left|\sigma_l^2-\sigma_{l'}^2\right| & \leq \limsup_{n\rightarrow\infty}\Bigg|\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^nX_j\Bigg]-\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg]\Bigg|\\& \quad + \limsup_{n\rightarrow\infty}\Bigg|\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^nX_j\Bigg]-\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_{l',j}-\sum_{k=1}^{p-1}G_{l'}^{(k)}(0)Y_{l',j}^{(k)}\Bigg)\Bigg]\Bigg|.\end{align*}

For the first summand we have, by Lemma 9,

\begin{align*}\limsup_{n\rightarrow\infty}&\Bigg|\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^nX_j\Bigg]-\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg]\Bigg|\\& \leq \bar{\sigma}_\infty\sup_{l\geq l_0}\limsup_{n\rightarrow\infty}\Bigg\{\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_j-\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_{l}^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg)\Bigg]\Bigg\}^{1/2}\\& \quad + \sup_{l\geq l_0}\limsup_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_j-\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_{l}^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg)\Bigg]\xrightarrow{l_0\rightarrow\infty}0.\end{align*}

Treating the second summand in the same way, we conclude that $(\sigma_l^2)_{l\in\mathbb{N}}$ is a Cauchy sequence and has a finite limit. It remains to show that the limit equals $\sigma_\infty^2$. This can be seen from

\begin{align*}\big|\sigma_\infty^2&-\lim_{l\rightarrow\infty}\sigma_l^2\big|\\& \leq \limsup_{l\rightarrow\infty}\limsup_{n\rightarrow\infty}\Bigg|\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^nX_j\Bigg]-\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg]\Bigg| \\ & \leq \lim_{l\rightarrow\infty}\limsup_{n\rightarrow\infty}\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_j-\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg)\Bigg]\\& \quad + 2\bar{\sigma}_\infty \lim_{l\rightarrow\infty}\limsup_{n\rightarrow\infty}\Bigg\{\text{var}\Bigg[\frac{1}{\sqrt{n}\,}\sum_{j=1}^n\Bigg(X_j-\Bigg(X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}\Bigg)\Bigg)\Bigg]\Bigg\}^{1/2}\\&=0,\end{align*}

where we used Lemma 9.

5. Proof of main results

Proof of Theorem 1. By [Reference Ho and Hsing11, Theorem 3.2], we have

\begin{equation*}\text{var}\Bigg[\sum_{i=1}^n\big(X_i-\mathbb{E}[X_i]-G_{\infty}^{(p)}(0)Y_i^{(p)}\big)\Bigg]=O\left(n\right).\end{equation*}

Since $\sum_{j=1}^\infty\frac{L^{2p}(\,j)}{j}=\infty$ implies $\tilde{L}(n)\rightarrow\infty$, this remainder is negligible after the normalization by $\sqrt{n\tilde{L}(n)}$. Proposition 1 and Slutsky's lemma complete the proof.

Proof of Theorem 2. For fixed $l\in\mathbb{N}$ , we split the sum into two parts:

\begin{multline*}\frac{1}{\sqrt{n}\,}\sum_{i=1}^n(X_i-\mathbb{E}[X_i])=\frac{1}{\sqrt{n}\,}\sum_{i=1}^n\Bigg(X_{l,i}-\mathbb{E}[X_{l,i}]-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,i}^{(k)}\Bigg)\\+\frac{1}{\sqrt{n}\,}\sum_{i=1}^n\Bigg(X_i-\Bigg(X_{l,i}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,i}^{(k)}\Bigg)-\big(\mathbb{E}[X_i]-\mathbb{E}[X_{l,i}]\big)\Bigg),\end{multline*}

where, as before,

\begin{equation*}X_{l,n} \,:\!= G\Bigg(\sum_{i=0}^la_i\varepsilon_{n-i}\Bigg) , \qquad Y_{l,n}^{(k)} \,:\!= \sum_{0\leq i_1<i_2<\cdots<i_k<l}a_{i_1}a_{i_2}\cdots a_{i_k}\varepsilon_{n-i_1}\varepsilon_{n-i_2}\cdots \varepsilon_{n-i_k} .\end{equation*}

The first summand is l-dependent, and the variance of the summands is finite by Assumption 2, so it converges to a normal limit with variance $\sigma_l^2$ (see [Reference Hoeffding and Robbins12]). By [Reference Billingsley3, Theorem 4.2], the statement of our theorem follows from

\begin{align*}&\text{for all}\ \epsilon>0:\\&\lim_{l\rightarrow\infty}\limsup_{n\rightarrow\infty}\mathbb{P}\Bigg(\Bigg|\frac{1}{\sqrt{n}\,}\sum_{i=1}^n\Bigg(X_i-\Bigg(X_{l,i}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,i}^{(k)}\Bigg)-\big(\mathbb{E}[X_i]-\mathbb{E}[X_{l,i}]\big)\Bigg)\Bigg|\geq \epsilon\Bigg)=0,\\&\lim_{l\rightarrow\infty}\sigma_l^2=\sigma_\infty^2.\end{align*}

The first statement follows from the Chebyshev inequality and Lemma 9, and the second convergence holds by Lemma 10.

Proof of Lemma 1. Let us recall the definitions $Y_0^{(1)}\,:\!=\sum_{i=0}^\infty a_i\varepsilon_{-i}$ and $Y_{l,0}^{(1)}\,:\!=\sum_{i=0}^l a_i\varepsilon_{-i}$. These random variables are normally distributed with mean 0 and variances

\begin{equation*}s^2 \,:\!= \text{var}\big[Y_0^{(1)}\big]=\sum_{i=0}^\infty a_i^2, \qquad s^2_l \,:\!= \text{var}\big[Y_{l,0}^{(1)}\big]=\sum_{i=0}^l a_i^2. \end{equation*}

Obviously, $(s^2_l)_{l\in\mathbb{N}}$ is a nondecreasing sequence with $s^2_1>0$, bounded by $s^2$. First note that, for all $l\in\mathbb{N}$,

\begin{align*}\mathbb{E}\left[G^2\big(Y_{l,0}^{(1)}\big)\right] & = \int_{-\infty}^\infty G^2(u)\frac{1}{\sqrt{2\pi s_l^2}\,}{\rm e}^{-\frac{u^2}{2s_l^2}}{\rm d} u\leq \sup_{l\in\mathbb{N}}\sqrt{\frac{s^2}{s_l^2}}\int_{-\infty}^\infty G^2(u)\frac{1}{\sqrt{2\pi s^2}\,}{\rm e}^{-\frac{u^2}{2s^2}}{\rm d} u\\& = C\mathbb{E}\left[G^2\big(Y_{0}^{(1)}\big)\right]<\infty.\end{align*}

Now, because G is measurable and square integrable with respect to the normal distribution, we can find, for any $\delta>0$, a finite collection of intervals $I_1,\ldots,I_K$ and real numbers $c_1,\ldots,c_K$ such that, for $G_K(z)\,:\!=\sum_{k=1}^K c_k \textbf{1}_{I_k}(z)$, we have

\begin{equation*}\mathbb{E}\left[\Big(G\big(Y_{0}^{(1)}\big)-G_K\big(Y_{0}^{(1)}\big)\Big)^2\right]\leq\delta.\end{equation*}

With the same arguments as before, we also have

\begin{equation*}\mathbb{E}\left[\Big(G\big(Y_{l,0}^{(1)}\big)-G_K\big(Y_{l,0}^{(1)}\big)\Big)^2\right]\leq C\delta.\end{equation*}

As this holds uniformly in l, it is enough to show the statement of the lemma for $G_K$ . It is clear that the sequence $\big(\big(Y_{l,0}^{(1)},Y_{0}^{(1)}\big)\big)_{l\in\mathbb{N}}$ converges in distribution to $\big(Y_{0}^{(1)},Y_{0}^{(1)}\big)$ . The limit vector has a distribution concentrated on the diagonal but with zero mass for single points. The boundary of any product $I_{k_1}\times I_{k_2}$ intersected with the diagonal consists of at most two points, so it is a continuity set of the limit distribution. Consequently, we have

\begin{align*}\lim_{l\rightarrow\infty}\mathbb{E}\left[\textbf{1}_{I_{k_1}}(Y_{l,0}^{(1)})\textbf{1}_{I_{k_2}}(Y_{0}^{(1)})\right] & = \lim_{l\rightarrow\infty}\mathbb{E}\left[\textbf{1}_{I_{k_1}}(Y_{l,0}^{(1)})\textbf{1}_{I_{k_2}}(Y_{l,0}^{(1)})\right]\\& = \mathbb{E}\left[\textbf{1}_{I_{k_1}}(Y_{0}^{(1)})\textbf{1}_{I_{k_2}}(Y_{0}^{(1)})\right].\end{align*}

A short calculation gives

\begin{align*}\left(G_K(x)-G_K(y)\right)^2 & = \sum_{k_1=1}^K\sum_{k_2=1}^Kc_{k_1}c_{k_2}\textbf{1}_{I_{k_1}}(x)\textbf{1}_{I_{k_2}}(x) -2\sum_{k_1=1}^K\sum_{k_2=1}^Kc_{k_1}c_{k_2}\textbf{1}_{I_{k_1}}(x)\textbf{1}_{I_{k_2}}(y) \\ & \quad +\sum_{k_1=1}^K\sum_{k_2=1}^Kc_{k_1}c_{k_2}\textbf{1}_{I_{k_1}}(y)\textbf{1}_{I_{k_2}}(y).\end{align*}

This leads to the statement of the lemma for $G_K$ .

Acknowledgement

The authors are grateful to the two anonymous referees for their careful reading of the paper and for many useful suggestions which greatly improved this article.

References

Avram, F. and Taqqu, M. S. (1987). Noncentral limit theorems and Appell polynomials. Ann. Prob. 15, 767–775.
Bai, S. and Taqqu, M. S. (2019). Sensitivity of the Hermite rank. Stochastic Process. Appl. 129, 822–840.
Billingsley, P. (1968). Convergence of Probability Measures. John Wiley, Chichester.
Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1989). Regular Variation (Encyc. Math. Appl. Vol. 27). Cambridge University Press.
Breuer, P. and Major, P. (1983). Central limit theorems for non-linear functionals of Gaussian fields. J. Multivariate Anal. 13, 425–441.
Buchmann, B. and Chan, N. H. (2009). Integrated functionals of normal and fractional processes. Ann. Appl. Prob. 19, 49–70.
Burkholder, D. L. (1966). Martingale transforms. Ann. Math. Statist. 37, 1494–1504.
Castell, F., Guillotin-Plantard, N. and Pène, F. (2013). Limit theorems for one and two-dimensional random walks in random scenery. Ann. Inst. H. Poincaré Prob. Statist. 49, 506–528.
Davydov, Y. A. (1970). The invariance principle for stationary processes. Theory Prob. Appl. 15, 487–498.
Dobrushin, R. L. and Major, P. (1979). Non-central limit theorems for non-linear functionals of Gaussian fields. Z. Wahrscheinlichkeitsth. 50, 27–52.
Ho, H. C. and Hsing, T. (1997). Limit theorems for functionals of moving averages. Ann. Prob. 25, 1636–1669.
Hoeffding, W. and Robbins, H. (1948). The central limit theorem for dependent random variables. Duke Math. J. 15, 773–780.
Kesten, H. and Spitzer, F. (1979). A limit theorem related to a new class of self-similar processes. Z. Wahrscheinlichkeitsth. 50, 5–25.
Nourdin, I., Peccati, G. and Reinert, G. (2010). Invariance principles for homogeneous sums: universality of Gaussian Wiener chaos. Ann. Prob. 38, 1947–1985.
Sly, A. and Heyde, C. (2008). Nonstandard limit theorem for infinite variance functionals. Ann. Prob. 36, 796–805.
Surgailis, D. (1982). Zones of attraction of self-similar multiple integrals. Lithuanian Math. J. 22, 327–340.
Taqqu, M. S. (1975). Weak convergence to fractional Brownian motion and to the Rosenblatt process. Z. Wahrscheinlichkeitsth. 31, 287–302.
Taqqu, M. S. (1979). Convergence of integrated processes of arbitrary Hermite rank. Z. Wahrscheinlichkeitsth. 50, 53–83.
Wu, W. B. (2003). Empirical processes of long-memory sequences. Bernoulli 9, 809–831.
Wu, W. B., Huang, Y. and Zheng, W. (2010). Covariance estimation for long-memory processes. Adv. Appl. Prob. 42, 137–157.
Wu, W. B. and Woodroofe, M. (2004). Martingale approximations for sums of stationary processes. Ann. Prob. 32, 1674–1690.