1. Introduction
While there are different definitions and models for long range dependence, it is quite common to call a time series long range dependent (l.r.d.) if its autocovariances are not summable. In this case, a stronger norming than in the independent case is needed for the partial sums to converge. Furthermore, the limit distribution need not be Gaussian.
The central and non-central limit theorems under such a slow decay of correlations were first established [Reference Dobrushin and Major10,Reference Taqqu17,Reference Taqqu18] for subordinated Gaussian random variables. A marginal transformation of the time series might lead to a different shape of the limit distribution. A central role in the proof is played by the approximation of the partial sums by sums of Hermite polynomials and the Hermite rank.
Another popular model for l.r.d. time series is linear processes with slowly decaying coefficients. The convergence of the partial sum process of such linear processes was proved by Davydov [Reference Davydov9]. Marginally transformed l.r.d. linear processes have also been studied [Reference Avram and Taqqu1,Reference Ho and Hsing11,Reference Surgailis16]. As above, the limit distribution might be Gaussian or not, depending on the function used to transform the linear process. Techniques in this case also include approximation by sums of polynomials, where the Appell rank or the power rank play the role of the Hermite rank.
An interesting case is the situation where the autocovariance of a stationary process ${(X_n)_{n\in\mathbb{N}}}$ behaves like $\text{cov}(X_i,X_{i+k})= L(|k|)|k|^{-1}$ (for some slowly varying function L). In this borderline situation the covariance might or might not be summable, depending on the slowly varying function. For subordinated Gaussian nearly l.r.d. processes, Breuer and Major [Reference Breuer and Major5] proved the central limit theorem; similar results for continuous time Gaussian processes were given by Buchmann and Chan [Reference Buchmann and Chan6]. Sly and Heyde [Reference Sly and Heyde15] investigated subordinated Gaussian processes without finite second moments and also gave results for the boundary case. For random walks in random scenery introduced by Kesten and Spitzer [Reference Kesten and Spitzer13], Castell, Guillotin-Plantard, and Pène [Reference Castell, Guillotin-Plantard and Pène8] studied the borderline case.
There exist some results on subordinated linear processes in the borderline case between short and long range dependence. In the case of power rank 1, the limit distribution of the empirical process, both for summable and for non-summable covariances, was established by Wu [Reference Wu19]. For linear processes without a marginal transformation, the asymptotic distribution of the covariance estimator was investigated by Wu, Huang, and Zheng [Reference Wu, Huang and Zheng20]. This is closely related to partial sums of subordinated linear processes with power rank 2.
The aim of this paper is to prove the central limit theorem for subordinated linear processes of general power rank in the borderline case of short and long range dependence. We will treat both cases of summable and not summable covariances. In the next section we introduce some notation and our assumptions. The main results can be found in Section 3. Auxiliary results follow in Section 4, and Section 5 contains the proofs of the main results.
2. Notation and assumptions
We are interested in studying subordinated linear processes, that is, stationary processes $(X_n)_{n\in\mathbb{Z}}$ of the form
where the sequence $(\varepsilon_i)_{i\in\mathbb{Z}}$ is independent and identically distributed (i.i.d.). If the innovations have a finite variance and the coefficients $(a_i)_{i\in\mathbb{N}_0}$ are square summable, such a process exists. The asymptotic behavior is influenced by the decay of the coefficients $(a_i)_{i\in\mathbb{N}_0}$ and the power rank. The power rank p of G is the smallest k which gives a non-zero value of $G_\infty^{(k)}(0)$ , where
That means $G_\infty^{(p)}(0)\neq 0$ and $G_\infty^{(k)}(0)=0$ for $k< p$ . For Gaussian processes, the power rank coincides with the Hermite rank [Reference Bai and Taqqu2]. Note that G might not be continuous, because we only take derivatives of the expectation. See Assumption 2 for more details.
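To illustrate the definition we add two simple examples (ours, for orientation only), assuming the usual convention, as in [Reference Ho and Hsing11], that $G_\infty(x)\,:\!=\mathbb{E}\big[G\big(x+\sum_{i=0}^\infty a_i\varepsilon_{-i}\big)\big]$. Write $Y\,:\!=\sum_{i=0}^\infty a_i\varepsilon_{-i}$. For $G(x)=x^2$,
\[
G_\infty(x)=\mathbb{E}\big[(x+Y)^2\big]=x^2+\mathbb{E}\big[Y^2\big],\qquad G_\infty^{(1)}(0)=0,\quad G_\infty^{(2)}(0)=2\neq 0,
\]
so the power rank is 2, whereas for $G(x)=x^3$,
\[
G_\infty(x)=\mathbb{E}\big[(x+Y)^3\big]=x^3+3x\,\mathbb{E}\big[Y^2\big]+\mathbb{E}\big[Y^3\big],\qquad G_\infty^{(1)}(0)=3\,\mathbb{E}\big[Y^2\big]\neq 0,
\]
so the power rank is 1. In particular, the power rank depends on the transformation G and not only on the underlying linear process.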
Our aim is to prove the convergence of the partial sum $T_n\,:\!=\sum_{j=1}^n X_j$ to a normal limit (after normalization). To this end, we define
and write $Y_n=Y_n^{(p)}$ for short. In the l.r.d. case, we will approximate the partial sum $T_n$ by the reduced partial sum
We need the following conditions on the sequence $(\varepsilon_n)_{n\in\mathbb{Z}}$ and the coefficients of the linear process $X_n=G\big(\sum_{i=0}^{\infty}a_i\varepsilon_{n-i}\big)$:
Assumption 1. $(\varepsilon_n)_{n\in\mathbb{Z}}$ fulfills
• $(\varepsilon_n)_{n\in\mathbb{Z}}$ i.i.d.,
• $\mathbb{E}[\varepsilon_n]=0$ ,
• $\sigma^2=\operatorname{Var}[\varepsilon_n]>0$ ,
• $\mathbb{E}[\varepsilon^8_n]<\infty$ .
$(a_n)_{n\in\mathbb{N}_0}$ fulfills
• $a_i= i^{-\frac{p+1}{2p}}L(i)$ for $i\geq 1$ ,
• L slowly varying, $p\in\mathbb{N}$ .
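The following heuristic sketch is ours and is included for orientation only; it is not used in the proofs. For $p\geq 2$ (for $p=1$ an additional logarithmic, hence slowly varying, factor appears), the choice $a_i= i^{-\frac{p+1}{2p}}L(i)$ gives
\[
\sum_{i=1}^\infty a_ia_{i+k}\sim L^2(k)\,k^{1-\frac{p+1}{p}}\int_0^\infty x^{-\frac{p+1}{2p}}(1+x)^{-\frac{p+1}{2p}}\,{\rm d} x=C\,L^2(k)\,k^{-\frac{1}{p}}\qquad (k\rightarrow\infty).
\]
A transformation of power rank p is driven, to first order, by p-fold products of innovations, so its autocovariance decays like $\big(L^2(k)k^{-1/p}\big)^p=L^{2p}(k)k^{-1}$. This is exactly the borderline behavior described in the introduction, and its summability is decided by whether $\sum_{j=1}^\infty\frac{L^{2p}(\,j)}{j}$ converges.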
The conditions on the innovations are standard; the conditions on the coefficients define the borderline case between short range and long range dependence, as the heuristic sketch above indicates. We will also need some moment assumptions on $X_n$:
Assumption 2. Assume that
and
Finally, we will use another assumption introduced in [Reference Ho and Hsing11] in order to approximate the linear process by an m-dependent process. Define
Assumption 3. Assume that for some $\lambda>0$ , $\tau\in\mathbb{N}$ , and for all $x\in\mathbb{R}$ , $r=1,\ldots,p+2$ , the derivative $G_j^{(r)}(x)$ exists and that
where we take the supremum over all subsets I of $\mathbb{N}$ .
While Assumption 3 is not easy to verify in general, in some simple situations it is clear that this assumption holds. If G has bounded and continuous derivatives up to order $p+2$ , the assumption holds for any distribution of the $\varepsilon_i$ . On the other hand, in the Gaussian case the density functions $f_j$ of $\sum_{i=0}^ja_i\varepsilon_{-i}$ have uniformly bounded derivatives. Note that $G_j(x)=\int G(x+y)f_j(y)\,{\rm d} y=\int G(y)f_j(y-x)\,{\rm d} y$ , so the assumption will hold for any G of bounded variation.
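To make the last claim precise in a simple setting, we sketch the standard convolution argument, additionally assuming (as is the case for Gaussian densities) that $f_j^{(r-1)}$ is bounded and vanishes at infinity. Since a function of bounded variation is bounded, integration by parts gives
\[
G_j^{(r)}(x)=\frac{{\rm d}^r}{{\rm d} x^r}\int G(y)f_j(y-x)\,{\rm d} y=(-1)^r\int G(y)f_j^{(r)}(y-x)\,{\rm d} y=(-1)^{r-1}\int f_j^{(r-1)}(y-x)\,{\rm d} G(y),
\]
so that $\sup_x\big|G_j^{(r)}(x)\big|\leq\sup_z\big|f_j^{(r-1)}(z)\big|\operatorname{TV}(G)$, uniformly in j as soon as the derivatives of the $f_j$ are uniformly bounded.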
$\mathcal{F}_j\,:\!=\sigma(\varepsilon_j,\varepsilon_{j-1},\ldots)$ denotes the sigma field generated by $\varepsilon_j,\varepsilon_{j-1},\ldots$ , and $\|\cdot\|_2\,:\!=\sqrt{\mathbb{E}(\cdot^2)}$ denotes the $L_2$ norm. In our statements and proofs, C is a generic constant which might have different values even in one chain of inequalities, but does not depend on n.
3. Main results
We will state our results in the short range and long range dependent cases separately. In the l.r.d. case, a stronger norming is needed than the usual $n^{-1/2}$ and the limit behavior of the partial sum $T_n$ is dominated by the behavior of $S_n$ .
Theorem 1. If G has power rank p, $\sum_{j=1}^\infty\frac{L^{2p}(\,j)}{j}=\infty$ , and Assumptions 1, 2, and 3 hold, then
with
for a slowly varying function $\tilde{L}$ .
The slowly varying function is defined as $\tilde{L}(n)\,:\!=C_p\sigma^{2p}\sum_{j=1}^n\frac{L^{2p}(\,j)}{j}$ (where the exact value of $C_p$ can be found in the proof of Lemma 3). The proof is based on martingale approximations; an alternative would be an approach via Gaussian Wiener chaos [Reference Nourdin, Peccati and Reinert14] (see also Remark 1 after Proposition 1).
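For orientation we add an illustration (ours, not part of the theorem): if $L\equiv 1$, i.e. $a_i=i^{-\frac{p+1}{2p}}$, then $\sum_{j=1}^\infty\frac{L^{2p}(\,j)}{j}=\sum_{j=1}^\infty\frac{1}{j}=\infty$, so Theorem 1 applies, and
\[
\tilde{L}(n)=C_p\sigma^{2p}\sum_{j=1}^n\frac{1}{j}\sim C_p\sigma^{2p}\log n .
\]
Since the variance of the reduced partial sum $S_n$ grows like $n\tilde{L}(n)$ (see Lemma 3 and Proposition 1), the partial sums then have to be normalized by a factor of order $\sqrt{n\log n}$ instead of the usual $\sqrt{n}$.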
In the short range dependent case, the norming is $n^{-1/2}$ as in the i.i.d. case.
Theorem 2. If G has power rank p, $\sum_{j=1}^\infty\frac{L^{2p}(\,j)}{j}<\infty$ , and Assumptions 1, 2, and 3 hold, then
with
As Gaussian processes can be represented as linear processes with normally distributed $(\varepsilon_n)_{n\in\mathbb{Z}}$, it is possible to apply our theorems to Gaussian processes, too. As explained before, Assumption 3 holds for Gaussian random variables. We will show in Lemma 1 below that Assumption 2 also holds in the Gaussian case. So we recover the theorems of Breuer and Major [Reference Breuer and Major5] (for the special case of time series, not for general random fields).
Lemma 1. Let $(\varepsilon_n)_{n\in\mathbb{Z}}$ be i.i.d. N(0,1) random variables and $(a_n)_{n\in\mathbb{N}_0}$ be real numbers such that $a_0>0$ and $\sum_{i=0}^\infty a_i^2<\infty$. Furthermore, let G be a measurable function such that
Then
4. Auxiliary results
We will give the proofs in the case $p\geq 3$. The proofs in the cases $p=1,2$ are simpler, but not completely the same. We will mention the different arguments needed for the case $p=2$ in the proofs of Lemmas 3 and 5. For $p=2$, see also [Reference Wu, Huang and Zheng20].
Lemma 2. Under Assumption 1, for $S_n\,:\!=\sum_{j=1}^nY_j^{(p)}$,
Proof. Because $\mathbb{E}\left[\varepsilon_{j-i_1}\varepsilon_{j-i_2}\cdots \varepsilon_{j-i_k}\,\big|\,\mathcal{F}_0\right]=0$ if $j-i_1>0$ and $i_1,\ldots,i_k$ are pairwise different, we have that
Note that the random variables $\varepsilon_{-i_1}, \varepsilon_{-i_2}, \ldots, \varepsilon_{-i_p}$ , $0\leq i_1<i_2<\cdots<i_p<\infty$ , are centered and uncorrelated, and that
It follows that
We will treat these two summands separately. Because L is slowly varying, we have, for $k\leq i_j$ ,
and consequently
where we used that $\sum_{i=n}^\infty a_i^2\leq Cn^{-\frac{p+1}{p}+1}L^2(n)=Cn^{-\frac{1}{p}}L^2(n)$. For the second summand, $B_n$, we will also use this to conclude that
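The tail bound for $\sum_{i=n}^\infty a_i^2$ used above is an instance of Karamata's theorem; we record the computation, a standard fact included only for convenience. Since $a_i^2=i^{-\frac{p+1}{p}}L^2(i)$ is regularly varying with index $-\frac{p+1}{p}<-1$,
\[
\sum_{i=n}^\infty a_i^2=\sum_{i=n}^\infty i^{-\frac{p+1}{p}}L^2(i)\sim\frac{n^{1-\frac{p+1}{p}}L^2(n)}{\frac{p+1}{p}-1}=p\,n^{-\frac{1}{p}}L^2(n)\qquad (n\rightarrow\infty),
\]
which in particular yields $\sum_{i=n}^\infty a_i^2\leq Cn^{-\frac{1}{p}}L^2(n)$.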
It would be natural to decompose $S_n\,:\!=\sum_{j=1}^nY_j^{(p)}$ as a martingale in the following way:
with
However, this triangular array is not row-wise stationary, so we consider the following triangular array of martingale differences instead:
Lemma 3. Under Assumption 1,
where $\tilde{L}(n)=C_p\sigma^{2p}\sum_{j=1}^n\frac{L^{2p}(\,j)}{j}$ for some constant $C_p$ and $\approx$ denotes asymptotic equivalence.
Proof. As $M_n(\,j)$ , $1\leq j\leq n$ , is a martingale difference sequence, the summands are uncorrelated and we start by studying each summand separately:
Without loss of generality we can assume that $a_0=0$ , because otherwise we could study the linear process with coefficients $\tilde{a}_0=0$ and $\tilde{a}_i=a_{i-1}$ for $i\geq 1$ . By the slow variation of L and the uniform convergence theorem, we can find $k_1(m), k_2(m)\rightarrow\infty$ as $m\rightarrow \infty$ such that
and $k_1(m)/m\rightarrow 0$ and $k_2(m)/m\rightarrow \infty$ . We will break up $B(i_2,i_3,\ldots,i_p)$ into three parts:
We will treat these three sums separately. For the second summand, we have
as $i_p\rightarrow\infty$ . We can bound this sum from below and above with an integral, using the monotonicity of the integrated function:
as $i_p\rightarrow\infty$ , and consequently
For the third summand, first choose $\delta>0$ such that $p\big(\frac{p+1}{2p}-\delta\big)>1$ . By Potter’s theorem [Reference Bingham, Goldie and Teugels4, Theorem 1.5.6], there is a constant C such that $L(k)/L(i_p)\leq C (k/i_p)^\delta$ for $k\geq i_p$ , so we have
as $i_p\rightarrow\infty$ . The first summand $B_1$ can be treated in the same way, because for every $\delta>0$ , there is a C such that $L(k)/L(i_p)\leq C (k/i_p)^{-\delta}$ for $k\leq i_p+k_1(i_p)$ . Thus, $B\approx B_1$ and we have the asymptotic equivalence (for $n\rightarrow\infty$ )
The approximation of the inner sum by the integral follows in the same way as the approximation of $B_2$ by an integral, making use of the monotonicity of the integrand. This immediately leads to
as $n\rightarrow\infty$ . Note that in the case $p=2$ , the outer $p-2$ integrals would not exist. Additionally, we have to show that the integral (the constant $C_p$ ) is finite:
where we used $v^{-\frac{p+1}{2p}}(v+t_2)^{-\frac{p+1}{2p}}\leq v^{-\frac{4p-1}{4p}}t_2^{-\frac{5}{4p}}$ for $0<t_2<v<\infty$ .
Lemma 4. Under Assumption 1,
• $\tilde{L}(x)\,:\!=C_p\sigma^{2p}\sum_{j=1}^{\lfloor x\rfloor}\frac{L^{2p}(\,j)}{j}$ is slowly varying;
• $L^{2p}(n)=o(\tilde{L}(n))$ .
Proof. For a slowly varying function L, the function $x\mapsto L(\lfloor x\rfloor)x/\lfloor x\rfloor$ is also slowly varying. Applying [Reference Bingham, Goldie and Teugels4, Proposition 1.5.9a] to this function leads to the statement of the lemma.
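For the reader's convenience we record, in paraphrased form, the facts from the theory of regular variation that are being used here (see [Reference Bingham, Goldie and Teugels4]): if $\ell$ is slowly varying and locally bounded, then $\tilde{\ell}(x)\,:\!=\int_1^x\frac{\ell(t)}{t}\,{\rm d} t$ is again slowly varying, and $\ell(x)/\tilde{\ell}(x)\rightarrow 0$ if the integral diverges; if the integral converges, then $\tilde{\ell}$ tends to a finite positive limit and, because $\ell$ is slowly varying, the convergence of $\int^\infty\frac{\ell(t)}{t}\,{\rm d} t$ forces $\ell(x)\rightarrow 0$, so that again $\ell(x)=o(\tilde{\ell}(x))$. Applied with $\ell=L^{2p}$, and comparing the sum $\sum_{j\leq x}\frac{L^{2p}(\,j)}{j}$ with the corresponding integral, this yields both statements of the lemma.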
Lemma 5. Under Assumption 1,
Proof. Note that $D_n(\,j)-M_n(\,j)$ , $j=1,\ldots, n$ , is a martingale difference sequence, so it suffices to show that
By the definitions, we have that
We will treat these two summands separately. For the second summand, we have that
From the second part of Lemma 4, we have that $L^{2p}(n)=o(\tilde{L}(n))$ . For the first summand, we use an approximation by an integral again:
Note that in the case $p=2$ , the outer $p-2$ integrals would not exist.
Lemma 6. Under Assumption 1,
Proof. For brevity, we write $m_4\,:\!=\mathbb{E}\left[\varepsilon_0^4\right]$ . Recall that
so
Using the Burkholder inequality [Reference Burkholder7] for the martingale difference sequence $\tilde{d}_{i_p}$ , $i_p=p-1,\ldots,n$ , with
and then the Minkowski inequality, we get
Repeating this argument, we can conclude that
In the same way as in the proof of Lemma 3, it follows that
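For the reader's convenience we recall, in the form needed above, the standard chain of inequalities just used: for a martingale difference sequence $(d_i)_{1\leq i\leq n}$ with finite fourth moments,
\[
\mathbb{E}\bigg[\bigg(\sum_{i=1}^n d_i\bigg)^{4}\bigg]\leq C\,\mathbb{E}\bigg[\bigg(\sum_{i=1}^n d_i^2\bigg)^{2}\bigg]\leq C\bigg(\sum_{i=1}^n\big\|d_i^2\big\|_2\bigg)^{2}=C\bigg(\sum_{i=1}^n\big(\mathbb{E}\big[d_i^4\big]\big)^{1/2}\bigg)^{2},
\]
where the first inequality is Burkholder's inequality and the second one is the Minkowski inequality in $L_2$.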
Proposition 1. Under Assumption 1, for $S_n\,:\!=\sum_{j=1}^nY_j^{(p)}$ ,
Proof. Because of Lemmas 5 and 2 we have that
We will show the asymptotic normality of $\sum_{j=1}^nM_n(\,j)$ using a martingale central limit theorem from [Reference Wu and Woodroofe21]. Note that by Lemma 3, we have that $\text{var}\big[\sum_{j=1}^n M_n(\,j)\big]\approx n\tilde{L}(n)$ . First, the Lyapunov condition holds because of Lemma 6:
It remains to show condition (11) of [Reference Wu and Woodroofe21], i.e.
in probability. First note that we can decompose $M_n(0)$ for $k<n$ into
We use this decomposition to calculate the covariance:
For the first of the three summands, we have that $M_k^2(0)$ is measurable with respect to $\sigma(\varepsilon_0,\ldots,\varepsilon_{-k})$ and thus is independent of $\mathcal{F}_{-k-1}$ . It follows that
For the second summand, we use the Hölder inequality repeatedly to obtain
Dealing with the third summand in a similar way, we get
In the same way as in the proof of Lemma 6, we can conclude that
Now we are able to bound the variance of the sum:
because $\tilde{L}$ is slowly varying by Lemma 4. With the Chebyshev inequality, [Reference Wu and Woodroofe21, (11)] follows.
Remark 1. In the special case in which $(\varepsilon_n)_{n\in\mathbb{Z}}$ are i.i.d. N(0,1), $Y_j^{(p)}$ is a Wiener chaos process of order p. In order to prove the central limit theorem $S_n/ (n\tilde{L}(n))^{1/2} \Rightarrow N(0,1)$ in this special case, one could apply the fourth moment theorem in [Reference Nourdin, Peccati and Reinert14]; see Theorem 3.1 therein, which was proved using the tools of Malliavin calculus. A sufficient condition for the central limit theorem is $T_1(S_n/ (n\tilde{L}(n))^{1/2}) \to 0$, where $T_1(\cdot)$, defined in their Theorem 3.1, is a functional of Wiener chaos which involves complicated products of tensors. It turns out that the verification of the latter sufficient condition in our setting is highly nontrivial. Hence the details will not be pursued in this paper.
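For context, we also recall the classical fourth moment theorem of Nualart and Peccati, to which the criterion of [Reference Nourdin, Peccati and Reinert14] is closely related: if $(F_n)_{n\in\mathbb{N}}$ is a sequence of elements of a fixed Wiener chaos of order $p\geq 2$ with $\mathbb{E}[F_n^2]\rightarrow v>0$, then
\[
F_n\Rightarrow N(0,v)\qquad\text{if and only if}\qquad\mathbb{E}\big[F_n^4\big]\rightarrow 3v^2 .
\]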
For the next lemmas we have to introduce some notation:
Lemma 7. Under Assumptions 1 and 3, we have that
Proof. Define $\tilde{Y}_{l,n}^{(1)}\,:\!=Y_n^{(1)}-Y_{l,n}^{(1)}=\sum_{i=l+1}^\infty a_i\varepsilon_{n-i}$ . In the same way as in [Reference Ho and Hsing11, Lemma 2.1],
Note that
so $\tilde{Y}_{l,0}^{(1)}$ converges to 0 in probability. By Assumption 3, $G_l^{(p)}$ is differentiable in a neighborhood of 0 and thus continuous. It follows that $G_l^{(p)}(\tilde{Y}_{l,0}^{(1)})$ converges in probability to $G_l^{(p)}(0)$ . Because $G_l^{(p)}(\tilde{Y}_{l,0}^{(1)})$ is also $L_4$ bounded by Assumption 3, the convergence to 0 of $G_l^{(p)}(0)-G_l^{(p)}(\tilde{Y}_{l,0}^{(1)})$ in $L_1$ follows and gives the statement of the lemma.
Lemma 8. Under Assumption 1 and additionally $\sum_{j=1}^{\infty}\frac{L^{2p}(\,j)}{j}<\infty$ , we have that
Proof. First note that by Lemma 4 we have $L^{2p}(n)=o(\tilde{L}(n))$ . Because we assume $\sum_{j=1}^{\infty}\frac{L^{2p}(\,j)}{j}<\infty$ , $\tilde{L}$ is bounded and hence $L(n)\rightarrow 0$ as $n\rightarrow \infty$ . So by Lemma 2 we have
Additionally, $\mathbb{E}\big[ Y_{l,j}^{(p)}\,\big|\,\mathcal{F}_0\big]=0$ for all $j>l$ , and consequently, for all $l\in\mathbb{N}$ ,
So, we only have to deal with the variance of
Because the random variables $(\varepsilon_j)_{j\in\mathbb{Z}}$ are uncorrelated, we can follow the argument of the proof of Lemma 3 to obtain
$I_{l,n}(i_p)$ is bounded by $C_p < \infty$ from the proof of Lemma 3. Furthermore, for $l>2i_p$ we have the bound
So, by the dominated convergence theorem we get
Lemma 9. If the power rank is p, Assumptions 1, 2, and 3 hold, and additionally $\sum_{j=1}^{\infty}\frac{L^{2p}(\,j)}{j}<\infty$ , we have
Proof. We decompose the sum in the following way:
where we used the assumption that the power rank is p, which means $G_\infty^{(1)}(0)=\cdots=G_\infty^{(p-1)}(0)=0$ . For the first summand we have, by [Reference Ho and Hsing11, line (3.4)],
The convergence of the variance of the second summand follows from Lemma 8. For the third summand, along the lines of Lemmas 2, 3, and 5, we can prove that
By Lemma 7, $G_\infty^{(p)}(0)-G_l^{(p)}(0)\rightarrow 0$ , so the third summand converges to 0, too.
Lemma 10. If G has power rank p, $\sum_{j=1}^{\infty}\frac{L^{2p}(\,j)}{j}<\infty$ , and Assumptions 1, 2, and 3 hold, we have
and the limit is finite.
Proof. Because $X_{l,j}-\sum_{k=1}^{p-1}G_l^{(k)}(0)Y_{l,j}^{(k)}$ , $j\in\mathbb{N}$ , is an l-dependent sequence, we know that
exists and is finite. For any $l_0$ large enough that the lim sup in (1) of Lemma 9 is finite, we conclude that
Now, for $l,l'\geq l_0$ ,
For the first summand we have, by Lemma 9,
Treating the second summand in the same way, we conclude that $(\sigma_l^2)_{l\in\mathbb{N}}$ is a Cauchy sequence and has a finite limit. It remains to show that the limit equals $\sigma_\infty^2$ . This can be seen easily by
where we used Lemma 9.
5. Proof of main results
Proof of Theorem 1. By [Reference Ho and Hsing11, Theorem 3.2], we have
Proposition 1 and Slutsky’s lemma complete the proof.
Proof of Theorem 2. For fixed $l\in\mathbb{N}$ , we split the sum into two parts:
where, as before,
The first summand is a partial sum of an l-dependent sequence whose terms have finite variance by Assumption 2, so after normalization it converges to a normal limit with variance $\sigma_l^2$ (see [Reference Hoeffding and Robbins12]). By [Reference Billingsley3, Theorem 4.2], the statement of our theorem follows from
Obviously, the first statement follows from the Chebyshev inequality and Lemma 9, and the second convergence holds by Lemma 10.
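The result from [Reference Billingsley3] used here can be paraphrased as follows, with generic placeholders $W_n$, $W_{l,n}$, $W_l$, and W that are not notation of this paper: if $W_{l,n}\Rightarrow W_l$ as $n\rightarrow\infty$ for every fixed l, if $W_l\Rightarrow W$ as $l\rightarrow\infty$, and if
\[
\lim_{l\rightarrow\infty}\limsup_{n\rightarrow\infty}\mathbb{P}\big(|W_n-W_{l,n}|>\epsilon\big)=0\qquad\text{for every } \epsilon>0,
\]
then $W_n\Rightarrow W$. In our proof, $W_n$ is the normalized partial sum, $W_{l,n}$ its l-dependent approximation, $W_l$ the $N(0,\sigma_l^2)$ limit, and W the normal limit of Theorem 2.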
Proof of Lemma 1. Let us recall the definitions $Y_0^{(1)}\,:\!=\sum_{i=0}^\infty a_i\varepsilon_{-i}$ and $Y_{l,0}^{(1)}\,:\!=\sum_{i=0}^l a_i\varepsilon_{-i}$ . The random variables have a normal distribution with mean 0 and variance
Obviously, $(s^2_l)_{l\in\mathbb{N}}$ is a nondecreasing sequence with $s^2_1>0$ that is bounded by $s^2$ . First note that, for all $l\in\mathbb{N}$ ,
Now, because of the measurability of G, we can find, for any $\delta>0$ , a finite collection of intervals $I_1,\ldots,I_K$ and real numbers $c_1,\ldots,c_K$ such that, for $G_K(z)\,:\!=\sum_{k=1}^K c_k \textbf{1}_{I_k}(z)$ , we have
With the same arguments as before, we also have
As this holds uniformly in l, it is enough to show the statement of the lemma for $G_K$ . It is clear that the sequence $\big(\big(Y_{l,0}^{(1)},Y_{0}^{(1)}\big)\big)_{l\in\mathbb{N}}$ converges in distribution to $\big(Y_{0}^{(1)},Y_{0}^{(1)}\big)$ . The limit vector has a distribution concentrated on the diagonal, but with zero mass at single points. The boundary of any product $I_{k_1}\times I_{k_2}$ intersected with the diagonal consists of at most two points, so it is a continuity set of the limit distribution. Consequently, we have
A short calculation gives
This leads to the statement of the lemma for $G_K$ .
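The short calculation referred to above is of the following type (our reconstruction, included for the reader's convenience): for the step function $G_K=\sum_{k=1}^K c_k \textbf{1}_{I_k}$,
\[
\mathbb{E}\big[G_K\big(Y_{l,0}^{(1)}\big)G_K\big(Y_{0}^{(1)}\big)\big]=\sum_{k_1,k_2=1}^{K}c_{k_1}c_{k_2}\,\mathbb{P}\big(Y_{l,0}^{(1)}\in I_{k_1},\,Y_{0}^{(1)}\in I_{k_2}\big)\longrightarrow\sum_{k_1,k_2=1}^{K}c_{k_1}c_{k_2}\,\mathbb{P}\big(Y_{0}^{(1)}\in I_{k_1}\cap I_{k_2}\big)=\mathbb{E}\big[G_K^2\big(Y_{0}^{(1)}\big)\big]
\]
as $l\rightarrow\infty$, using the convergence on the continuity sets $I_{k_1}\times I_{k_2}$ established above.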
Acknowledgement
The authors are grateful to the two anonymous referees for their careful reading of the paper and for many useful suggestions which greatly improved this article.