1 Introduction
Let
$\mathbb {T}=\mathbb {R}/\mathbb {Z}$
be the unit circle and let
$\{\ell _n\}_{n\geq 1}$
be a sequence of real numbers decreasing to zero with
$0<\ell _n<1$
and
$\sum _{n=1}^{\infty }\ell _n=\infty $
. Let
$\{\omega _n\}_{n\geq 1}$
be a sequence of independent and identically distributed random variables, each uniformly distributed on
$\mathbb {T}$
. Let
$I_{n}=(\omega _n,\omega _n+\ell _n)$
be the random interval with left endpoint
$\omega _n$
and length
$\ell _n$
. Throughout this paper,
$(\Omega , \mathcal {F}, \mathbb {P})$
denotes the underlying probability space,
$\lambda $
stands for the Lebesgue measure on
$\mathbb {T}$
and
$\mathbb {P}\times \lambda $
stands for the product probability measure on the product space
$\Omega \times \mathbb {T}$
. By abuse of notation, we sometimes write
$I_n=I_n(\omega )$
with
$\omega \in \Omega $
to indicate the dependence of
$I_n$
on the underlying probability space
$(\Omega , \mathcal {F}, \mathbb {P})$
.
The limsup set
$E=\limsup _{n\rightarrow \infty }I_n$
is usually called the random covering set, which consists of all points in
$\mathbb {T}$
that are covered by the random intervals
$\{I_n\}_{n\geq 1}$
infinitely many times. The Dvoretzky covering problem [Reference Dvoretzky5] aims to find necessary and sufficient conditions on
$\ell _n$
for
$E=\mathbb {T}$
almost surely. After a series of contributions by Kahane et al., Shepp [Reference Shepp21] solved this problem by showing that
$E=\mathbb {T}$
almost surely if and only if
$$\sum _{n=1}^{\infty }\frac {1}{n^{2}}\exp (\ell _1+\cdots +\ell _n)=\infty. \tag{1.1}$$
(See [Reference Fan7, Reference Kahane17] for more background and [Reference Ekström and Persson6, Reference Fan and Karagulyan10–Reference Järvenpää, Järvenpää, Koivusalo, Li, Suomala and Xiao16, Reference Seuret20] for various dimension results related to this problem.)
If Shepp’s condition (1.1) is satisfied, Carleson asked for a description of the infinite set of intervals covering a given point. Fan [Reference Fan7] advanced two ways to resolve this problem. One way is to introduce, for any
$x\in \mathbb {T}$
and any
$n\geq 1$
, the covering time of the point x up to time n, which is defined by
$$N_{n}(x)=\sum _{k=1}^{n}\mathbf {1}_{I_{k}}(x).$$
Here
$\mathbf {1}_{I_{k}}(x)$
denotes the indicator function of the random interval
$I_{k}$
. The second method was fully developed in [Reference Fan and Kahane9] and the multifractal analysis of the asymptotic behaviours of
$N_n(x)$
was first studied in [Reference Fan8] and refined in [Reference Barral and Fan4].
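As a concrete illustration (ours, not part of the original argument), the covering time $N_{n}(x)=\sum _{k=1}^{n}\mathbf {1}_{I_{k}}(x)$ can be computed directly, with the random intervals taken modulo 1; the function names below are illustrative.

```python
def in_interval(x, omega, ell):
    """Indicator of x lying in the open arc (omega, omega + ell) on T = R/Z."""
    return 0.0 < (x - omega) % 1.0 < ell

def covering_time(x, omegas, lengths):
    """Covering time N_n(x): how many of the arcs I_k = (omega_k, omega_k + ell_k) contain x."""
    return sum(in_interval(x, w, l) for w, l in zip(omegas, lengths))
```

For example, `covering_time(0.1, [0.0, 0.5], [0.25, 0.25])` returns `1`: the arc $(0,0.25)$ contains $0.1$ while $(0.5,0.75)$ does not; a wrap-around arc such as $(0.9, 1.1)$ contains $0.05$.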
When
$\sum _{n=1}^{\infty } \ell _n=\infty $
, it was pointed out in [Reference Barral and Fan4] that
$\mathbb {P}$
-almost surely,
$$\lim _{n\rightarrow \infty }\frac {N_{n}(x)}{\sum _{k=1}^{n}\ell _{k}}=1 \quad \text {for } \lambda \text {-almost every } x\in \mathbb {T},$$
giving a quantitative description of the infinite covering time. This may be viewed as a quenched self-normalised law of large numbers, that is, the self-normalised law of large numbers holds
$\mathbb {P}$
-almost surely. The terminology ‘quenched’ is motivated by the theory of random walks in a random environment and
$\Omega $
is the space of ‘environment parameters’.
Write
$$L_{n}=\sum _{k=1}^{n}\ell _{k} \quad \text{and} \quad Y_{n}=Y_{n}(\omega , x)=\frac {N_{n}(x)-L_{n}}{V_{n}}.$$
The variance of
$N_n(x)$
is
$$V_{n}^{2}=\sum _{k=1}^{n}\ell _{k}(1-\ell _{k}),$$
which satisfies
$\lim _{n\rightarrow \infty }L_n/V_n^2=1$
. We prove that the self-normalised quantity
$Y_n$
satisfies a quenched central limit theorem. (See [Reference Abdelkader and Aimino1–Reference Ayyer, Liverani and Stenlund3, Reference Kifer18] for a quenched central limit theorem and other quenched limit theorems for random dynamical systems.) We state our main theorem as follows.
Theorem 1.1. Let
$\ell _{n}=c n^{-\tau }$
with
$c>0$
and
$0<\tau < 1$
. Then
$\mathbb {P}$
-almost surely,
$Y_{n}$
converges in law to the standard normal distribution.
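Theorem 1.1 can be illustrated numerically: fix one sample of the environment $\{\omega _n\}$ and examine the $\lambda$-distribution of $Y_n$ over $x$. The sketch below (ours, not part of the paper) assumes $Y_n=(N_n(x)-L_n)/V_n$ with $L_n=\sum _{k=1}^{n}\ell _k$ and $V_n^2=\sum _{k=1}^{n}\ell _k(1-\ell _k)$; the parameter values and tolerances are illustrative choices.

```python
import random

def quenched_Yn(n, c, tau, grid, seed=0):
    """For one fixed environment {omega_k}, return Y_n(x) = (N_n(x) - L_n)/V_n
    sampled on a uniform grid of x in T, with ell_k = c * k**(-tau)."""
    rng = random.Random(seed)
    lengths = [c * k ** (-tau) for k in range(1, n + 1)]
    omegas = [rng.random() for _ in range(n)]
    L = sum(lengths)                              # L_n = sum of the lengths
    V = sum(l * (1 - l) for l in lengths) ** 0.5  # V_n = sqrt(sum ell_k (1 - ell_k))
    ys = []
    for i in range(grid):
        x = i / grid
        N = sum(1 for w, l in zip(omegas, lengths) if 0.0 < (x - w) % 1.0 < l)
        ys.append((N - L) / V)
    return ys

ys = quenched_Yn(n=1000, c=0.5, tau=0.5, grid=2000)
mean = sum(ys) / len(ys)
std = (sum((y - mean) ** 2 for y in ys) / len(ys)) ** 0.5
```

In our runs the empirical $\lambda$-mean is close to $0$ and the $\lambda$-standard deviation close to $1$, consistent with a standard normal limit for a typical environment.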
Theorem 1.1 will be derived from the next proposition. We remark that, by our assumption on
$\ell _n$
, we have
$L_n\sim V_n^2 \sim {c}n^{1-\tau }/{(1-\tau )}$
. Here and throughout the paper, we employ the Vinogradov and Bachmann–Landau notation: for a function f and a positive-valued function g, we write
$f\sim g$
if ${f}/{g}\rightarrow 1$,
$f\ll g$
or
$f=O(g)$
if there exists a constant C such that
$|f|\leq C g$
pointwise, and
$f\asymp g$
if
$f\ll g$
and
$g\ll f$
simultaneously. For
$x, y\in \mathbb {T}$
, we define
$$Z_{n}=Z_{n}(\omega , x, y)=Y_{n}(\omega , x)-Y_{n}(\omega , y)=\frac {1}{V_{n}}\sum _{k=1}^{n}(\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y)).$$
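The asymptotics $L_n\sim V_n^2\sim {c}n^{1-\tau }/{(1-\tau )}$ noted above can be sanity-checked numerically; this is our own illustration and plays no role in the proofs.

```python
def asymptotic_ratios(n, c, tau):
    """Compare L_n = sum ell_k and V_n^2 = sum ell_k(1 - ell_k)
    with the claimed asymptotic c * n**(1 - tau) / (1 - tau)."""
    lengths = [c * k ** (-tau) for k in range(1, n + 1)]
    L = sum(lengths)
    V2 = sum(l * (1 - l) for l in lengths)
    target = c * n ** (1 - tau) / (1 - tau)
    return L / target, V2 / target

# Both ratios approach 1 as n grows.
rL, rV = asymptotic_ratios(n=100_000, c=0.5, tau=0.5)
```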
Proposition 1.2. Let
$\ell _n$
be as in Theorem 1.1 and let
$Y_n, Z_n, V_n$
be defined as above. Assume that there exists a constant
$\sigma $
with
$\sigma>\min \{{(1-\tau )}/{2}, {\tau }/{2}\}$
such that, for all
$t\in \mathbb {R}$
and
$n\geq 1$
with
$|t/V_n|$
sufficiently small:
-
(1)
$|\mathbb {E}_{\mathbb {P}\times \lambda }(e^{it Y_{n}})-e^{-t^{2}/2}|\ll |t|^{3}e^{-t^{2}/3}n^{-\sigma };$
-
(2)
$|\mathbb {E}_{\mathbb {P}\times \lambda \times \lambda }(e^{it Z_{n}})-e^{-t^{2}}|\ll (1+|t|^3) n^{-\sigma }$ ; and
-
(3) for
$\epsilon>0$ and n sufficiently large,
$\mathbb {P}\times \lambda (|Y_{n}|\geq \varepsilon )\leq 2e^{-\varepsilon ^{2}/4}.$
Then
$\mathbb {P}$
-almost surely,
$Y_n$
converges in law to the standard normal distribution.
Our method was inspired by [Reference Ayyer, Liverani and Stenlund3], which established a quenched central limit theorem for a smooth observable of random sequences of iterated linear hyperbolic maps on the torus. In contrast to the quenched limit theorems in [Reference Abdelkader and Aimino1–Reference Ayyer, Liverani and Stenlund3, Reference Kifer18], the sequence of random variables we consider is not stationary, which causes additional difficulties.
2 Preliminaries
In this section, we collect some preliminary results. In the first two lemmas,
$\{\widetilde {X}_{n}\}_{n\geq 1}$
is assumed to be a sequence of independent random variables on a probability space
$(\widetilde {\Omega }, \widetilde {\mathcal {F}}, \widetilde {\mathbb {P}})$
with
$\mathbb {E}_{\widetilde {\mathbb {P}}}(\widetilde {X}_{n})=0$
. For any
$n\geq 1$
, write
$\widetilde {\sigma }_n^2=\mathbb {E}_{\widetilde {\mathbb {P}}}(\widetilde {X}_{n}^{2}), \widetilde {V}_{n}^2=\sum _{i=1}^{n}\widetilde {\sigma }_i^2$
and
$\widetilde {B}_{n}=\widetilde {V}_{n}^{-3}\sum _{i=1}^{n}\mathbb {E}_{\widetilde {\mathbb {P}}}(|\widetilde {X}_{i}|^{3})$
.
Lemma 2.1 [Reference Petrov19, Lemma 5.1].
Let
$\{\widetilde {X}_{n}\}_{n\geq 1}, \widetilde {V}_{n}, \widetilde {B}_n$
be defined as above, and assume that
$\mathbb {E}_{\widetilde {\mathbb {P}}}(|\widetilde {X}_{n}|^{3})<\infty $
for each
$n\geq 1$
. Write
$\widetilde {Y}_n=\widetilde {V}_{n}^{-1}\sum _{i=1}^{n}\widetilde {X}_{i}$
. Then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu7.png?pub-status=live)
Lemma 2.2 (Bernstein inequality, [Reference Petrov19, Theorem 2.8]).
Let
$\{\widetilde {X}_{n}\}_{n\geq 1}, \widetilde {\sigma }_{n}, \widetilde {V}_n$
be defined as above, and assume that
$\widetilde {\sigma }_n^2<\infty $
for each
$n\geq 1$
. Suppose that there exists a constant
$H>0$
such that
$|\mathbb {E}_{\widetilde {\mathbb {P}}}(\widetilde {X}_n^m)|\leq \tfrac 12m!\widetilde {\sigma }_n^2H^{m-1}$
for all
$n\geq 1$
and
$m\geq 2$
. Then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu8.png?pub-status=live)
In the next two lemmas,
$(\Omega , \mathcal {F}, \mathbb {P})$
denotes the underlying probability space,
$\lambda $
denotes the Lebesgue measure on the unit circle
$\mathbb {T}$
and
$I_{n}=(\omega _n,\omega _n+\ell _n)$
is the random interval with left endpoint
$\omega _n$
and length
$\ell _n$
. For a measurable function f on
$\Omega \times \mathbb {T}$
, the next lemma allows us to estimate
$\mathbb {P}(\{\mathbb {E}_{\lambda }(f)\geq a\})$
by virtue of
$\mathbb {P}\times \lambda (\{f\geq b\})$
, which is easier to treat via Markov’s inequality and Fubini’s theorem.
Lemma 2.3 [Reference Ayyer, Liverani and Stenlund3, Section 5].
Let
$M>0$
and let
$f=f(\omega , x)$
be a measurable function on
$\Omega \times \mathbb {T}$
with
$0\leq f\leq M$
. Then, for any
$\alpha>0$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu9.png?pub-status=live)
Proof. Note that
$\mathbb {P}\times \lambda (\{f\geq \alpha \})= \mathbb {E}_{\mathbb {P}\times \lambda }(\mathbf {1}_{\{f\geq \alpha \}}) \geq \mathbb {E}_{\mathbb {P}}(\mathbb {E}_{\lambda }(\mathbf {1}_{\{f\geq \alpha \}})\cdot \mathbf {1}_{\{\mathbb {E}_{\lambda }(f)\geq 2\alpha \}})$
. Since
$M\cdot \mathbb {E}_{\lambda }(\mathbf {1}_{\{f\geq \alpha \}})+\alpha \geq \mathbb {E}_{\lambda }(f)$
, the bound
$\mathbb {E}_{\lambda }(f)\geq 2\alpha $
implies that
$\mathbb {E}_{\lambda }(\mathbf {1}_{\{f\geq \alpha \}})\geq \alpha /M$
. Thus, by Fubini’s theorem and Markov’s inequality,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu10.png?pub-status=live)
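The pointwise bound used in the proof, $M\cdot \mathbf {1}_{\{f\geq \alpha \}}+\alpha \geq f$ for $0\leq f\leq M$, and hence its averaged form $M\,\mathbb {E}_{\lambda }(\mathbf {1}_{\{f\geq \alpha \}})+\alpha \geq \mathbb {E}_{\lambda }(f)$, can be checked on simulated data (our illustration; function names are ours):

```python
import random

def averaged_bound_holds(values, M, alpha):
    """Check M * E(1_{f >= alpha}) + alpha >= E(f), the averaged form of the
    pointwise inequality M * 1_{f >= alpha} + alpha >= f for 0 <= f <= M."""
    mean_f = sum(values) / len(values)
    frac = sum(v >= alpha for v in values) / len(values)
    return M * frac + alpha >= mean_f

rng = random.Random(1)
vals = [rng.uniform(0.0, 3.0) for _ in range(10_000)]
```

The bound holds for every sample, since the pointwise inequality is exact: if $f<\alpha$ the right-hand side already exceeds $f$, and if $f\geq \alpha$ then $f\leq M=M\cdot \mathbf {1}_{\{f\geq \alpha \}}$.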
Lemma 2.4. For any fixed
$x, y\in \mathbb {T}$
and any integers
$k, m\geq 1$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn2.png?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn3.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn4.png?pub-status=live)
Proof. By the binomial formula,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu11.png?pub-status=live)
Notice that
$|\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y)|$
takes only 0 and 1 as values. For fixed
$x, y\in \mathbb {T}$
, the function
$|\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y)|$
as a function of
$\omega $
is the indicator function of the event that
$\omega $
belongs to one of the two intervals
$(x-\ell _k, x)$
and
$(y-\ell _k, y)$
but not the other, that is, to the symmetric difference of the two intervals. There are two cases: when
$|x-y|>\ell _k$
, the two intervals are disjoint and the event has probability
$2\ell _k$
; when
$|x-y|\leq \ell _k$
, the symmetric difference is the union of two disjoint intervals of length
$|x-y|$
and the event has probability
$2|x-y|$
. Then (2.2) follows immediately.
If m is even, as we just proved,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu12.png?pub-status=live)
If m is odd, by the fact that
$\mathbb {E}_{\mathbb {P}}(\mathbf {1}_{I_{k}}(x)\cdot \mathbf {1}_{I_{k}}(y))=\max \{\ell _k-|x-y|, 0\}$
and the binomial formula,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu13.png?pub-status=live)
For the last equality, we have used
$\sum _{j=0}^{m}(-1)^j C_{m}^{j}=(1-1)^m=0$
.
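The case analysis above, read as $\mathbb {E}_{\mathbb {P}}|\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y)|=2\min \{\ell _k, |x-y|\}$, can be verified by numerically integrating over $\omega$ (our illustration, not part of the proof):

```python
def expected_indicator_gap(x, y, ell, grid=100_000):
    """Numerically integrate over omega the event that exactly one of x, y
    lies in (omega, omega + ell) on T, approximating E_P |1_I(x) - 1_I(y)|."""
    def inside(p, omega):
        return 0.0 < (p - omega) % 1.0 < ell
    hits = sum(inside(x, i / grid) != inside(y, i / grid) for i in range(grid))
    return hits / grid
```

With $\ell =0.2$, the integral is close to $2\ell =0.4$ for $x=0.1$, $y=0.7$ (disjoint case) and close to $2|x-y|=0.2$ for $x=0.1$, $y=0.2$ (overlapping case).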
3 Proof of Proposition 1.2
Recall that
$$Y_{n}=\frac {1}{V_{n}}\sum _{k=1}^{n}(\mathbf {1}_{I_{k}}(x)-\ell _{k}) \quad \text{and} \quad Z_{n}=\frac {1}{V_{n}}\sum _{k=1}^{n}(\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y)),$$
where
$x, y\in \mathbb {T}$
. By Lévy’s continuity theorem, we need to prove that,
$\mathbb {P}$
-almost surely,
$\mathbb {E}_{\lambda }(e^{itY_{n}})\rightarrow e^{-t^{2}/2}$
for all
$t\in \mathbb {R}$
. For simplicity, let
$$\xi _{n}(t)=\xi _{n}(\omega , t)=\mathbb {E}_{\lambda }(e^{itY_{n}}).$$
Since
$\mathbb {R}$
can be covered by countably many compact sets, it suffices to prove that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu16.png?pub-status=live)
where
$T>0$
is a fixed integer. This is implied by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu17.png?pub-status=live)
We start by estimating the second moment
$\mathbb {E}_{\mathbb {P}}(|\xi _n(t)-e^{-t^{2}/2}|^{2})$
. First, we can write
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu18.png?pub-status=live)
Note that
$|\xi _n|^2=\mathbb {E}_{\lambda \times \lambda }(e^{itZ_{n}})$
by Fubini’s theorem. Hence, taking expectations with respect to
$\mathbb {P}$
leads to
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu19.png?pub-status=live)
We apply hypotheses (1) and (2) of Proposition 1.2 to compute an
$L^2$
estimate of the random variable
$\xi _n(t)-e^{-t^{2}/2}$
, which gives
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn5.png?pub-status=live)
By Chebyshev’s inequality and (3.1), for any
$\varepsilon>0$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn6.png?pub-status=live)
Now we choose b with
$\max \{{(\tau +1)}/{2}, 1-{\tau }/{2}\}<b<1$
. We divide the interval of integers
$[2^k, 2^{k+1})$
into
$2^{(1-b)k}$
intervals of length
$2^{bk}$
, denoted by
$\{I_{k, p}\}_{p=1}^{2^{(1-b)k}}$
, and we divide the interval of real numbers
$[-T, T]$
into
$2Tk$
intervals of length
$k^{-1}$
, denoted by
$\{J_{k, q}\}_{q=1}^{2Tk}$
. In this way,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu20.png?pub-status=live)
so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu21.png?pub-status=live)
In each ‘rectangle’
$I_{k, p}\times J_{k, q}$
, we choose a point
$(n_{k, p}, t_{k, q})$
and we compare the maximum on this ‘rectangle’ with the value at the point
$(n_{k, p}, t_{k, q})$
. For
$(n, t)\in I_{k, p}\times J_{k, q}$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu22.png?pub-status=live)
The third term on the right-hand side of the above inequality does not depend on t but only on
$t_{k, q}$
. By using the inequality
$|e^{ia}-e^{ib}|\leq |a-b|$
, the first, second and fourth terms can be estimated by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu23.png?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu24.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu25.png?pub-status=live)
The fourth term tends to zero. So we only need to prove that,
$\mathbb {P}$
-almost surely,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn7.png?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn8.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn9.png?pub-status=live)
To prove (3.3)–(3.5), we bound the tail probability of certain events and then apply the Borel–Cantelli lemma. Let
$\epsilon>0$
be arbitrarily small. First,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn10.png?pub-status=live)
Note that
$Y_n-Y_{n_{k, p}}$
is a weighted sum of
$\mathbf {1}_{I_{k}}-\ell _{k}$
and that the probability law of
$\mathbf {1}_{I_{k}}-\ell _{k}$
is independent of x, so
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn11.png?pub-status=live)
Write
$S_n=\sum _{k=1}^{n}(\mathbf {1}_{I_{k}}-\ell _{k})$
and assume that
$n> n_{k, p}$
(the case
$n<n_{k, p}$
is the same). Since
$Y_n-Y_{n_{k, p}}=(V_n^{-1}-V_{n_{k, p}}^{-1})S_n+V_{n_{k, p}}^{-1}(S_n-S_{n_{k, p}}) = a_n S_n+b_n(S_n-S_{n_{k, p}})$
, say,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn12.png?pub-status=live)
By (2.1), the hypotheses of Lemma 2.2 are satisfied for the sequence
$\{\mathbf {1}_{I_{n}}-\ell _{n}\}_{n\geq 1}$
with the constant
$H=1$
. Now apply Lemma 2.2 twice to
$S_n$
and
$S_n-S_{n_{k, p}}$
, which are both sums of independent random variables under the probability
$\mathbb {P}$
. This gives
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn13.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn14.png?pub-status=live)
Since
$2^k\leq n < 2^{k+1}$
,
$|n-n_{k, p}|\leq 2^{bk}$
and
$V_n^2\sim {c}n^{1-\tau }/{(1-\tau )}$
, by the mean value theorem,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn15.png?pub-status=live)
Recall that
$a_n=V_n^{-1}-V_{n_{k, p}}^{-1}$
,
$b_n=V_{n_{k, p}}^{-1}$
and
$\max \{{(\tau +1)}/{2}, 1-{\tau }/{2}\}< b <1$
. Combining (3.7)–(3.11),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn16.png?pub-status=live)
Substituting (3.12) into (3.6) yields
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu26.png?pub-status=live)
Since the sum over k of the terms on the right-hand side of the last inequality is finite, by the convergence part of the Borel–Cantelli lemma, the events
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu27.png?pub-status=live)
happen only finitely many times with probability one. Since
$\epsilon>0$
is arbitrarily small, this shows that (3.3) holds
$\mathbb {P}$
-almost surely.
Second, for the proof of (3.4), observe that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn17.png?pub-status=live)
By the choice of b, we have
$\sigma>1-b$
, so the sum over k of the terms in (3.13) is finite. Applying Borel–Cantelli again gives the
$\mathbb {P}$
-almost sure convergence of (3.4).
Third, for the proof of (3.5),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu28.png?pub-status=live)
Since the sum over k of the terms in the last inequality is finite, the Borel–Cantelli lemma implies the
$\mathbb {P}$
-almost sure convergence of (3.5). This completes the proof.
4 Proof of Theorem 1.1 modulo Proposition 1.2
We only need to verify hypotheses (1)–(3) in Proposition 1.2.
Hypothesis (1). For fixed
$x\in \mathbb {T}$
and each
$k\geq 1$
the first three moments of $\mathbf {1}_{I_{k}}(x)-\ell _{k}$ are easily computed: that is,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu29.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu30.png?pub-status=live)
As
$x(1-x)\leq 1/4$
for
$0\leq x\leq 1$
, it follows that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu31.png?pub-status=live)
By Lemma 2.1, for any
$|t|\leq V_n/4$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn18.png?pub-status=live)
Since (4.1) holds uniformly in
$x\in \mathbb {T}$
, we can integrate both sides with respect to x. Hypothesis (1) then follows by Fubini’s theorem and the fact
$V_n\asymp n^{(1-\tau )/2}$
.
Hypothesis (2). We also appeal to Lemma 2.1, but, in this case, for the sequence of random variables
$\{\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y)\}_{k\geq 1}$
for any fixed
$x, y\in \mathbb {T}$
. By Lemma 2.4, for any
$k\geq 1$
, we have
$\mathbb {E}_{\mathbb {P}}(\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y))=0$
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu32.png?pub-status=live)
Since
$\ell _k=c k^{-\tau }$
is strictly decreasing, there exists
$k_0=\lfloor c^{1/\tau }|x-y|^{-1/\tau }\rfloor +1$
such that
$\ell _k<|x-y|$
if and only if
$k\geq k_0$
. We denote
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu33.png?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu34.png?pub-status=live)
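The threshold index $k_0$ can be checked numerically (our illustration, with sample parameter values): $\ell _k$ first drops below $|x-y|$ at $k=k_0$.

```python
import math

def ell(c, tau, k):
    """Interval lengths ell_k = c * k**(-tau)."""
    return c * k ** (-tau)

def k0_threshold(c, tau, d):
    """k_0 = floor(c**(1/tau) * d**(-1/tau)) + 1: the first index k with ell_k < d."""
    return math.floor(c ** (1 / tau) * d ** (-1 / tau)) + 1
```

For instance, with $c=1$, $\tau =1/2$ and $|x-y|=0.15$, the threshold is $k_0=45$: $\ell _{44}\approx 0.1508\geq 0.15$ while $\ell _{45}\approx 0.1491<0.15$.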
By Lemma 2.1, for all
$t\in \mathbb {R}$
with
$|t|\leq {1}/{(4\sqrt {2}\widetilde {B}_{n})}$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn19.png?pub-status=live)
In addition, the inequality
$|e^{ia}-e^{ib}|\leq |a-b|$
implies that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn20.png?pub-status=live)
Now we estimate the right-hand sides of (4.2) and (4.3). For any sufficiently large n, if
$x, y\in \mathbb {T}$
satisfies
$|x-y|>n^{-\tau /2}$
, then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn21.png?pub-status=live)
Since
$V_n^2\sim {c}n^{1-\tau }/{(1-\tau )}$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn22.png?pub-status=live)
In addition, by (2.3), for any fixed
$x, y\in \mathbb {T}$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu35.png?pub-status=live)
where
$H=1$
and
$\widetilde {\sigma }_k^2=\mathbb {E}_{\mathbb {P}}((\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y))^2)$
. Write
$T_n=\sum _{k=1}^n (\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y))$
, where
$x, y$
satisfy
$|x-y|> n^{-\tau /2}$
. Since
$|\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y)|\leq 2$
, applying Lemma 2.2 to the sequence
$\{\mathbf {1}_{I_{k}}(x)-\mathbf {1}_{I_{k}}(y)\}_{k\geq 1}$
with
$a=\widetilde {V}_n^{1+\eta }$
gives
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu36.png?pub-status=live)
where
$\eta>0$
is a constant arbitrarily close to 0. Combining this with (4.4) yields
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn23.png?pub-status=live)
Substituting (4.5) and (4.6) into (4.3), we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqn24.png?pub-status=live)
Combining (4.2), (4.4) and (4.7), we see that, for any
$x, y\in \mathbb {T}$
with
$|x-y|> n^{-\tau /2}$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu37.png?pub-status=live)
holds for all
$t\in \mathbb {R}$
with
$|t|\leq {\widetilde {V}_{n}}/{4\sqrt {2}}$
. Therefore, by Fubini’s theorem,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu38.png?pub-status=live)
where
$\sigma $
is a constant satisfying
$\sigma>\min \{{(1-\tau )}/{2}, {\tau }/{2}\}$
.
Hypothesis (3). We first appeal to Lemma 2.2 for the sequence of random variables
$\{\mathbf {1}_{I_{k}}(x)-\ell _{k}\}_{k\geq 1}$
for any fixed
$x\in \mathbb {T}$
. From the discussion in Section 3,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu39.png?pub-status=live)
with
$H=1$
and
$\sigma _k^2=\mathbb {E}_{\mathbb {P}}((\mathbf {1}_{I_{k}}(x)-\ell _{k})^2)$
. Moreover,
$V_n^2=\sum _{k=1}^n \ell _k(1-\ell _k)$
. Applying Lemma 2.2 to
$\{\mathbf {1}_{I_{k}}(x)-\ell _{k}\}_{k\geq 1}$
and
$a=\varepsilon V_n$
gives
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20250127050641315-0107:S0004972724001230:S0004972724001230_eqnu40.png?pub-status=live)
where the last inequality is due to the fact that
$\varepsilon V_n<V_n^2$
for n large enough. The inequality in Hypothesis (3) follows from Fubini’s theorem.
Acknowledgement
The authors express their gratitude to the anonymous referee for many helpful suggestions, which greatly improved the paper and clarified the proofs.