1. Introduction
In recent years there has been considerable interest in the study of the elephant random walk (ERW) since it was introduced in [15]; see the excellent thesis [13] for a detailed bibliography. The standard ERW is described as follows. Let $p \in (0,1)$ and $s \in [0,1]$. We consider a sequence $X_1,X_2,\ldots$ of random variables taking values in $\{+1, -1\}$ given by
(1.1) \begin{align} \mathbb{P}(X_1 = +1) = s = 1 - \mathbb{P}(X_1 = -1); \end{align}
$\{U_n\colon n \geq 1\}$ a sequence of independent random variables, independent of $X_1$, with $U_n$ having a uniform distribution over $\{1, \ldots, n\}$; and, for $n \in \mathbb{N}\,:\!=\,\{ 1,2,\ldots\}$,
(1.2) \begin{align} X_{n+1} = \begin{cases} X_{U_n} & \text{with probability}\ p, \\ -X_{U_n} & \text{with probability}\ 1-p. \end{cases} \end{align}
The ERW $\{W_n\}$ is defined by $W_n= \sum_{k=1}^n X_k$ for $n\in\mathbb{N}$.
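For readers who wish to experiment numerically, here is a minimal Python sketch of the dynamics (1.1)-(1.2); the function name and interface are ours, chosen for illustration.

import random

def simulate_erw(n, p, s, seed=None):
    """Return one trajectory (W_1, ..., W_n) of the standard ERW."""
    rng = random.Random(seed)
    X = [1 if rng.random() < s else -1]                # (1.1): X_1 = +1 with probability s
    for k in range(1, n):
        past = X[rng.randrange(k)]                     # X_{U_k}, U_k uniform over {1, ..., k}
        X.append(past if rng.random() < p else -past)  # (1.2): repeat w.p. p, reverse w.p. 1-p
    W, traj = 0, []
    for x in X:
        W += x
        traj.append(W)
    return traj

For instance, simulate_erw(10_000, 0.9, 0.5) produces one trajectory in the superdiffusive regime $\alpha = 0.8$.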
Gut and Stadtmüller [9, 10] studied a variation of this model, described in [9, Section 3.2]; Aguech and El Machkouri [1] studied a similar variation, described in [1, Section 2]. We present the two different formalizations of the model given in [1, 10]; our work is based on the first formalization.
1.1. Triangular array setting
Consider a sequence $\{m_n \colon n \in \mathbb{N}\}$ of positive integers satisfying
Let $X_1,X_2,\ldots$ be the sequence defined by (1.1) and (1.2). We define a triangular array of random variables $\{\{S^{(n)}_k \colon 1 \leq k \leq n\} \colon n \in \mathbb{N}\}$ as follows. Let $\{Y^{(n)}_k \colon 1 \leq k \leq n\}$ be random variables with
(1.4) \begin{align} Y^{(n)}_k = \begin{cases} X_k & \text{for}\ 1 \leq k \leq m_n, \\ X^{(n)}_k & \text{for}\ m_n \lt k \leq n, \end{cases} \end{align}
where, for $m_n \lt k \leq n$,
(1.5) \begin{align} X^{(n)}_k = \begin{cases} X_{U_{k,n}} & \text{with probability}\ p, \\ -X_{U_{k,n}} & \text{with probability}\ 1-p. \end{cases} \end{align}
Here, ${\mathcal U}_n \,:\!=\, \{U_{k, n}\colon m_n \lt k \leq n\}$ is an independent and identically distributed (i.i.d.) collection of uniform random variables over $\{1, \ldots , m_n\}$, and $\{{\mathcal U}_n\colon n \in \mathbb{N}\}$ is an independent collection. Finally, for $1 \leq k \leq n$ let $S^{(n)}_k \,:\!=\, \sum_{i=1}^k Y^{(n)}_i$. We note that for fixed $n \in \mathbb{N}$, the sequence $\{S^{(n)}_k\colon 1 \leq k \leq n\}$ is a random walk with increments in $\{+1, -1\}$. However, the sequence $\{S^{(n)}_n\colon n \in \mathbb{N}\}$ does not have such a representation. We study properties of the sequence $\{T_n \colon n\in \mathbb{N}\}$ given by
(1.6) \begin{align} T_n \,:\!=\, S^{(n)}_n. \end{align}
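The triangular array dynamics can be sampled in the same way. In the Python sketch below (names ours), the first $m_n$ steps evolve as a standard ERW, while each of the remaining $n-m_n$ steps looks back only at the first $m_n$ steps via a fresh uniform index, as in (1.5); in particular the later steps are never added to the memory.

import random

def simulate_T(n, m_n, p, s, seed=None):
    """Return one sample of T_n = S^(n)_n in the triangular array setting."""
    rng = random.Random(seed)
    X = [1 if rng.random() < s else -1]        # ordinary ERW steps X_1, ..., X_{m_n}
    for k in range(1, m_n):
        past = X[rng.randrange(k)]
        X.append(past if rng.random() < p else -past)
    T = sum(X)                                 # contribution of the first m_n steps
    for _ in range(n - m_n):                   # steps m_n + 1, ..., n
        past = X[rng.randrange(m_n)]           # X_{U_{k,n}}, U_{k,n} uniform over {1, ..., m_n}
        T += past if rng.random() < p else -past   # the new step is NOT appended to X
    return T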
The process $\{T_n \colon n\in \mathbb{N}\}$ was called the ERW with gradually increasing memory in [10], where
1.2. Linear setting
In this setting the ERW $W^{\prime}_{n+1} \,:\!=\, W^{\prime}_{n} + Z_{n+1}$ is given by the increments
where $V_n$ is a uniform random variable over $\{1, \ldots , m_n\}$, and $\{V_n \colon n \in \mathbb{N}\}$ is an independent collection.
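The contrast with the triangular array setting is easiest to see in code. In the Python sketch below (our reading of the increment rule: $Z_1 = X_1$, and $Z_{n+1} = Z_{V_n}$ with probability $p$, $-Z_{V_n}$ with probability $1-p$), every new increment joins one growing sequence, so later steps may remember earlier restricted-memory steps; in the triangular array setting each row is generated afresh.

import random

def simulate_linear(n, m, p, s, seed=None):
    """Return W'_n in the linear setting; m is a callable with 1 <= m(k) <= k."""
    rng = random.Random(seed)
    Z = [1 if rng.random() < s else -1]        # Z_1 = X_1
    for k in range(1, n):
        past = Z[rng.randrange(m(k))]          # Z_{V_k}, V_k uniform over {1, ..., m_k}
        Z.append(past if rng.random() < p else -past)   # the new increment joins the history
    return sum(Z)

# e.g. memory restricted to (roughly) the first half of the history:
# simulate_linear(10_000, lambda k: max(1, k // 2), 0.7, 0.5)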
Remark 1.1. We note here that the dependence structures in the definitions of $T_n$ and $W^{\prime}_n$ are different, and as such, results obtained for $T_n$ need not carry over to $W^{\prime}_n$. The error in [1, Theorem 2(3)] is due to the use of the linear setting for their equation (3.20) while working in the triangular array setting. In particular, there is a mistake in the expression for $\overline{M}_{\infty}$ on [1, p. 14], which was fixed in the subsequent corrigendum. The results in the corrected version agree with those obtained here, although the methods used are different; this paper also provides additional results not obtained there.
In the next section we present the statements of our results, and in Sections 3 and 4 we prove them. In Section 5 and thereafter we study similar questions for the ERW with stops.
2. Results for the ERW in the triangular array setting
Before we state our results, we give a short synopsis of the results for the standard ERW $\{W_n\}$ [3, 4, 6–8, 11, 14]. Let $\alpha\,:\!=\,2p-1$.
• For $\alpha \in ({-}1,1)$, i.e. $p \in (0,1)$,
(2.1) \begin{align} \lim_{n \to \infty} \frac{W_n}{n} = 0 \quad \text{almost surely (a.s.) and in}\ L^2. \end{align}
• For $\alpha \in \big({-}1,\frac12\big)$, i.e. $p \in \big(0,\frac34\big)$,
(2.2) \begin{align} \frac{W_n}{\sqrt{n}} \stackrel{\text{d}}{\to} N\bigg(0,\frac{1}{1-2\alpha}\bigg) \quad \text{as}\ n \to \infty, \end{align}
(2.3) \begin{align} \limsup_{n\to\infty} \pm\frac{W_n}{\sqrt{2n\log\log n}\,} = \frac{1}{\sqrt{1-2\alpha}\,} \quad \text{a.s.} \end{align}
• For $\alpha=\frac12$, i.e. $p = \frac34$,
(2.4) \begin{align} \frac{W_n}{\sqrt{n \log n}\,} \stackrel{\text{d}}{\to} N(0,1) \quad \text{as}\ n \to \infty, \end{align}
(2.5) \begin{align} \limsup_{n\to\infty} \pm\frac{W_n}{\sqrt{2n\log n\log\log\log n}\,} = 1 \quad \text{a.s.} \end{align}
• For $\alpha \in \big(\frac12,1\big)$, i.e. $p \in \big(\frac34,1\big)$, there exists a random variable $M$ such that
(2.6) \begin{align} \lim_{n \to \infty}\frac{W_n}{n^{\alpha}} = M \quad \text{a.s. and in}\ L^2, \end{align}
where $\mathbb{E}[M] = (2s-1)/\Gamma(\alpha+1)$, $\mathbb{E}[M^2] \gt 0$, $\mathbb{P}(M \neq 0)=1$, and
(2.7) \begin{align} \frac{W_n-Mn^{\alpha}}{\sqrt{n}} \stackrel{\text{d}}{\to} N\bigg(0,\frac{1}{2\alpha-1}\bigg) \quad \text{as}\ n \to \infty. \end{align}
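As a quick numerical sanity check on the diffusive regime, the following self-contained Python sketch compares the empirical variance of $W_n/\sqrt{n}$ with the limit variance $1/(1-2\alpha)$ in (2.2); the sample sizes are modest and purely illustrative.

import math
import random
import statistics

def erw_endpoint(n, p, s, rng):
    X = [1 if rng.random() < s else -1]
    for k in range(1, n):
        past = X[rng.randrange(k)]
        X.append(past if rng.random() < p else -past)
    return sum(X)

rng = random.Random(1)
p, s, n, trials = 0.6, 0.5, 2_000, 500          # alpha = 0.2 lies in (-1, 1/2)
alpha = 2 * p - 1
samples = [erw_endpoint(n, p, s, rng) / math.sqrt(n) for _ in range(trials)]
print("empirical variance:", statistics.variance(samples))
print("1/(1 - 2*alpha)   :", 1 / (1 - 2 * alpha))   # = 5/3 here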
Our first result improves and extends [10, Theorem 3.1].
Theorem 2.1. Let $p \in (0,1)$ and $\alpha=2p-1$. Assume that $\{m_n \colon n \in \mathbb{N}\}$ satisfies (1.3), (1.7), and
(i) If $\alpha \in \big({-}1,\frac12\big)$, i.e. $p \in \big(0,\frac34\big)$, then
(2.9) \begin{align} \frac{\gamma_n T_n}{\sqrt{m_n}\,} \stackrel{\mathrm{d}}{\to} N\bigg(0,\frac{\{\gamma+\alpha(1-\gamma)\}^2}{1-2\alpha}+\gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty. \end{align}
(ii) If $\alpha = \frac12$, i.e. $p=\frac34$, then
(2.10) \begin{align} \frac{\gamma_n T_n}{\sqrt{m_n\log m_n}\,} \stackrel{\text{d}}{\to} N\bigg(0,\frac{(1+\gamma)^2}{4}\bigg) \quad \text{as}\ n \to \infty. \end{align}
(iii) If $\alpha \in \big(\frac12,1\big)$, i.e. $p \in \big(\frac34,1\big)$, then
(2.11) \begin{align} \lim_{n \to \infty} \frac{\gamma_n T_n}{(m_n)^{\alpha}} =\{\gamma+\alpha(1-\gamma)\}M \quad \text{a.s. and in}\ L^2, \end{align}
where $M$ is the random variable in (2.6). Moreover,
(2.12) \begin{align} \frac{\gamma_n T_n - M\{\gamma_n + \alpha(1-\gamma_n)\}(m_n)^{\alpha}}{\sqrt{m_n}} \stackrel{\mathrm{d}}{\to} N\bigg(0,\frac{\{\gamma+\alpha(1-\gamma)\}^2}{2\alpha-1}+\gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty. \end{align}
Remark 2.1. If $\alpha=\gamma=0$ then the right-hand side of (2.9) is $N(0,0)$, which we interpret as the delta measure at 0. Our result (2.12) differs from [1, Theorem 2(3)]; the reason for this is given in Remark 1.1.
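Theorem 2.1(i) can be probed in the same spirit. The sketch below assumes $\gamma_n = m_n/n$ (consistent with the identity $(\gamma_n/n)(n-m_n) = \gamma_n(1-\gamma_n)$ used in Section 3) and compares the empirical variance of $\gamma_n T_n/\sqrt{m_n}$ with the limit variance in (2.9); all names are ours.

import math
import random
import statistics

rng = random.Random(2)
p, s, n = 0.6, 0.5, 4_000
alpha = 2 * p - 1                   # 0.2 < 1/2: case (i)
m_n = n // 2
gamma = m_n / n                     # gamma_n = m_n/n, so here gamma = 1/2

def sample_T():
    X = [1 if rng.random() < s else -1]
    for k in range(1, m_n):
        past = X[rng.randrange(k)]
        X.append(past if rng.random() < p else -past)
    T = sum(X)
    for _ in range(n - m_n):
        past = X[rng.randrange(m_n)]
        T += past if rng.random() < p else -past
    return T

vals = [gamma * sample_T() / math.sqrt(m_n) for _ in range(400)]
c = gamma + alpha * (1 - gamma)
print("empirical variance:", statistics.variance(vals))
print("variance in (2.9) :", c * c / (1 - 2 * alpha) + gamma * (1 - gamma))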
The next theorem concerns the strong law of large numbers and its refinement.
Theorem 2.2. Let $p \in (0,1)$ and $\alpha=2p-1$. Assume that $\{m_n \colon n \in \mathbb{N}\}$ satisfies (1.3), (1.7), and (2.8). Then
Actually, we obtain the following sharper result: If $c\in\big(\max\big\{\alpha,\frac12\big\},1\big)$ then
3. Proof of Theorem 2.1
Throughout this section we assume that (1.3), (1.7), and (2.8) hold.
Proof. Let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $X_1,\ldots,X_n$. For $n \in \mathbb{N}$, the conditional distribution of $X_{n+1}$ given the history up to time $n$ is
For each $n \in \mathbb{N}$, let $\mathcal{G}_{m_n}^{(n)} = \mathcal{F}_{\infty} \,:\!=\, \sigma(\{X_i \colon i \in \mathbb{N} \}) = \sigma(\{X_1\} \cup \{ U_i \colon i \in \mathbb{N} \})$ and
for $k \in (m_n,n] \cap \mathbb{N}$. From (1.5), we can see that the conditional distribution of $X_k^{(n)}$ for $k \in (m_n,n] \cap \mathbb{N}$ is given by
(This corresponds to [10, (2.2)].) From this we have that
for each $k \in (m_n,n] \cap \mathbb{N}$, and
We introduce
Noting that
we have
where $c_n=c_n(\alpha)\,:\!=\,\gamma_n+\alpha(1-\gamma_n)$.
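For the reader's convenience, here is the short computation behind this identity (a sketch, assuming, as the use of (3.4) suggests, that $A_n \,:\!=\, \mathbb{E}[T_n \mid \mathcal{F}_{\infty}]$ and $B_n \,:\!=\, T_n - A_n$, and using the conditional mean $\mathbb{E}[X^{(n)}_k \mid \mathcal{F}_{\infty}] = \alpha W_{m_n}/m_n$, which follows from (1.5), together with $\gamma_n = m_n/n$):
\begin{align*} \gamma_n A_n = \gamma_n\bigg(W_{m_n} + (n-m_n)\,\alpha\,\frac{W_{m_n}}{m_n}\bigg) = \bigg\{\frac{m_n}{n} + \alpha\,\frac{n-m_n}{n}\bigg\}W_{m_n} = c_n W_{m_n}. \end{align*}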
First, we prove Theorem 2.1(i). Assume that $\alpha \in \big({-}1,\frac12\big)$. By (3.4) and (2.2),
In terms of characteristic functions, this is equivalent to, for $\xi \in \mathbb{R}$,
Now we turn to $\{B_n\}$. Unless specified otherwise, all the results on $\{B_n\}$ hold for all $\alpha \in ({-}1,1)$. For each $n \in \mathbb{N}$, the random variables $X_k^{(n)}$, $k \in (m_n,n] \cap \mathbb{N}$, are independent and identically distributed under $\mathbb{P}(\,\cdot\mid \mathcal{F}_{\infty})$, so
is a sum of centered i.i.d. random variables. The conditional variance of $X_k^{(n)}$ for
We have
which converges to $\gamma(1-\gamma)$ as $n \to \infty$ a.s. by (2.1). Based on this observation, we prove the following result.
Lemma 3.1. For $\gamma \in [0,1]$,
Proof. Because $B_n$ is the sum (3.7) of centered i.i.d. random variables under $\mathbb{P}(\,\cdot\mid \mathcal{F}_{\infty})$, by (3.8) we have
Note that ${\gamma_n}/{n} \to 0$ and $({\gamma_n}/{n})\cdot (n-m_n) =\gamma_n(1-\gamma_n) \to \gamma(1-\gamma)$ as $n \to \infty$. Now (3.10) follows from this together with (2.1).
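Spelled out, the variance computation behind (3.9) and (3.10) is presumably the following (a sketch; the conditional variance of each $X^{(n)}_k$, $m_n \lt k \leq n$, equals $1-\alpha^2(W_{m_n}/m_n)^2$ by (1.5)):
\begin{align*} \mathrm{Var}\bigg(\frac{\gamma_n B_n}{\sqrt{m_n}\,}\,\bigg|\,\mathcal{F}_{\infty}\bigg) = \frac{\gamma_n^2(n-m_n)}{m_n}\bigg\{1-\alpha^2\bigg(\frac{W_{m_n}}{m_n}\bigg)^2\bigg\} = \gamma_n(1-\gamma_n)\bigg\{1-\alpha^2\bigg(\frac{W_{m_n}}{m_n}\bigg)^2\bigg\}, \end{align*}
which converges to $\gamma(1-\gamma)$ a.s. since $W_{m_n}/m_n \to 0$ a.s. by (2.1).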
From (3.5), (3.6), and (3.10), together with the bounded convergence theorem, we can see that
converges to
as $n \to \infty$. This gives (2.9).
The proof of Theorem 2.1(ii) is along the same lines as that of (i), and is actually simpler. Assume that $\alpha =\frac12$. As $c_n\big(\frac12\big)=(1+\gamma_n)/2$, from (3.4) and (2.4) we have
Also, from (3.9) and (2.1), we have
This implies that $\gamma_n B_n/\sqrt{m_n \log m_n} \to 0$ as $n \to \infty$ in $L^2$. By Slutsky’s lemma, we obtain (2.10).
Finally, we prove Theorem 2.1(iii). Assume that $\alpha \in \big(\frac12,1\big)$. By (3.4) and (2.6),
as $n \to \infty$ a.s. and in $L^2$. Noting that $\gamma_n B_n/(m_n)^{\alpha} \to 0$ as $n \to \infty$ in $L^2$, by (3.11) we obtain the $L^2$-convergence in (2.11). From (4.2), which will be proved in the next section, the almost-sure convergence in (2.11) follows. To prove (2.12), by (3.5) we have
Note that M is $\mathcal{F}_{\infty}$-measurable. Using (2.7), (3.10), and (3.13), we obtain (2.12) similarly to the proof of (2.9).
4. Proof of Theorem 2.2
In this section we assume that (1.3), (1.7), and (2.8) hold.
Proof. First we give almost-sure bounds for $\{B_n\}$.
Lemma 4.1. For any $\alpha \in ({-}1,1)$ and $\gamma \in [0,1]$,
In particular, for any $c \in \big(\frac12,1\big)$,
Proof. Note that $X_k^{(n)}-\mathbb{E}[X_k^{(n)}\mid\mathcal{F}_{\infty}]$ takes values in an interval of length at most 2 for each $1\leq k\leq n$. For $\lambda \in \mathbb{R}$, it follows from Azuma's inequality [2] that
and
Taking $x=\sqrt{2(1+\varepsilon) \gamma_n (1-\gamma_n)m_n \log n}$ for some $\varepsilon \gt 0$, we have
This, together with the Borel–Cantelli lemma, implies (4.1). To obtain (4.2) note that, for $c \in \big(\frac12,1\big)$,
where we used $2c-2 \lt 0 \lt 2c-1$ and $m_n \leq n$.
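To see how the constants combine in the Borel–Cantelli step: applying Azuma's inequality to $\gamma_n B_n$, whose $n-m_n$ martingale increments each range over an interval of length $2\gamma_n$, and using $\gamma_n^2(n-m_n) = \gamma_n(1-\gamma_n)m_n$ (valid since $\gamma_n = m_n/n$), the above choice of $x$ gives (a sketch)
\begin{align*} \mathbb{P}(\gamma_n|B_n| \geq x \mid \mathcal{F}_{\infty}) \leq 2\exp\bigg({-}\frac{x^2}{2\gamma_n^2(n-m_n)}\bigg) = 2\exp({-}(1+\varepsilon)\log n) = \frac{2}{n^{1+\varepsilon}}, \end{align*}
and $\sum_{n} 2n^{-(1+\varepsilon)} \lt \infty$.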
We now prove (2.14) in Theorem 2.2. Equation (2.13) is readily derived from (2.14). For the case $\alpha \in \big({-}1,\frac12\big)$, from (2.3) and (3.4) we have
For the case $\alpha=\frac12$, from (2.5) and (3.4) we have
By (4.2), if $\alpha \in \big({-}1,\frac12\big]$ then (2.14) holds for any $c \in \big(\frac12,1\big)$. As for the case $\alpha \in \big(\frac12,1\big)$, almost-sure convergence in (2.11) follows from (3.12) and (4.2). Thus, (2.14) holds for any $c \in (\alpha,1)$.
5. The ERW with stops in the triangular array setting
Let $s \in [0,1]$, and assume that $p,q,r \in [0,1)$ satisfy $p+q+r=1$. We consider a sequence $X_1,X_2,\ldots$ of random variables taking values in $\{+1, 0, -1\}$ given by
(5.1) \begin{align} \mathbb{P}(X_1 = +1) = s = 1 - \mathbb{P}(X_1 = -1); \end{align}
$\{U_n\colon n \geq 1\}$ a sequence of independent random variables, independent of $X_1$, with $U_n$ having a uniform distribution over $\{1, \ldots, n\}$; and, for $n \in \mathbb{N}$,
(5.2) \begin{align} X_{n+1} = \begin{cases} X_{U_n} & \text{with probability}\ p, \\ -X_{U_n} & \text{with probability}\ q, \\ 0 & \text{with probability}\ r. \end{cases} \end{align}
The ERW with stops $\{W_n\}$ is defined by $W_n= \sum_{k=1}^n X_k$ for $n\in \mathbb{N}$. Note that if $r=0$ then this reduces to the standard ERW defined in Section 1. Hereafter we assume that $r \in (0,1)$.
The ERW with stops was introduced in [12]. To describe the limit theorems obtained in [5], it is convenient to introduce the new parameters $\alpha\,:\!=\,p-q$ and $\beta\,:\!=\,1-r$, where $\beta \in (0,1)$ and $\alpha \in [-\beta,\beta]$. Let $\Sigma_n$ be the number of moves up to time $n$, i.e.
It is shown in [5] that
where $\Sigma$ has a Mittag–Leffler distribution with parameter $\beta$. We turn to the central limit theorems for $\{W_n\}$ obtained in [5].
• For $\alpha \in [-\beta,\beta/2)$,
(5.4) \begin{align} \frac{W_n}{\sqrt{\Sigma_n}\,} \stackrel{\text{d}}{\to} N\bigg(0,\frac{\beta}{\beta-2\alpha}\bigg) \quad \text{as}\ n \to \infty. \end{align}
• For $\alpha=\beta/2$,
(5.5) \begin{align} \frac{W_n}{\sqrt{\Sigma_n \log \Sigma_n}\,} \stackrel{\text{d}}{\to} N(0,1) \quad \text{as}\ n \to \infty. \end{align}
• For $\alpha \in (\beta/2,\beta]$, there exists a random variable $M$ such that
(5.6) \begin{align} \lim_{n \to \infty} \frac{W_n}{n^{\alpha}} = M \quad \text{a.s. and in}\ L^2 \end{align}
and
(5.7) \begin{align} \frac{W_n-Mn^{\alpha}}{\sqrt{\Sigma_n}} \stackrel{\text{d}}{\to} N\bigg(0,\frac{\beta}{2\alpha-\beta}\bigg) \quad \text{as}\ n \to \infty, \end{align}
where $\mathbb{P}(M \gt 0) \gt 0$.
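The dynamics (5.1)-(5.2) are as easy to simulate as before. The Python sketch below (names ours) returns both $W_n$ and $\Sigma_n$, and the final line prints one draw of $\Sigma_n/n^{\beta}$, an approximate sample from the Mittag–Leffler limit $\Sigma$ in (5.3). Note that remembering a step that was itself $0$ again produces $0$, so stops reinforce themselves.

import random

def simulate_stops(n, p, q, r, s, seed=None):
    """ERW with stops: return (W_n, Sigma_n)."""
    rng = random.Random(seed)
    X = [1 if rng.random() < s else -1]       # (5.1): here X_1 is never 0
    for k in range(1, n):
        past = X[rng.randrange(k)]            # X_{U_k}, U_k uniform over {1, ..., k}
        u = rng.random()
        if u < p:
            X.append(past)                    # (5.2): X_{U_k} with probability p
        elif u < p + q:
            X.append(-past)                   # -X_{U_k} with probability q
        else:
            X.append(0)                       # 0 with probability r
    return sum(X), sum(1 for x in X if x != 0)

p, q, r, s, n = 0.5, 0.2, 0.3, 0.5, 5_000
W, Sigma = simulate_stops(n, p, q, r, s, seed=3)
print("Sigma_n / n^beta:", Sigma / n ** (1 - r))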
Next, we define the sequence $\{T_n\}$ as in (1.6); however, $Y_k^{(n)}$ and $X_k^{(n)}$ of (1.4) and (1.5) are defined with $\{X_i\}$ as in (5.1) and (5.2) instead of (1.1) and (1.2). We call this the ERW with stops in the triangular array setting.
Our first result of this section is an extension of [10, Theorem 4.1]. We note here that [10] allows $X_1$ to take the value 0 with probability $r$, unlike this paper. As such, they have an extra $\delta_0$ in their results for the case $\gamma=0$.
Theorem 5.1. Let $\beta\in(0,1)$ and $\alpha\in[-\beta,\beta]$. Assume that $\{m_n\colon n\in\mathbb{N}\}$ satisfies (1.3), (1.7), and (2.8).
(i) If $\alpha \in [-\beta,\beta/2)$ then
(5.8) \begin{align} \frac{\gamma_n T_n}{\sqrt{\Sigma_{m_n}}\,} \stackrel{\mathrm{d}}{\to} N\bigg(0,\frac{\beta\{\gamma+\alpha(1-\gamma)\}^2}{\beta-2\alpha}+\beta\gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty. \end{align}
(ii) If $\alpha = \beta/2$ then
(5.9) \begin{align} \frac{\gamma_n T_n}{\sqrt{\Sigma_{m_n} \log \Sigma_{m_n}}\,} \stackrel{\mathrm{d}}{\to} N(0,\{\gamma+\beta(1-\gamma)/2\}^2) \quad \text{as}\ n \to \infty. \end{align}
(iii) If $\alpha \in (\beta/2,\beta]$ then
(5.10) \begin{align} \lim_{n \to \infty} \frac{\gamma_n T_n}{(m_n)^{\alpha}} =\{\gamma+\alpha(1-\gamma)\}M \quad \text{in}\ L^2, \end{align}
where $M$ is the random variable in (5.6). Moreover,
(5.11) \begin{align} \frac{\gamma_nT_n- M\{\gamma_n + \alpha(1-\gamma_n)\}(m_n)^{\alpha}}{\sqrt{\Sigma_{m_n}}} \stackrel{\mathrm{d}}{\to} N\bigg(0,\frac{\beta\{\gamma+\alpha(1-\gamma)\}^2}{2\alpha-\beta}+\beta\gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty. \end{align}
Remark 5.1. Unlike the results in [10], we have a random normalization in the results above. This is because we consider the general case $\gamma \in [0,1]$. We can obtain the $L^4$-convergence in (5.10) using Burkholder's inequality, as in [1, (3.15)].
We also consider the process $\{\Xi_n \colon n \in \mathbb{N} \}$ defined by $\Xi_n\,:\!=\, \sum_{k=1}^n \{X_k^{(n)}\}^2$. The next theorem is an improvement of [10, Theorem 4.2].
Theorem 5.2. Under the same conditions as in Theorem 5.1, we have
where $\Sigma$ is defined in (5.3).
The strong law of large numbers and its refinement can also be obtained for the ERW with stops.
Theorem 5.3. Under the same conditions as in Theorem 5.1, we have (2.13). In addition, (2.14) holds for $c \in \big(\max\big\{\alpha,\frac12\big\},1\big)$.
6. Proof of Theorem 5.1
Proof. Noting that $p=(\beta+\alpha)/2$ and $q=(\beta-\alpha)/2$, for $n \in \mathbb{N}$ we have
For $k \in (m_n,n] \cap \mathbb{N}$, we have
From (6.1), we see that (3.1) and (3.2) continue to hold in this setting. Defining $\{A_n\}$ and $\{B_n\}$ by (3.3), we note that they satisfy (3.4) and (3.5).
We prepare a lemma about $\{B_n\}$.
Lemma 6.1. Under the assumptions of Theorem 5.1:
(i) For $\alpha \in [-\beta,\beta]$ and $\xi \in \mathbb{R}$,
(6.3) \begin{align} \mathbb{E}\bigg[\exp\bigg(\frac{\mathrm{i}\xi\gamma_nB_n}{\sqrt{\Sigma_{m_n}}\,}\bigg) \mid \mathcal{F}_{\infty}\bigg] \to \exp\bigg({-}\frac{\xi^2}{2}\cdot\beta\gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty\ \text{a.s.,} \end{align}
(6.4) \begin{align} \mathbb{E}\bigg[\exp\bigg(\frac{\mathrm{i}\xi\gamma_nB_n}{\sqrt{\Sigma_{m_n}\log\Sigma_{m_n}}\,}\bigg) \mid \mathcal{F}_{\infty}\bigg] \to 1 \quad \text{as}\ n \to \infty\ \text{a.s.} \end{align}
(ii) If $\alpha \in (\beta/2,\beta]$ then $\gamma_n B_n/(m_n)^{\alpha} \to 0$ as $n \to \infty$ in $L^2$.
Proof. Note that $\mathbb{E}[ B_n \mid \mathcal{F}_{\infty}] = 0$. By (6.1) and (6.2),
for $k \in (m_n,n] \cap \mathbb{N}$. As in (3.9), we have
From this,
For any $\beta \in (0,1)$ and $\alpha \in [-\beta,\beta]$, we show that
Indeed, if $\alpha \in [-\beta,\beta/2)$ then
by [5, (3.5)]. If $\alpha=\beta/2$ then
by [5, (3.13)]. If $\alpha \in (\beta/2,\beta]$ then $W_{m_n}/(m_n)^{\alpha} \to M$ as $n \to \infty$ a.s. by (5.6), and $(1+\beta)/2 \gt \alpha$ since $2\alpha - \beta\leq \beta \lt 1$. In any case we have (6.7). Since $(m_n)^{\beta}/\Sigma_{m_n}\to 1/\Sigma$ as $n \to \infty$ a.s. by (5.3), we see that (6.6) converges to $\beta \gamma(1-\gamma)$ as $n \to \infty$ a.s., and
By a computation similar to that in Lemma 3.1, we obtain (6.3) and (6.4) in (i).
Next, we consider (ii). By (6.5),
From [5, (A.6)], $\mathbb{E}[(W_n)^2] \sim n^{2\alpha}/\{(2\alpha-\beta)\Gamma(2\alpha)\}$ as $n \to \infty$. On the other hand, from [5, (4.4)] we can see that $\mathbb{E}[\Sigma_n] \sim n^{\beta}/\Gamma(1+\beta)$ as $n \to \infty$. Noting that $\beta \lt 2\alpha$, we have (ii).
Assume that $\alpha \in [-\beta,\beta/2)$. By (3.4) and (5.4), we have
Combining this and (6.3), we can prove (5.8) by the same method as for (2.9). Next, we consider the case $\alpha=\beta/2$. By (3.4) and (5.5), we have
This together with (6.4) gives (5.9). As for the case $\alpha \in (\beta/2,\beta]$, by (3.4) and (5.6),
Now (5.10) follows from Lemma 6.1(ii). The proof of (5.11) is almost identical to that of (2.12): use (3.5), (5.7), and (6.3).
7. Proof of Theorem 5.2
Proof. Put $A^{\prime}_n\,:\!=\,\mathbb{E}[\Xi_n \mid \mathcal{F}_{\infty}]$ and $B^{\prime}_n\,:\!=\, \Xi_n - A^{\prime}_n$. Using (6.2), we can see that $\gamma_n A^{\prime}_n = c_n(\beta) \Sigma_{m_n}$, which together with (5.3) implies
As for $B^{\prime}_n$, again by (6.2) we can see that
Since $\beta \lt 1$ and $\Sigma_{m_n}/(m_n)^{\beta}$ converges to $\Sigma$ in $L^2$ by (5.3), we have
Since $\beta \gt 0$, this shows that $\gamma_n B^{\prime}_n/(m_n)^{\beta} \to 0$ as $n \to \infty$ in $L^2$, which completes the proof.
8. Proof of Theorem 5.3
Proof. The proof of Lemma 4.1 is based on the fact that $|X_k^{(n)}-\mathbb{E}[X_k^{(n)}\mid \mathcal{F}_{\infty}]| \leq 1$. Thus, $\{B_n\}$ for the ERW with stops in the triangular array setting also satisfies (4.2) for any $c \in \big(\frac12,1\big)$. If $\alpha \in [-\beta,\beta/2]$ then, from (3.4), (6.8), and (6.9), we can see that $\gamma_n A_n=o(n^c)$ for any $c \in (\beta/2,1)$. If $\alpha \in (\beta/2,\beta]$ then (3.4) and (5.6) imply that $\gamma_n A_n=o(n^c)$ for any $c \in (\alpha,1)$. In any case, (2.14) holds for $c \in \big(\max\big\{\alpha,\frac12\big\},1\big)$.
Acknowledgements
R.R. thanks Keio University for its hospitality during multiple visits. M.T. and H.T. thank the Indian Statistical Institute for its hospitality.
Funding information
M.T. is partially supported by JSPS KAKENHI Grant Numbers JP19H01793, JP19K03514, and JP22K03333. H.T. is partially supported by JSPS KAKENHI Grant Numbers JP19K03514, JP21H04432, and JP23H01077.
Competing interests
No competing interests arose during the preparation or publication of this article.