
The elephant random walk in the triangular array setting

Published online by Cambridge University Press:  16 January 2025

Rahul Roy*
Affiliation:
Indian Statistical Institute and Indraprastha Institute of Information Technology, Delhi
Masato Takei**
Affiliation:
Yokohama National University
Hideki Tanemura***
Affiliation:
Keio University
*Postal address: 7 SJS Sansanwal Marg, New Delhi 110016, India. Email: rahul@isid.ac.in
**Postal address: 79-5 Tokiwadai, Hodogaya-ku, Yokohama 240-8501, Japan. Email: takei-masato-fx@ynu.ac.jp
***Postal address: 3-14-1 Hiyoshi, Kohoku-ku, Yokohama, 223-8522, Japan. Email: tanemura@math.keio.ac.jp

Abstract

Gut and Stadtmüller (2021, 2022) initiated the study of the elephant random walk with limited memory. Aguech and El Machkouri (2024) discussed an extension of the results of Gut and Stadtmüller (2022) for an ‘increasing memory’ version of the elephant random walk without stops. Here we present a formal definition, based on the triangular array setting, of the process that was hinted at by Gut and Stadtmüller. We give a positive answer to the open problem in Gut and Stadtmüller (2022) for the elephant random walk, possibly with stops. We also obtain the central limit theorem for the supercritical case of this model.

Type
Original Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

In recent years there has been considerable interest in the study of the elephant random walk (ERW) since it was introduced in [15]; see the excellent thesis [13] for a detailed bibliography. The standard ERW is described as follows. Let $p \in (0,1)$ and $s \in [0,1]$. We consider a sequence $X_1,X_2,\ldots$ of random variables taking values in $\{+1, -1\}$ given by

(1.1) \begin{align} X_1 = \begin{cases} +1 & \text{with probability } s, \\ -1 & \text{with probability } 1-s; \end{cases} \end{align}

$\{U_n\colon n \geq 1\}$ a sequence of independent random variables, independent of $X_1$, with $U_n$ having a uniform distribution over $\{1, \ldots, n\}$; and, for $n \in \mathbb{N}\,:\!=\,\{ 1,2,\ldots\}$,

(1.2) \begin{align} X_{n+1} = \begin{cases} +X_{U_n} & \text{with probability } p, \\ -X_{U_n} & \text{with probability } 1-p. \end{cases} \end{align}

The ERW $\{W_n\}$ is defined by $W_n= \sum_{k=1}^n X_k$ for $n\in\mathbb{N}$.
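The recursion (1.1)–(1.2) translates directly into simulation. The following minimal Python sketch is ours, not part of the paper (the name erw_path is arbitrary); it samples one trajectory $X_1,\ldots,X_n$ together with the path $W_1,\ldots,W_n$:

```python
import random

def erw_path(n, p, s):
    """Sample X_1,...,X_n via (1.1)-(1.2); return (steps X, path W)."""
    X = [1 if random.random() < s else -1]               # X_1 as in (1.1)
    for k in range(1, n):
        past = X[random.randrange(k)]                    # X_{U_k}, U_k uniform on {1,...,k}
        X.append(past if random.random() < p else -past) # (1.2)
    W, total = [], 0
    for x in X:
        total += x
        W.append(total)                                  # W_k = X_1 + ... + X_k
    return X, W
```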

Gut and Stadtmüller [9, 10] studied a variation of this model, described in [9, Section 3.2]; Aguech and El Machkouri [1] also studied a similar variation, described in [1, Section 2]. We present the two different formalizations of the model given in [1, 10]; our work is based on the first formalization.

1.1. Triangular array setting

Consider a sequence $\{m_n \colon n \in \mathbb{N}\}$ of positive integers satisfying

(1.3) \begin{align} 1 \leq m_n \leq n \quad \text{for each}\ n\in \mathbb{N}. \end{align}

Let $X_1,X_2,\ldots$ be the sequence defined by (1.1) and (1.2). We define a triangular array of random variables $\{\{S^{(n)}_k \colon 1 \leq k \leq n\} \colon n \in \mathbb{N}\}$ as follows. Let $\{Y^{(n)}_k \colon 1 \leq k \leq n\}$ be random variables with

(1.4) \begin{align} Y^{(n)}_k = \begin{cases} X_{k} & \text{ for } 1 \leq k \leq m_n, \\ X^{(n)}_k & \text{ for } m_n \lt k \leq n, \end{cases} \end{align}

where, for $m_n \lt k \leq n$,

(1.5) \begin{align} X^{(n)}_k = \begin{cases} +X_{U_{k, n}} & \text{with probability } p, \\ -X_{U_{k, n}} & \text{with probability } 1-p. \end{cases} \end{align}

Here, ${\mathcal U}_n \,:\!=\, \{U_{k, n}\colon m_n \lt k \leq n\}$ is an independent and identically distributed (i.i.d.) collection of uniform random variables over $\{1, \ldots , m_n\}$, and $\{{\mathcal U}_n\colon n \in \mathbb{N}\}$ is an independent collection. Finally, for $1 \leq k \leq n$ let $S^{(n)}_k \,:\!=\, \sum_{i=1}^k Y^{(n)}_i$. We note that for fixed $n \in \mathbb{N}$, the sequence $\{S^{(n)}_k\colon 1 \leq k \leq n\}$ is a random walk with increments in $\{+1, -1\}$. However, the sequence $\{S^{(n)}_n\colon n \in \mathbb{N}\}$ does not have such a representation. We study properties of the sequence $\{T_n \colon n\in \mathbb{N}\}$ given by

(1.6) \begin{align} T_n \,:\!=\, S^{(n)}_n. \end{align}

The process $\{T_n \colon n\in \mathbb{N}\}$ was called the ERW with gradually increasing memory in [10], where

(1.7) \begin{align} \lim_{n \to \infty} m_n =+\infty. \end{align}
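The triangular structure is easy to see in simulation. The following sketch (ours, reusing erw_path from the sketch in Section 1) samples one copy of $T_n$ for a fixed row n: the first $m_n$ increments are ordinary ERW steps, and every later step looks back only at $\{1,\ldots,m_n\}$, as in (1.4) and (1.5). It reproduces the marginal law of $T_n$ for each fixed n; for the joint law across rows one would fix a single sequence $X_1,X_2,\ldots$ and reuse it in every row.

```python
import random

def sample_T(n, m_n, p, s):
    """Sample T_n = S^{(n)}_n of (1.6) for one fixed row n."""
    X, _ = erw_path(m_n, p, s)            # prefix Y^{(n)}_k = X_k for k <= m_n, cf. (1.4)
    total = sum(X)
    for _ in range(n - m_n):              # steps m_n < k <= n
        past = X[random.randrange(m_n)]   # X_{U_{k,n}}, U_{k,n} uniform on {1,...,m_n}
        total += past if random.random() < p else -past   # (1.5)
    return total
```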

1.2. Linear setting

In this setting the ERW $W^{\prime}_{n+1} \,:\!=\, W^{\prime}_{n} + Z_{n+1}$ is given by the increments

\begin{align*} W^{\prime}_1 = Z_{1} & = \begin{cases} +1 & \text{with probability } s, \\ -1 & \text{with probability } 1-s, \end{cases} \\ Z_{n+1} & = \begin{cases} +Z_{V_n} & \text{with probability } p, \\ -Z_{V_n} & \text{with probability } 1-p, \end{cases} \end{align*}

where $V_n$ is a uniform random variable over $\{1, \ldots , m_n\}$, and $\{V_n \colon n \in \mathbb{N}\}$ is an independent collection.
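For comparison, here is a sketch of the linear setting (ours; the list m holds the memory sequence $m_1,m_2,\ldots$, assumed to satisfy (1.3)). Every increment is appended to one growing sequence and drawn from the window $\{1,\ldots,m_k\}$ at each step k, whereas in the triangular array setting the first $m_n$ increments of row n come from the full-memory recursion (1.2); this difference in dependence structure is the point of Remark 1.1 below.

```python
import random

def linear_erw(n, p, s, m):
    """Sample Z_1,...,Z_n of the linear setting; m[k-1] = m_k (with m_k <= k)."""
    Z = [1 if random.random() < s else -1]               # Z_1
    for k in range(1, n):
        past = Z[random.randrange(m[k - 1])]             # Z_{V_k}, V_k uniform on {1,...,m_k}
        Z.append(past if random.random() < p else -past)
    path, total = [], 0
    for z in Z:
        total += z
        path.append(total)                               # W'_k = Z_1 + ... + Z_k
    return path
```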

Remark 1.1. We note here that the dependence structures in the definitions of $T_n$ and $W^{\prime}_n$ are different and, as such, results obtained for $T_n$ need not carry over to $W^{\prime}_n$. The error in [1, Theorem 2(3)] is due to the use of the linear setting for their equation (3.20), while working in the triangular array setting. In particular, there is a mistake in the expression of $\overline{M}_{\infty}$ on [1, p. 14], which was fixed in the subsequent corrigendum. Their results in the corrected version agree with the results obtained here, although the methods used are different; this paper also provides additional results not obtained by them.

In the next section we present the statement of our results, and in Sections 3 and 4 we prove the results. In Section 5 and thereafter we study similar questions about the ERW with stops and present our results.

2. Results for the ERW in the triangular array setting

Before we state our results, we give a short synopsis of the results for the standard ERW $\{W_n\}$ [3, 4, 6–8, 11, 14]; a quick Monte Carlo check follows the list. Let $\alpha\,:\!=\,2p-1$.

  • For $\alpha \in ({-}1,1)$, i.e. $p \in (0,1)$,

    (2.1) \begin{align} \lim_{n \to \infty} \dfrac{W_n}{n} = 0\quad\text{almost surely (a.s.) and in}\ L^2. \end{align}
  • For $\alpha \in \big({-}1,\frac12\big)$, i.e. $p \in \big(0,\frac34\big)$,

    (2.2) \begin{align} & \frac{W_n}{\sqrt{n}} \stackrel{\text{d}}{\to} N\bigg(0,\frac{1}{1-2\alpha}\bigg) \quad \text{as}\ n \to \infty, \\[-10pt] \nonumber \end{align}
    (2.3) \begin{align} & \limsup_{n\to\infty} \pm\frac{W_n}{\sqrt{2n\log\log n}\,} = \frac{1}{\sqrt{1-2\alpha}\,} \quad \text{a.s.} \\[6pt] \nonumber \end{align}
  • For $\alpha=\frac12$, i.e. $p = \frac34$,

    (2.4) \begin{align} & \frac{W_n}{\sqrt{n \log n}\,} \stackrel{\text{d}}{\to} N(0,1) \quad \text{as}\ n \to \infty, \\[-10pt] \nonumber\end{align}
    (2.5) \begin{align} & \limsup_{n\to\infty} \pm\frac{W_n}{\sqrt{2n\log n\log\log\log n}\,} = 1 \quad \text{a.s.} \\[6pt] \nonumber \end{align}
  • For $\alpha \in \big(\frac12,1\big)$, i.e. $p \in \big(\frac34,1\big)$, there exists a random variable M such that

    (2.6) \begin{align} \lim_{n \to \infty}\frac{W_n}{n^{\alpha}} = M \quad \text{a.s. and in}\ L^2, \end{align}
    where $\mathbb{E}[M] = {(2s-1)}/{\Gamma(\alpha+1)}$, $\mathbb{E}[M^2] \gt 0$, $\mathbb{P}(M \neq 0)=1$, and
    (2.7) \begin{align} \frac{W_n-Mn^{\alpha}}{\sqrt{n}} \stackrel{\text{d}}{\to} N\bigg(0,\frac{1}{2\alpha-1}\bigg) \quad \text{as}\ n \to \infty. \end{align}
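These limit theorems are easy to probe numerically. Below is a rough Monte Carlo check of the diffusive variance in (2.2), reusing the erw_path sketch from Section 1 (parameter values are illustrative only):

```python
p, s, n, trials = 0.6, 0.5, 5_000, 2_000   # alpha = 0.2 < 1/2
alpha = 2 * p - 1
# with s = 1/2 we have E[W_n] = 0, so E[W_n^2]/n estimates the limiting variance
est = sum(erw_path(n, p, s)[1][-1] ** 2 for _ in range(trials)) / (trials * n)
print(est, 1 / (1 - 2 * alpha))            # both should be close to 1/0.6 = 1.666...
```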

Our first result improves and extends [10, Theorem 3.1].

Theorem 2.1. Let $p \in (0,1)$ and $\alpha=2p-1$. Assume that $\{m_n \colon n \in \mathbb{N}\}$ satisfies (1.3), (1.7), and

(2.8) \begin{align} \gamma_n \,:\!=\, \frac{m_n}{n}, \qquad \lim_{n \to \infty} \gamma_n = \gamma \in [0,1]. \end{align}

  (i) If $\alpha \in \big({-}1,\frac12\big)$, i.e. $p \in \big(0,\frac34\big)$, then

    (2.9) \begin{align} \frac{\gamma_n T_n}{\sqrt{m_n}\,} \stackrel{\mathrm{d}}{\to} N\bigg(0,\frac{\{\gamma+\alpha(1-\gamma)\}^2}{1-2\alpha}+\gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty. \end{align}
  (ii) If $\alpha = \frac12$, i.e. $p=\frac34$, then

    (2.10) \begin{align} \frac{\gamma_n T_n}{\sqrt{m_n\log m_n}\,} \stackrel{\text{d}}{\to} N\bigg(0,\frac{(1+\gamma)^2}{4}\bigg) \quad \text{as}\ n \to \infty. \end{align}
  (iii) If $\alpha \in \big(\frac12,1\big)$, i.e. $p \in \big(\frac34,1\big)$, then

    (2.11) \begin{align} \lim_{n \to \infty} \frac{\gamma_n T_n}{(m_n)^{\alpha}} =\{\gamma+\alpha(1-\gamma)\}M \quad \text{a.s. and in}\ L^2, \end{align}
    where M is the random variable in (2.6). Moreover,
    (2.12) \begin{align} \frac{\gamma_n T_n - M\{\gamma_n + \alpha(1-\gamma_n)\}(m_n)^{\alpha}}{\sqrt{m_n}} \stackrel{\mathrm{d}}{\to} N\bigg(0,\frac{\{\gamma+\alpha(1-\gamma)\}^2}{2\alpha-1}+\gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty. \end{align}

Remark 2.1. If $\alpha=\gamma=0$ then the right-hand side of (2.9) is N(0,0), which we interpret as the delta measure at 0. Our result (2.12) differs from [1, Theorem 2(3)]; the reason for this is given in Remark 1.1.
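As a quick consistency check on (2.9) (ours, by direct substitution): for $\gamma=1$ (e.g. $m_n=n$, so that $T_n=W_n$ and $\gamma_n T_n/\sqrt{m_n}=W_n/\sqrt{n}$) the limiting variance reduces to that of (2.2), while for $\gamma=0$ it reduces to $\alpha^2/(1-2\alpha)$, which vanishes at $\alpha=0$, consistent with Remark 2.1:

\begin{align*} \frac{\{\gamma+\alpha(1-\gamma)\}^2}{1-2\alpha}+\gamma(1-\gamma) = \begin{cases} \dfrac{1}{1-2\alpha} & \text{if}\ \gamma=1, \\[6pt] \dfrac{\alpha^2}{1-2\alpha} & \text{if}\ \gamma=0. \end{cases} \end{align*}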

The next theorem concerns the strong law of large numbers and its refinement.

Theorem 2.2. Let $p \in (0,1)$ and $\alpha=2p-1$. Assume that $\{m_n \colon n \in \mathbb{N}\}$ satisfies (1.3), (1.7), and (2.8). Then

(2.13) \begin{align} \lim_{n \to \infty} \frac{T_n}{n} = 0 \quad \text{a.s.} \end{align}

Actually, we obtain the following sharper result: If $c\in\big(\max\big\{\alpha,\frac12\big\},1\big)$ then

(2.14) \begin{align} \lim_{n \to \infty}\frac{\gamma_n T_n}{(m_n)^c} = 0 \quad \text{a.s.} \end{align}

3. Proof of Theorem 2.1

Throughout this section we assume that (1.3), (1.7), and (2.8) hold.

Proof. Let $\mathcal{F}_n$ be the $\sigma$-algebra generated by $X_1,\ldots,X_n$. For $n \in \mathbb{N}$, the conditional distribution of $X_{n+1}$ given the history up to time n is

\begin{align*} \mathbb{P}(X_{n+1}= \pm 1 \mid \mathcal{F}_n) & = \frac{\#\{k=1,\ldots,n \colon X_k=\pm 1\}}{n} \cdot p + \frac{\#\{k=1,\ldots,n \colon X_k= \mp 1\}}{n} \cdot (1-p) \\ & = \frac{1}{2}\bigg(1 \pm \alpha \cdot \frac{W_n}{n} \bigg). \end{align*}

For each $n \in \mathbb{N}$, let $\mathcal{G}_{m_n}^{(n)} = \mathcal{F}_{\infty} \,:\!=\, \sigma(\{X_i \colon i \in \mathbb{N} \}) = \sigma(\{X_1\} \cup \{ U_i \colon i \in \mathbb{N} \})$ and

\begin{equation*} \mathcal{G}_k^{(n)} \,:\!=\, \sigma( \{X_i \colon i \in \mathbb{N} \} \cup \{ X_i^{(n)} \colon m_n \lt i \leq k\}) = \sigma(\{X_1\} \cup \{ U_i \colon i \in \mathbb{N} \} \cup \{ U_{i,n} \colon m_n \lt i \leq k\}) \end{equation*}

for $k \in (m_n,n] \cap \mathbb{N}$. From (1.5), we can see that the conditional distribution of $X_k^{(n)}$ for $k \in (m_n,n] \cap \mathbb{N}$ is given by

\begin{equation*} \mathbb{P} (X_k^{(n)} = \pm 1 \mid \mathcal{G}_{k-1}^{(n)}) = \frac{1}{2} \bigg( 1 \pm \alpha \cdot \frac{W_{m_n}}{m_n}\bigg). \end{equation*}

(This corresponds to [10, (2.2)].) From this we have that

(3.1) \begin{align} \mathbb{E}[ X_k^{(n)} \mid \mathcal{F}_{\infty}] = \alpha \cdot \frac{W_{m_n}}{m_n} \end{align}

for each $k \in (m_n,n] \cap \mathbb{N}$, and

(3.2) \begin{align} \mathbb{E}[T_n-W_{m_n}\mid\mathcal{F}_{\infty}] = \sum_{k=m_n+1}^{n}\mathbb{E}[X_k^{(n)}\mid\mathcal{F}_{\infty}] = \alpha(n-m_n)\cdot\frac{W_{m_n}}{m_n}. \end{align}

We introduce

(3.3) \begin{align} A_n \,:\!=\, \mathbb{E}[T_n \mid \mathcal{F}_{\infty}], \qquad B_n \,:\!=\, T_n-A_n. \end{align}

Noting that

(3.4) \begin{align} A_n = W_{m_n} + \mathbb{E}[T_n-W_{m_n} \mid \mathcal{F}_{\infty}] = \frac{W_{m_n}}{\gamma_n} \cdot \{ \gamma_n + \alpha(1-\gamma_n) \}, \end{align}

we have

(3.5) \begin{align} \gamma_n T_n = \gamma_n (A_n + B_n) = c_n W_{m_n}+\gamma_n B_n, \end{align}

where $c_n=c_n(\alpha)\,:\!=\,\gamma_n+\alpha(1-\gamma_n)$.
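For clarity, the algebra behind (3.4) and (3.5) rests on the identity $\gamma_n(n-m_n)/m_n = 1-\gamma_n$, which with (3.2) gives

\begin{align*} \gamma_n A_n = \gamma_n W_{m_n} + \alpha\gamma_n(n-m_n)\cdot\frac{W_{m_n}}{m_n} = \{\gamma_n+\alpha(1-\gamma_n)\}W_{m_n} = c_n W_{m_n}. \end{align*}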

First, we prove Theorem 2.1(i). Assume that $\alpha \in \big({-}1,\frac12\big)$. By (3.4) and (2.2),

\begin{equation*} \frac{\gamma_n A_n}{\sqrt{m_n}\,} = \frac{c_n W_{m_n}}{\sqrt{m_n}} \stackrel{\text{d}}{\to} \{\gamma + \alpha(1-\gamma)\} \cdot N\bigg(0,\frac{1}{1-2\alpha}\bigg) \quad \text{as}\ n \to \infty. \end{equation*}

In terms of characteristic functions, this is equivalent to, for $\xi \in \mathbb{R}$,

(3.6) \begin{align} \mathbb{E}\bigg[\exp\bigg(\frac{\mathrm{i}\xi\gamma_n A_n}{\sqrt{m_n}}\bigg)\bigg] \to \exp\bigg({-}\frac{\xi^2}{2} \cdot \frac{\{\gamma+\alpha(1-\gamma)\}^2}{1-2\alpha}\bigg) \quad \text{as}\ n \to \infty. \end{align}

Now we turn to $\{B_n\}$. Unless specified otherwise, all the results on $\{B_n\}$ hold for all $\alpha \in ({-}1,1)$. For each $n \in \mathbb{N}$, the random variables $X_k^{(n)}$ for $k \in (m_n,n] \cap \mathbb{N}$ are independent and identically distributed under $\mathbb{P}(\,\cdot\mid \mathcal{F}_{\infty})$, so

(3.7) \begin{align} B_n=\sum_{k=m_n+1}^n\{ X_k^{(n)}-\mathbb{E}[X_k^{(n)}\mid \mathcal{F}_{\infty}]\} \end{align}

is a sum of centered i.i.d. random variables. The conditional variance of $X_k^{(n)}$ is given by

(3.8) \begin{align} \mathbb{V}[X_k^{(n)} \mid \mathcal{F}_{\infty}] & = \mathbb{E}[(X_k^{(n)})^2 \mid \mathcal{F}_{\infty}] - (\mathbb{E}[X_k^{(n)} \mid \mathcal{F}_{\infty}])^2 \notag \\ & = \begin{cases} 0 & \text{for}\ k \in [1,m_n] \cap \mathbb{N}, \\ 1 - \alpha^2 \cdot \bigg(\dfrac{W_{m_n}}{m_n}\bigg)^2 & \text{for}\ k \in (m_n,n] \cap \mathbb{N}. \end{cases} \end{align}

We have

(3.9) \begin{align} \mathbb{V}\bigg[\frac{\gamma_n B_n}{\sqrt{m_n}\,} \mid \mathcal{F}_{\infty}\bigg] = \frac{(\gamma_n)^2}{m_n}\cdot(n-m_n)\cdot\bigg\{1-\alpha^2\cdot\bigg(\frac{W_{m_n}}{m_n}\bigg)^2\bigg\} = \gamma_n (1-\gamma_n) \cdot \bigg\{1 - \alpha^2 \cdot \bigg(\frac{W_{m_n}}{m_n}\bigg)^2\bigg\}, \end{align}

which converges to $\gamma(1-\gamma)$ as $n \to \infty$ a.s. by (2.1). Based on this observation, we prove the following result.

Lemma 3.1. For $\gamma \in [0,1]$,

(3.10) \begin{align} \mathbb{E}\bigg[\exp\bigg(\frac{\mathrm{i}\xi\gamma_nB_n}{\sqrt{m_n}}\bigg) \mid \mathcal{F}_{\infty}\bigg] \to \exp\bigg({-}\frac{\xi^2}{2} \cdot \gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty\ \text{a.s.} \end{align}

Proof. Because $B_n$ is the sum (3.7) of centered i.i.d. random variables under $\mathbb{P}(\,\cdot\mid \mathcal{F}_{\infty})$, by (3.8) we have

\begin{equation*} \mathbb{E}\bigg[\exp\bigg(\frac{\mathrm{i}\xi\gamma_nB_n}{\sqrt{m_n}}\bigg) \mid \mathcal{F}_{\infty}\bigg] = \bigg[1 - \frac{\xi^2\gamma_n}{2n} \cdot \bigg\{1 - \alpha^2\cdot\bigg(\frac{W_{m_n}}{m_n}\bigg)^2\bigg\} + o\bigg(\frac{\gamma_n}{n}\bigg)\bigg]^{n-m_n} \quad\!\! \text{as}\ n \to \infty\ \text{a.s.} \end{equation*}

Note that ${\gamma_n}/{n} \to 0$ and $({\gamma_n}/{n})\cdot (n-m_n) =\gamma_n(1-\gamma_n) \to \gamma(1-\gamma)$ as $n \to \infty$. Now (3.10) follows from this together with (2.1).

From (3.5), (3.6), and (3.10), together with the bounded convergence theorem, we can see that

\begin{equation*} \mathbb{E}\bigg[\exp\bigg(\frac{\mathrm{i}\xi\gamma_nT_n}{\sqrt{m_n}\,}\bigg)\bigg] = \mathbb{E}\bigg[\exp\bigg(\frac{\mathrm{i}\xi\gamma_nA_n}{\sqrt{m_n}\,}\bigg) \cdot \mathbb{E}\bigg[\exp\bigg(\frac{\mathrm{i}\xi\gamma_nB_n}{\sqrt{m_n}\,}\bigg) \mid \mathcal{F}_{\infty}\bigg]\bigg] \end{equation*}

converges to

\begin{equation*} \exp\bigg({-}\frac{\xi^2}{2} \cdot \frac{\{\gamma+\alpha(1-\gamma)\}^2}{1-2\alpha}\bigg) \cdot \exp\bigg({-}\frac{\xi^2}{2} \cdot \gamma(1-\gamma)\bigg) \end{equation*}

as $n \to \infty$. This gives (2.9).

The proof of Theorem 2.1(ii) is along the same lines as that of (i), and is actually simpler. Assume that $\alpha =\frac12$. As $c_n\big(\frac12\big)=(1+\gamma_n)/2$, from (3.4) and (2.4) we have

\begin{equation*} \frac{\gamma_nA_n}{\sqrt{m_n\log m_n}\,} = \frac{c_n W_{m_n}}{\sqrt{m_n \log m_n}\,} \stackrel{\text{d}}{\to} \frac{1+\gamma}{2} \cdot N(0,1) \quad \text{as}\ n \to \infty. \end{equation*}

Also, from (3.9) and (2.1), we have

(3.11) \begin{align} \mathbb{E}\bigg[\bigg(\frac{\gamma_nB_n}{\sqrt{m_n}\,}\bigg)^2 \bigg] = \gamma_n(1-\gamma_n)\bigg\{1 - \alpha^2\cdot\mathbb{E}\bigg[\bigg(\frac{W_{m_n}}{m_n}\bigg)^2\bigg]\bigg\} \to \gamma(1-\gamma) \quad \text{as}\ n \to \infty. \end{align}

This implies that $\gamma_n B_n/\sqrt{m_n \log m_n} \to 0$ as $n \to \infty$ in $L^2$. By Slutsky’s lemma, we obtain (2.10).

Finally, we prove Theorem 2.1(iii). Assume that $\alpha \in \big(\frac12,1\big)$. By (3.4) and (2.6),

(3.12) \begin{align} \frac{\gamma_nA_n}{(m_n)^{\alpha}} = \frac{c_nW_{m_n}}{(m_n)^{\alpha}} \to \{\gamma+\alpha(1-\gamma)\}M \end{align}

as $n \to \infty$ a.s. and in $L^2$. Noting that $\gamma_n B_n/(m_n)^{\alpha} \to 0$ as $n \to \infty$ in $L^2$, by (3.11) we obtain the $L^2$-convergence in (2.11). From (4.2), which will be proved in the next section, the almost-sure convergence in (2.11) follows. To prove (2.12), by (3.5) we have

(3.13) \begin{align} \gamma_n T_n-c_n \cdot M \cdot (m_n)^{\alpha} = c_n \{W_{m_n}-M \cdot (m_n)^{\alpha}\} + \gamma_n B_n. \end{align}

Note that M is $\mathcal{F}_{\infty}$-measurable. Using (2.7), (3.10), and (3.13), we obtain (2.12) similarly to the proof of (2.9).

4. Proof of Theorem 2.2

In this section we assume that (1.3), (1.7), and (2.8) hold.

Proof. First we give almost-sure bounds for $\{B_n\}$.

Lemma 4.1. For any $\alpha \in ({-}1,1)$ and $\gamma \in [0,1]$,

(4.1) \begin{align} \limsup_{n \to \infty} \frac{\gamma_n B_n}{\sqrt{2\gamma_n(1-\gamma_n)m_n \log n}\,} \leq 1 \quad \text{a.s.} \end{align}

In particular, for any $c \in \big(\frac12,1\big)$,

(4.2) \begin{align} \lim_{n \to \infty} \frac{\gamma_n B_n}{(m_n)^c} = 0 \quad \text{a.s.} \end{align}

Proof. Note that $|X_k^{(n)}-\mathbb{E}[X_k^{(n)}\mid\mathcal{F}_{\infty}]| \leq 1$ for each $1\leq k\leq n$. For $\lambda \in \mathbb{R}$, it follows from Azuma’s inequality [2] that

\begin{align*} \mathbb{E}[\exp(\lambda\gamma_nB_n) \mid \mathcal{F}_{\infty}] & = \mathbb{E}\Bigg[\exp\Bigg(\lambda\gamma_n\sum_{k=m_n+1}^n \big\{X_k^{(n)}-\mathbb{E}[X_k^{(n)}\mid\mathcal{F}_{\infty}]\big\}\Bigg) \mid \mathcal{F}_{\infty}\Bigg] \\ & \leq \exp((\lambda \gamma_n)^2 (n-m_n)/2), \end{align*}

and hence, optimizing the exponential Markov bound over $\lambda \gt 0$ and using $(\gamma_n)^2(n-m_n) = \gamma_n(1-\gamma_n)m_n$,

\begin{equation*} \mathbb{P}(|\gamma_n B_n| \geq x ) \leq 2\exp\bigg({-}\frac{x^2}{2\gamma_n(1-\gamma_n)m_n}\bigg) \quad \text{for}\ x \gt 0. \end{equation*}

Taking $x=\sqrt{2(1+\varepsilon) \gamma_n (1-\gamma_n)m_n \log n}$ for some $\varepsilon \gt 0$, we have

\begin{equation*} \sum_{n=1}^{\infty}\mathbb{P}(|\gamma_nB_n| \geq \sqrt{2(1+\varepsilon)\gamma_n(1-\gamma_n)m_n\log n}) \leq \sum_{n=1}^{\infty}\frac{2}{n^{1+\varepsilon}} . \end{equation*}

This, together with the Borel–Cantelli lemma, implies (4.1). To obtain (4.2) note that, for $c \in \big(\frac12,1\big)$,

\begin{equation*} \frac{2\gamma_n(1-\gamma_n)m_n \log n}{(m_n)^{2c}} = \frac{2(1-\gamma_n) \log n}{n(m_n)^{2c-2}} \leq \frac{2(1-\gamma_n)\log n}{n^{2c-1}} \to 0 \quad \text{as}\ n \to \infty, \end{equation*}

where we used $2c-2 \lt 0 \lt 2c-1$ and $m_n \leq n$.

We now prove (2.14) in Theorem 2.2. Equation (2.13) is readily derived from (2.14). For the case $\alpha \in \big({-}1,\frac12\big)$, from (2.3) and (3.4) we have

\begin{equation*} \limsup_{n \to \infty}\frac{\gamma_n A_n}{\sqrt{2m_n \log \log m_n}\,} \leq \frac{\gamma + \alpha(1-\gamma)}{\sqrt{1-2\alpha}\,} \quad \text{a.s.} \end{equation*}

For the case $\alpha=\frac12$, from (2.5) and (3.4) we have

\begin{equation*} \limsup_{n \to \infty}\frac{\gamma_n A_n}{\sqrt{2m_n\log m_n \log \log \log m_n}\,} \leq \frac{1+\gamma}{2} \quad \text{a.s.} \end{equation*}

By (4.2), if $\alpha \in \big({-}1,\frac12\big]$ then (2.14) holds for any $c \in \big(\frac12,1\big)$. As for the case $\alpha \in \big(\frac12,1\big)$, almost-sure convergence in (2.11) follows from (3.12) and (4.2). Thus, (2.14) holds for any $c \in (\alpha,1)$.

5. The ERW with stops in the triangular array setting

Let $s \in [0,1]$, and assume that $p,q,r \in [0,1)$ satisfy $p+q+r=1$. We consider a sequence $X_1,X_2,\ldots$ of random variables taking values in $\{+1, 0, -1\}$ given by

(5.1) \begin{align} X_1 = \begin{cases} +1 & \text{with probability } s, \\ -1 & \text{with probability } 1-s; \end{cases} \end{align}

$\{U_n\colon n \geq 1\}$ a sequence of independent random variables, independent of $X_1$, with $U_n$ having a uniform distribution over $\{1, \ldots, n\}$; and, for $n \in \mathbb{N}$,

(5.2) \begin{align} X_{n+1} = \begin{cases} X_{U_n} & \text{with probability}\ p, \\ -X_{U_n} & \text{with probability}\ q, \\ 0 & \text{with probability}\ r. \\ \end{cases} \end{align}

The ERW with stops $\{W_n\}$ is defined by $W_n= \sum_{k=1}^n X_k$ for $n\in \mathbb{N}$. Note that if $r=0$ then it is the standard ERW defined in Section 1. Hereafter we assume that $r \in (0,1)$.

The ERW with stops was introduced in [12]. To describe the limit theorems obtained in [5], it is convenient to introduce the new parameters $\alpha\,:\!=\,p-q$ and $\beta\,:\!=\,1-r$, where $\beta \in (0,1)$ and $\alpha \in [-\beta,\beta]$. Let $\Sigma_n$ be the number of moves up to time n, i.e.

\begin{equation*} \Sigma_n \,:\!=\, \sum_{k=1}^n \mathbf{1}_{\{X_k \neq 0\}} = \sum_{k=1}^n X_k^2 \quad \text{for}\ n\in \mathbb{N}.\end{equation*}
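As in Section 1, the model is straightforward to simulate. The following sketch (ours) samples $(W_n, \Sigma_n)$ from (5.1) and (5.2); one can use it, for example, to watch $\Sigma_n/n^{\beta}$ stabilize, in line with (5.3) below.

```python
import random

def erw_with_stops(n, p, q, s):
    """Sample X_1,...,X_n of (5.1)-(5.2); return (W_n, Sigma_n)."""
    X = [1 if random.random() < s else -1]       # X_1 never stops, cf. (5.1)
    for k in range(1, n):
        past = X[random.randrange(k)]            # X_{U_k}, U_k uniform on {1,...,k}
        u = random.random()
        if u < p:
            X.append(past)                       # repeat the remembered step
        elif u < p + q:
            X.append(-past)                      # reverse it
        else:
            X.append(0)                          # stop, probability r = 1 - p - q
    return sum(X), sum(x * x for x in X)         # (W_n, Sigma_n)
```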

It is shown in [5] that

(5.3) \begin{align} \lim_{n \to \infty}\frac{\Sigma_n}{n^{\beta}} = \Sigma \gt 0 \quad \text{a.s. and in}\ L^2, \end{align}

where $\Sigma$ has a Mittag–Leffler distribution with parameter $\beta$. We turn to the central limit theorem for $\{W_n\}$ in [5].

  • For $\alpha \in [-\beta,\beta/2)$,

    (5.4) \begin{align} \frac{W_n}{\sqrt{\Sigma_n}\,} \stackrel{\text{d}}{\to} N\bigg(0,\frac{\beta}{\beta-2\alpha}\bigg) \quad \text{as}\ n \to \infty. \end{align}
  • For $\alpha=\beta/2$,

    (5.5) \begin{align} \frac{W_n}{\sqrt{\Sigma_n \log \Sigma_n}\,} \stackrel{\text{d}}{\to} N(0,1) \quad \text{as}\ n \to \infty. \end{align}
  • For $\alpha \in (\beta/2,\beta]$, there exists a random variable M such that

    (5.6) \begin{align} \lim_{n \to \infty} \frac{W_n}{n^{\alpha}} = M \quad \text{a.s. and in}\ L^2 \end{align}
    and
    (5.7) \begin{align} \frac{W_n-Mn^{\alpha}}{\sqrt{\Sigma_n}} \stackrel{\text{d}}{\to} N\bigg(0,\frac{\beta}{2\alpha-\beta}\bigg) \quad \text{as}\ n \to \infty, \end{align}
    where $\mathbb{P}(M \gt 0) \gt 0$.

Next, we define the sequence $\{T_n\}$ as in (1.6); however, $Y_k^{(n)}$ and $X_k^{(n)}$ of (1.4) and (1.5) are defined with $\{X_i\}$ as in (5.1) and (5.2) instead of (1.1) and (1.2). We call this the ERW with stops in the triangular array setting.

Our first result of this section is an extension of [10, Theorem 4.1]. We note here that [10] allows $X_1$ to take the value 0 with probability r, unlike this paper; as such, they have an extra $\delta_0$ in their results for the case $\gamma=0$.

Theorem 5.1. Let $\beta\in(0,1)$ and $\alpha\in[-\beta,\beta]$. Assume that $\{m_n\colon n\in\mathbb{N}\}$ satisfies (1.3), (1.7), and (2.8).

  (i) If $\alpha \in [-\beta,\beta/2)$ then

    (5.8) \begin{align} \frac{\gamma_n T_n}{\sqrt{\Sigma_{m_n}}\,} \stackrel{\mathrm{d}}{\to} N\bigg(0,\frac{\beta\{\gamma+\alpha(1-\gamma)\}^2}{\beta-2\alpha}+\beta\gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty. \end{align}
  (ii) If $\alpha = \beta/2$ then

    (5.9) \begin{align} \frac{\gamma_n T_n}{\sqrt{\Sigma_{m_n} \log \Sigma_{m_n}}\,} \stackrel{\mathrm{d}}{\to} N(0,\{\gamma+\beta(1-\gamma)/2\}^2) \quad \text{as}\ n \to \infty. \end{align}
  (iii) If $\alpha \in (\beta/2,\beta]$ then

    (5.10) \begin{align} \lim_{n \to \infty} \frac{\gamma_n T_n}{(m_n)^{\alpha}} =\{\gamma+\alpha(1-\gamma)\}M \quad \text{in}\ L^2, \end{align}
    where M is the random variable in (5.6). Moreover,
    (5.11) \begin{align} & \frac{\gamma_nT_n- M\cdot\{\gamma_n + \alpha(1-\gamma_n)\}\cdot(m_n)^{\alpha}}{\sqrt{\Sigma_{m_n}}} \notag \\ & \qquad \stackrel{\mathrm{d}}{\to} N\bigg(0,\frac{\beta\{\gamma+\alpha(1-\gamma)\}^2}{2\alpha-\beta}+\beta\gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty. \end{align}

Remark 5.1. Unlike the results in [10], we have a random normalization in the results above. This is because we consider the general case $\gamma \in [0,1]$. We can obtain the $L^4$-convergence in (5.10) using Burkholder’s inequality as in [1, (3.15)].

We also consider the process $\{\Xi_n \colon n \in \mathbb{N} \}$ defined by $\Xi_n\,:\!=\, \sum_{k=1}^n \{X_k^{(n)}\}^2$ for $n \in \mathbb{N}$. The next theorem is an improvement of [10, Theorem 4.2].

Theorem 5.2. Under the same conditions as in Theorem 5.1, we have

(5.12) \begin{align} \lim_{n\to\infty}\frac{\gamma_n\Xi_n}{(m_n)^{\beta}} = \{\gamma + \beta(1-\gamma)\}\Sigma \quad \text{in}\ L^2, \end{align}

where $\Sigma$ is defined in (5.3).

The strong law of large numbers and its refinement can also be obtained for the ERW with stops.

Theorem 5.3. Under the same conditions as in Theorem 5.1, we have (2.13). In addition, (2.14) holds for $c \in \big(\max\big\{\alpha,\frac12\big\},1\big)$.

Remark 5.2. Assume that $\beta\in\big(\frac12,1\big)$. As a by-product of the proof of Theorem 5.3, we can prove the a.s. convergence in (5.12). The a.s. convergence in (5.10) is valid for $\alpha \in \big(\frac12,\beta\big]$.

6. Proof of Theorem 5.1

Proof. Noting that $p=(\beta+\alpha)/2$ and $q=(\beta-\alpha)/2$, for $n \in \mathbb{N}$ we have

\begin{align*} \mathbb{P}(X_{n+1}= \pm 1 \mid \mathcal{F}_n) & = \frac{\#\{k=1,\ldots,n \colon X_k=\pm 1\}}{n} \cdot p + \frac{\#\{k=1,\ldots,n \colon X_k= \mp 1\}}{n} \cdot q \\ & = \frac{1}{2}\bigg(\beta\cdot\frac{\Sigma_n}{n} \pm \alpha\cdot\frac{W_n}{n}\bigg). \end{align*}

For $k \in (m_n,n] \cap \mathbb{N}$, we have

(6.1) \begin{align} \mathbb{P}\big(X_k^{(n)}=\pm 1 \mid \mathcal{G}_{k-1}^{(n)}\big) & = \frac{1}{2}\bigg(\beta\cdot\frac{\Sigma_{m_n}}{m_n} \pm \alpha\cdot\frac{W_{m_n}}{m_n}\bigg), \end{align}
(6.2) \begin{align} \mathbb{P}\big(\{X_k^{(n)}\}^2 = 1 \mid \mathcal{G}_{k-1}^{(n)}\big) & = \beta \cdot \dfrac{\Sigma_{m_n}}{m_n}. \end{align}

From (6.1), we see that (3.1) and (3.2) continue to hold in this setting. Defining $\{A_n\}$ and $\{B_n\}$ by (3.3), we note that they satisfy (3.4) and (3.5).

We prepare a lemma about $\{B_n\}$.

Lemma 6.1. Under the assumption of Theorem 5.1:

  (i) For $\alpha \in [-\beta,\beta]$ and $\xi \in \mathbb{R}$,

    (6.3) \begin{align} \mathbb{E}\bigg[\exp\bigg(\frac{\mathrm{i}\xi\gamma_nB_n}{\sqrt{\Sigma_{m_n}}\,}\bigg) \mid \mathcal{F}_{\infty}\bigg] & \to \exp\bigg({-}\frac{\xi^2}{2}\cdot\beta\gamma(1-\gamma)\bigg) \quad \text{as}\ n \to \infty\ \text{a.s.,} \end{align}
    (6.4) \begin{align} \mathbb{E}\bigg[\exp\bigg(\frac{\mathrm{i}\xi\gamma_nB_n}{\sqrt{\Sigma_{m_n}\log\Sigma_{m_n}}\,}\bigg) \mid \mathcal{F}_{\infty}\bigg] & \to 1 \quad \text{as}\ n \to \infty\ \text{a.s.} \end{align}
  (ii) If $\alpha \in (\beta/2,\beta]$ then $\gamma_n B_n/(m_n)^{\alpha} \to 0$ as $n \to \infty$ in $L^2$.

Proof. Note that $\mathbb{E}[ B_n \mid \mathcal{F}_{\infty}] = 0$. By (6.1) and (6.2),

\begin{equation*} \mathbb{V}[X_k^{(n)} \mid \mathcal{F}_{\infty}] = \beta \cdot \frac{\Sigma_{m_n}}{m_n}- \alpha^2 \cdot \bigg(\frac{W_{m_n}}{m_n}\bigg)^2 \end{equation*}

for $k \in (m_n,n] \cap \mathbb{N}$. As in (3.9), we have

(6.5) \begin{align} \mathbb{V}[B_n \mid \mathcal{F}_{\infty}] = (n-m_n)\cdot\bigg\{\beta\cdot\frac{\Sigma_{m_n}}{m_n}-\alpha^2\cdot\bigg(\frac{W_{m_n}}{m_n}\bigg)^2\bigg\} = \frac{1-\gamma_n}{\gamma_n}\cdot\bigg\{\beta\Sigma_{m_n}-\alpha^2\cdot\frac{W_{m_n}^2}{m_n}\bigg\}. \quad \end{align}

From this,

(6.6) \begin{align} \mathbb{V}\bigg[\frac{\gamma_nB_n}{\sqrt{\Sigma_{m_n}}\,} \mid \mathcal{F}_{\infty}\bigg] = \gamma_n(1-\gamma_n)\cdot\bigg\{\beta - \alpha^2\cdot\frac{(m_n)^{\beta}}{\Sigma_{m_n}} \cdot \bigg(\frac{W_{m_n}}{(m_n)^{(1+\beta)/2}}\bigg)^2\bigg\}. \end{align}

For any $\beta \in (0,1)$ and $\alpha \in [-\beta,\beta]$, we show that

(6.7) \begin{align} \lim_{n \to \infty} \frac{W_{m_n}}{(m_n)^{(1+\beta)/2}} = 0 \quad \text{a.s.} \end{align}

Indeed, if $\alpha \in [-\beta,\beta/2)$ then

(6.8) \begin{align} \limsup_{n \to \infty}\frac{W_n}{\sqrt{2n^{\beta}\log\log n}\,} = \sqrt{\frac{\beta\Sigma}{\beta-2\alpha}} \quad \text{a.s.} \end{align}

by [5, (3.5)]. If $\alpha=\beta/2$ then

(6.9) \begin{align} \limsup_{n \to \infty}\frac{W_n}{\sqrt{2n^{\beta}\log n\log\log\log n}\,} = \sqrt{\beta\Sigma} \quad \text{a.s.} \end{align}

by [5, (3.13)]. If $\alpha \in (\beta/2,\beta]$ then $W_{m_n}/(m_n)^{\alpha} \to M$ as $n \to \infty$ a.s. by (5.6), and $(1+\beta)/2 \gt \alpha$ since $2\alpha - \beta\leq \beta \lt 1$. In any case we have (6.7). Since $(m_n)^{\beta}/\Sigma_{m_n}\to 1/\Sigma$ as $n \to \infty$ a.s. by (5.3), we see that (6.6) converges to $\beta \gamma(1-\gamma)$ as $n \to \infty$ a.s., and

\begin{equation*} \mathbb{V}\bigg[\frac{\gamma_nB_n}{\sqrt{\Sigma_{m_n}\log\Sigma_{m_n}}\,} \mid \mathcal{F}_{\infty}\bigg] \to 0 \quad \text{as}\ n \to \infty\ \text{a.s.} \end{equation*}

By a computation similar to that in the proof of Lemma 3.1, we obtain (6.3) and (6.4) in (i).

Next, we consider (ii). By (6.5),

\begin{equation*} \mathbb{E}\bigg[\bigg(\frac{\gamma_nB_n}{(m_n)^{\alpha}}\bigg)^2\bigg] = \gamma_n(1-\gamma_n)\cdot\bigg\{\beta\cdot\frac{\mathbb{E}[\Sigma_{m_n}]}{(m_n)^{2\alpha}} - \alpha^2\cdot\frac{\mathbb{E}[(W_{m_n})^2]}{(m_n)^{1+2\alpha}}\bigg\}. \end{equation*}

From [5, (A.6)], $\mathbb{E}[(W_n)^2] \sim n^{2\alpha}/\{(2\alpha-\beta)\Gamma(2\alpha)\}$ as $n \to \infty$. On the other hand, from [5, (4.4)] we can see that $\mathbb{E}[\Sigma_n] \sim n^{\beta}/\Gamma(1+\beta)$ as $n \to \infty$. Noting that $\beta \lt 2\alpha$, we have (ii).

Assume that $\alpha \in [-\beta,\beta/2)$. By (3.4) and (5.4), we have

\begin{equation*} \frac{\gamma_nA_n}{\sqrt{\Sigma_{m_n}}\,} = \frac{c_nW_{m_n}}{\sqrt{\Sigma_{m_n}}\,} \stackrel{\text{d}}{\to} \{\gamma + \alpha(1-\gamma)\} \cdot N\bigg(0,\frac{\beta}{\beta-2\alpha}\bigg) \quad \text{as}\ n \to \infty. \end{equation*}

Combining this and (6.3), we can prove (5.8) by the same method as for (2.9). Next, we consider the case $\alpha=\beta/2$. By (3.4) and (5.5), we have

\begin{equation*} \frac{\gamma_nA_n}{\sqrt{\Sigma_{m_n}\log\Sigma_{m_n}}\,} = \frac{c_nW_{m_n}}{\sqrt{\Sigma_{m_n}\log\Sigma_{m_n}}\,} \stackrel{\text{d}}{\to} \{\gamma + \alpha(1-\gamma)\} \cdot N(0,1) \quad \text{as}\ n \to \infty. \end{equation*}

This together with (6.4) gives (5.9). As for the case $\alpha \in (\beta/2,\beta]$, by (3.4) and (5.6),

\begin{equation*} \frac{\gamma_nA_n}{(m_n)^{\alpha}} = \frac{c_nW_{m_n}}{(m_n)^{\alpha}} \to \{\gamma+\alpha(1-\gamma)\}M \quad \text{as}\ n \to \infty\ \text{a.s.\ and\ in}\ L^2. \end{equation*}

Now (5.10) follows from Lemma 6.1(ii). The proof of (5.11) is almost identical to that of (2.12): use (3.5), (5.7), and (6.3).

7. Proof of Theorem 5.2

Proof. Put $A^{\prime}_n\,:\!=\,\mathbb{E}[\Xi_n \mid \mathcal{F}_{\infty}]$ and $B^{\prime}_n\,:\!=\, \Xi_n - A^{\prime}_n$. Using (6.2), we can see that $\gamma_n A^{\prime}_n = c_n(\beta) \Sigma_{m_n}$, which together with (5.3) implies

\begin{equation*} \frac{\gamma_n A^{\prime}_n}{(m_n)^{\beta}} = \frac{c_n(\beta)\Sigma_{m_n}}{(m_n)^{\beta}} \to \{\gamma+\beta(1-\gamma)\} \cdot \Sigma \quad \text{as}\ n \to \infty\ \text{a.s.\ and in}\ L^2. \end{equation*}

As for $B^{\prime}_n$, again by (6.2) we can see that

\begin{align*} \mathbb{V}\bigg[\frac{\gamma_nB^{\prime}_n}{(m_n)^{\beta}} \mid \mathcal{F}_{\infty}\bigg] & = \frac{(\gamma_n)^2}{(m_n)^{2\beta}}\cdot\sum_{k=m_n+1}^n\mathbb{V}[\{X_k^{(n)}\}^2 \mid \mathcal{F}_{\infty}] \\ & = \frac{(\gamma_n)^2}{(m_n)^{2\beta}}\cdot(n-m_n)\cdot\beta\cdot\frac{\Sigma_{m_n}}{m_n}\cdot \bigg(1-\beta\cdot\frac{\Sigma_{m_n}}{m_n}\bigg), \\ \mathbb{E}\bigg[\bigg(\frac{\gamma_nB^{\prime}_n}{(m_n)^{\beta}}\bigg)^2\bigg] & = \frac{\beta\gamma_n(1-\gamma_n)}{(m_n)^{\beta}}\cdot \mathbb{E}\bigg[\frac{\Sigma_{m_n}}{(m_n)^{\beta}}\cdot\bigg(1-\beta\cdot\frac{\Sigma_{m_n}}{m_n}\bigg)\bigg]. \end{align*}

Since $\beta \lt 1$ and $\Sigma_{m_n}/(m_n)^{\beta}$ converges to $\Sigma$ in $L^2$ by (5.3), we have

\begin{equation*} \mathbb{E}\bigg[\frac{\Sigma_{m_n}}{(m_n)^{\beta}}\cdot\bigg(1-\beta\cdot\frac{\Sigma_{m_n}}{m_n}\bigg)\bigg] = \mathbb{E}\bigg[\frac{\Sigma_{m_n}}{(m_n)^{\beta}}\bigg] - \frac{\beta}{(m_n)^{1-\beta}}\cdot\mathbb{E}\bigg[\bigg(\frac{\Sigma_{m_n}}{(m_n)^{\beta}}\bigg)^2\bigg] \to \mathbb{E}[\Sigma] \quad \text{as}\ n \to \infty. \end{equation*}

Noting that $\beta \gt 0$, this shows that $\gamma_n B^{\prime}_n/(m_n)^{\beta} \to 0$ as $n \to \infty$ in $L^2$, which completes the proof.

8. Proof of Theorem 5.3

Proof. The proof of Lemma 4.1 is based on the fact that $|X_k^{(n)}-\mathbb{E}[X_k^{(n)}\mid \mathcal{F}_{\infty}]| \leq 1$. Thus, $\{B_n\}$ for the ERW with stops in the triangular array setting also satisfies (4.2) for any $c \in \big(\frac12,1\big)$. If $\alpha \in [-\beta,\beta/2]$ then, from (3.4), (6.8), and (6.9), we can see that $\gamma_n A_n=o((m_n)^c)$ a.s. for any $c \in (\beta/2,1)$. If $\alpha \in (\beta/2,\beta]$ then (3.4) and (5.6) imply that $\gamma_n A_n=o((m_n)^c)$ a.s. for any $c \in (\alpha,1)$. In any case, (2.14) holds for $c \in \big(\max\big\{\alpha,\frac12\big\},1\big)$.

Acknowledgements

R.R. thanks Keio University for its hospitality during multiple visits. M.T. and H.T. thank the Indian Statistical Institute for its hospitality.

Funding information

M.T. is partially supported by JSPS KAKENHI Grant Numbers JP19H01793, JP19K03514, and JP22K03333. H.T. is partially supported by JSPS KAKENHI Grant Numbers JP19K03514, JP21H04432, and JP23H01077.

Competing interests

There were no competing interests to declare that arose during the preparation or publication of this article.

References

[1] Aguech, R. and El Machkouri, M. (2024). Gaussian fluctuations of the elephant random walk with gradually increasing memory. J. Phys. A 57, 065203. Corrigendum: J. Phys. A 57, 349501.
[2] Azuma, K. (1967). Weighted sums of certain dependent random variables. Tôhoku Math. J. (2) 19, 357–367.
[3] Baur, E. and Bertoin, J. (2016). Elephant random walks and their connection to Pólya-type urns. Phys. Rev. E 94, 052134.
[4] Bercu, B. (2018). A martingale approach for the elephant random walk. J. Phys. A 51, 015201.
[5] Bercu, B. (2022). On the elephant random walk with stops playing hide and seek with the Mittag–Leffler distribution. J. Statist. Phys. 189, 12.
[6] Coletti, C. F., Gava, R. J. and Schütz, G. M. (2017). Central limit theorem for the elephant random walk. J. Math. Phys. 58, 053303.
[7] Coletti, C. F., Gava, R. J. and Schütz, G. M. (2017). A strong invariance principle for the elephant random walk. J. Statist. Mech. 2017, 123207.
[8] Guérin, H., Laulin, L. and Raschel, K. (2023). A fixed-point equation approach for the superdiffusive elephant random walk. Preprint, arXiv:2308.14630.
[9] Gut, A. and Stadtmüller, U. (2021). Variations of the elephant random walk. J. Appl. Prob. 58, 805–829.
[10] Gut, A. and Stadtmüller, U. (2022). The elephant random walk with gradually increasing memory. Statist. Prob. Lett. 189, 109598.
[11] Kubota, N. and Takei, M. (2019). Gaussian fluctuation for superdiffusive elephant random walks. J. Statist. Phys. 177, 1157–1171.
[12] Kumar, N., Harbola, U. and Lindenberg, K. (2010). Memory-induced anomalous dynamics: Emergence of diffusion. Phys. Rev. E 82, 021101.
[13] Laulin, L. (2022). Autour de la marche aléatoire de l’éléphant [About the elephant random walk]. Thesis, Université de Bordeaux.
[14] Qin, S. (2023). Recurrence and transience of multidimensional elephant random walks. Preprint, arXiv:2309.09795.
[15] Schütz, G. M. and Trimper, S. (2004). Elephants can always remember: Exact long-range memory effects in a non-Markovian random walk. Phys. Rev. E 70, 045101.