
Strong convergence of infinite color balanced urns under uniform ergodicity

Published online by Cambridge University Press:  04 September 2020

Antar Bandyopadhyay*
Affiliation:
Indian Statistical Institute, Delhi and Kolkata
Svante Janson*
Affiliation:
Uppsala University
Debleena Thacker*
Affiliation:
Uppsala University
*
*Postal address: Theoretical Statistics and Mathematics Unit, Indian Statistical Institute, Delhi Centre, 7 S. J. S. Sansanwal Marg, New Delhi 110016, India. Email address: antar@isid.ac.in
**Postal address: Department of Mathematics, Uppsala University, Box 480, 751 06 Uppsala, Sweden. Email address: svante@math.uu.se

Abstract

We consider the generalization of the Pólya urn scheme with possibly infinitely many colors, as introduced in [37], [4], [5], and [6]. For countably many colors, we prove almost sure convergence of the urn configuration under the uniform ergodicity assumption on the associated Markov chain. The proof uses a stochastic coupling of the sequence of chosen colors with a branching Markov chain on a weighted random recursive tree as described in [6], [31], and [26]. Using this coupling we estimate the covariance between any two selected colors. In particular, we re-prove the limit theorem for the classical urn models with finitely many colors.

Type
Research Papers
Copyright
© Applied Probability Trust 2020

1. Introduction

Pólya urn schemes and their various generalizations have been a key element of study for random processes with reinforcements. Starting from the seminal work of Pólya [Reference Pólya35], various types of urn schemes with finitely many colors have been widely studied in the literature. See [Reference Pemantle33] for an extensive survey of the known classical results; some of the modern works can be found in [Reference Janson24], [Reference Janson25], [Reference Bai and Hu3], [Reference Flajolet, Dumas and Puyhaubert17], [Reference Bose, Dasgupta and Maulik8], [Reference Bose, Dasgupta and Maulik9], [Reference Dasgupta and Maulik12], [Reference Chen and Kuba10], and [Reference Chen, Hsiau and Yang11].

Pólya urn models with colors indexed by a general Polish space were first introduced by Blackwell and MacQueen [Reference Blackwell and MacQueen7]. They showed that the so-called Ferguson distribution [Reference Ferguson16] on the set of probabilities on a Polish space can be obtained as a limit of a Pólya-type urn model. Other seminal work on urn models with possibly infinitely many colors was done by Hoppe [Reference Hoppe22, Reference Hoppe23] in the context of population genetics. He introduced a new type of urn scheme [Reference Hoppe22], where at each step a new color can be introduced with some positive probability. Several authors studied this slightly different model in the context of the Griffiths–Engen–McCloskey (GEM) model, the Poisson–Dirichlet distribution, and the Ewens sampling formula [Reference Donnelly and Kurtz13, Reference Hirth20, Reference Holst21, Reference Thörnblad38]. It was also observed that Hoppe’s urn scheme has a deep relation with certain combinatorial stochastic processes, known as the Chinese restaurant process [Reference Aldous1, Reference Pitman34]. It is to be noted here that one of the key ingredients in studying these urn models is the fact that the sequence of colors is exchangeable.

A somewhat different generalization for balanced urn schemes with infinitely many colors was introduced in [Reference Thacker37] and subsequently in the papers [Reference Bandyopadhyay and Thacker4], [Reference Bandyopadhyay and Thacker5], and [Reference Bandyopadhyay and Thacker6]. These models are significantly different from the two other models discussed above. Typically, for this class of models the observed color sequence need not be exchangeable, and hence a new method of analysis was introduced in [Reference Bandyopadhyay and Thacker4], [Reference Bandyopadhyay and Thacker5], and [Reference Bandyopadhyay and Thacker6]. These works have since generated a lot of interest, and such models are now receiving considerable attention [Reference Janson26, Reference Janson27, Reference Mailler and Marckert31]. In this paper we will consider the infinite color balanced urn model as introduced in [Reference Bandyopadhyay and Thacker4], [Reference Bandyopadhyay and Thacker5], and [Reference Bandyopadhyay and Thacker6], where the color set is countably infinite.

1.1. Model

In this work we will consider the same generalization of the Pólya urn scheme with infinitely many colors as defined in [Reference Bandyopadhyay and Thacker4], [Reference Bandyopadhyay and Thacker5], and [Reference Bandyopadhyay and Thacker6]. However, we will focus on the special case where the set of colors is countably infinite, which will be denoted by S. We follow a similar framework and notation to [Reference Bandyopadhyay and Thacker4], [Reference Bandyopadhyay and Thacker5], and [Reference Bandyopadhyay and Thacker6]. For the sake of completeness, we will provide a brief description of the model.

Let R be an $S \times S$ (infinite) matrix with non-negative entries, representing the replacement scheme. We will assume that R is balanced, that is, each row sum is equal and finite. In that case it is customary to take R to be a stochastic matrix (see [Reference Bandyopadhyay and Thacker6] for details).

We let $U_n \coloneqq (U_{n,v})_{v \in S} \in [0, \infty)^{S}$ denote the random configuration of the urn at time $n \geq 0$. We will view it as an infinite vector (with non-negative entries) that is in $\ell_1 \equiv \ell_1(S)$ and can thus also be viewed as a (random) finite measure on S. Intuitively, we will define $U_n$ such that if $Z_n$ is the randomly chosen color at the $(n+1)$th draw, then the conditional distribution of $Z_n$, given the ‘past’, will satisfy, for all $z \in S$,

\begin{equation*}\mathbb{P}( Z_n =z \mid U_n, U_{n-1}, \ldots, U_0 ) \propto U_n(z).\end{equation*}

Formally, starting with a non-random $U_0 \in \ell_1$ , we define $(U_n)_{n \geq 0} \subseteq \ell_1$ recursively as

(1.1) \begin{equation}U_{n+1} = U_n + R_{Z_n},\end{equation}

where $R_z$ denotes the zth row of the matrix R, and

(1.2) \begin{equation}\mathbb{P}( Z_n =z \mid U_n, U_{n-1}, \ldots, U_0 ) = \dfrac{U_{n,z}}{n+t},\end{equation}

where $U_0$ is an $\ell_1$-vector with total mass denoted by $t$, $0 < t < \infty$; that is, $\sum_{v \in S} U_{0,v} = t \in (0,\infty)$.
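To make the dynamics (1.1)–(1.2) concrete, here is a minimal simulation sketch (ours, not from the original papers; the sparse-row format and the name `simulate_urn` are our own conventions). Storing the configuration sparsely means the code also works for countably infinite S, provided each row of R has finite support.

```python
import random
from collections import defaultdict

def simulate_urn(U0, R, n_steps, seed=0):
    """Simulate the balanced urn (1.1)-(1.2).

    U0 : dict color -> initial mass (an l1 vector with total mass t > 0)
    R  : dict color -> list of (color, weight) pairs; each row sums to 1
         (a balanced, i.e. stochastic, replacement matrix stored sparsely)
    Returns the configuration U_n and the chosen colors Z_0, ..., Z_{n-1}.
    """
    rng = random.Random(seed)
    U = defaultdict(float, U0)
    t = sum(U0.values())
    Z = []
    for n in range(n_steps):
        # Draw Z_n = z with probability U_{n,z}/(n + t), as in (1.2);
        # the total mass at time n is n + t because R is balanced.
        r, acc = rng.random() * (n + t), 0.0
        for z, mass in U.items():
            acc += mass
            if r <= acc:
                break
        Z.append(z)
        for color, w in R[z]:  # add the row R_{Z_n} to U_n, as in (1.1)
            U[color] += w
    return U, Z
```

For instance, `R = {0: [(0, 1.0)], 1: [(1, 1.0)]}` (the identity replacement) recovers the classical two-color Pólya urn.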

Observe that one can now associate with such an urn model a Markov chain $(X_n)_{n \geq 0}$ on the countable state space S, with transition matrix R and initial distribution $U_0/t$. Conversely, given any Markov chain $(X_n)_{n \geq 0}$ on the countable state space S with transition matrix R, and a vector $U_0\in \ell_1$, one can associate a balanced urn model $(U_n)_{n \geq 0}$ satisfying equations (1.1) and (1.2). We call this the Markov chain associated with the urn model $(U_n)_{n \geq 0}$.

Bandyopadhyay and Thacker [Reference Bandyopadhyay and Thacker5] and Mailler and Marckert [Reference Mailler and Marckert31] have observed that the asymptotic properties of the urn model so defined are determined by the asymptotic properties of the associated Markov chain. In fact they have shown [Reference Bandyopadhyay and Thacker5, Reference Mailler and Marckert31] that the urn sequence $(U_n)_{n \geq 0}$ has the same law as that of a branching Markov chain with transition matrix R, initial distribution $U_0/t$ , and defined on a random recursive tree. In Section 2 we provide the details of this representation.

1.2. Main result

In this paper we consider the case when R is irreducible, aperiodic, and positive recurrent. From the classical theory (see Section XV.7 of [Reference Feller15] for the details), it is well known that, in that case, the chain has a unique stationary distribution, say $\pi$, satisfying the equation

\begin{equation*} \pi R=\pi.\end{equation*}

Moreover, such a chain is ergodic, that is, for any $u, v \in S$,

\begin{equation*}\lim_{n \to \infty} R^{n}(u, v)= \pi_{v},\end{equation*}

where $R^n$ is the n-step transition matrix, which is simply the n-fold composition of R with itself. Note that as S is countable, $R^n$ is just the n-fold multiplication of R.

In this work we will further assume that the chain is uniformly ergodic. For the sake of completeness, we provide the definition here. (One often uses a version with summation over v in (1.3); we need only the version below.)

Definition 1. A Markov chain with transition matrix R on a countable state space S is called uniformly ergodic if there exist constants $0< \rho < 1$ and $C > 0$ such that, for any time $n \geq 1$ and any states $u, v \in S$,

(1.3) \begin{equation}\vert R^n(u, v)-\pi_{v} \vert \leq C \rho^n.\end{equation}

We note here that if S is finite then an irreducible and aperiodic chain is necessarily uniformly ergodic (see Theorem 4.9 of [Reference Levin, Peres and Wilmer29]). However, when S is infinite (even countable) there are ergodic chains which are not uniformly ergodic (see e.g. [Reference Häggström19]).
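As a numerical illustration of Definition 1 (our sketch; the 3-state matrix below is an arbitrary choice), for a finite irreducible and aperiodic chain one can compute $\sup_{u,v} \vert R^n(u,v)-\pi_v \vert$ directly and watch it decay geometrically:

```python
import numpy as np

# An arbitrary irreducible, aperiodic stochastic matrix on three states.
R = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

# Stationary distribution pi: the left eigenvector of R for eigenvalue 1.
vals, vecs = np.linalg.eig(R.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()

Rn = np.eye(3)
for n in range(1, 11):
    Rn = Rn @ R
    # sup_{u,v} |R^n(u,v) - pi_v|; by (1.3) this is at most C * rho^n
    print(n, np.abs(Rn - pi).max())
```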

Our main result is as follows.

Theorem 1. Consider an urn model $(U_n)_{n \geq 0}$ as defined by the equations (1.1) and (1.2), with colors indexed by a countably infinite set S, a balanced replacement matrix R, and an initial configuration $U_0$. We assume that R is a stochastic matrix which is irreducible, aperiodic, and positive recurrent with stationary distribution $\pi$, and uniformly ergodic; that is, it satisfies (1.3).

  (i) Then, as $n \to \infty$,

    (1.4) \begin{equation}\dfrac{U_n}{n+t}\longrightarrow \pi \quad {a.s.},\end{equation}
    where the convergence is coordinate-wise and also in $\ell_1$.
  (ii) For any $v \in S$, let $N_{n,v}\coloneqq \sum_{k=0}^n \mathds{1}_{\{Z_k=v\}}$ denote the number of times the color v is chosen up to time n. Then, as $n \to \infty$,

    (1.5) \begin{equation}\dfrac{N_{n,v}}{n+1} \longrightarrow \pi_v \quad {a.s.},\end{equation}
    where the convergence is coordinate-wise and also in $\ell_1$.
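In the finite-color case, Theorem 1 is easy to check empirically. The following sketch (ours) reuses `simulate_urn` from the Section 1.1 sketch, together with the matrix `R` and vector `pi` computed in the sketch after Definition 1, and compares the empirical proportions with $\pi$.

```python
# Reuses simulate_urn (Section 1.1 sketch) and R, pi (Definition 1 sketch).
rows = {i: [(j, R[i, j]) for j in range(3)] for i in range(3)}
U0 = {0: 1.0}                        # all initial mass on color 0, so t = 1
n, t = 200_000, 1.0
U, Z = simulate_urn(U0, rows, n)

for v in range(3):
    print(v,
          U[v] / (n + t),              # U_{n,v}/(n+t)  -> pi_v, as in (1.4)
          sum(z == v for z in Z) / n,  # N_{n-1,v}/n    -> pi_v, as in (1.5)
          pi[v])                       # the stationary probability of color v
```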

1.3. Background and motivation

It is known (see e.g. Theorems 3.3(a) and 3.4(a) of [Reference Bandyopadhyay and Thacker5]) that under our set-up, as $n \to \infty$,

\begin{equation*} \dfrac{U_n}{n+t} \stackrel{p}{\longrightarrow} \pi,\end{equation*}

and also, for any $v \in S$,

\begin{equation*} \mathbb{P}(Z_n=v) = \dfrac{\mathbb{E}[U_{n,v}]}{n+t} \longrightarrow \pi_v.\end{equation*}

Recall that $Z_n$ denotes the randomly chosen color at the $(n+1)$th draw from the urn, when its (random) configuration is $U_n$. Our result strengthens this convergence in probability to almost sure convergence. However, we would like to point out that the results in [Reference Bandyopadhyay and Thacker5, Theorems 3.3(a) and 3.4(a)] only need the assumption of ergodicity for the associated Markov chain, while our main result in this work needs the stronger assumption of uniform ergodicity. As discussed above, the two assumptions are identical when S is finite. It is worthwhile to note here that for finite S our result is essentially the classical result for Freedman–Pólya–Eggenberger-type urn models [Reference Athreya and Karlin2, Reference Bai and Hu3, Reference Gouet18, Reference Janson24]. The classical results mainly use three types of technique, namely martingale techniques [Reference Bose, Dasgupta and Maulik8, Reference Bose, Dasgupta and Maulik9, Reference Dasgupta and Maulik12, Reference Gouet18], stochastic approximation [Reference Laruelle and Pagès28], and embedding into continuous-time pure birth processes [Reference Athreya and Karlin2, Reference Bai and Hu3, Reference Janson24, Reference Janson25]. Typically the analysis of a finite color urn depends heavily on the Perron–Frobenius theory [Reference Seneta36] of matrices with positive entries and the Jordan decomposition of finite-dimensional matrices [Reference Athreya and Karlin2, Reference Bai and Hu3, Reference Bose, Dasgupta and Maulik8, Reference Dasgupta and Maulik12, Reference Gouet18, Reference Janson24, Reference Janson25]. Unfortunately, such techniques are unavailable when S is infinite, even when it is countable. Our method bypasses them and instead uses the newer approach developed in [Reference Bandyopadhyay and Thacker5] and [Reference Mailler and Marckert31]. Our extra assumption of uniform ergodicity is needed only when S is infinite. Thus the result stated above re-proves the classical result for the finite color urn model using the new technique, and it essentially completes the work developed in [Reference Bandyopadhyay and Thacker5] and [Reference Mailler and Marckert31] for the case when S is countable. We would like to note here that similar results for a null recurrent case (when the chain is a random walk) have been derived in [Reference Mailler and Marckert31] and [Reference Janson26].

1.4. Discussion on the assumption of uniform ergodicity

As discussed above, when S is finite the assumption of uniform ergodicity is equivalent to the assumption of ergodicity of the associated Markov chain [Reference Levin, Peres and Wilmer29]; in particular, it holds for irreducible and aperiodic chains. However, when S is infinite it is indeed a much stronger assumption. Necessary and sufficient conditions under which a chain is uniformly ergodic can be found in [Reference Meyn and Tweedie32]. In particular, an irreducible and aperiodic chain on a countable state space is uniformly ergodic if and only if Doeblin’s condition is satisfied (see Section 16.2 of [Reference Meyn and Tweedie32]). This condition is satisfied by many Markov chains on a countably infinite state space, but it is indeed restrictive. We will need this assumption in the proof we provide in Section 3. However, we believe that this condition is not necessary in general. In fact we make the following conjecture.

Conjecture 1. Consider an urn model $(U_n)_{n \geq 0}$ as defined by equations (1.1) and (1.2), with colors indexed by a countably infinite set S, a balanced replacement matrix R, and an initial configuration $U_0$. Assume that R is a stochastic matrix which is irreducible, aperiodic, and positive recurrent with stationary distribution $\pi$. Then the convergence in (1.4) and (1.5) holds a.s. and also in $\ell_1$.
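In connection with Doeblin’s condition mentioned above, we record its standard quantitative consequence (a well-known fact, included here only for the reader’s convenience; see e.g. Chapter 16 of [Reference Meyn and Tweedie32]): if there exist an integer $m \geq 1$, a constant $\varepsilon > 0$, and a probability measure $\nu$ on S such that

\begin{equation*} R^{m}(u, \cdot) \geq \varepsilon\, \nu(\cdot) \quad \text{for all } u \in S,\end{equation*}

then a coupling argument gives $\sup_{u \in S} \vert R^{n}(u,v)-\pi_{v} \vert \leq (1-\varepsilon)^{\lfloor n/m \rfloor}$ for every $v \in S$, so that (1.3) holds with $C=(1-\varepsilon)^{-1}$ and $\rho=(1-\varepsilon)^{1/m}$.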

1.5. Outline

In the following section we provide some details about the representation of a balanced urn in terms of a branching Markov chain on a weighted random recursive tree, which is our main tool to prove Theorem 1. Section 3 provides the proof of Theorem 1. In Section 4 we discuss a non-trivial application of our main result.

2. Coupling of branching Markov chains and urn models

It is known from [Reference Thacker37], [Reference Bandyopadhyay and Thacker5], [Reference Mailler and Marckert31], and [Reference Janson26] that the law of the entire sequence of randomly selected colors $(Z_n)_{n \geq 0}$ can be represented in terms of a branching Markov chain on a random recursive tree. For the sake of completeness, we will briefly discuss this representation here. We will later use this representation to prove the main result of the paper.

2.1. Weighted random recursive tree

Random recursive trees (RRTs) are well studied in the literature; see Chapter 6 of [Reference Drmota14]. The weighted version of the RRT was introduced in [Reference Janson26]. For $n \geq -1$, let $\mathcal{T}_n$ be the random recursive tree on $n+2$ vertices, with root o and the other vertices labeled $\{w_0,w_1,\ldots, w_n\}$, where increasing subscripts indicate the order in which the vertices are attached. The root is given some initial weight $t>0$; every other vertex has weight 1. We start with $\mathcal{T}_{-1}$, which consists only of the root o, and construct the sequence of trees $(\mathcal{T}_n)_{n \geq -1}$ recursively: the parent of the node incoming to $\mathcal{T}_n$ is chosen in proportion to weight, that is, the parent is the root o with probability $t/(n+t+1)$, and any given other vertex with probability $1/(n+t+1)$. Define the infinite random recursive tree as

\begin{equation*} \mathcal{T}\coloneqq \bigcup_{n \geq -1} \mathcal{T}_n.\end{equation*}
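The construction above is straightforward to code. Here is a minimal sketch (ours; the representation `parent[k]`, with $-1$ encoding the root o, is our own convention):

```python
import random

def weighted_rrt(n, t=1.0, seed=0):
    """Grow the weighted RRT T_{-1}, T_0, ..., T_n.
    Returns parent[k] = index of the parent of w_k, with -1 encoding the
    root o (which carries weight t; every other vertex has weight 1)."""
    rng = random.Random(seed)
    parent = []
    for k in range(n + 1):
        # w_k attaches into T_{k-1}, whose total weight is t + k: the root
        # with probability t/(k+t), each existing w_i with probability 1/(k+t).
        r = rng.random() * (t + k)
        parent.append(-1 if r < t else int(r - t))
    return parent
```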

2.2. Branching Markov chain on RRT

The definition of a branching Markov chain on a random recursive tree, which we abbreviate as BMC on RRT, as used in this paper, is detailed in [Reference Bandyopadhyay and Thacker5] and [Reference Mailler and Marckert31]. To facilitate the reading of this paper, we briefly describe the BMC on RRT as given in [Reference Bandyopadhyay and Thacker5].

Recall that the set of colors is indexed by a set S. Let $\Delta\not\in S$ be an auxiliary symbol. We say that a stochastic process $(W_n)_{n \geq -1}$ with state space $S\cup \{\Delta\}$ is a branching Markov chain on $\mathcal{T}$, starting at the root o with $W_{-1}=\Delta$, if, for any $n \geq 0$ and any $z \in S$,

\begin{equation*} \mathbb{P} (W_n = z \mid W_{n-1}, W_{n-2}, \ldots, W_{-1}; \mathcal{T}_n )=\begin{cases}U_0(z)/t & \text{if $\stackrel{\leftarrow}{w_n} = o$,}\\[3pt]R (W_{j}, z) & \text{if $\stackrel{\leftarrow}{w_n}=w_j$,}\end{cases}\end{equation*}

where $\stackrel{\leftarrow}{w_n}$ is the parent of the vertex $w_n$ in $\mathcal{T}_n$. We denote the vertices of $\mathcal{T}_n$ as $\{o,w_0,w_1, \ldots, w_n \}$.

2.3. Representation theorems

The coupling of $(Z_n)_{n \geq 0}$ and $(W_n)_{n \geq 0}$ is given in detail in [Reference Bandyopadhyay and Thacker5] and [Reference Mailler and Marckert31]. Here we follow the same notation as in [Reference Bandyopadhyay and Thacker5]. The following representation is available in Theorem 2.1 of [Reference Bandyopadhyay and Thacker5]:

(2.1) \begin{equation}(Z_n)_{n \geq 0}\stackrel{d}{=} (W_n)_{n \geq 0}.\end{equation}
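In code, (2.1) says that the color sequence can be sampled by first growing the tree and then propagating colors down it: each $w_n$ gets a color from $U_0/t$ if its parent is the root, and otherwise by one R-step from its parent’s color. The sketch below (ours) reuses `weighted_rrt` from the Section 2.1 sketch and the sparse-row format of Section 1.1; the helper `sample_row` is our own, not notation from the papers cited.

```python
import random

def sample_row(pairs, rng):
    """Sample a color from a sparse probability row [(color, weight), ...]."""
    r, acc = rng.random(), 0.0
    for color, w in pairs:
        acc += w
        if r <= acc:
            return color
    return color  # guard against floating-point round-off in the last entry

def bmc_on_rrt(n, U0_row, R, t=1.0, seed=1):
    """Return (W_0, ..., W_n), equal in law to (Z_0, ..., Z_n) by (2.1).

    U0_row is the sparse row for U_0/t; R maps each color to its sparse row."""
    parent = weighted_rrt(n, t, seed)       # the Section 2.1 sketch
    rng = random.Random(seed + 1)           # independent stream for the colors
    W = []
    for k in range(n + 1):
        if parent[k] == -1:                 # parent is the root o
            W.append(sample_row(U0_row, rng))
        else:                               # one R-step from the parent's color
            W.append(sample_row(R[W[parent[k]]], rng))
    return W
```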

3. Proof of the main results

Recall that $\mathcal{T}_n$ denotes the weighted RRT with $n+2$ vertices; for convenience we also use $\mathcal{T}_n$ to denote its vertex set $\{o,w_0,w_1,\ldots, w_n\}$. Let $\mathcal{T}^{\prime}_n\coloneqq \mathcal{T}_n\setminus\{o\}=\{w_0,w_1,\ldots, w_n\}$, the set of $n+1$ vertices excluding the root. Note that the RRT $\mathcal{T}_n$ is random, but the vertex set is non-random.

For $u,w\in\mathcal{T}_n$, let d(u, w) denote the graph distance between u and w. In particular, d(o, u) is the depth of u, which we also denote by d(u).

We begin by proving the following lemma, where $\rho$ is as in Definition 1.

Lemma 1. Let $\mathcal{L}(u,w)$ denote the least common ancestor of the vertices u and w in the RRT. Given the RRT $\mathcal{T}_n$, we have, for some suitable constant $C>0$,

(3.1) \begin{equation}\text{Cov}(\mathds{1}_{\{W_u=v\}}, \mathds{1}_{\{W_w=v\}} \mid \mathcal{T}_n ) \leq C \rho^{\max(d(u, \mathcal{L}(u,w)),\, d(w,\mathcal{L}(u,w)))} \leq C \rho^{d(u,w)/2}.\end{equation}

Proof. Let $\mathbb{P}_n$ denote the conditional probability given the RRT $\mathcal{T}_n$ . By definition,

\begin{equation*}\text{Cov}(\mathds{1}_{\{W_u=v\}}, \mathds{1}_{\{W_w=v\}} \mid \mathcal{T}_n)= \mathbb{P}_n (W_u=v,W_w=v )- \mathbb{P}_n (W_u=v)\mathbb{P}_n (W_w=v).\end{equation*}

With $\mathcal{L}(u,w)$ denoting the least common ancestor between u and w, it is easy to see that

\begin{equation*}\mathbb{P}_n (W_u=v,W_w=v )= \sum_{s \in S} \mathbb{P}_n (W_{\mathcal{L}(u,w)}=s)R^{d(u, \mathcal{L}(u,w))}(s,v) R^{d(w, \mathcal{L}(u,w))}(s,v).\end{equation*}

Thus

\begin{align*} & \text{Cov}(\mathds{1}_{\{W_u=v\}}, \mathds{1}_{\{W_w=v\}} \mid \mathcal{T}_n)\\[3pt] &\quad =\sum_{s \in S}\mathbb{P}_n (W_{\mathcal{L}(u,w)}=s)R^{d(u, \mathcal{L}(u,w))}(s,v) R^{d(w, \mathcal{L}(u,w))}(s,v)\\[3pt] &\quad\quad\, -\sum_{s,s^{\prime} \in S}\mathbb{P}_n (W_{\mathcal{L}(u,w)}=s)\mathbb{P}_n (W_{\mathcal{L}(u,w)}=s^{\prime})R^{d(u, \mathcal{L}(u,w))}(s,v)R^{d(w, \mathcal{L}(u,w))}(s^{\prime},v)\\[3pt] &\quad =\sum_{s \in S}\mathbb{P}_n (W_{\mathcal{L}(u,w)}=s)R^{d(u, \mathcal{L}(u,w))}(s,v)\\[3pt] &\quad\quad\, \times \biggl[R^{d(w, \mathcal{L}(u,w))}(s,v) -\sum_{s^{\prime} \in S}\mathbb{P}_n (W_{\mathcal{L}(u,w)}=s^{\prime})R^{d(w, \mathcal{L}(u,w))}(s^{\prime},v)\biggr] \notag \\[3pt] &\quad =\sum_{s \in S}\mathbb{P}_n (W_{\mathcal{L}(u,w)}=s)R^{d(u, \mathcal{L}(u,w))}(s,v) \\[3pt] &\quad\quad\, \times \biggl[(R^{d(w, \mathcal{L}(u,w))}(s,v)-\pi_v) -\sum_{s^{\prime} \in S}\mathbb{P}_n (W_{\mathcal{L}(u,w)}=s^{\prime})(R^{d(w, \mathcal{L}(u,w))}(s^{\prime},v)-\pi_v)\biggr].\end{align*}

The last equality is obtained by adding and subtracting $\pi_v$ inside the final square bracket. Recall that we have assumed uniform ergodicity for the Markov chain, so for both $s$ and $s^{\prime}$ we have

\begin{equation*}|R^{d(w, \mathcal{L}(u,w))}(s,v)-\pi_v| \leq C\rho^{d(w, \mathcal{L}(u,w))},\end{equation*}

which implies that

\begin{equation*}|\text{Cov}(\mathds{1}_{\{W_u=v\}}, \mathds{1}_{\{W_w=v\}} \mid \mathcal{T}_n)| \leq 2C \rho^{d(w, \mathcal{L}(u,w))}.\end{equation*}

The first inequality in (3.1) now follows by symmetry between u and w. The second inequality follows because $d(u,w)=d(u, \mathcal{L}(u,w))+d(w, \mathcal{L}(u,w))$, so the maximum of the two exponents is at least $d(u,w)/2$, and $0<\rho<1$.
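The exact conditional covariance used in this proof is easy to evaluate numerically. In the sketch below (ours), `mu` plays the role of the conditional law of $W_{\mathcal{L}(u,w)}$ given the tree, and `a`, `b` the depths $d(u,\mathcal{L}(u,w))$ and $d(w,\mathcal{L}(u,w))$; the printed values decay geometrically, as (3.1) predicts.

```python
import numpy as np

R = np.array([[0.5, 0.3, 0.2],      # the 3-state chain used in earlier sketches
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

def cov_given_tree(mu, a, b, v):
    """Cov(1{W_u=v}, 1{W_w=v} | T_n) when W_L ~ mu, d(u,L) = a, d(w,L) = b,
    computed exactly from the expansion in the proof of Lemma 1."""
    Ra = np.linalg.matrix_power(R, a)
    Rb = np.linalg.matrix_power(R, b)
    joint = (mu * Ra[:, v] * Rb[:, v]).sum()        # P_n(W_u = v, W_w = v)
    return joint - (mu @ Ra)[v] * (mu @ Rb)[v]

mu = np.array([1.0, 0.0, 0.0])      # e.g. W_L = color 0 almost surely
for k in range(1, 8):
    print(k, abs(cov_given_tree(mu, k, k, v=1)))    # geometric decay in k
```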

Lemma 2. Fix r with $0< r<1$ and define

(3.2) \begin{align} A_n=A_n(r)&\coloneqq \mathbb{E} \sum_{u \in \mathcal{T}^{\prime}_{n}}r^{d(u)},\end{align}
(3.3) \begin{align}B_n=B_n(r)&\coloneqq \mathbb{E} \sum_{u,w \in \mathcal{T}^{\prime}_{n}}r^{d(u,w)}. \end{align}

Then, for some constant C (possibly depending on r and t) and all $n\geq1$,

(3.4) \begin{align} A_n &\leq C n^r,\end{align}
(3.5) \begin{align} B_n &\leq \begin{cases} C n^{2r} & \dfrac12<r<1,\\[2pt] C n\log (n+1) & r=\dfrac12,\\[2pt] C n & 0<r<\dfrac12. \end{cases} \end{align}

Much more precise asymptotic formulas can be derived by the same method, but we do not need them.

Proof. Recall that $w_n$ is the $(n+1)$th vertex to be attached, and assume that $w_n$ is attached to $w\in\mathcal{T}_{n-1}$. Then, for all $u\in\mathcal{T}_{n-1}$,

\begin{equation*}d(u, w_n) = d(u,w)+1.\end{equation*}

Hence

\begin{align*} A_{n} & = A_{n-1}+ \mathbb{E} r^{d(w_n)} \\[3pt] & = A_{n-1}+ \dfrac{1}{n+t} \mathbb{E} \biggl( \sum_{u \in \mathcal{T}^{\prime}_{n-1}} r^{d(u)+1}\biggr) +\dfrac{t}{n+t} r \\[3pt] &=\biggl({1+\dfrac{r}{n+t}}\biggr)A_{n-1}+\dfrac{tr}{n+t}.\end{align*}

Consequently, by induction and using $A_0=r$,

(3.6) \begin{equation} A_n = r\sum_{k=0}^n\biggl(\dfrac{t}{k+t}\prod_{j=k+1}^n\biggl({1+\dfrac{r}{j+t}}\biggr)\biggr) = rt\sum_{k=0}^n\dfrac{\Gamma(n+1+t+r)}{\Gamma(n+1+t)}\dfrac{\Gamma(k+t)}{\Gamma(k+1+t+r)}.\end{equation}

By standard asymptotics for the gamma function (following from Stirling’s formula), this yields

(3.7) \begin{equation} A_n \leq rt\sum_{k=0}^n C \dfrac{(n+1)^r}{(k+1)^{r+1}} \leq C(n+1)^r,\end{equation}

showing (3.4).

For (3.5) we argue similarly. We have

\begin{align*} B_{n} & = B_{n-1}+2 \mathbb{E}\biggl( \sum_{u\in \mathcal{T}^{\prime}_{n-1}} r^{d(u,w_n)}\biggr)+1\\[3pt] & = B_{n-1}+ \dfrac{2}{n+t} \mathbb{E} \biggl( \sum_{u,w \in \mathcal{T}^{\prime}_{n-1}} r^{d(u,w)+1}\biggr) +\dfrac{2t}{n+t} \mathbb{E} \biggl( \sum_{u\in \mathcal{T}^{\prime}_{n-1}} r^{d(u,o)+1}\biggr)+1 \\[3pt] &=\biggl({1+\dfrac{2r}{n+t}}\biggr)B_{n-1}+\dfrac{2rt}{n+t}A_{n-1}+1\end{align*}

and, with $A_{-1}\coloneqq 0$, using $B_0=1$,

(3.8) \begin{equation} B_n = \sum_{k=0}^n\biggl(\biggl({1+\dfrac{2rt}{k+t}A_{k-1}}\biggr) \prod_{j=k+1}^n\biggl({1+\dfrac{2r}{j+t}}\biggr)\biggr).\end{equation}

We use the crude estimate $A_{k-1}\leq k$ and estimate the product in (3.8) using gamma functions as in (3.6)–(3.7) (with r replaced by 2r). This yields

\begin{equation*} B_n \leq C \sum_{k=0}^n \prod_{j=k+1}^n\biggl({1+\dfrac{2r}{j+t}}\biggr) \leq C \sum_{k=0}^n \dfrac{(n+1)^{2r}}{(k+1)^{2r}}.\end{equation*}

This implies (3.5) by a simple summation.
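The recursions derived in this proof also give an exact numerical scheme for $A_n$ and $B_n$. The sketch below (ours) iterates them and exhibits the growth rates of (3.4)–(3.5) for a value $r>1/2$:

```python
def AB(n_max, r, t=1.0):
    """Exact A_n(r) and B_n(r) via the recursions in the proof of Lemma 2."""
    A, B = r, 1.0                       # A_0 = r and B_0 = 1
    out = [(A, B)]
    for n in range(1, n_max + 1):
        A_prev = A                      # A_{n-1}, needed in the B-recursion
        A = (1 + r / (n + t)) * A + t * r / (n + t)
        B = (1 + 2 * r / (n + t)) * B + 2 * r * t / (n + t) * A_prev + 1
        out.append((A, B))
    return out

vals = AB(10_000, r=0.6)
for n in (100, 1_000, 10_000):
    A, B = vals[n]
    # For r = 0.6 the ratios A_n/n^r and B_n/n^{2r} stay bounded,
    # matching (3.4) and the first case of (3.5).
    print(n, A / n**0.6, B / n**1.2)
```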

3.1. Proof of Theorem 1

Proof. We observe that the basic recursion (1.1) can also be written as

\begin{equation*} U_{n+1}=U_{n} + \chi_{n+1} R{,}\end{equation*}

where $\chi_{n+1} = (\chi_{n+1,v})_{v \in S}$ is such that $\chi_{n+1,Z_n}=1$ and $\chi_{n+1,u} = 0$ if $u \neq Z_n$. In other words,

\begin{equation*}U_{n+1}=U_n + R_{Z_n}{,}\end{equation*}

where $R_{Z_n}$ is the $Z_n$ th row of the matrix R. Hence

(3.9) \begin{align} U_{n+1}& = U_0 + \sum_{k=1}^{n+1}\chi_k R,\end{align}
(3.10) \begin{align} \dfrac{U_{n+1}-U_0}{n+t}& = \dfrac{1}{n+t} \sum_{k=1}^{n+1}\chi_k R. \end{align}

To prove (1.4), by (3.10) and since

\begin{equation*}\dfrac{n+1}{n+t} \longrightarrow 1\quad\text{as $n \to \infty$,}\end{equation*}

it is enough to show that

(3.11) \begin{equation} \dfrac{1}{n+1} \sum_{k=1}^{n+1}\chi_k R \longrightarrow \pi \quad \text{in $\ell_1(S)$, a.s.}\end{equation}

Since R is balanced, the mapping $x\mapsto xR$ is a bounded map $\ell_1(S)\to\ell_1(S)$, and since furthermore $\pi R =\pi$, to prove (3.11) it is enough to show that, as $ n \to \infty$,

(3.12) \begin{equation} \dfrac{1}{n+1} \sum_{k=1}^{n+1}\chi_{k}\longrightarrow \pi \quad \text{in $\ell_1(S)$, a.s.}\end{equation}

Both sides of (3.12) can be regarded as probability distributions on S, and therefore the convergence in $\ell_1$ is equivalent to convergence of every coordinate, that is, to

(3.13) \begin{equation} \dfrac{1}{n+1} \sum_{k=1}^{n+1}\chi_{k,v}\longrightarrow \pi_v \quad\text{a.s.\ for every }v \in S.\end{equation}

Moreover, (1.5) is just another way to write (3.13). Hence, to show the theorem it suffices to show (3.13).

Recall that $\chi_{k,v}=\mathds{1}_{\{Z_{k-1}=v\}}.$ From Theorem 3.3(a) of [Reference Bandyopadhyay and Thacker5] (which is easily extended to general t), it follows that

\begin{equation*} \dfrac{1}{n+1}\mathbb{E} \Biggl[ \sum_{k=1}^{n+1}\chi_{k,v}\Biggr]= \dfrac{1}{n+1} \sum_{k=0}^{n}\mathbb{P}(Z_k=v) \longrightarrow \pi_v \quad \text{as $ n \to \infty$.}\end{equation*}

Note that $|\mathds{1}_{\{Z_k=v\}}-\mathbb{E} \mathds{1}_{\{Z_k=v\}}| \leq 1$ . Therefore, from the Strong Law of Large Numbers for correlated random variables [Reference Lyons30, Theorem 1], it follows that if we prove

\begin{equation*} \sum_{n \geq 0} \dfrac{1}{n+1}\text{Var} \biggl(\dfrac{1}{n+1} \sum_{k=0}^{n}\mathds{1}_{\{Z_k=v\}}\biggr)<\infty,\end{equation*}

then, as $n \to \infty$,

\begin{equation*} \dfrac{1}{n+1} \sum_{k=1}^{n+1}\chi_{k,v}=\dfrac{1}{n+1} \sum_{k=0}^{n}\mathds{1}_{\{Z_k=v\}} \longrightarrow \pi_v \quad \text{a.s.},\end{equation*}

which will complete the proof. In other words, if we define

(3.14) \begin{equation}J_{n,v}\coloneqq \text{Var} \biggl( \sum_{k=0}^{n}\mathds{1}_{\{Z_k=v\}}\biggr),\end{equation}

then it suffices to show that

(3.15) \begin{equation} \sum_{n=1}^\infty \dfrac{1}{n^3}J_{n,v}<\infty.\end{equation}

Now, recalling (2.1), (3.14) can be expanded as

(3.16) \begin{equation}J_{n,v}=\text{Var} \biggl( \sum_{k=0}^{n}\mathds{1}_{\{W_k=v\}}\biggr)= \sum_{u,w\in\mathcal{T}^{\prime}_n}\text{Cov} (\mathds{1}_{\{W_u=v\}},\mathds{1}_{\{W_w=v\}}).\end{equation}

We use the conditional covariance formula to get

(3.17) \begin{align}&\text{Cov} (\mathds{1}_{\{W_u=v\}},\mathds{1}_{\{W_w=v\}}) \notag \\[3pt] &\quad = \mathbb{E} [\text{Cov} (\mathds{1}_{\{W_u=v\}}, \mathds{1}_{\{W_w=v\}} \mid \mathcal{T}_{n})] +\text{Cov} ({\mathbb{E}[\mathds{1}_{\{W_u=v\}}\mid \mathcal{T}_{n}], \mathbb{E}[\mathds{1}_{\{W_w=v\}}\mid \mathcal{T}_{n}]}).\end{align}

Now, using Lemma 1, we obtain

\begin{equation*}\text{Cov} (\mathds{1}_{\{W_u=v\}}, \, \mathds{1}_{\{W_w=v\}} \mid \mathcal{T}_{n} )\leq C \rho^{{{d(u,w)}/{2}}},\end{equation*}

where d(u,w) denotes the graph distance between u and w, and C is a suitable positive constant. Therefore, from (3.16)–(3.17), the contribution to $J_{n,v}$ from the first part of (3.17) is at most

\begin{equation*} C\mathbb{E} \biggl( \sum_{u,w \in \mathcal{T}^{\prime}_{n}} \rho^{d(u,w)/2} \biggr)=CB_n({\rho^{1/2}}),\end{equation*}

where we recall (3.3) and take $r\coloneqq \rho^{1/2}$.

For the second part on the right-hand side of (3.17), we have that, given $\mathcal{T}_n$, the distribution of $W_u$ is $(U_0/t)R^{d(u)}$. Hence

\begin{equation*} \mathbb{E}({\mathds{1}_{\{W_u=v\}}\mid\mathcal{T}_n})=(U_0/t)R^{d(u)}(v),\end{equation*}

and thus it follows from the uniform ergodicity assumption (1.3) that

\begin{equation*}\vert{\mathbb{E}({\mathds{1}_{\{W_u=v\}}\mid\mathcal{T}_n})-\pi_v}\vert \leq C\rho^{d(u)}.\end{equation*}

Consequently

\begin{align*}& \sum_{u,w\in\mathcal{T}^{\prime}_n}\text{Cov} ({\mathbb{E}[\mathds{1}_{\{W_u=v\}}\mid \mathcal{T}_{n}], \, \mathbb{E}[\mathds{1}_{\{W_w=v\}}\mid \mathcal{T}_{n}]})\\*&\quad = \text{Var}\biggl({\sum_{u\in\mathcal{T}^{\prime}_n}\mathbb{E}[\mathds{1}_{\{W_u=v\}}\mid \mathcal{T}_{n}]}\biggr) \\*&\quad = \text{Var}\biggl({\sum_{u\in\mathcal{T}^{\prime}_n}({\mathbb{E}[\mathds{1}_{\{W_u=v\}}\mid \mathcal{T}_{n}]-\pi_v})}\biggr)\\&\quad \leq \mathbb{E}\biggl({\sum_{u\in\mathcal{T}^{\prime}_n} ({\mathbb{E}[\mathds{1}_{\{W_u=v\}}\mid \mathcal{T}_{n}]-\pi_v})}\biggr) ^2 \\&\quad \leq \mathbb{E}\biggl({C\sum_{u\in\mathcal{T}^{\prime}_n}\rho^{d(u)}}\biggr)^2\\&\quad = C \mathbb{E} \sum_{u,w\in\mathcal{T}^{\prime}_n}\rho^{d(u)+d(w)}\\&\quad \leq C \mathbb{E} \sum_{u,w\in\mathcal{T}^{\prime}_n}\rho^{d(u,w)} \\*&\quad = C B_n(\rho),\end{align*}

where we use the fact that $d(u)+d(w) \geq d(u,w)$ and $0< \rho<1$ to obtain the last inequality. Hence the contribution to $J_{n,v}$ from the second part of (3.17) is at most $CB_n(\rho)$ .

Combining the contributions from the two parts of (3.17), we have thus shown that, recalling $0<\rho<1$ ,

\begin{equation*} J_{n,v}\leq C B_n ({\rho^{1/2}}) + CB_n(\rho) \leq C B_n ({\rho^{1/2}}).\end{equation*}

Hence we can apply Lemma 2 with $r=\rho^{1/2}<1$: since $2r<2$, in all three cases of (3.5) we have $B_n(\rho^{1/2})=O(n^{2-\varepsilon})$ for some $\varepsilon>0$, so the sum in (3.15) converges. This proves (3.15) and completes the proof.

4. Random walk with linear reinforcement on the star graph

In this section we consider a linearly reinforced random walk model on the countably infinite star graph. We will show that the almost sure convergence of the local times of this walk can be derived using our main result stated in Section 1.2.

Let us consider a special type of vertex-reinforced nearest neighbor random walk $(X_n)_{n \geq 0}$ on an infinite star graph with a loop at the root. We denote the root by $v_0$ and the other vertices by $v_i$, $i \geq 1$. Each edge is regarded as a pair of directed edges in opposite directions; the notation $(v_i,v_j)$ indicates the edge from $v_i$ to $v_j$. The root $v_0$ is a special vertex, in the sense that, whenever the walker takes an edge $(v_j,v_0)$, for any j, it adds weight $\alpha_{j,i}$ to each vertex $v_i$, where $\alpha_j\coloneqq (\alpha_{j,i})_{i \geq 0}$ satisfies $\sum_{i}\alpha_{j,i}< \infty$. If the edge taken is $(v_0, v_j)$, $j \neq0$, then no vertex is reinforced.

Initially the walker is at the root, $X_0 \equiv v_0$, and it jumps to one of its neighbors with probability proportional to the given initial weights $\delta_i$, where $\delta\coloneqq \sum_{i \geq 0}\delta_i < \infty$. At any time $n\geq1$, the transition probabilities for the random walk are governed by

(4.1) \begin{equation} \mathbb{P} (X_{n+1}=v_j \mid X_n=v_i)= \begin{cases}\dfrac{\Delta_{n,j}}{\sum_{k}\Delta_{n,k}}& \text{when $ i=0$,}\\\mathds{1}_{\{j=0\}} & \text{for $ i \geq 1$,}\end{cases}\end{equation}

where $\Delta_{n,j}$ denotes the weight at the vertex $v_j$ at time n.

Observe that if we let $\sigma_k$ denote the random time at which the weights are updated for the kth time, then $\sigma_{k+1}= \sigma_k+Y_{k+1}$, where $Y_{k+1} \in \{1,2\}$ is a random variable such that

\begin{equation*}\mathbb{P}(Y_{k+1}=1\mid \Delta_0, \Delta_{\sigma_1}, \ldots, \Delta_{\sigma_k}) = \dfrac{\Delta_{\sigma_k,0}}{\sum_j \Delta_{\sigma_k,j}}.\end{equation*}

Therefore the weight sequence at these updating random times can be coupled with an infinite color urn model, as described below.

Consider the urn model with colors indexed by $S\coloneqq \{0,1,2, \ldots\}$, and an initial composition $U_0=(\delta_i)_{ i \geq 0}$. The replacement matrix is such that its jth row is $\alpha_j$. Since the graph is a star, the walk can take a step along $(v_i,v_0)$, $i\neq 0$, only if it has just jumped along the edge $(v_0, v_i)$ according to the transition probabilities in (4.1). So if we consider the sequence of weights at the times $\sigma_1, \sigma_2, \ldots$, then the processes are coupled such that

(4.2) \begin{equation}(\Delta_{\sigma_n})_{ n \geq 0} = (U_n)_{n \geq 0}.\end{equation}

In particular, if $\sum_{i} \alpha_{j,i}=1$ for each j, then the replacement matrix is a stochastic matrix. Henceforth we assume that $\alpha_j$ is a probability vector for every $j \geq 0$. We also assume that the $\alpha_j$ are such that the Markov chain corresponding to the replacement matrix is irreducible, aperiodic, and uniformly ergodic.

A particular example of such a matrix is when $\alpha_0=(p_j)_{j \geq 0}$, with $p_j>0$ and $\sum_j p_j=1$, and, for $j \neq 0$, $\alpha_{j,i}=1$ if $i =0$, and 0 otherwise. (Our conditions, including uniform ergodicity, are easily verified.)

Theorem 2. Let $X_n$ be a vertex-reinforced random walk on an infinite star graph with a loop at the root, such that the replacement matrix is an irreducible, aperiodic, and uniformly ergodic stochastic matrix. Let the transition probabilities of $X_n$ be as in (4.1). If we let $\sigma_n$ denote the nth update time, then, as $n \to \infty$,

(4.3) \begin{equation}\dfrac{\sigma_n}{n+1} \longrightarrow 2- \pi_0\quad a.s.\ and\ in\ L_1.\end{equation}

Furthermore, for any $j \geq 0$, as $n \to \infty$,

\begin{equation*} \dfrac{\Delta_{n,j}}{n+\delta} \longrightarrow \dfrac{\pi_j}{2-\pi_0}\quad a.s.,\end{equation*}

where $\pi$ is the stationary distribution of the coupled urn process as defined in (4.2).

Proof. As observed earlier, $\sigma_{k+1}=\sigma_k+Y_{k+1}$, where $Y_{k+1} \in \{1,2\}$, and

\begin{equation*}\mathbb{P}(Y_{k+1}=1\mid \Delta_0, \Delta_{\sigma_1}, \ldots, \Delta_{\sigma_k}) = \dfrac{\Delta_{\sigma_k,0}}{\sum_j \Delta_{\sigma_k,j}}.\end{equation*}

Let $\widetilde{\sigma}_k\coloneqq \sum_{i=0}^k \mathds{1}_{\{Y_{i+1}=1\}}.$ Then, from the conditional distribution of $Y_{k+1}$ above and from (4.2), we have, using the coupling above,

\begin{equation*} \widetilde{\sigma}_n = \sum_{k=0}^n \mathds{1}_{\{Z_k=0\}}=N_{n,0},\end{equation*}

where $Z_k$ denotes the random color of the ball selected in the coupled urn model. From Theorem 1(ii), we know that, as $n \to \infty$, $N_{n,0}/(n+1) \longrightarrow \pi_0$ a.s. Thus, as $n \to \infty$,

\begin{equation*} \dfrac{\widetilde{\sigma}_n}{n+1} \longrightarrow \pi_0\quad \text{a.s.}\end{equation*}

Since $\sigma_n= \widetilde{\sigma}_n+ 2(n-\widetilde{\sigma}_n)$, (4.3) follows immediately. Since $0\leq {{\sigma_n}/{(n+1)}}\leq 2$, the $L_1$ convergence in (4.3) follows by the dominated convergence theorem.

Let $m(n)\coloneqq \sup\{k\colon \sigma_k \leq n\}$. Then it follows from (4.3) that, as $n \to \infty$,

\begin{equation*}\dfrac{m(n)}{n+1} \longrightarrow \dfrac{1}{2- \pi_0}\quad\text{a.s.}\end{equation*}

From (4.2) and Theorem 1(i), we have, as $n \to\infty$,

\begin{equation*} \dfrac{\Delta_{n,j}}{n+\delta} = \dfrac{\Delta_{\sigma_{m(n)},j}}{m(n)} \dfrac{m(n)}{n+\delta} = \dfrac{U_{m(n),j}}{m(n)} \dfrac{m(n)}{n+\delta} \longrightarrow \dfrac{\pi_j}{2-\pi_0} \quad\text{a.s.} \qquad\Box\end{equation*}
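Theorem 2 can be checked by simulating the walk directly for the example replacement matrix given before the theorem ($\alpha_0$ a geometric distribution, and $\alpha_j$ the point mass at $v_0$ for $j \neq 0$). The sketch below is ours: starting all initial weight at the root and truncating the geometric row at J colors are simulation conveniences, not part of the model; the discarded tail mass is negligible.

```python
import random
from collections import defaultdict

rng = random.Random(2)
p = 0.5                                  # alpha_0 row: p_j = p * (1 - p)**j
J = 60                                   # truncation level; tail mass ~ 1e-18
p_row = [p * (1 - p) ** j for j in range(J)]

delta = defaultdict(float, {0: 1.0})     # initial weights: delta_0 = 1 only
total, pos, steps = 1.0, 0, 1_000_000
for _ in range(steps):
    if pos == 0:
        # From the root, jump to v_j with probability Delta_{n,j}/total, (4.1).
        r, acc = rng.random() * total, 0.0
        for j, w in delta.items():
            acc += w
            if r <= acc:
                break
        pos = j
        if pos == 0:                     # loop edge (v_0, v_0): reinforce alpha_0
            for i, w in enumerate(p_row):
                delta[i] += w
            total += 1.0
    else:
        # Forced step back along (v_j, v_0): reinforce alpha_j, i.e. +1 at v_0.
        delta[0] += 1.0
        total += 1.0
        pos = 0

# For this chain pi_0 = 1/(2 - p_0) and pi_j = p_j * pi_0 for j >= 1.
pi0 = 1.0 / (2.0 - p)
print(delta[0] / (steps + 1.0), pi0 / (2.0 - pi0))       # Delta_{n,0}/(n+delta)
print(delta[1] / (steps + 1.0), p_row[1] * pi0 / (2.0 - pi0))
```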

Acknowledgements

The authors are grateful to Colin Desmarais and Cecilia Holmgren for various discussions over the course of working on this paper, and for their insightful ideas and remarks, which helped us to improve the quality of this paper. We also thank the anonymous referees for their comments and suggestions, which prompted us to add more references and improved the overall quality of the paper.

References

Aldous, D. J. (1985). Exchangeability and related topics. In École d’Été de Probabilités de Saint-Flour, XIII—1983 (Lecture Notes Math. 1117), pp. 1–198. Springer, Berlin.
Athreya, K. B. and Karlin, S. (1968). Embedding of urn schemes into continuous time Markov branching processes and related limit theorems. Ann. Math. Statist. 39, 1801–1817.
Bai, Z.-D. and Hu, F. (2005). Asymptotics in randomized urn models. Ann. Appl. Prob. 15 (1B), 914–940.
Bandyopadhyay, A. and Thacker, D. (2014). Rate of convergence and large deviation for the infinite color Pólya urn schemes. Statist. Prob. Lett. 92, 232–240.
Bandyopadhyay, A. and Thacker, D. (2016). A new approach to Pólya urn schemes and its infinite color generalization. Available at arXiv:1606.05317.
Bandyopadhyay, A. and Thacker, D. (2017). Pólya urn schemes with infinitely many colors. Bernoulli 23 (4B), 3243–3267.
Blackwell, D. and MacQueen, J. B. (1973). Ferguson distributions via Pólya urn schemes. Ann. Statist. 1, 353–355.
Bose, A., Dasgupta, A. and Maulik, K. (2009). Multicolor urn models with reducible replacement matrices. Bernoulli 15 (1), 279–295.
Bose, A., Dasgupta, A. and Maulik, K. (2009). Strong laws for balanced triangular urns. J. Appl. Prob. 46 (2), 571–584.
Chen, M.-R. and Kuba, M. (2013). On generalized Pólya urn models. J. Appl. Prob. 50 (4), 1169–1186.
Chen, M.-R., Hsiau, S.-R. and Yang, T.-H. (2014). A new two-urn model. J. Appl. Prob. 51 (2), 590–597.
Dasgupta, A. and Maulik, K. (2011). Strong laws for urn models with balanced replacement matrices. Electron. J. Prob. 16 (63), 1723–1749.
Donnelly, P. and Kurtz, T. G. (1996). The asymptotic behavior of an urn model arising in population genetics. Stoch. Process. Appl. 64 (1), 1–16.
Drmota, M. (2009). Random Trees. Springer, Vienna.
Feller, W. (1968). An Introduction to Probability Theory and its Applications, Vol. I, 3rd edn. John Wiley, New York, London and Sydney.
Ferguson, T. S. (1973). A Bayesian analysis of some nonparametric problems. Ann. Statist. 1, 209–230.
Flajolet, P., Dumas, P. and Puyhaubert, V. (2006). Some exactly solvable models of urn process theory. In Fourth Colloquium on Mathematics and Computer Science: Algorithms, Trees, Combinatorics and Probabilities, DMTCS Proc. AG, 59–118.
Gouet, R. (1997). Strong convergence of proportions in a multicolor Pólya urn. J. Appl. Prob. 34 (2), 426–435.
Häggström, O. (2005). On the central limit theorem for geometrically ergodic Markov chains. Prob. Theory Relat. Fields 132 (1), 74–82.
Hirth, U. M. (1997). From GEM back to Dirichlet via Hoppe’s urn. Combin. Probab. Comput. 6 (2), 185–195.
Holst, L. (2008). The number of two consecutive successes in a Hoppe–Pólya urn. J. Appl. Prob. 45 (3), 901–906.
Hoppe, F. M. (1984). Pólya-like urns and the Ewens’ sampling formula. J. Math. Biol. 20 (1), 91–94.
Hoppe, F. M. (1987). The sampling theory of neutral alleles and an urn model in population genetics. J. Math. Biol. 25 (2), 123–159.
Janson, S. (2004). Functional limit theorems for multitype branching processes and generalized Pólya urns. Stoch. Process. Appl. 110 (2), 177–245.
Janson, S. (2006). Limit theorems for triangular urn schemes. Prob. Theory Relat. Fields 134 (3), 417–452.
Janson, S. (2018). A.s. convergence for infinite colour Pólya urns associated with random walks. Available at arXiv:1803.04207.
Janson, S. (2019). Random replacements in Pólya urns with infinitely many colours. Electron. Commun. Prob. 24, 23, 1–11.
Laruelle, S. and Pagès, G. (2013). Randomized urn models revisited using stochastic approximation. Ann. Appl. Prob. 23 (4), 1409–1436.
Levin, D. A., Peres, Y. and Wilmer, E. L. (2009). Markov Chains and Mixing Times. American Mathematical Society, Providence, RI.
Lyons, R. (1988). Strong laws of large numbers for weakly correlated random variables. Michigan Math. J. 35 (3), 353–359.
Mailler, C. and Marckert, J.-F. (2017). Measure-valued Pólya urn processes. Electron. J. Prob. 22, 26, 1–33.
Meyn, S. and Tweedie, R. L. (2009). Markov Chains and Stochastic Stability, 2nd edn. Cambridge University Press, Cambridge.
Pemantle, R. (2007). A survey of random processes with reinforcement. Prob. Surv. 4, 1–79.
Pitman, J. (2006). Combinatorial Stochastic Processes (Lecture Notes Math. 1875). Springer, Berlin.
Pólya, G. (1930). Sur quelques points de la théorie des probabilités. Ann. Inst. H. Poincaré 1 (2), 117–161.
Seneta, E. (2006). Non-negative Matrices and Markov Chains (Springer Series in Statistics). Springer, New York.
Thacker, D. (2015). Infinite color urn models. Doctoral thesis, Indian Statistical Institute, Delhi Centre.
Thörnblad, E. (2016). The dominating colour of an infinite Pólya urn model. J. Appl. Prob. 53 (3), 914–924.