
Boundary effect in competition processes

Published online by Cambridge University Press:  01 October 2019

Vadim Shcherbakov*
Affiliation:
Royal Holloway, University of London
Stanislav Volkov*
Affiliation:
Lund University
*Postal address: Department of Mathematics, Royal Holloway, University of London, Egham TW20 0EX, UK.
**Postal address: Centre for Mathematical Sciences, Lund University, SE-221 00 Lund, Sweden.

Abstract

This paper is devoted to studying the long-term behaviour of a continuous-time Markov chain that can be interpreted as a pair of linear birth processes which evolve with a competitive interaction; as a special case, they include the famous Lotka–Volterra interaction. Another example of our process is related to urn models with ball removal. We show that, with probability one, the process eventually escapes to infinity by sticking to the boundary in a rather unusual way.

Type
Research Papers
Copyright
© Applied Probability Trust 2019 

1. The model and results

In this paper we study the long-term behaviour of a continuous-time Markov chain (CTMC) with values in ${\mathbb{Z}}_{+}^2$ , where ${\mathbb{Z}}_{+}$ is the set of all non-negative integers, defined on a certain probability space with probability measure ${\textrm{P}}$ . The Markov chain jumps only to the nearest neighbours, and we consider two types of transition rates described below.

Transition rates of type I. Given the state $(x_1, x_2)\in{\mathbb{Z}}_{+}^2$, the Markov chain jumps to

(1) \begin{align} \begin{split} (x_1+1,x_2) &\quad\text{with rate}\quad{\lambda}_1+{\alpha}_1 x_1,\\[5pt] (x_1,x_2+1) &\quad\text{with rate}\quad{\lambda}_2+{\alpha}_2 x_2,\\[5pt] (x_1-1,x_2) &\quad\text{with rate}\quad x_1g_1(x_2)\quad\text{if}\quad x_1>0,\\[5pt] (x_1,x_2-1) &\quad\text{with rate}\quad x_2g_2(x_1)\quad\text{if}\quad x_2>0,\\[5pt] \end{split} \label{eqn1} \end{align}

where ${\alpha}_i, {\lambda}_i>0$, $i=1, 2$, and $g_i$, $i=1, 2$, are some non-negative functions. We call the Markov chain with transition rates of type I a competition process with non-linear interaction (specified by functions $g_1$ and $g_2$).

Transition rates of type II. Given the state $(x_1, x_2)\in{\mathbb{Z}}_{+}^2$, the Markov chain jumps to

(2) \begin{align} \begin{split} (x_1+1,x_2) &\quad\text{with rate}\quad{\lambda}_1+{\alpha}_1 x_1,\\[4pt] (x_1,x_2+1) &\quad\text{with rate}\quad{\lambda}_2+{\alpha}_2 x_2,\\[4pt] (x_1-1,x_2) &\quad\text{with rate}\quad {\beta}_1 x_2\quad\text{if}\quad x_1>0,\\[4pt] (x_1,x_2-1) &\quad\text{with rate}\quad {\beta}_2 x_1\quad\text{if}\quad x_2>0, \end{split} \label{eqn2} \end{align}

where ${\alpha}_i, {\lambda}_i\geq 0$ and ${\beta}_i>0$ for $i=1,2$. We call the Markov chain with transition rates of type II a competition process with linear interaction.
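The dynamics above are straightforward to simulate with the standard Gillespie algorithm. The sketch below, for type II rates with purely illustrative parameter values (the function name `simulate_type2` is ours), generates a trajectory of the chain:

```python
import random

def simulate_type2(lam, alpha, beta, x0, n_jumps, rng):
    """Gillespie-style simulation of the competition process with linear
    interaction (type II rates in (2)).  lam, alpha, beta are pairs."""
    (x1, x2), t = x0, 0.0
    path = [(t, x1, x2)]
    for _ in range(n_jumps):
        moves = [((1, 0), lam[0] + alpha[0] * x1),
                 ((0, 1), lam[1] + alpha[1] * x2),
                 ((-1, 0), beta[0] * x2 if x1 > 0 else 0.0),
                 ((0, -1), beta[1] * x1 if x2 > 0 else 0.0)]
        total = sum(r for _, r in moves)
        t += rng.expovariate(total)        # exponential holding time
        u = rng.random() * total           # choose a move ~ rate / total
        for (d1, d2), r in moves:
            if u < r:
                x1, x2 = x1 + d1, x2 + d2
                break
            u -= r
        path.append((t, x1, x2))
    return path

rng = random.Random(7)
path = simulate_type2((1.0, 1.0), (1.0, 1.0), (0.5, 0.5), (5, 5), 5000, rng)
# in long runs one expects one component to grow while the other
# oscillates within {0, 1}: the boundary effect described in Theorem 2
```

Since ${\alpha}_i>0$ here, a long run illustrates the confinement to the levels $\{0,1\}$ established below; the chosen parameters are an assumption for illustration only.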

Competition processes with both non-linear and linear interaction belong to a class of competition processes introduced in [12] as a natural two-dimensional analogue of the birth-and-death process in ${\mathbb{Z}}_{+}$. In [12], the competition process was defined as a CTMC with values in ${\mathbb{Z}}_{+}^2$ whose transitions are allowed only to the nearest neighbour states (see Appendix A). This definition was generalised to the multidimensional case in [4] and [5]. Some basic models of competition processes are discussed in [1]. The term ‘competition process’ was apparently coined because the original examples of such processes were motivated by modelling competition between populations (e.g. see [12, Examples 1 and 2] and references therein). One of the best-known competition processes is the one specified by the famous Lotka–Volterra interaction. In our notation, the Lotka–Volterra interaction corresponds to the functions $g_i(z)=z$, $i=1,2$, in the case of transition rates of type I. If, in addition, $\lambda_i=0$, $i=1,2$, in the Lotka–Volterra case, then we get the competition process given in [12, Example 1].

Competition processes with both non-linear and linear interactions can be interpreted in terms of interacting birth-and-death processes. Indeed, if in both cases the death rates are set to zero, that is $g_1=g_2\equiv 0$ in (1) and $\beta_1=\beta_2=0$ in (2), then the corresponding Markov chain is formed by two independent linear birth processes with immigration. Non-zero death rates determine the competitive interaction between the components. Therefore, the competition processes of interest can be naturally embedded into a more general technical framework of multivariate Markov processes formulated in terms of locally interacting birth-and-death processes. In the absence of interaction, the components of such a Markov process evolve as a collection of independent birth-and-death processes, whose long-term behaviour is well known. Namely, given a set of transition rates one can, in principle, determine whether the corresponding birth-and-death process is recurrent/positive recurrent, or transient/explosive, and compute various characteristics of the process. However, the presence of interaction can significantly change the collective behaviour of the system (e.g. see [6], [10], and [13]).

Furthermore, note that the discrete-time Markov chain (DTMC) corresponding to a competition process with linear interaction can be regarded as an urn model with ball removal. In the symmetric case, that is ${\alpha}_1={\alpha}_2$, ${\beta}_1={\beta}_2$, and ${\lambda}_1={\lambda}_2$, this DTMC is similar, in a sense, to Friedman’s urn model. This similarity enabled us to adapt Freedman’s method for Friedman’s urn model [3] in order to obtain a key fact in the proofs. We discuss this method in detail in Section 2.3.1. If both ${\alpha}_1={\alpha}_2=0$ and ${\lambda}_1={\lambda}_2=0$, then the DTMC corresponding to a competition process with linear interaction coincides with the well-known OK Corral model (see, e.g., [7]). If ${\alpha}_1={\alpha}_2=0$ and ${\lambda}_1,{\lambda}_2>0$, then the corresponding competition process can be interpreted as the OK Corral model with ‘resurrection’.
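The embedded jump chain referred to above can be written down directly by normalising the rates in (2); in the OK Corral case ($\alpha_i=\lambda_i=0$) only the two removal moves have positive probability. A minimal sketch (the function name is ours; parameters are illustrative):

```python
def jump_probabilities(x1, x2, lam, alpha, beta):
    """Transition probabilities of the embedded DTMC (an urn model with
    ball removal): each type II rate divided by the total rate.
    Assumes the total rate at (x1, x2) is positive."""
    rates = {
        (x1 + 1, x2): lam[0] + alpha[0] * x1,
        (x1, x2 + 1): lam[1] + alpha[1] * x2,
    }
    if x1 > 0:
        rates[(x1 - 1, x2)] = beta[0] * x2   # removal from component 1
    if x2 > 0:
        rates[(x1, x2 - 1)] = beta[1] * x1   # removal from component 2
    total = sum(rates.values())
    return {state: r / total for state, r in rates.items()}

# OK Corral case: from (3, 2) the chain removes from component 1 with
# probability beta_1 * 2 / (beta_1 * 2 + beta_2 * 3)
p = jump_probabilities(3, 2, (0.0, 0.0), (0.0, 0.0), (1.0, 1.0))
```

With $\beta_1=\beta_2=1$ this gives probabilities $2/5$ and $3/5$ for the two removals, matching the rate ratios in (2).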

Theorems 1 and 2 are the main results of the paper. These theorems show that competition processes with both non-linear and linear interaction have similar, rather unusual, long-term behaviour. Note that in the theorems and later in the proofs we use the notation

\[ i^*=\begin{cases} 2&\text{ if } i=1,\\ 1&\text{ if } i=2\\ \end{cases} \]

(i.e. ‘the other coordinate’).

Theorem 1. Let $\xi(t)$ be a competition process with non-linear interaction (transition rates of type I) specified by functions $g_1, g_2\,{:}\,[0,\infty)\to [0, \infty)$ . Assume that

$g_1$ and $g_2$ are regularly varying functions with indices $\rho_1>0$ and $\rho_2>0$, respectively, such that $g_1(0)=g_2(0)=0$ and $g_1(x),\, g_2(x)>0$ for all $x>0$ ;

${\alpha}_i, {\lambda}_i>0$ , $i=1,2$ .

Then $\xi(t)$ is a non-explosive transient CTMC and ${\textrm{P}}(B_1\cup B_2)=1$, where

\[ B_i=\bigg\{\xi_i(t)\to \infty,\ 0=\liminf_{t\to\infty} \xi_{i^*}(t)<\limsup_{t\to\infty} \xi_{i^*}(t)=1\bigg\}, \quad i=1,2. \]

Theorem 2. Let $\xi(t)$ be a competition process with linear interaction (transition rates of type II) specified by parameters ${\alpha}_i\geq 0$ , ${\lambda}_i>0$ , and ${\beta}_i> 0$ , $i=1,2$ . Then $\xi(t)$ is a non-explosive transient CTMC and ${\textrm{P}}(A_1\cup A_2)=1$, where, for $i=1,2$ ,

(3) \begin{align} \begin{split} A_i&=\Big\{\lim_{t\to\infty}\xi_i(t)=\infty,\ \liminf_{t\to\infty}\xi_{i^*}(t)=0,\ \limsup_{t\to\infty}\xi_{i^*}(t)=\kappa_{i^*}\Big\}\quad\text{and}\\ \kappa_i&=\kappa_i({\alpha}_{i^*})=\begin{cases} 1 &\text{if } {\alpha}_{i^*}>0,\\ 2 &\text{if } {\alpha}_{i^*}=0. \end{cases} \end{split} \label{eqn3} \end{align}

The proof of each theorem consists of two parts. First, we show that, with probability bounded below uniformly over the initial position on the coordinate axes, the process sticks to the boundary of the quarter plane in the following way: one of the components of the process tends to infinity while the other component takes only the values 0 and 1 (0, 1, and 2 in the special case of Theorem 2), oscillating infinitely often between these values. This is what we call the boundary effect. This step of the proof is the subject of Lemmas 1 (Theorem 1) and 4 (Theorem 2). In order to prove each lemma we construct a so-called Lyapunov function for the well-known transience criterion for countable Markov chains (e.g. see [9, Theorem 2.5.8]). This allows us to show that the Markov chain is confined to a strip along the boundary, as described. This fact is then complemented by an argument based on the Borel–Cantelli lemma that gives the oscillation effect, i.e. the process transits from one level of the strip to another infinitely often. In both cases, this implies that the Markov chain under consideration is transient and, with positive probability, escapes to infinity in a certain way.

Intuitively, it seems rather clear that if the process is already at the boundary then it prefers to stay near the boundary in the future. We use the Lyapunov function method to turn this intuition into a rigorous argument in Lemmas 1 and 4. Although a direct probabilistic proof might be possible for both lemmas, we prefer the general method, which also applies in other cases where a direct probabilistic argument might become cumbersome. For example, this is the case in the model in [10], where a similar boundary effect was originally observed for a pair of interacting birth-and-death processes. In fact, we borrow the idea of the construction of the Lyapunov function from that article.

Another key step in the proof of both theorems consists of showing that the process hits the boundary with probability one. Note that in applications of competition processes to population modelling the hitting time is interpreted as the extinction time of one of the competing populations. Therefore, determining whether the hitting time is finite is of interest in its own right. Sufficient conditions for finiteness of the hitting time and its mean are given in [12] for competition processes in ${\mathbb{Z}}_{+}^2$. These conditions are rather restrictive, which is not surprising as they were obtained under very general assumptions on the transition characteristics of the competition process. For example, they apply only in some special cases of the competition processes in Theorems 1 and 2 (see the discussion in Appendix A). We use a direct probabilistic argument in order to show finiteness of the mean hitting time in the case of the competition process with non-linear interaction in Theorem 1, and also in the case of the competition process with linear interaction in Theorem 2 under the assumption ${\alpha}_1{\alpha}_2<{\beta}_1{\beta}_2$. However, neither this argument nor the results of [12] can be applied in the case of the competition process with linear interaction (Theorem 2) under the assumption ${\alpha}_1{\alpha}_2>{\beta}_1{\beta}_2$. In this particular case, showing that the process hits the boundary almost surely is somewhat reminiscent of showing non-convergence to an unstable equilibrium in processes with reinforcement (e.g. urn models). Often the method of stochastic approximation is used to show such non-convergence (see, e.g., [11] and references therein). Further, showing that the hitting time is finite in this case of the linear model is also similar to showing that a non-homogeneous random walk exits a cone, where the Lyapunov function method proved to be useful (e.g. see [8] and references therein).

Although our model is similar to both urn models with ball removal and to non-homogeneous random walks, we were unable to apply these techniques and used a different method instead. Our method is a modification of the method used in [3] for studying Friedman’s urn model. The original method consists of estimating moments of certain martingales related to the process of interest. The similarity of the competition process with linear interaction to Friedman’s urn model allows us to adapt this idea (see Section 2.3.1 for details).

2. Proofs

Throughout the rest of the paper ${\textrm{E}}$ denotes the expectation with respect to the probability measure ${\textrm{P}}$ .

2.1. Proof of Theorem 1

Lemma 1. There exists ${\varepsilon}>0$ , depending on the model parameters only, such that

\[ \inf_{x_1\geq 0}{\textrm{P}}({\tilde{A}}_1 \mid \xi(0)=(x_1, 0))\ge {\varepsilon} \quad\text{and}\quad \inf_{x_2\geq 0}{\textrm{P}}({\tilde{A}}_2 \mid \xi(0)=(0, x_2))\ge {\varepsilon}, \]

where ${\tilde{A}}_i=\{\xi_i(t)\to \infty \text{ and } \xi_{i^*}(t)\in \{0,1\} \text{ for all } t\ge 0\}$.

Proof of Lemma 1. We prove only the first bound of the lemma, i.e. when the process starts at $\xi(0)=(x_1, 0)$. The other bound follows by symmetry. In order to simplify notation we write $x=x_1$ and $y=x_2$ in the rest of the proof. Given positive numbers $\nu$ and $\mu$, define the following function on ${\mathbb{Z}}_{+}^2\setminus\{(0, 0)\}$:

(4) \begin{equation} f(x,y)=\begin{cases} x^{-\nu}-x^{-\mu} &\quad\text{if } y=0,\\ x^{-\nu} &\quad\text{if } y=1,\\ 1 &\quad\text{if } y\ge 2. \end{cases} \label{eqn4} \end{equation}

In the rest of the proof of this lemma we assume that

(5) \begin{equation} 0<\nu<\mu<\min(\rho_1, \rho_2). \label{eqn5} \end{equation}

Denote by ${\mathsf{G}}$ the generator of the CTMC $\xi(t)$ with transition rates (1). From state (x,0), where $x>0$ , transitions are possible only to states $\left(x+1, 0\right)$ and (x, 1) with rates ${\lambda}_1+{\alpha}_1x$ and ${\lambda}_2$ respectively. Therefore,

(6) \begin{equation} {\mathsf{G}}\, f(x,0)=({\lambda}_1+{\alpha}_1x)\,((x+1)^{-\nu}-x^{-\nu}-(x+1)^{-\mu}+x^{-\mu}) +{\lambda}_2x^{-\mu}. \label{eqn6} \end{equation}

Given $\gamma>0$ , Taylor’s expansion formula shows that

(7) \begin{equation} (x\pm1)^{-\gamma}-x^{-\gamma}=\mp\gamma x^{-1-\gamma}+o(x^{-1-\gamma}) \label{eqn7} \end{equation}

for sufficiently large $x>0$ . Applying this expansion for the polynomial terms on the right-hand side of (6) we obtain that

(8) \begin{equation} {\mathsf{G}} \,f(x,0) \leq x^{-\nu}(-{\alpha}_1\nu+ ({\alpha}_1\mu +{\lambda}_2)x^{-\mu+\nu}+o(1))\leq 0 \label{eqn8} \end{equation}

for all sufficiently large x, as $0<\nu<\mu$ .

Next, given state (x,1), where $x>0$ , the Markov chain can jump only to states $\left(x+1, 1\right)$ , $\left(x-1, 1\right)$ , (x, 2), and (x, 0), and these jumps occur with rates ${\lambda}_1+{\alpha}_1x$ , $x\cdot g_1(1)$ , ${\lambda}_2+{\alpha}_2$ , and $g_2(x)\cdot 1$ respectively. Therefore,

\begin{equation*} \begin{split} {\mathsf{G}}\,f(x,1)&=({\lambda}_1+{\alpha}_1x)\,((x+1)^{-\nu}-x^{-\nu})+xg_1(1)\,((x-1)^{-\nu}-x^{-\nu})\\ & \quad + ({\lambda}_2+{\alpha}_2)\,(1-x^{-\nu}) +g_2(x)\,(x^{-\nu}-x^{-\mu}-x^{-\nu})\\ & =-g_2(x)x^{-\mu}+O(1) \end{split} \end{equation*}

by applying the expansion (7). Recall that $g_2$ is a regularly varying function with index $\rho_2>0$, i.e. $g_2(x)=x^{\rho_2}l(x)$, where l is a slowly varying function (e.g. see [2] for definitions). Since $\mu<\min(\rho_1, \rho_2)$ (see (5)), we get that $g_2(x)x^{-\mu}=x^{\rho_2-\mu}l(x)\to \infty$ as $x\to \infty$. This results in

(9) \begin{equation} {\mathsf{G}}\,f(x,1)\leq 0 \label{eqn9} \end{equation}

for all sufficiently large x. Define the following stopping time:

\[ \sigma=\inf\{t\,{:}\, \xi(t)\notin\{(x,y)\,{:}\,x\ge N+1,\ y\le 1\}\}, \]

where the integer N is such that the bounds (8) and (9) hold for all $x>N$. These bounds imply that the random process $Z(t):= f(\xi(t\wedge \sigma))$ is a supermartingale. Since $Z(t)\ge0$, we conclude that $Z(t)$ converges almost surely to a finite limit $Z_{\infty}$. Next, note that on the event $\{\sigma=\infty\}$ we must have $\xi_1(t)\to\infty$; otherwise, if $\limsup_{t\to\infty}\xi_1(t)=A<\infty$, then $Z(t)$ could not converge, because f is not constant on the set $\{0,1,\dots,A\}\times\{0,1\}$, which is irreducible for the chain. Consequently,

\[ Z_\infty=\begin{cases} 1\text{ or }f(N, 0)=N^{-\nu}-N^{-\mu}\text{ or } f(N, 1)=N^{-\nu}& \text{ if } \sigma<\infty,\\ 0& \text{ if } \sigma=\infty. \end{cases} \]

Assume that the initial position of the process is (x, 0), where $x\ge N+1$ . By the optional stopping theorem,

\[ (N^{-\nu}-N^{-\mu}){\textrm{P}}(\sigma<\infty)\le {\textrm{E}} (Z_\infty)\le Z(0)=f(x, 0)=x^{-\nu}-x^{-\mu}, \]

so that

\[ {\textrm{P}}(\sigma<\infty)\le \frac{f(x,0)}{f(N,0)} \le \frac{f(N+1,0)}{f(N,0)}=1-{\varepsilon}'<1 \]

for some ${\varepsilon}'>0$, due to the monotonicity of the function $x^{-\nu}-x^{-\mu}$ for all sufficiently large x. Thus, if $\xi(0)=(x,0)$, where $x\geq N+1$, then, with probability at least ${\varepsilon}'$, the process $\xi(t)$ stays in the set $\{N+1,N+2,\dots\}\times\{0,1\}$ forever. Further, for each initial position (x, 0), where $x\in\{0,1,\dots,N\}$, the process reaches the state $\left(N+1, 0\right)$ with a strictly positive probability without exiting the set $\{y\le 1\}\subset{\mathbb{Z}}_{+}^2$ (e.g. by jumping only to the right). Consequently, ${\textrm{P}}(\sigma=\infty \mid \xi(0)=(x, 0))$ is bounded away from zero uniformly over $x\geq 0$. On this event $\xi_1(t)\to\infty$ almost surely (a.s.). □

Lemma 2. ([9, Lemma 7.3.6].) Let $Y_t\ge 0$ , $t\ge 0$ , be a process adapted to a filtration ${\cal G}_t$ , $t\ge 0$ , and let T be a stopping time. Suppose that there exists $\varepsilon>0$ such that

\[ {\textrm{E}}[\textrm{d} Y_t \mid {\cal G}_{t-}]\le -\varepsilon\, \textrm{d} t \quad\text{on } \{t\le T\}. \]

Then ${\textrm{E}}[T\,|\,{\cal G}_0]\le Y_0/\varepsilon$.

Lemma 3. Define $\tau=\inf\{t\,{:}\,\xi_1(t)=0\text{ or }\xi_2(t)=0\}$ . Then $\tau$ is a.s. finite.

Proof of Lemma 3. It is easy to see that the infinitesimal mean jump of component $\xi_i(t)$, computed as

\[ {\textrm{E}}(\xi_i(t+\textrm{d}t)-\xi_i(t) \mid \xi(t)=(x_1, x_2))=({\lambda}_i+({\alpha}_i-g_i(x_{i^*}))x_i)\,\textrm{d}t +o(\textrm{d}t),\quad i=1,2, \]

is negative and bounded away from zero in the domain $\{x_i\geq 1,\ x_{i^*}\geq C_{i^*}\}$, $i=1,2$, where both $C_1$ and $C_2$ are large enough. Now Lemma 2 yields that the Markov chain hits the boundary in finite mean time. □
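The sign condition in the display above can be checked directly. A minimal sketch with illustrative Lotka–Volterra choices $g_i(z)=z$ and ${\lambda}_i={\alpha}_i=1$ (both the function name and the parameter values are ours):

```python
def mean_jump(i, x1, x2, lam=(1.0, 1.0), alpha=(1.0, 1.0),
              g=(lambda z: z, lambda z: z)):
    """Coefficient of dt in the infinitesimal mean jump of component i
    of the type I chain: lam_i + (alpha_i - g_i(x_{i*})) * x_i."""
    if i == 1:
        return lam[0] + (alpha[0] - g[0](x2)) * x1
    return lam[1] + (alpha[1] - g[1](x1)) * x2

# with these choices the drift of component 1 is at most -1 on the
# domain {x1 >= 1, x2 >= 3}, i.e. C_2 = 3 already works here
```

So for this particular model one may take $C_1=C_2=3$; the general statement only needs some large enough constants.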

Remark 1. Note that in the case of a competition process with Lotka–Volterra interaction (mentioned in the introduction), the lemma follows from [12, Theorem 5].

Let us finish the proof of the theorem. Let $T_j$ be the duration of the jth visit to the set $D_N=\{x_1>N, x_2\leq 1\}\cup\{x_1\leq 1, x_2>N\}$, where N is as chosen in the proof of Lemma 1. This lemma yields that ${\textrm{P}}(T_j<\infty)\leq 1-{\varepsilon}$ on $\{T_{j-1}<\infty\}$. Consequently, with probability one, $T_j<\infty$ for only finitely many j, and the process is eventually confined to the set $D_N$.

Finally, suppose for definiteness that the absorbing set is $\{x_1>N, x_2\leq 1\}$. Since the drift of $\xi_2(t)$ at $x_2=1$ is directed downwards, the process eventually jumps from the level $x_2=1$ to the level $x_2=0$. On the other hand, the process cannot stay forever on the axis $x_2=0$, as ${\lambda}_2>0$. Thus, the Markov chain goes to infinity oscillating between the levels $x_2=0$ and $x_2=1$, as claimed. Theorem 1 is proved.

2.2. Proof of Theorem 2

We start with the following lemma, which is similar to Lemma 1.

Lemma 4. There exists ${\varepsilon}>0$ , depending on the model parameters only, such that

\[ \inf_{x_1\geq 0}P({\tilde{ A}}_1 \,|\,\xi(0)=(x_1, 0))\ge {\varepsilon}\, \text { and } \inf_{x_2\geq 0}P({\tilde{ A}}_2 \,|\, \xi(0)=(0, x_2))\ge {\varepsilon}, \]

where

\[ {\tilde{A}}_i=\begin{cases} \{\xi_i(t)\to\infty \text{ and } \xi_{i^*}(t)\in\{0,1\}\ \text{for all}\ t\ge 0\}& \text{ if } {\alpha}_{i}>0,\\ \{\xi_i(t)\to\infty \text{ and } \xi_{i^*}(t)\in\{0,1,2\}\ \text{for all}\ t\ge 0\}& \text{ if } {\alpha}_{i}=0. \end{cases} \]

Proof. Write $x=x_1$ and $y=x_2$ for notational simplicity. We prove the lemma only in the case $\xi(0)=(x, 0)$; the proof for the case where the initial position of the process is on the other axis is identical.

First, assume that ${\alpha}_1>0$ . Consider the function f defined in (4) with parameters $\mu$ and $\nu$ such that $0<\nu<\mu<1.$ Let ${\mathsf{G}}$ be the generator of the competition process with linear interaction. Given $x>0$ , transitions from state (x, 0) are possible only to states $\left(x+1, 0\right)$ and (x, 1). These transitions occur with rates ${\lambda}_1+{\alpha}_1x$ and ${\lambda}_2$ respectively. Using equation (7) we obtain that

(10) $$\begin{align} \begin{split} {\mathsf{G}} f(x,0)&=({\lambda}_1+{\alpha}_1x)((x+1)^{-\nu}-x^{-\nu}-(x+1)^{-\mu}+x^{-\mu}) +{\lambda}_2 x^{-\mu} \nonumber \\ &=-\nu{\alpha}_1x^{-\nu} +(\mu{\alpha}_1+{\lambda}_2)x^{-\mu}+o(x^{-\nu})+o(x^{-\mu})\leq 0 \end{split} \end{align}$$

for sufficiently large $x>0$ , as $\nu<\mu$ .

Now, given that $x>0$ , the transitions from state (x, 1) to states $(x+1, 1)$ , $(x-1, 1)$ , (x, 2), and (x, 0) occur with rates ${\lambda}_1+{\alpha}_1x$ , ${\beta}_1$ , ${\lambda}_2+{\alpha}_2$ , and ${\beta}_2x$ respectively. Therefore, using equation (7) one more time we obtain that

(11) $$ \begin{align} \begin{split} {\mathsf{G}} f(x,1)&=({\lambda}_1+{\alpha}_1x)\,((x+1)^{-\nu}-x^{-\nu}) +{\beta}_1((x-1)^{-\nu}-x^{-\nu}) \nonumber \\ & \quad +({\lambda}_2+{\alpha}_2)\,(1-x^{-\nu})-{\beta}_2x^{1-\mu} \nonumber \\ &\leq -\nu{\alpha}_1x^{-\nu}+{\lambda}_2+{\alpha}_2-{\beta}_2 x^{1-\mu}+o(x^{-\nu})\leq 0 \end{split} \end{align} $$

for all sufficiently large x, as $\mu<1$ .

Next, given $N>0$ define $ \sigma=\inf\{t\,{:}\,\xi(t)\notin\{x>N,\, y=0, 1\}\}. $ Assume that N is so large that the bounds (10) and (11) hold for $x>N$. Then $Z(t)=f(\xi(t\wedge \sigma))$ is a non-negative supermartingale. The proof can be finished by an argument based on the optional stopping theorem, in a manner similar to the proof of Lemma 1.

Assume now that ${\alpha}_1=0$ . In this case, instead of function (4) we consider the function

\[ g(x,y)=\begin{cases} \frac1{\ln x}-\frac1{\ln^3 x} -\frac{{\lambda}_1/{\lambda}_2}{x\ln^2 x} +\frac{1}{x\ln^3 x} &\text{if } y=0,\\[3pt] \frac1{\ln x}-\frac1{\ln^3 x} &\text{if } y=1,\\ \frac1{\ln x} &\text{if } y=2,\\ 1 &\text{if } y\ge 3. \end{cases} \]

Using Taylor’s expansion, we obtain that

\begin{align*} \label{Gen20} \begin{split} {\mathsf{G}} g(x, 0)&\leq -\frac{{\lambda}_2}{x\ln^3 x}+ O\Big(\frac1{x\ln^4 x} \Big)\leq 0,\\ {\mathsf{G}} g(x, 1)&\leq -\frac{{\beta}_2 {\lambda}_1/{\lambda}_2}{\ln^2 x} + O \Big( \frac1{\ln^{3}x} \Big)\leq 0,\\ {\mathsf{G}} g(x, 2)&\leq -\frac{{\beta}_2 x}{\ln^3 x}+ O ( 1)\leq 0 \end{split} \end{align*}

for all sufficiently large x. The rest of the proof is analogous to the proof in the case ${\alpha}_1>0$ above, and we skip the details. □
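The three drift bounds for the logarithmic Lyapunov function can be probed numerically by summing rate times change of the function over the possible jumps. A sketch with illustrative parameters ${\lambda}_1={\lambda}_2={\beta}_1={\beta}_2=1$, ${\alpha}_2=1$, and ${\alpha}_1=0$ (function names are ours):

```python
import math

def g_fn(x, y, lam1=1.0, lam2=1.0):
    """The Lyapunov function from the display above (case alpha_1 = 0)."""
    L = math.log(x)
    if y == 0:
        return 1/L - 1/L**3 - (lam1/lam2)/(x*L**2) + 1/(x*L**3)
    if y == 1:
        return 1/L - 1/L**3
    if y == 2:
        return 1/L
    return 1.0

def G_g(x, y, lam1=1.0, lam2=1.0, alpha2=1.0, beta1=1.0, beta2=1.0):
    """Generator of the type II chain with alpha_1 = 0 applied to g_fn,
    for x > 0: sum over jumps of rate * (g_fn(new) - g_fn(old))."""
    moves = [((x + 1, y), lam1),                 # birth of component 1
             ((x, y + 1), lam2 + alpha2 * y)]    # birth of component 2
    if y > 0:
        moves.append(((x, y - 1), beta2 * x))    # death of component 2
        moves.append(((x - 1, y), beta1 * y))    # death of component 1
    return sum(r * (g_fn(*s, lam1, lam2) - g_fn(x, y, lam1, lam2))
               for s, r in moves)

# all three levels y = 0, 1, 2 have non-positive drift for large x
```

For instance, at $x=10^6$ all three values $\mathsf{G}g(x,y)$, $y=0,1,2$, are negative, with the magnitudes ordered as in the displayed bounds (roughly $x^{-1}\ln^{-3}x$, $\ln^{-2}x$, and $x\ln^{-3}x$).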

The other key ingredient of the proof is the following lemma, which is identical to Lemma 3 in the proof of Theorem 1.

Lemma 5. Define $\tau=\inf\{t\,{:}\,\xi_1(t)=0\text{ or }\xi_2(t)=0\}$ . Then $\tau$ is a.s. finite.

Lemma 5 is proved in Section 2.3. Similarly to the proof of Theorem 1, it follows from Lemmas 4 and 5 that with probability one the process is eventually confined either to the set $\{x_2\leq \kappa_2\}$ or to the set $\{x_1\leq \kappa_1\}$, where $\kappa_i$, $i=1,2$, are defined in (3). Suppose now for definiteness that the absorbing set is $\{x_2\leq \kappa_2\}$ and consider the following two cases. First, suppose that ${\alpha}_1>0$, so that $\kappa_2=1$. In this case the process cannot stay forever at $x_2=0$. Indeed, let $\left(a_{j}, 0\right)$, $j\geq 1$, be the sequence of points successively visited by the Markov chain on the line $x_2=0$. The probability of the jump $\left(a_{j}, 0\right)\to \left(a_{j}, 1\right)$ can be bounded below by $c/(a_1+j)$ for some constant $c>0$ (for instance, consider the worst-case scenario in which the process always jumps to the right); therefore, by the conditional Borel–Cantelli lemma, there are infinitely many jumps from $x_2=0$ to $x_2=1$. Combining this with Lemma 5, or simply noting that the probability of a jump from $x_2=1$ to $x_2=0$ is bounded away from zero (it tends to ${\beta}_2/({\alpha}_1+{\beta}_2)$ as $x_1\to \infty$), one can conclude that the process cannot stay forever at the line $x_2=1$ either; hence, it goes to infinity oscillating between the lines $x_2=0$ and $x_2=1$, as claimed.

Finally, suppose that ${\alpha}_1=0$, in which case $\kappa_2=2$. The probability of the transition $\left(x_1,0\right)\to \left(x_1,1\right)$ is equal to ${\lambda}_2/({\lambda}_1+{\lambda}_2)$ for all $x_1$, so the Markov chain cannot stay forever at $x_2=0$. Similarly to the above, let $(a_{j}, 1)$, $j\geq 1$, be the sequence of points successively visited by the Markov chain on the line $x_2=1$. The probability of the jump $\left(a_{j}, 1\right)\to \left(a_{j}, 2\right)$ can be bounded below by $c/(a_1+j)$ for some constant $c>0$. Again, by the conditional Borel–Cantelli lemma, there are infinitely many jumps from $x_2=1$ to $x_2=2$. Combining this with Lemma 5, or simply noting that the probabilities of jumps both from $x_2=2$ to $x_2=1$ and from $x_2=1$ to $x_2=0$ are bounded below by constants, we obtain that the process goes to infinity oscillating between the lines $x_2=0$ and $x_2=2$, as described.

2.3. Proof of Lemma 5

Note that each of the lines $x_2=\frac{{\alpha}_1x_1+{\lambda}_1}{{\beta}_1}$ (line $l_1$ ) and $x_2=\frac{{\beta}_2x_1-{\lambda}_2}{{\alpha}_2}$ (line $l_2$ ) divides ${\mathbb{Z}}_+^2$ into two parts. The infinitesimal drift of $\xi_1(t)$ is negative above $l_1$ and positive below it. Similarly, the infinitesimal drift of $\xi_2(t)$ is negative below $l_2$ and positive above it. There are two main cases for the mutual location of the lines $l_1$ and $l_2$, namely ${\alpha}_1{\alpha}_2 < {\beta}_1 {\beta}_2$ and ${\alpha}_1{\alpha}_2>{\beta}_1 {\beta}_2$.
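These sign claims follow from the type II rates, since the drift of $\xi_1$ is ${\lambda}_1+{\alpha}_1x_1-{\beta}_1x_2$ and that of $\xi_2$ is ${\lambda}_2+{\alpha}_2x_2-{\beta}_2x_1$; they can be checked at sample points. A sketch with illustrative parameters ${\alpha}_i={\lambda}_i=1$, ${\beta}_i=2$ (so ${\alpha}_1{\alpha}_2<{\beta}_1{\beta}_2$; the function name is ours):

```python
def drifts(x1, x2, lam=(1.0, 1.0), alpha=(1.0, 1.0), beta=(2.0, 2.0)):
    """Infinitesimal mean drifts of the two components of the type II
    chain in the interior (x1, x2 > 0)."""
    d1 = lam[0] + alpha[0] * x1 - beta[0] * x2   # sign flips across l1
    d2 = lam[1] + alpha[1] * x2 - beta[1] * x1   # sign flips across l2
    return d1, d2

# here l1 is x2 = (x1 + 1)/2 and l2 is x2 = 2*x1 - 1; the point (10, 10)
# lies between them, so both drifts are negative there
d1, d2 = drifts(10, 10)
```

For these values $d_1=d_2=-9$ at $(10,10)$, while below $l_1$ (say at $(10,2)$) the first component drifts upwards, as stated in the text.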

If ${\alpha}_1{\alpha}_2 < {\beta}_1 {\beta}_2$, then $l_2$ lies above $l_1$ in the positive quarter plane. Both components of the process have negative drift in the cone between the lines. Moreover, outside this cone the drift of one of the two components is still negative. Consequently, with probability 1, the process eventually hits the axes. The formal proof is similar to that for competition processes with non-linear interaction in Theorem 1, and therefore we skip the details. In addition, the finiteness of the hitting time in this case follows from results in [12] (see Appendix A).

The case ${\alpha}_1{\alpha}_2\geq {\beta}_1 {\beta}_2$ is different from the previously considered cases. In order to explain this, assume for a moment that ${\alpha}_1{\alpha}_2>{\beta}_1{\beta}_2$. Then both coordinates have positive drift in the domain between the lines $l_1$ and $l_2$. If the process starts outside this domain, where the drift of the smaller component is strictly negative, then this component hits zero in finite mean time by the same reasoning as in all previous cases. However, if the initial position of the process is inside the domain, then one has to show that the process eventually leaves it.

The proof of the lemma in this case is given in Section 2.3.2. The proof is based on an appropriately modified version of the method used in [3] for the analysis of Friedman’s urn model. The main idea of the original method is explained in Section 2.3.1.

2.3.1. Freedman’s method for Friedman’s urn model

In this section we explain the main idea of Freedman’s method for Friedman’s urn model. First, recall that Friedman’s urn model with parameters ${\alpha}\geq0$ and ${\beta}\geq 0$ describes a DTMC $\left(W_n, B_n\right)\in {\mathbb{R}}^2_{+}\setminus \{(0, 0)\}$ evolving as follows. Given $\left(W_n, B_n\right)=\left(W,B\right)$, the Markov chain jumps to $\left(W+{\alpha}, B+{\beta}\right)$ with probability $W/(W+B)$, and to $\left(W+{\beta}, B+{\alpha}\right)$ with probability $B/(W+B)$. In order to demonstrate the main idea of the method, we consider another Markov chain (the auxiliary process) instead. The auxiliary process is a DTMC $\left(X_n, Y_n\right)\in {\mathbb{Z}}_{+}^2\setminus \{(0, 0)\}$ evolving as follows. Given $\left(X_n, Y_n\right)=\left(x,y\right)$, it jumps to the states $\left(x+1, y\right)$ and $\left(x, y+1\right)$ with probabilities $\frac{{\alpha} x+{\beta} y}{({\alpha}+{\beta})(x+y)}$ and $\frac{{\alpha} y+{\beta} x}{({\alpha}+{\beta})(x+y)}$ respectively. Similarly to competition processes with linear interaction, the DTMC $\left(X_n, Y_n\right)$ takes values in the integer quarter plane and jumps to the nearest neighbour states. There is also a certain similarity between the transition probabilities of $\left(X_n, Y_n\right)$ and those of the competition processes with linear interaction, although the interaction between $X_n$ and $Y_n$ can now be regarded as cooperative rather than competitive. Furthermore, the auxiliary process and Friedman’s urn model are closely related, since

\begin{equation*} \begin{split} W_n&={\alpha} X_n+{\beta} Y_n,\\ B_n&={\beta} X_n +{\alpha} Y_n. \end{split} \end{equation*}

In other words, $X_n$ (resp. $Y_n$ ) can be viewed as the number of times a white (resp. black) colour has been picked up in Friedman’s urn model by time n. Without loss of generality, we apply Freedman’s method to the auxiliary process $\left(X_n, Y_n\right)$ . Given ${\alpha}\geq 0$ and ${\beta}\geq 0$ , define

(12) \begin{equation} \rho=\frac{{\alpha}-{\beta}}{{\alpha}+{\beta}}. \label{eqn10} \end{equation}

Theorem 3 below describes the asymptotic behaviour of the auxiliary process under certain assumptions. The theorem is almost a verbatim copy of part of [3, Theorem 3.1] for the original Friedman urn model with parameters ${\alpha}$ and ${\beta}$. We state and prove the theorem for the auxiliary process for the following reason: there is a certain similarity between our competition process and the auxiliary process, which allows us to adapt the idea of the proof of Theorem 3 for our purposes; we therefore provide the proof here for the reader’s convenience.

Theorem 3. If $\rho>1/2$ then $n^{-\rho}(X_n-Y_n)$ converges almost surely to a non-trivial random variable.

Proof. Define the difference between the components $X_n$ and $Y_n$ as $U_n=X_n-Y_n$ , and their sum as $S_n=X_n+Y_n$ ; note that $S_n=S_0+n$ . We have

(13) \begin{equation} \begin{split} {\textrm{E}}(U_{n+1} \mid U_n)& =U_n\bigg(1+\frac{{\alpha}-{\beta}}{({\alpha}+{\beta})S_n}\bigg)=U_n\bigg(1+\frac{{\alpha}-{\beta}}{s+({\alpha}+{\beta})n}\bigg),\\ {\textrm{E}}(U_{n+1}^2 \mid U_n)& =U_n^2\bigg(1+\frac{2({\alpha}-{\beta})}{({\alpha}+{\beta})S_n}\bigg)+1=U_n^2\bigg(1+\frac{2({\alpha}-{\beta})}{s+({\alpha}+{\beta})n}\bigg)+1, \end{split} \label{eqn11} \end{equation}

where $s=({\alpha}+{\beta})S_0$ . Denote

\[ a_n(j)=\bigg(1+\frac{({\alpha}-{\beta})j}{s+({\alpha}+{\beta})n}\bigg),\quad j=1,2. \]

With this notation we get that

\begin{align*} \label{recur1} \begin{split} {\textrm{E}}(U_{n+1} \mid U_n)&=U_na_n(1),\\ {\textrm{E}}(U_{n+1}^2 \mid U_n)&=U_n^2a_n(2)+1. \end{split} \end{align*}

The first equation in the preceding display means that

(14) \begin{equation} Z_n:= U_n\prod_{k=0}^{n-1}a_k^{-1}(1),\quad n\geq 1, \label{eqn12} \end{equation}

is a martingale. The second equation gives

\[ {\textrm{E}}(U_{n+1}^2)={\textrm{E}}(U_n^2)a_n(2)+1. \]

Using this identity recursively, we arrive at

\[ {\textrm{E}}(U_{n+1}^2)=\bigg(U_0^2+\sum\limits_{j=0}^n\prod\limits_{k=0}^ja_k^{-1}(2)\bigg)\prod_{k=0}^{n}a_k(2). \]

Note that

\[ \prod_{k=0}^m a_k(j)=(C_j+o(1)) m^{j\rho},\quad j=1,2, \]

for some $C_1, C_2>0$ , so that

\[ \sum\limits_{j=0}^{\infty}\prod\limits_{k=0}^ja_k^{-1}(2)<\infty, \]

as $\rho>1/2$ . Consequently, $\sup_{n}n^{-2\rho}{\textrm{E}}(U_n^2)<\infty$ , and since $\prod_{k=0}^{n-1}a_k(1)\sim C_1 n^{\rho}$ , the martingale $Z_n$ defined in (14) is bounded in $L^2$ . Now Doob’s convergence theorem implies that $Z_n$ converges almost surely to a finite limit as $n\to \infty$ . Theorem 3 is thus proved. □
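To illustrate Theorem 3 numerically, one can simulate a process with the conditional moments (13): a pair $(X_n, Y_n)$ in which $X_n$ increases with probability $({\alpha}X_n+{\beta}Y_n)/(({\alpha}+{\beta})(X_n+Y_n))$ . The following sketch is illustrative only; the transition probability is the one consistent with (13) (not a quotation from Reference Freedman[3]), and the helper names are ours.

```python
from fractions import Fraction
import random

def drift_identity_holds(alpha, beta, x, y):
    """Exact check (in rational arithmetic) that the one-step drift of
    U_n = X_n - Y_n equals rho * U_n / S_n, as in the first equation of (13)."""
    a, b, X, Y = Fraction(alpha), Fraction(beta), Fraction(x), Fraction(y)
    p_up = (a * X + b * Y) / ((a + b) * (X + Y))  # P(X increases by 1)
    rho = (a - b) / (a + b)
    return (p_up - (1 - p_up)) == rho * (X - Y) / (X + Y)

def simulate_aux(alpha, beta, x0=1, y0=1, steps=100000, seed=1):
    """Simulate (X_n, Y_n); exactly one component grows by 1 at each step."""
    rng = random.Random(seed)
    x, y = x0, y0
    for _ in range(steps):
        p_up = (alpha * x + beta * y) / ((alpha + beta) * (x + y))
        if rng.random() < p_up:
            x += 1
        else:
            y += 1
    return x, y

if __name__ == "__main__":
    alpha, beta = 5.0, 1.0                 # rho = 2/3 > 1/2
    rho = (alpha - beta) / (alpha + beta)
    for n in (10000, 40000, 160000):
        x, y = simulate_aux(alpha, beta, steps=n, seed=2)
        print(n, (x - y) / n ** rho)       # scaled difference along one trajectory
```

On a single trajectory the scaled difference $n^{-\rho}(X_n-Y_n)$ settles down as n grows, in line with the almost sure convergence asserted by the theorem.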

2.3.2. Proof of Lemma 5 in the case ${\alpha}_1{\alpha}_2>{\beta}_1{\beta}_2$

Proof in the symmetric case. We start with the symmetric case ${\lambda}_1={\lambda}_2={\lambda}$ , ${\alpha}_1={\alpha}_2={\alpha}$ , and ${\beta}_1={\beta}_2={\beta}$ , ${\alpha}>{\beta}$ , in order to provide an intuition for the way the proof works. Denote by $\zeta(n)=(\zeta_1(n), \zeta_2(n))\in {\mathbb{Z}}_{+}^2$ , $n\in {\mathbb{Z}}_{+}$ , the DTMC corresponding to the CTMC $\xi(t)$ . Let $\{{\cal F}_n\}_{n=1}^\infty$ be the natural filtration associated with the Markov chain $\zeta(n)$ . Define

\begin{align*} \label{tau} \begin{split} S_n&=\zeta_1(n)+\zeta_2(n),\ U_n=\zeta_1(n)-\zeta_2(n),\\ \tau&=\min\{m:\ \zeta_1(m)=0 \text{ or } \zeta_2(m)=0\}. \end{split} \end{align*}

Assume that ${\textrm{P}}(\tau=\infty)>0$ ; we will derive a contradiction. First, observe that

(15) \begin{equation} {\textrm{E}}(U_{n+1}^2 \mid {\cal F}_n) =U_n^2\bigg(1+\frac{2({\alpha}+{\beta})}{2{\lambda} +({\alpha}+{\beta})S_n}\bigg)+1 \quad \text{on the event } \{\tau>n\}. \label{eqn13} \end{equation}

Remark 2. This expression is quite similar to the second equation in (13); the fundamental difference is that the sum of the components, i.e. $S_n$ , is now a random process. This is in contrast to both the auxiliary process and Friedman’s urn model (as well as to other urn models without ball removal), where the sum of the components is a deterministic, usually linear, function of n. The main idea of what follows is that $S_n$ can still be effectively controlled, since its asymptotic behaviour is simple.

Trivially, $S_{n+1}-S_n=\pm 1$ and

\begin{align*} {\textrm{P}}(S_{n+1}=S_n+1 \,{|}\, S_n)&=\frac{2{\lambda}+{\alpha} S_n}{2{\lambda}+({\alpha}+{\beta})S_n},\\ {\textrm{P}}(S_{n+1}=S_n-1 \,{|}\, S_n)&=\frac{{\beta} S_n}{2{\lambda}+({\alpha}+{\beta})S_n}. \end{align*}

This shows that the long-term behaviour of $S_n$ is similar to that of a homogeneous simple random walk that jumps right and left with probabilities $\frac{{\alpha}}{{\alpha}+{\beta}}$ and $\frac{{\beta}}{{\alpha}+{\beta}}$ respectively. Therefore, a variation of the strong law of large numbers (a rigorous proof can be found in Lemma 6) implies that for any ${\varepsilon}, \delta > 0$ there exists N such that

(16) \begin{equation} {\textrm{P}}(S_n\in [(\rho-\delta)n, (\rho +\delta)n]\ \text{for all}\ n\geq N)\geq 1-{\varepsilon}, \label{eqn14} \end{equation}

where $\rho$ is defined in (12).
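In the symmetric case $S_n$ is itself a Markov chain, so (16) is easy to probe by simulation. The sketch below is our own helper; it ignores absorption at the axes (the proof only uses $S_n$ on the event $\{\tau>n\}$ ) and uses the up-probability equal to the total birth rate $2{\lambda}+{\alpha}S_n$ divided by the total rate $2{\lambda}+({\alpha}+{\beta})S_n$ .

```python
import random

def simulate_total_size(lam, alpha, beta, s0=2, steps=200000, seed=3):
    """Simulate the total size S_n = zeta_1(n) + zeta_2(n) of the symmetric
    process, ignoring the boundary effect.  S_n jumps up with probability
    (2*lam + alpha*S_n) / (2*lam + (alpha + beta)*S_n) and down otherwise;
    at S_n = 0 the up-probability equals 1, so S_n stays non-negative."""
    rng = random.Random(seed)
    s = s0
    for _ in range(steps):
        p_up = (2 * lam + alpha * s) / (2 * lam + (alpha + beta) * s)
        s += 1 if rng.random() < p_up else -1
    return s

if __name__ == "__main__":
    lam, alpha, beta = 1.0, 3.0, 1.0   # rho = (alpha - beta)/(alpha + beta) = 1/2
    s = simulate_total_size(lam, alpha, beta)
    print(s / 200000)                   # close to rho = 0.5
```

With these parameters the empirical slope $S_n/n$ lands close to $\rho=1/2$ , as (16) suggests.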

Further, fix some $\delta>0$ such that $\rho+\delta<1$ and an arbitrary ${\varepsilon}>0$ ; according to (16) there exists an $N=N({\varepsilon})$ so large that

\[ \sigma_N=\min\{n>N:\ S_n\notin [(\rho-\delta)n, (\rho +\delta)n]\} \]

satisfies

(17) \begin{align} {\textrm{P}}(\sigma_N=\infty)\ge 1-{\varepsilon}. \label{eqn15} \end{align}

It follows from (15) and the definition of $\sigma_N$ that

(18) \begin{align} {\textrm{E}}(U_{n+1}^2 \mid {\cal F}_n) &\geq U_{n}^2b_n \quad \text{on} \quad \{N\le n<\min(\sigma_N, \tau)\},\label{eqn16}\end{align}
\begin{align} \text{where } b_n&=1+\frac{2({\alpha}+{\beta})}{2{\lambda} +({\alpha}+{\beta})(\rho+\delta)n} =1+\frac{2 n^{-1}}{\rho+\delta} +O(n^{-2}). \nonumber \end{align}

Iterating (18) gives that

\[ {\textrm{E}}(U_{n+1}^2 \mid {\cal F}_N)\geq b_nb_{n-1}\dots b_{N+1} b_{N} U_{N}^2 \quad \text{on} \quad \{N\le n<\min(\sigma_N, \tau)\}. \]

Assume $n\ge N$ everywhere below. Then

\begin{align*} \prod_{k=N}^n b_k&= \prod_{k=N}^n \exp\bigg\{\frac{2 k^{-1}}{\rho+\delta} +O\Big(\frac 1{k^2}\Big)\bigg\} = \exp\bigg\{\sum_{k=N}^n\bigg[\frac{2 k^{-1}}{\rho+\delta} +O\Big(\frac 1{k^2}\Big)\bigg]\bigg\} \geq C_2\cdot n^{\frac2{\rho+\delta}} \end{align*}

for some $C_2=C_2(N)>0$ , so that

\[ {\textrm{E}}(U_{n+1}^2 \mid {\cal F}_N)\geq C_2\cdot n^{\frac2{\rho+\delta}}1_{\{n<\min(\sigma_N, \tau)\}}. \]

Dividing both sides by $n^2$ and taking the expectation gives

\[ {\textrm{E}}\bigg(\frac{U_{n+1}^2}{n^2}\bigg)\geq C_2\, n^{\frac2{\rho+\delta}-2}\, {\textrm{P}}(n<\min(\sigma_N, \tau)). \]

The left-hand side of this inequality is uniformly bounded in n, since $|U_n|\leq |U_0|+n$ for all $n\geq 0$ . On the other hand, the bound $\rho+\delta<1$ implies that $n^{\frac2{\rho+\delta}-2}\to \infty$ as $n\to \infty$ . Therefore, if $\lim_{n\to \infty}{\textrm{P}}(n<\min(\sigma_N, \tau))={\textrm{P}}(\sigma_N=\infty, \tau=\infty)$ were strictly positive, we would obtain a contradiction. Consequently, ${\textrm{P}}(\sigma_N=\infty, \tau=\infty)=0$ and

\[ {\textrm{P}}(\tau=\infty)\le {\textrm{P}}(\sigma_N=\infty, \tau=\infty)+{\textrm{P}}(\sigma_N<\infty)\le{\varepsilon} \]

by (17). Since ${\varepsilon}>0$ is arbitrary, ${\textrm{P}}(\tau=\infty)=0$ .

In turn, this means that the process hits the axes in a finite time almost surely, as claimed. □

We are now going to extend the above argument to the general case. Without loss of generality, assume from now on that

\begin{align*} \label{a1a2} {\alpha}_1 \ge{\alpha}_2. \end{align*}

Let us find the asymptotic equilibrium direction for $\zeta(n)$ , which will be shown to be unstable later in the proof. Indeed, if we assume that both ${\lambda}_{1}=0$ and ${\lambda}_2=0$ (they contribute very little to the birth rates when x and y are large) then the slope of the drift of the vector field corresponding to our system is given by

\[ \frac{{\alpha}_2 y - {\beta}_2 x}{{\alpha}_1 x - {\beta}_1 y}. \]

It coincides with the slope of the vector (x,y) if and only if $x=ry$ where r solves

(19) \begin{equation} \frac 1r=\frac{{\alpha}_2-r{\beta}_2}{r{\alpha}_1-{\beta}_1} \quad \Longleftrightarrow \quad {\beta}_2 r^2+ ({\alpha}_1-{\alpha}_2)r -{\beta}_1=0. \label{eqn17} \end{equation}

Since $x,y\ge 0$ we choose r to be the positive root of (19), which can be written as

\begin{equation*} \label{r} r=\frac{-({\alpha}_1-{\alpha}_2)+D}{2{\beta}_2}, \end{equation*}

where

\begin{align*} D=\sqrt{({\alpha}_1-{\alpha}_2)^2+4{\beta}_1{\beta}_2}= \sqrt{({\alpha}_1+{\alpha}_2)^2+4({\beta}_1{\beta}_2-{\alpha}_1{\alpha}_2)}. \end{align*}

Note that (19) can be rewritten as

(20) \begin{equation} r=\frac{{\beta}_1+r{\alpha}_2}{{\alpha}_1+r{\beta}_2}. \label{eqn18} \end{equation}
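The algebra around (19) and (20) is easy to sanity-check numerically. The sketch below is our own helper, with parameter values chosen arbitrarily; it also checks the factorisation of $\tilde\rho$ appearing in (28) below.

```python
import math

def equilibrium_slope(a1, a2, b1, b2):
    """Positive root of beta2*r^2 + (alpha1 - alpha2)*r - beta1 = 0, cf. (19)."""
    D = math.sqrt((a1 - a2) ** 2 + 4 * b1 * b2)
    return (-(a1 - a2) + D) / (2 * b2)

def check_slope_identities(a1, a2, b1, b2, tol=1e-10):
    r = equilibrium_slope(a1, a2, b1, b2)
    # fixed-point form (20): r = (beta1 + r*alpha2) / (alpha1 + r*beta2)
    ok_fixed_point = abs(r - (b1 + r * a2) / (a1 + r * b2)) < tol
    # factorisation used in (28):
    # alpha1*alpha2 - beta1*beta2 = (alpha2 - r*beta2)*(alpha1 + r*beta2)
    ok_factorisation = abs((a1 * a2 - b1 * b2)
                           - (a2 - r * b2) * (a1 + r * b2)) < tol
    return ok_fixed_point and ok_factorisation

if __name__ == "__main__":
    print(equilibrium_slope(4.0, 3.0, 2.0, 1.0))   # r = 1 for these sample values
```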

Define the following variables:

(21) \begin{equation} \begin{split} R(x,y)&=({\alpha}_1+{\beta}_2)x+({\alpha}_2+{\beta}_1)y+{\lambda}_1+{\lambda}_2,\\ R_n&=R(\zeta(n)), \end{split} \end{equation}

(22) \begin{equation} \begin{split} U(x,y)&=x-ry-d,\\ U_n&=U(\zeta(n)), \end{split} \end{equation}

(23) \begin{equation} \text{where } d=-\frac{2({\lambda}_1-{\lambda}_2 r)+{\alpha}_1+{\beta}_2 r^2}{2({\alpha}_1+{\beta}_2 r)}. \end{equation}

Assume that $x,y>0$ . Then

(24) \begin{align} {\textrm{E}}(U_{n+1}^2 \mid \zeta(n)=(x,y))&= (U_n+1)^2\frac{{\lambda}_1+{\alpha}_1x}{R_n}+(U_n-1)^2\frac{{\beta}_1y}{R_n}\nonumber\\ &\quad+(U_n-r)^2\frac{{\lambda}_2+{\alpha}_2y}{R_n}+(U_n+r)^2\frac{{\beta}_2x}{R_n}\nonumber \\ &=U_n^2+2({\alpha}_1+r{\beta}_2)U_n\frac{x-\frac{{\beta}_1+r{\alpha}_2}{{\alpha}_1+r{\beta}_2}y-d}{R_n} +\frac{2U_nQ_1+Q_2(x,y)}{R_n}\nonumber\\ &=U_n^2\,\bigg[1+\frac{2({\alpha}_1+r{\beta}_2)}{R_n}\bigg] +\frac{2U_nQ_1+Q_2(x,y)}{R_n}, \label{eqn19}\end{align}

where we used (20) to rewrite the second term in the third line, and define

\begin{align*} Q_1&:=d({\alpha}_1+{\beta}_2 r)+{\lambda}_1-r{\lambda}_2,\\ Q_2(x,y)&:=(r^2{\beta}_2+{\alpha}_1)x+({\beta}_1+r^2{\alpha}_2)y+{\lambda}_1+r^2{\lambda}_2. \end{align*}

Consequently,

\begin{align*} 2U_nQ_1+Q_2(x,y)&= 2({\alpha}_1+{\beta}_2r)\bigg(d+\frac{2({\lambda}_1-{\lambda}_2r)+{\alpha}_1+{\beta}_2r^2}{2({\alpha}_1+{\beta}_2r)}\bigg)x\\ &\quad+\big({\beta}_1+{\alpha}_2r^2-2rd({\alpha}_1+{\beta}_2r)-2r({\lambda}_1-{\lambda}_2r)\big)y+Q_3, \end{align*}

where $Q_3={\lambda}_1+{\lambda}_2r^2-2d({\lambda}_1-{\lambda}_2r)-2d^2({\alpha}_1+{\beta}_2r)$ . Note that the coefficient in front of x on the right-hand side is equal to 0 by the definition of d in (23). Further, using again the definition of d, we simplify the coefficient in front of y and arrive at the following equation:

\[ 2U_nQ_1+Q_2(x,y)=({\beta}_1+{\alpha}_1r+{\alpha}_2r^2+{\beta}_2r^3)y+Q_3. \]

Note that ${\beta}_1+{\alpha}_1r+{\alpha}_2r^2+{\beta}_2r^3>0$ , therefore

(25) \begin{equation} 2U_nQ_1+Q_2(x,y)>0 \label{eqn20} \end{equation}

for all $y\geq y_0$ , where $y_0$ is a value depending on the model parameters.

Equations (24) and (25) imply that

(26) \begin{equation} {\textrm{E}}(U_{n+1}^2 \mid \zeta(n)=(x,y))\geq U_n^2\,\Big(1+\frac{2({\alpha}_1+r{\beta}_2)}{R_n}\Big) \quad \text{if } x>0 \text{ and } y\geq y_0. \label{eqn21} \end{equation}

Our next goal is to obtain an upper bound for the total transition rate $R_n$ . Define

(27) \begin{align} S(x,y)&=({\alpha}_1{\alpha}_2+{\beta}_1{\beta}_2+2{\alpha}_2{\beta}_2)x +({\alpha}_1{\alpha}_2+{\beta}_1{\beta}_2+2{\alpha}_1{\beta}_1)y,\nonumber\\ S_n&=S(\zeta(n)),\label{eqn22}\end{align}
(28) \begin{align} T(x,y)&={\beta}_2 x+(r{\beta}_2 +{\alpha}_1-{\alpha}_2)y,\nonumber\\ T_n&=T(\zeta(n)),\nonumber\\ \tilde\rho&={\alpha}_1{\alpha}_2-{\beta}_1{\beta}_2=({\alpha}_2-r{\beta}_2)({\alpha}_1+r{\beta}_2)>0. \label{eqn23}\end{align}

Remark 3. Note that $U_n$ , defined by (22), serves as a measure of the departure from the equilibrium direction; $R_n$ is the common denominator of the transition probabilities, $S_n$ is a process with (almost) constant drift (see (30)), while $T_n$ is, up to a multiplicative coefficient, a remainder term, as will become clear later in the proof.

Now we want to write R(x,y) defined in (21) as a linear combination of S(x,y), T(x,y), and an extra constant. In order to find the unknown coefficients, observe that both S and T are linear in x and y with $S(0,0)=T(0,0)=0$ . Therefore, $R(x,y)={\lambda}_1+{\lambda}_2+k\, S(x,y)+ l\, T(x,y)$ , where k and l can be found by solving the elementary system of linear equations

\[ \begin{cases} \frac{\partial R(x,y)}{\partial x}= k\, \frac{\partial S(x,y)}{\partial x}+ l\, \frac{\partial T(x,y)}{\partial x}, \\[5pt] \frac{\partial R(x,y)}{\partial y}= k\, \frac{\partial S(x,y)}{\partial y}+ l\, \frac{\partial T(x,y)}{\partial y}, \end{cases} \]

yielding

\[ k=\frac{{\alpha}_1+r{\beta}_2 }{\tilde\rho}>0, \quad l=-\frac{ ({\alpha}_1{\alpha}_2+2{\alpha}_2{\beta}_2+{\beta}_1{\beta}_2)r+{\alpha}_1{\alpha}_2+2{\alpha}_1{\beta}_1+{\beta}_1{\beta}_2 }{\tilde\rho}<0. \]

Hence,

(29) \begin{equation} R_n=({\lambda}_1+{\lambda}_2)+\frac{{\alpha}_1+r{\beta}_2 }{\tilde\rho}S_n + l T_n. \label{eqn24} \end{equation}
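The coefficients k and l can be double-checked numerically: for arbitrary parameter values (chosen here purely for illustration) the decomposition (29) should hold exactly, up to floating-point error. The helper below is ours.

```python
import math

def decomposition_error(a1, a2, b1, b2, l1, l2, x, y):
    """Return |R(x,y) - (l1 + l2 + k*S(x,y) + l*T(x,y))| for the k, l above."""
    D = math.sqrt((a1 - a2) ** 2 + 4 * b1 * b2)
    r = (-(a1 - a2) + D) / (2 * b2)
    rho_t = a1 * a2 - b1 * b2                       # assumed nonzero here
    k = (a1 + r * b2) / rho_t
    l = -((a1 * a2 + 2 * a2 * b2 + b1 * b2) * r
          + a1 * a2 + 2 * a1 * b1 + b1 * b2) / rho_t
    R = (a1 + b2) * x + (a2 + b1) * y + l1 + l2                        # (21)
    S = ((a1 * a2 + b1 * b2 + 2 * a2 * b2) * x
         + (a1 * a2 + b1 * b2 + 2 * a1 * b1) * y)                      # (27)
    T = b2 * x + (r * b2 + a1 - a2) * y                                # (28)
    return abs(R - (l1 + l2 + k * S + l * T))
```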

The next statement is probably known, but we present its proof here for completeness.

Lemma 6. Suppose that we are given a process $Z_n$ adapted to the filtration ${\mathcal{F}}_n$ such that $|Z_{n+1}-Z_n|\le B$ for all n and

\[ a\le {\textrm{E}}(Z_{n+1}-Z_n \,{|}\, {\mathcal{F}}_n)\le a + \frac{\sigma}{Z_n} \]

for some constants $B>0$ , $a>0$ , and $\sigma\ge 0$ . Then $Z_n/n\to a$ a.s.

Proof. Fix an ${\varepsilon}>0$ and let $\hat Z_n=Z_n-a n$ . Then $\hat Z_n$ is a submartingale with jumps bounded by $B+a$ ; hence, by the Azuma–Hoeffding inequality,

\begin{align*} {\textrm{P}}(\hat Z_n-\hat Z_0\le -{\varepsilon} n)\le \exp\bigg\{-\frac{{\varepsilon}^2 n}{2(B+a)^2}\bigg\}, \end{align*}

and by the Borel–Cantelli lemma the event $\{\hat Z_n/n\le -{\varepsilon}+\hat Z_0/n\}$ occurs only finitely often. Since ${\varepsilon}>0$ is arbitrary and $\hat Z_0/n\to 0$ we get that $\liminf_{n\to\infty} \hat Z_n/n\ge 0$ , yielding $\liminf_{n\to\infty} Z_n/n\ge~a$ .

Next, define

\[ \bar Z_n=\hat Z_n -\sum_{i=1}^n \frac{\sigma}{\max\{1, Z_{i-1}\}}. \]

On the event $\{Z_n\ge 1\}$ we have $-\sigma/Z_n\le{\textrm{E}}(\bar Z_{n+1}-\bar Z_n \mid {\mathcal{F}}_n)\le 0$ . Fix a large N and consider $\bar Z_{n\wedge \tau_N}$ where $\tau_N=\inf\{n\ge N\,{:}\, Z_n<1\}$ . Then $\bar Z_{n\wedge \tau_N}$ is a supermartingale for $n\ge N$ with jumps bounded by $B+a+1$ , and applying the Azuma–Hoeffding inequality again we get

\begin{align*} {\textrm{P}}(|\bar Z_{n\wedge \tau_N}-\bar Z_N| \ge {\varepsilon} n)\le 2\exp\bigg\{-\frac{{\varepsilon}^2 (n-N)}{2(B+a+1)^2}\bigg\} \end{align*}

for any ${\varepsilon}>0$ . By an argument similar to the first part of the proof, this implies that $\lim_{n\to\infty} \bar Z_{n\wedge \tau_N}/n=0$ a.s. However, the first part of the proof implies that $\tau_N=\infty$ for all but finitely many N a.s. Hence, $\lim_{n\to\infty} \bar Z_{n}/n=0$ a.s. Now, the fact that $\liminf_{n\to\infty} Z_n/n\ge a$ gives us that $\sum_{i=1}^n \frac{\sigma}{\max\{1, Z_{i-1}\}}=O(\log n)$ , so that $\bar Z_n -\hat Z_n=o(n)$ , thus implying the statement of the lemma. □
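As a quick illustration of Lemma 6 (not needed for the proof), one can simulate a $\pm 1$ walk whose conditional drift is kept inside the band $[a, a+\sigma/\max\{1,Z_n\}]$ and watch $Z_n/n$ approach a. The function below is our own sketch.

```python
import random

def banded_drift_walk(a=0.4, sigma=0.5, steps=200000, seed=11):
    """A +-1 random walk with E(Z_{n+1} - Z_n | F_n) = a + sigma/max(1, Z_n),
    which lies in the drift band allowed by Lemma 6 (here B = 1)."""
    rng = random.Random(seed)
    z = 0.0
    for _ in range(steps):
        drift = a + sigma / max(1.0, z)
        p_up = (1.0 + drift) / 2.0        # E(step) = 2*p_up - 1 = drift
        z += 1.0 if rng.random() < p_up else -1.0
    return z

if __name__ == "__main__":
    print(banded_drift_walk() / 200000)   # close to a = 0.4
```

The $\sigma/\max\{1,Z_n\}$ correction contributes only $O(\log n)$ in total, so it does not affect the limiting slope, exactly as the lemma asserts.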

Proposition 1. Consider $S_n$ and $\tilde\rho$ defined in (27) and (28) respectively. Then

\[\lim_{n\to\infty} \frac{S_n}n=\tilde\rho\quad\text{a.s.}\]

Proof of Proposition 1. Note that the jumps of $S_n$ are bounded (they can take at most four distinct values). The expected drift of $S_n$ is given by

\begin{align*} {\textrm{E}}(S_{n+1}- S_n \mid \zeta(n)=(x,y))&=\frac{({\alpha}_1{\alpha}_2+{\beta}_1{\beta}_2) (({\alpha}_1-{\beta}_2)x+({\alpha}_2-{\beta}_1)y)}{R_n}\\ &\quad+2\frac{{\alpha}_2{\beta}_2({\alpha}_1x-{\beta}_1y)+{\alpha}_1{\beta}_1({\alpha}_2y-{\beta}_2x)}{R_n}\\ & \quad+\frac{({\lambda}_1+{\lambda}_2)({\alpha}_1{\alpha}_2+{\beta}_1{\beta}_2) +2{\lambda}_1{\alpha}_2{\beta}_2+2{\lambda}_2{\alpha}_1{\beta}_1}{R_n}. \end{align*}

An easy algebraic computation gives that the sum of terms with x in the first and second numerators on the right-hand side is equal to $\tilde\rho({\alpha}_1+{\beta}_2)x$ . Similarly, the sum of all terms with y in the same numerators is equal to $\tilde\rho({\alpha}_2+{\beta}_1)y$ . Rearranging all terms with ${\lambda}_1$ and ${\lambda}_2$ in the last numerator gives

\[ \tilde\rho({\lambda}_1+{\lambda}_2)+2{\lambda}_1{\beta}_2({\alpha}_2+{\beta}_1)+2{\lambda}_2{\beta}_1({\alpha}_1+{\beta}_2). \]

Thus, we obtain that

(30) \begin{align} {\textrm{E}}(S_{n+1}- S_n \mid \zeta(n)=(x,y))= \tilde\rho +\frac{2{\lambda}_1{\beta}_2({\alpha}_2+{\beta}_1)+2{\lambda}_2{\beta}_1({\alpha}_1+{\beta}_2)}{R_n}\ge \tilde\rho>0. \label{eqn25} \end{align}

Note that $R(x,y)\ge (x+y) \min\{{\beta}_1,{\beta}_2\} $ and

\[ S(x,y)\le (x+y) \max\{{\alpha}_1{\alpha}_2+{\beta}_1{\beta}_2+2{\alpha}_2{\beta}_2, {\alpha}_1{\alpha}_2+{\beta}_1{\beta}_2+2{\alpha}_1{\beta}_1\}, \]

and since ${\beta}_1,{\beta}_2>0$ we have $R(x,y)\ge C_1 S(x,y)$ for some positive constant $C_1$ , so that $R_n\ge C_1 S_n$ . Now the result follows from Lemma 6 with $a=\tilde \rho$ . □

Corollary 1. Let $\kappa=\liminf_{n\to\infty} \frac{T_n}n$ . Then ${\textrm{P}}(\kappa>0)=1$ .

Proof. Similarly to the preceding proof,

\[ T(x,y)\ge (x+y)\min({\beta}_2,r{\beta}_2+{\alpha}_1-{\alpha}_2)\ge (x+y){\beta}_2\min (1,r) \]

since ${\alpha}_1-{\alpha}_2\ge 0$ , and thus $T_n\ge C_2S_n$ for some $C_2>0$ . Hence, by Proposition 1,

\begin{equation*} \liminf_{n\to\infty} \frac{T_n}{n}\ge C_2 \liminf_{n\to\infty} \frac{S_n}{n}=C_2 \tilde\rho>0. \end{equation*}

Proposition 2. For every sufficiently small $\delta>0$ and every ${\varepsilon}>0$ there exists N such that

\[ {\textrm{P}}\bigg(R_n\le \frac{{\alpha}_1+r{\beta}_2}{1+\delta}n\ \text{for all}\ n\geq N\bigg)\geq 1-{\varepsilon}. \]

Proof of Proposition 2. Using (29), Proposition 1, and Corollary 1, we obtain that, for sufficiently small $\delta>0$ , sufficiently large n, and any fixed ${\varepsilon}$ ,

\[ R_n= ({\lambda}_1+{\lambda}_2)+ ({\alpha}_1+r{\beta}_2)\frac{S_n}{\tilde\rho}+l T_n \le ({\lambda}_1+{\lambda}_2)+ ({\alpha}_1+r{\beta}_2)(1+\delta)n +l\, \frac{\kappa}{2}n \]

with probability at least $1-{\varepsilon}$ . Recall that $\kappa>0$ by Corollary 1, and that $l<0$ . Let $\delta>0$ be so small that

(31) \begin{equation} ({\lambda}_1+{\lambda}_2)+ ({\alpha}_1+r{\beta}_2)(1+\delta)n + l\, \frac{\kappa}{2}n\leq ({\alpha}_1+r{\beta}_2)(1-\delta)n\leq ({\alpha}_1+r{\beta}_2) \frac{n}{1+\delta}. \label{eqn26} \end{equation}

Thus, we obtain that, with probability at least $1-{\varepsilon}$ ,

\[ R_n\le \frac{{\alpha}_1+r{\beta}_2}{1+\delta}n \]

for all sufficiently large n, as claimed. □

The rest of the proof is similar to the symmetric case, and we are going to explain briefly some minor modifications required. First, define

\begin{equation*} \label{tau1} \tau=\min\{n\,{:}\,\zeta_1(n)=0\text{ or }\zeta_2(n)<y_0\}, \end{equation*}

where $y_0$ is such that the bound in (26) holds. Then, assume that ${\textrm{P}}(\tau=\infty)>0$ and arrive at a contradiction. To this end, fix $\delta>0$ such that (31) holds, and given $N>0$ define

\[ \eta_N=\min\bigg\{n\ge N\,{:}\, R_n>\frac{{\alpha}_1+r{\beta}_2}{1+\delta}n \bigg\}. \]

Assume that N is sufficiently large, so that the probability ${\textrm{P}}(\eta_N=\infty)$ is sufficiently close to 1 to ensure that ${\textrm{P}}(\eta_N=\infty, \tau=\infty)>0$ . Then Proposition 2 implies that

\begin{align*} {\textrm{E}}(U_{n+1}^2\mid {\cal F}_n) &\geq U_{n}^2a_n \quad \text{on} \quad \{N\le n<\min(\eta_N, \tau)\}, \\ \text{where } a_n&=1+\frac{2(1+\delta)}{n}. \end{align*}

Similarly to the symmetric case, it can be shown by using this inequality that ${\textrm{P}}(\eta_N=\infty, \tau=\infty)=0$ . This contradicts the assumption that ${\textrm{P}}(\tau=\infty)>0$ .

Finally, it might happen that $\zeta_1(\tau)>0$ and $0<\zeta_2(\tau)=y_0-1$ . In this case, observe that the probability of hitting the horizontal axis $\{(x,0), x\in {\mathbb{Z}}_{+}\}$ is bounded below uniformly over the starting location $\left(x, y_0-1\right)$ , $x\ge 1$ . Indeed,

\begin{align*} {\textrm{P}}(\zeta(\tau+y_0-1)&=(x,0) \mid \zeta(\tau)=(x,y_0-1))\\ &=\prod_{k=1}^{y_0-1} \frac{{\beta}_2 x}{{\lambda}_1+{\lambda}_2+({\alpha}_1+{\beta}_2) x+({\alpha}_2+{\beta}_1) (y_0-k)} \\ & \ge \prod_{k=1}^{y_0-1} \frac{{\beta}_2 }{{\lambda}_1+{\lambda}_2+{\alpha}_1+{\beta}_2 +({\alpha}_2+{\beta}_1) (y_0-k)}\\ &=\textrm{Const}({\lambda}_1,{\lambda}_2,{\alpha}_1,{\alpha}_2,{\beta}_1,{\beta}_2,y_0)>0, \end{align*}

since $x\ge 1$ . Consequently, with probability one, the process eventually hits the boundary.

2.3.3. Proof of Lemma 5 in the case ${\alpha}_1{\alpha}_2={\beta}_1{\beta}_2$

The proof will be very similar to the case ${\alpha}_1{\alpha}_2>{\beta}_1{\beta}_2$ , so we provide only a sketch. Let S(x,y), R(x,y), $S_n$ , $R_n$ , and $\tilde \rho$ be the same as in the previous section. Note that $\tilde\rho=0$ in this case, so we need to find a replacement for Lemma 6.

Observe that, because ${\alpha}_1{\alpha}_2={\beta}_1{\beta}_2$ and ${\beta}_1{\beta}_2>0$ , we have ${\alpha}_1>0$ and ${\alpha}_2>0$ , and

\begin{align*} S(x,y)&=2{\alpha}_2 ({\alpha}_1+{\beta}_2) x + 2{\alpha}_1 ({\alpha}_2+{\beta}_1) y, \\ R(x,y)&= ({\alpha}_1+{\beta}_2) x + ({\alpha}_2+{\beta}_1) y+{\lambda}_1+{\lambda}_2, \end{align*}

so that $R(x,y)\ge \frac{ S(x,y) }{\max\{2{\alpha}_1,2{\alpha}_2\}}$ . Then (30) becomes

\[ {\textrm{E}}(S_{n+1}-S_n \mid {\cal F}_n)=\frac{2{\lambda}_1{\beta}_2({\alpha}_2+{\beta}_1)+2{\lambda}_2{\beta}_1({\alpha}_1+{\beta}_2)}{R_n}\in\big[ 0, \frac{C_3}{S_n} \big] \]

for some $C_3\ge 0$ . Therefore, $S_n$ can be majorized by a Lamperti random walk (see Reference Menshikov, Popov and Wade[9]) and hence, by Theorem 3.2.7 in Reference Menshikov, Popov and Wade[9], we get that

\[ \limsup_{n\to\infty} \frac{\log S_n }{\log n}\le 1/2\quad \text{a.s.} \]

As a result, the inequality in Proposition 2 is replaced by

\[ {\textrm{P}}(R_n\le n^{1/2+\delta}\ \text{for all}\ n\geq N)\geq 1-{\varepsilon}, \]

and by setting $\delta=1/6$ , on the event $\{R_n\le n^{2/3}\}$ the right-hand side of (26) is bounded below by

\[ U_n^2\,\bigg(1+\frac{2({\alpha}_1+r{\beta}_2)}{n^{2/3}}\bigg), \]

leading to a contradiction similar to the case ${\alpha}_1{\alpha}_2>{\beta}_1{\beta}_2$ .

Appendix A

In this appendix we recall the definition of the competition process from Reference Reuter[12] and briefly discuss the applicability of some theorems of that paper to the competition processes studied in the present paper.

Recall that the competition process in Reference Reuter[12], $X(t)=(X_1(t), X_2(t))\in {\mathbb{Z}}_{+}^2$ , evolves as follows. Given the state $\left( {{x}_{1}},{{x}_{2}} \right)\in \mathbb{Z}_{+}^{2}$ , the process X(t) jumps to

(32) \begin{align} \begin{split} (x_1+1,x_2) &\quad\text{with rate}\quad a(x_1, x_2),\\ (x_1,x_2+1) &\quad\text{with rate}\quad b(x_1, x_2),\\ (x_1-1,x_2) &\quad\text{with rate}\quad c(x_1, x_2)\quad\text{if}\quad x_1>0,\\ (x_1,x_2-1) &\quad\text{with rate}\quad d(x_1, x_2)\quad\text{if}\quad x_2>0,\\ (x_1-1,x_2+1) &\quad\text{with rate}\quad e(x_1, x_2)\quad\text{if}\quad x_1>0,\\ (x_1+1,x_2-1) &\quad\text{with rate}\quad f(x_1, x_2)\quad\text{if}\quad x_2>0, \end{split} \label{eqn27} \end{align}

where $a(x_1, x_2), \ldots, f(x_1, x_2)\geq 0$ . Following Reference Reuter[12], let us assume that the Markov chain is regular in the sense that there exists exactly one associated transition matrix. For simplicity, we assume in addition that the Markov chain X(t) is irreducible, although in general there might be absorption states.

Define the following quantities:

(33) \begin{equation} \begin{split} r_k&=\max_{\substack{x_1, x_2>0;\\ x_1+x_2=k}} [a(x_1, x_2)+b(x_1, x_2)],\\ s_k&=\min_{\substack{x_1, x_2>0;\\ x_1+x_2=k}} [c(x_1, x_2)+d(x_1, x_2)],\\ \tau&=\inf(t\geq 0: X_1(t)=0 \text{ or } X_2(t)=0). \end{split} \label{eqn28} \end{equation}

It follows from Theorem 2 in Reference Reuter[12] that

\begin{equation*} \label{A} A:=\sum\limits_{k=2}^{\infty}\frac{s_2\cdots s_k}{r_2\cdots r_k}=\infty \end{equation*}

is a sufficient condition for the hitting time $\tau$ to be finite almost surely.

Consider, for simplicity, the competition process with linear interaction (with transition rates of type 2 as defined in (2)) in the symmetric case, i.e. ${\alpha}_i={\alpha}$ , ${\beta}_i={\beta}$ , ${\lambda}_i={\lambda}$ , $i=1,2$ . Then $ r_k=2{\lambda}+{\alpha} k$ and $s_k={\beta} k$ , and it is easy to see that if ${\alpha}\le {\beta}$ then

\[ \frac{ s_2\cdots s_k}{r_2\cdots r_k}\geq \begin{cases} C_1\big(\frac{{\beta}}{{\alpha}}\big)^k k^{-{2{\lambda}/{\alpha}}} &\text{if }{\alpha}<{\beta},\\ C_2 k^{-{2{\lambda}/{\alpha}}} &\text{if }{\alpha}={\beta} \end{cases} \]

for some $C_1,C_2>0$ and all sufficiently large k. Consequently, if ${\alpha}<{\beta}$ or ${\alpha}={\beta}\ge 2{\lambda}$ then $A=\infty$ (in the first case the geometric factor dominates, in the second the series $\sum_k k^{-2{\lambda}/{\alpha}}$ diverges); hence $\tau$ is almost surely finite. However, if ${\alpha}>{\beta}$ or ${\alpha}={\beta}< 2{\lambda}$ , then the results of Reference Reuter[12] are not applicable.
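The ratio asymptotics in the last display are easy to check numerically; the sketch below (our own helper, with arbitrary sample parameters) computes the running products $s_2\cdots s_k/(r_2\cdots r_k)$ directly.

```python
def running_ratios(lam, alpha, beta, kmax):
    """Running products t_k = (s_2...s_k)/(r_2...r_k) for the symmetric case,
    with r_j = 2*lam + alpha*j and s_j = beta*j."""
    t, out = 1.0, []
    for j in range(2, kmax + 1):
        t *= (beta * j) / (2 * lam + alpha * j)
        out.append(t)
    return out

if __name__ == "__main__":
    # alpha = beta: t_k decays like k^(-2*lam/alpha), so the rescaled
    # quantity t_k * k^(2*lam/alpha) should stabilise at a constant
    lam, alpha = 1.0, 2.0
    terms = running_ratios(lam, alpha, alpha, 4000)
    print(terms[-1] * 4000 ** (2 * lam / alpha))
```

For ${\lambda}=1$ , ${\alpha}={\beta}=2$ the product even telescopes exactly to $2/(k+1)$ , which makes the polynomial decay rate transparent.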

Further, we are going to compare the long-term behaviour of two simple competition processes. One process of interest is the competition process X(t) given in Example 2 of Reference Reuter[12]. This process is specified by the following choice of transition rates in (32):

(34) \begin{equation} \begin{array}{l@{\,\,}lll} a(x_1, x_2)&\equiv\, a, & b(x_1, x_2)\equiv b,& c(x_1, x_2)=\gamma x_1, \\ d(x_1, x_2)&=\,\delta x_2, & e(x_1, x_2)={\varepsilon} x_1 x_2,& f(x_1,x_2)\equiv 0, \end{array} \label{eqn29} \end{equation}

where $a, b, \gamma, \delta, {\varepsilon}>0$ . The other process is a special case of the competition process with linear interaction in which the transition rates are specified by the parameters ${\alpha}_1={\alpha}_2=0$ , $ {\beta}_1=\delta$ , ${\beta}_2=\gamma$ , ${\lambda}_1=a$ , ${\lambda}_2=b>0$ . In the introduction we interpreted such a competition process as the OK Corral model with ‘resurrection’.

The interactions in these processes are different. However, their behaviours inside the quarter plane are quite similar. Indeed, the mean drift of each of these processes inside the domain is directed towards the axes. Further, the quantities $r_k=a+b$ , $k\geq 1$ , and $s_k=k \min\{\gamma, \delta\}$ , $k\geq 1$ , are the same for both processes. Now, either Theorem 2 of Reference Reuter[12] or an argument based on Lemma 7.3.6 of Reference Menshikov, Popov and Wade[9] (similar to Lemma 3) implies that $\tau<\infty$ a.s. in both cases.

At the same time, these processes evolve differently because of the difference in the transition rates on the boundary. The process with rates given by (34) has a strong mean drift towards the origin, while an OK Corral-type process jumps away with constant rate, as its death rates on the boundary are zero. This seemingly small change results in a quite substantial difference in the long-term behaviour of the processes. Indeed, define $\tilde r_k$ and $\tilde s_k$ by the same formula as $r_k$ and $s_k$ in (33) by taking the maximum (resp. minimum) over the set $x_1,x_2\ge 0$ , i.e. now we include the boundary states (k, 0) and (0, k). Theorem 4 in Reference Reuter[12] states that

\[ {\tilde{ A}}=\sum\limits_{k=1}^{\infty}\frac{\tilde r_1\cdots \tilde r_{k-1}}{\tilde s_1\cdots \tilde s_k}<\infty \]

is a sufficient condition for the competition process with transition rates (32) to be positive recurrent, implying that the process governed by (34) is positive recurrent. Indeed, $\tilde r_k=r_k=a+b>0$ and $\tilde s_k=s_k=k \min\{{\gamma},\delta\}$ , $k\ge1$ , so,

\[ {\tilde{ A}}=\frac{1}{a+b}\sum\limits_{k=1}^{\infty}\frac 1{k!}\Big(\frac{a+b}{\min\{{\gamma},\delta\}}\Big)^k<\infty. \]
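Since $\sum_{k\ge 1}x^k/k!={\textrm{e}}^x-1$ , the series above has the closed form ${\tilde A}=({\textrm{e}}^{x}-1)/(a+b)$ with $x=(a+b)/\min\{{\gamma},\delta\}$ . A small numerical check of this (illustrative only; the helper name is ours):

```python
import math

def A_tilde_partial(a, b, gamma, delta, kmax=60):
    """Partial sum of the series for A-tilde displayed in the text."""
    x = (a + b) / min(gamma, delta)
    return sum(x ** k / math.factorial(k) for k in range(1, kmax + 1)) / (a + b)

if __name__ == "__main__":
    a, b, gamma, delta = 1.0, 2.0, 1.0, 2.0   # x = 3
    print(A_tilde_partial(a, b, gamma, delta))  # agrees with (e**3 - 1) / 3
```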

(Note also that positive recurrence of this process follows from the Foster criterion for positive recurrence with Lyapunov function $f(x_1, x_2)=x_1+x_2$ , but we skip further details.) At the same time, Theorem 4 from Reference Reuter[12] is not applicable to the OK Corral model with ‘resurrection’ since $\tilde s_k=0$ , while our Theorem 2 shows that this process is transient and escapes to infinity in the only possible way, i.e. along the boundary, as described.

Acknowledgements

S. Volkov’s research is partially supported by Swedish Research Council grant VR2014-5147. We thank Mikhail Menshikov and Svante Janson for helpful discussions.

References

Anderson, W. (1991). Continuous-Time Markov Chains: An Application-Oriented Approach. Springer, New York.
Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1987). Regular Variation. Cambridge University Press.
Freedman, D. A. (1965). Bernard Friedman’s urn. Ann. Math. Statist. 36, 956–970.
Iglehart, D. L. (1964). Reversible competition processes. Z. Wahrscheinlichkeitsth. 2, 314–331.
Iglehart, D. L. (1964). Multivariate competition processes. Ann. Math. Statist. 35, 350–361.
Janson, S., Shcherbakov, V. and Volkov, S. (2019). Long-term behaviour of a reversible system of interacting random walks. J. Stat. Phys. 175, 71–96.
Kingman, J. F. C. and Volkov, S. E. (2003). Solution to the OK Corral model via decoupling of Friedman’s urn. J. Theoret. Prob. 16, 267–276.
MacPhee, M. I., Menshikov, M. V. and Wade, A. R. (2010). Angular asymptotics for multi-dimensional non-homogeneous random walks with asymptotically zero drift. Markov Process. Relat. Fields 16, 351–388.
Menshikov, M. V., Popov, S. and Wade, A. R. (2017). Non-Homogeneous Random Walks: Lyapunov Function Methods for Near-Critical Stochastic Systems. Cambridge University Press.
Menshikov, M. and Shcherbakov, V. (2018). Long-term behaviour of two interacting birth-and-death processes. Markov Process. Relat. Fields 24, 85–106.
Pemantle, R. (2007). A survey of random processes with reinforcement. Prob. Surv. 4, 1–79.
Reuter, G. E. H. (1961). Competition processes. In Proc. 4th Berkeley Symp. Math. Statist. Probab., vol. II. University of California Press, Berkeley.
Shcherbakov, V. and Volkov, S. (2015). Long-term behaviour of locally interacting birth-and-death processes. J. Stat. Phys. 158, 132–157.