
The empirical mean position of a branching Lévy process

Published online by Cambridge University Press:  23 November 2020

David Cheek*
Affiliation:
Harvard University
Seva Shneer**
Affiliation:
Heriot-Watt University
*
*Postal address: Program for Evolutionary Dynamics, Harvard University, Cambridge, Massachusetts, USA. Email address: dmcheek@g.harvard.edu
**Postal address: School of MACS, Heriot-Watt University, Edinburgh EH14 4AS, UK. Email address: v.shneer@hw.ac.uk

Abstract

We consider a supercritical branching Lévy process on the real line. Under mild moment assumptions on the number of offspring and their displacements, we prove a second-order limit theorem on the empirical mean position.

Type
Research Papers
Copyright
© Applied Probability Trust 2020

1. Introduction

A branching Lévy process describes a population of particles undergoing spatial movement, death, and reproduction. It can be defined informally as follows (for a formal definition, see Section 2). Initially there is one particle located at the origin of the real line. The particle lives for an exponentially distributed time. During this time it moves according to a Lévy process. At the time of death, the particle is replaced by a random number of new particles, displaced from the parent particle’s death position according to a point process. All particles move, die, and reproduce in a statistically identical manner, independently of every other particle. We are only concerned with the supercritical case. That is, each particle gives birth to more than one particle on average, and thus the total number of particles grows to infinity with positive probability.
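The informal description above translates directly into a short simulation. The following sketch is not from the paper: it assumes binary branching (${N_\emptyset\equiv2}$), unit branching rate, standard Brownian motion for the within-lifetime movement, and Gaussian child displacements; all parameter values are illustrative.

```python
import math, random, statistics

random.seed(1)

LAM = 1.0      # branching rate: lifetimes are Exp(LAM) (illustrative choice)
T = 6.0        # time horizon
MEAN_D = 0.5   # each child is displaced by N(MEAN_D, 1) from the death position

def simulate():
    """Return the positions of all particles alive at time T."""
    alive = []
    stack = [(0.0, 0.0)]                      # (birth time, birth position)
    while stack:
        birth_t, birth_x = stack.pop()
        life = random.expovariate(LAM)
        if birth_t + life > T:
            # Alive at T: add the Brownian increment over (birth_t, T].
            alive.append(birth_x + math.sqrt(T - birth_t) * random.gauss(0, 1))
        else:
            # Dies before T: move to the death position, then branch into two.
            death_x = birth_x + math.sqrt(life) * random.gauss(0, 1)
            for _ in range(2):                # binary branching: N = 2
                stack.append((birth_t + life, death_x + random.gauss(MEAN_D, 1)))
    return alive

positions = simulate()
r = 0.0 + LAM * 2 * MEAN_D   # r = E[Z_1] + lam * E[sum D_i]; E[Z_1] = 0 for BM
print(len(positions), statistics.mean(positions) / T, r)
```

With these parameters the empirical mean grows roughly like rt, illustrating the first-order behaviour; the second-order correction is the subject of this paper.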

The particle positions’ empirical distribution has received much attention, especially for branching random walks and branching Brownian motion, which are special cases of the model. There are many results on the empirical distribution’s maximum [4, 5, 10], as well as on large deviations [12] and on the almost-sure weak convergence to a Gaussian distribution [3, 11].

The empirical mean position, which is simple and important for applications, has received relatively little attention. For specific branching random walks, [13] shows that the empirical mean position almost surely grows asymptotically linearly with time, while [7] shows that the empirical mean position’s variance converges. These results combined raise the question of characterising a second-order limit term.

For branching Lévy processes, under some mild moment assumptions on the number of offspring and their displacements, we prove a second-order limit theorem for the empirical mean position. Namely, we show that the difference between the empirical mean position at time t and rt, for some constant r, converges almost surely to a random variable.

Before proceeding with the remainder of the paper, we discuss some special cases of the model and applications.

First, consider that particles do not move during their lifetime and that each particle is displaced by ${+1}$ from its parent. A particle’s position is its generation. Our result describes the average generation, complementing results of [13] and [6]. Second, consider instead that displacement sizes are Poisson-distributed. This is a popular model for cancer evolution [8]. Here particles are cells, and a cell’s position is its number of mutations. Our result gives the average number of mutations per cell. Third, consider that particles are not displaced from their parent but move as a random walk during their lifetime. This model is seen in phylogenetics. The branching process represents speciation [1], while the positions are lengths of a particular DNA segment [9].

The remainder of the paper is organised as follows. We introduce the model in Section 2, formulate our main result in Section 3, and prove it in Section 4.

2. Model

Initially there is a single particle named ${\emptyset}$ which moves according to a Lévy process ${(Z_{\emptyset,s})_{s\geq0}}$ , with ${Z_{\emptyset,0}=0}$ and ${\mathbb{E}[Z_{\emptyset,1}^2]<\infty}$ . After an exponentially distributed waiting time ${A_\emptyset}$ , the particle dies and is replaced by a random number ${N_\emptyset}$ of new particles with ${\mathbb{E}[N_\emptyset]>1}$ and ${\mathbb{E}[N_\emptyset^2]<\infty}$ . The new particles are born at positions ${(Z_{\emptyset,A_\emptyset}+D_i)_{i=1}^{N_\emptyset}}$ . The ${D_i}$ are ${\mathbb{R}}$ -valued random variables with

\begin{equation*}\mathbb{E}\biggl[\biggl(\sum_{i=1}^{N_\emptyset}D_i\biggr)^2\biggr]<\infty\quad\text{and}\quad\mathbb{E}\biggl[\sum_{i=1}^{N_\emptyset}D_i^2\biggr]<\infty.\end{equation*}

The three objects ${(Z_{\emptyset,s})_{s\geq0}}$ , ${A_\emptyset}$ , and ${(D_i)_{i=1}^{N_\emptyset}}$ are assumed mutually independent (but the ${D_i}$ need not be independent of each other or of ${N_\emptyset}$ ). All particles independently follow the initial particle’s behaviour.

To denote particles we follow standard notation. Let

\begin{equation*}\mathcal{T}=\bigcup_{n\in\mathbb{N}\cup\{0\}}\mathbb{N}^n.\end{equation*}

Here ${\mathbb{N}^0=\{\emptyset\}}$ contains the initial particle. For ${v=(v_1,\ldots,v_n)\in\mathcal{T}}$ and ${i\in\mathbb{N}}$ , write ${vi=(v_1,\ldots,v_n,i)}$ , where v is the parent of vi. To describe genealogical relationships, the set ${\mathcal{T}}$ is endowed with a partial ordering ${\prec}$ , defined by

\begin{equation*}(u_i)_{i=1}^m\prec(v_i)_{i=1}^n\iff m<n \text{ and }(u_i)_{i=1}^m=(v_i)_{i=1}^m.\end{equation*}

Write ${\preceq}$ for ${\prec}$ or ${=}$ .

Now let

\begin{equation*}\bigl[(Z_{v,s})_{s\geq0},A_v,(D_{vi})_{i=1}^{N_v}\bigr]\end{equation*}

for ${v\in\mathcal{T}}$ be independent and identically distributed copies of

\begin{equation*}\bigl[(Z_{\emptyset,s})_{s\geq0},A_\emptyset,(D_i)_{i=1}^{N_\emptyset}\bigr].\end{equation*}

The set of all particles that ever exist is

\begin{equation*}\mathcal{T}^*= \{(v_i)_{i=1}^n\in\mathcal{T}\colon v_{m+1}\leq N_{(v_i)_{i=1}^m},\text{ for }m=0,1,\ldots,n-1 \}.\end{equation*}

The particles alive at time ${t\geq0}$ are

\begin{equation*}\mathcal{T}_t=\biggl\{v\in\mathcal{T}^*\colon \sum_{u\prec v}A_u\leq t<\sum_{u\preceq v}A_u\biggr\}.\end{equation*}

Particle v at time t, if it is alive, has position

\begin{equation*}X_{v,t}=\sum_{\emptyset\prec u\preceq v}D_u+\sum_{\emptyset\preceq u\prec v}Z_{u,A_u}+Z_{v,t-\sum_{\emptyset\preceq u\prec v}A_u}.\end{equation*}

For further notation, the branching rate is

\begin{equation*}\lambda=\mathbb{E}[A_\emptyset]^{-1},\end{equation*}

the effective branching rate is

\begin{equation*}\hat{\lambda}=\lambda\mathbb{E}[N_\emptyset-1],\end{equation*}

and the movement rate is

\begin{equation*}r=\mathbb{E}[Z_{\emptyset,1}]+\lambda\mathbb{E}\biggl[\sum_{i=1}^{N_\emptyset}D_i\biggr].\end{equation*}
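As a concrete (hypothetical) instance of these quantities, take the mutation-count model from the introduction: particles do not move between branchings, branching is binary, and each child's displacement is i.i.d. and independent of ${N_\emptyset}$ , so that ${\mathbb{E}[\sum_i D_i]=\mathbb{E}[N_\emptyset]\mathbb{E}[D_1]}$ by Wald's identity. The numbers below are illustrative only.

```python
# Hypothetical parameter values, for illustration only.
lam = 2.0                    # branching rate, lam = 1 / E[A]
EN = 2.0                     # mean offspring number E[N]
ED = 0.3                     # mean displacement per child, E[D_i]
EZ1 = 0.0                    # E[Z_1] = 0: no motion between branchings

lam_hat = lam * (EN - 1.0)   # effective branching rate
r = EZ1 + lam * EN * ED      # movement rate: E[Z_1] + lam * E[sum D_i]

print(lam_hat, r)            # → 2.0 1.2
```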

3. Main result

Theorem 3.1. Conditional on the event ${\{\lim_{t\rightarrow\infty}|\mathcal{T}_t|=\infty\}}$ , the limit

\begin{equation*}\lim_{t\rightarrow\infty}\frac{1}{|\mathcal{T}_t|}\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\end{equation*}

exists and is finite almost surely.

4. Proof of Theorem 3.1

Our proof will involve conditioning on whether branching occurs during the time interval [0, h] for some small ${h>0}$ . Write

\begin{equation*}J_{0,h}=\{A_\emptyset>h\}\end{equation*}

for the event that the first branching occurs after time h. Write

\begin{equation*}J_{1,h}=\Bigl\{A_\emptyset\leq h<A_\emptyset+\min_{i=1,\ldots,N_\emptyset} A_i\Bigr\}\end{equation*}

for the event that the first branching occurs before time h and the second branching occurs after time h. Write

\begin{equation*}J_{2,h}=\Bigl\{A_\emptyset+\min_{i=1,\ldots,N_\emptyset} A_i\leq h\Bigr\}\end{equation*}

for the event that the second branching occurs before time h. Note the probabilities

\begin{equation*}\begin{cases}\mathbb{P}[J_{0,h}]=1-h\lambda+{\mathrm{o}}(h){,} \\[3pt] \mathbb{P}[J_{1,h}]=h\lambda+{\mathrm{o}}(h){,} \\[3pt] \mathbb{P}[J_{2,h}]={\mathrm{o}}(h),\end{cases}\end{equation*}

as ${h\downarrow0}$ . Observe the conditional distribution

(4.1) \begin{equation}\biggl(\sum_{v\in\mathcal{T}_{t+h}}(X_{v,t+h}-r(t+h))\mid J_{0,h}\biggr)\overset{d}{=}\sum_{v\in\mathcal{T}_t'}(Z_{\emptyset,h}+X_{v,t}^{\prime}-r(t+h)),\end{equation}

where ${(X_{v,t}^{\prime})_{v\in\mathcal{T}_t'}\overset{d}{=}(X_{v,t})_{v\in\mathcal{T}_t}}$ , and ${(X_{v,t}^{\prime})_{v\in\mathcal{T}_t'}}$ is independent of ${Z_{\emptyset,h}}$ . Meanwhile

(4.2) \begin{equation}\biggl(\sum_{v\in\mathcal{T}_{t+h}}(X_{v,t+h}-r(t+h))\mid J_{1,h}\biggr)\overset{d}{=}\sum_{i=1}^{N_\emptyset}\sum_{v\in\mathcal{T}^i_t}(D_i+X^i_{v,t}-rt)+\eta_h,\end{equation}

where ${(X^i_{v,t})_{v\in\mathcal{T}_t^i}\overset{d}{=}(X_{v,t})_{v\in\mathcal{T}_t}}$ for ${i=1,\ldots,N_\emptyset}$ , the ${(X^i_{v,t})_{v\in\mathcal{T}_t^i}}$ are independent of each other and of ${(D_i)_{i=1}^{N_\emptyset}}$ , and

\begin{equation*}\eta_{h}=\biggl(\sum_{i=1}^{N_{\emptyset}}|\mathcal{T}_t^i|(Z_{\emptyset,A_{\emptyset}} + Z_{i,h-A_{\emptyset}}-rh)\mid J_{1,h}\biggr).\end{equation*}

Straightforward calculations show that the first and second moments of ${\eta_h}$ converge to 0 as ${h\downarrow0}$ .

Lemma 4.1. For ${t\geq0}$ ,

\begin{equation*}\mathbb{E}\biggl[\sum_{v\in\mathcal{T}_t} (X_{v,t}-rt)\biggr]=0.\end{equation*}

Proof. From (4.1),

\begin{equation*}\mathbb{E}\biggl[\sum_{v\in\mathcal{T}_{t+h}} (X_{v,t+h}-r(t+h))\mid J_{0,h}\biggr]=\mathbb{E}\biggl[\sum_{v\in\mathcal{T}_t} (X_{v,t}-rt)\biggr]+h (\mathbb{E}[Z_{\emptyset,1}]-r )\mathbb{E}|\mathcal{T}_t|.\end{equation*}

From (4.2),

\begin{align*} & \mathbb{E}\biggl[\sum_{v\in\mathcal{T}_{t+h}} (X_{v,t+h}-r(t+h))\mid J_{1,h}\biggr] \\ & \quad = \mathbb{E}[N_\emptyset ]\mathbb{E}\biggl[\sum_{v\in\mathcal{T}_t} (X_{v,t}-rt)\biggr] +\mathbb{E}\biggl[\sum_{i=1}^{N_\emptyset} D_i\biggr]\mathbb{E}|\mathcal{T}_t|+{\mathrm{o}}(1).\end{align*}

Taking the unconditional expectation,

\begin{align*} &\mathbb{E}\biggl[\sum_{v\in\mathcal{T}_{t+h}} (X_{v,t+h}-r(t+h))\biggr] \\ &\quad = (1-h\lambda)\mathbb{E}\biggl[\sum_{v\in\mathcal{T}_{t+h}} (X_{v,t+h}-r(t+h))\mid J_{0,h}\biggr]\\ &\quad\quad\, +h\lambda\mathbb{E}\biggl[\sum_{v\in\mathcal{T}_{t+h}} (X_{v,t+h}-r(t+h))\mid J_{1,h}\biggr]+{\mathrm{o}}(h)\\ & \quad = \mathbb{E}\biggl[\sum_{v\in\mathcal{T}_t} (X_{v,t}-rt)\biggr](1+h\hat{\lambda})+{\mathrm{o}}(h).\end{align*}

Rearranging and taking ${h\downarrow0}$ ,

\begin{equation*}\frac{{\mathrm{d}}}{{\mathrm{d}} t}\mathbb{E}\biggl[\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr]=\hat{\lambda}\mathbb{E}\biggl[\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr].\end{equation*}

The statement of the lemma for any t now follows from the above and the fact that it clearly holds for ${t=0}$ . □

Next we determine second moments.

Lemma 4.2. For ${t\geq0}$ ,

\begin{equation*}\mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr)^2\biggr]=c_1 \,{\mathrm{e}}^{2\hat{\lambda}t}-c_2t {\mathrm{e}}^{\hat{\lambda}t}-c_1 {\mathrm{e}}^{\hat{\lambda}t},\end{equation*}

where

\begin{equation*}c_1=\frac{\mathbb{E}[(N_\emptyset-1)^2]}{\mathbb{E}[N_\emptyset-1]^2}\mathbb{E}\biggl[\sum_{i=1}^{N_\emptyset}D_i^2\biggr]+\frac{1}{\mathbb{E}[N_\emptyset-1]}\mathbb{E}\biggl[\biggl(\sum_{i=1}^{N_\emptyset}D_i\biggr)^2\biggr]\end{equation*}

and

\begin{equation*}c_2=\hat{\lambda}\frac{\mathbb{E}[(N_\emptyset-1)^2]}{\mathbb{E}[N_\emptyset-1]^2}\mathbb{E}\biggl[\sum_{i=1}^{N_\emptyset}D_i^2\biggr].\end{equation*}

Proof. From (4.1),

\begin{align*} \mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_{t+h}}(X_{v,t+h}-r(t+h))\biggr)^2\mid J_{0,h}\biggr] & = \mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr)^2\biggr]\\ &\quad\, +2h (\mathbb{E}[Z_{\emptyset,1}]-r)\mathbb{E}\biggl[|\mathcal{T}_t|\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr]\\ &\quad\, +h (\mathbb{E}[Z_{\emptyset,1}^2]-\mathbb{E}[Z_{\emptyset,1}]^2)\mathbb{E}[|\mathcal{T}_t|^2]\\ &\quad\, +{\mathrm{o}}(h).\end{align*}

From (4.2) and Lemma 4.1,

\begin{align*} \mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_{t+h}}(X_{v,t+h}-r(t+h))\biggr)^2\mid J_{1,h}\biggr] & = \mathbb{E}\biggl[\biggl(\sum_{i=1}^{N_\emptyset}\sum_{v\in\mathcal{T}^i_t}(X_{v,t}-rt)\biggr)^2\biggr] \\ &\quad\, +2\mathbb{E}\biggl[\sum_{i=1}^{N_\emptyset}D_i|\mathcal{T}_t^i|\sum_{v\in\mathcal{T}^i_t}(X_{v,t}-rt)\biggr] \\ & \quad\, +2\mathbb{E}\biggl[\sum_{\substack{i,j=1\\i\not=j}}^{N_\emptyset}D_i|\mathcal{T}_t^i|\sum_{v\in\mathcal{T}^j_t}(X_{v,t}-rt)\biggr]\\ &\quad\, +\mathbb{E}\biggl[\biggl(\sum_{i=1}^{N_\emptyset}D_i|\mathcal{T}_t^i|\biggr)^2\biggr]+{\mathrm{o}}(1)\\ & = \mathbb{E}[N_\emptyset]\mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr)^2\biggr]\\ &\quad\, +2\mathbb{E}\biggl[\sum_{i=1}^{N_\emptyset}D_i\biggr]\mathbb{E}\biggl[|\mathcal{T}_t|\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr]\\&\quad\, +\mathbb{E}\biggl[\sum_{i=1}^{N_\emptyset}D_i^2\biggr]\biggl(\mathbb{E}\biggl[|\mathcal{T}_t|^2\biggr]-\biggl(\mathbb{E}|\mathcal{T}_t|\biggr)^2\biggr)\\&\quad\, +\mathbb{E}\biggl[\biggl(\sum_{i=1}^{N_\emptyset}D_i\biggr)^2\biggr] (\mathbb{E}|\mathcal{T}_t|)^2+{\mathrm{o}}(1).\end{align*}

But ${\mathbb{E}|\mathcal{T}_t|}$ and ${\mathbb{E}[|\mathcal{T}_t|^2]}$ are standard knowledge [2]:

\begin{equation*}\mathbb{E}|\mathcal{T}_t|= {\mathrm{e}}^{\hat{\lambda}t}\end{equation*}

and

\begin{equation*}\mathbb{E}[|\mathcal{T}_t|^2]=\biggl(1+\frac{\mathbb{E}[(N_\emptyset-1)^2]}{\mathbb{E}[N_\emptyset-1]}\biggr) {\mathrm{e}}^{2\hat{\lambda}t}-\frac{\mathbb{E}[(N_\emptyset-1)^2]}{\mathbb{E}[N_\emptyset-1]} {\mathrm{e}}^{\hat{\lambda}t}.\end{equation*}
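These moment formulas can be spot-checked by Monte Carlo for a Yule process (binary branching, so ${\mathbb{E}[(N_\emptyset-1)^2]/\mathbb{E}[N_\emptyset-1]=1}$ ). The following sketch, with illustrative parameter values, is not part of the paper.

```python
import math, random, statistics

random.seed(7)
LAM, T, RUNS = 1.0, 3.0, 4000   # illustrative values

def yule_size():
    """Population size at time T of a binary pure-birth (Yule) process."""
    n, t = 1, 0.0
    while True:
        t += random.expovariate(n * LAM)   # next split: min of n Exp(LAM) clocks
        if t > T:
            return n
        n += 1

sizes = [yule_size() for _ in range(RUNS)]
lam_hat = LAM                                                # lam * E[N-1], N = 2
m1 = math.exp(lam_hat * T)                                   # E|T_t|
m2 = 2 * math.exp(2 * lam_hat * T) - math.exp(lam_hat * T)   # E[|T_t|^2] for N = 2
print(statistics.mean(sizes), m1)
print(statistics.mean(s * s for s in sizes), m2)
```

The empirical first and second moments agree with the formulas up to Monte Carlo error.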

Therefore

\begin{align*} &\mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_{t+h}}(X_{v,t+h}-r(t+h))\biggr)^2\biggr] \\ &\quad =(1-\lambda h)\mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_{t+h}}(X_{v,t+h}-r(t+h))\biggr)^2\mid J_{0,h}\biggr]\\ &\quad\quad\, +h\lambda\mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_{t+h}}(X_{v,t+h}-r(t+h))\biggr)^2\mid J_{1,h}\biggr]+{\mathrm{o}}(h)\\ &\quad =(1+h\hat{\lambda})\mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr)^2\biggr]\\ &\quad\quad\, +h a {\mathrm{e}}^{2\hat{\lambda} t}+hb {\mathrm{e}}^{\hat{\lambda} t}+{\mathrm{o}}(h),\end{align*}

where

\begin{equation*}a=\lambda\biggl(\frac{\mathbb{E}[(N_\emptyset-1)^2]}{\mathbb{E}[N_\emptyset-1]}\mathbb{E}\biggl[\sum_{i=1}^{N_\emptyset}D_i^2\biggr]+\mathbb{E}\biggl[\biggl(\sum_{i=1}^{N_\emptyset}D_i\biggr)^2\biggr]\biggr)\end{equation*}

and

\begin{equation*}b=-\lambda\frac{\mathbb{E}[(N_\emptyset-1)^2]}{\mathbb{E}[N_\emptyset-1]}\mathbb{E}\biggl[\sum_{i=1}^{N_\emptyset}D_i^2\biggr]{.}\end{equation*}

Rearranging and taking ${h\downarrow0}$ ,

\begin{align*} \frac{{\mathrm{d}}}{{\mathrm{d}} t}\mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr)^2\biggr] = \hat{\lambda}\mathbb{E}\biggl[\biggl(\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr)^2\biggr]+a {\mathrm{e}}^{2\hat{\lambda}t}+b {\mathrm{e}}^{\hat{\lambda}t}.\end{align*}

The statement of Lemma 4.2 now follows by solving the differential equation above with initial value 0 at ${t=0}$ . □
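As a sanity check (not part of the paper), one can verify numerically that the closed form of Lemma 4.2, with ${c_1=a/\hat{\lambda}}$ and ${c_2=-b}$ , solves the differential equation in the proof with initial value 0; the constants below are arbitrary illustrative values.

```python
import math

# Arbitrary illustrative constants (any lam_hat > 0 and a, b work).
lam_hat, a, b = 0.7, 1.3, -0.4
c1, c2 = a / lam_hat, -b

def f(t):
    """Closed form claimed in Lemma 4.2, with c1 = a/lam_hat and c2 = -b."""
    return (c1 * math.exp(2 * lam_hat * t)
            - c2 * t * math.exp(lam_hat * t)
            - c1 * math.exp(lam_hat * t))

def ode_rhs(t):
    """Right-hand side of the differential equation in the proof."""
    return lam_hat * f(t) + a * math.exp(2 * lam_hat * t) + b * math.exp(lam_hat * t)

assert abs(f(0.0)) < 1e-12                       # initial condition
for t in (0.5, 1.0, 2.0):
    h = 1e-6
    deriv = (f(t + h) - f(t - h)) / (2 * h)      # central-difference derivative
    assert abs(deriv - ode_rhs(t)) < 1e-4 * max(1.0, abs(ode_rhs(t)))
print("ODE check passed")
```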

Next we present a martingale result for which a filtration ${(\mathcal{F}_t)_{t\geq0}}$ needs to be defined:

\begin{equation*}\mathcal{F}_t=\sigma ((X_{v,s})_{v\in\mathcal{T}_s}\colon 0\leq s\leq t).\end{equation*}

Lemma 4.3.

\begin{equation*}\biggl( {\mathrm{e}}^{-\hat{\lambda} t}\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\biggr)_{t\geq0}\end{equation*}

is a martingale with respect to ${(\mathcal{F}_t)_{t\geq0}}$ .

Proof. Write

\begin{equation*}\mathcal{T}_{u,t}=\{v\in\mathcal{T}_t\colon u\preceq v\}\end{equation*}

for the particles alive at time t which are descendants of ${u\in\mathcal{T}}$ . Let ${0\leq s\leq t}$ . Then

\begin{equation*} {\mathrm{e}}^{-\hat{\lambda} t}\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt) = {\mathrm{e}}^{-\hat{\lambda} t}\sum_{u\in\mathcal{T}_s}\sum_{v\in\mathcal{T}_{u,t}}(X_{v,t}-X_{u,s}-r(t-s)) + {\mathrm{e}}^{-\hat{\lambda} t}\sum_{u\in\mathcal{T}_s}|\mathcal{T}_{u,t}|(X_{u,s}-rs).\end{equation*}

Taking conditional expectations,

\begin{align*} & \mathbb{E}\biggl[ {\mathrm{e}}^{-\hat{\lambda} t}\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)\mid \mathcal{F}_s\biggr]\\ &\quad = {\mathrm{e}}^{-\hat{\lambda} t}|\mathcal{T}_s|\mathbb{E}\biggl[\sum_{v\in\mathcal{T}_{t-s}}(X_{v,t-s}-r(t-s))\biggr]+ {\mathrm{e}}^{-\hat{\lambda} t}\sum_{u\in\mathcal{T}_s} {\mathrm{e}}^{\hat{\lambda}(t-s)}(X_{u,s}-rs)\\&\quad = {\mathrm{e}}^{-\hat{\lambda} s}\sum_{u\in\mathcal{T}_s}(X_{u,s}-rs),\end{align*}

where the last equality is due to Lemma 4.1. □

Proof of Theorem 3.1. By Lemmas 4.2 and 4.3 and the martingale convergence theorem, there is an ${\mathbb{R}}$ -valued random variable V with

(4.3) \begin{equation}\lim_{t\rightarrow\infty} {\mathrm{e}}^{-\hat{\lambda} t}\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)=V\end{equation}

almost surely. But conditioned on the event ${\{\lim_{t\rightarrow\infty}|\mathcal{T}_t|=\infty\}}$ , there is a positive random variable W with

(4.4) \begin{equation}\lim_{t\rightarrow\infty} {\mathrm{e}}^{-\hat{\lambda} t}|\mathcal{T}_t|=W\end{equation}

almost surely [2]. Combine (4.3) and (4.4) to conclude the proof. □
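Spelling out the final combination: since ${\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)=\sum_{v\in\mathcal{T}_t}X_{v,t}-|\mathcal{T}_t|rt}$ , dividing (4.3) by (4.4) gives, on the survival event,

\begin{equation*}\lim_{t\rightarrow\infty}\frac{1}{|\mathcal{T}_t|}\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)=\lim_{t\rightarrow\infty}\frac{{\mathrm{e}}^{-\hat{\lambda}t}\sum_{v\in\mathcal{T}_t}(X_{v,t}-rt)}{{\mathrm{e}}^{-\hat{\lambda}t}|\mathcal{T}_t|}=\frac{V}{W}\quad\text{almost surely},\end{equation*}

which is finite because ${W>0}$ .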

Acknowledgements

The authors are grateful to Ken Duffy for a number of useful discussions and to the anonymous referee for a number of useful comments on the first version of our paper.

References

[1] Aldous, D. J. (2001). Stochastic models and descriptive statistics for phylogenetic trees, from Yule to today. Statist. Sci. 16, 23–34.
[2] Athreya, K. and Ney, P. (1972). Branching Processes. Springer.
[3] Biggins, J. (1990). The central limit theorem for the supercritical branching random walk, and related results. Stoch. Process. Appl. 34, 255–274.
[4] Biggins, J. (1995). The growth and spread of the general branching random walk. Ann. Appl. Prob. 5, 1008–1024.
[5] Bramson, M. D. (1978). Maximal displacement of branching Brownian motion. Commun. Pure Appl. Math. 31, 531–581.
[6] Chauvin, B., Klein, T., Marckert, J.-F. and Rouault, A. (2005). Martingales and profile of binary search trees. Electron. J. Prob. 10, 420–435.
[7] Duffy, K. R., Meli, G. and Shneer, S. (2019). The variance of the average depth of a pure birth process converges to 7. Statist. Prob. Lett. 150, 88–93.
[8] Durrett, R. (2013). Population genetics of neutral mutations in exponentially growing cancer cell populations. Ann. Appl. Prob. 23, 230–250.
[9] Felsenstein, J. (2004). Inferring Phylogenies. Sinauer Associates.
[10] Gantert, N. and Höfelsauer, T. (2018). Large deviations for the maximum of a branching random walk. Electron. Commun. Prob. 23, 34.
[11] Gao, Z. and Liu, Q. (2018). Second and third orders asymptotic expansions for the distribution of particles in a branching random walk with a random environment in time. Bernoulli 24, 772–800.
[12] Louidor, O. and Tsairi, E. (2017). Large deviations for the empirical distribution in the general branching random walk. Available at .
[13] Meli, G., Weber, T. S. and Duffy, K. R. (2019). Sample path properties of the average generation of a Bellman–Harris process. J. Math. Biol. 79, 673–704.