
On the multifractal analysis of the covering number on the Galton–Watson tree

Published online by Cambridge University Press:  12 July 2019

Najmeddine Attia*
Affiliation:
Faculté des Sciences de Monastir
*
*Postal address: Department Mathématiques, Faculté des Sciences de Monastir, Avenue de l’Environment 5000, Monastir, Tunisia. Email address: najmeddine.attia@gmail.com

Abstract

We consider, for t in the boundary of a Galton–Watson tree $(\partial \textsf{T})$, the covering number $(\textsf{N}_n(t))$ by the generation-n cylinder. For a suitable set I and sequence (sn), we almost surely establish the Hausdorff dimension of the set $\{ t \in \partial {\textsf{T}}:{{\textsf{N}}_n}(t) - nb \ {\sim} \ {s_n}\} $ for $b \in I$.

Type
Research Papers
Copyright
© Applied Probability Trust 2019 

1. Introduction and main results

Let (N, X) be a random vector with independent components taking values in ℕ², where ℕ denotes the set of nonnegative integers. Then consider $\{ (N_{u}, X_{u} )\}_{u\in \bigcup_{n\geq 0} \mathbb{N}^n_+}$ to be a family of independent copies of the vector (N, X) indexed by the set of finite words over the alphabet ℕ+, the set of positive integers (n = 0 corresponds to the empty sequence denoted by ∅). Let $\textsf{T}$ be the Galton–Watson tree with defining elements {Nu}. We have ∅ ∈ $\textsf{T}$; if u ∈ $\textsf{T}$ and i ∈ ℕ+ then ui, the concatenation of u and i, belongs to $\textsf{T}$ if and only if 1 ≤ i ≤ Nu, and if ui ∈ $\textsf{T}$ then u ∈ $\textsf{T}$. Similarly, for each $u\in \bigcup_{n\ge 0} \mathbb{N}^n_+$, denote by $\textsf{T}$(u) the Galton–Watson tree rooted at u and defined by the {Nuv}, $v \in \bigcup_{n \ge 0} \mathbb{N}_+ ^{n}$.

We assume that $\mathbb{E}(N) > 1$, so that the Galton–Watson tree is supercritical. We also assume that the probability of extinction is equal to 0, so that ℙ(N ≥ 1) = 1.

For each infinite word $t=t_1 t_2 \cdots\in\mathbb{N}_+^{\mathbb{N}_+}$ and n ≥ 0, we set $t_{|n}=t_1 \cdots t_n \in\mathbb{N}^n_+$ ($t_{|0}=\emptyset$). If $u\in \mathbb{N}^n_+$ for some n ≥ 0 then n is the length of u, denoted by |u|. Then we denote by [u] the set of infinite words $t \in\mathbb{N}_+^{\mathbb{N}_+}$ such that $t_{||u|} = u$.

The set $\mathbb{N}_+^{\mathbb{N}_+}$ is endowed with the standard ultrametric distance

$$d\colon (u,v) \mapsto \mathrm{e}^{-\sup \{ |w| \colon u \in [w],\, v \in [w]\} },$$

with the convention that exp (− ∞) = 0. The boundary of the Galton–Watson tree $\textsf{T}$ is defined as the compact set

$$\partial \textsf{T} = \bigcap_{n \ge 1} \bigcup_{u \in \textsf{T}_n} [u],$$

where $\textsf{T}_n=\textsf{T}\cap \mathbb{N}_+^n$.
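For readers who wish to experiment numerically, the coding of nodes by finite words is straightforward to implement. The following Python sketch is illustrative only; the offspring law (N uniform on {1, 2, 3}) is an arbitrary choice satisfying $\mathbb{E}(N) > 1$ and ℙ(N ≥ 1) = 1, as assumed above:

```python
import random

rng = random.Random(0)

def sample_tree(offspring, depth):
    """Generate the first `depth` generations of a Galton-Watson tree.

    `offspring` is a callable returning a sample of N.  Nodes are finite
    words over the positive integers, stored as tuples; the empty tuple
    is the root (the empty word).
    """
    nodes = {()}             # all nodes generated so far
    level = [()]             # current generation
    for _ in range(depth):
        nxt = []
        for u in level:
            for i in range(1, offspring() + 1):   # children u1, ..., uN_u
                v = u + (i,)
                nodes.add(v)
                nxt.append(v)
        level = nxt
    return nodes, level      # nodes of T up to `depth`, and T_depth

# Illustrative offspring law: N uniform on {1, 2, 3}, so E(N) = 2 > 1
# and P(N >= 1) = 1 (supercritical, no extinction).
nodes, T3 = sample_tree(lambda: rng.randint(1, 3), 3)
```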

We consider Xu as the covering number of the cylinder [u]; that is, the cylinder [u] is cut off with probability $p_0 = \mathbb{P}(X = 0)$ and is covered m times with probability $p_m = \mathbb{P}(X = m)$, m = 1, 2, ….

For $t \in \partial\textsf{T}$, let

\begin{equation}\nonumber {\textsf{N}_n(t)} = \sum_{k=1}^n X_{t_1\cdots t_k}. \end{equation}

Since this quantity depends on $t_1\cdots t_n$ only, we also denote by $\textsf{N}_n(u)$ the constant value of $\textsf{N}_n(\cdot)$ over [u] whenever $u \in {{\textsf{T}}_n}$. The quantity $\textsf{N}_n(t)$ is called the covering number (or, more precisely, the n-covering number) of the point t by the generation-k cylinders, k = 1, 2, …, n.
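The recursion $\textsf{N}_n(u) = \textsf{N}_{n-1}(u_{|n-1}) + X_u$ makes the covering numbers easy to compute along the tree. A hypothetical Python sketch (the laws of N and X below are arbitrary illustrative choices, with the two components independent):

```python
import random

rng = random.Random(1)

def grow(depth):
    """Grow `depth` generations, attaching to every node u != root a mark
    X_u; here (illustratively) N is uniform on {1, 2, 3} and X has law
    P(X=0)=0.3, P(X=1)=0.5, P(X=2)=0.2, independent of N."""
    X = {}
    level = [()]
    for _ in range(depth):
        nxt = []
        for u in level:
            for i in range(1, rng.randint(1, 3) + 1):
                v = u + (i,)
                X[v] = rng.choices([0, 1, 2], weights=[0.3, 0.5, 0.2])[0]
                nxt.append(v)
        level = nxt
    return X, level          # marks, and the nodes of T_depth

def covering_number(X, u):
    """N_n(u) = sum_{k=1}^{n} X_{u_1...u_k} for u in T_n (n = len(u))."""
    return sum(X[u[:k]] for k in range(1, len(u) + 1))

X, Tn = grow(4)
values = [covering_number(X, u) for u in Tn]
```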

We also define the α-dimensional Hausdorff measure of a set E by

$${\mathcal{H}}^{\alpha}(E) = \lim_{\delta\to 0} \mathcal{H}_{\delta}^{\alpha} (E)=\lim_{\delta\to 0} \inf\Big\{\sum_{i\in\mathbb{N}} \text{diam}(U_i)^{\alpha}\Big\},$$

where the infimum is taken over all the countable coverings (Ui)i∈ℕ of E of diameters less than or equal to δ. Then the Hausdorff dimension of E is defined as

\begin{equation}\nonumber \dim E=\sup\{\alpha > 0\colon {\mathcal H}^\alpha (E)=\infty\}=\inf\{\alpha > 0\colon {\mathcal H}^\alpha(E)=0\}, \end{equation}

with the conventions that

\begin{equation} \sup \emptyset= 0 \quad\text{and}\quad \inf \emptyset=\infty. \end{equation}

Moreover, if E is a Borel set and μ is a measure supported on E, then its lower Hausdorff dimension is defined as

\begin{equation}\nonumber \underline{\dim} (\mu)=\inf \{\dim F\colon F \text{ Borel}, \mu(F)>0 \}, \end{equation}

and we have

\begin{equation}\nonumber \underline{\dim} (\mu)={\text{ess}\,\inf}_\mu \ \liminf_{r\to 0^+}\frac{\log \mu(B(t,r))}{\log (r)}, \end{equation}

where the first infimum is taken over all t and B(t, r) stands for the closed ball of radius r centered at t [Reference Fan10].

Consider an individual infinite branch $t_1 t_2 \cdots$ of $\textsf{T}$. When $\mathbb{E}(X)$ is defined, the strong law of large numbers yields $\lim_{n\to\infty}n^{-1} {\textsf{N}}_n(t) =\mathbb{E}(X)$. It is also well known (see [Reference Fan and Kahane11]) in the theory of the birth process that limn→∞ Nn(t) = +∞ almost surely (a.s.) for every $t\in {\mathcal D} = \{0, 1\}^{\mathbb{N}}$ if and only if

\begin{equation}p_0 = \mathbb{P}(X=0) < \tfrac{1}{2}.\end{equation}

Then, if this condition is satisfied, every point is infinitely covered a.s.
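This can be illustrated by simulation on the dyadic tree (N ≡ 2) with X ∈ {0, 1} and $p_0 = 0.4 < \tfrac12$ (an illustrative choice). Since X ≥ 0, the minimum of $\textsf{N}_n$ over $\textsf{T}_n$ is nondecreasing in n, and under the condition above it drifts to +∞; a sketch:

```python
import random

rng = random.Random(2)
p0 = 0.4                                   # P(X = 0) < 1/2

def min_covering(depth):
    """Track min_{u in T_n} N_n(u) on the dyadic tree (N = 2) for
    n = 1, ..., depth, with X Bernoulli: P(X=0) = p0, P(X=1) = 1 - p0."""
    level = {(): 0}                        # node u -> N_{|u|}(u)
    mins = []
    for _ in range(depth):
        nxt = {}
        for u, s in level.items():
            for i in (1, 2):
                x = 0 if rng.random() < p0 else 1
                nxt[u + (i,)] = s + x
        level = nxt                        # level size doubles each step
        mins.append(min(level.values()))
    return mins

mins = min_covering(12)
```

Because X is nonnegative, the sequence `mins` is nondecreasing deterministically; its almost-sure divergence is the content of the cited result.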

For b ∈ ℝ, we consider the set

$$E_b = \bigg\{ t \in \partial \textsf{T} : \lim_{n \to \infty } \frac{\textsf{N}_n(t)}{n} = b\bigg\}.$$

These level sets can be described geometrically through their Hausdorff dimensions. They have been studied by many authors; see, e.g. [Reference Barral4], [Reference Biggins, Hambly and Jones7], [Reference Falconer8], [Reference Holley and Waymire12], and [Reference Molchan17], and [Reference Attia2] and [Reference Attia and Barral3] for the general case. All these papers also deal with the multifractal analysis of associated Mandelbrot measures (see also [Reference Kahane and Peyriére13], [Reference Liu and Rouault16], and [Reference Peyriére18] for the study of the Mandelbrot measures dimension).

For the sake of simplicity, we will assume that the free energy of X defined as

\begin{equation}\nonumber \tau( q) = \log \mathbb{E} \bigg(\kern1pt\sum_{i=1}^N \rm{e}^{q X_i} \bigg) \end{equation}

is finite over ℝ. Let τ* stand for the Legendre transform of the function τ, where, by convention, the Legendre transform of a mapping f : ℝ → ℝ is defined as the concave and upper semi-continuous function

\begin{equation}\nonumber f^*(b)\,{:\!=}\,\inf_{q\in\mathbb R} (\kern1.5pt f(q)- q b ). \end{equation}

We say that the multifractal formalism holds at b ∈ ℝ if dim Eb = τ*(b). We will assume without loss of generality that X is not constant, so that the function τ is strictly convex.
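Both τ and τ* are easy to evaluate numerically for a concrete choice of (N, X). In the sketch below the laws are illustrative assumptions; the factorization $\mathbb{E}(\sum_{i=1}^N \mathrm{e}^{qX_i}) = \mathbb{E}(N)\,\mathbb{E}(\mathrm{e}^{qX})$ uses the independence of the components. At $b = \mathbb{E}(X)$ the infimum defining τ* is attained at q = 0, giving $\tau^*(b) = \tau(0) = \log \mathbb{E}(N)$:

```python
import math

# Illustrative laws: N uniform on {1, 2, 3}; X with
# P(X=0) = 0.3, P(X=1) = 0.5, P(X=2) = 0.2, independent of N.
EN = 2.0
px = {0: 0.3, 1: 0.5, 2: 0.2}

def tau(q):
    """tau(q) = log E(sum_{i<=N} e^{q X_i}) = log(E(N) * E(e^{qX}))."""
    return math.log(EN * sum(p * math.exp(q * x) for x, p in px.items()))

def tau_star(b):
    """Legendre transform tau*(b) = inf_q (tau(q) - q b), on a coarse grid."""
    qs = [i / 100 for i in range(-1000, 1001)]
    return min(tau(q) - q * b for q in qs)

b = sum(x * p for x, p in px.items())      # E(X) = 0.9
```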

The interior of subset A of ℝ is denoted by int(A). In the following, we define the sets

\begin{gather*} J = \{ q \in \mathbb{R} : \tau (q) - q\tau '(q) > 0\}, \qquad \Omega _\alpha ^1 = \mathrm{int}\bigg\{ q: \mathbb{E}\bigg(\bigg|\sum_{i = 1}^N \mathrm{e}^{q X_i} \bigg|^\alpha\bigg) < \infty \bigg\},\\ \Omega ^1 = \bigcup_{\alpha \in (1,2]} \Omega _\alpha ^1, \qquad {\mathcal J} = J \cap \Omega ^1,\quad\text{and}\quad I = \{ \tau '(q) : q \in {\mathcal J}\}. \end{gather*}

Remark 1. Define the set L = {α ∈ ℝ, τ*(α) ≥ 0}. We can show that L is a convex, compact, and nonempty set (see [Reference Attia1, Proposition 3.1]). If we add the assumption that $J = {\mathcal J}$ (for example, if we suppose that, for all qJ, there exists α ∈ (1, 2] such that $ \mathbb{E} [ |\sum_{i=1}^N \rm{e}^{ q X_i } |^\alpha ] < \infty$), then I = int(L) (see also [Reference Attia1, Proposition 3.1]). In particular, I is an interval.

Next, we define, for b ∈ ℝ and any positive sequence s = {sn} such that sn = o(n), the set

\begin{equation}\nonumber E_{b, s} =\{t\in { \partial \textsf{T}}\colon {\textsf{N}}_n(t) -n b \sim s_n \; \text{as } \; n \to +\infty \}, \end{equation}

where ${\textsf{N}}_n(t) - nb \sim s_n$ means that $({\textsf{N}}_n(t) - nb)_n$ and $(s_n)_n$ are two equivalent sequences. We can obtain the Hausdorff dimension of the set Eb via, for example, the methods used in [Reference Attia2], [Reference Attia and Barral3], [Reference Lyons14], and [Reference Lyons and Pemantle15], but such methods do not give results on dim Eb,s.

Let (ηn)n≥1 be the positive sequence defined by $\eta_n = s_n - s_{n-1}$ for n ≥ 1 and suppose that the following hypothesis holds.

Hypothesis 1. Let sn = o(n) and ηn = o(1). Then there exists (εn) such that

\begin{equation}\nonumber \varepsilon_n\to 0, \qquad \sum_{n\ge 1} \exp\bigg( -\varepsilon \sum_{k=1}^n \varepsilon_k \;\eta_k^2 \bigg) < +\infty \quad\text{for all } \varepsilon > 0. \end{equation}

For example, to satisfy Hypothesis 1, we can choose, for n ≥ 1,

\begin{equation}\nonumber s_n = \sum_{k=1}^n \frac{1}{k^\alpha}\quad\text{and}\quad \varepsilon_n = n^{-\gamma} \end{equation}

such that $\alpha \in (0, \tfrac{1}{2})$ and $1 - 2\alpha - \gamma > 0$.
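For this choice the summability condition can be checked numerically: the inner sums $\sum_{k\le n} \varepsilon_k \eta_k^2$ grow like $n^{1-2\alpha-\gamma}$, so the general term of the series decays faster than any power of n. A sketch with the illustrative values α = 0.2, γ = 0.3, and the outer ε set to 1:

```python
import math

alpha, gamma = 0.2, 0.3      # alpha in (0, 1/2) and 1 - 2*alpha - gamma = 0.3 > 0

def eta(n):                  # eta_n = s_n - s_{n-1} = n^{-alpha}
    return n ** (-alpha)

def eps(n):                  # epsilon_n = n^{-gamma}
    return n ** (-gamma)

def inner_sum(n):
    """S_n = sum_{k<=n} eps_k * eta_k^2, of order n^{1-2*alpha-gamma}."""
    return sum(eps(k) * eta(k) ** 2 for k in range(1, n + 1))

def term(n, outer_eps=1.0):
    """General term exp(-outer_eps * S_n) of the series in Hypothesis 1."""
    return math.exp(-outer_eps * inner_sum(n))
```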

We are able now to state our main result.

Theorem 1. Let s = (sn)n≥1 be a positive sequence. Under Hypothesis 1, we have, a.s., for all bI,

\begin{equation}\nonumber \dim E_{b, s}= \dim E_b = \tau^*(b). \end{equation}

A special case of this theorem was treated in [Reference Fan and Kahane11], where the authors considered the space $\{0, 1\}^{\mathbb{N}}$ and constructed, for each b = τ′(q) ∈ I, a Mandelbrot measure μq. Let us mention that our theorem gives a stronger result in the sense that, a.s., the multifractal formalism holds for all b ∈ I simultaneously. This requires the simultaneous construction of a family of inhomogeneous Mandelbrot measures and the computation of their Hausdorff dimensions.

2. Proof of Theorem 1

Let s be a positive sequence such that sn = o(n) and ηn = o(1).

2.1. Upper bounds for the Hausdorff dimension

Let us define, for q ∈ ℝ, the pressure-like function of q by

\begin{equation}\widetilde \tau (q) = \limsup_{n\to +\infty}\frac{1}{n} \ln \bigg( \sum_{u \in {\textsf{T}}_n} \exp( q {\textsf{N}}_n(u))\bigg).\end{equation}

Proposition 1. With probability 1, for all b ∈ ℝ,

\begin{equation}\nonumber \dim E_{b, s}\le \dim E_{b} \leq \widetilde \tau^*(b) \leq \tau^*(b), \end{equation}

a negative dimension meaning that Eb is empty.

Proof. It is clear, since sn = o(n), that, a.s., for all b ∈ ℝ, we have Eb,sEb. Then, a.s.,

\begin{equation}\dim E_{b, s}\le \dim E_{b}.\end{equation}

In addition, we have

\begin{equation}\nonumber E_b = \bigcap _{\varepsilon >0} \bigcup_{M\in \mathbb{N}^*} \bigcap_{n\geq M} \{t\in \partial{ \textsf{T}}; \,| {\textsf{N}}_n(t) -nb | \leq n\varepsilon \}. \end{equation}

Fix ε > 0. For M ≥ 1, the set $E(M,\,\varepsilon ,\,b) = \bigcap\nolimits_{n \ge M} {\{ t \in \partial {\textsf{T}};|\textsf{N}_{n}(t) - nb| \le n\varepsilon \} } $ is covered by the union of those [u] such that $u \in {{\textsf{T}}_n}$, nM, and ${{\textsf{N}}_n}(u) - nb + n\varepsilon \ge 0$. Thus, for α ≥ 0, nM, and q > 0,

\begin{equation}\nonumber {\mathcal H}_{\rm{e}^{-n}}^\alpha (E( M, \varepsilon, b) ) \leq \sum_{u\in \textsf{T}_n} \exp(-n\alpha) \; \exp (q{\textsf{N}}_n(u)-nqb+n q\varepsilon ). \end{equation}

Consequently, if ζ > 0 and α > τ̃(q) − qb + qε + ζ, by the definition of τ̃(q), for large enough M we have

\begin{equation}\nonumber {\mathcal H}_{\rm{e}^{-n}}^\alpha ( E(M, \varepsilon, b) ) \leq \exp\Big(-\frac{n\zeta}{2}\Big). \end{equation}

This yields ${\mathcal H}^\alpha( E(M, \varepsilon, b))=0$; hence, dim E(M, ε, b) ≤ α. Since this holds for all ζ > 0, we obtain dim E(M, ε, b) ≤ τ̃(q) − qb + qε. It follows that

\begin{equation}\nonumber \dim E_b \leq \inf_{q > 0} \inf_{\varepsilon >0} \sup_{M \in \mathbb{N}^*} \widetilde \tau(q) -qb +q\varepsilon. \end{equation}

Similarly, if we take q < 0, we obtain

\begin{equation}\nonumber \dim E_b\leq \inf_{q < 0} \inf_{\varepsilon >0} \sup_{M \in \mathbb{N}^*} \widetilde \tau(q) - q b - q \varepsilon. \end{equation}

Then we have

\begin{equation}\nonumber \dim E_b \leq \widetilde \tau^*(b). \end{equation}

If τ̃*(b) < 0, we necessarily have Eb = ∅.

It remains to show that, with probability 1,

\begin{equation}\nonumber \widetilde \tau^*(b) \leq \tau^*(b)\quad\text{for all } b \in \mathbb R. \end{equation}

The functions τ̃ and τ are convex and thus continuous. We need only prove that the inequality τ̃(q) ≤ τ(q) holds for each q ∈ ℝ almost surely. Fix q ∈ ℝ. For α > τ(q), we have

\begin{align*} \mathbb{E} \bigg(\sum_{n\geq1}\exp(-n\alpha) \sum_{u\in {\textsf{T}}_n} \exp (q {\textsf{N}}_{n}(u)) \bigg)&= \sum_{n\geq1} \exp(-n\alpha) \mathbb{E} \bigg(\sum_{i=1}^{N} \exp ( q X_i )\bigg)^n \\ &= \sum_{n\geq1}\exp( n(\tau(q)-\alpha)). \end{align*}

Consequently,

\begin{equation}\nonumber \sum_{n\geq1} \exp(-n \alpha) \sum_{u\in {\textsf{T}}_n} \exp ( q {\textsf{N}}_{n}(u)) <\infty, \quad \text{a.s.,} \end{equation}

so that we have

\begin{equation}\nonumber \sum_{u\in {\textsf{T}}_n} \exp (q {\textsf{N}}_{n}(u))=\text{O}(\exp(n\alpha)) \quad\text{and}\quad \widetilde \tau(q) \le \alpha. \end{equation}

Since α > τ(q) is arbitrary, this completes the proof.

2.2. Lower bounds for the Hausdorff dimension

2.2.1. Construction of inhomogeneous Mandelbrot measures

We define, for $(q,p) \in {\mathcal J} \times [1, \infty)$,

\begin{equation}\nonumber \varphi(p,q) = \exp( {\tau}(pq)-p{\tau}(q)). \end{equation}

We have the following result.

Lemma 1. For all nontrivial compact sets $K \subset {\mathcal J}$, there exists a real number 1 < pK < 2 such that, for all 1 < p ≤ pK, we have

\begin{equation}\nonumber \sup_{q \in K} \varphi (p,q) < 1. \end{equation}

Proof. Let $q \in {\mathcal J}$. We have ∂φ(1+, q)/∂p < 0. Therefore, there exists pq > 1 such that φ(pq, q) < 1. In a neighborhood Vq of q, we have

\begin{equation}\nonumber \varphi (p_q, q') < 1\quad \text{for all } q'\in V_q. \end{equation}

If K is a nontrivial compact subset of ${\mathcal J}$, it is covered by a finite number of such neighborhoods $V_{q_i}$.

Let pK = infi pqi. If 1 < ppK and supqK φ(p, q) ≥ 1, there exists qK such that

\begin{equation}\nonumber \varphi (p,q)\ge 1\quad\text{and}\quad q\in V_{q_i}\quad\text{for some } i. \end{equation}

Let us recall that the mapping pφ(p, q) is log convex and that φ(1, q) = 1. Since 1 < ppqi, we have φ(p, q) < 1, which is a contradiction.
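The mechanism behind Lemma 1 is easy to observe numerically: φ(1, q) = 1 always, and for q with τ(q) − qτ′(q) > 0 the map p ↦ φ(p, q) has negative derivative at p = 1⁺, so it dips below 1. A sketch with an illustrative law for (N, X):

```python
import math

# Illustrative laws: N uniform on {1, 2, 3}; X with
# P(X=0) = 0.3, P(X=1) = 0.5, P(X=2) = 0.2, independent of N.
EN = 2.0
px = {0: 0.3, 1: 0.5, 2: 0.2}

def tau(q):
    return math.log(EN * sum(p * math.exp(q * x) for x, p in px.items()))

def phi(p, q):
    """phi(p, q) = exp(tau(p q) - p tau(q)); log-convex in p, phi(1, q) = 1."""
    return math.exp(tau(p * q) - p * tau(q))

q = 0.5        # here tau(q) - q tau'(q) > 0, so q lies in J
vals = [phi(1 + k / 100, q) for k in range(11)]   # p = 1.00, 1.01, ..., 1.10
```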

Lemma 2. For all compact sets $K\subset {\mathcal J}$, there exists $\widetilde p_K > 1$ such that

\begin{equation}\nonumber \sup_{q\in K}\mathbb{E} \bigg( \bigg(\sum_{i=1}^N \mathrm{e}^{q X_i} \bigg)^{\widetilde p_K} \bigg) < \infty. \end{equation}

Proof. Since K is compact and the family of open sets $J\cap \Omega^1_\gamma$ increases to $\mathcal J$ as γ decreases to 1, there exists γ ∈ (1, 2] such that $K\subset \Omega_\gamma^1$. Take $\widetilde p_K = \gamma$. The conclusion follows from the fact that the function $q\mapsto \mathbb{E} ( (\sum_{i=1}^N\mathrm{e}^{ q X_i } )^{\widetilde p_K} )$ is convex over $ \Omega_{\widetilde p_K}^1$ and thus continuous.

Now, we will construct the inhomogeneous Mandelbrot measure. For $q \in {\mathcal J}$ and k ≥ 1, we define ψk(q) as the unique t such that

\begin{equation}\nonumber \tau'(t) = \tau'(q) + \eta_k. \end{equation}
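Since τ is strictly convex, τ′ is strictly increasing, so ψk(q) is well defined and can be computed by a one-dimensional root search. A sketch (the law of X is an illustrative assumption; the factor $\mathbb{E}(N)$ cancels when differentiating τ):

```python
import math

px = {0: 0.3, 1: 0.5, 2: 0.2}    # illustrative law of X

def tau_prime(t):
    """tau'(t) = E(X e^{tX}) / E(e^{tX}); strictly increasing since tau
    is strictly convex (the factor E(N) drops out on differentiation)."""
    num = sum(x * p * math.exp(t * x) for x, p in px.items())
    den = sum(p * math.exp(t * x) for x, p in px.items())
    return num / den

def psi(q, eta_k, lo=-50.0, hi=50.0):
    """Solve tau'(t) = tau'(q) + eta_k for t by bisection."""
    target = tau_prime(q) + eta_k
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if tau_prime(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```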

For $u\in \bigcup_{n\ge 0} \mathbb{N}^n_+$ and $q \in {\mathcal J}$, we define, for 1 ≤ i ≤ Nu,

\begin{equation}\nonumber V(ui, q) =\frac{\exp (q X_{ui})}{\mathbb{E} (\sum_{i=1}^{N} \exp (q X_i ))}=\exp ( q X_{ui}- \tau (q)), \end{equation}

and, for all n ≥ 0,

\begin{equation}\nonumber Y_n^s(q, u)= \sum_{v_1\cdots v_n \in \textsf{T}_n(u)} \prod_{k=1}^n V(u\cdot v_1\cdots v_k, \psi_{|u|+k}(q)). \end{equation}

When u = ∅, this quantity will be denoted by $Y_n^s( q )$ and, when n = 0, its value equals 1.

The sequence $ ( Y_n^s(q, u) )_{n\geq 1}$ is a positive martingale with expectation 1, which converges a.s. and in the L 1-norm to a positive random variable Ys(q, u) (see [Reference Kahane and Peyriére13], [Reference Biggins5], or [Reference Biggins6, Theorem 1]). However, our study will need the almost-sure simultaneous convergence of these martingales to positive limits.
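The normalization $\mathbb{E}(Y_n^s(q)) = 1$ can be checked by Monte Carlo simulation. The sketch below treats, for simplicity, the homogeneous case ψk ≡ q (that is, ηk ≡ 0), with illustrative laws for N and X:

```python
import math, random

rng = random.Random(3)
xs, pxs = [0, 1, 2], [0.3, 0.5, 0.2]      # illustrative law of X
EN = 2.0                                   # N uniform on {1, 2, 3}

def tau(q):
    return math.log(EN * sum(p * math.exp(q * x) for x, p in zip(xs, pxs)))

def Y_n(q, n):
    """One realisation of Y_n(q) = sum_{v in T_n} prod_k exp(q X_{v|k} - tau(q)),
    built level by level (homogeneous case psi_k = q)."""
    weights = [1.0]                        # products along the current level
    for _ in range(n):
        nxt = []
        for w in weights:
            for _ in range(rng.randint(1, 3)):          # the N_u children
                x = rng.choices(xs, weights=pxs)[0]     # the mark X_{ui}
                nxt.append(w * math.exp(q * x - tau(q)))
        weights = nxt
    return sum(weights)

# E(Y_n(q)) = 1 for every n; a Monte Carlo average should be close to 1.
est = sum(Y_n(0.5, 4) for _ in range(2000)) / 2000
```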

Proposition 2. (i) Let K be a compact subset of ${\mathcal J}$. There exists pK ∈ (1, 2] such that, for all $u \in \bigcup_{n\ge 0}\mathbb{N}^n_+$, the continuous functions $q\in K\mapsto Y_n^s(q, u)$ converge uniformly, a.s. and in the LpK -norm, to a limit qKYs(q, u). In particular, $\mathbb{E}(\sup_{q\in K } Y^s(q, u)^{p_K}) < \infty$. Moreover, Ys(·, u) is positive a.s.

In addition, for all n ≥ 0, $\sigma (\{ (X_{u1}, \ldots, X_{uN_u}) : u \in {\textsf{T}}_n\} )$ and $\sigma (\{ Y^s( \cdot, u) : u \in {\textsf{T}}_{n + 1}\} )$ are independent, and the random functions Ys(·, u), $u \in {{\textsf{T}}_{n + 1}}$, are independent copies of Ys(·): = Ys(·, ∅).

(ii) With probability 1, for all $q \in {\mathcal J}$, the weights

\begin{equation}\nonumber \mu_q^s ([u])=\bigg[ \,\prod_{k=1}^n\exp \big( \psi_k(q) X_{u_1\ldots u_k} - \tau(\psi_k(q)) \big) \bigg] Y^s( q, u) \end{equation}

define a measure on ∂ $\textsf{T}$.

The measure $\mu_q^s $ will be used to approximate from below the Hausdorff dimension of the set Eb,s.

The proof of Proposition 2 needs the following result.

Lemma 3. For $q\in {\mathcal J}$, $u \in {\textsf{T}}$, and p ∈ (1, 2), there exists a constant Cp depending only on p such that, for n ≥ 1,

\begin{equation}\nonumber \mathbb{E} (|Y_n^s (q) - Y_{n-1}^s(q)|^p) \le C_p \mathbb{E} \bigg(\bigg| \sum_{i =1}^{N}V(i, \psi_n(q))\bigg|^{p} \bigg) \prod_{k=1}^{n-1}\mathbb{E}\bigg(\sum_{i=1}^N | V(i, \psi_k(q)) |^p\bigg). \end{equation}

Proof. The definition of the process Yn immediately gives

$$Y_n^s(q) - Y_{n - 1}^s(q) = \sum_{u \in \textsf{T}_{n - 1}} \prod_{k = 1}^{n - 1} V (u_{|k},\psi _k(q))\bigg(\sum_{i = 1}^{N_u} V (ui,\psi _n(q)) - 1\bigg).$$

For each n ≥ 1, let ${\mathcal F}_n=\sigma \{(N_u, V_{u1},\ldots) \colon |u| \leq n-1 \}$ and let ${\mathcal F}_{0}$ be the trivial sigma-field. For $u \in {{\textsf{T}}_{n - {\rm{1}}}}$, we set $B_u(q) = \sum_{i=1}^{N_u} V(ui, \psi_n(q))$. By construction, the random variables (Bu(q) − 1), u ∈ Tn−1, are centered, independent, identically distributed (i.i.d.), and independent of ${\mathcal F}_{n-1}$. Consequently, conditionally on ${\mathcal F}_{n-1}$, we can apply Lemma 6 in Appendix B to the family $\{ (B_u(q)-1)\prod_{k=1}^{n-1} V(u_{|k}, \psi_k(q))\}$. Noting that the Bu(q), $u \in {{\textsf{T}}_{n - {\rm{1}}}}$, have the same distribution yields

\begin{align*} \mathbb{E} (|Y_n^s(q) - Y_{n-1}^s(q)|^p) &= \mathbb{E} \big(\mathbb{E} (|Y_n^s(q) - Y_{n-1}^s(q)|^{p} \mid {\mathcal F}_{n-1} )\big) \\ & \leq 2^{p-1} \mathbb{E} ( | B(q) - 1 | ^p ) \mathbb{E} \bigg( \sum_{u\in \textsf{T}_{n-1}} \prod_{k=1}^{n-1} |V(u_{|k}, \psi_k(q))|^{p} \bigg), \end{align*}

where B(q) stands for any of the identically distributed variables Bu(q).

Using the branching property and the independence of the random vectors (Nu, Xu 1, …) used in the constructions yields

\begin{align*} &\mathbb{E}\bigg(\sum_{u\in \textsf{T}_{n-1}} \prod_{k=1}^{n-1} |V(u_{|k}, \psi_k(q))|^p\bigg) \\ &\qquad=\mathbb{E}\bigg[ \mathbb{E}\bigg(\sum_{u\in \textsf{T}_{n-2}} \prod_{k=1}^{n-2} |V(u_{|k}, \psi_k(q))|^p \bigg(\sum_{i=1}^{N_u} |V(ui, \psi_{n-1}(q))|^p \bigg) \biggm| {\mathcal F}_{n-2} \bigg) \bigg] \\ &\qquad=\mathbb{E}\bigg(\sum_{i=1}^N | V(i, \psi_{n-1}(q)) |^p\bigg) \mathbb{E}\bigg(\sum_{u\in \textsf{T}_{n-2}} \prod_{k=1}^{n-2} |V(u_{|k}, \psi_k(q))|^p\bigg). \end{align*}

Then a recursion using the branching property and the independence of the random vectors (Nu, Xu 1, …) yields

\begin{equation}\nonumber \mathbb{E}\bigg(\sum_{u\in \textsf{T}_{n-1}} \prod_{k=1}^{n-1} |V(u_{|k}, \psi_k(q))|^p\bigg) = \prod_{k=1}^{n-1}\mathbb{E}\bigg(\sum_{i=1}^N | V(i, \psi_k(q))|^p\bigg). \end{equation}

Using the inequality

\begin{equation} |x+y|^r \leq 2^{r-1}(|x|^r + |y|^r), \qquad r > 1, \end{equation}

we obtain

\begin{equation}\nonumber \mathbb{E}\bigg(\bigg|\sum_{i=1}^{N_u} V(ui, \psi_n(q)) - 1 \bigg|^p \bigg) \leq 2^{p - 1} \mathbb{E} \bigg( \bigg| \sum_{i=1}^{N_u}V(ui, \psi_n(q))\bigg|^{p} + 1 \bigg). \end{equation}

Since

\begin{equation}\nonumber 1= \bigg(\mathbb{E} \bigg(\sum_{i=1}^{N_u}V(ui, \psi_n(q)) \bigg) \bigg)^{p} \le \mathbb{E} \bigg|\sum_{i =1}^{N_u}V(ui, \psi_n(q)) \bigg|^{p}, \end{equation}

it follows that

\begin{equation}\nonumber \mathbb{E}\bigg(\bigg|\sum_{i=1}^{N_u} V(ui, \psi_n(q)) - 1 \bigg|^p \bigg) \leq 2^{p } \mathbb{E} \bigg( \bigg| \sum_{i =1}^{N_u}V(ui, \psi_n(q))\bigg|^{p} \bigg) = 2^{p } \mathbb{E} \bigg( \bigg|\sum_{i =1}^{N}V(i, \psi_n(q))\bigg|^{p} \bigg). \end{equation}

Finally, we have

\begin{equation}\nonumber \mathbb{E} (|Y_n^s(q) - Y_{n-1}^s(q)|^p) \le 2^p \mathbb{E} \bigg( \bigg| \sum_{i =1}^{N}V(i, \psi_n(q))\bigg|^{p} \bigg) \prod_{k=1}^{n-1}\mathbb{E}\bigg(\sum_{i=1}^N | V(i, \psi_k(q)) |^p\bigg). \end{equation}

Proof of Proposition 2(i). Recall that the uniform convergence result uses an argument developed in [Reference Biggins6]. Fix a compact $K \subset {\mathcal J}$. Since ηk = o(1), we can fix, without loss of generality, a compact neighborhood $K' \subset {\mathcal J}$ of K and suppose that

\begin{equation}\nonumber \psi_k(q) \in K' \quad\text{for all } q\in K \text{ and all } k\ge 1. \end{equation}

Fix a compact neighborhood K″ of K′. By Lemma 2, we can find $\widetilde p_{K''} > 1$ such that

\begin{equation}\nonumber \sup_{q\in K''}\mathbb{E} \bigg( \biggl( \sum_{i=1}^N \rm{e}^{q X_i } \biggr)^{\tilde p_{K''}} \bigg) < \infty. \end{equation}

By Lemma 1, we can fix $1 < p_K \le \min (2, \widetilde p_{K''})$ such that $\sup_{q\in K''} \phi(p_K, q) < 1$. Then, for each q ∈ K″, there exists a neighborhood $V_q \subset \mathbb{C}$ of q whose projection to ℝ is contained in K″ and such that, for all $u \in {\textsf{T}}$ and z ∈ Vq, the random variables

\begin{equation}\nonumber V(u, z)=\frac{\exp( z X_u )}{\mathbb{E} (\!\sum_{i=1}^N \exp( z X_i) )} \quad\text{and}\quad \Gamma (z) = \frac{\mathbb{E}(\!\sum_{i=1}^N X_i \exp( z X_i ))}{\mathbb{E}(\! \sum_{i=1}^N \exp( z X_i ))} \end{equation}

are well defined. For zVq and k ≥ 1, we define ψk(z) as the unique t such that

\begin{equation}\nonumber \Gamma(t) = \Gamma(z) + \eta_k. \end{equation}

Moreover, we have

\begin{equation}\nonumber \sup_{z\in V_{q}} \phi (p_{K}, z) < 1, \quad\text{where}\quad \phi (p_{K}, z)= \frac{\mathbb{E} (\!\sum_{i=1}^N |\rm{e}^{ z X_i }|^{p_{K}} )}{ |\mathbb{E} (\!\sum_{i=1}^N \rm{e}^{ z X_i } ) |^{p_{K}}}. \end{equation}

By extracting a finite covering of K′ from $\bigcup_{q\in K''} V_q$, we find a neighborhood $V\subset \mathbb {C}$ of K′ such that

$$\sup_{z \in V} \phi (p_K,z) < 1 \quad\text{and}\quad \psi_k(z) \text{ is defined for all } z \in V.$$

Since the projection of V to ℝ is included in K″ and the mapping $z\mapsto \mathbb{E} (\sum_{i=1}^N \mathrm{e}^{ z X_i } )$ is continuous and does not vanish on V, by considering a smaller neighborhood of K′ included in V if necessary, we can assume that

\begin{equation}\nonumber C_V=\sup_{z\in V} \mathbb{E}\bigg(\bigg| \sum_{i=1}^N \rm{e}^{ z X_i } \bigg|^{p_{K}}\bigg) \bigg|\mathbb{E} \bigg(\sum_{i=1}^N \rm{e}^{ z X_i } \bigg)\bigg|^{-p_{K}} <\infty. \end{equation}

Now, for $u \in {\textsf{T}}$, we define the analytic extension to V of $Y_n^s(q, u)$ given by

\begin{align*} Y_n^s(z, u)&=\sum_{v\in \textsf{T}_n(u)} \prod_{k=1}^n V( u\cdot v_1\cdots v_k, \psi_{|u|+k} (z)) \\ &= \bigg[\,\prod_{k=1}^n \mathbb{E} \bigg(\sum_{i=1}^N \mathrm{e}^{ \psi_{|u|+k}( z) X_i} \bigg)\bigg]^{-1}\sum_{v\in \textsf{T}_n(u)}\prod_{k=1}^{n} \mathrm{e}^{ \psi_{|u|+k}(z)X_{u\cdot v_{|k}} }. \end{align*}

We also denote $Y_n^s(z, \emptyset )$ by $Y_n^s (z)$. The same arguments as in the proof of Lemma 3 show that

\begin{equation}\nonumber \mathbb{E}(| Y_n^s(z)-Y_{n-1}^s(z) |^{p_{K}} ) \leq C_{p_K} \mathbb{E} \bigg( \bigg| \sum_{i =1}^{N}V(i, \psi_n(z))\bigg|^{p_K} \bigg) \prod_{k=1}^{n-1}\mathbb{E}\bigg(\sum_{i=1}^N | V(i, \psi_k(z)) |^{p_K}\bigg). \end{equation}

Note that $\mathbb{E}( \sum_{i=1}^{N}| V(i, \psi_k(z)) |^{p_K} ) = \phi (p_K, \psi_k(z))$. Then

\begin{align*} \mathbb{E} (| Y_n^s(z)-Y_{n-1}^s(z) |^{p_{K}}) &\le C_{p_{K}} \mathbb{E} \bigg( \bigg| \sum_{i =1}^{N}V(i, \psi_n(z))\bigg|^{p_K} \bigg) \prod_{k=1}^{n-1} \phi (p_{K},\psi_k(z)) \\ &\le C_{p_{K}} C_V \prod_{k=1}^{n-1} \sup_{z\in V} \phi (p_{K}, z ), \end{align*}

where we have used the fact that ψk(z) ∈ V for all k ≥ 1.

With probability 1, the functions $z \in V \mapsto Y_n^s(z)$, n ≥ 0, are analytic. Fix a closed polydisc D(z 0, 2ρ) ⊂ V. Theorem 2 gives

\begin{equation}\nonumber \sup_{z\in D(z_{0},\rho)} |Y_n^s(z)-Y_{n-1}^s (z)| \leq 2 \int_{[0,1]} |Y_n^s(\zeta(t))-Y_{n-1}^s(\zeta(t))| \rm{d} t, \end{equation}

where, for t ∈ [0, 1], ζ(t) = z 0 + 2ρei2πt.

Furthermore, Jensen’s inequality and Fubini’s theorem give

\begin{align*} &\mathbb{E} \Bigl(\sup_{z\in D(z_0,\rho)} |Y_n^s(z)-Y_{n-1}^s (z)| ^{p_{K}} \Bigr) \\ &\qquad\leq \mathbb{E} \bigg( \biggl(2 \int_{[0,1]} |Y_n^s(\zeta(t))-Y_{n-1}^s(\zeta(t))| \rm{d} t\biggr)^{p_{K}} \biggr) \\ &\qquad\leq 2^{ p_{K}} \mathbb{E} \biggl(\int_{[0,1]} |Y_n^s(\zeta(t))-Y_{n-1}^s(\zeta(t))|^{p_{K}} \rm{d} t \biggr) \\ &\qquad\leq 2^{p_{K}} \int_{[0,1]} \mathbb{E} |Y_n^s(\zeta(t))-Y_{n-1}^s(\zeta(t))|^{p_{K}} \rm{d} t \\ &\qquad\leq 2^{p_{K}} C_V C_{p_{K}} \prod_{k=1}^{n-1} \sup_{z\in V} \phi (p_{K}, z). \end{align*}

Since supzV ϕ(pK, z) < 1, it follows that

\begin{equation}\nonumber \sum_{n\geq 1}\Big\|\sup_{z\in D(z_0,\rho)} |Y_n^s(z)-Y_{n-1}^s (z)| \Big\|_{p_K} <\infty. \end{equation}

This implies that the functions $z\mapsto Y_n^s(z)$ converge uniformly, a.s. and in the $L^{p_K}$-norm, over the compact disc $D(z_0, \rho)$ to a limit $z\mapsto Y^s(z)$. This also implies that

\begin{equation}\nonumber \Big\| \sup_{z\in D(z_{0},\rho)} Y^s(z) \Big\| _{p_{K}} < \infty. \end{equation}

Since K can be covered by finitely many such discs D(z 0, ρ), we get the uniform convergence, a.s. and in the LpK-norm, of the sequence $(q\in K \mapsto Y_n^s (q))_{n \ge 1}$ to $q \in K \mapsto Y^s(q)$. Moreover, since ${\mathcal J}$ can be covered by a countable union of such compact sets K, we get the simultaneous convergence for all $q\in {\mathcal J}$. The same holds simultaneously for all the functions $q\in {\mathcal J} \mapsto Y_n^s (q, u)$, $u \in \bigcup_{n \ge 0} \mathbb{N}_+ ^n$, because $\bigcup_{n\ge 0}\mathbb{N}^n_+$ is countable.

To complete the proof of Proposition 2(i), we must show that, with probability 1, q ∈ K ↦ Ys(q) does not vanish. Without loss of generality, we can suppose that K = [0, 1]. If I is a dyadic closed subcube of [0, 1], we denote by EI the event {there exists q ∈ I : Ys(q) = 0}. Let I 0 and I 1 stand for the two dyadic intervals of I in the next generation. The event EI is a tail event, so it has probability 0 or 1. If ℙ(EI) = 1 then there exists j ∈ {0, 1} such that ℙ(EIj) = 1. Suppose now that ℙ(EK) = 1. The previous remark allows us to construct a decreasing sequence (I(n))n≥0 of dyadic subcubes of K such that ℙ(EI (n)) = 1. Let q 0 be the unique element of ∩ I(n). Since q ↦ Ys(q) is continuous, we have ℙ(Ys(q 0) = 0) = 1, which contradicts the fact that $(Y_n^s(q_0))_{n \ge 1}$ converges to Ys(q 0) in L 1.

2.2.2. Proof of Theorem 1. The proof of Theorem 1 can be deduced from the following two propositions, whose proofs are developed in the next subsections.

Proposition 3. Suppose that Hypothesis 1 holds. Then, with probability 1, for all $q\in {\mathcal J}$,

\begin{equation}\nonumber {\textsf{N}}_n(t) - n b \sim s_n \quad \text{for } \mu_q^s\text{-almost every } t\in \partial \textsf{T}, \end{equation}

where b = τ′(q).

Proposition 4. With probability 1, for all $q\in {\mathcal J}$ and $\mu_q^s$-almost every $t \in \partial {\textsf{T}}$,

\begin{equation}\nonumber \lim_{n\rightarrow\infty}\frac{\log Y^s(q, t_{|n})}{n} =0. \end{equation}

From Proposition 3 it follows that, with probability 1, for all $q \in {\mathcal J}$, we have $\mu_q^s (E_{b, s} )=1$ with b = τ′(q); in particular, ${\lim _{n \to + \infty }}{{\textsf{N}}_n}(t)/n = b$ for $\mu_q^s$-almost every t. In addition, with probability 1, for all $q \in {\mathcal J} $ and $\mu_q^s$-almost every t ∈ Eb,s, from Propositions 3 and 4, we have

\begin{align*} \lim_{n \rightarrow \infty} \frac{\log( \mu_q^s[t_{|n}])}{{\log (\text{diam}}([t_{|n}]))}& = \lim_{n \rightarrow \infty}- \frac{1}{n}\log\bigg(\prod_{k=1}^n \exp (\psi_k(q ) {X}_{t_1\ldots t_k} -\tau(\psi_k(q)) ) Y^s(q, t_{|n}) \bigg) \\ & = \lim_{n \rightarrow \infty} - \frac{1}{n} \sum_{k=1}^n \psi_k(q) {X}_{t_1\ldots t_k} + \frac{1}{n} \sum_{k=1}^n \tau(\psi_k(q)) - \frac{\log Y^s(q, t_{|n}) }{n} \\ & = \lim_{n \rightarrow \infty} - \frac{1}{n} \sum_{k=1}^n \psi_k(q) {X}_{t_1\ldots t_k} + \frac{1}{n} \sum_{k=1}^n \tau(\psi_k(q)). \end{align*}

Since ηk = o(1), we have ψk(q) → q as k → ∞, and we obtain

\begin{equation}\nonumber \lim_{n \rightarrow \infty} \frac{\log( \mu_q^s[t_{|n}])}{{\log (\text{diam}}([t_{|n}]))} = - q \tau'(q) + \tau(q) = \tau^*( \tau'(q)). \end{equation}

We deduce the result from the mass distribution principle (Theorem 3) and Proposition 1.

2.3. Proof of Proposition 3

Let K be a compact subset of ${\mathcal J}$. For b = τ′(q), $q\in {\mathcal J}$, n ≥ 1, ε > 0, and s = (sn)n≥1, we set

\begin{align*} E_{b,n,s,\varepsilon }^1 &= \bigg\{ t \in \partial \textsf{T} : \sum_{k = 1}^n (X_{t_1 \cdots t_k} - b - \eta _k) \ge \varepsilon \sum_{k = 1}^n \eta _k \bigg\},\\ E_{b,n,s,\varepsilon }^{ - 1} &= \bigg\{ t \in \partial \textsf{T} : \sum_{k = 1}^n (X_{t_1 \cdots t_k} - b - \eta _k) \le - \varepsilon \sum_{k = 1}^n \eta _k \bigg\}. \end{align*}

Suppose that we have shown that, for λ ∈ {−1, 1}, we have

(2.1)\begin{equation} \label{eq2} \mathbb{E}\biggl(\sup_{q\in K} \sum_{n\geq 1} \mu_{q}^s(E^\lambda_{b, n, s, \varepsilon}) \biggr)< \infty. \end{equation}

Then, with probability 1, for all $q \in {\mathcal J} $, λ ∈ {−1, 1}, and $\varepsilon\in\mathbb Q^*_+$,

\begin{equation}\nonumber \sum_{n\geq 1} \mu_q^s (E_{b,n, s, \varepsilon}^{\lambda})<\infty. \end{equation}

Consequently, by the Borel–Cantelli lemma, for $\mu_q^s$-almost every t, we have

\begin{equation}\nonumber \sum_{k=1}^n (X_{t_1\cdots t_k} - b - \eta_k) = o\biggl( \sum_{k=1}^n \eta_k \biggr), \end{equation}

so ${\textsf{N}_n}(t) - nb \sim {s_n}$, which yields the desired result.

Let us prove (2.1) when λ = 1 (the case λ = −1 is similar). Let θ = (θn) be a positive sequence and q ∈ K. Then

\begin{equation}\nonumber \sup_{q\in K}\mu_{q}^s (E_{b, n, s, \varepsilon}^{1} ) \leq \sup_{q\in K} \sum_{u\in \textsf{T}_n} \mu_{q}^s ([u]) {\mathbf 1}_{\{ E_{b, n, s, \varepsilon}^{1}\}}(t_u), \end{equation}

where tu is any point in [u]. Denote tu simply by t. Then

\begin{align*} &\sup_{q\in K}\mu_{q}^s (E_{b, n, s, \varepsilon}^{1} ) \\ &\qquad\leq \sup_{q\in K} \sum_{u\in \textsf{T}_n} \mu_{q}^s [u] \prod_{k=1}^n \exp ( \theta_k X_{t_1\cdots t_k} - \theta_k b - \theta_k \eta_k (1+\varepsilon) ) \\ &\qquad\leq \sup_{q\in K} \sum_{u\in \textsf{T}_n}\prod_{k=1}^n \exp ( (\psi_k(q) + \theta_k) X_{t_1\cdots t_k} - \tau(\psi_k(q)) - \theta_k b - \theta_k \eta_k (1+\varepsilon) ) Y^s(q, u). \end{align*}

For q ∈ K, θ = (θn), and n ≥ 1, set

\begin{equation}\nonumber H_n^s(q, \theta)= \sum_{u\in \textsf{T}_n}\prod_{k=1}^n \exp ( (\psi_k(q) + \theta_k) X_{u_{|k}} - \tau(\psi_k(q)) - \theta_k b - \theta_k \eta_k (1+\varepsilon) ) M^s(u), \end{equation}

where

\begin{equation}\nonumber M^s(u) = \sup_{q\in K} Y^s(q, u). \end{equation}

Recall from the proof of Proposition 2 that there exists a neighborhood VK ⊂ ℂ of K such that

\begin{equation}\nonumber \Gamma (z) = \frac{\mathbb{E}( \sum_{i=1}^N X_i \exp( z X_i ))}{\mathbb{E}( \sum_{i=1}^N \exp( z X_i ))} \quad\text{and}\quad \psi_k(z)\text{ for } k\ge 1 \end{equation}

are well defined for zVK.

For ε > 0, zVK, and n ≥ 1, we define

\begin{equation}\nonumber H_n^s(z,\theta )= \sum_{u \in \textsf{T}_n} \bigg[\prod_{k = 1}^n \frac{\exp \big((\psi _k(z) + \theta _k)X_{u_{|k}} - \theta _k\Gamma (z) - \theta _k\eta _k(1 + \varepsilon )\big)}{\mathbb{E}\big(\sum_{i = 1}^N \exp (\psi _k(z)X_i)\big)}\bigg] M^s(u). \end{equation}

Proposition 5. There exists a neighborhood V ⊂ VK of K, a positive constant ${\mathcal C}_K$, and a positive sequence θ such that, for all z ∈ V and all n ∈ ℕ*,

\begin{equation}\nonumber \mathbb{E}(|H_n^s(z, \theta)|) \leq {\mathcal C}_K \exp\biggl(-\frac{\varepsilon}{4}\sum_{k=1}^n \varepsilon_k\eta_k^2 \biggr), \end{equation}

where the sequence (εn)n is the sequence used in Hypothesis 1.

Lemma 4. There exists a positive sequence θ = (θn) and a positive constant ${\mathcal C}_K$ such that, for all q ∈ K, we have

\begin{equation}\nonumber \mathbb{E}\big( H_n^s(q,\theta) \big) \le {\mathcal C}_K \exp\biggl(-\frac{\varepsilon}{2}\sum_{k=1}^n \varepsilon_k \eta_k^2 \biggr). \end{equation}

Proof. Let θ = (θn) be a positive sequence. Clearly we have

\begin{align*} &\mathbb{E}( H_n^s(q,\theta) ) \\ &\qquad= \prod_{k=1}^n \mathbb{E}\bigg(\sum_{i=1}^N \exp \big( (\psi_k(q) + \theta_k) X_i \big)\exp\big( {-} \tau(\psi_k(q)) - \theta_k b - \theta_k \eta_k (1+\varepsilon) \big)\bigg) \mathbb{E}(M^s(u)) \\ &\qquad\le {\mathcal C'}_K \prod_{k=1}^n \exp\big( \tau(\psi_k(q) +\theta_k) - \tau(\psi_k(q)) - \theta_k b - \theta_k \eta_k (1+\varepsilon) \big), \end{align*}

where, by Proposition 2, ${\mathcal C'}_K = \mathbb{E}(M^s(u)) = \mathbb{E}(M^s(\emptyset)) < \infty$ for all $u \in \bigcup_{n\ge 0} \mathbb{N}^n_+$.

Since ηk = o(1), we can fix a compact neighborhood K′ of K and suppose that, for all k ≥ 1 and all q ∈ K, we have ψk(q) ∈ K′. For q ∈ K and k ≥ 1, writing the Taylor expansion of the function g: θ ↦ τ(ψk(q) + θ) at 0 up to the second order, we obtain

\begin{equation}\nonumber g(\theta) = g(0) + \theta g'(0) + \theta^2 \int_{0}^1 (1-t) g''(t\theta) \rm{d} t, \end{equation}

with g″(tθ) ≤ mK = supt∈[0,1] supq∈K g″(tθ). It follows that, for all k ≥ 1,

\begin{equation}\nonumber \tau( \psi_k(q) + \theta_k ) - \tau(\psi_k(q)) - \theta_k \tau'(\psi_k(q)) \le \theta_k^2 m_K. \end{equation}

Recall that τ′(ψk(q)) = τ′(q) + ηk. Then

\begin{align*} \mathbb{E}\big( H_n^s(q,\theta) \big) &\le {\mathcal C'}_K \prod_{k=1}^n \exp( \tau(\psi_k(q) +\theta_k) - \tau(\psi_k(q)) - \theta_k b - \theta_k \eta_k (1+\varepsilon) ) \\ &\le {\mathcal C'}_K \prod_{k=1}^n \exp\big( {-} \theta_k \eta_k \varepsilon + \theta^2_k m_K\big). \end{align*}
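In more detail, the second inequality above combines the Taylor estimate with the identity τ′(ψk(q)) = τ′(q) + ηk; taking b = τ′(q), as the bound that follows requires, the exponent is controlled term by term:

\begin{equation}\nonumber \tau(\psi_k(q) +\theta_k) - \tau(\psi_k(q)) - \theta_k b - \theta_k \eta_k (1+\varepsilon) \le \theta_k (b + \eta_k) + \theta_k^2 m_K - \theta_k b - \theta_k \eta_k (1+\varepsilon) = {-} \theta_k \eta_k \varepsilon + \theta_k^2 m_K. \end{equation}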

Choose the sequence θ such that θk = εkηk. Then

\begin{equation}\nonumber \mathbb{E}( H_n^s(q,\theta) ) \le {\mathcal C'}_K \prod_{k=1}^n \exp( {-}\varepsilon_k \eta_k^2 ( \varepsilon - \varepsilon_k m_K )). \end{equation}

Since εk → 0 then, for large enough k, we have εεkmK > ε/2. Then there exists a constant ${\mathcal C}_K$ such that

\begin{equation}\nonumber \mathbb{E}( H_n^s(q,\theta) ) \le {\mathcal C}_K \exp\bigg({-}\frac{\varepsilon}{2}\sum_{k=1}^n \varepsilon_k \eta_k^2\bigg). \end{equation}

Proof of Proposition 5. By Lemma 4, $ \mathbb{E}(| H_n^s(q, \theta) |) \le {\mathcal C}_K \exp(-({\varepsilon}/{2})\sum_{k=1}^n\varepsilon_k \eta_k^2) $ for q ∈ K, so there exists a neighborhood Vq ⊂ VK of q such that, for all z ∈ Vq, we have $\mathbb{E} (|H_n^s(z,\theta )|) \le {{\cal C}_K}$ $\exp ( - (\varepsilon /4)\sum\nolimits_{k = 1}^n {{\varepsilon _k}\eta _k^2} )$. By extracting a finite covering of K from ∪q∈K Vq, we find a neighborhood V ⊂ VK of K such that, for all z ∈ V,

\begin{equation}\nonumber \mathbb{E}(| H_n^s(z, \theta) |) \le {\mathcal C}_K \exp\bigg(\!-\frac{\varepsilon}{4}\sum_{k=1}^n\varepsilon_k \eta_k^2 \bigg). \end{equation}

With probability 1, the functions $z\in V \longmapsto H_n^s(z,\theta)$ are analytic. Fix z0 ∈ V and ρ > 0 such that the closed polydisc D(z0, 2ρ) ⊂ V. Theorem 2 gives

\begin{equation}\nonumber \sup_{z\in D(z_{0},\rho)} | H_n^s(z, \theta) | \leq 2 \int_{[0,1]} | H_n^s( \zeta(t), \theta ) | \rm{d} t, \end{equation}

where, for t ∈ [0, 1],

\begin{equation}\nonumber \zeta(t) = z_0 + 2\rho \rm{e}^{\rm{i} 2\pi t}. \end{equation}

Furthermore, Fubini’s theorem gives

\begin{align*} \mathbb{E} \Big(\sup_{z\in D(z_0,\rho)} |H_n^s(z, \theta)| \Big) &\leq \mathbb{E} \bigg( 2 \int_{[0,1]} |H_n^s(\zeta(t), \theta)| \rm{d} t \bigg) \\ &\leq 2 \int_{[0,1]} \mathbb{E} | H_n^s( \zeta(t), \theta)| \rm{d} t \\ &\leq 2{\mathcal C}_K \exp\bigg(\!-\frac{\varepsilon}{4}\sum_{k=1}^n \varepsilon_k \eta_k^2 \bigg). \end{align*}

Finally, we obtain

\begin{equation}\nonumber \mathbb{E}\Big( \sup_{q\in K}\mu_{q}^s (E_{b, n, s, \varepsilon}^{1} ) \Big) \le 2{\mathcal C}_K \exp\bigg(\!-\frac{\varepsilon}{4}\sum_{k=1}^n \varepsilon_k \eta_k^2 \bigg), \end{equation}

and then, under Hypothesis 1, we obtain (2.1), which completes the proof of Proposition 3.

2.4. Proof of Proposition 4

Let K be a compact subset of ${\mathcal J}$. For a > 1, qK, and n ≥ 1, set

\begin{equation}\nonumber E_{n,a}^+= \{t\in {\partial \textsf{T}} \colon Y^s(q, t_{|n} ) > a^{n} \} \end{equation}

and

\begin{equation}\nonumber E_{n,a}^-= \{t\in{ \partial \textsf{T}} \colon Y^s(q, t_{|n} ) < a^{-n} \}. \end{equation}

It is sufficient to show that, for $E \in \{E_{n,a}^+, E_{n,a}^-\}$,

(2.2)\begin{equation}\label{eq4} \mathbb{E} \bigg(\sup_{q\in K} \sum_{n\geq 1} \mu_{q}^s(E) \bigg)< \infty. \end{equation}

Indeed, if this holds then, with probability 1, for each q ∈ K and $E \in \{E_{n,a}^+, E_{n,a}^-\}$, $\sum_{n\geq 1} \mu_{q}^s (E) < \infty$; hence, by the Borel–Cantelli lemma, for $\mu_{q}^s$-almost every $t \in \partial {\textsf{T}}$, we have $a^{-n} \le Y^s(q, t_{|n}) \le a^n$ for all large enough n, so that

\begin{equation}\nonumber -\log a \leq \liminf_{n\rightarrow\infty}\frac{1}{n}\log Y^s(q, t_{|n}) \leq \limsup_{n\rightarrow\infty}\frac{1}{n}\log Y^s(q, t_{|n}) \leq \log a. \end{equation}

Letting a tend to 1 along a countable sequence yields the result.
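Concretely, once the displayed bounds hold simultaneously for every a in a fixed sequence decreasing to 1, we obtain

\begin{equation}\nonumber \lim_{n\rightarrow\infty}\frac{1}{n}\log Y^s(q, t_{|n}) = 0 \quad \text{for } \mu_{q}^s\text{-almost every } t \in \partial \textsf{T}. \end{equation}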

Let us prove (2.2) for $E = E_{n,a}^+$ (the case $E = E_{n,a}^-$ is similar). First, we have

\begin{align*} \sup_{q\in K} \mu_q^s (E_{n,a}^{+})&= \sup_{q\in K} \sum_{u\in {\textsf{T}}_n} \mu_q^s ([u]) {\mathbf 1}_{\{ Y^s(q, u) > a^n \}} \\ &= \sup_{q\in K} \sum_{u\in {\textsf{T}}_n} Y^s(q, u) \prod_{k=1}^n \exp( \psi_k(q) X_{u_{|k}} - \tau(\psi_k(q))) { \mathbf 1}_{\{ Y^s(q, u) > a^n \}} \\ &\leq \sup_{q\in K} \sum_{u\in {\textsf{T}}_n} (Y^s(q, u))^{1+\nu} \prod_{k=1}^n \exp( \psi_k(q) X_{u_{|k}} - \tau(\psi_k(q))) a^{-n \nu} \\ &\leq \sup_{q\in K} \sum_{u\in {\textsf{T}}_n} { M^s(u)}^{1+\nu} \prod_{k=1}^n \exp(\psi_k(q) X_{u_{|k}} - \tau(\psi_k(q))) a^{-n \nu}, \end{align*}

where Ms(u) = supq∈K Ys(q, u) and ν > 0 is an arbitrary parameter. For q ∈ K and ν > 0, we set $L_n(q,\nu) = \sum_{u\in \textsf{T}_n} M^s(u)^{1+\nu} \prod_{k=1}^n \exp( \psi_k(q) X_{u_{|k}} - \tau(\psi_k(q))) a^{-n \nu}$.
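The third line of the previous display rests on the elementary bound: on the event {Ys(q, u) > an}, for any ν > 0,

\begin{equation}\nonumber {\mathbf 1}_{\{ Y^s(q, u) > a^n \}}\, Y^s(q, u) \le Y^s(q, u) \Big( \frac{Y^s(q, u)}{a^{n}} \Big)^{\nu} = Y^s(q, u)^{1+\nu} a^{-n\nu}. \end{equation}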

Recall from the proof of Proposition 2 that there exists a neighborhood UK ⊂ ℂ of K such that, for all z ∈ UK and k ≥ 1,

\begin{equation}\nonumber \psi_k(z)\text{ is well defined} \quad\text{and}\quad\mathbb{E} \bigg( \sum_{i=1}^N \rm{e}^{ \psi_k(z) X_i }\bigg) \neq 0. \end{equation}

Lemma 5. Fix a > 1. For zUK and ν > 0, let

\begin{equation}\nonumber L_n(z,\nu) =\bigg[ \prod_{k=1}^n \mathbb{E} \bigg( \sum_{i=1}^N \exp( \psi_k(z) X_i) \bigg)^{-1} \bigg] \sum_{u\in \textsf{T}_n} M^s(u)^{1+\nu} \prod_{k=1}^n \exp(\psi_k(z) X_{u_{|k}}) a^{-n \nu}. \end{equation}

There exists a neighborhood V ⊂ ℂ of K and a positive constant CK such that, for all z ∈ V and all integers n ≥ 1,

\begin{equation}\nonumber \mathbb{E}( | L_n(z, p_K-1) | ) \leq C_K a^{-n(p_K-1)/2}, \end{equation}

where pK is given by Proposition 2.

Proof. For zUK and ν > 0, let

\begin{equation}\nonumber \widetilde L_1(z,\nu)= \bigg|\mathbb{E}\bigg( \sum_{i=1}^N \exp( z X_i) \bigg)\bigg|^{-1} \mathbb{E}\bigg( \sum_{i=1}^N \bigg| \exp( z X_i ) \bigg| \bigg) a^{- \nu}. \end{equation}
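For real q the modulus is redundant, since each term $\rm{e}^{q X_i}$ is positive; thus

\begin{equation}\nonumber \widetilde L_1(q,\nu)= \bigg(\mathbb{E}\bigg( \sum_{i=1}^N \rm{e}^{ q X_i }\bigg)\bigg)^{-1} \mathbb{E}\bigg( \sum_{i=1}^N \rm{e}^{ q X_i }\bigg) a^{- \nu} = a^{-\nu}. \end{equation}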

Let q ∈ K. Since $\mathbb{E}(\widetilde L_1(q,\nu)) = a^{-\nu}$, there exists a neighborhood Vq ⊂ UK of q such that, for all z ∈ Vq, we have $\mathbb{E} ( | \widetilde L_1(z, \nu) | ) \le a^{-\nu/2}$. By extracting a finite covering of K from ∪q∈K Vq, we find a neighborhood V ⊂ UK of K such that, for all z ∈ V, $\mathbb{E} ( | \widetilde L_1(z, \nu) | ) \le a^{-\nu/2}$. Without loss of generality (recall the proof of Proposition 2 and the fact that ηk = o(1)), we can suppose that, for all k ≥ 1,

\begin{equation}\nonumber \mathbb{E} \big( | \widetilde L_1(\psi_k(z), \nu) | \big) \le a^{-\nu/2} \end{equation}

for all zV. Therefore,

\begin{align*} & \mathbb{E}( | L_n(z,\nu) | ) \\ &\qquad = \bigg[\,\prod_{k=1}^n \bigg| \mathbb{E}\bigg( \sum_{i=1}^N \exp\big( \psi_k(z) X_i \big) \bigg)\bigg|^{-1}\bigg] \mathbb{E}\bigg( \bigg| \sum_{u\in \textsf{T}_n}M^s(u)^{1+\nu} \prod_{k=1}^n \exp\big( \psi_k(z) X_{u_{|k}}\big)\bigg| \bigg) a^{-n \nu} \\ &\qquad \le \bigg[\,\prod_{k=1}^n \bigg| \mathbb{E}\bigg( \sum_{i=1}^N \exp\big( \psi_k(z) X_i \big) \bigg)\bigg|^{-1}\bigg] \mathbb{E}\bigg( \sum_{u\in \textsf{T}_n}M^s(u)^{1+\nu} \prod_{k=1}^n \bigg| \exp\big(\psi_k( z) X_{u_{|k}}\big) \bigg| \bigg) a^{-n \nu}. \end{align*}

By Proposition 2, there exists pK ∈ (1, 2] such that, for all $u \in \bigcup_{n\ge 0} \mathbb{N}^n_+$,

\begin{equation}\nonumber \mathbb{E}(M^s(u)^{p_K}) = \mathbb{E}(M^s(\emptyset)^{p_K})= C_K < \infty. \end{equation}

Take ν = pK − 1 in the last calculation. It follows, from the independence of $\sigma (\{ (X_{u1}, \ldots, X_{uN_u}) \colon u \in \textsf{T}_{n-1}\})$ and $\sigma (\{ Y^s(\cdot, u) \colon u \in \textsf{T}_n\})$ for all n ≥ 1, that

\begin{align*} & \mathbb{E}( | L_n(z, p_K -1) |) \\ &\qquad\le \bigg[\,\prod_{k=1}^n \bigg| \mathbb{E}\bigg( \sum_{i=1}^N \exp(\psi_k(z) X_i) \bigg)\bigg|^{-1}\bigg] \prod_{k=1}^n\mathbb{E}\bigg( \sum_{i=1}^N \bigg| \exp( \psi_k(z) X_i ) \bigg| \bigg) C_K a^{-n (p_K-1)} \\ &\qquad= C_K \prod_{k=1}^n \mathbb{E} \big( | \widetilde L_1(\psi_k(z), p_K-1) | \big) \\ &\qquad \le C_K a^{-n(p_K-1)/2}, \end{align*}

completing the proof.

With probability 1, the functions z ∈ V ↦ Ln(z, ν) are analytic. Fix z0 ∈ V and ρ > 0 such that the closed polydisc D(z0, 2ρ) ⊂ V. Theorem 2 gives

\begin{equation}\nonumber \sup_{z\in D(z_{0},\rho)} | L_n(z, p_K-1) | \leq 2 \int_{[0,1]} | L_n( \zeta(t), p_K-1 ) | \rm{d} t, \end{equation}

where, for t ∈ [0, 1],

\begin{equation}\nonumber \zeta(t) = z_0 + 2\rho \rm{e}^{\rm{i} 2\pi t}.\end{equation}

Furthermore, Fubini’s theorem gives

\begin{align*} \mathbb{E} \Big(\sup_{z\in D(z_0,\rho)} |L_n(z, p_K -1)| \Big) &\leq \mathbb{E} \biggl( 2 \int_{[0,1]} |L_n(\zeta(r), p_K -1)| \rm{d} r \biggr) \\ &\leq 2 \int_{[0,1]} \mathbb{E} | L_n( \zeta(r), p_K -1)| \rm{d} r \\ &\leq 2 C_K a^{-n(p_K -1)/2}. \end{align*}

Since a > 1 and pK − 1 > 0, we obtain (2.2).

Appendix A. Cauchy formula in several variables

Let us recall the Cauchy formula for holomorphic functions.

Definition 1. Let D(ζ, r) be a disc in ℂ with centre ζ and radius r. The set ∂D is the boundary of D. Let $g \in {\mathcal C}(\partial D)$ be a continuous function on ∂D. We define the integral of g on ∂D as

\begin{equation}\nonumber \int_{\partial D} g(\zeta) \,\rm{d} \zeta = 2 \rm{i} \pi r \int_{[0,1]} g(\zeta(t))\, \rm{e}^{\rm{i} 2\pi t} \,\rm{d} t, \end{equation}

where ζ(t) = ζ + rei2πt.

Theorem 2. Let D = D(a, r) be a disc in ℂ with radius r > 0, and let f be a holomorphic function in a neighborhood of D. Then, for all z ∈ D,

\begin{equation}\nonumber f(z)=\frac{1}{2 \rm{i} \pi} \int_{\partial D} \frac{f(\zeta) \rm{d}\zeta}{\zeta -z}. \end{equation}

It follows that

\begin{equation}\nonumber \sup_{z\in D(a,r/2)} |f(z)| \le 2 \int_{[0,1]} | f(\zeta(t))| \,\rm{d} t. \end{equation}
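The last estimate is obtained from the Cauchy formula: for z ∈ D(a, r/2) and ζ ∈ ∂D we have |ζ − z| ≥ r/2, so, with ζ(t) = a + rei2πt,

\begin{equation}\nonumber |f(z)| \le \frac{1}{2\pi} \int_{[0,1]} \frac{|f(\zeta(t))|}{|\zeta(t) - z|}\, 2\pi r \,\rm{d} t \le \frac{2}{r} \cdot r \int_{[0,1]} | f(\zeta(t))| \,\rm{d} t = 2\int_{[0,1]} | f(\zeta(t))| \,\rm{d} t. \end{equation}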

Appendix B. Mass distribution principle

Theorem 3. ([Reference Falconer9, Theorem 4.2].) Let ν be a positive and finite Borel measure on a compact metric space (X, d). Assume that M ⊂ X is a Borel set such that ν(M) > 0 and

\begin{equation}\nonumber M \subseteq \Big\{ t\in X \colon \liminf_{r \rightarrow 0^+ }\frac{\log \nu(B(t,r))}{\log r } \geq \delta \Big\}. \end{equation}

Then the Hausdorff dimension of M is bounded from below by δ.

Lemma 6. ([Reference Biggins6].) If {Xi} is a family of integrable and independent complex random variables with $\mathbb{E}(X_i) = 0$, then $\mathbb{E} |\sum X_i|^p \leq 2^p \sum \mathbb{E} |X_i|^p$ for 1 ≤ p ≤ 2.

References

Attia, N. (2012). Comportement asymptotique de marches aléatoires de branchement dans ℝd et dimension de Hausdorff. Doctoral Thesis, Université Paris-Nord XIII.
Attia, N. (2014). On the multifractal analysis of the branching random walk in ℝd. J. Theoret. Prob. 27, 1329–1349.
Attia, N. and Barral, J. (2014). Hausdorff and packing spectra, large deviations and free energy for branching random walks in ℝd. Commun. Math. Phys. 331, 139–187.
Barral, J. (2000). Continuity of the multifractal spectrum of a random statistically self-similar measure. J. Theoret. Prob. 13, 1027–1060.
Biggins, J. D. (1977). Martingale convergence in the branching random walk. J. Appl. Prob. 14, 25–37.
Biggins, J. D. (1992). Uniform convergence of martingales in the branching random walk. Ann. Prob. 20, 137–151.
Biggins, J. D., Hambly, B. M. and Jones, O. D. (2011). Multifractal spectra for random self-similar measures via branching processes. Adv. Appl. Prob. 43, 1–39.
Falconer, K. J. (1994). The multifractal spectrum of statistically self-similar measures. J. Theoret. Prob. 7, 681–702.
Falconer, K. (2003). Fractal Geometry: Mathematical Foundations and Applications, 2nd edn. John Wiley, Hoboken, NJ.
Fan, A. H. (1994). Sur les dimensions de mesures. Studia Math. 111, 1–17.
Fan, A. H. and Kahane, J. P. (2001). How many intervals cover a point in random dyadic covering? Port. Math. 58, 59–75.
Holley, R. and Waymire, E. C. (1992). Multifractal dimensions and scaling exponents for strongly bounded random cascades. Ann. Appl. Prob. 2, 819–845.
Kahane, J.-P. and Peyrière, J. (1976). Sur certaines martingales de Benoit Mandelbrot. Adv. Math. 22, 131–145.
Lyons, R. (1990). Random walks and percolation on trees. Ann. Prob. 18, 931–958.
Lyons, R. and Pemantle, R. (1992). Random walk in a random environment and first-passage percolation on trees. Ann. Prob. 20, 125–136.
Liu, Q. and Rouault, A. (1997). On two measures defined on the boundary of a branching tree (IMA Vol. Math. Appl. 84). Springer, New York, pp. 187–201.
Molchan, G. M. (1996). Scaling exponents and multifractal dimensions for independent random cascades. Commun. Math. Phys. 179, 681–702.
Peyrière, J. (1977). Calculs de dimensions de Hausdorff. Duke Math. J. 44, 591–601.