1. Introduction and main results
Let $(N, X)$ be a random vector with independent components taking values in $\mathbb{N}^2$, where $\mathbb{N}$ denotes the set of nonnegative integers. Then consider $\{ (N_{u}, X_{u} )\}_{u\in \bigcup_{n\geq 0} \mathbb{N}^n_+}$ to be a family of independent copies of the vector $(N, X)$ indexed by the set of finite words over the alphabet $\mathbb{N}_+$, the set of positive integers ($n = 0$ corresponds to the empty sequence, denoted by ∅). Let $\textsf{T}$ be the Galton–Watson tree with defining elements $\{N_u\}$. We have ∅ ∈ $\textsf{T}$; if $u \in \textsf{T}$ and $i \in \mathbb{N}_+$ then $ui$, the concatenation of $u$ and $i$, belongs to $\textsf{T}$ if and only if $1 \le i \le N_u$, and if $ui \in \textsf{T}$ then $u \in \textsf{T}$. Similarly, for each $u\in \bigcup_{n\ge 0} \mathbb{N}^n_+$, denote by $\textsf{T}(u)$ the Galton–Watson tree rooted at $u$ and defined by the $\{N_{uv}\}$, $v \in \bigcup_{n \ge 0} \mathbb{N}_+^{n}$.
We assume that $\mathbb{E}(N) > 1$, so that the Galton–Watson tree is supercritical. We also assume that ℙ(N ≥ 1) = 1, so that the probability of extinction is equal to 0.
For each infinite word $t=t_1 t_2 \cdots\in\mathbb{N}_+^{\mathbb{N}_+}$ and n ≥ 0, we set $t_{|n}=t_1 \cdots t_n \in\mathbb{N}^n_+$ ($t_{|0}=\emptyset$). If $u\in \mathbb{N}^n_+$ for some n ≥ 0 then n is the length of u and it is denoted by |u|. Then we denote by [u] the set of infinite words $t \in\mathbb{N}_+^{\mathbb{N}_+}$ such that $t_{|\,|u|} = u$.
The set $\mathbb{N}_+^{\mathbb{N}_+}$ is endowed with the standard ultrametric distance
with the convention that exp (− ∞) = 0. The boundary of the Galton–Watson tree $\textsf{T}$ is defined as the compact set
where $\textsf{T}_n=\textsf{T}\cap \mathbb{N}_+^n$.
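For concreteness, a standard formulation consistent with these conventions (the exact normalization of the distance is an assumption) is
\[
d(s,t)=\exp\bigl(-\sup\{n\ge 0 \colon s_{|n}=t_{|n}\}\bigr),
\qquad
\partial\textsf{T}=\bigcap_{n\ge 1}\ \bigcup_{u\in\textsf{T}_n}[u],
\]
so that a cylinder [u] with $u\in\textsf{T}_n$ has diameter at most $\mathrm{e}^{-n}$.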
We consider $X_u$ as the covering number of the cylinder [u], that is, the cylinder [u] is cut off with probability $p_0$ = ℙ(X = 0) and is covered m times with probability $p_m$ = ℙ(X = m), m = 1, 2, ….
For t ∈ ∂ $\textsf{T}$, let
Since this quantity depends on $t_1 \cdots t_n$ only, we also denote by $\textsf{N}_n(u)$ the constant value of $\textsf{N}_n(\cdot)$ over [u] whenever $u \in {{\textsf{T}}_n}$. The quantity $\textsf{N}_n(t)$ is called the covered number (or, more precisely, the n-covered number) of the point t by the generation-k cylinders, k = 1, 2, …, n.
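With the above interpretation of $X_u$ as the covering number of the cylinder [u], the natural expression for this quantity (the one we have in mind in what follows) is the cumulated covering along the first n generations:
\[
\textsf{N}_n(t)=\sum_{k=1}^{n} X_{t_{|k}},\qquad t\in\partial\textsf{T},\ n\ge 1.
\]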
We also define the α-dimensional Hausdorff measure of a set E by
where the infimum is taken over all the countable coverings (Ui)i∈ℕ of E of diameters less than or equal to δ. Then the Hausdorff dimension of E is defined as
with the conventions that
Moreover, if E is a Borel set and μ is a measure supported on E, then its lower Hausdorff dimension is defined as
and we have
where the first infimum is taken over all t and B(t, r) stands for the closed ball of radius r centered at t [Reference Fan10].
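These notions are classical; for the reader's convenience we recall a standard formulation (the precise conventions adopted here are an assumption):
\[
\mathcal{H}^\alpha(E)=\lim_{\delta\to 0}\,\inf\Bigl\{\sum_{i\in\mathbb{N}}(\operatorname{diam} U_i)^{\alpha} \colon E\subset\bigcup_i U_i,\ \operatorname{diam} U_i\le\delta\Bigr\},
\qquad
\dim E=\inf\{\alpha\ge 0 \colon \mathcal{H}^\alpha(E)=0\},
\]
and, for a measure μ,
\[
\dim\mu=\inf\{\dim F \colon F\ \text{Borel},\ \mu(F)>0\}
\ \ge\ \inf_{t}\ \liminf_{r\to 0}\frac{\log \mu(B(t,r))}{\log r},
\]
the last inequality being the form in which the lower dimension is used below, via Theorem 3.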
Consider an individual infinite branch $t_1 \cdots t_n \cdots$ of ∂$\textsf{T}$. When $\mathbb{E}(X)$ is defined, the strong law of large numbers yields $\lim_{n\to\infty}n^{-1} {\textsf{N}}_n(t) =\mathbb{E}(X)$. It is also well known (see [Reference Fan and Kahane11]) in the theory of the birth process that $\lim_{n\to\infty} \textsf{N}_n(t) = +\infty$ almost surely (a.s.) for every $t\in {\mathcal D} = \{0, 1\}^{\mathbb{N}}$ if and only if
Then, if this condition is satisfied, every point is infinitely covered a.s.
For b ∈ ℝ, we consider the set
These level sets can be described geometrically through their Hausdorff dimensions. They have been studied by many authors; see, e.g., [Reference Barral4], [Reference Biggins, Hambly and Jones7], [Reference Falconer8], [Reference Holley and Waymire12], and [Reference Molchan17], as well as [Reference Attia2] and [Reference Attia and Barral3] for the general case. All these papers also deal with the multifractal analysis of associated Mandelbrot measures (see also [Reference Kahane and Peyriére13], [Reference Liu and Rouault16], and [Reference Peyriére18] for the study of the dimension of Mandelbrot measures).
For the sake of simplicity, we will assume that the free energy of X defined as
is finite over ℝ. Let τ* stand for the Legendre transform of the function τ, where, by convention, the Legendre transform of a mapping f : ℝ → ℝ is defined as the concave and upper semi-continuous function
We say that the multifractal formalism holds at b ∈ ℝ if dim Eb = τ*(b). We will assume without loss of generality that X is not constant, so that the function τ is strictly convex.
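Here the free energy and the Legendre transform presumably take the following standard forms (an assumption consistent with the computations of Section 2):
\[
\tau(q)=\log\mathbb{E}\Bigl(\sum_{i=1}^{N}\mathrm{e}^{qX_i}\Bigr),
\qquad
f^{*}(b)=\inf_{q\in\mathbb{R}}\bigl(f(q)-qb\bigr),
\]
so that, by convexity of τ, $\tau^{*}(\tau'(q))=\tau(q)-q\tau'(q)$ for every q ∈ ℝ.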
The interior of a subset A of ℝ is denoted by int(A). In the following, we define the sets
Remark 1. Define the set $L = \{\alpha \in \mathbb{R} : \tau^*(\alpha) \ge 0\}$. We can show that L is a convex, compact, and nonempty set (see [Reference Attia1, Proposition 3.1]). If we add the assumption that $J = {\mathcal J}$ (for example, if we suppose that, for all q ∈ J, there exists α ∈ (1, 2] such that $ \mathbb{E} [ |\sum_{i=1}^N \mathrm{e}^{ q X_i } |^\alpha ] < \infty$), then I = int(L) (see also [Reference Attia1, Proposition 3.1]). In particular, I is an interval.
Next, we define, for b ∈ ℝ and any positive sequence s = {sn} such that sn = o(n), the set
where ${{\textsf{N}}_n}(t) - nb \sim {s_n}$ means that $({{\textsf{N}}_n}(t) - nb)_n$ and $(s_n)_n$ are equivalent sequences, i.e. their ratio tends to 1 as n → ∞. We can obtain the Hausdorff dimension of the set $E_b$ via, for example, the methods used in [Reference Attia2], [Reference Attia and Barral3], [Reference Lyons14], and [Reference Lyons and Pemantle15], but such methods do not give results on dim $E_{b,s}$.
Let $(\eta_n)_{n\ge 1}$ be a positive sequence defined by $\eta_n = s_n - s_{n-1}$ for n ≥ 1, and suppose that the following hypothesis holds.
Hypothesis 1. Let sn = o(n) and ηn = o(1). Then there exists (εn) such that
For example, to satisfy Hypothesis 1, we can choose, for n ≥ 1,
such that $\alpha \in (0, \tfrac{1}{2})$ and 1 − 2α − γ > 0.
We are now able to state our main result.
Theorem 1. Let s = (sn)n≥1 be a positive sequence. Under Hypothesis 1, we have, a.s., for all b ∈ I,
A special case of this theorem was treated in [Reference Fan and Kahane11], where the authors considered the space {0, 1}ℕ and constructed, for each b = τ′(q) ∈ I, a Mandelbrot measure μq. Let us mention that our theorem gives a stronger result in the sense that, a.s., for all b ∈ I, the multifractal formalism holds. This requires the simultaneous construction of a family of inhomogeneous Mandelbrot measures and the computation of their Hausdorff dimensions.
2. Proof of Theorem 1
Let s be a positive sequence such that sn = o(n) and ηn = o(1).
2.1. Upper bounds for the Hausdorff dimension
Let us define, for q ∈ ℝ, the pressure-like function of q by
Proposition 1. With probability 1, for all b ∈ ℝ,
a negative dimension meaning that Eb is empty.
Proof. It is clear, since $s_n = o(n)$, that, a.s., for all b ∈ ℝ, we have $E_{b,s} \subset E_b$. Then, a.s.,
In addition, we have
Fix ε > 0. For M ≥ 1, the set $E(M,\varepsilon,b) = \bigcap_{n \ge M} \{ t \in \partial {\textsf{T}} : |\textsf{N}_{n}(t) - nb| \le n\varepsilon \}$ is covered by the union of those [u] such that $u \in {{\textsf{T}}_n}$, n ≥ M, and ${{\textsf{N}}_n}(u) - nb + n\varepsilon \ge 0$. Thus, for α ≥ 0, n ≥ M, and q > 0,
Consequently, if ζ > 0 and α > τ̃(q) + ζ − qb + qε, by the definition of τ̃(q), for large enough M, we have
This yields ${\mathcal H}^\alpha( E(M, \varepsilon, b))=0$; hence, dim E(M, ε, b) ≤ α. Since this holds for all ζ > 0, we obtain dim E(M, ε, b) ≤ τ̃(q) − qb + qε. It follows that
Similarly, if we take q < 0, we obtain
Then we have
If τ̃ ∗(b) < 0, we necessarily have Eb = ∅.
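The estimate behind this covering argument is a Chernoff-type bound: writing (as an assumption on the notation) $\tilde\tau(q)=\limsup_{n\to\infty}\frac1n\log\sum_{u\in\textsf{T}_n}\mathrm{e}^{q\textsf{N}_n(u)}$ for the pressure-like function, we have, for q > 0, ζ > 0, and all large n,
\[
\sum_{\substack{u\in\textsf{T}_n \\ \textsf{N}_n(u)-nb+n\varepsilon\ge 0}}(\operatorname{diam}[u])^{\alpha}
\ \le\ \mathrm{e}^{-\alpha n}\sum_{u\in\textsf{T}_n}\mathrm{e}^{q(\textsf{N}_n(u)-nb+n\varepsilon)}
\ \le\ \mathrm{e}^{n(\tilde\tau(q)+\zeta-\alpha-qb+q\varepsilon)},
\]
which tends to 0 as soon as $\alpha>\tilde\tau(q)+\zeta-qb+q\varepsilon$; this is only a sketch of the inequality used above.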
It remains to show that, with probability 1,
The functions τ̃ and τ are convex and thus continuous. We need only prove that the inequality τ̃(q) ≤ τ(q) holds for each q ∈ ℝ almost surely. Fix q ∈ ℝ. For α > τ(q), we have
Consequently,
so that we have
Since α > τ(q) is arbitrary, this completes the proof.
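For clarity, the estimate used in the last step is the standard first-moment bound: by the branching property and the independence of the vectors $(N_u, X_{u1},\ldots)$, one expects
\[
\mathbb{E}\Bigl(\sum_{u\in\textsf{T}_n}\mathrm{e}^{q\textsf{N}_n(u)}\Bigr)=\Bigl(\mathbb{E}\sum_{i=1}^{N}\mathrm{e}^{qX_i}\Bigr)^{n}=\mathrm{e}^{n\tau(q)},
\]
so that, for α > τ(q), Markov's inequality gives $\mathbb{P}\bigl(\sum_{u\in\textsf{T}_n}\mathrm{e}^{q\textsf{N}_n(u)}\ge \mathrm{e}^{n\alpha}\bigr)\le \mathrm{e}^{-n(\alpha-\tau(q))}$, which is summable in n, and the Borel–Cantelli lemma then yields τ̃(q) ≤ α a.s.; this is a sketch under the expressions for τ and τ̃ assumed above.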
2.2. Lower bounds for the Hausdorff dimension
2.2.1. Construction of inhomogeneous Mandelbrot measures
We define, for $(q,p) \in {\mathcal J} \times [1, \infty)$,
We have the following result.
Lemma 1. For all nontrivial compact sets $K \subset {\mathcal J}$, there exists a real number 1 < pK < 2 such that, for all 1 < p ≤ pK, we have
Proof. Let $q \in {\mathcal J}$. We have ∂φ(1+, q)/∂p < 0. Therefore, there exists pq > 1 such that φ(pq, q) < 1. In a neighborhood Vq of q, we have
If K is a nontrivial compact subset of ${\mathcal J}$, it is covered by a finite number of such $V_{q_i}$.
Let $p_K = \inf_i p_{q_i}$. If $1 < p \le p_K$ and $\sup_{q\in K} \phi(p, q) \ge 1$, there exists q ∈ K such that
Let us recall that the mapping p ↦ φ(p, q) is log-convex and that φ(1, q) = 1. Since $1 < p \le p_{q_i}$, we have φ(p, q) < 1, which is a contradiction.
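To make the criterion explicit, note that if, as one expects from the construction, $\phi(p,q)=\mathbb{E}\bigl(\sum_{i=1}^{N}\mathrm{e}^{p(qX_i-\tau(q))}\bigr)=\mathrm{e}^{\tau(pq)-p\tau(q)}$ (this expression is an assumption), then log φ(·, q) is indeed convex, φ(1, q) = 1, and
\[
\frac{\partial\phi}{\partial p}(1^{+},q)=q\tau'(q)-\tau(q)=-\tau^{*}(\tau'(q)),
\]
so the condition $\partial\phi(1^{+},q)/\partial p<0$ amounts to $\tau^{*}(\tau'(q))>0$, the natural condition on q in this type of construction.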
Lemma 2. For all compact sets $K\subset {\mathcal J}$, there exists $\widetilde{p}_K > 1$ such that
Proof. Since K is compact and the family of open sets $J\cap \Omega^1_\gamma$ increases to $\mathcal J$ as γ decreases to 1, there exists γ ∈ (1, 2] such that $K\subset \Omega_\gamma^1$. Take $\widetilde{p}_K = \gamma$. The conclusion follows from the fact that the function $q\mapsto \mathbb{E} ( (\sum_{i=1}^N\mathrm{e}^{ q X_i } )^{\widetilde p_K} )$ is convex over $ \Omega_{\widetilde p_K}^1$ and thus continuous.
Now, we will construct the inhomogeneous Mandelbrot measure. For $q \in {\mathcal J}$ and k ≥ 1, we define ψk(q) as the unique t such that
For $u\in \bigcup_{n\ge 0} \mathbb{N}^n_+$ and $q \in {\mathcal J}$, we define, for $1 \le i \le N_u$,
and, for all n ≥ 0,
When u = ∅, this quantity will be denoted by $Y_n^s( q )$ and, when n = 0, its value equals 1.
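A reading of these definitions consistent with the formulas used later (in particular with the weights $\exp(\psi_k(q)X_u-\tau(\psi_k(q)))$ appearing in Section 2.3 and with the relation $\tau'(\psi_k(q))=\tau'(q)+\eta_k$ recalled in the proof of Lemma 4) is the following, stated here as an assumption: $\psi_k(q)$ is the unique t with $\tau'(t)=\tau'(q)+\eta_k$, the weights are $V(ui,t)=\mathrm{e}^{tX_{ui}-\tau(t)}$, so that $\mathbb{E}\bigl(\sum_{i=1}^{N}V(i,t)\bigr)=1$, and, at the root,
\[
Y_n^{s}(q)=\sum_{u\in\textsf{T}_n}\ \prod_{k=1}^{n}V(u_{|k},\psi_k(q)).
\]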
The sequence $ ( Y_n^s(q, u) )_{n\geq 1}$ is a positive martingale with expectation 1, which converges a.s. and in the $L^1$-norm to a positive random variable $Y^s(q, u)$ (see [Reference Kahane and Peyriére13], [Reference Biggins5], or [Reference Biggins6, Theorem 1]). However, our study will need the almost-sure simultaneous convergence of these martingales to positive limits.
Proposition 2. (i) Let K be a compact subset of ${\mathcal J}$. There exists $p_K \in (1, 2]$ such that, for all $u \in \bigcup_{n\ge 0}\mathbb{N}^n_+$, the continuous functions $q\in K\mapsto Y_n^s(q, u)$ converge uniformly, a.s. and in the $L^{p_K}$-norm, to a limit $q \in K \mapsto Y^s(q, u)$. In particular, $\mathbb{E}(\sup_{q\in K } Y^s(q, u)^{p_K}) < \infty$. Moreover, $Y^s(\cdot, u)$ is positive a.s.
In addition, for all n ≥ 0, $\sigma (\{ (X_{u1}, \ldots, X_{uN_u}) : u \in {{\textsf{T}}_n}\} )$ and $\sigma (\{ {Y^s}( \cdot ,u) : u \in {{\textsf{T}}_{n + 1}}\} )$ are independent, and the random functions $Y^s(\cdot, u)$, $u \in {{\textsf{T}}_{n + 1}}$, are independent copies of $Y^s(\cdot) := Y^s(\cdot, \emptyset)$.
(ii) With probability 1, for all $q \in {\mathcal J}$, the weights
define a measure on ∂ $\textsf{T}$.
The measure $\mu_q^s $ will be used to bound from below the Hausdorff dimension of the set $E_{b, s}$.
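For orientation, in this type of construction the weights of Proposition 2(ii) are presumably of the form
\[
\mu_q^{s}([u])=\Bigl(\prod_{k=1}^{n}V(u_{|k},\psi_k(q))\Bigr)Y^{s}(q,u),\qquad u\in\textsf{T}_n,
\]
the branching property of the limits $Y^{s}(q,\cdot)$ providing the additivity $\mu_q^{s}([u])=\sum_{1\le i\le N_u}\mu_q^{s}([ui])$ needed for the extension to a measure on ∂$\textsf{T}$; this explicit form is an assumption, meant only to fix ideas.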
The proof of Proposition 2 needs the following result.
Lemma 3. For $q\in {\mathcal J}$, $u \in {\textsf{T}}$, and p ∈ (1, 2), there exists a constant Cp depending only on p such that, for n ≥ 1,
Proof. The definition of the process Yn immediately gives
For each n ≥ 1, let ${\mathcal F}_n=\sigma \{(N_u, V_{u1},\ldots) \colon |u| \leq n-1 \}$ and let ${\mathcal F}_{0}$ be the trivial sigma-field. For $u \in {\textsf{T}}_{n-1}$, we set $B_u(q) = \sum_{i=1}^{N_u} V(ui, \psi_n(q))$. By construction, the random variables $(B_u(q) - 1)$, $u \in {\textsf{T}}_{n-1}$, are centered, independent, identically distributed (i.i.d.), and independent of ${\mathcal F}_{n-1}$. Consequently, conditionally on ${\mathcal F}_{n-1}$, we can apply Lemma 6 in Appendix B to the family $\{ (B_u(q)-1)\prod_{k=1}^{n-1} V(u_{|k}, \psi_k(q))\}$. Noting that the $B_u(q)$, $u \in {\textsf{T}}_{n-1}$, have the same distribution yields
where B(q) stands for any of the identically distributed variables $B_u(q)$.
Using the branching property and the independence of the random vectors $(N_u, X_{u1}, \ldots)$ used in the constructions yields
Then a recursion using the branching property and the independence of the random vectors $(N_u, X_{u1}, \ldots)$ yields
Using the inequality
we obtain
Since
then it follows from Lemma 6 in Appendix B that
Finally, we have
Proof of Proposition 2(i). The uniform convergence result uses an argument developed in [Reference Biggins6]. Fix a compact $K \subset {\mathcal J}$. Since $\eta_k = o(1)$, we can fix, without loss of generality, a compact neighborhood $K' \subset {\mathcal J}$ of K and suppose that
Fix a compact neighborhood $K''$ of $K'$. By Lemma 2, we can find $\widetilde{p}_{K''} > 1$ such that
By Lemma 1, we can fix $1 < p_K \le \min(2, \widetilde{p}_{K''})$ such that $\sup_{q \in K'} \phi(p_K, q) < 1$. Then, for each q ∈ K″, there exists a neighborhood $V_q \subset \mathbb{C}$ of q whose projection to ℝ is contained in K″ and such that, for all $u \in {\textsf{T}}$ and $z \in V_q$, the random variables
are well defined. For z ∈ Vq and k ≥ 1, we define ψk(z) as the unique t such that
Moreover, we have
By extracting a finite covering of K′ from $\bigcup_{q\in K'} V_q$, we find a neighborhood $V\subset \mathbb {C}$ of K′ such that
Since the projection of V to ℝ is included in K″ and the mapping $z\mapsto \mathbb{E} (\sum_{i=1}^N \mathrm{e}^{ z X_i } )$ is continuous and does not vanish on V, by considering a smaller neighborhood of K′ included in V if necessary, we can assume that
Now, for $u \in {\textsf{T}}$, we define the analytic extension to V of $Y_n^s(q, u)$ given by
We also denote $Y_n^s(z, \emptyset)$ by $Y_n^s(z)$. The same arguments as in the proof of Lemma 3 show that
Note that $\mathbb{E}( \sum_{i=1}^{N}| V(i, z) |^{p_K} ) = \phi (p_K, \psi_k(z))$. Then
where we have used the fact that ψk(z) ∈ V for all k ≥ 1.
With probability 1, the functions $z \in V \mapsto Y_n^s(z)$, n ≥ 0, are analytic. Fix a closed polydisc $D(z_0, 2\rho) \subset V$. Theorem 2 gives
where, for t ∈ [0, 1], $\zeta(t) = z_0 + 2\rho \mathrm{e}^{\mathrm{i}2\pi t}$.
Furthermore, Jensen’s inequality and Fubini’s theorem give
Since supz∈V ϕ(pK, z) < 1, it follows that
This implies that the functions $z\mapsto Y_n^s(z)$ converge uniformly, a.s. and in the $L^{p_K}$-norm, over the compact $D(z_0, \rho)$ to a limit $z \mapsto Y^s(z)$. This also implies that
Since K can be covered by finitely many such discs $D(z_0, \rho)$, we get the uniform convergence, a.s. and in the $L^{p_K}$-norm, of the sequence $(q\in K \mapsto Y_n^s (q))_{n \ge 1}$ to $q \in K \mapsto Y^s(q)$. Moreover, since ${\mathcal J}$ can be covered by a countable union of such compact sets K, we get the simultaneous convergence for all $q\in {\mathcal J}$. The same holds simultaneously for all the functions $q\in {\mathcal J} \mapsto Y_n^s (q, u)$, $u \in \bigcup_{n \ge 0} \mathbb{N}_+^n$, because $\bigcup_{n\ge 0}\mathbb{N}^n_+$ is countable.
To complete the proof of Proposition 2(i), we must show that, with probability 1, $q \in K \mapsto Y^s(q)$ does not vanish. Without loss of generality, we can suppose that K = [0, 1]. If I is a dyadic closed subinterval of [0, 1], we denote by $E_I$ the event {there exists q ∈ I : $Y^s(q) = 0$}. Let $I_0$ and $I_1$ stand for the two dyadic subintervals of I in the next generation. The event $E_I$ is a tail event, so it has probability 0 or 1. If we suppose that ℙ($E_I$) = 1 then there exists j ∈ {0, 1} such that ℙ($E_{I_j}$) = 1. Suppose now that ℙ($E_K$) = 1. The previous remark allows us to construct a decreasing sequence $(I^{(n)})_{n\ge 0}$ of dyadic subintervals of K such that ℙ($E_{I^{(n)}}$) = 1. Let $q_0$ be the unique element of $\bigcap_n I^{(n)}$. Since $q \mapsto Y^s(q)$ is continuous, we have ℙ($Y^s(q_0) = 0$) = 1, which contradicts the fact that $(Y_n^s(q_0))_{n \ge 1}$ converges to $Y^s(q_0)$ in $L^1$.
2.2.2. Proof of Theorem 1. The proof of Theorem 1 can be deduced from the following two propositions, whose proofs are developed in the next subsections.
Proposition 3. Suppose that Hypothesis 1 holds. Then, with probability 1, for all $q\in {\mathcal J}$,
where b = τ′(q).
Proposition 4. With probability 1, for all $q\in {\mathcal J}$ and $\mu_q^s$-almost every $t \in \partial {\textsf{T}}$,
From Proposition 3 it follows that, with probability 1, for all $q \in {\mathcal J}$, we have $\mu_q^s (E_{b, s} )=1$ and, for $\mu_q^s$-almost every t, ${\lim _{n \to + \infty }}{{\textsf{N}}_n}(t)/n = b$, where b = τ′(q). In addition, with probability 1, for all $q \in {\mathcal J} $ and $\mu_q^s$-almost every t ∈ Eb,s, from Propositions 3 and 4, we have
Since $\eta_k = o(1)$, and hence $\psi_k(q) \to q$, we obtain
We deduce the result from the mass distribution principle (Theorem 3) and Proposition 1.
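In other words, under the expressions assumed above, the argument is that Propositions 3 and 4 combine to give, for $\mu_q^{s}$-almost every $t\in E_{b,s}$,
\[
\liminf_{r\to 0}\frac{\log \mu_q^{s}(B(t,r))}{\log r}\ \ge\ \tau(q)-q\tau'(q)=\tau^{*}(b),
\]
so that Theorem 3 yields $\dim E_{b,s}\ge\tau^{*}(b)$, while Proposition 1 provides the matching upper bound; this is only a sketch of how the pieces fit together.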
2.3. Proof of Proposition 3
Let K be a compact subset of ${\mathcal J}$. For b = τ′(q), $q\in {\mathcal J}$, n ≥ 1, ε > 0, and s = (sn)n≥1, we set
Suppose that we have shown that, for λ ∈ {−1, 1}, we have
Then, with probability 1, for all $q \in {\mathcal J} $, λ ∈ {−1, 1}, and $\varepsilon\in\mathbb Q^*_+$,
Consequently, by the Borel–Cantelli lemma, for $\mu_q^s$-almost every t, we have
so ${\textsf{N}_n}(t) - nb \sim {s_n}$, which yields the desired result.
Let us prove (2.1) when λ = 1 (the case λ = −1 is similar). Let θ = (θn) be a positive sequence and q ∈ K. Then
where $t_u$ is any point in [u]. Denote $t_u$ simply by t. Then
For q ∈ K, θ = (θn), and n ≥ 1, set
where
Recall from the proof of Proposition 2 that there exists a neighborhood VK ⊂ ℂ of K such that
are well defined for z ∈ VK.
For ε > 0, z ∈ VK, and n ≥ 1, we define
Proposition 5. There exists a neighborhood V ⊂ VK of K, a positive constant ${\mathcal C}_K$, and a positive sequence θ such that, for all z ∈ V and all n ∈ ℕ*,
where the sequence (εn)n is the sequence used in Hypothesis 1.
Lemma 4. There exists a positive sequence θ = (θn) and a positive constant ${\mathcal C}_K$ such that, for all q ∈ K, we have
Proof. Let θ = (θn) be a positive sequence. Clearly we have
where, by Proposition 2, ${\mathcal C'}_K = \mathbb{E}(M^s(u)) = \mathbb{E}(M^s(\emptyset)) < \infty$ for all $u \in \bigcup_{n\ge 0} \mathbb{N}^n_+$.
Since $\eta_k = o(1)$, we can fix a compact neighborhood K′ of K and suppose that, for all k ≥ 1 and all q ∈ K, we have $\psi_k(q) \in K'$. For q ∈ K and k ≥ 1, writing the Taylor expansion of the function $g \colon \theta \mapsto \tau(\psi_k(q) + \theta)$ at 0 up to the second order, we obtain
with $g''(t\theta) \le m_K = \sup_{t\in[0,1]} \sup_{q\in K'} g''(t\theta)$. It follows that, for all k ≥ 1,
Recall that $\tau'(\psi_k(q)) = \tau'(q) + \eta_k$. Then
Choose the sequence θ such that $\theta_k = \varepsilon_k\eta_k$. Then
Since $\varepsilon_k \to 0$, for large enough k we have $\varepsilon - \varepsilon_k m_K > \varepsilon/2$. Then there exists a constant ${\mathcal C}_K$ such that
Proof of Proposition 5. Since $ \mathbb{E}(| H_n^s(q, \theta) |) \le {\mathcal C}_K \exp(-({\varepsilon}/{2})\sum_{k=1}^n\varepsilon_k \eta_k^2) $ for q ∈ K, there exists a neighborhood Vq ⊂ VK of q such that, for all z ∈ Vq, we have $\mathbb{E} (|H_n^s(z,\theta )|) \le {\mathcal C}_K \exp ( - (\varepsilon /4)\sum_{k = 1}^n \varepsilon _k\eta _k^2 )$. By extracting a finite covering of K from $\bigcup_{q\in K} V_q$, we find a neighborhood V ⊂ VK of K such that
With probability 1, the functions $z\in V \longmapsto H_n^s(z,\theta)$ are analytic. Fix ρ > 0 and a closed polydisc $D(z_0, 2\rho) \subset V$. Theorem 2 gives
where, for t ∈ [0, 1],
Furthermore, Fubini’s theorem gives
Finally, we obtain
and then, under Hypothesis 1, we obtain (2.1), which completes the proof of Proposition 3.
2.4. Proof of Proposition 4
Let K be a compact subset of ${\mathcal J}$. For a > 1, q ∈ K, and n ≥ 1, set
and
It is sufficient to show that, for $E \in \{E_{n,a}^+, E_{n,a}^-\}$,
Indeed, if this holds then, with probability 1, for each q ∈ K and $E \in \{E_{n,a}^+, E_{n,a}^-\}$, $\sum_{n\geq 1} \mu_{q}^s (E) < \infty$; hence, by the Borel–Cantelli lemma, for $\mu_{q}^s$-almost every $t \in \partial {\textsf{T}}$, if n is large enough, we have
Letting a tend to 1 along a countable sequence yields the result.
Let us prove (2.2) for $E = E_{n,a}^+$ (the case $E = E_{n,a}^-$ is similar). First, we have
where $M^s(u) = \sup_{q\in K} Y^s(q, u)$ and ν > 0 is an arbitrary parameter. For q ∈ K and ν > 0, we set $L_n(q,\nu) = \sum_{u\in \textsf{T}_n} M^s(u)^{1+\nu} \prod_{k=1}^n \exp( \psi_k(q) X_u - \tau(\psi_k(q))) a^{- \nu}$.
Recall from the proof of Proposition 2 that there exists a neighborhood $U_K \subset \mathbb{C}$ of K such that, for all $z \in U_K$ and k ≥ 1,
Lemma 5. Fix a > 1. For z ∈ UK and ν > 0, let
There exists a neighborhood $V \subset \mathbb{C}$ of K and a positive constant $C_K$ such that, for all z ∈ V and all integers n ≥ 1,
where pK is given by Proposition 2.
Proof. For z ∈ UK and ν > 0, let
Let q ∈ K. Since $\mathbb{E}(\widetilde L_1(q,\nu)) = a^{-\nu}$, there exists a neighborhood Vq ⊂ UK of q such that, for all z ∈ Vq, we have $\mathbb{E} ( | \widetilde L_1(z, \nu) | ) \le a^{-\nu/2}$. By extracting a finite covering of K from $\bigcup_{q\in K} V_q$, we find a neighborhood V ⊂ UK of K such that, for all z ∈ V, $\mathbb{E} ( | \widetilde L_1(z, \nu) | ) \le a^{-\nu/2}$. Without loss of generality (recall the proof of Proposition 2 and the fact that $\eta_k = o(1)$), we can suppose that, for all k ≥ 1,
for all z ∈ V. Therefore,
By Proposition 2, there exists pK ∈ (1, 2] such that, for all $u \in \bigcup_{n\ge 0} \mathbb{N}^n_+$,
Take $\nu = p_K - 1$ in the last calculation. It follows, from the independence of $\sigma (\{ (X_{u1}, \ldots, X_{uN_u}) : u \in {{\textsf{T}}_{n - 1}}\} )$ and $\sigma (\{ {Y^s}( \cdot ,u) : u \in {{\textsf{T}}_n}\} )$ for all n ≥ 1, that
completing the proof.
With probability 1, the functions $z \in V \mapsto L_n(z, \nu)$ are analytic. Fix ρ > 0 and a closed polydisc $D(z_0, 2\rho) \subset V$. Theorem 2 gives
where, for t ∈ [0, 1],
Furthermore, Fubini’s theorem gives
Since a > 1 and pK − 1 > 0, we obtain (2.2).
Appendix A. Cauchy formula in several variables
Let us recall the Cauchy formula for holomorphic functions.
Definition 1. Let D(ζ, r) be a disc in ℂ with centre ζ and radius r. The set ∂D is the boundary of D. Let $g \in {\mathcal C}(\partial D)$ be a continuous function on ∂D. We define the integral of g on ∂D as
where $\zeta(t) = \zeta + r\mathrm{e}^{\mathrm{i}2\pi t}$.
Theorem 2. Let D = D(a, r) be a disc in ℂ with radius r > 0, and let f be a holomorphic function in a neighborhood of D. Then, for all z ∈ D,
It follows that
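With the normalized boundary integral of Definition 1 and $\zeta(t)=a+r\mathrm{e}^{\mathrm{i}2\pi t}$, the formula and the estimate used in Section 2 presumably read
\[
f(z)=\int_0^1 f(\zeta(t))\,\frac{\zeta(t)-a}{\zeta(t)-z}\,\mathrm{d}t,
\qquad
\sup_{z\in D(a,r/2)}|f(z)|\ \le\ 2\int_0^1 |f(\zeta(t))|\,\mathrm{d}t,
\]
the second bound following from $|\zeta(t)-a|=r$ and $|\zeta(t)-z|\ge r/2$ for $z\in D(a,r/2)$; this is the standard single-variable form of the statement.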
Appendix B. Mass distribution principle
Theorem 3. ([Reference Falconer9, Theorem 4.2].) Let ν be a positive and finite Borel probability measure on a compact metric space (X, d). Assume that M ⊂ X is a Borel set such that ν(M) > 0 and
Then the Hausdorff dimension of M is bounded from below by δ.
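The hypothesis required here is the usual local one; a standard formulation (the exact form used is an assumption) is that, for some δ ≥ 0,
\[
\liminf_{r\to 0}\frac{\log \nu(B(t,r))}{\log r}\ \ge\ \delta \qquad \text{for every } t\in M,
\]
in which case dim M ≥ δ, as asserted.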
Lemma 6. ([Reference Biggins6].) If $\{X_i\}$ is a family of integrable and independent complex random variables with $\mathbb{E}(X_i) =0$, then $\mathbb{E} |\sum X_i|^p \leq 2^p \sum \mathbb{E} |X_i|^p$ for $1 \le p \le 2$.