1. Introduction
Many random graph models have been developed in recent decades to describe real-world complex systems such as social networks and the internet. Given a connected graph with n nodes, we assign positive random weights to the edges, representing the cost, the information transmission time, or the infection time (for example, in an epidemic model) between the vertices. We typically assume that these weights are independent and identically distributed (i.i.d.). The optimal path between two uniformly chosen vertices u and v is the path between them with the minimal sum of edge weights. More precisely, writing
$X_e\sim G$
for an edge e and a continuous distribution G, and writing
$\Gamma_{uv}$
for the set of all paths between u and v, the weighted length
$L_n=L_n(u,v)$
of the optimal path between u and v is given by
$$L_n(u,v) \coloneqq \min_{\pi \in \Gamma_{uv}} \sum_{e\in\pi} X_e.$$
So
$L_n$
can be viewed as the infection time of the vertex v given that u is infected (or vice versa) in a network epidemic model. The diameter of the resulting graph is the maximum of these optimal path lengths over all pairs of vertices, and the flooding time with respect to a vertex u is the maximal time needed to reach all the vertices in the graph starting from u. We use first passage percolation (FPP) techniques, which describe how a fluid spreads through a medium, to describe the asymptotic behavior of the diameter and the flooding time in the weighted configuration model, a random graph with prescribed degrees. Several authors, e.g. Fernholz and Ramachandran [Reference Fernholz and Ramachandran6] and van der Hofstad, Hooghiemstra, and Znamenski [Reference van der Hofstad, Hooghiemstra and Znamenski9], have studied the asymptotic behavior of the diameter for a non-weighted random graph.
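As a purely illustrative aside (the graph and weights below are hypothetical, not from the model above), $L_n(u,v)$ is exactly what Dijkstra's algorithm computes on a weighted graph:

```python
import heapq

def weighted_length(adj, u, v):
    """Dijkstra: minimal sum of edge weights over all paths from u to v."""
    dist = {u: 0.0}
    heap = [(0.0, u)]
    while heap:
        d, x = heapq.heappop(heap)
        if x == v:
            return d
        if d > dist.get(x, float("inf")):
            continue  # stale heap entry
        for y, w in adj[x]:
            nd = d + w
            if nd < dist.get(y, float("inf")):
                dist[y] = nd
                heapq.heappush(heap, (nd, y))
    return float("inf")  # u and v are not connected

# Toy undirected weighted graph; the weights play the role of the X_e.
edges = [(0, 1, 0.5), (1, 2, 0.2), (0, 2, 1.0), (2, 3, 0.3)]
adj = {i: [] for i in range(4)}
for a, b, w in edges:
    adj[a].append((b, w))
    adj[b].append((a, w))

print(weighted_length(adj, 0, 3))  # optimal path 0-1-2-3: 0.5 + 0.2 + 0.3 = 1.0
```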
Bhamidi, van der Hofstad, and Hooghiemstra [Reference Bhamidi, van der Hofstad and Hooghiemstra4, Reference Bhamidi, van der Hofstad and Hooghiemstra5] obtained the asymptotic distributions of the typical weight between two randomly chosen vertices and of the hopcount, which is the number of edges in the optimal path, first in the exponential weight case [Reference Bhamidi, van der Hofstad and Hooghiemstra4] and then in the general case [Reference Bhamidi, van der Hofstad and Hooghiemstra5]. Amini, Lelarge, and Draief [Reference Amini and Lelarge1, Reference Amini, Draief and Lelarge2] obtained a law of large numbers for the diameter and the flooding time in the configuration model with exponential edge weights. In this paper we generalize their results to all edge weight distributions having a certain exponential tail behavior.
2. Definitions and notations
We first recall the well-known configuration model described in detail in [Reference van der Hofstad8] and [Reference Bhamidi, van der Hofstad and Hooghiemstra5]. Given an integer n and a sequence
$\mathbf{d}\coloneqq (d_i^n)_{i=1}^n$
of non-negative integers such that
$\sum_{i=1}^n d_i^n$
is even, the configuration model on n vertices is constructed as follows.
We start with n vertices numbered 1 to n, and we assign
$d_i^n$
half-edges to the ith vertex. The random graph
$CM_n(\mathbf{d})$
is obtained by randomly choosing pairs of half-edges to form edges between the two corresponding vertices. Let
$F_n$
be the cumulative distribution of the degree of a randomly chosen vertex, denoted by
$D_n$
, that is,
$$F_n(x) \coloneqq \mathbb{P}(D_n \leq x) = \frac{1}{n}\,\#\{i \mid d_i^n \leq x\}.$$
We let
$V^n$
denote the set of vertices
$\{1,2,\ldots, n\}$
and let
$l_n\coloneqq \sum_{i\in V^n} d_i^n$
be the total degree of the graph. We assume that there exists a distribution
$\mathbf{p}=(p_k)_{k\geq 0}$
such that
$\mathbf{d}$
and
$\mathbf{p}$
satisfy the following regularity conditions (referred to collectively as Condition 1), as in [Reference Amini, Draief and Lelarge2].
(a)
${{\#\{i \mid d_i^n=r\}}/{n}}\to p_r$ as $n\to \infty$, for all $r\geq0$;
(b)
$d_{\min}\coloneqq \min_{i=1,\ldots, n}d_i^n\geq 3$ and
$p_{d_{\min}}>0$ ,
(c)
$\limsup_{n \rightarrow \infty } {n^{-1}} \sum_i (d_i^n) ^ {2+\delta}<\infty$ for a certain
$\delta>0$ .
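A minimal sketch of the half-edge pairing (illustrative only; the function name and degree sequence are ours, not the paper's): shuffle the half-edges uniformly at random and read off consecutive pairs as edges.

```python
import random

def configuration_model(degrees, seed=0):
    """Pair the half-edges uniformly at random (may create loops/multi-edges)."""
    half_edges = [i for i, d in enumerate(degrees) for _ in range(d)]
    if len(half_edges) % 2:
        raise ValueError("the total degree must be even")
    rng = random.Random(seed)
    rng.shuffle(half_edges)
    # consecutive entries of the shuffled list are matched into edges
    return list(zip(half_edges[::2], half_edges[1::2]))

degrees = [3, 3, 3, 3]                 # a d_min = 3 regular example
edges = configuration_model(degrees)
print(edges)
```

Counting each self-loop twice, every vertex i is incident to exactly $d_i^n$ half-edge endpoints, as the construction requires.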
Remark 2.1. Condition (c) above ensures the convergence of first and second moments of
$D_n$
to the respective moments of D (a random variable distributed according to
$(p_r)_{r\geq d_{\min}}$
). Moreover, it gives an upper bound for the maximal degree
$\Delta_n$
of the graph constructed on n vertices. Indeed, this condition is equivalent to $\sup_n \mathbb{E}[D_n^{2+\delta}]<\infty$, and hence $\mathbb{E}[D_n^2]$ is uniformly bounded and the sequence $(D_n^2)_{n\geq 1}$ is uniformly integrable. By this uniform integrability together with condition (a), we get
$$\mathbb{E}[D_n^2] = \sum_{r\geq 0} r^2\, p_r^{(n)} \ \longrightarrow\ \sum_{r\geq 0} r^2\, p_r = \mathbb{E}[D^2],$$
where
$p_r^{(n)}= {{\#\{i \mid d_i^n=r\}}/{n}}$
. The argument is similar for
$\mathbb{E}[D_n]\to \mathbb{E}[D]$
.
On the other hand, by writing
$\Delta_n$
for the maximal degree in
$CM_n(\mathbf{d})$
, we have
$$\Delta_n^{2+\delta} \leq \sum_{i=1}^n (d_i^n)^{2+\delta} = \mathrm{O}(n), \qquad\text{so}\qquad \Delta_n = \mathrm{O}\big(n^{1/(2+\delta)}\big).$$
Under these conditions the resulting random graph may have loops or multi-edges, but we will see that locally the random graph will not have either and will look like a random tree. This will be detailed as a coupling argument in Sections 4 and 5, based on [Reference Amini, Draief and Lelarge2] and [Reference Bhamidi, van der Hofstad and Hooghiemstra5]. In fact, for a vertex v picked at random among
$\{1,2,\ldots, n\}$
, the number of vertices at (graphical) distance r from v will tend in distribution, as n tends to infinity, to that of an inhomogeneous branching process which for generation 1 has distribution
$\mathbf{p}=(p_k)_{k\geq 0}$
for the number of offspring but thereafter has the ‘size-biased’ distribution
$$\widehat{p}_k \coloneqq \frac{(k+1)\,p_{k+1}}{\sum_{r\geq 1} r\,p_r}, \qquad k\geq 0, \tag{1}$$
for the number of offspring. Assumption (c) in Condition 1 guarantees that the distribution
$\widehat{\mathbf{p}}=(\widehat{p_k})_{k\geq 0}$
has finite mean (which we denote by
$\nu$
). Note that
$\nu $
is greater than
$d_{\min} -1 \geq 2$
.
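To make the size-biased law concrete: assuming the standard form $\widehat{p}_k=(k+1)p_{k+1}/\sum_r r p_r$ for the configuration model, the sketch below computes $\widehat{\mathbf{p}}$ and its mean $\nu$ for a hypothetical degree distribution.

```python
def size_biased(p):
    """p: dict k -> p_k.  Returns p_hat with p_hat[k-1] = k * p_k / E[D]."""
    m = sum(k * pk for k, pk in p.items())                    # E[D]
    return {k - 1: k * pk / m for k, pk in p.items() if k >= 1}

# Hypothetical degree law: half the vertices have degree 3, half degree 4.
p = {3: 0.5, 4: 0.5}
p_hat = size_biased(p)
nu = sum(k * pk for k, pk in p_hat.items())                   # mean of p_hat
print(p_hat, nu)   # nu = 18/7 > d_min - 1 = 2
```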
We recall the Malthusian parameter
$\alpha$
corresponding to the rate at which a continuous-time branching process grows, with splitting law
$\widehat{\mathbf{p}}=(\widehat{p_k})_{k\geq 0}$
and lifetimes distributed as G. It is the unique positive real number satisfying
$$\nu \int_0^\infty e^{-\alpha t}\, G(\mathrm{d}t) = 1. \tag{2}$$
The population of the branching process will grow at rate
$\alpha $
.
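Since $\alpha\mapsto \mathbb{E}[e^{-\alpha X}]$ is strictly decreasing, the defining relation, read as $\nu\,\mathbb{E}[e^{-\alpha X}]=1$ with lifetime $X\sim G$, can be solved numerically by bisection. The sketch below is illustrative: it uses exponential lifetimes with rate c, whose Laplace transform $c/(c+\alpha)$ gives the closed-form root $\alpha=c(\nu-1)$ to check against.

```python
def malthusian(nu, laplace, lo=0.0, hi=1.0):
    """Solve nu * laplace(alpha) = 1 by bisection; laplace must be decreasing."""
    f = lambda a: nu * laplace(a) - 1.0
    while f(hi) > 0.0:        # grow the bracket until the root lies inside
        hi *= 2.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Exponential lifetimes with rate c: E[exp(-alpha X)] = c / (c + alpha),
# so the Malthusian equation has the closed-form root alpha = c * (nu - 1).
c, nu = 1.0, 3.0
alpha = malthusian(nu, lambda a: c / (c + a))
print(alpha)   # close to c * (nu - 1) = 2.0
```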
The following distribution, which tends to the size-biased distribution as
$n\to \infty$
, will be used for the upper bound of the diameter:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU5.png?pub-status=live)
We let
$\nu_n$
denote its mean and
$\alpha_n$
its corresponding Malthusian parameter. It is easy to see that
$\nu_n\to \nu$
, and so we have that
$\alpha_n\to \alpha$
as
$n\to \infty$
using the fact that
$l_n/n={n^{-1}}\sum_{i=1}^n d_i^{n}$ tends to $m\coloneqq \mathbb{E}[D]$ by Condition 1.
We give i.i.d. positive random weights for the edges following a continuous law G that has an exponential tail behavior, that is,
$$\lim_{x\to \infty} \frac{-\log \overline{G}(x)}{x} = c \in (0,\infty), \tag{3}$$
where
$\overline{G}(x)\coloneqq 1-G(x)$
.
We write
$\text{dist}_w(a,b)$
for the sum of the weights along the optimal path between a and b, the weights being i.i.d. according to a continuous law G satisfying (3). We define the weighted diameter and the weighted flooding time of
$CM_n(\mathbf{d})$
as
$$\mathrm{diam}_w(CM_n(\mathbf{d})) \coloneqq \max_{a,b \in V} \mathrm{dist}_w(a,b), \qquad \mathrm{flood}(a) \coloneqq \max_{b \in V} \mathrm{dist}_w(a,b),$$
where V is the set of vertices of
$CM_n(\mathbf{d})$
, and where, in the flooding definition, the vertex a is chosen uniformly at random.
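As a concrete (purely illustrative) check of these definitions, both quantities can be read off from all-pairs weighted distances on a small graph; the graph and weights below are hypothetical.

```python
def all_pairs(n, edges):
    """Floyd-Warshall on an undirected weighted graph given as (a, b, w) triples."""
    INF = float("inf")
    d = [[0.0 if i == j else INF for j in range(n)] for i in range(n)]
    for a, b, w in edges:
        d[a][b] = min(d[a][b], w)
        d[b][a] = min(d[b][a], w)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if d[i][k] + d[k][j] < d[i][j]:
                    d[i][j] = d[i][k] + d[k][j]
    return d

edges = [(0, 1, 0.5), (1, 2, 0.2), (0, 2, 1.0), (2, 3, 0.3)]
d = all_pairs(4, edges)
diam_w = max(d[i][j] for i in range(4) for j in range(4))     # max over all pairs
flood = lambda a: max(d[a][b] for b in range(4))              # worst case from a
print(diam_w, flood(0))   # both equal 1.0 here (attained between vertices 0 and 3)
```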
From now on we say that an event
$A_n$
holds with high probability (w.h.p.) when
$\mathbb{P}(A_n)\to 1$
as
$n\to \infty$
.
The same methods used in this paper together with complementary arguments can be used to derive the general case (where
$d_{\min}\geq 1)$
, in a similar way to [Reference Amini and Lelarge1].
2.1. Exploration process
Instead of constructing the random graph and then looking for the optimal path between two vertices, we use a coupling argument as in [Reference Amini, Draief and Lelarge2] and [Reference Bhamidi, van der Hofstad and Hooghiemstra5], exploring balls of a particular size around the vertices and constructing the graph at the same time. The shortest weighted path between two vertices u and v is described by the first collision time of the two exploration balls around u and v. Another way to understand this is to imagine water percolating through the graph from two different nodes: the growing exploration ball around a vertex u at time t is then the set of nodes reached by the flow by this time, starting from u.
We now give a precise definition of this exploration process.
• At time 0 we look at the
$d_u$ half-edges incident to u and
$d_v$ half-edges incident to v, and remove all those forming self-loops at u or v. If two half-edges incident to u and v, respectively, are matched, they form a collision edge, and we assign to it a random weight according to G. We assign random weights with distribution G to the remaining half-edges and write A(0) for these unmatched half-edges.
• We wait until the minimum of lifetimes, denoted by
$T_1$ , of the active half-edges is reached (the minimum is unique almost surely since G is continuous).
• The corresponding half-edge, denoted by
$e^*$ , with weight
$T_1$ is matched with any other randomly chosen free half-edge, and we give weight
$T_1$ to the newly formed edge.
• We remove the newly discovered half-edges that form loops or cycles, and update
$A(T_1)$ by removing
$e^*$ from A(0) and adding the remaining newly discovered free half-edges.
Remark 2.2. This exploration process shows how to explore a neighborhood of a vertex by looking at the random weights on the edges and constructing the graph at the same time by random matching of the half-edges. The order in which we choose the half-edges to be paired in the configuration model does not affect this exploration process. In the rest of the paper we will use different variants of this exploration process, which will be useful in getting upper and lower bounds for the diameter of the random graph, in order to prove Theorem 3.1 given below.
3. Main theorem, overview of the approach
We now state the main result of this paper.
Theorem 3.1. Let
$CM_n(\mathbf{d})$
be a random graph constructed according to the configuration model, with i.i.d. edge weights with common law G satisfying condition (3), and the degree sequence satisfying Condition 1. Then we have
$$\frac{\mathrm{diam}_w(CM_n(\mathbf{d}))}{\log n}\ \xrightarrow{\ \mathbb{P}\ }\ \frac{1}{\alpha}+\frac{2}{c\,d_{\min}} \qquad\text{and}\qquad \frac{\mathrm{flood}(a)}{\log n}\ \xrightarrow{\ \mathbb{P}\ }\ \frac{1}{\alpha}+\frac{1}{c\,d_{\min}},$$
where
$\alpha$
is the Malthusian parameter of a branching process with degree law
$\widehat{\mathbf{p}}$
and edge weight distribution G for the particles; see [Reference Athreya and Ney3].
In the penultimate section we establish a ‘converse’.
Theorem 3.2. Let
$CM_n(\mathbf{d})$
be a random graph constructed according to the configuration model, with i.i.d. edge weights having a common continuous law G, and with the degree sequence satisfying Condition 1. If we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU8.png?pub-status=live)
then (3) holds for some value
$c\in (0,\infty)$
.
Theorems 3.1 and 3.2 generalize the result in [Reference Amini, Draief and Lelarge2]. According to these two theorems, the weighted diameter and flooding time on the configuration model are of order
$\log n$
as
$n\to \infty$
if and only if the weight distribution G belongs to a set of light-tailed distributions satisfying (3).
We will focus on proving Theorems 3.1 and 3.2 for the diameter of the graph. Based on the same techniques, we show in Section 7 how to get the desired asymptotics in these theorems for the flooding time.
The idea of the proof, for the diameter, is to study the growth of a ball (in the weighted distance) centered at a given vertex, and the time needed until any two such balls intersect. The same tools are used to study the behavior of the flooding time, so its proof is fairly straightforward once the result for the diameter is proved. The coupling argument for the growth of the balls and the simultaneous construction of the graph is explained in detail in [Reference Amini, Draief and Lelarge2]. The idea is to start from a vertex with a certain number of half-edges
$d_i$
and assign to each of them i.i.d. weights according to G.
This idea of exploring the neighborhood of a vertex, and coupling it with a branching process in order to study optimal paths on the configuration model, is similar to [Reference Bhamidi, van der Hofstad and Hooghiemstra5]. In this paper we couple the neighborhood of vertices with different variants of branching processes on which we impose some conditions (by ‘freezing’ some of the half-edges) in order to get lower and upper bounds for the time needed for a ball centered at a vertex to reach a certain size, which will allow us to compute the asymptotic behavior of the weighted diameter and flooding time in the graph. According to [Reference Bhamidi, van der Hofstad and Hooghiemstra5], we know that the typical size of the balls around two uniformly chosen vertices u and v for collision is of order
$\sqrt{n}$
. In our case, since we are studying the weighted diameter of the graph and thus considering all the
$\binom{n}{2}$
pairs of vertices, we will see that we need to explore the neighborhood of the vertices up to a size of order
$\sqrt{n\log n}$
.
The proof will be divided into two parts. In Section 4 we will prove the upper bound for the diameter by first finding an upper bound for the time needed to reach a size of
$K\log n$
half-edges while exploring the neighborhood of a vertex, where K is a constant that is chosen to be large enough, and that will be useful for proving the upper bound (see Theorem 4.2). We then show that for any
$\varepsilon > 0$
, with high probability, the time needed for all these
$K\log n$
half-edges to connect to new vertices is less than
$(1 + \varepsilon){(cd_{\min})^{-1}}\, {\log n} $
. We then show that we need at most
$\sqrt{\log n}$
time before having at least
$K\log n/2$
new splittings, each one giving at least two new half-edges (so we have at least
$K\log n$
new processes). Then, using a coupling argument, we show that, as
$n\to \infty$
, there exist at least two subprocesses, among the
$K\log n$
starting subprocesses, that will together reach a size of order
$\sqrt{n\log n}$
in a time bounded by
$(1 + \varepsilon ) {(2\alpha)^{-1}}\log n$
.
In Section 5 we show the lower bound for the diameter by finding at least two vertices u and v such that, for any
$\varepsilon>0$
,
$$\mathrm{dist}_w(u,v)\ \geq\ (1-\varepsilon)\left(\frac{1}{\alpha}+\frac{2}{c\,d_{\min}}\right)\log n \qquad\text{w.h.p.}$$
Finally, in the last section, we describe the behavior of the flooding time, as
$n\to \infty$
, using the same arguments and results as for the diameter.
The intuition behind this result can be compared to the result of flooding and diameter on a weighted complete graph studied by Janson [Reference Janson7]. In his case he proved that the typical weighted distance between two vertices is asymptotically
${{\log n}/{n}}$
. For the flooding time starting from a certain vertex, there is an additional price of
${{\log n}/{n}}$
to pay, and this cost is added twice when we consider the diameter.
In our case the typical weighted distance between two uniformly chosen nodes is of order
${{\log n}/{\alpha}}$
, as shown in a precise way in [Reference Bhamidi, van der Hofstad and Hooghiemstra5]. For the flooding time, we are fixing a node and looking for the maximum amount of time needed to reach all the other vertices in the graph. Some vertices are harder to reach, especially if they have the minimum number of neighbors
$d_{\min}$
and large weights on the edges connecting them with their neighbors. This is why there is a certain price to pay (that will be equal to
${{\log n}/{(c d_{\min})}}$
, as shown later) in order to guarantee reaching all the other vertices in the graph.
For the diameter, we have two remote nodes that can be ‘bad’ in the sense that they have
$d_{\min}$
neighbors and large weights on their edges. In this case we need to pay twice the price of
${{\log n}/{(c d_{\min})}}$
.
4. Upper bound
The purpose of this section is to provide the upper bound for the diameter needed for Theorem 3.1. As in [Reference Bhamidi, van der Hofstad and Hooghiemstra5], we will see that for two ‘typical’ vertices u and v, the weighted distance will correspond to twice the time needed for the discovery processes for u and v to reach approximately
$\sqrt{n\log n} $
half-edges.
We will write this time as
$U_1(u) + U_2(v) + U_3$
, where
$U_1(u)$
and
$U_2(v) $
are the times for the discovery processes for u and v, respectively, to gain
$K \log n$
half-edges, and
$U_3$
is twice the subsequent ‘time’ for the two clusters to meet. Typically (for any K) the values of
$U_1$
and
$U_2$
are
${\text{o}}(\!\log n)$
, and it is
$U_3$
(of order
$\log n)$
which dominates. However, we will see that for exceptional ‘slow’ points u and v,
$ U_1 $
and
$U_2$
can be of order
$\log n$
. We will also see that for K fixed but large, the term
$U_3 /\log(n)$
is very close to
$1/ \alpha $
uniformly over u and v.
In Sections 4.1 and 4.2 our chief aim is to bound the tails of the random variable
$U_1(u)$
(or
$U_2(v)$
) uniformly over all vertices. For a vertex
$v \in V$
and positive C, we define the random variable
$T_C (v) = \inf \{t\mid \text{the discovery process for \textit{v} has at least \textit{C} half-edges}\}$
. When a vertex v is given or fixed we drop the dependence on v and write
$T_C$
. The principal result for this section (which will be proved in Section 4.2) is
Proposition 4.1. For any
$\varepsilon > 0 $
and any
$K < \infty $
, we have
$$\mathbb{P}\bigg(\max_{v\in V^n} T_{K\log n}(v) > (1+\varepsilon)\,\frac{\log n}{c\,d_{\min}}\bigg)\ \longrightarrow\ 0 \qquad\text{as } n\to\infty.$$
Remark 4.1. Here and elsewhere we write
$T_{K\log n}$
and not
$T_{\lceil K\log n\rceil}$
, where an a priori non-integer value is offered for an integer argument.
This result follows immediately from the lemma below (by a union bound over the n vertices), which is shown in Section 4.2.
Lemma 4.1. For any
$\varepsilon>0$
, there exists
$h>0, \delta > 0$
such that, for sufficiently large n, we have
$$\mathbb{P}\bigg(T_{K\log n}(v) > (1+\varepsilon)\,\frac{\log n}{c\,d_{\min}}\bigg)\ \leq\ h\, n^{-1-\delta} \qquad\text{for all } v \in V^n.$$
4.1. The result for the tree branching process
In this subsection we consider a continuous-time generalized (non-Markov) branching process
$ (Z(t)\colon t \geq 0)$
with
$Z(0) = d_{\min} $
and so that individuals have a lifetime distributed independently as G, at the end of which they split into a random number of ‘offspring’ with law equal to the size-biased distribution
$\{\widehat{p}_k\}_{k \geq d_{\min}-1}$
given in (1). So by abuse of notation, in this subsection
$T_{K\log n} $
will denote the time for the branching process to attain population size
$K\log n$
. We prove the following result.
Lemma 4.2. For any
$\varepsilon > 0$
, there exists C and
$\delta > 0 $
so that
$$\mathbb{P}\bigg(T_{K\log n} > (1+\varepsilon)\,\frac{\log n}{c\,d_{\min}}\bigg)\ \leq\ C\, n^{-1-\delta}.$$
In Section 4.2 we will adapt the approach presented here to show the same result in the general case (Lemma 4.1), where the exploring ball around a vertex up to size
$K\log n$
may contain cycles, as can happen in a realization of the configuration model.
We fix v. To analyze
$T_{K\log n}$
, which represents the time needed for the continuous-time branching process starting from v to reach
$K\log n$
half-edges, we use some comparisons with simpler objects. This is chiefly to deal with the absence of the memoryless property for general distribution G satisfying (3). For the branching process extra edges can only serve to reduce the random variable
$T_{K\log n}$
, so we may (and shall) take the number of offspring to be deterministically equal to
$d_{\min} -1 \geq 2$
, since we are looking for an upper bound for
$T_{K\log n}$
. This being the case, we may regard our branching process as derived from a rooted tree where the root has
$d_{\min} $
‘offspring’ and subsequent vertices have
$d_{\min} -1 $
offspring. We associate with each edge e of the tree the random variable
$X_e$
, where the
$X_e$
are i.i.d. random variables distributed as G. The idea is to use condition (3) in order to stochastically upper-bound it by an exponential random variable and use these exponential random variables to find the desired upper bound. Our first real comparison process comes from ‘freezing’ the births of the branching process Z beyond the
$(\!\log n) ^ \gamma $
generation for some fixed
$0 < \gamma <1 $
. Alternatively we can see this as changing all the variables
$X_e $
corresponding to edges from a
$(\!\log n)^ \gamma $
generation vertex to equal infinity. Such a process must necessarily reach (at a random time) the configuration of
$d_{\min} (d_{\min} - 1) ^{(\!\log n)^ \gamma - 1} $
individuals, which is bigger than
$K\log n$
for large n. Thus, writing
$T^{\prime} _{K\log n} $
for the time this modified branching process has
$K\log n $
individuals, we obviously have
$ T_{K\log n} \leq T^{\prime}_{K\log n} $
, and thus an upper bound on tail probabilities for the latter will serve for the former. The next comparison process involves changing the
$X_e $
random variables to shifted exponentials. Property (3) entails that for each
$\varepsilon > 0 $
there exists
$R_ \varepsilon < \infty $
so that
$$\overline{G}(x)\ \leq\ e^{-c(1-\varepsilon)(x - R_\varepsilon)} \qquad\text{for all } x\geq 0,$$
from which it follows that G is stochastically dominated by the exponential distribution with parameter
$c(1- \varepsilon ) $
shifted by
$R_ \varepsilon $
to the right. By abuse of notation we write
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqn4.png?pub-status=live)
Accordingly we can couple random variables
$X_e $
with i.i.d.
$ \text{Exp} (c(1- \varepsilon )) $
random variables
$X^{\prime\prime}_e $
so that for each edge e,
$ X_e \leq X^{\prime\prime}_e + R_ \varepsilon $
.
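The coupling $X_e \leq X''_e + R_\varepsilon$ can be made completely explicit for a toy weight law (ours, not the paper's G): take $X = B + E$ with B uniform on $[0,1]$ and E exponential of rate 1, so the tail constant is $c = 1$; then $X'' \coloneqq E/(1-\varepsilon)$ is exponential of rate $1-\varepsilon$ and $X \leq X'' + R_\varepsilon$ holds pathwise with $R_\varepsilon = 1$.

```python
import random

rng = random.Random(1)
eps, R = 0.1, 1.0          # epsilon and the shift R_eps for this toy law (c = 1)

for _ in range(10_000):
    E = rng.expovariate(1.0)       # Exp(1) part of the weight
    B = rng.random()               # bounded part, B <= 1 = R_eps
    X = B + E                      # toy weight, tail constant c = 1
    X2 = E / (1.0 - eps)           # Exp(c(1 - eps)), coupled to X through E
    assert X <= X2 + R             # pathwise domination X_e <= X''_e + R_eps
print("domination held on all samples")
```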
Our final comparison involves
$T^{\prime\prime}_{K\log n} $
, the time for the branching process with variables
$G^{\prime\prime}(e) $
to have
$K\log n $
individuals where again no births after generation
$\log ^ \gamma (n) $
are permitted;
$T^{\prime\prime}_{K\log n} $
is obviously easier to deal with than its preceding objects. We also note that while in general
$T^{\prime\prime}_{K\log n} $
may be less than
$T^{\prime}_{K\log n} $
, given that we only allow generations up to
$\log ^ \gamma (n) $
, we have
$$T^{\prime}_{K\log n}\ \leq\ T^{\prime\prime}_{K\log n} + R_\varepsilon \log^\gamma (n),$$
and that the latter term is negligible compared to
$\log n $
as n becomes large.
So our proof of Lemma 4.2 has been reduced to proving the following result.
Lemma 4.3. For
$\varepsilon > 0, $
there exists
$h, \delta > 0 $
so that, for all n large,
$$\mathbb{P}\bigg(T^{\prime\prime}_{K\log n} > (1+\varepsilon)\,\frac{\log n}{c\,d_{\min}}\bigg)\ \leq\ h\, n^{-1-\delta}.$$
Before proving this lemma we will need an elementary counting result for regular trees. In our deterministic branching model each birth increases Z, the population size, by
${d_{\min}-2}$
. Thus it will increase the jump rate of Z by
$d_{\min}-2$
unless the
$d_{\min}-1$
offspring are of generation
$\log^\gamma\!(n) $
, in which case the rate is reduced by 1.
Writing L for the number of splittings needed to reach size
$K\log n$
, we have
$$K\log n = d_{\min} + L\,(d_{\min}-2), \qquad\text{that is,}\qquad L = \frac{K\log n - d_{\min}}{d_{\min}-2}. \tag{5}$$
As mentioned above, all integer variables used in this paper (which represent a certain number of splittings, generations, or number of half-edges) are written without
$\lceil\ \rceil$
brackets for simplicity.
We want to find an upper bound for
$T^{\prime\prime}_{K\log n}$
that will also serve as an upper bound (stochastically) for
$T_{K\log n}$
. To do so, we want to show that, in spite of the restrictions imposed on our modified process (only
$d_{\min}$
-degree vertices, freezing half-edges at generation
$(\!\log n)^\gamma$
), the number of half-edges discovered after each splitting is still sufficiently large to reach size
$K\log n$
in a time of order at most
$\log n$
.
By (5), the time
$T^{\prime\prime}_{K\log n}$
is equal to the sum of
$L = \dfrac{K\log n - d_{\min}}{d_{\min}-2}$
times between jumps of process Z. That is,
$T^{\prime\prime}_{K\log n} = \sum_{i=1}^{L} F_i$
, where
$F_i $
is the time between the
$(i-1) $
th jump and the ith. Conditional upon the generational information of the jumps up to the
$(i-1)$
th jump, the random variable
$F_i $
is an exponential random variable of parameter
$c(1- \varepsilon) $
times an integer which is measurable with respect to the information up to the
$(i-1)$
th jump. Up to
$ i = \log^ \gamma (n) -1$
, the parameter of
$F_i $
is non-random and equal to
$d_{\min} + (i-1)(d_{\min}-2) $
times
$c(1- \varepsilon ) $
. Thereafter the rate can rise or fall. The lemma below (which is far from optimal but sufficient for our needs) records that after this point the parameter of
$F_i $
, which we denote by
$f_i$
, has a large lower bound.
Lemma 4.4. For n large and for all
$ \log^ \gamma (n) -1 \leq i \leq L$
,
$F_i $
has parameter
$f_i$
satisfying
$$f_i\ \geq\ \tfrac{1}{2}\,c(1-\varepsilon)\log^\gamma (n).$$
Proof. Let M be the number of splittings at which generation
$(\!\log n)^\gamma$
is reached for the first time. Obviously
$M \geq \log^ \gamma (n) -1 $
. If
$(\!\log n)^\gamma-1\leq i<M$
, we have
$$f_i\ =\ c(1-\varepsilon)\big(d_{\min} + (i-1)(d_{\min}-2)\big)\ \geq\ \tfrac{1}{2}\,c(1-\varepsilon)\log^\gamma (n)$$
for large n and using the fact that
$d_{\min}\geq 3$
.
Suppose now that
$i\geq M$
. After the Mth jump there is a path from the root v to one of the generation
$(\!\log n)^\gamma$
individuals. Let
$u_j$
be the vertex belonging to this path at generation
$j\leq (\!\log n)^\gamma$
. Notice that at least
$\log^\gamma (n)/2$
generations can be discovered in the subtrees having roots
$u_1,u_2,\ldots, u_{{{ (\!\log n)^\gamma}/{2}}}$
before each one of them reaches a total of
$\log^\gamma\!(n)$
generations. This means that if one of these subtrees had all of its free half-edges at generation $\log^\gamma (n)$ (where they would not contribute to the jump rate, since they are ‘frozen’), then their number would be at least $(d_{\min}-1)^{\log^\gamma(n)/2}$.
However, for n sufficiently large we have
$$(d_{\min}-1)^{\log^\gamma(n)/2}\ >\ K\log n.$$
This contradicts
$i\leq L$
, where L is the number of splittings to reach size
$K\log n$
. This shows that each of these
${{\log^\gamma(n)}/{2}}$
subtrees has at least one free half-edge belonging to one of the first
$\log^\gamma(n)-1$
generations of the main branching process. In other words we see that (provided n is large enough) before time
$T^{\prime\prime}_{K\log(n)}$
each one of these subtrees must ‘supply’ a jump rate of at least
$c(1- \varepsilon) $
, and thus we have
$f_i\geq \log^\gamma (n)/2\times c(1-\varepsilon)$
.
Proof of Lemma 4.3. Using the same notations as in Lemma 4.4, we write
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU20.png?pub-status=live)
where
$\lambda\coloneqq c(1-\varepsilon)$
. We want to show that, for
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU21.png?pub-status=live)
and for any
$0<s<a$
, there exist
$h,\delta>0$
such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU22.png?pub-status=live)
for n large. This will finish the proof since
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqn6.png?pub-status=live)
for a certain
$h'>0$
. Using the Markov inequality and Lemma 4.4, and recalling that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU23.png?pub-status=live)
we obtain for T
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU24.png?pub-status=live)
Therefore, for large n and
$\varepsilon$
sufficiently small, there exists
$\delta>0$
such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU25.png?pub-status=live)
This concludes the proof by (6).
4.2. The result for the general case
In the previous section we showed the upper bound for the time needed to reach
$K\log n$
half-edges starting from a random vertex, assuming that no cycles or loops occur before that time. In this section we show that the same bound holds when we have one or more cycles. We say that two paths starting at a vertex v generate a cycle whenever they have another vertex vʹ in common. We extend this definition to the case where two half-edges incident to the same vertex are matched together and hence form a loop at this vertex.
We will first show that, with high probability, we need at most
$(1+\varepsilon){(cd_{\min})^{-1}} \log n $
time, starting from a vertex v, to reach
$K\log n$
half-edges if we only have exactly one cycle in the exploration process. Then we will show that the probability of having two or more cycles during this process is of smaller order than
$n^{-1-\delta}$
for some
$0<\delta<1$
, as
$n\to \infty$
. Hence this will be sufficient to prove the upper bound of
$T_{K\log n}$
in the general case.
Exactly one cycle. Suppose first that we have exactly one cycle before reaching
$K\log n$
half-edges. In this case the maximal degree of a newly discovered vertex should be less than
$K\log n$
(because we stop when we reach
$K\log n$
half-edges). On the other hand, we have at most
$K\log n$
half-edges that can create a cycle during this exploration process (before reaching
$K\log n$
half-edges). Hence the probability of having a cycle can be bounded as follows:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqn7.png?pub-status=live)
where
$C=K^2/m$
and where we used the fact that
${{l_n}/{n}}\to m$
when
$n\to \infty$
.
We will consider the
$d_{\min}$
-regular case where all newly added vertices have degree
$d_{\min}$
. Then we will show that, even in this case, the time needed to reach size
$K\log n$
(even if there are cycles and loops) is upper-bounded by
${{\log n}/{(cd_{\min})}}$
with high probability.
In order to justify the restriction to the
$d_{\min}$
-regular case, we use a comparison argument similar to that in the previous section to simplify the current setting.
If we have one cycle before reaching
$K\log n$
half-edges, we remove the two half-edges that formed a cycle, to obtain an almost
$d_{\min}$
-regular tree.
Let
$T^{\prime\prime\prime}_{K\log n}$
be the time needed for this almost
$d_{\min}$
-regular tree to reach size
$K\log n$
(by always connecting to new vertices with degree
$d_{\min}$
). This amount of time is clearly greater than or equal to that in the previous case, where only one cycle occurs and no restrictions on the degrees of the vertices are made.
Thus it is sufficient to show that Proposition 4.1 also holds for
$T^{\prime\prime\prime}_{K\log n}$
.
Notice first, as in the previous section, that the number
$S_i$
of live particles after the ith splitting in the
$d_{\min}$
-regular branching process is given by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU26.png?pub-status=live)
After removing the two half-edges that formed a cycle at the ith splitting (for a certain integer i), there are
$d_{\min}+(i-1)(d_{\min}-2)-2$
remaining half-edges. Therefore we need at most two new splittings to obtain at least
$S_i$
half-edges, since
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU27.png?pub-status=live)
We let
$\tau_1$
be the time spent until the ith splitting,
$\tau_2$
the time to reach at least
$S_i$
again after removing the two bad half-edges, and
$\tau_3$
the remaining time to reach
$K\log n$
half-edges. We write
$R_j=(i-1+j)(d_{\min}-2)$
for
$j=1,2$
. We obtain, for
$\varepsilon>0$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU28.png?pub-status=live)
We write
$C_i$
for the event ‘Exactly one cycle occurred, at the ith splitting’. We finally obtain, using Lemma 4.3 and (7),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU29.png?pub-status=live)
Writing Cʹ for the event ‘Exactly one cycle occurred before reaching size
$K\log n$
’, we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU30.png?pub-status=live)
Hence, taking the union of this event over all the vertices of the graph and writing
$h_1$
for this probability, we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU31.png?pub-status=live)
This shows that, starting from any vertex, we need with high probability at most
$ {{\log n}/{(cd_{\min})}}$
time to reach size
$K\log n$
in the exploration process around this vertex, assuming that we have at most one cycle in this exploration process.
Two or more cycles. On the other hand, using (7), the probability
$h_2$
of having two or more cycles before reaching size
$K\log n$
(starting from a fixed vertex) is bounded for large n by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU32.png?pub-status=live)
Writing Cʹʹ for the event ‘Two or more cycles occurred before reaching size
$K\log n$
’, we thus obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU33.png?pub-status=live)
Hence, taking the union of this event over all the vertices of the graph and writing
$h_3$
for this probability, we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqn8.png?pub-status=live)
4.3. Time for at least
${{K\log n}/{2}}$
splittings
Once we reach a number of
$K\log n$
half-edges in the exploration process (with a time upper-bounded with high probability by
${{\log n}/{(cd_{\min})}}$
as seen in the previous section), we let
$(R_i^v)_{i\leq K\log n}$
denote the random variables corresponding to the remaining times on the
$K\log n$
half-edges obtained in the previous branching process with the root v, and we write
$(X_i^v)_{i\leq K\log n}$
for the corresponding random variables with cumulative distribution function G representing the total weights on these half-edges. So we have
$R_i^v\leq X_i^v$
for all
$i\leq K\log n$
and any vertex v.
In the case of an exponential distribution for the edge weights (with rate 1, for example), the
$R_i^v$
also have the same exponential distribution by the memorylessness property of the exponential law. In this case we can study the time until collision between two balls around vertices u and v, where both of these vertices have degree
$K\log n$
. The law of the waiting time before the first splitting in one of these balls is
$\text{Exp}(K\log n)$
by the memorylessness property of
$\text{Exp}(1)$
.
Since
$X_i^v\sim G$
and G does not have the memorylessness property, the random variables
$R_i^v$
are not distributed according to G.
To circumvent this problem, we will show that at least
${{K\log n}/{2}}$
of the
$K\log n$
half-edges will be connected to new vertices in an amount of time of order
$\sqrt{\log n}$
. Since
$d_{\min}\geq 3$
, we will get at least
$K\log n$
new half-edges, and we have again that the lifetime of these new half-edges is distributed according to G. Since
$\sqrt{\log n}$
is negligible compared to
$\log n$
, the waiting time to get these
$K\log n$
new half-edges will not affect the upper bound of the diameter, which will be shown to be of order
$\log n$
. We can put aside the half-edges that do not connect to new vertices in this
$\sim \sqrt{\log n}$
amount of time (there are at most
${{K\log n}/{2}}$
of these half-edges). Hence, by not considering such half-edges, we will need even more time to reach the typical size for collision starting from at least
$K\log n$
(newly discovered) half-edges. It is then sufficient to show that the upper bound still holds in this case.
We will show the following theorem.
Theorem 4.1. Consider the
$K\log n$
half-edges that were reached by the branching process around v (as in Section 4.1), with
$(R_i^v)_{i\leq K\log n}$
denoting the remaining times on these half-edges before they connect to new vertices. Then we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU34.png?pub-status=live)
This will show that, starting from any vertex v, with high probability, at least
${{K\log n}/{2}}$
live particles in the corresponding exploration process will die in the next
$\sqrt{\log n}$
units of time, giving birth to at least two new particles (since
$d_{\min}\geq 3$
).
Proof. We first notice that the number of explored weighted half-edges needed to obtain
$K\log n$
live particles is less than
$3K\log n$
. To see that, we again consider the worst case where every vertex has degree
$d_{\min}$
. In this case we need
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU35.png?pub-status=live)
splittings to reach size
$K\log n$
. Therefore, for n sufficiently large, the number of weighted half-edges used in this process is given by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqn9.png?pub-status=live)
We let
$X_1,\ldots, X_{3K\log n}$
be the maximal set of random variables with cumulative distribution function G that were discovered during the exploration process before reaching size
$K\log n$
. We let A be the event that at least
${{K\log n}/{2}}$
of the
$X_i^v$
are bigger than
$\sqrt{\log n}$
. Since
$X_i^v\geq R_i^v$
for all i and vertices v, it is sufficient to prove that
$\mathbb{P}(A)\to 0$
faster than
${{1}/{n}}$
:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU36.png?pub-status=live)
where we used Stirling’s approximation
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU37.png?pub-status=live)
to approximate
$\binom{3K\log n}{3K\log n/2}$
.
Corollary 4.1. The result of Theorem 4.1 also holds in the general case, where one or multiple cycles can be created by two or more of the
$K\log n$
half-edges that were obtained by exploring the neighborhood of a vertex v.
Proof. The probability of having two or more cycles is negligible as
$n\to \infty$
(see (8)).
In the case of one cycle, we can show that Theorem 4.1 holds in the exact same way if we replace
${{K\log n}/{2}}$
with
${{K\log n}/{2}}+1$
. In other words, with high probability, at least
${{K\log n}/{2}}+1$
half-edges will be matched within a time of order
$\sqrt{\log n}$
. If one of them creates a cycle (by connecting to one of the
$K\log n$
half-edges in the exploration process around v), then the other
${{K\log n}/{2}}$
half-edges will connect to new vertices with degree at least
$d_{\min}\geq 3$
, so we reach at least
${{K\log n}/{2}}\times (d_{\min}-1)\geq K\log n$
new subprocesses within a time of order
$\sqrt{\log n}$
.
4.4. Time for collision starting from
$K\log n$
Starting with
$K\log n$
newly discovered subprocesses in the exploration process of vertices u and v respectively (as explained in Section 4.3), we write S(u, v) for the time spent exploring the
$2\times K\log n$
processes before the first collision between the two balls.
By Section 4.3, we may have more than
$K\log n$
free half-edges in each ball at this stage, but we consider only
$K\log n$
of them, which have weights distributed according to G (whereas other half-edges can have remaining lifetimes that are not distributed according to G).
The time needed for the collision between the two balls of size
$K\log n$
is greater than the time needed for collision of the original balls (which can contain more than
$K\log n$
half-edges, as explained above). Therefore it is sufficient to upper-bound the time needed to have a collision between the two balls of size
$K\log n$
each. We want to show that we need at most
$({{(1+\delta)}/{\alpha}})\log n$
time (with high probability) before the collision happens, for
$\delta>0$
arbitrarily small. The matching among these half-edges is explained in Section 2.1.
The size-biased distribution corresponding to a distribution
$(p_k)_{k\geq 0}$
is given by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqn10.png?pub-status=live)
where
$m=\sum_r rp_r$
and we let
$\nu\coloneqq \sum_{k}k\widehat{p_k}$
. We will use a slightly modified distribution in order to couple each of the
$K\log n$
processes with a continuous branching process with a maximal finite degree
$\Delta$
. For this, given
$\varepsilon>0$
, we define an i.i.d. sequence
$(Y_k)_{k\geq 0}$
with distribution
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqn11.png?pub-status=live)
where
$\varepsilon>0$
is small,
$\Delta$
is the maximal degree in this case verifying
$\Delta<\varepsilon^{{{-1}/{3}}}$
, and
$l_n$
is the total number of half-edges of the n vertices in the graph. Similarly, we define, for every
$k\geq 0$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU38.png?pub-status=live)
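For concreteness, the size-biased law (10) and its mean $\nu$ can be computed directly from a degree distribution. The sketch below uses an example degree law of our own choosing (it does not apply the additional truncation at $\Delta$ of (11)):

```python
# Size-biased offspring law of (10): p_hat_k = (k+1) * p_{k+1} / m, with
# m = sum_k k*p_k the mean degree; nu = sum_k k*p_hat_k is its mean.
p = {3: 0.5, 4: 0.3, 5: 0.2}                        # example degree law (ours)
m = sum(k * pk for k, pk in p.items())              # mean degree
p_hat = {k - 1: k * pk / m for k, pk in p.items()}  # forward-degree law
nu = sum(k * pk for k, pk in p_hat.items())
assert abs(sum(p_hat.values()) - 1.0) < 1e-12       # (10) is a distribution
assert nu > 1                                        # supercritical regime
```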
Coupling the forward degrees. We now present a coupling between the forward degrees (the degree minus 1 of a discovered vertex) and a sequence of i.i.d. random variables
$(Y_i)_{i\geq 1}$
with common law
$\mathbf{q}$
given in (11) (we write
$\mathbf{q}$
instead of
$\mathbf{q}^{\mathbf{n}}$
for simplicity).
We start by showing that the two balls collide with high probability whenever their sizes exceed
$C\sqrt{n\log n}$
for some constant C. For this, we first define, for a vertex u and time
$s>0$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU39.png?pub-status=live)
Proposition 4.2. For any pair of vertices
$u,v\in V^n$
, we have with high probability
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU40.png?pub-status=live)
where
$A_n\coloneqq \sqrt{3mn\log n}$
, and
$T_{A_n}(u)$
is the time needed for the ball around u to reach a total of
$A_n$
half-edges.
Proof. Fix two vertices u and v and suppose that
$B_u(T_{A_n}(u))$
and
$B_v(T_{A_n}(v))$
are disjoint. A free half-edge belonging to
$B_u(T_{A_n}(u))$
will be matched uniformly at random with another half-edge in the graph. Therefore the probability that it is not matched with a half-edge in
$B_v(T_{A_n}(v))$
is at most
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU41.png?pub-status=live)
Hence the probability that the two balls do not intersect immediately is upper-bounded by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU42.png?pub-status=live)
for sufficiently large n, where we used the fact that
$l_n/n\to m$
and fixed
$0<\delta<1$
. Thus, by summing over all the pairs of vertices (u, v) in the graph, this probability will tend to 0.
Let
$\tilde{p}_k$
be the probability of having
$1\leq k<\varepsilon^{{{-1}/{3}}}$
children after a splitting in one of the two balls before the collision happens, and supposing that we have at most one cycle. This probability obviously depends on the number of already matched half-edges, but this number is upper-bounded by
$4\sqrt{3mn\log n}$
by Proposition 4.2 and a computation similar to (9), so for large n we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqn12.png?pub-status=live)
We now focus on the evolution of the ball around u, by looking at the
$K\log n$
processes related to this ball. For the ith splitting,
$i\geq 1$
, we let
$\tilde{q}_{k,i}$
be the probability of obtaining k children, none of which belongs to a cycle or a loop, for
$2\leq k\leq \varepsilon^{-{{1}/{3}}}$
. Then, for large n, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU43.png?pub-status=live)
We write
$(U_i)_{i\geq 1}$
for a sequence of i.i.d. uniform random variables in (0, 1). The branching process approximation used in this section is constructed as follows.
• For the ith splitting of the
$K\log n$ processes related to u, if we have k children with
$k>\varepsilon^{-{{1}/{3}}}$ , then we freeze these half-edges and they will not be taken into account later on.
• If
$k<\varepsilon^{-{{1}/{3}}}$ , we keep these half-edges if they do not belong to a cycle and if
$U_i\leq {{q_k}/{\tilde{q}_{k,i}}}$ .
This gives us a coupling between each of the
$K\log n$
processes and a continuous branching process with offspring distribution $\mathbf{q}$.
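The acceptance test $U_i\leq {{q_k}/{\tilde{q}_{k,i}}}$ is a standard thinning step. The following sketch is ours, with illustrative sub-probability weights chosen so that $q_k\leq \tilde{q}_k$ for every k: among accepted draws, k is distributed proportionally to q.

```python
import random

# Draw k with law q_tilde, then keep it with probability q[k]/q_tilde[k]:
# among kept draws, k has law proportional to q. This thinning couples
# each splitting with the target offspring law.
rng = random.Random(2)
q_tilde = {2: 0.50, 3: 0.30, 4: 0.20}
q       = {2: 0.45, 3: 0.27, 4: 0.18}   # dominated: q[k] <= q_tilde[k]

def draw(dist, rng):
    u, acc = rng.random(), 0.0
    for k, pk in dist.items():
        acc += pk
        if u < acc:
            return k
    return k  # guard against rounding when u is close to 1

counts, kept = {2: 0, 3: 0, 4: 0}, 0
for _ in range(100000):
    k = draw(q_tilde, rng)
    if rng.random() <= q[k] / q_tilde[k]:   # the U_i acceptance test
        counts[k] += 1
        kept += 1
assert abs(counts[2] / kept - 0.5) < 0.02   # q renormalized: 0.45/0.9 = 0.5
```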
Remark 4.2. By (12) and the fact that
$q_k=0$
for
$k>\varepsilon^{-{{1}/{3}}}$
, we see that the time needed before the collision of the two balls is larger when considering the branching process with offspring distribution
$\mathbf{q}$
. This shows that the bound for this amount of time (before the collision) in the branching process case is sufficient to bound the actual amount of time in the general case.
We now let
$Z_t^n$
be the number of live particles at time t for a continuous branching process with the law for the children given by (11), bounded by
$\Delta$
, and continuous cumulative distribution G for the edge weights. We also write
$Z_t$
for the number of live particles at time t for a continuous branching process with the size-biased law for the children and continuous cumulative distribution G for the edge weights. By [Reference Athreya and Ney3, page 152], we know that in the supercritical case
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU44.png?pub-status=live)
where
$\nu$
is the average number of children at each splitting and
$\alpha$
is the Malthusian parameter corresponding to the process, which is the unique solution of
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU45.png?pub-status=live)
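The Malthusian parameter can be computed numerically from this equation. A bisection sketch of ours for exponential edge weights (the helper `malthusian` and its parameters are assumptions, not the paper's): here $\int_0^\infty {\text{e}}^{-\alpha t}\,{\text{d}}G(t)=\lambda/(\lambda+\alpha)$, so the closed form $\alpha=\lambda(\nu-1)$ is available as a check.

```python
def malthusian(nu, laplace, lo=0.0, hi=100.0):
    """Solve nu * laplace(alpha) = 1 by bisection; the left-hand side is
    decreasing in alpha, so the root is unique on (lo, hi)."""
    for _ in range(200):
        mid = (lo + hi) / 2
        if nu * laplace(mid) > 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

lam, nu = 1.0, 2.5                      # rate of G and offspring mean (ours)
alpha = malthusian(nu, lambda a: lam / (lam + a))
assert abs(alpha - lam * (nu - 1)) < 1e-8   # closed form for Exp(lam) weights
```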
Lemma 4.5. Let
$\nu_n$
and
$\nu_n^*$
be the expectations corresponding to
$p_k^n$
and
$q_k^n$
respectively, let
$\alpha_n$
and
$\alpha_n^*$
be the corresponding Malthusian parameters, and let
$\Delta$
denote the maximal degree of the graph. Then we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU46.png?pub-status=live)
Proof. We first see that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU47.png?pub-status=live)
Since
$\sum_{k=1}^\infty kp_k^n$
converges uniformly by assumption (c) in Condition 1, we have
$\nu_n-\nu_n^*\to 0$
when
$\varepsilon\to 0$
. Let
$\alpha_n$
and
$\alpha_n^*$
be the corresponding Malthusian parameters of
$\nu_n$
and
$\nu_n^*$
, which are the unique respective solutions of
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU48.png?pub-status=live)
We easily see that H is differentiable. Using the fact that
$\nu_n\to \mathbb{E}[D^*-1]<\infty$
, the derivative of H is bounded as follows for sufficiently large n:
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU49.png?pub-status=live)
We then obtain, for a certain
$\alpha_0\in (\alpha_n^*,\alpha_n)$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU50.png?pub-status=live)
Since
$\nu_n-\nu_n^*\to 0$
when
$\varepsilon\to 0$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU51.png?pub-status=live)
Theorem 4.2. For
$u,v\in V^n$
, we let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU52.png?pub-status=live)
Then, for n large enough, there exists
$\delta>0$
such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU53.png?pub-status=live)
where
$\alpha$
is the Malthusian parameter defined in (2) and where we recall that S(u,v) is the time spent exploring the
$2\times K\log n$
processes before the collision.
Remark 4.3. Condition (3) on the tail of the distribution G was used in Section 4 to upper-bound the time for the exploration process to reach size
$K\log n$
, as well as for the lower bound in Section 5, but it is not used to prove this theorem.
This will show that
$\mathbb{P}(\cup_{(u,v)\in V^n\times V^n} A(u,v))\to 0$
as
$ n\to \infty$
. In other words, with probability that tends to 1, and using the result of the previous section, we need at most
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU54.png?pub-status=live)
time before a collision happens between the exploration processes around any two uniformly chosen vertices, for an arbitrarily small
$\gamma>0$
.
Proof. Let
$Z_t^{*,n}$
denote the number of live particles in a continuous branching process with law G for the edges and probability
$q_k$
of having k children for every splitting and every
$k\geq 1$
. Since we have at least
$K\log n$
such processes coming from the exploration balls of u and v respectively, in order to simplify the notation we will write these processes as
$U_1(t),\ldots, U_{K\log n}(t)$
for those related to u and
$V_1(t),\ldots,V_{K\log n}(t)$
for v and
$U_i(t)$
,
$V_j(t)\sim Z_t^{*,n}$
,
$ 1\leq i$
,
$j\leq K\log n$
.
Let
$t^*$
be such that
${\text{e}}^{\alpha_n^*t^*}=\sqrt{3mn\log n}$
. We first notice that, for any
$\varepsilon>0$
, for n sufficiently large,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU55.png?pub-status=live)
We will now show that at least one pair of processes
$(U_i(t),V_i(t))$
that collide before time
$t^*$
.
By Proposition 4.2,
$(U_i(t),V_i(t))$
will collide with high probability before time
$t^*$
whenever
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU56.png?pub-status=live)
Since
$Z_t{\text{e}}^{-\alpha t}\stackrel{\text{a.s.}}{\to} c'W$
and W has a continuous distribution (see [Reference Athreya and Ney3]), there exists
$0<a<1$
such that, for large t,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU57.png?pub-status=live)
From this, we can easily deduce, again using Proposition 4.2, that the probability of collision between
$U_i(t^*)$
and
$V_i(t^*)$
is greater than
$(1-a)^2$
for large n.
Hence the probability that none of these pairs of processes
$(U_i(t),V_i(t))$
collide before time
$t^*$
is upper-bounded by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU58.png?pub-status=live)
By taking K sufficiently large, we get that this probability is bounded by
$n^{-2-\delta}$
for
$\delta>0$
.
By summing over all the pairs of vertices (u, v) in the graph, we can directly conclude that, with high probability, for any pair (u, v), and after reaching size
$K\log n$
around these two vertices, there will be collision in less than
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU59.png?pub-status=live)
time. By Lemma 4.5, for any
$\varepsilon>0$
, there exists
$\gamma>0$
such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU60.png?pub-status=live)
We conclude that we need at most
$(1+\gamma)\,{\alpha_n}^{-1} \log n$
time, with high probability, to have a collision between the two balls once they reach size
$K\log n$
each. This finishes the proof since
$\gamma$
is arbitrarily small and since
$\alpha_n\to \alpha$
as
$n\to \infty$
.
5. Lower bound
The goal of this section is to show that, for any
$\varepsilon>0$
, we have with high probability
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU61.png?pub-status=live)
To do this, it is sufficient to show that for any
$\varepsilon>0$
, we can find two vertices u and v in the graph such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU62.png?pub-status=live)
We will only deal with the worst case, where the exploration process starting from any vertex is a branching process.
Coupling the forward degrees. While exploring the neighborhood of a vertex u, we let
$\widehat{d}_i$
be the forward degree (the degree minus one) of the discovered vertex at the ith splitting. As in [Reference Amini, Draief and Lelarge2], we set
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU63.png?pub-status=live)
and we present a coupling of
$(\widehat{d_i})_{i\leq \beta_n}$
with an i.i.d. sequence of random variables. We write
$\Delta_n$
for the maximum degree in the random graph on n vertices. By writing the order statistics of the degrees as
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU64.png?pub-status=live)
we write
$\overline{m}^{(n)}\coloneqq \sum_{i\geq (\beta_n+1)\Delta_n}d_{(i)}^{(n)}$
, and we define the size-biased empirical distribution without considering the
$(\beta_n+1)\Delta_n-1$
lowest degrees as
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU65.png?pub-status=live)
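A sketch of this construction on a concrete degree sequence, under our reading of the definition (the sequence and the number of dropped low-degree entries are illustrative): sort the degrees, drop the lowest ones, and size-bias the remaining mass.

```python
# Drop the smallest order statistics (playing the role of the lowest
# (beta_n+1)*Delta_n - 1 degrees), size-bias the remaining mass m_bar,
# and record forward degrees d-1; the dropped share is o(n), so pi_bar
# approaches the size-biased distribution (10).
degrees = [3] * 500 + [4] * 300 + [5] * 200   # example degree sequence (ours)
drop = 50
kept = sorted(degrees)[drop:]
m_bar = sum(kept)
pi_bar = {}
for d in kept:
    pi_bar[d - 1] = pi_bar.get(d - 1, 0) + d / m_bar
assert abs(sum(pi_bar.values()) - 1.0) < 1e-9   # pi_bar is a distribution
```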
By Remark 2.1, we know that
$\Delta_n={\text{o}}(\sqrt{n/\log n})$
. We then conclude that
$\Delta_n\beta_n={\text{o}}(n)$
.
Hence it is easy to see that
$\overline{\pi}^{(n)}$
tends to the size-biased distribution
$\widehat{\mathbf{p}}$
defined in (10) as
$n\to \infty$
.
The following lemma, proved in [Reference Amini, Draief and Lelarge2], will be used for the proof of the main result of this section, Proposition 5.1.
Lemma 5.1. For a randomly chosen vertex u and
$i\leq \beta_n$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU66.png?pub-status=live)
where
$\overline{D}_i^{(n)}$
are i.i.d. with distribution
$\overline{\pi}^{(n)}$
.
For a vertex u and time
$t>0$
, let
$B'(u,t)\coloneqq \{v \mid \text{dist}_w(N(u),v)\leq t\}$
, where N(u) represents the set of neighbors of u in the graph. Based on Proposition 4.3 in [Reference Amini, Draief and Lelarge2], we show the following proposition.
Proposition 5.1. Let
$CM_n(\mathbf{d})$
denote the random graph constructed with n vertices and a degree sequence
$\mathbf{d}=(d_i)_{i=1}^n$
. Let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU67.png?pub-status=live)
where
$\alpha$
is the Malthusian parameter corresponding to a branching process with edge weight distribution G and size-biased offspring distribution
$\widehat{\mathbf{p}}$
. For any two uniformly chosen vertices
$u,v\in V_{d_{\min}}$
, we have with high probability
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU68.png?pub-status=live)
Proof. According to [Reference Athreya and Ney3], in the case of a supercritical age-dependent branching process
$(Z_t)_{t\geq 0}$
, there exists a constant cʹ such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqn13.png?pub-status=live)
Let
$u\in V_{d_{\min}}$
. We consider the worst case for which Bʹ(u, t) is the union of
$d_{\min}$
branching processes growing until time
$t>0$
and with forward degree
$\overline{D}_i^{(n)}$
for the ith splitting. We denote these branching processes by
$(Z_t^1)_{t\geq 0},\ldots, (Z_t^{d_{\min}})_{t\geq 0}$
. Writing
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU69.png?pub-status=live)
where
$\alpha_n$
is the Malthusian parameter corresponding to
$\overline{\pi}^{(n)}$
and G, we know that
$\alpha_n\to \alpha$
as
$n\to \infty$
. Let
$z_n\coloneqq \sqrt{{{n}/{\log n}}}$
. We define
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU70.png?pub-status=live)
Using (13), we have, for any
$1\leq i\leq d_{\min}$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU71.png?pub-status=live)
This implies that
$q_n\to 1$
as
$n\to \infty$
. Therefore, with high probability, the size of
$B'(u,t^{\prime}_n)$
is bounded by
$z_n$
. Consequently, the probability of getting a collision edge between
$B'(u,t^{\prime}_n)$
and
$B'(v,t^{\prime}_n)$
is bounded by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU72.png?pub-status=live)
which completes the proof.
Remark 5.1. By [Reference Amini, Draief and Lelarge2], we have that the number of free half-edges after
$\beta_n$
splittings,
$S_{\beta_n}(u)$
, in the exploration process around a vertex u satisfies, for large n,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU73.png?pub-status=live)
This means that, for any uniformly chosen vertex, we need with high probability at most
$\beta_n$
splittings before reaching size
$\sqrt{3mn\log n}$
, which is the typical size order for collision according to Proposition 4.2. Hence coupling the first
$\beta_n$
forward degrees in the exploration process of a given vertex before collision (with another ball) is sufficient with high probability.
Let
$V_{d_{\min}}$
be the set of vertices of degree
$d_{\min}$
and let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU74.png?pub-status=live)
A vertex in
$V_{d_{\min}}$
is called bad if the weights on its
$d_{\min}$
connected edges are all greater than
$s_n$
. We also write
$A_u$
for the event that u is a bad vertex.
The following lemma shows that the average number of bad vertices in the graph tends to infinity as
$n\to \infty$
but is negligible compared to n.
Lemma 5.2. For any
$\varepsilon>0$
, there exist
$a_\varepsilon,b_\varepsilon>0$
such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU75.png?pub-status=live)
Proof. By condition (3), for any
$\varepsilon>0$
, there exists
$R^{\prime}_\varepsilon$
such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU76.png?pub-status=live)
Using this, and writing
$X_1,\ldots, X_{d_{\min}}$
for the random weights on the half-edges connected to a vertex
$u\in V_{d_{\min}}$
, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU77.png?pub-status=live)
where
$a_\varepsilon\coloneqq {\text{e}}^{-c(1+\varepsilon)R^{\prime}_{\varepsilon}}$
. From this, we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU78.png?pub-status=live)
The upper bound for
$\mathbb{E}[Y]$
, where $Y=\sum_u \mathbbm 1_{A_u}$ denotes the number of bad vertices, follows similarly using condition (4).
Lemma 5.3. Let
$Y=\sum_u \mathbbm 1_{A_u}$
denote the number of bad vertices in the graph. Then we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU79.png?pub-status=live)
Proof. Using the fact that
$\text{Cov}(\mathbbm 1_{A_u},\mathbbm 1_{A_v})$
and
$\text{Var}(\mathbbm 1_{A_u})$
are both upper-bounded by
$\mathbb{P}(A_u)$
, we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU80.png?pub-status=live)
By Chebyshev’s inequality, we obtain for
$A>0$
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU81.png?pub-status=live)
Taking
$A=\frac{1}{3}\mathbb{E}[Y]$
, we get for large n
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU82.png?pub-status=live)
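For completeness, the Chebyshev step with this choice of $A$ is the standard second-moment computation, independent of the model:

```latex
\begin{equation*}
\mathbb{P}\Bigl( Y \le \tfrac{2}{3}\,\mathbb{E}[Y] \Bigr)
\;\le\; \mathbb{P}\Bigl( \bigl| Y - \mathbb{E}[Y] \bigr| \ge \tfrac{1}{3}\,\mathbb{E}[Y] \Bigr)
\;\le\; \frac{9\,\mathrm{Var}(Y)}{\mathbb{E}[Y]^{2}} .
\end{equation*}
```

The right-hand side is small once $\mathrm{Var}(Y)=o(\mathbb{E}[Y]^2)$, which is where the covariance bound above enters.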
We let $Y'$ denote the number of bad vertices belonging to
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU83.png?pub-status=live)
for a uniformly chosen vertex a. By Proposition 5.1, we have, for any vertex i,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU84.png?pub-status=live)
By Markov’s inequality we deduce that
$Y'\leq \frac{1}{3}\mathbb{E}[Y]$
with high probability, and thus, by Lemma 5.2,
$Y-Y'>0$
with high probability.
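In display form, writing the bound of Proposition 5.1 (hypothetically, as to its exact shape) as $\mathbb{E}[Y']=o(\mathbb{E}[Y])$, the Markov step reads:

```latex
\begin{equation*}
\mathbb{P}\Bigl( Y' > \tfrac{1}{3}\,\mathbb{E}[Y] \Bigr)
\;\le\; \frac{3\,\mathbb{E}[Y']}{\mathbb{E}[Y]} \;\longrightarrow\; 0 ,
\end{equation*}
```

and on the complement event, together with $Y \ge \tfrac{2}{3}\mathbb{E}[Y]$ from Lemma 5.3, we get $Y - Y' \ge \tfrac{1}{3}\mathbb{E}[Y] > 0$.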
We write
$R= \binom{Y}{2}$
for the number of pairs of distinct bad vertices and $R'$ for the number of pairs of distinct bad vertices at distance at most
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU85.png?pub-status=live)
By Proposition 5.1 it is easy to see that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU86.png?pub-status=live)
Using this, we get
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU87.png?pub-status=live)
Therefore, with high probability, the difference
$R-R'$
is strictly positive by Lemma 5.2. We deduce that for any
$\varepsilon>0$
we can find two vertices that are at distance greater than
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU88.png?pub-status=live)
In other words we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU89.png?pub-status=live)
Since
$\varepsilon$
is arbitrary, this proves the lower bound on the diameter, and thus, by Section 4, we finally obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU90.png?pub-status=live)
6. Proof of the converse theorem
A first step towards proving Theorem 3.2 is the following lemma, which says simply that exponential tails are required for the diameter (or the flooding time) to scale as
$\log n $
in the sense of the theorem.
Lemma 6.1. If
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU91.png?pub-status=live)
then for every
$M < \infty$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU92.png?pub-status=live)
Remark 6.1. The claimed conclusion obviously contradicts the hypotheses of Theorem 3.2, and so in particular any G for which the hypotheses of Theorem 3.2 hold must possess all moments.
Proof. It is easily seen that, by hypothesis, for every
$ \varepsilon > 0$
there exists a sequence of integers
$n_j $
tending to infinity so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU93.png?pub-status=live)
Thus we easily have that, with probability tending to 1 as
$j \rightarrow \infty $
, there exist vertices
$v \in V_{{d_{\min}}} \subset V^{n_j} $
such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU94.png?pub-status=live)
This implies that the diameter or the flooding time of the graph
$CM_{n_j}(\mathbf{d}) $
must exceed
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU95.png?pub-status=live)
The conclusion follows from the arbitrariness of
$\varepsilon > 0 $
.
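To see the obstruction quantitatively, here is a back-of-the-envelope sketch with a hypothetical polynomial (Pareto) tail $\mathbb{P}(X>t)=t^{-a}$ (the exponent $a$ here is illustrative and is not the paper's $\alpha$): the level at which some degree-$d_{\min}$ vertex has all of its edge weights large is polynomial in $n$, hence it eventually exceeds $M\log n$ for every fixed $M$:

```python
import math

def one_in_n_threshold(n, d_min, tail_a=1.0):
    """Level t_n at which roughly one of n vertices of degree d_min has all
    d_min incident edge weights above t_n, assuming the hypothetical Pareto
    tail P(X > t) = t**(-tail_a): solve n * t_n**(-tail_a * d_min) = 1."""
    return n ** (1.0 / (tail_a * d_min))

n, d_min = 10**6, 2
t_n = one_in_n_threshold(n, d_min)      # about 1000 here: polynomial in n
assert t_n > 10 * math.log(n)           # so it dwarfs M * log n for fixed M
```

With an exponential tail the analogous threshold is only of order $\log n$, which is precisely why exponential tails are the boundary case for $\log n$ scaling.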
As usual we establish convergence by suitably bounding the
$\limsup$
above and the
$\liminf$
below. Theorem 3.2 follows from the next two lemmas.
Lemma 6.2. For a distribution G satisfying the hypotheses of Theorem 3.2,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU96.png?pub-status=live)
Lemma 6.3. For a distribution G satisfying the hypotheses of Theorem 3.2,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU97.png?pub-status=live)
Proof of Lemma 6.2. Suppose not. Then there exist
$\varepsilon > 0 $
and a sequence of integers
$n_j $
tending to infinity so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU98.png?pub-status=live)
We can now argue as in Section 5. For the random graph
$CM_{n_j}(\mathbf{d})$
we have that
$M_j $
, the number of vertices v in
$V^{n_j}_{d_{\min}} $
so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU99.png?pub-status=live)
satisfies the following two conditions with probability tending to one as j tends to infinity.
(i)
$M_j \geq c n_j^{1 - (1- \varepsilon)/(1- \varepsilon /2 )}$ for some universal constant $c>0$.
(ii) For each
$ \delta > 0 $
, two randomly chosen vertices u and v among these
$M_j $
vertices satisfy, with probability tending to one as j tends to infinity,
\begin{equation*}B\biggl(u, \log(n) \dfrac{1 - \delta }{\alpha}\biggr) \cap B\biggl(v, \log(n) \dfrac{1 - \delta }{\alpha}\biggr) = \emptyset.\end{equation*}
Taking
$\delta $
sufficiently small with respect to
$\varepsilon $
gives
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU101.png?pub-status=live)
which contradicts the hypotheses of Theorem 3.2.
Proof of Lemma 6.3. Suppose not. In this case there exist
$\varepsilon > 0 $
and a sequence of integers
$n_j $
tending to infinity so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU102.png?pub-status=live)
We may assume by Lemma 6.2 that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU103.png?pub-status=live)
and from this we can apply the argument of Proposition 4.1 and see that, as j tends to infinity, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU104.png?pub-status=live)
Thus
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU105.png?pub-status=live)
which again contradicts the hypotheses of Theorem 3.2.
7. Flooding
In this section we show, based on the proofs and results obtained in Sections 4 and 5, that with high probability the weighted flooding time behaves like
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU106.png?pub-status=live)
We first show that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU107.png?pub-status=live)
as
$n\to \infty$
. We let
$T_{u,\sqrt{n\log n}}(G)$
be the time, starting from vertex u, needed to reach
$\sqrt{n\log n}$
half-edges, given that the edge weights have cumulative distribution function G. We have already shown, in Section 4, that with high probability, for any vertex v of the graph,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU108.png?pub-status=live)
Hence it is sufficient to show that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU109.png?pub-status=live)
for a randomly chosen vertex a.
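Schematically, and suppressing the (negligible at this scale) weight of the connecting path at the collision, these two steps combine as

```latex
\begin{equation*}
\max_{v} L_n(a,v)
\;\le\; T_{a,\sqrt{n\log n}}(G) \;+\; \max_{v} T_{v,\sqrt{n\log n}}(G)
\end{equation*}
```

with high probability, since two balls each containing $\sqrt{n\log n}$ half-edges collide with high probability by the birthday-type argument behind Proposition 4.2.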
Using computations similar to those in Lemma 4.1, we have, for any
$\varepsilon'>0$
,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU110.png?pub-status=live)
By Section 4.4 we have that, with high probability, the time needed to reach
$\sqrt{n\log n}$
half-edges starting from
$K\log n$
half-edges is smaller than
${(2\alpha)^{-1}}\,\log n$
. Therefore we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU111.png?pub-status=live)
Since
$\varepsilon'$
is arbitrary, we finally obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU112.png?pub-status=live)
For the lower bound we recall the notation introduced in Section 5. Since
$Y'\leq \frac{1}{3}\mathbb{E}[Y]$
with high probability and using Lemma 5.3, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU113.png?pub-status=live)
In other words, with high probability, there exists a vertex w that does not belong to
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU114.png?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU115.png?pub-status=live)
This is equivalent to
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20200903100850284-0087:S0021900220000455:S0021900220000455_eqnU116.png?pub-status=live)