
First passage percolation on sparse random graphs with boundary weights

Published online by Cambridge University Press:  30 July 2019

Lasse Leskelä*
Affiliation:
Aalto University
Hoa Ngo*
Affiliation:
Aalto University
*Postal address: Department of Mathematics and Systems Analysis, Aalto University, Otakaari 1, 02015 Espoo, Finland.
**Email address: lasse.leskela@aalto.fi

Abstract

A large and sparse random graph with independent exponentially distributed link weights can be used to model the propagation of messages or diseases in a network with an unknown connectivity structure. In this article we study an extended setting where, in addition, the nodes of the graph are equipped with nonnegative random weights which are used to model the effect of boundary delays across paths in the network. Our main results provide approximative formulas for typical first passage times, typical flooding times, and maximum flooding times in the extended setting, over a time scale logarithmic with respect to the network size.

Type
Research Papers
Copyright
© Applied Probability Trust 2019 

1. Introduction

Classical first passage percolation theory, initiated about half a century ago in [Reference Hammersley, Welsh, Neyman and Le Cam10], studies a connected undirected graph G where each adjacent node pair e is assigned a weight W(e) > 0. When the weights are independent and identically distributed random variables, then

$$W_G(u, v) \ = \ \inf_{\Gamma}\sum_{e \in \Gamma} W(e),$$

where the infimum is taken over all paths Γ in graph G from u to v, defines a natural random metric which has been intensively studied in a wide variety of settings, especially integer lattices [Reference Auffinger, Damron and Hanson5]. The quantity WG(u, v) may be interpreted as the first passage time from u to v, when the link weights are considered as transmission times. A relevant quantity of interest in modern social and information networks is the flooding time maxv WG(u, v), which corresponds to the time it takes for a message or disease to spread from a single root node u to all other nodes along the paths of the graph. Alternatively, the link weights can be viewed as economic costs, congestion delays, or carrying capabilities that can be encountered in various real networks [Reference Newman16, Reference Van Mieghem19].

In this paper we study a generalized version of the above setting where, in addition to link weights, each node is assigned two weights X0(v) ≥ 0 and X1(v) ≥ 0, and we define

$$W(u, v) \ = \ X_0(u) + W_G(u, v) + X_1(v).$$

When the weights are considered as transmission times, W(u, v) can be interpreted as the first passage time from u to v in a setting where X0(u) represents the entry delay and X1(v) the exit delay along a path from u to v in a network modeled by the graph G. The above formulation can also correspond to a generalization of the susceptible–infectious (SI) epidemic model [Reference Andersson and Britton4] with incubation times by setting X0(v) = 0 and letting X1(v) represent the length of time during which an infected individual v spreads a disease while displaying no symptoms of illness. In this case WG(u, v) represents the time until node v becomes infected, and W(u, v) the time until node v becomes acutely ill in a population where initially node u is ill and all other nodes are susceptible.
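Since the link weights are positive, the weighted distance WG(u, v) and its boundary-weighted extension W(u, v) can be computed by Dijkstra's algorithm. The following minimal sketch illustrates this; the helper names `first_passage_times` and `boundary_weighted_time` are ours, not from the paper, and the weight dictionary is assumed symmetric.

```python
import heapq


def first_passage_times(adj, w, root):
    """Dijkstra's algorithm: returns {v: W_G(root, v)} for every node v
    reachable from root, where w[(u, v)] = w[(v, u)] > 0 is the link weight."""
    dist = {root: 0.0}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + w[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist


def boundary_weighted_time(adj, w, x0, x1, u, v):
    """The extended passage time W(u, v) = X_0(u) + W_G(u, v) + X_1(v)."""
    return x0[u] + first_passage_times(adj, w, u)[v] + x1[v]
```

For example, on a triangle with weights w(0, 1) = w(1, 2) = 1 and w(0, 2) = 3, the indirect path gives WG(0, 2) = 2, and boundary weights are simply added at the endpoints.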

The main results of this paper are approximate formulas for W(u, v), maxv W(u, v), and maxu,v W(u, v) in a large and sparse random graph G, when the link weights (W(e))e∈E(G) and the node weights (X0(v), X1(v))v∈V(G) are mutually independent collections of independent random numbers, such that W(e) is exponentially distributed with rate parameter λ > 0, and the distribution of Xi(v) has an exponential tail with rate parameter λi ∈ (0, ∞] in the sense that

(1) $$\lim_{t \to \infty} \frac{-\log \mathrm P(X_i(v) > t)}{t} \ = \ \lambda_i, \qquad i = 0, 1.$$

The case λi = ∞ includes distributions with bounded support, for example the uniform distribution on [0, 1], and the degenerate case with Xi(v) = 0 almost surely. No restrictions on the joint distribution of X0(v) and X1(v) are required for the main results.

1.1. Notation

A large network is modeled as a sequence of graphs indexed by a scale parameter n = 1, 2, ... Hence most scalars, probability distributions, and random variables depend on n, but this dependence is often omitted for clarity. In particular, we write P instead of Pn for the probability measure characterizing events related to the model with scale parameter n. An event depending on n is said to occur with high probability if its probability tends to one as n → ∞. The symbol $\overset{\mathrm p}{\longrightarrow}$ refers to convergence in probability. We write f(n) = o(g(n)) if limn→∞ f(n)/g(n) = 0, and f(n) = O(g(n)) if lim supn→∞ f(n)/g(n) < ∞. We write $X \overset{\mathrm d}{=} Y$ when random variables X and Y have the same distribution. The positive part of a number x is denoted (x)+ = max{x, 0}.

2. Main results

Given a list of nonnegative integers d = (d1, ..., dn), let G = G(n, d) be a random graph which is uniformly distributed in the set 𝒢(n, d) of all undirected graphs on node set [n] = {1, ..., n} such that node v has degree dv for all v. We assume that the degree list d satisfies the Erdős–Gallai condition [Reference Marshall, Olkin and Arnold15, Theorem C.7], so that 𝒢(n, d) is nonempty. A stochastic model for a sparse large graph is obtained by considering a sequence of random graphs G = G(n, d(n)) with degree lists $d^{(n)} = (d_1^{(n)}, \ldots, d_n^{(n)})$ indexed by n = 1, 2, ... such that the empirical degree distribution

$$f_n(k) \ = \ \frac{1}{n}\sum_{v=1}^{n} \mathbf 1(d_v^{(n)} = k)$$

converges to a limiting probability distribution f with a nonzero finite mean $\mu = \sum_k k\, f(k)$ according to

(2) $$f_n(k) \ \to \ f(k) \qquad \text{for all } k \ge 0.$$

Throughout, we will also assume that, for all n,

(3) $$\sum_k k^{2 + \varepsilon} f_n(k) \ \le \ c$$

and

(4) $$\min_v d_v^{(n)} \ \ge \ \delta$$

for some constants c, ε > 0 and δ ≥ 3 such that f(δ) > 0. Condition (3) implies that the family of probability measures (fn)n≥1 is relatively compact in the 2-Wasserstein topology [Reference Leskelä and Vihola14] and guarantees that the mean and the variance of the empirical degree distribution converge to finite values which are equal to the mean and variance of the limiting distribution. Condition (4) in turn implies that G is connected with high probability [Reference Amini, Draief and Lelarge2, Reference van der Hofstad18].

The following theorem summarizes the main results of the paper. Here, u* and v* represent uniformly and independently randomly chosen nodes, corresponding to typical values of the quantities of interest.

Theorem 1. Let G = G(n, d(n)) be a random graph satisfying the regularity conditions (2)–(4). Then, for independent and uniformly random nodes u* and v*,

(5) $$\frac{W(u^*, v^*)}{\log n} \ \overset{\mathrm p}{\longrightarrow} \ \frac{1}{\lambda(\nu - 1)},$$
(6) $$\frac{\max_v W(u^*, v)}{\log n} \ \overset{\mathrm p}{\longrightarrow} \ \frac{1}{\lambda(\nu - 1)} + \frac{1}{\lambda\delta \wedge \lambda_1},$$
(7) $$\frac{\max_{u,v} W(u, v)}{\log n} \ \overset{\mathrm p}{\longrightarrow} \ \frac{1}{\lambda\delta \wedge \lambda_0} + \frac{1}{\lambda(\nu - 1)} + \frac{1}{\lambda\delta \wedge \lambda_1},$$

where ν = ∑k k(k − 1)f(k)/ ∑k kf (k).
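Both constants entering Theorem 1 can be read off directly from a degree list: the mean μ of the empirical degree distribution and ν, the mean of its downshifted size biasing. A minimal sketch (the function name `degree_constants` is ours):

```python
from collections import Counter


def degree_constants(degrees):
    """Empirical degree distribution f_n, its mean mu, and
    nu = sum_k k(k-1) f(k) / sum_k k f(k)."""
    n = len(degrees)
    f = {k: count / n for k, count in Counter(degrees).items()}
    mu = sum(k * p for k, p in f.items())
    nu = sum(k * (k - 1) * p for k, p in f.items()) / mu
    return f, mu, nu
```

For a δ-regular degree list this gives μ = δ and ν = δ − 1, the values used in Section 3.2.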

3. Discussion and applications

3.1. Earlier work

The results of Section 2 are structurally similar to the main result in [Reference Janson11], which states that for the complete graph K = Kn on n nodes, the weighted distances (without boundary weights) satisfy

(8) $$\frac{W_K(u^*, v^*)}{\log n/n} \ \overset{\mathrm p}{\longrightarrow} \ \frac{1}{\lambda},$$
(9) $$\frac{\max_v W_K(u^*, v)}{\log n/n} \ \overset{\mathrm p}{\longrightarrow} \ \frac{2}{\lambda},$$
(10) $$\frac{\max_{u,v} W_K(u, v)}{\log n/n} \ \overset{\mathrm p}{\longrightarrow} \ \frac{3}{\lambda}.$$

The above results have more recently been extended to sparse random graphs. For a random graph G = G(n, d(n)) satisfying the regularity conditions (2)–(4), the weighted distances (without boundary weights) satisfy

(11) $$\frac{W_G(u^*, v^*)}{\log n} \ \overset{\mathrm p}{\longrightarrow} \ \frac{1}{\lambda(\nu - 1)},$$
(12) $$\frac{\max_v W_G(u^*, v)}{\log n} \ \overset{\mathrm p}{\longrightarrow} \ \frac{1}{\lambda(\nu - 1)} + \frac{1}{\lambda\delta},$$
(13) $$\frac{\max_{u,v} W_G(u, v)}{\log n} \ \overset{\mathrm p}{\longrightarrow} \ \frac{1}{\lambda(\nu - 1)} + \frac{2}{\lambda\delta}.$$

Formulas (11)–(13) agree with (8)–(10) because ν ≈ n and δ ≈ n for the complete graph on n nodes. Formula (11) was proved in [Reference Ding, Kim, Lubetzky and Peres9] for degenerate degree distributions (random regular graphs), in [Reference Bhamidi, van der Hofstad and Hooghiemstra7] for power-law degree distributions (with exponent τ ∈ (2, 3)), and in [Reference Amini, Draief and Lelarge2] for general limiting degree distributions with a finite variance. Formulas (12) and (13) have been proved in [Reference Ding, Kim, Lubetzky and Peres9] for random regular graphs and in [Reference Amini, Draief and Lelarge2, Reference Amini and Lelarge3] for general limiting degree distributions with a finite variance. Sparse random graphs where the limiting degree distribution has infinite variance behave in general completely differently, with typical passage times of order o(log n) [Reference Baroni, van der Hofstad and Komjáthy6, Reference Bhamidi, van der Hofstad and Hooghiemstra7], and they are not discussed further in this paper. The constant ν appearing in the above formulas can be recognized as the mean of the downshifted size biasing [Reference Leskelä and Ngo13] of the limiting degree distribution f, and ν is finite if and only if the second moment of f is finite.

Theorem 1 generalizes formulas (11)–(13) to the setting where nodes have nonnegative random weights X0(v) and X1(v) with exponential tails. The main qualitative findings are that the boundary weights have no effect on the typical passage time W(u*, v*), but they may affect the typical flooding time maxv W(u*, v) and the maximum flooding time maxu,v W(u, v). All boundary weight effects can be ignored on the log n time scale when the tails of the node weight distributions decay sufficiently fast (λ0, λ1 > λδ).

A notable feature of the results in Theorem 1 is that the leading contribution of the node weight distributions is governed by the behavior of P(Xi(v) > t) as t → ∞, whereas the leading contribution of the link weight distribution is in many cases [Reference Baroni, van der Hofstad and Komjáthy6, Reference Janson11] governed by the behavior of P(W(e) > t) as t → 0.

Remark 1. The distribution of the node weight Xi(v) is heavy-tailed if the limit in (1) is zero. For heavy-tailed node weight distributions, it is easy to check that $\max_{u \in V} X_i(u)$ grows to infinity faster than logarithmically. Hence Theorem 1 also remains formally valid when λ0 = 0 or λ1 = 0, using the convention that 1/0 = ∞.

3.2. Application: Broadcasting on random regular graphs

As an application, we discuss a continuous-time version of a message transmission and replication model operating in a push mode [Reference Aalto and Leskelä1, Reference Amini, Draief and Lelarge2, Reference Pittel17]. Let G be a random δ-regular graph on n nodes, where each node has a state in {0, 1, 2}. Initially one of the nodes, called the root, is in state 1, and all other nodes are in state 0. Each node activates at random time instants according to a Poisson process of rate κ > 0, independently of the other nodes and of the underlying graph structure. When a node activates, it contacts a random target among its neighbors. The states of the nodes are updated in two ways:

  • 0 ↦ 1: If the initiator of a contact is in state 1 or 2, and the target node is in state 0, then the state of the target node changes from 0 to 1; otherwise nothing happens during the contact.

  • 1 ↦ 2: Having entered state 1, node v remains in this state for a random time period of length X1(v), and then the state of node v changes into 2.

We can interpret the model in the context of computer or biological viruses as follows: State 0 refers to nodes which are vulnerable to receiving a virus. State 1 refers to nodes carrying and spreading the virus but displaying no symptoms. State 2 refers to nodes carrying and spreading the virus and displaying symptoms. We denote by flood1(G) the time until every node in the graph has received the virus, and by flood2(G) the time until every node displays symptoms.

The above model can be analyzed using the weighted random graph where all links have a random exponentially distributed weight of rate parameter λ = κ/δ, with X0(v) = 0 and X1(v) modeling the delay until an infected node displays symptoms. Then for a random root node u*,

$$\mathrm{flood}_1(G) \ \overset{\mathrm d}{=} \ \max_v W_G(u^*, v), \qquad \mathrm{flood}_2(G) \ \overset{\mathrm d}{=} \ \max_v (W_G(u^*, v) + X_1(v)).$$
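The distributional identity above reduces simulation of the push protocol to weighted shortest paths: draw an independent Exp(κ/δ) weight for each link and run Dijkstra from the root. A hedged sketch (the function name `flood_times` is ours; in the usage below the complete graph on four nodes stands in for a small 3-regular graph):

```python
import heapq
import random


def flood_times(adj, delta, kappa, x1, root, rng):
    """Simulate flood_1 and flood_2 for a delta-regular graph via the FPP
    representation: each link carries an independent Exp(kappa/delta) weight,
    flood_1 = max_v W_G(root, v), flood_2 = max_v (W_G(root, v) + X_1(v))."""
    lam = kappa / delta
    w = {}
    for u in adj:
        for v in adj[u]:
            if u < v:  # one draw per undirected link
                t = rng.expovariate(lam)
                w[(u, v)] = w[(v, u)] = t
    # Dijkstra from the root
    dist = {root: 0.0}
    heap = [(0.0, root)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist[u]:
            continue  # stale heap entry
        for v in adj[u]:
            nd = d + w[(u, v)]
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    flood1 = max(dist.values())
    flood2 = max(dist[v] + x1[v] for v in dist)
    return flood1, flood2
```

With X1(v) = 0 for all v the two flooding times coincide, and nonnegative incubation delays can only increase flood2, in line with (14) and (15).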

Applying formula (6) in Theorem 1 with λ1 = ∞, corresponding to X1(v) = 0, we have, with high probability (w.h.p.),

(14) $$\mathrm{flood}_1(G) \ = \ \Big(\frac{1}{\lambda(\nu - 1)} + \frac{1}{\lambda\delta}\Big)\log n + o(\log n).$$

Note that the same formula can also be obtained from (12). Applying (6) again, we have, w.h.p.,

(15) $$\mathrm{flood}_2(G) \ = \ \Big(\frac{1}{\lambda(\nu - 1)} + \frac{1}{\lambda\delta \wedge \lambda_1}\Big)\log n + o(\log n).$$

These two formulas lead to the following results.

Corollary 1. For a random δ-regular graph G on n nodes with δ ≥ 3, when the distribution of X1(v) has an exponential tail of rate λ1 according to (1),

$$\mathrm{flood}_1(G) \ = \ \frac{2}{\kappa}\Big(\frac{\delta - 1}{\delta - 2}\Big)\log n + o(\log n)$$

and

$$\mathrm{flood}_2(G) \ = \ \Big(\frac{\delta}{\kappa(\delta - 2)} + \frac{1}{\kappa \wedge \lambda_1}\Big)\log n + o(\log n)$$

with high probability as n → ∞.

Proof. The results follow directly by substituting λ = κ/δ and ν = δ − 1 into (14) and (15). The coefficient in (14) simplifies by direct calculation into

$$\frac{1}{\lambda(\nu - 1)} + \frac{1}{\lambda\delta} \ = \ \frac{\delta}{\kappa(\delta - 2)} + \frac{1}{\kappa} \ = \ \frac{2}{\kappa}\Big(\frac{\delta - 1}{\delta - 2}\Big).$$
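This simplification can be checked in exact rational arithmetic; a small sketch (the function name `flood1_coefficient` is ours) verifying the identity for several integer values of δ and κ under the substitutions λ = κ/δ and ν = δ − 1:

```python
from fractions import Fraction


def flood1_coefficient(kappa, delta):
    """1/(lam*(nu - 1)) + 1/(lam*delta) with lam = kappa/delta, nu = delta - 1."""
    lam = Fraction(kappa) / delta
    nu = delta - 1
    return 1 / (lam * (nu - 1)) + 1 / (lam * delta)


# Closed form from Corollary 1: (2/kappa) * (delta - 1)/(delta - 2).
for delta in range(3, 10):
    for kappa in (1, 2, 5):
        assert flood1_coefficient(kappa, delta) == \
            Fraction(2, kappa) * Fraction(delta - 1, delta - 2)
```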

Figure 1 illustrates how the limiting approximations of Corollary 1 relate to simulated values of the flooding times on 3-regular graphs. The sizes of the fluctuations around the theoretical values appear to be of constant order with respect to n. A constant order of fluctuations corresponds to the well-known fact in statistical extreme value theory that the maximum of n independent exponential random numbers is approximately Gumbel-distributed around a value of size log n. However, the additional randomness induced by the underlying random graph may cause the fluctuations to grow slowly with respect to n. Whether or not the fluctuations grow with n is not possible to detect from simulations of modest size, because the growth rate of the fluctuations is at most o(log n).

Figure 1: Flooding times on random δ-regular graphs with δ = 3, κ = 1, and λ1 = 1/2. The blue circles represent simulated values of flood1(G), and the red triangles represent simulated values of flood2(G). The blue solid line and the red dashed line correspond to the limiting formulas of Corollary 1. (Color online.)

Figure 2 describes simulated trajectories of node counts in different states in a random 3-regular graph of 1000 nodes. The trajectories are approximately S-shaped, with random horizontal shifts caused by the initial and final phases of the process.

Figure 2: Simulated trajectories of the number of nodes in state 1 or state 2 (blue, solid) and the number of nodes in state 2 (red, dashed) for a random δ-regular graph with n = 1000, δ = 3, κ = 1, and λ1 = 1/2. (Color online.)

4. Proofs

4.1. Configuration model

A standard method for studying the random graph G = G(n, d(n)) is to investigate a related random multigraph. A multigraph is a triplet G = (V, E, ϕ), where V and E are finite sets and $\phi: E \to \binom{V}{1} \cup \binom{V}{2}$. Here ϕ(e) refers to the set of one (loop) or two (non-loop) nodes incident to e ∈ E. A multigraph is called simple if ϕ is one-to-one (no parallel links) and $\phi(E) \subset \binom{V}{2}$ (no loops). The degree of a node i is defined by $\sum_{e \in E} (\mathbf 1(i \in \phi(e)) + \mathbf 1(\{i\} = \phi(e)))$, that is, the number of links incident to i, with loops counted twice. A path of length k ≥ 0 from x0 to xk is a set of distinct nodes {x0, x1, ..., xk} such that {xj−1, xj} ∈ ϕ(E) for all j. For a multigraph G weighted by W : E → (0, ∞), we denote

$$W_G(u, v) \ = \ \inf_{\Gamma}\sum_{e \in \Gamma} W(e),$$

where the infimum is taken over all paths Γ in G from u to v. When G is connected, the above formula defines a metric on G.

Let us recall the usual definition of the configuration model in [Reference Bollobás8]. Let n be a positive integer and d = (d1, d2, ..., dn) a sequence of nonnegative integers. To each node i ∈ [n] we attach di distinct elements called half-edges. A pair of half-edges is called an edge. To obtain a random multigraph G*, it is required that the total number of half-edges is even, $\sum_{i=1}^{n} d_i = 2m$, where m refers to the number of edges. Let Di be the set of half-edges of node i. Then the size of Di is di and the sets D1, D2, ..., Dn are disjoint. Let $D = \bigcup_{i=1}^{n} D_i$ be the collection of all half-edges and let E be a pairing of D (a partition into m pairs) selected uniformly at random. The configuration model G* = G*(n, d) is the multigraph ([n], E, ϕ), where the function $\phi: E \to \binom{[n]}{1} \cup \binom{[n]}{2}$ is defined by ϕ(e) = {i ∈ [n] : Di ∩ e ≠ ∅}.
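The pairing construction can be sketched in a few lines (the function name `configuration_model` is ours): shuffling the list of half-edges uniformly and pairing consecutive entries induces a uniform random pairing, since every perfect matching arises from the same number of orderings.

```python
import random


def configuration_model(d, rng):
    """Uniform pairing of half-edges: node i carries d[i] half-edges, and
    sum(d) must be even. Returns the multigraph edge list, which may
    contain loops and parallel edges."""
    assert sum(d) % 2 == 0
    half_edges = [i for i, di in enumerate(d) for _ in range(di)]
    rng.shuffle(half_edges)  # uniform shuffle -> uniform pairing
    return list(zip(half_edges[::2], half_edges[1::2]))
```

By construction the resulting multigraph has exactly the prescribed degrees, with loops counted twice.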

A key feature of the configuration model is that the conditional distribution of G*(n, d), given that G*(n, d) is simple, equals the distribution of the random graph G(n, d). Moreover, for a sequence of degree lists d(n) satisfying the regularity conditions (2) and (3), the probability that G*(n, d(n)) is simple is bounded away from zero [Reference Janson and Luczak12]. Therefore, any statement concerning G*(n, d(n)) which holds with high probability also holds with high probability for G(n, d(n)). For this reason we write G in place of G* in the following, and conduct the analysis of weighted distances on the configuration model.

4.2. Notation

For a node u in the weighted multigraph, we denote by B(u, t) = {v : WG(u, v) ≤ t} the set of nodes within distance t ∈ [0, ∞] from u. For an integer k ≥ 0, we define

$$T_u(k) \ = \ \min\{t \ge 0 : |B(u, t)| \ge k + 1\},$$

with the convention that min Ø = ∞. We also denote by Su(k) the number of outgoing links from set B(u, Tu(k)). Then for any k less than the component size of u:

  • Tu(k) equals the distance from u to its kth nearest neighbor, and

  • Su(k) equals the number of outgoing links from the set of nodes consisting of u and its k nearest neighbors.

Moreover, Tu(k) = ∞ and Su(k) = 0 for all k greater than or equal to the component size of u.
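The quantities Tu(k) and Su(k) can be computed within u's component by running Dijkstra from u and recording nodes in order of increasing distance; a sketch under our own naming (`exploration`), for a simple weighted graph with symmetric weight dictionary:

```python
import heapq


def exploration(adj, w, u):
    """Return lists T and S over u's component, where T[k] = T_u(k) is the
    distance from u to its k-th nearest node (T[0] = 0), and S[k] = S_u(k)
    is the number of links leaving B(u, T_u(k))."""
    dist = {u: 0.0}
    heap = [(0.0, u)]
    done = []  # nodes in order of increasing distance from u
    seen = set()
    while heap:
        d, x = heapq.heappop(heap)
        if x in seen:
            continue  # stale heap entry
        seen.add(x)
        done.append((d, x))
        for y in adj[x]:
            nd = d + w[(x, y)]
            if nd < dist.get(y, float("inf")):
                dist[y] = nd
                heapq.heappush(heap, (nd, y))
    T = [d for d, _ in done]
    ball, S = set(), []
    for _, x in done:
        ball.add(x)
        # count links from the ball to its complement
        S.append(sum(1 for z in ball for y in adj[z] if y not in ball))
    return T, S
```

On the path graph 0–1–2 with unit weights, exploring from node 0 gives T = [0, 1, 2] and S = [1, 1, 0], matching the interpretation above.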

Throughout the following, we assume that G satisfies the regularity conditions (2)–(4). We introduce the scale parameters

$$\alpha_n = \log^3 n, \qquad \beta_n = 3\sqrt{\frac{\mu}{\nu - 1}\, n \log n},$$

and, with high probability [Reference Amini, Draief and Lelarge2, Proposition 4.2] (see alternatively [Reference Ding, Kim, Lubetzky and Peres9, Lemma 3.3] or [Reference Bhamidi, van der Hofstad and Hooghiemstra7, Proposition 4.9]),

(16) $$W_G(u, v) \ \le \ T_u(\beta_n) + T_v(\beta_n)$$

for all nodes u and v in the graph G. We will next analyze the behavior of Tu(βn) and Tv(βn) in typical (uniformly randomly chosen node) and extremal cases.

4.3. Upper bound on weighted distances

The following upper bound on the weighted distances is a sharpened version of [Reference Amini, Draief and Lelarge2, Lemmas 4.7, 4.12]. Below we assume that X ≥ 0 is an arbitrary random number and u* is a uniformly randomly chosen node, such that X, u*, and the graph G are mutually independent, and independent of the weights (W(e))e∈E(G), where the weights W(e) are exponentially distributed with rate λ > 0. We use $\mathcal F_{S_{u^*}}$ to denote the sigma-algebra generated by $S_{u^*} = (S_{u^*}(0), \ldots, S_{u^*}(n - 1))$.

Lemma 1. Fix integers 0 ≤ a < b < n and numbers c1, c2 ≥ 0, and let ℛ be an $\mathcal F_{S_{u^*}}$-measurable event on which $S_{u^*}(k) \ge c_1 + c_2 k$ for all a ≤ k ≤ b − 1. For any 0 < θ < λ(c1 + c2a),

$$\mathrm E\big(\mathrm e^{\theta(T_{u^*}(b) - T_{u^*}(a) + X)} \,\big|\, \mathcal R\big) \ \le \ M_X(\theta)\exp\bigg(\frac{\theta}{\theta_1 - \theta} + \frac{\theta}{\theta_0}\Big(\frac{1}{a + 1} + \log\frac{b - 1}{a + 1}\Big)\bigg),$$

where $M_X(\theta) = \mathrm E(\mathrm e^{\theta X})$, $\theta_0 = \lambda c_2 - \frac{(\theta - \lambda c_1)_+}{a + 1}$, and θ1 = λ(c1 + c2a).

Proof. A key property of the model is that, conditionally on $S_{u^*}$, the random numbers $T_{u^*}(k + 1) - T_{u^*}(k)$ are independent and exponentially distributed with rates $\lambda S_{u^*}(k)$. On the event ℛ we see that $\lambda S_{u^*}(a) \ge \theta_1$, and for all a + 1 ≤ k ≤ b − 1,

$$\lambda S_{u^*}(k) - \theta \ \ge \ \lambda c_1 + \lambda c_2 k - \theta \ \ge \ \Big(\lambda c_2 - \frac{(\theta - \lambda c_1)_+}{k}\Big) k \ \ge \ \theta_0 k.$$

As a consequence,

$$\begin{aligned} \mathrm E\big(\mathrm e^{\theta(T_{u^*}(b) - T_{u^*}(a) + X)} \,\big|\, \mathcal F_{S_{u^*}}\big) &= M_X(\theta)\prod_{k=a}^{b-1}\frac{\lambda S_{u^*}(k)}{\lambda S_{u^*}(k) - \theta} \\ &= M_X(\theta)\prod_{k=a}^{b-1}\Big(1 + \frac{\theta}{\lambda S_{u^*}(k) - \theta}\Big) \\ &\le M_X(\theta)\exp\bigg(\sum_{k=a}^{b-1}\frac{\theta}{\lambda S_{u^*}(k) - \theta}\bigg). \end{aligned}$$

Separating the first term from the sum, and by the choice of the event ℛ, we obtain

$$\mathrm E\big(\mathrm e^{\theta(T_{u^*}(b) - T_{u^*}(a) + X)} \,\big|\, \mathcal R\big) \ \le \ M_X(\theta)\exp\bigg(\frac{\theta}{\theta_1 - \theta} + \frac{\theta}{\theta_0}\sum_{k=a+1}^{b-1}\frac{1}{k}\bigg).$$

By integration we have $\sum_{k=m}^{n} \frac{1}{k} \le \log\frac{n}{m - 1}$ for any integers 2 ≤ m < n. Hence, separating the first term again from the sum, we obtain the desired result,

$$\mathrm E\big(\mathrm e^{\theta(T_{u^*}(b) - T_{u^*}(a) + X)} \,\big|\, \mathcal R\big) \ \le \ M_X(\theta)\exp\bigg(\frac{\theta}{\theta_1 - \theta} + \frac{\theta}{\theta_0}\Big(\frac{1}{a + 1} + \log\frac{b - 1}{a + 1}\Big)\bigg).$$
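The integral comparison used in the proof is easy to check numerically; a small sketch (the function name `harmonic_tail` is ours):

```python
import math


def harmonic_tail(m, n):
    """The partial harmonic sum sum_{k=m}^{n} 1/k."""
    return sum(1.0 / k for k in range(m, n + 1))


# sum_{k=m}^{n} 1/k <= log(n/(m-1)) for integers 2 <= m < n, since
# 1/k <= integral over [k-1, k] of dt/t.
for m, n in [(2, 10), (5, 100), (50, 10 ** 4)]:
    assert harmonic_tail(m, n) <= math.log(n / (m - 1))
```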

4.4. Upper bounds on nearest neighbor distances

Proposition 1. For any 0 ≤ p ≤ 1, ε > 0, and any random variable X ≥ 0 independent of G,

$$\mathrm P\Big(T_{u^*}(\alpha_n) + X > \Big(\frac{p}{\lambda\delta \wedge \theta^*} + \varepsilon\Big)\log n\Big) \ = \ o(n^{-p}),$$

where θ* = sup{θ ≥ 0 : E(eθX) < ∞} > 0.

Proof. Let

$$t_n = \Big(\frac{p}{\lambda\delta \wedge \theta^*} + \varepsilon\Big)\log n.$$

An upper bound for the probability of the event under study, $\mathcal A_n = \{T_{u^*}(\alpha_n) + X > t_n\}$, is obtained by

(17) $$\mathrm P(\mathcal A_n) \ \le \ \mathrm P(\mathcal A_n \mid \mathcal R_1) + \mathrm P(\mathcal A_n \mid \mathcal R_2 \cap \mathcal R_1^{\mathrm c})\,\mathrm P(\mathcal R_1^{\mathrm c}) + \mathrm P(\mathcal R_2^{\mathrm c}),$$

where

$$\begin{aligned} \mathcal R_1 &= \{S_{u^*}(k) \ge \delta + (\delta - 2)k \ \text{for all } 0 \le k \le \alpha_n - 1\}, \\ \mathcal R_2 &= \{S_{u^*}(k) \ge 1 + (\delta - 2)k \ \text{for all } 0 \le k \le \alpha_n - 1\}. \end{aligned}$$

We will next analyze the conditional probabilities in (17).

(i) To obtain an upper bound for P(𝒜n | ℛ1), we apply Lemma 1 with a = 0, b = αn, c1 = δ, and c2 = δ − 2, together with Markov's inequality, and find that

$$\mathrm P(\mathcal A_n \mid \mathcal R_1) \ \le \ M_X(\theta)\exp\Big(\frac{\theta}{\theta_1 - \theta} + \frac{\theta}{\theta_0}(1 + \log\alpha_n) - \theta t_n\Big)$$

for all 0 < θ < θ1 ∧ θ*, where θ0 = λ(δ − 2) and θ1 = λδ. Now we may choose

$$\theta \ \ge \ \Big(1 - \frac{\lambda\delta \wedge \theta^*}{2(p + \varepsilon(\lambda\delta \wedge \theta^*))}\,\varepsilon\Big)(\lambda\delta \wedge \theta^*)$$

to have $\theta t_n \ge (p + \frac{1}{2}\varepsilon(\lambda\delta \wedge \theta^*))\log n$. Note that θ can be arbitrarily close to its maximum value λδ ∧ θ* if we choose ε > 0 to be sufficiently small. Since the constant term $\frac{\theta}{\theta_1 - \theta}$ is small compared to log αn = Θ(log log n), we have, for large values of n,

$$\frac{\theta}{\theta_1 - \theta} + \frac{\theta}{\theta_0}(1 + \log\alpha_n) \ \le \ \frac{1}{4}\varepsilon(\lambda\delta \wedge \theta^*)\log n.$$

These two inequalities imply that

(18) $$\mathrm P(\mathcal A_n \mid \mathcal R_1) \ \le \ M_X(\theta)\, n^{-(p + \frac{1}{4}\varepsilon(\lambda\delta \wedge \theta^*))} \ = \ o(n^{-p}).$$

(ii) For an upper bound of P(𝒜n | ℛ2 ∩ ℛ1c), we apply Lemma 1 with a = 0, b = αn, c1 = 1, and c2 = δ − 2, together with Markov's inequality, to conclude that

$$\mathrm P(\mathcal A_n \mid \mathcal R_2 \cap \mathcal R_1^{\mathrm c}) \ \le \ M_X(\theta)\exp\Big(\frac{\theta}{\lambda - \theta} + \frac{\theta}{\lambda(\delta - 2)}(1 + \log\alpha_n) - \theta t_n\Big)$$

for all 0 < θ < λ ∧ θ*. For any such θ, we see that $\theta t_n \ge \varepsilon_1 \log n$ with

$$\varepsilon_1 = \theta\Big(\frac{p}{\lambda\delta \wedge \theta^*} + \varepsilon\Big) > 0.$$

Because

$$\frac{\theta}{\lambda - \theta} + \frac{\theta}{\lambda(\delta - 2)}(1 + \log\alpha_n) \ \le \ \frac{1}{2}\varepsilon_1\log n$$

for all large n, it follows that

(19) $$\mathrm P(\mathcal A_n \mid \mathcal R_2 \cap \mathcal R_1^{\mathrm c}) \ = \ O(n^{-\varepsilon_1/2}).$$

Note that our Su(k) has the same distribution as the exploration process in [Reference Amini, Draief and Lelarge2, Section 4.1]. Hence, by [Reference Amini, Draief and Lelarge2, Lemma 4.6], $\mathrm P(\mathcal R_1^{\mathrm c}) = o(n^{-1}\log^{10} n)$ and $\mathrm P(\mathcal R_2^{\mathrm c}) = o(n^{-3/2})$. Substituting the bounds (18) and (19) into (17), it follows that

$$\mathrm P(\mathcal A_n) \ \le \ o(n^{-p}) + O(n^{-\varepsilon_1/2})\, o(n^{-1}\log^{10} n) + o(n^{-3/2}) \ = \ o(n^{-p}).$$

4.5. Upper bounds on moderate distances

Proposition 2. For any ε > 0,

$$\mathrm P\Big(T_{u^*}(\beta_n) - T_{u^*}(\alpha_n) > \Big(\frac{1}{2\lambda(\nu - 1)} + \varepsilon\Big)\log n\Big) \ = \ o(n^{-1}).$$

Proof. Denote $c = \frac{1}{2\lambda(\nu - 1)}$ and tn = (c + ε) log n. Set c1 = 0 and c2 = 1/λc. Fix a number θ > 2/ε, and set $\theta_0 = \lambda c_2 - \frac{\theta}{\alpha_n + 1}$ and θ1 = λc2αn. Then, for all sufficiently large values of n, we see that 0 < θ < θ1. Applying Lemma 1 with X = 0, a = αn, and b = βn, together with Markov's inequality, we find that on the event ℛ3 that $S_{u^*}(k) \ge c_2 k$ for all αn ≤ k ≤ βn − 1,

$$\begin{aligned} \mathrm P(T_{u^*}(\beta_n) - T_{u^*}(\alpha_n) > t_n \mid \mathcal R_3) &\le \exp\Big(\frac{\theta}{\theta_1 - \theta} + \frac{\theta}{\theta_0}\Big(\frac{1}{\alpha_n} + \log\frac{\beta_n}{\alpha_n}\Big) - \theta t_n\Big) \\ &= \exp\Big(\frac{\theta}{\lambda c_2 \alpha_n - \theta} + \frac{\theta}{\lambda c_2 - \frac{\theta}{\alpha_n + 1}}\Big(\frac{1}{\alpha_n} + \log\frac{\beta_n}{\alpha_n}\Big) - (c + \varepsilon)\theta\log n\Big). \end{aligned}$$

Note that βn/αn ≤ n. Because αn → ∞, we see that

$$\mathrm P(T_{u^*}(\beta_n) - T_{u^*}(\alpha_n) > t_n \mid \mathcal R_3) \ \le \ \exp((c + \varepsilon/2)\theta\log n - (c + \varepsilon)\theta\log n) \ = \ \exp\Big(-\frac{\varepsilon}{2}\theta\log n\Big).$$

Due to our choice of θ, the right-hand side is o(n−1). The claim follows from this because $\mathrm P(\mathcal R_3^{\mathrm c}) = o(n^{-3/2})$ by [Reference Amini, Draief and Lelarge2, Lemma 4.9].

4.6. Proof of Theorem 1: Upper bounds

Observe that $\mathrm E(\mathrm e^{\theta X_i(v)})$ is finite for θ < λi and infinite for θ > λi, due to our assumption (1) on exponential tails. Hence, by applying Proposition 1 with p = 1,

$$\mathrm P\Big(T_{v^*}(\alpha_n) + X_i(v^*) > \Big(\frac{1}{\lambda\delta \wedge \lambda_i} + \varepsilon\Big)\log n\Big) \ = \ o(n^{-1}),$$

so that, by applying the generic union bound

(20) $$\mathrm P\Big(\max_v X(v) > t\Big) \ \le \ \sum_v \mathrm P(X(v) > t) \ = \ n\,\mathrm P(X(v^*) > t),$$

it follows that

(21) $$\max_v (T_v(\alpha_n) + X_i(v)) \ \le \ \Big(\frac{1}{\lambda\delta \wedge \lambda_i} + \varepsilon\Big)\log n \qquad \text{w.h.p.}$$

Furthermore, by applying Proposition 1 with p = 0, it follows that

(22) $$T_{v^*}(\alpha_n) \ \le \ T_{v^*}(\alpha_n) + X_i(v^*) \ \le \ \varepsilon\log n \qquad \text{w.h.p.},$$

and by Proposition 2 and the generic union bound (20), w.h.p.,

(23) $$T_{v^*}(\beta_n) - T_{v^*}(\alpha_n) \ \le \ \max_v (T_v(\beta_n) - T_v(\alpha_n)) \ \le \ \Big(\frac{1}{2\lambda(\nu - 1)} + \varepsilon\Big)\log n.$$

By combining (21) and (22) with (23), we conclude that, w.h.p.,

(24) $$\max_v (T_v(\beta_n) + X_i(v)) \ \le \ \Big(\frac{1}{\lambda\delta \wedge \lambda_i} + \frac{1}{2\lambda(\nu - 1)} + 2\varepsilon\Big)\log n$$

and

(25) $$T_{v^*}(\beta_n) \ \le \ \Big(\frac{1}{2\lambda(\nu - 1)} + 2\varepsilon\Big)\log n.$$

To prove an upper bound for (5), observe that the distribution of Xi(v*) does not depend on the scale parameter n. Therefore, Xi(v*) ≤ ε log n with high probability. In light of (16) and (25), it follows that, w.h.p.,

$$\begin{aligned} W(u^*, v^*) &= X_0(u^*) + W_G(u^*, v^*) + X_1(v^*) \\ &\le X_0(u^*) + T_{u^*}(\beta_n) + T_{v^*}(\beta_n) + X_1(v^*) \\ &\le \Big(\frac{1}{\lambda(\nu - 1)} + 6\varepsilon\Big)\log n. \end{aligned}$$

To prove an upper bound for (6), observe that by applying (16), (24), and (25), with high probability,

$$\begin{aligned} \max_v W(u^*, v) &= \max_v (X_0(u^*) + W_G(u^*, v) + X_1(v)) \\ &\le \max_v (X_0(u^*) + T_{u^*}(\beta_n) + T_v(\beta_n) + X_1(v)) \\ &= X_0(u^*) + T_{u^*}(\beta_n) + \max_v (T_v(\beta_n) + X_1(v)) \\ &\le \Big(\frac{1}{2\lambda(\nu - 1)} + 3\varepsilon\Big)\log n + \Big(\frac{1}{\lambda\delta \wedge \lambda_1} + \frac{1}{2\lambda(\nu - 1)} + 2\varepsilon\Big)\log n \\ &= \Big(\frac{1}{\lambda\delta \wedge \lambda_1} + \frac{1}{\lambda(\nu - 1)} + 5\varepsilon\Big)\log n. \end{aligned}$$

Finally, for an upper bound for (7), observe that, by (16), with high probability,

$$\begin{aligned}
\max_{u,v} W(u,v) &= \max_{u,v} \big(X_0(u) + W_G(u,v) + X_1(v)\big) \\
&\le \max_{u,v} \big(X_0(u) + T_u(\beta_n) + T_v(\beta_n) + X_1(v)\big) \\
&= \max_u \big(X_0(u) + T_u(\beta_n)\big) + \max_v \big(X_1(v) + T_v(\beta_n)\big).
\end{aligned}$$

Hence, by (24), it follows that, with high probability,

$$\max_{u,v} W(u,v) \;\le\; \Big(\frac{1}{\lambda\delta \wedge \lambda_0} + \frac{1}{\lambda(\nu-1)} + \frac{1}{\lambda\delta \wedge \lambda_1} + 4\varepsilon\Big)\log n.$$

The above inequalities are sufficient to confirm the upper bounds in Theorem 1, because ε > 0 can be chosen arbitrarily small.
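As an arithmetic sanity check of how the constants combine (the parameter values below are illustrative placeholders, not taken from the paper), the ε-free constant in the last bound is exactly the sum of the two one-sided constants from (24) with i = 0 and i = 1:

```python
def c24(lam, delta, nu, lam_i):
    """Epsilon-free constant in (24): 1/(lam*delta ^ lam_i) + 1/(2*lam*(nu-1))."""
    return 1.0 / min(lam * delta, lam_i) + 1.0 / (2.0 * lam * (nu - 1.0))

def c_diameter(lam, delta, nu, lam0, lam1):
    """Epsilon-free constant in the final bound for max_{u,v} W(u,v)."""
    return (1.0 / min(lam * delta, lam0)
            + 1.0 / (lam * (nu - 1.0))
            + 1.0 / min(lam * delta, lam1))

# Illustrative parameter values.
lam, delta, nu, lam0, lam1 = 1.0, 3.0, 2.0, 0.7, 0.5
lhs = c24(lam, delta, nu, lam0) + c24(lam, delta, nu, lam1)
rhs = c_diameter(lam, delta, nu, lam0, lam1)
assert abs(lhs - rhs) < 1e-12
```

This reflects the proof structure: the two maxima in the decomposition above are each bounded via (24), once with the root-side weights $X_0$ and once with the target-side weights $X_1$.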

4.7. Proof of Theorem 1: Lower bounds

The lower bounds are relatively straightforward generalizations of the analogous results (11)–(13) for the model without node weights, which imply that, for an arbitrarily small ε > 0, the weighted graph distance W_G satisfies, w.h.p.,

(26) $$\frac{W_G(u^*,v^*)}{\log n} \;\ge\; \frac{1}{\lambda(\nu-1)} - \varepsilon,$$
(27) $$\frac{\max_v W_G(u^*,v)}{\log n} \;\ge\; \frac{1}{\lambda(\nu-1)} + \frac{1}{\lambda\delta} - \varepsilon,$$
(28) $$\frac{\max_{u,v} W_G(u,v)}{\log n} \;\ge\; \frac{1}{\lambda(\nu-1)} + \frac{2}{\lambda\delta} - \varepsilon.$$

We first prove the following lemma, which we then apply in the proof of the lower bounds.

Lemma 2. For every integer n ≥ 1, let $A_n(i)$ and $B_n(i)$ be random numbers indexed by a finite set $I_n$. Assume that $(A_n(i))_{i \in I_n}$ are independent and identically distributed, that

$$|I_n|\,\mathrm{P}(A_n(i) > a_n) \;\to\; \infty,$$

and that $B_n(i^*) > b_n$ with high probability, where $i^*$ is a uniformly random point of $I_n$, independent of $(B_n(i))_{i \in I_n}$. Assume also that $A_n(i)$ and $B_n(i)$ are independent for every $i \in I_n$. Then

$$\max_{i \in I_n} \big(A_n(i) + B_n(i)\big) \;>\; a_n + b_n$$

with high probability.

Proof. Let

$$M_n \;=\; |\{ i \in I_n : A_n(i) > a_n \}|$$

and

$$N_n \;=\; |\{ i \in I_n : A_n(i) > a_n,\ A_n(i) + B_n(i) > a_n + b_n \}|.$$

Observe that $M_n$ is binomially distributed with $|I_n|$ trials and success probability $p_n = \mathrm{P}(A_n(i) > a_n)$. Then $\mathrm{E}(M_n) = |I_n| p_n$ and $\mathrm{Var}(M_n) \le \mathrm{E}(M_n)$, and because $|I_n| p_n \to \infty$, Chebyshev's inequality implies that $M_n \ge \tfrac12 |I_n| p_n$ with high probability. Moreover,

$$\begin{aligned}
\mathrm{E}(M_n - N_n) &= \sum_{i \in I_n} \mathrm{P}\big(A_n(i) > a_n,\ A_n(i) + B_n(i) \le a_n + b_n\big) \\
&\le \sum_{i \in I_n} \mathrm{P}\big(A_n(i) > a_n,\ B_n(i) \le b_n\big) \\
&= \sum_{i \in I_n} \mathrm{P}(A_n(i) > a_n)\,\mathrm{P}(B_n(i) \le b_n) \\
&= |I_n| p_n\,\mathrm{P}(B_n(i^*) \le b_n).
\end{aligned}$$

Because $\mathrm{P}(B_n(i^*) \le b_n) = o(1)$, Markov's inequality implies that $M_n - N_n \le \tfrac14 |I_n| p_n$ with high probability. We conclude that, with high probability,

$$N_n \;=\; M_n - (M_n - N_n) \;\ge\; \tfrac12 |I_n| p_n - \tfrac14 |I_n| p_n \;=\; \tfrac14 |I_n| p_n \;\ge\; 1,$$

and hence $\max_{i \in I_n} (A_n(i) + B_n(i)) > a_n + b_n$, as claimed.
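A quick numerical illustration of Lemma 2 (a hedged sketch, not part of the proof; the distributions and sequences below are chosen for convenience): with $I_n = [n]$, $A_n(i)$ i.i.d. Exp(1), and $a_n = (1-\varepsilon)\log n$, we have $n\,\mathrm{P}(A_n(i) > a_n) = n^{\varepsilon} \to \infty$, and taking $B_n(i)$ constant makes the hypothesis on $B_n(i^*)$ trivial; the maximum of the sums should then exceed $a_n + b_n$ in nearly every trial.

```python
import math
import random

random.seed(7)

def lemma2_trial(n, eps=0.3, b=1.0):
    """One trial of the Lemma 2 setting: A_i ~ Exp(1) i.i.d.,
    a_n = (1 - eps) * log n, B_i = b constant, and b_n = b / 2,
    so that B_n(i*) > b_n holds surely.
    Returns True if max_i (A_i + B_i) > a_n + b_n."""
    a_n = (1.0 - eps) * math.log(n)
    b_n = b / 2.0
    best = max(random.expovariate(1.0) + b for _ in range(n))
    return best > a_n + b_n

n, trials = 2000, 200
hits = sum(lemma2_trial(n) for _ in range(trials))
frac = hits / trials  # empirical frequency of the event in the conclusion
print(frac)
```

For these choices, $\mathrm{P}(\max_i A_n(i) \le a_n - b/2) = (1 - \mathrm{e}^{-(a_n - b/2)})^n \to 0$, so the printed frequency should be close to 1.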

(i) A suitable lower bound for (5) follows immediately from (26) because W(u*, v*) ≥ WG(u*, v*) almost surely.

(ii) To prove a lower bound for (6), note that the exponential tail assumption (1) implies that

$$n\,\mathrm{P}\Big(X_1(v) > \big(\tfrac{1}{\lambda_1} - \varepsilon\big)\log n\Big) \;\to\; \infty.$$
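A sketch of why the displayed limit holds, assuming the tail condition (1) takes the exponential form $\mathrm{P}(X_1(v) > t) = \mathrm{e}^{-(\lambda_1 + o(1))t}$ as $t \to \infty$:

```latex
n \, \mathrm{P}\Bigl( X_1(v) > \bigl(\tfrac{1}{\lambda_1} - \varepsilon\bigr)\log n \Bigr)
  \;=\; n^{\,1 - \lambda_1 (1/\lambda_1 - \varepsilon) + o(1)}
  \;=\; n^{\,\lambda_1 \varepsilon + o(1)}
  \;\to\; \infty.
```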

Then, by applying Lemma 2 (with $A_n(v) = X_1(v)$, $B_n(v) = W_G(u^*, v)$, and $I_n = [n]$) and recalling (26), we find that, w.h.p.,

$$\begin{aligned}
\max_v W(u^*,v) &= \max_v \big(X_0(u^*) + W_G(u^*,v) + X_1(v)\big) \\
&\ge \max_v \big(W_G(u^*,v) + X_1(v)\big) \\
&\ge \Big(\frac{1}{\lambda(\nu-1)} + \frac{1}{\lambda_1} - 2\varepsilon\Big)\log n.
\end{aligned}$$

By noting that maxv W(u*, v) ≥ maxv WG(u*, v) and applying (27), we also obtain

$$\max_v W(u^*,v) \;\ge\; \Big(\frac{1}{\lambda(\nu-1)} + \frac{1}{\lambda\delta} - 2\varepsilon\Big)\log n,$$

and hence, w.h.p.,

$$\max_v W(u^*,v) \;\ge\; \Big(\frac{1}{\lambda(\nu-1)} + \frac{1}{\lambda\delta \wedge \lambda_1} - 2\varepsilon\Big)\log n.$$

(iii) To prove a lower bound for (7), note that, for any u ≠ v,

$$\begin{aligned}
\mathrm{P}\Big(\frac{X_0(u) + X_1(v)}{\log n} > \frac{1}{\lambda_0} + \frac{1}{\lambda_1} - 2\varepsilon\Big)
&\ge \mathrm{P}\Big(\frac{X_0(u)}{\log n} > \frac{1}{\lambda_0} - \varepsilon,\ \frac{X_1(v)}{\log n} > \frac{1}{\lambda_1} - \varepsilon\Big) \\
&= \mathrm{P}\Big(\frac{X_0(u)}{\log n} > \frac{1}{\lambda_0} - \varepsilon\Big)\,\mathrm{P}\Big(\frac{X_1(v)}{\log n} > \frac{1}{\lambda_1} - \varepsilon\Big).
\end{aligned}$$

Then the exponential tail assumption (1) implies that

$$n(n-1)\,\mathrm{P}\Big(\frac{X_0(u) + X_1(v)}{\log n} > \frac{1}{\lambda_0} + \frac{1}{\lambda_1} - 2\varepsilon\Big) \;\to\; \infty.$$

Observe next that if $u^*$ and $v^*$ are independent uniformly random elements of $[n]$, and $i^*$ is a uniformly random element of $I_n = \{(u,v) \in [n]^2 : u \ne v\}$, then

$$\begin{aligned}
\mathrm{P}(W_G(u^*,v^*) \in F) &= \frac{1}{n^2} \sum_u \sum_v \mathrm{P}(W_G(u,v) \in F) \\
&= \frac{1}{n}\,\mathrm{P}(W_G(u^*,u^*) \in F) + \frac{n-1}{n}\,\mathrm{P}(W_G(i^*) \in F)
\end{aligned}$$

for all measurable sets F ⊂ ℝ. Hence, (26) implies that, w.h.p.,

$$W_G(i^*) \;>\; \Big(\frac{1}{\lambda(\nu-1)} - \varepsilon\Big)\log n.$$

Then we can apply Lemma 2 with $A_n(u,v) = X_0(u) + X_1(v)$ and $B_n(u,v) = W_G(u,v)$ to conclude that, w.h.p.,

$$\max_{u,v} W(u,v) \;\ge\; \Big(\frac{1}{\lambda_0} + \frac{1}{\lambda(\nu-1)} + \frac{1}{\lambda_1} - 3\varepsilon\Big)\log n.$$

We next apply Lemma 2 again, this time with $I_n = [n]$, $A_n(v) = X_1(v)$, and $B_n(v) = \max_u (X_0(u) + W_G(u,v))$; recalling (27), we conclude that, w.h.p.,

$$\max_{u,v} W(u,v) \;\ge\; \Big(\frac{1}{\lambda\delta} + \frac{1}{\lambda(\nu-1)} + \frac{1}{\lambda_1} - 3\varepsilon\Big)\log n.$$

By a symmetrical argument, we also find that, w.h.p.,

$$\max_{u,v} W(u,v) \;\ge\; \Big(\frac{1}{\lambda_0} + \frac{1}{\lambda(\nu-1)} + \frac{1}{\lambda\delta} - 3\varepsilon\Big)\log n.$$

By combining the above three inequalities with (28), we may conclude that, w.h.p.,

$$\max_{u,v} W(u,v) \;\ge\; \Big(\frac{1}{\lambda\delta \wedge \lambda_0} + \frac{1}{\lambda(\nu-1)} + \frac{1}{\lambda\delta \wedge \lambda_1} - 3\varepsilon\Big)\log n.$$
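The final combination rests on the elementary identity $\max(1/x, 1/y) = 1/(x \wedge y)$: the best of the four lower-bound constants equals the combined constant above. A small numerical check of this arithmetic (parameter values are random illustrative draws, not from the paper):

```python
import random

random.seed(1)

def combined_constant(lam, delta, nu, lam0, lam1):
    """Epsilon-free constant in the combined lower bound."""
    c = 1.0 / (lam * (nu - 1.0))
    return 1.0 / min(lam * delta, lam0) + c + 1.0 / min(lam * delta, lam1)

def best_of_four(lam, delta, nu, lam0, lam1):
    """Epsilon-free constants of the four individual lower bounds."""
    c = 1.0 / (lam * (nu - 1.0))
    return max(1.0 / lam0 + c + 1.0 / lam1,           # Lemma 2 with X_0 + X_1
               1.0 / (lam * delta) + c + 1.0 / lam1,  # Lemma 2 with X_1 and (27)
               1.0 / lam0 + c + 1.0 / (lam * delta),  # the symmetric argument
               2.0 / (lam * delta) + c)               # inequality (28)

max_gap = 0.0
for _ in range(1000):
    params = (random.uniform(0.1, 3.0), random.uniform(1.0, 5.0),
              random.uniform(1.1, 4.0), random.uniform(0.1, 3.0),
              random.uniform(0.1, 3.0))
    max_gap = max(max_gap, abs(best_of_four(*params) - combined_constant(*params)))
```

The identity is exact, so `max_gap` stays at machine precision for all parameter draws.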

References

[1] Aalto, P. and Leskelä, L. (2015). Information spreading in a large population of active transmitters and passive receivers. SIAM J. Appl. Math. 75, 1965–1982.
[2] Amini, H., Draief, M. and Lelarge, M. (2013). Flooding in weighted sparse random graphs. SIAM J. Discrete Math. 27, 1–26.
[3] Amini, H. and Lelarge, M. (2015). The diameter of weighted random graphs. Ann. Appl. Prob. 25, 1686–1727.
[4] Andersson, H. and Britton, T. (2000). Stochastic Epidemic Models and Their Statistical Analysis. Springer, New York.
[5] Auffinger, A., Damron, M. and Hanson, J. (2015). 50 years of first passage percolation. ArXiv e-prints.
[6] Baroni, E., van der Hofstad, R. and Komjáthy, J. (2017). Nonuniversality of weighted random graphs with infinite variance degree. J. Appl. Prob. 54, 146–164.
[7] Bhamidi, S., van der Hofstad, R. and Hooghiemstra, G. (2010). First passage percolation on random graphs with finite mean degrees. Ann. Appl. Prob. 20, 1907–1965.
[8] Bollobás, B. (1980). A probabilistic proof of an asymptotic formula for the number of labelled regular graphs. Europ. J. Combinatorics 1, 311–316.
[9] Ding, J., Kim, J. H., Lubetzky, E. and Peres, Y. (2010). Diameters in supercritical random graphs via first passage percolation. Combinatorics Prob. Comput. 19, 729–751.
[10] Hammersley, J. M. and Welsh, D. J. A. (1965). First-passage percolation, subadditive processes, stochastic networks, and generalized renewal theory. In Proc. Int. Research Seminar, Statistical Laboratory, University of California, Berkeley, eds Neyman, J. and Le Cam, L. M. Springer, New York, pp. 61–110.
[11] Janson, S. (1999). One, two and three times log n/n for paths in a complete graph with random weights. Combinatorics Prob. Comput. 8, 347–361.
[12] Janson, S. and Luczak, M. J. (2009). A new approach to the giant component problem. Random Structures Algorithms 34, 197–216.
[13] Leskelä, L. and Ngo, H. (2017). The impact of degree variability on connectivity properties of large networks. Internet Math. 13, 1–24.
[14] Leskelä, L. and Vihola, M. (2013). Stochastic order characterization of uniform integrability and tightness. Statist. Prob. Lett. 83, 382–389.
[15] Marshall, A. W., Olkin, I. and Arnold, B. C. (2011). Inequalities: Theory of Majorization and its Applications. Springer, New York.
[16] Newman, M. E. J. (2010). Networks: An Introduction. Oxford University Press.
[17] Pittel, B. (1987). On spreading a rumor. SIAM J. Appl. Math. 47, 213–223.
[18] van der Hofstad, R. (2018). Random Graphs and Complex Networks. Vol. 2. To appear.
[19] Van Mieghem, P. (2014). Performance Analysis of Complex Networks and Systems. Cambridge University Press.

Figure 1: Flooding times on random δ-regular graphs with δ = 3, κ = 1, and λ1 = 1/2. The blue circles represent simulated values of flood1(G), and the red triangles represent simulated values of flood2(G). The blue solid line and the red dashed line correspond to the limiting formulas of Corollary 1. (Color online.)
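A simulation in the spirit of Figure 1 can be sketched as follows. This is a minimal sketch, not the code behind the figure: it assumes that flood1(G) is the one-sided flooding time $\max_v (W_G(u^*, v) + X_1(v))$ from a uniform root and flood2(G) its two-sided analogue $\max_{u,v} (W_G(u, v) + X_1(v))$, with Exp(κ) link weights and Exp(λ1) boundary weights; a configuration-model multigraph stands in for a uniform random δ-regular graph.

```python
import heapq
import math
import random

random.seed(3)

def random_regular_multigraph(n, d, rate):
    """Configuration-model pairing of the n*d half-edges, with i.i.d.
    Exp(rate) link weights; loops and multi-edges are harmless here."""
    stubs = [v for v in range(n) for _ in range(d)]
    random.shuffle(stubs)
    adj = [[] for _ in range(n)]
    for i in range(0, n * d, 2):
        u, v = stubs[i], stubs[i + 1]
        w = random.expovariate(rate)
        adj[u].append((v, w))
        adj[v].append((u, w))
    return adj

def dijkstra(adj, src):
    """Weighted graph distances W_G(src, .) from a single source."""
    dist = [math.inf] * len(adj)
    dist[src] = 0.0
    pq = [(0.0, src)]
    while pq:
        du, u = heapq.heappop(pq)
        if du > dist[u]:
            continue
        for v, w in adj[u]:
            dv = du + w
            if dv < dist[v]:
                dist[v] = dv
                heapq.heappush(pq, (dv, v))
    return dist

n, delta, kappa, lam1 = 300, 3, 1.0, 0.5
adj = random_regular_multigraph(n, delta, kappa)
x1 = [random.expovariate(lam1) for _ in range(n)]   # boundary weights X_1(v)

root = random.randrange(n)
d_root = dijkstra(adj, root)
flood1 = max(d_root[v] + x1[v] for v in range(n))   # one-sided flooding time
flood2 = max(max(dijkstra(adj, u)[v] + x1[v] for v in range(n))
             for u in range(n))                     # two-sided flooding time
```

Averaging `flood1 / log n` and `flood2 / log n` over many realizations and increasing n is what a plot like Figure 1 compares against the limiting formulas of Corollary 1.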


Figure 2: Simulated trajectories of the number of nodes in state 1 or state 2 (blue, solid) and the number of nodes in state 2 (red, dashed) for a random δ-regular graph with n = 1000, δ = 3, κ = 1, and λ1 = 1/2. (Color online.)