
On local weak limit and subgraph counts for sparse random graphs

Published online by Cambridge University Press:  21 July 2022

Valentas Kurauskas*
Affiliation:
Faculty of Mathematics and Informatics, Vilnius University
*
*Postal address: Akademijos 4, LT-08412 Vilnius, Lithuania. Email address: valentas@gmail.com

Abstract

We use an inequality of Sidorenko to show a general relation between local and global subgraph counts and degree moments for locally weakly convergent sequences of sparse random graphs. This yields an optimal criterion to check when the asymptotic behaviour of graph statistics, such as the clustering coefficient and assortativity, is determined by the local weak limit.

As an application we obtain new facts for several common models of sparse random intersection graphs where the local weak limit, as we see here, is a simple random clique tree corresponding to a certain two-type Galton–Watson branching process.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

A rooted graph is a pair (H, v) where H is a graph and $v \in V(H)$ is a distinguished vertex called the root. We often use only the symbol H to denote (H, v); in this case we write $\mathrm{root}(H)=v$ . For a graph G and its vertex v, let $B_r$ be the function that maps (G, v) to the rooted graph (H, v), where H is the subgraph induced on the vertices of G at distance at most r from v. We write $B_r(G,v)$ for $B_r((G,v))$ , and similarly for other functions on rooted graphs.

A graph is locally finite if the degree of each of its vertices is finite. Let $\cong$ denote the root-preserving isomorphism relation between connected rooted graphs. Let ( $\mathcal{G}_*$ , $d_{\operatorname{loc}}$ ) be the space of rooted connected locally finite graphs, considered up to the equivalence $\cong$ , with distance

\begin{align*}d_{\operatorname{loc}}(G_1, G_2) = 2^{-\sup \{r\,:\, B_r(G_1) \cong B_r(G_2)\}}.\end{align*}
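To make the metric concrete, here is a minimal Python sketch (the helper names `ball`, `induced`, `rooted_iso`, and `d_loc` are ours, not the paper's) that computes $B_r$ by breadth-first search and evaluates $d_{\operatorname{loc}}$ on small graphs by brute-force rooted isomorphism testing; graphs are adjacency-set dictionaries:

```python
from itertools import permutations

def ball(adj, v, r):
    """Vertex set of B_r(G, v): all vertices within distance r of v (BFS)."""
    dist, frontier = {v: 0}, [v]
    while frontier:
        nxt = []
        for u in frontier:
            if dist[u] == r:
                continue
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    nxt.append(w)
        frontier = nxt
    return set(dist)

def induced(adj, verts):
    """Adjacency of the subgraph induced on `verts`."""
    return {u: {w for w in adj[u] if w in verts} for u in verts}

def rooted_iso(adj1, r1, adj2, r2):
    """Brute-force root-preserving isomorphism test (tiny graphs only)."""
    v1, v2 = sorted(adj1), sorted(adj2)
    if len(v1) != len(v2):
        return False
    for perm in permutations(v2):
        phi = dict(zip(v1, perm))
        if phi[r1] != r2:
            continue
        if all((phi[b] in adj2[phi[a]]) == (b in adj1[a])
               for a in v1 for b in v1):
            return True
    return False

def d_loc(adj1, r1, adj2, r2, rmax=8):
    """2^{-sup{r : B_r(G1) iso B_r(G2)}}, with the sup truncated at rmax."""
    for r in range(rmax + 1):
        b1 = induced(adj1, ball(adj1, r1, r))
        b2 = induced(adj2, ball(adj2, r2, r))
        if not rooted_iso(b1, r1, b2, r2):
            return 2.0 ** (-(r - 1))
    return 0.0  # indistinguishable up to radius rmax

# Example: a triangle rooted at 0 versus a 3-vertex path rooted at its centre.
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
path = {0: {1}, 1: {0, 2}, 2: {1}}
```

For these two rooted graphs the balls of radius 0 agree but those of radius 1 do not, so the distance is $2^{-0} = 1$.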

Consider a sequence of finite graphs $\{G_n, n=1,2, \ldots\}$ . In this paper we assume $|V(G_n)| \ge 1$ for $n \ge 1$ . Let $v^*_n$ be a uniformly random vertex from $V(G_n)$ . The component of $G_n$ containing $v^*_n$ together with root $v^*_n$ induces a Borel measure $\mu_n$ on $(\mathcal{G}_*, d_{\operatorname{loc}})$ for each n. Let $\mu^*$ be another Borel measure on $(\mathcal{G}_*, d_{\operatorname{loc}})$ , and let $G^*$ denote a random element with law $\mu^*$ . (Without loss of generality we assume that all random objects we define in the paper are random elements in a single probability space $(\Omega, {\mathcal{F}}, {\mathbb{P}})$ with the specified laws; the integration ${\mathbb{E}}$ is over $\Omega$ .) Following Benjamini and Schramm [Reference Benjamini and Schramm5], Aldous, Lyons, and Steele, and other authors [Reference Aldous and Steele2, Reference Lovász36, Reference Lyons37], we say that $G^*$ is the local weak limit of $\{(G_n,v^*_n)\}$ and write $(G_n,v^*_n) \,\xrightarrow{d}\, G^*$ if and only if the measures $\mu_n$ converge weakly to $\mu^*$ : for each continuous bounded function $f\,:\, (\mathcal{G}_*, d_{\operatorname{loc}}) \to \mathbb{R}$ ,

(1.1) \begin{equation} {\mathbb{E}} f(G_n, v^*_n) \to {\mathbb{E}} f(G^*).\end{equation}

Here and below all limits are as $n\to \infty$ , unless stated otherwise. Since $(\mathcal{G}_*, d_{\operatorname{loc}})$ is separable and complete [Reference Aldous and Lyons1], a standard argument (e.g. Theorem 2.3 of [Reference Billingsley8]) shows that $(G_n, v^*_n) \,\xrightarrow{d}\, G^*$ if and only if, for each non-negative integer r and each rooted connected graph H,

\begin{align*}{\mathbb{P}}(B_r(G_n, v^*_n) \cong H) \to {\mathbb{P}}(B_r(G^*) \cong H).\end{align*}

We focus on models of random graphs with bounded average degree. Among others, the inhomogeneous random graph model of Bollobás, Janson, and Riordan [Reference Bollobás, Janson and Riordan22] and the preferential attachment model (see Berger, Borgs, Chayes, and Saberi [Reference Berger, Borgs, Chayes and Saberi7]) have been shown to have a weak limit (in an explicit form). Recently such a limit was also shown to exist for random planar graphs [Reference Stufler46]. The local weak limit, if it exists, yields a lot of information about the asymptotics of various graph parameters; see e.g. [Reference Benjamini, Lyons and Schramm6], [Reference Bollobás and Riordan19], [Reference Bollobás, Janson and Riordan22], [Reference Bordenave and Lelarge23], [Reference Lovász36], and [Reference Salez44].

The present contribution consists of a general result, Theorem 2.1, relating the asymptotics of subgraph counts with the local weak limit, and its application in the area of random intersection graphs.

The structure of the paper is as follows. In Section 2 we present and prove Theorem 2.1. In a separate result, Theorem 3.1 of Section 3, we determine the (very simple) local weak limit of several popular random intersection graph models. Combining this with Theorem 2.1 and using the fact that many important graph parameters can be expressed in terms of small subgraph counts, we obtain a number of previous and some new results for this type of model; see Section 4. The same method works for any sparse random graph model where we have local weak convergence (see e.g. Section 4.2).

The first manuscript of this paper was completed and posted to arXiv in 2015 [Reference Kurauskas40]. Since then some work in a similar general direction has appeared, including, for example, [Reference Vadon, Komjáthy and van der Hofstad47], whose authors were unaware of the very general Theorem 2.1. A recent book in preparation [Reference van der Hofstad33] also devotes a chapter to weak limits as a general technique for studying real-world networks. The present version of the paper fixes several minor errors and omissions and has an updated literature list.

2. Local weak limit and subgraph counts

In Section 7 of [Reference Bollobás and Riordan19], Bollobás and Riordan remark that the local weak limit does not always determine the global subgraph count asymptotics; see also Example 2.1 below. They propose an extra condition of ‘exponentially bounded tree counts’. Our main result is that a simple condition on the degree moment is sufficient and, in general, necessary.

A homomorphism from a graph H to a graph G is a mapping from V(H) to V(G) that maps adjacent vertices in H to adjacent vertices in G. Let $\mathrm{emb}(H, G)$ denote the number of embeddings (injective homomorphisms) from H to G. For a rooted graph H′ let $\mathrm{emb}^{\prime}(H^{\prime}, G, v)$ denote the number of embeddings from H ′ to G that map $\mathrm{root}(H^{\prime})$ to v. Let ${\mathcal R}(H)$ denote the set of all $|V(H)|$ possible rooted graphs obtained from a graph H. Finally, let $d_G(v)$ denote the degree of vertex v in G.
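For small graphs the counts $\mathrm{emb}$ and $\mathrm{emb}^{\prime}$ can be computed by brute force. A minimal sketch (our own helper names; graphs are adjacency-set dictionaries, and the rooted version simply pins the image of the root):

```python
from itertools import permutations

def emb(H_adj, G_adj):
    """Number of embeddings (injective homomorphisms) of H into G."""
    hv = list(H_adj)
    count = 0
    for img in permutations(G_adj, len(hv)):
        phi = dict(zip(hv, img))
        if all(phi[b] in G_adj[phi[a]] for a in hv for b in H_adj[a]):
            count += 1
    return count

def emb_rooted(H_adj, H_root, G_adj, v):
    """Embeddings of the rooted graph H into G mapping root(H) to v."""
    hv = [H_root] + [u for u in H_adj if u != H_root]
    count = 0
    for img in permutations(G_adj, len(hv)):
        if img[0] != v:
            continue
        phi = dict(zip(hv, img))
        if all(phi[b] in G_adj[phi[a]] for a in hv for b in H_adj[a]):
            count += 1
    return count

# A single edge K_2 embedded into a triangle.
k2 = {'a': {'b'}, 'b': {'a'}}
tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
```

Note that $\mathrm{emb}(H, G) = \sum_{v \in V(G)} \mathrm{emb}^{\prime}(H^{\prime}, G, v)$ for any rooting $H^{\prime} \in {\mathcal R}(H)$, which is the identity behind the normalisation $n_1^{-1}\, \mathrm{emb}(H, G_n)$ in Theorem 2.1(iii).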

Theorem 2.1. Let $h\ge2$ be an integer, let $\{G_n, n = 1, 2, \ldots\}$ be a sequence of graphs, such that $n_1 = n_1(n) = |V(G_n)| \to \infty$ and $n_1 \ge 1$ , let $v^*_n$ be chosen uniformly at random from $V(G_n)$ , and suppose $(G_n,v^*_n) \,\xrightarrow{d}\, G^*$ . Write $d_n = d_{G_n}(v^*_n)$ , $d^* = d_{G^*}(r^*)$ , where $r^* = \mathrm{root}(G^*)$ , and assume ${\mathbb{E}} (d^*)^{h-1} < \infty$ . Then the following statements are equivalent:

  1. (i) ${\mathbb{E}} d_n^{h-1} \to {\mathbb{E}} (d^*)^{h-1}$ ,

  2. (ii) $d_n^{h-1}$ is uniformly integrable,

  3. (iii) for any connected graph H on h vertices and any $H^{\prime} \in {\mathcal R}(H)$ ,

    \begin{align*}n_1^{-1}\ \mathrm{emb}(H, G_n) \to {\mathbb{E}}\ \mathrm{emb}^{\prime}(H^{\prime}, G^*, r^*).\end{align*}

The above theorem provides a sufficient condition for the continuous but not necessarily bounded function $f_H \,:\, (\mathcal{G}_*, d_{\operatorname{loc}}) \to \mathbb{R}$ defined by $f_H(G, v)= \mathrm{emb}^{\prime}(H, G, v)$ to satisfy (1.1). It is easy to construct weakly convergent sequences for which (i)–(iii) fail to hold.

Example 2.1. Let $(G_n,v^*_n)$ be as in Theorem 2.1 and assume $|V(G_n)|= n$ . Let $(G^{\prime}_n, v^*_n)$ be obtained from $G_n$ by adding the edges of a clique on a subset $S_n$ of $V(G_n)$ . If $|S_n| = \Omega(n^{1/h})$ and $|S_n| = {\mathrm{o}} (n)$ then $(G^{\prime}_n,v^*_n) \,\xrightarrow{d}\, G^*$ , but (i)–(iii) do not hold for $G^{\prime}_n$ .

The proof of our theorem relies on the following basic but not widely known result of Sidorenko [Reference Sidorenko45].

Theorem 2.2. (Sidorenko, 1994.) Let H be a connected graph on h vertices. Then, for any graph G,

\begin{align*}\hom\!(H, G) \le \hom\!(K_{1,h-1}, G) = \sum_{v \in V(G)} d_G(v)^{h-1}.\end{align*}

Here $\hom\!(H, G)$ is the number of homomorphisms from H to G, and $K_{1, s}$ is the complete bipartite graph with part sizes 1 and s. A special case where H is a path has been rediscovered in [Reference Fiol and Garriga27]; see also [Reference Csikvári and Zhicong24].
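Sidorenko’s bound is easy to check numerically on small instances. A hedged sketch (brute-force `hom`, our naming; note that $K_{1,2}$ is the path on three vertices, so it attains the bound with equality):

```python
from itertools import product

def hom(H_adj, G_adj):
    """Number of homomorphisms from H to G (brute force over all maps)."""
    hv = list(H_adj)
    count = 0
    for img in product(list(G_adj), repeat=len(hv)):
        phi = dict(zip(hv, img))
        if all(phi[b] in G_adj[phi[a]] for a in hv for b in H_adj[a]):
            count += 1
    return count

tri = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}          # G = K_3; also H = K_3 (h = 3)
p3 = {0: {1}, 1: {0, 2}, 2: {1}}                 # K_{1,2}: the star attaining the bound
star_bound = sum(len(tri[v]) ** 2 for v in tri)  # sum of d_G(v)^{h-1} with h = 3
```

Here $\hom(K_3, K_3) = 6 \le 12 = \sum_v d(v)^2$, while $\hom(K_{1,2}, K_3) = 12$ meets the bound exactly.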

Below, in Section 4, we restate Theorem 2.1 in a random setting and demonstrate how it implies general results on network statistics expressible through subgraph counts such as the clustering and assortativity coefficients.

Recall that a sequence of random variables $\{X_n, n=1,2,\ldots\}$ is uniformly integrable if $\lim_{a\to \infty} \sup_n {\mathbb{E}} |X_n| {\mathbb I}_{|X_n| > a} = 0$ , or equivalently if ${\mathbb{E}} |X_n| {\mathbb I}_{|X_n| > \omega_n} \to 0$ for any $\omega_n \to \infty$ . A basic fact (see e.g. [Reference Billingsley9, pp. 31–32]) is as follows.

Lemma 2.1. Suppose random variables $X^*, X_n, n=1,2,\ldots$ are non-negative, integrable and $X_n$ converges to $X^*$ in distribution as $n \to \infty$ . Then $\{X_n\}$ is uniformly integrable if and only if ${\mathbb{E}} X_n \to {\mathbb{E}} X^*$ .

Proof of Theorem 2.1. In the proof denote $G=G_n$ and $v^*=v^*_n$ .

(i) $\Leftrightarrow$ (ii) We see that $d_n$ converges in distribution to $d^*$ by (1.1). Thus $d_n^{h-1}$ converges in distribution to $(d^*)^{h-1}$ and the proof follows by Lemma 2.1.

(i) $\Rightarrow$ (iii) Suppose (i) holds. Fix any connected graph H with $|V(H)| = h$ . Let r be the diameter of H. Write $b_j(G,v) = |B_j(G, v)|$ and $b_j^* = |B_j(G^*)|$ . Note that ${\mathbb{P}}(b_j^* = \infty) = 0$ for $j=0,1,\ldots$ since $G^*$ is locally finite. For a rooted graph H ′ denote

\begin{align*}X(H^{\prime}) = \mathrm{emb}^{\prime}(H^{\prime}, G, v^*) \quad \text{and} \quad X^*(H^{\prime}) = \mathrm{emb}^{\prime}(H^{\prime}, G^*, r^*).\end{align*}

Using Sidorenko’s theorem, Theorem 2.2, for any $H^{\prime} \in {\mathcal R}(H)$ ,

(2.1) \begin{align} {\mathbb{E}} X(H^{\prime}) &= n_1^{-1} \mathrm{emb}(H,G) \le n_1^{-1} \hom\!(H,G) \notag \\ & \le n_1^{-1} \hom\!(K_{1,h-1},G) = {\mathbb{E}} d_n^{h-1} \to {\mathbb{E}} (d^*)^{h-1} < \infty. \end{align}

Next, we have $X(H^{\prime}) \to X^*(H^{\prime})$ in distribution for each $H^{\prime} \in {\mathcal R}(H)$ (apply (1.1) to the continuous and bounded functions $f_k(G,v) = {\mathbb I}_{\mathrm{emb}^{\prime}(H^{\prime}, G, v) = k}$ for $k = 0, 1, \ldots$ ). Therefore, by Fatou’s lemma and (2.1),

\begin{equation*} {\mathbb{E}} X^*(H^{\prime}) \le \lim \inf {\mathbb{E}} X(H^{\prime}) < \infty.\end{equation*}

Let $\epsilon \in (0,1)$ . Since ${\mathbb{E}} (d^*)^{h-1}$ and ${\mathbb{E}} X^*(H^{\prime})$ are finite, we can find a $t > 0$ such that

\begin{align*}&{\mathbb{E}} (d^*)^{h-1} {\mathbb I}_{d^* > t} \le {\mathbb{E}} (d^*)^{h-1} {\mathbb I}_{b^*_{r+1} > t} < \epsilon \quad \text{and} \\ &{\mathbb{E}} X^*(H^{\prime}) {\mathbb I}_{b_{r+1}^* > t}< \epsilon \quad \text{for each } H^{\prime} \in {\mathcal R}(H).\end{align*}

Pick $s \ge t $ large enough that ${\mathbb{P}}(b_{r+1}^* > s) \le 0.5 \epsilon t^{-(h + r - 1)}$ . By Lemma 2.1 and (1.1), for each $H^{\prime} \in {\mathcal R}(H)$ ,

(2.2) \begin{align} &{\mathbb{E}} d_n^{h-1} {\mathbb I}_{d_n \le t} \to {\mathbb{E}} (d^*)^{h-1} {\mathbb I}_{d^* \le t}, \qquad\qquad\qquad\qquad\qquad \end{align}
(2.3) \begin{align} &{\mathbb{E}} X(H^{\prime}) {\mathbb I}_{b_{r+1}(G, v^*) \le s} \to {\mathbb{E}} X^*(H^{\prime}) {\mathbb I}_{b_{r+1}^* \le s} \ge {\mathbb{E}} X^*(H^{\prime}) -\epsilon, \end{align}
(2.4) \begin{align} &{\mathbb{E}} X(H^{\prime}) {\mathbb I}_{b_{r+1}(G, v^*) \in (t,s]} \to {\mathbb{E}} X^*(H^{\prime}) {\mathbb I}_{b_{r+1}^* \in (t,s]} \le \epsilon,\qquad\quad \end{align}
(2.5) \begin{align} & \ \ {\mathbb{P}}(b_{r+1}(G, v^*) > s) \to {\mathbb{P}}(b_{r+1}^* > s) \le 0.5 \epsilon t^{-(h+r-1)}.\qquad \end{align}

Define subsets of V(G):

\begin{align*}R_1 \,:\!=\, \{v\,:\, d_G(v) > t\}, \quad R_2 \,:\!=\, \{v\,:\, b_{r+1}(G, v) > s\}.\end{align*}

We call an embedding $\sigma$ of H into G bad if its image shares a vertex with $R_1 \cup R_2$ . Denote the set of all bad embeddings by ${\mathcal X}_{\operatorname{bad}}$ . Note that for $H^{\prime} \in {\mathcal R}(H)$

(2.6) \begin{equation}0 \le \mathrm{emb}(H, G) - n_1 {\mathbb{E}} X(H^{\prime}) {\mathbb I}_{b_{r+1}(G, v^*) \le s} \le |{\mathcal X}_{\operatorname{bad}}|. \end{equation}

Let ${\mathcal X}_1$ be the set of all embeddings $\sigma$ whose image intersects both $R_1$ and $V \setminus (R_1 \cup R_2)$ . Let ${\mathcal X}_2 = {\mathcal X}_{\operatorname{bad}} \setminus {\mathcal X}_1$ . By Theorem 2.2, the number of bad embeddings whose image is entirely contained in $R_1$ is

(2.7) \begin{align} \mathrm{emb}(H, G[R_1]) &\le \hom\!(H, G[R_1]) \notag \\ &\le \sum_{v \in R_1} d_G(v)^{h-1} \notag \\& = n_1 \bigl({\mathbb{E}} d_n^{h-1} - {\mathbb{E}} d_n^{h-1}{\mathbb I}_{d_n \le t}\bigr) \notag \\ &\le n_1 {\mathbb{E}} d_n ^{h-1} - n_1{\mathbb{E}} (d^*)^{h-1} + \epsilon n_1 + {\mathrm{o}} (n_1) \notag \\ &\le \epsilon n_1 + {\mathrm{o}} (n_1).\end{align}

Here the last two inequalities follow by (i) and (2.2). Let $v \in V \setminus (R_1 \cup R_2)$ be a vertex in the image of an embedding in ${\mathcal X}_1$ . By the definition of $R_1$ and $R_2$ , $b_{r+1}(G, v) \in (t, s]$ . So using (2.4),

\begin{align*}|{\mathcal X}_1| \le n_1 \sum_{H^{\prime} \in {\mathcal R}(H)} {\mathbb{E}} X(H^{\prime}) {\mathbb I}_{b_{r+1}(G,v^*) \in (t, s]} \le h \epsilon n_1 + {\mathrm{o}} (n_1).\end{align*}

Now consider the subgraph $H_\sigma \cong H$ of G corresponding to an embedding $\sigma \in {\mathcal X}_2$ . The graph $H_\sigma$ cannot have an edge in $E_1 = \{xy \in G\,:\, x \in R_1, y \in V(G) \setminus (R_1 \cup R_2)\}$ , since otherwise $\sigma$ would be an element of ${\mathcal X}_1 = {\mathcal X}_{\operatorname{bad}}\setminus {\mathcal X}_2$ . So $V(H_\sigma)$ is contained in $R_1 \cup Q$ , where

\begin{align*}Q = \bigcup_{v \in R_2} V(B_r(G - E_1, v)) \setminus R_1.\end{align*}

Note that since each vertex in Q has degree at most t, $|Q| \le 2 |R_2| t^r$ . By Theorem 2.2,

(2.8) \begin{equation}|{\mathcal X}_2| \le \sum_{v \in R_1} d_G(v)^{h-1} + \sum_{v \in Q} d_G(v)^{h-1}.\end{equation}

For the second term we have by (2.5)

(2.9) \begin{equation} \sum_{v \in Q} d_G(v)^{h-1} \le |Q| t^{h-1} \le 2 |R_2| t^r t^{h-1} \le \epsilon n_1 + {\mathrm{o}} (n_1). \end{equation}

Combining (2.7), (2.8), and (2.9), we obtain $|{\mathcal X}_2| \le 2 \epsilon n_1 + {\mathrm{o}} (n_1)$ . We have proved

\begin{align*} |{\mathcal X}_{\operatorname{bad}}| = |{\mathcal X}_1| + |{\mathcal X}_2| \le (h + 2) \epsilon n_1 + {\mathrm{o}} (n_1).\end{align*}

Since the proof holds for arbitrarily small $\epsilon$ , we see that $n_1^{-1} |{\mathcal X}_{\operatorname{bad}}| \to 0$ . Thus (iii) follows using (2.3) and (2.6).

(iii) $\Rightarrow$ (i) Write $(x)_p = x (x-1) \cdots (x-p+1)$ . Then (iii) applied to $H = K_{1, h-1}$ yields ${\mathbb{E}} (d_n)_{h-1} \to {\mathbb{E}} (d^*)_{h-1}$ , while $(G,v^*) \,\xrightarrow{d}\, G^*$ shows that $(d_n)_{h-1} \to (d^*)_{h-1}$ in distribution. Thus $(d_n)_{h-1}$ is uniformly integrable by Lemma 2.1. This implies that for any $j = 1, 2, \ldots, h-1$ the sequence $(d_n)_j$ is uniformly integrable (note that $(d_n)_j \le (d_n)_{h-1}$ whenever $d_n \ge h-1$ , and $(d_n)_j$ is bounded otherwise), so by Lemma 2.1 again ${\mathbb{E}} (d_n)_j \to {\mathbb{E}} (d^*)_j$ . Using $S(h-1,j)$ to denote Stirling numbers of the second kind,

\begin{align*}& {\mathbb{E}} (d_n)^{h-1} = \sum_{j=1}^{h-1} S(h-1,j) {\mathbb{E}} (d_n)_j\to \sum_{j=1}^{h-1} S(h-1,j) {\mathbb{E}} (d^*)_j = {\mathbb{E}} (d^*)^{h-1}.\end{align*}
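The Stirling-number identity $x^{h-1} = \sum_j S(h-1,j)\,(x)_j$ used in the last display can be verified numerically; a small sketch (our helper names) using the standard recurrence $S(n,k) = k\,S(n-1,k) + S(n-1,k-1)$:

```python
def stirling2(n, k):
    """Stirling numbers of the second kind via the standard recurrence."""
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def falling(x, j):
    """Falling factorial (x)_j = x (x - 1) ... (x - j + 1)."""
    out = 1
    for i in range(j):
        out *= x - i
    return out

def check_identity(x, m):
    """x^m == sum_j S(m, j) (x)_j for integers x >= 0, m >= 1."""
    return x ** m == sum(stirling2(m, j) * falling(x, j) for j in range(1, m + 1))
```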

The next fact is simple and known (see Lemma 9.3 of [Reference Bollobás, Janson and Riordan22]), but we include a proof for completeness.

Lemma 2.2. Suppose $(G_n,v^*_n) \,\xrightarrow{d}\, G^*$ and the degree $d_n$ of a uniformly random vertex $v_n^*$ from $V(G_n)$ is uniformly integrable. Write $n_1 = n_1(n) = |V(G_n)|$ . Let $G^{\prime}_n$ be obtained from $G_n$ by adding or removing edges incident to a set $S_n \subseteq V(G_n)$ of size ${\mathrm{o}} (n_1)$ . Then $(G^{\prime}_n,v^*_n) \,\xrightarrow{d}\, G^*$ .

Proof of Example 2.1. It is straightforward that the uniform integrability condition (ii) fails for $G^{\prime}_n$ , so the other two conditions also fail by Theorem 2.1. The fact that $G_n$ and $G^{\prime}_n$ have the same local weak limit $G^*$ follows from Lemma 2.2.

Proof of Lemma 2.2. Let $N_n$ denote the set of vertices in $V(G_n) \setminus S_n$ which have a neighbour in $S_n$ .

Claim 2.3. For any $\epsilon \in (0,1)$ there are $\delta > 0$ , $n_0 > 0$ such that if $n \ge n_0$ and $0 < |S_n| < \delta n_1$ then $|N_n| \le \epsilon n_1$ .

Proof. Let $\delta$ and $n_0$ be such that $\delta < \epsilon$ and ${\mathbb{E}} d_n {\mathbb I}_{d_n > 0.5 \epsilon \delta^{-1}} < 0.5 \epsilon$ for all $n \ge n_0$ ; such $\delta$ and $n_0$ exist by the uniform integrability of $d_n$ . Assume, for contradiction, that $|N_n| > \epsilon n_1$ for some $n \ge n_0$ . Write $d(v) = d_{G_n}(v)$ . We have

\begin{align*} n_1 {\mathbb{E}} d_n {\mathbb I}_{d_n > 0.5 \epsilon \delta^{-1}} &\ge \sum_{v \in S_n} d(v) {\mathbb I}_{d(v) > 0.5 \epsilon \delta^{-1}} \\ &\ge \sum_{v \in S_n} \bigl(d(v) - 0.5 \epsilon \delta^{-1}\bigr) {\mathbb I}_{d(v) > 0.5 \epsilon \delta^{-1}} \\ & \ge |S_n| \Biggl(|S_n|^{-1} \sum_{v \in S_n} d(v) - 0.5 \epsilon \delta^{-1} \Biggr)\\ & \ge \epsilon n_1 - 0.5 \epsilon n_1 \\ &\ge 0.5 \epsilon n_1, \end{align*}

which is a contradiction. Here we used Jensen’s inequality and the assumption $\sum_{v\in S_n} d(v) \ge |N_n| > \epsilon n_1$ .

Now fix any positive integer r. As is done in [Reference Bollobás, Janson and Riordan22], we apply the above claim r times to get that the set $N_n^{(r)}$ of vertices at distance at most r from $S_n$ in $G_n$ has size ${\mathrm{o}} (n_1)$ . Now $(G^{\prime}_n, v_n^*) \,\xrightarrow{d}\, G^*$ follows since

\begin{equation*} {\mathbb{P}}(B_r(G_n, v^*_n) \cong B_r(G^{\prime}_n, v^*_n)) \ge {\mathbb{P}} \bigl(v^*_n \not \in N_n^{(r)}\bigr) = 1 - {\mathrm{o}} (1). \end{equation*}

3. Uncorrelated random clique trees

Random intersection graphs were introduced in [Reference Karoński, Scheinerman and Singer-Cohen38] and received some attention as a potential model for large empirical networks with clustering; see e.g. the survey papers [Reference Bloznelis, Jaworski, Godehardt, Kurauskas and Rybarczyk16] and [Reference Bloznelis, Jaworski, Godehardt, Kurauskas and Rybarczyk17]. We show that, in the regime that yields sparse graphs with a positive clustering coefficient, the local weak limit of such models is very specific: it is an uncorrelated random clique tree, defined formally below.

Let $H = (V^1, V^2, E)$ be a bipartite graph. The intersection graph $G = G(H)$ of H is the graph on the vertex set $V(G) = V^1$ with edges

\begin{align*}E(G) = \bigl\{uv\,:\, \exists w\in V^2 \text{ such that } uw, wv \in H\bigr\},\end{align*}

where $e \in H$ is shorthand for $e \in E(H)$ . An intersection graph of a random bipartite graph H is called a random intersection graph. It will be convenient to assume that $V^i$ consists of the first $n_i$ elements of a countable set ${\mathcal V}^i$ , where ${\mathcal V}^1 \cap {\mathcal V}^2 = \emptyset$ . The set $V^2$ is often called the set of attributes. (The names V and W are often used in the literature for $V^1$ and $V^2$ .) We will call the elements of ${\mathcal V}^i$ vertices of type i. For $v \in V^i$ we denote $S_v = \Gamma_H(v)$ , $X_v = |S_v|$ , where $\Gamma_H(v)$ is the set of neighbours of v in the graph H. Sometimes we will want to stress the type of v in the notation. Since $V^i = \bigl\{v_1^{(i)}, v_2^{(i)} , \ldots\bigr\}$ consists of the first $n_i$ vertices of ${\mathcal V}^i$ , for $v = v_j^{(i)}$ we will set $X_v^{(i)} \,:\!=\, X_v$ and $S_v^{(i)} \,:\!=\, S_v$ . We will let $X \sim Y$ denote the fact that X and Y have the same distribution.
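The construction of $G(H)$ can be coded directly. A minimal sketch (our helper name; $H$ is represented by the neighbourhood sets $S_v$, mapping each $v \in V^1$ to its attributes):

```python
def intersection_graph(S):
    """Intersection graph of a bipartite H given as S: v -> set of attributes.
    Two vertices of V^1 are adjacent iff they share an attribute in V^2."""
    verts = list(S)
    adj = {v: set() for v in verts}
    for i, u in enumerate(verts):
        for v in verts[i + 1:]:
            if S[u] & S[v]:
                adj[u].add(v)
                adj[v].add(u)
    return adj

# Three vertices sharing attribute 'a' form a clique; vertex 4 is isolated.
G = intersection_graph({1: {'a'}, 2: {'a'}, 3: {'a'}, 4: {'b'}})
```

Each attribute shared by k vertices contributes a k-clique, which is why these models naturally produce clustering.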

Many different variants of the random bipartite graph H have been studied; see e.g. the survey papers [Reference Bloznelis, Jaworski, Godehardt, Kurauskas and Rybarczyk16] and [Reference Bloznelis, Jaworski, Godehardt, Kurauskas and Rybarczyk17].

  • The active random intersection graph: each $v \in V^1$ independently chooses $X^{(1)}_v$ from a distribution P on $\{0,\ldots,n_2\}$ , then draws a uniformly random subset $S^{(1)}_v \subseteq V^2$ of size $X^{(1)}_v$ as its neighbourhood (independently of other vertices). A special case is the binomial random intersection graph.

  • The passive random intersection graph: each $v \in V^2$ independently chooses $X^{(2)}_v$ from a distribution P on $\{0,\ldots,n_1\}$ , then draws a uniformly random subset $S^{(2)}_v \subseteq V^1$ of size $X^{(2)}_v$ as its neighbourhood (independently of other vertices).

  • The inhomogeneous random intersection graph $G^{\operatorname{inhomog}}(n_1, n_2, \xi^{(1)}, \xi^{(2)})$ : the vertices $v \in V^i$ are independently assigned random non-negative weights $\xi_v^{(i)} \sim \xi^{(i)}$ . Given the weights, edges vw appear in H independently with probability

    \begin{align*} \min\biggl(\dfrac {\xi_v^{(1)} \xi_w^{(2)}} {\sqrt{n_1 n_2}}, 1\biggr). \end{align*}
  • We will also consider random intersection graphs $G^{\operatorname{conf}}(d_1, d_2)$ based on the configuration model: see [Reference Guillaume and Latapy32], [Reference Janson and Luczak34], and [Reference Wormald48]. Let $d_1 = \{d_{1,u}, u \in V^1\}$ and $d_2 = \{d_{2,v}, v \in V^2\}$ be sequences of non-negative integers indexed by $V^1$ and $V^2$ respectively such that $\sum_u d_{1,u} = \sum_v d_{2,v}$ . The random bipartite multigraph $H^{\operatorname{conf}}(d_1, d_2)$ with parts $({V^{1}}, {V^{2}})$ of sizes $n_1$ and $n_2$ is obtained as follows. Distribute the $2 \sum_u d_{1,u}$ half-edges among the vertices of ${V^{1}} \cup {V^{2}}$ so that the jth vertex of part i, $v=v_j^{(i)}$ , receives $d_{i,v}$ half-edges. Pick a uniformly random perfect matching between the half-edges of parts $V^1$ and $V^2$ . In the bipartite graph, add an edge between u and v whenever a half-edge from u is matched with a half-edge from v (we allow multi-edges).
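The half-edge matching admits a compact sketch (our names; a uniform perfect matching between the two stub sets is obtained by pairing the part-1 stubs with a shuffled list of part-2 stubs):

```python
import random
from collections import Counter

def bipartite_config(d1, d2, seed=0):
    """Multigraph edge list of a bipartite configuration model; d1, d2 map
    vertices to degrees and must have equal degree sums."""
    assert sum(d1.values()) == sum(d2.values())
    rng = random.Random(seed)
    stubs1 = [u for u, d in d1.items() for _ in range(d)]  # part-1 half-edges
    stubs2 = [w for w, d in d2.items() for _ in range(d)]  # part-2 half-edges
    rng.shuffle(stubs2)  # uniform matching between the two stub sets
    return list(zip(stubs1, stubs2))

edges = bipartite_config({'u1': 2, 'u2': 1}, {'w1': 2, 'w2': 1})
deg1 = Counter(u for u, _ in edges)
deg2 = Counter(w for _, w in edges)
```

By construction the prescribed degree sequences are realised exactly, though multi-edges may occur.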

Usually (see e.g. [Reference Bloznelis12]) the above models yield random graphs with a linear number of edges and a clustering coefficient bounded away from zero only if ${{n_2}/{n_1}} = \Theta(1)$ . Therefore we will assume ${{n_2}/{n_1}} = \Theta(1)$ in this paper.

Let $\mu$ be the distribution of a random variable Z on $[0, \infty)$ with $0 < {\mathbb{E}} Z < \infty$ . We let $Z^*$ denote a random variable with the size-biased distribution

\begin{align*}\mu^*(A) = ({\mathbb{E}} Z)^{-1} \int_{A} t \,{\mathrm{d}} \mu(t)\end{align*}

for any Borel set A. If Z is integer-valued, then ${\mathbb{P}}(Z^*=k) = ({\mathbb{E}} Z)^{-1} k {\mathbb{P}}(Z=k)$ . (We follow the star notation of other authors; see e.g. [Reference Arratia, Goldstein and Kochman4] and [Reference Lovász36]. We also use symbols such as $G^*, d^*, v^*$ to denote objects unrelated to size-biased random variables; the actual meaning should be clear from the context.)
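For an integer-valued Z the size-biased pmf is explicit; a one-function sketch (hypothetical helper name):

```python
def size_biased_pmf(pmf):
    """pmf of Z* given the pmf of Z: P(Z* = k) = k P(Z = k) / E[Z]."""
    mean = sum(k * p for k, p in pmf.items())
    return {k: k * p / mean for k, p in pmf.items() if k > 0}

# Z uniform on {1, 3}: E[Z] = 2, so Z* puts mass 1/4 on 1 and 3/4 on 3.
star = size_biased_pmf({1: 0.5, 3: 0.5})
```

Note that $Z^*$ never takes the value 0, since the mass at 0 is weighted by the factor $k = 0$.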

Given two random variables $D_1, D_2$ on $\{0,1,2, \ldots\}$ with ${\mathbb{E}} D_1, {\mathbb{E}} D_2 \in (0, \infty)$ , define a multi-type Galton–Watson process ${\mathcal T} = {\mathcal T}(D_1, D_2)$ as follows. S(0) consists of a single root node $r=\mathrm{root}({\mathcal T})$ . The root r has a set S(1) of offspring, where $|S(1)| \sim D_1$ . For each $k \ge 1$ , $S(k+1)$ consists of the offspring of the nodes in S(k). Given $|S(k)|$ , the numbers of offspring of the nodes in S(k) are independent and each is distributed as $D_{i(k)}^*-1$ . Here $i(k)=2$ if k is odd and $i(k)=1$ otherwise. We call S(k), the set of vertices at distance k from the root, the generation k of ${\mathcal T}$ . A corresponding random tree, also denoted by ${\mathcal T}$ , is a graph on the vertex set $\cup_k S(k)$ with edges $\{uv\,:\, v\text{ is an offspring of }u\}$ and root r. Consider ${\mathcal T}$ as a bipartite graph with parts $({V^{1}}, {V^{2}})$ , where ${V^{1}}$ and ${V^{2}}$ consist of all nodes in generations $0,2,\ldots$ and $1,3,\ldots$ respectively. We define the uncorrelated random clique tree $G_{{\mathcal T}}$ to be the intersection graph of ${\mathcal T}$ rooted at r.
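The branching construction of $G_{\mathcal T}$ can be simulated down to a finite depth. In the sketch below (our names; the offspring laws $D_1$, $D_2^*-1$, $D_1^*-1$ are passed in as sampler callables), each type-2 node turns its tree neighbourhood into a clique of the intersection graph:

```python
def gw_clique_tree(d1, d2s, d1s, depth):
    """Intersection graph G_T of a sample of T(D1, D2) truncated at `depth`
    generations; d1, d2s, d1s sample D1, D2* - 1 and D1* - 1 respectively."""
    parent, gen, children = {0: None}, {0: 0}, {0: []}
    queue, nxt = [0], 1
    for v in queue:  # BFS; appending to the list while iterating is fine
        if gen[v] == depth:
            continue
        m = d1() if gen[v] == 0 else (d2s() if gen[v] % 2 == 1 else d1s())
        for _ in range(m):
            parent[nxt], gen[nxt], children[nxt] = v, gen[v] + 1, []
            children[v].append(nxt)
            queue.append(nxt)
            nxt += 1
    adj = {v: set() for v in queue if gen[v] % 2 == 0}  # type-1 nodes
    for w in queue:
        if gen[w] % 2 == 1:
            clique = [parent[w]] + children[w]  # tree neighbours of w
            for a in clique:
                for b in clique:
                    if a != b:
                        adj[a].add(b)
    return adj

# Deterministic offspring: D1 = 2, D2* - 1 = 2, D1* - 1 = 0.
G_T = gw_clique_tree(lambda: 2, lambda: 2, lambda: 0, depth=2)
```

With these deterministic offspring counts the sample is two triangles glued at the root: the root lies in the cliques {0, 3, 4} and {0, 5, 6}.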

For a finite (random) sequence A, we write $X \in_u A$ to denote the fact that X is chosen uniformly at random from all the elements of A (given A). For random variables $Z, Z_1, Z_2, \ldots,$ we let $Z_n \xrightarrow{d} Z$ denote the fact that $Z_n$ converges in distribution to Z.

Let H be a rooted connected graph. For a (multi-)graph G of size $n_1 \ge 1$ , write $p_r(G, H) = n_1^{-1} |\{v \in V(G)\,:\, B_r(G, v) \cong H\}|$ . Let $\{G_n, n=1,2,\ldots\}$ be a sequence of finite random graphs with $|V(G_n)|\ge 1$ , let $v^*_n \in_u V(G_n)$ , and let $G^*$ be a random graph on $(\mathcal{G}_*, d_{\operatorname{loc}})$ . (For multigraphs we define $G_1 \cong G_2$ if and only if there are bijections $\phi_1\,:\, V(G_1) \to V(G_2)$ and $\phi_2\,:\, E(G_1) \to E(G_2)$ such that $\phi_1$ maps the endpoints of e to the endpoints of $\phi_2(e)$ for each edge $e \in G_1$ , and $\phi_1(\mathrm{root}(G_1)) = \mathrm{root}(G_2)$ .) We write ${\mathcal L}((G_n,v^*_n)\mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*)$ as $n\to\infty$ if, for each non-negative integer r and each rooted connected graph H,

(3.1) \begin{equation}p_r(G_n, H) \xrightarrow{p} {\mathbb{P}}(B_r(G^*) \cong H).\end{equation}

As observed in the recent literature [Reference Stufler46], this is equivalent to the convergence of the conditional random measures ${\mathcal L}((G_n,v^*_n)\mid G_n)$ to the fixed measure ${\mathcal L}(G^*)$ in probability, also known as quenched convergence. That is, consider the space of Borel measures on $(\mathcal{G}_*, d_{\operatorname{loc}})$ with the Lévy–Prokhorov metric $\pi$ . Then $\pi({\mathcal L}((G_n,v^*_n)\mid G_n), {\mathcal L}(G^*)) \xrightarrow{p} 0$ if and only if (3.1) holds for each r and each H as above. This can be seen using an argument similar to (iv) on page 72 of Billingsley [Reference Billingsley9]. While this equivalence reduces many questions related to local weak limits to the classical theory for separable metric spaces, in this paper it is only used to justify our notation.

Theorem 3.1. Let $\{G_n\}$ be a sequence of random intersection graphs where the underlying bipartite graphs are $H_n = ({V^{1}}, {V^{2}}, F)$ with ${V^{1}} = {V^{1}(n)}, {V^{2}} = {V^{2}(n)}$ and $F = F(n)$ . For $i=1, 2$ write $v_i^* = v_i^*(n)$ , where $v_i^*(n) \in_u {V^{i}}$ , $n_i = n_i(n) = |{V^{i}}|$ and ${X^{(i)}} = {X^{(i)}(n)} = X_{v_i^*}$ .

Suppose $\{n_1\}, \{n_2\}$ are sequences of positive integers, such that $n_1,n_2 \to \infty$ , $n_2/n_1 \to \beta \in (0, \infty)$ and

  1. (i) either $G_n$ , $n=1,2,\ldots$ is an active random intersection graph and there is a random variable $D_1$ with ${\mathbb{E}} D_1 \in (0, \infty)$ such that ${\mathbb{E}} {X^{(1)}} \to {\mathbb{E}} D_1$ and

    (3.2) \begin{equation}{X^{(1)}} \xrightarrow{d} D_1;\end{equation}
  2. (ii) or $G_n$ , $n=1,2,\ldots$ is a passive random intersection graph and there is a random variable $D_2$ with ${\mathbb{E}} D_2 \in (0, \infty)$ such that ${\mathbb{E}} {X^{(2)}} \to {\mathbb{E}} D_2$ and

    (3.3) \begin{equation}{X^{(2)}} \xrightarrow{d} D_2;\end{equation}
  3. (iii) or $G_n = G^{\operatorname{inhomog}}(n_1, n_2, \xi^{(1)}, \xi^{(2)})$ , $n = 1,2,\ldots,$ such that for $i=1,2$ $0 < {\mathbb{E}} \xi^{(i)} < \infty$ and $\xi^{(i)}$ does not depend on n;

  4. (iv) or $G_n = G^{\operatorname{conf}}(d_1, d_2)$ , $n = 1,2, \ldots,$ where $d_1 = d_1(n)$ , $d_2 = d_2(n)$ are non-random and for $i=1,2$ we have ${\mathbb{E}} d_{i,v_i^*} \to {\mathbb{E}} D_i$ and $d_{i,v_i^*} \xrightarrow{d} D_i$ .

Then both (3.2) and (3.3) hold and ${\mathcal L}((G_n, v^*_1)\mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G_{\mathcal T})$ with ${\mathcal T} = {\mathcal T}(D_1, D_2)$ where any $D_i$ that is not defined here is defined in Remark 3.1.

The proof is available in the arXiv version of this paper [Reference Kurauskas40, Appendix A]. (In case (iv) a previous version (v2) of [Reference Kurauskas40] stated an analogous result for random sequences $d_1, d_2$ . We simplified the condition to match, for example, [Reference Andersson3] and [Reference Janson and Luczak34]. The previous result follows by a simple technical argument.)

Recall that given a non-negative random variable X, a mixed Poisson random variable with parameter X attains value k with probability ${\mathbb{E}} \,{\mathrm{e}}^{-X} X^k (k!)^{-1}$ for $k = 0, 1, \ldots.$ We denote this distribution by $\operatorname{Po} (X)$ .
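For a discrete mixing variable X the mixed Poisson pmf can be evaluated directly; a short sketch (hypothetical helper name):

```python
from math import exp, factorial

def mixed_poisson_pmf(x_pmf, k):
    """P(Po(X) = k) = E[exp(-X) X^k / k!] for X with pmf x_pmf (value -> prob)."""
    return sum(p * exp(-x) * x ** k / factorial(k) for x, p in x_pmf.items())

# Degenerate X == 1 recovers the ordinary Poisson(1) distribution.
p0 = mixed_poisson_pmf({1.0: 1.0}, 0)
total = sum(mixed_poisson_pmf({1.0: 0.5, 2.0: 0.5}, k) for k in range(40))
```

The probabilities sum to 1 over k, as they must for any non-negative mixing variable.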

Remark 3.1. (See also [Reference Bloznelis11] and [Reference Bloznelis12].) In case (i) we have $D_2 \sim \operatorname{Po} (\beta^{-1} {\mathbb{E}} D_1)$ , in case (ii) we have $D_1 \sim \operatorname{Po} (\beta {\mathbb{E}} D_2)$ , and in case (iii) we have $D_1 \sim \operatorname{Po} \bigl(\beta^{1/2} \xi^{(1)} {\mathbb{E}} \xi^{(2)}\bigr)$ , $D_2 \sim \operatorname{Po} \bigl( \beta^{-1/2} \xi^{(2)} {\mathbb{E}} \xi^{(1)}\bigr)$ . Thus in (i)–(iv) $\beta {\mathbb{E}} D_2 = {\mathbb{E}} D_1$ .

Remark 3.2. For arbitrary random variables $D^{\prime}_1$ , $D^{\prime}_2$ on $\{0, 1, 2, \ldots \}$ with positive means, there is a sequence of random configuration intersection graphs as in (iv) for which $D_1 = D^{\prime}_1$ , $D_2 = D^{\prime}_2$ .

Thus in the active, passive, and inhomogeneous models, either $D_1$ , or $D_2$ , or both, has a (mixed) Poisson distribution. The configuration model generalises these models in terms of local weak limits: for example, both $D_1$ and $D_2$ can have power-law distributions.

The fact that ${\mathcal T}(D_1, D_2)$ is a limit for many sparse bipartite graph sequences is intuitive and in some physics literature has been assumed implicitly [Reference Karrer and Newman39, Reference Newman, Strogatz and Watts41]. A result similar to Theorem 3.1(iv) can be found in [Reference Bordenave and Lelarge23] and [Reference Richardson and Urbanke43]. A nice proof for almost sure convergence in random configuration graphs (which could possibly be extended to bipartite graphs) can be found in [Reference Dembo and Montanari25]. For completeness, we provide our own formal proof in the Appendix of the arXiv version [Reference Kurauskas40]. We do not use the second moment condition and work under slightly weaker assumptions (convergence in probability). We are not aware of prior literature on weak limits in cases (i)–(iii).

4. Applications

The proofs of the results in this section are given in Section 4.4. Let $\{G_n\}$ be a sequence of finite random graphs, and let $G^*$ be a random element on $(\mathcal{G}_*, d_{\operatorname{loc}})$ . Assume $|V(G_n)| \ge 1$ for all n and let $v^*_n$ be chosen uniformly at random from $V(G_n)$ (given $G_n$ ).

4.1. Subgraph counts in random graphs

To apply Theorem 2.1 in a random setting we need some easy technical facts. Due to the equivalence mentioned after (3.1), the Lévy–Prokhorov metric and Skorokhod’s representation theorem could be used to show these or stronger properties (see e.g. [Reference Bordenave and Lelarge23], [Reference Stufler46]), but we derive them from more basic arguments.

Lemma 4.1. Suppose ${\mathcal L}((G_n,v^*_n)\mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*)$ and $\{(G_n, v^*_n)\}$ are defined on the same probability space. Then there is a random set A of positive integers such that

  (a) ${\mathbb{P}}(n \in A) \to 1$ as $n \to \infty$ , and

  (b) almost surely $|A|=\infty$ and $(G_n,v_n^*) \,\xrightarrow{d}\, G^*$ as $n\to\infty$ , $n \in A$ .

Lemma 4.2. ${\mathcal L}((G_n,v^*_n)\mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*)$ if and only if, for each bounded continuous function $f\,:\, (\mathcal{G}_*, d_{\operatorname{loc}}) \to \mathbb{R}$ , we have ${\mathbb{E}} (f(G_n, v^*_n) \mid G_n) \xrightarrow{p} {\mathbb{E}} f(G^*)$ .

We now restate Theorem 2.1 for sequences of random graphs.

Lemma 4.3. Let $h\ge2$ be an integer, suppose ${\mathcal L}((G_n,v^*_n)\mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*)$ and assume $\inf |V(G_n)| \to \infty$ . As before, denote $d^* = d_{G^*}(r^*)$ , $r^* = \mathrm{root}(G^*)$ , $n_1=n_1(n)=|V(G_n)|$ and assume ${\mathbb{E}} (d^*)^{h-1} < \infty$ . Let $d_n$ denote the degree of a uniformly random vertex in $G_n$ . Then the following statements are equivalent:

  (i) ${\mathbb{E}} d_n^{h-1} \to {\mathbb{E}} (d^*)^{h-1}$ ,

  (ii) $d_n^{h-1}$ is uniformly integrable,

  (iii) for any connected graph H on h vertices and any $H^{\prime} \in {\mathcal R}(H)$ ,

    \begin{align*}n_1^{-1} {\mathbb{E}}\ \mathrm{emb}(H, G_n) \to {\mathbb{E}}\ \mathrm{emb}^{\prime}(H^{\prime}, G^*, r^*).\end{align*}

Each of the above statements implies that for any connected graph H on h vertices and any $H^{\prime} \in {\mathcal R}(H)$ ,

(4.1) \begin{equation}n_1^{-1} \mathrm{emb}(H, G_n) \xrightarrow{p} {\mathbb{E}}\ \mathrm{emb}^{\prime}(H^{\prime}, G^*, r^*).\end{equation}

4.2. General weakly convergent sequences

In this section we assume that ${\mathcal L}((G_n, v^*_n)\mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*)$ , $n_1=n_1(n)= |V(G_n)|\ge3$ is non-random and $n_1 \to \infty$ . As before, $d_n$ is the degree of $v^* = v^*_n$ in $G_n$ and $d^*$ is the degree of the root $r^*$ of $G^*$ . Lemma 4.3 yields convergence of ${n_1}^{-1}\mathrm{emb}(H, G_n)$ provided that the $(|V(H)|-1)$ th degree moment of $G_n$ converges. This allows us to determine the limit behaviour of statistics based on subgraph counts.

The clustering coefficient of a graph G is defined as

\begin{align*}\alpha(G) \,:\!=\, \dfrac {\mathrm{emb}(K_3, G)} {\mathrm{emb}(P_3, G)},\end{align*}

where $K_3$ is the clique on three vertices and $P_t$ is the path on t vertices. (Set $\alpha(G) \,:\!=\, 0$ when the denominator is zero.) For a rooted graph $H^{\prime}$ let $\hom^{\prime}\!(H^{\prime}, G, v)$ denote the number of homomorphisms from $H^{\prime}$ to G that map $\mathrm{root}(H^{\prime})$ to v. For $t\ge 2$ , let $K^{\prime}_t$ be $K_t$ rooted at any vertex and let $K^{\prime}_{1,t}$ be the bipartite graph $K_{1,t}$ rooted at the vertex of degree t.
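This definition can be sanity-checked numerically. The following Python sketch (ours, purely for illustration; it enumerates all injective maps, so it is practical only for very small graphs) computes $\alpha(G)$ from the embedding counts $\mathrm{emb}(K_3, G)$ and $\mathrm{emb}(P_3, G)$:

```python
from itertools import permutations

def emb(H_edges, h, G_adj):
    """Count embeddings (injective homomorphisms) of an h-vertex graph H into G.

    H is given by its edge list on vertices 0..h-1; G by a dict of adjacency sets.
    """
    return sum(
        all(img[u] in G_adj[img[v]] for u, v in H_edges)
        for img in permutations(list(G_adj), h)
    )

def clustering(G_adj):
    """alpha(G) = emb(K_3, G) / emb(P_3, G), set to 0 when the denominator is 0."""
    K3 = [(0, 1), (1, 2), (0, 2)]  # clique on three vertices
    P3 = [(0, 1), (1, 2)]          # path on three vertices
    denom = emb(P3, 3, G_adj)
    return emb(K3, 3, G_adj) / denom if denom else 0.0
```

For instance, in the complete graph $K_4$ every path on three vertices closes into a triangle, so $\alpha(K_4) = 1$, while a path graph has no triangles and $\alpha = 0$.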

Corollary 4.1. Suppose ${\mathcal L}( (G_n, v^*_n) \mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*)$ and ${\mathbb{E}} d_n^2 \to {\mathbb{E}} (d^*)^2 \in (0, \infty)$ . Then

\begin{align*} \alpha(G_n) \xrightarrow{p} \alpha^* \,:\!=\, \dfrac{{\mathbb{E}}\ \mathrm{emb}^{\prime}(K^{\prime}_3, G^*, r^*)} {{\mathbb{E}}\ \mathrm{emb}^{\prime}(K_{1,2}^{\prime}, G^*, r^*)} = \dfrac{{\mathbb{E}}\ \mathrm{emb}^{\prime}(K^{\prime}_3, G^*, r^*)} {{\mathbb{E}} (d^*)_2}. \end{align*}

The assortativity coefficient (see e.g. [Reference Bloznelis, Jaworski and Kurauskas18], [Reference Litvak and van der Hofstad35]) for a graph G is defined as Pearson's correlation coefficient of the degrees over pairs of adjacent vertices,

\begin{align*} r(G) \,:\!=\, \dfrac {g(G) - b(G)^2} {b^{\prime}(G) - b(G)^2}, \end{align*}

where

\begin{alignat*}{3} g(G) &\,:\!=\, (2 e(G))^{-1} \sum d_G(u) d_G(v), \quad & b(G) &\,:\!=\, (2 e(G))^{-1}\sum d_G(u), \\ b^{\prime}(G) &\,:\!=\, (2 e(G))^{-1} \sum d_G(u)^2, & e(G) &\,:\!=\, |E(G)|, \end{alignat*}

and the sums are over all 2e(G) ordered pairs (u, v) of adjacent vertices in G. We define $r(G) \,:\!=\, 0$ when either e(G) or $b^{\prime}(G) - b(G)^2$ is zero (the latter holds when G is regular). The above quantities can be easily expressed in terms of subgraph count statistics; see e.g. [Reference Bollobás, Janson and Riordan22] and Section 4.4. Let $P^{\prime}_4$ denote the graph $P_4$ rooted at one of its internal vertices.
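A direct transcription of these definitions (an illustrative sketch of ours, not part of the paper) computes $r(G)$ by summing over the 2e(G) ordered pairs of adjacent vertices:

```python
def assortativity(G_adj):
    """Pearson correlation r(G) of degrees over ordered adjacent pairs (u, v)."""
    deg = {v: len(nbrs) for v, nbrs in G_adj.items()}
    pairs = [(u, v) for u in G_adj for v in G_adj[u]]  # all 2e(G) ordered pairs
    m = len(pairs)
    if m == 0:
        return 0.0
    g = sum(deg[u] * deg[v] for u, v in pairs) / m
    b = sum(deg[u] for u, _ in pairs) / m
    b_prime = sum(deg[u] ** 2 for u, _ in pairs) / m
    var = b_prime - b * b
    return (g - b * b) / var if var > 0 else 0.0  # 0 for regular graphs
```

On the star $K_{1,3}$ every edge joins a degree-3 vertex to a degree-1 vertex, giving the extreme disassortative value $r = -1$.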

Corollary 4.2. Suppose ${\mathcal L}((G_n, v^*_n)\mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*)$ , ${\mathbb{E}} d_n^3 \to {\mathbb{E}} (d^*)^3 < \infty$ and $\operatorname{Var}\!(d^*) > 0$ . Then

(4.2) \begin{equation} r(G_n) \xrightarrow{p} \rho^* \,:\!=\, \dfrac{{\mathbb{E}} d^* {\mathbb{E}} \hom^{\prime}\!(P^{\prime}_4, G^*, r^*) - ({\mathbb{E}} (d^*)^2)^2 } {{\mathbb{E}} d^* {\mathbb{E}} (d^*)^3 - ({\mathbb{E}} (d^*)^2)^2}. \end{equation}

Corollaries 4.1 and 4.2 easily follow from Lemma 4.3; see Section 4.4. Notice that since $\alpha(G) \in [0, 1]$ and $r(G) \in [-1, 1]$ , convergence in probability in these corollaries implies convergence of means.

Statistics that can be expressed in terms of integrals of bounded functions, such as the limit degree distribution, are obtained directly from the local weak limit. Hence no degree moment conditions are necessary. Let $\pi_k(G)$ be the fraction of vertices of degree k in G. By Lemma 4.2,

(4.3) \begin{equation}\pi_k(G_n) = {\mathbb{E}} \bigl({\mathbb I}_{d_{G_n}(v^*_n)=k} \mid G_n\bigr) \xrightarrow{p} {\mathbb{P}}(d^* = k).\end{equation}

Given a graph G and an integer $k \ge 2$ , let $(u_1^*, u_2^*, u_3^*)$ be a uniformly random triple of distinct vertices from V(G). The conditional clustering coefficient is

\begin{align*}\alpha_k(G) \,:\!=\, {\mathbb{P}}\bigl(u_1^* u_3^* \in G \mid u_1^* u_2^*, u_2^*u_3^* \in G, d(u_2^*) = k\bigr),\end{align*}

and set $\alpha_k(G) \,:\!=\, 0$ if the event in the condition has probability zero. Lemma 4.2 implies that if ${\mathbb{P}}(d^* = k) > 0$ then

(4.4) \begin{equation}\alpha_k(G_n) \xrightarrow{p} \alpha_k^* = \dfrac{{\mathbb{E}} {\mathbb I}_{d^* = k}\, \mathrm{emb}^{\prime}(K^{\prime}_3, G^*, r^*)} {k(k-1){\mathbb{P}}(d^* = k)}.\end{equation}
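On a finite graph, $\alpha_k$ can be evaluated exactly by averaging over ordered pairs of distinct neighbours of degree-k vertices; a brute-force sketch (ours, for illustration only):

```python
def conditional_clustering(G_adj, k):
    """alpha_k(G): probability that two distinct uniformly random neighbours
    of a degree-k vertex are adjacent (0 if no such configuration exists)."""
    hits = total = 0
    for v, nbrs in G_adj.items():
        if len(nbrs) != k:
            continue
        for u1 in nbrs:
            for u3 in nbrs:
                if u1 != u3:
                    total += 1
                    hits += u3 in G_adj[u1]
    return hits / total if total else 0.0
```

In $K_4$ every such pair of neighbours is adjacent, so $\alpha_3(K_4) = 1$; in a path graph no pair is, so $\alpha_2 = 0$.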

The conditional assortativity (see [Reference Bloznelis, Jaworski and Kurauskas18]) is defined as

\begin{align*}r_k(G) \,:\!=\, {\mathbb{E}}\bigl(d_G (u_2^*) \mid u_1^* u_2^* \in G, d(u_1^*) = k\bigr),\end{align*}

and set $r_k(G) \,:\!=\, 0$ if the event in the condition has probability zero. Let $P^{\prime}_3$ be $P_3$ rooted at one of the endpoints.

Corollary 4.3. Suppose ${\mathcal L}((G_n,v^*_n)\mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*)$ , ${\mathbb{E}} d_n^2 \to {\mathbb{E}} (d^*)^2 < \infty$ and ${\mathbb{P}}(d^* = k) > 0$ . Then

\begin{equation*}r_k(G_n) \xrightarrow{p} r_k^* = \dfrac{{\mathbb{E}} {\mathbb I}_{d^* = k} \hom^{\prime}\!(P^{\prime}_3, G^*, r^*)} {k {\mathbb{P}}(d^* = k)} = 1 + \dfrac{{\mathbb{E}} {\mathbb I}_{d^* = k}\, \mathrm{emb}^{\prime}(P^{\prime}_3, G^*, r^*)} {k {\mathbb{P}}(d^* = k)}.\end{equation*}
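The quantity $r_k(G)$ can likewise be computed exactly on a finite graph by averaging the degree of $u_2^*$ over ordered adjacent pairs whose first vertex has degree k; a short sketch of ours, for illustration:

```python
def conditional_assortativity(G_adj, k):
    """r_k(G): mean degree of u2 over ordered adjacent pairs (u1, u2)
    with d(u1) = k (0 if no vertex of degree k has a neighbour)."""
    total = count = 0
    for v, nbrs in G_adj.items():
        if len(nbrs) == k:
            for u in nbrs:
                total += len(G_adj[u])
                count += 1
    return total / count if count else 0.0
```

On the star $K_{1,3}$, a degree-1 vertex always neighbours the degree-3 centre, so $r_1 = 3$, while $r_3 = 1$.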

In a similar way we can study the bivariate degree distribution [Reference Bloznelis13] and many other functionals.

4.3. The case of random intersection graphs

Here we apply the above general results in the case where the limit is the uncorrelated clique tree of Section 3. We stress that Theorem 2.1 and its corollaries are applicable to a much broader class of sequences, including the inhomogeneous sparse random graph and the preferential attachment model [Reference Berger, Borgs, Chayes and Saberi7, Reference Bollobás, Janson and Riordan22], general random configuration graphs with their many potential applications (see e.g. [Reference Janson and Luczak34]), and random graphs from certain minor-closed classes, including random planar graphs [Reference Georgakopoulos and Wagner29, Reference Panagiotou, Stufler and Weller42, Reference Stufler46].

Theorem 3.1 yields the first main condition (convergence to a local weak limit) for Lemma 4.3. For the other condition (convergence of a degree moment) we prove the following result.

Lemma 4.4. Let $\{G_n\}$ be a sequence as in Theorem 3.1 and let k be a positive integer. Suppose an additional condition for each of cases (i)–(iv) of Theorem 3.1 holds:

  (i) ${\mathbb{E}} (X^{(1)})^k \to {\mathbb{E}} D_1^k < \infty$ ,

  (ii) ${\mathbb{E}} (X^{(2)})^{k+1} \to {\mathbb{E}} D_2^{k+1} < \infty$ ,

  (iii) ${\mathbb{E}} (\xi^{(1)})^k < \infty$ and ${\mathbb{E}} (\xi^{(2)})^{k+1} < \infty$ ,

  (iv) ${\mathbb{E}} d_{1,v_1^*}^k \to {\mathbb{E}} D_1^k < \infty$ and ${\mathbb{E}} d_{2,v_2^*}^{k+1} \to {\mathbb{E}} D_2^{k+1} < \infty$ .

Then ${\mathbb{E}} (d^*)^k < \infty$ and ${\mathbb{E}} d_n^k \to {\mathbb{E}} (d^*)^k$ .

(We simplified (iv) of a previous version of [Reference Kurauskas40] to fixed sequences and dropped a redundant assumption. To extend it to random $d_1, d_2$ , use arguments similar to those of the proof of Lemma 4.3.) The special case of (i) where $k \le 2$ was shown in [Reference Bloznelis and Kurauskas15]. Here we use a different argument based on Theorem 3.1; see Section 4.4.

Using the same notation as in Section 4.2, assume that ${\mathcal L}((G_n,v^*_n)\mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*) = {\mathcal L}(G_{\mathcal T})$ , where ${\mathcal T} = {\mathcal T}(D_1, D_2)$ , ${\mathbb{E}} D_1 > 0$ and ${\mathbb{E}} D_2 > 0$ . Let $Z_1, Z_2, \ldots \sim D_2^*-1$ be independent and independent of $D_1$ . By (4.3) we have

\begin{align*}\pi_k(G_n) \xrightarrow{p} {\mathbb{P}}(d^* = k) = {\mathbb{P}}\Biggl(\sum_{i=1}^{D_1} Z_i = k\Biggr).\end{align*}
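For finitely supported $D_1$ and $D_2$, the limiting degree distribution ${\mathbb{P}}(d^* = k)$ can be computed exactly by size-biasing $D_2$ and convolving; the sketch below (ours, with toy pmfs in the check as assumptions) implements this recipe:

```python
def size_biased(pmf):
    """pmf of the size-biased variable X*: P(X* = m) proportional to m * P(X = m)."""
    mean = sum(m * p for m, p in pmf.items())
    return {m: m * p / mean for m, p in pmf.items() if m > 0}

def convolve(p, q):
    """pmf of the sum of two independent variables with pmfs p and q."""
    out = {}
    for a, pa in p.items():
        for b, qb in q.items():
            out[a + b] = out.get(a + b, 0.0) + pa * qb
    return out

def root_degree_pmf(D1_pmf, D2_pmf):
    """pmf of d* = Z_1 + ... + Z_{D_1}, with Z_i ~ D_2^* - 1 i.i.d."""
    Z = {m - 1: p for m, p in size_biased(D2_pmf).items()}
    out, conv = {}, {0: 1.0}  # conv: pmf of Z_1 + ... + Z_j, starting at j = 0
    for j in range(max(D1_pmf) + 1):
        w = D1_pmf.get(j, 0.0)
        for k, p in conv.items():
            out[k] = out.get(k, 0.0) + w * p
        conv = convolve(conv, Z)
    return out
```

For example, with $D_1 \equiv 1$ and $D_2$ uniform on $\{1, 2\}$, size-biasing gives ${\mathbb{P}}(Z_1 = 1) = 2/3$, so ${\mathbb{P}}(d^* = 1) = 2/3$.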

For sequences of graphs as in Theorem 3.1(i)–(iii), the corresponding convergence of means has been shown in [Reference Bloznelis10] and [Reference Bloznelis and Damarackas14]; see also Remark 3.1. We also notice that the second moment condition required in [Reference Bloznelis and Damarackas14] for the inhomogeneous model is not necessary.

By simple calculations we get

\begin{align*}&{\mathbb{E}} Z_1^k = \dfrac {{\mathbb{E}} (D_2 -1)^k D_2 } {{\mathbb{E}} D_2}, \quad {\mathbb{E}} (Z_1)_k = {\mathbb{E}} (D_2)_{k+1} ({\mathbb{E}} D_2)^{-1}, \quad k = 1, 2, \ldots, \\ &{\mathbb{E}} d^* = {\mathbb{E}} (Z_1 + \cdots + Z_{D_1}) = {\mathbb{E}} D_1 {\mathbb{E}} Z_1, \\ &{\mathbb{E}} (d^*)^2 = {\mathbb{E}} D_1 {\mathbb{E}} Z_1^2 + {\mathbb{E}} (D_1)_2 ({\mathbb{E}} Z_1)^2, \\ &{\mathbb{E}} (d^*)^3 = {\mathbb{E}} D_1 {\mathbb{E}} Z_1^3 + 3 {\mathbb{E}} (D_1)_2 {\mathbb{E}} Z_1 {\mathbb{E}} Z_1^2 + {\mathbb{E}} (D_1)_3 ({\mathbb{E}} Z_1)^3, \\ &{\mathbb{E}}\ \mathrm{emb}^{\prime}(K^{\prime}_3, G^*, r^*) = {\mathbb{E}} D_1 {\mathbb{E}} (Z_1)_2 = {\mathbb{E}} D_1 ({\mathbb{E}} Z_1^2 - {\mathbb{E}} Z_1)\end{align*}

and

(4.5) \begin{align}&{\mathbb{E}} \hom^{\prime}\!(P^{\prime}_4, G^*, r^*) = {\mathbb{E}} D_1 {\mathbb{E}} Z_1^3 + {\mathbb{E}} (D_1)_2 {\mathbb{E}} Z_1 {\mathbb{E}} Z_1^2 + \dfrac {{\mathbb{E}} (D_1)_2} {{\mathbb{E}} D_1} {\mathbb{E}} (d^*)^2 {\mathbb{E}} Z_1 .\end{align}

(The above estimates also hold in the case when either side is infinite.)
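The second-moment formula above can be verified exactly on toy distributions (the pmfs below are our illustrative choices, not from the paper) by comparing it against direct enumeration of $d^* = Z_1 + \cdots + Z_{D_1}$:

```python
from fractions import Fraction as F
from itertools import product

# Toy pmfs for D_1 and D_2 (assumptions, purely for illustration)
D1 = {1: F(1, 2), 2: F(1, 2)}
D2 = {1: F(1, 3), 2: F(1, 3), 3: F(1, 3)}

def E(pmf, f):
    """Expectation of f(X) over a finitely supported pmf."""
    return sum(p * f(m) for m, p in pmf.items())

ED2 = E(D2, lambda m: m)
EZ = E(D2, lambda m: (m - 1) * m) / ED2        # E Z_1   = E (D_2 - 1) D_2 / E D_2
EZ2 = E(D2, lambda m: (m - 1) ** 2 * m) / ED2  # E Z_1^2 = E (D_2 - 1)^2 D_2 / E D_2

# Claimed formula: E (d*)^2 = E D_1 * E Z_1^2 + E (D_1)_2 * (E Z_1)^2
formula = E(D1, lambda m: m) * EZ2 + E(D1, lambda m: m * (m - 1)) * EZ ** 2

# Direct enumeration of d* = Z_1 + ... + Z_{D_1}
Z = {m - 1: m * p / ED2 for m, p in D2.items()}  # pmf of Z_i = D_2^* - 1
direct = F(0)
for d1, p1 in D1.items():
    for zs in product(list(Z), repeat=d1):
        w = p1
        for z in zs:
            w *= Z[z]
        direct += w * sum(zs) ** 2
```

With these pmfs, both computations give ${\mathbb{E}} (d^*)^2 = 95/18$ exactly.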

When ${\mathbb{E}} D_2^3 < \infty$ and ${\mathbb{E}} D_1^2 < \infty$ , we have

\begin{align*} \alpha^* = \dfrac {{\mathbb{E}} D_1 {\mathbb{E}} D_2 {\mathbb{E}} (D_2)_3} {{\mathbb{E}} D_1 {\mathbb{E}} D_2 {\mathbb{E}}(D_2)_3 + {\mathbb{E}} (D_1)_2 ({\mathbb{E}} (D_2)_2)^2}\end{align*}

in Corollary 4.1. Using Remark 3.1, this simplifies to $\alpha^* = {\mathbb{E}} D_1 / {\mathbb{E}} D_1^2$ for active random intersection graphs and to $\alpha^* = {\mathbb{E}} (D_2)_3 ({\mathbb{E}} (D_2)_3 + \beta ({\mathbb{E}} (D_2)_2)^{-2}$ for passive random intersection graphs. This is equal to a related estimate $\hat{\alpha} = \lim {\mathbb{E}}\ \mathrm{emb}(K_3, G_n) ({\mathbb{E}}\ \mathrm{emb}(P_3, G_n))^{-1}$ obtained by Bloznelis [Reference Bloznelis12] and the estimates of Godehardt, Jaworski, and Rybarczyk [Reference Godehardt, Jaworski and Rybarczyk30] for these particular models.

Similarly, if ${\mathbb{E}} D_1^2 < \infty$ and ${\mathbb{E}} D_2^4 < \infty$ then $\rho^*$ in Corollary 4.2 is a rational function of ${\mathbb{E}} D_1$ , ${\mathbb{E}} D_1^2$ , and ${\mathbb{E}} D_2^k$ , $k=1,2,3,4$ obtained by using (4.5) and the above expressions for ${\mathbb{E}} (d^*)^j$ in (4.2). One can check by simple algebra that $\rho^*$ is equal to

\begin{align*}\hat{\rho} = \lim \bigl({\mathbb{E}} g(G_n) - {\mathbb{E}} b(G_n)^2\bigr) \bigl({\mathbb{E}} b^{\prime}(G_n) - {\mathbb{E}} b(G_n)^2\bigr)^{-1}\end{align*}

computed in [Reference Bloznelis, Jaworski and Kurauskas18] for sparse passive and active random intersection graphs.

Assuming only that ${\mathbb{P}}(d^* = k) > 0$ , we get in (4.4)

\begin{equation*} \alpha_k^* = \dfrac { {\mathbb{E}} \bigl(\sum_{i=1}^{D_1} Z_i (Z_i - 1) \mid d^* = k\bigr) } { k (k-1)}.\end{equation*}

If $D_2 \sim \operatorname{Po} (\lambda)$ , as is the case for the active random intersection graph of Theorem 3.1(i), for example, then as in [Reference Bloznelis, Jaworski and Kurauskas18] (but without a second moment assumption)

\begin{equation*} \alpha_k^* = \dfrac {\lambda {\mathbb{P}}(d^* = k-1)} {k {\mathbb{P}}(d^* = k)}.\end{equation*}

Finally, if ${\mathbb{E}} D_1^2 < \infty$ , ${\mathbb{E}} D_2^2 < \infty$ in Corollary 4.3, we have

\begin{align*}r_k^* = k^{-1} {\mathbb{E}} \Biggl( \sum_{i=1}^{D_1} Z_i^2 \biggm| d^* = k \Biggr) + \dfrac {{\mathbb{E}} (D_1)_2 {\mathbb{E}} (D_2)_2} {{\mathbb{E}} D_1 {\mathbb{E}} D_2}.\end{align*}

This agrees with a related estimate obtained in [Reference Bloznelis, Jaworski and Kurauskas18] for active and passive random intersection graphs.

Thus Corollaries 4.1–4.3 generalise several previous results for particular random intersection graph models to arbitrary sequences of graphs with the uncorrelated clique tree as a limit. Applying them together with Lemma 4.4 with an appropriate k yields slightly stronger versions (i.e. convergence in probability and optimal moment conditions) of these results for the active and passive random intersection graphs with bounded expected degree. We are not aware of similar prior results for the inhomogeneous and configuration models.

4.4. Proofs

The radius of a connected rooted graph is the maximum distance from any vertex of the graph to the root.

Proof of Lemma 4.1. Let $H_1, H_2 ,\ldots$ be an enumeration of the finite graphs in $\mathcal{G}_*$ . For positive integers i, n, define the event

\begin{align*}B(i,n) = \bigl\{\exists j \le i\,:\, | p_{r_j}(G_n, H_j) - {\mathbb{P}}( B_{r_j}(G^*) \cong H_j) | > i^{-1} \bigr\},\end{align*}

where $r_j$ is the radius of $H_j$ . By the assumption of the lemma, ${\mathbb{P}}(B(i,n)) \to 0$ for each $i=1,2,\ldots$ Define $N_1 = 1$ and, for $i = 2,3,\ldots$ , take

\begin{align*}N_i = 1 + \sup \bigl\{n > N_{i-1} \,:\, {\mathbb{P}}(B(i,n)) > i^{-1}\bigr\}.\end{align*}

Let $i(n) = \max\{i \,:\, N_i \le n\}$ . Now let $A = \bigl\{n\,:\, \overline{B(i(n),n)} \bigr\}$ . We have ${\mathbb{P}}(n \not \in A) = {\mathbb{P}}(B(i(n), n)) \le i(n)^{-1} \to 0$ . For any sequence of events $\{A_n, n\ge1\}$ on the same probability space $(\Omega, {\mathcal{F}}, {\mathbb{P}})$ , we have

(4.6) \begin{equation}{\mathbb{P}}\Bigl(\bigcap_{m \ge 1} \bigcup_{n\ge m} A_n\Bigr) \ge \limsup_{n\to\infty} {\mathbb{P}}(A_n).\end{equation}

Thus

\begin{align*}{\mathbb{P}}(|A| = \infty) \ge \limsup_{n\to\infty} {\mathbb{P}}\bigl(\overline{B(i(n),n)}\bigr) = 1.\end{align*}

Now, by the definition of B(i, n) on the event $|A| = \infty$ , we have $(G_n, v_n^*) \,\xrightarrow{d}\, G^*$ as $n\to\infty$ , $n \in A$ .

Proof of Lemma 4.2. ( $\Leftarrow$ ) The function $(\mathcal{G}_*, d_{\operatorname{loc}}) \to \mathbb{R}$ that maps (G, v) to ${\mathbb I}_{B_r(G,v) \cong H}$ is bounded and continuous for each $r\ge 0$ and connected rooted graph H.

( $\Rightarrow$ ) Without loss of generality we may assume $\{(G_n,v_n^*)\}$ are defined on a single probability space. Let A be a random set guaranteed by Lemma 4.1. Suppose there is some $\epsilon > 0$ , a bounded continuous function f, and an infinite subset of positive integers B, such that ${\mathbb{P}}( |{\mathbb{E}}(f(G_n,v_n^*)\mid G_n) - {\mathbb{E}} f(G^*)| > \epsilon) > \epsilon$ for all $n \in B$ . Define a random set $C = \{n \in B\,:\, |{\mathbb{E}}(f(G_n,v_n^*)\mid G_n) - {\mathbb{E}} f(G^*)| > \epsilon\}$ . Since for $n \in B$ we have ${\mathbb{P}}(n \in A \cap C) \ge \epsilon - {\mathrm{o}} (1)$ , by (4.6) ${\mathbb{P}}(|A \cap C| = \infty) \ge \epsilon$ . However, $(G_n,v_n^*) \,\xrightarrow{d}\, G^*$ when $n \to \infty, n \in A$ , by (1.1), which contradicts our assumption.

Proof of Lemma 4.3. We can assume $n_1 \ge 1$ .

(i) $\Leftrightarrow$ (ii) This follows by Lemma 2.1, since Lemma 4.2 implies convergence in distribution of $d_n^{h-1}$ to $(d^*)^{h-1}$ .

(i) $\Rightarrow$ (4.1), (iii) Assume (i). Then there is a positive sequence $a_n \to 0$ , such that $|{\mathbb{E}} d_n^{h-1} - {\mathbb{E}} (d^*)^{h-1}| \le a_n$ . The empirical $(h-1)$ th moment of the degree of $G_n$ is $D_n = n_1^{-1} \hom\!(K_{1,h-1}, G_n)$ . Also, $D_n = \sum_{H \in \mathcal{S}} d_{H}(\mathrm{root}(H))^{h-1} p_r(G_n, H)$ , where $\mathcal{S}$ consists of graphs in $\mathcal{G}_*$ of radius 1. Since ${\mathbb{E}} D_n = {\mathbb{E}} d_n^{h-1}$ , it follows by (i) and (3.1) that $D_n \xrightarrow{p} {\mathbb{E}}(d^*)^{h-1}$ . So there is a positive sequence $\epsilon_n \to 0$ , such that for all n,

\begin{equation*}{\mathbb{P}}\bigl(|D_n - {\mathbb{E}} (d^*)^{h-1}| > \epsilon_n\bigr) \le \epsilon_n.\end{equation*}

We may assume that $\epsilon_n \ge a_n$ . Let the random set C consist of those n for which $|D_n - {\mathbb{E}} (d^*)^{h-1}| \le \epsilon_n$ . It follows by (4.6) that ${\mathbb{P}}(|C| = \infty) = 1$ , ${\mathbb{P}}(n \in C) \to 1$ and on the event $|C| = \infty$ , $D_n \to {\mathbb{E}} (d^*)^{h-1}$ as $n \to \infty$ , $n \in C$ .

Let A be the random set guaranteed by Lemma 4.1. On the event $|A \cap C| = \infty$ , the subsequence of graphs $\{G_n, n \in A \cap C\}$ satisfies the conditions of Theorem 2.1.

Assume (4.1) does not hold. Then there is $H^{\prime} \in \mathcal{R}(H)$ , $\epsilon > 0$ and a deterministic infinite set D of positive integers such that for all $n \in D$ ,

\begin{align*}{\mathbb{P}}( |X_n - {\mathbb{E}} X^*| > \epsilon) > \epsilon.\end{align*}

Here $X_n = n_1^{-1}\, \mathrm{emb}(H, G_n)$ and $X^* = \mathrm{emb}^{\prime}(H^{\prime}, G^*, r^*)$ . Let $D_1 \subseteq D$ consist of those n in D for which $|X_n - {\mathbb{E}} X^*| > \epsilon$ . Again, by (4.6), we get that ${\mathbb{P}}(|A\cap C \cap D_1| = \infty) \ge \epsilon$ . On this event $X_n \not \to {\mathbb{E}} X^*$ as $n \to \infty$ , $n \in A \cap C$ , which contradicts Theorem 2.1(iii).

It remains to show (iii). Write $Y_n = n_1^{-1} \hom\!(H, G_n)$ . Trivially, $X_n \le Y_n$ , and by Lemma 2.2 $Y_n \le D_n$ . So, for any $t > 0$ ,

\begin{align*}{\mathbb{E}} X_n {\mathbb I}_{X_n > t} \le {\mathbb{E}} Y_n {\mathbb I}_{Y_n > t} \le {\mathbb{E}} D_n {\mathbb I}_{D_n > t}.\end{align*}

Since ${\mathbb{E}} D_n \to {\mathbb{E}} (d^*)^{h-1}$ and $D_n \xrightarrow{p} {\mathbb{E}} (d^*)^{h-1}$ , $D_n$ is uniformly integrable, and so is $X_n$ . Since $X_n$ also converges in probability by (4.1), (iii) follows by Lemma 2.1.

(iii) $\Rightarrow$ (i) The proof is identical to that of the corresponding implication of Theorem 2.1.

Proof of Corollary 4.1. Apply Lemma 4.3.

Proof of Corollary 4.2. For non-empty G, we have

\begin{equation*}g(G) = \dfrac {\hom\!(P_4, G)} {\mathrm{emb}(K_2, G)},\quad b(G) = \dfrac {\hom\!(K_{1,2}, G)} {\mathrm{emb}(K_2, G)},\quad b^{\prime}(G) = \dfrac {\hom\!(K_{1,3}, G)} {\mathrm{emb}(K_2, G)}.\end{equation*}

Let S(t, j) denote Stirling numbers of the second kind. Using Lemma 4.3,

\begin{align*}&\dfrac 1 n\ \mathrm{emb}(K_2, G_n) \xrightarrow{p} {\mathbb{E}} d^*, \\ & \dfrac 1 n \hom\!(P_4, G_n) \\ & \quad = \dfrac 1 n \bigl(\mathrm{emb}(P_4, G_n) + \mathrm{emb}(K_3, G_n) + 2\ \mathrm{emb}(P_3, G_n) + \mathrm{emb}(K_2, G_n) \bigr) \\ &\quad \xrightarrow{p} {\mathbb{E}} \bigl( \mathrm{emb}^{\prime}(P^{\prime}_4, G^*) + \mathrm{emb}^{\prime}(K^{\prime}_3, G^*) + \mathrm{emb}^{\prime}(K_{1,2}^{\prime}, G^*) + \mathrm{emb}^{\prime}(P^{\prime}_3, G^*) + \mathrm{emb}^{\prime}(K^{\prime}_2, G^*)\bigr) \\ & \quad = {\mathbb{E}} \hom^{\prime}\!(P^{\prime}_4, G^*,r^*), \\ & \dfrac 1 n \hom\!(K_{1,t}, G_n) = \dfrac 1 n \sum_{j=1}^t S(t, j) \mathrm{emb}(K_{1,j}, G_n) \xrightarrow{p} {\mathbb{E}} \hom^{\prime}\!(K^{\prime}_{1,t}, G^*,r^*) = {\mathbb{E}} (d^*)^t\end{align*}

for $t = 2, 3$ . The claim follows by the definition of r(G).

Proof of Corollary 4.3. Note that $\pi_k(G_n) \xrightarrow{p} {\mathbb{P}}(d^*=k) > 0$ and when $\pi_k(G) > 0$ we have

(4.7) \begin{align} r_k(G) &= \dfrac{{\mathbb{E}} \bigl( d_{G}(u_2^*) {\mathbb I}_{d_{G_n}(u_1^*)=k} {\mathbb I}_{u_1^*u_2^* \in G} \mid G_n=G \bigr)} {{\mathbb{P}}(d_{G_n}(u_1^*) = k, u_1^*u_2^* \in G \mid G_n = G)} \notag \\ &=\dfrac{(n_1)_2^{-1} H(G)} {k (n_1-1)^{-1} \pi_k(G)} \notag \\ & = n_1^{-1} H(G) (k \pi_k(G))^{-1}, \end{align}

where H(G) is the number of homomorphisms from $P_3 = x y z$ to G so that x is mapped to a vertex of degree k. Let $H_t(G)$ denote the number of such homomorphisms where additionally y is mapped to a vertex of degree at most t, and let ${\bar H}_t(G) = H(G) - H_t(G)$ .

Fix $\delta > 0$ . We will show that for any $\epsilon > 0$ and all n large enough,

(4.8) \begin{equation} {\mathbb{P}}\bigl(|n_1^{-1}H(G_n) - {\mathbb{E}} {\mathbb I}_{d^*=k}\hom^{\prime}\!(P^{\prime}_3, G^*, r^*) | > \delta\bigr) \le \epsilon, \end{equation}

i.e. $n_1^{-1}H(G_n) \xrightarrow{p} {\mathbb{E}} {\mathbb I}_{d^*=k}\hom^{\prime}\!(P^{\prime}_3, G^*, r^*)$ .

By Lemma 4.2,

\begin{align*} n_1^{-1} H_t(G_n) &= {\mathbb{E}}\Biggl({\mathbb I}_{d_{G_n}(u_1^*)=k} \sum_{u\,:\, u u_1^* \in G_n} d_{G_n}(u) {\mathbb I}_{d_{G_n}(u) \le t} \biggm| G_n\Biggr) \\ &\xrightarrow{p} h_t^* = {\mathbb{E}} \Biggl({\mathbb I}_{d^*=k} \sum_{u\,:\, u r^* \in G^*} d_{G^*}(u) {\mathbb I}_{d_{G^*}(u) \le t}\Biggr). \end{align*}

Also, $h_t^* \to h^* = {\mathbb{E}} {\mathbb I}_{d^* = k} \hom^{\prime}\!(P^{\prime}_3, G^*, r^*)$ as $t \to \infty$ by monotone convergence, and $h^* \le {\mathbb{E}} \hom^{\prime}\!(P^{\prime}_3, G^*, r^*) < \infty$ by Lemma 4.3 (whose hypotheses hold since ${\mathcal L}((G_n, v^*_n) \mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*)$ ). Therefore we can pick $t_1$ such that for $t \ge t_1$ and all n large enough,

\begin{align*} {\mathbb{P}}\biggl(|n_1^{-1} H_{t}(G_n) - h_{t}^*| > \dfrac \delta 4\biggr) \le \dfrac \epsilon 4, \quad |h_{t}^* - h^*| \le \dfrac \delta 4 \end{align*}

and so

(4.9) \begin{equation} {\mathbb{P}}\biggl(|n_1^{-1} H_{t}(G_n) - h^*|> \dfrac \delta 2\biggr) \le \dfrac \epsilon 2. \end{equation}

Next, note that

\begin{align*} \bar{H}_t(G_n) \le \sum_{v \in V(G_n)} d_{G_n}(v)^2 {\mathbb I}_{d_{G_n}(v) > t}. \end{align*}

So, by Markov’s inequality,

\begin{align*} {\mathbb{P}}\bigl(n_1^{-1} {\bar H}_t(G_n) > \delta/2\bigr) \le 2 \delta^{-1} n_1^{-1} {\mathbb{E}} {\bar H}_t(G_n) \le 2 \delta^{-1} {\mathbb{E}} d_n^2 {\mathbb I}_{d_n > t}. \end{align*}

Now $d_n^2$ is uniformly integrable by Lemma 4.3, so there is $t_2$ such that for all $t \ge t_2$ and all large enough n,

(4.10) \begin{equation} {\mathbb{P}}\bigl(n_1^{-1} {\bar H}_{t}(G_n) > \delta/2\bigr) \le \dfrac \epsilon 2. \end{equation}

Now (4.8) follows by setting $t = \max\!(t_1, t_2)$ and combining (4.7), (4.9), and (4.10).

Proof of Lemma 4.4. Let $Z_1, Z_2, \ldots \sim D_2^* - 1$ and $D_1$ be independent. For a random variable X with ${\mathbb{E}} X \in (0, \infty)$ and its size-biased version $X^*$ , we have

\begin{align*} {\mathbb{E}} (X^*-1)_j = \sum_{m \ge 1} (m-1)_j \dfrac{m {\mathbb{P}}(X=m)} {{\mathbb{E}} X} = \dfrac{ {\mathbb{E}} (X)_{j+1}} {{\mathbb{E}} X}, \quad j = 1, 2, \ldots . \end{align*}

Thus, using the assumptions and Remark 3.1,

(4.11) \begin{equation} {\mathbb{E}} (Z_1)_j = {\mathbb{E}} (D_2)_{j+1} ({\mathbb{E}} D_2)^{-1} < \infty \quad \text{for } j=1, \ldots, k. \end{equation}

Similarly, ${\mathbb{E}} Z_1^k \le ({\mathbb{E}} D_2)^{-1} {\mathbb{E}} D_2^{k+1} < \infty$ . Write

\begin{align*} \binom k {k_1, \ldots, k_j} = \dfrac {k!} {k_1! \cdots k_j!}. \end{align*}

Conditioning on $D_1$ , using linearity of expectation and symmetry, we get

(4.12) \begin{align} {\mathbb{E}} (d^*)^k &= {\mathbb{E}} \Biggl(\sum_{i=1}^{D_1} Z_i\Biggr)^k \notag \\ & = \sum \binom {k} {k_1, \ldots, k_j} {\mathbb{E}} \binom {D_1} {j} {\mathbb{E}} Z_1^{k_1} \times \cdots \times {\mathbb{E}} Z_j^{k_j} \in (0,\infty). \end{align}

Here the sum is over all j and all tuples of positive integers $(k_1, \ldots, k_j)$ such that $k_1 + \cdots + k_j = k$ . Write $d_n = d_{G_n}(v^*_n)$ and recall that $v^*_n$ is a uniformly random vertex from $V(G_n)$ . By Theorem 3.1 ${\mathcal L}((G_n,v^*_n)\mid G_n) \,\xrightarrow{p}\, {\mathcal L}(G^*)$ , so $d_n^k \xrightarrow{d} (d^*)^k$ . By Fatou’s lemma,

\begin{align*} {\mathbb{E}} (d^*)^k \le \liminf_{n\to\infty} {\mathbb{E}} d_n^k. \end{align*}

We assume without loss of generality that in case (iv) the sequences $d_1(n)$ and $d_2(n)$ (not to be confused with the random variable $d_n$ ) are symmetric random permutations of two fixed degree sequences (each permutation of a particular sequence is equally likely). So in all cases (i)–(iv) by symmetry ${\mathbb{E}} d_{G_n}(v_1)^k = {\mathbb{E}} d_n^k$ , where $v_1$ is a fixed vertex in $V(G_n)$ . For each of the random intersection graph models we will show

(4.13) \begin{equation} {\mathbb{E}} d_{G_n}(v_1)^k \le {\mathbb{E}} (d^*)^k + {\mathrm{o}} (1). \end{equation}

Let ${\mathcal T} \sim {\mathcal T}(D_1,D_2)$ . Assume that $D_1 = d_{\mathcal T}(\mathrm{root}({\mathcal T}))$ , $x_1, \ldots, x_{D_1}$ are the children of $\mathrm{root}({\mathcal T})$ and $Z_i$ is the number of children of $x_i$ . For each n define a bipartite graph (a tree) $\tilde{H}_n$ as follows. On the event $D_1 > n_2$ , let $\tilde{H}_n$ be a tree consisting of just the root $v_1$ . On the event $D_1 \le n_2$ , let $\tilde{H}_n$ be the subtree induced by generations 0, 1 and 2 of ${\mathcal T}$ , but take only the first $Z^{\prime}_i = Z_i {\mathbb I}_{Z_i \le n_1 -1}$ children of the node $x_i$ , $i = 1, \ldots, D_1$ . Label the root $v_1$ . Given $D_1, Z_1, \ldots, Z_{D_1}$ , draw labels for $x_1, \ldots, x_{D_1}$ from ${V^{2}}$ uniformly at random without replacement, and draw $Z^{\prime}_i$ distinct labels from ${V^{1}} \setminus \{v_1\}$ for the children of $x_i$ , $i=1,\ldots,D_1$ , (conditionally) independently for each i. Here ${V^{i}} = V_i(H_n)$ is the set of the first $n_i$ vertices of the fixed ground set ${\mathcal V}^i$ as in Theorem 3.1.

Write $\tilde{d}_n = {\mathbb I}_{D_1 \le n_2} \sum_{i=1}^{D_1} Z^{\prime}_i$ and notice that ${\tilde d}_n$ is an upper bound on the degree of $v_1$ in the resulting intersection graph. We have, as in (4.12),

(4.14) \begin{align} {\mathbb{E}} {\tilde d}_n^k \notag &= \sum \binom k {k_1, \ldots, k_j} {\mathbb{E}} \binom {D_1} j {\mathbb I}_{D_1 \le n_2} {\mathbb{E}} (Z^{\prime}_1)^{k_1}\times \cdots \times {\mathbb{E}} (Z^{\prime}_j)^{k_j} \\ &= {\mathbb{E}} (d^*)^k - {\mathrm{o}} (1). \end{align}

Here we used (4.11), (4.12) and bounds

\begin{align*} {\mathbb{E}} (D_1)_j {\mathbb I}_{D_1 \le n_2} = {\mathbb{E}} (D_1)_j - {\mathrm{o}} (1), \quad {\mathbb{E}} (Z^{\prime}_1)^j = {\mathbb{E}} Z_1^j - {\mathbb{E}} Z_1^j {\mathbb I}_{Z_1 > n_1 -1} = {\mathbb{E}} Z_1^j - {\mathrm{o}} (1), \end{align*}

valid for any $j \le k$ by Lemma 2.1. Thus it suffices to prove that ${\mathbb{E}} d_{G_n}(v_1)^k \le {\mathbb{E}} {\tilde d}_n^k + {\mathrm{o}} (1)$ . Recall that $H_n$ is the bipartite graph underlying the intersection graph $G_n$ . Call a path xyz good if $x = v_1$ , $y \in {\mathcal V}_2$ and $z \in {\mathcal V}_1 \setminus \{v_1\}$ . We have

\begin{align*}d_{G_n}(v_1) \le \sum {\mathbb I}(v_1 w v, H_n) \quad \text{and}\quad {\tilde d}_n = \sum {\mathbb I}(v_1 w v, {\tilde H}_n)\end{align*}

where the sum is over all good paths $v_1 w v$ and ${\mathbb I}(F, H)$ is the indicator of the event that $F \subseteq E(H)$ .

For any graph H we denote $v(H) = |V(H)|$ and $e(H) = |E(H)|$ . If H is bipartite (more precisely, 2-coloured), $v_j(H)$ , $j=1, 2$ , denotes the size of the jth part $V_j$ of H. Define an equivalence relation between bipartite graphs $H^{\prime} = (V^{\prime}_1, V^{\prime}_2, E^{\prime})$ , $H^{\prime\prime} = (V^{\prime\prime}_1, V^{\prime\prime}_2, E^{\prime\prime})$ : $H^{\prime} \sim H^{\prime\prime}$ if and only if there is an isomorphism from $H^{\prime}$ to $H^{\prime\prime}$ that maps $V^{\prime}_j$ to $V^{\prime\prime}_j$ , $j=1,2$ . Let ${\mathcal{F}}_k$ consist of one member for each equivalence class of all graphs formed from a union of k good paths (a not necessarily disjoint union of graphs $G_1 = (V_1, E_1), \ldots, G_k = (V_k, E_k)$ is the graph $(V_1 \cup \cdots \cup V_k, E_1 \cup \cdots \cup E_k)$ ). For $H^{\prime} \in {\mathcal{F}}_k$ , let $N(H^{\prime})$ be the number of distinct tuples of k good paths whose union is a bipartite graph $H^{\prime\prime}$ with parts $V^{\prime\prime}_1 \subseteq {V^{1}}$ and $V^{\prime\prime}_2 \subseteq {V^{2}}$ such that $H^{\prime\prime} \sim H^{\prime}$ . It is easy to see that there are positive constants $c(H^{\prime})$ , $C(H^{\prime})$ such that, for all n large enough,

(4.15) \begin{equation}N(H^{\prime}) = c(H^{\prime}) (n_1)_{v_1(H^{\prime})-1} (n_2)_{v_2(H^{\prime})} = C(H^{\prime}) n_1^{v(H^{\prime}) - 1} (1 + {\mathrm{o}} (1)).\end{equation}

By linearity of expectation,

\begin{align*}&{\mathbb{E}} d_{G_n}(v_1)^k \le {\mathbb{E}} \Bigl(\sum {\mathbb I}(v_1 w v, H_n)\Bigr)^k =\sum_{H^{\prime} \in {\mathcal{F}}_k} N(H^{\prime}) {\mathbb{E}} {\mathbb I}(H^{\prime}, H_n),\end{align*}

and similarly

\begin{align*}{\mathbb{E}} {\tilde d}_n^k = \sum_{H^{\prime} \in {\mathcal{F}}_k} N(H^{\prime}) {\mathbb{E}} {\mathbb I}(H^{\prime}, {\tilde H}_n).\end{align*}

Using (4.14), (4.15) and the fact that ${\mathcal{F}}_k$ is finite, in order to prove (4.13) it suffices to check that

(4.16) \begin{equation}{\mathbb{E}} {\mathbb I}(H^{\prime}, H_n) \le {\mathbb{E}} {\mathbb I}(H^{\prime}, \tilde{H}_n) + {\mathrm{o}} \bigl(n_1^{-v(H^{\prime})+1}\bigr) \quad \text{for each } H^{\prime} \in {\mathcal{F}}_k.\end{equation}

So fix any $H^{\prime} \in {\mathcal{F}}_k$ . Suppose $v_2(H^{\prime}) = t$ and the degrees of vertices in $V_2(H^{\prime})$ are $b_1, \ldots, b_t$ . Note that $b_j \le k+1$ for $j = 1, \ldots, t$ . Conditioning on $D_1$ and the positions of the generation 1 nodes labelled $V_2(H^{\prime})$ , and using (4.11),

(4.17) \begin{align}{\mathbb{E}} {\mathbb I}(H^{\prime}, {\tilde H}_n) &= {\mathbb{E}} \dfrac {(D_1)_t {\mathbb I}_{D_1 \le n_2}} {(n_2)_t} \dfrac{(Z^{\prime}_1)_{b_1-1}} {(n_1 - 1)_{b_1-1}} \times \cdots \times\dfrac{(Z^{\prime}_t)_{b_t-1}} {(n_1 - 1)_{b_t-1}} \notag\\ &= n_2^{-t} n_1^{-e(H^{\prime}) + t} {\mathbb{E}} (D_1)_t \prod_{i=1}^t {\mathbb{E}} (Z^{\prime}_i)_{b_i-1} (1+{\mathrm{o}} (1)) \notag\\ &= \beta^{-t} n_1^{-e(H^{\prime})} {\mathbb{E}} (D_1)_t ({\mathbb{E}} D_2)^{-t} \prod_{i=1}^t {\mathbb{E}} (D_2)_{b_i} (1+{\mathrm{o}} (1)).\end{align}

Now if $H^{\prime}$ is a tree then $e(H^{\prime}) = v(H^{\prime})-1$ , and (4.16) follows if

(4.18) \begin{equation} {\mathbb{E}} {\mathbb I}(H^{\prime}, H_n) \le {\mathbb{E}} {\mathbb I}(H^{\prime}, \tilde{H}_n) (1 + {\mathrm{o}} (1)).\end{equation}

Meanwhile, if $H^{\prime}$ has a cycle then $e(H^{\prime}) \ge v(H^{\prime})$ and ${\mathbb{E}} {\mathbb I}(H^{\prime}, {\tilde H}_n) = {\mathrm{O}} \bigl(n_1^{-v(H^{\prime})}\bigr)$ , so (4.16) follows whenever

(4.19) \begin{equation}{\mathbb{E}} {\mathbb I}(H^{\prime}, H_n) = {\mathrm{o}} \bigl(n_1^{-v(H^{\prime})+1}\bigr).\end{equation}

We now consider (4.16) for each model separately.

(i) Active intersection graph. Let $a_1, \ldots, a_s$ be the degrees of vertices in $V_1(H^{\prime})$ . We can assume $a_1 = d_{H^{\prime}}(v_1) = t$ . Of course, $a_j \le k$ , $j = 1, \ldots, s$ . Since the vertices in $V_1(H^{\prime})$ choose their neighbours independently, using Lemma 2.1,

\begin{align*}{\mathbb{E}} {\mathbb I}(H^{\prime}, H_n) = {\mathbb{E}} \prod_{i=1}^s \dfrac { (X_{u_i})_{a_i}} { (n_2)_{a_i}} = n_2^{-e(H^{\prime})} \prod_{i=1}^s {\mathbb{E}} (D_1)_{a_i} (1 + {\mathrm{o}} (1)),\end{align*}

where $u_1, \ldots, u_s$ are the vertices of $V_1(H^{\prime})$.

If $H^{\prime}$ has a cycle then $e(H^{\prime}) \ge v(H^{\prime})$ and ${\mathbb{E}} {\mathbb I}(H^{\prime}, H_n) = {\mathrm{O}} \bigl(n_1^{-e(H^{\prime})}\bigr)$, so (4.19) holds.

By Remark 3.1, $D_2 \sim \operatorname{Po} (\beta^{-1} {\mathbb{E}} D_1)$. So ${\mathbb{E}} (D_2)_{b_i} = (\beta^{-1} {\mathbb{E}} D_1)^{b_i}$. Thus (4.17) reduces to

\begin{align*}{\mathbb{E}} {\mathbb I}(H^{\prime}, {\tilde H}_n) = (\beta n_1)^{-e(H^{\prime})} {\mathbb{E}} (D_1)_t ({\mathbb{E}} D_1)^{e(H^{\prime}) - t} (1 + {\mathrm{o}} (1)).\end{align*}

If $H^{\prime}$ has no cycle, then $a_j = 1$ for all $j \ge 2$. Thus

\begin{align*}{\mathbb{E}} {\mathbb I}(H^{\prime}, H_n) &\le n_2^{-e(H^{\prime})} {\mathbb{E}} (D_1)_t ({\mathbb{E}} D_1)^{e(H^{\prime})-t} (1 + {\mathrm{o}} (1))\end{align*}

and (4.18) follows.

(ii) Passive intersection graph. Since $b_i \le k+1$ for $i = 1, \ldots, t$ by assumption (ii) of the lemma,

\begin{align*}{\mathbb{E}} {\mathbb I}(H^{\prime}, H_n) = {\mathbb{E}} \prod_{i=1}^t \dfrac { (X_{w_i})_{b_i}} {(n_1)_{b_i}} = n_1^{-e(H^{\prime})} \prod_{i=1}^t {\mathbb{E}} (D_2)_{b_i} (1 + {\mathrm{o}} (1)),\end{align*}

where $w_1, \ldots, w_t$ are the vertices of $V_2(H^{\prime})$.

Using Remark 3.1, $D_1 \sim \operatorname{Po} (\beta {\mathbb{E}} D_2)$, so ${\mathbb{E}} (D_1)_t = \beta^t ({\mathbb{E}} D_2)^t$. Therefore (4.17) reduces to

\begin{align*}{\mathbb{E}} {\mathbb I}(H^{\prime}, {\tilde H}_n) = n_1^{-e(H^{\prime})} \prod_{i=1}^t {\mathbb{E}} (D_2)_{b_i} (1 + {\mathrm{o}} (1))\end{align*}

and (4.16) follows.

(iii) Inhomogeneous random intersection graph. Let $\{\xi_u\,:\, u \in V(H^{\prime})\}$ be independent random variables such that $\xi_u \sim \xi^{(i)}$ for $u \in V_i(H^{\prime})$, $i = 1, 2$. Write $a \wedge b = \min(a,b)$. Then

\begin{align*}{\mathbb{E}} {\mathbb I}(H^{\prime}, H_n) &= {\mathbb{E}} \prod_{uv \in E(H^{\prime})}\biggl( \dfrac {\xi_u \xi_v} {\sqrt{n_1 n_2}} \wedge 1\biggr) \\ &\le \beta^{-e(H^{\prime})/2}n_1^{-e(H^{\prime})} \prod_{u \in V(H^{\prime})} {\mathbb{E}} \xi_u^{d_{H^{\prime}}(u)} (1 + {\mathrm{o}} (1)).\end{align*}
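The inequality above first bounds the minimum by the product and then factorizes over vertices: by independence of the weights, ${\mathbb{E}} \prod_{uv \in E(H^{\prime})} \xi_u \xi_v = \prod_{u \in V(H^{\prime})} {\mathbb{E}} \xi_u^{d_{H^{\prime}}(u)}$, since each $\xi_u$ appears once per incident edge. A brute-force check on an illustrative star graph with two-point weight distributions:

```python
from itertools import product

# illustrative tree H': star with centre 0 and leaves 1, 2, 3
edges = [(0, 1), (0, 2), (0, 3)]
deg = {0: 3, 1: 1, 2: 1, 3: 1}

# each xi_u takes a value with the given probability, independently
support = [(0.5, 0.4), (2.0, 0.6)]  # (value, probability)

# E prod_{uv in E} xi_u xi_v, by brute force over all weight assignments
lhs = 0.0
for assignment in product(support, repeat=4):
    prob = 1.0
    for value, pr in assignment:
        prob *= pr
    term = 1.0
    for u, w in edges:
        term *= assignment[u][0] * assignment[w][0]
    lhs += prob * term

# prod_u E xi_u^{deg(u)}
rhs = 1.0
for u, d in deg.items():
    rhs *= sum(pr * value**d for value, pr in support)

assert abs(lhs - rhs) < 1e-9
```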

If $H^{\prime}$ contains a cycle, then by the assumption that ${\mathbb{E}} (\xi^{(1)})^k$ and ${\mathbb{E}} (\xi^{(2)})^{k+1}$ are finite, we get that ${\mathbb{E}} {\mathbb I}(H^{\prime}, H_n) = {\mathrm{O}} \bigl(n_1^{-v(H^{\prime})}\bigr)$, so (4.19) holds. If $H^{\prime}$ is a tree then

(4.20) \begin{equation}{\mathbb{E}} {\mathbb I}(H^{\prime}, H_n) \le \beta^{-e(H^{\prime})/2} n_1^{-e(H^{\prime})} {\mathbb{E}} \bigl(\xi^{(1)}\bigr)^t \bigl({\mathbb{E}} \xi^{(1)}\bigr)^{s-1} \prod_{j=1}^t {\mathbb{E}} \bigl(\xi^{(2)}\bigr)^{b_j} (1 + {\mathrm{o}} (1)).\end{equation}

Using Remark 3.1, we have $D_1 \sim \operatorname{Po} \bigl(\beta^{1/2} \xi^{(1)} {\mathbb{E}} \xi^{(2)}\bigr)$ and $D_2 \sim \operatorname{Po} \bigl(\beta^{-1/2} \xi^{(2)} {\mathbb{E}} \xi^{(1)}\bigr)$, so ${\mathbb{E}} (D_1)_t = \beta^{t/2} {\mathbb{E}} (\xi^{(1)})^t ({\mathbb{E}} \xi^{(2)})^t$ and ${\mathbb{E}} (D_2)_j = \beta^{-j/2} {\mathbb{E}} (\xi^{(2)})^j ({\mathbb{E}} \xi^{(1)})^j$ for $j \le k+1$. Putting these estimates into (4.17) and simplifying, we get the expression on the right-hand side of (4.20). Then (4.18) follows.

(iv) Random configuration graph. For $i = 1, 2$ , let

\begin{align*} \tilde{d}_{i, m} = n_i^{-1} \sum_{v \in {V^{i}}} (d_{i,v})_m. \end{align*}

Recall that $N = \sum_{u \in V^{1}} d_{1,u}$ is the total number of half-edges in each of the parts. Since $n_1, n_2 \to \infty$ and, by the assumption of the lemma, $N n_1^{-1} \to {\mathbb{E}} D_1$, there exists $\omega_n \to \infty$ such that for all $n$

(4.21) \begin{equation} n_1,n_2, N \ge \omega_n, \quad \omega_n \ge k+2. \end{equation}

The probability that $H_n$ contains $H^{\prime}$ as a subgraph is at most

\begin{align*} a(H^{\prime}) = \dfrac {1} {(N)_{e(H^{\prime})}} {\mathbb{E}} \prod_{u \in V(H^{\prime})} (d_{H_n}(u))_{d_{H^{\prime}}(u)}. \end{align*}

Here the product counts the number of ways to choose particular half-edges forming $H^{\prime}$. Let $(u_1^*, \ldots, u_s^*)$ and $(w_1^*, \ldots, w_t^*)$ be independent uniformly random tuples of distinct vertices from $V_1(H_n)$ and $V_2(H_n)$ respectively. Using symmetry, (4.21), and the assumption of the lemma,

\begin{align*} a(H^{\prime}) & = \dfrac {1} {(N)_{e(H^{\prime})}} {\mathbb{E}} \prod_{i=1}^s \bigl(d_{1,u_i^*}\bigr)_{a_i} \prod_{j=1}^t \bigl(d_{2,w_j^*}\bigr)_{b_j} \\ & \le \dfrac {1} {(N)_{e(H^{\prime})}} \prod_{i=1}^s \tilde{d}_{1, a_i} \prod_{j=1}^t \tilde{d}_{2, b_j}(1 + {\mathrm{o}} (1)) \\ & = ({\mathbb{E}} D_1 n_1)^{-e(H^{\prime})} \prod_{i=1}^s {\mathbb{E}} (D_1)_{a_i} \prod_{j=1}^t {\mathbb{E}} (D_2)_{b_j} (1+{\mathrm{o}} (1)). \end{align*}

Again, if $H^{\prime}$ has a cycle then (4.19) follows. Otherwise, if $H^{\prime}$ is a tree, then since ${\mathbb{E}} D_1 n_1 = {\mathbb{E}} D_2 n_2 (1+{\mathrm{o}} (1))$ we have ${\mathbb{E}} D_1 = \beta {\mathbb{E}} D_2$ and

\begin{align*} ({\mathbb{E}} D_1)^{-e(H^{\prime})} \prod_{i=1}^s {\mathbb{E}} (D_1)_{a_i} = {\mathbb{E}} (D_1)_t \dfrac { ({\mathbb{E}} D_1) ^{s-1}} { ({\mathbb{E}} D_1)^{s+t-1}} = \beta^{-t} {\mathbb{E}} (D_1)_t ({\mathbb{E}} D_2)^{-t}. \end{align*}

By comparing $a(H^{\prime})$ with (4.17), we see that (4.18) holds.
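The last simplification combines $e(H^{\prime}) = s + t - 1$ (valid for a tree with $s + t$ vertices) with ${\mathbb{E}} D_1 = \beta {\mathbb{E}} D_2$. A numerical sanity check with illustrative values, treating ${\mathbb{E}} (D_1)_t$ as a free parameter x:

```python
# illustrative numeric values; x stands in for E (D_1)_t, which cancels
beta, m2, x = 0.6, 1.7, 2.3   # beta, E D_2, E (D_1)_t
s, t = 3, 2                   # sizes of V_1(H') and V_2(H')
m1 = beta * m2                # E D_1 = beta * E D_2
e = s + t - 1                 # e(H') = v(H') - 1 for a tree

lhs = m1**(-e) * x * m1**(s - 1)   # (E D_1)^{-e(H')} * E(D_1)_t * (E D_1)^{s-1}
rhs = beta**(-t) * x * m2**(-t)    # beta^{-t} * E(D_1)_t * (E D_2)^{-t}
assert abs(lhs - rhs) < 1e-9
```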

Acknowledgements

I would like to thank the referees for their helpful remarks, in particular for pointing out the important connections to the classical theory of convergence of measures in complete separable metric spaces.

Funding information

Part of this work was supported by the Research Council of Lithuania (MIP-067/2013).

Competing interests

There were no competing interests to declare during the preparation or publication of this article.
