
WEIGHT OF THE SHORTEST PATH TO THE FIRST ENCOUNTERED PEER IN A PEER GROUP OF SIZE m

Published online by Cambridge University Press:  18 December 2007

P. Van Mieghem
Affiliation:
Delft University of Technology, 2600 GA Delft, The Netherlands E-mail: P.F.A.VanMieghem@tudelft.nl; S.Tang@ewi.tudelft.nl
S. Tang
Affiliation:
Delft University of Technology, 2600 GA Delft, The Netherlands E-mail: P.F.A.VanMieghem@tudelft.nl; S.Tang@ewi.tudelft.nl

Abstract

We model the weight (e.g., delay, distance, or cost) from an arbitrary node to the nearest (in weight) peer in a peer-to-peer (P2P) network. The exact probability generating function and an asymptotic analysis are presented for a random graph with independent and identically distributed exponential link weights. The asymptotic distribution function is a Fermi–Dirac distribution that frequently appears in statistical physics. The good agreement with simulation results for relatively small P2P networks makes the asymptotic formula for the probability density function useful for estimating the minimal number of peers needed to offer an acceptable quality (delay or latency).

Type
Research Article
Copyright
Copyright © Cambridge University Press 2008

1. INTRODUCTION

The present analysis is motivated by content distribution in peer-to-peer (P2P) networks (see, e.g., [Reference Van Mieghem6, Chap. 13]) such as Napster, BitTorrent, Tribler, and so forth. Content is often either fully replicated at a peer or split into chunks and stored over m peers. A major task of a member of the peer group lies in selecting the best peer among those m peers that possess the desired content. This peer-search process is not always optimized, and several variants exist depending on which criterion is optimized. For example, one might choose the peer that is nearest in number of hops, or the peer that is nearest in delay (latency). The first problem is analogous to the anycast problem in IPv6, treated in [Reference Van Mieghem5, Chap. 18]. Here, we confine ourselves to the second problem: Given a network of N nodes over which m peers are randomly distributed, what is the delay from a certain peer to the nearest (in delay) peer? The presented analysis shows how that delay varies as the number m of peers changes, and it might set bounds on the minimum number of peers needed to still offer an acceptable content distribution service.

In a P2P network, peers join and leave regularly, which motivates us to model such a network, to first order, as an Erdős–Rényi random graph. The shortest path between two nodes is the path for which the sum of the link weights is minimal. The shortest path tree (SPT) rooted at an arbitrary node to m uniformly chosen peers is the union of the shortest paths from that node to each of the m peers in the graph. The computation of a shortest path thus requires knowledge of the link weights, such as a monetary cost, the number of hops, a delay, a distance, and so forth. In most communication networks, link weights are not precisely known. Here, and as motivated in [Reference Van Mieghem5, Sect. 16.1], we assign independent and identically distributed (i.i.d.) exponentially distributed weights with unit mean to the links.

In [Reference van der Hofstad, Hooghiemstra and Van Mieghem4,Reference Van Mieghem5], we have rephrased the shortest path problem between two arbitrary nodes in the complete graph $K_N$ with i.i.d. exponential link weights as a Markov discovery process, which starts the path-searching process at the source. The discovery process is a continuous-time Markov chain with N states. Each state n represents the n already discovered nodes (including the source node). If at some stage in the Markov discovery process n nodes are discovered, then the next node is reached with rate $\lambda_{n,n+1} = n(N-n)$, which is the transition rate in the continuous-time Markov chain. Since the discovery of nodes at each stage only increases n, the Markov discovery process is a pure birth process with birth rate $n(N-n)$. We call $\tau_n$ the interattachment time between the inclusion of the nth and (n + 1)st node to the SPT for n = 1, … , N − 1. The interattachment time $\tau_n$ is exponentially distributed with parameter $n(N-n)$, as follows from the theory of Markov processes. By the memoryless property of the exponential distribution and the symmetry in the complete graph $K_N$, the new node is attached uniformly to an already discovered node. Hence, the resulting SPT to all nodes (i.e., m = N − 1) is exactly a uniform recursive tree (URT). A URT of size N is a random tree rooted at some source node in which, at each stage, a new node is attached uniformly to one of the existing nodes until the total number of nodes equals N. As proven in [Reference van der Hofstad, Hooghiemstra and Van Mieghem4], for large N, the SPT in the class of Erdős–Rényi random graphs with i.i.d. exponential link weights is also asymptotically a URT.
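A minimal simulation sketch of this joint construction (in Python; node 0 is taken as the source, and the helper name discovery_process is ours, not from the article) illustrates the Markov discovery process and the resulting URT:

```python
import random

def discovery_process(N, rng=random):
    """Grow the SPT of the complete graph K_N with i.i.d. exponential(1) link weights
    via the Markov discovery process, together with the corresponding URT.
    Returns (parent, weight): parent[v] is the node to which v was attached and
    weight[v] is the shortest-path weight from the source (node 0) to v."""
    parent = {0: None}
    weight = {0: 0.0}
    order = list(range(1, N))
    rng.shuffle(order)                       # discovery order is a uniform permutation
    w = 0.0
    for n, v in enumerate(order, start=1):
        w += rng.expovariate(n * (N - n))    # interattachment time tau_n ~ Exp(n(N-n))
        parent[v] = rng.choice(list(parent)) # uniform attachment: the tree is a URT
        weight[v] = w                        # weight of the shortest path to the n-th node
    return parent, weight
```

For a uniformly chosen peer group of size m, the weight to the first encountered peer is then simply the minimum of weight[v] over the m peers v, since nodes are discovered in order of increasing weight.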

The article is organized as follows. Section 2 derives the exact probability generating function of the weight toward the first encountered peer. The asymptotic regime of large N (and constant peer group size m) is computed in Section 3 and compared with simulations for finite graph sizes N. Mathematical derivations are given in the appendices.

2. PROBABILITY GENERATING FUNCTION

The weight $W_{N;m}$ of the shortest path from an arbitrary node to the nearest (in weight) peer in a peer group of size m isFootnote 1

(1)
$$W_{N\semicolon m} = \sum_{k=1}^{N-1}1_{\left\{T_{N}\left(m\right)\ge k\right\}}\tau_{k}\comma$$

where $T_N(m)$ is the number of steps in the continuous-time Markov discovery process until one peer of the group of m peers is reached; in other words, $T_N(m)$ is the hitting time to the peer group of size m. Since the random variables $1_{\{T_N(m)\ge k\}}$ and $\tau_k$ (for a fixed k) are independent, the average weight directly follows as

$$E\left[W_{N\semicolon m}\right]=\sum_{k=1}^{N-1}\hbox{Pr}\left[T_{N}\left(m\right)\ge k\right]E\left[\tau_{k}\right].$$

The event $\{T_N(m)\ge k\}$ means that the first encountered peer is attached at step k or later; hence, the k − 1 previously discovered nodes do not belong to the peer group. Since the m peers as well as the k − 1 nodes are uniformly chosen out of the N − 1 nodes, we have that

$$\hbox{Pr} \lsqb T_{N}\left(m\right)\ge k\rsqb = {\left(\displaystyle{N-1-m \atop k-1}\right)\over \left(\displaystyle{N-1 \atop k-1}\right)}$$

such that, with $E[\tau_k] = {1 \over k(N-k)}$,

$$E\lsqb W_{N\semicolon m}\rsqb = {\left(N-1-m\right)! \over \lpar N-1\rpar !}\sum_{k=1}^{N-1}{\lpar N-1-k\rpar ! \over k\lpar N-m-k\rpar !}.$$

Invoking the identity (C.1) of Appendix C yields

$$\eqalign{\sum_{k=1}^{N-m}{1 \over k}{\lpar N-1-k\rpar ! \over \lpar N-m-k\rpar !} = {\left(N-1\right)! \over \left(N-m\right)!}\sum_{j=1}^{N-m}{1 \over j}-{\left(N-1\right)! \over \lpar N-m-1\rpar !} \sum_{q=0}^{m-2}{1 \over \left(m-1-q\right)\lpar N-1-q\rpar }.}$$

After partial fraction expansion, ${1 \over \left(m-1-q\right)\lpar N-1-q\rpar }={1 \over N-m}\left({1 \over m-1-q}-{1 \over N-1-q}\right)$, the q-sum becomes

$$\sum_{q=0}^{m-2}{1 \over \left(m-1-q\right)\lpar N-1-q\rpar } = {1 \over N-m} \sum_{j=1}^{m-1}{1 \over j}-{1 \over N-m}\sum_{j=N-m+1}^{N-1}{1 \over j}$$

and we arrive at

(2)
$$E\lsqb W_{N\semicolon m}\rsqb = {1 \over N-m}\left\{\sum_{j=1}^{N-1}{1 \over j}-\sum_{j=1}^{m-1}{1 \over j}\right\}={\psi\, \lpar N\rpar -\psi\, \lpar m\rpar \over N-m}$$

where ψ(x) is the digamma function [Reference Abramowitz and Stegun1, Sect. 6.3]. For large N, we have [Reference Abramowitz and Stegun1, Sect. 6.3.18]

$$E\lsqb W_{N\semicolon m}\rsqb ={\ln {N \over m} \over N-m} + {1 \over 2Nm} + O \left(N^{-1}m^{-2}\right).$$
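As a sanity check (not part of the original analysis), the exact mean (2) can be compared with a direct Monte Carlo estimate of $W_{N;m}$; the helper names and the values N = 200, m = 5 below are illustrative:

```python
import random

def sample_W(N, m, rng=random):
    """One realization of W_{N;m}: the hitting step T_N(m) is the position of the first
    peer in the uniform discovery order, and W_{N;m} = tau_1 + ... + tau_T with
    tau_k ~ exponential(rate k*(N-k))."""
    T = min(rng.sample(range(1, N), m))               # step at which the first peer appears
    return sum(rng.expovariate(k * (N - k)) for k in range(1, T + 1))

def mean_W(N, m):
    """Exact mean (2): (psi(N) - psi(m))/(N - m) = (H_{N-1} - H_{m-1})/(N - m)."""
    H = lambda n: sum(1.0 / j for j in range(1, n + 1))
    return (H(N - 1) - H(m - 1)) / (N - m)

N, m, runs = 200, 5, 100_000
mc = sum(sample_W(N, m) for _ in range(runs)) / runs
print(mc, mean_W(N, m))    # the two values should agree within Monte Carlo error
```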

Since the sequence of random variables $1_{\{T_N(m)\ge 1\}}, 1_{\{T_N(m)\ge 2\}}, \ldots , 1_{\{T_N(m)\ge N-1\}}$ is obviously not independent, a computation of the probability generating function (pgf) $\varphi_{W_{N;m}}(z) = E[e^{-zW_{N;m}}]$ other than straightforward use of (1) is needed. We define $v_k = \sum_{n=1}^{k}\tau_n$ as the weight of the shortest path to the kth attached (discovered) node. Since the laws of the Markov discovery process and of the URT are independent [Reference van der Hofstad, Hooghiemstra and Van Mieghem4], we obtain

$$\varphi_{W_{N\semicolon m}}\left(z\right)= \sum_{k=1}^{N-m}E\lsqb e^{-zv_{k}}\rsqb \hbox{Pr} \lsqb Y_{m}\left(k\right)\rsqb \comma$$

where Y m(k) denotes the event that the kth attached node is the first encountered peer among the m peers in the URT. Now, there are $\left({N-1 \atop m}\right)$ ways to distribute the m peers (different from the source node) over the N − 1 remaining nodes. If the kth node is the first of the m discovered peers, it implies that the remaining m − 1 peers need to be discovered later. There are $\left({N-1-k \atop m-1}\right)$ possible ways, from which

$$\hbox{Pr} \lsqb Y_{m} \lpar k\rpar \rsqb ={\left(\displaystyle{N-1-k \atop m-1}\right)\over \left(\displaystyle{N-1 \atop m}\right)}.$$

Thus, we obtainFootnote 2

(3)
$$\varphi_{W_{N\semicolon m}}\left(z\right)={m\lpar N-1-m\rpar ! \over \lpar N-1\rpar !}\sum_{k=1}^{N-m}{\lpar N-1-k\rpar ! \over \lpar N-m-k\rpar !} \prod_{n=1}^{k} {n\lpar N-n\rpar \over z + n\lpar N-n\rpar }.$$

After partial integration of a single-sided Laplace transform $\varphi_X(z)=\vint_{0}^{\infty}f_X(t)e^{-zt}\,dt$ and assuming that the derivative $f_X^{\prime}(t)$ exists, the relation $f_X(0)=\lim_{z\rightarrow\infty}z\varphi_X(z)$ is found. Applied to (3), this yields

(4)
$$f_{W_{N\semicolon m}}\left(0\right)=m.$$
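The exact pgf (3) is easily evaluated numerically; the short check below (with illustrative values N = 100 and m = 4, and helper names of our choosing) verifies both (4) and the mean relation of Footnote 2:

```python
import math

def pgf_W(z, N, m):
    """Exact pgf (3): phi(z) = sum_k Pr[Y_m(k)] * prod_{n<=k} n(N-n)/(z + n(N-n)),
    with Pr[Y_m(k)] = C(N-1-k, m-1)/C(N-1, m)."""
    total, prod = 0.0, 1.0
    denom = math.comb(N - 1, m)
    for k in range(1, N - m + 1):
        rate = k * (N - k)
        prod *= rate / (z + rate)
        total += math.comb(N - 1 - k, m - 1) / denom * prod
    return total

N, m = 100, 4
print(1e8 * pgf_W(1e8, N, m))                      # (4): z*phi(z) -> m = 4 as z grows
h = 1e-6
print(-(pgf_W(h, N, m) - pgf_W(0.0, N, m)) / h)    # -phi'(0) should equal the mean (2)
```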

3. ASYMPTOTIC pgf AND pdf

Although the inverse Laplace transform of (3) can be computed, the resulting series for the probability density function (pdf) is hardly appealing. As shown below, an asymptotic analysis leads to an elegant result that, in addition, seems applicable to a relatively small network size N. Following a similar procedure as in [Reference Van Mieghem5, pp. 518–520], we write

$$z + n\lpar N-n\rpar =\left(\sqrt{\left({N \over 2}\right)^{2}+\ z}+{N \over 2}-n\right)\left(\sqrt{\left({N \over 2}\right)^{2}+\ z} - \left({N \over 2}-n\right)\right)$$

and define $y=\sqrt{\left( {N \over 2}\right) ^{2}+\,z}$. Then

$$\eqalign{\prod_{n=1}^{k}{n\lpar N-n\rpar \over z + n\lpar N-n\rpar } = {k!\lpar N-1\rpar ! \over \lpar N-k-1\rpar !} \prod_{n=1}^{k} {1 \over \left(y + \displaystyle{N \over 2}-n\right)} \prod_{n=1}^{k} {1 \over \left(y-{N \over 2}+n\right)} \cr ={k!\lpar N-1\rpar ! \over \lpar N-k-1\rpar !}{\Gamma\left(y+{N \over 2}-k\right)\over \Gamma\left(y+{N \over 2}\right)}\quad {\Gamma\left(y-{N \over 2}+1\right)\over \Gamma\left(y-{N \over 2}+k+1\right)}}$$

and, substituted in (3), yields

$$\eqalign{\varphi_{W_{N\semicolon m}}\left(z\right) = {m\lpar N-1-m\rpar !\Gamma\left(y-{N \over 2}+1\right)\over \Gamma\left(y+{N \over 2}\right)}\cr \quad \times \sum_{k=1}^{N-m}{k! \over \lpar N-m-k\rpar !} {\Gamma\left(y+{N \over 2}-k\right)\over \Gamma\left(y-{N \over 2}+k+1\right)}.}$$

For large N and |z| < N, we haveFootnote 3 that $y=\sqrt{\left ( {N \over 2}\right ) ^{2}+\,z}\sim{N \over 2}+{z \over N}$ such that

$$\eqalign{&\varphi_{W_{N\semicolon m}}\left(z\right)\sim {m\lpar N-1-m\rpar ! \Gamma\left({z \over N} + 1\right)\over \Gamma\left(N+{z \over N}\right)}\cr &\quad \times \sum_{k=1}^{N-m} {k! \over \lpar N-m-k\rpar !} {\Gamma \left(N+{z \over N}-k\right)\over \Gamma\left({z \over N}+k+1\right)}.}$$

We now introduce the scaling z = Nx, where |x| < 1 since |z| < N:

(5)
$$\eqalignno{\varphi_{W_{N\semicolon m}}\left(Nx\right)\sim {m\lpar N-1-m\rpar !\Gamma\left(x+1\right)\over \Gamma\left(N+x\right)}\cr \quad \times \sum_{k=1}^{N-m}{k! \over \lpar N-m-k\rpar !}{\Gamma\left(N+x-k\right)\over \Gamma\left(x+1+k\right)}.}$$

For large N and fixed m and x, the sum

(6)
$$S=\sum_{k=1}^{N-m}{\Gamma\left(k+1\right)\over \Gamma\left(k+x+1\right)}{\Gamma\left(N-k+x\right)\over \Gamma\lpar N-k-m+1\rpar }$$

scales as

(7)
$$S={\Gamma\lpar N+x+1\rpar N^{-x-1} \over \Gamma\left(x\right)\Gamma\left(N-m\right)}{\pi \over \sin\pi\, x}{\Gamma\lpar x+m\rpar \over m!}\left(1+O\lpar N^{-1}\rpar \right).$$
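Before turning to the derivation, the scaling (7) can be checked numerically against a direct evaluation of (6); the ratio below should tend to 1 as N grows (the values m = 3 and x = 0.4 are illustrative, and the helper names are ours):

```python
import math

def S_direct(N, m, x):
    """Direct evaluation of the sum (6), using log-Gamma to avoid overflow."""
    lg = math.lgamma
    return sum(math.exp(lg(k + 1) - lg(k + x + 1) + lg(N - k + x) - lg(N - k - m + 1))
               for k in range(1, N - m + 1))

def S_asympt(N, m, x):
    """Leading term of the asymptotic scaling (7)."""
    lg = math.lgamma
    log_pref = lg(N + x + 1) - (x + 1) * math.log(N) - lg(x) - lg(N - m)
    return math.exp(log_pref) * math.pi / math.sin(math.pi * x) \
           * math.gamma(x + m) / math.factorial(m)

for N in (50, 200, 1000):
    print(N, S_direct(N, 3, 0.4) / S_asympt(N, 3, 0.4))   # ratio -> 1 with an O(1/N) error
```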

This result (7) is derived in Appendix A. Substitution of (7) into (5) leads, for large N, fixed m, and |x| < 1, to

$$\eqalign{\varphi_{W_{N\semicolon m}}\left(Nx\right)\sim{\pi\, x \over \sin\pi\, x} {\Gamma\lpar x+m\rpar \over \Gamma\lpar m\rpar }N^{-x}\left(1+O\left(N^{-1}\right)\right)}$$

or

(8)
$$\eqalignno{\lim_{N\rightarrow\infty}N^{x}\varphi_{W_{N\semicolon m}}\left(Nx\right) =\lim_{N\rightarrow\infty} E\left[e^{-\lpar NW_{N\semicolon m}-\ln N\, \rpar x}\right]\cr ={\pi\, x \over \sin\pi\, x}{\Gamma\lpar x+m\rpar \over \Gamma\lpar m\rpar }.}$$

Finally, the inverse Laplace transform, computed in Appendix B, is

(9)
$$\eqalignno{\lim_{N\rightarrow\infty} \hbox{Pr} \lsqb NW_{N\semicolon m}-\ln N\le t\rsqb = me^{-mt}e^{e^{-t}}\cr \vint_{e^{-t}}^{\infty}{e^{-u} \over u^{m+1}}\, du = me^{-mt} e^{e^{-t}}\Gamma\left(-m\comma e^{-t}\right)\comma\;}$$

where $\Gamma(-m, e^{-t})$ is the incomplete Gamma function (C.3). If $m=1$, then we have

(10)
$$\lim_{N\rightarrow\infty}\hbox{Pr}\left[NW_{N}-\ln N\le t\right]=1-e^{-t} e^{e^{-t}}\vint_{e^{-t}}^{\infty}{e^{-u} \over u}\, du.$$

The pdf follows after differentiation of (9) with respect to t as

(11)
$$\lim_{N\rightarrow\infty}f_{NW_{N\semicolon m}-\ln N}\left(t\right)=m-me^{-mt} \left(e^{-t}+m\right)e^{e^{-t}}\vint_{e^{-t}}^{\infty}{e^{-u} \over u^{m+1}}\, du.$$
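Both (9) and (11) are straightforward to evaluate numerically; as a consistency check (with the illustrative value m = 3 and helper names of our choosing), the density (11) should coincide with the numerical derivative of the distribution (9):

```python
import math
from scipy.integrate import quad

def upper_gamma(a, x):
    """Incomplete Gamma function Gamma(a, x) of (C.3), computed by quadrature so that
    negative integer orders a = -m are allowed."""
    val, _ = quad(lambda u: u**(a - 1.0) * math.exp(-u), x, math.inf)
    return val

def cdf(t, m):
    """Asymptotic distribution (9): lim Pr[N*W_{N;m} - ln N <= t]."""
    x = math.exp(-t)
    return m * math.exp(-m * t) * math.exp(x) * upper_gamma(-m, x)

def pdf(t, m):
    """Asymptotic density (11)."""
    x = math.exp(-t)
    return m - m * math.exp(-m * t) * (x + m) * math.exp(x) * upper_gamma(-m, x)

m, h = 3, 1e-5
for t in (-1.0, 0.0, 1.0, 3.0):
    print(t, pdf(t, m), (cdf(t + h, m) - cdf(t - h, m)) / (2 * h))   # last two columns agree
```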

3.1. The Fermi–Dirac Distribution

For large m and to first order, we have ${{\Gamma\lpar x+m\rpar }\over{\Gamma\lpar m\rpar }}\sim m^{x}$, such that

$$\lim_{N\rightarrow\infty} E\left[e^{-\lpar NW_{N\semicolon m}-\ln {N / m}\rpar x}\right]={\pi x \over {\rm sin}\, \pi x}.$$

In view of the average (2) for large N, we can write $NW_{N;m}-\ln (N/m)=N\left(W_{N;m}-E[W_{N;m}]\right)$. After inverse Laplace transform with 0 < c < 1,

$$\lim_{N\rightarrow\infty}\hbox{Pr}\left[NW_{N\semicolon m}-\ln \textstyle {\displaystyle {N \over m}}\le t\right]={1 \over 2\pi i}\vint_{c-i\infty}^{c+i\infty}{\pi e^{tx} \over {\rm sin}\, \pi x}\, dx.$$

For t > 0 (t < 0), the contour can be closed over the negative (positive) Re(x)-plane, resulting, for any t, in the Fermi–Dirac distribution function

(12)
$$\lim_{N\rightarrow\infty}\hbox{Pr}\left[NW_{N\semicolon m}-\ln \textstyle {\displaystyle {N \over m}}\le t\right]={1 \over 1+e^{-t}}$$

whose symmetric probability density function is

$${\lim_{N\rightarrow\infty}}f_{NW_{N\semicolon m}-\ln\!{\lpar N/ m\rpar }}\left(t\right)= {1 \over 4} \, \hbox{sech}^{2}\left({t \over 2}\right).$$
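Indeed, differentiating (12) with respect to t gives

$${d \over dt}\,{1 \over 1+e^{-t}}={e^{-t} \over \left(1+e^{-t}\right)^{2}}={1 \over \left(e^{t/2}+e^{-t/2}\right)^{2}}={1 \over 4}\,\hbox{sech}^{2}\left({t \over 2}\right).$$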

Figure 1 plots the scaled, asymptotic pdf (11) for various values of m. The Fermi–Dirac distribution function frequently appears in statistical physics (see, e.g., [Reference Ashcroft and Mermin2]). It is of interest to observe that the deep tails of $f_{NW_{N;m}-\ln N}(t)$ for large |t| decrease as $O(e^{-\vert t\vert})$. This is different from the decrease of either a Gumbel or a Gaussian, which implies that larger variations around the mean can occur.

Figure 1. The asymptotic and scaled pdf (11) for various values of m = 1, 2, 3, 4, 5, 10, 15, 20. The pdf of the Gumbel and the Fermi–Dirac distributions are also shown.

If we rescale (9) by changing t = y − ln m, then

$$\lim_{N\rightarrow\infty} \hbox{Pr}\left[NW_{N\semicolon m}-\ln \textstyle {\displaystyle {N \over m}}\le y\right]=e^{-my}m^{m+1}e^{me^{-y}}\vint_{me^{-y}}^{\infty}{e^{-u} \over u^{m+1}}\, du$$

whose dependence on m and tendency to the Fermi–Dirac distribution (12) are shown in Figure 2. From a practical point of view, if m > 5, the much simpler Fermi–Dirac distribution is sufficiently accurate. The observation that a peer group of m ≈ 5 is already close to the asymptotic regime leads us to conclude that relatively small peer group sizes suffice to offer a good service quality (e.g., small latency) and that increasing the peer group size improves weight-related quality-of-service metrics only logarithmically (i.e., marginally).
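A small numerical illustration of this convergence (not from the original article; the substitution $u=me^{-y}(1+s)$ is used here only to keep the evaluation stable for larger m, and the grid is chosen arbitrarily) computes the maximal deviation between the rescaled distribution above and the Fermi–Dirac distribution (12):

```python
import math
from scipy.integrate import quad

def rescaled_cdf(y, m):
    """Rescaled asymptotic law lim Pr[N*W_{N;m} - ln(N/m) <= y], rewritten via the
    substitution u = m*e^{-y}*(1+s) so that no large exponentials appear."""
    a = m * math.exp(-y)
    val, _ = quad(lambda s: math.exp(-a * s) * (1.0 + s)**(-(m + 1)), 0.0, math.inf)
    return m * val

def fermi_dirac(y):
    """Fermi-Dirac (logistic) distribution function (12)."""
    return 1.0 / (1.0 + math.exp(-y))

grid = [i / 10.0 for i in range(-40, 41)]          # y from -4 to 4
for m in (1, 2, 5, 10, 20):
    dev = max(abs(rescaled_cdf(y, m) - fermi_dirac(y)) for y in grid)
    print(m, round(dev, 4))                        # the maximal deviation decreases with m
```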

Figure 2. The tendency toward the Fermi–Dirac distribution (in bold) for increasing m = 1, 2, 3, 4, 5, 10, 15, 20.

Rewriting (12) as

(13)
$$\hbox{Pr}\left[W_{N\semicolon m}\le y\right]\simeq{1 \over 1+{N \over m}e^{-Ny}}$$

and after differentiation, we obtain

$$f_{W_{N\semicolon m}}\left(y\right)\simeq{{N^{2} \over m}e^{-Ny} \over \left(1+{N \over m}e^{-Ny}\right)^{2}}\comma$$

from which

$$f_{W_{N\semicolon m}}\lpar 0\rpar \simeq{\displaystyle{{N^{2} \over m}}\over{\left(1+ \displaystyle {N \over m}\right)^{2}}} = m\left(1+ \displaystyle {m \over N}\right)^{-2}.$$
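For instance, with the illustrative values N = 100 and m = 5,

$$f_{W_{N\semicolon m}}\lpar 0\rpar \simeq 5\left(1+{5 \over 100}\right)^{-2}\approx 4.54\comma$$

about 9% below the exact value $f_{W_{N;m}}(0)=m=5$ of (4).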

The condition that $W_{N;m} > 0$ is ignored in the asymptotic analysis, so that the largest error is made at $f_{W_{N;m}}(0)$. From (4), it follows that the multiplicative error of the asymptotic analysis is never worse than the factor $\lpar 1+{m \over N}\rpar ^{-2}$ attained at the origin. Hence, the smaller the fraction ${m \over N}$ of peers in the network, the higher the accuracy of the asymptotic analysis. Figure 3 shows, for finite values of N, simulations of the scaled pdf $f_{NW_{N;m}-\ln N}(t)$ together with the asymptotic result (11) for m = 1, and Figure 4 plots the comparison for m = 5. Even when ignoring the simulation error bars, both Figures 3 and 4 indicate that the asymptotic analysis is already accurate for relatively small network sizes N. The accuracy factor $\lpar 1+{m \over N}\rpar ^{-2}$ also explains why the asymptotic formula (11) is less accurate for higher m.

Figure 3. Comparison of the analytic result (11) with simulations for various N and m = 1.

Figure 4. Comparison of the analytic result (11) with simulations for various N and m = 5.

ACKNOWLEDGMENT

This work has been partially supported by European Union NoE CONTENT (FP6-IST-038423) (www.ist-content.eu).

APPENDIX A

Evaluation of the Sum S

We introduce the Beta function [Reference Abramowitz and Stegun1, 6.2.1],

$$B\left(w\comma \; z\right)={\Gamma\left(z\right)\Gamma\left(w\right)\over \Gamma\left(z+w\right)}=\vint_{0}^{1}t^{z-1}\left(1-t\right)^{w-1}\; dt,$$

which is valid for Re(z) > 0 and Re(w) > 0, in (6) such that for Re(x) > 0,

$$\eqalign{S = {1 \over \Gamma\left(x\right)}\vint_{0}^{1}dt \, \left(1-t\right)^{x-1}\sum_{k=1}^{N-m}t^{k}{\Gamma\left(N-k+x\right)\over \Gamma\lpar N-k-m+1\rpar}\cr = {1 \over \Gamma\left(x\right)} \vint_{0}^{1}dt \, \left(1-t\right)^{x-1}\sum_{j=m}^{N-1}t^{\, N-j}{\Gamma\left(j+x\right)\over \lpar\, j-m\rpar !}\cr ={1 \over \Gamma\left(x\right)}\vint_{0}^{1}dt \, \left(1-t\right)^{x-1}\sum_{j=0}^{N-m-1}t\,^{N-j-m}{\Gamma\left(j+m+x\right)\over j!}.}$$

Introducing Euler's Gamma integral yields

$$\eqalign{S ={1 \over \Gamma\left(x\right)}\vint_{0}^{1}dt\, \left(1-t\right)^{x-1}\sum_{j=0}^{N-m-1}{t^{N-m-j} \over j!}\vint_{0}^{\infty}u\, ^{j+m+x-1}e^{-u}\; du \cr ={1 \over \Gamma\left(x\right)}\vint_{0}^{1}dt\, t^{N-m}\left(1-t\right)^{x-1}\vint_{0}^{\infty}u^{m+x-1}e^{-u}\sum_{j=0} ^{N-m-1}{\left(\displaystyle{u \over t}\right)^{j} \over j!}\;du.}$$

Using (C.5),

$$\sum_{j=0}^{N-m-1}{(\displaystyle{u \over t})^{j} \over j!}= {e^{u/t}\Gamma\left(N-m\comma {u \over t}\right)\over \Gamma\left(N-m\right)}$$

gives

$$\eqalign{S ={1 \over \Gamma\left(x\right)\Gamma\left(N-m\right)}\vint_{0}^{1}\;dt\, t\,^{N-m}\left(1-t\right)^{x-1} \vint_{0}^{\infty}du \, u^{m+x-1}e^{-u}e^{u/t}\, \Gamma\left(N-m\comma\ {u \over t}\right)\cr = {1 \over \Gamma\left(x\right)\Gamma\left(N-m\right)}\vint_{0}^{1}dt\, t\,^{N-m}\left(1-t\right)^{x-1} \vint_{0}^{\infty}du\, u ^{m+x-1}e^{-u}e^{u/t}\vint_{u/t}^{\infty}y\, ^{N-m-1}e^{-y}\, dy.}$$

Let

$$\eqalign{I =\vint_{0}^{\infty}du\, u^{m+x-1}e^{-u}e^{u/t}\vint_{u/t}^{\infty}y\,^{N-m-1}e^{-y}\, dy =t^{m+x}\vint_{0}^{\infty}dw\, w^{m+x-1}e^{-wt}e^{w}\vint_{w}^{\infty}y^{N-m-1}e^{-y}\, dy}.$$

Partial integration yields

$${I \over t^{m+x}}=\vint_{0}^{\infty}dw\, w\,^{N-m-1}e^{-w}\vint_{0}^{w}u^{m+x-1} e^{u\left(1-t\right)}\, du$$

and we find that

$$\eqalign{S ={1 \over \Gamma\left(x\right)\Gamma\left(N-m\right)}\vint_{0}^{1}dt\, t^{N+x}\left(1-t\right)^{x-1} \vint_{0}^{\infty} w^{N-m-1}e^{-w} \vint_{0}^{w}u^{m+x-1}e^{u\left(1-t\right)}\ du.}$$

We compute the u-integral as

$$\eqalign{\vint_{0}^{w}u^{m+x-1}e^{u\left(1-t\right)} \ du = \sum_{k=0}^{\infty} {\lpar 1-t\rpar ^{k} \over k!} \vint_{0}^{w}u^{m+x-1+k} \ du=\sum_{k=0}^{\infty} {\lpar 1-t\rpar ^{k}w^{m+x+k} \over k!\lpar m+x+k\rpar }.}$$

Substituting this series results in

(A.1)
$$\eqalignno{S ={1 \over \Gamma\left(x\right)\Gamma\left(N-m\right)} \sum_{k=0}^{\infty}{1 \over k!\lpar m+x+k\rpar } \vint_{0}^{1}\, dt\, t^{N+x}\left(1-t\right)^{x+k-1}\vint_{0}^{\infty}w^{N-1+x+k}e^{-w}\, dw \cr ={\Gamma\lpar N+x+1\rpar \over \Gamma\left(x\right)\Gamma\left(N-m\right)}\sum_{k=0}^{\infty}{\Gamma\lpar x+k\rpar \over k!\lpar m+x+k\rpar }{\Gamma\left(N+x+k\right)\over \Gamma\lpar N+2x+k+1\rpar}.}$$

This second series (A.1) for S is more suited than (6) for deriving the asymptotic behavior of S for large N (and fixed m and x). Invoking [Reference Abramowitz and Stegun1, 6.1.47] yields

$$\eqalign{S={\Gamma\lpar N+x+1\rpar N^{-x-1} \over \Gamma\left(x\right)\Gamma\left(N-m\right)}\sum_{k=0}^{\infty}{\Gamma\lpar x+k\rpar \over k!\lpar m+x+k\rpar } \left(1-{\left(x+1\right)\lpar 3x+2k\rpar \over N} + O \! \left({k^{2} \over N}\right)\right)\cr ={\Gamma\lpar N+x+1\rpar N^{-x-1} \over \Gamma\left(x\right)\Gamma\left(N-m\right)} \left(\sum_{k=0}^{\infty}{\Gamma\lpar x+k\rpar \over k!\lpar m+x+k\rpar } -{\left(x+1\right) \over N}\sum_{k=0}^{\infty}{\lpar 3x+2k\rpar \Gamma \lpar x+k\rpar \over k!\lpar m+x+k\rpar } + O \left({k^{2} \over N^{2}}\right)\right).}$$

The remaining series can be rewritten as a hypergeometric series:

$$\eqalign{\sum_{k=0}^{\infty}{\Gamma\lpar x+k\rpar \over k!\lpar m+x+k\rpar} =\sum_{k=0}^{\infty}{\Gamma\lpar x+k\rpar \Gamma\lpar x+m+k\rpar \over \Gamma\lpar x+m+1+k\rpar}{1 \over k!} = {\Gamma\left(x\right)\Gamma\left(x+m\right)\Gamma\left(1-x\right)\over \Gamma\left(m+1\right)}},$$

where the latter follows from Gauss' formula (C.2). The next sum is similarly computed with

$$\eqalign{\sum_{k=0}^{\infty}{\Gamma\lpar x+k\rpar \over \lpar k-1\rpar !\lpar m+x+k\rpar} = \sum_{k=0}^{\infty} {\Gamma\lpar x+1+k\rpar \over k!\lpar m+1+x+k\rpar} = {\Gamma\left(x+1\right)\Gamma\left(x+1+m\right)\Gamma\left(-x\right)\over \Gamma\left(m+1\right)}}$$

as

$$\eqalign{{\left(x+1\right)\over N}\sum_{k=0}^{\infty}{\lpar 3x+2k\rpar \Gamma \lpar x+k\rpar \over k!\lpar m+x+k\rpar } ={1 \over N}\left({\Gamma\left(x+2\right)\Gamma\left(x+m\right)\Gamma\left(-x\right) \over \Gamma\left(m+1\right)}\left(2m-x\right)\right)}.$$

Higher-order terms in $O(N^{-j})$ with j > 1 are computed similarly and converge. Combining all contributions yields

$$S={\Gamma\lpar N+x+1\rpar N^{-x-1} \over \Gamma\left(x\right)\Gamma\left(N-m\right)}\left({\Gamma\left(x\right)\Gamma\left(x+m\right)\Gamma\left(1-x\right)\over \Gamma\left(m+1\right)}+O\left(N^{-1}\right)\right).$$

After applying the reflection formula for the Gamma function [Reference Abramowitz and Stegun1, 6.1.17], $\Gamma\left(x\right)\Gamma\left(1-x\right)={\pi \over \sin\pi x} $, we find (7).

APPENDIX B

Inverse Laplace Transform of (8)

It is a little easier to compute the distribution function instead of the pdf. The inverse Laplace transform is

$$\lim_{N\rightarrow \infty}\hbox{Pr}\left[NW_{N\semicolon m}-\ln N\le t\right]= {1 \over 2\pi i\Gamma\lpar m\rpar }\vint_{c-i\infty}^{c+i\infty}{\pi \over \sin\pi x} \Gamma\lpar x+m\rpar e^{xt}\, dx,$$

where 0 < c < 1. Since

$$\lim_{x\rightarrow-\infty}{\pi x \over \sin\pi x} {\Gamma\lpar x+m\rpar \over \Gamma\lpar m\rpar }e^{xt}=0$$

for any t, the contour can be closed over the half-plane Re(x) < 0, where we encounter simple poles of ${1 \over \sin\pi x}$ at x = −k and of Γ(x + m) at x = −m, −m − 1, … . Hence, the integrand has simple poles at x = 0, −1, − 2, … , −m + 1 and double poles at x = −m, −m − 1, … . After closing the contour over the negative Re(x)-plane and applying Cauchy's residue theorem, we obtain

$$\eqalign{\lim_{N\rightarrow\infty}\hbox{Pr}\left[NW_{N\semicolon m}-\ln N\le t\right]=\sum_{k=0}^{m-1}\lim_{x\rightarrow-k}{\pi \, \lpar x+k\rpar \over \sin\pi x}{\Gamma \lpar x+m\rpar \over \Gamma\lpar m\rpar }e \, ^{xt}\cr \quad + \sum_{k=m}^{\infty}\lim_{x\rightarrow-k}\left[{d \over dx}{\pi \, \lpar x+k\rpar ^{2} \over \sin\pi x}{\Gamma\lpar x+m\rpar \over \Gamma\lpar m\rpar }e \, ^{xt}\right].}$$

The first sum equals

$$\sum_{k=0}^{m-1}\lim_{x\rightarrow-k}{\pi \, \lpar x+k\rpar \over \sin\pi x} {\Gamma\lpar x+m\rpar \over \Gamma\lpar m\rpar }e^{xt}=\sum_{k=0}^{m-1}\lpar - \!\! 1\rpar ^{k}{\lpar m\!-\!\!1\!\!-k\rpar ! \over \lpar m\!-\!1\rpar !}e^{-kt}.$$

The second sum first requires the computation of the derivative:

$$\eqalign{{d \over dx}{\lpar x+k\rpar ^{2} \over \sin\pi x}\Gamma\lpar x+m\rpar e^{xt} = {2\lpar x+k\rpar \over \sin\pi x}\Gamma \lpar x + m\rpar e^{xt} + t{\lpar x+k\rpar ^{2} \over \sin\pi x} \Gamma\lpar x+m\rpar e^{xt}\cr\quad -{\pi \, \lpar x+k\rpar ^{2}\cos\pi x \over \sin^{2}\!\!\pi x}\Gamma\lpar x+m\rpar e^{xt} + {\lpar x+k\rpar ^{2} \over \sin\pi x}\Gamma\lpar x + m\rpar \psi\, \lpar x + m\rpar e^{xt}.}$$

Using the reflection formula of the Gamma function,

$$\Gamma\lpar x+m\rpar = {\pi \, \lpar\!\! -\!\!1\rpar ^{m} \over \sin \lpar \pi x\rpar \Gamma\lpar 1\!\!-\!\!x\!-\!m\rpar }\comma$$

the derivative becomes

$$\eqalign{{d \over dx}{\lpar x+k\rpar ^{2} \over \sin\pi x}\Gamma\lpar x+m\rpar e^{xt} = {\lpar x+k\rpar ^{2} \over \sin^{2}\!\! \pi x}{\pi \, \lpar \!\!-\!\!1\rpar ^{m} \over \Gamma\lpar 1\!\!-\!\!x\!-\!m\rpar }e^{xt}\left[{2 \over \lpar x+k\rpar }-\pi\cot\pi x+\psi \, \lpar x+m\rpar +t\right].}$$

The reflection formula for the digamma function [Reference Abramowitz and Stegun1, 6.3.7] shows that

$$\psi \, \lpar 1-x-m\rpar =\psi \, \lpar x+m\rpar +\pi\cot\pi x$$

such that the sum between brackets is

$${2 \over \lpar x+k\rpar }-\pi\cot\pi x+\psi \, \lpar x+m\rpar +t={2 \over \lpar x+k\rpar }-2\pi\cot\pi x + \psi \, \lpar 1\!\!-\!\!x\!-\!m\rpar +t.$$

Since

$$\lim_{x\rightarrow-k}{2 \over \lpar x+k\rpar }-2\pi\cot\pi x=2\lim_{x\rightarrow-k} {{\rm sin}\, \pi x-\pi \, \lpar x+k\rpar \,{\rm cos}\, \pi x \over \lpar x+k\rpar \,{\rm sin}\, \pi x}=0 \comma$$

we have

$$\eqalign{\lim_{x\rightarrow-k}\left[{d \over dx}{\pi \, \lpar x+k\rpar ^{2} \over {\rm sin}\, \pi x} {\Gamma\lpar x+m\rpar \over \Gamma\lpar m\rpar }e^{xt}\right] ={\lpar\!-\!\!1\rpar^{m} \over \left(k\!-\!m\right)!\Gamma\lpar m\rpar }e^{-kt}\left[\psi \, \lpar 1\!+k-m\rpar +t\right].}$$

The second sum is

$$\eqalign{\sum_{k=m}^{\infty}\lim_{x\rightarrow-k}\left[{d \over dx}{\pi \, \lpar x+k\rpar^{2} \over {\rm sin}\, \pi x} {\Gamma\lpar x+m\rpar \over \Gamma\lpar m\rpar }e^{xt}\right]\cr ={\lpar\! -\!\!1\rpar ^{m} \over \Gamma\lpar m\rpar }\sum_{k=m}^{\infty}{e^{-kt} \over \left(k-m\right)!}\left[\psi \, \lpar 1\!+k-m\rpar +t\right]\cr ={\lpar \!-\!\!1\rpar ^{m}e^{-mt} \over \Gamma\lpar m\rpar }\sum_{k=0}^{\infty}{e^{-kt} \over k!}\left[\psi \, \lpar 1\!+k\rpar +t\right]\cr ={\lpar\! -\!\!1\rpar ^{m}e^{-mt} \over \Gamma\lpar m\rpar }\left\{te^{e^{-t}} + \sum_{k=0}^{\infty}{e^{-kt} \over k!}\psi \, \lpar 1+k\rpar \right\}.}$$

For integer arguments of the digamma function, we know [Reference Abramowitz and Stegun1, 6.3.2] that $\psi \, \lpar 1+k\rpar =-\gamma+\sum_{j=1}^{k}{1 \over j}$, and, thus,

$$\sum_{k=0}^{\infty}{e^{-kt} \over k!}\psi \, \lpar 1+k\rpar =-\gamma \sum_{k=0}^{\infty}{e^{-kt} \over k!} + \sum_{k=1}^{\infty}{e^{-kt} \over k!}\sum_{j=1}^{k}{1 \over j}.$$

Using (C.7) gives

$$\sum_{k=0}^{\infty}{e^{-kt} \over k!}\psi\, \lpar 1+k\rpar = -\gamma e^{e^{-t}}-e^{e^{-t}}\sum_{m=1}^{\infty}{\lpar\!\! -\!e^{-t}\rpar ^{m} \over mm!}.$$

The series expansion of the exponential integral $E_{1}\left(z\right)= \vint_{z}^{\infty}{(e^{-u}/ u)}\,du$ is [Reference Abramowitz and Stegun1, 5.1.11]

$$E_{1}\left(z\right)=-\gamma-\log z-\sum_{m=1}^{\infty}{\lpar \!\!-\!\! z\rpar ^{m} \over mm!}$$

and we recognize that

$$\sum_{k=0}^{\infty}{e^{-kt} \over k!}\psi \, \lpar 1+k\rpar =e^{e^{-t}}\! \left(E_{1}\left(e^{-t}\right)-t\right).$$

Hence,

$$\sum_{k=m}^{\infty}\lim_{x\rightarrow-k}\left[{d \over dx}{\pi \, \lpar x+k\rpar ^{2} \over {\rm sin}\, \pi x} {\Gamma\lpar x+m\rpar \over \Gamma\lpar m\rpar }e^{xt}\right]= {\lpar \!-\!\!1\rpar ^{m}e^{-mt} \over \Gamma\lpar m\rpar }e^{e^{-t}}E_{1}\left(e^{-t}\right).$$

Combining all contributions yields

$$\eqalign{\lim_{N\rightarrow\infty}\hbox{Pr}\left[NW_{N\semicolon m}-\ln N\le t\right]=\sum_{k=0}^{m-1}\lpar \!\!-\!\!1\rpar ^{k}{\lpar m-\!1\!-k\rpar ! \over \lpar m-\!1\rpar !}e^{-kt} + {\lpar \!\!-\!\!1\rpar ^{m}e^{-mt} \over \lpar m-\!1\rpar !}e^{e^{-t}}\vint_{e^{-t}}^{\infty}{e^{-u} \over u}\, du.}$$

Finally, after introducing (C.6), we arrive at (9).
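As a quick numerical cross-check of this last step (with illustrative values m = 4 and t = 1, and helper names of our choosing), the residue-sum expression and the closed form (9) can be evaluated independently:

```python
import math
from scipy.integrate import quad
from scipy.special import exp1     # exponential integral E_1(x) = Gamma(0, x)

def residue_form(t, m):
    """Residue sum derived above: the sum over the simple poles plus the E_1 term."""
    s1 = sum((-1)**k * math.factorial(m - 1 - k) / math.factorial(m - 1) * math.exp(-k * t)
             for k in range(m))
    s2 = (-1)**m * math.exp(-m * t) / math.factorial(m - 1) \
         * math.exp(math.exp(-t)) * exp1(math.exp(-t))
    return s1 + s2

def closed_form(t, m):
    """Closed form (9): m e^{-mt} e^{e^{-t}} Gamma(-m, e^{-t}), with the incomplete
    Gamma function evaluated by quadrature."""
    x = math.exp(-t)
    g, _ = quad(lambda u: math.exp(-u) * u**(-(m + 1)), x, math.inf)
    return m * math.exp(-m * t) * math.exp(x) * g

print(residue_form(1.0, 4), closed_form(1.0, 4))   # the two values coincide
```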

APPENDIX C

Identities

This section lists identities that we apply. For integers a ≥ b ≥ n ≥ 1, it holds that

(C.1)
$$\sum_{j=n}^{b}{1 \over j}{\lpar a-j\rpar ! \over \lpar b-j\rpar !} = {a! \over b!} \sum_{j=n}^{b}{1 \over j}\,-\,{a! \over \lpar b\!-\!n\rpar !} \sum_{q=0}^{a-b-1}{1 \over a-b-q} {\lpar a-q-n\rpar ! \over \lpar a-q\rpar !}.$$

A famous result of Gauss on the hypergeometric series is [Reference Abramowitz and Stegun1, 15.1.20]

(C.2)
$$\sum_{k=0}^{\infty}{\Gamma\left(a+k\right)\Gamma\left(b+k\right)\over \Gamma\left(c + k\right)k!}={\Gamma\left(a\right)\Gamma\left(b\right)\Gamma\left(c-a-b\right)\over \Gamma\left(c-a\right)\Gamma\left(c-b\right)}\comma\;$$

where c ≠ 0, −1, −2, … and Re(c − a − b) > 0.

The incomplete Gamma function is defined [Reference Abramowitz and Stegun1, 6.5.3] as

(C.3)
$$\Gamma\left(a\comma\ x\right)=\vint_{x}^{\infty}t^{a-1}e^{-t}\, dt.$$

Clearly, $\Gamma(1, x) = e^{-x}$. Partial integration yields

$$\vint_{x}^{\infty}t^{a-1}e^{-t}\, dt = -{x\,^{a} \over a}e^{-x} + {1 \over a}\vint_{x}^{\infty}t^{a}e^{-t}\, dt\comma$$

from which the recursion follows:

(C.4)
$$\Gamma\left(a\comma\ x\right)=-{x^{a} \over a}e^{-x} + {1 \over a}\Gamma\left(a+1\comma\ x\right).$$

Iterating this recursion a few times reveals that

$$\Gamma\left(a\comma\ x\right)={\Gamma\left(a\right)\Gamma\left(a+n\comma\ x\right)\over \Gamma\left(a+n\right)}-\Gamma\left(a\right)e^{-x}\sum_{k=0}^{n-1}{x^{a+k} \over \Gamma\left(a+k+1\right)}.$$

If a = 1, then

(C.5)
$$\sum_{k=0}^{n-1}{x^{k} \over k!} = {e^{x}\Gamma\left(n\comma\ x\right)\over \left(n-1\right)!}.$$

If a = −n, then, after applying the reflection formula of the Gamma function, we obtain

(C.6)
$$\sum_{k=0}^{n-1}\lpar \!\!-\!\!1\rpar ^{k}k!x^{-k-1}=e^{x}\left\{\Gamma\left(0\comma\ x\right)+\left(-1\right)^{n-1}n!\Gamma\left(-n\comma\ x\right)\right\}\comma$$

where $\Gamma(0, x) = E_1(x)$ is the exponential integral.

A result derived from Ramanujan's work [Reference Berndt3, p. 46] is

$$\sum_{m=1}^{\infty}{a_{m}\, x\, ^{m} \over m} = -\sum_{m=1}^{\infty} {\lpar -\!\! 1\rpar ^{m}\, f^{\lpar m\rpar }\lpar x\rpar x\, ^{m} \over m!}\, \left(\sum_{k=1}^{m}{1 \over k}\right)\comma\;$$

where $f(x)=\sum_{k=0}^{\infty}a_k x^k$ for |x| < R. Applied to $f(x) = e^{x}$, this yields

(C.7)
$$\sum_{m=1}^{\infty}{x\, ^{m} \over mm!}=-e^{x}\sum_{m=1}^{\infty} {\lpar \!-\!\! x\rpar ^{m}\, \over m!}\, \left(\sum_{k=1}^{m}{1 \over k}\right).$$
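The identities above are easy to verify numerically; the snippet below (with arbitrarily chosen arguments and helper names of our choosing) checks (C.1), (C.5), and a truncation of (C.7):

```python
import math
from scipy.special import gammaincc   # regularized upper incomplete Gamma, Gamma(n,x)/Gamma(n)

def check_C1(a, b, n):
    """Both sides of (C.1) for integers a >= b >= n >= 1."""
    lhs = sum(math.factorial(a - j) / (j * math.factorial(b - j)) for j in range(n, b + 1))
    rhs = (math.factorial(a) / math.factorial(b)) * sum(1.0 / j for j in range(n, b + 1)) \
        - (math.factorial(a) / math.factorial(b - n)) \
          * sum(math.factorial(a - q - n) / ((a - b - q) * math.factorial(a - q))
                for q in range(a - b))
    return lhs, rhs

def check_C5(n, x):
    """Both sides of (C.5); note gammaincc(n, x) = Gamma(n, x)/(n-1)!."""
    return sum(x**k / math.factorial(k) for k in range(n)), math.exp(x) * gammaincc(n, x)

def check_C7(x, K=60):
    """Both sides of (C.7), truncated after K terms."""
    H = lambda k: sum(1.0 / j for j in range(1, k + 1))
    lhs = sum(x**k / (k * math.factorial(k)) for k in range(1, K))
    rhs = -math.exp(x) * sum((-x)**k / math.factorial(k) * H(k) for k in range(1, K))
    return lhs, rhs

print(check_C1(9, 6, 2))
print(check_C5(7, 2.5))
print(check_C7(1.5))
```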

Footnotes

1. This has been conveyed to us by Professor S. M. Ross.

2. It is readily verified that by computing $E\lsqb W_{N;m}\rsqb =-{d\varphi_{W_{N;m}}(z) \over dz}{\vert}_{z=0}, $ relation (2) is obtained.

3. The notation f(x) ~ g(x) for large x means that $\lim_{x\rightarrow\infty}{f\left(x\right)\over g\left(x\right)}=1.$

REFERENCES

1. Abramowitz, M. & Stegun, I.A. (1968). Handbook of mathematical functions. New York: Dover Publications.
2. Ashcroft, N.W. & Mermin, N.D. (1985). Solid state physics. Tokyo: Holt-Saunders International Editions.
3. Berndt, B.C. (1985). Ramanujan's notebooks, Part I. New York: Springer-Verlag.
4. van der Hofstad, R., Hooghiemstra, G., & Van Mieghem, P. (2001). First-passage percolation on the random graph. Probability in the Engineering and Informational Sciences 15: 225–237.
5. Van Mieghem, P. (2006). Performance analysis of communication systems and networks. Cambridge: Cambridge University Press.
6. Van Mieghem, P. (2006). Data communications networking. Amsterdam: Techne Press.