1. INTRODUCTION
In a recent article, Dobson, Carreras, and Newman [6] proposed an interesting model to describe the occurrence of cascading failures in a closed system of n components. The basic points of the model can be summarized as follows. Initially, the n components have random loads L1,…, Ln that are independent uniform (0,1) random variables (r.v.'s). Following some disturbance, a load d is added to each of the n loads. A component fails if its new load is larger than one. When a component fails, a new load p is added to each of the components that are still functioning; this can cause further failures in a cascade. The central problem is to determine the distribution of a r.v. N that represents the total number of components that will finally have failed. This loading-dependent model of cascading failure is motivated by the study of large blackouts of electric power transmission systems (see also [6] for a bibliography).
Section 2 is mainly concerned with this same failure model. First, we reformulate the model by means of a cascade algorithm, different from but equivalent to the original one, that allows us to deal with the occurrences of failures one by one. Moreover, it is convenient to introduce the r.v.'s U1 = 1 − L1,…, Un = 1 − Ln, which can be interpreted as the initial resistances of the n components. Obviously, these r.v.'s are still independent uniform (0,1); let U1:n,…, Un:n be the associated order statistics. We then show that the distribution of N can be expressed in terms of the joint distributions of the order statistics {Ui:n, 1 ≤ i ≤ k} for 1 ≤ k ≤ n, with respect to the linear boundary {d + (i − 1)p, 1 ≤ i ≤ k}. As a corollary, we derive, in an enlightening way, a key result obtained in [6], namely that in the case of nonsaturation, N has a quasi-binomial distribution. Furthermore, we show how this result can be extended to a model in which the additional loads after failures are independent and identically distributed (i.i.d.) r.v.'s.
In Section 3 we discuss a generalized failure model in which the additional loads are still constant but arbitrarily fixed (instead of being equal to each other). In particular, this extension covers those cases in which the initial resistances are i.i.d. r.v.'s with any given (not necessarily uniform) distribution. For this model, we can now write the distribution of N in terms of the joint distributions of the order statistics {Ui:n, 1 ≤ i ≤ k} for 1 ≤ k ≤ n, with respect to an arbitrary (instead of linear) nondecreasing boundary {si, 1 ≤ i ≤ k}. We then show that the state probabilities of N still have a remarkable structure and that they can be calculated using a simple recursion. As special situations, we examine in more detail a model built with several redundant components (so that the first failures will not generate any additional load) and a model for which the resistances are exponentially distributed and the additional loads are independent r.v.'s. Finally, some numerical illustrations and simple bounds are presented in Section 4.
The approach that we follow to tackle this cascading failure model is inspired by an argument used in epidemic modeling to determine the distribution of the final number of new infections for epidemics of the S (susceptible) → I (infected) → R (removed) schema (see, e.g., Ball and O'Neill [2], Lefèvre and Utev [8], and Picard and Lefèvre [9]). Very roughly, in an epidemic context, the n components represent the n initial susceptibles and the failure of a component corresponds to the infection of a susceptible (by one of the infectives present at that time). The additional load in case of failure means that once infected, an individual adds its infectiousness to that of the other infectives against the remaining susceptibles. An indirect contribution of our article is thus to point out the existence of a close connection, within this framework, between reliability and epidemic modeling.
Applying the proposed method requires us to evaluate the partial joint distributions of order statistics for a sample of n independent uniform (0,1) r.v.'s. Much material on this problem can be found in Shorack and Wellner [10, Chap. 9]; see also Denuit, Lefèvre, and Picard [5] and the references therein. In [5], it is established that left-tail distributions such as those derived here rely on an underlying polynomial structure of the Abel–Gontcharoff form. This property, however, is not exploited in the present work, at least not in a direct or explicit way.
2. THE BASIC MODEL
Consider a system made of n identical components. Initially, the n components are unfailed and have random loads L1,…, Ln that are independent and uniformly distributed on (0,1). The cascade proceeds according to the following algorithm (Dobson et al. [6]):
Step 0
• Load increment. A disturbance d is added to the load of each of the n components.
• Failure test. For each component j, 1 ≤ j ≤ n, if the new load d + Lj is greater than one, j fails. Let M0 be the number of failures.
Step 1
• Load increment. If there are failures, a disturbance M0 p is added to the load of each of the n − M0 remaining components.
• Failure test. For each of these components j, if the new load d + M0 p + Lj is greater than one, j fails. Let M1 be the number of failures.
Step 2
• Load increment. A disturbance M1 p is added to the load of each of the n − M0 − M1 remaining components.
• Failure test. For each of these components j, if the new load d + (M0 + M1)p + Lj is greater than one, j fails. Let M2 be the number of failures.
The next steps are similar. Failures can occur until time T, when there are no more new failures. Indeed, when MT = 0, there is no new load added to the remaining components, so that the cascade process stops (the system is stabilized). The total number of failures is N′ = M0 + M1 + ··· + MT−1, the number of unfailed components being n − N′.
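To make the dynamics concrete, here is a minimal Monte Carlo sketch (ours, not part of the original model description) of this first algorithm; the uniform (0,1) loads follow the assumptions stated above, the function name is hypothetical, and saturation is not treated separately.

```python
import random

def cascade_original(n, d, p):
    """One realization of the original cascade; returns the total number of failures N'."""
    loads = [random.random() for _ in range(n)]     # initial loads L_1, ..., L_n
    extra = d                                       # total load added so far to the survivors
    alive = list(range(n))
    while True:
        failed = [j for j in alive if loads[j] + extra > 1.0]   # failure test
        alive = [j for j in alive if loads[j] + extra <= 1.0]
        if not failed:                              # M_t = 0: the system is stabilized
            return n - len(alive)
        extra += p * len(failed)                    # load increment M_t * p
```

Repeating the call many times and tabulating the outcomes gives an empirical estimate of the law of N′.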
An equivalent cascade algorithm. We now construct another algorithm that leads to the same total number of failures in the system. As indicated in Section 1, a similar approach is used in epidemic modeling to determine the final size distribution of an epidemic of the SIR schema. Let us introduce the order statistics L1:n,…, Ln:n, which represent the initial loads of the n components arranged by increasing weights.
Step 0
• Load increment. A disturbance d is added to the load of each of the n components.
• Failure test. If the largest load d + Ln:n is greater than one, the corresponding component fails.
Step 1
• Load increment. If this occurs, a disturbance p is added to the load of each of the n − 1 remaining components.
• Failure test. If the new load d + p + Ln−1:n is greater than one, the corresponding component fails.
Step 2
• Load increment. A disturbance p is added to the load of each of the n − 2 remaining components.
• Failure test. If the new load d + 2p + Ln−2:n is greater than one, the corresponding component fails.
Here too, failures stop when all of the loads of the remaining components are smaller than one. Let N be the associated total number of failures.
It can easily be understood that as far as the total number of failures is concerned, the two algorithms are equivalent; that is, N′ and N have the same distribution. For instance, in the former formulation, any given unfailed component at step 1 receives an additional load M0 p, whereas in the latter formulation, that component receives the same load spread over several steps. This change of timescale, however, does not affect whether the component ultimately fails.
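For comparison, the equivalent algorithm admits an even shorter sketch (again ours, with a hypothetical function name): sort the loads in decreasing order and add the load p one failure at a time. Comparing the empirical distributions produced by the two functions illustrates numerically that N′ and N have the same law.

```python
import random

def cascade_ordered(n, d, p):
    """One realization of the equivalent cascade; returns the total number of failures N."""
    loads = sorted((random.random() for _ in range(n)), reverse=True)  # L_{n:n} >= ... >= L_{1:n}
    k = 0
    while k < n and d + k * p + loads[k] > 1.0:     # test d + kp + L_{n-k:n} > 1
        k += 1                                      # the component with the k-th largest load fails
    return k
```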
The new version of the algorithm allows us to express the distribution of N in a simple way, using the key tool of order statistics. Indeed, we directly see that
$$P(N \ge k) = P\bigl[L_{n:n} > 1 - d,\; L_{n-1:n} > 1 - d - p,\; \ldots,\; L_{n-k+1:n} > 1 - d - (k-1)p\bigr], \qquad 1 \le k \le n. \tag{2.1}$$
In (2.1), it was implicitly assumed that 1 − d − (n − 1)p > 0. If this is not true, define j* = min{j : 1 − d − jp ≤ 0}. Obviously, P(N ≥ k) is still given by (2.1) when k = 1,…, j*, whereas P(N ≥ k) = P(N ≥ j*) when k = j* + 1,…, n, meaning that the occurrence of j* failures necessarily yields n failures. Such a consequence of cascading is called a saturation effect in [6]. To simplify the presentation, we will continue with (2.1) without saturation.
Now let us introduce the r.v.'s U1 = 1 − L1,…,Un = 1 − Ln. Obviously, these Ui's are still independent and uniformly distributed on (0,1). By construction of the model, they can be interpreted as the initial random resistances of the n components. Denote by U1:n,…,Un:n the associated order statistics (i.e., the n initial resistances arranged by increasing strengths). Then we can rewrite (2.1) as
$$P(N \ge k) = P\bigl[U_{1:n} < d,\; U_{2:n} < d + p,\; \ldots,\; U_{k:n} < d + (k-1)p\bigr], \qquad 1 \le k \le n. \tag{2.2}$$
From (2.2), we then get
$$P(N = k) = P\bigl[U_{1:n} < d,\; \ldots,\; U_{k:n} < d + (k-1)p,\; U_{k+1:n} > d + kp\bigr], \qquad k = 0, 1, \ldots, n, \tag{2.3}$$
where we put Un+1:n = 1, say.
In order to evaluate the probability (2.3), we begin by deriving the preliminary result (2.4) below. That formula is well known (see, e.g., [5]); the simple proof given here will be adapted later to more general situations. For any fixed size k (1 ≤ k ≤ n), consider a sample U1,…,Uk of independent and uniform (0,1) r.v.'s, and denote by U1:k,…,Uk:k the associated order statistics. Let x be any real in (0,d).
Lemma 2.1:
$$P\bigl[x < U_{1:k} < d,\; U_{2:k} < d + p,\; \ldots,\; U_{k:k} < d + (k-1)p\bigr] = (d - x)\,(d + kp - x)^{k-1}. \tag{2.4}$$
Proof: We will proceed by induction. Obviously, for k = 1, P[x < U1:1 < d] = d − x. Suppose that (2.4) holds for k = 1,…, l (≤ n − 1). For k = l + 1, we start by representing the probability on the left-hand side of (2.4) as follows. Consider the l + 1 possible choices Ui, 1 ≤ i ≤ l + 1, for the first ordered value U1:l+1, and denote by U1:l(i),…,Ul:l(i) the order statistics of the original sample deprived of Ui (which reduces to a sample of size l). The probability in (2.4) can be expressed as
$$\sum_{i=1}^{l+1} P\bigl[x < U_i < d,\; U_i \le U_{1:l}(i),\; U_{1:l}(i) < d + p,\; \ldots,\; U_{l:l}(i) < d + lp\bigr]. \tag{2.5}$$
By conditioning on Ui, we can then rewrite (2.5) as
$$(l+1)\int_x^d P\bigl[u \le U_{1:l} < d + p,\; U_{2:l} < d + 2p,\; \ldots,\; U_{l:l} < d + lp\bigr]\, du. \tag{2.6}$$
However, by induction, (2.6) is equivalent to
$$(l+1)\int_x^d (d + p - u)\,\bigl(d + (l+1)p - u\bigr)^{l-1}\, du, \tag{2.7}$$
which, after integration, yields the right-hand side of (2.4) for k = l + 1. █
It now becomes easy to show from (2.3) that N has a quasi-binomial distribution (in the sense of Consul [3]). This result was derived in [6]; as indicated in [6], it was also obtained in other contexts and by different methods. For related problems, see, for example, Stadje [11] and Zacks [12].
Proposition 2.2:
$$P(N = k) = \binom{n}{k}\, d\,(d + kp)^{k-1}\,(1 - d - kp)^{n-k}, \qquad k = 0, 1, \ldots, n. \tag{2.8}$$
Proof: Since the n initial components are i.i.d., N = k failures can arise, in an analogous fashion, within any of the sets of k components, hence the factor $\binom{n}{k}$ in (2.8). Fix any given set of k components as the set having those k failures. Looking at (2.3), we first see that the other n − k components will not fail with probability (1 − d − kp)^{n−k}, as indicated in (2.8). Furthermore, when k ≥ 1, the failure of all of the k components of the set now means that [U1:k < d, U2:k < d + p,…, Uk:k < d + (k − 1)p] with a sample of size k (instead of n), and this will occur with a probability given by (2.4) evaluated at x = 0, hence the remaining factor in (2.8). █
The noncascade situation arises when p = 0. As expected, (2.8) then reduces to the binomial distribution with sample size n and success (i.e., failure) probability d.
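As a quick numerical check of Proposition 2.2, the quasi-binomial probabilities (2.8) can be tabulated directly; this is a small sketch of ours (hypothetical function name), which assumes nonsaturation, i.e., d + (n − 1)p < 1.

```python
from math import comb

def quasi_binomial_pmf(n, d, p):
    """Probabilities P(N = k), k = 0, ..., n, from (2.8)."""
    pmf = [(1 - d) ** n]                            # k = 0
    for k in range(1, n + 1):
        pmf.append(comb(n, k) * d * (d + k * p) ** (k - 1)
                   * (1 - d - k * p) ** (n - k))
    return pmf
```

The probabilities should sum to 1, and setting p = 0 recovers the binomial distribution mentioned above; both are convenient sanity checks against the Monte Carlo sketches given earlier.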
A randomized extension. The functioning of the components can be influenced by various factors, internal or external. In such situations, the additional loads imposed on the system are not known with certainty but can be viewed as random elements of the model. So, let us suppose that the initial disturbance corresponds to a r.v. D and that, independently of D, the successive loads after failures correspond to i.i.d. r.v.'s W1, W2,…, Wn−1. Denote the cumulated additional loads by P1 = W1, P2 = W1 + W2,…, Pn−1 = W1 + ··· + Wn−1. To avoid saturation, we assume that D + Pn−1 < 1 a.s. (this hypothesis is satisfied if, for instance, D and the Wi's are bounded by 1/n). Now (2.8) for the law of N can be extended as follows (see Lefèvre and Picard [7] for a similar result derived in an epidemic context).
Proposition 2.3:
$$P(N = k) = \binom{n}{k}\, E\bigl[D\,(D + P_k)^{k-1}\,(1 - D - P_k)^{n-k}\bigr], \qquad k = 0, 1, \ldots, n. \tag{2.9}$$
Proof: For any given k, let us condition the event (N = k) with respect to the two r.v.'s D and Pk. By adapting the argument followed for (2.8), we then get
$$P(N = k) = \binom{n}{k}\, E\bigl[\theta_k(0)\,(1 - D - P_k)^{n-k}\bigr], \tag{2.10}$$
where θ0(0) ≡ 1 and the other θk(0)'s represent the following conditional probabilities evaluated at x = 0:
$$\theta_k(x) = P\bigl[x < U_{1:k} < D,\; U_{2:k} < D + P_1,\; \ldots,\; U_{k:k} < D + P_{k-1} \,\big|\, D, P_k\bigr], \qquad k = 1, \ldots, n, \tag{2.11}$$
x being any real in (0,D). Now we will prove that
$$\theta_k(x) = (D - x)\,(D + P_k - x)^{k-1}, \qquad k = 1, \ldots, n, \tag{2.12}$$
which leads to the announced formula (2.9).
As for Lemma 2.1, we proceed by induction and consider (2.11) for k = l + 1. First, (2.6) is replaced by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256frm013.gif?pub-status=live)
Now, for the term P[···] in (2.13), let us condition on the r.v. P1, and put
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm002.gif?pub-status=live)
. We obtain that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256frm014.gif?pub-status=live)
However, by the induction hypothesis (for k = l) and using
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm003.gif?pub-status=live)
, (2.14) then becomes
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170408104253-14572-mediumThumb-S0269964806060256frm015.jpg?pub-status=live)
To close, we observe that E(P1 | Pl+1) = Pl+1 /(l + 1), so that (2.15) is analogous to (2.7) with Pl+1 /(l + 1) substituted for p, hence (2.12) for k = l + 1. █
3. A GENERALIZED MODEL
In this section we discuss another generalization of the model in [6] by assuming this time that the additional loads in case of failures are still constant but arbitrarily fixed. Such an extension allows us to widen the field of applications of the model.
For instance, a large system is often built, for security reasons, with several redundant components. The first failures of components will then not affect the functioning conditions of the other components. In the model of Section 2, this means that a load p is added to each unfailed component only when there are at least c + 1 failures (instead of one), with 0 ≤ c ≤ n − 1. The probability (2.2) is then modified as follows: If k = 1,…, c,
$$P(N \ge k) = P\bigl[U_{1:n} < d,\; U_{2:n} < d,\; \ldots,\; U_{k:n} < d\bigr], \tag{3.1}$$
and if k = c + 1,…, n,
$$P(N \ge k) = P\bigl[U_{1:n} < d,\; \ldots,\; U_{c+1:n} < d,\; U_{c+2:n} < d + p,\; \ldots,\; U_{k:n} < d + (k - c - 1)p\bigr]. \tag{3.2}$$
More generally, let us assume that the additional loads are successively of given weights w1,…, wn−1. Denote the cumulated weights by p1 = w1,…, pn−1 = w1 + ··· + wn−1 and, to avoid saturation, suppose that d + pn−1 < 1. Then (2.2) becomes
$$P(N \ge k) = P\bigl[U_{1:n} < d,\; U_{2:n} < d + p_1,\; \ldots,\; U_{k:n} < d + p_{k-1}\bigr], \qquad 1 \le k \le n. \tag{3.3}$$
Such an extension is also relevant when, in (2.2), the initial resistances of the n components are independent but no longer uniformly distributed. More precisely, let us suppose that these resistances are represented by i.i.d. random variables X1,…, Xn with an arbitrary continuous distribution function F. Let X1:n,…, Xn:n be the associated order statistics. The initial disturbance is still denoted by d and the cumulated additional loads by p1,…, pn−1; assume that d and d + pn−1 are in the support of F. Then, instead of (2.2), we have
$$P(N \ge k) = P\bigl[X_{1:n} < d,\; X_{2:n} < d + p_1,\; \ldots,\; X_{k:n} < d + p_{k-1}\bigr], \qquad 1 \le k \le n. \tag{3.4}$$
However, since the vector [X1:n,…, Xn:n] has the same distribution as [F−1(U1:n),…, F−1(Un:n)], (3.4) is equivalent to
$$P(N \ge k) = P\bigl[U_{1:n} < s_1,\; U_{2:n} < s_2,\; \ldots,\; U_{k:n} < s_k\bigr], \qquad 1 \le k \le n, \tag{3.5}$$
where we put s1 = F(d),s2 = F(d + p1),…, sn = F(d + pn−1). Of course, (3.5) covers all of the previous cases.
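In practice, the boundary s1,…, sn in (3.5) is obtained directly from the resistance distribution and the cumulated loads; here is a small helper of ours (the function name and signature are hypothetical).

```python
def boundary_from_F(F, d, w):
    """Boundary s_1, ..., s_n of (3.5): s_i = F(d + p_{i-1}), with p_0 = 0 and p_i = w_1 + ... + w_i."""
    s, cum = [F(d)], d
    for wk in w:                      # w = [w_1, ..., w_{n-1}]
        cum += wk
        s.append(F(cum))
    return s
```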
Now let us determine from (3.5) the exact distribution of N. To begin, we will extend Lemma 2.1. Specifically, we consider again any fixed sample of size k (1 ≤ k ≤ n) and define
$$\xi_k(x) = P\bigl[x < U_{1:k} < s_1,\; U_{2:k} < s_2,\; \ldots,\; U_{k:k} < s_k\bigr], \qquad k = 1, \ldots, n, \tag{3.6}$$
x being any real in (0,s1), with ξ0 ≡ 1. These ξk's are no longer known explicitly but they can be computed recursively from (3.7).
Lemma 3.1: For any real y,
$$\sum_{j=0}^{k} \binom{k}{j}\,(y - s_{j+1})^{k-j}\,\xi_j(x) = (y - x)^k, \qquad k = 1, \ldots, n. \tag{3.7}$$
Proof: First, we observe that as for (2.6), one can write, for 0 ≤ l ≤ n − 1,
$$\xi_{l+1}(x) = (l + 1)\int_x^{s_1} P\bigl[u \le U_{1:l} < s_2,\; U_{2:l} < s_3,\; \ldots,\; U_{l:l} < s_{l+1}\bigr]\, du. \tag{3.8}$$
Now let us proceed by induction and suppose that (3.7) is true for k = 1,…, l (≤ n − 1). For k = l + 1, applying induction to the probability on the right-hand side of (3.8) yields
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170408104253-65195-mediumThumb-S0269964806060256frm024.jpg?pub-status=live)
However, computing the first integral in (3.9) and again using (3.8) for the second integral, we find that (3.9) reduces to
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm004.gif?pub-status=live)
hence (3.7) for k = l + 1. █
The recursion (3.7) can sometimes be slightly simplified by choosing an appropriate value for y. For instance, y = 1 guarantees that the coefficients in the recursion are all nonnegative. Several applications are given in what follows.
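Based on the recursion (3.7) as reconstructed above (taken at x = 0 and y = 1, so that all coefficients 1 − s_{j+1} are nonnegative), the probabilities ξk(0) can be computed in a few lines. This sketch is ours and hinges on that reconstruction; the function name is hypothetical.

```python
from math import comb

def xi_at_zero(s):
    """xi_0(0), ..., xi_n(0) from (3.7) with x = 0 and y = 1; s = [s_1, ..., s_n]."""
    xi = [1.0]                                       # xi_0 = 1
    for k in range(1, len(s) + 1):
        acc = 1.0                                    # (y - x)^k with x = 0 and y = 1
        for j in range(k):
            acc -= comb(k, j) * (1.0 - s[j]) ** (k - j) * xi[j]
        xi.append(acc)                               # solve (3.7) for xi_k(0)
    return xi
```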
The distribution (2.8) for N can now be generalized by (3.10) in terms of the probabilities ξk(0). The reader is referred to [5] for an alternative proof based on Abel–Gontcharoff polynomials.
Proposition 3.2:
$$P(N = k) = \binom{n}{k}\,\xi_k(0)\,(1 - s_{k+1})^{n-k}, \qquad k = 0, 1, \ldots, n. \tag{3.10}$$
Proof: As for (2.3), we have
$$P(N = k) = P\bigl[U_{1:n} < s_1,\; \ldots,\; U_{k:n} < s_k,\; U_{k+1:n} > s_{k+1}\bigr], \qquad k = 0, 1, \ldots, n.$$
Thus, it suffices to follow the argument given in the proof of Proposition 2.2 using (3.6) with x = 0. █
Formula (3.10) and the recursion (3.7) can be combined to give the single recursion (3.11).
Corollary 3.3:
$$\sum_{j=0}^{k} \binom{n-j}{k-j}\,\frac{P(N = j)}{(1 - s_{j+1})^{n-k}} = \binom{n}{k}, \qquad k = 0, 1, \ldots, n. \tag{3.11}$$
Proof: Equation (3.7) with x = 0 and y = 1 yields
$$\sum_{j=0}^{k} \binom{k}{j}\,(1 - s_{j+1})^{k-j}\,\xi_j(0) = 1, \qquad k = 0, 1, \ldots, n. \tag{3.12}$$
After multiplication by $\binom{n}{k}$, (3.12) can be rewritten as
$$\sum_{j=0}^{k} \binom{n}{j}\binom{n-j}{k-j}\,\xi_j(0)\,(1 - s_{j+1})^{k-j} = \binom{n}{k}. \tag{3.13}$$
However, applying (3.10) inside the sum of (3.13) allows us to exhibit P(N = j), which then leads to (3.11). █
Notice that the relations (3.11) constitute a triangular system of n + 1 linear equations in the n + 1 unknown probabilities P(N = j). For instance, (3.11) for k = 0 gives P(N = 0) = (1 − s1)^n, and for k = n, it gives P(N = 0) + ··· + P(N = n) = 1.
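Putting (3.10) and the ξ-recursion together gives the whole law of N; again a sketch of ours, resting on the reconstructed formulas, with the convention that the factor for k = n has exponent zero.

```python
from math import comb

def law_of_N(s):
    """P(N = 0), ..., P(N = n) from (3.10); s = [s_1, ..., s_n]."""
    n, xi = len(s), xi_at_zero(s)                   # xi_at_zero from the previous sketch
    return [comb(n, k) * xi[k] * ((1.0 - s[k]) ** (n - k) if k < n else 1.0)
            for k in range(n + 1)]
```

Two immediate checks mirror the remark above: law_of_N(s)[0] equals (1 − s1)^n and the probabilities sum to 1; moreover, taking si = d + (i − 1)p reproduces the quasi-binomial probabilities of Section 2.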
Case with redundant components. Let us go back to the previous particular model specified by (3.1), (3.2), with a threshold of c + 1 failures. Here thus,
$$s_1 = \cdots = s_{c+1} = d, \qquad s_i = d + (i - c - 1)p, \quad i = c + 2, \ldots, n. \tag{3.14}$$
Corollary 3.4: Under (3.14), the law of N is given by (3.10), where, if k = 1,…, c,
$$\xi_k(0) = d^k, \tag{3.15}$$
and if k = c + 1,…, n,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256frm031.gif?pub-status=live)
Proof: Equation (3.15) is straightforward and we will focus on (3.16). For k = c + 1,…, n, choosing x = 0 and y = d in (3.7) then yields
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256frm032.gif?pub-status=live)
Proceeding by induction, let us suppose that (3.16) is true for ξj(0) with j = c + 1,…, k − 1. Then, by substitution in (3.17), and after rearrangement and permutation of the two sums, we obtain
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary-alt:20170408104253-39277-mediumThumb-S0269964806060256frm033.jpg?pub-status=live)
However, for any real x,
$$\sum_{i=0}^{m} (-1)^{m-i} \binom{m}{i}\,(x + i)^{m-1} = 0, \tag{3.19}$$
since the left-hand side of (3.19) corresponds to Δ^m(x^{m−1}), where Δ denotes the usual forward difference operator. Thus, using (3.19), we see that the second sum in (3.18) reduces to −(k − c)^{k−l−1}. Finally, (3.18) becomes
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm007.gif?pub-status=live)
hence (3.16), as indicated. █
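For the redundant-component case, it is often simpler in practice to bypass the closed form and feed the boundary (3.14), as reconstructed above, directly into the general routine; a short sketch of ours with purely illustrative parameter values:

```python
def redundant_boundary(n, d, p, c):
    """Boundary (3.14): s_i = d for i <= c + 1, then s_i = d + (i - c - 1) p."""
    return [d if i <= c + 1 else d + (i - c - 1) * p for i in range(1, n + 1)]

law = law_of_N(redundant_boundary(50, 0.004, 0.004, c=5))   # illustrative values, no saturation
```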
Case with exponential resistances. Let us assume that the resistances have an exponential law, with parameter μ (> 0) say. Thus, s1 = 1 − exp(−μd) and, for k = 1,…, n − 1, s1+k = 1 − exp[−μ(d + pk)], with pk = w1 + ··· + wk. In other words,
$$s_1 = 1 - q_0, \qquad s_{1+k} = 1 - q_0 q_1 \cdots q_k, \quad k = 1, \ldots, n - 1, \tag{3.20}$$
where q0 ≡ exp(−μd) and qk ≡ exp(−μwk), k = 1,…, n − 1. This particular situation is of the type met in epidemic theory with the so-called Reed–Frost model. The homogeneous case where q1 = ··· = qn−1 ≡ q corresponds to that model in its standard version (see, e.g., Andersson and Britton [1] and Daley and Gani [4]); the nonhomogeneous case with different qk's gives a nonstationary version of the model (see [7]). Under (3.20), (3.11) becomes
$$\sum_{j=0}^{k} \binom{n-j}{k-j}\,\frac{P(N = j)}{(q_0 q_1 \cdots q_j)^{n-k}} = \binom{n}{k}, \qquad k = 0, 1, \ldots, n. \tag{3.21}$$
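In this exponential case, the boundary (3.20) can be assembled from the qk's and then handled by the same routine as before; a sketch of ours, with μ, d, and the wk's as inputs and a hypothetical function name.

```python
import math

def exponential_boundary(mu, d, w):
    """Boundary (3.20): s_1 = 1 - q_0 and s_{1+k} = 1 - q_0 q_1 ... q_k."""
    qs = [math.exp(-mu * d)] + [math.exp(-mu * wk) for wk in w]   # q_0, q_1, ..., q_{n-1}
    s, prod = [], 1.0
    for q in qs:
        prod *= q
        s.append(1.0 - prod)
    return s
```

The law of N then follows, for instance, from law_of_N(exponential_boundary(mu, d, w)).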
An advantage of this case is that the additional loads imposed on the system can be considered, without difficulty, as random elements. Specifically, let us suppose that the initial disturbance and the successive loads in case of failure correspond to independent r.v.'s D and W1,…, Wn−1, all possibly with different laws. In an epidemic context, W1,…, Wn−1 are i.i.d. r.v.'s that represent the lengths of infectious periods; the model then corresponds to the randomized version of the Reed–Frost model (e.g., [1,4]). Now, for this randomized case, the n + 1 relations (3.21) can still be extended into a linear system of similar structure. More precisely, for j = 0,…, n, define the parameters
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256frm037.gif?pub-status=live)
Corollary 3.5: Under randomized (3.20),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256frm038.gif?pub-status=live)
Proof: First, we condition on fixed values d for D and w1,…, wn−1 for W1,…, Wn−1. The conditional state probabilities of N thus satisfy the relations (3.21). We observe that P(N = j), 0 ≤ j ≤ n, depends only on the parameters q0,…, qj. Let us multiply both sides of (3.21) by (q0···qk)^{n−k} and, on the right-hand side, replace (q0···qk)^{n−k}/(q0···qj)^{n−k} by (q1+j···qk)^{n−k}. Then, to remove the conditioning, we take expectations, which yields, using the parameters (3.22),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm008.gif?pub-status=live)
hence the system (3.23). █
4. EXAMPLES AND BOUNDS
For illustration, we consider a special case of nonregular type for the model examined in Section 3. Specifically, (3.5) is assumed to hold now with
$$s_k = [d + (k - 1)p]^{\theta}, \qquad k = 1, \ldots, n, \tag{4.1}$$
where θ > 0. This arises when the resistances are i.i.d. r.v.'s with a power-function distribution [i.e., F(x) = x^θ for x ∈ (0,1), with θ > 0] and when the initial disturbance is equal to d and each additional load is equal to p, with d + p(n − 1) < 1. If θ = 1, the model corresponds to the case in Section 2.
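The corresponding boundary (4.1) is a one-line specialization of the helper boundary_from_F sketched in Section 3 (still a sketch of ours, with hypothetical names).

```python
def power_boundary(n, d, p, theta):
    """Boundary (4.1): s_k = (d + (k - 1) p) ** theta, k = 1, ..., n."""
    return boundary_from_F(lambda x: x ** theta, d, [p] * (n - 1))
```

Its output can then be passed to law_of_N to tabulate the distribution of N.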
We have evaluated the probability law of N, using (3.10) together with the recursion (3.12), when the system has n = 50 components and for different sets of parameters (with nonsaturation). Figure 1 gives this distribution on a logarithmic scale if
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm009.gif?pub-status=live)
and θ = 0.5,0.7,1, or 1.1. As expected, the power parameter θ plays a crucial role; in particular, the mode is at k = 50 for θ = 0.5 or 0.7 and at k = 0 if θ = 1 or 1.1. In Figure 2, the parameters are θ = 0.5 and d = 0.1p, 0.75p, or p, with
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm010.gif?pub-status=live)
. We observe that varying the initial disturbance d moderately affects the distribution; of course, the influence exerted by d depends also on the value of θ.
Figure 1. Log plot of the distribution of the number N of failed components, under (4.1), for four different θ's.
Figure 2. Log plot of the distribution of the number N of failed components, under (4.1), for three different d's.
It is worth pointing out that for large n, the recursion is not always numerically stable and can even generate negative values for certain probabilities of N.
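One standard remedy (our suggestion; the paper does not pursue it) is to run the same recursion in exact rational arithmetic, which eliminates the cancellation errors at the price of speed.

```python
from fractions import Fraction
from math import comb

def xi_at_zero_exact(s):
    """Same recursion as xi_at_zero, but in exact rational arithmetic."""
    s = [Fraction(v).limit_denominator(10 ** 6) for v in s]   # rational approximations of s_1, ..., s_n
    xi = [Fraction(1)]
    for k in range(1, len(s) + 1):
        acc = Fraction(1)
        for j in range(k):
            acc -= comb(k, j) * (1 - s[j]) ** (k - j) * xi[j]
        xi.append(acc)
    return xi
```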
To close, we indicate that for arbitrary s1,…, sn, the distribution of N can be approximated by finding upper and lower bounds for the probabilities ξk(0) in (3.10). Such simple bounds are provided in Property 4.1.
Property 4.1: In general,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256frm040.gif?pub-status=live)
If the sequence s1,…, sk is concave (resp. convex), then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256frm041.gif?pub-status=live)
and for any given k′ ∈ {1,…, k − 1},
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256frm042.gif?pub-status=live)
provided, in the convex case, that sk′+1 − k′(sk′+1 − sk′) > 0.
Proof: Let us first derive (4.2). By definition (3.6),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm013.gif?pub-status=live)
whereas choosing x = 0 and y = sk in (3.7) yields
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm014.gif?pub-status=live)
Now if the sequence s1,…, sk is concave, then a lower linear approximation to this sequence is given by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm015.gif?pub-status=live)
and an upper linear approximation is given by
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256ffm016.gif?pub-status=live)
for any k′ ∈ {1,…, k − 1}. Moreover, it is obvious that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20170224030644741-0735:S0269964806060256:S0269964806060256frm043.gif?pub-status=live)
It then remains to evaluate the two bounds in (4.5) by applying (2.4), which leads to (4.3) and (4.4). The convex case can be treated similarly, under the constraint for (4.4) that s1(u) defined earlier is positive. █
Acknowledgments
This research was supported in part by the Banque Nationale de Belgique. I am grateful to I. Coulibaly (ISRO) for his help with the numerical part. I also wish to thank the Editor for helpful remarks.