Published online by Cambridge University Press: 22 June 2005
In the Note published last year [1], bounds and monotonicity of shot-noise and max-shot-noise processes driven by spatial stationary Cox point processes are discussed in terms of some stochastic order. Although all the statements concerning the shot-noise processes remain valid, those concerning the max-shot-noise processes have to be corrected.
First, equations (7) and (9) in Theorem 1 (p. 566) should be replaced by

Umix ≤st U, (7)

U ≤st Uhom, (9)

where ≤st denotes the usual stochastic order; that is, (7) means E f (Umix) ≤ E f (U) for all increasing f such that the expectations exist. The above (7) and (9) are now verified by checking E f (U) ≤ E f (Umix) and E f (Uhom) ≤ E f (U), respectively, for any decreasing f.
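The defining property of ≤st used above can be illustrated with a toy example. The following Python sketch (with an arbitrary coupled pair of uniform random variables and arbitrary test functions, not the shot-noise variables of the Note) checks both the increasing-f and decreasing-f characterizations:

```python
# Illustrative sanity check of the usual stochastic order <=_st
# (arbitrary toy distributions, not the paper's shot-noise variables):
# X <=_st Y iff E f(X) <= E f(Y) for every increasing f, equivalently
# E f(X) >= E f(Y) for every decreasing f.
import random

random.seed(0)
n = 100_000
xs, ys = [], []
for _ in range(n):
    u = random.random()
    xs.append(u)        # X ~ Uniform(0, 1)
    ys.append(u + 0.5)  # Y = X + 0.5 >= X pointwise, so X <=_st Y

increasing = lambda t: min(t, 1.0)        # an increasing test function
decreasing = lambda t: max(1.0 - t, 0.0)  # a decreasing test function

def mean(f, zs):
    return sum(map(f, zs)) / len(zs)

# Both orderings hold pointwise under this coupling, hence in the mean.
assert mean(increasing, xs) <= mean(increasing, ys)
assert mean(decreasing, xs) >= mean(decreasing, ys)
```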
To prove them, the second assertion in Lemma 1(i) (p. 563) should be replaced by the following: If f is supermodular [resp. decreasing], then ψ, defined by

ψ(n1,…,nk) = E f (maxl=1,…,n1{Sl(1)}, …, maxl=1,…,nk{Sl(k)}),

is supermodular [resp. decreasing and componentwise convex] (1); that is, if f is decreasing and supermodular, then ψ is decreasing and dcx (ddcx). Furthermore, the statement at the end of the proof of Theorem 1 (p. 567) should also be replaced by the following: g(x1,…,xk) = f (max{x1,…,xk}) is decreasing and supermodular for any decreasing f (2). The proofs of (1) and (2) are provided at the end of this Correction.
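Claim (2) can be spot-checked numerically. The sketch below (in Python, with an arbitrary decreasing f, not one taken from the Note) tests the lattice form of supermodularity, g(x ∨ y) + g(x ∧ y) ≥ g(x) + g(y), on random points:

```python
# Numerical spot-check of claim (2) (with an arbitrary decreasing f):
# g(x1,...,xk) = f(max{x1,...,xk}) satisfies the supermodularity
# inequality g(x v y) + g(x ^ y) >= g(x) + g(y), where v and ^ are the
# componentwise maximum and minimum.
import random

random.seed(1)
f = lambda t: -t        # an arbitrary decreasing function
g = lambda x: f(max(x))

for _ in range(10_000):
    x = [random.uniform(0.0, 10.0) for _ in range(3)]
    y = [random.uniform(0.0, 10.0) for _ in range(3)]
    hi = [max(a, b) for a, b in zip(x, y)]  # x v y
    lo = [min(a, b) for a, b in zip(x, y)]  # x ^ y
    assert g(hi) + g(lo) >= g(x) + g(y) - 1e-12
```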
According to the above modification, we have along the same lines as in the proof of the article that
where g(k) is now ddcx for any decreasing f (note that Lemma 1(ii) still holds even when idcx is replaced by ddcx). Hence, we can show (7) and (9), where the remaining steps in the proof are the same as in the article (note that Lemma 3 can be generalized such that “≤idcx” is replaced by “≤ism” or “≤dsm”, and Lemma 4 can also be generalized such that “≤icx” and “≤idcx” are replaced by “≤cx” and “≤dcx”, respectively).
On the other hand, in Corollary 1 (p. 569), it would be difficult to fix the statements concerning the Palm version of the max-shot-noise processes with this approach since g(x0,x1,…,xk) = x0 f (x1,…,xk) in the proof of Lemma 5 (p. 569) is no longer supermodular when f is decreasing.
Finally, the statement concerning the max-shot-noise processes in Theorem 2 (p. 570) should be replaced by the following: If
is ≤ddcx-regular, then Uc is ≤st-increasing in c (> 0).
Proof of (1): Let f be supermodular. Then, for nonnegative integers ci and cj,

ψ(…, ni + ci, …, nj + cj, …) − ψ(…, ni + ci, …, nj, …) − ψ(…, ni, …, nj + cj, …) + ψ(…, ni, …, nj, …)
= E[f (…, Xi + Ai, …, Xj + Aj, …) − f (…, Xi + Ai, …, Xj, …) − f (…, Xi, …, Xj + Aj, …) + f (…, Xi, …, Xj, …)] ≥ 0,

where Xi = maxl=1,…,ni{Sl(i)}, Ai = (maxl=ni+1,…,ni+ci{Sl(i)} − Xi)+ ≥ 0, and Xj and Aj are defined similarly.
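The inequality applied under the expectation here is the pointwise supermodularity of f. A quick numerical check (with an arbitrary supermodular f, here f(a, b) = a·b, not one from the Note):

```python
# Spot-check (arbitrary supermodular f, here f(a, b) = a * b) of the
# pointwise inequality applied inside the expectation above:
# f(a + s, b + t) + f(a, b) >= f(a + s, b) + f(a, b + t) for s, t >= 0.
import random

random.seed(2)
f = lambda a, b: a * b  # supermodular on R^2

for _ in range(10_000):
    a, b = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    s, t = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    assert f(a + s, b + t) + f(a, b) >= f(a + s, b) + f(a, b + t) - 1e-12
```

For this particular f, the left side minus the right side equals s·t, which is visibly nonnegative.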
Next, let f be decreasing. Then, since {Sl(i)}l=1,2,… is a sequence of i.i.d. random variables, for nonnegative integers ci and di,

ψ(…, ni + ci + di, …) − ψ(…, ni + ci, …) − ψ(…, ni + di, …) + ψ(…, ni, …)
= E[f (…, Xi + max{Ai, Bi}, …) − f (…, Xi + Ai, …) − f (…, Xi + Bi, …) + f (…, Xi, …)], (*)

where Xi = maxl=1,…,ni{Sl(i)}, Ai = (maxl=ni+1,…,ni+ci{Sl(i)} − Xi)+, and Bi = (maxl=ni+ci+1,…,ni+ci+di{Sl(i)} − Xi)+. If Ai ≥ Bi, then (*) reduces to −E f (…, Xi + Bi, …) + E f (…, Xi, …) ≥ 0, and if Ai < Bi, then (*) reduces to −E f (…, Xi + Ai, …) + E f (…, Xi, …) ≥ 0 since f is decreasing.
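The conclusion of (1) for decreasing f — that ψ is decreasing and componentwise convex in each ni — can be verified exactly in a toy one-coordinate case (arbitrary Bernoulli marks and an arbitrary decreasing f, chosen here only for illustration), where ψ(n) has a closed form:

```python
# Exact toy check of the corrected Lemma 1(i) in one coordinate:
# with i.i.d. Bernoulli marks S_l and a decreasing f on {0, 1},
# psi(n) = E f(max_{l=1,...,n} S_l) is decreasing and convex in n.
f = {0: 1.0, 1: 0.2}  # an arbitrary decreasing f
q = 0.7               # P(S_l = 0), so P(max of n marks = 0) = q**n

def psi(n):
    # E f(max of n i.i.d. Bernoulli marks), computed in closed form
    return f[0] * q**n + f[1] * (1.0 - q**n)

for n in range(1, 20):
    assert psi(n + 1) <= psi(n) + 1e-12                            # decreasing
    assert psi(n + 2) - psi(n + 1) >= psi(n + 1) - psi(n) - 1e-12  # convex
```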
Proof of (2): Let f be decreasing. Then, clearly, g is decreasing. Now, for yi, yj ≥ 0,

g(…, xi + yi, …, xj + yj, …) − g(…, xi + yi, …, xj, …) − g(…, xi, …, xj + yj, …) + g(…, xi, …, xj, …)
= f (X + max{Yi, Yj}) − f (X + Yi) − f (X + Yj) + f (X), (#)

where X = max{…, xi, …, xj, …}, Yi = (xi + yi − X)+, and Yj = (xj + yj − X)+. If Yi ≥ Yj, then (#) reduces to −f (X + Yj) + f (X) ≥ 0, and if Yi < Yj, then (#) reduces to −f (X + Yi) + f (X) ≥ 0 since f is decreasing.
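The step behind (#) is a deterministic identity for maxima: increasing xi by yi and xj by yj changes the overall maximum X to X + max{Yi, Yj}. A numerical check on random inputs (arbitrary dimensions and ranges, for illustration only):

```python
# Numerical check of the max identity behind (#): bumping x_i to
# x_i + y_i and x_j to x_j + y_j changes X = max{...} to
# X + max{Y_i, Y_j}, where Y_i = (x_i + y_i - X)+ and Y_j likewise.
import random

random.seed(3)
for _ in range(10_000):
    x = [random.uniform(0.0, 10.0) for _ in range(4)]
    yi, yj = random.uniform(0.0, 5.0), random.uniform(0.0, 5.0)
    i, j = 0, 2
    X = max(x)
    Yi = max(x[i] + yi - X, 0.0)
    Yj = max(x[j] + yj - X, 0.0)
    bumped = x[:]
    bumped[i] += yi
    bumped[j] += yj
    assert abs(max(bumped) - (X + max(Yi, Yj))) < 1e-9
```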
The author thanks Rafał Kulik for pointing out the error and for discussing the issue.