
EXTREMES: A CONTINUOUS-TIME PERSPECTIVE

Published online by Cambridge University Press:  22 June 2005

Iddo Eliazar
Affiliation:
Department of Mathematics, Bar-Ilan University, Ramat-Gan 52900, Israel, E-mail: eliazar@math.biu.ac.il; eliazar@post.tau.ac.il

Abstract

We consider a generic continuous-time system in which events of random magnitudes occur stochastically and study the system's extreme-value statistics. An event is described by a pair (t,x) of coordinates, where t is the time at which the event took place and x is the magnitude of the event. The stochastic occurrence of the events is assumed to be governed by a Poisson point process.

We study various issues regarding the system's extreme-value statistics, including (i) the distribution of the largest-magnitude event, the distribution of the nth “runner-up” event, and the multidimensional distribution of the “top n” extreme events, (ii) the internal hierarchy of the extreme-value events—how large are their magnitudes when measured relative to each other, and (iii) the occurrence of record times and record values. Furthermore, we unveil a hidden Poissonian structure underlying the system's sequence of order statistics (the largest-magnitude event, the second largest event, etc.). This structure provides us with a markedly simple simulation algorithm for the entire sequence of order statistics.

Type
Research Article
Copyright
© 2005 Cambridge University Press

1. INTRODUCTION

The Gaussian (Normal) curve is well known as the universal probability law governing the statistical distribution of large samples. However, what truly impacts us is not the overwhelming majority of "normal events," but the few rare exceptions of "abnormal" extreme events. Indeed, it is the unique Michael Jordan we remember, rather than a multitude of excellent professional NBA basketball players scoring an average number of points per game; it is Hurricane Andrew that insurance companies (painfully) recall, rather than the thousands of strong storms that took place in the United States in the 1990s; and it is the 1912 Titanic disaster we remember, rather than numerous other unfortunate sinking events of ships in the Atlantic. For many more examples of the importance we attribute to extreme events, we refer the readers to the Guinness Book of Records.

The interest in rare and extreme events has been shared—beyond the devoted readers of the Guinness Book of Records—by the scientific community. The pioneering studies took place in the 1920s and 1930s, with the works of von Bortkiewicz [20], Fréchet [8], Fisher and Tippett [7], von Mises [21], Weibull, and Gumbel [12]. A rigorous theoretical framework was presented in 1943 by Gnedenko [11]. Nowadays, the study of extremes is a well-established branch of Probability Theory called Extreme Value Theory (EVT). This theory is of major importance in the analysis of rare and "catastrophic" events such as floods in hydrology, large claims in insurance, crashes in finance, material failure in corrosion analysis, and so forth. Classic references on EVT are [9,12]. For both the theory and applications of modern EVT we refer the readers to [3,15,19].

Given a sequence $\{\xi_n\}_{n=1}^{\infty}$ of independent and identically distributed (i.i.d.) random variables (random samples), the "normal" approach is to study the asymptotic behavior of the scaled sums of the ξ's, namely the limiting probability distribution of

$$\frac{\xi_1 + \cdots + \xi_n - b_n}{a_n} \tag{1}$$

(as n → ∞), where $\{a_n\}_{n=1}^{\infty}$ and $\{b_n\}_{n=1}^{\infty}$ are properly chosen scaling coefficients. This, as the Central Limit Theorem asserts, leads to the universal Gaussian (Normal) distribution. (In case the ξ's fail to have a finite variance or mean, the Central Limit Theorem is replaced by the Generalized Central Limit Theorem, and the limiting distributions are the stable Lévy laws [13].) The "extreme" approach, on the other hand, studies the asymptotic behavior of the scaled maxima of the ξ's:

$$\frac{\max\{\xi_1, \ldots, \xi_n\} - b_n}{a_n}. \tag{2}$$

The "Central Limit Theorem" of EVT asserts that (2) has three possible limiting probability laws (named, respectively, after the pioneers of EVT): Fréchet, Weibull, and Gumbel. The probability distribution functions of these universal extreme value laws are

$$\text{Fréchet: } \exp\{-x^{-\alpha}\}\ (x > 0); \quad \text{Weibull: } \exp\{-|x|^{\alpha}\}\ (x < 0); \quad \text{Gumbel: } \exp\{-e^{-x}\}\ (x \in \mathbb{R}) \tag{3}$$

(the exponent α appearing in the Fréchet and Weibull distributions is a positive parameter).

Typically, EVT studies the extreme statistics (maxima, minima, order statistics, records, etc.) of random sequences $\{\xi_n\}_{n=1}^{\infty}$. The fundamental theory considers i.i.d. sequences, but generalizations to stationary sequences do exist (see, for example, [3]). Such random sequences represent a time series of data measured at discrete-time epochs. However, most "real-world" systems are continuous-time systems. Hence, why not study the extremes of continuous-time systems directly in some appropriate continuous-time setting?

Consider a generic continuous-time physical system in which events that take place are monitored and logged. Each event is described by a pair (t,x) of coordinates, with t being the time at which the event occurred and x being the magnitude of the event (a numerical value). Hence, the "history" of the physical system is given by a random collection $\mathcal{P}$ of points in the plane, where each point of $\mathcal{P}$ represents an event that took place. In the mathematical nomenclature, the random collection $\mathcal{P}$—the system's history of events—is called a point process. A direct continuous-time approach would thus be to study the statistics of the extreme points of $\mathcal{P}$. To that end, one obviously has to specify the probability distribution of the point process $\mathcal{P}$. The continuous-time counterparts of discrete-time i.i.d. sequences are Poisson point processes. If $\mathcal{P}$ is a Poisson point process governed by the rate function λ(x), then, informally, (i) events of magnitude belonging to the infinitesimal range (x, x + dx) arrive at rate λ(x) dx and (ii) the occurrences of events of different magnitudes are independent.

In this article, we will explore the extremes of a generic continuous-time physical system whose history of events forms a Poisson point process. As we will demonstrate, this setting turns out to be "tailor-made" to the modeling and analysis of extreme events in continuous time. We begin, in Section 2, with a short review of the notion of Poisson point processes and introduce our underlying continuous-time system model. In Section 3, we define the system's sequence of order statistics and explore their distributions: the probability law of the maximum, the probability law of the nth "runner-up," and the multidimensional probability law of the "top n" extremes. In Section 4, we unveil a hidden Poissonian structure underlying the sequence of order statistics. This hidden structure gives rise to a markedly simple simulation algorithm for the sequence of order statistics. In Section 5, we turn to study the internal hierarchy of the sequence of order statistics; namely, we analyze the magnitudes of the extreme events when measured relative to each other. We conclude, in Section 6, with the exploration of the system's record times and record values.

A note about notation: Throughout the article, $\mathbb{R}$ denotes the real line, P(·) := probability, E[·] := expectation, and dω (dx, dt, etc.) is used to denote the infinitesimal neighborhood of the point ω (x, t, etc.).

2. THE CONTINUOUS-TIME SETTING

In this section we concisely review the notion of Poisson point processes, introduce our underlying continuous-time “event process” (which will accompany us throughout the manuscript), and explain the passage from the discrete-time i.i.d. setting to the continuous-time Poisson setting.

2.1. Poisson Point Processes

Let Ω be a Euclidean space (or a subspace or domain thereof). A random countable collection of points Π ⊂ Ω is called a point process [17]. We denote by Π(B) the number of points of Π residing in the subset B:

$$\Pi(B) := \#\{\omega \in \Pi : \omega \in B\}. \tag{4}$$

Hence, the point process Π ⊂ Ω induces a counting measure on Ω given by (4).

A point process Π ⊂ Ω is said to be a Poisson point process [17] with rate r(ω) if the following pair of conditions holds:

  • If B is a subset of Ω, then the random variable Π(B) is Poisson distributed with mean ∫B r(ω) dω.
  • If {Bk}k is a finite collection of disjoint subsets of Ω, then {Π(Bk)}k is a finite collection of independent random variables.

The Poisson point process Π can also be described by its finite-dimensional distributions: the probability that a given set of points $\{\omega_j\}_{j=1}^{n}$ belongs to Π is

$$P(\{\omega_1, \ldots, \omega_n\} \subset \Pi) = \prod_{j=1}^{n} r(\omega_j)\,d\omega_j. \tag{5}$$
Informally, Ω is divided into infinitesimal cells. Each cell contains either a single point or no points at all. The cells are independent, and the probability that the cell dω contains a point is r(ω) dω.

The best known example of a Poisson point process is the standard Poisson process, where Ω = [0,∞) and the rate function r(ω) is constant.
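To make the defining pair of conditions concrete, here is a minimal simulation sketch (Python with NumPy; the rectangle, the constant rate, and all names are illustrative assumptions, not part of the article). It uses the standard two-step construction of a homogeneous Poisson point process on a bounded region: draw a Poisson-distributed total count, then scatter that many points uniformly.

import numpy as np

rng = np.random.default_rng(0)

def poisson_points(rate, t_max, x_min, x_max):
    """Homogeneous Poisson point process on [0, t_max] x [x_min, x_max].
    Step 1: the total count is Poisson with mean rate * area.
    Step 2: given the count, the points are i.i.d. uniform on the rectangle."""
    area = t_max * (x_max - x_min)
    n = rng.poisson(rate * area)
    t = rng.uniform(0.0, t_max, size=n)
    x = rng.uniform(x_min, x_max, size=n)
    return np.column_stack([t, x])

pts = poisson_points(rate=2.0, t_max=5.0, x_min=-1.0, x_max=1.0)
# Counts in disjoint subsets are independent Poisson variables; e.g.,
# the two halves of the time axis each have a Poisson(2.0 * 2.5 * 2.0 = 10) count:
print(np.sum(pts[:, 0] < 2.5), np.sum(pts[:, 0] >= 2.5))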

2.2. The Event Process

Equipped with the notion of Poisson point processes, we can now rigorously define the point process $\mathcal{P}$ presented in Section 1. Recall that we considered a generic continuous-time physical system whose "history" is a random collection $\mathcal{P}$ of points in the plane—the point (t,x) representing an event of magnitude x taking place at time t. In Section 1, we said, informally, that "events of magnitude belonging to the infinitesimal range (x, x + dx) arrive at rate λ(x) dx" and "the occurrences of events of different magnitudes are independent." Put rigorously, the process $\mathcal{P}$ (henceforth referred to as our event process) is taken to be a time-homogeneous Poisson point process on $\Omega = [0,\infty) \times \mathbb{R}$ with rate

$$r(t,x) = \lambda(x). \tag{6}$$

Thus, using (5), the finite-dimensional distributions of the event process are

$$P(\{(t_1,x_1), \ldots, (t_n,x_n)\} \subset \mathcal{P}) = \prod_{j=1}^{n} \lambda(x_j)\,dt_j\,dx_j. \tag{7}$$

We set Λ(x) to be the Poissonian rate at which samples of size greater than x arrive, namely

$$\Lambda(x) := \int_x^{\infty} \lambda(x')\,dx'. \tag{8}$$

The function Λ(x) is smooth, monotone nonincreasing, and fully characterizes the event process $\mathcal{P}$. This function will turn out to play a key role in the sequel. We henceforth refer to the function Λ(x) as the characteristic of the process $\mathcal{P}$ and assume that

$$\lim_{x \to -\infty} \Lambda(x) = +\infty \quad \text{and} \quad \lim_{x \to +\infty} \Lambda(x) = 0. \tag{9}$$

We will elaborate on this assumption in the following subsection. Furthermore, in Subsection 3.1 we shall show that this assumption is in fact an essential requirement.

Finally, note that the random variable $\mathcal{P}([t, t + \Delta] \times [a,b])$—counting the number of events of size x ∈ [a,b] occurring during the time interval [t, t + Δ]—is Poisson distributed with mean

$$\Delta \int_a^b \lambda(x)\,dx = \Delta\,(\Lambda(a) - \Lambda(b)). \tag{10}$$
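As an illustration of (10), the following sketch (Python/NumPy; the Gumbel-type rate λ(x) = e^{−x} and all names are assumptions made for this example) simulates the restriction of the event process to magnitudes above a level x0—there, the total count is Poisson with mean ΔΛ(x0) and, given the count, the magnitudes are i.i.d. with density λ(x)/Λ(x0) on (x0,∞)—and compares the empirical mean count in a band [a,b] against Δ(Λ(a) − Λ(b)).

import numpy as np

rng = np.random.default_rng(1)

# Illustrative rate: lambda(x) = exp(-x), hence Lambda(x) = exp(-x).
x0, delta, runs = 0.0, 3.0, 20_000
a, b = 0.5, 1.5

counts = np.empty(runs)
for i in range(runs):
    n = rng.poisson(delta * np.exp(-x0))        # count above x0: Poisson(Delta * Lambda(x0))
    mags = x0 + rng.exponential(1.0, size=n)    # magnitudes: density lambda(x)/Lambda(x0) on (x0, inf)
    counts[i] = np.sum((mags >= a) & (mags <= b))

print(counts.mean())                            # empirical mean count in [a, b]
print(delta * (np.exp(-a) - np.exp(-b)))        # Delta * (Lambda(a) - Lambda(b)), as in (10)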

2.3. From Discrete to Continuous and Back

In the discrete-time setting described in Section 1, the underlying time series $\{\xi_n\}_{n=1}^{\infty}$ is an i.i.d. sequence of random variables. If we take the random samples $\{\xi_n\}_{n=1}^{\infty}$ to arrive according to some continuous-time counting process $(N(t))_{t \ge 0}$, then the sample set, at time t, would be

$$\{\xi_1, \xi_2, \ldots, \xi_{N(t)}\} \tag{11}$$

(the sample set being empty in the case N(t) = 0). The setting in which $(N(t))_{t \ge 0}$ is a renewal process was introduced in [10] and coined "Random Record Models" (see also [4,22]).

If $(N(t))_{t \ge 0}$ is a standard Poisson process with rate ρ, then the "sample process" (11) is in fact a time-homogeneous Poisson point process on $[0,\infty) \times \mathbb{R}$ with rate

$$\lambda(x) = \rho f(x), \tag{12}$$

where f(x) is the probability density function of the ξ's.

On the other hand, if the rate function λ(x) of the event process $\mathcal{P}$ is integrable (i.e., if ∫λ(x) dx < ∞), then $\mathcal{P}$ can be described in the form of the sample set (11), where (i) the Poissonian arrival rate is ρ := ∫λ(x) dx and (ii) the probability density function of the ξ's is f(x) := λ(x)/ρ.

Hence, in the case of integrable rate functions, there is a one-to-one correspondence, via embedding, between discrete-time i.i.d. sequences $\{\xi_n\}_{n=1}^{\infty}$ and continuous-time event processes $\mathcal{P}$. However, the truly interesting case is where the rate function is actually not integrable: ∫λ(x) dx = ∞. In this case, $\mathcal{P}$ has infinitely many events occurring on all time scales (i.e., on any time interval), and no correspondence between $\mathcal{P}$ and any discrete-time i.i.d. sequence $\{\xi_n\}_{n=1}^{\infty}$ can be established. Thus, in the case of a nonintegrable rate function, the event process $\mathcal{P}$ is a truly continuous-time "creature."

In this article, we focus on the “truly continuous-time case,” where the rate functions are nonintegrable. This is the reason for assumption (9). The limit limx→−∞ Λ(x) = +∞ ensures that the rate function λ(x) is indeed nonintegrable. The limit limx→+∞ Λ(x) = 0, on the other hand, ensures that events of infinitely large magnitude do not occur. We will return to discuss assumption (9) in Subsection 3.1.
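In the integrable case, the embedding is directly implementable. A minimal sketch (Python/NumPy; the choice λ(x) = e^{−x} on [0,∞), giving ρ = 1 and a standard exponential density f, as well as all names, are our illustrative assumptions): events arrive as a rate-ρ Poisson stream carrying i.i.d. magnitudes.

import numpy as np

rng = np.random.default_rng(2)

def sample_process(t_max):
    """Event process with integrable rate lambda(x) = exp(-x) on [0, inf):
    rho = integral of lambda = 1, and f(x) = lambda(x) / rho = exp(-x)."""
    rho = 1.0
    n = rng.poisson(rho * t_max)                  # N(t_max): number of arrivals
    times = np.sort(rng.uniform(0.0, t_max, n))   # Poisson arrival epochs
    mags = rng.exponential(1.0, size=n)           # i.i.d. magnitudes with density f
    return times, mags

times, mags = sample_process(t_max=10.0)
print(len(times), mags.max() if len(mags) else None)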

3. THE SEQUENCE OF ORDER STATISTICS

Let $\{X_n(t)\}_{n=1}^{\infty}$ denote the sequence of order statistics of the event process $\mathcal{P}$; namely, Xn(t) is the nth largest sample observed during the time period [0,t]:

$$X_n(t) := \max\{x : (s,x) \in \mathcal{P},\ 0 \le s \le t,\ x < X_{n-1}(t)\}, \quad n = 1, 2, \ldots, \tag{13}$$

where X0(t) is set to equal +∞. (Definition (13) is in fact valid for any point process on $[0,\infty) \times \mathbb{R}$.)

Let us now turn to analyze the probability distributions of the maximum value X1(t), the nth “runner-up” Xn+1(t), and the vector of the “top n” order statistics (X1(t),…,Xn(t)).

3.1. The Maximum

We compute the distribution of the maximum X1(t). The maximum at time t is less than or equal to x if and only if no events of magnitude greater than x occurred during the time period [0,t]; that is,

$$\{X_1(t) \le x\} = \{\mathcal{P}([0,t] \times (x,\infty)) = 0\}.$$

However, the random variable $\mathcal{P}([0,t] \times (x,\infty))$ is Poisson distributed with mean tΛ(x) (recall (10)) and, hence,

$$P(X_1(t) \le x) = \exp\{-t\Lambda(x)\}. \tag{14}$$

The probability density function of X1(t) is given, in turn, by

$$f_{X_1(t)}(x) = t\lambda(x)\exp\{-t\Lambda(x)\}. \tag{15}$$
The assumption of (9): From (14), we see that the distribution of the maximum is proper if and only if the assumption of (9) holds. Indeed, in order for the distribution of X1(t) to be proper, the limit of (14) must equal zero when x → −∞ and must equal one when x → +∞. This, however, takes place if and only if limx→−∞ Λ(x) = +∞ and limx→+∞ Λ(x) = 0. Hence, the assumption of (9) is not an impeding restriction, but an essential requirement.
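A quick Monte Carlo sanity check of (14), under the illustrative Gumbel-type characteristic Λ(x) = e^{−x} (a sketch; the floor level x_low < x and all names are our assumptions): all events with magnitude above the floor are simulated, and the empirical P(X1(t) ≤ x) is compared with exp{−tΛ(x)}. Events below the floor cannot affect the comparison, since x_low < x.

import numpy as np

rng = np.random.default_rng(3)

t, x, x_low, runs = 2.0, 1.0, -3.0, 20_000
Lam = lambda y: np.exp(-y)                       # characteristic Lambda(y) = e^{-y}

below = 0
for _ in range(runs):
    n = rng.poisson(t * Lam(x_low))              # events above the floor during [0, t]
    mags = x_low + rng.exponential(1.0, size=n)  # their magnitudes
    if n == 0 or mags.max() <= x:
        below += 1

print(below / runs)                              # empirical P(X1(t) <= x)
print(np.exp(-t * Lam(x)))                       # exp{-t Lambda(x)}, as in (14)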

3.2. Beating the Maximum

Assume that the current maximum level is x. How long will we have to wait until this maximum level is beaten? Since events of magnitude greater than x arrive at rate Λ(x), the answer is straightforward: The waiting time is exponentially distributed with rate Λ(x) (mean 1/Λ(x)). Let us consider the following two variations of this question.

(i) Assume that we are at time t, but we do not know the current maximum level X1(t). How long will we have to wait—from the present time t onward—until the unknown maximum level X1(t) is beaten? Let L(t) denote the respective waiting time. Given that X1(t) = x, the waiting time L(t) is exponentially distributed with rate Λ(x). Conditioning on the (unknown) maximum level X1(t), we obtain that

$$P(L(t) > l) = \frac{t}{t + l} \quad (l \ge 0). \tag{16}$$

The proof of (16) is given below. Note that the waiting time L(t) has infinite mean.

(ii) This variation regards positive-valued event processes. Assume (as earlier) that the current maximum level is x and let k > 1 be a fixed parameter. How long will we have to wait until the occurrence of a maximum level that is at least k times larger than all the maximum levels preceding it? These waiting times (coined “geometric record times”) are explored in [2] (their analysis is considerably harder than the analysis of the waiting times described earlier).

To prove (16), use the probability density function of (15) and the change of variables u = tΛ(x):

$$P(L(t) > l) = \int_{-\infty}^{\infty} e^{-l\Lambda(x)}\,t\lambda(x)e^{-t\Lambda(x)}\,dx = \int_0^{\infty} e^{-(l/t)u}e^{-u}\,du = \frac{1}{1 + l/t} = \frac{t}{t + l}.$$
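A two-line Monte Carlo corroboration of (16) (a sketch; names are ours): by Proposition 1 below, U := tΛ(X1(t)) is exponentially distributed with unit mean, and given U, the waiting time L(t) is exponentially distributed with rate U/t.

import numpy as np

rng = np.random.default_rng(4)
t, l, runs = 2.0, 3.0, 1_000_000

u = rng.exponential(1.0, size=runs)    # t * Lambda(X1(t)) is exponential with unit mean
wait = rng.exponential(t / u)          # given X1(t): L(t) is exponential with rate Lambda(X1(t)) = u / t
print((wait > l).mean(), t / (t + l))  # both ~ 0.4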
3.3. Fréchet, Weibull, and Gumbel

Equation (14) implies that the function Λ(x) fully characterizes the distribution of the maximum. On the other hand, the function Λ(x) also fully characterizes the underlying event process $\mathcal{P}$. Hence, there is a one-to-one correspondence—conveyed by the characteristic Λ(x)—between event processes and the distributions of their maxima. In particular, the event processes corresponding to the "Central Limit Theorem" distributions of EVT (viz. the Fréchet, Weibull, and Gumbel distributions) are given, respectively, by

$$\text{Fréchet: } \Lambda(x) = x^{-\alpha}\ (x > 0); \quad \text{Weibull: } \Lambda(x) = |x|^{\alpha}\ (x < 0); \quad \text{Gumbel: } \Lambda(x) = \exp\{-x\}\ (x \in \mathbb{R})$$

(the exponent α appearing in the Fréchet and Weibull cases is a positive parameter).

Could these extreme value laws be obtained (as in the “classic” EVT) as the only possible maxima scaling limits? The answer, as explained below, is affirmative.

The continuous-time scaling of the maximum X1(t), analogous to the discrete-time scaling of (2), is

$$\frac{X_1(t) - b(t)}{a(t)},$$

where a(t) and b(t) are the scaling functions (a(t) being positive valued). The distribution of the scaled maximum, using (14), is hence given by

$$P\!\left(\frac{X_1(t) - b(t)}{a(t)} \le x\right) = \exp\{-t\Lambda(a(t)x + b(t))\}. \tag{17}$$

On the other hand, the distribution of the scaled maxima of (2) is given by

$$P\!\left(\frac{\max\{\xi_1, \ldots, \xi_n\} - b_n}{a_n} \le x\right) = \exp\{-nL(a_n x + b_n)\}, \tag{18}$$

where L(x) := −ln(P(ξ1 ≤ x)). Moreover, the function L(x) satisfies the very same properties the function Λ(x) does; namely, it is monotone decreasing, with limx→−∞ L(x) = +∞ and limx→+∞ L(x) = 0.

Thus, (17) (as t → ∞) and (18) (as n → ∞) must yield the same distributional limits. The “classic” EVT asserts that the three possible limits of (18) are the extreme value distributions: Fréchet, Weibull, and Gumbel. Hence, these probability laws are also the only possible limits of (17).

The scaling functions in the Fréchet, Weibull, and Gumbel cases are

$$\text{Fréchet: } a(t) = t^{1/\alpha},\ b(t) = 0; \quad \text{Weibull: } a(t) = t^{-1/\alpha},\ b(t) = 0; \quad \text{Gumbel: } a(t) = 1,\ b(t) = \ln t.$$

It is interesting to note that in these three special cases, the scaling yields the corresponding extreme value law exactly, for every t > 0—not merely in the limit t → ∞.
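For instance, in the Gumbel case the exactness is a one-line computation from (17) (a worked check we add here; it is not part of the original text):

$$P\!\left(X_1(t) - \ln t \le x\right) = \exp\{-t\Lambda(x + \ln t)\} = \exp\{-t\,e^{-x}e^{-\ln t}\} = \exp\{-e^{-x}\}, \quad x \in \mathbb{R},$$

which is the Gumbel law exactly, for every t > 0.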

For a detailed analysis regarding the “basins of attraction” of the Fréchet, Weibull, and Gumbel extreme value distributions, we refer the readers to [3].

3.4. The nth "Runner-up" and the "Top n"

The distribution of Xn+1(t)—the nth "runner-up"—is computed analogously to the distribution of the maximum X1(t). The (n + 1)st order statistic at time t is less than or equal to x if and only if no more than n events of magnitude greater than x occurred during the time period [0,t]; that is,

$$\{X_{n+1}(t) \le x\} = \{\mathcal{P}([0,t] \times (x,\infty)) \le n\}.$$

Hence, since $\mathcal{P}([0,t] \times (x,\infty))$ is Poisson distributed with mean tΛ(x), we have

$$P(X_{n+1}(t) \le x) = \exp\{-t\Lambda(x)\}\sum_{k=0}^{n}\frac{(t\Lambda(x))^k}{k!}. \tag{19}$$

The probability density function of Xn+1(t) is given, in turn, by

$$f_{X_{n+1}(t)}(x) = t\lambda(x)\frac{(t\Lambda(x))^n}{n!}\exp\{-t\Lambda(x)\}. \tag{20}$$
Note that (19) and (20) indeed coincide with (14) and (15) in the case n = 0.

Equation (19) gives the one-dimensional distribution of the order statistics. What about the joint, multidimensional, probability distribution of the order statistics; namely the joint distribution of the “top n”—the vector (X1(t),…,Xn(t))?

Well, the joint probability density function of the vector (X1(t),…,Xn(t)) is given by

$$f_{(X_1(t),\ldots,X_n(t))}(x_1, \ldots, x_n) = t^n\left(\prod_{j=1}^{n}\lambda(x_j)\right)\exp\{-t\Lambda(x_n)\}, \tag{21}$$

where x1 > x2 > ··· > xn. The explanation follows.

In order for the points x1 > x2 > ··· > xn to be the "top n" points of the sample process $\mathcal{P}$ at time t, we need (for j = 1,…,n) the following: (i) an event of magnitude xj takes place during the time period [0,t]; this occurs with probability tλ(xj) dxj; and (ii) no events of magnitude ∈ (xj, xj−1) (where x0 is set to equal +∞) take place during the time period [0,t]; this occurs with probability exp{−t(Λ(xj) − Λ(xj−1))}. Multiplying these probabilities together yields the multidimensional density function (21).

4. THE STRUCTURE OF THE SEQUENCE OF ORDER STATISTICS

When viewed as a stochastic process in the "order parameter" n (keeping the time t fixed), the sequence of order statistics $\{X_n(t)\}_{n=1}^{\infty}$ conceals a hidden underlying structure, which we unveil in this section. First, we reveal a Markovian structure governing the sequence of order statistics. Second, we prove that the Markovian structure is due to a hidden, underlying, Poissonian structure. This Poissonian structure, in turn, provides us with a markedly simple simulation algorithm for the sequence of order statistics. Third, we show that the (discretely parameterized) sequence of order statistics can be embedded in a simple transformation of a (continuously parameterized) Gamma process.

4.1. Markovian Structure

The conditional distribution of the (n + 1)st order statistic, given the value of the nth order statistic, is

$$P(X_{n+1}(t) \le y \mid X_n(t) = x) = \exp\{-t(\Lambda(y) - \Lambda(x))\} \tag{22}$$

(y < x). The explanation follows.

Given that the nth order statistic equals x, the (n + 1)st order statistic will be less than or equal to y (where y < x) if and only if no events of magnitude ∈ (y,x) occur during the time interval [0,t]; that is, if and only if $\mathcal{P}([0,t] \times (y,x)) = 0$. Since $\mathcal{P}$ is a Poisson point process, the random variable $\mathcal{P}([0,t] \times (y,x))$ is Poisson distributed with mean t(Λ(y) − Λ(x)) and is independent of the points of $\mathcal{P}$ residing in [0,t] × [x,∞). Hence, the left-hand side of (22) equals the probability that $\mathcal{P}([0,t] \times (y,x)) = 0$, which, in turn, is given by the right-hand side of (22).

Since (22) is a Markovian recursion, it implies that the sequence of order statistics $\{X_n(t)\}_{n=1}^{\infty}$ is a Markov chain (in the variable n, for t fixed). The initial condition of this Markov chain is given by the distribution of the maximum X1(t); see (14).

4.2. The Hidden Poissonian Structure

When viewed in the proper perspective, the sequence of order statistics $\{X_n(t)\}_{n=1}^{\infty}$ conceals a hidden Poissonian structure. The "proper perspective" is to consider the sequence $\{\Lambda(X_n(t))\}_{n=1}^{\infty}$, rather than the original sequence $\{X_n(t)\}_{n=1}^{\infty}$. Let us begin with the distribution of Λ(X1(t)).

Using (14), we have

$$P(\Lambda(X_1(t)) > u) = P(X_1(t) \le \Lambda^{-1}(u)) = \exp\{-tu\}$$

(u ≥ 0). Hence, Λ(X1(t)) is exponentially distributed with parameter t (mean 1/t). In a similar way, (19) implies that Λ(Xn(t)) is Gamma distributed with parameters (t,n).

More informative, however, is to analyze the conditional distribution of the increment Λ(Xn+1(t)) − Λ(Xn(t)) given Λ(Xn(t)). Indeed, using (22), we have

$$P(\Lambda(X_{n+1}(t)) - \Lambda(X_n(t)) > u \mid \Lambda(X_n(t))) = \exp\{-tu\} \quad (u \ge 0);$$

that is, the increment Λ(Xn+1(t)) − Λ(Xn(t)) is independent of Λ(Xn(t)) and is exponentially distributed with parameter t (mean 1/t). Hence, we have obtained the following proposition:

Proposition 1: Let $\mathcal{P}$ be an event process with characteristic Λ(x) and order statistics $\{X_n(t)\}_{n=1}^{\infty}$. Then the increasing sequence of points $\{\Lambda(X_n(t))\}_{n=1}^{\infty}$ forms a standard Poisson process with rate t.

4.3. Simulation and Gamma Embedding

Proposition 1 provides us with a remarkably simple simulation algorithm for the entire sequence of order statistics $\{X_n(t)\}_{n=1}^{\infty}$:

$$X_n(t) = \Lambda^{-1}\!\left(\frac{Z_1 + \cdots + Z_n}{t}\right), \quad n = 1, 2, \ldots, \tag{23}$$

where $\{Z_n\}_{n=1}^{\infty}$ is an i.i.d. sequence of exponentially distributed random variables with unit mean. In particular, the simulation algorithms for the Fréchet, Weibull, and Gumbel order statistics are as follows:

$$\text{Fréchet: } X_n(t) = \left(\frac{t}{Z_1 + \cdots + Z_n}\right)^{1/\alpha}\!; \quad \text{Weibull: } X_n(t) = -\left(\frac{Z_1 + \cdots + Z_n}{t}\right)^{1/\alpha}\!; \quad \text{Gumbel: } X_n(t) = \ln\!\left(\frac{t}{Z_1 + \cdots + Z_n}\right).$$
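The following sketch implements (23) directly (Python/NumPy; the function names are ours): cumulative sums of unit-mean exponentials are passed through Λ−1.

import numpy as np

rng = np.random.default_rng(5)

def order_statistics(t, n, lam_inv):
    """Simulate the top-n order statistics X_1(t) >= ... >= X_n(t) via (23):
    {Lambda(X_k(t))}_k is a standard Poisson process with rate t."""
    z = rng.exponential(1.0, size=n)   # i.i.d. unit-mean exponentials Z_1, ..., Z_n
    s = np.cumsum(z)                   # Z_1 + ... + Z_k for k = 1, ..., n
    return lam_inv(s / t)

t, n, alpha = 2.0, 5, 1.5
frechet = order_statistics(t, n, lambda u: u ** (-1.0 / alpha))    # Lambda(x) = x^(-alpha)
weibull = order_statistics(t, n, lambda u: -(u ** (1.0 / alpha)))  # Lambda(x) = |x|^alpha, x < 0
gumbel = order_statistics(t, n, lambda u: -np.log(u))              # Lambda(x) = exp(-x)
print(frechet)  # decreasing and positive
print(weibull)  # decreasing and negative
print(gumbel)   # decreasing, real-valued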
Equation (23) is in fact a manifestation of a Gamma embedding as we now explain.

The Gamma process $(G(s))_{s \ge 0}$ is a stochastic process starting at the origin (G(0) = 0) whose increments are independent, stationary, and Gamma distributed:

$$P(G(b) - G(a) \in du) = \frac{u^{(b-a)-1}e^{-u}}{\Gamma(b-a)}\,du \quad (u > 0)$$

(b > a ≥ 0). The Gamma process is a special example of a one-sided Lévy process (Lévy subordinator) [1].

At the point s = 1, the value of the Gamma process G(1) is exponentially distributed with unit mean. Hence, since the Gamma process has independent and stationary increments, the increments $\{G(n) - G(n-1)\}_{n=1}^{\infty}$ form an i.i.d. sequence of exponentially distributed random variables with unit mean. Combining this observation with (23), we obtain the following Gamma embedding of the sequence of order statistics $\{X_n(t)\}_{n=1}^{\infty}$:

$$X_n(t) = \Lambda^{-1}\!\left(\frac{G(n)}{t}\right), \quad n = 1, 2, \ldots;$$

that is, the discrete-parameter sequence $\{X_n(t)\}_{n=1}^{\infty}$ is embedded in the continuous-time process $(\Lambda^{-1}(G(s)/t))_{s \ge 0}$, which, in turn, is a transformation of the Gamma process $(G(s))_{s \ge 0}$.

Furthermore, the random variable Λ−1(G(s)/t), for noninteger parameter values s, may be considered a "virtual" s-order statistic. Continuing yet another step in this direction, we can define the Fréchet, Weibull, and Gumbel virtual order processes as follows (s ≥ 0):

$$\text{Fréchet: } \left(\frac{t}{G(s)}\right)^{1/\alpha}\!; \quad \text{Weibull: } -\left(\frac{G(s)}{t}\right)^{1/\alpha}\!; \quad \text{Gumbel: } \ln\!\left(\frac{t}{G(s)}\right).$$
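A sketch of a virtual order process on a grid (Python/NumPy; the mesh and names are ours): Gamma-process increments over a mesh of size ds are independent Gamma(ds, 1) variates, and their cumulative sums sample (G(s))s≥0.

import numpy as np

rng = np.random.default_rng(6)

t, ds, s_max, alpha = 2.0, 0.01, 10.0, 1.5
increments = rng.gamma(shape=ds, scale=1.0, size=int(s_max / ds))
G = np.cumsum(increments)                    # the Gamma process sampled on the grid
s = ds * np.arange(1, len(G) + 1)

frechet_virtual = (t / G) ** (1.0 / alpha)   # Frechet virtual s-order statistics
print(frechet_virtual[int(1.0 / ds) - 1])    # the (virtual) maximum at s = 1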
5. THE INTERNAL HIERARCHY OF THE SEQUENCE OF ORDER STATISTICS

What is the "internal hierarchy" of the sequence of order statistics? What are the relative magnitudes of the order statistics? The answer to these questions is given by the following proposition.

Proposition 2: Let $\mathcal{P}$ be an event process with characteristic Λ(x) and order statistics $\{X_n(t)\}_{n=1}^{\infty}$. Then the ratios $\{(\Lambda(X_n(t))/\Lambda(X_{n+1}(t)))^n\}_{n=1}^{\infty}$ are independent and uniformly distributed on the unit interval. Equivalently,

$$P\!\left(\frac{\Lambda(X_n(t))}{\Lambda(X_{n+1}(t))} \le u\right) = u^n \quad (0 < u < 1),\ n = 1, 2, \ldots.$$
The proof of Proposition 2, which is based on the use of an order-preserving transformation of event processes, is provided in the Appendix.

We note that the distribution of the ratio Λ(Xn(t))/Λ(Xn+1(t)) can be deduced from Proposition 1. Indeed, (23) implies that this ratio equals, in law, the ratio (Z1 + ··· + Zn)/(Z1 + ··· + Zn+1), where $\{Z_n\}_{n=1}^{\infty}$ is an i.i.d. sequence of exponentially distributed random variables with unit mean. The latter ratio, in turn, is known (see, for example, [6]) to be governed by the Beta distribution function $u^n$ (0 < u < 1).
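A quick empirical corroboration (a sketch; names are ours): by (23), the ratio is Beta-distributed with distribution function u^n, so its nth power should be uniform on (0,1).

import numpy as np

rng = np.random.default_rng(7)

n, runs = 3, 1_000_000
z = rng.exponential(1.0, size=(runs, n + 1))
s = np.cumsum(z, axis=1)
ratio = s[:, n - 1] / s[:, n]            # (Z_1 + ... + Z_n) / (Z_1 + ... + Z_{n+1})
u = ratio ** n                           # should be uniform on (0, 1)
print(u.mean(), np.quantile(u, 0.25))    # ~ 0.5 and ~ 0.25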

With Proposition 2 at hand, let us explore the internal hierarchy of the sequence of order statistics in the special Fréchet, Weibull, and Gumbel cases.

5.1. The Fréchet and Weibull Cases

In the Fréchet case, Λ(x) = x−α, and in the Weibull case, Λ(x) = |x|α (the exponent α being a positive parameter). Hence, Proposition 2 implies the following:

• Fréchet: The ratios Xn+1(t)/Xn(t), n = 1,2,…, are independent random variables governed by the Beta distribution

$$P\!\left(\frac{X_{n+1}(t)}{X_n(t)} \le u\right) = u^{n\alpha} \quad (0 < u < 1).$$

• Weibull: The ratios |Xn(t)|/|Xn+1(t)|, n = 1,2,…, are independent random variables governed by the Beta distribution

$$P\!\left(\frac{|X_n(t)|}{|X_{n+1}(t)|} \le u\right) = u^{n\alpha} \quad (0 < u < 1).$$
Most interesting is the first ratio—X2(t)/X1(t) in the Fréchet case and |X1(t)|/|X2(t)| in the Weibull case—which measures the relative magnitudes of the "winner" and the "first runner-up." The ratio distribution, in both the Fréchet and Weibull cases, is governed by the probability density function $f(u) = \alpha u^{\alpha-1}$ (0 < u < 1). This density function undergoes a phase transition when crossing the parameter value α = 1: (i) when 0 < α < 1, the density f(u) is monotone decreasing, starting at f(0) = ∞ and decreasing to f(1) = α; (ii) when α = 1 (the "critical" parameter value), the density f(u) is uniform; and (iii) when α > 1, the density f(u) is monotone increasing, starting at f(0) = 0 and increasing to f(1) = α. Hence, the "first runner-up" trails considerably behind the "winner" when α is small, and it follows the "winner" closely when α is large.

At the other end of the order spectrum, the distribution function $u^{n\alpha}$ becomes more and more concentrated around the value u = 1 as n → ∞. This implies that as n → ∞, the order statistics get closer and closer to each other in ratio. In the Fréchet case, as n → ∞, the order statistics converge to zero and hence get closer and closer to each other absolutely as well. In the Weibull case, on the other hand, the order statistics diverge to −∞ and hence remain apart.

In the Fréchet and Weibull cases, Proposition 2 also provides us with an algorithm for the simulation of the relative contributions of the “top n” order statistics X1(t),X2(t),…,Xn(t) to their aggregate magnitude. The explanation follows.

Let $\{U_k\}_{k=1}^{n-1}$ be an i.i.d. sequence of random variables uniformly distributed on the unit interval, and set $\{M_k\}_{k=0}^{n-1}$ to be given by

$$M_0 := 1 \quad \text{and} \quad M_k := \prod_{j=1}^{k} U_j^{1/j\alpha}, \quad k = 1, \ldots, n-1.$$

Proposition 2 implies (all equalities below are equalities in law):

Fréchet case: $X_{k+1}(t)/X_k(t) = U_k^{1/k\alpha}$ and hence, using recursion, $X_k(t) = X_1(t)M_{k-1}$. This, in turn, yields the arithmetic average

$$\frac{X_1(t) + \cdots + X_n(t)}{n} = X_1(t)\cdot\frac{M_0 + M_1 + \cdots + M_{n-1}}{n}.$$

Weibull case: $|X_k(t)|/|X_{k+1}(t)| = U_k^{1/k\alpha}$ and hence, using recursion, $|X_k(t)| = |X_n(t)|M_{n-1}/M_{k-1}$. This, in turn, yields the harmonic average

$$\frac{n}{\dfrac{1}{|X_1(t)|} + \cdots + \dfrac{1}{|X_n(t)|}} = |X_n(t)|M_{n-1}\cdot\frac{n}{M_0 + M_1 + \cdots + M_{n-1}}.$$
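A sketch of this simulation for the Fréchet case (Python/NumPy; names are ours): the relative contributions Xk(t)/(X1(t) + ··· + Xn(t)) = Mk−1/(M0 + ··· + Mn−1) depend only on the uniforms.

import numpy as np

rng = np.random.default_rng(8)

n, alpha = 5, 1.5
k = np.arange(1, n)                                                  # k = 1, ..., n-1
u = rng.uniform(size=n - 1)
m = np.concatenate([[1.0], np.cumprod(u ** (1.0 / (k * alpha)))])    # M_0, ..., M_{n-1}

rel = m / m.sum()   # relative contributions of the "top n" (Frechet case)
print(rel)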
5.2. The Gumbel Case

In the Gumbel case, Λ(x) = exp{−x}, and hence Proposition 2 implies the following:

Gumbel: The differences Xn(t) − Xn+1(t), n = 1,2,…, are independent random variables governed by the exponential distribution

$$P(X_n(t) - X_{n+1}(t) > z) = e^{-nz} \quad (z \ge 0). \tag{30}$$

Alternatively, the random variables $\{n(X_n(t) - X_{n+1}(t))\}_{n=1}^{\infty}$ are i.i.d. and exponentially distributed with unit mean.

Let us study the following "asymptotically centered" sequence of differences $\{D_n\}_{n=1}^{\infty}$, defined by

$$D_n := X_1(t) - X_{n+1}(t) - \ln n.$$

Using (30), the sequence $\{D_n\}_{n=1}^{\infty}$ admits the probabilistic representation

$$D_n = \frac{Z_1}{1} + \frac{Z_2}{2} + \cdots + \frac{Z_n}{n} - \ln n, \tag{31}$$

where $\{Z_n\}_{n=1}^{\infty}$ is an i.i.d. sequence of exponentially distributed random variables with unit mean. The representation (31), in turn, yields that the mean and variance of Dn are

$$E[D_n] = \sum_{k=1}^{n}\frac{1}{k} - \ln n \xrightarrow[n\to\infty]{} \gamma \quad \text{and} \quad \mathrm{Var}[D_n] = \sum_{k=1}^{n}\frac{1}{k^2} \xrightarrow[n\to\infty]{} \frac{\pi^2}{6},$$

where γ ≃ 0.577 is Euler's constant.

Moreover, the representation (31) further yields that the Laplace transform of Dn is

$$E[e^{-\theta D_n}] = n^{\theta}\prod_{k=1}^{n}\frac{k}{k+\theta} = \frac{\Gamma(n+1)\,\Gamma(1+\theta)}{\Gamma(n+1+\theta)}\,n^{\theta}$$

(θ ≥ 0). Hence, using Stirling's formula, we have

$$\lim_{n\to\infty} E[e^{-\theta D_n}] = \Gamma(1+\theta).$$

The function Γ(1 + θ), however, is the Laplace transform of the standard Gumbel distribution (see, for example, [5]), and hence we conclude the following:

The sequence $\{D_n\}_{n=1}^{\infty}$ converges, in law, to the standard Gumbel distribution.
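A numerical check of the convergence (a sketch; names are ours), comparing the empirical mean and variance of Dn with γ and π²/6—the mean and variance of the standard Gumbel distribution:

import numpy as np

rng = np.random.default_rng(9)

n, runs = 1_000, 20_000
k = np.arange(1, n + 1)
d = np.empty(runs)
for i in range(runs):
    d[i] = (rng.exponential(1.0, size=n) / k).sum() - np.log(n)   # D_n via (31)

print(d.mean(), np.euler_gamma)   # ~ 0.5772, Euler's constant
print(d.var(), np.pi ** 2 / 6)    # ~ 1.6449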

6. RECORDS

Among the points of the event process $\mathcal{P}$, the set of record points (henceforth denoted by $\mathcal{R}$) is of special interest. This last section is devoted to the study of these points.

A point $(t,x) \in \mathcal{P}$ is said to be a record point if and only if all events occurring during the time period [0,t) were smaller in magnitude than the value x:

$$\mathcal{R} := \{(t,x) \in \mathcal{P} : \mathcal{P}([0,t) \times [x,\infty)) = 0\}.$$

If $(t,x) \in \mathcal{R}$, then t is called a record time and x is called a record value. We denote the sets of record times and record values by $\mathcal{R}_{\mathrm{time}}$ and $\mathcal{R}_{\mathrm{value}}$, respectively. Clearly, the sets $\mathcal{R}_{\mathrm{time}}$ and $\mathcal{R}_{\mathrm{value}}$ are the projections of the record set $\mathcal{R}$ on the time and space axes.

The probability that the points $\{(t_j,x_j)\}_{j=1}^{n}$, where t1 < t2 < ··· < tn and x1 < x2 < ··· < xn, belong to the record set $\mathcal{R}$ is given by

$$P(\{(t_j,x_j)\}_{j=1}^{n} \subset \mathcal{R}) = \prod_{j=1}^{n} \exp\{-(t_j - t_{j-1})\Lambda(x_j)\}\,\lambda(x_j)\,dt_j\,dx_j, \tag{33}$$

where t0 is set to equal zero. The explanation follows.

In order for the points $\{(t_j,x_j)\}_{j=1}^{n}$ to belong to the record set $\mathcal{R}$, we need the following (for j = 1,…,n): (i) the point (tj,xj) belongs to the underlying event process $\mathcal{P}$; this occurs with probability λ(xj) dtj dxj; and (ii) during the time interval (tj−1,tj), no events of magnitude greater than or equal to the value xj take place; this occurs with probability exp{−(tj − tj−1)Λ(xj)}. Multiplying these probabilities together yields (33).

We note that (33) can be written, alternatively, in the form

$$P(\{(t_j,x_j)\}_{j=1}^{n} \subset \mathcal{R}) = \prod_{j=1}^{n} \exp\{-t_j(\Lambda(x_j) - \Lambda(x_{j+1}))\}\,\lambda(x_j)\,dt_j\,dx_j, \tag{34}$$

where xn+1 is set to equal +∞.

Now, integrating (33) over x1 < x2 < ··· < xn yields

$$P(\{t_j\}_{j=1}^{n} \subset \mathcal{R}_{\mathrm{time}}) = \prod_{j=1}^{n}\frac{dt_j}{t_j}, \tag{35}$$

and integrating (34) over t1 < t2 < ··· < tn yields

$$P(\{x_j\}_{j=1}^{n} \subset \mathcal{R}_{\mathrm{value}}) = \prod_{j=1}^{n}\frac{\lambda(x_j)}{\Lambda(x_j)}\,dx_j. \tag{36}$$
Hence, using (5), we can conclude the following:

Proposition 3: Let $\mathcal{P}$ be an event process with characteristic Λ(x). Then the following hold:

(i) The set of record times $\mathcal{R}_{\mathrm{time}}$ is a temporal Poisson point process with rate 1/t.

(ii) The set of record values $\mathcal{R}_{\mathrm{value}}$ is a Poisson point process (on $\mathbb{R}$) with rate λ(x)/Λ(x).

In particular, for the Fréchet, Weibull, and Gumbel cases, the rate λ(x)/Λ(x) of the Poisson point process of record values $\mathcal{R}_{\mathrm{value}}$ is as follows:

$$\text{Fréchet: } \frac{\alpha}{x}\ (x > 0); \quad \text{Weibull: } \frac{\alpha}{|x|}\ (x < 0); \quad \text{Gumbel: } 1\ (x \in \mathbb{R}).$$
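Proposition 3(ii) yields a direct simulation of successive record values (a sketch, Python/NumPy; names are ours): since −ln Λ maps the record values to a unit-rate Poisson process on the line, successive records satisfy Λ(Rk+1) = Λ(Rk)e^{−Zk} with i.i.d. unit-mean exponential Zk.

import numpy as np

rng = np.random.default_rng(10)

def record_values(r0, n, lam, lam_inv):
    """Successive record values above the record level r0, per Proposition 3(ii):
    -log(Lambda(.)) maps the record values to a unit-rate Poisson process."""
    records = [r0]
    for _ in range(n - 1):
        z = rng.exponential(1.0)
        records.append(lam_inv(lam(records[-1]) * np.exp(-z)))
    return np.array(records)

# Gumbel case: Lambda(x) = exp(-x), so the records grow by i.i.d. Exp(1) steps,
# i.e., they themselves form a unit-rate Poisson process on the line.
recs = record_values(0.0, 10, lam=lambda x: np.exp(-x), lam_inv=lambda u: -np.log(u))
print(recs)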

Several remarks are in order:

(i) We emphasize that the record set $\mathcal{R}$ itself is not a Poisson point process on $[0,\infty) \times \mathbb{R}$, since the probability $P(\{(t_j,x_j)\}_{j=1}^{n} \subset \mathcal{R})$ given in (34) does not admit the multiplicative form (5).

(ii) Equation (35) is the continuous-time counterpart of Rényi's record theorem [18]. Rényi's theorem asserts that if $\{\xi_j\}_{j=1}^{n}$ is an i.i.d. sequence of random variables and Ej is the event {ξj is a record value}, then the events $\{E_j\}_{j=1}^{n}$ are independent and P(Ej) = 1/j.

(iii) Results analogous to those given in (35) and (36) for i.i.d. random sequences can be found in [3, Chap. 5].

(iv) Recall the waiting time L(t) defined in Subsection 3.2: L(t) is the length of the time period elapsing from time t until the first occurrence of a record event after time t. Note that {L(t) > l} if and only if the set of record times $\mathcal{R}_{\mathrm{time}}$ has no points in the interval [t, t + l]. Hence, Proposition 3 yields that

$$P(L(t) > l) = \exp\left\{-\int_t^{t+l}\frac{ds}{s}\right\} = \frac{t}{t+l},$$

reaffirming (16).

7. CONCLUSIONS

Motivated by the fact that the classic EVT undertakes a discrete-time approach, whereas most “real-world” systems are usually continuous time, our aim in this work was to introduce a simple and parsimonious model, counterpart to the discrete-time i.i.d. model, for the study of extremes in continuous time.

To that end, we considered a generic continuous-time system in which events of random magnitudes arrive stochastically—following time-homogeneous Poisson-point-process dynamics on $[0,\infty) \times \mathbb{R}$. The (monotone decreasing) function Λ(x) was used to denote the Poissonian rate at which events of magnitude greater than x occur and was assumed to satisfy limx→−∞ Λ(x) = +∞ and limx→+∞ Λ(x) = 0. This ensured the following: (i) the occurrences of the events are everywhere dense on the time axis (i.e., there are countably many events occurring on any time interval), whereas (ii) events greater than any given level occur discretely (i.e., there are only finitely many such events on any time interval).

This Poissonian model turned out to naturally accommodate the study and investigation of extremes in continuous time:

  • the maximum X1(t): the magnitude of the greatest event occurring during the time interval [0,t];
  • the nth “runner-up” Xn+1(t): the magnitude of the (n + 1)st greatest event occurring during the time interval [0,t];
  • the “top n” (X1(t),…,Xn(t)): the vector of magnitudes of the n greatest events occurring during the time interval [0,t];
  • the set of record times $\mathcal{R}_{\mathrm{time}}$: the time epochs at which the record events occurred;
  • the set of record values $\mathcal{R}_{\mathrm{value}}$: the values of the record events.

Furthermore, for any fixed time t, we explored the sequence of order statistics $\{X_n(t)\}_{n=1}^{\infty}$—viewed as a stochastic process indexed by the parameter n:

  • Structure: We discovered that the increasing sequence of points $\{\Lambda(X_n(t))\}_{n=1}^{\infty}$ forms a standard Poisson process with rate t.
  • Simulation: We devised a markedly simple algorithm for the simulation of the sequence $\{X_n(t)\}_{n=1}^{\infty}$.
  • Hierarchy: We discovered that the ratios $\{(\Lambda(X_n(t))/\Lambda(X_{n+1}(t)))^n\}_{n=1}^{\infty}$ are independent and uniformly distributed on the unit interval.

Throughout, emphasis was focused on three special cases: (i) Fréchet, $\Lambda(x) = x^{-\alpha}$ (x > 0); (ii) Weibull, $\Lambda(x) = |x|^{\alpha}$ (x < 0); (iii) Gumbel, $\Lambda(x) = \exp\{-x\}$ (x real). For these cases, the resulting maximum distributions are governed, respectively, by the Fréchet, Weibull, and Gumbel probability laws. These laws are of major importance since they are the universal "stable laws" of the classic EVT—the only possible maxima scaling limits.

We hope that researchers in the engineering and informational sciences, when having to deal with extreme events in continuous-time systems, will find the modeling approach and the results presented in this work useful.

Acknowledgments

The author would like to thank the anonymous referee for his helpful and constructive comments.

APPENDIX

We introduce an order-preserving transformation of event processes and use it to prove Proposition 2.

A.1. An Order-Preserving Transformation

Given a point process $\mathcal{P}$ on [0,∞) × I and a monotone-increasing bijection φ : I → J (I and J being subintervals of the real line $\mathbb{R}$), consider the transformation

$$\Phi(t,x) := (t, \varphi(x)); \tag{A.1}$$

that is, the point (t,x) of the process $\mathcal{P}$ is mapped by the transformation Φ to the point (t,φ(x)) of the process $\Phi(\mathcal{P})$. The map Φ transforms the point process $\mathcal{P}$ (on [0,∞) × I) to a new point process $\tilde{\mathcal{P}} := \Phi(\mathcal{P})$ (on [0,∞) × J). Furthermore, the map Φ preserves the order of the order statistics: If $\{X_n(t)\}_{n=1}^{\infty}$ is the sequence of order statistics of the process $\mathcal{P}$ and $\{\tilde{X}_n(t)\}_{n=1}^{\infty}$ is the sequence of order statistics of the process $\tilde{\mathcal{P}}$, then

$$\tilde{X}_n(t) = \varphi(X_n(t)). \tag{A.2}$$
Now, if $\mathcal{P}$ is a Poisson point process with rate λ(x), then standard probabilistic arguments imply that $\tilde{\mathcal{P}} = \Phi(\mathcal{P})$ is also a Poisson point process, and its rate is $\tilde{\lambda}(y) = \lambda(\varphi^{-1}(y))\,(\varphi^{-1})'(y)$. In particular, if $\mathcal{P}$ is a sample process with characteristic Λ(x), then $\tilde{\mathcal{P}}$ is a new sample process with characteristic

$$\tilde{\Lambda}(y) = \Lambda(\varphi^{-1}(y)). \tag{A.3}$$

Equation (A.3) can also be derived directly by combining together (A.2) and (14).

Hence, given a pair of sample processes—$\mathcal{P}$ with characteristic Λ(x) and $\tilde{\mathcal{P}}$ with characteristic $\tilde{\Lambda}(y)$—the map Φ induced by the monotone-increasing bijection

$$\varphi := \tilde{\Lambda}^{-1} \circ \Lambda \tag{A.4}$$

transforms the process $\mathcal{P}$ to the process $\tilde{\mathcal{P}}$.

Equations (A.3) and (A.4) enable us to transform between different sample processes while preserving the order of their order statistics.
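As an illustration (a sketch in Python/NumPy; names are ours): the bijection (A.4) mapping the Gumbel sample process (Λ(x) = e^{−x}) to the Fréchet one ($\tilde{\Lambda}(y) = y^{-\alpha}$) is φ(x) = e^{x/α}, and applying φ to simulated Gumbel order statistics reproduces Fréchet order statistics exactly.

import numpy as np

rng = np.random.default_rng(11)

t, n, alpha = 2.0, 5, 1.5
z = np.cumsum(rng.exponential(1.0, size=n))   # Z_1 + ... + Z_k, as in (23)

gumbel_os = -np.log(z / t)                    # Gumbel order statistics via (23)
frechet_os = np.exp(gumbel_os / alpha)        # phi(x) = e^{x / alpha}, the bijection (A.4)
print(frechet_os)
print((t / z) ** (1.0 / alpha))               # direct Frechet simulation -- identical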

A.2. Proof of Proposition 2

We split the proof into two steps. In the first step, we prove that Proposition 2 holds for the special case of an event process $\mathcal{Y}$ with characteristic $\Lambda_{\mathcal{Y}}(y) = 1/y$ (y > 0). In the second step, we use the order-preserving transformation of Subsection A.1 to validate Proposition 2 for an arbitrary event process $\mathcal{P}$.

We remark that the running maximum process $(Y_1(t))_{t \ge 0}$ (associated with the event process $\mathcal{Y}$) is analogous, albeit not identical, to the "Poisson hyperbolic staircase process" introduced and studied in [16].

Step 1: Fix an integer n. The characteristic of the event process $\mathcal{Y}$ is $\Lambda_{\mathcal{Y}}(y) = 1/y$, so its rate is $\lambda_{\mathcal{Y}}(y) = 1/y^2$ (y > 0); hence, (21) implies that the joint probability density function of the vector of order statistics (Y1(t),…,Yn+1(t)) is given by

$$f(y_1, \ldots, y_{n+1}) = t^{n+1}\left(\prod_{j=1}^{n+1}\frac{1}{y_j^2}\right)\exp\{-t/y_{n+1}\} \quad (y_1 > y_2 > \cdots > y_{n+1} > 0).$$

Now, changing variables from $(y_1, \ldots, y_{n+1})$ to $(u_1, \ldots, u_n, y_{n+1})$, where $u_j := y_{j+1}/y_j$, factorizes this joint density into the product

$$\left(\prod_{j=1}^{n} j\,u_j^{\,j-1}\right)\cdot\frac{t^{n+1}}{n!}\,\frac{\exp\{-t/y_{n+1}\}}{y_{n+1}^{\,n+2}}.$$

This, in turn, implies that the ratios $\{Y_{j+1}(t)/Y_j(t)\}_{j=1}^{n}$ are independent random variables governed by the Beta distribution

$$P\!\left(\frac{Y_{j+1}(t)}{Y_j(t)} \le u\right) = u^j \quad (0 < u < 1),\ j = 1, \ldots, n.$$
Step 2: Let $\mathcal{P}$ be an arbitrary event process with characteristic Λ(x) and let $\mathcal{Y}$ be an event process with characteristic $\Lambda_{\mathcal{Y}}(y) = 1/y$ (y > 0). Equation (A.4) implies that the order-preserving map Φ transforming the process $\mathcal{Y}$ to the process $\mathcal{P}$ is induced by the monotone-increasing bijection $\varphi(y) = \Lambda^{-1}(1/y)$ (or, equivalently, using (A.3): if $(t,y) \in \mathcal{Y}$, then $(t, \Lambda^{-1}(1/y)) \in \mathcal{P}$). Exploiting the fact that Φ is order-preserving, (A.2) implies that

$$X_n(t) = \Lambda^{-1}\!\left(\frac{1}{Y_n(t)}\right), \quad \text{that is,} \quad \frac{\Lambda(X_n(t))}{\Lambda(X_{n+1}(t))} = \frac{Y_{n+1}(t)}{Y_n(t)}.$$

Hence, using Step 1, we obtain that the ratios $\{\Lambda(X_n(t))/\Lambda(X_{n+1}(t))\}_{n=1}^{\infty}$ are independent random variables governed by the Beta distribution

$$P\!\left(\frac{\Lambda(X_n(t))}{\Lambda(X_{n+1}(t))} \le u\right) = u^n \quad (0 < u < 1).$$
Since the choice of n was arbitrary, the proof of Proposition 2 is complete.

REFERENCES

Bertoin, J. (1996). Lévy processes. Cambridge: Cambridge University Press.
Bertoin, J. (1999). Subordinators: Examples and applications. Lecture Notes in Mathematics 1717. New York: Springer-Verlag.
Eliazar, I. (2005). On geometric record times. Physica A 348: 181–198.
Embrechts, P., Klüppelberg, C., & Mikosch, T. (1997). Modelling extremal events for insurance and finance. Berlin: Springer-Verlag.
Embrechts, P. & Omey, E. (1983). On subordinated distributions and record processes. Mathematical Proceedings of the Cambridge Philosophical Society 93: 339–353.
Evans, M., Hastings, N., & Peacock, B. (1993). Statistical distributions, 2nd ed. New York: Wiley.
Feller, W. (1971). An introduction to probability theory and its applications, Vol. 2, 2nd ed. New York: Wiley.
Fisher, R.A. & Tippett, L.H.C. (1928). Limiting forms of the frequency distribution of the largest and smallest member of a sample. Proceedings of the Cambridge Philosophical Society 24: 180–190.
Fréchet, M. (1927). Sur la loi de probabilité de l'écart maximum. Ann. Soc. Polon. Math. Cracovie 6: 93–116.
Galambos, J. (1987). Asymptotic theory of extreme order statistics, 2nd ed. New York: Krieger.
Gaver, D.P. (1976). Random record models. Journal of Applied Probability 13: 538–547.
Gnedenko, B. (1943). Sur la distribution limite du terme maximum d'une série aléatoire. Annals of Mathematics 44: 423–453. Translated and reprinted in S. Kotz & N.L. Johnson, eds. (1992). Breakthroughs in statistics I. Berlin: Springer-Verlag, pp. 195–225.
Gumbel, E.J. (1958). Statistics of extremes. New York: Columbia University Press.
Ibragimov, I.A. & Linnik, Yu.V. (1971). Independent and stationary sequences of random variables. Groningen: Wolters-Noordhoff.
Kingman, J.F.C. (1993). Poisson processes. Oxford: Oxford University Press.
Kotz, S. & Nadarajah, S. (2000). Extreme value distributions. London: Imperial College Press.
Levikson, B., Rolski, T., & Weiss, G. (1999). On the Poisson hyperbolic staircase. Probability in the Engineering and Informational Sciences 13: 11–32.
Reiss, R.D. (1993). A course on point processes. New York: Springer-Verlag.
Rényi, A. (1976). On outstanding values of a sequence of observations. In P. Turán (ed.), Selected papers of A. Rényi, Vol. 3. Budapest: Akadémiai Kiadó.
Thomas, M. & Reiss, R.D. (2001). Statistical analysis of extreme values. Boston: Birkhäuser.
von Bortkiewicz, L. (1922). Variationsbreite und mittlerer Fehler. Sitzungsber. Berlin Math. Ges. 21: 3–11.
von Mises, R. (1936). La distribution de la plus grande de n valeurs. Rev. Math. Union Interbalcanique 1: 141–160. Translated and reprinted in (1964). Selected papers of Richard von Mises II. Providence, RI: American Mathematical Society, pp. 271–294.
Westcott, M. (1977). The random record model. Proceedings of the Royal Society, London, A 356: 529–547.