
NUMERICAL TRANSFORM INVERSION USING GAUSSIAN QUADRATURE

Published online by Cambridge University Press:  12 December 2005

Peter den Iseger
Affiliation:
Cardano Risk Management, Rotterdam, The Netherlands, and Erasmus University, Rotterdam, The Netherlands; E-mail: p.deniseger@cardano.nl

Abstract

Numerical inversion of Laplace transforms is a powerful tool in computational probability. It greatly enhances the applicability of stochastic models in many fields. In this article we present a simple Laplace transform inversion algorithm that can compute the desired function values for a much larger class of Laplace transforms than the ones that can be inverted with the known methods in the literature. The algorithm can invert Laplace transforms of functions with discontinuities and singularities, even if we do not know the location of these discontinuities and singularities a priori. The algorithm only needs numerical values of the Laplace transform, is extremely fast, and the results are of almost machine precision. We also present a two-dimensional variant of the Laplace transform inversion algorithm. We illustrate the accuracy and robustness of the algorithms with various numerical examples.

Type
Research Article
Copyright
© 2006 Cambridge University Press

1. INTRODUCTION

The emphasis on computational probability increases the value of stochastic models in queuing, reliability, and inventory problems. It is becoming standard for modeling and analysis to include algorithms for computing the probability distributions of interest. Several tools have been developed for this purpose. A very powerful tool is numerical Laplace inversion. Probability distributions can often be characterized in terms of Laplace transforms. Many results in queuing and reliability, among others, are given in the form of transforms and become amenable to practical computation once fast and accurate methods for numerical Laplace inversion are available. Numerical Laplace inversion is much easier to use than it is often made to seem. This article presents a new and effective algorithm for numerical Laplace inversion. The new algorithm outperforms existing methods, particularly when the function to be inverted involves discontinuities or singularities, as is often the case in applications.

The algorithm can compute the desired function values f (kΔ), k = 0,1,…,M − 1 for a much larger class of Laplace transforms than the ones that can be inverted with the known methods in the literature. It can invert Laplace transforms of functions with discontinuities and singularities, even if we do not know the location of these discontinuities and singularities a priori, and also locally nonsmooth and unbounded functions. It only needs numerical values of the Laplace transform, the computations cost M log(M) time, and the results are near machine precision. This is especially useful in applications in computational finance, where one needs to compute a large number of function values by transform inversion (cf. Carr and Madan [8]). With the existing Laplace transform inversion methods, this is very expensive. With the new method, one can compute the function values f (kΔ), k = 0,1,…,M − 1, at once in M log(M) time.

There are many known numerical Laplace inversion algorithms. Four widely used methods are (1) the Fourier series method, which is based on the Poisson summation formula (cf. Dubner and Abate [13], Abate and Whitt [4,5], Choudhury, Lucantoni, and Whitt [9], O'Cinneide [17], and Sakurai [21]), (2) the Gaver-Stehfest algorithm, which is based on combinations of Gaver functionals (cf. Gaver [14] and Stehfest [22]), (3) the Weeks method, which is based on bilinear transformations and Laguerre expansions (cf. Weeks [27] and Abate, Choudhury, and Whitt [1,2]), and (4) the Talbot method, which is based on deforming the contour in the Bromwich inversion integral (cf. Talbot [25] and Murli and Rizzardi [16]).

The new algorithm is based on the well-known Poisson summation formula and can therefore be seen as a so-called Fourier series method, developed in the 1960s by Dubner and Abate [13]. The Poisson summation formula relates an infinite sum of Laplace transform values to the z-transform (Fourier series) of the function values f (kΔ), k = 0,1,…. Unfortunately, the infinite sum of Laplace transform values, in general, converges very slowly. In their seminal article, Abate and Whitt [4] used an acceleration technique called Euler summation to accelerate the convergence rate of the infinite sum of Laplace transform values. Recently, Sakurai [21] extended the Euler summation to be effective for a wider class of functions. A disadvantage of all the variants of the Fourier series methods is that unless one has specified information about the location of singularities, the accelerating techniques are not very effective and the convergence is slow.
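As a concrete baseline, the classical Fourier-series method with Euler summation can be sketched in a few lines of Python. This is the Abate–Whitt scheme referred to above, not the new algorithm of this article; the transform 1/(s + 1) (with inverse e−t), the evaluation point t = 1, and the parameters m = 11, n = 15, A = 18.4 are customary illustrative choices.

```python
import math

def euler_inversion(lam, t, m=11, n=15, A=18.4):
    """Classical Fourier-series inversion with Euler summation
    (Abate-Whitt).  lam is the Laplace transform, callable at
    complex arguments; m, n, A are the customary defaults,
    not parameters taken from this paper."""
    # Partial sums of the (nearly) alternating Fourier series.
    def partial(N):
        s = math.exp(A / 2) / (2 * t) * lam(A / (2 * t)).real
        for k in range(1, N + 1):
            s += (math.exp(A / 2) / t) * ((-1) ** k) * \
                 lam(complex(A, 2 * k * math.pi) / (2 * t)).real
        return s
    # Euler (binomial) averaging of the partial sums accelerates convergence.
    return sum(math.comb(m, k) * 2.0 ** (-m) * partial(n + k)
               for k in range(m + 1))

# Example: invert Λ(s) = 1/(s + 1), whose inverse is e^{-t}, at t = 1.
approx = euler_inversion(lambda s: 1.0 / (s + 1.0), 1.0)
```

For smooth transforms such as this one, the Euler average of roughly 26 partial sums already yields about eight significant digits; the point made above is that this acceleration degrades badly near discontinuities and singularities whose locations are unknown.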

We present a Gaussian quadrature rule for the infinite sum of Laplace transform values. The Gaussian quadrature rule accurately approximates the infinite sum by a finite sum. Then we compute the function values f (kΔ), k = 0,1,…,M − 1, efficiently with the well-known fast Fourier transform (FFT) algorithm (cf. Cooley and Tukey [11]). For smooth functions, the results are near machine precision. With a simple modification, we can handle known discontinuities at the points kΔ, k = 0,1,…, such that the running time of the algorithm is still insensitive to the number of discontinuities and we get results near machine precision. We also extend the Gaussian quadrature formula to the multidimensional Laplace transform and present a multidimensional Laplace transform inversion algorithm. With this algorithm, we can compute the function values f (k1 Δ1,…,kj Δj), kj = 0,1,…,M − 1, at once in Mj log(Mj) time and the results are again near machine precision. We develop a modification that gives results near machine precision for functions with singularities and discontinuities at arbitrary locations. We also develop a modification that is effective for almost all kinds of discontinuity, singularity, and local nonsmoothness. With this modification, we can obtain results near machine precision at continuity points for almost all functions, even if we do not know the location of these discontinuities and singularities a priori.

The accuracy and robustness of the algorithms are illustrated by examining all of the Laplace transforms that are used in the overview article of Davies and Martin (cf. [12]). We also demonstrate the effectiveness of the algorithm with an application in queuing theory by computing the waiting time distribution for an M/D/1 queue.

2. OUTLINE OF THE METHOD

Let f be a complex-valued Lebesgue integrable function with e−αtf (t) ∈ L1[0,∞) for all α > c. The Laplace transform of f is defined by the integral

f̂ (s) = ∫0∞ e−stf (t) dt,  Re(s) > c.

The Poisson summation formula relates an infinite sum of Laplace transform values to the z-transform (damped Fourier series) of the function values f (kΔ), k = 0,1,….

Theorem 2.1 (PSF): Suppose that f ∈ L1[0,∞) and f is of bounded variation. Then for all υ ∈ [0,1),

Σk∈Z f̂ (a + 2πi(k + υ)) = Σk≥0 e−akf (k)e−2πiυk;  (2.2)

here, f (t) has to be understood as (f (t+) + f (t−))/2, the so-called damping factor a is a given real number, and i denotes the imaginary unit √−1.

A proof of this classical result can, for instance, be found in Abate and Whitt [4] or Mallat [15]. The right-hand side of (2.2) is the (damped) Fourier series of the function values {f (k); k = 0,1,…}. In this article we present a Gaussian quadrature rule for the left-hand side of (2.2); that is, we approximate this infinite sum with a finite sum,

with {βk} given positive numbers and the {λk} given real numbers. In Appendix A the reader can find these numbers for various values of n. We can compute the function values {f (k); k = 0,1,…,M − 1} by

here, M2 is a given power of 2. With the well-known FFT algorithm (cf. Cooley and Tukey [11]) we can compute these sums in M2 log(M2) time. We can compute the function values f (kΔ), k = 0,1,…, by applying (2.3) to the scaled Laplace transform

f̂Δ(s) = (1/Δ) f̂ (s/Δ),

which is the Laplace transform of the function fΔ(t) = f (Δt).
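The scaling step can be checked numerically. The sketch below (with the illustrative choices f (t) = e−t, Δ = 1/4, and s = 2, so that the transform is 1/(s + 1)) compares a direct quadrature of the transform of f (Δt) with the scaled transform (1/Δ) f̂ (s/Δ):

```python
import numpy as np

# Scaling property check for f(t) = e^{-t}, Λ(s) = 1/(s + 1):
# the Laplace transform of f_Δ(t) = f(Δt) should equal (1/Δ) Λ(s/Δ).
# The test function and parameters are illustrative choices.
def laplace(s):
    return 1.0 / (s + 1.0)

delta, s = 0.25, 2.0
t = np.linspace(0.0, 60.0, 600001)      # dense grid; the integrand decays fast
y = np.exp(-s * t) * np.exp(-delta * t) # e^{-st} f(Δt)

# Composite trapezoidal rule on the uniform grid.
h = t[1] - t[0]
direct = h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

scaled = laplace(s / delta) / delta     # (1/Δ) Λ(s/Δ) = 1/(s + Δ)
```

Both quantities equal 1/(s + Δ) = 1/2.25 up to quadrature error.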

3. THE GAUSSIAN QUADRATURE RULE

We associate with the left-hand side of (2.2) an inner product, say 〈·,·〉Qυ.

Definition 3.1: Let the inner product 〈·,·〉Qυ be given by

Having

, we can write the left-hand side of (2.2) as

. The idea is to approximate this inner product with a Gaussian quadrature rule. As usual, we associate with the inner product 〈·,·〉Qυ the norm ∥·∥Qυ, which induces the sequence space L2(Qυ). We say that a function f belongs to L2(Qυ) iff the sequence {f (1/(2πi(k + υ)))} belongs to L2(Qυ). Let the polynomials {qnυ; n ∈ N0} be given by

where

The following result holds.

Theorem 3.2: The set {qjυ; j = 0,1,…} is a complete orthogonal set of polynomials in L2(Qυ).

The proof is given in Appendix E.

We can now easily obtain the desired Gaussian quadrature rule (cf. Definition C.1 in Appendix C). We denote this quadrature rule with 〈·,·〉Qυn.

Definition 3.3: The inner product 〈·,·〉Qυn is given by

The {μkυ} are the zeros of the polynomial qnυ. The so-called Christoffel numbers {αkυ} are positive numbers given by

It is well known (cf. [24]) that the roots of orthogonal polynomials are all distinct and lie on the support of the inner product; thus, the roots {μkυ} are all distinct and lie on the imaginary axis. Having

, we approximate

with our Gaussian quadrature rule and obtain the quadrature formula

where the last identity follows from the PSF formula (2.2). In Appendix F we explain how we can compute the numbers {μkυ} and {αkυ} efficiently. Taking only the real part yields the quadrature formula

here we used that the αkυ are real. As in the Abate and Whitt algorithm (cf. [4]), we use a damping factor for functions with support contained in [0,∞); that is, we take the Laplace transform

, which is the Laplace transform of e−atf (t). This yields the quadrature rule

where

It remains to compute the function values f (k), k = 0,1,…,M − 1. We can realize this by the discrete Fourier series inversion formula

with M2 a given power of 2. With the well-known FFT algorithm (cf. Cooley and Tukey [11]), we can compute the sums (3.4) in M2 log(M2) time. In Appendix H we discuss the details of this approach.
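The discrete Fourier inversion step can be illustrated in isolation: sampling the damped z-transform Σk e−akf (k)e−2πiυk at the points υ = ℓ/M2 and applying an inverse FFT returns e−akf (k), up to aliasing terms that the damping factor suppresses. In the sketch below the samples are built directly from a known sequence (an illustrative f, with M = 32, M2 = 256, and a = 44/M2 as in Section 4) rather than from the quadrature rule:

```python
import numpy as np

# Sketch of the inversion step: samples of the damped z-transform
#   ĝ(υ) = Σ_k e^{-a k} f(k) e^{-2πi υ k}
# at υ = ℓ/M2 are turned back into e^{-a k} f(k) by an inverse FFT.
# The sequence f and the truncation length are illustrative choices.
M, M2 = 32, 256
a = 44.0 / M2
f = lambda k: np.exp(-0.1 * k) * np.cos(0.2 * k)

k = np.arange(4 * M2)                     # long truncation of the z-transform
v = np.arange(M2) / M2
ghat = np.exp(-2j * np.pi * np.outer(v, k)) @ (np.exp(-a * k) * f(k))

recovered = np.fft.ifft(ghat).real        # ≈ e^{-a k} f(k), k = 0,...,M2-1
f_approx = np.exp(a * np.arange(M)) * recovered[:M]  # undo the damping
```

The recovery error is at the level of floating-point round-off, since the aliasing terms carry a factor e−aM2 = e−44.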

Remark 3.4: The quadrature rule is also valid for a function f with support (−∞,∞). Formula (3.3) then reads as

Remark 3.5: For smooth functions on (−∞,∞) or [0,∞), the quadrature formula is extremely accurate. In Appendix G and, in particular, Theorem G.3 we prove that for smooth functions, the approximation error of (3.3) is approximately

with α some number in (0,2). This implies that if we compute the function values f (kΔ), k = 0,1,…, by inverting the Laplace transform

with (3.4), the approximation error is of the order O(Δ2n+1). Formula (3.5) also shows that if

is bounded, then the quadrature formula converges faster than any power of n.

Remark 3.6: The convergence of the quadrature rule is insensitive to a discontinuity at t = 0 (cf. Remark G.4 in Appendix G).

Remark 3.7: Gaussian quadrature is a standard method for the numerical evaluation of integrals (cf. Stoer and Bulirsch [23]). Piessens (cf. [18,19]) tried to apply Gaussian quadrature directly on the Bromwich inversion integral. He reports in [19] for the first Laplace transform of Table 1 (cf. Section 4.2) an average approximation error of 1.0e-4 for a 17-point quadrature rule (for the values t = 0,1,…,12). Our method gives an average approximation error of 1.0e-15 (cf. Table 1).

Results for Analytical Test Functions

4. A SIMPLE LAPLACE TRANSFORM INVERSION ALGORITHM

In this section we explain how we can approximate the quadrature rule (3.2) with a simpler quadrature rule. In addition to this approximation being easier to implement, it is also numerically more stable. Therefore, we strongly recommend using this quadrature rule.

4.1. The Algorithm

We start by writing the PSF (cf. (2.2)) as

with

Applying the quadrature rule on the left-hand side of (4.1) yields the approximation

with

Considering only the real part yields the formula

with

We make

periodic by defining

equal to

It remains to compute the function values f (k), k = 0,1,…,M − 1. We can realize this by the discrete Fourier series inversion formula

with M2 a given power of 2.

Finally, we show how to simplify (4.7). Since the coefficients of the polynomials qk0.5, k = 1,2,…, are real, we can order the roots {μkυ} such that μk0.5 and μn−k0.5 are pairs of conjugate numbers. It follows from (3.1) that the Christoffel numbers αk0.5 and αn−k0.5 are equal. Using this symmetry, (4.4), and the symmetry cos(2πυ) = cos(2π(1 − υ)) yields (for n even) that (4.7) can be simplified to

where

In Appendix A the reader can find these numbers for various values of n. With the well-known FFT algorithm (cf. Cooley and Tukey [11]), we can compute the sums (4.8) in M2 log(M2) time. In Appendix H we discuss the details of the discrete Fourier inversion formula (4.8). We can now present our Laplace inversion algorithm.

Algorithm

Input:

, with M a power of 2

Output: f (ℓΔ), ℓ = 0,1,…,M − 1

Parameters: M2 = 8M, a = 44/M2, and n = 16

Step 1. For k = 0,1,…,M2 and j = 1,2,…,n/2, compute the numbers

Step 2. For ℓ = 0,1,…,M2 − 1, compute the numbers

with the backwards FFT algorithm.

Step 3. Set f (ℓΔ) = eaℓfℓ for ℓ = 0,1,…,M − 1.

In Appendix H we discuss the details of Steps 2 and 3 of the algorithm.

Remark 4.1: We can compute the function values f (kΔ), k = 0,1,…, by applying the quadrature rule to the scaled Laplace transform

Remark 4.2: Extensive numerical experiments show that n = 16 gives, for all smooth functions, results that attain machine precision. For double precision, we choose a = 44/M2 and M2 = 8M. For n = 16, we need 8 Laplace transform values for the quadrature rule and we use an oversampling factor of M2/M = 8; thus, on average, we need 64 Laplace transform values for the computation of 1 function value. The method can be efficiently implemented with modern numerical libraries that provide FFT routines and matrix/vector multiplication routines (BLAS). If we are satisfied with less precision, then we can reduce the oversampling factor and/or the number of quadrature points. For other machine precisions, it is recommended to choose a somewhat larger than −log(ε)/M2, with ε the machine precision.

Remark 4.3: The choice of the parameter M2 is a trade-off between the running time and the accuracy of the algorithm. The actual running time is M2 log(M2); hence, from this perspective, we want to choose M2 as small as possible. However, in Step 3 of the algorithm we multiply the results by a factor exp(aℓ) = exp(44ℓ/M2); to obtain a numerically stable algorithm, we want to choose M2 as large as possible. We recommend choosing M2 = 8M.

Remark 4.4: Since the Gaussian quadrature rule is exact on the space of polynomials of degree 2n − 1 or less, we obtain that

This is clearly minimal for υ = 0.5. This shows that (4.6) is numerically more stable than the original Gaussian quadrature formula.

Remark 4.5: If n is odd, then one of the {μk0.5; k = 1,2,…,n} is equal to zero. The evaluation of (3.2) can be a problem in this case. To avoid this, we take n even.

Remark 4.6: The Gaussian quadrature formula (3.3) is extremely accurate for smooth inverse functions (cf. Appendix G and Remark 3.5). Since the smoothness of the inverse function f implies that the function

is a smooth function too, we can expect that the modified quadrature formula is extremely accurate for smooth functions. All of the numerical examples support this statement (cf. Section 4.2). In fact, many numerical experiments show that the modified formula performs even better. The reason for this is that the modified quadrature formula is numerically more stable.

4.2. Numerical Test Results

To test the method, we examined all of the Laplace transforms that are used in the overview article by Davies and Martin (cf. [12]). In this subsection we discuss the results for eight analytical test functions. These results are presented in Table 1. The inversions are done in 32 points, so M = 32. We have taken M2 = 8M = 256, a = 44/M2, and n = 16. The computations are done with double precision. We have taken Δ = 1/16, Δ = 1, and Δ = 10. We see that for smooth functions the approximation error seems to be independent of Δ. The reason for this is that the approximation error is dominated by round-off error. From all of the examples, we can conclude that the method is very accurate, robust, and fast. In Section 6.2 we discuss the other test functions that are used in the overview article by Davies and Martin (cf. [12]).

5. MULTIDIMENSIONAL LAPLACE TRANSFORM

Let f be a complex-valued Lebesgue integrable function with

. The two-dimensional Laplace transform of f is defined by the integral

In order to extend the algorithm of Section 4 to a two-dimensional Laplace transform algorithm, we need the two-dimensional Poisson summation formula (PSF2).

Theorem 5.1 (PSF2): If

is of bounded variation, then for all υ and ω ∈ [0,1),

where

here, f (x,y) has to be understood as

As in the one-dimensional case, we associate with (5.3) an inner product; we call this inner product 〈·,·〉Qυω.

Definition 5.2: Let the inner product 〈·,·〉Qυω be given by

where ukυ = 1/(2πi(k + υ)).

Having

, we can write (5.3) as

. The idea is to approximate this inner product with a Gaussian quadrature rule. As in the one-dimensional case, we associate with the inner product 〈·,·〉Qυω the norm ∥·∥Qυω and introduce the sequence space L2(Qυω). We say that a function f belongs to L2(Qυω) iff the sequence {f (ukυ,ujω)} belongs to L2(Qυω). Let the polynomials {qkjυω} be given by

The following result holds.

Theorem 5.3: The set {qkjυω; k, j = 0,1,…} is a complete orthogonal set of polynomials in L2(Qυω).

The proof is given in Appendix E.

We can now easily obtain the desired Gaussian quadrature rule. We denote this quadrature rule by 〈·,·〉Qυωn.

Definition 5.4: The inner product 〈·,·〉Qυωn is given by

The {μkυ} are the zeros of the polynomial qnυ. The so-called Christoffel numbers {αk} are positive numbers given by

Having

, we approximate

with Gaussian quadrature rule (5.4). By considering the real part of the quadrature formula, we obtain

with

where

The second equation in (5.5) follows from PSF2 (cf. (5.2)).

As in Section 4, we approximate quadrature rule (5.5) with a numerically more stable quadrature rule. Again, we start by writing the PSF2 (cf. (5.1)) as

where

with

Applying the quadrature rule to (5.6) yields the approximation

with

where

We make

periodic by defining

and by defining

all equal to

It remains to compute the function values f (k,j), k = 0,1,…,M1 − 1 and j = 0,1,…,M2 − 1. We can realize this by the discrete Fourier series inversion formula

where N1 and N2 are given powers of 2. With the well-known (two-dimensional) FFT algorithm (cf. Cooley and Tukey [11]), we can compute the sums (5.8) in N1 N2 log(N1 N2) time.

Algorithm

Input:

with M1 and M2 powers of 2

Output: f (ℓΔ1,mΔ2), ℓ = 0,1,…,M1 − 1, m = 0,1,…,M2 − 1

Parameters: N1 = 8M1, N2 = 8M2, a = 44/N1, b = 44/N2, and n = 16

Step 1. For k = 0,1,…,N1, j = 1,2,…,n, m = 0,1,…,N2, and l = 1,2,…,n, compute the numbers

and the numbers

Set

Step 2. For ℓ = 0,1,…,N1 − 1 and m = 0,1,…,N2 − 1, compute the numbers

with the backwards FFT algorithm.

Step 3. Set f (ℓΔ1,mΔ2) = eaℓebmfℓm for ℓ = 0,1,…,M1 − 1 and m = 0,1,…,M2 − 1.

Remark 5.5: We can compute the function values f (kΔ1,jΔ2), k,j = 0,1,…, by applying the quadrature rule to the scaled Laplace transform

Remark 5.6: In a similar way, we can invert higher-dimensional transforms and compute the function values f (k1 Δ1,…,kd Δd), kj = 0,1,…,Mj − 1, at once in

time.

Remark 5.7: We have tested the two-dimensional Laplace transform inversion with many numerical experiments. All of these experiments show that the two-dimensional inversion algorithm is extremely accurate for smooth functions. In Section 6 we discuss a number of modifications that obtain the same accurate results for nonsmooth functions. We present them in the context of the one-dimensional inversion algorithm, but they can also be used in combination with the two-dimensional Laplace transform inversion algorithm.

6. MODIFICATIONS FOR NONSMOOTH FUNCTIONS

In Appendix G we prove that the inversion algorithm is extremely accurate for smooth functions. In the following subsections we present a number of modifications to obtain the same accurate results for nonsmooth functions. In Section 6.1 we discuss a modification for piecewise smooth functions. With this modification, we can obtain results near machine precision for functions with discontinuities in the points

. In Section 6.2 we discuss a modification for functions with singularities at arbitrary locations. With this modification, we can obtain results near machine precision for functions with discontinuities at arbitrary but a priori known locations. In Section 6.3 we discuss a modification that is effective for almost all kinds of discontinuity, singularity, and local nonsmoothness. With this modification, we can obtain results near machine precision at continuity points for almost all functions, even if we do not know the discontinuities and singularities a priori. The modifications of the next subsections can, in principle, also be used in combination with other Fourier series methods; their effectiveness in that setting is a subject for future research.

6.1. Piecewise Smooth Functions

In this subsection we discuss a modification for piecewise smooth functions. With this modification, we can obtain results near machine precision for functions with discontinuities in the points

. The discontinuities of such a function are represented as exponentials in the Laplace transform. To be more precise, suppose that the function f is a piecewise smooth function with discontinuities in the points

. We can write the Laplace transform

as

with

the Laplace transforms of functions that are smooth on [0,∞). The function f is then given by

here, we use the translation property. Since the fj are smooth functions on [0,∞), the quadrature rule is accurate. Hence,

with sυa = a + 2πiυ. On the other hand,

Combining the above equations yields

We now proceed as in Section 4 and compute the function values f (k), k = 0,1,…,M − 1, with

where

Remark 6.1: If f is piecewise smooth, with discontinuities in the points

, we can write the Laplace transform

. We scale the Laplace transform to

and can obtain the values {f (kα/m)} with high precision.

As an example, consider the Laplace transform

We then obtain the quadrature formula

Algorithm

Input:

, with M a power of 2

Output: f (ℓΔ), ℓ = 0,1,…,M − 1

Parameters: M2 = 8M, a = 44/M2, and n = 16

Step 1. For k = 0,1,…,M2 and j = 1,2,…,n, compute the numbers

and the numbers

Step 2. For ℓ = 0,1,…,M2 − 1, compute the numbers

with the backwards FFT algorithm.

Step 3. Set f (ℓΔ) = eaℓfℓ for ℓ = 0,1,…,M − 1.

In Appendix H we discuss the details of Steps 2 and 3 of the algorithm.

We test this modification on two discontinuous test functions. The results are presented in Table 2. The inversions are done in 32 points, so M = 32. We have taken M2 = 256 = 8M, Δ = 1/16, a = 44/M2, and n = 16.

Results for Noncontinuous Test Functions

We can conclude that our method is also very accurate, robust, and fast for discontinuous functions. In the next subsection we show the effectiveness of the modification of this subsection with an application from queuing theory.

6.1.1. An application in queuing theory.

Consider the M/G/1 queue with arrival rate λ and general service time distribution B with first moment μB. It is assumed that ρ = λμB < 1. It is well known that the distribution of the stationary waiting time W is given by (see, e.g., Asmussen [6])

with

. The Laplace transform

of W is given by

As an example, consider the M/D/1 queue; that is, the service times are deterministic, with mean 1. For this case, the Laplace transform

is given by

Observe that

contains the factor exp(−s). Therefore, introduce the function

For this choice, it follows that

and, therefore, we can apply the modification of Section 6.1. We have calculated the waiting time distribution for ρ = 0.7, 0.8, 0.9, and 0.95. By comparing our results with results of the algorithm in Tijms [26, p.381], it appears that we can calculate the waiting time distribution with near machine accuracy in a fraction of a second; see Table 3.

Results for the Waiting Time Distribution in the M/D/1 Queue
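For reference, the transform inverted in this example can be written down directly. The sketch below uses the standard Pollaczek–Khinchine formula W*(s) = (1 − ρ)s/(s − λ(1 − B*(s))) with deterministic unit service times, B*(s) = e−s; this is the textbook transform (cf. Asmussen [6]), not the modified transform with the exp(−s) factor pulled out:

```python
import cmath

# Standard Pollaczek-Khinchine transform of the stationary waiting
# time W in an M/D/1 queue with unit service times:
#   W*(s) = (1 - ρ) s / (s - λ (1 - e^{-s})),  with ρ = λ.
# This is the classical formula, written out as a sketch.
def waiting_time_transform(s, lam):
    rho = lam                      # deterministic service with mean 1
    return (1.0 - rho) * s / (s - lam * (1.0 - cmath.exp(-s)))

# Sanity check: as the transform of a probability distribution,
# W*(s) -> 1 as s -> 0.
value = waiting_time_transform(1e-6, 0.8)
```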

Remark 6.2: We can generalize the above approach to a service distribution

We then obtain

6.2. Functions with Singularities

In this subsection we discuss a modification for functions with singularities at arbitrary locations. With this modification, we can obtain results near machine precision for functions with discontinuities at arbitrary but a priori known locations. Suppose that there is a singularity at t = α, with

. We then consider the function

where the so-called window function w is a trigonometric polynomial with period 1, w(0) = 1, and w(α) = 0, and q is a positive integer. Hence, the function fw is smooth at t = α and

We can compute the function values f (k) by inverting the Laplace transform

with the algorithm of Section 4. The following class of window functions gives results near machine precision:

where the coefficients A and B are given by

and p is chosen such that (pα mod 1) is close to ½. We obtain by the modulation property that

Remark 6.3: Suppose that there are singularities in the points αj, j = 1,2,…,m. Then we can multiply the windows; that is, we take

where wj is a trigonometric polynomial with period 1, wj(0) = 1, and wjj) = 0.

Remark 6.4: If we want to compute f (kΔ), we have to replace

.

If there is a singularity at t = 0, then A and B are not defined. Consider, therefore, the function

The function w has period 2, w(1) = 1, w(α) = 0, and ∂w(α)/∂α = 0. Hence, fw is smooth at t = 0 and

We obtain from

and the modulation property that

We can compute the function values f (2k + 1), with the algorithm of Section 4. If we want to compute f (Δ(2k + 1)), we have to replace

.
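The modulation property invoked here can be checked numerically. For the window w(t) = sin(πt/2)2 = (1 − cos(πt))/2 (the window used in the experiments of this section), the Laplace transform of fw is f̂ (s)/2 − (f̂ (s − iπ) + f̂ (s + iπ))/4. The sketch below verifies this identity for the illustrative choice f (t) = e−t:

```python
import numpy as np

# Modulation-property check: with w(t) = sin(πt/2)^2 = (1 - cos(πt))/2,
# the transform of f(t) w(t) equals Λ(s)/2 - (Λ(s - iπ) + Λ(s + iπ))/4.
# f(t) = e^{-t}, so Λ(s) = 1/(s + 1), an illustrative choice.
L = lambda s: 1.0 / (s + 1.0)
s = 1.5

# Direct quadrature of the transform of f(t) w(t).
t = np.linspace(0.0, 80.0, 800001)
y = np.exp(-s * t) * np.exp(-t) * np.sin(np.pi * t / 2.0) ** 2
h = t[1] - t[0]
direct = h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# Same quantity via the modulation property (the shifted terms are
# complex conjugates, so the combination is real).
via_modulation = (L(s) / 2.0 - (L(s - 1j * np.pi) + L(s + 1j * np.pi)) / 4.0).real
```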

Remark 6.5: For singularities at t = 0 of order tα, 0 < α < 1, we can obtain results near machine precision with q = 1. For singularities at t = 0 of order tα, −1 < α < 0, we can obtain results near machine precision with q = 2. For discontinuities and singularities at arbitrary t, we recommend using q = 6.

Remark 6.6: Since the composite inverse function fw(tΔ) is a highly oscillating function, we recommend choosing n = 32. With this choice, we obtain results near machine precision. If the inversion point is close to a singularity, we can compute f (t) accurately by inverting the scaled Laplace transform

with Δ small enough. The price we have to pay for a small Δ is that M becomes large (M ≈ t/Δ).

Remark 6.7: The idea of smoothing through the use of multiplying smoothing functions has been discussed in the context of Euler summation by O'Cinneide [17] and Sakurai [21]. O'Cinneide called the idea “product smoothing.”

We end this subsection with the results for eight test functions with singularities that were used in the overview article of Davies and Martin (cf. [12]). These results are presented in Table 4. The inversions are done in 32 points, so M = 32. We have taken M2 = 256 = 8M, and for all of the examples, we have taken a = 44/M2. We have used the window function w(t) = sin(πt/2)2, and we have taken q according to Remark 6.5. In addition to the previous parameters, we have taken n = 32. The computations are done with double precision. We have taken Δ = 1/16, Δ = 1, and Δ = 10. We see that, for smooth functions, the approximation error seems to be independent of Δ. The reason for this is that the approximation error is dominated by round-off errors and the error caused by the oscillation of the window function.

Results for Continuous Nonanalytical Test Functions

From all of the examples, we can conclude that the method is very accurate, robust, and fast for functions with singularities.

6.3. A Robust Laplace Inversion Algorithm

In this subsection we discuss a modification that is effective for almost all kinds of discontinuity, singularity, and local nonsmoothness. With this modification, we can obtain results near machine precision at continuity points for almost all functions, even if we do not know the discontinuities and singularities a priori. As in Section 6.2, we compute the function value f (k) by inverting the Laplace transform of the function

but now the window function depends on the point k; that is, for the computation of each function value f (k) we use a different window function. We choose the window function such that the ε-support {t; |e−αtfwk(t)| ≥ ε} of fwk is contained in [k − δ, k + δ], with δ a given positive control parameter, ε a given tolerance, and wk(k) = 1. The advantage of this construction is that smoothness of f on [k − δ, k + δ] suffices for fwk to be smooth on [0,∞); thus, we only need f to be smooth on [k − δ, k + δ] to compute f (k) accurately with the quadrature rule. Hence, we can obtain results near machine precision at continuity points for almost all functions.

A second advantage is that we can invert the resulting z-transform efficiently. Since the ε-support of fwk is contained in [k − δ, k + δ], we have

(the first equation follows from (4.5)) and the N-point discrete Fourier transform computes the numbers

(cf. Appendix H), we can compute the function values f (k) efficiently with an N-point discrete Fourier transform (N ≥ 1 + 2⌊δ⌋).

We can construct the window functions with the desired properties as follows: Let w be a smooth bounded function satisfying

with P a given positive number. We choose the damping factor α such that

We extend w to a periodic function with period P by setting

The desired window function wk is defined by

We will now show how we can efficiently evaluate the Fourier series

,

for k = 0,1,…,M − 1. Since w has period P and w is smooth, we can expand w in a fast converging Fourier series. Thus, we can write

with

This yields that the Laplace transform of fwk is given by

Since w is a smooth periodic function, the {Aj} converge rapidly to zero. Hence, we can approximate

accurately with

for J large enough. Hence,

with

Moreover, if J is a multiple of P, then

with L = J/P. We make

periodic by defining

equal to

We can compute the sums (6.10) in L log(L) + J time with the FFT algorithm (cf. Cooley and Tukey [11]). It remains to compute the function values f (k), k = 0,1,…,M − 1. We can realize this by the discrete Fourier series inversion formula

with

. If δ ≤ 1, then

Let us finally define the window function w. A good choice is the so-called Gaussian window (cf. Mallat [15]). The Gaussian window is defined by

with σ a given scale parameter. Given a prespecified tolerance ε, the scale parameter σ is chosen such that

We can compute the numbers Aj by

The truncation parameter L (in (6.10)) is chosen as

and δ is chosen as δ = 1.
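A minimal numerical sketch of the Gaussian window and its Fourier coefficients; the values P = 8, N = 1024, and σ = 0.5 are illustrative, not the tuned choices of (6.11) and (6.12). The rapid decay of the {Aj} is what allows the Fourier series of w to be truncated at a small index:

```python
import numpy as np

# Gaussian window of Section 6.3: w(t) = exp(-t^2 / (2 σ^2)), periodized
# with period P.  Its Fourier coefficients
#   A_j = (1/P) ∫_{-P/2}^{P/2} w(t) e^{-2πi j t / P} dt
# are approximated by an FFT of equispaced samples; P, N, σ are
# illustrative parameters.
P, N, sigma = 8.0, 1024, 0.5

t = -P / 2 + P * np.arange(N) / N
w = np.exp(-t ** 2 / (2.0 * sigma ** 2))

# The (-1)^j factor compensates for the grid starting at -P/2.
j = np.arange(N)
A = np.fft.fft(w) / N * (-1.0) ** j

# First index at which |A_j| drops below a tolerance; the Fourier
# series of w could be truncated there.
J = int(np.argmax(np.abs(A[: N // 2]) < 1e-12))
```

Since w is smooth and periodic, the coefficients decay like exp(−2π2σ2j2/P2), so only a handful of terms are needed.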

Algorithm

Input:

, with M a power of 2

Output: f (ℓΔ), ℓ = 0,1,…,M − 1

Parameters: P = 8M, δ = 1, L (cf. (6.12)), σ (cf. (6.11)), and n = 48

Step 1. For k = −PL,…,PL − 1 and j = 1,2,…,n, compute the numbers

for k = −PL,…,PL − 1, compute the numbers

and for j = 0,1,…,P − 1, compute the numbers

with

Step 2. Compute

, for k = 0,1,…,P − 1, with the FFT algorithm.

Step 3. Compute f (kΔ) with f (kΔ) = (−1)keakfk, for k = 0,1,…,M − 1.

Remark 6.8: The composite inverse functions fwk are highly peaked; therefore we recommend choosing n = 48. With this choice, we obtain results near machine precision.

Remark 6.9: For an arbitrary window function, we can also compute the coefficients Aj efficiently with the fractional FFT (cf. [7]). Since |w(t)| < ε for t ∈ [−P/2,−δl] ∪ [δu,P/2], we can write this discrete Fourier transform as

with M a power of 2. This expression can be efficiently computed with the fractional FFT.
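For reference, the fractional Fourier transform of Bailey and Swartztrauber [7] evaluates sums of the form Gj = Σk xk exp(−2πi·α·jk) for arbitrary real α. One standard realization (assumed here; the article does not spell out the implementation) is Bluestein's chirp/convolution identity, computed with zero-padded FFTs:

```python
import numpy as np

def frft(x, alpha):
    """Fractional FFT: G_j = sum_k x_k exp(-2*pi*i*alpha*j*k) for
    j = 0..m-1 and arbitrary real alpha (Bailey & Swartztrauber [7]),
    realized via Bluestein's identity jk = (j^2 + k^2 - (j-k)^2)/2."""
    m = len(x)
    j = np.arange(m)
    chirp = np.exp(-1j * np.pi * alpha * j**2)
    a = np.asarray(x, dtype=complex) * chirp
    n = 1 << (2 * m - 1).bit_length()        # power-of-2 FFT length >= 2m-1
    b = np.zeros(n, dtype=complex)
    b[:m] = np.conj(chirp)                   # exp(+i*pi*alpha*k^2), lags 0..m-1
    b[n - m + 1:] = b[1:m][::-1]             # mirror entries for negative lags
    a_pad = np.zeros(n, dtype=complex)
    a_pad[:m] = a
    conv = np.fft.ifft(np.fft.fft(a_pad) * np.fft.fft(b))  # circular convolution
    return chirp * conv[:m]
```

With α = 1/m this reduces to the ordinary DFT; the point of the fractional variant is that α need not be a reciprocal integer.
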

Remark 6.10: If the inversion point is close to a singularity, we can compute f (t) accurately by inverting the scaled Laplace transform

To be more precise, suppose that f is smooth on the interval [k − δa,k + δa] and that the function wk has ε-support contained in [k − δ, k + δ]. If we choose Δ ≤ δa /δ, then fwk is a smooth function. The price we have to pay for a small Δ is that the period of wk is large and that we need a larger number of terms for an accurate approximation of the Fourier series (6.8).
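The scaled transform in Remark 6.10 presumably rests on the standard Laplace scaling rule — if f̂ is the transform of f, then t ↦ f(Δt) has transform s ↦ f̂(s/Δ)/Δ. This is stated here as an assumption and checked on the known pair f(t) = e^{−t}, f̂(s) = 1/(s + 1):

```python
import numpy as np

# Standard scaling rule (assumed basis of Remark 6.10's scaled transform):
# if fhat(s) transforms f(t), then f(Delta*t) transforms to fhat(s/Delta)/Delta.
fhat = lambda s: 1.0 / (s + 1.0)          # transform of f(t) = exp(-t)
Delta = 0.25
scaled_hat = lambda s: fhat(s / Delta) / Delta

# Direct check: f(Delta*t) = exp(-Delta*t) has transform 1/(s + Delta).
lhs, rhs = scaled_hat(2.0), 1.0 / (2.0 + Delta)
```
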

We end this subsection with a discussion of the numerical results for eight test functions with singularities that were used in the overview article of Davies and Martin (cf. [12]). These results are presented in Table 5. The inversions are done at 32 points, so M = 32. We have taken P = 256 = 8M and, for all of the examples, a = 44/M2. In addition to the previous parameter choices, we have taken

. The computations are done with double precision. We have taken Δ = 1/16, Δ = 1, and Δ = 10. We see that the approximation error appears to be independent of Δ. The reason is that the approximation error is dominated by round-off error and by the error caused by the highly peaked window function. We also test the method on two discontinuous functions. These results are presented in Table 6. For this example, we have taken Δ = 1/16. From all of the examples, we can conclude that the method is very accurate, robust, and fast.

Table 5: Results for Continuous Nonanalytical Test Functions

Table 6: Results for Noncontinuous Test Functions

Acknowledgments

The author is indebted to Emőke Oldenkamp, Marcel Smith, Henk Tijms, and the two anonymous referees for their valuable comments and suggestions, which significantly improved the article.

APPENDIX A: Some Weights and Nodes for the Quadrature Rule

We tabulated only the numbers λj and βj for j = 1,2,…,n/2. The numbers λn+1−j are given by −λj − 2π. The numbers βn+1−j coincide with βj.

APPENDIX B: The Fourier Transform

We start by introducing the Fourier transform over the space L1(−∞,∞) of Lebesgue integrable functions. For a function f ∈ L1(−∞,∞) and pure imaginary s, the Fourier integral

is properly defined. In fact,

and

is a continuous function. If the support of f is contained in [0,∞), then we call

the Laplace transform of f. If

, the space of complex-valued Lebesgue integrable functions, satisfies

, then the inverse Fourier integral is properly defined.

Theorem B.1. If

, then

is properly defined and

This theorem can be found in any text about Fourier transforms; we have taken it from Mallat [15]. It follows immediately that

is a continuous function.

APPENDIX C: Gaussian Quadrature

Definition C.1 (Gaussian Quadrature (cf.

[24])): Let the inner product 〈·,·〉 be given by

with μ a given positive measure and I a subinterval of the real axis or of the imaginary axis. Let {qk} be an orthogonal set w.r.t. this inner product, and let the inner product 〈·,·〉n be given by

with {μk; k = 0,1,…,n − 1} the simple roots of qn and the strictly positive numbers {αk; k = 0,1,…,n − 1}, the so-called Christoffel numbers, given by

with An the highest coefficient in qn. The roots {μk; k = 0,1,…,n − 1} lie in I. The inner product 〈·,·〉n is called the Gaussian quadrature rule and is the unique nth-order quadrature rule satisfying

where πn is the space of polynomials with degree not exceeding n and 1 is the constant function 1(t) = 1.
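The exactness property — an n-point Gaussian rule integrates every polynomial of degree at most 2n − 1 exactly — can be seen concretely with classical Gauss–Legendre quadrature on [−1, 1] (the Lebesgue-measure case, not the article's measure μ on an imaginary interval):

```python
import numpy as np

# An n-point Gaussian rule reproduces integrals of all polynomials of
# degree <= 2n - 1 exactly; illustrated with Gauss-Legendre on [-1, 1].
n = 5
nodes, weights = np.polynomial.legendre.leggauss(n)

# Degree 8 <= 2n - 1 = 9, so the rule is exact: integral of t^8 is 2/9.
approx = np.sum(weights * nodes**8)
```

For degree 2n (here t^10) the rule is no longer exact, which is the sharp boundary of the definition.
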

APPENDIX D: Legendre Polynomials

The Legendre polynomials {φn; n ∈ N0} are a complete orthogonal polynomial set in L2([0,1]). These polynomials

are given by

with D the differential operator.
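These polynomials can be generated as shifted Legendre polynomials, φn(t) = Pn(2t − 1) with Pn the classical Legendre polynomial on [−1, 1] (assumed normalization; the article's may differ by a constant factor). A quick numerical check of orthogonality on [0, 1]:

```python
import numpy as np
from numpy.polynomial import legendre

def phi(n, t):
    """Shifted Legendre polynomial on [0,1]: phi_n(t) = P_n(2t - 1)
    (assumed normalization)."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return legendre.legval(2.0 * t - 1.0, c)

# Verify orthogonality with a 20-point Gauss-Legendre rule mapped to [0,1].
x, w = legendre.leggauss(20)
t, w = 0.5 * (x + 1.0), 0.5 * w
ip = lambda m, n: np.sum(w * phi(m, t) * phi(n, t))
# <phi_2, phi_3> = 0 and <phi_3, phi_3> = 1/(2*3 + 1) = 1/7.
```
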

Consider the polynomials

A routine computation shows that the Laplace transform

of the nth Legendre polynomial φn is given by

Introduce

and

. Since exp(−s−1) = exp(−i2πυ) on

, it follows that

on

. This identity is crucial for Appendixes E and F.

APPENDIX E: Proofs of Theorems 3.2 and 5.3

Proof of Theorem 3.2: Recall that

and

. Since exp(−s−1) = exp(−i2πυ) on

, it follows that

, on

. Hence,

Let

Since

and the function Φmn is of bounded variation and in L1, we can apply the PSF (Theorem 2.1) and obtain that

This proves that the {qj} are orthogonal.

Define

Since

. Since {φk} is a complete orthogonal set in L2([0,1]), we obtain that

Since

we obtain that

in the last step, we use

. Thus,

We obtain from Parseval's theorem (cf. Conway [10, Thm.4.13]) that {qkυ} is a complete orthogonal set in L2(Qυ). █

Proof of Theorem 5.3: Since for f (s,r) = f1(s) f2(r) and g(s,r) = g1(s)g2(r),

the orthogonality of {qkjυω} follows from the orthogonality of the sets {qkυ} and {qjω}. Let

denote

, respectively. Since

we obtain that

Using the fact that the sets {qkυ} and {qjω} are both complete orthogonal in L2(Qυ) and L2(Qω), respectively, yields by Parseval's theorem (cf. Conway [10, Thm.4.13]) that

We obtain from Parseval's theorem that {qkjυω} is a complete orthogonal set in L2(Qυω). █

APPENDIX F: The Computation of {μkυ} and {αkυ}

Let us begin with analyzing the matrix representation of the multiplication operator w.r.t. the basis {qkυ}.

Theorem F.1: Let M : L2(Qυ) → L2(Qυ) be the multiplication operator defined by Mf (s) = sf (s). The matrix representation of

, w.r.t. the basis {qkυ} is given by

where

Let Mn : L2(Qυn) → L2(Qυn) be the multiplication operator defined by Mn f (s) = sf (s). The matrix representation of

, w.r.t. the basis {qkυ; k = 0,1,…,n − 1} is given by

Proof: Since

M is a skew adjoint operator. Since qn is orthogonal to πn−1, the space of polynomials of degree less than or equal to n − 1, we obtain that

Since M is skew adjoint, we obtain that

Let An be the coefficient of sn in qnυ and let Bn be the coefficient of sn−1 in qnυ. It can be easily verified that

Comparing the coefficient of sn in Mqn−1υ and qnυ respectively yields that

Comparing the coefficients of sn in Mqnυ − 〈Mqnυ,qn+1υQυ qn+1υ and qnυ respectively yields that

Expanding Mqnυ in {qkυ} yields the desired result for the operator M. Since for all f ∈ L2(Qυn),

we obtain the desired result for the operator Mn. █

Theorem F.2: The numbers {μkυ} are the eigenvalues of Mn. The Christoffel numbers {αkυ} are given by |〈vk,q0υ〉|2/|1 − exp(−i2πυ)|2, with {vk} the normalized eigenfunctions (∥vk∥ = 1) of Mn.

Proof: Let ρ be an arbitrary polynomial in π2n−1. Since the matrix Mn is skew self-adjoint, we obtain by the spectral theorem (cf. Conway [10]) that

with {ak} the eigenvalues and {vk} the normalized eigenfunctions (∥vk∥ = 1) of Mn. Further, since Mn and M are tridiagonal, we obtain that

Since the Gaussian quadrature rule is the unique nth-order quadrature rule satisfying

we obtain the result. █

Remark F.3: Since the matrix

is skew self-adjoint, we can efficiently compute the eigenvalues {μkυ} and eigenvectors {vk} with the QR algorithm (cf. Stoer and Bulirsch [23]).
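The eigenvalue route of Remark F.3 is the classical Golub–Welsch procedure. A sketch for the familiar symmetric Gauss–Legendre case follows; the article's skew self-adjoint matrix Mn (with purely imaginary eigenvalues) is treated analogously, and the structure of the weight formula in Theorem F.2 mirrors the eigenvector formula used here.

```python
import numpy as np

def gauss_legendre(n):
    """Golub-Welsch sketch: nodes are the eigenvalues of the symmetric
    tridiagonal Jacobi matrix; weights come from the first components
    of the normalized eigenvectors (mu_0 = integral of 1 over [-1,1] = 2)."""
    k = np.arange(1.0, n)
    beta = k / np.sqrt(4.0 * k**2 - 1.0)      # off-diagonal recurrence terms
    J = np.diag(beta, 1) + np.diag(beta, -1)  # zero diagonal for Legendre
    nodes, V = np.linalg.eigh(J)              # QR-type symmetric eigensolver
    weights = 2.0 * V[0, :]**2
    return nodes, weights

nodes, weights = gauss_legendre(6)
```
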

APPENDIX G: Error Analysis

Let us start by giving an alternative formula for the Christoffel numbers {αjυ}.

Lemma G.1: The Christoffel numbers {αjυ} are given by

where Anυ is the coefficient of sn in qnυ.

Proof: Let ℓj be the Lagrange polynomial defined by ℓj(μkυ) = δkj, with δkj = 1 if k = j and δkj = 0 if k ≠ j. Then 〈qn−1υ,ℓj〉 = αjυqn−1υ(μjυ). On the other hand, 〈ℓj,qn−1υ〉 = Bj /An−1υ, with An−1υ the coefficient of sn−1 in qn−1υ and Bj the coefficient of sn−1 in ℓj. Since Bj = Anυ/Dqnυ(μjυ), we obtain the identity

Before we can give an error estimate for the quadrature rule, we need the following technical lemma.

Lemma G.2: Let the integration operator I : L2([0,1]) → L2([0,1]) be given by

For m = 0,1,…,n − 1, the estimate

is valid.

Proof: Since by the PSF (Theorem 2.1)

, with en the nth unit vector, we only have to prove that

. Introduce the directed graph G with vertices {0,1,…} and edges V− = {Vj,j+1} and V+ = {Vj,j−1}. Let Gn be the subgraph of G with vertices {0,1,…,n − 1} and edges V− = {Vj,j+1; j ≤ n − 2} and V+ = {Vj,j−1; j ≤ n − 1}. Let the weight of the path {j0,…,jm} be given by

. Let

be the set of all paths from n − 1 to k of length m in G and Gn, respectively. The remainder of the proof consists of the following steps:

Proof of Step (1): We will prove this result by induction. Clearly, the result is true for m = 1. Since

we obtain

where the second equation follows from the induction hypothesis and the last equality follows from the fact that the matrix

is tridiagonal (Theorem F.1) with only the first element on the diagonal nonzero and m ≤ n − 1.

Proof of Step (2): Since each path from n − 1 to k contains exactly (m − (n − 1 − k))/2 edges of the set V− and (m + (n − 1 − k))/2 edges of the set V+, the weight of each path in Skm has the same sign.

Proof of Step (3): Since

is a subset of Skm and each path in Skm has the same sign, we obtain

The desired result follows from

We can now give an error estimate for the quadrature rule.

Theorem G.3: The error of the quadrature formula

is given by

where f(k) denotes the kth-order derivative of f and

with φn the nth Legendre polynomial given by

where D denotes the differential operator. Furthermore, the remainder

is bounded by

with

Moreover,

where

Thus, for smooth functions,

with α some number in (0,2).

Proof: The proof consists of the following steps:

(1) The functional Eυ can be expanded as

where Vj is given by

(2) By the definition of the functional Eυ,

The term

is equal to

(3) The term

is bounded by

(4) We can estimate Wj h and ∥Rj h1 by

Proof of Step (1): Let

; then

Let Pn−1 be the interpolation polynomial of

, where the {μkυ} are the zeros of qnυ. Since the quadrature rule is exact for polynomials of degree n − 1, we obtain that

Hence,

with

Since for s ∈ {2πi(k + υ)} (cf. (D.1))

we obtain

with

where

The following observations show that

is a proper Laplace transform, satisfying the conditions for the PSF (Theorem 2.1):

Since

is holomorphic in the plane except in the points {1/μkυ} and converges uniformly to zero as |s| → ∞, we obtain

Hence, we obtain for x < 0,

where the last equality follows from Lemma G.1. Since the support of W is (−∞,0], we obtain

in the last equality, we use that φn is orthogonal to each polynomial of degree less than n. By (G.5) and the definition of the inner product 〈·,·〉Qυn (Definition 3.3), this yields

with Vj given by

Proof of Step (2): Since for s ∈ {2πi(k + υ)},

the first term of (G.2) is equal to

We obtain from the PSF (Theorem 2.1) that this is equal to

We obtain from (G.1) that the second term of (G.2) is equal to

with

Proof of Step (3): Since φn is orthogonal to πn−1, we obtain Rj p2n−1 = 0 for p2n−1 ∈ π2n−1. Hence,

for 0 ≤ m ≤ (n − 1). Hence,

the last inequality follows from (A.1). We obtain from Lemma G.2 that

Proof of Step (4): Recall that

Integration by parts and using

yields that

where Bn is the density of the Beta distribution; that is,

Since the Beta distribution is a probability distribution, we obtain by the mean value theorem that

Similarly,

Hence,

By a similar argument, we obtain

Remark G.4: It follows from Theorem G.3 that the approximation error does not depend on the discontinuity of f at t = 0.

APPENDIX H: On the Inversion of the z-Transforms

Theorem H.1: Let

The following inversion formula holds:

for j = 0,1,…,N − 1.

Proof: Since for integral k and m, exp(−i2πkm) = 1, we obtain that

Hence,

with

Formula (H.1) follows from

Remark H.2: This result is sometimes called the discrete Poisson summation formula (cf. Abate and Whitt [3]). In their article, Abate and Whitt present an efficient algorithm for the inversion of z-transforms (also called generating functions) of discrete probability distributions.

We proceed as in the article by Abate and Whitt [3]. It follows immediately from Theorem H.1 that if the discretization error

is small, then

In general, it is hard to ensure that the discretization error ξd is small. However, for a special class of z-transforms, we can easily control the discretization error ξd. A z-transform,

belongs to the Hilbert space

, for k < 0 (cf. Rudin [20, Chap.17]). Since fk = 0, for k < 0, we obtain that the inverse sequence of the z-transform

is given by

. Hence, rkfkr = fk and

is of order O(rN). Hence, we can make ξjr arbitrarily small by choosing r small enough. The conclusion is that if

, then we can efficiently compute the numbers fk with arbitrary precision, with the following algorithm.

Remark H.3: On one hand, we want to choose r as small as possible, since then the discretization error is as small as possible. On the other hand, we multiply fkr by the factor rk; hence, a small r makes the algorithm numerically unstable. Suppose that we want to compute the values {fk; k = 0,1,…,m − 1} with a precision of ε. We can control both the discretization error and the numerical stability by using an m2 = 2pm (with p a positive integer) discrete Fourier transform (FFT) and choosing r = ε1/m2. The discretization error is then of magnitude ε and the multiplication factor is bounded by ε1/2p. For double precision, we recommend the parameter values p = 3 and r = exp(−44/(2pm)).
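Remark H.3's recipe can be sketched directly. The helper name `invert_ztransform` and the default ε are illustrative choices, not the article's notation; the mechanism — sample the z-transform on a circle of radius r = ε^(1/N) with N = 2^p·m points, invert by FFT, and undo the damping r^k — follows the remark.

```python
import numpy as np

def invert_ztransform(F, m, p=3, eps=1e-16):
    """Recover f_0..f_{m-1} from F(z) = sum_k f_k z^k (analytic on the
    closed unit disk). Following Remark H.3: N = 2^p * m samples on the
    circle |z| = r with r = eps**(1/N), so the aliasing error is O(eps)
    while the amplification r**(-k) stays moderate for k < m."""
    N = (1 << p) * m
    r = eps ** (1.0 / N)
    z = r * np.exp(-2j * np.pi * np.arange(N) / N)
    damped = np.fft.ifft(F(z)).real          # = f_k * r**k + O(eps)
    return damped[:m] / r ** np.arange(m)    # undo the damping factor

# Example: F(z) = 1/(1 - z/2) has coefficients f_k = 2**-k.
f = invert_ztransform(lambda z: 1.0 / (1.0 - z / 2.0), 8)
```
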

REFERENCES

Abate, J., Choudhury, G.L., & Whitt, W. (1996). On the Laguerre method for numerically inverting Laplace transforms. INFORMS Journal on Computing 8: 413–427.
Abate, J., Choudhury, G.L., & Whitt, W. (1998). Numerical inversion of multidimensional Laplace transforms by the Laguerre method. Performance Evaluation 31: 216–243.
Abate, J. & Whitt, W. (1992). Numerical inversion of probability generating functions. Operations Research Letters 12: 245–251.
Abate, J. & Whitt, W. (1992). The Fourier-series method for inverting transforms of probability distributions. Queueing Systems 10: 5–88.
Abate, J. & Whitt, W. (1995). Numerical inversion of Laplace transforms of probability distributions. ORSA Journal on Computing 7: 36–43.
Asmussen, S. (1987). Applied probability and queues. New York: Wiley.
Bailey, D.H. & Swartztrauber, P.N. (1991). The fractional Fourier transform and applications. SIAM Review 33(3): 389–404.
Carr, P. & Madan, D.B. (1999). Option pricing and the fast Fourier transform. Journal of Computational Finance 2(4): 61–73.
Choudhury, G.L., Lucantoni, D.L., & Whitt, W. (1994). Multidimensional transform inversion with applications to the transient M/G/1 queue. Annals of Applied Probability 4: 719–740.
Conway, J.B. (1990). A course in functional analysis. New York: Springer-Verlag.
Cooley, J.W. & Tukey, J.W. (1965). An algorithm for the machine calculation of complex Fourier series. Mathematics of Computing 19: 297–301.
Davies, B. & Martin, B.L. (1979). Numerical inversion of the Laplace transform: A survey and comparison of methods. Journal of Computational Physics 33: 1–32.
Dubner, H. & Abate, J. (1968). Numerical inversion of Laplace transforms by relating them to the finite Fourier cosine transform. Journal of the ACM 15: 115–123.
Gaver, D.P. (1966). Observing stochastic processes and approximate transform inversion. Operations Research 14: 444–459.
Mallat, S. (2001). A wavelet tour of signal processing, 2nd ed. San Diego, CA: Academic Press.
Murli, A. & Rizzardi, M. (1990). Algorithm 682: Talbot's method for the Laplace inversion problem. ACM Transactions on Mathematical Software 16: 158–168.
O'Cinneide, C.A. (1997). Euler summation for Fourier series and Laplace transform inversion. Stochastic Models 13: 315–337.
Piessens, R. (1971). Gaussian quadrature formulas for the numerical inversion of Laplace transform. Journal of Engineering Mathematics 5: 1–9.
Piessens, R. (1971). Some aspects of Gaussian quadrature formulae for the numerical inversion of the Laplace transform. The Computer Journal 14: 433–436.
Rudin, W. (1987). Real and complex analysis, 3rd ed. Singapore: McGraw-Hill.
Sakurai, T. (2004). Numerical inversion of Laplace transform of functions with discontinuities. Advances in Applied Probability 20(2): 616–642.
Stehfest, H. (1970). Algorithm 368: Numerical inversion of Laplace transforms. Communications of the ACM 13: 47–49.
Stoer, J. & Bulirsch, R. (1991). Introduction to numerical analysis, 2nd ed. Texts in Applied Mathematics Vol. 12. Berlin: Springer-Verlag.
Szegő, G. (1975). Orthogonal polynomials, 4th ed. American Mathematical Society Colloquium Publications 23. Providence, RI: American Mathematical Society.
Talbot, A. (1979). The accurate inversion of Laplace transforms. Journal of the Institute of Mathematics and Its Applications 23: 97–120.
Tijms, H.C. (2003). A first course in stochastic models. New York: Wiley.
Weeks, W.T. (1966). Numerical inversion of Laplace transforms using Laguerre functions. Journal of the ACM 13: 419–426.