
Non-parametric estimation for a pure-jump Lévy process

Published online by Cambridge University Press:  10 May 2017

Chunhao Cai
Affiliation:
School of Mathematical Sciences, Nankai University, Tianjin 300071, China
Junyi Guo
Affiliation:
School of Mathematical Sciences, Nankai University, Tianjin 300071, China
Honglong You*
Affiliation:
School of Mathematical Sciences, Nankai University, Tianjin 300071, China
*Correspondence to: Honglong You, School of Mathematical Sciences, Nankai University, Tianjin 300071, China. Tel: 0086-13212002912. E-mail: youhonglong815@163.com

Abstract

In this paper, we propose an estimator of the survival probability for a Lévy risk model observed at low frequency. The estimator is constructed via a regularised version of the inverse of the Laplace transform. The convergence rate of the estimator in the sense of the integrated squared error is studied for large sample sizes. Simulation studies are also given to show the finite-sample performance of our estimator.

Copyright
© Institute and Faculty of Actuaries 2017 

1 Introduction

The surplus process of an insurance company is given by the following process:

(1.1) $$X_t = u + ct - J_t, \quad t \geq 0$$

where u≥0 is the initial surplus and c>0 the constant premium rate. Here the aggregate claims process $J = \{J_t,\ t \geq 0\}$ is a subordinator with Laplace exponent

(1.2) $$\psi_J(s) = \int_0^\infty \left(1 - e^{-sx}\right)\nu(dx), \quad s > 0$$

where ν is the Lévy measure of J supported on (0, ∞) satisfying the condition $\mu = \mathbf{E}[J_1] = \int_0^\infty x\,\nu(dx) < \infty$. Here we suppose the safety loading condition holds, i.e. c>μ. The infinite-time horizon ruin probability Ψ(u) is defined by

$$\Psi(u) = \mathbf{P}\left(\inf_{0 \leq t < \infty} X_t < 0 \;\middle|\; X_0 = u\right)$$

The corresponding survival probability Φ(u) is defined by

$$\Phi(u) = 1 - \Psi(u)$$

For simplicity, we will suppose that Φ(u) has a first derivative g(u) of polynomial growth, i.e.

(1.3) $$|g(u)| \leq k(p + u^q)$$

for some constants k, p and integer-valued q.

The statistical inference for the ruin probability has been studied by many authors (see e.g. Frees, 1986; Hipp, 1989; Croux & Veraverbeke, 1990; Pitts, 1994; Bening & Korolev, 2002; Politis, 2003; Mnatsakanov et al., 2008; Zhang & Yang, 2013). Statistical methodology has some advantages over analytic and probabilistic methods. On the one hand, the model can be more general; for example, no specific structure on the claim size distribution need be assumed. On the other hand, in practical situations, instead of knowing the specific model one can often only obtain data on the surplus. Thus, statistical methodology can be used to analyse the insurance risk directly from the data. For more recent contributions on statistical inference of the ruin probability we refer the reader to Mnatsakanov et al. (2008), Shimizu (2012) and Zhang & Yang (2013).

In Zhang & Yang (2013), the ruin probability for the pure-jump Lévy risk model is estimated, and the key tool for estimation is the Pollaczek–Khinchin formula. The authors apply the Fourier method to transform the infinite sum of convolutions into a single integral and then construct the estimator of the ruin probability. In Mnatsakanov et al. (2008), the authors consider an empirical-type estimator of the Laplace transform of the ruin probability, recover it by a regularised Laplace inversion technique, and show weak consistency in the sense of the integrated squared error (ISE). In Shimizu (2012), the author constructs an estimator of the Gerber–Shiu function for the Wiener–Poisson risk model in a manner similar to Mnatsakanov et al. (2008) and shows consistency in the ISE sense.

In this paper, we will estimate the ruin probability in the pure-jump Lévy risk model. Note that in the Lévy risk model there may be an infinite number of small jumps in any finite time interval. For an insurance company, if the surplus exhibits many small fluctuations, it is not easy to identify the inter-claim times. One feasible way of dealing with this problem is to observe the surplus process at discrete time points.

The rest of this paper is organised as follows. In sections 2 and 3 we present the estimator of Φ(u) and study its properties. In section 4, we carry out some simulations to show the finite-sample performance of the estimator. Finally, some conclusions are given in section 5. All the technical proofs are presented in the Appendix.

2 Construction of the Estimator

2.1 Preliminaries: general notation

Throughout the paper, we use the following notation and assumptions.

  • $A \lesssim B$ signifies that there exists a universal constant k>0 such that $A \leq kB$.

  • Symbols $\xrightarrow{\mathbf{P}}$ and $\xrightarrow{\mathbf{D}}$ stand for convergence in probability and convergence in distribution, respectively.

  • $L_M$ is the Laplace transform of a function M: for s>0

    $$L_M(s) = \int_0^\infty e^{-su} M(u)\,du$$
  • $\|f\|_K = \left(\int_0^K |f(t)|^2\,dt\right)^{1/2}$ for any function f and K>0. In particular, $\|f\| = \|f\|_\infty$. We say that $f \in L^2(0, K)$ if $\|f\|_K < \infty$; in particular, $f \in L^2(0, \infty)$ if $\|f\| < \infty$.

  • For a stochastic sequence $\{X_n,\ n \geq 1\}$ and any real-valued sequence $\{R_n,\ n \geq 1\}$, $X_n = O_P(R_n)$ if, for any $\epsilon > 0$, there exists a constant d such that $P\left(\left|X_n / R_n\right| > d\right) \leq \epsilon$ for all n, and $X_n = o_P(R_n)$ if, for any $\epsilon > 0$, $P\left(\left|X_n / R_n\right| > \epsilon\right) \to 0$ as $n \to \infty$.

  • $\mathcal{N}(a, b)$ denotes the Gaussian distribution with mean a and variance b.

2.2 Estimator

As is well known, an exact closed-form expression for Φ(u) is difficult to obtain, but the Laplace transform of Φ(u) can be obtained easily from Morales (2007). The Laplace transform of Φ(u) is

(2.1) $$L_\Phi(s) = \frac{1 - \frac{1}{c}\int_0^\infty z\,\nu(dz)}{s - \frac{1}{c}\int_0^\infty \left(1 - e^{-sz}\right)\nu(dz)} = \frac{1-\rho}{s - \frac{1}{c}\psi_J(s)}, \quad s > 0$$

where ρ = μ/c.

By the Pollaczek–Khinchin formula for Φ(u) in Huzak et al. (2004), we can obtain Φ(0) = 1 − ρ.

Now we construct an estimator of $L_\Phi(s)$. It follows from (2.1) that we have to estimate the parameter ρ and the function $\psi_J(s)$. Suppose that the observations consist of a discrete sample $\{J_{t_i^n} \mid t_i^n = ih;\ i = 0, 1, 2, \ldots, n\}$ for some fixed h>0. Let $\Delta_i^n J = J_{t_i^n} - J_{t_{i-1}^n}$, $i = 1, 2, \ldots, n$.

Immediately, an unbiased estimator for ρ is given by

(2.2) $$\widetilde{\rho}_n = \frac{1}{cnh}\sum_{i=1}^n \Delta_i^n J$$

Obviously, we obtain an estimator of Φ(0), given by

(2.3) $$\hat{\Phi}_n(0) = 1 - \widetilde{\rho}_n$$

Because J is a spectrally positive Lévy process, its Laplace exponent $\psi_J(s)$ is characterised via

(2.4) $$\mathbf{E}\left[e^{-sJ_t}\right] = e^{-t\psi_J(s)}, \quad s > 0$$

Let us consider the empirical Laplace transform of the increments of J at stage n (below, s>0):

(2.5) $$\widetilde{\phi}_n(s) = \frac{1}{n}\sum_{i=1}^n e^{-s\Delta_i^n J}$$

Then we set

$$\widetilde{\psi}_{J,n}(s) = -\frac{1}{h}\log \widetilde{\phi}_n(s), \quad s > 0$$

and take $\widetilde{\psi}_{J,n}(s)$ as an estimator of $\psi_J(s)$.

Therefore, we can define $\widetilde{L_\Phi}(s)$ as an estimator of $L_\Phi(s)$ as follows:

(2.6) $$\widetilde{L_\Phi}(s) = \frac{1 - \widetilde{\rho}_n}{s - \frac{1}{c}\widetilde{\psi}_{J,n}(s)}, \quad s > 0$$
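To make the construction concrete, the following Python sketch assembles (2.2), (2.5) and (2.6) from a one-dimensional NumPy array of observed increments $\Delta_i^n J$. This is our own illustrative code (all function names are our choices), not part of the original paper.

```python
import numpy as np

def estimate_rho(increments, c, h):
    """Unbiased estimator (2.2) of rho = mu/c."""
    n = len(increments)
    return increments.sum() / (c * n * h)

def estimate_laplace_exponent(s, increments, h):
    """Estimator of psi_J(s): minus (1/h) times the log of the
    empirical Laplace transform (2.5) of the increments."""
    phi_n = np.mean(np.exp(-s * increments))
    return -np.log(phi_n) / h

def laplace_transform_estimate(s, increments, c, h):
    """Estimator (2.6) of the Laplace transform of Phi."""
    rho_n = estimate_rho(increments, c, h)
    psi_n = estimate_laplace_exponent(s, increments, h)
    return (1.0 - rho_n) / (s - psi_n / c)
```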

In order to estimate the original function Φ(u), we will apply the $L^2$-inversion method proposed by Chauveau et al. (1994) to $\widetilde{L_\Phi}(s)$. The $L^2$-inversion method, which is available for any $L^2(0, \infty)$ function, is defined as follows.

Definition 2.1 Let m>0 be a constant. The regularised Laplace inversion $L_m^{-1}: L^2(0,\infty) \to L^2(0,\infty)$ is given by

(2.7) $$L_m^{-1}g(t) = \frac{1}{\pi^2}\int_0^\infty \int_0^\infty \Psi_m(y)\, y^{-\frac{1}{2}}\, e^{-tvy}\, g(v)\,dv\,dy$$

for a function $g \in L^2(0,\infty)$ and $t \in (0,\infty)$, where

$$\Psi_m(y) = \int_0^{a_m} \cosh(\pi x)\cos(x\log y)\,dx$$

and $a_m = \pi^{-1}\cosh^{-1}(\pi m)$.

Remark 2.2 It is well known that the inverse operator $L^{-1}$ is generally unbounded, which causes the ill-posedness of the Laplace inversion. However, $L_m^{-1}$ is bounded for each m>0; in particular

(2.8) $$\left\| L_m^{-1} \right\| \leq m \left\| L \right\| = \sqrt{\pi}\, m$$

For details and further information see Chauveau et al. (1994).
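For intuition, here is a minimal numerical sketch of the regularised inversion (2.7), assuming NumPy and SciPy are available. $\Psi_m$ is evaluated through the analytic antiderivative of $\cosh(\pi x)\cos(x \log y)$, and the two integrals are truncated at finite limits (y_max, v_max); the truncation points and function names are our own choices, not prescriptions of Chauveau et al. (1994).

```python
import numpy as np
from scipy.integrate import quad

def psi_m(y, m):
    """Psi_m(y) = int_0^{a_m} cosh(pi x) cos(x log y) dx, via the
    closed-form antiderivative of cosh(pi x) cos(b x) with b = log y.
    Requires pi * m >= 1 so that a_m is well defined."""
    a_m = np.arccosh(np.pi * m) / np.pi
    b = np.log(y)
    return (np.pi * np.sinh(np.pi * a_m) * np.cos(b * a_m)
            + b * np.cosh(np.pi * a_m) * np.sin(b * a_m)) / (np.pi**2 + b**2)

def regularized_inverse(g, t, m, y_max=50.0, v_max=50.0):
    """Approximate (L_m^{-1} g)(t) from (2.7), truncating both
    integrals; accuracy depends on the truncation points."""
    def inner(y):
        val, _ = quad(lambda v: np.exp(-t * v * y) * g(v), 0.0, v_max)
        return psi_m(y, m) * y ** (-0.5) * val
    outer, _ = quad(inner, 1e-6, y_max, limit=200)
    return outer / np.pi ** 2
```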

Proposition 2.3 For s>0, we have $\widetilde{L_\Phi}(s) \notin L^2(0,\infty)$.

From Proposition 2.3, we know that $\widetilde{L_\Phi} \notin L^2(0,\infty)$, hence the $L^2$-inversion method of Definition 2.1 cannot be applied directly. Instead, let us define

(2.9) $$\Phi_\theta(u) = e^{-\theta u}\,\Phi(u), \quad u > 0$$

for arbitrary fixed θ>0.

It is obvious that $\Phi_\theta(u) \in L^2(0,\infty)$ and

(2.10) $$L_{\Phi_\theta}(s) = L_\Phi(s+\theta), \quad s > 0$$

Obviously, $L_{\Phi_\theta} \in L^2(0,\infty)$.

We define an estimator of $L_{\Phi_\theta}$ as follows:

(2.11) $$\widetilde{L_{\Phi_\theta}}(s) = \widetilde{L_\Phi}(s+\theta), \quad s > 0$$

We have $\widetilde{L_{\Phi_\theta}} \in L^2(0,\infty)$ by (2.6) and (A.6).

For suitable m(n)>0, we set

(2.12) $$\widetilde{\Phi}_{\theta,m(n)}(u) = L_{m(n)}^{-1}\left(\widetilde{L_{\Phi_\theta}}\right)(u), \quad u > 0$$

Finally, we construct an estimator of Φ(u) as follows:

(2.13) $$\widetilde{\Phi}_{m(n)}(u) = \begin{cases} e^{\theta u}\,\widetilde{\Phi}_{\theta,m(n)}(u), & u > 0 \\ \hat{\Phi}_n(0), & u = 0 \end{cases}$$

3 Asymptotic Properties of Estimators

We first consider the asymptotic properties of $\widetilde{\rho}_n$ and $\widetilde{\psi}_{J,n}(s)$.

Theorem 3.1 As mentioned in section 1, we suppose that the safety loading condition holds and $\int_0^\infty z^2\,\nu(dz) < \infty$. Then, for s>0

(3.1) $$\widetilde{\rho}_n - \rho \xrightarrow{\mathbf{P}} 0$$

(3.2) $$\widetilde{\psi}_{J,n}(s) - \psi_J(s) \xrightarrow{\mathbf{P}} 0$$

(3.3) $$\sqrt{n}\left(\widetilde{\rho}_n - \rho\right) \xrightarrow{\mathbf{D}} \mathcal{N}\left(0,\ \frac{1}{c^2h}\int_0^\infty z^2\,\nu(dz)\right)$$

(3.4) $$\sqrt{n}\left(\widetilde{\psi}_{J,n}(s) - \psi_J(s)\right) \xrightarrow{\mathbf{D}} \mathcal{N}\left(0,\ \frac{1}{h^2}\left(e^{h(2\psi_J(s) - \psi_J(2s))} - 1\right)\right)$$

as $n \to \infty$.

Remark 3.2 By (3.1) and (3.3), it is easy to obtain

(3.5) $$\hat{\Phi}_n(0) - \Phi(0) \xrightarrow{\mathbf{P}} 0, \quad \sqrt{n}\left(\hat{\Phi}_n(0) - \Phi(0)\right) \xrightarrow{\mathbf{D}} \mathcal{N}\left(0,\ \frac{1}{c^2h}\int_0^\infty z^2\,\nu(dz)\right)$$

Now, we present our main result, which states the convergence in probability of the ISE.

Theorem 3.3 Suppose that the conditions of Theorem 3.1 are satisfied and that Φ(u) has a first derivative g(u) of polynomial growth. Then, for $m(n) = \sqrt{n / \log n}$, B>0 and u>0, we have

$$\left\| \widetilde{\Phi}_{m(n)}(u) - \Phi(u) \right\|_B^2 = O_P\left((\log n)^{-1}\right), \quad n \to \infty$$

Remark 3.4 The explicit integral expression for the estimator of the survival probability is

$$\widetilde{\Phi}_{m(n)}(u) = \begin{cases} \dfrac{e^{\theta u}}{\pi^2} \displaystyle\int_0^\infty\!\!\int_0^\infty e^{-usy}\, \widetilde{L_{\Phi_\theta}}(s)\, \Psi_{m(n)}(y)\, y^{-\frac{1}{2}}\,ds\,dy, & u > 0 \\ 1 - \widetilde{\rho}_n, & u = 0 \end{cases}$$

where $\Psi_{m(n)}(y) = \int_0^{a_{m(n)}} \cosh(\pi x)\cos\left(x \log y\right)dx$, $a_{m(n)} = \pi^{-1}\cosh^{-1}\left(\pi m(n)\right)$ and $m(n) = \sqrt{n / \log n}$.
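Putting the pieces together, an end-to-end sketch of $\widetilde{\Phi}_{m(n)}(u)$ might read as follows. It reuses laplace_transform_estimate, psi_m and regularized_inverse from the earlier sketches (all our own illustrative constructions, with the earlier definitions assumed to be in scope); in practice the truncation limits and quadrature tolerances inside regularized_inverse would need tuning.

```python
import numpy as np

def survival_estimate(u, increments, c, h, theta=0.075):
    """Sketch of the estimator (2.13)/Remark 3.4 of Phi(u)."""
    n = len(increments)
    rho_n = increments.sum() / (c * n * h)         # (2.2)
    if u == 0:
        return 1.0 - rho_n                         # (2.3)
    m_n = np.sqrt(n / np.log(n))                   # m(n) of Theorem 3.3
    # Estimated Laplace transform of Phi_theta, shifted as in (2.11).
    g = lambda s: laplace_transform_estimate(s + theta, increments, c, h)
    return np.exp(theta * u) * regularized_inverse(g, u, m_n)
```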

4 Simulation Studies

In this part we provide some simulation results to illustrate the behaviour of our estimator. We assume that the Lévy measure is given by $\nu(dx) = \lambda \frac{1}{\mu_0} e^{-\frac{1}{\mu_0} x}\,dx$. Then $(J_t)_{t \geq 0}$ is a compound Poisson process with Poisson intensity λ and individual claim sizes exponentially distributed with mean $\mu_0$. Let the premium rate c satisfy $c > \lambda\mu_0$. In this case, the survival probability is given by

(4.1) $$\Phi(u) = 1 - \frac{\lambda\mu_0}{c}\, e^{-\left(\frac{1}{\mu_0} - \frac{\lambda}{c}\right)u}, \quad u \geq 0$$

Because $g(u) = \Phi'(u) = \frac{\lambda\mu_0}{c}\left(\frac{1}{\mu_0} - \frac{\lambda}{c}\right) e^{-\left(\frac{1}{\mu_0} - \frac{\lambda}{c}\right)u}$ and $c > \lambda\mu_0$, we have $0 < g(u) \leq \frac{\lambda\mu_0}{c}\left(\frac{1}{\mu_0} - \frac{\lambda}{c}\right)$, so g(u) satisfies the polynomial growth condition (1.3).

First, we consider the simulation of the survival probability when the initial surplus u=0. Let us take c=λ=10, $\mu_0 = \frac{1}{2}$ and h=0.1. Obviously, we have Φ(0)=0.5 by (4.1). In Figure 1, we plot the true survival probability point and the mean estimates with sample sizes n=1,000; 8,000; 10,000; 50,000; 100,000; 500,000 at u=0, computed from 500 simulation experiments.

Figure 1 True point and mean points.

Figure 1 shows that the results improve as the sample size increases. To give a more precise picture, we use the following tables to compare the data.

Table 1 shows the true value and the mean estimates with sample sizes n=1,000; 8,000; 10,000; 50,000; 100,000; 500,000 at u=0, computed from 500 simulation experiments. Table 2 shows the errors between the mean estimates and the true value.

Table 1 True value and estimated values of the survival probability at u=0.

Table 2 Errors between the true value and the estimated values in Table 1.

From Table 2, we can see that the errors are very small when the sample size n≥8,000.
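For concreteness, a minimal sketch of this u=0 experiment (our own illustrative code, assuming NumPy) is:

```python
import numpy as np

rng = np.random.default_rng(2017)
c, lam, mu0, h = 10.0, 10.0, 0.5, 0.1    # parameters of this section
true_phi0 = 1.0 - lam * mu0 / c          # = 0.5 by (4.1)

def estimate_phi0(n):
    """One replication of the estimator (2.3) at u = 0."""
    counts = rng.poisson(lam * h, size=n)  # claim counts per window of length h
    increments = np.array([rng.exponential(mu0, k).sum() for k in counts])
    rho_n = increments.sum() / (c * n * h)                 # (2.2)
    return 1.0 - rho_n                                     # (2.3)

for n in (1_000, 8_000, 10_000, 50_000, 100_000, 500_000):
    mean_est = np.mean([estimate_phi0(n) for _ in range(500)])
    print(n, mean_est, abs(mean_est - true_phi0))
```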

For initial surplus u>0, we plot the true survival probability curve and the mean estimates with sample sizes n=1,000; 8,000; 10,000; 50,000; 100,000; 500,000. Let us take c=λ=10, $\mu_0 = \frac{1}{2}$, h=0.1 and θ=0.075.

Tables 3 and 4 give some data to show the behaviour of our estimator when the initial surplus u>0. Table 3 shows true values and simulation results for u>0. Table 4 shows the errors between the simulation results and the true values.

Table 3 True values and estimated values of the survival probability at u>0.

Table 4 Errors between the true values and the estimated values in Table 3.

Figure 2 shows that the results improve as the sample size increases. However, when the sample size n<10,000, the simulation results are not good; this reflects the slow convergence rate of the estimator. Therefore, our simulations need a large sample size (e.g. n≥100,000). From Table 4, the errors are very small when the sample size n≥100,000.

Figure 2 True curve and mean points.

5 Conclusions

In this paper, we adopted the regularised Laplace inversion technique, since it offers an easy way to construct an estimator of the survival probability from an estimator of its Laplace transform. The rate of convergence for the estimator is logarithmic. Although this convergence rate is slower than those in Croux & Veraverbeke (1990), Bening & Korolev (2002) and Zhang & Yang (2013), the procedure is easy to implement in practice.

Acknowledgements

This research is supported by the Chinese NSFC (11571189, 11501304).

Appendix

Proof of Proposition 2.3: The first derivative of $\widetilde{\psi}_{J,n}(s)$, denoted by $\widetilde{\psi}_{J,n}'(s)$, is

(A.1) $$\widetilde{\psi}_{J,n}'(s) = \frac{1}{h}\, \frac{\sum_{i=1}^n \Delta_i^n J\, e^{-s\Delta_i^n J}}{\sum_{i=1}^n e^{-s\Delta_i^n J}}$$

The second derivative of $\widetilde{\psi}_{J,n}(s)$, denoted by $\widetilde{\psi}_{J,n}''(s)$, is

(A.2) $$\widetilde{\psi}_{J,n}''(s) = \frac{1}{h}\, \frac{\left(\sum_{i=1}^n \Delta_i^n J\, e^{-s\Delta_i^n J}\right)^2 - \sum_{i=1}^n \left(\Delta_i^n J\right)^2 e^{-s\Delta_i^n J}\, \sum_{i=1}^n e^{-s\Delta_i^n J}}{\left(\sum_{i=1}^n e^{-s\Delta_i^n J}\right)^2} = -\frac{1}{h}\, \frac{\sum_{\{1 \leq i \neq j \leq n\}} \left(\Delta_i^n J - \Delta_j^n J\right)^2 e^{-s\Delta_i^n J}\, e^{-s\Delta_j^n J}}{\left(\sum_{i=1}^n e^{-s\Delta_i^n J}\right)^2}$$

By Taylor expansion, we obtain

(A.3) $$s - \frac{1}{c}\widetilde{\psi}_{J,n}(s) = s\left(1 - \frac{1}{c}\widetilde{\psi}_{J,n}'(0)\right) - \frac{s^2}{2}\, \frac{1}{c}\, \widetilde{\psi}_{J,n}''\left(s^*\right), \quad 0 \leq s^* \leq s$$

By (A.1), we have

(A.4) $$\widetilde{\psi}_{J,n}'(0) = \frac{1}{nh}\sum_{i=1}^n \Delta_i^n J$$

By (A.2) and (A.4), we have

(A.5) $$s - \frac{1}{c}\widetilde{\psi}_{J,n}(s) = s\left(1 - \widetilde{\rho}_n\right) + \frac{s^2}{2ch}\, \frac{\sum_{\{1 \leq i \neq j \leq n\}} \left(\Delta_i^n J - \Delta_j^n J\right)^2 e^{-s^*\Delta_i^n J}\, e^{-s^*\Delta_j^n J}}{\left(\sum_{i=1}^n e^{-s^*\Delta_i^n J}\right)^2}$$

Thanks to c>μ, we have

(A.6) $$0 \leq s\left(1 - \widetilde{\rho}_n\right) \leq s - \frac{1}{c}\widetilde{\psi}_{J,n}(s) \leq s$$

almost surely.

Therefore, by (2.6) and (A.6), $\widetilde{L_\Phi}(s) \geq (1 - \widetilde{\rho}_n)/s$ almost surely, and the right-hand side is not square-integrable near s=0; hence $\widetilde{L_\Phi} \notin L^2(0,\infty)$.

Proof of Theorem 3.1: First, we have

$$\widetilde{\rho}_n = \frac{1}{c}\, \frac{1}{nh}\sum_{i=1}^n \Delta_i^n J$$

Because J has independent and stationary increments, the random variables $\{\Delta_i^n J,\ i = 1, 2, \ldots, n\}$ are independent and identically distributed.

Thanks to c>μ, $\int_0^\infty z^2\,\nu(dz) < \infty$ and (2.4), we have

$$\mathbf{E}[\Delta_i^n J] = \mathbf{E}[J_h] = h\mu < \infty$$

$$\mathbf{Var}[\Delta_i^n J] = \mathbf{Var}[J_h] = h\int_0^\infty z^2\,\nu(dz) < \infty$$

By the law of large numbers and the central limit theorem, it follows that (3.1) and (3.3) hold.

Next, we consider (3.2) and (3.4).

Because of

$$\widetilde{\phi}_n(s) = \frac{1}{n}\sum_{i=1}^n e^{-s\Delta_i^n J}$$

and the properties of $\{\Delta_i^n J,\ i = 1, 2, \ldots, n\}$, we have

(A.7) $$\widetilde{\phi}_n(s) \xrightarrow{\mathbf{P}} e^{-h\psi_J(s)}$$

and

(A.8) $$\sqrt{n}\left(\widetilde{\phi}_n(s) - e^{-h\psi_J(s)}\right) \xrightarrow{\mathbf{D}} \mathcal{N}\left(0,\ e^{-h\psi_J(2s)} - e^{-2h\psi_J(s)}\right)$$

as $n \to \infty$.

Since $-\frac{1}{h}\log(x)$ is a continuous function on (0, ∞) whose first derivative never vanishes there, we may apply the continuous mapping theorem and the delta method to obtain (3.2) and (3.4).
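For the reader's convenience, the delta-method step can be made explicit: applying $x \mapsto -\frac{1}{h}\log x$ to (A.8) at the point $x_0 = e^{-h\psi_J(s)}$ multiplies the asymptotic variance by the squared derivative $\left(\frac{1}{hx_0}\right)^2 = \frac{e^{2h\psi_J(s)}}{h^2}$, so that

$$\frac{e^{2h\psi_J(s)}}{h^2}\left(e^{-h\psi_J(2s)} - e^{-2h\psi_J(s)}\right) = \frac{1}{h^2}\left(e^{h(2\psi_J(s) - \psi_J(2s))} - 1\right)$$

which is exactly the variance appearing in (3.4).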

For the proof of Theorem 3.3 we need the following Lemma A.1.

Lemma A.1 Suppose that, for a function $f \in L^2(0,\infty)$ with derivative f′, $\int_0^\infty \left[t\left(t^{\frac{1}{2}} f(t)\right)'\right]^2 t^{-1}\,dt < \infty$. Then

$$\left\| L_n^{-1} L_f - f \right\| = O\left((\log n)^{-\frac{1}{2}}\right) \quad (n \to \infty)$$

Lemma A.1 follows essentially from the proof of Theorem 3.2 in Chauveau et al. (1994); it shows that $L_n^{-1}$ acts asymptotically as a Laplace inversion in the ISE sense.

Proof of Theorem 3.3: Let us first observe that (see (2.13))

(A.9) $$\left\| \widetilde{\Phi}_{m(n)} - \Phi \right\|_B^2 \leq e^{2\theta B} \left\| \widetilde{\Phi}_{\theta,m(n)} - \Phi_\theta \right\|_B^2 \leq 2e^{2\theta B}\left\{ \left\| L_{m(n)}^{-1}\widetilde{L_{\Phi_\theta}} - L_{m(n)}^{-1}L_{\Phi_\theta} \right\|^2 + \left\| \Phi_{\theta,m(n)} - \Phi_\theta \right\|^2 \right\}$$

where $\Phi_{\theta,m(n)} = L_{m(n)}^{-1} L_{\Phi_\theta}$.

In order to deal with the bias part, i.e. the second term on the right-hand side of (A.9), let us write $\Phi_\theta' = g_\theta$ and note that

$$\int_0^\infty \left[x\left(\sqrt{x}\,\Phi_\theta(x)\right)'\right]^2 \frac{1}{x}\,dx \lesssim \left\|\Phi_\theta\right\|^2 + \int_0^\infty x^2 g_\theta^2(x)\,dx = \left\|\Phi_\theta\right\|^2 + \int_0^\infty x^2\left[g(x)e^{-\theta x} - \theta\Phi(x)e^{-\theta x}\right]^2 dx \lesssim \left\|\Phi_\theta\right\|^2 + \int_0^\infty x^2 e^{-2\theta x}\left(g^2(x) + \Phi^2(x)\right)dx$$

It is obvious that $g^2(x) + \Phi^2(x) \lesssim 1 + |x|^C$ for some integer-valued C. Thus

$$\int_0^\infty \left[x\left(\sqrt{x}\,\Phi_\theta(x)\right)'\right]^2 \frac{1}{x}\,dx < \infty$$

By Lemma A.1, we may conclude that

(A.10) $$\left\| \Phi_{\theta,m(n)} - \Phi_\theta \right\|^2 = O\left(\frac{1}{\log m(n)}\right), \quad n \to \infty$$

Next, let us consider the first term on the right-hand side of (A.9), the variance part. It is immediate from (2.1), (2.6), (2.10) and (2.11) that

(A.11) $$\left\| \widetilde{L_{\Phi_\theta}} - L_{\Phi_\theta} \right\|^2 = \int_0^\infty \left( \frac{(1-\rho)\left(\frac{1}{c}\widetilde{\psi}_{J,n}(s+\theta) - \frac{1}{c}\psi_J(s+\theta)\right)}{\left(s + \theta - \frac{1}{c}\widetilde{\psi}_{J,n}(s+\theta)\right)\left(s + \theta - \frac{1}{c}\psi_J(s+\theta)\right)} + \frac{\rho - \widetilde{\rho}_n}{s + \theta - \frac{1}{c}\widetilde{\psi}_{J,n}(s+\theta)} \right)^2 ds$$

Exploiting (A.6) and the fact that $\mathbf{P}\left(\left\{\omega \in \Omega;\ \widetilde{\rho}_n = 1\right\}\right) = 0$, it follows after some algebra that, almost surely, the right-hand side of (A.11) is bounded by

(A.12) $$\frac{2}{\left(1 - \widetilde{\rho}_n\right)^2}\, I_1 + \frac{2\left(\widetilde{\rho}_n - \rho\right)^2}{\left(1 - \widetilde{\rho}_n\right)^2}\, I_2$$

where

$$I_1 = \int_0^\infty \frac{1}{c^2}\, \frac{\left(\widetilde{\psi}_{J,n}(s+\theta) - \psi_J(s+\theta)\right)^2}{(s+\theta)^4}\,ds, \quad I_2 = \int_0^\infty \frac{1}{(s+\theta)^2}\,ds < \infty$$

By Theorem 3.1, it follows that

(A.13) $$I_1 = O_{\mathbf{P}}\left(\frac{1}{n}\right)$$

(A.14) $$\frac{2}{\left(1 - \widetilde{\rho}_n\right)^2} = O_{\mathbf{P}}(1)$$

(A.15) $$\frac{2\left(\widetilde{\rho}_n - \rho\right)^2}{\left(1 - \widetilde{\rho}_n\right)^2} = O_{\mathbf{P}}\left(\frac{1}{n}\right)$$

as $n \to \infty$.

Combining (A.12), (A.13), (A.14) and (A.15) yields

(A.16) $$\left\| \widetilde{L_{\Phi_\theta}} - L_{\Phi_\theta} \right\|^2 = O_{\mathbf{P}}\left(\frac{1}{n}\right), \quad n \to \infty$$

Combining (2.8), (A.10) and (A.16), we have

(A.17) $$\left\| \widetilde{\Phi}_{m(n)} - \Phi \right\|_B^2 = O_{\mathbf{P}}\left(\frac{m^2(n)}{n}\right) + O_{\mathbf{P}}\left(\frac{1}{\log m(n)}\right)$$

With the optimal choice $m(n) = \sqrt{n / \log n}$ balancing the two terms on the right-hand side of (A.17), the order becomes $O_P\left((\log n)^{-1}\right)$.
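Indeed, with this choice the two terms balance up to constants:

$$\frac{m^2(n)}{n} = \frac{1}{\log n}, \qquad \frac{1}{\log m(n)} = \frac{2}{\log n - \log\log n} = O\left(\frac{1}{\log n}\right), \quad n \to \infty$$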

References

Bening, V.E. & Korolev, V.Y. (2002). Nonparametric estimation of the ruin probability for generalized risk processes. Theory of Probability and its Applications, 47(1), 1–16.
Chauveau, D.E., Vanrooij, A.C.M. & Ruymgaart, F.H. (1994). Regularized inversion of noisy Laplace transforms. Advances in Applied Mathematics, 15(2), 186–201.
Croux, K. & Veraverbeke, N. (1990). Non-parametric estimators for the probability of ruin. Insurance: Mathematics and Economics, 9, 127–130.
Frees, E.W. (1986). Nonparametric estimation of the probability of ruin. Astin Bulletin, 16, 81–90.
Hipp, C. (1989). Estimators and bootstrap confidence intervals for ruin probabilities. Astin Bulletin, 19, 57–70.
Huzak, M., Perman, M., Sikic, H. & Vondracek, Z. (2004). Ruin probabilities and decompositions for general perturbed risk processes. Annals of Applied Probability, 14(3), 1378–1397.
Morales, M. (2007). On the expected discounted penalty function for a perturbed risk process driven by a subordinator. Insurance: Mathematics and Economics, 40(2), 293–301.
Mnatsakanov, R., Ruymgaart, L.L. & Ruymgaart, F.H. (2008). Nonparametric estimation of ruin probabilities given a random sample of claims. Mathematical Methods of Statistics, 17(1), 35–43.
Pitts, S.M. (1994). Nonparametric estimation of compound distributions with applications in insurance. Annals of the Institute of Statistical Mathematics, 46(3), 537–555.
Politis, K. (2003). Semiparametric estimation for non-ruin probabilities. Scandinavian Actuarial Journal, 2003(1), 75–96.
Shimizu, Y. (2012). Non-parametric estimation of the Gerber–Shiu function for the Wiener–Poisson risk model. Scandinavian Actuarial Journal, 2012(1), 56–69.
Zhang, Z. & Yang, H. (2013). Nonparametric estimate of the ruin probability in a pure-jump Lévy risk model. Insurance: Mathematics and Economics, 53(1), 24–35.