1. Introduction
In the classical Lundberg–Cramér models of collective risk theory, insurance companies keep their reserves in cash (or in a bank account, typically paying zero interest). In more recent, more realistic models, it is assumed that the capital reserves may vary not only due to the business activity but also due to a stochastic interest rate. In other words, an insurance company may invest all or just part of its capital reserve in risky assets. These models lead to the important conclusion that the financial risk contributes enormously to the asymptotic behavior of the ruin probabilities: even under the Cramér condition, they do not decay exponentially as the initial capital grows to infinity, and ruin happens with probability 1 if the volatility of the risky asset is large with respect to the instantaneous interest rate. Moreover, these models allow quantification of what proportion of risky investments may lead to imminent ruin.
Due to their practical importance, ruin problems with investments have become a vast and quickly growing chapter of collective risk theory, studying numerous models at various levels of generality. The ruin problem can be treated in at least two different ways: using the techniques of integro-differential equations (see, e.g., [Reference Belkina2, Reference Belkina, Konyukhova and Kurochkin4, Reference Frolova6, Reference Frolova, Kabanov and Pergamenshchikov7, Reference Kabanov and Pergamenshchikov9]), or results from implicit renewal theory (see [Reference Kabanov and Pergamenshchikov10] and references therein). The first approach, which allows the calculation not only of asymptotics but also of ruin probabilities for given values of the capital reserves, raises some interesting mathematical questions.
Our paper is a complement to [Reference Frolova, Kabanov and Pergamenshchikov7] and [Reference Kabanov and Pergamenshchikov9], which respectively extend the Lundberg–Cramér models for non-life insurance and life insurance to the case where the capital of an insurance company is invested in a risky asset with the price dynamics given by a geometric Brownian motion. In both, the business activity is given by a compound Poisson process either with negative jumps and positive drift (non-life insurance), or with positive jumps and negative drift (life insurance or annuity payments in the early literature). Technically speaking, these two models are quite different: in the first case the downward crossing of zero may happen only at the instant of a jump (thus, the model can be reduced to a discrete-time one), while in the second case the downward crossing happens in a continuous way and reduction to a discrete-time model is not possible. Of course, in the classical setting, if jumps are exponentially distributed then a model with upward jumps can be transformed into a model with downward jumps and vice versa. In models with investment the duality arguments do not work and one needs to treat them separately. In the papers mentioned earlier it was shown that in the Lundberg-type model, i.e. with exponentially distributed jumps, the ruin probability decreases with rate
$C u^{-\beta} $
,
$C>0$
, when
$\beta\,:\!=\,2a/\sigma^2-1>0$
, and ruin happens with probability 1 when
$\beta\le 0$
(the result for
$\beta=0$
was established in [Reference Pergamenshchikov and Zeitouni11]).
Here we consider a model with investments in which the corresponding compound Poisson process has positive and negative jumps. Such models without investments are quite frequent in the classical actuarial literature to describe the evolution of the surplus process of an insurance company, combining two types of business activity [Reference Saxén13] or having not only random losses but a random income (see [Reference Albrecher, Gerber and Yang1] and references therein).
We prove, under some minor assumptions on the distributions of jumps, that the ruin probability as a function of the initial value is smooth and satisfies an integro-differential equation (IDE). For the more specific case of exponential distributions we show that the ruin probability is a solution of a fourth-order ordinary differential equation (ODE). Asymptotic analysis of this ODE leads us to our main result (Theorem 2.1), which looks exactly like those of [Reference Frolova, Kabanov and Pergamenshchikov7, Reference Kabanov and Pergamenshchikov9]. Though the arguments follow the same general line as [Reference Kabanov and Pergamenshchikov9], they are different in many aspects. Several seemingly new results are obtained. In particular, for the Sparre Andersen model with investments, where the jump instants form a renewal process, we derive a sufficient condition for ruin with probability 1 (Theorem 2.2) and also a lower asymptotic bound (Proposition 4.1).
2. The model
Let
$(\Omega,{\mathcal{F}},{\bf F}=({\mathcal{F}}_t)_{t\ge 0},{\mathbb P})$
be a stochastic basis, i.e. a filtered probability space, where we are given a Wiener process
$W=(W_t)_{t\ge 0}$
and
$P=(P_t)_{t\ge 0}$
an independent compound Poisson process with drift c and successive jump instants
$T_n$
. We denote by
$p_P(\text{d} t,\text{d} x)$
the jump measure of the latter with its mean measure
$\Pi_P(\text{d} x) \, \text{d} t$
, where
$\Pi_P({\mathbb R})<\infty$
. We assume that
$\Pi_P({\mathbb R}_+)>0$
and
$\Pi_P({\mathbb R}_-)>0$
(some comments for the cases where
$\Pi_P$
charges only one of the half-axes will also be given).
We consider the process
$X=X^{u}$
that is the solution of the non-homogeneous linear stochastic equation
$X_t = u + \int_0^t X_s \, \text{d} R_s + P_t$
, where
$R_t=at+\sigma W_t$
is the relative price process of the risky asset, P is a compound Poisson process with drift
$c\in {\mathbb R}$
representing the business activity of the insurance company, and
$u>0$
is the initial capital at time zero; we assume that
$\sigma^2>0$
. The process X can be written in ‘differential’ form as
$\text{d} X_t= X_t \text{d} R_t + \text{d} P_t$
,
$X_0=u$
, where
$\text{d} P_t = c \text{d} t + \int x {p_P}(\text{d} t,\text{d} x)$
,
$P_0=0$
.
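As a numerical aside (not part of the original analysis), the dynamics above can be simulated directly: between jumps we apply an Euler step to $\text{d} X_t=X_t\,\text{d} R_t+c\,\text{d} t$, and the jumps of the two compound Poisson components are superposed. All parameter values below are illustrative assumptions, and the finite-horizon frequency of ruin only approximates $\Psi(u)$ from below.

```python
import math
import random

def estimate_ruin_prob(u, a=0.1, sigma=0.6, c=1.0, alpha1=1.0, mu1=1.0,
                       alpha2=1.0, mu2=1.0, horizon=20.0, dt=0.01,
                       n_paths=200, seed=1):
    """Monte Carlo estimate of P(ruin before `horizon`) for the reserve process
    dX = X dR + dP with R_t = a t + sigma W_t and exponential two-sided jumps.

    Downward jumps (claims) arrive at rate alpha1 with Exp(mean mu1) sizes,
    upward jumps (random income) at rate alpha2 with Exp(mean mu2) sizes.
    """
    rng = random.Random(seed)
    ruined = 0
    for _ in range(n_paths):
        x, t = u, 0.0
        t_down = rng.expovariate(alpha1)   # next claim epoch
        t_up = rng.expovariate(alpha2)     # next income epoch
        while t < horizon and x > 0.0:
            # Euler step for the diffusive part plus the drift c of P
            x += x * (a * dt + sigma * rng.gauss(0.0, math.sqrt(dt))) + c * dt
            t += dt
            while t_down <= t:             # apply claims due by time t
                x -= rng.expovariate(1.0 / mu1)
                t_down += rng.expovariate(alpha1)
            while t_up <= t:               # apply income jumps due by time t
                x += rng.expovariate(1.0 / mu2)
                t_up += rng.expovariate(alpha2)
        if x <= 0.0:
            ruined += 1
    return ruined / n_paths
```

With these sample parameters $\beta=2a/\sigma^2-1<0$, so Theorem 2.1(ii) predicts eventual ruin with probability 1; the estimate over a finite horizon is of course smaller.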
In the actuarial context
$X=X^u$
represents the dynamics of the reserve of an insurance company combining both life and non-life insurance business and investing in a stock with the price given by a geometric Brownian motion
$S_t = S_0\text{e}^{(a-\sigma^2/2)t+\sigma W_t}$
solving the linear stochastic differential equation
$\text{d} S_t = S_t\text{d} R_t$
with initial value
$S_0$
. We assume without loss of generality that
$S_0=1$
. In this case we have, by the product formula,
$X^u_t=S_t\Big(u+\int_{(0,t]}S^{-1}_s \, \text{d} P_s\Big). \qquad (2.1)$
In classical collective risk theory the process P is usually represented in the form
$P_t=ct+\sum_{i=1}^{N_t}\xi_i, \qquad (2.2)$
where
$N_t\,:\!=\,{p_P([0,t]\times {\mathbb R})}$
is a Poisson process with intensity
$\alpha=\Pi_P({\mathbb R})$
independent of the random variables
$\xi_n=\Delta P_{T_n}$
, where
$T_n$
are successive instants of jumps of N; the independent and identically distributed (i.i.d.) random variables
$\xi_n$
have the probability distribution
$F(\text{d} x)=\Pi_P(\text{d} x)/\Pi_P({\mathbb R})$
. Alternatively, we can represent P using independent compound Poisson processes given by the sums in the representation
$P_t=ct+\sum_{i=1}^{N^2_t}\xi^2_i-\sum_{i=1}^{N^1_t}\xi^1_i$
. The Poisson processes
$N^1$
and
$N^2$
with intensities
$\alpha_1=\Pi_P({\mathbb R}_-)$
and
$\alpha_2=\Pi_P({\mathbb R}_+)$
count, respectively, downward and upward jumps of P and have successive jump instants
$T^1_n$
,
$T^2_n$
. The corresponding jump sizes
$\xi^1_n$
,
$\xi^2_n$
are positive and have the distribution functions
$F_1(x)\,:\!=\,\Pi_P\big(\mathopen{]}-x,0\mathclose{]} \big)/\alpha_1$
,
$F_2(x)\,:\!=\,\Pi_P\big(\mathopen{]}0,x\mathclose{]} \big)/\alpha_2$
for
$x>0$
.
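The equivalence of the two representations of P can be sanity-checked numerically. The sketch below (with illustrative parameters, none taken from the paper) samples $P_t$ via the two independent Poisson processes $N^1$, $N^2$ and compares the empirical mean with the exact value ${\mathbb E} P_t=ct+(\alpha_2\mu_2-\alpha_1\mu_1)t$ for exponential jump sizes with means $\mu_1$, $\mu_2$.

```python
import random

def poisson_count(rate, t, rng):
    """Number of Poisson arrivals in [0, t], built from exponential interarrivals."""
    n, s = 0, rng.expovariate(rate)
    while s <= t:
        n += 1
        s += rng.expovariate(rate)
    return n

def sample_P(t, c, alpha1, mu1, alpha2, mu2, rng):
    """One draw of P_t = c t + sum_{i<=N^2_t} xi^2_i - sum_{i<=N^1_t} xi^1_i."""
    n1 = poisson_count(alpha1, t, rng)          # downward jump count
    n2 = poisson_count(alpha2, t, rng)          # upward jump count
    down = sum(rng.expovariate(1.0 / mu1) for _ in range(n1))
    up = sum(rng.expovariate(1.0 / mu2) for _ in range(n2))
    return c * t + up - down
```

For instance, with $t=1$, $c=0.5$, $\alpha_1=\mu_1=1$, $\alpha_2=2$, $\mu_2=0.5$, the exact mean is $0.5$, and the empirical average over many draws should be close to it.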
Let
$\tau^u\,:\!=\,\inf \{t: X^u_t\le 0\}$
(the instant of ruin),
$\Psi (u)\,:\!=\,{\mathbb P}(\tau^u<\infty)$
(the ruin probability), and
$\Phi(u)\,:\!=\,1-\Psi (u)$
(the survival probability).
Note that in (2.2) N can be an independent renewal process, i.e. the counting process in which the lengths of the interarrival intervals
$T_n-T_{n-1}$
form an i.i.d. sequence. In collective risk theory this corresponds to the Sparre Andersen model.
The main aims of the present work are (i) to get a result on smoothness in u of the ruin probability
$\Psi$
justifying that
$\Psi$
solves an IDE in the classical sense, and (ii) to deduce from the latter, in the special case of exponentially distributed jumps, an ODE and use it to obtain the exact asymptotic of the ruin probability as
$u\to \infty$
.
Of course, in a model with only upward jumps (i.e. with
$\Pi_P\big(\mathopen{]}-\infty,0\mathclose{[}\big) = 0$
), if
$c\ge 0$
there is no ruin; the nontrivial case where
$c<0$
is studied in detail in [Reference Kabanov and Pergamenshchikov9, Reference Pergamenshchikov and Zeitouni11]. The model in [Reference Frolova, Kabanov and Pergamenshchikov7] deals with downward jumps, i.e. with the non-life insurance business, but the question of smoothness is not discussed and needs to be revisited (in [Reference Wang and Wu14] the smoothness is established under restrictions on the parameters).
Throughout the paper we use the following notation:
$\beta\,:\!=\, \frac{2a}{\sigma^2}-1$
,
$\kappa\,:\!=\, a - \frac{1}{2}\sigma^2 = \frac{1}{2}\sigma^2\beta$
, and
$\eta_t\,:\!=\,\ln S_t=\kappa t+\sigma W_t$
.
The following simple result holds for any Sparre Andersen model with investments.
Lemma 2.1. Suppose that there exists
$\beta^{\prime}\in \mathopen{]}0,\beta\wedge 1\mathclose{[}$
such that
$\mathbb{E}\big(\xi^1_1\big)^{\beta^{\prime}}<\infty$
. Then
$\Psi(u)\to 0$
as
$u\to \infty$
.
Proof. Let
$\tilde \Psi(u)$
be the ruin probability for the reserve process
$\tilde X^u$
corresponding to the model where the business of the company is given by
$\tilde P_t=-|c|t-\sum_{i=1}^{N_t}\xi^1_i$
. Then we have
$\Psi(u)\le \tilde \Psi(u)\le {\mathbb P}(Z_{\infty}>u)$
, where
$Z_{\infty}\,:\!=\,|c|\int_0^{\infty} \text{e}^{-\kappa s-\sigma W_s} \, \text{d} s+\sum_{n=1}^{\infty}\xi^1_n\, \text{e}^{-\kappa T_n-\sigma W_{T_n}},$
and
$\kappa s + \sigma W_s=\frac{1}{2}\sigma^2s\big(\beta+ \frac{2}{\sigma} \, \frac{W_s}{s}\big)$
. By the law of large numbers, for almost all
$\omega$
there exists
$s_0(\omega)$
such that
$\beta+ \frac{2}{\sigma} \, \frac{W_s}{s} > \frac{\beta}{2}$
when
$s\ge s_0(\omega)$
. Thus, the integral on the right-hand side above is a finite random variable. Also,
$\text{e}^{-\kappa T_n-\sigma W_{T_n}} = \prod_{k=1}^n \zeta_k$
, where
$\zeta_k\,:\!=\,\text{e}^{-\kappa (T_k-T_{k-1})-\sigma (W_{T_k}-W_{T_{k-1}})}, \qquad T_0\,:\!=\,0,$
form an i.i.d. sequence. Note that
${\mathbb E} \zeta_1^\beta\! =\! 1$
. Hence,
${\mathbb E} \zeta_1^{\beta^{\prime}}\! <\! 1$
and
${\mathbb E} \sum_{n=1}^\infty\!(\xi^1_n)^{\beta^{\prime}}\! \prod_{k=1}^n \zeta^{\beta^{\prime}}_k\! <\infty$
. That is,
$\sum_{n=1}^\infty\Big(\xi^1_n \prod_{k=1}^n \zeta_k\Big)^{\beta^{\prime}} <\infty$
almost surely (a.s.). But then
$\sum_{n=1}^\infty\xi^1_n\prod_{k=1}^n \zeta_k <\infty$
(a.s.). Hence $Z_\infty<\infty$ a.s., and ${\mathbb P}(Z_\infty>u)\to 0$ as $u\to \infty$.
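The identity ${\mathbb E}\zeta_1^\beta=1$ used in the proof can be checked by conditioning on $T_1$ and applying the Gaussian moment formula ${\mathbb E}\,\text{e}^{\lambda W_t}=\text{e}^{\lambda^2 t/2}$:

```latex
\mathbb{E}\,\zeta_1^{\beta}
  = \mathbb{E}\,\mathrm{e}^{-\beta\kappa T_1-\beta\sigma W_{T_1}}
  = \mathbb{E}\Big[\mathrm{e}^{-\beta\kappa T_1}\,
      \mathbb{E}\big(\mathrm{e}^{-\beta\sigma W_{T_1}}\,\big|\,T_1\big)\Big]
  = \mathbb{E}\,\mathrm{e}^{(-\beta\kappa+\beta^{2}\sigma^{2}/2)\,T_1}
  = 1,
```

since $\kappa=\frac{1}{2}\sigma^2\beta$ makes the exponent vanish.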
The following theorem is the main result of the paper.
Theorem 2.1. Let
$F_1(x) = 1 - \text{e}^{-x/\mu_1}$
and
$F_2(x) = 1 - \text{e}^{-x/\mu_2}$
for
$x>0$
. Assume that
$\sigma>0$
and P is not an increasing process.
-
(i) If
$\beta>0$ then, for some
$K>0$ ,
$\Psi (u)= Ku^{-\beta}(1+o(1))$ as
$u\to \infty$ .
-
(ii) If
$\beta\le 0$ , then
$\Psi (u)=1$ for all
$u>0$ .
Property (ii), corresponding to the case where the volatility is large (with respect to the drift), also holds for a more general Sparre Andersen model with investments under minor additional assumptions on the distribution of the interarrival intervals.
Theorem 2.2. Suppose that P is a compound renewal process with drift which is not an increasing process (i.e.
$c<0$
or
${\mathbb P}(\xi_1<0)>0$
) such that
${\mathbb E} |\xi_1|^{\varepsilon} < \infty$
and
${\mathbb E} \text{e}^{{\varepsilon} T_1} < \infty$
for some
${\varepsilon}>0$
. If
$\beta\le 0$
, then
$\Psi(u)=1$
for any
$u>0$
.
The proof of Theorem 2.2 is rather straightforward and will be given in the next section. Proving property (i) requires a bit more work. In Section 4 we establish a lower asymptotic bound needed to select suitable components of the fundamental solution of the ODE for the ruin probability. In Section 5 we give a sufficient condition for the smoothness of the latter, and in Section 6 we derive an integro-differential equation for it. In the case of exponential distributions of jump sizes, the solutions of this IDE also solve an ODE of the fourth order, which we analyze in the concluding Section 7.
3. Large-volatility case: Ruin with probability 1
First, we consider a general setting where the random variables
$T>0$
and
$\xi$
, and the Wiener process W, are independent,
$\eta_t\,:\!=\,\sigma W_t+\kappa t$
, and where the objects of interest are the random variables
$A \,:\!=\, \text{e}^{\eta_{T}}$
and
$B\,:\!=\,\xi+c\,\int^{T}_{0} \text{e}^{\eta_{T}-\eta_{v}} \, \text{d} v$
. In our proofs we shall use conditioning with respect to the final value of the Wiener process on a given finite interval. For convenience, we have formulated the required result as the following lemma (see, e.g., [Reference Revuz and Yor12, Chapter 1]).
Lemma 3.1. Let
$W=(W_s)_{s\le t}$
be a Wiener process on [0, t]. Then the conditional law of W given
$W_t=x$
is the same as the unconditional law of the Brownian bridge on [0, t] ending at x, i.e. of the process
$B^x=(B^x_s)_{s\le t}$
with
$B^x_s=W_s-\frac{s}{t}W_t+\frac{s}{t}x$
,
$s\le t$
.
Lemma 3.2. Suppose that
$c<0$
or
${\mathbb P}(\xi<0)>0$
. Then the ratio
$\frac{B}{A} = \text{e}^{-\eta_{T}}\xi + c \int_0^{T} \text{e}^{-\eta_s} \text{d} s$
is unbounded from below on the (non-null) set
$\{W_{T}<0\}$
.
Proof. Let
$c<0$
. Take arbitrary
$N>0$
. According to Lemma 3.1,
${\mathbb P}\bigg(\frac{B}{A}<-N \;\bigg|\; T=t,\ W_{T}=x\bigg)={\mathbb P}\big(\text{e}^{-\kappa t-\sigma x}\,\xi+c\,\zeta^x_t<-N\big), \qquad (3.1)$
where the random variable
$\zeta^x_t\,:\!=\,\int_0^{t} \text{e}^{-\kappa s-\sigma (W_s-\frac{s}{t}W_t+\frac{s}{t}x)} \, \text{d} s \qquad (3.2)$
is independent of
$\xi$
. The law of
$\zeta^x_t$
is a probability measure charging every open interval in
${\mathbb R}_+$
. Therefore, the right-hand side of (3.1) is strictly positive for every
$(t,x)\in \mathopen{]}0,\infty\mathclose{[}\times {\mathbb R}$
. Integrating with respect to the law of
$(T,W_{T})$
, we get that
$\frac{B}{A}$
is unbounded from below.
Let
${\mathbb P}(\xi<0)>0$
. Take
${\varepsilon}>0$
and
$ u>0$
such that the probabilities
${\mathbb P}(\xi\le -{\varepsilon})$
and
${\mathbb P}(u\le T)$
are strictly positive. Then

for all sufficiently small x, namely, satisfying the inequality
${\varepsilon} \text{e}^{-\kappa u-\sigma x}>N$
. Since the law of
$(T,W_{T})$
is a probability measure under which every set
$[0,T]\times \mathopen{]}-\infty,y\mathclose{]}$
,
$y\in {\mathbb R}$
, is non-null, the result follows.
Lemma 3.3. Let
$\kappa\le 0$
. If
${\mathbb E} |\xi|^{\varepsilon} < \infty$
and
${\mathbb E} \text{e}^{{\varepsilon} T} < \infty$
for some
${\varepsilon}>0$
, then
${\mathbb E} |B|^\delta < \infty$
for some
$\delta\in \mathopen{]}0,1\mathclose{[}$
.
Proof. Note that
$\eta_{T}-\eta_s\le \sigma (W_{T}-W_s)$
. For any
$\delta\in \mathopen{]}0,1\mathclose{[}$
,

Let
$m(\text{d} t)$
denote the distribution of T. Substituting the density of distribution of the running maximum of the Wiener process, we get

Our assumptions imply that
${\mathbb E}|B|^{\delta}<\infty$
for all sufficiently small
$\delta>0$
.
As in [Reference Kabanov and Pergamenshchikov9], the arguments in the proof of Theorem 2.2 are based on the ergodic property of the discrete-time autoregressive process
$\big(\tilde X_n^u\big)_{n\ge 1}$
with random coefficients, which is defined recursively by the relations
$\tilde X^u_0=u, \qquad \tilde X^u_n=A_n\tilde X^u_{n-1}+B_n, \quad n\ge 1, \qquad (3.3)$
where
$(A_n,B_n)_{n\ge 1}$
is an i.i.d. sequence in
${\mathbb R}^2$
. For the following result see [Reference Pergamenshchikov and Zeitouni11, Proposition 7.1].
Lemma 3.4. Suppose that
${\mathbb E} |A_1|^\delta < 1$
and
${\mathbb E} |B_1|^\delta < \infty$
for some
$\delta \in \mathopen{]}0,1\mathclose{[}$
. Then, for any
$u\in {\mathbb R}$
, the sequence
$\tilde X_n^u$
converges in
$L^\delta$
(and hence in probability) to the random variable
$\tilde X_\infty^0=\sum_{k=1}^\infty B_k\prod_{j=1}^{k-1}A_j$
and, for any bounded uniformly continuous function f,
$\frac{1}{n}\sum_{k=1}^{n} f\big(\tilde X_k^u\big)\ \to\ {\mathbb E} f\big(\tilde X_\infty^0\big) \quad \text{in probability as } n\to \infty. \qquad (3.4)$
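The mechanism behind Lemma 3.4 is transparent in a deterministic toy version of the recursion (3.3) with constant coefficients $A_n\equiv a$, $B_n\equiv b$, $|a|<1$ (chosen purely for illustration): the influence of the initial value dies out geometrically and the iterates approach $b/(1-a)$, mirroring the convergence of $\tilde X^u_n$ to a limit not depending on u.

```python
def affine_recursion(u, coeffs):
    """Iterate X_n = A_n X_{n-1} + B_n starting from X_0 = u;
    `coeffs` is a sequence of pairs (A_n, B_n).  Returns [X_1, ..., X_N]."""
    x, out = u, []
    for a, b in coeffs:
        x = a * x + b
        out.append(x)
    return out
```

With $A_n\equiv\frac12$, $B_n\equiv 1$ the iterates approach $B/(1-A)=2$ from any starting point, and two trajectories started at different initial values merge geometrically fast.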
Corollary 3.1. Suppose that
${\mathbb E} |A_1|^\delta < 1$
and
${\mathbb E} |B_1|^\delta < \infty$
for some
$\delta\in \mathopen{]}0,1\mathclose{[}$
.
(i) If
${\mathbb P}\Big(\tilde X_\infty^0<0\Big)>0$
, then
$\inf_{k\ge 1}\tilde X_k^u<0$
.
(ii) If
$A_1>0$
and
$\frac{B_1}{A_1}$
is unbounded from below, then
$\inf_{k\ge 1}\tilde X_k^u<0$
.
Proof.
(i) Let
$f(x)\,:\!=\,-\mathbf{1}_{\{x<-1\}}+x \mathbf{1}_{\{-1\le x<0\}}$
. Then
${\mathbb E} f\big(\tilde X_\infty^0\big) < 0$
and (3.4) may hold only if
$\inf_{k\ge 1}\tilde X_k^u<0$
.
(ii) Put
$\tilde X_\infty^{0,1}\,:\!=\,\sum_{n= 2}^\infty B_n\prod_{j=2}^{n-1}A_j$
. Then
$\tilde X_\infty^0=B_1+A_1\, \tilde X_\infty^{0,1}=A_1\Big(\tilde X_\infty^{0,1} + \frac{B_1}{A_1}\Big)$
. Since
$\frac{B_1}{A_1}$
and
$\tilde X_\infty^{0,1}$
are independent random variables and
$\frac{B_1}{A_1}$
is unbounded from below, so is the sum
$\tilde X_\infty^{0,1} + \frac{B_1}{A_1}$
. It follows that the probability
${\mathbb P}\big(\tilde X_\infty^0<0\big) > 0$
, and we can use (i).
Proof of Theorem 2.2. Let
$\tilde X_n=\tilde X^u_n\,:\!=\,X^u_{T_n}$
, where
$X^u$
is the process given by (2.1) and (2.2) where N is a renewal process. In this case,
$A_n\,:\!=\,\text{e}^{\eta_{T_n}-\eta_{T_{n-1}}}, \qquad B_n\,:\!=\,\xi_n+c\int_{T_{n-1}}^{T_n}\text{e}^{\eta_{T_n}-\eta_{v}} \, \text{d} v. \qquad (3.5)$
Since the Wiener process W and P (a compound renewal process with drift) are independent,
$(A_k,B_k)_{k\ge 1}$
is an i.i.d. sequence.
$\tilde X_n=u\prod_{k=1}^{n} A_k+\sum_{k=1}^{n}B_k\prod_{j=k+1}^{n}A_j. \qquad (3.6)$
Clearly,
$\tilde X_{n}=A_n\tilde X_{n-1}+B_n$
, i.e.
$\tilde X_{n}$
satisfies (3.3).
In the case where
$\beta < 0$
we get the result by virtue of Lemmas 3.2 and 3.3 as well as Corollary 3.1(ii), taking into account that
${\mathbb E} A_1^{\delta}={\mathbb E}\, \text{e}^{\delta\kappa T_1+\frac{1}{2}\delta^2\sigma^2 T_1}={\mathbb E}\, \text{e}^{\frac{1}{2}\sigma^2\delta (\beta+\delta) T_1},$
and, therefore,
${\mathbb E} A_1^{\delta} < 1$
for any
$\delta \in \mathopen{]}0,-\beta\mathclose{[}$
.
In the case where
$\beta=0$
, we consider a suitably chosen random subsequence of
$\tilde X_n$
which satisfies a linear difference equation with the required properties.
Let
$\widehat X_n=\widehat X^u_n\,:\!=\,\tilde X_{\theta_n}$
, where
$\theta_0\,:\!=\,0$ and $\theta_n\,:\!=\, \inf\big\{k> \theta_{n-1}\colon {\mathcal{E}}_k<{\mathcal{E}}_{\theta_{n-1}}\big\}$ with ${\mathcal{E}}_k\,:\!=\,\prod_{j=1}^{k}A_j=\text{e}^{\eta_{T_k}}$
. Thus,
$\theta_n$
are ladder times for the random walk
$M_k=\ln {\mathcal{E}}_k$
. If
$\beta=0$
, then
$M_1=\sigma W_{T_1}$
and
$ {\mathbb E} M_1 = 0$
,
$ {\mathbb E} M_1^2 = \sigma^2\, {\mathbb E} T_1 < \infty$
. Therefore, there is a finite constant C such that
${\mathbb P}(\theta_1>n)\le C\, n^{-1/2}, \qquad n\ge 1, \qquad (3.7)$
(see [Reference Feller5, Theorem 1a, Chapter XII.7] and the remark before it).
In particular,
$\theta_n<\infty $
and the differences
$\theta_n-\theta_{n-1}$
form a sequence of finite independent random variables distributed as
$\theta_1$
. The discrete-time process
$\big(\widehat X^u_n\big)_{n\ge 0}$
solves the linear equation
$\widehat X_n^u=\widehat A_n\widehat X_{n-1}^u +\widehat B_n$
,
$n\ge 1$
,
$\widehat X_0^u=u$
, where
$\widehat A_n\,:\!=\,\prod_{k=\theta_{n-1}+1}^{\theta_n}A_k, \qquad \widehat B_n\,:\!=\,\sum_{k=\theta_{n-1}+1}^{\theta_n}B_k\prod_{j=k+1}^{\theta_n}A_j.$
By construction,
$ \widehat A_1<1$
and

According to Lemma 3.3,
${\mathbb E} \vert B_{1}\vert^{\delta} < \infty$
for some
$\delta \in \mathopen{]}0,1\mathclose{[}$
. Taking
$r \in \mathopen{]}0,\frac{\delta}{5}\mathclose{[}$
and defining the sequence
$l_{n}\,:\!=\,\big[n^{4r}\big]$
, we have, using the Chebyshev inequality and (3.7),

To apply Corollary 3.1(ii) it remains to check that the random variable
$\frac{\widehat B_1}{\widehat A_1}$
is unbounded from below. But Lemma 3.2 asserts that this ratio, which coincides with
$\frac{B_1}{A_1}$
on the set
$\big\{W_{T_1}<0\big\}$
of strictly positive probability, is unbounded from below on this set.
4. Lower asymptotic bound
The next result we need for our asymptotic analysis indicates that the ruin probability decreases at infinity not faster than a certain power function. The proof given below also covers the more general case where P is a compound renewal process with drift given by the representation (2.2) where N is a counting renewal process.
Proposition 4.1. Suppose that
$c<0$
or the random variable
$\xi_1$
is unbounded from below. Then there exists
$\beta_{*}>0$
such that
$\liminf_{u\to \infty}\,u^{\beta_*}\,\Psi(u)>0. \qquad (4.1)$
Proof. Let
$\tilde X_n=\tilde X^u_n\,:\!=\,X^u_{T_n}$
and let
$\vartheta^u\,:\!=\,\inf\big\{n\,:\, \tilde X^ u_n\le 0\big\}$
. If
$c<0$
then the ruin may happen between jump times, but in all cases
$\Psi(u)\,:\!=\, {\mathbb P}(\tau^u<\infty) \ge {\mathbb P}(\vartheta^u<\infty)$
.
Recall that for
$\big(\tilde X_n\big)$
we have (3.6) and (3.3) with
$\big(A_k,B_k\big)$
defined by (3.5).
For reals
$\varrho \in \mathopen{]}0,1\mathclose{[}$
and
$b > \big({\varrho^{2}(1-\varrho)}\big)^{-1}$
we define the sets
$\Gamma_{k}\,:\!=\,\big\{A_{k}\le \varrho, \ B_{k}\le \varrho^{-1}\big\}$
and
$D_{k}\,:\!=\,\big\{A_{k}\le \varrho^{-1},\ B_{k} \le -b\big\}$
. Note that
${\mathbb P} (\Gamma_{k})={\mathbb P} (\Gamma_{1})$
and
${\mathbb P} (D_{k})={\mathbb P} (D_{1})$
for all k.
Lemma 4.1. If there are
$\varrho$
and b such that
${\mathbb P} (\Gamma_{1})>0$
and
${\mathbb P} (D_{1})>0$
, then (4.1) holds.
Proof. Using (3.6) we easily get that on the set
$\cap^{n}_{k=1}\,\Gamma_{k}$
,
$\tilde X_{n}\le u\varrho^{n}+\varrho^{-1}\sum_{j=0}^{n-1}\varrho^{\,j}\le u\varrho^{n}+\frac{1}{\varrho(1-\varrho)}.$
From the representation
$\tilde X_{n+1}=A_{n+1} \tilde X_{n} + B_{n+1}$
and the above bound we infer that on the set
$\Big(\cap^{n}_{k=1} \Gamma_{k}\Big)\cap D_{n+1}$
,
$\tilde X_{n+1}\le \varrho^{-1}\Big(u\varrho^{n}+\frac{1}{\varrho(1-\varrho)}\Big)-b=u\varrho^{\,n-1}-b_1, \qquad b_1\,:\!=\,b-\frac{1}{\varrho^{2}(1-\varrho)}>0.$
Let
$u>b_1$
and let
$n=n(u)\,:\!=\,2+[{(1/\ln\varrho)}{\ln(b_{1}/u)}] $
, where [x] means the integer part,
$x-1<[x]\le x$
. Then
$u \varrho^{n-1}= u \text{e}^{(n-1)\ln \varrho} < u\text{e}^{\ln (b_1/u)} = b_1$, and therefore
, and therefore
$\tilde X_{n+1}\le u\varrho^{\,n-1}-b_1<0$ on the set $\big(\cap^{n}_{k=1}\Gamma_{k}\big)\cap D_{n+1}$, so that ${\mathbb P}\big(\vartheta^u<\infty\big)\ge \big({\mathbb P}(\Gamma_{1})\big)^{n(u)}\,{\mathbb P}(D_{1})$.
Take
$\beta_*\,:\!=\,\frac{\ln{\mathbb P}(\Gamma_{1})}{\ln\varrho}$
. Then
${\mathbb P}\big(\vartheta^u<\infty\big)\ge {\mathbb P}(D_{1})\, \text{e}^{\,n(u)\ln {\mathbb P}(\Gamma_{1})}={\mathbb P}(D_{1})\,\varrho^{\,\beta_* n(u)}\ge {\mathbb P}(D_{1})\,\big(\varrho^{2} b_1\big)^{\beta_*}\, u^{-\beta_*} \qquad \text{for } u>b_1.$
Since
${\mathbb P}\big(\tau^u<\infty\big)\ge {\mathbb P}\big(\vartheta^u<\infty\big)$
, the lemma is proven.
It remains to show that our assumptions ensure that for any
$ \varrho, b>0$
the probability
${\mathbb P} (A_{1}\le \varrho, \ B_{1}\le -b)$
is strictly positive. With this in mind, take
$v_1, v_2>0$
and
$x_0$
such that
${\mathbb P}(T_1\in [v_1, v_2])>0$
and
$|\kappa |v_2 + \sigma x_0 \le \ln \varrho$
. Let
$\Delta\,:\!=\, [v_1, v_2]\times \mathopen{]}-\infty,x_0\mathclose{]}$
. By virtue of Lemma 3.1, for any
$(t,x)\in \Delta$
we have
${\mathbb P}\big(A_{1}\le \varrho,\ B_{1}\le -b \;\big|\; T_1=t,\ W_{T_1}=x\big)={\mathbb P}\big(\xi_{1}+c\,\Upsilon^x_t\le -b\big), \qquad (4.2)$
where
$\Upsilon^x_t \,:\!=\, \text{e}^{ \sigma x+\kappa t } \zeta_t^x$
, with
$\zeta_t^x$
given by (3.2). Recall that
$\zeta_t^x$
takes arbitrarily large values with strictly positive probability (regardless of x and
$t>0$
), and
$\xi_1$
and
$\zeta_t^x$
are independent. Therefore, the probability on the right-hand side is strictly positive if
$\xi_1$
is unbounded from below or
$c<0$
. Integrating (4.2) over
$\Delta$
with respect to the distribution of
$\big(T_1,W_{T_1}\big)$
charging
$\Delta$
, we get the result.
5. Regularity of the ruin probability
Following tradition (seemingly justified by notational convenience), we shall work with the survival probability
$\Phi=1-\Psi$
, which has the same regularity property and satisfies the same equations.
Theorem 5.1. Suppose that
$F_k(\text{d} x) = f_k(x) \, \text{d} x$
, where the densities
$f_k$
are twice differentiable on
$\mathopen{]}0,\infty\mathclose{[}$
and
$f^{\prime}_k,f^{\prime\prime}_k\in L^1({\mathbb R}_{+})$
,
$k=1,2$
. Then
$\Phi $
is twice continuously differentiable on
$\mathopen{]}0,\infty\mathclose{[}$
, and
$\Phi^{\prime}$
,
$\Phi^{\prime\prime}$
are bounded.
Proof. Define the continuous process
$Y^{u}_{t}\,:\!=\,S_t\Big (u+c\int_{[0,t]} S^{-1}_s \, \text{d} s\Big)$
coinciding with
$X^u$
on
$\mathopen{[}0,T_1\mathclose{[}$
, and introduce the stopping time
$\theta^{u}\,:\!=\,\inf\{t\ge 0 \,:\, Y^{u}_{t}\le 0\}$
. By virtue of the strong Markov property of
$X ^u$
,
$\Phi(u)={\mathbb E}\, \Phi\big(X^u_{\theta^{u}\wedge T_1}\big). \qquad (5.1)$
Due to the independence of W and the Poisson processes
$N^1$
and
$N^2$
, the values of
$\theta^u(\omega) $
,
$T^1_1(\omega)$
, and
$T^2_1(\omega)$
are all different for almost all
$\omega$
. Since
$\Phi\big(X^u_{\theta^{u}}\big)\mathbf{1}_{\big\{\theta^{u}< T_1\big\}}=0$
we have the representation
$\Phi=K^\downarrow +K^\uparrow $
, where

The analysis of the smoothness of these two functions is similar, so we consider the first one. It is convenient to represent it as the sum
$K^\downarrow = K_1^\downarrow +K_2^\downarrow $
, where

with

Lemma 5.1. For any bounded measurable function G(y,w), the function
$K_2^\downarrow (u)$
defined above belongs to
$C^{\infty}(\mathopen{]}0,\infty\mathclose{[})$
and has bounded derivatives of any order.
Proof. Using the representation
$Y_s^u= \text{e}^{\eta_s-\eta_1}Y_1^u + c\int_{[1,s]} \text{e}^{\eta_s-\eta_r} \, \text{d} r$
,
$s\ge 2$
, and noticing that the random variable
$Y_1^u$
is independent of the process
$(\eta_s-\eta_1)_{s\ge 1}$
, we get

where
$\tilde G(s,y,w)\,:\!=\,{\mathbb E} G\Big(\text{e}^{\eta_s-\eta_1}y+c\int_{[1,s]} \text{e}^{\eta_s-\eta_r} \text{d} r,w\Big) $
. Substituting the expression for
$Y_1^u$
, we obtain
${\mathbb E} G\big(Y^u_s,w\big) = {\mathbb E} \tilde G\big(s,Y^u_1,w\big) = {\mathbb E} \tilde G\big(s, \text{e}^{\kappa+\sigma W_1}(u +c R_1),w\big)$
with
$R_1\,:\!=\,\int_{[0,1]} \text{e}^{-\kappa r -\sigma W_r} \, \text{d} r$
. Conditioning on
$W_1=x$
and Lemma 3.1 lead to the representation
${\mathbb E} G\big(Y^u_s,w\big) =\int_{{\mathbb R}} {\mathbb E} \tilde G\big(s, \text{e}^{\kappa +\sigma x}(u+ \zeta^x),w\big) \varphi_{0,1}(x) \, \text{d} x$
, where
$\varphi_{0,1}(x)$
is the standard Gaussian density, and the random variable
$\zeta^x \,:\!=\, c\int_{[0,1]} \text{e}^{-\kappa r-\sigma (W_r+r(x-W_1))} \, \text{d} r$
.
The case
$c= 0$
is easy. By the change of variable
$z=\kappa +\sigma x+\ln u$
we get
${\mathbb E} G\big(Y^u_s,w\big)=\frac{1}{\sigma}\int_{{\mathbb R}} \tilde G\big(s,\text{e}^{z},w\big)\, \varphi_{0,1}\Big(\frac{1}{\sigma}(z-\kappa-\ln u)\Big) \, \text{d} z.$
The function
$u\mapsto \varphi_{0,1}\big(\frac{1}{\sigma}(z - \kappa- \ln u)\big)$
belongs to
$C^{\infty}(\mathopen{]}0,\infty\mathclose{[})$
, and its derivatives on any interval
$[u_1,u_2]\subset \mathopen{]}0,\infty\mathclose{[}$
are dominated by integrable functions. It follows that the function
$u\mapsto {\mathbb E} G\big(Y^u_s,w\big) $
belongs to
$C^{\infty}(\mathopen{]}0,\infty\mathclose{[})$
, and its derivatives are locally bounded.
Now, let
$c\neq 0$
. Lemma 5.2 below asserts that, for every x, the random variable
$\zeta^x$
has a density
$\rho (x,\cdot)$
such that
${\mathbb E} G\big(Y^u_s,w\big) = \int_{{\mathbb R}^2}\tilde G\big(s,\text{e}^{\kappa +\sigma x}z,w\big)\rho(x,z-u)\varphi_{0,1}(x) \, \text{d} x \, \text{d} z$
. Since this density belongs to
$C^{\infty}$
and its derivatives are of subexponential growth in x, the function
$u\mapsto {\mathbb E} G\big(Y^u_s,w\big) $
also belongs to
$C^{\infty}$
and has bounded derivatives. So, the function
$u\mapsto K_2^\downarrow (u)$
has the same property and the lemma is proved.
Lemma 5.2. ([Reference Kabanov and Pergamenshchikov9], Lemma 5.2) The random variable
$\zeta^x$
has a density
$\rho(x,\cdot)\in C^\infty$
such that, for any
$n\ge 1$
,
$\sup_{y\ge 0}\,\left\vert \frac {\partial^n}{\partial y^n} \rho(x,y) \right\vert \le C_n \text{e}^{C_n |x|}$
with some constant
$C_n$
, and
$\frac{\partial^n}{\partial y^n}\rho(x,0)=0$
.
The required smoothness property of
$K_1^\downarrow$
follows from the next lemma.
Lemma 5.3. Let
$\xi>0$
be a random variable with a density f which is twice differentiable on
$\mathopen{]}0,\infty\mathclose{[}$
with
$f^{\prime},f^{\prime\prime} \in L^1({\mathbb R}_{+})$
. Let
$G \colon {\mathbb R} \to [0,1]$
be a measurable function equal to zero on
$\mathopen{]}-\infty,0\mathclose{]}$
, and let
$h(y)\,:\!=\, {\mathbb E} G(y-\xi) $
. Then the function
$(s,u) \mapsto {\mathbb E} h\big(Y_s^u\big) $
has two continuous derivatives in u bounded on
$[0,t] \times \mathopen{]}0,\infty\mathclose{[}$
.
Proof. First, observe that
$h(y)= \int_{\mathbb R} G(y-x)f(x) \, \text{d} x = \int_{\mathbb R} G(z)f(y-z) \, \text{d} z$
and
$h^{\prime}(y)=\int_{\mathbb R} G(z)f^{\prime}(y-z) \, \text{d} z$
. It follows that
$|h^{\prime}(y)|\le ||f^{\prime}||_{L^1}$
and
$|h^{\prime\prime}(y)|\le ||f^{\prime\prime}||_{L^1}$
.
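The estimates above are the classical smoothing effect of convolution: the derivatives fall on the density, so h inherits smoothness even when G is discontinuous. A small numerical illustration with the stand-in choices $G=\mathbf{1}_{\mathopen{]}0,\infty\mathclose{[}}$ and f the standard normal density (neither taken from the paper):

```python
import math

def normal_pdf(x):
    """Standard normal density, used here as a stand-in for f."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def h(y, step=0.001, cutoff=10.0):
    """h(y) = E G(y - xi) = int_0^cutoff f(y - z) dz for G = 1_{(0,infinity)},
    evaluated by the midpoint rule."""
    n = int(cutoff / step)
    return sum(normal_pdf(y - (k + 0.5) * step) for k in range(n)) * step
```

Although G jumps at 0, the difference quotient of h near 0 stays below $\Vert f^{\prime}\Vert_{L^1}=2f(0)\approx 0.798$, in accordance with the bound $|h^{\prime}(y)|\le \Vert f^{\prime}\Vert_{L^1}$.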
Using the representation
$Y^u_s=\text{e}^{\eta_s}u+c\int_{[0,s]}\text{e}^{\eta_s-\eta_r} \, \text{d} r$
, and arguing in the same spirit as above but conditioning this time on the random variable
$W_s\sim {\mathcal{N}}(0,s)$
and considering the Brownian bridge on [0,s], we obtain
${\mathbb E} h\big(Y_s^u\big)=\int_{{\mathbb R}} {\mathbb E}\, h\big(\text{e}^{\kappa s+\sigma x}\big(u+\zeta^{s,x}\big)\big)\, \varphi_{0,s}(x) \, \text{d} x,$ with $\varphi_{0,s}$ the ${\mathcal{N}}(0,s)$ density,
where
$\zeta^{s,x} \,:\!=\, c\int_{[0,s]}\exp\big\{-\frac{\sigma r x}{s} - \kappa r-\sigma \big(W_{r}-\frac{r}{s}W_{s}\big)\big\} \, \text{d} r$
. If
$c=0$
, then
${\mathbb E} h\big(Y_s^u\big)=\int_{{\mathbb R}} h\big(\text{e}^{\kappa s+\sigma x}u\big)\, \varphi_{0,s}(x) \, \text{d} x,$
and the required property is obvious.
Let
$c\neq 0$
. It is easily seen that the random variable
$\zeta^{s,x}$
has a
$C^\infty$
density (the same as
$\zeta^x$
but with the parameters cs,
$\kappa s$
, and
$\sigma s^{1/2}$
). Unfortunately, the derivatives of this density have non-integrable singularities at
$s=0$
. For this reason, the arguments used above do not work.
The smooth function
$x\to \zeta ^{s,x}$
is strictly decreasing and maps
${\mathbb R}$
onto
${\mathbb R}_+$
(when
$c>0$
) or is strictly increasing and maps
${\mathbb R}$
onto
${\mathbb R}_-$
(when
$c<0$
). Let
$z(s,\cdot)$
denote its inverse, which is a function decreasing from
$+\infty$
to
$-\infty$
(when
$c>0$
) and increasing from
$-\infty$
to
$+\infty$
(when
$c<0$
). The partial derivative in x is given by

where

In both cases,
$z_{xx}(s,x)>0$
for
$s>0$
.
Changing the variable, we obtain that
${\mathbb E} h\big(Y_s^u\big) = {\mathbb E} H(s,u)$
, where

For
$s\in \mathopen{[}0,2\mathclose{]}$
,

where the last integral is a finite constant. This estimate and the boundedness of $h^{\prime}$ legitimate the differentiation under the sign of the integrals. In the case
$c>0$
we have

and the bound
$|H_u(s,u)|\le C$
for all
$s\in \mathopen{]}0,2\mathclose{]}$
.
Repeating the arguments, we get

and
$|H_{uu}(s,u)|\le C$
for all
$s \in \mathopen{]}0,2\mathclose{]}$
.
Similar arguments are applied in the case
$c<0$
. The lemma is proven.
Remark 5.1. Let
$V \colon {\mathbb R}\to [0,1]$
be a measurable function,
$V(x)=0$
for
$x\le 0$
. Define
$\Psi_V(u)\,:\!=\,{\mathbb E} V\big(X^u_{\tau^u}\big) \mathbf{1}_{\{\tau^u<\infty\}}$
. Then the statement of Theorem 5.1 holds for
$\Psi_V$
with the same proof. Indeed, note that the strong Markov property for
$\Psi_V$
has the same form as for
$\Phi$
in (5.1), i.e.
$\Psi_V(u) = {\mathbb E} \Psi_V\big(X^u_{\theta^{u}\wedge T_1}\big)$
. Also, Proposition 6.1 below holds for
$\Psi_V$
. In the particular case where
$V(x)=1$
for all
$x<0$
, the function
$\Psi_V$
coincides with
$\Psi$
on
$\mathopen{]}0,\infty\mathclose{[}$
but they are different on
${\mathbb R}_-$
.
6. The integro-differential equation for the survival probability
Proposition 6.1. Suppose that
$\Psi\in C^2$
(see the sufficient conditions for this in Theorem 5.1). Then the function
$\Phi$
on
$\mathopen{]}0,\infty\mathclose{[}$
satisfies the following equation:
$\frac{1}{2}\sigma^2u^2 \Phi^{\prime\prime}(u)+(au+c)\Phi^{\prime}(u)-\alpha\Phi(u)+\alpha_1\int_0^{\infty}\Phi(u-x)\, \text{d} F_1(x)+\alpha_2\int_0^{\infty}\Phi(u+x)\, \text{d} F_2(x)=0. \qquad (6.1)$
Proof. For
$h>0$
and
$\epsilon>0$
small enough to ensure that
$u\in \mathopen{]}\epsilon,\epsilon^{-1}\mathclose{[}$
, put
$\tau^{\epsilon}\,:\!=\,\inf\big\{t\,:\, X^u_t\notin \mathopen{]}\epsilon,\epsilon^{-1}\mathclose{[}\big\}, \qquad \tau^{\epsilon}_h\,:\!=\,\tau^{\epsilon}\wedge h.$
Let
${\mathcal{L}}^0\Phi(u)\,:\!=\, \frac12\sigma^2u^2 \Phi^{\prime\prime}(u)+ (au+c)\Phi^{\prime}(u)$
. By the Itô formula,
$\Phi\big(X^u_{\tau^{\epsilon}_h}\big)=\Phi(u)+\int_0^{\tau^{\epsilon}_h}{\mathcal{L}}^0\Phi\big(X^u_s\big)\, \text{d} s+\sigma\int_0^{\tau^{\epsilon}_h}X^u_s\,\Phi^{\prime}\big(X^u_s\big)\, \text{d} W_s+\int_0^{\tau^{\epsilon}_h}\!\int \big(\Phi\big(X^u_{s-}+x\big)-\Phi\big(X^u_{s-}\big)\big)\, p_P(\text{d} s,\text{d} x).$
Due to the strong Markov property,
$\Phi(u) = {\mathbb E} \Phi\Big(X^u_{\tau^{\epsilon}_h}\Big)$
(since
$\Phi(u)=0$
for
$u\le 0$
). For every
$\epsilon>0$
the integrands above are bounded by constants, and hence the expectation of the stochastic integral with respect to the Wiener process is zero. The expectation of the integral with respect to the integer-valued measure
$p_P(\text{d} s, \text{d} x)$
is equal to the integral with respect to the compensator of the latter, i.e. to
${\mathbb E} \int_{0}^{\tau^{\epsilon}_h}\int\big(\Phi\big(X^u_{s-}+x\big) - \Phi\big(X^u_{s-}\big)\big) \, \text{d} s \, \Pi_P(\text{d} x)$
. Moreover,
$\tau^\epsilon_h=h$
when h is sufficiently small (the threshold below which the equality holds, of course, depends on
$\omega$
).
It follows that, independently of
$\epsilon$
,
$\frac{1}{h}\,{\mathbb E}\int_{0}^{\tau^{\epsilon}_h}\bigg({\mathcal{L}}^0\Phi\big(X^u_s\big)+\int\big(\Phi\big(X^u_{s-}+x\big)-\Phi\big(X^u_{s-}\big)\big)\, \Pi_P(\text{d} x)\bigg) \text{d} s\ \longrightarrow\ {\mathcal{L}}^0\Phi(u)+\int\big(\Phi(u+x)-\Phi(u)\big)\, \Pi_P(\text{d} x)$
as
$h\to 0$
. Finally,
${\mathcal{L}}^0\Phi(u)+\int\big(\Phi(u+x)-\Phi(u)\big)\, \Pi_P(\text{d} x)=0.$
It follows that
$\Phi$
satisfies (6.1).
Remark 6.1. The equation (6.1) holds in the viscosity sense, i.e. without additional assumptions on the smoothness of
$\Psi$
(see [Reference Belkina and Kabanov3]).
7. Exponentially distributed jumps: From IDE to ODE
In the case of exponentially distributed jumps the IDE can be written as
$\frac{1}{2}\sigma^2u^2 \Phi^{\prime\prime}(u)+(au+c)\Phi^{\prime}(u)-\alpha\Phi(u)+\frac{\alpha_1}{\mu_1}\int_0^{u}\Phi(u-x)\,\text{e}^{-x/\mu_1} \, \text{d} x+\frac{\alpha_2}{\mu_2}\int_0^{\infty}\Phi(u+x)\,\text{e}^{-x/\mu_2} \, \text{d} x=0,$
where
$\alpha=\alpha_1+\alpha_2$ and, as before, $\Phi(y)=0$ for $y\le 0$.
Changing variables in the integrals, we get
$\frac{1}{2}\sigma^2u^2 \Phi^{\prime\prime}(u)+(au+c)\Phi^{\prime}(u)-\alpha\Phi(u)+\frac{\alpha_1}{\mu_1}I_1(u)+\frac{\alpha_2}{\mu_2}I_2(u)=0, \qquad (7.1)$
where
$I_1(u)\,:\!=\,\int_0^{u}\text{e}^{-(u-x)/\mu_1}\,\Phi(x) \, \text{d} x, \qquad I_2(u)\,:\!=\,\int_u^{\infty}\text{e}^{-(x-u)/\mu_2}\,\Phi(x) \, \text{d} x.$
Note that we have $I^{\prime}_1 =\Phi-\frac{1}{\mu_1}I_1$ and $I^{\prime}_2 =-\Phi+\frac{1}{\mu_2}I_2$.
Put
${\mathcal{T}} f\,:\!=\,\mu_1\mu_2f^{\prime\prime}+(\mu_2-\mu_1)f^{\prime}-f$
. It is easily seen that
${\mathcal{T}} I_1 = \mu_1\mu_2 \Phi^{\prime} - \mu_1\Phi$
and
${\mathcal{T}} I_2 = -\mu_1\mu_2 \Phi^{\prime} - \mu_2\Phi$
. Applying the operator
${\mathcal{T}}$
to both sides of (7.1) we see that the survival probability
$\Phi$
(as well as the ruin probability
$\Psi\,:\!=\,1-\Phi$
) solves the differential equation
${\mathcal{D}}\Phi=0$
, where
${\mathcal{D}}$
is the differential operator of the fourth order:

After some simple calculations we find that the fourth-order equation obtained reduces to the following third-order differential equation for $G\,:\!=\,\Phi^{\prime}$:

where the coefficients (depending on u) are:

with
$\Delta \mu\,:\!=\,\mu_2-\mu_1$
,
$\mu^2\,:\!=\,\mu_1\mu_2$
. Since
$\mu_1 , \mu_2>0$
we get from here the equation with unit coefficient at the third derivative:
$G^{\prime\prime\prime}+q_2(u)G^{\prime\prime}+q_1(u)G^{\prime}+q_0(u)G=0, \qquad (7.2)$
where

Let us denote by
${\mathcal{J}}$
the operator on the left-hand side of the basic IDE acting in the space of sufficiently smooth functions. Then
${\mathcal{T}}{\mathcal{J}}={\mathcal{D}}$
. Note that
$\text{dim}\,{\text{Ker}}\, {\mathcal{D}}=4$
. Also, we have the inclusion
${\text{Ker}}\, {\mathcal{J}}\subseteq {\text{Ker}}\, {\mathcal{D}}$
. The kernel of the second-order differential operator
${\mathcal{T}}$
is a two-dimensional linear subspace generated by the functions
$h_1(u)\,:\!=\,\text{e}^{-u/\mu_1}$
,
$h_2(u)\,:\!=\,\text{e}^{u/\mu_2}$
. Let
$f_1$
and
$f_2$
be solutions of the equations
${\mathcal{J}} f_j=h_j$
. Then
$f_1$
and
$f_2$
are linearly independent and belong to
${\text{Ker}}\, {\mathcal{D}}$
. The four functions, namely the constant function equal to one, the survival probability
$\Phi$
(assumed not to be a constant),
$f_1$
, and
$f_2$
form a basis in
${\text{Ker}}\, {\mathcal{D}}$
Indeed, if their linear combination vanishes,
$a_1 f_1+a_2 f_2+a_3+a_4\Phi=0,$
then
$a_1{\mathcal{J}} f_1+a_2{\mathcal{J}} f_2=0$
. That is,
$a_1 h_1+a_2 h_2=0$
and
$a_1=a_2=0$
. But the equality
$a_3+a_4\Phi=0$
holds only if
$a_3=a_4=0$
.
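The kernel claim for ${\mathcal{T}}$ is easy to sanity-check symbolically; a minimal sketch with symbolic $\mu_1,\mu_2>0$:

```python
import sympy as sp

u, mu1, mu2 = sp.symbols('u mu1 mu2', positive=True)

# The operator T f := mu1*mu2*f'' + (mu2 - mu1)*f' - f
T = lambda f: mu1*mu2*sp.diff(f, u, 2) + (mu2 - mu1)*sp.diff(f, u) - f

h1 = sp.exp(-u/mu1)
h2 = sp.exp(u/mu2)

# h1 and h2 generate the kernel of the second-order operator T
assert sp.simplify(T(h1)) == 0
assert sp.simplify(T(h2)) == 0
```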
8. Asymptotic analysis of the differential equation for the survival probability
We analyze the asymptotic behavior of solutions of (7.2) using a result on systems with asymptotically constant coefficients. To this end, we put
$y=(y_1,y_2,y_3)\,:\!=\,(G,G^{\prime},G^{\prime\prime})$
. Using matrix notation where the vectors are columns, we get from (7.2) that
$y^{\prime}=A(u)y$
where
$A(u)=\begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 1\\ -q_0(u) & -q_1(u) & -q_2(u) \end{pmatrix}.$
Then,
$A(u)=A(\infty)+V(u)$
, where the matrix
$V(u)\,:\!=\,A(u)-A(\infty)$
is such that the norm $|V^{\prime}(u)|$ of its derivative (the Euclidean norm or any other) is integrable on $[1,\infty[$.
Let
$\lambda_j=\lambda_j(u)$
,
$j=1,2,3$
, be the roots of the characteristic equation
$\lambda^3+q_2(u)\lambda^2+q_1(u)\lambda+q_0(u)=0.$
Note that, by the Vieta formulae,
$\lambda_1+\lambda_2+\lambda_3=-q_2(u),\qquad \lambda_1\lambda_2+\lambda_1\lambda_3+\lambda_2\lambda_3=q_1(u),\qquad \lambda_1\lambda_2\lambda_3=-q_0(u). \qquad (8.1)$
Recall that we are working under the assumptions
$a> \frac12 \sigma^2 > 0$
and
$\mu_1,\mu_2>0$
. Then
$q_0(u)\to 0$
,
$q_2(u) \to \frac {\Delta \mu}{\mu^2 }\neq 0$
,
$q_1(u) \to -\frac {1}{\mu^2 } < 0$
, and
$u q_0(u) \to -\frac{2a}{\mu^2 \sigma^2} < 0$
as
$u\to \infty$
. The Cardano formulae imply that the roots
$\lambda_j(u)$
are continuous functions having finite limits as
$u\to \infty$
. According to the last equation in (8.1), at least one root, say
$\lambda_3(u)$
, tends to zero as
$u\to \infty$
. Since
$q_1(\infty)\neq 0$
, two other roots have non-zero limits satisfying the system
$\lambda_1(\infty)+\lambda_2(\infty)=-q_2(\infty)$
,
$\lambda_1(\infty)\lambda_2(\infty)=q_1(\infty)$
, i.e.
$\lambda_1(\infty) = \frac{1}{\mu_2}$
,
$\lambda_2(\infty) = -\frac{1}{\mu_1}$
Thus, we obtain
$\lambda_1(u)\to \frac{1}{\mu_2},\qquad \lambda_2(u)\to -\frac{1}{\mu_1},\qquad \lambda_3(u)\to 0 \qquad \text{as } u\to\infty.$
Applying the implicit function theorem, we conclude that the function
$\lambda_3(u)$
can be expanded in powers of
$u^{-1}$
In particular,
$\lambda_3(u)=-\frac{2a}{\sigma^2}\,\frac1u+O\big(u^{-2}\big) \qquad \text{as } u\to\infty.$
The numbers
$\lambda_1(\infty) = \frac{1}{\mu_2}$
,
$\lambda_2(\infty) = -\frac{1}{\mu_1}$
, and
$\lambda_3(\infty) = 0$
are eigenvalues of the matrix
$A(\infty)$
.
The conditions of [Reference Hsieh and Sibuya8, Theorem VII-5-3] are fulfilled, and therefore the fundamental matrix of the equation
$y^{\prime}=A(u)y$
has the form
$P_0(u)(I+H(u))\exp\left\{\int_1 ^u \Lambda (s) \, \text{d} s \right\}$
, where the matrix-valued functions
$P_0(u)$
and $H(u)$ are continuous,
$P_0(u)\to Q$
,
$H(u)\to 0$
as
$u\to \infty$
,
$\Lambda (s)\,:\!=\,{\text{diag}}\,(\lambda_1(s),\lambda_2(s),\lambda_3(s))$
, $I$ is the identity matrix, and the columns of $Q$ are eigenvectors of
$A(\infty)$
corresponding to the eigenvalues
$\lambda_1(\infty)$
,
$\lambda_2(\infty)$
, and
$\lambda_3(\infty)$
, i.e.
$Q=\begin{pmatrix} 1 & 1 & 1\\ \frac{1}{\mu_2} & -\frac{1}{\mu_1} & 0\\ \frac{1}{\mu_2^2} & \frac{1}{\mu_1^2} & 0 \end{pmatrix}.$
A general solution G of (7.2) is a linear combination of the functions

where the continuous functions
$\theta_j$
and
$\gamma_j$
vanish at infinity for
$j=1,2,3$
, and the function
$\gamma_3$
is integrable. Thus, the general solution of the fourth-order differential equation is a linear combination of a constant function and the antiderivatives of the three functions above. In particular, the ruin probability
$\Psi$
is given by such a linear combination. Clearly, the latter cannot involve the unbounded function corresponding to the integral of the first function. The integrals of the other two are bounded, and therefore
$\Psi(u)= c_0+ c_2H_2(u)+c_3H_3(u)$
, where
$H_j(u)\,:\!=\,\int_u^\infty h_j(s) \, \text{d} s$
,
$j=2,3$
. Note that
$c_0=0$
, the integral
$H_2(u)$
converges to zero exponentially fast, and
$H_3(u)\sim C\, u^{1-2a/\sigma^2}, \qquad C>0,$
as
$u\to \infty$
. By virtue of Proposition 4.1,
$c_3\neq 0$
, and we easily obtain that
$\Psi(u)\sim K u^{-\beta}$
, where
$K>0$
and
$\beta = \frac{2a}{\sigma^2}-1$
.
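The final power-law rate can be cross-checked with a small symbolic computation. The sketch below takes the sample values $a=3/2$, $\sigma=1$ (so that $a>\sigma^2/2$) and verifies that the tail integral of $s^{-2a/\sigma^2}$, which governs $H_3$, decays like $u^{-\beta}$ with $\beta=2a/\sigma^2-1$:

```python
import sympy as sp

u, s = sp.symbols('u s', positive=True)
a, sigma = sp.Rational(3, 2), sp.Integer(1)   # sample parameters with a > sigma^2/2
beta = 2*a/sigma**2 - 1                       # here beta = 2

# H_3(u) behaves like the tail integral of s^(-2a/sigma^2)
H3 = sp.integrate(s**(-2*a/sigma**2), (s, u, sp.oo))

# The tail integral equals u^(-beta)/beta, hence Psi(u) ~ K * u^(-beta)
assert sp.simplify(H3 - u**(-beta)/beta) == 0
```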
Acknowledgement
The first author wishes to thank Ernst Eberlein and Thorsten Schmidt for helpful discussions during his stay at FRIAS.
Funding information
This work was supported by the Russian Science Foundation grant 20-68-47030.
Competing interests
There were no competing interests to declare which arose during the preparation or publication process of this article.