1. Introduction
Conditioning Markov processes to avoid sets is a classical problem. Indeed, suppose
${({\mathbb{P}}_x)_{x\in E}}$
is a family of Markov probabilities on the state space E, and that
$T_B$
is the first hitting time of a fixed set B. When
$T_B$
is almost surely finite, it is non-trivial to construct and characterise the conditioned process through the natural limiting procedure
$\lim_{s \rightarrow \infty} {\mathbb{P}}_x\big(\Lambda \mid s < T_B\big) \qquad (1.1)$
or the randomised version
$\lim_{q \searrow 0} {\mathbb{P}}_x\big(\Lambda,\ t < e_q \mid e_q < T_B\big) \qquad (1.2)$
for
$\Lambda\in \mathcal F_t$
and
$x\in E$
. Here,
${(\mathcal{F}_t)_{t\ge 0}}$
denotes the natural enlargement of the filtration induced by the Markov process, and
$e_q$
are independent exponentially distributed random variables with parameter
$q>0$
.
A classical example is Brownian motion conditioned to avoid the negative half-line. In this case, the limits in (1.1) and (1.2) lead to a so-called Doob h-transform of the Brownian motion killed on entering the negative half-line, by the positive invariant function
$h(x) = x$
on
$(0,\infty)$
. This Doob h-transform turns out (see Chapter VI.3 of [Reference Revuz and Yor8]) to be the Bessel process of dimension 3, which is transient. This example is typical, in that a conditioning procedure leads to a new process which is transient where the original process was recurrent.
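To see the connection at the level of generators (a formal sketch): writing $\mathcal{L}f = \frac{1}{2}f''$ for the generator of Brownian motion killed on entering $({-}\infty,0]$, the harmonicity of $h(x)=x$ gives for the h-transformed generator
$\mathcal{L}^h f(x) = \frac{1}{h(x)}\,\mathcal{L}(hf)(x) = \frac{1}{2}f''(x) + \frac{1}{x}\,f'(x), \qquad x>0,$
which is precisely the generator of the three-dimensional Bessel process.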
Extensions of this result have been obtained in several directions, most notably to random walks and Lévy processes. A prominent example with several applications is that of a Lévy process conditioned to stay positive, which was found by Chaumont and Doney [Reference Chaumont and Doney2] using the randomised conditioning of (1.2). In that case, the associated invariant function h is given by the potential function of the descending ladder height process.
In recent articles [Reference Döring, Kyprianou and Weissmann4,Reference Döring, Watson and Weissmann5] the authors considered the problem of conditioning a Lévy process to avoid an interval. The first article focuses on stable processes, the subclass of Lévy processes consisting of those which satisfy the scaling property
$\big(c X_{c^{-\alpha} t}\big)_{t \geq 0} \text{ under } {\mathbb{P}}_x \;\overset{(\mathrm{d})}{=}\; (X_t)_{t \geq 0} \text{ under } {\mathbb{P}}_{cx}, \qquad c>0,\ x \in {\mathbb{R}},$
where
$\alpha \in (0,2)$
is the so-called index of self-similarity. A standing assumption there is that the stable process has two-sided jumps. Our task is to fill the remaining gap and handle the case in which the stable process has one-sided jumps only. We will show that the conditioned process in the sense of (1.2) is again a Markov process, whose transition probabilities we present explicitly. Unfortunately, it is not possible to describe this Markov process as an h-transform of the process killed on entering the interval. Instead, it is a combination of the processes conditioned to stay above (respectively below) the interval. As was done in [Reference Döring, Kyprianou and Weissmann4], we restrict to the interval
$[{-}1,1]$
because we can translate the process conditioned to avoid
$[{-}1,1]$
to the process conditioned to avoid general intervals via spatial homogeneity and the scaling property.
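To make this reduction explicit, consider an interval $[a,b]$ with midpoint $m = (a+b)/2$ and half-length $r = (b-a)/2 > 0$. By spatial homogeneity and the scaling property, the process $\big(r^{-1}(X_{r^\alpha t} - m)\big)_{t \geq 0}$ under ${\mathbb{P}}_x$ is again a stable process with the same index, started at $(x-m)/r$, and since the time change $t \mapsto r^\alpha t$ is a bijection of $[0,\infty)$,
$\big\{X_t \notin [a,b] \text{ for all } t \geq 0\big\} = \big\{r^{-1}\big(X_{r^\alpha t} - m\big) \notin [{-}1,1] \text{ for all } t \geq 0\big\},$
so conditioning X to avoid $[a,b]$ amounts to conditioning the rescaled process to avoid $[{-}1,1]$.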
Before presenting our results, we introduce the most important definitions concerning Lévy processes, their subclass containing stable processes, and Doob’s h-transform. More details can be found, for example, in [Reference Bertoin1,Reference Chung and Walsh3,Reference Kyprianou6,Reference Sato10].
1.1. Lévy processes
A Lévy process X is a stochastic process with stationary and independent increments whose trajectories are almost surely right-continuous with left-limits. For each
$x \in \mathbb{R}$
, we define the probability measure
${\mathbb{P}}_x$
under which the canonical process X starts at x almost surely (a.s.). We write
${\mathbb{P}}$
for the measure
${\mathbb{P}}_0$
. The dual measure
$\hat{{\mathbb{P}}}_x$
denotes the law of the so-called dual process
$-X$
started at x. A Lévy process can be identified using its characteristic exponent
$\Psi$
, defined by the equation
${\mathbb{E}}[ \mathrm{e}^{\mathrm{i} q X_t}] = \mathrm{e}^{-t\Psi(q)}$
,
$q\in {\mathbb{R}}$
, which has the Lévy–Khintchine representation
$\Psi(q) = \mathrm{i} a q + \tfrac{1}{2}\sigma^2 q^2 + \int_{\mathbb{R}} \big(1 - \mathrm{e}^{\mathrm{i} q x} + \mathrm{i} q x\,\mathbf{1}_{\{|x|<1\}}\big)\, \Pi({\mathrm{d}} x),$
where
$a\in{\mathbb{R}}$
(sometimes called the centre of the process),
$\sigma^2\geq 0$
is the variance of the Brownian component, and the Lévy measure
$\Pi$
is a measure on ${\mathbb{R}}$ with no atom at 0 satisfying
$\int_{\mathbb{R}} (x^2\wedge 1)\, \Pi({\mathrm{d}} x)<\infty$
.
1.2. Killed Lévy processes and
${{\textit{h}}}$
-transforms
For a Lévy process X and a Borel set B the killed transition measures are defined as
$P_t^{B}(x,{\mathrm{d}} y) = {\mathbb{P}}_x\big(X_t \in {\mathrm{d}} y,\ t < T_B\big), \qquad t \geq 0,\ x, y \in {\mathbb{R}} \backslash B.$
The corresponding sub-Markov process is called the Lévy process killed in B. An invariant function for the killed process is a measurable function
$h \,:\, {\mathbb{R}} \backslash B \rightarrow [0,\infty)$
such that
$h(x) = {\mathbb{E}}_x\big[h(X_t)\,\mathbf{1}_{\{t < T_B\}}\big] \qquad \text{for all } t \geq 0 \text{ and } x \in {\mathbb{R}} \backslash B.$
An invariant function taking only strictly positive values is called a positive invariant function. When h is a positive invariant function, the associated Doob h-transform is defined via the change of measure
${\mathbb{P}}^h_{x}(\Lambda) = \frac{1}{h(x)}\,{\mathbb{E}}_x\big[h(X_t)\,\mathbf{1}_\Lambda\,\mathbf{1}_{\{t < T_B\}}\big]$
for
$\Lambda \in {\mathcal{F}}_t$
. From Chapter 11 of [Reference Chung and Walsh3], we know that under
${\mathbb{P}}^h_{x}$
the canonical process is a conservative strong Markov process.
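For instance, invariance of h is exactly what makes this change of measure consistent in t and mass-preserving: taking $\Lambda$ to be the whole sample space yields
${\mathbb{P}}^h_x(\Lambda) = \frac{1}{h(x)}\,{\mathbb{E}}_x\big[h(X_t)\,\mathbf{1}_{\{t < T_B\}}\big] = \frac{h(x)}{h(x)} = 1 \qquad \text{for every } t \geq 0,$
and the transition kernels of the h-transformed process are $P^h_t(x,{\mathrm{d}} y) = \frac{h(y)}{h(x)}\,{\mathbb{P}}_x\big(X_t \in {\mathrm{d}} y,\ t < T_B\big)$.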
1.3. Stable processes
Stable processes are Lévy processes satisfying the scaling property
$\big(c X_{c^{-\alpha} t}\big)_{t \geq 0} \text{ under } {\mathbb{P}}_x \;\overset{(\mathrm{d})}{=}\; (X_t)_{t \geq 0} \text{ under } {\mathbb{P}}_{cx}$
for all
$x \in {\mathbb{R}}$
and
$c>0$
, where
$\alpha$
is the index of self-similarity. It turns out to be necessarily the case that
$\alpha\in (0,2]$
, with
$\alpha=2$
corresponding to Brownian motion. The continuity of sample paths naturally excludes Brownian motion from our study, because conditioning to avoid the interval
$[{-}1,1]$
translates to conditioning to stay above 1 or below
$-1$
, depending on the initial value of the process. So we restrict to
$\alpha\in (0,2)$
.
As Lévy processes, stable processes are characterised entirely by their jump measure, which is given by
$\Pi({\mathrm{d}} x) = \frac{\Gamma(\alpha+1)}{\pi}\bigg(\frac{\sin(\pi\alpha\rho)}{x^{\alpha+1}}\,\mathbf{1}_{\{x>0\}} + \frac{\sin(\pi\alpha\hat{\rho})}{|x|^{\alpha+1}}\,\mathbf{1}_{\{x<0\}}\bigg)\,{\mathrm{d}} x,$
where
$\rho:={\mathbb{P}}({X}_1 \geq 0)$
and
$\hat{\rho}= 1-\rho$
. We have taken a normalisation for which the characteristic exponent satisfies
$\Psi(q) = |q|^\alpha \big(\mathrm{e}^{\pi \mathrm{i} \alpha (\frac{1}{2} - \rho)}\,\mathbf{1}_{\{q>0\}} + \mathrm{e}^{-\pi \mathrm{i} \alpha (\frac{1}{2} - \rho)}\,\mathbf{1}_{\{q<0\}}\big), \qquad q \in {\mathbb{R}}.$
Later, we will consider the special case of a stable process without negative jumps which forces
$\rho=1$
when
$\alpha<1$
(in which case X is even a subordinator) and
$\rho = 1-1/\alpha$
if
$\alpha>1$
. For
$\alpha=1$
the stable process reduces to the symmetric Cauchy process with
$\rho=\frac 1 2$
. Since we are interested here only in processes with one-sided jumps, we also exclude the symmetric Cauchy process from our study.
An important fact underlying the split into parameter regimes is that the stable process is (set) transient or (set) recurrent according to whether
$\alpha\in (0,1)$
or
$\alpha\in[1,2]$
. When
$\alpha\in(1,2]$
the notion of recurrence is even stronger in the sense that individual points are hit with probability one.
2. Main results
From now on, let X be a stable process with self-similarity index
$\alpha \in (0,2)\setminus\{1 \}$
without negative jumps. Our results are separated between the parameter regimes
$\alpha \in (0,1)$
and
$\alpha \in (1,2)$
.
If
$\alpha<1$
the stable process is a subordinator. The following theorem presents an invariant function for the process killed on entering the interval.
Theorem 2.1. Let
${(X_t)_{t \ge 0}}$
be a stable process with self-similarity index
$\alpha \in (0,1)$
and no negative jumps. Then the function

is invariant for the process killed on entering
$[{-}1,1]$
.
The key to proving this theorem lies in showing that
$h(x) = {\mathbb{P}}_x(T_{[{-}1,1]} = \infty)$
. Using this invariant function we define the h-transform of the process killed on entering
$[{-}1,1]$
:
${\mathbb{P}}_x^{\updownarrow}(\Lambda) \,:\!=\, \frac{1}{h(x)}\,{\mathbb{E}}_x\big[h(X_t)\,\mathbf{1}_\Lambda\,\mathbf{1}_{\{t < T_{[{-}1,1]}\}}\big], \qquad \Lambda \in {\mathcal{F}}_t,\ x \notin [{-}1,1].$
One can show that the function h in Theorem 2.1 is (up to a constant) the limit of the invariant function found in [Reference Döring, Kyprianou and Weissmann4] as the positivity parameter
$\rho = {\mathbb{P}}(X_1>0)$
tends to 1.
The next result states that the process conditioned to avoid
$[{-}1,1]$
in the sense of (1.1) equals this h-transform.
Corollary 2.1. Let
${(X_t)_{t \geq 0}}$
be a stable process with self-similarity index
$\alpha \in (0,1)$
without negative jumps. Then
$\lim_{s \rightarrow \infty} {\mathbb{P}}_x\big(\Lambda \mid s < T_{[{-}1,1]}\big) = {\mathbb{P}}_x^{\updownarrow}(\Lambda) \qquad \text{for all } t \geq 0,\ \Lambda \in {\mathcal{F}}_t,\ x \notin [{-}1,1].$
Since the case
$\alpha<1$
is handled, we focus on
$\alpha>1$
; in this case the process is recurrent, which implies in particular that
${\mathbb{P}}_x(T_{[{-}1,1]}= \infty) = 0$
, and hence the methods used in the case
$\alpha<1$
do not work.
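Indeed, since individual points are hit with probability one when $\alpha \in (1,2)$,
${\mathbb{P}}_x\big(T_{[{-}1,1]} < \infty\big) \geq {\mathbb{P}}_x\big(T_{\{0\}} < \infty\big) = 1, \qquad x \notin [{-}1,1].$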
Proposition 2.1.
Let
${(X_t)_{t \geq 0}}$
be a stable process with self-similarity index
$\alpha \in (1,2)$
without negative jumps and
${\mathbb{P}}_x^{\updownarrow}(\Lambda) \,:\!=\, \begin{cases} \frac{1}{x-1}\,{\mathbb{E}}_x\big[(X_t-1)\,\mathbf{1}_\Lambda\,\mathbf{1}_{\{t < T_{({-}\infty,1]}\}}\big], & x > 1, \\ \frac{1}{({-}x-1)^{\alpha-1}}\,{\mathbb{E}}_x\big[({-}X_t-1)^{\alpha-1}\,\mathbf{1}_\Lambda\,\mathbf{1}_{\{t < T_{[{-}1,\infty)}\}}\big], & x < -1, \end{cases}$
for
$\Lambda \in {\mathcal{F}}_t$
. Then
${\mathbb{P}}_x^\updownarrow$
defines a probability measure under which
${(X_t)_{t \geq 0}}$
is a Markov process with transition probabilities
$P_t^{\updownarrow}(x,{\mathrm{d}} y) = \begin{cases} \frac{y-1}{x-1}\,{\mathbb{P}}_x\big(X_t \in {\mathrm{d}} y,\ t < T_{({-}\infty,1]}\big), & x, y > 1, \\ \frac{({-}y-1)^{\alpha-1}}{({-}x-1)^{\alpha-1}}\,{\mathbb{P}}_x\big(X_t \in {\mathrm{d}} y,\ t < T_{[{-}1,\infty)}\big), & x, y < -1, \\ 0, & \text{otherwise.} \end{cases}$
We will see that X under
${\mathbb{P}}_x^\updownarrow$
is just the stable process conditioned to stay above 1 when it starts above 1 and the stable process conditioned to stay below
$-1$
when it starts below
$-1$
.
Now we are ready to present our main result, which says that the process conditioned to avoid
$[{-}1,1]$
in the sense of (1.2) corresponds to
$(X,{\mathbb{P}}_x^\updownarrow)$
.
Theorem 2.2. Let
${(X_t)_{t \geq 0}}$
be a stable process with self-similarity index
$\alpha \in (1,2)$
and no negative jumps. Then
$\lim_{q \searrow 0} {\mathbb{P}}_x\big(\Lambda,\ t < e_q \mid e_q < T_{[{-}1,1]}\big) = {\mathbb{P}}_x^{\updownarrow}(\Lambda) \qquad (2.2)$
for
$\Lambda \in {\mathcal{F}}_t$
and
$x \notin [{-}1,1]$
.
When the starting value is above the interval
$[{-}1,1]$
this result is not very surprising because the process conditioned to avoid the interval must be the same as the process conditioned to stay above 1 since there are no negative jumps. But if the initial value is below
$-1$
the result is rather surprising. In this case our statement says again that the process conditioned to avoid the interval is just the process conditioned to stay below
$-1$
. In other words, the possibility that the process jumps from below the interval to above it is suppressed by the limiting procedure.
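Quantitatively, the proof of Theorem 2.2 below shows that, for fixed t, the conditional contribution of paths that have already jumped above the interval by time t is bounded by a constant multiple of
$\frac{q^{\frac{1}{\alpha}}}{{\mathbb{P}}_x\big(e_q < T_{[{-}1,1]}\big)} \sim \frac{q^{\frac{1}{\alpha}}}{c\,({-}x-1)^{\alpha-1}\,q^{1-\frac{1}{\alpha}}} = \frac{q^{\frac{2}{\alpha}-1}}{c\,({-}x-1)^{\alpha-1}},$
which vanishes as $q \searrow 0$ because $\alpha < 2$.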
A simple consequence is that the conditioned process converges a.s. to
$+\infty$
if the initial value is above the interval and to
$-\infty$
if the initial value is below the interval. This can be seen from the long-time behaviour of Lévy processes conditioned to stay positive [Reference Chaumont and Doney2].
We remark that the natural procedure of plugging in the values of
$\rho$
from the one-sided jumps case into the invariant function of the two-sided jumps case (found in [Reference Döring, Kyprianou and Weissmann4]) does not work. When
$\alpha<1$
(and
$\rho = 1$
) the function does not exist, and when
$\alpha>1$
(and
$\rho = 1-1/\alpha$
) one can show that the function is no longer invariant.
3. Preliminaries
To prove our results we have to introduce some fluctuation theory for Lévy processes and Lévy processes conditioned to stay positive. For this section let
$X=(X_t)_{t \geq 0}$
be a Lévy process.
3.1. Ladder height processes and potential functions
Crucial ingredients in our analysis are the potential functions
$U_+$
and
$U_-$
of the ascending and descending ladder height processes. To introduce them, some notation is needed. Denote the local time of the Markov process
${(\sup_{s \leq t} X_s - X_t)_{t \ge 0}}$
at 0 by L, which is also called the local time of X at the maximum. Let
$L_t^{-1} = \inf\{s>0 \,:\, L_s >
t\}$
denote the inverse local time at the maximum and
$\kappa(q) = -\log
{\mathbb{E}} [ \mathrm{e}^{- q L^{-1}_1} ]$
for
$q \geq 0$
the Laplace exponent of
$L^{-1}$
. We define
$H_t =\sup_{s\leq L^{-1}_t}X_s$
, the so-called (ascending) ladder height process. It is well known that H is a subordinator. Under the dual measure
$\hat{\mathbb{P}}$
, the process
$L^{-1}$
is the inverse local time at the minimum, and we denote its Laplace exponent by
$\hat\kappa$
. Still under this dual measure, H is the descending ladder height process.
The q-resolvents of H, for
$q \geq 0$
, will be denoted by
$U_+^q$
; that is,
$U_+^q({\mathrm{d}} x) = \int_0^\infty \mathrm{e}^{-q t}\,{\mathbb{P}}\big(H_t \in {\mathrm{d}} x\big)\,{\mathrm{d}} t, \qquad x \geq 0.$
For
$q=0$
we abbreviate
$U_+({\mathrm{d}} x) = U_+^0({\mathrm{d}} x)$
, and denote the so-called potential function by
$U_+(x) = U_+([0,x])$
for
$x\geq 0$
. We define
$U_-^q$
and
$U_-$
according to the same procedure for the descending ladder height process.
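In particular, for $q = 0$ the potential function has the usual renewal-theoretic interpretation
$U_+(x) = \int_0^\infty {\mathbb{P}}\big(H_t \leq x\big)\,{\mathrm{d}} t,$
the expected time the ladder height process spends in $[0,x]$ (with the convention that the event $\{H_t \leq x\}$ fails after the lifetime of H whenever H is killed); the same applies to $U_-$.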
In the case of a stable process, all the fluctuation quantities appearing above are known explicitly (for a suitable normalisation of local time):
$\kappa(q) = q^\rho$
,
$\hat{\kappa}(q) = q^{1-\rho}$
,
$U_-(x) = x^{\alpha(1-\rho)}$
, and
$U_+(x) = x^{\alpha\rho}$
. As mentioned earlier, for a stable process without negative jumps we have
$\rho = 1$
in the case
$\alpha<1$
and
$\rho = 1-1/\alpha$
in the case
$\alpha>1$
. Hence, the expressions above simplify even more.
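For instance, in the regime $\alpha \in (1,2)$ with $\rho = 1 - 1/\alpha$, which is the one where fluctuation theory is needed below, these expressions read
$\kappa(q) = q^{1-\frac{1}{\alpha}}, \qquad \hat{\kappa}(q) = q^{\frac{1}{\alpha}}, \qquad U_-(x) = x, \qquad U_+(x) = x^{\alpha-1},$
and these are exactly the quantities appearing in Section 4. For $\alpha \in (0,1)$ the process is a subordinator and the descending ladder structure degenerates.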
3.2. Lévy processes conditioned to stay positive
For general Lévy processes (which are not compound Poisson processes) Chaumont and Doney [Reference Chaumont and Doney2] showed that the potential function
$U_-$
of the dual ladder height process is an invariant function for the Lévy process killed on entering
${({-}\infty,0]}$
. Furthermore, they showed that
$\lim_{q \searrow 0} {\mathbb{P}}_x\big(\Lambda,\ t < e_q \mid e_q < T_{({-}\infty,0]}\big) = \frac{1}{U_-(x)}\,{\mathbb{E}}_x\big[U_-(X_t)\,\mathbf{1}_\Lambda\,\mathbf{1}_{\{t < T_{({-}\infty,0]}\}}\big], \qquad \Lambda \in {\mathcal{F}}_t,\ x > 0,$
i.e. the h-transformed process corresponding to the invariant function
$U_-$
is the process conditioned to avoid
${({-}\infty,0]}$
in the sense of (1.2). Since we later consider (stable) Lévy processes without negative jumps, we remark that in this case we have
$U_-(x) = x$
. Analogously, one can construct the Lévy process conditioned to stay negative via the potential function of the ladder height process
$U_+$
. One important ingredient in the proofs is the following relation (see, e.g., (13.6) of [Reference Kyprianou6]):

Because of the spatial homogeneity of Lévy processes it is possible to condition them to stay above/below a level via an h-transform using shifted versions of
$U_-$
and
$U_+$
.
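In our setting the relevant levels are 1 and $-1$: for $x > 1$ the function $x \mapsto U_-(x-1)$ is invariant for the process killed on entering $({-}\infty,1]$, and the corresponding h-transform
${\mathbb{P}}_x^{\uparrow}(\Lambda) = \frac{1}{U_-(x-1)}\,{\mathbb{E}}_x\big[U_-(X_t-1)\,\mathbf{1}_\Lambda\,\mathbf{1}_{\{t < T_{({-}\infty,1]}\}}\big], \qquad \Lambda \in {\mathcal{F}}_t,$
is the process conditioned to stay above 1, denoted ${\mathbb{P}}_x^{\uparrow}$ in Section 4.2; analogously, for $x < -1$, the function $x \mapsto U_+({-}x-1)$ yields the process conditioned to stay below $-1$, denoted ${\mathbb{P}}_x^{\downarrow}$.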
4. Proofs
4.1. Proofs of Theorem 2.1 and Corollary 2.1
First we show that
$h(x) = {\mathbb{P}}_x(T_{[{-}1,1]} = \infty)$
,
$x \notin [{-}1,1]$
. For
$x>1$
this is obvious because X is a subordinator. For
$x<-1$
it can be shown via overshoot distributions for stable processes (see, e.g., [Reference Rogozin9]) that

The following standard argument shows that h is invariant for the process killed on entering
$[{-}1,1]$
:
${\mathbb{E}}_x\big[h(X_t)\,\mathbf{1}_{\{t < T_{[{-}1,1]}\}}\big] = \lim_{s \rightarrow \infty} {\mathbb{E}}_x\big[{\mathbb{P}}_{X_t}\big(s < T_{[{-}1,1]}\big)\,\mathbf{1}_{\{t < T_{[{-}1,1]}\}}\big] = \lim_{s \rightarrow \infty} {\mathbb{P}}_x\big(t + s < T_{[{-}1,1]}\big) = {\mathbb{P}}_x\big(T_{[{-}1,1]} = \infty\big) = h(x),$
where we used dominated convergence and the Markov property. Furthermore, another simple calculation shows that the conditioned process in the sense of (1.1) (and similarly (1.2)) is the corresponding h-transformed process:
$\lim_{s \rightarrow \infty} {\mathbb{P}}_x\big(\Lambda \mid s < T_{[{-}1,1]}\big) = \lim_{s \rightarrow \infty} \frac{{\mathbb{E}}_x\big[\mathbf{1}_\Lambda\,\mathbf{1}_{\{t < T_{[{-}1,1]}\}}\,{\mathbb{P}}_{X_t}\big(s - t < T_{[{-}1,1]}\big)\big]}{{\mathbb{P}}_x\big(s < T_{[{-}1,1]}\big)} = \frac{{\mathbb{E}}_x\big[\mathbf{1}_\Lambda\,\mathbf{1}_{\{t < T_{[{-}1,1]}\}}\,h(X_t)\big]}{h(x)} = {\mathbb{P}}_x^{\updownarrow}(\Lambda)$ for $\Lambda \in {\mathcal{F}}_t$ and $x \notin [{-}1,1]$.
4.2. Proof of Proposition 2.1
If X is a stable process with self-similarity index
$\alpha \in (1,2)$
without negative jumps, then
$U_-(x) = x$
and
$U_+(x) = x^{\alpha-1}$
for
$x >0$
. By the results of Subsection 3.2 and the spatial homogeneity of Lévy processes we get that
$x \mapsto x-1$
,
$x>1$
, is an invariant function for the stable process killed on entering
${({-}\infty,1]}$
, and
$x \mapsto ({-}x-1)^{\alpha-1}$
,
$x<-1$
, is an invariant function for the stable process killed on entering
$[{-}1,\infty)$
. In particular,
${\mathbb{P}}_x^\uparrow$
(for
$x>1$
) and
${\mathbb{P}}_x^\downarrow$
(for
$x<-1$
) are probability measures since they are constructed via h-transforms using invariant functions. Consequently,
${\mathbb{P}}_x^\updownarrow$
is a probability measure for
$x \notin [{-}1,1]$
.
That
${\mathbb{P}}_x^\updownarrow(X_t \in {\mathrm{d}} y) = P_t^\updownarrow(x,{\mathrm{d}} y)$
is obvious. It remains to show the Chapman–Kolmogorov equality for
${(P_t^\updownarrow)_{t \geq 0}}$
. For that, let us denote by
$P_t^\uparrow(x,{\mathrm{d}} y)$
the transition semigroup of the stable process conditioned to stay above 1, i.e.
$P_t^{\uparrow}(x,{\mathrm{d}} y) = \frac{y-1}{x-1}\,{\mathbb{P}}_x\big(X_t \in {\mathrm{d}} y,\ t < T_{({-}\infty,1]}\big), \qquad x, y > 1.$
Obviously, we have
$P_t^\updownarrow(x,{\mathrm{d}} y) = P_t^\uparrow(x,{\mathrm{d}} y)$
for
$x>1$
(we agree that
$P_t^\uparrow(x,{\mathrm{d}} y) = 0$
for
$y \leq 1$
). So we get, for
$x>1$
and
$y \notin [{-}1,1]$
,
$\int_{{\mathbb{R}} \setminus [{-}1,1]} P_t^{\updownarrow}(x,{\mathrm{d}} z)\,P_s^{\updownarrow}(z,{\mathrm{d}} y) = \int_{(1,\infty)} P_t^{\uparrow}(x,{\mathrm{d}} z)\,P_s^{\uparrow}(z,{\mathrm{d}} y) = P_{t+s}^{\uparrow}(x,{\mathrm{d}} y) = P_{t+s}^{\updownarrow}(x,{\mathrm{d}} y).$
The second to last equality holds because
${(P_t^\uparrow)_{t \geq 0}}$
fulfils the Chapman–Kolmogorov equality since the Lévy process conditioned to stay positive is a Markov process. The last equality holds because
$P^\updownarrow_{t+s}(x,{\mathrm{d}} y)= 0 = P^\uparrow_{t+s}(x,{\mathrm{d}} y)$
for
$x>1,\ y<-1$
. The case
$x<-1$
can be handled analogously.
4.3. Proof of Theorem 2.2
With the knowledge of stable processes conditioned to stay positive, the case
$x>1$
is almost trivial. We remind ourselves that, due to no negative jumps, the process cannot reach the area below the interval without hitting it, i.e.
$T_{({-}\infty,1]} = T_{[{-}1,1]}$
a.s. under
${\mathbb{P}}_x$
. Hence,
$\lim_{q \searrow 0} {\mathbb{P}}_x\big(\Lambda,\ t < e_q \mid e_q < T_{[{-}1,1]}\big) = \lim_{q \searrow 0} {\mathbb{P}}_x\big(\Lambda,\ t < e_q \mid e_q < T_{({-}\infty,1]}\big) = {\mathbb{P}}_x^{\uparrow}(\Lambda) = {\mathbb{P}}_x^{\updownarrow}(\Lambda).$
From now on, let
$x<-1$
. We follow the usual approach and consider first the asymptotics of
${\mathbb{P}}_x(e_q < T_{[{-}1,1]})$
for
$q \searrow 0$
.
Lemma 4.1.
There is a constant
$c>0$
such that
$\lim_{q \searrow 0} q^{\frac{1}{\alpha}-1}\,{\mathbb{P}}_x\big(e_q < T_{[{-}1,1]}\big) = c\,({-}x-1)^{\alpha-1} \qquad \text{for all } x < -1.$
Proof. We apply a theorem of Port [Reference Port7] which says that there is a constant
$\tilde c >0$
such that

where, for a general compact set K,
$y \mapsto u^{K}(x,y)$
denotes the potential density of the stable process started at x and killed on entering K. Using the fact that the process has no negative jumps together with [Reference Bertoin1, Theorem VI.20] leads to

for
$x,y<-1$
, where
$k>0$
is a constant. Combining (4.1) and (4.2) we get
$\lim_{s \rightarrow \infty} s^{1-\frac{1}{\alpha}}\,{\mathbb{P}}_x\big(s < T_{[{-}1,1]}\big) = \hat{c}\,({-}x-1)^{\alpha-1},$
where
$\hat c = \tilde c k$
. To reach the asymptotic including the exponential time, we note that
${\mathbb{P}}_x\big(e_q < T_{[{-}1,1]}\big) = \int_0^\infty q\,\mathrm{e}^{-q s}\,{\mathbb{P}}_x\big(s < T_{[{-}1,1]}\big)\,{\mathrm{d}} s.$
Since we know that
${\mathbb{P}}_x(s < T_{[{-}1,1]})$
behaves like
$\hat c ({-}x-1)^{\alpha-1} s^{\frac{1}{\alpha}-1}$
for s tending to
$\infty$
, we can apply standard Tauberian theorems (see, e.g., [Reference Kyprianou6, Theorem 5.14]) to deduce that
${\mathbb{P}}_x(e_q < T_{[{-}1,1]})$
behaves like
$c\,({-}x-1)^{\alpha-1}\,q^{1-\frac{1}{\alpha}}$
for
$q \searrow 0$
, which shows the claim.
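For completeness, the Tauberian step also identifies the constant: since ${\mathbb{P}}_x(s < T_{[{-}1,1]}) \sim \hat{c}\,({-}x-1)^{\alpha-1} s^{\frac{1}{\alpha}-1}$ as $s \rightarrow \infty$,
${\mathbb{P}}_x\big(e_q < T_{[{-}1,1]}\big) = q \int_0^\infty \mathrm{e}^{-qs}\,{\mathbb{P}}_x\big(s < T_{[{-}1,1]}\big)\,{\mathrm{d}} s \sim q \cdot \Gamma\big(\tfrac{1}{\alpha}\big)\,\hat{c}\,({-}x-1)^{\alpha-1}\,q^{-\frac{1}{\alpha}} = \Gamma\big(\tfrac{1}{\alpha}\big)\,\hat{c}\,({-}x-1)^{\alpha-1}\,q^{1-\frac{1}{\alpha}},$
so one may take $c = \Gamma(1/\alpha)\,\hat{c}$ in Lemma 4.1.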
With this lemma in hand we are ready to show Theorem 2.2 for the remaining case,
$x<-1$
.
Proof of Theorem 2.2. Let
$x<-1$
. First, we note that

Now we start from the left-hand side of (2.2) and perform some basic manipulations, applying (4.5) and the Markov property.

In the next step we show that the second summand of the last term converges to 0 for
$q \searrow 0$
.
From (3.1) and the explicit form
$U_-(x) = x$
we know that

In particular, we have

Since an
$\alpha$
-stable process has finite moments of all orders strictly less than
$\alpha$
, it holds that

On the other hand, by Lemma 4.1,
$q^{\frac{1}{\alpha}} / {\mathbb{P}}_x(e_q < T_{[{-}1,1]})$
behaves like
$q^{\frac{1}{\alpha}}/(c q^{1-\frac{1}{\alpha}}) = q^{\frac{2}{\alpha}-1}/c$
, which converges to 0 because
$\alpha<2$
implies
$2/\alpha >1$
. From this we obtain

Now we plug this into (4.6) to calculate the limit for
$q \searrow 0$
. We use Lemma 4.1 and a standard argument for Lévy processes conditioned to avoid a set. First, via Fatou's lemma we get
$\liminf_{q \searrow 0} {\mathbb{P}}_x\big(\Lambda,\ t < e_q \mid e_q < T_{[{-}1,1]}\big) \geq {\mathbb{P}}_x^{\updownarrow}(\Lambda).$
Next, note that for
$\Lambda \in {\mathcal{F}}_t$
, also
$\Lambda^{\mathrm{C}} \in {\mathcal{F}}_t$
. Moreover, we already know that
${\mathbb{P}}^\updownarrow_x$
is a probability measure by Proposition 2.1. It follows that
$\limsup_{q \searrow 0} {\mathbb{P}}_x\big(\Lambda,\ t < e_q \mid e_q < T_{[{-}1,1]}\big) \leq 1 - \liminf_{q \searrow 0} {\mathbb{P}}_x\big(\Lambda^{\mathrm{C}},\ t < e_q \mid e_q < T_{[{-}1,1]}\big) \leq 1 - {\mathbb{P}}_x^{\updownarrow}\big(\Lambda^{\mathrm{C}}\big) = {\mathbb{P}}_x^{\updownarrow}(\Lambda).$
Together, this shows that
$\lim_{q \searrow 0} {\mathbb{P}}_x\big(\Lambda,\ t < e_q \mid e_q < T_{[{-}1,1]}\big) = {\mathbb{P}}_x^{\updownarrow}(\Lambda).$