1. Introduction
Let X = {X_t}_{t≥0} be a process with independent and stationary increments, where the time parameter t is either discrete (i.e. t ∈ ℤ+ := {0, 1, …}) or continuous (i.e. t ∈ ℝ+ := [0, ∞)). Let X_0 = x be the initial state. We assume that X is defined on a filtered probability space (Ω, $\mathcal{F}$, $\{\mathcal{F}_t\}$, ℙ), where, for each t, $\mathcal{F}_t$ is the (enlarged) σ-field generated by {X_s : s ≤ t}. For a measurable reward function g ≥ 0 and a discount factor q ≥ 0, we consider the problem of maximizing the expected discounted reward over all stopping times. The objective is to find the value function V(x) and an optimal stopping time τ* (if it exists) satisfying
$$V(x) = \sup_{\tau\in\mathcal{M}}\mathbb{E}_{x}\big(e^{-q\tau}g(X_{\tau})\mathbf{1}_{\{\tau<\infty\}}\big) = \mathbb{E}_{x}\big(e^{-q\tau^{*}}g(X_{\tau^{*}})\mathbf{1}_{\{\tau^{*}<\infty\}}\big), \tag{1.1}$$
where the subscript x in 𝔼_x refers to the initial state X_0 = x, $\mathcal{M}$ is the collection of all stopping times taking values in [0, ∞], and 1_A denotes the indicator function of A. For a < ∞, the stopping time τ_a := inf{t ≥ 0: X_t ≥ a} is said to be of threshold type with threshold a. An optimal stopping time of threshold type exists if τ* = τ_a for some a < ∞, in which case the optimal stopping problem (1.1) is said to admit a one-sided solution.
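For readers who want to experiment, the expected reward of a fixed threshold rule can be estimated by Monte Carlo. The following sketch is illustrative only (the Gaussian step distribution, drift, discount, and truncation horizon are arbitrary choices, not taken from the paper):

```python
import math
import random

def threshold_reward(x, a, q, g, step, n_paths=2000, horizon=2000):
    """Monte Carlo estimate of E_x[e^{-q tau_a} g(X_{tau_a}) 1{tau_a < infinity}]
    for the threshold rule tau_a = inf{t >= 0 : X_t >= a}.
    Paths that have not crossed by `horizon` contribute 0, a truncation that is
    harmless when q > 0 (the discount kills late contributions)."""
    total = 0.0
    for _ in range(n_paths):
        pos, t = x, 0
        while pos < a and t < horizon:
            pos += step()          # one increment of the random walk
            t += 1
        if pos >= a:               # tau_a = t on this path (tau_a = 0 if x >= a)
            total += math.exp(-q * t) * g(pos)
    return total / n_paths

random.seed(0)
# Negative-drift Gaussian walk, reward g(x) = x^+, discount q = 0.05.
est = threshold_reward(x=0.0, a=1.0, q=0.05,
                       g=lambda y: max(y, 0.0),
                       step=lambda: random.gauss(-0.2, 1.0))
print(est)
```

The function accepts any step distribution passed as a sampler, so the same sketch can be reused for the heavier-tailed walks discussed later.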
In the literature, motivated by applications in American option pricing, the optimal stopping problem (1.1) has been solved explicitly for special reward functions including (x^+)^ν := (max{x, 0})^ν (ν > 0), (e^x − K)^+, and (K − e^{−x})^+ in both discrete and continuous time; see [Reference Dubins and Teicher10], [Reference Darling, Liggett and Taylor8], [Reference Mordecki15], [Reference Novikov and Shiryaev17], [Reference Novikov and Shiryaev18], and [Reference Kyprianou and Surya13]. Note that all of the above reward functions are continuous, increasing, and logconcave, while the corresponding optimal stopping times are of threshold type. (Here and below, the word ‘increasing’ means ‘nondecreasing’.) Note also that, for each of these reward functions, the one-sided solution for (1.1) is found explicitly under general random walks in discrete time and Lévy processes in continuous time (subject to mild integrability conditions). On the other hand, by imposing more structure on {X_t}, the problem (1.1) can be solved explicitly for more general reward functions. Indeed, when {X_t} is a matrix-exponential jump diffusion, Sheu and Tsai [Reference Sheu and Tsai21] obtained an explicit one-sided solution of (1.1) for a fairly general class of increasing and logconcave reward functions g ≥ 0 which satisfy some additional technical conditions.
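Logconcavity of such rewards is easy to check numerically. The sketch below (illustrative only; the grid, tolerance, and the parameter K = 1 are ad hoc choices) tests midpoint logconcavity, g((x + y)/2)² ≥ g(x)g(y), for the three examples just mentioned:

```python
import math

def is_logconcave_on_grid(g, xs, tol=1e-9):
    """Midpoint test: h = log g must satisfy h((x+y)/2) >= (h(x)+h(y))/2
    wherever it constrains anything (with the convention log 0 := -inf,
    pairs with a vanishing endpoint impose no constraint)."""
    h = [math.log(g(x)) if g(x) > 0 else float("-inf") for x in xs]
    for i in range(len(xs)):
        for j in range(i + 2, len(xs), 2):     # even gap: midpoint is a grid point
            m = (i + j) // 2
            lhs, rhs = h[m], 0.5 * (h[i] + h[j])
            if rhs != float("-inf") and lhs < rhs - tol:
                return False
    return True

xs = [k / 10 for k in range(-50, 51)]
K = 1.0
print(is_logconcave_on_grid(lambda x: max(x, 0.0) ** 2, xs))           # (x^+)^nu, nu = 2
print(is_logconcave_on_grid(lambda x: max(math.exp(x) - K, 0.0), xs))  # (e^x - K)^+
print(is_logconcave_on_grid(lambda x: max(K - math.exp(-x), 0.0), xs)) # (K - e^{-x})^+
```

All three tests pass, consistent with the claim that these rewards are logconcave.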
In view of the above results, two natural questions arise concerning the relationship between the logconcavity and monotonicity of g and the existence of a one-sided solution:
(Q1) Does the logconcavity and monotonicity of g imply the existence of a one-sided solution? More precisely, if g ≥ 0 is increasing and logconcave, does (1.1) admit a one-sided solution under general random walks and Lévy processes?
(Q2) To what extent is the logconcavity and monotonicity of g implied by the existence of a one-sided solution?
To make (Q2) more precise, observe that it is easy to find a reward function g ≥ 0 (which is neither increasing nor logconcave) and a process {Y_t}_{t≥0} with Y_0 = 0 such that, for some threshold a, τ_a is optimal under {X_t = x + Y_t}_{t≥0} for any initial state x ∈ ℝ. Indeed, if, for some g ≥ 0, τ_a = inf{t ≥ 0: X_t ≥ a} is optimal under {X_t = x + Y_t}_{t≥0} for all x ∈ ℝ, then, with respect to the same process {X_t},
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU1.gif?pub-status=live)
where g̃ ≥ 0 is any reward function satisfying
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn2.gif?pub-status=live)
This shows that, under the process {X t} (for any X 0 = x ∈ ℝ), τ a is optimal for any reward function g̃ satisfying (1.2), which need not be increasing or logconcave. Thus, it seems natural to formulate (Q2) as: if g ≥ 0 is such that (1.1) admits a one-sided solution with respect to a sufficiently rich class of processes {X t}, is g necessarily increasing and logconcave?
The work of Hsiau et al. [Reference Hsiau, Lin and Yao11] makes an attempt to address (Q1) and (Q2). Specifically, they showed (cf. [Reference Hsiau, Lin and Yao11, Theorem 3.1]) that if g ≥ 0 is increasing, logconcave, and right continuous, (1.1) admits a one-sided solution provided that {X_t} is a spectrally negative Lévy process (for which a tractable fluctuation theory is available due to the absence of overshoots). Furthermore, it was established (cf. [Reference Hsiau, Lin and Yao11, Theorem 6.1]) that, for fixed q > 0, a nonnegative measurable reward function g is necessarily increasing and logconcave if, for each α ∈ ℝ, there is a threshold u(α) < ∞ such that τ_{u(α)} is optimal under the Brownian motion process X_t = x + αt + B_t, where {B_t} is a standard Brownian motion. (A similar result was also established for the q = 0 case (cf. [Reference Hsiau, Lin and Yao11, Theorem 5.2]), where the drift parameter α is restricted to α < 0.)
In the present paper we address (Q1) in full generality (with the right-continuity condition imposed on g), thereby generalizing the aforementioned results in the literature. Specifically, we treat the discrete-time case in Section 2, and show that if g ≥ 0 is nonconstant, increasing, logconcave, and right continuous then, with respect to any random walk, there is a unique threshold − ∞ ≤ u ≤ ∞ such that V(x) > g(x) for x < u and V(x) = g(x) for x ≥ u. Moreover, if −∞≤ u <∞, τ u is optimal attaining the (finite) value V(x) of (1.1) for all x ∈ ℝ. If u = ∞, either V(x) = ∞ for all x ∈ ℝ in which case there are randomized stopping times with an infinite expected (discounted) reward, or V(x) < ∞ for all x ∈ ℝ in which case no optimal stopping time exists. Via the standard time discretization device, the results in Section 2 are applied in Section 3 to general Lévy processes in continuous time.
With the help of the results in Section 3, in Section 4 we investigate the principle of smooth fit for Lévy processes {X_t} when g ≥ 0 is a general increasing and logconcave function. Alili and Kyprianou [Reference Alili and Kyprianou1] showed that, for g(x) = (K − e^{−x})^+ (written here in our setting; it corresponds to the perpetual American put), the smooth fit principle holds if and only if 0 is regular for (0, ∞) for {X_t}. They also conjectured that this result holds more generally. (See also [Reference Boyarchenko and Levendorskiǐ3] for a related discussion, and [Reference Peskir19] for an example in which {X_t} is a regular diffusion process and g is differentiable, but the value function fails to satisfy the smooth fit condition at the optimal stopping boundary.) We show in Section 4 that their conjecture holds for general increasing and logconcave g ≥ 0 provided log g is not linear in any interval. If log g is linear in some interval, the smooth fit principle may hold even when 0 is irregular for (0, ∞) for {X_t}. Section 5 contains concluding remarks. The proofs of some technical lemmas (stated in Section 2) are relegated to Appendix A, in which the logconcavity of g plays a key role. (Because of space limitations, some detailed computations in the proofs are not provided. The reader may refer to the long version [Reference Lin and Yao14] of the present paper for details.) It should be remarked that, as the characterization of the optimal threshold u for general g and {X_t} (cf. (2.2) below) is less than explicit, there is no general explicit expression for the corresponding value function. This fact makes it a nontrivial task to verify that the value function is excessive.
We close this section by briefly reviewing some recent papers in which effective methods are proposed to construct an explicit solution of (1.1) for g (not necessarily increasing or logconcave) under {X t}, which is a Lévy process or a more general Markov process in continuous time. Surya [Reference Surya23] introduced an averaging problem (associated with (1.1)) whose solution, if it exists, yields a fluctuation identity for overshoots of a Lévy process. Then the value and the optimal stopping time for (1.1) can be expressed in terms of the solution to the averaging problem provided this solution has certain monotonicity properties. See also [Reference Deligiannidis, Le and Utev9] for related results on Lévy processes as well as on random walks. The work of Christensen et al. [Reference Christensen, Salminen and Ta6] characterized the solution of (1.1) similarly as in [Reference Deligiannidis, Le and Utev9] and [Reference Surya23], but under very general strong Markov processes including diffusions, Lévy processes, and continuous-time Markov chains. Moreover, the solution can be either one-sided or two-sided depending on the representing function for the given reward function. More recently, Mordecki and Mishura [Reference Mordecki and Mishura16] generalized Surya’s averaging problem so as to make the construction method more flexible. Lately, Christensen [Reference Christensen4] introduced an auxiliary problem for a two-dimensional process consisting of the underlying (Markov) process {X t} and its running maximum. Under suitable assumptions, the auxiliary problem turns out to have the infinitesimal look-ahead rule as its solution, which then yields an optimal stopping time for (1.1).
2. Optimal stopping for random walks
In this section we use n ∈ ℤ+ (instead of t) to denote the discrete-time parameter. Let ξ, ξ_1, ξ_2, … be a sequence of real-valued independent and identically distributed (i.i.d.) random variables. To avoid trivial cases, assume that ℙ(ξ > 0) > 0. For X_0 = x ∈ ℝ, let X_{n+1} = X_n + ξ_{n+1} for n = 0, 1, …, so that {X_n}_{n≥0} is a random walk with initial state x. For y ∈ ℝ ∪ {−∞}, define τ_y := inf{n ≥ 0: X_n ≥ y} (a threshold-type stopping time) and T_y := inf{n ≥ 1: X_n ≥ y} (which is different from τ_y if X_0 ≥ y). Consider a nonnegative reward function g: ℝ → [0, ∞) which is nonconstant, increasing (i.e. g(x) ≤ g(y) for x < y), and logconcave (i.e. g(θx + (1 − θ)y) ≥ (g(x))^θ (g(y))^{1−θ} for all x, y and 0 < θ < 1). Letting log 0 := −∞, the function h(x) := log g(x) is increasing and concave, so that the left-hand derivative h′(x−) is well defined (possibly +∞) at every x with h(x) > −∞. Letting h′(x−) := +∞ if h(x) = −∞, then h′(x−) is decreasing (and nonnegative) in x ∈ ℝ. Define
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn3.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn4.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn5.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn6.gif?pub-status=live)
(While the subscript x in 𝔼x refers to the initial state X 0 = x, in the x = 0 case, we write 𝔼 = 𝔼0 for simplicity, as in (2.3).)
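A concrete feel for the threshold u can be gained by simulation: as Remarks 2.2 and 2.5 below indicate, u is where the ratio 𝔼_x(e^{−qT_x} g(X_{T_x})1{T_x < ∞})/g(x) first drops to 1 or below. The sketch below (illustrative parameters; the horizon truncation is an approximation that is harmless for q > 0) estimates this ratio for g(x) = x^+ using the shift identity of Remark 2.2:

```python
import math
import random

random.seed(1)
q = 0.05
N = 2000
g = lambda y: max(y, 0.0)              # reward g(x) = x^+

# Sample (T_0, X_{T_0}) for the walk started at 0, T_0 = inf{n >= 1 : X_n >= 0}.
# Paths with T_0 > 1000 are treated as T_0 = infinity (negligible since q > 0).
samples = []
for _ in range(N):
    pos, n = 0.0, 0
    while n < 1000:
        pos += random.gauss(-0.3, 1.0)
        n += 1
        if pos >= 0.0:
            samples.append((n, pos))
            break

def ratio(x):
    """Estimate E_x[e^{-q T_x} g(X_{T_x}) 1{T_x < inf}] / g(x), using the shift
    identity L(T_x, X_{T_x} | X_0 = x) = L(T_0, x + X_{T_0} | X_0 = 0)."""
    num = sum(math.exp(-q * n) * g(x + pos) for n, pos in samples) / N
    return num / g(x)

# The threshold u of (2.2) is roughly where this decreasing ratio crosses 1.
for x in (0.5, 1.0, 1.5, 2.0, 3.0):
    print(x, round(ratio(x), 3))
```

Since the same samples are reused for every x, the estimated ratio is monotone decreasing in x, mirroring the monotonicity noted in Remark 2.2.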
Remark 2.1
A nonnegative, nonconstant, increasing, and logconcave function g is continuous everywhere except possibly at
$$x_0 := \inf\{x\in\mathbb{R}\colon g(x)>0\}. \tag{2.5}$$
Note that −∞ ≤ x_0 < ∞. Moreover, since g is nonconstant, increasing, and logconcave, g is not identically 0, while lim_{x→−∞} g(x) = 0. In Theorem 2.1 below, g is assumed to be right continuous, which implies that g(x_0) = g(x_0+) if x_0 > −∞.
Remark 2.2
Note that $\mathcal{L}(T_x,X_{T_x}\mathbf{1}_{\{T_x<\infty\}}\mid X_0=x)=\mathcal{L}(T_0,(x+X_{T_0})\mathbf{1}_{\{T_0<\infty\}}\mid X_0=0)$, where $\mathcal{L}(Z)$ denotes the law of a random vector Z. Since, by logconcavity, g(x + δ)/g(x) is decreasing in x for δ ≥ 0, it follows that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn8.gif?pub-status=live)
It can be argued that if G(x) := 𝔼_x(e^{−qT_x} g(X_{T_x})1{T_x < ∞}) = ∞ for some x, then G(x) = ∞ for all x ∈ ℝ, in which case we have u = ∞. Thus, if u < ∞ then G(x) is finite, continuous, and increasing in x > x_0, so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU2.gif?pub-status=live)
where x 0 is given in (2.5). It follows that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn9.gif?pub-status=live)
Theorem 2.1
Let g: ℝ → [0, ∞) be nonconstant, increasing, logconcave, and right continuous, and define β, u, W, and V(x) as in (2.1)–(2.4). Assume that ℙ(ξ > 0) > 0. Then the following statements hold.
(i) If −∞ ≤ u < ∞ then the threshold-type stopping time τ_u is optimal, i.e. V(x) = 𝔼_x(e^{−qτ_u} g(X_{τ_u})1{τ_u < ∞}) for all x ∈ ℝ.
(ii) If u = ∞ then V(x) = lim_{y→∞} 𝔼_x(e^{−qτ_y} g(X_{τ_y})1{τ_y < ∞}) = e^{βx}W for all x. If, in addition, W = ∞ then there exist (randomized) stopping times that yield an infinite expected (discounted) reward; if W < ∞ then there is no optimal stopping time.
Corollary 2.1
(i) V(x) > g(x) for x < u and V(x) = g(x) for x ≥ u.
(ii) V(x)/g(x) is decreasing in x.
(iii) If x_0 < u < ∞, u = −∞, or (u = ∞ and W < ∞), then V(x) is continuous everywhere.
Remark 2.3
By Corollary 2.1, u = inf{x: V(x) = g(x)} = sup{x: V(x) > g(x)}. By Theorem 2.1, (−∞, u) and [u, ∞) are the (optimal) continuation and stopping regions, respectively. If u = −∞, there is no continuation region, so stopping immediately is optimal. If u = ∞, the stopping region is empty and there is no optimal stopping time, in the sense that, for any initial state x and for any stopping time τ, we can always find another stopping time τ′ such that ℙ_x(τ′ ≥ τ) = 1 and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU3.gif?pub-status=live)
Remark 2.4
Novikov and Shiryaev [Reference Novikov and Shiryaev18] solved (1.1) for g(x) = (x^+)^ν with ν > 0, and found that the optimal threshold is the positive root of the associated Appell function, which generalizes an earlier result of Darling et al. [Reference Darling, Liggett and Taylor8] for ν = 1. It can be shown that their optimal threshold agrees with (2.2). As an illustration, consider g(x) = x^+ and q = 0, for which Darling et al. [Reference Darling, Liggett and Taylor8] showed that if 𝔼(ξ) < 0 then τ_{𝔼(M)} is optimal, where M := sup{0, ξ_1, ξ_1 + ξ_2, …}. For 𝔼(ξ) < 0, the value of u defined in (2.2) satisfies (cf. (2.7))
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU4.gif?pub-status=live)
yielding
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU5.gif?pub-status=live)
where the second equality follows from the fact that $\mathcal{L}(M)=\mathcal{L}((X_{T_0}+M')\mathbf{1}_{\{T_0<\infty\}}\mid X_0=0)$, with M′ being an independent copy of M. When 𝔼(ξ) ≥ 0 or 𝔼(ξ) is undefined (i.e. 𝔼(max{ξ, 0}) = ∞ = 𝔼(max{−ξ, 0})), it is readily shown that u = ∞ = 𝔼(M). Thus, the value of u defined in (2.2) equals 𝔼(M) regardless of whether 𝔼(ξ) < 0.
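The distributional identity invoked in the last step, $\mathcal{L}(M)=\mathcal{L}((X_{T_0}+M')\mathbf{1}_{\{T_0<\infty\}}\mid X_0=0)$, can be checked by simulation. In the sketch below (illustrative drift and truncation horizons, not from the paper) both sides are estimated for a Gaussian walk with negative drift:

```python
import random

random.seed(2)
MU, SIGMA, HORIZON, N = -0.5, 1.0, 300, 2000

def sample_M():
    """M = sup{0, X_1, X_2, ...}, truncated at HORIZON steps; with E(xi) < 0
    the supremum is attained early with high probability."""
    pos, best = 0.0, 0.0
    for _ in range(HORIZON):
        pos += random.gauss(MU, SIGMA)
        best = max(best, pos)
    return best

def sample_rhs():
    """(X_{T_0} + M') 1{T_0 < infinity}, where T_0 = inf{n >= 1 : X_n >= 0}
    and M' is an independent copy of M."""
    pos = 0.0
    for _ in range(HORIZON):
        pos += random.gauss(MU, SIGMA)
        if pos >= 0.0:
            return pos + sample_M()
    return 0.0                         # T_0 > HORIZON, treated as infinity

m_mean = sum(sample_M() for _ in range(N)) / N
rhs_mean = sum(sample_rhs() for _ in range(N)) / N
print(round(m_mean, 3), round(rhs_mean, 3))   # the two estimates should roughly agree
```

Up to Monte Carlo noise and truncation bias, the two sample means coincide, which is the ladder-epoch decomposition behind the identity.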
Remark 2.5
For x with g(x) > 0, it can be shown that 𝔼_x(e^{−qT_x} g(X_{T_x})1{T_x < ∞})/g(x) ≤ 1 if and only if 𝔼_x(e^{−qτ_{x+}} g(X_{τ_{x+}})1{τ_{x+} < ∞})/g(x) ≤ 1, where τ_{x+} := inf{n ≥ 0: X_n > x}. It follows that the definition of u in (2.2) is equivalent to
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU6.gif?pub-status=live)
If x 0 < u < ∞, it follows from (2.7) that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU7.gif?pub-status=live)
which, together with the optimality of τ_u, implies that τ_{u+} is also optimal. However, τ_{u+} may not be optimal if u = x_0 > −∞. As an example, consider the (logconcave) function g(x) = 1_{[0,∞)}(x), for which u = x_0 = 0. While τ_0 is optimal, we have g(0) = 1 > 𝔼_0(e^{−qτ_{0+}} g(X_{τ_{0+}})1{τ_{0+} < ∞}) if q > 0 or 𝔼(ξ) < 0.
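The strict inequality at the end of the example can be observed numerically. The following sketch (arbitrary Gaussian step distribution and discount, chosen for illustration) estimates 𝔼_0(e^{−qτ_{0+}} g(X_{τ_{0+}})1{τ_{0+} < ∞}) for g = 1_{[0,∞)}:

```python
import math
import random

random.seed(3)

def tau0plus_value(q, mu, n_paths=2000, horizon=1000):
    """E_0[e^{-q tau_{0+}} g(X_{tau_{0+}}) 1{tau_{0+} < inf}] for g = 1_[0,inf),
    where tau_{0+} = inf{n >= 0 : X_n > 0}; from X_0 = 0 the rule must take at
    least one step, so a discount (or a chance of never crossing) is paid."""
    total = 0.0
    for _ in range(n_paths):
        pos = 0.0
        for n in range(1, horizon + 1):
            pos += random.gauss(mu, 1.0)
            if pos > 0.0:
                total += math.exp(-q * n)   # g(X_tau) = 1 since X_tau > 0
                break
    return total / n_paths

val = tau0plus_value(q=0.1, mu=-0.3)
print(val)   # strictly less than g(0) = 1, so tau_{0+} is suboptimal at x = 0
```

Every crossing contributes at most e^{−q} < 1 and some paths never cross, so the estimate is bounded away from 1, in line with the remark.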
To prove Theorem 2.1, we need the following lemmas.
Lemma 2.1
For f: ℝ → [0, ∞) and v ∈ ℝ, let U(x) := 𝔼_x(e^{−qτ_v} f(X_{τ_v})1{τ_v < ∞}), x ∈ ℝ. Then 𝔼(e^{−q} U(x + ξ)) = 𝔼_x(e^{−qT_v} f(X_{T_v})1{T_v < ∞}), x ∈ ℝ.
Lemma 2.2
Let f(x) and g(x) be nonnegative functions defined on ℝ. If f(x) ≥ g(x) and f(x) ≥ 𝔼(e^{−q} f(x + ξ)) for all x, then $f(x)\geq\sup_{\tau\in\mathcal{M}}\mathbb{E}_{x}\left(e^{-q\tau}g(X_{\tau}) \mathbf{1}_{\left\{\tau<\infty\right\}}\right)$ for x ∈ ℝ.
Lemma 2.3
Assume that ℙ(ξ > 0) > 0. Let g: ℝ → [0, ∞) be nonconstant, increasing, and logconcave. Suppose that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU8.gif?pub-status=live)
Then the following statements hold:
(i) 𝔼_x(e^{−qT_a} g(X_{T_a})1{T_a < ∞}) ≤ 𝔼_x(e^{−qT_y} g(X_{T_y})1{T_y < ∞}) for x ∈ ℝ and a ∈ [−∞, y);
(ii) g(x) ≤ 𝔼_x(e^{−qτ_y} g(X_{τ_y})1{τ_y < ∞}) for x ∈ (−∞, y).
Lemma 2.4
Assume that ℙ(ξ > 0) > 0. Let g: ℝ → [0, ∞) be nonconstant, increasing and logconcave. Suppose that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU9.gif?pub-status=live)
Then the following statements hold:
(i) 𝔼_x(e^{−qT_y} g(X_{T_y})1{T_y < ∞}) ≥ 𝔼_x(e^{−qT_a} g(X_{T_a})1{T_a < ∞}) for x ∈ ℝ and a ∈ (y, ∞);
(ii) g(y) ≥ 𝔼_y(e^{−qτ_a} g(X_{τ_a})1{τ_a < ∞}) for a ∈ (y, ∞).
Lemma 2.5
Assume that ℙ(ξ > 0) > 0 and ϕ(λ) := 𝔼(e^{λξ}) < ∞ for all λ ≥ 0. Let g: ℝ → [0, ∞) be nonconstant, increasing, logconcave, and right continuous. Define u as in (2.2). Suppose that u < ∞. Then τ_u is optimal. Moreover, the value function V(x) satisfies V(x) > g(x) for x < u and V(x) = g(x) for x ≥ u.
Lemma 2.6
Assume that ℙ(ξ > 0) > 0. Let g: ℝ → [0, ∞) be nonconstant, increasing, logconcave, and right continuous. Define u as in (2.2). Suppose that there exist x′ ∈ ℝ and c > 0 such that g(x) = c for x ≥ x′. Then u ≤ x′ and τu is optimal. Moreover, the value function V(x) satisfies V(x) > g(x) for x < u and V(x) = g(x) for x ≥ u.
Lemma 2.1 follows easily by conditioning on X 1 = x + ξ. Lemma 2.2 is a standard result (see [Reference Novikov and Shiryaev17, Lemma 5]). The proofs of Lemmas 2.3–2.6 are relegated to Appendix A. We are now ready to prove Theorem 2.1.
Proof of Theorem 2.1(i). Let {b_k}_{k≥1} be an increasing sequence such that b_1 > x_0 and lim_{k→∞} b_k = ∞. For each k ≥ 1, let g_k(x) := g(x ∧ b_k) = g(min{x, b_k}) for x ∈ ℝ, and let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn10.gif?pub-status=live)
Then, for k ≥ 1, g_k(x) is nonconstant, increasing, logconcave, and right continuous. Since g_k(x) is increasing in k, so is V_k(x). Note that g_k(x) = g(b_k) > 0 for x ≥ b_k and g_k(x) = g(x) for x < b_k. We have V_k(x) ≤ g(b_k) for x ∈ ℝ, and, by Lemma 2.6, it follows that τ_{u_k} is optimal for the optimal stopping problem (2.8) with reward function g_k, where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn11.gif?pub-status=live)
We claim that u_k is increasing in k. If u_{k+1} ≥ b_k then clearly u_{k+1} ≥ u_k. In the u_{k+1} < b_k case, we have V_k(u_{k+1}) ≤ V_{k+1}(u_{k+1}) = g_{k+1}(u_{k+1}) = g(u_{k+1}) = g_k(u_{k+1}), implying that u_{k+1} ≥ u_k.
Let u_∞ := lim_{k→∞} u_k and V_∞(x) := lim_{k→∞} V_k(x). For any $\tau\in\mathcal{M}$, we have, by the monotone convergence theorem,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU10.gif?pub-status=live)
implying that V(x) = V_∞(x) for all x. We now prove in three steps that τ_u is optimal if −∞ ≤ u < ∞. We show in step 1 that u_∞ ≤ u (< ∞), in step 2 that τ_{u_∞} is optimal, and in step 3 that u_∞ ≥ u.
Step 1. To prove that u_∞ ≤ u, suppose to the contrary that u < u_∞. Choose an x and a (large) k such that u < x < u_k and b_k ≥ x. We have, by (2.2),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU11.gif?pub-status=live)
which, together with (2.9), implies that x ≥ u_k, a contradiction. This proves that u_∞ ≤ u.
Step 2. To prove that τ_{u_∞} is optimal, it suffices to show that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn12.gif?pub-status=live)
If u_∞ = −∞ then u_k = u_∞ = −∞ for all k, implying that, for all x,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU12.gif?pub-status=live)
establishing (2.10) for the u ∞ = −∞ case.
Suppose that −∞ < u_∞ (≤ u < ∞). Since u_∞ ≥ u_k for all k, we have, for x ≥ u_∞,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU13.gif?pub-status=live)
It remains to prove (2.10) for x < u_∞. Note that u_∞ ≥ u_k ≥ x_0 for all k. If u_∞ = x_0 then u_k = u_∞ = x_0 for all k, so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU14.gif?pub-status=live)
proving (2.10).
Now suppose that u_∞ > x_0. To prove (2.10) for x < u_∞, let k_0 be so large that x < u_{k_0}, 2u_{k_0} − u_∞ > x_0, and b_{k_0} > u_∞. Thus, for all k ≥ k_0, we have x < u_k, 2u_k − u_∞ > x_0, and b_k > u_∞. For k ≥ k_0, let ε_k := u_∞ − u_k (≥ 0),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU15.gif?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU16.gif?pub-status=live)
Since $\mathcal{L}(\tau_{u_\infty},X_{\tau_{u_\infty}}\mathbf{1}_{\{\tau_{u_\infty}<\infty\}}\mid X_0=x)=\mathcal{L}(\tau_{u_k},(\varepsilon_k+X_{\tau_{u_k}})\mathbf{1}_{\{\tau_{u_k}<\infty\}}\mid X_0=x-\varepsilon_k)$, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn13.gif?pub-status=live)
Let τ″ := τ_{u_k−x} := inf{n ≥ 0: X_n ≥ u_k − x}. On {τ″ < ∞}, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU17.gif?pub-status=live)
Since $\mathcal{L}(\tau_{u_\infty},X_{\tau_{u_\infty}}\mathbf{1}_{\{\tau_{u_\infty}<\infty\}}\mid X_0=x)=\mathcal{L}(\tau_{u_k},(\varepsilon_k+X_{\tau_{u_k}})\mathbf{1}_{\{\tau_{u_k}<\infty\}}\mid X_0=x-\varepsilon_k)$, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn14.gif?pub-status=live)
where the inequality is due to the logconcavity of g_k. By (2.11) and (2.12), we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn15.gif?pub-status=live)
Letting k → ∞, the right- and left-hand sides of (2.13) tend, respectively, to V_∞(x) and 𝔼_x(e^{−qτ_{u_∞}} g(X_{τ_{u_∞}})1{τ_{u_∞} < ∞}), yielding (2.10).
Step 3. We now prove that u_∞ ≥ u. Suppose to the contrary that u > u_∞ (≥ x_0). Then it follows from (2.2) that 𝔼_{u_∞}(e^{−qT_{u_∞}} g(X_{T_{u_∞}})1{T_{u_∞} < ∞})/g(u_∞) > 1, which implies, by the optimality of τ_{u_∞} (established in step 2), that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU18.gif?pub-status=live)
a contradiction. Thus, u ≤ u ∞. The proof is complete.
Proof of Theorem 2.1(ii). Let Q_y(x) := 𝔼_x(e^{−qτ_y} g(X_{τ_y})1{τ_y < ∞}), x, y ∈ ℝ. We claim that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn16.gif?pub-status=live)
For any given x, y, and y′ with y < y′, we have ℙ_x(τ_y ≤ τ_{y′}) = 1. On {τ_y < ∞}, define
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU19.gif?pub-status=live)
so that Q_{y′}(x) = 𝔼_x(e^{−qτ_y} H 1{τ_y < ∞}). If y < y′ ≤ x_0, we have H ≥ g(X_{τ_y}) = 0 on {τ_y < ∞, X_{τ_y} < x_0} and H = g(X_{τ_y}) on {τ_y < ∞, X_{τ_y} ≥ x_0}. If y′ > x_0, we have 𝔼_{y′}(e^{−qT_{y′}} g(X_{T_{y′}})1{T_{y′} < ∞})/g(y′) > 1 (since u = ∞). By Lemma 2.3(ii), H ≥ g(X_{τ_y}) on {τ_y < ∞, X_{τ_y} < y′} and H = g(X_{τ_y}) on {τ_y < ∞, X_{τ_y} ≥ y′}. Thus, we have shown that H ≥ g(X_{τ_y}) on {τ_y < ∞}, implying that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU20.gif?pub-status=live)
establishing (2.14). Let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU21.gif?pub-status=live)
To show that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn17.gif?pub-status=live)
consider an increasing sequence {b_k}_{k≥1} with b_1 > x_0 and lim_{k→∞} b_k = ∞. Let g_k(x) := g(x ∧ b_k), and let V_k(x) and u_k be defined as in (2.8) and (2.9), respectively. Observing that V(x) = lim_{k→∞} V_k(x) = sup_k V_k(x), we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU22.gif?pub-status=live)
where the second equality is due to the optimality of the threshold u_k for the reward function g_k. This proves (2.15).
Since $\mathcal{L}(\tau_y,X_{\tau_y}\mathbf{1}_{\{\tau_y<\infty\}}\mid X_0=x)=\mathcal{L}(\tau_{y-x},(x+X_{\tau_{y-x}})\mathbf{1}_{\{\tau_{y-x}<\infty\}}\mid X_0=0)$, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn18.gif?pub-status=live)
where we have used the facts that, on {τ_y < ∞}, X_{τ_y} ≥ y and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU23.gif?pub-status=live)
Note that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU24.gif?pub-status=live)
as y → ∞, and that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU25.gif?pub-status=live)
as y → ∞. Suppose that W = ∞. By (2.16), V(x) = e^{βx}W = ∞. In view of Q_y(x) → ∞ as y → ∞, let y_k, k = 1, 2, …, be such that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU26.gif?pub-status=live)
Let τ be a randomized stopping time of threshold type which chooses the threshold y_k with probability 2^{−k}, k = 1, 2, …. Then we have 𝔼_x(e^{−qτ} g(X_τ)1{τ < ∞}) = ∞.
Now suppose that W < ∞. To prove that there is no optimal stopping time, it suffices to show, for x ∈ ℝ and any stopping time τ with ℙ_x(τ < ∞) > 0, that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn19.gif?pub-status=live)
Consider an increasing sequence of stopping times $\tau'_{k}\,{:\!=}\,\inf\{n\ge\tau\colon X_{n}\ge k\}\ge\tau$, k = 1, 2, …. We have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU27.gif?pub-status=live)
which, by (2.15), increases to 𝔼_x(e^{−qτ} V(X_τ)1{τ < ∞}) as k → ∞. Since V(y) > g(y) for all y and ℙ_x(τ < ∞) > 0, we have V(x) ≥ 𝔼_x(e^{−qτ} V(X_τ)1{τ < ∞}) > 𝔼_x(e^{−qτ} g(X_τ)1{τ < ∞}), establishing (2.17). The proof is complete.
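The dichotomy of Theorem 2.1 and Corollary 2.1(i) can be visualized in a toy example. The sketch below is not part of the proof; the lattice walk, parameters, and grid clamping are illustrative assumptions. It runs value iteration V ← max(g, e^{−q}𝔼V(· + ξ)) for g(x) = x^+ and reads off the threshold u:

```python
import math

# Lattice walk xi = +1 w.p. p, xi = -1 w.p. 1 - p (negative drift), reward
# g(x) = x^+, discount q > 0.  Value iteration on a finite grid; the grid
# edges are clamped (V = g far above, where stopping is clearly optimal,
# and V ~ 0 far below) -- truncation assumptions, not part of the theorem.
p, q = 0.45, 0.01
lo, hi = -30, 40
disc = math.exp(-q)
g = {x: float(max(x, 0)) for x in range(lo, hi + 1)}
V = dict(g)
for _ in range(4000):
    V = {x: max(g[x],
                disc * (p * (V[x + 1] if x < hi else float(x + 1))
                        + (1 - p) * (V[x - 1] if x > lo else 0.0)))
         for x in range(lo, hi + 1)}

# Smallest grid point where V = g: the optimal threshold u on this grid.
u = min(x for x in range(lo, hi + 1) if abs(V[x] - g[x]) < 1e-8)
print("threshold u ~", u)
assert all(V[x] > g[x] for x in range(lo, u))                   # continuation region
assert all(abs(V[x] - g[x]) < 1e-8 for x in range(u, hi + 1))   # stopping region
```

The converged iterates exhibit exactly the one-sided structure of the theorem: V > g strictly below a single threshold and V = g above it.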
Remark 2.6
The proof of Corollary 2.1 is relatively straightforward and is omitted. The interested reader may refer to [Reference Lin and Yao14] for details.
3. Optimal stopping for Lévy processes
Let X = {X_t}_{t≥0} be a Lévy process with initial state X_0 = x ∈ ℝ. For a comprehensive discussion of Lévy processes, see [Reference Bertoin2], [Reference Kyprianou12], and [Reference Sato20]. As in Section 2, assume that ℙ(X_1 > 0) > 0, since the ℙ(X_1 ≤ 0) = 1 case is trivial. For q ≥ 0 and g: ℝ → [0, ∞), let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn20.gif?pub-status=live)
where $\mathcal{M}$ is the class of all stopping times τ taking values in [0, ∞] with respect to the filtration $\{\mathcal{F}_t\}_{t\ge0}$, $\mathcal{F}_t$ being the natural enlargement of σ{X_s, 0 ≤ s ≤ t}.
To apply Theorem 2.1 to the problem (3.1), we introduce a sequence of optimal stopping problems in discrete time (cf. [Reference Shiryaev22, Chapter 3]). For ℓ = 1, 2, …, let $\mathcal{M}^{(\ell)}$ denote the class of all stopping times in $\mathcal{M}$ taking values in {n2^{−ℓ}: n = 0, 1, …, ∞} and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn21.gif?pub-status=live)
Note that, by the Markov property of X, V^{(ℓ)}(x) equals the supremum of 𝔼_x(e^{−qτ} g(X_τ)1{τ < ∞}) over the (smaller) class of all stopping times τ taking values in {n2^{−ℓ}: n = 0, 1, …, ∞} such that $\{\tau=n2^{-\ell}\}\in\sigma\{X_{i2^{-\ell}},\, i=0,1,\dots,n\}\subset\mathcal{F}_{n2^{-\ell}}$. So we can apply Theorem 2.1 to (3.2).
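Restricting to $\mathcal{M}^{(\ell)}$ amounts to rounding stopping times up to the dyadic grid of mesh 2^{−ℓ}, via τ^{(ℓ)} := 2^{−ℓ}(⌊2^ℓτ⌋ + 1); this device reappears in the proof of Lemma 3.2 below. A quick sketch of the rounding:

```python
import math

def discretize(tau, ell):
    """tau^(ell) = 2^-ell * (floor(2^ell * tau) + 1): the smallest dyadic grid
    point of mesh 2^-ell strictly greater than tau, so tau^(ell) is a stopping
    time for the grid filtration, and tau^(ell) decreases to tau as ell grows."""
    if math.isinf(tau):
        return math.inf
    return 2.0 ** -ell * (math.floor(2.0 ** ell * tau) + 1)

tau = 0.7
vals = [discretize(tau, ell) for ell in range(1, 6)]
print(vals)   # [1.0, 0.75, 0.75, 0.75, 0.71875] -- decreasing to tau from above
```

Because each grid refines the previous one, the rounded times form a decreasing sequence converging to τ from above, which is what drives the limit argument in Section 3.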
Let u (ℓ) be defined as in (2.2) with T x replaced by $T_x^{(\ell)}\,{:\!=}\,2^{-\ell}\inf\{n\ge1\colon X_{n{2}^{-\ell}}\ge x\}$, i.e.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU28.gif?pub-status=live)
For y ∈ ℝ ∪{–∞}, let $\tau_y\,{:\!=}\,\inf\{t\ge0\colon X_t\ge y\}\in\mathcal{M}$ and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU29.gif?pub-status=live)
By Theorem 2.1, if u (ℓ) < ∞ then $\tau_{u^{(\ell)}}^{(\ell)}=\tau^{(\ell)}(u^{(\ell)})$ is optimal for (3.2), i.e.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU30.gif?pub-status=live)
By Lemma 3.1 below, u (ℓ) is increasing in ℓ. Let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn22.gif?pub-status=live)
Remark 3.1
In Section 2 we used the notation V(x), u, and W for the random walk setting. In this section we use the same notation for the Lévy process setting. Moreover, we refer to the setting of the random walk {X n2−ℓ, n = 0, 1, … } by attaching the superscript (ℓ), e.g. V (ℓ)(x), u (ℓ), and $\tau_x^{(\ell)}$.
Theorem 3.1
Let q ≥ 0 and g: ℝ → [0, ∞) be nonconstant, increasing, logconcave, and right continuous. Let β := lim_{x→∞} h′(x−), where h(x) := log g(x). Define V(x), u, and W as in (3.1) and (3.3).
(i) If −∞ ≤ u < ∞ then τ_u is optimal, i.e. V(x) = 𝔼_x(e^{−qτ_u} g(X_{τ_u})1{τ_u < ∞}) for all x ∈ ℝ.
(ii) If u = ∞ then V(x) = lim_{y→∞} 𝔼_x(e^{−qτ_y} g(X_{τ_y})1{τ_y < ∞}) = e^{βx}W for x ∈ ℝ. If, in addition, W = ∞ then there exist (randomized) stopping times that yield an infinite expected (discounted) reward; if W < ∞ then there is no optimal stopping time.
To prove Theorem 3.1, we need the following lemmas, where g ≥ 0 is assumed to be nonconstant, increasing, logconcave, and right continuous. Some arguments are needed to deal with the case that g is not continuous at x 0 := inf{s: g(s) > 0}, which is not covered in [Reference Shiryaev22, Chapter 3], where g is required to satisfy $\mathbb{P}_x(\mathop{\underline{\lim}}_{t\to0+}g(X_t)\ge g(x))=1$ for all x.
Lemma 3.1
For x ∈ ℝ and ℓ = 1, 2, …, we have V (ℓ+1)(x) ≥ V (ℓ)(x) > 0 and u (ℓ+1) ≥ u (ℓ) ≥ x 0, where x 0 := inf{s: g(s) > 0}.
Proof. Since $\mathcal{M}^{(\ell)}\subset\mathcal{M}^{(\ell+1)}$, we have 0 < V (ℓ)(x) ≤ V (ℓ+1)(x). To show that u (ℓ+1) ≥ u (ℓ), it suffices to consider the case u (ℓ+1) < ∞ (possibly u (ℓ+1) = −∞). By Corollary 2.1(i),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU31.gif?pub-status=live)
implying that V (ℓ)(x) = g(x) for x ≥ u (ℓ+1). By Corollary 2.1(i) again, u (ℓ+1) ≥ u (ℓ).
Lemma 3.2
Suppose that g(x) > 0 for all x ∈ ℝ. Then
(i) V (∞)(x) := limℓ→∞ V (ℓ)(x) = V(x) for x ∈ ℝ;
(ii) V(x) > g(x) for x < u and V(x) = g(x) for x ≥ u;
(iii) V(x)/g(x) is decreasing in x.
Proof. Since g is logconcave, g(x) > 0 for all x ∈ ℝ implies that g is continuous.
(i) Since V(x) ≥ V (ℓ)(x), we have V(x) ≥ V (∞)(x). To show that V(x) ≤ V (∞)(x), let $\tau\in\mathcal{M}$ be any stopping time. For ℓ = 1, 2, …, define τ (ℓ) := 2−ℓ(⌊2ℓτ⌋+ 1), where ⌊x⌋ denotes the largest integer not exceeding x with ⌊∞⌋:= ∞. Clearly,
$\tau^{(\ell)}\in\mathcal{M}^{(\ell)}$ and τ (ℓ) ↘ τ as ℓ → ∞. It follows from the right continuity of {X t} and the continuity of g that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU32.gif?pub-status=live)
We have, by Fatou’s lemma,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU33.gif?pub-status=live)
Since $\tau\in\mathcal{M}$ is arbitrary, we have V(x) ≤ V (∞)(x).
(ii) For x < u, choose a (large) ℓ with x < u (ℓ) ≤ u. It follows from Corollary 2.1(i) that g(x) < V (ℓ)(x) ≤ V (∞)(x) = V(x). For x ≥ u, since u (ℓ) ≤ u ≤ x, we have, by Corollary 2.1(i), g(x) = V (ℓ)(x) for all ℓ, implying that g(x) = V (∞)(x) = V(x).
(iii) By Corollary 2.1(ii), V (ℓ)(x)/g(x) is decreasing in x. Since V (ℓ)(x) ↗ V(x) as ℓ → ∞, it follows that V(x)/g(x) is decreasing in x.
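The dyadic approximation used in the proof of part (i) is elementary enough to check directly: τ (ℓ) := 2−ℓ(⌊2ℓτ⌋ + 1) lies on the grid {n2−ℓ}, strictly exceeds τ, and decreases to τ as ℓ → ∞. A quick numerical illustration (the sample value τ = 0.3 is arbitrary):

```python
# Check that tau^(l) = 2^{-l} * (floor(2^l * tau) + 1) is a grid point
# strictly above tau, non-increasing in l, and within 2^{-l} of tau.
import math

def dyadic_round(tau, l):
    return (math.floor(2**l * tau) + 1) / 2**l

tau = 0.3
approx = [dyadic_round(tau, l) for l in range(1, 11)]
```

The sequence is non-increasing rather than strictly decreasing, since refining the grid may return the same point; this is why τ (ℓ) ↘ τ holds with weak monotonicity.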
Lemma 3.3
If −∞ < u < ∞ then V(x) = g(x) for x ≥ u.
Proof. Let h(x) := log g(x). Fix an x > u ( ≥ x 0 := inf{s: g(s) > 0}). If h′(x −) = 0 then g(x) = sup{g(y): y ∈ ℝ} ≥ V(x), implying that V(x) = g(x). Suppose that h′(x −) > 0. Let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU34.gif?pub-status=live)
Let g̃(y) := eh̃(y) > 0 for all y ∈ ℝ, which is larger than or equal to g(y), nonconstant, increasing, logconcave and continuous. Define
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU35.gif?pub-status=live)
Since g̃(y) = g(y) for all y ≥ x and u (ℓ) ≤ u < x, we have ũ (ℓ) ≤ x. Let ũ := limℓ→∞ ũ (ℓ) ≤ x. By Lemma 3.2(ii) applied to g̃,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU36.gif?pub-status=live)
so V(x) = g(x). We have shown that V(x) = g(x) for all x > u. Finally, V(u) = g(u) follows from the monotonicity of g and V and the right continuity of g.
Lemma 3.4
Suppose that −∞ < u < ∞. For x < u and $\tau\in\mathcal{M}$ with ℙx(τ ≥ τ u) = 1, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU37.gif?pub-status=live)
Proof. Since, on {τ u < ∞}, X τ u+s − X τ u (s ≥ 0) is independent of $\mathcal{F}_{\tau_u}$ and has the same law as X s − X 0, we have, on {τ u < ∞},
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU38.gif?pub-status=live)
where the equality follows from Lemma 3.3 (noting that X τ u ≥ u on {τ u < ∞}). Hence, 𝔼x(e−qτg(Xτ)1{τ < ∞}) ≤ 𝔼x(e−qτ ug(Xτu)1{τ u < ∞}), completing the proof.
Lemma 3.5
Suppose that −∞ < u < ∞. For x < u,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU39.gif?pub-status=live)
Proof. For ease of notation, write v := u (ℓ) and δ := u − u (ℓ) ≥ 0. Noting that, for x < u,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU40.gif?pub-status=live)
we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU41.gif?pub-status=live)
where the last inequality follows from Lemma 3.4 and the fact that $\mathbb {P}_x(\tau^{(\ell)}_u\ge\tau_u)=1$.
Lemma 3.6
Suppose that x 0 < u < ∞. Then V(x) is continuous everywhere and τu is optimal.
Proof. Fix v ∈ (x 0, u). Let h(x) := log g(x), and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU42.gif?pub-status=live)
and g̃(x) := eh̃(x) > 0, so that g̃(x) ≥ g(x) for x ∈ ℝ. Clearly, g̃(x) is nonconstant, increasing, logconcave, and continuous. Define ũ (ℓ), ũ, Ṽ (ℓ)(x), and Ṽ(x) in terms of g̃ in exactly the same way that u (ℓ), u, V (ℓ)(x), and V(x) are defined in terms of g. For (large) ℓ with v < u (ℓ), the fact that g̃(x) = g(x) for all x ≥ v yields ũ (ℓ) = u (ℓ), implying that ũ := limℓ→∞ ũ (ℓ) = limℓ→∞ u (ℓ) = u. Moreover, ũ (ℓ) = u (ℓ) > v and g̃(x) = g(x) for all x ≥ v together imply that Ṽ (ℓ)(x) = V (ℓ)(x) for x ∈ ℝ, which in turn implies that V(x) = Ṽ(x) for x ∈ ℝ, since
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU43.gif?pub-status=live)
where the first equality is due to Lemma 3.2(i) applied to g̃.
By Lemma 3.2(iii) (applied to g̃), Ṽ(x)/g̃(x) is decreasing in x, which implies that Ṽ(x −)/g̃(x −) ≥ Ṽ(x +)/g̃(x +) for x ∈ ℝ. Since g̃(x) = g̃(x −) = g̃(x +) > 0, we have Ṽ(x −) ≥ Ṽ(x +); since Ṽ is increasing, also Ṽ(x −) ≤ Ṽ(x +), so Ṽ(x −) = Ṽ(x +). Thus, Ṽ(x) ( = V(x)) is a continuous function.
To show the optimality of τ u, we need to prove
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn23.gif?pub-status=live)
By Lemma 3.3, V(x) = g(x) for x ≥ u, so that (3.4) holds for x ≥ u. For x < u( = ũ), we have, by Lemma 3.5 (applied to g̃),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU44.gif?pub-status=live)
It follows that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU45.gif?pub-status=live)
where the first equality is due to Lemma 3.2(i) applied to g̃. This proves (3.4).
Lemma 3.7
Suppose that −∞ < u < ∞ and τu is optimal. Then
(i) g(x) ≤ 𝔼x(e−qτ yg(Xτy)1{τ y < ∞}) for x < y ≤ u;
(ii) 𝔼x(e−qτ yg(Xτy)1{τ y < ∞}) ≤ 𝔼x(e−qτ zg(Xτz)1{τ z < ∞}) for x ≤ y < z ≤ u.
Proof. (i) The desired inequality holds trivially if g(x) = 0. Suppose that g(x) > 0. Since $\mathcal{L}(\tau_y,(u-y+X_{\tau_y})\mathbf{1}_{\{\tau_y<\infty\}}\mid X_0=x)=\mathcal{L}(\tau_u,X_{\tau_u}\mathbf{1}_{\{\tau_u<\infty\}}\mid X_0=x+u-y)$, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU46.gif?pub-status=live)
where the last inequality follows from the logconcavity of g.
(ii) Noting that τ z ≥ τ y a.s., on {τ y < τ z} = {τ y < ∞, X τ y < z}, by (i) we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU47.gif?pub-status=live)
from which the desired inequality in (ii) follows.
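The logconcavity of g enters Lemma 3.7 through a standard ratio property: for a logconcave g and a fixed shift c ≥ 0, the ratio g(y + c)/g(y) is decreasing in y. A numerical illustration with a hypothetical increasing logconcave reward of our own choosing, g(x) = exp(h(x)) with h(x) = x − e−x (so h″ < 0):

```python
# Ratio monotonicity for a logconcave g: g(y + c)/g(y) decreases in y.
import math

def g(x):
    # Hypothetical reward: h(x) = x - exp(-x) is concave and increasing.
    return math.exp(x - math.exp(-x))

def ratio(y, c):
    return g(y + c) / g(y)

c = 0.5
ratios = [ratio(y, c) for y in (-1.0, 0.0, 1.0, 2.0)]
```

Here the ratio equals exp(c + e−y(1 − e−c)), which is visibly decreasing in y; it is this monotone-ratio behavior that yields the last inequality in the proof of (i).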
Proof of Theorem 3.1(i). If u = −∞ then u (ℓ) = −∞ for ℓ = 1, 2, …, implying that 0 < V (ℓ)(x) = g(x) for all x ∈ ℝ and ℓ = 1, 2, …. It follows from Lemma 3.2(i) that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU48.gif?pub-status=live)
proving that τ u = τ −∞ is optimal.
If x 0 < u < ∞, τ u is optimal by Lemma 3.6. It remains to show that, for u = x 0 > −∞,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn24.gif?pub-status=live)
By Lemma 3.3, V(x) = g(x) for x ≥ u = x 0, so that (3.5) holds for x ≥ u = x 0. For x < u = x 0 and any stopping time $\tau\in\mathcal{M}$, we have $X_{\tau_{x_0}}\ge x_0=u$ a.s. on $\{\tau_{x_0}<\tau\}$, so that, on $\{\tau_{x_0}<\tau\}$,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn25.gif?pub-status=live)
It follows from (3.6) that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU49.gif?pub-status=live)
which together with g(X τ ) = 0 a.s. on {τ < τ x 0} implies that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU50.gif?pub-status=live)
Since $\tau\in\mathcal{M}$ is arbitrary, (3.5) follows. The proof is complete.
Proof of Theorem 3.1(ii). Assume that u = ∞. We claim that, for x ∈ℝ,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn26.gif?pub-status=live)
Consider an increasing sequence {b k} satisfying b 1 > x 0 and limk→∞ b k = ∞. Let g k(x) := g(x ∧ b k), which is nonconstant, increasing, logconcave, and right continuous. For ℓ ≥ 1, let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU51.gif?pub-status=live)
Let $u_k\,{:\!=}\,\lim_{\ell\to\infty}u_{k}^{(\ell)}\le b_k<\infty$. In other words, u k is defined in terms of g k in exactly the same way that u is defined in terms of g. Since u k < ∞, we have, by Theorem 3.1(i),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn27.gif?pub-status=live)
Thus, V k(x) = g k(x) for x ≥ u k and V k(x) > g k(x) for x < u k. Since g k is increasing in k, it is easily shown that both V k(x) and u k are increasing in k. Let V ∞(x) := limk→∞ V k(x) and u ∞ := limk→∞ u k. Clearly, V ∞(x) = V(x). Since $u^{(\ell)}_k\nearrow u^{(\ell)}$ as k → ∞ and
$u^{(\ell)}_k\nearrow u_k$ as ℓ → ∞, it follows that u := lim ℓ→∞ u (ℓ) = ∞ implies that u ∞ := limk→∞ u k = ∞. Incidentally, since V(x) ≥ V k(x) > g k(x) = g(x) for x < u k ≤ b k, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU52.gif?pub-status=live)
To prove (3.7), it suffices to show for x ≤ y 1 < y 2,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn28.gif?pub-status=live)
For large k with u k > y 2, applying Lemma 3.7 to g k yields
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn29.gif?pub-status=live)
By the monotone convergence theorem, the two sides of (3.10) converge to the corresponding sides of (3.9), respectively. This proves (3.9) and establishes the claim (3.7).
By (3.7), W := supy∈ℝ 𝔼(e−qτ yg(Xτy)1{τ y < ∞}) = limy→∞ 𝔼(e−qτ yg(Xτy)1{τ y < ∞}). Following the argument for (2.16) in the proof of Theorem 2.1(ii), we can show that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn30.gif?pub-status=live)
Furthermore, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU53.gif?pub-status=live)
implying by (3.11) that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU54.gif?pub-status=live)
If W = ∞, it is readily seen that there are randomized stopping times that yield an infinite expected (discounted) reward. Suppose that W < ∞. Following the argument for (2.17) in the proof of Theorem 2.1(ii), we can show that there is no optimal stopping time. The proof is complete.
4. On the principle of smooth fit for Lévy processes
In this section we investigate the principle of smooth fit for Lévy processes. Let X = {X t}t≥0 be a Lévy process with initial state X 0 = x ∈ ℝ, and assume that ℙ(X 1 > 0) > 0. For y ∈ ℝ, let τ y := inf{t ≥ 0: X t ≥ y} and τ y+ := inf{t ≥ 0: X t > y}.
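The distinction between the weak passage time τy = inf{t ≥ 0: Xt ≥ y} and the strict one τy+ = inf{t ≥ 0: Xt > y} matters throughout this section. A minimal discrete illustration (the path below is hypothetical and chosen only to make the two times differ):

```python
# Weak vs strict first passage over a level y on a discrete path.
def passage_time(path, y, strict=False):
    """First index n with path[n] > y (strict) or path[n] >= y (weak);
    None if the level is never reached."""
    for n, x in enumerate(path):
        if x > y or (not strict and x >= y):
            return n
    return None

path = [0, 1, 1, 2, 3]
weak = passage_time(path, 1)                 # first time at or above 1
strict = passage_time(path, 1, strict=True)  # first time strictly above 1
```

On this path the weak time is 1 while the strict time is 3: the walk sits at the level y = 1 before exceeding it, which is exactly the situation where τy and τy+ differ.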
Theorem 4.1
Let g: ℝ → [0, ∞) be nonconstant, increasing, logconcave, and right continuous. Define V(x) and u as in (3.1) and (3.3). Suppose that −∞ ≤ x 0 < u < ∞ and g is differentiable at u (i.e. g′(u −) = g′(u +) = g′(u)). If 0 is regular for (0, ∞) for X then V is differentiable at u, i.e. V′(u −) = V′(u +) ( = g′(u)).
Proof. Since V(u − ε) > g(u − ε) for ε > 0 and V(u) = g(u), we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn31.gif?pub-status=live)
Since
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU55.gif?pub-status=live)
we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn32.gif?pub-status=live)
By the concavity of h(x) : = log g(x), we have, on {τ ε < ∞},
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn33.gif?pub-status=live)
for some θ ∈ (0, 1) by the mean value theorem. It follows from (4.2) and (4.3) that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn34.gif?pub-status=live)
where the second-to-last equality follows from the fact that τ ε ↓ τ 0+ as ε ↓ 0 together with the right continuity of {X t} and the concavity of h, and the last equality follows from ℙ(τ 0+ = 0) = 1 (since 0 is regular for (0, ∞)). Combining (4.1) and (4.4) together with eh(u)h′(u +) = g′(u +) = g′(u) yields V′(u −) = g′(u) = V′(u +). The proof is complete.
Remark 4.1
For a Lévy process X with 0 regular for (0, ∞), Theorem 4.1 shows that the smooth fit principle holds if g is differentiable at u (the optimal stopping boundary). It is easy to show by example that the value function may fail to satisfy the smooth fit condition if g′(u −) ≠ g′(u +).
Theorem 4.2
Let g: ℝ → [0, ∞) be nonconstant, increasing, logconcave, and right continuous. Let h(x) : = log g(x). Define V(x) and u as in (3.1) and (3.3). Suppose that −∞ ≤ x 0 < u < ∞ and 0 is irregular for (0, ∞) for X. Then
(i) V′(u −) = 𝔼(e−qτ 0+g′((u + X τ 0+) −)1{τ 0+<∞});
(ii) V′(u −) = V′(u +) ( = g′(u +)) if and only if
(4.5)$$\begin{equation}\label{a1} h'((u+\zeta)\kern1.5pt{-})=h'(u\kern1.5pt{+}), \end{equation}$$
where ζ : = inf{x: ℙ(X τ 0+ > x | τ 0+ < ∞) = 0}, the essential supremum of the (conditional) distribution
$\mathcal{L}(X_{\tau_{0+}}\mid X_0=0,\tau_{0+}<\infty)$, and where h′((u + ζ) −) = limx→∞ h′(x −) if ζ = ∞;
(iii) V(x) = g(u)eh′(u+)(x−u) for x < u, provided that condition (4.5) holds.
To prove Theorem 4.2, we need the following lemmas.
Lemma 4.1
Let g: ℝ → [0, ∞) be nonconstant, increasing, logconcave, and right continuous. Suppose that 0 is irregular for (0, ∞) for X and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn36.gif?pub-status=live)
Then
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU56.gif?pub-status=live)
Proof. The following proof is similar to those of Lemmas 2.3 and 2.4. Note that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU57.gif?pub-status=live)
is decreasing in z ∈ (x 0, ∞), implying by (4.6) that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn37.gif?pub-status=live)
Let J 0 := τ x+ and, for n ≥ 1,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU58.gif?pub-status=live)
Note that $\mathcal{L}(J_{n+1}-J_n\mid
X_0=x,J_n<\infty)=\mathcal{L}(\tau_{0+}\mid X_0=0)$ and that ℙ(τ 0+ > 0) = 1. It follows that J n → ∞ a.s. Fix y > x. Since the Lévy process X either satisfies
$\overline{\lim}_{t\to\infty}X_t=+\infty$ a.s. or limt→∞ X t = −∞ a.s., we have J n = τ y < ∞ for some n in the former case and J n = ∞ for large n in the latter case. In either case, L n := min{J n, τ y} = τ y for large n. As a consequence, e−qL ng(X L n)1{L n<∞} → e−qτ yg(X τ y)1{τ y<∞} a.s., so that, by Fatou’s lemma,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn38.gif?pub-status=live)
By (4.7), it is readily shown that 𝔼x (e−qL ng(X L n)1{L n<∞}) is decreasing in n, which together with (4.8) implies that 𝔼x (e−qτ yg(X τ y)1{τ y<∞}) ≤ 𝔼x (e−qL 0g(X L 0)1{L 0<∞}) ≤ g(x). The proof is complete.
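The epochs J0 < J1 < ⋯ in the proof above are the successive times at which the path sets a new strict record above the starting level x: J0 is the first time the path exceeds x, and each Jn+1 is the first subsequent time the path exceeds XJn. A purely illustrative sketch on a hypothetical discrete path:

```python
# Strict ascending ladder epochs: times at which the path sets a new
# record strictly above the level x (and then above each new record).
def ladder_epochs(path, x):
    epochs = []
    level = x
    for n, v in enumerate(path):
        if v > level:
            epochs.append(n)
            level = v
    return epochs

path = [0, -1, 1, 0, 2, 3]
epochs = ladder_epochs(path, 0)
```

On this path the records above level 0 are set at times 2, 4, and 5, with strictly increasing ladder heights 1, 2, 3; in the Lévy setting of Lemma 4.1 the increments Jn+1 − Jn are i.i.d. copies of τ0+ on {Jn < ∞}.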
Let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn39.gif?pub-status=live)
Lemma 4.2
Let g: ℝ → [0, ∞) be nonconstant, increasing, logconcave, and right continuous. Define u and u′ as in (3.3) and (4.9). If 0 is irregular for (0, ∞) for X then u′= u.
Proof. We first show that u ≥ u′. It suffices to consider the case u < ∞. By Theorem 3.1, we have 𝔼x (e−qτ x+g(X τ x+)1{τ x+<∞}) ≤ V(x +) for all x ≥ u, implying by (4.9) that u′ ≤ u. To show that u ≤ u′, suppose to the contrary that u > u′. Let x be such that u > x > u′ ( ≥ x 0). If u < ∞, it follows from (4.9) and Lemma 4.1 that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU59.gif?pub-status=live)
a contradiction. If u = ∞, it follows from (4.9) and Lemma 4.1 that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU60.gif?pub-status=live)
which implies that, by Theorem 3.1(ii),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU61.gif?pub-status=live)
a contradiction. This proves u ≤ u′ and completes the proof.
Proof of Theorem 4.2. (i) We have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn40.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn41.gif?pub-status=live)
where (4.11) is due to the definition of u′ and u′ = u (by Lemma 4.2). Since both g(x) and 𝔼x (e−qτ x+g(X τ x+)1{τ x+<∞}) ( = 𝔼 (e−qτ 0+g(x + X τ 0+)1{τ 0+<∞})) are continuous and increasing in x > x 0, (4.10) and (4.11) together imply that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn42.gif?pub-status=live)
which in turn implies that both τ u and τ u+ are optimal stopping times.
For x < u,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU62.gif?pub-status=live)
On {τ x+ < ∞}, since X s+τ x+ − X τ x+ is independent of $\mathcal{F}_{\tau_{x+}}$ and has the same law as X s − X 0, and since τ u+ is optimal, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU63.gif?pub-status=live)
It follows that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU64.gif?pub-status=live)
Taking x = u − ε for ε > 0 yields
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn43.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU65.gif?pub-status=live)
where
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU66.gif?pub-status=live)
To show that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn44.gif?pub-status=live)
we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn45.gif?pub-status=live)
for some θ ∈ (0, 1), where the inequality is due to the concavity of h(x) := log g(x) and the last equality follows from the mean value theorem applied to the function ex. So, for any (fixed) v ∈ (x 0, u) and for 0 < ε < u − v, on {τ 0+ < ∞},
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU67.gif?pub-status=live)
Since on {τ 0+ < ∞}
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU68.gif?pub-status=live)
we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn46.gif?pub-status=live)
by the dominated convergence theorem together with the fact (cf. (4.12)) that 𝔼 (e−qτ 0+g(u + X τ 0+)1{τ 0+<∞}) < ∞. Instead of the upper bound in (4.15), we can use the lower bound
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU69.gif?pub-status=live)
to derive in a similar way
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU70.gif?pub-status=live)
which together with (4.16) establishes (4.14).
To show that limε↓0 B(ε) = 0, note that V(x) = g(x) for x ≥ u and g(x) < V(x) ≤ g(u) for x < u. So
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU71.gif?pub-status=live)
implying that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn47.gif?pub-status=live)
(since 0 is irregular for (0, ∞) and X does not creep upwards). Combining (4.14) and (4.17) yields
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU72.gif?pub-status=live)
(ii) We have, by (i),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU73.gif?pub-status=live)
where the inequality is due to the concavity of h(x). This inequality is an equality if and only if ℙ(h′((u + X τ 0+) −) = h′(u +) | τ 0+ < ∞) = 1 or, equivalently, h′ ((u + ζ) −) = h′(u +).
(iii) Condition (4.5) implies that h′(x −) = h′(x +) = h′(u +) for u < x < u + ζ, which in turn implies that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn48.gif?pub-status=live)
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn49.gif?pub-status=live)
Let g̃ (x) := g(u)eh′(u +)(x−u) for x ∈ ℝ, so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn50.gif?pub-status=live)
Note that, for all y ∈ ℝ,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn51.gif?pub-status=live)
where the third equality follows from (4.19). To show that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn52.gif?pub-status=live)
we treat the two cases h′(u +) = 0 and h′(u +) > 0 separately. If h′(u +) = 0 then g(u) = max{g(x): x ∈ ℝ}. In order for (4.12) to hold, necessarily q = 0 and ℙ(τ 0+ < ∞) = 1, which implies that V(x) = g(u) for all x, proving (4.22). Now assume that h′(u +) > 0. Note that g̃ is nonconstant, continuous, increasing, and logconcave. It is readily shown by (4.21) that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU74.gif?pub-status=live)
and that, for any a ∈ [ − ∞, ∞), τ a is optimal for the reward function g̃. So we have, for x < u,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU75.gif?pub-status=live)
where the second equality is due to (4.20). The proof is complete.
Remark 4.2
Theorems 4.1 and 4.2 assume that x 0 < u < ∞, which makes it unnecessary to require the right continuity of g at x 0. Let g(x) ≥ 0 be increasing and logconcave. We say g(x) is degenerate if log g(x) is linear for x in some interval. Note that g is degenerate if condition (4.5) holds. For a nondegenerate g(x), q ≥ 0, and Lévy process X, suppose that the optimal threshold u (given in (3.3)) satisfies x 0 < u < ∞ and g is differentiable at u. Then, by Theorems 4.1 and 4.2, the principle of smooth fit holds if and only if 0 is regular for (0, ∞) for X. If g is degenerate, the principle of smooth fit may hold even when 0 is irregular for (0, ∞) for X. An example involving the Lévy process X t = −t + N t, t ≥ 0, where N t is a Poisson process, can be found in [Reference Lin and Yao14].
Remark 4.3
When smooth fit fails, it is of interest to find conditions under which the principle of continuous fit holds. Again suppose that g is increasing and logconcave. If the optimal threshold satisfies ∞ > u > x 0 := inf{s: g(s) > 0} then, by Lemma 3.6, the value function V(x) is continuous everywhere, so that there is continuous fit at u. Suppose that u = x 0 and g is discontinuous at x 0, i.e. g(x 0 −) = 0 < g(x 0) = g(x 0 +). For a Lévy process X for which 0 is regular for (0, ∞), it is clear that V is continuous at u ( = x 0), i.e. V(x 0 −) = V(x 0) = g(x 0). We now consider a Lévy process X for which 0 is irregular for (0, ∞). Since u = x 0, we have, by Lemma 4.2 and (4.9),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn53.gif?pub-status=live)
It follows from u = x 0 that V(x 0) = g(x 0) and V(x 0 −) = 𝔼 (e−qτ 0+g(x 0 + X τ 0+)1{τ 0+<∞}). This proves that there is continuous fit (i.e. V(x 0) = V(x 0 −)) if and only if the inequality in (4.23) is an equality.
5. Concluding remarks
The optimal stopping problem (1.1) involves the reward function g and the underlying process {X t} as well as the discount rate q ≥ 0. Motivated by well-known results in the literature, we explored the close connection between increasing and logconcave reward functions and optimal stopping times of threshold type. Specifically, in this paper, g is assumed to be nonnegative, nonconstant, increasing, logconcave, and right continuous, while {X t} is either a random walk in discrete time or a Lévy process in continuous time. We showed that there exists a unique threshold u ∈ [ − ∞, ∞] such that
• the value function V(x) > g(x) for x < u and V(x) = g(x) for x ≥ u;
• if −∞ ≤ u < ∞ then τ u = inf{t ≥ 0: X t ≥ u} is optimal;
• if u = ∞ then the stopping region {x: V(x) = g(x)} = [u, ∞) = ∅ and no optimal stopping time exists, in the sense that, for any initial state x and any stopping time τ, one can always find another stopping time τ′ such that ℙx(τ′ ≥ τ) = 1 and
$$\begin{equation} \mathbb {E}_x\big(e^{-q\tau'}g(X_{\tau'})\mid\mathcal{F}_{\tau}\big)> e^{-q\tau}g(X_{\tau})\quad\mbox{a.s.\ on }\{\tau<\infty\}.\end{equation}$$
The work of Alili and Kyprianou [Reference Alili and Kyprianou1] made use of a fluctuation identity to highlight the role played by the regularity of the paths of {X t} in the solution of the American put optimal stopping problem. Building on that work, we investigated the principle of smooth fit more generally when g is increasing and logconcave. We obtained necessary and sufficient conditions for the smooth fit principle to hold. We also briefly discussed the principle of continuous fit when smooth fit fails.
After the present paper had been submitted, a referee brought to our attention the article by Christensen and Irle [Reference Christensen and Irle5]. (Note that both [Reference Christensen and Irle5] and the original version [Reference Lin and Yao14] of the present paper were posted on arXiv in October 2017.) Christensen and Irle proposed a general method for finding the optimal threshold for discrete-time optimal stopping problems with general underlying Markov processes {X n}n≥0. By considering an auxiliary problem involving the associated ascending ladder process, they introduced a threshold α ∗ as a natural candidate for the optimal threshold. Assuming that the optimal threshold is greater than x 0 := inf{s: g(s) > 0} and that, for every x ∈ ℝ,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn54.gif?pub-status=live)
they obtained a sufficient condition for α ∗ to be optimal. Furthermore, they showed that the sufficient condition is satisfied when g is increasing and logconcave and {X n}n≥0 is a random walk, in which case α ∗ coincides with u in (2.2). They also showed that the sufficient condition is satisfied in some well-known problems beyond the random walk setting. On the other hand, in the present paper, for the discrete-time case, we considered a general random walk {X n}n≥0 without imposing condition (5.1) or any other condition. Theorem 2.1 completely characterizes the solution of (1.1) in terms of u, where the full range of u is −∞ ≤ x 0 ≤ u ≤ ∞. The two extreme cases u = x 0 and u = ∞ required careful analysis especially when g is discontinuous at x 0. Furthermore, we treated the continuous-time case in some detail with a thorough discussion of the principle of smooth fit. When {X t}t≥0 is a Lévy process with 0 irregular for (0, ∞), the optimal threshold given in (4.9) is the continuous-time counterpart of u in (2.2), which is of independent interest.
Appendix A. Proofs of Lemmas 2.3–2.6
Proof of Lemma 2.3. (i) For a ∈ [ − ∞, y), let J 0 := T a and, for n ≥ 1,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU77.gif?pub-status=live)
Here, starting at X T a = X J 0 (if T a < ∞), the (finite) J 1, J 2, … are the weak ascending ladder epochs and X J 1, X J 2, … are the corresponding weak ascending ladder heights. Let L n := min{J n, T y} for n ≥ 0. It is well known (cf. Theorem 8.2.5 of [Reference Chung7]) that the random walk {X n} either satisfies $\overline{\lim}_{n\to\infty}X_n=+\infty$ a.s. or limn→∞ X n = −∞ a.s. If
$\overline{\lim}_{n\to\infty}X_n=+\infty$ a.s. then a.s. T y < ∞ and 0 < J 0 < J 1 < J 2 < · · · are all finite, so that L n = min{J n, T y} = T y for large n. If limn→∞ X n = −∞ a.s. then a.s. there exists a finite n′ ≥ 0 such that J n = ∞ for all n ≥ n′, implying that L n = min{J n, T y} = T y for all n ≥ n′. Thus, in either case, we have L n = T y for large n a.s. As a consequence,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU78.gif?pub-status=live)
which together with $e^{-qL_n}g(X_{L_n})\mathbf{1}_{\{L_n<\infty\}}\le\max\{g(y),\,e^{-qT_y}g(X_{T_y})\mathbf{1}_{\{T_y<\infty\}}\}$ implies that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn55.gif?pub-status=live)
We now prove that, for n ≥ 1,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn56.gif?pub-status=live)
Note that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn57.gif?pub-status=live)
For (integer) ℓ < ∞ and x′ < y, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU79.gif?pub-status=live)
so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU80.gif?pub-status=live)
where the first inequality follows from (2.6). So on A n,y := {L n−1 < ∞, X Ln−1 < y}, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn58.gif?pub-status=live)
It is easily seen that on Ω\A n,y,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn59.gif?pub-status=live)
By (A.3)–(A.5), (A.2) follows. Since T a = J 0 = min{J 0, T y} = L 0, we have, by (A.1) and (A.2),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU81.gif?pub-status=live)
as n → ∞. This proves the desired inequality in Lemma 2.3(i).
(ii) For x < y, the desired inequality in Lemma 2.3(ii) is a consequence of (i) and g(x) ≤ 𝔼x (e−qT xg(XTx)1{T x < ∞}). The proof is complete.
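The ladder-epoch decomposition used in the proof above can be illustrated with a short Python sketch. This is a simplified toy version (our own): we take J 0 = 0 rather than J 0 = T a, and the sample path is handcrafted, but it shows the weak ascending ladder epochs and the stabilization L n = min{J n, T y} = T y.

```python
def weak_ladder_epochs(path):
    """Weak ascending ladder epochs/heights of a path (simplified: J_0 = 0):
    record every n with X_n >= the last recorded ladder height (weak: >=)."""
    epochs, heights = [0], [path[0]]
    for n in range(1, len(path)):
        if path[n] >= heights[-1]:
            epochs.append(n)
            heights.append(path[n])
    return epochs, heights

def first_passage(path, y):
    """T_y = inf{n >= 0 : X_n >= y}; len(path) stands in for infinity."""
    for n, v in enumerate(path):
        if v >= y:
            return n
    return len(path)

path = [0.0, 1.0, -1.0, 2.0, 5.0]   # toy sample path, X_0 = 0
J, H = weak_ladder_epochs(path)      # J = [0, 1, 3, 4], H = [0.0, 1.0, 2.0, 5.0]
Ty = first_passage(path, 3.0)        # T_y = 4 for y = 3
L = [min(j, Ty) for j in J]          # L_n = min(J_n, T_y); equals T_y for large n
```

Note how the ladder heights coincide with the running maximum sampled at the ladder epochs, and L n locks onto T y once a ladder epoch passes the first-passage time, exactly as in the argument for (A.1).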
Proof of Lemma 2.4. Part (i) can be established along the lines of the proof of Lemma 2.3(i) with minor changes. Part (ii) follows immediately.
Proof of Lemma 2.5. The special (trivial) case that q = 0 and 𝔼(ξ) ≥ 0 can be easily treated. We now consider the general case q ≥ 0 and assume that 𝔼(ξ) < 0 if q = 0. Let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn60.gif?pub-status=live)
which is positive for q > 0 and for q = 0 with 𝔼(ξ) < 0.
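For concreteness, when the increments are Gaussian the constant α of (A.6), which satisfies e−qϕ(α) = 1 (as used below at (A.13)), is available in closed form, since the moment generating function is ϕ(λ) = exp(μλ + σ²λ²/2). A minimal sketch; the Gaussian assumption and the parameter values are ours.

```python
import math

def alpha_gaussian(q, mu, sigma):
    """Positive root of exp(-q) * E[exp(a * xi)] = 1 for xi ~ N(mu, sigma^2),
    i.e. of mu*a + (sigma^2 / 2) * a^2 = q  (q > 0, or q = 0 with mu < 0)."""
    return (-mu + math.sqrt(mu * mu + 2.0 * q * sigma * sigma)) / (sigma * sigma)

a = alpha_gaussian(0.05, -0.2, 1.0)
# sanity check: e^{-q} * phi(a) should equal 1 up to rounding
check = math.exp(-0.05) * math.exp(-0.2 * a + 0.5 * a * a)
```

For q = 0 and μ < 0 the formula reduces to α = −2μ/σ², the familiar exponent of the Cramér root for a Gaussian walk with negative drift.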
Suppose that −∞ < u < ∞ and let $V^*(x)\,{:\!=}\,\mathbb{E}_x(e^{-q\tau_u}g(X_{\tau_u})\mathbf{1}_{\{\tau_u<\infty\}})$, x ∈ ℝ. Then V *(x) = g(x) for x ≥ u. We need to prove that V *(x) = V(x) for all x. Since V(x) ≥ V *(x), it remains to show that V *(x) ≥ V(x). By Lemma 2.2, it suffices to prove that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn61.gif?pub-status=live)
and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn62.gif?pub-status=live)
Recall that u ≥ x 0 = inf{s: g(s) > 0}, implying that g(y) > 0 for all y > u and 𝔼u(e−qT ug(XTu)1{T u < ∞}) > 0 (since ℙ(ξ > 0) > 0). By (2.2), we have 𝔼y(e−qT yg(XTy)1{T y < ∞})/g(y) ≤ 1 for y > u, which yields
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn63.gif?pub-status=live)
where the equality follows from g(u) = g(u +) and the fact that 𝔼y(e−qT yg(XTy)1{T y<∞}), which equals 𝔼(e−qT 0g(y + X T 0)1{T 0<∞}), decreases to 𝔼u(e−qT ug(XTu)1{T u<∞}) as y ↓ u. Thus, g(u) > 0. Noting that g(X τ u) ≥ g(u) on {τ u < ∞}, we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU82.gif?pub-status=live)
If u = x 0, then V*(x) > 0 = g(x) for x < u. If u > x 0, then 𝔼u(e−qT ug(XTu)1{T u<∞})/g(u) = 1 by (2.7), which together with Lemma 2.3(ii) implies that g(x) ≤ 𝔼x(e−qτ ug(Xτu)1{τ u < ∞}) = V*(x) for x < u. This proves (A.7).
To prove (A.8), we have, by Lemma 2.1, for x < u,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU83.gif?pub-status=live)
By (A.9), we also have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU84.gif?pub-status=live)
implying that V*(u) ≥ 𝔼(e−qV*(u + ξ)).
It remains to show (A.8) for x > u ( > −∞). Letting h(x) := log g(x), fix an (arbitrary) x > u (≥ x 0) with c := h′(x −). We have g(x) > 0 and 0 ≤ c < ∞. It follows from the concavity of h that h(y) ≤ h(x) + c(y − x) for all y ∈ ℝ. For each w ∈ [0, ∞), define
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU85.gif?pub-status=live)
It is readily seen that h w(y) ≥ h(y) for all y, h w(x) = h(x) and h w( · ) is increasing, concave, and continuous. By the concavity of h, we have, for 0 ≤ w 1 < w 2 and for all y ∈ ℝ
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn64.gif?pub-status=live)
so that h w( · ) is continuous and increasing in w. Note that h ∞(y) := limw→∞ h w(y) = h(x) + c(y − x) for all y ∈ ℝ.
For w ∈ [0, ∞] and y ∈ ℝ, let g w(y) := eh w(y) and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn65.gif?pub-status=live)
Then g w(y) = eh w(y) ≥ eh(y) = g(y), so
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn66.gif?pub-status=live)
Note that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn67.gif?pub-status=live)
If c ≤ α then we have, by the convexity of ϕ and (A.6), e−qϕ(c) ≤ e−qϕ(α) = 1, implying by (A.13) that 𝔼(e−qg ∞(y + ξ)) ≤ g ∞(y) for y ∈ ℝ. Thus, {e−qng ∞(X n)}n≥0 (with X 0 = x) is a (positive) supermartingale, so that 𝔼x(e−qT ug ∞(XTu)1{T u<∞}) ≤ g ∞(x) = g(x) = V*(x).
Therefore,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU86.gif?pub-status=live)
proving (A.8) for x > u > −∞ with c ≤ α. (Recall that c = h′(x −) and α is given in (A.6).)
We now show (A.8) for x > u > −∞ with c > α > 0. For w ∈ [0, ∞], let
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn68.gif?pub-status=live)
Since g 0(y) = g(y) for all y ≥ x, and since x > u, we have, by (2.2),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU87.gif?pub-status=live)
In addition, we have, by (A.13), 𝔼(e−qg ∞(y + ξ))/g ∞(y) = e−q ϕ(c) > e−q ϕ(α) = 1, implying that {e−qng ∞(X n)}n≥0 (with X 0 = x) is a submartingale, so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn69.gif?pub-status=live)
where $T^{n}_x\,{:\!=}\,\min\{T_x,n\}$. Recall that 𝔼(ξ) < 0 if q = 0, in which case limn→∞ X n = −∞ a.s. and limn→∞ g ∞(X n) = 0 a.s. Since, for all n,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU88.gif?pub-status=live)
and since $e^{-qT^{n}_x}g_\infty(X_{T^{n}_x})\to e^{-qT_x}g_\infty(X_{T_x})\mathbf{1}_{\{T_x<\infty\}}$ a.s. as n → ∞, we have, by (A.14) and (A.15),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU89.gif?pub-status=live)
Furthermore, by (A.10), for 0 ≤ w 1 < w 2 < ∞,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU90.gif?pub-status=live)
implying that f (w) increases continuously to f (∞) as w → ∞. It follows from f (0) ≤ 1 ≤ f (∞) that f (w′) = 1 for some w′ ∈ [0, ∞]. Noting that V *(x) = g(x) = g w′(x), we have
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU91.gif?pub-status=live)
where the second inequality follows by Lemma 2.3(i) applied to g w′ (which is increasing and logconcave). This proves (A.8) for x > u > −∞ with c > α > 0, and establishes the optimality of τ u for the case −∞ < u < ∞.
To prove the optimality of τ u for u = −∞, we approximate g(x) by a sequence of increasing and logconcave functions g n(x) := g(x)1[−n,∞)(x), n = 1, 2, …. Let u n be defined as in (2.2) with g replaced by g n. Since u = −∞, we have u n = −n, so that τ u n = τ −n is optimal for the optimal stopping problem (1.1) with g replaced by g n. Since g n↗g as n → ∞, it is easily seen that V(x) = g(x) for all x, implying that τ −∞ is optimal.
Finally, to show that V(x) > g(x) for x < u and V(x) = g(x) for x ≥ u, all that remains to be done is to prove that V(x) > g(x) for x < u. For x < u with g(x) > 0, we have, by (2.2), g(x) < 𝔼x(e−qT xg(XTx)1{T x < ∞}) ≤ V(x). For x < u with g(x) = 0, g(x) < V(x) holds trivially. The proof of Lemma 2.5 is complete.
Proof of Lemma 2.6. Note that, for x ≥ x′, 𝔼x(e–qT xg(XTx)1{T x < ∞}) ≤ supy∈ℝ g(y) = c = g(x), implying that u ≤ x′. For k = 1, 2, …, let $\{X_n^{(k)}\}_{n\ge0}$ be a random walk generated by (truncated) increments
$\xi_i^{(k)}\,{:\!=}\,\min\{\xi_i,k\}$, i = 1, 2, …. For x ∈ ℝ, define
$V_k(x)\,{:\!=}\,\sup_{\tau\in\mathcal{M}^{(k)}}\mathbb {E}_x\big(e^{-q\tau}g(X_\tau^{(k)})\mathbf{1}_{\{\tau<\infty\}}\big)$, where
$\mathcal{M}^{(k)}\subset\mathcal{M}$ is the class of stopping times with values in [0, ∞] with respect to the filtration
$\{\mathcal{F}_n^{(k)}\}_{n\ge0}$, with
$\mathcal{F}_n^{(k)}\,{:\!=}\,\sigma\{\xi_1^{(k)},\dots,\xi_n^{(k)}\}\subset\sigma\{\xi_1,\dots,\xi_n\}\,{:\!=}\,\mathcal{F}_n$. Since {ξ n} is i.i.d. and
$\{X^{(k)}_n\}$ is Markov, it is not difficult to see that the value function V k(x) cannot be increased by taking stopping times in
$\mathcal{M}$, i.e.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU92.gif?pub-status=live)
For any $\tau\in\mathcal{M}$, we have
$e^{-q\tau}g(X^{(k)}_{\tau})\mathbf{1}_{\{\tau<\infty\}}\nearrow e^{-q\tau}g(X_\tau)\mathbf{1}_{\{\tau<\infty\}}$ a.s. as k → ∞, so that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU93.gif?pub-status=live)
It follows that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn70.gif?pub-status=live)
Let $T_x^{(k)}\,{:\!=}\,\inf\{n\ge1\colon X_n^{(k)}\ge x\}$ and
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU94.gif?pub-status=live)
Since $\mathbb{E}(e^{\lambda\xi^{(k)}})<\infty$ for all λ ≥ 0, we have, by Lemma 2.5,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU95.gif?pub-status=live)
where $\tau^{(k)}_{u_k}\,{:\!=}\,\inf\{n\ge0\colon X^{(k)}_n\ge u_k\}$. Since V k+1(x) ≥ V k(x) for all x, we have g(u k+1) = V k+1 (u k+1) ≥ V k(u k+1) ≥ g(u k+1). So g(u k+1) = V k(u k+1), implying that u k+1 ≥ u k.
Let u ∞ = limk→∞ u k. We prove the optimality of τ u in two steps. We show in step 1 that τ u ∞ is optimal and in step 2 that u ∞ = u.
Step 1. We show that τ u ∞ is optimal, i.e.
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn71.gif?pub-status=live)
For any x ≥ u ∞, we have x ≥ u k and V k(x) = g(x) for all k, which implies by (A.16) that V(x) = limk→∞ V k(x) = g(x), establishing (A.17) for x ≥ u ∞. It remains to prove (A.17) for x < u ∞. It suffices to show that, for x < u ∞,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn72.gif?pub-status=live)
We argue below that with X 0 = x < u ∞, as k → ∞,
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn73.gif?pub-status=live)
which together with the bounded convergence theorem implies (A.18). We now show that (A.19) holds a.s. on {τ u ∞ < ∞} and on {τ u ∞ = ∞} separately. Let Δ := maxj≤τ u ∞ ξ j < ∞ on {τ u ∞ < ∞}. Then, for k > Δ, $\smash{X^{(k)}_i=x+\sum_{j=1}^{i}\xi^{(k)}_j=x+\sum_{j=1}^{i}\xi_j=X_i}$ for all i ≤ τ u ∞. Letting Δ′ := max0≤i<τ u ∞ X i (< u ∞), choose a sufficiently large k 0 such that k 0 > Δ and u k 0 > Δ′. Thus,
$\smash{\tau^{(k)}_{u_k}=\tau_{u_\infty}}$ and
$X^{(k)}_{\tau^{(k)}_{u_k}}=X_{\tau_{u_\infty}}$ for all k ≥ k 0, implying that both sides of (A.19) are equal for all k ≥ k 0. So (A.19) holds a.s. on {τ u ∞ < ∞}.
To show (A.19) holds a.s. on {τ u ∞ = ∞}, note that the random walk {X 0, X 1, … } either satisfies $\overline{\lim}_{n\to\infty}X_n=+\infty$ a.s. or limn→∞ X n = −∞ a.s. If
$\overline{\lim}_{n\to\infty}X_n=+\infty$ a.s. then τ u ∞ < ∞ a.s., so that trivially (A.19) holds a.s. on {τ u ∞ = ∞}. Now suppose that limn→∞ X n = −∞ a.s. Then on $\{\tau_{u_\infty}=\infty,\ \lim_{n\to\infty}X_n=-\infty\}$ there is an n 0 ∈ ℕ such that X n < u ∞ for 0 ≤ n ≤ n 0 and X n < u ∞ − 1 for n > n 0. Let 0 < ε < 1 be such that ε < u ∞ − max{X 0, X 1, …, X n 0}, so that X n < u ∞ − ε for all n. Choose k 1 such that u k > u ∞ − ε for all k ≥ k 1. For k ≥ k 1,
$X_{n}^{(k)}\le {{X}_{n}}<{{u}_{\infty }}-\varepsilon <{{u}_{k}}$ for all n, so that
$\tau^{(k)}_{u_k}=\infty$ for k ≥ k 1. Hence, on $\{\tau_{u_\infty}=\infty,\ \lim_{n\to\infty}X_n=-\infty\}$, both sides of (A.19) equal 0 for k ≥ k 1. It follows that (A.19) holds a.s. on {τ u ∞ = ∞}. This completes step 1.
Step 2. We now prove that u ∞ = u. If u ∞ = −∞ then x 0 = −∞, i.e. g(x) > 0 for all x. By the optimality of τ u ∞ = τ −∞ = 0, g(x) = V(x) ≥ 𝔼x(e−qT xg(XTx)1{T x<∞}) for all x, implying that u = −∞.
If u > u ∞ > −∞, we have, by (A.17), g(u ∞) = V(u ∞) > 0 and, by (2.2),
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqnU96.gif?pub-status=live)
a contradiction. So u ≤ u ∞. Now suppose that u < u ∞. Let x be such that u < x < u ∞. By (2.2), 𝔼x(e–qT xg(XTx)1{T x<∞})/g(x) ≤ 1, which by Lemma 2.4(ii), implies that
![](https://static.cambridge.org/binary/version/id/urn:cambridge.org:id:binary:20190722083340888-0296:S0001867819000041:S0001867819000041_eqn74.gif?pub-status=live)
Since x < u ∞, we can choose k such that x < u k. By Lemma 2.5 applied to the random walk with increments $\xi_{i}^{(k)}$ for which V k is the value function and u k is the optimal threshold, we have g(x) < V k(x) ≤ V(x), contradicting (A.20). This proves u = u ∞.
Finally, it follows easily from the optimality of τ u that V(x) > g(x) for x < u and V(x) = g(x) for x ≥ u. The proof is complete.
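The truncation device in the proof of Lemma 2.6 is easy to visualize: replacing ξ i by ξ i(k) = min{ξ i, k} produces paths that increase pointwise to the original path as k grows, and agree with it exactly once k exceeds the largest increment encountered (the 'k > Δ' step in the proof of (A.19)). A minimal Python sketch with a toy increment sequence of our own choosing:

```python
def truncated_path(x, increments, k):
    """Path of the truncated walk X^{(k)}: start at x, increments min(xi, k)."""
    path = [x]
    for xi in increments:
        path.append(path[-1] + min(xi, k))
    return path

incs = [0.5, 2.3, -1.0, 3.1]         # toy increments; the maximum is 3.1
p1 = truncated_path(0.0, incs, 1.0)  # heavily truncated
p2 = truncated_path(0.0, incs, 2.0)  # less truncated
p4 = truncated_path(0.0, incs, 4.0)  # k = 4 > 3.1: truncation never bites
```

Pointwise, p1 ≤ p2 ≤ p4, and p4 coincides with the untruncated path, mirroring the monotone convergence used to pass from V k to V in (A.16).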
Acknowledgements
We wish to dedicate this work to Professor Herman Chernoff on the occasion of his 95th birthday (July 1, 2018). We are grateful to the referee for a number of helpful comments. We gratefully acknowledge support from the Ministry of Science and Technology of Taiwan, ROC.