
A unifying approach to non-minimal quasi-stationary distributions for one-dimensional diffusions

Published online by Cambridge University Press:  08 August 2022

Kosuke Yamato*
Affiliation:
Kyoto University
*Postal address: Department of Mathematics, Graduate School of Science, Kyoto University, Kyoto, Japan. Email address: yamato.kosuke.43r@st.kyoto-u.ac.jp

Abstract

We study convergence to non-minimal quasi-stationary distributions for one-dimensional diffusions. We give a method for reducing the convergence to the tail behavior of the lifetime via a property we call the first hitting uniqueness. We apply the results to Kummer diffusions with negative drift and give a class of initial distributions converging to each non-minimal quasi-stationary distribution.

Type
Original Article
Copyright
© The Author(s), 2022. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let us consider a one-dimensional diffusion $X = (X_t)_{t \geq 0}$ on $I = [0, b) \ \text{or} \ [0, b] $ $ (0 < b \leq \infty)$ killed nowhere and stopped upon hitting 0, and let $T_0$ denote its first hitting time of 0. A probability distribution $\nu$ on $I \setminus \{0\}$ is called a quasi-stationary distribution of X when the distribution of $X_t$ with the initial distribution $\nu$ conditioned to be away from 0 until time t is time-invariant, that is, the following holds:

\begin{equation*} {\mathbb{P}}_\nu[X_t \in {\mathrm{d}} x \mid T_0 > t] = \nu({\mathrm{d}} x) \quad (t > 0),\end{equation*}

where ${\mathbb{P}}_{\nu}$ denotes the underlying probability measure of X with initial distribution $\nu$. Given a quasi-stationary distribution $\nu$, we study sufficient conditions on an initial distribution $\mu$ such that

(1.1) \begin{equation}\mu_{t}({\mathrm{d}} x) \,:\!=\, {\mathbb{P}}_{\mu}[X_t \in {\mathrm{d}} x \mid T_0 > t] \xrightarrow[t \to \infty]{} \nu({\mathrm{d}} x).\end{equation}

Here and hereafter all convergence of probability distributions is in the sense of weak convergence. In the case where $\mu$ is compactly supported, the convergence (1.1) has been studied by many authors (e.g. [8], [10], [14], and [17]), and it has been shown that (1.1) holds under very general conditions and that the limit distribution $\nu$ does not depend on the choice of the compactly supported $\mu$. The limit $\nu$ is sometimes called the Yaglom limit or the minimal quasi-stationary distribution. On the other hand, for some diffusions there exist infinitely many quasi-stationary distributions. Although it is natural to ask for which initial distributions (1.1) holds for each quasi-stationary distribution $\nu$, very few studies have considered this question for non-minimal quasi-stationary distributions. The author knows of only two such papers, Lladser and San Martín [15] and Martínez, Picco, and San Martín [19], whose results we generalize in the present paper.

The present paper has two main results. One is Theorem 3.1, which gives a method for reducing the convergence (1.1) to the tail behavior of $T_0$ . The other is Theorem 5.1, which applies Theorem 3.1 to Kummer diffusions with negative drift and derives concrete sufficient conditions for convergence (1.1).

A Kummer diffusion $Y^{(0)} = Y^{(\alpha,\beta)}$ $ (\alpha > 0, \beta \in {\mathbb{R}})$ is a diffusion on $[0, \infty)$ stopped upon hitting 0 whose local generator ${\mathcal{L}}^{(0)} = {\mathcal{L}}^{(\alpha,\beta)}$ on $(0,\infty)$ is

(1.2) \begin{equation} {\mathcal{L}}^{(0)} = {\mathcal{L}}^{(\alpha,\beta)} = x\dfrac{{\mathrm{d}}^2}{{\mathrm{d}} x^2} + ({-}\alpha + 1 - \beta x) \dfrac{{\mathrm{d}}}{{\mathrm{d}} x}.\end{equation}

Note that the process $Y^{(0)} = Y^{(\alpha,\beta)}$ is also called a radial Ornstein–Uhlenbeck process in some of the literature (see e.g. [2] and [7]). We write

(1.3) \begin{equation} g_\gamma(x) \,:\!=\, {\mathbb{P}}_{x}\bigl[{\mathrm{e}}^{-\gamma T_0^{(0)}}\bigr] = \int_{0}^{\infty}\,{\mathrm{e}}^{-\gamma t} {\mathbb{P}}_{x}\bigl[T^{(0)}_0 \in {\mathrm{d}} t\bigr] \quad (\gamma \geq 0),\end{equation}

which is the Laplace transform of the first hitting time of 0 for $Y^{(0)} = Y^{(\alpha,\beta)}$ . Then $g_\gamma$ is a $\gamma$ -eigenfunction for ${\mathcal{L}}^{(0)}$ , i.e. ${\mathcal{L}}^{(0)} g_\gamma = \gamma g_\gamma$ (see e.g. [22, p. 292]). We define a Kummer diffusion with negative drift $Y^{(\gamma)} = Y^{(\alpha,\beta,\gamma)} $ $ (\gamma \geq 0)$ as the h-transform of $Y^{(\alpha,\beta)}$ by the function $g_\gamma$ , that is, the process $Y^{(\alpha,\beta,\gamma)}$ is a diffusion on $[0, \infty)$ stopped at 0 whose local generator on $(0,\infty)$ is

\begin{equation*} {\mathcal{L}}^{(\gamma)} = {\mathcal{L}}^{(\alpha,\beta,\gamma)} = \dfrac{1}{g_\gamma}({\mathcal{L}}^{(0)} - \gamma) g_\gamma.\end{equation*}
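Since ${\mathcal{L}}^{(0)}$ has the form $a(x)\frac{{\mathrm{d}}^2}{{\mathrm{d}} x^2} + c(x)\frac{{\mathrm{d}}}{{\mathrm{d}} x}$ with $a(x) = x$ and $c(x) = -\alpha + 1 - \beta x$, the h-transform changes only the drift; indeed, for smooth f the standard computation gives

\begin{equation*} \dfrac{1}{g_\gamma}({\mathcal{L}}^{(0)} - \gamma)(g_\gamma f) = a f'' + \biggl( c + 2a\dfrac{g^{\prime}_\gamma}{g_\gamma} \biggr) f' + \dfrac{f}{g_\gamma}\bigl({\mathcal{L}}^{(0)}g_\gamma - \gamma g_\gamma\bigr) = a f'' + \biggl( c + 2a\dfrac{g^{\prime}_\gamma}{g_\gamma} \biggr) f',\end{equation*}

since ${\mathcal{L}}^{(0)} g_\gamma = \gamma g_\gamma$. Hence ${\mathcal{L}}^{(\gamma)} = x\frac{{\mathrm{d}}^2}{{\mathrm{d}} x^2} + \bigl({-}\alpha + 1 - \beta x + 2x \frac{g^{\prime}_\gamma}{g_\gamma}\bigr)\frac{{\mathrm{d}}}{{\mathrm{d}} x}$.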

If we write

\begin{equation*} \tilde{Y}^{(\alpha,\beta,\gamma)} \,:\!=\, \sqrt{2Y^{(\alpha,\beta,\gamma)}},\end{equation*}

then the local generator $\tilde{{\mathcal{L}}}^{(\alpha,\beta,\gamma)}$ of $\tilde{Y}^{(\alpha,\beta,\gamma)}$ on $(0,\infty)$ is given by

(1.4) \begin{equation} \tilde{{\mathcal{L}}}^{(\alpha,\beta,\gamma)} = \dfrac{1}{2}\dfrac{{\mathrm{d}}^2}{{\mathrm{d}} x^2} + \biggl( \dfrac{1-2\alpha}{2x} - \dfrac{\beta x}{2} + \dfrac{\tilde{g}^{\prime}_\gamma}{\tilde{g}_\gamma} \biggr)\dfrac{{\mathrm{d}}}{{\mathrm{d}} x},\end{equation}

where $\tilde{g}_\gamma(x) = \tilde{{\mathbb{P}}}_x\bigl[{\mathrm{e}}^{-\gamma \tilde{T}_0}\bigr]$ denotes the Laplace transform of the first hitting time of 0 for $\tilde{Y}^{(0)}$ starting from x. When $\alpha = 1/2$ and $\gamma = 0$ , the process $\tilde{Y}^{(1/2,\beta,0)}$ is the Ornstein–Uhlenbeck process, and when $\beta = 0$ , the process $\tilde{Y}^{(\alpha,0,\gamma)}$ is the Bessel process with negative drift (see e.g. [7]).
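Formula (1.4) can be checked by the change of variables $\varphi(x) = \sqrt{2x}$: writing $\tilde{x} = \varphi(x)$, so that $\varphi'(x) = 1/\tilde{x}$ and $x\varphi''(x) = -1/(2\tilde{x})$, and expanding the h-transform as ${\mathcal{L}}^{(\gamma)} = x\frac{{\mathrm{d}}^2}{{\mathrm{d}} x^2} + \bigl({-}\alpha+1-\beta x + 2x \frac{g^{\prime}_\gamma}{g_\gamma}\bigr)\frac{{\mathrm{d}}}{{\mathrm{d}} x}$, we obtain

\begin{equation*} {\mathcal{L}}^{(\gamma)}(f\circ\varphi)(x) = x\bigl(f''(\tilde{x})\varphi'(x)^2 + f'(\tilde{x})\varphi''(x)\bigr) + \biggl({-}\alpha+1-\beta x + 2x\dfrac{g^{\prime}_\gamma(x)}{g_\gamma(x)}\biggr) f'(\tilde{x})\varphi'(x) = \dfrac{1}{2}f''(\tilde{x}) + \biggl(\dfrac{1-2\alpha}{2\tilde{x}} - \dfrac{\beta\tilde{x}}{2} + \dfrac{\tilde{g}^{\prime}_\gamma(\tilde{x})}{\tilde{g}_\gamma(\tilde{x})}\biggr)f'(\tilde{x}),\end{equation*}

using $\tilde{g}_\gamma(\tilde{x}) = g_\gamma(\tilde{x}^2/2)$, so that $\tilde{g}^{\prime}_\gamma/\tilde{g}_\gamma = \tilde{x}\, g^{\prime}_\gamma/g_\gamma$; this is exactly (1.4).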

Previous studies. We briefly review several previous studies of quasi-stationary distributions for one-dimensional diffusions.

A first remarkable result on quasi-stationary distributions for one-dimensional diffusions was given by Mandl [17]. He treated the case where the right boundary is natural and gave a sufficient condition for convergence to the minimal quasi-stationary distribution. His condition has been weakened by many authors, e.g. Collet, Martínez, and San Martín [5], Hening and Kolb [8], Kolb and Steinsaltz [10], and Martínez and San Martín [18]. Under rather weak assumptions it is known that every compactly supported initial distribution converges to the minimal quasi-stationary distribution.

The case where the right boundary is entrance has also been widely studied. Cattiaux et al. [3] and Littin [14] showed that in this case there exists a unique quasi-stationary distribution and all compactly supported initial distributions are attracted to the unique quasi-stationary distribution. Takeda [23] generalized their results to symmetric Markov processes with the tightness property.

Let us come back to the case where the right boundary is natural, in which non-minimal quasi-stationary distributions may exist. In the present paper, we let $L^1(I,\nu)$ denote the set of integrable functions on I with respect to the measure $\nu$, and we write $f(x) \sim g(x)$ $(x \to \infty)$ when $\lim_{x \to \infty}f(x) / g(x) = 1$.

First, Martínez, Picco, and San Martín [19] studied Brownian motion with negative drift and showed convergence to non-minimal quasi-stationary distributions under assumptions on the tail behavior of the initial distribution.

Theorem 1.1. ([19, Theorem 1.1].) Let $(B_t)_{t \geq 0}$ be a standard Brownian motion, let $\alpha > 0$, and consider the process

\begin{equation*}X_t = B_t - \alpha t. \end{equation*}

For an initial distribution $\mu$ on $(0,\infty)$ , assume $\mu({\mathrm{d}} x) = \rho(x)\,{\mathrm{d}} x$ for some $\rho \in L^1((0,\infty), {\mathrm{d}} x)$ satisfying

\begin{equation*}\log \rho(x) \sim -(\alpha - \delta) x \quad (x \to \infty) \end{equation*}

for some $\delta \in (0,\alpha)$ . Then we have

\begin{equation*}{\mathbb{P}}_{\mu}[X_t \in {\mathrm{d}} x \mid T_0 > t ] \xrightarrow[t \to \infty]{} \nu_{\lambda}({\mathrm{d}} x), \end{equation*}

with

\begin{equation*}\lambda = (\alpha^2 - \delta^2)/2 \quad \text{and} \quad\nu_{\lambda}({\mathrm{d}} x) = C_\lambda \,{\mathrm{e}}^{-\alpha x}\sinh \bigl(x\sqrt{\alpha^2 - 2\lambda}\bigr)\,{\mathrm{d}} x \end{equation*}

for the normalizing constant $C_\lambda$ .
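The objects in Theorem 1.1 can be verified symbolically. The following sketch (in SymPy; the function $u$ below plays the role of the eigenfunction $\psi_{-\lambda}$ up to a constant multiple, with $k$ standing for $\sqrt{\alpha^2 - 2\lambda}$) checks the eigen-equation $\frac{1}{2}u'' - \alpha u' = -\lambda u$ and computes the total mass of ${\mathrm{e}}^{-\alpha x}\sinh(\delta x)\,{\mathrm{d}} x$ for the sample values $\alpha = 1$, $\delta = 1/2$:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
al, k = sp.symbols('alpha k', positive=True)  # k plays the role of sqrt(alpha**2 - 2*lambda)
lam = (al**2 - k**2) / 2

# Candidate eigenfunction vanishing at 0 for (1/2)u'' - alpha*u' = -lambda*u
u = sp.exp(al * x) * sp.sinh(k * x)
residual = sp.simplify(u.diff(x, 2) / 2 - al * u.diff(x) + lam * u)
print(residual)  # 0

# Total mass of exp(-alpha*x)*sinh(delta*x) dx for alpha = 1, delta = 1/2,
# rewriting sinh via exponentials so the integral evaluates in closed form
density = (sp.exp(-x) * sp.sinh(x / 2)).rewrite(sp.exp).expand()
mass = sp.integrate(density, (x, 0, sp.oo))
print(mass)  # 2/3, so the normalizing constant C_lambda is 3/2 in this case
```

Note that with $\lambda = (\alpha^2 - \delta^2)/2$ we have $\sqrt{\alpha^2 - 2\lambda} = \delta$, which is why the density in the theorem has the tail $\sinh(\delta x)\,{\mathrm{e}}^{-\alpha x}$.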

Secondly, Lladser and San Martín [15] studied Ornstein–Uhlenbeck processes.

Theorem 1.2. ([15, Theorem 1.1].) Let $\alpha > 0$. Let X be the solution of the following SDE:

\begin{equation*}{\mathrm{d}} X_t = {\mathrm{d}} B_t - \alpha X_t \,{\mathrm{d}} t, \end{equation*}

where B is a standard Brownian motion. For an initial distribution $\mu$ on $(0,\infty)$ , assume $\mu({\mathrm{d}} x) = \rho(x)\,{\mathrm{d}} x$ for some $\rho \in L^1((0,\infty), {\mathrm{d}} x)$ satisfying

\begin{equation*}\rho(x) \sim x^{-2 + \delta}\ell(x) \quad (x \to \infty) \end{equation*}

for some $\delta \in (0,1)$ and some function $\ell$ slowly varying at $\infty$. Then we have

\begin{equation*}{\mathbb{P}}_{\mu}[X_t \in {\mathrm{d}} x \mid T_0 > t ] \xrightarrow[t \to \infty]{} \nu_{\lambda}({\mathrm{d}} x) \end{equation*}

with

\begin{equation*}\lambda = \alpha(1 - \delta) \quad \text{and} \quad\nu_{\lambda}({\mathrm{d}} x) = C_\lambda \psi_{-\lambda}(x)\,{\mathrm{e}}^{-\alpha x^2}\,{\mathrm{d}} x \end{equation*}

for the normalizing constant $C_\lambda$, where $u = \psi_{-\lambda}$ denotes the unique solution of the following differential equation:

\begin{equation*}\dfrac{1}{2}\dfrac{{\mathrm{d}}^2}{{\mathrm{d}} x^2}u - \alpha x\dfrac{{\mathrm{d}}}{{\mathrm{d}} x}u = -\lambda u, \quad\lim_{x \to 0+}u(x) = 0,\quad\lim_{x \to 0+}\dfrac{{\mathrm{d}}}{{\mathrm{d}} x}u(x) = 1 \quad (x \in (0,\infty)). \end{equation*}

We will give a generalization of these two results in Theorem 5.1.

Outline of the paper. The remainder of the present paper is organized as follows. In Section 2 we will recall several known results on one-dimensional diffusions, the quasi-stationary distributions and the spectral theory for second-order ordinary differential operators. In Section 3 we will show one of our main results giving a general condition for convergence to quasi-stationary distributions. In Section 4 we will give the hitting density of Kummer diffusions with negative drift. In Section 5 we will show the second main result, which gives a sufficient condition for convergence to non-minimal quasi-stationary distributions for Kummer diffusions with negative drift.

2. Preliminaries

2.1. Feller’s canonical form of second-order differential operators

Let $(X,{\mathbb{P}}_x)_{x \in I}$ be a one-dimensional diffusion on $I = [0, b)$ or [0, b] $(0 < b \leq \infty)$ , that is, the process X is a time-homogeneous strong Markov process on I which has a continuous path up to its lifetime. Throughout this paper, we always assume

(2.1) \begin{equation} {\mathbb{P}}_x[T_y < \infty] > 0 \quad (x \in I \setminus \{0\}, \ y \in [0, b)),\end{equation}

where $T_y$ denotes the first hitting time of y, and assume the point 0 is a trap:

\begin{equation*} X_t = 0 \quad \text{for} \ t \geq T_0.\end{equation*}

Let us recall Feller’s classification of the boundaries (see e.g. Itô [9]). There exist a Radon measure m on $I \setminus \{0\}$ with full support and a strictly increasing continuous function s on (0, b) such that the local generator ${\mathcal{L}}$ on (0, b) is represented by

\begin{equation*} {\mathcal{L}} = \dfrac{{\mathrm{d}}}{{\mathrm{d}} m}\dfrac{{\mathrm{d}}}{{\mathrm{d}} s}.\end{equation*}

We call m the speed measure and s the scale function of X and we say X is a $\frac{{\mathrm{d}}}{{\mathrm{d}} m}\frac{{\mathrm{d}}}{{\mathrm{d}} s}$ -diffusion. Let $c = 0$ or b and take $d \in (0, b)$ . Set

\begin{equation*} I(c) = \int_{c}^{d}\,{\mathrm{d}} s(x)\int_{c}^{x}\,{\mathrm{d}} m(y), \quad J(c) = \int_{c}^{d}\,{\mathrm{d}} m(x)\int_{c}^{x}\,{\mathrm{d}} s(y).\end{equation*}

The boundary c is classified as follows:

\begin{align*} \text{The boundary is} \quad \begin{cases}\text{regular} &\text{when}\ I(c) < \infty, \ J(c) < \infty, \\\text{exit} &\text{when} \ I(c) = \infty, \ J(c) < \infty, \\\text{entrance} &\text{when} \ I(c) < \infty, \ J(c) = \infty, \\\text{natural} &\text{when} \ I(c) = \infty, \ J(c) = \infty. \end{cases}\end{align*}

Since ${\mathbb{P}}_{x}[T_0 < \infty] > 0$ for every $x > 0$ , the boundary 0 is necessarily regular or exit, equivalently $J(0) < \infty$ . Note that in this case $s(0)\,:\!=\, \lim_{x \to 0+}s(x) > -\infty$ holds. We also assume that the boundary b is not exit and that the boundary b is reflecting when it is regular.

Let us consider a diffusion on I whose local generator ${\mathcal{L}}$ on (0, b) is

\begin{equation*} {\mathcal{L}} = a(x)\dfrac{{\mathrm{d}}^2}{{\mathrm{d}} x^2} + c(x) \dfrac{{\mathrm{d}}}{{\mathrm{d}} x} \quad (x \in (0, b))\end{equation*}

for functions a and c. Assume $a(x) > 0 $ $(x \in (0, b))$ . Then ${\mathcal{L}} = \frac{{\mathrm{d}}}{{\mathrm{d}} m}\frac{{\mathrm{d}}}{{\mathrm{d}} s}$ , where

\begin{equation*} {\mathrm{d}} m(x) = \dfrac{1}{a(x)}\exp \biggl(\int_{d}^{x}\dfrac{c(y)}{a(y)}\,{\mathrm{d}} y \biggr)\,{\mathrm{d}} x, \quad {\mathrm{d}} s(x) = \exp \biggl({-}\int_{d}^{x}\dfrac{c(y)}{a(y)}\,{\mathrm{d}} y\biggr)\,{\mathrm{d}} x\end{equation*}

for arbitrary given $d \in (0, b)$ .
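As an illustration of these formulas (a sketch in SymPy, taking as an example the Ornstein–Uhlenbeck generator of Theorem 1.2, i.e. $a(x) = 1/2$ and $c(x) = -\alpha x$):

```python
import sympy as sp

# Speed and scale densities for the Ornstein-Uhlenbeck generator of Theorem 1.2,
# (1/2) d^2/dx^2 - alpha*x d/dx, i.e. a(x) = 1/2 and c(x) = -alpha*x.
x, y, d, al = sp.symbols('x y d alpha', positive=True)

expo = sp.integrate(-al * y / sp.Rational(1, 2), (y, d, x))  # int_d^x (c/a) dy
m_density = 2 * sp.exp(expo)    # dm/dx = exp(int c/a) / a
s_density = sp.exp(-expo)       # ds/dx = exp(-int c/a)

# Up to the constant factor exp(alpha*d**2) (the pair (m, s) is only determined
# up to reciprocal constants), dm = 2 e^{-alpha x^2} dx and ds = e^{alpha x^2} dx:
print(sp.simplify(m_density.subs(d, 0)))  # 2*exp(-alpha*x**2)
print(sp.simplify(s_density.subs(d, 0)))  # exp(alpha*x**2)

# Consistency check: dm * ds = dx^2 / a(x) = 2 dx^2, independent of the base point d
assert sp.simplify(m_density * s_density - 2) == 0
```

The product identity in the last line reflects that changing the base point d rescales m and s by reciprocal constants without changing the operator $\frac{{\mathrm{d}}}{{\mathrm{d}} m}\frac{{\mathrm{d}}}{{\mathrm{d}} s}$.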

2.2. Quasi-stationary distributions

Let us summarize known results on quasi-stationary distributions for one-dimensional diffusions and give a necessary and sufficient condition for the existence of quasi-stationary distributions. Let X be a $\frac{{\mathrm{d}}}{{\mathrm{d}} m}\frac{{\mathrm{d}}}{{\mathrm{d}} s}$ -diffusion on $I = [0, b)$ or [0, b] $(0 < b \leq \infty)$ . We define a function $u = \psi_{\lambda}$ as the unique solution of the following equation:

(2.2) \begin{equation} \dfrac{{\mathrm{d}}}{{\mathrm{d}} m}\dfrac{{\mathrm{d}}}{{\mathrm{d}} s}u(x) = \lambda u(x), \quad \lim_{x \to 0+}u(x) = 0,\quad \lim_{x \to 0+}\dfrac{{\mathrm{d}}}{{\mathrm{d}} s}u(x) = 1 \quad (x \in (0, b), \lambda \in {\mathbb{R}}).\end{equation}

Note that from the assumption that the boundary 0 is regular or exit, the function $\psi_{\lambda}$ always exists. The operator $L = -\frac{{\mathrm{d}}}{{\mathrm{d}} m}\frac{{\mathrm{d}}}{{\mathrm{d}} s}$ defines a non-negative definite self-adjoint operator on

\begin{equation*}L^2(I, {\mathrm{d}} m) \,:\!=\, \biggl\{ f\,:\, I \to {\mathbb{R}} \mid \int_{I}|f|^2\,{\mathrm{d}} m < \infty \biggr\}.\end{equation*}

Here we assume the Dirichlet boundary condition at 0 and the Neumann boundary condition at b if the boundary b is regular. We denote the infimum of the spectrum of L by $\lambda_0 \geq 0$ .

Let us consider the case where the boundary b is not natural. It is then known that there is a unique quasi-stationary distribution (Takeda [23] showed the corresponding result for general Markov processes with the tightness property).

Proposition 2.1. (See e.g. [14, Lemma 2.2, Theorem 4.1].) Assume the boundary b is not natural. Then we have

\begin{equation*}\lambda_0 > 0 \end{equation*}

and the function $\psi_{-\lambda_0}$ is strictly positive on $I \setminus \{0\}$ and integrable with respect to ${\mathrm{d}} m$ , and there is a unique quasi-stationary distribution given by

\begin{equation*}\nu_{\lambda_0}({\mathrm{d}} x) = \lambda_0 \psi_{-\lambda_0}(x)\,{\mathrm{d}} m(x), \quad {\mathbb{P}}_{\nu_{\lambda_0}}[T_0 \in {\mathrm{d}} t] = \lambda_0\,{\mathrm{e}}^{-\lambda_0 t}\,{\mathrm{d}} t. \end{equation*}

Moreover, for every probability distribution $\mu$ on $(0, b)$ with compact support, we obtain

\begin{equation*}\mu_{t} \xrightarrow[t \to \infty]{} \nu_{\lambda_0}. \end{equation*}

We now assume the boundary b is natural. Then we have

(2.3) \begin{equation} {\mathbb{P}}_{x}[T_b < \infty] = 0 \quad (x \in (0, b))\end{equation}

and

\begin{equation*} \dfrac{s(x) - s(0)}{s(M) - s(0)} = {\mathbb{P}}_{x}[T_M < T_0] \quad (0 < x < M < b)\end{equation*}

(see e.g. Itô [9]). Letting $M \to b$, we obtain from (2.3)

\begin{equation*} \dfrac{s(x) - s(0)}{s(b) - s(0)} = {\mathbb{P}}_x[T_0 = \infty].\end{equation*}

Hence it follows that

\begin{equation*} {\mathbb{P}}_x[T_0 < \infty] = 1 \quad \text{for some / any} \ x > 0 \quad \Leftrightarrow \quad s(b) = \infty.\end{equation*}
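For example, for the Brownian motion with drift $X_t = B_t - \alpha t$ $(\alpha > 0)$ of Theorem 1.1, we have $a(x) = 1/2$ and $c(x) = -\alpha$, so a scale function is

\begin{equation*} s(x) = \int_{0}^{x}{\mathrm{e}}^{2\alpha y}\,{\mathrm{d}} y = \dfrac{{\mathrm{e}}^{2\alpha x} - 1}{2\alpha},\end{equation*}

which satisfies $s(\infty) = \infty$; hence ${\mathbb{P}}_x[T_0 < \infty] = 1$ for every $x > 0$, in accordance with the criterion above.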

If $\nu$ is a quasi-stationary distribution, then ${\mathbb{P}}_{\nu}[T_0 \in {\mathrm{d}} t]$ is an exponential distribution, since ${\mathbb{P}}_{\nu}[T_0 > t+s \mid T_0 > t] = {\mathbb{P}}_{\nu}[X_{t+s} > 0 \mid T_0 > t] = {\mathbb{P}}_{\nu}[X_s > 0] = {\mathbb{P}}_{\nu}[T_0 > s]$. By (2.1) we have ${\mathbb{P}}_{\nu}[T_0 = \infty] < 1$; since the memoryless identity forces ${\mathbb{P}}_{\nu}[T_0 = \infty] \in \{0, 1\}$, it follows that ${\mathbb{P}}_{\nu}[T_0 = \infty] = 0$, which implies $s(b) = \infty$. We recall the following useful properties of the function $\psi_{\lambda}$.

Proposition 2.2. ([6, Lemma 6.18].) Suppose the boundary b is natural and $s(b) = \infty$ . Then for $\lambda > 0$ the following hold.

  (i) For $0 < \lambda \leq \lambda_0$, the function $\psi_{-\lambda}$ is strictly positive on $I \setminus \{0\}$ and

    \begin{equation*} 1 = \lambda \int_{0}^{b}\psi_{-\lambda}(x)\,{\mathrm{d}} m(x).\end{equation*}
  (ii) For $\lambda > \lambda_0$, the function $\psi_{-\lambda}$ changes sign on I.
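As a concrete illustration of (i) and (ii) (a sketch in SymPy; the speed measure ${\mathrm{d}} m = 2{\mathrm{e}}^{-\alpha x^2}\,{\mathrm{d}} x$ and scale ${\mathrm{d}} s = {\mathrm{e}}^{\alpha x^2}\,{\mathrm{d}} x$ for the Ornstein–Uhlenbeck process of Theorem 1.2 follow from the formulas of Section 2.1), one checks that $u(x) = x$ is a strictly positive solution of the eigen-equation with $\lambda = \alpha$ satisfying the normalization in (i); by (ii), this forces $\alpha \leq \lambda_0$ for that process:

```python
import sympy as sp

x, al = sp.symbols('x alpha', positive=True)

# Ornstein-Uhlenbeck process of Theorem 1.2: eigen-equation
# (1/2)u'' - alpha*x*u' = -lambda*u, with dm = 2*exp(-alpha*x**2) dx.
u = x  # candidate psi_{-lambda} for lambda = alpha: u(0) = 0 and (d/ds)u(0+) = 1

residual = sp.simplify(u.diff(x, 2) / 2 - al * x * u.diff(x) + al * u)
print(residual)  # 0

# Proposition 2.2(i) normalization: 1 = lambda * int_0^b psi_{-lambda} dm
mass = al * sp.integrate(u * 2 * sp.exp(-al * x**2), (x, 0, sp.oo))
print(mass)  # 1
```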

We now state, without proof, a necessary and sufficient condition for the existence of non-minimal quasi-stationary distributions.

Theorem 2.1. ([6, Theorem 6.34], [12, Theorem 3, Appendix I].) Suppose the boundary b is natural. Then a non-minimal quasi-stationary distribution exists if and only if

(2.4) \begin{equation}\lambda_0 > 0 \quad \text{and} \quad s(b) = \infty. \end{equation}

This condition is equivalent to

\begin{equation*}m(d,b) < \infty \quad \text{for some} \ d \in (0, b) \quad \text{and} \quad \limsup_{x \to b}s(x)m(x,b) < \infty. \end{equation*}

In this case a probability measure $\nu$ is a quasi-stationary distribution if and only if

\begin{equation*}\nu({\mathrm{d}} x) = \lambda \psi_{-\lambda}(x)\,{\mathrm{d}} m(x) \,=\!:\, \nu_\lambda({\mathrm{d}} x), \quad {\mathbb{P}}_{\nu_{\lambda}}[T_0 \in {\mathrm{d}} t] = \lambda \,{\mathrm{e}}^{-\lambda t}\,{\mathrm{d}} t \quad \text{for some} \ 0 < \lambda \leq \lambda_0 . \end{equation*}

Here we note that although [6] only dealt with the case where the boundary 0 is regular, the proof also works in the case where the boundary 0 is exit.

For probability distributions on (0, b), we introduce a partial order. For $\mu_1$ , $\mu_2 \in {\mathcal{P}}(0,b)$ , we define $\mu_1 \preceq \mu_2$ by

\begin{equation*} \mu_2(0,x] \leq \mu_1(0,x] \quad (0 < x < b).\end{equation*}

This order restricts to a total order on the set of quasi-stationary distributions and, as the following proposition shows, the distribution $\nu_{\lambda_0}$ is the minimal element. This is why we call it the minimal quasi-stationary distribution.

Proposition 2.3. Suppose the boundary b is natural and (2.4) holds. Then we have

\begin{equation*}\nu_{\lambda} \preceq \nu_{\lambda^{\prime}} \quad (0 < \lambda^{\prime} \leq \lambda \leq \lambda_0). \end{equation*}

In particular, the distribution $\nu_{\lambda_0}$ is the minimal one in this order.

Proof. From (2.2) we have

\begin{equation*}\psi_{-\lambda}(x) = s(x) - \lambda \int_{0}^{x}\,{\mathrm{d}} s(y)\int_{0}^{y}\psi_{-\lambda}(z)\,{\mathrm{d}} m(z) \quad (x > 0, \lambda \in {\mathbb{R}}). \end{equation*}

Hence it follows that

(2.5) \begin{equation}\nu_\lambda(0,x] = \lambda\int_{0}^{x}\psi_{-\lambda}(y)\,{\mathrm{d}} m(y) = 1 - \psi_{-\lambda}^+(x) \quad (x > 0, 0 < \lambda \leq \lambda_0), \end{equation}

where $\psi_{-\lambda}^+(x)$ is the right-derivative of $\psi_{-\lambda}$ with respect to the scale function:

\begin{equation*}\psi_{-\lambda}^+(x) \,:\!=\, \lim_{h \to 0+}\dfrac{\psi_{-\lambda}(x + h) - \psi_{-\lambda}(x)}{s(x+h) - s(x)}.\end{equation*}

Let $0 < \lambda^{\prime} \leq \lambda \leq \lambda_0$ . From (2.5) we have

\begin{equation*}\psi_{-\lambda}^+(x) \leq \psi_{-\lambda^{\prime}}^+(x) \quad (x > 0) \end{equation*}

by a similar argument to [6, Lemma 6.11], which yields $\nu_{\lambda} \preceq \nu_{\lambda^{\prime}}$ .

2.3. Spectral theory for second-order differential operators

Let us briefly review several results on the spectral theory of second-order differential operators. For the details, see e.g. Coddington and Levinson [4] and Kotani [11].

Set $I = (0, b) $ $(0 < b \leq \infty)$ . Let ${\mathrm{d}} m$ be a Radon measure on I with full support and let $s\,:\, I \to ({-}\infty,\infty)$ be a strictly increasing continuous function. We assume that the boundary 0 is regular or exit, that is,

\begin{equation*}\int_{0}^{d}\,{\mathrm{d}} m(x)\int_{0}^{x}\,{\mathrm{d}} s(y) < \infty\quad\text{for some $0 <d < b$,}\end{equation*}

and assume the boundary b is natural, that is,

\begin{equation*}\int_{d}^{b}\,{\mathrm{d}} m(x)\int_{x}^{b}\,{\mathrm{d}} s(y) = \infty\quad\text{and}\quad\int_{d}^{b}\,{\mathrm{d}} s(x)\int_{x}^{b}\,{\mathrm{d}} m(y) = \infty\quad \text{for some $0 <d < b$.}\end{equation*}

Let $u = \psi_{\lambda}$ be defined by (2.2). Set

\begin{equation*} g_\lambda(x) = \psi_{\lambda}(x)\int_{x}^{b}\dfrac{{\mathrm{d}} s(y)}{\psi_{\lambda}(y)^2} \quad (\lambda \geq 0).\end{equation*}

Then the function $u = g_\lambda$ is the unique non-increasing solution of

\begin{equation*} \dfrac{{\mathrm{d}}}{{\mathrm{d}} m}\dfrac{{\mathrm{d}}}{{\mathrm{d}} s} u = \lambda u, \quad \lim_{x \to 0+}u(x) = 1.\end{equation*}

Define the Green’s function

\begin{equation*} G_\lambda(x,y) = G_{\lambda}(y,x) \,:\!=\, \psi_{\lambda}(x)g_\lambda(y) \quad (0 \leq x \leq y < b, \ \lambda \geq 0).\end{equation*}

Then there exists a unique Radon measure $\sigma$ on $[0, \infty)$ , which we call the spectral measure, such that

\begin{equation*} G_\lambda(x,y) = \int_{0}^{\infty}\dfrac{\psi_{-\xi}(x)\psi_{-\xi}(y)}{\lambda + \xi}\sigma({\mathrm{d}} \xi) ,\end{equation*}

and the transition density p(t, x, y) with respect to ${\mathrm{d}} m$ of the $\frac{{\mathrm{d}}}{{\mathrm{d}} m}\frac{{\mathrm{d}}}{{\mathrm{d}} s}$ -diffusion absorbed at 0 is given by

\begin{equation*} p(t,x,y) = \int_{0}^{\infty}\,{\mathrm{e}}^{-\lambda t}\psi_{-\lambda}(x)\psi_{-\lambda}(y)\sigma({\mathrm{d}} \lambda) \quad (t > 0, x,y \in I)\end{equation*}

(see [20] for the details). Note that under the assumptions of Theorem 2.1, the spectral measure has its support on $[\lambda_0,\infty)$ .

3. Convergence to quasi-stationary distributions

Let X be a $\frac{{\mathrm{d}}}{{\mathrm{d}} m}\frac{{\mathrm{d}}}{{\mathrm{d}} s}$ -diffusion on [0, b) $(0 < b \leq \infty)$ . For a set I, we denote the set of initial distributions on I by ${\mathcal{P}}(I)$ . For a class ${\mathcal{P}} {\subset {\mathcal{P}}[0, b)} $ of initial distributions, we say that the first hitting uniqueness holds on ${\mathcal{P}}$ if

\begin{equation*} \text{the map} \ {\mathcal{P}} \ni \mu \longmapsto {\mathbb{P}}_{\mu}[T_0 \in {\mathrm{d}} t] \ \text{is injective.}\end{equation*}

For the class ${\mathcal{P}}$ , we shall take

\begin{equation*} {\mathcal{P}}_{\mathrm{exp}} = \{ \mu \in {\mathcal{P}}[0, b) \mid {\mathbb{P}}_{\mu}[T_0 \in {\mathrm{d}} t] = \lambda \,{\mathrm{e}}^{-\lambda t}\,{\mathrm{d}} t \ \text{for some} \ \lambda > 0 \},\end{equation*}

the set of initial distributions with exponentially distributed hitting times. We refer to Rogers [21] for a general study of the first hitting uniqueness. Provided that the first hitting uniqueness holds on ${\mathcal{P}}_{\mathrm{exp}}$ and X satisfies the condition of Theorem 2.1, an initial distribution $\mu \in {\mathcal{P}}[0, b)$ satisfying ${\mathbb{P}}_{\mu}[T_0 \in {\mathrm{d}} t] = \lambda \,{\mathrm{e}}^{-\lambda t}\,{\mathrm{d}} t$ for some $0 < \lambda \leq \lambda_0$ must satisfy $\mu = \nu_\lambda$ .

One of our main theorems is a general result to reduce the convergence (1.1) to the tail behavior of $T_0$ , provided that the first hitting uniqueness holds on ${\mathcal{P}}_{\mathrm{exp}}$ .

Theorem 3.1. Let X be a $\frac{{\mathrm{d}}}{{\mathrm{d}} m}\frac{{\mathrm{d}}}{{\mathrm{d}} s}$ -diffusion on $[0, b)$ $(0 < b \leq \infty)$ and set

\begin{equation*}\mu_{t}({\mathrm{d}} x) = {\mathbb{P}}_{\mu}[X_t \in {\mathrm{d}} x \mid T_0 > t]. \end{equation*}

Assume the first hitting uniqueness holds on ${\mathcal{P}}_{\mathrm{exp}}$ and

\begin{equation*}{{\mathbb{P}}_{\nu}}[T_0 \in {\mathrm{d}} t] = \lambda \,{\mathrm{e}}^{-\lambda t}\,{\mathrm{d}} t \quad \text{for some} \ \lambda > 0 \ \text{and some} \ {\nu} \in {\mathcal{P}}(0, b). \end{equation*}

Then, for $\mu \in {\mathcal{P}}[0, b)$ and $\lambda > 0$ , the following are equivalent:

  (i) $\lim_{t \to \infty}\dfrac{{\mathbb{P}}_{\mu}[T_0 > t + s]}{{\mathbb{P}}_{\mu}[T_0 > t]} = {\mathrm{e}}^{-\lambda s}$ $ (s > 0)$ ,

  (ii) ${\mathbb{P}}_{\mu_{t}}[T_0 \in {\mathrm{d}} s] \xrightarrow[t \to \infty]{} \lambda \,{\mathrm{e}}^{-\lambda s}\,{\mathrm{d}} s$ ,

  (iii) $\mu_t \xrightarrow[t \to \infty]{} {\nu}$ .

Proof of Theorem 3.1. From the Markov property, we have

\begin{equation*} {\mathbb{P}}_{\mu_{t}}[T_0 > s] = \dfrac{{\mathbb{P}}_{\mu}[T_0 > t + s]}{{\mathbb{P}}_{\mu}[T_0 > t]} \quad (t,s \geq 0). \end{equation*}

Now it is obvious that (i) and (ii) are equivalent. In addition, it is not difficult to see that (iii) implies (i).

We show that (ii) implies (iii). Since ${\mathcal{P}}[0, b]$ , the class of probability measures on the compactification [0, b], is compact under the topology of weak convergence, we can take a sequence $\{t_n\}_n$ which diverges to $\infty$ such that

(3.1) \begin{equation} \mu_{t_n} \xrightarrow[n \to \infty]{} {\tilde{\nu}} \end{equation}

for some $\tilde{\nu} \in {\mathcal{P}}[0, b]$ . From (ii), we have

(3.2) \begin{equation} {\mathbb{P}}_{\mu_{t_n}}[T_0 \in {\mathrm{d}} s] \xrightarrow[n \to \infty]{} \lambda \,{\mathrm{e}}^{-\lambda s}\,{\mathrm{d}} s. \end{equation}

On the other hand, for fixed $t > 0$ we have

\begin{equation*} {\mathbb{P}}_{\mu_{t_n}}[T_0 > t] = \int_{[0, b]}{\mathbb{P}}_{x}[T_0 > t] \mu_{t_n}({\mathrm{d}} x), \end{equation*}

where we understand that

\begin{equation*} {\mathbb{P}}_{x}[T_0 > t] = \begin{cases} 0 & x = 0, \\ 1 & x = b. \end{cases} \end{equation*}

Note that since the boundary b is natural, the function $x \mapsto {\mathbb{P}}_x[T_0 > t]$ is continuous on [0, b]. From (3.1) we obtain

\begin{equation*} \lim_{n \to \infty}{\mathbb{P}}_{\mu_{t_n}}[T_0 > t] = \int_{[0, b]}{\mathbb{P}}_x[T_0 > t] {\tilde{\nu}}({\mathrm{d}} x). \end{equation*}

Then from (3.2) it follows that

(3.3) \begin{equation} \int_{[0, b]}{\mathbb{P}}_x[T_0 > t] {\tilde{\nu}}({\mathrm{d}} x) = {\mathrm{e}}^{-\lambda t}. \end{equation}

Since

\begin{equation*} \lim_{t \to 0}{\mathbb{P}}_x[T_0 > t] = 1\{x > 0\}, \quad \lim_{t \to \infty}{\mathbb{P}}_x[T_0 > t] = 1\{x = b \} \quad (x \in [0, b]), \end{equation*}

we have from the dominated convergence theorem and (3.3) that ${\tilde{\nu} \{0\} = \tilde{\nu} \{b\} = 0}$ . Therefore ${\tilde{\nu}} \in {\mathcal{P}}(0, b)$ and ${{\mathbb{P}}_{\tilde{\nu}}}[T_0 \in {\mathrm{d}} s] = \lambda \,{\mathrm{e}}^{-\lambda s}\,{\mathrm{d}} s$ . Since the first hitting uniqueness holds on ${\mathcal{P}}_{\mathrm{exp}}$ , we have ${\tilde{\nu} = \nu}$ . The limit distribution ${\nu}$ does not depend on the choice of the sequence $\{t_n\}$ , and therefore we obtain (iii).

We give a sufficient condition for Theorem 3.1(i).

Proposition 3.1. Assume the hitting densities $f_x$ of $0$ exist, that is, there exists a non-negative jointly measurable function $f_x(t)$ such that

\begin{equation*}{\mathbb{P}}_{x}[T_0 \in {\mathrm{d}} t] = f_x(t)\,{\mathrm{d}} t \quad ( 0 < x < b,\ t > 0). \end{equation*}

Let $\mu \in {\mathcal{P}}(0, b)$ and assume the function

\begin{equation*} f_\mu(t) \,:\!=\, \int_{0}^{b}f_x(t)\,\mu({\mathrm{d}} x) \quad (t > 0) \end{equation*}

is differentiable in $t > 0$ and

(3.4) \begin{equation} -\lim_{t \to \infty}\dfrac{{\mathrm{d}}}{{\mathrm{d}} t}\log f_\mu(t) = \lambda \in (0,\lambda_0]. \end{equation}

Then we have

(3.5) \begin{equation} \lim_{t \to \infty}\dfrac{{\mathbb{P}}_{\mu}[T_0 > t+s]}{{\mathbb{P}}_{\mu}[T_0 > t]} = {\mathrm{e}}^{-\lambda s} \quad (s > 0). \end{equation}

Proof. Set $g(u) = f_\mu(\!\log u)$ for $u > 1$ . From (3.4) we have

\begin{equation*} \lim_{t \to \infty}\dfrac{tg^{\prime}(t)}{g(t)} = \lim_{t \to \infty}\dfrac{{\mathrm{e}}^t g^{\prime}({\mathrm{e}}^t)}{g({\mathrm{e}}^t)} = -\lambda. \end{equation*}

Then from [13, Theorem 2], the function g varies regularly at $\infty$ with exponent $-\lambda$ . From L’Hôpital’s rule, we have for $u = {\mathrm{e}}^s > 1$

\begin{equation*} \lim_{t \to \infty}\dfrac{{\mathbb{P}}_{\mu}[T_0 > t +\log u]}{{\mathbb{P}}_{\mu}[T_0 > t]} = \lim_{t \to \infty}\dfrac{f_\mu(t+ \log u)}{f_\mu(t)} = \lim_{t \to \infty}\dfrac{g({\mathrm{e}}^t u)}{g({\mathrm{e}}^t)} = u^{-\lambda} = {\mathrm{e}}^{-\lambda s}. \end{equation*}
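For instance, densities with polynomial corrections satisfy (3.4): if $f_\mu(t) = C(1+t)^{\kappa}\,{\mathrm{e}}^{-\lambda t}$ for constants $C > 0$ and $\kappa \in {\mathbb{R}}$ (a hypothetical density, for illustration), then

\begin{equation*} -\dfrac{{\mathrm{d}}}{{\mathrm{d}} t}\log f_\mu(t) = \lambda - \dfrac{\kappa}{1+t} \xrightarrow[t \to \infty]{} \lambda,\end{equation*}

so, provided $\lambda \in (0, \lambda_0]$, Proposition 3.1 applies and (3.5) holds.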

Remark 3.1. We might expect Proposition 3.1 to be extended, with (3.4) replaced by

(3.6) \begin{equation}\log f_\mu(t) \sim -\lambda t \quad (t \to \infty), \end{equation}

which is weaker than (3.4) by L’Hôpital’s rule. In general, however, this weaker condition does not suffice. We give a counterexample that satisfies (3.6) but not (3.5). Let us find a positive function f of the form

\begin{equation*}f(t) = {\mathrm{e}}^{({-}\lambda + {\varepsilon}(t))t} , \end{equation*}

with a function ${\varepsilon}(t)$ vanishing at $\infty$ but not satisfying

(3.7) \begin{equation}\dfrac{\int_{t+s}^{\infty}f(u)\,{\mathrm{d}} u}{\int_{t}^{\infty}f(u)\,{\mathrm{d}} u} \xrightarrow[t \to \infty]{} \,{\mathrm{e}}^{-\lambda s} \quad (s > 0). \end{equation}

By a change of variables, we see that (3.7) is equivalent to the function

\begin{equation*}h(t) \,:\!=\, \int_{t}^{\infty}u^{-\lambda - 1 + {\varepsilon}(\!\log u) }\,{\mathrm{d}} u \end{equation*}

varying regularly with exponent $-\lambda $ at $\infty$ . If the function ${\varepsilon}$ is non-increasing, by the monotone density theorem [1, Theorem 1.7.2] this is equivalent to the slow variation of

\begin{equation*}k(s) = s^{{\varepsilon}(\!\log s)} \quad (s > 1). \end{equation*}

We now set

\begin{equation*}{\varepsilon}(s) = 2^{-n} \quad (4^n < s \leq 4^{n+1}, n \in {\mathbb{N}}), \end{equation*}

and then the function ${\varepsilon}$ vanishes at $\infty$ and

\begin{equation*}\dfrac{k({\mathrm{e}} \cdot \exp\!(4^n))}{k(\exp\!(4^n))} = \dfrac{\exp\!(2^{n} + 2^{-n})}{\exp\!(2^{n+1})}= \exp\!({-}2^n + 2^{-n}) \xrightarrow[n \to \infty]{} 0. \end{equation*}

So the function k does not vary slowly.
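The blow-up of this ratio can also be checked numerically (a sketch; we work with $\log k$ throughout, since $\exp\!(4^n)$ overflows floating-point arithmetic):

```python
import math

def eps(t):
    """epsilon(t) = 2**(-n) on the interval 4**n < t <= 4**(n+1), for t > 1."""
    n = 0
    while 4.0 ** (n + 1) < t:
        n += 1
    return 2.0 ** (-n)

def log_k(log_s):
    """log of k(s) = s**eps(log s), given log(s)."""
    return eps(log_s) * log_s

# log of the ratio k(e*exp(4**n)) / k(exp(4**n)); it equals -2**n + 2**(-n),
# which diverges to -infinity, so the ratio tends to 0
for n in range(1, 8):
    log_ratio = log_k(4.0 ** n + 1.0) - log_k(4.0 ** n)
    print(n, log_ratio)
```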

We give a sufficient condition for the existence of the hitting densities of 0. For this purpose, we need the following condition on the decay of the spectral measure $\sigma$ of $-\frac{{\mathrm{d}}}{{\mathrm{d}} m}\frac{{\mathrm{d}}}{{\mathrm{d}} s}$ :

\begin{equation*}\mathrm{(S)} \quad \int_{0}^{\infty}\,{\mathrm{e}}^{-\lambda t} \sigma({\mathrm{d}} \lambda) < \infty \quad(t > 0).\end{equation*}

A sufficient condition for (S) is as follows.

Proposition 3.2. Let m be a speed measure and s be a scale function on $(0, b) (0 < b \leq \infty)$ . Then if $|s(0)| < \infty$ and

\begin{equation*} m(x,c] \leq C (s(x) - s(0))^{-\delta} \quad (0 < x < c) \end{equation*}

for some $C > 0$ , $0 < c < b$ and $0 < \delta < 1$ (in this case, the boundary $0$ is automatically regular or exit), the condition $(\textrm{S})$ holds.

The proof of Proposition 3.2 is given in [Reference Yamato25]. The following result by Yano [Reference Yano26] gives existence and a spectral representation of the hitting densities.

Proposition 3.3. ([Reference Yano26, Proposition 2.1].) Assume $(\textrm{S})$ holds. Then for any $0 < x < b$ the distribution of $T_0$ under ${\mathbb{P}}_{x}$ has a density $f_x(t)$ on $(0,\infty)$ with respect to the Lebesgue measure, that is, the following holds:

\begin{equation*} {\mathbb{P}}_x[T_0 \in {\mathrm{d}} t] = f_x(t)\,{\mathrm{d}} t \quad (0 < x < b, \ t > 0). \end{equation*}

The hitting densities have a spectral representation,

(3.8) \begin{equation} f_x(t) = \int_{0}^{\infty}\,{\mathrm{e}}^{-\lambda t}\psi_{-\lambda}(x)\sigma({\mathrm{d}} \lambda) \quad (0 < x < b,\ t > 0), \end{equation}

and have another representation,

\begin{equation*} f_{x}(t) = \dfrac{{\mathrm{d}}}{{\mathrm{d}} s(y)}p(t,x,y)\bigg|_{y=0} \quad (0 < x < b, \ t > 0). \end{equation*}

4. Hitting densities of Kummer diffusions with negative drift

Let us give the hitting densities of Kummer diffusions with negative drift.

First we give a speed measure and a scale function for Kummer diffusions with negative drift. Fix $\alpha > 0$ and $\beta \in {\mathbb{R}}$ . From (1.2) we have

\begin{equation*}{\mathcal{L}}^{(0)} = {\mathcal{L}}^{(\alpha,\beta)} = x\dfrac{{\mathrm{d}}^2}{{\mathrm{d}} x^2} + ({-}\alpha + 1 - \beta x) \dfrac{{\mathrm{d}}}{{\mathrm{d}} x} = \dfrac{{\mathrm{d}}}{{\mathrm{d}} m^{(0)}}\dfrac{{\mathrm{d}}}{{\mathrm{d}} s^{(0)}}\end{equation*}

with

(4.1) \begin{equation}{\mathrm{d}} m^{(0)}(x) \,:\!=\, {\mathrm{d}} m^{(\alpha,\beta)}(x) = x^{-\alpha}\,{\mathrm{e}}^{-\beta x}\,{\mathrm{d}} x, \quad {\mathrm{d}} s^{(0)}(x) \,:\!=\, {\mathrm{d}} s^{(\alpha,\beta)}(x) = x^{\alpha - 1}\,{\mathrm{e}}^{\beta x}\,{\mathrm{d}} x.\end{equation}

In addition, for $\gamma \geq 0$ , we have

\begin{equation*}{\mathcal{L}}^{(\gamma)} = {\mathcal{L}}^{(\alpha,\beta,\gamma)} = x\dfrac{{\mathrm{d}}^2}{{\mathrm{d}} x^2} + \biggl({-}\alpha + 1 - \beta x + \dfrac{2xg^{\prime}_\gamma (x)}{g_\gamma(x)}\biggr) \dfrac{{\mathrm{d}}}{{\mathrm{d}} x} = \dfrac{{\mathrm{d}}}{{\mathrm{d}} m^{(\gamma)}}\dfrac{{\mathrm{d}}}{{\mathrm{d}} s^{(\gamma)}} ,\end{equation*}

with

\begin{equation*}{\mathrm{d}} m^{(\gamma)} = g_\gamma^2 \,{\mathrm{d}} m^{(0)}, \quad {\mathrm{d}} s^{(\gamma)} = g_\gamma^{-2}\,{\mathrm{d}} s^{(0)},\end{equation*}

where $g_\gamma$ is the function given in (1.3). Note that since $g_\gamma(0) = 1$ , the classification of the boundary 0 for ${\mathcal{L}}^{(\gamma)}$ does not depend on $\gamma \geq 0$ . The boundary $\infty$ for ${\mathcal{L}}^{(\gamma)}$ is always natural, as we will see in Proposition 4.1. We also have

\begin{equation*} {\mathcal{L}}^{(\gamma)} = {\mathcal{L}}^{(0)} + \dfrac{2x g^{\prime}_\gamma}{g_\gamma}\dfrac{{\mathrm{d}}}{{\mathrm{d}} x},\end{equation*}

and since

\begin{equation*}\tilde{g}_\gamma(x) = g_\gamma(x^2/2),\end{equation*}

it follows that

\begin{equation*}\tilde{{\mathcal{L}}}^{(\alpha,\beta,\gamma)}= \tilde{{\mathcal{L}}}^{(\alpha,\beta,0)} + \dfrac{\tilde{g}^{\prime}_\gamma}{\tilde{g}_\gamma}\dfrac{{\mathrm{d}}}{{\mathrm{d}} x},\end{equation*}

which implies (1.4).
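That ${\mathcal{L}}^{(\gamma)}$ really is the $\frac{{\mathrm{d}}}{{\mathrm{d}} m^{(\gamma)}}\frac{{\mathrm{d}}}{{\mathrm{d}} s^{(\gamma)}}$ operator for the transformed speed and scale can be verified symbolically: for any smooth positive g one has the algebraic h-transform identity $g \cdot \frac{{\mathrm{d}}}{{\mathrm{d}} m^{(\gamma)}}\frac{{\mathrm{d}}}{{\mathrm{d}} s^{(\gamma)}} f = {\mathcal{L}}^{(0)}(gf) - f\,{\mathcal{L}}^{(0)} g$, which together with the eigenequation for $g_\gamma$ gives the Doob-transform description behind (4.2). A sympy sketch of the identity (sympy is an assumption of this snippet, not used in the paper):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
alpha, beta = sp.symbols('alpha beta', positive=True)
g = sp.Function('g', positive=True)(x)   # stands in for g_gamma; any smooth positive function
f = sp.Function('f')(x)

mp0 = x**(-alpha) * sp.exp(-beta*x)      # density of dm^(0), cf. (4.1)
sq0 = x**(alpha - 1) * sp.exp(beta*x)    # density of ds^(0), cf. (4.1)

def L0(h):
    # (d/dm^(0))(d/ds^(0)) h
    return sp.diff(sp.diff(h, x) / sq0, x) / mp0

def Lgam(h):
    # (d/dm^(gamma))(d/ds^(gamma)) h, with dm^(gamma) = g^2 dm^(0), ds^(gamma) = g^(-2) ds^(0)
    return sp.diff(g**2 * sp.diff(h, x) / sq0, x) / (g**2 * mp0)

# h-transform identity: g * L^(gamma) f = L^(0)(g f) - f * L^(0) g
residual = sp.simplify(sp.expand(g * Lgam(f) - L0(g*f) + f * L0(g)))
print(residual)  # 0
```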

We summarize several results on the hitting densities for Kummer diffusions with negative drift. Note that from (4.1) and Proposition 3.2, the condition (S) holds for $\frac{{\mathrm{d}}}{{\mathrm{d}} m^{(0)}}\frac{{\mathrm{d}}}{{\mathrm{d}} s^{(0)}}$ .

Theorem 4.1. For the process $Y^{(\alpha,\beta,\gamma)} $ $(\alpha > 0 , \ \beta \in {\mathbb{R}}, \ \gamma \geq 0)$ , the hitting densities $f^{(\gamma)}_{x}$ of $0$ and the spectral measure $\sigma^{(\gamma)}$ for ${\mathcal{L}}^{(\gamma)}$ are given by

(4.2) \begin{equation}f^{(\gamma)}_{x}(t) = \dfrac{{\mathrm{e}}^{-\gamma t}}{g_\gamma(x)}f^{(0)}_x(t) \quad (0 < x < \infty, \ t > 0) \end{equation}

and

(4.3) \begin{equation}\sigma^{(\gamma)}({\mathrm{d}} \lambda) = \sigma^{(0)}({\mathrm{d}} (\lambda - \gamma)), \end{equation}

where

(4.4) \begin{equation} f^{(0)}_{x}(t) = \begin{cases} \dfrac{1}{\Gamma (\alpha)}{x^{\alpha}t^{-\alpha - 1}}\,{\mathrm{e}}^{-x/t} & ( \beta = 0), \\[14pt] \dfrac{x^{\alpha}\,{\mathrm{e}}^{\beta t}}{\Gamma(\alpha)}\biggl(\dfrac{ \beta \,{\mathrm{e}}^{-\beta t}}{1 - {\mathrm{e}}^{-\beta t}}\biggr)^{1+\alpha}\exp \biggl( \dfrac{-x\beta \,{\mathrm{e}}^{-\beta t}}{1 - {\mathrm{e}}^{-\beta t}} \biggr) & ( \beta \neq 0), \end{cases} \end{equation}

and

(4.5) \begin{equation} \sigma^{(0)}({\mathrm{d}} \lambda) =\begin{cases} \beta^{\alpha+1}\displaystyle\sum_{n=0}^{\infty} \dfrac{(\alpha)_{n+1}}{n! \Gamma(\alpha)}\delta_{\beta(n + \alpha)}({\mathrm{d}} \lambda) & (\beta > 0), \\[13pt] \dfrac{1}{\Gamma(\alpha)^2}\lambda^{\alpha}\,{\mathrm{d}} \lambda & (\beta = 0), \\[9pt] ({-}\beta)^{\alpha+1}\displaystyle\sum_{n=0}^{\infty} \dfrac{(\alpha)_{n+1}}{n! \Gamma(\alpha)}\delta_{({-}\beta)(n + 1)}({\mathrm{d}} \lambda) & (\beta < 0),\end{cases} \end{equation}

where $(a)_k $ $(a \in {\mathbb{R}}, \ k \in {\mathbb{N}})$ is a Pochhammer symbol,

\begin{equation*} (a)_k = a(a+1)\cdots (a + k - 1). \end{equation*}

In particular, we have

\begin{align*}\lambda^{(\gamma)}_0 =\begin{cases}\alpha \beta + \gamma & (\beta > 0), \\\gamma & (\beta = 0), \\-\beta + \gamma & (\beta < 0).\end{cases} \end{align*}
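The explicit densities in Theorem 4.1 admit a numerical sanity check: for $\beta \geq 0$ the hitting time $T_0$ is almost surely finite, so $f^{(0)}_x$ integrates to 1 over $(0,\infty)$ ; for $\beta < 0$ , substituting $v = \beta/({\mathrm{e}}^{\beta t} - 1)$ in (4.4) gives total mass $\Gamma(\alpha, -\beta x)/\Gamma(\alpha) < 1$ (this last value is our own computation, not stated in the text). A Python sketch with scipy assumed available and arbitrary test parameters:

```python
import math
from scipy.integrate import quad
from scipy.special import gammaincc

def f0(t, x, alpha, beta):
    # the density (4.4), evaluated through logarithms for numerical stability
    if beta == 0.0:
        logf = alpha*math.log(x) - (alpha + 1)*math.log(t) - x/t - math.lgamma(alpha)
        return math.exp(logf)
    # v := beta e^{-beta t}/(1 - e^{-beta t}) = beta/(e^{beta t} - 1) (> 0 for either sign of beta)
    if beta > 0:
        logv = math.log(beta) - beta*t - math.log1p(-math.exp(-beta*t))
    else:
        logv = math.log(-beta) - math.log1p(-math.exp(beta*t))
    logf = (alpha*math.log(x) + beta*t + (1 + alpha)*logv
            - x*math.exp(logv) - math.lgamma(alpha))
    return math.exp(logf)

def total_mass(x, alpha, beta):
    val, _ = quad(f0, 0, math.inf, args=(x, alpha, beta))
    return val

print(total_mass(2.0, 0.7, 0.0))   # ≈ 1
print(total_mass(0.5, 1.5, 1.0))   # ≈ 1
print(total_mass(1.0, 2.0, -1.0))  # ≈ Γ(2, 1)/Γ(2) = 2/e
```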

Remark 4.1. From [Reference Magnus, Oberhettinger and Soni16, Section 3.7], for example, we have

(4.6) \begin{equation} g_{\gamma}(x) = \begin{cases} \dfrac{1}{2^{\alpha - 1} \Gamma(\alpha)}(2\sqrt{\gamma x})^{\alpha}K_\alpha(2\sqrt{\gamma x}) & ( \beta = 0), \\[14pt] \dfrac{\Gamma(\alpha + \gamma / \beta)}{\Gamma (\alpha)} (\beta x)^{\alpha} U (\alpha + \gamma / \beta, \alpha + 1;\ \beta x) & ( \beta > 0), \\[14pt] \dfrac{\Gamma(1 - \gamma / \beta)}{\Gamma (\alpha)} ({-}\beta x)^{\alpha} \,{\mathrm{e}}^{\beta x}U (1 - \gamma /\beta, \alpha + 1 ;\ -\beta x) & ( \beta < 0), \end{cases} \end{equation}

where $K_\alpha$ denotes the modified Bessel function of the second kind (see e.g. [Reference Magnus, Oberhettinger and Soni16, Section 3.1]) and U denotes the Tricomi confluent hypergeometric function

\begin{equation*} U(a,b;\ x) = \dfrac{1}{\Gamma(a)}\int_{0}^{\infty}\,{\mathrm{e}}^{-sx}s^{a-1}(1+s)^{b-a-1}\,{\mathrm{d}} s \quad (a > 0, \ b\in {\mathbb{R}}, \ x > 0). \end{equation*}

Note that

\begin{equation*} K_\alpha(x) \sim 2^{\alpha -1}\Gamma(\alpha) x^{-\alpha}, \quad U(a,b;\ x) \sim \dfrac{\Gamma(b - 1)}{\Gamma (a)}x^{-b+1} \quad (x \to + 0,\ a> 0, \ b > 1) \end{equation*}

and

(4.7) \begin{equation} K_\alpha(x) \sim \sqrt{\dfrac{\pi}{2x}}\,{\mathrm{e}}^{-x}, \quad U(a,b;\ x) \sim x^{-a} \quad (x \to +\infty,\ a> 0) \end{equation}

(see e.g. [Reference Magnus, Oberhettinger and Soni16, Section 3.14.1]).
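The asymptotics quoted in this remark can be checked directly against scipy.special ( `kv` is $K_\alpha$ and `hyperu` is Tricomi's U; the parameter values below are arbitrary, and this snippet is a sanity check rather than part of the paper):

```python
import math
from scipy.special import kv, hyperu

alpha, a, b = 0.8, 1.3, 2.5

# x -> 0+:  K_a(x) ~ 2^{a-1} Gamma(a) x^{-a},  U(a,b;x) ~ Gamma(b-1)/Gamma(a) x^{1-b}  (b > 1)
x = 1e-6
r_k0 = kv(alpha, x) / (2**(alpha - 1) * math.gamma(alpha) * x**(-alpha))
r_u0 = hyperu(a, b, x) / (math.gamma(b - 1) / math.gamma(a) * x**(1 - b))

# x -> infinity (4.7):  K_a(x) ~ sqrt(pi/(2x)) e^{-x},  U(a,b;x) ~ x^{-a}
x = 50.0
r_ki = kv(alpha, x) / (math.sqrt(math.pi / (2*x)) * math.exp(-x))
r_ui = hyperu(a, b, x) / x**(-a)

print(r_k0, r_u0, r_ki, r_ui)  # each ratio ≈ 1
```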

Although Theorem 4.1 can be easily shown by compiling some known results, we give a proof for completeness.

Proof of Theorem 4.1. First we show (4.2) and (4.4). We denote the transition probability of $Y^{(\gamma)} = Y^{(\alpha, \beta, \gamma)}$ by

\begin{equation*}{\mathbb{P}}_x\bigl[Y^{(\gamma)}_t \in {\mathrm{d}} y\bigr] = p^{(\gamma)}(t,x,y)\,{\mathrm{d}} m^{(\gamma)}(y) . \end{equation*}

Then we have

\begin{equation*} p^{(\gamma)}(t,x,y) = {\mathrm{e}}^{-\gamma t}\dfrac{p^{(0)}(t,x,y)}{g_\gamma(x)g_\gamma(y)} \end{equation*}

(see e.g. [Reference Takemura and Tomisaki24, p. 172]), where we write ${\mathbb{P}}_x$ for the underlying probability measure for $Y^{(\gamma)}$ starting from x. From [Reference Borodin and Salminen2, Appendix 1], the transition density $p^{(0)}(t,x,y)$ is given by

\begin{align*} p^{(0)}(t,x,y) = \begin{cases}\dfrac{1}{t}(xy)^{\alpha/2}\,{\mathrm{e}}^{-(x+y)/t}I_\alpha\biggl(\dfrac{2\sqrt{xy}}{t}\biggr) & (\beta = 0), \\[17pt]\dfrac{\beta \,{\mathrm{e}}^{-\alpha\beta t/2}}{1 - {\mathrm{e}}^{-\beta t}}(xy)^{\alpha/2}\exp \biggl({-}\dfrac{(x + y)\beta \,{\mathrm{e}}^{-\beta t}}{1 - {\mathrm{e}}^{-\beta t}}\biggr) I_\alpha \biggl(\dfrac{2\sqrt{xy}\beta \,{\mathrm{e}}^{-\beta t / 2}}{1 - {\mathrm{e}}^{-\beta t}}\biggr) & (\beta \neq 0), \end{cases} \end{align*}

where the function $I_\nu$ is the modified Bessel function of the first kind:

\begin{equation*}I_\nu (x) = \sum_{n=0}^{\infty}\dfrac{1}{n!\Gamma(n + \nu + 1)}\biggl(\dfrac{x}{2}\biggr)^{\nu + 2n} \quad (\nu \in {\mathbb{R}}, \ x \in {\mathbb{R}}). \end{equation*}

We now have

\begin{align*} {\mathbb{P}}_x\bigl[T_0^{(\gamma)} > t\bigr] &= \int_{0}^{b}p^{(\gamma)}(t,x,y)\,{\mathrm{d}} m^{(\gamma)}(y) \\ &= \dfrac{{\mathrm{e}}^{-\gamma t}}{g_\gamma (x)}\int_{0}^{b}p^{(0)}(t,x,y)g_\gamma(y)\,{\mathrm{d}} m^{(0)}(y) \\ &= \dfrac{{\mathrm{e}}^{-\gamma t}}{g_\gamma (x)}\int_{0}^{b}p^{(0)}(t,x,y)\,{\mathrm{d}} m^{(0)}(y)\int_{0}^{\infty}\,{\mathrm{e}}^{-\gamma u}f^{(0)}_y(u)\,{\mathrm{d}} u \\ &= \dfrac{{\mathrm{e}}^{-\gamma t}}{g_\gamma (x)}\int_{0}^{\infty}\,{\mathrm{e}}^{-\gamma u}\,{\mathrm{d}} u \int_{0}^{b}p^{(0)}(t,x,y)f^{(0)}_y(u)\,{\mathrm{d}} m^{(0)}(y) \\ &= \dfrac{{\mathrm{e}}^{-\gamma t}}{g_\gamma (x)}\int_{0}^{\infty}\,{\mathrm{e}}^{-\gamma u} f^{(0)}_x(u + t)\,{\mathrm{d}} u \\ &= \dfrac{1}{g_\gamma (x)}\int_{t}^{\infty}\,{\mathrm{e}}^{-\gamma u} f^{(0)}_x(u)\,{\mathrm{d}} u. \end{align*}

This shows (4.2). Then from Proposition 3.3 we obtain (4.4).

From [Reference Takemura and Tomisaki24, p. 173] we have (4.3). We show (4.5). First we consider the case $\beta > 0$ . A direct computation shows that

(4.8) \begin{equation} \psi_{\lambda}(x) = \dfrac{1}{\alpha}x^{\alpha}M(\lambda/\beta + \alpha,1 + \alpha;\ \beta x ) \quad (x > 0,\ \lambda \in {\mathbb{R}}), \end{equation}

where the function M is Kummer’s confluent hypergeometric function:

\begin{equation*}M(a,b;\ x) = \sum_{n = 0}^{\infty}\dfrac{(a)_n x^n}{(b)_n n!} \quad (a,b \in {\mathbb{R}}, \ x \in {\mathbb{R}}). \end{equation*}

We consider the values of $\lambda$ for which the function $\psi_{\lambda}$ is square-integrable. We may assume $\lambda < 0$ . Since the asymptotic behavior of the function M is given by

\begin{equation*} M(a,b;\ x) \sim \dfrac{\Gamma(b)}{\Gamma(a)} x^{a-b}\,{\mathrm{e}}^{x} \quad (x \to \infty) \end{equation*}

for $a \neq 0,-1,-2,\ldots$ (see e.g. [Reference Magnus, Oberhettinger and Soni16, p. 289]), the function $\psi_{\lambda}$ is not square-integrable with respect to ${\mathrm{d}} m$ when $\lambda/\beta + \alpha \neq 0,-1,-2,\ldots .$ When $\lambda/\beta + \alpha = 0,-1,-2,\ldots ,$ the function $M(\lambda/\beta + \alpha, 1+\alpha;\ \beta x)$ is a polynomial and $\psi_{\lambda}$ is obviously square-integrable with respect to ${\mathrm{d}} m$ . Note that

\begin{equation*} M({-}n,1 + \alpha;\ \beta x) = \dfrac{n!}{(1 + \alpha)_n}L^{(\alpha)}_n(\beta x), \end{equation*}

where $L^{(\alpha)}_n(x)$ is the nth Laguerre polynomial of parameter $\alpha$ , that is,

\begin{equation*} L^{(\alpha)}_n (x) = {\mathrm{e}}^{x}\dfrac{x^{-\alpha}}{n!}\dfrac{{\mathrm{d}}^n}{{\mathrm{d}} x^n}({\mathrm{e}}^{-x}x^{n + \alpha}) \quad (n \in {\mathbb{N}}) \end{equation*}

(see e.g. [Reference Magnus, Oberhettinger and Soni16, p. 241]). Since the Laguerre polynomials $\{ L^{(\alpha)}_n(x) \}_n$ comprise an orthogonal basis of $L^2((0,\infty), x^{\alpha}\,{\mathrm{e}}^{-x}\,{\mathrm{d}} x)$ , the functions $\{\psi_{-\beta(\alpha + n)}(x)\}_n$ comprise an orthogonal basis of $L^2((0,\infty),x^{-\alpha}\,{\mathrm{e}}^{-\beta x}\,{\mathrm{d}} x)$ . Hence the spectral measure is purely atomic, and the support of $\sigma$ is $\{ \beta (\alpha + n), \ n \geq 0 \}$ . Since we have

\begin{equation*} \int_{0}^{\infty}L^{(\alpha)}_i(x)L^{(\alpha)}_j(x)x^{\alpha}\,{\mathrm{e}}^{-x}\,{\mathrm{d}} x = \delta_{ij}\dfrac{\Gamma(i + \alpha+ 1)}{i!} \quad (i,j \in {\mathbb{N}} ) \end{equation*}

(see e.g. [Reference Magnus, Oberhettinger and Soni16, p. 241]), it follows that

\begin{align*} \int_{0}^{\infty}\psi_{-\beta(\alpha + n)}(x)^2\,{\mathrm{d}} m(x) &= \dfrac{(n!)^2}{\alpha^2\beta^{\alpha+1} \{ (1 + \alpha)_n \}^2}\int_{0}^{\infty}L^{(\alpha)}_n(x)^2x^{\alpha}\,{\mathrm{e}}^{-x}\,{\mathrm{d}} x \\ &= \dfrac{n!\Gamma(\alpha)}{\beta^{\alpha+1}(\alpha)_{n+1}}. \end{align*}

Hence we obtain

\begin{equation*} \sigma\{ \beta (n + \alpha) \} = \dfrac{\beta^{\alpha+1}(\alpha)_{n+1}}{n!\Gamma(\alpha)} \quad (n \geq 0). \end{equation*}
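The normalization just computed can be checked numerically against (4.8) (Python/scipy assumed; $\alpha$ and $\beta$ below are arbitrary test values, and the snippet is not part of the original argument):

```python
import math
from scipy.integrate import quad
from scipy.special import hyp1f1, poch

alpha, beta = 1.4, 0.9

def psi(x, n):
    # psi_{-beta(alpha+n)}(x) = (1/alpha) x^alpha M(-n, 1+alpha; beta x), cf. (4.8)
    return x**alpha * hyp1f1(-n, 1 + alpha, beta * x) / alpha

def norm2(n):
    # integral of psi^2 dm, with dm(x) = x^{-alpha} e^{-beta x} dx
    val, _ = quad(lambda x: psi(x, n)**2 * x**(-alpha) * math.exp(-beta*x), 0, math.inf)
    return val

for n in range(4):
    expected = math.factorial(n) * math.gamma(alpha) / (beta**(alpha + 1) * poch(alpha, n + 1))
    print(n, norm2(n), expected)  # the two columns agree
```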

Next we show the case $\beta < 0$ . Let us consider the map

(4.9) \begin{equation}L^2\bigl((0,\infty), {\mathrm{d}} m^{(\alpha,-\beta)}\bigr) \ni f \longmapsto {\mathrm{e}}^{\beta x}f \in L^2\bigl((0,\infty), {\mathrm{d}} m^{(\alpha,\beta)}\bigr). \end{equation}

Obviously this map is unitary. Moreover, since we have

\begin{equation*}{\mathcal{L}}^{(\alpha,\beta)}\bigl({\mathrm{e}}^{\beta x}\psi_{\lambda}^{(\alpha,-\beta)}(x)\bigr) = (\lambda - \beta (\alpha -1))\bigl({\mathrm{e}}^{\beta x}\psi_{\lambda}^{(\alpha,-\beta)}(x)\bigr) \end{equation*}

and

\begin{equation*}\dfrac{{\mathrm{d}}}{{\mathrm{d}} s^{(\alpha,\beta)}}\bigl({\mathrm{e}}^{\beta x}\psi^{(\alpha,-\beta)}_{\lambda}(x)\bigr) = \beta x^{1-\alpha}\psi^{(\alpha,-\beta)}_{\lambda}(x) + {\mathrm{e}}^{\beta x}\dfrac{{\mathrm{d}}}{{\mathrm{d}} s^{(\alpha,-\beta)}}\psi_{\lambda}^{(\alpha,-\beta)}(x), \end{equation*}

we can see from (4.8) that

\begin{equation*}\psi_{\lambda}^{(\alpha,\beta)}(x) = {\mathrm{e}}^{\beta x}\psi^{(\alpha,-\beta)}_{\lambda + \beta (\alpha - 1)}(x), \end{equation*}

where we denote the function defined in (2.2) for ${\mathcal{L}}^{(\alpha,\beta)}$ by $\psi_{\lambda}^{(\alpha,\beta)}$ . Then, from the unitarity of the map (4.9) and the argument for the case $\beta > 0$ , the functions $\bigl\{ \psi_{\beta(n + 1)}^{(\alpha,\beta)}, n \geq 0 \bigr\}$ comprise an orthogonal basis of $L^2((0,\infty), {\mathrm{d}} m^{(\alpha,\beta)})$ and therefore we obtain (4.5) for $\beta < 0$ .

Finally, we show the case $\beta = 0$ . A direct computation shows that

\begin{equation*}\psi_{\lambda}(x) = \Gamma(\alpha) \biggl(\dfrac{x}{\lambda}\biggr)^{\alpha/2}I_\alpha(2\sqrt{\lambda x}) \quad (x > 0, \ \lambda \in {\mathbb{R}}). \end{equation*}

From (3.8) and (4.4) we have

\begin{equation*}\int_{0}^{\infty}\,{\mathrm{e}}^{-\lambda t}\psi_{-\lambda}(x)\sigma^{(0)}({\mathrm{d}} \lambda)= \dfrac{1}{\Gamma(\alpha)}x^{\alpha}t^{-\alpha-1}\,{\mathrm{e}}^{-x/t}. \end{equation*}

Since

\begin{equation*}\dfrac{{\mathrm{d}}}{{\mathrm{d}} x}(x^\nu I_\nu(x)) = x^\nu I_{\nu-1}(x), \quad I_\nu(x) \sim \dfrac{{\mathrm{e}}^{x}}{\sqrt{2\pi x}} \quad (\nu \in {\mathbb{R}},\ x \to \infty) \end{equation*}

(see e.g. [Reference Magnus, Oberhettinger and Soni16, p. 67, p. 139]), we can see that

\begin{equation*}\int_{0}^{\infty}\,{\mathrm{e}}^{-\lambda t}\biggl|\dfrac{{\mathrm{d}}}{{\mathrm{d}} x}\psi_{-\lambda}(x)\biggr|\sigma^{(0)}({\mathrm{d}} \lambda) < \infty \quad (x > 0). \end{equation*}

Thus we have

\begin{align*}\int_{0}^{\infty}\,{\mathrm{e}}^{-\lambda t}\sigma^{(0)}({\mathrm{d}} \lambda) & = \dfrac{{\mathrm{d}}}{{\mathrm{d}} s(x)}\int_{0}^{\infty}\,{\mathrm{e}}^{-\lambda t}\psi_{-\lambda}(x)\sigma^{(0)}({\mathrm{d}} \lambda)\bigg|_{x = 0} \\& = \dfrac{{\mathrm{d}}}{{\mathrm{d}} s(x)}\dfrac{1}{\Gamma(\alpha)}x^{\alpha}t^{-\alpha-1}\,{\mathrm{e}}^{-x/t}\bigg|_{x = 0} \\& = \dfrac{\alpha t^{-\alpha - 1}}{\Gamma(\alpha)}. \end{align*}

From the uniqueness of the Laplace transform, we obtain (4.5).
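The final display can be cross-checked numerically: with $\sigma^{(0)}({\mathrm{d}}\lambda) = \lambda^{\alpha}\,{\mathrm{d}}\lambda/\Gamma(\alpha)^2$ , the Laplace transform is $\Gamma(\alpha+1)t^{-\alpha-1}/\Gamma(\alpha)^2 = \alpha t^{-\alpha-1}/\Gamma(\alpha)$ . A Python/scipy sketch with arbitrary test values (not part of the paper):

```python
import math
from scipy.integrate import quad

alpha, t = 1.3, 0.7  # arbitrary test values
# left-hand side: Laplace transform of the beta = 0 spectral measure in (4.5)
lhs, _ = quad(lambda lam: math.exp(-lam*t) * lam**alpha / math.gamma(alpha)**2, 0, math.inf)
# right-hand side of the last display
rhs = alpha * t**(-alpha - 1) / math.gamma(alpha)
print(lhs, rhs)  # agree, since the integral is Gamma(alpha + 1) t^{-alpha-1}
```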

We give the classification of the boundary $\infty$ for ${\mathcal{L}}^{(\gamma)}$ .

Proposition 4.1. For $\alpha > 0, \ \beta \in {\mathbb{R}},\ \gamma \geq 0$ , the boundary $\infty$ for ${\mathcal{L}}^{(\gamma)}$ is natural.

Proof. Let $\beta > 0$ . From (4.6) and (4.7) we have

\begin{align*}s^{(\gamma)}(x) - s^{(\gamma)}(1) &= \int_{1}^{x}y^{\alpha - 1}\,{\mathrm{e}}^{\beta y}\dfrac{{\mathrm{d}} y}{g_\gamma^2(y)} \\&\asymp \int_{1}^{x}y^{\alpha + 2\gamma / \beta- 1}\,{\mathrm{e}}^{\beta y}\,{\mathrm{d}} y \xrightarrow[x \to \infty]{} \infty, \end{align*}

where $f_1 \asymp f_2$ means that there exists a constant $c > 0$ such that $(1/c)f_1(x) \leq f_2(x) \leq c f_1(x)$ for large $x > 0$ . Note that from L’Hôpital’s rule, it holds for $\delta \in {\mathbb{R}}$ that

\begin{equation*}\int_{x}^{\infty}y^{\delta}\,{\mathrm{e}}^{-\beta y}\,{\mathrm{d}} y \sim \dfrac{1}{\beta}x^{\delta}\,{\mathrm{e}}^{-\beta x} \quad (x \to \infty). \end{equation*}

We have

\begin{align*}\int_{1}^{\infty}\,{\mathrm{d}} s^{(\gamma)}(x)\int_{x}^{\infty}\,{\mathrm{d}} m^{(\gamma)}(y)&\asymp \int_{1}^{\infty}x^{\alpha + 2\gamma / \beta -1}\,{\mathrm{e}}^{-\beta x}\,{\mathrm{d}} x\int_{x}^{\infty}y^{-\alpha - 2\gamma / \beta}\,{\mathrm{e}}^{-\beta y}\,{\mathrm{d}} y \\&\asymp \int_{1}^{\infty}\dfrac{{\mathrm{d}} x}{x} \\&= \infty. \end{align*}

Thus the boundary $\infty$ is natural. We can show the cases of $\beta = 0$ and $\beta < 0$ by a similar argument and hence we omit them.
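The L’Hôpital tail estimate used in the proof above can be checked numerically after centring the integral at x, which keeps the tiny factor ${\mathrm{e}}^{-\beta x}$ out of the quadrature (Python/scipy assumed; $\beta$ and $\delta$ are arbitrary test values):

```python
import math
from scipy.integrate import quad

beta, delta = 2.0, 1.7

def ratio(x):
    # integral_x^inf y^delta e^{-beta y} dy, rewritten via y = x + u as
    # e^{-beta x} * integral_0^inf (x+u)^delta e^{-beta u} du, divided by (1/beta) x^delta e^{-beta x}
    num, _ = quad(lambda u: (x + u)**delta * math.exp(-beta*u), 0, math.inf)
    return num / (x**delta / beta)

for x in [5.0, 20.0, 80.0]:
    print(x, ratio(x))  # decreases toward 1 as x grows
```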

5. Convergence to non-minimal quasi-stationary distributions for Kummer diffusions with negative drift

Let us apply Theorem 3.1 to Kummer diffusions with negative drift, and give a sufficient condition on initial distributions under which the conditional process converges to each non-minimal quasi-stationary distribution specified.

We classify $Y^{(\gamma)} = Y^{(\alpha,\beta,\gamma)} $ $(\alpha > 0, \ \beta \in {\mathbb{R}}, \ \gamma \geq 0)$ into the following five cases by $\beta$ and $\gamma$ :

(5.1) \begin{equation} \begin{aligned}&\text{Case 1} & \beta = 0, \quad \gamma > 0, \\&\text{Case 2} & \beta > 0, \quad \gamma \geq 0, \\&\text{Case 3} & \beta < 0, \quad \gamma > 0, \\&\text{Case 1}^{\prime} & \beta = 0, \quad \gamma = 0, \\&\text{Case 3}^{\prime} & \beta < 0, \quad \gamma = 0. \end{aligned} \end{equation}

We give a necessary and sufficient condition for Kummer diffusions with negative drift to satisfy the condition of Theorem 2.1.

Proposition 5.1. For ${\mathcal{L}}^{(\alpha,\beta,\gamma)} $ $(\alpha > 0, \beta \in {\mathbb{R}}, \ \gamma \geq 0)$ , the condition of Theorem 2.1 holds if and only if one of Cases 1–3 in (5.1) holds.

Proof. Let $\beta > 0$ . Obviously $m^{(\gamma)}(1,\infty) < \infty$ and $s^{(\gamma)}(\infty) = \infty$ . From (4.7) we have

\begin{align*}m^{(\gamma)}(x,\infty) \bigl(s^{(\gamma)}(x) - s^{(\gamma)}(1)\bigr)&\asymp \bigl(x^{-\alpha -2\gamma /\beta}\,{\mathrm{e}}^{-\beta x}\bigr)\bigl(x^{\alpha + 2\gamma /\beta - 1}\,{\mathrm{e}}^{\beta x}\bigr) \\&\asymp 1/x \xrightarrow[x \to \infty]{} 0. \end{align*}

Let $\beta = 0$ . We can easily check $s^{(\gamma)}(\infty) = \infty$ for $\gamma \geq 0$ and

\begin{equation*}\lim_{x \to \infty} m^{(0)}(x,\infty)\bigl(s^{(0)}(x) - s^{(0)}(1)\bigr) = \infty. \end{equation*}

For $\gamma > 0$ , from (4.7) we have

\begin{equation*}m^{(\gamma)}(x,\infty)\bigl(s^{(\gamma)}(x) - s^{(\gamma)}(1)\bigr)\asymp {\mathrm{e}}^{-4\sqrt{\gamma x}} \cdot {\mathrm{e}}^{4\sqrt{\gamma x}} = 1. \end{equation*}

Let $\beta < 0$ . From (4.1) we obtain $s^{(0)}(\infty) < \infty$ . For $\gamma > 0$ , we have from (4.6)

\begin{align*}s^{(\gamma)}(x) - s^{(\gamma)}(1) &\asymp \int_{1}^{x}y^{1 - \alpha - 2\gamma / \beta }\,{\mathrm{e}}^{-\beta y}\,{\mathrm{d}} y \\&\asymp x^{1 - \alpha - 2\gamma / \beta} \,{\mathrm{e}}^{-\beta x} \xrightarrow[x \to \infty]{} \infty. \end{align*}

Similarly, we can show $m^{(\gamma)}(1,x) \asymp x^{-2 + \alpha + 2\gamma / \beta }\,{\mathrm{e}}^{\beta x}$ and thus $m^{(\gamma)}(1,\infty) < \infty$ . Then we have

\begin{equation*}m^{(\gamma)}(x,\infty)\bigl(s^{(\gamma)}(x) - s^{(\gamma)}(1)\bigr) \asymp 1/x \xrightarrow[x \to \infty]{} 0. \end{equation*}

The following is another main result of the present paper. For Kummer diffusions with negative drift, it gives a sufficient condition for an initial distribution under which the conditioned distribution converges to a non-minimal quasi-stationary distribution.

Theorem 5.1. Let $X = Y^{(\gamma)} = Y^{(\alpha,\beta,\gamma)} $ $(\alpha > 0, \ \beta \in {\mathbb{R}}, \ \gamma \geq 0)$ satisfy one of Cases 1–3 in (5.1) and let $\mu \in {\mathcal{P}}(0,\infty)$ . Then the following hold.

  1. (i) If Case 1 holds and $\mu({\mathrm{d}} x) = \rho(x)\,{\mathrm{d}} x$ for some $\rho \in L^1((0,\infty), {\mathrm{d}} x)$ and

    \begin{equation*} \log \rho(x) \sim (\delta - 2\sqrt{\gamma})\sqrt{x} \quad (x \to \infty)\end{equation*}
    for some $0 < \delta < 2\sqrt{\gamma}$ , then we have
    \begin{equation*} \mu_{t} \xrightarrow[t \to \infty]{} \nu_\lambda\end{equation*}
    with $\lambda = \gamma - \delta^2/4 \in \bigl(0,\lambda^{(\gamma)}_0\bigr)$ , where $\lambda_0^{(\gamma)} = \gamma > 0$ is the spectral bottom.
  2. (ii) If Case 2 holds and

    (5.2) \begin{equation} \mu(x,\infty) \sim x^{-\alpha - \gamma / \beta + \delta} \ell(x) \quad (x \to \infty)\end{equation}
    for some $0 < \delta <\alpha + \gamma / \beta$ and some slowly varying function $\ell$ at $\infty$ , then we have
    \begin{equation*} \mu_{t} \xrightarrow[t \to \infty]{} \nu_\lambda\end{equation*}
    with $\lambda = \beta(\alpha - \delta) + \gamma \in \bigl(0,\lambda^{(\gamma)}_0\bigr)$ , where $\lambda^{(\gamma)}_0 = \alpha\beta + \gamma > 0$ is the spectral bottom.
  3. (iii) If Case 3 holds and

    \begin{equation*} \mu(x,\infty) \sim x^{-1 + \gamma / \beta + \delta}\ell(x) \quad (x \to \infty)\end{equation*}
    for some $0 < \delta < 1 - \gamma / \beta$ and some slowly varying function $\ell$ at $\infty$ , then we have
    \begin{equation*} \mu_{t} \xrightarrow[t \to \infty]{} \nu_\lambda\end{equation*}
    with $\lambda = -\beta(1 - \delta) + \gamma \in \bigl(0,\lambda^{(\gamma)}_0\bigr)$ , where $\lambda^{(\gamma)}_0 = -\beta + \gamma > 0$ is the spectral bottom.

The proof of Theorem 5.1 will be given after several preparatory results.

Remark 5.1.

  • When $\alpha = 1/2, \beta = 0$ and $\gamma > 0$ , the process $\sqrt{2 Y^{(1/2,0,\gamma)}}$ is a Brownian motion with negative drift $-\sqrt{2\gamma} t$ . Hence Theorem 5.1(i) gives a generalization of Theorem 1.1.

  • In Theorem 5.1(ii), if $\mu({\mathrm{d}} x) = \rho(x) \,{\mathrm{d}} x$ for $\rho \in L^1((0,\infty), {\mathrm{d}} x)$ and

    \begin{equation*} \rho (x) \sim x^{-\alpha - \gamma /\beta + \delta - 1}\ell(x) \quad (x \to \infty)\end{equation*}
    for a slowly varying function $\ell$ , then (5.2) holds from Karamata’s theorem [Reference Bingham, Goldie and Teugels1, Proposition 1.5.8]. Hence Theorem 5.1(ii) is an extension of Theorem 1.2.

For the process $Y^{(\alpha,\beta,\gamma)}$ , the first hitting uniqueness holds on ${\mathcal{P}}(0,\infty)$ . We show this fact in more general settings as follows.

Theorem 5.2. Let X be a $\frac{{\mathrm{d}}}{{\mathrm{d}} m}\frac{{\mathrm{d}}}{{\mathrm{d}} s}$ -diffusion on $[0, b) ( 0 < b \leq \infty)$ and $s(b) = \infty$ . Suppose the hitting densities $f_x(t)$ of $0$ have the following form:

(5.3) \begin{equation} f_x(t) = u(x)w(t)\,{\mathrm{e}}^{-v(x)y(t)} \quad (0 < x < b, \ t > 0) \end{equation}

for some strictly positive functions $u(x)$ and $v(x)$ on $(0, b)$ and some strictly positive functions $w(t)$ and $y(t)$ on $(0,\infty)$ . In addition, suppose v is strictly increasing and continuous and $y(0,\infty) = (0,\infty)$ . Then the first hitting uniqueness holds on ${\mathcal{P}}(0,\infty)$ .

Proof. Suppose $\mu_1$ and $\mu_2 \in {\mathcal{P}}(I)$ satisfy

\begin{equation*} {\mathbb{P}}_{\mu_1}\![T_0 \in {\mathrm{d}} t] = {\mathbb{P}}_{\mu_2}[T_0 \in {\mathrm{d}} t] \end{equation*}

and set $\mu = \mu_1 - \mu_2$ . We have

(5.4) \begin{equation} \int_{0}^{b}f_x(t)\mu({\mathrm{d}} x) = 0 \quad (t > 0). \end{equation}

Note that from the continuity of $f_x(t) / w(t)$ with respect to t, the equality (5.4) holds for every $t > 0$ . From (5.3) and by a change of variables, we have

\begin{equation*} 0 = \int_{v(0)}^{v(b)}u\bigl(v^{-1}(x)\bigr)\,{\mathrm{e}}^{-xy(t)}\mu\bigl(v^{-1}({\mathrm{d}} x)\bigr). \end{equation*}

Since $y(0,\infty) = (0,\infty)$ , from the uniqueness of the Laplace transform we obtain

\begin{equation*} u(x)\mu({\mathrm{d}} x) = 0 \quad \text{on } (0, b). \end{equation*}

Since $u(x) > 0$ , we obtain the desired result.
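For instance, the $\beta = 0$ case of (4.4) is exactly of the form (5.3):

\begin{equation*} f^{(0)}_{x}(t) = \underbrace{\dfrac{x^{\alpha}}{\Gamma(\alpha)}}_{u(x)} \, \underbrace{t^{-\alpha - 1}}_{w(t)} \, \exp\bigl({-}\underbrace{x}_{v(x)} \, \underbrace{t^{-1}}_{y(t)}\bigr), \end{equation*}

with $v(x) = x$ strictly increasing and continuous and $y(t) = 1/t$ having range $(0,\infty)$ ; by (4.2), the same factorized form persists for every $\gamma \geq 0$ with $u(x) = x^{\alpha}/(\Gamma(\alpha)g_\gamma(x))$ and $w(t) = t^{-\alpha-1}\,{\mathrm{e}}^{-\gamma t}$ .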

We now turn to the proof of Theorem 5.1. For part (i) we need the following lemma, which allows us to restrict the range of integration when determining the asymptotic behavior of the Laplace transform.

Lemma 5.1. Let $f\,:\, (0,\infty) \to [0, \infty)$ and assume

(5.5) \begin{equation}\log f(x) \sim \delta \sqrt{x} \quad (x \to \infty) \end{equation}

for $\delta > 0$ and

\begin{equation*}\int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x < \infty \quad (t > 0). \end{equation*}

Then we have

\begin{equation*}\log \int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x \sim \dfrac{\delta^2}{4}t , \end{equation*}

and for every ${\varepsilon} > 0$

\begin{equation*}\int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x \sim \int_{(\delta^2/4 - {\varepsilon})t^2}^{(\delta^2/4 + {\varepsilon})t^2}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x \quad (t \to \infty). \end{equation*}

Proof. Since we have

\begin{equation*}\lim_{t \to \infty}\int_{0}^{1}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x < \infty \quad \text{and} \quad \lim_{t \to \infty}\int_{1}^{\infty}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x = \infty, \end{equation*}

we may assume without loss of generality that $f(x) = 0$ for $0 < x < 1$ . It is enough to show that

(5.6) \begin{equation}\lim_{t \to \infty}\dfrac{\int_{1}^{(\delta^2/4 - {\varepsilon})t^2}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x}{\int_{1}^{\infty}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x} = 0 \end{equation}

and

(5.7) \begin{equation}\lim_{t \to \infty}\dfrac{\int_{(\delta^2/4 + {\varepsilon})t^2}^{\infty}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x}{\int_{1}^{\infty}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x} = 0. \end{equation}

Let

\begin{equation*}h(x) = \dfrac{\log\!(x^2f(x))}{\sqrt{x}} - \delta \quad (x > 1). \end{equation*}

Then from (5.5) we have $\lim_{x \to \infty}h(x) = 0$ . It follows that

\begin{equation*}\int_{1}^{\infty}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x = \int_{1}^{\infty}\,{\mathrm{e}}^{-\varphi_t(x)}\dfrac{{\mathrm{d}} x}{x^2}, \end{equation*}

where

\begin{equation*}\varphi_t(x) = x/t - (\delta + h(x))\sqrt{x} . \end{equation*}

Note that

\begin{equation*}\varphi_t(x) = \dfrac{1}{t}\biggl( \sqrt{x} - \dfrac{\delta + h(x)}{2}t \biggr)^2 - \dfrac{(\delta + h(x))^2}{4}t. \end{equation*}

Let $\theta \,:\!=\, \delta / 2 - \sqrt{\delta^2 / 4 - {\varepsilon}} > 0$ and take $R > 1$ so that

\begin{equation*}|h(x)| < \theta \quad \text{and} \quad \dfrac{2\delta |h (x)| + h(x)^2}{4} < \theta^2 / 8 \quad (x > R). \end{equation*}

Then for $R < x < (\delta^2/4 - {\varepsilon}) t^2$ it follows that

\begin{equation*}\dfrac{\delta + h(x)}{2}t - \sqrt{x}> \dfrac{\delta + h(x)}{2}t - t\sqrt{\delta^2/4 - {\varepsilon}}> \dfrac{\theta}{2}t \end{equation*}

and thus

\begin{equation*}\varphi_t(x) \geq (\theta^2 / 8 - \delta^2 / 4)t. \end{equation*}

Then it follows that

\begin{align*}\int_{R}^{(\delta^2/4 - {\varepsilon}) t^2}\,{\mathrm{e}}^{-\varphi_t(x)}\dfrac{{\mathrm{d}} x}{x^2}&\leq {\mathrm{e}}^{( \delta^2 /4 - \theta^2 / 8)t}\int_{R}^{(\delta^2/4 - {\varepsilon}) t^2}\dfrac{{\mathrm{d}} x}{x^2} \\&\leq {\mathrm{e}}^{(\delta^2 /4 - \theta^2 / 8 )t}. \end{align*}

To show (5.6), it is enough to show

(5.8) \begin{equation}\log \int_{1}^{\infty}\,{\mathrm{e}}^{-x/t}f(x)\,{\mathrm{d}} x \sim \dfrac{\delta^2}{4}t \quad (t \to \infty). \end{equation}

From [Reference Bingham, Goldie and Teugels1, Theorem 4.12.10(ii)], we have

\begin{equation*}\log \int_{0}^{x}f(y)\,{\mathrm{d}} y \sim \delta \sqrt{x} \quad (x \to \infty). \end{equation*}

From Kohlbecker’s Tauberian Theorem [Reference Bingham, Goldie and Teugels1, Theorem 4.12.1], we therefore obtain (5.8). We can show (5.7) by a similar argument.
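Lemma 5.1 can be illustrated numerically with $f(x) = {\mathrm{e}}^{\sqrt{x}}$ , i.e. $\delta = 1$ (Python/scipy assumed; the finite upper cut-off and the breakpoint at the saddle $x = t^2/4$ are ad hoc choices for the quadrature, not part of the argument):

```python
import math
from scipy.integrate import quad

def log_laplace(t):
    # log of integral_0^inf e^{-x/t} e^{sqrt(x)} dx; the integrand peaks at x = t^2/4,
    # so we pass that point as a quadrature breakpoint and cut off at 10 t^2
    val, _ = quad(lambda x: math.exp(-x/t + math.sqrt(x)), 0, 10*t*t, points=[t*t/4])
    return math.log(val)

for t in [100.0, 500.0, 2500.0]:
    print(t, log_laplace(t) / (t/4))  # decreases toward 1, i.e. log of the integral ~ (delta^2/4) t
```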

Now we proceed to the proof of Theorem 5.1.

Proof of Theorem 5.1. First we show (i). From Proposition 3.1 and Theorem 4.1, it is enough to show that

\begin{equation*}\lim_{t \to \infty}\dfrac{{\mathrm{d}}}{{\mathrm{d}} t} \log \int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}\dfrac{x^{\alpha/2}}{K_\alpha (2\sqrt{\gamma x})}\mu({\mathrm{d}} x) = \delta^2/4. \end{equation*}

From (4.7) we have

(5.9) \begin{equation}\log \tilde{\rho}(x) \,:\!=\, \log \dfrac{x^{\alpha/2}\rho(x)}{K_\alpha (2\sqrt{\gamma x})} \sim \delta \sqrt{x} \quad (x \to \infty). \end{equation}

Take ${\varepsilon} > 0$ . Since

\begin{equation*}\dfrac{{\mathrm{d}}}{{\mathrm{d}} t} \log \int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}\dfrac{x^{\alpha/2}}{K_\alpha (2\sqrt{\gamma x})}\mu({\mathrm{d}} x)= \dfrac{\int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}x\tilde{\rho}(x)\,{\mathrm{d}} x}{t^2 \int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}\tilde{\rho}(x)\,{\mathrm{d}} x}, \end{equation*}

we have from (5.9) and Lemma 5.1

\begin{equation*}\dfrac{\int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}x\tilde{\rho}(x)\,{\mathrm{d}} x}{t^2 \int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}\tilde{\rho}(x)\,{\mathrm{d}} x}\sim \dfrac{\int_{(\delta^2/4 - {\varepsilon})t^2}^{(\delta^2/4 + {\varepsilon})t^2}\,{\mathrm{e}}^{-x/t}x\tilde{\rho}(x)\,{\mathrm{d}} x}{t^2 \int_{(\delta^2/4 - {\varepsilon})t^2}^{(\delta^2/4 + {\varepsilon})t^2}\,{\mathrm{e}}^{-x/t}\tilde{\rho}(x)\,{\mathrm{d}} x} , \end{equation*}

and obviously we have

\begin{equation*}\int_{(\delta^2/4 - {\varepsilon})t^2}^{(\delta^2/4 + {\varepsilon})t^2}\,{\mathrm{e}}^{-x/t}x\tilde{\rho}(x)\,{\mathrm{d}} x\lesseqgtr (\delta^2/4 \pm {\varepsilon})t^2 \int_{(\delta^2/4 - {\varepsilon})t^2}^{(\delta^2/4 + {\varepsilon})t^2}\,{\mathrm{e}}^{-x/t}\tilde{\rho}(x)\,{\mathrm{d}} x. \end{equation*}

Since ${\varepsilon} > 0$ can be arbitrarily small, we obtain

\begin{equation*}\dfrac{\int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}x\tilde{\rho}(x)\,{\mathrm{d}} x}{t^2 \int_{0}^{\infty}\,{\mathrm{e}}^{-x/t}\tilde{\rho}(x)\,{\mathrm{d}} x} \xrightarrow[t \to \infty]{} \delta^2/4. \end{equation*}

Next we show (ii). From the proof of Proposition 3.1, it is enough to show that the function $f_\mu(\!\log t)$ varies regularly at $\infty$ with exponent $-\lambda$ . From Theorem 4.1, we have

\begin{equation*}f_\mu(\!\log t) = \dfrac{1}{\Gamma(\alpha)}t^{\beta -\gamma}h(t)^{1 + \alpha}\int_{0}^{\infty}\dfrac{x^\alpha}{g_\gamma(x)}\,{\mathrm{e}}^{-h(t)x}\mu({\mathrm{d}} x), \end{equation*}

where

\begin{equation*}h(t) = \dfrac{\beta}{t^{\beta} - 1} \quad (t > 1). \end{equation*}

The inverse function $h^{-1}$ of h is given by

\begin{equation*}h^{-1}(s) = \biggl(1 + \dfrac{\beta}{s} \biggr)^{1/\beta} \quad (s > 0). \end{equation*}

Note that the function $h^{-1}(s)$ varies regularly at $s = 0$ with exponent $-1/\beta$ . By considering the function $f_\mu(\!\log h^{-1}(s))$ , it follows that the function $f_\mu(\!\log t)$ varies regularly at $t = \infty$ with exponent $-\lambda$ if and only if the function

\begin{equation*}\int_{0}^{\infty}\dfrac{x^\alpha}{g_\gamma(x)}\,{\mathrm{e}}^{-sx}\mu({\mathrm{d}} x) \end{equation*}

varies regularly at $s = 0$ with exponent $-\alpha - (\gamma -\lambda)/\beta$ . From Karamata’s Tauberian Theorem [Reference Bingham, Goldie and Teugels1, Theorem 1.7.1], it is equivalent to the function

\begin{equation*}\int_{0}^{x}\dfrac{y^{\alpha}}{g_\gamma(y)}\mu({\mathrm{d}} y) \end{equation*}

varying regularly at $x = \infty$ with exponent $\alpha + (\gamma - \lambda) / \beta$ . Then from (4.7) and [Reference Bingham, Goldie and Teugels1, Theorem 1.6.4], it is equivalent to the function $\mu(x,\infty)$ varying regularly at $x = \infty$ with exponent $-\lambda / \beta$ , and therefore we obtain (ii).

Finally, we show (iii). The proof of this case is quite similar to that of (ii). From Theorem 4.1, we have

\begin{equation*} f_\mu(\!\log t) = \dfrac{1}{\Gamma(\alpha)}t^{\beta -\gamma}h(t)^{1 + \alpha}\int_{0}^{\infty}\dfrac{x^\alpha}{g_\gamma(x)}\,{\mathrm{e}}^{-h(t)x}\mu({\mathrm{d}} x). \end{equation*}

Note that for $\beta < 0$ we obtain $\lim_{t \to \infty}h(t) = -\beta$ . Then the function $f_\mu(\!\log t)$ varies regularly at $t = \infty$ with exponent $-\lambda$ if and only if the function

(5.10) \begin{equation}\int_{0}^{\infty}\dfrac{x^\alpha}{g_\gamma(x)}\,{\mathrm{e}}^{-h(t)x}\mu({\mathrm{d}} x)= \dfrac{({-}\beta)^{-\alpha}\Gamma(\alpha)}{\Gamma(1 - \gamma / \beta)}\int_{0}^{\infty}\dfrac{{\mathrm{e}}^{-(h(t) + \beta)x}}{U(1-\gamma/\beta, \alpha + 1;\ -\beta x)}\mu({\mathrm{d}} x) \end{equation}

varies regularly at $t = \infty$ with exponent $-\lambda -\beta + \gamma$ . Note that the function $h^{-1}(s)$ varies regularly at $s = -\beta + 0$ with exponent $-1/\beta$ . Thus, by denoting $u = s + \beta$ , the regular variation at $t = \infty$ of (5.10) with exponent $-\lambda - \beta + \gamma$ is equivalent to that at $u = 0$ of

\begin{equation*}\int_{0}^{\infty}\dfrac{{\mathrm{e}}^{-ux}}{U(1-\gamma/\beta, \alpha + 1;\ {-}\beta x)}\mu({\mathrm{d}} x) \end{equation*}

with exponent $1 + (\lambda - \gamma) / \beta$ . Using (4.7), the rest of the proof follows by the same argument as in (ii), and hence we omit it. The proof is complete.

Acknowledgements

The author would like to thank Kouji Yano, who read an early draft of this paper and gave him valuable comments. Thanks to him, the present paper was significantly improved. The author would also like to thank Toshiro Watanabe, who suggested the counterexample given in Remark 3.1. This work was supported by the Research Institute for Mathematical Sciences, an International Joint Usage/Research Center located in Kyoto University, and was carried out under the ISM Cooperative Research Program (2020-ISMCRP-5013).

Funding information

This work was supported by JSPS Open Partnership Joint Research Projects grant no. JPJSBP120209921, and JSPS KAKENHI grant no. JP21J11000.

Competing interests

There were no competing interests to declare that arose during the preparation or publication of this article.

References

[1] Bingham, N. H., Goldie, C. M. and Teugels, J. L. (1987). Regular Variation (Encyclopedia of Mathematics and its Applications 27). Cambridge University Press, Cambridge.
[2] Borodin, A. N. and Salminen, P. (2002). Handbook of Brownian Motion: Facts and Formulae, 2nd edn (Probability and its Applications). Birkhäuser, Basel.
[3] Cattiaux, P., Collet, P., Lambert, A., Martínez, S., Méléard, S. and Martín, J. S. (2009). Quasi-stationary distributions and diffusion models in population dynamics. Ann. Prob. 37, 1926–1969.
[4] Coddington, E. A. and Levinson, N. (1955). Theory of Ordinary Differential Equations. McGraw-Hill, New York, Toronto and London.
[5] Collet, P., Martínez, S. and Martín, J. S. (1995). Asymptotic laws for one-dimensional diffusions conditioned to nonabsorption. Ann. Prob. 23, 1300–1314.
[6] Collet, P., Martínez, S. and Martín, J. S. (2013). Quasi-Stationary Distributions: Markov Chains, Diffusions and Dynamical Systems (Probability and its Applications). Springer, Heidelberg.
[7] Göing-Jaeschke, A. and Yor, M. (2003). A survey and some generalizations of Bessel processes. Bernoulli 9, 313–349.
[8] Hening, A. and Kolb, M. (2019). Quasistationary distributions for one-dimensional diffusions with singular boundary points. Stoch. Process. Appl. 129, 1659–1696.
[9] Itô, K. (2006). Essentials of Stochastic Processes (Translations of Mathematical Monographs 231). American Mathematical Society, Providence, RI.
[10] Kolb, M. and Steinsaltz, D. (2012). Quasilimiting behavior for one-dimensional diffusions with killing. Ann. Prob. 40, 162–212.
[11] Kotani, S. (2007). Krein's strings with singular left boundary. Rep. Math. Phys. 59, 305–316.
[12] Kotani, S. and Watanabe, S. (1982). Krein's spectral theory of strings and generalized diffusion processes. In Functional Analysis in Markov Processes (Katata/Kyoto 1981) (Lecture Notes Math. 923), pp. 235–259. Springer, Berlin and New York.
[13] Lamperti, J. (1958). An occupation time theorem for a class of stochastic processes. Trans. Amer. Math. Soc. 88, 380–387.
[14] Littin, J. (2012). Uniqueness of quasistationary distributions and discrete spectra when $\infty$ is an entrance boundary and 0 is singular. J. Appl. Prob. 49, 719–730.
[15] Lladser, M. and Martín, J. S. (2000). Domain of attraction of the quasi-stationary distributions for the Ornstein–Uhlenbeck process. J. Appl. Prob. 37, 511–520.
[16] Magnus, W., Oberhettinger, F. and Soni, R. P. (1966). Formulas and Theorems for the Special Functions of Mathematical Physics, 3rd edn (Grundlehren der mathematischen Wissenschaften 52). Springer, New York.
[17] Mandl, P. (1961). Spectral theory of semi-groups connected with diffusion processes and its application. Czechoslovak Math. J. 11, 558–569.
[18] Martínez, S. and Martín, J. S. (2001). Rates of decay and h-processes for one dimensional diffusions conditioned on non-absorption. J. Theoret. Prob. 14, 199–212.
[19] Martínez, S., Picco, P. and Martín, J. S. (1998). Domain of attraction of quasi-stationary distributions for the Brownian motion with drift. Adv. Appl. Prob. 30, 385–408.
[20] McKean, H. P. Jr (1956). Elementary solutions for certain parabolic partial differential equations. Trans. Amer. Math. Soc. 82, 519–548.
[21] Rogers, L. C. G. (1984). A diffusion first passage problem. In Seminar on Stochastic Processes, 1983 (Gainesville, Fla., 1983) (Progress Prob. Statist. 7), pp. 151–160. Birkhäuser, Boston.
[22] Rogers, L. C. G. and Williams, D. (2000). Diffusions, Markov Processes, and Martingales, Vol. 2, Itô Calculus (Cambridge Mathematical Library), 2nd edn. Cambridge University Press, Cambridge.
[23] Takeda, M. (2019). Existence and uniqueness of quasi-stationary distributions for symmetric Markov processes with tightness property. J. Theoret. Prob. 32, 2006–2019.
[24] Takemura, T. and Tomisaki, M. (2012). h transform of one-dimensional generalized diffusion operators. Kyushu J. Math. 66, 171–191.
[25] Yamato, K. (2021). Existence of Laplace transforms of the spectral measures for one-dimensional diffusions with an exit boundary. Infinitely divisible processes and related topics (25), The Institute of Statistical Mathematics Cooperative Research Report 5559.
[26] Yano, K. (2006). Excursion measure away from an exit boundary of one-dimensional diffusion processes. Publ. Res. Inst. Math. Sci. 42, 837–878.