
Yaglom limit for stochastic fluid models

Published online by Cambridge University Press:  08 October 2021

Nigel G. Bean*
Affiliation:
University of Adelaide
Małgorzata M. O’Reilly*
Affiliation:
University of Tasmania
Zbigniew Palmowski*
Affiliation:
Wrocław University of Science and Technology
*Postal address: Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers. School of Mathematical Sciences, University of Adelaide, SA 5005 Australia. Email: nigel.bean@adelaide.edu.au
**Postal address: Australian Research Council Centre of Excellence for Mathematical and Statistical Frontiers. Discipline of Mathematics, University of Tasmania, Hobart TAS 7001, Australia. Email: malgorzata.oreilly@utas.edu.au
***Postal address: Faculty of Pure and Applied Mathematics, Wrocław University of Science and Technology, ul. Wybrzeże Wyspiańskiego 27, 50-370 Wrocław, Poland. Email: zbigniew.palmowski@pwr.edu.pl

Abstract

In this paper we analyse the limiting conditional distribution (Yaglom limit) for stochastic fluid models (SFMs), a key class of models in the theory of matrix-analytic methods. So far, only transient and stationary analyses of SFMs have been considered in the literature. The limiting conditional distribution gives useful insights into what happens when the process has been evolving for a long time, given that its busy period has not ended yet. We derive expressions for the Yaglom limit in terms of the singularity $s^*$ such that the key matrix of the SFM, ${\boldsymbol{\Psi}}(s)$, is finite (exists) for all $s\geq s^*$ and infinite for $s<s^*$. We show the uniqueness of the Yaglom limit and illustrate the application of the theory with simple examples.

Type
Original Article
Copyright
© The Author(s) 2021. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Let $\{ (\varphi(t)):t\geq 0\}$ be an irreducible, positive-recurrent, continuous-time Markov chain (CTMC) with some finite state space $\mathcal{S}=\{1,2,\ldots ,n\}$ and infinitesimal generator T. Let $\{(\varphi(t),X(t)):t\geq 0\}$ be a Markovian stochastic fluid model (SFM) [Reference Ahn and Ramaswami2, Reference Ahn and Ramaswami3, Reference Ahn and Ramaswami4, Reference Asmussen7, Reference Bean, O’Reilly and Taylor15, Reference Bean, O’Reilly and Taylor16, Reference Bean, O’Reilly and Taylor17, Reference Ramaswami43, Reference Ramaswami44], with phase variable $\varphi(t)\in\mathcal{S}$, level variable $X(t)\geq 0$, and constant rates $c_i\in\mathbb{R}$, for all $i\in\mathcal{S}$. The model assumes that when $\varphi(t)=i$ and $X(t)>0$, the rate at which the level is changing is $c_i$, and when $\varphi(t)=i$ and $X(t)=0$, the rate at which the level is changing is $\max\{0,c_i\}$. Therefore, we refer to the CTMC $\{ (\varphi(t)):t\geq 0\}$ as the process that is driving (or modulating) the SFM $\{(\varphi(t),X(t)):t\geq 0\}$.
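As a quick illustration of these dynamics, the following sketch (with hypothetical three-phase parameters, not taken from the paper) simulates a single path of $(\varphi(t),X(t))$: the phase jumps according to T, the level moves linearly at rate $c_{\varphi(t)}$, and the floor at zero reproduces the boundary rate $\max\{0,c_i\}$.

```python
import random

# Hypothetical 3-phase example (not from the paper): phase 1 fills at
# rate 1, phase 2 drains at rate 2, phase 3 is neutral; T holds the
# off-diagonal jump rates of the driving CTMC.
c = {1: 1.0, 2: -2.0, 3: 0.0}
T = {1: {2: 1.0, 3: 1.0},
     2: {1: 1.0, 3: 1.0},
     3: {1: 1.0, 2: 1.0}}

def simulate_sfm(phase, level, horizon, seed=1):
    """Return (phi(t), X(t)) at t = horizon, starting from (phase, level)."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        rates = T[phase]
        total = sum(rates.values())
        hold = rng.expovariate(total)      # sojourn time in the current phase
        dt = min(hold, horizon - t)
        # The level moves at rate c[phase]; the max with 0 keeps the level
        # at zero while the drift is negative, matching max{0, c_i}.
        level = max(0.0, level + c[phase] * dt)
        t += dt
        if t >= horizon:
            return phase, level
        u = rng.random() * total           # sample the next phase
        for j, r in rates.items():
            u -= r
            if u <= 0.0:
                phase = j
                break

ph, x = simulate_sfm(phase=1, level=0.0, horizon=50.0)
```

Only the endpoint of each linear piece is recorded here, which is all that is needed for the distributional quantities studied below.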

SFMs are a key class of models in the theory of matrix-analytic methods [Reference He29, Reference Latouche and Ramaswami36, Reference Latouche37], which comprises methodologies for the analysis of Markov chains and Markovian-modulated models that lead to efficient algorithms for numerical computation.

Let $\mathcal{S}_1=\{i\in\mathcal{S}:c_i>0\}$, $\mathcal{S}_2=\{i\in\mathcal{S}:c_i<0\}$, $\mathcal{S}_0=\{i\in\mathcal{S}:c_i=0\}$, and partition the generator as

\begin{align}{\textbf{T}}=\left[\begin{array}{c@{\quad}c@{\quad}c}{\textbf{T}}_{11} & {\textbf{T}}_{12} & {\textbf{T}}_{10}\\[4pt]{\textbf{T}}_{21} & {\textbf{T}}_{22} & {\textbf{T}}_{20}\\[4pt]{\textbf{T}}_{01} & {\textbf{T}}_{02} & {\textbf{T}}_{00}\end{array}\right],\end{align}

according to $\mathcal{S}=\mathcal{S}_1\cup\mathcal{S}_2\cup\mathcal{S}_0$.

We assume that the process is stable; that is,

(1) \begin{eqnarray}\mu = \sum_{i\in\mathcal{S}} c_i\xi_i < 0,\end{eqnarray}

where $\boldsymbol{\xi}=[\xi_i]_{i\in\mathcal{S}}$ is the stationary distribution vector of the Markov chain $\{(\varphi(t)):t\geq 0\}$.
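For a concrete (hypothetical) choice of T and rates $c_i$, the stability condition (1) can be checked directly: solve $\boldsymbol{\xi}{\textbf{T}}={\boldsymbol{0}}$ with $\boldsymbol{\xi}{\boldsymbol{1}}=1$ and compute the mean drift $\mu$.

```python
import numpy as np

# Hypothetical generator and rates (toy parameters, not from the paper).
T = np.array([[-2.0,  1.0,  1.0],
              [ 1.0, -2.0,  1.0],
              [ 1.0,  1.0, -2.0]])
c = np.array([1.0, -2.0, 0.0])

# Stationary distribution: xi T = 0 with xi summing to 1; solve the
# overdetermined system [T^T; 1^T] xi = [0; 1] in the least-squares sense.
A = np.vstack([T.T, np.ones(3)])
b = np.array([0.0, 0.0, 0.0, 1.0])
xi = np.linalg.lstsq(A, b, rcond=None)[0]

mu = float(c @ xi)   # mean drift in (1); stability requires mu < 0
```

Here $\boldsymbol{\xi}=(1/3,1/3,1/3)$ and $\mu=-1/3<0$, so this toy process is stable.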

So far, the analysis of SFMs has focused on the transient and stationary behaviour. In this paper, we are interested in the behaviour of the process conditional on absorption not having taken place, where absorption means that the busy period of the process has ended; that is, we condition on the process not having hit level zero as yet. For $x\geq 0$, let $\theta(x)=\inf\{t>0:X(t)=x\}$ be the first time at which the process reaches level x. To this end, we define the following quantity, referred to as the Yaglom limit.

Definition 1. Define the matrix $\boldsymbol{\mu}(dy)^{(x)}=[\mu(dy)^{(x)}_{ij}]_{i,j\in\mathcal{S}}$, $x,y> 0$, such that

(2) \begin{equation}\mu(dy)^{(x)}_{ij}=\lim_{t\to\infty}\mathbb{P}(X(t)\in dy,\varphi(t)=j\ |\ \theta(0)>t,X(0)=x,\varphi(0)=i),\end{equation}

and the matrix $\boldsymbol{\mu}(dy)^{(0)}=[\mu(dy)^{(0)}_{ij}]_{i\in\mathcal{S}_1,j\in\mathcal{S}}$, $y> 0$, such that

(3) \begin{equation}\mu(dy)^{(0)}_{ij}=\lim_{t\to\infty}\mathbb{P}(X(t)\in dy,\varphi(t)=j\ |\ \theta(0)>t,X(0)=0,\varphi(0)=i),\end{equation}

whenever the limit exists. We refer to $\mu(dy)^{(x)}_{ij}$ as the limiting conditional distribution (Yaglom limit) of observing the process in level y and phase j, given that the process started from level x in phase i at time zero and has been evolving without hitting level zero.

Remark 1. In general, for Markov processes there are no known sufficient conditions under which the Yaglom limit or a quasi-stationary distribution exists. Usually, the existence of the Yaglom limit is proved case by case. Here, we prove that it exists for our model.

We partition $\boldsymbol{\mu}(dy)^{(x)}$, $x>0$, according to $(\mathcal{S}_1\cup\mathcal{S}_2\cup\mathcal{S}_0) \times (\mathcal{S}_1\cup\mathcal{S}_2\cup\mathcal{S}_0)$ as

(4) \begin{equation}\boldsymbol{\mu}(dy)^{(x)}=\left[\begin{array}{c@{\quad}c@{\quad}c}\boldsymbol{\mu}(dy)^{(x)}_{11}&\boldsymbol{\mu}(dy)^{(x)}_{12}&\boldsymbol{\mu}(dy)^{(x)}_{10}\\[8pt] \boldsymbol{\mu}(dy)^{(x)}_{21}&\boldsymbol{\mu}(dy)^{(x)}_{22}&\boldsymbol{\mu}(dy)^{(x)}_{20}\\[8pt] \boldsymbol{\mu}(dy)^{(x)}_{01}&\boldsymbol{\mu}(dy)^{(x)}_{02}&\boldsymbol{\mu}(dy)^{(x)}_{00}\\[4pt]\end{array}\right],\end{equation}

and partition its row sums accordingly, as

(5) \begin{equation}\boldsymbol{\mu}(dy)^{(x)}{\boldsymbol{1}}=\left[\begin{array}{c}\boldsymbol{\mu}(dy)^{(x)}_{1}\\[4pt]\boldsymbol{\mu}(dy)^{(x)}_{2}\\[4pt]\boldsymbol{\mu}(dy)^{(x)}_{0}\\[4pt]\end{array}\right],\end{equation}

where ${\boldsymbol{1}}$ denotes a vector of ones of appropriate size, so that $\boldsymbol{\mu}(dy)^{(x)}_{1}=\boldsymbol{\mu}(dy)^{(x)}_{11}{\boldsymbol{1}}+\boldsymbol{\mu}(dy)^{(x)}_{12}{\boldsymbol{1}}+\boldsymbol{\mu}(dy)^{(x)}_{10}{\boldsymbol{1}}$, and so on.

We partition $\boldsymbol{\mu}(dy)^{(0)}$ according to $\mathcal{S}_1\times(\mathcal{S}_1\cup\mathcal{S}_2\cup\mathcal{S}_0)$ as

(6) \begin{equation}\boldsymbol{\mu}(dy)^{(0)}=\left[\begin{array}{c@{\quad}c@{\quad}c}\boldsymbol{\mu}(dy)^{(0)}_{11}&\boldsymbol{\mu}(dy)^{(0)}_{12}&\boldsymbol{\mu}(dy)^{(0)}_{10}\\[4pt]\end{array}\right],\end{equation}

and let

(7) \begin{equation}\boldsymbol{\mu}(dy)^{(0)}_{1}=\boldsymbol{\mu}(dy)^{(0)}{\boldsymbol{1}}.\end{equation}

This paper is the first analysis of the Yaglom limit of SFMs. We derive expressions for the Yaglom limit, show its uniqueness, and illustrate the theory with simple examples. The Yaglom limit concerns some Markov process X(t) and some almost surely finite absorption time $\theta$ (usually the first exit time from some set, such as a positive half-line), and is defined by

(8) \begin{equation}\mu(dy)=\lim_{t\rightarrow +\infty} \mathbb{P}(X(t)\in dy\ |\ \theta >t).\end{equation}

It describes the state of the Markov system, conditioned on its having survived the killing induced by $\theta$ for a very long time. The Yaglom limit is strongly related to the so-called quasi-stationary distribution, which satisfies

(9) \begin{equation}\mathbb{P}_\mu(X(t)\in dy|\theta >t)=\mu(dy);\end{equation}

see for example [Reference Darroch and Seneta21]. In particular, the Yaglom limit $\mu$ (if it exists) is necessarily quasi-stationary, but it may be difficult to show its uniqueness [Reference Asmussen8, Section 3]. In other words, there may be several quasi-stationary laws, of which the Yaglom limit is one. It may also be the case that a quasi-stationary distribution exists but the Yaglom limit is not well-defined.

A related class of models in the theory of matrix-analytic methods is that of quasi-birth-and-death processes (QBDs) [Reference Latouche and Ramaswami36], in which the level variable is discrete. The quasi-stationary analysis of QBDs has been provided in [Reference Bean10, Reference Bean, Pollett and Taylor11, Reference Bean, Pollett and Taylor12], along with several examples of areas of applications, which are relevant here as well, due to the similar application potential of QBDs and SFMs [Reference Bean and O’Reilly13].

Information on quasi-stationary distributions for other Markov processes can be found in the classical works of Seneta and Vere-Jones [Reference Seneta and Vere-Jones45], Tweedie [Reference Tweedie46], and Jacka and Roberts [Reference Jacka and Roberts32]. The bibliographic database of Pollett [Reference Pollett42] gives a detailed history of quasi-stationary distributions. In particular, Yaglom [Reference Yaglom48] was the first to explicitly identify quasi-stationary distributions for the subcritical Bienaymé–Galton–Watson branching process. Some of the results on quasi-stationary distributions concern Markov chains on the positive integers with an absorbing state at the origin [Reference Collet, Martnez and San Martn20, Reference Ferrari, Kesten, Martínez and Picco23, Reference Flaspohler and Holmes25, Reference Seneta and Vere-Jones45, Reference Van Doorn47, Reference Zhang, Li and Song49]. Other objects of study are the extinction probabilities for continuous-time branching processes and the Fleming–Viot processes [Reference Asselah, Ferrari, Groisman and Jonckheere9, Reference Ferrari and Marić24, Reference Lambert35]. A separate topic is that of Lévy processes exiting from the positive half-line or a cone. Here the case of the Brownian motion with drift was resolved by Martínez and San Martín [Reference Martnez and San Martn40], complementing the result for random walks obtained by Iglehart [Reference Iglehart31]. The case of more general Lévy processes was studied in [Reference Bogdan, Palmowski and Wang19, Reference Kyprianou and Palmowski33, Reference Kyprianou34, Reference Mandjes, Palmowski and Rolski39]. One-dimensional self-similar processes, including the symmetric $\alpha$-stable Lévy process, were the subject of interest in [Reference Haas and Rivero28].

The rest of the paper is structured as follows. In Section 2 we define the Laplace–Stieltjes transforms (LSTs) which form the key building blocks of the analysis, and in Section 3 we outline the approach based on the Heaviside principle. The key results of this paper are contained in Section 4. To illustrate the theory we construct a simple example with scalar parameters, which we analyse throughout the paper, as we introduce the theory. In Section 5 we analyse another example, with matrix parameters, where we provide some numerical output as well.

2. The Laplace–Stieltjes transforms

Note that by Definition 1, for $x\geq 0$,

(10) \begin{eqnarray}\mu(dy)^{(x)}_{ij}&\,=\,&\lim_{t\to\infty}\mathbb{P}(X(t)\in dy,\varphi(t)=j\ |\ \theta(0)>t,X(0)=x,\varphi(0)=i)\nonumber\\[4pt] &\,=\,&\lim_{t\to\infty}\frac{\mathbb{P}(X(t)\in dy,\varphi(t)=j,\theta(0)>t\ |\ X(0)=x,\varphi(0)=i)}{\mathbb{P}(\theta(0)>t\ |\ X(0)=x,\varphi(0)=i)}.\end{eqnarray}

For all $x\geq 0$ and $y>0$, define the matrix ${\textbf{E}}(dy)^{(x)}(s)=[E(dy)^{(x)}_{ij}(s)]_{i,j\in\mathcal{S}}$ and the vector ${\textbf{E}}^{(x)}(s)=[E^{(x)}_i(s)]_{i\in\mathcal{S}}$, which record the corresponding Laplace–Stieltjes transforms (LSTs),

(11) \begin{eqnarray}E(dy)^{(x)}_{ij}(s)&\,=\,&\int_0^\infty \mathbb{E}(e^{-s t} {\boldsymbol{1}}\{X(t)\in dy,\varphi(t)=j,\theta(0)>t\}\ |\ X(0)=x,\varphi(0)=i) dt,\nonumber\\[4pt]E^{(x)}_i(s)&\,=\,&\int_0^\infty \mathbb{E}(e^{-s t} {\boldsymbol{1}}\{\theta(0)>t\}\ |\ X(0)=x,\varphi(0)=i)dt,\end{eqnarray}

where ${\boldsymbol{1}}\{\cdot\}$ denotes an indicator function. We have

(12) \begin{eqnarray}{\textbf{E}}^{(x)}(s)&\,=\,&\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(x)}(s){\boldsymbol{1}}.\end{eqnarray}

We partition ${\textbf{E}}(dy)^{(x)}(s)$, $x>0$, according to $\mathcal{S}\times \mathcal{S}$ for $\mathcal{S}=\mathcal{S}_1\cup\mathcal{S}_2\cup\mathcal{S}_0$ as

(13) \begin{equation}{\textbf{E}}(dy)^{(x)}(s)=\left[\begin{array}{c@{\quad}c@{\quad}c}{\textbf{E}}(dy)^{(x)}(s)_{11}&{\textbf{E}}(dy)^{(x)}(s)_{12}&{\textbf{E}}(dy)^{(x)}(s)_{10}\\[4pt]{\textbf{E}}(dy)^{(x)}(s)_{21}&{\textbf{E}}(dy)^{(x)}(s)_{22}&{\textbf{E}}(dy)^{(x)}(s)_{20}\\[4pt]{\textbf{E}}(dy)^{(x)}(s)_{01}&{\textbf{E}}(dy)^{(x)}(s)_{02}&{\textbf{E}}(dy)^{(x)}(s)_{00}\\[4pt]\end{array}\right],\end{equation}

and ${\textbf{E}}^{(x)}(s)$, $x>0$, as

(14) \begin{equation}{\textbf{E}}^{(x)}(s)=\left[\begin{array}{c}{\textbf{E}}^{(x)}(s)_1\\[4pt]{\textbf{E}}^{(x)}(s)_2\\[4pt]{\textbf{E}}^{(x)}(s)_0\end{array}\right].\end{equation}

We partition ${\textbf{E}}(dy)^{(0)}(s)$ according to $\mathcal{S}_1\times(\mathcal{S}_1\cup\mathcal{S}_2\cup\mathcal{S}_0)$ as

(15) \begin{equation}{\textbf{E}}(dy)^{(0)}(s)=\big[\begin{array}{c@{\quad}c@{\quad}c}{\textbf{E}}(dy)^{(0)}(s)_{11}&{\textbf{E}}(dy)^{(0)}(s)_{12}&{\textbf{E}}(dy)^{(0)}(s)_{10}\\[4pt]\end{array}\big]\end{equation}

and let

(16) \begin{equation}{\textbf{E}}(dy)^{(0)}(s)_{1}={\textbf{E}}(dy)^{(0)}(s){\boldsymbol{1}}.\end{equation}

Define ${\textbf{C}}_1=\textrm{diag}(c_i)_{i\in\mathcal{S}_1}$, ${\textbf{C}}_2=\textrm{diag}(|c_i|)_{i\in\mathcal{S}_2}$, and let ${\textbf{Q}}(s)$ be the key fluid generator matrix introduced in [Reference Bean, O’Reilly and Taylor16],

(17) \begin{equation}{\textbf{Q}}(s)=\left[\begin{array}{c@{\quad}c}{\textbf{Q}}_{11}(s) & {\textbf{Q}}_{12}(s) \\[4pt]{\textbf{Q}}_{21}(s) & {\textbf{Q}}_{22}(s)\end{array}\right],\end{equation}

where the block matrices are given by

(18) \begin{eqnarray}{\textbf{Q}}_{22}(s)&\,=\,&{\textbf{C}}^{-1}_{2}\left({\textbf{T}}_{22}-s{\textbf{I}}-{\textbf{T}}_{20}({\textbf{T}}_{00}-s{\textbf{I}})^{-1}{\textbf{T}}_{02}\right),\nonumber\\[4pt]{\textbf{Q}}_{11}(s)&\,=\,&{\textbf{C}}^{-1}_{1}\left({\textbf{T}}_{11}-s{\textbf{I}}-{\textbf{T}}_{10}({\textbf{T}}_{00}-s{\textbf{I}})^{-1}{\textbf{T}}_{01}\right),\nonumber\\[4pt]{\textbf{Q}}_{12}(s)&\,=\,&{\textbf{C}}^{-1}_{1}\left({\textbf{T}}_{12}-{\textbf{T}}_{10}({\textbf{T}}_{00}-s{\textbf{I}})^{-1}{\textbf{T}}_{02}\right),\nonumber\\[4pt]{\textbf{Q}}_{21}(s)&\,=\,&{\textbf{C}}^{-1}_{2}\left({\textbf{T}}_{21}-{\textbf{T}}_{20}({\textbf{T}}_{00}-s{\textbf{I}})^{-1}{\textbf{T}}_{01}\right).\end{eqnarray}

Here, ${\textbf{Q}}(s)$ exists for all real s such that $({\textbf{T}}_{00}-s{\textbf{I}})^{-1}=\int_{t=0}^{\infty}e^{-st}e^{{\textbf{T}}_{00}t}dt<\infty$, or for all real s when $\mathcal{S}_0=\emptyset$.
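The blocks in (18) transcribe directly into code. The sketch below uses a hypothetical three-phase example with $\mathcal{S}_1=\{1\}$, $\mathcal{S}_2=\{2\}$, $\mathcal{S}_0=\{3\}$ and rates $c=(1,-2,0)$ (toy parameters, not from the paper); a convenient sanity check is that at $s=0$ the assembled ${\textbf{Q}}(0)$ is a generator, so its row sums vanish.

```python
import numpy as np

# Hypothetical partitioned generator: S1 = {1}, S2 = {2}, S0 = {3},
# rates c_1 = 1, c_2 = -2, c_3 = 0, so C1 = [1] and C2 = [2].
T11, T12, T10 = np.array([[-2.0]]), np.array([[1.0]]), np.array([[1.0]])
T21, T22, T20 = np.array([[1.0]]), np.array([[-2.0]]), np.array([[1.0]])
T01, T02, T00 = np.array([[1.0]]), np.array([[1.0]]), np.array([[-2.0]])
C1inv, C2inv = np.array([[1.0]]), np.array([[0.5]])

def Q(s):
    """Blocks of the fluid generator Q(s) in (18); needs (T00 - s I)^{-1} finite."""
    R = np.linalg.inv(T00 - s * np.eye(T00.shape[0]))  # censors the zero-rate phases
    I1 = np.eye(T11.shape[0])
    I2 = np.eye(T22.shape[0])
    Q11 = C1inv @ (T11 - s * I1 - T10 @ R @ T01)
    Q12 = C1inv @ (T12 - T10 @ R @ T02)
    Q21 = C2inv @ (T21 - T20 @ R @ T01)
    Q22 = C2inv @ (T22 - s * I2 - T20 @ R @ T02)
    return Q11, Q12, Q21, Q22

Q11_0, Q12_0, Q21_0, Q22_0 = Q(0.0)   # at s = 0, Q(0) has zero row sums
```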

Also, let ${\boldsymbol{\Psi}}(s)$ be the key matrix for SFMs [Reference Bean, O’Reilly and Taylor16] such that, for all $i\in\mathcal{S}_1, j\in\mathcal{S}_2$,

(19) \begin{equation}[{\boldsymbol{\Psi}}(s)]_{ij}=\mathbb{E}(e^{-s\theta( 0)} {\boldsymbol{1}}\{\theta(0)<\infty,\varphi(\theta(0))=j\}\ |\ \varphi(0)=i,X(0)=0)\end{equation}

is the LST of the time of first return to level zero, with the return occurring in phase j, given a start from level zero in phase i. Let $\boldsymbol{\psi}(t)$ be the corresponding density, so that ${\boldsymbol{\Psi}}(s)=\int_{t=0}^{\infty}e^{-st}\boldsymbol{\psi}(t)dt$. Clearly, ${\boldsymbol{\Psi}}(s)>0$ for all real s for which ${\boldsymbol{\Psi}}(s)$ is finite.

Define matrices

(20) \begin{eqnarray} {\textbf{K}}(s)&\,=\,&{\textbf{Q}}_{11}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s),\nonumber\\[4pt]{\textbf{D}}(s)&\,=\,&{\textbf{Q}}_{22}(s)+{\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s),\end{eqnarray}

and note that, by the assumed stability of the process, the spectra of ${\textbf{K}}(s)$ and $({-}{\textbf{D}}(s))$ are disjoint for $s\geq 0$ by [Reference Bean, O’Reilly and Taylor15, Reference Bean, O’Reilly and Taylor16, Reference Bean, O’Reilly and Taylor17]; that is, $sp({\textbf{K}}(s))\cap sp({-}{\textbf{D}}(s))=\varnothing$.

We extend the result in [Reference Bean, O’Reilly and Taylor16, Equation (23)] from $Re(s)\geq 0$ to all real s for which ${\boldsymbol{\Psi}}(s)$ is finite.

Lemma 1. For all real s such that ${\boldsymbol{\Psi}}(s)$ is finite, the matrix ${\boldsymbol{\Psi}}(s)$ is a solution of the Riccati equation,

(21) \begin{equation}{\textbf{Q}}_{12}(s)+{\textbf{Q}}_{11}(s){\textbf{X}}+{\textbf{X}}{\textbf{Q}}_{22}(s)+{\textbf{X}}{\textbf{Q}}_{21}(s){\textbf{X}}={\boldsymbol{0}}.\end{equation}

Proof. Suppose s is real and ${\boldsymbol{\Psi}}(s)<\infty$. Then, by [Reference Bean, O’Reilly and Taylor16, Theorem 1] and [Reference Bean, O’Reilly and Taylor17, Algorithm 1 of Section 3.1],

(22) \begin{equation}\infty> {\boldsymbol{\Psi}}(s)=\int_{y=0}^{\infty}e^{{\textbf{Q}}_{11}(s)y}({\textbf{Q}}_{12}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s))e^{{\textbf{Q}}_{22}(s)y}dy.\end{equation}

Then, letting

(23) \begin{eqnarray}{\boldsymbol{\Psi}}(s,y)&\,=\,& e^{{\textbf{Q}}_{11}(s)y}({\textbf{Q}}_{12}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s)) e^{{\textbf{Q}}_{22}(s)y},\end{eqnarray}

we have

(24) \begin{eqnarray}\frac{\partial}{\partial y} {\boldsymbol{\Psi}}(s,y) &\,=\,&{\textbf{Q}}_{11}(s){\boldsymbol{\Psi}}(s,y)+{\boldsymbol{\Psi}}(s,y){\textbf{Q}}_{22}(s),\end{eqnarray}
(25) \begin{eqnarray}{\boldsymbol{\Psi}}(s,0)&\,=\,&{\textbf{Q}}_{12}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s),\,\,\end{eqnarray}
(26) \begin{eqnarray}\lim_{y\to\infty}{\boldsymbol{\Psi}}(s,y)&\,=\,&{\boldsymbol{0}},\qquad\qquad\qquad\qquad\qquad\qquad\end{eqnarray}

and so

(27) \begin{eqnarray}{\textbf{Q}}_{11}(s){\boldsymbol{\Psi}}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{22}(s)&\,=\,&\int_{y=0}^{\infty}\frac{\partial}{\partial y}\left( {\boldsymbol{\Psi}}(s,y) \right)dy\nonumber\\[8pt] &\,=\,&\lim_{y\to\infty}{\boldsymbol{\Psi}}(s,y)-{\boldsymbol{\Psi}}(s,0)\end{eqnarray}

and

(28) \begin{equation}{\boldsymbol{0}}={\textbf{Q}}_{12}(s)+{\textbf{Q}}_{11}(s){\boldsymbol{\Psi}}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{22}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s),\end{equation}

which implies that ${\boldsymbol{\Psi}}(s)$ is a solution of (21).
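The proof suggests a simple fixed-point scheme for computing ${\boldsymbol{\Psi}}(s)$ numerically: by (28), iterate ${\textbf{Q}}_{11}(s){\textbf{X}}_{k+1}+{\textbf{X}}_{k+1}{\textbf{Q}}_{22}(s)=-({\textbf{Q}}_{12}(s)+{\textbf{X}}_k{\textbf{Q}}_{21}(s){\textbf{X}}_k)$, solving a Sylvester equation at each step. The sketch below is a hedged illustration of this idea (not the algorithm of [Reference Bean, O’Reilly and Taylor17]), using hypothetical scalar blocks from a stable three-phase toy example at $s=0$, where ${\boldsymbol{\Psi}}(0)$ should be stochastic.

```python
import numpy as np

# Hypothetical scalar blocks of Q(0) for a stable 3-phase toy example
# with rates c = (1, -2, 0); here Psi(0) should equal 1 (return is certain).
Q11 = np.array([[-1.5]])
Q12 = np.array([[ 1.5]])
Q21 = np.array([[ 0.75]])
Q22 = np.array([[-0.75]])

def solve_sylvester(A, B, C):
    """Solve A X + X B = C via vec: (I (x) A + B^T (x) I) vec(X) = vec(C)."""
    n, m = A.shape[0], B.shape[0]
    M = np.kron(np.eye(m), A) + np.kron(B.T, np.eye(n))
    return np.linalg.solve(M, C.flatten("F")).reshape((n, m), order="F")

# Fixed-point iteration suggested by (28), started from X = 0; in this
# example it converges to the minimal nonnegative solution of (21).
X = np.zeros_like(Q12)
for _ in range(200):
    X = solve_sylvester(Q11, Q22, -(Q12 + X @ Q21 @ X))

# Residual of the Riccati equation (21); it vanishes at X = Psi(s).
residual = Q12 + Q11 @ X + X @ Q22 + X @ Q21 @ X
```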

Below, we state expressions for ${\textbf{E}}(dy)^{(0)}(s)$ derived in [Reference Ahn and Ramaswami3, Theorem 3.1.1] and [Reference Bean and O’Reilly14, Theorem 2].

Lemma 2. We have

(29) \begin{eqnarray}{\textbf{E}}(dy)^{(0)}(s)_{11}&\,=\,&e^{{\textbf{K}}(s)y}{\textbf{C}}_1^{-1}dy,\end{eqnarray}
(30) \begin{eqnarray}{\textbf{E}}(dy)^{(0)}(s)_{12}&\,=\,&e^{{\textbf{K}}(s)y}{\boldsymbol{\Psi}}(s){\textbf{C}}_2^{-1}dy,\end{eqnarray}
(31) \begin{eqnarray}{\textbf{E}}(dy)^{(0)}(s)_{10}&\,=\,&\left[\begin{array}{c@{\quad}c}{\textbf{E}}(dy)^{(0)}(s)_{11}&{\textbf{E}}(dy)^{(0)}(s)_{12}\end{array}\right]\left[\begin{array}{c}{\textbf{T}}_{10}\\[4pt]{\textbf{T}}_{20}\end{array}\right] \times({-}({{\textbf{T}}}_{00}-s{\textbf{I}})^{-1})dy.\end{eqnarray}

In the next lemma, we derive expressions for ${\textbf{E}}(dy)^{(x)}(s)$, $x>0$. The formal proof of this lemma was already given by Ahn and Ramaswami [Reference Ahn and Ramaswami5]. However, since the lemma is crucial to all of the remaining analysis, we include its proof for completeness.

Lemma 3. For $x>0$ we have

(32) \begin{eqnarray}{\textbf{E}}(dy)^{(x)}(s)_{21}&\,=\,&\int_{z=0}^{\min\{x,y\}}e^{{\textbf{D}}(s)(x-z)}{\textbf{Q}}_{21}(s)e^{{\textbf{K}}(s)(y-z)}{\textbf{C}}_1^{-1}dzdy,\nonumber\\[7pt] {\textbf{E}}(dy)^{(x)}(s)_{22}&\,=\,&{\textbf{E}}(dy)^{(x)}(s)_{21}{\textbf{C}}_1{\boldsymbol{\Psi}}(s){\textbf{C}}_2^{-1}+e^{{\textbf{D}}(s)(x-y)}{\textbf{C}}_2^{-1}{\boldsymbol{1}}\{y<x\}dy,\nonumber\\[7pt] {\textbf{E}}(dy)^{(x)}(s)_{20}&\,=\,&\left[\begin{array}{c@{\quad}c}{\textbf{E}}(dy)^{(x)}(s)_{21}&{\textbf{E}}(dy)^{(x)}(s)_{22}\end{array}\right]\left[\begin{array}{c}{\textbf{T}}_{10}\\[7pt] {\textbf{T}}_{20}\end{array}\right] \times({-}({{\textbf{T}}}_{00}-s{\textbf{I}})^{-1})dy,\nonumber\\[7pt] {\textbf{E}}(dy)^{(x)}(s)_{11}&\,=\,&{\boldsymbol{\Psi}}(s){\textbf{E}}(dy)^{(x)}(s)_{21}+e^{{\textbf{K}}(s)(y-x)}{\textbf{C}}_1^{-1}{\boldsymbol{1}}\{y>x\}dy,\nonumber\\[7pt]{\textbf{E}}(dy)^{(x)}(s)_{12}&\,=\,&{\boldsymbol{\Psi}}(s){\textbf{E}}(dy)^{(x)}(s)_{22}+e^{{\textbf{K}}(s)(y-x)}{\boldsymbol{\Psi}}(s){\textbf{C}}_2^{-1}{\boldsymbol{1}}\{y> x\}dy,\nonumber\\[7pt]{\textbf{E}}(dy)^{(x)}(s)_{10}&\,=\,&\left[\begin{array}{c@{\quad}c}{\textbf{E}}(dy)^{(x)}(s)_{11}&{\textbf{E}}(dy)^{(x)}(s)_{12}\end{array}\right]\left[\begin{array}{c}{\textbf{T}}_{10}\\[7pt]{\textbf{T}}_{20}\end{array}\right] \times({-}({{\textbf{T}}}_{00}-s{\textbf{I}})^{-1})dy.\end{eqnarray}

Proof. The expressions for ${\textbf{E}}(dy)^{(x)}(s)_{10}$ and ${\textbf{E}}(dy)^{(x)}(s)_{20}$ follow from the argument in the proof of Lemma 2. Furthermore, by partitioning the sample paths, since the process may visit level y after returning to level x first, or without hitting level x at all, we have

(33) \begin{eqnarray}{\textbf{E}}(dy)^{(x)}(s)_{11}&\,=\,&{\boldsymbol{\Psi}}(s){\textbf{E}}(dy)^{(x)}(s)_{21}+{\textbf{E}}(d(y-x))^{(0)}(s)_{11}{\boldsymbol{1}}\{y>x\}dy,\nonumber\\[4pt]{\textbf{E}}(dy)^{(x)}(s)_{12}&\,=\,&{\boldsymbol{\Psi}}(s){\textbf{E}}(dy)^{(x)}(s)_{22}+{\textbf{E}}(d(y-x))^{(0)}(s)_{12}{\boldsymbol{1}}\{y>x\}dy,\end{eqnarray}

and so the expressions for ${\textbf{E}}(dy)^{(x)}(s)_{11}$ and ${\textbf{E}}(dy)^{(x)}(s)_{12}$ follow from Lemma 2.

Next, we consider ${\textbf{E}}(dy)^{(x)}(s)_{22}$. For $x>0$, define a matrix ${\textbf{G}}(x,t)=[G(x,t)_{kj}]$ such that for $k,j\in\mathcal{S}$,

(34) \begin{equation}G(x,t)_{kj}=\mathbb{P}(\theta(0)\leq t,\varphi(\theta(0))=j\ |\ X(0)=x,\varphi(0)=k)\end{equation}

is the probability that, given that the process starts from level x in phase k, it first hits level zero by time t, and does so in phase j. We partition ${\textbf{G}}(x,t)$ according to $\mathcal{S}_1\cup\mathcal{S}_2$ as

(35) \begin{equation}{\textbf{G}}(x,t)=\left[\begin{array}{c@{\quad}c}{\boldsymbol{0}}&{\textbf{G}}(x,t)_{12}\\[4pt]{\boldsymbol{0}}&{\textbf{G}}(x,t)_{22}\end{array}\right].\end{equation}

Also, define $\widetilde{\textbf{G}}(x,s)=\int_{t=0}^{\infty}e^{-s t}d{\textbf{G}}(x,t)$, which we partition in an analogous manner.

The expression for ${\textbf{E}}(dy)^{(x)}(s)_{22}$ then follows from partitioning the sample paths. The process can visit level y in some phase in $\mathcal{S}_2$ directly after a visit to level y in some phase in $\mathcal{S}_1$, or without visiting level y in some phase in $\mathcal{S}_1$ at all, and so we take the sum of expressions corresponding to these two possibilities, which gives

(36) \begin{eqnarray}{\textbf{E}}(dy)^{(x)}(s)_{22} = {\textbf{E}}(dy)^{(x)}(s)_{21}{\textbf{C}}_1{\boldsymbol{\Psi}}(s){\textbf{C}}_2^{-1}+\widetilde{\textbf{G}}(x-y,s){\textbf{C}}_2^{-1}{\boldsymbol{1}}\{y<x\}dy.\end{eqnarray}

The result follows since by [Reference Bean, O’Reilly and Taylor16], $\widetilde{\textbf{G}}(x-y,s)=e^{{\textbf{D}}(s)(x-y)}$.

Finally, we consider ${\textbf{E}}(dy)^{(x)}(s)_{21}$. Define

(37) \begin{equation}\underline{X}(t)=\inf_{u\in[0,t]}\{X(u)\}.\end{equation}

Note that, given that the process starts with $X(0)=x$, $\varphi(0)=i$, for the process to end with $X(t)\in dy$, $\varphi(t)=j$, with a taboo $\theta(0)>t$, one of the following two alternatives must hold.

The first alternative is that $y\geq x$. In this case, the following occurs:

  • First, given $X(0)=x$, $\varphi(0)=i$, the process must reach some infimum $\underline{X}(t)=z\in(0,x]$ at some time $u\in[0,t]$, in some phase in $\mathcal{S}_2$, with the corresponding density recorded by the matrix ${\textbf{G}}_{22}(x-z,u)$. This is followed by an instantaneous transition to some phase k in $\mathcal{S}_1$ according to the rate recorded by the block matrix ${\textbf{Q}}_{21}$ of the fluid generator Q, by the physical interpretation of Q in [Reference Bean, O’Reilly and Taylor16]. The corresponding density of this occurring is therefore $[{\textbf{G}}_{22}(x-z,u){\textbf{Q}}_{21}]_{ik}$.

  • Next, starting from level z in phase k at time u, the process must remain above level z during the time interval [u,t], ending in some level y in phase j at time t. The corresponding density of this occurring is $[\boldsymbol{\phi}(y-z,t-u)]_{kj}$.

Consequently, the LST of this alternative is

\begin{eqnarray*}&&{\int_{z=0}^x\int_{t=0}^{\infty}\int_{u=0}^te^{-s t}{\textbf{G}}_{22}(x-z,u){\textbf{Q}}_{21}(s)\boldsymbol{\phi}(y-z,t-u)_{11}dudtdz}\\[4pt] &&\,=\,\int_{z=0}^x\int_{u=0}^{\infty}\int_{t=u}^{\infty}e^{-s u}{\textbf{G}}_{22}(x-z,u){\textbf{Q}}_{21}(s)e^{-s (t-u)}\boldsymbol{\phi}(y-z,t-u)_{11}dtdudz\\[4pt] &&\,=\,\int_{z=0}^x\left(\int_{u=0}^{\infty}e^{-s u}{\textbf{G}}_{22}(x-z,u)du\right){\textbf{Q}}_{21}(s)\left(\int_{t=0}^{\infty}e^{-s t}\boldsymbol{\phi}(y-z,t)_{11}dt\right)dz\\[4pt] &&\,=\,\int_{z=0}^xe^{{\textbf{D}}(s)(x-z)}{\textbf{Q}}_{21}(s)e^{{\textbf{K}}(s)(y-z)}dz.\end{eqnarray*}

The second alternative is that $y<x$. The LST of this alternative, by an argument similar to the above, is

\begin{eqnarray*}\int_{z=0}^ye^{{\textbf{D}}(s)(x-z)}{\textbf{Q}}_{21}(s)e^{{\textbf{K}}(s)(y-z)}dz.\end{eqnarray*}

Taking the sum of the expressions corresponding to the two alternatives and right-multiplying by ${\textbf{C}}_1^{-1}$ results in the integral expression for ${\textbf{E}}(dy)^{(x)}(s)_{21}$.

Remark 2. Consider

(38) \begin{equation} {\textbf{E}}^{(x)}(s)_{21} =\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(x)}(s)_{21}={\textbf{X}}{\textbf{C}}_1^{-1}, \end{equation}

where ${\textbf{X}}=\int_{y=0}^{\infty}{\textbf{X}}(y)dy$, and

(39) \begin{eqnarray} {\textbf{X}}(y)&\,=\,& \int_{z=0}^{\min\{x,y\}} e^{{\textbf{D}}(s)(x-z)}{\textbf{Q}}_{21}(s)e^{{\textbf{K}}(s)(y-z)}dz. \end{eqnarray}

Then, by integration by parts in (39), ${\textbf{X}}(y)$ is the solution of

(40) \begin{eqnarray} {\textbf{D}}(s){\textbf{X}}(y)+{\textbf{X}}(y){\textbf{K}}(s) &\,=\,&-\left[ e^{{\textbf{D}}(s)(x-z)}{\textbf{Q}}_{21}(s)e^{{\textbf{K}}(s)(y-z)} \right]_{z=0}^{\min\{x,y\}}, \end{eqnarray}

and by integrating (40), X is the solution of

(41) \begin{eqnarray} {\textbf{D}}(s){\textbf{X}}+{\textbf{X}}{\textbf{K}}(s)&\,=\,&e^{{\textbf{D}}(s)(x)}{\textbf{Q}}_{21}(s)({-}{\textbf{K}}(s)^{-1})-({-}{\textbf{D}}(s)^{-1}){\textbf{Q}}_{21}(s)\nonumber\\[4pt] &&+\,({-}{\textbf{D}}(s)^{-1})e^{{\textbf{D}}(s)(x)}{\textbf{Q}}_{21}(s)+{\textbf{Q}}_{21}(s){\textbf{K}}(s)^{-1}. \end{eqnarray}
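The identity (41) is easy to verify numerically in the scalar case. The sketch below uses hypothetical values $D(s)=-0.7$, $K(s)=-1.75$, ${\textbf{Q}}_{21}(s)=0.7$ and $x=2$ (chosen so that both exponentials decay): it computes ${\textbf{X}}=\int_{y=0}^{\infty}{\textbf{X}}(y)dy$ by quadrature over y, with the inner z-integral of (39) done in closed form, and checks (41).

```python
import math

# Hypothetical scalar values of D(s), K(s), Q21(s) (both D and K negative).
D, K, Q21, x = -0.7, -1.75, 0.7, 2.0

def X_quad(n=20000, ymax=40.0):
    """Midpoint quadrature of X = int_0^inf int_0^{min(x,y)} e^{D(x-z)} Q21 e^{K(y-z)} dz dy,
    with the inner z-integral evaluated in closed form."""
    dy = ymax / n
    total = 0.0
    for i in range(n):
        y = (i + 0.5) * dy
        zmax = min(x, y)
        inner = math.exp(D * x + K * y) * (math.exp(-(D + K) * zmax) - 1.0) / (-(D + K))
        total += Q21 * inner * dy
    return total

X = X_quad()
lhs = D * X + X * K                      # left-hand side of (41)
rhs = (math.exp(D * x) * Q21 * (-1.0 / K) - (-1.0 / D) * Q21
       + (-1.0 / D) * math.exp(D * x) * Q21 + Q21 * (1.0 / K))
```

In the matrix case one would instead solve the Sylvester equation (41) for X directly.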

3. Approach

The key idea is to write each of ${\textbf{E}}(dy)^{(x)}(s)$ and ${\textbf{E}}^{(x)}(s)$ in the form

(42) \begin{equation}\tilde f(s)=\tilde f(s^*) - C (s-s^*)^{1/2}+o((s-s^*)^{1/2}),\end{equation}

and then apply the Heaviside principle in order to evaluate (10). In this section, we summarise the relevant mathematical background required for this analysis.

Consider a function $f:\mathbb{R}\to\mathbb{R}$. Let $\tilde f(s):=\int_0^\infty e^{-s x}f(x)\ dx$ for $s\in \mathbb{R}$ be its Laplace transform. Consider the singularities of $\tilde{f}(s)$. We assume that the singularity with the largest strictly negative real part is real, and we denote it by $s^*<0$. Notice that this implies that $\int_0^\infty|f(x)|\ dx$ is finite. The inversion formula reads

(43) \begin{equation}f(x)=\frac{1}{2\pi {\textrm{i}}}\int_{a-{\textrm{i}} \infty}^{a+{\textrm{i}}\infty}\tilde{f}(s) e^{sx}\ ds\end{equation}

for some (and then any) $a>s^*$.

We now focus on a class of theorems that infer the tail behaviour of a function from its Laplace transform, commonly referred to as Tauberian theorems. Importantly, the behaviour of the Laplace transform around the singularity $s^*$ plays a crucial role here. The following heuristic principle given in [Reference Abate and Whitt1] is often relied upon. Suppose that for $s^*$, some constants K and C, and a non-integer $q > 0$,

(44) \begin{equation}\tilde{f}(s)=K - C(s-s^*)^q+o((s-s^*)^q) \qquad\mbox{as $s\downarrow s^*$.}\end{equation}

Then

(45) \begin{equation}f(x)=\frac{C}{\Gamma({-}q)} x^{-q-1}e^{s^* x}(1+o(1)) \qquad \mbox{as $x\to\infty$,}\end{equation}

where $\Gamma(\cdot)$ is the gamma function. Below we specify conditions under which this relation can be rigorously proven. Later in our paper we apply it in the specific case that $q=1/2$; recall that $\Gamma({-}1/2)=-2\sqrt{\pi}$.

A formal justification of the above relation can be found in Doetsch [Reference Doetsch22, Theorem 37.1]. Following Miyazawa and Rolski [Reference Miyazawa and Rolski41], we consider the following specific form. For this we first recall the concept of the $\mathfrak{W}$-contour with a half-angle of opening $\pi/2<\psi\le \pi$, as depicted in [Reference Doetsch22, Figure 30, p. 240]; also, ${\mathscr G}_{\alpha}(\psi)$ is the region between the contour $\mathfrak W$ and the line $\Re(z)=0$. More precisely,

(46) \begin{equation}{\mathscr G}_{\alpha}(\psi) \equiv \{z \in \mathbb{C}; \Re(z) < 0, z \ne \alpha, |\arg (z - \alpha)| < \psi \},\end{equation}

where $\arg z$ is the principal value of the argument of the complex number z. In the following theorem, conditions are identified such that the above principle holds; we refer to this as Heaviside’s operational principle, or simply the Heaviside principle.

Theorem 1. (Heaviside principle) Suppose that for $\tilde f:\mathbb{C}\to \mathbb{C}$ and $s^* < 0$ the following three conditions hold:

  1. (A1) $\tilde f(\cdot)$ is analytic in a region ${\mathscr G}_{s^*}(\psi)$ for some $\pi/2<\psi\le \pi$;

  2. (A2) $\tilde f(s) \to 0$ as $|s| \to \infty$ with $s \in {\mathscr G}_{s^*}(\psi)$;

  3. (A3) for some constants K and C, and a non-integer $q>0$,

    (47) \begin{eqnarray} \tilde f(s)=K - C (s-s^*)^{q}+o((s-s^*)^{q}), \end{eqnarray}
    where ${\mathscr G}_{s^*}(\psi)\ni s \to s^*$.

Then

\begin{eqnarray*} f(x)=\frac{C}{\Gamma({-}q)} x^{-q-1}e^{s^* x}(1+o(1)) \end{eqnarray*}

as $x\to\infty$.
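As an illustration of this principle on a case where everything is explicit (not from the paper), consider the first-passage time of a Brownian motion with unit drift to level 1. Its density is $f(t)=(2\pi t^3)^{-1/2}e^{-(1-t)^2/(2t)}$, and its Laplace transform $e^{1-\sqrt{1+2s}}$ has a square-root branch point at $s^*=-1/2$, with expansion $e\big(1-\sqrt{2}\,(s-s^*)^{1/2}\big)+o((s-s^*)^{1/2})$, so $q=1/2$ and $C=\sqrt{2}\,e$.

```python
import math

# First-passage density of unit-drift Brownian motion to level 1 and its
# Heaviside-type asymptotic: s* = -1/2, q = 1/2, C = sqrt(2) * e.
s_star = -0.5
C = math.sqrt(2.0) * math.e

def f(t):
    return math.exp(-(1.0 - t) ** 2 / (2.0 * t)) / math.sqrt(2.0 * math.pi * t ** 3)

def heaviside_asym(t):
    # coefficient C / (2 sqrt(pi)) = |C / Gamma(-1/2)|, using Gamma(-1/2) = -2 sqrt(pi)
    return C / (2.0 * math.sqrt(math.pi)) * t ** (-1.5) * math.exp(s_star * t)

ratio = f(200.0) / heaviside_asym(200.0)   # tends to 1 as t grows
```

In this example the ratio equals $e^{-1/(2t)}$ exactly, so the quality of the approximation can be read off by hand.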

We now discuss when the assumption (A1) is satisfied. To check that the Laplace transform $\tilde{f}(\cdot)$ is analytic in the region $\mathscr G_{s^*}(\psi)$, we can use the concept of semiexponentiality of f (see [Reference Henrici30, p. 314]).

Definition 2. (Semiexponentiality) f is said to be semiexponential if, for some $0< \phi\le\pi/2$ and all $-\phi\le {\vartheta}\le \phi$, there exists a finite and strictly negative $\gamma({\vartheta})$, defined as the infimum of all a such that

\[\left|f(e^{{\textrm{i}}{\vartheta}}r)\right|<e^{ar}\]

for all sufficiently large r.

Relying on this concept, the following sufficient condition for (A1) applies.

Proposition 1. [Reference Henrici30, Theorem 10.9f] Suppose that f is semiexponential with $\gamma({\vartheta})$ fulfilling the following conditions: (i) $\gamma=\gamma(0)<0$, (ii) $\gamma({\vartheta})\geq\gamma(0)$ in a neighborhood of ${\vartheta}=0$, and (iii) $\gamma({\vartheta})$ is smooth. Then (A1) is satisfied.

Note that by Lemma 3, all assumptions of Proposition 1 are satisfied, and we can apply the Heaviside principle given in Theorem 1 for ${\textbf{E}}(dy)^{(x)}(s)$ and ${\textbf{E}}^{(x)}(s)$.

4. Application of the Heaviside principle

By Section 2, ${\textbf{E}}(dy)^{(x)}(s)$ and ${\textbf{E}}^{(x)}(s)$ are expressed in terms of ${\textbf{Q}}(s)$ and ${\boldsymbol{\Psi}}(s)$, and so we derive the expansion around $s^*$ for each of them first.

Consider ${\boldsymbol{\Psi}}(s)$ as defined in (19). We have ${\boldsymbol{\Psi}}(s)=\int_{t=0}^{\infty}e^{-st}\boldsymbol{\psi}(t)dt<\infty$ for all $s\geq 0$ by [Reference Bean, O’Reilly and Taylor16, Reference Bean, O’Reilly and Taylor17]. Define the singularity

(48) \begin{eqnarray}s^*&\,=\,&\max\{s\leq 0:{\boldsymbol{\Psi}}(s)<\infty, {\boldsymbol{\Psi}}(z)=\infty \mbox{ for all }z < s\},\end{eqnarray}

where the existence of $s^*$ follows from [Reference Doetsch22, Theorem 3.3, p. 15].

Consider matrices ${\textbf{K}}(s)$ and ${\textbf{D}}(s)$ as defined in (20), and recall that $sp({\textbf{K}}(s))\cap sp({-}{\textbf{D}}(s))=\varnothing$ for all $s\geq 0$. Define

(49) \begin{eqnarray}\delta^*&\,=\,&\max\{s\in[s^*,0):sp({\textbf{K}}(s))\cap sp({-}{\textbf{D}}(s))\not=\varnothing\},\end{eqnarray}

whenever the maximum exists. The definition implies that ${\textbf{K}}(\delta^*)$ and $({-}{\textbf{D}}(\delta^*))$ have a common eigenvalue.

Lemma 4. We have $s^*=\delta^*$.

Proof. Consider Equation (21), and for all s for which ${\textbf{Q}}(s)$ exists, define the following function of ${\textbf{X}}=[x_{ij}]_{i\in\mathcal{S}_1,j\in\mathcal{S}_2}$:

(50) \begin{equation} g_s({\textbf{X}})={\textbf{Q}}_{12}(s)+{\textbf{Q}}_{11}(s){\textbf{X}}+{\textbf{X}}{\textbf{Q}}_{22}(s)+{\textbf{X}}{\textbf{Q}}_{21}(s){\textbf{X}}. \end{equation}

For ${\textbf{X}}\geq {\boldsymbol{0}}$, ${\textbf{X}}\not= {\boldsymbol{0}}$, we have

(51) \begin{eqnarray} \frac{d}{ds}g_s({\textbf{X}})&\,=\,& -{\textbf{C}}^{-1}_1{\textbf{T}}_{10}({\textbf{T}}_{00}-s{\textbf{I}})^{-2}{\textbf{T}}_{02} -{\textbf{C}}^{-1}_1\left({\textbf{I}}+{\textbf{T}}_{10}({\textbf{T}}_{00}-s{\textbf{I}})^{-2}{\textbf{T}}_{01}\right) {\textbf{X}} \nonumber\\[4pt] && -{\textbf{X}} {\textbf{C}}^{-1}_2\left({\textbf{I}}+{\textbf{T}}_{20}({\textbf{T}}_{00}-s{\textbf{I}})^{-2}{\textbf{T}}_{02}\right) - {\textbf{X}} {\textbf{C}}^{-1}_2{\textbf{T}}_{20}({\textbf{T}}_{00}-s{\textbf{I}})^{-2}{\textbf{T}}_{01} {\textbf{X}} \nonumber\\[4pt] &<&{\boldsymbol{0}}, \end{eqnarray}

since $({\textbf{T}}_{00}-s{\textbf{I}})^{-2}=\left(\int_{t=0}^{\infty}e^{-st}e^{{\textbf{T}}_{00}t}dt\right)^2>{\boldsymbol{0}}$, and so $g_s({\textbf{X}})$ is a decreasing function of s.

Also, define the functions $g_s^{(u,v)}({\textbf{X}})=[g_s({\textbf{X}})]_{uv}$, for $u\in\mathcal{S}_1$, $v\in\mathcal{S}_2$, by

(52) \begin{eqnarray} g_s^{(u,v)}({\textbf{X}})&\,=\,&[{\textbf{Q}}_{12}(s)]_{u,v} +\sum_{k\in\mathcal{S}_1} [{\textbf{Q}}_{11}(s)]_{uk}x_{kv} +\sum_{\ell\in\mathcal{S}_2} x_{u\ell}[{\textbf{Q}}_{22}(s)]_{\ell v} +\sum_{k\in\mathcal{S}_2,\ell\in\mathcal{S}_1} x_{uk}[{\textbf{Q}}_{21}(s)]_{k\ell}x_{\ell v};\nonumber\\ \end{eqnarray}

each of these corresponds to an $|\mathcal{S}_1|\times|\mathcal{S}_2|$-dimensional smooth quadratic surface. The matrix equation (21) is equivalent to the system of $|\mathcal{S}_1|\times|\mathcal{S}_2|$ quadratic polynomial equations given by

(53) \begin{eqnarray} g_s^{(u,v)}({\textbf{X}})&\,=\,&0 \quad \mbox{ for all }u\in\mathcal{S}_1,v\in\mathcal{S}_2, \end{eqnarray}

each corresponding to the (u,v)th level curve.

Now, by Lemma 1, for all $s\geq s^*$, ${\boldsymbol{\Psi}}(s)$ is a solution of $g_{s}({\textbf{X}})={\boldsymbol{0}}$ and so is an intersection point of all level curves (53).

Some other solutions to $g_{s}({\textbf{X}})={\boldsymbol{0}}$ may exist. For all real s, we denote by ${\textbf{X}}(s)$ the family of solutions that correspond to the intersection point ${\boldsymbol{\Psi}}(s)$. That is, when $s\geq s^*$, ${\textbf{X}}(s)={\boldsymbol{\Psi}}(s)$, and if ${\textbf{X}}(s)$ exists for $s<s^*$ in some neighbourhood of $s^*$, then ${\textbf{X}}(s)$ must be a continuous function of s in this neighbourhood, by the monotonicity and continuity of $g_s({\textbf{X}})$.

So suppose that there exist solutions ${\textbf{X}}(s)$ to $g_{s}({\textbf{X}})={\boldsymbol{0}}$ for $s<s^*$ in some neighbourhood of $s^*$, and that $\lim_{s\uparrow s^*}{\textbf{X}}(s)={\boldsymbol{\Psi}}(s^*)$. Then, since ${\boldsymbol{\Psi}}(s^*)>{\boldsymbol{0}}$, there exists ${\textbf{W}}>{\boldsymbol{0}}$ with $g_{s}({\textbf{W}})={\boldsymbol{0}}$ for some $s<s^*$ with $sp({\textbf{Q}}_{11}(s))\cap sp({-}{\textbf{Q}}_{22}(s))=\varnothing$ (since the spectra $sp({\textbf{Q}}_{11}(s))$ and $sp({-}{\textbf{Q}}_{22}(s))$ are discrete).

Therefore, by [Reference Guo27, Theorem 2.3] and [Reference Bean, O’Reilly and Taylor17, Algorithm 1], we have ${\boldsymbol{\Psi}}(s)<\infty$ for such $s<s^*$, and this contradicts the definition of $s^*$. Consequently, ${\textbf{X}}(s)$ does not exist for $s<s^*$, and so the level curves (53) must touch (have a common tangent line) at $s=s^*$, but not at $s>s^*$.

Define

(54) \begin{equation} \nabla g_{s^*}^{(u,v)}(x_{ij},[x_{ij}^*]) = \frac{\partial}{\partial x_{ij}}g_{s^*}^{(u,v)}([x_{ij}])\Big|_{[x_{ij}^*]}, \end{equation}

and note that

(55) \begin{eqnarray} && { \nabla g_{s^*}^{(u,v)}(x_{ij},[x_{ij}^*]) = \frac{\partial}{\partial x_{ij}}g_{s^*}^{(u,v)}([x_{ij}])\Big|_{[x_{ij}^*]} } \nonumber \\[2pt] &&\,=\, [{\textbf{Q}}_{11}(s^*)]_{ui}{\boldsymbol{1}}\{j=v\} +[{\textbf{Q}}_{22}(s^*)]_{jv}{\boldsymbol{1}}\{i=u\} +\sum_{k\in\mathcal{S}_2} [{\textbf{Q}}_{21}(s^*)]_{ki}\times x_{uk}^* {\boldsymbol{1}}\{j=v\} \nonumber\\ &&\quad \,+ \sum_{\ell\in\mathcal{S}_1}[{\textbf{Q}}_{21}(s^*)]_{j\ell}\times x_{\ell v}^*{\boldsymbol{1}}\{i=u\} \nonumber\\[2pt] &&\,=\, [{\textbf{K}}(s^*)]_{ui}{\boldsymbol{1}}\{j=v\} +[{\textbf{D}}(s^*)]_{jv}{\boldsymbol{1}}\{i=u\} . \end{eqnarray}

The tangent plane to the (u,v)th level curve (53) at ${\boldsymbol{\Psi}}(s^*)=[x_{ij}^*]$ is the solution to the equation

(56) \begin{eqnarray} 0&\,=\,& \sum_{i,j} \nabla g^{(u,v)}_{s^*}(x_{ij},[x_{ij}^*]) (x_{ij}-x_{ij}^*) \nonumber\\ &\,=\,&\sum_{i,j} \Big( [{\textbf{K}}(s^*)]_{ui}{\boldsymbol{1}}\{j=v\} +[{\textbf{D}}(s^*)]_{jv}{\boldsymbol{1}}\{i=u\} \Big) (x_{ij}-x_{ij}^*) \nonumber\\ &\,=\,& \sum_i[{\textbf{K}}(s^*)]_{ui}[{\textbf{X}}-{\boldsymbol{\Psi}}(s^*)]_{iv} +\sum_j [{\textbf{X}}-{\boldsymbol{\Psi}}(s^*)]_{uj}[{\textbf{D}}(s^*)]_{jv} \nonumber\\ &\,=\,& \left[ {\textbf{K}}(s^*)({\textbf{X}}-{\boldsymbol{\Psi}}(s^*)) +({\textbf{X}}-{\boldsymbol{\Psi}}(s^*)){\textbf{D}}(s^*)\right]_{uv}. \end{eqnarray}

From linear algebra, a matrix equation of the form ${\boldsymbol{0}}={\textbf{A}}{\textbf{X}}+{\textbf{X}}{\textbf{B}}$ has a nonzero solution if and only if ${\textbf{A}}$ and $({-}{\textbf{B}})$ have a common eigenvalue (e.g. see [Reference Bhatia and Rosenthal18]). Therefore, the equation

(57) \begin{eqnarray} {\boldsymbol{0}}&\,=\,& {\textbf{K}}(s^*)({\textbf{X}}-{\boldsymbol{\Psi}}(s^*)) +({\textbf{X}}-{\boldsymbol{\Psi}}(s^*)){\textbf{D}}(s^*) \end{eqnarray}

has a solution ${\textbf{Z}}=[z_{ij}]\not= {\boldsymbol{\Psi}}(s^*)$ if and only if ${\textbf{K}}(s^*)$ and $({-}{\textbf{D}}(s^*))$ have a common eigenvalue, in which case the tangent planes (56) to all level curves (53) at ${\boldsymbol{\Psi}}(s^*)$ intersect one another at a tangent line that goes through Z and ${\boldsymbol{\Psi}}(s^*)$.

That is, the level curves (53) touch if and only if $sp({\textbf{K}}(s^*))\cap sp({-}{\textbf{D}}(s^*))\not=\varnothing$. Hence, $s^*=\delta^*$.

We now extend the result for $s>0$ in [Reference Bean, O’Reilly and Taylor16, Theorem 1] to all $s\geq s^*$.

Corollary 4.1. For all $s\geq s^*$, ${\boldsymbol{\Psi}}(s)$ is the minimum nonnegative solution of the Riccati equation (21).

Proof. Suppose $s\geq s^*$. Then ${\textbf{Q}}_{11}(s)\leq {\textbf{K}}(s)={\textbf{Q}}_{11}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s)$ and ${\textbf{Q}}_{22}(s)\leq {\textbf{D}}(s)={\textbf{Q}}_{22}(s)+{\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s)$, and so $sp({\textbf{Q}}_{11}(s))\cap sp({-}{\textbf{Q}}_{22}(s))=\varnothing$.

Therefore, by [Reference Guo27, Theorem 2.3] and [Reference Bean, O’Reilly and Taylor17, Algorithm 1], ${\boldsymbol{\Psi}}(s)$ is the minimum nonnegative solution of (21).

In order to illustrate the theory, we consider the following simple example, which we will analyse as we develop the results throughout the paper.

Example 1. Let $\mathcal{S}=\{1,2\}$, $\mathcal{S}_1=\{1\}$, $\mathcal{S}_2=\{2\}$, $c_1=1$, $c_2=-1$, and

(58) \begin{eqnarray} {\textbf{T}} &\,=\,&\left[ \begin{array}{c@{\quad}c} {\textbf{T}}_{11}&{\textbf{T}}_{12}\\[4pt] {\textbf{T}}_{21}&{\textbf{T}}_{22} \end{array} \right] =\left[ \begin{array}{c@{\quad}c} -a&a\\[4pt] b&-b \end{array} \right], \end{eqnarray}
(59) \begin{eqnarray} {\textbf{Q}}(s) &\,=\,&\left[ \begin{array}{c@{\quad}c} {\textbf{Q}}_{11}(s)&{\textbf{Q}}_{12}(s)\\[4pt] {\textbf{Q}}_{21}(s)&{\textbf{Q}}_{22}(s) \end{array} \right] =\left[ \begin{array}{c@{\quad}c} -a-s&a\\[4pt] b&-b-s \end{array} \right], \end{eqnarray}

with $a>b>0$ so that the process is stable.

Then ${\boldsymbol{\Psi}}(s)$ is the minimum nonnegative solution of (21), here equivalent to

(60) \begin{equation} b x^2 - (a+b+2s)x+a=0, \end{equation}

which has solutions provided $\Delta(s)=(a+b+2s)^2-4ab \geq 0$, that is, for all

(61) \begin{eqnarray} s\in \left({-}\infty, \frac{-(a+b)-2\sqrt{ab}}{2} \right] \cup\left[ \frac{-(a+b)+2\sqrt{ab}}{2}, +\infty\right) . \end{eqnarray}

Since

(62) \begin{eqnarray} (a+b+2s)-\sqrt{\Delta(s)}\geq 0 \iff s\geq -\frac{a+b}{2}, \end{eqnarray}

it follows that ${\boldsymbol{\Psi}}(s)$ exists for all $s\geq \frac{-(a+b)+2\sqrt{ab}}{2}$, and

(63) \begin{eqnarray} {\boldsymbol{\Psi}}(s)&\,=\,&\frac{(a+b+2s)-\sqrt{\Delta(s)}}{2b},\qquad\quad\ \ \ \ \end{eqnarray}
(64) \begin{eqnarray} {\textbf{K}}(s)&\,=\,&-a-s+\frac{(a+b+2s)-\sqrt{\Delta(s)}}{2},\end{eqnarray}
(65) \begin{eqnarray} {\textbf{D}}(s)&\,=\,&-b-s+\frac{(a+b+2s)-\sqrt{\Delta(s)}}{2}. \end{eqnarray}

Therefore,

(66) \begin{equation} s^*=\frac{-(a+b)+2\sqrt{ab}}{2}<0 \end{equation}

and

(67) \begin{eqnarray} {\boldsymbol{\Psi}}(s^*)&\,=\,&\sqrt{\frac{a}{b}},\qquad\qquad\qquad\qquad\qquad \ \ \ \end{eqnarray}
(68) \begin{eqnarray} {\textbf{K}}(s^*)&\,=\,&-a-s^*+\sqrt{\frac{a}{b}}\ b = \frac{b-a}{2}<0 , \end{eqnarray}
(69) \begin{eqnarray} {\textbf{D}}(s^*)&\,=\,&-b-s^*+b\ \sqrt{\frac{a}{b}} =\frac{a-b}{2}>0. \end{eqnarray}

Note that $s^*=\delta^*$.
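These closed-form quantities are easy to confirm numerically. A minimal sketch (the rates $a=2$, $b=1$ are an illustrative choice satisfying $a>b>0$):

```python
import math

a, b = 2.0, 1.0                                    # illustrative rates, a > b > 0

def Delta(s):                                      # discriminant in (60)-(61)
    return (a + b + 2*s)**2 - 4*a*b

def Psi(s):                                        # minimal nonnegative root, (63)
    return ((a + b + 2*s) - math.sqrt(Delta(s))) / (2*b)

s_star = (-(a + b) + 2*math.sqrt(a*b)) / 2         # equation (66)
K_star = -a - s_star + b * Psi(s_star)             # K(s*) = Q11(s*) + Psi(s*) Q21(s*)
D_star = -b - s_star + b * Psi(s_star)             # D(s*) = Q22(s*) + Q21(s*) Psi(s*)

print(Psi(s_star), math.sqrt(a/b))                 # equation (67)
print(K_star, (b - a)/2)                           # equation (68)
print(D_star, (a - b)/2)                           # equation (69)
print(K_star + D_star)                             # 0: K(s*) = -D(s*), so delta* = s*

# Psi(0) via the fixed-point rearrangement x = (a + b x^2)/(a + b + 2s) of (60),
# started from 0; for s = 0 this converges to the minimal nonnegative root.
x = 0.0
for _ in range(200):
    x = (a + b*x*x) / (a + b)
print(x, Psi(0.0))                                 # both ~1.0 for a = 2, b = 1
```

The last check also illustrates Corollary 4.1: the iteration started from zero settles on the minimal nonnegative solution of the Riccati equation, not on the larger root.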

Lemma 5. For all $s>s^*$,

(70) \begin{eqnarray}{\textbf{Q}}_{22}(s)&\,=\,&{\textbf{Q}}_{22}(s^*)-{\textbf{A}}_{22}(s^*)(s-s^*)+o(s-s^*),\end{eqnarray}
(71) \begin{eqnarray}{\textbf{Q}}_{11}(s)&\,=\,&{\textbf{Q}}_{11}(s^*)-{\textbf{A}}_{11}(s^*)(s-s^*)+o(s-s^*),\end{eqnarray}
(72) \begin{eqnarray}{\textbf{Q}}_{12}(s)&\,=\,&{\textbf{Q}}_{12}(s^*)-{\textbf{A}}_{12}(s^*)(s-s^*)+o(s-s^*),\end{eqnarray}
(73) \begin{eqnarray}{\textbf{Q}}_{21}(s)&\,=\,&{\textbf{Q}}_{21}(s^*)-{\textbf{A}}_{21}(s^*)(s-s^*)+o(s-s^*),\end{eqnarray}

where, for all $s> s^*$,

(74) \begin{eqnarray}{\textbf{A}}_{22}(s)&\,=\,&-\frac{d}{ds}{\textbf{Q}}_{22}(s)={\textbf{C}}^{-1}_2\left({\textbf{I}}+{\textbf{T}}_{20}({\textbf{T}}_{00}-s{\textbf{I}})^{-2}{\textbf{T}}_{02} \right),\end{eqnarray}
(75) \begin{eqnarray}{\textbf{A}}_{11}(s)&\,=\,&-\frac{d}{ds}{\textbf{Q}}_{11}(s)={\textbf{C}}^{-1}_1\left({\textbf{I}}+{\textbf{T}}_{10}({\textbf{T}}_{00}-s{\textbf{I}})^{-2}{\textbf{T}}_{01} \right),\end{eqnarray}
(76) \begin{eqnarray}\ \ {\textbf{A}}_{12}(s)&\,=\,&-\frac{d}{ds}{\textbf{Q}}_{12}(s)={\textbf{C}}^{-1}_1{\textbf{T}}_{10}({\textbf{T}}_{00}-s{\textbf{I}})^{-2}{\textbf{T}}_{02},\qquad\qquad\!\!\!\end{eqnarray}
(77) \begin{eqnarray}\ \ {\textbf{A}}_{21}(s)&\,=\,&-\frac{d}{ds}{\textbf{Q}}_{21}(s)={\textbf{C}}^{-1}_2{\textbf{T}}_{20}({\textbf{T}}_{00}-s{\textbf{I}})^{-2}{\textbf{T}}_{01},\qquad\qquad\!\!\!\end{eqnarray}

and ${\textbf{A}}_{22}(s^*)=\lim_{s\downarrow s^*}{\textbf{A}}_{22}(s)<\infty$, ${\textbf{A}}_{11}(s^*)=\lim_{s\downarrow s^*}{\textbf{A}}_{11}(s)<\infty$, ${\textbf{A}}_{12}(s^*)=\lim_{s\downarrow s^*}$ ${\textbf{A}}_{12}(s)<\infty$, and ${\textbf{A}}_{21}(s^*)=\lim_{s\downarrow s^*}{\textbf{A}}_{21}(s)<\infty$.

Proof. For all $s>s^*$,

(78) \begin{eqnarray} -\frac{d}{ds}\left({-}({\textbf{T}}_{00}-s{\textbf{I}})^{-1} \right) &\,=\,&({\textbf{T}}_{00}-s{\textbf{I}})^{-2}, \end{eqnarray}

and so by (18),

(79) \begin{eqnarray}-\frac{d}{ds}{\textbf{Q}}_{22}(s)&\,=\,&{\textbf{C}}^{-1}_2\left({\textbf{I}}+{\textbf{T}}_{20}({\textbf{T}}_{00}-s{\textbf{I}})^{-2}{\textbf{T}}_{02} \right),\end{eqnarray}

which, with the notation ${\textbf{A}}_{22}(s)=-\frac{d}{ds}{\textbf{Q}}_{22}(s)$, implies

(80) \begin{eqnarray}{\textbf{Q}}_{22}(s+h)={\textbf{Q}}_{22}(s)-{\textbf{A}}_{22}(s)(h)+o(h).\end{eqnarray}

Next, since ${\boldsymbol{\Psi}}(s^*)<\infty$ by Lemma 4, we have $\left({-}({\textbf{T}}_{00}-s^*{\textbf{I}})^{-1} \right)<\infty$ and $({\textbf{T}}_{00}-s^*{\textbf{I}})^{-2}<\infty$, which implies that ${\textbf{A}}_{22}(s^*)<\infty$. Taking the limits as $s\downarrow s^*$ in (80) and substituting $h=(s-s^*)$ gives

(81) \begin{eqnarray}{\textbf{Q}}_{22}(s)&\,=\,&{\textbf{Q}}_{22}(s^*)-{\textbf{A}}_{22}(s^*)(s-s^*)+o(s-s^*).\end{eqnarray}

The proofs for the remaining expressions are analogous.
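The derivative formulas (74)-(77) can be checked against finite differences. A scalar sketch with one phase in each of $\mathcal{S}_1$, $\mathcal{S}_2$, $\mathcal{S}_0$, assuming the standard form of ${\textbf{Q}}(s)$ from (18); the rates, $c_1$, the evaluation point, and the step size are all illustrative choices:

```python
# One phase in each of S1, S2, S0; all rates below are illustrative choices.
c1 = 1.0
T11, T12, T10 = -3.0, 2.0, 1.0      # generator rows sum to zero
T21, T22, T20 = 1.0, -1.5, 0.5
T01, T02, T00 = 1.2, 0.8, -2.0

# Scalar form of Q11(s) (cf. (18)) and of A11(s) from equation (75).
def Q11(s):
    return (T11 - s + T10 * T01 / (s - T00)) / c1

def A11(s):
    return (1.0 + T10 * T01 / (T00 - s)**2) / c1

s, h = -0.3, 1e-6
fd = -(Q11(s + h) - Q11(s - h)) / (2*h)   # central difference for -dQ11/ds
print(fd, A11(s))                          # agree to roughly 1e-9
```

The checks for $A_{12}$, $A_{21}$, and $A_{22}$ are entirely analogous.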

For $s>s^*$, let

(82) \begin{equation}{\boldsymbol{\Phi}} (s)=\frac{d}{ds}{\boldsymbol{\Psi}}(s)=\lim_{h\to 0}\frac{{\boldsymbol{\Psi}}(s+h)-{\boldsymbol{\Psi}}(s)}{h},\end{equation}

and, for $s\geq s^*$, let

(83) \begin{eqnarray}{\textbf{U}}(s)&\,=\,&{\textbf{A}}_{12}(s)+{\textbf{A}}_{11}(s){\boldsymbol{\Psi}}(s)+{\boldsymbol{\Psi}}(s){\textbf{A}}_{22}(s)+{\boldsymbol{\Psi}}(s){\textbf{A}}_{21}(s){\boldsymbol{\Psi}}(s),\end{eqnarray}

noting that ${\textbf{U}}(s^*)$ exists by Lemma 5.

Lemma 6. For $s>s^*$, ${\boldsymbol{\Phi}}(s)$ is the unique solution of the equation

(84) \begin{eqnarray} {\textbf{K}}(s){\textbf{X}}+{\textbf{X}}{\textbf{D}}(s) &\,=\,& {\textbf{U}}(s). \end{eqnarray}

Furthermore, ${\boldsymbol{\Phi}}(s^*)=\lim_{s\downarrow s^*}{\boldsymbol{\Phi}}(s)=-\infty$.

Proof. By Lemma 4, for all $s>s^*$, ${\textbf{K}}(s)$ and $({-}{\textbf{D}}(s))$ have no common eigenvalues, and so by [Reference Laub38, Theorem 13.18], Equation (84) has a unique solution. We now show that ${\boldsymbol{\Phi}} (s)$ is the solution of (84). Also see [Reference Bean, O’Reilly and Taylor16, Corollary 3]. Indeed, by taking derivatives with respect to s in Equation (21) for ${\boldsymbol{\Psi}}(s)$, we have

(85) \begin{eqnarray}{\boldsymbol{0}}&\,=\,&\frac{d}{ds}\Big({\textbf{Q}}_{12}(s)+{\textbf{Q}}_{11}(s){\boldsymbol{\Psi}}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{22}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s)\Big)\nonumber\\[4pt]&\,=\,&-{\textbf{A}}_{12}(s)-{\textbf{A}}_{11}(s){\boldsymbol{\Psi}}(s)+{\textbf{Q}}_{11}(s){\boldsymbol{\Phi}}(s)-{\boldsymbol{\Psi}}(s){\textbf{A}}_{22}(s)+{\boldsymbol{\Phi}}(s){\textbf{Q}}_{22}(s)\nonumber\\[4pt]&&+{\boldsymbol{\Phi}}(s){\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s){\textbf{A}}_{21}(s){\boldsymbol{\Psi}}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s){\boldsymbol{\Phi}}(s)\nonumber\\[4pt]&\,=\,&-{\textbf{U}}(s)+{\textbf{K}}(s){\boldsymbol{\Phi}} (s)+{\boldsymbol{\Phi}} (s){\textbf{D}}(s).\end{eqnarray}

Also, ${\boldsymbol{\Phi}}(s)<{\boldsymbol{0}}$, since

(86) \begin{equation}{\boldsymbol{\Phi}}(s)=\frac{d}{ds}{\boldsymbol{\Psi}}(s)=\frac{d}{ds}\int_0^{\infty}e^{-st}\boldsymbol{\psi}(t)dt=-\int_0^{\infty}te^{-st}\boldsymbol{\psi}(t)dt<{\boldsymbol{0}}.\end{equation}

When $s=s^*$, however, by Lemma 4, ${\textbf{K}}(s)$ and $({-}{\textbf{D}}(s))$ have a common eigenvalue, and so by [Reference Laub38, Theorem 13.18], Equation (84) does not have a unique solution.

Finally, we show that $\lim_{s\downarrow s^*}{\boldsymbol{\Phi}}(s)=-\infty$. By standard methodology [Reference Laub38, Section 13.3], for $s>s^*$, the unique solution to Equation (84) can be written in the form

(87) \begin{eqnarray} vec({\boldsymbol{\Phi}}(s)) &\,=\,& ({\textbf{Z}}(s))^{-1}vec({\textbf{U}}(s)) = \frac{adj({\textbf{Z}}(s))}{det({\textbf{Z}}(s))}vec({\textbf{U}}(s)), \end{eqnarray}

where $vec({\boldsymbol{\Phi}}(s))$ and $vec({\textbf{U}}(s))$ are column vectors obtained by stacking the columns (from left to right) of the original matrices one under another,

(88) \begin{equation} {\textbf{Z}}(s)=({\textbf{I}}\otimes {\textbf{K}}(s))+({\textbf{D}}(s)^T \otimes {\textbf{I}}), \end{equation}

and the eigenvalues of ${\textbf{Z}}(s)$ are $(\lambda_i-\mu_j)$, where $\lambda_i$ are eigenvalues of ${\textbf{K}}(s)$ and $\mu_j$ are eigenvalues of $({-}{\textbf{D}}(s))$. Because $det({\textbf{Z}}(s))$ is the product of the eigenvalues of ${\textbf{Z}}(s)$, and because, as $s\downarrow s^*$, one of the eigenvalues will approach zero since $s^*=\delta^*$ by Lemma 4, we have $\lim_{s\downarrow s^*}det({\textbf{Z}}(s))=0$ and so ${\boldsymbol{\Phi}}(s^*)=\lim_{s\downarrow s^*}{\boldsymbol{\Phi}}(s)=-\infty$, where the negative sign is due to the fact that ${\boldsymbol{\Phi}}(s)<{\boldsymbol{0}}$ for all $s>s^*$.
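The vec/Kronecker representation (87)-(88) can be exercised directly. A self-contained sketch with arbitrary illustrative $2\times 2$ matrices (not the ${\textbf{K}}(s)$, ${\textbf{D}}(s)$ of any particular SFM), using a small Gaussian-elimination solver:

```python
# Solve K X + X D = U via Z vec(X) = vec(U), with Z = (I kron K) + (D^T kron I)
# as in (87)-(88); K, D, U below are arbitrary illustrative matrices.

def kron(A, B):
    ra, ca, rb, cb = len(A), len(A[0]), len(B), len(B[0])
    return [[A[i // rb][j // cb] * B[i % rb][j % cb]
             for j in range(ca * cb)] for i in range(ra * rb)]

def solve(M, rhs):
    """Gaussian elimination with partial pivoting."""
    n = len(M)
    M = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p] = M[p], M[k]
        for r in range(k + 1, n):
            fct = M[r][k] / M[k][k]
            for c in range(k, n + 1):
                M[r][c] -= fct * M[k][c]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        x[k] = (M[k][n] - sum(M[k][c] * x[c] for c in range(k + 1, n))) / M[k][k]
    return x

K = [[-2.0, 0.5], [0.3, -1.0]]       # spectra of K and -D are disjoint here,
D = [[-1.5, 0.2], [0.4, -0.8]]       # so Z is nonsingular
U = [[1.0, 2.0], [3.0, 4.0]]
I = [[1.0, 0.0], [0.0, 1.0]]
Dt = [[D[j][i] for j in range(2)] for i in range(2)]

ZK, ZD = kron(I, K), kron(Dt, I)
Z = [[ZK[i][j] + ZD[i][j] for j in range(4)] for i in range(4)]

vecU = [U[i][j] for j in range(2) for i in range(2)]   # stack columns
vecX = solve(Z, vecU)
X = [[vecX[i + 2*j] for j in range(2)] for i in range(2)]

res = max(abs(sum(K[i][k]*X[k][j] for k in range(2))
              + sum(X[i][k]*D[k][j] for k in range(2)) - U[i][j])
          for i in range(2) for j in range(2))
print(res)                            # residual of K X + X D - U, machine precision
```

When a common eigenvalue of ${\textbf{K}}$ and $({-}{\textbf{D}})$ appears, an eigenvalue of ${\textbf{Z}}$ hits zero and the solve degenerates, which is exactly the mechanism behind ${\boldsymbol{\Phi}}(s)\to-\infty$ as $s\downarrow s^*$.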

We now state the key result of this paper.

Theorem 2. For all $s>s^*$,

(89) \begin{eqnarray} {\boldsymbol{\Psi}}(s)&\,=\,&{\boldsymbol{\Psi}}(s^*)-{\textbf{B}}(s^*)\sqrt{s-s^*}+o(\sqrt{s-s^*}), \end{eqnarray}

where ${\boldsymbol{0}}<{\textbf{B}}(s^*)<\infty$ solves

(90) \begin{eqnarray} {\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*) &\,=\,&{\textbf{U}}(s^*)-{\textbf{Y}}(s^*),\end{eqnarray}
(91) \begin{eqnarray} {\textbf{K}}(s^*){\textbf{B}}(s^*)+{\textbf{B}}(s^*){\textbf{D}}(s^*)&\,=\,&{\boldsymbol{0}}, \end{eqnarray}

and

(92) \begin{eqnarray} {\textbf{Y}}(s^*)&\,=\,&\lim_{s\downarrow s^*} \left( {\textbf{K}}(s^*){\boldsymbol{\Phi}}(s)+{\boldsymbol{\Phi}}(s){\textbf{D}}(s^*) \right) . \end{eqnarray}

Proof. Note that for any function $h(\cdot)$ with $h(s-s^*)=o(s-s^*)$ or $h(s-s^*)=c\cdot (s-s^*)$ for some constant c, we have

(93) \begin{equation} -\lim_{s\downarrow s^*} \left( \frac{{\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*)}{s-s^*}h(s-s^*) \right) ={\boldsymbol{0}}.\end{equation}

Consider $h(s-s^*)=(s-s^*)/|| {\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*) ||$. We have

(94) \begin{equation} \lim_{s\downarrow s^*}\frac{s-s^*}{h(s-s^*)} =\lim_{s\downarrow s^*}|| {\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*) || =0,\end{equation}

which implies $(s-s^*)=o(h(s-s^*))$, and

(95) \begin{equation} \lim_{s\downarrow s^*} \Bigg|\Bigg| \frac{{\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*)}{s-s^*}h(s-s^*) \Bigg|\Bigg| =1\not= 0.\end{equation}

Therefore, there exists a continuous, positive-valued function $h(\cdot)$ such that $(s-s^*)=$ $o(h(s-s^*))$ and

(96) \begin{equation} -\lim_{s\downarrow s^*} \left( \frac{{\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*)}{s-s^*}h(s-s^*) \right) ={\textbf{B}}(s^*)\end{equation}

for some constant matrix ${\boldsymbol{0}}<{\textbf{B}}(s^*)<\infty$. For such $h(\cdot)$, define the function $g(\cdot)$ by

(97) \begin{equation} g(s-s^*)=\frac{s-s^*}{h(s-s^*)};\end{equation}

clearly $\lim_{s\downarrow s^*}g(s-s^*)=0$ since $(s-s^*)=o(h(s-s^*))$.

Consequently, we have

(98) \begin{equation} -\lim_{s\downarrow s^*} \left( \frac{{\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*)}{g(s-s^*)} \right) ={\textbf{B}}(s^*),\end{equation}

which implies that

(99) \begin{eqnarray} {\boldsymbol{\Psi}}(s)&\,=\,&{\boldsymbol{\Psi}}(s^*)-{\textbf{B}}(s^*)g(s-s^*) +o(g(s-s^*)).\end{eqnarray}

We now solve for ${\textbf{B}}(s^*)$ and $g(s-s^*)$. By (21) and Lemma 5, since

(100) \begin{eqnarray} {\boldsymbol{0}}&\,=\,& {\textbf{Q}}_{12}(s^*) + {\textbf{Q}}_{11}(s^*){\boldsymbol{\Psi}}(s^*) +{\boldsymbol{\Psi}}(s^*){\textbf{Q}}_{22}(s^*) +{\boldsymbol{\Psi}}(s^*){\textbf{Q}}_{21}(s^*){\boldsymbol{\Psi}}(s^*),\end{eqnarray}

we have

(101) \begin{eqnarray}{\boldsymbol{0}}&\,=\,& {\textbf{Q}}_{12}(s) +{\textbf{Q}}_{11}(s){\boldsymbol{\Psi}}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{22}(s) +{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s)\nonumber \\[4pt] &\,=\,&\left({\textbf{Q}}_{12}(s^*)-{\textbf{A}}_{12}(s^*)(s-s^*)\right) \nonumber\\[4pt] && + \left( {\textbf{Q}}_{11}(s^*)-{\textbf{A}}_{11}(s^*)(s-s^*) \right) \left( {\boldsymbol{\Psi}}(s^*)-{\textbf{B}}(s^*)g(s-s^*) \right) \nonumber\\[4pt] && + \left( {\boldsymbol{\Psi}}(s^*)-{\textbf{B}}(s^*)g(s-s^*) \right) \left( {\textbf{Q}}_{22}(s^*)-{\textbf{A}}_{22}(s^*)(s-s^*) \right) \nonumber\\[4pt] && + \left( {\boldsymbol{\Psi}}(s^*) - {\textbf{B}}(s^*)g(s-s^*) \right) \left( {\textbf{Q}}_{21}(s^*)-{\textbf{A}}_{21}(s^*)(s-s^*) \right) \left( {\boldsymbol{\Psi}}(s^*) - {\textbf{B}}(s^*)g(s-s^*) \right) \nonumber\\[4pt] && +o(s-s^*)+o(g(s-s^*)),\end{eqnarray}

and so

(102) \begin{eqnarray} {\boldsymbol{0}} &\,=\,& -(s-s^*){\textbf{U}}(s^*)-g(s-s^*){\textbf{W}}(s^*) +g^2(s-s^*){\textbf{V}}(s^*) +o(s-s^*) +o(g(s-s^*)), \nonumber\\[4pt]\end{eqnarray}

where ${\textbf{U}}(s^*)$ is as defined in (83), and

(103) \begin{eqnarray} {\textbf{W}}(s^*)&\,=\,& \left( {\textbf{Q}}_{11}(s^*)+{\boldsymbol{\Psi}}(s^*){\textbf{Q}}_{21}(s^*) \right) {\textbf{B}}(s^*) +{\textbf{B}}(s^*) \left( {\textbf{Q}}_{22}(s^*)+{\textbf{Q}}_{21}(s^*){\boldsymbol{\Psi}}(s^*) \right) \nonumber\\[4pt] &\,=\,& {\textbf{K}}(s^*){\textbf{B}}(s^*)+{\textbf{B}}(s^*){\textbf{D}}(s^*) , \nonumber\\[4pt] {\textbf{V}}(s^*) &\,=\,& {\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*) .\end{eqnarray}

We now use (102) to solve for ${\textbf{B}}(s^*)$ and $g(s-s^*)$. We note that ${\textbf{V}}(s^*)\not= {\boldsymbol{0}}$ and ${\textbf{U}}(s^*)\not= {\boldsymbol{0}}$. Indeed, ${\textbf{V}}(s^*)\not= {\boldsymbol{0}}$, since ${\textbf{V}}(s^*)> {\boldsymbol{0}}$ because ${\textbf{B}}(s^*)>{\boldsymbol{0}}$, ${\textbf{Q}}_{21}(s^*)\geq {\boldsymbol{0}}$, and ${\textbf{Q}}_{21}(s^*)\not= {\boldsymbol{0}}$. Furthermore, ${\textbf{U}}(s^*)\not= {\boldsymbol{0}}$ since ${\textbf{U}}(s^*)> {\boldsymbol{0}}$. Indeed, in the case $\mathcal{S}_0=\varnothing$, since ${\textbf{C}}_1,{\textbf{C}}_2>{\boldsymbol{0}}$ and ${\boldsymbol{\Psi}}(s^*)>{\boldsymbol{0}}$, we have ${\textbf{U}}(s^*)={\textbf{C}}_1^{-1}{\boldsymbol{\Psi}}(s^*)+{\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1}>{\boldsymbol{0}}$. In the case $\mathcal{S}_0\not=\varnothing$, we have $-({\textbf{T}}_{00}-s^*{\textbf{I}})^{-1}=\int_{t=0}^{\infty}e^{-s^*t}e^{{\textbf{T}}_{00}t}dt>{\boldsymbol{0}}$, and $({\textbf{T}}_{00}-s^*{\textbf{I}})^{-2}=({-}({\textbf{T}}_{00}-s^*{\textbf{I}})^{-1})^2>0$. Therefore ${\textbf{A}}_{11}(s^*),{\textbf{A}}_{22}(s^*)>{\boldsymbol{0}}$, ${\textbf{A}}_{12}(s^*),{\textbf{A}}_{21}(s^*)\geq 0$, and ${\boldsymbol{\Psi}}(s^*)>{\boldsymbol{0}}$, so that ${\textbf{U}}(s^*)={\textbf{A}}_{12}(s^*)+{\textbf{A}}_{11}(s^*){\boldsymbol{\Psi}}(s^*)+{\boldsymbol{\Psi}}(s^*){\textbf{A}}_{22}(s^*)+{\boldsymbol{\Psi}}(s^*){\textbf{A}}_{21}(s^*){\boldsymbol{\Psi}}(s^*)>{\boldsymbol{0}}$.

Consequently, below we consider the two cases ${\textbf{W}}(s^*)\not= {\boldsymbol{0}}$ and ${\textbf{W}}(s^*)= {\boldsymbol{0}}$, respectively labelled Case I and Case II.

Case I. Suppose ${\textbf{W}}(s^*)\not= {\boldsymbol{0}}$. Then

(104) \begin{eqnarray} {\boldsymbol{0}} &\,=\,& -(s-s^*){\textbf{U}}(s^*)-g(s-s^*){\textbf{W}}(s^*) +g^2(s-s^*){\textbf{V}}(s^*) +o(s-s^*) +o(g(s-s^*)). \nonumber\\[4pt]\end{eqnarray}

Consider $(s-s^*)$ and $g(s-s^*)$. Either one of them dominates the other, or one is a multiple of the other.

  1. (i) If $g(s-s^*)=o(s-s^*)$, then dividing Equation (104) by $(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\boldsymbol{0}}={\textbf{U}}(s^*)$, a contradiction.

  2. (ii) If $(s-s^*)=o(g(s-s^*))$, then dividing Equation (104) by $g(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\boldsymbol{0}}={\textbf{W}}(s^*)$, a contradiction.

  3. (iii) If $g(s-s^*)=c\cdot (s-s^*)$ for some constant $c>0$, then without loss of generality we may assume $c=1$, since ${\textbf{B}}(s^*)g(s-s^*)=({\textbf{B}}(s^*)c )(s-s^*)$ suggests the substitution $\tilde {\textbf{B}}(s^*)\equiv {\textbf{B}}(s^*)c$. We then have

    (105) \begin{eqnarray} {\boldsymbol{\Psi}}(s)&\,=\,&{\boldsymbol{\Psi}}(s^*)-\tilde{\textbf{B}}(s^*)(s-s^*) +o(s-s^*),\end{eqnarray}
    with $\tilde {\textbf{B}}(s^*)<\infty$. However, dividing Equation (105) by $(s-s^*)$ and taking limits as $s\downarrow s^*$ gives, by Lemma 6,
    (106) \begin{eqnarray} \tilde{\textbf{B}}(s^*)=-\lim_{s\downarrow s^*}\frac{{\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*)}{s-s^*}=\infty,\end{eqnarray}
    a contradiction.

That is, the assumption ${\textbf{W}}(s^*)\not= {\boldsymbol{0}}$ leads to a contradiction.

Case II. By the above, we must have ${\textbf{W}}(s^*)= {\boldsymbol{0}}$, or equivalently,

(107) \begin{equation}{\textbf{K}}(s^*){\textbf{B}}(s^*)+{\textbf{B}}(s^*){\textbf{D}}(s^*)={\boldsymbol{0}},\end{equation}

and so,

(108) \begin{eqnarray} {\boldsymbol{0}}&\,=\,&-(s-s^*){\textbf{U}}(s^*)+g^2(s-s^*){\textbf{V}}(s^*)+o(s-s^*) +o(g(s-s^*)).\end{eqnarray}

We note that $g^2(s-s^*)=o(g(s-s^*))$, and consider the following.

(i) First, we show that $(s-s^*)=o(g(s-s^*))$. Indeed, if $g(s-s^*)=o(s-s^*)$ or $g(s-s^*)=c\cdot (s-s^*)$ for some $c\not= 0$, then dividing Equation (108) by $(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\textbf{U}}(s^*)={\boldsymbol{0}}$, a contradiction. Therefore we must have $(s-s^*)=o(g(s-s^*))$. That is, $g(s-s^*)$ dominates both $g^2(s-s^*)$ and $(s-s^*)$.

Then,

(109) \begin{eqnarray} \lim_{s\to s^*}\frac{o(s-s^*)}{g(s-s^*)}=\lim_{s\to s^*}\frac{o(s-s^*)}{(s-s^*)}\frac{(s-s^*)}{g(s-s^*)} =0,\end{eqnarray}

which gives $o(s-s^*)=o(g(s-s^*))$, and so we write (108) in the form

(110) \begin{eqnarray} {\boldsymbol{0}}&\,=\,&-(s-s^*){\textbf{U}}(s^*)+g^2(s-s^*){\textbf{V}}(s^*) +o(g(s-s^*)).\end{eqnarray}

Since $(s-s^*)=o(g(s-s^*))$, we consider two cases, $o(g(s-s^*))={\boldsymbol{0}}$ and $o(g(s-s^*))\not ={\boldsymbol{0}}$, respectively labelled (A) and (B) below.

(A) Suppose $o(g(s-s^*))={\boldsymbol{0}}$. Then (110) reduces to

(111) \begin{eqnarray}{\boldsymbol{0}}&\,=\,&-(s-s^*){\textbf{U}}(s^*)+g^2(s-s^*){\textbf{V}}(s^*).\end{eqnarray}

If $(s-s^*)=o(g^2(s-s^*))$, we divide (111) by $g^2(s-s^*)$ and take limits as $s\downarrow s^*$ to get ${\textbf{V}}(s^*)={\boldsymbol{0}}$, a contradiction. If $g^2(s-s^*)=o(s-s^*)$, we divide (111) by $(s-s^*)$ and take limits as $s\downarrow s^*$ to get ${\textbf{U}}(s^*)={\boldsymbol{0}}$, a contradiction. So we must have $(s-s^*)=c\cdot g^2(s-s^*)$ for some constant $c>0$, and without loss of generality we may assume $c=1$. Then dividing (111) by $(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\textbf{U}}(s^*)={\textbf{V}}(s^*)$, or equivalently,

(112) \begin{equation}{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*)={\textbf{U}}(s^*).\end{equation}

That is, Case (A) gives $g(s-s^*)=\sqrt{s-s^*}$.

(B) Suppose $o(g(s-s^*))\not={\boldsymbol{0}}$. Then we write the term $o(g(s-s^*))$ in the form

(113) \begin{eqnarray} o(g(s-s^*))&\,=\,& L(s-s^*){\textbf{Y}}(s^*) +o(L(s-s^*))\end{eqnarray}

for some function $L(\cdot)\not= 0$ such that $L(s-s^*)=o(g(s-s^*))$ and some constant ${\textbf{Y}}(s^*)\not= {\boldsymbol{0}}$.

We then have

(114) \begin{eqnarray} {\boldsymbol{0}}&\,=\,&-(s-s^*){\textbf{U}}(s^*)+g^2(s-s^*){\textbf{V}}(s^*) +L(s-s^*){\textbf{Y}}(s^*) +o(L(s-s^*)). \end{eqnarray}

In relation to the terms $(s-s^*)$, $g^2(s-s^*)$, and $L(s-s^*)$, we consider three cases, labelled (B)(ii)–(B)(iv). We will show that Case (B)(ii) gives a contradiction and Cases (B)(iii) and (B)(iv) give $g(s-s^*)=\sqrt{s-s^*}$.

(B)(ii) Suppose one of $(s-s^*)$, $g^2(s-s^*)$, and $L(s-s^*)$ dominates the other two.

If $(s-s^*)$ dominates the other terms, that is, $g^2(s-s^*)=o(s-s^*)$ and $L(s-s^*)=$ $o(s-s^*)$, then dividing Equation (114) by $(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\textbf{U}}(s^*)={\boldsymbol{0}}$, a contradiction.

If $g^2(s-s^*)$ dominates the other terms, that is, $(s-s^*)=o(g^2(s-s^*))$ and $L(s-s^*)=o(g^2(s-s^*))$, then dividing Equation (114) by $g^2(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\textbf{V}}(s^*)={\boldsymbol{0}}$, a contradiction.

If $L(s-s^*)$ dominates the other terms, that is, $(s-s^*)=o(L(s-s^*))$ and $g^2(s-s^*)=o(L(s-s^*))$, then dividing Equation (114) by $L(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\textbf{Y}}(s^*)={\boldsymbol{0}}$, a contradiction.

That is, Case (B)(ii) gives a contradiction. Therefore at least two of $(s-s^*)$, $g^2(s-s^*)$, and $L(s-s^*)$ must be multiples of each other.

(B)(iii) Suppose each of $(s-s^*)$, $g^2(s-s^*)$ and $L(s-s^*)$ is a multiple of the others. Then $(s-s^*)=c\cdot g^2(s-s^*)=d\cdot L(s-s^*)$, and without loss of generality we may assume $c=1$, $d=1$, by an argument analogous to the one used before. Therefore, dividing Equation (114) by $(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\boldsymbol{0}}=-{\textbf{U}}(s^*)+{\textbf{V}}(s^*)+{\textbf{Y}}(s^*)$, or equivalently,

(115) \begin{eqnarray}{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*) &\,=\,&{\textbf{U}}(s^*)-{\textbf{Y}}(s^*).\end{eqnarray}

That is, Case (B)(iii) gives $g(s-s^*)=\sqrt{s-s^*}$.

(B)(iv) Suppose exactly two of $(s-s^*)$, $g^2(s-s^*)$, and $L(s-s^*)$ are multiples of one another. Then these two terms must dominate the third term, or we have a contradiction by Part (i) of Case II above.

If $(s-s^*)=c\cdot g^2(s-s^*)$ for some $c>0$, then without loss of generality we may assume $c=1$. Also, we must have $L(s-s^*)=o(s-s^*)$. Therefore, $g(s-s^*)=\sqrt{s-s^*}$, and dividing Equation (114) by $(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\textbf{U}}(s^*)={\textbf{V}}(s^*)$, or equivalently,

(116) \begin{equation}{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*)={\textbf{U}}(s^*).\end{equation}

If $L(s-s^*)=c\cdot (s-s^*)$ for some $c\not= 0$, then dividing Equation (114) by $(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\textbf{V}}(s^*)={\boldsymbol{0}}$, a contradiction.

If $L(s-s^*)=c\cdot g^2(s-s^*)$ for some $c> 0$, then without loss of generality we may assume $c=1$. Also, we must have $L(s-s^*)=o(s-s^*)$. Therefore, Equation (114) becomes

(117) \begin{eqnarray} {\boldsymbol{0}}&\,=\,&-(s-s^*){\textbf{U}}(s^*)+g^2(s-s^*)({\textbf{V}}(s^*)+{\textbf{Y}}(s^*)) +o(g^2(s-s^*)).\end{eqnarray}

In this case, if $g^2(s-s^*)=o(s-s^*)$, then dividing Equation (117) by $(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\textbf{U}}(s^*)={\boldsymbol{0}}$, a contradiction. If $(s-s^*)=o(g^2(s-s^*))$, then dividing Equation (117) by $g^2(s-s^*)$ and taking limits as $s\downarrow s^*$ gives ${\textbf{U}}(s^*)+{\textbf{Y}}(s^*)={\boldsymbol{0}}$, a contradiction. Therefore, we must have $g^2(s-s^*)=c\cdot (s-s^*)$ for some $c> 0$. Without loss of generality we may assume $c=1$. Therefore, $g(s-s^*)=\sqrt{s-s^*}$, and dividing Equation (117) by $g^2(s-s^*)$ and taking limits as $s\downarrow s^*$ gives

(118) \begin{eqnarray} {\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*)&\,=\,& {\textbf{U}}(s^*)- {\textbf{Y}}(s^*).\end{eqnarray}

That is, Case (B)(iv) gives $g(s-s^*)=\sqrt{s-s^*}$.

By the above arguments, we must have

(119) \begin{equation} g(s-s^*)=\sqrt{s-s^*},\end{equation}

and

(120) \begin{eqnarray} {\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*)&\,=\,& {\textbf{U}}(s^*)- {\textbf{Y}}(s^*), \end{eqnarray}
(121) \begin{eqnarray} {\textbf{K}}(s^*){\textbf{B}}(s^*)+{\textbf{B}}(s^*){\textbf{D}}(s^*)&\,=\,&{\boldsymbol{0}},\end{eqnarray}

and $-\infty<{\textbf{Y}}(s^*)\leq {\textbf{U}}(s^*)$. Here, ${\textbf{Y}}(s^*)= {\boldsymbol{0}}$ whenever the term $o(g(s-s^*))$ in (110) satisfies $o(g(s-s^*))=o(s-s^*)$, and ${\textbf{Y}}(s^*)\not= {\boldsymbol{0}}$ when $o(g(s-s^*))=(s-s^*){\textbf{Y}}(s^*)+o(s-s^*)$.

Finally, we show (92). By L’Hospital’s rule,

(122) \begin{eqnarray}{\textbf{B}}(s^*)&\,=\,& -\lim_{s\downarrow s^*}\frac{{\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*)}{\sqrt{s-s^*}}\ = \ -\lim_{s\downarrow s^*}\left({\boldsymbol{\Phi}}(s)2\sqrt{s-s^*}\right),\end{eqnarray}

and so, by taking limits as $s\downarrow s^*$ in (85),

(123) \begin{eqnarray}&&{{\textbf{B}}(s^*){\textbf{Q}}_{21} (s^*){\textbf{B}}(s^*)+{\textbf{Y}}(s^*)= {\textbf{U}}(s^*)}\nonumber\\[4pt] &&\,=\, \lim_{s\downarrow s^*} \Big[ {\textbf{K}}(s){\boldsymbol{\Phi}}(s) +{\boldsymbol{\Phi}}(s){\textbf{D}}(s) \Big]\nonumber\\[4pt] &&\,=\,\lim_{s\downarrow s^*}\Big[({\textbf{Q}}_{11}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s)){\boldsymbol{\Phi}}(s)+{\boldsymbol{\Phi}}(s)({\textbf{Q}}_{22}(s)+{\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s))\Big]\nonumber\\[4pt] &&\,=\,\lim_{s\downarrow s^*}\Bigg[\frac{1}{2}\left(\frac{{\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*)}{\sqrt{s-s^*}}\right){\textbf{Q}}_{21}(s)\left({\boldsymbol{\Phi}}(s)2\sqrt{s-s^*}\right)+\frac{1}{2}\left({\boldsymbol{\Phi}}(s)2\sqrt{s-s^*}\right){\textbf{Q}}_{21}(s)\left(\frac{{\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*)}{\sqrt{s-s^*}}\right)\nonumber\\[4pt]&&\qquad+{\textbf{Q}}_{11}(s){\boldsymbol{\Phi}}(s)+{\boldsymbol{\Phi}}(s){\textbf{Q}}_{22}(s)+{\boldsymbol{\Psi}}(s^*){\textbf{Q}}_{21}(s){\boldsymbol{\Phi}}(s)+{\boldsymbol{\Phi}}(s){\textbf{Q}}_{21}(s){\boldsymbol{\Psi}}(s^*)\Bigg]\nonumber\\[4pt]&&\,=\,\frac{1}{2}{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*)+\frac{1}{2}{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*)\nonumber\\[4pt]&&\qquad+\lim_{s\downarrow s^*}\Bigg[\left(\frac{{\textbf{Q}}_{11}(s)-{\textbf{Q}}_{11}(s^*)}{\sqrt{s-s^*}}\right)\left({\boldsymbol{\Phi}}(s)\sqrt{s-s^*}\right)+\left({\boldsymbol{\Phi}}(s)\sqrt{s-s^*}\right)\left(\frac{{\textbf{Q}}_{22}(s)-{\textbf{Q}}_{22}(s^*)}{\sqrt{s-s^*}}\right)\nonumber\\[4pt]&&\qquad+{\textbf{Q}}_{11}(s^*){\boldsymbol{\Phi}}(s)+{\boldsymbol{\Phi}}(s){\textbf{Q}}_{22}(s^*)\nonumber\\[4pt]&&\qquad+{\boldsymbol{\Psi}}(s^*)\left(\frac{{\textbf{Q}}_{21}(s)-{\textbf{Q}}_{21}(s^*)}{\sqrt{s-s^*}}\right)\left({\boldsymbol{\Phi}}(s)\sqrt{s-s^*}\right)+\left({\boldsymbol{\Phi}}(s)\sqrt{s-s^*}\right)\left(\frac{{\textbf{Q}}_{21}(s)-{\textbf{Q}}_{21}(s^*)}{\sqrt{s-s^*}}\right){\boldsymbol{\Psi}}(s^*)\nonumber\\[4pt]&&\qquad+{\boldsymbol{\Psi}}(s^*){\textbf{Q}}_{21}(s^*){\boldsymbol{\Phi}}(s)+{\boldsymbol{\Phi}}(s){\textbf{Q}}_{21}(s^*){\boldsymbol{\Psi}}(s^*)\Bigg]\nonumber\\[4pt]&&\,=\,{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*)+{\boldsymbol{0}}+\lim_{s\downarrow s^*}\Big[{\textbf{K}}(s^*){\boldsymbol{\Phi}}(s)+{\boldsymbol{\Phi}}(s){\textbf{D}}(s^*)\Big],\end{eqnarray}

which completes the proof.

The next result follows immediately from Theorem 1.

Corollary 1. We have

(124) \begin{equation}\boldsymbol{\psi}(t)={\textbf{B}}(s^*)\frac{1}{2\sqrt{\pi}}t^{-3/2}e^{s^* t}(1+o(1)).\end{equation}

Example 1 (continued). Since $\lim_{s\downarrow s^*} \Delta(s)=\Delta(s^*)=0$, we have

(125) \begin{eqnarray}\lim_{s\downarrow s^*}\frac{d}{ds}{\boldsymbol{\Psi}}(s)&\,=\,& \lim_{s\downarrow s^*}\frac{d}{ds} \frac{(a+b+2s)-\sqrt{\Delta(s)}}{2b}\nonumber\\[4pt]&\,=\,& \lim_{s\downarrow s^*}\frac{d}{ds}\left(\frac{1}{b}-\frac{1}{4b \sqrt{\Delta(s)}}(8s+4(a+b))\right)\nonumber\\[4pt]&\,=\,&-\infty,\end{eqnarray}

as expected. Furthermore,

(126) \begin{eqnarray}\lim_{s\downarrow (s^*)}\frac{{\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*)}{\sqrt{s-s^*}}&\,=\,&\lim_{s\downarrow (s^*)}\left(\frac{(a+b+2s)-\sqrt{\Delta(s)}}{2b\sqrt{s-s^*}}-\frac{(a+b+2s^*)-\sqrt{\Delta(s^*)}}{2b\sqrt{s-s^*}}\right)\nonumber\\[4pt]&\,=\,& \frac{\sqrt{2\sqrt{ab}}}{-b},\end{eqnarray}

which implies

(127) \begin{equation}{\boldsymbol{\Psi}}(s)={\boldsymbol{\Psi}}(s^*)-{\textbf{B}}(s^*)\sqrt{s-s^*}+o(\sqrt{s-s^*}),\end{equation}

where

(128) \begin{equation}{\textbf{B}}(s^*)=\frac{\sqrt{2\sqrt{ab}}}{b}.\end{equation}

Therefore, by Theorem 1,

(129) \begin{equation}\boldsymbol{\psi}(t)=\frac{\sqrt{2\sqrt{ab}}}{2b\sqrt{\pi}}t^{-3/2}\exp\left(\left(\frac{-(a+b)+2\sqrt{ab}}{2}\right)t\right)(1+o(1)).\end{equation}
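The closed forms (125)–(130) admit a quick numerical sanity check (not part of the original analysis). The sketch below reads off the scalar formula $\Psi(s)=\big((a+b+2s)-\sqrt{\Delta(s)}\big)/(2b)$ with $\Delta(s)=(a+b+2s)^2-4ab$, consistent with (125)–(126); the sample rates $a=4$, $b=1$ are our choice:

```python
import math

a, b = 4.0, 1.0  # sample rates with a > b (our choice)

s_star = (-(a + b) + 2.0 * math.sqrt(a * b)) / 2.0  # decay rate appearing in (129)
B = math.sqrt(2.0 * math.sqrt(a * b)) / b           # B(s*) from (128)

def Psi(s):
    """Scalar Psi(s) of Example 1, consistent with (125)-(126)."""
    delta = (a + b + 2.0 * s) ** 2 - 4.0 * a * b
    return ((a + b + 2.0 * s) - math.sqrt(delta)) / (2.0 * b)

# Delta(s*) = 0, so Psi(s*) = sqrt(a/b)
assert abs((a + b + 2.0 * s_star) ** 2 - 4.0 * a * b) < 1e-12
assert abs(Psi(s_star) - math.sqrt(a / b)) < 1e-12

# square-root behaviour (127): (Psi(s*) - Psi(s)) / sqrt(s - s*) -> B(s*)
eps = 1e-8
B_est = (Psi(s_star) - Psi(s_star + eps)) / math.sqrt(eps)
assert abs(B_est - B) < 1e-3

# identity (130): B(s*) Q21(s*) B(s*) = b B^2 = 2 sqrt(a/b) = U(s*)
assert abs(b * B * B - 2.0 * math.sqrt(a / b)) < 1e-12
```

For $a=4$, $b=1$ this gives $s^*=-0.5$ and $B(s^*)=2$, and the finite-difference slope agrees with $B(s^*)$ to the expected $O(\sqrt{\varepsilon})$ accuracy.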

Also, ${\textbf{A}}_{12}(s^*)=0$, ${\textbf{A}}_{21}(s^*)=0$, ${\textbf{A}}_{11}(s^*)=1$, ${\textbf{A}}_{22}(s^*)=1$, so

(130) \begin{eqnarray}{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*)&\,=\,&b\left(\frac{\sqrt{2\sqrt{ab}}}{b}\right)^2= \ 2\sqrt{\frac{a}{b}}\ = \ {\textbf{U}}(s^*),\end{eqnarray}

and

(131) \begin{eqnarray}{\textbf{K}}(s^*){\textbf{B}}(s^*)+{\textbf{B}}(s^*){\textbf{D}}(s^*)&\,=\,&\left( \frac{b-a}{2}+\frac{a-b}{2} \right)\left(\frac{\sqrt{2\sqrt{ab}}}{b}\right)={\boldsymbol{0}},\end{eqnarray}
(132) \begin{eqnarray} \lim_{s\downarrow s^*} \left( {\textbf{K}}(s^*){\boldsymbol{\Phi}}(s)+{\boldsymbol{\Phi}}(s){\textbf{D}}(s^*) \right)&\,=\,&\lim_{s\downarrow s^*}\left(\left( \frac{b-a}{2}+\frac{a-b}{2} \right)2 {\boldsymbol{\Phi}}(s)\right)={\boldsymbol{0}}. \end{eqnarray}

For $n\geq 1$, define the matrices

(133) \begin{eqnarray}{\textbf{H}}_{1,n}(s^*)&\,=\,&\sum_{i=0}^{n-1}\left({\textbf{K}}(s^*) \right)^i\times{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*)\times\left({\textbf{K}}(s^*) \right)^{n-1-i}\end{eqnarray}

and

(134) \begin{equation}{\textbf{H}}(s^*,y)=\sum_{n=1}^{\infty}\frac{y^n}{n!}{\textbf{H}}_{1,n}(s^*),\ \ {\textbf{H}}(s^*)=\int_{y=0}^{\infty}{\textbf{H}}(s^*,y)dy,\end{equation}

and define a column vector

(135) \begin{eqnarray}\widetilde{\textbf{H}}(s^*)&\,=\,&{\textbf{H}}(s^*){\textbf{C}}_1^{-1}{\boldsymbol{1}}+\left({-}({\textbf{K}}(s^*))^{-1}{\textbf{B}}(s^*)+{\textbf{H}}(s^*){\boldsymbol{\Psi}}(s^*)\right){\textbf{C}}_2^{-1}{\boldsymbol{1}}\nonumber\\[4pt]&&+\left[\begin{array}{c@{\quad}c}{\textbf{H}}(s^*)&-({\textbf{K}}(s^*))^{-1}{\textbf{B}}(s^*)+{\textbf{H}}(s^*){\boldsymbol{\Psi}}(s^*)\end{array}\right]\left[\begin{array}{c}{\textbf{C}}_1^{-1}{\textbf{T}}_{10}\\[4pt]{\textbf{C}}_2^{-1}{\textbf{T}}_{20}\end{array}\right]({-}({{\textbf{T}}}_{00}-s^*{\textbf{I}})^{-1}){\boldsymbol{1}}.\nonumber\\[4pt]\end{eqnarray}

Below, we derive the expressions for $\boldsymbol{\mu}(dy)^{(0)}$.

Theorem 3. The matrix $\boldsymbol{\mu}(dy)^{(0)}$ is unique, and

(136) \begin{eqnarray}\boldsymbol{\mu}(dy)^{(0)}_{11}&\,=\,&diag({\widetilde{\textbf{H}}(s^*)})^{-1}{ {\textbf{H}}(s^*,y){\textbf{C}}_1^{-1}}dy,\nonumber\\[4pt]\boldsymbol{\mu}(dy)^{(0)}_{12}&\,=\,&diag({\widetilde{\textbf{H}}(s^*)})^{-1}\left(e^{{\textbf{K}}(s^*)y}{\textbf{B}}(s^*)+{\textbf{H}}(s^*,y){\boldsymbol{\Psi}}(s^*)\right){\textbf{C}}_2^{-1}dy,\nonumber\\[4pt]\boldsymbol{\mu}(dy)^{(0)}_{10}&\,=\,& \left[\begin{array}{c@{\quad}c}\boldsymbol{\mu}(dy)^{(0)}_{11} &\boldsymbol{\mu}(dy)^{(0)}_{12}\end{array}\right]\left[\begin{array}{c}{\textbf{T}}_{10}\\[4pt]{\textbf{T}}_{20}\end{array}\right]({-}({{\textbf{T}}}_{00}-s^*{\textbf{I}})^{-1}).\end{eqnarray}

Remark 3. From Theorem 3 it follows that the crucial step in identifying the Yaglom limit given above is the identification of $s^*$. Unfortunately, this must be done for each stochastic fluid queue separately.

Proof. By Lemma 2, Lemma 5, and Theorem 2, we have

(137) \begin{eqnarray}e^{{\textbf{K}}(s)y}&\,=\,&\lim_{K\to\infty}\sum_{n=0}^{K}\frac{y^n}{n!}\left({\textbf{Q}}_{11}(s)+{\boldsymbol{\Psi}}(s){\textbf{Q}}_{21}(s)\right)^n\nonumber\qquad\ \\[4pt] &\,=\,&\lim_{K\to\infty}\sum_{n=0}^{K}\frac{y^n}{n!}\left({\textbf{Q}}_{11}(s^*)+\left( {\boldsymbol{\Psi}}(s^*)-{\textbf{B}}(s^*)\sqrt{s-s^*} \right){\textbf{Q}}_{21}(s^*)+o(\sqrt{s-s^*})\right)^n\nonumber\\[4pt] &\,=\,&\lim_{K\to\infty}\sum_{n=0}^{K}\frac{y^n}{n!}\left({\textbf{Q}}_{11}(s^*)+{\boldsymbol{\Psi}}(s^*){\textbf{Q}}_{21}(s^*)\right)^n-\lim_{K\to\infty}\sqrt{s-s^*}\sum_{n=1}^{K}\frac{y^n}{n!}{\textbf{H}}_{1,n}(s^*)+o(\sqrt{s-s^*})\nonumber\\[4pt] &\,=\,&e^{{\textbf{K}}(s^*)y}-\sqrt{s-s^*}{\textbf{H}}(s^*,y)+o(\sqrt{s-s^*}),\end{eqnarray}

which gives

(138) \begin{eqnarray}{\textbf{E}}(dy)^{(0)}(s)_{11}&\,=\,&e^{{\textbf{K}}(s)y}{\textbf{C}}_1^{-1}dy\nonumber\\[4pt]&\,=\,&{\textbf{E}}(dy)^{(0)}(s^*)_{11}-\sqrt{s-s^*}{\textbf{H}}(s^*,y){\textbf{C}}_1^{-1}dy+o(\sqrt{s-s^*})\end{eqnarray}

and

(139) \begin{eqnarray} {\textbf{E}}(dy)^{(0)}(s)_{12}& = & e^{{\textbf{K}}(s)y}{\boldsymbol{\Psi}}(s){\textbf{C}}_2^{-1}dy\nonumber\\[4pt] &=\,&\left(e^{{\textbf{K}}(s^*)y}-\sqrt{s-s^*}{\textbf{H}}(s^*,y)\right)({\boldsymbol{\Psi}}(s^*)-{\textbf{B}}(s^*)\sqrt{s-s^*}){\textbf{C}}_2^{-1}dy+o(\sqrt{s-s^*})\nonumber\\[4pt] &=\,&{\textbf{E}}(dy)^{(0)}(s^*)_{12}-\sqrt{s-s^*}\left(e^{{\textbf{K}}(s^*)y}{\textbf{B}}(s^*)+{\textbf{H}}(s^*,y){\boldsymbol{\Psi}}(s^*)\right){\textbf{C}}_2^{-1}dy+o(\sqrt{s-s^*}).\nonumber\\[5pt]\end{eqnarray}

Also, noting that $({\textbf{T}}_{00}-s{\textbf{I}})^{-1}-({\textbf{T}}_{00}-s^*{\textbf{I}})^{-1}=(s-s^*)({\textbf{T}}_{00}-s^*{\textbf{I}})^{-2}+o(s-s^*)$, which gives $({{\textbf{T}}}_{00}-s{\textbf{I}})^{-1}=({{\textbf{T}}}_{00}-s^*{\textbf{I}})^{-1}+o(\sqrt{s-s^*})$, we have

(140) \begin{eqnarray}&& {{\textbf{E}}(dy)^{(0)}(s)_{10}=\left[\begin{array}{c@{\quad}c} e^{{\textbf{K}}(s)y} {\textbf{C}}_1^{-1}&e^{{\textbf{K}}(s)y}{\boldsymbol{\Psi}}(s){\textbf{C}}_2^{-1}\end{array}\right]\left[\begin{array}{c}{\textbf{T}}_{10}\\[4pt] {\textbf{T}}_{20}\end{array}\right]({-}({{\textbf{T}}}_{00}-s{\textbf{I}})^{-1})}\nonumber\\[4pt] &&\,=\,{\textbf{E}}(dy)^{(0)}(s^*)_{10}-\sqrt{s-s^*}\left[\begin{array}{c@{\quad}c}{\textbf{H}}(s^*,y)&\left(e^{{\textbf{K}}(s^*)y}{\textbf{B}}(s^*)+{\textbf{H}}(s^*,y){\boldsymbol{\Psi}}(s^*)\right)\end{array}\right]\nonumber\\[4pt] &&\quad\times\left[\begin{array}{c}{\textbf{C}}_1^{-1}{\textbf{T}}_{10}\\[4pt]{\textbf{C}}_2^{-1}{\textbf{T}}_{20}\end{array}\right]({-}({{\textbf{T}}}_{00}-s^*{\textbf{I}})^{-1})dy+o(\sqrt{s-s^*}).\end{eqnarray}
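The first-order resolvent expansion used in the step above can be verified numerically on any matrix. The following stdlib-only sketch uses an arbitrary $2\times 2$ matrix in place of ${\textbf{T}}_{00}$; both the matrix and the value of $s^*$ are illustrative, not taken from the model:

```python
def inv2(m):
    """Inverse of a 2x2 matrix given as ((p, q), (r, t))."""
    (p, q), (r, t) = m
    det = p * t - q * r
    return ((t / det, -q / det), (-r / det, p / det))

def mul2(m, n):
    """Product of two 2x2 matrices."""
    return tuple(
        tuple(sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2))
        for i in range(2)
    )

def shift(m, s):
    """m - s I."""
    return ((m[0][0] - s, m[0][1]), (m[1][0], m[1][1] - s))

T00 = ((-3.0, 1.0), (2.0, -4.0))  # arbitrary illustrative generator block
s_star, eps = -0.7, 1e-6          # illustrative values only

R_star = inv2(shift(T00, s_star))
R = inv2(shift(T00, s_star + eps))
R2 = mul2(R_star, R_star)

# (T00 - sI)^{-1} - (T00 - s*I)^{-1} = (s - s*)(T00 - s*I)^{-2} + o(s - s*)
for i in range(2):
    for j in range(2):
        assert abs((R[i][j] - R_star[i][j]) - eps * R2[i][j]) < 1e-9
```

The residual is $O(\varepsilon^2)$, so in particular $({\textbf{T}}_{00}-s{\textbf{I}})^{-1}=({\textbf{T}}_{00}-s^*{\textbf{I}})^{-1}+o(\sqrt{s-s^*})$, as used above.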

Furthermore,

(141) \begin{eqnarray}{\textbf{E}}^{(0)}(s)_1&\,=\,&\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(0)}(s)_{11}{\boldsymbol{1}}+\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(0)}(s)_{12}{\boldsymbol{1}}+\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(0)}(s)_{10}{\boldsymbol{1}}\nonumber\\[8pt] &\,=\,&{\textbf{E}}^{(0)}(s^*)_1-\sqrt{s-s^*}{\textbf{H}}(s^*){\textbf{C}}_1^{-1}{\boldsymbol{1}}-\sqrt{s-s^*}\left({-}({\textbf{K}}(s^*))^{-1}{\textbf{B}}(s^*)+{\textbf{H}}(s^*){\boldsymbol{\Psi}}(s^*)\right){\textbf{C}}_2^{-1}{\boldsymbol{1}}\nonumber\\[8pt] &&-\sqrt{s-s^*}\left[\begin{array}{c@{\quad}c}{\textbf{H}}(s^*)&\left({-}({\textbf{K}}(s^*))^{-1}{\textbf{B}}(s^*)+{\textbf{H}}(s^*){\boldsymbol{\Psi}}(s^*)\right)\end{array}\right]\nonumber\\[9pt] &&\times\left[\begin{array}{c}{\textbf{C}}_1^{-1}{\textbf{T}}_{10}\\[10pt] {\textbf{C}}_2^{-1}{\textbf{T}}_{20}\end{array}\right]({-}({{\textbf{T}}}_{00}-s^*{\textbf{I}})^{-1}){\boldsymbol{1}}+o(\sqrt{s-s^*})\nonumber\\[4pt]&\,=\,&{\textbf{E}}^{(0)}(s^*)_1-\sqrt{s-s^*}\widetilde{\textbf{H}}(s^*)+o(\sqrt{s-s^*}).\end{eqnarray}

The result follows from Theorem 1 and (10), since the relevant terms cancel out. Indeed, for $i,j\in\mathcal{S}_1$, by (138)–(141), Theorem 1, and (10),

(142) \begin{eqnarray} \mu(dy)^{(0)}_{ij} &\,=\,& \lim_{t\to\infty} \frac{ \mathbb{P}(X(t)\in dy,\varphi(t)=j,\theta(0)>t\, |\, X(0)=0,\varphi(0)=i) } { \mathbb{P}(\theta(0)>t\, |\, X(0)=0,\varphi(0)=i) } \nonumber\\[8pt] &\,=\,& \frac{ \lim_{t\to\infty} \big([{\textbf{H}}(s^*,y) {\textbf{C}}_1^{-1}]_{ij}\Gamma(1/2)^{-1}t^{-1/2-1}e^{s^*t}(1+o(1))\big) } { \lim_{t\to\infty} \big([\widetilde{\textbf{H}}(s^*)]_{i}\Gamma(1/2)^{-1}t^{-1/2-1}e^{s^*t}(1+o(1))\big) }dy \nonumber\\[8pt] &\,=\,& \frac{[{\textbf{H}}(s^*,y){\textbf{C}}_1^{-1}]_{ij} } { [\widetilde{\textbf{H}}(s^*)]_{i} }dy, \end{eqnarray}

which gives the result for $\boldsymbol{\mu}(dy)^{(0)}_{11}$. The expressions for $\boldsymbol{\mu}(dy)^{(0)}_{12}$ and $\boldsymbol{\mu}(dy)^{(0)}_{10}$ follow in a similar manner.

Example 1 (continued). Finally,

(143) \begin{eqnarray}{\textbf{H}}(s^*,y)&\,=\,&\sum_{n=1}^{\infty}\frac{y^n}{n!}{\textbf{H}}_{1,n}(s^*)\nonumber\\[4pt]&\,=\,&\sum_{n=1}^{\infty}\frac{y^n}{n!}\sum_{i=0}^{n-1}\left({\textbf{K}}(s^*) \right)^i\times{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*)\times\left({\textbf{K}}(s^*) \right)^{n-1-i}\nonumber\\[4pt]&\,=\,&\sum_{n=1}^{\infty}\frac{y^n}{n!}\sum_{i=0}^{n-1}\left( -(a-b)/2\right)^{n-1}\sqrt{2\sqrt{ab}}\nonumber\\[4pt]&\,=\,&ye^{\left( -(a-b)/2 \right)y}\sqrt{2\sqrt{ab}},\end{eqnarray}

and

(144) \begin{eqnarray}{\textbf{H}}(s^*)&\,=\,&\int_{y=0}^{\infty}ye^{\left( -(a-b)/2 \right)y}\sqrt{2\sqrt{ab}}\ dy\nonumber\\[4pt] &\,=\,&\frac{\sqrt{2\sqrt{ab}}}{(a-b)^2/4},\nonumber\\[4pt] \widetilde{\textbf{H}}(s^*)&\,=\,&{\textbf{H}}(s^*)+\left({-}({\textbf{K}}(s^*))^{-1}{\textbf{B}}(s^*)+{\textbf{H}}(s^*){\boldsymbol{\Psi}}(s^*)\right)\nonumber\\[4pt] &\,=\,&\frac{\sqrt{2\sqrt{ab}}}{(a-b)^2/4} \left(1+\sqrt{\frac{a}{b}} \right)+\frac{2}{a-b}\left(\frac{\sqrt{2\sqrt{ab}}}{b}\right),\end{eqnarray}

so

(145) \begin{eqnarray}\boldsymbol{\mu}(dy)^{(0)}_{11}&\,=\,& diag({\widetilde{\textbf{H}}(s^*)})^{-1} { {\textbf{H}}(s^*,y)} dy \nonumber\\[4pt] &\,=\,&\left(\frac{1}{(a-b)^2/4}\, \left(1+\sqrt{\frac{a}{b}} \right) +\frac{2}{a-b} \left( \frac{1}{b} \right)\right)^{-1}ye^{({-}(a-b)/2)y}dy,\nonumber\\[4pt] \boldsymbol{\mu}(dy)^{(0)}_{12}&\,=\,&diag({\widetilde{\textbf{H}}(s^*)})^{-1}\left(e^{{\textbf{K}}(s^*)y}{\textbf{B}}(s^*)+{\textbf{H}}(s^*,y){\boldsymbol{\Psi}}(s^*)\right)dy\nonumber\\[4pt] &\,=\,& \left(\frac{1}{(a-b)^2/4}\, \left(1+\sqrt{\frac{a}{b}} \right) +\frac{2}{a-b} \left( \frac{1}{b} \right) \right)^{-1} \left( \frac{1}{b}+y\sqrt{\frac{a}{b}} \right) e^{({-}(a-b)/2)y} dy.\nonumber\\[4pt] \end{eqnarray}

We plot the values of $\boldsymbol{\mu}(dy)^{(0)}_{11}$ and $\boldsymbol{\mu}(dy)^{(0)}_{12}$ in Figure 1.

Figure 1: The values of $\boldsymbol{\mu}(dy)^{(0)}_{11}/dy$ and $\boldsymbol{\mu}(dy)^{(0)}_{12}/dy$ in Example 1 for $b=1$, $a=4,3,2$ (dotted, solid, and dashed lines, respectively).
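In Example 1 the boundary block is absent (compare (144) with the general form (135)), so for a fixed initial phase the two densities in (145) should integrate to one. A short numerical check of this normalization (the rates $a=4$, $b=1$ are our choice, matching one of the plotted cases):

```python
import math

a, b = 4.0, 1.0
c = (a - b) / 2.0  # decay rate appearing in (145)
# normalising constant from (144)-(145), with the common factor sqrt(2 sqrt(ab)) cancelled
N = (1.0 + math.sqrt(a / b)) / (c * c) + (2.0 / (a - b)) * (1.0 / b)

def mu11(y):
    return (1.0 / N) * y * math.exp(-c * y)

def mu12(y):
    return (1.0 / N) * (1.0 / b + y * math.sqrt(a / b)) * math.exp(-c * y)

# composite trapezoidal rule on [0, 60]; the exponential tail beyond is negligible
n, L = 200000, 60.0
h = L / n
total = 0.0
for k in range(n + 1):
    y = k * h
    w = 0.5 if k in (0, n) else 1.0
    total += w * h * (mu11(y) + mu12(y))
assert abs(total - 1.0) < 1e-5
```

The exact computation gives total mass one for any $a>b>0$, since $\int_0^\infty y e^{-cy}dy=1/c^2$ and $\int_0^\infty e^{-cy}dy=1/c$ reproduce the normalising constant.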

We will now find the Yaglom limit for a strictly positive initial position $X(0)=x>0$. Define the matrices

(146) \begin{eqnarray} \mathbf{W}(s^*, x-z)&\,=\,&\sum_{n=1}^\infty \frac{(x-z)^n}{n!}\sum_{i=0}^{n-1}\mathbf{D}(s^*)^i \times\mathbf{Q}_{21}(s^*) \mathbf{B}(s^*) \times\mathbf{D}(s^*)^{n-1-i}, \nonumber\\[4pt] \mathbf{W}_x(s^*)&\,=\,&\int_{z=0}^x \mathbf{W}(s^*, x-z)dz, \nonumber\\[4pt] \mathbf{Z}_x(s^*,y)&\,=\,&\int_{z=0}^{\min\{x,y\}}\left(\mathbf{W}(s^*, x-z){\textbf{Q}}_{21}(s^*)e^{{\textbf{K}}(s^*)(y-z)}\right. \nonumber\\[4pt] &&\left.+e^{{\textbf{D}}(s^*)(x-z)}{\textbf{Q}}_{21}(s^*)\mathbf{H}(s^*, y-z) \right)dz, \nonumber\\[4pt] {\mathbf{Z}}_x(s^*)&\,=\,&\int_{y=0}^\infty \mathbf{Z}_x(s^*,y)\;dy, \end{eqnarray}

and define the column vectors

(147) \begin{eqnarray*} \widetilde{\mathbf{Z}}_x(s^*) &\,=\,&\widetilde{\mathbf{Z}}_x(s^*)_{11}{\boldsymbol{1}} + \widetilde{\mathbf{Z}}_x(s^*)_{12}{\boldsymbol{1}} \nonumber\\[4pt] && + \left[ \begin{array}{c@{\quad}c} \widetilde{\mathbf{Z}}_x(s^*)_{11} & \widetilde{\mathbf{Z}}_x(s^*)_{12} \end{array} \right] \left[ \begin{array}{c} {\textbf{T}}_{10}\\[4pt] {\textbf{T}}_{20} \end{array} \right] ({-}({{\textbf{T}}}_{00}-s^*{\textbf{I}})^{-1}) {\boldsymbol{1}},\nonumber\\[4pt] \widetilde{\widetilde{\mathbf{Z}}}_x(s^*) &\,=\,&\widetilde{\widetilde{\mathbf{Z}}}_x(s^*)_{21}{\boldsymbol{1}} + \widetilde{\widetilde{\mathbf{Z}}}_x(s^*)_{22}{\boldsymbol{1}} \nonumber\\[4pt] && + \left[ \begin{array}{c@{\quad}c} \widetilde{\widetilde{\mathbf{Z}}}_x(s^*)_{21} & \widetilde{\widetilde{\mathbf{Z}}}_x(s^*)_{22} \end{array} \right] \left[ \begin{array}{c} {\textbf{T}}_{10}\\[4pt] {\textbf{T}}_{20} \end{array} \right] ({-}({{\textbf{T}}}_{00}-s^*{\textbf{I}})^{-1}) {\boldsymbol{1}},\end{eqnarray*}

where

(148) \begin{eqnarray} \widetilde{\mathbf{Z}}_x(s^*)_{11}&\,=\,& {\mathbf{Z}}_x(s^*){\textbf{C}}_1^{-1}, \nonumber\\[4pt] \widetilde{\mathbf{Z}}_x(s^*)_{12}&\,=\,& \left( {\textbf{E}}^{(x)}(s^*)_{21}{\textbf{C}}_1 \mathbf{B}(s^*) +{\mathbf{Z}}_x(s^*){\boldsymbol{\Psi}}(s^*)+\mathbf{W}_x(s^*) \right) {\textbf{C}}_2^{-1}, \nonumber\\[4pt] \widetilde{\widetilde{\mathbf{Z}}}_x(s^*)_{21} &\,=\,& \mathbf{B}(s^*){\textbf{E}}^{(x)}(s^*)_{21} +{\boldsymbol{\Psi}}(s^*)\mathbf{Z}_x(s^*){\textbf{C}}_1^{-1} +\mathbf{H}(s^*){\textbf{C}}_1^{-1}, \nonumber\\[4pt] \widetilde{\widetilde{\mathbf{Z}}}_x(s^*)_{22}&\,=\,& \mathbf{B}(s^*){\textbf{E}}^{(x)}(s^*)_{22} +{\boldsymbol{\Psi}}(s^*) \Big( {\textbf{E}}^{(x)}(s^*)_{21}{\textbf{C}}_1\mathbf{B}(s^*){\textbf{C}}_2^{-1} +\mathbf{Z}_x(s^*){\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1} + \mathbf{W}_x(s^*){\textbf{C}}_2^{-1} \Big) \nonumber\\[4pt] && + \left( {-}({\textbf{K}}(s^*))^{-1}\mathbf{B}(s^*) + {\textbf{H}}(s^*){\boldsymbol{\Psi}}(s^*) \right){\textbf{C}}_2^{-1}, \end{eqnarray}

with ${\textbf{E}}^{(x)}(s^*)_{21}=\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(x)}(s^*)_{21}$ as considered in Remark 2, and

(149) \begin{eqnarray}{\textbf{E}}^{(x)}(s^*)_{22}&\,=\,&{\textbf{E}}^{(x)}(s^*)_{21}{\textbf{C}}_1{\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1}+\int_{y=0}^xe^{{\textbf{D}}(s^*)(x-y)}{\textbf{C}}_2^{-1}dy\nonumber\\[4pt]&\,=\,&{\textbf{E}}^{(x)}(s^*)_{21}{\textbf{C}}_1{\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1}+\int_{w=0}^xe^{{\textbf{D}}(s^*)w}{\textbf{C}}_2^{-1}dw\nonumber\\[4pt]&\,=\,&{\textbf{E}}^{(x)}(s^*)_{21}{\textbf{C}}_1{\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1}+({\textbf{D}}(s^*))^{-1}\left(e^{{\textbf{D}}(s^*)x}-{\textbf{I}}\right){\textbf{C}}_2^{-1}.\end{eqnarray}

Theorem 4. For $x>0$, the matrix $\boldsymbol{\mu}(dy)^{(x)}$ is unique, and

\begin{eqnarray*}\boldsymbol{\mu}(dy)^{(x)}_{21}&\,=\,&diag({\widetilde{\textbf{Z}}_x(s^*)})^{-1}{{\textbf{Z}}_x(s^*,y){\textbf{C}}_1^{-1}}dy,\nonumber\\[8pt] \boldsymbol{\mu}(dy)^{(x)}_{22}&\,=\,&diag({\widetilde{\textbf{Z}}_x(s^*)})^{-1}\Big({\textbf{E}}(dy)^{(x)}(s^*)_{21}{\textbf{C}}_1\mathbf{B}(s^*){\textbf{C}}_2^{-1}+\mathbf{Z}_x(s^*,y){\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1}dy\nonumber\\[8pt] &&+ \mathbf{W}(s^*, x-y){\boldsymbol{1}}\{y<x\}{\textbf{C}}_2^{-1}dy\Big),\\[8pt] \boldsymbol{\mu}(dy)^{(x)}_{20}&\,=\,&\left[\begin{array}{c@{\quad}c}\boldsymbol{\mu}(dy)^{(x)}_{21}&\boldsymbol{\mu}(dy)^{(x)}_{22}\end{array}\right]\left[\begin{array}{c}{\textbf{T}}_{10}\\[4pt] {\textbf{T}}_{20}\end{array}\right]({-}({{\textbf{T}}}_{00}-s^*{\textbf{I}})^{-1}),\nonumber\\[8pt] \boldsymbol{\mu}(dy)^{(x)}_{11}&\,=\,&diag({\widetilde{\widetilde{\textbf{Z}}}_x(s^*)})^{-1}\Big(\mathbf{B}(s^*){\textbf{E}}(dy)^{(x)}(s^*)_{21}+{\boldsymbol{\Psi}}(s^*)\mathbf{Z}_x(s^*,y){\textbf{C}}_1^{-1} dy\\[4pt] &&+\mathbf{H}(s^*, y-x){\textbf{C}}_1^{-1}{\boldsymbol{1}}\{y>x\}dy\Big),\\[4pt]\boldsymbol{\mu}(dy)^{(x)}_{12}&\,=\,&diag({\widetilde{\widetilde{\textbf{Z}}}_x(s^*)})^{-1}\Big\{\Big(\mathbf{B}(s^*){\textbf{E}}(dy)^{(x)}(s^*)_{22}+{\boldsymbol{\Psi}}(s^*)\Big({\textbf{E}}(dy)^{(x)}(s^*)_{21}{\textbf{C}}_1\mathbf{B}(s^*){\textbf{C}}_2^{-1}\nonumber\\[4pt]&&+\mathbf{Z}_x(s^*,y){\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1}dy+ \mathbf{W}(s^*, x-y){\boldsymbol{1}}\{y<x\}{\textbf{C}}_2^{-1}dy\Big)\Big)\nonumber\\[4pt]&&+\left(e^{{\textbf{K}}(s^*)(y-x)}\mathbf{B}(s^*)+{\textbf{H}}(s^*,y-x){\boldsymbol{\Psi}}(s^*)\right){\textbf{C}}_2^{-1}{\boldsymbol{1}}\{y>x\}dy\Big\},\nonumber\\[4pt]\boldsymbol{\mu}(dy)^{(x)}_{10}&\,=\,&\left[\begin{array}{c@{\quad}c}\boldsymbol{\mu}(dy)^{(x)}_{11}&\boldsymbol{\mu}(dy)^{(x)}_{12}\end{array}\right]\left[\begin{array}{c}{\textbf{T}}_{10}\\[4pt]{\textbf{T}}_{20}\end{array}\right]({-}({{\textbf{T}}}_{00}-s^*{\textbf{I}})^{-1}). \end{eqnarray*}

The next corollary follows from Theorems 3 and 4.

Corollary 2. The Yaglom limit depends on the initial position of the fluid level $X(0)=x$ in the model.

Remark 4. It has been conjectured that the Yaglom limit does not depend on the initial position of the Markov process. However, a counterexample to this conjecture has already been provided by Foley and McDonald [26]. Our model produces another counterexample of the same kind.

Proof. Our proof is again based on Theorem 1 and (10). Note that

(150) \begin{align}e^{\mathbf{D}(s)(x-z)}& =\lim_{K\to+\infty}\sum_{n=0}^K\frac{(x-z)^n}{n!}\left(\mathbf{Q}_{22}(s)+\mathbf{Q}_{21}(s){\boldsymbol{\Psi}}(s)\right)^n\nonumber\\[4pt] & =\lim_{K\to+\infty}\sum_{n=0}^K\frac{(x-z)^n}{n!}\left(\mathbf{Q}_{22}(s^*)+\mathbf{Q}_{21}(s^*)({\boldsymbol{\Psi}}(s^*)-\mathbf{B}(s^*)\sqrt{s-s^*}+o(\sqrt{s-s^*}))\right)^n\nonumber\\[4pt] & = e^{\mathbf{D}(s^*)(x-z)}- \sqrt{s-s^*}\mathbf{W}(s^*, x-z)+o(\sqrt{s-s^*}).\end{align}

By (137), (150), Lemmas 5 and 3, and Theorem 2, we have

(151) \begin{eqnarray}{\textbf{E}}(dy)^{(x)}(s)_{21}&\,=\,&\int_{z=0}^{\min\{x,y\}}e^{{\textbf{D}}(s)(x-z)}{\textbf{Q}}_{21}(s)e^{{\textbf{K}}(s)(y-z)}{\textbf{C}}_1^{-1}dzdy\nonumber\\[4pt]&\,=\,&\int_{z=0}^{\min\{x,y\}}\left(e^{\mathbf{D}(s^*)(x-z)}- \sqrt{s-s^*}\mathbf{W}(s^*, x-z)\right)\mathbf{Q}_{21}(s^*)\nonumber\\[4pt]&&\times \left(e^{\mathbf{K}(s^*)(y-z)}- \sqrt{s-s^*}\mathbf{H}(s^*, y-z)\right){\textbf{C}}_1^{-1}dz dy+o(\sqrt{s-s^*})\nonumber\\[4pt]&\,=\,&{\textbf{E}}(dy)^{(x)}(s^*)_{21}- \sqrt{s-s^*}\mathbf{Z}_x(s^*,y){\textbf{C}}_1^{-1} dy +o(\sqrt{s-s^*}),\end{eqnarray}
(152) \begin{align}& {\textbf{E}}(dy)^{(x)}(s)_{22}={\textbf{E}}(dy)^{(x)}(s)_{21}{\textbf{C}}_1{\boldsymbol{\Psi}}(s){\textbf{C}}_2^{-1}+e^{{\textbf{D}}(s)(x-y)}{\textbf{C}}_2^{-1}{\boldsymbol{1}}\{y<x\}dy\nonumber\\[4pt] & = \left({\textbf{E}}(dy)^{(x)}(s^*)_{21}{\textbf{C}}_1- \sqrt{s-s^*}\mathbf{Z}_x(s^*,y) dy\right)\left({\boldsymbol{\Psi}}(s^*)-{\textbf{B}}(s^*)\sqrt{s-s^*}\right){\textbf{C}}_2^{-1}\nonumber\\[4pt] & \quad +\left(e^{\mathbf{D}(s^*)(x-y)}- \sqrt{s-s^*}\mathbf{W}(s^*, x-y)\right){\textbf{C}}_2^{-1}{\boldsymbol{1}}\{y<x\}dy+o(\sqrt{s-s^*})\nonumber\\[4pt] & ={\textbf{E}}(dy)^{(x)}(s^*)_{22}- \sqrt{s-s^*}\Big({\textbf{E}}(dy)^{(x)}(s^*)_{21}{\textbf{C}}_1\mathbf{B}(s^*){\textbf{C}}_2^{-1}+\mathbf{Z}_x(s^*,y){\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1}dy\nonumber\\[4pt] & \quad + \mathbf{W}(s^*, x-y){\boldsymbol{1}}\{y<x\}{\textbf{C}}_2^{-1}dy\Big)+o(\sqrt{s-s^*}),\end{align}
(153) \begin{align}& {{\textbf{E}}(dy)^{(x)}(s)_{20}=\left[\begin{array}{c@{\quad}c}{\textbf{E}}(dy)^{(x)}(s)_{21}&{\textbf{E}}(dy)^{(x)}(s)_{22}\end{array}\right]\left[\begin{array}{c}{\textbf{T}}_{10}\\[4pt] {\textbf{T}}_{20}\end{array}\right]({-}({{\textbf{T}}}_{00}-s{\textbf{I}})^{-1})}\nonumber\\[4pt] & = {\textbf{E}}(dy)^{(x)}(s^*)_{20}\nonumber\\[4pt] & \quad - \sqrt{s-s^*} \left[ \begin{array}{c@{\quad}c} {\mathbf{Z}}_x(s^*,y) & {\textbf{E}}(dy)^{(x)}(s^*)_{21}{\textbf{C}}_1 \mathbf{B}(s^*) +{\mathbf{Z}}_x(s^*,y){\boldsymbol{\Psi}}(s^*)dy+\mathbf{W}(s^*,x-y){\boldsymbol{1}}\{y<x\}dy \end{array} \right] \nonumber\\[4pt] & \quad\times \left[ \begin{array}{c} {\textbf{C}}_1^{-1}{\textbf{T}}_{10}\\[4pt] {\textbf{C}}_2^{-1}{\textbf{T}}_{20} \end{array} \right] ({-}({{\textbf{T}}}_{00}-s^*{\textbf{I}})^{-1})+o(\sqrt{s-s^*}),\end{align}

and

(154) \begin{eqnarray}{\textbf{E}}^{(x)}(s)_2&\,=\,&\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(x)}(s)_{21}{\boldsymbol{1}}+\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(x)}(s)_{22}{\boldsymbol{1}}+\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(x)}(s)_{20}{\boldsymbol{1}}\nonumber\\[4pt]&\,=\,&{\textbf{E}}^{(x)}(s^*)_2- \sqrt{s-s^*}\widetilde{\textbf{Z}}_x(s^*)+o(\sqrt{s-s^*}).\end{eqnarray}

Thus, the expressions for $\boldsymbol{\mu}(dy)^{(x)}_{21}$, $\boldsymbol{\mu}(dy)^{(x)}_{22}$, and $\boldsymbol{\mu}(dy)^{(x)}_{20}$ follow from arguments similar to those in the proof of Theorem 3.

Furthermore, by (137), Lemmas 5 and 3, and Theorem 2, we have

(155) \begin{align}&{{\textbf{E}}(dy)^{(x)}(s)_{11}={\boldsymbol{\Psi}}(s){\textbf{E}}(dy)^{(x)}(s)_{21}+e^{{\textbf{K}}(s)(y-x)}{\textbf{C}}_1^{-1}{\boldsymbol{1}}\{y>x\}dy}\nonumber\\[4pt] & = ({\boldsymbol{\Psi}}(s^*)-\mathbf{B}(s^*)\sqrt{s-s^*})({\textbf{E}}(dy)^{(x)}(s^*)_{21}- \sqrt{s-s^*}\mathbf{Z}_x(s^*,y){\textbf{C}}_1^{-1} dy)\nonumber\\[4pt] & \quad +\left(e^{\mathbf{K}(s^*)(y-x)}- \sqrt{s-s^*}\mathbf{H}(s^*, y-x)\right){\textbf{C}}_1^{-1}{\boldsymbol{1}}\{y>x\}dy+o(\sqrt{s-s^*})\nonumber\\[4pt] & = {\textbf{E}}(dy)^{(x)}(s^*)_{11}- \sqrt{s-s^*}\Big(\mathbf{B}(s^*){\textbf{E}}(dy)^{(x)}(s^*)_{21}+{\boldsymbol{\Psi}}(s^*)\mathbf{Z}_x(s^*,y){\textbf{C}}_1^{-1} dy\nonumber\\[4pt] & \quad +\mathbf{H}(s^*, y-x){\textbf{C}}_1^{-1}{\boldsymbol{1}}\{y>x\}dy\Big)+o(\sqrt{s-s^*})\end{align}

and

(156) \begin{align}& {{\textbf{E}}(dy)^{(x)}(s)_{12}={\boldsymbol{\Psi}}(s){\textbf{E}}(dy)^{(x)}(s)_{22}+e^{{\textbf{K}}(s)(y-x)}{\boldsymbol{\Psi}}(s){\textbf{C}}_2^{-1}{\boldsymbol{1}}\{y> x\}dy}\nonumber\\[4pt] & = \left({\boldsymbol{\Psi}}(s^*)-\mathbf{B}(s^*)\sqrt{s-s^*}\right)\Big( {\textbf{E}}(dy)^{(x)}(s^*)_{22}- \sqrt{s-s^*}\Big({\textbf{E}}(dy)^{(x)}(s^*)_{21}{\textbf{C}}_1\mathbf{B}(s^*){\textbf{C}}_2^{-1}\nonumber\\[4pt] & \quad +\mathbf{Z}_x(s^*,y){\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1}dy+ \mathbf{W}(s^*, x-y){\boldsymbol{1}}\{y<x\}{\textbf{C}}_2^{-1}dy\Big)\Big)\nonumber\\[4pt] & \quad +\Big(e^{{\textbf{K}}(s^*)(y-x)}{\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1}{\boldsymbol{1}}\{y>x\}dy\nonumber\\[4pt] & \quad - \sqrt{s-s^*}\left(e^{{\textbf{K}}(s^*)(y-x)}\mathbf{B}(s^*)+{\textbf{H}}(s^*,y-x){\boldsymbol{\Psi}}(s^*)\right){\textbf{C}}_2^{-1}dy\Big)+o(\sqrt{s-s^*})\nonumber\\[4pt] & = {\textbf{E}}(dy)^{(x)}(s^*)_{12}- \sqrt{s-s^*}\Big(\mathbf{B}(s^*){\textbf{E}}(dy)^{(x)}(s^*)_{22}+{\boldsymbol{\Psi}}(s^*)\Big({\textbf{E}}(dy)^{(x)}(s^*)_{21}{\textbf{C}}_1\mathbf{B}(s^*){\textbf{C}}_2^{-1}\nonumber\\[4pt] & \quad +\mathbf{Z}_x(s^*,y){\boldsymbol{\Psi}}(s^*){\textbf{C}}_2^{-1}dy+ \mathbf{W}(s^*, x-y){\boldsymbol{1}}\{y<x\}{\textbf{C}}_2^{-1}dy\Big)\Big)\nonumber\\[4pt] & \quad - \sqrt{s-s^*}\left(e^{{\textbf{K}}(s^*)(y-x)}\mathbf{B}(s^*)+{\textbf{H}}(s^*,y-x){\boldsymbol{\Psi}}(s^*)\right){\textbf{C}}_2^{-1}{\boldsymbol{1}}\{y>x\}dy+o(\sqrt{s-s^*}).\nonumber\\[4pt] \end{align}

Thus, the expressions for $\boldsymbol{\mu}(dy)^{(x)}_{11}$, $\boldsymbol{\mu}(dy)^{(x)}_{12}$, and $\boldsymbol{\mu}(dy)^{(x)}_{10}$ follow from a similar argument, with

(157) \begin{eqnarray}{\textbf{E}}^{(x)}(s)_1&\,=\,&\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(x)}(s)_{11}{\boldsymbol{1}}+\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(x)}(s)_{12}{\boldsymbol{1}}+\int_{y=0}^{\infty}{\textbf{E}}(dy)^{(x)}(s)_{10}{\boldsymbol{1}}\nonumber\\[4pt]&\,=\,&{\textbf{E}}^{(x)}(s^*)_1- \sqrt{s-s^*}\widetilde{\widetilde{\textbf{Z}}}_x(s^*)+o(\sqrt{s-s^*}).\end{eqnarray}

5. Example with non-scalar ${\boldsymbol{\Psi}}(s)$

Below we construct an example where, unlike in Example 1, key quantities are matrices, rather than scalars. We derive expressions for this example analytically and illustrate these results with some numerical output as well.

Example 2. Consider a system with $N=2$ sources based on an example analysed in [6]. Let $\mathcal{S}=\{1,2,3\}$, $\mathcal{S}_1=\{1\}$, $\mathcal{S}_2=\{2,3\}$, $c_1=1$, $c_2=c_3=-1$, and

\begin{eqnarray}{\textbf{T}}&\,=\,&\left[\begin{array}{c@{\quad}c}{\textbf{T}}_{11}&{\textbf{T}}_{12}\\[4pt] {\textbf{T}}_{21}&{\textbf{T}}_{22}\end{array}\right]=\left[\begin{array}{c|cc}-2\lambda&2\lambda&0\\[4pt] \hline 1&-(1+\lambda)&\lambda\\[4pt] 0&2&-2\end{array}\right],\nonumber\\[4pt]{\textbf{Q}}(s)&\,=\,&\left[\begin{array}{c@{\quad}c}{\textbf{Q}}_{11}(s)&{\textbf{Q}}_{12}(s)\\[4pt]{\textbf{Q}}_{21}(s)&{\textbf{Q}}_{22}(s)\end{array}\right]=\left[\begin{array}{c|cc}-(2\lambda +s)&2\lambda&0\\[4pt]\hline1&-(1+\lambda +s)&\lambda\\[4pt]0&2&-(2+s)\end{array}\right],\nonumber\end{eqnarray}

with some parameter $\lambda>\sqrt{2}-1$ such that the process is stable. In our plots of the output below, we will assume the value $\lambda=2.5$.

Denote by $[x\ z]={\boldsymbol{\Psi}}(s)=\int_{t=0}^{\infty}e^{-st}\boldsymbol{\psi}(t)dt$ the minimum nonnegative solution of (21), here equivalent to

(158) \begin{eqnarray}[0\ 0]&\,=\,&[2\lambda\ 0]-(2\lambda+s)[x\ z]+[x\ z]\left[\begin{array}{c@{\quad}c}-(1+\lambda+s)&\lambda\\[4pt]2&-(2+s)\end{array}\right]\nonumber\\[4pt]&&+[x\ z]\left[\begin{array}{c}1\\[4pt]0\end{array}\right][x\ z],\end{eqnarray}

which we write as a system of equations

(159) \begin{eqnarray} 0&\,=\,&x^2-(1+3\lambda+2s)x+2z+2\lambda,\end{eqnarray}
(160) \begin{eqnarray}0&\,=\,&-(2+2\lambda+2s-x)z+\lambda x.\quad\ \ \ \ \end{eqnarray}

The minimum nonnegative solution $[x\ z]$ of (159)–(160) must be strictly positive, satisfy $2+2\lambda+2s-x> 0$, and occur at the intersection of the two curves

(161) \begin{eqnarray}z=z_1(x,s)&\,=\,&-\frac{1}{2}x^2+\frac{1}{2}(1+3\lambda+2s)x-\lambda,\end{eqnarray}
(162) \begin{eqnarray}z=z_2(x,s)&\,=\,&\lambda x/(2+2\lambda+2s-x).\qquad\ \ \ \ \ \ \end{eqnarray}

To facilitate the analysis that follows, we consider the shape of the curves in (161)–(162); see Figure 2. It is a straightforward exercise to verify that, when $s=0$, we have $z_1(x,0)<-\lambda<z_2(x,0)$ for all $x< 0$, and so the two curves may only intersect at some point $(x, z)$ with $x>0$.

Figure 2: Plots of (161)–(162) for $s=0$ (left) and $s=-2$ (right), when $\lambda=2.5$.

Furthermore, when $2+2\lambda+2s-x> 0$, we have

(163) \begin{eqnarray}\frac{\partial z_2(x,s)}{\partial x}=\frac{\lambda(2+2\lambda+2s-x)+\lambda x}{(2+2\lambda+2s-x)^2} > 0,\end{eqnarray}

and so, when $s=0$, the minimum nonnegative solution $[x\ z]$ of (159)–(160) is in fact the minimum real-valued solution of (159)–(160).
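The intersection just described is easy to locate numerically. The following sketch (illustrative only, not part of the original analysis) bisects $z_1-z_2$ for $\lambda=2.5$ and $s=0$ and confirms that the resulting pair solves (159)–(160):

```python
lam, s = 2.5, 0.0  # the case plotted in Figure 2 (left)

def z1(x):
    """The parabola (161)."""
    return -0.5 * x * x + 0.5 * (1.0 + 3.0 * lam + 2.0 * s) * x - lam

def z2(x):
    """The hyperbola (162)."""
    return lam * x / (2.0 + 2.0 * lam + 2.0 * s - x)

# z1 - z2 is negative at 0 and positive at 1, so the smallest positive
# intersection lies in (0, 1); locate it by bisection
lo, hi = 0.0, 1.0
assert z1(lo) - z2(lo) < 0.0 < z1(hi) - z2(hi)
for _ in range(80):
    mid = 0.5 * (lo + hi)
    if z1(mid) - z2(mid) < 0.0:
        lo = mid
    else:
        hi = mid
x = 0.5 * (lo + hi)
z = z2(x)

# (x, z) is strictly positive and solves the system (159)-(160)
assert x > 0.0 and z > 0.0 and 2.0 + 2.0 * lam + 2.0 * s - x > 0.0
assert abs(x * x - (1.0 + 3.0 * lam + 2.0 * s) * x + 2.0 * z + 2.0 * lam) < 1e-9
assert abs(-(2.0 + 2.0 * lam + 2.0 * s - x) * z + lam * x) < 1e-9
```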

Also, when $x>0$ and $2+2\lambda+2s-x> 0$, we have

(164) \begin{eqnarray}\frac{\partial z_1(x,s)}{\partial s}=x&\ >\ &0,\nonumber\\[4pt] \frac{\partial z_2(x,s)}{\partial s}=\frac{-2\lambda x}{(2+2\lambda+2s-x)^2}&\ <\ &0,\end{eqnarray}

and so as $s\downarrow s^*$ we have $z_1(x,s)\downarrow$ while $z_2(x,s)\uparrow$, until the two curves touch when $s=s^*$, and then move apart when $s<s^*$. Therefore, by the continuity argument for ${\boldsymbol{\Psi}}(s)$ used in the proof of Lemma 4, for all $s\in [s^*,0]$, ${\boldsymbol{\Psi}}(s)=[x\ z]$ is the minimum real-valued solution of (159)–(160).

Instead of looking at the problem in terms of two intersecting curves $z_1(x,s)$ and $z_2(x,s)$, we now look at it in terms of one cubic curve $g_s(x)$. Substitute (162) into (159) and multiply by $(2+2\lambda+2s-x)$ to get

(165) \begin{eqnarray}0&\,=\,&-x^3+(3+5\lambda+4s)x^2-(2+2\lambda+2s)(1+3\lambda+2s)x+(2+2\lambda+2s)2\lambda\nonumber\\[4pt]&\,=\,&g_s(x),\end{eqnarray}

which is of the form

(166) \begin{equation}ax^3+bx^2+cx+d=0,\end{equation}

with $g_s(0)=d>0$ (here $d=(2+2\lambda+2s)2\lambda>0$, because $z>0$ in (162) forces $0<x<2+2\lambda+2s$, and in particular $2+2\lambda+2s>0$). (See the plots of $g_s(x)$ in Figure 3 for the case $\lambda=2.5$.) Noting that $a=-1<0$, we conclude that when $s=s^*$, the solution $[x\ z]$ corresponds to the local minimum,

(167) \begin{equation}x=\min \left\{ \frac{-b+ \sqrt{b^2-3ac}}{3a},\frac{-b - \sqrt{b^2-3ac}}{3a} \right\}=\frac{-b+ \sqrt{b^2-3ac}}{3a},\end{equation}

where

(168) \begin{equation}b^2-3ac>0.\end{equation}

Figure 3: The plot of (165) for $s=0$ (top left), $s=-2$ (top right), and $s=-1.1178$ (bottom), when $\lambda=2.5$.

We transform the cubic equation (166) into the form

(169) \begin{equation}y^3+py+q=0\end{equation}

using the substitution

(170) \begin{eqnarray}x&\,=\,&y-\frac{b}{3a}\end{eqnarray}

with

(171) \begin{eqnarray}p&\,=\,&\frac{3ac-b^2}{3a^2}= s\times c_p^{(1)}+s^2\times c_p^{(2)}+c_p\end{eqnarray}

for suitable $c_p^{(1)}$, $c_p^{(2)}$, $c_p$, and

(172) \begin{eqnarray}q&\,=\,&\frac{2b^3+27a^2d-9abc}{27a^3}= s\times c_q^{(1)}+s^2\times c_q^{(2)}+s^3\times c_q^{(3)}+c_q\end{eqnarray}

for suitable $c_q^{(1)}$, $c_q^{(2)}$, $c_q^{(3)}$, $c_q$.

Below, we write $p(s)$ to emphasize that $p$ is a function of $s$, with similar notation for other quantities such as $q$, $x$, $y$, and so on. Observe that

\begin{eqnarray*}s^3-(s^*)^3&\,=\,&(s-s^*)(s^2+ss^*+(s^*)^2)=C_3\times (s-s^*)+o(s-s^*),\\[4pt]s^2-(s^*)^2&\,=\,&(s-s^*)(s+s^*)=C_2\times (s-s^*)+o(s-s^*),\end{eqnarray*}

where $C_3=3(s^*)^2$ and $C_2=2s^*$, and so by (171)–(172),

(173) \begin{eqnarray}p(s)-p(s^*)&\,=\,&C_p\times (s-s^*)+o(s-s^*),\end{eqnarray}
(174) \begin{eqnarray}q(s)-q(s^*)&\,=\,&C_q\times (s-s^*)+o(s-s^*),\end{eqnarray}

where the constants $C_p$ and $C_q$ are given by

(175) \begin{eqnarray}C_p&\,=\,& c_p^{(1)}+C_2\times c_p^{(2)},\nonumber\\[4pt]C_q&\,=\,&c_q^{(1)}+C_2\times c_q^{(2)}+C_3\times c_q^{(3)}.\end{eqnarray}

Consider (169) and apply Vieta’s substitution,

(176) \begin{equation}y=u-\frac{p}{3u},\end{equation}

where $u^3$ solves the quadratic equation

(177) \begin{equation}(u^3)^2+qu^3-\frac{p^3}{27}=0.\end{equation}

The two solutions are

(178) \begin{equation}u^3(s)=\frac{-q(s)\pm \sqrt{\Delta (s)}}{2}\end{equation}

with

(179) \begin{equation}\Delta (s)= q^2(s)+4\times \frac{p^3(s)}{27},\end{equation}

where $\Delta(s)<0$ for $s>s^*$, and the repeated root requires

(180) \begin{equation}\Delta(s^*)=q^2(s^*)+4\times \frac{p^3(s^*)}{27}=0.\end{equation}
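Condition (180) pins down $s^*$ numerically. The following sketch (not part of the paper) assembles $p$, $q$, and $\Delta$ from (165) and (171)–(172), bisects for the root of $\Delta$ when $\lambda=2.5$, and recovers the double root via (167); the target value $-1.1178$ is the one quoted in the caption of Figure 3:

```python
import math

lam = 2.5  # the value used for the plots

def coeffs(s):
    """Coefficients a, b, c, d of the cubic (165)-(166)."""
    a = -1.0
    b = 3.0 + 5.0 * lam + 4.0 * s
    c = -(2.0 + 2.0 * lam + 2.0 * s) * (1.0 + 3.0 * lam + 2.0 * s)
    d = (2.0 + 2.0 * lam + 2.0 * s) * 2.0 * lam
    return a, b, c, d

def Delta(s):
    """Discriminant (179), built from p and q in (171)-(172)."""
    a, b, c, d = coeffs(s)
    p = (3.0 * a * c - b * b) / (3.0 * a * a)
    q = (2.0 * b ** 3 + 27.0 * a * a * d - 9.0 * a * b * c) / (27.0 * a ** 3)
    return q * q + 4.0 * p ** 3 / 27.0

# Delta > 0 for s < s* and Delta < 0 for s* < s <= 0: bisect on [-2, 0]
lo, hi = -2.0, 0.0
assert Delta(lo) > 0.0 > Delta(hi)
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if Delta(mid) > 0.0:
        lo = mid
    else:
        hi = mid
s_star = 0.5 * (lo + hi)
assert abs(s_star - (-1.1178)) < 5e-3  # the value quoted for Figure 3

# at s*, the minimal solution sits at the local minimum (167), a double root
a, b, c, d = coeffs(s_star)
assert b * b - 3.0 * a * c > 0.0  # condition (168)
x = (-b + math.sqrt(b * b - 3.0 * a * c)) / (3.0 * a)
assert abs(a * x ** 3 + b * x * x + c * x + d) < 1e-8
```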

When $s>s^*$, the three (real) solutions of (169) are the three cube roots

(181) \begin{eqnarray}y_0,y_1,y_2&\,=\,&\left(\frac{-q(s) + \sqrt{\Delta (s)}}{2}\right)^{1/3},\end{eqnarray}

and we choose the minimum,

(182) \begin{equation}y(s)=\min\{y_0(s),y_1(s),y_2(s)\},\end{equation}

which corresponds to the minimum $x(s)={\boldsymbol{\Psi}}(s)_1$, where ${\boldsymbol{\Psi}}(s)_i$ denotes the ith element of ${\boldsymbol{\Psi}}(s)$.
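The Cardano–Vieta recipe in (176)–(182) can be traced numerically with complex arithmetic. A sketch for $\lambda=2.5$ and $s=0$ (our choice of test point in $(s^*,0]$), checking that the three complex cube roots of $u^3$ yield the three real roots of the cubic:

```python
import cmath
import math

lam, s = 2.5, 0.0
a = -1.0
b = 3.0 + 5.0 * lam + 4.0 * s
c = -(2.0 + 2.0 * lam + 2.0 * s) * (1.0 + 3.0 * lam + 2.0 * s)
d = (2.0 + 2.0 * lam + 2.0 * s) * 2.0 * lam

p = (3.0 * a * c - b * b) / (3.0 * a * a)                                  # (171)
q = (2.0 * b ** 3 + 27.0 * a * a * d - 9.0 * a * b * c) / (27.0 * a ** 3)  # (172)
Delta = q * q + 4.0 * p ** 3 / 27.0                                        # (179)
assert Delta < 0.0  # three distinct real roots when s > s*

u3 = (-q + cmath.sqrt(Delta)) / 2.0  # (178); complex, since Delta < 0
roots = []
for k in range(3):
    # k-th complex cube root of u3
    u = abs(u3) ** (1.0 / 3.0) * cmath.exp(1j * (cmath.phase(u3) + 2.0 * math.pi * k) / 3.0)
    y = u - p / (3.0 * u)                  # Vieta's substitution (176)
    assert abs(y ** 3 + p * y + q) < 1e-8  # y solves the depressed cubic (169)
    x = y - b / (3.0 * a)                  # undo the shift (170)
    assert abs(x.imag) < 1e-8              # the imaginary parts cancel
    roots.append(x.real)

x_min = min(roots)  # corresponds to Psi(s)_1
assert abs(a * x_min ** 3 + b * x_min ** 2 + c * x_min + d) < 1e-8
```

Although $\Delta(s)<0$ makes the intermediate quantities complex, the imaginary parts cancel in $y=u-p/(3u)$, and the minimum real root is the one selected in (182).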

Therefore, by (174),

(183) \begin{eqnarray}u^3(s)-u^3(s^*)&\,=\,&\frac{-q(s) + \sqrt{\Delta (s)}}{2}+ \frac{q(s^*)}{2}\nonumber\\[4pt]&\,=\,&-\frac{1}{2}C_q\times (s-s^*) + \frac{1}{2}\sqrt{\Delta(s)}+o(s-s^*).\end{eqnarray}

Now,

(184) \begin{align}\Delta(s) & =\Delta(s)-\Delta(s^*)\nonumber\\[4pt] & = (q(s)-q(s^*))(q(s)+q(s^*))+\frac{4}{27} (p(s)-p(s^*))(p(s)^2+p(s)p(s^*)+p(s^*)^2),\nonumber\\[4pt]\end{align}

and so by (173)–(174),

(185) \begin{equation}\Delta(s)= C_\Delta\times (s-s^*)+o(s-s^*),\end{equation}

where the constant $C_\Delta<0$ is given by

(186) \begin{eqnarray}C_\Delta &\,=\,&C_q\times 2q(s^*)+\frac{4}{27} C_p\times 3p^2(s^*).\end{eqnarray}

Therefore,

\begin{eqnarray*}\lim_{s\downarrow s^*}\left(\frac{u^3(s)-u^3(s^*)}{\sqrt{s-s^*}}\right)&\,=\,&\lim_{s\downarrow s^*}\left({-}\frac{1}{2} C_q\sqrt{s-s^*}+\frac{1}{2}\sqrt{\frac{C_\Delta\times (s-s^*)+o(s-s^*)}{s-s^*}}\right)\\[4pt]&\,=\,&\frac{1}{2}\sqrt{C_\Delta},\end{eqnarray*}

and

\begin{eqnarray*}\lim_{s\downarrow s^*}\left(\frac{u(s)-u(s^*)}{\sqrt{s-s^*}}\right)&\,=\,&\lim_{s\downarrow s^*}\left(\frac{u(s)-u(s^*)}{\sqrt{s-s^*}}\times\frac{u^2(s)+u(s)u(s^*)+u^2(s^*)}{u^2(s)+u(s)u(s^*)+u^2(s^*)}\right)\nonumber\\[4pt]&\,=\,&\lim_{s\downarrow s^*}\left(\frac{u^3(s)-u^3(s^*)}{\sqrt{s-s^*}}\times\frac{1}{u^2(s)+u(s)u(s^*)+u^2(s^*)}\right)\nonumber\\[4pt]&\,=\,&\frac{1}{6u^2(s^*)}\sqrt{C_\Delta},\end{eqnarray*}

where $u(s^*)\neq 0$ by (177), since $p(s^*)\neq 0$ by (168) and (171).

From the above we conclude that by (176),

\begin{align*} & {\lim_{s\downarrow s^*}\left(\frac{y(s)-y(s^*)}{\sqrt{s-s^*}}\right)=\lim_{s\downarrow s^*}\left(\frac{u(s)-u(s^*)}{\sqrt{s-s^*}} -\frac{1}{3\sqrt{s-s^*}}\left(\frac{p(s)}{u(s)}-\frac{p(s^*)}{u(s^*)}\right)\right)}\\[4pt] & = \lim_{s\downarrow s^*}\left(\frac{u(s)-u(s^*)}{\sqrt{s-s^*}}-\frac{(p(s)-p(s^*))u(s^*)}{3\sqrt{s-s^*}u(s)u(s^*)}+\frac{p(s^*)(u(s)-u(s^*))}{3\sqrt{s-s^*}u(s)u(s^*)}\right)\\[4pt] & = \frac{1}{6u^2(s^*)}\sqrt{C_\Delta}-0+\frac{1}{6u^2(s^*)}\sqrt{C_\Delta}\,\frac{p(s^*)}{3u^2(s^*)}\\[4pt] & = \frac{1}{6u^2(s^*)}\sqrt{C_\Delta}\left(1+\frac{p(s^*)}{3u^2(s^*)}\right). \end{align*}

Therefore, by (170), we have

(187) \begin{eqnarray}\lim_{s\downarrow s^*}\left(\frac{{\boldsymbol{\Psi}}(s)_1-{\boldsymbol{\Psi}}(s^*)_1}{\sqrt{s-s^*}}\right)&\,=\,&\lim_{s\downarrow s^*}\left(\frac{x(s)-x(s^*)}{\sqrt{s-s^*}}\right)\nonumber\\[4pt]&\,=\,&\lim_{s\downarrow s^*}\left(\frac{y(s)-y(s^*)}{\sqrt{s-s^*}} +o(1)\right)\nonumber\\[4pt]&\,=\,&\frac{1}{6u^2(s^*)}\sqrt{C_\Delta}\left(1+\frac{p(s^*)}{3u^2(s^*)}\right)\nonumber\\[4pt]&\,=\,&-\mathbf{B}(s^*)_1.\end{eqnarray}

Furthermore, by (162),

(188) \begin{align}& { \lim_{s\downarrow s^*} \left(\frac{{\boldsymbol{\Psi}}(s)_2-{\boldsymbol{\Psi}}(s^*)_2}{\sqrt{s-s^*}}\right)= \lim_{s\downarrow s^*} \left( \frac{z(s)-z(s^*)}{\sqrt{s-s^*}} \right)}\nonumber\\[4pt] & = \lim_{s\downarrow s^*} \left(\frac{1}{\sqrt{s-s^*}}\left(\frac{\lambda x(s)}{2+2\lambda+2s-x(s)}-\frac{\lambda x(s^*)}{2+2\lambda+2s^*-x(s^*)}\right)\right)\nonumber\\[4pt] & =-\frac{2\lambda(1+\lambda+s^*)}{(2+2\lambda+2s^*-x(s^*))^2}\mathbf{B}(s^*)_1\nonumber\\[4pt] & = -\mathbf{B}(s^*)_2,\end{align}

which gives

(189) \begin{equation}\lim_{s\downarrow s^*}\left(\frac{{\boldsymbol{\Psi}}(s)-{\boldsymbol{\Psi}}(s^*)}{\sqrt{s-s^*}}\right)=-[\mathbf{B}(s^*)_1\ \ \ \mathbf{B}(s^*)_2]=-{\textbf{B}}(s^*),\end{equation}

as expected (cf. (89)).
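The multiplicative constant in (188) can be sanity-checked against the rounded numerical values reported below for $\lambda=2.5$ (namely $s^*\approx-1.1178$, $x(s^*)\approx 1.7878$, and $\mathbf{B}(s^*)$ from (195)); this checks only the rounded figures, not the derivation itself.

```python
# Check the multiplicative constant in (188) with the rounded values
# reported for lambda = 2.5: s* ~ -1.1178, x(s*) ~ 1.7878,
# B(s*) ~ [1.6416, 2.2069].
lam = 2.5
s_star = -1.1178
x_star = 1.7878
B1, B2 = 1.6416, 2.2069

factor = 2.0 * lam * (1.0 + lam + s_star) / (2.0 + 2.0 * lam + 2.0 * s_star - x_star) ** 2
# (188) gives lim = -factor * B1 = -B2, i.e. factor * B1 ~ B2.
assert abs(factor * B1 - B2) < 1e-3
```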

Now, assuming $\lambda=2.5$, we solve (180) numerically, obtaining

(190) \begin{eqnarray}s^* \approx -1.1178.\end{eqnarray}
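The numerical solution of (180) amounts to locating a sign change of $\Delta(s)$ and refining it with a one-dimensional root finder. The sketch below shows the pattern; here p_fn and q_fn are hypothetical placeholder coefficient functions, not the $p(s)$, $q(s)$ of this example.

```python
# Illustrative only: p_fn and q_fn are hypothetical placeholders that
# merely demonstrate how s* is located as the root of Delta(s) = 0 in (180).
def p_fn(s):
    return s - 2.0

def q_fn(s):
    return s + 1.0

def Delta(s):                           # cf. (179)
    return q_fn(s) ** 2 + 4.0 * p_fn(s) ** 3 / 27.0

def bisect_root(f, a, b, tol=1e-12):
    """Simple bisection; assumes f(a) and f(b) have opposite signs."""
    fa = f(a)
    while b - a > tol:
        m = 0.5 * (a + b)
        if (f(m) > 0.0) == (fa > 0.0):
            a, fa = m, f(m)
        else:
            b = m
    return 0.5 * (a + b)

s_star = bisect_root(Delta, 0.0, 1.0)   # Delta changes sign on [0, 1] here
assert abs(Delta(s_star)) < 1e-8
```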

We evaluate $[x\ \ z]={\boldsymbol{\Psi}}(s^*)$, using (167) to get $x$ and then (162) to get $z$:

(191) \begin{eqnarray}{\boldsymbol{\Psi}}(s^*) \approx [1.7878\quad 1.5016].\end{eqnarray}

We obtain ${\textbf{K}}(s^*)$ and ${\textbf{D}}(s^*)$ using (20):

(192) \begin{eqnarray}{\textbf{K}}(s^*) \approx [-2.0944],\end{eqnarray}
(193) \begin{eqnarray}{\textbf{D}}(s^*) \approx \left[\begin{array}{c@{\quad}c}-0.5944 & 4.0016\\[4pt] 2.0000 & -0.8822\end{array}\right],\end{eqnarray}

so that ${\textbf{K}}(s^*)$ and $-{\textbf{D}}(s^*)$ have the common eigenvalue $\gamma \approx -2.0944$. Also, we use (83) to obtain

(194) \begin{eqnarray}{\textbf{U}}(s^*) \approx [3.5756\quad 3.0031].\end{eqnarray}

Finally, we evaluate ${\textbf{B}}(s^*)$ using (187) and ${\textbf{Y}}(s^*)$ using (90):

(195) \begin{eqnarray}{\textbf{B}}(s^*) \approx [1.6416 \quad 2.2069],\end{eqnarray}
(196) \begin{eqnarray}{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*){\textbf{B}}(s^*) \approx [2.6948\quad 3.6228],\end{eqnarray}
(197) \begin{eqnarray}{\textbf{Y}}(s^*) \approx [0.8808 \quad -0.6197].\end{eqnarray}

This yields

(198) \begin{eqnarray}{\textbf{K}}(s^*){\textbf{B}}(s^*)+{\textbf{B}}(s^*){\textbf{D}}(s^*) \approx 10^{-14}\times [0.7550 \quad -0.0888],\end{eqnarray}

which is approximately zero, as expected (cf. (91)).
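The near-zero residual in (198) is easy to reproduce from the rounded values above; in this sketch the matrix displayed in (193) is taken as the $\mathbf{D}(s^*)$ entering (198), and the residual is zero only up to the rounding of the inputs.

```python
import numpy as np

# Rounded values from (192), (193), and (195).
K = np.array([[-2.0944]])
B = np.array([[1.6416, 2.2069]])
D_mat = np.array([[-0.5944, 4.0016],
                  [2.0000, -0.8822]])

residual = K @ B + B @ D_mat            # cf. (91) and (198)
assert np.max(np.abs(residual)) < 1e-2  # zero up to rounding of the inputs
```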

Finally,

(199) \begin{eqnarray}{\textbf{H}}(s^*,y)&\,=\,&\sum_{n=1}^{\infty}\frac{y^n}{n!}{\textbf{H}}_{1,n}(s^*)\nonumber\\[4pt]&\,=\,&\sum_{n=1}^{\infty}\frac{y^n}{n!}\sum_{i=0}^{n-1}\left({\textbf{K}}(s^*) \right)^i\times\mathbf{B}(s^*){\textbf{Q}}_{21}(s^*)\times\left({\textbf{K}}(s^*) \right)^{n-1-i}\nonumber\\[4pt]&\,=\,&\sum_{n=1}^{\infty}\frac{y^n}{n!}\sum_{i=0}^{n-1}\left( {\textbf{K}}(s^*)\right)^{n-1}\left(\mathbf{B}(s^*){\textbf{Q}}_{21}(s^*)\right)\nonumber\\[4pt]&\,=\,&ye^{{\textbf{K}}(s^*)y}\mathbf{B}(s^*){\textbf{Q}}_{21}(s^*)\nonumber\\[4pt]&\approx&1.6416 ye^{-2.0944\times y},\end{eqnarray}

and

(200) \begin{eqnarray}{\textbf{H}}(s^*)&\,=\,&\int_{y=0}^{\infty}ye^{{\textbf{K}}(s^*)y}{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*)dy\nonumber\\[4pt]&\,=\,&({\textbf{K}}(s^*))^{-2}{\textbf{B}}(s^*){\textbf{Q}}_{21}(s^*),\nonumber\\[4pt]\widetilde{\textbf{H}}(s^*)&\,=\,&{\textbf{H}}(s^*)+\left({-}({\textbf{K}}(s^*))^{-1}\mathbf{B}(s^*)+{\textbf{H}}(s^*){\boldsymbol{\Psi}}(s^*)\right){\boldsymbol{1}}\nonumber\\[4pt]&\approx& 3.4428,\end{eqnarray}

so

(201) \begin{eqnarray} \boldsymbol{\mu}(dy)^{(0)}_{11}&\,=\,& diag({\widetilde{\textbf{H}}(s^*)})^{-1} { {\textbf{H}}(s^*,y)} dy \nonumber\\[4pt] &\approx& 0.2905\times 1.6416 ye^{-2.0944\times y} dy ,\nonumber\\[4pt] \boldsymbol{\mu}(dy)^{(0)}_{12}&\,=\,& diag({\widetilde{\textbf{H}}(s^*)})^{-1} \left( e^{{\textbf{K}}(s^*)y}{\textbf{B}}(s^*) + {\textbf{H}}(s^*,y){\boldsymbol{\Psi}}(s^*) \right) dy \nonumber\\[4pt] &\approx& 0.2905\times \left( [1.6416\quad 2.2069] +1.6416 y [1.7878\quad 1.5016] \right) e^{-2.0944\times y} dy , \nonumber\\[4pt] \end{eqnarray}

where $\boldsymbol{\mu}(dy)^{(0)}_{ij}$ is the limiting conditional distribution (Yaglom limit) of observing the process in level $y$ and phase $j$, given that the process started from level zero in phase $i$ at time zero and has been evolving without hitting level zero.
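As a final consistency check on the rounded constants, the Yaglom limit in (201) should be a probability measure: the masses of $\boldsymbol{\mu}(dy)^{(0)}_{11}$ and of both components of $\boldsymbol{\mu}(dy)^{(0)}_{12}$ over $y>0$ should sum to one. Using $\int_0^\infty y\,e^{ky}\,dy=k^{-2}$ and $\int_0^\infty e^{ky}\,dy=-k^{-1}$ for $k<0$, this follows from the values reported above:

```python
# Total mass of the Yaglom limit in (201), from the rounded constants.
k = -2.0944                  # K(s*), cf. (192)
c = 1.0 / 3.4428             # diag(H~(s*))^{-1}, cf. (200)
B = [1.6416, 2.2069]         # B(s*), cf. (195)
Psi = [1.7878, 1.5016]       # Psi(s*), cf. (191)
BQ21 = 1.6416                # the scalar B(s*)Q21(s*), cf. (199)

H = BQ21 / k**2                                        # H(s*), cf. (200)
mass_11 = c * H                                        # integral of mu(dy)_{11}
mass_12 = sum(c * (B[j] / (-k) + H * Psi[j]) for j in range(2))
assert abs(mass_11 + mass_12 - 1.0) < 1e-3             # probability measure
```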

Acknowledgements

We would like to thank the Australian Research Council for funding this research through Linkage Project LP140100152. Zbigniew Palmowski was partially supported by the National Science Centre (Poland) under the grant 2018/29/B/ST1/00756.


Figure 1: The values of $\boldsymbol{\mu}(dy)^{(0)}_{11}/dy$ and $\boldsymbol{\mu}(dy)^{(0)}_{12}/dy$ in Example 4 for $b=1$, $a=4,3,2$ (dotted, solid, and dashed lines, respectively).


Figure 2: Plots of (161)–(162) for $s=0$ (left) and $s=-2$ (right), when $\lambda=2.5$.


Figure 3: The plot of (165) for $s=0$ (top left), $s=-2$ (top right), and $s=-1.1178$ (bottom), when $\lambda=2.5$.