
Generalised Liouville processes and their properties

Published online by Cambridge University Press:  23 November 2020

Edward Hoyle*
Affiliation:
AHL Partners LLP
Levent Ali Menguturk**
Affiliation:
University College London
*Postal address: AHL Partners LLP, Man Group plc, London EC4R 3AD, UK.
**Postal address: Department of Mathematics, University College London, London WC1E 6BT, UK.

Abstract

We define a new family of multivariate stochastic processes over a finite time horizon that we call generalised Liouville processes (GLPs). GLPs are Markov processes constructed by splitting Lévy random bridges into non-overlapping subprocesses via time changes. We show that the terminal values and the increments of GLPs have generalised multivariate Liouville distributions, justifying their name. We provide various other properties of GLPs and some examples.

Type
Research Papers
Copyright
© Applied Probability Trust 2020

1. Introduction

Lévy random bridges (LRBs) – Lévy processes conditioned to have a fixed marginal law at a fixed future date – have been applied to various problems in credit risk modelling, asset pricing, and insurance (see e.g. [Reference Brody, Davis, Friedman and Hughston3], [Reference Brody, Hughston and Macrina4], [Reference Brody, Hughston and Macrina5], [Reference Brody, Hughston and Macrina6], and [Reference Hoyle, Hughston and Macrina15]). In [Reference Hoyle, Hughston and Macrina16], the authors present a bivariate insurance reserving model by splitting an LRB (in this case based on the 1/2-stable subordinator) in two. The two subprocesses are transformed to span the same time horizon, and are used to model the accumulation of insurance claims. In a similar fashion, the present authors constructed in [Reference Hoyle and Menguturk14] two classes of multivariate process by splitting and transforming an LRB based on the gamma process. The first class, Archimedean survival processes, provides a natural link between stochastic processes and Archimedean copulas, and was applied to a copula interpolation problem. The second, more general class was the class of Liouville processes, so named because the finite-dimensional distributions of a Liouville process are multivariate Liouville distributions [Reference Fang, Kotz and Ng8, Reference Gupta and Richards10, Reference Gupta and Richards11, Reference Gupta and Richards12]. This more general class was applied to the joint modelling of realised variance for two stock indices.

We extend the splitting and transformation mechanism to a general LRB to create what we call a generalised Liouville process (GLP). We show that the sum of the coordinates of a GLP is a one-dimensional LRB, and prove that the finite-dimensional distributions of GLPs are generalised multivariate Liouville distributions as defined in [Reference Gupta and Richards13]. We show that GLPs are Markov processes and that there exists a measure change under which the law of an n-dimensional GLP is that of a vector of n independent Lévy processes. We prove that any integrable GLP admits a canonical semimartingale representation with respect to its natural filtration. We also show that GLPs are multivariate harnesses. We prove that GLPs satisfy the weak Markov consistency condition, but not necessarily the strong Markov consistency condition. Similarly, we introduce what we call weak and strong semimartingale consistency properties, and show that GLPs have the former, but not necessarily the latter. The class of GLPs contains as special cases Archimedean survival processes, Liouville processes, and the bivariate process based on the 1/2-stable subordinator.

Throughout much of this work we focus on processes taking continuous values. Although we omit the details, many of the results extend straightforwardly to processes on a lattice. Indeed, we later provide examples of both a continuous and a discrete GLP: we consider what we call Brownian Liouville processes and Poisson Liouville processes, and present some of their special characteristics.

2. Preliminaries

Throughout this work, for a vector $\textbf{x}\in\mathbb{R}^n$ , we denote the sum of its coordinates by $\textbf{1}\cdot \textbf{x}=\sum_i x_i$ . We work on a probability space $(\Omega,\mathcal{F},\mathbb{P})$ equipped with a filtration $\{\mathcal{F}_{t}\}_{t\geq0}$ . We fix a finite time horizon $[0,T]$ for some $T<\infty$ and assume $\{\mathcal{F}_{t}\}_{0\leq t \leq T}$ and all its sub-filtrations are right-continuous and complete. Unless stated otherwise, every stochastic process is càdlàg with a state-space that is a continuous subspace of $(\mathbb{R}^n,\mathcal{B}(\mathbb{R}^n))$ for some $n\in\mathbb{N}_+$ , where $\mathcal{B}(\mathbb{R}^n)$ is the Borel $\sigma$ -field.

Let $\{X_{t}\}_{t\geq 0}$ be a Lévy process taking values in $\mathbb{R}$ , such that the law of $X_{t}$ is absolutely continuous with respect to the Lebesgue measure for every $t\in[0,T]$ . In this case the density $f_t$ of $X_t$ exists and satisfies the Chapman–Kolmogorov convolution identity $f_{t}(x)=\int_{\mathbb{R}}f_{t-s}(x-y)f_{s}(y)\,\textrm{d} y$ , for $0<s<t\leq T$ and $x\in\mathbb{R}$ . Having independent and stationary increments, the finite-dimensional law of $\{X_{t}\}_{0\leq t \leq T}$ is given by

\[\mathbb{P}(X_{t_{1}}\in \,\textrm{d} x_{1},\ldots, X_{t_{m}}\in \,\textrm{d} x_{m})=\prod^{m}_{i=1}f_{t_{i}-t_{i-1}}(x_{i}-x_{i-1})\,\textrm{d} x_{i}\]

for $m\in\mathbb{N}_{+}$, $0<t_{1}<\cdots <t_{m}<T$, and $x_{1},\ldots,x_{m}\in\mathbb{R}$, with the conventions $t_{0}=0$ and $x_{0}=0$.
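For instance, with a standard Brownian motion the densities $f_t$ are Gaussian, and the Chapman–Kolmogorov convolution identity can be checked numerically; the following sketch is purely illustrative (the choices of $s$, $t$, $x$, and the integration grid are arbitrary):

```python
import numpy as np

def f(t, x):
    # Marginal density of standard Brownian motion: N(0, t)
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

s, t, x = 0.4, 1.0, 0.7
y = np.linspace(-10.0, 10.0, 20001)

# Chapman-Kolmogorov: f_t(x) = integral of f_{t-s}(x - y) f_s(y) dy
conv = np.trapz(f(t - s, x - y) * f(s, y), y)
assert abs(conv - f(t, x)) < 1e-10
```

The trapezoidal rule is extremely accurate here because the Gaussian integrand and its derivatives vanish at the boundary of the grid.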

A Lévy bridge is a Lévy process conditioned to take some fixed value at a fixed future time. Since Lévy processes are homogeneous strong Markov processes, the definition of their bridges can be formalised in terms of Doob h-transformations. See [Reference Fitzsimmons, Pitman and Yor9] for further details on the bridges of Markov processes. Let $\{X^{(z)}_{t,T}\}_{0\leq t \leq T}$ be a bridge of $\{X_{t}\}_{0\leq t \leq T}$ to the value $z\in\mathbb{R}$ at time T, where $0<f_{T}(z)<\infty$ . The transition density of $\{X^{(z)}_{t,T}\}_{0\leq t < T}$ is given by the following Doob h-transform of the transition density of $\{X_{t}\}_{0\leq t < T}$ :

(2.1) \begin{equation} \mathbb{P}\Big(X^{(z)}_{t,T}\in \,\textrm{d} x \mid X^{(z)}_{s,T}=y\Big)=\dfrac{h_t(x)}{h_s(y)}f_{t-s}(x-y)\,\textrm{d} x\end{equation}

for $0\leq s <t <T$ , where $h_t(x)=f_{T-t}(z-x)$ . Note that $\{h_t\}_{0\leq t < T}$ defined as such is harmonic with respect to $\{X_{t}\}_{0\leq t < T}$ . Note also that $\mathbb{P}(0<h_t(X^{(z)}_{t,T})<\infty)=1$ for all $0\leq t < T$ , so the ratios of densities in (2.1) are almost surely well-defined (this is discussed in the remark following Proposition 1 of [Reference Fitzsimmons, Pitman and Yor9]). Similar ratios feature throughout this work and are likewise almost surely well-defined, and we may pass by them without further comment.
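In the Brownian case the $h$-transform in (2.1) can be made concrete: the resulting transition density is that of a Brownian bridge, with conditional mean $y+\frac{t-s}{T-s}(z-y)$ and variance $\frac{(t-s)(T-t)}{T-s}$. A numerical sketch (all parameter values illustrative):

```python
import numpy as np

def f(t, x):
    # Marginal density of standard Brownian motion: N(0, t)
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

T, z = 1.0, 0.8                 # bridge pinned to z at time T
s, t, y = 0.2, 0.5, -0.3        # current state X^{(z)}_{s,T} = y
x = np.linspace(-12.0, 12.0, 40001)

# Transition density (2.1) with h_t(x) = f_{T-t}(z - x)
dens = f(T - t, z - x) / f(T - s, z - y) * f(t - s, x - y)

mass = np.trapz(dens, x)
mean = np.trapz(x * dens, x)
var = np.trapz((x - mean) ** 2 * dens, x)
assert abs(mass - 1.0) < 1e-9
assert abs(mean - (y + (t - s) / (T - s) * (z - y))) < 1e-9
assert abs(var - (t - s) * (T - t) / (T - s)) < 1e-9
```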

Lévy random bridges (LRBs) are an extension of Lévy bridges. Their interpretation in [Reference Hoyle, Hughston and Macrina15] is as a bridge to an arbitrary random variable at time T, rather than a fixed value. A process $\{L_{t}\}_{0\leq t \leq T}$ is an LRB with generating law $\nu$ if it satisfies: (i) $L_{T}$ has marginal law $\nu$ , (ii) there exists a Lévy process $\{X_{t}\}_{0\leq t \leq T}$ such that the density $f_t$ of $X_t$ exists for all $t\in(0,T]$ , (iii) $\nu$ concentrates mass where $f_{T}$ is positive and finite $\nu$ -a.s., (iv) for all $m\in\mathbb{N}_{+}$ , every $0<t_{1}<\cdots<t_{m}<T$ , every $(x_{1},\ldots,x_{m})\in\mathbb{R}^{m}$ , and $\nu$ -a.e. z,

\[\mathbb{P}(L_{t_{1}}\leq x_{1},\ldots, L_{t_{m}}\leq x_{m} \mid L_{T}=z)=\mathbb{P}(X_{t_{1}}\leq x_{1},\ldots, X_{t_{m}}\leq x_{m} \mid X_{T}=z).\]

The finite-dimensional distribution of $\{L_{t}\}_{0\leq t \leq T}$ is given by

(2.2) \begin{equation}\mathbb{P}(L_{t_{1}}\in \,\textrm{d} x_{1},\ldots, L_{t_{m}}\in \,\textrm{d} x_{m}, L_{T}\in \,\textrm{d} z)=\prod^{m}_{i=1}(f_{t_{i}-t_{i-1}}(x_{i}-x_{i-1})\,\textrm{d} x_{i})\vartheta_{t_{m}}(\textrm{d} z;\, x_{m}),\end{equation}

where $\vartheta_{0}(\textrm{d} z;\, y)=\nu(\textrm{d} z)$ and $\vartheta_{t}(\textrm{d} z;\, y)=\nu(\textrm{d} z)f_{T-t}(z-y)/f_{T}(z)$ for $t\in(0,T)$ . It follows that LRBs are Markov processes with stationary increments, where the transition law of $\{L_{t}\}_{0\leq t \leq T}$ is

(2.3) \begin{equation}\mathbb{P}(L_{T}\in \,\textrm{d} z \mid L_{s}=y)=\dfrac{\vartheta_{s}(\textrm{d} z;\, y)}{\vartheta_{s}(\mathbb{R};\, y)}, \quad\mathbb{P}(L_{t}\in \,\textrm{d} x \mid L_{s}=y)=\dfrac{\vartheta_{t}(\mathbb{R};\, x)}{\vartheta_{s}(\mathbb{R};\, y)}f_{t-s}(x-y)\,\textrm{d} x\end{equation}

for $0\leq s < t$ . We note that the finite-dimensional distributions of LRBs with discrete state-spaces have similar transition probabilities given in terms of probability mass functions (for details see [Reference Hoyle, Hughston and Macrina15]). The extension of many later results to discrete processes follows from this.
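As a sanity check, for a Brownian master process and an illustrative two-point generating law $\nu$, the transition density in (2.3) integrates to one; this normalisation reflects the harmonicity behind Remark 2.1 below. A numerical sketch:

```python
import numpy as np

def f(t, x):
    # Marginal density of standard Brownian motion: N(0, t)
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

T = 1.0
zs, ps = np.array([-1.0, 1.0]), np.array([0.5, 0.5])   # two-point generating law

def Theta(t, x):
    # vartheta_t(R; x) = integral of f_{T-t}(z - x) / f_T(z) nu(dz)
    return sum(p * f(T - t, z - x) / f(T, z) for p, z in zip(ps, zs))

s, t, y = 0.2, 0.6, 0.3
x = np.linspace(-12.0, 12.0, 40001)

# The transition density in (2.3) should integrate to one
total = np.trapz(Theta(t, x) / Theta(s, y) * f(t - s, x - y), x)
assert abs(total - 1.0) < 1e-9
```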

Remark 2.1. Note that (2.3) is also a Doob h-transform of the transition density of $\{X_{t}\}_{0\leq t < T}$ , and $\{\vartheta_{t}(\mathbb{R};\, X_t)\}_{0\leq t < T}$ is a positive $(\mathcal{F}_t^{X},\mathbb{P})$ -martingale, where $\mathcal{F}_t^{X}=\sigma(\{X_u\}\colon 0\leq u \leq t)$ .

Let $X_1,\ldots,X_n$ be random variables taking values in $\mathbb{R}$ with a joint density of the form

(2.4) \begin{equation}p\biggl( \sum_{i=1}^n x_i \biggr)\prod_{i=1}^n \phi_{a_i}(x_i),\end{equation}

where $a_1,\ldots,a_n>0$ are parameters, and the set of functions $\{\phi_a\colon a>0\}$ satisfies the convolution property $\phi_{a}*\phi_{b}=\phi_{a+b}$ . In [Reference Gupta and Richards13], this is referred to as a ‘Liouville density function’. Indeed, according to the definition given in [Reference Gupta and Richards13], $(X_1,\ldots,X_n)$ then has a Liouville distribution, although we prefer to refer to this as the generalised Liouville distribution to distinguish it from the original and special case that $\{\phi_a\}$ are gamma densities (see [Reference Fang, Kotz and Ng8], [Reference Gupta and Richards10], [Reference Gupta and Richards11], and [Reference Gupta and Richards12]). The actual definition of the generalised Liouville distribution given in [Reference Gupta and Richards13] replaces the functions $\{\phi_{a}\}$ with measures, and so it includes examples where the joint density may not exist. For our purposes, it is convenient to relax (2.4) in a different way. We keep $\{\phi_a\}$ , but replace the function p with a measure $\nu$ .

Definition 2.1. Let $X_1,\ldots,X_n$ be random variables taking values in $\mathbb{R}$ , let $\nu\colon \mathcal{B}(\mathbb{R})\rightarrow\mathbb{R}_+$ be a probability law, and let $\mathcal{A}=\{\phi_a\colon 0<a\leq A < \infty\}$ be a family of functions satisfying the convolution property: $\phi_{a}*\phi_{b}=\phi_{a+b}$ , for $a+b\leq A$ . Then $(X_1,\ldots,X_n)$ has a generalised multivariate Liouville distribution if its joint probability law is of the form

(2.5) \begin{equation}\mathbb{P}\biggl(X_1\in \textrm{d} x_1, \ldots, X_{n-1}\in \textrm{d} x_{n-1}, \sum_{i=1}^n X_{i}\in \textrm{d} z\biggr) = \dfrac{\phi_{a_n}\big(z-\sum_{i=1}^{n-1}x_i\big)\nu(\textrm{d} z )}{\phi_{\textbf{1}\cdot \textbf{a}}(z)} \prod_{i=1}^{n-1} \phi_{a_i}(x_i) \,\textrm{d} x_i\end{equation}

for $x_1,\ldots,x_n\in \mathbb{R}$ , $\phi_{a_1},\ldots,\phi_{a_n}\in\mathcal{A}$ , $\textbf{a}=(a_1,\ldots,a_n)^\top\in\mathbb{R}^n_+$ , $\textbf{1}\cdot \textbf{a}\leq A$ .

Remark 2.2. Writing $B+x=\{y\colon y-x\in B\}$ for $B\subset\mathbb{R}$ and $x\in\mathbb{R}$ , we see that (2.5) is equivalent to

(2.6) \begin{align}&\mathbb{P}(X_1\in \textrm{d} x_1, \ldots, X_{n-1}\in \textrm{d} x_{n-1}, X_{n}\in B) \nonumber \\* &\quad =\prod_{i=1}^{n-1} (\phi_{a_i}(x_i) \,\textrm{d} x_i)\int_{z\in B+\sum_{i=1}^{n-1}x_i} \dfrac{\phi_{a_n}\big(z-\sum_{i=1}^{n-1}x_i\big)}{\phi_{\textbf{1}\cdot \textbf{a}}(z)} \nu(\textrm{d} z ) \nonumber \\* &\quad =\prod_{i=1}^{n-1} (\phi_{a_i}(x_i) \,\textrm{d} x_i)\int_{x_n\in B} \dfrac{\phi_{a_n}(x_n)}{\phi_{\textbf{1}\cdot\textbf{a}}(\!\sum_ix_i)}\nu\biggl(\sum_{i=1}^{n-1}x_i + \textrm{d} x_n \biggr).\end{align}

Furthermore, if $\nu$ admits a density p, then (2.6) can be written in the form of a Liouville density:

\begin{equation*}\mathbb{P}(X_1\in \textrm{d} x_1, \ldots, X_{n-1}\in \textrm{d} x_{n-1}, X_{n}\in \textrm{d} x_n)=\dfrac{p(\!\sum_{i}x_i)}{\phi_{\textbf{1}\cdot \textbf{a}}(\!\sum_ix_i)}\prod_{i=1}^n (\phi_{a_i}(x_i) \,\textrm{d} x_i).\end{equation*}
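In the original Liouville case the functions $\phi_a$ are gamma densities, for which the convolution property $\phi_a*\phi_b=\phi_{a+b}$ is the familiar additivity of independent gamma random variables with a common scale. A quick numerical check (shape parameters and evaluation point illustrative):

```python
import numpy as np
from math import gamma

def phi(a, x):
    # Gamma density with shape a and unit scale, supported on (0, infinity)
    x = np.asarray(x, dtype=float)
    out = np.zeros_like(x)
    pos = x > 0
    out[pos] = x[pos] ** (a - 1) * np.exp(-x[pos]) / gamma(a)
    return out

a, b, x0 = 2.0, 3.0, 3.0
u = np.linspace(0.0, x0, 20001)

# (phi_a * phi_b)(x0) should equal phi_{a+b}(x0)
conv = np.trapz(phi(a, x0 - u) * phi(b, u), u)
assert abs(conv - phi(a + b, [x0])[0]) < 1e-8
```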

3. Generalised Liouville processes

To construct a GLP, we start with a ‘master’ LRB $\{L_t\}_{0\leq t \leq u_n}$ for $u_n\in\mathbb{R}_+$ and $n\geq2$ , where $L_{u_n}$ has marginal law $\nu$ . We assume that $\nu$ has no continuous singular part and split $\{L_t\}_{0\leq t \leq u_n}$ into n non-overlapping subprocesses.

Definition 3.1. For $m_1,\ldots, m_n >0$ ( $n\geq 2$ ), define the strictly increasing sequence $\{u_{i}\}^{n}_{i=1}$ by $u_{0}=0$ and $u_{i}=u_{i-1}+m_{i}$ for $i=1,\ldots,n$ . Then a process $\{{\xi}_{t}\}_{0\leq t \leq 1}$ is an n-dimensional generalised Liouville process (GLP) if

\begin{equation*}\{{\xi}_t\}_{0\leq t \leq 1} \overset{\textnormal{law}}{=} \{(L_{tm_1}-L_{0},\ldots,L_{tm_i+u_{i-1}}-L_{u_{i-1}},\ldots,L_{tm_n+u_{n-1}}-L_{u_{n-1}})^\top\}_{0\leq t \leq 1}\end{equation*}

for some LRB $\{L_t\}_{0\leq t \leq u_n}$ with generating law $\nu$ . We say that the generating law of $\{{\xi}_{t}\}_{0\leq t \leq 1}$ is $\nu$ and the activity parameter of $\{{\xi}_{t}\}_{0\leq t \leq 1}$ is $\textbf{m}=(m_1,\ldots,m_n)^\top$ .

We have restricted the definition of GLPs to the time horizon [0,1] for convenience. It is straightforward to generalise to an arbitrary closed time horizon. Each coordinate $\{\xi_t^{(i)}\}_{0\leq t \leq 1}$ of $\{{\xi}_t\}_{0\leq t \leq 1}$ is a subprocess of an LRB. Since subprocesses of LRBs are themselves LRBs (see [Reference Hoyle, Hughston and Macrina15]), GLPs form a multivariate generalisation of LRBs. For the rest of the paper, we let $\{{\xi}_t\}_{0\leq t \leq 1}$ be an n-dimensional GLP with generating law $\nu$ , and let $\{L_t\}_{0\leq t \leq u_n}$ be the master process of $\{{\xi}_t\}_{0\leq t \leq 1}$ . In addition, we denote the filtration generated by $\{{\xi}_t\}_{0\leq t \leq 1}$ by $\{\mathcal{F}_t^{{\xi}}\}_{0\leq t \leq 1} \subset \{\mathcal{F}_t\}_{0\leq t \leq 1}$ . Explicitly, we have $\mathcal{F}_t^{{\xi}}=\sigma(\{{\xi}_u\}\colon 0\leq u \leq t)$ .
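The splitting in Definition 3.1 is straightforward to simulate when the master process is a Brownian random bridge, since conditionally on its terminal value a Brownian LRB is a Brownian bridge. The sketch below (the activity parameters, the Gaussian generating law, and the discretisation are all illustrative) constructs a discretised three-dimensional GLP and checks that the coordinates of $\xi_1$ sum to the terminal value of the master process:

```python
import numpy as np

rng = np.random.default_rng(1)
m = np.array([1.0, 2.0, 0.5])                # activity parameters m_1, ..., m_n
N = 1000                                     # grid steps per unit of master time
ks = (N * m).astype(int)                     # steps spent in each subprocess
K = ks.sum()
dt = m.sum() / K

# Master LRB: for a Brownian master, conditionally on L_{u_n} = z the path is a
# Brownian bridge from 0 to z, so we simulate it directly.
z = rng.normal(2.0, 0.5)                     # terminal value drawn from nu (here Gaussian)
W = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), K))))
times = np.arange(K + 1) * dt
L = W + (z - W[-1]) * times / times[-1]      # pin the endpoint at z

# Split: xi^{(i)}_t = L_{t m_i + u_{i-1}} - L_{u_{i-1}}, on the common clock t in [0, 1]
bounds = np.concatenate(([0], np.cumsum(ks)))
xi = [L[bounds[i]:bounds[i + 1] + 1] - L[bounds[i]] for i in range(len(m))]

# Terminal values of the coordinates sum to R_1 = L_{u_n} = z
assert abs(sum(path[-1] for path in xi) - z) < 1e-9
```

Each element of `xi` is one coordinate path of the GLP, run on its own time change of the master clock.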

Remark 3.1. The bivariate model of insurance claims based on the 1/2-stable subordinator proposed in [Reference Hoyle, Hughston and Macrina16] is a GLP.

Remark 3.2. Liouville processes and Archimedean survival processes, as introduced in [Reference Hoyle and Menguturk14], form a subclass of GLPs. In Definition 3.1, if the LRB $\{L_t\}_{0\leq t \leq u_n}$ is a gamma random bridge with unit activity parameter, then we have a Liouville process. If we further fix $m_i = 1$ for $i=1,\ldots, n$ , then we have an Archimedean survival process.

Proposition 3.1. The following hold for any GLP $\{{\xi}_t\}_{0\leq t \leq 1}$ .

  (1) The increments of $\{{\xi}_t\}_{0\leq t \leq 1}$ have a generalised multivariate Liouville distribution.

  (2) The terminal value ${\xi}_1$ has a generalised multivariate Liouville distribution.

Proof. See the Appendix. □

In what follows, we define a family of unnormalised measures $\{\theta_t\}_{0\leq t < 1}$ , such that

(3.1) \begin{equation}\theta_0(B;\, x)=\nu(B), \quad\theta_t(B;\, x)=\int_B \dfrac{f_{\textbf{1}\cdot \textbf{m}(1-t)}(z-x)}{f_{\textbf{1}\cdot \textbf{m}}(z)} \, \nu(\textrm{d} z)\end{equation}

for $t\in[0,1)$ , $x\in\mathbb{R}$ and $B\in\mathcal{B}(\mathbb{R})$ . We also write $\Theta_t(x)=\theta_t(\mathbb{R};\, x)$ . We define $R_t$ to be the sum of coordinates of ${\xi}_t$ :

\begin{equation*}R_t=\sum_{i=1}^n \xi^{(i)}_t = \textbf{1}\cdot {\xi}_t.\end{equation*}

Proposition 3.2. The GLP $\{{\xi}_t\}_{0\leq t \leq 1}$ is a Markov process with the transition law given by

\begin{align*} & \mathbb{P}\big(\xi_1^{(1)}\in\textrm{d} z_1,\ldots, \xi_1^{(n-1)}\in\textrm{d} z_{n-1},\xi_1^{(n)}\in B \mid {\xi}_s=\textbf{x} \big)\\* &\quad = \dfrac{\theta_{\tau(s)}\big(B+\sum_{i=1}^{n-1}z_i;\, x_n+\sum_{i=1}^{n-1}z_i\big)}{\Theta_s(\textbf{1}\cdot \textbf{x})}\prod_{i=1}^{n-1}f_{(1-s)m_i}(z_i-x_i)\,\textrm{d} z_i,\end{align*}

and

(3.2) \begin{equation}\mathbb{P}( {\xi}_t\in \,\textrm{d}\textbf{y} \mid {\xi}_s=\textbf{x} )=\dfrac{\Theta_{t}(\textbf{1}\cdot \textbf{y})}{\Theta_s(\textbf{1}\cdot \textbf{x})}\prod_{i=1}^{n}f_{(t-s)m_i}(y_i-x_i) \,\textrm{d} y_i,\end{equation}

where $\textbf{x}, \textbf{y}\in\mathbb{R}^n$ , $\tau(t)=1-m_n(1-t)/(\textbf{1}\cdot \textbf{m})$ , $0\leq s<t<1$ , and $B\in\mathcal{B}(\mathbb{R})$ .

Proof. See the Appendix. □

Remark 3.3. From Proposition 3.2, if the generating law $\nu$ admits a density p, we get a neater transition law to the terminal value, given by

\begin{equation*}\mathbb{P}( {\xi}_1 \in \,\textrm{d}\textbf{z} \mid {\xi}_s=\textbf{x} )=\dfrac{p(\textbf{1}\cdot \textbf{z})}{\Theta_s(\textbf{1}\cdot \textbf{x}) f_{\textbf{1}\cdot \textbf{m}}(\textbf{1}\cdot \textbf{z})}\prod_{i=1}^n f_{(1-s)m_i}(z_i-x_i) \,\textrm{d} z_i.\end{equation*}
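As a numerical sanity check of this density in a bivariate Brownian example (with an illustrative Gaussian generating density p), the right-hand side should integrate to one over $\textbf{z}\in\mathbb{R}^2$:

```python
import numpy as np

def f(t, x):
    # Marginal density of standard Brownian motion: N(0, t)
    return np.exp(-x**2 / (2 * t)) / np.sqrt(2 * np.pi * t)

m1, m2 = 1.0, 2.0
M = m1 + m2                                      # 1 . m
p = lambda z: f(4.0, z)                          # generating density (illustrative N(0, 4))
s, x1, x2 = 0.3, 0.2, -0.1

zg = np.linspace(-15.0, 15.0, 1001)
Theta_s = np.trapz(f(M * (1 - s), zg - (x1 + x2)) * p(zg) / f(M, zg), zg)

Z1, Z2 = np.meshgrid(zg, zg, indexing="ij")
dens = (p(Z1 + Z2) / (Theta_s * f(M, Z1 + Z2))
        * f((1 - s) * m1, Z1 - x1) * f((1 - s) * m2, Z2 - x2))
total = np.trapz(np.trapz(dens, zg, axis=1), zg)
assert abs(total - 1.0) < 1e-6
```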

Remark 3.4. Our definition of GLPs is somewhat heuristic. A formal definition is possible through a Doob h-transform, since $\{\Theta_{t}\}_{0\leq t < 1}$ is harmonic to a Lévy process $\{\textbf{X}_t\}_{t\geq 0}$ taking values in $\mathbb{R}^n$ with marginal density $g_t(\textbf{x}) = \prod_{i=1}^{n}f_{m_i t}(x_i)$ . To see this, note that we can alternatively write (3.2) as

\[\mathbb{P}( {\xi}_t\in \,\textrm{d}\textbf{y} \mid {\xi}_s=\textbf{x} )=\dfrac{\tilde{\Theta}_{t}(\textbf{y})}{\tilde{\Theta}_s(\textbf{x})}g_{t-s}(\textbf{y}-\textbf{x}) \,\textrm{d} \textbf{y},\]

where $\tilde{\Theta}_{t}(\textbf{x})=\Theta_t(\textbf{1}\cdot \textbf{x})$ for $0\leq t < 1$ . To see that $\{\tilde{\Theta}_{t}\}_{0\leq t < 1}$ is harmonic to $\{\textbf{X}_t\}_{0\leq t <1}$ , note that

(3.3) \begin{align}\int_{\mathbb{R}^n}g_{t-s}(\textbf{y}-\textbf{x})\tilde{\Theta}_{t}(\textbf{y}) \,\textrm{d}\textbf{y}&=\int_{\mathbb{R}^n}\prod_{i=1}^{n}f_{(t-s)m_i}(y_i-x_i)\tilde{\Theta}_{t}(\textbf{y}) \,\textrm{d}\textbf{y} \notag \\[2pt]&=\int_{\mathbb{R}}\int_{\mathbb{R}^n}f_{\textbf{1}\cdot \textbf{m} (1-t)}(z-\textbf{1}\cdot\textbf{y})\prod_{i=1}^{n}f_{(t-s)m_i}(y_i-x_i) \,\textrm{d}\textbf{y} \dfrac{\textrm{d}\nu(z)}{f_{\textbf{1}\cdot \textbf{m}}(z)} \notag \\[2pt] &=\int_{\mathbb{R}}f_{\textbf{1}\cdot \textbf{m} (1-s)}(z-\textbf{1}\cdot\textbf{x}) \dfrac{\textrm{d}\nu(z)}{f_{\textbf{1}\cdot\textbf{m}}(z)}\\ &=\tilde{\Theta}_{s}(\textbf{x}) \notag \end{align}

for $0\leq s < t < 1$ , where (3.3) follows from repeated use of the convolution property of $\{f_t\}_{0\leq t\leq \textbf{1}\cdot \textbf{m}}$ .

Remark 3.4 demonstrates that the laws of $\{{\xi}_t\}_{0\leq t <1}$ and $\{\textbf{X}_t\}_{0\leq t <1}$ are equivalent, which we formalise in the corollary below.

Corollary 3.1. Suppose that $\{{\xi}_{t}\}_{t\geq0}$ is a Lévy process under measure $\widetilde{\mathbb{P}}$ with $\widetilde{\mathbb{P}}({\xi}_{t}\in \,\textrm{d} \textbf{x})=g_t(\textbf{x})\,\textrm{d}\textbf{x}$ . Then $\{\Theta_t(R_t)^{-1}\}_{0\leq t <1}$ is a Radon–Nikodým density process that defines the measure change

(3.4) \begin{equation}\dfrac{\textrm{d} \widetilde{\mathbb{P}}}{\textrm{d} \mathbb{P}}\biggr|_{\mathcal{F}_t^{{\xi}}}=\Theta_t(R_t)^{-1} \quad (0\leq t <1),\end{equation}

and $\{{\xi}_t\}_{0\leq t < 1}$ is a $\mathbb{P}$ -GLP with generating law $\nu$ and activity parameter $\textbf{m}$ .

Proof. See the Appendix. □

Remark 3.5. Let $\nu_{st}(B)=\mathbb{P}(R_t\in B\mid \mathcal{F}_s^{{\xi}})$ for $0\leq s < t \leq 1$ and $B\in\mathcal{B}(\mathbb{R})$ . Given ${\xi}_s$ , the increment ${\xi}_t - {\xi}_s$ has a generalised multivariate Liouville distribution with generating law $\nu_{st}(B+R_s)$ for $B\in\mathcal{B}(\mathbb{R})$ and parameter vector $\textbf{m}(t-s)$ .

Proposition 3.3. Given ${\xi}_1$ , the process $\{{\xi}_t\}_{0\leq t \leq 1}$ is a vector of independent Lévy bridges.

Proof. For all $s\in[0,1)$ the transition probabilities to ${\xi}_t$ ($s<t<1$) can be computed from (3.2) by first replacing $\nu$ with the Dirac measure $\delta_{\textbf{1}\cdot \textbf{z}}$ in (3.1), yielding

\begin{equation*}\mathbb{P}({\xi}_t \in \textrm{d} \textbf{y} \mid {\xi}_1=\textbf{z}, \mathcal{F}_s^{{\xi}}) = \prod_{i=1}^n \dfrac{f_{m_i(t-s)}\big(y_i-\xi^{(i)}_s\big)f_{m_i(1-t)}(z_i-y_i)}{f_{m_i(1-s)}\big(z_i-\xi^{(i)}_s\big)} \,\textrm{d} y_i\end{equation*}

for almost every $\textbf{z}\in\mathbb{R}^n$ . Conditional on ${\xi}_1=\textbf{z}$ , we see that the transition laws of the coordinates of $\{{\xi}_t\}_{0\leq t \leq 1}$ are independent, and that each is the transition law of a Lévy bridge. □

Using the Markov property of $\{{\xi}_t\}_{0\leq t \leq 1}$ , we can also provide the conditional laws of the coordinates $\xi^{(i)}_t$ given $\mathcal{F}_s^{\xi^{(i)}}=\sigma(\{\xi^{(i)}_u\}\colon 0\leq u \leq s)$ or given $\mathcal{F}_s^{{\xi}}$ , for $s<t$ .

Proposition 3.4. The coordinates of $\{{\xi}_t\}_{0\leq t < 1}$ have the following transition laws.

  (1) The marginally conditioned case:

    \begin{equation*}\mathbb{P}\big(\xi^{(i)}_t \in \,\textrm{d} y_i \mid \xi^{(i)}_s=x_i\big) = \dfrac{\Psi_{t}^{(i)}(y_i)}{\Psi_{s}^{(i)}(x_i)}f_{(t-s)m_i}(y_i-x_i)\,\textrm{d} y_i,\end{equation*}
    where
    \begin{equation*}\Psi_{t}^{(i)}(x)=\int_{\mathbb{R}} \dfrac{f_{\textbf{1}\cdot \textbf{m} -tm_i}(r-x)}{f_{\textbf{1}\cdot \textbf{m}}(r)}\nu(\textrm{d} r).\end{equation*}
  (2) The fully conditioned case:

    \begin{equation*}\mathbb{P}\big(\xi^{(i)}_t \in \textrm{d} y_i \mid {\xi}_s = \textbf{x}\big) = \dfrac{\Theta_t^{(i)}(\textbf{x},y_i)}{\Theta_s(\textbf{1}\cdot \textbf{x})}f_{(t-s)m_i}(y_i-x_i) \,\textrm{d} y_i,\end{equation*}
    where
    \begin{equation*}\Theta_t^{(i)}(\textbf{x},y)= \int_{\mathbb{R}} \dfrac{f_{\textbf{1}\cdot \textbf{m}(1-s) - (t-s)m_i}(r-\textbf{1}\cdot \textbf{x} - (y-x_i))}{f_{\textbf{1}\cdot \textbf{m}}(r)} \, \nu(\textrm{d} r),\end{equation*}
    for $0 \leq s < t < 1$ .

Proof. See the Appendix. □

Proposition 3.5. The process $\{R_t\}_{0\leq t \leq 1}$ is an LRB with generating law $\nu$ and the transition law

(3.5) \begin{align}\mathbb{P}(R_1\in \textrm{d} r\mid {\xi}_s=\textbf{x}) &= \dfrac{\theta_{s}(\textrm{d} r;\, \textbf{1}\cdot \textbf{x})}{\Theta_{s}(\textbf{1}\cdot \textbf{x})}, \end{align}
(3.6) \begin{align}\mathbb{P}(R_t\in \textrm{d} r\mid {\xi}_s=\textbf{x})&= \dfrac{\Theta_{t}(r)}{\Theta_{s}(\textbf{1}\cdot \textbf{x})}f_{(t-s)\textbf{1}\cdot \textbf{m}}(r-\textbf{1}\cdot \textbf{x})\,\textrm{d} r. \end{align}

Proof. See the Appendix. □

The next statement is a key result for defining stochastic integrals of integrable LRBs, and hence integrable GLPs.

Proposition 3.6. If $\mathbb{E}(|R_t|)<\infty$ for all $t\in[0,1]$ , then $\{R_t\}_{0\leq t < 1}$ admits the canonical semimartingale representation

(3.7) \begin{equation}R_t = \int_{0}^{t}\dfrac{\mathbb{E}( R_{1} \mid \mathcal{F}^{{\xi}}_{s}) - R_{s}}{1-s}\,\textrm{d} s + M_t\end{equation}

for $0\leq t < 1$ , where $\{M_t\}_{0\leq t < 1}$ is an $(\mathcal{F}_t^{{\xi}},\mathbb{P})$ -martingale with initial state $M_0=0$ .

Proof. From Proposition 3.5, $\{R_t\}_{0\leq t \leq 1}$ is an LRB. Hence, if $\mathbb{E}(|R_t|)<\infty$ for all $t\in(0,1]$ , then

(3.8) \begin{equation}\mathbb{E}(R_t \mid {\xi}_{s} = \textbf{x}) = \dfrac{1-t}{1-s}\textbf{1}\cdot \textbf{x} + \dfrac{t-s}{1-s}\mathbb{E}(R_1 \mid {\xi}_{s} = \textbf{x} ), \quad s\in[0,t).\end{equation}

We shall use (3.8) to prove that $\{M_t\}_{0\leq t < 1}$ given in (3.7) is an $\mathcal{F}_t^{{\xi}}$ -martingale. Since $\{{\xi}_t\}_{0\leq t \leq 1}$ is Markov,

\begin{align*}\mathbb{E}(M_t - M_s \mid \mathcal{F}_s^{{\xi}}) &= \mathbb{E}(R_t - R_s \mid \mathcal{F}_s^{{\xi}}) - \int_s^t \dfrac{\mathbb{E}(R_1 \mid {\xi}_s) - \mathbb{E}(R_u \mid {\xi}_s)}{1-u} \,\textrm{d} u \notag \\[2pt]&= \dfrac{1-t}{1-s}\textbf{1}\cdot {\xi}_{s} + \dfrac{t-s}{1-s}\mathbb{E}(R_1 \mid {\xi}_{s}) - R_{s} - \int_s^t \dfrac{\mathbb{E}(R_1 \mid {\xi}_s)}{1-u} \,\textrm{d} u \notag \\[2pt]&\quad\, + \int_s^t \dfrac{1}{1-u}\biggl(\dfrac{1-u}{1-s}\textbf{1}\cdot {\xi}_{s} + \dfrac{u-s}{1-s}\mathbb{E}(R_1 \mid {\xi}_s)\biggr) \,\textrm{d} u \notag \\[2pt]&=0\end{align*}

for $0\leq s <t < 1$. Given $\mathbb{E}(|R_t|)<\infty$ for all $t\in[0,1]$, it remains to show that

\[\mathbb{E}\biggl(\int_0^t\frac{|\mathbb{E}(R_1 \mid {\xi}_s) - R_s|}{1-s} \,\textrm{d} s\biggr) <\infty\quad \text{for $0\leq t < 1$}.\]

Indeed,

\begin{align*}\mathbb{E}\biggl(\int_0^t \dfrac{|\mathbb{E}(R_1 \mid {\xi}_s) - R_s|}{1-s} \,\textrm{d} s\biggr) &\leq \mathbb{E}\biggl(\int_0^t \dfrac{|\mathbb{E}(R_1 \mid {\xi}_s)|}{1-s} \,\textrm{d} s\biggr) + \mathbb{E}\biggl(\int_0^t \dfrac{|R_s|}{1-s} \,\textrm{d} s\biggr) \notag \\[2pt]&= \int_0^t \mathbb{E}\biggl(\dfrac{|\mathbb{E}(R_1 \mid {\xi}_s)|}{1-s}\biggr) \,\textrm{d} s + \int_0^t \mathbb{E}\biggl(\dfrac{|R_s|}{1-s} \biggr)\,\textrm{d} s \notag \\[2pt]& < \infty,\end{align*}

since $\{\mathbb{E}(R_1 \mid \mathcal{F}^{{\xi}}_t)\}_{0\leq t < 1}$ is a martingale. Hence $\mathbb{E}(|M_t|)<\infty$ for $0\leq t < 1$ . Finally, $M_0=0$ since $R_0=0$ . □

Remark 3.6. Let $\alpha_t = (1-t)^{-1}$ and $\beta_t = \mathbb{E}(R_{1} \mid {\xi}_{t})$ . Then

\begin{equation*}\textrm{d} R_t = \alpha_t(\beta_t - R_t )\,\textrm{d} t + \textrm{d} M_t\end{equation*}

for $0\leq t < 1$ . In this form, the dynamics of LRBs resemble those of an Ornstein–Uhlenbeck process, with an increasing mean-reversion rate $\{\alpha_t\}_{0\leq t < 1}$ and a state-dependent reversion level $\{\beta_t\}_{0\leq t < 1}$ . We can write

\begin{equation*}R_t = \int_{0}^{t}\dfrac{1-t}{(1-s)^2}\mathbb{E}( R_{1} \mid {\xi}_{s})\,\textrm{d} s + \int_{0}^{t} \dfrac{1-t}{1-s}\,\textrm{d} M_s \quad \text{for $0\leq t < 1$}.\end{equation*}
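To illustrate in the simplest case: for a Brownian master pinned to a constant $z$ (so that $\mathbb{E}(R_1\mid\xi_s)=z$ and $R$ is a Brownian bridge), suppressing the martingale part reduces the dynamics to the ODE $\textrm{d}R_t=(z-R_t)/(1-t)\,\textrm{d}t$, whose solution $R_t=zt$ agrees with the representation above. A minimal Euler sketch (step count illustrative):

```python
import numpy as np

z, n = 2.0, 100000
ts = np.linspace(0.0, 0.99, n)
dt = ts[1] - ts[0]

# Euler scheme for dR = (z - R)/(1 - t) dt (martingale part suppressed)
R = 0.0
for t in ts[:-1]:
    R += (z - R) / (1.0 - t) * dt

assert abs(R - z * ts[-1]) < 1e-6   # closed form R_t = z t
```

Because the exact solution is linear in $t$ and the drift is linear in $R$, the Euler scheme here tracks the closed form to essentially floating-point accuracy.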

The following two propositions are motivated by [Reference Mansuy and Yor18]. We first recall that a measurable process $\{H_t\}_{t\geq0}$ is called a harness if, for all $t\geq 0$ , $\mathbb{E}(|H_t|)<\infty$ and for all $0\leq a<b<c<d$ ,

\begin{equation*}\mathbb{E}\biggl( \dfrac{H_c - H_b}{c-b} \mid \mathcal{H}_{a,d} \biggr) = \dfrac{H_d - H_a}{d-a},\end{equation*}

where $\mathcal{H}_{a,d}= \sigma(\{H_t\}_{t\leq a}, \{H_t\}_{t\geq d})$ .

Proposition 3.7. If $\mathbb{E}(|{\xi}_t|)<\infty$ for $t\in [0,1]$ , then $\{{\xi}_t\}_{0\leq t \leq 1}$ and $\{R_t\}_{0\leq t \leq 1}$ are harnesses.

Proof. See the Appendix. □

Proposition 3.8. Let $\varphi$ be a $C^1$ -function. If $\mathbb{E}(|R_t|)<\infty$ for all $t\in(0,1]$ , then the stochastic process $\{Z_t\}_{0\leq t < 1}$ defined by

\begin{equation*}Z_{t} = \dfrac{\mathbb{E}( R_{1} \mid {\xi}_t) - R_t}{1-t}\int_t^1 \varphi(u)\,\textrm{d} u + \int_0^t \varphi(u)\,\textrm{d} R_u \quad (0\leq t <1)\end{equation*}

is an $(\mathcal{F}_{t}^{{\xi}},\mathbb{P})$ -martingale.

Proof. We have

\begin{align*}\mathbb{E}\biggl( \int_t^1 \varphi(u)\,\textrm{d} R_u \mid \mathcal{F}_t^{{\xi}} \biggr) &=\varphi(1)\,\mathbb{E}( R_{1} \mid \mathcal{F}_t^{{\xi}}) - \varphi(t)R_t - \int_t^1\mathbb{E}(R_u \mid \mathcal{F}_t^{{\xi}})\,\textrm{d} \varphi(u) \notag \\*&= \dfrac{\mathbb{E}( R_{1} \mid {\xi}_t) - R_t}{1-t}\int_t^1 \varphi(u)\,\textrm{d} u,\end{align*}

from the integration-by-parts formula, (3.8), and the Markov property of $\{{\xi}_t\}_{0\leq t \leq 1}$ . Hence $Z_t = \mathbb{E}(\int_0^1 \varphi(u)\,\textrm{d} R_u \mid \mathcal{F}_t^{{\xi}})$ , which is an $(\mathcal{F}_{t}^{{\xi}},\mathbb{P})$ -martingale. □

Similar to Proposition 3.6, we have the following result (we omit the proof to avoid repetition).

Proposition 3.9. If $\mathbb{E}(|{\xi}_t|)<\infty$ for all $t\in(0,1]$ , then $\{{\xi}_t\}_{0\leq t < 1}$ admits the canonical semimartingale representation

(3.9) \begin{equation}{\xi}_t = \int_{0}^{t}\dfrac{\mathbb{E}( {\xi}_{1} \mid \mathcal{F}^{{\xi}}_{s}) - {\xi}_{s}}{1-s}\,\textrm{d} s + \textbf{M}_t \quad (0\leq t < 1),\end{equation}

where $\{\textbf{M}_t\}_{0\leq t < 1}$ is an $(\mathcal{F}_t^{{\xi}},\mathbb{P})$ -martingale.

In [Reference Jakubowski and Pytel17] it is shown that Archimedean survival processes satisfy the weak Markov consistency condition, but not necessarily the strong Markov consistency condition. Motivated by this, Proposition 3.10 below provides a generalised version of this result for GLPs. First we recall the weak and strong Markov consistency conditions. Let $\{\textbf{X}_t\}_{t\geq0}$ be an n-dimensional real-valued Markov process and let $\mathcal{F}_t^{\textbf{X}}=\sigma(\{\textbf{X}_u\}\colon 0\leq u \leq t)$ . Also, for each coordinate process $\{X^{(i)}_t\}_{t\geq 0}$ , $i=1,\ldots,n$ , write $\mathcal{F}_t^{X^{(i)}}=\sigma(\{X^{(i)}_u\}\colon 0\leq u \leq t)\subset \mathcal{F}_t^{\textbf{X}}$ . The process $\{\textbf{X}_t\}_{t\geq0}$ satisfies the weak Markov consistency condition if

(3.10) \begin{equation}\mathbb{P}\big(X^{(i)}_t \in B \mid \mathcal{F}_s^{X^{(i)}} \big) = \mathbb{P}\big( X^{(i)}_t \in B \mid X^{(i)}_s \big)\end{equation}

for every $i=1,\ldots,n$ , every $B\in\mathcal{B}(\mathbb{R})$ , and all $0\leq s\leq t$ . Further, $\{\textbf{X}_t\}_{t\geq 0}$ satisfies the strong Markov consistency condition if

(3.11) \begin{equation}\mathbb{P}\big( X^{(i)}_t \in B \mid \mathcal{F}_s^{\textbf{X}} \big) = \mathbb{P}\big( X^{(i)}_t \in B \mid X^{(i)}_s \big)\end{equation}

for every $i=1,\ldots,n$ , every $B\in\mathcal{B}(\mathbb{R})$ , and all $0\leq s\leq t$ .

Proposition 3.10. Any GLP $\{{\xi}_{t}\}_{0\leq t \leq 1}$ is weak Markov consistent, but not necessarily strong Markov consistent.

Proof. Each coordinate $\{\xi^{(i)}_t\}_{0\leq t \leq 1}$ of the GLP $\{{\xi}_t\}_{0\leq t \leq 1}$ is an LRB, since every subprocess of an LRB is an LRB (see [Reference Hoyle, Hughston and Macrina15]). Thus (3.10) is satisfied for every $i=1,\ldots,n$ , every $B\in\mathcal{B}(\mathbb{R})$ , and all $0\leq s < t \leq 1$ . However, (3.11) does not necessarily hold since

\begin{equation*}\mathbb{P}\big( \xi^{(i)}_t -\xi^{(i)}_s \in \textrm{d} y \mid \mathcal{F}_s^{{\xi}} \big) = \mathbb{P}\biggl( \xi^{(i)}_t -\xi^{(i)}_s \in \textrm{d} y \mid \sum_j \xi^{(j)}_s \biggr)\end{equation*}

is only equal to $\mathbb{P}( \xi^{(i)}_t -\xi^{(i)}_s \in \textrm{d} y \mid \xi^{(i)}_s )$ if both $\sum_j \xi^{(j)}_s$ and $\xi^{(i)}_s$ are independent of the increment $\xi^{(i)}_t -\xi^{(i)}_s$ for all $0\leq s < t \leq 1$ . In such a case the coordinates of $\{{\xi}_t\}_{0\leq t \leq 1}$ are independent Lévy processes. □

In the same spirit we shall introduce weak and strong semimartingale consistency conditions. Definition 3.2 below goes beyond a Markov setting, but in the context of GLPs it offers links to Markov consistency.

Definition 3.2. Let $\{\boldsymbol{S}_t\}_{t\geq 0}$ be an $(\mathcal{F}_t^{\boldsymbol{S}},\mathbb{P})$ -semimartingale, where $\mathcal{F}_t^{\boldsymbol{S}}=\sigma(\{\boldsymbol{S}_u\}\colon 0\leq u\leq t)$ . Let $\{S_t^{(i)}\}_{t\geq0}$ be a coordinate of $\{\boldsymbol{S}_t\}_{t\geq0}$ , and let $\mathcal{F}_t^{S^{(i)}}=\sigma(\{S^{(i)}_u\}\colon 0\leq u\leq t)$ , for $i=1,\ldots,n$ .

  (1) If $\{S_t^{(i)}\}_{t\geq0}$ admits a decomposition $S_t^{(i)} = a_t^{(i)} + m_t^{(i)}$ , where $\{a_t^{(i)}\}_{t\geq0}$ is a càdlàg $\{\mathcal{F}_t^{S^{(i)}}\}$ -adapted process with bounded variation and $\{m_t^{(i)}\}_{t\geq0}$ is an $(\mathcal{F}_t^{S^{(i)}},\mathbb{P})$ -local martingale, then $\{\boldsymbol{S}_t\}_{t\geq0}$ is weakly semimartingale consistent with respect to $\{S_t^{(i)}\}_{t\geq0}$ . If this holds for every $i=1,\ldots,n$ , then $\{\boldsymbol{S}_t\}_{t\geq0}$ satisfies the weak semimartingale consistency condition.

  (2) Let $\{\boldsymbol{S}_t\}_{t\geq0}$ be decomposed as $\boldsymbol{S}_t= \boldsymbol{A}_t + \boldsymbol{M}_t$ , where $\{\boldsymbol{A}_t\}_{t\geq0}$ is a càdlàg $\{\mathcal{F}_t^{\boldsymbol{S}}\}$ -adapted process with bounded variation and $\{\boldsymbol{M}_t\}_{t\geq0}$ is an $(\mathcal{F}_t^{\boldsymbol{S}},\mathbb{P})$ -local martingale, with coordinates $\{A_t^{(i)}\}_{t\geq0}$ and $\{M_t^{(i)}\}_{t\geq0}$ , respectively. Given that $S_t^{(i)}=A_t^{(i)} + M_t^{(i)}$ , if $\{A_t^{(i)}\}_{t\geq0}$ is $\{\mathcal{F}_t^{S^{(i)}}\}$ -adapted and $\{M_t^{(i)}\}_{t\geq0}$ is an $(\mathcal{F}_t^{S^{(i)}},\mathbb{P})$ -local martingale, then $\{\boldsymbol{S}_t\}_{t\geq0}$ is strongly semimartingale consistent with respect to $\{S_t^{(i)}\}_{t\geq0}$ . If this holds for every $i=1,\ldots,n$ , then $\{\boldsymbol{S}_t\}_{t\geq0}$ satisfies the strong semimartingale consistency condition.

Proposition 3.11. Any GLP $\{{\xi}_{t}\}_{0\leq t < 1}$ with $\mathbb{E}(|{\xi}_t|)<\infty$ for all $t\in(0,1]$ is weakly semimartingale consistent, but not necessarily strongly semimartingale consistent.

Proof. Let $\mathbb{E}(|{\xi}_t|)<\infty$ for all $t\in(0,1]$ and define $\alpha_t=(1-t)^{-1}$ for $t<1$ . Following similar steps to the proof of Proposition 3.6, each coordinate of $\{{\xi}_{t}\}_{0\leq t < 1}$ admits a decomposition $\xi^{(i)}_t = a^{(i)}_t + m^{(i)}_t$ , where

\begin{equation*}a^{(i)}_t = \int_{0}^{t}\alpha_s(\mathbb{E}( \xi^{(i)}_1 \mid \mathcal{F}^{\xi^{(i)}}_{s}) - \xi^{(i)}_s)\,\textrm{d} s,\end{equation*}

which is $\{\mathcal{F}_t^{\xi^{(i)}}\}$ -adapted, and $\{m^{(i)}_t\}_{0\leq t < 1}$ is an $(\mathcal{F}_t^{\xi^{(i)}},\mathbb{P})$ -martingale, for $i=1,\ldots,n$ . Hence $\{{\xi}_{t}\}_{0\leq t < 1}$ is weakly semimartingale consistent. From Proposition 3.9, we also know that $\xi^{(i)}_t=A^{(i)}_t+M^{(i)}_t$ , where

\begin{equation*}A^{(i)}_t = \int_{0}^{t}\alpha_s(\mathbb{E}( \xi^{(i)}_1 \mid \mathcal{F}^{{\xi}}_{s}) - \xi^{(i)}_s)\,\textrm{d} s,\end{equation*}

which is $\{\mathcal{F}_t^{{\xi}}\}$ -adapted, and $\{M_t^{(i)}\}_{0\leq t < 1}$ is an $(\mathcal{F}_t^{{\xi}},\mathbb{P})$ -martingale. Since $\{{\xi}_{t}\}_{0\leq t \leq 1}$ is Markov and using Proposition 3.10, we know that $\mathbb{E}( \xi^{(i)}_1 \mid {\xi}_{t})$ is not necessarily equal to $\mathbb{E}( \xi^{(i)}_1 \mid \xi^{(i)}_{t})$ . Hence $\{A^{(i)}_t\}_{0\leq t < 1}$ is not necessarily $\{\mathcal{F}_t^{\xi^{(i)}}\}$ -adapted. Also, $\{M_t^{(i)}\}_{0\leq t < 1}$ is not necessarily an $(\mathcal{F}_t^{\xi^{(i)}},\mathbb{P})$ -martingale. □

We used Proposition 3.10 to prove Proposition 3.11; we shall note another link between semimartingale consistency and Markov consistency. From [Reference Bielecki, Jakubowski and Nieweglowski1], if a Markov process $\{\textbf{X}_t\}_{t\geq0}$ satisfies weak Markov consistency with respect to its marginal $\{X^{(i)}_t\}_{t\geq0}$ , then $\{\textbf{X}_t\}_{t\geq0}$ is also strongly Markov consistent with respect to $\{X^{(i)}_t\}_{t\geq0}$ if and only if $\{\mathcal{F}_t^{X^{(i)}}\}_{t\geq0}$ is $\mathbb{P}$ -immersed in $\{\mathcal{F}_t^{\textbf{X}}\}_{t\geq0}$ . Here $\mathbb{P}$ -immersion means that every $(\mathcal{F}_t^{X^{(i)}},\mathbb{P})$ -local martingale is also an $(\mathcal{F}_t^{\textbf{X}},\mathbb{P})$ -local martingale. In the opposite direction to $\mathbb{P}$ -immersion, we prove a result that links strong semimartingale consistency and strong Markov consistency.

Proposition 3.12. Let $\{\boldsymbol{S}_t\}_{t\geq 0}$ be a Markov $(\mathcal{F}_t^{\boldsymbol{S}},\mathbb{P})$ -martingale satisfying weak Markov consistency. Then $\{\boldsymbol{S}_t\}_{t\geq0}$ is strongly semimartingale consistent if and only if it is strongly Markov consistent.

Proof. Since $\{\boldsymbol{S}_t\}_{t\geq 0}$ is an $(\mathcal{F}_t^{\boldsymbol{S}},\mathbb{P})$ -martingale, we have $\mathbb{E}( S^{(i)}_t \mid \mathcal{F}^{\boldsymbol{S}}_{u})=S^{(i)}_u$ for $0\leq u < t$ . Then, if $\{\boldsymbol{S}_t\}_{t\geq0}$ is strongly semimartingale consistent, we have $\mathbb{E}( S^{(i)}_t \mid \mathcal{F}^{\boldsymbol{S}}_{u})=\mathbb{E}( S^{(i)}_t \mid \mathcal{F}_u^{S^{(i)}})=S^{(i)}_u$ . Thus, given that $\{\boldsymbol{S}_t\}_{t\geq 0}$ is Markovian and satisfies weak Markov consistency,

\begin{align*} \mathbb{E}( S^{(i)}_t \mid \mathcal{F}^{\boldsymbol{S}}_{u}) & = \int_{\mathbb{R}}x\mathbb{P}( S^{(i)}_t \in \textrm{d} x \mid \mathcal{F}^{\boldsymbol{S}}_{u} )\\[3pt] &=\int_{\mathbb{R}}x\mathbb{P}( S^{(i)}_t \in \textrm{d} x \mid \boldsymbol{S}_u ) \\[3pt] & =\mathbb{E}( S^{(i)}_t \mid \mathcal{F}_u^{S^{(i)}})\\[3pt] &= \int_{\mathbb{R}}x\mathbb{P}( S^{(i)}_t \in \textrm{d} x \mid \mathcal{F}_u^{S^{(i)}} )\\[3pt] &=\int_{\mathbb{R}}x\mathbb{P}( S^{(i)}_t \in \textrm{d} x \mid S^{(i)}_u ).\end{align*}

For the opposite direction, if $\{\boldsymbol{S}_t\}_{t\geq0}$ is strongly Markov consistent, then since $\{\boldsymbol{S}_t\}_{t\geq 0}$ is an $(\mathcal{F}_t^{\boldsymbol{S}},\mathbb{P})$ -martingale satisfying weak Markov consistency,

\begin{align*}\int_{\mathbb{R}}x\mathbb{P}( S^{(i)}_t \in \textrm{d} x \mid \mathcal{F}^{\boldsymbol{S}}_{u} ) & =\mathbb{E}( S^{(i)}_t \mid \mathcal{F}^{\boldsymbol{S}}_{u}) =S^{(i)}_u \\[3pt] &=\int_{\mathbb{R}}x\mathbb{P}( S^{(i)}_t \in \textrm{d} x \mid S^{(i)}_u ) \\[3pt] &= \int_{\mathbb{R}}x\mathbb{P}( S^{(i)}_t \in \textrm{d} x \mid \mathcal{F}_u^{S^{(i)}} ) \\[3pt] &= \mathbb{E}( S^{(i)}_t \mid \mathcal{F}_u^{S^{(i)}}).\end{align*}

Hence $\{S_t^{(i)}\}_{t\geq0}$ is also an $(\mathcal{F}_t^{S^{(i)}},\mathbb{P})$ -martingale, and the statement follows. □

4. Examples

We shall now study two examples of GLPs in more detail: Brownian Liouville processes and Poisson Liouville processes.

4.1. Brownian Liouville processes

As a subclass of GLPs, let us consider what we call Brownian Liouville processes (BLPs). In Definition 3.1, we let $\{L_t\}_{0\leq t \leq u_n}$ be a Brownian random bridge given by

(4.1) \begin{equation}L_t = \dfrac{t}{\textbf{1}\cdot \textbf{m}}L_{\textbf{1}\cdot \textbf{m}}+\sigma \biggl(W_t-\dfrac{t}{\textbf{1}\cdot \textbf{m}}W_{\textbf{1}\cdot \textbf{m}}\biggr),\end{equation}

where $\sigma>0$ and $\{W_t\}_{0\leq t \leq u_n}$ is a standard Brownian motion independent of the random variable $L_{\textbf{1}\cdot \textbf{m}}$ . For background on the anticipative orthogonal representation (4.1) of a Brownian random bridge, we refer the reader to [Reference Brody and Hughston2] and [Reference Brody, Hughston and Macrina6]. We also note that the Gaussian process $\{W_t-({t}/({\textbf{1}\cdot \textbf{m}}))W_{\textbf{1}\cdot \textbf{m}}\}_{0\leq t \leq \textbf{1}\cdot \textbf{m}}$ in (4.1) is a Brownian bridge starting and ending at zero. The following proposition is analogous to [Reference Hoyle and Menguturk14, Proposition 3.10] for Archimedean survival processes. We denote the Hadamard (i.e. element-wise) product of vectors $\textbf{x}, \textbf{y} \in \mathbb{R}^n$ by $\textbf{x} \circ \textbf{y}$ . We say that $\{\beta_t\}_{0\leq t \leq 1}$ is a standard Brownian bridge if (a) it is a Brownian bridge, (b) $\beta_0=\beta_1=0$ , and (c) $\textrm{Var}(\beta_t)=t(1-t)$ .

Proposition 4.1. If $\{{\xi}_t\}_{0\leq t \leq 1}$ is a BLP with the master process (4.1), then it admits the independent Brownian bridge representation

(4.2) \begin{equation}{\xi}_t = t\biggl(\dfrac{\textbf{m}}{\textbf{1}\cdot \textbf{m}} R_1 + \sigma \textbf{Z} \biggr) + \sigma \sqrt{\textbf{m}} \circ \boldsymbol{\beta}_t,\end{equation}

where $\sqrt{\textbf{m}}=(\sqrt{m_1},\ldots,\sqrt{m_n})^\top$ , $\{\boldsymbol{\beta}_t\}$ is a vector of independent standard Brownian bridges, and the random vector $\textbf{Z} = (Z_1,\ldots,Z_n)^{\top}$ is multivariate Gaussian with

\begin{equation*} \textrm{Cov}(Z_i, Z_j) = \delta_{ij}m_i - \dfrac{m_im_j}{\textbf{1}\cdot \textbf{m}}.\end{equation*}

Proof. For the proof we use $W_t$ and $W(t)$ interchangeably. We have

\begin{align*}\xi^{(i)}_t &= L(m_it+u_{i-1}) - L(u_{i-1}) \notag \\*&=\dfrac{m_it}{\textbf{1}\cdot \textbf{m}}L(\textbf{1}\cdot \textbf{m}) + \sigma\biggl( W(m_it+u_{i-1}) - W(u_{i-1})-\dfrac{m_it}{\textbf{1}\cdot \textbf{m}}W(\textbf{1}\cdot \textbf{m}) \biggr) \notag \\*&=\dfrac{m_it}{\textbf{1}\cdot \textbf{m}}R_1 + \sigma\sqrt{m_i}\beta^{(i)}_t + \sigma t\biggl( W(u_{i}) - W(u_{i-1})-\dfrac{m_i}{\textbf{1}\cdot \textbf{m}}W(\textbf{1}\cdot \textbf{m}) \biggr)\end{align*}

since $R_1=L_{\textbf{1}\cdot \textbf{m}}$ , and where

\begin{equation*}\sqrt{m_i}\beta^{(i)}_t = W(m_it + u_{i-1}) - W(u_{i-1}) - t(W(u_i) - W(u_{i-1})).\end{equation*}

So $\{\beta^{(i)}_t\}_{0\leq t \leq 1}$ is a standard Brownian bridge, and is independent of $W(u_i) - W(u_{i-1})$ and $W(\textbf{1}\cdot \textbf{m})$ . The independence is straightforward to verify by noting that the variables are jointly Gaussian with zero covariance. Thus we can write

\begin{equation*}\xi^{(i)}_t = t \biggl(\dfrac{m_i}{\textbf{1}\cdot \textbf{m}}R_1 + \sigma Z_i\biggr)+ \sigma\sqrt{m_i}\beta^{(i)}_t,\end{equation*}

where $Z_i$ is given by

(4.3) \begin{equation}Z_i = W(u_{i}) - W(u_{i-1})-\dfrac{m_i}{\textbf{1}\cdot \textbf{m}}W(\textbf{1}\cdot \textbf{m}),\end{equation}

and $R_1$ , $Z_i$ and $\{\beta^{(i)}_t\}_{0\leq t \leq 1}$ are mutually independent. Further, $\{\beta^{(1)}_t\}_{0\leq t \leq 1},\, {\ldots}\,, \{\beta^{(n)}_t\}_{0\leq t \leq 1}$ are mutually independent, since they are jointly Gaussian with zero covariance. □

Proposition 4.1 provides an anticipative orthogonal representation for BLPs, whereas (3.9) provides a non-anticipative semimartingale representation when $\{{\xi}_t\}_{0\leq t \leq 1}$ is a BLP.

Remark 4.1. Note that $\textbf{1}\cdot \textbf{Z}=0$ from (4.3), and so its covariance matrix is singular. Also, using (4.2) from Proposition 4.1, $\{R_t\}_{0\leq t \leq 1}$ admits the anticipative representation

\begin{equation*} R_t = \textbf{1}\cdot {\xi}_t = t\textbf{1}\cdot\biggl(\dfrac{\textbf{m}}{\textbf{1}\cdot \textbf{m}} R_1 + \sigma \textbf{Z}\biggr) + \sigma\textbf{1}\cdot(\sqrt{\textbf{m}} \circ \boldsymbol{\beta}_t)= tR_1 + \sigma\sqrt{\textbf{1}\cdot \textbf{m}}\tilde{\beta}_t,\end{equation*}

where $\{\tilde{\beta}_t\}_{0\leq t \leq 1}$ is a standard Brownian bridge.
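Two features of the decomposition (4.2)–(4.3) can be checked directly: the vector $\textbf{Z}$ satisfies $\textbf{1}\cdot\textbf{Z}=0$, so that $\textbf{1}\cdot{\xi}_1=R_1$ exactly, and its covariance matrix is the singular matrix stated in Proposition 4.1. The following Python sketch is our illustration only (the function name, the parameter values, and the seed are assumptions, not part of the model); it samples ${\xi}_1$ from Brownian increments and verifies both facts.

```python
import numpy as np

def sample_blp_terminal(m, sigma, r1, rng):
    """Sample xi_1 = (m/(1.m)) R_1 + sigma*Z for a BLP (beta_1 = 0),
    with Z built from Brownian increments over u_0 < ... < u_n, eq. (4.3)."""
    total = m.sum()                          # 1.m
    w_incr = rng.normal(0.0, np.sqrt(m))     # W(u_i) - W(u_{i-1}), variance m_i
    w_total = w_incr.sum()                   # W(1.m)
    z = w_incr - (m / total) * w_total       # eq. (4.3); note 1.Z = 0
    return (m / total) * r1 + sigma * z

rng = np.random.default_rng(1)
m = np.array([1.0, 2.0, 0.5])
xi1 = sample_blp_terminal(m, sigma=0.3, r1=5.0, rng=rng)
print(abs(xi1.sum() - 5.0) < 1e-12)          # coordinates sum to R_1

# Deterministic check of Cov(Z): Z = B W_incr with B = I - (1/(1.m)) m 1^T,
# so Cov(Z) = B diag(m) B^T, which should equal diag(m) - m m^T / (1.m).
B = np.eye(3) - np.outer(m, np.ones(3)) / m.sum()
cov_z = B @ np.diag(m) @ B.T
print(np.allclose(cov_z, np.diag(m) - np.outer(m, m) / m.sum()))
```

Both checks print `True`: the first because $\sigma\textbf{1}\cdot\textbf{Z}=0$ identically, the second because the linear map defining $\textbf{Z}$ reproduces $\textrm{Cov}(Z_i,Z_j)=\delta_{ij}m_i-m_im_j/(\textbf{1}\cdot\textbf{m})$.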

Proposition 4.2. Let $\pi_0(\textrm{d} \textbf{x}) = \mathbb{P}({\xi}_{1} \in \textrm{d} \textbf{x})$ and $\pi_t(\textrm{d} \textbf{x}) = \mathbb{P}({\xi}_{1} \in \textrm{d} \textbf{x} \mid \mathcal{F}_t^{{\xi}})$ . Also, let $\{\textbf{B}_t\}_{0\leq t < 1}$ be a vector of standard $(\mathcal{F}_t^{{\xi}},\mathbb{P})$ -Brownian motions. Then, if $\mathbb{E}(|{\xi}_{1}|)<\infty$ , the multivariate measure-valued process $\{\pi_t\}_{0\leq t < 1}$ satisfies

\begin{equation*}\pi_t(\textrm{d} \textbf{x}) = \pi_0(\textrm{d} \textbf{x}) + \int_{0}^t \pi_s(\textrm{d} \textbf{x}) \boldsymbol{\sigma}_s^\top(\sigma\sqrt{\textbf{m}}\circ\,\textrm{d} \textbf{B}_s)\end{equation*}

for $0\leq t < 1$ , where each coordinate of $\{\boldsymbol{\sigma}_t\}_{0\leq t < 1}$ is given by

\begin{equation*}\sigma_t^{(i)} = \dfrac{x^{(i)}-\mathbb{E}(\xi_1^{(i)}\mid {\xi}_t)}{\sigma^2 m_i (1-t)}.\end{equation*}

Proof. See the Appendix. □

Remark 4.2. Note that $\textbf{1}\cdot\tilde{\textbf{B}}_t=\textbf{1}\cdot(\sigma \sqrt{\textbf{m}}\circ\textbf{B}_t)$ gives the non-anticipative semimartingale representation for $\{R_t\}_{0\leq t < 1}$ , which is

\begin{equation*}R_t = \int_{0}^{t}\dfrac{\mathbb{E}( R_{1} \mid {\xi}_{s}) - R_{s}}{1-s}\,\textrm{d} s + \sigma\sum_{i=1}^n \sqrt{m_i}B_t^{(i)},\end{equation*}

which provides the explicit example of the $(\mathcal{F}_t^{{\xi}},\mathbb{P})$ -martingale in (3.7).

4.2. Poisson Liouville processes

Our second example is that of Poisson Liouville processes (PLPs), which are counting processes. Accordingly, in Definition 3.1, we let $\{L_t\}_{0\leq t \leq u_n}$ be a Poisson random bridge with $\mathbb{P}(L_{u_n}=i)=\nu(\{i\})$ , for $i\in\mathbb{N}_0$ .

Proposition 4.3. Let $\{\lambda_t^R\}_{0\leq t \leq 1}$ be the intensity process of the $L_1$ -norm process $\{R_t\}_{0\leq t \leq 1}$ . If $\mathbb{E}(R_1)<\infty$ , then

(4.4) \begin{equation}\lambda^R_t = \dfrac{\mathbb{E}(R_1 \mid {\xi}_{t}) - R_t}{1-t} \quad \text{for } 0\leq t < 1.\end{equation}

Proof. Since $\{R_t\}_{0\leq t \leq 1}$ is a counting process, we have $\lambda^R_t = \lim_{h\rightarrow 0}\mathbb{E}(R_{t+h} - R_{t} \mid \mathcal{F}_t^{{\xi}})/h$ . Since $\{R_t\}_{0\leq t \leq 1}$ is a Markov process with respect to $\{\mathcal{F}_t^{{\xi}}\}_{0\leq t \leq 1}$ , using (3.8), we have

\begin{align*}\lambda^R_t &= \lim_{h\rightarrow 0} \biggl(\dfrac{\mathbb{E}(R_{t+h} \mid {\xi}_t)}{h} -\dfrac{R_t}{h}\biggr) \notag \\*&= \lim_{h\rightarrow 0} \biggl(\dfrac{1-t-h}{(1-t)h}\textbf{1}\cdot {\xi}_t + \dfrac{h\mathbb{E}(R_{1} \mid {\xi}_t)}{(1-t)h} -\dfrac{R_t}{h}\biggr) \notag\\&= \dfrac{\mathbb{E}(R_{1} \mid {\xi}_t)}{(1-t)} + \lim_{h\rightarrow 0} \biggl(R_t\biggl(\dfrac{1-t-h}{(1-t)h} -\dfrac{1}{h}\biggr)\biggr) \notag \\*&= \dfrac{\mathbb{E}(R_{1} \mid {\xi}_t)}{1-t} - \lim_{h\rightarrow 0} \biggl(R_t\dfrac{h}{(1-t)h}\biggr),\end{align*}

which yields the result. □

Remark 4.3. When $\{{\xi}_t\}_{0\leq t \leq 1}$ is a PLP, Proposition 4.3 provides an alternative proof for Proposition 3.6, since $\{R_t\}_{0\leq t \leq 1}$ is a counting process.

Proposition 4.4. Let $\{\lambda_t^{(i)}\}_{0\leq t \leq 1}$ be the intensity process of the coordinate $\{\xi^{(i)}_t\}_{0\leq t \leq 1}$ . If $\mathbb{E}(R_1)<\infty$ , then

\begin{equation*}\lambda^{(i)}_t = \dfrac{m_i}{\textbf{1}\cdot \textbf{m}}\lambda^R_t \quad \text{for $0\leq t < 1$.}\end{equation*}

Proof. Fix $0\leq s < t < 1$ . From Remark 3.5, we know that, given ${\xi}_s$ , the increment ${\xi}_t - {\xi}_s$ has a generalised multivariate Liouville distribution with generating law

\[ \nu^*(\{i\})=\nu_{st}(\{i+R_s\}) \]

for $i\in\mathbb{N}_0$ and parameter vector $\textbf{m}(t-s)$ . We define

\begin{equation*} \mu_{st} = \sum_{i=0}^\infty i\, \nu^*(\{i\}) = \sum_{i=R_s}^{\infty} i \, \nu_{st}(\{i\}) - R_s = \mathbb{E}(R_t \mid {\xi}_{s}) - R_s= \dfrac{t-s}{1-s}(\mathbb{E}(R_1 \mid R_{s}) - R_s),\end{equation*}

where the last equality comes from (3.8). Then, from [Reference Fang, Kotz and Ng8, Theorem 6.3] and [Reference Gupta and Richards13], we have

\begin{equation*}\mathbb{E}(\xi^{(i)}_t \mid {\xi}_{s}) = \dfrac{m_i}{\textbf{1}\cdot \textbf{m}}\mu_{st} + \xi^{(i)}_s.\end{equation*}

Since $\{\xi^{(i)}_t\}_{0\leq t \leq 1}$ is a counting process, we have

\[\lambda^{(i)}_t = \lim_{h\rightarrow 0}\mathbb{E}\big(\xi^{(i)}_{t+h} - \xi^{(i)}_{t} \mid \mathcal{F}_t^{{\xi}}\big)/h.\]

Thus

\begin{equation*}\lambda^{(i)}_t = \dfrac{m_i}{\textbf{1}\cdot \textbf{m}}\lim_{h\rightarrow 0}\dfrac{\mu_{t,t+h}}{h} = \dfrac{m_i}{\textbf{1}\cdot \textbf{m}}\dfrac{\mathbb{E}(R_1 \mid R_{t}) - R_t}{1-t}.\end{equation*}

The result then follows from (4.4). □

Remark 4.4. Note that $\lambda^{R}_t = \sum_i\lambda^{(i)}_t$ .

If we let $\{P_t\}_{0\leq t \leq 1}$ denote a Poisson process, and define $\boldsymbol{\Delta}$ by $\Delta_i = P_{t_i}-P_{t_{i-1}}$ for some partition $0=t_0<t_1<\cdots<t_n$ , we have

\begin{align*}\mathbb{P}(\boldsymbol{\Delta}=\textbf{x}\mid P_{t_n}=k) = \begin{cases} k! \prod_{i=1}^n p_i^{x_i}/x_i! &\text{for $\textbf{x}\in\mathbb{N}_0^n$ with $\textbf{1}\cdot \textbf{x}=k$,}\\* 0 &\text{otherwise,}\end{cases}\end{align*}

where $p_i=(t_i-t_{i-1})/t_n$ . In other words, given $P_{t_n}$ , $\boldsymbol{\Delta}$ has a multinomial distribution. Let $\{{\xi}_t\}_{0\leq t \leq 1}$ be a Poisson Liouville process with generating law $\nu(\{k\})=A(k)$ , $k\in\mathbb{N}_0$ . Taking the partition $t_i=u_i$ , so that $p_i=m_i/(\textbf{1}\cdot \textbf{m})$ , we then have

\begin{equation*}\mathbb{P}({\xi}_1=\textbf{x}) = A(\textbf{1}\cdot \textbf{x}) (\textbf{1}\cdot \textbf{x})! \prod_{i=1}^n \dfrac{p_i^{x_i}}{x_i!}.\end{equation*}
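The displayed probability mass function is easy to sanity-check numerically. When $\nu$ is itself a Poisson law, the mass function factorises into a product of independent Poisson marginals with means $\lambda p_i$ (Poisson thinning). The Python sketch below is our illustration only (the function `plp_terminal_pmf` and the chosen numbers are assumptions, not notation from the paper); it evaluates the formula and compares it with the thinned product.

```python
import math

def plp_terminal_pmf(x, p, A):
    """P(xi_1 = x) = A(1.x) (1.x)! prod_i p_i^{x_i} / x_i!"""
    k = sum(x)
    out = A(k) * math.factorial(k)
    for xi, pi in zip(x, p):
        out *= pi**xi / math.factorial(xi)
    return out

# With a Poisson(lam) generating law, the terminal coordinates are
# independent Poisson(lam * p_i) random variables (Poisson thinning).
lam = 2.0
A = lambda k: math.exp(-lam) * lam**k / math.factorial(k)
p = [0.5, 0.3, 0.2]                       # p_i = m_i / (1.m)
x = (1, 0, 3)
lhs = plp_terminal_pmf(x, p, A)
rhs = 1.0
for xi, pi in zip(x, p):
    rhs *= math.exp(-lam * pi) * (lam * pi)**xi / math.factorial(xi)
print(abs(lhs - rhs) < 1e-12)             # the two expressions agree
```

The agreement reflects the identity $A(k)k!\prod_i p_i^{x_i}/x_i! = \prod_i \textrm{e}^{-\lambda p_i}(\lambda p_i)^{x_i}/x_i!$ when $A$ is the Poisson$(\lambda)$ mass function and $\textbf{1}\cdot\textbf{x}=k$.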

Write $G_\nu$ for the probability generating function of $\nu$ :

\begin{equation*}G_{\nu}(z)=\sum_{k=0}^{\infty}z^kA(k).\end{equation*}

Let $T^{(i)}$ be the time of the first jump of the $i$th marginal process. If $\xi^{(i)}_1 < 1$ , then we set $T^{(i)}=\infty$ .

Proposition 4.5. The random times $\{T^{(i)};\, i=1,\ldots,n\}$ satisfy the following:

\begin{align*}\mathbb{P}(T^{(i)}>s) &= G_{\nu}(1-s p_i), \notag \\*\mathbb{P}(T^{(i)}=\infty) &= G_{\nu}(1-p_i), \notag \\\mathbb{P}(T^{(i)}> s_i;\, i=1,\ldots,n)&=G_{\nu}\biggl( 1-\sum_{i=1}^n p_i s_i \biggr), \notag \\*\mathbb{P}(T^{(i)}=\infty;\, i=1,\ldots,n) &= A(0)\end{align*}

for $s\in[0,1]$ and $\textbf{s}\in [0,1]^n$ .

Proof. See the Appendix. □

Here $G_\nu$ is increasing (and invertible) on [0,1]. Write $\psi(x)=G_\nu(1-x)$ , and note that $\psi$ is decreasing and invertible on [0,1]. If $u_i\in[\psi(p_i),1]$ , then we have

\begin{equation*}\mathbb{P}\biggl(T^{(i)}> \dfrac{\psi^{-1}(u_i)}{p_i}\biggr) = u_i.\end{equation*}

It follows that the conditioned random variable $\psi(p_i T^{(i)}) \mid \{T^{(i)} < 1\}$ is uniformly distributed. Furthermore

\begin{equation*}\mathbb{P}\biggl(T^{(i)}> \dfrac{\psi^{-1}(u_i)}{p_i};\, i=1,\ldots,n\biggr) = \psi\biggl(\sum_i \psi^{-1}(u_i)\biggr).\end{equation*}

The form of the joint survival function of $\textbf{T}=\{T^{(i)};\, i=1,\ldots,n\}$ resembles that of an Archimedean copula. However, the fact that $\mathbb{P}(T^{(i)}\geq 1)>0$ means that it is not an Archimedean copula.
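The Archimedean-like structure of the joint survival function can be verified numerically: at the marginal survival levels $u_i=\psi(p_is_i)$, the identity $G_\nu(1-\sum_i p_is_i)=\psi(\sum_i\psi^{-1}(u_i))$ holds. The Python sketch below is our illustration (assuming a Poisson generating law and inverting $\psi$ by bisection; none of these names appear in the paper).

```python
import math

lam = 1.5
G = lambda z: math.exp(lam * (z - 1.0))      # pgf of a Poisson(lam) generating law
psi = lambda x: G(1.0 - x)                   # psi(x) = G_nu(1 - x), decreasing on [0, 1]

def psi_inv(u):
    # invert the strictly decreasing psi on [0, 1] by bisection
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if psi(mid) > u:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p = [0.5, 0.3, 0.2]                          # p_i = m_i / (1.m)
s = [0.4, 0.9, 0.1]
u = [psi(pi * si) for pi, si in zip(p, s)]   # marginal survival levels psi(p_i s_i)
joint = G(1.0 - sum(pi * si for pi, si in zip(p, s)))
print(abs(joint - psi(sum(psi_inv(ui) for ui in u))) < 1e-10)
```

The check prints `True` because $\psi^{-1}(\psi(p_is_i))=p_is_i$, so the right-hand side collapses to $G_\nu(1-\sum_ip_is_i)$, in line with the joint survival formula of Proposition 4.5.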

5. Conclusion

We have introduced generalised Liouville processes – a broad and tractable class of multivariate stochastic processes. The class of GLPs generalises some processes that have already been studied. We detailed various properties of GLPs and provided some new examples.

Appendix A. Proofs

A.1. Proposition 3.1

Proof. Since $\nu$ has no continuous singular part, $\nu(\textrm{d} z)=\sum^{\infty}_{j=-\infty}c_{j}\delta_{z_{j}}(z)\,\textrm{d} z+p(z)\,\textrm{d} z$ , where $c_{j}\in\mathbb{R}_+$ is a point mass of $\nu$ located at $z_{j}\in\mathbb{R}$ , and $p\colon \mathbb{R}\rightarrow\mathbb{R}_{+}$ is the density of the continuous part of $\nu$ . Then, from (2.2), the joint density of an LRB $\{L_t\}_{0\leq t \leq u_n}$ is given by

\begin{align*}& \mathbb{P}(L_{t_1} \in \textrm{d} x_1, \ldots, L_{t_k} \in \textrm{d} x_k, L_{u_n}\in \textrm{d} x_{k+1}) \\[3pt] &\quad = \prod_{i=1}^{k+1}[f_{t_i-t_{i-1}}(x_i-x_{i-1}) \,\textrm{d} x_i] \dfrac{\sum^{\infty}_{j=-\infty}c_{j}\delta_{z_{j}}(x_{k+1})+p(x_{k+1})}{f_{u_n}(x_{k+1})},\end{align*}

where $x_0=0$ , for all $k\in\mathbb{N}_+$ , all partitions $0=t_0<t_1<\cdots<t_{k}<t_{k+1}=u_n$ , and all $(x_1,\ldots,x_{k+1})^\top \in \mathbb{R}^{k+1}$ . Let $\boldsymbol{\alpha}\in\mathbb{R}^{k+1}_{+}$ be the vector of time increments $\alpha_{i}=t_{i}-t_{i-1}$ , and $\alpha=\textbf{1}\cdot \boldsymbol{\alpha}=u_n$ . The Jacobian of the transformation $y_{1}=x_{1},\ y_{2}=x_{2}-x_{1},\ \ldots,\ y_{k+1}=x_{k+1}-x_{k}$ is unity, and it follows that

\begin{align*}& \mathbb{P}(L_{t_1}-L_{t_0} \in \textrm{d} y_1, \ldots, L_{u_n}-L_{t_k}\in \textrm{d} y_{k+1})\\[3pt] & \quad = \prod_{i=1}^{k+1}f_{\alpha_{i}}(y_i)\,\textrm{d} y_{i} \dfrac{\sum^{\infty}_{j=-\infty}c_{j}\delta_{z_{j}}\big(\sum^{k+1}_{i=1}y_{i}\big)+p\big(\sum^{k+1}_{i=1}y_{i}\big)}{f_\alpha\big(\sum^{k+1}_{i=1}y_{i}\big)}.\end{align*}

From [Reference Gupta and Richards13], we know that $(L_{t_1}-L_{t_0},\ldots,L_{t_k}-L_{t_{k-1}},L_{u_n}-L_{t_k})^{\top}$ has a generalised multivariate Liouville distribution. Fix $k_i\geq 1$ and the partitions $0=t_0^i<t_1^i<\cdots<t_{k_i}^i=1$ , for $i=1,\ldots,n$ . Define the non-overlapping increments $\{\Delta_{ij}\}$ by

\[\Delta_{ij}=\xi^{(i)}_{t^i_j}-\xi^{(i)}_{t^i_{j-1}}\]

for $j=1,\ldots,k_i$ and $i=1,\ldots,n$ . The distribution of the $k_{1}\times \cdots \times k_{n}$ -element vector $\boldsymbol{\Delta}=(\Delta_{11},\ldots,\Delta_{1k_1},\ldots, \Delta_{n1},\ldots,\Delta_{nk_n})^\top$ characterises the finite-dimensional distributions of the GLP $\{{\xi}_t\}_{0\leq t \leq 1}$ . It follows from the Kolmogorov extension theorem that the distribution of $\boldsymbol{\Delta}$ characterises the law of $\{{\xi}_t\}_{0\leq t \leq 1}$ . Note that $\boldsymbol{\Delta}$ contains non-overlapping increments of $\{L_t\}$ such that $\textbf{1}\cdot \boldsymbol{\Delta}=L_{u_n}$ . Hence $\boldsymbol{\Delta}$ has a generalised multivariate Liouville distribution. □

A.2. Proposition 3.2

Proof. We compute the transition probabilities of $\{{\xi}_t\}_{0\leq t \leq 1}$ directly. We begin by transitioning to ${\xi}_t$ for $t<1$ . For all $k\geq 2$ , all $0<t_1<\cdots<t_k<1$ , and all $\textbf{x}_1,\ldots,\textbf{x}_{k} \in \mathbb{R}^n$ , we have

(A.1) \begin{align}&\mathbb{P}({\xi}_{t_k}\in \textrm{d} \textbf{x}_k \mid {\xi}_{t_1}=\textbf{x}_1,\ldots, {\xi}_{t_{k-1}}=\textbf{x}_{k-1}) \nonumber \\[4pt] &\quad= \dfrac{\mathbb{P}({\xi}_{t_1}\in\textrm{d} \textbf{x}_1,\ldots, {\xi}_{t_{k}}\in\textrm{d}\textbf{x}_{k})}{\mathbb{P}({\xi}_{t_1}\in\textrm{d}\textbf{x}_1,\ldots, {\xi}_{t_{k-1}}\in\textrm{d}\textbf{x}_{k-1})} \nonumber \\[4pt] &\quad= \dfrac{\int_{-\infty}^\infty \mathbb{P}({\xi}_{t_1}\in \textrm{d} \textbf{x}_1,\ldots, {\xi}_{t_k}\in \textrm{d}\textbf{x}_k \mid R_1=z)\,\nu(\textrm{d} z)}{\int_{-\infty}^\infty \mathbb{P}({\xi}_{t_1}\in\textrm{d}\textbf{x}_1,\ldots, {\xi}_{t_{k-1}}\in\textrm{d}\textbf{x}_{k-1} \mid R_1=z)\, \nu(\textrm{d} z)} \nonumber \\[4pt] &\quad= \dfrac{\int_{-\infty}^\infty \mathbb{P}({\xi}_{t_1}-{\xi}_{t_0}\in -\textbf{x}_{0}+\textrm{d} \textbf{x}_1,\ldots, {\xi}_{t_k}-{\xi}_{t_{k-1}}\in -\textbf{x}_{k-1}+\textrm{d}\textbf{x}_k \mid R_1=z)\,\nu(\textrm{d} z)}{\int_{-\infty}^\infty \mathbb{P}({\xi}_{t_1}-{\xi}_{t_0}\in -\textbf{x}_{0}+\textrm{d} \textbf{x}_1,\ldots, {\xi}_{t_{k-1}}-{\xi}_{t_{k-2}}\in -\textbf{x}_{k-2}+\textrm{d}\textbf{x}_{k-1} \mid R_1=z)\, \nu(\textrm{d} z)} \end{align}
(A.2) \begin{align}&\quad = \dfrac{\int_{-\infty}^\infty \bigl\{ \prod_{i=1}^n\prod_{j=1}^k f_{m_i(t_j-t_{j-1})}\big(x^{(i)}_j-x^{(i)}_{j-1}\big) \,\textrm{d} x^{(i)}_j \bigr\} f_{\textbf{1}\cdot \textbf{m}(1-t_k)}(z-\textbf{1}\cdot \textbf{x}_k)\, f_{\textbf{1}\cdot \textbf{m}}(z)^{-1}\,\nu(\textrm{d} z)}{\int_{-\infty}^\infty \bigl\{ \prod_{i=1}^n\prod_{j=1}^{k-1} f_{m_i(t_j-t_{j-1})}\big(x^{(i)}_j-x^{(i)}_{j-1}\big) \,\textrm{d} x^{(i)}_j \bigr\} f_{\textbf{1}\cdot \textbf{m}(1-t_{k-1})}(z-\textbf{1}\cdot \textbf{x}_{k-1})\, f_{\textbf{1}\cdot \textbf{m}}(z)^{-1}\,\nu(\textrm{d} z)}\quad \\ &\quad =\dfrac{\int_{-\infty}^\infty \bigl\{ \prod_{i=1}^n f_{m_i(t_k-t_{k-1})}\big(x^{(i)}_k-x^{(i)}_{k-1}\big) \,\textrm{d} x^{(i)}_k \bigr\} f_{\textbf{1}\cdot \textbf{m}(1-t_k)}(z-\textbf{1}\cdot \textbf{x}_k)\, f_{\textbf{1}\cdot \textbf{m}}(z)^{-1}\,\nu(\textrm{d} z)}{\int_{-\infty}^\infty f_{\textbf{1}\cdot \textbf{m}(1-t_{k-1})}(z-\textbf{1}\cdot \textbf{x}_{k-1})\, f_{\textbf{1}\cdot \textbf{m}}(z)^{-1}\,\nu(\textrm{d} z)} \nonumber\\[4pt] &\quad = \dfrac{\Theta_{t_k}(\textbf{1}\cdot \textbf{x}_{k})}{\Theta_{t_{k-1}}(\textbf{1}\cdot \textbf{x}_{k-1})} \prod_{i=1}^n f_{m_i(t_k-t_{k-1})}\big(x^{(i)}_k-x^{(i)}_{k-1}\big) \,\textrm{d} x^{(i)}_k, \nonumber \end{align}

where $t_0=0$ , $\textbf{x}_0=\textbf{0}$ and $x^{(i)}_j$ is the ith coordinate of $\textbf{x}_j$ . We provide some remarks on the step (A.1) to (A.2). Note that in (A.1) all the increments of type ${\xi}_t-{\xi}_s$ are vectors of non-overlapping increments of the master LRB $\{L_t\}_{0\leq t \leq \textbf{1}\cdot \textbf{m}}$ . Given $R_1=L_{\textbf{1}\cdot \textbf{m}}$ , $\{L_t\}_{0\leq t \leq \textbf{1}\cdot \textbf{m}}$ is a Lévy bridge, and so its law is invariant to a reordering of its non-overlapping increments. This is a direct result of the so-called cyclical exchangeability property of Lévy bridges (see e.g. [Reference Chaumont, Hobson and Yor7]). The integrands in (A.2) can then be recognised as Lévy bridge transition probabilities.

We now consider transitioning to ${\xi}_1$ . For all $k\geq 1$ , all $0<t_1<\cdots<t_k<1$ , all $\textbf{x}_1,\ldots,\textbf{x}_k \in \mathbb{R}^n$ , all $y_1,\ldots,y_{n-1}\in\mathbb{R}$ , and all $B\in\mathcal{B}(\mathbb{R})$ , we have

(A.3) \begin{align}&\mathbb{P}(\xi^{(1)}_1\in \textrm{d} y_1,\ldots,\xi^{(n-1)}_1\in \textrm{d} y_{n-1}, \xi^{(n)}_1\in B\mid {\xi}_{t_1}=\textbf{x}_1,\ldots, {\xi}_{t_{k}}=\textbf{x}_{k}) \nonumber \\[4pt] &\quad=\dfrac{\mathbb{P}\big(\xi^{(1)}_1\in \textrm{d} y_1,\ldots,\xi^{(n-1)}_1\in \textrm{d} y_{n-1}, \xi^{(n)}_1\in B, {\xi}_{t_1}\in\textrm{d}\textbf{x}_1,\ldots, {\xi}_{t_{k}}\in\textrm{d}\textbf{x}_{k}\big)}{\mathbb{P}({\xi}_{t_1}\in\textrm{d}\textbf{x}_1,\ldots, {\xi}_{t_{k}}\in\textrm{d}\textbf{x}_{k})} \nonumber \\[4pt] &\quad=\dfrac{\mathbb{P}\big(\xi^{(1)}_1\in \textrm{d} y_1,\ldots,\xi^{(n-1)}_1\in \textrm{d} y_{n-1}, R_1\in B+\sum_{i=1}^{n-1} y_i, {\xi}_{t_1}\in\textrm{d}\textbf{x}_1,\ldots, {\xi}_{t_{k}}\in\textrm{d}\textbf{x}_{k}\big)}{\mathbb{P}({\xi}_{t_1}\in\textrm{d}\textbf{x}_1,\ldots, {\xi}_{t_{k}}\in\textrm{d}\textbf{x}_{k})} \nonumber \\[4pt] &\quad=\dfrac{\int_{z\in B+\sum_{i=1}^{n-1} y_i}\mathbb{P}\big(\xi^{(1)}_1\in \textrm{d} y_1,\ldots,\xi^{(n-1)}_1\in \textrm{d} y_{n-1}, {\xi}_{t_1}\in\textrm{d}\textbf{x}_1,\ldots, {\xi}_{t_{k}}\in\textrm{d}\textbf{x}_{k} \mid R_1=z\big)\,\nu(\textrm{d} z)}{\int_{-\infty}^{\infty}\mathbb{P}({\xi}_{t_1}\in\textrm{d}\textbf{x}_1,\ldots, {\xi}_{t_{k}}\in\textrm{d}\textbf{x}_{k}\mid R_1=z)\,\nu(\textrm{d} z)} \nonumber \\[4pt] &\quad=\dfrac{\int_{z\in B+\sum_{i=1}^{n-1} y_i}\bigl\{\prod_{i=1}^{n-1}f_{m_i(1-t_k)}\big(y_i-x_k^{(i)}\big)\,\textrm{d} y_i \bigr\}f_{m_n(1-t_k)}\bigl(z-x_k^{(n)}-\sum_{i=1}^{n-1}y_i\bigr)\, f_{\textbf{1}\cdot \textbf{m}}(z)^{-1}\,\nu(\textrm{d} z)}{\int_{-\infty}^\infty f_{\textbf{1}\cdot \textbf{m}(1-t_{k})}(z-\textbf{1}\cdot \textbf{x}_{k})\, f_{\textbf{1}\cdot \textbf{m}}(z)^{-1}\,\nu(\textrm{d} z)}\\ &\quad= \dfrac{\theta_{\tau(t_k)}\bigl(B+\sum_{i=1}^{n-1} y_i;\, x_k^{(n)}+\sum_{i=1}^{n-1}y_i \bigr)}{\Theta_{t_{k}}(\textbf{1}\cdot \textbf{x}_{k})}\prod_{i=1}^{n-1}f_{m_i(1-t_k)}\big(y_i-x_k^{(i)}\big)\,\textrm{d} y_i, \nonumber \end{align}

where again $t_0=0$ and (A.3) follows from similar arguments to (A.2). □

A.3. Corollary 3.1

Proof. The process $\{R_t\}_{0\leq t \leq 1}$ is a Lévy process under $\widetilde{\mathbb{P}}$ , where $\widetilde{\mathbb{P}}(R_{t}\in \,\textrm{d} x)=f_{t(\textbf{1}\cdot\textbf{m})}(x)\,\textrm{d} x$ . To show $\mathbb{E}^{\widetilde{\mathbb{P}}}(|\Theta_t(R_t) |)< \infty$ , use the Chapman–Kolmogorov convolution and the non-negativity of f:

\begin{align*}\int_{\mathbb{R}}\biggl(\int_{\mathbb{R}} \biggl|\dfrac{f_{\textbf{1}\cdot \textbf{m}(1-t)}(z-r)f_{t(\textbf{1}\cdot \textbf{m})}(r)}{f_{\textbf{1}\cdot \textbf{m}}(z)}\biggr|\,\textrm{d} r\biggr) \nu(\textrm{d} z)&=\int_{\mathbb{R}}\int_{\mathbb{R}} f_{\textbf{1}\cdot \textbf{m}(1-t)}(z-r)f_{t(\textbf{1}\cdot \textbf{m})}(r)\,\textrm{d} r \dfrac{\nu(\textrm{d} z)}{f_{\textbf{1}\cdot \textbf{m}}(z)} \nonumber \\[3pt]&=\int_{\mathbb{R}}\dfrac{f_{\textbf{1}\cdot \textbf{m}}(z)}{f_{\textbf{1}\cdot \textbf{m}}(z)}\nu(\textrm{d} z)\nonumber \\[3pt]&=\nu(\mathbb{R}) = 1.\end{align*}

Since $\mathbb{R}$ is a $\sigma$ -finite measure space (with respect to Lebesgue measure), and f is measurable, we can use Fubini’s theorem and write

\begin{align*}\int_{\mathbb{R}}\biggl(\int_{\mathbb{R}} \biggl|\dfrac{f_{\textbf{1}\cdot \textbf{m}(1-t)}(z-r)f_{t(\textbf{1}\cdot \textbf{m})}(r)}{f_{\textbf{1}\cdot \textbf{m}}(z)}\biggr|\,\textrm{d} r\biggr) \nu(\textrm{d} z)&=\int_{\mathbb{R}}\biggl(\int_{\mathbb{R}} \biggl|\dfrac{f_{\textbf{1}\cdot \textbf{m}(1-t)}(z-r)f_{t(\textbf{1}\cdot \textbf{m})}(r)}{f_{\textbf{1}\cdot \textbf{m}}(z)}\biggr|\nu(\textrm{d} z)\biggr) \,\textrm{d} r \nonumber \\[3pt]&=\mathbb{E}^{\widetilde{\mathbb{P}}}(|\Theta_t(R_t) |).\end{align*}

Also, since $\{\Theta_t(R_t)\}_{0\leq t<1}$ is harmonic, $\{\Theta_t(R_t)\}_{0\leq t<1}$ is an $(\mathcal{F}_t^{{\xi}},\widetilde{\mathbb{P}})$ -martingale. Explicitly, we have

\begin{align*}\mathbb{E}^{\widetilde{\mathbb{P}}}(\Theta_t(R_t) \mid \mathcal{F}_s^{{\xi}} )&=\mathbb{E}^{\widetilde{\mathbb{P}}}\biggl( \int_{-\infty}^{\infty}\dfrac{f_{\textbf{1}\cdot \textbf{m}(1-t)}(z-R_s-(R_t-R_s))}{f_{\textbf{1}\cdot \textbf{m}}(z)}\, \nu(\textrm{d} z)\mid {\xi}_{s} \biggr) \nonumber\\[3pt] &=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f_{\textbf{1}\cdot \textbf{m}(1-t)}(z-R_s-y)f_{\textbf{1}\cdot \textbf{m}(t-s)}(y)\,\textrm{d} y \, \dfrac{\nu(\textrm{d} z)}{f_{\textbf{1}\cdot \textbf{m}}(z)} \nonumber\\[3pt] &=\int_{-\infty}^{\infty}\dfrac{f_{\textbf{1}\cdot \textbf{m}(1-s)}(z-R_s)}{f_{\textbf{1}\cdot \textbf{m}}(z)} \, \nu(\textrm{d} z) \nonumber\\[3pt] &=\Theta_s(R_s)\end{align*}

for $0\leq s<t<1$ . Since $\Theta_0(R_0)=1$ and $\Theta_t(R_t)>0$ , $\{\Theta_t(R_t)\}_{0\leq t<1}$ is a Radon–Nikodým density process. We continue by verifying that under $\mathbb{P}$ the transition law of $\{{\xi}_t\}_{0\leq t < 1}$ is that of a GLP with generating law $\nu$ and parameter vector $\textbf{m}$ :

(A.4) \begin{align}\mathbb{P}({\xi}_t \in \textrm{d} \textbf{x} \mid \mathcal{F}_s^{{\xi}})&=\mathbb{E}({1}_{\{{\xi}_t\in\textrm{d}\textbf{x}\}} \mid \mathcal{F}_s^{{\xi}}) \nonumber\\[3pt] &=\dfrac{1}{\Theta_s(R_s)}\,\mathbb{E}^{\widetilde{\mathbb{P}}}(\Theta_t(R_t) {1}_{\{{\xi}_t\in\textrm{d}\textbf{x}\}} \mid {\xi}_s) \nonumber\\[3pt] &=\dfrac{\Theta_{t}(R_t)}{\Theta_{s}(R_{s})}\prod_{i=1}^n f_{m_i(t-s)}(x_i-\xi^{(i)}_s) \,\textrm{d} x_i.\end{align}

Comparing equations (A.4) and (3.2) completes the proof. □

A.4. Proposition 3.4

Proof. Fix $0 \leq s < t < 1$ . Then

(A.5) \begin{equation}\mathbb{P}(\xi^{(i)}_t \in \,\textrm{d} y_i \mid \xi^{(i)}_s=x_i)= \dfrac{\int_{\mathbb{R}} \mathbb{P}(\xi^{(i)}_t \in \,\textrm{d} y_i, \xi^{(i)}_s\in \textrm{d} x_i \mid R_1=r) \mathbb{P}(R_1\in \textrm{d} r)}{\int_{\mathbb{R}}\mathbb{P}(\xi^{(i)}_s\in \textrm{d} x_i \mid R_1=r) \mathbb{P}(R_1\in \textrm{d} r)}.\end{equation}

The numerator of (A.5) is

(A.6) \begin{align}&\int_{\mathbb{R}} \mathbb{P}(\xi^{(i)}_t \in \,\textrm{d} y_i, \,\xi^{(i)}_s\in \textrm{d} x_i \mid R_1=r) \mathbb{P}(R_1\in \textrm{d} r) \notag \\[3pt]&\quad = f_{m_is}(x_i)\,\textrm{d} x_i f_{(t-s)m_i}(y_i-x_i) \,\textrm{d} y_i \int_{\mathbb{R}} \dfrac{ f_{\textbf{1}\cdot \textbf{m} -m_it}(r-y_i)}{f_{\textbf{1}\cdot \textbf{m}}(r)}\nu(\textrm{d} r),\end{align}

and the denominator is

(A.7) \begin{equation}\int_{\mathbb{R}}\mathbb{P}(\xi^{(i)}_s\in \textrm{d} x_i \mid R_1=r) \mathbb{P}(R_1\in \textrm{d} r) = f_{m_is}(x_i)\,\textrm{d} x_i \int_{\mathbb{R}} \dfrac{f_{\textbf{1}\cdot \textbf{m} -m_is}(r-x_i)}{f_{\textbf{1}\cdot \textbf{m}}(r)}\nu(\textrm{d} r).\end{equation}

Dividing (A.6) by (A.7) concludes the first part.

For the second part, write ${\xi}_t^{\oslash i}$ for the vector ${\xi}_t$ excluding its ith coordinate. Using the Markov property of $\{{\xi}_t\}_{0\leq t \leq 1}$ , we have

(A.8) \begin{equation}\mathbb{P}(\xi^{(i)}_t \in \textrm{d} y_i \mid \mathcal{F}_s^{{\xi}} ) = \dfrac{\mathbb{P}\big(\xi^{(i)}_t \in \textrm{d} y_i,\xi^{(i)}_s \in \textrm{d} x_i, {\xi}_s^{\oslash i} \in \textrm{d} \textbf{x}^{\oslash i} \big)}{\mathbb{P}\big(\xi^{(i)}_s \in \textrm{d} x_i, {\xi}_s^{\oslash i} \in \textrm{d} \textbf{x}^{\oslash i} \big)}.\end{equation}

The numerator of (A.8) is given by

(A.9) \begin{align}&\int_{\mathbb{R}} \mathbb{P}( \xi^{(i)}_t \in \textrm{d} y_i , \xi^{(i)}_s \in \textrm{d} x_i, {\xi}_s^{\oslash i} \in\textrm{d} \textbf{x}^{\oslash i} \mid R_1=r ) \, \mathbb{P}(R_1\in\textrm{d} r) \notag \\[3pt] & \quad = \prod_{j=1}^n[f_{m_js}(x_j) \,\textrm{d} x_j] f_{m_i(t-s)}(y_i-x_i) \,\textrm{d} y_i \notag \\[3pt]&\quad\quad\, \times \int_{\mathbb{R}} \dfrac{f_{\textbf{1}\cdot \textbf{m}(1-s) - m_i(t-s)}\big(r-\sum^{n}_{j=1} x_{j} - (y_i-x_i)\big)}{f_{\textbf{1}\cdot \textbf{m}}(r)} \, \nu(\textrm{d} r),\end{align}

and the denominator is given by

(A.10) \begin{equation}\mathbb{P}({\xi}_s\in\,\textrm{d}\textbf{x} ) = \prod_{i=1}^{n}[f_{m_is}(x_i)\,\textrm{d} x_i] \int_{-\infty}^{\infty} \dfrac{f_{\textbf{1}\cdot \textbf{m}(1-s)}\big(r-\sum^{n}_{i=1} x_{i}\big)}{f_{\textbf{1}\cdot \textbf{m}}(r)} \,\nu(\textrm{d} r).\end{equation}

Equation (A.10) follows from the stationary increments property of LRBs and (2.2). Dividing (A.9) by (A.10) concludes the second part.

A.5. Proposition 3.5

Proof. Since $\{{\xi}_t\}_{0\leq t \leq 1}$ is a Markov process with respect to $\{\mathcal{F}_t^{{\xi}}\}_{0\leq t \leq 1}$, so is $\{R_t\}_{0\leq t \leq 1}$. We first verify (3.5), the ${\xi}_s$-conditional law of $R_1$. For $s=0$, the law of $R_1$ is trivially $\nu$. For $0< s <1$, using (A.10), we have

\begin{align*}\mathbb{P}(R_1\in \textrm{d} r\mid {\xi}_s=\textbf{x}) &= \dfrac{\mathbb{P}({\xi}_s\in \,\textrm{d}\textbf{x} \mid R_1=r)\,\nu(\textrm{d} r)}{\mathbb{P}({\xi}_s\in\,\textrm{d}\textbf{x} )} \notag \\[3pt]&= \dfrac{f_{\textbf{1}\cdot \textbf{m}(1-s)}(r-\textbf{1}\cdot \textbf{x}) \, {{f_{\textbf{1}\cdot \textbf{m}}(r)^{-1}}{\nu(\textrm{d} r)}}} {\int_{\mathbb{R}}f_{\textbf{1}\cdot \textbf{m}(1-s)}(r-\textbf{1}\cdot \textbf{x}) \, {{f_{\textbf{1}\cdot \textbf{m}}(r)^{-1}}{\nu(\textrm{d} r)}} },\end{align*}

as required. Next we verify (3.6), the ${\xi}_s$-conditional law of $R_t$ for $0\leq s < t<1$. The process $\{R_t\}_{0\leq t \leq 1}$ is a $\widetilde{\mathbb{P}}$-Lévy process with $\widetilde{\mathbb{P}}(R_{t}\in \textrm{d} r)=f_{(\textbf{1}\cdot\textbf{m}) t}(r)\,\textrm{d} r$, where $\widetilde{\mathbb{P}}$ is given by (3.4). Using Corollary 3.1 (or [Reference Hoyle, Hughston and Macrina15, Proposition 3.7]), it follows that $\{R_t\}_{0\leq t < 1}$ is a $\mathbb{P}$-LRB, and

\begin{align*}\mathbb{P}(R_t \in \textrm{d} r \mid {\xi}_s =\textbf{x})&=\Theta_{s}(\textbf{1}\cdot\textbf{x})^{-1}\,\mathbb{E}^{\widetilde{\mathbb{P}}}\big(\Theta_t(R_t) {1}_{\{R_t\in\textrm{d} r\}} \mid R_s = \textbf{1}\cdot\textbf{x}\big) \nonumber \\[3pt]&=\Theta_{s}(\textbf{1}\cdot\textbf{x})^{-1}\,\int_{\mathbb{R}} \dfrac{f_{\textbf{1}\cdot \textbf{m}(1-t)}(z-r)}{f_{\textbf{1}\cdot \textbf{m}}(z)} \, \nu(\textrm{d} z) f_{(t-s)\textbf{1}\cdot\textbf{m}}(r-\textbf{1}\cdot\textbf{x})\,\textrm{d} r,\end{align*}

as required. □
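The normalisation of the law just obtained can be checked numerically. Writing $\Theta_t(r)=\int_{\mathbb{R}} f_{\textbf{1}\cdot\textbf{m}(1-t)}(z-r)f_{\textbf{1}\cdot\textbf{m}}(z)^{-1}\,\nu(\textrm{d} z)$, the Chapman–Kolmogorov relation gives $\int_{\mathbb{R}}\Theta_t(r)f_{(t-s)\textbf{1}\cdot\textbf{m}}(r-\textbf{1}\cdot\textbf{x})\,\textrm{d} r=\Theta_s(\textbf{1}\cdot\textbf{x})$, so the conditional law integrates to one. The Python sketch below verifies this by quadrature in the Brownian case with a two-point terminal law $\nu$ (all parameter values are illustrative).

```python
import math

def f(u, x):
    # centred Gaussian density with variance u (Brownian case of f_u)
    return math.exp(-x * x / (2.0 * u)) / math.sqrt(2.0 * math.pi * u)

M = 2.0                          # 1 . m (illustrative)
zs, ws = [0.5, 2.0], [0.3, 0.7]  # two-point terminal law nu (illustrative)

def theta(t, r):
    # Theta_t(r) = int f_{M(1-t)}(z - r) / f_M(z) nu(dz)
    return sum(w * f(M * (1.0 - t), z - r) / f(M, z) for z, w in zip(zs, ws))

s, t, X = 0.2, 0.6, 0.4          # X plays the role of 1 . x

# integrate Theta_t(r) f_{M(t-s)}(r - X) over r and compare with Theta_s(X)
h, lo, hi = 0.001, -25.0, 25.0
lhs = h * sum(theta(t, lo + k * h) * f(M * (t - s), lo + k * h - X)
              for k in range(int((hi - lo) / h)))
print(round(lhs / theta(s, X), 4))   # -> 1.0
```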

A.6. Proposition 3.7

Proof. Conditional on ${\xi}_d$ $(0<d\leq 1)$, the coordinates of $\{{\xi}_t\}_{t\leq d}$ are independent Lévy bridges, and $\{R_t\}_{t\leq d}$ is a Lévy bridge. Thus it suffices to prove that an integrable Lévy bridge is a harness. Let $\{X_t\}_{0\leq t \leq 1}$ be a Lévy process such that $X_t$ has a density $f_t$ for $t\in(0,1]$. We shall show that the conditioned process $\{X_t\mid X_1=k\}$, which is a Lévy bridge, is a harness. The conditions of the proposition allow us to assume that $\{X_t\mid X_1=k\}$ is integrable. We start by computing the following:

(A.11) \begin{equation}\mathbb{P}\biggl[\bigcap_{i=1}^{n_y} X_{t_i}\in\textrm{d} y_i\mid\biggl(\bigcap_{i=1}^{n_x} X_{a_i}=x_i \biggr) \cap \biggl(\bigcap_{i=1}^{n_z} X_{d_i}=z_i \biggr) \cap (X_1=k)\biggr]\end{equation}

for any $n_x, n_y, n_z\in\mathbb{N}_+$, any $0=a_0<a_1<\cdots<a_{n_x}=a<t_1<\cdots<t_{n_y}<d=d_1<\cdots<d_{n_z}<1$, any $(x_1,\ldots,x_{n_x}) \in\mathbb{R}^{n_x}$, any $(y_1,\ldots,y_{n_y}) \in\mathbb{R}^{n_y}$, and any $(z_1,\ldots,z_{n_z}) \in\mathbb{R}^{n_z}$. By Bayes’ rule, the numerator is

\begin{align*}I_1&=\mathbb{P}\biggl[\biggl(\bigcap_{i=1}^{n_y} X_{t_i}\in\textrm{d} y_i\biggr) \cap\biggl(\bigcap_{i=1}^{n_x} X_{a_i}\in\,\textrm{d} x_i \biggr) \cap \biggl(\bigcap_{i=1}^{n_z} X_{d_i}\in\,\textrm{d} z_i \biggr) \cap (X_1\in \,\textrm{d} k)\biggr] \nonumber \\[3pt]&= \biggl(\prod_{i=1}^{n_x} f_{a_i-a_{i-1}}(x_i-x_{i-1}) \,\textrm{d} x_i \biggr) \\[3pt]&\quad\, \times \biggl(f_{t_1-a}(y_1-x_{n_x})\,\textrm{d} y_1 \prod_{i=2}^{n_y} f_{t_i-t_{i-1}}(y_i-y_{i-1}) \,\textrm{d} y_i\biggr) \nonumber\\[3pt] &\quad\, \times\biggl(f_{d-t_{n_y}}(z_1-y_{n_y})\,\textrm{d} z_1 \prod_{i=2}^{n_z} f_{d_i-d_{i-1}}(z_i-z_{i-1}) \,\textrm{d} z_i\biggr) f_{1-d_{n_z}}(k-z_{n_z})\,\textrm{d} k,\end{align*}

and the denominator is

\begin{align*}I_2&=\mathbb{P}\biggl[\biggl(\bigcap_{i=1}^{n_x} X_{a_i}\in \,\textrm{d} x_i \biggr) \cap \biggl(\bigcap_{i=1}^{n_z} X_{d_i}\in \,\textrm{d} z_i \biggr) \cap (X_1 \in \,\textrm{d} k)\biggr] \nonumber \\[3pt]&= \biggl(\prod_{i=1}^{n_x} f_{a_i-a_{i-1}}(x_i-x_{i-1}) \,\textrm{d} x_i \biggr) \nonumber\\[3pt]&\quad\, \times \biggl(f_{d-a}(z_1-x_{n_x})\,\textrm{d} z_1 \prod_{i=2}^{n_z} f_{d_i-d_{i-1}}(z_i-z_{i-1}) \,\textrm{d} z_i\biggr) f_{1-d_{n_z}}(k-z_{n_z})\,\textrm{d} k.\end{align*}

So (A.11) is equal to

\begin{equation*}\dfrac{I_1}{I_2}=\prod_{i=2}^{n_y} (f_{t_i-t_{i-1}}(y_i-y_{i-1}) \,\textrm{d} y_i)\dfrac{f_{t_1-a}(y_1-x_{n_x})f_{d-t_{n_y}}(z_1-y_{n_y})\,\textrm{d} y_1}{f_{d-a}(z_1-x_{n_x})}.\end{equation*}

It follows from the Kolmogorov extension theorem that $\{X_t\mid \mathcal{H}_{a,d}^X\}_{a\leq t \leq d}$ is a Lévy bridge between $X_a$ and $X_d$. Define $\{\eta_t\}_{0\leq t \leq d-a}$ by $\eta_t=X_{a+t}-X_a$. Then $\{\eta_t\mid \mathcal{H}_{a,d}^X\}$ is a Lévy bridge from 0 to $X_d-X_a$, and

\begin{equation*}\mathbb{E}[\eta_t\mid \mathcal{H}_{a,d}^X]= \dfrac{t}{d-a}(X_d-X_a),\end{equation*}

which yields the result. □
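The harness property just proved can be checked by simulation in the Brownian case. Since the conditional mean of a centred Gaussian vector is its linear least-squares projection, the zero-intercept regression slope of $X_{t'}-X_a$ on $X_d-X_a$ should be $(t'-a)/(d-a)$. The Python sketch below (times and sample size are illustrative) estimates the slope by Monte Carlo.

```python
import math
import random

random.seed(7)

# Monte Carlo check of E[X_{t'} - X_a | X_a, X_d] = ((t'-a)/(d-a))(X_d - X_a)
# for Brownian motion: regress X_{t'} - X_a on X_d - X_a (both centred, so a
# zero-intercept least-squares fit) and compare the slope with (t'-a)/(d-a).
a, tp, d = 0.2, 0.5, 0.8      # illustrative times a < t' < d
N = 200_000
sxy = sxx = 0.0
for _ in range(N):
    u = random.gauss(0.0, math.sqrt(tp - a))       # X_{t'} - X_a
    v = u + random.gauss(0.0, math.sqrt(d - tp))   # X_d - X_a
    sxy += u * v
    sxx += v * v
slope = sxy / sxx
print(round(slope, 2))   # -> 0.5, i.e. (t'-a)/(d-a)
```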

A.7. Proposition 4.1

Proof. Define the mapping $H\colon \mathbb{R} \times \mathbb{R} \times [0,1) \times \mathbb{R}_+ \rightarrow \mathbb{R}_+$ as follows:

\begin{equation*}H(z, y, t, m) = \exp\biggl\{\dfrac{z y-t z^2/2}{m^2(1-t)}\biggr\}.\end{equation*}

Since the Brownian bridges $\{\beta^{(i)}_t\}_{0\leq t \leq 1}$ , $i=1,\ldots,n$ , in (4.2) are mutually independent and $\{ {\xi}_{t} \}_{0\leq t \leq 1}$ is Markov, we have

(A.12) \begin{align} \mathbb{P}({\xi}_{1} \in \textrm{d} \textbf{x} \mid \mathcal{F}_t^{{\xi}}) & \overset{\textnormal{law}}{=} \dfrac{\prod_{i=1}^n H (x^{(i)}, \xi_{t}^{(i)}, t, \sigma\sqrt{m_i} ) \, \mathbb{P}({\xi}_{1} \in \textrm{d} \textbf{x})}{\int_{\mathbb{R}^n}\prod_{i=1}^n H (x^{(i)}, \xi_{t}^{(i)}, t, \sigma\sqrt{m_i} ) \, \mathbb{P}({\xi}_{1} \in \textrm{d} \textbf{x})} \notag \\[3pt]& = \dfrac{\exp\Bigl\{\sum_{i=1}^n \frac{x^{(i)} \xi_{t}^{(i)}-t (x^{(i)})^2/2}{\sigma^2 m_i(1-t)}\Bigr\} \, \mathbb{P}({\xi}_{1} \in \textrm{d} \textbf{x})}{\int_{\mathbb{R}^n}\exp\Bigl\{\sum_{i=1}^n \frac{x^{(i)} \xi_{t}^{(i)}-t (x^{(i)})^2/2}{\sigma^2 m_i(1-t)}\Bigr\} \, \mathbb{P}({\xi}_{1} \in \textrm{d} \textbf{x})}.\end{align}

If we define the numerator of (A.12) as the function

(A.13) \begin{equation}g\big(\big(\xi^{(i)}_{t}\big)_{i=1,\ldots,n},t ;\, \textrm{d} \textbf{x} \big) = \exp\biggl\{\sum_{i=1}^n \dfrac{x^{(i)} \xi_{t}^{(i)}-t \big(x^{(i)}\big)^2/2}{\sigma^2 m_i(1-t)}\biggr\} \, \mathbb{P}({\xi}_{1} \in \textrm{d} \textbf{x})\end{equation}

and apply Itô’s formula to (A.13), we get

\begin{align*}\textrm{d} g &= \dfrac{\partial g}{\partial t} \,\textrm{d} t + \sum_{i=1}^n \dfrac{\partial g}{\partial \xi_{t}^{(i)}} \,\textrm{d} \xi_{t}^{(i)} +\dfrac{1}{2}\sum_{i=1}^n \dfrac{\partial^2 g}{\partial \big(\xi_{t}^{(i)}\big)^2} \,\textrm{d} \langle \xi^{(i)}_{t}\rangle + \sum_{i\neq j}^n \dfrac{\partial^2 g}{\partial \xi_{t}^{(i)}\partial \xi_{t}^{(j)}} \,\textrm{d} \langle \xi^{(i)}_{t}, \xi_{t}^{(j)}\rangle \notag \\[3pt] &= g\big(\big(\xi^{(i)}_{t}\big)_{i=1,\ldots,n},t ;\, \textrm{d} \textbf{x} \big) \biggl(\sum_{i=1}^n \dfrac{x^{(i)}\xi^{(i)}_{t}}{\sigma^2 m_i(1-t)^2}\,\textrm{d} t + \sum_{i=1}^n\dfrac{x^{(i)}}{\sigma^2 m_i(1-t)}\,\textrm{d} \xi^{(i)}_{t} \biggr),\end{align*}

where the covariation brackets $\langle \xi^{(i)}_{t}, \xi^{(j)}_{t} \rangle$ for $i\neq j$ disappear since the $\{\beta^{(i)}_t\}_{0\leq t \leq 1}$ , $i=1,\ldots,n$ , are mutually independent. Let

\[G\big(\big(\xi^{(i)}_{t}\big)_{i=1,\ldots,n},t \big) = \int_{\mathbb{R}^n}g\big(\big(\xi^{(i)}_{t}\big)_{i=1,\ldots,n},t ;\, \textrm{d} \textbf{x} \big){.}\]

Then, using Fubini’s theorem,

\begin{equation*}\textrm{d} G\big(\big(\xi^{(i)}_{t}\big)_{i=1,\ldots,n},t \big) = G\big(\big(\xi^{(i)}_{t}\big)_{i=1,\ldots,n},t \big) \biggl(\sum_{i=1}^n \dfrac{\mathbb{E}\big(\xi_1^{(i)}\mid {\xi}_t\big)\xi^{(i)}_{t}}{\sigma^2 m_i (1-t)^2}\,\textrm{d} t + \sum_{i=1}^n\dfrac{\mathbb{E}\big(\xi_1^{(i)}\mid {\xi}_t\big)}{\sigma^2 m_i (1-t)}\,\textrm{d}\xi^{(i)}_{t}\biggr).\end{equation*}

The statement follows by applying Itô’s formula to the ratio (A.12), where we get

\begin{align*} \textrm{d} \phi_t(\textrm{d} \textbf{x}) &= \phi_t(\textrm{d} \textbf{x})\biggl(\sum_{i=1}^n \dfrac{x^{(i)}-\mathbb{E}\big(\xi_1^{(i)}\mid {\xi}_t\big)}{\sigma^2 m_i (1-t)} \biggl(\textrm{d} \xi^{(i)}_t - \dfrac{\mathbb{E}\big(\xi_1^{(i)}\mid {\xi}_t\big) - \xi^{(i)}_t}{1-t}\,\textrm{d} t\biggr) \biggr) \notag \\[3pt]&\triangleq\phi_t(\textrm{d} \textbf{x})\biggl(\sum_{i=1}^n \sigma_t^{(i)}\,\textrm{d} \tilde{B}_t^{(i)}\biggr) \notag.\end{align*}

Writing $\tilde{\textbf{B}}_t=(\tilde{B}_t^{(1)},\ldots,\tilde{B}_t^{(n)})^\top$, define $\{\textbf{B}_t\}_{0\leq t < 1}$ by $\tilde{\textbf{B}}_t=\sigma \sqrt{\textbf{m}}\circ\textbf{B}_t$; that is, $B_t^{(i)}=\tilde{B}_t^{(i)}/(\sigma\sqrt{m_i})$. For each $i\in\{1,\ldots, n\}$, $\{\xi^{(i)}_t\}_{0\leq t \leq 1}$ is an LRB, and so, following similar steps to the proof of Proposition 3.6, we can show that $\{B_t^{(i)}\}_{0\leq t < 1}$ is continuous with quadratic variation $t$ and is an $(\mathcal{F}_t^{{\xi}},\mathbb{P})$-martingale. Then, by Lévy’s characterisation, $\{\textbf{B}_t\}_{0\leq t < 1}$ is a vector of standard $(\mathcal{F}_t^{{\xi}},\mathbb{P})$-Brownian motions.□
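The kernel $H$ above is, up to a factor free of $z$, the conditional density of $\xi^{(i)}_t$ given $\xi^{(i)}_1$. Assuming, for illustration, a Brownian construction in which $\xi^{(i)}_t$ given $\xi^{(i)}_1=x$ is Gaussian with mean $tx$ and variance $\sigma^2 m_i t(1-t)$ (our reading of (4.2); an assumption made for this sketch), the Python check below verifies that the ratio of this Gaussian density to $H(x,y,t,\sigma\sqrt{m_i})$ does not depend on $x$, which is what makes the Bayes representation (A.12) valid.

```python
import math

def H(z, y, t, m):
    # the kernel defined at the start of the proof of Proposition 4.1
    return math.exp((z * y - t * z * z / 2.0) / (m * m * (1.0 - t)))

def cond_density(y, x, t, m):
    # assumed conditional law (our reading of (4.2)): given xi_1 = x,
    # xi_t is Gaussian with mean t x and variance m^2 t (1 - t)
    var = m * m * t * (1.0 - t)
    return math.exp(-(y - t * x) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)

t, m, y = 0.4, 1.3, 0.6   # illustrative values; m stands for sigma sqrt(m_i)
ratios = [cond_density(y, x, t, m) / H(x, y, t, m)
          for x in (-2.0, -1.0, 0.0, 1.0, 2.0)]
spread = max(ratios) - min(ratios)
print(spread < 1e-12)   # -> True: the ratio is free of x
```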

A.8. Proposition 4.2

Proof. Let $\{\xi_t^{(i)}\colon i=1,\ldots,n\}_{0\leq t \leq 1}$ be the coordinates of the Poisson Liouville process $\{{\xi}_t\}_{0\leq t \leq 1}$ . The survival function of $T^{(i)}$ is

\begin{align*}\mathbb{P}(T^{(i)}>s) &= \mathbb{P}(\xi^{(i)}_s = 0) \notag\\[3pt] &= \mathbb{E}( \mathbb{P}( \xi_s^{(i)} = 0 \mid \textbf{1}\cdot {\xi}_1 )) \notag\\[3pt] &= \mathbb{E}( (1-s p_i)^{\textbf{1}\cdot {\xi}_1} ) \notag\\[3pt] &= G_{\nu}(1-s p_i).\end{align*}

For $\textbf{s}\in [0,1]^n$ , the joint survival function of $\textbf{T}$ is

\begin{align*}\mathbb{P}(T^{(i)}> s_i;\, i=1,\ldots,n)&=\mathbb{P}(\xi^{(i)}_{s_i}=0;\, i=1,\ldots,n) \notag\\[3pt] &=\mathbb{E}(\mathbb{P}(\xi^{(i)}_{s_i}=0;\, i=1,\ldots,n \mid {\xi}_1)) \notag\\[3pt] &=\mathbb{E}\biggl( \prod_{i=1}^n \mathbb{P}\big(\xi^{(i)}_{s_i}=0 \mid \xi^{(i)}_1\big) \biggr) \notag\\[3pt] &=\mathbb{E}\biggl( \prod_{i=1}^n(1-s_i)^{\xi^{(i)}_1} \biggr) \notag\\[3pt] &=\mathbb{E}\biggl( \biggl( \sum_{i=1}^n p_i(1-s_i) \biggr)^{\textbf{1}\cdot {\xi}_1} \biggr) \notag\\[3pt] &=G_{\nu}\biggl( \sum_{i=1}^n p_i(1-s_i) \biggr) \notag\\[3pt] &=G_{\nu}\biggl( 1-\sum_{i=1}^n p_i s_i \biggr),\end{align*}

which gives the statement. □
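The joint survival function can be verified by simulation. The conditioning steps of the proof correspond to the following description: given the total number of jumps $\textbf{1}\cdot{\xi}_1$, each jump independently falls in coordinate $i$ with probability $p_i$ and arrives at a uniform time on $[0,1]$. Taking $\nu$ to be a Poisson law with mean $\lambda$ (an illustrative choice), so that $G_\nu(u)=\textrm{e}^{\lambda(u-1)}$, the Python sketch below compares the empirical joint survival probability with $G_\nu\big(1-\sum_i p_i s_i\big)$.

```python
import math
import random

random.seed(11)

def sample_poisson(lam):
    # Knuth's multiplication method for Poisson variates (stdlib only)
    L, k, prod = math.exp(-lam), 0, random.random()
    while prod > L:
        k += 1
        prod *= random.random()
    return k

lam = 3.0                      # nu = Poisson(lam), G_nu(u) = exp(lam (u - 1))
p = [0.2, 0.3, 0.5]            # coordinate probabilities p_i (sum to 1)
s = [0.4, 0.6, 0.3]            # thresholds s_i in [0, 1]

N = 200_000
hits = 0
for _ in range(N):
    survived = True
    for _ in range(sample_poisson(lam)):
        i = random.choices(range(3), weights=p)[0]  # coordinate of this jump
        if random.random() < s[i]:                  # jump arrives before s_i
            survived = False
            break
    hits += survived
empirical = hits / N
exact = math.exp(-lam * sum(pi * si for pi, si in zip(p, s)))  # G_nu(1 - sum p_i s_i)
print(abs(empirical - exact) < 0.01)   # -> True
```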

Acknowledgements

The authors are grateful to an anonymous referee whose careful reading and comments led to significant improvements in this paper.

References

Bielecki, T. R., Jakubowski, J. and Nieweglowski, M. (2017). Conditional Markov chains: properties, construction and structured dependence. Stoch. Process. Appl. 127, 1125–1170.
Brody, D. C. and Hughston, L. P. (2005). Finite-time stochastic reduction models. J. Math. Phys. 46, 082101.
Brody, D. C., Davis, M. H. A., Friedman, R. L. and Hughston, L. P. (2009). Informed traders. Proc. R. Soc. London A 465, 1103–1122.
Brody, D. C., Hughston, L. P. and Macrina, A. (2007). Beyond hazard rates: a new framework for credit-risk modelling. In Advances in Mathematical Finance, eds M. C. Fu et al., pp. 231–257. Birkhäuser, Boston.
Brody, D. C., Hughston, L. P. and Macrina, A. (2008). Dam rain and cumulative gain. Proc. R. Soc. London A 464, 1801–1822.
Brody, D. C., Hughston, L. P. and Macrina, A. (2008). Information-based asset pricing. Int. J. Theor. Appl. Finance 11, 107–142.
Chaumont, L., Hobson, D. G. and Yor, M. (2001). Some consequences of the cyclic exchangeability property for exponential functionals of Lévy processes. In Séminaire de Probabilités XXXV, eds J. Azéma et al., pp. 334–347. Springer, Berlin and Heidelberg.
Fang, K.-T., Kotz, S. and Ng, K. W. (1990). Symmetric Multivariate and Related Distributions. Chapman & Hall, New York.
Fitzsimmons, P., Pitman, J. and Yor, M. (1992). Markovian bridges: construction, Palm interpretation, and splicing. In Seminar on Stochastic Processes, eds E. Çinlar et al., pp. 101–134. Birkhäuser, Boston.
Gupta, R. D. and Richards, D. St. P. (1987). Multivariate Liouville distributions. J. Multivariate Anal. 23, 233–256.
Gupta, R. D. and Richards, D. St. P. (1991). Multivariate Liouville distributions II. Prob. Math. Statist. 12, 291–309.
Gupta, R. D. and Richards, D. St. P. (1992). Multivariate Liouville distributions III. J. Multivariate Anal. 43, 29–57.
Gupta, R. D. and Richards, D. St. P. (1995). Multivariate Liouville distributions IV. J. Multivariate Anal. 54, 1–17.
Hoyle, E. and Menguturk, L. A. (2013). Archimedean survival processes. J. Multivariate Anal. 115, 1–15.
Hoyle, E., Hughston, L. P. and Macrina, A. (2011). Lévy random bridges and the modelling of financial information. Stoch. Process. Appl. 121, 856–884.
Hoyle, E., Hughston, L. P. and Macrina, A. (2015). Stable-$1/2$ bridges and insurance. In Advances in Mathematics of Finance (Banach Center Publications 104), eds A. Palczewski and Ł. Stettner, pp. 95–120. Polish Academy of Sciences, Institute of Mathematics, Warsaw.
Jakubowski, J. and Pytel, A. (2016). The Markov consistency of Archimedean survival processes. J. Appl. Prob. 53, 392–409.
Mansuy, R. and Yor, M. (2005). Harnesses, Lévy bridges and Monsieur Jourdain. Stoch. Process. Appl. 115, 329–338.