1. Introduction
Random evolutions with finite velocity have drawn the attention of many scientists in recent decades. Indeed, they provide a good alternative to diffusion processes, which often appear unsuitable for describing natural phenomena in the life sciences. For instance, the integrated telegraph random process, one of the basic models for the description of finite-velocity random motions, can be regarded as a finite-velocity counterpart of the one-dimensional Brownian motion, thanks to the relationship between the corresponding probability density functions. This explains the growing interest in this topic and the numerous papers published in recent years. Starting from the seminal papers [Reference Goldstein23] and [Reference Kac28], many scientists have devoted their research to this theme by proposing generalizations of the integrated telegraph random process (see e.g. [Reference López and Ratanov34, Reference Bshouty, Di Crescenzo, Martinucci and Zacks7, Reference Crimaldi, Di Crescenzo, Iuliano and Martinucci10, Reference Di Crescenzo, Iuliano, Martinucci and Zacks13, Reference Di Crescenzo, Martinucci, Paraggio and Zacks16, Reference Orsingher37, Reference Garra and Orsingher21]) and also applications in many fields (see e.g. [Reference Weiss52, Reference Di Crescenzo and Pellerey17, Reference Ratanov48, Reference Travaglino, Di Crescenzo, Martinucci and Scarpa51]). An in-depth analysis of the one-dimensional telegraph process can be found in the books of Kolesnik and Ratanov [Reference Kolesnik and Ratanov33] and Zacks [Reference Zacks54]. The present paper belongs to this line of research and can be regarded as a further step in the investigation of random motions.
The standard integrated telegraph process $X_t$ , $t>0$ , describes the position of a particle traveling at constant speed on the real line. The direction of the motion is reversed according to the arrival epochs of a homogeneous Poisson process. This results in exponentially distributed random times between consecutive reversals of direction of motion.
The process $X_t$ is also called a piecewise linear Markov process or Markovian fluid [Reference Asmussen2, Reference Ramaswami46, Reference Rogers49]. Piecewise linear processes were first studied in [Reference Gnedenko and Kovalenko22] and represent a subclass of the family of piecewise deterministic processes [Reference Davis11]. A piecewise deterministic process $\Xi$ is defined as $\Xi(t)=\phi_{\epsilon(\tau_n)}(t),\,\tau_n\leq t<\tau_{n+1}$ , where (1) $\epsilon = (\epsilon(t))_{t\geq0}$ is an arbitrary measurable and adapted process with values in a finite space $\{1,\ldots, N\}$ , (2) $\phi_1,\;\ldots,\;\phi_N$ are N deterministic flows, and (3) $\{\tau_n\}_{n\geq1}$ is the sequence of switching times of $\epsilon$ . Given a fixed starting point, a piecewise deterministic Markov process evolves according to the flow $\phi_i$ for an exponentially distributed random time; then a switch occurs, and the evolution is governed by another flow $\phi_j$ , $j\neq i$ , for another exponential time; then it switches again. The standard integrated telegraph process $X_t$ represents the simplest example of a piecewise linear process based on a two-state Markov process $\epsilon(t)\in\{0, 1\}$ . Its sample paths are composed of straight lines whose slopes alternate between two values. The process $X_t$ alone is not Markovian, whereas if we supply $X_t$ with a second stochastic process keeping track of the driving flow, we obtain a two-dimensional Markov process.
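As an illustrative aside (not part of the original analysis), the simplest two-flow piecewise linear dynamics described above can be simulated directly. The following Python sketch, whose function names are ours, alternates the two slopes $\pm c$ at exponential switching times and samples the position $X_t$ with a symmetric random initial velocity.

```python
import random

def telegraph_position(t, c=1.0, lam=1.0):
    """Sample X_t for the standard integrated telegraph process:
    slopes alternate between +c and -c at exponential switching times."""
    x, remaining = 0.0, t
    v = c if random.random() < 0.5 else -c   # symmetric random initial velocity
    while remaining > 0.0:
        tau = random.expovariate(lam)        # exponential inter-switching time
        dt = min(tau, remaining)
        x += v * dt
        v = -v                               # switch to the other linear flow
        remaining -= dt
    return x

random.seed(0)
samples = [telegraph_position(1.0) for _ in range(2000)]
# the particle can never leave [-c t, c t]
assert all(-1.0 - 1e-12 <= s <= 1.0 + 1e-12 for s in samples)
```

By symmetry of the initial velocity, the empirical mean of the sampled positions is close to zero.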
Piecewise linear processes whose inter-switching times are not exponentially distributed are much less studied. Some examples can be found in [Reference Di Crescenzo and Ratanov18] and [Reference Ratanov47]. Many papers in the literature consider the integrated telegraph process in more general settings, but most of them remain within the Markovian framework. Unfortunately, the assumption of exponentially distributed intertimes is not suitable for many important applications in physics, biology, and engineering, since it assigns high probability to very short intervals. Hence, special attention should be reserved for non-Markovian cases. For instance, in [Reference Di Crescenzo12], the author analyses the case in which the random times separating consecutive velocity reversals of the particle have Erlang distributions with possibly unequal parameters. Recalling that the sum of independent and identically distributed exponential random variables has an Erlang distribution, this hypothesis can be interpreted as stating that the particle undergoes a fixed number of collisions, arriving according to a Poisson process, before reversing its direction of motion. Some connections of the model with queueing, reliability theory, and mathematical finance are also presented in the same paper. The study of a one-dimensional random motion with Erlang-distributed sojourn times is also performed in [Reference Pogorui and Rodríguez-Dagnino40], where the authors apply the methodology of random evolutions to find the partial differential equations governing the particle motion and obtain a factorization of these equations. Moreover, motivated by applications in mathematical biology concerning the randomly alternating motion of micro-organisms, in [Reference Di Crescenzo and Martinucci14] the authors consider gamma-distributed random intertimes and obtain the probability law of the process, expressed in terms of series of incomplete gamma functions.
Similarly, the case of exponentially distributed random intertimes with linearly increasing parameters has been treated in [Reference Di Crescenzo and Martinucci15], thus modeling a damping behavior sometimes appearing in particle systems. We recall also the papers [Reference Pogorui and Rodríguez-Dagnino41] and [Reference Pogorui and Rodríguez-Dagnino42], where isotropic random motions in higher dimensions with, respectively, Erlang- and gamma-distributed steps are studied.
In the present paper, along the lines of [Reference Di Crescenzo and Zacks19] and [Reference Zacks54], we provide a general expression for the probability law of $X_t$ in terms of the distributions of the n-fold convolutions of the random times between motion reversals. The key idea is to relate the probability law of $X_t$ to that of the occupation time, which gives the fraction of time that the motion moved with positive velocity in [0, t]. Starting from this result, Section 3 is devoted to the explicit derivation of the probability law of the integrated telegraph process in some special cases, when the upward U and downward D random intertimes are distributed as follows: (i) both are gamma-distributed; (ii) both are Erlang-distributed; (iii) U is exponentially distributed and D gamma-distributed; (iv) U is exponentially distributed and D Erlang-distributed. In cases (i) and (ii), for some values of the parameters involved, we obtain closed-form results for the probability law of the process, which, to the best of our knowledge, are new in the field of finite-velocity random evolutions. This highlights the main advantage of the novel expression for the probability law over previously existing results: its good mathematical tractability. Hence, Section 3 offers a collection of explicit expressions for the probability law of the generalized telegraph process, which could be useful to all scholars interested in non-Markovian generalizations of the telegraph process.
In Section 4 we obtain the Laplace transform of the moment generating function $M_{X}^s(t)$ of the generalized telegraph process. This result allows us to do the following:
(a) Prove the existence of a Kac-type condition for the integrated telegraph process with identically distributed gamma intertimes. Under such a condition, the probability density function of $X_t$ converges to the probability density function of the standard Brownian motion, thus generalizing the well-known result for the standard integrated telegraph process.
(b) Provide an explicit expression for $M_{X}^s(t)$ in the case of gamma- and Erlang-distributed random times between consecutive velocity reversals.
As further validation, we also derive the moment generating function for exponentially distributed intertimes, recovering a well-known result. The first and second moments of $X_t$ under the assumption of gamma-distributed intertimes are also given.
Finally, Section 5 is devoted to the study of certain functionals of the generalized telegraph process. In particular, we provide an explicit expression for the distribution function of the square of $X_t$ , under the assumption of exponential distribution for the upward intertimes U and gamma distribution for the downward intertimes D. We recall that the square of the standard integrated telegraph process has been studied in [Reference Martinucci and Meoli35], where its relationship with the square of the Brownian motion is also stressed. See also [Reference Kolesnik and Ratanov33, Chapter 7] and [Reference Kolesnik32] for other relevant functionals of the telegraph processes.
2. The generalized telegraph process
Let $\big\{X_t;\,t\geq 0\big\}$ be a generalized integrated telegraph process. Such a process describes the position of a particle moving on the real axis with velocity $c$ or $-c$ ($c>0$), according to an independent alternating counting process $\big\{N_t;\, t\geq 0\big\}$ . The latter is governed by sequences of positive independent random times $\big\{U_{1},U_{2},\ldots\big\}$ and $\left\{D_{1},D_{2},\ldots\right\}$ , which in turn are assumed to be mutually independent. The random variable $U_{i}$ (resp. $D_i$ ), $i=1,2,\ldots$ , describes the ith random period during which the motion proceeds with positive (resp. negative) velocity. A sample path of $X_t$ with initial velocity $V_0=c$ is shown in Figure 1.
Let us denote by $V_t$ the particle velocity at time $ t\geq 0$ . Assuming that $X_0=0$ , and $V_0\in\{-c,c\}$ , with $V_0$ independent of $N_t$ , we have
where
with
and $U^{(0)}=D^{(0)}=0$ , $U^{(n)}\,:\!=\,U_{1}+\dots+U_{n}$ , $D^{(n)}\,:\!=\,D_{1}+\dots+D_{n}$ .
Hence, if the motion does not change velocity in [0, t], we have $X_t=V_0\,t$ . Otherwise, if there is at least one velocity change in [0, t], then $-c t<X_t<c t$ with probability 1.
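A direct simulation of this mechanism may help fix ideas. In the sketch below (our own helper, not from the paper) the upward and downward periods are drawn from arbitrary samplers, with gamma-distributed intertimes as an example; every sampled position satisfies $-ct\leq X_t\leq ct$, as stated above.

```python
import random

def generalized_telegraph(t, c, sample_u, sample_d, v0):
    """Position X_t of the generalized process: U-periods at velocity +c and
    D-periods at velocity -c alternate, starting with velocity v0*c."""
    x, remaining, up = 0.0, t, (v0 == 1)
    while remaining > 0.0:
        duration = sample_u() if up else sample_d()
        dt = min(duration, remaining)
        x += (c if up else -c) * dt
        remaining -= dt
        up = not up
    return x

random.seed(1)
alpha, beta, c, t = 2.0, 3.0, 1.0, 1.0
# gamma(shape alpha, rate beta) intertimes; gammavariate takes a scale parameter
sample = lambda: random.gammavariate(alpha, 1.0 / beta)
xs = [generalized_telegraph(t, c, sample, sample, random.choice([-1, 1]))
      for _ in range(2000)]
assert all(-c * t - 1e-12 <= x <= c * t + 1e-12 for x in xs)
```

Any pair of samplers for the $U_i$ and $D_i$ can be plugged in, which is precisely the non-Markovian flexibility exploited in the following sections.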
Hereafter, according to the assumptions of the standard symmetric telegraph process [Reference Kac28], we assume that the initial velocity is random, i.e.
Let $F_{U_{i}}({\cdot}) $ and $ G_{D_{i}}({\cdot}) $ be the absolutely continuous distribution functions of $U_i$ and $D_{i}$ $(i=1,2,{\dots})$ respectively, with densities $ f_{U_{i}}({\cdot}) $ and $ g_{D_{i}}({\cdot})$ , and denote by $\overline{F}_{U_{i}}({\cdot})$ and $\overline{G}_{D_{i}}({\cdot})$ their complementary distribution functions. In the sequel, we shall denote the distribution functions of $U^{(n)}$ and $D^{(n)}$ by $F_{U}^{(n)}({\cdot}) $ and $G_{D}^{(n)}({\cdot})$ , and their densities by $f_{U}^{(n)}({\cdot}) $ and $g_{D}^{(n)}({\cdot})$ , respectively. If the random variables $ U_{i} $ are independent and identically distributed for $ i=1,2,\dots, n $ , then $ F_{U}^{(n)}({\cdot}) $ is the n-fold convolution of $F_{U_{1}}({\cdot})$ , and similarly for $G_{D}^{(n)}({\cdot})$ . Moreover, we set $F_{U}^{\left(0\right)}\left(x\right)=G_{D}^{\left(0\right)}\left(x\right)=1 $ for $ x\geq 0 $ .
In order to provide the probability law of $X_t$ , we refer to the general method proposed by Zacks in [Reference Zacks53]. The key idea is to relate the probability law of $X_t$ to that of the occupation time
which gives the fraction of time that the motion moved with positive velocity in [0, t]. Indeed, for all $t\geq 0$ , $X_t$ and $W_t$ are linked by the following relationship:
Hence, denoting by
the absolutely continuous component of the probability law of $W_t$ over (0, t), we can express the probability law of $X_t$ in terms of that of $W_t$ , according to the following proposition.
Proposition 2.1. For all $t>0$ we have
and
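The relationship between $X_t$ and $W_t$ rests on the fact that the particle moves at velocity $+c$ during $W_t$ and at $-c$ during the remaining $t-W_t$, so that $X_t=cW_t-c(t-W_t)$. The following sketch (exponential intertimes for simplicity; helper names ours) tracks both quantities along each sample path and checks this identity.

```python
import random

def position_and_occupation(t, c, lam):
    """Return (X_t, W_t): position and time spent at velocity +c in [0, t]."""
    x = w = 0.0
    remaining, up = t, random.random() < 0.5
    while remaining > 0.0:
        dt = min(random.expovariate(lam), remaining)
        x += (c if up else -c) * dt
        w += dt if up else 0.0
        remaining -= dt
        up = not up
    return x, w

random.seed(2)
t, c = 1.0, 2.0
for _ in range(500):
    x, w = position_and_occupation(t, c, lam=3.0)
    # X_t = c*W_t - c*(t - W_t) on every path
    assert abs(x - (c * w - c * (t - w))) < 1e-9
```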
The distribution of $W_t$ was derived by Perry et al. (see [Reference Perry, Stadje and Zacks38], [Reference Perry, Stadje and Zacks39]) in terms of the distribution of the first time at which a compound process crosses a linear boundary. Proceeding along the lines of Lemmas 5.1 and 5.2 of [Reference Zacks54], in the following theorem we provide the probability law of $W_t$ .
Theorem 2.1. For all $t>0$ it holds that
and, for $0<x<t$ ,
where
Using Theorem 2.1 and Proposition 2.1, we finally obtain the expression for the probability law of $X_t$ .
Theorem 2.2. For all $t>0$ it holds that
and, for $-c t<x<c t$ ,
where
and
An alternative approach to deriving the probability law of the generalized integrated telegraph process is based on solving the hyperbolic system of partial differential equations satisfied by the probability density of $(X_t,V_t)$ . For instance, in [Reference Di Crescenzo and Martinucci15], for $t>0$ , $-c t<x<c t$ , $j=1,2$ , and $n=1,2,\ldots$ , the authors define the conditional densities of $(X_t,V_t)$ , joint with $\big\{T_{2n-j}\leq t< T_{2n-j+1}\big\}$ , and provide the corresponding system of partial differential equations. Unfortunately, solving such a system is so hard a task that the authors are forced to follow a different approach.
3. Special cases of the probability law of $\boldsymbol{{X}}_{\boldsymbol{{t}}}$
In this section we make use of Theorem 2.2 to obtain an explicit expression for the probability law of the motion under suitable choices of $F_{U_{i}}\!\left(\cdot\right)$ and $G_{D_{i}}\!\left(\cdot\right)$ , $i=1,2,\ldots$ . First of all, in Theorem 3.1 we assume that the random intertimes $U_{i}$ and $D_{i}$ are identically gamma-distributed. Figure 2 provides some simulated sample paths of the related integrated telegraph process for two different values of the coefficient of variation. We recall that the joint probability law of $\{(X_t, V_t), t\geq 0\}$ , under the assumption of gamma-distributed intertimes, has been expressed in [Reference Di Crescenzo and Martinucci14] in terms of a series of incomplete gamma functions. In the next theorem, we provide an expression for the probability law of $X_t$ in terms of the generalized Wright function. The latter is a simple and mathematically tractable expression, and, as shown in Propositions 3.2 and 3.3, it allows us to obtain closed-form results for the probability law of the generalized integrated telegraph process for fixed values of the parameters involved. These results appear to be of great interest owing to the absence of similar outcomes in the literature for the one-dimensional case.
Theorem 3.1. For all $ i=1,2,\dots$ , let $ U_{i} $ and $ D_{i} $ be gamma-distributed with shape parameter $ \alpha>0 $ and rate parameter $\beta>0$ , and set
where, for $z\in\mathbb{C}$ , $a_{i},b_{j}\in{\mathbb C}$ , $\alpha_{i},\beta_{j}\in{\mathbb{R}}$ , $\alpha_{i},\beta_{j}\neq 0$ ,
is the generalized Wright function. Then, for $t>0$ , we have
with $\gamma\!\left(s,x\right)$ denoting the lower incomplete gamma function, and, for $-c t<x<c t$ ,
Proof. The first result is due to Equation (2). Under the given assumptions, recalling that
and by setting
(cf. Equation 6.5.4 of [Reference Abramowitz and Stegun1]), from Equation (4) we get
Let us focus on the first term on the right-hand side of (11). Recalling that
and using Equation (7), we obtain
By applying similar reasoning for each term in the right-hand side of (11), we finally get
Similarly, in the case of negative initial velocity, we have
The proof finally follows from recalling the assumption of random initial velocity.
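For numerical work, the generalized Wright function appearing in the density can be evaluated by truncating its defining series. The sketch below (our own, assuming the standard series definition of ${}_p\Psi_q$) uses a simple reduction to the modified Bessel function $I_0$ as a sanity check.

```python
import math

def gen_wright(a, b, z, terms=60):
    """Truncated series of the generalized Wright function
    sum_{k>=0} prod_i Gamma(a_i + alpha_i k) / prod_j Gamma(b_j + beta_j k) * z^k / k!,
    where a = [(a_i, alpha_i), ...] and b = [(b_j, beta_j), ...].
    For large parameters, math.lgamma should replace math.gamma to avoid overflow."""
    total = 0.0
    for k in range(terms):
        num = math.prod(math.gamma(ai + al * k) for ai, al in a)
        den = math.prod(math.gamma(bj + be * k) for bj, be in b)
        total += (num / den) * z ** k / math.factorial(k)
    return total

# sanity check: with no upper parameters and b = [(1, 1)] the series reduces to
# sum z^k / (k!)^2 = I_0(2 sqrt(z)); here I_0(1) ~ 1.266066
val = gen_wright([], [(1.0, 1.0)], 0.25)
assert abs(val - 1.2660658) < 1e-5
```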
Hereafter we analyze the behavior of the density $f\!\left(x,t\right)$ given in Equation (8) at the extreme points of the interval $({-}c t, c t)$ , for any fixed t.
Proposition 3.1. For $ t,\,\alpha,\,\beta>0 $ , it holds that
Proof. Because of the symmetry properties of $X_t$ , it is enough to analyze the behavior of $f\!\left(x,t\right)$ as $x\rightarrow (ct)^{-}$ .
Let us fix $t,\,\alpha,\,\beta>0 $ and assume $-c t<x<c t$ . By Corollary 1.1 of [Reference Kilbas, Saigo and Trujillo30], the generalized Wright functions appearing in the density (8) are entire functions for all nonnegative integers k. Hence they are continuous, and, recalling their analytical expression (7), we obtain
so that, by Equation (12), after simple calculations we get
In the following proposition we provide a closed-form expression for the probability density function (8) in the case $\alpha=1/2$ .
Proposition 3.2. In the case $\alpha=1/2$ the probability density function (8) has the following expression:
Proof. For $\alpha=1/2$ and recalling that, for $ n\in\mathbb{N}$ ,
one can easily prove that $A^{k}_{2}\!\left(x,t\right)=A^{k+1}_{0}\left(x,t\right)$ , where the function $A_l^{k}\!\left(x,t\right)$ has been defined in (6). Hence we obtain
Moreover, by setting
and using Equation (15), we get
Finally, the proof follows from substituting the previous expression in (16) and recalling the definition of z.
Starting from Theorem 3.1, in the next corollary we provide the expression for the probability law of $X_t$ under the assumption of identically Erlang-distributed random intertimes $U_{i}$ and $D_{i} $ , $ i=1,2,\dots$ . Such results are in agreement with those obtained in [Reference Di Crescenzo and Zacks19].
Corollary 3.1. If $U_{i} $ and $D_{i}$ , $ i=1,2,\dots,$ both have Erlang distribution with parameters $m\in {\mathbb N}$ and $\beta>0$ , then for $t>0$ , we have
and for $-c t<x<c t$ , we have
where
is a two-index pseudo-Bessel function.
Proof. The result follows from Theorem 3.1 by letting $ \alpha=m\in\mathbb{N} $ . For instance, for the first term in (8), by (7), and recalling Equation (17), we have
Using similar reasoning, we obtain
Hence, straightforward calculations and the identities (10) and (12) give
By setting
(cf. Equation 6.5.11 of [Reference Abramowitz and Stegun1]) and recalling that
(cf. Equations 6.5.2 and 6.5.13 of [Reference Abramowitz and Stegun1]), we have
where the last equality comes from the symmetry property $ S^{\left(k,r\right)}_{\left(i,j\right)}\!\left(x,y\right)=S^{\left(r,k\right)}_{\left(\,j,i\right)}\!\left(y,x\right) $ of the pseudo-Bessel function.
Some properties of the two-index pseudo-Bessel function (17) can be found in Remark 3.2 of [Reference Di Crescenzo12].
In the following proposition we provide a closed-form expression for the probability density function (8) in the case $\alpha=2$ .
Proposition 3.3. In the case $\alpha=2$ the probability density function (8), for $-c t<x<c t$ , has the following expression:
where, for $ \nu\in\mathbb{R}$ ,
is the Bessel function of the first kind, and
is the modified Bessel function of the first kind.
Proof. Note that, by (20) and (21),
Hence, setting $\tilde{x}\,:\!=\,\frac{ct+x}{2c}$ and $y\,:\!=\,\frac{ct-x}{2c}$ , we can write the two-index pseudo-Bessel function (17) as
where the last equality follows from recalling that $ I_{-n}\!\left(x\right)=I_{n}\!\left(x\right) $ and $ J_{-n}\!\left(x\right)=\left({-}1\right)^{n}J_{n}\!\left(x\right)$ .
Moreover, for $j\geq 1$ ,
Hence,
Similarly, we have
and for $j\geq 1$ ,
The proof finally follows from Corollary 3.1 and the definition of $\tilde{x}$ and y.
Some plots of the probability density function (8) are shown in Figure 3 for different values of the parameters involved.
The following corollary deals with the case in which the random times $U_{i}$ are exponentially distributed, whereas the random variables $D_{i}$ are gamma-distributed, $i=1,2,\ldots$ .
Corollary 3.2. For $i=1,2,\ldots$ , let us assume that the random times $U_{i}$ are exponentially distributed with parameter $ \lambda>0 $ and that the random variables $D_{i}$ have gamma distribution with shape parameter $\alpha>0$ and rate parameter $\beta>0$ . For $t>0$ , it holds that
and, for $-c t<x<c t$ ,
where
is the Wright function.
Proof. Under the assumptions concerning the distribution of $U_{i}$ and $D_i$ , $i=1,2,\ldots$ , for $x>0$ it holds that
where $ p\!\left(\,j,\eta\right) $ denotes the Poisson probability mass function with mean $ \eta $ evaluated at j, and
Substituting Equations (24) and (25) in Equation (4), we get
Hence, recalling Equation (10), we obtain
From the definition (23), it finally results that
Substituting Equations (24) and (25) in Equation (5), and proceeding in a similar way, we easily obtain the expression for $f_{-c}\!\left(x,t\right)$ , so that the thesis immediately follows from recalling the assumption of random initial velocity.
As an immediate consequence of Corollary 3.2, we obtain the following result.
Proposition 3.4. If the random times $U_{i} $ are exponentially distributed with parameter $ \lambda>0 $ and the $D_{i}$ are Erlang-distributed with parameters $m\in {\mathbb N}$ and $\beta>0$ , it holds that
and
Figure 4 shows some plots of the probability density function given in Proposition 3.4 for different values of the parameters involved.
4. The moment generating function
Let us consider the moment generating function of the generalized telegraph process,
Denoting by
the Laplace transform of an arbitrary function w(t), in this section we provide some results on the Laplace transform ${\mathcal{L}}_p\big[M_{X}^s\big]$ of the moment generating function. Note that ${\mathcal{L}}_p\big[M_{X}^s\big]=\frac{1}{p} \mathbb{E}\Big[{\textrm{e}}^{s X_{e_p}}\Big]$ , where $e_p$ is an exponential random variable with parameter p, independent of $X_t$ .
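The stated identity follows from Fubini's theorem, since $e_p$ has density $p\,{\textrm{e}}^{-pt}$. As a minimal numerical sanity check (our own, in the degenerate case of a motion that never reverses, $X_t=ct$, where both sides equal $1/(p-sc)$ for $p>sc$):

```python
import math
import random

# check  L_p[M_X^s] = (1/p) E[exp(s X_{e_p})]  in the degenerate case X_t = c t,
# where the Laplace transform of M_X^s(t) = exp(s c t) is 1 / (p - s c), p > s c
random.seed(3)
c, s, p, n = 1.0, 0.5, 2.0, 200_000
acc = 0.0
for _ in range(n):
    e_p = random.expovariate(p)      # exponential time with parameter p
    acc += math.exp(s * c * e_p)     # exp(s * X_{e_p}) with X_t = c t
monte_carlo = acc / (n * p)
exact = 1.0 / (p - s * c)
assert abs(monte_carlo - exact) < 0.01
```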
The following theorem, providing an explicit expression for ${\mathcal{L}}_p\big[M_{X}^s\big]$ , represents a useful tool for further analysis. For instance, it allows us to prove the existence of a Kac-type condition under which the probability density function of the generalized integrated telegraph process, with identically distributed gamma intertimes, converges to that of the standard Brownian motion. We point out the novelty of such a result in the field of finite-velocity random evolutions, since it generalizes the one holding in the case of the standard telegraph process.
Theorem 4.1. If the random variables $U_i$ $(D_i)$ , $i\geq 1$ , are independent copies of an absolutely continuous random variable U (D), for $t>0$ and under the assumption $\mathcal{L}_{p-sc}[f_{U}]\cdot\mathcal{L}_{p+sc}[g_{D}]<1$ , the Laplace transform of the moment generating function of the generalized telegraph process is given by
Proof. Let us denote by $M_{c}^s(t)$ and $M_{-c}^s(t)$ the conditional moment generating functions of the integrated telegraph process under fixed initial velocity $V_{0}=\pm c$ , i.e.
In the case $v_{0}=c$ , by Equations (2) and (4), we get
We now consider the Laplace transform of the moment generating function
Denoting by $w\star h (t)$ the convolution between two functions w and h, and exploiting the properties of the Laplace transform, we obtain
Hence, rearranging the terms, we have
Under the assumption $ \mathcal{L}_{p-sc}[f_{U}]\cdot\mathcal{L}_{p+sc}[g_{D}]<1 $ , we can express the Laplace transform of the conditional moment generating function in terms of the Laplace transform of the densities of the random times $ U_{i} $ and $ D_{i}$ :
In the case $ v_{0}=-c$ , through similar reasoning we obtain
so that the claimed result immediately follows under the assumption of random initial velocity.
Starting from the previous theorem, the following proposition provides the Laplace transform of the moment generating function of the integrated telegraph process, under the assumption of identically distributed gamma intertimes $U_{i}$ and $D_{i}$ .
Proposition 4.1. Let us assume that both the random variables $U_i$ and $D_i$ ( $i\geq 1$ ) have gamma distribution with shape parameter $\alpha>0$ and rate parameter $\beta>0$ . Then, recalling Equations (26) and (27), for $p>-\beta+\sqrt{\beta^2 +c^2 s^2}$ , we have
Proof. The proof can be immediately obtained from Equation (28) and by recalling the following expression for the Laplace transform of the gamma density function with shape parameter $\alpha>0$ and rate parameter $\beta>0$ :
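This transform is the standard $\mathcal{L}_p[f](p)=\left(\beta/(\beta+p)\right)^{\alpha}$ for $p>-\beta$; a crude numerical cross-check (midpoint quadrature, our own sketch) is immediate:

```python
import math

def gamma_pdf(x, alpha, beta):
    """Gamma density with shape alpha and rate beta."""
    return beta ** alpha * x ** (alpha - 1) * math.exp(-beta * x) / math.gamma(alpha)

def laplace_gamma_numeric(alpha, beta, p, upper=60.0, n=100_000):
    """Midpoint-rule approximation of int_0^inf exp(-p x) f(x) dx."""
    h = upper / n
    return h * sum(math.exp(-p * (i + 0.5) * h) * gamma_pdf((i + 0.5) * h, alpha, beta)
                   for i in range(n))

alpha, beta, p = 2.5, 3.0, 1.0
closed_form = (beta / (beta + p)) ** alpha
assert abs(laplace_gamma_numeric(alpha, beta, p) - closed_form) < 1e-4
```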
It is well known (see, for instance, [Reference Kolesnik and Ratanov33, Section 2.6]) that under Kac scaling conditions the symmetric telegraph process weakly converges to the Brownian motion. The following theorem allows us to obtain a similar result for the symmetric telegraph process driven by gamma components.
Theorem 4.2. Let us consider the integrated telegraph process $X_t$ under the assumption of gamma-distributed intertimes $U_i$ and $D_i$ ( $i\geq 1$ ) with shape parameter $\alpha>0 $ and rate parameter $ \beta>0$ . Under a Kac-type scaling condition, the probability density function of $X_t$ converges to that of the standard Brownian motion, i.e., recalling Equation (1),
Proof. By Proposition 4.1, and recalling that
for $p>s^2/2$ , we have
Hence, by making use of the binomial series, we obtain
The proof finally follows from recalling that the inverse Laplace transform of $\frac{2}{2 p-s^2}$ , $\displaystyle{p>{s^2}/{2}}$ , is given by ${\textrm{e}}^{s^2 t/2}$ .
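The convergence can also be observed numerically. The sketch below (our own) uses the classical case $\alpha=1$, i.e. exponential intertimes under the Kac scaling $\lambda=c^2$ with $c$ large, and checks that the sample mean and variance of $X_1$ approach those of a standard normal random variable.

```python
import random

def telegraph(t, c, lam):
    """X_t for the standard telegraph process (exponential reversal times)."""
    x, remaining = 0.0, t
    v = c if random.random() < 0.5 else -c
    while remaining > 0.0:
        dt = min(random.expovariate(lam), remaining)
        x += v * dt
        v, remaining = -v, remaining - dt
    return x

random.seed(4)
c = 20.0                                   # Kac scaling: c -> inf with lam = c^2
xs = [telegraph(1.0, c, c * c) for _ in range(4000)]
mean = sum(xs) / len(xs)
var = sum((x - mean) ** 2 for x in xs) / len(xs)
assert abs(mean) < 0.1 and abs(var - 1.0) < 0.15   # X_1 ~ N(0, 1) in the limit
```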
The following proposition provides the explicit expression for the moment generating function of the integrated telegraph process under the same assumptions as in Proposition 4.1.
Proposition 4.2. If the random variables U and D both have gamma distribution with shape parameter $ \alpha>0 $ and rate parameter $ \beta>0 $ , for $t>0$ and $s\in {\mathbb R}$ we have
where
is the confluent hypergeometric function and
denotes the Humbert series (cf. [Reference Humbert26]).
Proof. The proof follows from Proposition 4.1. Let us set
so that Equation (28) can be expressed as
From the properties of the Laplace transform, Equation (35a) reads
where the last equality follows from the integral representation of the Kummer hypergeometric function (cf. 13.2.1 of [Reference Abramowitz and Stegun1]). Analogously, Equation (35b) can be written as
Moreover, we have
whereas
By Equation 3.383.2 of [Reference Jeffrey and Zwillinger27], the previous formula becomes
where $I_{\nu}\left(z\right)$ has been defined in Equation (21). It is worth noting that the term $A_{4}$ can be also expressed in terms of the moment generating function $\chi(s)\,:\!=\,E({\textrm{e}}^{s Y})$ of a random variable Y characterized by a beta distribution with equal parameters given by $n\alpha$ . Indeed,
where
is the two-parametric Mittag-Leffler function (see, for instance, Equation 4.1.1 of [Reference Gorenflo, Kilbas, Mainardi and Rogosin24]).
From Equations (37), (38), and (35e), we have
Aiming to evaluate the previous equation, we set
so that Equation (40) becomes
Note that
The inner integral can be evaluated by making use of the formula 7.11.1.5 in [Reference Prudnikov, Brychkov and Marichev44], so that
where we have exploited the integral representation for the Humbert function $ \Phi_{2} $ (cf. Equation (4.6) of [Reference Brychkov4]). From this relationship, and recalling Equation 6.1.18 of [Reference Abramowitz and Stegun1], we finally get
Hence, by Equation (47), from Equation (42) it follows that
From the relation (3.19) of [Reference Brychkov and Saad6], i.e.
we have
A similar argument can be applied to Equation (43), yielding
Finally, the application of the decomposition formula for the Humbert function $ \Phi_{2} $ (cf. (2.45) of [Reference Choi and Hasanov9]),
yields
Hence, the initial velocity being random, Equation (36) becomes
The previous expression can be simplified by taking into account that, by (10) and (12),
and that, by making use of the integral representation for the Humbert function $ \Phi_{2} $ (cf. Equation (4.5) of [Reference Brychkov4]), we have
and
The final part of the proof is devoted to the investigation of the convergence of the series of Humbert functions appearing in the right-hand side of Equation (32). We start from the integral representation of the Humbert function $\Phi_{2}$ (cf. Equation (4.5) of [Reference Brychkov4]),
and recall the following bound for the confluent hypergeometric function $_{1}F_{1}$ (cf. Equation 3.5 of [Reference Carlson8]):
Hence, it follows that
Using Equation (4.1) of [Reference Neuman36], we finally obtain
so that
The other series in (32) involving the Humbert functions can be treated in a similar way.
As an immediate consequence of Proposition 4.2, we obtain the following result.
Proposition 4.3. If the random variables U and D both have Erlang distribution with shape parameter $m\in {\mathbb N}$ and rate parameter $ \beta>0 $ , for $t>0$ and $s\in {\mathbb R}$ we have
Proof. Let $ \alpha=m\in\mathbb{N} $ in Equation (32). The discrete component can be obtained from Equations (18) and (19). Aiming to evaluate the continuous component, let us recall that (cf. Equation (3.8) of [Reference Brychkov, Kim and Rathie5])
where $ L^{\alpha}_{n} $ is the generalized Laguerre polynomial of degree n, and that (cf. Equation (7.11.1.13) of [Reference Prudnikov, Brychkov and Marichev44])
Hence
Similar reasoning can be applied for the other functions involved in Equation (32). After some cumbersome calculations, and by Equations 5.3.10.20 of [Reference Brychkov3] and 2.19.3.6 of [Reference Prudnikov, Brychkov and Marichev43], the moment generating function (32) reads
Finally, the result follows from straightforward calculations.
As further validation of Proposition 4.3, in the following theorem we evaluate the moment generating function in the case of identically exponentially distributed random intertimes, recovering a well-known result (see, for instance, Section 5 of [Reference Kolesnik31]).
Theorem 4.3. If the random variables U and D both have exponential distribution with parameter $ \beta>0 $ , for $t>0$ and $s\in {\mathbb R}$ we have
In the following proposition we give expressions for the first and second moment of $X_t$ under the assumption of gamma-distributed intertimes.
Proposition 4.4. If $U_{i}$ and $D_{i}$ $(i=1,2,\ldots)$ are gamma-distributed with shape parameter $\alpha>0$ and rate parameter $\beta>0$ , for $t>0$ we have
Proof. The proof follows immediately from Equation (32).
Figure 5 shows some plots of the moment $E(X^2_t)$ given in Equation (50) for different values of $(\alpha,\beta)$ .
Proposition 4.5. For $\alpha=1$ and $t>0$ , Equation (50) reduces to the following well-known result (see, for instance, Equation (26) of [Reference Kolesnik31]):
5. Some results on the squared telegraph process
In this section we study the probability law of the stochastic process $Q_t\,:\!=\,X^2_t$ , $t>0$ , defined as the square of the generalized telegraph process. Hence, $Q_t$ describes the square of the position of a particle performing a telegraph motion. As stressed in [Reference Martinucci and Meoli35], the sample paths of $Q_t$ exhibit motion reversals of the particle both when an event of the alternating counting process $N_t$ occurs and when the underlying telegraph process $X_t$ reaches the origin, which acts as a reflecting boundary. The interest in the functional $Q_t$ arises in the context of establishing a link between finite-velocity random motions and diffusion processes. For instance, since the standard Brownian motion is, in a suitable sense, the limit of the telegraph process, the sum of squared telegraph processes can be treated as the analogue, in the setting of finite-velocity random motions, of the squared Bessel process.
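Before specializing the intertime distributions, a quick simulation (exponential intertimes only; helper names ours) illustrates the support of $Q_t$: it is concentrated on $[0, c^2t^2]$, with an atom at $c^2t^2$ corresponding to paths with no reversal in $[0,t]$.

```python
import random

def squared_telegraph(t, c, lam):
    """Q_t = X_t^2 for the standard telegraph process (exponential intertimes)."""
    x, remaining = 0.0, t
    v = c if random.random() < 0.5 else -c
    while remaining > 0.0:
        dt = min(random.expovariate(lam), remaining)
        x += v * dt
        v, remaining = -v, remaining - dt
    return x * x

random.seed(5)
t, c, lam = 1.0, 2.0, 3.0
qs = [squared_telegraph(t, c, lam) for _ in range(3000)]
# Q_t lies in [0, c^2 t^2]; the endpoint is hit with probability exp(-lam * t)
assert all(0.0 <= q <= (c * t) ** 2 + 1e-9 for q in qs)
assert max(qs) > 0.9 * (c * t) ** 2
```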
Under the assumption of exponential distribution of the random variables $U_i$ $(i=1,2,\ldots)$ , the following proposition provides the distribution of $Q_t$ , for all $t>0$ .
Proposition 5.1. For all $i=1,2,\ldots,$ let us assume that the random variables $U_i$ have exponential distribution with parameter $\lambda>0$ , and let $G_{D_{i}}({\cdot})$ be the distribution function of the random variables $D_i$ . Denote by $G_{D}^{(n)}({\cdot})$ , $n\geq 1$ , the distribution function of $D^{(n)}\,:\!=\,D_{1}+\dots+D_{n}$ . For $t>0$ and $0\leq z\leq c^2 t^2$ , the distribution function of $Q_t$ is given by
where
and
Proof. Let $N_t$ , $t\geq 0$ , be a Poisson process with parameter $ \lambda$ , and denote by
the compound Poisson process corresponding to $N_t$ . Hence,
Therefore, by Theorem 5.1 of [Reference Zacks54], for $ -c t\leq x\leq ct $ , we have
so that the result immediately follows.
Theorem 5.1. If the random variables $U_i$ $(i=1,2,\ldots)$ have exponential distribution with parameter $\lambda>0$ and the random variables $D_i$ $(i=1,2,\ldots)$ are gamma-distributed with shape parameter $ \alpha>0 $ and rate parameter $ \beta>0 $ , for $t>0$ and $0\leq z\leq c^2 t^2$ , we have
where $W_{\rho,\theta}(z)$ is the Wright function (23).
Proof. By Proposition 5.1, assuming that the random variables $D_i$ $(i=1,2,\ldots)$ have gamma distribution with shape parameter $\alpha>0$ and rate parameter $\beta>0$ , it holds that
so that, for $0\leq z\leq c^2 t^2$ , we have
Hence, from (12), Equation (51) becomes
so that the proof immediately follows.
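Although equation (23) is not reproduced in this excerpt, the Wright function appearing in Theorem 5.1 can be evaluated numerically by truncating its standard defining series, $W_{\rho,\theta}(z)=\sum_{k\geq 0} z^{k}/(k!\,\Gamma(\rho k+\theta))$ , convergent for $\rho>-1$ . A minimal sketch, assuming this is the form meant by (23):

```python
from math import gamma, factorial

def wright(z, rho, theta, terms=60):
    """Truncated series for the Wright function
    W_{rho,theta}(z) = sum_{k>=0} z^k / (k! * Gamma(rho*k + theta)),
    convergent for rho > -1 (here rho, theta >= 0 to avoid Gamma poles)."""
    return sum(z**k / (factorial(k) * gamma(rho * k + theta))
               for k in range(terms))

# Sanity check: for rho = 0, theta = 1 every Gamma factor equals 1,
# so the series collapses to the exponential series.
print(wright(1.5, 0.0, 1.0))  # ≈ exp(1.5)
```

The truncation level is an ad hoc choice; for large $|z|$ more terms are needed.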
Remark 5.1. Starting from Equation 6.5.29 of [Reference Abramowitz and Stegun1], the incomplete gamma function in (52) can be expressed in terms of the two-parametric Mittag-Leffler function (39). The latter, by Equation 4.4.6 of [Reference Gorenflo, Kilbas, Mainardi and Rogosin24], can be written in terms of the Riemann–Liouville fractional integral of a suitable function (63).
Hence, recalling that the generalized Prabhakar fractional integral $\mathcal{E}^{\omega,\,\rho,\,\kappa}_{\alpha,\,\beta;\,c^{+}}f(x)$ (see e.g. [Reference Srivastava and Tomovski50] and also Appendix C for some details) can be expressed as a series of fractional integrals (cf. Theorem 2.1 of [Reference Fernandez, Baleanu and Srivastava20]), Equation (51) admits, for $t>0$ , the following alternative expression:
Appendix A. Proof of Theorem 4.3
The present section is devoted to the proof of Theorem 4.3.
Let us set $ m=1 $ in the statement of Theorem 4.3. The discrete component simplifies to
From Equations 7.11.1.13 of [Reference Prudnikov, Brychkov and Marichev44] and 13.4.4 of [Reference Abramowitz and Stegun1], we obtain
Let us set
Note that, by Equations 13.1.27 of [Reference Abramowitz and Stegun1], 13.6.9 of [Reference Abramowitz and Stegun1], and 7.11.1.5 of [Reference Prudnikov, Brychkov and Marichev44], we have
By interchanging the order of summation in the previous formula, we get
From Equation 6.6.1.1 of [Reference Prudnikov, Brychkov and Marichev44], and taking into account Equation 5.8.3.4 of [Reference Prudnikov, Brychkov and Marichev43] and the first equation in 10.2.13 of [Reference Abramowitz and Stegun1], we obtain
Similarly, from Equation (54), it follows that
By Equation 6.6.1.6 of [Reference Prudnikov, Brychkov and Marichev44], the first term in Equation (58) reads
whereas, by Equations 6.6.1.1 and 7.11.1.7 of [Reference Prudnikov, Brychkov and Marichev44], the second term in Equation (58) equals
Finally, Equations 5.8.3.4 and 5.8.3.6 of [Reference Prudnikov, Brychkov and Marichev43] and Equation 7.11.1.13 of [Reference Prudnikov, Brychkov and Marichev44] lead to
Equation (55) can easily be simplified with the help of Equation 6.6.1.1 of [Reference Prudnikov, Brychkov and Marichev44], so that
Since
by Equation 5.3.5.5 of [Reference Prudnikov, Brychkov and Marichev44], we have
The inner series of Equation (59), by Equations 6.6.1.1 of [Reference Prudnikov, Brychkov and Marichev44] and 13.6.9 of [Reference Abramowitz and Stegun1], reduces to
Finally, recalling Equation 22.3.9 of [Reference Abramowitz and Stegun1], we get
Hence, after some calculations, the moment generating function can be expressed as
Let us now note that by Equation 13.6.9 of [Reference Abramowitz and Stegun1] and the binomial theorem,
Hence, from the formulas 48.19.3 of [Reference Hansen25] and 22.7.30 of [Reference Abramowitz and Stegun1], Equation (60) becomes
Finally, by taking into account Equations 6.5.29, 9.6.51, and 22.10.14 of [Reference Abramowitz and Stegun1], Equation 6.8.1.3 of [Reference Brychkov3], and the formulas 3.38.1.1 of [Reference Prudnikov, Brychkov and Marichev45] and 5.2.3.1 of [Reference Prudnikov, Brychkov and Marichev43], after some algebraic manipulations we get
The proof follows from the formula 6.5.12 of [Reference Abramowitz and Stegun1], after straightforward calculations.
Appendix B. Proof of Proposition 4.5
In the present section we provide the proof of Proposition 4.5.
By Equation 7.11.2.14 of [Reference Prudnikov, Brychkov and Marichev44], and recalling Equation 6.5.2 of [Reference Abramowitz and Stegun1] and Equation (39), we have
Hence, from Equations 4.12.1.a and 3.9.2 of [Reference Gorenflo, Kilbas, Mainardi and Rogosin24], Equation (61) can be written as
where $\Gamma(\delta, z)$ denotes the upper incomplete gamma function, and

$E_{\alpha,\beta}(z)=\sum_{k=0}^{\infty}\dfrac{z^{k}}{\Gamma(\alpha k+\beta)}, \qquad \alpha,\beta>0,$

is the Mittag-Leffler function (see, for instance, Equation 3.1.1 of [Reference Gorenflo, Kilbas, Mainardi and Rogosin24]).
Recalling Equation 3.2.1 of [Reference Gorenflo, Kilbas, Mainardi and Rogosin24], the formula (62) for $\alpha=1$ becomes
Hence, the proof immediately follows from Equation (50) and recalling Equations 7.11.2.20, 7.11.2.38, and 7.11.2.56 of [Reference Prudnikov, Brychkov and Marichev44].
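The two-parameter Mittag-Leffler function entering (62) is straightforward to evaluate numerically by series truncation. A minimal sketch (the truncation level is an ad hoc choice), checked against the classical special cases $E_{1,1}(z)=e^{z}$ and $E_{2,1}(z)=\cosh\sqrt{z}$ :

```python
from math import gamma

def mittag_leffler(z, alpha, beta, terms=60):
    """Truncated series for the two-parameter Mittag-Leffler function
    E_{alpha,beta}(z) = sum_{k>=0} z^k / Gamma(alpha*k + beta)."""
    return sum(z**k / gamma(alpha * k + beta) for k in range(terms))

# Classical special cases used as sanity checks:
print(mittag_leffler(2.0, 1.0, 1.0))  # E_{1,1}(2) = exp(2)
print(mittag_leffler(4.0, 2.0, 1.0))  # E_{2,1}(4) = cosh(2)
```

For large arguments the terms grow before decaying, so in production code a dedicated routine with error control would be preferable to naive truncation.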
Appendix C. Some remarks on the fractional calculus
For any sufficiently well-behaved function $f$ , the Riemann–Liouville fractional integral $ I_{a^{+}}^{\alpha}f $ of order $ \alpha>0 $ is defined as

$\left(I_{a^{+}}^{\alpha}f\right)(x)\,:\!=\,\dfrac{1}{\Gamma(\alpha)}\displaystyle\int_{a}^{x}(x-u)^{\alpha-1}f(u)\,\mathrm{d}u,\qquad x>a.$
Note that the values of $I_{a^{+}}^{\alpha}f\left(x\right)$ for $\alpha>0$ are finite if $x>a$ , although the limit of $I_{a^{+}}^{\alpha}f\left(x\right)$ as $x\rightarrow a^{+}$ , when it exists, may be infinite.
The fundamental property of the fractional integrals is the additive index law (semigroup property), according to which

$I_{a^{+}}^{\alpha}\,I_{a^{+}}^{\beta}f=I_{a^{+}}^{\alpha+\beta}f,\qquad \alpha,\beta>0.$
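On power functions the Riemann–Liouville integral acts in closed form, $I_{a^{+}}^{\alpha}(u-a)^{\mu}=\frac{\Gamma(\mu+1)}{\Gamma(\mu+\alpha+1)}(x-a)^{\mu+\alpha}$ , which allows the semigroup property to be checked through the resulting coefficients. A minimal sketch with arbitrarily chosen orders:

```python
from math import gamma

def rl_power_coeff(mu, alpha):
    """Coefficient in the Riemann-Liouville power rule:
    I_{a+}^alpha (u - a)^mu = C * (x - a)^(mu + alpha),
    with C = Gamma(mu + 1) / Gamma(mu + alpha + 1)."""
    return gamma(mu + 1.0) / gamma(mu + alpha + 1.0)

# Semigroup check on f(u) = (u - a)^mu: applying I^b and then I^a must
# produce the same coefficient as applying I^(a+b) once.
mu, a, b = 0.5, 0.3, 0.7
lhs = rl_power_coeff(mu, b) * rl_power_coeff(mu + b, a)
rhs = rl_power_coeff(mu, a + b)
print(lhs, rhs)  # equal up to rounding
```

The agreement is exact here because the intermediate Gamma factors cancel telescopically, mirroring the analytic proof of the index law.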
Among the various operators of fractional integration studied in the recent literature, Srivastava and Tomovski [Reference Srivastava and Tomovski50] introduced the generalized Prabhakar fractional integral defined, for $\rho, \omega \in {\mathbb C}$ , $\Re(\alpha)>\max\{0, \Re( \kappa)-1\}$ , $\min\{\Re( \kappa), \Re( \beta) \}>0$ , as
This operator contains in the kernel the generalized Mittag-Leffler function $E_{\alpha,\beta}^{\rho,\kappa} $ , given by
Note that, in the special case $\omega=0$ , the integral operator $\mathcal{E}^{\omega,\,\rho,\,\kappa}_{\alpha,\,\beta;\,c^{+}}f(x)$ reduces to the right-handed Riemann–Liouville fractional integral operator (63).
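The generalized Mittag-Leffler function can likewise be evaluated by series truncation. The sketch below assumes the Srivastava–Tomovski form $E_{\alpha,\beta}^{\rho,\kappa}(z)=\sum_{n\geq 0}\frac{(\rho)_{\kappa n}}{\Gamma(\alpha n+\beta)}\frac{z^{n}}{n!}$ , with generalized Pochhammer symbol $(\rho)_{x}=\Gamma(\rho+x)/\Gamma(\rho)$ (the exact display is not reproduced in this excerpt, so this form is an assumption):

```python
from math import gamma, factorial

def pochhammer(g, x):
    """Generalized Pochhammer symbol (g)_x = Gamma(g + x) / Gamma(g)."""
    return gamma(g + x) / gamma(g)

def gen_mittag_leffler(z, alpha, beta, rho, kappa, terms=40):
    """Truncated series for the generalized Mittag-Leffler function
    E^{rho,kappa}_{alpha,beta}(z), assumed of Srivastava-Tomovski form:
    sum_{n>=0} (rho)_{kappa*n} z^n / (Gamma(alpha*n + beta) * n!)."""
    return sum(pochhammer(rho, kappa * n) * z**n
               / (gamma(alpha * n + beta) * factorial(n))
               for n in range(terms))

# For rho = kappa = 1 one has (1)_n = n!, so the series reduces to the
# two-parameter Mittag-Leffler function E_{alpha,beta}(z).
print(gen_mittag_leffler(1.0, 1.0, 1.0, 1.0, 1.0))  # reduces to exp(1)
```

This reduction for $\rho=\kappa=1$ mirrors the remark above that the operator collapses to simpler known objects in special parameter regimes.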
A series formula for the generalized Prabhakar integral defined by (64) is provided in Theorem 2.1 of [Reference Fernandez, Baleanu and Srivastava20]. Indeed, for any interval $(c,d)\subset {\mathbb R}$ and any function $f\in L^1(c,d)$ , we have
Acknowledgements
B. Martinucci and A. Meoli acknowledge the support of the GNCS group of the Istituto Nazionale di Alta Matematica (INdAM) and of MIUR–PRIN 2017, Project Stochastic Models for Complex Systems (no. 2017JFFHSH). The authors thank the anonymous referees for their useful comments, which improved the paper.
Funding information
There are no funding bodies to thank in relation to the creation of this article.
Competing interests
There were no competing interests to declare during the preparation or publication of this article.