
Reliability and optimal age replacement policy of a system subject to shocks following a Markovian arrival process

Published online by Cambridge University Press:  14 February 2025

Dheeraj Goyal*
Affiliation:
Centre for Intelligent Multidimensional Data Analysis Limited and Indian Institute of Technology Kanpur
Min Xie*
Affiliation:
Centre for Intelligent Multidimensional Data Analysis Limited and City University of Hong Kong
Min Gong*
Affiliation:
Jinan University
*Postal address: Department of Mathematics and Statistics, Indian Institute of Technology Kanpur, Uttar Pradesh-208016, India. Email: dheerajgoyal6897@gmail.com
**Postal address: Centre for Intelligent Multidimensional Data Analysis Limited, Hong Kong Science Park, Hong Kong.
***Postal address: College of Information Science and Technology, Jinan University, Guangzhou, China

Abstract

This paper defines and studies a broad class of shock models by assuming that a Markovian arrival process models the arrival pattern of shocks. Under the defined class, we show that the system’s lifetime follows the well-known phase-type distribution. Further, we examine the age replacement policy for systems with a continuous phase-type distribution, identifying sufficient conditions for determining the optimal replacement time. Since phase-type distributions are dense in the class of lifetime distributions, our findings for the age replacement policy are widely applicable. We include numerical examples and graphical illustrations to support our results.

Type
Original Article
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Applied Probability Trust

1. Introduction

Most engineering systems in real life experience shocks while they operate. Because of this, the literature has given much attention to reliability evaluation under shock models. Numerous shock models have been considered under different assumptions and applied to a variety of topics in reliability and operations research (see, e.g., [5, 9, 26, 36], to name a few).

A system can be exposed to shocks of arbitrary magnitudes at arbitrary times, and it fails if a particular pattern relating to the shocks or the times between the shocks appears. Thus, the lifetime of the system can be represented by a random sum that depends on the times between consecutive shocks and the magnitudes of the shocks. Hence, in any shock modeling, two pieces of information are crucial: the failure mechanism associated with the arriving shocks and the joint distribution of the times between consecutive shocks. Based on the failure mechanism, shock models can be categorized as extreme shock models, cumulative shock models, run shock models, and $\delta$-shock models (see, e.g., [4, 13, 23, 30]). The joint distribution of the times between consecutive shocks is modeled through a counting process, referred to as the shock process. Most shock models in the literature have been developed under the assumption of either a Poisson or a renewal shock process (see, e.g., [3, 9, 22, 36], to name a few); among these, the phase-type renewal process and the non-homogeneous Poisson process are the most popular. The non-homogeneous Poisson process owes its popularity to its mathematical simplicity; however, it is a rather specific process and is therefore unsuitable for certain applications. The phase-type renewal process is well known for its ability to approximate inter-arrival times following an arbitrary distribution, but it fails to capture the correlation between inter-arrival times. To capture this correlation, various mixed Poisson processes, such as the Pólya process and the generalized Pólya process, have been considered in the literature (e.g., [4, 13, 14]). However, these processes are not appropriate for modeling inter-arrival times with an arbitrary distribution.

The Markovian arrival process, introduced in [29], is a versatile point process that not only generalizes the phase-type renewal process but also provides a way to model correlated inter-arrival times, a feature that had largely been ignored in the literature. It is well known that Markovian arrival processes are dense in the class of point processes [2, p. 708]. Therefore, they have received much attention in many real-world settings where dependent arrivals are frequently seen, e.g. in reliability, queuing theory, insurance, telecommunications, and weather forecasting [5, 6, 19, 26, 32]. An overview of Markovian arrival processes is provided in [1], highlighting their practical applications and demonstrating how to fit them to datasets. Despite these appealing properties, significantly less attention has been paid to them in the literature on shock models. These processes have been considered in [12, 25–27, 31], but those studies are based on specific shock models and do not cover some well-known ones, e.g. run shock models and extreme shock models with a change point, which are applicable in many real-life scenarios. Further, the existing studies do not provide the precise distribution of the system's lifetime.

To fill this gap in the literature, we study a wider class of shock models by assuming that the shock process is a Markovian arrival process. In this class, we assume that the distribution of the number of shocks until system failure follows a discrete phase-type distribution. Consequently, many popular shock models, such as extreme shock models, cumulative shock models, run shock models, the extreme shock model with a change point, and the extreme shock model with a finite number of shocks, are special cases of the defined class. [7] studied a class of shock models by assuming that shocks occur according to a phase-type renewal process. Considering the possibility of heavy-tailed inter-arrival times, [15] studied a class of shock models by assuming a renewal process of matrix Mittag–Leffler type as the shock process. However, neither study considered the correlation between inter-arrival times. [14] analyzed a class of shock models by considering the dependency between inter-shock times, which were modeled by a homogeneous Poisson generalized gamma process. This process is suitable for modeling arrivals when the process has positive dependency in its increments. However, it has identically distributed inter-arrival times and is unsuitable for modeling inter-arrival times with an arbitrary distribution. Thus, considering a Markovian arrival process (a generalization of the phase-type renewal process, with non-identical and dependent inter-arrival times, dense in the class of point processes) as the shock process, and a discrete phase-type distribution (dense in the class of distributions on $\mathbb{N}$) as the distribution of the number of shocks until system failure, not only fills the existing gap in the literature but also provides great flexibility in the model.

Another contribution of this paper is the study of the age replacement policy for a system working under the defined random environment. Age replacement is a popular topic in the domain of maintenance and reliability. It is a simple yet practical replacement policy in real applications because, from the perspective of maintenance management costs, pre-scheduled preventive replacement of functioning units or components is often feasible and advantageous. In the literature, various age replacement policies have been defined and examined (see, e.g., [8, 22, 33]). In this paper, we study the age replacement policy when a continuous phase-type distribution models the lifetime of a system. Sufficient conditions are obtained for the existence and uniqueness of the optimal replacement time that minimizes the average cost. Since phase-type distributions can approximate any lifetime distribution [2], the results derived in this paper can be applied to any system.

The novel contributions of our paper can be summarized as follows: (i) We study a broad class of shock models, containing many popular shock models, by assuming that shocks occur according to a Markovian arrival process. (ii) We show that this class also includes the extreme shock model with a finite number of shocks, which has not been addressed in the literature. (iii) We show that a system's lifetime under this general class of shock models follows a phase-type distribution. (iv) We study the age replacement policy for systems whose lifetime follows a continuous phase-type distribution.

The rest of the paper is organized as follows. In Section 2, we present our notation, preliminary definitions, and model description. Section 3 contains results on the system’s lifetime distribution and reliability indices. In Section 4, we study the problem of optimal age replacement policy by considering the system’s lifetime as following a phase-type distribution. Section 5 presents special cases of the considered class of shock models. In Section 6, we present illustrative numerical results. Finally, concluding remarks are given in Section 7.

2. Preliminaries and model description

We begin by introducing the notation that will be used throughout the paper. For any underlying random variable U, we denote the probability density function by $f_{U}(\cdot)$ , the cumulative distribution function by $F_{U}(\cdot)$ , the reliability function by $\bar{F}_{U}(\cdot)$ , the failure rate function by $r_{U}(\cdot)$ , the Laplace transform function by $L_{U}(\cdot)$ , and the mean by $\mathbb{E}(U)$ . Further, we denote the set of natural numbers by $\mathbb{N}$ and the set of real numbers by $\mathbb{R}$ . The random variable $X_{i}$ represents the time between the $(i-1)$ th and ith shocks, and $Y_{i}$ denotes the magnitude (or damage) caused by the ith shock, $i \in \mathbb{N}$ . Further, the random variables T and N represent the lifetime of the system and the number of shocks until system failure, respectively. We denote the derivative of a function g(x) with respect to x by $g^{\prime}(x)$, the absolute value of a real number z by $|z|$ , and the identity matrix by $\boldsymbol{I}$ . The Kronecker product of any two matrices $\pmb{E}$ and $\pmb{F}$ is denoted by $\pmb{E} \otimes \pmb{F} = [E_{ij}\pmb{F}]$ , and $\mathrm{diag}(x_{1},x_{2}, \ldots,x_{d})$ denotes the diagonal matrix with ith diagonal entry equal to $x_{i}$ , $i = 1,2,\ldots,d$ .

2.1. Preliminaries

For the stochastic objects considered here, we present the relevant definitions in this subsection.

Definition 1. For two matrices $\pmb{S}_{1}$ and $\pmb{S}_{2}$ , we write $\pmb{S}_{1} \geq \pmb{S}_{2}$ (respectively $\pmb{S}_{1} > \pmb{S}_{2}$ ) if $\pmb{S}_{1}$ and $\pmb{S}_{2}$ are real and every entry of $\pmb{S}_{1} - \pmb{S}_{2}$ is non-negative (respectively positive).

Definition 2. A real matrix $\pmb{S}$ is said to be monotone if $\pmb{S} \pmb{x} \geq 0$ implies $\pmb{x} \geq 0$ .

Definition 3. A matrix $\pmb{S}$ is called an M-matrix if all off-diagonal elements are less than or equal to zero, and all eigenvalues have positive real parts [17, p. 22].

Remark 1. Any non-singular M-matrix is a monotone matrix, and its inverse is a non-negative matrix [18].

Definition 4. A non-negative random variable X is said to have a phase-type distribution if

\begin{equation*} F_X(x) = 1 - {\alpha}{\mathrm{e}}^{\pmb{S}x}\boldsymbol{e} = 1 - {\alpha}\Bigg(\sum_{i=0}^{\infty}\frac{x^{i}}{i!}\pmb{S}^{i}\Bigg)\boldsymbol{e}, \quad x\geq 0, \end{equation*}

where:

  • $\boldsymbol{e}$ is the column vector of order m, where $m \in \mathbb{N}$ , with all entries equal to one;

  • ${\alpha}$ is a row vector of order m with all elements non-negative, and ${\alpha}\boldsymbol{e} = 1$ ; and

  • $\pmb{S}$ is a matrix of order m such that all the entries are non-negative except the diagonal elements, which are negative; all the row sums are less than or equal to zero; and $\pmb{S}$ is non-singular [17, p. 10].

We call $\pmb{S}$ and the pair $({\alpha},\pmb{S})$ the phase-type generator and the phase-type representation of order m, respectively. The notation $X \sim \mathrm{PH}_{m}({\alpha},\pmb{S})$ means that X follows the phase-type distribution with the representation $({\alpha},\pmb{S})$ of order m. The expectation of X can be derived from

(1) \begin{equation} \mathbb{E}(X) = -({\alpha} \pmb{S}^{-1} \boldsymbol{e}).\end{equation}

The class of phase-type distributions includes many well-known probability distributions. For example, a phase-type distribution with the representation $({\alpha} = 1, \pmb{S} = [\!-\!\lambda])$ corresponds to the exponential distribution with parameter $\lambda$ . Additionally, the Erlang distribution, generalized Erlang distribution, and Coxian distribution are notable special cases of phase-type distributions; for the phase-type representation of these distributions, see [17, Example 1.2.3].
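For readers who wish to compute with these objects, the survival function and the mean in (1) can be evaluated directly with a matrix exponential. The following minimal Python sketch is our addition; the function names and the generalized Erlang parameters are illustrative and not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

def ph_survival(alpha, S, x):
    """Survival function of PH(alpha, S): bar F_X(x) = alpha exp(S x) e (Definition 4)."""
    e = np.ones(S.shape[0])
    return float(alpha @ expm(S * x) @ e)

def ph_mean(alpha, S):
    """Mean of PH(alpha, S) from (1): E(X) = -alpha S^{-1} e."""
    e = np.ones(S.shape[0])
    return float(-alpha @ np.linalg.solve(S, e))

# Illustrative generalized Erlang example: two exponential phases in series with
# rates 1.0 and 2.5 (parameters chosen arbitrarily for this sketch).
alpha = np.array([1.0, 0.0])
S = np.array([[-1.0, 1.0],
              [0.0, -2.5]])
print(ph_survival(alpha, S, 1.0))  # survival probability at x = 1
print(ph_mean(alpha, S))           # 1/1.0 + 1/2.5 = 1.4
```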

Remark 2. Let $\pmb{S}$ be a phase-type generator. Then all the eigenvalues of $\pmb{S}$ have negative real part [2]. Therefore, from Definitions 3 and 4, for $t \geq 0$ , $(t\boldsymbol{I} - \pmb{S})$ is an M-matrix. Hence, from Remark 1, it is monotone.

Definition 5. A discrete phase-type distribution can be viewed as the distribution of the time to absorption in an absorbing Markov chain. Let $\boldsymbol{M}$ be a square matrix of size m, $m \in \mathbb{N}$ , that includes the transition probabilities among the m transient states, and $\pmb{\mu} = (\mu_{1},\mu_{2},\ldots,\mu_{m})$ be a non-negative vector of size m such that $\sum_{i=1}^{m}\mu_{i} = 1$ . Further, let the matrix $\boldsymbol{I} - \boldsymbol{M}$ be non-singular. Then a random variable N is said to have the discrete phase-type distribution, denoted by $N \sim \mathrm{DPH}_{m}(\pmb{\mu}, \boldsymbol{M})$ , if

(2) \begin{equation} \mathbb{P}(N = n) = \pmb{\mu}\boldsymbol{M}^{n-1}(\boldsymbol{I} - \boldsymbol{M})\pmb{e}, \quad n \in \mathbb{N}, \end{equation}

where $\pmb{e}$ is the column vector of size m with all elements being one [14].
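The probability mass function (2) is equally easy to evaluate numerically. The short sketch below is our addition (the helper name `dph_pmf` and the geometric test case are illustrative); it can also be used to check the representations listed in Section 5.

```python
import numpy as np

def dph_pmf(mu, M, n):
    """P(N = n) for N ~ DPH(mu, M), evaluated from (2): mu M^{n-1} (I - M) e."""
    m = M.shape[0]
    e = np.ones(m)
    return float(mu @ np.linalg.matrix_power(M, n - 1) @ (np.eye(m) - M) @ e)

# Sanity check: DPH_1(1, 1 - p) is the geometric distribution with pmf p (1-p)^{n-1}.
p = 0.3
mu, M = np.array([1.0]), np.array([[1.0 - p]])
print([round(dph_pmf(mu, M, n), 6) for n in range(1, 6)])
# expected: [0.3, 0.21, 0.147, 0.1029, 0.07203]
```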

Definition 6. Let $\{\boldsymbol{D}_{0},\boldsymbol{D}_{1}\}$ be matrices of order m, $m \in \mathbb{N}$, such that all the elements of the two matrices are non-negative except the diagonal elements of $\boldsymbol{D}_{0}$, which are negative, and such that $\boldsymbol{D} = \boldsymbol{D}_{0} + \boldsymbol{D}_{1}$ is an irreducible infinitesimal generator of a Markov process. Consider an m-state Markov renewal process $\{(I_{k}, X_{k}), k = 0,1,2,\ldots\}$, where the random variables $I_{k}$ and $X_{k}$ are the states and the sojourn times in the states, respectively. The (i, j) entry of the transition probability matrix is given by

$$ \mathbb{P}(I_{k} = j, X_{k} \leq t\mid I_{k-1} = i) = \bigg(\int_{0}^{t}{\mathrm{e}}^{\boldsymbol{D}_{0}x}\boldsymbol{D}_{1}\,{\mathrm{d}} x\bigg)_{ij}, \quad t \geq 0. $$

The Markovian arrival process is defined as a Markov renewal process whose transition probability matrix has the stated form. We call the pair $(\boldsymbol{D}_{0},\boldsymbol{D}_{1})$ the representation of the Markovian arrival process. If $\pmb{\pi}$ denotes the initial vector, then we denote the Markovian arrival process by $\mathrm{MAP}_{m}(\pmb{\pi},\boldsymbol{D}_{0}, \boldsymbol{D}_{1})$ [17, p. 111].

The Markovian arrival process generalizes the Poisson process, the Markov modulated Poisson process, and the phase-type renewal process [21]. The joint distribution of $(X_{1},X_{2},\ldots,X_{k})$ for a $\mathrm{MAP}_{m}(\pmb{\pi}, \boldsymbol{D}_{0}, \boldsymbol{D}_{1})$ process is given by

(3) \begin{equation} f_{(X_{1},X_{2},\ldots,X_{k})}(x_{1},x_{2},\ldots,x_{k}) = \pmb{\pi}{\mathrm{e}}^{\boldsymbol{D}_{0}x_{1}}\boldsymbol{D}_{1} {\mathrm{e}}^{\boldsymbol{D}_{0}x_{2}}\cdots\boldsymbol{D}_{1}{\mathrm{e}}^{\boldsymbol{D}_{0}x_{k}}\boldsymbol{D}_{1}\boldsymbol{e}\end{equation}

for $x_{i} \geq 0$ , $i = 1,2,\ldots,k$ . Clearly, all the inter-arrival times $X_{i}$ are dependent, non-identical, and follow a phase-type distribution with generator $\boldsymbol{D}_{0}$ [2, p. 526].
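As an illustration of (3), the joint density of the first few inter-arrival times can be computed by accumulating the matrix products from left to right. The sketch below is our addition; the two-state MAP parameters are arbitrary illustrative values.

```python
import numpy as np
from scipy.linalg import expm

def map_joint_density(pi, D0, D1, x):
    """Joint density (3) of the first k inter-arrival times of MAP(pi, D0, D1)."""
    e = np.ones(D0.shape[0])
    v = pi.copy()
    for xi in x:                      # accumulate pi e^{D0 x_1} D1 ... e^{D0 x_k} D1
        v = v @ expm(D0 * xi) @ D1
    return float(v @ e)

# A two-state MAP chosen for illustration only (rows of D0 + D1 sum to zero).
pi = np.array([0.6, 0.4])
D0 = np.array([[-3.0, 1.0],
               [0.5, -2.0]])
D1 = np.array([[1.5, 0.5],
               [0.5, 1.0]])
print(map_joint_density(pi, D0, D1, [0.2, 0.7, 0.4]))  # f_{(X1,X2,X3)}(0.2, 0.7, 0.4)
```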

2.2. Model description

Consider a system subject to shocks over time which are the only factor contributing to its failure. Let $X_{i}$ denote the time between the $(i-1)$ th and ith shocks, and $Y_{i}$ represent the magnitude of the ith shock, $i \in \mathbb{N}$ . Let the random variable N indicate the number of shocks until the system fails. Then the system’s lifetime can be represented by the random sum

(4) \begin{equation} T = \sum_{i=1}^{N} X_{i}.\end{equation}

Note that this is valid for any system working in a random environment. However, by assuming an arbitrary distribution of N, we cannot proceed much further and derive in full generality the distribution of the lifetime of a system vulnerable to shocks. We assume that $N \sim \mathrm{DPH}_{m}(\pmb{\mu},\boldsymbol{M})$ . The discrete phase-type distribution is reasonably general, since some popular waiting time distributions, e.g. the geometric, the geometric distribution of order k, and the negative binomial, are special cases. Consequently, many shock models lead to a discrete phase-type distribution of N (see Section 5). Furthermore, this distribution is dense in the class of distributions on $\mathbb{N}$ [20, p. 54]. Therefore, this is a fairly general assumption.

As with the distribution of N, we cannot derive the system lifetime by considering an arbitrary shock process. We assume that shocks occur on the system according to a Markovian arrival process, which is dense in the class of point processes on the real line [2, p. 708] and provides great flexibility in the model. Further, we assume that $X_{i}$ and N are independent, $i \in \mathbb{N}$ . Lastly, we assume that the $Y_{i}$ are independent and identically distributed random variables, and independent of the $X_{i}$ .

We can observe that the class we have defined is very broad but not complete (i.e. it does not contain all shock models, e.g. the $\delta$ -shock model). The $\delta$ -shock model states that the system fails when the time lag between two successive shocks is less than a predetermined threshold $\delta$ [13]. For this model, $\{N = n\}$ if and only if $\{X_{1} > \delta, X_{2} > \delta, \ldots, X_{n-1} > \delta, X_{n} \leq \delta\}$ . Since N and the $X_{i}$ are dependent, the $\delta$ -shock model is not part of the proposed class.

3. Reliability and mean residual lifetime of the system

We begin with the following lemma, which is used to prove the main result of this section.

Lemma 1. Let $(\boldsymbol{D}_{0},\boldsymbol{D}_{1})$ be the representation of a Markovian arrival process. Then the eigenvalues for the matrix $(t\boldsymbol{I} - \boldsymbol{D}_{0})^{-1}\boldsymbol{D}_{1}$ are contained within the unit circle for all $t \geq 0$ .

Proof. As $(\boldsymbol{D}_{0}, \boldsymbol{D}_{1})$ is a representation of a Markovian arrival process, $(\boldsymbol{D}_{0} + \boldsymbol{D}_{1})\boldsymbol{e} = 0$, or, equivalently, $-\boldsymbol{D}_{0}^{-1}\boldsymbol{D}_{1} \boldsymbol{e} = \boldsymbol{e}$. Since $\boldsymbol{D}_{0}$ is a phase-type generator, from Remarks 1 and 2, $-\boldsymbol{D}_{0}^{-1}\boldsymbol{D}_{1}$ is a stochastic matrix, and hence all its eigenvalues lie within the unit circle. This proves the result for $t = 0$. If $t > 0$ then $t\boldsymbol{e} > (\boldsymbol{D}_{0}+\boldsymbol{D}_{1})\boldsymbol{e}$, or $(t\boldsymbol{I} - \boldsymbol{D}_{0})\boldsymbol{e} > \boldsymbol{D}_{1}\boldsymbol{e}$. From Remarks 1 and 2, $(t\boldsymbol{I} - \boldsymbol{D}_{0})^{-1}\boldsymbol{D}_{1}\boldsymbol{e} < \boldsymbol{e}$; consequently, the matrix $(t\boldsymbol{I} - \boldsymbol{D}_{0})^{-1} \boldsymbol{D}_{1}$ is a sub-stochastic matrix. Therefore, all the eigenvalues of this matrix have modulus less than 1, which proves the required result.
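A quick numerical illustration of Lemma 1 (our addition, using the same arbitrary two-state MAP as in the sketch following (3)):

```python
import numpy as np

# Spectral radius of (tI - D0)^{-1} D1: equal to 1 at t = 0 (stochastic matrix)
# and strictly below 1 for t > 0 (sub-stochastic matrix), as in Lemma 1.
D0 = np.array([[-3.0, 1.0], [0.5, -2.0]])
D1 = np.array([[1.5, 0.5], [0.5, 1.0]])
I = np.eye(2)
for t in [0.0, 0.5, 2.0, 10.0]:
    A = np.linalg.solve(t * I - D0, D1)   # (tI - D0)^{-1} D1
    print(t, max(abs(np.linalg.eigvals(A))))
```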

In what follows, we obtain the distribution of the system’s lifetime for the proposed class of shock models.

Theorem 1. Assume that the shock process is $\mathrm{MAP}_{m_{1}}(\pmb{\pi},\boldsymbol{D}_{0},\boldsymbol{D}_{1})$ . Let $N \sim \mathrm{DPH}_{m_{2}}(\pmb{\mu},\boldsymbol{M})$ . Then $T \sim \mathrm{PH}_{m_{1}m_{2}}(\pmb{\pi}\otimes\pmb{\mu},\boldsymbol{D}_{0}\otimes\boldsymbol{I} + \boldsymbol{D}_{1}\otimes\boldsymbol{M})$ .

Proof. Clearly, from (4), and by using the assumption of independence between the $X_{i}$ and N, we can write

(5) \begin{equation} L_{T}(t) = \mathbb{E}({\mathrm{e}}^{-T t}) = \sum_{n=1}^{\infty}\mathbb{E}\big({\mathrm{e}}^{-\left(\sum_{i=1}^{n}X_{i}\right)t}\big) \mathbb{P}(N = n). \end{equation}

Now, $\mathbb{E}\big({\mathrm{e}}^{-\left(\sum_{i=1}^{n}X_{i}\right)t}\big) = \int_{0}^{\infty}\cdots\int_{0}^{\infty}{\mathrm{e}}^{\left(-\sum_{i=1}^{n}x_{i}\right)t}f_{X_{1},\ldots,X_{n}}(x_{1},\ldots,x_{n})\, {\mathrm{d}} x_{1} \cdots {\mathrm{d}} x_{n}$ . From (3), we can write

\begin{align*} \mathbb{E}\big({\mathrm{e}}^{-\left(\sum_{i=1}^{n}X_{i}\right)t}\big) & = \int_{0}^{\infty}\cdots\int_{0}^{\infty}{\mathrm{e}}^{\left(-\sum_{i=1}^{n}x_{i}\right)t} \big(\pmb{\pi}{\mathrm{e}}^{\boldsymbol{D}_{0}x_{1}}\boldsymbol{D}_{1}{\mathrm{e}}^{\boldsymbol{D}_{0}x_{2}}\cdots\boldsymbol{D}_{1} {\mathrm{e}}^{\boldsymbol{D}_{0}x_{n}}\boldsymbol{D}_{1}\boldsymbol{e}\big)\,{\mathrm{d}} x_{1}\cdots{\mathrm{d}} x_{n} \\ & = \int_{0}^{\infty}\cdots\int_{0}^{\infty}\pmb{\pi}{\mathrm{e}}^{-x_{1}\left(t\boldsymbol{I}-\boldsymbol{D}_{0}\right)} \boldsymbol{D}_{1}{\mathrm{e}}^{-x_{2}\left(t\boldsymbol{I}-\boldsymbol{D}_{0}\right)}\cdots\boldsymbol{D}_{1} {\mathrm{e}}^{-x_{n}\left(t\boldsymbol{I}-\boldsymbol{D}_{0}\right)}\boldsymbol{D}_{1}\boldsymbol{e}\,{\mathrm{d}} x_{1}\cdots{\mathrm{d}} x_{n} \\ & = \pmb{\pi}(t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1}\boldsymbol{D}_{1}(t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1} \cdots\boldsymbol{D}_{1}(t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1}\boldsymbol{D}_{1}\boldsymbol{e} \\ & = \pmb{\pi}((t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1}\boldsymbol{D}_{1})^{n}\boldsymbol{e}. \end{align*}

Now, by using this equality and (2) in (5), we get

\begin{align*} L_{T}(t) & = \sum_{n=1}^{\infty}(\pmb{\pi}((t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1}\boldsymbol{D}_{1})^{n}\boldsymbol{e}) (\pmb{\mu}\boldsymbol{M}^{n-1}(\boldsymbol{I}-\boldsymbol{M})\boldsymbol{e}) \\ & = \sum_{n=1}^{\infty}(\pmb{\pi}\otimes\pmb{\mu})\{((t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1} \boldsymbol{D}_{1})^{n-1}\otimes\boldsymbol{M}^{n-1}\}((t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1} \boldsymbol{D}_{1}\boldsymbol{e}\otimes(\boldsymbol{I}-\boldsymbol{M})\boldsymbol{e}) \\ & = (\pmb{\pi}\otimes\pmb{\mu})\Bigg[\sum_{n=1}^{\infty}((t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1} \boldsymbol{D}_{1}\otimes\boldsymbol{M})^{n-1}\Bigg]((t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1}\otimes \boldsymbol{I})(\boldsymbol{D}_{1}\boldsymbol{e}\otimes(\boldsymbol{I}-\boldsymbol{M})\boldsymbol{e}). \end{align*}

The series term in this equation converges to $(\boldsymbol{I}-((t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1}\boldsymbol{D}_{1} \otimes\boldsymbol{M}))^{-1}$ if and only if all the eigenvalues of the matrix $(t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1}\boldsymbol{D}_{1}\otimes\boldsymbol{M}$ have modulus strictly less than 1. As all the eigenvalues of the matrix $\boldsymbol{M}$ of a discrete phase-type distribution have modulus strictly less than one [2, p. 32], from Lemma 1, all the eigenvalues of the matrix $(t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1}\boldsymbol{D}_{1}\otimes\boldsymbol{M}$ lie strictly within the unit circle. Hence, by using $\boldsymbol{D}_{1}\boldsymbol{e} = -\boldsymbol{D}_{0}\boldsymbol{e}$, we get

\begin{align*} L_{T}(t) & = (\pmb{\pi}\otimes\pmb{\mu})(\boldsymbol{I}-((t\boldsymbol{I}-\boldsymbol{D}_{0})^{-1} \boldsymbol{D}_{1}\otimes\boldsymbol{M}))^{-1}((t\boldsymbol{I}-\boldsymbol{D}_{0})\otimes\boldsymbol{I})^{-1} (\boldsymbol{D}_{1}\boldsymbol{e}\otimes(\boldsymbol{I}-\boldsymbol{M})\boldsymbol{e}) \\ & = (\pmb{\pi}\otimes\pmb{\mu})(t\boldsymbol{I}-(\boldsymbol{D}_{0}\otimes\boldsymbol{I} + \boldsymbol{D}_{1}\otimes\boldsymbol{M}))^{-1}(\boldsymbol{D}_{1}\boldsymbol{e}\otimes\boldsymbol{e} - \boldsymbol{D}_{1}\boldsymbol{e}\otimes\boldsymbol{M}\boldsymbol{e}) \\ & = (\pmb{\pi}\otimes\pmb{\mu})(t\boldsymbol{I}-(\boldsymbol{D}_{0}\otimes\boldsymbol{I} + \boldsymbol{D}_{1}\otimes\boldsymbol{M}))^{-1}(\!-\!\boldsymbol{D}_{0}\boldsymbol{e}\otimes\boldsymbol{e} - \boldsymbol{D}_{1}\boldsymbol{e}\otimes\boldsymbol{M}\boldsymbol{e}) \\ & = (\pmb{\pi}\otimes\pmb{\mu})(t\boldsymbol{I}-(\boldsymbol{D}_{0}\otimes\boldsymbol{I} + \boldsymbol{D}_{1}\otimes\boldsymbol{M}))^{-1}(\!-\!\boldsymbol{D}_{0}\otimes\boldsymbol{I} - \boldsymbol{D}_{1}\otimes\boldsymbol{M})\boldsymbol{e} \\ & = (\pmb{\pi}\otimes\pmb{\mu})(t\boldsymbol{I}-(\boldsymbol{D}_{0}\otimes\boldsymbol{I} + \boldsymbol{D}_{1}\otimes\boldsymbol{M}))^{-1}(\boldsymbol{D}_{0}\otimes\boldsymbol{I} + \boldsymbol{D}_{1}\otimes\boldsymbol{M})^{0}, \end{align*}

where $(\boldsymbol{D}_{0}\otimes\boldsymbol{I} + \boldsymbol{D}_{1}\otimes\boldsymbol{M})^{0} = -(\boldsymbol{D}_{0}\otimes\boldsymbol{I} + \boldsymbol{D}_{1}\otimes\boldsymbol{M})\boldsymbol{e}$. This is the Laplace transform of a phase-type distribution with the representation $(\pmb{\pi}\otimes\pmb{\mu},\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M})$ [17, p. 18]. Consequently, $T \sim \mathrm{PH}(\pmb{\pi}\otimes\pmb{\mu}, \boldsymbol{D}_{0}\otimes\boldsymbol{I} + \boldsymbol{D}_{1}\otimes\boldsymbol{M})$.

Theorem 1 shows that the system's lifetime follows the well-known phase-type distribution for the defined class of shock models. Many shock models and their applications have been studied widely in the literature for phase-type renewal processes (see [9–11, 23, 30, 36, 38], to name a few). We believe the theorem's conclusion can be applied in these existing studies and in future ones, which makes it of considerable importance.

Since, for the defined class of shock models, the system’s lifetime follows the phase-type distribution, from Definition 4, the system’s reliability function is given by

(6) \begin{equation} \bar{F}_{T}(t) = (\pmb{\pi}\otimes\pmb{\mu}){\mathrm{e}}^{(\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M})t} \boldsymbol{e},\end{equation}

and the failure rate is given by

(7) \begin{equation} r_{T}(t) = \frac{-(\pmb{\pi}\otimes\pmb{\mu}){\mathrm{e}}^{(\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M})t} (\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M})\boldsymbol{e}} {(\pmb{\pi}\otimes\pmb{\mu}){\mathrm{e}}^{(\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M})t} \boldsymbol{e}}.\end{equation}

For any phase-type distribution $Y \sim \mathrm{PH}_{m}({\alpha}, \pmb{S})$ ,

$$(Y - y\mid Y > y) \sim \mathrm{PH}\bigg(\frac{{\alpha}{\mathrm{e}}^{\pmb{S}y}}{{\alpha}{\mathrm{e}}^{\pmb{S}y}\boldsymbol{e}},\pmb{S}\bigg)$$

(see [17]). Thus, the system's mean remaining lifetime is given by

(8) \begin{equation} \mathbb{E}(T - t\mid T > t) = -\bigg(\frac{(\pmb{\pi}\otimes\pmb{\mu}){\mathrm{e}}^{(\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M})t}} {(\pmb{\pi}\otimes\pmb{\mu}){\mathrm{e}}^{(\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M})t} \boldsymbol{e}}\bigg) (\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M})^{-1}\boldsymbol{e}.\end{equation}
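The quantities in (6)–(8) involve only Kronecker products, a matrix exponential, and one linear solve, so they are straightforward to compute. The sketch below is our addition; it builds the representation of Theorem 1 and evaluates the three measures for the extreme shock model driven by the Markovian arrival process used later in the numerical illustration of Section 6 (equation (11)) with $\xi = 1$ and $p = 0.2$.

```python
import numpy as np
from scipy.linalg import expm

def lifetime_representation(pi, D0, D1, mu, M):
    """Phase-type representation of T from Theorem 1."""
    alpha = np.kron(pi, mu)
    S = np.kron(D0, np.eye(M.shape[0])) + np.kron(D1, M)
    return alpha, S

def reliability_measures(alpha, S, t):
    """(bar F_T(t), r_T(t), mean residual lifetime) from (6), (7), (8)."""
    e = np.ones(S.shape[0])
    w = alpha @ expm(S * t)                      # row vector alpha e^{St}
    surv = w @ e
    rate = -(w @ S @ e) / surv
    mrl = -(w / surv) @ np.linalg.solve(S, e)
    return float(surv), float(rate), float(mrl)

# Extreme shock model (N ~ DPH_1(1, 1-p)) with the MAP parameters of (11), xi = 1, p = 0.2.
xi, p = 1.0, 0.2
pi = np.array([1.0, 0.0])
D0 = np.array([[-xi, xi], [0.0, -xi]])
D1 = np.array([[0.0, 0.0], [0.0, xi]])
mu, M = np.array([1.0]), np.array([[1.0 - p]])
alpha, S = lifetime_representation(pi, D0, D1, mu, M)
print(reliability_measures(alpha, S, t=2.0))
```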

4. Optimal age replacement policy via phase-type distribution

The classical age replacement policy states that the system is replaced when it fails or reaches age x, whichever comes first. Let $c_{1}$ and $c_{2}$ represent, respectively, the costs of replacing working and failed systems. We assume that $c_{1} < c_{2}$ since a failure results in an additional penalty. Then, as a function of the replacement age x, the average cost per unit of time is given by

(9) \begin{equation} C(x) = \frac{c_{1}\mathbb{P}(T > x) + c_{2}\mathbb{P}(T \leq x)}{\mathbb{E}(\min(T,x))} = \frac{c_{2} + (c_{1} - c_{2}) \mathbb{P}(T > x)}{\mathbb{E}(\min(T,x))}.\end{equation}

To evaluate the function C(x) we need expressions for $\mathbb{P}(T > x)$ and $\mathbb{E}(\min(T,x))$ . Note that, for a phase-type distribution Y which has a phase-type representation $({\alpha}, \pmb{S})$ , i.e. $Y \sim \mathrm{PH}_{m}({\alpha}, \pmb{S})$ , $\mathbb{E}(\min(Y,x)) = {\alpha}\pmb{S}^{-1}{\mathrm{e}}^{\pmb{S}x}\boldsymbol{e} - {\alpha}\pmb{S}^{-1}\boldsymbol{e}$ . Hence, from this equation and Theorem 1,

\begin{align*}\mathbb{E}(\!\min(T,x)) &= (\pmb{\pi}\otimes\pmb{\mu})(\boldsymbol{D}_{0}\otimes\boldsymbol{I} +\boldsymbol{D}_{1}\otimes\boldsymbol{M})^{-1}{\mathrm{e}}^{(\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M})x}\pmb{e}\\& - (\pmb{\pi}\otimes\pmb{\mu})(\boldsymbol{D}_{0}\otimes\boldsymbol{I} +\boldsymbol{D}_{1}\otimes\boldsymbol{M})^{-1}\pmb{e}\end{align*}

for $x \geq 0$ . Now, by using this and (6) we can obtain value of the cost function C(x) numerically.

For the defined class of shock models, the system lifetime is phase-type distributed. This leads us to consider a phase-type distributed lifetime random variable T in the age replacement policy. This policy was studied in [8] for a system lifetime following a discrete phase-type distribution, whereas here we consider a continuous phase-type distribution.

For a given shock model and costs $c_{1}$ and $c_{2}$ , the problem is to find the point $x^{*}$ that minimizes the function C(x). For the classical model (9), an optimal solution exists when the failure rate is strictly increasing and tends to a sufficiently large constant. Our model agrees with this, as we prove in the following theorem.

Theorem 2. Assume that the lifetime of the system follows a phase-type distribution, i.e. $T \sim \mathrm{PH}_{m}({\alpha},\pmb{S})$ . Let $\theta$ be the dominating eigenvalue of the matrix $\pmb{S}$ , and let $\sigma = \mathbb{E}(T)$ . Assume that $r_{T}(\cdot)$ is strictly increasing and continuous. If $|\theta| > c_{2}/(\sigma(c_{2} - c_{1}))$ then there exists a finite and unique $x^{*}$ ( $0 < x^{*} < \infty$ ) that satisfies

$$r_{T}(x) \int_{0}^{x} \bar{F}_{T}(t)\,{\mathrm{d}} t - F_{T}(x) = \frac{c_{1}}{c_{2} - c_{1}},$$

and the resulting expected cost rate is $C(x^{*}) = (c_{2} - c_{1}) r_{T}(x^{*})$ . Further, if $|\theta| \leq c_{2}/(\sigma(c_{2} - c_{1}))$ then $x^{*} = \infty$ , i.e. a unit is replaced only at failure, and the expected cost rate is given by (9).

Proof. First, note that, for a phase-type distribution, the dominating eigenvalue is always real and negative [2, p. 203]. From [28, Theorem 3.1], a sufficient condition for $\lim_{x \rightarrow \infty} C(x) > C(x)$ to hold for some x is that $\lim_{t \rightarrow \infty} r_{T}(t) > c_{2}/(\sigma(c_{2} - c_{1}))$. Note that the survival function of a phase-type distribution T can be written as

(10) \begin{equation} \bar{F}_{T}(t) = \sum_{k=1}^{l}\sum_{r=0}^{m_{k} - 1}s_{k_{r}}\frac{t^{r}}{r!}{\mathrm{e}}^{\theta_{k}t}, \end{equation}

where the $\theta_{k}$ are the eigenvalues of $\pmb{S}$ with multiplicities $m_{1}, m_{2}, \ldots, m_{l}$ such that $\sum_{k=1}^{l} m_{k} = m$, and the $s_{k_{r}}$ are constants [2]. From (10), we can see that the failure rate function of T is asymptotically constant, with limit $|\theta|$, i.e. $\lim_{t \rightarrow \infty}r_{T}(t) = |\theta|$. The result then follows directly from [28, Theorem 3.2].

Since the failure rate function $r_{T}(\cdot)$ in (7) is matrix based, its monotonic behavior can be examined by plotting it with standard mathematical software. However, in the following two theorems, sufficient criteria for the monotonicity of the failure rate of a phase-type distribution are derived in two special cases.

Theorem 3. Let $T \sim \mathrm{PH}_{2}({\alpha},\pmb{S})$ . Let $\theta_{1}$ and $\theta_{2}$ be two distinct eigenvalues of the matrix $\pmb{S}$ . Let $\pmb{P}$ be an invertible matrix such that $\pmb{S} = \pmb{P}\,\mathrm{diag}(\theta_{1},\theta_{2})\pmb{P^{-1}}$ . Let $\boldsymbol{C}_{i}$ and $\pmb{R}_{i}$ denote the ith column and row vector of matrix $\pmb{P}$ and $\pmb{P}^{-1}$ , respectively, $i = 1, 2$ . Then $r_{T}(\cdot)$ is strictly increasing and continuous if and only if ${\alpha}\boldsymbol{C}_{1}\pmb{R}_{1}\boldsymbol{e} \in (\!-\!\infty, 0) \cup (1, \infty)$ .

Proof. Define $\beta_{i} = {\alpha}\boldsymbol{C}_{i}\pmb{R}_{i}\boldsymbol{e}$ , $i = 1,2$ . Since the dominating eigenvalue of a phase-type distribution is always real, and complex eigenvalues of a real matrix occur in conjugate pairs, both eigenvalues are real. This means that $\pmb{P}$ and $\pmb{P}^{-1}$ are real matrices, as, consequently, are the $\pmb{C}_{i}$ and $\pmb{R}_{i}$ . Therefore, $\beta_{i} \in \mathbb{R}$ , $i = 1,2$ . Now, consider

\begin{align*} \bar{F}_{T}(t) = {\alpha}{\mathrm{e}}^{\pmb{S}t}\boldsymbol{e} = {\alpha}\pmb{P}\,{\mathrm{diag}}({\mathrm{e}}^{\theta_{1}t},{\mathrm{e}}^{\theta_{2}t})\pmb{P^{-1}}\boldsymbol{e} & = {\alpha}(\boldsymbol{C}_{1},\boldsymbol{C}_{2})\,\mathrm{diag}({\mathrm{e}}^{\theta_{1}t},{\mathrm{e}}^{\theta_{2}t}) \begin{pmatrix} \pmb{R}_{1} \\ \pmb{R}_{2} \end{pmatrix} \boldsymbol{e} \\ & = ({\alpha}\pmb{C}_{1},{\alpha}\boldsymbol{C}_{2})\, \mathrm{diag}({\mathrm{e}}^{\theta_{1}t},{\mathrm{e}}^{\theta_{2}t}) \begin{pmatrix} \pmb{R}_{1}\boldsymbol{e} \\ \pmb{R}_{2}\boldsymbol{e} \end{pmatrix} \\ & = {\mathrm{e}}^{\theta_{1}t}({\alpha}\boldsymbol{C}_{1}\pmb{R}_{1}\boldsymbol{e}) + {\mathrm{e}}^{\theta_{2}t}({\alpha}\boldsymbol{C}_{2}\pmb{R}_{2}\boldsymbol{e}) = \beta_{1}{\mathrm{e}}^{\theta_{1}t} + \beta_{2}{\mathrm{e}}^{\theta_{2}t}. \end{align*}

Thus, $f_{T}(t) = -\beta_{1}\theta_{1}{\mathrm{e}}^{\theta_{1}t} - \beta_{2}\theta_{2}{\mathrm{e}}^{\theta_{2}t}$ . A simple calculation gives $r^{\prime}_{T}(t) = -\beta_{1}\beta_{2}(\theta_{1} - \theta_{2})^{2}{\mathrm{e}}^{(\theta_{1} + \theta_{2})t}/\bar{F}_{T}(t)^{2}$ , which is positive if and only if $\beta_{1} \beta_{2} < 0$ . Now, observe that

$$ \beta_{1} + \beta_{2} = {\alpha}\boldsymbol{C}_{1}\pmb{R}_{1}\boldsymbol{e} + {\alpha}\boldsymbol{C}_{2}\pmb{R}_{2}\boldsymbol{e} = {\alpha}(\boldsymbol{C}_{1},\boldsymbol{C}_{2}) \begin{pmatrix} \pmb{R}_{1}\\ \pmb{R}_{2} \end{pmatrix}\boldsymbol{e} = {\alpha}\pmb{P}\pmb{P^{-1}}\boldsymbol{e} = 1. $$

The result follows using this equality in $\beta_{1}\beta_{2} < 0$ .

Corollary 1. Let $\theta_{1} < \theta_{2}$ . If the conditions in Theorem 3 hold and $|\theta_{2}| > c_{2}/(\sigma(c_{2} - c_{1}))$ , then the result of Theorem 2 holds.

Theorem 4. Let $T \sim \mathrm{PH}_{m}({\alpha},\pmb{S})$ such that all the eigenvalues of the matrix $\pmb{S}$ are the same, say $\theta$ . Then, for $m > 1$ , $r_{T}(\cdot)$ is strictly increasing and continuous.

Proof. From (10), the survival function of T can be written as $\bar{F}_{T}(t) = \sum_{k=1}^{l}p_{k}(t){\mathrm{e}}^{\theta_{k}t}$ , where $p_{k}(\cdot)$ is a polynomial of degree at most $m_{k}-1$ and $m_{k}$ is the multiplicity of the eigenvalue $\theta_{k}$ , with $\sum_{k=1}^{l}m_{k} = m$ . Thus, if all the eigenvalues are the same then we can write $\bar{F}_{T}(t) = p(t){\mathrm{e}}^{\theta t}$ , where $p(\cdot)$ is a polynomial of degree at most $m-1$ . Therefore, $f_{T}(t) = -p^{\prime}(t){\mathrm{e}}^{\theta t} - \theta p(t){\mathrm{e}}^{\theta t}$ . As a result, $r_{T}(t) = -\theta - {p^{\prime}(t)}/{p(t)}$ . Observe that $p^{\prime}(t)/p(t)$ always decreases (strictly) with respect to t, and therefore $r_{T}(t)$ is strictly increasing in t. Continuity is obvious.

Remark 3. Note that the Erlang distribution is a particular case of a phase-type distribution in which all the eigenvalues of the phase-type generator are the same [17, p. 17]. Therefore, this distribution has an increasing failure rate.

Corollary 2. Assume that the conditions in Theorem 4 hold. If $|\theta| > c_{2}/(\sigma(c_{2} - c_{1}))$ then the result of Theorem 2 holds.

5. Special shock models

In this section, we briefly review models that fall under the defined category of shock models for which N follows a discrete phase-type distribution. Many such models have been studied in the literature, more than we can list here. Therefore, we illustrate some representative shock models. Further, we also show that the extreme shock model with a finite number of shocks belongs to the defined class of shock models (which has not been observed in the literature).

Since all the shock models discussed in this section fall within the defined class, the system’s lifetime distribution and other reliability measures can be derived using Theorem 1 and (6)–(8).

5.1. Extreme shock model

One of the shock models that has drawn considerable attention in the reliability literature is the extreme shock model. Here, the system fails when an individual shock exceeds some given level, e.g. when a crack is too deep, it might cause a vehicle axle to fail. That is, if the magnitude of the ith shock exceeds a predetermined threshold $\gamma$ ( $Y_{i} > \gamma$ ) the system fails at the ith shock, $i \in \mathbb{N}$ [16]. In this case,

\begin{equation*} \mathbb{P}(N = n) = \mathbb{P}(Y_{1} \leq \gamma, Y_{2} \leq \gamma, \ldots, Y_{n-1} \leq \gamma, Y_{n} > \gamma)= p(1-p)^{n-1},\end{equation*}

where $p = \mathbb{P}(Y_{1} > \gamma)$ . Clearly, $N \sim \mathrm{DPH}_{1}(1,1 - p)$ . Other than the classical extreme shock model, some generalized extreme shock models with a discrete phase-type representation of N have also been put forward in the literature [3].

5.2. A cumulative shock model

A system fails in the cumulative shock model if the total damage from shocks reaches a predetermined threshold value $\gamma$ . This model has been studied in many real applications. For instance, each electrical device connected to an electric system can be viewed as a specific type of shock, with the power drawn by the device playing the role of the shock magnitude. The system will be overloaded, resulting in a power failure, when the total shock size surpasses a certain level. Let the random variable N(t) denote the number of shocks by time t, $t \geq 0$ . Suppose that the ith shock increases the damage of the system by independent and identically distributed random increments $Y_{i}$ , $i \in \mathbb{N}$ , which are independent of the shock process $\{N(t)\colon t \geq 0\}$ . Therefore, the system's total damage at time t is given by the random sum $Y(t) = \sum_{i=1}^{N(t)} Y_{i}$ , where $Y(t) = 0$ when $N(t) = 0$ . Suppose the system fails at time t when the cumulative damage surpasses a random boundary B, i.e. $Y(t) > B$ (note that in this model the threshold value $\gamma$ is considered random). Therefore, in this case, $\bar{F}_{T}(t) = \mathbb{P}(Y(t) \leq B)$ . Let B follow an exponential distribution, and be independent of the shock process and of the $Y_{i}$ , $i \in \mathbb{N}$ . Then, due to the memoryless property of the exponential distribution, a shock that occurs at time t causes the system to fail with probability $\mathbb{P}(B < Y_{N(t)}) = \mathbb{P}(B < Y_{1})$ [4]. Thus, in this setting,

\begin{equation*} \mathbb{P}(N = n) = \mathbb{P}(B < Y_{1})\mathbb{P}(B \geq Y_{1})^{n-1}, \quad n \in \mathbb{N},\end{equation*}

so that $N \sim \mathrm{DPH}_{1}(1, \mathbb{P}(B \geq Y_{1}))$.

A cumulative shock model studied in [23] is also a special case of the defined class of shock models.

5.3. Run shock model

According to the run shock model, a system fails when the magnitudes of k successive shocks exceed a specific threshold $\gamma$ [24]. In this case, N has a discrete phase-type distribution, i.e. $N \sim \mathrm{DPH}_{k}(\pmb{\mu},\boldsymbol{M})$ , where $\pmb{\mu} = (1,0,\ldots,0)$ and

$$ \boldsymbol{M} =\left( \begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c}1 - p & p&0 & \cdots & 0 \\1 - p & 0 & p & \cdots & 0 \\\vdots& \vdots & \vdots & \ddots & \vdots \\1-p & 0 & 0 & \cdots & p\\1 - p&0 & 0 & \cdots & 0\end{array}\right)$$

(see [35]). Many generalizations of the run shock model have appeared in the literature. For example, [10] studied a shock model combining the classical run and extreme shock models, according to which a system fails if either k consecutive shocks occur of size at least $\gamma_{1}$ or a single shock occurs of size at least $\gamma_{2}$ ( $\gamma_{1} < \gamma_{2}$ ). Such a mixed shock model helps model the failure pattern of various devices or systems. For example, a single mechanical stress of small size may not cause the failure of a device, while a certain number of consecutive stresses of small sizes or a single large stress may cause the failure of the device [10]. Another generalization was studied in [11], which considered two critical levels $\gamma_1$ and $\gamma_2$ such that $\gamma_{1} < \gamma_{2}$ ; according to this model, the system fails if either at least $k_{1}$ consecutive shocks with magnitude above $\gamma_{1}$ or $k_{2}$ consecutive shocks with magnitude above $\gamma_{2}$ occur. This model can be helpful in healthcare management, where, for instance, prostate-specific antigen levels in the treatment of patients were divided into three ranges by two thresholds, with the type of situation determining the choice of therapy [11]. A generalized run shock model was studied in [37] by considering system failure due to r non-overlapping runs of k consecutive critical shocks, which helps describe the performance of r cold-standby identical systems. In all the aforementioned shock models, N follows a discrete phase-type distribution.
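A minimal helper for constructing the representation $(\pmb{\mu}, \boldsymbol{M})$ displayed above is given below (our addition; the function name and the values of k and p are illustrative).

```python
import numpy as np

def run_shock_dph(k, p):
    """DPH representation (mu, M) of N for the run shock model:
    failure after k consecutive shocks exceeding the threshold."""
    mu = np.zeros(k)
    mu[0] = 1.0
    M = np.zeros((k, k))
    M[:, 0] = 1.0 - p          # a non-critical shock resets the run
    for i in range(k - 1):
        M[i, i + 1] = p        # a critical shock extends the run
    return mu, M

mu, M = run_shock_dph(k=3, p=0.25)
print(M)
```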

5.4. Extreme shock model with a change point

In this model, studied in [9], the system fails according to the extreme shock model with threshold value $\gamma$ , but after a random number of shocks the magnitude distribution changes. A model like this is helpful in real-world applications, as a sudden change in environmental factors could result in larger shocks. Consider wind turbine blades that are subject to external shocks; a huge shock may cause a blade crack, which stops the turbine. In fact, the wind pattern or temperature fluctuates over a year, and in these environmental conditions a sudden change could happen. Statistically, the change point is random, and the probabilistic laws of the shock sizes might be significantly different before and after the change point. Thus, this shock model is able to capture such real-world scenarios. Let K represent the number of shocks until the change point, so that the magnitude distribution changes after the Kth shock. That is, $Y_{1}, \ldots, Y_{K}$ have a common distribution $F_{1}$ such that a shock at K or before is below the threshold $\gamma$ with probability $p_{1} = \mathbb{P}(Y_{i} < \gamma) = F_{1}(\gamma)$ , $i = 1,2,\ldots, K$ , and the shock sizes $Y_{K+1}, Y_{K+2}, \ldots$ after the Kth shock have another common distribution $F_{2}$ such that $p_{2} = \mathbb{P}(Y_{i} < \gamma) = F_{2}(\gamma)$ , $i = K+1, K+2, \ldots$

This configuration has two stages in terms of the shock sizes that the system experiences. In the first stage, the sizes of the shocks may be relatively small; in the second stage, the system can encounter more severe shocks due to a sudden change in environmental conditions [9]. In most cases, this change point is random due to uncertainty. In this case, we have $p_{1} > p_{2}$ . Assume that K follows a geometric distribution with probability mass function $\mathbb{P}(K = k) = q(1-q)^{k-1}$ , $k \in \mathbb{N}$ . In this case, it was shown in [9] that, if $p_{1}(1-q) \neq p_{2}$ , then

$$\mathbb{P}(N = n) = \varepsilon[p_{1}(1-q)]^{n-1}[1 - p_{1}(1-q)] + (1 - \varepsilon)p_{2}^{n-1}(1 - p_{2}),$$

where

$$\varepsilon = \frac{p_{1} - p_{2}}{p_{1}(1 - q) - p_{2}}, \qquad p_{1}(1 - q) \neq 1, \qquad p_{1}(1- q) \neq p_{2}.$$

Note that N is a mixture of two geometric distributions with means $1/(1 - p_{1}(1 - q))$ and $1/(1 - p_{2})$ , which means that N follows a discrete phase-type distribution. In this case, $N \sim \mathrm{DPH}_{2}(\pmb{\mu},\boldsymbol{M})$ , where $\pmb{\mu} = (\varepsilon,1-\varepsilon)$ and

$$\boldsymbol{M} = \left(\begin{array} {c@{\quad}c} p_{1}(1-q) & 0 \\ 0 & p_{2}\end{array}\right).$$

Further, if $p_{1}(1-q) = p_{2}$ , then $\mathbb{P}(N = n) = \varepsilon (n-1) p_{2}^{n-2} (1 - p_{2})^{2} + (1 - \varepsilon) p_{2}^{n-1} (1 - p_{2})$ , $n \geq 1$ , where $\varepsilon = ({p_{1} - p_{2}})/({1 - p_{2}})$ . In this case, N is a mixture of a geometric and a negative binomial distribution, which means that N follows a discrete phase-type distribution; specifically, $N \sim \mathrm{DPH}_{3}(\pmb{\mu},\boldsymbol{M})$ , where $\pmb{\mu} = (\varepsilon,0,1-\varepsilon)$ and

$$\boldsymbol{M} = \left(\begin{array} {c@{\quad}c@{\quad}c} p_{2} & 1 - p_{2} & 0 \\ 0 & p_{2} & 0 \\ 0 & 0 & p_{2}\end{array}\right).$$

5.5. Extreme shock model with finite number of shocks

We assume that the system can bear at most J shocks. In this model, the system fails according to the extreme shock model; if the system has not failed by the first J shocks, then it fails on the arrival of the $(J+1)$ th shock. Let $p_{i}$ be the probability that the system fails at the ith shock, $i = 1,2, \ldots, J$ , and let $p_{J+1} = 1 - \sum_{i=1}^{J} p_{i}$ . Clearly, $\mathbb{P}(N = i) = p_{i}$ for all $i = 1,2,\ldots,J+1$ . Since any distribution with finite support on $\mathbb{N}$ is a discrete phase-type distribution [20], N follows a discrete phase-type distribution. In this case, $N \sim \mathrm{DPH}_{J+1}(\pmb{\mu},\boldsymbol{M})$ such that $\pmb{\mu} = (p_{J+1},p_{J},\ldots,p_{1})$ and

$$\boldsymbol{M} = \left(\begin{array}{c@{\quad}c@{\quad}c@{\quad}c@{\quad}c} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \ddots &\ddots & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0\end{array}\right).$$

Shock models with a finite number of shocks were studied in [25, 34].

6. Numerical illustration

A shock might be seen as a harmful event (e.g. wind gusts for wind turbines, high temperature, etc.). According to the extreme shock model, the failure of a system occurs due to a shock that has a magnitude above a critical threshold. Consider a system which works under the extreme shock model, i.e. a shock causes the system to fail with probability p and does not affect the system with probability $1-p$ . Assume that a $\mathrm{MAP}(\pmb{\pi},\boldsymbol{D}_{0},\boldsymbol{D}_{1})$ has been found to be useful for modeling the inter-arrival times between successive shocks, with parameters

(11) \begin{equation} \pmb{\pi} = (1,0), \quad \boldsymbol{D}_{0} = \left(\begin{array}{c@{\quad}c} -\xi & \xi \\ 0 & -\xi \end{array}\right), \quad \pmb{D}_{1} = \left(\begin{array}{c@{\quad}c} 0 & 0 \\ 0 & \xi \end{array}\right), \qquad \xi > 0.\end{equation}

Thus, with the help of the results derived in Sections 3 and 4, we will be able to compute reliability measures and optimal replacement times for the system. For the extreme shock model, $N \sim \mathrm{DPH}_{1}(\pmb{\mu},\boldsymbol{M})$ where $\pmb{\mu} = 1$ and $\boldsymbol{M} = 1 - p$ . Then, from Theorem 1, $T \sim \mathrm{PH}_{2}(\pmb{\pi}\otimes\pmb{\mu},\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M})$ , where $\pmb{\pi}\otimes\pmb{\mu} = (1,0)$ and

$$\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M} =\left(\begin{array}{c@{\quad}c} -\xi & \xi \\ 0 & -\xi p \\\end{array}\right).$$

Thus, from (6)–(8), we can find expressions for the system's survival function, failure rate function, and mean residual lifetime function. In Figure 1, we plot these reliability indices. Figures 1(a), (c), and (e) show plots for different values of p with the parameter $\xi = 1$ fixed. Figures 1(b), (d), and (f) show plots for different values of $\xi$ ; in Figures 1(b) and (d) we fixed the parameter $p = 0.2$ , and in Figure 1(f) we fixed the parameter $p = 0.12$ . From Figure 1, we can observe that the system's reliability and mean residual life decrease if p or $\xi$ increases, while the system's failure rate increases if p or $\xi$ increases. Clearly, Figures 1(c) and (d) illustrate the result given in Theorem 3, and $\lim_{t \rightarrow \infty} r_{T}(t) = \xi p$ (the result used in Theorem 2).

Figure 1. Plots of the system’s reliability measures.

From (1), $\sigma = \mathbb{E}(T) = (p+1)/(\xi p)$ . As we can see, the matrix $\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M}$ has distinct eigenvalues for $0 \leq p < 1$ . Therefore this matrix is diagonalizable, i.e. there exists an invertible matrix $\pmb{Q}$ such that $\boldsymbol{D}_{0}\otimes\boldsymbol{I}+\boldsymbol{D}_{1}\otimes\boldsymbol{M} = \pmb{Q}\,\mathrm{diag}(\!-\!\xi,-\xi p)\pmb{Q}^{-1}$ , where

$$\pmb{Q} = \left(\begin{array}{c@{\quad}c} 1 & 1\\ 0 & 1-p\\\end{array}\right), \qquad \pmb{Q^{-1}} = \frac{1}{1-p}\left(\begin{array}{c@{\quad}c} 1-p & -1\\ 0 & 1 \\ \end{array}\right).$$

Note that $(\pmb{\pi}\otimes\pmb{\mu})\boldsymbol{C}_{1}\pmb{R}_{1}\boldsymbol{e} = -p/(1-p) < 0$ for $0 < p < 1$ . So, from Theorem 3, $r_{T}(\cdot)$ is strictly increasing for all $0 < p < 1$ . Consequently, from Theorem 2, there exists a finite and unique $x^{*}$ that minimizes the function C(x) in (9) if $\xi p > c_{2}/(\sigma (c_{2} - c_{1}))$ , or equivalently $p > c_{1}/(c_{2} - c_{1})$ , which means that an optimal $x^{*}$ exists for all $c_{1}/(c_{2} - c_{1}) < p < 1$ and $\xi > 0$ . If $0 < p \leq c_{1}/(c_{2} - c_{1})$ then $x^{*} = \infty$ , i.e. a unit should be replaced only at failure. Taking the parameters $c_{1} = 1$ , $c_{2} = 10$ , and $\xi = 1$ , we compute the optimal replacement time $x^{*}$ that minimizes (9). According to the derived conditions, an optimal and finite $x^{*}$ exists if $\frac19 < p < 1$ ; otherwise $x^{*} = \infty$ . To illustrate this result, we plot the long-run average cost against x in Figure 2. From Figures 2(a) and (b) we can see that a finite and unique $x^{*}$ exists, since $p = 0.2$ and $p = 0.12$ lie in the interval $\big(\frac19,1\big)$ . On the other hand, from Figures 2(c) and (d) we can see that $x^{*} = \infty$ , as $p = \frac19$ and $p = 0.1$ lie in the interval $\big(0,\frac19\big]$ .

Figure 2. Long-run average cost when $c_{1} = 1$ , $c_{2} = 10$ , and $\xi = 1$ .

Now, assume that the system is subject to shocks that occur based on $\mathrm{MAP}(\pmb{\tilde{\pi}},\pmb{\tilde{D}_{0}},\pmb{\tilde{D}_{1}})$ with parameters

(12) \begin{equation} \pmb{\tilde{\pi}} = (1,0), \quad \pmb{\tilde{D}_{0}} = \left(\begin{array}{c@{\quad}c} -\xi & \xi \\ 0 & -\xi \end{array}\right), \quad \pmb{\tilde{D}_{1}} = \left(\begin{array}{c@{\quad}c} 0 & 0 \\ \xi & 0 \end{array}\right), \qquad \xi > 0. \end{equation}

Note that the above process is indeed a phase-type renewal process with inter-arrival times having phase-type representation $(\pmb{\tilde{\pi}},\pmb{\tilde{D}_{0}})$ . The main difference between the Markovian arrival process with parameters given in (11) and the Markovian arrival process with parameters given in (12) is that the former has dependency between inter-arrival times, whereas the latter has independent inter-arrival times. However, both processes have the same phase-type generator for the inter-arrival times. Figure 3 illustrates the impact of the dependency between the inter-arrival times of the Markovian arrival process on the different reliability measures derived in Section 3. In this figure, we compare these reliability measures for the Markovian arrival process in (11) (the process with dependent inter-arrival times) and the phase-type renewal process in (12) (the process with independent inter-arrival times). For Figures 3(a) and (b) we consider the parameters $\xi = 1$ and $p = 0.1$ , while for Figure 3(c) we consider $\xi = 3$ and $p = 0.2$ . We see that the reliability measures for the dependent and independent inter-arrival times cases are significantly different.

Figure 3. Plots of the system’s reliability measures for phase-type renewal and Markovian arrival shock processes.

For the phase-type renewal process in (12), from Theorem 1, $T \sim \mathrm{PH}_{2}(\tilde{\pmb{\pi}}\otimes\pmb{\mu},\pmb{\tilde{D}_{0}}\otimes\boldsymbol{I}+\pmb{\tilde{D}_{1}}\otimes\boldsymbol{M})$ , where $\tilde{\pmb{\pi}}\otimes\pmb{\mu} = (1,0)$ and

$$\pmb{\tilde{D}_{0}}\otimes\boldsymbol{I}+\pmb{\tilde{D}_{1}}\otimes\boldsymbol{M} =\left(\begin{array}{c@{\quad}c} -\xi & \xi \\ \xi(1 - p) & -\xi \\\end{array}\right).$$

In this case, from (1), $\sigma = 2/(\xi p)$ and the eigenvalues of the matrix $\pmb{\tilde{D}_{0}}\otimes\boldsymbol{I}+\pmb{\tilde{D}_{1}}\otimes\boldsymbol{M}$ are $-\xi\pm\xi\sqrt{1-p}$ . Therefore, there exists an invertible matrix $\pmb{\tilde{Q}}$ such that $\pmb{\tilde{D}_{0}}\otimes\boldsymbol{I}+\pmb{\tilde{D}_{1}}\otimes\boldsymbol{M} = \pmb{\tilde{Q}}\,\mathrm{diag}(-\xi - \xi\sqrt{1-p},-\xi + \xi\sqrt{1-p})\,\pmb{\tilde{Q}}^{-1}$ , where

$$\pmb{\tilde{Q}} = \left(\begin{array}{c@{\quad}c} 1 & 1\\ -\sqrt{1-p} & \sqrt{1-p}\\\end{array}\right), \qquad \pmb{\tilde{Q}}^{-1} = \frac{1}{2\sqrt{1-p}}\left(\begin{array} {c@{\quad}c} \sqrt{1-p} & -1\\ \sqrt{1-p} & 1 \\\end{array}\right). $$

Clearly, $(\tilde{\pmb{\pi}}\otimes\pmb{\mu})\boldsymbol{C}_{1}\pmb{R}_{1}\boldsymbol{e} = (\sqrt{1-p} - 1)/(2\sqrt{1-p}) < 0$ for $0 < p < 1$ . So, from Theorem 3, $r_{T}(\cdot)$ is strictly increasing for all $0 < p < 1$ . Consequently, from Theorem 2, there exists a finite and unique $x^{*}$ that minimizes the function C(x) in (9) if $\xi - \xi\sqrt{1-p} > c_{2}/(\sigma (c_{2} - c_{1}))$ , or equivalently $p > 4 c_{1} (c_{2} - c_{1})/c_{2}^{2}$ , which means that an optimal $x^{*}$ exists for all $4c_{1}(c_{2} - c_{1})/c_{2}^{2} < p < 1$ and $\xi > 0$ . If $0 < p \leq 4c_{1}(c_{2} - c_{1})/c_{2}^{2}$ then $x^{*} = \infty$ . Let $c_{1} = 1$ and $c_{2} = 10$ . In this case, if $0.36 < p < 1$ then a finite and unique $x^{*}$ exists, and the corresponding per-unit mean cost is given by (9). Observe that dependency between the inter-arrival times can change decisions significantly. For example, for $p = 0.2$ , in the case of the Markovian arrival process in (11), a finite $x^{*}$ exists, i.e. if the system does not fail by time $x^{*}$ , then we should replace it with a new one at time $x^{*}$ . However, for the renewal process in (12), $x^{*} = \infty$ , i.e. we should replace the system only at failure. Even when an optimal $x^{*}$ exists in both cases, the values of $x^{*}$ and $C(x^{*})$ differ significantly. Therefore, taking the dependency between inter-arrival times into account can improve our decision-making.
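The comparison just described can be reproduced numerically. The sketch below is our addition; it computes the optimal replacement age and cost for both processes by a grid search (a simple substitute for solving the equation in Theorem 2), for one illustrative combination of p and $\xi$.

```python
import numpy as np
from scipy.linalg import expm

def cost_rate(alpha, S, x, c1, c2):
    """C(x) in (9) for T ~ PH(alpha, S)."""
    e = np.ones(S.shape[0])
    Sinv = np.linalg.inv(S)
    surv = alpha @ expm(S * x) @ e                                # P(T > x)
    e_min = alpha @ Sinv @ expm(S * x) @ e - alpha @ Sinv @ e     # E(min(T, x))
    return float((c2 + (c1 - c2) * surv) / e_min)

def optimal_age(S, c1, c2, grid):
    """Grid search for the replacement age minimizing C(x), with alpha = (1, 0)."""
    alpha = np.array([1.0, 0.0])
    costs = [cost_rate(alpha, S, x, c1, c2) for x in grid]
    i = int(np.argmin(costs))
    return float(grid[i]), costs[i]

# Illustrative case p = 0.5, xi = 1, c1 = 1, c2 = 10: a finite optimum exists for
# both processes, since 0.5 > 1/9 and 0.5 > 0.36 (the conditions derived above).
xi, p, c1, c2 = 1.0, 0.5, 1.0, 10.0
grid = np.linspace(0.05, 40.0, 2000)
S_dep = np.array([[-xi, xi], [0.0, -xi * p]])          # lifetime generator for the MAP in (11)
S_ind = np.array([[-xi, xi], [xi * (1.0 - p), -xi]])   # lifetime generator for the renewal process in (12)
print("MAP (11):     x1*, C1(x1*) =", optimal_age(S_dep, c1, c2, grid))
print("Renewal (12): x2*, C2(x2*) =", optimal_age(S_ind, c1, c2, grid))
```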

Let $x_{1}^{*}$ and $C_{1}(x_{1}^{*})$ (respectively $x_{2}^{*}$ and $C_{2}(x_{2}^{*})$ ) denote the optimal replacement time and the corresponding optimal mean cost for the Markovian arrival process with parameters given in (11) (respectively (12)). Table 1 reports these optimal values for different values of p and $\xi$ . The values of $x_{1}^{*}$ (dependent inter-arrival times) and $x_{2}^{*}$ (independent inter-arrival times) differ significantly, as do the corresponding optimal mean costs. Further, increasing p or $\xi$ decreases the optimal replacement time and increases the optimal mean cost, irrespective of the dependency between inter-arrival times.

Table 1. Optimal values of x and C(x) for different values of p and $\xi$ .
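Reusing ph_quantities and cost_rate from the sketch above, the short loop below checks the qualitative trend just described for the independent case only (the parameters of (11) are not restated here, so $x_{1}^{*}$ and $C_{1}(x_{1}^{*})$ are not computed): larger p or $\xi$ should shorten $x_{2}^{*}$ and raise $C_{2}(x_{2}^{*})$ .

```python
# Qualitative check of the Table 1 trend for the phase-type renewal case
# (independent inter-arrival times); ph_quantities and cost_rate are the
# helpers defined in the previous sketch.
from scipy.optimize import minimize_scalar

for xi in (1.0, 3.0):
    for p in (0.4, 0.6, 0.8):          # all above the 0.36 threshold
        Fbar, _, _ = ph_quantities(xi, p)
        opt = minimize_scalar(lambda x: cost_rate(x, Fbar),
                              bounds=(1e-3, 50.0), method="bounded")
        print(f"xi={xi}, p={p}: x2*={opt.x:.3f}, C2(x2*)={opt.fun:.3f}")
```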

7. Conclusions

We have defined and studied a broad class of shock models containing many popular ones: extreme shock models, cumulative shock models, run shock models, and the extreme shock model with a change point. We showed that the described class also includes the extreme shock model with a finite number of shocks. We examined the defined class by assuming that shocks occur according to a Markovian arrival process. These processes generalize phase-type renewal processes, have non-identical correlated inter-arrival times, and can approximate any random arrival process on the real line. We assumed that the number of shocks required for system failure, N, follows a discrete phase-type distribution. Taking the Markovian arrival process as the shock process and the discrete phase-type distribution as the distribution of N makes the defined model very flexible. We have shown that the system’s lifetime under the defined class of shock models follows a phase-type distribution.

This paper also studied the age replacement policy for continuous phase-type lifetime distributions. We established sufficient conditions for determining the optimal replacement time, aimed at minimizing the average cost rate. Given that phase-type distributions can effectively represent various lifetime distributions, the derived results apply to a wide range of situations.

Although the proposed class contains numerous well-known shock models, some (e.g. the censored $\delta$ -shock models) do not belong to it. Hence, future studies of these models in the context of the Markovian arrival process of shocks may be pertinent.

Acknowledgements

We wish to thank the Editor-in-Chief, the Associate Editor, and the anonymous reviewers for their valuable constructive comments and suggestions which led to an improved version of the manuscript.

Funding information

This work is supported by the Hong Kong Innovation and Technology Commission (InnoHK Project CIMDA). The first author received partial support for this work from the Indian Institute of Technology Kanpur through an IPDF grant (PDF423).

Competing interests

There were no competing interests to declare during the preparation or publication of this article.

References

Asmussen, S. (2000). Matrix-analytic models and their analysis. Scand. J. Statist. 27, 193–226.
Bladt, M. and Nielsen, B. (2017). Matrix-Exponential Distributions in Applied Probability. Springer, New York.
Bozbulut, A. R. and Eryilmaz, S. (2020). Generalized extreme shock models and their applications. Commun. Statist. Sim. Comput. 49, 110–120.
Cha, J. H. and Finkelstein, M. (2016). New shock models based on the generalized Pólya process. Europ. J. Operat. Res. 251, 135–141.
Chakravarthy, S. R. (2012). Maintenance of a deteriorating single server system with Markovian arrivals and random shocks. Europ. J. Operat. Res. 222, 508–522.
Dizbin, N. M. and Tan, B. (2020). Optimal control of production-inventory systems with correlated demand inter-arrival and processing times. Int. J. Prod. Econom. 228, 107692.
Eryilmaz, S. (2017). Computing optimal replacement time and mean residual life in reliability shock models. Comput. Ind. Eng. 103, 40–45.
Eryilmaz, S. (2021). Revisiting discrete time age replacement policy for phase-type lifetime distributions. Europ. J. Operat. Res. 295, 699–704.
Eryilmaz, S. and Kan, C. (2019). Reliability and optimal replacement policy for an extreme shock model with a change point. Reliability Eng. System Safety 190, 106513.
Eryilmaz, S. and Tekin, M. (2019). Reliability evaluation of a system under a mixed shock model. J. Comput. Appl. Math. 352, 255–261.
Gong, M., Xie, M. and Yang, Y. (2018). Reliability assessment of system under a generalized run shock model. J. Appl. Prob. 55, 1249–1260.
Goyal, D., Ali, R. and Hazra, N. K. (2024). Reliability analysis of $\delta$ -shock models based on the Markovian arrival process. Appl. Stochast. Models Business Industry 40, 1291–1312. DOI: 10.1002/asmb.2858.
Goyal, D., Hazra, N. K. and Finkelstein, M. (2022). On the general $\delta$ -shock model. TEST 31, 994–1029.
Goyal, D., Hazra, N. K. and Finkelstein, M. (2023). A general class of shock models with dependent inter-arrival times. TEST 32, 1079–1105.
Goyal, D., Hazra, N. K. and Finkelstein, M. (2024). Shock models based on renewal processes with matrix Mittag–Leffler distributed inter-arrival times. J. Comput. Appl. Math. 435, 115090.
Gut, A. and Hüsler, J. (1999). Extreme shock models. Extremes 2, 295–307.
He, Q. M. (2014). Fundamentals of Matrix-Analytic Methods. Springer, London.
Johnson, C. R., Smith, R. L. and Tsatsomeros, M. J. (2020). Matrix Positivity. Cambridge University Press.
Landriault, D. and Shi, T. (2015). Occupation times in the MAP risk model. Insurance Math. Econom. 60, 75–82.
Latouche, G. and Ramaswami, V. (1999). Introduction to Matrix Analytic Methods in Stochastic Modeling. Society for Industrial and Applied Mathematics, Philadelphia, PA.
Latouche, G., Remiche, M. A. and Taylor, P. (2003). Transient Markov arrival processes. Ann. Appl. Prob. 13, 628–640.
Levitin, G., Finkelstein, M. and Dai, Y. (2018). Heterogeneous standby systems with shocks-driven preventive replacements. Europ. J. Operat. Res. 266, 1189–1197.
Lyu, H., Qu, H., Ma, L., Wang, S. and Yang, Z. (2022). Reliability assessment of a system with multi-shock sources subject to dependent competing failure processes under phase-type distribution. Quality Reliability Eng. Int. 38, 2820–2844.
Mallor, F. and Omey, E. (2001). Shocks, runs and random sums. J. Appl. Prob. 38, 438–448.
Montoro-Cazorla, D. and Pérez-Ocón, R. (2011). Two shock and wear systems under repair standing a finite number of shocks. Europ. J. Operat. Res. 214, 298–307.
Montoro-Cazorla, D. and Pérez-Ocón, R. (2014). A reliability system under different types of shock governed by a Markovian arrival process and maintenance policy K. Europ. J. Operat. Res. 235, 636–642.
Montoro-Cazorla, D. and Pérez-Ocón, R. (2015). A reliability system under cumulative shocks governed by a BMAP. Appl. Math. Modelling 39, 7620–7629.
Nakagawa, T. (2005). Maintenance Theory of Reliability. Springer, London.
Neuts, M. F. (1979). A versatile Markovian point process. J. Appl. Prob. 16, 764–779.
Ozkut, M. (2023). Reliability and optimal replacement policy for a generalized mixed shock model. TEST 32, 1038–1054. DOI: 10.1007/s11749-023-00864-z.
Pérez-Ocón, R. and del Carmen Segovia, M. (2009). Shock models under a Markovian arrival process. Math. Comput. Modelling 50, 879–884.
Ramírez-Cobo, P., Marzo, X., Olivares-Nadal, A., Alvarez Francoso, J. and Carrizosa, E. (2014). The Markovian arrival process: A statistical model for daily precipitation amounts. J. Hydrology 510, 459–471.
Safaei, F., Châtelet, E. and Ahmadi, J. (2020). Optimal age replacement policy for parallel and series systems with dependent components. Reliability Eng. System Safety 197, 106798.
Segovia, M. C. and Labeau, P. E. (2013). Reliability of a multi-state system subject to shocks using phase-type distributions. Appl. Math. Modelling 37, 4883–4904.
Tank, F. and Eryilmaz, S. (2015). The distributions of sum, minima and maxima of generalized geometric random variables. Statist. Papers 56, 1191–1203.
Wang, X., Zhao, X., Wu, C. and Wang, S. (2022). Mixed shock model for multi-state weighted k-out-of-n: F systems with degraded resistance against shocks. Reliability Eng. System Safety 217, 108098.
Yalcin, F., Eryilmaz, S. and Bozbulut, A. R. (2018). A generalized class of correlated run shock models. Dependence Modeling 6, 131–138.
Zhao, X., Wang, S., Wang, X. and Fan, Y. (2020). Multi-state balanced systems in a shock environment. Reliability Eng. System Safety 193, 106592.