
Bounds for the hazard rate and the reversed hazard rate of the convolution of dependent random lifetimes

Published online by Cambridge University Press:  11 December 2019

Félix Belzunce*
Affiliation:
University of Murcia
Carolina Martínez-Riquelme*
Affiliation:
University of Murcia
*Postal address: Departamento Estadística e Investigación Operativa, Universidad de Murcia, Facultad de Matemáticas, Campus de Espinardo, 30100 Espinardo (Murcia), Spain.

Abstract

An upper bound for the hazard rate function of a convolution of not necessarily independent random lifetimes is provided, which generalizes a recent result established for independent random lifetimes. Similar results are considered for the reversed hazard rate function. Applications to parametric and semiparametric models are also given.

Type
Research Papers
Copyright
© Applied Probability Trust 2019 

1. Introduction

One of the most important functions in the context of reliability and survival analysis is the hazard rate function. Given a non-negative random variable X that represents the random lifetime of a system or a living organism with density function f, distribution function F, and survival function $\skew2\overline F\equiv 1 -F$ , the hazard rate function of X is defined by

$$r_X(t)=\lim_{h\rightarrow 0^+}\frac{\mathrm{P}(t<X<t+h \mid X>t)}{h}=\frac{f(t)}{\skew2\overline F(t)}$$

for all t such that $\skew2\overline F(t)>0$ . The hazard rate is probably the main function used to describe the ageing process of a unit or a system (see Lai and Xie (2006) for further details and references), and it is usually interpreted as the instantaneous failure rate of an item that has survived up to time t. It is quite common to observe increasing failure rates, and in such a case the random variable is said to be IFR (increasing failure rate). In addition, there is a wide literature on the hazard rate function of the convolution of two random variables. Recall that convolution is the name for the mathematical operation of the sum of random variables. Convolution arises in reliability when we consider a two-component standby system where a failed unit is replaced by a new one, which is not necessarily distributed identically to the former one. Another context where convolution appears in a natural way is insurance. The individual risk model corresponds to the situation where a portfolio consists of a fixed number of different insurance policies and the total claim of the portfolio is the sum (convolution) of the random claims of each policy.
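The defining ratio $r_X(t)=f(t)/\skew2\overline F(t)$ is straightforward to evaluate numerically. The following Python sketch (not part of the original development; the truncation point and grid size are arbitrary choices) computes the hazard rate of a $G(2,1)$ lifetime, i.e. the convolution of two independent unit exponentials, and checks that it agrees with the closed form $t/(1+t)$ and is increasing (IFR):

```python
import math

def hazard(pdf, t, upper=60.0, n=20000):
    """Hazard rate r(t) = f(t) / survival(t); the survival function is
    approximated by midpoint quadrature of the density over (t, upper)."""
    dt = (upper - t) / n
    sf = sum(pdf(t + (i + 0.5) * dt) for i in range(n)) * dt
    return pdf(t) / sf

# Gamma(2, 1) density: the convolution of two independent Exp(1) lifetimes.
gamma2_pdf = lambda x: x * math.exp(-x)

# Closed form for this case: r(t) = t / (1 + t), which is increasing (IFR).
for t in (0.5, 1.0, 2.0):
    assert abs(hazard(gamma2_pdf, t) - t / (1 + t)) < 1e-3
assert hazard(gamma2_pdf, 0.5) < hazard(gamma2_pdf, 1.0) < hazard(gamma2_pdf, 2.0)
```

The same quadrature scheme works for any density whose tail beyond the truncation point is negligible.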

Many authors have provided different reliability properties of the hazard rate function of a convolution. In particular, a well-known result is that the hazard rate function of a convolution of independent random lifetimes with increasing hazard rate functions is also increasing (see Barlow et al. (1963), p. 380). Recently, two more problems dealing with the hazard rate function of a convolution have been considered. The first one is the limiting behaviour of the hazard rate function of a convolution (see Block et al. (2014), (2015)), and the second one is the domination of the hazard rate function of a convolution by the hazard rate function of one of its components with an increasing hazard rate function. Specifically, Block and Savits (2015) showed that the hazard rate of a convolution of two independent components lies below the hazard rate function of any component having an increasing hazard rate, if such a component exists.

The purpose of this paper is to generalize the above-mentioned result to the case of dependent components. This result is given in Section 2 along with some applications for parametric and semiparametric models of bivariate random vectors.

Additionally, we will consider this result for the reversed hazard rate function. Recall that, given a non-negative random variable X with density function f and distribution function F, the reversed hazard rate function of X is defined by

$$\skew1\overline r_X(t)=\lim_{h\rightarrow 0^+}\frac{\mathrm{P}(t-h< X\le t \mid X\le t)}{h}=\frac{f(t)}{F(t)},$$

for all t such that $F(t)>0$ . It is said that X is DRHR (decreasing reversed hazard rate) if the reversed hazard rate function is decreasing. See Block et al. (1998), Finkelstein (2002), and Chechile (2011) for further details and properties. In Section 3, similar results for the reversed hazard rate functions are established. Finally, some additional comments and considerations are made in Section 4.
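The reversed hazard rate is just as easy to evaluate numerically. A minimal sketch (my own illustration; the unit exponential is an arbitrary choice of lifetime) computes $\skew1\overline r_X(t)=f(t)/F(t)$ for an Exp(1) variable, checks it against the closed form $1/({\rm e}^t-1)$, and verifies that it decreases in t, i.e. that Exp(1) is DRHR:

```python
import math

def reversed_hazard(pdf, cdf, t):
    """Reversed hazard rate: density over distribution function."""
    return pdf(t) / cdf(t)

# Exp(1) lifetime: f(t) = e^{-t}, F(t) = 1 - e^{-t}, so the reversed
# hazard rate is 1 / (e^t - 1).
exp_pdf = lambda t: math.exp(-t)
exp_cdf = lambda t: 1.0 - math.exp(-t)

for t in (0.5, 1.0, 2.0):
    assert abs(reversed_hazard(exp_pdf, exp_cdf, t) - 1 / (math.exp(t) - 1)) < 1e-12
# The reversed hazard rate decreases in t: Exp(1) is DRHR.
assert reversed_hazard(exp_pdf, exp_cdf, 0.5) > reversed_hazard(exp_pdf, exp_cdf, 2.0)
```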

2. Main results and examples

As mentioned in the introduction, Block and Savits (2015) proved that, for two independent random lifetimes X and Y, whenever X is IFR then

$$r_{X+Y}(t) \le r_X(t) \quad \text{for all }t\ge 0,$$

where $r_{X+Y}$ denotes the hazard rate of $X+Y$ .

Unfortunately, there are many situations where the components are dependent, which means that the previous theorem cannot be applied. Therefore, a natural question arises in this context. Is the thesis of the previous result still valid for dependent random lifetimes? The following example shows that the answer to this question is not always positive.

Example 2.1. Let us consider a bivariate random vector (X,Y) with Fairlie–Gumbel–Morgenstern copula given by

$$C(u,v)= uv[1 + \theta (1-u)(1-v)],\quad \text{for all }0\le u,v \le 1,$$

where $-1 \le \theta \le 1$ is a dependence parameter such that the dependence is positive for $0 < \theta \le 1$ , negative for $-1 \le \theta <0$ , and the components are independent for $\theta =0$ . The marginal distributions are gamma, denoted by $G(r, \sigma)$ , with density function given by

$$f(x)=\Big(\frac{x}{\sigma}\Big)^{r-1} \frac{{\rm e}^{-\frac{x}{\sigma}}}{\sigma\Gamma(r)},\quad \text{for all }x\ge 0,$$

where r is the shape parameter and $\sigma$ is the scale parameter. Let us consider $X\sim G(1.5, 1)$ , $Y\sim G(1.25, 1)$ , and $\theta =0.5$ . In such a case, the hazard rate function of the convolution $X+Y$ crosses the hazard rate functions of both components (see Figure 1). Analogously, taking $\theta =-0.5$ (in other words, assuming negative association), the conclusion remains the same (see again Figure 1).

Figure 1: Plot of the hazard rate function of $X+Y$ (continuous line), X (dashed and dotted line), and Y (dashed line).
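The computation behind Figure 1 can be sketched numerically. In the code below (the grid step, truncation point, and quadrature sizes are my own choices), the joint density of Example 2.1 is assembled as the FGM copula density $c(u,v)=1+\theta(1-2u)(1-2v)$ applied to the gamma marginal distribution functions, times the marginal densities; the density h of the convolution is then $h(t)=\int_0^t f(t-y,y)\,{\rm d}y$, and as a sanity check h integrates to approximately one:

```python
import math

THETA = 0.5
R1, R2 = 1.5, 1.25           # gamma shape parameters from Example 2.1
DX, N = 0.01, 3000           # grid on [0, 30] for the marginal CDFs

def gpdf(x, r):
    """Gamma density with shape r and scale 1."""
    return x ** (r - 1) * math.exp(-x) / math.gamma(r) if x > 0 else 0.0

# Cumulative-trapezoid gamma CDFs tabulated on a shared grid.
F1, F2 = [0.0], [0.0]
for i in range(1, N + 1):
    x0, x1 = (i - 1) * DX, i * DX
    F1.append(F1[-1] + 0.5 * (gpdf(x0, R1) + gpdf(x1, R1)) * DX)
    F2.append(F2[-1] + 0.5 * (gpdf(x0, R2) + gpdf(x1, R2)) * DX)

def copula_density(u, v):
    """FGM copula density: c(u, v) = 1 + theta * (1 - 2u)(1 - 2v)."""
    return 1 + THETA * (1 - 2 * u) * (1 - 2 * v)

def joint(x, y):
    """Joint density f(x, y) = c(F1(x), F2(y)) * f1(x) * f2(y)."""
    i = min(int(round(x / DX)), N)
    j = min(int(round(y / DX)), N)
    return copula_density(F1[i], F2[j]) * gpdf(x, R1) * gpdf(y, R2)

def conv_density(t, n=200):
    """h(t) = int_0^t f(t - y, y) dy by midpoint quadrature."""
    dy = t / n
    return sum(joint(t - (k + 0.5) * dy, (k + 0.5) * dy) for k in range(n)) * dy

# Sanity check: h is a genuine density, so its total mass is ~1.
mass = sum(conv_density((k + 0.5) * 0.05) for k in range(600)) * 0.05
assert abs(mass - 1.0) < 0.02
```

Computing $r_{X+Y}(t)=h(t)/(1-\int_0^t h(s)\,{\rm d}s)$ on a grid and plotting it against the marginal hazard rates reproduces the crossings shown in Figure 1.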

Therefore, in order to obtain a bound for the hazard rate of a convolution of dependent random lifetimes, we need to consider a different approach. Next, we provide a set of conditions which generalizes the result given for the case of independent components. Let us fix the following notation prior to stating the result. Given a bivariate random vector (X,Y), we denote the hazard rate function of the conditional random variable ${(X \mid Y=y)}$ , where y belongs to the support of Y, by $r_X(t \mid Y=y)$ .

Theorem 2.1. Let (X, Y) be a non-negative bivariate random vector with joint density function f. If

(2.1) \begin{equation} r_X(t \mid Y=y) \text{ is increasing in }y \text{ for all }t\geq 0 \end{equation}

and

(2.2) \begin{equation}r_X(t \mid Y=y) \text{ is increasing in }t \text{ for all }y\geq 0,\label{eqn2}\end{equation}

then $r_{X+Y}(t)\leq r_X(t \mid Y=t),\text{ for all }t\geq 0$ .

Proof. Let us denote by h and $\skew2\overline H $ respectively the density and the survival function of the convolution $X+Y$ . Then

$$h(t)= \int^{t}_{0} f(t-y,y)\,{\rm d} y,\quad \text{for all } t\geq 0$$

and

\begin{align} \skew2\overline H(t) & = \int_{0}^{\infty}\int_{t-y}^{\infty} f(x,y) \,{\rm d}\kern0.5ptx\,{\rm d} y \\ & = \int_{0}^{t}\int_{t-y}^{\infty} f(x,y) \,{\rm d}\kern0.5ptx\,{\rm d} y + \int_{t}^{\infty}\int_{0}^{\infty} f(x,y) \,{\rm d}\kern0.5ptx\,{\rm d} y \\ & = \int_{0}^{t}\int_{t-y}^{\infty} f(x,y) \,{\rm d}\kern0.5ptx\,{\rm d} y + \skew3\overline G(t) , \end{align}

for all $t\geq 0$ , where $\skew3\overline G$ is the marginal survival function of Y. Consequently, the condition

$$r_{X+Y}(t)\leq r_X(t \mid Y=t),\quad \text{for all }t\geq 0$$

is equivalent to

$$h(t)-r_X(t \mid Y=t)\skew2\overline H(t)\leq0,\quad \text{for all }t\geq 0.$$

Let us prove the previous inequality. Under the assumptions, the following chain of equalities and inequalities holds:

\begin{align}h(t)-r_X(t \mid Y=t)\skew2\overline H(t) & = \int_{0}^{t}f(t-y,y)\,{\rm d} y - r_X(t \mid Y=t)\Big[\int_{0}^{t}\int_{t-y}^{\infty} f(x,y) \,{\rm d}\kern0.5ptx\,{\rm d} y + \skew3\overline G(t)\Big] \\ & = \int_{0}^{t}\Big[f(t-y,y) - r_X(t \mid Y=t)\int_{t-y}^{\infty} f(x,y) \,{\rm d}\kern0.5ptx\Big]\,{\rm d} y - r_X(t \mid Y=t)\skew3\overline G(t) \\ & = \int_{0}^{t}\Big[r_X(t-y \mid Y=y)\int_{t-y}^{\infty}f(x,y)\,{\rm d}\kern0.5ptx - r_X(t \mid Y=t)\int_{t-y}^{\infty} f(x,y) \,{\rm d}\kern0.5ptx\Big]\,{\rm d} y - r_X(t \mid Y=t)\skew3\overline G(t) \\ & = \int_{0}^{t}[r_X(t-y \mid Y=y) - r_X(t \mid Y=t)]\int_{t-y}^{\infty}f(x,y)\,{\rm d}\kern0.5ptx\,{\rm d} y - r_X(t \mid Y=t)\skew3\overline G(t) \\ & \leq - r_X(t \mid Y=t)\skew3\overline G(t) \\ & \le 0 \end{align}

for all $t\geq 0$ , where the first inequality follows by taking into account that conditions (2.1) and (2.2) imply that

$$r_X(t-y \mid Y=y) \leq r_X(t-y \mid Y=t) \leq r_X(t \mid Y=t), \quad \text{for all } t,y\geq 0.$$

Therefore, we conclude that $r_{X+Y}(t)\leq r_X(t \mid Y=t)$ for all $t\geq 0$ .

Let us make several remarks on the previous result.

Recently, Navarro and Sordo (2018) have characterized Condition (2.1) in terms of the survival copula. In particular, denoting by $\widehat C$ the survival copula of a bivariate random vector (X,Y), Navarro and Sordo (2018) proved that (2.1) is satisfied if, and only if,

$$\frac{\partial_1 \widehat C(u,v_2)}{\partial_1 \widehat C(u,v_1)}\text{ is increasing in }u, \text{ for all } 0\le v_1\le v_2\le 1.$$

Remark 2.1. Condition (2.1) has already been considered as a negative dependence property. In particular, Shaked (1977) and Lee (1985) defined the DRR(0,1) notion (dependence by reversed regular rule) for a bivariate random vector (X,Y) by means of Condition (2.1). Analogously, if the roles of X and Y are exchanged in Condition (2.1), (X,Y) is said to be DRR(1,0). Furthermore, given (X,Y) with an RR2 (reversed regular of order 2; see Karlin (1968)) joint density function, then (X,Y) is DRR(0,1) and DRR(1,0).

Remark 2.2. As far as the case of independent components is concerned, we want to point out that Condition (2.1) is trivially satisfied and Condition (2.2) is equivalent to the IFR property of the random variable X; in such cases the previous theorem is reduced to the one given by Block and Savits (2015). This means that Theorem 2.1 generalizes the result given by Block and Savits (2015) to the case of not necessarily independent components.

Remark 2.3. Let us assume that (X,Y) is DRR(1,0) and DRR(0,1), and let us denote by $r_Y(t \mid X=x)$ the hazard rate function of $(Y \mid X=x)$ , for any x in the support of X. If $r_X(t \mid Y=y)$ satisfies Condition (2.2) and $r_Y(t \mid X=x)$ is increasing in t, for all x in the support of X, then, by applying Theorem 2.1, we obtain

$$r_{X+Y}(t) \le \min\{r_X(t \mid Y=t),r_Y(t \mid X=t)\},\quad \text{for all }t > 0.$$

Next, we apply the previous theorem to several examples of bivariate random vectors. First, we consider Gumbel’s bivariate exponential, Model I (see Kotz et al. (2000), p. 350).

Example 2.2. (Gumbel’s bivariate exponential (Model I).) Let (X,Y) be a bivariate random vector with joint density function given by

$$f(x,y)= \exp{(-x-y-\theta xy)}\{(1+\theta x)(1+\theta y)-\theta\},\quad \text{for }x,y>0,$$

where $0\le \theta \le 1$ .

It is easy to see that the hazard rate function of $(X \mid Y=y)$ is given by

$$r_X(t \mid Y=y)=\frac{(1+\theta y )(1+ \theta t) - \theta}{1+\theta t},\quad \text{for }t\ge 0.$$

Analogously, ${(Y \mid X=x)}$ has a hazard rate function given by

$$r_Y(t \mid X=x)=\frac{(1+\theta x )(1+ \theta t) - \theta}{1+\theta t},\quad \text{for }t\ge 0.$$

On the one hand, it is obvious that $r_X(t \mid Y=y)$ and $r_Y(t \mid X=x)$ are increasing in y and x, respectively, for all $t\geq0$ . On the other hand, it is not difficult to see that $r_X(t \mid Y=y)$ and $r_Y(t \mid X=x)$ are increasing in $t\ge 0$ for all $y\ge 0$ and $x\ge 0$ , respectively. Therefore, the sufficient conditions in Theorem 2.1 are satisfied and, consequently, it is ensured that

$$r_{X+Y}(t) \leq \frac{(1+\theta t )^2- \theta}{1+\theta t},\quad \text{for all }t\geq 0.$$

Figure 2 shows the particular case for $\theta=0.2$ .

Figure 2: Left: The hazard rate of the convolution $X+Y$ (continuous line) and the bound $((1+\theta t )^2- \theta)/{(1+\theta t)}$ (dashed line). Right: The joint density function of (X,Y).
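The bound of Example 2.2 can be verified numerically. In this sketch (the quadrature sizes are arbitrary choices of mine), the tail integral $\int_{t-y}^{\infty} f(x,y)\,{\rm d}x$ is evaluated in closed form as $(1+\theta(t-y))\,{\rm e}^{-(t-y)-y-\theta(t-y)y}$, which follows by differentiating the model's survival function $\exp(-x-y-\theta xy)$ with respect to y, and the marginal of Y is a unit exponential:

```python
import math

THETA = 0.2

def f(x, y):
    """Joint density of Gumbel's bivariate exponential, Model I."""
    return math.exp(-x - y - THETA * x * y) * (
        (1 + THETA * x) * (1 + THETA * y) - THETA)

def tail_x(a, y):
    """int_a^inf f(x, y) dx = (1 + theta*a) * exp(-a - y - theta*a*y),
    i.e. minus the y-derivative of the survival function at (a, y)."""
    return (1 + THETA * a) * math.exp(-a - y - THETA * a * y)

def conv_hazard(t, n=4000):
    """Hazard rate of X + Y via the decomposition used in the proof of
    Theorem 2.1: survival = int_0^t tail_x(t-y, y) dy + P(Y > t)."""
    dy = t / n
    ys = [(i + 0.5) * dy for i in range(n)]
    h = sum(f(t - y, y) for y in ys) * dy                 # density of X + Y
    surv = sum(tail_x(t - y, y) for y in ys) * dy + math.exp(-t)
    return h / surv

def bound(t):
    """Theorem 2.1 bound: r_X(t | Y=t) = ((1+theta t)^2 - theta)/(1+theta t)."""
    return ((1 + THETA * t) ** 2 - THETA) / (1 + THETA * t)

for t in (0.5, 1.0, 2.0, 5.0):
    assert conv_hazard(t) <= bound(t)
```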

Next, we consider a parametric family whose conditional distributions are gamma distributed (see Arnold et al. (1999)). Let us apply Theorem 2.1 to this model.

Example 2.3. (Gamma conditionals (Model II).) Let (X,Y) be a bivariate random vector with joint density function given by

$$f(x,y)= \frac{k_{r,s}(\theta)}{\sigma_1^r \sigma_2^s \Gamma(r) \Gamma(s)} x^{r-1} y^{s-1}\exp\Big\{-\frac{x}{\sigma_1}-\frac{y}{\sigma_2}-\frac{\theta xy}{\sigma_1 \sigma_2}\Big\} , \quad \text{for }x,y>0,$$

where $\sigma_1, \sigma_2, r, s>0$ , $\theta\ge0$ , and $k_{r,s}(\theta)$ is a normalizing constant. Observe that $\sigma_1$ and $\sigma_2$ are scale parameters, r and s are shape parameters, and $\theta$ is a dependence parameter.

In this case it is known that $(X \mid Y=y)$ follows a gamma distribution with shape parameter r and rate parameter ${(1+\theta y/{\sigma_2})/{\sigma_1}}$ . Analogously, ${(Y \mid X=x)}$ follows a gamma distribution with shape parameter s and rate parameter ${(1+\theta x/\sigma_1)/\sigma_2}$ .

It is not difficult to see that $r_X(t \mid Y=y)$ and $r_Y(t \mid X=x)$ are increasing in y and x, respectively, for all $t \geq 0$ (see Table 1 in Belzunce et al. (2016)). In addition, it is well known that gamma-distributed random variables have increasing failure rates if the shape parameter is greater than or equal to 1. Therefore, $r_X(t \mid Y=y)$ [ $r_Y(t \mid X=x)$ ] is increasing in $t\ge 0$ if $r \geq 1$ [ $s\geq 1$ ]. To summarize, if $r\ge 1$ [ $s\ge 1$ ], then

$$r_{X+Y}(t) \leq r_X(t \mid Y=t)\ [r_Y(t \mid X=t)],\quad \text{for all }t\geq 0,$$

by applying Theorem 2.1.

Figure 3 shows the particular case for $\theta=0.5$ , $r=1.2$ , $s=1.5$ , $\sigma_1=4$ , and $ \sigma_2=3$ .

Figure 3: Left: The hazard rate of the convolution $X+Y$ (continuous line), the function $r_X(t \mid Y=t)$ (dashed line), and the function $r_Y(t \mid X=t)$ (dashed and dotted line). Right: The joint density function of (X,Y).
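The monotonicity conditions used in Example 2.3 can be probed numerically for gamma distributions. In the sketch below, the shape value 1.2 and the rate values standing in for $(1+\theta y/\sigma_2)/\sigma_1$ at increasing y are illustrative choices of mine; the gamma hazard rate, computed by quadrature, increases in the rate parameter, as Condition (2.1) requires, and in t when the shape is at least 1, as Condition (2.2) requires:

```python
import math

def gamma_hazard(t, shape, rate, upper=80.0, n=20000):
    """Hazard rate of a gamma(shape, rate) lifetime, with the survival
    function obtained by midpoint quadrature of the density over (t, upper)."""
    pdf = lambda x: (rate ** shape * x ** (shape - 1)
                     * math.exp(-rate * x) / math.gamma(shape))
    dt = (upper - t) / n
    sf = sum(pdf(t + (i + 0.5) * dt) for i in range(n)) * dt
    return pdf(t) / sf

# Condition (2.1)-type monotonicity: the conditional rate grows with the
# conditioning value, and the gamma hazard grows with its rate parameter
# (scale-family argument: r_rate(t) = rate * r_1(rate * t)).
assert gamma_hazard(1.0, 1.2, 0.5) < gamma_hazard(1.0, 1.2, 1.0) < gamma_hazard(1.0, 1.2, 2.0)

# Condition (2.2): shape >= 1 gives an IFR gamma, so the hazard grows in t.
assert gamma_hazard(0.5, 1.2, 1.0) < gamma_hazard(1.0, 1.2, 1.0) < gamma_hazard(2.0, 1.2, 1.0)
```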

Next, let us apply the result to two semiparametric models. The first one was introduced by Navarro and Sarabia (2013).

Example 2.4. (Bivariate conditional proportional hazard rate model.) Let (X,Y) be a bivariate random vector with joint density function given by

\begin{multline} f(x,y)= k(\phi)\sigma_1 \sigma_2 \lambda_1(x) \lambda_2(y) \\ \times \exp\{-\sigma_1 \Lambda_1(x) -\sigma_2 \Lambda_2(y) -\phi \sigma_1 \sigma_2 \Lambda_1(x)\Lambda_2(y)\}, \quad \text{for }x,y>0, \end{multline}

where $\sigma_1,\sigma_2>0$ are scale parameters, $\phi\ge 0 $ is a dependence parameter, and $k(\phi)$ is a normalizing constant. The functions $\Lambda_i$ , $i=1,2$ , are cumulative hazard rate functions for non-negative random variables with hazard rates $\lambda_i(x)=\Lambda_i'(x)$ , $i=1,2$ , respectively. It is known that the hazard rate function of ${(X \mid Y=y)}$ is given by

$$r_X(t \mid Y=y)=\sigma_1 [1 + \phi \sigma_2 \Lambda_2(y)]\lambda_1(t).$$

Analogously, the hazard rate function of ${(Y \mid X=x)}$ is given by

$$r_Y(t \mid X=x)=\sigma_2 [1 + \phi \sigma_1 \Lambda_1(x)]\lambda_2(t).$$

Since $\Lambda_1$ and $\Lambda_2$ are cumulative hazard functions, it is obvious that $r_X(t \mid Y=y)$ and $r_Y(t \mid X=x)$ are increasing in y and x, respectively, for all $t\geq0$ . Moreover, if the hazard rates $\lambda_1$ and/or $\lambda_2$ are increasing (IFR), then $r_X(t \mid Y=t)$ and/or $r_Y(t \mid X=t)$ are also increasing in $t\ge 0$ . To sum up, if $\lambda_1$ [ $\lambda_2$ ] is increasing, then

$$r_{X+Y}(t) \leq \sigma_1 (1 + \phi \sigma_2 \Lambda_2(t))\lambda_1(t)\ [\sigma_2 (1 + \phi \sigma_1 \Lambda_1(t))\lambda_2(t)], \quad \text{for all }t\geq 0.$$

Next, let us consider the semiparametric family given by Navarro et al. (2015).

Example 2.5. (Bivariate conditional proportional generalized odds rate model.) Let (X,Y) be a bivariate random vector with joint density function given by

$$f(x,y)= \frac{K\sigma_1 \sigma_2 \lambda_1(x) \lambda_2(y)}{ [\sigma_0 + \theta \sigma_1 \Lambda_1(x) + \theta \sigma_2 \Lambda_2(y) + \theta \phi \sigma_1 \sigma_2 \Lambda_1(x) \Lambda_2(y) ]^{1+\frac{1}{\theta}}} , \quad \text{for }x,y>0,$$

where $\sigma_1,\sigma_2,\theta,K>0$ and $\sigma_0,\phi\ge 0 $ . The functions $\Lambda_i$ , $i=1,2$ , are univariate generalized odds functions such that $\lambda_i(x)=\Lambda_i'(x)$ , $i=1,2$ .

Let us define the functions

\begin{equation} \theta_1(y)=\frac{\sigma_0 + \theta \sigma_2 \Lambda_2(y)}{\sigma_1 + \phi \sigma_1 \sigma_2 \Lambda_2(y)} ,\qquad \theta_2(x)=\frac{\sigma_0 + \theta \sigma_1 \Lambda_1(x)}{\sigma_2 + \phi \sigma_1 \sigma_2 \Lambda_1(x)}. \end{equation}

It is known that the hazard rate function of ${(X \mid Y=y)}$ is given by

$$r_X(t \mid Y=y)=\frac{\lambda_1(t)}{\theta_1(y) + \theta \Lambda_1(t)}$$

and, analogously, the hazard rate function of $(Y \mid X=x)$ is given by

$$r_Y(t \mid X=x)=\frac{\lambda_2(t)}{\theta_2(x) + \theta \Lambda_2(t)}.$$

It is easy to see that if $\theta< [>]\,\phi\sigma_0$ then $r_X(t \mid Y=y)$ and $r_Y(t \mid X=x)$ increase [decrease] in y and x, respectively. Furthermore, if $\theta_1(y) + \theta \Lambda_1(t)$ is logconvex [logconcave] in t then $r_X(t \mid Y=y)$ increases [decreases] in t, and analogously for $r_Y(t \mid X=x)$ . To sum up, if $\theta<\phi\sigma_0$ and $\theta_1(y) + \theta \Lambda_1(t)$ [ $\theta_2(x) + \theta \Lambda_2(t)$ ] is logconvex in t, then

$$r_{X+Y}(t) \leq \frac{\lambda_1(t)}{\theta_1(t) + \theta \Lambda_1(t)}\text{ }\Big[ \frac{\lambda_2(t)}{\theta_2(t) + \theta \Lambda_2(t)}\Big] , \quad \text{for all }t\geq 0.$$

3. Results for the reversed hazard rate function

In this section similar results to the previous ones given in Section 2 are provided for the reversed hazard rate function. First, we provide a lower bound for the reversed hazard rate function of a convolution of not necessarily independent components. Let us fix some notation prior to stating the result. Given a bivariate random vector (X,Y), we denote by $\skew1\overline r_X(t \mid Y=y)$ the reversed hazard rate function of $(X \mid Y=y)$ , where y is a value in the support of Y.

Theorem 3.1. Let (X, Y) be a non-negative bivariate random vector with joint density function f. If

(3.1) \begin{equation} \skew1\overline r_X(t \mid Y=y) \text{ is decreasing in }y\text{ for all }t\geq 0 \end{equation}

and

(3.2) \begin{equation} \skew1\overline r_X(t \mid Y=y) \text{ is decreasing in }t \text{ for all }y\geq 0, \end{equation}

then $\skew1\overline r_{X+Y}(t)\geq \skew1\overline r_X(t \mid Y=t) \text{ for all }t\geq 0$ .

Proof. Following the notation in the proof of Theorem 2.1, let h and H denote respectively the density and the distribution function of the convolution $X+Y$ . We will equivalently show that $h(t)-\skew1\overline r_X(t \mid Y=t) H(t)\geq0$ , where

$$H(t)= \int_{0}^{t}\int_0^{t-y}f(x,y) \,{\rm d}\kern0.5ptx\,{\rm d} y,$$

for all $t\geq 0$ .

Since conditions (3.1) and (3.2) imply that

$$\skew1\overline r_X(t-y \mid Y=y) \geq \skew1\overline r_X(t \mid Y=y) \geq \skew1\overline r_X(t \mid Y=t) \quad \text{for all } t\ge y\geq 0,$$

we have that

$$h(t)-\skew1\overline r_X(t \mid Y=t) H(t)= \int_{0}^{t}\Big[f(t-y,y) - \skew1\overline r_X(t \mid Y=t)\int_{0}^{t-y} f(x,y) \,{\rm d}\kern0.5ptx\Big]\,{\rm d} y \ge 0,$$

for all $t\geq 0$ . Therefore, we conclude that $h(t)-\skew1\overline r_X(t \mid Y=t) H(t)\geq0$ or, equivalently, $\skew1\overline r_{X+Y}(t)\geq \skew1\overline r_X(t \mid Y=t)$ , for all $t\geq 0$ .

Despite the fact that Condition (3.1) can be considered as a negative dependence property, as far as we know this condition has not previously been studied from this point of view. Recently, however, Navarro and Sordo (2018) characterized this property in terms of the copula. In particular, they proved that, given a bivariate random vector (X,Y) with copula C, Condition (3.1) is satisfied if, and only if,

$$\frac{\partial_1 C(u,v_2)}{\partial_1 C(u,v_1)}\text{ is decreasing in }u \text{ for all }0<v_1 \le v_2 < 1.$$

We also want to observe that, for independent components, Condition (3.1) is always satisfied and Condition (3.2) is equivalent to the DRHR property of the random variable X. Therefore, we can state the following corollary.

Corollary 3.1. Let X and Y be two independent non-negative random variables such that X is DRHR with reversed hazard rate function denoted by $\skew1\overline r$ . Then

$$\skew1\overline r_{X+Y}(t)\geq \skew1\overline r_X(t) \quad \text{for all }t\geq 0.$$

Next, we apply Theorem 3.1 to a parametric family such that the conditional distributions are exponentially distributed (see Arnold and Strauss (1988) and Arnold et al. (1999), p. 80).

Example 3.1. Let (X,Y) be a bivariate random vector with joint density function given by

$$f(x,y)= \frac{k(\theta)}{\sigma_1 \sigma_2} \exp\Bigg\{-\frac{x}{\sigma_1}-\frac{y}{\sigma_2}-\frac{\theta xy}{\sigma_1 \sigma_2}\Bigg\} \quad \text{for }x,y>0,$$

where $\sigma_1,\sigma_2>0$ , $\theta\ge0$ , and

$$k(\theta)=\frac{1}{\int_0^{+\infty} {\rm e}^{-u} (1+ \theta u)^{-1}\,{\rm d} u}.$$

Observe that $\sigma_1$ and $\sigma_2$ are scale parameters and $\theta$ is a dependence parameter. It is known that ${(X \mid Y=y)}$ follows an exponential distribution with parameter ${(1+\theta y/\sigma_2)/\sigma_1}$ ; therefore, the reversed hazard rate of ${(X \mid Y=y)}$ is given by

$$\skew1\overline r_X(t \mid Y=y)=\frac{\frac{1}{\sigma_1}\Big(1+\frac{\theta y}{\sigma_2}\Big)}{\exp\Big(\frac{t}{\sigma_1}\Big(1+\frac{\theta y}{\sigma_2}\Big)\Big) - 1}$$

and, analogously, the reversed hazard rate of ${(Y \mid X=x)}$ is given by

$$\skew1\overline r_Y(t \mid X=x)=\frac{\frac{1}{\sigma_2}\Big(1+\frac{\theta x}{\sigma_1}\Big)}{\exp\Big(\frac{t}{\sigma_2}\Big(1+\frac{\theta x}{\sigma_1}\Big)\Big) - 1},$$

for all $t>0$ .

It is not difficult to see that $\skew1\overline r_X(t \mid Y=y)$ and $\skew1\overline r_Y(t \mid X=x)$ are decreasing in y and x, respectively, for all $t\geq0$ , and that $\skew1\overline r_X(t \mid Y=y)$ and $\skew1\overline r_Y(t \mid X=x)$ are decreasing in $t> 0$ , for all $y\ge 0$ and $x\ge 0$ , respectively. Therefore, the sufficient conditions in Theorem 3.1 are satisfied and, consequently, it is ensured that

$$\skew1\overline r_{X+Y}(t) \geq \min\left\{\frac{\frac{1}{\sigma_1}\Big(1+\frac{\theta t}{\sigma_2}\Big)}{\exp\Big(\frac{t}{\sigma_1}\Big(1+\frac{\theta t}{\sigma_2}\Big)\Big) - 1}, \frac{\frac{1}{\sigma_2}\Big(1+\frac{\theta t}{\sigma_1}\Big)}{\exp\Big(\frac{t}{\sigma_2}\Big(1+\frac{\theta t}{\sigma_1}\Big)\Big) - 1}\right\} \quad \text{for all }t\geq 0.$$
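The lower bound of Example 3.1 can be checked numerically. Since the normalizing constant $k(\theta)$ cancels in the ratio $h/H$, the sketch below works with the unnormalized density; the inner integral over x is available in closed form because the density is exponential in x, and the parameter values $\sigma_1=1$, $\sigma_2=2$, $\theta=0.5$ are my own illustrative choices:

```python
import math

S1, S2, THETA = 1.0, 2.0, 0.5   # illustrative parameter choices

def rate_x(y):
    """Rate of the exponential conditional (X | Y = y)."""
    return (1 + THETA * y / S2) / S1

def rate_y(x):
    """Rate of the exponential conditional (Y | X = x)."""
    return (1 + THETA * x / S1) / S2

def conv_reversed_hazard(t, n=4000):
    """Reversed hazard rate of X + Y; k(theta) cancels in h/H, so the
    unnormalized density f0(x, y) = exp(-y/S2 - x*rate_x(y))/(S1*S2) is used."""
    dy = t / n
    ys = [(i + 0.5) * dy for i in range(n)]
    h = sum(math.exp(-y / S2 - (t - y) * rate_x(y)) for y in ys) * dy / (S1 * S2)
    # closed-form inner integral: int_0^{t-y} exp(-x * rate_x(y)) dx
    H = sum(math.exp(-y / S2) * (1 - math.exp(-(t - y) * rate_x(y))) / rate_x(y)
            for y in ys) * dy / (S1 * S2)
    return h / H

def lower_bound(t):
    """Minimum of the two conditional reversed hazard rates at (t, t)."""
    rx, ry = rate_x(t), rate_y(t)
    return min(rx / (math.exp(t * rx) - 1), ry / (math.exp(t * ry) - 1))

for t in (0.5, 1.0, 2.0):
    assert conv_reversed_hazard(t) >= lower_bound(t)
```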

4. Discussion and remarks

In this paper, an upper bound for the hazard rate of a convolution of not necessarily independent random lifetimes is provided. This result extends a recent result by Block and Savits (2015), where the hazard rate of a convolution is bounded above by the hazard rate of an IFR component in the case of independent components. Whereas the result of Block and Savits (2015) provides an upper bound in terms of the hazard rate function of one of the marginals, this is not possible in the case of dependent random lifetimes. In particular, negative dependence among the components has to be assumed, as well as monotonicity of the hazard rate function of the conditional distribution ${(X \mid Y=y)}$ [or ${(Y \mid X=x)}$ ]. Moreover, a similar result for the reversed hazard rate function is also provided, where a lower bound for the reversed hazard rate function of the convolution $X+Y$ is given. Applications of these results to several parametric and semiparametric families of bivariate distribution functions are also given.

Let us make some remarks on the results:

(i) We have considered here the case of non-negative random variables. The main reason is the applicability of these results in contexts like reliability, survival, and insurance, where the random quantities of interest are, obviously, non-negative. However, with the appropriate modifications the results can be extended to random variables with support not restricted to non-negative values.

(ii) We have only considered the convolution of two random variables. The result can be extended, using the previous techniques, to the case of more than two random variables. The idea is as follows:

Let us consider n non-negative random variables, not necessarily independent, $X_1,X_2,\ldots,X_n$ , and suppose there exists an index $i\in \{1,2,\ldots,n\}$ such that

$$r_{X_i}(t \mid Z=y) \text{ is increasing in }y \text{ for all }t\geq 0$$

and

$$r_{X_i}(t \mid Z=y) \text{ is increasing in }t \text{ for all }y\geq 0,$$

where $Z=\sum_{j\neq i} X_j$ ; then, by Theorem 2.1, we get that $r_{\sum_{j=1}^n X_j}(t)\leq r_{X_i}(t\mid Z=t)$ for all $t\geq 0$ .

However, we need to know the distribution of ${(X_i \mid Z=x)}$ , which is a non-trivial task. Therefore, we leave as an open question whether there are some easy-to-check sufficient conditions in the general case.

(iii) This work can be considered as a starting point for the study of some other properties of the hazard and reversed hazard rate functions of the convolution of not necessarily independent random lifetimes, such as the monotonicity or the limiting behaviour.

Acknowledgements

The authors are grateful for the comments of the two anonymous referees and the associate editor, which have improved the presentation and the contents of this paper. The authors also acknowledge the support received from the Ministerio de Economía, Industria y Competitividad under grant MTM2016-79943-P (AEI/FEDER, UE).

References

Arnold, B. C. and Strauss, D. (1988). Bivariate distributions with exponential conditionals. J. Amer. Statist. Assoc. 83, 522–527.
Arnold, B. C., Castillo, E. and Sarabia, J. M. (1999). Conditional Specification of Statistical Models. Springer Series in Statistics. Springer, New York.
Barlow, R. E., Marshall, A. W. and Proschan, F. (1963). Properties of probability distributions with monotone hazard rate. Ann. Math. Statist. 34, 375–389.
Belzunce, F., Martínez-Riquelme, C. and Mulero, J. (2016). An Introduction to Stochastic Orders. Elsevier/Academic Press, Amsterdam.
Block, H., Langberg, N. and Savits, T. (2014). The limiting failure rate for a convolution of gamma distributions. Statist. Prob. Lett. 94, 176–180.
Block, H., Langberg, N. and Savits, T. (2015). The limiting failure rate for a convolution of life distributions. J. Appl. Prob. 52, 894–898.
Block, H. and Savits, T. (2015). The failure rate of a convolution dominates the failure rate of any IFR component. Statist. Prob. Lett. 107, 142–144.
Block, H., Savits, T. and Singh, H. (1998). The reversed hazard rate function. Prob. Eng. Inf. Sci. 12, 69–90.
Chechile, R. A. (2011). Properties of reverse hazard functions. J. Math. Psych. 55, 203–222.
Finkelstein, M. S. (2002). On the reversed hazard rate. Reliab. Eng. Syst. Safe. 48, 71–75.
Karlin, S. (1968). Total Positivity. Stanford University Press, Palo Alto, CA.
Kotz, S., Balakrishnan, N. and Johnson, N. L. (2000). Continuous Multivariate Distributions, Vol. 1, 2nd edn. Wiley-Interscience, New York.
Lai, C. D. and Xie, M. (2006). Stochastic Ageing and Dependence for Reliability. Springer, New York.
Lee, M. L. T. (1985). Dependence by reverse regular rule. Ann. Prob. 13, 583–591.
Navarro, J., Esna-Ashari, M., Asadi, M. and Sarabia, J. M. (2015). Bivariate distributions with conditionals satisfying the proportional generalized odds rate model. Metrika 78, 691–709.
Navarro, J. and Sarabia, J. M. (2013). Reliability properties of bivariate conditional proportional hazard rate models. J. Multivariate Anal. 113, 116–127.
Navarro, J. and Sordo, M. A. (2018). Stochastic comparisons and bounds for conditional distributions by using copula properties. Dependence Modelling 6, 156–177.
Shaked, M. (1977). A family of concepts of dependence for bivariate distributions. J. Amer. Statist. Assoc. 72, 642–650.