1. Introduction
Let $X_1,\ldots,X_n$ be n random variables. The largest-order statistic, denoted by
\begin{equation*} X_{n:n} = \max\{X_1,\ldots,X_n\}, \end{equation*}
plays an important role in reliability, statistics, finance, and many other applied areas; see [Reference Balakrishnan and Rao2], [Reference Balakrishnan and Rao3], and [Reference David and Nagaraja14]. For example, in reliability, the lifetime of a parallel system is the same as the largest-order statistic. It is also used to measure the systemic risk in finance. There is a large literature on stochastic comparisons for the largest-order statistic. Pledger and Proschan [Reference Pledger and Proschan32] were among the first to study the problem of stochastically comparing the largest-order statistics of two sets of n exponential random variables, where the random variables in one set are independent and heterogeneous and the random variables in the other set are independent and identically distributed (i.i.d.). Subsequently, many papers further investigated similar stochastic comparison problems and generalized them to other distributions, such as (generalized) gamma, Weibull, etc.; see [Reference Balakrishnan and Zhao4], [Reference Bon and PĂltĂnea8], [Reference Haidari and Najafabadi18], [Reference Kochar21], [Reference Mao and Hu23], [Reference Misra and Misra26], [Reference Zhao and Balakrishnan35], [Reference Zhao and Balakrishnan36], and [Reference Zhao, Zhang and Qiao37], and the references therein.
In this paper we investigate the problem of stochastic comparison for the proportional reversed hazard rate (PRHR) model [Reference Gupta and Gupta17], which is also called the exponentiated random variable; see [Reference Al-Hussaini and Ahsanullah1], [Reference Mudholkar and Srivastava27], and [Reference Mudholkar, Srivastava and Freimer28]. Recall that the PRHR model, which can be viewed as a dual model of the well-known proportional hazards model, consists in describing random failure times by a family of distribution functions
\begin{equation*} F_{\theta}(x) = (F(x))^{\theta}, \quad \theta>0, \end{equation*}
where F is a baseline distribution function [Reference Di Crescenzo15, Reference Gupta and Gupta17]. The PRHR model is useful for estimating the survival function for left censored data [Reference Kalbfleisch and Lawless19], and has wide applications in reliability theory and statistics [Reference Navarro31, Reference Wang, Yu and Coolen34].
For ease of reference, we first recall some definitions of stochastic orders. Let X and Y be two random variables with distribution functions F and G, respectively.
(1) X is said to be smaller than Y in the usual stochastic order if $F(x)\ge G(x)$ for $x\in \mathbb{R}$ . We denote this by $X\prec_{\mathrm{st}} Y$ .
(2) X is said to be smaller than Y in the reversed hazard rate order if $G(x)/F(x)$ is increasing in $x\in {\mathrm{supp}}(X)$ . We denote this by $X\prec_{\mathrm{rh}} Y$ .
(3) If, moreover, X and Y have probability density functions f and g respectively, X is said to be smaller than Y in the likelihood ratio order if $g(x)/f(x)$ is increasing in $x\in {\mathrm{supp}}(X)\cup {\mathrm{supp}}(Y)$ . We denote this by $X\prec_{\mathrm{lr}} Y$ .
It is well known that the relation between the three stochastic orders is
\begin{equation*} X\prec_{\mathrm{lr}} Y \Longrightarrow X\prec_{\mathrm{rh}} Y \Longrightarrow X\prec_{\mathrm{st}} Y. \end{equation*}
We refer to [Reference Shaked and Shanthikumar33] for more properties of stochastic orders. We recall that the reversed hazard rate function of a continuous random variable X is defined as
\begin{equation*} \tilde{r}_X(t) = \frac{f(t)}{F(t)}, \quad t\in{\mathrm{supp}}(X), \end{equation*}
where F and f are the distribution function and density function of X, respectively, and here and throughout the paper, ${\mathrm{supp}}(X)$ represents the support of the distribution function of a random variable X. For two continuous random variables X and Y, $X\prec_{\mathrm{rh}} Y$ if and only if $\tilde{r}_X(t)\le \tilde{r}_Y(t)$ for all $t\in\mathbb{R}$ .
Remark 1. Let X and Y be two random variables with respective reversed hazard rate functions $\tilde{r}_X(t) $ and $\tilde{r}_Y(t)$ . It can be easily verified that if $ \tilde{r}_{X}(t) \le \tilde{r}_{Y}(t)$ and $ \tilde{r}_{Y}(t)/ {\tilde{r}_{X}(t)}$ is increasing in t, then $X\prec_{\mathrm{lr}}Y$ . This is due to the fact that
\begin{equation*} \frac{g(t)}{f(t)} = \frac{\tilde{r}_{Y}(t)}{\tilde{r}_{X}(t)}\cdot\frac{G(t)}{F(t)}, \end{equation*}
and $\tilde{r}_{X}(t) \le \tilde{r}_{Y}(t)$ if and only if $G(t)/F(t)$ is increasing in t.
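As an illustration of Remark 1, the following minimal numerical sketch (ours, not from the original text) checks the two hypotheses and the likelihood ratio conclusion for the exponential pair $X\sim{\mathrm{Exp}}(1)$ and $Y=X/\mu$ with the assumed value $\mu=0.5$; all three printed checks should return True.

```python
# A quick numerical check of Remark 1 (our own sketch): for X ~ Exp(1) and
# Y = X/mu with mu = 0.5, the reversed hazard rates satisfy r_X <= r_Y and
# r_Y/r_X is increasing, so the density ratio g/f should be increasing,
# i.e. X <=_lr Y.
import numpy as np

mu = 0.5
t = np.linspace(0.01, 10, 2000)

def rev_hazard(t, rate):
    # reversed hazard rate of Exp(rate): rate*exp(-rate*t)/(1 - exp(-rate*t))
    return rate * np.exp(-rate * t) / (1.0 - np.exp(-rate * t))

r_X = rev_hazard(t, 1.0)   # X ~ Exp(1)
r_Y = rev_hazard(t, mu)    # Y = X/mu ~ Exp(mu)

dens_ratio = (mu * np.exp(-mu * t)) / np.exp(-t)  # g(t)/f(t)

print("r_X <= r_Y everywhere:", bool(np.all(r_X <= r_Y)))
print("r_Y/r_X increasing:   ", bool(np.all(np.diff(r_Y / r_X) >= 0)))
print("g/f increasing (lr):  ", bool(np.all(np.diff(dens_ratio) >= 0)))
```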
Let $X_1,\ldots,X_n$ be n i.i.d. random variables with $X_i\sim F$ and let $Y_1,\ldots,Y_n$ be n independent random variables such that $Y_i=_{\mathrm{st}} X_i/\mu_i$ , $i=1,\ldots,n$ , where $\mu_i>0$ , $i=1,\ldots,n$ , are constants. Let $X_{n:n}$ and $Y_{n:n}$ denote the largest-order statistics of $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ , respectively. Mao and Hu [Reference Mao and Hu23] showed that if F is an exponential distribution, then
(1) \begin{equation} X_{n:n}\prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow \sum_{i=1}^n\mu_i\le n \end{equation}
and
(2) \begin{equation} X_{n:n}\succ_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow \mu_{1:n}\ge 1, \end{equation}
where $\mu_{1:n}=\min\{\mu_1,\ldots,\mu_n\}$.
Zhao and Balakrishnan [Reference Zhao and Balakrishnan36] and Zhao, Zhang, and Qiao [Reference Zhao, Zhang and Qiao37] proved
\begin{equation*} \sum_{i=1}^n\mu_i\le n \Longrightarrow X_{n:n}\prec_{\mathrm{lr}} Y_{n:n} \end{equation*}
if F is a gamma distribution or a Weibull distribution, respectively. It should be pointed out that for gamma distributions, $\sum_{i=1}^n\mu_i\le n $ is also a necessary condition for $X_{n:n}\prec_{\mathrm{lr}}Y_{n:n}$ . However, it is not the case for Weibull distributions (see Section 3.1). Recently, Haidari and Najafabadi [Reference Haidari and Najafabadi18] generalized the above result to exponentiated generalized gamma distributions, which will be stated in Proposition 3 in Section 3.1.
In this paper we first introduce two properties (defined later in Section 2) of stochastic comparison of the largest-order statistics of two sets for a (baseline) distribution function. The two properties are similar to and slightly stronger than (1) and (2), respectively, and are satisfied by the exponential and gamma distributions (with shape parameter not less than 1). We show that the two properties can be inherited by the PRHR model. This is one of the main results of our paper. The second contribution of the paper is to apply the main result to two popular PRHR models: the exponentiated generalized gamma (EGG) distribution and the exponentiated Pareto distribution. We give the necessary and sufficient conditions for the stochastic comparisons of the largest-order statistics for the EGG distribution and the exponentiated Pareto distribution. Our results on the EGG distribution recover the recent result established by Haidari and Najafabadi [Reference Haidari and Najafabadi18] and strengthen the result for the Weibull distribution given by Zhao, Zhang, and Qiao [Reference Zhao, Zhang and Qiao37]. To the best of our knowledge, the result on the (exponentiated) Pareto distribution is novel.
The paper is organized as follows. In Section 2 we first introduce two properties in terms of reversed hazard rate functions for a distribution F and then we show that the two properties could be inherited by a PRHR model. In Section 3.1 we first apply the main result to EGG distributions to give a necessary and sufficient condition for the stochastic comparison of largest-order statistics with respect to reversed hazard rate order and likelihood ratio order for EGG distributions. We also derive a corollary which is an equivalent characterization for Weibull distributions. In Section 3.2 we apply the main result to exponentiated Pareto distributions and obtain a (necessary and) sufficient condition for the stochastic comparison of largest-order statistics with respect to the reversed hazard rate order and likelihood ratio order for the Pareto and exponentiated Pareto distributions.
2. Main result
Let F be a distribution function and define $G(x)= (F(x))^\theta=\!:\, F_{\theta}(x)$ , $x\in\mathbb{R}$ , where $\theta>0$ is a constant. It is obvious that $\{F_{\theta},\,\theta>0\}$ is a class of distribution functions with $F_{1}=F$ ; $\{F_\theta,\theta>0\}$ is called an exponentiated distribution (the corresponding random variable is called an exponentiated random variable; see [Reference Mudholkar and Srivastava27] and [Reference Mudholkar, Srivastava and Freimer28]) or a PRHR model [Reference Gupta and Gupta17]. Specifically, $F_\theta$ is called a PRHR model with proportionality constant $\theta$ $(>0)$ .
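The key closure property behind the constructions below is that a maximum of independent PRHR variables with proportionality constants $\theta_1,\ldots,\theta_n$ is again a PRHR variable with constant $\theta_1+\cdots+\theta_n$. The following Monte Carlo sketch (ours, taking the baseline $F={\mathrm{Exp}}(1)$ purely as an assumed illustration) confirms this numerically.

```python
# Monte Carlo sketch (ours) of the closure of the PRHR class under maxima:
# if Y_i has cdf F(x)^{theta_i}, i = 1,...,n, and the Y_i are independent,
# then max(Y_1,...,Y_n) has cdf F(x)^{theta_1+...+theta_n}.
import numpy as np

rng = np.random.default_rng(0)
thetas = [0.5, 1.5, 2.0]
N = 200_000

# inverse-cdf sampling: if U ~ U(0,1) then F^{-1}(U^{1/theta}) has cdf F^theta
samples = [-np.log(1.0 - rng.random(N) ** (1.0 / th)) for th in thetas]
M = np.maximum.reduce(samples)

x = 2.0
print(np.mean(M <= x))                    # empirical P(max <= x)
print((1.0 - np.exp(-x)) ** sum(thetas))  # F(x)^{sum thetas} = 0.5589...
```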
We introduce the following properties for a distribution F. We then show that each of the two properties can be inherited by a PRHR model. To state them, for any $n\in{\mathbb{N}}$ and $\mu_i\in\mathbb{R}_+$ , $i=1,\ldots,n$ , let $X_1,\ldots,X_n$ be n i.i.d. random variables such that $X_i\sim F$ and denote $Y_i=X_i/\mu_i,\, i=1,\ldots,n$ . We also let $X_{n:n}$ and $Y_{n:n}$ denote the largest-order statistics of $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ , respectively.
Property 1. For any $n\in\mathbb{N}$ and $\mu_i\in\mathbb{R}_+$ , $i=1,\ldots,n$ ,
(4) \begin{equation} \overline{\mu}\le 1 \Longleftrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n} \Longrightarrow X_{n:n}\prec_{\mathrm{lr}} Y_{n:n}, \end{equation}
where $\overline{\mu}= {\sum_{i=1}^n\mu_i}/n .$
Property 2. For any $n\in\mathbb{N}$ and $\mu_i\in\mathbb{R}_+$ , $i=1,\ldots,n$ ,
(5) \begin{equation} \mu_{1:n}\ge 1 \Longleftrightarrow X_{n:n}\succ_{\mathrm{rh}} Y_{n:n} \Longrightarrow X_{n:n}\succ_{\mathrm{lr}} Y_{n:n}, \end{equation}
where $\mu_{1:n}=\min\{\mu_1,\ldots,\mu_n\}$ .
Let $\tilde{r}$ be the reversed hazard rate function of F. Then the reversed hazard rate functions of the order statistics $X_{n:n}$ and $Y_{n:n}$ , respectively, can be written as
\begin{equation*} \tilde{r}_{X_{n:n}}(t) = n\,\tilde{r}(t) \quad\text{and}\quad \tilde{r}_{Y_{n:n}}(t) = \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it). \end{equation*}
Then it is easy to obtain the following equivalent characterization of Properties 1 and 2.
Proposition 1. Let F be a distribution function. Then:
(i) F satisfies Property 1 if and only if, for any $n\in\mathbb{N}$ and $\mu_1,\ldots,\mu_n\in\mathbb{R}_+$ ,
\begin{equation*} \overline{\mu}\le 1 \Longleftrightarrow n \tilde{r}(t) \le \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it) {\Longrightarrow} \sum_{i=1}^n \dfrac{ \mu_i\,\tilde{r}(\mu_it)}{ \tilde{r}(t)}\ \textit{is\ increasing\ in\ } t\in\mathbb{R};\end{equation*}(ii) F satisfies Property 2 if and only if both $t \tilde{r}(t)$ and ${\tilde{r}(\mu t)}/{\tilde{r}(t)}$ are decreasing in $t\in\mathbb{R}$ for $\mu\ge 1$ , and $n \tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it)$ implies $\mu_{1:n}\ge 1.$
Proof. We only give the proof of (ii), as the proof of (i) is trivial from the definition of the reversed hazard rate function. To show necessity, note that Property 2 is equivalent to the following: for $\mu_1,\ldots,\mu_n\in\mathbb{R}_+$ ,
(6) \begin{equation} \mu_{1:n}\ge 1 \Longleftrightarrow n\,\tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it) \Longrightarrow \sum_{i=1}^n \dfrac{\mu_i\,\tilde{r}(\mu_it)}{\tilde{r}(t)}\ \textit{is decreasing in}\ t\in\mathbb{R}. \end{equation}
Taking $n=1$ , (6) then reduces to
This implies that $t \tilde{r}(t)$ and ${\tilde{r}(\mu t)}/{\tilde{r}(t)}$ are decreasing in $t\in\mathbb{R}$ . Also noting that the equivalence relation of (6) implies that $n \tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it)$ $\Rightarrow$ $\mu_{1:n}\ge 1$ , we complete the proof of necessity.
To show sufficiency, note that the monotonicity of $t \tilde{r}(t)$ and ${\tilde{r}(\mu t)}/{\tilde{r}(t)}$ for $\mu\ge 1$ implies that
\begin{equation*} \mu_{1:n}\ge 1 \Longrightarrow n\,\tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it) \quad\text{and}\quad \sum_{i=1}^n \dfrac{\mu_i\,\tilde{r}(\mu_it)}{\tilde{r}(t)}\ \text{is decreasing in } t. \end{equation*}
Combining with the implication that $n \tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it)$ implies $\mu_{1:n}\ge 1$ , we have
\begin{equation*} \mu_{1:n}\ge 1 \Longleftrightarrow n\,\tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it). \end{equation*}
Thus (6) is satisfied. That is, Property 2 is satisfied. Hence we complete the proof.
Remark 2. (i) It is shown that the gamma distribution function with shape parameter not less than one and the exponential distribution function satisfy Property 1; see [Reference Zhao and Balakrishnan36] and [Reference Mao, Hu and Zhao24]. Meanwhile, it can be easily shown that the gamma distribution, and hence the exponential distribution, satisfies Property 2; see [Reference Mao, Hu and Zhao24].
(ii) It is easy to show that if a distribution function F satisfies Property 1, then by Remark 1 we can immediately get
\begin{equation*} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \prec_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \overline{\mu}\le 1.\end{equation*}Similarly, if F satisfies Property 2, then\begin{equation*} X_{n:n} \succ_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow 1 \le \mu_{1:n}.\end{equation*}
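The reversed-hazard-rate characterization in Proposition 1(i) is easy to probe numerically. The sketch below (our own illustration, using the exponential baseline mentioned in Remark 2) checks, for one vector of $\mu_i$ with $\overline{\mu}\le 1$ and one with $\overline{\mu}>1$, whether $n\tilde{r}(t)\le\sum_{i=1}^n\mu_i\tilde{r}(\mu_it)$ holds on a grid and whether the corresponding ratio is increasing.

```python
# Numerical probe (our own sketch) of Proposition 1(i) for F = Exp(1):
# when mean(mu) <= 1 we expect n*r(t) <= sum_i mu_i*r(mu_i*t) for all t and
# the ratio sum_i mu_i*r(mu_i*t)/r(t) to be increasing; when mean(mu) > 1
# the reversed hazard rate inequality should fail (here it fails near t = 0).
import numpy as np

def r(t):
    # reversed hazard rate of Exp(1)
    return np.exp(-t) / (1.0 - np.exp(-t))

t = np.linspace(0.01, 20, 4000)

for mus in ([0.5, 0.8, 1.2, 1.1], [1.5, 0.9, 1.2, 1.0]):  # means 0.90, 1.15
    mus = np.array(mus)
    n = len(mus)
    lhs, rhs = n * r(t), sum(m * r(m * t) for m in mus)
    print(f"mean(mu) = {mus.mean():.2f} | rh inequality holds:",
          bool(np.all(lhs <= rhs)),
          "| ratio increasing:", bool(np.all(np.diff(rhs / r(t)) >= -1e-12)))
```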
To show our main result, we first give a useful lemma.
Lemma 1. Let $\theta_i>0$ and $\mu_i>0$ , $i=1,\ldots,n$ , with ${\sum_{i=1}^n\theta_i}={\sum_{i=1}^n\theta_i\mu_i}$ . Then for each $\delta>0$ there exist $\theta_i^*>0$ , $i=1,\ldots,n$ , such that $0<|\theta_i-\theta_i^*|\le \delta$ , $i=1,\ldots,n$ , and ${\sum_{i=1}^n\theta_i^*}\ge {\sum_{i=1}^n\theta_i^*\mu_i}$ .
Proof. Without loss of generality, assume that $\mu_1\le\cdots\le\mu_n$ and that the $\mu_i$ are not all equal, as the result is trivial when $\mu_1=\dots=\mu_n=1$ (and equality of all the $\mu_i$ forces them to equal 1 under the constraint). From ${\sum_{i=1}^n\theta_i}={\sum_{i=1}^n\theta_i\mu_i}$ we have ${\sum_{i=1}^n\theta_i(\mu_i-1)}=0$ , which implies that there exists $i_0\in\{1,\ldots,n\}$ such that $\mu_{i_0}< 1\le \mu_{i_0+1}$ . Now we take $\theta_i^*>\theta_i$ for $i\le i_0$ and $\theta_i^*<\theta_i$ for $i> i_0$ , with $0<|\theta_i^*-\theta_i|\le\min\{\delta,\theta_i/2\}$ so that $\theta_i^*>0$ . Then we have
\begin{equation*} \sum_{i=1}^n\theta_i^*(\mu_i-1) = \sum_{i=1}^n(\theta_i^*-\theta_i)(\mu_i-1) + \sum_{i=1}^n\theta_i(\mu_i-1) = \sum_{i=1}^n(\theta_i^*-\theta_i)(\mu_i-1) < 0, \end{equation*}
where the equality follows from ${\sum_{i=1}^n\theta_i(\mu_i-1)}=0$ , and the inequality follows from $(\theta_i^*-\theta_i)(\mu_i-1)\le 0$ for all i and $(\theta_1^*-\theta_1)(\mu_1-1)<0$ . Hence we have ${\sum_{i=1}^n\theta_i^*}\ge {\sum_{i=1}^n\theta_i^*\mu_i}$ .
Theorem 1. Let F be a distribution and define $F_{\mu,\theta}(x) = F^{\theta}(\mu x)$ , $x\in\mathbb{R}$ , where $\mu,\,\theta>0$ are constants. Let $X_1,\ldots,X_n$ be n i.i.d. random variables such that $X_i\sim F_{\mu,\theta}$ and let $Y_1,\ldots,Y_n$ be n independent random variables such that $Y_i\sim F_{\mu_i,\theta_i}$ , $i=1,\ldots,n$ , with $n\theta=\theta_1+\cdots+\theta_n$ .
(i) If F satisfies Property 1, then
\begin{equation*} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \prec_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu\theta\ge\dfrac1n{\sum_{i=1}^n\theta_i\mu_i}.\end{equation*}(ii) If F satisfies Property 2, then
\begin{equation*} X_{n:n} \succ_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu \le \mu_{1:n}.\end{equation*}
Proof. We give only the proof of (i); the proof of (ii) is similar and is postponed to Appendix A for completeness. Three cases arise.
(a) First assume that $\theta_i$ , $i=1,\ldots,n$ and ${\theta}$ are all integers. Let
\begin{equation*} Z_1,\ldots,Z_m \quad\text{and}\quad W_1,\ldots,W_m \end{equation*}
be two independent samples of size $m \coloneqq n{\theta}=\theta_1+\cdots+\theta_n$ such that $Z_i\sim F_{\mu,{{1}}}$ for $i=1,\ldots,m$ ; $W_i \sim F_{\mu_1,{1}}$ for $i=1,\ldots,\theta_1$ ; $W_i \sim F_{\mu_2,{1}}$ for $i=\theta_1+1,\ldots,\theta_1+\theta_2$ ; $\cdots$ ; $W_i \sim F_{\mu_n,{1}}$ , $i=\theta_1+\cdots+\theta_{n-1}+1,\ldots,\theta_1+\cdots+\theta_n$ . By observing the cumulative distribution functions of $Z_{m:m}$ and $W_{m:m}$ , we have that $Z_{m:m}$ and $W_{m:m}$ have the same distributions as $ X_{n:n}$ and $Y_{n:n}$ , respectively. Since $W_i=_{\mathrm{st}} Z_i/(\mu_i/\mu)$ for the corresponding indices, by Property 1 we have $Z_{m:m}\prec_{\mathrm{lr}}W_{m:m}$ or $Z_{m:m}\prec_{\mathrm{rh}}W_{m:m}$ if and only if
\begin{equation*} \frac{1}{m}\sum_{i=1}^n \theta_i\,\frac{\mu_i}{\mu} \le 1, \quad\text{that is,}\quad \mu\theta \ge \frac{1}{n}\sum_{i=1}^n\theta_i\mu_i. \end{equation*}
Then the result holds when $\theta_i$ , $i=1,\ldots,n$ , and $\theta$ are all integers.
(b) Next consider the case when $\theta_i$ , $i=1,\ldots,n$ , and $\theta$ are all rational. Let $f_{X_{n:n}}$ and $g_{ Y_{n:n}}$ denote the density functions of $ X_{n:n}$ and $ Y_{n:n}$ , respectively. It can be verified that
\begin{equation*} \frac{g_{Y_{n:n}}(x)}{f_{X_{n:n}}(x)} = h(x)\,g(x), \quad\text{where}\quad h(x)\coloneqq \frac{G_{Y_{n:n}}(x)}{F_{X_{n:n}}(x)} \quad\text{and}\quad g(x)\coloneqq \frac{\sum_{i=1}^n\theta_i\mu_i\,\tilde{r}(\mu_ix)}{n\theta\mu\,\tilde{r}(\mu x)},\end{equation*}
where $F_{X_{n:n}}$ and $G_{ Y_{n:n}}$ denote the cumulative distribution functions of $ X_{n:n}$ and $ Y_{n:n}$ , respectively:
\begin{equation*} F_{X_{n:n}}(x) = F^{n\theta}(\mu x) \quad\text{and}\quad G_{Y_{n:n}}(x) = \prod_{i=1}^n F^{\theta_i}(\mu_i x).\end{equation*}
As $\theta_i$ , $i=1,\ldots,n$ and $\theta$ are all rational, there exists N such that $N\theta_i$ , $i=1,\ldots,n$ and $N\theta$ are all integers. Hence, by part (a), $(h(x))^N$ is increasing if and only if $g(x)=Ng(x)/N$ is increasing, and hence $X_{n:n} \prec_{\mathrm{lr}} Y_{n:n}$ if and only if $X_{n:n} \prec_{\mathrm{rh}} Y_{n:n}$ and each of them is equivalent to ${\sum_{i=1}^n\theta_i\mu_i}/{(n\theta)}\le \mu$ .
(c) Finally, consider the general case. We first show that $\mu\ge {\sum_{i=1}^n\theta_i\mu_i}/{(n\theta)}$ is sufficient for $X_{n:n} \prec_{\mathrm{lr}} Y_{n:n}$ and $X_{n:n} \prec_{\mathrm{rh}} Y_{n:n}$ . We assert that there exist $\theta_1^k,\ldots,\theta_n^k$ and $\theta^k$ , $k\in\mathbb{N}$ , such that they are all rational and $\theta^k\rightarrow \theta$ , $\theta_i^k\rightarrow \theta_i$ as $k\to\infty$ , $i=1,\ldots,n$ , and
\begin{equation*} \mu \ge \frac{1}{n\theta^k}\sum_{i=1}^n\theta_i^k\mu_i, \qquad n\theta^k=\theta_1^k+\cdots+\theta_n^k, \quad k\in\mathbb{N}. \end{equation*}
This is obviously true when
\begin{equation*} \mu > \frac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i, \end{equation*}
and from the proof of Lemma 1, it is also true when
\begin{equation*} \mu = \frac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i. \end{equation*}
Let $X_{n:n}^k$ and $Y_{n:n}^k$ denote the largest-order statistics of $X_1^k,\ldots,X_n^k$ and $Y_1^k,\ldots,Y_n^k$ , respectively, where both $X_1^k,\ldots,X_n^k$ and $Y_1^k,\ldots,Y_n^k$ are n independent random variables such that $X_i^k\sim F_{\mu,\theta^k}$ and $Y_i^k\sim F_{\mu_i,\theta_i^k}$ , $i=1,\ldots,n$ , $k\in\mathbb{N}$ . By the proof of (b), we have $X_{n:n}^k\prec_{\mathrm{lr}} (\!\!\prec_{\mathrm{rh}})Y_{n:n}^k$ for each $k\in\mathbb{N}$ . Note that $X_{n:n}^k\to_{\mathrm{st}} X_{n:n}$ and $Y_{n:n}^k\to_{\mathrm{st}} Y_{n:n}$ as $k\to\infty$ . Then, by Theorem 1.C.7 of [Reference Shaked and Shanthikumar33], $\mu\ge {\sum_{i=1}^n\theta_i\mu_i}/(n\theta)$ is sufficient for $X_{n:n} \prec_{\mathrm{lr}} Y_{n:n}$ and $X_{n:n} \prec_{\mathrm{rh}} Y_{n:n}$ .
We next show the necessity by contradiction. If $\mu< {\sum_{i=1}^n\theta_i\mu_i}/(n\theta)$ , let $\tilde{r}_{\mu}$ denote the reversed hazard rate function of $X\thicksim F_{\mu,1}$ . Then we complete the result in two steps. Without loss of generality, we assume $\mu_1\le \cdots\le \mu_n$ . Then, by Property 1, we know that $\tilde{r}_{\mu_1}(t)\ge\cdots\ge\tilde{r}_{\mu_n}(t)$ for all $t\in\mathbb{R}$ .
First we consider the case when $\theta$ is rational. Then there exist rational numbers $\theta_i^*$ , $i=1,\ldots,n$ , such that $\theta_1^*>\theta_1$ , $\theta_i^*<\theta_i$ , $i=2,\ldots,n$ , $\sum_{i=1}^n\theta_i^\ast=n\theta$ , and, since the inequality $\mu< {\sum_{i=1}^n\theta_i\mu_i}/(n\theta)$ is strict, $\mu<\sum_{i=1}^n\theta_i^*\mu_i/(n\theta)$ . Let $Y_{n:n}^\ast$ denote the largest-order statistic of $Y_1^\ast,\ldots,Y_n^\ast$ , where $Y_1^\ast,\ldots,Y_n^\ast$ are n independent random variables such that $Y_i^\ast\sim F_{\mu_i,\theta_i^\ast}$ , $i=1,\ldots,n$ . By part (b), $X_{n:n} \not\prec_{\mathrm{rh}} Y_{n:n}^\ast$ , that is, there exists $t_0\in\mathbb{R}$ such that
\begin{equation*} \tilde{r}_{Y_{n:n}^\ast}(t_0) < \tilde{r}_{X_{n:n}}(t_0). \end{equation*}
It can be verified that
\begin{equation*} \tilde{r}_{Y_{n:n}}(t_0) = \sum_{i=1}^n \theta_i\,\tilde{r}_{\mu_i}(t_0) \le \sum_{i=1}^n \theta_i^*\,\tilde{r}_{\mu_i}(t_0) = \tilde{r}_{Y_{n:n}^\ast}(t_0) < \tilde{r}_{X_{n:n}}(t_0), \end{equation*}
where the inequality follows from $\tilde{r}_{\mu_1}(t)\ge\cdots\ge\tilde{r}_{\mu_n}(t)$ , $\theta_1^*>\theta_1$ , $\theta_i^*<\theta_i$ , $i=2,\ldots,n$ , and $\sum_{i=1}^n\theta_i^\ast=n\theta=\sum_{i=1}^n\theta_i$ . This implies $X_{n:n}\nprec_{\mathrm{rh}} Y_{n:n}$ .
Secondly, we consider the general case when $\theta$ may not be rational. There exists a rational number $\theta'>\theta$ such that $\mu<\Sigma_{i=1}^n\theta_i\mu_i/(n\theta')$ . Let $\theta_1^{\prime}=\theta_1+n(\theta'-\theta)$ , and let $X_{n:n}^{\prime}, Y_{n:n}^{\prime}$ denote the largest-order statistics of $X_1^{\prime},X_2^{\prime},\ldots,X_n^{\prime}$ and $Y_1^{\prime},Y_2,\ldots,Y_n$ , respectively, where both $X_1^{\prime},X_2^{\prime},\ldots,X_n^{\prime}$ and $Y_1^{\prime},Y_2,\ldots,Y_n$ are n independent random variables such that $X_i^{\prime}\sim F_{\mu,\theta'}, i=1,\ldots,n$ and $Y_1^{\prime}\sim F_{\mu_1,\theta_1^{\prime}}$ . It is easy to verify that $\mu<(\theta_1^{\prime}\mu_1+\sum_{i=2}^n\theta_i\mu_i)/(n\theta')$ and $\theta_1^{\prime}+\sum_{i=2}^n\theta_i=n\theta'$ . By the first case, there exists $t_0\in\mathbb{R}$ such that
\begin{equation*} \tilde{r}_{Y_{n:n}^{\prime}}(t_0) < \tilde{r}_{X_{n:n}^{\prime}}(t_0). \end{equation*}
Note that
\begin{equation*} \tilde{r}_{X_{n:n}^{\prime}}(t_0) = \tilde{r}_{X_{n:n}}(t_0) + n(\theta'-\theta)\,\tilde{r}_{\mu}(t_0) \quad\text{and}\quad \tilde{r}_{Y_{n:n}^{\prime}}(t_0) = \tilde{r}_{Y_{n:n}}(t_0) + n(\theta'-\theta)\,\tilde{r}_{\mu_1}(t_0). \end{equation*}
By $\tilde{r}_{\mu}(t_0)\le\tilde{r}_{\mu_1}(t_0)$ , we have ${\tilde{r}_{_{Y_{n:n}}}(t_0)}/{\tilde{r}_{_{X_{n:n}}}(t_0)}\le{\tilde{r}_{_{Y_{n:n}^{\prime}}}(t_0)}/{\tilde{r}_{_{X_{n:n}^{\prime}}}(t_0)}<1$ . This implies $X_{n:n}\nprec_{\mathrm{rh}} Y_{n:n}$ . Hence we complete the proof.
From the proof of Theorem 1, we can easily get that the first equivalent relation in (4) of Property 1 and the first equivalent relation in (5) of Property 2 can also be inherited by a PRHR model.
Proposition 2. Let F be a distribution and define $F_{\mu,\theta}(x) = F^{\theta}(\mu x)$ , $x\in\mathbb{R}$ , where $\mu,\,\theta>0$ are constants. Let $X_1,\ldots,X_n$ be n i.i.d. random variables such that $X_i\sim F_{\mu,\theta}$ and let $Y_1,\ldots,Y_n$ be n independent random variables such that $Y_i\sim F_{\mu_i,\theta_i}$ , $i=1,\ldots,n$ , with $n\theta=\theta_1+\cdots+\theta_n$ .
(i) If F satisfies the equivalent relation in (4) of Property 1, then
\begin{equation*} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow \mu\theta\ge\dfrac1n{\sum_{i=1}^n\theta_i\mu_i}.\end{equation*}(ii) If F satisfies the equivalent relation in (5) of Property 2, then
\begin{equation*} X_{n:n} \succ_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow \mu \le \mu_{1:n}.\end{equation*}
From Theorem 1, we can easily get the following result by relaxing the constraint $n\theta=\sum_{i=1}^n\theta_i$ .
Corollary 1. Under the notation of Theorem 1, we have the following statements.
(i) If F satisfies Property 1 and $n\theta\le\theta_1+\cdots+\theta_n$ , then
\begin{equation*} \mu\theta\ge\dfrac1n{\sum_{i=1}^n\theta_i\mu_i} \Longrightarrow X_{n:n}\prec_{\mathrm{lr}} Y_{n:n},X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}.\end{equation*}(ii) If F satisfies Property 2 and $n\theta\ge\theta_1+\cdots+\theta_n$ , then
\begin{equation*} \mu \le \mu_{1:n} \Longrightarrow X_{n:n} \succ_{\mathrm{lr}} Y_{n:n}, X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}.\end{equation*}
Proof. We only give the proof of (i), as (ii) can be shown similarly. Let $\theta^* \coloneqq \sum_{i=1}^n\theta_i/n\ge \theta.$ Then
\begin{equation*} \mu\theta^* \ge \mu\theta \ge \frac1n\sum_{i=1}^n\theta_i\mu_i. \end{equation*}
We define $X_{n:n}^*$ as the largest-order statistic of $X_1^*,X_2^*,\ldots,X_n^*$ such that $X_i^*\thicksim F_{\mu,\theta^*}$ , $i=1,\ldots,n$ . Then, by Theorem 1, we have $X_{n:n}^*\prec_{\mathrm{rh}}Y_{n:n}$ . Note that the reversed hazard rate function of $X_{n:n}$ is increasing in $\theta>0$ , and hence $X_{n:n}\prec_{\mathrm{rh}}Y_{n:n}.$ Also, note that
\begin{equation*} g(x) \coloneqq \frac{g_{Y_{n:n}}(x)}{f_{X_{n:n}}(x)} = \frac{g_{Y_{n:n}}(x)}{f_{X^*_{n:n}}(x)}\cdot\frac{f_{X^*_{n:n}}(x)}{f_{X_{n:n}}(x)} = g^*(x)\,\frac{\theta^*}{\theta}\,F^{n(\theta^*-\theta)}(\mu x), \quad\text{where } g^*(x)\coloneqq\frac{g_{Y_{n:n}}(x)}{f_{X^*_{n:n}}(x)}. \end{equation*}
Then g(x) is also increasing, since $g^*(x)$ is increasing by the proof of Theorem 1 and $F^{n(\theta^*-\theta)}(\mu x)$ is increasing in x. Hence we have $X_{n:n}\prec_{\mathrm{lr}}Y_{n:n}.$ This completes the proof.
3. Applications
In this section we investigate the applications of the main result to the exponentiated generalized gamma (EGG) distribution and the exponentiated Pareto distribution.
3.1. EGG distribution
From Theorem 1, we can immediately get the following corollary for the EGG distribution. The result has been given by Haidari and Najafabadi [Reference Haidari and Najafabadi18], whose proof is technical and requires several lemmas. Recall that a random variable X is said to have the EGG distribution with shape parameters $\theta>0$ , $\lambda>0$ , $r>0$ , and scale parameter $\mu>0$ , denoted by $X\sim$ EGG $(\theta,\lambda,r,\mu)$ , if it has the following cumulative distribution function:
(7) \begin{equation} G(x)= \biggl(\dfrac{1}{\Gamma({r}/{\lambda})}\int_0^{(\mu x)^{\lambda}} t^{{r}/{\lambda}-1}\,{\mathrm{e}}^{-t} \,{\mathrm{d}} t\biggr)^{\theta}, \quad x>0. \end{equation}
It is easy to see that the class of EGG distributions contains the Weibull distribution, the gamma distribution, and hence the exponential distribution.
(i) If $\theta=\lambda=1$ , then the EGG distribution reduces to the gamma distribution, $\Gamma(\mu,r)$ , with the density function
\begin{equation*} f(x)= \dfrac{ \mu^r}{\Gamma(r)} x^{r-1} {\mathrm{e}}^{-\mu x} ,\quad x>0. \end{equation*}(ii) If $\lambda=r$ , $\theta=1$ , then the EGG distribution reduces to the Weibull distribution, ${\mathrm{Wel}}(\lambda,\mu)$ , with the density function
(8) \begin{equation} f(x) = \lambda\mu^\lambda x^{\lambda-1} {\mathrm{e}}^{-(\mu x)^\lambda},\quad x>0.\end{equation}(iii) If $\lambda=\theta=r=1$ , then the EGG distribution reduces to the exponential distribution, ${\mathrm{Exp}}(\mu)$ , with the density function
\begin{equation*} f(x)= \mu {\mathrm{e}}^{-\mu x} ,\quad x>0. \end{equation*}
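The cumulative distribution function (7) and the three special cases above can be cross-checked numerically; the snippet below (a sketch of ours, writing (7) through the regularized lower incomplete gamma function) should print True three times.

```python
# Sanity check (our own sketch) of the EGG cdf in (7),
# G(x) = [P(r/lambda, (mu*x)^lambda)]^theta with P the regularized lower
# incomplete gamma function, against the stated special cases.
import numpy as np
from scipy.special import gammainc
from scipy.stats import gamma, weibull_min, expon

def egg_cdf(x, theta, lam, r, mu):
    return gammainc(r / lam, (mu * x) ** lam) ** theta

x = np.linspace(0.01, 5, 500)

# (i) theta = lambda = 1: gamma distribution Gamma(mu, r) (rate mu, shape r)
print(np.allclose(egg_cdf(x, 1, 1, 2.5, 2.0), gamma.cdf(x, a=2.5, scale=0.5)))
# (ii) lambda = r, theta = 1: Weibull Wel(lambda, mu)
print(np.allclose(egg_cdf(x, 1, 1.7, 1.7, 2.0),
                  weibull_min.cdf(x, c=1.7, scale=0.5)))
# (iii) theta = lambda = r = 1: exponential Exp(mu)
print(np.allclose(egg_cdf(x, 1, 1, 1, 2.0), expon.cdf(x, scale=0.5)))
```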
Proposition 3. Let $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ be two samples of independent random variables such that $X_i\sim {\mathrm{EGG}}(\theta,\lambda, r, \mu)$ and $Y_i\sim {\mathrm{EGG}}(\theta_i,\lambda, r, \mu_i)$ , where $\theta_i>0,\, \theta>0$ , $\lambda>0$ , $r>0$ , $\mu_i>0$ , $\mu>0$ , $i=1,\ldots,n$ , and $n\theta=\theta_1+\cdots+\theta_n$ .
(i) For $0<\lambda\le r$ , we have
\begin{equation*} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu \ge \Biggl( \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i^\lambda\Biggr)^{1/\lambda}.\end{equation*}(ii) For $\lambda,\, r>0$ , we have
\begin{equation*} X_{n:n} \succ_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu \le \mu_{1:n}.\end{equation*}(iii) For $\lambda,\, r>0$ , we have
\begin{equation*}X_{n:n} \prec_{\mathrm{st}} Y_{n:n}\Longleftrightarrow \mu^{n\theta}\ge \prod_{i=1}^n\mu_i^{\theta_i}.\end{equation*}
Proof. Let F be the distribution function of a gamma distribution $\Gamma(1,r/\lambda)$ . Then $F_{\mu^{\lambda},\theta}$ is the distribution function of ${\mathrm{EGG}}(\theta,1,r/\lambda,\mu^{\lambda})$ defined by (7). Note that for a random variable X, $X^\lambda\sim {\mathrm{EGG}}(\theta,1,r/\lambda,\mu^\lambda)$ if and only if $X\sim {\mathrm{EGG}}(\theta,\lambda,r,\mu)$ , and $\prec_{\mathrm{lr}}$ and $\prec_{\mathrm{rh}}$ are preserved under increasing transforms. By Remark 2, if $\lambda\le r$ then $\Gamma(1,r/\lambda)$ (whose shape parameter is $r/\lambda\ge 1$ ) satisfies Property 1, and hence, by Theorem 1, if $\lambda\le r$ we have
\begin{equation*} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu^{\lambda}\theta\ge \dfrac1n\sum_{i=1}^n\theta_i\mu_i^{\lambda}. \end{equation*}
That is, (i) holds true. Similarly, we can show that (ii) holds true. The sufficiency part of (iii) can be proved by Theorem 3.2 of [Reference Khaledi, Farsinezhad and Kochar20], similar arguments to those in the proof of Theorem 1, and the above observations. We next show the necessity by contradiction. We assume that $\mu^{n\theta}< \prod_{i=1}^n\mu_i^{\theta_i}$ . Note that
\begin{equation*} \lim_{x\to 0{+}} \frac{G_{Y_{n:n}}(x)}{F_{X_{n:n}}(x)} = \Biggl(\frac{\prod_{i=1}^n\mu_i^{\theta_i}}{\mu^{n\theta}}\Biggr)^{r} > 1, \end{equation*}
so that $G_{Y_{n:n}}(x)>F_{X_{n:n}}(x)$ for all x sufficiently close to 0.
This yields a contradiction with $X_{n:n} \prec_{\mathrm{st}} Y_{n:n}$ . Hence (iii) follows.
With Proposition 3 in hand, we immediately get the following result for the Weibull distribution, a special case of the EGG distribution, whose density function is given by (8).
Corollary 2. Let $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ be two samples of independent random variables such that $X_i\sim {\mathrm{Wel}}(\lambda,\mu), Y_i\sim {\mathrm{Wel}}(\lambda,\mu_i),$ where $\lambda>0, \mu>0, \mu_i>0, i=1,2,\ldots,n.$ Then
\begin{equation*} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu \ge \Biggl( \dfrac{1}{n}\sum_{i=1}^n\mu_i^\lambda\Biggr)^{1/\lambda}. \end{equation*}
Remark 3. It is worth noting that Corollary 2 strengthens Theorem 3.2 of [Reference Zhao, Zhang and Qiao37]. They present a sufficient condition for $X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}$ when $0<\lambda\le1$ , that is,
\begin{equation*} \mu \ge \frac{1}{n}\sum_{i=1}^n\mu_i \Longrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}. \end{equation*}
It is easy to verify that
\begin{equation*} \Biggl( \dfrac{1}{n}\sum_{i=1}^n\mu_i^\lambda\Biggr)^{1/\lambda} \le \frac{1}{n}\sum_{i=1}^n\mu_i \end{equation*}
if $0<\lambda\le 1$ . The condition
\begin{equation*} \mu \ge \frac{1}{n}\sum_{i=1}^n\mu_i \end{equation*}
is only a sufficient but not necessary condition for $X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}$ .
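The gap between the two conditions in Remark 3 is just the power-mean inequality, and it is easy to exhibit scale vectors falling inside the gap; the following small computation (our own, with arbitrarily chosen values) produces one such example.

```python
# A concrete instance (ours) of the gap in Remark 3: by the power-mean
# inequality, for 0 < lambda <= 1 the lambda-power mean M_lambda of the mu_i
# is at most the arithmetic mean M_1, so any mu with M_lambda <= mu < M_1
# satisfies the exact condition of Corollary 2 while violating the
# sufficient condition mu >= M_1 of [37].
import numpy as np

lam = 0.5
mus = np.array([0.25, 1.0, 4.0])
M1 = mus.mean()                            # arithmetic mean: 1.75
Mlam = np.mean(mus ** lam) ** (1.0 / lam)  # power mean of order 1/2: ~1.36
mu = 1.5
print(Mlam, M1, Mlam <= mu < M1)           # ... True
```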
We give some examples to illustrate the result of Proposition 3.
Example 1. Let $n=5$ , $X_1,\ldots,X_5$ and $Y_1,\ldots,Y_5$ be two samples of independent random variables such that $X_i\sim {\mathrm{EGG}}(\theta,\lambda, r, \mu)$ and $Y_i\sim {\mathrm{EGG}}(\theta_i,\lambda, r, \mu_i)$ , $i=1,\ldots,5$ . In Figure 1 we plot the ratio of cumulative distribution functions and the ratio of the density functions for different parameters. Let $F_{X_{5:5}}$ and $F_{Y_{5:5}}$ (resp. $f_{X_{5:5}}$ and $f_{Y_{5:5}}$ ) denote the cumulative distribution functions (resp. density functions) of $ {X_{5:5}}$ and $ {Y_{5:5}}$ , respectively. The parameters are taken as the following four cases. One can find that all the examples are consistent with Proposition 3.
(i) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=1.5, \,\mu=2, \,r=2, \mu_1=3, \mu_2=\mu_3=\mu_4=2, \mu_5=1$ . One can verify that
\begin{equation*} \mu \ge \Biggl( \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i^\lambda\Biggr)^{1/\lambda}. \end{equation*}Hence, by Proposition 3(i), we have $X_{5:5}\prec_{\mathrm{lr}} Y_{5:5}$ and $X_{5:5}\prec_{\mathrm{rh}} Y_{5:5}.$ The ratios $F_{Y_{5:5}}/F_{X_{5:5}}$ and $f_{Y_{5:5}}/f_{X_{5:5}}$ are both increasing.(ii) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=1.5, \,\mu=2, \,r=2, \mu_1=1, \mu_2=\mu_3=\mu_4=2, \mu_5=3$ . One can verify that
\begin{equation*} \mu < \Biggl( \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i^\lambda\Biggr)^{1/\lambda}. \end{equation*}By Proposition 3(i), we have $X_{5:5}\not\prec_{\mathrm{lr}} Y_{5:5}$ and $X_{5:5}\not\prec_{\mathrm{rh}} Y_{5:5}.$ Neither of the ratios $F_{Y_{5:5}}/F_{X_{5:5}}$ and $f_{Y_{5:5}}/f_{X_{5:5}}$ is monotone.(iii) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=2, \,\mu=1.5, \,r=1, \,\mu_1=\mu_2=\mu_3=2$ , $\mu_4=\mu_5=3$ . One can verify that $ \mu \le \mu_{1:5}$ . By Proposition 3(ii), we have $X_{5:5} \succ_{\mathrm{lr}} Y_{5:5}$ and $X_{5:5} \succ_{\mathrm{rh}} Y_{5:5}.$ The ratios $F_{Y_{5:5}}/F_{X_{5:5}}$ and $f_{Y_{5:5}}/f_{X_{5:5}}$ are both decreasing.
(iv) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=2, \,\mu=2, \,r=1, \,\mu_1=1,\, \mu_2=\mu_3=\mu_4=2, \,\mu_5=3$ . One can verify that $ \mu> \mu_{1:5}$ . By Proposition 3(ii), we have $X_{5:5}\not\succ_{\mathrm{lr}} Y_{5:5}$ and $X_{5:5}\not\succ_{\mathrm{rh}} Y_{5:5}.$ Neither of the ratios $F_{Y_{5:5}}/F_{X_{5:5}}$ and $f_{Y_{5:5}}/f_{X_{5:5}}$ is monotone.
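The monotonicity claims in Example 1 can also be checked directly on a grid; the sketch below (our own, with densities obtained by numerical differentiation of the cumulative distribution functions) classifies the two ratios for the four parameter sets.

```python
# Grid-based check (our own sketch) of the monotonicity claims in Example 1.
# The ratios F_{Y_{5:5}}/F_{X_{5:5}} and f_{Y_{5:5}}/f_{X_{5:5}} are built
# from the EGG cdf (7).
import numpy as np
from scipy.special import gammainc

def egg_cdf(x, theta, lam, r, mu):
    return gammainc(r / lam, (mu * x) ** lam) ** theta

def trend(v):
    d = np.diff(v)
    if np.all(d >= -1e-9):
        return "increasing"
    if np.all(d <= 1e-9):
        return "decreasing"
    return "not monotone"

def check(theta, lam, r, mu, mus, thetas):
    x = np.linspace(0.05, 2.0, 3000)
    FX = egg_cdf(x, len(mus) * theta, lam, r, mu)  # cdf of X_{5:5}
    FY = np.prod([egg_cdf(x, th, lam, r, m) for th, m in zip(thetas, mus)],
                 axis=0)                            # cdf of Y_{5:5}
    print("F-ratio:", trend(FY / FX),
          "| f-ratio:", trend(np.gradient(FY, x) / np.gradient(FX, x)))

thetas = [1, 2, 3, 4, 5]
check(3, 1.5, 2, 2.0, [3, 2, 2, 2, 1], thetas)  # case (i):   condition of 3(i) holds
check(3, 1.5, 2, 2.0, [1, 2, 2, 2, 3], thetas)  # case (ii):  condition of 3(i) fails
check(3, 2.0, 1, 1.5, [2, 2, 2, 3, 3], thetas)  # case (iii): mu <= mu_{1:5}
check(3, 2.0, 1, 2.0, [1, 2, 2, 2, 3], thetas)  # case (iv):  mu >  mu_{1:5}
```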
Example 2. We employ the notation of Example 1. In Figure 2, for $n=5$ , we compare the cumulative distribution functions of $ {X_{5:5}}$ and $ {Y_{5:5}}$ for different parameters. We consider the following two examples. One can find that all the examples are consistent with Proposition 3(iii).
(i) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=1, \mu=1, r=2, \,\mu_1=5,\,\mu_2=2,\,\mu_3=1$ , $\mu_4=\mu_5=2$ . One can verify that $\mu^{n\theta}\ge \prod_{i=1}^n\mu_i^{\theta_i}.$ Then, by Proposition 3(iii), we have $X_{n:n} \prec_{\mathrm{st}} Y_{n:n}$ . The cumulative distribution functions of $ {X_{5:5}}$ and $ {Y_{5:5}}$ are plotted in Figure 2(a), from which we can find that $F_{X_{n:n}} \ge F_{ Y_{n:n}}$ .
(ii) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=2, \mu=2, r=1, \,\mu_1=1,\,\mu_2=2,\,\mu_3=2$ , $\mu_4=\mu_5=3$ . One can verify that $\mu^{n\theta}< \prod_{i=1}^n\mu_i^{\theta_i}.$ Then, by Proposition 3(iii), we have $X_{n:n} \not\prec_{\mathrm{st}} Y_{n:n}$ . The cumulative distribution functions of $ {X_{5:5}}$ and $ {Y_{5:5}}$ are plotted in Figure 2(b), from which we can find that neither of $F_{X_{n:n}}$ and $F_{ Y_{n:n}}$ dominates the other.
3.2. Exponentiated Pareto distribution
Let $G_{\mu,\theta}$ be a distribution given by
\begin{equation*} G_{\mu,\theta}(x) = (1-(1+\mu x)^{-\beta})^{\theta}, \quad x>0. \end{equation*}
We call $G_{\mu,\theta}$ an exponentiated Pareto distribution with tail index $\beta>0$ , denoted by ${\mathrm{EP}}(\beta,\mu,\theta)$ [Reference Nadarajah29, Reference Nadarajah, Jiang and Chu30]. When $\mu=\theta=1$ ,
(9) \begin{equation} F(x) \coloneqq G_{1,1}(x) = 1-(1+x)^{-\beta}, \quad x>0, \end{equation}
is called a simple Pareto distribution with tail index $\beta>0$ (and is also called a Lomax distribution; see [Reference Kotz, Balakrishnan and Johnson22]). Pareto distributions are among the most popular models in finance, economics, and related areas, where they are commonly used to model random variables such as incomes, risks, and prices. It is easy to see that $\{G_{1,\theta},\,\theta>0\}$ is the PRHR model generated by the Pareto distribution. We will show that the simple Pareto distribution with tail index $\beta>0$ satisfies the first equivalent relation in (4) of Property 1.
Lemma 2. Let $F=G_{1,1}$ be the simple Pareto distribution defined by (9) with $\beta>0$ . Then F satisfies the first equivalent relation in (4) of Property 1.
Proof. We first show the $\Longrightarrow$ in the equivalent relation. Let $\tilde{r}$ denote the reversed hazard rate function of F. Note that
\begin{equation*} \tilde{r}(x) = \frac{\beta(1+x)^{-\beta-1}}{1-(1+x)^{-\beta}} = \frac{\beta}{(1+x)((1+x)^{\beta}-1)}, \quad x>0. \end{equation*}
We first show that $x\tilde{r}(x)$ is decreasing and convex in $x\in\mathbb{R}_+$ . We take the derivatives of $\lambda(x) \coloneqq x\tilde{r}(x)/\beta$ :
\begin{equation*} \lambda'(x) = \frac{h(x)}{((1+x)((1+x)^{\beta}-1))^{2}}, \quad\text{where}\quad h(x) \coloneqq (1-\beta x)(1+x)^{\beta}-1. \end{equation*}
Note that $h'(x)=-\beta x(1+\beta)(1+x)^{\beta-1}<0$ for $x>0$ and $h(0)=0$ . We have $\lambda'(x)<0$ , which implies that $x\tilde{r}(x)$ is decreasing in $x\in\mathbb{R}_+$ . Also note that $\lambda'(x)=h(x)(\tilde{r}(x))^2/\beta^2$ . It then follows that
\begin{equation*} \lambda''(x) \stackrel{\mathrm{sgn}}= (\beta+1)(\beta x-2)(1+x)^{2\beta} + (\beta(\beta-1)x+2\beta+4)(1+x)^{\beta} - 2 =\!:\, w(x), \end{equation*}
where $A\stackrel{\mathrm{sgn}}=B$ indicates that A and B have the same sign. One can verify that $w(0)=w'(0)=w^{\prime\prime}(0)=0$ , and
\begin{equation*} w'''(x)\ge 0 \quad\text{for all } x\in\mathbb{R}_+. \end{equation*}
Hence we have $w(x)\ge 0$ for $x\in\mathbb{R}_+$ , and hence $x\tilde{r}(x)$ is decreasing and convex in $x\in\mathbb{R}_+$ . Then, for any $\mu_1,\ldots,\mu_n\in\mathbb{R}_+$ such that $\mu_1+\cdots+\mu_n\le n$ (i.e. $\overline{\mu}\le 1$ ), we have
(10) \begin{equation} x\,\tilde{r}(x) \le \overline{\mu}x\,\tilde{r}(\overline{\mu}x) \le \frac1n\sum_{i=1}^n \mu_ix\,\tilde{r}(\mu_ix), \end{equation}
where the two inequalities follow from the fact that $x\tilde{r}(x)$ is decreasing and convex in $x\in\mathbb{R}_+$ , respectively. Note that (10) implies $n\tilde{r}(x) \le \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_ix)$ , that is, $\tilde{r}_{X_{n:n}}(x)\le \tilde{r}_{Y_{n:n}}(x)$ . Then we have $X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}$ , and thus we complete the proof of $\Longrightarrow$ in the equivalent relation.
We next show the $\Longleftarrow$ in the equivalent relation. For $\mu_1,\ldots,\mu_n\in\mathbb{R}_+$ , let $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ be two samples of independent random variables such that $X_i\sim G_{1,1}$ and $Y_i\sim G_{\mu_i,1}$ , where $\mu_i>0$ , $i=1,\ldots,n$ . Let $\tilde{r}_{X_{n:n}}$ and $\tilde{r}_{Y_{n:n}}$ denote the reversed hazard rate functions of order statistics $X_{n:n}$ and $Y_{n:n}$ , respectively. Then, by Taylor’s expansion, we have
\begin{equation*} \tilde{r}_{X_{n:n}}(x) = n\,\tilde{r}(x) = \frac{n}{x} - \frac{n(\beta+1)}{2} + {\mathrm{O}}(x) \end{equation*}
and
\begin{equation*} \tilde{r}_{Y_{n:n}}(x) = \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_ix) = \frac{n}{x} - \frac{(\beta+1)\sum_{i=1}^n\mu_i}{2} + {\mathrm{O}}(x) \end{equation*}
as $x\to 0{+}$ .
It then follows from $X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}$ (i.e. $\tilde{r}_{X_{n:n}}\le \tilde{r}_{Y_{n:n}}$ ) that $\sum_{i=1}^n \mu_i\le n$ . Hence we complete the proof.
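The two analytic facts driving the proof of Lemma 2, namely that $x\tilde{r}(x)$ is decreasing and convex for the simple Pareto distribution, can also be confirmed numerically; the following sketch (ours, for the assumed value $\beta=2$) does so on a grid.

```python
# Numerical confirmation (ours, for the assumed value beta = 2) of the two
# analytic facts used in the proof of Lemma 2: lambda(x) = x*r(x) is
# decreasing and convex for the simple Pareto distribution (9).
import numpy as np

beta = 2.0
x = np.linspace(0.01, 30, 6000)

r = beta / ((1 + x) * ((1 + x) ** beta - 1))  # reversed hazard rate of (9)
lam = x * r

d1 = np.gradient(lam, x)
d2 = np.gradient(d1, x)
print("x*r(x) decreasing:", bool(np.all(d1 <= 1e-9)))
print("x*r(x) convex:    ", bool(np.all(d2 >= -1e-6)))
```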
By Lemma 2 and Theorem 1, we can immediately get the following result for the exponentiated Pareto distribution.
Proposition 4. Let $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ be two samples of independent random variables such that $X_i\sim {\mathrm{EP}}(\beta,\mu,\theta)$ and $Y_i\sim {\mathrm{EP}}(\beta,\mu_i,\theta_i)$ , where $\theta_i>0,\, \theta>0$ , $\beta>0$ , $\mu_i>0$ , $\mu>0$ , $i=1,\ldots,n$ , and $n\theta=\theta_1+\cdots+\theta_n$ . Then
(11) \begin{equation} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \prec_{\mathrm{rh}} Y_{n:n} \Longleftrightarrow \mu\theta \ge \frac1n\sum_{i=1}^n\theta_i\mu_i \end{equation}
and
(12) \begin{equation} \mu \le \mu_{1:n} \Longrightarrow X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}. \end{equation}
Proof. It is easy to see that (11) follows from Lemma 2 and Theorem 1. To see (12), let $\tilde{r}$ denote the reversed hazard rate function of $F=G_{1,1}$ defined by (9). Note that in the proof of Lemma 2 we have already shown that $x\tilde{r}(x)$ is decreasing in $x\in\mathbb{R}_+$ . This implies that, for any $\mu_1,\ldots,\mu_n\in\mathbb{R}_+$ such that $\mu_{1:n}\ge \mu$ , we have $\mu t\tilde{r}(\mu t) \ge \mu_it\,\tilde{r}(\mu_it)$ , $i=1,\ldots,n$ , and thus
\begin{equation*} n\theta\mu\,\tilde{r}(\mu t) = \sum_{i=1}^n \theta_i\,\mu\,\tilde{r}(\mu t) \ge \sum_{i=1}^n \theta_i\mu_i\,\tilde{r}(\mu_it). \end{equation*}
That is, $ \tilde{r}_{_{X_{n:n}}}(t) \ge \tilde{r}_{_{Y_{n:n}}}(t) .$ Hence (12) holds.
We give some examples to illustrate the result of Proposition 4.
Example 3. Let $n=5$ , $X_1,\ldots,X_5$ and $Y_1,\ldots,Y_5$ be two samples of independent random variables such that $X_i\sim {\mathrm{EP}}(\beta,\mu,\theta)$ and $Y_i\sim {\mathrm{EP}}(\beta,\mu_i,\theta_i)$ , $i=1,\ldots,5$ . In Figure 3 we plot the ratio of cumulative distribution functions and the ratio of the density functions for different parameters. Let $F_{X_{5:5}}$ and $F_{Y_{5:5}}$ denote the cumulative distribution functions of $ {X_{5:5}}$ and $ {Y_{5:5}}$ , respectively. The parameters are taken as the following three cases. One can find that all the examples are consistent with Proposition 4.
(i) $\beta=2$ , $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\mu=2$ , $\mu_1=3$ , $\mu_2=\mu_3=\mu_4=2$ , $\mu_5=1$ . One can verify that
\begin{equation*} \mu \ge \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i, \end{equation*}and thus, by Proposition 4(i), we have $X_{5:5}\prec_{\mathrm{lr}} Y_{5:5}$ and $X_{5:5}\prec_{\mathrm{rh}} Y_{5:5}.$ The ratio $F_{Y_{5:5}}/F_{X_{5:5}}$ is increasing.(ii) $\beta=2, \theta=3, \theta_i=i$ , $i=1,\ldots,5$ , $\mu=2$ , $\mu_1=\mu_2=\mu_3= \mu_4=2$ , $\mu_5=3$ . One can verify that $ \mu \le \mu_{1:5}$ . Then, by Proposition 4(ii), we have $X_{5:5}\succ_{\mathrm{rh}} Y_{5:5}.$ The ratio $F_{Y_{5:5}}/F_{X_{5:5}}$ is decreasing.
(iii) $\beta=2, \theta=3, \theta_i=i$ , $i=1,\ldots,5$ , $\mu=2$ , $\mu_1=\mu_2=\mu_3=1$ , $\mu_4=3$ , $\mu_5=3$ . One can verify that
\begin{equation*} \mu<\dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i. \end{equation*}Then, by Proposition 4(i), we have $X_{5:5}\not\prec_{\mathrm{rh}} Y_{5:5}.$ The ratio $F_{Y_{5:5}}/F_{X_{5:5}}$ is not monotone. Also note that $ \mu > \mu_{1:5}$ and $X_{5:5}\not\succ_{\mathrm{rh}} Y_{5:5}$ , which shows that the condition $\mu \le \mu_{1:n}$ in Proposition 4(ii) cannot be relaxed in general.
3.3. Reinsurance
Starting from the seminal work of [Reference Borch9], the study of optimal reinsurance has drawn significant interest from both practicing actuaries and academics (see e.g. [Reference Bernard, He, Yan and Zhou5], [Reference Cai, Tan, Weng and Zhang7], [Reference Cheung, Sung, Yam and Yung12], and [Reference Chi and Tan13]). Optimal reinsurance is an important risk management tool for insurance companies, especially in catastrophe insurance.
In this section we consider the following reinsurance model. Let $X_1,\ldots,X_n$ be n independent random losses which represent random claims faced by an insurance company. We assume that each of the $X_i$ follows a regularly varying distribution. Recall that a distribution F is called a regularly varying distribution if there exists $\alpha>0$ such that
\begin{equation*} \lim_{t\to\infty}\frac{\overline{F}(tx)}{\overline{F}(t)} = x^{-\alpha} \quad\text{for all } x>0, \end{equation*}
where $\overline{F}=1-F$ is the survival function. This is denoted by $\overline{F}\in {\mathrm{RV}}_{-\alpha}$ . Pareto distributions, or more generally exponentiated Pareto distributions, are typical examples of regularly varying distributions. We point out that if the random claims represent the risks of a portfolio or the claims of catastrophe risks, it is well acknowledged that those risks have the properties of leptokurtosis and fat tails. It is typical to use regularly varying distributions to model such risks (see e.g. [Reference Embrechts, Klüppelberg and Mikosch16]). It has been verified that, for such risks, the largest-order statistic has tail properties similar to that of the sum of the risks. That is, for n independent random variables $X_1,\ldots,X_n$ following the regularly varying distributions (even with different indexes), we have
(13) \begin{equation} \mathbb{P}(X_{n:n}>x) \sim \mathbb{P}\Biggl(\sum_{i=1}^n X_i>x\Biggr), \end{equation}
where $f(x)\sim g(x)$ represents $\lim_{x\to\infty} f(x)/g(x)=1$ (see e.g. [Reference Embrechts, Klüppelberg and Mikosch16]). Hence we assume that the insurer is only concerned with the risk of the largest risk $X_{n:n}$ and wants to transfer part of $X_{n:n}$ to a reinsurance company and pay a premium. That is, in the presence of n risks $X_1,\ldots,X_n$ , the insurer is concerned with an optimal partition of $X_{n:n}$ into two parts, $I(X_{n:n})$ and $R_I (X_{n:n})$ , where $I(X_{n:n})$ satisfying $0 \le I(X_{n:n})\le X_{n:n}$ represents the amount of loss ceded to the reinsurer and $R_I (X_{n:n})=X_{n:n}-I(X_{n:n})$ is the loss retained by the insurer. Meanwhile the insurer incurs an additional cost in the form of a reinsurance premium which is payable to the reinsurer. We use $\pi$ to represent the reinsurance premium principle. Then the risk of the insurer is $R_I (X_{n:n}) +\pi(I(X_{n:n}))$ . Here we consider the following typical optimal reinsurance framework studied by Cai et al. [Reference Cai, Tan, Weng and Zhang7]. The functional I takes the form of a stop-loss contract, that is,
\begin{equation*} I(x) = (x-d)_+ = \max\{x-d,\,0\} \quad\text{for some } d\ge 0, \end{equation*}
and $\pi(X)=(1+ \rho){\mathbb{E}}[X]$ , where $\rho>0$ is a loading factor. Then the risk of the insurer is
(14) \begin{equation} \pi(X_{n:n},d) \coloneqq \min\{X_{n:n},d\} + (1+\rho)\,{\mathbb{E}}[(X_{n:n}-d)_+]. \end{equation}
The insurer wants to minimize his/her risk as follows:
(15) \begin{equation} \min_{d\in\mathbb{R}} {\mathrm{VaR}}_{\alpha}(\pi(X_{n:n},d)), \end{equation}
where ${\mathrm{VaR}}_\alpha(X)$ is the Value-at-Risk (VaR) of a risk X defined as
\begin{equation*} {\mathrm{VaR}}_\alpha(X)=\inf\{x\in\mathbb{R}\colon \mathbb{P}(X\le x)\ge\alpha\}, \quad \alpha\in(0,1). \end{equation*}
Let $d^*_{\mathrm{VaR}}(X_{n:n})$ denote the optimal value of d of the above optimization problem (15). We assume that $d^*_{\mathrm{VaR}}(X_{n:n})=\infty$ if the optimal solution to the minimization problem (15) does not exist.
We now study the properties of $d^*_{\mathrm{VaR}}(X_{n:n})$ with respect to the sample of $X_1,\ldots,X_n$ . To state it, let $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ be two samples of independent random variables and let $X_{n:n}$ and $Y_{n:n}$ denote their largest-order statistics, respectively. Define $ \pi(X_{n:n},d)$ and $ \pi(Y_{n:n},d)$ as in (14), and let $d^*_{\mathrm{VaR}}(X_{n:n})$ and $d^*_{\mathrm{VaR}}(Y_{n:n})$ denote the optimal values of d with the samples $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ , respectively. That is,
(16) \begin{equation} d^*_{\mathrm{VaR}}(X_{n:n}) = \inf\mathop{\arg\min}_{d\in\mathbb{R}} {\mathrm{VaR}}_{\alpha}(\pi(X_{n:n},d)) \quad\text{and}\quad d^*_{\mathrm{VaR}}(Y_{n:n}) = \inf\mathop{\arg\min}_{d\in\mathbb{R}} {\mathrm{VaR}}_{\alpha}(\pi(Y_{n:n},d)). \end{equation}
Here we consider only the infimum of the optimal values of d, as the optimal values are not unique in general (see e.g. [Reference Cai, Tan, Weng and Zhang7]). By Proposition 4, we can immediately obtain the following result.
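Under the conditions used in the proof of Proposition 5 below, Theorem 3.1 of [Reference Cai, Tan, Weng and Zhang7] gives the optimal retention in closed form, $d^*_{\mathrm{VaR}}(X_{n:n})={\mathrm{VaR}}_{\rho^*}(X_{n:n})$ with $\rho^*=\rho/(1+\rho)$, so the comparison can be computed explicitly. The following sketch (ours, with illustrative parameters of our own choosing) evaluates both retentions for an exponentiated Pareto example satisfying the condition of part (i).

```python
# Illustration (our own sketch, with illustrative parameters) of
# Proposition 5(i) for exponentiated Pareto claims: the optimal retention is
# d* = VaR_{rho*} of the largest claim, where rho* = rho/(1+rho).
import numpy as np
from scipy.optimize import brentq

beta, n, theta, rho = 2.0, 5, 3.0, 0.25
thetas = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
mu, mus = 2.0, np.array([3.0, 2.0, 2.0, 2.0, 1.0])
assert mu >= (thetas * mus).sum() / (n * theta)  # condition of Prop. 5(i)

ep_cdf = lambda x, m, th: (1.0 - (1.0 + m * x) ** (-beta)) ** th
cdf_X = lambda x: ep_cdf(x, mu, n * theta)                          # X_{5:5}
cdf_Y = lambda x: np.prod([ep_cdf(x, m, th) for m, th in zip(mus, thetas)])

rho_star = rho / (1.0 + rho)
var = lambda cdf: brentq(lambda x: cdf(x) - rho_star, 1e-9, 1e6)
print("d*(X_{5:5}) =", var(cdf_X))  # smaller retention ...
print("d*(Y_{5:5}) =", var(cdf_Y))  # ... than for the heterogeneous sample
```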
Proposition 5. Under the condition of Proposition 4, if the optimal solutions to optimization problems of (16) exist, then the following statements hold.
(i) If
\begin{equation*} \mu \ge \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i, \end{equation*}then $ d^*_{\mathrm{VaR}}(X_{n:n})\le d^*_{\mathrm{VaR}}(Y_{n:n}).$(ii) If $ \mu \le \mu_{1:n} $ , then $ d^*_{\mathrm{VaR}}(X_{n:n})\ge d^*_{\mathrm{VaR}}(Y_{n:n}).$
Proof. Denote $\rho^*=\rho/(1+\rho)$ . It is easy to verify that both $X_{n:n}$ and $Y_{n:n}$ are positive random variables, and thus $\rho^*>0=\mathbb{P}(X_{n:n}\le 0)$ . Also noting that the optimal solutions to the optimization problems of (16) exist, the conditions of Theorem 3.1(i,ii) of [Reference Cai, Tan, Weng and Zhang7] are satisfied. Then, by Theorem 3.1(i,ii) of [Reference Cai, Tan, Weng and Zhang7], we have $d^*_{\mathrm{VaR}}(X_{n:n})= {\mathrm{VaR}}_{\rho^*}(X_{n:n})$ and $d^*_{\mathrm{VaR}}(Y_{n:n})= {\mathrm{VaR}}_{\rho^*}(Y_{n:n})$ . On the other hand, by Proposition 4(i), if
\begin{equation*} \mu \ge \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i, \end{equation*}
then $X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}$ , which implies $X_{n:n}\prec_{\mathrm{st}} Y_{n:n}$ . That is, ${\mathrm{VaR}}_{u}(X_{n:n})\le {\mathrm{VaR}}_{u}(Y_{n:n})$ for any $u\in (0,1)$ . Also, by Proposition 4(ii), if $ \mu \le \mu_{1:n} $ , then $X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}$ , which also implies $X_{n:n} \succ_{\mathrm{st}} Y_{n:n}$ . That is, ${\mathrm{VaR}}_{u}(X_{n:n})\ge {\mathrm{VaR}}_{u}(Y_{n:n})$ for any $u\in (0,1)$ . Thus we complete the proof.
Remark 4. (i) We can similarly consider the optimization problem (15) based on the expected shortfall (ES), which is defined as
\begin{equation*} {\mathrm{ES}}_\alpha(X)=\dfrac1{1-\alpha}\int_\alpha^1 {\mathrm{VaR}}_{u}(X) \,{\mathrm{d}} u,\quad \alpha\in(0,1). \end{equation*}That is, consider the optimization problem\begin{equation*} \min_{d\in\mathbb{R}} {\mathrm{ES}}_{\alpha}(\pi(X_{n;n},d)).\end{equation*}Then we can obtain a result similar to Proposition 5 by Theorem 4.1 of [Reference Cai, Tan, Weng and Zhang7].(ii) If $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ follow the EGG distribution, then by Proposition 3 we can show a result similar to Proposition 5. However, the EGG distribution is a light-tailed distribution, and does not have the property shown in (13). Thus we do not state the result here.
4. Conclusion
In this paper, by introducing two properties in terms of the reversed hazard rate function of a (baseline) distribution, and showing that they can be inherited by the proportional reversed hazard rate (PRHR) model, we have investigated the stochastic comparison of parallel systems with respect to the reversed hazard rate and likelihood ratio orders for the PRHR model. We then applied the result to two popular PRHR models: the EGG distribution and the exponentiated Pareto distribution. Based on the well-established result on the reversed hazard rate function of the gamma distribution, we obtained a necessary and sufficient condition for the stochastic comparisons of the largest-order statistics for EGG distributions. As a by-product, we obtained a similar equivalent characterization for Weibull distributions. Our results recover the recent result established in [Reference Haidari and Najafabadi18] and strengthen the result for Weibull distributions given in [Reference Zhao, Zhang and Qiao37]. We also investigated the properties of the reversed hazard rate function of Pareto distributions. This, combined with the inheritance property of a PRHR model, gives the (equivalent) characterization for stochastic comparisons in terms of the likelihood ratio (reversed hazard rate) order for exponentiated Pareto distributions.
Appendix A. Proof of Theorem 1(ii)
The proof is similar to that of Theorem 1(i). We show the result for the following three cases.
(i) First consider the case when the $\theta_i$ and $\theta$ are all integers. Let
\begin{equation*} Z_1,\ldots,Z_m \end{equation*}
be $m \coloneqq n\theta=\theta_1+\cdots+\theta_n$ i.i.d. random variables with $Z_i\sim F_{\mu,1}$ , and define $W_i = \mu Z_i/\mu_1$ for $i=1,\ldots,\theta_1$ , $W_i = \mu Z_i/\mu_2$ for $i=\theta_1+1,\ldots,\theta_1+\theta_2$ , …, and $W_i = \mu Z_i/\mu_n$ for $i=\theta_1+\cdots+\theta_{n-1}+1,\ldots,\theta_1+\cdots+\theta_n$ , so that $W_i\sim F_{\mu_i,1}$ for the corresponding indices. By observing the cumulative distribution functions of $Z_{m:m}$ and $W_{m:m}$ , $Z_{m:m}$ and $W_{m:m}$ have the same distributions as $ X_{n:n}$ and $Y_{n:n}$ , respectively. By Property 2, we have $Z_{m:m}\succ_{\mathrm{lr}}W_{m:m}$ or $Z_{m:m}\succ_{\mathrm{rh}}W_{m:m}$ if and only if
\begin{equation*} \min_{1\le i\le n}\frac{\mu_i}{\mu}\ge 1, \quad\text{that is,}\quad \mu\le\mu_{1:n}. \end{equation*}
Then the desired result holds for the case when the $\theta_i$ and $\theta$ are all integers.
(ii) Next, consider the case when the $\theta_i$ and $\theta$ are all rational. Let $f_{X_{n:n}}$ and $g_{ Y_{n:n}}$ denote the density functions of $ X_{n:n}$ and $ Y_{n:n}$ , respectively. It can be verified that
\begin{equation*} \frac{f_{X_{n:n}}(x)}{g_{Y_{n:n}}(x)} = h(x)\,g(x), \quad\text{where}\quad h(x)\coloneqq \frac{F_{X_{n:n}}(x)}{G_{Y_{n:n}}(x)} \quad\text{and}\quad g(x)\coloneqq \frac{n\theta\mu\,\tilde{r}(\mu x)}{\sum_{i=1}^n\theta_i\mu_i\,\tilde{r}(\mu_ix)},\end{equation*}
where $F_{X_{n:n}}$ and $G_{ Y_{n:n}}$ denote the cumulative distribution functions of $ X_{n:n}$ and $ Y_{n:n}$ , respectively:
\begin{equation*} F_{X_{n:n}}(x) = F^{n\theta}(\mu x) \quad\text{and}\quad G_{Y_{n:n}}(x) = \prod_{i=1}^n F^{\theta_i}(\mu_i x).\end{equation*}
As the $\theta_i$ and $\theta$ are all rational, there exists an integer N such that $N\theta_i$ , $i=1,\ldots,n$ , and $N\theta$ are all integers. For the model with parameters $N\theta$ and $N\theta_i$ , the ratio of the cumulative distribution functions is $(h(x))^N$ and the ratio of the reversed hazard rate functions is $Ng(x)/N=g(x)$ . Hence, by the above proof for the integer case, we have that $(h(x))^N$ , and hence h(x), and $g(x)$ are increasing in x if and only if $\mu\le \mu_{1:n}$ .
(iii) Finally we show the result for the general case. Note that there exist $\theta^k,\theta_1^k,\ldots,\theta_n^k$ , $k\in\mathbb{N}$ , such that they are rational and
\begin{equation*} \theta^k\to\theta, \quad \theta_i^k\to\theta_i, \quad i=1,\ldots,n, \quad\text{as } k\to\infty, \qquad\text{with}\quad n\theta^k=\theta_1^k+\cdots+\theta_n^k \quad\text{for each } k\in\mathbb{N}. \end{equation*}
Let $X_{n:n}^k$ and $Y_{n:n}^k$ denote the largest-order statistics of $X_1^k,\ldots,X_n^k$ and $Y_1^k,\ldots,Y_n^k$ , where both $X_1^k,\ldots,X_n^k$ and $Y_1^k,\ldots,Y_n^k$ are n independent random variables such that $X_i^k\sim F_{\mu,\theta^k}, Y_i^k\sim F_{\mu_i,\theta_i^k}$ , $i=1,\ldots,n$ , $k\in\mathbb{N}$ . By the above proof and a limiting argument as in the proof of part (i), $\mu\le\mu_{1:n}$ is sufficient for $X_{n:n} \succ_{\mathrm{lr}} Y_{n:n}$ and $X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}$ .
We next show the necessity by contradiction. Without loss of generality, assume that $\mu_1\le\dots\le\mu_n$ . By Property 2, we also have $ {\tilde{r}(at)\le\tilde{r}(bt)}$ , $t\in\mathbb{R}$ , if and only if $a\ge b$ . Further, if $\mu>\mu_1$ and $\theta$ is rational, then there exist $\theta_1^k\uparrow\theta_1$ and $\theta_i^k\downarrow\theta_i$ , $i=2,\ldots,n$ , as $ k\to\infty$ , such that they are all rational and $\Sigma_{i=1}^n\theta_i^k=n\theta$ for every $k\in\mathbb{N}$ . Let $Y_{n:n}^k$ denote the largest-order statistic of $Y_1^k,\ldots,Y_n^k$ , where $Y_1^k,\ldots,Y_n^k$ are n independent random variables such that $Y_i^k\sim F_{\mu_i,\theta_i^k}$ , $i=1,\ldots,n$ , $k\in\mathbb{N}$ . In the same way as the above proof, there exist $k\in\mathbb{N}$ and $t_0\in\mathbb{R}$ such that
\begin{equation*} \tilde{r}_{Y_{n:n}}(t_0) \ge \tilde{r}_{Y_{n:n}^k}(t_0) > \tilde{r}_{X_{n:n}}(t_0). \end{equation*}
This means that $X_{n:n}\nsucc_{\mathrm{rh}} Y_{n:n}$ .
If $\theta\in\mathbb{R}$ , there exists a rational number $\theta'<\theta$ . We define $\theta_1^{\prime}=\theta_1-n(\theta-\theta')$ and let $X_{n:n}^{\prime}, Y_{n:n}^{\prime}$ denote the largest-order statistics of $X_1^{\prime},X_2^{\prime},\ldots,X_n^{\prime}$ and $Y_1^{\prime},Y_2,\ldots,Y_n$ , respectively, where both $X_1^{\prime},X_2^{\prime},\ldots,X_n^{\prime}$ and $Y_1^{\prime},Y_2,\ldots,Y_n$ are n independent random variables such that $X_i^{\prime}\sim F_{\mu,\theta'}, i=1,\ldots,n$ and $Y_1^{\prime}\sim F_{\mu_1,\theta_1^{\prime}}, Y_j\sim F_{\mu_j,\theta_j}, j=2,\ldots,n$ . By the above proof, there exists $t_0\in\mathbb{R}$ such that
and
\begin{equation*} \tilde{r}_{Y_{n:n}}(t_0) = \tilde{r}_{Y_{n:n}^{\prime}}(t_0) + n(\theta-\theta')\,\tilde{r}_{\mu_1}(t_0) > \tilde{r}_{X_{n:n}^{\prime}}(t_0) + n(\theta-\theta')\,\tilde{r}_{\mu}(t_0) = \tilde{r}_{X_{n:n}}(t_0), \end{equation*}
where the inequality uses $\tilde{r}_{\mu_1}(t_0)\ge\tilde{r}_{\mu}(t_0)$ (as $\mu>\mu_1$ ).
This means that $X_{n:n}\nsucc_{\mathrm{rh}} Y_{n:n}$ . Hence we complete the proof.
Acknowledgement
The authors are grateful for the support from the National Natural Science Foundation of China (grants 71671176, 71871208, and 71921001).