
Stochastic comparisons of largest-order statistics for proportional reversed hazard rate model and applications

Published online by Cambridge University Press:  04 September 2020

Lu Li*
Affiliation:
University of Science and Technology of China
Qinyu Wu*
Affiliation:
University of Science and Technology of China
Tiantian Mao*
Affiliation:
University of Science and Technology of China
*Postal address: Department of Statistics and Finance, University of Science and Technology of China, Hefei, Anhui 230026, China. Email address: lizzylee@mail.ustc.edu.cn
**Email address: wu051555@mail.ustc.edu.cn
***Email address: tmao@ustc.edu.cn

Abstract

We investigate stochastic comparisons of parallel systems (corresponding to the largest-order statistics) with respect to the reversed hazard rate and likelihood ratio orders for the proportional reversed hazard rate (PRHR) model. As applications of the main results, we obtain equivalent characterizations of stochastic comparisons with respect to the reversed hazard rate and likelihood ratio orders for the exponentiated generalized gamma and exponentiated Pareto distributions. Our results recover and strengthen some recent results in the literature.

Type
Research Papers
Copyright
© Applied Probability Trust 2020

1. Introduction

Let $X_1,\ldots,X_n$ be n random variables. The largest-order statistic, denoted by

\begin{equation*}X_{n:n}=\max\{X_1,\ldots,X_n\},\end{equation*}

plays an important role in reliability, statistics, finance, and many other applied areas; see [Reference Balakrishnan and Rao2], [Reference Balakrishnan and Rao3], and [Reference David and Nagaraja14]. For example, in reliability, the lifetime of a parallel system is the same as the largest-order statistic. It is also used to measure the systemic risk in finance. There is a large literature on stochastic comparisons for the largest-order statistic. Pledger and Proschan [Reference Pledger and Proschan32] were among the first to study the problem of stochastically comparing the largest-order statistics of two sets of n exponential random variables, where the random variables in one set are independent and heterogeneous and the random variables in the other set are independent and identically distributed (i.i.d.). Subsequently, many papers further investigated similar stochastic comparison problems and generalized them to other distributions, such as (generalized) gamma, Weibull, etc.; see [Reference Balakrishnan and Zhao4], [Reference Bon and Păltănea8], [Reference Haidari and Najafabadi18], [Reference Kochar21], [Reference Mao and Hu23], [Reference Misra and Misra26], [Reference Zhao and Balakrishnan35], [Reference Zhao and Balakrishnan36], and [Reference Zhao, Zhang and Qiao37], and the references therein.

In this paper we investigate the problem of stochastic comparison for the proportional reversed hazard rate (PRHR) model [Reference Gupta and Gupta17], which is also called the exponentiated random variable; see [Reference Al-Hussaini and Ahsanullah1], [Reference Mudholkar and Srivastava27], and [Reference Mudholkar, Srivastava and Freimer28]. Recall that the PRHR model, which can be viewed as a dual model of the well-known proportional hazards model, consists in describing random failure times by a family of distribution functions

\begin{equation*}\{(F(x))^\theta, \, \theta>0\},\end{equation*}

where F is a baseline distribution function [Reference Di Crescenzo15, Reference Gupta and Gupta17]. The PRHR model is useful for estimating the survival function for left censored data [Reference Kalbfleisch and Lawless19], and has wide applications in reliability theory and statistics [Reference Navarro31, Reference Wang, Yu and Coolen34].

For ease of reference, we first recall some definitions of stochastic orders. Let X and Y be two random variables with distribution functions F and G, respectively.

  1. (1) X is said to be smaller than Y in the usual stochastic order if $F(x)\ge G(x)$ for $x\in \mathbb{R}$ . We denote this by $X\prec_{\mathrm{st}} Y$ .

  2. (2) X is said to be smaller than Y in the reversed hazard rate order if $G(x)/F(x)$ is increasing in $x\in {\mathrm{supp}}(X)$ . We denote this by $X\prec_{\mathrm{rh}} Y$ .

  3. (3) If, moreover, X and Y have probability density functions f and g respectively, X is said to be smaller than Y in the likelihood ratio order if $g(x)/f(x)$ is increasing in $x\in {\mathrm{supp}}(X)\cup {\mathrm{supp}}(Y)$ . We denote this by $X\prec_{\mathrm{lr}} Y$ .

It is well known that the relation between the three stochastic orders is

\begin{equation*}X\prec_{\mathrm{lr}} Y \Longrightarrow X\prec_{\mathrm{rh}} Y \Longrightarrow X\prec_{\mathrm{st}} Y.\end{equation*}

We refer to [Reference Shaked and Shanthikumar33] for more properties of stochastic orders. We recall that the reversed hazard rate function of a continuous random variable X is defined as

\begin{equation*}\tilde{r}_X(t) = \dfrac{f(t)}{F(t)},\quad t\in{\mathrm{supp}}(X),\end{equation*}

where F and f are the distribution function and density function of X, respectively, and here and throughout the paper, ${\mathrm{supp}}(X)$ represents the support of the distribution function of a random variable X. For two continuous random variables X and Y, $X\prec_{\mathrm{rh}} Y$ if and only if $\tilde{r}_X(t)\le \tilde{r}_Y(t)$ for all $t\in\mathbb{R}$ .
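As a quick numerical illustration (not part of the formal development), the characterization $X\prec_{\mathrm{rh}} Y$ if and only if $\tilde{r}_X\le\tilde{r}_Y$ can be checked on a grid. The sketch below, in Python with NumPy, takes $X\sim{\mathrm{Exp}}(1)$ and $Y=X/\mu$ with $\mu=0.5$ (so $Y\sim{\mathrm{Exp}}(0.5)$); the parameter values are purely illustrative.

```python
import numpy as np

def reversed_hazard_exp(rate, t):
    """Reversed hazard rate f(t)/F(t) of an Exp(rate) random variable."""
    return rate * np.exp(-rate * t) / (1.0 - np.exp(-rate * t))

t = np.linspace(0.05, 10.0, 400)
mu = 0.5                               # Y = X / mu with mu <= 1, so Y is "larger"
r_X = reversed_hazard_exp(1.0, t)      # X ~ Exp(1)
r_Y = reversed_hazard_exp(mu, t)       # Y = X/mu ~ Exp(mu)

# X is smaller than Y in the rh order iff r_X(t) <= r_Y(t) for all t
print(bool(np.all(r_X <= r_Y)))        # True
```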

Remark 1. Let X and Y be two random variables with respective reversed hazard rate functions $\tilde{r}_X(t) $ and $\tilde{r}_Y(t)$ . It can be easily verified that if $ \tilde{r}_{X}(t) \le \tilde{r}_{Y}(t)$ and $ \tilde{r}_{Y}(t)/ {\tilde{r}_{X}(t)}$ is increasing in t, then $X\prec_{\mathrm{lr}}Y$ . This is due to the fact that

\begin{equation*}\dfrac{g(t)}{f(t)} = \dfrac{\tilde{r}_{Y}(t)}{ \tilde{r}_{X}(t)} \times \dfrac{G(t)}{F(t)},\quad t\in{\mathrm{supp}}(X)\cup {\mathrm{supp}}(Y),\end{equation*}

and $\tilde{r}_{X}(t) \le \tilde{r}_{Y}(t)$ if and only if $G(t)/F(t)$ is increasing in t.

Let $X_1,\ldots,X_n$ be n i.i.d. random variables with $X_i\sim F$ and let $Y_1,\ldots,Y_n$ be n independent random variables such that $Y_i=_{\mathrm{st}} X_i/\mu_i$ , $i=1,\ldots,n$ , where $\mu_i>0$ , $i=1,\ldots,n$ , are constants. Let $X_{n:n}$ and $Y_{n:n}$ denote the largest-order statistics of $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ , respectively. Mao and Hu [Reference Mao and Hu23] showed that if F is an exponential distribution, then

(1) \begin{equation} X_{n:n}\prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n} \Longleftrightarrow \sum_{i=1}^n\mu_i\le n\end{equation}

and

(2) \begin{equation} Y_{n:n} \prec_{\mathrm{lr}}X_{n:n} \Longleftrightarrow Y_{n:n}\prec_{\mathrm{rh}}X_{n:n} \Longleftrightarrow \mu_{1:n}\ge 1.\end{equation}
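The equivalences (1) and (2) lend themselves to numerical sanity checks, using the standard fact that the reversed hazard rate of a maximum of independent random variables is the sum of the individual reversed hazard rates. A sketch for (1), with illustrative rates $\mu=(0.5,1.2,1.0)$ so that $\frac1n\sum_{i=1}^n\mu_i=0.9\le 1$:

```python
import numpy as np

def rh_exp(t):                       # reversed hazard rate of Exp(1)
    return 1.0 / (np.exp(t) - 1.0)

mu = np.array([0.5, 1.2, 1.0])       # mean 0.9 <= 1, so (1) applies
n = len(mu)
t = np.linspace(0.05, 6.0, 300)

r_X = n * rh_exp(t)                            # largest of n i.i.d. Exp(1)
r_Y = sum(m * rh_exp(m * t) for m in mu)       # largest of Exp(mu_i), i=1..n

assert np.all(r_X <= r_Y)                      # reversed hazard rate order
assert np.all(np.diff(r_Y / r_X) >= -1e-9)     # ratio increasing -> lr order
print("checks passed")
```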

Zhao and Balakrishnan [Reference Zhao and Balakrishnan36] and Zhao, Zhang, and Qiao [Reference Zhao, Zhang and Qiao37] proved

(3) \begin{equation} \sum_{i=1}^n\mu_i\le n \Longrightarrow X_{n:n}\prec_{\mathrm{lr}}Y_{n:n}\end{equation}

if F is a gamma distribution or a Weibull distribution, respectively.Footnote 1 It should be pointed out that for gamma distributions, $\sum_{i=1}^n\mu_i\le n $ is also a necessary condition for $X_{n:n}\prec_{\mathrm{lr}}Y_{n:n}$ . However, it is not the case for Weibull distributions (see Section 3.1). Recently, Haidari and Najafabadi [Reference Haidari and Najafabadi18] generalized the above result to exponentiated generalized gamma distributions, which will be stated in Proposition 3 in Section 3.1.

In this paper we first introduce two properties (defined later in Section 2) of stochastic comparison of the largest-order statistics of two sets for a (baseline) distribution function. The two properties are similar to and slightly stronger than (1) and (2), respectively, and are satisfied by the exponential and gamma distributions (with shape parameter not less than one). We show that the two properties can be inherited by the PRHR model. This is one of the main results of our paper. The second contribution of the paper is to apply the main result to two popular PRHR models: the exponentiated generalized gamma (EGG) distribution and the exponentiated Pareto distribution. We give the necessary and sufficient conditions for the stochastic comparisons of the largest-order statistics for the EGG distribution and the exponentiated Pareto distribution. Our results on the EGG distribution recover the recent result established by Haidari and Najafabadi [Reference Haidari and Najafabadi18] and strengthen the result for the Weibull distribution given by Zhao, Zhang, and Qiao [Reference Zhao, Zhang and Qiao37]. To the best of our knowledge, the result on the (exponentiated) Pareto distribution is novel.

The paper is organized as follows. In Section 2 we first introduce two properties in terms of reversed hazard rate functions for a distribution F and then we show that the two properties could be inherited by a PRHR model. In Section 3.1 we first apply the main result to EGG distributions to give a necessary and sufficient condition for the stochastic comparison of largest-order statistics with respect to reversed hazard rate order and likelihood ratio order for EGG distributions. We also derive a corollary which is an equivalent characterization for Weibull distributions. In Section 3.2 we apply the main result to exponentiated Pareto distributions and obtain a (necessary and) sufficient condition for the stochastic comparison of largest-order statistics with respect to the reversed hazard rate order and likelihood ratio order for the Pareto and exponentiated Pareto distributions.

2. Main result

Let F be a distribution function and define $G(x)= (F(x))^\theta=\!:\, F_{\theta}(x)$ , $x\in\mathbb{R}$ , where $\theta>0$ is a constant. It is obvious that $\{F_{\theta},\,\theta>0\}$ is a class of distribution functions with $F_{1}=F$ ; $\{F_\theta,\theta>0\}$ is called an exponentiated distribution (the corresponding random variable is called an exponentiated random variable; see [Reference Mudholkar and Srivastava27] and [Reference Mudholkar, Srivastava and Freimer28]) or a PRHR model [Reference Gupta and Gupta17]. Specifically, $F_\theta$ is called a PRHR model with proportionality constant $\theta$ $(>0)$ .

We introduce the following properties for a distribution F. We then show that each of the two properties can be inherited by a PRHR model. To state them, for any $n\in{\mathbb{N}}$ and $\mu_i\in\mathbb{R}_+$ , $i=1,\ldots,n$ , let $X_1,\ldots,X_n$ be n i.i.d. random variables such that $X_i\sim F$ and denote $Y_i=X_i/\mu_i,\, i=1,\ldots,n$ . We also let $X_{n:n}$ and $Y_{n:n}$ denote the largest-order statistics of $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ , respectively.

Property 1. For any $n\in\mathbb{N}$ and $\mu_i\in\mathbb{R}_+$ , $i=1,\ldots,n$ ,

(4) \begin{equation} \overline{\mu}\le 1\Longleftrightarrow \tilde{r}_{_{X_{n:n}}}(t) \le \tilde{r}_{_{Y_{n:n}}}(t) \Longrightarrow \dfrac{ \tilde{r}_{_{Y_{n:n}}}(t)} {\tilde{r}_{_{X_{n:n}}}(t)}\ \text{is increasing in $t\in\mathbb{R}$,}\end{equation}

where $\overline{\mu}= {\sum_{i=1}^n\mu_i}/n .$

Property 2. For any $n\in\mathbb{N}$ and $\mu_i\in\mathbb{R}_+$ , $i=1,\ldots,n$ ,

(5) \begin{equation} \mu_{1:n}\ge 1\Longleftrightarrow\tilde{r}_{_{X_{n:n}}}(t) \ge \tilde{r}_{_{Y_{n:n}}}(t) \Longrightarrow \dfrac{ \tilde{r}_{_{Y_{n:n}}}(t)} {\tilde{r}_{_{X_{n:n}}}(t)}\ \text{is decreasing in $t\in\mathbb{R}$,}\end{equation}

where $\mu_{1:n}=\min\{\mu_1,\ldots,\mu_n\}$ .

Let $\tilde{r}$ be the reversed hazard rate function of F. Then the reversed hazard rate functions of the order statistics $X_{n:n}$ and $Y_{n:n}$ , respectively, can be written as

\begin{equation*} \tilde{r}_{_{X_{n:n}}}(t) = \dfrac{nf(t)}{F(t)}=n\tilde{r}(t)\quad\text{and}\quad \tilde{r}_{_{Y_{n:n}}}(t) =\sum_{i=1}^n\mu_i \dfrac{f(\mu_i t)}{F(\mu_i t)}=\sum_{i=1}^n\mu_i \tilde{r}(\mu_it),\quad t\in\mathbb{R}.\end{equation*}
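These formulas hold because $F_{X_{n:n}}=F^n$ and $F_{Y_{n:n}}(t)=\prod_{i=1}^n F(\mu_i t)$, so the reversed hazard rates (logarithmic derivatives of the distribution functions) add. A small numerical check of the first identity against direct differentiation of $\log (F(t))^n$, for an ${\mathrm{Exp}}(1)$ baseline (an illustrative choice):

```python
import numpy as np

F  = lambda t: 1.0 - np.exp(-t)       # Exp(1) baseline cdf
f  = lambda t: np.exp(-t)             # its density
rh = lambda t: f(t) / F(t)            # reversed hazard rate

n = 4
t = np.linspace(0.1, 5.0, 200)
h = 1e-6
# reversed hazard rate of X_{n:n} as d/dt log F(t)^n (central difference)
numeric = (np.log(F(t + h) ** n) - np.log(F(t - h) ** n)) / (2 * h)
closed  = n * rh(t)                   # closed form n * rh(t)
print(np.max(np.abs(numeric - closed)) < 1e-4)   # True
```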

Then it is easy to obtain the following equivalent characterization of Properties 1 and 2.

Proposition 1. Let F be a distribution function. Then:

  1. (i) F satisfies Property 1 if and only if, for any $n\in\mathbb{N}$ and $\mu_1,\ldots,\mu_n\in\mathbb{R}_+$ ,

    \begin{equation*} \overline{\mu}\le 1 \Longleftrightarrow n \tilde{r}(t) \le \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it) \Longrightarrow \sum_{i=1}^n \dfrac{ \mu_i\,\tilde{r}(\mu_it)}{ \tilde{r}(t)}\ \text{is increasing in $t\in\mathbb{R}$,}\end{equation*}
  2. (ii) F satisfies Property 2 if and only if both $t \tilde{r}(t)$ and ${\tilde{r}(\mu t)}/{\tilde{r}(t)}$ are decreasing in $t\in\mathbb{R}$ for $\mu\ge 1$ , and $n \tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it)$ implies $\mu_{1:n}\ge 1.$

Proof. We only give the proof of (ii), as the proof of (i) is trivial from the definition of the reversed hazard rate function. To show necessity, note that Property 2 is equivalent to the following statement: for $\mu_1,\ldots,\mu_n\in\mathbb{R}_+$,

(6) \begin{equation} \mu_{1:n}\ge 1 \Longleftrightarrow n \tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it)\Longrightarrow\sum_{i=1}^n \dfrac{ \mu_i\,\tilde{r}(\mu_it)}{ \tilde{r}(t)}\ \text{is decreasing in $t\in\mathbb{R}$.}\end{equation}

Taking $n=1$ , (6) then reduces to

\begin{equation*} \mu_1\ge 1 \Longleftrightarrow {\begin{cases} t \tilde{r}(t) \ge \mu_1t\tilde{r}(\mu_1t) &t\in\mathbb{R}_+\\ t \tilde{r}(t) \le \mu_1t\tilde{r}(\mu_1t) &t\in\mathbb{R}_- \end{cases}} \Longrightarrow \dfrac{ \mu_1t\,\tilde{r}(\mu_1t)}{ t\tilde{r}(t)}\ \text{is decreasing in $t\in\mathbb{R}$.} \end{equation*}

This implies that $t \tilde{r}(t)$ is decreasing in $t\in\mathbb{R}$ and that ${\tilde{r}(\mu t)}/{\tilde{r}(t)}$ is decreasing in $t\in\mathbb{R}$ for each $\mu\ge 1$. Also noting that the equivalence relation of (6) implies that $n \tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it)$ $\Rightarrow$ $\mu_{1:n}\ge 1$, we complete the proof of necessity.

To show sufficiency, note that the monotonicity of $t \tilde{r}(t)$ and ${\tilde{r}(\mu t)}/{\tilde{r}(t)}$ for $\mu\ge 1$ implies that

\begin{equation*} \mu_{1:n}\ge 1 \Longrightarrow n \tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it)\quad\text{and}\quad\sum_{i=1}^n \dfrac{ \mu_i\,\tilde{r}(\mu_it)}{ \tilde{r}(t)}\ \text{is decreasing in $t\in\mathbb{R}$.}\end{equation*}

Combining with the implication that $n \tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it)$ implies $\mu_{1:n}\ge 1$ , we have

\begin{equation*} \mu_{1:n}\ge 1 \Longleftrightarrow n \tilde{r}(t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it). \end{equation*}

Thus (6) is satisfied. That is, Property 2 is satisfied. Hence we complete the proof.

Remark 2.

  1. (i) It is shown that the gamma distribution function with shape parameter not less than one, and hence the exponential distribution function, satisfies Property 1; see [Reference Zhao and Balakrishnan36] and [Reference Mao, Hu and Zhao24]. Meanwhile, it can be easily shown that the gamma distribution and hence the exponential distribution satisfy Property 2; see [Reference Mao, Hu and Zhao24].

  2. (ii) It is easy to show that if a distribution function F satisfies Property 1, then by Remark 1 we can immediately get

    \begin{equation*} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \prec_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \overline{\mu}\le 1.\end{equation*}
    Similarly, if F satisfies Property 2, then
    \begin{equation*} X_{n:n} \succ_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow 1 \le \mu_{1:n}.\end{equation*}

To show our main result, we first give a lemma that is useful for the proof of the main result.

Lemma 1. Let $\theta_i>0$ and $\mu_i>0$, $i=1,\ldots,n$, be such that ${\sum_{i=1}^n\theta_i}={\sum_{i=1}^n\theta_i\mu_i}$. Then for each $\delta>0$ there exist $\theta_i^*>0$, $i=1,\ldots,n$, such that $0<|\theta_i-\theta_i^*|\le \delta$, $i=1,\ldots,n$, and ${\sum_{i=1}^n\theta_i^*}\ge {\sum_{i=1}^n\theta_i^*\mu_i}$.

Proof. Without loss of generality, assume that $\mu_1\le\cdots\le\mu_n$; the result is trivial when $\mu_1=\cdots=\mu_n=1$, so assume the $\mu_i$ are not all equal to 1. From ${\sum_{i=1}^n\theta_i}={\sum_{i=1}^n\theta_i\mu_i}$ we have ${\sum_{i=1}^n\theta_i(\mu_i-1)}=0$, which implies that there exists $i_0\in\{1,\ldots,n\}$ such that $\mu_{i_0}< 1\le \mu_{i_0+1}$. Now take $\theta_i^*>\theta_i$ for $i\le i_0$ and $\theta_i^*<\theta_i$ for $i> i_0$, with $|\theta_i-\theta_i^*|\le\delta$ for all i. Then we have

\begin{equation*}{\sum_{i=1}^n\theta_i^*(\mu_i-1)} = {\sum_{i=1}^n(\theta_i^*-\theta_i)(\mu_i-1)}<0,\end{equation*}

where the equality follows from ${\sum_{i=1}^n\theta_i(\mu_i-1)}=0$ , and the inequality follows from $(\theta_i^*-\theta_i)(\mu_i-1)\le 0$ for all i and $(\theta_1^*-\theta_1)(\mu_1-1)<0$ . Hence we have ${\sum_{i=1}^n\theta_i^*}\ge {\sum_{i=1}^n\theta_i^*\mu_i}$ .
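The perturbation in this proof is easy to see numerically. The sketch below uses illustrative weights $\theta=(1,2,1)$ and $\mu=(0.5,1,1.5)$, for which $\sum_i\theta_i=\sum_i\theta_i\mu_i=4$, and bumps the weights exactly as in the proof: up where $\mu_i<1$, down where $\mu_i\ge 1$.

```python
import numpy as np

theta = np.array([1.0, 2.0, 1.0])
mu    = np.array([0.5, 1.0, 1.5])        # sorted; sum of theta_i*(mu_i - 1) is 0
assert np.isclose(theta.sum(), (theta * mu).sum())

delta = 0.05
# bump weights of the mu_i < 1 up and the rest down, as in the proof of Lemma 1
theta_star = theta + np.where(mu < 1.0, delta, -delta)

print(theta_star.sum() >= (theta_star * mu).sum())   # True
```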

Theorem 1. Let F be a distribution function and define $F_{\mu,\theta}(x) = F^{\theta}(\mu x)$, $x\in\mathbb{R}$, where $\mu,\,\theta>0$ are constants. Let $X_1,\ldots,X_n$ be n i.i.d. random variables such that $X_i\sim F_{\mu,\theta}$ and let $Y_1,\ldots,Y_n$ be n independent random variables such that $Y_i\sim F_{\mu_i,\theta_i}$, $i=1,\ldots,n$, with $n\theta=\theta_1+\cdots+\theta_n$.

  1. (i) If F satisfies Property 1, then

    \begin{equation*} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \prec_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu\theta\ge\dfrac1n{\sum_{i=1}^n\theta_i\mu_i}.\end{equation*}
  2. (ii) If F satisfies Property 2, then

    \begin{equation*} X_{n:n} \succ_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu \le \mu_{1:n}.\end{equation*}

Proof. We only give the proof of (i), as (ii) can be shown similarly and its proof is postponed to the Appendix for completeness. Now three cases arise.

(a) First assume that $\theta_i$ , $i=1,\ldots,n$ and ${\theta}$ are all integers. Let

\begin{equation*} Z_1,\ldots,Z_{n\theta},\quad \text{and}\quad W_1,\ldots,W_{\theta_1},\,W_{\theta_1+1},\ldots,W_{\theta_1+\theta_2},\ldots,W_{\theta_1+\cdots+\theta_n} \end{equation*}

be two independent samples of size $m \coloneqq n{\theta}=\theta_1+\cdots+\theta_n$ such that $Z_i\sim F_{\mu,1}$ for $i=1,\ldots,m$; $W_i \sim F_{\mu_1,1}$ for $i=1,\ldots,\theta_1$; $W_i \sim F_{\mu_2,1}$ for $i=\theta_1+1,\ldots,\theta_1+\theta_2$; $\cdots$; $W_i \sim F_{\mu_n,1}$ for $i=\theta_1+\cdots+\theta_{n-1}+1,\ldots,\theta_1+\cdots+\theta_n$. Comparing cumulative distribution functions shows that $Z_{m:m}$ and $W_{m:m}$ have the same distributions as $X_{n:n}$ and $Y_{n:n}$, respectively. By Property 1, we have $Z_{m:m}\prec_{\mathrm{lr}}W_{m:m}$ or $Z_{m:m}\prec_{\mathrm{rh}}W_{m:m}$ if and only if

\begin{equation*} \dfrac{\sum_{i=1}^n\theta_i\mu_i}{{n\theta}}= \dfrac1{m}\sum_{i=1}^{n} \sum_{j=1}^{\theta_i} \mu_i \le \mu. \end{equation*}

Then the result holds when $\theta_i$ , $i=1,\ldots,n$ , and $\theta$ are all integers.

(b) Next consider the case when $\theta_i$ , $i=1,\ldots,n$ and $\theta$ are all rational. Let $f_{X_{n:n}}$ and $g_{ Y_{n:n}}$ denote the density functions of $ X_{n:n}$ and $ Y_{n:n}$ , respectively. It can be verified that

\begin{equation*} \dfrac{g_{ Y_{n:n}}(x)}{f_{X_{n:n}}(x)} = \dfrac{G_{ Y_{n:n}}(x)}{F_{X_{n:n}}(x)} g(x), \end{equation*}

where $F_{X_{n:n}}$ and $G_{Y_{n:n}}$ denote the cumulative distribution functions of $X_{n:n}$ and $Y_{n:n}$, respectively:

\begin{equation*} \dfrac{G_{Y_{n:n}}(x)}{F_{X_{n:n}}(x)} = \dfrac{\prod_{i=1}^n F^{\theta_i}(\mu_i x)}{F^{n\theta}(\mu x)}=\!:\,h(x)\quad \text{and}\quad g(x) = \dfrac{\sum_{i=1}^n\theta_i\mu_i {{f(\mu_i x)}/{F(\mu_i x)}}}{n\mu\theta f(\mu x)/F(\mu x)}. \end{equation*}

As $\theta_i$, $i=1,\ldots,n$, and $\theta$ are all rational, there exists $N\in\mathbb{N}$ such that $N\theta_i$, $i=1,\ldots,n$, and $N\theta$ are all integers. Replacing each $\theta_i$ by $N\theta_i$ and $\theta$ by $N\theta$ leaves g unchanged and replaces h by $h^N$, and $h^N$ is increasing if and only if h is. Hence, by part (a), $X_{n:n} \prec_{\mathrm{lr}} Y_{n:n}$ if and only if $X_{n:n} \prec_{\mathrm{rh}} Y_{n:n}$, and each of them is equivalent to ${\sum_{i=1}^n\theta_i\mu_i}/{(n\theta)}\le \mu$.

(c) Finally, consider the general case. We first show that $\mu\ge {\sum_{i=1}^n\theta_i\mu_i}/{(n\theta)}$ is sufficient for $X_{n:n} \prec_{\mathrm{lr}} Y_{n:n}$ and $X_{n:n} \prec_{\mathrm{rh}} Y_{n:n}$ . We assert that there exist $\theta_1^k,\ldots,\theta_n^k$ and $\theta^k$ , $k\in\mathbb{N}$ , such that they are all rational and $\theta^k\rightarrow \theta$ , $\theta_i^k\rightarrow \theta_i$ as $k\to\infty$ , $i=1,\ldots,n$ , and

\begin{equation*} {{n\theta^k=\sum_{i=1}^n\theta_i^k}}\quad \text{and}\quad \mu\theta^k\ge \dfrac1n{\sum_{i=1}^n\theta_i^k\mu_i}. \end{equation*}

This is obviously true when

\begin{equation*} \mu\theta> \dfrac1n{\sum_{i=1}^n\theta_i\mu_i}{,} \end{equation*}

and from the proof of Lemma 1, it is also true when

\begin{equation*} \mu\theta= \dfrac1n{\sum_{i=1}^n\theta_i\mu_i}. \end{equation*}

Let $X_{n:n}^k$ and $Y_{n:n}^k$ denote the largest-order statistics of $X_1^k,\ldots,X_n^k$ and $Y_1^k,\ldots,Y_n^k$ , respectively, where both $X_1^k,\ldots,X_n^k$ and $Y_1^k,\ldots,Y_n^k$ are n independent random variables such that $X_i^k\sim F_{\mu,\theta^k}$ and $Y_i^k\sim F_{\mu_i,\theta_i^k}$ , $i=1,\ldots,n$ , $k\in\mathbb{N}$ . By the proof of (b), we have $X_{n:n}^k\prec_{\mathrm{lr}} Y_{n:n}^k$ and $X_{n:n}^k\prec_{\mathrm{rh}} Y_{n:n}^k$ for each $k\in\mathbb{N}$ . Note that $X_{n:n}^k\to_{\mathrm{st}} X_{n:n}$ and $Y_{n:n}^k\to_{\mathrm{st}} Y_{n:n}$ as $k\to\infty$ . Then, by Theorem 1.C.7 of [Reference Shaked and Shanthikumar33], $\mu\ge {\sum_{i=1}^n\theta_i\mu_i}/(n\theta)$ is sufficient for $X_{n:n} \prec_{\mathrm{lr}} Y_{n:n}$ and $X_{n:n} \prec_{\mathrm{rh}} Y_{n:n}$ .

We next show the necessity by contradiction. Suppose $\mu< {\sum_{i=1}^n\theta_i\mu_i}/(n\theta)$, and let $\tilde{r}_{\mu}$ denote the reversed hazard rate function of $X\sim F_{\mu,1}$. We complete the proof in two steps. Without loss of generality, we assume $\mu_1\le \cdots\le \mu_n$. Then, by Property 1, we know that $\tilde{r}_{\mu_1}(t)\ge\cdots\ge\tilde{r}_{\mu_n}(t)$ for all $t\in\mathbb{R}$.

First we consider the case when $\theta$ is rational. Then there exist rational numbers $\theta_i^*$, $i=1,\ldots,n$, with $\theta_1^*>\theta_1$ and $\theta_i^*<\theta_i$, $i=2,\ldots,n$, such that $\sum_{i=1}^n\theta_i^\ast=n\theta$ and, taking the $\theta_i^*$ sufficiently close to the $\theta_i$, $\mu<\sum_{i=1}^n\theta_i^\ast\mu_i/(n\theta)$. Let $Y_{n:n}^\ast$ denote the largest-order statistic of $Y_1^\ast,\ldots,Y_n^\ast$, where $Y_1^\ast,\ldots,Y_n^\ast$ are n independent random variables such that $Y_i^\ast\sim F_{\mu_i,\theta_i^\ast}$, $i=1,\ldots,n$. By part (b), $X_{n:n} \not\prec_{\mathrm{rh}} Y_{n:n}^\ast$, that is, there exists $t_0\in\mathbb{R}$ such that

\begin{equation*} {\tilde{r}_{_{X_{n:n}}}(t_0)} = {n\theta \tilde{r}_{\mu}(t_0)}> {\tilde{r}_{_{Y_{n:n}^\ast}}(t_0)} ={\sum_{i=1}^n{\theta_i^\ast \tilde{r}_{\mu_i}(t_0)}}. \end{equation*}

It can be verified that

\begin{align*} {\tilde{r}_{_{Y_{n:n}^\ast}}(t_0)} &= {\sum_{i=1}^n{\theta_i^\ast \tilde{r}_{\mu_i}(t_0)}} \\* &= {\sum_{i=1}^n (\theta_i^\ast-\theta_i) \tilde{r}_{\mu_i}(t_0)} + \tilde{r}_{_{Y_{n:n}}}(t_0) \\& \ge {\sum_{i=1}^n (\theta_i^\ast-\theta_i) \tilde{r}_{\mu_1}(t_0)} + \tilde{r}_{_{Y_{n:n}}}(t_0) \\*& =\tilde{r}_{_{Y_{n:n}}}(t_0), \end{align*}

where the inequality follows from $\tilde{r}_{\mu_1}(t)\ge\cdots\ge\tilde{r}_{\mu_n}(t)$ and $\theta_1^*>\theta_1, \theta_i^*<\theta_i$ , $i=2,\ldots,n$ , and the last equality follows from $\sum_{i=1}^n\theta_i^\ast=n\theta=\sum_{i=1}^n\theta_i$ . This implies $X_{n:n}\nprec_{\mathrm{rh}} Y_{n:n}$ .

Secondly, we consider the general case when $\theta$ may not be rational. There exists a rational number $\theta'>\theta$ such that $\mu<\sum_{i=1}^n\theta_i\mu_i/(n\theta')$. Let $\theta_1^{\prime}=\theta_1+n(\theta'-\theta)$, and let $X_{n:n}^{\prime}, Y_{n:n}^{\prime}$ denote the largest-order statistics of $X_1^{\prime},X_2^{\prime},\ldots,X_n^{\prime}$ and $Y_1^{\prime},Y_2,\ldots,Y_n$, respectively, where both $X_1^{\prime},X_2^{\prime},\ldots,X_n^{\prime}$ and $Y_1^{\prime},Y_2,\ldots,Y_n$ are n independent random variables such that $X_i^{\prime}\sim F_{\mu,\theta'}, i=1,\ldots,n$ and $Y_1^{\prime}\sim F_{\mu_1,\theta_1^{\prime}}$. It is easy to verify that $\mu<(\theta_1^{\prime}\mu_1+\sum_{i=2}^n\theta_i\mu_i)/(n\theta')$ and $\theta_1^{\prime}+\sum_{i=2}^n\theta_i=n\theta'$. By the first case, there exists $t_0\in\mathbb{R}$ such that

\begin{equation*}\dfrac{\tilde{r}_{_{Y_{n:n}^{\prime}}}(t_0)}{\tilde{r}_{_{X_{n:n}^{\prime}}}(t_0)}= \dfrac{\theta_1^{\prime} \tilde{r}_{\mu_1}(t_0)+\sum_{i=2}^n{\theta_i \tilde{r}_{\mu_i}(t_0)}}{n\theta' \tilde{r}_{\mu}(t_0)}<1 .\end{equation*}

Note that

\begin{equation*}\dfrac{\tilde{r}_{_{Y_{n:n}}}(t_0)}{\tilde{r}_{_{X_{n:n}}}(t_0)}= \dfrac{\theta_1^{\prime} \tilde{r}_{\mu_1}(t_0)+\sum_{i=2}^n{\theta_i \tilde{r}_{\mu_i}(t_0)}-n(\theta'-\theta)\tilde{r}_{\mu_1}(t_0)}{n\theta' \tilde{r}_{\mu}(t_0)-n(\theta'-\theta)\tilde{r}_{\mu}(t_0)}= \dfrac{\tilde{r}_{_{Y_{n:n}^{\prime}}}(t_0)-n(\theta'-\theta)\tilde{r}_{\mu_1}(t_0)} {\tilde{r}_{_{X_{n:n}^{\prime}}}(t_0)-n(\theta'-\theta)\tilde{r}_{\mu}(t_0)} .\end{equation*}

By $\tilde{r}_{\mu}(t_0)\le\tilde{r}_{\mu_1}(t_0)$ , we have ${\tilde{r}_{_{Y_{n:n}}}(t_0)}/{\tilde{r}_{_{X_{n:n}}}(t_0)}\le{\tilde{r}_{_{Y_{n:n}^{\prime}}}(t_0)}/{\tilde{r}_{_{X_{n:n}^{\prime}}}(t_0)}<1$ . This implies $X_{n:n}\nprec_{\mathrm{rh}} Y_{n:n}$ . Hence we complete the proof.
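Since the exponential distribution satisfies Property 1 (Remark 2), Theorem 1(i) can be sanity-checked numerically for the PRHR family $F_{\mu,\theta}(x)=(1-{\mathrm{e}}^{-\mu x})^\theta$: the reversed hazard rates of the two maxima are $n\theta\mu\,\tilde{r}(\mu t)$ and $\sum_{i=1}^n\theta_i\mu_i\,\tilde{r}(\mu_i t)$ with $\tilde{r}(t)={\mathrm{e}}^{-t}/(1-{\mathrm{e}}^{-t})$. A sketch with illustrative parameters satisfying $\mu\theta\ge\frac1n\sum_{i=1}^n\theta_i\mu_i$:

```python
import numpy as np

rh = lambda t: np.exp(-t) / (1.0 - np.exp(-t))   # Exp(1) reversed hazard rate

n, mu, theta = 2, 1.0, 1.0
mus    = np.array([0.4, 1.2])
thetas = np.array([1.5, 0.5])                    # sum equals n * theta
assert np.isclose(thetas.sum(), n * theta)
assert mu * theta >= (thetas * mus).sum() / n    # condition of Theorem 1(i)

t = np.linspace(0.05, 8.0, 400)
r_X = n * theta * mu * rh(mu * t)                # maximum of the homogeneous sample
r_Y = sum(th * m * rh(m * t) for th, m in zip(thetas, mus))
print(bool(np.all(r_X <= r_Y)))                  # True: rh comparison holds
```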

From the proof of Theorem 1, we can easily see that the first equivalence in (4) of Property 1 and the first equivalence in (5) of Property 2 can also be inherited by a PRHR model.

Proposition 2. Let F be a distribution function and define $F_{\mu,\theta}(x) = F^{\theta}(\mu x)$, $x\in\mathbb{R}$, where $\mu,\,\theta>0$ are constants. Let $X_1,\ldots,X_n$ be n i.i.d. random variables such that $X_i\sim F_{\mu,\theta}$ and let $Y_1,\ldots,Y_n$ be n independent random variables such that $Y_i\sim F_{\mu_i,\theta_i}$, $i=1,\ldots,n$, with $n\theta=\theta_1+\cdots+\theta_n$.

  1. (i) If F satisfies the equivalent relation in (4) of Property 1, then

    \begin{equation*} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow \mu\theta\ge\dfrac1n{\sum_{i=1}^n\theta_i\mu_i}.\end{equation*}
  2. (ii) If F satisfies the equivalent relation in (5) of Property 2, then

    \begin{equation*} X_{n:n} \succ_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow \mu \le \mu_{1:n}.\end{equation*}

From Theorem 1, we can easily get the following result by relaxing the constraint $n\theta=\sum_{i=1}^n\theta_i$ .

Corollary 1. Under the notation of Theorem 1, we have the following statements.

  1. (i) If F satisfies Property 1 and $n\theta\le\theta_1+\cdots+\theta_n$ , then

    \begin{equation*} \mu\theta\ge\dfrac1n{\sum_{i=1}^n\theta_i\mu_i} \Longrightarrow X_{n:n}\prec_{\mathrm{lr}} Y_{n:n},X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}.\end{equation*}
  2. (ii) If F satisfies Property 2 and $n\theta\ge\theta_1+\cdots+\theta_n$ , then

    \begin{equation*} \mu \le \mu_{1:n} \Longrightarrow X_{n:n} \succ_{\mathrm{lr}} Y_{n:n}, X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}.\end{equation*}

Proof. We only give the proof of (i), as (ii) can be shown similarly. Let $\theta^* \coloneqq \sum_{i=1}^n\theta_i/n\ge \theta.$ Then

\begin{equation*} \mu\theta^*\ge\mu\theta\ge\dfrac1n\sum_{i=1}^n\theta_i\mu_i. \end{equation*}

We define $X_{n:n}^*$ as the largest-order statistic of $X_1^*,X_2^*,\ldots,X_n^*$ such that $X_i^*\sim F_{\mu,\theta^*}$, $i=1,\ldots,n$. Then, by Theorem 1, we have $X_{n:n}^*\prec_{\mathrm{rh}}Y_{n:n}$. Note that the reversed hazard rate function of $X_{n:n}$ is increasing in $\theta>0$, and hence $X_{n:n}\prec_{\mathrm{rh}}Y_{n:n}.$ Also, note that

\begin{equation*}g(x) = \dfrac{\sum_{i=1}^n\theta_i\mu_i {{f(\mu_i x)}/{F(\mu_i x)}}}{n\mu\theta f(\mu x)/F(\mu x)}=\dfrac{\sum_{i=1}^n\theta_i\mu_i {{f(\mu_i x)}/{F(\mu_i x)}}}{n\mu\theta^* f(\mu x)/F(\mu x)}\dfrac{\theta^*}{\theta} =\!:\, g^*(x)\dfrac{\theta^*}{\theta}.\end{equation*}

Then g(x) is also increasing as $g^*(x)$ is increasing from the proof of Theorem 1. Hence we have $X_{n:n}\prec_{\mathrm{lr}}Y_{n:n}.$ This completes the proof.

3. Applications

In this section we investigate the applications of the main result to the exponentiated generalized gamma (EGG) distribution and the exponentiated Pareto distribution.

3.1. EGG distribution

From Theorem 1, we can immediately get the following result for the EGG distribution. The result has been given by Haidari and Najafabadi [Reference Haidari and Najafabadi18], whose proof is technical and requires several lemmas. Recall that a random variable X is said to have the EGG distribution with shape parameters $\theta>0$, $\lambda>0$, $r>0$, and scale parameter $\mu>0$, denoted by $X\sim$ EGG $(\theta,\lambda,r,\mu)$, if it has the following cumulative distribution function:

(7) \begin{equation} G(x) = \biggl(\int_0^x \dfrac{\lambda \mu^r}{\Gamma(r/\lambda)} t^{r-1} {\mathrm{e}}^{-(\mu t)^\lambda} \,{\mathrm{d}} t \biggr)^\theta,\quad x>0.\end{equation}
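Substituting $u=(\mu t)^\lambda$ in (7) shows that G is a power of a regularized lower incomplete gamma function, $G(x) = P(r/\lambda, (\mu x)^\lambda)^{\theta}$, which is convenient for computation. A quick numerical check of this identity against direct quadrature (a sketch using SciPy, with illustrative parameter values):

```python
import numpy as np
from scipy.special import gammainc, gamma   # gammainc(a, x) = regularized lower P(a, x)
from scipy.integrate import quad

theta, lam, r, mu = 2.0, 1.5, 3.0, 0.8      # illustrative EGG parameters

def G_closed(x):
    """EGG cdf via the incomplete-gamma identity."""
    return gammainc(r / lam, (mu * x) ** lam) ** theta

def G_quad(x):
    """EGG cdf by numerically integrating the density in (7)."""
    dens = lambda t: lam * mu**r / gamma(r / lam) * t**(r - 1) * np.exp(-(mu * t)**lam)
    return quad(dens, 0.0, x)[0] ** theta

for x in (0.5, 1.0, 2.0):
    assert abs(G_closed(x) - G_quad(x)) < 1e-6
print("identity verified")
```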

It is easy to see that the class of EGG distributions contains the Weibull distribution, the gamma distribution, and hence the exponential distribution.

  1. (i) If $\theta=\lambda=1$ , then the EGG distribution reduces to the gamma distribution, $\Gamma(\mu,r)$ , with the density function

    \begin{equation*} f(x)= \dfrac{ \mu^r}{\Gamma(r)} x^{r-1} {\mathrm{e}}^{-\mu x} ,\quad x>0. \end{equation*}
  2. (ii) If $\lambda=r$ , $\theta=1$ , then the EGG distribution reduces to the Weibull distribution, ${\mathrm{Wel}}(\lambda,\mu)$ , with the density function

    (8) \begin{equation} f(x) = \lambda\mu^\lambda x^{\lambda-1} {\mathrm{e}}^{-(\mu x)^\lambda},\quad x>0.\end{equation}
  3. (iii) If $\lambda=\theta=r=1$ , then the EGG distribution reduces to the exponential distribution, ${\mathrm{Exp}}(\mu)$ , with the density function

    \begin{equation*} f(x)= \mu {\mathrm{e}}^{-\mu x} ,\quad x>0. \end{equation*}

Proposition 3. Let $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ be two samples of independent random variables such that $X_i\sim {\mathrm{EGG}}(\theta,\lambda, r, \mu)$ and $Y_i\sim {\mathrm{EGG}}(\theta_i,\lambda, r, \mu_i)$ , where $\theta_i>0,\, \theta>0$ , $\lambda>0$ , $r>0$ , $\mu_i>0$ , $\mu>0$ , $i=1,\ldots,n$ , and $n\theta=\theta_1+\cdots+\theta_n$ .

  1. (i) For $0<\lambda\le r$ , we have

    \begin{equation*} X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu \ge \Biggl( \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i^\lambda\Biggr)^{1/\lambda}.\end{equation*}
  2. (ii) For $\lambda,\, r>0$ , we have

    \begin{equation*} X_{n:n} \succ_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu \le \mu_{1:n}.\end{equation*}
  3. (iii) For $\lambda,\, r>0$ , we have

    \begin{equation*}X_{n:n} \prec_{\mathrm{st}} Y_{n:n}\Longleftrightarrow \mu^{n\theta}\ge \prod_{i=1}^n\mu_i^{\theta_i}.\end{equation*}
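Part (iii) is a statement about cumulative distribution functions, namely $F_{X_{n:n}}(x)=P(r/\lambda,(\mu x)^\lambda)^{n\theta}\ge F_{Y_{n:n}}(x)=\prod_{i=1}^n P(r/\lambda,(\mu_i x)^\lambda)^{\theta_i}$, so it can be checked directly on a grid. A sketch with illustrative parameters satisfying $\mu^{n\theta}\ge \prod_{i=1}^n\mu_i^{\theta_i}$:

```python
import numpy as np
from scipy.special import gammainc   # regularized lower incomplete gamma P(a, x)

lam, r, theta = 2.0, 3.0, 1.0
mu, mus, thetas = 1.0, np.array([0.5, 1.8]), np.array([1.0, 1.0])
n = len(mus)
assert mu ** (n * theta) >= np.prod(mus ** thetas)   # condition of (iii)

x = np.linspace(0.05, 6.0, 300)
F_X = gammainc(r / lam, (mu * x) ** lam) ** (n * theta)
F_Y = np.prod([gammainc(r / lam, (m * x) ** lam) ** th
               for m, th in zip(mus, thetas)], axis=0)
print(bool(np.all(F_X >= F_Y)))   # True: usual stochastic order holds
```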

Proof. Let F be the distribution function of a gamma distribution $\Gamma(1,r/\lambda)$. Then $F_{\mu^{\lambda},\theta}$ is the distribution function of ${\mathrm{EGG}}(\theta,1,r/\lambda,\mu^{\lambda})$ defined by (7). Note that, for a random variable X, $X^\lambda\sim {\mathrm{EGG}}(\theta,1,r/\lambda,\mu^\lambda)$ if and only if $X\sim {\mathrm{EGG}}(\theta,\lambda,r,\mu)$, and that $\prec_{\mathrm{lr}}$ and $\prec_{\mathrm{rh}}$ are preserved under increasing transforms. By Remark 2, if $\lambda\le r$ then $\Gamma(1,r/\lambda)$ satisfies Property 1, and hence, by Theorem 1, for $\lambda\le r$ we have

\begin{equation*}X_{n:n} \prec_{\mathrm{lr}} Y_{n:n} \Longleftrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu^\lambda \ge \dfrac{\sum_{i=1}^n\theta_i\mu_i^\lambda}{n\theta}.\end{equation*}

That is, (i) holds true. Similarly, we can show that (ii) holds true. Part (iii) follows from Theorem 3.2 of [Reference Khaledi, Farsinezhad and Kochar20], arguments similar to those in the proof of Theorem 1, and the above observations. We next show the necessity in (iii) by contradiction. Assume that $\mu^{n\theta}< \prod_{i=1}^n\mu_i^{\theta_i}$ . Note that

\begin{align*} \lim_{x\to0} \dfrac{F_{Y_{n:n}}(x)}{F_{X_{n:n}}(x)} &= \lim_{x\to0} \dfrac{\prod_{i=1}^n \big(\!\int_0^x {{\lambda \mu_i^r}\,{\Gamma(r/\lambda)}^{-1}} t^{r-1} {\mathrm{e}}^{-(\mu_i t)^\lambda} \,{\mathrm{d}} t \big)^{\theta_i}}{\big(\!\int_0^x {{ \mu^r}\,{\Gamma(r/\lambda)}^{-1}} t^{r-1} {\mathrm{e}}^{-(\mu t)^\lambda} \,{\mathrm{d}} t \big)^{n\theta}}\\[3pt] & = \prod_{i=1}^n\biggl( \lim_{x\to0} \dfrac{\int_0^x \mu_i^r t^{r-1} {\mathrm{e}}^{-(\mu_i t)^\lambda} \,{\mathrm{d}} t }{\int_0^x \mu^r t^{r-1} {\mathrm{e}}^{-(\mu t)^\lambda} \,{\mathrm{d}} t }\biggr)^{\theta_i}\\[3pt] & = \prod_{i=1}^n\biggl( \lim_{x\to0} \dfrac{ \mu_i^r x^{r-1} {\mathrm{e}}^{-(\mu_i x)^\lambda} }{ \mu^r x^{r-1} {\mathrm{e}}^{-(\mu x)^\lambda} }\biggr)^{\theta_i} \\[3pt] &= \prod_{i=1}^n\biggl(\dfrac{ \mu_i^r }{ \mu^r }\biggr)^{\theta_i}\\* & >1. \end{align*}

This yields a contradiction with $X_{n:n} \prec_{\mathrm{st}} Y_{n:n}$ . Hence (iii) follows.

With Proposition 3 in hand, we immediately get the following result for the Weibull distribution, a special case of the EGG distribution, whose density function is given by (8).

Corollary 2. Let $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ be two samples of independent random variables such that $X_i\sim {\mathrm{Wel}}(\lambda,\mu), Y_i\sim {\mathrm{Wel}}(\lambda,\mu_i),$ where $\lambda>0, \mu>0, \mu_i>0, i=1,2,\ldots,n.$ Then

\begin{equation*}X_{n:n}\prec_{\mathrm{lr}} Y_{n:n}\Longleftrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}\Longleftrightarrow \mu\ge \Biggl(\dfrac1n\sum_{i=1}^n{\mu_i^\lambda}\Biggr)^{1/\lambda}.\end{equation*}

Remark 3. It is worth noting that Corollary 2 strengthens Theorem 3.2 of [Reference Zhao, Zhang and Qiao37]. They present a sufficient condition for $X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}$ when $0<\lambda\le1$ , that is,

\begin{equation*} \mu\ge \dfrac1n\sum_{i=1}^n{\mu_i} \Longrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}.\end{equation*}

It is easy to verify that

\begin{equation*} \dfrac1n\sum_{i=1}^n{\mu_i}\ge \Biggl(\dfrac1n\sum_{i=1}^n{\mu_i^\lambda}\Biggr)^{1/\lambda} \end{equation*}

if $0<\lambda\le 1$ . The condition

\begin{equation*} \mu\ge \dfrac1n\sum_{i=1}^n{\mu_i} \end{equation*}

is therefore sufficient, but not necessary, for $X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}$ .
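The power-mean inequality invoked in Remark 3 can be spot-checked numerically. The sketch below (the random ranges and sample size are arbitrary choices of ours, not from the paper) draws random $\mu_i$ and $\lambda\in(0,1]$ and verifies the comparison:

```python
import numpy as np

# Spot-check of the power-mean inequality of Remark 3: for 0 < lam <= 1,
# the arithmetic mean of mu_1, ..., mu_n dominates their lam-power mean.
rng = np.random.default_rng(0)
for _ in range(1000):
    mus = rng.uniform(0.1, 10.0, size=5)
    lam = rng.uniform(0.01, 1.0)
    arith = mus.mean()
    power = np.mean(mus ** lam) ** (1.0 / lam)
    assert arith >= power - 1e-9
```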

We give some examples to illustrate the result of Proposition 3.

Example 1. Let $n=5$ , and let $X_1,\ldots,X_5$ and $Y_1,\ldots,Y_5$ be two samples of independent random variables such that $X_i\sim {\mathrm{EGG}}(\theta,\lambda, r, \mu)$ and $Y_i\sim {\mathrm{EGG}}(\theta_i,\lambda, r, \mu_i)$ , $i=1,\ldots,5$ . In Figure 1 we plot the ratio of cumulative distribution functions and the ratio of the density functions for different parameters. Let $F_{X_{5:5}}$ and $F_{Y_{5:5}}$ (resp. $f_{X_{5:5}}$ and $f_{Y_{5:5}}$ ) denote the cumulative distribution functions (resp. density functions) of $ {X_{5:5}}$ and $ {Y_{5:5}}$ , respectively. The parameters are taken as in the following four cases. One can find that all the examples are consistent with Proposition 3.

Figure 1. Ratio of cumulative distribution functions and densities for the EGG distribution.

  1. (i) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=1.5, \,\mu=2, \,r=2, \mu_1=3, \mu_2=\mu_3=\mu_4=2, \mu_5=1$ . One can verify that

    \begin{equation*} \mu \ge \Biggl( \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i^\lambda\Biggr)^{1/\lambda}. \end{equation*}
    Hence, by Proposition 3(i), we have $X_{5:5}\prec_{\mathrm{lr}} Y_{5:5}$ and $X_{5:5}\prec_{\mathrm{rh}} Y_{5:5}.$ The ratios $F_{Y_{5:5}}/F_{X_{5:5}}$ and $f_{Y_{5:5}}/f_{X_{5:5}}$ are both increasing.
  2. (ii) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=1.5, \,\mu=2, \,r=2, \mu_1=1, \mu_2=\mu_3=\mu_4=2, \mu_5=3$ . One can verify that

    \begin{equation*} \mu < \Biggl( \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i^\lambda\Biggr)^{1/\lambda}. \end{equation*}
By Proposition 3(i), we have $X_{5:5}\not\prec_{\mathrm{lr}} Y_{5:5}$ and $X_{5:5}\not\prec_{\mathrm{rh}} Y_{5:5}.$ Neither the ratio $F_{Y_{5:5}}/F_{X_{5:5}}$ nor $f_{Y_{5:5}}/f_{X_{5:5}}$ is monotone.
  3. (iii) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=2, \,\mu=1.5, \,r=1, \,\mu_1=\mu_2=\mu_3=2$ , $\mu_4=\mu_5=3$ . One can verify that $ \mu \le \mu_{1:5}$ . By Proposition 3(ii), we have $X_{5:5} \succ_{\mathrm{lr}} Y_{5:5}$ and $X_{5:5} \succ_{\mathrm{rh}} Y_{5:5}.$ The ratios $F_{Y_{5:5}}/F_{X_{5:5}}$ and $f_{Y_{5:5}}/f_{X_{5:5}}$ are both decreasing.

  4. (iv) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=2, \,\mu=2, \,r=1, \,\mu_1=1,\, \mu_2=\mu_3=\mu_4=2, \,\mu_5=3$ . One can verify that $ \mu> \mu_{1:5}$ . By Proposition 3(ii), we have $X_{5:5}\not\succ_{\mathrm{lr}} Y_{5:5}$ and $X_{5:5}\not\succ_{\mathrm{rh}} Y_{5:5}.$ Neither the ratio $F_{Y_{5:5}}/F_{X_{5:5}}$ nor $f_{Y_{5:5}}/f_{X_{5:5}}$ is monotone.
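The characterization in Proposition 3(i) can also be spot-checked numerically. The sketch below (assuming NumPy and SciPy; the helper `egg_logcdf` and the evaluation grid are our own devices, not from the paper) uses the fact that the EGG cdf can be written through the regularized lower incomplete gamma function, and verifies, for the parameters of case (i) above, that the threshold condition holds and that the ratio $F_{Y_{5:5}}/F_{X_{5:5}}$ is increasing:

```python
import numpy as np
from scipy.special import gammainc

# The EGG(theta, lam, r, mu) cdf equals P(r/lam, (mu*x)**lam) ** theta,
# where P is the regularized lower incomplete gamma function.
def egg_logcdf(x, theta, lam, r, mu):
    return theta * np.log(gammainc(r / lam, (mu * x) ** lam))

# Parameters of Example 1(i): theta_i = i, mu_i = (3, 2, 2, 2, 1).
lam, r, mu, theta = 1.5, 2.0, 2.0, 3.0
thetas = np.arange(1, 6)
mus = np.array([3.0, 2.0, 2.0, 2.0, 1.0])

# Condition of Proposition 3(i).
threshold = (np.sum(thetas * mus ** lam) / (5 * theta)) ** (1 / lam)
assert mu >= threshold

# X_{5:5} <=_rh Y_{5:5}  <=>  F_{Y_{5:5}} / F_{X_{5:5}} is increasing.
x = np.linspace(0.05, 3.0, 300)
log_ratio = sum(egg_logcdf(x, t, lam, r, m) for t, m in zip(thetas, mus)) \
            - egg_logcdf(x, 5 * theta, lam, r, mu)
assert np.all(np.diff(log_ratio) > 0)
```

Replacing `mus` by the parameters of case (ii) makes the monotonicity check fail, consistent with the equivalence in Proposition 3(i).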

Example 2. We employ the notation of Example 1. In Figure 2, for $n=5$ , we compare the cumulative distribution functions of $ {X_{5:5}}$ and $ {Y_{5:5}}$ for different parameters. We consider the following two examples. One can find that all the examples are consistent with Proposition 3(iii).

Figure 2. Cumulative distribution function for the EGG distribution.

  1. (i) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=1, \mu=1, r=2, \,\mu_1=5,\,\mu_2=2,\,\mu_3=1$ , $\mu_4=\mu_5=2$ . One can verify that $\mu^{n\theta}\ge \prod_{i=1}^n\mu_i^{\theta_i}.$ Then, by Proposition 3(iii), we have $X_{n:n} \prec_{\mathrm{st}} Y_{n:n}$ . The cumulative distribution functions of $ {X_{5:5}}$ and $ {Y_{5:5}}$ are plotted in Figure 2(a), from which we can find that $F_{X_{n:n}} \ge F_{ Y_{n:n}}$ .

  2. (ii) $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\lambda=2, \mu=2, r=1, \,\mu_1=1,\,\mu_2=2,\,\mu_3=2$ , $\mu_4=\mu_5=3$ . One can verify that $\mu^{n\theta}< \prod_{i=1}^n\mu_i^{\theta_i}.$ Then, by Proposition 3(iii), we have $X_{n:n} \not\prec_{\mathrm{st}} Y_{n:n}$ . The cumulative distribution functions of $ {X_{5:5}}$ and $ {Y_{5:5}}$ are plotted in Figure 2(b), from which we can find that neither of $F_{X_{n:n}}$ and $F_{ Y_{n:n}}$ dominates the other.

3.2. Exponentiated Pareto distribution

Let $G_{\mu,\theta}$ be a distribution given by

\begin{equation*} G_{\mu,\theta}(x) = ( 1-(1+\mu x)^{-\beta})^\theta,\quad x>0, \quad \beta,\,\mu,\,\theta>0.\end{equation*}

We call $G_{\mu,\theta}$ an exponentiated Pareto distribution with tail index $\beta>0$ , denoted by ${\mathrm{EP}}(\beta,\mu,\theta)$ [Reference Nadarajah29, Reference Nadarajah, Jiang and Chu30]. When $\mu=\theta=1$ ,

(9) \begin{equation}G_{1,1}(x)=1- (1+x)^{-\beta},\quad x>0,\end{equation}

is called a simple Pareto distribution with tail index $\beta>0$ (it is also called a Lomax distribution; see [Reference Kotz, Balakrishnan and Johnson22]). Pareto distributions are among the most popular models in finance, economics, and related areas, where they are commonly used to model quantities such as income, risk, and prices. It is easy to see that $G_{1,\theta}$ is the PRHR model with the Pareto distribution as baseline. We will show that a simple Pareto distribution with tail index $\beta>0$ satisfies the first equivalence in (4) of Property 1.

Lemma 2. Let $F=G_{1,1}$ be the simple Pareto distribution defined by (9) with $\beta>0$ . Then F satisfies the equivalent relation of Property 1.

Proof. We first prove the implication $\Longrightarrow$ of the equivalence. Let $\tilde{r}$ denote the reversed hazard rate function of F. Note that

\begin{align*}\tilde{r}(x)=\dfrac{\beta (x+1)^{-\beta-1}}{1-(1+x)^{-\beta}} = \dfrac1{1+x}\times\dfrac{\beta }{(1+x)^\beta-1}.\end{align*}

We first show that $x\tilde{r}(x)$ is decreasing and convex in $x\in\mathbb{R}_+$ . We take the derivative of $\lambda(x) =\!:\, x\tilde{r}(x)/\beta$ :

\begin{equation*}\lambda'(x) = \dfrac{(1-\beta x)(1+x)^\beta-1}{(1+x)^2((1+x)^\beta-1)^2} =\!:\, \dfrac{h(x)}{(1+x)^2((1+x)^\beta-1)^2}.\end{equation*}

Note that $h'(x)=-\beta x(1+\beta)(1+x)^{\beta-1}<0$ for $x>0$ and $h(0)=0$ . We have $\lambda'(x)<0$ , which implies that $x\tilde{r}(x)$ is decreasing in $x\in\mathbb{R}_+$ . Also note that $\lambda'(x)=h(x)(\tilde{r}(x))^2/\beta^2$ . It then follows that

\begin{align*}\beta^2\lambda^{\prime\prime}(x) & = h'(x)(\tilde{r}(x))^2+ 2h(x)\tilde{r}(x)\tilde{r}'(x)\\[2pt] & =\dfrac{-\beta(1+\beta )x(1+x)^{\beta-1}}{(1+x)^2((1+x)^\beta-1)^2}-\dfrac{2((1-\beta x)(1+x)^\beta-1)((\beta+1)(1+x)^\beta-1)}{(1+x)^3((1+x)^\beta-1)^3}\\[2pt] &\stackrel{\mathrm{sgn}}= (\!-\beta(1+\beta )x(1+x)^{\beta-1})(1+x)((1+x)^\beta-1)\\[2pt] &\quad\, -2((1-\beta x)(1+x)^\beta-1)((\beta+1)(1+x)^\beta-1)\\[2pt] & =((\beta+1)(\beta x-2)(1+x)^{\beta}+(\beta+1)(\beta x+2)+2(1-\beta x))(1+x)^\beta-2\\[2pt] & =\!:\, w(x),\end{align*}

where $A\stackrel{\mathrm{sgn}}=B$ indicates that A and B have the same sign. One can verify that $w(0)=w'(0)=w^{\prime\prime}(0)=0$ , and

\begin{equation*}w^{\prime\prime\prime}(x)\stackrel{\mathrm{sgn}}=(1+x)^{\beta-2}({-}(\beta-1)^2+(1+\beta x)(\beta+1)(2\beta+1))\ge 0,\quad x\in\mathbb{R}_+.\end{equation*}

Hence we have $w(x)\ge 0$ for $x\in\mathbb{R}_+$ , and hence $x\tilde{r}(x)$ is decreasing and convex in $x\in\mathbb{R}_+$ . Then, for any $\mu_1,\ldots,\mu_n\in\mathbb{R}_+$ such that $\mu_1+\cdots+\mu_n\le n$ (i.e. $\overline{\mu}\le 1$ ), we have

(10) \begin{equation} nx\,\tilde{r}(x) \le n\overline{\mu} x\,\tilde{r}(\overline{\mu} x) \le \sum_{i=1}^n \mu_ix\,\tilde{r}(\mu_ix),\end{equation}

where the two inequalities follow from the facts that $x\tilde{r}(x)$ is decreasing and convex in $x\in\mathbb{R}_+$ , respectively. Dividing (10) by $x$ gives $n\tilde{r}(x) \le \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_ix)$ , that is, $\tilde{r}_{X_{n:n}}(x)\le \tilde{r}_{Y_{n:n}}(x)$ . Then we have $X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}$ , which completes the proof of the implication $\Longrightarrow$ .

We next prove the implication $\Longleftarrow$ . For $\mu_1,\ldots,\mu_n\in\mathbb{R}_+$ , let $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ be two samples of independent random variables such that $X_i\sim G_{1,1}$ and $Y_i\sim G_{\mu_i,1}$ , where $\mu_i>0$ , $i=1,\ldots,n$ . Let $\tilde{r}_{X_{n:n}}$ and $\tilde{r}_{Y_{n:n}}$ denote the reversed hazard rate functions of order statistics $X_{n:n}$ and $Y_{n:n}$ , respectively. Then, by Taylor’s expansion, we have

\begin{equation*}\tilde{r}_{Y_{n:n}}(x) = \sum_{i=1}^n \mu_i\dfrac{\beta (\mu_ix+1)^{-\beta-1}}{1-(1+\mu_ix)^{-\beta}} = \dfrac n{x} -\sum_{i=1}^n \mu_i\dfrac{\beta+1}{2} + o(1) \quad \text{as}\ x\to0,\end{equation*}

and

\begin{equation*} \tilde{r}_{X_{n:n}}(x) =\dfrac{n\beta (x+1)^{-\beta-1}}{1-(1+x)^{-\beta}} =n \biggl(\dfrac1{x}-\dfrac{\beta+1}{2}\biggr) + o(1) \quad \text{as}\ x\to0.\end{equation*}

It then follows from $X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}$ (i.e. $\tilde{r}_{X_{n:n}}\le \tilde{r}_{Y_{n:n}}$ ) that $\sum_{i=1}^n \mu_i\le n$ . Hence we complete the proof.
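The analytic core of Lemma 2, that $g(x)=x\tilde{r}(x)$ is decreasing and convex for the Lomax baseline, can be checked numerically. The sketch below (the grid and the values of $\beta$ are our own illustrative choices) tests finite differences of $g(x)=\beta x/((1+x)((1+x)^\beta-1))$:

```python
import numpy as np

# Numerical check of the key step in Lemma 2: for the Lomax baseline
# F(x) = 1 - (1+x)^(-beta), the function g(x) = x * rtilde(x)
# = beta * x / ((1+x) * ((1+x)**beta - 1)) is decreasing and convex.
x = np.linspace(0.01, 5.0, 500)
for beta in (0.5, 1.0, 2.0, 5.0):
    g = beta * x / ((1.0 + x) * ((1.0 + x) ** beta - 1.0))
    assert np.all(np.diff(g) < 0)       # decreasing
    assert np.all(np.diff(g, 2) > 0)    # convex: positive second differences
```

For $\beta=1$ the check is transparent, since then $g(x)=1/(1+x)$, which is plainly decreasing and convex.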

By Lemma 2 and Theorem 1, we can immediately get the following result for the exponentiated Pareto distribution.

Proposition 4. Let $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ be two samples of independent random variables such that $X_i\sim {\mathrm{EP}}(\beta,\mu,\theta)$ and $Y_i\sim {\mathrm{EP}}(\beta,\mu_i,\theta_i)$ , where $\theta_i>0,\, \theta>0$ , $\beta>0$ , $\mu_i>0$ , $\mu>0$ , $i=1,\ldots,n$ , and $n\theta=\theta_1+\cdots+\theta_n$ . Then

(11) \begin{equation}\mu \ge \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i\Longleftrightarrow X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}\end{equation}

and

(12) \begin{equation} \mu \le \mu_{1:n} \Longrightarrow X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}.\end{equation}

Proof. It is easy to see that (11) follows from Lemma 2 and Theorem 1. To see (12), let $\tilde{r}$ denote the reversed hazard rate function of $F=G_{1,1}$ defined by (9). Note that in the proof of Lemma 2 we have already shown that $x\tilde{r}(x)$ is decreasing in $x\in\mathbb{R}_+$ . This implies that, for any $\mu_1,\ldots,\mu_n\in\mathbb{R}_+$ such that $\mu_{1:n}\ge \mu$ , we have $\mu t\tilde{r}(\mu t) \ge \mu_it\,\tilde{r}(\mu_it)$ , $i=1,\ldots,n$ , and thus

\begin{equation*}n\mu\tilde{r}(\mu t) \ge \sum_{i=1}^n \mu_i\,\tilde{r}(\mu_it).\end{equation*}

That is, $ \tilde{r}_{_{X_{n:n}}}(t) \ge \tilde{r}_{_{Y_{n:n}}}(t) .$ Hence (12) holds.

We give some examples to illustrate the result of Proposition 4.

Example 3. Let $n=5$ , and let $X_1,\ldots,X_5$ and $Y_1,\ldots,Y_5$ be two samples of independent random variables such that $X_i\sim {\mathrm{EP}}(\beta,\mu,\theta)$ and $Y_i\sim {\mathrm{EP}}(\beta,\mu_i,\theta_i)$ , $i=1,\ldots,5$ . In Figure 3 we plot the ratio of cumulative distribution functions and the ratio of the density functions for different parameters. Let $F_{X_{5:5}}$ and $F_{Y_{5:5}}$ denote the cumulative distribution functions of $ {X_{5:5}}$ and $ {Y_{5:5}}$ , respectively. The parameters are taken as in the following three cases. One can find that all the examples are consistent with Proposition 4.

Figure 3. Ratio of cumulative distribution functions for the EP distribution.

  1. (i) $\beta=2$ , $\theta=3$ , $\theta_i=i$ , $i=1,\ldots,5$ , $\mu=2$ , $\mu_1=3$ , $\mu_2=\mu_3=\mu_4=2$ , $\mu_5=1$ . One can verify that

    \begin{equation*} \mu \ge \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i, \end{equation*}
and thus, by (11) of Proposition 4, we have $X_{5:5}\prec_{\mathrm{rh}} Y_{5:5}.$ The ratio $F_{Y_{5:5}}/F_{X_{5:5}}$ is increasing.
  2. (ii) $\beta=2, \theta=3, \theta_i=i$ , $i=1,\ldots,5$ , $\mu=2$ , $\mu_1=\mu_2=\mu_3= \mu_4=2$ , $\mu_5=3$ . One can verify that $ \mu \le \mu_{1:5}$ . Then, by (12) of Proposition 4, we have $X_{5:5}\succ_{\mathrm{rh}} Y_{5:5}.$ The ratio $F_{Y_{5:5}}/F_{X_{5:5}}$ is decreasing.

  3. (iii) $\beta=2, \theta=3, \theta_i=i$ , $i=1,\ldots,5$ , $\mu=2$ , $\mu_1=\mu_2=\mu_3=1$ , $\mu_4=3$ , $\mu_5=3$ . One can verify that

    \begin{equation*} \mu<\dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i. \end{equation*}
    Then, by (11) of Proposition 4, $X_{5:5}\not\prec_{\mathrm{rh}} Y_{5:5}.$ The ratio $F_{Y_{5:5}}/F_{X_{5:5}}$ is not monotone, so $X_{5:5}\not\succ_{\mathrm{rh}} Y_{5:5}$ either. Since $ \mu > \mu_{1:5}$ , this shows that the condition $ \mu \le \mu_{1:n}$ in (12) of Proposition 4 cannot be relaxed in general.
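Proposition 4 can likewise be spot-checked numerically. The sketch below (the helper `ep_logcdf` and the evaluation grid are our own devices) verifies, for the parameters of case (i) above, that condition (11) holds and that the ratio $F_{Y_{5:5}}/F_{X_{5:5}}$ is increasing:

```python
import numpy as np

# log of the EP(beta, mu, theta) cdf, (1 - (1 + mu*x)**(-beta))**theta.
def ep_logcdf(x, theta, mu, beta):
    return theta * np.log1p(-(1.0 + mu * x) ** (-beta))

beta, theta, mu = 2.0, 3.0, 2.0          # parameters of Example 3(i)
thetas = np.arange(1, 6)                 # theta_i = i
mus = np.array([3.0, 2.0, 2.0, 2.0, 1.0])

# Condition (11) of Proposition 4.
assert mu >= np.sum(thetas * mus) / (5 * theta)

# X_{5:5} <=_rh Y_{5:5}  <=>  F_{Y_{5:5}} / F_{X_{5:5}} is increasing.
x = np.linspace(0.05, 5.0, 400)
log_ratio = sum(ep_logcdf(x, t, m, beta) for t, m in zip(thetas, mus)) \
            - ep_logcdf(x, 5 * theta, mu, beta)
assert np.all(np.diff(log_ratio) > 0)
```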

3.3. Reinsurance

Starting from the seminal work of [Reference Borch9], the study of optimal reinsurance has drawn significant interest from both practicing actuaries and academics (see e.g. [Reference Bernard, He, Yan and Zhou5], [Reference Cai, Tan, Weng and Zhang7], [Reference Cheung, Sung, Yam and Yung12], and [Reference Chi and Tan13]). Optimal reinsurance is an important risk management tool for insurance companies, especially in catastrophe insurance.

In this section we consider the following reinsurance model. Let $X_1,\ldots,X_n$ be n independent random losses which represent random claims faced by an insurance company. We assume that each of the $X_i$ follows a regularly varying distribution. Recall that a distribution F is called a regularly varying distribution if there exists $\alpha>0$ such that

\begin{equation*} \lim_{t\to\infty} \dfrac{\overline{F}(tx)}{\overline{F}(t)} = x^{-\alpha}\quad \text{for all $x>0$,}\end{equation*}

where $\overline{F}=1-F$ is the survival function. This is denoted by $\overline{F}\in {\mathrm{RV}}_{-\alpha}$ . Pareto distributions, or more generally exponentiated Pareto distributions, are typical examples of regularly varying distributions. We point out that if the random claims represent the risks of a portfolio or the claims of catastrophe risks, it is well acknowledged that those risks have the properties of leptokurtosis and fat tails. It is typical to use regularly varying distributions to model such risks (see e.g. [Reference Embrechts, Klüppelberg and Mikosch16]). It has been verified that, for such risks, the largest-order statistic has tail properties similar to those of the sum of the risks. That is, for n independent random variables $X_1,\ldots,X_n$ following regularly varying distributions (even with different indexes), we have

(13) \begin{equation} \mathbb{P}(X_1+\cdots+X_n>x) \sim \mathbb{P}(X_{n:n}>x)\sim\sum_{i=1}^n\mathbb{P}(X_i>x)\quad \text{as}\ x\to\infty,\end{equation}

where $f(x)\sim g(x)$ represents $\lim_{x\to\infty} f(x)/g(x)=1$ (see e.g. [Reference Embrechts, Klüppelberg and Mikosch16]). Hence we assume that the insurer is only concerned with the risk of the largest risk $X_{n:n}$ and wants to transfer part of $X_{n:n}$ to a reinsurance company and pay a premium. That is, in the presence of n risks $X_1,\ldots,X_n$ , the insurer is concerned with an optimal partition of $X_{n:n}$ into two parts, $I(X_{n:n})$ and $R_I (X_{n:n})$ , where $I(X_{n:n})$ satisfying $0 \le I(X_{n:n})\le X_{n:n}$ represents the amount of loss ceded to the reinsurer and $R_I (X_{n:n})=X_{n:n}-I(X_{n:n})$ is the loss retained by the insurer. Meanwhile the insurer incurs an additional cost in the form of a reinsurance premium which is payable to the reinsurer. We use $\pi$ to represent the reinsurance premium principle. Then the risk of the insurer is $R_I (X_{n:n}) +\pi(I(X_{n:n}))$ . Here we consider the following typical optimal reinsurance framework studied by Cai et al. [Reference Cai, Tan, Weng and Zhang7]. The functional I takes the form of a stop-loss contract, that is,

\begin{equation*}I\in\mathcal I \coloneqq \{I_d\mid I_d(x)= (x-d)_+,\ d\in\mathbb{R}\}\end{equation*}

and $\pi(X)=(1+ \rho){\mathbb{E}}[X]$ , where $\rho>0$ is a loading factor. Then the risk of the insurer is

(14) \begin{equation}\pi(X_{n:n},d) \coloneqq R_I (X_{n:n}) +\pi(I(X_{n:n}))=\min\{X_{n:n},d\} + (1+\rho){\mathbb{E}}[(X_{n:n}-d)_+]. \end{equation}

The insurer wants to minimize his/her risk as follows:

(15) \begin{equation}\min_{d\in\mathbb{R}} {\mathrm{VaR}}_{\alpha}(\pi(X_{n:n},d)),\end{equation}

where ${\mathrm{VaR}}_\alpha(X)$ is the Value-at-Risk (VaR) of a risk X defined as

\begin{equation*}{\mathrm{VaR}}_{\alpha}(X) = \inf\{x\in\mathbb{R}\colon \mathbb{P}(X\le x)\ge \alpha\}.\end{equation*}

Let $d^*_{\mathrm{VaR}}(X_{n:n})$ denote the optimal value of d of the above optimization problem (15). We assume that $d^*_{\mathrm{VaR}}(X_{n:n})=\infty$ if the optimal solution to the minimization problem (15) does not exist.
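The tail equivalence (13), which motivates working with $X_{n:n}$ in place of the aggregate loss, can be illustrated by simulation. The following sketch (all constants here are illustrative choices of ours, not from the paper) compares Monte Carlo tail probabilities of the sum and the maximum of two i.i.d. Lomax losses with the exact single-loss tail:

```python
import numpy as np

# Monte Carlo sketch of the tail equivalence (13) for two i.i.d. Lomax
# losses with tail index beta = 2: far in the tail, P(X1 + X2 > x) and
# P(max(X1, X2) > x) are both close to 2 * P(X1 > x).
rng = np.random.default_rng(1)
beta, x0, n_sim = 2.0, 50.0, 2_000_000   # hypothetical illustration values
u = rng.random((n_sim, 2))
losses = (1.0 - u) ** (-1.0 / beta) - 1.0   # inverse-cdf Lomax sampling

tail = (1.0 + x0) ** (-beta)                # exact P(X1 > x0)
p_sum = np.mean(losses.sum(axis=1) > x0)
p_max = np.mean(losses.max(axis=1) > x0)

# Both ratios should be near 1 (the sum is slightly heavier at finite x0).
assert 0.7 < p_sum / (2 * tail) < 1.5
assert 0.7 < p_max / (2 * tail) < 1.3
```

Since the losses are nonnegative, $\{X_{2:2}>x_0\}\subseteq\{X_1+X_2>x_0\}$, so the sum's tail estimate always dominates that of the maximum.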

We now study the properties of $d^*_{\mathrm{VaR}}(X_{n:n})$ with respect to the sample $X_1,\ldots,X_n$ . To state this, let $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ be two samples of independent random variables, and let $X_{n:n}$ and $Y_{n:n}$ denote their largest-order statistics, respectively. Define $ \pi(X_{n:n},d)$ and $ \pi(Y_{n:n},d)$ as in (14), and let $d^*_{\mathrm{VaR}}(X_{n:n})$ and $d^*_{\mathrm{VaR}}(Y_{n:n})$ denote the optimal values of d for the samples $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ , respectively. That is,

(16) \begin{equation}d^*_{\mathrm{VaR}}(X_{n:n})=\inf\arg\min_{d\in\mathbb{R}} {\mathrm{VaR}}_\alpha(\pi(X_{n:n},d))\quad \text{and}\quad d^*_{\mathrm{VaR}}(Y_{n:n})=\inf\arg\min_{d\in\mathbb{R}} {\mathrm{VaR}}_\alpha(\pi(Y_{n:n},d)).\end{equation}

Here we consider only the infimum of the optimal values of d, as the optimal values are not unique in general (see e.g. [Reference Cai, Tan, Weng and Zhang7]). By Proposition 4, we can immediately obtain the following result.

Proposition 5. Under the conditions of Proposition 4, if the optimal solutions to the optimization problems in (16) exist, then the following statements hold.

  1. (i) If

    \begin{equation*} \mu \ge \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i, \end{equation*}
    then $ d^*_{\mathrm{VaR}}(X_{n:n})\le d^*_{\mathrm{VaR}}(Y_{n:n}).$
  2. (ii) If $ \mu \le \mu_{1:n} $ , then $ d^*_{\mathrm{VaR}}(X_{n:n})\ge d^*_{\mathrm{VaR}}(Y_{n:n}).$

Proof. Denote $\rho^*=\rho/(1+\rho)$ . It is easy to verify that both $X_{n:n}$ and $Y_{n:n}$ are positive random variables, and thus $\rho^*>0=\mathbb{P}(X_{n:n}\le 0)$ . Noting also that the optimal solutions to the optimization problems in (16) exist, the conditions of Theorem 3.1(i,ii) of [Reference Cai, Tan, Weng and Zhang7] are satisfied. Then, by Theorem 3.1(i,ii) of [Reference Cai, Tan, Weng and Zhang7], we have $d^*_{\mathrm{VaR}}(X_{n:n})= {\mathrm{VaR}}_{\rho^*}(X_{n:n})$ and $d^*_{\mathrm{VaR}}(Y_{n:n})= {\mathrm{VaR}}_{\rho^*}(Y_{n:n})$ . On the other hand, by (11) of Proposition 4, if

\begin{equation*}\mu \ge \dfrac{1}{n\theta}\sum_{i=1}^n\theta_i\mu_i,\end{equation*}

then $X_{n:n}\prec_{\mathrm{rh}} Y_{n:n}$ , which implies $X_{n:n}\prec_{\mathrm{st}} Y_{n:n}$ . That is, ${\mathrm{VaR}}_{p}(X_{n:n})\le {\mathrm{VaR}}_{p}(Y_{n:n})$ for any $p\in (0,1)$ ; in particular, $d^*_{\mathrm{VaR}}(X_{n:n})\le d^*_{\mathrm{VaR}}(Y_{n:n})$ . Also, by (12) of Proposition 4, if $ \mu \le \mu_{1:n} $ , then $X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}$ , which implies $X_{n:n} \succ_{\mathrm{st}} Y_{n:n}$ . That is, ${\mathrm{VaR}}_{p}(X_{n:n})\ge {\mathrm{VaR}}_{p}(Y_{n:n})$ for any $p\in (0,1)$ , and hence $d^*_{\mathrm{VaR}}(X_{n:n})\ge d^*_{\mathrm{VaR}}(Y_{n:n})$ . Thus we complete the proof.
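The identity $d^*_{\mathrm{VaR}}(X_{n:n})={\mathrm{VaR}}_{\rho^*}(X_{n:n})$ used in the proof can be illustrated numerically. The sketch below is a hypothetical configuration of ours (five i.i.d. simple Pareto losses with $\beta=3$, $\theta=\mu=1$, $\rho=0.2$, $\alpha=0.95$; the grid search and quadrature are our own devices): it minimizes ${\mathrm{VaR}}_\alpha(\pi(X_{n:n},d))$ by brute force and compares the minimizer with ${\mathrm{VaR}}_{\rho^*}(X_{n:n})$:

```python
import numpy as np
from scipy.integrate import quad

n, beta, rho, alpha = 5, 3.0, 0.2, 0.95
rho_star = rho / (1.0 + rho)

def cdf_max(x):            # F_{X_{n:n}}(x) = (1 - (1+x)^(-beta))^n
    return (1.0 - (1.0 + x) ** (-beta)) ** n

def var_max(p):            # VaR_p(X_{n:n}) by inverting cdf_max
    return (1.0 - p ** (1.0 / n)) ** (-1.0 / beta) - 1.0

def objective(d):
    # VaR_alpha(min(X, d) + (1+rho) E[(X-d)_+]); min(., d) is a
    # nondecreasing continuous transform, so VaR commutes with it.
    stop_loss = quad(lambda x: 1.0 - cdf_max(x), d, np.inf)[0]
    return min(var_max(alpha), d) + (1.0 + rho) * stop_loss

d_star = var_max(rho_star)                 # theoretical optimal retention
grid = np.linspace(0.0, 3.0, 601)
d_hat = grid[np.argmin([objective(d) for d in grid])]
assert abs(d_hat - d_star) < 0.05
```

The brute-force minimizer agrees with ${\mathrm{VaR}}_{\rho^*}(X_{n:n})$ up to the grid resolution, matching the closed-form optimum cited from [Reference Cai, Tan, Weng and Zhang7].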

Remark 4.

  1. (i) We can similarly consider the optimization problem (15) based on the expected shortfall (ES), which is defined as

    \begin{equation*} {\mathrm{ES}}_\alpha(X)=\dfrac1{1-\alpha}\int_\alpha^1 {\mathrm{VaR}}_{u}(X) \,{\mathrm{d}} u,\quad \alpha\in(0,1). \end{equation*}
    That is, consider the optimization problem
    \begin{equation*} \min_{d\in\mathbb{R}} {\mathrm{ES}}_{\alpha}(\pi(X_{n:n},d)).\end{equation*}
    Then we can obtain a result similar to Proposition 5 by Theorem 4.1 of [Reference Cai, Tan, Weng and Zhang7].
    Then we can obtain a result similar to Proposition 5 by Theorem 4.1 of [Reference Cai, Tan, Weng and Zhang7].
  2. (ii) If $X_1,\ldots,X_n$ and $Y_1,\ldots,Y_n$ follow the EGG distribution, then by Proposition 3 we can show a result similar to Proposition 5. However, the EGG distribution is a light-tailed distribution, and does not have the property shown in (13). Thus we do not state the result here.

4. Conclusion

In this paper, by introducing two properties in terms of the reversed hazard rate function of a (baseline) distribution, and showing that they are inherited by the proportional reversed hazard rate (PRHR) model, we have investigated stochastic comparisons of parallel systems with respect to the reversed hazard rate and likelihood ratio orders for the PRHR model. We then applied the results to two popular PRHR models: the EGG distribution and the exponentiated Pareto distribution. Based on the well-established result on the reversed hazard rate function of the gamma distribution, we obtained a necessary and sufficient condition for stochastic comparisons of the largest-order statistics for EGG distributions. As a by-product, we obtained a similar equivalent characterization for Weibull distributions. Our results recover the recent result established in [Reference Haidari and Najafabadi18] and strengthen the result for Weibull distributions given in [Reference Zhao, Zhang and Qiao37]. We also investigated properties of the reversed hazard rate function of Pareto distributions. Combined with the inheritance property of the PRHR model, this gives the (equivalent) characterization of stochastic comparisons in terms of the likelihood ratio (reversed hazard rate) order for exponentiated Pareto distributions.

Appendix A. Proof of Theorem 1(ii)

The proof is similar to that of Theorem 1(i). We show the result for the following three cases.

(i) First consider the case when the $\theta_i$ and $\theta$ are all integers. Let

\begin{equation*}Z_1,\ldots,Z_{\theta_1},\,Z_{\theta_1+1},\ldots,Z_{\theta_1+\theta_2},\ldots,Z_{\theta_1+\cdots+\theta_n}\end{equation*}

be $m \coloneqq n\theta=\theta_1+\cdots+\theta_n$ i.i.d. random variables with $Z_1\sim F$ , and define $W_i = Z_i/\mu_1$ for $i=1,\ldots,\theta_1$ , $W_i = Z_i/\mu_2$ for $i=\theta_1+1,\ldots,\theta_1+\theta_2$ , …, and $W_i = Z_i/\mu_n$ for $i=\theta_1+\cdots+\theta_{n-1}+1,\ldots,\theta_1+\cdots+\theta_n$ . Comparing cumulative distribution functions, $Z_{m:m}$ and $W_{m:m}$ have the same distributions as $ X_{n:n}$ and $Y_{n:n}$ , respectively. By Property 1, we have $Z_{m:m}\succ_{\mathrm{lr}}W_{m:m}$ or $Z_{m:m}\succ_{\mathrm{rh}}W_{m:m}$ if and only if

\begin{equation*} \mu\le \mu_{1:n}. \end{equation*}

Then the desired result holds for the case when the $\theta_i$ and $\theta$ are all integers.

(ii) Next, consider the case when the $\theta_i$ and $\theta$ are all rational. Let $f_{X_{n:n}}$ and $g_{ Y_{n:n}}$ denote the density functions of $ X_{n:n}$ and $ Y_{n:n}$ , respectively. It can be verified that

\begin{equation*} \dfrac{g_{ Y_{n:n}}(x)}{f_{X_{n:n}}(x)} = \dfrac{G_{ Y_{n:n}}(x)}{F_{X_{n:n}}(x)} g(x), \end{equation*}

where $F_{X_{n:n}}$ and $G_{ Y_{n:n}}$ denote the cumulative distribution functions of $ X_{n:n}$ and $ Y_{n:n}$ , respectively:

\begin{equation*} \dfrac{G_{Y_{n:n}}(x)}{F_{X_{n:n}}(x)} = \dfrac{\prod_{i=1}^n F^{\theta_i}(\mu_i x)}{F^{n\theta}(\mu x)} =\!:\, h(x)\quad \text{and}\quad {g(x) = \dfrac{\sum_{i=1}^n\theta_i\mu_i {{f(\mu_i x)}/{F(\mu_i x)}}}{n\theta \mu f(\mu x)/F(\mu x)}.} \end{equation*}

As the $\theta_i$ and $\theta$ are all rational, there exists N such that $N\theta_i$ , $i=1,\ldots,n$ , and $N\theta$ are all integers. Hence, by the above proof for the case when the $\theta_i$ and $\theta$ are all integers, we have that $ (h(x))^N$ , and hence h(x), and $g(x)=Ng(x)/N$ are decreasing in x if and only if $\mu\le \mu_{1:n}$ .

(iii) Finally we show the result for the general case. Note that there exist $\theta^k,\theta_1^k,\ldots,\theta_n^k$ , $k\in\mathbb{N}$ , such that they are rational and

\begin{equation*}\theta^k\to \theta,\theta_i^k\to \theta_i \quad \text{as}\ k\to\infty,\ i=1,\ldots,n.\end{equation*}

Let $X_{n:n}^k$ and $Y_{n:n}^k$ denote the largest-order statistics of $X_1^k,\ldots,X_n^k$ and $Y_1^k,\ldots,Y_n^k$ , where both $X_1^k,\ldots,X_n^k$ and $Y_1^k,\ldots,Y_n^k$ are n independent random variables such that $X_i^k\sim F_{\mu,\theta^k}, Y_i^k\sim F_{\mu_i,\theta_i^k}$ , $i=1,\ldots,n$ , $k\in\mathbb{N}$ . By the above proof, $\mu\le\mu_{1:n}$ is sufficient for $X_{n:n} \succ_{\mathrm{lr}} Y_{n:n}$ and $X_{n:n} \succ_{\mathrm{rh}} Y_{n:n}$ .

We next show the necessity by contradiction. Without loss of generality, assume that $\mu_1\le\cdots\le\mu_n$ . By Property 2, we also have $\tilde{r}(at)\le\tilde{r}(bt)$ , $t\in\mathbb{R}_+$ , if and only if $a\ge b$ . Further, if $\mu>\mu_1$ and $\theta$ is rational, then there exist rational $\theta_1^k\uparrow\theta_1$ and $\theta_i^k\downarrow\theta_i$ as $ k\to\infty$ , $i=2,\ldots,n$ , such that $\sum_{i=1}^n\theta_i^k=n\theta$ for every $k\in\mathbb{N}$ . Let $Y_{n:n}^k$ denote the largest-order statistic of $Y_1^k,\ldots,Y_n^k$ , where $Y_1^k,\ldots,Y_n^k$ are n independent random variables such that $Y_i^k\sim F_{\mu_i,\theta_i^k}$ , $i=1,\ldots,n$ , $k\in\mathbb{N}$ . In the same way as the above proof, there exists $t_0\in\mathbb{R}_+$ such that

\begin{equation*} \dfrac{\tilde{r}_{_{X_{n:n}}}(t_0)}{\tilde{r}_{_{Y_{n:n}}}(t_0)} \le\dfrac{\tilde{r}_{_{X_{n:n}}}(t_0)}{\tilde{r}_{_{Y_{n:n}^1}}(t_0)} <1 .\end{equation*}

This means that $X_{n:n}\nsucc_{\mathrm{rh}} Y_{n:n}$ (and hence $X_{n:n}\nsucc_{\mathrm{lr}} Y_{n:n}$ ).

If $\theta$ is irrational, there exists a rational number $\theta'<\theta$ sufficiently close to $\theta$ that $\theta_1^{\prime}\coloneqq\theta_1-n(\theta-\theta')>0$ . Let $X_{n:n}^{\prime}$ and $Y_{n:n}^{\prime}$ denote the largest-order statistics of $X_1^{\prime},X_2^{\prime},\ldots,X_n^{\prime}$ and $Y_1^{\prime},Y_2,\ldots,Y_n$ , respectively, where both $X_1^{\prime},X_2^{\prime},\ldots,X_n^{\prime}$ and $Y_1^{\prime},Y_2,\ldots,Y_n$ are n independent random variables such that $X_i^{\prime}\sim F_{\mu,\theta'}$ , $i=1,\ldots,n$ , and $Y_1^{\prime}\sim F_{\mu_1,\theta_1^{\prime}}$ , $Y_j\sim F_{\mu_j,\theta_j}$ , $j=2,\ldots,n$ . By the above proof, there exists $t_0\in\mathbb{R}_+$ such that

\begin{equation*}\dfrac{\tilde{r}_{_{X_{n:n}^{\prime}}}(t_0)}{\tilde{r}_{_{Y_{n:n}^{\prime}}}(t_0)}= \dfrac{n\theta' \tilde{r}(\mu t_0)}{\theta_1^{\prime} \tilde{r}(\mu_1 t_0)+\sum_{i=2}^n{\theta_i \tilde{r}(\mu_i t_0)}}<1 ,\end{equation*}

and

\begin{equation*}\dfrac{\tilde{r}_{_{X_{n:n}}}(t_0)}{\tilde{r}_{_{Y_{n:n}}}(t_0)}= \dfrac{n\theta' \tilde{r}(\mu t_0)+n(\theta-\theta')\tilde{r}(\mu t_0)}{\theta_1^{\prime} \tilde{r}(\mu_1 t_0)+\sum_{i=2}^n{\theta_i \tilde{r}(\mu_i t_0)}+n(\theta-\theta')\tilde{r}(\mu_1 t_0)}= \dfrac{\tilde{r}_{_{X_{n:n}^{\prime}}}(t_0)+n(\theta-\theta')\tilde{r}(\mu t_0)} {\tilde{r}_{_{Y_{n:n}^{\prime}}}(t_0)+n(\theta-\theta')\tilde{r}(\mu_1 t_0)}<1 .\end{equation*}

This means that $X_{n:n}\nsucc_{\mathrm{rh}} Y_{n:n}$ (and hence $X_{n:n}\nsucc_{\mathrm{lr}} Y_{n:n}$ ).

Acknowledgement

The authors are grateful for the support from the National Natural Science Foundation of China (grants 71671176, 71871208, and 71921001).

Footnotes

1 For Weibull distributions, although this is not the original version of [Reference Zhao, Zhang and Qiao37], it is trivial to see that (3) for a Weibull distribution is equivalent to the original version.

References

Al-Hussaini, E. K. and Ahsanullah, M. (2015). Exponentiated Distributions (Atlantis Studies in Probability and Statistics 21). Atlantis Press, Paris.Google Scholar
Balakrishnan, N. and Rao, C. R. (1998). Order Statistics: Theory and Methods (Handbook of Statistics 16). Elsevier, New York.Google Scholar
Balakrishnan, N. and Rao, C. R. (1998). Order Statistics: Applications (Handbook of Statistics 17). Elsevier, New York.Google Scholar
Balakrishnan, N. and Zhao, P. (2013). Ordering properties of order statistics from heterogeneous populations: a review with an emphasis on some recent developments. Prob. Eng. Inf. Sci. 27 (4), 403443.10.1017/S0269964813000156CrossRefGoogle Scholar
Bernard, C., He, X., Yan, J. A. and Zhou, X. Y. (2015). Optimal insurance design under rank-dependent expected utility. Math. Finance 25, 154186.CrossRefGoogle Scholar
Cai, J. and Wei, W. (2012). Optimal reinsurance with positively dependent risks. Insurance Math. Econom. 50 (1), 5763.CrossRefGoogle Scholar
Cai, J., Tan, K. S., Weng, C. and Zhang, Y. (2008). Optimal reinsurance under VaR and CTE risk measures. Insurance Math. Econom. 43 (1), 185196.10.1016/j.insmatheco.2008.05.011CrossRefGoogle Scholar
Bon, J. L. and PĂltĂnea, E. (2006). Comparison of order statistics in a random sequence to the same statistics with i.i.d. variables. ESAIM Prob. Statist. 10, 110.CrossRefGoogle Scholar
Borch, K. (1962). Equilibrium in a reinsurance market. Econometrica 30, 424444.CrossRefGoogle Scholar
Chateauneuf, A., Cohen, M. and Meilijson, I. (2004). Four notions of mean-preserving increase in risk, risk attitudes and applications to the rank-dependent expected utility model. J. Math. Econom. 40, 547–571.
Chateauneuf, A., Cohen, M. and Meilijson, I. (2005). More pessimism than greediness: a characterization of monotone risk aversion in the rank-dependent expected utility model. Economic Theory 25 (3), 649–667.
Cheung, K. C., Sung, K. C. J., Yam, S. C. P. and Yung, S. P. (2014). Optimal reinsurance under general law-invariant risk measures. Scand. Actuarial J. 2014, 72–91.
Chi, Y. and Tan, K. S. (2011). Optimal reinsurance under VaR and CVaR risk measures: a simplified approach. ASTIN Bull. 41, 487–509.
David, H. A. and Nagaraja, H. N. (1970). Order Statistics. John Wiley.
Di Crescenzo, A. (2000). Some results on the proportional reversed hazards model. Statist. Prob. Lett. 50 (4), 313–321.
Embrechts, P., Klüppelberg, C. and Mikosch, T. (1997). Modelling Extremal Events for Finance and Insurance. Springer, Berlin.
Gupta, R. C. and Gupta, R. D. (2007). Proportional reversed hazard rate model and its applications. J. Statist. Planning Infer. 137 (11), 3525–3536.
Haidari, A. and Najafabadi, A. (2019). Characterization ordering results for largest order statistics from heterogeneous and homogeneous exponentiated generalized gamma. Prob. Eng. Inf. Sci. 33, 460–470.
Kalbfleisch, J. D. and Lawless, J. F. (1989). Inference based on retrospective ascertainment: an analysis of the data on transfusion-related AIDS. J. Amer. Statist. Assoc. 84 (406), 360–372.
Khaledi, B. E., Farsinezhad, S. and Kochar, S. C. (2011). Stochastic comparisons of order statistics in the scale model. J. Statist. Planning Infer. 141 (1), 276–286.
Kochar, S. (2012). Stochastic comparisons of order statistics and spacings: a review. ISRN Prob. Statist. 2012, 839473, 1–47.
Kotz, S., Balakrishnan, N. and Johnson, N. L. (2004). Continuous Multivariate Distributions, Vol. 1: Models and Applications. John Wiley.
Mao, T. and Hu, T. (2010). Equivalent characterizations on orderings of order statistics and sample ranges. Prob. Eng. Inf. Sci. 24 (2), 245–262.
Mao, T., Hu, T. and Zhao, P. (2010). Ordering convolutions of heterogeneous exponential and geometric distributions revisited. Prob. Eng. Inf. Sci. 24 (3), 329–348.
Marshall, A. W., Olkin, I. and Arnold, B. C. (1979). Inequalities: Theory of Majorization and its Applications. Academic Press, New York.
Misra, N. and Misra, A. K. (2013). On comparison of reversed hazard rates of two parallel systems comprising of independent gamma components. Statist. Prob. Lett. 83 (6), 1567–1570.
Mudholkar, G. S. and Srivastava, D. K. (1993). Exponentiated Weibull family for analyzing bathtub failure-rate data. IEEE Trans. Reliab. 42 (2), 299–302.
Mudholkar, G. S., Srivastava, D. K. and Freimer, M. (1995). The exponentiated Weibull family: a reanalysis of the bus-motor-failure data. Technometrics 37 (4), 436–445.
Nadarajah, S. (2005). Exponentiated Pareto distributions. Statistics 39 (3), 255–260.
Nadarajah, S., Jiang, X. and Chu, J. (2017). Comparisons of smallest order statistics from Pareto distributions with different scale and shape parameters. Ann. Operat. Res. 254 (1–2), 191–209.
Navarro, J. (2016). Stochastic comparisons of generalized mixtures and coherent systems. Test 25 (1), 150–169.
Pledger, G. and Proschan, F. (1971). Comparisons of order statistics and of spacings from heterogeneous distributions. In Optimizing Methods in Statistics, pp. 89–113. Academic Press.
Shaked, M. and Shanthikumar, J. G. (2007). Stochastic Orders (Springer Series in Statistics). Springer.
Wang, B. X., Yu, K. and Coolen, F. P. (2015). Interval estimation for proportional reversed hazard family based on lower record values. Statist. Prob. Lett. 98, 115–122.
Zhao, P. and Balakrishnan, N. (2012). Stochastic comparisons of largest order statistics from multiple-outlier exponential models. Prob. Eng. Inf. Sci. 26 (2), 159–182.
Zhao, P. and Balakrishnan, N. (2014). A stochastic inequality for the largest order statistics from heterogeneous gamma variables. J. Multivar. Anal. 129, 145–150.
Zhao, P., Zhang, Y. and Qiao, J. (2016). On extreme order statistics from heterogeneous Weibull variables. Statistics 50 (6), 1376–1386.
Figure 1. Ratio of cumulative distribution functions and densities for the EP distribution.

Figure 2. Cumulative distribution function for the EGG distribution.

Figure 3. Ratio of cumulative distribution functions for the EP distribution.