
MONOTONICITY IN LAG FOR NONMONOTONE MARKOV CHAINS

Published online by Cambridge University Press:  01 October 2004

Rafał Kulik
Affiliation:
Mathematical Institute, University of Wrocław, Wrocław, Poland, E-mail: rkuli@math.uni.wroc.pl, and Department of Mathematics and Statistics, University of Ottawa, Ottawa, Ontario K1N 6N5, Canada, E-mail: Rafal.Kulik@science.uottawa.ca

Abstract

It is well known that for a stochastically monotone Markov chain {Jn}n≥1, the function γ(n) = Cov[f(J1), g(Jn)] is decreasing if f and g are increasing. We prove this property for a special subclass of nonmonotone doubly stochastic Markov chains.

Type
Research Article
Copyright
© 2004 Cambridge University Press

1. INTRODUCTION

Since Daley's article [2], it is well known that a stochastically monotone Markov chain {Jn}n≥1 has the following property. Let f : ℝ → ℝ be monotone and γ(n) = Cov[f(J1), f(Jn)]. Then the map n → γ(n) is decreasing. Recall that a real-valued homogeneous discrete-time Markov chain {Jn}n≥1 is stochastically monotone if its one-step transition probability function P(Jn+1 > x | Jn = y) is nondecreasing in y for every fixed x. Daley's result was extended in Bergmann and Stoyan [1]. Hu and Joe [3] (see also Joe [5, Thms. 8.3, 8.4, and 8.7]) stated conditions for the concordance ordering of bivariates of Markov chains under some monotonicity assumptions. The most general result was given in Hu and Pan [4]: if both the Markov chain and its time-reversed counterpart are stochastically monotone, then γ(n1,…,nm) = E[f(Jn1,…,Jnm)] is decreasing in (n1,…,nm) coordinatewise for each m ≥ 2 and supermodular f. We refer to Müller and Stoyan [10] for a review of recent work in this area. The monotonicity of a Markov chain is also sufficient for its association (Lindqvist [7]). However, monotonicity is not necessary; see Lindqvist's article for a counterexample.

In this article, we define a family of stationary and homogeneous Markov chains which are not necessarily monotone. For that family, we derive the monotonicity of bivariates with respect to the supermodular order and, hence, the monotonicity of covariances. In particular, we show that the answer to a question raised in [9] is negative.

The article is organized as follows. In Section 2, we collect some needed definitions and preliminary results. In Section 3, we define the family of Markov chains and prove the main result. Section 4 is devoted to examples and counterexamples. Some additional comparison results for our class are given in Section 5.

2. PRELIMINARIES

Define for 1 ≤ i ≤ m and ε > 0 a difference operator Δiε by

Δ_i^ε f(u_1,…,u_m) = f(u_1,…,u_{i−1}, u_i + ε, u_{i+1},…,u_m) − f(u_1,…,u_m)

for given u1,…,um. A function f : ℝm → ℝ is called supermodular if for all 1 ≤ i < j ≤ m and εi, εj > 0,

Δ_i^{ε_i} Δ_j^{ε_j} f(u) ≥ 0

for all u = (u1,…,um). For smooth functions, the above condition is equivalent to

∂²f(u)/∂u_i ∂u_j ≥ 0

for all 1 ≤ i < j ≤ m. The standard examples of supermodular functions are u1 × ··· × um, −max(u1,…,um), min(u1,…,um), and h(u1 + ··· + um), where h : ℝ → ℝ is convex.
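As a quick numerical sanity check of this definition, the mixed-difference condition can be evaluated on a finite grid. The following Python sketch is only an illustration (the helper name supermodular_on_grid is ours); it confirms the condition for some of the standard examples listed above.

import itertools
import numpy as np

def supermodular_on_grid(f, grid, eps=1.0):
    # Check the mixed-difference condition Delta_i Delta_j f(u) >= 0
    # for all grid points u and all coordinate pairs i < j.
    m = len(grid[0])
    for u in grid:
        u = np.asarray(u, dtype=float)
        for i, j in itertools.combinations(range(m), 2):
            ei, ej = eps * np.eye(m)[i], eps * np.eye(m)[j]
            if f(u + ei + ej) - f(u + ei) - f(u + ej) + f(u) < -1e-12:
                return False
    return True

grid = list(itertools.product(range(-3, 4), repeat=2))
print(supermodular_on_grid(lambda u: (u[0] + u[1]) ** 2, grid))  # h(u1 + u2), h convex: True
print(supermodular_on_grid(lambda u: min(u), grid))              # min: True
print(supermodular_on_grid(lambda u: -max(u), grid))             # -max: True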

This class of functions induces the so-called supermodular order. For arbitrary random vectors X = (X1,…,Xm) and Y = (Y1,…,Ym), we write X <sm Y if

E[φ(X_1,…,X_m)] ≤ E[φ(Y_1,…,Y_m)]

for all supermodular φ such that the respective expectations are finite. Similarly, for stationary random sequences {Xn}n≥1 and {Yn}n≥1, we write {Xn}n≥1 <sm {Yn}n≥1 if for all m ≥ 1,

(X_1,…,X_m) <sm (Y_1,…,Y_m).

Supermodular ordering is a dependence ordering in the sense of Joe [5]. In particular, if X <sm Y, then

Cov[X_i, X_j] ≤ Cov[Y_i, Y_j],   i, j = 1,…,m.
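For intuition about the ordering, here is a small self-contained numerical example (ours, with an arbitrarily chosen three-point state space): with identical uniform marginals on {1, 2, 3}, the independent coupling is dominated by the comonotone coupling, so the expectation of any supermodular φ can only increase; the script below verifies this for φ(x, y) = xy and φ(x, y) = min(x, y).

import numpy as np

states = np.array([1.0, 2.0, 3.0])
indep = np.full((3, 3), 1.0 / 9.0)       # P(X = i, Y = j) under independence
comon = np.diag(np.full(3, 1.0 / 3.0))   # comonotone coupling: X = Y almost surely

def expectation(phi, joint):
    # E[phi(X, Y)] for a joint distribution on {1, 2, 3} x {1, 2, 3}
    return sum(joint[i, j] * phi(states[i], states[j])
               for i in range(3) for j in range(3))

for phi in (lambda x, y: x * y, lambda x, y: min(x, y)):
    print(expectation(phi, indep), "<=", expectation(phi, comon))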
Consider now two stationary homogeneous Markov chains {Jn}n≥1 and {J̃n}n≥1 with the state space {1,…,N}, the same stationary distribution π = (π1,…,πN), and transition matrices P = (pij) and P̃ = (p̃ij), respectively. We will write P <sm P̃ if (J1, J2) <sm (J̃1, J̃2). Here and in the sequel, we will assume that all matrices have dimension N × N.

For the supermodular ordering of P and P̃, we have the following characterization (cf. Hu and Pan [4]).

Lemma 2.1: Let Π be the diagonal matrix diag(π1,…,πN). Then, P <sm P̃ if and only if

∑_{i=1}^{r} ∑_{j=1}^{s} (ΠP)_{ij} ≤ ∑_{i=1}^{r} ∑_{j=1}^{s} (ΠP̃)_{ij}   (2.2)

for all r,s ∈ {1,…,N}.
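In computations, the comparison in Lemma 2.1 reduces to comparing double cumulative sums of the matrices ΠP and ΠP̃. A minimal Python sketch of such a check follows (the helper names bivariate_cdf and sm_leq are ours, and the tolerance is an arbitrary choice).

import numpy as np

def bivariate_cdf(pi, P):
    # Entry (r, s) is P(J1 <= r+1, J2 <= s+1) for the stationary pair (J1, J2),
    # i.e., the double cumulative sum of the joint matrix diag(pi) @ P.
    joint = np.diag(pi) @ P
    return joint.cumsum(axis=0).cumsum(axis=1)

def sm_leq(pi, P1, P2, tol=1e-12):
    # Check condition (2.2): every double cumulative sum of diag(pi) @ P1
    # is dominated by the corresponding sum for diag(pi) @ P2.
    return bool(np.all(bivariate_cdf(pi, P1) <= bivariate_cdf(pi, P2) + tol))

For doubly stochastic matrices, the stationary distribution is uniform, so the common factor 1/N cancels on both sides; this is the simplification used in the next paragraph.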

Condition (2.2) is equivalent to a concordance ordering of bivariates; that is,

P(J_1 ≤ r, J_2 ≤ s) ≤ P(J̃_1 ≤ r, J̃_2 ≤ s)

for every r and s in the state space. Note that for doubly stochastic matrices P and P̃, condition (2.2) is equivalent to

∑_{i=1}^{r} ∑_{j=1}^{s} p_{ij} ≤ ∑_{i=1}^{r} ∑_{j=1}^{s} p̃_{ij}.   (2.3)

Moreover, the supermodular ordering for doubly stochastic matrices can be characterized in the following way. Let T = (t_{ij}) be the lower-triangular matrix of ones; that is, t_{ij} = 1 for i ≥ j and t_{ij} = 0 otherwise. Note that for any N × N matrix M = (m_{ij}),

(TMT′)_{rs} = ∑_{i=1}^{r} ∑_{j=1}^{s} m_{ij},

where T′ denotes the transpose of T. The matrix T was used in Keilson and Kester [6] for obtaining stochastic monotonicity: a matrix A is stochastically monotone if and only if T^{−1}AT ≥ 0, where 0 is the matrix consisting of zeros.
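The monotonicity criterion quoted above is easy to test numerically. The sketch below is only an illustration (the two 3 × 3 matrices are our own examples, with T built as the lower-triangular matrix of ones defined above); it reports True for a stochastically monotone matrix and False for a nonmonotone one.

import numpy as np

N = 3
T = np.tril(np.ones((N, N)))     # t_ij = 1 for i >= j, 0 otherwise
Tinv = np.linalg.inv(T)

A_monotone = np.array([[0.6, 0.3, 0.1],
                       [0.3, 0.4, 0.3],
                       [0.1, 0.3, 0.6]])   # rows are stochastically increasing
A_nonmonotone = np.array([[0.1, 0.8, 0.1],
                          [0.8, 0.1, 0.1],
                          [0.1, 0.1, 0.8]])

for A in (A_monotone, A_nonmonotone):
    print(bool(np.all(Tinv @ A @ T >= -1e-12)))   # True, then False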

Lemma 2.2: Assume that A and B are doubly stochastic. Then, A <sm B if and only if TAT′ ≤ TBT′ entrywise.

Proof: The inequality TAT′ ≤ TBT′ is equivalent to

∑_{i=1}^{r} ∑_{j=1}^{s} a_{ij} ≤ ∑_{i=1}^{r} ∑_{j=1}^{s} b_{ij}

for all r and s, which means A <sm B by Eq. (2.3). █
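Under the same convention for T, both the cumulative-sum identity used in the proof and the criterion of Lemma 2.2 can be checked directly. The following sketch is only an illustration (the matrix A is our own example, and B is taken to be the identity, which is doubly stochastic as well).

import numpy as np

N = 3
T = np.tril(np.ones((N, N)))

def cumsum2(M):
    # Double cumulative sums: entry (r, s) equals sum_{i <= r, j <= s} M[i, j].
    return M.cumsum(axis=0).cumsum(axis=1)

A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.3, 0.3, 0.4]])   # doubly stochastic
B = np.eye(N)

print(np.allclose(T @ A @ T.T, cumsum2(A)))               # cumulative-sum identity
print(bool(np.all(T @ A @ T.T <= T @ B @ T.T + 1e-12)))   # A <sm B (here B = I)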

3. MAIN RESULT

We introduce a class of Markov chains which are not necessarily monotone.

Definition 3.1: We say that a matrix A belongs to a class 𝒦 if it is stochastic and TAT^{−1} ≥ 0. Equivalently, we say that a stationary homogeneous Markov chain belongs to 𝒦 if its transition probability matrix does.

Let a = (a_1,…,a_N) and b = (b_1,…,b_N) be N-dimensional vectors. We say that a precedes b in the partial sum ordering, written a ≤ps b, if

∑_{j=1}^{r} a_j ≤ ∑_{j=1}^{r} b_j   for all r = 1,…,N.

Now, denote by a·i, i = 1,…,N, the columns of a matrix A. Then, a stochastic matrix A belongs to 𝒦 if and only if

a·N ≤ps a·N−1 ≤ps ··· ≤ps a·1.
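Membership in 𝒦 is a finite system of linear inequalities, so it is straightforward to test. The sketch below is an illustration only (the matrix is our own example, not the matrix of Example 4.1 below); it checks the condition TAT^{−1} ≥ 0 together with the equivalent column partial-sum condition for a doubly stochastic matrix that is not stochastically monotone.

import numpy as np

N = 3
T = np.tril(np.ones((N, N)))
Tinv = np.linalg.inv(T)

A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.3, 0.3, 0.4]])   # doubly stochastic but not stochastically monotone

in_class_matrix = bool(np.all(T @ A @ Tinv >= -1e-12))
# Column partial-sum condition: top partial sums decrease from left to right.
colsums = A.cumsum(axis=0)
in_class_columns = bool(np.all(np.diff(colsums, axis=1) <= 1e-12))
not_monotone = not bool(np.all(Tinv @ A @ T >= -1e-12))
print(in_class_matrix, in_class_columns, not_monotone)   # True True True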
Denote now by 𝒦d the class of those matrices in 𝒦 that are doubly stochastic. It turns out that for the Markov chains driven by matrices from 𝒦d, the following monotonicity property holds.

Theorem 3.1: Let {Jn}n≥1 be a stationary homogeneous Markov chain with a transition probability matrix P ∈ 𝒦d. Then, for any n ≥ 1,

(J_1, J_{n+1}) <sm (J_1, J_n).

Corollary 3.1: Under the conditions of Theorem 3.1, we have, for functions f and g both either nonincreasing or nondecreasing,

Cov[f(J_1), g(J_{n+1})] ≤ Cov[f(J_1), g(J_n)],   n ≥ 1.

The proof of Theorem 3.1 consists of a sequence of lemmas.

Lemma 3.1: (1) If A, B ∈ 𝒦, then AB ∈ 𝒦. (2) If A, B ∈ 𝒦d, then AB ∈ 𝒦d.

Proof:

1. We have T(AB)T^{−1} = (TAT^{−1})(TBT^{−1}) ≥ 0 from the assumptions on A and B.

2. Obvious. █

Lemma 3.2: Assume that A, B, C ∈ 𝒦d. If B <sm C, then AB <sm AC.

Proof: We have

T(AC)T′ − T(AB)T′ = TA(C − B)T′ = [TAT^{−1}][T(C − B)T′].

The first term in brackets is nonnegative due to the assumptions on A; the second one has the same property because of Lemma 2.2. Therefore,

T(AB)T′ ≤ T(AC)T′,

and using Lemma 2.2 once more, we obtain the comparison result. █

Lemma 3.3: Assume that A ∈ 𝒦d. Then, A² <sm A. Moreover, for every n ≥ 1, A^{n+1} <sm A^n.

Proof: Note that for any doubly stochastic matrix A, we have A <sm I, where I is the identity matrix. Moreover, I ∈ 𝒦d. From Lemma 3.2, we have A² <sm A. Assume now that A^n <sm A^{n−1}. Because of Lemma 3.1, A^n and A^{n−1} belong to 𝒦. Moreover, they are doubly stochastic. Therefore, Lemma 3.2 applies and we have AA^n <sm AA^{n−1}. █
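Lemma 3.3 can be illustrated numerically: for a doubly stochastic matrix in 𝒦, the double cumulative sums of successive powers decrease entrywise, which by Lemma 2.2 is exactly A^{n+1} <sm A^n. The sketch below is an illustration only (the matrix is the same example used earlier, not one from the article).

import numpy as np

def cumsum2(M):
    return M.cumsum(axis=0).cumsum(axis=1)

A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.3, 0.3, 0.4]])   # doubly stochastic, in the class, not monotone

powers = [np.linalg.matrix_power(A, n) for n in range(1, 7)]
for An, An1 in zip(powers, powers[1:]):
    # A^{n+1} <sm A^n: cumulative sums of A^{n+1} dominated by those of A^n
    print(bool(np.all(cumsum2(An1) <= cumsum2(An) + 1e-12)))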

Proof of Theorem 3.1: Denote by P(k) = (p_{ij}(k))_{i,j=1}^{N}, k ≥ 1, the k-step transition probability matrix of {Jn}n≥1. From Lemma 3.3, we have that

∑_{i=1}^{r} ∑_{j=1}^{s} p_{ij}(n) ≤ ∑_{i=1}^{r} ∑_{j=1}^{s} p_{ij}(n − 1)

for all r, s = 1,…,N. Therefore, (J1, Jn+1) <sm (J1, Jn). █

Corollary 3.1 follows from the stationarity of the Markov chain and the fact that, for functions f and g both either nonincreasing or nondecreasing, the function φ(x,y) = f(x)g(y) is supermodular.
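To see Corollary 3.1 at work, the covariance Cov[f(J1), f(Jn+1)] can be computed exactly from the n-step transition matrix and the uniform stationary distribution. The sketch below is only an illustration (the matrix and the choice f = g = identity are ours); it prints a nonincreasing sequence even though the chain is not stochastically monotone.

import numpy as np

A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.3, 0.3, 0.4]])   # doubly stochastic, nonmonotone member of the class
N = A.shape[0]
pi = np.full(N, 1.0 / N)          # uniform stationary distribution
f = np.array([1.0, 2.0, 3.0])     # f = g = identity on the state space

def gamma(n):
    # Cov[f(J1), f(J_{n+1})] = sum_{i,j} pi_i p_ij(n) f_i f_j - (E f(J1))^2
    Pn = np.linalg.matrix_power(A, n)
    return f @ (np.diag(pi) @ Pn) @ f - (pi @ f) ** 2

print([round(gamma(n), 6) for n in range(1, 7)])   # nonincreasing sequence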

4. EXAMPLES AND COUNTEREXAMPLES

Example 4.1: Let p and ε > 0 be such that p ≥ 2ε and p + ε ≤ 1. Then, the following matrix belongs to 𝒦d and is not monotone:

Example 4.2: Lemma 3.3 fails if one of the assumptions is removed. Let

Then,

, and it is not true that A² <sm A. The above matrix was used in Lindqvist [7] as an example of a Markov chain that is associated but not stochastically monotone.

Now, let

This matrix belongs to 𝒦 but is neither doubly stochastic nor monotone. It is not true that A² <sm A.

5. ADDITIONAL COMPARISON RESULTS

Observe that for a ≤ a′ and stochastic matrices A and B with the same invariant distribution, we have a′A + (1 − a′)B <sm aA + (1 − a)B if A <sm B; this follows from Lemma 2.1, because condition (2.2) is linear in the transition matrix. This can be interpreted as follows. Let

be the stationary homogeneous Markov chains with transition matrices A and B, respectively. Define

, where

are Bernoulli random variables with

, independent of everything else. Then,

, provided

. If we have some additional monotonicity properties, we can have a comparison of the whole sequences {Zn}n≥1 and

. However, the above consideration cannot be rewritten for the random variables

taking values in {1,…,K}. This can be done if the transition matrices belong to 𝒦. We refer to Marshall and Olkin [8] for the concept of majorization.
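The mixture observation above can also be verified numerically by comparing double cumulative sums, as in condition (2.2) with a uniform stationary distribution. The sketch below is an illustration only (B is taken to be the identity and the weights 0.3 and 0.7 are arbitrary choices).

import numpy as np

def cumsum2(M):
    return M.cumsum(axis=0).cumsum(axis=1)

A = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.4, 0.4],
              [0.3, 0.3, 0.4]])   # doubly stochastic, so A <sm I
B = np.eye(3)
a, a_prime = 0.3, 0.7             # a <= a'

M_small = a_prime * A + (1 - a_prime) * B   # claimed smaller in the supermodular order
M_large = a * A + (1 - a) * B

print(bool(np.all(cumsum2(M_small) <= cumsum2(M_large) + 1e-12)))   # True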

Proposition 5.1: Assume the following:

Then,

Proof: Define for l = 1,…,N and r = 1,…,K the vectors

where for j = 1,…,N, vj(l,r) are defined as

According to assumption (a), these vectors have decreasing coordinates. Moreover, because of assumption (b), we have

for each fixed l. Now, take

From Marshall and Olkin [8, p.125], we have

for every l = 1,…,N. Because

have decreasing coordinates, majorization order implies

. Observe now that the jth coordinates of

can be written as

respectively. Since

, we have for each k, l

which implies comparison of matrices by (2.3). █

Acknowledgments

The author is grateful to Ryszard Szekli for many helpful comments. This work was partially done during the author's stay at the University of Ottawa.

REFERENCES

1. Bergmann, R. & Stoyan, D. (1978). Monotonicity properties of second order characteristics of stochastically monotone Markov chains. Mathematische Nachrichten 82: 99–102.
2. Daley, D.J. (1968). Stochastically monotone Markov chains. Zeitschrift für Wahrscheinlichkeitstheorie und Verwandte Gebiete 10: 305–317.
3. Hu, T. & Joe, H. (1995). Monotonicity of positive dependence with time for stationary reversible Markov chains. Probability in the Engineering and Informational Sciences 9: 227–237.
4. Hu, T. & Pan, X. (2000). Comparisons of dependence for stationary Markov processes. Probability in the Engineering and Informational Sciences 14: 299–315.
5. Joe, H. (1997). Multivariate models and dependence concepts. London: Chapman & Hall.
6. Keilson, J. & Kester, A. (1977). Monotone matrices and monotone Markov processes. Stochastic Processes and Their Applications 5: 231–241.
7. Lindqvist, B.H. (1987). Monotone and associated Markov chains, with applications to reliability theory. Journal of Applied Probability 24: 679–695.
8. Marshall, A.W. & Olkin, I. (1979). Inequalities: Theory of majorization and its applications. New York: Academic Press.
9. Miyoshi, N. & Rolski, T. (2004). Ross type conjectures on monotonicity of queues. Australian & New Zealand Journal of Statistics 46: 121–131.
10. Müller, A. & Stoyan, D. (2002). Comparison methods for stochastic models and risks. New York: Wiley.