
ON RANK ESTIMATION IN SYMMETRIC MATRICES: THE CASE OF INDEFINITE MATRIX ESTIMATORS

Published online by Cambridge University Press:  06 September 2007

Stephen G. Donald
Affiliation:
University of Texas at Austin
Natércia Fortuna
Affiliation:
CEMPRE, Faculdade de Economia, Universidade do Porto
Vladas Pipiras
Affiliation:
University of North Carolina at Chapel Hill

Abstract

In this paper we consider estimating the rank of an unknown symmetric matrix based on a symmetric, asymptotically normal estimator of the matrix. The related positive definite limit covariance matrix is assumed to be estimated consistently and to have either a Kronecker product or an arbitrary structure. These assumptions are standard although they exclude the case when the matrix estimator is positive or negative semidefinite. We adapt and reexamine here some available rank tests and introduce a new rank test based on the sum of eigenvalues of the matrix estimator. We discuss two applications where rank estimation in symmetric matrices is of interest, and we also provide a small simulation study.

The first author acknowledges the support of an Alfred P. Sloan Foundation Research Fellowship and NSF Grant SES-0196372. We thank the co-editor and the two referees for useful comments and suggestions. CEMPRE—Centro de Estudos Macroeconómicos e Previsão—is supported by the Fundação para a Ciência e a Tecnologia, Portugal, through the Programa Operacional Ciência, Tecnologia e Inovação (POCTI) of the Quadro Comunitário de Apoio III, which is financed by FEDER and Portuguese funds.

Type
MISCELLANEA
Copyright
© 2007 Cambridge University Press

1. INTRODUCTION

Let M be an unknown n × k matrix, M̂ be its estimator, and rk{M} denote the rank of M. Suppose that M̂ is asymptotically normal with a limiting covariance matrix C that can be consistently estimated by Ĉ. Under these assumptions, many authors have proposed ways to test for rk{M} by using M̂ and Ĉ. Recent and commonly used rank tests are the LDU (lower diagonal–upper triangular decomposition) test of Gill and Lewbel (1992) and Cragg and Donald (1996), the minimum chi-squared (MINCHI2) test of Cragg and Donald (1997), the ALS (asymptotic least squares) test of Gouriéroux, Monfort, and Trognon (1985) and Robin and Smith (1995), the SVD (singular value decomposition) tests in Ratsimalahelo (2002, 2003) and Kleibergen and Paap (2006), and the characteristic root test of Robin and Smith (1995, 2000).

In this paper we examine the situation where the matrices M and M̂ are symmetric.1

1. This paper is an abridged and revised version of Donald, Fortuna, and Pipiras (2005), now containing Assumptions (a) and (a*) and correcting Example 3.1.

As shown in Section 3, there are a number of applications where the matrix under consideration is symmetric. Despite this, relatively little attention has been paid to the symmetric case aside from brief discussions of adjustments required for some of the tests; see Cragg and Donald (1997, p. 235) and Robin and Smith (1995). We show how the SVD, LDU, and MINCHI2 rank tests need to be modified in the symmetric case, and we also introduce a new rank test (EIG test) based on the sum of eigenvalues of M̂. In a small set of simulations we find that the adjusted SVD test is slightly superior to the other tests in terms of size properties. In addition, we examine the connection between assumptions about the definiteness of the matrix and the definiteness of the variance–covariance matrix of its unique elements. It is shown that positive definiteness of the latter is incompatible with positive (or negative) semidefiniteness of the former, contrary to what has sometimes been assumed. Thus the results of this paper, which assume positive definiteness of the variance–covariance matrix, do not apply to the case where M̂ is positive or negative semidefinite. In this sense, the tests presented here are limited, and we are not yet aware of general assumptions that would successfully include this case.2

2. One approach is that of Robin and Smith (2000), who allow the limiting covariance matrix to have deficient rank. Another popular approach is to replace the inverse of a covariance matrix in standard rank tests by a generalized inverse.

The rest of the paper is organized as follows. In Section 2, we state and discuss the assumptions used in this work. Some situations where rank estimation in symmetric matrices arises are described in Section 3. Various rank tests for symmetric matrices under the assumptions of Section 2 are discussed in Section 4. A simulation study can be found in Section 5. The proofs of the more technical results are contained in Appendix A.

2. ASSUMPTIONS AND OTHER PRELIMINARIES

Suppose that M is an unknown symmetric n × n matrix with real entries. We are interested in estimating the rank of M by using its estimator M̂, where N is the sample size or any other relevant parameter such that N → ∞. Let W be a symmetric n × n random matrix having independent normal entries with variance 1 on the diagonal and variance ½ off the diagonal. Also, let Dn and Kn be the n2 × n(n + 1)/2 duplication matrix and the n2 × n2 commutation matrix, respectively (see, e.g., Magnus and Neudecker, 1999, pp. 46–53). It is known that Dn′Dn is nonsingular and that the n(n + 1)/2 × n2 matrix Dn+ = (Dn′Dn)−1Dn′, known as the elimination matrix, is the Moore–Penrose inverse of Dn. The standard vec and vech operations are used frequently in the discussion that follows. The notation →d and →p stands for convergence in distribution and in probability, respectively.
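The duplication, commutation, and elimination matrices can be generated directly from their defining relations. The following Python sketch (our illustration, not part of the original paper; the function names are ours) constructs Dn, Kn, and Dn+ for a small n and checks the facts just cited, including the identity ½(In2 + Kn) = DnDn+ used below.

import numpy as np

def vech(A):
    # Stack the lower-triangular part of A column by column.
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

def duplication_matrix(n):
    # D_n satisfies vec(A) = D_n vech(A) for symmetric A (vec is column-major).
    D = np.zeros((n * n, n * (n + 1) // 2))
    col = 0
    for j in range(n):
        for i in range(j, n):
            D[j * n + i, col] = 1.0      # position of A[i, j] in vec(A)
            D[i * n + j, col] = 1.0      # position of A[j, i] in vec(A)
            col += 1
    return D

def commutation_matrix(n):
    # K_n satisfies K_n vec(A) = vec(A') for any n x n matrix A.
    K = np.zeros((n * n, n * n))
    for i in range(n):
        for j in range(n):
            K[j * n + i, i * n + j] = 1.0
    return K

# Sanity checks for a small n.
n = 4
rng = np.random.default_rng(0)
A0 = rng.standard_normal((n, n))                   # a general matrix
A = A0 + A0.T                                      # a symmetric matrix
D = duplication_matrix(n)
K = commutation_matrix(n)
D_plus = np.linalg.inv(D.T @ D) @ D.T              # Moore-Penrose inverse of D_n
assert np.allclose(K @ A0.flatten(order="F"), A0.T.flatten(order="F"))
assert np.allclose(D @ vech(A), A.flatten(order="F"))
assert np.allclose(D @ D_plus, 0.5 * (np.eye(n * n) + K))

For the matrix dimensions arising in the applications of Section 3 (small n), building these matrices explicitly in this way is entirely adequate.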

We shall use the following assumptions in rank tests.

Assumptions (A). There are real, symmetric n × n matrices M̂ such that

N1/2F(M̂ − M)F′ →d W    (2.1)

as N → ∞, where F is a nonsingular n × n matrix and W is the random matrix defined above. There are matrices F̂ such that F̂ →p F.

Assumptions (A*). There are symmetric matrices M̂ such that

N1/2 vech(M̂ − M) →d N(0, C),    (2.2)

where the matrix C is positive definite. There are matrices Ĉ such that Ĉ →p C.

By using the fact that the covariance matrix of vec(W) is Ω = ½(In2 + Kn) = Dn Dn+ and Theorem 13(b) and (d) in Magnus and Neudecker (1999, pp. 49–50), one can express (2.1) as (2.2) with C = Dn+((F′F)−1 ⊗ (F′F)−1)Dn+′ = (Dn′(F′F ⊗ F′F)Dn)−1. Thus Assumption (A) corresponds to the case where C has a Kronecker product–like structure. Considering a limit of the form (2.1) with two different nonsingular matrices, say G on one side and F on the other, may also appear to require a more general Kronecker product structure than that used in (2.1). But, in fact, one can show using the symmetry of M̂ − M that this necessarily implies that G = cF for some scalar c ≠ 0.
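The equality of the two expressions for C above is an instance of the inversion identity for Dn′(A ⊗ A)Dn in Magnus and Neudecker (1999). The following sketch (ours) checks it numerically and also computes the covariance of vech of the limit directly from the covariance of vec(W); the precise form of (2.1) used here is our restatement, so this should be read only as a consistency check of the formulas as we have written them. It reuses the helper functions from the previous sketch.

import numpy as np
# duplication_matrix and commutation_matrix are the helpers from the previous sketch.

n = 3
rng = np.random.default_rng(1)
F = rng.standard_normal((n, n)) + n * np.eye(n)        # a nonsingular matrix
D = duplication_matrix(n)
K = commutation_matrix(n)
D_plus = np.linalg.inv(D.T @ D) @ D.T

# Covariance of vec(W) for the symmetric Gaussian matrix W of Section 2.
Omega = 0.5 * (np.eye(n * n) + K)

# Covariance of N^{1/2} vech(M_hat - M) implied by (2.1) as restated above,
# i.e., by a limit of the form F^{-1} W F^{-1}' for N^{1/2}(M_hat - M).
G = np.linalg.inv(F)
C_direct = D_plus @ np.kron(G, G) @ Omega @ np.kron(G, G).T @ D_plus.T

# The two closed forms for C quoted in the text (our reading uses F'F).
FtF_inv = np.linalg.inv(F.T @ F)
C_form1 = D_plus @ np.kron(FtF_inv, FtF_inv) @ D_plus.T
C_form2 = np.linalg.inv(D.T @ np.kron(F.T @ F, F.T @ F) @ D)

print(np.allclose(C_direct, C_form1), np.allclose(C_form1, C_form2))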

Some of the rank tests of Section 4 work also under slightly weaker assumptions.

Assumptions (a) and (a*). Replace (2.1) and (2.2) by the following: M̂ = M̂1 + M̂2 for symmetric matrices M̂1 and M̂2 such that M̂1 →p M, u′Mu = 0 for a vector u implies that N1/2u′M̂1u →p 0, and (2.1) holds with M̂ − M replaced by M̂2 and with nonsingular F and (2.2) holds with M̂ − M replaced by M̂2 and with nonsingular C, respectively. The estimator conditions F̂ →p F and Ĉ →p C are retained.

As the next proposition shows, under Assumptions (A*), M̂ cannot be positive or negative semidefinite. This excludes an interesting case when M and M̂ are theoretical and sample covariance matrices. Some authors have, in fact, incorrectly assumed that it was possible to have rank deficiency for a semidefinite matrix and at the same time have a positive definite variance covariance matrix; see Cragg and Donald (1997) and Donkers and Schafgans (2003).

PROPOSITION 2.1. If Assumptions (A*) hold with a positive definite matrix C and rk{M} < n, then M̂ cannot be positive or negative semidefinite.

Proposition 2.1 does not exclude the case where M is positive semidefinite. Example 3.1 in Section 3 shows that this may indeed occur. Our EIG rank test is formulated, in fact, only under this assumption.

Assumption (B). M is positive semidefinite.

3. EXAMPLES

Example 3.1 (Number of factors in a nonparametric relationship)

Consider a multivariate nonparametric relationship

Yi = F(Xi) + εi    (3.1)

between an n × 1 vector Yi of response variables and a d × 1 vector Xi of independent variables, where F is a vector of unknown functions and εi are error terms. Suppose that E(εi|Xi) = 0 and E(εiεi′|Xi) = Σ with a nonsingular matrix Σ. Define rk{F} as the smallest integer r for which F(x) = A·H(x), where A is an n × r matrix and H(x) is an r × 1 vector of functions of x. Many econometric problems, for example, those related to demand systems, nonparametric instrumental variables, and arbitrage pricing, can be formulated as inference on rk{F} (Donald, 1997; Kneip, 1994).3

3. Donald (1997), in fact, considers a more general rank rk{F} = rk{F; x1} defined as the smallest r for which F(x) = Θ′x1 + A·H(x), where x1 is a d1 × 1 (possibly empty) subvector of x, Θ′ is an n × d1 matrix, A is an n × r matrix, and H(x) is an r × 1 vector of functions of x. We suppose that x1 ≡ 0 for simplicity. We also focus only on the kernel-based tests, which are one type of test considered by Donald (1997). Another related work is Fortuna (2004).

One can easily show that rk{F} = rk{M}, where M = E(p(Xi)F(Xi)F(Xi)′) and p(x) is the density function of Xi. Observe that the matrix M is symmetric and positive semidefinite. Following Donald (1997), it is convenient to estimate M by the kernel-weighted estimator given in (3.2), where Kh(x) = h−dK(x/h) denotes a kernel K scaled by a bandwidth h > 0. Observe that the estimator M̂ is symmetric too but also indefinite.

Substituting Yi = F(Xi) + εi into (3.2), we can write M̂ = M̂1 + M̂2, where M̂k is defined as in (3.2) with YiYj′ replaced by F(Xi)F(Xj)′ + F(Xi)εj′ + εiF(Xj)′ when k = 1 and by εiεj′ when k = 2. Following the proof of Lemma 2 in Donald (1997), one can show that, under suitable assumptions on Xi, εi, and F, the asymptotic normality (3.3) holds, where V = (2∥K∥22Ep(Xi))−1/2 and ∥K∥22 = ∫K(x)2 dx, and that the covariance matrix Σ can be estimated consistently. Observe also that u′Mu = 0 for a vector u implies that F(x)′u = 0 and hence that u′M̂1u = 0. Moreover, arguing as for (3.3), one can show under suitable assumptions that M̂1 is asymptotically normal, in particular, M̂1 − M = Op(N−1/2). Gathering all these observations, we see that M̂ satisfies Assumptions (a).
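For concreteness, the following Python sketch (ours) implements a kernel-weighted estimator of the type described; the Gaussian kernel, the leave-one-out double sum, and the exact normalization are our assumptions, since the display (3.2) itself is not reproduced here. The example illustrates that the resulting matrix is symmetric but typically indefinite.

import numpy as np

def gaussian_kernel(u):
    # Product Gaussian kernel on R^d; u has shape (..., d).
    d = u.shape[-1]
    return np.exp(-0.5 * np.sum(u ** 2, axis=-1)) / (2.0 * np.pi) ** (d / 2.0)

def kernel_rank_matrix(Y, X, h):
    # Hypothetical version of the estimator in (3.2):
    #   M_hat = (N(N-1))^{-1} sum_{i != j} K_h(X_i - X_j) Y_i Y_j',
    # with K_h(x) = h^{-d} K(x / h).  Y is N x n, X is N x d.
    N, d = X.shape
    W = gaussian_kernel((X[:, None, :] - X[None, :, :]) / h) / h ** d
    np.fill_diagonal(W, 0.0)                   # drop the i = j terms
    M_hat = (Y.T @ W @ Y) / (N * (N - 1))
    return 0.5 * (M_hat + M_hat.T)             # enforce exact symmetry numerically

# rk{F} = 1 example: both response components load on the same function of X.
rng = np.random.default_rng(0)
N, d = 400, 1
X = rng.uniform(-1.0, 1.0, size=(N, d))
f = np.sin(3.0 * X[:, 0])
Y = np.column_stack([f, 0.5 * f]) + 0.1 * rng.standard_normal((N, 2))
print(np.linalg.eigvalsh(kernel_rank_matrix(Y, X, h=0.3)))   # one eigenvalue near zero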

Example 3.2 (Reduced rank regression with two sets of regressors)

Consider a multivariate regression model with two sets of regressors,

Yi = AXi + BZi + εi,    (3.4)

where Yi is an n × 1 vector of response variables, Xi and Zi are two sets of p × 1 and q × 1 regressors, and A and B are unknown n × p and n × q matrices. Suppose that E(εi) = 0 and E(εiεi′) = Σ with a nonsingular matrix Σ. In some applications (Robin and Smith, 2000, p. 161), B is restricted to be symmetric (so that q = n), and the goal is to estimate its rank.

Imposing the symmetry restriction, the matrix B can be estimated by a symmetric matrix B̂ obtained by least squares methods. Under standard assumptions on Xi, Zi, and εi, one can show that N1/2 vech(B̂ − B) →d N(0, C), where C is nonsingular. The exact expressions for B̂ and C are quite lengthy. In a special case when A is empty, for example, we have vech(B̂) = (Dn′(ZZ′ ⊗ In)Dn)−1Dn′ vec(YZ′) and, under suitable assumptions on Zi and εi, C = (Dn′(EZiZi′ ⊗ In)Dn)−1(Dn′(EZiZi′ ⊗ Σ)Dn)(Dn′(EZiZi′ ⊗ In)Dn)−1, where Z = (Z1,…,ZN) and Y = (Y1,…,YN).
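A minimal sketch of the symmetry-restricted least squares computation, using the vech-based formula stated above for the case when A is empty (the helper duplication_matrix comes from the sketch in Section 2; the data layout is an assumption of ours):

import numpy as np
# duplication_matrix is the helper from the sketch in Section 2.

def symmetric_ls(Y, Z):
    # Least squares estimate of B in Y_i = B Z_i + eps_i subject to B = B',
    # in the special case when A is empty (so q = n).  Y and Z are n x N
    # matrices whose columns are Y_i and Z_i.
    n = Y.shape[0]
    D = duplication_matrix(n)
    lhs = D.T @ np.kron(Z @ Z.T, np.eye(n)) @ D       # Dn'(ZZ' x In)Dn
    rhs = D.T @ (Y @ Z.T).flatten(order="F")          # Dn' vec(YZ')
    vec_B = D @ np.linalg.solve(lhs, rhs)
    return vec_B.reshape(n, n, order="F")

# Small check: recover a symmetric B from noisy data.
rng = np.random.default_rng(0)
n, N = 3, 200
B_true = np.array([[2.0, 1.0, 0.0],
                   [1.0, 1.0, 0.5],
                   [0.0, 0.5, 3.0]])
Z = rng.standard_normal((n, N))
Y = B_true @ Z + 0.2 * rng.standard_normal((n, N))
print(np.round(symmetric_ls(Y, Z), 2))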

4. RANK TESTS FOR SYMMETRIC MATRICES

The test statistics to be considered, computed for a hypothesized rank value r, will have an asymptotic distribution under the null H0 : rk{M} = r that satisfies one of the following two conditions: a χ2 limit with (n − r)(n − r + 1)/2 degrees of freedom, as in (4.1), or a standard normal limit, as in (4.2). All the tests considered in this paper will be consistent under the alternative in the sense of (4.3): when rk{M} > r, the test statistic diverges to infinity in probability.

4.1. SVD Rank Test

Kleibergen and Paap (2006) have recently proposed an ingenious rank test based on the SVD of a matrix. For symmetric matrices, singular values are just the absolute values of eigenvalues. To make symmetry evident, it is convenient to rewrite and partition the SVD by using eigenvalues in the form of the Schur decomposition M = UϒU′, partitioned as in (4.4), where U is orthogonal, ϒ is diagonal and consists of the eigenvalues ordered in decreasing absolute value, and U11 and ϒ1 are the leading r × r blocks for r = 0,…,n. Following Kleibergen and Paap (2006), set Ar, Br and Ar,⊥, Br,⊥, Λr as in (4.5) and (4.6), so that M = ArBr + Ar,⊥ΛrBr,⊥. The null hypothesis H0 : rk{M} = r is equivalent to H0 : Λr = 0. The decomposition for M̂ is of the same form with the corresponding matrices in (4.4)–(4.6) replaced by matrices with hats.

PROPOSITION 4.1. Under Assumptions (a*) and under H0 : rk{M} = r, we have N1/2 vech(Λ̂r) →d N(0, Ωr), where Ωr = Dn−r+(Br,⊥ ⊗ Ar,⊥′)Dn C Dn′(Br,⊥′ ⊗ Ar,⊥)Dn−r+′.

Based on this result, consider the test statistic

N vech(Λ̂r)′Ω̂r−1 vech(Λ̂r),    (4.7)

where Ω̂r is defined as Ωr with Ar,⊥, Br,⊥, and C replaced by Âr,⊥, B̂r,⊥, and Ĉ.4

4. As in Kleibergen and Paap (2006), under the stronger Assumptions (A), one can show that the SVD statistic computed from the normalized matrix F̂M̂F̂′ and the corresponding covariance matrix estimator coincides with the MINCHI2 statistic defined by (4.17).

The following result shows how this statistic behaves when used to test for the rank of the matrix.

THEOREM 4.1. Under Assumptions (a*) and if the matrix Ωr is nonsingular, the statistic (4.7) satisfies (4.1) and (4.3).
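The following Python sketch (ours) shows one way to compute the statistic of Section 4.1 as we read it: Λ̂r is taken to be the block Û2′M̂Û2 formed from the eigenvectors of the n − r smallest (in absolute value) eigenvalues of M̂, and Ω̂r is built exactly as in Proposition 4.1 with hatted quantities. The function and variable names are ours, and the helpers duplication_matrix and vech come from the sketch in Section 2.

import numpy as np
# duplication_matrix and vech are the helpers from the sketch in Section 2.

def svd_type_statistic(M_hat, C_hat, r, N):
    # Symmetric analogue of the Kleibergen-Paap statistic as we read Section 4.1:
    # Lambda_r_hat is the block of M_hat associated with its n - r smallest
    # (in absolute value) eigenvalues, and Omega_r_hat follows Proposition 4.1.
    n = M_hat.shape[0]
    evals, evecs = np.linalg.eigh(M_hat)
    order = np.argsort(np.abs(evals))                # increasing |eigenvalue|
    U2 = evecs[:, order[: n - r]]                    # sample analogue of A_{r,perp}
    Lambda_r = U2.T @ M_hat @ U2                     # (n - r) x (n - r), diagonal
    Dn = duplication_matrix(n)
    Dm = duplication_matrix(n - r)
    Dm_plus = np.linalg.inv(Dm.T @ Dm) @ Dm.T
    J = Dm_plus @ np.kron(U2.T, U2.T) @ Dn           # maps vech(M_hat) to vech(Lambda_r)
    Omega_r = J @ C_hat @ J.T
    v = vech(Lambda_r)
    stat = N * v @ np.linalg.solve(Omega_r, v)
    dof = (n - r) * (n - r + 1) // 2
    return stat, dof

Under H0 : rk{M} = r, the statistic would then be compared with the χ2 distribution with (n − r)(n − r + 1)/2 degrees of freedom, in line with (4.1).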

4.2. LDU Rank Test

The LDU rank test5 (Gill and Lewbel, 1992; Cragg and Donald, 1996) is based on the Gaussian elimination procedure with complete pivoting. This procedure does not preserve symmetry but can be modified into one that does, as follows (Bunch and Parlett, 1971, pp. 646–647).

5. Robin and Smith (1995) suggest replacing the inverse of a covariance matrix in the LDU test by its generalized reflexive inverse. These authors state that the corresponding test statistic has a limiting χ2((n − r)(n − r + 1)/2) distribution under the null and diverges under the alternative. This approach is also used in Kapetanios and Camba-Mendez (1999), Ratsimalahelo (2002), and Robin and Smith (2000).

Step 1. Search M = (mij) for max{mii2, |mjjmkk − mjk2| : i, j, k}. If some mii2 is the maximum, set S = mii. If the maximum is some |mjjmkk − mjk2|, then set S = (mjj mjk; mkj mkk).

Step 2. Permute rows and columns to bring the 1 × 1 or 2 × 2 pivot S into the upper left corner of M.

Step 3. Use Gaussian elimination to transform M into B1R1MR1′B1′ = (m11(1) 0; 0 m22(1)), where R1 and B1 correspond to the permutation and the Gaussian elimination steps, respectively. Here, m11(1) = S and m22(1) is a symmetric matrix defined by m22(1) = U − TS−1T′, where R1MR1′ = (S T′; T U) is a partition of the permuted matrix.

Step 4. Apply the previous steps to m22(1), and so on, to obtain the expression (4.8), in which the final block m22(k) is symmetric (these steps are illustrated in the sketch that follows).
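Steps 1–4 can be sketched in Python as follows (our illustration; the pivot search follows Step 1 literally and makes no claim of numerical optimality):

import numpy as np

def symmetric_pivot_eliminate(M, tol=1e-12):
    # Symmetric Gaussian elimination with 1x1 / 2x2 pivoting, following Steps 1-4.
    # Returns the list of pivot blocks and the trailing block m22^(k) at the point
    # where it is (numerically) zero or empty.
    A = np.array(M, dtype=float)
    pivots = []
    while A.shape[0] > 0:
        m = A.shape[0]
        i1 = int(np.argmax(np.diag(A) ** 2))                 # best 1x1 pivot
        best1 = A[i1, i1] ** 2
        best2, jk = -1.0, None                               # best 2x2 pivot
        for j in range(m):
            for k in range(j + 1, m):
                val = abs(A[j, j] * A[k, k] - A[j, k] ** 2)
                if val > best2:
                    best2, jk = val, (j, k)
        if max(best1, best2) <= tol:
            break                                            # trailing block is ~ 0
        idx = [i1] if (best1 >= best2 or m == 1) else list(jk)
        perm = idx + [t for t in range(m) if t not in idx]
        A = A[np.ix_(perm, perm)]                            # Step 2: move pivot up-left
        p = len(idx)
        S, T, U = A[:p, :p], A[p:, :p], A[p:, p:]
        pivots.append(S)
        A = U - T @ np.linalg.solve(S, T.T)                  # Step 3: Schur complement
    return pivots, A

# On a rank-deficient symmetric matrix, the pivot sizes add up to the rank.
rng = np.random.default_rng(0)
G = rng.standard_normal((5, 3))
M = G @ G.T                                                  # rank 3, 5 x 5
pivots, trailing = symmetric_pivot_eliminate(M)
print(sum(p.shape[0] for p in pivots), np.round(trailing, 8))

Applied to the true M with rk{M} < n, the accumulated pivot sizes add up to rk{M} once the trailing block m22(k0) vanishes, which is the property the LDU statistic exploits.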

In the case of M̂, we obtain (4.8) with the matrices replaced by their counterparts having hats. By permuting M̂ beforehand, we may suppose that symmetric permutations become no longer necessary, that is, that the permutation matrices R̂k are the identity. Introduce also k(r) = min{k : dim(m22(k)) ≤ n − r} and l(r) = dim(m11(k(r))). Note that l(r) = r or r + 1. Observe that, if rk{M} < n, then the Gaussian elimination procedure with symmetric pivoting can be applied until m22(k0) = 0 for some k0. In fact, k0 = k(rk{M}), dim(m22(k0)) = n − rk{M}, and l(rk{M}) = rk{M}. Also, let k̂(r) and l̂(r) be defined similarly when using M̂.

Consider the symmetric analogue of the LDU statistic,

N vech(m̂22(k̂(r)))′(Π̂sĈΠ̂s′)−1 vech(m̂22(k̂(r))),    (4.9)

where Π̂s = Dn−l̂(r)+(Φ̂ ⊗ Φ̂)Dn with Φ̂ = (−M̂21M̂11−1  In−l̂(r)), and M̂ = (M̂11 M̂12; M̂21 M̂22) is a partition with l̂(r) × l̂(r) submatrix M̂11.6

6. Because Gaussian elimination with symmetric pivoting may involve 2 × 2 pivots, we may have l̂(r) = r + 1 for some r. Note also that (4.9) is not properly defined in the following situation. If dim(m̂22(k̂(r))) = 2, the remaining symmetric matrix m̂22(k̂(r)) is 2 × 2. If this matrix is the chosen 2 × 2 pivot in the next Gaussian elimination step, the elimination method cannot continue, and hence the corresponding statistic is not well defined by (4.9); a convention is then adopted for assigning it a value in this case.

THEOREM 4.2. Under Assumptions (A*), the statistic (4.9) satisfies (4.1) and (4.3).

4.3. EIG Rank Test for Semidefinite Matrices

Let λ̂1 ≤ ··· ≤ λ̂n and λ1 ≤ ··· ≤ λn be the ordered eigenvalues of F̂M̂F̂′ and FMF′, respectively.7

7. Under Assumptions (A), the EIG rank test was already used, albeit implicitly, in Donald (1997) and Fortuna (2004). We state it here in general and also prove it under more general assumptions.

THEOREM 4.3. Under Assumptions (a) and (B), and with rk{M} = q, the convergence (4.11) holds for the sum of the n − q smallest eigenvalues λ̂1 + ··· + λ̂n−q.

Based on Theorem 4.3, consider the test statistic (4.12) built from the sum of the n − r smallest eigenvalues λ̂1 + ··· + λ̂n−r.

THEOREM 4.4. Under Assumptions (a) and (B), the test statistic (4.12) satisfies (4.2) and (4.3). Moreover, for r ≥ q, where q = rk{M}, the statistic is asymptotically stochastically dominated by its limit at r = q, and ≤d becomes =d when r = q.

In the case of Assumptions (a*), let λ̂1 ≤ ··· ≤ λ̂n be the ordered eigenvalues of M̂ and 0 = υ1 = ··· = υn−q < υn−q+1 ≤ ··· ≤ υn be the ordered eigenvalues of M with q = rk{M}. Let U be an n × n orthogonal matrix in the Schur decomposition of M, partitioned as U = (U1 U2), where U1 is n × r. Also let the limiting quantities in the next theorem be the ordered eigenvalues of U2′W0U2, where W0 is a symmetric n × n random matrix such that vech(W0) has the N(0, C) distribution.

THEOREM 4.5. Under Assumptions (a*) and (B), and with rk{M} = q, r = q in U2, we have the convergences (4.14) and (4.15).

Let t̂r = N1/2(λ̂1 + ··· + λ̂n−r), r = 0,…,n. By using Theorem 4.5, we see that, under H0 : rk{M} = r, t̂r →d N(0, σr2), where σr2 = vec(U2U2′)′Dn CDn′ vec(U2U2′). Define the estimator σ̂r2 of the variance σr2 by replacing U2 and C with their sample counterparts. Setting

t̂r / σ̂r,    (4.16)

we obtain the following result.

THEOREM 4.6. Under Assumptions (a*) and (B), the statistic (4.16) satisfies (4.2) and (4.3).
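A minimal sketch of the studentized statistic (4.16) as we have reconstructed it (the symbol t̂r, the choice of Û2 as the eigenvectors of the n − r smallest eigenvalues of M̂, and the one-sided use of the statistic are our reading, not verbatim from the original):

import numpy as np
# duplication_matrix is the helper from the sketch in Section 2.

def eig_test_statistic(M_hat, C_hat, r, N):
    # Studentized sum of the n - r smallest eigenvalues of M_hat,
    # with sigma_r^2 = vec(U2 U2')' Dn C Dn' vec(U2 U2') as in Section 4.3.
    n = M_hat.shape[0]
    evals, evecs = np.linalg.eigh(M_hat)          # increasing order
    t_r = np.sqrt(N) * np.sum(evals[: n - r])
    U2 = evecs[:, : n - r]                        # sample counterpart of U2
    Dn = duplication_matrix(n)
    w = (U2 @ U2.T).flatten(order="F")            # vec(U2 U2')
    sigma2 = w @ Dn @ C_hat @ Dn.T @ w
    return t_r / np.sqrt(sigma2)

Rejecting for large values against standard normal critical values then corresponds to condition (4.2); for r below the true rank the sum of eigenvalues stays positive in the limit, so the statistic diverges, as in (4.3).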

4.4. MINCHI2 Rank Test

Consider the ordered eigenvalues of F̂M̂F̂′, arranged in increasing absolute value, and let (4.17) denote N times the sum of the squares of the n − r smallest of these eigenvalues. The next result follows easily from Theorem 4.3.

THEOREM 4.7. Under Assumptions (a), the statistic (4.17) satisfies (4.1) and (4.3). Moreover, for r ≥ q, where q = rk{M}, the statistic satisfies an asymptotic stochastic dominance relation in which ≤d becomes =d in the case r = q.

Following the proof of Theorem 3 in Cragg and Donald (1993), for example, one can show that the minimum chi-squared representation (4.18) of this statistic holds. Thus the statistic (4.17) can be viewed as the MINCHI2 statistic considered, in particular, by Cragg and Donald (1997) and Robin and Smith (2000). We expect (though we do not have a proof) that the minimum in (4.18) can be replaced by the corresponding minimum appearing in (4.19) below. In this case, the statistic (4.18) can be written as

N min{vech(M̂ − M*)′Ĉ−1 vech(M̂ − M*) : M* symmetric, rk{M*} = r},    (4.19)

where Ĉ is a consistent estimator of the covariance matrix C.

The expression (4.19) is that of the MINCHI2 statistic commonly used for symmetric matrices under the more general Assumptions (A*). See, for example, Cragg and Donald (1997, p. 235).

THEOREM 4.8. Under Assumptions (A*), the statistic (4.19) satisfies (4.1) and (4.3).

Though the MINCHI2 statistic is appealing theoretically, it is more difficult to compute when C does not have a Kronecker product structure. In such cases numerical optimization procedures are needed to compute (4.19), and these may not always converge to a global minimum.8

8. Under Assumptions (A*) or (a*), one can also introduce a test statistic analogous to (4.16) but based on the squared eigenvalues λ̂12,…,λ̂n−r2. In the unrestricted case, this approach is taken by Robin and Smith (2000). The limit of the corresponding test statistic under H0 can be written as a weighted sum of independent χ2 variables with 1 degree of freedom.
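To illustrate the computational issue just mentioned, the following rough Python sketch (ours) evaluates (4.19) by numerical optimization over a redundant parametrization M* = QSQ′ of symmetric matrices of rank at most r, with Q an n × r matrix and S an r × r symmetric matrix; scipy's general-purpose BFGS routine and a few random starts are used, and convergence to the global minimum is indeed not guaranteed.

import numpy as np
from scipy.optimize import minimize
# vech is the helper from the sketch in Section 2.

def minchi2_statistic(M_hat, C_hat, r, N, n_starts=5, seed=0):
    # Numerical evaluation of N * min vech(M_hat - M*)' C_hat^{-1} vech(M_hat - M*)
    # over symmetric M* with rank at most r, cf. (4.19).  M* is parametrized
    # (redundantly) as Q S Q' with Q (n x r) and S (r x r symmetric).
    n = M_hat.shape[0]
    C_inv = np.linalg.inv(C_hat)
    if r == 0:
        d = vech(M_hat)
        return N * d @ C_inv @ d
    iu = np.triu_indices(r)

    def objective(theta):
        Q = theta[: n * r].reshape(n, r)
        S = np.zeros((r, r))
        S[iu] = theta[n * r:]
        S = S + S.T - np.diag(np.diag(S))
        d = vech(M_hat - Q @ S @ Q.T)
        return d @ C_inv @ d

    rng = np.random.default_rng(seed)
    best = np.inf
    for _ in range(n_starts):
        theta0 = rng.standard_normal(n * r + r * (r + 1) // 2)
        best = min(best, minimize(objective, theta0, method="BFGS").fun)
    return N * best

A natural warm start is the rank-r truncation of the eigendecomposition of M̂, which in our experience of such problems reduces, but does not eliminate, the risk of stopping at a local minimum.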

5. SIMULATION STUDY

We examine here the small-sample properties of the various rank tests through a small simulation study. The setup is that of Example 3.2. We generate Yi = BZi + εi, i = 1,…,N, where B is a fixed, symmetric, positive semidefinite 6 × 6 matrix with rk{B} = 3. (The eigenvalues of B are 0, 0, 0, 3.51, 4.79, and 24.69.) The variables Zi are independent and identically distributed (i.i.d.), and the error terms are εi = Hui with i.i.d. variables ui and a fixed, nonsingular matrix H. The sample size is set at N = 500.
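The design can be mimicked with the following snippet (ours). The distributions of Zi and ui are not recoverable from the text and are taken to be standard normal here, and the matrices B and H below are placeholders rather than the ones used by the authors; symmetric_ls is the sketch from Example 3.2.

import numpy as np
# symmetric_ls is the sketch from Example 3.2.

def simulate_once(B, H, N, rng):
    # One sample from Y_i = B Z_i + eps_i with eps_i = H u_i, i = 1, ..., N;
    # Z_i and u_i are taken to be standard normal here (an assumption).
    n = B.shape[0]
    Z = rng.standard_normal((n, N))
    Y = B @ Z + H @ rng.standard_normal((n, N))
    return Y, Z

rng = np.random.default_rng(0)
n, rank_B = 6, 3
G = rng.standard_normal((n, rank_B))
B = G @ G.T                                           # placeholder: symmetric psd, rank 3
H = np.eye(n) + 0.3 * rng.standard_normal((n, n))     # placeholder nonsingular H
Y, Z = simulate_once(B, H, N=500, rng=rng)
print(np.round(np.linalg.eigvalsh(symmetric_ls(Y, Z)), 2))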

In Figure 1, for different rank values r, we present PP-plots of a probability p ∈ (0,1) on the vertical axis against the empirical probability, estimated from 2,000 replications of the corresponding statistic, that the statistic does not exceed cr(p), on the horizontal axis. Here, cr(p) is such that P(ξ(r) > cr(p)) = 1 − p, where ξ(r) is distributed according to the limiting null distribution of the statistic for the hypothesized rank r.
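A PP-plot of the kind described can be produced as in the following sketch (ours), where replicated_stats holds the simulated values of a given test statistic and limit_cdf is its limiting null distribution (for example, the χ2 law for the LDU and SVD tests or the standard normal law for the EIG test); the orientation of the axes follows our reading of the description above.

import numpy as np
from scipy import stats as st
import matplotlib.pyplot as plt

def pp_plot(replicated_stats, limit_cdf, ax):
    # For each nominal probability p, plot p (vertical) against the empirical
    # probability (horizontal) that the statistic does not exceed c_r(p),
    # the p-quantile of the limiting null law.
    p_grid = np.linspace(0.01, 0.99, 99)
    crit = limit_cdf.ppf(p_grid)
    emp = np.mean(replicated_stats[:, None] <= crit[None, :], axis=0)
    ax.plot(emp, p_grid)
    ax.plot([0.0, 1.0], [0.0, 1.0], linestyle="--")   # 45 degree reference line
    ax.set_xlabel("empirical probability")
    ax.set_ylabel("nominal probability p")

# Example: chi-square replicates plotted against their own limit fall on the line.
fig, ax = plt.subplots()
pp_plot(np.random.default_rng(0).chisquare(df=6, size=2000), st.chi2(df=6), ax)
plt.show()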

Figure 1. PP-plots in a simulation study.

The PP-plots for r = 3 = rk{B} in Figure 1 suggest that the EIG test is undersized and that the LDU and SVD tests are oversized.9 In other words, for smaller sample sizes, the EIG test is likely to accept too low a rank, whereas the LDU and SVD tests are likely to accept too high a rank. An idea of the power properties of the tests can be obtained from the PP-plots for r = 0, 1, 2. The LDU and SVD tests appear to have better power than the EIG test, although one should note that these comparisons are not adjusted for size distortions. The PP-plots for r = 4, 5 > rk{B} illustrate the nonstandard distributions of the corresponding statistics.

9. We do not consider the MINCHI2 test based on (4.19) because of numerical problems associated with it.

Observe also that the PP-plot for the SVD test when r = 3 is quite close to the asymptotic 45° line, although deviations do occur at the key right tail of the distribution. Although not reported here, as the sample size increases, all the statistics appear to converge to the limiting distribution, with the SVD statistic converging fastest. For instance, the PP-plots for all tests when r = 3 coincide with the 45° line when N = 10,000. This convergence confirms the theoretical limit results for the test statistics established in the paper.

APPENDIX A. Technical Proofs

Proof of Proposition 2.1. Consider the case of positive semidefiniteness. We shall argue by contradiction. Suppose first that M is a diagonal matrix M = diag(m11,…,mnn). Because rk{M} < n, we may suppose without loss of generality that mnn = 0. Because M̂ is asymptotically normal and mnn = 0, we obtain that N1/2m̂nn →d N(0, σn2), where σn2 is the corresponding diagonal entry of C. On the other hand, because M̂ is positive semidefinite, m̂nn = u′M̂u ≥ 0, with u′ = (0,…,0,1). This implies that σn2 = 0 and hence that the covariance matrix C is singular (the row of C corresponding to m̂nn consists of zeros).

Consider now the case when M is any symmetric matrix. There is an orthogonal transformation U such that UMU′ = M* = diag(m11*,…,mnn*) is diagonal. Let M̂* = UM̂U′ and observe that M̂* is also positive semidefinite. Moreover, we have that

N1/2 vech(M̂* − M*) →d N(0, Dn+(U ⊗ U)Dn CDn′(U′ ⊗ U′)Dn+′).    (A.1)

The matrix Dn+(U′ ⊗ U′)Dn is positive definite by Theorem 13(c) in Magnus and Neudecker (1999, pp. 49–50). Hence, the covariance matrix in (A.1) is also positive definite, and the contradiction follows as in the simple case considered earlier. █

Proof of Proposition 4.1. The proof adapts that of Theorem 1 of Kleibergen and Paap (2006). Observe that, by using Assumptions (a*) and the definitions of

,

,

,

and the ones without hats,

Because

and

, it is enough to show that

is an

-consistent estimator for Ar,⊥.

Because

, it is enough to show

-consistency of

. Because of the special structure of Br and

, this follows from

-consistency of

. Because

, it is enough to show that

. But

, and the latter can be shown to be Op(1) as in Theorems 4.3 and 4.5. █

Proof of Theorem 4.2. We only consider the more difficult case of H0 : rk{M} = r. Suppose first that the permutations Rk in (4.8) are identity and that they are defined uniquely (i.e., there are no ties). Then, for large N,

,

, and

. As in Cragg and Donald (1996, p. 1308), we have

where Φ = (−M21M11−1  In−r) and M = (M11 M12; M21 M22) is a partition with r × r submatrix M11. Relation (A.2) implies that

, where Πs = Dn−r+(Φ ⊗ Φ)Dn. By using Assumptions (A*), we obtain that

and hence

. When the permutations Rk in (4.8) are not identity but still defined uniquely, we have R̂k = Rk for large N, and the matrices M̂ and M are permuted in advance so that permutations become no longer necessary.

The case of ties can be dealt with as in Cragg and Donald (1996). Let

be defined by using one set of possible permutations

indexed by a and satisfying

for large N. Taking into account the permutations

beforehand,

is thus defined in terms of the matrix

, where

is a permutation matrix, and the matrix

, which is a consistent estimator for the limit covariance matrix Ca appearing in

. Also, let

To show that ties make no difference asymptotically to

, it is enough to prove that

.

One can verify that, for any a,

Because

is defined by permuting the matrix

for the Gaussian elimination, the restriction in the minimum of (A.4) can be replaced by

such that

and

is r × r. As in Cragg and Donald (1996, p. 1309), we have

where

and

are the free parameters. Similarly,

where Πsa = Dnr+a [otimes ] Φa)Dn with Φa = (−Ma,21 Ma,11−1 Inr). It is now enough to show that

This can be proved as in Cragg and Donald (1996, p. 1309), as long as ΠsaDn+B = 0. The latter follows from the identity ΠsaDn+B = Dnr+a [otimes ] Φa)Dn Dn+B = Dnr+a [otimes ] Φa)B = 0, where we used (Φa [otimes ] Φa)Dn Dn+ = Dnr Dnr+a [otimes ] Φa) (see Magnus and Neudecker, 1999, Thm. 12(b), p. 49, and Thm. 9(a), p. 47) and the fact that (Φa [otimes ] Φa)B = 0 as proved in Cragg and Donald (1996, p. 1309). █

Proof of Theorem 4.3. Let ci, i = 1,…,n, be the eigenvectors of the matrix F′FM corresponding to the eigenvalues λi, that is, F′FMci = λici. Denote C = (c1,…,cn) = (Cn−r Cr) with Cn−r = (c1,…,cn−r) and Cr = (cn−r+1,…,cn). (This C is used only in this proof and should not be confused with the covariance-like matrix C appearing in (2.2).) We may normalize C as C′(F′F)−1C = In. Arguing as in the proof of Theorem 3.1 in Robin and Smith (2000), one can show that

, i = 1,…,n − r, are asymptotically the eigenvalues of

. Observe finally that

The last equality follows because

and

. The convergence (4.11) follows because

, k = n − r + 1,…,n. █

Proof of Theorem 4.4. The stochastic dominance can be shown as in the proof of Theorems 1 and 2 in Donald (1997). █

Proof of Theorem 4.5. Arguing as in the proof of Theorem 4.3, we may prove that

, k = 1,…,n − q, are asymptotically the ordered eigenvalues of

. This shows the convergence (4.14). The proof of (4.15) is straightforward. █

Proof of Theorem 4.7. To prove the stochastic dominance, observe first that

, where

, k = 1,…,n − q, denote the eigenvalues of

in increasing order. Letting B = (0(n−r)×(r−q)  In−r)′, we have B′B = In−r and, applying the Poincaré separation theorem,

. Because

, it follows that

(to see the last equality use the fact

). █

Proof of Theorem 4.8. We prove the result under H0 only. Restriction

can be expressed as

, where

and Ξ1 are n × r and r × (n − r) matrices, respectively. Because the matrix

is symmetric, there are (n − r)r + r(r + 1)/2 =: s free parameters μ for

(these are the free parameters of

because

in the case of symmetric matrices), and hence

where

. Now let B(μ) be an n(n + 1)/2 × s matrix defined by

. Because rk{M} = r, we have M = M(μ0) for some μ0, and, moreover, the corresponding submatrix

has full column rank. We obtain that the matrix B(μ0) is of full column rank, that is, rk{B(μ0)} = s.

Let μ̂ be the value of μ minimizing the expression on the right-hand side of (A.5). By using the Taylor expansion and

, we have

The first-order conditions for minimizing (A.5), together with (A.6), imply that

By substituting (A.7) into (A.6), we get that

where A0 = C−1/2B(μ0). By (2.2) and because the matrix B(μ0) (and hence A0) has full column rank, we obtain that

. █

REFERENCES

Bunch, J.R. & B.N. Parlett (1971) Direct methods for solving symmetric indefinite systems of linear equations. SIAM Journal on Numerical Analysis 8, 639–655.
Cragg, J.G. & S.G. Donald (1993) Testing identifiability and specification in instrumental variable models. Econometric Theory 9, 222–240.
Cragg, J.G. & S.G. Donald (1996) On the asymptotic properties of LDU-based tests of the rank of a matrix. Journal of the American Statistical Association 91, 1301–1309.
Cragg, J.G. & S.G. Donald (1997) Inferring the rank of a matrix. Journal of Econometrics 76, 223–250.
Donald, S.G. (1997) Inference concerning the number of factors in a multivariate nonparametric relationship. Econometrica 65, 103–131.
Donald, S.G., N. Fortuna, & V. Pipiras (2005) On Rank Estimation in Symmetric Matrices: The Case of Indefinite Matrix Estimators. FEP Working paper 167, Faculdade de Economia do Porto, Porto, Portugal.
Donkers, B. & M. Schafgans (2003) A Derivative Based Estimator for Semiparametric Index Models. Econometric Institute Report 2003-08, Erasmus University Rotterdam, Rotterdam, the Netherlands.
Fortuna, N. (2004) Local Rank Tests in a Multivariate Nonparametric Relationship. FEP Working paper 137, Faculdade de Economia do Porto, Porto, Portugal.
Gill, L. & A. Lewbel (1992) Testing the rank and definiteness of estimated matrices with applications to factor, state-space and ARMA models. Journal of the American Statistical Association 87, 766–776.
Gouriéroux, C., A. Monfort, & A. Trognon (1985) Moindres carrés asymptotiques. Annales de l'I.N.S.E.E. 58, 91–122.
Kapetanios, G. & G. Camba-Mendez (1999) A Bootstrap Test of Cointegration Rank. NIESR Discussion paper 151, National Institute of Economic and Social Research, London, United Kingdom.
Kleibergen, F. & R. Paap (2006) Generalized reduced rank tests using the singular value decomposition. Journal of Econometrics 133, 97–126.
Kneip, A. (1994) Nonparametric estimation of common regressors for similar curve data. Annals of Statistics 22, 1386–1427.
Magnus, J.R. & H. Neudecker (1999) Matrix Differential Calculus with Applications in Statistics and Econometrics. Wiley. Revised reprint of the 1988 original.
Ratsimalahelo, Z. (2002) Rank Test Based on Matrix Perturbation Theory. Preprint, U.F.R. Science Economique, University of Franche-Comté.
Ratsimalahelo, Z. (2003) Strongly Consistent Determination of the Rank of Matrix. Preprint, U.F.R. Science Economique, University of Franche-Comté.
Robin, J.-M. & R.J. Smith (1995) Tests of Rank. DAE Working paper 9521, Department of Applied Economics, University of Cambridge, Cambridge, United Kingdom.
Robin, J.-M. & R.J. Smith (2000) Tests of rank. Econometric Theory 16, 151–175.