
A CONSISTENT NONPARAMETRIC EQUALITY TEST OF CONDITIONAL QUANTILE FUNCTIONS

Published online by Cambridge University Press:  23 May 2006

Yiguo Sun
Affiliation:
University of Guelph

Abstract

This paper proposes a consistent nonparametric test of the equality of unknown conditional quantile curves across subgroups within a survey data framework. Moreover, to improve the small-sample performance of the test, we propose a modified version of the wild bootstrap procedure in a quantile context. Monte Carlo evidence shows that the performance of the test in small samples is adequate but improves significantly when the bootstrap critical values are used.

I thank the co-editor and two referees for helpful comments that improved the paper. Financial support from the Social Sciences and Humanities Research Council of Canada is gratefully acknowledged.

Type
Research Article
Copyright
© 2006 Cambridge University Press

1. INTRODUCTION

Survey data are frequently used in economics and have one prominent characteristic: data heterogeneity across population subgroups. The presence of heterogeneity naturally makes attractive the application of conditional quantile regressions; see Koenker and Bassett (1978, 1982), Buchinsky (1994), Powell (1986), and Newey and Powell (1990), among others. A major concern in modeling survey data is whether the data are poolable. To our knowledge, testing for poolability within the framework of a conditional quantile regression has not yet been studied in the literature, although within the framework of a conditional mean regression it has been investigated by Baltagi, Hidalgo, and Li (1996) and Lavergne (2001), among others. This paper aims to fill this gap in the literature by providing a consistent nonparametric test of the equality of unknown conditional quantile curves across population subgroups. The proposed test is a residual-based test derived by recognizing that an orthogonal moment condition holds only under the null hypothesis. Tests based on orthogonal moments have been investigated extensively in other contexts as well; related work includes Fan and Li (1996), Zheng (1996, 1998), and many others.

The proposed nonparametric test requires the availability of a consistent nonparametric estimator of the unknown conditional quantile curve. In the literature of nonparametric conditional quantile curve estimation, related work includes the kernel and k-nearest neighbor estimator of Bhattacharya and Gangopadtyay (1990), the spline smoothing estimator of Koenker, Ng, and Portnoy (1994), the weighted Nadaraya–Watson estimator of Hall, Wolff, and Yao (1999), the local linear regression approach of Fan, Hu, and Truong (1994), the double kernel method of Yu and Jones (1998), and many others. Because the proposed test is designed to handle data containing both discrete (with or without natural ordering) and continuous regressors—usually the case of survey data—we extend the estimation method of Fan et al. (1994) to the mixed data case.
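The local linear quantile approach of Fan et al. (1994) estimates the conditional quantile at a point by minimizing a kernel-weighted check loss. The following one-dimensional sketch is illustrative only: the Epanechnikov kernel, the Nelder-Mead minimizer, the data-generating process, and all constants are assumptions of this example, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import minimize

def rho(u, theta):
    # check function: rho_theta(u) = u * (theta - I(u <= 0))
    return u * (theta - (u <= 0))

def local_linear_quantile(x, y, x0, theta, h):
    # minimize sum_i rho_theta(y_i - a - b*(x_i - x0)) * K((x_i - x0)/h)
    # over (a, b); the intercept a estimates g_theta(x0)
    d = (x - x0) / h
    w = 0.75 * (1 - d**2) * (np.abs(d) <= 1)   # Epanechnikov weights
    def obj(ab):
        a, b = ab
        return np.sum(rho(y - a - b * (x - x0), theta) * w)
    res = minimize(obj, x0=np.array([np.quantile(y, theta), 0.0]),
                   method="Nelder-Mead")
    return res.x[0]

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 800)
y = np.sin(2 * x) + rng.normal(0, 0.3, 800)   # true median curve: sin(2x)
ghat = local_linear_quantile(x, y, x0=0.5, theta=0.5, h=0.25)
```

With symmetric errors, the fitted intercept at x0 = 0.5 should be close to the true conditional median sin(1).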

We show that the proposed nonparametric test not only is consistent against any fixed alternative but also has asymptotic power of one against local alternatives converging to the null hypothesis at the usual nonparametric rates. Monte Carlo evidence, however, indicates a size-distortion problem, as is well documented in the literature on nonparametric hypothesis tests in finite samples. In a mean regression context, the wild bootstrap is used to improve the finite-sample performance of a nonparametric test; see Gozalo (1997), Härdle and Mammen (1993), and Li and Wang (1998), for example. The commonly used wild bootstrap, however, fails in a quantile context because the conditional mean of the disturbance terms may not be zero for a general quantile regression model. Hence, we propose a modified version of the wild bootstrap procedure in the quantile context. Monte Carlo simulation results indicate that the proposed bootstrap procedure significantly improves the performance of the test in small samples.

The rest of the paper is organized as follows. Section 2 gives the test and its asymptotic results. Section 3 describes the bootstrap procedure and provides theoretical justification. Section 4 presents a Monte Carlo simulation study of the test. Section 5 concludes. All relevant proofs are given in the Appendixes.

2. EQUALITY TEST AND MAIN THEOREMS

Suppose we fit the following nonparametric quantile regression model (at a probability mass θ ∈ (0,1)) to a set of survey data comprising J subgroups with total sample size n:

where the θth conditional quantile of vij given xij is 0 and gθ,j(x) is an unknown function of x, i = 1,2,…, nj; j = 1,2,…, J. Here J (≥2) is a finite number. Denote the sample size of the jth subgroup by nj. We have aj = limn→∞(nj /n), where n = ∑j=1J nj. Let xij = (x1,ij′,x2,ij′)′ ∈ Rq−p × Rp, where xij is a q × 1 column vector of explanatory variables with p (≥1) continuously distributed variables x2,ij and q − p (≥0) discretely distributed variables x1,ij.

Let Fj(y|x) be the conditional distribution of Y given X = x of the jth subgroup. Then

Throughout this paper we assume that Fj(y|x) are absolutely continuous in y for almost all x, which implies that Fj(gθ,j(x)|x) = θ for almost all x. To simplify notation, we suppress the subscript θ of gθ,j(·). The aim of this paper is to test whether gj(·) are the same across subgroups. If they are, the pooled data will be used to estimate the unknown conditional quantile curve to gain more efficiency. Hence, the null and alternative hypotheses under consideration are

Using the pooled data to estimate the unknown θth-conditional quantile curve, g(x), Lemma 1, which follows, shows that g(x) is a weighted average of gj(x) and that g(x) ≡ gj(x) holds for all j only under H0. Hence, the equivalent null and alternative hypotheses can be written as

As in Zheng (1998), we transform model (1) into the following conditional mean regression model:

Let uij = I(yij ≤ g(xij)) − θ. Then under H0, uij = εij; under H1, uij = εij + I(yij ≤ g(xij)) − I(yij ≤ gj(xij)). Hence, E [uij E(uij|xij)] ≥ 0, and the equality holds if and only if H0 is true. To avoid the random denominator problem in nonparametric kernel estimation, we use the density-weighted version of E [uij E(uij|xij)] as in Powell, Stock, and Stoker (1989). The leave-one-out estimator of E [uij E(uij|xij) fj(xij)] is

where

. Also, for any fixed j,

where λ = (λ1,…, λq−p)′ are the smoothing parameters for the discrete variables and H = diag(h12,…, hp2) holds the smoothing parameters for the continuous variables, and

. Moreover,

defined in Lemma 1 is the nonparametric estimator of g(xij) from the pooled data by using a different set of smoothing parameters

in (7), where

.
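The mechanics of the density-weighted, leave-one-out statistic can be sketched numerically. The toy below uses a single continuous regressor, plugs in the true quantile curve, and takes an Epanechnikov kernel; all of these are simplifying assumptions of the sketch (the paper's statistic uses the estimated pooled curve and mixed regressors).

```python
import numpy as np

def check_indicator_residuals(y, ghat, theta):
    # u_ij = I(y_ij <= g(x_ij)) - theta, which has mean zero under H0
    return (y <= ghat).astype(float) - theta

def leave_one_out_jn(u, x, h):
    # J_n = 1/(n(n-1)h) * sum_{i != k} u_i u_k K((x_i - x_k)/h)
    n = len(u)
    d = (x[:, None] - x[None, :]) / h
    K = 0.75 * (1 - d**2) * (np.abs(d) <= 1)   # Epanechnikov kernel
    np.fill_diagonal(K, 0.0)                   # leave-one-out: drop i == k
    return (u[:, None] * u[None, :] * K).sum() / (n * (n - 1) * h)

rng = np.random.default_rng(0)
n, theta = 500, 0.25
x = rng.uniform(0, 1, n)
y = x + rng.normal(0, 1, n)
g_true = x - 0.6744897501960817   # x plus the 0.25-quantile of N(0,1)
u = check_indicator_residuals(y, g_true, theta)
jn = leave_one_out_jn(u, x, h=0.2)
```

Under the null the statistic is a degenerate U-statistic, so jn should be close to zero here.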

Let

for any matrix A and x = (x1′,x2′)′ ∈ Rq−p × Rp. We list all the relevant assumptions as follows.

Assumption 1. (xij,yij) are independent and identically distributed (i.i.d.) in the subscript i and independently distributed in the subscript j.

Assumption 2. Let f1,j(x1) be the probability of x1,ij = x1 and f2|1,j(x2|x1) be the conditional density function of x2,ij given x1,ij = x1. Denote fj(x1,x2) = f1,j(x1) f2|1,j(x2|x1). Then they satisfy the following restrictions:

(i) x1,ij can take any value from

, which contains

different points {0,1,…, ct − 1 : ct ≥ 2}t=1q−p. Also,

with f1,j(x1) > 0 for any

.

(ii) f2|1,j(x2|x1) > 0, and its first- and second-order (partial) derivatives with respect to x2 are all uniformly bounded.

Assumption 3. The conditional cumulative distribution function (c.d.f.) and probability density function (p.d.f.) of errors vij given xij = x are Fv,j(v|x) and fv,j(v|x), respectively. Also, fv,j(v|x) has uniformly bounded first- and second-order partial derivatives with respect to v and x2. Moreover, fv,j(0|x) > 0 and Fv,j(0|x) = Pr(vij ≤ 0|xij = x) = θ hold for all j on its domain.

Assumption 4. The unknown quantile curves gj(x) satisfy the conditions in Fan et al. (1994). For instance, gj(x) are twice continuously differentiable with respect to x2, and gj(x) and their first- and second-order partial derivatives with respect to x2 are all uniformly bounded.

Assumption 5. The product kernel function is used, K(u1,u2) = K1(u1;λ) × K2(u2), where

. (i) k1(u1,s; λs) = 1 if u1,s = 0 and λs otherwise, where 0 ≤ λs < 1 for all s; (ii) k2(·) is a nonnegative, symmetric, and bounded function taking values on [−1,1] such that

, and

.

Assumption 6. As n → ∞, the smoothing parameters satisfy the following restrictions: (i) n|H|1/2 → ∞, ∥H∥ → 0, λs → 0 for all s; (ii)

; (iii)

; and (iv)

.

The term K1(u1;λ) in Assumption 5 is similar to what is defined by Racine and Li (2004). One advantage of this smoothing method relative to the conventional estimator with λs = 0 for all s lies in its ability to borrow information from nearby cells.
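The product kernel of Assumption 5 can be sketched directly: matching discrete categories receive weight one, mismatches receive λs, and the continuous components are smoothed with a standard second-order kernel. The Epanechnikov choice for k2 and the example values below are assumptions of this sketch.

```python
import numpy as np

def k1(u1, lam):
    # discrete-component kernel: 1 when categories match, lambda_s otherwise
    return np.where(u1 == 0, 1.0, lam)

def k2(u2):
    # Epanechnikov kernel supported on [-1, 1] for the continuous components
    return 0.75 * (1.0 - u2**2) * (np.abs(u2) <= 1)

def product_kernel(x1_i, x1_k, x2_i, x2_k, lam, h):
    # K = prod_s k1(x1 difference; lambda_s) * prod_t k2((x2 diff)/h_t)/h_t
    K1 = np.prod(k1(x1_i - x1_k, lam))
    K2 = np.prod(k2((x2_i - x2_k) / h) / h)
    return K1 * K2

# one discrete match, one mismatch (weight 0.1), one continuous component
w = product_kernel(np.array([1, 0]), np.array([1, 2]),
                   np.array([0.3]), np.array([0.35]),
                   lam=np.array([0.1, 0.1]), h=np.array([0.5]))
```

Setting λs = 0 recovers the conventional frequency estimator: any category mismatch then contributes zero weight, so no information is borrowed from nearby cells.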

The construction of Jn in (4) requires consistent nonparametric estimation of g(x). For the continuous case Fan et al. (1994) develop a local linear regression approach to estimate g(x) and give the consistency and the asymptotic normality results of the estimator. Our paper extends their estimation method to the multivariate mixed categorical and continuous cases, and the result is presented in the following lemma.

LEMMA 1. Suppose Assumptions 1–5 and (ii) of Assumption 6 hold. Then the following results hold.

(i) For any fixed

, there exists a set of weightsj(x0)}j=1J satisfying

so that

.

(ii) Solving the following optimization problem:

gives

, where ρθ(u) = u[θ − I(u ≤ 0)] is called the “check function” and

and (iii)

The discrete variables are excluded from the check function in (6) because the validity of Lemma 1 requires the error term of the Taylor expansion (A.10) in Appendix A to vanish at some appropriate rate.
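Lemma 1's point that the pooled quantile g(x) is a weighted average of the subgroup quantiles gj(x), coinciding with them only under H0, can be illustrated numerically. The covariate-free setting and normal subgroups below are assumptions of this illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
theta = 0.25
# two subgroups whose theta-th quantiles differ (a fixed alternative)
y1 = rng.normal(0.0, 1.0, 100_000)   # subgroup 1: g_1 is about -0.674
y2 = rng.normal(1.0, 1.0, 100_000)   # subgroup 2: g_2 is about  0.326
g1, g2 = np.quantile(y1, theta), np.quantile(y2, theta)
# pooling mixes the distributions, not the quantiles themselves
g_pooled = np.quantile(np.concatenate([y1, y2]), theta)
```

The pooled θ-quantile lies strictly between g1 and g2, yet it is not their simple average: the implicit weights depend on the subgroup densities at the pooled quantile, mirroring the density-dependent weights λj(x) of the lemma.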

The following theorem gives the main asymptotic results of the proposed test and Lemmas 6–9 in Appendix A give the proof of the theorem.

THEOREM 2. If Assumptions 1–6 hold, under H0 we have

where

can be consistently estimated by

Under H1 we have

where

. Thus

where Bn = o(n|H|1/4).

To investigate the local power of the test, we consider the following local alternative hypotheses:

where dn → 0 as n → ∞, the unknown functions mj(x) are assumed to have continuous second-order partial derivatives with respect to x2, and mj(x) and their first- and second-order partial derivatives are all uniformly bounded.

THEOREM 3. (Local power). If Assumptions 1–6 hold, under local alternatives H1a with dn = n−1/2|H|−1/8, we have

where

, and λl(x) are as in Lemma 1.

Theorems 2 and 3 indicate that the proposed test is consistent against any fixed alternative and against local alternatives converging to the null hypothesis at usual nonparametric rates. Let

; then

under H0. This is a one-sided test: if Tn exceeds the 95th percentile of the standard normal distribution, the null hypothesis is rejected at the 5% significance level.
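The one-sided decision rule is a direct comparison against an upper normal critical value; the helper name below is illustrative.

```python
from scipy.stats import norm

def reject_h0(Tn, alpha=0.05):
    # one-sided test: reject poolability when the standardized statistic
    # exceeds the upper-alpha critical value of N(0,1) (about 1.645 at 5%)
    return bool(Tn > norm.ppf(1 - alpha))
```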

3. HOW TO BOOTSTRAP

The bootstrap is often used to improve the performance of a test in small samples. Härdle and Mammen (1991, 1993) show that the two-point wild bootstrap is valid in the context of nonparametric estimation of conditional mean curves and nonparametric model specification tests. However, the commonly used wild bootstrap fails in a quantile context, where the assumption of disturbances having zero conditional mean does not hold. Therefore, this paper proposes a modified version of the wild bootstrap procedure for the quantile case.

In our context, we define the estimated residuals as

Let vijb be the bootstrap resamples of

. For each pair of index (i,j), randomly draw with replacement the bootstrap residuals vijb from a two-point distribution

such that

where

denotes a probability measure that puts mass one at x, pij ∈ (0,1). If aij ≤ 0 < bij, (16) gives pij = θ. Given pij = θ, solving (17) and (18) gives

where cij = −dij /(1 + dij) and dij is one of the real roots of the following function:

For example, when θ = 0.25, we obtain

When

, we add a small positive number, δ, to

such that bij > 0 will be obtained. For example, we take δ = 10−8.
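The essential feature of the two-point draw is that with support points aij ≤ 0 < bij and P(vijb = aij) = pij = θ, the θth quantile of the bootstrap residual is exactly zero. The paper pins the support points down through the moment conditions (16)–(18); the scaling used below is a hypothetical placeholder chosen only to satisfy a ≤ 0 < b and a zero mean, not the paper's solution.

```python
import numpy as np

def two_point_bootstrap(vhat, theta, rng, delta=1e-8):
    # Draw v_b in {a, b} with P(v_b = a) = theta and a <= 0 < b, so that the
    # theta-th quantile of v_b is exactly 0 whatever the residual vhat is.
    # NOTE: the paper determines (a, b) from moment conditions (17)-(18);
    # the scaling below is a hypothetical placeholder, not the paper's choice.
    a = -np.abs(vhat) - delta                          # negative support point
    b = (theta / (1 - theta)) * np.abs(vhat) + delta   # positive, keeps mean near 0
    u = rng.uniform(size=vhat.shape)
    return np.where(u < theta, a, b)

rng = np.random.default_rng(3)
theta = 0.25
vhat = rng.normal(size=200_000)        # stand-in for estimated residuals
vb = two_point_bootstrap(vhat, theta, rng)
frac_nonpos = np.mean(vb <= 0)         # should be close to theta
```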

We now construct the bootstrap resamples {(xij, yijb)}, where

and

are calculated from the pooled data {(xij,yij)} as described in Lemma 1 with smoothing parameters

. The proposed bootstrap method guarantees that

for all i and j, where the probability is taken over the distribution of yijb conditional on the true observations {(xij,yij)}. Lemma 4, which follows, shows that

needs to be oversmoothed, as found by Härdle and Mammen (1991) in the conditional mean regression context.

LEMMA 4. Given the assumptions in Lemma 1, we have

Moreover, if

, we have

Given the bootstrap resamples {(xij,yijb)}, we construct the bootstrap test Tn*. We conduct B bootstrap repetitions to obtain the bootstrap test statistics {Tn,b*}b=1B and thereby construct the (1 − α) quantile tα*. If Tn calculated from the true data exceeds tα*, we reject the null hypothesis of poolability. The validity of the proposed bootstrap procedure requires that the distribution of Tn* consistently approximate the null distribution of Tn whether or not the data are generated under the null hypothesis. In what follows, we give the theoretical results and delay the proof to Appendix B.
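The critical-value step above is generic and can be sketched with placeholder functions; the toy statistic and resampling scheme below merely stand in for Tn and the wild-bootstrap step and are assumptions of this sketch.

```python
import numpy as np

def bootstrap_critical_value(stat_fn, resample_fn, data, B=100, alpha=0.05,
                             rng=None):
    # Compute the (1 - alpha) quantile t_alpha* of {T*_b} over B bootstrap
    # resamples; the test rejects when the real-data statistic exceeds it.
    rng = rng or np.random.default_rng()
    t_boot = np.array([stat_fn(resample_fn(data, rng)) for _ in range(B)])
    return np.quantile(t_boot, 1 - alpha)

# toy illustration: statistic = root-n sample mean, resample = centered
# i.i.d. redraw (stand-ins for T_n and the modified wild bootstrap)
rng = np.random.default_rng(4)
data = rng.normal(0, 1, 400)
stat = lambda d: np.sqrt(len(d)) * d.mean()
resample = lambda d, g: d[g.integers(0, len(d), len(d))] - d.mean()
t_crit = bootstrap_critical_value(stat, resample, data, B=500, alpha=0.05,
                                  rng=rng)
```

Because the toy statistic is asymptotically N(0,1) under the centered resampling, the bootstrap critical value should land near the normal value 1.645.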

THEOREM 5. If the assumptions in Lemma 4 hold, then

where

is the conditional distribution of the bootstrap test Tn* given {(xij,yij)} and d2(G,H), the Mallows distance, is the infimum of E(XY)2 over all joint distributions for the pair of random variables X and Y whose marginal distribution functions are G and H, respectively. The functions G and H belong to a functional space Γ2 = {G : ∫x2 dG(x) < ∞}.

Bickel and Freedman (1981) show that convergence in the Mallows distance is equivalent to weak convergence; Pólya's theorem then extends the weak convergence to uniform convergence (Serfling, 2002, p. 18).
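For equal-size samples, the Mallows distance d2 between the empirical distributions is attained by the comonotone (sorted) pairing, which gives a simple way to compute it; the Gaussian examples are assumptions of this sketch.

```python
import numpy as np

def mallows_d2(x, y):
    # d_2(G, H): infimum of E(X - Y)^2 over couplings with marginals G, H;
    # for equal-size empirical samples the sorted pairing attains it
    x, y = np.sort(x), np.sort(y)
    return np.sqrt(np.mean((x - y) ** 2))

rng = np.random.default_rng(5)
a = rng.normal(0, 1, 50_000)
b = rng.normal(0, 1, 50_000)
c = rng.normal(1, 1, 50_000)
d_same = mallows_d2(a, b)    # near zero for identical distributions
d_shift = mallows_d2(a, c)   # near one for a unit mean shift
```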

4. MONTE CARLO SIMULATIONS

In this section we investigate the finite-sample properties of the test by means of Monte Carlo simulations. We consider θ = 0.25, 0.5, and 0.75; n = 200 and 400. The number of Monte Carlo replications is M = 1,000 for each experiment. The data generating processes take the form of

where uij ∼ i.i.d. N(μθ,1) with μθ chosen such that the θth quantile of uij is zero, x1,ij ∈ {0,1,2,3} with natural ordering, and

for l = 0,1,2,3, i = 1,2,…, nj, j = 1,2, and n1 = n2. When j = 1, we generate x2,i1 ∼ i.i.d. U[0,1], σ1(x) ≡ 0.5, and d = 0. When j = 2, we generate

, σ2(xi2) = cx2,i2 with c determined by

, and d ∈ {0, 0.5, 1} where d = 0 gives the null hypothesis.

Two pairs of smoothing parameters are used: (λj,hj) for nonparametric estimation from subgroup data and

for nonparametric estimation from the pooled data. We set

, where

are the standard deviations of x2,ij for a given j and for the pooled data, respectively. Assumption 6 implies that

; we take β = 0.2 and α = 0.6 and 0.65 to assess the sensitivity of the test to the choice of bandwidth. Following Racine and Li (2004), we define k1(u;λ) = λs if |u| = s. Here

k2(u) = 0.75(1 − u2)I(|u| ≤ 1)

is the Epanechnikov kernel. We acknowledge the use of the Koenker and d'Orey (1987) algorithm.
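A rule-of-thumb bandwidth scheme consistent with Assumption 6 shrinks the continuous bandwidth at a polynomial rate in n (scaled by the standard deviation of x2) and the discrete smoothing parameter at a slower rate. The specific functional form below is a hypothetical sketch; the exact constants in the simulations are not reproduced here.

```python
import numpy as np

def bandwidths(x2, n, alpha=0.6, beta=0.2):
    # Hypothetical rule of thumb: continuous bandwidth h ~ sd(x2) * n^(-alpha),
    # discrete smoothing parameter lambda ~ n^(-beta), so both vanish as n
    # grows, in the spirit of Assumption 6 (not the paper's exact rule).
    h = np.std(x2, ddof=1) * n ** (-alpha)
    lam = n ** (-beta)
    return h, lam

rng = np.random.default_rng(6)
x2 = rng.uniform(0, 1, 200)
h, lam = bandwidths(x2, 200)
```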

Table 1 presents the percentage rejection rates of the test. Generally speaking, the power of the test improves as the sample size increases. The size of the test, however, does not improve much when the sample size increases from 200 to 400, and this size distortion is observed in all cases. A smaller bandwidth (α = 0.65) is good for size, whereas a larger bandwidth (α = 0.6) is good for power, and the optimal bandwidth differs across probability masses.

Percentage of rejections using critical values from the standard normal distribution

To improve the performance of the test in small samples, we apply the proposed bootstrap procedure to the case of θ = 0.25 and n = 200 and present the results in Table 2. By Lemma 4, we use

to calculate

, and we replace

defined earlier by

. With 100 bootstrap replications, the results in Table 2 indicate that the performance of the test improves significantly when the bootstrap critical values are used: smaller size distortions and stronger power are observed.

Percentage of rejections using bootstrap critical values: θ = 0.25 and n = 200

5. CONCLUSION

This paper proposes a consistent nonparametric test of whether unknown conditional quantile functions are the same across subgroups within a survey data framework. To improve the small-sample performance of the test, we propose a modified version of the two-point wild bootstrap in the quantile context. Monte Carlo evidence shows that the performance of the test in small samples is adequate but improves significantly when the bootstrap critical values are used.

APPENDIX A

Proof of Lemma 1. First, for any given

, we will show that

calculated from the pooled data converges to a weighted average of g1(x0),…, gJ(x0). To simplify the proof, we consider the local constant approach with an objective function

If Assumptions 1–5 hold and

for all s, as n → ∞, we obtain

and similarly,

. Hence,

Because

is convex in a and Q0(a) is continuous in a, by the convexity lemma of Pollard (1991) and Theorem 2.1 of Newey and McFadden (1994), we have that

converges in probability to g(x0), the unique minimizer of Q0(a) and the solution of the following equation:

By Assumption 3, a Taylor expansion at 0 yields

where ζ0,j is between 0 and g(x0) − gj(x0). Substituting (A.5) into (A.4) gives

where

for all j and

. Assumption 2 and the continuity of vij conditional on xij indicate that g(x0) = gj(x0) for all j if and only if H0 is true.

Next, closely following Fan et al. (1994), we give the proof of the rest of Lemma 1. For the sake of convenience, in what follows, the summations begin with 1 and end with n, and the heterogeneity in the j subscript is suppressed without hurting the essence of the proof of this lemma. Hence, the modified linear quantile regression approach minimizes the following objective function:

where

. Here we define

and yi* = yi − g(x0) − ∇x2 g(x0)′(x2,i − x2,0), where ∇x2 g(x0) is the first-order partial derivative of g(x) with respect to x2 evaluated at x0, the subscript 1 corresponds to the discrete variables, and the subscript 2 refers to the continuous variables. Then

minimizes the following function:

Our proof will repeatedly use the following three results: (a) Taylor expansion of g(x1,0,x2,i) at x0 = (x1,0,x2,0)′ implies that

holds uniformly over

; (b) the law of large numbers gives

, where I(yi* = 0) = 1 if yi* = 0, 0 otherwise; and (c) for any x ≠ 0

As a result, if

as n → ∞, we have

, where

and ξi = g(x0) + ∇x2 g(x0)′(x2,i − x2,0) − g(x1,i,x2,i); also,

with ξ̄i lying between ξi + t and ξi. Similarly, we can show that those conditions are also sufficient for Rn = op(1).

Therefore, by the convexity lemma of Pollard (1991), we have

holds uniformly. Then simple calculations yield

where

It is easy to calculate that

and

. This will complete the proof of (8). █

LEMMA 6. If Assumptions 1–5 and (i) of Assumption 6 hold, then under H0,

as n → ∞.

Proof. Under the null hypothesis, uij = εij. For a given j (1 ≤ j ≤ J), denote

. Let zij = (εij,xij); then Hn,j(zij,zkj) = εijεkj Kij,kj is symmetric and E [Hn,j(zij,zkj)|zij] = 0 for any ik. The central limit theorem of Hall (1984) for a degenerate U-statistic will be used to derive the asymptotic normality result of Jn1,j. Then the asymptotic normality of

will follow.

For k ≠ i ≠ s, given the assumption, we have

where σj2 = θ2(1 − θ)2R2(K2)E [fj(x1,ij,x2,ij)], λmax = max1≤s≤q−p λs, and

Hence 0 ≤ {E [Gn,j2(zij,zkj)] + nj−1E [Hn,j4(zij,zkj)]}/{E [Hn,j2(zij,zkj)]}2 → 0 if

, and λmax → 0 as nj → ∞. Then Theorem 1 of Hall (1984) gives

. Immediately, we have

, where

. █

LEMMA 7. If Assumptions 1–6 hold, then under H0 we have n|H|1/4Jn2 = op(1).

Proof. Under

where

. Simple calculation gives

where i ≠ k ≠ m ≠ s ≠ t. The computational complexity of (A.18) stems from the dependence between

because

depends on vij. To overcome this problem, we rewrite

by (A.12)–(A.14):

, where

is the product kernel with smoothing parameters

The rest of the proof follows by recognizing that only Δkj,ij is correlated with εij for all k ≠ i conditional on x. Without hurting the essence of the proof, we also assume that

is the leave-one-out estimate of g(xkj) to remove the dependence between

; this is only assumed to shorten the proof.

First, we obtain

where ξkj lies between

. Second, define

where

. The terms

are independent of εij and εsj for all ksi conditional on x. Some calculations give

, and

. Hence E(n|H|1/4Jn2)2 = o(1) under Assumption 6.

Similarly, we obtain

if

because

holds for k ≠ i. This will complete the proof of this lemma. █

LEMMA 8. If Assumptions 1–6 hold, then under H0 we have n|H|1/4Jn3 = op(1).

Proof. First we have

if

as n → ∞. Second, we obtain

where i ≠ s ≠ m ≠ k ≠ t under Assumptions 1–6, because

Equations (A.21) and (A.22) together give n|H|1/4Jn3 = op(1). █

LEMMA 9. If Assumptions 1–6 hold, then

as n → ∞.

Proof. For a given j, let

, a U-statistic with Hn,j(xij,xkj) = Kij,kj2, which is a symmetric function of

. Hence, Lemma 3.1 of Powell et al. (1989) yields

Hence

. █

LEMMA 10. If Assumptions 1–6 hold, then under H1, we have

Proof. Lemma 1 shows that

also holds under H1, and it is easy to show that Jn2 and Jn3 are both op(1). Hence, by (4), we only need to show that

where uij = I(yij ≤ g(xij)) − θ.

Because

, by Lemma 3.1 of Powell et al. (1989), we obtain

where wij = I(yij ≤ g(xij)) − I(yij ≤ gj(xij)). This gives (A.24) and will complete the proof of this lemma. █

Proof of Theorem 3. Under

as n → ∞. Similar to the proofs in Lemmas 7 and 8, we obtain that n|H|1/4Jn2 and n|H|1/4Jn3 are both op(1). So we only need to show that

.

By (4), we have

where Lemma 6 gives

and simple calculations give Bn = op(1).

Let

. We obtain

where

. Following the proof of Lemma 8, we have var(Cn) = o(1). Hence, Cn = μ + op(1), which will complete the proof of this theorem. █

APPENDIX B

We have bootstrap resamples {(xij,yijb)}, where

where a1,b2 > 0 and a2,b1 < 0. To simplify the proofs, we assume that the leave-one-out nonparametric estimation of g(·) is applied.

Proof of Lemma 4. The proof of this lemma closely follows that of Lemma 1 in Appendix A, replacing yi by yib, yi* by yib*, and

by

in the proof of Lemma 1, where

, and

.

By (B.1)–(B.3), a2 < 0 < a1, and b1 < 0 < b2, simple calculations give

Treating the last equation as a function of t and applying Taylor expansion to this function at t = 0 yields

where c = a2−1a1−1 = b1−1b2−1. We then obtain

, where

If Assumptions 1–5 and (ii) of Assumption 6 hold, we have

where ξ(x0,θ) = θ if ∂2g(x0)/∂x2x2′ is a nonnegative definite matrix, otherwise 1 − θ. It can be shown that

has the same order as

in the proof of Lemma 1, and this will complete the proof of (21).

Next, we are going to show that

, where e1 is a (p + 1) × 1 vector whose first element is one and whose remaining elements are zero. By (B.4), we obtain

where w(·) is a uniformly bounded function over its domain. If

, we have

and

because it can be shown that

based on the results from the proof of Lemma 1. This will complete the proof of this lemma. █

Proof of Theorem 5. The bootstrap test is given by

where

, and

. The proposed bootstrap procedure ensures that Eb(εijb) = 0, Eb((εijb)2) = θ(1 − θ), and Eb(εijbεkjb) = 0 for i ≠ k. Following the proof of Lemma 6, the asymptotic normality of n|H|1/4Jn1b can be easily obtained. Following the proofs in Lemmas 7 and 8, we can show that n|H|1/4Jn2b = op(1) and n|H|1/4Jn3b = op(1), respectively, although the proofs are more complicated because more terms are involved. █

REFERENCES

Baltagi, B.H., J. Hidalgo, & Q. Li (1996) A nonparametric test for poolability using panel data. Journal of Econometrics 75, 345–367.
Bhattacharya, P.K. & A.K. Gangopadtyay (1990) Kernel and nearest-neighbor estimation of a conditional quantile. Annals of Statistics 18, 1400–1415.
Bickel, P.J. & D.A. Freedman (1981) Some asymptotic theory for the bootstrap. Annals of Statistics 9, 1196–1217.
Buchinsky, M. (1994) Changes in the U.S. wage structure 1963–1987: Application of quantile regression. Econometrica 62, 405–458.
Fan, J., T. Hu, & Y.K. Truong (1994) Robust non-parametric function estimation. Scandinavian Journal of Statistics 21, 433–446.
Fan, Y. & Q. Li (1996) Consistent model specification tests: Omitted variables and semiparametric functional forms. Econometrica 64, 865–890.
Gozalo, P.L. (1997) Nonparametric bootstrap analysis with applications to demographic effects in demand functions. Journal of Econometrics 81, 357–393.
Hall, P. (1984) Central limit theorem for integrated square error of multivariate nonparametric density estimators. Journal of Multivariate Analysis 14, 1–16.
Hall, P., R.C.L. Wolff, & Q. Yao (1999) Methods for estimating a conditional distribution function. Journal of the American Statistical Association 94, 154–163.
Härdle, W. & E. Mammen (1991) Bootstrap simultaneous error bars for nonparametric regression. Annals of Statistics 19, 778–796.
Härdle, W. & E. Mammen (1993) Comparing nonparametric versus parametric regression fits. Annals of Statistics 21, 1926–1947.
Koenker, R. & G. Bassett (1978) Regression quantiles. Econometrica 46, 33–50.
Koenker, R. & G. Bassett (1982) Robust tests for heteroscedasticity based on regression quantiles. Econometrica 50, 43–61.
Koenker, R. & V. d'Orey (1987) Computing regression quantiles. Applied Statistics 36, 383–393.
Koenker, R., P. Ng, & S. Portnoy (1994) Quantile smoothing splines. Biometrika 81, 673–680.
Lavergne, P. (2001) An equality test across nonparametric regressions. Journal of Econometrics 103, 307–340.
Li, Q. & S. Wang (1998) A simple consistent bootstrap test for a parametric regression function. Journal of Econometrics 87, 145–165.
Newey, W.K. & D. McFadden (1994) Large sample estimation and hypothesis testing. In R.F. Engle & D.L. McFadden (eds.), Handbook of Econometrics, vol. 4, 2111–2245. Elsevier Science B.V.
Newey, W.K. & J.L. Powell (1990) Efficient estimation of linear and type I censored regression models under conditional quantile restrictions. Econometric Theory 6, 295–317.
Pollard, D. (1991) Asymptotics for least absolute deviation regression estimators. Econometric Theory 7, 186–199.
Powell, J.L. (1986) Censored regression quantiles. Journal of Econometrics 32, 143–155.
Powell, J.L., J.H. Stock, & T.M. Stoker (1989) Semiparametric estimation of index coefficients. Econometrica 57, 1403–1430.
Racine, J. & Q. Li (2004) Nonparametric estimation of regression functions with both categorical and continuous data. Journal of Econometrics 119, 99–130.
Serfling, R.J. (2002) Approximation Theorems of Mathematical Statistics. Wiley.
Yu, K. & M.C. Jones (1998) Local linear quantile regression. Journal of the American Statistical Association 93, 228–237.
Zheng, J.X. (1996) A consistent test of functional form via nonparametric estimation techniques. Journal of Econometrics 75, 263–289.
Zheng, J.X. (1998) A consistent nonparametric test of parametric regression models under conditional quantile restrictions. Econometric Theory 14, 123–138.