03.1.1. Deriving the Observed Information Matrix in Ordered Probit and Logit Models Using the Complete-Data Likelihood Function—Solution
Published online by Cambridge University Press: 05 March 2004
Extract
- Type: PROBLEMS AND SOLUTIONS: SOLUTIONS
- © 2004 Cambridge University Press
The complete-data log likelihood function is

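The display equation here was lost in extraction. For an ordered probit model with latent variable $y_i^* = \beta' x_i + \epsilon_i$, $\epsilon_i \sim N(0,1)$, the complete-data log likelihood takes the standard form below; this is a reconstruction under that setup, not the article's exact display:

```latex
\ln L_c(\beta) \;=\; \text{const} \;-\; \frac{1}{2}\sum_{i=1}^{n}\left(y_i^* - \beta' x_i\right)^2 .
```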
Define the categorical variables

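The defining display was dropped in extraction. Given the cutpoints $\alpha_0 = -\infty < \alpha_1 < \cdots < \alpha_{J-1} < \alpha_J = +\infty$ used throughout the derivation, the categorical indicators are presumably defined in the usual way:

```latex
y_{ij} \;=\;
\begin{cases}
1 & \text{if } \alpha_{j-1} < y_i^* \le \alpha_j,\\[2pt]
0 & \text{otherwise,}
\end{cases}
\qquad j = 1,\dots,J .
```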
Therefore, the missing information matrix is given by

where φj−1, i = φ(wj−1, i), Φj−1, i = Φ(wj−1, i), φj, i = φ(wj, i), Φj, i = Φ(wj, i), wj−1, i = αj−1 − β′xi, wj, i = αj − β′xi, and

because

where φ and Φ are, respectively, the probability density function and the cumulative distribution function of an N(0,1) random variable, and the conditional variance follows from the formulas for variances of doubly truncated distributions (Maddala, 1983, p. 366). Furthermore, the complete-data information matrix is given by

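The displays in this passage were also lost in extraction. Under the standard setup, the missing information matrix weights each $x_i x_i'$ by the variance of $y_i^*$ doubly truncated to $(w_{j-1,i}, w_{j,i}]$, and the complete-data information matrix for $\beta$ reduces to the raw cross-product matrix; a hedged reconstruction consistent with the surrounding definitions:

```latex
I_m(\beta) \;=\; \sum_{i=1}^{n} x_i x_i'\,\mathrm{Var}\!\left(y_i^* \mid y_i, \beta\right),
\quad\text{with}\quad
\mathrm{Var}\!\left(y_i^* \mid y_i, \beta\right)
  = \sum_{j=1}^{J} y_{ij}\!\left[
      1 + \frac{w_{j-1,i}\,\phi_{j-1,i} - w_{j,i}\,\phi_{j,i}}{\Phi_{j,i} - \Phi_{j-1,i}}
        - \left(\frac{\phi_{j-1,i} - \phi_{j,i}}{\Phi_{j,i} - \Phi_{j-1,i}}\right)^{\!2}
    \right],
\qquad
I_c(\beta) \;=\; \sum_{i=1}^{n} x_i x_i' .
```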
From equations (4) and (6), it follows that the observed information matrix is

which is equal to the observed information matrix, −∂² ln L/∂β∂β′, obtained from the observed-data log likelihood function

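Both displays here were lost in extraction. Subtracting the missing information from the complete-data information, and writing the observed-data log likelihood in its standard interval-probability form, gives the following reconstruction (consistent with the definitions above, though not necessarily the article's exact typography):

```latex
I(\beta) \;=\; I_c - I_m
  \;=\; \sum_{i=1}^{n} x_i x_i' \sum_{j=1}^{J} y_{ij}\!\left[
      \left(\frac{\phi_{j-1,i} - \phi_{j,i}}{\Phi_{j,i} - \Phi_{j-1,i}}\right)^{\!2}
      - \frac{w_{j-1,i}\,\phi_{j-1,i} - w_{j,i}\,\phi_{j,i}}{\Phi_{j,i} - \Phi_{j-1,i}}
    \right],
\qquad
\ln L(\beta) \;=\; \sum_{i=1}^{n}\sum_{j=1}^{J} y_{ij}\,
  \ln\!\left(\Phi_{j,i} - \Phi_{j-1,i}\right).
```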
Alternatively, the observed information matrix may be computed by using the result that the observed score function is the conditional expectation of the latent score function given the observed variables (Louis, 1982, p. 227):

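The display for Louis's identity was dropped in extraction. Since the conditional distribution of $y_i^* - \beta' x_i$ given the observed category is a doubly truncated standard normal, the observed score is presumably:

```latex
\frac{\partial \ln L}{\partial \beta}
  \;=\; E\!\left[\frac{\partial \ln L_c}{\partial \beta}\,\middle|\, y\right]
  \;=\; \sum_{i=1}^{n} x_i\, E\!\left(y_i^* - \beta' x_i \mid y_i\right)
  \;=\; \sum_{i=1}^{n} x_i \sum_{j=1}^{J} y_{ij}\,
    \frac{\phi_{j-1,i} - \phi_{j,i}}{\Phi_{j,i} - \Phi_{j-1,i}} .
```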
Differentiating the observed score function in equation (9) with respect to β′ yields the observed information matrix in equation (7) as the negative of the Hessian of the observed-data log likelihood function.
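The equality of the complete-data-minus-missing information and the negative Hessian of the observed-data log likelihood can be checked numerically. The sketch below uses hypothetical toy data (a single regressor, three ordered categories, fixed cutpoints — all illustrative assumptions, not from the article) and compares the analytic observed information, built from the doubly truncated normal variance, against a central-difference second derivative of the observed-data log likelihood:

```python
import math

SQRT2PI = math.sqrt(2.0 * math.pi)

def phi(x):
    """Standard normal pdf."""
    return math.exp(-0.5 * x * x) / SQRT2PI

def Phi(x):
    """Standard normal cdf via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# Hypothetical toy data: one regressor, three ordered categories with
# cutpoints alpha_0 = -inf < alpha_1 < alpha_2 < alpha_3 = +inf.
alphas = [-math.inf, -0.5, 0.8, math.inf]
data = [(0.3, 1), (-1.2, 0), (0.7, 2), (0.1, 1)]  # (x_i, observed category j)
beta = 0.4

def loglik(b):
    # Observed-data log likelihood: sum_i ln(Phi_{j,i} - Phi_{j-1,i}).
    total = 0.0
    for x, j in data:
        lo, hi = alphas[j] - b * x, alphas[j + 1] - b * x
        total += math.log(Phi(hi) - Phi(lo))
    return total

def observed_info(b):
    # Complete-data information minus missing information for scalar beta:
    # sum_i x_i^2 * (1 - Var(y_i* | y_i)), where Var is the variance of a
    # standard normal doubly truncated to (w_{j-1,i}, w_{j,i}].
    info = 0.0
    for x, j in data:
        lo, hi = alphas[j] - b * x, alphas[j + 1] - b * x
        p = Phi(hi) - Phi(lo)
        phi_lo = phi(lo) if math.isfinite(lo) else 0.0
        phi_hi = phi(hi) if math.isfinite(hi) else 0.0
        w_lo = lo * phi_lo if math.isfinite(lo) else 0.0
        w_hi = hi * phi_hi if math.isfinite(hi) else 0.0
        var = 1.0 + (w_lo - w_hi) / p - ((phi_lo - phi_hi) / p) ** 2
        info += x * x * (1.0 - var)
    return info

# Numerical check: observed information should equal -d^2 lnL / dbeta^2.
h = 1e-4
numeric = -(loglik(beta + h) - 2.0 * loglik(beta) + loglik(beta - h)) / h**2
print(abs(numeric - observed_info(beta)) < 1e-4)
```

With the cutpoints held fixed, as in the derivation above, the finite-difference Hessian and the analytic expression agree to numerical precision, which is exactly the content of equation (7).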